AWS Certified Solutions Architect Associate Exam: The Shortest Path To Success
What is AWS?
With AWS, you can instantly use infrastructure such as
servers, storage, and databases.
[Diagram: an on-premises server replaced by EC2 (server) and S3 (storage)]
What is AWS?
A major feature of AWS is that you can obtain a server and start using it
within minutes, with no upfront cost.
Making Physical Equipment a Service
By renting the physical equipment used for system operation over the
Internet, systems can be managed efficiently.
[Diagram: data center equipment provided as services (EC2, RDS) via the cloud (Internet)]
#1 Global Share
Amazon is an overwhelming presence, having held a global cloud market
share of over 30% for many years.
[Chart: 2019 global cloud market share — AWS 34.6%; the next-largest providers hold 18.1%, 6.2%, and 5.2%]
AWS Certification Overview
AWS Certification Overview
There are four categories of AWS certifications:
Foundational, Associate, Professional, and Specialty.
https://aws.amazon.com/jp/blogs/big-data/upgrade-your-resume-with-the-aws-certified-big-data-specialty-certification/
AWS Certification Overview
Qualification levels and the ideal path to success
Reference: https://aws.amazon.com/jp/certification/certified-solutions-architect-associate/
Response Types
◼ Multiple choice: Has one correct response and three incorrect responses
(distractors)
◼ Multiple response: Has two correct responses out of five response options.
AWS Exam Passing Grade
Reference: https://aws.amazon.com/jp/certification/certified-solutions-architect-associate/
Question pattern
[Question pattern ①]
Your company runs an application where users share videos. This application is hosted on
EC2 instances that process videos uploaded by users. The EC2 instances that process and
publish the videos are configured in an Auto Scaling group.
Select the service you should use to increase the reliability of this process.
1) Amazon SQS
2) Amazon SNS
3) Amazon SES
4) CloudFront
Question pattern
[Question pattern ②]
As a Solutions Architect, you are building a sales force automation (SFA) system on AWS.
This SFA has a business requirement that sales staff upload sales records daily. These
records must also be kept for sales reports, so durable and highly available storage is
required for report storage. Because many sales staff use this SFA, preventing the records
from being accidentally deleted through operational mistakes is an important requirement.
[Question pattern ③]
You are building a two-tier web application on AWS that delivers content while processing
transactions. The data tier uses an online transaction processing (OLTP) database. The web
tier must implement a flexible and scalable architecture.
Option 1 is the correct answer. To place a database server in the private subnet and a web
server in the public subnet and let the instances communicate, you must configure
appropriate security groups that allow that traffic. Security groups let you control traffic
between EC2 instances by specifying source IP ranges or other security groups.
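To illustrate the pattern, here is a minimal boto3 sketch (the security-group IDs and the MySQL port are hypothetical): the database tier allows inbound traffic only from the web tier's security group rather than from an IP range.

import boto3

ec2 = boto3.client("ec2")

# Allow the DB tier to accept MySQL (3306) only from the web tier's
# security group, instead of opening it to the world.
ec2.authorize_security_group_ingress(
    GroupId="sg-0db11111111111111",  # hypothetical DB-tier security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0web2222222222222"}],  # web tier
    }],
)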
Domain 1: Design Resilient Architectures
1) The load balancer will stop sending requests to the failed instance.
2) The load balancer will terminate the failed instance.
3) The load balancer will automatically replace the failed instance.
4) The load balancer will return 504 Gateway Timeout errors until the instance is replaced.
Domain 1: Design Resilient Architectures
1) The load balancer will stop sending requests to the failed instance.
A company currently stores data for on-premises applications on local drives. The chief technology
officer wants to reduce hardware costs by storing the data in Amazon S3 but does not want to make
modifications to the applications. To minimize latency, frequently accessed data should be available
locally.
What is a reliable and durable solution for a solutions architect to implement that will reduce the cost
of local storage?
1) Deploy an SFTP client on a local server and transfer data to Amazon S3 using AWS Transfer for
SFTP.
2) Deploy an AWS Storage Gateway volume gateway configured in cached volume mode.
3) Deploy an AWS DataSync agent on a local server and configure an S3 bucket as the destination.
4) Deploy an AWS Storage Gateway volume gateway configured in stored volume mode.
Domain 1: Design Resilient Architectures
◼ 1.4 Choose appropriate resilient storage
A company currently stores data for on-premises applications on local drives. The chief technology
officer wants to reduce hardware costs by storing the data in Amazon S3 but does not want to make
modifications to the applications. To minimize latency, frequently accessed data should be available
locally.
What is a reliable and durable solution for a solutions architect to implement that will reduce the cost
of local storage?
2) Deploy an AWS Storage Gateway volume gateway configured in cached volume mode
Option 2 is the correct answer. An AWS Storage Gateway volume gateway connects an on-premises
software appliance with cloud-backed storage volumes that can be mounted as Internet Small
Computer System Interface (iSCSI) devices from on-premises application servers. In cached volumes
mode, all the data is stored in Amazon S3 and a copy of frequently accessed data is kept locally.
Domain 2: Design High-Performing Architectures
You are building a two-tier web application that delivers content while processing
transactions on AWS. The data tier uses an online transaction processing (OLTP)
database. The web tier must implement a flexible and scalable architecture.
Choose the best way to meet this requirement.
Option 1 is the correct answer. Flexible and scalable server processing on AWS
can be achieved by combining Auto Scaling and ELB with your EC2 instances. ELB
distributes traffic across multiple instances for increased redundancy, and Auto
Scaling automatically scales out under heavy load.
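As a rough sketch of this pattern (all names, subnets, and ARNs are hypothetical), a boto3 call that creates an Auto Scaling group from an existing launch template and registers it with a load balancer target group:

import boto3

autoscaling = boto3.client("autoscaling")

# Spread instances across two subnets (two AZs) and attach the group to
# an ALB target group so ELB distributes the traffic.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:ap-northeast-1:123456789012:targetgroup/web/abc123"
    ],
)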
Domain 2: Design High-Performing Architectures
◼ 2.2 Select high-performing and scalable storage solutions for a workload
A company operates a set of EC2 instances hosted on AWS. These are all Linux-
based instances and require access to shared data via a standard file interface.
Since the storage where data is stored is used by multiple instances, strong
consistency and file locking are required. As a Solutions Architect, you are
considering the best storage.
1) S3
2) EBS
3) Glacier
4) EFS
Domain 2: Design High-Performing Architectures
◼ 2.2 Select high-performing and scalable storage solutions for a workload
A company operates a set of EC2 instances hosted on AWS. These are all Linux-
based instances and require access to shared data via a standard file interface.
Since the storage where data is stored is used by multiple instances, strong
consistency and file locking are required. As a Solutions Architect, you are
considering the best storage. Choose the best storage that meets this
requirement.
4) EFS
Option 4 is the correct answer. EFS allows multiple EC2 instances to access the
same EFS file system and share data at the same time. EFS provides a file system
interface and file system access semantics (such as strong consistency and file
locking) and supports simultaneous access from up to thousands of Amazon EC2
instances.
Domain 2: Design High-Performing Architectures
◼ 2.3 Select high-performing networking solutions for a workload
A company operates infrastructure located in private and public subnets on AWS. A database
server is installed in the private subnet, and a NAT instance is installed in the public subnet
so that instances in the private subnet can send reply traffic to the Internet. You recently
discovered that the NAT instance is becoming a bottleneck.
Option 3 is the correct answer. A NAT gateway is a managed service that can be used in
place of a NAT instance. Because AWS guarantees its performance and scalability, switching
to a NAT gateway relieves the NAT instance bottleneck. You could scale by changing the
instance type of the NAT instance itself, but that does not guarantee the problem will not
recur. Therefore, you can easily improve performance and eliminate the bottleneck by
replacing your NAT instance with a NAT gateway.
Domain 2: Design High-Performing Architectures
◼ 2.4 Choose high-performing database solutions for a workload
As a system developer at a game company, you are building a database for the
game you are developing. The game must implement a feature in which items
appear based on user-behavior data, so high-speed processing of that data is
required.
1) Redshift
2) ElastiCache
3) Aurora
4) RDS
Domain 2: Design High-Performing Architectures
◼ 2.4 Choose high-performing database solutions for a workload
As a system developer at a game company, you are building a database for the
game you are developing. The game must implement a feature in which items
appear based on user-behavior data, so high-speed processing of that data is
required. Choose a service that meets this requirement.
2) ElastiCache
Which actions should be taken to allow the instances to download the needed patches?
(Select TWO.)
Options 1 and 2 are correct answers. A NAT gateway forwards traffic from the instances in
the private subnet to the internet or to other AWS services, and then sends the responses
back to the instances. After a NAT gateway is created, the route tables of the private
subnets must be updated to point internet-bound traffic at the NAT gateway.
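A minimal boto3 sketch of that route-table update (the route table and NAT gateway IDs are hypothetical):

import boto3

ec2 = boto3.client("ec2")

# Point the private subnet's default route at the NAT gateway so the
# instances can download patches from the internet.
ec2.create_route(
    RouteTableId="rtb-0aaa1111111111111",   # private subnet's route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId="nat-0bbb2222222222222",
)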
Domain 3: Design Secure Applications and Architectures
◼ 3.2 Design secure application tiers
A company runs an application hosted on AWS. The application uses a VPC with two public
subnets: one where users access the web server over the Internet, and another where the
database server is located. As a security officer, you have begun considering how to improve
the security of this architecture. Access to the web server is limited to the company intranet
and to Internet access from employee PCs; it does not need to accept access from the
general public the way an open web service does.
1) Move the database server to a private subnet and migrate it to RDS.
2) Set up a NAT gateway on the public subnet and install RDS on the private subnet.
3) Move the web server to a private subnet.
4) Move the database and web server to a private subnet.
Domain 3: Design Secure Applications and Architectures
◼ 3.2 Design secure application tiers
A company runs an application hosted on AWS. The application uses a VPC with two public
subnets: one where users access the web server over the Internet, and another where the
database server is located. As a security officer, you have begun considering how to improve
the security of this architecture. Access to the web server is limited to the company intranet
and to Internet access from employee PCs; it does not need to accept access from the
general public the way an open web service does.
Option 4 is the correct answer. Access to the web server is limited to the in-house network
and to Internet access from employee PCs, and it does not need to accept access from an
unspecified number of Internet users the way an open web service does. Therefore, place
the web server in the private subnet together with the database server, rather than in the
public subnet.
Domain 3: Design Secure Applications and Architectures
◼ 3.3 Select appropriate data security options
A company’s security team requires that all data stored in the cloud be encrypted
at rest at all times using encryption keys stored on-premises.
Which encryption options meet these requirements? (Select TWO.)
A company needs to maintain access logs for a minimum of 5 years due to regulatory
requirements. The data is rarely accessed once stored but must be accessible with one
day’s notice if it is needed.
What is the MOST cost-effective data storage solution that meets these requirements?
1) Store the data in Amazon S3 Glacier Deep Archive storage and delete the objects
after 5 years using a lifecycle rule.
2) Store the data in Amazon S3 Standard storage and transition to Amazon S3 Glacier
after 30 days using a lifecycle rule.
3) Store the data in logs using Amazon CloudWatch Logs and set the retention period
to 5 years.
4) Store the data in Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage
and delete the objects after 5 years using a lifecycle rule.
Domain 4: Design Cost-Optimized Architectures
◼ 4.1 Identify cost-effective storage solutions
A company needs to maintain access logs for a minimum of 5 years due to regulatory
requirements. The data is rarely accessed once stored but must be accessible with one
day’s notice if it is needed.
What is the MOST cost-effective data storage solution that meets these requirements?
1) Store the data in Amazon S3 Glacier Deep Archive storage and delete the objects
after 5 years using a lifecycle rule.
Option 1 is the correct answer. Data can be stored directly in Amazon S3 Glacier Deep
Archive, the cheapest S3 storage class, and its standard retrieval time of within 12 hours
satisfies the one-day-notice requirement.
Domain 4: Design Cost-Optimized Architectures
◼ 4.2 Identify cost-effective compute and database services
A company uses Reserved Instances to run its data-processing workload. The nightly job
typically takes 7 hours to run and must finish within a 10-hour time window. The company
anticipates temporary increases in demand at the end of each month that will cause the job
to run over the time limit with the capacity of the current resources. Once started, the
processing job cannot be interrupted before completion. The company wants to implement a
solution that would allow it to provide increased capacity as cost-effectively as possible.
Option 1 is the correct answer. While Spot Instances would be the least costly
option, they are not suitable for jobs that cannot be interrupted or that must
complete within a certain time window. On-Demand Instances are billed only for
the seconds they run, so they can cost-effectively provide the temporary
month-end capacity.
Domain 4: Design Cost-Optimized Architectures
◼ 4.3 Design cost-optimized network architectures
As a Solutions Architect, you work for a company that operates a global image
distribution site. Currently, the company is considering using a CDN to streamline
the image distribution system. Therefore, you decided to calculate and report the
cost for content distribution using CloudFront.
1) Number of requests
2) Data transfer out
3) Resource type
4) Number of edge locations to use
Domain 4: Design Cost-Optimized Architectures
◼ 4.3 Design cost-optimized network architectures
As a Solutions Architect, you work for a company that operates a global image
distribution site. Currently, the company is considering using a CDN to streamline the
image distribution system. Therefore, you decided to calculate and report the cost for
content distribution using CloudFront. Select the elements that determine CloudFront
costs. (Select TWO.)
1) Number of requests
2) Data transfer out
Options 1 and 2 are correct answers. Amazon CloudFront pricing is determined by the
following factors:
-Traffic distribution: Data transfer and request prices vary by region, and prices vary by
location at the edge where content is delivered.
-Requests: The number and type of requests (HTTP or HTTPS) and the region where
the request was made.
-Data transfer out: Amount of data transferred from Amazon CloudFront edge locations
The Scope of AWS Services
Analysis of exam question range
We have extracted and analyzed the range of questions from 1,625 mock-exam
questions.
Amazon Elastic Compute Cloud (Amazon EC2): A service that launches virtual servers such as Windows and Linux. You can choose the processor, storage, networking, operating system, and purchase model.
Amazon Elastic Block Store (EBS): Dedicated block storage that is used by attaching it to an EC2 instance over the network.
AWS Identity & Access Management (IAM): An access management service that securely manages access to AWS services and resources.
※ CloudWatch and CloudTrail were covered in the previous (SAA-C01) version of the exam, but from the C02 version these services are less likely to appear.
Next Most Frequent Services
Adding services with 10 or more questions covers 90% of
exam content
Security Group: Provides a firewall function to control the communication traffic of instances and ELB.
Amazon Elastic File System (EFS): A simple, scalable, elastic, fully managed NFS file system for use with AWS cloud services and on-premises resources.
Amazon API Gateway: A service for creating and managing RESTful APIs and WebSocket APIs that enable real-time two-way communication applications.
AWS Key Management Service (KMS): A service that makes it easy to create and manage the encryption keys used to encrypt data across a wide range of AWS services and applications.
CloudTrail (SAA-C01): A log acquisition and monitoring service that tracks user activity and API usage.
Occasional Questions for Even Higher Score
Including services with 4 or more questions covers 97%
of the exam content.
90% ⇒ 97%
Athena: 7 questions (0.43%)
Amazon MQ: 6 (0.37%)
AWS Directory Service: 6 (0.37%)
AWS SSO: 6 (0.37%)
Amazon FSx for Lustre: 5 (0.31%)
AWS Transit Gateway: 5 (0.31%)
AWS Step Functions: 5 (0.31%)
SWF: 5 (0.31%)
CloudHSM: 4 (0.25%)
STS: 4 (0.25%)
Occasional Questions for Even Higher Score
Including services with 4 or more questions covers 97%
of the exam content.
AWS WAF: A web application firewall that protects web applications or APIs from common web vulnerabilities such as SQL injection and cross-site scripting.
AWS Certificate Manager (ACM): A service that provisions, manages, and deploys Secure Sockets Layer / Transport Layer Security (SSL/TLS) certificates.
AWS Database Migration Service (DMS): A database migration tool that enables you to safely migrate your database to AWS in a short period of time.
Occasional Questions for Even Higher Score
Including services with 4 or more questions covers 97%
of the exam content.
Amazon Cognito: A service that lets you quickly and easily add user sign-up/sign-in and access control capabilities to web and mobile apps.
Amazon Simple Workflow (SWF): A workflow creation and management service that builds, runs, and scales background jobs with parallel or sequential steps.
As a Solutions Architect, you are building a mechanism to store and share reports
generated by your internal applications. Report generation is automated with AWS Step
Functions, and it produces several terabytes of data that must be stored in S3.
1) S3 Standard-IA
2) S3 Standard
3) S3 Intelligent Tiering
4) S3 Glacier
Storage class selection
As a Solutions Architect, you are building a mechanism to store and share reports
generated by your internal applications. Report generation is automated with AWS Step
Functions, and it produces several terabytes of data that must be stored in S3.
1) S3 Standard-IA
2) S3 Standard
3) S3 Intelligent Tiering
4) S3 Glacier
What is IAM?
IAM is an authentication and authorization tool for securing AWS operations.
[Diagram: an IAM user with permissions EC2: ○, S3: × and an IAM group with permissions EC2: ○, S3: ○, each accessing EC2 and S3]
The Scope of IAM Questions
The most frequent topics extracted from the 1,625 questions are as follows:
IAM User: ✓ You will be asked how IAM users are set up and used.
IAM Group: ✓ You will be asked about the purpose of IAM groups and how they are set up.
IAM Roles: ✓ You will be asked about scenarios for setting up IAM roles.
IAM Database Authentication: ✓ You will be asked about the use of IAM database authentication, mainly as an authentication method for RDS.
Recording User Activity: ✓ You will be asked about the purpose of the tools used to manage records of IAM users' activities.
[Diagram: IAM components — Users, Groups, Policies, Roles]
[Q] IAM Policy
The following IAM policy is used to set permissions on AWS resources. Choose the correct explanation of this policy.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Action": "*",
"Resource": "*",
"Condition": {
"NotIpAddress": {
"aws:SourceIp": [
"172.103.1.38/24"
]
}
}
}
]
}
1) All resources except for the IP address (172.103.1.38) are denied access privileges.
2) The IP address (172.103.1.0) has access rights to all resources.
3) The IP address (172.103.1.3) has been denied access to all resources.
4) The IP address (172.103.1.3) has access rights to all resources.
IAM Policy
A configuration document (in JSON format) that grants access rights to users
and groups.
[Diagram: Policy A (EC2: ○, S3: ×) and Policy B (EC2: ○, S3: ○) attached to IAM groups that access EC2 and S3]
IAM Policy
IAM policies are written in JSON format. The Effect element is set to either
"Allow" or "Deny".
Which of the following is the correct explanation for the default privileges of
an IAM user?
IAM User
Users on AWS are set up as authorized entities called IAM
users.
Power user (IAM user):
• An IAM user with full access to all AWS services except IAM administration rights.
• Has no permission to operate IAM.
[Q] Root account
When you register for an account with AWS, an account, called a root
account, is created and allows you to perform AWS operations. There are
certain operations that can only be performed by the root account.
Select the operations that can be performed only with the root account.
(Select TWO.)
How should you set up permissions based on the principle of least privilege?
1) Create an IAM policy with the minimum permissions required for each
user and set it for IAM users.
2) Create an IAM policy with the minimum privileges required for each user
and set it up in an IAM group. In addition, place these IAM users in an
IAM group.
3) Create an IAM policy with the minimum permissions required for each
user and set it for IAM users. In addition, place these IAM users in an IAM
group.
4) Create an IAM group for each department and set up an IAM policy that
sets the minimum privileges required for each user.
IAM Group
A unit for granting permissions collectively. A group is usually made up of
multiple IAM users.
[Diagram: an individual user (EC2: ○, S3: ×) placed into an IAM group whose policy grants EC2: ○, S3: ○]
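A minimal boto3 sketch of the group-based, least-privilege setup described above (the group, policy, bucket, and user names are hypothetical):

import boto3, json

iam = boto3.client("iam")

# Create a group, attach a least-privilege policy to the group,
# then place the users in the group.
iam.create_group(GroupName="sales-readers")

policy = iam.create_policy(
    PolicyName="s3-sales-read-only",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": ["arn:aws:s3:::sales-reports",
                         "arn:aws:s3:::sales-reports/*"],
        }],
    }),
)
iam.attach_group_policy(GroupName="sales-readers",
                        PolicyArn=policy["Policy"]["Arn"])
iam.add_user_to_group(GroupName="sales-readers", UserName="alice")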
[Q] IAM Role
What is the safest way to grant the Lambda function access to the
DynamoDB tables?
[Diagram: Policies A and B attached to IAM groups; Policy C (EC2: ×, S3: ○) attached to an IAM role that AWS services such as EC2, Elastic Beanstalk, and Data Pipeline assume]
[Q] What type of IAM policy is this?
1) Use the root account to limit administrative privileges to the root account
only.
2) SCP is used to control the maximum privileges that development personnel
can grant to IAM identities.
3) Use IAM groups to limit the privilege settings of the developers' group
personnel.
4) Use permission boundaries to control the maximum privileges that
development personnel can grant to IAM identities.
Type of IAM policy
The IAM policies discussed so far are identity-based (user-based) policies, but
several other policy types exist.
Reference: https://docs.aws.amazon.com/ja_jp/IAM/latest/UserGuide/access_policies.html#access_policy-types
Type of IAM policy
There are many policy types used for AWS permission
control
Select the easiest available IAM policy type to implement this privilege
management.
Managed Policy
Customer managed policy: A managed policy that is created and managed by
the user. The same policy can be attached to multiple IAM entities.
[Diagram: transferring authority to a third-party user via an IAM role]
[Q] IAM authentication method
Choose the best way to configure your application to work with IAM in your
code.
Which of these can enable secure user access with short-term credentials to
improve security?
Reference: https://aws.amazon.com/jp/blogs/news/using-iam-authentication-to-connect-with-pgadmin-amazon-aurora-postgresql-or-amazon-rds-for-postgresql/
[Q] Recording user activities
Your company has an IAM policy on one S3 bucket that allows external third-party
applications to read its files. You therefore need to verify that this access is being
used correctly by the expected external users and not in unanticipated ways.
1) CloudTrail
2) Server access log
3) Storage Class Analysis
4) IAM Access Analyzer
Recording user activity
A variety of tools can be used to obtain activity records
IAM Access Analyzer: Analyzes S3 buckets, IAM roles, and other resources shared with external entities to identify unintended access to resources and data that poses a security risk.
Access Advisor (Service Last Accessed Data): Displays the date and time when an IAM entity (user, group, role) last accessed each AWS service.
You've just created a new AWS account and are setting up your AWS environment.
AWS defines best practices that newly created accounts should follow. They are not
mandatory — you can use AWS without them — but they are strongly recommended,
so you decide to apply them.
As a solutions architect, select the items you should address when first setting up
AWS. (Choose THREE.)
✓ Lock away the AWS root user's access keys and do not use the root account unnecessarily.
✓ Create individual IAM users and perform day-to-day operations with them.
✓ Use IAM groups to assign permissions to IAM users.
✓ Grant only the minimum privileges required to IAM users and IAM groups.
✓ Start from AWS managed policies instead of writing new policies.
✓ Use customer managed policies, not inline policies.
✓ Use access levels to review IAM permissions.
✓ Set a strong password policy for your users.
✓ Enable MFA.
✓ Use IAM roles for applications running on Amazon EC2 instances.
✓ Use IAM roles to delegate permissions when granting temporary credentials to a third party.
✓ Do not share access keys.
✓ Rotate credentials regularly.
✓ Remove unnecessary credentials.
✓ Monitor the activity of your AWS account.
The Scope of S3
What is S3?
S3 is a highly durable and highly available storage service for medium- to
long-term data storage.
S3 use cases
Image data for content delivery is stored in S3 and distributed using
CloudFront.
The scope of S3 questions
The most frequent topics extracted from the 1,625 questions are as follows:
S3 storage features: ✓ You will be asked to select storage that meets the requirements of a given scenario. ✓ You will be asked about the characteristics of S3.
Static Web Hosting: ✓ You will be asked how to set up static web hosting.
Cross-origin resource sharing (CORS): ✓ You will be asked how to share an S3 bucket configured as the origin for one domain with another domain.
S3 Encryption: ✓ You will be asked about the encryption methods available for S3.
The scope of S3 questions
The most frequent topics extracted from the 1,625 questions are as follows:
Replication: ✓ You will be asked about S3 replication methods and how to configure them.
Recording S3 Usage Status: ✓ You will be asked about the methods and services for checking and analyzing data usage relating to S3.
1) Amazon EFS
2) Amazon EBS
3) Amazon S3
4) Amazon EC2 Instance Store
S3 Storage Features
AWS offers three forms of storage: object storage (S3), block storage (EBS), and file storage (EFS).
Key
The name of an object; it uniquely identifies the object within the bucket.
Value
The data itself, consisting of bytes.
Version ID
The ID used for version control.
Metadata
Information about the attributes associated with the object.
Sub-resources
Support for storing and managing bucket configuration information, for
example an access control list (ACL).
S3 Storage Features
S3 divides storage space into buckets and stores data as objects.
[Diagram: an S3 account containing two buckets, e.g. contents-buckets and website-buckets]
Which of the following is the correct explanation for the data storage
constraints of Amazon S3? (Please select two.)
1) The storage capacity of S3 is set at the time of bucket creation and then
scaled automatically.
2) The amount of data in storage and the number of objects that can be
stored is unlimited.
3) The maximum size of an object that can be uploaded in a single PUT is 5GB.
4) The maximum size of an object that can be uploaded in a single PUT is 5TB.
5) S3 provides file system access semantics (e.g., strong integrity and file
locking) and simultaneously accessible storage.
6) Use the mount helper to access S3.
S3 data capacity limit
S3 has unlimited total storage capacity and can store objects from 0 KB to 5 TB.
Bucket
A bucket is the space in which objects are stored. Its name must be globally
unique, even though the bucket is located in a region. Data storage capacity is
unlimited and is expanded automatically.
Object
A file stored in S3; each object is assigned a URL. The number of objects that
can be stored in a bucket is unlimited.
Company A, one of the world's four largest audit firms, produces a variety of
audit reports. These audit reports need to be kept for a certain period of
time with strong security. In addition, the data underlying the creation of
these audit reports is stored in S3 and amounts to several hundred terabytes.
The original data and audit reports are frequently accessed.
Which is the most cost-effective storage class for this use case?
1) S3 Standard-IA
2) S3 Standard
3) S3 Intelligent Tiering
4) S3 Glacier
Storage class selection
Choose a storage type according to your S3 usage
List these three storage options, in order of lowest cost, from left to right.
https://aws.amazon.com/jp/s3/pricing/ ...
[Q] S3 usage costs
In this scenario, how would you charge for the image transfer?
1) You only pay for what you use for S3 Transfer Acceleration to upload
images.
2) You will have to pay both the S3 transfer fee and the S3TA transfer fee for
the temporary use of the image upload.
3) Only pay S3 transfer fee to upload images
4) You don't have to pay a transfer fee to upload images.
The cost of using S3
S3 charges for data volume, requests and data transfer
https://aws.amazon.com/jp/s3/pricing/ ...
The cost of using S3
S3 offers volume discounts.
https://aws.amazon.com/jp/s3/pricing/ ...
[Q] Life cycle management
Which of the following lifecycle rules cannot be set? (Please choose three.)
1) S3 Standard ⇒ S3 Intelligent-Tiering
2) S3 Standard-IA ⇒ S3 Intelligent-Tiering
3) S3 Standard-IA ⇒ S3 One Zone-IA
4) S3 Intelligent-Tiering ⇒ S3 Standard
5) S3 One Zone-IA ⇒ S3 Standard-IA
6) S3 Glacier ⇒ S3 Standard-IA
Life Cycle Management
Set rules that automatically transition objects to a different storage class or
delete them after a set period of time.
[Diagram: automatic archiving from S3 (Standard) to Glacier over a period of time]
Reference: https://docs.aws.amazon.com/ja_jp/AmazonS3/latest/dev/lifecycle-transition-general-considerations.html
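A minimal boto3 sketch of such a lifecycle rule (the bucket name, prefix, and periods are hypothetical):

import boto3

s3 = boto3.client("s3")

# Transition objects to Glacier after 90 days and delete them after
# 5 years (1825 days).
s3.put_bucket_lifecycle_configuration(
    Bucket="access-log-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 1825},
        }],
    },
)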
[Q] Version control
A Silicon Valley startup uses Amazon S3 to share data among its employees.
To ensure that the data is not accidentally deleted, you need to configure the
objects to be protected.
✓ Apply version control to the entire bucket.
✓ Objects are stored for each version (each version has a version ID, e.g. 00012).
✓ Set how long versions are retained by using lifecycle management.
✓ Old versions must be deleted separately from the current object.
[Diagram: versions of Data A, Data B, and Data C stored per version ID]
S3 MFA Delete
As an option for the versioning feature, you can require
MFA authentication when deleting objects.
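A minimal boto3 sketch that enables versioning, with MFA Delete as the optional extra step (the bucket name, account ID, and token are hypothetical; enabling MFA Delete requires the root user's MFA device):

import boto3

s3 = boto3.client("s3")

# Enable versioning and MFA Delete on the bucket. The MFA argument is
# "<device ARN> <current 6-digit code>" and must come from the root user.
s3.put_bucket_versioning(
    Bucket="shared-data-bucket",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
)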
[Q] Access management for S3
You are a solution architect and you are building a video sharing application
on AWS. The application is configured to host video software on an EC2
instance which processes video data stored in an Amazon S3 bucket. The
video data must be restricted to be viewed only by certain users.
Which settings restrict third parties from directly accessing the video data in
the bucket?
1) Set a bucket policy to allow references only from URLs in the web
application.
2) Configure the IAM role to allow only web applications to access the S3
bucket.
3) Configure ACLs to allow only web applications to access the S3 bucket.
4) Configure the setting to allow references only from URLs in the web
application by using a signed URL.
[Q] Access management for S3
You are building a data analytics system in AWS. The system takes data from
IoT sensors and stores it in an Amazon S3 bucket via Kinesis Data Firehose
as it is streamed and processed by Kinesis Data Streams. The encrypted data
must then be simply queried using SQL queries on the data in the S3 bucket
and the results must be written back to the S3 bucket. Because of the
sensitive nature of the data, fine-grained controls must be implemented for
access to the S3 bucket.
1) Query the data with Athena and store the results in buckets.
2) Query the data by Redshift and store the results in buckets.
3) Use bucket ACLs to restrict access to buckets.
4) Query the data by Amazon EMR and store the results in buckets.
5) Use a bucket policy to restrict access to buckets.
6) Use an IAM policy to restrict access to buckets.
S3 Access Management
S3 access management uses different methods for
different purposes
Management method and features:
IAM user policy:
✓ Configures IAM users' access to S3 as an AWS resource.
✓ Manages permissions for internal IAM users and AWS resources.
You are thinking of setting up a bucket policy to control S3 buckets, and since AWS
provides a number of sample bucket policies, you have decided to copy a bucket policy
that is close to your objectives. You need to understand and customize the contents of
the copied bucket policy.
Which of the following is the correct description of the following bucket policy?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddCannedAcl",
      "Effect": "Allow",
      "Principal": {"AWS": ["arn:aws:iam::111122223333:root", "arn:aws:iam::444455556666:root"]},
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::awsexamplebucket1/*",
      "Condition": {"StringEquals": {"s3:x-amz-acl": ["public-read"]}}
    }
  ]
}
S3 Bucket Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddCannedAcl",
      "Effect": "Allow",
      "Principal": {"AWS": ["arn:aws:iam::111122223333:root", "arn:aws:iam::444455556666:root"]},
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::awsexamplebucket1/*",
      "Condition": {"StringEquals": {"s3:x-amz-acl": ["public-read"]}}
    }
  ]
}
Reference: https://docs.aws.amazon.com/ja_jp/AmazonS3/latest/dev/example-bucket-policies.html
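For reference, a sketch of attaching this policy to the bucket with boto3 (the policy is expressed as a Python dict and serialized with json.dumps):

import boto3, json

s3 = boto3.client("s3")

# This policy allows the two listed accounts to put objects — but only
# when the request sets the "public-read" canned ACL.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AddCannedAcl",
        "Effect": "Allow",
        "Principal": {"AWS": [
            "arn:aws:iam::111122223333:root",
            "arn:aws:iam::444455556666:root",
        ]},
        "Action": ["s3:PutObject", "s3:PutObjectAcl"],
        "Resource": "arn:aws:s3:::awsexamplebucket1/*",
        "Condition": {"StringEquals": {"s3:x-amz-acl": ["public-read"]}},
    }],
}
s3.put_bucket_policy(Bucket="awsexamplebucket1", Policy=json.dumps(policy))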
[Q] Pre-signed URL
You are a solution architect and you are building a video sharing application.
The application stores a large number of video files in an S3 bucket, which
are temporarily shared to users via an EC2 instance. At that time, only
authorized users should be able to access the video data.
1) Disable block public access to S3 buckets so that the URLs can be viewed.
2) Use CloudFront to distribute images based on the cache.
3) Use ACLs to grant access to users whose videos are shared.
4) Generate a pre-signed URL and distribute it to users whose videos are
shared.
Pre-signed URL
A pre-signed URL is a special URL that grants access only to the specific
users it is issued to.
[Diagram: (2) the EC2 application issues a pre-signed URL; (4) the user accesses S3 with the permitted URL]
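A minimal boto3 sketch of issuing a pre-signed URL (the bucket and key are hypothetical):

import boto3

s3 = boto3.client("s3")

# Issue a URL that grants temporary (1 hour) read access to one video.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "video-bucket", "Key": "videos/sample.mp4"},
    ExpiresIn=3600,
)
print(url)  # hand this URL only to authorized users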
[Q] Public access
You are a solution architect for a media company. You are currently running
web media on AWS and need to set up Amazon S3 buckets to serve static
content for this web media.
Which setting is used to publish all objects uploaded to the S3 bucket to the
Internet? (Select two.)
Reference: https://docs.aws.amazon.com/ja_jp/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html
[Q] S3 access point
[Diagram: managing access with bucket policies vs. managing access with access point policies]
Static Web Hosting
You are building a corporate website for your company on AWS. The site is a
simple static web site and you have deployed it on Amazon S3 to keep costs
as low as possible.
Select the correct Amazon S3 website endpoint for the resulting site. (Select
two.)
1) http://bucket-name.s3-website.Region.amazonaws.com
2) http://s3-website-Region.bucket-name.amazonaws.com
3) http://bucket-name.s3-website-Region.amazonaws.com
4) http://s3-website.Region.bucket-name.amazonaws.com
5) http://bucket-name.Region.s3-website.amazonaws.com
Static Web Hosting
If you want to build a static site, static website hosting on S3 lets you serve
web pages inexpensively.
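A minimal boto3 sketch that enables static website hosting on a bucket (the bucket and document names are hypothetical):

import boto3

s3 = boto3.client("s3")

# Turn the bucket into a static website endpoint.
s3.put_bucket_website(
    Bucket="example.com",   # must match the domain when paired with Route 53
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)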
You are building a corporate website for your company on AWS. The site is a
simple static web site and you have deployed it on Amazon S3 to keep costs
as low as possible. You would also like to set up a new domain name for it
using Route 53.
✓ Select the alias to the S3 website endpoint in the region as the traffic destination.
✓ Use an A record (IPv4) alias record as the record type.
✓ Keep the default values for evaluating target health.
✓ The bucket name must match the domain or subdomain name.
[Q] Cross Origin Resource Sharing (CORS)
1) Global replication
2) Cross-account access
3) Cross Origin Resource Sharing (CORS)
4) S3 Access Point
Cross Origin Resource Sharing (CORS)
CORS allows the resources of a website on one domain (origin) to be requested
from pages served from other domains.
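A minimal boto3 sketch of a CORS configuration that lets pages from another domain read objects (the bucket and origin are hypothetical):

import boto3

s3 = boto3.client("s3")

# Allow pages served from www.example.com to GET objects in this bucket.
s3.put_bucket_cors(
    Bucket="asset-bucket",
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": ["https://www.example.com"],
            "AllowedMethods": ["GET"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }],
    },
)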
[Q] S3 Event
You have a photo sharing application built on AWS. The photos are stored in
an S3 bucket and the image processing is performed by an application
hosted on multiple EC2 instances. The solution architect has configured a
mechanism to run image processing on one of the EC2 instances, depending
on the data uploaded.
How should you configure S3 and other AWS services to meet your
requirements?
S3 Event Notification
[Diagram: a user uploads to S3; an event notification triggers processing on EC2]
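One common pattern for this scenario (a sketch, not the only possible design): publish ObjectCreated events to an SQS queue that the EC2 workers poll. The bucket name and queue ARN are hypothetical, and the queue's policy must separately allow S3 to send messages.

import boto3

s3 = boto3.client("s3")

# Send an event to SQS whenever a photo is uploaded; the EC2 instances
# poll the queue and run the image processing.
s3.put_bucket_notification_configuration(
    Bucket="photo-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:ap-northeast-1:123456789012:image-jobs",
            "Events": ["s3:ObjectCreated:*"],
        }],
    },
)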
[Q] S3 encryption
1) CSE
2) SSE-KMS
3) SSE-S3
4) SSE-C
S3 Encryption
Select one of the following four encryption formats when storing data in S3.
SSE-S3:
✓ The easiest method; S3's default encryption.
✓ Encryption keys are created and managed automatically on the S3 side; there
is no need to manage keys yourself.
✓ Data is encrypted using 256-bit Advanced Encryption Standard (AES-256), a
block cipher.
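A minimal boto3 sketch of uploading objects with SSE-S3 and with SSE-KMS (the bucket, keys, and KMS alias are hypothetical):

import boto3

s3 = boto3.client("s3")

# SSE-S3: S3 creates and manages the key (AES-256).
s3.put_object(
    Bucket="secure-bucket", Key="doc.txt", Body=b"hello",
    ServerSideEncryption="AES256",
)

# SSE-KMS: encrypt with a KMS key you control and can audit.
s3.put_object(
    Bucket="secure-bucket", Key="doc-kms.txt", Body=b"hello",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/my-app-key",
)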
1) Enable version control on the Singapore region bucket and create a new
S3 bucket in the Sydney region to configure cross-region replication.
2) Create an S3 bucket with version control set up in the Sydney region and
configure replication from the Singapore region bucket.
3) Create an S3 bucket with version control set up in the Sydney region and
configure cross-origin resource sharing from the Singapore region bucket.
4) Enable version control in the Singapore Region bucket and create a new
S3 bucket in the Sydney Region to configure cross-origin resource sharing.
Replication
Use cross-region replication across regions to increase
resilience
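A minimal boto3 sketch of a cross-region replication rule (the bucket names, role ARN, and rule are hypothetical; versioning must already be enabled on both buckets, and the role must permit replication):

import boto3

s3 = boto3.client("s3")

# Replicate all new objects from the source bucket to a bucket in
# another region.
s3.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/replication-role",
        "Rules": [{
            "ID": "ReplicateAll",
            "Priority": 1,
            "Filter": {},
            "Status": "Enabled",
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::destination-bucket"},
        }],
    },
)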
Company B has configured a data lake using Amazon S3 to perform big data
analytics. As a solution architect, you want to put in place a solution that
performs big data analytics by querying data assets directly in the data lake.
Company B has configured a data lake using Amazon S3 to perform big data
analytics. The web application access logs stored in S3 need to be processed
using Apache Hadoop.
Your company runs a web application on AWS. The application stores log files
in Amazon S3. The log files are used for real-time processing of ad displays, so
reads are frequent — but immediately after a log file is changed, the old version
of the file is sometimes read.
Data registration: ✓ Consistent read — data is reflected immediately after
registration.
When the upload is signed with AWS signature version 4, you must
use the x-amz-content-sha256 header instead.
[Q] Increase the speed of uploads.
You are a solution architect and are building a video sharing application on
AWS. The application is configured to host a video processing application on
an EC2 instance that uses video data stored in an Amazon S3 bucket. The
users are global and large amounts of data are uploaded. This has led to
significant delays in uploading large video files to the destination S3 bucket,
resulting in complaints.
Select a method to increase the speed of file uploads to S3. (Please select
two.)
1) Create unique custom prefixes within a single bucket and upload a daily
file with those prefixes.
2) Upload files with Transfer Acceleration enabled within a single bucket.
3) Enable S3 multipart upload and run the upload process.
4) Upload a file that creates a random custom prefix using a hash in a single
bucket.
Improved performance
Improve performance with parallel requests and custom
prefixes
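A minimal boto3 sketch of a parallel multipart upload using TransferConfig (the file name, bucket, and sizes are hypothetical):

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Files above the threshold are split into parts uploaded in parallel,
# which speeds up large video uploads.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,   # switch to multipart at 64 MB
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=10,
)
s3.upload_file("big-video.mp4", "video-bucket",
               "uploads/big-video.mp4", Config=config)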
The Scope of EC2 Questions
The most frequent topics extracted from the 1,625 questions are as follows:
EC2 Features: ✓ You will be asked to select EC2 instances that match the requirements of a scenario.
EC2 Cost: ✓ You will be asked how to reduce the cost of using EC2.
Obtaining Metadata: ✓ You will be asked how to get metadata from an EC2 instance.
[Q] EC2 Features
A venture company runs a web application on AWS. This web application uses
RDS as its database and runs a batch job every day at 7 a.m. The job must
process the log files of the business operations, running about 2,000 records
sequentially via a shell script. Each record requires about 1-5 seconds of
execution time, so the load of the batch job is high.
1) AWS Lambda
2) Amazon EC2
3) Amazon EMR
4) Fargate
EC2 Features
A virtual server available on a pay-as-you-go basis (billed by the hour or by
the second) that can be launched in minutes.
[Diagram: EC2 instances deployed across multiple AZs]
[Q] EC2 Cost
Which of the following correctly explains how EC2 costs are incurred?
(Please select two.)
[Diagram: AMI sources — AMIs provided by AWS, third-party AMIs, and your own custom AMI saved to S3 from an EC2 instance]
Select AMI (OS Settings)
You can select the OS settings through the AMI.
[Q] Use of AMI
You have been asked to launch a large number of EC2 instances to run
workload tasks. To perform these tasks efficiently, you need to automate the
deployment of new computing resources that have the same configuration and
the same state.
Share AMI with other accounts: ✓ You can share an AMI with other accounts by
granting permission to the specified AWS account numbers.
Which of the following is the best instance type to use in this scenario?
t2.nano
Instance Type
Select the instance type according to the use case. Family: A1, M5, T3, etc.
General Purpose: Provides balanced compute, memory, and network resources for a variety of workloads. Ideal for applications that use an instance's resources in equal proportions, such as web servers and code repositories.
Storage Optimized: Suitable for workloads that require high sequential read and write access to large data sets in local storage. Ideal for low-latency random I/O operations with tens of thousands of IOPS.
Accelerated (high-speed) Computing: Ideal for software that uses hardware accelerators (co-processors) to perform functions such as floating-point computation, graphics processing, and data pattern matching more efficiently than on a CPU.
How to start EC2
Steps to launch an EC2 instance
[Diagram: EC2 launch steps, including Select Storage and Add Tags]
Select the features of the EC2 instance that should be used to meet this
requirement.
1) User data
2) Metadata
3) Tags
4) Enable the automatic setting function
Use of user data
You can configure a script to be executed when an EC2
instance is launched with user data.
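A minimal boto3 sketch of passing a user-data script at launch (the AMI ID is hypothetical; boto3 base64-encodes the script for you):

import boto3

ec2 = boto3.client("ec2")

# This shell script runs once, as root, at first boot.
user_data = """#!/bin/bash
yum -y update
yum -y install httpd
systemctl enable --now httpd
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical Amazon Linux AMI
    InstanceType="t3.micro",
    MinCount=1, MaxCount=1,
    UserData=user_data,
)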
Select Storage
Add Tags
Select Storage
Add Tags
Your company has multiple departments using AWS, and a variety of users work
with it. The company therefore needs to manage its AWS resources effectively.
As a solution architect, you have set up a categorization that lets you identify
Amazon EC2 resources by department.
1) Parameter
2) Metadata
3) Tags
4) User data
Tag settings
You can attach tags to name AWS resources and to group resources such as
EC2 instances.
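A minimal boto3 sketch of tagging an instance by department (the instance ID and values are hypothetical):

import boto3

ec2 = boto3.client("ec2")

# Tag the instance so resources can be grouped and filtered by department.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[{"Key": "Department", "Value": "Sales"},
          {"Key": "Name", "Value": "sales-web-01"}],
)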
How to start EC2
Steps to launch an EC2 instance
[Diagram: a security group permitting HTTP access, and SSH access on port 22, to an EC2 instance]
You have created an AWS account and launched your first Linux EC2 instance.
You need to access this instance and install the server software to configure
it as a web server. To do so, you will have to access the instance and
configure it from your local terminal.
Select the authentication method you want to use to securely access your
instance.
1) Key pair
2) Access key
3) Secret access key
4) ID and Password
Key Pair Usage
A key pair is used to access an instance: the public key placed on the instance
is matched against the private key held by the user.
[Diagram: the user holds the private key and uses it to access the EC2 instance, which holds the public key]
[Q] Launch template
[Diagram: a launch template used by Auto Scaling]
[Q] Internet access
You, as the solution architect, have created a new AWS account and
configured your IT infrastructure. You have created a new subnet on an
Amazon VPC and launched an Amazon EC2 instance on that subnet. You
have attempted to access the EC2 instance directly from the Internet to set
up the EC2 instance, but you can't seem to make a connection.
What steps do you need to take to deal with a failed connection to an EC2
instance? (Please choose two.)
[Table: purchase option comparison — On-Demand / Reserved Instance / Saving Plan / Capacity Reservations]
Period: On-Demand requires no commitment and can be created and cancelled as needed; Reserved Instances and Saving Plans require a fixed 1-year or 3-year commitment.
Instance constraints: the number of On-Demand instances is limited (20 per AZ or region, and you can apply to raise the limit); the others have none.
Physical-host instances
Instance types that give users some degree of control over the physical host
server: Dedicated Hosts, Dedicated Instances, and Bare Metal.
What cost-effective measure should you choose in this situation?
[Table: Standard vs. Convertible Reserved Instances]
AZ / instance size / network type changeable: Yes / Yes
Sellable on the Reserved Instance Marketplace: Yes / No
1) If the spot request is persistent, the spot instance is launched again after
the spot instance has been suspended.
2) Canceling an active spot request will also terminate the associated
instance.
3) If the spot request is persistent, stop the spot instance and then start the
spot instance again.
4) Spot blocks can be interrupted in the same way as spot instances.
5) Canceling an active spot request does not terminate the associated
instance.
Spot instance features
EC2 instances with spare computing capacity available at
a discount (up to 90% off) compared to on-demand
instances
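A minimal boto3 sketch of launching a Spot Instance with a persistent request (the AMI ID and max price are hypothetical; a persistent request re-launches the instance after an interruption, and its interruption behavior must be stop or hibernate):

import boto3

ec2 = boto3.client("ec2")

# Launch a Spot Instance; the persistent request restarts it after a
# Spot interruption.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5.xlarge",
    MinCount=1, MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "MaxPrice": "1.00",
            "SpotInstanceType": "persistent",
            "InstanceInterruptionBehavior": "stop",
        },
    },
)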
Your company has a batch-processing workload that runs weekly for about two
hours. The processing of this workload requires you to
automatically select and launch the lowest priced instance by specifying the
instance type and bid price to increase cost efficiency.
Which is the most cost effective solution that can meet this requirement?
✓ Number of instances: 10
✓ Bid Price: $1.00
✓ Instance type: c4.16xlarge, c3.8xlarge
Your company has a batch-processing workload that runs weekly for about two
hours. The processing of this workload requires the use of Spot Instances to be
cost-effective, but the two-hour workload must not be stopped while in
progress. As a solutions architect, you are considering the best
instance.
The university now runs its data analysis on AWS. This analysis requires
high-performance server processing, and high-performance networking across
the multiple EC2 instances used for the computation is also essential.
Which EC2 instance configuration should you use to run this application?
The university now runs its genome data analysis on AWS. Genome analysis
requires high-performance server processing, and high-performance networking
across the multiple EC2 instances used for the computation is also essential.
As a solution architect, you need to configure EC2 instances to ensure
proximity, low latency, and high network throughput.
Reference: https://aws.amazon.com/jp/premiumsupport/knowledge-center/enable-configure-enhanced-networking/
[Q] Using Elastic Fabric Adapter
Which network components are used by the EC2 instance running the HPC
workflow?
ENA: Provides the traditional IP networking capabilities needed to support VPC features.
EFA: Has OS-bypass capabilities in addition to the ENA capabilities; the Libfabric API allows HPC and machine learning applications to bypass the operating system kernel and communicate directly with the EFA device.
[Q] Run Command
You are an engineer responsible for internal AWS operations at your company.
You are running an EC2 instance with a Windows Server setup. You need to run
a PowerShell script on this Windows server, and you must run it from the AWS
Management Console.
Select a method to run the script on the target EC2 instance from the AWS
Management Console.
Reference: https://aws.amazon.com/blogs/aws/new-ec2-run-command-remote-instance-management-at-scale/
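A minimal boto3 sketch of running a PowerShell command through Run Command (the instance ID and command are hypothetical; the instance needs the SSM agent and an instance profile that allows Systems Manager):

import boto3

ssm = boto3.client("ssm")

# Run a PowerShell command on the Windows instance without opening RDP.
ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],
    DocumentName="AWS-RunPowerShellScript",
    Parameters={"commands": ["Get-Service | Where-Object Status -eq 'Running'"]},
)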
[Q] Automatic recovery of EC2
Your company runs a large web application with over 30 EC2 instances. The
application needs to operate as automatically as possible. As a solution
architect, you have configured Amazon CloudWatch alarms to automatically
recover an EC2 instance if it fails.
1) The public IPv4 address set for the instance is changed to a different
address when the instance is restored.
2) Public IPv4 addresses configured in the instance are maintained after
recovery.
3) The restored instance will retain the instance ID, private IP address,
Elastic IP address and all metadata.
4) Any data in memory before the instance restoration is preserved.
EC2 Recovery
It is important to back up your EC2 instances on a regular basis.
Reference: https://docs.aws.amazon.com/ja_jp/AWSEC2/latest/UserGuide/ec2-instance-lifecycle.html
✓ In-memory data is lost, and the underlying host may change, when the instance is recovered.
✓ An instance fails to start in the following cases:
1. The snapshot is corrupted.
2. The EBS volume limit is exceeded.
3. The key for an encrypted snapshot is missing.
4. Parts required by an instance store-backed AMI are missing.
Restarting the Instance
The status of the instance shows the following:
Shutting-down: The instance is being prepared for deletion. You won't be charged.
You have started an EC2 instance. This EC2 instance needs to be stopped
temporarily for maintenance, and you are required to preserve the data held in
memory when you do so.
1) AMI.
2) Use the EC2 instance reboot configuration.
3) Restart the EC2 instances.
4) Use hibernation.
Use of hibernation
Hibernation preserves the pre-stop state, including memory contents, and
restores it when the instance is started again.
Choose the correct URL path to get the public IP of your instance.
1) http://169.254.169.254/latest/meta-data/public-ipv4
2) http://169.254.169.254/latest/user-data/public-ipv4
3) http://254.169.254.169/latest/meta-data/public-ipv4
4) http://254.169.254.169/latest/user-data/public-ipv4
Obtaining Metadata
To get the instance metadata, use the following
http://169.254.169.254/latest/meta-data/
The IP address 169.254.169.254 is a link local
address, valid only from the instance
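A minimal Python sketch of fetching the public IP from the metadata service, using the IMDSv2 token flow (run from the instance itself):

import urllib.request

# IMDSv2: obtain a session token first, then request the metadata path.
token_req = urllib.request.Request(
    "http://169.254.169.254/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()

meta_req = urllib.request.Request(
    "http://169.254.169.254/latest/meta-data/public-ipv4",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(meta_req).read().decode())  # the public IPv4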
The scope of VPC
What is VPC?
VPC is a virtual network service that allows users to carve
out a dedicated area from the AWS cloud network
VPC Settings (VPC Wizard): ✓ You will be asked about the configuration method using the VPC wizard used to set up a VPC.
Subnet mask settings: ✓ You will be asked about CIDR configuration using subnet masks.
VPC Flow Logs: ✓ You will be asked about the role of VPC flow logs.
Virtual Private Cloud (VPC)
A VPC can include resources in multiple AZs within the same region.
[Diagram: a VPC spanning AZ ① and AZ ②]
Subnets and VPCs
A combination of a VPC and subnets creates a network space. A VPC must be
set up with at least one subnet.
[Diagram: a subnet (10.0.1.0/24) in an AZ containing an EC2 instance]
[Q] VPC setting (default VPC)
You have opened a new AWS account and launched your first EC2 instance.
You did not configure a VPC, so this EC2 instance uses the default VPC.
You need to make sure that the instance has both a private DNS hostname
and a public DNS hostname.
How are DNS host names assigned when using the default VPC? (Please
choose two.)
1) A VPC other than the default will be assigned a private DNS hostname,
but not a public DNS hostname.
2) A non-default VPC instance is assigned a public DNS hostname and a
private DNS hostname.
3) A VPC instance is assigned a private DNS hostname, but not a public DNS
hostname.
4) An instance in the default VPC is assigned a public DNS hostname and a private DNS hostname.
5) An instance in the default VPC is not assigned a public or private DNS hostname.
VPC Settings (Default VPC)
When you create an AWS account, a default VPC and
default subnet are automatically generated for each region
You've opened a new AWS account and decided to configure your VPC first.
You can use the VPC wizard to quickly set up the most commonly used
configurations. You need a network configuration to set up a web server that
needs public access and a database server that is limited to private access
for increased security. You decide to use the VPC wizard to select the
configuration that is closest to your desired configuration.
Create a VPC (CIDR setting) → Create a subnet → Configure the Internet gateway → Set the route → Set traffic permissions for your VPC (network ACL)
Classless Inter-Domain Routing (CIDR)
CIDR uses a subnet mask to set the IP address range and to control the number of available IP addresses.
[Notation]
196.51.XXX.XXX/16
The first 16 bits (counting from the left) of the address are fixed; the remaining bits identify hosts within the subnet.
[Q] Subnet mask settings
You are planning to set up a new VPC and set up an IT infrastructure with
two public subnets and two private subnets. You need to set up a CIDR for
IPv4 addressing and subnet creation within a single VPC. 200 IP addresses
need to be made available as CIDR settings.
Select the subnet mask setting that provides enough IP addresses for the CIDR subnet, without allocating far more than needed.
1) /21
2) /22
3) /23
4) /24
CIDR
A VPC CIDR block can be set in the range /16 to /28.
CIDR
Combinations of the number of subnets and the number of IP addresses per subnet that can be configured within a /16 VPC:
Subnet mask | Number of subnets | Number of IP addresses (available on AWS)
/18 | 4 | 16,379
/20 | 16 | 4,091
/22 | 64 | 1,019
/26 | 1,024 | 59
/28 | 4,096 | 11
CIDR
Some addresses in each subnet are already reserved on the AWS side:
.0 Network address
.1 VPC router
.2 DNS server
.3 Reserved for future use
The last address of the subnet is the broadcast address and is also unusable.
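As a quick worked example (a sketch of the arithmetic, not AWS tooling), the usable addresses per subnet can be computed by subtracting the five AWS-reserved addresses from the block size:

    def usable_ips(prefix_length: int) -> int:
        # AWS reserves 5 addresses per subnet: .0 (network), .1 (VPC router),
        # .2 (DNS), .3 (future use), and the last address (broadcast).
        return 2 ** (32 - prefix_length) - 5

    for p in (22, 23, 24, 26, 28):
        print(f"/{p}: {usable_ips(p)} usable addresses")
    # /24 gives 251 usable addresses, the smallest block that covers
    # the 200 addresses required in the question above.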
You've opened a new AWS account and decided to configure your VPC first.
You can use the VPC wizard to quickly set up the most commonly used
configurations. You need a network configuration to set up a web server that
needs public access and a database server that is limited to private access
for increased security.
Subnet
Whether a subnet is public or private is determined by the presence or absence of a route to the Internet gateway.
You are planning to set up a new VPC and an infrastructure with one public subnet and one private subnet. To allow access to the Internet using an IPv4 address, the subnet needs to be configured to function as a public subnet.
Internet gateway
Establish a route to the Internet gateway in the route table.
Select the best AWS architecture configuration to solve this problem. (Please
choose two.)
1) Configure public and private subnets in one AZ, and install NAT gateways
in each public subnet.
2) Configure public and private subnets in the two AZs and install a NAT
gateway in each public subnet.
3) Configure public and private subnets in the two AZs and install a NAT
instance in each public subnet.
4) Route from the private subnet to the NAT gateway (or NAT instance) in
each AZ.
5) Set up a route with one private subnet and one NAT gateway (or NAT
instance).
NAT Gateway
A NAT gateway is required on the public subnet to
connect to the Internet from the private subnet
NAT Gateway
Establish a route from the private subnet to the NAT gateway in the route table.
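A minimal boto3 sketch of this setup (all IDs are hypothetical placeholders): create the NAT gateway in a public subnet backed by an Elastic IP, then point the private subnet's default route at it.

    import boto3

    ec2 = boto3.client("ec2")

    # Create the NAT gateway in a public subnet, backed by an Elastic IP.
    nat = ec2.create_nat_gateway(
        SubnetId="subnet-0123456789abcdef0",        # public subnet (hypothetical ID)
        AllocationId="eipalloc-0123456789abcdef0",  # Elastic IP allocation (hypothetical ID)
    )
    nat_id = nat["NatGateway"]["NatGatewayId"]

    # Route the private subnet's Internet-bound traffic through the NAT gateway.
    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",       # private subnet's route table (hypothetical ID)
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )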
You are a solution architect and you are trying to access DynamoDB
configured outside the VPC from an EC2 instance within the VPC. The
instance must make API calls to DynamoDB and you must ensure that API
calls do not traverse the Internet as per your security policy.
1) Create a gateway endpoint and set up a route table entry for the endpoint.
2) Create an interface endpoint and set up a route table entry for the endpoint.
3) Create a private type endpoint and set up a route table entry for the endpoint.
4) Create an endpoint ENI for each VPC subnet
5) Create a VPC peering connection between VPC and DynamoDB
VPC Endpoints
VPC endpoints provide an entry point so that AWS services with global IP addresses (such as S3) can be accessed directly from within the VPC.
VPC Endpoints
The gateway type applies to S3 and DynamoDB only; most other services use PrivateLink (interface) endpoints.
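For example, a gateway endpoint for DynamoDB can be created with boto3 roughly as follows (the VPC and route table IDs are hypothetical); the endpoint adds a route so that API calls stay inside the AWS network rather than traversing the Internet:

    import boto3

    ec2 = boto3.client("ec2")
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",                      # hypothetical VPC ID
        ServiceName="com.amazonaws.ap-northeast-1.dynamodb",
        RouteTableIds=["rtb-0123456789abcdef0"],            # route table to receive the endpoint route
    )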
Your company has multiple applications in multiple regions. Each one uses a
separate VPC, but the applications need to work together; these VPCs need
to be connected so that the different applications can communicate with each
other.
Which of the following is the most cost-effective solution for this use case?
VPC peering
Your company runs an AWS-based web application. Recently, there has been a spike in traffic attempting to gain unauthorized access. The attempts come from several fixed IP addresses, and the requests appear to originate from different IP addresses within the same CIDR range.
Network ACL
Traffic is controlled using security groups or network ACLs.
You have built a VPC and created two subnets. You are currently in the process of setting up the network ACLs. In doing so, you plan to use the default network ACL that is created when you set up the VPC.
Which of the following is the correct description of the network ACL default
settings? (Please select two.)
You have built a VPC and created two subnets. Now you are in the process of
setting up your network ACLs.
What happens when a web server on a subnet with this network ACL applied
is accessed from 121.103.215.159?
You plan to host a web application on AWS. First, you have created a VPC
and launched an EC2 instance that will serve as your web server on a public
subnet. You also set up another EC2 instance on a separate subnet that will
host the MySQL database and connect to it from the web server.
How should you set up your database for security reasons? (Choose two.)
[Q] Connecting to services in VPC: SSH
You've opened a new AWS account and decided to configure your VPC first. You
can use the VPC wizard to quickly set up a commonly used VPC configuration.
We need a network configuration to set up a database server that is limited to
private access for increased security. You need to set up a bastion server on a
public subnet and access it only from the corporate data center via SSH.
You have decided to open a new AWS account and configure a VPC first. You plan to set
up a web server with limited private access for increased security, and you want to use
a bastion server with Microsoft Remote Desktop Protocol (RDP) access to limit
administrative access to all instances.
How should you implement the Bastion server configuration? (Please select two.)
Bastion server
A bastion server is required to connect to instances in the private subnet. A NAT gateway is required for instances in the private subnet to send traffic out to the Internet.
[Q] VPC flow log
You have a VPC set up and are using AWS resources. You have multiple EC2 instances running in the VPC for your web application, and you balance traffic with an ELB. As part of your monitoring, you need to capture information about the traffic reaching the ELB.
1) Enable VPC flow logging for the EC2 instances with which the ELB is
associated.
2) Use Amazon CloudWatch Logs to review the logs from the ELB.
3) Enable VPC flow logging on the network interface associated with the ELB.
4) Enable VPC flow logging for subnets where ELBs are running.
VPC Flow Logs
VPC Flow Logs capture network traffic information and enable it to be monitored with CloudWatch.
✓ Traffic originating from or destined for a network interface is the target.
✓ Obtain logs of traffic accepted or rejected by security group and network ACL rules.
✓ Logs are collected, processed, and stored in a time frame called the capture window (about 10 minutes).
✓ You can also capture network interface traffic for RDS, Redshift, ElastiCache, and WorkSpaces.
✓ There is no additional charge for the flow log feature itself (standard charges apply to the stored logs).
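A sketch of enabling a flow log on the ENI associated with an ELB, using boto3 (the ENI ID, log group name, and IAM role ARN are hypothetical placeholders):

    import boto3

    ec2 = boto3.client("ec2")
    ec2.create_flow_logs(
        ResourceIds=["eni-0123456789abcdef0"],   # the ELB's network interface (hypothetical ID)
        ResourceType="NetworkInterface",
        TrafficType="ALL",                       # ACCEPT, REJECT, or ALL
        LogDestinationType="cloud-watch-logs",
        LogGroupName="elb-flow-logs",            # hypothetical log group
        DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",  # hypothetical role
    )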
[Q] Use of DNS in VPC
You have opened a new AWS account and launched an EC2 instance. This
EC2 instance has a custom VPC configured on it. You want to use this EC2
instance as a web server and set up a custom domain called Pintor.com. As a
solution architect, you want to use Route53's private host zone feature to
make this happen.
Which of the following VPC settings must be enabled? (Please choose two.)
1) enableDnsHostnames
2) enableDnsSupport
3) enableVpcSupport
4) enableVpcHostnames
5) enableDnsDomain
Using DNS in VPC
Instances launched in a VPC must be configured to receive the public DNS hostname corresponding to their public IP address.
enableDnsHostnames
✓ This setting indicates whether an instance with a public IP address should get the corresponding public DNS hostname.
✓ If this attribute is true and the enableDnsSupport attribute is also true, instances in the VPC get DNS hostnames.
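Both attributes can be enabled with boto3; modify_vpc_attribute accepts one attribute per call, so two calls are needed (the VPC ID is a hypothetical placeholder):

    import boto3

    ec2 = boto3.client("ec2")
    vpc_id = "vpc-0123456789abcdef0"  # hypothetical VPC ID

    # Enable DNS resolution in the VPC, then DNS hostname assignment.
    ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsSupport={"Value": True})
    ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsHostnames={"Value": True})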
As a solution architect, you're looking to reduce AWS costs, and when you
use Cost Explorer to review the cost details, you discover that you're being
charged for Elastic IP addresses that should be available for free.
1) The Elastic IP has not been released, and it is not attached to a running EC2 instance.
2) The Elastic IP has not been released, and it is attached to a running EC2 instance.
3) The free usage time for the Elastic IP has been exceeded.
4) The number of free uses of the Elastic IP has been exceeded.
Elastic IP
An Elastic IP is an additional static public IP address. For an instance to access the Internet, it uses either a public IP or an Elastic IP. If an instance fails, the Elastic IP can be switched to a standby instance ((1) a failure occurs → (2) the EIP is switched automatically).
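A minimal boto3 sketch of allocating an Elastic IP and attaching it to an instance (the instance ID is hypothetical); an EIP left unassociated is what triggers the charge described in the question above:

    import boto3

    ec2 = boto3.client("ec2")

    # Allocate an Elastic IP, then associate it with a running instance.
    eip = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(
        InstanceId="i-0123456789abcdef0",   # hypothetical instance ID
        AllocationId=eip["AllocationId"],
    )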
[Q] ENI
You are building an application that is hosted on an EC2 instance. You have
implemented a configuration for this instance using a private IP address and
MAC address, and if the primary instance is terminated, the ENI must be
attached to the standby secondary instance. This allows traffic flow to
resume within seconds. To do so, you use "warm attach" with the ENI
attachment to the EC2 instance.
What is Auto Scaling?
The ability to add new instances to improve performance when access to the instances has increased.
[Expansion]
Scale-up: add or increase memory and CPU.
Scale-out: increase the number of devices/servers doing the processing.
[Reduction]
Scale-down: reduce memory and CPU, lowering performance.
Scale-in: reduce the number of devices/servers doing the processing.
The scope of the Auto Scaling question
Frequent questions extracted from 1625 questions are as
follows
Creating a launch configuration ✓ You will be asked about the configuration method that determines the contents of the instance configuration used when setting up an Auto Scaling group.
Creating a launch template ✓ You will be asked about the difference between a launch configuration and a launch template.
Auto Scaling configuration ✓ Based on the given scenario, you will be asked about the configuration of the architecture using Auto Scaling.
Auto Scaling configuration settings ✓ Based on the given scenario, you will be asked to confirm the configurations using Auto Scaling.
The behavior of Auto Scaling ✓ You will be asked about the behavior of Auto Scaling when an imbalance occurs during execution, or when an instance is terminated or an anomaly occurs.
The scope of the Auto Scaling question
Frequent questions extracted from 1625 questions are as
follows
Lifecycle hook ✓ You will be asked about the use and behavior of lifecycle hooks, which are custom actions executed when an instance is launched or terminated by the Auto Scaling group.
You are a solutions architect building a web application on AWS. The application uses multiple EC2 instances behind an ELB for increased redundancy. In addition, you have configured an Auto Scaling group so that it can scale out when the load increases. After a while, you find that you need to change the instance type used by the Auto Scaling group.
How do you change the instance type configured in an Auto Scaling group?
1) Create a new launch configuration using the new instance type and
reconfigure the Auto Scaling group to use it.
2) Create a new launch configuration using the new instance type and
modify the Auto Scaling group to use it.
3) Modify the Auto Scaling group by selecting a new instance type on the
Modify Auto Scaling group instance type screen.
4) Edit the launch configuration used by the Auto Scaling group to change to
the new instance type.
[Q] Create a launch template.
You have implemented your web application on AWS. This application has a
multi-AZ configuration with Amazon EC2 instances and Amazon ELB. In
addition, you need to add Auto Scaling to automatically add EC2 instances
and configure them to handle the temporary load increase of incoming
requests.
Select the conditions for adding an existing EC2 instance to your Auto Scaling
group. (Select two.)
You are building a web application on AWS. This web application consists of a single EC2 instance. Because of cost and the low importance of the web application, it has been decided not to use multiple instances, but you need to configure Auto Scaling to maintain a single instance even in the event of instance failure.
Which is the most cost effective scaling method that can meet this
requirement?
You have built a web application on AWS. The application has a multi-AZ configuration with Amazon EC2 instances and an Amazon ELB. You need to add Auto Scaling so that EC2 instances are added automatically to handle temporary increases in the load of incoming requests.
1) Set a target tracking scaling policy with a threshold of 60% average total
CPU usage in the Auto Scaling group.
2) Set a step scaling policy with a threshold of 60% average total CPU usage
in the Auto Scaling group.
3) Set a scheduling policy with a threshold of 60% average total CPU usage
in the Auto Scaling group.
4) Set a manual scaling policy with a threshold of 60% average total CPU
usage in the Auto Scaling group.
Target tracking scaling policy
A target tracking scaling policy scales based on a CloudWatch monitoring metric, adjusting capacity to keep the metric near a target value.
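A target tracking policy for 60% average CPU, as in the question above, might be attached with boto3 roughly as follows (the group and policy names are hypothetical):

    import boto3

    autoscaling = boto3.client("autoscaling")
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",      # hypothetical group name
        PolicyName="cpu-60-target",          # hypothetical policy name
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 60.0,             # keep average CPU near 60%
        },
    )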
[Q] Setting a scaling policy
You have implemented web application on AWS. The application has a multi-
AZ configuration with an Amazon EC2 instance and an Amazon ELB. In
addition, you need to add Auto Scaling to automatically add EC2 instances
and configure it to handle temporary load increases for incoming requests.
The application is expected to periodically increase in load at certain times on
weekends.
Set up scheduled scaling ✓ Perform dynamic scaling when the scheduled time is reached.
[Q] Health check
The web application is currently running on AWS. The web application has an
Auto Scaling group configured on an Amazon EC2 instance behind the ELB.
Today, one EC2 instance experienced an anomaly and the ELB has removed it
from the target, but Auto Scaling did not launch a new instance.
1) The ELB health check type is not being used by Auto Scaling.
2) EC2 health check types are not being used by Auto Scaling.
3) The Auto Scaling group has a cooldown period set.
4) The Auto Scaling group has a timeout grace period set.
Health check
Use either EC2 status information or ELB health checks to check the health of EC2 instances under Auto Scaling.
[Q] Termination Policy
The web application is currently running on AWS. The web application has an
Auto Scaling group configured on an Amazon EC2 instance behind the ELB.
As the load increases, the Auto Scaling group spawns new instances across
two availability zones (AZs). When scaling is performed, three EC2 instances
are deployed in ap-northeast-1a and four EC2 instances are deployed in ap-
northeast-1c.
Selecting an instance ✓ Remove instances according to the custom policy within the selected AZ.
Termination Policy
Configure which instance to terminate first when scaling in based on reduced demand.
OldestLaunchConfiguration ✓ Terminates instances using the oldest launch configuration first.
The web application is currently running on AWS. The web application has an Auto Scaling group configured on Amazon EC2 instances behind the ELB. Recently, Auto Scaling has been adding and removing instances in rapid succession, causing a large number of scaling events.
1) Change the Auto Scaling group size to increase the desired capacity.
2) Configure scaling using scheduled scaling actions.
3) Change the CloudWatch alarm period that triggers the Auto Scaling scale-
down policy.
4) Change the threshold for CloudWatch alarms that trigger Auto Scaling
scale-down policies.
5) Change the cooldown period for Auto Scaling groups
Cooldown period
A cooldown period can be set so that, after a scaling activity completes, Auto Scaling waits before starting further scaling activities.
The web application is currently running on AWS. The web application has an Auto Scaling group configured on Amazon EC2 instances behind the ELB. This Auto Scaling group uses two AZs, and there are currently six Amazon EC2 instances running in the group.
What action does Auto Scaling take if one of the EC2 instances fails? (Select
two.)
Rebalancing
Behavior when an imbalance occurs between AZs:
✓ Adjust an imbalance in the number of instances between AZs.
✓ Stop the instance that caused the group to become uneven and launch a new instance in the AZ that was under-resourced.
Behavior during rebalancing:
✓ Prevent performance degradation by starting a new instance before terminating the old one.
✓ Approaching the maximum Auto Scaling capacity can slow down or stop the rebalancing process. To avoid this, capacity is temporarily allowed to exceed the maximum (by 10% of the maximum capacity or +1 instance).
[Q] Lifecycle Hook
The web application is currently running on AWS. The web application has an
Auto Scaling group configured on an Amazon EC2 instance behind the ELB.
When performing scale-in, we would like to be able to download the log files
of the instances to be stopped in order to examine the impact of instance
stoppages.
Which of the following features can be used to enable this custom action?
A lifecycle hook pauses the instance when it is launched or terminated, and you can set up an action to be performed during that time. Instances can be kept in a wait state for a set period.
Reference: https://docs.aws.amazon.com/ja_jp/autoscaling/ec2/userguide/lifecycle-hooks.html#lifecycle-hooks-overview
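A sketch of a hook for the log-collection scenario above, using boto3 (the names and timeout are illustrative): the hook pauses terminating instances so their log files can be copied off before shutdown.

    import boto3

    autoscaling = boto3.client("autoscaling")
    autoscaling.put_lifecycle_hook(
        LifecycleHookName="copy-logs-before-terminate",  # hypothetical hook name
        AutoScalingGroupName="web-asg",                  # hypothetical group name
        LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
        HeartbeatTimeout=300,      # keep the instance in Terminating:Wait for 5 minutes
        DefaultResult="CONTINUE",  # proceed with termination if no completion signal arrives
    )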
[Q] Troubleshooting
Your company has set up an ELB in front of EC2 instances to distribute traffic, and then set up an Auto Scaling group. You use a load-testing tool against the Auto Scaling group to see how it behaves as the load increases. However, the status check of the EC2 instances launched by Auto Scaling shows "Impaired".
1) Wait a few minutes for the instance to recover, and if it does not recover,
terminate the instance and then replace it with another one.
2) Immediately terminate the instance and then replace it with another one.
3) The ELB switches the target to another instance.
4) Auto Scaling performs rebalancing between AZs that have not
experienced a failure.
Troubleshooting
Auto Scaling must be suspended for instance maintenance and investigation.
The scope of the RDS questions
Frequent questions extracted from 1625 questions are as follows
Selection of RDS ✓ You will be asked to choose RDS as the best database service offered by AWS.
Public access configuration ✓ You will be asked about the configuration of direct Internet access to a DB instance such as RDS.
1) EC2
2) RDS
3) Aurora
4) DynamoDB
Data model
There are many data models with different purposes.
Relational model
Graph model
Key value store
Object
Document
Wide column
Hierarchical
Relational model
The relational model is the basic data model for databases.
[Q] Characteristics of RDS
You are a solutions architect building a database on AWS. Since you are currently using MySQL on-premises, you have decided that you can easily migrate to MySQL on RDS. If you use RDS, you need to migrate based on its features.
- MySQL
- Oracle
- Microsoft SQL Server
- PostgreSQL
- MariaDB
- Amazon Aurora
RDS Best Practices
AWS recommends the following best practices for RDS.
Limitations of RDS
You are building a relational database on AWS. You expect a large number of
transactional processes to occur in this database solution and are concerned
about random I/O latency. As the solution architect, you were asked to
improve performance through database configuration. You are required to do
this without putting extra operational load on the system.
General Purpose
✓ SSD type
✓ Charged for capacity per GB
✓ Capable of delivering 100–10,000 IOPS, with bursts in addition to normal performance (depending on size)
Provisioned IOPS
✓ SSD type
✓ Charged for capacity per GB and per provisioned IOPS
✓ Capable of 1,000–30,000 IOPS, with bursts in addition to normal performance (depending on size)
What is the correct way to set up a connection to RDS MySQL via the
Internet? (Please choose three.)
General Configuration
Install RDS on a private subnet and use an EC2 instance as
a bastion host to access it.
General Configuration
Because this configuration relies on a single AZ, there is a
risk of downtime in the event of an AZ failure.
A multi-AZ configuration, with synchronous replication and automatic failover between RDS instances, avoids this single-AZ risk.
[Q] Effect of multi-AZ configuration
A company is using an RDS database configured in a multi-AZ deployment to
improve the availability of its enterprise systems. The primary database of
the RDS has failed.
Select what action is automatically taken on the RDS after the failure.
Which of the following is the most cost effective solution to this problem?
Multi-AZ deployment uses synchronous replication with automatic failover; a read replica uses asynchronous replication between master and slave.
Choose the most cost effective solution that can meet this requirement.
Reference: https://aws.amazon.com/jp/blogs/news/best-practices-for-amazon-rds-for-postgresql-cross-region-read-replicas/
[Q] Scaling of RDS
You are building a two-tier application using EC2 instances and RDS. At this stage, the workload requirements for the application are clear, but the performance requirements, such as the expected number of requests the database must process, are unknown. Therefore, it must be possible to scale the database after launch.
RDS
✓ Improve performance by scaling the RDS instance, or by using Aurora.
[Q] RDS encryption.
A large e-commerce company is using the RDS PostgreSQL database for their
e-commerce site. Recently, an IT audit was performed and it was noted that
the RDS database was not encrypted.
Select the correct description for the procedure to encrypt this RDS database.
(Choose two.)
Encryption of stored data ✓ Encrypt data resources in storage.
RDS Encryption
DB instances and snapshots can be encrypted.
[What is encrypted]
• DB instances
• Automatic backups
• Read replicas
• Snapshots
[Features]
• AES-256 encryption
• Key management with AWS KMS
• The same key is used for the read replicas
• Encryption can be set only when an instance is created
• Encryption/restore of snapshot copies
[Q] Maintenance
Company B is planning to use a database on AWS and is considering RDS, but they need to understand the impact of the maintenance AWS performs on its managed services. They especially need to avoid the maintenance window if the DB is forced to go offline during it, as that would have a significant impact.
Select the following maintenance events that will cause database downtime.
(Select two.)
Note that the DB instance will be temporarily taken offline when the necessary
operating system and database patches are applied
[Q] Backup
Company B uses a database environment on AWS using RDS. However, the
database has been corrupted by a failure and needs to be restored. You, as the
solution architect, are using point-in-time recovery to recover the data to its most
recent configuration.
Which is the correct way to restore an RDS database to a specific point in time?
(Please choose two.)
1) Snapshots and transaction logs can restore the DB to the state it was in 5
minutes ago.
2) Only snapshots can restore the DB to the state it was 5 minutes ago.
3) Only the transaction log can restore the DB to the state it was 5 minutes ago.
4) Snapshots and transaction logs can restore the DB to the state it was in 10 minutes ago.
5) Only snapshots can restore the DB to the state it was 10 minutes ago.
6) Only the transaction log can restore the DB to the state it was 10 minutes ago.
Backup
By taking a snapshot, RDS data can be saved and fault
tolerance can be implemented.
Selecting the EBS volume type ✓ Based on the scenario, you will be asked to select the EBS volume type that meets workload requirements.
EBS RAID configuration ✓ You will be asked about the configuration and use of RAID 0 and RAID 1 using EBS.
[Q] Select EBS
Which of the following is the most appropriate storage service to meet this
requirement?
1) EFS
2) instance store
3) EBS
4) S3
5) Amazon FSx
Select EBS
AWS offers three forms of storage services.
[Basic info]
✓ EBS is used for purposes like OS operation, application, and data storage.
✓ It attaches to EC2 over the network.
✓ 99.999% availability.
✓ Sizes range from 1 GB to 16 TB.
✓ Charged by size and duration of use.
[Features]
✓ Volume data is replicated by default to multiple pieces of hardware within the AZ, making it redundant.
✓ EBS can be used even if all ports are closed, because it is not subject to communication control by security groups.
✓ Data is stored persistently.
EBS features
EBS cannot attach to an instance in another AZ.
As a solutions architect, you are building a new mobile e-commerce site. Currently, you provision EC2 instances via the EC2 API. These instances display the best screen for each customer depending on the customer's data. A non-functional requirement for storage concerns which volume types cannot be used as boot volumes.
Which storage volume type cannot be used as a boot volume for an EC2
instance? (Select two.)
You are a solutions architect responsible for managing AWS infrastructure within your company. The company's AWS costs are increasing, and you have used AWS Trusted Advisor to look for room for cost optimization. According to AWS Trusted Advisor, you can reduce costs by cleaning up unused EBS volumes and snapshots to save space and money.
EBS Snapshot
[Features]
✓ Back up EBS data with snapshots.
✓ EBS can be restored from a snapshot into another AZ.
✓ Snapshots are stored in S3.
✓ From the second generation onward, a snapshot becomes an incremental backup that saves only the changed data (restoring is possible even if the first generation is deleted).
✓ Data is compressed and stored at the block level during snapshot creation, and charges are based on the compressed volume.
✓ The EBS volume remains available while a snapshot is being created.
[Q] Snapshot management
We have a web application in our company that uses multiple EBSs. Security
regulations require us to perform backups on a regular basis, but we
currently do it manually and it is very time-consuming. Therefore, we would
like to implement an automated method of creating, maintaining and
deleting backups of EBS volumes.
The company has multiple departments with AWS accounts that use AWS resources for various purposes. An EBS snapshot in department A's account needs to be used in department B's account as well. As the solutions architect, you are required to handle this setup. The snapshot was taken from an EBS volume that was encrypted with a custom key.
Share a snapshot
Snapshots can be transferred to other accounts by
changing the permissions
The research team uses EC2 instances for data analysis. The data collected
on a daily basis is run as an analytical workload as a batch job on an EC2
instance with an EBS volume attached to it. While running the analysis, the
team discovered that when the EC2 instance is terminated, the connected
EBS volumes are also lost.
Research institutions use EC2 instances for data analysis. The data collected on a daily basis is processed as a batch job on an EC2 instance with an EBS volume. Because this data is highly sensitive, the data stored in EBS must meet HIPAA compliance standards.
You have created an AWS account and launched a new EC2 instance. When
you check the launched EC2 instance, you see that the EC2 status check
shows Insufficient Data.
Reference: https://docs.aws.amazon.com/ja_jp/AWSEC2/latest/UserGuide/monitoring-volume-status.html
[Q] RAID configuration of EBS.
What is ELB?
ELB distributes traffic to multiple instances. It can also check the health of your EC2 instances and route traffic only to healthy instances.
The scope of the ELB questions
Frequent questions extracted from 1625 questions are as follows
ELB features ✓ You will be asked about the use and features of ELBs, including how they differ from Route 53.
ALB features ✓ You will be asked about the function of ALBs and how they differ from other ELB types.
NLB features ✓ You will be asked about the function of NLBs and how they differ from other ELB types.
The scope of the ELB questions
Frequent questions extracted from 1625 questions are as follows
Cross-zone load balancing ✓ You will be asked about the features and use cases of cross-zone load balancing in ELB.
Your company is using AWS in multiple departments and you have a VPC set
up for each application. As a solution architect, you are implementing inter-
application integration. To implement this, you want to configure each VPC to
have a peering connection and use a single ELB to route traffic to multiple
EC2 instances in peered VPCs in the same region.
1) Deploy all four instances to two availability zones in the Singapore region.
2) Deploy the four instances to AZ-a in the Singapore region.
3) All four instances are deployed to AZ-b in the Sydney region.
4) Two instances are deployed to AZ-a in the Singapore region and the other
two instances are deployed to AZ-b in the Sydney region.
ELB Configuration
ELB can be configured to distribute traffic to instances across multiple AZs within one region.
[Q] ELB Configuration
Select the configuration you need to make this configuration work. (Select
two.)
ELB Configuration
An ELB connected to the public network can also be configured to distribute traffic to instances in private subnets.
[Q] Select the ELB type
Company A operates a video delivery site and is looking to use the AWS
cloud to deliver its content to users around the world. The video delivery site
has users all over the world and must support at least one million requests
per second.
Classic Load Balancer (CLB)
✓ Supports HTTP/HTTPS and TCP/SSL protocols (L4 and L7).
✓ Identifies the source IP address via Proxy Protocol.
✓ Supports server certificate authentication between the ELB and the back-end EC2 instances when using HTTPS/SSL.
✓ All instances under the CLB should serve the same function.
✓ Content-based routing, which inspects the contents of requests and routes them to different destinations, is not possible.
Application Load Balancer (ALB)
Single load balancer with enhanced Layer 7 support,
enabling requests to be routed to different applications
(e.g., routing order-function requests and procurement-function requests to different groups of APP servers)
[Q] NLB features
Company A operates a video delivery site and is looking to use the AWS
cloud to deliver its content to users around the world. The video delivery site
has users all over the world and the requirement is to support at least one
million requests per second. The engineering team provisioned multiple
instances on the public subnet and designated these instance IDs as targets
for the NLB.
Describe the correct routing scheme for the target instance configured in the
NLB.
1) Traffic is routed through the instance using the primary private IP address
2) Traffic is routed to the instance using the primary public IP address
3) Traffic is routed to the instance using the DNS name
An L4 NAT load balancer that supports TCP listeners (return traffic does not go through the NLB).
Able to handle volatile workloads and millions of requests per second.
Supports static IP addresses and registering IP addresses as targets, including targets outside the VPC.
Each instance or IP address can be registered with the same target group using multiple ports.
NLB does not need the pre-warming that is required for CLBs and ALBs when large-scale access is anticipated.
While ALB and CLB use X-Forwarded-For to convey the source IP address, NLB does not rewrite the source IP address or source port, so the source can be determined directly from the packets.
NLB has built-in fault tolerance and can handle connections that stay open for months or years.
Supports containerized applications such as ECS.
Supports monitoring the individual health status of each service.
Supports subnet expansion (subnets can be added).
[Q] Cross-zone load balancing
A large supermarket chain is running an e-commerce application. It deploys four EC2
instances, one instance in AZ-a and three instances in AZ-b for redundancy, and uses
ELB to control traffic.
What are the results of the traffic balancing with and without cross-zone load balancing
in this configuration?
1) With cross-zone load balancing enabled, one instance of AZ-a receives 50% of the
traffic and three instances of AZ-b each receive 17% of the traffic. With cross-zone
load balancing disabled, one instance of AZ-a receives 25% of the traffic, and three
instances of AZ-b each receive 25% of the traffic.
2) With cross-zone load balancing enabled, one instance of AZ-a receives 25% of the
traffic and three instances of AZ-b each receive 17% of the traffic. With cross-zone
load balancing disabled, one instance of AZ-a receives 25% of the traffic, and three
instances of AZ-b each receive 25% of the traffic.
3) With cross-zone load balancing enabled, one instance of AZ-a receives 25% of the traffic and three instances of AZ-b each receive 25% of the traffic. With cross-zone load balancing disabled, one instance of AZ-a receives 50% of the traffic and the three instances of AZ-b each receive approximately 17% of the traffic.
4) With cross-zone load balancing enabled, one instance of AZ-a receives 90% of the
traffic and three instances of AZ-b each receive 10% of the traffic. With cross-zone
load balancing disabled, one instance of AZ-a receives 10% of the traffic, and three
instances of AZ-b each receive 30% of the traffic.
[Q] Encryption
How can you achieve your encryption requirements? (Please choose two.)
1) Configure a TCP listener with NLB and terminate SSL on the EC2 instance.
2) Configure an HTTPS listener in the ALB and install an SSL certificate on
the ALB and the EC2 instance.
3) Configure an HTTPS listener in the NLB to install SSL certificates on the
ALB and EC2 instances.
4) Use pass-through mode in an ALB to terminate SSL on an EC2 instance.
5) Configure a TCP listener in the ALB and install an SSL certificate on the
ALB and the EC2 instance.
[Q] Sticky Session.
1) Use the load balancing function so that all requests from the same user are sent to the same EC2 instance during a session.
2) Use Connection Draining to send all requests from the same user to the
same EC2 instance during a session.
3) Use a sticky session to send all requests from the same user to the same
EC2 instance during the session.
4) Use SSL Termination to send all requests from the same user to the same
EC2 instance during a session.
[Q] Connection Draining
1) Connection draining
2) Cross-zone load balancing
3) Sticky session
4) Enabling Health Checks
[Q] Logging
Cross-zone load balancing ✓ Distribute the load evenly across EC2 instances in multiple AZs, based on the load of the EC2 instances under the load balancer.
Reference: https://aws.amazon.com/jp/blogs/developer/using-python-and-amazon-sqs-fifo-queues-to-preserve-message-sequencing/
The scope of the SQS questions
Frequent questions extracted from 1625 questions are as follows
Selecting SQS ✓ You will be asked to choose SQS out of similar services like Amazon SNS, SES, etc.
Message deduplication ID ✓ You will be asked about the characteristics and use cases of the message deduplication ID.
Select the services you should use to increase the reliability of your worker
processes.
1) Amazon SQS
2) Amazon SNS
3) Amazon SES
4) Amazon MQ
Selecting the SQS
SQS is a polling-type queuing service and is used for
concurrent execution of tasks.
You are a solutions architect building an e-commerce site that is hosted on EC2 instances. The site's orders are processed by the processing server via messages from the SQS queue; the visibility timeout for the SQS queue is set to 30 minutes. The site is configured to notify the order handler with a message when an order is completed, but some of these message notifications are failing to be delivered.
1) The server processing the order is not deleting messages in the SQS
queue after processing the messages.
2) The standard queue is used, which results in duplicate messages.
3) The queue is set to short polling, which increases the number of empty
message retrievals.
4) Several order messages have been transferred to the dead letter queue.
SQS features
SQS can be used in conjunction with various AWS
services and realized in a loosely coupled architecture.
Basic information
✓ Messages are sent once and handled as a queue.
✓ A polling-based queuing service.
✓ The standard queue does not guarantee the order of message delivery, but the FIFO queue does.
✓ Priority can be given to one queue (e.g., a FIFO queue) over other queues.
✓ A message is retained for the message retention period; if the period is exceeded, the message is deleted.
✓ You cannot cancel a message once it has been issued.
✓ Queue delivery is retried based on the delivery policy.
SQS features
A queue is a relay station where a message sent by the
Producer is stored, and processing begins when the
Consumer polls the message.
SQS features
SQS issues and stores queues and manages the polling process.
(1) The transmission process sends a message. (2) The message is kept in the queue. (3) The receiving process polls the queue. (4) If there is a message, the receiving process receives it.
Features of the SQS queue
The number of messages is unlimited, but the message retention period needs to be configured.
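For example, a queue with an explicit retention period can be created with boto3 roughly as follows (the queue name is hypothetical; retention can be set between 60 seconds and 14 days):

    import boto3

    sqs = boto3.client("sqs")
    queue = sqs.create_queue(
        QueueName="video-jobs",  # hypothetical queue name
        Attributes={
            "MessageRetentionPeriod": "345600",  # 4 days, in seconds
        },
    )
    print(queue["QueueUrl"])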
1) You can configure standard Amazon SQS queues for S3 events, but you
cannot configure FIFO queues.
2) You can configure Amazon SQS FIFO queues for S3 events, but you
cannot configure standard queues.
3) S3 events can be configured with both standard Amazon SQS queues and
FIFO queues.
4) You cannot set up Amazon SQS for S3 events, so you need to use SNS.
SQS Queue Type
In SQS, you have to choose between a standard queue and a FIFO queue when creating the queue.
FIFO queues
✓ First-in, first-out (FIFO) delivery protects the order of messages.
✓ There is no duplication in the queue: the message is delivered only once, and it remains in the queue until the consumer processes and deletes it.
✓ Limited to 300 transactions per second.
✓ FIFO queues are used where the order of operations and events is important, or where duplication is not acceptable.
Standard queue
A standard queue performs in-order processing and deliver-once messaging only on a best-effort basis.
FIFO queue
As the name implies, the queueing system protects the
order in which the first queue entered is processed first.
[Q] SQS Identifier
1) Use the SQS FIFO queue to send the message with a group ID attribute
that represents the value of the device ID of the IoT data.
2) The standard SQS queue is used to send the message with a group ID
attribute that represents the value of the device ID of the IoT data.
3) Use Kinesis Data Streams to send a message with a group ID attribute
that represents the value of the device ID of the IoT data.
4) Kinesis Data Streams is used to process data in isolation per shard by
assigning a group ID attribute that represents the value of the device ID
of the IoT data.
SQS Identifier
SQS allows you to use various functions when using the
queue.
1) Create an Amazon SQS queue, configure the front end to add messages
to the queue, and configure the back end to poll the queue for messages.
2) Create an Amazon SQS queue, configure the backend to add messages to
the queue, and configure the frontend to poll the queue for messages.
3) Create an Amazon SNS, configure the front end to add messages to the
queue, and configure the back end to poll the queue for messages.
4) Create an Amazon SNS, configure the backend to add messages to the
queue, and configure the frontend to poll the queue for messages.
SQS Configuration
In the basic SQS structure, messages are placed on the queue by the front-end servers and processed in parallel by the back-end processing servers.
Reference: https://aws.amazon.com/jp/blogs/developer/using-python-and-amazon-sqs-fifo-queues-to-preserve-message-sequencing/
[Q] SQS and Auto Scaling
Company B has built a workload on AWS that runs video processing. The
system requires a distributed configuration with queues for parallel data
processing. These jobs are executed irregularly and there are many
processing changes, so the execution period is unclear. In addition, the load
is likely to increase or decrease frequently. This video processing system is
planned to be operated in the medium to long term, and each editing
process will take from one to 30 minutes to complete.
Reference: https://docs.aws.amazon.com/ja_jp/autoscaling/ec2/userguide/as-using-sqs-queue.html
[Q] Visibility timeout
Company B has built a workload on AWS that runs video processing. The
video processing system executes the video editing process based on
messages from the Amazon SQS queue sent from the EC2 instance. After
processing, the video is stored in S3.
Spot instances are used for this processing, but some spot instances are terminated immediately after retrieving messages from the queue. These spot instances had not finished processing the messages; SQS is using FIFO queues, and visibility timeouts have been set up.
Based on this scenario, what happens to messages that are not finished?
1) Once the visibility timeout has passed, the message can be processed
again.
2) The message is lost because it was deleted from the queue during
processing.
3) The message is not processed and is moved to the dead letter queue.
4) The message remains in the queue and is immediately retrieved by
another instance.
Visibility timeout
Visibility timeout is a feature that renders the message
invisible for a certain period of time (30 seconds to 12
hours)
Immediately after a message is received, the message remains in the
queue. To prevent other consumers from reprocessing the same
message, Amazon SQS can prevent duplicate processing by setting a
visibility timeout.
Visibility timeout
Visibility timeout is a feature that makes a message invisible for a certain period of time (30 seconds to 12 hours).
(1) An instruction message is sent to the queue.
(2) The polled message cannot be seen by other consumers during the visibility timeout (e.g., 10 minutes).
(3) Only the specific EC2 instance that received the message can process it.
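A consumer-side sketch with boto3 (the queue URL is a hypothetical placeholder): receive a message, extend its visibility timeout while working on it, and delete it only when processing succeeds, so an interrupted worker's message becomes visible again and can be retried.

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.ap-northeast-1.amazonaws.com/123456789012/video-jobs"  # hypothetical

    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
    for msg in resp.get("Messages", []):
        # Hide the message from other consumers for 10 minutes while processing.
        sqs.change_message_visibility(
            QueueUrl=queue_url,
            ReceiptHandle=msg["ReceiptHandle"],
            VisibilityTimeout=600,
        )
        # ... process the message here ...
        # Delete only after successful processing; otherwise the message
        # reappears in the queue once the visibility timeout expires.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])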
[Q] Polling method
1) Use a delay queue to defer delivery of new messages to the queue for 10
seconds.
2) Use short polling to defer delivery of new messages to the queue for 10
seconds.
3) Use message timers to defer delivery of new messages to the queue for
10 seconds.
4) Use a visibility timeout to defer delivery of new messages to the queue
for 10 seconds.
[Q] Message timers
1) Use SQS to set up priority messages for paid users to be processed and
use default messages for free users.
2) Use the Lambda function to set up a polling process that prioritizes
message processing for paid users and uses default messages for free
users.
3) Set up a polling process that prioritizes message processing for paid users
using SNS and uses default messages for free users.
4) Use Amazon MQ to set priority messages for paid users and use default
messages for free users.
Advanced queue settings
SQS allows you to use various functions when using the
queue. You need to use it properly depending on use-case.
• SendMessageBatch
• DeleteMessageBatch
• ChangeMessageVisibilityBatch
The scope of CloudFront
What is CloudFront?
CloudFront is a CDN service that uses global edge locations to deliver content efficiently.
See: https://aws.amazon.com/jp/cloudfront/features/?nc=sn&loc=2
What is CloudFront?
CloudFront is a CDN (Content Delivery Network) service
provided by AWS
What is CloudFront?
CDN is a service to speed up the process of web content
delivery
Content from the origin web server is cached on CloudFront edge servers in locations around the world (e.g., America, Asia, Europe), speeding up delivery.
The scope of the CloudFront questions
Frequent questions extracted from 1625 questions are as follows
S3 configuration with CloudFront ✓ You will be asked about configurations with CloudFront based on scenarios such as high-performance content delivery.
Regional edge cache ✓ You will be asked about the use of the regional edge cache that CloudFront uses for delivery.
The scope of the CloudFront questions
Frequent questions extracted from 1625 questions are as follows
CloudFront behavior ✓ You will be asked about CloudFront fetching from the cache first, and its behavior when cached data is absent.
Access control to cache ✓ You will be asked how to limit users' access to cached data.
Logging ✓ You will be asked about the method of logging in CloudFront and its use.
CloudFront features
Edge locations around the world absorb large-scale access and allow content to be delivered efficiently and rapidly from the origin server (e.g., an S3 bucket with static web hosting).
[Q] Custom origin configuration
[Q] Origin server redundancy
1) Connect the ELB to an existing EC2 instance and configure the ELB as the
origin server.
2) Amazon S3 is used to provide dynamic content for web applications and
configure the S3 bucket as an origin server.
3) Configure two or more EC2 instances deployed in different availability
zones as an origin server.
4) Add an Auto Scaling group to an existing EC2 instance and configure Auto
scaling as an origin server.
Origin server redundancy
The origin server should be redundant and work with
CloudFront via ELB.
A leading media company delivers news to its customers based on video data in Amazon S3 buckets. The company's customers are located all over the world, and demand is high during peak times. Users in European regions have been complaining of slow download speeds and high rates of HTTP 500 errors during peak hours, and you, as the solutions architect, have been asked to help remedy this.
See: https://aws.amazon.com/jp/cloudfront/features/?nc=sn&loc=2
[Q] Regional edge cache
Which content type goes directly to the origin rather than the regional edge
cache? (Please choose two.)
Distribution Settings
CloudFront can implement optimal delivery based on your
conditions.
You are a solutions architect and manage the operations of a web application.
Since the application is used globally, the delivery process is handled by
CloudFront. The origin server is frequently accessed because objects that
should be cached are not in the edge locations. This problem also occurs for
objects that are commonly used.
Select the most likely cause of this problem from the following
Cache target setting ✓ Analyze content usage data and set target URLs for the caching of static and dynamic content.
Mixed Scenarios
• If the Maximum TTL is set to 5 minutes (300 seconds) and the Cache-
Control max-age header is set to 1 hour (3600 seconds), CloudFront caches
objects for 5 minutes instead of 1 hour.
• If the Cache-Control max-age header is set to 3 hours and the Expires
header is set to 1 month, CloudFront will cache objects for 3 hours instead
of 1 month.
• If you set 0 seconds for the Default TTL, Minimum TTL, and Maximum TTL, CloudFront will always fetch the latest content from the origin.
Reference: https://docs.aws.amazon.com/ja_jp/AmazonCloudFront/latest/DeveloperGuide/Expiration.html#expiration-individual-objects
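The precedence rules above can be summarized as a small helper (a sketch of the logic, not an AWS API): the effective TTL is the origin's header value clamped between Minimum TTL and Maximum TTL.

    def effective_ttl(minimum_ttl: int, maximum_ttl: int, header_max_age: int) -> int:
        # CloudFront caps the Cache-Control max-age value at Maximum TTL
        # and raises it to at least Minimum TTL. (When both Cache-Control
        # max-age and Expires are present, max-age takes precedence.)
        return max(minimum_ttl, min(maximum_ttl, header_max_age))

    print(effective_ttl(0, 300, 3600))  # -> 300: a 5-minute Maximum TTL wins over max-age=3600
    print(effective_ttl(0, 0, 3600))    # -> 0: all-zero TTLs force revalidation with the origin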
[Q] Use of cache
http://pintor.cloudfront.net/main.html?language=de ...
http://pintor.cloudfront.net/main.html?language=en
http://pintor.cloudfront.net/main.html?language=jp
A leading image delivery site is built on AWS. The site's operator is looking to
use a CDN to streamline image delivery. So, as a solution architect, you are
tasked with calculating and reporting on the cost of content delivery using
CloudFront.
Select which of the following are elements for calculating CloudFront costs.
(Select two.)
1) Number of Regions
2) Number of global edge locations
3) Data Transfer Out
4) Number of requests
5) The number of caches set
CloudFront Costs
Mainly charged for requests and data transfer out.
Request charges:
• HTTP/HTTPS requests
• Origin Shield requests
• Invalidation requests
• Field-level encryption requests
• Real-time log requests
Which of the following is the most effective way to save money with
CloudFront?
GZIP compression
Reference: https://docs.aws.amazon.com/ja_jp/AmazonCloudFront/latest/DeveloperGuide/private-content-overview.html
[Q] Access control to origin
In the application you built, you store static contents in S3 and then use
CloudFront for global distribution. In doing so, you want to fully protect the
communication between the CloudFront distribution and the S3 bucket
containing the website's static files. Users should only be able to access the
S3 bucket through CloudFront and not directly.
A major image delivery site is built on AWS. You are considering using a CDN
to streamline your image delivery system. As a solution architect, you plan to
use CloudFront to deliver your content efficiently. You need to make sure that
the content is only available to registered end users who are members on the
site.
Please select a solution that can meet this requirement. (Please select two)
Signed URL
• Let users access the content only from a signed URL, not a URL that directly accesses the content.
• Use signed URLs when signed cookies are not supported (RTMP distributions do not support signed cookies).
• Used to restrict access to individual files (e.g., application installation downloads).
• Used if the client does not support cookies (such as a custom HTTP client).
Signed Cookie
• Let users access the content only through a signed cookie, not a URL that accesses the content directly.
• Used to provide access to multiple restricted files (e.g., all the files of a video in HLS format, or all files in the subscriber area of a website).
• Use this if you do not want to change the current URLs.
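A sketch of generating a signed URL with botocore's CloudFrontSigner (the key-pair ID, key file, and distribution domain are hypothetical placeholders; the cryptography package supplies the RSA signing CloudFront expects):

    from datetime import datetime, timedelta

    from botocore.signers import CloudFrontSigner
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def rsa_signer(message: bytes) -> bytes:
        # Sign with the private key that matches the public key registered in CloudFront.
        with open("private_key.pem", "rb") as f:  # hypothetical key file
            key = serialization.load_pem_private_key(f.read(), password=None)
        return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

    signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)  # hypothetical key-pair ID
    url = signer.generate_presigned_url(
        "https://d111111abcdef8.cloudfront.net/video.mp4",       # hypothetical distribution/object
        date_less_than=datetime.utcnow() + timedelta(hours=1),  # URL expires in 1 hour
    )
    print(url)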
[Q] CloudFront region restriction
By enabling geo-restriction in the CloudFront distribution settings, content distribution to the specified countries and regions is restricted.
See: https://aws.amazon.com/jp/cloudfront/features/?nc=sn&loc=2
[Q] Access restriction
The news media application uses CloudFront to deliver web news. The
application runs on an EC2 instance behind the Elastic Load Balancer (ELB).
You need to restrict the ability of users to bypass CloudFront and access
content directly through the ELB.
A web media company decided to use CloudFront with their web server as the origin to improve read performance. A recent IT audit required them to secure the data communication between the origin server and CloudFront, because the delivery process using CloudFront is not currently secure. Note that this origin server is not an ELB.
1) AWS Certificate Manager (ACM) is used on the origin and CloudFront side
to enable data communication via HTTPS.
2) Third-party CA certificates are used on the viewer and CloudFront side to
enable data communication via HTTPS.
3) Third-party CA certificates are used on both the origin and CloudFront
side to enable data communication via HTTPS.
4) AWS Certificate Manager (ACM) is used on the viewer and CloudFront
side to enable data communication via HTTPS.
[Q] Encryption
Reference: https://docs.aws.amazon.com/ja_jp/AmazonCloudFront/latest/DeveloperGuide/field-level-encryption.html
[Q] Logging
Your company has a web delivery service hosted on EC2 Instances using
CloudFront, and your IT security department is auditing the PCI compliance
of applications that use this web delivery.
Please select the appropriate action to ensure compliance goals. (Select two.)
DynamoDB in action
Reference: https://aws.amazon.com/jp/dynamodb/
The scope of the DynamoDB questions
Frequent questions extracted from 1625 questions are as follows
Selecting DynamoDB ✓ You will be asked to select DynamoDB as the best database to meet requirements.
Capacity mode settings ✓ You will be asked about the differences between the two capacity modes of DynamoDB and their purposes.
[Q] Select DynamoDB
Select the best AWS database service for this database processing. (Please
choose two.)
1) Amazon Aurora
2) DynamoDB
3) Amazon EMR
4) ElastiCache
5) RedShift
NoSQL-type database
There are two main types of databases: Relational DB and
Non-Relational DB.
Relational
NoSQL
DB
KVS: Key-Value Type
Enables high-speed processing by grouping values into a single row, without a relational schema structure.
Database types range from centralized relational databases (RDS, RDB/OLTP) and data warehouses for analysis, to distributed NoSQL options: in-memory KVS and data grids, document DBs (Amazon DocumentDB, Elasticsearch) for operations, data lakes (e.g., Hadoop HDFS, S3) for analysis, and graph DBs (Amazon Neptune).
What DynamoDB can do
The key-value (wide column type) allows for easy
manipulation of data.
NoSQL can do:
• CRUD operations
• Simple queries and orders
• For example, NoSQL DBs are good at processing session data for applications that tens of thousands of people access and process at the same time.
NoSQL can't do / not suitable for:
• JOIN/TRANSACTION/COMMIT/ROLLBACK are not available.
• Detailed queries and orders (not good at searching or joining data).
• Reading and writing large amounts of data is expensive.
[Q] DynamoDB features
Which of the following is the best way to use DynamoDB? (Select two.)
• Mobile app backend / batch processing lock management / flash marketing / storage index
• Backend data processing
DynamoDB performance
A fully managed NoSQL database service with unlimited table sizes, but with a single-item size limit of 400 KB.
Reference: https://docs.aws.amazon.com/ja_jp/amazondynamodb/latest/developerguide/Limits.html#limits-dynamodb-encryption
DynamoDB performance
Single-digit millisecond latency can be achieved consistently, while DAX allows requests to be processed in microseconds.
As the operations manager, you should select the best solution to mitigate this problem.
Table Design
Design tables in nested structures using tables, items, and attributes. Each item in a table is identified by a primary key: either a hash key alone, or a composite key combining a hash key and a range key.
Secondary index
LSIs and GSIs are added when the search requirements cannot be met by hash and range keys alone.
They should not be multiplied unnecessarily, because they require additional throughput and storage capacity and increase write costs.
Table operation
For table operations, you can use the following commands to operate on DynamoDB tables:
GetItem: Get the item(s) matching a hash key
Query: Retrieve items matching the hash key and range key (up to 1 MB)
PutItem: Write one item
Scan: Search the whole table (up to 1 MB)
BatchGetItem: Get matching items for multiple primary keys
UpdateItem: Update one item
DeleteItem: Remove one item
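To make these operations concrete, here is a minimal boto3 sketch; the table name ("Users"), key name ("user_id"), and values are hypothetical, and the table is assumed to already exist with "user_id" as its hash key.

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Users")  # hypothetical table with hash key "user_id"

# PutItem: write one item
table.put_item(Item={"user_id": "u001", "name": "Taro", "score": 120})

# GetItem: get one item by its hash key
resp = table.get_item(Key={"user_id": "u001"})
print(resp.get("Item"))

# Query: retrieve items matching the hash key (up to 1 MB per call)
resp = table.query(KeyConditionExpression=Key("user_id").eq("u001"))

# Scan: search the whole table (up to 1 MB per call)
resp = table.scan()

# UpdateItem: update one item
table.update_item(
    Key={"user_id": "u001"},
    UpdateExpression="SET score = :s",
    ExpressionAttributeValues={":s": 150},
)

# DeleteItem: remove one item
table.delete_item(Key={"user_id": "u001"})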
[Q] DynamoDB streams
Application features triggered by data updates: execution of application processes, such as notification processing, in response to data updates.
Use cases for DynamoDB streams
Lambda functions perform many different application processes with DynamoDB Streams:
• Automatic update of separate tables
• Saving logs
• Push notifications
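A minimal sketch of a Lambda function triggered by DynamoDB Streams; the event shape follows the standard stream record format, and the notification logic is a hypothetical placeholder.

# Lambda handler invoked by a DynamoDB Streams event.
# Assumes the table's stream is configured as this function's trigger.
def lambda_handler(event, context):
    for record in event["Records"]:
        # eventName is INSERT, MODIFY, or REMOVE
        if record["eventName"] == "INSERT":
            new_image = record["dynamodb"].get("NewImage", {})
            # e.g., send a push notification or copy the item to another table here
            print("New item:", new_image)
    return {"processed": len(event["Records"])}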
[Q] Scaling
How can the solution architect get rid of this hot key problem?
DynamoDB Accelerator (DAX)
Enables fast in-memory performance in DynamoDB.
For read-heavy and rapidly growing workloads, DAX can reduce operational costs by delivering increased throughput without over-provisioning read capacity units.
[Q] Global table
A global table provides endpoints in multiple regions around the world while maintaining DynamoDB performance.
In addition to read/write capacity, users are charged cross-region replication data transfer fees.
The strongly consistent read option cannot be used across regions in global tables.
On-demand backup
DynamoDB can perform hundreds of TB backups without
impacting performance
What is Lambda?
A mechanism for executing program code without launching a server. It can be used to implement simple application processes.
Integration with API Gateway ✓ You will be asked how to integrate Lambda with API Gateway to run a Lambda function based on an API call.
Cooperation with RDS ✓ You will be asked how to configure RDS Proxy for use.
[Q] Characteristics of Lambda
1) C#
2) .NET
3) Go
4) PHP
5) C++
Lambda Features
The Lambda function can use many programming languages, and the execution environment is managed on the AWS side.
Lambda is a typical managed service and the execution infrastructure is managed
entirely by AWS.
Easy to implement event-driven applications called Lambda functions in
conjunction with AWS services
Support for Java, Go, PowerShell, Node.js, C#, Python and Ruby runtimes
A Lambda function is composed of:
• Code - The function code and its dependencies. For scripting languages, you can edit the function code in the built-in editor. If the language is not supported by the editor, upload a deployment package. If the deployment package exceeds 50 MB, upload it via S3.
• Runtime - The Lambda runtime for each language used to execute the function.
• Handler - The method to be executed at runtime when the function is called.
https://docs.aws.amazon.com/ja_jp/lambda/latest/dg/configuration-console.html
https://docs.aws.amazon.com/ja_jp/lambda/latest/dg/gettingstarted-limits.html
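To make the Code/Runtime/Handler split concrete, here is a minimal Python function. With the Runtime set to a Python runtime and the Handler set to lambda_function.lambda_handler (the console default file and function names for Python), Lambda calls this method on each invocation.

# lambda_function.py -- the "Code"
import json

# The "Handler" (lambda_function.lambda_handler) is the method the
# chosen "Runtime" executes for every invocation.
def lambda_handler(event, context):
    return {
        "statusCode": 200,
        "body": json.dumps({"received": event}),
    }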
Lambda Billing
Lambda is charged by the number of requests and the
duration of the code's execution.
As a solution architect, you are using AWS Lambda to implement a batch job workload. This Lambda function fetches data from Amazon S3, processes it, and then stores the results in DynamoDB. However, when the Lambda function runs, it fails with an error after 15 minutes.
The default function timeout is 3 seconds, and the maximum allowed value is 900 seconds (15 minutes). When the timeout is reached, the function is stopped.
The default maximum number of concurrent executions of a function is 100, and the maximum is 1,000 (it can be increased to hundreds of thousands by asking AWS).
The amount of memory available for the execution of the function ranges from 128 MB to 3,008 MB.
The storage capacity of the /tmp directory is 512 MB.
Up to 5 Lambda layers can be configured per function.
https://docs.aws.amazon.com/ja_jp/lambda/latest/dg/configuration-console.html
https://docs.aws.amazon.com/ja_jp/lambda/latest/dg/gettingstarted-limits.html
How does Lambda work?
You can easily use Lambda from web and mobile applications via APIs or HTTP requests.
Implementation of Lambda: Blueprint
A collection of sample code that can be used when coding a Lambda function.
Design a use case with Lambda → Find sample code (a Blueprint) → Modify the sample code to create a function
[Q] Lambda processing timing.
Among the following options, choose the best solution to handle Kinesis and
S3 asynchronously (choose two)
The solution architect has written code for an AWS Lambda function that, when executed, stores streaming data in an ElastiCache cluster. Since the ElastiCache cluster is located in a VPC in the same account, the Lambda function needs to be configured to access resources in the VPC.
1) VPC subnet ID
2) VPC security group ID
3) VPC ARN
4) VPC logical ID
5) VPC route table ID
VPC access
Access AWS resources in the VPC without going over the Internet.
Accessing resources in the VPC: specify the subnet IDs and security group IDs in the function's VPC settings; Lambda creates an ENI and connects through it. The ENI is dynamically assigned an IP address from the specified subnet via DHCP.
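A hedged boto3 sketch of attaching an existing function to a VPC; the function name, subnet IDs, and security group ID are placeholders.

import boto3

lambda_client = boto3.client("lambda")

# Lambda creates ENIs in these subnets and attaches the security group,
# so the function can reach resources (e.g., ElastiCache) inside the VPC.
lambda_client.update_function_configuration(
    FunctionName="my-function",                               # hypothetical
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],  # placeholders
        "SecurityGroupIds": ["sg-0123456789abcdef0"],         # placeholder
    },
)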
Choose the method you need to improve this Lambda function processing.
1) Lambda@Edge
2) Lambda Layer
3) Invocation
4) API Gateway cache
The Lambda layer
You can define common components shared between Lambda functions as Lambda layers and reference them from each function (up to 5 layers per function).
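A sketch of publishing a shared component as a layer and attaching it to a function; the layer name, function name, and zip path are hypothetical.

import boto3

lambda_client = boto3.client("lambda")

# Publish common code (e.g., a shared helper package) as a layer version.
with open("common-layer.zip", "rb") as f:   # hypothetical package
    layer = lambda_client.publish_layer_version(
        LayerName="common-helpers",
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.9"],
    )

# Attach the layer to a function (a function can use up to 5 layers).
lambda_client.update_function_configuration(
    FunctionName="my-function",             # hypothetical
    Layers=[layer["LayerVersionArn"]],
)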
[Q] Configure Lambda
Automakers are developing a data processing application workload on AWS that takes sensor data from devices installed in vehicles, processes the data, and stores it in DynamoDB. The application needs to return a notification to the user that the data has been successfully stored. This event processing must be automated.
1) The sensor data is taken into an Amazon SQS FIFO queue, processed by
the Lambda function, and then written to a DynamoDB table.
2) Sensor data is imported into Kinesis Data Streams, processed by the
Lambda function, and then written to a DynamoDB table.
3) The sensor data is pulled into an Amazon SQS standard queue, processed
by the Lambda function, and then written to a DynamoDB table.
4) Perform SNS notifications based on data storage by DynamoDB streams.
5) Another Lambda function performs notifications based on the data
storage by the DynamoDB stream.
Lambda Use Cases
Combining SQS and Lambda to create a processing
program that stores IoT sensor data in DynamoDB
Lambda integration
Lambda can be triggered by a variety of services.
Amazon S3
Amazon Kinesis
Amazon DynamoDB Streams
Amazon Cognito(Sync)
Amazon SNS
Amazon SQS
Alexa Skills Kit
Amazon SWF
[Q] Integration with the API Gateway
[Diagram: scope of development: web application → API Gateway → Lambda → DynamoDB.]
Lambda Mobile App
Mobile integration is easy; for example, mobile photo management through Lambda:
• Authentication via Cognito
• Photo registration in S3 triggers Lambda
• Lambda records the registered photo in DynamoDB
[Q] Lambda@Edge
A major news site uses a CloudFront web distribution to serve static content to users around the world. Because some URIs have no corresponding HTML file, requests such as browser reloads can result in errors. In such cases, the error pages (e.g., 403/404) must be redirected to index.html to avoid this problem.
Reference: https://aws.amazon.com/jp/lambda/edge/
Lambda@Edge
Integrates CloudFront with Lambda capabilities to run code in locations close to users around the world.
See: https://aws.amazon.com/jp/cloudfront/features/?nc=sn&loc=2
Lambda@Edge
The Lambda function associated with a CloudFront event is executed at the edge location and returns the execution result. Functions can be attached to four events: viewer request, origin request, origin response, and viewer response.
[Q] Connect to RDS
What is Route 53?
DNS is a mechanism for converting an easy-to-use URL (e.g., https://www.yahoo.co.jp/) into an IP address (e.g., 196.10.0.1) for systems on the Internet.
Route 53 is an authoritative DNS server provided by AWS, named Route 53 because DNS operates on port 53.
It checks DNS records, a table that links IP addresses to URLs, and routes requests accordingly.
Apply Route 53 to on-premises ✓ You will be asked how to apply name resolution to on-premises environments using Route 53.
Route 53
Route 53 is a service that makes it easy to use the features of an authoritative DNS server in managed form.
Set up a domain → Create a hosted zone with the same name as the domain → Create records → Set a routing policy
[Q] Hosted zone
Select the correct features of a public hosted zone. (Please select two.)
1) The same hosted zone can be used by VPCs in multiple regions as long as the VPCs are mutually accessible.
2) It is possible to route a domain in a private subnet.
3) A container that manages DNS domain records that are publicly available on the Internet.
4) Defines how to route traffic to a DNS domain on the Internet.
Hosted zone
A container that holds information about how to route traffic for a domain (example.com) and its subdomains (sub.example.com).
You are a solution architect and you are building a web application on AWS.
You want to use the example.com domain name for this configuration, and
you need to configure it for a Route53 record.
Which of the following record types is not supported by Amazon Route 53?
1) MX
2) AAAA
3) CNAME
4) DNSSEC
Record type
Create DNS records and set various records to configure
the routing method
You are a solution architect and you are building a web application on AWS.
This application uses IPv4 communication only. You have deployed the
application hosted in an Auto Scaling group on an EC2 instance with an ALB
in place that evenly distributes incoming traffic. We would like to use the
example.com domain name for this configuration.
Which record type should I use to configure DNS names for ALB on Route
53? (Please select two.)
1) AAAA records
2) A records
3) CNAME records
4) An alias record set of type "AAAA"
5) An alias record set of type "A"
Alias record
Use AWS-specific alias records when associating AWS
resources such as CloudFront and ELB with a domain.
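A boto3 sketch of creating an alias "A" record for an ALB; the hosted zone ID, the ALB's canonical hosted zone ID, and the DNS name are placeholders (each ALB has its own canonical hosted zone ID, distinct from your domain's zone).

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",          # placeholder: your example.com hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com",
                "Type": "A",           # alias A record pointing at the ALB
                "AliasTarget": {
                    "HostedZoneId": "Z2ALBEXAMPLE",  # placeholder: the ALB's canonical zone ID
                    "DNSName": "my-alb-123456.ap-northeast-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)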
You are a solution architect and you are building a web application on AWS.
The application utilizes multiple EC2 instances behind the ELBs for increased
redundancy. For this application, you need to use Route53 to configure it to
minimize the amount of communication latency that occurs.
1) Select the ELB health check type and configure Route 53.
2) Select the EC2 health check type and configure Route 53.
3) Create a CNAME record in Amazon Route 53 that points to an ALB
endpoint
4) Enable Amazon Route53 health checks and configure routing policies.
Failover configuration
A failover configuration is a redundant primary/secondary configuration that utilizes Route 53's health-checking capabilities.
• Routes to destinations based on health checks
• Configurable across regions
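A sketch of primary/secondary failover records tied to a Route 53 health check; the hosted zone ID, health check ID, record name, and IP addresses are placeholders.

import boto3

route53 = boto3.client("route53")

def failover_record(role, ip, health_check_id=None):
    # Build one failover record; the PRIMARY record carries the health check.
    record = {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": role.lower(),
        "Failover": role,              # "PRIMARY" or "SECONDARY"
        "TTL": 60,
        "ResourceRecords": [{"Value": ip}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",          # placeholder
    ChangeBatch={"Changes": [
        failover_record("PRIMARY", "198.51.100.10", "hc-placeholder-id"),
        failover_record("SECONDARY", "198.51.100.20"),
    ]},
)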
[Q] Failover configuration
An enterprise is building an application using two EC2 instances. As a solution architect, you are configuring DNS routing across the two EC2 instances so that traffic to a failed instance can be avoided. When no failure is occurring, you plan to use both instances actively.
Which of the following options would allow you to enforce the geographic
restrictions? (Select two.)
You can restrict content to only those locations where you have the right to distribute it, by specifying geographic regions and setting distribution limits.
You can localize content distribution, for example, by changing content based on region.
You can use endpoints in specific regions to improve performance locally.
[Q] Traffic Flow
A DNS resolver queries the DNS servers it knows to find out the IP address for a name (name resolution). In other words, it resolves the correspondence between domain names and addresses.
Recursive DNS resolvers re-query the domain when records change. By keeping that information in a cache, the resolver can keep track of domain information without having to resolve the name every time.
Recursive DNS resolvers can reduce the number of calls that need to be made to Route 53.
[Q] Apply Route53 to on-premise
A company runs an application with ELB and Route 53 configured on two
EC2 instances. The application is published using the domain example.com.
As a solution architect, you are trying to use Route53 to apply it to your on-
premises environment. You need to resolve DNS queries for resources in
your on-premises network from AWS VPCs.
Reference: https://aws.amazon.com/jp/blogs/aws/new-amazon-route-53-リゾルバー-for-hybrid-clouds/
The scope of Security Group
What is a security group?
A firewall feature for configuring the accessibility of traffic to an instance.
[Diagram: SSH access to an EC2 instance is permitted on port 22.]
The scope of Security Group questions
Frequent questions extracted from 1625 questions are as follows
Security Group Features: You will be asked about the characteristics of security groups and how they control traffic.
Default Settings: You will be asked about the settings of the default security group.
SSH Connection: You will be asked about traffic control settings for SSH connections, a basic configuration of an EC2 instance.
ELB Security Group Settings: You will be asked how to set up security groups when configuring ELB and EC2 instances.
Company A is building a web application consisting of a web server and a database server.
The web server and the database server are configured using different EC2 instances
located in different subnets. For security purposes, the database server should only allow
traffic from the web server.
[Diagram: VPC 10.0.0.0/16 with a web server EC2 instance in a public subnet (10.0.5.0/24) and a DB server EC2 instance in a private subnet (10.0.10.0/24), each protected by its own security group.]
Network ACLs
Network ACLs control traffic to subnets
[Diagram: the same VPC, with network ACLs controlling traffic at the subnet boundary in addition to the security groups on each instance.]
Security groups and network ACLs
Both security groups and network ACLs must be configured to control traffic.
Which is the default setting for the default security group? (Select two)
Reference: https://docs.aws.amazon.com/ja_jp/vpc/latest/userguide/VPC_SecurityGroups.html#DefaultSecurityGroup
[Q] SSH connection
A major e-commerce site uses an on-demand EC2 instance to build its web server. This
EC2 instance must be placed in a public subnet and ensure that it is only accessible from
a specific IP address (130.178.101.46), via an SSH connection.
1) Select the SSH protocol UDP and port 22 and set the source to 130.178.101.46/32.
2) Select the SSH protocol UDP and port 22 and set the source to 130.178.101.46/0.
3) Select the SSH protocol TCP and port 22 and set the source 130.178.101.46/32.
4) Select the SSH protocol TCP and port 22 and set the source 130.178.101.46/0.
SSH Connection
When connecting to an EC2 instance via SSH, configure the security group to allow the TCP protocol on port 22.
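A boto3 sketch of the corresponding ingress rule; the security group ID is a placeholder, and the source address matches the scenario above.

import boto3

ec2 = boto3.client("ec2")

# Allow SSH (TCP port 22) only from the single address 130.178.101.46.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",    # placeholder
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "130.178.101.46/32",
                      "Description": "SSH from a specific IP"}],
    }],
)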
[Q] The use of custom sources
A major e-commerce site uses on-demand EC2 instances to build a web server. You need
to place the web server on a public subnet and the database server on a private subnet
and configure traffic control in the security group. As a solution architect, you are in the
process of setting up the inbound rules for the security group.
A major e-commerce site uses on-demand EC2 instances to build its web server. The web
server is placed in a public subnet and the database server is placed in a private subnet to
distribute traffic with ELBs. Set up security groups on the ELB and the web server to
allow public access to the ELB from the Internet and restrict the web server to access
only from the ELB.
Which method of setting up ELB security groups can meet this requirement? (Please
select two)
1) Add an inbound rule to allow HTTP / HTTPS to the ELB security group and specify
"0.0.0.0/0" as the source.
2) Add outbound rules to allow all TCP to ELB security groups, and specify an Internet
gateway as the source
3) Add an outbound rule to allow HTTP / HTTPS to the ELB security group and specify
the web server security group to the source.
4) Add inbound rules to allow HTTP / HTTPS to the ELB security group and specify the
web server security group to the source.
5) Add an outbound rule to allow HTTP / HTTPS to the ELB security group and specify
the source 0.0.0.0/0.
ELB security group settings
When configuring an ELB in front of an EC2 instance serving as a web server, restrict access to the web server so that it accepts traffic only from the ELB.
A major e-commerce site uses multiple on-demand EC2 instances to build its web server.
The web server is deployed in a public subnet and the RDS PostgreSQL database is
deployed in a private subnet and security groups are set up. Traffic from the Internet is
distributed to the EC2 instance by the ALB, allowing only HTTPS access from the Internet,
and the SSL configuration is configured to terminate at the ALB.
How do you need to configure the security group to increase safety? (Select three)
1) Set the RDS security group to an inbound rule from the security group for the EC2
instance on port 5432.
2) Set the security group of the EC2 instance to the inbound rule from the security group
of ALB on port 80.
3) Set the inbound rule from source 0.0.0.0/0 on port 443 and port 80 in the ALB
security group.
4) For the RDS security group, set inbound rules from the security group of the EC2
instance on port 443 and port 80.
5) Set the security group of the EC2 instance to the inbound rule from the security group
of ALB on port 443.
6) Set the inbound rule from source 0.0.0.0/0 on port 443 in the ALB security group.
RDS security group settings
Set the protocol and port number used by the database in the RDS security group.
ALB Security Group: Allow traffic over HTTPS or HTTP from the Internet, specifying 0.0.0.0/0 as the source.
EC2 Security Group: Allow inbound access from the ALB via HTTPS or HTTP, specifying the ALB security group as the source.
RDS Security Group: Allow port 5432 for communication with PostgreSQL from the EC2 instances serving as web servers, specifying the EC2 instances' security group as the source.
RDS security group settings
Typical database engine port numbers for RDS are as follows: MySQL/Aurora MySQL: 3306, PostgreSQL: 5432, Oracle: 1521, SQL Server: 1433.
Kinesis Streams
Scope of Kinesis questions
The results of analyzing the range of questions from 1625 are as follows
Selecting Kinesis: Based on the scenario, you will be asked to choose Kinesis as the data processing service that meets the requirements.
Cooperation between Kinesis and other services: You will be asked about the services that can be integrated with Kinesis Data Firehose and Kinesis Data Streams.
Kinesis application: You will be asked how to build applications using Kinesis.
Kinesis Scaling: Based on the scenario, you will be asked how to scale Kinesis.
[Q] Select Kinesis
A large media company wanted to generate advertising revenue through news media and
built a media site using AWS. In order to provide ads to users in real time, the service is
enabled by capturing access behavior data and real-time data processing. You need a
mechanism to capture clickstream events from the source and feed the data stream to the
downstream application simultaneously.
Amazon Kinesis Data Streams
The streaming process is divided into shards and distributed to
allow faster processing
[Diagram: incoming streams are divided across Shard1, Shard2, and Shard3 for distributed data processing.]
Amazon Kinesis Data Streams
Kinesis Data Streams is made up of the following elements. Kinesis improves performance with shards.
Shard: The basic unit of throughput for Amazon Kinesis Data Streams. One shard provides 1 MB/s of data input and 2 MB/s of data output, and supports up to 1,000 PUT records per second.
Data BLOB: The data to be processed, added to the data stream by the data producer. The maximum size is 1 megabyte (MB).
Partition key: Used to separate records and route them to different shards of the data stream.
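A producer-side sketch; the stream name is hypothetical, and the partition key determines which shard each record lands on.

import json
import boto3

kinesis = boto3.client("kinesis")

# Each record (data blob up to 1 MB) is routed to a shard by its partition key.
kinesis.put_record(
    StreamName="sensor-stream",        # hypothetical
    Data=json.dumps({"sensor_id": "s-42", "temp": 21.5}).encode("utf-8"),
    PartitionKey="s-42",               # same key -> same shard, preserving order
)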
[Diagram: producers such as Fluentd, Apache Storm, and AWS IoT send data into Kinesis.]
[Q] Characteristics of Kinesis
IoT solution makers are building a sensor-based traffic survey system on AWS. IoT sensor data is collected using Amazon Kinesis Data Streams and delivered to S3 buckets via Amazon Kinesis Data Firehose roughly 24 hours after collection. However, upon examination of the data, it appears that the S3 bucket does not receive all the data being sent to the Kinesis stream. There seems to be no problem with sending data from the sensor devices.
1) The data retention period in Amazon Kinesis Data Streams is the default setting.
2) The delivery settings from Amazon Kinesis Data Streams have been disabled.
3) Amazon Kinesis Data Firehose has enabled a processing setting that eliminates some
insufficient data.
4) The data retention period in Amazon Kinesis Data Firehose is the default setting.
Kinesis Features
Kinesis allows you to adjust the amount of data, data retention
period, batch interval and encryption
Able to specify batch size or batch interval, such as setting the batch interval
to 60 seconds
Kinesis data stream data retention period defaults to 24 hours, with a maximum
value of 168 hours
Delivery streams are automatically scaled, one shard can capture up to 1 MB of
data per second (including partition keys), and writes can capture 1,000 records
per second
Automatically encrypts uploaded data to the destination by specifying a KMS
encryption key
Metrics can be viewed from the console or Amazon CloudWatch
Charged only for the amount of data sent and the conversion of data formats
[Q] Using Kinesis Data Firehose
IoT solution manufacturers are building sensor-based traffic survey systems on AWS,
which collects IoT sensor data and uses those data for traffic volume prediction models.
The speed of the data is 1 GB per minute and it is necessary to narrow down the data to
include only the most relevant attributes and store them in S3 to build a predictive model.
1) Get the data into Kinesis Data Streams, use the Lambda function to narrow down the
data output range, and then store it in S3.
2) Get the data into Kinesis Data Firehose, narrow down the data output range with
Firehose's filtering capabilities, and then save it to S3.
3) Get the data into Kinesis Data Streams, use Kinesis Data Analytics to narrow down
the data output range, and then store it in S3.
4) Get the data into Kinesis Data Firehose, use the Lambda function to narrow down the
data output range, and then save it to S3.
Amazon Kinesis Data Firehose
A service for distributing stream data to various databases; it also
functions as an ETL in conjunction with Lambda
[Diagram: IoT data → Kinesis Data Firehose → S3, Redshift, or Elasticsearch.]
[Q] Basic configuration of Kinesis
1) Collect data per device in Amazon Kinesis Data Streams using each device's partition
key, and use Amazon Kinesis Data Firehose to store the data in Amazon S3
2) Specify a shard for each device, collect data on a per-device basis with Amazon
Kinesis Data Streams, and use Amazon Kinesis Data Firehose to store the data in
Amazon S3
3) Collect data per device in Amazon SQS using one standard queue for each device, and
use the Lambda function to store the data in Amazon S3
4) Collect data per device in Amazon SQS using one FIFO queue for each device, and
use the Lambda function to store the data in Amazon S3
Amazon Kinesis Data Firehose
Kinesis Data Streams collects data in real time and Kinesis Data
Firehose transforms and stores the data.
[Diagram: IoT data → Kinesis Data Streams → Kinesis Data Firehose → S3, Redshift, or Elasticsearch.]
Amazon Kinesis Data Analytics
Kinesis Data Analytics Provides real-time analysis of stream data
with standard SQL queries
[Diagram: streaming sources (Kinesis Data Streams or Kinesis Data Firehose) → Kinesis Data Analytics → streaming destinations (Kinesis Data Streams or Kinesis Data Firehose).]
[Q] Cooperation between Kinesis and other services
The automaker intends to deploy a MaaS platform that captures real-time location data from its latest models. The company's solution architects plan to use Kinesis Data Firehose to deliver the streaming data to downstream analytics targets.
Which of the following targets are not supported as Kinesis Data Firehose destinations?
1) Amazon EMR
2) Amazon RedShift
3) S3
4) Amazon Elasticsearch
Cooperation between Kinesis and other services
Kinesis works in conjunction with other services to process or store data.
The IoT venture operates a store analytics IoT solution. IoT data, such as store sensors, is
sent to Kinesis Data Streams and processed for delivery by Kinesis Data Firehose. The
solution architect has configured the Kinesis Agent to send IoT data to the Firehose
delivery stream, but the data does not seem to be reaching the Firehose as expected.
Fluent plugin for Amazon Kinesis: An OSS Fluentd output plugin for sending events to Kinesis Streams and Kinesis Firehose.
Amazon Kinesis Data Generator (KDG): Easily send test data to Kinesis Streams or Kinesis Firehose using the Kinesis Data Generator.
Reference: https://docs.aws.amazon.com/ja_jp/streams/latest/dev/enhanced-consumers.html
The scope of EFS
What is EFS?
File storage that can be shared across multiple instances.
EFS Configuration: You will be asked how to configure EFS with multiple EC2 instances.
As a solution architect, you are building an application that makes use of multiple Linux
EC2 instances. This application requires access to a POSIX-compliant shared network file
system.
1) EBS
2) S3
3) Amazon FSx for Windows
4) EFS
EFS (Elastic File System)
Shared storage accessible from multiple EC2 instances across multiple AZs.
EFS Settings
EFS is set up by creating a file system and then the directories to be shared within it.
[Q] EFS Settings
1) Create a subdirectory for each user and grant users read/write/execution permissions.
Then mount the subdirectory to the user's home directory.
2) Configure a mount target in each AZ where each EC2 instance is deployed and set up
access to EFS.
3) Configure a mount target in the region where each EC2 instance is deployed and set
up access to EFS
4) Configure a mount target on the VPC where each EC2 instance is deployed and set up
access to EFS.
5) Create a separate EFS file system for each user and grant each user
read/write/execution rights to the root directory. Then mount the file system in the
user's home directory.
Mount Target
To access EFS, it is necessary to create the mount target to which the EC2 instance connects.
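A sketch of creating a file system and one mount target per AZ's subnet; the creation token, subnet IDs, and security group ID are placeholders. Instances then mount the file system through the mount target in their own AZ.

import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(CreationToken="shared-app-data")   # hypothetical token
fs_id = fs["FileSystemId"]
# In practice, wait until the file system becomes "available" before continuing.

# One mount target per AZ: specify the subnet (and a security group
# that allows NFS, TCP port 2049) in each AZ where EC2 instances run.
for subnet_id in ["subnet-0abc1234", "subnet-0def5678"]:       # placeholders
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],               # placeholder
    )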
Large IT companies are building web applications on AWS. It requires the use of multiple
EC2 instances to share data. This application needs to be resilient in case of a failure.
1) Configure an Auto Scaling group across multiple AZs by setting up an ELB target
group for an EC2 instance. Store the data in EFS and mount the target on each
instance.
2) Configure ELB target groups for EC2 instances. Store the data in EFS and mount the
target on each instance.
3) Configure an Auto Scaling group across multiple AZs by setting up an ELB target
group for an EC2 instance. Store the data in the EBS and mount the target on each
instance.
4) Configure target groups of ELBs for EC2 instances. Store the data in the EBS and
mount the target on each instance.
EFS Configuration
EFS can be accessed from multiple EC2 instances in multiple AZs.
Data analytics companies are using AWS to implement big data analytics workloads. Large
amounts of data processing are required using a fleet of thousands of EC2 instances
across multiple AZs. The data utilizes a shared storage layer that can be mounted and
accessed by all EC2 instances simultaneously.
amazon-efs-utils provides the EFS mount helper.
EFS offers two throughput modes: Bursting Throughput and Provisioned Throughput.
A large IT company is building a web application on AWS. The application has multiple EC2
instances deployed in multiple AZs that store data in shared storage. This data is a file
that is used for internal directory management and is only controlled by the EC2 instances.
The files are expected to be used frequently at first, but then accessed less frequently.
Reference: https://aws.amazon.com/jp/efs/features/infrequent-access/
The Scope of API Gateway
What is the API Gateway?
A service that lets you call the functions and data of other services through request/response communication via an API over the Internet.
The Scope of the API Gateway questions
The results of analyzing the range of questions from 1625 are as follows
API Gateway Features: You will be asked about features such as the API types used by the API Gateway.
API Gateway cost: You will be asked about the factors that generate charges on the API Gateway.
API Gateway Configuration: You will be asked how to configure API Gateway with other AWS services.
API Gateway Authentication Method: You will be asked about authentication methods, such as setting permissions on the API Gateway.
Cache usage: Based on the scenario, you will be asked how to configure the caching features, TTL, etc., used to improve the performance of the API Gateway.
As a solution architect, you are building a mobile application on AWS. The mobile
application fetches and uses data from several application services to input data into the
user interface. The implementation requires an architectural configuration that separates
the client interface from the federated application.
1) AWS Lambda
2) AWS Device Farm
3) API Gateway
4) AWS Transit Gateway
Use case
Integrate with external applications using the API Gateway as an
entry point
[Diagram: web apps → API Gateway (with cache) → Lambda and other AWS services.]
[Q] API Gateway Features
New web applications with AWS are developed based on a microservices architecture. As a
solution architect, you decide to use the API Gateway to flexibly link applications with
various functions.
Select a reason why you should use the API Gateway when building a microservice.
(Select two.)
RESTful API: Charged only for the API calls received and the amount of data transferred.
Which configuration would be the least expensive and most available architecture?
1) Create APIs by API Gateway to collaborate between microservices and use Lambda
for service back-end processing.
2) Build a website hosted by EC2 instance and then work with other EC2 instances for
back-end processing, via SQS.
3) Set the application load balancer as the target group for the Auto scaling group with
up to two instances, and distribute the traffic.
4) Create an API by API Gateway and use it to collaborate with microservices, and use
an EC2 instance for service back-end processing.
API Gateway Configuration
Make EC2-based web applications serverless.
[Diagram: before: CloudFront → EC2 instances → RDS with automatic failover.]
[Diagram: after: CloudFront and S3 → API Gateway → Lambda in private subnets (10.0.6.0/24, 10.0.11.0/24) → RDS with automatic failover.]
[Q] API Gateway authentication method
Choose the best configuration method to implement privilege management for the API
gateway.
1) Use an authentication key to set access permissions to the API Gateway for different
users.
2) Use IAM policy to set access permissions to the API Gateway for different users.
3) Use an API key to set access permissions to the API Gateway for different users.
4) Use access keys to set access permissions to the API Gateway for different users.
The API Gateway authentication method
Various API Gateway access authentication methods are available.
Resource Policy (REST API only): Configure the permission or denial of actions on API Gateway resources by defining a resource policy in JSON format.
IAM Authentication: Create an IAM policy that sets API access rights, attach the policy to IAM users or IAM roles to control access to the API, and enable IAM authentication on the API methods.
Reference: https://aws.amazon.com/jp/api-gateway/
[Q] Throttling
Server-side throttling limits: Limit requests across all clients. This prevents the backend services from being overwhelmed by too many total requests.
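A sketch of setting server-side throttling limits on a deployed stage; the API ID and stage name are placeholders, and the "/*/*/throttling/..." patch paths apply the limits to all methods on the stage.

import boto3

apigw = boto3.client("apigateway")

# Apply a stage-wide rate limit and burst limit to all methods (/*/*).
apigw.update_stage(
    restApiId="abc123",                # placeholder
    stageName="prod",                  # placeholder
    patchOperations=[
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "100"},
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "200"},
    ],
)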
Reference: https://docs.aws.amazon.com/ja_jp/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.html
The Scope of Questions for Aurora
The results of analyzing the range of questions from 1625 are as follows
Aurora Features: You will be asked to select Aurora as a suitable DB and about features such as the benefits of choosing Aurora.
Failover configuration: You will be asked how to configure tier settings for an Aurora failover configuration.
Global Configuration: You will be asked about the architectural configuration that allows for global deployment of Aurora read replicas.
As a solution architect, you are building a database system on AWS. Your existing on-
premises database uses MySQL 5.6 to manage customer data for your business system.
Recently, the volume of data processing has increased and we have decided to build a
high-performance database on AWS. You are considering whether Amazon Aurora is the
best choice for you.
[Diagram: Aurora benefits: roughly 5x performance (comparison of Aurora on r3.8xlarge instances with Sysbench), resilience/self-healing, and scalability, with Aurora DB instances deployed across multiple subnets.]
A major news media company runs a web application for news distribution on AWS. The
application runs on an Amazon EC2 instance fleet in the Auto Scaling group behind ALB.
We use the Aurora database for the Data layer. The performance of this application is
slowing down due to the ever-increasing number of read requests.
Please select a solution that can meet this requirement. (Select two.)
DB Cluster Configuration
Aurora constitutes a DB cluster of a writer (master) and read replicas together.
[Diagram: a DB cluster spanning two AZs in private subnets (10.0.1.0/24, 10.0.3.0/24); EC2 instances behind an ELB send write processing to the cluster (writer) endpoint and read processing to the reader endpoint.]
A major news media company hosts a web application for news distribution on AWS. It
uses the Aurora database for its database. The company is currently deploying four read
replicas in multiple AZs to increase read throughput and to serve as failover targets. The
replicas are configured as follows
1) Tier 1 (8TB)
2) Tier 1 (16TB)
3) Tier 15 (16TB)
4) Tier 15 ( 32TB)
Failover configuration
Read replicas are promoted to master in ascending order of tier number; within the same tier, the replica with the largest size is promoted first.
Snapshot migration
[Diagram: an RDS MySQL snapshot can be restored into Aurora with MySQL compatibility.]
Aurora Multimaster
Write performance is also scalable by building multiple master
databases
[Diagram: a conventional single-writer cluster compared with a multi-master cluster (available since 2017) running multiple Aurora writers across AZs.]
A large IT company is building a web application on AWS. The number of users in this
application is expected to skyrocket, and at this stage it is not yet possible to determine
how much performance is required. It may also experience a severe drop in demand and
has to deal with erratic processing loads. However, the drawback is that we cannot predict
this in advance.
Company B is building a web application on AWS. With users all over the world and a high
number of global requests, performance is slowing down despite the use of read replicas in
Amazon RDS for MySQL. There seems to be a limit to the performance of the basic
performance of RDS.
To remedy this problem, choose the most cost-effective and high-performance solution.
1) Newly created Amazon RDS global read replicas to enable fast local reads with low
latency in each region.
2) Migrate to Amazon Aurora global database and enable fast local reads with low latency
in each region.
3) Moving to Amazon Aurora serverless, enabling fast local reads with low latency in each
region.
4) Migrate Amazon DynamoDB global tables and enable fast local reads with low latency
in each region.
Aurora Global DB
High-performance read replicas that can be built in other regions.
[Q] Endpoint selection
To achieve this requirement, which is the optimal configuration method for Aurora
endpoints? (Please select two)
In-memory DB
Processes data in memory, which is faster than on disk.
What is ElastiCache?
ElastiCache is an in-memory database that keeps a cache in memory and performs high-speed processing.
[Diagram: the application caches RDS query results in ElastiCache and fetches data from the cache on subsequent accesses.]
The scope of ElastiCache questions
The results of analyzing the range of questions from 1625 are as follows
Selecting ElastiCache: Based on the scenario, you will be asked to select an ElastiCache configuration that matches the database requirements.
You are in charge of game development at a game company, building a database to be used in a game under development. The game needs the ability to make items appear in response to recorded user behavior data; real-time, high-speed processing of user behavior data is required for this.
1) ElastiCache
2) Redshift
3) Aurora
4) RDS
ElastiCache
A service that makes it easy to build, manage, and scale a distributed in-memory cache DB.
[Use Case]
Session Management
IoT processing and stream analysis
Metadata Storage
Social media data processing/analysis
Pub/Sub processing
DB Cache Processing
Use case
Consider using cache when you want to speed up data access
[Use Case]
User matching process
Recommendations Processing
Fast display of image data
Ranking using user data in a game event
[Q] ElastiCache type
As a systems developer for a game company, you are building a database to be used in a
game under development. The database needs to use a multi-threaded in-memory cache
layer to improve the performance of repeated queries.
As a solution architect, you are building a system for fast data processing. You need real-time processing of session data, and you decide that ElastiCache is the best choice for this data acceleration, but you have to compare whether to choose Memcached or Redis.
Redis:
• Complex data types are needed.
• The in-memory data sets need to be sorted or ranked.
• For read-heavy loads, you need to replicate to read replicas.
• Pub/sub functionality is needed.
• Automatic failover is necessary.
• Keystore persistence is necessary.
• The ability to back up and restore is necessary.
• Support for multiple databases is needed.
Memcached:
• Simple data types are needed.
• There is a need to run large nodes with multiple cores or threads.
• There is a need for scale-out and scale-in capabilities to add or remove nodes as demand increases or decreases.
• There is a need to cache objects such as database results.
• Keystore persistence is not necessary.
• No need for backup and restore features.
• Multiple databases are not available.
Use case
The pub/sub feature of ElastiCache Redis can be used for chat applications.
[Diagram: chat app servers exchange messages through an ElastiCache pub/sub channel.]
ElastiCache with Redis
In addition, you can take advantage of location queries, operations with Lua scripts, and the pub/sub model.
You are performing a load test on an application hosted on AWS. While testing an Amazon
RDS MySQL DB instance, you find that there are cases where the CPU usage reaches
100% and the application becomes unresponsive. The application seems to be doing a lot of
reading.
1) Utilize the queuing process with SQS to reduce the concentration of access to RDS
2) Deploying Auto Scaling to RDS Instances to Increase Scalability Under Load
3) DynamoDB (DAX cluster) in front of the RDS to introduce the caching process
4) Putting ElastiCache in front of the RDS and introducing the caching process
ElastiCache Configuration
Standard configuration method to identify the data to be cached
and use it in conjunction with RDS
As a solution architect, you are building a data acceleration mechanism. Session data
processing requires real-time processing, and the caching uses an ElastiCache cluster. In
order to comply with the company's security policy, the data used needs to be protected.
Reference: https://docs.aws.amazon.com/ja_jp/AmazonElastiCache/latest/red-ug/encryption.html
ElastiCache Security
ElastiCache Redis can encrypt data in transit and at rest, and authenticate with Redis AUTH.
Direct Connect
The scope of the question
The results of analyzing the range of questions from 1625 are as follows
Direct Connect Configuration: You will be asked how to configure the various gateways and interfaces used for a Direct Connect configuration.
Connection between regions: You will be asked how to configure a Direct Connect gateway when connecting between regions.
Site-to-site VPN connection: You will be asked how to configure a gateway when setting up a site-to-site VPN.
VPN CloudHub: You will be asked about the use of VPN CloudHub as a way of configuring multiple VPNs together.
Direct Connect Redundancy: You will be asked how to configure Direct Connect to ensure redundancy.
[Q] Select connection method
1) Direct Connect
2) VPC Peering
3) VPN
4) Snowball
5) AWS Storage Gateway
On-premises connection with VPC
VPN Connection
The company you work for is currently migrating its infrastructure and applications to the
AWS cloud. As a solution architect, you have implemented a connection configuration that
uses Direct Connect to connect to your on-premises environment.
1) Install the customer gateway device in your on-premises environment and connect to
the Direct Connect device.
2) Install a virtual private gateway on the Amazon VPC side to connect to a Direct
Connect device.
3) Install a virtual private gateway in your on-premises environment to connect to Direct
Connect devices.
4) Install a customer gateway device on the Amazon VPC side and connect to the Direct
Connect device.
5) Set up a private virtual interface in your on-premises environment to connect to your
Direct Connect device.
Direct Connect Configuration
Connect a leased line connection to the AWS environment by
physically connecting your on-premises environment to a Direct
Connect location
[Diagram: VPC (virtual private gateway) → Direct Connect location (Direct Connect devices, private VIF over a 10G or 1G line) → customer gateway devices and in-house equipment in the on-premises environment.]
[Q] Inter-region connections
A global consulting firm with offices around the world is building an AWS-based document
sharing system and plans to share country knowledge. To implement this mechanism, you
need to implement high-bandwidth, low-latency connections to multiple VPCs in multiple
regions within the same account. Each VPC has a unique CIDR range.
Which is the best solution design that can meet this requirement? (Select two)
1) Create a Direct Connect gateway and create a customer virtual interface for each
region.
2) Configure a Direct Connect connection from the office to the AWS region.
3) Configure a VPN connection from the office to the AWS region.
4) Implementing a Direct Connect connection to each AWS region
5) Create a Direct Connect gateway and create a private virtual interface to each region.
Inter-regional Connections
A Direct Connect gateway connects multiple VPCs in multiple regions belonging to the same account.
[Diagram: on-premises → Direct Connect gateway → virtual private gateways of VPCs in each region.]
Reference: https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-gateways-intro.html
[Q] Site-to-site VPN connection
A Retail Company is planning to migrate to the AWS cloud using the Tokyo region. In order
to do so, you need a configuration that connects your office to the AWS Cloud. As a
solution architect, you have set up an AWS managed IPSec VPN connection between your
remote on-premises network and a VPC over the Internet.
Which of the following represents the correct configuration of an IPSec VPN connection?
1) Create a virtual private gateway on the AWS side of the VPN and create a customer
gateway on the on-premises side of the VPN.
2) Create a virtual private gateway on the on-premises side of the VPN and create a
customer gateway on the AWS side of the VPN.
3) Create a virtual customer gateway on the AWS side of the VPN and create a customer
gateway on the on-premises side of the VPN.
4) Create a virtual customer gateway on the on-premises side of the VPN and create a
customer gateway on the AWS side of the VPN.
Site-to-site VPN connection
Connect the virtual private gateway on the AWS side with a
customer gateway device in an on-premises environment.
Reference: https://docs.aws.amazon.com/ja_jp/vpn/latest/s2svpn/how_it_works.html
Site-to-site VPN connection
The AWS-side virtual private gateway can also be a Transit
Gateway
Reference: https://docs.aws.amazon.com/ja_jp/vpn/latest/s2svpn/how_it_works.html
[Q] VPN CloudHub
Media companies use Direct Connect in the Tokyo region to connect their offices to the
AWS cloud. Its branches in Singapore and Sydney use a separate region and connect to
the VPC using a site-to-site VPN connection. The company is looking for a solution to
help its branches send and receive data to and from each other and the head office.
1) VPN CloudHub
2) VPC Customer Gateway
3) VPC Endpoints
4) AWS Transit Gateway
VPN CloudHub
Multiple site-to-site VPN connections can be combined to provide
secure site-to-site communication.
Reference: https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-vpn-cloudhub.html
[Q] Direct Connect Redundancy
Your company uses Direct Connect to connect your office to the AWS cloud. However,
the problem is that only one Direct Connect connection is configured and redundancy is
not ensured. As a solution architect, you have been asked to increase the redundancy of
the Direct Connect connection. A cost-optimized configuration is required.
CloudFormation Features: You will be asked how to leverage stack sets and deploy CloudFormation across multiple accounts.
CloudFormation Template snippets: You will be asked about each element of a CloudFormation template snippet.
CloudFormation template description: You will be asked how to describe a CloudFormation template.
[Q] Select CloudFormation
The company has created guidelines for standardizing infrastructure configuration on AWS.
As a solution architect, you have a mechanism in place to share the deployment of EC2
instances, VPCs, and other configurations that follow the guidelines when using AWS
resources.
1) CloudFormation
2) AWS Elastic Beanstalk
3) AWS Systems Manager
4) CodeDeploy
CloudFormation
CloudFormation can be leveraged when you want to deploy the
infrastructure environment on AWS accurately and efficiently
Use case
You want to streamline the launch of AWS resources.
You want to standardize the infrastructure used in development, testing, and production environments.
You want to use exactly the same resources and provisioning settings every time.
You want to manage the environment configuration like software.
CloudFormation
An automated environment configuration service that describes all infrastructure resources in AWS as a template and deploys them.
Change set: When updating a stack, a change set summarizes the proposed changes so you can check the impact on resources before deploying. There are two ways to change a stack: direct update and update using a change set.
Stack set: The ability to create stacks across multiple AWS accounts and multiple regions.
As a solution architect, you have created a CloudFormation template and are responsible
for standardizing the configuration of the environment, and it is necessary to output some
of the settings so that they can be referenced by other templates when creating the AWS
stack.
1) Value
2) Outputs
3) Properties
4) Mappings
[Q] CloudFormation template description
As a solution architect, you are responsible for creating CloudFormation templates and
standardizing the configuration content.
Above is omitted....
Mappings:
RegionMap:
ap-northeast-1:
hvm: "ami-0792756bc9edf3e63"
ap-southeast-1:
hvm: "ami-0162da29310cc18f6"
Description: Create EC2 Instance
Resources:
MyEC2Instance:
Type: AWS::EC2::Instance
Properties:
ImageId: !FindInMap [RegionMap, !Ref 'AWS::Region', hvm]
InstanceType: !Ref InstanceType
Tags:
- Key: Name
Value: myInstance
Template
Template Version
Template Description
AWSTemplateFormatVersion: '2010-09-09'
Description: Additional information about the template
Metadata: Describe keypairs, user names, etc. as parameters required at
Parameters: runtime
Mappings: Describe the mapping of keys and values for specifying conditional
Conditions: parameter values
Transform: Describe the condition name and conditional content at the time of
Resources: resource creation
FirstVPC: Describe the SAM version of the serverless app
Type: AWS::EC2::VPC Describe the actual resources to be generated on the stack and
Properties: their configuration properties
CidrBlock: 10.0.0.0.0/16
Describe the settings, such as resource names and type properties
Tags:
- Key: Name
Describe the dependencies between resources
Value: FirstVPC
AttachGateway:
Type: AWS::EC2::VPCGatewayAttachment Built-in functions: Ref and FindInMap, etc.
Properties:
DependOn:
VpcId: ! Ref FirstVPC
InternetGatewayId: !Ref InternetGateway
Describe the values and destinations to be output after building the
Outputs: stack
The scope of ECS
What is Amazon ECS?
A service for building and running Docker container applications
on AWS
Reference: https://aws.amazon.com/jp/blogs/containers/developers-guide-to-using-amazon-efs-with-amazon-ecs-and-aws-fargate-part-1/
The scope of ECS questions
The results of analyzing 1,625 exam questions are as follows:
ECS Cost: You will be asked about the elements that affect cost when using ECS.
ECS Authorization Settings: You will be asked how to set permissions for ECS tasks to use other AWS resources.
The scope of ECS questions
The results of analyzing 1,625 exam questions are as follows:
ALB and ECS Configuration: You will be asked how to set up an ECS container when configuring it with an ALB.
ECS Configuration: You will be asked about the basic ECS configuration when running multiple jobs using ECS.
[Q] ECS selection
Company X is planning to build an application on AWS and has a CI/CD environment that
uses Docker to build Docker applications. Company X does not use open-source container
orchestration mechanisms and will not be using them in the future.
Which service should Company X select to run its Docker containers?
1) Amazon ECS
2) Amazon EKS
3) Amazon ECR
4) Amazon Fargate
Amazon Container Services
There are four related container services:
Amazon ECR: the place where the images for the container engine are stored
Amazon ECS / Amazon EKS: services to manage containers
AWS Fargate: the environment in which the container is run
Elastic Container Service (ECS)
A scalable, high-performance container orchestration service that
supports Docker containers
Integrates with ECR and the Docker CLI to simplify the workflow from
development to production
You are a solution architect building a new application composed of multiple
components in Docker containers. You need a way to run the containers without having to
choose instance types, manage cluster scheduling, and so on.
Which service should you use?
EC2 launch type:
- Launches EC2 instances with ECS
- Capable of detailed server-level control over the infrastructure running the container applications
- Manages server clusters and schedules container placement on the servers
- A wide range of customization options for the server cluster are available

Fargate launch type:
- A dedicated computing engine that can be used with ECS and EKS
- No need to manage a cluster of EC2 instances
- No need to select an instance type, manage cluster scheduling, or optimize cluster utilization
- Define the app requirements for CPU, memory, etc., and the necessary scaling and infrastructure will be managed by Fargate
- Launches tens of thousands of containers in seconds
[Q] ECS Cost
Which of the following correctly describes how the ECS launch types are charged?
1) The Fargate launch type is charged based on the vCPU and memory resources used.
2) The EC2 launch type is charged based on the vCPU and memory resources used.
3) Both the Fargate launch type and the EC2 launch type are charged based on the
vCPU and memory resources used.
4) The EC2 launch type is charged based on the EC2 instances and EBS volumes used.
ECS Cost
ECS incurs either EC2 instance usage fees or Fargate usage fees.
EC2 launch type: you are only charged for the AWS resources used (EC2 instances, EBS volumes, etc.).
Fargate launch type: you are charged for the vCPU and memory resources required by your containerized applications; a one-minute minimum fee applies.
[Q] Task roles
You are running containers with different access requirements on an ECS cluster.
How should a different set of permissions be granted to each container?
1) Define a separate task role for the container within the same task definition.
2) Set the IAM role on an EC2 instance set to ECS.
3) Launch another container cluster with ECS and define the task.
4) Create a separate task definition with a different task role for each container.
Task Definition
When running a Docker container in ECS, you define a task to run
the container.
The task definition determines the resource usage of the task,
information about the Docker containers to run in the task, and so on.
Reference: https://docs.aws.amazon.com/ja_jp/AmazonECS/latest/developerguide/Welcome.html
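As a rough illustration, a minimal Fargate task definition can be declared in CloudFormation as in the sketch below; the family name and sizing are hypothetical, and the container uses a public sample image:

Resources:
  SampleTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: video-processor            # hypothetical family name
      RequiresCompatibilities: [FARGATE]
      NetworkMode: awsvpc                # required for the Fargate launch type
      Cpu: '512'                         # task-level resource usage
      Memory: '1024'
      ContainerDefinitions:
        - Name: app
          Image: amazon/amazon-ecs-sample   # public sample image
          Essential: true
          PortMappings:
            - ContainerPort: 80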
[Q] ECS Authorization Settings
As a solutions architect, you are implementing an application utilizing the EC2 launch type
of Amazon ECS. This application requires permissions to write data to Amazon DynamoDB.
How should you grant these permissions?
1) Create an IAM policy with access permissions to DynamoDB and attach it to the
container instance.
2) Create an IAM policy with access permissions to DynamoDB, assign it to an IAM role
and set that IAM role to ECS.
3) Create an IAM policy with access permissions to DynamoDB and assign it to the ECR
cluster.
4) Create an IAM policy with access permissions to DynamoDB and assign it to a task
using the taskRoleArn parameter.
ECS Authorization Settings
The permissions necessary for task execution need to be assigned
to each task by an IAM policy.
Assign an IAM policy to tasks through an IAM task role
and grant access to resources on a per-task basis.
Reference: https://docs.aws.amazon.com/ja_jp/AmazonECS/latest/developerguide/Welcome.html
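A sketch of this pattern with hypothetical names: an IAM role trusted by ECS tasks is attached to the task definition through the TaskRoleArn property:

Resources:
  TaskRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal: { Service: ecs-tasks.amazonaws.com }
            Action: sts:AssumeRole
      Policies:
        - PolicyName: dynamodb-write          # hypothetical policy name
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action: dynamodb:PutItem
                Resource: '*'                 # narrow to the table ARN in practice
  AppTask:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: dynamodb-writer                 # hypothetical family name
      TaskRoleArn: !GetAtt TaskRole.Arn       # per-task permissions
      ContainerDefinitions:
        - Name: app
          Image: amazon/amazon-ecs-sample     # public sample image
          Memory: 512
          Essential: true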
[Q] ALB and ECS configuration
You are a solution architect building a web application using Docker. You have
implemented an application that executes tasks separated into multiple container clusters
in a task definition, and you need to control traffic to them with a single ALB.
Which features help you to achieve this with minimal effort? (Select two)
With path-based routing in ALB, you can route to a target group according to the URL,
so you can implement path routing for containers by specifying containers for ECS
path routing.
Multiple ports can be registered as ALB targets when dividing EC2 instances launched
by ECS into target groups: register a dynamic port number in the ECS task definition
and distribute traffic according to the port number.
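For illustration, a path-based ALB listener rule might look like the CloudFormation sketch below; the listener and target group are assumed to exist elsewhere and are passed in as hypothetical parameters:

Parameters:
  ListenerArn:        # ARN of an existing ALB listener (hypothetical)
    Type: String
  ApiTargetGroupArn:  # target group that the ECS service registers into (hypothetical)
    Type: String
Resources:
  ApiPathRule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      ListenerArn: !Ref ListenerArn
      Priority: 10
      Conditions:
        - Field: path-pattern
          PathPatternConfig:
            Values: [/api/*]          # route /api/* to the API containers
      Actions:
        - Type: forward
          TargetGroupArn: !Ref ApiTargetGroupArn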
[Q] ECS Configuration
Your company is building a Docker application. You want to grant additional
permissions to Docker application containers on an ECS cluster that you have already
deployed, and you want to add new tasks. These will be used to handle both very important
data processing jobs and batch jobs that can be performed at any time.
Which of the following is the most cost-effective option to meet this requirement?
1) Set up a reserved EC2 instance for critical data processing jobs and a spot EC2
instance for non-critical jobs.
2) Separate processing containers to run by assigning separate task definitions for
important data processing jobs and non-critical jobs.
3) In conjunction with Amazon SQS, set a priority queue for important data processing
jobs, and set a standard queue for less important jobs.
4) In conjunction with Lambda, set priority jobs for important data processing jobs and
set standard jobs for less important jobs.
ECS Configuration
Define the job content to be executed in ECS task definitions and
configure multiple task processing: for example, define the
mission-critical job and the batch job as separate task definitions.
Reference: https://docs.aws.amazon.com/ja_jp/AmazonECS/latest/developerguide/launch_types.html
The scope of Redshift
What is Redshift?
A managed service that allows you to build a data warehouse on AWS.
BI tools can work directly with Redshift data.
Reference: https://aws.amazon.com/jp/blogs/database/using-amazon-redshift-for-fast-analytical-reports/
The scope of Redshift questions
The results of analyzing 1,625 exam questions are as follows:
Selecting Redshift: Based on the scenario, you will be asked to choose Redshift as a suitable database.
Redshift Configuration: You will be asked how to configure Redshift using AZs and regions.
Traffic Control: You will be asked how to enable traffic control and monitoring for traffic via a VPC.
Reserved nodes: You will be asked how to cost-optimize the use of Redshift nodes.
[Q] Select Redshift
Your company has implemented two data processing operations utilizing a relational
database. The first process runs a complex query in a data warehouse that takes hours to
complete. The second process involves performing customer data analysis and visualizing
it in a dashboard.
How should these two processes be implemented?
1) Perform data warehouse processing using Redshift and perform customer data
analysis using RDS.
2) Implement both operational processes using Redshift.
3) Implement both operational processes using RDS.
4) Implement both operational processes using Aurora.
Redshift
A fast, scalable, and cost-effective managed DWH / data lake analytics
service
[Diagram] Structured, semi-structured, and unstructured data are loaded through ETL
(AWS Glue) into Redshift (OLAP) and a data lake on S3 (with S3 Glacier and Amazon EMR),
feeding data marts, BI/data visualization, AI/machine learning analysis, and
chatbots/smart speakers.
Instance Type
Choose from two instance types depending on the data size and
operations.
A Redshift cluster can be configured with 1 to 32 nodes, and managed
storage (Redshift file format) uses S3 for persistent data storage.
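As a rough sketch, a multi-node cluster can be declared in CloudFormation as follows; the node type, node count, and credentials are hypothetical:

Resources:
  AnalyticsCluster:
    Type: AWS::Redshift::Cluster
    Properties:
      ClusterType: multi-node
      NodeType: ra3.xlplus          # RA3 nodes use managed storage backed by S3
      NumberOfNodes: 4              # between 1 and 32 depending on the workload
      DBName: analytics             # hypothetical database name
      MasterUsername: admin
      MasterUserPassword: '{{resolve:ssm-secure:RedshiftMasterPassword}}'  # avoid plaintext secrets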
[Q] Redshift configuration
A data analytics company uses a Redshift cluster to house its data warehouse on AWS.
The company has a disaster recovery plan in place to ensure business continuity. As a
solution architect, you are required to implement a solution to increase the resiliency of
Redshift in the event of a region outage.
[Diagram] ELB and EC2 instances across two AZs, with automatic failover between Redshift clusters.
Automatic Workload Management: Machine learning automatically prioritizes query execution when multiple query executions are set up in workload management.
Recommendations for best usage: Automatically analyzes cluster performance and more, and makes recommendations for optimization and cost reduction.
[Q] Traffic control
Which of the following is the most suitable solution to meet the requirement?
1) Set routes to VPC endpoints in the network ACLs of Amazon Redshift's installed
subnets.
2) Set the route to VPC endpoints in the gateway route table used by Amazon Redshift.
3) Use Amazon Redshift's enhanced VPC routing.
4) Set a security group on Amazon Redshift to allow routes to VPC endpoints.
Traffic Control
You can enable enhanced VPC routing to force traffic between the
cluster and data repositories such as S3 through your VPC.
[Diagram] Redshift routes traffic to S3 through VPC endpoints inside the VPC.
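On the cluster resource this is a single property; a minimal sketch with hypothetical values:

Resources:
  LockedDownCluster:
    Type: AWS::Redshift::Cluster
    Properties:
      ClusterType: single-node
      NodeType: ra3.xlplus
      DBName: analytics             # hypothetical database name
      MasterUsername: admin
      MasterUserPassword: '{{resolve:ssm-secure:RedshiftMasterPassword}}'
      EnhancedVpcRouting: true      # force COPY/UNLOAD traffic through the VPC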
[Q] Encryption
An enterprise uses Redshift for online analytical processing (OLAP) applications to handle
complex queries on large data sets. When processing these queries, there is a requirement
to define a way to route the queries to a queue.
Choose the best method that meets this requirement. (Select two.)
[Q] Redshift Spectrum
A big data analytics company that provides IoT solutions is building a data analytics
solution on AWS to analyze vehicle data. Since vehicle data is sent in large volumes, the
data is stored in S3 via Firehose from Kinesis Data Streams. Large and complex queries
need to be run directly against this S3 data.
Which service should be used?
1) Athena
2) S3 Select
3) QuickSight
4) Redshift Spectrum
Redshift Spectrum
Redshift Spectrum enables data analysis directly on user-managed
S3 buckets.
[Diagram] The leader node of the Redshift cluster dispatches Spectrum queries against
user-managed S3 buckets.
Data linkage (To Redshift)
It is important to consolidate the analytics base as a DWH by
moving the data to Redshift.
S3: The most frequently used data integration destination; it is possible to retrieve data from S3 and analyze it in Redshift, or to perform data analysis directly in S3.
Amazon QuickSight: Connects to Redshift and allows you to conduct data visualization.
Amazon Machine Learning: Redshift data is available as training data for machine learning.
RDS: Redshift cannot directly integrate with RDS, but PostgreSQL's features can be used to integrate data with RDS.
[Q] Reserved nodes
Company B runs a data warehouse on AWS using an Amazon Redshift cluster with six
nodes. They use this system to perform various business analyses on a daily basis, and
this system will be in use for the next year.
Choose an instance configuration that allows you to reduce the cost of this Redshift
configuration.
[Diagram] A Redshift cluster with a leader node and managed storage (Redshift file format).
The scope of SNS
What is SNS?
Amazon SNS is a fully managed push notification service that enables
asynchronous communication with other services.
(1) Send a message to a topic; (2) the topic pushes the contents of
the communication; (3) the receiving process handles it.
[Multiprotocol]
-HTTPS
-EMAIL
-SQS
-Mobile push
The scope of SNS questions
The results of analyzing 1,625 exam questions are as follows:
Select SNS: Based on the scenario, you will be asked to select Amazon SNS to achieve the requirements.
SNS configuration: You will be asked how to build solutions using SNS.
[Q] Selection of SNS
1) Amazon SNS
2) Amazon SQS
3) Amazon SES
4) Amazon MQ
Amazon Simple Notification Service (SNS)
Asynchronous communication is realized by the sender publishing to an
SNS topic and the receiver subscribing to it.
[Diagram] Senders publish to topics; each topic pushes messages to its subscribed receiving processes.
SNS and SQS
SNS and SQS have different processing methods, so they can be
used differently depending on the use case: SNS pushes messages to
subscribers, while consumers poll (pull) messages from SQS queues.
A leading manufacturing company is building web applications using EC2 instances. This
application needs the ability to perform back-end processing by notifying other
applications. As a solution architect, you are considering a notification scheme.
Which of the following are the notification protocols supported by Amazon SNS? (Select
three)
1) SSH
2) FTP
3) SMS
4) HTTPS
5) Email
6) MQ
Features of SNS
SNS can be used for loosely coupled architecture by linking with
various AWS services and setting notifications.
Single-publish messages
Message delivery order is not guaranteed
Messages cannot be recalled once published
Retries according to the delivery policy
Message size up to 256KB
Features of SNS
SNS uses HTTP/HTTPS/JSON format messages as follows.
HTTP/HTTPS Headers
HTTP/HTTPS reception registration confirmation in JSON format
HTTP/HTTPS notification in JSON format
Unregistered HTTP/HTTPS reception in JSON format
SetSubscriptionAttributes Delivery Policy in JSON Format
SetTopicAttributes Delivery Policy in JSON format
SNS collaboration
SNS can be used for loosely coupled architecture by linking with
various AWS services and setting notifications.
A leading manufacturing company is building web applications using EC2 instances. This
application needs the ability to perform back-end processing by notifying other
applications. This notification is handled by the Lambda function, which performs the back-
end processing, reaching a peak of about 5,000 requests per second. You have conducted
some tests and found that some of the Lambda functions are not executing.
Which of the following is the most likely cause?
1) Amazon SNS has reached the notification limit and the limit needs to be raised.
2) The limit needs to be raised because Amazon SNS message delivery has exceeded
Lambda's account concurrency quota.
3) Improper IAM policy for linking to Lambda functions from Amazon SNS.
4) We need to authenticate Amazon SNS subscriptions on the Lambda function side.
SNS Configuration
SNS push notifications can fan out to SQS queues and Lambda
functions.
[Diagram] An S3 event notification is published to SNS, which pushes the message to both an SQS queue and a Lambda function.
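A minimal fan-out sketch in CloudFormation, assuming the queue and function already exist and are passed in as hypothetical parameters (the SQS queue additionally needs a queue policy allowing SNS to send messages):

Parameters:
  QueueArn:     # ARN of an existing SQS queue (hypothetical)
    Type: String
  FunctionArn:  # ARN of an existing Lambda function (hypothetical)
    Type: String
Resources:
  FanOutTopic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Protocol: sqs
          Endpoint: !Ref QueueArn
        - Protocol: lambda
          Endpoint: !Ref FunctionArn
  InvokePermission:                 # allow SNS to invoke the function
    Type: AWS::Lambda::Permission
    Properties:
      Action: lambda:InvokeFunction
      FunctionName: !Ref FunctionArn
      Principal: sns.amazonaws.com
      SourceArn: !Ref FanOutTopic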
The scope of AWS Storage
Gateway
What is AWS Storage Gateway?
AWS Storage Gateway can connect and extend the storage of on-
premises environments to Amazon S3.
[Diagram] On-premises data is stored in S3, Glacier, or IA storage classes, and can also be accessed from EC2.
The scope of AWS Storage Gateway questions
The results of analyzing 1,625 exam questions are as follows:
Storage gateway selection: You will be asked to choose how to achieve a hybrid configuration, extending storage in an on-premises environment.
Can be used for big data processing, cloud bursting, and moving data to
AWS storage for system migration
Retain data on S3 for backup, archiving, and disaster planning
Leverage AWS storage easily in an on-premises environment
[Q] Storage gateway type
Which AWS Storage Gateway configuration is best suited to achieve your requirements?
Reference: https://aws.amazon.com/jp/storagegateway/file/
Volume Gateway
Achieves a hybrid configuration with S3 for disk data in on-premises
environments
Enables faster access to data from applications, keeping either a cache of the most
recently accessed data or a copy of the entire volume on-premises
Exposed as block storage via iSCSI
On-premises local disk backups are automatically performed on the AWS side
Updated data is transferred to AWS asynchronously
Data protection and recovery with Amazon EBS snapshots, Storage Gateway volume
clones, and AWS Backup
Reference: https://aws.amazon.com/jp/storagegateway/volume/?nc=sn&loc=2&dn=4
Volume Gateway Type
Choose between two types, depending on whether the primary data
store is on-premises or S3.
Cached Volume Gateway:
• S3 is the primary storage; on-premises storage is extended to S3
• Amazon S3 is used as the primary data storage, while frequently accessed data is kept in the local storage gateway
• Frequently accessed data can be cached in the on-premises environment, enabling low-latency access
The scope of AWS Organizations
[Diagram] The master account (AWS account A) provides consolidated billing and centralized authority management.
The results of analyzing 1,625 exam questions are as follows:
Account Settings: You will be asked how to set up and delete master and member accounts.
Consolidating Billing: You will be asked the benefits of consolidated billing settings.
SCP: You will be asked how the Service Control Policy is set up and its purpose; also, based on the scenario, you will be asked the actual results of SCP settings.
A company has three internal AWS accounts. It was decided that the CIO's office would
take the lead in overseeing IT cost management and operations.
Which service should be used to manage these accounts centrally?
1) AWS Organizations
2) IAM
3) AWS Trusted Advisor
4) AWS Systems Manager
AWS Organizations
AWS Organizations is a managed service that allows large organizations
to centrally manage multiple AWS accounts, including consolidated
billing and policy-based access management.
1) Perform the deletion by master account A, after giving the necessary settings to member account
B that you want to delete from existing Organizations.
2) If member account B accepts the invitation to new Organizations from master account C,
account B is removed from the Organization of account A.
3) If Account A performs the privilege transfer from Account A to the new Organizations and
Account C agrees, Account B is registered to the new Organizations.
4) If Account A performs a privilege transfer from Account A to the new Organizations and Account
C agrees, Account B is removed from Account A's Organization.
5) Invitations to new Organizations are extended from master account C to account B. Once the
acceptance is received from account B, it is added to the member account.
6) The root account of member account B that you want to remove from Organizations performs
the withdrawal process on its own.
Account Settings
Select one account as the master account from among your AWS
accounts.
An account becomes a member account once it approves the
invitation from the master account.
To remove a member account, the account must have the authority to
operate standalone, such as billing processing.
AWS Organizations
The master account manages member accounts in units called OUs
(organizational units).
[Diagram] The administrator operates the master account, which applies organization
policies (service control policies) to OUs under the root.
A company has three internal AWS accounts. Because of the different billing processes in
different departments, they use AWS Organizations to oversee IT cost management and
operations. So, as a solution architect, you're considering whether you should use AWS
Organizations to set up consolidated billing for all three AWS accounts.
Which of the following is correct about the effect of consolidated billing?
1) If each account utilizes S3, a volume discount on S3 costs is applied and cost savings
are reliably possible.
2) If each account utilizes S3, Organizations dedicated price range to S3 costs will be
applied.
3) If each account utilizes S3, the utilization volume of S3 costs will increase.
4) If each account utilizes S3, you may be able to reduce S3 costs.
Select Type
Choose between two methods: consolidated billing only, or full
management of all account features.
A company has three internal AWS accounts. Because of the disparate billing processes in
different departments, they use AWS Organizations to oversee IT cost management and
operations. You use Service Control Policies (SCP) to manage permissions centrally across
all accounts in the organization.
Which of the following statements about the effects of SCPs are correct?
1) An IAM user with a member account set up with access to EC2 by the SCP is granted
EC2 operation rights.
2) SCP affects all users and roles in a set member account, including the root user.
3) SCP affects all users and roles in the configured member accounts other than the root
user.
4) SCP affects the roles linked to the service.
5) SCP does not affect the roles linked to the service.
6) An IAM user with a member account that has been set up with access to EC2 by SCP
is not authorized to operate EC2.
SCP
A policy called an SCP can be used to set permission boundaries for
members within an OU.
The effective permissions are the intersection of the IAM policy and
the SCP: for example, if IAM grants EC2, RDS, and ECS permissions
while the SCP allows EC2 and ECS, the resulting permissions are EC2
and ECS only.
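For illustration, an SCP can be declared with the AWS::Organizations::Policy resource, which must be deployed from the management account; the statement below (with a hypothetical target OU ID) denies everything except EC2 and ECS actions:

Resources:
  AllowOnlyEc2EcsScp:
    Type: AWS::Organizations::Policy
    Properties:
      Name: allow-only-ec2-ecs      # hypothetical policy name
      Type: SERVICE_CONTROL_POLICY
      TargetIds:
        - ou-abcd-11111111          # hypothetical OU ID
      Content:
        Version: '2012-10-17'
        Statement:
          - Effect: Deny            # SCPs set the permission boundary
            NotAction:
              - ec2:*
              - ecs:*
            Resource: '*'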
IAM and AWS Organizations
IAM performs user management within an AWS account, while
Organizations manages multiple AWS accounts themselves.
Company A, a leading news website, uses the AWS Cloud to manage its IT infrastructure.
Since the company uses multiple AWS accounts, it decided to use AWS Organizations to
manage their accounts. The company runs applications that require a high degree of
interoperability and require the sharing of VPCs between member accounts.
Which method should be used to share VPCs?
1) Enable the VPC sharing feature in AWS Organizations to use VPC sharing between
multiple member accounts.
2) The default setting in AWS Organizations allows for VPC sharing between multiple
member accounts.
3) Use VPC sharing between multiple member accounts in conjunction with AWS RAM.
4) Use VPC sharing between multiple member accounts in conjunction with IAM.
Share resources
AWS Resource Access Manager (RAM) enables users to share resources
between AWS Organizations member accounts.
[Diagram] Subnets in a shared VPC (10.0.0.0/16) with an ELB and EC2 instances across two AZs.
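A sketch of sharing a subnet with another member account via AWS RAM; the subnet ID and account ID are hypothetical:

Resources:
  SubnetShare:
    Type: AWS::RAM::ResourceShare
    Properties:
      Name: shared-vpc-subnets          # hypothetical share name
      ResourceArns:
        - !Sub arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:subnet/subnet-0123456789abcdef0  # hypothetical subnet
      Principals:
        - '111122223333'                # hypothetical member account ID
      AllowExternalPrincipals: false    # restrict sharing to the organization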
The scope of multi-AZ configuration questions
The results of analyzing 1,625 exam questions are as follows:
Multi-AZ configuration: You will be asked about multi-AZ configurations using EC2,
redundant configurations with the addition of ELB and Auto Scaling, and multi-AZ
configurations with DB servers and RDS.
An IT company uses AWS to build web applications. The web layer of the application
runs on an EC2 instance and the database layer utilizes Amazon RDS MySQL. Currently,
all resources are deployed in a single availability zone. The development team wants to
improve the availability of the application before it goes live.
Which is the architectural configuration for making this web application redundant? (Please
select two)
1) Deploying web tier EC2 instances to the two AZs behind ELB.
2) Deploying web-tier EC2 instances to the two regions behind ELB.
3) Deploying web-tier EC2 instances to the two VPCs behind the ELB.
4) Deploying an Amazon RDS MySQL database in a multi-AZ configuration.
5) Deploying an Amazon RDS MySQL database in a global database configuration.
Multi-AZ configuration
Basic configuration with web server redundancy in the public subnets
and RDS failover configuration.
[Diagram] In a 10.0.0.0/16 VPC, an ELB distributes traffic to EC2 instances in two AZs;
RDS performs automatic failover between AZs, with backups stored in S3.
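The RDS side of this pattern is a single property; a minimal sketch with hypothetical sizing and credentials:

Resources:
  AppDatabase:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      DBInstanceClass: db.t3.medium   # hypothetical instance class
      AllocatedStorage: '100'
      MultiAZ: true                   # synchronous standby in another AZ with automatic failover
      MasterUsername: admin
      MasterUserPassword: '{{resolve:ssm-secure:DbMasterPassword}}'  # avoid plaintext secrets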
[Q] DB multi-AZ configuration
An IT company is using AWS to build web applications. The web layer of the application
runs on an EC2 instance and the database layer uses Amazon RDS MySQL. Each instance
is placed on a private subnet for security purposes. The web servers are required to
download software patches from the Internet, and the NAT configuration must remain
accessible in the event of a failure.
Which configuration meets these requirements?
1) Create a NAT instance in each Availability Zone. Control traffic to your NAT instance
with ELB.
2) Create a NAT gateway in each Availability Zone. Control traffic to your NAT gateways
with ELB.
3) Create a NAT gateway in each Availability Zone. Configure the route table on each
private subnet so that the instances use the NAT gateway in the same Availability
Zone.
4) Create a NAT gateway in each Availability Zone. Each NAT gateway controls traffic to
your EC2 instances based on health checks.
Multi-AZ configuration
[Diagram] In a 10.0.0.0/16 VPC with public subnets 10.0.0.0/24 and 10.0.1.0/24, a NAT
gateway is placed in each AZ behind the ELB; each private subnet routes through the NAT
gateway in its own AZ, and RDS performs automatic failover between AZs.
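A sketch of one AZ's worth of this pattern; the subnet and route table are hypothetical parameters, and the same resources are repeated per AZ:

Parameters:
  PublicSubnetA:        # public subnet in AZ a (hypothetical)
    Type: AWS::EC2::Subnet::Id
  PrivateRouteTableA:   # route table of the private subnet in AZ a (hypothetical)
    Type: String
Resources:
  NatEipA:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc
  NatGatewayA:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt NatEipA.AllocationId
      SubnetId: !Ref PublicSubnetA
  PrivateDefaultRouteA:           # the private subnet in AZ a uses the NAT gateway in the same AZ
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTableA
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGatewayA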
The scope of Amazon FSx
What is Amazon FSx?
A fully managed storage service that provides industry-standard file
storage.
The scope of Amazon FSx questions
The results of analyzing 1,625 exam questions are as follows:
Selecting Amazon FSx for Windows: You will be asked to choose Amazon FSx for Windows to meet storage requirements.
Selecting Amazon FSx for Lustre: You will be asked to choose Amazon FSx for Lustre to meet storage requirements.
Three types of file storage
In addition to EFS, two other types of FSx file storage are
available, depending on the use case.
A Bank is building a web application on AWS using Microsoft's distributed file system. As a
solution architect, you need to choose the best storage that fits this distributed file
system.
A leading aviation company is building a simulation system on AWS for engine development.
This is a high-performance workflow used to simulate engine performance and failure
prediction. During analysis, "hot data" needs to be processed and stored quickly in a
parallel and distributed fashion. The "cold data" must be kept for reference, so that it can
be quickly accessed for reading and updating at low cost.
Which storage solution meets these requirements?
1) Amazon EMR
2) EFS
3) Amazon FSx for Lustre
4) Amazon FSx for Windows File Server
Amazon FSx for Lustre
Provides ultra-high-performance storage dedicated to distributed and
parallel processing for fast computing processes.
Reference: https://docs.aws.amazon.com/ja_jp/AWSEC2/latest/UserGuide/InstanceStorage.html
The scope of instance store questions
The results of analyzing 1,625 exam questions are as follows:
Selecting Instance Store: Based on the scenario, you will be asked to select an instance store to meet the presented storage requirements.
Instance Store features: Based on the characteristics of the instance store, you will be asked how to set up the storage, and so on.
[Q] Select instance store
As a solution architect, you plan to build a web application by launching an EC2 instance.
There has been a requirement for some EC2 instances to utilize high-performance
ephemeral storage.
Reference: https://docs.aws.amazon.com/crypto/latest/userguide/awscryp-service-kms.html
The scope of KMS questions
The results of analyzing 1,625 exam questions are as follows:
Selecting KMS: You will be asked to select KMS as a means of performing encryption.
CMK Management: You will be asked about the management of CMKs created in KMS.
In EBS encryption, which service does AWS use to protect the volume data being stored?
As a solution architect, you are building a web application using an EC2 instance that
stores data in an S3 bucket. The EBS volumes are encrypted with a unique customer
master key (CMK) to ensure data confidentiality. A member accidentally deleted the CMK
and you are unable to recover user data.
Which of the following is correct?
1) Once the CMK is deleted, it is impossible to recover and you lose access to the data.
2) You can restore CMK by contacting AWS support.
3) You can restore CMK from the root account user.
4) Since CMK deletion is subject to a waiting period, you can cancel the scheduled
deletion of the CMK and recover the key.
AWS KMS
KMS uses a CMK and a customer data key to perform encryption.
Customer Data Key (encryption key): the key used for the actual data
encryption; it is generated by KMS and encrypted by the CMK.
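A minimal sketch of creating a CMK with an alias; the alias name is hypothetical, and the key policy is trimmed to the account root. PendingWindowInDays is the waiting period during which a scheduled deletion can still be cancelled:

Resources:
  AppKey:
    Type: AWS::KMS::Key
    Properties:
      Description: CMK for EBS volume encryption
      EnableKeyRotation: true
      PendingWindowInDays: 30        # deletion waiting period (7 to 30 days)
      KeyPolicy:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              AWS: !Sub arn:aws:iam::${AWS::AccountId}:root
            Action: kms:*
            Resource: '*'
  AppKeyAlias:
    Type: AWS::KMS::Alias
    Properties:
      AliasName: alias/app-ebs-key   # hypothetical alias
      TargetKeyId: !Ref AppKey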
A solution architect has developed an encryption solution. In this solution, the data key
must be encrypted using envelope protection before it is written to disk.
[Diagram] (1) The app requests a data key from KMS using the CMK; (2) KMS provides two
data keys: a plaintext data key and an encrypted data key.
The scope of AWS Snow Family
AWS Snow Family
A service that uses a physical storage device to bypass the
Internet and transfer large amounts of data directly to AWS.
Choose the fastest, most feasible, and most cost-effective way to migrate your data.
[Use Case]
Migration/disaster preparedness data migration/data
center consolidation/data migration for content delivery
Reference: https://docs.aws.amazon.com/ja_jp/snowball/latest/ug/receive-device.html
Snowball and Snowball Edge
Snowball Edge adds high-performance computing capabilities to
Snowball, and AWS now recommends Snowball Edge instead of
Snowball.
[Use Case]
Migrate huge amounts of data, such as video libraries, image
repositories, or even entire data centers
The scope of Glacier
Amazon S3 Glacier
Glacier is cheaper storage than S3 for the medium- and long-term
storage used for archiving data.
Data Retrieval: You will be asked to select the data retrieval method to use when retrieving data from Glacier, and how to use provisioned capacity.
Vault locks: You will be asked when the use of vault locks is required, presented with requirements for enhanced compliance.
[Q] Glacier features
As a solution architect, you plan to archive to Amazon Glacier using lifecycle management
using S3. You need to make sure your supervisor understands the resiliency of your data.
Which of the following is the correct description of Amazon Glacier Storage? (Please
select two)
Key Glacier concepts: Archive, Vault, Job.
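A sketch of the lifecycle approach the question describes: objects in an S3 bucket transition to the Glacier storage class after a hypothetical 90 days:

Resources:
  DocumentBucket:
    Type: AWS::S3::Bucket
    Properties:
      LifecycleConfiguration:
        Rules:
          - Id: archive-to-glacier     # hypothetical rule name
            Status: Enabled
            Transitions:
              - StorageClass: GLACIER
                TransitionInDays: 90   # hypothetical retention before archiving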
[Q] Data retrieval
As a solution architect, you are building a solution to manage and store corporate
documents using AWS. Once stored, the data is rarely used, but it is expected to be
retrieved within 10 hours, following the instructions of the administrator if necessary. You
have decided to use Amazon Glacier and are considering its configuration method.
1) Expedited retrieval
2) Standard retrieval
3) Bulk retrieval
4) Vault Lock
Glacier Data Retrieval Type
Depending on the Glacier data retrieval type, the retrieval time and
the fee at the time of retrieval will vary.
Provisioned Capacity: A mechanism that ensures that the retrieval capacity for expedited retrievals is available when needed.
Bulk Retrieval: The least expensive retrieval option; it allows large amounts of data (including petabytes of data) to be retrieved in a day or less at low cost. Typically, bulk retrieval takes 5 to 12 hours.
[Q] Vault locks
1) Use an S3 Glacier vault to store sensitive archival data, and then use a vault lock
policy.
2) Use S3 Glacier Archive to store sensitive archive data, and then use an archive
policy.
3) Use an S3 Glacier vault to store sensitive archival data, and then use a lifecycle
policy.
4) Use S3 Glacier Archive to store sensitive archive data, and then use resource policies.
Access Management
Glacier access management uses different methods depending on
the use case.
Fee structure:
Data Retrieval Fee: Expedited: 0.033 USD/GB; Standard: 0.011 USD/GB; Bulk: 0.00275 USD/GB
Provisioned capacity: 110.00 USD per provisioned capacity unit
1) Automate online data transfer to specific AWS storage services using AWS
Data Pipeline.
2) Automate online data transfer to specific AWS storage services using AWS Snowball
Edge.
3) Automate online data transfer to specific AWS storage services using Amazon DML.
4) Automate online data transfer to specific AWS storage services using AWS DataSync.
AWS DataSync
AWS DataSync is a service used to migrate storage data to S3 or
EFS
Reference: https://docs.aws.amazon.com/ja_jp/datasync/latest/userguide/how-datasync-works.html
DR Configuration
A financial services company has created a disaster recovery plan in its business
continuity plan. As a solution architect, you want to make sure that a scaled-down version
of a fully functioning environment is always running in the AWS cloud to minimize
recovery time in the event of a disaster.
Which DR method should you choose?
1) Warm Standby
2) Backup & Restore
3) Pilot Light
4) Multi-site
DR Configuration
Solutions for disaster recovery offer various methods depending on
the application:
Hot Standby: A method of standing by in an operational state for instantaneous switching to a standby server in the event of a problem with the production server machine.
Backup & Restore: A method that allows for periodic backups to be performed and restored in the event of a failure of the production equipment.
The scope of Security: You will learn about security-related services such as AWS WAF
and AWS Shield, which appear on the Associate exam.
The scope of User Management: You will learn about the features and differences between
the services used for user management, such as AWS Directory Service and Cognito.
The scope of cost optimization: You will learn about budget management services and
cost estimation tools.
The scope of
AWS Security
The scope of Security questions
The results of analyzing 1,625 exam questions are as follows:
Selecting Security Services: Based on the scenario, you will be asked to select the best security service to meet the requirements for responding to a security incident.
ACM: You will be asked how to create and manage certificates for setting up SSL/TLS communications.
Your company uses AWS to run a web application. While monitoring your web traffic
volume, you discover that unauthorized access from a specific IP address is attempting
to obtain passwords and other information. The attacker appears to be executing more
than 10 requests per second, which is unusual.
Which service should be used to block this access?
[Diagram] AWS WAF is placed in front of the web app (EC2) in the region to block the unauthorized access.
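A rate-based rule is the natural fit for this scenario; the minimal WAFv2 sketch below blocks source IPs exceeding a request-rate limit (the limit counts requests per IP over a 5-minute window, so roughly 10 requests per second corresponds to a limit of about 3,000). Associating the web ACL with an ALB requires a separate AWS::WAFv2::WebACLAssociation:

Resources:
  RateLimitAcl:
    Type: AWS::WAFv2::WebACL
    Properties:
      Name: rate-limit-acl            # hypothetical name
      Scope: REGIONAL                 # use CLOUDFRONT for CloudFront distributions
      DefaultAction:
        Allow: {}
      VisibilityConfig:
        SampledRequestsEnabled: true
        CloudWatchMetricsEnabled: true
        MetricName: rate-limit-acl
      Rules:
        - Name: block-flooding-ips
          Priority: 0
          Action:
            Block: {}
          Statement:
            RateBasedStatement:
              Limit: 3000             # requests per 5 minutes per source IP
              AggregateKeyType: IP
          VisibilityConfig:
            SampledRequestsEnabled: true
            CloudWatchMetricsEnabled: true
            MetricName: block-flooding-ips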
Amazon GuardDuty
A service that uses machine learning and other techniques to detect
security threats to AWS infrastructure and apps.
[Diagram] GuardDuty analyzes VPC flow logs, DNS logs, and CloudTrail events to detect
malicious access, and rates each threat as High, Medium, or Low severity.
Amazon Inspector
An Amazon EC2-hosted diagnostic service that deploys an agent to
diagnose platform vulnerabilities.
AWS Shield (provided at edge locations) comes in two tiers: Standard and Advanced.
Company A has a web application with an Auto Scaling group and an ALB configured on
EC2 instances. Encryption of data communication needs to be achieved, and an SSL
certificate needs to be loaded into the ALB.
Select the AWS service you should use to centrally manage your SSL certificates.
Select the AWS service you should use to centrally manage your SSL certificates.
https://aws.amazon.com/jp/blogs/security/how-to-help-achieve-mobile-app-transport-
security-compliance-by-using-amazon-cloudfront-and-aws-certificate-manager/
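For illustration, a DNS-validated ACM certificate attached to an HTTPS listener might look like the sketch below; the domain, load balancer, and target group are hypothetical parameters:

Parameters:
  LoadBalancerArn:   # existing ALB (hypothetical)
    Type: String
  TargetGroupArn:    # existing target group (hypothetical)
    Type: String
Resources:
  SiteCertificate:
    Type: AWS::CertificateManager::Certificate
    Properties:
      DomainName: www.example.com   # hypothetical domain
      ValidationMethod: DNS
  HttpsListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref LoadBalancerArn
      Port: 443
      Protocol: HTTPS
      Certificates:
        - CertificateArn: !Ref SiteCertificate
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref TargetGroupArn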
[Q] CloudHSM
A financial institution uses AWS-based business systems. It is currently looking to expand
globally with branches in key regions in Europe. The European region has its own security
standards, and as a financial institution, it is imperative to encrypt data according to
those standards.
Which solution meets this requirement?
1) Using AWS KMS, create a CMK in a custom keystore and store the key material in
AWS CloudHSM.
2) Using AWS KMS, create a CMK that will be AWS-managed and the key materials will
be stored in AWS CloudHSM.
3) Using AWS KMS, create a CMK in a custom keystore and store the key material in
AWS KMS.
4) Using AWS KMS, create a CMK that will be AWS-managed and the key materials will
be stored in AWS KMS.
CloudHSM
CloudHSM is a service that protects encryption keys by means of a
dedicated, tamper-resistant hardware module (HSM).
It is used to meet strict encryption requirements.
The scope of
Networks and data
The scope of Network and Data questions
The results of analyzing 1,625 exam questions are as follows:
Selecting EMR: Based on the scenario, you will be asked to select Amazon EMR for data analysis requirements.
Selecting Migration Services: Based on the scenario, you will be asked to select the migration service.
An enterprise has a web application running on multiple EC2 instances. You are building a
mechanism to store and analyze the application log files. Considering the number of
servers, the log files are going to be large, and you need a large data processing capacity
to analyze these logs.
Which of the following services would best meet this requirement? (Select two)
1) Save the application log files to Glacier and process them by Redshift.
2) Save the application log files to S3 and process them by Amazon EMR.
3) Save the application log files in S3 and process them by S3 Select.
4) Save the application log files in DynamoDB and process them by Lambda function.
Amazon EMR
EMR can set up big data frameworks such as Apache Spark,
Apache Hive, and Presto to process and analyze large amounts of
data
https://aws.amazon.com/jp/blogs/news/optimizing-downstream-data-processing-with-
amazon-kinesis-data-firehose-and-amazon-emr-running-apache-spark/
[Q] Select Athena
A company stores the execution data of a web application running on multiple EC2
instances in S3, and it is necessary to analyze the data stored in S3 using standard
queries.
1) Import the data into an Amazon Redshift cluster and query the data.
2) Query the data using Amazon Athena against the S3 bucket object.
3) Query the data using Amazon EMR against the S3 bucket object.
4) Query the data using S3 Select against the S3 bucket object.
Amazon Athena
An interactive query service that allows you to easily analyze data
directly in Amazon S3.
[Diagram] Data in S3 is queried by Athena and visualized with BI tools.
A major retailer has decided to migrate its on-premises database system to AWS. They
need to migrate their on-premises MongoDB database to Amazon DynamoDB. Because
the data to be migrated is so large, they need a tool to help them migrate to AWS.
Please select a solution that can meet this requirement. (Select two.)
AWS Database Migration Service: A database migration tool that enables you to migrate your database to AWS quickly and securely.
AWS Server Migration Service: An agentless service that makes migrating thousands of on-premises workloads to AWS easier and faster than ever before.
[Diagram] DMS replicates on-premises databases and databases on EC2 to RDS, Redshift, or databases on EC2.
AWS Server Migration Service
An agentless service that makes migrating thousands of on-premises
workload servers to AWS easier and faster than ever before.
A major gaming company is building an action game on AWS that requires real-time
communication. The users of this game are from all over the world, and accelerated
communication is very important for usability. The game data is communicated via its own
application using the UDP protocol.
Choose the best solution to improve the communication performance of this game data.
Reference: https://aws.amazon.com/jp/blogs/networking-and-content-delivery/accessing-private-application-load-
balancers-and-instances-through-aws-global-accelerator/
[Q] Select Transit Gateway
A consulting firm uses multiple AWS accounts to run its applications. Because there are
multiple AWS accounts with multiple VPCs, the firm decided to establish routing between
all private subnets. The architecture should be simple and allow for transitive routing.
Reference: https://aws.amazon.com/jp/transit-gateway/
The scope of Environmental
Automation
The scope of Environmental Automation questions
The results of analyzing 1,625 exam questions are as follows:
Selecting environmental automation services: Based on the scenario, you will be asked to select services related to automation, sharing of infrastructure configuration, and deployment of web applications.
[Diagram] The main automation services: CodeCommit, CodeBuild, CodeDeploy, and
CodePipeline for CI/CD; CloudFormation for infrastructure as code; CloudWatch for
monitoring; ECS/EKS (Docker compatible) for containers; Elastic Beanstalk for web app
deployment; and OpsWorks for infrastructure configuration and management.
Environmental Automation Services
AWS provides a wealth of environmental automation services to
enable DevOps.
CodeCommit/CodeBuild/CodeDeploy/CodePipeline: These services are used for code
management and deployment automation. CodePipeline can also automate
CloudFormation and ECS deployments.
As a solution architect, you've built a Ruby-based web application using Cloud9. You want
to upload the code to the AWS cloud and have it deployed to AWS. The requirement is to
automate the deployment process and version control as much as possible.
Which service should you use?
1) OpsWorks
2) CloudFormation
3) Amazon ECS
4) AWS Elastic Beanstalk
AWS Elastic Beanstalk
An automated service for the construction and deployment of
standard configurations of web applications.
Web server environment:
- ELB + Auto Scaling allows you to run scalable web applications by coding a scalable configuration and version
- Can be used for Docker container applications; multiple containers can run in the environment using ECS
Worker environment:
- Enables scalable batch processing work with SQS + Auto Scaling
- Creates infrastructure for regular task execution, e.g., a backup process that runs every day at 1 a.m.
- Runs a web application in a worker host, allowing time-consuming workloads to be processed in the background
[Q] Select OpsWorks
Your company is thinking of setting up a CI/CD environment with AWS. Currently, we use
Chef for configuration management of our on-premises servers. Therefore, you are
required to use a service that enables you to use your existing Chef cookbook in AWS.
Which of the following services offers a fully managed use of Chef Cookbook?
[Diagram] OpsWorks stacks consisting of ELB, app, and custom backend layers, each with Auto Scaling.
Code Series
A set of services that automate the committing, building, and
deployment of development code on a Git-based repository.
CodePipeline covers the flow from coding (source management) through build and
testing to deployment.
CodeCommit: A managed source management service to securely host Git-based repositories.
CodeBuild: A fully managed build service that allows you to compile source code, run tests, and create deployable software packages.
CodeDeploy: An automated service for deployment to development, testing, and production.
Using AWS Managed Microsoft AD: Based on the scenario, you will be asked about the features and configuration methods of AWS Managed Microsoft AD.
AWS Security Token Service (STS): A service that allows users to request temporary limited-privilege credentials for federated users authenticated by IAM users or AD.
Active Directory
A mechanism for authenticating users by username and password.
Windows AD is widely used in user management.
A leading e-commerce site uses Microsoft Active Directory to provide users and groups
with access to resources on its on-premises infrastructure. The company decided to go
for a hybrid configuration with AWS for its IT infrastructure. The company uses SQL
Server-based applications and wants to configure the trust relationship to enable Single
Sign-On (SSO) because it is essential to work with AWS.
As a solution architect, which of the following AWS services would you recommend for this
use case? (Select two)
1) Simple AD
2) AD Connector
3) AWS SSO
4) AWS Managed Microsoft AD
5) Cognito
Select the AWS Directory Service
Create a new directory in AWS, or achieve control using existing
Active Directory authentication.
[Diagram] Simple AD implements authentication management in a VPC (10.0.0.0/16)
across two AZs for EC2 instances and Amazon WorkSpaces.
AD Connector
A service that leverages existing directories and enables access to the
AWS environment.
[Diagram] AD Connector in a VPC (10.0.0.0/16) across two AZs relays authentication
requests from EC2 instances to the domain controller of the on-premises Active Directory.
[Q] Using AWS Managed Microsoft AD
A leading e-commerce site uses Microsoft Active Directory to provide users and groups
access to resources on its on-premises infrastructure. The company has decided to make
its IT infrastructure a hybrid configuration with AWS. It is necessary to access resources
in both environments using on-premises credentials stored in Active Directory.
Which of the following provides the most effective approach to implement this integration?
1) Enable single sign-on between AWS and LDAP using the AWS Single Sign-On (SSO)
service.
2) Enforcing IAM credentials using LDAP credentials and matching IAM roles.
3) Utilize a custom identity broker application.
4) Using IAM policy to reference LDAP identifiers and AWS credentials
Using AWS SSO
A single sign-on (SSO) service that facilitates the centralized
management of SSO access to AWS accounts and applications.
[Diagram] Authenticated IAM users are issued temporary STS credentials.
1) AWS SSO
2) AD Connector
3) Simple AD
4) Amazon Cognito
Cognito
If you want to add user authentication to your application, use
Cognito
Reference: https://docs.aws.amazon.com/ja_jp/cognito/latest/developerguide/amazon-
cognito-integrating-user-pools-with-identity-pools.html
The scope of
Work Management
The scope of Work Management questions
The results of analyzing 1,625 exam questions are as follows:
Amazon MQ Selection: Based on the scenario, you will be asked to choose Amazon MQ for the requirement to utilize queues.
AWS Step Functions VS SQS: Based on the scenario, you will be asked to choose the best solution to create workflows or processes from SQS and Step Functions.
AWS Step Functions VS SWF: Based on the scenario, you will be asked to choose the best solution to create workflows or processes from Step Functions and SWF.
[Q] Select Amazon MQ
An edutech venture plans to migrate its digital learning platform from its on-premises
environment to AWS. The current application provides English learning content to
non-native speakers and processes their learning status to calculate the optimal learning
time. You want to use a RabbitMQ message broker cluster for task management in this
process.
Which service should be used?
1) Amazon MQ
2) Amazon SQS
3) Step Functions
4) SWF
Features of Queue Services
Understand how to use each service in different cases and deal
with potential exam questions.
Reference: https://d1.awsstatic.com/webinars/jp/pdf/services/20190522_AWS-Blackbelt_StepFunctions.pdf
AWS Step Functions
For example, you can implement a mechanism to identify and tag
images using Step Functions.
[Diagram] Start → ExtractImageMetadata → ImageTypeCheck; supported images proceed to
StoreImageMetadata, Rekognition, Thumbnail, and AddRekognizedTags, while unsupported
types go to NotSupportedImageType → End.
Reference: https://d1.awsstatic.com/webinars/jp/pdf/services/20190522_AWS-Blackbelt_StepFunctions.pdf
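For illustration, such a workflow can be declared as a state machine in CloudFormation; the sketch below wires two hypothetical Lambda functions (passed in as parameters) into a minimal choice-based flow:

Parameters:
  ExtractFunctionArn:   # existing Lambda function (hypothetical)
    Type: String
  TagFunctionArn:       # existing Lambda function (hypothetical)
    Type: String
Resources:
  StatesExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal: { Service: states.amazonaws.com }
            Action: sts:AssumeRole
      Policies:
        - PolicyName: invoke-lambda
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action: lambda:InvokeFunction
                Resource: [!Ref ExtractFunctionArn, !Ref TagFunctionArn]
  ImageTaggingStateMachine:
    Type: AWS::StepFunctions::StateMachine
    Properties:
      RoleArn: !GetAtt StatesExecutionRole.Arn
      DefinitionString: !Sub |
        {
          "StartAt": "ExtractImageMetadata",
          "States": {
            "ExtractImageMetadata": {
              "Type": "Task",
              "Resource": "${ExtractFunctionArn}",
              "Next": "ImageTypeCheck"
            },
            "ImageTypeCheck": {
              "Type": "Choice",
              "Choices": [
                { "Variable": "$.format", "StringEquals": "JPEG", "Next": "AddRekognizedTags" }
              ],
              "Default": "NotSupportedImageType"
            },
            "AddRekognizedTags": {
              "Type": "Task",
              "Resource": "${TagFunctionArn}",
              "End": true
            },
            "NotSupportedImageType": {
              "Type": "Fail",
              "Error": "NotSupportedImageType"
            }
          }
        }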
AWS Step Functions Cooperation
Step Functions can concatenate various AWS services to create
workflows.
Calling services:
• AWS Lambda
• Amazon API Gateway
• Amazon EventBridge
• AWS CodePipeline
• AWS IoT Rules Engine
• AWS Step Functions
The Activity feature can connect to other services that are not directly supported.
[Q] AWS Step Functions VS SWF
A large bank is building applications to perform data processing on AWS, and it is
necessary to configure data processes in conjunction with Lambda functions, Amazon EMR,
etc. to achieve a complex process.
Which service should be used to orchestrate this workflow?
1) AWS Batch
2) AWS Step Functions
3) Amazon Simple Workflow Service (SWF)
4) Amazon SQS
Amazon Simple Workflow Service (SWF)
The predecessor of Step Functions; use SWF to create workflows
whose tasks run on instances.
AWS Step Functions VS SWF
It is recommended to use Step Functions, except for some
processes that are only available in SWF.
AWS Budgets: You can set custom budgets that send out alerts when budget thresholds
are exceeded.
See: https://aws.amazon.com/jp/tco-calculator/
AWS Pricing Calculator
New service to conduct individualized forecast cost estimates in
line with business and personal needs
See: https://calculator.aws/#/
CloudWatch Billing Alarms
CloudWatch's billing feature allows you to set alarms on billing
amounts
AWS Budgets
Custom budgets can be set up and fine-tuned to set alarms for
when costs or usage exceed the budgeted amount.
https://aws.amazon.com/jp/aws-cost-management/aws-budgets/
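A sketch of a monthly cost budget with an email alert at 80% of a hypothetical 1,000 USD limit:

Resources:
  MonthlyCostBudget:
    Type: AWS::Budgets::Budget
    Properties:
      Budget:
        BudgetName: monthly-cost-budget   # hypothetical name
        BudgetType: COST
        TimeUnit: MONTHLY
        BudgetLimit:
          Amount: 1000                    # hypothetical monthly limit in USD
          Unit: USD
      NotificationsWithSubscribers:
        - Notification:
            NotificationType: ACTUAL
            ComparisonOperator: GREATER_THAN
            Threshold: 80                 # percent of the budgeted amount
          Subscribers:
            - SubscriptionType: EMAIL
              Address: ops@example.com    # hypothetical address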
Cost Explorer
Visualize changes in AWS costs and usage over time and create
custom reports to analyze cost and usage data.
AWS Cost and Usage Report
Provides the most comprehensive data on AWS costs and usage
Lists AWS usage for each service category used by account/IAM users as
an hourly or daily statement item.
https://aws.amazon.com/jp/aws-cost-management/aws-cost-and-usage-reporting/
AWS Cost Categories
The ability to categorize costs by your own organization and project
structure
AWS Trusted Advisor
A service that provides advice on cost optimization, security,
fault tolerance, and performance improvement.