
AWS Certified Solutions Architect Associate Exam: The Shortest Path to Success


Introduction
Purpose of this course

This course aims to help you pass the AWS Certified Solutions Architect Associate exam in the shortest possible time.
Concept of this course
Associate Exam Preparation Courses take a lot of time,
and you still need to include mock exam practice!

Our Solutions Architect Associate Exam Course: 26 hours
Associate exam course with the most users on Udemy: 18 hours
Most acclaimed Associate exam course on Udemy: 24 hours
Associate exam course with the longest attendance on Udemy: 83 hours
Concept of this course
The hands-on series alone can take more than 30 hours, not to mention the time needed for the recommended mock exam practice.
Concept of this course
The shortest way to pass is to focus on the exam content
range that is actually being tested.

Analyze the exam questions that are actually being asked ⇒ Focus on the range of exam questions and learn the common patterns
Concept of this course
We have extracted and analyzed the range of questions from 1,625 mock test examples.

Test contents for 3 production tests: 195 Qs
Japanese Associate Mock Test Course with the Most Users (Provided by Our Company): 390 Qs
One of Udemy's Top 3 Courses: 260 Qs
One of Udemy's Top 3 Courses: 390 Qs
One of Udemy's Top 3 Courses: 390 Qs

Total: 1,625 questions


Concept of this course
We have calculated the exam question rate and will deep
dive into the underlying question tendencies and patterns!

Category      Questions  Rate
S3            182        11.17%
EC2           145        8.90%
VPC           94         5.77%
Auto Scaling  76         4.66%
RDS           74         4.54%
EBS           65         3.99%
SQS           60         3.68%
ELB           58         3.56%
CloudFront    56         3.44%
IAM           54         3.31%
DynamoDB      52         3.19%
Lambda        50         3.07%
Route53       42         2.58%

(These top 13 services together account for 62% of all questions.)
Course contents
Section: What you will learn in the section

Associate Exam Overview: Learn the AWS certification system and explore the AWS Certified Solutions Architect Associate exam areas of questioning.

Analysis of Associate Exam Questions: Analyze the questioning tendency from 1,625 Associate exam questions to clarify the range of AWS services to be covered.

Scope of questions for major services ① (IAM / S3 / EC2 / VPC): Of the main services where more than 60% of the questions are asked, check the question format of IAM / S3 / EC2 / VPC and learn the range of questions.

Scope of questions for major services ② (Auto Scaling / RDS / EBS / ELB): Of the main services where more than 60% of the questions are asked, check the question format of Auto Scaling / RDS / EBS / ELB and learn the range of questions.

Scope of questions for major services ③ (SQS / CloudFront / DynamoDB / Lambda / Route53): Of the main services where more than 60% of the questions are asked, check the question format of SQS / CloudFront / DynamoDB / Lambda / Route53 and learn the range of questions.
Course contents
Section: What you will learn in the section

Scope of questions from secondary group of services: In order to cover a little less than 90% of the question range, check the remaining frequent questions and learn the question range.

Question range from occasional questions for high scores: In order to cover 97% or more of the question range, check the remaining rare questions and learn the question range.

Practice exam: Review the example questions that have been covered in all lectures thus far.
What is AWS?
What is AWS?
AWS is a service that allows you to instantly use the
functions needed for infrastructure development and
application development anytime, anywhere.

(Diagram: on-premises server vs. EC2 (server))
What is AWS?
With AWS, you can instantly use infrastructure such as
servers, storage, and databases.

(Diagram: on-premises server vs. EC2 (server); on-premises storage vs. S3 (storage))
What is AWS?
A major feature of AWS is that you can obtain a server in minutes and start using it for free.

On-premises server: ✓ Takes a lot of time to provision. ✓ Requires an initial cost.
EC2 (server): ✓ Up and running in a few minutes. ✓ Available starting for free.
Making Physical Equipment a Service
Efficient system management is possible by borrowing the
physical equipment used for system operation via the Internet.

(Diagram: data center equipment provided as services, EC2 and RDS, via the cloud (Internet))
#1 Global Share
Amazon has been an overwhelming presence, holding a global cloud market share of over 30% for many years.

(Chart: 2019 global cloud market share: AWS 34.6%, with the next providers at 18.1%, 6.2%, and 5.2%)
AWS Certification
Overview
AWS Certification Overview
There are four categories of AWS certifications:
Foundational, Associate, Professional, and Specialty.

https://aws.amazon.com/jp/blogs/big-data/upgrade-your-resume-with-the-aws-certified-big-data-specialty-certification/
AWS Certification Overview
Qualification level and ideal process to success

Professional level: AWS Certified Solutions Architect Professional / AWS Certified DevOps Engineer Professional
Associate level: AWS Certified Solutions Architect Associate / AWS Certified SysOps Administrator / AWS Certified Developer
Foundational level: AWS Certified Cloud Practitioner

Associate Exam
Overview
Required Ability for Examinees

◼ Define a solution using architectural design principles based on customer requirements.
◼ Provide implementation guidance based on best practices to an organization throughout the lifecycle of a project.
Recommended AWS Knowledge

◼ 1 year of hands-on experience designing available, cost-effective, fault-tolerant, and scalable distributed systems on AWS.
◼ Hands-on experience using compute, networking, storage, and database AWS services.
◼ Hands-on experience with AWS deployment and management services.
◼ Ability to identify and define technical requirements for an AWS-based application.
◼ Ability to identify which AWS services meet a given technical requirement.
◼ Knowledge of recommended best practices for building secure and reliable applications on the AWS platform.
◼ An understanding of the basic architectural principles of building in the AWS Cloud.
◼ An understanding of the AWS global infrastructure.
◼ An understanding of network technologies as they relate to AWS.
◼ An understanding of security features and tools that AWS provides and how they relate to traditional services.

Reference: https://aws.amazon.com/jp/certification/certified-solutions-architect-associate/
Response Types

◼ Multiple choice: Has one correct response and three incorrect responses
(distractors)
◼ Multiple response: Has two correct responses out of five response options.
AWS Exam Passing Grade

◼ Test time: 130 minutes
◼ Number of questions: 65
◼ Score range: 100 to 1,000 points (scaled score adjusted for question difficulty)
◼ Passing score: 720 points (about 72%)

Reference: https://aws.amazon.com/jp/certification/certified-solutions-architect-associate/
Question pattern

[Question pattern ①]

AWS service selection / AWS service features and function selection
⇒ Overlaps with the AWS Certified Cloud Practitioner exam

Your company runs an application where users share videos. The application is hosted on EC2 instances that process videos uploaded by users and publish them, with an Auto Scaling group set up.

Select the service you should use to increase the reliability of this process.

1) Amazon SQS
2) Amazon SNS
3) Amazon SES
4) CloudFront
Question pattern

[Question pattern ②]

Choosing the right way to configure various AWS services

As a Solutions Architect, you are building a sales force automation (SFA) system on AWS. This SFA has a business requirement for sales staff to upload sales records daily. These records must be kept for sales reports, so durable and highly available report storage is required. Since many sales staff use this SFA, it is also an important requirement to prevent these records from being accidentally erased through operational mistakes.

Choose a data protection measure that meets these requirements.

1) Enable the versioning function using S3.


2) Accumulate data on EBS and take snapshots automatically on a regular basis
3) Accumulate data in S3 and take snapshots automatically on a regular basis
4) Accumulate data in RDS and take snapshots automatically on a regular basis
Question pattern

[Question pattern ③]

Choosing the optimal architectural configuration that combines various AWS services

You are building a two-tier web application on AWS that delivers content while processing transactions. The data layer uses an online transaction processing (OLTP) database. The web layer requires a flexible and scalable architectural configuration.

Choose the best way to meet this requirement.

1) Set up ELB and Auto Scaling groups on your EC2 instance.


2) Set up a multi-AZ configuration for RDS.
3) Deploy EC2 instances in Multi-AZ and perform failover routing with Route53
4) Install more EC2 instances than expected capacity
The scope of
Associate Exam
Test areas
Of the five design principles (pillars) of the Well-Architected Framework, four are test areas; only "operational excellence" is excluded.
Domain                                                   Rate
Domain 1: Design Resilient Architectures                 30%
Domain 2: Design High-Performing Architectures           28%
Domain 3: Design Secure Applications and Architectures   24%
Domain 4: Design Cost-Optimized Architectures            18%


Domain 1: Design Resilient Architectures

◼ 1.1 Design a multi-tier architecture solution
◼ 1.2 Design highly available and/or fault-tolerant architectures
◼ 1.3 Design decoupling mechanisms using AWS services
◼ 1.4 Choose appropriate resilient storage
Domain 1: Design Resilient Architectures

◼ 1.1 Design a multi-tier architecture solution

As a Solutions Architect, you plan to host a web application consisting of a web server and a database server on AWS. You have set up the database server in a private subnet and the web server in a public subnet, but communication between the instances does not work.

Choose a solution that solves this problem.

1) Control traffic with security groups


2) Control traffic with VPC endpoints
3) Control traffic with network ACLs
4) Allow the WEB server to access the database server with the IAM role.
Domain 1: Design Resilient Architectures

◼ 1.1 Design a multi-tier architecture solution

As a Solutions Architect, you plan to host a web application consisting of a web server and a database server on AWS. You have set up the database server in a private subnet and the web server in a public subnet, but communication between the instances does not work. Choose a solution that solves this problem.

1) Control traffic with security groups

Option 1 is the correct answer. To set up a database server in a private subnet and a web server in a public subnet and have the instances communicate, it is essential to configure an appropriate security group that allows the communication. Security groups let you control traffic between EC2 instances by permitting specific sources (such as IP ranges or other security groups), protocols, and ports.
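As a rough sketch of this idea, the boto3 (Python) snippet below allows MySQL traffic (TCP 3306) into a database security group only from instances carrying the web tier's security group. Both group IDs are hypothetical placeholders, not values from this course.

import boto3

ec2 = boto3.client("ec2")

WEB_SG_ID = "sg-0123456789abcdef0"  # hypothetical: attached to the web server
DB_SG_ID = "sg-0fedcba9876543210"   # hypothetical: attached to the database server

# Allow MySQL (TCP 3306) into the database security group, but only
# from instances that carry the web server's security group.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG_ID,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": WEB_SG_ID}],
        }
    ],
)

Referencing the web tier's security group rather than a fixed IP range keeps the rule working as Auto Scaling replaces instances.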
Domain 1: Design Resilient Architectures

1.2 Design highly available and/or fault-tolerant architectures

A customer relationship management (CRM) application runs on Amazon EC2


instances in multiple Availability Zones behind an Application Load Balancer.

If one of these instances fails, what occurs?

1) The load balancer will stop sending requests to the failed instance.
2) The load balancer will terminate the failed instance.
3) The load balancer will automatically replace the failed instance.
4) The load balancer will return 504 Gateway Timeout errors until the instance is
replaced
Domain 1: Design Resilient Architectures

1.2 Design highly available and/or fault-tolerant architectures

A customer relationship management (CRM) application runs on Amazon EC2


instances in multiple Availability Zones behind an Application Load Balancer.

If one of these instances fails, what occurs?

1) The load balancer will stop sending requests to the failed instance.

Option 1 is the correct answer. An Application Load Balancer (ALB) sends


requests to healthy instances only. An ALB performs periodic health checks on
targets in a target group. An instance that fails health checks for a configurable
number of consecutive times is considered unhealthy. The load balancer will no
longer send requests to the instance until it passes another health check.
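As an illustration only (not part of the exam question), the boto3 sketch below tunes an ALB target group's health check; the target group ARN is a hypothetical placeholder.

import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical target group ARN.
TG_ARN = "arn:aws:elasticloadbalancing:ap-northeast-1:123456789012:targetgroup/web/0123456789abcdef"

# An instance failing 2 consecutive checks is marked unhealthy and stops
# receiving requests; it must pass 5 consecutive checks to return to service.
elbv2.modify_target_group(
    TargetGroupArn=TG_ARN,
    HealthCheckPath="/health",
    HealthCheckIntervalSeconds=30,
    UnhealthyThresholdCount=2,
    HealthyThresholdCount=5,
)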
Domain 1: Design Resilient Architectures
◼ 1.3 Design decoupling mechanisms using AWS services

A company needs to perform asynchronous processing, and has Amazon SQS as


part of a decoupled architecture. The company wants to ensure that the number
of empty responses from polling requests are kept to a minimum.

What should a solutions architect do to ensure that empty responses are


reduced?

1) Increase the maximum message retention period for the queue.


2) Increase the maximum receives for the redrive policy for the queue.
3) Increase the default visibility timeout for the queue.
4) Increase the receive message wait time for the queue.
Domain 1: Design Resilient Architectures
◼ 1.3 Design decoupling mechanisms using AWS services

A company needs to perform asynchronous processing, and has Amazon SQS as


part of a decoupled architecture. The company wants to ensure that the number
of empty responses from polling requests are kept to a minimum.

What should a solutions architect do to ensure that empty responses are


reduced?

4) Increase the receive message wait time for the queue.

Option 4 is the correct answer. When the ReceiveMessageWaitTimeSeconds


property of a queue is set to a value greater than zero, long polling is in effect.
Long polling reduces the number of empty responses by allowing Amazon SQS to
wait until a message is available before sending a response to a ReceiveMessage
request.
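A minimal sketch of enabling long polling (the queue URL is a hypothetical placeholder):

import boto3

sqs = boto3.client("sqs")

queue_url = "https://sqs.ap-northeast-1.amazonaws.com/123456789012/example-queue"  # hypothetical

# Queue-wide long polling: ReceiveMessage waits up to 20 seconds for a
# message instead of returning an empty response immediately.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"ReceiveMessageWaitTimeSeconds": "20"},
)

# Long polling can also be requested per call.
response = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20)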
Domain 1: Design Resilient Architectures
◼ 1.4 Choose appropriate resilient storage

A company currently stores data for on-premises applications on local drives. The chief technology
officer wants to reduce hardware costs by storing the data in Amazon S3 but does not want to make
modifications to the applications. To minimize latency, frequently accessed data should be available
locally.

What is a reliable and durable solution for a solutions architect to implement that will reduce the cost
of local storage?

1) Deploy an SFTP client on a local server and transfer data to Amazon S3 using AWS Transfer for
SFTP.
2) Deploy an AWS Storage Gateway volume gateway configured in cached volume mode.
3) Deploy an AWS DataSync agent on a local server and configure an S3 bucket as the destination.
4) Deploy an AWS Storage Gateway volume gateway configured in stored volume mode.
Domain 1: Design Resilient Architectures
◼ 1.4 Choose appropriate resilient storage

A company currently stores data for on-premises applications on local drives. The chief technology
officer wants to reduce hardware costs by storing the data in Amazon S3 but does not want to make
modifications to the applications. To minimize latency, frequently accessed data should be available
locally.

What is a reliable and durable solution for a solutions architect to implement that will reduce the cost
of local storage?

2) Deploy an AWS Storage Gateway volume gateway configured in cached volume mode

Option 2 is the correct answer. An AWS Storage Gateway volume gateway connects an on-premises software application with cloud-backed storage volumes that can be mounted as Internet Small Computer System Interface (iSCSI) devices from on-premises application servers. In cached volumes mode, all the data is stored in Amazon S3 and a copy of frequently accessed data is stored locally.
Domain 2: Design High-Performing Architectures

◼ 2.1 Identify elastic and scalable compute solutions for a workload
◼ 2.2 Select high-performing and scalable storage solutions for a workload
◼ 2.3 Select high-performing networking solutions for a workload
◼ 2.4 Choose high-performing database solutions for a workload
Domain 2: Design High-Performing Architectures
◼ 2.1 Identify elastic and scalable compute solutions for a workload

You are building a two-tier web application on AWS that delivers content while processing transactions. The data layer uses an online transaction processing (OLTP) database. The web layer requires a flexible and scalable architectural configuration.

Choose the best way to meet this requirement.

1) Set up ELB and Auto Scaling groups on your EC2 instance.


2) Set up a multi-AZ configuration for RDS.
3) Deploy EC2 instances in Multi-AZ and perform failover routing with Route53
4) Install more EC2 instances than expected capacity
Domain 2: Design High-Performing Architectures
◼ 2.1 Identify elastic and scalable compute solutions for a workload

You are building a two-tier web application on AWS that delivers content while processing transactions. The data layer uses an online transaction processing (OLTP) database. The web layer requires a flexible and scalable architectural configuration. Choose the best way to meet this requirement.

1) Set up ELB and Auto Scaling groups on your EC2 instance.

Option 1 is the correct answer. Flexible and scalable server processing on AWS
can be achieved by configuring Auto Scaling and ELB for your EC2 instances. ELB
distributes traffic to multiple instances for increased redundancy, and Auto
Scaling automatically scales under heavy load.
Domain 2: Design High-Performing Architectures
◼ 2.2 Select high-performing and scalable storage solutions for a workload

A company operates a set of EC2 instances hosted on AWS. These are all Linux-
based instances and require access to shared data via a standard file interface.
Since the storage where data is stored is used by multiple instances, strong
consistency and file locking are required. As a Solutions Architect, you are
considering the best storage.

Choose the best storage that meets this requirement.

1) S3
2) EBS
3) Glacier
4) EFS
Domain 2: Design High-Performing Architectures
◼ 2.2 Select high-performing and scalable storage solutions for a workload

A company operates a set of EC2 instances hosted on AWS. These are all Linux-
based instances and require access to shared data via a standard file interface.
Since the storage where data is stored is used by multiple instances, strong
consistency and file locking are required. As a Solutions Architect, you are
considering the best storage. Choose the best storage that meets this
requirement.

4) EFS

Option 4 is the correct answer. EFS allows multiple EC2 instances to access the same EFS file system and share data at the same time. EFS provides a file system interface and file system access semantics (such as strong consistency and file locking) that allow simultaneous access from up to thousands of Amazon EC2 instances.
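As a rough boto3 sketch of this setup (the subnet and security group IDs are hypothetical placeholders), you create one file system and a mount target per Availability Zone; instances then mount it over NFS:

import boto3

efs = boto3.client("efs")

# The creation token makes the call idempotent.
fs = efs.create_file_system(
    CreationToken="shared-data-fs",
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

# One mount target per AZ lets instances in that AZ mount the same
# NFS file system with strong consistency and file locking.
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",      # hypothetical
    SecurityGroups=["sg-0123456789abcdef0"],  # hypothetical
)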
Domain 2: Design High-Performing Architectures
◼ 2.3 Select high-performing networking solutions for a workload

A company operates infrastructure in AWS across private and public subnets. A database server is installed in the private subnet, and a NAT instance is installed in the public subnet so that the instances in the private subnet can send reply traffic out to the Internet. You recently discovered that the NAT instance has become a bottleneck.

How should you improve?

1) Use a VPC connection with wider bandwidth
2) Set up access using a VPC endpoint
3) Change the NAT instance to a NAT gateway
4) Scale up the NAT instance to a larger instance type
Domain 2: Design High-Performing Architectures
◼ 2.3 Select high-performing networking solutions for a workload

A company operates infrastructure in AWS across private and public subnets. A database server is installed in the private subnet, and a NAT instance is installed in the public subnet so that the instances in the private subnet can send reply traffic out to the Internet. You recently discovered that the NAT instance has become a bottleneck.

How should you improve?

3) Change the NAT instance to a NAT gateway

Option 3 is the correct answer. A NAT gateway is a managed service that you can use in place of a NAT instance. Because AWS guarantees its scalability and performance, switching to a NAT gateway relieves the NAT instance bottleneck. You could also scale up the NAT instance's instance type, but that does not guarantee the problem will not recur. Changing the NAT instance to a NAT gateway is therefore the easy way to improve performance and eliminate the bottleneck.
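A minimal boto3 sketch of the change (all resource IDs are hypothetical placeholders):

import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT gateway in a public subnet.
eip = ec2.allocate_address(Domain="vpc")
natgw = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",  # hypothetical public subnet
    AllocationId=eip["AllocationId"],
)

# Once the gateway is available, point the private subnet's default
# route at it so outbound traffic flows through the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",  # hypothetical private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=natgw["NatGateway"]["NatGatewayId"],
)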
Domain 2: Design High-Performing Architectures
◼ 2.4 Choose high-performing database solutions for a workload

As a system developer at a game company, you are building a database for the game you are developing. The game must implement a function where items appear according to user behavior data, so high-speed processing of that behavior data is required.

Choose a service that meets this requirement.

1) Redshift
2) ElastiCache
3) Aurora
4) RDS
Domain 2: Design High-Performing Architectures
◼ 2.4 Choose high-performing database solutions for a workload

As a system developer at a game company, you are building a database for the game you are developing. The game must implement a function where items appear according to user behavior data, so high-speed processing of that behavior data is required. Choose a service that meets this requirement.

2) ElastiCache

Option 2 is the correct answer. ElastiCache is a high-performance in-memory key-value data store. Its main purpose is to provide ultra-fast (sub-millisecond latency), low-cost access to cached copies of data. It is therefore the most suitable database for high-speed processing of user behavior data, enabling real-time ranking and item appearance driven by recorded behavior data.
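As an illustration only, a minimal single-node Redis cluster could be created with boto3 as below; the cluster ID and node type are hypothetical choices, and production ranking workloads would typically use replication groups instead.

import boto3

elasticache = boto3.client("elasticache")

# Minimal single-node Redis cluster for development/testing.
elasticache.create_cache_cluster(
    CacheClusterId="game-behavior-cache",  # hypothetical
    Engine="redis",
    CacheNodeType="cache.t3.micro",
    NumCacheNodes=1,
)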
Domain 3: Design Secure Applications
and Architectures

◼ 3.1 Design secure access to AWS resources
◼ 3.2 Design secure application tiers
◼ 3.3 Select appropriate data security options
Domain 3: Design Secure Applications
and Architectures
3.1 Design secure access to AWS resources
A company runs a public-facing three-tier web application in a VPC across multiple Availability
Zones. Amazon EC2 instances for the application tier running in private subnets need to
download software patches from the internet. However, the instances cannot be directly
accessible from the internet.

Which actions should be taken to allow the instances to download the needed patches?
(Select TWO.)

1) Configure a NAT gateway in a public subnet.


2) Define a custom route table with a route to the NAT gateway for internet traffic and
associate it with the private subnets for the application tier.
3) Assign Elastic IP addresses to the application instances.
4) Define a custom route table with a route to the internet gateway for internet traffic and
associate it with the private subnets for the application tier.
5) Configure a NAT instance in a private subnet.
Domain 3: Design Secure Applications
and Architectures
3.1 Design secure access to AWS resources
A company runs a public-facing three-tier web application in a VPC across multiple Availability
Zones. Amazon EC2 instances for the application tier running in private subnets need to
download software patches from the internet. However, the instances cannot be directly
accessible from the internet.

Which actions should be taken to allow the instances to download the needed patches?
(Select TWO.)

1) Configure a NAT gateway in a public subnet.


2) Define a custom route table with a route to the NAT gateway for internet traffic and
associate it with the private subnets for the application tier.

Options 1 and 2 are correct answers. A NAT gateway forwards traffic from the instances in
the private subnet to the internet or other AWS services, and then sends the response back
to the instances. After a NAT gateway is created, the route tables for private subnets must be
updated to point internet traffic to the NAT gateway.
Domain 3: Design Secure Applications
and Architectures
3.2 Design secure application tiers
A company runs an application hosted on AWS. The application uses a VPC with two public subnets: one where users access the web server over the Internet, and another where the database server is located. As a security officer, you have begun considering how to improve the security of this architecture. Access to the web server is limited to the company intranet and employee PCs; unlike an open web service, it does not need to accept Internet access from the general public.

Select the most secure configuration of the following:

1) Move the database server to a private subnet and use it for RDS.
2) Set up a NAT gateway on the public subnet and install RDS on the private subnet.
3) Move the web server to a private subnet.
4) Move the database and web server to a private subnet.
Domain 3: Design Secure Applications
and Architectures
3.2 Design secure application tiers
A company runs an application hosted on AWS. The application uses a VPC with two public subnets: one where users access the web server over the Internet, and another where the database server is located. As a security officer, you have begun considering how to improve the security of this architecture. Access to the web server is limited to the company intranet and employee PCs; unlike an open web service, it does not need to accept Internet access from the general public.

Select the most secure configuration of the following:

4) Move the database and web server to a private subnet.

Option 4 is the correct answer. Access to the web server is limited to the in-house network and employee PCs, and the service does not need to accept access from an unspecified number of Internet users like an open web service. Therefore, the web server should not be placed in the public subnet but in the private subnet.
Domain 3: Design Secure Applications
and Architectures
3.3 Select appropriate data security options

A company’s security team requires that all data stored in the cloud be encrypted
at rest at all times using encryption keys stored on-premises.

Which encryption options meet these requirements? (Select TWO.)

1) Use Server-Side Encryption with Amazon S3 Managed Keys (SSE-S3).


2) Use Server-Side Encryption with AWS KMS Managed Keys (SSE-KMS).
3) Use Server-Side Encryption with Customer Provided Keys (SSE-C).
4) Use client-side encryption to provide at-rest encryption.
5) Use an AWS Lambda function triggered by Amazon S3 events to encrypt the data using the customer's keys.
Domain 3: Design Secure Applications
and Architectures
3.3 Select appropriate data security options

A company’s security team requires that all data stored in the cloud be encrypted
at rest at all times using encryption keys stored on-premises.
Which encryption options meet these requirements? (Select TWO.)

3) Use Server-Side Encryption with Customer-Provided Keys (SSE-C).
4) Use client-side encryption to provide at-rest encryption.

Options 3 and 4 are correct answers. Server-Side Encryption with Customer-


Provided Keys (SSE-C) enables Amazon S3 to encrypt objects server side using
an encryption key provided in the PUT request. The same key must be provided
in GET requests for Amazon S3 to decrypt the object. Customers also have the
option to encrypt data client side before uploading it to Amazon S3 and
decrypting it after downloading it. AWS SDKs provide an S3 encryption client that
streamlines the process.
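A minimal boto3 sketch of SSE-C, assuming a hypothetical bucket name; the 256-bit key is generated and kept by the customer, and boto3 sends it (with its checksum) on each request:

import os
import boto3

s3 = boto3.client("s3")

# 256-bit key generated and retained on-premises (never stored by AWS).
customer_key = os.urandom(32)

s3.put_object(
    Bucket="example-bucket",  # hypothetical
    Key="report.csv",
    Body=b"col1,col2\n",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)

# The SAME key must be supplied again to read the object back.
obj = s3.get_object(
    Bucket="example-bucket",
    Key="report.csv",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)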
Domain 4: Design Cost-Optimized
Architectures

◼ 4.1 Identify cost-effective storage solutions
◼ 4.2 Identify cost-effective compute and database services
◼ 4.3 Design cost-optimized network architectures
Domain 4: Design Cost-Optimized
Architectures
◼ 4.1 Identify cost-effective storage solutions

A company needs to maintain access logs for a minimum of 5 years due to regulatory
requirements. The data is rarely accessed once stored but must be accessible with one
day’s notice if it is needed.

What is the MOST cost-effective data storage solution that meets these requirements?

1) Store the data in Amazon S3 Glacier Deep Archive storage and delete the objects
after 5 years using a lifecycle rule.
2) Store the data in Amazon S3 Standard storage and transition to Amazon S3 Glacier
after 30 days using a lifecycle rule.
3) Store the data in logs using Amazon CloudWatch Logs and set the retention period
to 5 years.
4) Store the data in Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage
and delete the objects after 5 years using a lifecycle rule.
Domain 4: Design Cost-Optimized
Architectures
◼ 4.1 Identify cost-effective storage solutions

A company needs to maintain access logs for a minimum of 5 years due to regulatory
requirements. The data is rarely accessed once stored but must be accessible with one
day’s notice if it is needed.

What is the MOST cost-effective data storage solution that meets these requirements?

1) Store the data in Amazon S3 Glacier Deep Archive storage and delete the objects
after 5 years using a lifecycle rule.

Option 1 is the correct answer. Data can be stored directly in Amazon S3 Glacier Deep
Archive. This is the cheapest S3 storage class.
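A minimal boto3 sketch of this answer, with a hypothetical bucket and prefix: objects are written directly to the Deep Archive storage class and expired after roughly 5 years by a lifecycle rule.

import boto3

s3 = boto3.client("s3")

# Write logs straight into the Deep Archive storage class.
s3.put_object(
    Bucket="example-log-bucket",  # hypothetical
    Key="logs/2021/01/01/access.log",
    Body=b"...",
    StorageClass="DEEP_ARCHIVE",
)

# Delete objects after ~5 years (1,825 days) with a lifecycle rule.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-log-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-after-5-years",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Expiration": {"Days": 1825},
            }
        ]
    },
)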
Domain 4: Design Cost-Optimized
Architectures
◼ 4.2 Identify cost-effective compute and database services

A company uses Reserved Instances to run its data-processing workload. The nightly job
typically takes 7 hours to run and must finish within a 10-hour time window. The company
anticipates temporary increases in demand at the end of each month that will cause the job
to run over the time limit with the capacity of the current resources. Once started, the
processing job cannot be interrupted before completion. The company wants to implement a
solution that would allow it to provide increased capacity as cost-effectively as possible.

What should a solutions architect do to accomplish this?

1) Deploy On-Demand Instances during periods of high demand.


2) Create a second Amazon EC2 reservation for additional instances.
3) Deploy Spot Instances during periods of high demand.
4) Increase the instance size of the instances in the Amazon EC2 reservation to support the
increased workload.
Domain 4: Design Cost-Optimized
Architectures
◼ 4.2 Identify cost-effective compute and database services

A company uses Reserved Instances to run its data-processing workload. The nightly job
typically takes 7 hours to run and must finish within a 10-hour time window. The company
anticipates temporary increases in demand at the end of each month that will cause the job
to run over the time limit with the capacity of the current resources. Once started, the
processing job cannot be interrupted before completion. The company wants to implement a
solution that would allow it to provide increased capacity as cost-effectively as possible.

What should a solutions architect do to accomplish this?

1) Deploy On-Demand Instances during periods of high demand.

Option 1 is the correct answer. While Spot Instances would be the least costly option, they are not suitable for jobs that cannot be interrupted or that must complete within a certain time period. On-Demand Instances are billed only for the seconds they are running.
Domain 4: Design Cost-Optimized
Architectures
◼ 4.3 Design cost-optimized network architectures

As a Solutions Architect, you work for a company that operates a global image
distribution site. Currently, the company is considering using a CDN to streamline
the image distribution system. Therefore, you decided to calculate and report the
cost for content distribution using CloudFront.

Select the elements that determine CloudFront costs. (Select TWO.)

1) Number of requests
2) Data transfer out
3) Resource type
4) Number of edge locations to use
Domain 4: Design Cost-Optimized
Architectures
◼ 4.3 Design cost-optimized network architectures

As a Solutions Architect, you work for a company that operates a global image distribution site. The company is considering using a CDN to streamline the image distribution system, so you decided to calculate and report the cost of content distribution using CloudFront. Select the elements that determine CloudFront costs. (Select TWO.)

1) Number of requests
2) Data transfer out

Options 1 and 2 are correct answers. Amazon CloudFront pricing is determined by the following factors:

- Traffic distribution: Data transfer and request prices vary by region, and prices vary by the edge location from which content is delivered.
- Requests: The number and type of requests (HTTP or HTTPS) and the region where the request was made.
- Data transfer out: The amount of data transferred out of Amazon CloudFront edge locations.
The scope of
AWS services
Analysis of exam question range
We have extracted and analyzed the range of questions from 1,625 mock test examples.

Test contents for 3 production tests: 195 Qs
Japanese Associate Mock Test Course with the Most Users (Provided by Our Company): 390 Qs
One of Udemy's Top 3 Courses: 260 Qs
One of Udemy's Top 3 Courses: 390 Qs
One of Udemy's Top 3 Courses: 390 Qs

Total: 1,625 questions


The Top Focus Range of Services in the Exam
Of the more than 100 services, the top 13 services alone account for 62% of the questions.

Category      Questions  Rate
S3            182        11.17%
EC2           145        8.90%
VPC           94         5.77%
Auto Scaling  76         4.66%
RDS           74         4.54%
EBS           65         3.99%
SQS           60         3.68%
ELB           58         3.56%
CloudFront    56         3.44%
IAM           54         3.31%
DynamoDB      52         3.19%
Lambda        50         3.07%
Route53       42         2.58%

(These 13 services together account for 62% of all questions.)
The Top Focus Range of Services in the Exam
Of the more than 100 services, the top 13 services alone account for 62% of the questions.

Amazon Simple Storage Service (S3): S3 is a 99.999999999% (eleven 9s) durable and highly available object storage service. It is accessible from the Internet and can be used for storing large amounts of data and for long-term data storage.

Amazon Elastic Compute Cloud (EC2): Amazon EC2 is a service that launches virtual servers such as Windows and Linux. You can choose the processor, storage, networking, operating system, and purchase model.

Amazon VPC: VPC is a service that builds a virtual network environment. You select an IP address range, create subnets, and configure route tables and network gateways during configuration.

Amazon RDS: RDS is a managed relational database service for MySQL, PostgreSQL, Oracle, SQL Server, and MariaDB.

Amazon Elastic Block Store (EBS): EBS is dedicated block storage that is used by attaching it to an EC2 instance over a network.

ELB: Elastic Load Balancing is a load balancer that automatically distributes traffic to your application across multiple instances.

Auto Scaling: Auto Scaling is a service that automatically scales the number of EC2 instances according to load.
The Top Focus Range of Services in the Exam
Of the more than 100 services, the top 13 services alone account for 62% of the questions.

Amazon SQS: Amazon SQS is a fully managed polling-based message queuing service. It is used for parallel and distributed processing by workers.

AWS Identity & Access Management (IAM): IAM is an access management service that securely manages access to AWS services and resources.

Amazon CloudFront: CloudFront is a high-speed content delivery network (CDN) service that securely delivers content to viewers around the world with low-latency transfers.

Amazon DynamoDB: DynamoDB is a key-value store and document database that delivers millisecond-range performance regardless of scale.

AWS Lambda: Lambda is AWS's flagship serverless service; it executes program code without requiring you to manage a server.

Amazon Route53: Route53 is a service that provides DNS server functions, performing domain name resolution and routing.
Next Most Frequent Services
Adding this second group of services, our range covers 90% of questions.

Category                 Questions  Rate
Security Group           35         2.15%
Kinesis                  31         1.90%
EFS                      30         1.84%
API Gateway              30         1.84%
CloudWatch               30         1.84%
Aurora                   29         1.78%
ElastiCache              28         1.72%
Connection               28         1.72%
CloudFormation           23         1.41%
ECS                      22         1.35%
Redshift                 21         1.29%
SNS                      18         1.10%
AWS Storage Gateway      17         1.04%
Organizations            17         1.04%
Multi AZ                 16         0.98%
Amazon FSx for Windows   13         0.80%
Instance Store           11         0.67%
KMS                      11         0.67%
Snowball                 10         0.61%
Glacier                  10         0.61%
AWS DataSync             10         0.61%
DR Configuration         10         0.61%
CloudTrail               10         0.61%

(+28% ⇒ 90% cumulative coverage)

※ CloudWatch and CloudTrail questions come from the previous version of the exam; from the 02 version, these services are less likely to appear.
Next Most Frequent Services
Adding services with 10 or more questions covers 90% of
exam content
Security Group: A security group provides a firewall function that controls the communication traffic of instances and ELBs.

Kinesis: Kinesis is a data processing service that collects, processes, and analyzes streaming data in real time.

Amazon Elastic File System (EFS): EFS is a simple, scalable, elastic, fully managed NFS file system for use with AWS cloud services and on-premises resources.

Amazon API Gateway: API Gateway is a service for creating and managing RESTful APIs and WebSocket APIs that provide real-time two-way communication for applications.

Amazon CloudWatch (SAA-01): CloudWatch is a monitoring service that monitors applications, optimizes resource utilization, and provides a comprehensive view of operational health.

Amazon Aurora: Aurora is a distributed, high-speed relational database for the cloud that is compatible with MySQL and PostgreSQL.

Amazon ElastiCache: ElastiCache is a fully managed in-memory data store compatible with Redis or Memcached.
Next Most Frequent Services
Adding services with 10 or more questions covers 90% of
exam content
Site-to-site connection (Direct Connect / VPN): Direct Connect is a dedicated-line service that establishes a private connection between AWS and your on-premises environment; AWS Site-to-Site VPN establishes an encrypted connection over the Internet.

AWS CloudFormation: CloudFormation is an Infrastructure as Code service that defines templates in code and automates the provisioning of AWS resources.

Amazon Elastic Container Service (ECS): ECS is a scalable, high-performance container orchestration service that supports Docker containers.

Amazon Redshift: Redshift is a fast, simple, cost-effective data warehouse.

Amazon SNS: SNS is a push-type messaging service with pub/sub functionality, used for message notifications and alarm settings.

AWS Storage Gateway: Storage Gateway is a hybrid cloud storage service that provides virtually unlimited access to cloud storage from on-premises.

AWS Organizations: AWS Organizations is a management service that provides centralized management and consolidated billing for multiple AWS accounts.
Next Most Frequent Services
Adding services with 10 or more questions covers 90% of
exam content
Multi AZ: A multi-AZ configuration is a highly available infrastructure configuration that uses two or more Availability Zones. It is a basic pattern of AWS architecture.

Amazon FSx for Windows: Amazon FSx for Windows is reliable, scalable, fully managed file storage that can be accessed via the industry-standard Server Message Block (SMB) protocol.

Instance Store: The instance store is block storage that is physically attached to the EC2 instance host and is used to store temporary data.

AWS Key Management Service (KMS): KMS is a service that makes it easy to create and manage encryption keys to encrypt data across a wide range of AWS services and applications.

AWS Snow family: The Snow family consists of highly secure portable storage devices and trailers, such as Snowball, that collect and process data at the edge and are used to migrate data to and from AWS.

Amazon Glacier: Amazon Glacier is a secure, durable, extremely low-cost Amazon S3 cloud storage class used for data archiving and long-term backup.

AWS DataSync: DataSync is a service that moves large amounts of online data between on-premises storage and S3, Amazon EFS, or Amazon FSx for Windows File Server easily and quickly.
Next Most Frequent Services
Adding services with 10 or more questions covers 90% of
exam content
DR Configuration: Disaster recovery (DR) configuration methods using AWS, such as taking backups and using another region.

CloudTrail (SAA-01): CloudTrail is a log acquisition and monitoring service that tracks user activity and API usage.
Occasional Questions for Even Higher Score
Including services with 4 or more questions covers 97%
of the exam content.

Category                 Questions  Rate
AWS WAF                  9          0.55%
AWS Global Accelerator   8          0.49%
AWS Elastic Beanstalk    8          0.49%
EMR                      8          0.49%
ACM                      8          0.49%
OpsWorks                 7          0.43%
DMS                      7          0.43%
Cognito                  7          0.43%
Athena                   7          0.43%
Amazon MQ                6          0.37%
AWS Directory Service    6          0.37%
AWS SSO                  6          0.37%
Amazon FSx for Lustre    5          0.31%
AWS Transit Gateway      5          0.31%
AWS Step Functions       5          0.31%
SWF                      5          0.31%
CloudHSM                 4          0.25%
STS                      4          0.25%

(+7% ⇒ 97% cumulative coverage)
Occasional Questions for Even Higher Score
Including services with 4 or more questions covers 97%
of the exam content.
AWS WAF: WAF is a web application firewall that protects web applications and APIs from common web vulnerabilities such as SQL injection and cross-site scripting.

AWS Global Accelerator: AWS Global Accelerator uses two global static IPs to automatically reroute traffic to the closest healthy endpoint, improving performance for Internet users by up to 60%.

AWS Elastic Beanstalk: Elastic Beanstalk is a service that deploys web applications built with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker, and automates version management.

Amazon EMR: Amazon EMR can perform petabyte-scale analysis with Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto.

AWS Certificate Manager (ACM): ACM is a service that provisions, manages, and deploys Secure Sockets Layer / Transport Layer Security (SSL/TLS) certificates.

AWS OpsWorks: OpsWorks is a configuration management service that lets you automate server configuration using Chef and Puppet code.

AWS Database Migration Service: DMS is a database migration tool that enables you to safely migrate your database to AWS in a short period of time.
Occasional Questions for Even Higher Score
Including services with 4 or more questions covers 97%
of the exam content.
Amazon Cognito: Cognito is a service that lets you quickly and easily add user sign-up / sign-in and access control capabilities to web and mobile apps.

Amazon Athena: Athena is an interactive query service that you can use to easily analyze data in Amazon S3 using standard SQL.

Amazon MQ: Amazon MQ is a managed message broker service compatible with Apache ActiveMQ that lets you use message brokers in the cloud via industry-standard APIs and protocols.

AWS Directory Service: AWS Directory Service enables you to integrate with AD in your on-premises environment or build a new AD on AWS, providing managed Active Directory (AD) within AWS.

AWS Single Sign-On (SSO): AWS SSO is a service that facilitates centralized management of access to multiple AWS accounts and business applications and provides users with single sign-on access.

Amazon FSx for Lustre: Amazon FSx for Lustre is high-performance shared storage ideal for workloads such as machine learning, high-performance computing (HPC), video rendering, and financial simulation.

AWS Transit Gateway: AWS Transit Gateway is a service that configures a hub-and-spoke topology through a central hub, connecting multiple VPCs and on-premises networks to manage complex peering connections.
Occasional Questions for Even Higher Score
Including services with 4 or more questions covers 97%
of the exam content.
AWS Step Functions: AWS Step Functions is a serverless workflow creation and management service that lets you create processes by arranging AWS Lambda functions and multiple AWS services as flows.

Amazon Simple Workflow (SWF): SWF is a workflow creation and management service that builds, runs, and scales background jobs with parallel or sequential steps.

AWS CloudHSM: CloudHSM is hardware-based key storage that manages encryption keys using FIPS 140-2 Level 3 certified HSMs, supporting industry standards such as the PKCS #11, JCE, and CNG libraries.

AWS Security Token Service (STS): AWS STS is an authentication service that allows authenticated users, such as IAM users, to request temporary, restricted-privilege credentials.
Learning guide
Concept of this course
The most efficient path to success is to focus on the exam range that is actually tested.

Analyze the exam questions that are actually being asked ⇒ Focus on the range of exam questions and learn the common patterns
Concept of this course
Strengthen your knowledge based on the questioning patterns of each service and confirm your ability with the mock exam!

Understand the question tendencies ⇒ Focus on learning the tested content ⇒ Confirm your abilities with mock tests
S3 question range
The range of questions about S3 based on 1625 Qs

S3 storage features:
✓ You will be asked to select storage that meets the storage requirements of a given scenario.
✓ You will be asked to identify the characteristics of S3.

S3 data capacity limit:
✓ You will be asked a simple question about the data capacity of S3.

Select storage type:
✓ You will be asked about an S3 storage class that meets a given scenario's storage requirements.
✓ You will be asked to describe a lifecycle management plan.

S3 usage cost:
✓ You will be asked about the cost factors of S3.
✓ You may also be asked about a feature that lets you set up billing according to requests.

Lifecycle management:
✓ You will be asked about the appropriate way to move data between storage classes or delete it via lifecycle management according to data retention time.
Select Storage type

As a Solutions Architect, you are building a mechanism to store and share reports
generated by your internal applications. This report will be generated by AWS Step
Functions to automate the generation process, but it will generate several terabytes of
data to be used in the report and must be stored in S3.

As a Solutions Architect, choose the most cost-effective storage type.

1) S3 Standard-IA
2) S3 Standard
3) S3 Intelligent Tiering
4) S3 Glacier
Storage class selection

As a Solutions Architect, you are building a mechanism to store and share reports
generated by your internal applications. This report will be generated by AWS Step
Functions to automate the generation process, but it will generate several terabytes of
data to be used in the report and must be stored in S3.

As a Solutions Architect, choose the most cost-effective storage type.

1) S3 Standard-IA
2) S3 Standard
3) S3 Intelligent Tiering
4) S3 Glacier

[Explanation of the question]


Questions are introduced as examples of the exam content and patterns.
In order to save time, each question will not be covered individually in
this lecture series.
Storage class selection
Choose a storage type based on the S3 usage

Type: STANDARD
✓ Durability is very high because the data is duplicated in multiple places.
✓ Suitable for storing large amounts of frequently used data.
■ Durability 99.999999999% ■ Availability 99.99%

Type: STANDARD-IA
✓ IA stands for Infrequent Access; this is storage for infrequently accessed data. Unlike One Zone-IA, it is suitable for important master data.
✓ Cheaper than Standard, but more expensive than One Zone-IA.
■ Durability 99.999999999% ■ Availability 99.9%

Type: One Zone-IA
✓ Storage for infrequent access, but for low-availability, non-essential data, because it is not distributed across multiple AZs.
✓ Even cheaper than Standard-IA.
■ Durability 99.999999999% ■ Availability 99.5%

Type: RRS (Reduced Redundancy Storage)
✓ Low-redundancy storage, used for placing data retrieved from Glacier, etc.
✓ Currently deprecated and no longer used; now more expensive than Standard.
■ Durability 99.99% ■ Availability 99.99%
Mock Question example.
Section content
Lecture: What you will learn in the lecture

The scope of IAM: Learn the range of content referenced for IAM, a major service that manages users on AWS.

The scope of S3: Learn the range of content referenced for S3, a major service used for AWS storage.

The scope of EC2: Learn the range of content referenced for EC2, a major service for building virtual servers on AWS.

The scope of VPC: Learn the range of content referenced for VPC, a major service that sections out the network in AWS.
The scope of IAM
What is IAM?
AWS Identity and Access Management (IAM) is an
authentication / authorization tool for AWS operation
security.

 Implementation of AWS user authentication


 Access policy settings
 Set individual or group permissions
What is IAM?
IAM is an authentication / authorization tool that secures AWS operations.

(Diagram: IAM grants a user EC2 ○ / S3 × and a group EC2 ○ / S3 ○ when they access AWS services)
The Scope of IAM Questions
Frequently asked questions extracted from the 1,625 questions are as follows:

IAM User:
✓ You will be asked how IAM users are to be set up and used.

Root Account:
✓ You will be asked about the authority of the root account.
✓ You will be asked about best practices for limiting the use of the root account.

IAM Group:
✓ You will be asked about the purpose of IAM groups and how they are set up.

IAM Policy:
✓ Based on IAM policy documents, you will be asked what permission state a setting indicates.

IAM Policy Type:
✓ You will be asked about the content and intended use of each type of IAM policy.
✓ You will also be asked about the use of MFA and password enforcement.
The Scope of IAM Questions
Frequently asked questions extracted from the 1,625 questions are as follows:

IAM Roles:
✓ You will be asked about scenarios for setting up IAM roles.

IAM Authentication Method:
✓ You will be asked about cases where authentication requires an access key and secret access key.
✓ You will be asked about best practices that require MFA authentication.

IAM Database Authentication:
✓ You will be asked about the use of IAM database authentication, a method mainly used for RDS authentication.

Recording User Activity:
✓ You will be asked about the purpose of the tools used to manage records of IAM users' activities.

IAM Best Practice:
✓ You will be asked about best practices for setting permissions, based primarily on least privilege.
Key Topics
Users, groups, policies, and roles are key elements of IAM

Users Groups

Policies Roles
[Q] IAM Policy
The following IAM policies are used to set permissions on AWS resources.

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Action": "*",
"Resource": "*",
"Condition": {
"NotIpAddress": {
"aws:SourceIp": [
"172.103.1.38/24"
]
}
}
}
]
}

Select the correct explanation for this setting.

1) All resources except for the IP address (172.103.1.38) are denied access privileges.
2) The IP address (172.103.1.0) has access rights to all resources.
3) The IP address (172.103.1.3) has been denied access to all resources.
4) The IP address (172.103.1.3) has access rights to all resources.
IAM Policy
A configuration document to grant access rights to users
and groups (JSON format document).

(Diagram: Policy A grants an IAM user EC2 ○ / S3 ×; Policy B grants an IAM group EC2 ○ / S3 ○ for access to AWS services)
IAM Policy
IAM policies are written in JSON format.

Effect: "Allow" or "Deny".
Action: The target AWS service operations, e.g., "s3:Get*".
Resource: The target AWS resources, written as ARNs.
Condition: The conditions under which the policy is in effect (e.g., the source IP address, as in the example above).
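To tie the four elements together, here is a rough boto3 sketch that creates an identity-based policy; the policy name, bucket ARN, and IP range are hypothetical placeholders.

import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",                            # Effect: Allow or Deny
            "Action": ["s3:GetObject"],                   # Action: target operations
            "Resource": "arn:aws:s3:::example-bucket/*",  # Resource: ARN (hypothetical)
            "Condition": {                                # Condition: when it applies
                "IpAddress": {"aws:SourceIp": "203.0.113.0/24"}
            },
        }
    ],
}

iam.create_policy(
    PolicyName="example-s3-read-policy",  # hypothetical
    PolicyDocument=json.dumps(policy_document),
)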
[Q] IAM Users
As a Solutions Architect, you are in the process of setting up AWS access privileges for new personnel in your department. You have created multiple IAM users, but you need to make sure you know what privileges the created IAM users have by default.

1) Restrictive permissions have been set up.


2) All Access permissions are set to allow access to resources other than
those with administrative privileges.
3) It has no authority.
4) Basic resource permissions are given by default.
IAM User
Users can use AWS services allowed by the IAM policy.

(Diagram: an IAM user is granted EC2 ○ / S3 ×; an IAM group is granted EC2 ○ / S3 ○ for access to AWS services)
IAM User
Users on AWS are set up as authorized entities called IAM
users.

Root account (root user):
• The very first account created when you create an AWS account.
• Has permission to use all AWS services and resources.
• It is strongly recommended that you do not use the root user for daily tasks.

Administrative authority (IAM user):
• An IAM user who has been granted administrative privileges.
• This user can hold IAM admin roles.
• However, it is not granted the permissions that only the root account can exercise.

Power user (IAM user):
• Power users are IAM users with full access to all AWS services except IAM administration.
• No permission to operate IAM.
[Q] Root account

When you register for an account with AWS, an account, called a root
account, is created and allows you to perform AWS operations. There are
certain operations that can only be performed by the root account.

Select the operations that can be performed only by the root account. (Select TWO.)

1) Activate access to IAM user's financial information


2) Implement user management within the AWS account.
3) Register a domain using Route 53.
4) Contact AWS Support.
5) Become a member account in AWS Organizations.
Root account
Only the root account (root user) has the following
abilities:

 Change your AWS root account email address and password
 Activate/deactivate IAM user access to billing information
 Transfer a Route53 domain registration to another AWS account
 Cancel AWS services (AWS Support, etc.)
 Suspend an AWS account
 Set up consolidated billing
 Submit a vulnerability assessment form
 Apply for reverse DNS
[Q] IAM Group
A company needs to set up at least 300 AWS users. These users are divided
into three departments and each department has different AWS resources to
use. As a solution architect, you've been asked to consider how to best set
up permissions for these users.

How should you set up these permissions based on the principle of least
privilege?

1) Create an IAM policy with the minimum permissions required for each
user and set it for IAM users.
2) Create an IAM policy with the minimum privileges required for each user
and set it up in an IAM group. In addition, place these IAM users in an
IAM group.
3) Create an IAM policy with the minimum permissions required for each
user and set it for IAM users. In addition, place these IAM users in an IAM
group.
4) Create an IAM group for each department and set up an IAM policy that
sets the minimum privileges required for each user.
IAM Group
A unit of authority that is set up collectively as a group. A
group is usually made up of multiple IAM users.

[Diagram: IAM users are placed in IAM groups, and permissions to AWS services such as EC2 and S3 are granted collectively at the group level.]
[Q]IAM Role

A solution architect is building an application that performs database


operations using Lambda functions. This serverless application accesses
Amazon DynamoDB tables and performs the process of retrieving and
processing data.

What is the safest way to grant the Lambda function access to the
DynamoDB tables?

1) Create an IAM role with the necessary privileges to access DynamoDB


tables and assign that role to a Lambda function.
2) Create an IAM policy with the necessary permissions to access
DynamoDB tables and assign that role to a Lambda function.
3) Create an IAM group with the necessary permissions to access
DynamoDB tables and assign its role to a Lambda function.
4) Create an IAM user with the necessary privileges to access DynamoDB
tables and assign that role to a Lambda function.
IAM Role
You can grant access rights to AWS resources as a role

[Diagram: in addition to user and group policies (Policy A: EC2 allowed, S3 denied; Policy B: both allowed), an IAM role with its own policy (Policy C: S3 allowed, EC2 denied) can be attached to AWS services such as EC2, Elastic Beanstalk, and Data Pipeline.]
[Q] What type of IAM policy is this?

A large IT company has a group of developers with power user access accounts
for their development work. However, a single developer inadvertently deleted
an instance in the production environment, causing a critical application to
go down for an extended period of time. After this incident was reported,
you were asked, as a solution architect, to implement security best practice
controls.

Choose the appropriate response to prevent such incidents from recurring.

1) Use the root account to limit administrative privileges to the root account
only.
2) SCP is used to control the maximum privileges that development personnel
can grant to IAM identities.
3) Use IAM groups to limit the privilege settings of the developers' group
personnel.
4) Use permission boundaries to control the maximum privileges that
development personnel can grant to IAM identities.
Type of IAM policy
IAM policies are typically user-based policies, but several other policy types exist.

User-based policy
✓ Managed and inline policies attached to IAM entities (users, groups of users, roles)
✓ Permissions are granted to the entity the policy is attached to

Resource-based policy
✓ An inline policy defined in a JSON document and attached to a resource, such as a bucket policy
✓ Examples are Amazon S3 bucket policies and trust policies for IAM roles

Permission boundary
✓ A permission boundary sets the maximum range of permissions that a user-based policy can grant to an IAM entity; it does not itself grant permissions
✓ IAM entities are only allowed actions that are permitted by both their user-based policies and the permission boundary
Access Permission Boundary
A permission boundary sets an upper limit on permissions; actual permissions
are then granted by other policies within that limit.

Reference: https://docs.aws.amazon.com/ja_jp/IAM/latest/UserGuide/access_policies.html#access_policy-types
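To make the boundary concrete, the following sketch (assuming boto3; the user name is a placeholder) sets the AWS managed PowerUserAccess policy as a permission boundary, so the user's effective permissions can never exceed it regardless of what identity-based policies are attached later.

import boto3

iam = boto3.client("iam")

# Create a user with a permission boundary; the boundary itself grants nothing.
iam.create_user(
    UserName="example-developer",
    PermissionsBoundary="arn:aws:iam::aws:policy/PowerUserAccess",
)

# A boundary can also be set or replaced on an existing user.
iam.put_user_permissions_boundary(
    UserName="example-developer",
    PermissionsBoundary="arn:aws:iam::aws:policy/PowerUserAccess",
)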
Type of IAM policy
There are several other policy types used for AWS permission control.

SCP
✓ Policies that define access permission limits for member accounts of an organization or organizational unit (OU)
✓ As with permission boundaries, an SCP does not itself grant permissions
✓ IAM users in a member account can only perform actions that are allowed by both the SCP and their IAM policies

ACL
✓ Controls which principals in other accounts can access the resource to which the ACL is attached
✓ Differs from resource-based policies in that it does not use the JSON policy document structure
✓ A cross-account permission mechanism that grants access to defined principals

Session policy
✓ Passed as a parameter when creating a temporary session for a role or federated user
✓ Restricts the permissions of the session it creates, but does not grant permissions
[Q] User-based policy types

As a solution architect, you are implementing account management using AWS.
First, you need to create IAM users and grant account privileges to AWS
users. Two of these users require administrative privileges, which must be
granted through an IAM policy.

Select the easiest available IAM policy type to implement this privilege
management.

1) Use an AWS managed policy for the administrative privileges.
2) Use an inline policy.
3) Use a third-party policy.
4) Use a customer managed policy.
User-based policy types
Create an IAM policy and grant access to users and other entities.

Managed policies
AWS managed policy: policies created and managed by AWS.
Customer managed policy: policies created and managed by the user; the same policy can be attached to multiple IAM entities.

Inline policy
A policy embedded directly in a single principal entity (user, group, or role); it exists only on that principal entity.
[Q] Trust policy for IAM roles
Your company is using AWS to build an application. You need to give a
representative of vendor A temporary access so that they can incorporate
vendor A's solution into this application. As a solution architect, you want
to delegate access to this person so that he or she can access the required
resources.

As a solution architect, which of the following responses should you


implement?

1) Create a new IAM user, set up permissions to the necessary AWS


resources, and then grant them to the person in charge of Company A.
2) Create an STS, a mechanism for temporary authentication, and grant
permissions to the necessary AWS resources to Company A personnel
after configuring them.
3) Issue an access key, set up permissions to the necessary AWS resources,
and then grant it to the person in charge of Company A.
4) Create a new IAM role, set up permissions to the required AWS resources,
and then grant them to the person in charge of Company A.
Trust Policy for IAM Roles
IAM roles are also used to delegate temporary authority
to others, such as auditors.

✓ A policy specific to IAM role delegation operations
✓ The authority of the IAM role associated with the trust policy can be transferred (permitted) to the principal specified in the trust policy, who operates under that role

[Diagram: a third-party user assumes the IAM role and receives a transfer of authority.]
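A minimal sketch of such a delegation with boto3; the account IDs, role name, and session name are placeholders. The trust policy names the external principal, and that principal then calls STS AssumeRole to receive temporary credentials.

import json
import boto3

iam = boto3.client("iam")

# Trust policy: allow the external account (placeholder ID) to assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="VendorAccessRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# The vendor's side then obtains temporary credentials for the role.
sts = boto3.client("sts")
credentials = sts.assume_role(
    RoleArn="arn:aws:iam::999999999999:role/VendorAccessRole",
    RoleSessionName="vendor-session",
)["Credentials"]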
[Q] IAM authentication method

As a solution architect, you are developing a web application on AWS. The
application needs to use HTTPS to call another web application, and your
code needs the ability to access and work with those web services directly
through IAM.

Choose the best way to configure your application to work with IAM in your
code.

1) Create an IAM role and run it on your application.


2) Create an IAM user and run it on your application.
3) Create a set of access keys for the user and run them on the application.
4) Implement Cognito on your application and issue STS to collaborate with
it.
IAM Authentication Method
The method of user authentication with IAM depends on the tool used.

Access key ID / secret access key
• Used for REST/Query-format requests such as EC2 instance connections
• Used for authentication with the AWS CLI and APIs

X.509 certificate
• Authentication method for SOAP API requests

AWS Management Console login password
• A password set for each AWS account to authenticate console logins
• Not set by default (the user cannot log in until one is set)

MFA (Multi-Factor Authentication)
• An authentication method that uses a PIN code from a physical device
• It is recommended that the root account be protected with MFA to enhance security
[Q] IAM database authentication

Your company is building a database solution using AWS and is using


multiple Amazon RDS MySQL databases. Currently, the application connects to
the database using standard MySQL authentication with a user ID and password,
but embedding the password in code is a security concern.

Which of these can enable secure user access with short-term credentials to
improve security?

1) Use IAM database authentication.


2) Configure your MySQL database to use AWS STS
3) Execute the AUTH command for database authentication.
4) Perform temporary authentication on the application using the IAM role.
IAM database authentication
You can connect to an Amazon RDS database using IAM user
authentication and authentication tokens via IAM database
authentication.

Reference: https://aws.amazon.com/jp/blogs/news/using-iam-authentication-to-connect-with-pgadmin-
amazon-aurora-postgresql-or-amazon-rds-for-postgresql/
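A minimal sketch of obtaining such a short-term credential with boto3; the endpoint, port, and user name are placeholders. generate_db_auth_token returns a token, valid for 15 minutes, that is passed in place of a password (the connection must use SSL).

import boto3

rds = boto3.client("rds", region_name="ap-northeast-1")

# Short-lived authentication token used instead of a stored password.
token = rds.generate_db_auth_token(
    DBHostname="mydb.example.ap-northeast-1.rds.amazonaws.com",
    Port=3306,
    DBUsername="db_user",
)

# The token is then used as the password by the MySQL client, e.g. with PyMySQL:
# import pymysql
# conn = pymysql.connect(host="mydb.example...", user="db_user", password=token,
#                        ssl={"ca": "/path/to/rds-ca-bundle.pem"})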
[Q] Recording user activities
Your company has an IAM policy for one S3 bucket that allows external third-
party applications to read its files. You therefore need to verify that this
access is being used correctly by the expected external users and is not
being used in an unanticipated manner.

Choose the best method for this checking mechanism.

1) CloudTrail
2) Server access log
3) Storage Class Analysis
4) IAM Access Analyzer
Recording user activity
A variety of tools can be used to obtain activity records.

IAM Access Analyzer
Analyzes S3 buckets, IAM roles, and other resources shared with external entities to identify unintended access that poses a security risk.

Access Advisor (Service Last Accessed Data)
Displays the date and time when an IAM entity (user, group, role) last accessed an AWS service.

Credential Report
A report file of IAM credentials, including dates and times of use.

AWS Config
A service that manages the change history and configuration of IAM users, groups, roles, and policies.

AWS CloudTrail
A service that logs and monitors account activities and API calls.
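For example, the credential report above can be generated and downloaded with boto3, as in this minimal sketch:

import time
import boto3

iam = boto3.client("iam")

# Ask IAM to generate the report, then poll until it is ready.
while iam.generate_credential_report()["State"] != "COMPLETE":
    time.sleep(2)

report = iam.get_credential_report()
print(report["Content"].decode("utf-8"))  # CSV with one row per user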
[Q] Best Practices for IAM Authorization

You've just created a new AWS account and are setting up your AWS
environment. AWS defines best practices that should be carried out for newly
created accounts. Not following them doesn't mean you can't use AWS, but
they are recommended, so you are going to address them.

As a solution architect, select the items you should deal with in the early
days of AWS. (Choose three.)

1) Enable MFA authentication for all users.


2) Enable CloudTrail trail execution.
3) Enable CloudWatch monitoring.
4) Enable monitoring of Config.
5) Enable two-step verification with a password.
6) Set a password policy.
Best Practices for IAM Authority
Follow best practices when using IAM.

✓ Lock the access keys of the AWS root user and do not use the root
account unnecessarily.
✓ Create individual IAM users and manage them with IAM users.
✓ Use the IAM group to assign permissions to IAM users.
✓ Set only minimum privileges for IAM users and IAM groups.
✓ Instead of creating new policies, use AWS managed policies.
✓ Use customer managed policies, not inline policies.
✓ Use access levels to verify IAM permissions
✓ Set a strong password policy for your users.
✓ Enable MFA.
✓ Use IAM roles for applications running on Amazon EC2 instances
✓ Use IAM roles to transfer permissions when granting temporary
authentication to a third party
✓ Don't share the access key
✓ Rotate authentication information on a regular basis.
✓ Remove unnecessary credentials.
✓ Monitor the activity of your AWS account.
The Scope of S3
What is S3?
S3 is a very durable and highly available storage solution
for medium-long term data storage.



S3 use cases
Image data for content delivery is stored in S3 and
distributed using CloudFront.

[Diagram: a CMS uploads images to S3, and clients retrieve them via CloudFront.]
The scope of S3 questions
Frequent questions extracted from 1625 questions are as
follows
S3 storage features
✓ You will be asked to select storage that meets the storage requirements of a given scenario.
✓ You will be asked to identify the characteristics of S3.

S3 data capacity
✓ You will be asked simple questions about the data capacity limits of S3.

Storage class selection
✓ You will be asked to select an S3 storage class that meets your scenario's storage requirements.
✓ Questions will be given about lifecycle management.

S3 usage cost
✓ The cost factors of S3 will be asked.
✓ You may also be asked about the feature that bills charges to the requester.

Lifecycle management
✓ You will be asked about the appropriate way to move or delete data through lifecycle management.
The scope of S3 questions
Frequent questions extracted from 1625 questions are as
follows
Version control
✓ You will be asked about preventive measures against accidental deletion of data.
✓ MFA Delete is often part of the answer pattern.

S3 access management
✓ You will be asked the differences among bucket policies, ACLs, and IAM.
✓ Questions on the configuration of the bucket policy itself.
✓ Questions on restricting access using pre-signed URLs.

Block public access
✓ You will be asked how to make objects public on the Internet.

Cross-account access
✓ You will be asked how to allow other accounts to use your bucket.

S3 Access Point
✓ You will be asked the purpose of using S3 access points.
The scope of S3 questions
Frequent questions extracted from 1625 questions are as
follows

Static web hosting
✓ You will be asked how to set up static web hosting.

Route 53 domain settings
✓ You will be asked how to set up a Route 53 domain for static web hosting.

Cross-origin resource sharing (CORS)
✓ You will be asked how to share an S3 bucket configured as the origin for one domain with another domain.

S3 events
✓ You will be asked to select the services available for S3 events and how to implement an S3 event.

S3 encryption
✓ You will be asked about the encryption methods available for S3.
The scope of S3 questions
Frequent questions extracted from 1625 questions are as
follows
Replication
✓ You will be asked about S3 replication methods and how to configure them.

Data analysis of S3
✓ You will be asked to select AWS services that analyze data in conjunction with S3.

Recording S3 usage status
✓ You will be asked about the methods and services for checking and analyzing data usage relating to S3.

S3 consistency model
✓ You will be asked about issues caused by the S3 read and write consistency model.

S3 multipart upload
✓ You will be asked the best method for uploading large files.
The scope of S3 questions
Frequent questions extracted from 1625 questions are as
follows
S3 Transfer Acceleration
✓ You will be asked about methods to optimize data uploads to S3 globally.

Improving performance
✓ You will be asked how to streamline data retrieval and other requests, and how to configure efficient handling of large numbers of requests when uploading many objects.
[Q] S3 question range

A venture company is building a web application using EC2 instances. This
application requires storage for a large and growing number of log files
recording access and API calls. The stored data needs to be accessed
frequently, and large amounts of data must be stored at low cost.

Which of the following storage services is the most cost effective?

1) Amazon EFS
2) Amazon EBS
3) Amazon S3
4) Amazon EC2 Instance Store
S3 Storage Features
AWS offers three forms of storage services.

Block storage
✓ A disk service that attaches to EC2
✓ Saves data in block format
✓ High speed and wide bandwidth
✓ Examples: EBS, instance store

Object storage
✓ Inexpensive and durable online storage
✓ Stores data in object format
✓ Redundant across multiple AZs by default
✓ Examples: S3, Glacier

File storage
✓ A shared storage service that can be attached to multiple EC2 instances simultaneously
✓ Saves data in file format
✓ Example: EFS
S3 Storage Features
S3 stores data as objects. An object consists of the
following elements:

Key
The name of the object; it uniquely identifies the object within the bucket.

Value
The data itself, consisting of a sequence of bytes.

Version ID
The ID used for version control.

Metadata
Information about the attributes associated with the object.

Sub-resources
Support for storing and managing bucket configuration information. Example: access control lists (ACLs).
S3 Storage Features
S3 divides storage space into bucket units and stores data
in objects

[Diagram: an S3 storage space is divided into buckets (e.g., contents-buckets and website-buckets), each of which stores objects such as mp3, jpeg, html, and csv files.]
[Q] Data capacity limit of S3

You work as an engineer for a large manufacturing company. You are


currently building a document management application to efficiently store
and share the large amount of manufacturing documents in your company.
The solution is set to use S3 to store data, but you need to check the
restrictions on the stored data.

Which of the following is the correct explanation for the data storage
constraints of Amazon S3? (Please select two.)

1) The storage capacity of S3 is set at the time of bucket creation and then
scaled automatically.
2) The amount of data in storage and the number of objects that can be
stored is unlimited.
3) The maximum size of an object that can be uploaded in a single PUT is 5GB.
4) The maximum size of an object that can be uploaded in a single PUT is 5TB.
5) S3 provides file system access semantics (e.g., strong integrity and file
locking) and simultaneously accessible storage.
6) Use the mount helper to access S3.
S3 data capacity limit
S3 has unlimited storage capacity and stores objects from
0 KB to 5 TB.

Bucket
A bucket is the space in which objects are stored. Because buckets are located in a region, the name must be globally unique. The data storage capacity is unlimited and is expanded automatically.

Object
A file stored in S3; each object is assigned a URL. The number of objects that can be stored in a bucket is unlimited.

Limitations on the size of objects that can be saved
The data size per object can range from 0 KB to 5 TB.
[Q] Selecting a storage class

Company A, one of the world's four largest audit firms, produces a variety of
audit reports. These audit reports need to be kept for a certain period of
time with strong security. In addition, the data underlying the creation of
these audit reports is stored in S3 and amounts to several hundred terabytes.
The original data and audit reports are frequently accessed.

Which is the most cost-effective storage class for this use case?

1) S3 Standard-IA
2) S3 Standard
3) S3 Intelligent Tiering
4) S3 Glacier
Storage class selection
Choose a storage type according to your S3 usage.

STANDARD
✓ Durability is very high because data is duplicated in multiple locations.
✓ Suitable for storing large amounts of frequently used data.
■ Durability: 99.999999999% ■ Availability: 99.99%

STANDARD-IA
✓ IA stands for Infrequent Access; this is storage for infrequently accessed data. Unlike One Zone-IA, it is suitable for important master data.
✓ Cheaper than Standard, but more expensive than One Zone-IA.
■ Durability: 99.999999999% ■ Availability: 99.9%

One Zone-IA
✓ Storage for infrequent access, but for less critical data that tolerates lower availability, because it is not distributed across multiple AZs.
✓ Even cheaper than Standard-IA.
■ Durability: 99.999999999% ■ Availability: 99.5%

RRS (Reduced Redundancy Storage)
✓ Low-redundancy storage, formerly used for data retrieved from Glacier.
✓ Deprecated and no longer recommended; it is now more expensive than Standard.
■ Durability: 99.99% ■ Availability: 99.99%
Now more expensive than Standard
Storage class selection
Choose a storage type according to your S3 usage

Amazon Glacier
✓ Inexpensive archival storage.
✓ Data retrieval takes cost and time (standard: 3 to 5 hours).
✓ Expedited retrieval is available (roughly 2 to 5 minutes).
✓ Specified via lifecycle management.
✓ Vault Lock feature for data retention.
■ Durability: 99.999999999% ■ Availability: N/A

Amazon Glacier Deep Archive
✓ The cheapest archival storage.
✓ For data that is accessed once or twice a year.
✓ Data retrieval takes cost and time (within 12 hours).
✓ Specified via lifecycle management.
■ Durability: 99.999999999% ■ Availability: N/A

S3 Intelligent-Tiering
✓ Uses two storage tiers, frequent and infrequent access: objects accessed frequently stay in the frequent access tier (equivalent to the Standard class), while objects that are not accessed are automatically moved to the infrequent access tier (equivalent to the Standard-IA class).
✓ Use it when you do not know the access pattern.
■ Durability: 99.999999999% ■ Availability: 99.99%
[Q] S3 usage costs
You are building a web application using AWS. The application will run a
number of processes, such as storing and retrieving data, from an EC2
instance that accesses storage. After comparing the I/O performance and
latency required, it appears that S3, EBS, and EFS could all meet the
requirements. As a solution architect, you've decided to choose the
lowest-cost storage. Assume the standard or general-purpose storage class
is used for each service.

List these three storage options, in order of lowest cost, from left to right.

1) S3 standard < EBS general-purpose volume < EFS standard


2) S3 standard < EFS standard < EBS general-purpose volume
3) EBS general-purpose volume < EFS standard < S3 standard
4) EBS general-purpose volume < S3 standard < EFS standard
The cost of using S3
When comparing storage costs, excluding instance stores,
the least expensive options are S3 and Glacier.

S3 storage cost
✓ Standard: 0.025 USD/month per GB
✓ S3 Intelligent-Tiering: a combination of the Standard and Standard-IA rates
✓ Standard-IA: 0.019 USD/month per GB
✓ One Zone-IA: 0.0152 USD/month per GB
✓ Glacier: 0.005 USD/month per GB
✓ Glacier Deep Archive: 0.002 USD/month per GB

EBS general-purpose storage cost
✓ General purpose: 0.12 USD/month per GB
✓ Cold HDD: 0.03 USD/month per GB

EFS storage cost
✓ Standard: 0.36 USD/month per GB
✓ Infrequent access: 0.0272 USD/month per GB

Instance store
✓ Included in the price of an EC2 instance

https://aws.amazon.com/jp/s3/pricing/
[Q] S3 usage costs

As a solution architect, you are building an image processing application using


AWS. In this application, the user uploads images to Amazon S3 for processing.
You are using S3 Transfer Acceleration to upload large images, but the transfer
process seems to have failed.

In this scenario, how would you charge for the image transfer?

1) You only pay for what you use for S3 Transfer Acceleration to upload
images.
2) You will have to pay both the S3 transfer fee and the S3TA transfer fee for
the temporary use of the image upload.
3) Only pay S3 transfer fee to upload images
4) You don't have to pay a transfer fee to upload images.
The cost of using S3
S3 charges for data volume, requests, and data transfer.

Region
✓ Prices vary by region.

Data capacity
✓ Charges are based on the amount of data stored and the storage period (per GB).
✓ S3 Intelligent-Tiering and IA storage are billed for a minimum of 30 days.

Requests and data retrieval
✓ A fee is charged based on requests for the data (per 1,000 requests).
✓ Charges are also based on the amount of data retrieved (per GB).

Data transfer
✓ Data transfer in: free
✓ Data transfer out to the Internet: per GB
✓ Data transfer out of S3 to other AWS services: per GB

https://aws.amazon.com/jp/s3/pricing/
The cost of using S3
S3 pricing includes volume discounts: the per-GB rate decreases as the stored volume increases.

https://aws.amazon.com/jp/s3/pricing/
[Q] Life cycle management

You are a solution architect setting up and managing your company's
document management storage. The high cost of S3 storage due to the very
large amount of data being stored is a problem, and your boss has asked you
to implement cost savings. You therefore need to set up new lifecycle rules
that migrate objects to cheaper storage classes over time. However, some
lifecycle rules could not be set up.

Which of the following lifecycle rules cannot be set? (Please choose three.)

1) S3 Standard ⇒ S3 Intelligent-Tiering
2) S3 Standard-IA ⇒ S3 Intelligent-Tiering
3) S3 Standard-IA ⇒ S3 One Zone-IA
4) S3 Intelligent-Tiering ⇒ S3 Standard
5) S3 One Zone-IA ⇒ S3 Standard-IA
6) S3 Glacier ⇒ S3 Standard-IA
Life Cycle Management
Set rules that automatically move objects to a different storage class or delete them after a period of time.

Setup method
✓ Configure for a whole bucket or for a prefix
✓ Rules are queued at 0:00 UTC every day, based on the object update date, specified in days
✓ Maximum of 1,000 rules
✓ Objects must be larger than 128 KB to be moved to IA
✓ Not configurable if MFA Delete is enabled

[Diagram: objects in S3 (Standard) are automatically archived to Glacier, moved to inexpensive storage such as S3 (Standard-IA), or automatically deleted after a period of time.]
Life Cycle Management
The paths that can be configured for life cycle policies are
as follows:

Reference: https://docs.aws.amazon.com/ja_jp/AmazonS3/latest/dev/lifecycle-
transition-general-considerations.html
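A minimal sketch of one such lifecycle rule with boto3; the bucket name, prefix, and day counts are placeholders chosen to follow an allowed path (Standard → Standard-IA → Glacier → expiration).

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                # Move to Standard-IA after 30 days, then Glacier after 90 days.
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                # Delete the objects entirely after one year.
                "Expiration": {"Days": 365},
            }
        ]
    },
)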
[Q] Version control

A Silicon Valley startup uses Amazon S3 to share data among its employees.
To ensure that this data is not accidentally deleted, you need to configure
the objects so that they are protected.

As a solution architect, please select a response that can meet your


requirements. (Please select two.)

1) Enable versioning in buckets.


2) Create an event trigger when you delete an S3 object and set up a
notification by SNS.
3) Enable life cycle rules in the bucket.
4) Enable MFA deletion.
5) Enable a setting in the bucket that disables data deletion.
Version control
Even if a user accidentally deletes data, it can be restored
from an older version.

Features
✓ Version control applies to the entire bucket.
✓ Objects are stored per version, each with its own version ID.
✓ How long versions are retained is set through lifecycle management.
✓ Old versions must be deleted separately from the current object.

[Diagram: the current version (version ID 00012) and past versions (00011, 00010) of data A, B, and C are retained side by side.]
S3 MFA Delete
As an option for the versioning feature, you can require
MFA authentication when deleting objects.
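A minimal sketch of enabling versioning, and optionally MFA Delete, with boto3; the bucket name and MFA device values are placeholders. Note that MFA Delete can only be enabled by the root user via the API/CLI.

import boto3

s3 = boto3.client("s3")

# Enable versioning on the bucket.
s3.put_bucket_versioning(
    Bucket="example-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# Optionally require MFA for deletions; the MFA argument is the device
# serial number and the current one-time code, separated by a space.
s3.put_bucket_versioning(
    Bucket="example-bucket",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",
)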
[Q] Access management for S3

You are a solution architect and you are building a video sharing application
on AWS. The application is configured to host video software on an EC2
instance which processes video data stored in an Amazon S3 bucket. Access to
the video data must be restricted so that only certain users can view it.

Which settings restrict third parties from directly accessing the video data in
the bucket?

1) Set a bucket policy to allow references only from URLs in the web
application.
2) Configure the IAM role to allow only web applications to access the S3
bucket.
3) Configure ACLs to allow only web applications to access the S3 bucket.
4) Configure the setting to allow references only from URLs in the web
application by using a signed URL.
[Q] Access management for S3

You are building a data analytics system in AWS. The system takes data from
IoT sensors and stores it in an Amazon S3 bucket via Kinesis Data Firehose
as it is streamed and processed by Kinesis Data Streams. The encrypted data
must then be simply queried using SQL queries on the data in the S3 bucket
and the results must be written back to the S3 bucket. Because of the
sensitive nature of the data, fine-grained controls must be implemented for
access to the S3 bucket.

Please select a combination of solutions that can meet these requirements.


(Please select two.)

1) Query the data with Athena and store the results in buckets.
2) Query the data by Redshift and store the results in buckets.
3) Use bucket ACLs to restrict access to buckets.
4) Query the data by Amazon EMR and store the results in buckets.
5) Use a bucket policy to restrict access to buckets.
6) Use an IAM policy to restrict access to buckets.
S3 Access Management
S3 access management uses different methods for
different purposes.

IAM user policy
✓ Configures IAM users' access to S3 as an AWS resource
✓ Manages permissions for internal IAM users and AWS resources

Bucket policy
✓ Sets bucket access rights in JSON
✓ Access management, including for external users

ACL
✓ Permissions can be configured in XML on a per-bucket or per-object basis
✓ Individually configurable for each object

Pre-signed URL
✓ Grants time-limited access to an S3 object URL via a pre-signed URL generated with the AWS SDK
✓ Allows a third party to view the object at that URL over the Internet
[Q] S3 Bucket Policy

You are thinking of setting up a bucket policy to control S3 buckets. Since AWS
provides a number of sample bucket policies, you have decided to copy a bucket
policy that is close to your objective. You need to understand and customize
the contents of the copied bucket policy.

Which of the following is the correct description of the following bucket policy?

{
"Version":"2012-10-17",
"Statement":[
{
"Sid":"AddCannedAcl",
"Effect":"Allow",
"Principal": {"AWS": ["arn:aws:iam::111122223333:root","arn:aws:iam::444455556666:root"]},
"Action":["s3:PutObject","s3:PutObjectAcl"],
"Resource":"arn:aws:s3:::awsexamplebucket1/*",
"Condition":{"StringEquals":{"s3:x-amz-acl":["public-read"]}}
}
]
}
S3 Bucket Policy
A bucket policy consists of the following elements:

{
"Version":"2012-10-17",
"Statement":[
{
"Sid":"AddCannedAcl",
"Effect":"Allow",
"Principal": {"AWS": ["arn:aws:iam::111122223333:root","arn:aws:iam::444455556666:root"]},
"Action":["s3:PutObject","s3:PutObjectAcl"],
"Resource":"arn:aws:s3:::awsexamplebucket1/*",
"Condition":{"StringEquals":{"s3:x-amz-acl":["public-read"]}}
}
]
}

Version: the version of the policy language; always include it at the beginning.
Statement: the part that describes the policy content.
Sid: the statement ID, an optional identifier that the user gives to the statement.
Effect: decides whether the statement allows or denies.
Principal: specifies the target principals (e.g., IAM users, root accounts).
Action: specifies the actions to which the effect applies.
Resource: specifies the buckets to which the policy applies.
Condition: specifies the conditions under which the policy applies.

Reference: https://docs.aws.amazon.com/ja_jp/AmazonS3/latest/dev/example-bucket-policies.html
[Q] Pre-signed URL

You are a solution architect and you are building a video sharing application.
The application stores a large number of video files in an S3 bucket, which
are temporarily shared to users via an EC2 instance. At that time, only
authorized users should be able to access the video data.

Select the S3 configuration to enable this access.

1) Disable block public access to S3 buckets so that the URLs can be viewed.
2) Use CloudFront to distribute images based on the cache.
3) Use ACLs to grant access to users whose videos are shared.
4) Generate a pre-signed URL and distribute it to users whose videos are
shared.
Pre-signed URL
Pre-signed URLs make available special URLs that can
only be accessed by certain users.

[Diagram: (1) a user sends an access request to the application on EC2; (2) the application issues a pre-signed URL; (3) the user accesses S3 with the pre-signed URL; (4) access to the object is permitted.]
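Step (2) above can be implemented with a few lines of boto3, as in this minimal sketch; the bucket name, object key, and expiry are placeholders.

import boto3

s3 = boto3.client("s3")

# Generate a URL that grants GET access to one object for one hour;
# after ExpiresIn elapses, the URL stops working.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "example-bucket", "Key": "videos/sample.mp4"},
    ExpiresIn=3600,
)
print(url)  # distribute this URL only to the authorized user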
[Q] Public access

You are a solution architect for a media company. You are currently running
web media on AWS and need to set up Amazon S3 buckets to serve static
content for this web media.

Which setting is used to publish all objects uploaded to the S3 bucket to the
Internet? (Select two.)

1) Disable block public access.


2) Enable public access settings.
3) Set a bucket policy to Allow access from the Internet.
4) Set an ACL to allow access from the Internet.
5) Set an IAM policy to allow access from the Internet.
Block public access
Block Public Access setting blocks access from the Internet
and is enabled by default when the bucket is created.
[Q] Cross-account access
Your manufacturing company is a large enterprise with 5,000 employees.
With multiple departments using AWS accounts, you need to manage a large
number of AWS accounts. You have a requirement to copy an object in an S3
bucket owned by Account A to another S3 bucket belonging to Account B. As
the solution architect, you have been asked to configure the copied objects
to be owned by the destination account.

Select a configuration method to meet this requirement.

1) Enable the requester payment feature on the S3 bucket of account A and


copy it to account B.
2) Create an IAM customer management policy that allows objects in
account A to be copied to account B and set up cross-origin resource
sharing in S3.
3) Set up replication from Account A buckets to Account B buckets and set
up cross-origin resource sharing in S3.
4) Create an IAM customer management policy that allows objects in
account A to be copied to account B, and allow cross-account access in
S3 and set up for IAM users.
Cross-account Access
You can grant access to account B to a bucket owned by
account A

Reference: https://docs.aws.amazon.com/ja_jp/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html
[Q] S3 access point

A company is in the process of building a document management system on


AWS. Because this storage is used by multiple departments and multiple
applications globally, you need to configure various access control rules.
Therefore, as a solution architect, you are looking to configure S3 to simplify
the management of large data accesses to shared datasets in S3.

Select an access setting that can meet this requirement.

1) Use Amazon S3 Transfer Acceleration.


2) Use the S3 access point.
3) Use VPC endpoints.
4) Enable multipart uploading.
S3 Access Point
You can create access points based on the access
destination and apply policies to configure access settings.

[Diagram: managing access with a single bucket policy vs. managing access with per-access-point policies.]
[Q] Static Web Hosting

You are building a corporate website for your company on AWS. The site is a
simple static web site and you have deployed it on Amazon S3 to keep costs
as low as possible.

Select the correct Amazon S3 website endpoint for the resulting site. (Select
two.)

1) http://bucket-name.s3-website.Region.amazonaws.com
2) http://s3-website-Region.bucket-name.amazonaws.com
3) http://bucket-name.s3-website-Region.amazonaws.com
4) http://s3-website.Region.bucket-name.amazonaws.com
5) http://bucket-name.Region.s3-website.amazonaws.com
Static Web Hosting
If you want to build a static site, you can host an
inexpensive web page with static web hosting.

Static web hosting advantages
• You can host your website without a server.
• The price is low because you don't need a server.
• Multi-AZ redundancy comes built in, so no operations work is required.
• You can set up your own domain with Route 53.
• Delivery via CloudFront is possible.

Static web hosting disadvantages
• No dynamic sites: you cannot run a server-side scripting language.
• SSL cannot be used by itself; CloudFront is required to set up SSL.

Website endpoint
Depending on the region, Amazon S3 website endpoints take one of the following two forms:
✓ http://bucket-name.s3-website-Region.amazonaws.com
✓ http://bucket-name.s3-website.Region.amazonaws.com
Static Web Hosting
Configure static web hosting with the following steps:

1. Disable block public access.
2. Set bucket read permissions in the bucket policy.
3. Store index documents, such as index.html, in the bucket.
4. On the static web hosting configuration screen, set the index document (e.g., index.html) and enable hosting.
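Step 4 can also be performed with boto3, as in this minimal sketch (bucket and document names are placeholders):

import boto3

s3 = boto3.client("s3")

# Enable static website hosting with index and error documents.
s3.put_bucket_website(
    Bucket="example-bucket",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)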
[Q] Domain settings with Route 53

You are building a corporate website for your company on AWS. The site is a
simple static web site and you have deployed it on Amazon S3 to keep costs
as low as possible. You would also like to set up a new domain name for it
using Route 53.

Select a setting to route traffic to an S3 static website using Route 53.


(Choose two.)

1) Set the bucket and the domain to the same name.


2) Set up a domain using a CNAME record.
3) Configure the domain using an Alias record (equivalent to an A record).
4) Configure the domain using an Alias record (equivalent to an AAAA
record).
5) Set the object and the domain to the same name.
Domain configuration with Route 53
You can set up a domain on your S3 static web hosting site.

 Select the region's alias to the S3 website endpoint as the traffic
destination.
 Set up the domain using an Alias record of type A (IPv4) as the record
type.
 Leave "Evaluate target health" at its default value.
 The bucket name must be the same as the domain or subdomain name.
[Q] Cross Origin Resource Sharing (CORS)

Your company is building a document management system that uses S3.


This system is accessed by users using a domain, but you need the ability to
link files from other domains and use them.

Choose a solution to meet this requirement.

1) Global replication
2) Cross-account access
3) Cross Origin Resource Sharing (CORS)
4) S3 Access Point
Cross Origin Resource Sharing (CORS)
Resources in a bucket serving one domain (origin) can be used from websites in other domains.

Reference: https://docs.aws.amazon.com/ja_jp/sdk-for-javascript/v2/developer-guide/cors.html
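A minimal sketch of a CORS configuration with boto3; the allowed origin is a placeholder domain. This permits pages served from that domain to issue GET requests against the bucket.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_cors(
    Bucket="example-bucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["https://www.example.com"],
                "AllowedMethods": ["GET"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,  # how long browsers may cache the preflight
            }
        ]
    },
)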
[Q] S3 Event

You have a photo sharing application built on AWS. The photos are stored in
an S3 bucket and the image processing is performed by an application
hosted on multiple EC2 instances. The solution architect has configured a
mechanism to run image processing on one of the EC2 instances, depending
on the data uploaded.

How should you configure S3 and other AWS services to meet your
requirements?

1) Create an S3 event notification that is triggered by the data upload to
invoke SQS; the EC2 instances poll the processing messages from the SQS
queue and process the images in parallel.
2) Create an S3 event notification that is triggered by the data upload to
publish to SNS; the SNS message triggers the EC2 instances to process
the images in parallel.
3) Create an S3 event notification that is triggered by the data upload to
invoke a Lambda function, which triggers the EC2 instances to perform
concurrent image processing.
4) Create an S3 event notification that is triggered by the data upload to
launch SWF, which triggers the EC2 instances to perform concurrent
image processing.
S3 Event
System integration linked to S3 object operations.

S3 event notification
 Notifications triggered by events in the bucket can be sent to SNS, SQS, or Lambda.
 Enables seamless system integration linked to S3 object operations, for example:
- Message notification of a data upload to S3 via SNS
- Execution of a Lambda function triggered by the upload of an S3 object

[Diagram: data uploaded from EC2 to S3 triggers an S3 event, which sends an email to the user.]
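A minimal sketch of the SQS pattern from the question above, configured with boto3; the queue ARN is a placeholder, and the queue's access policy must separately allow S3 to send messages to it.

import boto3

s3 = boto3.client("s3")

# Send a message to an SQS queue whenever an object is created in the bucket.
s3.put_bucket_notification_configuration(
    Bucket="example-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:ap-northeast-1:111122223333:image-queue",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)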
[Q] S3 encryption.

Company A, a law firm, is in the process of building a document


management system using Amazon S3. Many of the documents stored are
highly sensitive and related to legal work, so encryption is essential. The
company's security policy dictates that they use the company's own
algorithms to encrypt them. So, as a solution architect, you are considering
the encryption method that should be used.

Which of the following encryption methods should you use?

1) CSE
2) SSE-KMS
3) SSE-S3
4) SSE-C
S3 Encryption
Select one of the following four encryption formats when
storing data in S3.

SSE-S3
✓ The easiest method; S3's default server-side encryption.
✓ Encryption keys are created and managed automatically on the S3 side; there is no need to manage keys yourself.
✓ Encrypts data using 256-bit Advanced Encryption Standard (AES-256), a block cipher.

SSE-KMS
✓ Encryption using an encryption key set in AWS KMS.
✓ Users can create and manage their own encryption keys in AWS KMS.
✓ The client's own encryption key can be used via KMS.

SSE-C
✓ Server-side encryption with a user-provided key.
✓ Complicated to set up and manage.

Client-side encryption (CSE)
✓ Client-side encryption encrypts data before it is sent to Amazon S3.
✓ Encryption keys are created and managed using AWS KMS or other tools.
✓ Uses a master key stored in the application.
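A minimal sketch of requesting SSE-S3 and SSE-KMS on upload with boto3; the bucket, keys, and KMS key alias are placeholders.

import boto3

s3 = boto3.client("s3")

# SSE-S3: S3 creates and manages the encryption key.
s3.put_object(
    Bucket="example-bucket",
    Key="doc.pdf",
    Body=b"...",
    ServerSideEncryption="AES256",
)

# SSE-KMS: encrypt with a KMS key that you manage yourself.
s3.put_object(
    Bucket="example-bucket",
    Key="contract.pdf",
    Body=b"...",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/my-s3-key",
)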
[Q] Replication

Your company utilizes AWS resources across multiple regions. It currently
has an Amazon S3 bucket in the Singapore region that stores a large amount
of data, and you would like to replicate this data to the Sydney region to
perform data backups.

Which of the following is the correct way to configure replication?

1) Enable version control in the Singapore region bucket and create a new
S3 bucket in the Sydney region to configure inter-region replication.

2) Create an S3 bucket with version control set up in the Sydney region and
configure replication from the Singapore region bucket.

3) Create an S3 bucket with version control set up in the Sydney region and
configure cross-origin resource sharing from the Singapore region bucket.

4) Enable version control in the Singapore Region bucket and create a new
S3 bucket in the Sydney Region to configure cross-origin resource sharing.
Replication
Use cross-region replication to increase resilience.

Trigger
✓ Replication is triggered by the creation, update, or deletion of objects in the bucket.

Replication setup
✓ The versioning feature must be enabled in advance on both buckets.
✓ The destination bucket is located in a different region.
✓ Bi-directional replication is possible.
✓ Data transfer costs are incurred.
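A minimal sketch of a replication configuration with boto3; bucket names and the role ARN are placeholders, versioning is assumed to already be enabled on both buckets, and the role must allow S3 to replicate on your behalf.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="source-bucket-singapore",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
        "Rules": [
            {
                "Prefix": "",  # replicate the whole bucket
                "Status": "Enabled",
                "Destination": {"Bucket": "arn:aws:s3:::backup-bucket-sydney"},
            }
        ],
    },
)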
[Q] Analysis of S3 data

Company B has configured a data lake using Amazon S3 to perform big data
analytics. As a solution architect, you want to put in place a solution that
performs big data analytics by querying data assets directly in the data lake.

Select the services that should be used in this case.

1) Analyzing Complex Queries with Redshift Spectrum


2) Parsing complex queries with Amazon Athena
3) Complex Query Analysis with S3 Select
4) Analyzing Complex Queries with Amazon EMR
Analysis of S3 data
Multiple services can be used to search and analyze
data in S3, depending on the purpose.

S3 Select (Glacier Select)
✓ An internal search function of S3 that allows you to execute queries directly against S3 and retrieve data.
✓ Works on GZIP-compressed data, CSV, and JSON.

Amazon Athena
✓ An interactive query service that makes it easy to analyze data directly in Amazon S3.
✓ Athena SQL queries can call SageMaker machine learning models and perform machine learning inference.

Amazon Macie
✓ A fully managed service that uses machine learning to discover, classify, and protect sensitive Amazon S3 data.
✓ Performs sensitive data detection and investigation.

Amazon Redshift Spectrum
✓ The ability to query data stored in Amazon S3 directly from Amazon Redshift.
✓ Recommended if you are already using Redshift, as it assumes a running Redshift cluster.
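As an illustration of S3 Select, this minimal boto3 sketch (bucket, key, and column name are placeholders) runs a SQL filter directly against a CSV object and streams back only the matching rows:

import boto3

s3 = boto3.client("s3")

resp = s3.select_object_content(
    Bucket="example-bucket",
    Key="logs/access.csv",
    ExpressionType="SQL",
    Expression="SELECT * FROM s3object s WHERE s.status = '500'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
    OutputSerialization={"CSV": {}},
)

# The response is an event stream; Records events carry the result rows.
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode("utf-8"))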
[Q] Collaboration with EMR

Company B has configured a data lake using Amazon S3 to perform big data
analytics; the web application access logs stored in S3 need to be processed
using Apache Hadoop to process the data.

Select the configuration of services that should be used to meet this


requirement.

1) Install Apache Hadoop in EC2 and analyze the data in S3.


2) Use RedShift Spectrum to process log files.
3) Use Kinesis Data Analytics to process log files.
4) Use Amazon EMR to process the log files.
Analysis of S3 data
Store big data in S3 and perform big data analysis with EMR.

[Diagram: behavioral data and log files, including genomic data, are accumulated in S3; big data analysis is performed with Apache Hadoop on EMR; the analysis results are saved back to S3.]
[Q] Check S3 Usage Status

As a solution architect, you are building a document management application


that utilizes S3 buckets. You are currently developing additional functionality
to generate reports using document data, and you need to be able to get a
detailed view of all request access to the S3 bucket and object-level
operations of the bucket.

Which is the best way to meet your requirements?

1) Set up CloudWatch logs for Amazon S3 buckets.


2) Enable the S3 Access Analyzer for Amazon S3 buckets.
3) Enable server access logging for Amazon S3 buckets.
4) Configure CloudTrail for Amazon S3 buckets.
Check S3 Usage Status
You can check the status of S3 usage and the occurrence
of S3 events.

S3 Storage Class Analysis

 Simplified visualization of data access patterns


 Output in CSV format
 Perform in-bucket analysis.
 Identify infrequently accessed data and retention periods for lifecycle
policy settings.
Server access log
Allows you to log access to S3. Set buckets and prefixes
as targets.
S3 Access Analyzer
Check if the access status of S3 complies with the access
policy, and monitor for unauthorized access

✓ S3 features in conjunction with the IAM Access Analyzer


✓ Monitoring for policy violations in accordance with bucket
policy/ACLs
✓ Analyze public or shared bucket access and display the
results of that analysis.
✓ Check the actual access to the bucket.
[Q] S3 Consistency Model

Your company runs a web application on AWS. The application stores log files
in Amazon S3. This log file is used for real-time processing of ad displays, so
there are frequent read operations, but when changes occur to the log file,
the old log file is read.

Which is the most likely cause of this problem?

1) The S3 bucket replaces an existing object, and attempts to read the


object immediately may return old data until the changes are fully
reflected.
2) With S3 buckets, reading errors may occur when trying to read the object
immediately after replacing an existing object.
3) Because the S3 bucket uses a strong consistency model, it cannot read
object data that is being updated, so it displays old data.
4) The S3 bucket can't read object data that is being updated unless object
sharing is set up, so you'll see old data.
S3 Data Consistency model
S3 provides a strong consistency model for data registration, update, and deletion.

Data registration
✓ Read-after-write consistency: data is reflected immediately after registration.

Update / Delete
✓ Since December 2020, the eventual consistency model has been replaced by a strong consistency model, so no discrepancies occur.
[Q] Data consistency check for uploads.

An AI venture is building an AI-based facial recognition application. To


achieve facial recognition, millions of images are stored in an S3 bucket,
which is then used to learn facial recognition. In order to register a new user
for the face recognition, the user's face picture must be added to the S3
bucket. It is important that the uploaded image is not changed and is saved
with integrity.

What do you need to do to confirm that the object was saved successfully?

1) Enable S3 bucket integrity checking.


2) Get the HTTP200 result code and MD5 checksum from the S3 API call.
3) Set up an S3 event and implement Amazon SNS message notifications
after the upload.
4) Set a hash value in the S3 prefix to check for consistency.
Data consistency check at upload
You can use the Content-MD5 header to check the
integrity of uploaded objects.

1. Compute the base64-encoded MD5 checksum of the object.

2. Send it with the upload request; S3 verifies the integrity of the uploaded object against it.

When the upload is signed with AWS Signature Version 4, you must
use the x-amz-content-sha256 header instead.
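A minimal sketch of steps 1 and 2 with boto3; the file and key names are placeholders. S3 recomputes the checksum on receipt and rejects the PUT if it does not match, so a successful (HTTP 200) response confirms integrity.

import base64
import hashlib
import boto3

s3 = boto3.client("s3")

with open("face.jpg", "rb") as f:
    data = f.read()

# Base64-encoded MD5 digest of the object body (step 1).
md5_b64 = base64.b64encode(hashlib.md5(data).digest()).decode("utf-8")

# Upload with the Content-MD5 header so S3 can verify the body (step 2).
s3.put_object(
    Bucket="example-bucket",
    Key="faces/face.jpg",
    Body=data,
    ContentMD5=md5_b64,
)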
[Q] Increase the speed of uploads.

You are a solution architect and are building a video sharing application on
AWS. The application is configured to host a video processing application on
an EC2 instance that uses video data stored in an Amazon S3 bucket. The
users are global and large amounts of data are uploaded. This has led to
significant delays in uploading large video files to the destination S3 bucket,
resulting in complaints.

Select a method to increase the speed of file uploads to S3. (Please select
two.)

1) Use Amazon S3 Transfer Acceleration to speed up the upload of files to


the destination S3 bucket.
2) Use Direct Connect to speed up file uploads to S3.
3) Use AWS Global Accelerator to speed up the upload of files to the
destination S3 bucket.
4) Use AWS Transit Gateway to speed up file uploads to the destination S3
bucket.
5) Use multi-part uploads to speed up the uploads.
Multi-part upload
The ability to split a large object into several parts for upload.

[Diagram: 1 TB of original data is split into ~300 MB parts and uploaded to S3 in parallel.]

• Split into 1 to 10,000 parts; objects of up to 5 TB can be uploaded.
• Each part is 5 MB to 5 GB in size (only the final part may be smaller than 5 MB).

If an upload fails:
◼ If you stop an upload partway, the part data remains in the bucket.
◼ Cleanup of incomplete parts can be configured with lifecycle management.
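In practice the high-level boto3 transfer API performs the split automatically; this minimal sketch (file, bucket, and thresholds are placeholders) uploads a large file as concurrent ~300 MB parts, as in the diagram above.

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Files larger than multipart_threshold are uploaded as multipart parts
# of multipart_chunksize bytes over up to max_concurrency connections.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=300 * 1024 * 1024,
    max_concurrency=10,
)

s3.upload_file("video.mp4", "example-bucket", "videos/video.mp4", Config=config)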
S3 Transfer Acceleration
Use the geographically closest edge location to perform
fast data uploads.
[Q] Improved performance

Currently, Company B, an SCM services provider, is building a new SCM


solution on AWS. The solution requires uploading a large amount of data to
S3 buckets to share documents recorded during supply chain runs. These
upload requests can range from hundreds to as many as 2,000 simultaneous
requests, and processing must be streamlined for performance.

As a solution architect, choose the best performance improvement strategy.


(Please select two.)

1) Create unique custom prefixes within a single bucket and upload a daily
file with those prefixes.
2) Upload files with Transfer Acceleration enabled within a single bucket.
3) Enable S3 multipart upload and run the upload process.
4) Upload a file that creates a random custom prefix using a hash in a single
bucket.
Improved performance
Improve performance with parallel requests and custom
prefixes.

Executing parallel requests
• Scale parallel requests horizontally to Amazon S3 service endpoints, distributing the load across multiple network paths.
• High-throughput transfers are possible with applications that GET or PUT data over multiple connections simultaneously.

Custom prefixes
• Use date-based sequential naming with custom prefixes to spread load and optimize performance.
• S3 supports at least 3,500 PUT/COPY/POST/DELETE requests and 5,500 GET/HEAD requests per second per prefix.
Backup
S3 objects can be backed up to and restored from Glacier.

Archive
• Move S3 object data to Glacier through lifecycle settings.

Restore
• Individual objects can be restored on a per-object basis.
• Restored objects are temporarily replicated for a specified number of days.
• A recovery speed can be selected; the restoration period is charged by Glacier.

Data linkage (metadata overhead)
• S3: 8 KB of object metadata
• Glacier: 32 KB of object metadata
Batch operation
Create batch processing for large amounts of S3 object
data.

Job
✓ Jobs are the basic unit of S3 batch operations; creating a job creates a batch operation.
✓ A job contains all the information needed to perform a given operation on a list of objects.
✓ Pass a list of objects to the S3 batch operation and specify the actions to be performed on those objects.

Manifest
✓ A manifest is an Amazon S3 object that lists the object keys on which Amazon S3 acts.
✓ Specify the manifest object key, ETag, and optional version ID.
✓ Two formats: an Amazon S3 Inventory report or a CSV file.
The Scope of EC2
What is EC2?
Instantly create a server on the Internet that has the same
performance as a server in an on-premises environment.
[Diagram: servers in an on-premises data center correspond to EC2 and RDS running in the cloud (Internet).]
The Scope of the EC2 questions
Frequent questions extracted from 1625 questions are as
follows
EC2 features
✓ You will be asked to select EC2 instances to match the requirements of a scenario.

EC2 cost
✓ You will be asked how to reduce the cost of using EC2.

Use of AMIs
✓ You will be asked how to make an AMI available in a different region.
✓ You will be asked how to efficiently launch optimal EC2 instances using an AMI.

Selecting an instance type
✓ You will be asked the optimal instance type to meet the requirements of a case scenario.

User data
✓ You will be asked about settings for automatic configuration using a script when launching an EC2 instance.
The Scope of the EC2 questions
Frequent questions extracted from 1625 questions are as
follows
Tag settings
✓ You will be asked about the ability to add additional information to an EC2 instance.

Key pair usage
✓ You will be asked about the authentication method used to access EC2 instances.
✓ You will be asked how to use a key pair in other accounts and regions.

Internet access
✓ You will be asked to configure the settings and methods required to access launched EC2 instances via the Internet.

Instance purchase options
✓ You will be asked about selecting cost-effective purchase options for instances.

Reserved Instance features
✓ You will be asked the types and characteristics of Reserved Instances.
✓ You will be asked about the features of Reserved Instances, how to sell them, and how to change their attributes.
The Scope of the EC2 questions
Frequent questions extracted from the 1,625 questions are as follows:

Spot Instance features
✓ You will be asked to select the features of a Spot Instance based on a given scenario.

Use of Spot Fleet
✓ You will be asked how to configure Spot Instances using a Spot Fleet.

Use of Spot block
✓ You will be asked to select a Spot block to meet the requirements of a given scenario.

Use of EC2 Fleet
✓ You will be asked to select an EC2 Fleet to meet the requirements of a given scenario.

Use of Placement Group
✓ You will be asked to select a cluster placement group to meet the requirements of a given scenario.
✓ You will be asked to select the appropriate type of placement group.
The Scope of the EC2 questions
Frequent questions extracted from the 1,625 questions are as follows:

Enhanced networking
✓ You will be asked how to configure the network of EC2 instances for high performance.

Elastic Fabric Adapter
✓ You will be asked to select the Elastic Fabric Adapter to enable use cases such as HPC workloads.

Run Command
✓ You will be asked to select Run Command as a way to run Windows server commands from the console.

Automatic EC2 Recovery
✓ You will be asked about the status of an EC2 instance after it is recovered, based on a scenario.
✓ You will be asked about methods of backing up EC2.

Instance Status
✓ You will be asked about troubles at instance start-up.
✓ You will be asked about the status of an instance with respect to stopping and starting.
The Scope of the EC2 questions
Frequent questions extracted from the 1,625 questions are as follows:

Hibernation
✓ You will be asked about the purpose of hibernating an EC2 instance.

Obtaining Metadata
✓ You will be asked how to get metadata from an EC2 instance.
[Q] EC2 Features

A venture company runs a web application on AWS. This web application uses RDS as its database and runs a batch job every day at 7 a.m. The job processes business-operation log files, working through about 2,000 records sequentially via a shell script. Each record takes about 1–5 seconds to process, so the load on the batch job is high.

Which computing engine should be used to run this batch job?

1) AWS Lambda
2) Amazon EC2
3) Amazon EMR
4) Fargate
EC2 Features
A virtual server available on a pay-as-you-go basis (billed by the hour or by the second) that can be launched in minutes

◼ Start up, add or remove nodes, and change machine specifications in minutes
◼ EC2 uses a generic Intel architecture
◼ Available with administrative privileges
◼ Supports most operating systems, including Windows and Linux
◼ Automatic configuration up to the operating system layer by selecting one of the provided types; the layers above the operating system are at your disposal
◼ Create, save, and reuse OS settings as your own Amazon Machine Image (AMI)
EC2 Features
The unit of EC2 is called an instance; an instance is launched in an AZ of your choice and used as a server.

[Diagram: an EC2 instance launched in one AZ of the Tokyo Region]
[Q] EC2 Cost

A leading e-commerce company uses numerous EC2 instances to run its e-commerce sites and business processes. The cost of using these EC2 instances has become enormous, and you have been asked to drive cost optimization. You need to review how EC2 instances are billed and determine the best way to respond.

Which of the following are correct explanations of when EC2 costs are incurred? (Please select two.)

1) Costs are incurred if an on-demand instance is placed on hold.
2) Costs are also incurred while a spot instance is preparing to stop.
3) No costs are incurred while an on-demand instance is stopped.
4) Costs are incurred even if a reserved instance is in the terminated state.
5) Costs are incurred while an on-demand instance is preparing to go into a stopped or hibernated state.
EC2 Cost
The cost of using EC2 is determined by the price range of the instance type and the purchase option.

There are various fees based on the purchase options.

Purchase Options
✓ On-demand: standard price
✓ Reserved / Savings Plan: discounts for reservations and pre-payment
✓ Spot Instances: up to a 90% discount

Instance Type and Size
✓ The price is determined by the instance type and size.
✓ Example: a1.medium costs 0.0255 USD/hour
EC2 Cost
The price varies by region, and data transfer out is charged in addition to usage time.

Time-based billing
✓ Pay by the hour or by the second (minimum 60 seconds)
✓ Linux instances are charged by the second
✓ Other instances are charged by the hour

Data transfer
✓ Data transfer in: free
✓ Data transfer out to the Internet: charged per GB
✓ Data transfer out of S3 to within AWS: charged per GB

Volume
✓ You are also charged for the amount of data in attached EBS volumes. Note that even if you stop the instance, you will still be charged for the EBS volume.
✓ There is no separate charge for the instance store.
EC2 Cost
Whether costs are incurred depends on the state of the EC2 instance.
EC2 Cost
Stopping an EC2 instance can reduce charges

Running / Start / Restart
✓ Charges accrue for the time the instance is running.
✓ There is a charge for the EBS volumes in use.

Stop / Hibernate
✓ EC2 fee accrual stops.
✓ There is still a charge for the EBS volumes in use.

Terminate
✓ EC2 fee accrual stops.
✓ By default, the EBS volume set as the root volume is also deleted, and its fee accrual stops.
How to start EC2
Steps to launch an EC2 instance

Select AMI (OS Settings)

Select an instance type

Configuring Instance Type Details

Select Storage

Add Tags

Select a security group

Set the key pair


[Q] Use of AMI

Company B has decided to use AWS as the standard IT infrastructure across the company, including its subsidiaries and group companies. To do so, they need to prepare a standard AMI and make it available to different AWS accounts. As a solution architect, you have to make an AMI built in the Tokyo region available in the Singapore region as well. The Singapore region account is a separate account.

Which of the following AMI features can be used to address this requirement? (Please choose two.)

1) An AMI can be copied between AWS regions.
2) An AMI can be shared between AWS regions.
3) An AMI that uses encrypted snapshots is not available.
4) An AMI cannot be shared with another AWS account.
5) An AMI can be shared with another AWS account.
6) Key pairs can also be shared together.
6) Key pairs can also be shared together.
Select AMI (OS Settings)
You select the OS settings through an AMI.

AMI: an OS image used to launch instances

[Diagram: AMI sources include AMIs provided by AWS, third-party AMIs, and your own custom AMIs created from an EC2 instance and saved to S3]
[Q] Use of AMI

You have been asked to launch a large number of EC2 instances to run workload tasks. To perform these tasks efficiently, you need to automate the deployment of new computing resources that share the same configuration and state.

Which approaches are appropriate to meet this requirement? (Select three.)

1) Executing a bootstrap script
2) Using the AMI provided by AWS
3) Sharing EC2 instances by copying an AMI
4) Using a golden image
5) Using a launch template
6) Using activation settings
Use of AMI
EC2 instances can be launched, backed up, and shared through an AMI.

OS Selection
✓ Use an AMI as the OS choice for your servers.
✓ An AMI can also be used to restore the server you were using.

EC2 backup
✓ You can create an AMI from an existing EC2 instance's configuration.
✓ Save an EC2 instance's configuration as a backup, including a snapshot of the EBS volume.

Golden image
✓ An AMI that reflects the optimal EC2 instance configuration is called a golden image.
✓ You can always launch the best instance by using the AMI with the optimal EC2 instance configuration.

Sharing an AMI with other accounts
✓ You can share an AMI with other accounts by granting permission to the specified AWS account numbers.

Moving to another region
✓ An AMI is only available within its region.
✓ Copying to another region is possible; the copied AMI is separate from the original AMI.
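A sketch of both operations (region copy and cross-account sharing) with boto3; the AMI ID, regions, and account number are placeholders:

import boto3

# Copy an AMI from Tokyo to Singapore; the copy gets a new AMI ID.
ec2_sg = boto3.client("ec2", region_name="ap-southeast-1")
copy = ec2_sg.copy_image(
    Name="standard-corporate-ami",
    SourceImageId="ami-0123456789abcdef0",   # placeholder source AMI
    SourceRegion="ap-northeast-1",
)

# Share the copied AMI with another AWS account by adding a launch permission.
ec2_sg.modify_image_attribute(
    ImageId=copy["ImageId"],
    LaunchPermission={"Add": [{"UserId": "222222222222"}]},  # placeholder account
)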
How to start EC2
Steps to launch an EC2 instance

Select AMI (OS Settings)

Select an instance type

Configuring Instance Type Details

Select Storage

Add Tags

Select a security group

Set the key pair


Selecting an Instance Type
When selecting an instance type, you choose the server resources, such as CPU, memory, storage, and network capacity.
[Q] Select the instance type

A major e-commerce company is using AWS to build a web application. The application needs the ability to analyze customer information and present the best products to customers. To do so, it runs workloads that require high sequential read and write access to very large data sets on local storage.

Which of the following is the best instance type to use in this scenario?

1) Storage-optimized instances
2) Memory-optimized instances
3) Compute-optimized instances
4) General-purpose instances
Instance type
An instance type name combines the family and generation with the instance capacity, e.g. t2.nano: family and generation "t2", capacity "nano".
Instance Type
Select the instance type according to the use case.

General Purpose
Family: A1, M5, T3, etc.
Provides balanced compute, memory, and network resources for a variety of workloads. These instances are ideal for applications that use an instance's resources in equal proportions, such as web servers and code repositories.

Compute optimized
Family: C5, C6g, etc.
Used for compute-bound applications that require high-performance processors. Use cases include batch processing workloads, media transcoding, high-performance web servers, high performance computing (HPC), scientific modeling, dedicated game servers and ad server engines, and machine learning inference.

Memory optimized
Family: X1, R5, high memory, z1d, etc.
These instances are optimized for the fast performance needed for workloads that process large data sets in memory.

Storage optimized
Family: H1, D2, I3, I3en, etc.
These instances are suitable for workloads that require high sequential read and write access to large data sets on local storage. Storage-optimized instances are ideal for low-latency random I/O operations with tens of thousands of IOPS.

Accelerated computing
Family: P3, Inf1, G4 (GPU), F1 (FPGA), etc.
Accelerated computing instances are ideal for software that uses hardware accelerators (co-processors) to perform functions such as floating-point computation, graphics processing, and data pattern matching more efficiently than the CPU alone.
How to start EC2
Steps to launch an EC2 instance

Select AMI (OS Settings)

Select an instance type

Configuring Instance Type Details

Select Storage

Add Tags

Select a security group

Set the key pair


[Q] How can I use user data?

A company is building a web application on AWS that uses EC2 instances. When launching these EC2 instances, the Apache server used by all instances must be configured automatically.

Select the EC2 instance feature that should be used to meet this requirement.

1) User data
2) Metadata
3) Tags
4) Enable the automatic setting function
Use of user data
You can configure a script to be executed when an EC2 instance launches by using user data.

User data
✓ User data is used to automate the detailed configuration of EC2 instances.
✓ You can set up a Bash script, for example, to be executed at instance launch.

Bootstrap
✓ A process performed at startup by passing user data to the instance.
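A minimal sketch of passing a bootstrap script as user data when launching an instance with boto3; the AMI ID, key pair name, and package choice are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Shell script executed once at first boot (cloud-init runs it as root).
user_data = """#!/bin/bash
yum install -y httpd
systemctl enable --now httpd
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder Amazon Linux AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",            # placeholder key pair
    UserData=user_data,               # boto3 base64-encodes this for you
)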
Use of user data
In the advanced details of the instance configuration, you can set an execution script as user data.
How to start EC2
Steps to launch an EC2 instance

Select AMI (OS Settings)

Select an instance type

Configuring Instance Type Details

Select Storage

Add Tags

Select a security group

Set the key pair


Storage choices
Add storage to be used directly by EC2.
Storage choices
There are two types of storage used directly by EC2: the inseparable instance store and independently managed EBS.

Instance store
✓ Block-level physical storage on disks embedded in the host computer, inseparable from EC2.
✓ Holds temporary EC2 data; the data is deleted when the instance is stopped or terminated.
✓ Free of charge.

Elastic Block Store (EBS)
✓ Network-attached block-level storage managed independently of EC2.
✓ If you terminate the instance, EBS can retain the data, and snapshots can be stored in S3.
✓ An additional EBS fee is required.
How to start EC2
Steps to launch an EC2 instance

Select AMI (OS Settings)

Select an instance type

Configuring Instance Type Details

Select Storage

Add Tags

Select a security group

Set the key pair


[Q] Tagging

Your company has multiple departments using AWS, and a variety of users use it. The company therefore needs to manage AWS resources effectively. As a solution architect, you have set up a classification that identifies Amazon EC2 resources by department.

Which AWS feature can be used to classify many AWS resources?

1) Parameters
2) Metadata
3) Tags
4) User data
Tag settings
You can set tags to give AWS resources names and to group resources such as EC2 instances.
How to start EC2
Steps to launch an EC2 instance

Select AMI (OS Settings)

Select an instance type

Configuring Instance Type Details

Select Storage

Add Tags

Select a security group

Set the key pair


Security group
Provides a firewall feature that lets the user control which traffic is allowed to reach the instance.

[Diagram: a security group in front of an EC2 instance permits SSH access on port 22, while other access such as HTTP is blocked]
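As a sketch, an inbound rule like the one in the diagram can be added with boto3; the security group ID and the CIDR allowed to connect are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Allow inbound SSH (TCP port 22) from a specific corporate CIDR only.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # placeholder security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "office"}],
    }],
)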
How to start EC2
Steps to launch an EC2 instance

Select AMI (OS Settings)

Select an instance type

Configuring Instance Type Details

Select Storage

Add Tags

Select a security group

Set the key pair


[Q] Using key pairs

You have created an AWS account and launched your first Linux EC2 instance.
You need to access this instance and install the server software to configure
it as a web server. To do so, you will have to access the instance and
configure it from your local terminal.

Select the authentication method you want to use to securely access your
instance.

1) Key pair
2) Access key
3) Secret access key
4) ID and Password
Key Pair Usage
A key pair is used to access an instance: the public key on the instance is matched with the private key you hold.

[Diagram: a key pair consists of a private key and a public key; the private key is used to access the EC2 instance]
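A sketch of creating a key pair and saving the private key locally with boto3; the key name and file path are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Create a key pair; AWS stores the public key, and the private key
# material is returned only once, in this response.
resp = ec2.create_key_pair(KeyName="my-key-pair")  # placeholder name

with open("my-key-pair.pem", "w") as f:
    f.write(resp["KeyMaterial"])
# Restrict permissions (chmod 400 my-key-pair.pem) before using it with SSH.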
[Q] Launch template

As a solution architect, you have been asked to standardize your company's internal AWS usage. By preparing a typical EC2 instance configuration in advance, you want to streamline the regular manual launch of EC2 instances and reduce management overhead. To do so, you need to save the EC2 instance's AMI selection, instance type, key pair, security groups, and other settings.

Select the EC2 feature that meets this requirement.

1) Use a launch template.
2) Use configuration settings.
3) Use an AMI.
4) Use a configuration group.
Launch template
A launch template lets you save the detailed settings of an instance launch as a template.

[Diagram: a launch template used for instance launches and by Auto Scaling]
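A sketch of saving such settings with boto3's create_launch_template; all names and IDs are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Save AMI, instance type, key pair, and security group as a reusable template.
ec2.create_launch_template(
    LaunchTemplateName="standard-web-server",        # placeholder name
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",          # placeholder AMI
        "InstanceType": "t3.micro",
        "KeyName": "my-key-pair",
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)

# Instances (or Auto Scaling groups) can then launch from the template:
ec2.run_instances(
    LaunchTemplate={"LaunchTemplateName": "standard-web-server",
                    "Version": "$Latest"},
    MinCount=1,
    MaxCount=1,
)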
[Q] Internet access

As a solution architect, you have created a new AWS account and are configuring your IT infrastructure. You created a new subnet in an Amazon VPC and launched an Amazon EC2 instance on that subnet. You attempted to access the EC2 instance directly from the Internet to set it up, but cannot make a connection.

What steps do you need to take to fix the failed connection to the EC2 instance? (Please choose two.)

1) Install a NAT gateway on the public subnet.
2) Ensure the rules for outbound traffic are properly configured in the security group.
3) Assign a public IP address to the instance.
4) Configure the instance with a private IP address.
5) Ensure a route to the Internet gateway is properly configured in the route table associated with the subnet.
Internet access
Use a public IP to access the launched instance.
Internet access
If you are unable to access an EC2 instance from the Internet, the following are possible causes.

No public IP address
✓ A public IP address is automatically assigned when an instance is launched in the default subnet.
✓ On user-created subnets with default settings, "auto-assign public IP" is not enabled.
✓ If no public IP was assigned, recreate the instance or use an Elastic IP.

Access permission settings
✓ Appropriate permissions have not been configured in security groups or network ACLs.
✓ There may be problems with your on-premises network environment.

Network misconfiguration
✓ The instance is not on a public subnet (no Internet gateway configured for the subnet and VPC).
✓ There is no route to the IGW in the route table.
[Q] Instance purchase options

Your company is looking to migrate its IT infrastructure hosted in a data center to the AWS cloud. Your company owns licenses for the server software used by your applications and wants to continue using those licenses after moving to AWS. As a solution architect, you are considering the best migration destination for the servers.

Choose the most cost-effective instance purchase method.

1) Use a Dedicated Host.
2) Use a bare-metal instance.
3) Use an on-demand instance.
4) Use a Reserved Instance.
Instance purchase options
Discounted prices are offered based on the instance purchase option you choose to fit your needs.

On-demand instance
✓ The standard instance purchase option.
✓ Pay by the time used for computing capacity, without a long-term contract. You have full control over the instance lifecycle and decide when to start, stop, hibernate, reboot, or terminate it.

Reserved Instance
✓ Amazon EC2 Reserved Instances (RIs) offer a significant discount (up to 75%) compared to regular on-demand rates in exchange for reserving a one- or three-year term of use.
✓ There are two types: capacity can be reserved for use in a particular availability zone, or the discount can apply across a region.

Scheduled Reserved Instance
✓ You could purchase recurring capacity reservations on a daily, weekly, or monthly basis, with a specified start time and duration, over a one-year term, reserving capacity in advance so it was available when needed.
✓ Used for workloads that do not run continuously but execute on a regular schedule.
→ No longer available as of 2021.

Spot instance
✓ Spot instances are unused EC2 capacity held by AWS, available at a lower price than any other option. Users can request unused EC2 instances at a steep discount (up to about 90% off).
✓ Used when there is flexibility in execution time and for processes that can be interrupted.
Savings Plans
Reduce the cost of Amazon EC2 by committing to a certain amount of usage over a period of one or three years.

◼ A discount applied by signing a contract to use a specific amount of compute (measured in USD/hour) for a one- or three-year period, similar to a reserved instance.
◼ Save up to 72% on AWS compute usage.
◼ Applicable to Amazon EC2, AWS Fargate, and AWS Lambda.
Capacity Reservations
Capacity Reservations can be combined with each purchase option and are often used together with a reserved instance.

Capacity Reservations
✓ The right to reserve the ability to launch a given instance type. Reserving capacity in advance prevents insufficient-capacity errors at launch time.

On-Demand Capacity Reservations
✓ Used to reserve capacity at on-demand usage rates for only the period of time required.

Zonal Reserved Instance
✓ Reserves capacity in a designated availability zone (AZ) for a one- or three-year term.
✓ Applies only to the specific designated AZ.

Regional Reserved Instance
✓ A reservation within a given region for one or three years.
✓ Usable in any AZ within the specified region.
Capacity Reservations
Reserve capacity for an EC2 instance in a specific availability zone for a given period of time.

Period
• On-Demand Capacity Reservations: no commitment required; can be created and cancelled as needed
• Reserved Instance: requires a fixed 1-year or 3-year commitment
• Savings Plan: requires a fixed 1-year or 3-year commitment

Capacity reservation
• On-Demand Capacity Reservations: capacity in a specific AZ can be reserved
• Reserved Instance: capacity can be reserved in a specific AZ, or the reservation can apply across the region
• Savings Plan: none

Billing discount
• On-Demand Capacity Reservations: none
• Reserved Instance: yes
• Savings Plan: yes

Instance constraints
• On-Demand Capacity Reservations: limited by the number of on-demand instances per region
• Reserved Instance: limit of 20 per AZ or region, but you can apply to raise the limit
• Savings Plan: none
Physical instance options
Instance types that give users some degree of control over the underlying physical host server.

Dedicated Instance
✓ An EC2 instance running on hardware dedicated to one customer's VPC.
✓ Physically separated, at the host hardware level, from instances belonging to other AWS accounts.
✓ Instances of the same AWS account may share hardware with other instances of that account.

Dedicated Host
✓ A physical server with EC2 instance capacity fully dedicated to the user.
✓ Allows you to use existing software licenses that are bound to the server.

Bare Metal
✓ Instances with direct access to the processor and memory of the underlying server.
✓ The OS can directly access the underlying hardware while still integrating with various AWS services.
[Q] Reserved instance features

Your company built a web application after purchasing a standard Reserved Instance, paid upfront for a 3-year term. However, the application was so buggy that the company decided to hastily discontinue it. More than two years remain on the reserved instance term. As the solution architect, you need to stop accruing charges for the reserved instance as soon as possible.

What cost-effective measure should you choose in this situation?

1) Sell the reserved instance on the AWS Marketplace.
2) Sell the reserved instance on the Amazon Marketplace.
3) Since the reserved instance was purchased upfront on a 3-year contract, the charges have already been incurred and you have no choice but to use it as is.
4) Contact AWS to cancel your AWS subscription.
Reserved instance features
You can use the instance for a specified term at up to 72% less than on-demand.

Term and discount
• Standard: 1 year (40% discount), 3 years (60% discount)
• Convertible: 1 year (31% discount), 3 years (54% discount)

Can change AZ / instance size / network type
• Standard: Yes
• Convertible: Yes

Can change instance family / OS / tenancy / payment options
• Standard: No
• Convertible: Yes

Can be sold on the Reserved Instance Marketplace
• Standard: Yes
• Convertible: No

Use cases
 Workloads with steady-state or predictable usage
 Applications that require capacity reservations, such as disaster recovery planning
[Q] Spot instance features
Company B has built a web application using AWS services. This web application has recently been experiencing increased load and slow processing. As a solution architect, you have configured Auto Scaling to use spot instances to handle the temporary load increase.

Which of the following statements are true regarding the functionality of a spot instance? (Please choose two.)

1) If the spot request is persistent, the spot instance is launched again after it has been interrupted.
2) Canceling an active spot request also terminates the associated instance.
3) If the spot request is persistent, stop the spot instance and then start it again.
4) Spot blocks can be interrupted in the same way as spot instances.
5) Canceling an active spot request does not terminate the associated instance.
Spot instance features
EC2 instances built from spare computing capacity, available at a discount (up to 90% off) compared to on-demand instances

◼ The cheapest option (up to 90% discount): spot instances are spare capacity held for AWS operations that users can temporarily borrow.
◼ Startup can take a little longer than usual.
◼ Instances can be interrupted and reclaimed mid-process, so they suit temporary workloads such as backup jobs.

Request interruption and related behavior
◼ If the spot request is persistent, the spot instance restarts after it has been interrupted.
◼ Spot blocks are designed to run without interruption.
[Q] Use of Spot fleet

Your company has a weekly batch-processing workload that runs for about two hours. To increase cost efficiency, this workload requires automatically selecting and launching the lowest-priced instances based on specified instance types and a bid price.

Which is the most cost-effective solution that meets this requirement?

1) Run the workload on a spot instance
2) Run the workload on a reserved instance
3) Run the workload on Scheduled Reserved Instances
4) Run the workload on a spot fleet
Use of Spot Fleet
By specifying instance types and a bid price, a Spot Fleet automatically selects and launches the lowest-priced instances.

[Example Spot Fleet setup]

✓ Number of instances: 10
✓ Bid price: $1.00
✓ Instance types: c4.16xlarge, c3.8xlarge

From the c4.16xlarge and c3.8xlarge instance types, 10 instances are automatically bid on and launched.
[Q] Use of Spot block

Your company has a weekly batch-processing workload that runs for about two hours. The workload should use spot instances to be cost-effective, but the two-hour job must not be stopped while in progress. As a solutions architect, you are considering the best instance option.

Which of the following options is the most cost-effective solution?

1) Run spot instances with a spot fleet
2) Run spot instances with a spot block
3) Run spot instances with an EC2 Fleet
4) Run spot instances with an EC2 block fleet
Use of Spot block
A Spot block keeps a spot instance available for 1–6 hours without interruption.

Advantages
✓ No interruptions for up to six hours.
✓ Stabilizes the use of spot instances that might otherwise be interrupted mid-process.
✓ Can be configured as an option of a spot fleet.

Disadvantage
✓ The price is slightly higher than a regular spot instance, so it is not as cheap.
[Q] Use of EC2 fleet

A leading e-commerce company is building an e-commerce application that will be accessed by a large number of users globally. Estimating the performance requirements, they need 20 instances in the medium to long term, plus an additional 10 instances for load flexibility and for background jobs such as batch jobs.

What is the best way to launch a total of 30 instances in a cost-optimized manner?

1) Configure a spot fleet of 20 reserved instances and 10 scheduled instances.
2) Configure an EC2 fleet of 20 on-demand instances and 10 scheduled reserved instances.
3) Configure an EC2 fleet of 20 reserved instances and 10 spot instances.
4) Configure a spot fleet of 20 on-demand instances and 10 scheduled reserved instances.
Use of EC2 Fleet
A mechanism for defining an instance group consisting of on-demand and spot instances.

An EC2 Fleet configuration must specify the following points:
• Which instance types to use
• What combination of on-demand and spot instances to use
• What the spending cap will be
[Q] Use of Placement groups

A university is running data analysis on AWS. This analysis requires high-performance server processing, and high-performance network communication among the multiple EC2 instances used for the computation is also essential.

Which EC2 instance configuration should you use to run this application?

1) Configure a partition placement group for the EC2 instances.
2) Configure an EC2 fleet with the EC2 instances.
3) Configure a spot fleet with the EC2 instances.
4) Configure a spread placement group for the EC2 instances.
5) Configure a cluster placement group for the EC2 instances.
Use of Placement Groups
The ability to logically group instances to improve performance or resilience.

Cluster placement group
✓ A configuration that logically groups instances within a single AZ.
✓ Can span peered VPCs in the same region.
✓ Instances in the group are placed in the same network segment, which has a higher per-flow TCP/IP throughput limit and higher bisection bandwidth, improving communication between instances.
✓ Suited to applications that need low network latency and high network throughput.

Partition placement group
✓ Amazon EC2 divides each group into logical segments called partitions.
✓ Each partition in a placement group has its own set of racks; no two partitions in a placement group share the same rack.
✓ Isolating racks isolates and mitigates the impact of hardware failures within the application.

Spread placement group
✓ A group of instances placed separately on distinct racks, each with its own network and power supply.
✓ For example, within a single AZ, seven instances placed in a spread placement group will be placed on seven different racks.
✓ Keeps a small number of critical instances separate from each other, reducing the risk of simultaneous failures that can occur when instances share the same rack.
[Q] Enhanced networking

A university is running genome data analysis on AWS. Genome analysis requires high-performance server processing, and high-performance network communication among the multiple EC2 instances used for the computation is also essential. As a solution architect, you need to configure EC2 instances that ensure proximity, low latency, and high network throughput.

Which configurations are necessary to accomplish this requirement? (Please choose three.)

1) Use enhanced networking for the EC2 instances.
2) Use an EBS provisioned IOPS volume.
3) Use a compute-optimized EC2 instance type.
4) Use a cluster placement group.
5) Use a Dedicated Host.
Enhanced networking
Provides high bandwidth, high packet-per-second (PPS) performance, and consistently low inter-instance latency.

Reference: https://aws.amazon.com/jp/premiumsupport/knowledge-center/enable-configure-enhanced-networking/
[Q] Using Elastic Fabric Adapter

A university runs genomic data analysis in an on-premises environment. Genomic analysis requires high-performance server processing and uses high performance computing (HPC). As a solution architect, you are considering moving these workflows from the on-premises infrastructure to the AWS cloud.

Which network component should be used by the EC2 instances running the HPC workflow?

1) Elastic Network Interface
2) Elastic Fabric Adapter
3) Elastic Network Adapter
4) Elastic IP Address
Using Elastic Fabric Adapter
A network device for EC2 instances that accelerates high performance computing (HPC) and machine learning applications.

• ENA provides the traditional IP networking capabilities needed to support VPC features.
• An EFA adds OS-bypass capabilities on top of ENA: through the Libfabric API, HPC and machine learning applications can bypass the operating system kernel and communicate directly with the EFA device.
[Q] Run Command

You are an engineer responsible for internal AWS operations in your company. You are running an EC2 instance set up as a Windows server. You need to run a PowerShell script on this Windows server, and it must be run from the AWS Management Console.

Select a method to run the script on the target EC2 instance from the AWS Management Console.

1) AWS Trusted Advisor
2) AWS CLI
3) Run Command
4) AWS OpsWorks
Run Command
Execute commands, such as PowerShell scripts and Windows Update settings, from the management console.

Reference: https://aws.amazon.com/blogs/aws/new-ec2-run-command-remote-instance-management-at-scale/
[Q] Automatic recovery of EC2

Your company runs a large web application with over 30 EC2 instances. This application needs to operate as automatically as possible. As a solution architect, you use Amazon CloudWatch alarms to automatically recover EC2 instances when they fail.

Which is the correct description of an auto-recovered instance's status?

1) The public IPv4 address set on the instance changes to a different address when the instance is recovered.
2) The public IPv4 address configured on the instance is maintained after recovery.
3) The recovered instance retains the instance ID, private IP address, Elastic IP addresses, and all metadata.
4) Any data that was in memory before the instance recovery is preserved.
EC2 Recovery
It is important to back up your EC2 instances on a regular basis.

◼ Take regular backups (AMIs/snapshots).
◼ Regularly review the recovery process.
◼ Deploy critical applications across multiple AZs.
◼ Monitor instance status with CloudWatch.
- If a status check fails, use a CloudWatch alarm action to automatically recover the instance.
- After auto-recovery, the status and IP addresses are the same as on the original instance.
◼ Configure dynamic IP addressing when the instance is started.
[Q] Stop and start of the instance

You are a solution architect performing maintenance on an EC2 instance. You tried to restart a stopped EC2 instance, but it immediately changed from the pending state to the terminated state.

What are the most likely causes? (Please choose two.)

1) The EBS volume limit was exceeded.
2) The EBS snapshot is corrupted.
3) The EBS snapshot is encrypted.
4) This is a copied snapshot of the EBS snapshot.
5) The EBS volume is insufficient.
Restarting the Instance
The status of the instance transitions as shown below.

Reference: https://docs.aws.amazon.com/ja_jp/AWSEC2/latest/UserGuide/ec2-instance-lifecycle.html

✓ On an instance-store-backed instance, data is lost and the host may change when the instance is stopped and restarted.
✓ An instance fails to start in the following cases:
1. The snapshot is corrupted.
2. The EBS volume limit is exceeded.
3. The key for an encrypted snapshot is missing.
4. A part required by an instance-store-backed AMI is missing.
Restarting the Instance
The instance states are as follows:

Pending — The instance is preparing to enter the running state; an instance enters pending when it is first launched or when it is started from the stopped state. (Not charged)
Running — The instance is running and ready to use. (Charged)
Stopping — The instance is in the process of stopping or hibernating. (Not billed when preparing to stop; billed when preparing to hibernate)
Stopped — The instance is shut down and unavailable; it can be started again at any time. (Not charged)
Shutting-down — The instance is being prepared for termination. (Not charged)
Terminated — The instance has been permanently deleted and cannot be started. (Not charged)
[Q] Use of hibernation

You have launched an EC2 instance. This EC2 instance needs to be temporarily stopped for maintenance, and the data held in memory must be preserved when you do so.

Which feature should be configured on the EC2 instance to meet this requirement?

1) Use an AMI.
2) Use the EC2 instance reboot configuration.
3) Restart the EC2 instance.
4) Use hibernation.
Use of hibernation
Hibernation preserves the pre-stop state so it is restored at the next start.

Hibernation features
✓ The contents of main memory are saved to a hard disk or other storage device before shutdown; at the next start they are loaded back into main memory, so the system starts in the same state as before it was stopped.
✓ Restoring the pre-stop state at restart makes post-restart setup easier.

Available instance types
✓ The availability of hibernation depends on the instance type.
✓ Initially this was only possible with M3, M4, M5, C3, C4, C5, R3, R4, and R5 running Amazon Linux 1, but Amazon Linux 2 and Windows are now supported as well.
[Q] Retrieving metadata

As the solution architect, you have configured new IT infrastructure in your AWS account: you created a new subnet in an Amazon VPC and launched an Amazon EC2 instance in that subnet. You are connected to the instance via SSH and need to get the instance's public IP from within a shell script running on the instance's command line.

Choose the correct URL path to get the public IP of your instance.

1) http://169.254.169.254/latest/meta-data/public-ipv4
2) http://169.254.169.254/latest/user-data/public-ipv4
3) http://254.169.254.169/latest/meta-data/public-ipv4
4) http://254.169.254.169/latest/user-data/public-ipv4
Obtaining Metadata
To get instance metadata, query the following URL:

http://169.254.169.254/latest/meta-data/

The IP address 169.254.169.254 is a link-local address, valid only from within the instance.
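A minimal sketch of reading the public IP from inside an instance using only the Python standard library (an IMDSv1-style request; under IMDSv2 a session token would be fetched first):

from urllib.request import urlopen

# Link-local metadata endpoint, reachable only from within the instance.
URL = "http://169.254.169.254/latest/meta-data/public-ipv4"

with urlopen(URL, timeout=2) as resp:
    public_ip = resp.read().decode()

print(public_ip)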
The scope of VPC
What is VPC?
VPC is a virtual network service that allows users to carve out a dedicated area of the AWS cloud network.

[Diagram: your own network space carved out of the AWS cloud network space]
The scope of the VPC questions
Frequent questions extracted from the 1,625 questions are as follows:

VPC Settings (Default VPC)
✓ You will be asked about the configuration of the default VPC.

VPC Settings (VPC Wizard)
✓ You will be asked about the configuration method using the VPC wizard used to set up a VPC.

Subnet mask settings
✓ You will be asked about CIDR configuration using subnet masks.

Gateway Settings
✓ You will be asked about the use of the various gateways installed in a VPC and subnet.

Internet gateway
✓ You will be asked how to set up and utilize an Internet gateway.
The scope of the VPC questions
Frequent questions extracted from the 1,625 questions are as follows:

NAT Gateway
✓ You will be asked how to set up a NAT gateway.
✓ You will be asked about the differences between a NAT instance and a NAT gateway.

VPC Endpoints
✓ You will be asked how to integrate with AWS services using VPC endpoints.

VPC peering
✓ You will be asked about the use and configuration of VPC peering, which connects one VPC to another.

Network ACL
✓ You will be asked about the characteristics of network ACLs and security groups, including the differences between them.
✓ You will be asked about network ACL configuration.

Connecting to services in a VPC
✓ You will be asked about the connection methods used to access AWS services installed in a VPC.
✓ You will be asked how to set up those connection methods.
The scope of the VPC questions
Frequent questions extracted from the 1,625 questions are as follows:

Subnet Configuration
✓ You will be asked how to configure subnets.
✓ You will be asked about the optimal deployment of AWS resources using public and private subnets.

Bastion server configuration
✓ You will be asked how to configure access to resources in a private subnet via a bastion server.

VPC Flow Logs
✓ You will be asked about the role of VPC flow logs.

Use of DNS in a VPC
✓ You will be asked about the configuration of DNS name resolution applied in VPCs.

Elastic IP
✓ You will be asked about the role of Elastic IPs and how they are billed.
The scope of the VPC questions
Frequent questions extracted from the 1,625 questions are as follows:

IP Floating
✓ You will be asked about floating IPs, a mechanism to reduce downtime when switching between EC2 instances.

ENI
✓ You will be asked about the role of an ENI and how it is attached.
Virtual Private Cloud (VPC)
VPC is a service that creates a logically separate section within the AWS cloud where you can build a user-defined virtual network.

✓ Build a virtual network by selecting a desired IP address range.
✓ You have full control over your virtual networking environment: creating subnets, setting up route tables, network gateways, and so on.
✓ Networks inside and outside the cloud can be connected to each other as needed.
✓ Multiple connection options are available:
- Internet-based VPN
- Leased line (Direct Connect)
Virtual Private Cloud (VPC)
A VPC can be set up within a single AZ.

[Diagram: a VPC containing one AZ]

Virtual Private Cloud (VPC)
A VPC can include resources in multiple AZs within the same region.

[Diagram: a VPC spanning AZ 1 and AZ 2]
Subnets and VPCs
A combination of a VPC and subnets creates the network space. A VPC must be set up with at least one subnet.

[Diagram: a VPC with one subnet (10.0.1.0/24) in an AZ, containing an EC2 instance]
[Q] VPC setting (default VPC)

You have opened a new AWS account and launched an EC2 instance for the first time. You did not configure a VPC, so this EC2 instance was launched into the default VPC. You need to make sure that the instance has both a private DNS hostname and a public DNS hostname.

How are DNS hostnames assigned when using the default VPC? (Please choose two.)

1) An instance in a non-default VPC is assigned a private DNS hostname but not a public DNS hostname.
2) An instance in a non-default VPC is assigned a public DNS hostname and a private DNS hostname.
3) An instance in a VPC is assigned a private DNS hostname but not a public DNS hostname.
4) An instance in the default VPC is assigned a public DNS hostname and a private DNS hostname.
5) An instance in the default VPC is assigned neither a public nor a private DNS hostname.
VPC Settings (Default VPC)
When you create an AWS account, a default VPC and default subnets are automatically generated in each region.

✓ A default VPC is automatically created with a /16 IPv4 CIDR block (172.31.0.0/16), providing up to 65,536 private IPv4 addresses.
✓ A default subnet of size /20 is created in each availability zone, giving up to 4,096 addresses per subnet, some of which are reserved for use by Amazon.
✓ An Internet gateway is created and attached to the default VPC.
✓ A default security group is created and associated with the default VPC.
✓ A default network access control list (ACL) is created and associated with the default VPC.
✓ The default DHCP option set for the AWS account is associated with the default VPC.
✓ Instances are given both public and private DNS hostnames.
[Q] VPC configuration (VPC wizard)

You have opened a new AWS account and decided to configure your VPC first. The VPC wizard lets you quickly set up the most commonly used configurations. You need a network configuration with a web server that requires public access and a database server limited to private access for increased security. You decide to use the VPC wizard to select the configuration closest to the one you need.

Which of the following configurations cannot be selected in the VPC Wizard?

1) VPC with a single public subnet
2) VPC with one public subnet and one private subnet
3) VPC with one public subnet, one private subnet, and hardware VPN access
4) VPC with one public subnet and hardware VPN access
5) VPC with a single private subnet and hardware VPN access
VPC Configuration (VPC Wizard)
The VPC Wizard allows you to instantly select commonly
used VPC configurations.
VPC settings: manual configuration
If you don't use the VPC wizard, you must create each piece in sequence:

Create a VPC (CIDR setting) → Create a subnet → Configure the gateway → Set the Internet route → Set traffic permissions to your VPC (network ACLs)
Classless Inter-Domain Routing (CIDR)
CIDR notation uses a subnet mask to control how many IP addresses are available.

[Notation]
196.51.XXX.XXX/16

The /16 means the first 16 bits from the left are fixed as the network portion; the remaining bits are available for host addresses within the subnet.
[Q] Subnet mask settings
You are planning to set up a new VPC and an IT infrastructure with two public subnets and two private subnets. You need to set up CIDR-based IPv4 addressing and create the subnets within a single VPC. The CIDR for a subnet must make at least 200 IP addresses available.

Select the subnet mask that provides the optimal number of IP addresses for the CIDR, without over-provisioning.

1) /21
2) /22
3) /23
4) /24
CIDR
A VPC's CIDR range can be set between /16 and /28.

CIDR
When a /16 VPC is divided, the combinations of subnet count and IP addresses per subnet are as follows:

Subnet mask | Number of subnets | IP addresses per subnet (available on AWS)
/18         | 4                 | 16,379
/20         | 16                | 4,091
/22         | 64                | 1,019
/24         | 256               | 251
/26         | 1,024             | 59
/28         | 4,096             | 11
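The "available on AWS" figures are the subnet size minus the five addresses AWS reserves in every subnet (listed on the next slide). A quick sanity check with Python's standard library:

import ipaddress

# A /24 subnet has 256 addresses; AWS reserves 5 in every subnet
# (network address, VPC router, DNS, future use, and the last address).
subnet = ipaddress.ip_network("10.0.1.0/24")
usable_on_aws = subnet.num_addresses - 5

print(subnet.num_addresses, usable_on_aws)  # 256 251 -> enough for 200 hosts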
CIDR
Some addresses in each subnet are reserved for use by AWS:

Host address Purpose

.0 Network address

.1 VPC router

.2 DNS services provided by Amazon

.3 Addresses reserved in AWS

.255 Broadcast address


[Q] Create a subnet

You have opened a new AWS account and decided to configure your VPC first. The VPC wizard lets you quickly set up the most commonly used configurations. You need a network configuration with a web server that requires public access and a database server limited to private access for increased security.

Which of the following are correct descriptions of Amazon VPC subnets? (Please select two.)

1) Each subnet is contained within a single availability zone.
2) A subnet can span multiple availability zones.
3) Each subnet is automatically associated with the main route table of the VPC.
4) Each subnet is configured with its own main route table, which is automatically associated with the VPC.
5) Each subnet is configured with an Internet gateway by default.
Subnet
A subnet is a network segment divided off by a CIDR range.

Public Subnet (e.g. 10.0.1.0/24): a subnet whose traffic is routed to the Internet gateway.
Private Subnet (e.g. 10.0.2.0/24): a subnet with no route to the Internet gateway.
Subnet
Multiple subnets can be placed in a VPC, but each subnet is limited to a single AZ.

[Diagram: a public subnet (10.0.1.0/24) and a private subnet (10.0.2.0/24) inside one AZ]

The default maximum number of subnets per VPC is 200.
Granting of CIDR
VPCs and subnets are assigned a CIDR (IP address range) that determines their network range.

[Diagram: a VPC (10.0.0.0/16) containing a public subnet (10.0.1.0/24) and a private subnet (10.0.2.0/24) in an AZ]
Subnet
The type of a subnet is determined by the presence or absence of a route to the Internet gateway: a public subnet routes traffic to the Internet gateway, while a private subnet has no such route.
[Q] Configure the gateway

You are planning to set up a new VPC with two public subnets and two private subnets. Hosts in the private subnets need to connect to the Internet using the IPv6 protocol. Access initiated from the Internet must be denied, while traffic initiated toward the Internet is allowed through.

What mechanism must be set up to enable this connection?

1) Egress-only Internet gateway
2) Internet gateway
3) NAT gateway
4) Customer gateway
Gateway Settings
The gateways that can be created and managed in the VPC console are as follows:

Internet gateway
✓ A gateway to the Internet, often used as the default gateway.

NAT Gateway
✓ A gateway that enables resources in a private subnet to initiate traffic to the Internet.

Egress-only Internet gateway
✓ An Internet gateway for IPv6.
✓ Allows the VPC to send traffic to the Internet over IPv6 while preventing connections initiated from the Internet to instances.

Customer gateway
✓ A gateway used to connect to an on-premises environment.
✓ It provides AWS with information about the customer's gateway device or software application.

Virtual private gateway
✓ A virtual private gateway is the router on the Amazon side of a VPN tunnel.
✓ Used for VPN connections.
[Q] Internet Gateway

You are planning to set up a new VPC with one public subnet and one private subnet. To allow access to the Internet using IPv4 addresses, one subnet must be configured to function as a public subnet.

Which configuration is required for the public subnet?

1) Set up a route to an egress-only Internet gateway.
2) Set up a route to an Internet gateway.
3) Set up a route to a NAT gateway.
4) Set up a route to a customer gateway.
Internet gateway
An Internet gateway is needed to connect to the Internet from a public subnet.

[Diagram: a VPC (10.0.0.0/16) where the public subnet (10.0.1.0/24) reaches the Internet through an Internet gateway, while the private subnet (10.0.2.0/24) has no such route]
Internet gateway
Establish a route to the Internet gateway in the route table.

◼ Attach an Internet gateway to the VPC.
◼ Configure a route to the Internet gateway in the public subnet's route table.
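A sketch of those two steps with boto3; all resource IDs are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Create an Internet gateway and attach it to the VPC.
igw = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw, VpcId="vpc-0123456789abcdef0")

# Route all IPv4 Internet-bound traffic from the public subnet's route
# table to the Internet gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",  # route table of the public subnet
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw,
)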
[Q] NAT Gateway

Your company hosts a news media delivery application on AWS. The application has a problem: the back-end servers run in a single AZ, and the NAT gateway is also deployed only in that same AZ.

Select the best AWS architecture configuration to solve this problem. (Please choose two.)

1) Configure public and private subnets in one AZ and install a NAT gateway in each public subnet.
2) Configure public and private subnets in two AZs and install a NAT gateway in each public subnet.
3) Configure public and private subnets in two AZs and install a NAT instance in each public subnet.
4) Route from the private subnet to the NAT gateway (or NAT instance) in each AZ.
5) Set up a route from a single private subnet to a single NAT gateway (or NAT instance).
NAT Gateway
A NAT gateway in a public subnet is required for instances in a private subnet to connect to the Internet.

[Diagram: instances in the private subnet (10.0.2.0/24) reach the Internet through a NAT gateway in the public subnet (10.0.1.0/24)]
NAT Gateway
Establish a route to the NAT gateway in the route table.

◼ Install a NAT gateway in the public subnet.
◼ Configure a route to the NAT gateway in the private subnet's route table.
[Q] NAT instance vs. NAT gateway

You are planning to set up a new VPC with two public subnets and two private subnets. You are currently configuring instances in the private subnets to initiate outbound IPv4 traffic to the Internet, and you need to configure a NAT instance for this network access.

Which of the following are correct descriptions of the characteristics of a NAT instance? (Please choose three.)

1) Security groups can control traffic on a NAT instance.
2) Network ACLs can control traffic on a NAT instance.
3) A NAT instance can be used for port forwarding.
4) NAT instances are managed on the AWS side.
5) You cannot select the instance type of a NAT instance.
NAT instance vs. NAT gateway
NAT gateways are provided as a managed service on the AWS side, and are more redundant and easier to manage.
[Q] VPC endpoints

You are a solution architect trying to access DynamoDB, which sits outside the VPC, from an EC2 instance within the VPC. The instance must make API calls to DynamoDB, and per your security policy, the API calls must not traverse the Internet.

Which configuration accomplishes this requirement?

1) Create a gateway endpoint and add a route table entry for the endpoint.
2) Create an interface endpoint and add a route table entry for the endpoint.
3) Create a private-type endpoint and add a route table entry for the endpoint.
4) Create an endpoint ENI in each VPC subnet.
5) Create a VPC peering connection between the VPC and DynamoDB.
VPC Endpoints
VPC endpoints provide an entry point so that AWS services with global IP addresses can be accessed directly from within the VPC.

[Diagram: an EC2 instance in a public subnet (10.0.0.0/24) needs access to S3, which sits outside the VPC]

[Diagram: the EC2 instance reaches S3 through a VPC endpoint]
VPC Endpoints
The gateway type applies to S3 and DynamoDB only; many other services use PrivateLink (interface) endpoints.

Gateway-type endpoint
✓ A gateway that is specified as a route table destination for traffic destined to the supported AWS services.
✓ Applicable to DynamoDB and S3 only.

PrivateLink-type endpoint (interface type)
✓ An elastic network interface with a private IP address from the subnet's IP range that serves as an entry point for traffic destined to supported services.
✓ Uses a private IP address to access the service privately.
✓ AWS PrivateLink keeps all network traffic between the VPC and the service on the Amazon network.
✓ Applicable to many AWS services, such as RDS and EC2.
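A sketch of creating a gateway endpoint for S3 and wiring it into a route table with boto3; the region and resource IDs are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="ap-northeast-1")

# Gateway endpoint for S3: adds a route for the S3 prefix list to the
# given route tables, so traffic to S3 never leaves the AWS network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.ap-northeast-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)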
VPC Endpoints
The gateway type adds special routing to the subnet's route table so the VPC can communicate with the external service.

Features:
 Access control: set up an endpoint policy
 Fee: free
 Redundancy: handled by AWS

[Diagram: the route table directs S3-bound traffic from the EC2 instance to the VPC endpoint (vpce-9j9kh9)]
VPC Endpoints
PrivateLink creates a private IP address for the endpoint in the subnet and routes by DNS name resolution.

Features:
 Access control: set up a security group
 Fee: chargeable
 Redundancy: Multi-AZ design

[Diagram: the EC2 instance reaches RDS through an endpoint ENI (10.0.0.100) in the subnet]
[Q] VPC Peering

Your company has multiple applications in multiple regions. Each uses a separate VPC, but the applications need to work together, so these VPCs must be connected so that the different applications can communicate with each other.

Which of the following is the most cost-effective solution for this use case?

1) Use a VPC peering connection
2) Use an Internet gateway
3) Use a VPN connection
4) Use Direct Connect
VPC Peering
VPC peering enables traffic routing between two VPCs.

 Enables peer connections between VPCs in different AWS accounts.
 Peer connections between VPCs in different regions are also possible.
 There is no single point of failure or bandwidth bottleneck.
[Q] Network ACL

Your company runs an AWS-based web application. Recently, there has been a spike in traffic attempting unauthorized access. The unauthorized access attempts come from several fixed IP addresses that belong to the same CIDR range.

Choose the protection measure that directly counters this access.

1) Deny the CIDR in the inbound settings of a network ACL, using a rule number smaller than the other rules.
2) Deny the CIDR in the outbound settings of a network ACL, using a rule number smaller than the other rules.
3) Deny the CIDR in the inbound settings of a security group, using a rule number smaller than the other rules.
4) Deny the CIDR in the outbound settings of a security group, using a rule number smaller than the other rules.
Network ACL
Add access control via network ACLs.

[Diagram: in a VPC (10.0.0.0/16), network ACLs are applied at the boundary of the public subnet (10.0.5.0/24, web server) and the private subnet (10.0.10.0/24, DB server), while security groups are applied to each EC2 instance inside]
Network ACL
Traffic control uses security groups or network ACLs.

Security group settings
◼ Applied on a per-instance basis.
◼ Stateful: if inbound is allowed, the return (outbound) traffic is automatically allowed; the connection state is remembered.
◼ Only allow rules can be specified, inbound and outbound.
◼ By default, only communication within the same security group is allowed.
◼ All rules are evaluated and applied.

Network ACL settings
◼ Applied on a per-VPC/subnet basis.
◼ Stateless: an inbound rule alone does not allow the outbound response.
◼ Both allow and deny rules can be set, inbound and outbound.
◼ All communication is allowed by default.
◼ Rules are applied in numerical order.
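A sketch of the deny rule from the question above, added with boto3; the network ACL ID and the attacker CIDR are placeholders, and the low rule number ensures it is evaluated before higher-numbered allow rules:

import boto3

ec2 = boto3.client("ec2")

# Deny all inbound traffic from the offending CIDR; rule number 50 is
# evaluated before higher-numbered allow rules (e.g., 100).
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # placeholder network ACL ID
    RuleNumber=50,
    Egress=False,                 # inbound rule
    Protocol="-1",                # all protocols
    RuleAction="deny",
    CidrBlock="203.0.113.0/24",   # placeholder attacker CIDR
)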
[Q] Network ACL

You have built a VPC and created two subnets. You are now setting up network ACLs and plan to use the default network ACL created when the VPC was set up.

Which of the following are correct descriptions of the default network ACL settings? (Please select two.)

1) A default inbound rule denies all traffic.
2) A default outbound rule denies all traffic.
3) A default inbound rule allows all traffic.
4) A default outbound rule allows all traffic.
5) There is a default outbound rule that allows traffic to the Internet gateway.
Network ACL
The default configuration of the network differs between
default and custom

The first time the VPC


✓ All inbound traffic is configured to be allowed.
is set to
✓ All outbound traffic is configured to be allowed.
Default NACL

Create a custom one ✓ All inbound traffic is set to be rejected.


NACL Default Settings ✓ It is configured to deny all outbound traffic.
[Q] Network ACL settings

You have built a VPC and created two subnets, and you are now in the process of setting up your network ACLs.

What happens when a web server on a subnet with this network ACL applied is accessed from 121.103.215.159?

1) An SSH connection from 121.103.215.159 is allowed.
2) An SSH connection from 121.103.215.159 is rejected.
3) You can access the website via HTTP from 121.103.215.159.
4) You cannot access the website via HTTP from 121.103.215.159.
5) You can access the website via HTTPS from 121.103.215.159.
Configuring Network ACLs
Traffic settings use security groups or network ACLs.
[Q] Configuration by subnet

You plan to host a web application on AWS. First, you created a VPC and launched an EC2 instance on a public subnet to serve as your web server. You also set up another EC2 instance on a separate subnet to host the MySQL database, which the web server connects to.

How should you set up your database for safety reasons? (Choose two.)

1) Place the database server on a private subnet.
2) Place the database server on a public subnet.
3) Specify the IP address of the web server in the security group of the DB-side instance and allow only the MySQL port number.
4) Specify the IP address of the web server in the security group of the web-server-side instance and allow only the MySQL port number.
5) Specify the IP address of the web server in IAM database authentication and set up the web-server-side instance to allow only the MySQL port number.

Configuration with subnets
AWS resources that require increased security should be placed on a private subnet.

[Diagram: a VPC (10.0.1.0/16) spanning two AZs; an internet gateway connects to a bastion server in the public subnet (10.0.4.0/24), which reaches RDS in the private subnet (10.0.5.0/24)]
[Q] Connecting to services in a VPC: SSH

You have opened a new AWS account and decided to configure your VPC first. You can use the VPC wizard to quickly set up a commonly used VPC configuration. You need a network configuration for a database server that is limited to private access for increased security, and you need to set up a bastion server on a public subnet that is accessed only from the corporate data center via SSH.

Which is the best way to accomplish this? (Please select two.)

1) Launch an EC2 instance on the public subnet.
2) Launch an EC2 instance on a private subnet.
3) Give the instance a security group that only allows access on port 22 from the IP address of the corporate data center, and implement access with a PEM key.
4) Give the instance a security group that only allows access on port 22 from the IP address of the corporate data center, and implement access with an access key.
5) Give the instance a security group that only allows access on port 22 from the IP address of the corporate data center, and enforce access by user ID and password.

[Q] Connecting to services in a VPC: RDP

You have decided to open a new AWS account and configure a VPC first. You plan to set up a web server with limited private access for increased security, and you want to use a bastion server with Microsoft Remote Desktop Protocol (RDP) access to limit administrative access to all instances.

How should you implement the bastion server configuration? (Please select two.)

1) Launch an EC2 instance with an Elastic IP address on the public subnet.
2) Launch an EC2 instance with an Elastic IP address on the private subnet.
3) Launch an EC2 instance with a public IP address on the public subnet.
4) Launch an EC2 instance with a private IP address on the private subnet.
5) In the security group, allow RDP access to the EC2 instances from corporate IP addresses only on port 22.
6) In the security group, allow RDP access to the EC2 instances from corporate IP addresses only on port 3389.
Connecting to services in the VPC
Network ACL and security group permissions are required to connect to services in the VPC.

SSH connection:
✓ SSH is the protocol used for standard connections to instances.
✓ Allow SSH on port 22, specifying the connecting IP address in the security group/network ACL.
✓ Specify a public IP address/EIP and use a PEM key to access the instance.

RDP connection:
✓ RDP is the connection protocol for remote desktops.
✓ Install a bastion server on the public subnet and give it an Elastic IP.
✓ Allow RDP on port 3389, specifying the connecting IP address in the security group/network ACL.
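A minimal boto3 sketch of the security group rules described above (the group ID and corporate CIDR are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Allow SSH (22) and RDP (3389) to the bastion only from the corporate CIDR.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical bastion security group
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "198.51.100.0/24",
                       "Description": "corporate SSH"}]},
        {"IpProtocol": "tcp", "FromPort": 3389, "ToPort": 3389,
         "IpRanges": [{"CidrIp": "198.51.100.0/24",
                       "Description": "corporate RDP"}]},
    ],
)
```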
Bastion server
A bastion server is required to connect to instances in the private subnet. A NAT gateway is required for return traffic.

[Diagram: a VPC (10.0.1.0/16) spanning two AZs; an internet gateway connects to a bastion server and a NAT gateway in the public subnet (10.0.4.0/24), which reach an EC2 instance in the private subnet (10.0.5.0/24)]
[Q] VPC flow logs

You have a VPC set up and are using AWS resources. You have multiple EC2 instances running in the VPC for your web application, and you are balancing traffic with an ELB. As part of your monitoring, you need to capture information about the traffic reaching the ELB.

Choose the best method for collecting this data.

1) Enable VPC flow logging for the EC2 instances with which the ELB is associated.
2) Use Amazon CloudWatch Logs to review the logs from the ELB.
3) Enable VPC flow logging on the network interface associated with the ELB.
4) Enable VPC flow logging for the subnets where the ELB is running.

VPC Flow Logs
VPC flow logging captures network traffic and enables it to be monitored with CloudWatch.

 Traffic originating from or destined for a network interface is captured.
 Logs are obtained for traffic that has been accepted or rejected by security group and network ACL rules.
 Traffic is collected, processed and stored in a time frame called the capture window (about 10 minutes).
 Network interface traffic for RDS, Redshift, ElastiCache and WorkSpaces can also be captured.
 No additional charge.
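For reference, a minimal boto3 sketch of enabling a flow log on the ELB's network interface, as in the question above (the ENI ID, log group and role ARN are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Capture accepted and rejected traffic on the ELB's ENI and deliver it to
# a CloudWatch Logs group.
ec2.create_flow_logs(
    ResourceIds=["eni-0123456789abcdef0"],  # hypothetical ELB network interface
    ResourceType="NetworkInterface",        # "VPC" and "Subnet" are also valid
    TrafficType="ALL",                      # ACCEPT, REJECT, or ALL
    LogGroupName="elb-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)
```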
[Q] Using DNS in a VPC

You have opened a new AWS account and launched an EC2 instance with a custom VPC configured. You want to use this EC2 instance as a web server and set up a custom domain called Pintor.com. As a solutions architect, you want to use Route 53's private hosted zone feature to make this happen.

Which of the following VPC settings must be enabled? (Please choose two.)

1) enableDnsHostnames
2) enableDnsSupport
3) enableVpcSupport
4) enableVpcHostnames
5) enableDnsDomain

Using DNS in a VPC
Instances launched in a VPC need to be configured to receive a public DNS hostname corresponding to their public IP address.

enableDnsHostnames:
✓ Indicates whether an instance with a public IP address gets a corresponding public DNS hostname.
✓ If this attribute is true and the enableDnsSupport attribute is also true, instances in the VPC get DNS hostnames.

enableDnsSupport:
✓ Indicates whether DNS resolution is supported.
✓ If this attribute is false, the Amazon Route 53 Resolver server, which resolves public DNS hostnames to IP addresses, does not work.
✓ If this attribute is true, queries to the DNS server provided by Amazon (IP address 169.254.169.253) or to the reserved IP address (the VPC IPv4 network range plus two) succeed.
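A minimal boto3 sketch of enabling both attributes (the VPC ID is hypothetical; note that modify_vpc_attribute accepts only one attribute per call):

```python
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # hypothetical VPC ID

# Both attributes must be true for instances to receive DNS hostnames and
# for a Route 53 private hosted zone to resolve inside the VPC.
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsHostnames={"Value": True})
```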
[Q] Elastic IP

As a solutions architect, you are looking to reduce AWS costs. When you use Cost Explorer to review the cost details, you discover that you are being charged for Elastic IP addresses that should be available for free.

Why have you been charged for an Elastic IP address?

1) The Elastic IP has not been released, and it is not attached to an EC2 instance.
2) The Elastic IP has not been released, and it is attached to an EC2 instance.
3) The free time on the Elastic IP has been exceeded.
4) The number of free uses of the Elastic IP has been exceeded.

Elastic IP
An Elastic IP is an additional IP address that can be used statically. For an instance to access the internet, it uses a public IP or an Elastic IP.

Public IP:
✓ Dynamic public IPv4 address.
✓ If the instance is stopped, the IP address changes.
✓ If public IP address allocation is enabled in the VPC, it is automatically assigned to resources in the VPC.
✓ Free of charge.

Elastic IP:
✓ Static public IPv4 address.
✓ The IP address does not change even if the instance is stopped.
✓ Create the Elastic IP in the VPC console and then attach it to the required service.
✓ Free while the IP is in use; if you hold it unused without releasing it, you will be charged.
[Q] IP floating

You are building an application that is hosted on an EC2 instance. A non-functional requirement of this application is that if the EC2 instance fails, traffic must be switched to another EC2 instance to continue processing. Once the application was up and running, the EC2 instance failed; traffic could be switched to another instance, but there was downtime.

Select the method you should implement to solve this problem.

1) Use IP floating with an ENI.
2) Use IP floating with an EFA.
3) Use IP floating with an ELB.
4) Use an Elastic IP for IP floating.
IP floating
The ability to automatically reassign an Elastic IP to eliminate downtime in the event of a failure.

[Diagram: an EIP is attached to one EC2 instance; (1) a failure occurs, and (2) the EIP is automatically switched to another EC2 instance]
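A minimal boto3 sketch of the floating-IP switch (the allocation and instance IDs are hypothetical); a monitoring script or Lambda function would run this on failure detection:

```python
import boto3

ec2 = boto3.client("ec2")

# Re-point the Elastic IP at the healthy standby instance.
ec2.associate_address(
    AllocationId="eipalloc-0123456789abcdef0",  # hypothetical EIP allocation
    InstanceId="i-0123456789abcdef1",           # hypothetical standby instance
    AllowReassociation=True,  # move the EIP even though it is attached elsewhere
)
```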
[Q] ENI

You are building an application that is hosted on an EC2 instance. You have implemented a configuration for this instance using a private IP address and MAC address, and if the primary instance is terminated, the ENI must be attached to a standby secondary instance so that traffic flow can resume within seconds. To do so, you use a "warm attach" of the ENI to the EC2 instance.

Select the correct description of a warm attach.

1) Attaching the ENI to a stopped instance
2) Attaching the ENI to an instance in the startup process
3) Attaching the ENI to a running instance
4) Attaching the ENI when the instance is idle

ENI
An Elastic Network Interface is a logical networking component in a VPC that represents a virtual network card. It is used to assign IP addresses to instances.

[Network attribute information maintained by an ENI]
✓ A primary private IPv4 address from the VPC's IPv4 address range
✓ One or more secondary private IPv4 addresses from the VPC's IPv4 address range
✓ One Elastic IP address (IPv4) per private IPv4 address
✓ A single public IPv4 address
✓ One or more IPv6 addresses
✓ One or more security groups
✓ A MAC address
✓ A source/destination check flag
ENI
An ENI is utilized by attaching it to an instance. There are three ways to attach an ENI.

Hot attach:
✓ Attaching the ENI while the instance is running.

Warm attach:
✓ Attaching the ENI while the instance is stopped.

Cold attach:
✓ Attaching the ENI while the instance is being launched.
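For illustration, a hot attach with boto3 (the ENI and instance IDs are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# "Hot attach": attach an existing ENI to an instance that is already running.
ec2.attach_network_interface(
    NetworkInterfaceId="eni-0123456789abcdef0",  # hypothetical ENI
    InstanceId="i-0123456789abcdef0",            # hypothetical running instance
    DeviceIndex=1,  # becomes eth1; index 0 is the primary interface
)
```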
Section Content
Lecture | What you will learn in the lecture

The scope of Auto Scaling | Review the questions related to Auto Scaling, an essential part of AWS architecture configuration
The scope of RDS | Review the questions related to RDS, AWS' flagship relational database service
The scope of EBS | Review the questions related to EBS, the storage used in conjunction with EC2 instances
The scope of ELB | Review the questions related to ELB, which is essential for AWS architecture configuration
The Scope of Auto Scaling

What is Auto Scaling?
The ability to add new instances to improve performance when access to the instances has increased.

[Diagram: an ELB distributing traffic to two EC2 instances; when traffic exceeds the processing volume of the two instances, Auto Scaling launches a new EC2 instance behind the ELB]
Scaling type
There are two types of scaling: vertical scaling and horizontal scaling. Auto Scaling is horizontal scaling.

Vertical scaling:
[Expansion] Scale-up: add or increase memory and CPU
[Reduction] Scale-down: reduce memory and CPU, lowering performance

Horizontal scaling:
[Expansion] Scale-out: increase the number of devices/servers that do the processing
[Reduction] Scale-in: reduce the number of devices/servers that do the processing
The scope of the Auto Scaling questions
Frequent questions extracted from the 1625 questions are as follows.

Creating a launch configuration:
✓ You will be asked about the configuration method that determines the contents of the instance configuration used when setting up an Auto Scaling group.

Creating a launch template:
✓ You will be asked about the difference between a launch configuration and a launch template.

Auto Scaling configuration:
✓ Based on the given scenario, you will be asked about the configuration of an architecture using Auto Scaling.

Auto Scaling configuration settings:
✓ Based on the given scenario, you will be asked to confirm configurations using Auto Scaling.

Setting the group size:
✓ When configuring an Auto Scaling group, you will be asked how to set the group size.

Setting a scaling policy:
✓ You will be asked how to configure the scaling policy that you choose when configuring the Auto Scaling group.
✓ You will be asked how to select the type of scaling policy.

Health check:
✓ You will be asked about the choice of health check method for the Auto Scaling group and its effectiveness.

Termination policy:
✓ You will be asked how to select a termination policy, which determines the order of instance deletion on scale-in.
✓ You will be asked about the default deletion order and the order in which AZs are selected.

Cooldown period:
✓ You will be asked about the method and use of the cooldown period that can be set at scale-in.

The behavior of Auto Scaling:
✓ You will be asked about the behavior of Auto Scaling when an imbalance occurs during execution, or when an instance is terminated or an anomaly occurs.

Lifecycle hooks:
✓ You will be asked about the use and behavior of lifecycle hooks, which are custom actions executed when an instance is launched or deleted by the Auto Scaling group.

Troubleshooting:
✓ You will be asked how to perform proper troubleshooting when a problem occurs during Auto Scaling execution.
The Auto Scaling configuration process
Auto Scaling configuration requires advance preparation of the ELB and the launch configuration or template.

(1) Create an ELB target group.
(2) Create a launch configuration (or launch template).
(3) Create the Auto Scaling group:
- Set the threshold
- Set the scaling policy
- Set the termination policy

Collaboration with ELB
Instances launched by Auto Scaling can be placed in the target group of the ELB.
Settings of Auto Scaling
After preparing a launch configuration or launch template, configure the Auto Scaling group.

Create a launch configuration:
✓ Configure settings such as the instance type to be started by Auto Scaling.
✓ Use a launch configuration or a launch template.
✓ Launch configurations are dedicated to Auto Scaling.
✓ Launch templates are available for all instance launches, with enhanced features such as versioning.

Create the Auto Scaling group:
✓ Set the group size (number of instances to be launched) of Auto Scaling.
✓ Set the execution threshold.
✓ Select a scaling policy and set the scale-out and scale-in methods.
✓ Set a termination policy.
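A minimal boto3 sketch of the two steps (the AMI, subnet, security group and target group identifiers are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# (1) A versioned launch template describing the instances to launch.
ec2.create_launch_template(
    LaunchTemplateName="web-template",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",  # hypothetical AMI
        "InstanceType": "t3.micro",
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)

# (2) The Auto Scaling group: group size, subnets (two AZs), ELB target group.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-11111111,subnet-22222222",
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:ap-northeast-1:123456789012:"
        "targetgroup/web/0123456789abcdef"
    ],
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)
```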
[Q] Create launch configurations

You are a solutions architect building a web application on AWS. The application utilizes multiple EC2 instances behind an ELB for increased redundancy, and an Auto Scaling group has been configured so that it can scale out when the load increases. After a while, you find that you need to change the instance type in the Auto Scaling group.

How do you change the instance type configured in an Auto Scaling group?

1) Create a new launch configuration using the new instance type and reconfigure the Auto Scaling group to use it.
2) Create a new launch configuration using the new instance type and modify the Auto Scaling group to use it.
3) Modify the Auto Scaling group by selecting a new instance type on the "Modify Auto Scaling group" instance type screen.
4) Edit the launch configuration used by the Auto Scaling group to change to the new instance type.

[Q] Create a launch template

A company is building a web application on AWS. To prepare for a temporary increase in application load due to increased demand or other factors, they decided to deploy an Auto Scaling mechanism on their EC2 instances. The solutions architect is required to select a mechanism to properly manage the instance configuration and configure the Auto Scaling group.

Which feature should they use to manage the instances?

1) Create an Auto Scaling group using a launch template
2) Create an Auto Scaling group using a golden image
3) Create an Auto Scaling group using a Spot Fleet request
4) Create an Auto Scaling group using a launch configuration

Creating launch templates
A launch template is a mechanism that uses the launch settings described for EC2 as a template for automatic launches.

A launch template:
✓ It is currently recommended to use a launch template rather than a launch configuration.
✓ If the AMI is updated, the template needs to be recreated.
✓ Auto Scaling selects and starts instances as configured in the launch template.
✓ A launch configuration, by contrast, is dedicated to Auto Scaling.
✓ Launch templates are widely used to launch EC2 instances in general.
[Q] Auto Scaling configuration

Company B has built a web application on AWS to deliver content. At the data layer, it uses an online transaction processing (OLTP) database. The web layer needs a flexible and scalable architectural configuration, and measures for load balancing and temporary load spikes are essential.

Choose the best way to meet this requirement.

1) Configure Auto Scaling and an ELB for the EC2 instances
2) Configure a multi-AZ configuration for RDS
3) Deploy EC2 instances to multiple AZs with failover routing in Route 53
4) Install more EC2 instances than the projected capacity

Auto Scaling configuration
Configure Auto Scaling together with an ELB for redundancy, and use auto scaling for heavy traffic.

[Diagram: a region with two AZs, each containing a subnet (10.0.1.0/24 and 10.0.2.0/24); an ELB distributes traffic to EC2 instances in both subnets, which are managed by Auto Scaling]
[Q] Auto Scaling configuration settings

You have implemented a web application on AWS. The application consists of Amazon EC2 instances, an Amazon ELB, Auto Scaling and Route 53 across two subnets. However, it appears that the deployed application is running EC2 instances on only one subnet, not two.

What is the most likely cause of this problem?

1) The AMI for the launch configuration does not exist.
2) The Route 53 target group is not configured with multiple subnets.
3) Cross-zone load balancing is not enabled in the ELB.
4) The Auto Scaling group is not configured with multiple subnets.

[Q] Auto Scaling configuration settings

You have implemented your web application on AWS. This application has a multi-AZ configuration with Amazon EC2 instances and an Amazon ELB. In addition, you need to add Auto Scaling to automatically add EC2 instances and handle temporary load increases of incoming requests.

Select the conditions for adding an existing EC2 instance to your Auto Scaling group. (Select two.)

1) The AMI that launched the existing instance still exists.
2) The instance added to the Auto Scaling group is put into hibernation.
3) The instance added to the Auto Scaling group is not a member of another Auto Scaling group.
4) The existing instance is launched in one of the VPCs defined in the Auto Scaling group.
5) The existing instance is launched in one of the Availability Zones defined in the Auto Scaling group.

Auto Scaling configuration settings
The Auto Scaling group specifies subnets of a specific VPC and launches instances in the AZs mapped to those subnets.
[Q] Setting the group size

You are building a web application on AWS. This web application consists of a single EC2 instance. Due to cost and the low importance of the web application, it has been decided not to use multiple instances, but you need a configuration that maintains a single instance even if it fails, by running Auto Scaling.

Which is the most cost-effective scaling method that can meet this requirement?

1) Create an Auto Scaling group across one AZ with min = 1, max = 1, and desired = 1
2) Create an Auto Scaling group across two AZs with min = 1, max = 1, and desired = 1
3) Create an Auto Scaling group across one AZ with min = 1, max = 2, and desired = 1
4) Create an Auto Scaling group across two AZs with min = 1, max = 2, and desired = 1

Setting the group size
In the group size settings, you set the values that govern increases and decreases in the number of instances.

Desired Capacity:
✓ The number of instances maintained while no scaling activity is running.
✓ You can scale manually by changing this number.

Minimum Capacity:
✓ The lower limit on the number of instances when scaling in.
✓ You cannot set a value larger than the desired capacity.

Maximum Capacity:
✓ The maximum number of instances to be launched on scale-out.
✓ You cannot set a value smaller than the desired capacity.
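For the question above, the min = 1 / max = 1 / desired = 1 group across two AZs could be expressed with boto3 like this (the group name and subnet IDs are hypothetical):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Maintain exactly one instance; if it fails, Auto Scaling replaces it in
# one of the two AZs referenced by the subnets.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="single-web-asg",  # hypothetical group
    MinSize=1,
    MaxSize=1,
    DesiredCapacity=1,
    VPCZoneIdentifier="subnet-11111111,subnet-22222222",  # two AZs
)
```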
[Q] Setting a scaling policy

You have built a web application on AWS. The application has a multi-AZ configuration with Amazon EC2 instances and an Amazon ELB. You need to add Auto Scaling so that EC2 instances are automatically added to handle temporary increases in the load of incoming requests.

What scaling policy should be set up?

1) Set a target tracking scaling policy with a threshold of 60% average total CPU usage in the Auto Scaling group.
2) Set a step scaling policy with a threshold of 60% average total CPU usage in the Auto Scaling group.
3) Set a scheduling policy with a threshold of 60% average total CPU usage in the Auto Scaling group.
4) Set a manual scaling policy with a threshold of 60% average total CPU usage in the Auto Scaling group.

Target tracking scaling policy
A target tracking scaling policy scales using CloudWatch monitoring metrics.

[Q] Setting a scaling policy

You have implemented a web application on AWS. The application has a multi-AZ configuration with Amazon EC2 instances and an Amazon ELB. In addition, you need to add Auto Scaling to automatically add EC2 instances to handle temporary load increases of incoming requests. The application load is expected to increase periodically at certain times on weekends.

How should you configure Auto Scaling to meet this requirement?

1) Use a lifecycle hook.
2) Use scheduled Spot Instances.
3) Use a scheduled scaling policy.
4) Use a step scaling policy.
Setting a scaling policy
Set a scaling policy and implement scaling.

Dynamic scaling:
- Target tracking scaling: the normal setting for scaling.
- Simple scaling policy: one-step scaling based on alarm settings.
- Step scaling: multi-step scaling with two or more step adjustment values that dynamically scale the number of instances based on how far the alarm threshold is exceeded.

Manual scaling:
- Adjust the desired capacity and perform the scaling manually.

Scheduled scaling:
- Specify the date and time at which to execute the scaling.

Multiple scaling policy settings can be used in combination.

[Diagram: set up scheduled scaling, and perform dynamic scaling when the scheduled capacity is exceeded]
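A minimal boto3 sketch of a target tracking policy plus a scheduled action, matching the two question patterns above (names and values are hypothetical):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking: keep average CPU across the group at about 60%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-60-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 60.0,
    },
)

# Scheduled scaling: raise capacity every Saturday morning for the weekend peak.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="weekend-peak",
    Recurrence="0 9 * * 6",  # cron: 09:00 every Saturday (UTC)
    MinSize=4,
    MaxSize=10,
    DesiredCapacity=6,
)
```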
[Q] Health check

A web application is currently running on AWS. The web application has an Auto Scaling group configured on Amazon EC2 instances behind an ELB. Today, one EC2 instance experienced an anomaly and the ELB removed it from its targets, but Auto Scaling did not launch a new instance.

Which is the most likely cause of this behavior?

1) The ELB health check type is not being used by Auto Scaling.
2) The EC2 health check type is not being used by Auto Scaling.
3) The Auto Scaling group has a cooldown period set.
4) The Auto Scaling group has a timeout grace period set.

Health check
Use either EC2 status information or ELB health checks to check the health of EC2 instances under Auto Scaling.

EC2 status:
✓ If the status of the instance is not "running", it is considered abnormal.

ELB health check:
✓ Takes advantage of ELB's health check feature.

Both ELB health checks and CloudWatch alarms can be used as triggers for Auto Scaling.
[Q] Termination policy

A web application is currently running on AWS. The web application has an Auto Scaling group configured on Amazon EC2 instances behind an ELB. As the load increases, the Auto Scaling group spawns new instances across two Availability Zones (AZs). After scaling, three EC2 instances are deployed in ap-northeast-1a and four EC2 instances are deployed in ap-northeast-1c.

Which instance is deleted on scale-in?

1) The instance in ap-northeast-1c with the oldest launch configuration is terminated.
2) The instance in ap-northeast-1a with the oldest launch configuration is terminated.
3) An instance in ap-northeast-1a is terminated at random.
4) An instance in ap-northeast-1c is terminated at random.
5) One instance is created in ap-northeast-1a to maintain balance.

Termination policy
Configure which instance to terminate when scaling in based on reduced demand.

Default policy:
Selecting the AZ:
✓ Check whether there are instances in more than one AZ, and delete an instance in the AZ with the most instances.
✓ If the same number of instances are placed in all AZs, randomly select an AZ from which to remove an instance.
Selecting an instance:
✓ Delete the oldest instance, that is, the one with the oldest launch time.
✓ If there is more than one old instance, delete the instance closest to the next billing hour.
✓ If there are multiple instances close to the next billing hour, delete one at random.

Custom policy:
Selecting the AZ:
✓ Same AZ selection as the default policy: delete from the AZ with the most instances, choosing randomly when AZs are balanced.
Selecting an instance:
✓ Remove instances according to the custom policy within the selected AZ.

Termination policy options:
OldestInstance: terminates the oldest instances first.
NewestInstance: terminates the instances with the most recent launch time first.
OldestLaunchConfiguration: terminates the instances with the oldest launch configuration first.
ClosestToNextInstanceHour: terminates the instances closest to the next billing hour first.
[Q] Cooldown period

A web application is currently running on AWS. The web application has an Auto Scaling group configured on Amazon EC2 instances behind an ELB. Recently, Auto Scaling has been adding and removing instances in rapid succession over short periods, causing a large number of scaling events.

What should be done to improve this scaling situation? (Select three.)

1) Change the Auto Scaling group size to increase the desired capacity.
2) Configure scaling using scheduled scaling actions.
3) Change the CloudWatch alarm period that triggers the Auto Scaling scale-down policy.
4) Change the threshold of the CloudWatch alarm that triggers the Auto Scaling scale-down policy.
5) Change the cooldown period of the Auto Scaling group.

Cooldown period
A cooldown time can be set that applies after a scaling activity.

Cooldown period:
✓ Prevents the Auto Scaling group from launching or terminating additional instances before the impact of the previous scaling activity has settled.
✓ A cooldown period is set by default; the default setting is 300 seconds.
✓ The cooldown period can be changed.

Cooldown period exceptions:
✓ When scheduled actions start at the scheduled time, or scaling activities are initiated by a target tracking or step scaling policy, they do not wait for the cooldown period to end before executing.
✓ If an instance becomes unhealthy, Amazon EC2 Auto Scaling replaces it without waiting for the cooldown period to complete.
[Q] Behavior of Auto Scaling

A web application is currently running on AWS. The web application has an Auto Scaling group configured on Amazon EC2 instances behind an ELB. This Auto Scaling group uses two AZs, and there are currently six Amazon EC2 instances running in the group.

What actions does Auto Scaling take if one of the EC2 instances fails? (Select two.)

1) To correct the imbalance, terminate instances in the AZ where three EC2 instances are running.
2) Launch a new instance in the AZ where the failed instance is located.
3) Launch a new instance in the AZ where there is no failed instance.
4) Delete the failed instance first and then launch a new instance in the same AZ.
5) After launching a new instance, terminate the failed instance.
6) Select one of the two AZs at random and terminate an instance in that AZ.

The behavior of Auto Scaling
When Auto Scaling runs, the number of instances is adjusted so that they are properly distributed across the AZs.

Basic behavior:
✓ Launch instances in the AZ with the fewest instances.
✓ If an instance launch fails, try a different AZ until it succeeds.

Redistribution:
✓ Adjusts an imbalance in the number of instances between AZs.
✓ Stops the instances that caused the group to become uneven and launches new instances in the AZ that was under-resourced.

Behavior during redistribution:
✓ Prevents performance degradation by starting a new instance before terminating the old one.
✓ Approaching the maximum Auto Scaling capacity can slow down the redistribution process or stop it altogether. To avoid this, the capacity is temporarily allowed to exceed the maximum (by 10% of the maximum capacity or by one instance).
[Q] Lifecycle hook

A web application is currently running on AWS. The web application has an Auto Scaling group configured on Amazon EC2 instances behind an ELB. When performing a scale-in, you would like to be able to download the log files of the instances about to be stopped, in order to examine the impact of instance stoppages.

Which of the following features can be used to enable this custom action?

1) An EC2 Fleet configuration for the Auto Scaling group
2) A termination policy for the Auto Scaling group
3) A scheduled scaling policy for the Auto Scaling group
4) Auto Scaling group lifecycle hooks

Lifecycle hook
When an instance is launched or terminated by the Auto Scaling group, a lifecycle hook pauses the instance so that custom actions can be performed.

✓ A wait period is applied at instance termination, and you can set up an action to be performed during that time.
✓ Instances can be put on standby for a period of time.

Reference: https://docs.aws.amazon.com/ja_jp/autoscaling/ec2/userguide/lifecycle-hooks.html#lifecycle-hooks-overview
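A minimal boto3 sketch of the scale-in hook from the question above (the names are hypothetical); the paused period is when a script can copy the log files off the instance:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Pause terminating instances in the Terminating:Wait state so their log
# files can be collected before shutdown.
autoscaling.put_lifecycle_hook(
    LifecycleHookName="collect-logs-before-terminate",
    AutoScalingGroupName="web-asg",  # hypothetical group
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    HeartbeatTimeout=300,      # seconds the instance waits in the paused state
    DefaultResult="CONTINUE",  # proceed with termination if nothing responds
)
```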
[Q] Troubleshooting

Your company has set up an ELB in front of EC2 instances to distribute traffic, and then set up an Auto Scaling group. You use a load testing tool against the Auto Scaling group to see how it behaves as the load increases. However, the status check of an EC2 instance launched by Auto Scaling shows "Impaired".

What action does Auto Scaling perform?

1) Wait a few minutes for the instance to recover; if it does not recover, terminate the instance and replace it with another one.
2) Immediately terminate the instance and replace it with another one.
3) The ELB switches the target to another instance.
4) Auto Scaling performs rebalancing between the AZs that have not experienced a failure.

Troubleshooting
Auto Scaling must be suspended for instance maintenance and investigation.

Instance startup failure:
✓ Auto Scaling will repeatedly try to launch instances; if launches continue to fail for 24 hours, the process may be stopped on Amazon's side.

Instance failure:
✓ If an instance is Impaired, Auto Scaling waits a few minutes to see if it recovers.
✓ If it does not recover, a new instance is launched and the impaired instance is terminated.

Troubleshooting:
✓ If you stop an instance without temporarily suspending the Auto Scaling group, a new instance will be started.
✓ The basic practice is to suspend Auto Scaling, investigate and recover, and then resume Auto Scaling.
The Scope of RDS

What is RDS?
RDS is a service that allows relational databases to be instantly launched and used in the cloud.

[Diagram: a database in an on-premises data center replaced by an EC2 instance and RDS in the cloud, accessed over the internet]
The scope of the RDS questions
Frequent questions extracted from the 1625 questions are as follows.

Selection of RDS:
✓ You will be asked to choose RDS as the best database service offered by AWS.

RDS features:
✓ You will be asked about the features of RDS and database engines such as MySQL and PostgreSQL.

Selecting a storage type:
✓ You will be asked about the characteristics and use cases of the storage types used for DB instances.

Public access configuration:
✓ You will be asked about the configuration of direct internet access to a DB instance such as RDS.

Read replicas:
✓ You will be asked about the characteristics of read replicas, a scaling scheme using RDS.
✓ You will be asked about the differences between Aurora and the other engines.

Cross-region replicas:
✓ You will be asked about use cases for configuring a cross-region read replica using RDS.

Scaling of RDS:
✓ You will be asked to select the best scaling method based on requirements such as cost optimization and performance improvement.

RDS encryption:
✓ You will be asked how RDS encryption is implemented.
✓ You will be asked how to encrypt an RDS DB instance that started out unencrypted.

Maintenance:
✓ You will be asked about RDS maintenance.
✓ You will be asked how the RDS maintenance window is set up and its impact on RDS.

Backup:
✓ You will be asked how to back up RDS and how to restore it.
[Q] Select RDS

Company B has identified the requirements for building a database using AWS. As a solutions architect, you must choose the best AWS service based on the database requirements. The requirement for this company is to manage the database environment in house.

Choose a database construction method that meets this requirement.

1) EC2
2) RDS
3) Aurora
4) DynamoDB
Data model
There are many data models with different purposes.

 Relational model
 Graph model
 Key value store
 Object
 Document
 Wide column
 Hierarchical
Relational model
The relational model is the basic data model for databases.
[Q] Characteristics of RDS

You are a solutions architect building a database on AWS. Since you are currently using MySQL on-premises, you have decided that you can easily migrate to MySQL on RDS; if you use RDS, you need to migrate based on its characteristics.

Select the method that is NOT recommended for RDS.

1) Enable automatic backups.
2) Use MyISAM as the storage engine for MySQL.
3) Use InnoDB as the storage engine for MySQL.
4) Keep large table partitions from exceeding 16 TB.

Features of RDS
RDS is a fully managed relational database service supporting a variety of database software.

Databases can be built using standard software such as:
- MySQL
- Oracle
- Microsoft SQL Server
- PostgreSQL
- MariaDB
- Amazon Aurora

RDS best practices
AWS recommends the following best practices for RDS:

• Allocate enough RAM for the DB working set to stay in memory
• Identify OS problems using Enhanced Monitoring
• Set Amazon CloudWatch alarms for specific metric thresholds
• Use InnoDB as the storage engine for MySQL
• Keep large table partitions from exceeding 16 TB

RDS constraints
While RDS is a managed service, there are limitations on the range of features provided by AWS.

Limitations of RDS:
• Available versions are limited.
• Capacity is capped.
• You cannot log in to the OS of the host computers.
• You cannot access the engine file system as a user.
• Some features of the DB engines are not provided.
• Individual patches cannot be applied to an RDS DB.
[Q] Selecting a storage type

You are building a relational database on AWS. You expect a large number of transactional processes to occur in this database solution and are concerned about random I/O latency. As the solutions architect, you were asked to improve performance through the database configuration, without putting extra operational load on the system.

Which of the following database methods should you choose?

1) Use ElastiCache caching for fast processing
2) Install database software on EC2 instances with Provisioned IOPS EBS volumes
3) Change the storage type to Provisioned IOPS for RDS
4) Change the instance type of the RDS instance to the most appropriate one

Selecting a storage type
Choose General Purpose or Provisioned IOPS as your storage type. Magnetic is an old type that is rarely used.

General Purpose:
✓ SSD type.
✓ Charged for capacity per GB.
✓ Capable of delivering 100-10,000 IOPS, with bursts on top of normal performance (depending on size).

Provisioned IOPS:
✓ SSD type.
✓ Charged for capacity per GB and per provisioned IOPS.
✓ Capable of 1,000-30,000 IOPS, with bursts on top of normal performance (depending on size).

Magnetic:
✓ Hard disk type.
✓ Capacity charge per GB plus an I/O request charge.
✓ Average 100 up to a maximum of a few hundred IOPS.
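For the question above, switching an existing instance to Provisioned IOPS could look like this in boto3 (the identifier and sizing values are hypothetical):

```python
import boto3

rds = boto3.client("rds")

# Move the transactional DB to Provisioned IOPS storage to reduce random
# I/O latency.
rds.modify_db_instance(
    DBInstanceIdentifier="txn-db",  # hypothetical instance
    StorageType="io1",
    Iops=10000,            # provisioned IOPS
    AllocatedStorage=500,  # GiB; io1 requires a minimum size-to-IOPS ratio
    ApplyImmediately=True,
)
```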
[Q] Public access configuration

The customer management department is running a database solution on AWS as a CRM solution to manage customer data. The department plans to build a new database using RDS MySQL as part of added functionality. To meet non-functional requirements, that database must be accessible directly from the internet.

What is the correct way to set up a connection to RDS MySQL via the internet? (Please choose three.)

1) Enable public access on the RDS instance.
2) Place the RDS instance on a public subnet.
3) Place the RDS instance on a private subnet.
4) Create a security group that allows access to the RDS instance from the internet and assign it to the RDS instance.
5) Configure a NAT gateway to route to the subnet where the RDS database is located.
6) Enable internet access on the RDS instance.
Public access configuration
Public access must be enabled and access must be
granted with security groups.
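As a rough sketch, the public access flag itself can be set with boto3 (the instance identifier is hypothetical); the subnet placement and security group rules from the question still have to be configured separately:

```python
import boto3

rds = boto3.client("rds")

# Make the RDS MySQL instance reachable from the internet. The instance must
# also live in a public subnet, and its security group must allow inbound 3306.
rds.modify_db_instance(
    DBInstanceIdentifier="crm-mysql",  # hypothetical instance
    PubliclyAccessible=True,
    ApplyImmediately=True,
)
```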
Public access configuration
Set up the RDS instance on a public subnet and connect to and operate it directly with SQL software.

[Diagram: a VPC (10.0.0.0/16) with a public subnet (10.0.0.0/24) in one AZ; Workbench connects to the RDS instance over the internet]

General configuration
Install RDS on a private subnet and use an EC2 instance as a bastion host to access it.

[Diagram: a VPC (10.0.0.0/16); Workbench connects to an EC2 instance in the public subnet (10.0.0.0/24), which accesses RDS in the private subnet (10.0.1.0/24)]

General configuration
Because this configuration relies on a single AZ, there is a risk of downtime in the event of an AZ failure.

[Diagram: the same single-AZ configuration; if the AZ fails, the EC2 instance and RDS shut down immediately]
Multi-AZ configuration
A multi-AZ configuration is required to ensure that the system does not stop in the event of an AZ failure.

[Diagram: a VPC (10.0.0.0/16) spanning two AZs; an ELB distributes traffic to Auto Scaling EC2 instances in two public subnets (10.0.0.0/24 and 10.0.2.0/24); RDS instances in two private subnets (10.0.1.0/24 and 10.0.3.0/24) use synchronous replication with automatic failover]
[Q] Effect of a multi-AZ configuration

A company is using an RDS database configured in a multi-AZ deployment to improve the availability of its enterprise systems. The primary database of the RDS deployment has failed.

Select the action that is automatically taken on RDS after the failure.

1) The CNAME record is moved from the primary to the secondary.
2) The primary DB reboots.
3) A secondary DB for RDS is configured.
4) Scaling is performed.

Effects of a multi-AZ configuration
Failover is available very easily by enabling the failover setting.

✓ The deployment consists of a primary and a secondary database.
✓ The two databases use synchronous replication to keep the data content the same.
✓ If the primary fails, failover is performed automatically and the secondary database is promoted to primary.
✓ On failover, the CNAME record is moved from the primary to the secondary.
✓ The DB in the standby state is not accessible.
[Q] Read replica

A large e-commerce company is using an RDS PostgreSQL database for their e-commerce site. They need to analyze customer purchase data to provide services such as recommendations. The analysis processing for these new functions is performed in the same database, which has a negative impact on the speed of the e-commerce site.

Which of the following is the most cost-effective solution to this problem?

1) Create a read replica in the same AZ as the master database.
2) Create a read replica in another AZ.
3) Create a read replica in another region.
4) Enable multi-AZ on the RDS database and run the analysis workloads on the standby database.

Read replica
Up to 5 read-only replicas (15 for Aurora) can be created to scale out the DB read processing.

[Diagram: a master RDS instance in one AZ, with synchronous replication and automatic failover to a standby RDS instance in another AZ; asynchronous replication feeds read replicas in both AZs]
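A minimal boto3 sketch of the same-AZ replica from the question above (identifiers, class and AZ are hypothetical):

```python
import boto3

rds = boto3.client("rds")

# Offload the analytics reads to a replica; placing it in the same AZ as the
# master avoids cross-AZ transfer costs, which is why it is the cheapest option.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="ecsite-analytics-replica",
    SourceDBInstanceIdentifier="ecsite-master",  # hypothetical master
    DBInstanceClass="db.r5.large",
    AvailabilityZone="ap-southeast-1a",          # same AZ as the master
)
```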
[Q] Cross-region configuration

A major e-commerce company is using an RDS PostgreSQL database for their e-commerce site. The e-commerce site serves various countries in Asia; although the master database is located in the Singapore region, the database needs to be expanded to serve local read traffic effectively.

Choose the most cost-effective solution that can meet this requirement.

1) An RDS failover configuration
2) RDS in a cross-region configuration
3) RDS in a multi-master configuration
4) RDS with cross-region read replicas

Cross-region configuration
It is also possible to configure a cross-region read replica.

Reference: https://aws.amazon.com/jp/blogs/news/best-practices-for-amazon-rds-for-postgresql-cross-region-read-replicas/
[Q] Scaling of RDS

You are building a two-tier application using EC2 instances and RDS. At this stage, the workload requirements for the application are clear, but the performance requirements, such as the expected number of requests the database must process, are unknown. Therefore, the database needs to be scaled after launch.

Which scaling methods should be implemented after deploying the RDS database for this requirement? (Select two.)

1) Add read replicas.
2) Enable a multi-AZ configuration.
3) Select a more optimal instance type.
4) Select a larger instance size.
5) Enable enhanced networking.

Scaling of RDS
You can avoid the effects of database performance degradation by scaling.

Signs of declining RDS performance:
✓ Read processing is slow.
✓ Write processing stalls, etc.

✓ Improve performance by scaling.
Scale-up
Performance can be improved by scaling up: changing the instance type, instance size, or storage type.

Changing the instance size:
- Improve performance by changing to a higher-performance size of the current DB instance type.

Changing the instance type:
- If a DB instance type better suited to the current usage pattern is available, change to that type.

Changing the storage type:
- Change the storage type to a higher-performance type (Provisioned IOPS if there is a lot of I/O processing).

Storage capacity change
Storage capacity can be increased, but cannot be decreased.

✓ Storage capacity can be increased.
✓ Storage capacity cannot be decreased.
Using ElastiCache
ElastiCache can retain a portion of the read workload of RDS in a cache to achieve high-speed query processing.

[Diagram: an EC2 instance first checks ElastiCache; if the data is cached, ElastiCache serves it; if not, RDS processes the query and the result is kept in the cache]

Migration to Aurora
RDS MySQL and PostgreSQL have versions compatible with Aurora and can be easily migrated to improve performance.
[Q] RDS encryption

A large e-commerce company is using an RDS PostgreSQL database for their e-commerce site. Recently, an IT audit was performed, and it was noted that the RDS database is not encrypted.

Select the correct description of the procedure to encrypt this RDS database. (Choose two.)

1) Enable the encryption option from the actions on the existing RDS database.
2) Create a snapshot of the RDS database, copy the snapshot with encryption enabled, and restore the database from the encrypted snapshot.
3) Existing RDS databases cannot be encrypted and must be terminated.
4) Enable the encryption option on the configuration change screen of the existing RDS database.
5) Create an encrypted read replica of the RDS database and promote it to be the master database.

RDS encryption
RDS can implement encryption of stored data resources and encryption of connections.

Encryption of data transactions:
✓ Encrypt the data transactions to the DB instance using SSL/TLS.

Encryption of stored data:
✓ Encrypt data resources in storage.

RDS encryption
DB instances and snapshots can be encrypted.

Encryption targets:
• DB instances
• Automated backups
• Read replicas
• Snapshots

Encryption method:
• AES-256 encryption
• Key management with AWS KMS
• The same key is used for the read replicas
• Encryption can be set only when an instance is created
• Snapshot copies can be encrypted/restored
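A minimal boto3 sketch of the snapshot-copy procedure described in the question above (identifiers are hypothetical):

```python
import boto3

rds = boto3.client("rds")

# 1) Snapshot the unencrypted database.
rds.create_db_snapshot(
    DBInstanceIdentifier="ecsite-db",
    DBSnapshotIdentifier="ecsite-db-snap",
)

# 2) Copy the snapshot; supplying a KMS key makes the copy encrypted.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="ecsite-db-snap",
    TargetDBSnapshotIdentifier="ecsite-db-snap-encrypted",
    KmsKeyId="alias/aws/rds",  # or a customer-managed CMK
)

# 3) Restore a new, encrypted instance from the encrypted copy.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="ecsite-db-encrypted",
    DBSnapshotIdentifier="ecsite-db-snap-encrypted",
)
```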
[Q] Maintenance

Company B is planning to use a database on AWS and is considering RDS, but they need to understand the impact of the maintenance AWS performs on managed services. They especially need to avoid maintenance periods in which the DB is forced to go offline, as that would have a significant impact.

Select the maintenance events that will cause database downtime. (Select two.)

1) Applying security patches
2) Applying multi-AZ features
3) Updating database software
4) Updating a DB parameter group
5) Updating an option group

AWS Console Dashboard
The RDS console dashboard shows a summarized view of the RDS instance status.
Checking the logs
You can view and download DB logs from the dashboard.

DB engine | Log type | Log retention period (default)
MySQL/MariaDB | General query log, error log, slow query log | 24 hours
Oracle | Alert log, audit log, trace log | Alert log: 30 days; audit and trace logs: 7 days
SQL Server | Error log, agent log, trace log dumps | 7 days
PostgreSQL | Query log, error log | 3 days
Integration with CloudWatch
In conjunction with CloudWatch, RDS gives users centralized, metrics-driven operational management.

CloudWatch metrics:
- Gets the metrics of each active Amazon RDS database at 5-minute intervals and displays them in the dashboard.

Enhanced Monitoring:
- When Enhanced Monitoring is enabled, metrics are acquired at a transmission interval measured in seconds, enabling near-real-time monitoring. Monitoring costs are charged.

CloudWatch alarms:
- Can monitor a single Amazon RDS metric over a specified period of time and perform one or more actions based on the metric value relative to a given threshold.

CloudWatch Logs:
- Enables monitoring, storage of, and access to database log files in CloudWatch Logs.

CloudWatch Events:
- For CloudWatch Events, you can use event patterns to filter incoming events and create rules that trigger targets.

AWS Console Dashboard
The Maintenance and Backup section allows you to see the settings of the maintenance and backup windows.

Note that the DB instance will be temporarily taken offline when necessary operating system and database patches are applied.
[Q] Backup

Company B runs a database environment on AWS using RDS. However, the database has been corrupted by a failure and needs to be restored. You, as the solutions architect, are using point-in-time recovery to recover the data to its most recent state.

Which are correct statements about restoring an RDS database to a specific point in time? (Please choose two.)

1) Snapshots and transaction logs can restore the DB to the state it was in 5 minutes ago.
2) Snapshots alone can restore the DB to the state it was in 5 minutes ago.
3) The transaction log alone can restore the DB to the state it was in 5 minutes ago.
4) Snapshots and transaction logs can restore the DB to the state it was in 10 minutes ago.
5) Snapshots alone can restore the DB to the state it was in 10 minutes ago.
6) The transaction log alone can restore the DB to the state it was in 10 minutes ago.

Backup
By taking snapshots, RDS data can be saved and fault tolerance can be implemented.

[Diagram: a master RDS instance with synchronous replication and automatic failover to a standby in another AZ, plus read replicas; snapshots and transaction logs are stored in S3]
Backup
RDS backups are taken as snapshots, and there are two methods used to take them.

Automated backup:
- Amazon RDS automatically takes a daily snapshot of RDS data if you have enabled automated backups.
- Point-in-time recovery is possible.

Manual snapshot:
- You can take a snapshot freely, at whatever frequency you specify.

Automated backup
Automated backups automatically take periodic snapshots that are managed on the RDS side.

✓ Snapshots and transaction logs are taken automatically on a regular basis.
✓ You can restore your DB instance to a specific point in time using the corresponding backups and transaction logs.
✓ The backup cycle is fixed at once a day.
✓ Transaction logs are automatically archived every 5 minutes, which enables point-in-time recovery.
✓ The backup retention period defaults to 7 days but can be set up to 35 days.
✓ Backups are incremental.
✓ RDS backups are stored in AWS-managed S3 storage.
✓ Snapshots are deleted when you delete the DB instance or when you disable automated backups.
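A minimal boto3 sketch of point-in-time recovery (identifiers and the timestamp are hypothetical); note that the restore always creates a new instance:

```python
import boto3
from datetime import datetime, timezone

rds = boto3.client("rds")

# Restore to a specific time; because transaction logs are archived every
# 5 minutes, the latest restorable time is typically about 5 minutes ago.
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="prod-db",           # hypothetical source
    TargetDBInstanceIdentifier="prod-db-restored",  # new instance created
    RestoreTime=datetime(2024, 1, 15, 9, 55, tzinfo=timezone.utc),
    # Alternatively: UseLatestRestorableTime=True
)
```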
The Scope of EBS

What is EBS?
EBS is block storage used in conjunction with an EC2 instance. It is used for workloads on an instance.

[Diagram: the Tokyo region with two AZs; EC2 instances in each AZ attach to EBS volumes in the same AZ]
The scope of the EBS questions
Frequent questions extracted from the 1625 questions are as follows.

EBS selection:
✓ You will be asked to select the kind of storage that meets the storage requirements of the scenario.

EBS features:
✓ You will be asked about the characteristics of EBS.
✓ You will be asked how to attach EBS to an EC2 instance and whether or not it can be accessed from the internet.

Selecting an EBS volume type:
✓ Based on the scenario, you will be asked to select the EBS volume type that meets the workload requirements.

Snapshot features:
✓ You will be asked about the features and functions of EBS snapshots.

Snapshot management:
✓ You will be asked how to set up snapshots and related settings, such as taking regular backups using snapshots.

Sharing snapshots:
✓ You will be asked how to share a snapshot with another account and how to share a snapshot with another region.

Deleting an EBS volume:
✓ You will be asked how an EBS volume is deleted and how EBS behaves when an EC2 instance is deleted.

Encryption:
✓ You will be asked how to set up EBS encryption and what the EBS encryption covers.
✓ You will be asked about restrictions on the use of encrypted snapshots.

EBS status:
✓ You will be asked about the characteristics of EBS status checks.

EBS RAID configuration:
✓ You will be asked about the configuration and use of RAID 0 and RAID 1 using EBS.
[Q] Select EBS

A leading financial institution is using AWS to develop an application for its new fintech business. This application requires EC2 instances to perform data processing and requires a storage service that can provide minimum-latency access to the data. The data capacity is expected to grow to 10 TB.

Which of the following is the most appropriate storage service to meet this requirement?

1) EFS
2) Instance store
3) EBS
4) S3
5) Amazon FSx

Select EBS
AWS offers three forms of storage services.

Block storage:
✓ A disk service that is attached to and utilized by EC2.
✓ Saves data in block format.
✓ High speed and wide bandwidth.
✓ Examples: EBS, instance store.

Object storage:
✓ Inexpensive and durable online storage.
✓ Stores data in object format.
✓ Redundant across multiple AZs by default.
✓ Examples: S3, Glacier.

File storage:
✓ A shared storage service that can be attached to multiple EC2 instances simultaneously.
✓ Saves data in file format.
✓ Example: EFS.
Select EBS
EC2 uses two types of storage: the instance store and EBS.

Instance store:
✓ Block-level physical storage on a disk embedded in the host computer, inseparable from EC2.
✓ Holds temporary EC2 data; the data is deleted when the EC2 instance is stopped or terminated.
✓ Free to use.

Elastic Block Store (EBS):
✓ Block-level storage attached to EC2 through the network.
✓ EBS data can be retained even if the EC2 instance is terminated.
✓ Snapshots can be retained in S3.
✓ An additional EBS fee is required.

[Q] EBS features

Company B is building a web application on AWS. As a solutions architect, you have decided to use EBS General Purpose SSD volumes as storage for the web servers. You intend to use multiple EBS volumes in conjunction with multiple EC2 instances, and you are looking into how they can be configured.

Which of the following is the correct description of EBS volumes?

1) An EBS volume can be attached to multiple instances.
2) An EBS volume can be attached to any instance in the same region.
3) An EBS volume can only be attached to an instance in the same VPC.
4) An EBS volume can only be attached to an instance in the same AZ.
EBS features
EBS is a block-level storage service attached to EC2.

[Basic info]
✓ It is used for purposes such as OS operation and application and data storage.
✓ It attaches to EC2 through the network.
✓ 99.999% availability.
✓ Sizes range from 1 GB to 16 TB.
✓ Charged by size and duration of use.

[Features]
✓ Volume data is replicated by default to multiple pieces of hardware within the AZ, making it redundant.
✓ EBS can be used even if all ports are closed, because it is not subject to communication control by security groups.
✓ Data is stored persistently.

EBS features
EBS cannot be attached to an instance in another AZ.

✓ EC2 instances cannot access EBS volumes in other AZs.
EBS features
A single EBS volume cannot be shared by multiple instances simultaneously.

✓ You can connect multiple EBS volumes to one EC2 instance, but you cannot share the same EBS volume among multiple instances.
✓ However, Provisioned IOPS volumes alone can be shared by multiple instances (Multi-Attach).

EBS features
An EC2 instance can use an EBS volume previously used by another instance in the same AZ.

✓ You can re-attach an EBS volume to another instance.
[Q] Select the EBS volume type

As a solutions architect, you are building a new EC site for mobile. Currently, you are provisioning EC2 instances via the EC2 API. These instances will display the best screen for the customer depending on the customer's data. A non-functional requirement for storage is to identify the volume types that cannot be used as boot volumes.

Which storage volume types cannot be used as a boot volume for an EC2 instance? (Select two.)

1) General Purpose SSD
2) Provisioned IOPS SSD
3) Instance store
4) Throughput Optimized HDD
5) Cold HDD

EBS volume types
Select the EBS type from five different types with different performance and costs depending on the use case.

General Purpose SSD (1 GB-16 TB):
✓ Virtual desktops
✓ Apps that require low latency
✓ Small to medium sized databases
✓ Development environments

Provisioned IOPS SSD (4 GB-16 TB):
✓ NoSQL and apps that rely on high I/O performance
✓ Large-scale DBs with workloads of 10,000 IOPS and over 160 MB/s
✓ Instance types optimized for speed, such as Nitro System Amazon EC2 instances and EBS optimization

Throughput Optimized HDD (500 GB-16 TB):
✓ Big data processing
✓ DWH
✓ Large-scale ETL processing and log analysis
✓ Not available for the root (boot) volume

Cold HDD (500 GB-16 TB):
✓ Data with low access frequency, such as log data
✓ Backup and archiving
✓ Not available for the root (boot) volume

Magnetic (1 GB-1 TB):
✓ A previous-generation volume type for basic purposes; avoid using it
✓ Workloads with infrequent data access
[Q] Snapshot features

You are a solution architect responsible for managing the AWS infrastructure
within your company. The cost of using AWS is increasing, and you have used
AWS Trusted Advisor to assess the room for cost optimization. According to
AWS Trusted Advisor, you can reduce costs by cleaning up unused EBS volumes
and snapshots to save space and money.

Which of the following is the correct explanation for reducing snapshots?

1) Since they are incremental snapshots, deleting any one snapshot in the
series makes the other snapshots unavailable.
2) Although they are incremental snapshots, you can restore the EBS volume
from just the latest snapshot even if you delete all but the latest one.
3) Since they are incremental snapshots, only the first and most recent
snapshots should be kept.
4) Since they are incremental snapshots, all but the first snapshot can be
deleted.
Snapshot Features
EBS uses snapshots to make backups

[Diagram: EBS volumes in two AZs of the Tokyo Region backed up as EBS
snapshots stored in S3]

[Features]
✓ Back up EBS data with snapshots.
✓ EBS can be restored from a snapshot into another AZ.
✓ Snapshots are stored in S3.
✓ From the second generation onward, snapshots become incremental backups
that save only the changed data (restoring is possible even if the first
generation is deleted).
✓ Data is compressed at the block level during snapshot creation, so fees
are charged on the compressed volume.
✓ The EBS volume remains available while a snapshot is being created.
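For illustration, a minimal boto3 sketch of this workflow: create a snapshot
of a volume in Tokyo and copy it to the Osaka region (the volume ID is a
placeholder).

    import boto3

    ec2 = boto3.client("ec2", region_name="ap-northeast-1")  # Tokyo

    # Create an incremental snapshot of an existing volume (placeholder ID).
    snap = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",
        Description="nightly backup",
    )
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

    # Snapshots can also be copied across regions (here: Tokyo -> Osaka).
    osaka = boto3.client("ec2", region_name="ap-northeast-3")
    copy = osaka.copy_snapshot(
        SourceRegion="ap-northeast-1",
        SourceSnapshotId=snap["SnapshotId"],
        Description="cross-region copy",
    )
    print(copy["SnapshotId"])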
[Q] Snapshot management

Our company runs a web application that uses multiple EBS volumes. Security
regulations require us to perform backups on a regular basis, but we
currently do this manually and it is very time-consuming. Therefore, we would
like to implement an automated method of creating, maintaining, and deleting
backups of EBS volumes.

What is the easiest way to automate these tasks in EBS?

1) Configure EBS volume replication to create backups in S3


2) Define a script to run the snapshot using the AWS CLI commands.
3) Use the Data Lifecycle Manager (DLM) to manage snapshots of the
volumes.
4) Enable automatic backups on the EBS console screen.
Snapshot management
Although a quiesce point (pausing I/O) is recommended when creating
snapshots, they can be taken at any time without affecting EBS operations.

 Setting a quiesce point is recommended to maintain data consistency when
creating snapshots.
 Storage capacity and the number of generations are unlimited.
 If you need to manage snapshot creation, automate it with the AWS CLI or
API.
 You can use DLM (Data Lifecycle Manager) to schedule snapshot creation.
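As a sketch of the DLM approach (option 3 in the question above), the
following boto3 call creates a lifecycle policy that snapshots every volume
tagged Backup=true once a day and keeps 7 generations; the role ARN and tag
values are placeholder assumptions.

    import boto3

    dlm = boto3.client("dlm")

    policy = dlm.create_lifecycle_policy(
        # Placeholder role ARN; the role needs snapshot permissions.
        ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
        Description="Daily EBS snapshots, keep 7 generations",
        State="ENABLED",
        PolicyDetails={
            "ResourceTypes": ["VOLUME"],
            # Snapshot every volume carrying this placeholder tag.
            "TargetTags": [{"Key": "Backup", "Value": "true"}],
            "Schedules": [{
                "Name": "daily",
                "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
                "RetainRule": {"Count": 7},
            }],
        },
    )
    print(policy["PolicyId"])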
[Q] Share a snapshot.

The company has multiple departments with AWS accounts that use AWS
resources for various purposes. An EBS volume in department A's account A
needs to be used in department B's account B as well. As the solution
architect, you are required to handle this setup. The snapshot was taken
from an EBS volume that was encrypted with a custom key.

What is the correct combination of steps to share an encrypted EBS
snapshot? (Please choose two.)

1) Configure a copy of the EBS volume to be copied to another account.


2) Disable encryption on EBS snapshots.
3) Configure the sharing settings of a snapshot with account B ID on the
EC2 console screen.
4) Share a custom key used to encrypt the volume
5) Change the permissions of the encrypted snapshot on the EC2 console
screen and set it to account B.
Share a snapshot
Snapshots can be used across regions

[Diagram: an EBS snapshot stored via S3 in the Tokyo Region is copied to
the Osaka Region, where new EBS volumes can be created from it]
Share a snapshot
Snapshots can be transferred to other accounts by changing the permissions

[Diagram: an EBS snapshot in Account A is shared with Account B by
delegating authority to the other account, and Account B creates volumes
from it]
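The permission change itself is a single API call; a minimal boto3 sketch
(the snapshot and account IDs are placeholders). For an encrypted snapshot,
the custom KMS key must also be shared with the other account through its
key policy.

    import boto3

    ec2 = boto3.client("ec2")

    # Grant account B (placeholder ID) permission to create volumes
    # from the snapshot; encrypted snapshots additionally require
    # sharing the custom KMS key via its key policy.
    ec2.modify_snapshot_attribute(
        SnapshotId="snap-0123456789abcdef0",
        Attribute="createVolumePermission",
        OperationType="add",
        UserIds=["111122223333"],
    )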
[Q] Delete an EBS volume.

The research team uses EC2 instances for data analysis. The data collected
daily is processed as an analytical batch job on an EC2 instance with an
EBS volume attached. While running the analysis, the team discovered that
when the EC2 instance is terminated, the attached EBS volumes are also
lost.

What is the most likely cause of this problem?

1) Because the EC2 instance was created from an instance store-backed AMI,
the root volume can only store data temporarily.
2) The EBS volumes had no snapshots taken, so data could not be retained at
the time of termination.
3) When the EC2 instance was terminated, the EBS volume was deleted at the
same time because protection of the EBS volume was not enabled.
4) If an EBS volume is configured as the root volume of an EC2 instance, the
default behavior when the instance is terminated is to also delete the
attached root volume.
Deleting an EBS volume
The root EBS volume is deleted when the EC2 instance is terminated, so you
need to change the configuration if you want to keep the data.

EBS as root volume
✓ An EBS-backed AMI instance uses EBS as its root volume.
✓ In the default configuration, the root EBS volume is deleted along with
the termination of the EC2 instance.

DeleteOnTermination attribute
✓ When the DeleteOnTermination attribute is enabled, the EBS volume is
deleted in response to the termination of the EC2 instance.
✓ Disabling it allows the EBS volume to be kept.
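To keep the root volume, the flag can be disabled per instance; a short
boto3 sketch (the instance ID and device name are placeholders; check your
AMI's actual root device name).

    import boto3

    ec2 = boto3.client("ec2")

    # Disable DeleteOnTermination so the root volume survives
    # termination of the instance.
    ec2.modify_instance_attribute(
        InstanceId="i-0123456789abcdef0",
        BlockDeviceMappings=[{
            "DeviceName": "/dev/xvda",  # placeholder root device name
            "Ebs": {"DeleteOnTermination": False},
        }],
    )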
[Q] Encryption of EBS

Research institutions use EC2 instances for data analysis. The data collected
daily is processed as a batch job on an EC2 instance with an EBS volume.
Because this data is highly sensitive, the data stored in EBS must meet
HIPAA compliance standards.

Which is the correct description of an encrypted EBS volume? (Please choose


three.)

1) The data stored in the volume is encrypted.
2) Snapshots of the volume are encrypted.
3) Data transferred between the volume and the instance is unencrypted.
4) An SSL certificate is required to encrypt data transferred between the
volume and the instance.
5) A separate encryption step must be performed for snapshots of the
volume.
EBS Encryption
EBS uses KMS's CMK to implement encryption during volume creation and
snapshot creation.

EBS Encryption
✓ Uses an AWS KMS Customer Master Key (CMK) for encryption when creating
EBS volumes and snapshots.
✓ Encryption is implemented both for stored data and for data transferred
between the instance and the EBS storage attached to it.

Encryption target
✓ The data stored in the volume
✓ Data transferred between the volume and the instance
✓ All snapshots created from the volume
✓ All volumes created from those snapshots
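For example, encryption is requested with a single flag at volume creation;
a minimal boto3 sketch (the AZ and CMK alias are placeholders, and KmsKeyId
can be omitted to use the account's default EBS key).

    import boto3

    ec2 = boto3.client("ec2")

    volume = ec2.create_volume(
        AvailabilityZone="ap-northeast-1a",
        Size=100,                     # GiB
        VolumeType="gp3",
        Encrypted=True,               # covers data at rest, in transit, and snapshots
        KmsKeyId="alias/my-ebs-cmk",  # placeholder CMK alias
    )
    print(volume["VolumeId"])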
[Q] EBS status.

You have created an AWS account and launched a new EC2 instance. When
you check the launched EC2 instance, you see that the EC2 status check
shows Insufficient Data.

What is the most likely explanation for this status?

1) The volume check is in progress.


2) EBS is exceeding the volume limit.
3) EC2 failed to check the volume.
4) There is not enough data in the volume.
EBS Status
You need to understand the four EBS volume status indicators: ok, warning,
impaired, and insufficient-data.

insufficient-data: the status check on the volume is still in progress.

Reference: https://docs.aws.amazon.com/ja_jp/AWSEC2/latest/UserGuide/monitoring-volume-status.html
[Q] RAID configuration of EBS.

As a solutions architect, you have built a business application using EC2
instances for a web server and database. You are using a large EC2 instance
with one 500 GB EBS volume to host a relational database. A performance
check revealed that you need to improve the write throughput to the
database.

Choose methods that meet this requirement. (Choose two.)

1) Increase the size of the EC2 instance.


2) Setting up a RAID 0 configuration with two or more EBS volumes
3) Setting up a RAID 1 configuration with two or more EBS volumes
4) Install an EC2 instance in a cluster placement group.
5) Restart the EC2 instance using the PV AMI to enable the extended
network.
EBS RAID Configuration
RAID 0 and RAID 1 configurations are often implemented in EBS to improve
performance and redundancy

RAID 0
✓ Purpose: to improve performance.
✓ RAID 0 treats multiple disks as if they were a single disk to speed up
reading and writing.
✓ This setting is called "striping".

RAID 1
✓ Purpose: to increase volume redundancy.
✓ RAID 1 mirrors two volumes at the same time.
✓ This setting is called "mirroring".
The Scope of ELB
What is ELB?
ELB is a service that provides a load balancer to distribute processing
across multiple EC2 instances

[Diagram: an ELB distributing traffic to multiple EC2 instances]
What is ELB?
ELB also checks the health of your EC2 instances and routes traffic only to
healthy instances.

[Diagram: the ELB concentrates traffic on the healthy instance]
The scope of ELB questions
Frequent questions extracted from the 1625 questions are as follows

ELB features
✓ You will be asked about the use and features of ELB, including how it
differs from Route 53.

ELB Configuration
✓ You will be asked about basic architectural configurations using ELB.
✓ You will be asked how to configure an internal ELB based on security
requirements.

Select the ELB Type
✓ Based on a scenario, you will be asked which ELB type should be used.

ALB features
✓ You will be asked about the functions of ALBs and how they differ from
other ELB types.

NLB Features
✓ You will be asked about the functions of NLBs and how they differ from
other ELB types.
The scope of ELB questions
Frequent questions extracted from the 1625 questions are as follows

Cross-zone load balancing
✓ You will be asked about the features and use cases of cross-zone load
balancing in ELB.

Encryption
✓ You will be asked how to set up ELB encryption.

Sticky session
✓ You will be asked about the characteristics and use cases of ELB sticky
sessions.

Connection Draining
✓ You will be asked about the features and use cases of Connection
Draining in ELB.

Logging
✓ You will be asked how to enable logging for ELB.
[Q] ELB Features

Your company is using AWS in multiple departments and you have a VPC set
up for each application. As a solution architect, you are implementing inter-
application integration. To implement this, you want to configure each VPC to
have a peering connection and use a single ELB to route traffic to multiple
EC2 instances in peered VPCs in the same region.

How can such an ELB configuration be achieved?

1) IP addresses are used as targets by NLB or ALB.


2) Any ELB can be configured across VPCs.
3) This requirement requires the use of Route 53, not ELB.
4) ELBs cannot be configured across VPCs.
ELB Features
ELB promotes scalability through load balancing and high availability
through health checks

Ensuring Scalability
✓ Load balancing across multiple EC2 instances / ECS services

High availability
✓ Distributes traffic only to active targets among multiple EC2 instances
in multiple availability zones

[Diagram: one ELB splits traffic 30/70 across instances; another ELB
focuses traffic on the active target]
ELB Features
A managed load balancing service that is commonly used to distribute
processing across EC2 instances

 A service that distributes load between instances. Load balancing can
target IP addresses as well as instances.
 Health checks recognize abnormal instances and distribute traffic only to
active instances.
 Can be used for both public and private subnets.
 Scaling, which automatically increases or decreases capacity based on
load, is done on the AWS side as a managed service.
 Charged by load balancer capacity unit (LCU) usage based on time
(ALB/NLB; CLBs are charged by running time and data transfer).
 Integrates with Auto Scaling, Route 53, CloudFormation, etc.
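As a rough boto3 sketch of standing up an ALB with a health-checked target
group (all subnet, security group, and VPC IDs are placeholders):

    import boto3

    elbv2 = boto3.client("elbv2")

    # Internet-facing ALB across two subnets in different AZs (placeholder IDs).
    lb = elbv2.create_load_balancer(
        Name="demo-alb",
        Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
        SecurityGroups=["sg-0123456789abcdef0"],
        Scheme="internet-facing",
        Type="application",
    )

    # Target group with a health check so only healthy targets receive traffic.
    tg = elbv2.create_target_group(
        Name="demo-targets",
        Protocol="HTTP",
        Port=80,
        VpcId="vpc-0123456789abcdef0",
        HealthCheckPath="/healthz",
    )

    # Forward listener traffic to the target group.
    elbv2.create_listener(
        LoadBalancerArn=lb["LoadBalancers"][0]["LoadBalancerArn"],
        Protocol="HTTP",
        Port=80,
        DefaultActions=[{
            "Type": "forward",
            "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"],
        }],
    )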
[Q] ELB Configuration

A venture company in Singapore is using AWS to build an application for a
new service. The application requires four EC2 instances as web servers,
configured as a target group behind an ELB.

How can you configure this using regions in Asia?

1) Deploy all four instances to two availability zones in the Singapore region.
2) Deploy the four instances to AZ-a in the Singapore region.
3) All four instances are deployed to AZ-b in the Sydney region.
4) Two instances are deployed to AZ-a in the Singapore region and the other
two instances are deployed to AZ-b in the Sydney region.
ELB Configuration
ELB can be configured to distribute traffic to instances across multiple AZs.

[Diagram: one region with public subnets 10.0.1.0/24 and 10.0.2.0/24 in
different AZs; the ELB distributes traffic to EC2 instances in both]
[Q] ELB Configuration

A healthcare company is using AWS to build a medical data sharing
application for a new service. The application uses EC2 instances for the
web server, and S3 and RDS for the data layer. To distribute the load, an
internet-facing ALB needs to be configured. Public access to the medical
data must be restricted.

Select the configurations needed to make this setup work. (Select two.)

1) Create a public subnet corresponding to the same AZ and associate it


with the ALB.
2) Connect your Internet gateway to a private subnet
3) Add an Elastic IP address to each EC2 instance in the private subnet.
4) Install a NAT gateway on the private subnet.
5) Set up an RDS and EC2 instance on a private subnet to serve as a server
for applications.
ELB Configuration
ELBs can also be used in private subnets

[Diagram: one region with private subnets 10.0.1.0/24 and 10.0.2.0/24 in
different AZs; an internal ELB distributes traffic between EC2 instances]
ELB Configuration
You can also place an ELB connected to the public network in front of
private subnets to distribute traffic

[Diagram: VPC 10.0.0.0/16 with the ELB in a public subnet (10.0.0.0/24)
and EC2 instances plus MySQL RDS DB servers (synchronous replication with
automatic failover) in private subnets 10.0.1.0/24 and 10.0.3.0/24 across
two AZs]
[Q] Select the ELB type

Company A operates a video delivery site and is looking to use the AWS
cloud to deliver its content to users around the world. The video delivery site
has users all over the world and must support at least one million requests
per second.

Which ELB type should be used to meet this requirement?

1) Application Load Balancer


2) Classic Load Balancer
3) Basic Load Balancer
4) Network Load Balancer
Select the ELB type
There are three types of load balancers available, and they can be used for
different purposes

CLB
• Layers 4 and 7 are supported, using TCP, SSL, HTTP and HTTPS listeners
• Since it is an older type, you should prioritize ALB or NLB.
• Charged based on running time and data transfer (in GB).
• Since IP addresses are variable, only the DNS name can be specified

ALB
• Layer 7 support with HTTP/HTTPS listeners
• Path-based routing is available
• Charged by load balancer capacity unit (LCU) usage based on time.
• Since IP addresses are variable, only the DNS name can be specified
• Cross-zone load balancing is enabled by default

NLB
• L4 NAT load balancer supporting TCP listeners (return traffic does not
go through the NLB)
• Charged by the amount of LCUs used, based on time.
• Subnet expansion support (subnets can be added)
• Fixed IP addresses, so both the DNS name and IP address can be used
• Higher-performance processing than ALB and CLB
• Cross-zone load balancing is disabled by default
[Q] ALB features

A venture company is using AWS to build an application for a new service.


The application requires the use of four EC2 instances on the web server to
configure a target group of ALBs. In addition, the development team
configures the routing by routing traffic to multiple back-end services based
on the URL paths in the HTTP headers.

Request https://www.pintor.com/index to microservice A
Request https://www.example.com/head to microservice B

Choose a configuration method that meets these requirements.

1) Using NLB's query string parameter-based routing


2) Using ALB's HTTP Header Based Routing
3) Use Route53 weighted routing.
4) Use ALB's path-based routing.
CLB (Classic Load Balancer)
Early ELB type offering standard L4/L7 load balancing but no complex
configuration

[Diagram: a CLB in one region spanning two subnets (10.0.1.0/24 and
10.0.2.0/24) in different AZs, distributing traffic to EC2 instances]

 Supports HTTP/HTTPS and TCP/SSL protocols at L4 and L7
 Identifies the source IP address via the Proxy Protocol
 Server certificate authentication between the ELB and the back-end EC2
instances when using HTTPS/SSL
 All instances under a CLB should serve the same function.
 Content-based routing, which checks the contents of requests and
allocates them to different destinations, is not possible
Application Load Balancer (ALB)
A single load balancer with enhanced Layer 7 support, enabling requests to
be routed to different applications

[Diagram: an ALB in one region spanning two subnets (10.0.1.0/24 and
10.0.2.0/24) in different AZs, routing to target groups of EC2 instances]

Support for Layer 7 and HTTP/HTTPS listeners
Accepts WebSocket and HTTP/2 requests
Multiple ports can be registered on one instance
Multiple ports can be registered as individual targets, making it possible
to load balance containers, such as ECS tasks, that use those ports
Health checks on target groups are possible
Deletion protection is available, as with EC2
Weighted load balancing is available
Content-based routing can be used to check the contents of requests and
distribute them to different destinations
Path-based routing based on the URL path is available
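Path-based routing is expressed as listener rules; a minimal boto3 sketch
(the ARNs are placeholders):

    import boto3

    elbv2 = boto3.client("elbv2")

    # Route /order/* to the order service's target group; other paths
    # fall through to the listener's default action.
    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:region:123456789012:listener/app/demo/xxx/yyy",
        Priority=10,
        Conditions=[{"Field": "path-pattern", "Values": ["/order/*"]}],
        Actions=[{
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:region:123456789012:targetgroup/order/zzz",
        }],
    )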
CLB and ALB
ALBs can balance traffic per function through path-based routing according
to the contents of the request.

[Diagram: with CLBs, each function needs its own load balancer
(order.japan.com and procure.japan.com each in front of their own APP
servers); a single ALB can route japan.com/order and japan.com/procure to
the respective function's APP servers via path routing]
[Q] NLB features

Company A operates a video delivery site and is looking to use the AWS
cloud to deliver its content to users around the world. The video delivery site
has users all over the world and the requirement is to support at least one
million requests per second. The engineering team provisioned multiple
instances on the public subnet and designated these instance IDs as targets
for the NLB.

Describe the correct routing scheme for the target instance configured in the
NLB.

1) Traffic is routed through the instance using the primary private IP address
2) Traffic is routed to the instance using the primary public IP address
3) Traffic is routed to the instance using the DNS name

4) Traffic is routed to the instance using Elastic IP addresses


5) Traffic is routed to the instance using the instance ID.
Network Load Balancer (NLB)
NLB is a high-performance load balancer designed to handle millions of
requests per second while maintaining high throughput with ultra-low latency.

 L4 NAT load balancer supporting TCP listeners (return traffic does not go through the NLB)
 Able to handle volatile workloads and millions of requests per second
 Supports registering IP addresses as targets, including targets outside the VPC, and static IP addresses
 Multiple ports can register the same instance or IP address to the same target group
 NLB does not need the pre-warming request that is required for CLBs and ALBs when large-scale
access is anticipated
 While ALB and CLB use X-Forwarded-For to determine the source IP address, NLB does not
rewrite the source IP address or source port, so the source can be determined directly from
packets
 NLB has built-in fault tolerance and can handle connections that stay open for months or years
 Support for containerized applications such as ECS
 Support for monitoring the individual health status of each service
 Support for subnet expansion (subnets can be added)
[Q] Cross-zone load balancing
A large supermarket chain is running an e-commerce application. It deploys four EC2
instances, one instance in AZ-a and three instances in AZ-b for redundancy, and uses
ELB to control traffic.

What are the results of the traffic balancing with and without cross-zone load balancing
in this configuration?

1) With cross-zone load balancing enabled, one instance of AZ-a receives 50% of the
traffic and three instances of AZ-b each receive 17% of the traffic. With cross-zone
load balancing disabled, one instance of AZ-a receives 25% of the traffic, and three
instances of AZ-b each receive 25% of the traffic.
2) With cross-zone load balancing enabled, one instance of AZ-a receives 25% of the
traffic and three instances of AZ-b each receive 17% of the traffic. With cross-zone
load balancing disabled, one instance of AZ-a receives 25% of the traffic, and three
instances of AZ-b each receive 25% of the traffic.
3) With cross-zone load balancing enabled, one instance of AZ-a receives 25% of the
traffic and three instances of AZ-b each receive 25% of the traffic. With cross-zone
load balancing disabled, one instance of AZ-a receives 50% of the traffic and the
three instances of AZ-b each receive approximately 17% of the traffic.
4) With cross-zone load balancing enabled, one instance of AZ-a receives 90% of the
traffic and three instances of AZ-b each receive 10% of the traffic. With cross-zone
load balancing disabled, one instance of AZ-a receives 10% of the traffic, and three
instances of AZ-b each receive 30% of the traffic.
[Q] Encryption

A large supermarket chain is running an e-commerce application. All data in


transit through the ELB must be encrypted.

How can you achieve your encryption requirements? (Please choose two.)

1) Configure a TCP listener with NLB and terminate SSL on the EC2 instance.
2) Configure an HTTPS listener in the ALB and install an SSL certificate on
the ALB and the EC2 instance.
3) Configure an HTTPS listener in the NLB to install SSL certificates on the
ALB and EC2 instances.
4) Use pass-through mode in an ALB to terminate SSL on an EC2 instance.
5) Configure a TCP listener in the ALB and install an SSL certificate on the
ALB and the EC2 instance.
[Q] Sticky Session.

A large supermarket chain runs an e-commerce application. In order to


achieve redundancy, multiple EC2 instances are configured with traffic
control using ELB and scaling using Auto Scaling. Because the system often
handles intermittent processes from the same user, the requirement is that
requests from the same user must continue to be routed to the same EC2
instance.

Choose how to configure the ELB to meet this requirement.

1) Use the load balancing function so that all requests from the same user
are sent to the same EC2 instance during a session.
2) Use Connection Draining to send all requests from the same user to the
same EC2 instance during a session.
3) Use a sticky session to send all requests from the same user to the same
EC2 instance during the session.
4) Use SSL Termination to send all requests from the same user to the same
EC2 instance during a session.
[Q]Connection Draining

A large supermarket chain is running an e-commerce application on EC2
instances in a multi-AZ configuration with ALBs. The development team has
been struggling with a recurring issue where in-progress requests from the
ELB to an EC2 instance are dropped when the instance goes bad.

Which feature should be used to address this problem?

1) Connection Draining
2) Cross-zone load balancing
3) Sticky session
4) Enabling Health Checks
[Q] Logging

A company is running a web application in which ALBs perform cross-zone
load balancing across EC2 instances in multiple AZs. As a solution architect,
you are responsible for analyzing this application, and you are required to
obtain detailed information about all HTTP requests handled by the ALB and
analyze the traffic conditions.

Choose a response that meets this requirement.

Choose a response to meet this requirement.

1) Get ALB metrics from CloudWatch.


2) Enable the ELB access log and save the log data to S3
3) Enable detailed monitoring of EC2.
4) Configure CloudTrail for the ELB to obtain access logs.
Key Features of ELB
Various features for ELB load balancing

Health Check
Checks whether an EC2 instance is normal or abnormal and assigns traffic
only to healthy EC2 instances.

Cross-zone load balancing
Distributes the load evenly across multiple EC2 instances in multiple AZs
based on the load of the subordinate EC2 instances.

Encrypted communication
HTTPS or TLS communication can be implemented by setting an SSL/TLS
certificate on the ELB.

Sticky session
Continuously sends requests from the same user to the same EC2 instance
during a session.

Connection Draining
If an instance is deregistered or fails, stops sending new requests to that
back-end instance.

Logging
Enabling ELB logging collects logs in S3 buckets.
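Sticky sessions and Connection Draining (called deregistration delay for
ALB/NLB target groups) are both set as target group attributes; a small
boto3 sketch with a placeholder ARN and illustrative values:

    import boto3

    elbv2 = boto3.client("elbv2")

    elbv2.modify_target_group_attributes(
        TargetGroupArn="arn:aws:elasticloadbalancing:region:123456789012:targetgroup/demo/zzz",
        Attributes=[
            # Sticky sessions: pin a user to one target via an LB cookie.
            {"Key": "stickiness.enabled", "Value": "true"},
            {"Key": "stickiness.type", "Value": "lb_cookie"},
            {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "3600"},
            # Connection draining: give in-flight requests time to finish
            # before a deregistering target is dropped.
            {"Key": "deregistration_delay.timeout_seconds", "Value": "120"},
        ],
    )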


Section content
Lecture / What you will learn in the lecture

The Scope of SQS
Review questions about SQS, a service that performs task management
through queuing.

The Scope of CloudFront
Review questions about CloudFront, the AWS CDN service.

The Scope of DynamoDB
Review questions about DynamoDB, a typical NoSQL database.

The Scope of Lambda
Review questions about Lambda, a typical serverless computing environment.

The Scope of Route 53
Review questions about Route 53, which provides DNS server capabilities for
AWS.
The Scope of SQS
What is SQS?
A queuing service that enables parallel execution of workloads by managing
queues that trigger tasks.

Reference: https://aws.amazon.com/jp/blogs/developer/using-python-and-amazon-sqs-fifo-queues-to-preserve-message-sequencing/
The scope of SQS questions
Frequent questions extracted from the 1625 questions are as follows

Selecting SQS
✓ You will be asked to choose SQS over similar services such as Amazon
SNS, SES, etc.

SQS Features
✓ You will be asked about features and constraints, such as the polling
process of SQS queues.
✓ You will be asked about the behavior and configuration of SQS.

SQS queue type
✓ You will be asked about the characteristics and use cases of the standard
and FIFO queues that can be selected in SQS.

SQS Identifier
✓ You will be asked about the characteristics and usage of identifiers for
SQS.

Configure SQS
✓ You will be asked how to configure SQS with EC2 instances, ECS, etc.
The scope of SQS questions
Frequent questions extracted from the 1625 questions are as follows

SQS and Auto Scaling
✓ You will be asked to configure scaling settings when using SQS in
conjunction with Auto Scaling.

Visibility timeout
✓ You will be asked about the characteristics and use cases of the
visibility timeout.

Polling system
✓ You will be asked how to differentiate short polling from long polling
and their use cases.

Delay queue
✓ You will be asked the characteristics and use cases of delay queues.

Priority queue
✓ You will be asked the characteristics and use cases of priority queues.
The scope of SQS questions
Frequent questions extracted from the 1625 questions are as follows

Message timers
✓ You will be asked about the features and use cases of message timers.

Message deduplication ID
✓ You will be asked about the characteristics and use cases of the message
deduplication ID.

Dead letter queue
✓ You will be asked about the characteristics and use cases of dead letter
queues.

SQS Batch Actions
✓ You will be asked about the configuration method used to send messages
together in an SQS queue.
[Q] Select SQS

Your company runs a video management application for uploading,


processing and publishing user-submitted videos. The application utilizes
multiple EC2 instances to process video uploaded by users. It has an EC2-
based worker process that processes and publishes video and has an Auto
Scaling group configured.

Select the services you should use to increase the reliability of your worker
processes.

1) Amazon SQS
2) Amazon SNS
3) Amazon SES
4) Amazon MQ
Selecting SQS
SQS is a polling-type queuing service and is used for parallel execution of
tasks.

Amazon SNS
A fully managed pub/sub messaging service. It is used for collaborative
processing through email notifications and push notifications.

Amazon SQS
A fully managed queuing service. It is used for parallel execution of tasks
via the polling process.

Amazon SES
A service that enables e-mail functionality. It is used to implement email
sending and receiving functionality in an application. It enables secure,
global and large-scale sending of e-mail.

Amazon MQ
A managed message brokerage service for Apache ActiveMQ using
industry-standard APIs and messaging protocols such as JMS, NMS, AMQP,
STOMP, MQTT, and WebSocket.
[Q] SQS features

You are a solution architect and you are building an e-commerce site that is
hosted on an EC2 instance. The site's orders are configured to be actioned by
the processing server via messages from the SQS queue; the visibility
timeout for the SQS queue is set to 30 minutes. The site is configured to
notify the order handler of a message when an order is completed, but we
are experiencing trouble delivering some of the message notifications to
orders.

Which is the most likely cause of this problem?

1) The server processing the order is not deleting messages from the SQS
queue after processing them.
2) The standard queue is used, which results in duplicate messages.
3) The queue is set to short polling, which increases the number of empty
message retrievals.
4) Several order messages have been transferred to the dead letter queue.
SQS features
SQS can be used in conjunction with various AWS services to realize a
loosely coupled architecture.

Basic information
 Uses single-issue messages as a queue
 A polling-based queuing service
 The standard queue does not guarantee the order of message delivery, but
the FIFO queue does.
 Priority queues (FIFO) can take precedence over other queues
 A message is retained for the message retention period; if the period is
exceeded, the message is deleted.
 You cannot cancel a message once it has been issued.
 Queues are retried based on the delivery policy
SQS features
A queue is a relay station where messages sent by the producer are stored;
processing begins when the consumer polls for messages.

[Diagram: the producer's transmission process sends messages to the queue
(relay station), and the consumer's process polls and receives them]
SQS features
SQS issues and stores queues and manages the polling process.

(1) The sending process sends a message.
(2) SQS keeps the message in the queue.
(3) The receiving process inquires whether a message is present.
(4) If there is a message, the receiving process receives it.
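This flow maps directly onto the SQS API; a minimal boto3 sketch (the queue
name is a placeholder), including the delete step that exam questions often
hinge on:

    import boto3

    sqs = boto3.client("sqs")
    queue_url = sqs.get_queue_url(QueueName="demo-queue")["QueueUrl"]  # placeholder

    # (1) The producer sends a message; (2) SQS keeps it in the queue.
    sqs.send_message(QueueUrl=queue_url, MessageBody="process order #42")

    # (3)+(4) The consumer polls the queue and receives the message.
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
    for msg in resp.get("Messages", []):
        print(msg["Body"])
        # Delete after successful processing; otherwise the message
        # reappears once the visibility timeout expires.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])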
Features of the SQS queue
Unlimited messages are available, but the message retention period needs
to be set

Message limitations
• Unlimited number of messages
• Message size is up to 256 KB
• However, the extended client library allows messages of up to 2 GB to be
exchanged.

The retention period
• Messages in an SQS queue are stored for the retention period unless
deleted.
• Default is 4 days (can be set from a minimum of 60 seconds to a maximum
of 14 days).
• If the application does not execute the delete operation for a message,
it stays in the queue until the period is exceeded.
[Q] SQS Queue Type.

A global consulting firm is building an information sharing system on AWS to
share consulting insights globally. The system uses S3 as the storage layer
and adds a process that delivers messages to an SQS queue, triggered by an
event whenever data is uploaded to S3.

Which of the following is the correct explanation for this feature?

1) You can configure standard Amazon SQS queues for S3 events, but you
cannot configure FIFO queues.
2) You can configure Amazon SQS FIFO queues for S3 events, but you
cannot configure standard queues.
3) S3 events can be configured with both standard Amazon SQS queues and
FIFO queues.
4) You cannot set up Amazon SQS for S3 events, so you need to use SNS.
SQS Queue Type
In SQS, you have to choose between standard queues and FIFO queues when
initializing SQS.

Standard queues
✓ Messages may be delivered out of order, and more than one copy of a
message may be delivered.
✓ Messages are delivered at least once, so duplicates are possible.
✓ A nearly unlimited number of transactions per second
✓ Standard queues are used when the application can tolerate possible
duplicates and out-of-order messages.

FIFO queues
✓ First-in, first-out (FIFO) delivery protects the order of messages.
✓ There is no duplication in the queue: each message is delivered only once
and remains available until the consumer processes and deletes it.
✓ Limited to 300 transactions per second
✓ FIFO queues are used when the order of operations and events is
important, or where duplication is not acceptable.
Standard queue
A standard queue is a queuing system that performs "sequential processing"
and "one-time delivery" on a best-effort basis.

FIFO queue
As the name implies, a FIFO queue protects the processing order: the first
message into the queue is the first processed.
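A short boto3 sketch of creating a FIFO queue and sending an ordered
message (the names are placeholders); note the required ".fifo" suffix and
the MessageGroupId:

    import boto3

    sqs = boto3.client("sqs")

    # FIFO queue names must end in ".fifo".
    queue = sqs.create_queue(
        QueueName="orders.fifo",
        Attributes={
            "FifoQueue": "true",
            # Derive the deduplication ID from the message body instead of
            # passing MessageDeduplicationId on every send.
            "ContentBasedDeduplication": "true",
        },
    )

    # Messages sharing a MessageGroupId are processed strictly in order.
    sqs.send_message(
        QueueUrl=queue["QueueUrl"],
        MessageBody="order #1 for device-42",
        MessageGroupId="device-42",  # placeholder group key
    )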
[Q] SQS Identifier

As a solution architect, you're building a workload that analyzes streaming


data from IoT devices. In this streaming process, data is sent to AWS every
minute. Each IoT device's data is required to be processed individually in turn.
The IoT devices are in groups of two to five, located in the same location,
and the data must also be analyzed collectively as a group.

Choose a solution that can meet this requirement.

1) Use the SQS FIFO queue to send the message with a group ID attribute
that represents the value of the device ID of the IoT data.
2) The standard SQS queue is used to send the message with a group ID
attribute that represents the value of the device ID of the IoT data.
3) Use Kinesis Data Streams to send a message with a group ID attribute
that represents the value of the device ID of the IoT data.
4) Kinesis Data Streams is used to process data in isolation per shard by
assigning a group ID attribute that represents the value of the device ID
of the IoT data.
SQS Identifier
SQS provides several identifiers you can use when working with queues.

Queue URL
• The URL assigned to the queue

Message ID
• The ID assigned to each message

Message Group ID
• A tag that specifies that a message belongs to a specific message group.
• Messages belonging to the same message group are processed one at a
time, in strict order relative to the message group.
• To interleave multiple ordered message groups in a single FIFO queue,
use distinct message group ID values.
[Q] SQS Configuration
A marketing company is using Amazon ECS to build a data analysis
application. This application runs on multiple Amazon ECS tasks. The front-
end application performs data pre-processing and then passes the data to a
back-end ECS task to perform data analysis. These analytical processes run
in parallel to achieve high performance, while reducing interdependencies to
ensure that failures do not affect other components.

What is the most cost effective combination of AWS architectural


configurations that can meet this requirement?

1) Create an Amazon SQS queue, configure the front end to add messages
to the queue, and configure the back end to poll the queue for messages.
2) Create an Amazon SQS queue, configure the backend to add messages to
the queue, and configure the frontend to poll the queue for messages.
3) Create an Amazon SNS, configure the front end to add messages to the
queue, and configure the back end to poll the queue for messages.
4) Create an Amazon SNS, configure the backend to add messages to the
queue, and configure the frontend to poll the queue for messages.
SQS Configuration
The basic structure of SQS is that queues are filled by front-end servers
and processed in parallel by back-end processing servers

Reference: https://aws.amazon.com/jp/blogs/developer/using-python-and-amazon-sqs-fifo-queues-to-preserve-message-sequencing/
[Q] SQS and Auto Scaling
Company B has built a workload on AWS that runs video processing. The
system requires a distributed configuration with queues for parallel data
processing. These jobs are executed irregularly and there are many
processing changes, so the execution period is unclear. In addition, the load
is likely to increase or decrease frequently. This video processing system is
planned to be operated in the medium to long term, and each editing
process will take from one to 30 minutes to complete.

What is the most cost effective combination of AWS architectural


configurations that can meet this requirement? (Please select two.)

1) Set up parallel processing with SQS by using a reserved instance on the


video processing server.
2) Use a spot instance of the video processing server to set up parallel
processing with SQS.
3) Set up parallel processing with Lambda by using a spot instance on the
video processing server.
4) Configure Auto Scaling to scale using spot instances and set the
appropriate backlog of SQS to a threshold value to perform the scaling.
5) Configure Auto Scaling to scale using spot instances and set a threshold
for the number of messages in SQS to perform the scaling.
SQS and Auto Scaling
When combining SQS and Auto Scaling, set up scaling based on CloudWatch
metrics that match the processing volume of the queue.

[Diagram: an EC2 instance sends processing instructions to SQS; worker EC2
instances in an Auto Scaling group poll and process the queue; CloudWatch
is notified of high metrics and triggers Auto Scaling]

Metrics settings
✓ Scaling based on the backlog per instance
✓ Message count
✓ Message size
✓ Processing time, etc.

Reference: https://docs.aws.amazon.com/ja_jp/autoscaling/ec2/userguide/as-using-sqs-queue.html
[Q] Visibility timeout
Company B has built a workload on AWS that runs video processing. The
video processing system executes the video editing process based on
messages from the Amazon SQS queue sent from the EC2 instance. After
processing, the video is stored in S3.
Spot instances are used for this processing, but some spot instances are
terminated immediately after messages are retrieved from the queue. These
spot instances had not completed processing the messages. SQS is using
FIFO queues, and visibility timeouts have been set up.

Based on this scenario, what happens to messages that are not finished?

1) Once the visibility timeout has passed, the message can be processed
again.
2) The message is lost because it was deleted from the queue during
processing.
3) The message is not processed and is moved to the dead letter queue.
4) The message remains in the queue and is immediately retrieved by
another instance.
Visibility timeout
The visibility timeout is a feature that renders a message invisible to
other consumers for a set period of time (30 seconds to 12 hours).

[Diagram: an EC2 instance sends instructions to SQS; several consumer EC2
instances poll the queue]

(1) A message is sent to the queue. Immediately after it is received by a
consumer, the message still remains in the queue.
(2) During the visibility timeout (e.g. 10 minutes), the polled message
cannot be seen by other consumers.
(3) Only the EC2 instance that received the message can process it, so
Amazon SQS prevents duplicate processing of the same message.
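A small boto3 sketch of the two ways to control the timeout (queue-level
default and per-message extension); the queue name and values are
illustrative:

    import boto3

    sqs = boto3.client("sqs")
    queue_url = sqs.get_queue_url(QueueName="demo-queue")["QueueUrl"]  # placeholder

    # Queue-level default: received messages stay invisible for 10 minutes.
    sqs.set_queue_attributes(
        QueueUrl=queue_url,
        Attributes={"VisibilityTimeout": "600"},
    )

    # Per-message: a consumer that needs more time can extend the timeout
    # of the message it is currently holding.
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
    for msg in resp.get("Messages", []):
        sqs.change_message_visibility(
            QueueUrl=queue_url,
            ReceiptHandle=msg["ReceiptHandle"],
            VisibilityTimeout=1200,  # seconds
        )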
[Q] Polling method

Company B has built a workload on AWS to perform video processing. The


application is configured to allow the video editing process by the EC2
instances to be run in isolation using Amazon SQS queues. The developers
discovered that when the video list was updated during video editing, the
video list would go unprocessed and message processing would fail.

Which features should be used in cases where message processing fails?

1) Use a delay queue to handle failures in message processing.


2) Use short polling to handle message processing failures.
3) Use long polling to handle message processing failures.
4) Use a dead letter queue to handle message processing failures.
Polling Method
There are two types of polling methods: short polling and long polling.

Long polling
• If the query result is empty, SQS waits up to the specified time for a
message to arrive before returning a response.
• Set the message reception wait time from 0 to 20 seconds.
• The number of empty responses can be reduced.

Short polling
• An empty response is returned immediately if the queue is empty.
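Long polling is just a wait-time setting; a brief boto3 sketch (the queue
URL is a placeholder):

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.ap-northeast-1.amazonaws.com/123456789012/demo-queue"  # placeholder

    # Enable long polling for all consumers of the queue (0-20 seconds).
    sqs.set_queue_attributes(
        QueueUrl=queue_url,
        Attributes={"ReceiveMessageWaitTimeSeconds": "20"},
    )

    # Or opt in per call: wait up to 20 seconds for a message instead of
    # returning an empty response immediately.
    resp = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20)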
[Q] Delay queue

You are a solution architect building an application using microservices.
You use SQS queues to decouple components between microservices. Since
each component requires a certain amount of time to process SQS messages,
the queue must defer the delivery of all new messages for 10 seconds before
they become available for processing.

Choose a method of setting up the queue to meet this requirement.

1) Use a delay queue to defer delivery of new messages to the queue for 10
seconds.
2) Use short polling to defer delivery of new messages to the queue for 10
seconds.
3) Use message timers to defer delivery of new messages to the queue for
10 seconds.
4) Use a visibility timeout to defer delivery of new messages to the queue
for 10 seconds.
[Q] Message timers

You are a solution architect and are building an application using


microservices. You are using SQS queues to decouple components between
microservices. You are currently implementing a configuration that defers
the delivery of certain messages to the queue for 10 seconds, while all
other messages are delivered to the queue immediately.

Choose a method of setting up the queue to meet this requirement.

1) Use a delay queue to defer the delivery of a specific message to the


queue for 10 seconds.
2) Use short polling to defer delivery of a specific message to the queue for
10 seconds.
3) Use a message timers to defer delivery of a specific message to the
queue for 10 seconds.
4) Use a visibility timeout to defer delivery of a specific message to a queue
for 10 seconds.
[Q] Priority queue

Company B has built a video editing application on AWS. This video


processing application performs video editing by sending messages from the
Amazon SQS queue sent from the EC2 instance, and stores the processed
video in S3. There are two types of users: free users and paid users. Files
submitted by paid users must be processed with priority.

Choose an implementation method to meet these requirements.

1) Use SQS to set up priority messages for paid users to be processed and
use default messages for free users.
2) Use the Lambda function to set up a polling process that prioritizes
message processing for paid users and uses default messages for free
users.
3) Set up a polling process that prioritizes message processing for paid users
using SNS and uses default messages for free users.
4) Use Amazon MQ to set priority messages for paid users and use default
messages for free users.
Advanced queue settings
SQS offers various queue features; you need to use them appropriately
depending on the use case.

Delay queue
• The ability to delay delivery of new messages to the queue (set from 0
seconds to 15 minutes).
• The difference from the visibility timeout is that a delay queue hides a
message immediately after it is issued, and it affects the entire queue.

Priority queue
• You can prioritize the order in which messages are processed.
• This allows you to set up workflows so that tasks requiring a priority
response are processed first.

Dead letter queue
• A queue that receives messages that could not be successfully processed
(consumed).
• The reason for the failure can be analyzed later, while preventing the
accumulation of unprocessable messages.
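A dead letter queue is attached through the source queue's RedrivePolicy
attribute; a minimal boto3 sketch (queue names and URLs are placeholders):

    import json
    import boto3

    sqs = boto3.client("sqs")

    dlq = sqs.create_queue(QueueName="demo-dlq")
    dlq_arn = sqs.get_queue_attributes(
        QueueUrl=dlq["QueueUrl"], AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # After 5 failed receives, a message is moved to the DLQ for analysis.
    sqs.set_queue_attributes(
        QueueUrl="https://sqs.ap-northeast-1.amazonaws.com/123456789012/demo-queue",  # placeholder
        Attributes={"RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        )},
    )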
Advanced queue settings
SQS offers various queue features; you need to use them appropriately
depending on the use case.

De-duplication ID
• A token used to avoid duplicate messages.
• Messages sent to the queue with the same de-duplication ID are not
accepted for a period of 5 minutes.
• It applies to the entire queue, not just individual message groups.
• Used in FIFO queues only.

Encryption
• Use the AWS Key Management Service (AWS KMS) to encrypt outgoing data.

Message timer
• A function that keeps an individual message invisible from the moment it
is added to the queue. If a message is sent with a 45-second timer, it is
not visible for its first 45 seconds in the queue.
• Use a delay queue to set a delay for the entire queue rather than for
individual messages.
• The message timer set on an individual message takes precedence over the
queue-wide delay.
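The two delays differ only in where DelaySeconds is set; a short boto3
sketch (the queue URL is a placeholder):

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.ap-northeast-1.amazonaws.com/123456789012/demo-queue"  # placeholder

    # Delay queue: every new message is hidden for 10 seconds.
    sqs.set_queue_attributes(
        QueueUrl=queue_url,
        Attributes={"DelaySeconds": "10"},
    )

    # Message timer: this one message is hidden for 45 seconds,
    # overriding the queue-wide delay above.
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody="deferred task",
        DelaySeconds=45,
    )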
[Q] Batch Action on SQS

Your company is building a business system that runs workflows on AWS. As
the solution architect, you are implementing a highly available,
high-performance flow with queuing using SQS. The expected peak rate is
about 1,000 messages per second processed through SQS, and it is important
that messages are processed in sequence.

Which features should be used in this SQS implementation?

1) Use FIFO queues in batch mode with 4 messages per operation.


2) Use FIFO queues in batch mode with 2 messages per operation.
3) Use a standard queue in batch mode of 4 messages per operation.
4) Use standard queues in batch mode with 2 messages per operation.
SQS Batch Actions
Batch actions can be configured to handle multiple messages with a single
action

You can take advantage of batch actions using the AWS SDK, which supports
the Amazon SQS batch actions:

• SendMessageBatch
• DeleteMessageBatch
• ChangeMessageVisibilityBatch
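For example, SendMessageBatch via boto3 (the queue URL is a placeholder);
each call carries up to 10 messages, and on a FIFO queue each entry would
also include a MessageGroupId:

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.ap-northeast-1.amazonaws.com/123456789012/demo-queue"  # placeholder

    # One API call carries several messages, reducing request count and cost.
    resp = sqs.send_message_batch(
        QueueUrl=queue_url,
        Entries=[{"Id": str(i), "MessageBody": f"task {i}"} for i in range(4)],
    )
    print(resp.get("Successful", []), resp.get("Failed", []))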
The scope of CloudFront
What is CloudFront?
CloudFront is a CDN service that uses global edge locations to deliver
content efficiently

[Diagram: CloudFront in front of an EC2 content server]

See: https://aws.amazon.com/jp/cloudfront/features/?nc=sn&loc=2
What is CloudFront?
CloudFront is the CDN (Content Delivery Network) service provided by AWS.
A CDN is a service that speeds up web content delivery.

[Diagram: a single EC2 web server serving users in America, Asia and
Europe directly, versus CloudFront edge servers in each region caching
content; the cache offloads the origin server and speeds up delivery at
the edge]
The scope of CloudFront questions
Frequent questions extracted from the 1625 questions are as follows

S3 Configuration with CloudFront
✓ You will be asked about configurations combining S3 with CloudFront,
based on scenarios such as high-performance content delivery.

Custom Origin Configuration
✓ You will be asked about configuring CloudFront with custom origins such
as EC2 and ELB.

Origin server redundancy
✓ Based on scenarios where redundancy of the origin server is required,
you will be asked about redundant CloudFront configurations.

Edge location
✓ You will be asked about the use of the edge locations that CloudFront
uses for delivery.

Regional edge cache
✓ You will be asked about the use of the regional edge cache that
CloudFront uses for delivery.
The scope of CloudFront questions
Frequent questions extracted from the 1625 questions are as follows

CloudFront Behavior
✓ You will be asked about CloudFront's behavior of checking the cache
first, and its behavior in the absence of cached data.

Set the Cache Retention Period
✓ You will be asked how to set the cache retention period in CloudFront
delivery settings.
✓ You will be asked how to configure it using the Cache-Control header.

Use of Cache
✓ You will be asked how to set up the delivery process to take advantage
of the cache, and how to control the finer details and effectiveness of
the cache.

CloudFront costs
✓ You will be asked about the cost factors of CloudFront.

Gzip compression
✓ You will be asked about the use of the Gzip compression feature.
The scope of CloudFront questions
Frequent questions extracted from the 1625 questions are as follows

Access Control to Origins
✓ You will be asked how to prevent access that bypasses CloudFront and
reaches the origin directly.

Access control to cache
✓ You will be asked how to limit users' access to cached data.

Access restrictions
✓ You will be asked how to restrict CloudFront delivery to certain
countries and regions.

Encryption
✓ You will be asked how to set up the encryption of communication in
CloudFront.
✓ You will be asked about the use of field-level encryption in CloudFront.

Logging
✓ You will be asked about the method of logging and its use in CloudFront.
CloudFront features
Large-scale access is handled by edge locations around the world, allowing
efficient and rapid delivery of content.

 High-performance distributed delivery with over 210 edge locations
 High-performance content delivery
 Security features through integration with AWS WAF, AWS Certificate
Manager, and AWS Shield DDoS protection
 Dynamic page delivery is possible by forwarding headers, cookies, and
query strings to the origin.
[Q] S3 Configuration with CloudFront

A leading news media company is building a news delivery application using
AWS. The application is configured using EC2 instances and S3 to stream
video based on data stored in S3 buckets. Due to the frequency of video
data uploads and delivery requests, as a solution architect you want to
improve the performance of request processing.

Which of the following solutions should be implemented to address this


problem? (Please choose two.)

1) Configure the CloudFront distribution with the S3 bucket as the origin.


2) Implement Route 53 regional restriction settings to optimize delivery by
region.
3) Enable S3 Transfer Acceleration for S3 buckets.
4) Add ELBs to enable cross-zone load balancing.
5) Change the EC2 instance to a storage optimized instance.
CloudFront configuration
CloudFront delivers content using edge locations closer to the user

[Diagram: the origin distributes content through multiple edge locations]
CloudFront configuration
One of the basic configurations is CloudFront delivery in front of S3
static website hosting.

[Diagram: CloudFront edge locations serving content from an S3 bucket
(static web hosting) as the origin server]
[Q] Custom origin configuration

A leading news media company is building a music distribution application
using AWS. The application consists of 10 EC2 instances with Auto Scaling,
with each song residing on an FTP server that is easily accessible by the
EC2 instances. During the Christmas season and other periods when music
distribution is heavy, access spikes and Auto Scaling expands the number of
instances to 100, increasing the cost of using AWS. Network transfer out
seems to be particularly high. As a solution architect, you have been asked
to significantly reduce costs without changing the application code.

Which solution meets this requirement?
1) Use the AWS Transit Gateway


2) Use cross-zone load balancing for ELBs.
3) Use Route53 weighted routing.
4) Use the CloudFront Distribution
CloudFront configuration
A configuration with an EC2 instance running the web application as the
origin server is also common.

[Diagram: CloudFront edge locations serving content from an EC2-based web
application as the origin server]
[Q] Origin server redundancy

A leading news media company is building a music distribution application


using AWS. The application is configured using a single EC2 instance in a
single AZ, and each song is delivered using CloudFront with an EC2 instance
configured as the origin server.

Please suggest improvements to make this application highly available.

1) Connect the ELB to an existing EC2 instance and configure the ELB as the
origin server.
2) Amazon S3 is used to provide dynamic content for web applications and
configure the S3 bucket as an origin server.
3) Configure two or more EC2 instances deployed in different availability
zones as an origin server.
4) Add an Auto Scaling group to an existing EC2 instance and configure Auto
scaling as an origin server.
Origin server redundancy
The origin server should be made redundant and work with CloudFront via an
ELB.

[Diagram: CloudFront edge locations in front of a web application with
redundant EC2 origin servers]
[Q] Edge location

A leading media company delivers news to its customers based on video data
in Amazon S3 buckets. The company's customers are located all over the
world and generate high demand during peak times. Users in European regions
have been complaining of slow download speeds and high rates of HTTP 500
errors during peak hours, and you, as the solution architect, have been
asked to help remedy this.

Choose the best solution to address this problem.

1) Use the Amazon Route 53 weighted routing policy to increase the


weighted ratio of routing to the European region.
2) DynamoDB's DAX cluster is placed in front of the S3 bucket to enable fast
delivery processing.
3) The ElastiCache cluster is placed in front of the S3 bucket to enable fast
delivery operations.
4) Use CloudFront to cache web content and use all edge locations for
content delivery
Edge Network
AWS provides a content delivery network for global content delivery

See: https://aws.amazon.com/jp/cloudfront/features/?nc=sn&loc=2
[Q] Regional edge cache

A major media company stores video data in Amazon S3 buckets and has
configured CloudFront delivery to deliver news to its customers. The
company's customers are located around the world and generate high demand
at peak times. The AWS Content Delivery Network (CDN) provides a
multi-layered cache by default. Regional edge caching improves latency and
reduces the load on the origin server when objects are not yet cached at
the edge. However, some content does not appear to make use of the regional
edge cache.

Which content type goes directly to the origin rather than the regional edge
cache? (Please choose two.)

1) Content configured to forward all headers at the time of request


2) Content for which a direct access request to the origin was issued by the
user
3) All content with access control using custom headers
4) The proxy methods PUT / POST / PATCH / OPTIONS / DELETE go directly
to the origin.
5) Content with all TTLs set to 0
CloudFront configuration
A regional edge cache layer has been added for more efficient delivery
processing

[Diagram: the origin at the top, regional edge caches in the middle, and
POP edge locations at the bottom]

• The regional edge cache sits between the origin server and the POPs
(worldwide edge locations) that serve content directly to the viewer.
• Even if content isn't popular enough to remain in a POP, it can be
placed nearer the viewer as a middle layer to improve the performance of
that content.
• Content that is configured to forward all headers on request, and the
proxy methods PUT / POST / PATCH / OPTIONS / DELETE, go directly to the
origin.

A CloudFront Point of Presence (POP) is an edge location where popular
content is placed as close to the user as possible.
[Q] The behavior of CloudFront

Your company offers an image delivery service consisting of Amazon S3


buckets, EC2 instances and CloudFront. You use CloudFront to optimize your
image delivery. If the content is not at an edge location, you need to see
what kind of processing occurs.

Select the correct description of the CloudFront process from the following.

1) Use another edge location where the content is stored.


2) CloudFront accesses the origin server to get data to the edge
3) A 404 error occurs because the proper data is not in place
4) Stock the request in CloudFront and wait for the data to come to the
edge.
CloudFront Behavior
Data is kept in the cache to speed up delivery.

[Diagram: users in America, Asia and Europe; a web server as the origin and
CloudFront edge caches in each region]

• The edge accesses the origin (e.g. EC2) directly during the initial data
acquisition.
• If the TTL has passed and there is no cache on the edge, the edge
accesses the origin directly.
• From the second request onward, the cache at the edge location is used.
Distribution Settings
CloudFront can implement optimal delivery based on your
conditions.

 Set CloudFront as the delivery destination domain
 Configure CloudFront through the management console and APIs
 Choose between Web distribution and RTMP distribution
 Default ceilings of 40 Gbps / 100,000 RPS apply; a limit increase can be requested for usage beyond them
 You can specify your own domain
Distribution Settings
Can use RTMP distribution when using Adobe media, but
we usually use web distribution

WEB Distribution
• Used for web delivery over the standard HTTP protocol
• Supports HTTP/1.0, HTTP/1.1 and HTTP/2
• Origins can be S3 buckets, MediaPackage channels or HTTP servers
• Delivers static and dynamic downloadable content over HTTP and HTTPS
• Video on demand in a variety of formats, including Apple HTTP Live Streaming (HLS) and Microsoft Smooth Streaming

RTMP Distribution
• Used for RTMP-format distribution
• Streams media files using Adobe Media Server and the Adobe Real-Time Messaging Protocol (RTMP)
• Origin must be an S3 bucket
• The client uses a media file and a media player (JW Player, Flowplayer, Adobe Flash)
• As of 2021, RTMP distribution has been discontinued
[Q] Set the cache retention period.

You are a solutions architect and manage the operations of a web application.
Since the application is used globally, the delivery process is handled by
CloudFront. The origin server is frequently accessed because objects that
should be cached are not in the edge locations. This problem also occurs for
objects that are commonly used.

Select the most likely cause of this problem from the following.

1) The range of object settings to be cached is narrow.


2) Cache-Control max-age directive is set to a low value.
3) The size of the files to be cached exceeds the CloudFront standard.
4) SSL certificate settings are not cached.
Set the Cache Retention Period
After deciding what to cache, it is important to predict the
frequency of cache usage and set the cache retention
period.

Minimum TTL
• Specifies the minimum amount of time (in seconds) that an object stays in the CloudFront cache before CloudFront sends another request to the origin.
• The default value is 0 (seconds).

Maximum TTL
• Specifies the maximum amount of time (in seconds) that an object stays in the CloudFront cache before CloudFront queries the origin to see whether the object has been updated.
• The default value is 31,536,000 seconds (one year).

Default TTL
• Specifies the default amount of time (in seconds) that an object stays in the CloudFront cache before CloudFront sends another request to the origin.
• The default value is 86,400 seconds (one day).
Set the Cache Retention Period
You can use TTL, Cache-Control and Expires headers to
control how long objects are kept in the cache

Cache Analyze content usage data and set target URLs for the
Target Setting caching of static and dynamic content.

TTL Cache retention period set for CloudFront delivery

The Expires header on the Cache-Control header


Expires
Setting the cache expiration date
Cache header
Example: Expires: Thu, 01 Dec 1994 16:00:00 GMT
Expiration
Date
You can specify how long (in seconds) CloudFront will
Cache- keep the object in the cache before retrieving it from
Control the origin server again.
max-age The minimum expiration time is 0 seconds for web
headers distribution and 3600 seconds for RTMP distribution.
Maximum value is 100 (years).
Set the Cache Retention Period
Try to avoid setting conflicting cache deadlines as much
as possible, as this can lead to complex results if you set
up multiple elements.

Mixed Scenarios
• If the Maximum TTL is set to 5 minutes (300 seconds) and the Cache-
Control max-age header is set to 1 hour (3600 seconds), CloudFront caches
objects for 5 minutes instead of 1 hour.
• If the Cache-Control max-age header is set to 3 hours and the Expires
header is set to 1 month, CloudFront will cache objects for 3 hours instead
of 1 month.
• If you set 0 seconds for Default TTL, Minimum TTL, and Maximum TTL,
CloudFront will always make sure there is the latest content from the origin.

Reference: https://docs.aws.amazon.com/ja_jp/AmazonCloudFront/latest/DeveloperGuide/Expiration.html#expiration-individual-objects
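As a rough sketch of how a cache deadline can be attached to origin content, the snippet below uploads an S3 object with a Cache-Control max-age header using boto3; the bucket and key names are placeholders:

import boto3

s3 = boto3.client("s3")
# Upload an object with a Cache-Control max-age of one day (86,400 seconds).
with open("logo.png", "rb") as f:
    s3.put_object(
        Bucket="example-origin-bucket",   # placeholder bucket
        Key="images/logo.png",            # placeholder key
        Body=f,
        ContentType="image/png",
        CacheControl="max-age=86400",
    )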
[Q] Use of cache

Company A hosts a multi-lingual website on AWS. The website is served using CloudFront. The language is specified as a query string in the HTTP request and is displayed as follows:

http://pintor.cloudfront.net/main.html?language=de
http://pintor.cloudfront.net/main.html?language=en
http://pintor.cloudfront.net/main.html?language=jp

CloudFront needs to be configured so that, for example, the cached data for main.html?language=jp is displayed as the Japanese-language site.

Choose a configuration to accomplish this.

1) Set the query string parameters.


2) Use dynamic content settings.
3) Use the cache origin setting.
4) Use forward cookies.
Use of Cache
Cache-control increases the cache hit rate and enables
effective cache utilization

Exact match of parameter values
• The cache is keyed on an exact match between the URL and the parameters of the forwarding options (header / cookie / query strings).
• A single file can be cached up to 20 GB.
• Applies to GET / HEAD / OPTIONS requests.

Disable old caches (invalidation)
• The cache can be invalidated before it expires.
• Disable non-essential caches to use the cache effectively.
• You can specify up to 3,000 invalidation paths.
• Up to 15 invalidation path requests can be specified using wildcards.
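A minimal sketch of issuing an invalidation with boto3 (the distribution ID and path are placeholders); CallerReference must be unique per request:

import time
import boto3

cloudfront = boto3.client("cloudfront")
cloudfront.create_invalidation(
    DistributionId="EDFDVBD6EXAMPLE",  # placeholder distribution ID
    InvalidationBatch={
        # Wildcard path invalidates all cached variants of main.html.
        "Paths": {"Quantity": 1, "Items": ["/main.html*"]},
        # A unique value per request, e.g. a timestamp.
        "CallerReference": str(time.time()),
    },
)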
[Q] CloudFront Costs

A leading image delivery site is built on AWS. The site's operator is looking to
use a CDN to streamline image delivery. So, as a solution architect, you are
tasked with calculating and reporting on the cost of content delivery using
CloudFront.

Select which of the following are elements for calculating CloudFront costs.
(Select two.)

1) Number of Regions
2) Number of global edge locations
3) Data Transfer Out
4) Number of requests
5) The number of caches set
CloudFront Costs
Mainly charges for requests and data transfer out

Requests
• HTTP/HTTPS requests
• Origin Shield requests
• Invalidation requests
• Field-level encryption requests
• Real-time log requests

Data Transfer Out
• Data transfer out from the region to the Internet (in GB)
• Intra-regional data transfer out to the origin (in GB)

Use of Dedicated IP Custom SSL
• Dedicated IP custom SSL certificates associated with the CloudFront distribution
[Q] Gzip compression

Static content is stored in S3 and distributed globally using CloudFront, but CloudFront usage fees are higher than expected due to the large number of delivery destinations.

Which of the following is the most effective way to save money with
CloudFront?

1) Perform the file compression process by edge location.


2) Shorten the cache retention period with CloudFront.
3) Perform the file compression process by Lambda@edge.
4) Compress the content delivered by S3 configured on the origin server.
Gzip compression
GZIP compression on the edge side reduces transfer volume and speeds up delivery

[Diagram: CloudFront edge locations in America, Asia and Europe apply GZIP compression to content retrieved from the EC2 web server origin before delivery]
Restricted access
Detailed control over access to delivery content through
signed URLs and signed cookies.

• Restrict access to files in the CloudFront edge cache using signed URLs and signed cookies.
• Restrict access to the files in the origin in two ways:
  - Setting up an Origin Access Identity (OAI) for Amazon S3 buckets
  - Configuring custom headers for private HTTP servers (custom origins)

Reference: https://docs.aws.amazon.com/ja_jp/AmazonCloudFront/latest/DeveloperGuide/private-content-overview.html
[Q] Access control to origin

In the application you built, you store static contents in S3 and then use
CloudFront for global distribution. In doing so, you want to fully protect the
communication between the CloudFront distribution and the S3 bucket
containing the website's static files. Users should only be able to access the
S3 bucket through CloudFront and not directly.

Choose a solution that can meet this requirement

1) Create an origin access identity (OAI) and set it in the S3 bucket policy.


2) Configure the S3 bucket policy to allow only traffic from the CloudFront
security group.
3) Use a signed URL to restrict origin access.
4) Create an access control list and restrict origin access.
Access control to origin
Declare access to the S3 bucket in OAI and to the custom
origin in a custom header.

OAI
• OAI is a mechanism used to restrict S3 bucket access to CloudFront requests.
• Create a special user, called an origin access identity (OAI), and allow access only to that user.
• CloudFront uses the OAI to access the files in the bucket.

Custom Headers
• A mechanism to restrict access to custom origins by requiring optional custom headers.

Viewer protocol policy
• Configure the distribution so that viewers must use HTTPS to access CloudFront.

Origin protocol policy
• Configure the distribution so that CloudFront uses the same protocol as the viewer to forward requests to the origin.
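A minimal sketch, assuming a hypothetical OAI ID and bucket name, of the S3 bucket policy that allows only the OAI to read objects, applied with boto3:

import json
import boto3

oai_id = "E2EXAMPLE1OAI"          # placeholder OAI ID
bucket = "example-origin-bucket"  # placeholder bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIReadOnly",
        "Effect": "Allow",
        # The special IAM principal CloudFront uses for an OAI.
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))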
[Q] Access control to cache

A major image delivery site is built on AWS. You are considering using a CDN
to streamline your image delivery system. As a solution architect, you plan to
use CloudFront to deliver your content efficiently. You need to make sure that
the content is only available to registered end users who are members on the
site.

Please select a solution that can meet this requirement. (Please select two)

1) Use a signed CloudFront URL


2) Using Cookies Signed by CloudFront
3) HTTPS is required for communication between CloudFront and custom
origin
4) HTTPS is required for communication between CloudFront and S3 origin
5) Using OAI in CloudFront
Access control to cache
Restrict users from accessing contents held in caches with
signed URLs and signed cookies

Signed URL
• Users access the content only through a signed URL, not through a URL that accesses the content directly.
• Use signed URLs with RTMP distributions, as signed cookies are not supported there.
• Used to restrict access to individual files (e.g., an application installation download).
• Used when the client does not support cookies (such as a custom HTTP client).

Signed Cookie
• Users access the content only through a signed cookie, not through a URL that accesses the content directly.
• Used to provide access to multiple restricted files (e.g., all files of a video in HLS format, or all files in the subscribers' area of a website).
• Use this if you do not want to change the current URLs.
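A sketch of generating a signed URL with botocore's CloudFrontSigner; the key pair ID, key file, domain and path are placeholders, and the private key must match the public key registered with CloudFront:

from datetime import datetime, timedelta

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def rsa_signer(message):
    # Sign with the private key matching the public key registered
    # with the CloudFront trusted signer / key group.
    with open("private_key.pem", "rb") as f:  # placeholder key file
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())


signer = CloudFrontSigner("K2EXAMPLEKEYID", rsa_signer)  # placeholder key ID
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/members/video.mp4",  # placeholder URL
    date_less_than=datetime.utcnow() + timedelta(hours=1),     # expires in 1 hour
)
print(signed_url)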
[Q] CloudFront region restriction

A leading news delivery company is building its news delivery application on AWS. Its users are global and content is delivered globally. The application uses an EC2 fleet installed on a private subnet behind an ALB. There are restrictions on information from China, so access from China must be blocked.

What is the easiest way to meet this requirement?

1) Use network ACLs to block the IP address range associated with a


particular country.
2) Change the ELB security group to deny incoming traffic from blocked
countries.
3) Use CloudFront to serve content and block access to it from certain
countries.
4) Change the security group of an EC2 instance to deny incoming traffic
from a blocked country.
CloudFront region restrictions
Restrict access from users in a specific location using the
geo-restricted feature.

By enabling geo-restriction in the CloudFront distribution settings, the specified countries and regions are blocked from content distribution.

[Diagram: users in restricted countries are blocked at CloudFront before reaching the EC2 origin server]

See: https://aws.amazon.com/jp/cloudfront/features/?nc=sn&loc=2
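A sketch of enabling this geo-restriction with boto3 (the distribution ID is a placeholder); update_distribution requires the full current configuration and its ETag:

import boto3

cloudfront = boto3.client("cloudfront")
dist_id = "EDFDVBD6EXAMPLE"  # placeholder distribution ID

# Fetch the current config; the ETag is needed for the update call.
current = cloudfront.get_distribution_config(Id=dist_id)
config = current["DistributionConfig"]

# Block the listed countries (ISO 3166-1 alpha-2 codes).
config["Restrictions"] = {
    "GeoRestriction": {
        "RestrictionType": "blacklist",
        "Quantity": 1,
        "Items": ["CN"],
    }
}

cloudfront.update_distribution(
    Id=dist_id, IfMatch=current["ETag"], DistributionConfig=config
)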
[Q] Access restriction

The news media application uses CloudFront to deliver web news. The
application runs on an EC2 instance behind the Elastic Load Balancer (ELB).
You need to restrict the ability of users to bypass CloudFront and access
content directly through the ELB.

Choose a solution that can meet this requirement (choose two).

1) Create a VPC security group in the ELB to allow access to Lambda


functions.
2) Set the IAM role in ELB to allow access to Lambda functions.
3) Set the IAM role in the Lambda function to allow access to the ELB.
4) Use AWS Lambda to automatically update CloudFront internal services
when their IP addresses change.
5) Create an origin access identity (OAI) and associate it with the distribution.
6) Restrict access to ELBs using network ACLs
Access restriction
The origin ELB can also be protected so that it cannot be accessed directly, bypassing CloudFront

Use CloudFront's IP range
• Specify the CloudFront IP address ranges and allow access to the ELB only from those IPs.
• A Lambda function retrieves the CloudFront IP range and updates the security group's inbound rules whenever the IP addresses change.
• An application to raise the security group rule limit may be required.

Use CloudFront custom headers
• Restrict access to the ELB if the specified string is not present in a custom header.
• Pass an arbitrary header to the ELB origin using CloudFront's custom headers.
[Q] Encryption

A web media company decided to use CloudFront to set up their web server
as an Origin to improve read performance. Recently, they conducted an IT
audit and were required to secure the data communication to the Origin
server and CloudFront because the delivery process using CloudFront is not
secure. It is important to note that this Origin server is not an ELB.

Choose the best way to meet this requirement.

1) AWS Certificate Manager (ACM) is used on the origin and CloudFront side
to enable data communication via HTTPS.
2) Third-party CA certificates are used on the viewer and CloudFront side to
enable data communication via HTTPS.
3) Third-party CA certificates are used on both the origin and CloudFront
side to enable data communication via HTTPS.
4) AWS Certificate Manager (ACM) is used on the viewer and CloudFront
side to enable data communication via HTTPS.
[Q] Encryption

A government agency is using AWS to build security-sensitive applications that deliver tax information based on an individual's personal number. This application requires citizens to send and receive personally identifiable information (PII), so dual data protection measures must be in place. While encryption of the communication channel is in place, an additional level of encryption must be applied at the CloudFront edge locations to ensure that the PII data is protected end-to-end.

Please select the most appropriate measure to meet this requirement.


(Please select two.)

1) Use a signed URL to deliver the data.


2) Set up field-level encryption.
3) Perform HTTPS communication using ACM.
4) Deliver data with an origin access ID.
5) Implement data encryption using CloudHSM.
Encryption
CloudFront uses SSL/TLS encryption and field-level
encryption

SSL/TLS
 Enables you to issue and configure certificates in conjunction with AWS ACM.
 Set up an SSL certificate and set content delivery to HTTPS.
 Configure Amazon CloudFront so that viewers use HTTPS to request files when CloudFront communicates with the viewer.
 Encrypt the communication between CloudFront and the origin by configuring CloudFront to use HTTPS when retrieving files from the origin.
 SSL supports Perfect Forward Secrecy (PFS).

Field-level encryption
 Together with HTTPS, it adds a further layer of security.
 End-to-end data protection so that only certain applications can access certain data while the system is processing it.
 CloudFront field-level encryption encrypts specified fields at the edge with a public key; only the origin application holding the private key can decrypt them.
Field level encryption
CloudFront field-level encryption uses a public key to encrypt specified fields so that only the origin application can read them.

Reference: https://docs.aws.amazon.com/ja_jp/AmazonCloudFront/latest/DeveloperGuide/field-level-encryption.html
[Q] Logging

Your company has a web delivery service hosted on EC2 Instances using
CloudFront, and your IT security department is auditing the PCI compliance
of applications that use this web delivery.

Please select the appropriate action to ensure compliance goals. (Select two.)

1) Configure the VPC flow log to CloudFront.


2) Configure CloudTrail to CloudFront.
3) Enable the CloudFront cache log.
4) Get the request sent to the CloudFront API.
5) Enable the CloudFront access log.
Other Security Features
Secure content distribution and access management by
linking with various external services

 In conjunction with the AWS WAF firewall, it is possible to allow or block web requests to the distribution. Hotlink protection using referrer restrictions is also possible.
 DDoS protection with AWS Shield.
 CloudTrail records actions executed in CloudFront by users, roles, or AWS services.
 The CloudFront access log provides a detailed record of requests made to the distribution.
Scope of DynamoDB
What is DynamoDB?
A NoSQL-type database ideal for real-time data processes
such as streaming data

DynamoDB in action
1. Obtain streaming data such as IoT and web session data.
2. A Lambda function pre-processes the data, then stores it in DynamoDB.
3. The DynamoDB data is then used to display ads, etc.

Reference: https://aws.amazon.com/jp/dynamodb/
DynamoDB question scope
Frequent questions extracted from 1625 questions are as
follows
Selecting DynamoDB
✓ You will be asked to select DynamoDB as the best database to meet requirements.

DynamoDB features
✓ You will be asked about the characteristics of DynamoDB, including its performance and limitations.
✓ You will be asked about the use cases for which DynamoDB can be used.

Consistency model
✓ You will be asked about the consistency model, such as the impact of DynamoDB's consistency model.

DynamoDB index
✓ You will be asked about the types of keys used in DynamoDB and how to configure them.
✓ You will be asked about the uses of, and differences between, the two secondary indexes in DynamoDB.

DynamoDB streams
✓ You will be asked about the effects of DynamoDB streams, their use cases, and typical configurations using the streams.
DynamoDB question scope
Frequent questions extracted from 1625 questions are as
follows
Scaling
✓ You will be asked how to set up DynamoDB scaling and its effectiveness.

DAX
✓ You will be asked about scaling methods for DynamoDB to deal with too many requests.

Global table
✓ You will be asked how to set up DynamoDB global tables and what they are used for.

Capacity Mode Settings
✓ You will be asked about the differences between the two capacity modes of DynamoDB and their purposes.
[Q] Select DynamoDB

Company B provides IoT solutions. The company uses streaming data


collected from IoT devices to perform real-time data processing. This data
processing does not require complex data schema setups or other complex
transaction processing, but it does require real-time, high performance
processing.

Select the best AWS database service for this database processing. (Please
choose two.)

1) Amazon Aurora
2) DynamoDB
3) Amazon EMR
4) ElastiCache
5) RedShift
NoSQL-type database
There are two main types of databases: Relational DB and
Non-Relational DB.

DB for operational systems: Relational DB
DB for big data: NoSQL
KVS: Key Value Type
Enables high-speed processing by grouping values into a
single line without a relational schema structure

SQL table
ID   | Data1 | Data2 | Data3
0001 | XXXX  | AAAA  | BBBB
0002 | XXXX  | AAAA  | BBBB

Key-value DB table
Key  | Value
0001 | XXXX, AAAA, BBBB
0002 | XXXX, AAAA, BBBB
AWS Database Services

[Diagram: AWS database services mapped along two axes, distributed ↔ centralized and for operations ↔ for analysis: distributed in-memory KVS / data grid, document DB (Amazon DocumentDB), distributed OLTP, search (Elasticsearch), data lake (e.g. Hadoop HDFS) on S3, relational database RDS (RDB (OLTP)), data warehouse, and graph DB (Amazon Neptune)]
What DynamoDB can do
The key-value (wide column type) allows for easy
manipulation of data.

NoSQL can do
 CRUD operations
 Simple queries and ordering
 For example, NoSQL DBs are good at processing session data for applications that need to be accessed and processed by tens of thousands of people at the same time.

NoSQL can't do / not suitable for
 JOIN / TRANSACTION / COMMIT / ROLLBACK are not available.
 Detailed queries and ordering (not good at searching or joining data)
 Reading and writing large amounts of data is expensive.
[Q] DynamoDB features

Company B offers a C-to-C buying and selling platform using an application


built on AWS. We are currently in the process of implementing a feature that
uses web session data, customer information and product information to
make recommendations on the best products for customers. We are sorting
out the requirements for which areas we should use DynamoDB as we need
to process a variety of data.

Which of the following is the best way to use DynamoDB? (Select two.)

1) Objects in excess of 400 KB, such as product images, are stored in S3


and metadata is stored in DynamoDB.
2) Use a separate local secondary index for each item to enable fast
processing.
3) BLOB data is stored in DynamoDB.
4) Store frequently accessed data and infrequently accessed data in separate
tables
5) Customer management information is stored in DynamoDB for fast
processing of recommendations.
DynamoDB use cases
Used for big data processing or for applications that
require large amounts of data processing

Big data
 Ideal for collecting, storing and analyzing key-value-type sequential data such as IoT data
 Integrates with Amazon EMR's Hadoop processing for big data

Application
 Simple data storage for applications, such as session data and metadata
 Store your data for high-performance processing
 Store document data, such as JSON format
DynamoDB use cases
DynamoDB is used to store the large amounts of web activity data and log data that are generated.

User behavior data management
• Store and process game session data and website user behavior data.
• Used to manage each user's activity history.

Backend data processing
• Mobile app backend / batch-processing lock management / flash marketing / storage index
DynamoDB performance
Fully managed NoSQL database service with unlimited
table sizes, but with a single data limit of 400KB

[Performance]
• High scalability and unlimited performance scaling
• Low latency, with no decrease in response time under high load
• High availability (data stored in 3 AZs, with no SPOF)
• Managed service, maintenance-free; monitored with CloudWatch

[Data capacity limit]
• No storage capacity limit: there is no practical limit on the size of a table, and tables have no restrictions on the number of items or bytes.
• Individual data items are limited: the size limit of an item is 400 KB, so it cannot store large data.
Reference: https://docs.aws.amazon.com/ja_jp/amazondynamodb/latest/developerguide/Limits.html#limits-dynamodb-encryption
DynamoDB performance
Single-digit millisecond latency can be consistently
achieved, while DAX allows requests to be processed in
microsecond increments.

DynamoDB tables: single-digit millisecond latency

DAX: improves performance from milliseconds to microseconds, even when the number of requests per second is in the millions
Partitioning
To process large amounts of data at high speed,
DynamoDB uses partitioning for distributed processing.

[Diagram: Table A distributed across Partition A, Partition B and Partition C]
[Q] Consistency model

A company uses DynamoDB to manage customer session data. They have received complaints that users see stale data when they access the database.

As the operations manager, select the best solution to mitigate this problem.

1) Enable DynamoDB replication settings.


2) Enable DAX for DynamoDB.
3) Modify the DynamoDB data consistency model.
4) Augment the DynamoDB cluster.
DynamoDB consistency model
Eventually consistent by default, with a strongly consistent option available for specific commands.

Write
 A write is finalized when completion of the write can be confirmed in at least two AZs.

Read
 Default: eventual consistency model. The latest write results may not be reflected in an immediately following read.
 Optional: strong consistency model. GetItem / Query / Scan allow a strongly consistent read option.
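A minimal sketch of both read modes with boto3; the table and key names are placeholders:

import boto3

table = boto3.resource("dynamodb").Table("SessionData")  # placeholder table

# Default: eventually consistent read (may briefly return stale data).
item = table.get_item(Key={"session_id": "abc123"})

# Optional: strongly consistent read, supported by GetItem / Query / Scan.
item = table.get_item(Key={"session_id": "abc123"}, ConsistentRead=True)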
Table Design
DynamoDB is composed of three elements: table, item
and attribute.

Table
 In DynamoDB, a table is a collection of data. Like other DBs, it stores data in a table.

Item
 Data is created by creating items in each table. If a field named "Personal" is created, names and IDs are attached as attributes.

Attribute
 Each item consists of one or more attributes. An attribute is the smallest unit of data that does not need to be divided further. For example, in a Personal field, attributes for the parts of a name, such as first name and last name, would be set.
Table Design
Design tables in nested structures using tables, items and attributes.

Table
 └ Item
    └ Attribute

Attributes can be of VALUE, JSON, or any other type
[Q] Capacity mode setting

Your company's application has a DynamoDB table configured in the data layer and monitored by CloudWatch alarms. This DynamoDB table is configured using provisioned capacity mode. Today, an alarm notified you that the load on this DynamoDB table is approaching its write capacity.

What happens when the write capacity reaches the limit?

1) The request is suppressed and an HTTP 503 code error occurs.


2) The request is adjusted and fails with the HTTP 400 code (invalid
request) and a ProvisionedThroughputExceededException.
3) The capacity of DynamoDB scales automatically, so the request succeeds
and the HTTP200 status code is returned.
4) It goes into burst throughput mode, the request is successful, and the
HTTP200 status code is returned.
Capacity Mode Setting
Select a capacity mode based on whether the capacity to
be used is predictable or not

 Mode to select when the capacity to be used is


unpredictable
On-demand mode  Charging based on the number of actual requests when
traffic volume is difficult to predict
 Automatic scaling for on-demand read/write processing

 Mode to be selected when the capacity to be used can be


predicted in advance
 Set pre-predicted write capacity units (WCUs) and read
capacity units (RCUs).
 Charged based on the capacity you set
Provisioning
mode  You can use UpdateTable operations to increase the
ReadCapacityUnits or WriteCapacityUnits as many times
as you need.
 When the capacity limit is exceeded, requests fail with an HTTP 400 code (invalid request) and a ProvisionedThroughputExceededException.

The read/write capacity mode can be switched once every 24 hours.
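A minimal sketch of adjusting capacity with boto3 UpdateTable; the table name and capacity values are placeholders:

import boto3

dynamodb = boto3.client("dynamodb")

# Raise the provisioned throughput of an existing table.
dynamodb.update_table(
    TableName="SessionData",
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 50},
)

# Switch the table to on-demand mode (mode switches are limited
# to once every 24 hours).
dynamodb.update_table(TableName="SessionData", BillingMode="PAY_PER_REQUEST")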


DynamoDB Costs
You are charged according to the capacity setting method
and the functions you use.

On-demand
 Storage capacity (in GB)
 Write request units
 Read request units

Provisioned
 Storage capacity (in GB)
 Read Capacity Units (RCU)
 Write Capacity Units (WCU)

Other
 Global tables: replicated write capacity units (rWCU)
 DynamoDB Accelerator (DAX): per node-hour
 DynamoDB streams: per stream read request
Index
DynamoDB can use the implicitly set Key and the
explicitly set Key as an index.

Implicit Key
An index that is implicitly declared as a key (hash key, or hash key plus range key) to uniquely identify data and that is used for searches; one per table.

Explicit key
A local secondary index (LSI) can be used to add another range key when the primary key is a hash-and-range composite key.
5 per table / must be created when creating the table

A global secondary index (GSI) can be set up with a different hash key. A global search is performed on all data.
5 per table / can be created after table creation
Primary key
DynamoDB uses two types of primary keys: hash and
range keys.

Hash key
 An ID or other value that uniquely identifies the data corresponding to the key in a KVS
 Choose one attribute to declare as the hash key when creating the table
 Called a hash key because the partition is determined by a hash function
 A single hash key does not allow duplicate values

Range key
 A hash key plus a range key is called a composite key
 When creating the table, choose two attributes and declare one as the hash key and the other as the range key
 A single item is identified by combining the two values
 The individual key values may be duplicated, as long as the combination is unique
Primary key
Design tables as nested structures with tables, items and
attributes.

Table
 └ Item
    └ Attribute

An item is identified by the hash key alone, or by the hash key + range key (composite key)
Secondary index
LSI and GSI are added when the search requirements
cannot be met by hash and range keys alone.

Local Secondary Index (LSI)
 A search method that allows an index to be created with an alternative sort key, in addition to the existing one.
 Can only be configured on composite-key tables.
 For items organized by a composite key, you keep the partition key and specify a different attribute as the sort key, using it as an alternate index for query retrieval.

Global Secondary Index (GSI)
 A search method that specifies a new partition key and sort key for the index.
 Can be configured on both hash-key and composite-key tables.
 Because it can replace the hash key, searches across physical partitions are possible without being confined to a physical partition.

Indexes should not be multiplied unnecessarily, because they require additional throughput and storage capacity and increase write costs.
Table operation
For table operations, you can use the following commands
to operate DynamoDB tables

GetItem: Get the item matching a hash key
Query: Retrieve items matching the hash key and range key (up to 1 MB)
PutItem: Write one item
Scan: Search the entire table (up to 1 MB per scan)
BatchGetItem: Get the matching items for multiple primary keys
Update: Update one item
Delete: Remove one item
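A minimal sketch of these operations with boto3, assuming a hypothetical composite-key table ("Orders" with hash key customer_id and range key order_date):

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Orders")  # placeholder table

# PutItem: write one item.
table.put_item(Item={"customer_id": "c-001", "order_date": "2021-04-01", "total": 1200})

# GetItem: fetch one item by its full primary key.
resp = table.get_item(Key={"customer_id": "c-001", "order_date": "2021-04-01"})

# Query: items for one hash key, narrowed by the range key (up to 1 MB per call).
resp = table.query(
    KeyConditionExpression=Key("customer_id").eq("c-001")
    & Key("order_date").begins_with("2021-04")
)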
[Q] DynamoDB streams

Company B offers a C-to-C buying and selling platform using an application built on AWS. They are currently developing a new feature that uses web session data, customer information and product information to recommend the best products to customers. Each time a customer's order information is stored in a table, a simple program needs to pre-process the data and pass it to the recommendation function.

Which is the best combination that can meet this requirement?

1) DynamoDB Streams + Lambda


2) DynamoDB DAX + API Gateway
3) Amazon SQS + Lambda
4) CloudWatch Events + Lambda
DynamoDB streams
Ability to capture the history of additions, changes, and
deletions of items stored in DynamoDB tables as they
occur.

History data
 Stores a history of data changes within the last 24 hours; entries are deleted after 24 hours.
 Data capacity is managed automatically.

Data order
 The data is serialized according to the order in which the operations were performed.
 Changes based on a particular hash key are stored in the correct order, but the order across different hash keys may differ from the order in which they were received.
Use cases for DynamoDB streams
Streams can be used for application functions that are
triggered by data updates and replication.

Cross-region  Cross-region replication can be triggered by


replication stream captioning

Application
 Execution of application processes such as
features
notification processing in response to data
triggered by data updates, etc.
updates
Use cases for DynamoDB streams
Lambda functions perform many different application processes with DynamoDB streams

[Diagram: DynamoDB → DynamoDB Streams → Lambda, which automatically updates separate tables, saves logs, and sends push notifications]
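A minimal sketch of the Lambda side of this pattern; the attribute name is hypothetical, and records arrive in DynamoDB's attribute-value format:

def lambda_handler(event, context):
    # Each invocation receives a batch of DynamoDB Streams records.
    for record in event["Records"]:
        if record["eventName"] == "INSERT":
            # NewImage holds the new item in attribute-value format,
            # e.g. {"order_id": {"S": "o-123"}}.
            new_image = record["dynamodb"]["NewImage"]
            order_id = new_image["order_id"]["S"]  # hypothetical attribute
            # ... pre-process and hand off to the recommendation function ...
            print(f"New order stored: {order_id}")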
[Q] Scaling

Company B provides C-to-C trading solutions using applications built on AWS and uses provisioned DynamoDB tables for session data processing, but there are days when the tables are rarely used. Provisioned throughput capacity is configured for heavy loads to avoid throttling, which is not cost effective.
What is the most efficient solution for optimizing costs?

1) Use the DynamoDB auto-scaling policy.


2) Reduce the number of provisioned throughputs.
3) Create CloudWatch alarms based on the capacity of the DynamoDB table
and automatically adjust WCU and RCU based on the alarms through the
Lambda function.
4) Using DynamoDB DAX to improve database performance
DynamoDB Auto Scaling
Automatically scale tables or GSIs based on the scaling
policy.

 Use the AWS Application Auto Scaling service to


configure Application Auto Scaling policies.
 Dynamically adjust provisioned throughput
performance on behalf of the user based on
DynamoDB Auto traffic patterns based on CloudWatch monitoring.
Scaling  A table or global secondary index increases
provisioned read and write capacity and can
handle spikes in traffic without throttling.
 When you create a DynamoDB table, Auto
Scaling is enabled by default.
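A minimal sketch of configuring this with the Application Auto Scaling API via boto3; the table name, capacity bounds and target value are placeholders:

import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/SessionData",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

# Target-tracking policy: keep consumed RCU around 70% of provisioned RCU.
autoscaling.put_scaling_policy(
    PolicyName="SessionDataReadScaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/SessionData",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)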
[Q] DAX

Company B provides a C-to-C buying and selling platform using applications built on AWS and uses DynamoDB tables in provisioned mode for session data processing. Recently, the number of application requests has skyrocketed, and the solution architect in charge has increased the number of DynamoDB RCUs. Still, he has to deal with hot partition issues caused by hot keys.

How can the solution architect get rid of this hotkey problem?

1) Use DynamoDB global tables.


2) DynamoDB Streams.
3) DynamoDB Accelerator (DAX).
4) Use DynamoDB's GSI.
DynamoDB Accelerator (DAX)
DAX adds in-memory cache-type functionality to
DynamoDB

[Diagram: an EC2 application reads through a DAX cluster (cache nodes) placed in front of DynamoDB]
DynamoDB Accelerator (DAX)
Enabling fast in-memory performance in DynamoDB

 As an in-memory cache, DAX reduces the response time of eventually consistent read workloads from single-digit milliseconds to microseconds. Multi-AZ DAX clusters can process millions of requests per second.

 DAX is a managed service compatible with the DynamoDB API and can be deployed easily, reducing operational and application complexity.

 For read-heavy and rapidly growing workloads, DAX can save operational costs because you do not need to over-provision read capacity units for increased throughput.
[Q] Global table

Company B provides a C-to-C buying and selling platform using applications built on AWS and uses DynamoDB tables in provisioned mode for session data processing. The company wants to deploy the application to three different AWS regions in an active-active configuration. To keep the information in sync, the database must be replicated.

Which database solution is best suited for these requirements?

1) Amazon DynamoDB with global tables


2) The Global Layer of ElastiCache
3) Amazon S3 Cross Region Replication
4) Amazon Aurora's globally configured multi-master configuration
Global table
DynamoDB can create multi-master tables that are
synchronized across regions

 A global table provides endpoints in multiple regions around the world while retaining DynamoDB performance.
 In addition to read/write capacity, users are charged cross-region replication data transfer fees.
 The strongly consistent read option cannot be used with global tables.
On-demand backup
DynamoDB can perform hundreds of TB backups without
impacting performance

 Backups can be executed at any time and are intended for long-term data storage.

 In the past, it was necessary to use Data Pipeline, but backups can now be performed easily.
The Scope of Lambda
What is Lambda?
A mechanism for executing programming code without starting a server. It can be used to build simple application processes.

Lambda executes functions

[Diagram: a Lambda function runs and gets data from DynamoDB]

Scope of the Lambda questions
Frequent questions extracted from 1625 questions are as
follows
Lambda Features
✓ You will be asked about the features of Lambda, including the programming languages it can use.

Lambda Limitations
✓ You will be asked about the usage limits of Lambda.

Process timing of Lambda
✓ You will be asked about synchronous and asynchronous processing settings.

Lambda and VPC
✓ You will be asked how to set up Lambda to access resources in a VPC to perform operations.

Lambda layer
✓ You will be asked about the purpose of using the Lambda layer.
Scope of the Lambda questions
Frequent questions extracted from 1625 questions are as
follows
Lambda Configuration
✓ You will be asked how to design an architecture using Lambda.

Integration with API Gateway
✓ You will be asked how to integrate Lambda with API Gateway to run a Lambda function based on an API call.

Lambda Edge
✓ You will be asked how to execute Lambda functions in conjunction with CloudFront.

Cooperation with RDS
✓ You will be asked how to configure the RDS proxy for use.
[Q] Characteristics of Lambda

Company B is an IT company that provides IoT solutions. They are currently planning a serverless application to process streaming data and will be using Lambda functions. They want to know which programming languages are available for Lambda functions.

Which programming languages are supported by the Lambda runtime? (Please choose two.)

1) C#
2) .NET
3) Go.
4) PHP
5) C+
Lambda Features
The Lambda function can use many programming languages, and the execution environment is managed on the AWS side.
 Lambda is a typical managed service and the execution infrastructure is managed
entirely by AWS.
 Easy to implement event-driven applications called Lambda functions in
conjunction with AWS services
 Support for Java, Go, PowerShell, Node.js, C#, Python and Ruby runtimes
 A Lambda Function is composed of
• code - Create function code and dependencies. For scripting languages, you
can edit the function code in the built-in editor. If the language is not
supported by the editor, upload the deployment package. If the size of the
deployment package exceeds 50 MB, upload it into S3.
• Runtime - the Lambda runtime for each language to execute the function.
• Handler - The method to be executed at runtime when calling a function

https://docs.aws.amazon.com/ja_jp/lambda/latest/dg/configuration-console.html
https://docs.aws.amazon.com/ja_jp/lambda/latest/dg/gettingstarted-limits.html
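For orientation, a minimal Python handler; Lambda calls the handler method with the trigger's event payload and a context object (the event field used here is hypothetical):

def lambda_handler(event, context):
    # "event" carries the trigger's payload; "context" carries runtime metadata.
    name = event.get("name", "world")  # hypothetical event field
    return {"statusCode": 200, "body": f"Hello, {name}!"}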
Lambda Billing
Lambda is charged by the number of requests and the
duration of the code's execution.

 Lambda is charged based on the number of requests and the execution time of the code.
 Because you are only charged while the code executes, it is much more cost effective than keeping a server running.
 The execution time is measured from the moment code execution starts until the process returns or is aborted, rounded up to the nearest 100 milliseconds.
 One million free requests per month and 400,000 GB-seconds of compute time are included in the free tier.
[Q] Lambda Limitations

As a solution architect, you are using AWS Lambda to implement a batch job workload. The Lambda function fetches data from Amazon S3, processes it, and then stores the results in DynamoDB. However, when the Lambda function runs, it fails with an error after 15 minutes.

Which factor is most likely to be the root cause of this problem?

1) The Lambda function is set up for asynchronous processing.


2) The Lambda function is out of memory.
3) The maximum number of concurrent executions of Lambda functions has
been reached.
4) The execution time of the Lambda function is exceeded.
Lambda Limitations
Lambda functions are limited in the amount of data,
execution time and number of concurrent executions in
order to enable efficient processing.

 The default function timeout is 3 seconds, and the maximum allowed value is 900 seconds (15 minutes). When the timeout is reached, the function is stopped.
 The default limit for concurrent executions is 1,000 per region (it can be increased to hundreds of thousands by asking AWS).
 The amount of memory available for the execution of the function ranges from 128 MB to 3,008 MB.
 The storage capacity of the /tmp directory is 512 MB.
 Up to 5 Lambda layers can be configured per function.

https://docs.aws.amazon.com/ja_jp/lambda/latest/dg/configuration-console.html
https://docs.aws.amazon.com/ja_jp/lambda/latest/dg/gettingstarted-limits.html
How does Lambda work?
You can easily use Lambda from web and mobile applications via the API or HTTP requests.

1. Prepare the Lambda function (coding)
2. Call Lambda
Implementation of Lambda: Blueprint
A collection of sample code that can be used when coding a Lambda function

1. Design a use case with Lambda
2. Find sample code in a Blueprint
3. Modify the sample code to create a function
[Q] Lambda processing timing.

The company uses AWS Lambda to implement a data processing application workload. The Lambda function retrieves IoT data and stores the processing results in Amazon S3. The application returns a prompt to notify the user that the data processing was successful. The entire process takes about 10 minutes to complete. As a solution architect, you have been asked to refactor this workload into asynchronous processing.

Among the following options, choose the best solutions to handle Kinesis and S3 asynchronously. (Choose two.)

1) Use the Kinesis stream to pass data to Lambda functions.


2) Use Amazon SQS queues to pass data to Lambda functions.
3) Create a Lambda function to process requests asynchronously.
4) Set the execution schedule to the Lambda function to perform
asynchronous processing.
5) Use a DynamoDB stream to pass data to a Lambda function.
Lambda processing timing
Ability to call and run from other AWS services and
applications using the SDK

Asynchronous call
 The function runs asynchronously to handle events.
 When a function is called asynchronously, the caller does not wait for a response from the function code.

Synchronous call
 When a function is called synchronously, Lambda executes the function and waits for a response.
 On completion of the execution, the response defined in the Lambda function is returned, along with additional data such as the version of the executed function.
Schedule function
Executing Lambda functions triggered at a specific time

[Diagram: a process that needs to run at a specific time is implemented by executing Lambda on a regular schedule]
[Q] Lambda and VPC

The solution architect has written code for an AWS Lambda function that, when executed, stores streaming data in an ElastiCache cluster. Since the ElastiCache cluster is located in a VPC in the same account, the Lambda function needs to be configured to access resources in the VPC.

Which VPC-specific information is required for the Lambda function? (Please choose two.)

1) VPC subnet ID
2) VPC Security Group ID
3) VPC's ARN
4) VPC logical ID
5) VPC route table ID
VPC access
Access AWS resources in the VPC without going over the Internet

Accessing resources in the VPC
 Access resources in the VPC without going over the Internet.
 When specifying the VPC, specify the subnet ID and security group ID; an ENI is created and the connection goes through the ENI.
 The ENI is dynamically assigned an IP address for the specified subnet via DHCP.

Access configuration
 Attach the policy "AWSLambdaVPCAccessExecutionRole" to the IAM role that is assigned to the function.
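A minimal sketch of attaching an existing function to a VPC with boto3; the function name, subnet IDs and security group ID are placeholders, and the function's role must include AWSLambdaVPCAccessExecutionRole:

import boto3

lambda_client = boto3.client("lambda")

# Give Lambda the subnet and security group IDs; Lambda creates ENIs
# in those subnets to reach VPC resources such as ElastiCache.
lambda_client.update_function_configuration(
    FunctionName="stream-to-elasticache",  # placeholder function name
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],
        "SecurityGroupIds": ["sg-0123abcd"],
    },
)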
[Q] Lambda layer

You are trying to optimize costs in an application that uses a serverless configuration. You are running multiple applications that use Lambda functions, but you find that one part of the processing is duplicated across them.

Choose the feature you should use to improve this Lambda function processing.

1) Lambda Edge
2) Lambda Layer
3) Invocation
4) API Gateway cache
The Lambda layer
You can define and reference common components between Lambda functions as a Lambda layer (up to 5 per function)

[Diagram: without layers, each of four Lambda functions bundles its own copy of the common function; with a Lambda layer, all four functions reference the single common function in the layer]
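A minimal sketch of publishing shared code as a layer and referencing it from a function with boto3; all names and the S3 location are placeholders:

import boto3

lambda_client = boto3.client("lambda")

# Publish a zip of shared code (uploaded to S3 beforehand) as a layer version.
layer = lambda_client.publish_layer_version(
    LayerName="common-functions",  # placeholder layer name
    Content={"S3Bucket": "example-artifacts", "S3Key": "common-functions.zip"},
    CompatibleRuntimes=["python3.9"],
)

# Reference the layer from a function (up to 5 layers per function).
lambda_client.update_function_configuration(
    FunctionName="order-processor",  # placeholder function name
    Layers=[layer["LayerVersionArn"]],
)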
[Q] Configure Lambda.
An automaker is developing a data processing application workload on AWS that takes data from sensors installed in vehicles, processes the data, and stores it in DynamoDB. The application needs to return a notification to the user that the data has been successfully stored. This event processing must be automated.

Choose an implementation of a Lambda function that can satisfy this


requirement (choose two).

1) The sensor data is taken into an Amazon SQS FIFO queue, processed by
the Lambda function, and then written to a DynamoDB table.
2) Sensor data is imported into Kinesis Data Streams, processed by the
Lambda function, and then written to a DynamoDB table.
3) The sensor data is pulled into an Amazon SQS standard queue, processed
by the Lambda function, and then written to a DynamoDB table.
4) Perform SNS notifications based on data storage by DynamoDB streams.
5) Another Lambda function performs notifications based on the data
storage by the DynamoDB stream.
Lambda Use Cases
Combining SQS and Lambda to create a processing
program that stores IoT sensor data in DynamoDB

[Diagram: IoT sensor data → SQS → Lambda → DynamoDB]
Lambda integration
Lambda can be triggered by a variety of services.

 Amazon S3
 Amazon Kinesis
 Amazon DynamoDB Streams
 Amazon Cognito(Sync)
 Amazon SNS
 Amazon SQS
 Alexa Skills Kit
 Amazon SWF
[Q] Integration with the API Gateway

We are developing a new web application that must scale to support unpredictable workloads. The application performs simple workloads that are executed in response to external HTTPS calls.

Which solution is best suited for this use case?

1) API Gateway and Lambda


2) Auto Scaling Groups and EC2
3) CloudFront and Lambda
4) API Gateway and EC2
Integration with API Gateway
By integrating with the API Gateway, you can execute
Lambda functions from the API.

[Diagram: web application → API Gateway → Lambda → DynamoDB; API Gateway and Lambda form the scope of development]
Lambda Mobile App
Mobile integration is easy, e.g. mobile photo management through Lambda

[Diagram: the mobile app authenticates via Cognito; photo registration in S3 triggers Lambda, which gets the metadata and registers it in DynamoDB]
[Q] Lambda Edge

A major news site uses a CloudFront web distribution to serve static content to users around the world. Because there is no HTML file corresponding to some URIs, requests such as a browser reload can result in errors. In such cases, the error pages (e.g. 403/404) need to be redirected to index.html to avoid this problem.

Choose a solution that can meet this requirement.

1) Use the regional edge cache to redirect the response.


2) Use localization to redirect the response.
3) Enable CloudFront redirection settings.
4) Use Lambda@Edge to customize the content that the CloudFront web distribution delivers to users.
Lambda Edge
Content delivered by CloudFront can be processed at
edge locations with the Lambda function

Reference: https://aws.amazon.com/jp/lambda/edge/
Lambda Edge
Integrate CloudFront with the Lambda capabilities to run
code in locations close to users around the world.

See: https://aws.amazon.com/jp/cloudfront/features/?nc=sn&loc=2
Lambda Edge
The Lambda function associated with an event is executed at the edge location and returns the execution result.

[Diagram: Viewer ⇄ CloudFront ⇄ Origin server, with four trigger points: Viewer Request, Origin Request, Origin Response, Viewer Response]
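As a sketch of the redirect scenario from the question above, an origin-response handler that turns 403/404 responses into a redirect to /index.html (a simplified illustration, not the only way to do this):

def handler(event, context):
    # Origin-response trigger: CloudFront passes the origin's response
    # here before caching it.
    response = event["Records"][0]["cf"]["response"]
    if response["status"] in ("403", "404"):
        response["status"] = "302"
        response["statusDescription"] = "Found"
        response["headers"]["location"] = [
            {"key": "Location", "value": "/index.html"}
        ]
    return response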
[Q] Connect to RDS

As a solution architect, you are building a mechanism that acquires data from RDS, processes it, and then registers the processing results in DynamoDB. You decided to use Lambda functions to access the database and process the data.

Choose a method that meets this requirement. (Select two.)

1) Use DynamoDB stream with Lambda function.


2) Use RDS proxy in Lambda function.
3) Use RDS endpoints with Lambda functions.
4) Connect your Lambda function to RDS by enabling public access to RDS.
RDS proxy
When using Lambda to connect to an RDS database, you can connect through an RDS Proxy instead of the database endpoint; the proxy pools and reuses connections, making them more efficient.
The scope of Route 53
What is Route 53?
Provides the role of a DNS server that converts human-readable URLs into IP addresses so they can be used as addresses.

DNS is the mechanism on the Internet that converts an easy-to-use URL into the IP address used by systems.

Route 53 is an authoritative DNS server provided by AWS, named Route 53 because DNS operates on port 53.

Route 53 checks DNS records, the table that links URLs to IP addresses, and routes requests accordingly.

[Diagram: https://www.yahoo.co.jp/ is resolved by Route 53, via a DNS record, to https://196.10.0.1]
The scope of the question on Route 53
Frequent questions extracted from 1625 questions are as
follows
Hosted zone
✓ You will be asked about creating hosted zones at the start of Route 53 domain configuration, and the distinction between private and public hosted zones.

Record type
✓ You will be asked to select a record type in the Route 53 configuration.
✓ You will be asked about the different record types.

Alias record
✓ You will be asked about the use of alias records, which are used to configure AWS resources such as CloudFront in Route 53.

Select routing policy
✓ You will be presented with a scenario for configuring Route 53 and asked to choose the appropriate routing policy.

Failover configuration
✓ You will be asked how to configure failover using Route 53.
The scope of the question on Route 53
The following is a list of frequently asked questions that
were extracted from 1625 questions
Route 53 geo restrictions
✓ You will be asked how to configure Route 53 to limit the regions served.

Traffic flow
✓ You will be asked about the use of traffic flow in Route 53 routing policy settings.

TTL
✓ You will be asked about TTL settings in DNS name resolution.

Apply Route 53 to on-premises
✓ You will be asked how to apply name resolution to on-premises environments using Route 53.
Route 53
Route53 is a service that makes it easy to use the
features of an authoritative DNS server in a managed
form

 Three main functions: domain registration, DNS routing, and


health checks
 Policy-based routing settings
Configurable routing conditions based on traffic
routing/failover/traffic flow
 SLAs that guarantee 100% availability on the AWS side
 It is offered as a managed service, so there is no need for users
to consider things like redundancy
How to use Route 53
When you start using Route 53 and register a domain, a hosted zone is generated automatically and routing is set up there.

1. Set up a domain on Route 53
2. Create a hosted zone with the same name as the domain
3. Create a record
4. Set the routing policy
[Q] Hosted zone

An enterprise is building an application using two EC2 instances. As a solution architect, you are trying to make the EC2 instances redundant with DNS routing so that traffic to an anomalous instance is avoided. To make the configuration capable of supporting multiple regions, you decided to use Route 53 routing. To do this, you need to configure a public hosted zone.

Select the correct features of the public hosted zone. (Please select two.)

1) The same host zone can be used by VPCs in multiple regions as long as
the VPCs are mutually accessible.
2) It is possible to route a domain in a private subnet
3) A container that manages publicly available DNS domain records on the
Internet.
4) Define how to route traffic to a DNS domain on the Internet
Hosted zone
A container that holds information about how to route traffic for a domain (example.com) and its subdomains (sub.example.com).

Public hosted zone
 A container for managing DNS domain records published on the Internet
 Defines how to route traffic to the domain on the Internet

Private hosted zone
 A container for managing DNS domain records in a private network closed within VPCs
 Defines how to route traffic to DNS domains in the VPC
 One private hosted zone supports multiple VPCs
 VPCs from multiple regions can use the same hosted zone as long as the VPCs are mutually accessible
[Q] Record type.

You are a solution architect and you are building a web application on AWS.
You want to use the example.com domain name for this configuration, and
you need to configure it for a Route53 record.

Which of the following record types is not supported by Amazon Route 53?

1) MX
2) AAAA
3) CNAME
4) DNSSEC
Record type
Create DNS records and set various records to configure
the routing method

SOA
 Maintains the domain's DNS server, the domain administrator's email address, the serial number, etc., and is used to determine whether information has been updated during a zone transfer.

A
 A record that defines the association between a host name and an IPv4 address.

MX
 A record that defines the host name of the mail delivery destination (mail server).

CNAME
 A record that defines an alias for a canonical host name. It is used to forward a specific host name to another domain name.

Other record types can be found at:
https://docs.aws.amazon.com/ja_jp/Route53/latest/DeveloperGuide/ResourceRecordTypes.html
[Q] Alias records

You are a solution architect and you are building a web application on AWS. This application uses IPv4 communication only. You have deployed the application on EC2 instances in an Auto Scaling group, with an ALB in place that evenly distributes incoming traffic. You would like to use the example.com domain name for this configuration.

Which record types should you use to configure the DNS name for the ALB in Route 53? (Please select two.)

1) AAAA Records
2) A Records
3) CNAME Records
4) Alias record type "AAAA" record set.
5) Type "A" record set of alias records
Alias record
Use AWS-specific alias records when associating AWS
resources such as CloudFront and ELB with a domain.

 An alias record can map a domain name to an AWS resource by returning the IP address of the AWS service endpoint in response to a DNS query.
 Used for the following services:
• S3 bucket configured as a static website
• CloudFront
• ELB
• AWS Elastic Beanstalk environment
 Choose the type according to the IP address version:
• A record (IPv4 address) for the alias target's IPv4 address
• AAAA record (IPv6 address) for the alias target's IPv6 address
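A minimal sketch of creating an alias A record for an ALB with boto3; both hosted zone IDs and the ALB DNS name are placeholders (the AliasTarget zone ID is the ALB's canonical hosted zone, not your own zone):

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z3EXAMPLEZONE",  # placeholder: your public hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com.",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z35SXDOTRQ7X7K",  # placeholder ALB zone ID
                    "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com.",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)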
[Q] Select a Routing policy.

You are a solution architect and you are building a web application on AWS.
The application utilizes multiple EC2 instances behind the ELBs for increased
redundancy. For this application, you need to use Route53 to configure it to
minimize the amount of communication latency that occurs.

How would you need to configure AWS Route 53 in this scenario?

1) Use fail-over routing policy.


2) Use latency routing policy.
3) Use weighted routing policy.
4) Use simple routing policy.
Select a Routing policy.
Various routing methods can be selected and configured.

Simple routing policy
 A routing method that responds to DNS queries based solely on the pre-set values in the record set.
 Routing is determined by static mapping.

Weighted routing policy
 A routing method that sets weights on multiple endpoints and responds to DNS queries according to the weights.
 Routes more traffic to highly weighted endpoints.

Failover routing policy
 A routing method that responds to DNS queries with available resources, based on health checks.
 Routes to the available resources.

Multivalue answer routing policy
 A routing method that returns multiple values, up to eight healthy records chosen at random.
 It does not replace ELB, but by checking health and returning multiple IP addresses it improves availability and enables DNS-based load balancing.
Select a Routing policy.
Various routing methods can be selected and configured.

Latency routing policy
 A routing method that responds to DNS queries depending on the latency of the region, which is often the user's nearest region.
 Routes to the region with the lower latency.

Geolocation routing policy
 A routing method that returns different records for each region by identifying the user's location based on their IP address.
 Enables highly accurate classification of record responses without relying on the network structure.

Geoproximity routing policy
 A method for routing traffic by creating geographic proximity rules based on user and resource locations.
- If you are using an AWS resource, the location is the AWS region where the resource was created.
- If you are using a non-AWS resource, the location is given by the latitude and longitude of the resource.
 The amount of traffic routed to a particular resource can be changed by setting a bias as needed.
 You need to use traffic flow to create this type of policy.
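As a hedged sketch (assumed boto3 usage; zone IDs and ALB names are hypothetical), a latency routing policy is simply two record sets that share a name but each carry a Region and a SetIdentifier:

    import boto3

    route53 = boto3.client("route53")

    def latency_alias(region, alb_dns, alb_zone_id):
        # One latency-routed alias record per regional ALB; Route 53 answers
        # with the region that has the lowest latency for the caller.
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "A",
                "SetIdentifier": region,   # must be unique per record
                "Region": region,          # enables latency-based routing
                "AliasTarget": {
                    "HostedZoneId": alb_zone_id,
                    "DNSName": alb_dns,
                    "EvaluateTargetHealth": True,
                },
            },
        }

    route53.change_resource_record_sets(
        HostedZoneId="Z1EXAMPLE",  # hypothetical hosted zone
        ChangeBatch={"Changes": [
            latency_alias("us-east-1", "alb-use1.example.aws.com.", "Z35EXAMPLE1"),
            latency_alias("ap-northeast-1", "alb-apne1.example.aws.com.", "Z14EXAMPLE2"),
        ]},
    )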
[Q] Failover configuration

A company is building an application using two EC2 instances. As a solution architect, you decide to configure Route 53 so that it can fail over between the ALBs set up in front of the EC2 instances. To do so, the DNS alias record needs to be updated to point to the secondary ALB.

Which Route53 configuration is needed to automate the failover process?

1) Select the ELB health check type and configure Route 53.
2) Select the EC2 health check type and configure Route 53.
3) Create a CNAME record in Amazon Route 53 that points to an ALB
endpoint
4) Enable Amazon Route53 health checks and configure routing policies.
Failover configuration
A failover configuration is a redundant primary/secondary configuration that utilizes Route 53's health-checking capabilities.

• Routes to destinations based on health checks
• Configurable across regions
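A hedged boto3 sketch of an active/passive failover pair (the health check, zone IDs, and ALB names are hypothetical placeholders):

    import boto3

    route53 = boto3.client("route53")

    # A health check Route 53 uses to decide when to fail over.
    hc = route53.create_health_check(
        CallerReference="primary-alb-hc-001",  # any unique string
        HealthCheckConfig={
            "Type": "HTTPS",
            "FullyQualifiedDomainName": "primary-alb.example.aws.com",
            "Port": 443,
            "ResourcePath": "/health",
        },
    )

    def failover_record(role, alb_dns, alb_zone_id, health_check_id=None):
        rrset = {
            "Name": "app.example.com.",
            "Type": "A",
            "SetIdentifier": role.lower(),
            "Failover": role,  # "PRIMARY" or "SECONDARY"
            "AliasTarget": {
                "HostedZoneId": alb_zone_id,
                "DNSName": alb_dns,
                "EvaluateTargetHealth": True,
            },
        }
        if health_check_id:
            rrset["HealthCheckId"] = health_check_id
        return {"Action": "UPSERT", "ResourceRecordSet": rrset}

    route53.change_resource_record_sets(
        HostedZoneId="Z1EXAMPLE",
        ChangeBatch={"Changes": [
            failover_record("PRIMARY", "primary-alb.example.aws.com.",
                            "Z35EXAMPLE1", hc["HealthCheck"]["Id"]),
            failover_record("SECONDARY", "secondary-alb.example.aws.com.",
                            "Z14EXAMPLE2"),
        ]},
    )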
[Q] Failover configuration
An enterprise is building an application using two EC2 instances. As a solution architect, you are trying to make the EC2 instances redundant with DNS routing so that traffic to a failed instance can be avoided. When no failure is occurring, you plan to use both instances actively.

Choose a method that meets this requirement.

1) Active/passive configuration with failover routing
2) Active/active configuration with failover routing
3) Active/passive configuration with latency routing
4) Active/active configuration with latency routing
Failover configuration
A failover configuration uses Route 53's health check feature to route to healthy resources.

Failover (Active/Passive)
◼ Route 53 routes only the primary resource as the active resource. In the event of a failure, Route 53 routes to the secondary resource.
◼ Configure this setting using a failover routing policy.

Failover (Active/Active)
◼ Route 53 routes multiple resources as active. In the event of a failure, Route 53 falls back to a healthy resource.
◼ Configure this setting using routing policies other than the failover routing policy.
[Q] Route 53 Geo Restrictions
A major media company is building its news delivery application on AWS. Its users are global and content is delivered globally. The application uses a fleet of EC2 instances deployed in a private subnet behind an ALB. There are restrictions on information distribution in China, so access from China must be blocked.

Which of the following options would allow you to enforce the geographic
restrictions? (Select two.)

1) Use Route53's Geolocation routing policy to limit content delivery to the


locations where you have distribution rights.
2) Use Route53's Geoproximity routing policy to limit content delivery to
where you have distribution rights.
3) Enable Route 53 region restrictions to set distribution limits to specific
regions.
4) Enable CloudFront geo-restrictions to set distribution limits to specific
regions.
5) Configure distribution restrictions to specific regions according to the
CloudFront distribution policy.
Route 53 Geo Restrictions
The Geolocation routing policy can be used to restrict content distribution to only those locations where you are authorized to distribute it.

Use cases for the Geolocation routing policy:

 Restrict content to only those locations where you have the right to distribute it, by specifying a geographical region and setting distribution limits
 Localize content distribution, for example, by changing content based on region
 Use endpoints in specific regions to improve performance locally
[Q] Traffic Flow

An enterprise is building an application using two EC2 instances. As a solution architect, you are configuring routing using Route 53. After sorting out the design policy, you need to set up a complex routing policy due to the organizational structure and the large number and complexity of app users.

Select an efficient way to implement a complex routing configuration using Route 53.

1) Configure routing policies by creating flows using ALIAS records
2) Configure a routing policy by combining ALIAS records with traffic flow
3) Configure routing policies by setting their order using traffic flow
4) Set a routing policy by defining routing in a JSON or YAML file
Traffic flow
Instead of building complex routing policies out of ALIAS records as in the past, complex policies can now be configured in the visual editor of traffic flow.

From the Route 53 record set screen, you can use Traffic Flow to set up a routing policy visually.
[Q]TTL

A company runs an application with ELB and Route 53 configured on two EC2 instances. The application is published using the domain example.com. You have recently put a disaster recovery plan in place and, as the solution architect, you are reviewing the configuration to make it redundant through DNS routing. To do so, you have reconfigured a new domain in your existing hosted zone on Route 53. However, even after an hour, routing to the new domain is not being performed.

Which is the most likely cause of this problem?

1) The TTL has not yet expired.
2) CNAME records are not configured correctly.
3) A health check error is occurring.
4) The domain was just acquired and is not yet reflected.
TTL
You can set how long (in seconds) a recursive DNS resolver should cache and retain information about records.

 A DNS resolver queries the DNS servers it knows to find out the IP address for a name (name resolution). In other words, it resolves the correspondence between domain names and addresses.
 Recursive DNS resolvers must re-query the domain to pick up changes.
 By keeping the information in its cache, the resolver can keep track of domain information without having to resolve the name every time.
 Caching lets recursive DNS resolvers reduce the number of calls that need to be made to Route 53.
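A hedged boto3 sketch of lowering a record's TTL ahead of a planned DNS change so resolvers re-query sooner (the zone ID and IP address are hypothetical):

    import boto3

    route53 = boto3.client("route53")

    # Lower the TTL to 60 seconds well before the cutover, so cached
    # answers expire quickly once the record is repointed.
    route53.change_resource_record_sets(
        HostedZoneId="Z1EXAMPLE",
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "A",
                "TTL": 60,  # seconds a resolver may cache this answer
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]},
    )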
[Q] Apply Route53 to on-premise
A company runs an application with ELB and Route 53 configured on two
EC2 instances. The application is published using the domain example.com.
As a solution architect, you want to use Route 53 with your on-premises environment. You need to resolve DNS queries for resources in your on-premises network from AWS VPCs.

Which of the following configurations meets this requirement? (Select two.)

1) Create an inbound endpoint in the Route 53 resolver to allow DNS


resolvers on the on-premises network to forward DNS queries to the
Route 53 resolver.
2) Create an outbound endpoint in the Route 53 resolver to allow the Route
53 resolver to forward queries to the resolver on the on-premises
network.
3) Create an inbound endpoint in the Route 53 resolver to allow the Route
53 resolver to forward queries to the resolver on the on-premises
network.
4) Create an outbound endpoint in the Route 53 resolver to allow DNS
resolvers on the on-premises network to forward DNS queries to the
Route 53 resolver.
5) Use VPC endpoints from the Route 53 resolver to allow the Route 53
resolver and the resolver on the on-premises network to work with each
other.
Apply Route53 to on-premise
Route 53 Resolver enables name resolution between your on-premises network and your VPCs, in both directions.

✓ Create an inbound endpoint to accept DNS queries from the on-premises network into the VPC
✓ Create an outbound endpoint to forward DNS queries from the VPC out to the on-premises network

Reference: https://aws.amazon.com/jp/blogs/aws/new-amazon-route-53-リゾルバー-for-hybrid-clouds/
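A hedged boto3 sketch of creating an outbound Resolver endpoint plus a forwarding rule for an on-premises zone (all IDs, subnets, and addresses are hypothetical):

    import boto3

    resolver = boto3.client("route53resolver")

    # Outbound endpoint: the VPC-side network interfaces Route 53 Resolver
    # uses to forward queries toward on-premises DNS servers.
    endpoint = resolver.create_resolver_endpoint(
        CreatorRequestId="outbound-ep-001",  # any unique string
        Name="to-onprem",
        Direction="OUTBOUND",
        SecurityGroupIds=["sg-0123456789abcdef0"],
        IpAddresses=[
            {"SubnetId": "subnet-aaaa1111"},  # endpoints need IPs in
            {"SubnetId": "subnet-bbbb2222"},  # at least two subnets/AZs
        ],
    )

    # Forward queries for the on-premises zone to the on-premises resolver.
    resolver.create_resolver_rule(
        CreatorRequestId="fwd-rule-001",
        Name="corp-zone",
        RuleType="FORWARD",
        DomainName="corp.example.com",
        TargetIps=[{"Ip": "10.10.0.2", "Port": 53}],  # on-prem DNS server
        ResolverEndpointId=endpoint["ResolverEndpoint"]["Id"],
    )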
The scope of Security Group
What is a security group?
A firewall feature to configure the accessibility of traffic to the
instance

[Diagram: a security group in front of an EC2 instance filters HTTP and SSH access; port 22 (SSH) is permitted]
The scope of Security Group question
Frequent questions extracted from 1625 questions are as follows

Security Group Features  You will be asked how security groups control traffic.

Default Settings  You will be asked about the settings of the default security group.

SSH Connection  You will be asked about traffic control settings for SSH connections, a basic part of configuring an EC2 instance.

The use of custom sources  You will be asked how to set up a security group source.

ELB Security Group Settings  You will be asked how to set up security groups when configuring ELB and EC2 instances.
The scope of Security Group question
Frequent questions extracted from 1625 questions are as follows

RDS security group setting  You will be asked how to set up security groups when configuring RDS and EC2 instances.
[Q] Security group features

Company A is building a web application consisting of a web server and a database server.
The web server and the database server are configured using different EC2 instances
located in different subnets. For security purposes, the database server should only allow
traffic from the web server.

Which is a suitable setting to meet this requirement?

1) Controlling Traffic by VPC Endpoints


2) Controlling Traffic by Security Groups
3) Controlling Traffic by Network ACLs
4) Allow the web server to access the database server by an IAM role.
Security Group
Security Groups control traffic to EC2 instances

[Diagram: VPC 10.0.0.0/16 with a public subnet (10.0.5.0/24) hosting a web server EC2 instance and a private subnet (10.0.10.0/24) hosting a DB server EC2 instance, each protected by its own security group]
Network ACLs
Network ACLs control traffic to subnets

[Diagram: the same VPC layout, with network ACLs applied at the subnet boundaries in addition to the per-instance security groups]
Security groups and network ACLs
Both security groups and network ACLs must be set to control traffic.

Security Group Settings
 Applied at the instance level
 Stateful: if inbound is allowed, the corresponding outbound response is also allowed (state is kept)
 Only Allow rules can be set (in/out)
 The default SG allows communication only within the same security group
 All rules are evaluated

Network ACLs Settings
 Applied at the subnet level
 Stateless: an inbound configuration alone is not mirrored for outbound
 Both Allow and Deny rules can be set (in/out)
 The default NACL is set to allow all communications
 Rules are applied in numbered sequence
[Q] Default setting
A solution architect has created a new AWS account and selected the Asia Pacific
(Sydney) region. Within the default VPC, there is a default security group.

Which is the default setting for the default security group? (Select two)

1) Inbound rules allow all traffic from all addresses.
2) Outbound rules allow all traffic to all addresses.
3) Outbound rules allow all traffic from the same security group.
4) There are outbound rules that allow traffic only to the VPC.
5) Inbound rules allow all traffic from the same security group.
Default Setting
If you do not specify a security group, Amazon EC2 uses the
default security group

Default Security Group
 If you launch an EC2 instance without specifying a security group, the default security group is applied.
 It is set to allow all inbound access from resources with the same security group.
 The outbound rule allows all traffic to all addresses.

Default settings of Custom Security Group
 When you launch an EC2 instance, a custom security group with default settings is proposed automatically.
 For Linux instances, the proposed SSH rule source is 0.0.0.0/0.
 A specific IP address is set in the security group that is first proposed when a DB instance is launched.
Default Settings
The VPC default security group allows traffic only from the same security group ID, so communication from elsewhere is initially disabled.

Reference: https://docs.aws.amazon.com/ja_jp/vpc/latest/userguide/VPC_SecurityGroups.html#DefaultSecurityGroup
[Q] SSH connection

A major e-commerce site uses an on-demand EC2 instance to build its web server. This EC2 instance must be placed in a public subnet and must be accessible only from a specific IP address (130.178.101.46) via an SSH connection.

Which security group settings would allow this access?

1) Select the SSH protocol UDP and port 22 and set the source to 130.178.101.46/32.
2) Select the SSH protocol UDP and port 22 and set the source to 130.178.101.46/0.
3) Select the SSH protocol TCP and port 22 and set the source 130.178.101.46/32.
4) Select the SSH protocol TCP and port 22 and set the source 130.178.101.46/0.
SSH Connection
When connecting to an EC2 instance via SSH, configure the security group to allow the TCP protocol on port 22, with the source restricted to the client's IP in /32 notation.
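A hedged boto3 sketch of that rule (the security group ID is a hypothetical placeholder):

    import boto3

    ec2 = boto3.client("ec2")

    # Allow SSH (TCP 22) only from the single admin address, /32.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "130.178.101.46/32",
                          "Description": "admin SSH only"}],
        }],
    )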
[Q] The use of custom sources

A major e-commerce site uses on-demand EC2 instances to build a web server. You need
to place the web server on a public subnet and the database server on a private subnet
and configure traffic control in the security group. As a solution architect, you are in the
process of setting up the inbound rules for the security group.

Which option is not available when setting up a security group source?

1) Use the security group ID for custom sources of inbound rules.


2) Use CIDR for custom sources of inbound rules.
3) Use an IP address for custom sources of inbound rules.
4) Use the Internet Gateway ID for custom sources of inbound rules.
The use of custom sources
The source of a security group rule can be a CIDR (IP address range) or a security group ID.

[Screenshot: the source field accepts an IP address/CIDR or a security group]
[Q] ELB security group settings

A major e-commerce site uses on-demand EC2 instances to build its web server. The web
server is placed in a public subnet and the database server is placed in a private subnet to
distribute traffic with ELBs. Set up security groups on the ELB and the web server to
allow public access to the ELB from the Internet and restrict the web server to access
only from the ELB.

Which method of setting up ELB security groups can meet this requirement? (Please
select two)

1) Add an inbound rule to allow HTTP / HTTPS to the ELB security group and specify
"0.0.0.0/0" as the source.
2) Add outbound rules to allow all TCP to ELB security groups, and specify an Internet
gateway as the source
3) Add an outbound rule to allow HTTP / HTTPS to the ELB security group and specify
the web server security group to the source.
4) Add inbound rules to allow HTTP / HTTPS to the ELB security group and specify the
web server security group to the source.
5) Add an outbound rule to allow HTTP / HTTPS to the ELB security group and specify
the source 0.0.0.0/0.
ELB security group settings
When configuring an ELB in front of an EC2 instance serving as a web server, it is better to restrict the ELB's outbound access to the web server only.

Inbound
 Allow HTTP/HTTPS traffic from the Internet
 Use source 0.0.0.0/0 so the web site is accessible to everyone

Outbound
 Allow HTTP/HTTPS traffic only toward the web server
 To limit the ELB's traffic to the web server, specify the web server's IP address or the security group set on the web server
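A hedged boto3 sketch of chaining the two security groups (both group IDs are hypothetical):

    import boto3

    ec2 = boto3.client("ec2")

    ALB_SG = "sg-0aaaaaaaaaaaaaaaa"  # hypothetical ALB security group
    WEB_SG = "sg-0bbbbbbbbbbbbbbbb"  # hypothetical web server security group

    # ALB: accept HTTP/HTTPS from anywhere.
    ec2.authorize_security_group_ingress(
        GroupId=ALB_SG,
        IpPermissions=[
            {"IpProtocol": "tcp", "FromPort": p, "ToPort": p,
             "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}
            for p in (80, 443)
        ],
    )

    # Web server: accept HTTP only from the ALB's security group.
    ec2.authorize_security_group_ingress(
        GroupId=WEB_SG,
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
            "UserIdGroupPairs": [{"GroupId": ALB_SG}],
        }],
    )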
[Q] RDS security group settings

A major e-commerce site uses multiple on-demand EC2 instances to build its web server.
The web server is deployed in a public subnet and the RDS PostgreSQL database is
deployed in a private subnet and security groups are set up. Traffic from the Internet is
distributed to the EC2 instance by the ALB, allowing only HTTPS access from the Internet,
and the SSL configuration is configured to terminate at the ALB.

How do you need to configure the security group to increase safety? (Select three)

1) Set the RDS security group to an inbound rule from the security group for the EC2
instance on port 5432.
2) Set the security group of the EC2 instance to the inbound rule from the security group
of ALB on port 80.
3) Set the inbound rule from source 0.0.0.0/0 on port 443 and port 80 in the ALB
security group.
4) For the RDS security group, set inbound rules from the security group of the EC2
instance on port 443 and port 80.
5) Set the security group of the EC2 instance to the inbound rule from the security group
of ALB on port 443.
6) Set the inbound rule from source 0.0.0.0/0 on port 443 in the ALB security group.
RDS security group settings
Set the protocol and port number used by the database in the RDS security group.

ALB Security Group  Allow traffic over HTTPS or HTTP from the Internet and specify 0.0.0.0/0 as the source.

EC2 Security Group  Allow inbound access from the ALB via HTTPS or HTTP and specify the ALB security group as the source.

RDS Security Group  Allow port 5432 for communication with PostgreSQL from the EC2 instances serving as web servers; the source specifies the security group of the EC2 instances.

RDS security group settings
Typical database engine port numbers for RDS are as follows:

• MySQL: TCP port 3306
• PostgreSQL: TCP port 5432
• Remote Desktop: TCP port 3389
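A hedged boto3 sketch of the PostgreSQL rule (both group IDs are hypothetical):

    import boto3

    ec2 = boto3.client("ec2")

    # RDS PostgreSQL: accept port 5432 only from the web tier's SG.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0cccccccccccccccc",  # hypothetical RDS security group
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 5432, "ToPort": 5432,
            "UserIdGroupPairs": [{"GroupId": "sg-0bbbbbbbbbbbbbbbb"}],  # web SG
        }],
    )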


The Scope of Kinesis
What is Kinesis?
Service to build analytical systems and applications for stream data
processing

[Diagram: data producers such as IoT devices stream data into Kinesis Streams, where Spark apps and other consumers process it]
Scope of Kinesis questions
The results of analyzing the range of questions from 1625 are as
follows
Selecting Kinesis  Based on the scenario, you will be asked to choose Kinesis as the data processing service that meets the requirements.

Kinesis Features  You will be asked about Kinesis characteristics, such as the data retention period.

Kinesis configuration  Based on the scenario, you will be asked about the configuration of Kinesis-based applications and stream processing.

Cooperation between Kinesis and other services  You will be asked about the services that can be integrated with Kinesis Data Firehose and Kinesis Data Streams.

Kinesis application  You will be asked how to build applications using Kinesis.
Scope of Kinesis questions
The results of analyzing the range of questions from 1625 are as
follows

Kinesis Scaling  Based on the scenario, you will be asked how to scale Kinesis.
[Q] Select Kinesis

A large media company wanted to generate advertising revenue through news media and built a media site using AWS. To serve ads to users in real time, the site captures access behavior data and processes it in real time. You need a mechanism that captures clickstream events from the source and simultaneously feeds the data stream to downstream applications.

Which service should you use to meet this requirement?

1) Use Amazon Kinesis Data Streams.
2) Use Amazon SQS.
3) Use AWS Step Functions.
4) Use Amazon Simple Workflow (SWF).
Kinesis
Kinesis, a fully managed service for collecting and processing
stream data, consists of three main services

Amazon Kinesis Data Streams  Build applications that process stream data

Amazon Kinesis Data Firehose  Easily deliver stream data to S3, Redshift, etc.

Amazon Kinesis Data Analytics  Real-time visualization and analysis of stream data with standard SQL queries
Amazon Kinesis Data Streams
Services to build analytical systems and applications for stream
data processing

[Diagram: as above, producers stream data into Kinesis Streams for consuming applications]
Amazon Kinesis Data Streams
The streaming process is divided into shards and distributed to
allow faster processing

[Diagram: an incoming stream is split across Shard 1, Shard 2, and Shard 3, which are processed in parallel]
Amazon Kinesis Data Streams
Kinesis Data Streams is made up of the following elements. Kinesis
improves performance with shards.

Shard  The basic unit of throughput for an Amazon Kinesis data stream. One shard provides 1 MB/s of data input and 2 MB/s of data output capacity and supports up to 1,000 PUT records per second.

Record  A unit of data stored in an Amazon Kinesis data stream. Records consist of a sequence number, a partition key, and a data BLOB.

Data BLOB  The data to be processed, added to the data stream by the data producer. The maximum size is 1 megabyte (MB).

Partition key  Used to separate records and route them to different shards of the data stream.

Sequence number  A unique identifier for each record.
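A hedged boto3 sketch of a producer writing one record per device, using the device ID as the partition key (the stream name and payload are hypothetical):

    import boto3, json

    kinesis = boto3.client("kinesis")

    def send_reading(device_id, payload):
        # Records sharing a partition key land on the same shard,
        # which preserves per-device ordering.
        kinesis.put_record(
            StreamName="traffic-sensors",        # hypothetical stream
            Data=json.dumps(payload).encode(),   # data BLOB, <= 1 MB
            PartitionKey=device_id,
        )

    send_reading("sensor-042", {"speed_kmh": 57, "ts": "2024-01-01T00:00:00Z"})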


Amazon Kinesis Data Streams
A variety of services are available to the data provider (producer)
and data user (consumer) of Kinesis Streams
[Diagram: producers such as the Kinesis Agent, KPL (Kinesis Producer Library), AWS SDK, Fluentd plugin, CloudWatch, and AWS IoT write into the shards of a Kinesis stream; consumers such as Kinesis Data Firehose, the Kinesis Client Library, Kinesis Analytics, Lambda, EMR, and Apache Storm read from it]
[Q] Characteristics of Kinesis

An IoT solution maker is building a sensor-based traffic survey system on AWS. IoT sensor data is collected using Amazon Kinesis Data Streams and delivered to S3 buckets via Amazon Kinesis Data Firehose, in some cases only after as long as 24 hours. However, upon examining the data, it appears that the S3 bucket is not receiving all of the data sent to the Kinesis stream. There seems to be no problem with the sensor devices sending data.

Which is the most likely cause of this problem?

1) The data retention period in Amazon Kinesis Data Streams is the default setting.
2) The delivery settings from Amazon Kinesis Data Streams have been disabled.
3) Amazon Kinesis Data Firehose has enabled a processing setting that eliminates some
insufficient data.
4) The data retention period in Amazon Kinesis Data Firehose is the default setting.
Kinesis Features
Kinesis allows you to adjust the amount of data, data retention
period, batch interval and encryption

 Able to specify batch size or batch interval, such as setting the batch interval
to 60 seconds
 Kinesis data stream data retention period defaults to 24 hours, with a maximum
value of 168 hours
 Delivery streams are automatically scaled, one shard can capture up to 1 MB of
data per second (including partition keys), and writes can capture 1,000 records
per second
 Automatically encrypts uploaded data to the destination by specifying a KMS
encryption key
 Metrics can be viewed from the console or Amazon CloudWatch
 Charged only for the amount of data sent and the conversion of data formats
[Q] Using Kinesis Data Firehose

IoT solution manufacturers are building sensor-based traffic survey systems on AWS,
which collects IoT sensor data and uses those data for traffic volume prediction models.
The speed of the data is 1 GB per minute and it is necessary to narrow down the data to
include only the most relevant attributes and store them in S3 to build a predictive model.

Which is the most cost-effective combination of services to meet this requirement?

1) Get the data into Kinesis Data Streams, use the Lambda function to narrow down the
data output range, and then store it in S3.
2) Get the data into Kinesis Data Firehose, narrow down the data output range with
Firehose's filtering capabilities, and then save it to S3.
3) Get the data into Kinesis Data Streams, use Kinesis Data Analytics to narrow down
the data output range, and then store it in S3.
4) Get the data into Kinesis Data Firehose, use the Lambda function to narrow down the
data output range, and then save it to S3.
Amazon Kinesis Data Firehose
A service for delivering stream data to various destinations; combined with Lambda, it also functions as an ETL.

[Diagram: IoT data flows into Kinesis Firehose, which delivers it to S3, Redshift, or Elasticsearch]
[Q] Basic configuration of Kinesis

An IoT solution manufacturer is building a sensor-based traffic survey system on AWS


that collects IoT sensor data and uses it for traffic volume prediction models. The data is sent to AWS in real time. For quality of service, it is essential to ensure that the data from each IoT device is received and processed.

Which is the most cost-effective combination of the following services?

1) Collect data per device in Amazon Kinesis Data Streams using each device's partition
key, and use Amazon Kinesis Data Firehose to store the data in Amazon S3
2) Specify a shard for each device, collect data on a per-device basis with Amazon
Kinesis Data Streams, and use Amazon Kinesis Data Firehose to store the data in
Amazon S3
3) Collect data per device in Amazon SQS using one standard queue for each device, and
use the Lambda function to store the data in Amazon S3
4) Collect data per device in Amazon SQS using one FIFO queue for each device, and
use the Lambda function to store the data in Amazon S3
Amazon Kinesis Data Firehose
Kinesis Data Streams collects data in real time and Kinesis Data
Firehose transforms and stores the data.

[Diagram: IoT data flows into Kinesis Streams for real-time collection, then into Kinesis Firehose for delivery to S3, Redshift, or Elasticsearch]
Amazon Kinesis Data Analytics
Kinesis Data Analytics provides real-time analysis of stream data with standard SQL queries.

[Diagram: Kinesis Analytics reads from streaming sources (Kinesis Streams or Kinesis Firehose) and writes results to streaming destinations (Kinesis Streams or Kinesis Firehose)]
[Q] Cooperation between Kinesis and other services

An automaker intends to deploy a MaaS platform that captures real-time location data from its latest models. The company's solution architects plan to use Kinesis Data Firehose to deliver the streaming data to downstream analytics targets.

Which of the following targets are not supported as Kinesis Data Firehose destinations?

1) Amazon EMR
2) Amazon RedShift
3) S3
4) Amazon Elasticsearch
Cooperation between Kinesis and other services
Kinesis works in conjunction with other services to process or store data.

Kinesis Data Streams
• Poll and process the stream data with Lambda functions.
• Perform data processing on EC2.

Kinesis Data Firehose
The following destinations can be specified for stream delivery:
• Amazon S3
• Amazon Redshift
• Amazon Elasticsearch Service
[Q] Building an application

The IoT venture operates a store analytics IoT solution. IoT data, such as store sensors, is
sent to Kinesis Data Streams and processed for delivery by Kinesis Data Firehose. The
solution architect has configured the Kinesis Agent to send IoT data to the Firehose
delivery stream, but the data does not seem to be reaching the Firehose as expected.

Which are the most plausible root causes of this problem?

1) Kinesis Agent is required to be set in the Kinesis data stream.


2) Kinesis Data Firehose delivery stream processing has reached the limit.
3) Kinesis Data Streams lack of shards.
4) Kinesis Data Firehose distribution stream source is set to Kinesis Data Streams
Building an application
Kinesis Streams leverages the following functions to build streaming
processing applications
Amazon Kinesis Agent  OSS standalone Java application that easily collects data and feeds it into Kinesis services

Amazon Kinesis Producer Library (KPL)  OSS auxiliary library for sending data to Kinesis Streams

Fluent plugin for Amazon Kinesis  OSS Fluentd output plugin to send events to Kinesis Streams and Kinesis Firehose

Amazon Kinesis Data Generator (KDG)  Easily send test data to Kinesis Streams or Kinesis Firehose

Amazon Kinesis Client Library (KCL)  OSS client library used to create Kinesis consumer applications deployed to EC2 instances, etc.; workers manage the record processor life cycle (create/end) according to the number of shards
[Q] Kinesis scaling

An agricultural venture spun out of a university operates an agricultural IoT solution. It has implemented an IoT application using Kinesis Data Streams to acquire and analyze sensor data in real time from sensor devices installed on farmland. The data delivery rate between the stream's producers and consumers has started to lag. As the solution architect, you have been asked to improve throughput performance.

What should be done to improve the current performance?

1) Using the enhanced monitoring capabilities of Amazon Kinesis Data Streams.


2) Increase the number of read transactions per shard.
3) Use the scaling feature of Amazon Kinesis Data Streams.
4) Use the extended fan-out feature of Amazon Kinesis Data Streams.
Kinesis Scaling
In Kinesis, scale by resharding, which changes the number of shards.

Resharding
• Shard split: improve performance by increasing the number of shards.
• Shard merge: reduce costs by decreasing the number of shards.
• It can support up to one instance per shard.

Enhanced fan-out
• A consumer feature dedicated to throughput.
• Allows consumers to receive records from a stream with a throughput of up to 2 MB of data per second per shard.
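A hedged boto3 sketch of resharding by doubling the shard count (the stream name is hypothetical):

    import boto3

    kinesis = boto3.client("kinesis")

    # Double the shard count; UNIFORM_SCALING splits/merges shards evenly.
    current = kinesis.describe_stream_summary(StreamName="traffic-sensors")
    shards = current["StreamDescriptionSummary"]["OpenShardCount"]

    kinesis.update_shard_count(
        StreamName="traffic-sensors",
        TargetShardCount=shards * 2,
        ScalingType="UNIFORM_SCALING",
    )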
Kinesis Scaling
In this configuration, consumers using the enhanced fan-out feature each process up to 2 MB/second of data per shard from the stream.

Reference: https://docs.aws.amazon.com/ja_jp/streams/latest/dev/enhanced-consumers.html
The scope of EFS
What is EFS?
File storage available for sharing in multiple instances

[Diagram: EC2 instances in three AZs all mount the same EFS file system]

EFS Question Scope
The results of analyzing the range of questions from 1625 are as
follows
Selecting EFS  Based on the scenario, storage requirements are presented and you will be asked to choose EFS.

EFS Settings  You will be asked how to configure EFS.

EFS Configuration  You will be asked how to configure EFS with multiple EC2 instances.

EFS Performance Mode  Based on the scenario, you will be asked about the differences between EFS performance modes and how to configure them.

The use of EFS IA  You will be asked how to implement EFS lifecycle management and set up cost savings using Infrequent Access.
[Q] Select EFS

As a solution architect, you are building an application that makes use of multiple Linux
EC2 instances. This application requires access to a POSIX-compliant shared network file
system.

Which storage service should you choose?

1) EBS
2) S3
3) Amazon FSx for Windows
4) EFS
EFS (Elastic File System)
Shared storage accessible from multiple EC2 instances

S3
 Object storage deployed at the region level
 S3 can be accessed directly from the Internet
 For long-term storage of large amounts of data

EBS
 Block storage deployed in an AZ
 Used as an EC2 instance's disk volume; not shared over the network
 Cannot be attached to multiple EC2 instances (except Multi-Attach on provisioned IOPS volumes)

EFS
 NAS-like file storage
 Can be used as a file system with shared access by multiple EC2 instances
 Unlike S3, EFS cannot be accessed directly from the Internet
EFS (Elastic File System)
EFS is a simple, scalable and flexible file storage

 Fully Managed Services


 Accessible via the Network File System version 4 (NFS v4)
Simple
protocol, with associated tools and standard protocols/APIs
(POSIX compliant)

 Scalable data storage up to petabytes


Scalable  Throughput/IOPS performance automatically scales and
maintains low latency

 Automatic expansion as the files increase and contraction


as the files decrease
Flexibility  No need to set the capacity in advance
 Pay-as-you-go billing
Basic Performance
EFS has the ability to enable thousands of simultaneous accesses.

Basic Performance
 Throughput 100 MiB/s (Bursting)
 File names up to 255 bytes
 Maximum size of a single file: 48 TB
 Up to 128 users per instance can open files simultaneously
 POSIX compliance
 Thousands of simultaneous accesses are possible

Limitations
 File systems per account: 1,000
 Mount targets per file system per AZ: 1
 Tags per file system: 50
 Security groups per mount target: 5
 VPCs per file system: 1
 Mount targets per VPC: 400
Use case
Use EFS when sharing data with multiple EC2 instances.

Usage Policy
 Simultaneous access from multiple instances that cannot be configured with EBS (except Multi-Attach)
 Need to add data within seconds
 Need to operate in a fully managed and simplified manner

Usage Scene
 Use EFS for an application's shared directory
 Use EFS for shared data access storage in distributed parallel processing environments such as big data
 Use EFS for content-sharing repositories
EFS data storage
EFS data files are stored distributed across multiple AZs

[Diagram: data stored in EFS is distributed across three AZs]
EFS Settings
EFS is built using the following procedure.

1. Create a file system
2. Select the performance mode
3. Create a mount target for the destination
4. Create security groups


File System
A file system is an EFS management unit and is a storage location
for files and directories.

[Diagram: a file system contains directories and files as its management unit]
[Q] EFS Settings

A leading IT company is building a web application on AWS. The application is hosted on


multiple EC2 instances deployed in multiple AZs and uses Amazon EFS to configure a
user's home directory. It needs to be configured to allow users to store their files in the
EFS file system.

How do you set up EFS to meet this requirement? (Select two)

1) Create a subdirectory for each user and grant users read/write/execution permissions.
Then mount the subdirectory to the user's home directory.
2) Configure a mount target in each AZ where each EC2 instance is deployed and set up
access to EFS.
3) Configure a mount target in the region where each EC2 instance is deployed and set
up access to EFS
4) Configure a mount target on the VPC where each EC2 instance is deployed and set up
access to EFS.
5) Create a separate EFS file system for each user and grant each user
read/write/execution rights to the root directory. Then mount the file system in the
user's home directory.
Mount Target
To access EFS, it is necessary to set the mount target to which
the EC2 instance is connected to.

 Connection destination in the AZ in VPC


 EC2 instances connect from a mount target that is in
the same AZ
 It has a fixed DNS name and IP address
 An IP address is automatically assigned by mounting
using the DNS name of the file system.
Mount Target
Through mount targets, EC2 instances can access EFS even from other AZs.

[Diagram: an EC2 instance in an AZ connects through a mount target, which exposes an NFSv4 endpoint with an IP address and DNS name]
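A hedged boto3 sketch of creating a file system and one mount target per AZ (subnet and security group IDs are hypothetical):

    import boto3

    efs = boto3.client("efs")

    # Create the file system (general purpose is the default mode).
    fs = efs.create_file_system(
        CreationToken="shared-home-dirs",      # any unique string
        PerformanceMode="generalPurpose",
    )

    # One mount target per AZ where instances need access.
    for subnet in ("subnet-aaaa1111", "subnet-bbbb2222"):
        efs.create_mount_target(
            FileSystemId=fs["FileSystemId"],
            SubnetId=subnet,
            SecurityGroups=["sg-0123456789abcdef0"],  # must allow NFS (TCP 2049)
        )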
[Q] EFS Configuration

A large IT company is building a web application on AWS. It requires multiple EC2 instances that share data, and the application needs to be resilient in case of a failure.

How do you configure the solution to meet these requirements?

1) Configure an Auto Scaling group across multiple AZs by setting up an ELB target
group for an EC2 instance. Store the data in EFS and mount the target on each
instance.
2) Configure ELB target groups for EC2 instances. Store the data in EFS and mount the
target on each instance.
3) Configure an Auto Scaling group across multiple AZs by setting up an ELB target
group for an EC2 instance. Store the data in the EBS and mount the target on each
instance.
4) Configure target groups of ELBs for EC2 instances. Store the data in the EBS and
mount the target on each instance.
EFS Configuration
EFS can be accessed from multiple EC2 instances in multiple AZs.

[Diagram: EC2 instances in three AZs each connect to the shared EFS file system through a mount target in their own AZ]

Note: EFS is not charged for data access; it is charged based on the amount of data stored.
[Q] EFS performance mode

A data analytics company is using AWS to implement big data analytics workloads. Large amounts of data must be processed using a fleet of thousands of EC2 instances across multiple AZs. The data lives on a shared storage layer that can be mounted and accessed by all EC2 instances simultaneously.

Choose the storage option that maximizes throughput performance.

1) EBS Provisioned IOPS
2) Amazon S3
3) Max I/O mode EFS
4) General Purpose mode EFS
EFS Performance Mode
Choose between General Purpose mode and Max I/O mode.

General Purpose mode
 Mode for general use
 The EFS default and recommended mode
 Lowest latency
 Limits file system operations to 7,000 per second

Max I/O performance mode
 Used for large deployments that require simultaneous access from dozens to thousands of clients
 Scales to prioritize total throughput
 Latency is somewhat higher than in General Purpose mode
EFS Client
Use dedicated client software when operating on EFS from an EC2 instance:

 amazon-efs-utils (the EFS mount helper)
 Linux NFSv4 client
Provisioned throughput
Depending on the use case, provisioned throughput is also available

Provision
Burst Throughput
Throughput

 A method to perform a burst of credits at


peak times to improve temporary  Consistent pre-set throughput method
performance  Controlled by API / AWS CLI / Management
 Limited to maximum throughput and burst Console
times  Can reduce throughput performance only
 Increased throughput performance requires once a day
increased storage capacity
[Q] The use of EFS IA

A large IT company is building a web application on AWS. The application has multiple EC2
instances deployed in multiple AZs that store data in shared storage. This data is a file
that is used for internal directory management and is only controlled by the EC2 instances.
The files are expected to be used frequently at first, but then accessed less frequently.

What is the most cost-effective solution?

1) Use EFS life cycle management.


2) Use EFS storage optimization.
3) Use S3 life cycle management.
4) Use EBS Life Cycle Management.
The use of EFS IA
Cost savings by storing infrequently accessed data in IA storage

 Enable EFS lifecycle management of the file system by selecting the


lifecycle policy that matches your needs
 Amazon EFS Infrequent Access (EFS IA) is a storage class that
provides cost-optimized price/performance for files that are not
accessed every day
 Price up to 92% discount: $0.025 (GB per month)

Reference: https://aws.amazon.com/jp/efs/features/infrequent-access/
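A hedged boto3 sketch of enabling lifecycle management so files move to EFS IA after 30 days without access (the file system ID is hypothetical):

    import boto3

    efs = boto3.client("efs")

    # Transition files not accessed for 30 days to the EFS IA storage class.
    efs.put_lifecycle_configuration(
        FileSystemId="fs-0123456789abcdef0",
        LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
    )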
The Scope of API Gateway
What is the API Gateway?
A service that can call functions and data of other services by
request and response through API

[Diagram: the Internet sends request communication to an API and receives response communication; the API fronts internal and external systems, applications, and service assets]
The Scope of the API Gateway questions
The results of analyzing the range of questions from 1625 are as
follows
Selecting API Gateway  Based on the scenario, you will be asked to select API Gateway to achieve a result.

API Gateway Features  You will be asked about features such as the API types offered by API Gateway.

API Gateway cost  You will be asked about the factors that generate charges in API Gateway.

API Gateway Configuration  You will be asked how to configure API Gateway with other AWS services.

API Gateway Authentication Method  You will be asked about authentication methods, such as setting permissions on API Gateway.
The Scope of the API Gateway questions
The results of analyzing the range of questions from 1625 are as
follows
Cache usage  Based on the scenario, you will be asked how to configure the caching features, TTL, etc. used to improve API Gateway performance.

Throttling  Based on the scenario, you will be asked to choose a method of performance tuning using throttling.
[Q] Select API Gateway

As a solution architect, you are building a mobile application on AWS. The mobile
application fetches and uses data from several application services to input data into the
user interface. The implementation requires an architectural configuration that separates
the client interface from the federated application.

Which AWS service can meet this requirement?

1) AWS Lambda
2) AWS Device Farm
3) API Gateway
4) AWS Transit Gateway
Use case
Integrate with external applications using the API Gateway as an
entry point

[Diagram: web apps call an API Gateway endpoint, which uses a cache and invokes Lambda and other AWS services]
[Q] API Gateway Features

New web applications with AWS are developed based on a microservices architecture. As a
solution architect, you decide to use the API Gateway to flexibly link applications with
various functions.

Select a reason why you should use the API Gateway when building a microservice.
(Select two.)

1) The RESTful API is available.
2) The spiral API is available.
3) Implementation patterns are provided as Blueprints.
4) API Gateway is charged according to the amount of provisioning.
5) It is charged according to API calls and the amount of data transferred.
API Gateway
API Gateway is a fully managed service for creating and managing APIs.

API Creation and Management
• Create, deploy, and manage RESTful APIs
• Create, deploy, and manage WebSocket APIs to expose AWS Lambda functions and other AWS services
• API method calls are exposed through front-end HTTP and WebSocket endpoints

Basic Performance
• Up to hundreds of thousands of simultaneous API calls can be accepted
• Back-end protection against DDoS attacks, with throttling
• Provides APIs that can be used to run workloads on EC2 / Lambda / arbitrary web applications
• Tightly integrated with Lambda
API Gateway Cost
The billing method differs depending on the type of API.

HTTP API  Charged by the number of API calls used

RESTful API  Charged only for the API calls received and the amount of data transferred

WebSocket API  Charged by the number of messages sent and received and the total connection time in minutes
[Q] API Gateway Configuration

Company A develops web applications based on a microservices architecture. The


application allows users to perform simple data processing processes by calling the API. It
receives about 1,000 requests daily as a non-functional requirement and requires an
average response time of 50 ms.

Which configuration would be the least expensive and most available architecture?

1) Create APIs by API Gateway to collaborate between microservices and use Lambda
for service back-end processing.
2) Build a website hosted by EC2 instance and then work with other EC2 instances for
back-end processing, via SQS.
3) Set the application load balancer as the target group for the Auto scaling group with
up to two instances, and distribute the traffic.
4) Create an API by API Gateway and use it to collaborate with microservices, and use
an EC2 instance for service back-end processing.
API Gateway Configuration
Make EC2-based WEB applications serverless
[Diagram: the conventional architecture, with CloudFront and S3 for static content delivery, an ELB across two public subnets, Auto Scaling EC2 instances, NAT gateways, and RDS with automatic failover in private subnets]
API Gateway Configuration
Make EC2-based WEB applications serverless

[Diagram: the serverless version replaces the ELB and EC2 instances with CloudFront, API Gateway, and Lambda in front of RDS with automatic failover; S3 still serves static content]
[Q] API Gateway authentication method

Your company operates a variety of applications. You have decided to implement API Gateway to create inter-application integration. Multiple developers and IT administrators use the API Gateway, and you need to configure the best permissions for each user.

Choose the best configuration method to implement permission management for the API Gateway.

1) Use an authentication key to set API Gateway access permissions for different users.
2) Use an IAM policy to set API Gateway access permissions for different users.
3) Use an API key to set API Gateway access permissions for different users.
4) Use access keys to set API Gateway access permissions for different users.
The API Gateway authentication method
Various API Gateway access authentication methods are available.

Resource Policy (RESTful API only)  Configure the permission or denial of actions on API Gateway resources by defining a resource policy in JSON format.

IAM Authentication  Create an IAM policy that sets API access rights, attach the policy to IAM users or IAM roles to control access to the API, and enable IAM authentication on the API methods.

Lambda Authorizer  Control access to the API on a method-by-method basis based on the authentication provider of a Lambda function.

Cognito Authorizer  Use a Cognito user pool as the authentication provider and control access to the API on a method-by-method basis.
[Q] Use cache

Your company is building a microserviced application. As a solution architect, you've


developed a new Restful API that leverages API Gateway, AWS Lambda, and Aurora
database services to tie together data processing functions. This API is very read
intensive, but the data rarely changes.

How to reduce costs while improving the performance of API processing?

1) Add a read replica to Aurora.


2) Enable API Gateway Cache
3) Apply for relaxation of the API Gateway read request limit.
4) Switch Aurora database to Aurora serverless.
[Q] Using the cache

Your company is building a microserviced application. As a solution architect, you've


developed a new Restful API using API Gateway, AWS Lambda, and Aurora database
services to collaborate on data processing functions. Controlling the cache is necessary to
improve performance and reduce the load on back-end services.

Which features do you use to control the cache?

1) Configure the throttle feature


2) Enable Burst functions
3) Use of the Time to Live (TTL) setting.
4) Enable the cache request feature.
Using the cache
The API Gateway cache can reduce the number of calls to the endpoint and shorten the latency of requests to the API.

 The default TTL value is 300 seconds
 The maximum TTL value is 3,600 seconds
 TTL=0 disables the cache

Reference: https://aws.amazon.com/jp/api-gateway/
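A hedged boto3 sketch of enabling the stage cache and setting its TTL via patch operations (the API ID, stage name, and values are hypothetical):

    import boto3

    apigw = boto3.client("apigateway")

    # Enable the stage cache and set a 600-second TTL for all methods.
    apigw.update_stage(
        restApiId="a1b2c3d4e5",       # hypothetical REST API ID
        stageName="prod",
        patchOperations=[
            {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
            {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},
            {"op": "replace", "path": "/*/*/caching/ttlInSeconds", "value": "600"},
        ],
    )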
[Q] Throttling

Your company is building a microserviced application. As a solution architect, you have deployed an API using Amazon API Gateway to tie data processing functions together. In operation, an overload of requests from one particular customer is increasing your API processing load.

Choose the best solution to this problem.

1) Configure throttling limits per client.


2) Configure server-side cache limits
3) Configure cache limits per method
4) Limit TTL on the server side.
Throttling
Protects back-end services against traffic spikes by limiting the number of requests when there are too many.

Server-side throttling limits  Limit requests across all clients. This prevents the back-end services from being overwhelmed by too many total requests.

Per-client throttling limits  Limit requests from specific clients according to a "usage plan". This is effective when there are many requests from a particular user.
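A hedged boto3 sketch of a per-client limit: create a usage plan with throttling, then attach the offending customer's API key (all IDs are hypothetical):

    import boto3

    apigw = boto3.client("apigateway")

    # Usage plan capping a single client at 100 req/s with bursts of 200.
    plan = apigw.create_usage_plan(
        name="limited-customer",
        apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
        throttle={"rateLimit": 100.0, "burstLimit": 200},
    )

    # Tie the customer's API key to the plan so the limit applies to them.
    apigw.create_usage_plan_key(
        usagePlanId=plan["id"],
        keyId="key-0123456789",   # hypothetical API key ID
        keyType="API_KEY",
    )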
The Scope of Aurora
What is Aurora?
Aurora is a high-speed, high-performance relational database
distributed in multi-AZ.

Reference: https://docs.aws.amazon.com/ja_jp/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.html
The Scope of Questions for Aurora
The results of analyzing the range of questions from 1625 are as
follows
Aurora Features  You will be asked to select Aurora as a suitable DB, and about features such as the benefits of choosing Aurora.

Read Replica  You will be asked how best to set up an Aurora read replica, and about the difference between an Aurora read replica and an RDS read replica.

Failover configuration  You will be asked how to configure tier settings for an Aurora failover configuration.

Aurora Serverless  Based on the scenario, you will be asked to choose Aurora Serverless to meet the requirements.

Global Configuration  You will be asked about the architectural configuration that allows global deployment of Aurora read replicas.
The Scope of Questions for Aurora
The results of analyzing the range of questions from 1625 are as
follows

Endpoint selection  You will be asked to select an endpoint to implement connections to the various Aurora instance types.
[Q] Aurora features

As a solution architect, you are building a database system on AWS. Your existing on-
premises database uses MySQL 5.6 to manage customer data for your business system.
Recently, the volume of data processing has increased and we have decided to build a
high-performance database on AWS. You are considering whether Amazon Aurora is the
best choice for you.

Which of the following statements about Aurora is incorrect?

1) Compatible with PostgreSQL 10.4


2) Compatible with MySQL5.7
3) Availability is 99.99%
4) Fast reading with up to five read replicas
Aurora
Aurora is a new type of distributed relational database in the cloud.

 A relational DB designed for the cloud by Amazon
 Uses distributed clusters for high performance
 Combines NoSQL-style distributed high-speed processing with RDB data manipulation
Aurora
Aurora can offer 2.5 to 5 times the performance of regular MySQL at one-tenth the price of a commercial database.

 5x performance: comparison of Aurora and MySQL with Sysbench on r3.8xlarge instances
 2.5-5x performance: comparison with TPC-C on r3.8xlarge
Aurora
Aurora can be selected as one of the database engines in RDS.
Aurora features
Compared to other RDS engine types, Aurora is capable of reading
and writing in large quantities due to its higher parallel processing
performance.

 High-speed query processing is possible due to high parallelism
 Can handle a large number of writes and reads at the same time
 Database consolidation and throughput improvement are possible
 The 5x performance is not always guaranteed, so identify the workloads where it applies
Aurora features
Compatible with MySQL / PostgreSQL, so existing tools and communities can be used.

 Compatible with MySQL 8.0
 Compatible with PostgreSQL
Aurora features
A new type of distributed, fault-tolerant and self-healing, scalable,
fully managed RDB

Resilience / Self-healing
 Two copies of the data are kept in each of three AZs, six copies in total
 Continuous backup of past data to S3
 High-speed restore is possible
 Consistent restore times at any point
 99.99% high availability and high durability

Scalability
 Seamlessly scales from 10 GB to 64 TB on the SSD data plane
 Auto Scaling can be used
 Read processing with up to 15 read replicas is possible
Aurora Use Cases
RDBs with heavy query processing should consider migrating to
Aurora

Mass data processing
 Databases with a large volume of write transactions
 Effective in cases where parallel query processing is required and the data size is large
 Database processing with a large number of connections and tables

Global operation
 Highly scalable performance and unlimited data capacity required
 Replication is easy to perform
 Globally configurable
The virtual volume of a DB cluster
Aurora consists of one DB instance and one DB cluster volume.
Copies are distributed to 3 AZ to form a cluster
[Diagram: an Aurora DB instance in one subnet with its data copies distributed across subnets in three AZs, two copies per AZ]
[Diagram: the same six copies are presented to the DB instance as a single virtual cluster volume]
[Q] Read replica

A major news media company runs a web application for news distribution on AWS. The
application runs on an Amazon EC2 instance fleet in the Auto Scaling group behind ALB.
It uses the Aurora database for the data layer. The performance of this application is

Please select a solution that can meet this requirement. (Select two.)

1) Migrate DB to Aurora multimaster configuration.


2) Change ALB to NLB.
3) Migrate DB to Aurora serverless.
4) Add an Amazon Aurora replica
5) Add CloudFront web distribution.
DB Cluster Configuration
First, look at Aurora's DB cluster configuration without the virtual volume.

[Diagram: a single Aurora Writer (master) instance in a subnet in one AZ]
DB Cluster Configuration
Aurora constitutes a DB cluster of master and read replicas together

[Diagram: a DB cluster spanning two AZs, with the Aurora Writer (master) in one private subnet and Aurora Readers (read replicas, up to 15) in another]
DB Cluster Configuration
The master and replicas are connected to through endpoints.

[Diagram: EC2 instances behind an ELB connect through endpoints to the DB cluster's Aurora Writer (master) and Aurora Readers (read replicas, up to 15)]
DB Cluster Configuration
Aurora Writer is used for the writing process.

[Diagram: write processing flows from the EC2 instances through the endpoints to the Aurora Writer (master)]
DB Cluster Configuration
Use Aurora Reader for the reading process.

[Diagram: read processing flows from the EC2 instances through the endpoints to the Aurora Readers (read replicas)]
[Q] Failover configuration

A major news media company hosts a web application for news distribution on AWS. It
uses the Aurora database for its data layer. The company currently deploys four read replicas in multiple AZs to increase read throughput and to serve as failover targets. The replicas are configured as follows:

Tier 1 (8TB)
Tier 1 (16TB)
Tier 15 (16TB)
Tier 15 (32TB)

Which tier will be promoted to master during failover?

1) Tier 1 (8TB)
2) Tier 1 (16TB)
3) Tier 15 (16TB)
4) Tier 15 ( 32TB)
Failover configuration
Read replicas are promoted to master in ascending order of tier number, then in descending order of size.

1. Amazon Aurora promotes the read replica with the highest priority (the lowest-numbered tier).
2. If two or more Aurora replicas share the same priority, the largest replica is promoted.

Reference: https://aws.amazon.com/jp/blogs/aws/additional-failover-control-for-amazon-aurora/
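A hedged boto3 sketch of setting a replica's promotion tier (the instance identifier is hypothetical; tier 0 is the highest priority):

    import boto3

    rds = boto3.client("rds")

    # Give this replica the highest failover priority (tier 0).
    rds.modify_db_instance(
        DBInstanceIdentifier="aurora-replica-1",
        PromotionTier=0,
        ApplyImmediately=True,
    )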
Perform failover
Failover to the Reader if the master fails

[Diagram: when the Aurora Writer (master) fails, the endpoint routes to an Aurora Reader, which is promoted to master]
Migration
Migration from MySQL and PostgreSQL snapshots to Aurora is
possible

[Diagram: a MySQL snapshot is restored into Aurora with MySQL compatibility]
Aurora Multimaster
Write performance is also scalable by building multiple master
databases

[Diagram: a single Aurora Writer (master) with Aurora Readers (read replicas) across three AZs]
Aurora Multimaster
Write performance is also scalable by building multiple master
databases

[Diagram: since 2017, multiple Aurora Writers (masters) can run across AZs alongside Aurora Readers (read replicas)]

 Zero downtime no matter which node fails
 Zero downtime no matter which AZ fails
 Write performance scaling
[Q] Aurora serverless

A large IT company is building a web application on AWS. The number of users of this application is expected to skyrocket, and at this stage it is not yet possible to determine how much performance is required. It may also experience severe drops in demand, so it has to handle erratic processing loads that cannot be predicted in advance.

Which database is best to meet this requirement?

1) RDS Auto Scaling Settings


2) Aurora Serverless
3) Aurora Multi-Master Configuration
4) DynamoDB to auto-scaling on demand
Aurora Serverless
Aurora Serverless is an Aurora option suitable for unpredictable application workloads.

 Runs based on the needs of the application
 Starts up and shuts down automatically
 Scales up and down automatically
[Q] Global Configuration

Company B is building a web application on AWS. With users all over the world and a high
number of global requests, performance is slowing down despite the use of read replicas in
Amazon RDS for MySQL. The basic performance of RDS seems to have reached its limit.

To remedy this problem, choose the most cost-effective and high-performance solution.

1) Create new Amazon RDS global read replicas to enable fast, low-latency local reads
in each region.
2) Migrate to an Amazon Aurora global database to enable fast, low-latency local reads
in each region.
3) Move to Amazon Aurora Serverless to enable fast, low-latency local reads in each
region.
4) Migrate to Amazon DynamoDB global tables to enable fast, low-latency local reads
in each region.
Aurora Global DB
High-performance read replicas that can be built in other regions

 Replication is performed by a storage-level replication function instead of log
transfer.
 Low-latency replication: replication generally takes less than one second, and at
most five seconds.

[Diagram: the Aurora master in the primary region replicates to an Aurora reader in a secondary region.]
[Q] Endpoint selection

Company B is building a web application on AWS. The application runs on a fleet of
Amazon EC2 instances in an Auto Scaling group behind an ALB. Amazon Aurora PostgreSQL
is used as the database. As a solution architect, you have been asked to optimize the
database workloads in the cluster. You need to direct read queries, such as reporting,
to lower-capacity instances while forwarding write operations for production traffic
to higher-capacity instances.

To achieve this requirement, which is the optimal configuration of Aurora endpoints?
(Please select two)

1) Configure a custom endpoint for write traffic.
2) Configure a custom endpoint for read requests.
3) Configure instance endpoints for write traffic.
4) Configure instance endpoints for read requests.
5) Configure the cluster endpoint for write traffic.
6) Configure the reader endpoint for read requests.
Endpoint selection
Select the endpoint according to the instances you want to reach.

Cluster endpoint (writer)
 Endpoint for the write process on the Aurora cluster
 Points to the Writer only

Reader endpoint
 Endpoint used to access the read replicas
 Points to read replicas only

Instance endpoints
 Endpoints for accessing a specific instance
 One is assigned to each separate instance

Custom endpoints
 Endpoints with free combinations of instances
 For example, combining Writer and Readers
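[Example] A minimal sketch of the custom-endpoint case with boto3 (Python); the cluster and instance identifiers are hypothetical, and the cluster (writer) endpoint needs no setup because Aurora provides it automatically.

import boto3

rds = boto3.client("rds")

# Group the low-capacity readers behind one custom endpoint so that
# reporting queries never land on the production-traffic instances.
rds.create_db_cluster_endpoint(
    DBClusterIdentifier="my-aurora-cluster",
    DBClusterEndpointIdentifier="reporting-readers",
    EndpointType="READER",
    StaticMembers=["reader-small-1", "reader-small-2"],
)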
The scope of
ElastiCache
What is ElastiCache?
ElastiCache is an in-memory database that keeps a cache in memory and performs
high-speed processing.

[Diagram: an in-memory DB processes data faster than a disk-based DB.]

What is ElastiCache?
ElastiCache is an in-memory database that keeps a cache in memory and performs
high-speed processing.

[Diagram: ElastiCache holds a cache in front of RDS; on subsequent accesses, data is fetched from the cache.]
The scope of ElastiCache questions
The results of analyzing the range of questions from the 1625 questions are as follows:

Selecting ElastiCache
 Based on the scenario, you will be asked to select an ElastiCache option that
matches the database requirements.

ElastiCache types
 You will be asked which type of ElastiCache, Redis or Memcached, fits a specific
purpose.
 You will be asked about the differences between Redis and Memcached and their
characteristics.

ElastiCache Configuration
 Based on the scenario, you will be asked about solutions configured with
ElastiCache.

ElastiCache Security
 You will be asked how to configure security settings in ElastiCache Redis.
[Q] Select ElastiCache

You are in charge of game development at a game company, building a database to be
used in the game under development. The game needs to make items appear in response
to recorded user behavior data, which requires real-time, high-speed processing of
the user behavior data.

Please select a service that meets this requirement.

1) ElastiCache
2) Redshift
3) Aurora
4) RDS
ElastiCache
A service that makes it easy to build, manage, and scale a distributed in-memory
cache DB

 Launch a cache cluster in a few clicks
 Fully managed: monitoring, automatic fault detection, recovery, expansion,
patching, and backup provide high availability
 Available with two widely used engines: Memcached and Redis
Use case
Consider using cache when you want to speed up data access

[Use Case]
 Session management
 IoT processing and stream analysis
 Metadata storage
 Social media data processing/analysis
 Pub/Sub processing
 DB cache processing
Use case
Consider using cache when you want to speed up data access

[Use Case]
 User matching processes
 Recommendation processing
 Fast display of image data
 Rankings using user data in a game event
[Q] ElastiCache type

As a systems developer for a game company, you are building a database to be used in a
game under development. The database needs to use a multi-threaded in-memory cache
layer to improve the performance of repeated queries.

Choose which service should be used for this database cache.

1) Amazon DynamoDB DAX
2) Amazon RDS MySQL
3) Amazon ElastiCache Memcached
4) Amazon ElastiCache Redis
[Q] ElastiCache type

As a solution architect, you are building a system for fast data processing. You need
real-time processing of session data, and you have decided that ElastiCache is the
best choice for accelerating this data, but you have to compare whether to choose
Memcached or Redis.

Select Memcached's features in ElastiCache. (Select two.)

1) It is an in-memory cache DB that runs on a single thread.
2) It provides pub/sub functionality.
3) It has no snapshot feature.
4) Automatic failover is possible.
5) Persistence of the keystore is not necessary.
ElastiCache type
ElastiCache is available with both open source Redis and Memcached.

Redis
 In-memory cache DB that can read/write data fast
 All data operations run on a single thread
 Has a snapshot feature
 Can persist data
 Supports failover and restoration

Memcached
 In-memory cache DB that can read/write values fast
 Works multi-threaded
 No snapshot feature
 Unable to persist data
 Failover and restoration are not possible
ElastiCache type
Memcached is often used for simple use cases like data processing tasks, while Redis
is used for more demanding database needs.

Redis
 Complex data types are needed
 The in-memory data sets need to be sorted or ranked
 Read load needs to be replicated to read replicas
 Pub/sub functionality is needed
 Automatic failover is necessary
 Persistence of the keystore is necessary
 Backup and restore capability is necessary
 Support for multiple databases is needed

Memcached
 Simple data types are needed
 There is a need to run large nodes with multiple cores or threads
 There is a need for scale-out and scale-in capabilities to add or remove nodes as
demand increases or decreases
 There is a need to cache objects such as database results
 Keystore persistence is not necessary
 No need for backup and restoration features
 Multiple databases are not available
Use case
The pub/sub feature of ElastiCache Redis can be used for chat applications.

[Diagram: a chat app server publishes messages through an ElastiCache pub/sub server to subscribers.]
ElastiCache with Redis
In addition, you can take advantage of Lua scripts, location queries, and the
pub/sub model.

Lua Script
 A scripting language with features such as high portability and fast execution
speed

Location Query
 Location information such as longitude and latitude can be queried and processed
 Search distance and search range can be specified

pub/sub model
 The pub/sub model separates the "event initiator" from the "event processor"
 Leveraged in message processing and event processing
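[Example] A minimal sketch of the pub/sub model against an ElastiCache Redis endpoint, using the redis-py library in Python; the host name and channel are hypothetical.

import redis

r = redis.Redis(host="my-redis.xxxxxx.apne1.cache.amazonaws.com", port=6379)

# Subscriber side: register interest in a chat channel.
pubsub = r.pubsub()
pubsub.subscribe("chat-room-1")

# Publisher side: any app server pushes a message to all subscribers.
r.publish("chat-room-1", "Hello from the chat app server")

# The subscriber receives the message without knowing who sent it,
# separating the "event initiator" from the "event processor".
for message in pubsub.listen():
    if message["type"] == "message":
        print(message["data"])
        break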
[Q] ElastiCache Configuration

You are performing a load test on an application hosted on AWS. While testing an Amazon
RDS MySQL DB instance, you find that there are cases where the CPU usage reaches
100% and the application becomes unresponsive. The application seems to be doing a lot of
reading.

Which configuration performs faster processing for high read-load requests?

1) Utilize queuing with SQS to reduce the concentration of access to RDS.
2) Deploy Auto Scaling on RDS instances to increase scalability under load.
3) Put DynamoDB (a DAX cluster) in front of the RDS instance to introduce caching.
4) Put ElastiCache in front of the RDS instance to introduce caching.
ElastiCache Configuration
The standard configuration method is to identify the data to be cached and use
ElastiCache in conjunction with RDS.

Pattern without a cache
 Increased DB access load decreases throughput and availability

Pattern using an in-memory cache
 Increase availability by placing frequently accessed data in the cache

[Diagram: without a cache, EC2 reads hit RDS directly; with ElastiCache in front of RDS, repeated reads are served from the cache.]
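[Example] A minimal sketch of this cache-aside (lazy loading) pattern in Python with redis-py, assuming a hypothetical endpoint and a stand-in query_rds() function in place of a real RDS SELECT.

import json
import redis

cache = redis.Redis(host="my-redis.xxxxxx.apne1.cache.amazonaws.com", port=6379)

def query_rds(user_id: int) -> dict:
    # Placeholder for a real query against the RDS MySQL instance.
    return {"id": user_id, "name": "example"}

def get_user(user_id: int) -> dict:
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)      # cache hit: RDS is not touched
    user = query_rds(user_id)          # cache miss: read from RDS once
    cache.set(key, json.dumps(user), ex=300)  # keep it for 5 minutes
    return user

Frequently read data is then served from memory, so the read load that drove the RDS CPU to 100% never reaches the database.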


[Q] ElastiCache Security

As a solution architect, you are building a data acceleration mechanism. Session data
processing requires real-time processing, and the caching uses an ElastiCache cluster.
To comply with the company's security policy, the data used needs to be protected.

Choose a solution for data protection.

1) Enable encryption during data transfer with ElastiCache Redis.
2) Issue the Redis AUTH command in ElastiCache Redis.
3) Enable encryption of storage with ElastiCache Redis.
4) Enable encryption of storage with ElastiCache Memcached.
5) Enable encryption during data transfer with ElastiCache Memcached.
ElastiCache Security
ElastiCache Redis can encrypt data transfers, encrypt data storage, and authenticate
with Redis AUTH.

Reference: https://docs.aws.amazon.com/ja_jp/AmazonElastiCache/latest/red-ug/encryption.html
ElastiCache Security
ElastiCache Redis can encrypt data transfers, encrypt data storage, and authenticate
with Redis AUTH.

Encryption of data in storage
 On-disk data is encrypted using AWS KMS during the execution of data
synchronization and backup operations.
 The encryption targets are the disk being swapped and the backups in Amazon S3.

Encryption of data in communication
 Enforces encryption on data moving from one location to another (e.g., between
nodes in a cluster, or between clusters and applications).
 It can only be enabled when the Redis replication group is created.

Redis AUTH
 Clients must authenticate with a Redis authentication token before they can
execute commands.
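[Example] A minimal sketch of a client connection that uses both protections, with redis-py in Python; the endpoint and token are hypothetical.

import redis

r = redis.Redis(
    host="master.my-redis.xxxxxx.apne1.cache.amazonaws.com",
    port=6379,
    ssl=True,                          # in-transit encryption (TLS)
    password="my-redis-auth-token",    # Redis AUTH token
)
r.set("session:123", "data")
print(r.get("session:123"))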
The scope of Inter-site connection
Use cases for inter-site connections
Use Direct Connect (a leased line) or a site-to-site VPN to connect an on-premises
environment to the AWS cloud.

[Diagram: a gateway in the AWS cloud connects over a 1G or 10G line through a Direct Connect device at a Direct Connect location to in-house equipment and a gateway in the on-premises environment.]
The scope of the question
The results of analyzing the range of questions from the 1625 questions are as follows:

Selecting a Connection Method
 You will be asked to select the services and features that connect the AWS cloud
with your on-premises environment.

Direct Connect Configuration
 You will be asked how to configure the various gateways and interfaces used in a
Direct Connect configuration.

Connection between regions
 You will be asked how to configure a Direct Connect gateway when connecting
between regions.

Site-to-site VPN connection
 You will be asked how to configure gateways when setting up a site-to-site VPN.

VPN CloudHub
 You will be asked about the use of VPN CloudHub as a way of combining multiple
VPNs.

Direct Connect Redundancy
 You will be asked how to configure Direct Connect to ensure redundancy.
[Q] Select connection method

As a solution architect, you're responsible for connecting your company's on-premises
infrastructure to the AWS cloud network.

Please select a method to meet this requirement. (Please select two)

1) Direct Connect
2) VPC Peering
3) VPN
4) Snowball
5) AWS Storage Gateway
On-premises connection with a VPC

 VPN connection
 Leased line connection (Direct Connect)
Direct Connect and VPN
VPNs are cheaper and quicker to set up, but Direct Connect offers better reliability
and quality.

Cost
 VPN: inexpensive, best-effort lines are available
 Direct Connect: more expensive than VPN because it requires a carrier's
leased-line service contract

Lead time
 VPN: if you have a gateway that supports VPN in your on-premises environment, you
can set it up immediately
 Direct Connect: takes several weeks because it requires physical installation

Bandwidth
 VPN: bandwidth is limited due to encryption overhead
 Direct Connect: 1G/10Gbps per port

Quality
 VPN: affected by network conditions on the Internet
 Direct Connect: higher quality is guaranteed by the carrier

Fault isolation
 VPN: since it depends on the Internet, verification is difficult outside the range
held by the company
 Direct Connect: since the route is physically secured, verification is relatively
easy
[Q] Direct Connect Configuration

The company you work for is currently migrating its infrastructure and applications to the
AWS cloud. As a solution architect, you have implemented a connection configuration that
uses Direct Connect to connect to your on-premises environment.

Select how Direct Connect configuration should be implemented. (Select two.)

1) Install the customer gateway device in your on-premises environment and connect to
the Direct Connect device.
2) Install a virtual private gateway on the Amazon VPC side to connect to a Direct
Connect device.
3) Install a virtual private gateway in your on-premises environment to connect to Direct
Connect devices.
4) Install a customer gateway device on the Amazon VPC side and connect to the Direct
Connect device.
5) Set up a private virtual interface in your on-premises environment to connect to your
Direct Connect device.
Direct Connect Configuration
Connect a leased line to the AWS environment by physically connecting your
on-premises environment to a Direct Connect location.

[Diagram: a virtual private gateway on the VPC side connects through a private VIF over a 1G or 10G line to the Direct Connect devices at the Direct Connect location, which connect to the customer gateway devices and in-house equipment in the on-premises environment.]
[Q] Inter-region connections

A global consulting firm with offices around the world is building an AWS-based document
sharing system and plans to share country knowledge. To implement this mechanism, you
need to implement high-bandwidth, low-latency connections to multiple VPCs in multiple
regions within the same account. Each VPC has a unique CIDR range.

Which is the best solution design that can meet this requirement? (Select two)

1) Create a Direct Connect gateway and create a customer virtual interface for each
region.
2) Configure a Direct Connect connection from the office to the AWS region.
3) Configure a VPN connection from the office to the AWS region.
4) Implementing a Direct Connect connection to each AWS region
5) Create a Direct Connect gateway and create a private virtual interface to each region.
Inter-regional Connections
A Direct Connect gateway connects to multiple VPCs in multiple regions belonging to
the same account.

[Diagram: one Direct Connect gateway connects to the virtual private gateways of VPCs in different regions.]

Reference: https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-gateways-intro.html
[Q] Site-to-site VPN connection

A retail company is planning to migrate to the AWS cloud using the Tokyo region. To
do so, you need a configuration that connects your office to the AWS cloud. As a
solution architect, you have set up an AWS managed IPsec VPN connection between your
remote on-premises network and a VPC over the Internet.

Which of the following represents the correct configuration of an IPsec VPN connection?

1) Create a virtual private gateway on the AWS side of the VPN and create a customer
gateway on the on-premises side of the VPN.
2) Create a virtual private gateway on the on-premises side of the VPN and create a
customer gateway on the AWS side of the VPN.
3) Create a virtual customer gateway on the AWS side of the VPN and create a customer
gateway on the on-premises side of the VPN.
4) Create a virtual customer gateway on the on-premises side of the VPN and create a
customer gateway on the AWS side of the VPN.
Site-to-site VPN connection
Connect the virtual private gateway on the AWS side with a
customer gateway device in an on-premises environment.

Reference: https://docs.aws.amazon.com/ja_jp/vpn/latest/s2svpn/how_it_works.html
Site-to-site VPN connection
The AWS-side virtual private gateway can also be a Transit
Gateway

Reference: https://docs.aws.amazon.com/ja_jp/vpn/latest/s2svpn/how_it_works.html
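[Example] A minimal sketch of creating the two gateways and the IPsec connection with boto3 (Python); the public IP, ASN, and VPC ID are hypothetical.

import boto3

ec2 = boto3.client("ec2")

# Customer gateway: represents the on-premises gateway device.
cgw = ec2.create_customer_gateway(
    BgpAsn=65000, PublicIp="203.0.113.10", Type="ipsec.1"
)["CustomerGateway"]

# Virtual private gateway on the AWS side, attached to the VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(
    VpnGatewayId=vgw["VpnGatewayId"], VpcId="vpc-0123456789abcdef0"
)

# The site-to-site VPN connection ties the two gateways together.
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Options={"StaticRoutesOnly": True},
)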
[Q] VPN CloudHub

A media company uses Direct Connect in the Tokyo region to connect its office to the
AWS cloud. Its branches in Singapore and Sydney use separate regions and connect to
their VPCs using site-to-site VPN connections. The company is looking for a solution
to help its branches send and receive data to and from each other and the head office.

Choose an AWS service that can meet this requirement.

1) VPN CloudHub
2) VPC Customer Gateway
3) VPC Endpoints
4) AWS Transit Gateway
VPN CloudHub
Multiple site-to-site VPN connections can be combined to provide
secure site-to-site communication.

Reference: https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-vpn-cloudhub.html
[Q] Direct Connect Redundancy

Your company uses Direct Connect to connect your office to the AWS cloud. However,
the problem is that only one Direct Connect connection is configured and redundancy is
not ensured. As a solution architect, you have been asked to increase the redundancy of
the Direct Connect connection. A cost-optimized configuration is required.

Which solution can meet this requirement?

1) Implement a redundant configuration with dual Direct Connect settings.


2) Use a site-to-site VPN as a backup connection.
3) Use a site-to-site VPN as the primary connection.
4) Use an output-only Internet gateway as a backup connection.
Direct Connect Redundancy
VPN connection redundancy is implemented by using eBGP for peer connections between
sites, and iBGP for peer connections within the same site.

[Diagram: the virtual private gateway connects through a private VIF and the Direct Connect devices at the Direct Connect location to the customer gateway devices and in-house equipment in the on-premises environment.]

A virtual private gateway is used to set up a VPN that can be used in conjunction
with Direct Connect to encrypt all data passing through a Direct Connect link.
• Enter the public IP address of the customer gateway device.
• Add a network prefix to advertise, if necessary.
The scope of
CloudFormation
What is CloudFormation?
An environment automation service that deploys AWS infrastructure configurations
based on templates.
CloudFormation Question Scope
The results of analyzing the range of questions from the 1625 questions are as follows:

Selecting CloudFormation
 You will be asked to select CloudFormation to meet requirements that you want to
implement on AWS.

CloudFormation Features
 You will be asked how to leverage stack sets and deploy CloudFormation across
multiple accounts.

CloudFormation Template snippets
 You will be asked about the elements of CloudFormation template snippets.

CloudFormation template description
 You will be asked how to describe a CloudFormation template.
[Q] Select CloudFormation

Your company has created guidelines for standardizing infrastructure configuration on
AWS. As a solution architect, you need a mechanism to share deployments of EC2
instances, VPCs, and other configurations that follow the guidelines whenever AWS
resources are used.
Which is the right technology choice for this requirement?

1) CloudFormation
2) AWS Elastic Beanstalk
3) AWS Systems Manager
4) CodeDeploy
CloudFormation
CloudFormation can be leveraged when you want to deploy the
infrastructure environment on AWS accurately and efficiently

Use case
 You want to streamline to launch AWS resources
 You want to standardize the infrastructure used in development,
testing, and production environments
 You want to use exactly the same resources and provisioning settings
every time
 You want to manage the environment configuration like software
CloudFormation
An automated environment configuration service that describes all infrastructure
resources in AWS as a template and deploys them.

 Provisioned resources can be modified and removed through templates
 Templates are written in JSON/YAML
 You can deploy AWS resources cross-region and implement cross-account management
 You can use custom resources if you want to utilize resources and features that
are not directly supported by CloudFormation
How to use CloudFormation
A template creates a stack, which is a collection of AWS resources.

[Template]
 Defines resources and parameters in JSON/YAML

[CloudFormation]
 Uses templates to create, modify, and delete stacks; at runtime it detects errors,
rolls back, and automatically determines inter-resource dependencies

[Stack]
 A set of AWS resources; deleting the stack also deletes the associated resources
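[Example] A minimal sketch of turning a template into a stack with boto3 (Python); the stack name and the one-resource template are hypothetical.

import boto3

TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example stack
Resources:
  FirstVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
"""

cfn = boto3.client("cloudformation")

# Create the stack and wait until every resource has been provisioned.
cfn.create_stack(StackName="guideline-vpc", TemplateBody=TEMPLATE)
cfn.get_waiter("stack_create_complete").wait(StackName="guideline-vpc")

# Deleting the stack would also delete the resources it created:
# cfn.delete_stack(StackName="guideline-vpc")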
[Q] CloudFormation features

Company B has created a CloudFormation template to standardize its infrastructure
configuration. As a solution architect, you have changed the infrastructure
configuration after deploying the template, and you need to review the changes.

Choose a CloudFormation function that meets this requirement.

1) Using the AWS CloudFormation stack set to review the changes.


2) Using the AWS CloudFormation template, review the changes.
3) Using AWS CloudFormation change sets to review the changes.
4) Using AWS CloudFormation drift to identify the changes.
CloudFormation features
Functions for managing the created stack are provided.

Change set
 When updating a stack, a change set summarizes the impact of resource changes so
they can be checked before deploying.
 There are two ways to change a stack: direct update, and update using a change set.

Drift
 Detects differences from the original template when the AWS resources deployed by
the template have changed.

Stack Set
 Creates stacks across multiple AWS accounts and multiple regions.

Import / Export
 Exports output values from one template so that another template can import and
reference them, enabling coordinated infrastructure deployment.
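[Example] A minimal sketch of the change-set flow with boto3 (Python); the stack and change-set names are hypothetical, and UPDATED_TEMPLATE stands for a revised YAML template that is not shown.

import boto3

cfn = boto3.client("cloudformation")

# Summarize the impact of the revised template without touching anything.
cfn.create_change_set(
    StackName="guideline-vpc",
    ChangeSetName="widen-cidr",
    TemplateBody=UPDATED_TEMPLATE,   # assumed revised template
    ChangeSetType="UPDATE",
)
cfn.get_waiter("change_set_create_complete").wait(
    StackName="guideline-vpc", ChangeSetName="widen-cidr"
)

# Review which resources would be added, modified, or removed.
changes = cfn.describe_change_set(
    StackName="guideline-vpc", ChangeSetName="widen-cidr"
)["Changes"]
for change in changes:
    rc = change["ResourceChange"]
    print(rc["Action"], rc["LogicalResourceId"])

# Apply the reviewed changes.
cfn.execute_change_set(StackName="guideline-vpc", ChangeSetName="widen-cidr")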
CloudFormation features
CloudFormation Designer allows you to create a template visually
[Q] CloudFormation template Snippets

As a solution architect, you have created a CloudFormation template and are
responsible for standardizing the configuration of the environment. Some of the
settings need to be output so that they can be referenced by other templates when
creating the AWS stack.

Which template section needs to be set?

1) Value
2) Outputs
3) Properties
4) Mappings
[Q] CloudFormation template description
As a solution architect, you are responsible for creating CloudFormation templates and
standardizing the configuration content.

Above is omitted....
Mappings:
  RegionMap:
    ap-northeast-1:
      hvm: "ami-0792756bc9edf3e63"
    ap-southeast-1:
      hvm: "ami-0162da29310cc18f6"
Description: Create EC2 Instance
Resources:
  MyEC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !FindInMap [RegionMap, !Ref 'AWS::Region', hvm]
      InstanceType: !Ref InstanceType
      Tags:
        - Key: Name
          Value: myInstance

What will this CloudFormation template look like? (Select three)


Template
The template sections, with what each one describes:

AWSTemplateFormatVersion: '2010-09-09'  # Template version
Description:   # Additional information about the template
Metadata:
Parameters:    # Keypairs, user names, etc. required as parameters at runtime
Mappings:      # Mappings of keys and values for specifying conditional parameter values
Conditions:    # Condition names and conditional content at the time of resource creation
Transform:     # The SAM version of a serverless app
Resources:     # The actual resources to be generated in the stack and their configuration properties
  FirstVPC:
    Type: AWS::EC2::VPC          # Settings such as resource names and type properties
    Properties:
      CidrBlock: 10.0.0.0/16
      Tags:
        - Key: Name
          Value: FirstVPC
  AttachGateway:
    Type: AWS::EC2::VPCGatewayAttachment
    DependsOn:                   # Dependencies between resources
    Properties:
      VpcId: !Ref FirstVPC       # Built-in functions: !Ref, !FindInMap, etc.
      InternetGatewayId: !Ref InternetGateway
Outputs:       # Values and destinations to be output after building the stack
The scope of ECS
What is Amazon ECS?
A service for building container applications with Docker on AWS

Reference: https://aws.amazon.com/jp/blogs/containers/developers-guide-to-using-amazon-efs-with-amazon-ecs-and-aws-fargate-part-1/
The scope of ECS questions
The results of analyzing the range of questions from the 1625 questions are as follows:

ECS Selection
 You will be asked to select a service for using Docker.

Launch type
 Based on the scenario, you will be asked to choose the appropriate launch type
when configuring ECS.

ECS Cost
 You will be asked which elements affect cost when using ECS.

Task definition
 You will be asked how to use task definitions when implementing Amazon ECS.

ECS Authorization Settings
 You will be asked how to set permissions for ECS tasks to use other AWS resources.

ALB and ECS Configuration
 You will be asked how to set up ECS containers when configuring them with an ALB.

ECS Configuration
 You will be asked about basic ECS configuration when running multiple jobs using
ECS.
[Q] ECS selection

Company X is planning to build an application using AWS. Company X has a CI/CD
environment that uses Docker to build Docker applications. Company X does not use
Docker's open-source orchestration mechanisms and will not use them in the future.

Choose a solution that meets this requirement.

1) Amazon ECS
2) Amazon EKS
3) Amazon ECR
4) Amazon Fargate
Amazon Container Services
There are four related services around ECS.

Registry
 Where the container images to be run by the container engine are stored: Amazon ECR

Control plane
 Services to manage containers: Amazon ECS, Amazon EKS

Data plane
 The environment in which the containers run: AWS Fargate
Elastic Container Service (ECS)
A scalable, high-performance container orchestration service that supports Docker
containers

 Containerized apps can be easily run and scaled in AWS
 Deploying and managing containers with Fargate requires no server provisioning or
management
 All kinds of containerized applications can be created easily
 Launches in seconds, whether there are dozens or tens of thousands of Docker
containers
 ELB / VPC / IAM / ECR / CloudWatch / CloudFormation / CloudTrail and other AWS
services can be used with ECS
 In awsvpc network mode an ENI is automatically assigned per task; a security group
can be configured per task; private IP communication with other resources in the VPC
is possible
 There are two launch types: the Fargate launch type and the EC2 launch type
Amazon Elastic Kubernetes Service (EKS)
Services to deploy, manage and scale containerized applications
using open source Kubernetes

 Kubernetes is an open source platform designed for automated


deployment, scaling, and operational automation of app containers
 Ability to use existing plugins and tools created by Kubernetes partners
and the community
 As it is a managed service, there is no need to manage the control plane.
 Automatically set up an encrypted, secure communication channel
between worker nodes and the managed control plane
 Full compatibility with applications managed in a Kubernetes environment
Elastic Container Registry (ECR)
A fully managed registry service for easily storing, managing, and deploying Docker
container images

 Integrated with ECS and the Docker CLI to simplify the workflow from development
to production
 Strong authentication and access management via IAM
 Available from outside AWS if you can access the endpoint
 Automatic cleanup of images by lifecycle policy
 Automatic assignment of an ENI per task in awsvpc network mode, and security
groups can be configured for each task
[Q] Launch type

You are a solution architect building a new application composed of multiple
components in Docker containers. You need a way to run the containers without having
to select instance types, manage cluster scheduling, and so on.

How should the Docker containers be configured? (Please select two)

1) Use the EC2 launch type on Amazon EKS
2) Use the EC2 launch type on Amazon ECS
3) Configure the Docker application with AWS Elastic Beanstalk
4) Use the Fargate launch type on Amazon ECS
5) Place the container images in Amazon ECR
Launch type
Fargate is an ECS-compatible computing engine that runs containers without the
management of servers or clusters.

EC2 launch type
 Launches EC2 instances for ECS
 Capable of detailed server-level control over the infrastructure running the
container applications
 Manages server clusters and schedules container placement on them
 A wide range of customization options for the server cluster are available

Fargate launch type
 A dedicated computing engine that can be used with ECS and EKS
 No need to manage a cluster of EC2 instances
 No need to select instance types, manage cluster scheduling, or optimize cluster
utilization
 Define the app requirements for CPU, memory, etc., and Fargate manages the
necessary scaling and infrastructure
 Launches tens of thousands of containers in seconds
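[Example] A minimal sketch of starting a task with the Fargate launch type using boto3 (Python); the cluster, task definition, subnet, and security group identifiers are hypothetical.

import boto3

ecs = boto3.client("ecs")

# No instance type is chosen and no cluster scheduling is managed;
# only the task definition (CPU/memory) and VPC networking are declared.
ecs.run_task(
    cluster="app-cluster",
    launchType="FARGATE",
    taskDefinition="web-app:1",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)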
[Q] ECS Cost

As a solution architect, you're building a new application composed of multiple components


in a Docker container, and you're considering whether to use the Fargate or EC2 launch
type with Amazon ECS.

Choose the correct content as the billing for Amazon ECS.

1) Fargate launch types are charged based on the number of CPUs and memory
resources.
2) EC2 launch types are charged based on the number of CPUs and memory resources.
3) Both the Fargate launch type and the EC2 launch type are charged based on the
number of CPUs and memory resources.
4) The EC2 launch type is charged based on the EC2 instance and EBS volume used.
ECS Cost
ECS incurs EC2 instance usage fees or Fargate usage fees.

EC2 launch type charges
 Only the AWS resources used (EC2 instances, EBS volumes, etc.) are charged.

Fargate launch type charges
 The vCPU and memory resources required for the containerized applications are
charged.
 A one-minute minimum fee applies.

ECR charges for the amount of data stored in the repository and transferred to the
Internet.
[Q] Task definition

Your company is building a Docker application. You want to grant additional
permissions to Docker application containers on an ECS cluster that you have already
deployed, and you want to add new tasks.

Which Amazon ECS configuration can meet this requirement?

1) Define a separate task role for the container with the same task definition.
2) Set the IAM role on an EC2 instance set to ECS.
3) Launch another container cluster with ECS and define the task.
4) Create a separate task definition for a container for different task roles.
Task Definition
When running a Docker container in ECS, you define a task that runs the container.

Task Definition
 Determines the resource usage of the task, information about the Docker containers
to run in the task, and so on.

Reference: https://docs.aws.amazon.com/ja_jp/AmazonECS/latest/developerguide/Welcome.html
[Q] ECS Authorization Settings

As a solutions architect, you are implementing an application utilizing the EC2 launch type
of Amazon ECS. This application requires permissions to write data to Amazon DynamoDB.

How do you assign DynamoDB permissions to certain ECS tasks only?

1) Create an IAM policy with access permissions to DynamoDB and attach it to the
container instance.
2) Create an IAM policy with access permissions to DynamoDB, assign it to an IAM role
and set that IAM role to ECS.
3) Create an IAM policy with access permissions to DynamoDB and assign it to the ECR
cluster.
4) Create an IAM policy with access permissions to DynamoDB and assign it to a task
using taskRoleArn parameter.
ECS Authorization Settings
The permissions necessary for task execution are assigned to each task by an IAM
policy.

IAM Policy
 Assign an IAM policy to tasks (via an IAM role) to grant access to resources on a
per-task basis.

Reference: https://docs.aws.amazon.com/ja_jp/AmazonECS/latest/developerguide/Welcome.html
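[Example] A minimal sketch of attaching a task role through the taskRoleArn parameter of a task definition, using boto3 (Python); the family name, role ARN, and image URI are hypothetical.

import boto3

ecs = boto3.client("ecs")

# The role named in taskRoleArn carries the IAM policy that allows
# DynamoDB writes, and it applies only to containers of this task.
ecs.register_task_definition(
    family="dynamo-writer",
    taskRoleArn="arn:aws:iam::123456789012:role/ecsDynamoWriteRole",
    networkMode="awsvpc",
    requiresCompatibilities=["EC2"],
    containerDefinitions=[
        {
            "name": "writer",
            "image": "123456789012.dkr.ecr.ap-northeast-1.amazonaws.com/writer:latest",
            "cpu": 256,
            "memory": 512,
            "essential": True,
        }
    ],
)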
[Q] ALB and ECS configuration

You are a solution architect and you are building a web application using Docker. You have
implemented an application that executes tasks separated into multiple container clusters
in a task definition, and you need to control traffic to them by a single ALB.

Which features help you to achieve this with minimal effort? (Select two)

1) ALB+ dynamic port mapping


2) ALB+ path routing
3) CLB+ Dynamic Port Mapping
4) NLB+ Dynamic Port Mapping
5) NLB+ path routing
ALB and ECS configuration
When configuring an ALB, path routing and dynamic port mapping can be used to
integrate with ECS.

Path routing configuration
 With path-based routing in ALB, you can route to a target group according to the
URL
 You can implement path routing to containers by specifying containers in the ECS
path routing settings

Dynamic port mapping
 Multiple ports can be registered as targets for the ALB when dividing EC2
instances launched by ECS into target groups
 Register a dynamic port number in the ECS task definition and distribute traffic
according to the port number
[Q] ECS Configuration
Your company is building a Docker application. You want to grant additional
permissions to Docker application containers on an ECS cluster that you have already
deployed, and you want to add new tasks. These will be used to handle both very
important data processing jobs and batch jobs that can be performed at any time.

Which of the following is the most cost-effective option to meet this requirement?

1) Set up a reserved EC2 instance for critical data processing jobs and a spot EC2
instance for non-critical jobs.
2) Separate processing containers to run by assigning separate task definitions for
important data processing jobs and non-critical jobs.
3) In conjunction with Amazon SQS, set a priority queue for important data processing
jobs, and set a standard queue for less important jobs.
4) In conjunction with Lambda, set priority jobs for important data processing jobs and
set standard jobs for less important jobs.
ECS Configuration
Define the job content to be executed in ECS task definitions and configure multiple
task processes.

[Diagram: one task definition defines the mission-critical batch job, and a separate task definition defines the non-mission-critical batch job.]

Reference: https://docs.aws.amazon.com/ja_jp/AmazonECS/latest/developerguide/launch_types.html
The scope of Redshift
What is Redshift?
A managed service that allows you to build a data warehouse on AWS

[Diagram: Redshift creates the data warehouse; BI tools work with the data in Redshift.]

Reference: https://aws.amazon.com/jp/blogs/database/using-amazon-redshift-for-fast-analytical-reports/
The scope of Redshift questions
The results of analyzing the range of questions from the 1625 questions are as follows:

Selecting Redshift
 Based on the scenario, you will be asked to choose Redshift as a suitable database.

Redshift Configuration
 You will be asked how to configure Redshift with respect to AZs and regions.

Traffic Control
 You will be asked how to enable control and monitoring of traffic via the VPC.

Encryption
 You will be asked about encryption methods in Redshift.

WLM
 You will be asked how to manage workloads by setting up queues for Redshift
processing.

Redshift Spectrum
 You will be asked how to run queries directly against S3 storage using Redshift.

Reserved nodes
 You will be asked how to cost-optimize the use of Redshift nodes.
[Q] Select Redshift

Your company has implemented two data processing operations utilizing a relational
database. The first process runs a complex query in a data warehouse that takes hours to
complete. The second process involves performing customer data analysis and visualizing
it in a dashboard.

Which is the best database that can meet these requirements?

1) Perform data warehouse processing using Redshift and perform customer data
analysis using RDS.
2) Implement both operational processes using Redshift.
3) Implementing both operational processes using RDS.
4) Implement both operational processes using Aurora.
Redshift
A fast, scalable, and cost-effective managed DWH / data lake analysis service

 Starts with hundreds of gigabytes of data and extends to petabytes or more
 Available for less than 1,000 USD per year per terabyte
 Fully managed: many maintenance tasks, such as table maintenance and data
placement, are automated, including automatic workload management
 A relational database compatible with PostgreSQL, but with a column-oriented data
model
 A cluster groups multiple nodes together; it runs in a single AZ, with no multi-AZ
configuration
 Up to three times the performance of other cloud data warehouses with RA3
instances
 Distributed caching with AQUA makes Redshift run up to 10 times faster than other
cloud data warehouses
Data Lake
S3 can be used as the hub of data analysis in a data lake.

[Diagram: data flows through four stages. Data collection: structured data from business DBs, semi-structured data, and unstructured data. Data storage: an S3 data lake, with Glacier for archiving. Data processing: ETL with AWS Glue or Amazon EMR, then loading into Redshift as a data mart for OLAP. Data utilization: BI/data visualization, AI/machine learning analysis, and chatbots/smart speakers.]
Instance Type
Choose from two instance types depending on data size and operations.

RA3 instances
 Optimize the data warehouse by scaling and paying for computing performance and
managed storage independently
 Recommended if you expect the amount of data to increase
 Minimum of 2 nodes required
 Lowest price is $3.836/node/hour

DC2 instances
 Data warehouse using fixed local SSD storage
 Increase the storage capacity of the cluster by adding nodes as the data size
grows
 DC2 node types are recommended for uncompressed data sets of less than 1 TB
 At least one node required
 Cheapest at $0.314/node/hour
Redshift configuration
Redshift performs data processing with multiple nodes in a group called a cluster.

Redshift clusters

Leader node
• Endpoint for queries
• SQL code generation and execution

Compute nodes (1 to 32 per cluster)
• Fast local SSD cache
• Parallel execution of queries

Managed storage (Redshift file format)
• S3 is used for persistent data storage
[Q] Redshift configuration

A data analytics company uses a Redshift cluster to house its data warehouse on AWS.
The company has a disaster recovery plan in place to ensure business continuity. As a
solution architect, you are required to implement a solution to increase the resiliency of
Redshift in the event of a region outage.

Which of the following is the best approach to meet this requirement?

1) Conduct a cross-region snapshot copy.


2) Launch the cross-region lead replica and promote it to the master at failover.
3) Copy a snapshot of the cluster to another region.
4) Implementing global clusters.
Redshift configuration
Unlike RDS, there is no multi-AZ failover configuration and no multi-region support.

[Diagram: an ELB and EC2 instances span private subnets 10.0.4.0/24 and 10.0.5.0/24 in two AZs and fail over automatically, but the Redshift cluster does not fail over to another AZ.]

• It is necessary to copy snapshots to another region via S3, just in case.
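[Example] A minimal sketch of enabling the cross-region snapshot copy with boto3 (Python); the cluster name and destination region are hypothetical.

import boto3

redshift = boto3.client("redshift")

# Automated snapshots taken in the primary region are copied to the
# destination region, ready for a restore if the region goes down.
redshift.enable_snapshot_copy(
    ClusterIdentifier="analytics-dwh",
    DestinationRegion="us-west-2",
    RetentionPeriod=7,   # days to keep the copied snapshots
)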
Automatic Operations
Easy operation with automatic maintenance features and detailed monitoring.

CloudWatch
 CloudWatch metrics are automatically stored by default and can be viewed in the
Redshift console

Backup
 Set the execution time and automatically take regular backups
 Snapshots can also be taken manually

Automatic Maintenance
 Patching is done automatically by AWS
 Patching time can be specified in the maintenance window
Query Efficiency through Machine Learning
Machine learning tunes query execution and assists efficient automatic operation.

Table Maintenance Automation
 Automatic optimization of distributed table configuration
 Automatic update of statistics
 Automatic data reorganization

Automatic Workload Management
 Machine learning automatically prioritizes query execution when multiple queries
are set up in workload management

Short query acceleration
 A machine learning algorithm analyzes each target query and predicts the query
execution time, prioritizing short-running queries over long-running queries
 As a result, the number of WLM queues can be reduced

Recommendations for best usage
 Automatically analyzes cluster performance and makes recommendations for
optimization and cost reduction
[Q] Traffic control

Your company is implementing data analysis operations using a relational database. To do


this, you need to configure a data warehouse using Redshift to control traffic to VPC
endpoints and work with AWS resources.

Which of the following is the most suitable solution to meet the requirement?

1) Set routes to VPC endpoints in the network ACLs of Amazon Redshift's installed
subnets.
2) Set the route to VPC endpoints in the gateway route table used by Amazon Redshift.
3) Using Amazon Redshift's enhanced VPC routing.
4) Set a security group on Amazon Redshift to allow routes to VPC endpoints.
Traffic Control
You can enable enhanced VPC routing to force traffic through your VPC.

Enhanced VPC routing
 Enhanced VPC routing forces all COPY and UNLOAD traffic between the cluster and
the data repository through Amazon VPC.
 You can use VPC flow logs to monitor COPY and UNLOAD traffic.
 Communication follows the route tables set in the VPC.

[Diagram: Redshift traffic to S3 is forced to route through the VPC and its endpoints.]
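[Example] A minimal sketch of turning enhanced VPC routing on for an existing cluster with boto3 (Python); the cluster name is hypothetical.

import boto3

redshift = boto3.client("redshift")

# COPY/UNLOAD traffic to S3 then follows the VPC route table
# (e.g. via an S3 VPC endpoint) and appears in VPC flow logs.
redshift.modify_cluster(
    ClusterIdentifier="analytics-dwh",
    EnhancedVpcRouting=True,
)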
[Q] Encryption

An enterprise uses Redshift clusters to build a data warehouse. As the operations
lead, you have been asked by the internal security team to ensure that the data in
the Redshift database is encrypted.

Choose the best encryption methods that meet this requirement. (Select two.)

1) Implement encryption using SSL/TLS.
2) Implement encryption using AWS KMS.
3) Set up a trusted connection between Amazon Redshift and an HSM.
4) Implement encryption using CSE.
5) Implement encryption using Redshift Auth.
Encryption
Encryption with KMS and ACM can also be implemented in Redshift.

Encryption of stored data
 Encryption using AWS KMS, as with other databases
 The data stored on disk and snapshots are the encryption targets

Encryption of communication
 Encrypt data in transit between Amazon Redshift clusters and SQL clients via
JDBC/ODBC
 Implement SSL communication based on a certificate from ACM
[Q] WLM

An enterprise uses Redshift for online analytical processing (OLAP) applications that
handle complex queries on large data sets. When processing these queries, there is a
requirement to define how queries are routed to queues.

Which of the following can meet this requirement?

1) Specify the queue routing with Redshift's enhanced VPC routing.
2) Specify the queue routing with Redshift's Workload Management (WLM).
3) Specify the queue routing using Redshift DLM.
4) Link SQS to Redshift and specify the queue routing.
Workload Management (WLM)
Multiple queues can be set up according to the workload, and queries are assigned to
queues and prioritized based on query allocation rules.

• Set the slots in each queue and the allocation of CPU and memory.
• Increasing the number of slots increases the degree of parallelism but decreases
the memory allocated per slot.

[Diagram: queries from user group A go to a queue for long queries, while queries from user group B go to a queue for short queries.]
Scaling
Redshift can be scaled by changing or adding node types and by adding clusters.

Adding nodes
 Improve performance by adding compute nodes

Adding clusters
 Concurrency Scaling automatically adds temporary clusters in seconds to
accommodate bursts of concurrent requests, resulting in consistently fast
performance (1 to 10 additional clusters)
[Q] Redshift Spectrum

A big data analytics company that provides IoT solutions is building a data analytics
solution on AWS to analyze vehicle data. Since vehicle data is sent in large volumes,
it is stored in S3 via Firehose from Kinesis Data Streams. This S3 data needs to be
queried directly in order to run large, complex queries against it.

Which of the following solutions is best?

1) Athena
2) S3 Select
3) QuickSight
4) Redshift Spectrum
RedShift Spectrum
Redshift Spectrum enables data analysis directly against user-managed S3 buckets.

[Diagram: the Redshift cluster (leader node and compute nodes) sends queries to a fleet of Redshift Spectrum query engines, which read directly from user-managed S3 buckets.]
Data linkage (To Redshift)
It is important to consolidate the analytics base as a DWH by moving data into
Redshift.

S3
 S3 is the most frequently used data integration source; you can load data from S3
into Redshift for analysis, or analyze data directly in S3

Kinesis
 Using Kinesis Data Firehose, you can specify Redshift as the destination for
streaming data and store the data for analysis

RDS
 No direct connection to RDS, but data migration is possible using AWS Data
Pipeline or DMS

DynamoDB
 You can copy data from DynamoDB to Redshift

Amazon EMR
 You can copy data from EMR to Redshift

Data linkage (From Redshift)
In addition to data visualization using QuickSight, data extraction to S3 is also
possible from Redshift.

Amazon QuickSight
 Connects to Redshift and allows you to visualize data

S3
 Data can be extracted from Redshift to S3 by running the UNLOAD command

Amazon Machine Learning
 Redshift data is available as training data for machine learning

RDS
 Cannot integrate directly with RDS from Redshift, but PostgreSQL features can be
used to integrate data with RDS
[Q] Reserved nodes

Company B runs a data warehouse on AWS using an Amazon Redshift cluster with six
nodes. It uses this system to perform various business analyses on a daily basis, and
the system will remain in use for the next year.

Choose the node configuration that reduces the cost of this Redshift configuration.

1) Apply a prepayment discount by purchasing reserved nodes for the compute nodes
2) Apply a prepayment discount by purchasing reserved nodes for the leader node
3) Apply a prepayment discount by purchasing spot nodes for the compute nodes
4) Apply a prepayment discount by purchasing spot nodes for the leader node
Reserved nodes
Redshift is discounted via reserved nodes.

[Diagram: in a Redshift cluster, cost savings come from purchasing reserved nodes for the compute nodes, not the leader node.]
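[Example] A minimal sketch of purchasing reserved nodes with boto3 (Python); picking the first offering is purely illustrative, since in practice the offering is chosen by node type and term.

import boto3

redshift = boto3.client("redshift")

# Look up the available reserved-node offerings, then reserve
# capacity for the six compute nodes of the cluster.
offerings = redshift.describe_reserved_node_offerings()["ReservedNodeOfferings"]
offering_id = offerings[0]["ReservedNodeOfferingId"]

redshift.purchase_reserved_node_offering(
    ReservedNodeOfferingId=offering_id,
    NodeCount=6,
)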
The scope of SNS
What is SNS?
Amazon SNS is a fully managed push notification service that enables asynchronous
communication with other services.

(1) Send a message to a topic
(2) Communication processing
(3) Push the contents of the communication to the receiving process

[Multiprotocol]
- HTTPS
- Email
- SQS
- Mobile push
The scope of SNS questions
The results of analyzing the range of questions from the 1625 questions are as follows:

Selecting SNS
 Based on the scenario, you will be asked to select Amazon SNS to achieve the
requirements.

SNS features
 You will be asked about features and performance that can be achieved with SNS.

SNS configuration
 You will be asked how to build solutions using SNS.
[Q] Selection of SNS

A leading manufacturing company is building web applications using EC2 instances.
Because the EC2 instances and Lambda functions need to be executed separately, a
push-notification mechanism between the components is essential so that they can work
together.

Which service can be used to decouple the applications?

1) Amazon SNS
2) Amazon SQS
3) Amazon SES
4) Amazon MQ
Amazon Simple Notification Service (SNS)
Asynchronous communication is realized by the sender publishing to an SNS topic and
the receivers subscribing to it.

[Diagram: messages published to a topic are pushed to the subscribed receiving processes.]
SNS and SQS
SNS and SQS have different processing models, so they are used for different use
cases.

SNS
• Messages are not persistent
• Push-type delivery method
• Producers publish to a topic
• Consumers are sent messages

SQS
• Messages are persistent
• Polling-type delivery method
• Producers accumulate messages in a queue
• Consumers get messages from the queue
[Q] SNS features

A leading manufacturing company is building web applications using EC2 instances.
The applications need the ability to perform back-end processing by notifying other
applications. As a solution architect, you are considering a notification scheme.

Which of the following are the notification protocols supported by Amazon SNS? (Select
three)

1) SSH
2) FTP
3) SMS
4) HTTPS
5) Email
6) MQ
Features of SNS
SNS can be used in loosely coupled architectures by linking with various AWS services
and setting up notifications.

 Messages are published once
 Message delivery order is not guaranteed
 Messages cannot be recalled once published
 Retries according to the delivery policy
 Message size up to 256KB
Features of SNS
SNS uses HTTP/HTTPS/JSON format messages as follows.

 HTTP/HTTPS headers
 HTTP/HTTPS subscription confirmation in JSON format
 HTTP/HTTPS notification in JSON format
 HTTP/HTTPS unsubscribe confirmation in JSON format
 SetSubscriptionAttributes delivery policy in JSON format
 SetTopicAttributes delivery policy in JSON format
SNS collaboration
SNS can be used for loosely coupled architecture by linking with
various AWS services and setting notifications.

 Amazon CloudWatch: billing alert notifications
 Amazon SES: bounce/complaint feedback notifications
 Amazon SQS: queueing and processing via SNS notifications
 Amazon S3: notification when a file is uploaded
 Amazon Elastic Transcoder: notification of video conversion completion/failure
 AWS Lambda: launching processes triggered by SNS
[Q] SNS Configuration

A Leading manufacturing company is building web applications using EC2 instances. This
application needs the ability to perform back-end processing by notifying other
applications. This notification is handled by the Lambda function, which performs the back-
end processing, reaching a peak of about 5000 requests per second. You have conducted
some tests and found that some of the Lambda functions are not executing.

Choose the best solution to solve this problem.

1) Amazon SNS has reached the notification limit and the limit needs to be raised.
2) The limit needs to be raised because Amazon SNS message delivery has exceeded
Lambda's account concurrency quota.
3) Improper IAM policy for linking to Lambda functions from Amazon SNS.
4) We need to authenticate Amazon SNS subscriptions on the Lambda function side.
SNS Configuration
SNS push notifications can fan out to SQS queues and Lambda functions.

[Diagram: an S3 event publishes to SNS, which pushes to both SQS and Lambda]
The scope of AWS Storage
Gateway
What is AWS Storage Gateway?
AWS Storage Gateway can connect and extend the storage of on-
premises environments to Amazon S3

[Diagram: an on-premises environment connects through Storage Gateway to the Tokyo Region, backed by EC2 and S3 / Glacier / IA storage]
The scope of AWS Storage Gateway questions
The results of analyzing the range of questions from 1625 are as follows

Storage Gateway selection:  You will be asked to choose how to achieve a hybrid configuration, extending the storage of an on-premises environment.

Storage Gateway types:  As a way to expand storage in an on-premises environment, you will be asked to choose a type of storage gateway.
[Q] Storage Gateway Selection

As a solution architect, you are building a crowdfunding application on AWS. This application allows users to raise money for various social contribution projects. To give users peace of mind, it is a prerequisite that the application's data is strictly secured. As a security requirement, you decided to use services that encrypt stored data by default, without any separate configuration.

Choose the services that meet this requirement (Select two)

1) AWS Storage Gateway
2) Amazon Glacier
3) Amazon RDS
4) AWS Lambda
5) Amazon ECS
Benefits of AWS Storage Gateway
You can take advantage of the features and performance of S3
while taking advantage of on-premises storage.

 Seamless integration using industry-standard protocols
 Low-latency access is possible by leveraging the cache
 Takes advantage of the robustness, low cost, and scalability of AWS storage
 Efficient data transfer
 Integrated with AWS monitoring, management, and security
 Encryption is implemented automatically
Uses AWS Storage Gateway
Use Storage Gateway when you want to use AWS storage for data
transfer and backup storage of on-premises storage.

 Can be used for big data processing / cloud bursting / moving data to
AWS storage for system migration
 Retain data on S3 for backup, archiving and disaster planning
 Leverage AWS storage easily in an on-premises environment
[Q] Storage gateway type

Company B owns 3 TB of volume data in an on-premises repository that holds a large number of print files. This repository is growing by about 500 GB per year and needs to be utilized as a single logical volume. You, as the solution architect, have decided to extend this repository to S3 storage to avoid local storage capacity constraints while maintaining optimal response times for frequently accessed data, and will be utilizing S3 as the primary.

Which AWS Storage Gateway configuration is best suited to achieve your requirements?

1) Cached volumes utilizing snapshots scheduled to move to S3
2) Stored volumes utilizing snapshots scheduled for relocation to S3
3) Cached volumes that utilize a scheduled transfer snapshot to Glacier (quick access)
4) A virtual tape library utilizing snapshots scheduled for relocation to S3
Storage Gateway Type
Choose one of the three gateways depending on the data type used

File Gateway
• Provides a backup solution that seamlessly connects to the AWS cloud and stores data files and backup images in Amazon S3 cloud storage

Volume Gateway
• Provides cloud-backed iSCSI block storage volumes for on-premises applications
• Uses either cached volumes or stored volumes

Tape Gateway
• Provides virtual tape storage and VTL management to store data on Amazon S3 and Glacier
File Gateway
On-premises file data stored as objects on Amazon S3 via AWS
Storage Gateway

 Transfers data using the SMB or NFS interface of the virtual appliance
 Backs up to low-cost S3 storage classes such as S3 Standard, S3 Standard-Infrequent Access, S3 Glacier, and S3 Glacier Deep Archive
 S3 lifecycle policies, versioning, cross-region replication, etc. are available
 Data transferred to AWS is updated asynchronously

Reference: https://aws.amazon.com/jp/storagegateway/file/
Volume Gateway
Achieve a hybrid configuration with S3 for disk data in on-premises
environments

 Enables faster access to data from applications, keeping either a cache of the latest
access data, or a copy of the entire volume on-premises
 Interfaced as block storage on iSCSI
 On-premises local disk backups automatically performed on the AWS side
 Update data transferred to AWS asynchronously
 Data protection and recovery with Amazon EBS Snapshots, Storage Gateway Volume
Clones, and AWS Backup

Reference: https://aws.amazon.com/jp/storagegateway/volume/
Volume Gateway Type
Choose from two types according to whether the primary data is kept on-premises or in S3.

Cached Volume Gateway
• Primary storage is S3
• Extends on-premises storage into S3
• Uses Amazon S3 as the primary data storage, keeping frequently accessed data in the local storage gateway
• Frequently accessed data can be cached in the on-premises environment, enabling low-latency access

Stored Volume Gateway
• Primary storage is on-premises
• Stores the primary data locally while backing it up to Amazon S3 asynchronously
• On-premises applications can access their entire data set with low latency
Tape Gateway
Using Storage Gateway as a virtual tape library enables highly
robust external storage backup storage

 Uses Virtual Tape Library (VTL)-compatible backup software to store backup data in S3 and Glacier via Storage Gateway
 Available in on-premises and AWS EC2 environments
 Utilizes inexpensive archive storage (S3/Glacier) through the backup software's tape export operations
 Supports major backup software
The scope of AWS
Organizations
What is AWS Organizations?
Integrated management can be utilized when multiple AWS accounts
are available

[Diagram: a master account (AWS account A) provides consolidated billing and centralized authority management over member accounts B, C, and D]
The scope of AWS Organizations questions
The results of analyzing the range of questions from 1625 are as follows

Selecting AWS Organizations:  You will be asked to select an appropriate management service based on the scenario

Account settings:  You will be asked how to set up and delete master and member accounts

Consolidated billing:  You will be asked about the benefits of a consolidated billing setup

SCP:  You will be asked how Service Control Policies are set up and their purpose. Also, based on the scenario, you will be asked about the actual effects of SCP settings.

Sharing resources:  You will be asked about the mechanism for resource sharing between members made possible by setting up AWS Organizations.
[Q] Select AWS Organizations

A company has three internal AWS accounts. It was decided that the CIO's office would take the lead in overseeing IT cost management and operations.

Which service can be used to consolidate and manage multiple accounts in AWS?

1) AWS Organizations
2) IAM
3) AWS Trusted Advisor
4) AWS Systems Manager
AWS Organizations
AWS Organizations is a managed service that makes access management effortless for large organizations with multiple AWS accounts

Centralized management of multiple accounts
• Group AWS accounts to apply policies and manage them centrally

Management of account creation
• Create new AWS accounts from the console/SDK/CLI and track their creation in logs

Consolidated billing
• Consolidated billing across multiple AWS accounts
[Q] Account settings
A company has three internal AWS accounts. Because each department processes billing separately,
they use AWS Organizations to oversee IT cost management and operations. For administrative
purposes, the company has two master accounts and two Organizations set up. A requirement has
arisen to move member account B, which currently belongs to Organizations in master account A, to
Organizations managed by another account C.

How can the member account be moved? (Select two)

1) Master account A removes member account B from the existing organization, after giving member account B the necessary settings.
2) If member account B accepts the invitation to the new organization from master account C, account B is removed from account A's organization.
3) If account A performs a privilege transfer from account A to the new organization and account C agrees, account B is registered to the new organization.
4) If account A performs a privilege transfer from account A to the new organization and account C agrees, account B is removed from account A's organization.
5) An invitation to the new organization is extended from master account C to account B. Once the acceptance is received from account B, it is added as a member account.
6) The root account of member account B, which you want to remove from the organization, performs the withdrawal process on its own.
Account Settings
Select one account as a master account from among AWS
accounts

[Diagram: one AWS account becomes the master account; the remaining AWS accounts become member accounts]

 An account is registered as a member account once it approves the invitation from the master account.
 To remove a member account from the organization, the account must have the capabilities required to stand alone, such as billing processing.
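A minimal boto3 sketch of the invitation flow (the account ID is hypothetical; the handshake must be accepted from the invited account's side):

import boto3

org = boto3.client("organizations")  # run with master-account credentials

# Invite an existing account into the organization (starts a handshake).
handshake = org.invite_account_to_organization(
    Target={"Id": "111122223333", "Type": "ACCOUNT"}
)["Handshake"]

# The invited account accepts with accept_handshake(HandshakeId=...) using its
# own credentials. A member can later be removed by the master account:
org.remove_account_from_organization(AccountId="111122223333")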
AWS Organizations
The master account manages member accounts in units called OUs (organizational units)

[Diagram: the administrator's master account sits at the organization root, where service control policies are managed; organizational units (OUs) group the member accounts beneath it]
[Q] Consolidating Billing

A company has three internal AWS accounts. Because billing is processed differently in different departments, they use AWS Organizations to oversee IT cost management and operations. As a solution architect, you are considering whether to use AWS Organizations to set up consolidated billing for all three AWS accounts.

Select the cost benefit of using AWS Organizations.

1) If each account utilizes S3, a volume discount on S3 costs is applied, and cost savings are reliably possible.
2) If each account utilizes S3, a price tier dedicated to Organizations is applied to S3 costs.
3) If each account utilizes S3, the aggregated S3 usage volume will increase.
4) If each account utilizes S3, you may be able to reduce S3 costs.
Select Type
Choose between two methods: consolidated billing only, or full account management

Consolidated Billing Only
 Select if you only want the master account to make bulk payments on behalf of the member accounts
 Cost advantages arise because volume discounts can be aggregated

All Features
 Select if you want to control multiple accounts in a company, including consolidated payment on their behalf
[Q] SCP

A company has three internal AWS accounts. Because of the disparate billing processes in different departments, they use AWS Organizations to oversee IT cost management and operations. They use Service Control Policies (SCPs) to manage permissions centrally across all accounts in the organization.

Which statements are correct about how SCPs are applied? (Select three)

1) An IAM user in a member account where the SCP allows access to EC2 is thereby granted EC2 operation rights.
2) The SCP affects all users and roles in the member account it is attached to, including the root user.
3) The SCP affects all users and roles in the member account it is attached to, other than the root user.
4) The SCP affects service-linked roles.
5) The SCP does not affect service-linked roles.
6) An IAM user in a member account where the SCP allows access to EC2 is not thereby authorized to operate EC2.
SCP
A policy called an SCP can be used to set permission boundaries for members within an OU.

[Diagram: IAM grants EC2, RDS, and ECS permissions; the SCP allows EC2 and ECS; the effective permissions are the intersection: EC2 and ECS]
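As a hedged illustration (the policy name, the denied service, and the OU ID are made up), an SCP can be created and attached to an OU with boto3:

import boto3, json

org = boto3.client("organizations")

# An SCP that denies RDS in every account it is attached to; everything else
# remains governed by the default FullAWSAccess policy plus IAM.
scp = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Deny", "Action": "rds:*", "Resource": "*"}],
}

policy_id = org.create_policy(
    Content=json.dumps(scp),
    Description="Block RDS in this OU",
    Name="deny-rds",
    Type="SERVICE_CONTROL_POLICY",
)["Policy"]["PolicySummary"]["Id"]

# Attaching to an OU (hypothetical ID) applies it to all accounts below it.
org.attach_policy(PolicyId=policy_id, TargetId="ou-ab12-example111")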
IAM and AWS Organizations
IAM performs user management within AWS accounts;
Organizations performs management of multiple AWS accounts
themselves.

[Diagram: AWS Organizations manages the AWS accounts within an organization; IAM manages IAM users and groups within a single AWS account]
[Q] Share resources

Company A, a leading news website, uses the AWS Cloud to manage its IT infrastructure. Since the company uses multiple AWS accounts, it decided to use AWS Organizations to manage them. The company runs applications that require a high degree of interoperability, which requires sharing VPCs between member accounts.

In order to share a VPC, what settings are needed?

1) Enable the VPC sharing feature in AWS Organizations to use VPC sharing between
multiple member accounts.
2) The default setting in AWS Organizations allows for VPC sharing between multiple
member accounts.
3) Using VPC sharing between multiple member accounts in conjunction with AWS RAM.
4) Using VPC sharing between multiple member accounts in conjunction with IAM.
Share resources
AWS Organizations enables resource sharing between member accounts.

AWS Resource Access Manager (RAM)
 By using RAM, resources such as VPC subnets can be shared between accounts in the OU

Shared Reserved Instances
 Reserved Instance sharing is enabled: reserved instances are shared between member accounts
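A minimal RAM sketch (the ARN and account ID are placeholders) for sharing a subnet with another member account:

import boto3

ram = boto3.client("ram")

# Share a VPC subnet with another account in the organization.
share = ram.create_resource_share(
    name="shared-app-subnet",
    resourceArns=["arn:aws:ec2:ap-northeast-1:111122223333:subnet/subnet-0abc1234"],
    principals=["444455556666"],      # member account ID
    allowExternalPrincipals=False,    # keep sharing inside the organization
)
print(share["resourceShare"]["resourceShareArn"])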
The Scope of multi-AZ
configurations
What is a multi-AZ configuration?
An infrastructure configuration that increases availability by making AWS resources redundant across multiple AZs

[Diagram: a VPC (10.0.0.0/16) with an ELB distributing traffic to EC2 instances in public subnets (10.0.0.0/24 and 10.0.1.0/24) across two AZs]
The scope of multi-AZ configuration questions
The results of analyzing the range of questions from 1625 are as follows

Multi-AZ configuration:  You will be asked about multi-AZ configurations using EC2, redundant configurations with the addition of ELB and Auto Scaling, and multi-AZ configurations with DB servers and RDS.

DB multi-AZ configuration:  You will be asked about proper subnet configuration, such as public and private subnets.
[Q] EC2 Multi-AZ configuration

An IT company uses AWS to build web applications. The web layer of the application runs on EC2 instances and the database layer utilizes Amazon RDS MySQL. Currently, all resources are deployed in a single Availability Zone. The development team wants to improve the availability of the application before it goes live.

Which architectural configurations make this web application redundant? (Select two)

1) Deploying web tier EC2 instances to the two AZs behind ELB.
2) Deploying web-tier EC2 instances to the two regions behind ELB.
3) Deploying web-tier EC2 instances to the two VPCs behind the ELB.
4) Deploying an Amazon RDS MySQL database in a multi-AZ configuration.
5) Deploying an Amazon RDS MySQL database in a global database configuration.
Multi-AZ configuration
Basic configuration with web server redundancy in the public subnets and an RDS failover configuration

[Diagram: a VPC (10.0.0.0/16) spanning two AZs; an ELB fronts EC2 instances in public subnets 10.0.0.0/24 and 10.0.1.0/24; RDS instances sit in private subnets 10.0.2.0/24 and 10.0.3.0/24 with automatic failover between them; S3 provides storage]
[Q] DB multi-AZ configuration

An IT company is using AWS to build web applications. The web layer of the application runs on EC2 instances and the database layer uses Amazon RDS MySQL. Each instance is placed in a private subnet for security purposes. The web servers are required to download software patches from the Internet, and the NAT setup must address the issue of inaccessibility in the event of a failure.

How can you increase availability and cost-effectiveness?

1) Create a NAT instance in each Availability Zone. Control traffic to your NAT instance
with ELB.
2) Create a NAT gateway in each Availability Zone. Control traffic to your NAT instance
with ELB.
3) Create a NAT gateway in each Availability Zone. Configure the route table on each
private subnet so that the instances use the NAT gateway in the same Availability
Zone.
4) Create a NAT gateway in each Availability Zone. Each NAT gateway controls traffic to
your EC2 instances based on health checks.
Multi-AZ configuration

[Diagram: a VPC (10.0.0.0/16) spanning two AZs, fronted by an ELB; each AZ has a NAT gateway in its public subnet (10.0.0.0/24 and 10.0.1.0/24) and EC2 instances in its private subnet (10.0.2.0/24 and 10.0.3.0/24); each private subnet routes through the NAT gateway in its own AZ; RDS performs automatic failover]
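A hedged boto3 sketch of option 3 (all IDs are placeholders): one NAT gateway per AZ, with each private route table pointing at the NAT gateway in the same AZ:

import boto3

ec2 = boto3.client("ec2")

# One NAT gateway per AZ, each in that AZ's public subnet.
for public_subnet, private_rtb, eip in [
    ("subnet-pub-az1", "rtb-priv-az1", "eipalloc-az1"),
    ("subnet-pub-az2", "rtb-priv-az2", "eipalloc-az2"),
]:
    natgw = ec2.create_nat_gateway(SubnetId=public_subnet, AllocationId=eip)
    natgw_id = natgw["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[natgw_id])

    # The default route of the same-AZ private subnet goes to this NAT gateway.
    ec2.create_route(
        RouteTableId=private_rtb,
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=natgw_id,
    )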
The scope of Amazon FSx
What is Amazon FSx?
Fully managed storage services to provide industry-standard file
storage

[Diagram: file storage (hierarchical files) vs. object storage (data plus metadata and an ID) vs. block storage (raw blocks)]
The scope of Amazon FSx questions
The results of analyzing the range of questions from 1625 are as follows

Selecting Amazon FSx for Windows:  You will be asked to choose Amazon FSx for Windows File Server to meet storage requirements

Selecting Amazon FSx for Lustre:  You will be asked to choose Amazon FSx for Lustre to meet storage requirements
Three file storage services
In addition to EFS, two types of FSx file storage are available, depending on the use case

EFS
 NAS-like file storage
 Can be used as a file system with shared access by multiple EC2 instances
 Unlike S3, cannot be accessed directly from the Internet

Amazon FSx for Windows File Server
 Storage compatible with Windows File Server
 Built on Windows Server and richly integrated with the Windows AD, OS, and software

Amazon FSx for Lustre
 Fast distributed storage compatible with Lustre, an open-source distributed file system
 Temporary processing storage for fast-computing data layers such as machine learning
[Q] Select Amazon FSx for Windows

A bank is building a web application on AWS using Microsoft's distributed file system. As a solution architect, you need to choose the best storage that fits this distributed file system.

Which of the following AWS services is best for this application?

1) Amazon FSx for Windows File Server
2) Amazon FSx for Lustre
3) EFS
4) AWS Managed Microsoft AD
Amazon FSx For Windows File Server
Storage used if you want to use Windows File Server on AWS Cloud

Features and Use Cases
 Can migrate Windows File Server workloads to AWS
 Extensive management features such as Active Directory (AD) integration
 The SMB protocol enables a wide range of connectivity, including Amazon EC2, VMware Cloud on AWS, Amazon WorkSpaces, and Amazon AppStream 2.0 instances
 Accessible from up to thousands of compute instances

Architectural Configuration
 Access via ENI
 Controlled by VPC security groups
 Specify and configure a single subnet in a single AZ
 Can be shared across multiple instances and accessed from instances in other AZs
 Can be configured in multi-AZ
[Q] Select Amazon FSX for Lustre

A leading aviation company is building a simulation system on AWS for engine development.
This is a high-performance workflow used to simulate engine performance and failure
prediction. During analysis, "hot data" needs to be processed and stored quickly in a
parallel and distributed fashion. The "cold data" must be kept for reference, so that it can
be quickly accessed for reading and updating at low cost.

Choose the best storage for conducting such advanced simulations.

1) Amazon EMR
2) EFS
3) Amazon FSx for Lustre
4) Amazon FSx for Windows File Server
Amazon FSx For Lustre
Provides ultra-high performance storage dedicated to distributed and
parallel processing for fast computing process

Features and Use Cases
 A distributed file system utilized by many supercomputers
 Lustre can be used in a fully managed and secure manner
 Capacity from 3,600 GB
 Throughput of up to several hundred GB/s
 Scalable to millions of IOPS

Architectural Configuration
 Access via ENI/endpoint
 Controlled by security groups
 Created in a single subnet in a single AZ
 Processes hot data in a parallel, distributed manner
 Cold data is easily stored in Amazon S3; seamless integration with S3 enables data-lake-style big data processing and analysis solutions

[Diagram: Amazon FSx for Lustre serves fast parallel processing, backed by S3 (Standard) for cold data]
The scope of Instance Store
What is an instance store?
Storage for temporary data storage that is physically attached to
an EC2 instance

Reference: https://docs.aws.amazon.com/ja_jp/AWSEC2/latest/UserGuide/InstanceStorage.html
The scope of instance store questions
The results of analyzing the range of questions from 1625 are as follows

Selecting the instance store:  Based on the scenario, you will be asked to select the instance store when presented with storage requirements

Instance store features:  Based on the characteristics of the instance store, you will be asked how to set up the storage, and so on.
[Q] Select instance store

A data processing application is running on a G4.large EC2 instance with a 50 GB EBS general-purpose volume. The application is required to utilize temporary data in a small database (less than 30 GB) on the EBS root volume, and the I/O speed needs to be increased.

What is the most cost-effective way to improve database response time?

1) Move the temporary database to the instance store.
2) Utilize a new 50 GB provisioned IOPS volume with 3000 IOPS allocated.
3) Change to a storage-optimized instance.
4) Change the instance size to a larger one.
Select Instance Store
There are two types of storage used directly by EC2: the inseparable instance store, and separately configured EBS

Instance Store
 Block-level physical storage on the host computer's built-in disk, inseparable from EC2
 Holds temporary EC2 data, which is cleared when the instance is stopped or terminated
 Free

Elastic Block Store (EBS)
 Network-attached block-level storage managed independently of EC2
 EBS can be retained after EC2 terminates, and snapshots are retained in S3
 EBS costs are charged separately from EC2 costs
[Q] Instance store features

As a solution architect, you plan to build a web application by launching an EC2 instance.
There has been a requirement for some EC2 instances to utilize high-performance
ephemeral storage.

How should you add a new instance store volume?

1) Start a new instance store volume and attach it to the instance.
2) Instance store volumes can be specified for the instance only when the instance is launched.
3) Stop the instance and attach the instance store.
4) Use block device mapping to specify additional instance store volumes when the instance launches.
Instance Store Features
Instance store, which is inseparable storage available directly in
EC2
• Instance store volumes can be specified only when the instance is launched
• Cannot be detached from one instance and attached to another
• Data in the instance store is maintained only while the associated instance is running
• Data is lost if the underlying disk drive fails, or when the instance is stopped, hibernated, or terminated
• The instance type determines the size of the available instance store and the type of hardware used for the instance store volumes
• Some instance types use NVMe or SATA-based solid state drives (SSDs) to achieve high random I/O performance
• Use block device mapping to specify the instance's EBS volumes and instance store volumes
• Instance store volumes are exposed as virtual devices named ephemeral0, ephemeral1, and so on
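As a hedged sketch (the AMI ID is a placeholder, and the instance type must actually provide instance store volumes), block device mapping requests an ephemeral volume at launch:

import boto3

ec2 = boto3.client("ec2")

# Map an instance store volume at launch time; it cannot be added later.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="m5d.large",          # an instance type that includes instance store
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        {"DeviceName": "/dev/sdb", "VirtualName": "ephemeral0"},
    ],
)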
The scope of AWS KMS
What is KMS?
AWS KMS is a service to create and manage master keys used for
encryption

Reference: https://docs.aws.amazon.com/crypto/latest/userguide/awscryp-service-kms.html
The scope of KMS questions
The results of analyzing the range of questions from 1625 are as follows

Selecting KMS:  You will be asked to select KMS as a means of performing encryption

CMK management:  You will be asked about the management of CMKs created in KMS

KMS settings:  You will be asked about configuration schemes tailored to the scenario, such as when using KMS in an application.
[Q] Select KMS
As a solution architect, you are building a medical information sharing application using a
large dedicated EC2 instance with multiple EBS volumes. The EBS volumes must be
encrypted to comply with HIPAA standards, considering the confidentiality of the data to
be processed.

In EBS encryption, which service does AWS use to protect the volume data being stored?

1) Implementing key management with AWS Key Management Service (KMS)
2) Using Amazon managed keys using SSE-EBS
3) Implementing key management using ACM
4) Using SSL certificates provided by AWS Certificate Manager (ACM)
AWS KMS
AWS KMS is a managed encryption key creation and management
service for encrypting data
 KMS is a managed service that creates, manages, and operates encryption keys. CMKs can be created, imported, rotated, deleted, and managed using the AWS Management Console, AWS SDKs, or the CLI.
 It works with IAM to manage key access.
 It can disable/enable/delete customer master keys (CMKs) and automatically rotate keys every year.
 It is also possible to import and manage an external CMK in KMS
 KMS uses FIPS 140-2 validated hardware security modules to protect keys
 It integrates with AWS CloudTrail to show usage logs for all keys
 It is applicable to many AWS services such as RDS and S3
 Applications can encrypt data by using the KMS SDK
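For the EBS scenario in the question above, a hedged sketch (the key ARN and AZ are placeholders) of creating a KMS-encrypted volume:

import boto3

ec2 = boto3.client("ec2")

# Create an EBS volume encrypted with a specific KMS CMK.
ec2.create_volume(
    AvailabilityZone="ap-northeast-1a",
    Size=100,            # GiB
    VolumeType="gp2",
    Encrypted=True,
    KmsKeyId="arn:aws:kms:ap-northeast-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
)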
[Q] CMK management

As a solution architect, you are building a web application using EC2 instances that store data in an S3 bucket. The EBS volumes are encrypted with a unique customer master key (CMK) to ensure data confidentiality. A member accidentally deleted the CMK, and you are unable to recover user data.

What should you do to solve this problem?

1) Once the CMK is deleted, it is impossible to recover, and you lose access to the data.
2) You can restore the CMK by contacting AWS support.
3) You can restore the CMK from the root account user.
4) Since the deletion of the CMK was only just scheduled, you can cancel the deletion and recover the key.
AWS KMS
KMS uses a CMK and customer data keys to perform encryption.

Customer Master Key (CMK)
 The master key created first; if it is lost, the encrypted data becomes inaccessible
 Used to encrypt the encryption (data) keys
 Can be rotated

Customer Data Key (encryption key)
 The key used for the actual data encryption
 Generated by KMS and encrypted by the CMK

Envelope Encryption
 An encryption method that encrypts data with a data encryption key instead of encrypting directly with the master key
[Q] KMS Settings

A solution architect has developed an encryption solution. In this solution, the data key must be encrypted using envelope protection before it is written to disk.

Which solution can assist with this requirement?

1) AWS KMS API
2) API Gateway with STS
3) IAM access keys
4) AWS Certificate Manager
Envelope Encryption
KMS uses a technique called envelope encryption to perform encryption.

 Envelope encryption is a method of performing encryption by combining two keys: a data key and a customer master key.
 First, the data key is encrypted using the customer master key (CMK) (creating the encrypted data key)
 The application then uses the plaintext data key to encrypt the data
 The application then stores the encrypted data key together with the encrypted data

[Diagram: (1) the app requests a data key; (2) KMS uses the CMK to provide two versions of the key: a plaintext data key and an encrypted data key]
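A minimal envelope-encryption sketch using the KMS API (the key alias is a placeholder; the local cipher step uses the cryptography package purely for illustration):

import base64
import boto3
from cryptography.fernet import Fernet  # illustrative local cipher

kms = boto3.client("kms")

# (1)-(2) KMS returns a plaintext data key plus the same key encrypted under the CMK.
resp = kms.generate_data_key(KeyId="alias/app-master-key", KeySpec="AES_256")
plaintext_key, encrypted_key = resp["Plaintext"], resp["CiphertextBlob"]

# (3) Encrypt locally with the plaintext key, then discard it; store the
# ciphertext together with the encrypted data key.
f = Fernet(base64.urlsafe_b64encode(plaintext_key))
ciphertext = f.encrypt(b"patient record ...")

# Later: have KMS decrypt the stored data key, then decrypt the data.
restored = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
data = Fernet(base64.urlsafe_b64encode(restored)).decrypt(ciphertext)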
The scope of AWS Snow Family
AWS Snow Family
A service that uses a physical storage device to bypass the
Internet and transfer large amounts of data directly to AWS

Snowball: petabyte-scale data movement (currently deprecated)
Snowball Edge: petabyte-scale data movement plus computing and storage capabilities
Snowmobile: exabyte-scale data transfer
The scope of Snow Family questions
The results of analyzing the range of questions from 1625 are as follows

Selecting Snowball types:  Based on a data migration scenario, you will be asked to select the type and number of Snowball devices that meet the requirements.
[Q] Select Snowball types
Company C decided to migrate its entire infrastructure and applications, currently hosted on an on-premises network, to the AWS Cloud. There is a total of 80 TB of data that needs to be moved to an S3 bucket in a timely and cost-effective manner. The company estimates that it would take more than a week to upload the data to AWS using the free capacity of its existing Internet connection.

Choose the fastest, most feasible, and most cost-effective way to migrate the data.

1) Conduct the data migration using two Snowballs.
2) Conduct the data migration using one Snowball Edge Compute Optimized device.
3) Conduct the data migration using one Snowball Edge Storage Optimized device.
4) Conduct the data migration using AWS Direct Connect.
5) Conduct the data migration using two Snowball Edge Storage Optimized devices.
Snowball
An older appliance used for petabyte-scale data migration, now replaced by Snowball Edge

 You can import and export data between on-premises data storage locations and Amazon S3
 Snowball is available in 80 TB models in all regions
 Encryption is enforced, protecting data at rest and in transit
 Managed from the AWS Snowball Management Console
 Performs local data transfer between on-premises data centers and the Snowball device
 The Snowball device is its own shipping container

[Use Case]
Migration/Disaster Preparedness Data Migration/Data
Center Consolidation/Data Migration for Content Delivery

Reference: https://docs.aws.amazon.com/ja_jp/snowball/latest/ug/receive-device.html
Snowball and Snowball Edge
Snowball Edge has the high-performance capabilities of
Snowball+Computing, and AWS now recommends Snowball Edge
instead of Snowball
Snowball
• Encryption is implemented on the client side
• Requires rich resources on the client side; data transfer is conducted by client-side software
• Capacity: 80 TB per appliance
• Use: data migration
• Clustering: not possible
• Rack mount: not possible
• Maximum retention time: 90 days

Snowball Edge
• Encryption on the Edge side
• Data processing available at write time using Lambda functions
• Data transfer by the S3 Adapter for Snowball built into the appliance
• Capacity: 100 TB (80 TB usable) per appliance
• Use: data migration + local processing storage
• Clustering: possible
• Rack mount: possible
• Maximum retention time: 360 days
• Snowball Edge Storage Optimized (80 TB)
• Snowball Edge Compute Optimized (42 TB)
Snowmobile
An exabyte-scale data transfer service that can be used to move
very large amounts of data to AWS

 A 14 m long sturdy shipping container towed by a semi-trailer truck
 Snowmobile can transfer up to 100 PB per unit
 Snowmobile is brought directly to the data center, where the data is loaded
 Once the data is loaded, Snowmobile is sent back to AWS and the data is imported to Amazon S3
 Data is encrypted with 256-bit encryption keys
 Multiple layers of security protect your data, including dedicated security personnel, GPS tracking, alarm monitoring, 24/7 surveillance cameras, and optional security escort vehicles in transit

[Use Case]
Migrate huge amounts of data: video libraries, image repositories, or even entire data centers

Reference: https://docs.aws.amazon.com/ja_jp/snowball/latest/ug/receive-device.html
The scope of Glacier
Amazon S3 Glacier
Glacier is cheaper storage than S3 for medium and long term
storage used for archiving data

It retains the same durability as S3 at a lower price, but data cannot be retrieved quickly.
The scope of Glacier questions
The results of analyzing the range of questions from 1625 are as follows

Select Glacier:  You will be asked to choose Glacier based on the scenario (the same pattern as the S3 questions)

Glacier features:  You will be asked about the characteristics of Glacier when selecting storage.

Data retrieval:  You will be asked to select among the data retrieval options available for Glacier, and how to use provisioned capacity.

Vault locks:  You will be asked about requirements for enhanced compliance that call for the use of vault locks.
[Q] Glacier features

As a solution architect, you plan to archive to Amazon Glacier using lifecycle management
using S3. You need to make sure your supervisor understands the resiliency of your data.

Which of the following is the correct description of Amazon Glacier Storage? (Please
select two)

1) The archive provides 99.999999999% (eleven nines) durability.
2) The archive provides 99.999% durability.
3) A "vault" is used as the container to store archives.
4) A "bucket" is used as the container to store archives.
5) The archive provides 99.99% availability.
Glacier Features
Glacier is cheaper storage than S3 for medium and long term
storage used for archiving data

 In Amazon S3 Glacier, data is stored in units called "archives"
 The maximum size of a single archive is 40 TB
 There is no limit to the number of archives or the amount of data that can be stored
 Each archive is assigned a unique archive ID when it is created, and an archive cannot be updated after creation
 "Vaults" are used as containers for storing archives (up to 1,000 vaults per AWS account)
 Integrates with Amazon S3 lifecycle rules to automate the archiving of Amazon S3 data and reduce overall storage costs
 Automatic encryption by default using Advanced Encryption Standard (AES) 256-bit symmetric keys
 Unlike S3, data cannot be uploaded or retrieved directly, so uploading/downloading requires S3 lifecycle management or programmatic processing
 The minimum retention period for Glacier is 90 days
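A minimal lifecycle sketch (the bucket name and prefix are placeholders) that transitions objects to Glacier after 90 days:

import boto3

s3 = boto3.client("s3")

# Transition objects under logs/ to Glacier after 90 days; expire after 365.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-archive-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)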
How Glacier works
Unlike S3, data is stored in units called vaults and archives

Vault
 A container that stores archives
 Vaults are created per region

Archive
 The basic unit of storage in S3 Glacier; holds any data such as photos, videos, and documents
 Each archive has a unique address

Jobs
 The execution unit for running a SELECT query on an archive, retrieving an archive, or getting a vault inventory

Notification settings
 SNS notifications can be set for job completion, since jobs take time to complete
How Glacier works
Glacier first archives the data, then stores it in a vault for the long term

[Diagram: data is archived into a vault (fast); retrieval goes through jobs (slow)]
[Q] Data retrieval

As a solution architect, you are building a solution to manage and store corporate
documents using AWS. Once stored, the data is rarely used, but it is expected to be
retrieved within 10 hours, following the instructions of the administrator if necessary. You
have decided to use Amazon Glacier and are considering its configuration method.

How does Glacier need to be set up?

1) Expedited retrieval
2) Standard retrieval
3) Bulk retrieval
4) Vault lock
Glacier Data Retrieval Type
Depending on the settings of the Glacier data acquisition type, the
data acquisition time and fee at the time of acquisition will vary
Expedited Retrieval
 A mode to quickly access data when a subset of the archive is needed urgently. Retrievals are usually available within 1 to 5 minutes

Provisioned Capacity
 A mechanism that ensures the retrieval capacity for expedited retrievals is available when needed

Standard Retrieval
 The default option, giving access to any archive within several hours. Typically, standard retrieval takes 3 to 5 hours

Bulk Retrieval
 The least expensive retrieval option, allowing large amounts of data (including petabytes) to be retrieved inexpensively within a day. Typically, bulk retrieval takes 5 to 12 hours
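For archives managed through S3 (the bucket and key are placeholders), the retrieval tier maps directly onto an S3 restore request:

import boto3

s3 = boto3.client("s3")

# Restore a Glacier-class object for 7 days using the Expedited tier;
# "Standard" and "Bulk" are the other accepted tiers.
s3.restore_object(
    Bucket="my-archive-bucket",
    Key="logs/2020-07-01.gz",
    RestoreRequest={
        "Days": 7,
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)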
[Q] Vault locks

A healthcare startup is building a medical information sharing application for a new service, storing patient health records on Amazon S3. As a solution architect, you need to implement an archiving solution based on Amazon S3 Glacier and enforce data access regulations and compliance management.

Which solution should you choose?

1) Use the S3 Glacier vault to store sensitive archival data and then use the vault lock
policy.
2) Using S3 Glacier Archive to store sensitive archive data and then use the archive
policy.
3) Use the S3 Glacier Vault to store sensitive archival data and then use the Lifecycle
Policy.
4) Use S3 Glacier Archive to store sensitive archive data and then use resource policies.
Access Management
Glacier access management uses different methods depending on the use case

IAM Policy
 Sets access rights to Glacier for IAM users and resources
 Centrally manages access rights to resources

Vault Policy
 Defines access policies directly on the vault, granting vault access to users within the organization and even to users outside it

Data Retrieval Policy
 Defines restrictions on data retrieval
 Limits retrieval to the free usage tier; or, when retrieving beyond the free tier, a maximum retrieval rate can be specified to limit retrieval speed and cap the retrieval cost

Vault Lock Policy
 By prohibiting changes through locks, compliance management can be strongly enforced

Signature
 For authentication protection, all requests must be signed
Amazon Glacier cost
Cheaper storage than S3 for medium- and long-term storage such as backups

Capacity fee: 0.005 USD (about 0.5 yen) per GB/month
→ S3 is 0.025 USD/GB (Standard) or 0.0152 USD/GB (One Zone-IA)

Data retrieval fee:
Expedited: 0.033 USD/GB
Standard: 0.011 USD/GB
Bulk: 0.00275 USD/GB

Data retrieval request fee:
Expedited: 11.00 USD / 1,000 requests
Standard: 0.0571 USD / 1,000 requests
Bulk: 0.0275 USD / 1,000 requests

Provisioned capacity: 110.00 USD per provisioned capacity unit

Data transfer fee: data transfer in is free; data transfer out to the Internet is free up to 1 GB/month and charged above that

Prices as of July 2020. Prices are subject to change.
Glacier Deep Archive
A storage type for medium- to long-term storage that is less expensive than Glacier, but with even slower data retrieval

 The basic data model and management are the same as for Glacier
 Available from 0.00099 USD per GB per month, the lowest price on AWS
 Data is stored across three or more AWS Availability Zones with 99.999999999% durability, the same as S3
 With standard retrieval, data can be retrieved within 12 hours
 With bulk retrieval, acquisition costs can be reduced by retrieving large volumes of data within 48 hours
Other Areas
AWS DataSync
Company A has made the decision to migrate its infrastructure to AWS. As a solution architect, you are responsible for moving most of the on-premises data to Amazon S3 and Amazon EFS, and you need to automate the online data transfer to these AWS storage services.

Choose the best solution among the following.

1) Automate online data transfer to the AWS storage services using AWS Data Pipeline.
2) Automate online data transfer to the AWS storage services using AWS Snowball Edge.
3) Automate online data transfer to the AWS storage services using Amazon DML.
4) Automate online data transfer to the AWS storage services using AWS DataSync.
AWS DataSync
AWS DataSync is a service used to migrate storage data to S3 or
EFS

Reference: https://docs.aws.amazon.com/ja_jp/datasync/latest/userguide/how-datasync-works.html
DR Configuration
A financial services company has created a disaster recovery plan in its business
continuity plan. As a solution architect, you want to make sure that a scaled-down version
of a fully functioning environment is always running in the AWS cloud and minimize
recovery time in the event of a disaster.

Which scheme can meet this requirement?

1) Warm Standby
2) Backup & Restore
3) Pilot Light
4) Multi-site
DR Configuration
Solutions for disaster recovery: various methods are available depending on the application

Hot Standby
 A standby server is kept running so that it can take over instantly when a problem occurs on the production server

Cold Standby
 All necessary data and equipment are prepared in advance as a standby server, but kept in a non-operational state

Warm Standby
 A spare server is prepared in the same form as the production server, but some work is required before switchover (recovery)

Backup & Restore
 Periodic backups are taken and restored in the event of a failure of the production equipment

Pilot Light
 A minimal, stopped copy of the environment is prepared in another region and launched in the event of a failure

Multi-site
 The infrastructure is deployed to multiple sites, such as multi-AZ configurations and multi-region deployments
Use another Region for DR
A financial services company is creating a disaster recovery plan in its business continuity
plan. As a solution architect, you are considering a solution that allows for cross-regional
disaster response with EC2 instance configurations and RDS. Cost optimization is the
most important factor in this response.

Which should you choose as a disaster recovery strategy? (Select two)

1) Create an AMI for an EC2 instance and copy it to another region.


2) Create an AMI of an EC2 instance and share it to another region.
3) Move the EC2 instances to another region.
4) Configure the RDS to a different region.
5) Create a snapshot of the RDS and copy it to another region.
Use another Region for DR
Copy AMIs and snapshots to another region; replicate storage or the DB to another region

[Diagram: an AMI of an EC2 instance in a public subnet (10.0.0.0/24) of a VPC (10.0.0.0/16) in the Singapore Region is copied to a matching VPC in the Sydney Region]
Section Content
Lecture: What you will learn in the lecture

The scope of Security: You will learn about security-related services such as AWS WAF and AWS Shield, which appear on the Associate exam.

The scope of Network and data: You will learn about network-related services that appear on the Associate exam, such as data analysis and data migration services and Transit Gateway.

The scope of environment automation: You will learn about the features and differences between environment automation services such as CodeDeploy, OpsWorks, and Elastic Beanstalk.

The scope of User Management: You will learn about the features and differences between the services used for user management, such as AWS Directory Service and Cognito.

The scope of Work Management: You will learn about the features and differences between services used to create and manage workflows, such as AWS Step Functions and SWF.
The scope of cost optimization: You will learn about budget management services and cost estimation tools.
The scope of
AWS Security
The scope of Security questions
The results of analyzing the range of questions from 1625 are as follows

Selecting security services:  Based on the scenario, you will be asked to select the best security service to meet the requirements for responding to a security incident.

ACM:  You will be asked how to create and manage certificates for setting up SSL/TLS communications

CloudHSM:  You will be asked how to use encryption key management services to align with industry standards and the required security standards
[Q] Select security services

Your company uses AWS to run a web application. While monitoring your web traffic
volume, you discover that unauthorized access through a specific IP address is attempting
to obtain passwords and other information. The attacker appears to be executing more
than 10 requests per second, which is unusual.

Choose the best solution against such an attack. (Select two)

1) Using AWS WAF to set up rate-based rules.


2) Using security groups to block certain IP addresses.
3) Using AWS Shield to mitigate DDoS attacks.
4) Using CloudFront to implement access control with OAI.
5) Using the network ACLs to block certain IP addresses.
AWS WAF
A firewall service that inspects web application traffic
communications to block attacks and unauthorized access to
vulnerabilities
 Blocks SQL injection, cross-site scripting, and other malicious requests
 Custom rules can be set for blocking (rate-based rules / IP-based filters / regular expression patterns / size limits / allow-or-deny actions)
 Monitoring in conjunction with CloudWatch

[Diagram: WAF inspects traffic in front of a web app (EC2) in a region and blocks unauthorized access]
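A hedged wafv2 sketch of a rate-based rule like the one in the question (the names and the 600-requests-per-5-minutes limit are illustrative):

import boto3

waf = boto3.client("wafv2")

# Block any source IP that exceeds ~600 requests per 5-minute window.
waf.create_web_acl(
    Name="rate-limit-acl",
    Scope="REGIONAL",  # use "CLOUDFRONT" for CloudFront distributions
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "rate-limit",
            "Priority": 1,
            "Statement": {
                "RateBasedStatement": {"Limit": 600, "AggregateKeyType": "IP"}
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "rate-limit",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "rate-limit-acl",
    },
)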
Amazon GuardDuty
A service that uses machine learning and other techniques to detect security threats to AWS infrastructure and applications

[Diagram: GuardDuty analyzes VPC flow logs, DNS logs, and CloudTrail, and rates malicious access with High/Medium/Low threat assessments]
Amazon Inspector
A diagnostic service for Amazon EC2 hosts that deploys an agent to diagnose platform vulnerabilities

 Automatically analyzes system settings and behavior on demand for AWS resources

 Uses built-in rules packages for analysis:
-CVE (Common Vulnerabilities & Exposures)
-CIS (Center for Internet Security)
-Rules based on best practices
-Run-time behavior analysis

 Creates a detailed report with recommended response procedures

 Integrates with the development process through API collaboration
AWS Shield
AWS Shield is automatic mitigation systems against DDoS attacks
in L3/L4, applied with CloudFront and Route53

 Applies automatic mitigation systems at edge locations at L3/L4
 Integrated with CloudFront and Route 53 edge locations to inspect all incoming packets
 Automatically mitigates 96% of DDoS attacks
 Standard edition is free / Advanced edition is paid

[Diagram: AWS Shield absorbs DDoS attacks at the edge location]
AWS Shield
Advanced version allows you to work with WAF to provide strong
protection against large scale attacks

Standard
 Supports L3/L4 DDoS attacks
 Free and applicable to all users
 Protection against SYN/UDP floods, reflection attacks, etc.
 Performs automatic detection and automatic mitigation
 Runs automatically, built into services such as CloudFront and Route 53

Advanced
 In addition to L3/L4, works with WAF to protect against L7 DDoS attacks
 Defends against larger attacks
 24/7 access to the AWS DDoS Response Team (DRT)
 "DDoS cost protection for scaling" protects AWS bills from resource usage spikes
 Reporting
[Q] ACM

Company A has a web application with Auto Scaling group and ALB configured on an EC2
instance. Encryption of data communication needs to be achieved and an SSL certificate
needs to be loaded into the ALB.

Select the AWS service you should use to centrally manage your SSL certificates.

1) IAM Credential Manager


2) Amazon Cognito
3) AWS KMS
4) AWS Certificate Manager
AWS Certificate Manager
Provisioning, managing and deploying Secure Sockets
Layer/Transport Layer Security (SSL/TLS) certificates

Reference: https://aws.amazon.com/jp/blogs/security/how-to-help-achieve-mobile-app-transport-security-compliance-by-using-amazon-cloudfront-and-aws-certificate-manager/
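A minimal ACM sketch (the domain names are placeholders); the issued certificate ARN can then be attached to the ALB's HTTPS listener:

import boto3

acm = boto3.client("acm")

# Request a public certificate validated via DNS (a CNAME added in Route 53).
cert = acm.request_certificate(
    DomainName="www.example.com",
    SubjectAlternativeNames=["example.com"],
    ValidationMethod="DNS",
)
print(cert["CertificateArn"])  # pass this ARN to the ALB HTTPS listener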
[Q] CloudHSM

A financial institution uses AWS-based business systems. It is currently looking to expand globally, with branches in key regions in Europe. The European region has its own security standards, and as a financial institution it is imperative to encrypt data according to those standards.

Choose the best solution to meet such security standards.

1) Using AWS KMS, create a CMK in a custom keystore and store the key material in
AWS CloudHSM.
2) Using AWS KMS, create a CMK that will be AWS-managed and the key materials will
be stored in AWS CloudHSM.
3) Using AWS KMS, create a CMK in a custom keystore and store the key material in
AWS KMS.
4) Using AWS KMS, create a CMK that will be AWS-managed and the key materials will
be stored in AWS KMS.
CloudHSM
CloudHSM is a service that protects encryption keys using a dedicated, tamper-resistant hardware security module (HSM). It is used to meet strict encryption requirements.
The scope of
Networks and data
Network and Data
The results of analyzing the range of questions from 1625 are as follows

Selecting EMR:  Based on the scenario, you will be asked to select Amazon EMR for data analysis requirements.

Selecting Athena:  Based on the scenario, you will be asked to select Amazon Athena for data analysis requirements.

Selecting migration services:  Based on the scenario, you will be asked to select the appropriate migration service.

Selecting Global Accelerator:  Based on the scenario, you will be asked to select AWS Global Accelerator to meet requirements such as improving network performance.

Selecting Transit Gateway:  Based on the scenario, you will be asked to select AWS Transit Gateway to meet the requirement of simplifying network connectivity.
[Q] Select EMR

An enterprise has a web application running on multiple EC2 instances. You are building a mechanism to store and analyze the application log files. Considering the number of servers, the log files will be large, and you need substantial data processing capacity to analyze them.

Which of the following services would best meet this requirement? (Select two)

1) Save the application log files to Glacier and process them by Redshift.
2) Save the application log files to S3 and process them by Amazon EMR.
3) Save the application log files in S3 and process them by S3 Select.
4) Save the application log files in DynamoDB and process them by Lambda function.
Amazon EMR
EMR can set up big data frameworks such as Apache Spark,
Apache Hive, and Presto to process and analyze large amounts of
data

Reference: https://aws.amazon.com/jp/blogs/news/optimizing-downstream-data-processing-with-amazon-kinesis-data-firehose-and-amazon-emr-running-apache-spark/
[Q] Select Athena

A company stores the execution data of a web application running on multiple EC2 instances in S3, and needs to analyze the data stored in S3 using standard SQL queries.

Which is the most cost-effective service for meeting these requirements?

1) Import the data into an Amazon Redshift cluster and query the data.
2) Query the data using Amazon Athena against the S3 bucket object.
3) Query the data using Amazon EMR against the S3 bucket object.
4) Query the data using S3 Select against the S3 bucket object.
Amazon Athena
An interactive query service that allows you to easily analyze data
directly in Amazon S3

[Diagram: Athena directly queries data in S3, performs fast queries on large data sets, and works with BI tools via JDBC/ODBC/API]
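A minimal Athena sketch (the database, table, and output bucket are placeholders):

import time
import boto3

athena = boto3.client("athena")

# Run a standard SQL query directly against data in S3.
qid = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes, then fetch the result rows.
while athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"] in ("QUEUED", "RUNNING"):
    time.sleep(1)
rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]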
[Q] Select migration service

A major retailer has decided to migrate its on-premise database system to AWS. They
need to migrate their on-premises MongoDB database to Amazon DynamoDB. Because
the data to be migrated is so large, they need a tool to help them migrate to AWS.

Please select a solution that can meet this requirement. (Select two.)

1) Extract the data using the Schema Conversion Tool (SCT).


2) Load on AWS Snowball Edge devices and migrate data to AWS.
3) Migrate data to Amazon DynamoDB using the AWS Database Migration Service (DMS).
4) Set up AWS Direct Connect and migrate your data to AWS.
5) Use AWS Storage Gateway to migrate the data to Amazon DynamoDB.
Migration services
Services used for infrastructure migration and data migration to the
AWS cloud

It can collect server configuration data, usage data, and behavioral


AWS Application Discovery
data to provide information needed for migration, such as server
Service utilization data and dependency mapping

AWS Database Migration Database Migration Tool that enables you to migrate your database
Service to AWS quickly and securely

AWS Server Migration An agentless service that makes migrating thousands of on-premise
Service workloads to AWS easier and faster than ever before

It automatically converts the source database schema and most of


AWS Schema
the database code objects such as views, stored procedures, and
Conversion Tool functions to the target database compatible format
AWS Application Discovery Service
Services that provide information needed for migration, such as server
utilization data and dependency mapping

 Discovery Agents collect information on the devices targeted for migration (understanding the VMware/Windows/Linux environment)
 Collects information about dependencies between the current equipment and software
 As a result, manual survey work can be avoided
AWS Database Migration Service
A database migration tool that allows you to migrate your database to
AWS quickly and securely

[Diagram: DMS migrates on-premises DBs, DBs on EC2, and RDS databases to DBs on EC2, RDS, or Redshift]
AWS Server Migration Service
An agentless service that makes migrating thousands of on-premise
workload servers to AWS easier and faster than ever before

 A service that allows you to easily migrate large numbers of


servers, up to 50 VMs per account at the same time
 Conducting Migration in an Agentless Type
 A VM-specific migration tool that captures incremental
changes against VMware on-premises and transfers them
automatically to AWS
 You can create an AMI in the destination AWS and migrate it
to an EC2 instance
[Q] Select Global Accelerator

A major gaming company is building an action game on AWS that requires real-time
communication. The users of this game are from all over the world, and accelerated
communication is very important for usability. The game data is communicated via its own
application using the UDP protocol.

Choose the best solution to improve the communication performance of this game data.

1) Using the Global Accelerator to communicate game data.


2) Using CloudFront to communicate your game data.
3) Using Elastic Load Balancer to communicate game data.
4) Using Route 53 to communicate game data.
AWS Global Accelerator
Provides global traffic support that improves user traffic
performance by up to 60% globally

Reference: https://aws.amazon.com/jp/blogs/networking-and-content-delivery/accessing-private-application-load-balancers-and-instances-through-aws-global-accelerator/
[Q] Select Transit Gateway

A consulting firm uses multiple AWS accounts to run its applications. Because there are
multiple AWS accounts with multiple VPCs, the firm decided to establish routing between
all private subnets. The architecture should be simple and allow for transitive routing.

Choose the best solution to configure such a network connection.

1) Create an AWS Transit Gateway to connect between VPCs.


2) Connect between VPCs using VPC peering.
3) Connect between VPCs using AWS Managed VPN.
4) Share VPCs with each account using AWS resource sharing.
AWS Transit Gateway
Simplify your network by connecting your VPC and on-premises
network through a central hub

Reference: https://aws.amazon.com/jp/transit-gateway/
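A hedged sketch of the hub pattern (the VPC and subnet IDs are placeholders; cross-account use additionally requires sharing the gateway via AWS RAM):

import boto3

ec2 = boto3.client("ec2")

# Create the central hub (it must reach the 'available' state before attaching).
tgw = ec2.create_transit_gateway(Description="org-wide hub")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach each VPC to the hub; routing between attachments is transitive.
for vpc_id, subnet_ids in [
    ("vpc-aaa111", ["subnet-aaa111"]),
    ("vpc-bbb222", ["subnet-bbb222"]),
]:
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId=vpc_id,
        SubnetIds=subnet_ids,
    )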
The scope of Environmental
Automation
The scope of environmental automation questions
The results of analyzing the range of questions from 1625 are as follows

Selecting environmental automation services:  Based on the scenario, you will be asked to select services related to automation, sharing of infrastructure configurations, and deployment of web applications.

Selecting the OpsWorks type:  Based on the scenario, you will be asked to choose the best OpsWorks type.
Environmental Automation Services
AWS provides a wealth of environmental automation services to enable DevOps

[Diagram: the DevOps pipeline stages (coding, build, testing, deployment, provisioning, monitoring) mapped to services: CodeCommit and CodeBuild for coding and build, CodePipeline spanning the pipeline, CodeDeploy for deployment, CloudFormation for provisioning, CloudWatch for monitoring, ECS/EKS (Docker compatible), Elastic Beanstalk (web app deployment), and OpsWorks (infrastructure configuration and management)]
Environmental Automation Services
AWS provides a wealth of environmental automation services to enable DevOps

CodeCommit/CodeBuild/CodeDeploy/CodePipeline: These services are used for code management, build, and deployment. CodePipeline can also automate CloudFormation and ECS deployments.

Elastic Beanstalk: An automated deployment service for deploying and scaling web applications and services on common servers

OpsWorks: A configuration management service that allows you to automate the configuration, deployment, and management of servers as managed instances of Chef or Puppet

CloudFormation: A templated provisioning service for describing and provisioning all infrastructure resources in AWS

Amazon ECS: A Docker container service for building environments on AWS, setting up environment images, and coding infrastructure settings in a Dockerfile
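To make CloudFormation's templated provisioning concrete, here is a minimal boto3 sketch that creates a stack from an inline template. The stack name and the single bucket resource are assumptions made for the example; real templates describe whole environments.

import boto3, json

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        # One illustrative resource; any AWS resource type can be declared here
        "ExampleBucket": {"Type": "AWS::S3::Bucket"},
    },
}

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="example-stack",  # illustrative name
    TemplateBody=json.dumps(template),
)
# The stack, and every resource in it, can later be updated or deleted as one unit.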
[Q] Select environmental automation services

As a solution architect, you've built a Ruby-based web application using Cloud9. You want to upload this code to the AWS cloud and have it deployed there. The requirement is to automate the deployment process and version control as much as possible.

Which is the best service to meet this requirement?

1) OpsWorks
2) CloudFormation
3) Amazon ECS
4) AWS Elastic Beanstalk
AWS Elastic Beanstalk
An automated service for the construction and deployment of standard configurations of web applications

• A service to deploy web applications quickly and easily
• Deploys web applications with support for Java, PHP, Ruby, Python, Node.js, .NET, Docker, and Go
• Deploys and scales with familiar servers such as Apache, Nginx, Passenger, IIS, etc.
• Upload your code and Elastic Beanstalk automates the deployment, from capacity provisioning, load balancing, and Auto Scaling to application health monitoring
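A minimal boto3 sketch of what "upload the code and let Elastic Beanstalk do the rest" looks like. The application name, environment name, and solution stack string are illustrative assumptions; valid stack names can be listed with list_available_solution_stacks().

import boto3

eb = boto3.client("elasticbeanstalk")

eb.create_application(ApplicationName="ruby-web-app")  # illustrative name

# Behind this single call, Elastic Beanstalk provisions capacity,
# load balancing, Auto Scaling, and health monitoring.
eb.create_environment(
    ApplicationName="ruby-web-app",
    EnvironmentName="ruby-web-app-prod",
    # Solution stack names are version-specific; this one is a placeholder.
    # Pick a real one from eb.list_available_solution_stacks().
    SolutionStackName="64bit Amazon Linux 2 v3.x running Ruby 2.7",
)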
AWS Elastic Beanstalk Use Cases
Used for the deployment of web applications, and also for workloads with long task times

Web Server Environment
• ELB + Auto Scaling lets you run scalable web applications, with the scalable configuration and versions managed as code
• Used for Docker container applications
• Multiple containers can run in the environment using ECS

Worker Environment
• Enables scalable batch processing with SQS + Auto Scaling
• Creates infrastructure for regular task execution, e.g. a backup process that runs every day at 1 am
• Runs a web application on a worker host, giving the workload time to perform long-running processing
[Q] Select OpsWorks

Your company is thinking of setting up a CI/CD environment on AWS. Currently, it uses Chef for configuration management of its on-premises servers. Therefore, you are required to use a service that lets you use your existing Chef cookbooks in AWS.

Which of the following services offers fully managed use of Chef cookbooks?

1) OpsWorks for Chef Automate
2) OpsWorks Stacks
3) OpsWorks for Puppet Enterprise
4) OpsWorks for Chef Enterprise
OpsWorks
OpsWorks is a configuration management service for configuring and operating applications using Chef or Puppet

Its three offerings are OpsWorks for Chef Automate, OpsWorks for Puppet Enterprise, and OpsWorks Stacks
OpsWorks for Chef Automate
A fully managed Chef server providing continuous deployment and compliance checks

• Chef Automate is a service that uses Chef cookbooks and recipes to automate infrastructure management
• Allows you to configure a continuous delivery pipeline for infrastructure and apps
• Resources get configuration updates from the Chef server
• Provides visibility into operational/compliance/workflow events
• You can build a Chef server in AWS and use tools such as the Chef Automate API and the Chef DK
OpsWorks for Puppet Enterprise
A fully managed Puppet master that automates application testing, deployment, and operation

• The Puppet master manages the nodes in the infrastructure, stores node information, and serves as a central repository for Puppet modules
• The Puppet master can automate the entire stack, handling tasks such as software and OS configuration, package installation, database configuration, change management, policy enforcement, monitoring, and quality assurance
• Modules contain instructions on how to configure the infrastructure, enabling the reuse and sharing of Puppet code
• Using Puppet, you can automate how nodes are configured, deployed, and managed, for EC2 instances and on-premises devices alike
OpsWorks Stacks
An AWS-original service that provides a simple and flexible way to create and manage stacks and applications

• Models your environment with components called stacks, layers, instances, and applications
• Configuration management and auto-scaling in code
• Supports Linux/Windows servers
• Allows task automation through lifecycle events
• The OpsWorks Agent executes recipes in local mode with Chef Client
• A stack manages configuration information in JSON format for all instances; it is the top-level entity in OpsWorks
• OpsWorks Stacks does not require a Chef server
Elastic Beanstalk vs OpsWorks
Where Elastic Beanstalk specializes in web app deployment, OpsWorks automates the configuration and construction of more advanced infrastructure environments

Elastic Beanstalk: Application deployment automation. A service for deploying and scaling web applications and services on familiar servers.

OpsWorks: Infrastructure configuration automation. Configuration and management services that allow you to automate the configuration, deployment, and management of servers as managed instances of Chef and Puppet.

[Diagram: both stacks show ELB, the app layer, and Auto Scaling; the OpsWorks stack adds a custom backend layer with its own Auto Scaling]
Code Series
A set of services that automate the committing, building, testing, and deployment of development code on a Git-based repository

CodePipeline: A fully managed pipeline from coding to deployment

Coding (source management) → Build → Testing → Deployment

CodeCommit: A managed source management service to securely host Git-based repositories
CodeBuild: A fully managed build service that compiles source code, runs tests, and creates deployable software packages
CodeDeploy: An automated deployment service for development, testing, and production environments
The scope of User Management
User Management Questions
The results of analyzing the range of questions from the 1625 examples are as follows

Selecting the AWS Directory Service
• Based on the scenario, you will be asked to select the AWS Directory Service option that meets requirements like AD integration.

Using AWS Managed Microsoft AD
• Based on the scenario, you will be asked about the features and configuration methods of AWS Managed Microsoft AD.

Using AWS SSO
• Based on the scenario, you will be asked about the configuration methods needed to utilize SSO.

Using AWS STS
• Based on the scenario, you will be asked how to configure STS to provide temporary credentials.

Using Cognito
• Based on the scenario, you will be asked how to configure Cognito to provide authentication capabilities to an application.
User Management Services (non-IAM)
The following services are important for user management and authentication outside of IAM.

AWS Directory Service: A service that runs Active Directory on AWS to manage users, or integrates with AD in an on-premises environment to achieve single sign-on.

Amazon Cognito: Provides user sign-up/sign-in and access control capabilities for web and mobile applications; supports social identity providers such as Facebook, Google, and Amazon, as well as enterprise identity providers via SAML 2.0.

AWS Single Sign-On (SSO): A service that facilitates the centralized management of multiple AWS accounts and access to business applications, providing users with single sign-on access. Centralizes access and user permissions for all accounts in AWS Organizations.

AWS Security Token Service (STS): A service that allows IAM users, or federated users authenticated by AD, to request temporary, limited-privilege credentials.
Active Directory
A mechanism for authenticating users by username and password. Windows AD is widely used for user management.

[Diagram: a user presents an ID and password ("My ID is... My password is...") and the directory responds "Authentication complete. Access granted."]
Active Directory
A directory service is a mechanism for storing a variety of user-related information and providing user authentication

Manages user information:
• ID
• User name
• Full name
• Department
• Group
• Person in charge
• Phone number
• Email address
• Password

Features:
• Identity and access management
• Improved operational efficiency
• Improved compliance
• Enhanced security, etc.
• App access control
• File sharing
• Patch management, etc.

An indispensable authentication system when using Windows systems
[Q] Select AWS Directory Service

A leading e-commerce site uses Microsoft Active Directory to provide users and groups with access to resources on its on-premises infrastructure. The company decided to go for a hybrid configuration with AWS for its IT infrastructure. The company uses SQL Server-based applications and wants to configure a trust relationship to enable Single Sign-On (SSO), because it is essential to work with AWS.

As a solution architect, which of the following AWS services would you recommend for this use case? (Select two)

1) Simple AD
2) AD Connector
3) AWS SSO
4) AWS Managed Microsoft AD
5) Cognito
Select the AWS Directory Service
Create a new directory in AWS, or achieve control using existing Active Directory authentication

Simple AD
• Creates a new fully managed directory on the AWS side
• A standalone managed directory powered by Samba 4, an Active Directory-compatible server
• The Small type supports up to 500 users; the Large type supports up to 5,000 users
• A subset of AWS Managed Microsoft AD features is available

AD Connector
• Integrates on-premises Active Directory with IAM management
• A directory gateway used to redirect requests to the on-premises Microsoft Active Directory
• Allows you to log on to AWS applications such as Amazon WorkSpaces, Amazon WorkDocs, or Amazon WorkMail using your existing company credentials
• Single sign-on in conjunction with AWS SSO is possible

AWS Managed Microsoft AD
• Creates a fully managed AD on the AWS side that is compatible with Microsoft Active Directory
• Enables single sign-on in conjunction with AWS SSO by setting up a trust relationship between AWS and on-premises Microsoft AD
• Lets you set up and run directories on AWS, or connect AWS resources to an existing on-premises Microsoft AD
• Manages up to 50,000 users
Simple AD
Users can easily authenticate and access Amazon WorkSpaces without having to be IAM users.

[Diagram: a VPC (10.0.0.0/16) spanning two AZs with public subnets 10.0.0.0/24 and 10.0.1.0/24; Simple AD implements authentication management for an EC2 instance and the Amazon WorkSpaces in each AZ]
AD Connector
A service that leverages an existing directory and enables access to the AWS environment

[Diagram: an on-premises data center running Active Directory connects through AD Connector to a VPC (10.0.0.0/16) spanning two AZs with public subnets 10.0.0.0/24 and 10.0.1.0/24, each containing EC2 instances]
[Q] Using AWS Managed Microsoft AD

A leading e-commerce site uses Microsoft Active Directory to provide users and groups access to resources on its on-premises infrastructure. The company has decided to make its IT infrastructure a hybrid configuration with AWS. It is necessary to access resources in both environments using on-premises credentials stored in Active Directory.

In this scenario, which of the following can be used to meet this requirement?

1) Setting up a SAML 2.0-based federation with web identity federation.
2) Using AWS Organizations to apply IAM management to on-premises environments.
3) Setting up a SAML 2.0-based federation using Microsoft Active Directory Federation Services (AD FS).
4) Setting up a SAML 2.0-based federation with Cognito.
AWS Managed Microsoft AD
Create Microsoft AD on AWS to integrate AWS user management with the on-premises environment
[Q] Using AWS SSO
The company has decided to make its IT infrastructure a hybrid configuration with AWS,
which requires the integration of on-premises data center Lightweight Directory Access
Protocol (LDAP) directory services with VPC using IAM. The identity store currently in
use is not compatible with SAML.

Which of the following provides the most effective approach to implement integration?

1) Enable single sign-on between AWS and LDAP using the AWS Single Sign-On (SSO)
service.
2) Enforcing IAM credentials using LDAP credentials and matching IAM roles.
3) Utilize a custom identity broker application.
4) Using IAM policy to reference LDAP identifiers and AWS credentials
Using AWS SSO
AWS Single Sign-On (SSO) is a service that facilitates the centralized management of SSO access to AWS accounts and applications

• Connects to an existing AWS Managed Microsoft AD directory, so you can manage users with the standard Active Directory management tools provided with Windows Server
• Supports identity federation with SAML 2.0
• AWS SSO adds a custom federation (identity) broker to the AWS Managed Microsoft AD or AWS SSO directory and works with many services

Reference: https://aws.amazon.com/jp/blogs/security/introducing-aws-single-sign-on/
[Q] Using AWS STS

As a solutions architect, you have designed a solution to provide single sign-on to the existing staff of a company. In this solution, it is necessary to grant permissions to temporary federated users.

Which is the best service to meet these requirements?

1) Utilizing AWS STS and SAML.
2) Utilizing SAML with IAM users.
3) Delegating authority via IAM policies and IAM roles.
4) Implementing authentication using AWS SSO.
Security Token Service (STS)
STS is a service that provides limited and temporary security credentials

[Diagram: IAM users and authenticated federated users request access, and STS issues temporary credentials]

Temporary credentials are used in scenarios involving identity federation (federation through custom identity brokers or web identity federation using SAML 2.0), cross-account access, and IAM roles.
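A minimal boto3 sketch of requesting temporary credentials by assuming an IAM role with STS. The role ARN, account ID, and session name are placeholders.

import boto3

sts = boto3.client("sts")

resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ExampleRole",  # placeholder
    RoleSessionName="federated-user-session",              # placeholder
    DurationSeconds=3600,  # the credentials expire automatically
)

creds = resp["Credentials"]
# Use the temporary credentials for a limited-privilege session
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(session.client("sts").get_caller_identity())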
[Q] Using Cognito
A company is implementing a mobile application with AWS Lambda, API Gateway, and DynamoDB. As a solution architect, you need to ensure maximum security by allowing users to connect via Google login and turning on MFA.
Choose a solution that can meet this requirement.

1) AWS SSO
2) AD Connector
3) Simple AD
4) Amazon Cognito
Cognito
If you want to add user authentication to your application, use Cognito

Reference: https://docs.aws.amazon.com/ja_jp/cognito/latest/developerguide/amazon-cognito-integrating-user-pools-with-identity-pools.html
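A minimal sketch of the user-pool side of Cognito with boto3: signing a user up against an existing user pool app client and authenticating. The client ID, username, and password are placeholders; Google federation and MFA are configured on the user pool itself, and the USER_PASSWORD_AUTH flow must be enabled on the app client.

import boto3

idp = boto3.client("cognito-idp")

# Register a user against an existing user pool app client (placeholder ID)
idp.sign_up(
    ClientId="example-app-client-id",
    Username="player1@example.com",
    Password="CorrectHorse7!",
)

# Authenticate; with MFA enabled on the pool, this returns a challenge
# that the client answers with respond_to_auth_challenge()
idp.initiate_auth(
    ClientId="example-app-client-id",
    AuthFlow="USER_PASSWORD_AUTH",
    AuthParameters={"USERNAME": "player1@example.com", "PASSWORD": "CorrectHorse7!"},
)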
The scope of Work Management
Work Management Questions
The results of analyzing the range of questions from the 1625 examples are as follows

Amazon MQ selection
• Based on the scenario, you will be asked to choose Amazon MQ for a requirement to utilize queues.

AWS Step Functions vs SQS
• Based on the scenario, you will be asked to choose the best solution for creating workflows or processes from SQS and Step Functions.

AWS Step Functions vs SWF
• Based on the scenario, you will be asked to choose the best solution for creating workflows or processes from Step Functions and SWF.
[Q] Select Amazon MQ
Edutech Venture plans to migrate its digital learning platform from its on-premises environment to AWS. The current application provides English learning content to non-native speakers and processes their learning status to calculate the optimal learning time. The company wants to keep using its RabbitMQ message broker cluster for task management in this process.

Which service should be used in migrating this solution?

1) Amazon MQ
2) Amazon SQS
3) Step Functions
4) SWF
Features of Queue Services
Understand how to use each service in different cases and deal with potential exam questions.

Amazon SNS
• A fully managed pub/sub messaging service, used for message and alert notifications between components.
• Select SNS if you want event notifications or messaging/push notifications on AWS.

Amazon SQS
• A fully managed message queuing service that enables distributed parallel processing of the execution process.
• Select SQS for queuing, parallel and distributed task processing, and polling-type notifications on AWS.

Amazon SES
• A service that provides the ability to send and receive emails, so you can implement email notifications in an application.
• Select SES if you want to implement an email feature.

Amazon MQ
• A managed message broker service for Apache ActiveMQ and RabbitMQ that lets you use message brokers in the cloud with industry-standard APIs and protocols.
• Select MQ if you want industry-standard messaging APIs and protocols, such as those of ActiveMQ or RabbitMQ.
[Q] AWS Step Functions vs SQS

Trading Company B operates a system that uses SQS queues to process orders. Recently, a complaint has come in that order processing is sometimes performed twice. As a solution architect, you are required to address this problem.

Choose the response needed to solve this problem.

1) Set the message deduplication ID of SQS.
2) Set the SQS visibility timeout.
3) Change the message size of SQS.
4) Use Step Functions instead of SQS.
Workflow Creation
Step Functions is used to create workflows and build applications for task processing

AWS Step Functions
• Can be integrated with AWS Lambda functions to create serverless workflows, connecting multiple AWS services to perform business processes
• A newer workflow creation tool than SWF

Amazon Simple Workflow Service (SWF)
• Lets developers create and manage workflows that build, run, and scale background jobs with parallel or sequential steps
• An older type of workflow tool

Amazon SQS
• A fully managed message queuing service that enables distributed parallel processing of the execution process
• Select SQS for queuing, parallel/distributed task processing, and polling/pull-type notifications on AWS
• Cannot be used to create a workflow itself
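For the duplicate order-processing question above, the fix is a FIFO queue with a message deduplication ID: SQS then drops duplicates sent within the 5-minute deduplication interval. A minimal boto3 sketch, where the queue URL, message body, and IDs are placeholders:

import boto3

sqs = boto3.client("sqs")

# FIFO queue URL (placeholder); deduplication only works on .fifo queues
queue_url = "https://sqs.ap-northeast-1.amazonaws.com/123456789012/orders.fifo"

sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"order_id": "1234", "action": "process"}',
    MessageGroupId="orders",              # ordering scope within the queue
    MessageDeduplicationId="order-1234",  # resends of this ID within 5 minutes are dropped
)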
AWS Step Functions
Allows you to define a state machine in JSON format and build workflows and task-processing applications

Reference: https://d1.awsstatic.com/webinars/jp/pdf/services/20190522_AWS-Blackbelt_StepFunctions.pdf
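A minimal sketch of defining a state machine in the Amazon States Language (JSON) and registering it with boto3. The Lambda ARN, role ARN, and names are placeholders.

import boto3, json

# Amazon States Language: a single Task state invoking a Lambda function
definition = {
    "StartAt": "ProcessItem",
    "States": {
        "ProcessItem": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:ap-northeast-1:123456789012:function:process-item",  # placeholder
            "End": True,
        }
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="process-item-workflow",  # placeholder
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsRole",  # placeholder
)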
AWS Step Functions
For example, you can implement a mechanism to identify and tag images using Step Functions

[Workflow diagram: Start → ExtractImageMetadata → ImageTypeCheck (unsupported formats go to NotSupportedImageType) → StoreImageMetadata → parallel Rekognition and Thumbnail steps → AddRekognizedTags → End]

Reference: https://d1.awsstatic.com/webinars/jp/pdf/services/20190522_AWS-Blackbelt_StepFunctions.pdf
AWS Step Functions Cooperation
Step Functions can chain various AWS services together to create workflows

Services you can coordinate directly in the Amazon States Language:
• Calling an AWS Lambda function
• Running an AWS Batch job, then running another action based on the results
• Inserting an item into Amazon DynamoDB or retrieving an item from it
• Running an Amazon ECS task and waiting for it to complete
• Publishing to an Amazon SNS topic
• Sending a message to Amazon SQS
• Managing an AWS Glue or Amazon SageMaker job
• Building a workflow to run Amazon EMR jobs
• Starting the execution of another AWS Step Functions workflow

Services that can call Step Functions:
• AWS Lambda
• Amazon API Gateway
• Amazon EventBridge
• AWS CodePipeline
• AWS IoT Rules Engine
• AWS Step Functions

The Activity feature can connect to other services that are not directly supported.
[Q] AWS Step Functions vs SWF

A large bank is building applications to perform data processing on AWS, and it needs to orchestrate data processes in conjunction with Lambda functions, Amazon EMR, and other services to achieve a complex process.

Which of the following options should be used to create this system?

1) AWS Batch
2) AWS Step Functions
3) Amazon Simple Workflow Service (SWF)
4) Amazon SQS
Amazon Simple Workflow Service (SWF)
SWF, the predecessor of Step Functions, is used to create workflows whose workers run on instances
AWS Step Functions vs SWF
It is recommended to use Step Functions, except for some processes that are only available in SWF.

Step Functions
• Describes the state machine in JSON format.
• A serverless application service that integrates with Lambda and runs on Step Functions itself.
• The logic is defined and processed on Step Functions, reducing the amount of code to write.
• Use AWS Step Functions instead of SWF for new applications: its productive and agile approach lets you coordinate application components using a visual workflow.

SWF
• You write a decider program, creating a workflow that separates activity steps from decision steps.
• Worker applications run on EC2 instances, on-premises servers, and elsewhere.
• Applications are built and run in Java and Ruby code (using the AWS Flow Framework).
• Step Functions is recommended because SWF is more complex than Step Functions.
• Use SWF if you need an external signal to intervene in the process, or to launch a child process that returns its result to the parent.
The scope of Cost Optimization
AWS Cost Management
Support tools and services related to cost management, operations, and maintenance

AWS Cost Explorer: A visualization tool for understanding, analyzing, and managing AWS costs and usage

AWS Cost and Usage Report: A report for seeing the cost and usage details of AWS

AWS Budgets: Lets you set custom budgets that send alerts when budget thresholds are exceeded

Pricing Tools: Tools to help with AWS cost estimation (TCO Calculator / Pricing Calculator)

AWS Cost Categories: Lets you categorize costs by your own organization and project structure

Trusted Advisor: A service that provides advice on cost optimization, security, and improving performance
Utilizing the Pricing Tools
Use the official AWS tools for quotes and price comparisons

TCO Calculator: Lets you compare the price of AWS with on-premises or colocation environments.

Pricing Calculator: Lets you conduct an individualized forecast cost estimate in line with your business and personal needs.
TCO Calculator
Compares the costs of environments such as on-premises with AWS and estimates the cost savings

See: https://aws.amazon.com/jp/tco-calculator/
AWS Pricing Calculator
New service to conduct individualized forecast cost estimates in
line with business and personal needs

See: https://calculator.aws/#/
CloudWatch Billing Alarms
CloudWatch's billing feature allows you to set alarms on billing
amounts
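A minimal boto3 sketch of such a billing alarm. Note that billing metrics are published only in us-east-1 and require billing alerts to be enabled on the account first; the threshold and SNS topic ARN are placeholders.

import boto3

# Billing metrics are only published to CloudWatch in us-east-1
cw = boto3.client("cloudwatch", region_name="us-east-1")

cw.put_metric_alarm(
    AlarmName="monthly-bill-over-100-usd",  # placeholder
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,            # the metric updates roughly every few hours
    EvaluationPeriods=1,
    Threshold=100.0,         # alarm when estimated charges exceed $100
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder
)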
AWS Budgets
Custom budgets can be set up and fine-tuned to set alarms for
when costs or usage exceed the budgeted amount.

https://aws.amazon.com/jp/aws-cost-management/aws-budgets/
Cost Explorer
Visualize changes in AWS costs and usage over time and create
custom reports to analyze cost and usage data.
AWS Cost and Usage Report
Provides the most comprehensive data on AWS costs and usage

Lists AWS usage for each service category used by the account and its IAM users, as hourly or daily line items.

https://aws.amazon.com/jp/aws-cost-management/aws-cost-and-usage-reporting/
AWS Cost Categories
The ability to categorize costs by your own organization and project
structure
AWS Trusted Advisor
A service that provides advice on cost optimization, security, and improving performance