Lab 1: Exploring and Interacting With The AWS Management Console and AWS CLI
Lab overview
The Amazon Web Services (AWS) environment is an integrated collection of hardware and software
services designed to provide quick and inexpensive use of resources. The AWS API sits atop the
AWS environment. An API represents a way to communicate with a resource. There are different
ways to interact with AWS resources, but all interaction uses the AWS API. The AWS Management
Console provides a simple web interface for AWS. The AWS Command Line Interface (AWS CLI) is
a unified tool to manage your AWS services through the command line. Whether you access AWS
through the AWS Management Console or using the command line tools, you are using tools that
make calls to the AWS API.
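Note: Whichever interface you use, the same API operation is invoked behind the scenes. As a minimal illustration (assuming the AWS CLI is installed and configured with credentials), listing your S3 buckets in the console and running the following command both result in the same ListBuckets API call:
aws s3api list-buckets --query "Buckets[].Name"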
This lab follows the Architecting Fundamentals module, which focuses on the core requirements for
creating workloads in AWS. This lab reinforces module discussions on the what, where, and how of
building AWS workloads. Students first explore the features of the AWS Management Console and
then use the Amazon Simple Storage Service (Amazon S3) API to deploy and test connectivity to an
Amazon S3 bucket using two different methods: the AWS Management Console and the AWS CLI.
Objectives
After completing this lab, you should be able to do the following:
Icon key
Various icons are used throughout this lab to call attention to different types of instructions and
notes. The following list explains the purpose for each icon:
Start lab
1. To launch the lab, at the top of the page, choose Start Lab.
Caution: You must wait for the provisioned AWS services to be ready before you can
continue.
2. To open the lab, choose Open Console.
You are automatically signed in to the AWS Management Console in a new web browser tab.
Warning: Do not change the Region unless instructed.
If you see the message, You must first log out before logging into a different AWS account:
In some cases, certain pop-up or script blocker web browser extensions might prevent the Start Lab
button from working as intended. If you experience an issue starting the lab:
● Add the lab domain name to your pop-up or script blocker’s allow list or turn it off.
● Refresh the page and try again.
Lab environment
The lab environment provides you with the following resources to get started: an Amazon Virtual
Private Cloud (Amazon VPC), the necessary underlying network structure, a security group allowing
the HTTP protocol over port 80, an Amazon Elastic Compute Cloud (Amazon EC2) instance with the
AWS CLI installed, and an associated Amazon EC2 instance profile. The instance profile
contains the permissions necessary to allow Session Manager, a capability of AWS Systems
Manager, to access the Amazon EC2 instance.
The following diagram shows the interactive flow of the AWS API for creating AWS services and
resources used in the lab through the AWS Management Console and AWS CLI.
Learn more: The AWS Management Console provides secure sign-in using your AWS account root
user credentials or AWS Identity and Access Management (IAM) account credentials. When you first
sign in, the user credentials are authenticated and the home page is displayed. The home page
provides access to each service console and offers a single place to access the information you
need to perform your AWS related tasks. For more information, see What is the AWS Management
Console?.
3. On the navigation bar, choose the Region selector displayed at the top-right corner of the
console, and then choose the Region to which you want to switch.
The Region on the console home page is now changed to the Region you chose.
Caution: If the chosen Region opens up a different webpage instead of the console home page,
choose Cancel and try to choose a different Region.
4. To open the General Settings page, choose the gear icon on the navigation bar.
5. Choose More user settings.
A Successfully updated localization and Region settings message is displayed on top of the screen.
Caution: If the current Region shown on the Region selector in the top-right corner is the same
Region you choose in the default Region dropdown list, you will not see the success message with
Go to new default Region. Try choosing a different Region from the dropdown menu to see this
message and complete the next step.
The Unified Settings page is displayed with the Region set to the Default Region you chose.
Note: If you do not choose a default Region, the last Region you visited becomes your default.
10.Choose the AWS logo displayed in the upper-left-hand corner to return to the console home
page.
11. On the navigation bar, choose the Region selector displayed at the top-right corner of the
console, and then choose the Region that matches the LabRegion value located to the left of
these instructions.
Caution: Verify that you are in the correct Region that matches the LabRegion value located to the
left of these instructions.
Task 1.2: Search with the AWS Management Console
In this task, you explore the search box on the navigation bar, which provides a unified search tool
for locating AWS services and features, service documentation, and the AWS Marketplace.
12.To open a console for a service, go to the Search box in the navigation bar of the AWS
Management Console, and enter cloud.
The more characters you type, the more the search refines your results.
13.To narrow the results to the type of content that you want, choose one of the categories on
the left navigation pane.
14.To quickly navigate to a service or popular features of a service, in the Services section,
hover over the AWS Cloud Map service name in the results and choose the link.
Note: For more details about a documentation result or AWS Marketplace result, hover on the result
title and choose a link.
15.Choose the AWS logo displayed in the upper-left-hand corner to return to the console home
page.
16.On the navigation bar, choose Services to open a full list of services.
17.From the left navigation menu, choose All services or Recently visited, and then choose a
service from the list that you want to add as a favorite.
18.To the left of the service name, select the star.
Note: Repeat the previous step to add more services to your Favorites list.
19.To view the list of favorite services, from the left navigation menu, choose Favorites.
Note: Alternatively, Favorites are pinned and visible on the navigation bar at the top of the console
window.
20.On the navigation bar, choose Services to open a full list of services.
21.In the Favorites list, deselect the star next to the name of a service you wish to remove.
Note: Alternatively, in the Recently visited list or All services list, deselect the star next to the name
of a service that is in your Favorites list.
Task 1.4: Open a console for a service
22.On the navigation bar, choose Services to open a full list of services.
23.Choose a service under Favorites or Recently visited or All services to quickly navigate to a
specific service.
24.Choose the AWS logo displayed in the upper-left-hand corner to return to the AWS
Management Console home page.
26.In the Add widgets menu, choose the title bar at the top of the widget that you want to add,
and then drag the widget onto the console page.
27.To rearrange a widget, configure the following:
● Choose the title bar at the top of the widget, for example, Favorites, and then drag the widget
to a new location on the console page.
28.To resize a widget, configure the following:
● Choose the Recently Visited widget.
● Drag the bottom-right corner of the widget to resize.
Note: You cannot adjust the size of the Welcome to AWS, Explore AWS, and AWS Health widgets.
Congratulations! You have explored the AWS Management Console and learned to customize your
console home screen.
Caution: Verify that you are in the correct Region that matches the LabRegion value located to the
left of these instructions.
Learn more: Amazon S3 is an object storage service that offers industry-leading scalability, data
availability, security, and performance. Customers can use Amazon S3 to store and protect any
amount of data for a range of use cases, such as data lakes, websites, mobile applications, backup
and restore, archive, enterprise applications, Internet of Things (IoT) devices, and big data analytics.
For more information, see What is Amazon S3?.
33.In the navigation pane on the left-hand side of the console, choose Buckets.
34.Choose Create bucket.
35.In the General configuration section, for Bucket name, enter labbucket-NUMBER.
Note: Replace NUMBER in the bucket name with a random number. This ensures that you have a
unique name.
Note: Amazon S3 bucket names must be globally unique and Domain Name System (DNS)
compliant.
36.The AWS Region should match the LabRegion value found to the left of these lab
instructions.
37.Leave all other settings on this page as the default configurations.
38.Choose Create bucket at the bottom of the screen.
You could have created the bucket by calling the Amazon S3 API directly, but here you performed
the same operation through the Amazon S3 console. The console itself uses the Amazon S3 APIs
to send requests to Amazon S3.
The S3 console is displayed. The newly created bucket is displayed among the list of all the buckets
for the account.
Congratulations! You have created a new Amazon S3 bucket with the default configuration.
39.Open the context (right-click) menu for this image link, and choose the option to save the
image to your computer.
● Name your file HappyFace.jpg.
Note: The method to save files varies by web browser. Choose the appropriately worded option
from your context menu.
45.Choose Close.
46.At the top of the AWS Management Console, in the search box, search for and choose
EC2.
47.In the navigation pane on the left-hand side of the console, choose Instances.
48.Select Command Host.
49.Choose Connect.
Learn more: With Session Manager, you can connect to Amazon EC2 instances without having to
expose the SSH port on your firewall or Amazon VPC security group. For more information, see
AWS Systems Manager Session Manager.
51.Choose Connect.
Note: Alternatively, you can copy the CommandHostSessionUrl value from the left side of these lab
instructions and paste it in a new browser tab. The terminal for the Command Host instance opens.
A new browser tab or window opens with a connection to the Command Host instance.
Task 4.2: Use high-level S3 commands with the AWS CLI
In this task, you access the high-level features of Amazon S3 using the AWS CLI.
52. Command: Enter the following command in your Command Host session:
Tip: To copy the command, hover on it and choose the copy icon. Paste the command in the
Command Host session.
Note: The following ls command lists all of the buckets owned by the user.
aws s3 ls
53. Command: Copy the following command to a text editor, replace NUMBER with the random
number you chose for your bucket, and paste the command in the Command Host session.
aws s3 mb s3://labclibucket-NUMBER
Expected output:
make_bucket: labclibucket-xxxxx
Note: To simplify the instructions in this lab, this newly created bucket is referred to as
labclibucket-NUMBER for the remainder of the instructions, regardless of the bucket name you
actually choose in this step.
55. Command: Enter the following command in your Command Host session:
aws s3 ls
56. Command: Copy the following command to a text editor, replace labclibucket-NUMBER with
the name of the S3 bucket you created in the previous step, and paste the command in the
Command Host session.
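Note: The command for this step is not shown in these instructions. Based on the expected output that follows, it is likely an s3 cp command similar to the following (replace labclibucket-NUMBER with your bucket name):
aws s3 cp HappyFace.jpg s3://labclibucket-NUMBER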
57.To run the modified command in your Command Host session, press Enter.
Expected output:
upload: ../../home/ssm-user/HappyFace.jpg to s3://labclibucket-xxxxx/HappyFace.jpg
58. Command: Copy the following command to a text editor, replace labclibucket-NUMBER with
the name of the S3 bucket you created in the previous step, and paste the command in the
Command Host session.
aws s3 ls s3://labclibucket-NUMBER
Notice the uploaded object in the newly created bucket in the output list. You can close the browser
tab.
As demonstrated in this task, the high-level Amazon S3 commands simplify managing Amazon S3
objects. Using these commands, you can manage the contents of Amazon S3 within itself and with
local directories. The S3 commands are built on top of the operations found in the S3 API
commands.
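Note: As a rough illustration of the difference, the high-level aws s3 cp command you used corresponds approximately to the lower-level s3api operation shown here (the bucket and object names are examples only):
aws s3api put-object --bucket labclibucket-NUMBER --key HappyFace.jpg --body HappyFace.jpg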
Congratulations! You have used the AWS CLI to create, list, and copy objects into the Amazon S3
bucket.
Conclusion
Congratulations! You now have successfully:
End lab
Follow these steps to close the console and end your lab.
Note: Do not include any personal, identifying, or confidential information into the lab environment.
Information entered may be visible to others.
Lab overview
As an AWS solutions architect, it is important that you understand the overall functionality and
capabilities of Amazon Web Services (AWS) and the relationship between the AWS networking
components. In this lab, you create an Amazon Virtual Private Cloud (Amazon VPC), a public and a
private subnet in a single Availability Zone, public and private routes, a NAT gateway, and an internet
gateway. These services are the foundation of networking architecture inside of AWS. This
architecture design covers concepts of infrastructure, design, routing, and security.
The following image shows the final architecture for this lab environment:
Objectives
After completing this lab, you should know how to do the following:
Icon key
Various icons are used throughout this lab to call attention to different types of instructions and
notes. The following list explains the purpose for each icon:
Start lab
1. To launch the lab, at the top of the page, choose Start Lab.
Caution: You must wait for the provisioned AWS services to be ready before you can
continue.
2. To open the lab, choose Open Console.
You are automatically signed in to the AWS Management Console in a new web browser tab.
Warning: Do not change the Region unless instructed.
In some cases, certain pop-up or script blocker web browser extensions might prevent the Start Lab
button from working as intended. If you experience an issue starting the lab:
● Add the lab domain name to your pop-up or script blocker’s allow list or turn it off.
● Refresh the page and try again.
Scenario
Your team has been tasked with prototyping an architecture for a new web-based application. To
define your architecture, you need to have a better understanding of public and private subnets,
routing, and Amazon EC2 instance options.
Learn more: With Amazon VPC, you can provision a logically isolated section of the AWS Cloud
where you can launch AWS resources in a virtual network that you define. You have complete
control over your virtual networking environment, including selection of your own IP address ranges,
creation of subnets, and configuration of route tables and network gateways. You can also use the
enhanced security options in Amazon VPC to provide more granular access to and from the Amazon
EC2 instances in your virtual network.
3. At the top of the AWS Management Console, in the search bar, search for and choose VPC.
Caution: Verify that the Region displayed in the top-right corner of the console is the same as the
Region value on the left side of this lab page.
Note: The VPC management console offers a VPC Wizard, which can automatically create several
VPC architectures. However, in this lab you create the VPC components manually.
The console displays a list of your currently available VPCs. A default VPC is provided so that you
can launch resources as soon as you start using AWS.
A You successfully created vpc-xxxxxxxxxx / Lab VPC message is displayed on top of the screen.
● State: Available
The lab VPC has a Classless Inter-Domain Routing (CIDR) range of 10.0.0.0/16, which includes all
IP addresses of the form 10.0.x.x. This range contains 65,536 addresses. You later divide these
addresses into separate subnets.
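Note: You created the VPC through the console, but the same API operation can be called from the AWS CLI. A minimal sketch (the tag value is only an example) would be:
aws ec2 create-vpc --cidr-block 10.0.0.0/16 --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=Lab VPC}]'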
8. From the same page, choose Actions and choose Edit VPC settings.
9. From the DNS settings section, select Enable DNS hostnames.
This option assigns a friendly Domain Name System (DNS) name to Amazon EC2 instances in the
VPC, such as the following:
ec2-52-42-133-255.us-west-2.compute.amazonaws.com
10.Choose Save.
A You have successfully modified the settings for vpc-xxxxxxxxxx / Lab VPC. message is displayed
on top of the screen.
Any Amazon EC2 instances launched into this Amazon VPC now automatically receive a DNS
hostname. You can also create a more meaningful DNS name (for example, app.company.com)
using records in Amazon Route 53.
Congratulations! You have successfully created your own VPC and now you can launch the AWS
resources in this defined virtual network.
A You have successfully created 1 subnet: subnet-xxxxxx message is displayed on top of the
screen.
● State: Available
Note: The VPC has a CIDR range of 10.0.0.0/16, which includes all 10.0.x.x IP addresses. The
subnet you just created has a CIDR range of 10.0.0.0/24, which includes all 10.0.0.x IP addresses.
These ranges might look similar, but the subnet is smaller than the VPC because of the /24 in the
CIDR range.
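Note: For reference, creating this subnet and enabling auto-assign public IPv4 (which you do in the next steps) corresponds roughly to the following AWS CLI sketch (IDs are placeholders):
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.0.0/24
aws ec2 modify-subnet-attribute --subnet-id subnet-xxxxxxxx --map-public-ip-on-launch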
Now, configure the subnet to automatically assign a public IP address for all instances launched
within it.
17.From the Auto-assign IP settings section, select Enable auto-assign public IPv4 address.
18.Choose Save.
A You have successfully changed subnet settings: Enable auto-assign public IPv4 address
message is displayed on top of the screen.
Note: Even though this subnet is named Public Subnet, it is not yet public. A public subnet must
have an internet gateway and route to the gateway. You create and attach the internet gateway and
route tables in this lab.
A You have successfully created 1 subnet: subnet-xxxxxx message is displayed on top of the
screen.
● State: Available
Note: The CIDR block of 10.0.2.0/23 includes all IP addresses that start with 10.0.2.x and 10.0.3.x.
This is twice as large as the public subnet because most resources should be kept private, unless
they specifically need to be accessible from the internet.
Your VPC now has two subnets. However, these subnets are isolated and cannot communicate with
resources outside the VPC. Next, you configure the public subnet to connect to the internet through
an internet gateway.
Congratulations! You have successfully created a public subnet and a private subnet in the lab VPC.
Learn more: An internet gateway serves two purposes: To provide a target in your VPC route tables
for internet-bound traffic, and to perform network address translation (NAT) for instances that have
been assigned public IPv4 addresses.
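Note: For reference, creating and attaching an internet gateway corresponds roughly to the following AWS CLI sketch (IDs are placeholders):
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxxxxxx --vpc-id vpc-xxxxxxxx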
A The following internet gateway was created: igw-xxxxxx - Lab IGW. You can now attach to a VPC
to enable the VPC to communicate with the internet. message is displayed on top of the screen.
You can now attach the internet gateway to your Lab VPC.
25.From the same page, choose Actions and choose Attach to VPC.
26.For Available VPCs, select Lab VPC from the dropdown menu.
27.Choose Attach internet gateway.
A Internet gateway igw-xxxxx successfully attached to vpc-xxxxx message is displayed on top of the
screen.
● State: Attached
The internet gateway is now attached to your Lab VPC. Even though you have created an internet
gateway and attached it to your VPC, you must also configure the route table of the public subnet to
use the internet gateway.
Congratulations! You have successfully created an internet gateway so that internet traffic can
access the public subnet.
Task 4: Route internet traffic in the public subnet to the
internet gateway
In this task, you create a route table and add a route to the route table to direct internet-bound traffic
to your internet gateway and associate your public subnets with your route table. Each subnet in
your VPC must be associated with a route table; the table controls the routing for the subnet. A
subnet can only be associated with one route table at a time, but you can associate multiple subnets
with the same route table.
Learn more: A route table contains a set of rules, called routes, that are used to determine where
network traffic is directed. To use an internet gateway, your subnet’s route table must contain a route
that directs internet-bound traffic to the internet gateway. You can scope the route to all destinations
not explicitly known to the route table (0.0.0.0/0 for IPv4 or ::/0 for IPv6), or you can scope the route
to a narrower range of IP addresses. If your subnet is associated with a route table that has a route
to an internet gateway, it’s known as a public subnet.
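Note: For reference, the console steps in this task correspond roughly to the following AWS CLI calls (IDs are placeholders):
aws ec2 create-route-table --vpc-id vpc-xxxxxxxx
aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxxxxxx
aws ec2 associate-route-table --route-table-id rtb-xxxxxxxx --subnet-id subnet-xxxxxxxx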
There is currently one default route table associated with the VPC, Lab VPC. This routes traffic
locally. You now create an additional route table to route public traffic to your internet gateway.
A Route table rtb-xxxxxxx | Public Route Table was created successfully. message is displayed on
top of the screen.
Note: There is one route in your route table that allows traffic within the 10.0.0.0/16 network to flow
within the network, but it does not route traffic outside of the network.
A Updated routes for rtb-xxxxxxx / Public Route Table successfully message is displayed on top of
the screen.
36.Choose the Subnet associations tab.
37.Choose Edit subnet associations.
38.Select Public Subnet.
39.Choose Save associations.
A You have successfully updated subnet associations for rtb-xxxxxxx / Public Route Table. message
is displayed on top of the screen.
Note: The subnet is now public because it has a route to the internet through the internet gateway.
Learn more: You can use Amazon EC2 security groups to help secure instances within an Amazon
VPC. By using security groups in a VPC, you can specify both inbound and outbound network traffic
that is allowed to or from each Amazon EC2 instance. Traffic that is not explicitly allowed to or from
an instance is automatically denied.
Security: It is recommended to use the HTTPS protocol to improve web traffic security. However, to
simplify this lab, only the HTTP protocol is used.
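Note: For reference, creating a security group that allows inbound HTTP from anywhere corresponds roughly to the following AWS CLI sketch (the name, description, and IDs are placeholders):
aws ec2 create-security-group --group-name "Public SG" --description "Allow HTTP" --vpc-id vpc-xxxxxxxx
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 80 --cidr 0.0.0.0/0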
A Security group (sg-xxxxxxx | Public SG) was created successfully message is displayed on top of
the screen.
Congratulations! You have successfully created a security group that allows HTTP traffic. You need
this in the next task when you launch an Amazon EC2 instance in the public subnet.
Task 6: Launch an Amazon EC2 instance into a public
subnet
In this task, you launch an Amazon EC2 instance into a public subnet. To activate communication
over the internet for IPv4, your instance must have a public IPv4 address that’s associated with a
private IPv4 address on your instance. By default, your instance is only aware of the private
(internal) IP address space defined within the VPC and subnet.
Learn more: The internet gateway that you created logically provides the one-to-one NAT on behalf
of your instance. So when traffic leaves your VPC subnet and goes to the internet, the reply address
field is set to the public IPv4 address or Elastic IP address of your instance, and not its private IP
address.
45.At the top of the AWS Management Console, in the search bar, search for and choose EC2.
For this lab, use a t3.micro instance type. This instance type has 2 vCPUs and 1 GiB of memory.
In this lab, the default storage settings are all that is needed. No changes are required.
Note: To install and configure the new instance as a web server, you provide a user data script that
automatically runs when the instance launches.
64.In the User data - optional section, copy and paste the following:
#!/bin/bash
# To connect to your EC2 instance and install the Apache web server with PHP
yum update -y
yum install -y httpd php8.1
systemctl enable httpd.service
systemctl start httpd
cd /var/www/html
wget https://us-west-2-tcprod.s3.amazonaws.com/courses/ILT-TF-200-ARCHIT/v7.9.2.prod-7555a90f/lab-2-VPC/scripts/instanceData.zip
unzip instanceData.zip
The remaining settings on the page can be left at their default values.
Your Amazon EC2 instance is now launched and configured as you specified.
68.Occasionally choose the console refresh button and wait for Public Instance to display an
Instance state of Running and a Status check of 3/3 checks passed.
Note: The Amazon EC2 instance named Public Instance is initially in a Pending state. The instance
state then changes to Running indicating that the instance has finished booting.
Congratulations! You have successfully launched an Amazon EC2 instance into a public subnet.
Note: If you need to make any section of the console larger, you can resize the horizontal edges of
the containers displayed on the console.
The web page hosted on the Amazon EC2 instance is displayed. The page displays the instance ID
and the AWS Availability Zone where the Amazon EC2 instance is located.
Learn more: Session Manager is a fully managed AWS Systems Manager capability that you use to
manage your Amazon EC2 instances through an interactive one-click browser-based shell or
through the AWS Command Line Interface (AWS CLI). You can use Session Manager to start a
session with an Amazon EC2 instance in your account. After starting the session, you can run bash
commands as you would through any other connection type.
76.At the top of the AWS Management Console, in the search bar, search for and choose
EC2.
77.In the left navigation pane, choose Instances.
78.Select Public Instance and choose Connect.
Learn more: With Session Manager, you can connect to Amazon EC2 instances without needing to
expose the SSH port on your firewall or Amazon VPC security group. For more information, see
AWS Systems Manager Session Manager.
80.Choose Connect.
A new browser tab or window opens with a connection to the Public Instance.
Note: The Session Manager service is not updated in real time. If you experience errors with
Session Manager connecting to an Amazon EC2 instance you just launched, ensure that you have
given the instance a few minutes to launch, pass health checks, and communicate with the Session
Manager service before trying to open a session connection again.
81. Command: Enter the following command to change to the home directory (/home/ssm-user/)
and test web connectivity using the cURL command:
cd ~
curl -I https://aws.amazon.com/training/
Expected output:
HTTP/2 200
content-type: text/html;charset=UTF-8
server: Server
date: Wed, 19 Apr 2023 14:43:47 GMT
x-amz-rid: 6HVPS1JY1XW2S1K34Q3Z
set-cookie: aws-priv=eyJ2IjoxLCJldSI6MCwic3QiOjB9; Version=1;
Comment="Anonymous cookie for privacy regulations";
Domain=.aws.amazon.com; Max-Age=31536000; Expires=Thu, 18-Apr-2024
14:43:47 GMT; Path=/; Secure
set-cookie: aws_lang=en; Domain=.amazon.com; Path=/
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
strict-transport-security: max-age=63072000
x-content-type-options: nosniff
x-amz-id-1: 6HVPS1JY1XW2S1K34Q3Z
last-modified: Thu, 30 Mar 2023 15:58:02 GMT
content-security-policy-report-only: default-src *; connect-src *;
font-src * data:; frame-src *; img-src * data:; media-src *; object-src *;
script-src *; style-src 'unsafe-inline' *; report-uri
https://prod-us-west-2.csp-report.marketing.aws.dev/submit
vary: accept-encoding,Content-Type,Accept-Encoding,User-Agent
x-cache: Miss from cloudfront
via: 1.1 88c333921d5c405e037b84bb8c2dc33e.cloudfront.net (CloudFront)
x-amz-cf-pop: GRU3-P1
x-amz-cf-id: 89R1wtM9vYV0kIQXrEVkcoNzg_C3UfQJIEVkC5BA3xiIH3FD0nVnYw==
Congratulations! You have successfully connected to your public instance using Session Manager.
You can safely close the tab and return to the console.
Note: To create a NAT gateway, you must specify the public subnet in which the NAT gateway
should reside. You must also specify an Elastic IP address to associate with the NAT gateway when
you create it. You cannot change the Elastic IP address after you associate it with the NAT gateway.
After you’ve created a NAT gateway, you must update the route table associated with one or more of
your private subnets to point internet-bound traffic to the NAT gateway. This allows instances in your
private subnets to communicate with the internet.
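Note: For reference, creating a NAT gateway from the AWS CLI looks roughly like the following sketch (IDs are placeholders); the Elastic IP address must be allocated first:
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-xxxxxxxx --allocation-id eipalloc-xxxxxxxx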
A NAT gateway nat-xxxxxxx | Lab NGW was created successfully. message is displayed on top of
the screen.
In the next step, you create a new route table for a private subnet that redirects non-local traffic to
the NAT gateway.
A Route table rtb-xxxxxxx | Private Route Table was created successfully. message is displayed on
top of the screen.
The private route table is created and the details page for the private route table is displayed.
You now add a route to send internet-bound traffic through the NAT gateway.
A Updated routes for rtb-xxxxxxx / Private Route Table successfully message is displayed on top of
the screen.
This route sends internet-bound traffic from the private subnet to the NAT gateway that is in the
same Availability Zone.
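Note: For reference, the route you just added corresponds roughly to this AWS CLI call (IDs are placeholders):
aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-xxxxxxxx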
Congratulations! You have successfully created the NAT gateway and configured the private route
table.
Learn more: When you specify a security group as the source for a rule, traffic is allowed from the
network interfaces that are associated with the source security group for the specified port and
protocol. Incoming traffic is allowed based on the private IP addresses of the network interfaces that
are associated with the source security group (and not the public IP or Elastic IP addresses). Adding
a security group as a source does not add rules from the source security group.
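Note: For reference, allowing inbound traffic from another security group (rather than from a CIDR range) looks roughly like the following AWS CLI sketch (the group IDs are placeholders):
aws ec2 authorize-security-group-ingress --group-id sg-PRIVATE --protocol tcp --port 80 --source-group sg-PUBLIC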
A Security group (sg-xxxxxxx | Private SG) was created successfully message is displayed on top of
the screen.
Learn more: Private instances can route their traffic through a NAT gateway or a NAT instance to
access the internet. Private instances use the public IP address of the NAT gateway or NAT instance
to traverse the internet. The NAT gateway or NAT instance allows outbound communication but
doesn’t allow machines on the internet to initiate a connection to the privately addressed instances.
103. At the top of the AWS Management Console, in the search bar, search for and choose
EC2.
The Launch an instance page is displayed. In this task, you add a tag to the Amazon EC2 instance.
108. Locate the Application and OS Images (Amazon Machine Image) section.
109. Ensure that Amazon Linux is selected as the OS.
110. Ensure that Amazon Linux 2023 AMI is selected in the dropdown menu.
For this lab, use a t3.micro instance type. This instance type has 2 vCPUs and 1 GiB of memory.
In this lab, the default storage settings are all that is needed. No changes are required.
The remaining settings on the page can be left at their default values.
Note: To install and configure the new instance as a web server, you provide a user data script that
automatically runs when the instance launches.
123. In the User data - optional section, copy and paste the following:
#!/bin/bash
# To connect to your EC2 instance and install the Apache web server with PHP
yum update -y
yum install -y httpd php8.1
systemctl enable httpd.service
systemctl start httpd
cd /var/www/html
wget https://us-west-2-tcprod.s3.amazonaws.com/courses/ILT-TF-200-ARCHIT/v7.9.2.prod-7555a90f/lab-2-VPC/scripts/instanceData.zip
unzip instanceData.zip
The remaining settings on the page can be left at their default values.
Your Amazon EC2 instance is now launched and configured as you specified.
The Amazon EC2 instance named Private Instance is initially in a Pending state. The state then
changes to Running, indicating that the instance has finished booting.
127. Occasionally choose the console refresh button and wait for the Instance state to
change to Running.
Congratulations! You have successfully launched an Amazon EC2 instance into a private subnet.
A new browser tab or window opens with a connection to the Private Instance.
Note: The Session Manager service is not updated in real time. If you experience errors with
Session Manager connecting to an Amazon EC2 instance you just launched, ensure that you have
given the instance a few minutes to launch, pass health checks, and communicate with the Session
Manager service before trying to open a session connection again.
132. Command: Enter the following command to change to the home directory
(/home/ssm-user/) and test web connectivity using the cURL command:
cd ~
curl -I https://aws.amazon.com/training/
Expected output:
HTTP/2 200
content-type: text/html;charset=UTF-8
server: Server
date: Wed, 19 Apr 2023 14:59:09 GMT
x-amz-rid: AZPXJ57K93ERATZV588Z
set-cookie: aws-priv=eyJ2IjoxLCJldSI6MCwic3QiOjB9; Version=1;
Comment="Anonymous cookie for privacy regulations";
Domain=.aws.amazon.com; Max-Age=31536000; Expires=Thu, 18-Apr-2024
14:59:08 GMT; Path=/; Secure
set-cookie: aws_lang=en; Domain=.amazon.com; Path=/
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
strict-transport-security: max-age=63072000
x-content-type-options: nosniff
x-amz-id-1: AZPXJ57K93ERATZV588Z
last-modified: Thu, 30 Mar 2023 15:58:02 GMT
content-security-policy-report-only: default-src *; connect-src *;
font-src * data:; frame-src *; img-src * data:; media-src *; object-src *;
script-src *; style-src 'unsafe-inline' *; report-uri
https://prod-us-west-2.csp-report.marketing.aws.dev/submit
vary: accept-encoding,Content-Type,Accept-Encoding,User-Agent
x-cache: Miss from cloudfront
via: 1.1 fb6a4eca9caced7b791557c24b8c6606.cloudfront.net (CloudFront)
x-amz-cf-pop: GRU3-P1
x-amz-cf-id: Tjphb1UhSXmtyHvybuq4QIFwzTurEI0g_saLB2nLjlYRiBbHbqn85Q==
133. Close the Session Manager tab and return to the console.
Congratulations! You have successfully connected to a private instance using Session Manager.
(Optional) Task 1: Troubleshooting connectivity between
the private instance and the public instance
In this optional task, you use the Internet Control Message Protocol (ICMP) to validate a private
instance’s network reachability from the public instance.
Note: This task is optional and is provided in case you have lab time remaining. You can complete
this task or skip to the end of the lab.
Note: To copy the private IPv4 address, hover over it and choose the copy icon.
A new browser tab or window opens with a connection to the Public Instance.
First, use a curl command to retrieve the web page and confirm that the web app hosted on the
private instance is reachable from the public instance.
143. Command: Copy the following command to your notepad. Replace PRIVATE_IP with the
value of the Private IPv4 address for the Private Instance:
curl PRIVATE_IP
Expected output:
<html><body><h1>It works!</h1></body></html>
144. Command: Copy the following command to your notepad. Replace PRIVATE_IP with the
value of the Private IPv4 address for the Private Instance:
ping PRIVATE_IP
145. Command: Copy and paste the updated command in your terminal and press Enter.
ping 10.0.2.131
146. After a few seconds, stop the ICMP ping request by pressing CTRL+C.
The ping request to the private instance fails. Your challenge is to use the console and figure out the
correct inbound rule required in the Private SG to be able to successfully ping the private instance.
If you have trouble completing the optional task, refer to the Optional Task Solution section at the
end of the lab.
Note: This task is optional and is provided in case you have lab time remaining. You can complete
this task or skip to the end of the lab.
147. Return to the browser tab with the AWS Management Console open.
148. In the left navigation pane, choose Instances.
149. Select Public Instance.
150. Choose Connect.
A new browser tab or window opens with a connection to the Public Instance.
153. Command: To view all categories of instance metadata from within a running instance,
run the following command:
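Note: The command for this step is not shown in these instructions. Based on the IMDSv2 token referenced in the next step, it is likely similar to the following, which first requests a session token and then lists the top-level metadata categories:
TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl http://169.254.169.254/latest/meta-data/ -H "X-aws-ec2-metadata-token: $TOKEN"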
154. Command: Run the following command to retrieve the public-hostname (one of the
top-level metadata items that were obtained in the preceding command):
curl http://169.254.169.254/latest/meta-data/public-hostname -H
"X-aws-ec2-metadata-token: $TOKEN"
Note: The IP address 169.254.169.254 is a link-local address and is valid only from the instance.
You have successfully learned how to retrieve instance metadata from your running Amazon EC2
instance.
Conclusion
Creating a VPC with both public and private subnets provides you the flexibility to launch tasks and
services in either a public or private subnet. Tasks and services in the private subnets can access
the internet through a NAT gateway.
End lab
Follow these steps to close the console and end your lab.
Additional resources
● What is Amazon VPC?
● Subnets for Your VPC
● Connect to the internet using an internet gateway
● Configure route tables
● Control traffic to resources using security groups
● NAT gateways
● Public IPv4 addresses
● Understanding the basics of IPv6 networking on AWS
Objectives
By the end of this lab, you will be able to do the following:
Prerequisites
This lab requires the following:
● Access to a notebook computer with Wi-Fi and Microsoft Windows, macOS, or Linux
(Ubuntu, SuSE, or Red Hat)
● An internet browser, such as Chrome, Firefox, or Microsoft Edge
● A plaintext editor
Icon key
Various icons are used throughout this lab to call attention to different types of instructions and
notes. The following list explains the purpose for each icon:
Start lab
1. To launch the lab, at the top of the page, choose Start Lab.
Caution: You must wait for the provisioned AWS services to be ready before you can
continue.
2. To open the lab, choose Open Console.
You are automatically signed in to the AWS Management Console in a new web browser tab.
Warning: Do not change the Region unless instructed.
If you see the message, You must first log out before logging into a different AWS account:
In some cases, certain pop-up or script blocker web browser extensions might prevent the Start Lab
button from working as intended. If you experience an issue starting the lab:
● Add the lab domain name to your pop-up or script blocker’s allow list or turn it off.
● Refresh the page and try again.
Scenario
Your team has been tasked with prototyping an architecture for a new web-based application. To
define your architecture, you need to have a better understanding of load balancers and managed
databases, such as Amazon RDS.
Lab environment
The lab environment provides you with the following resources to get started: an Amazon Virtual
Private Cloud (Amazon VPC), underlying necessary network structure, three security groups to
control inbound and outbound traffic, two EC2 instances in a private subnet, and an associated EC2
instance profile. The instance profile contains the permissions necessary to allow the AWS Systems
Manager Session Manager feature to access the EC2 instance.
The following diagram shows the expected architecture of the important lab resources you build and
how they should be connected at the end of the lab.
Learn more: Amazon Aurora is a fully managed relational database engine that is compatible with
MySQL and PostgreSQL. Aurora is part of the managed database service, Amazon RDS. Amazon
RDS is a web service that makes it easier to set up, operate, and scale a relational database in the
cloud. For more information, see What is Amazon Aurora?.
3. At the top of the AWS Management Console, in the search bar, search for and choose
RDS.
4. In the left navigation pane, choose Databases.
5. Choose Create database.
6. In the Choose a database creation method section, select Standard create.
7. In the Engine options section, configure the following:
● Engine type: Select Aurora (MySQL Compatible).
8. In the Templates section, select Dev/Test.
9. In the Settings section, configure the following:
● DB cluster identifier: Enter aurora.
● Master username: Enter dbadmin.
● Credentials management: Choose Self managed option.
● Master password: Paste the LabPassword value from the left side of these lab instructions.
● Confirm master password: Paste the LabPassword value from the left side of these lab
instructions.
10.In the Instance configuration section, configure the following:
● DB instance class: Select Burstable classes (includes t classes).
● From the dropdown menu, choose the db.t3.medium instance type.
11. In the Availability & durability section, for Multi-AZ deployment, select Don’t create an Aurora
Replica.
Learn more: Amazon RDS Multi-AZ deployments provide enhanced availability and durability for DB
instances, making them a natural fit for production database workloads. When you provision a
Multi-AZ DB instance, Amazon RDS automatically creates a primary DB instance and synchronously
replicates the data to a standby instance in a different Availability Zone. For more information, see
Amazon RDS Multi-AZ.
Note: Since this lab is about knowing the resources required to build a multi-tier architecture, you do
not need to perform a Multi-AZ deployment. You learn how to deploy a Multi-AZ architecture in the
next lab.
12.In the Connectivity section, configure the following:
● Virtual private cloud (VPC): Select LabVPC from the dropdown menu.
● DB subnet group: Select labdbsubnetgroup from the dropdown menu.
● Public access: Select No.
● VPC security group (firewall): Select Choose existing.
● Existing VPC security groups:
○ To remove the default security group from the Existing VPC security groups field,
select the X.
○ In the Existing VPC security groups dropdown menu, enter LabDBSecurityGroup
to choose this option.
Learn more: Subnets are segments of an IP address range in an Amazon VPC that you designate
to group your resources based on security and operational needs. A DB subnet group is a collection
of subnets (typically private) that you create in an Amazon VPC and then designate for your DB
instances. With a DB subnet group, you can specify an Amazon VPC when creating DB instances
using the command line interface or API. If you use the console, you can just select the Amazon
VPC and subnets you want to use. For more information, see Working with DB subnet groups.
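Note: For reference, a DB subnet group can also be created from the AWS CLI; a minimal sketch (the name, description, and subnet IDs are placeholders) would be:
aws rds create-db-subnet-group --db-subnet-group-name labdbsubnetgroup --db-subnet-group-description "Private subnets for the lab database" --subnet-ids subnet-xxxxxxxx subnet-yyyyyyyy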
Learn more: With Amazon VPC, you can launch AWS resources into a virtual network that you have
defined. This virtual network closely resembles a traditional network that you would operate in your
own data center, with the benefits of using the scalable infrastructure of AWS. For more information,
see Amazon VPC VPCs and Amazon RDS.
Caution: Ensure the correct value for DB cluster parameter group is selected from the dropdown
menu. An incorrect value results in errors when building the database replicas.
Learn more: You can encrypt your Amazon RDS instances and snapshots at rest by activating the
encryption option for your Amazon RDS DB instance. Data that is encrypted at rest includes the
underlying storage for a DB instance, its automated backups, read replicas, and snapshots. For
more information, see Encrypting Amazon RDS resources.
17.In the Maintenance section, unselect Enable auto minor version upgrade.
Note: Because this lab is short-lived, there is no need to set up a maintenance schedule for the
database.
Your Aurora MySQL DB cluster is in the process of launching. The Amazon RDS database can take
up to 5 minutes to launch. However, you can continue to the next task.
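Note: For reference, the console configuration above corresponds roughly to creating an Aurora MySQL cluster and DB instance with the AWS CLI. A simplified sketch (the identifiers, password, and security group ID are placeholders) would be:
aws rds create-db-cluster --db-cluster-identifier aurora --engine aurora-mysql --master-username dbadmin --master-user-password EXAMPLE-PASSWORD --db-subnet-group-name labdbsubnetgroup --vpc-security-group-ids sg-xxxxxxxx
aws rds create-db-instance --db-instance-identifier aurora-instance-1 --db-cluster-identifier aurora --engine aurora-mysql --db-instance-class db.t3.medium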
A load balancer serves as the single point of contact for clients. Clients send requests to the load
balancer, and the load balancer sends them to targets, such as EC2 instances. To configure your
load balancer, you create target groups and then register targets with your target groups.
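Note: For reference, the target group, load balancer, targets, and listener you configure in this task correspond roughly to the following AWS CLI sketch (the names, ARNs, and IDs are placeholders):
aws elbv2 create-target-group --name ALBTargetGroup --protocol HTTP --port 80 --vpc-id vpc-xxxxxxxx --target-type instance
aws elbv2 register-targets --target-group-arn TARGET_GROUP_ARN --targets Id=i-xxxxxxxx Id=i-yyyyyyyy
aws elbv2 create-load-balancer --name LabAppALB --subnets subnet-xxxxxxxx subnet-yyyyyyyy --security-groups sg-xxxxxxxx
aws elbv2 create-listener --load-balancer-arn LOAD_BALANCER_ARN --protocol HTTP --port 80 --default-actions Type=forward,TargetGroupArn=TARGET_GROUP_ARN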
20.At the top of the console, in the search bar, search for and choose
EC2.
21.In the left navigation pane, expand the Load Balancing section and choose Target Groups.
22.Choose Create target group.
The Specify group details page is displayed.
The remaining settings on the page can be left at their default values.
24.Choose Next.
A Successfully created target group: ALBTargetGroup message is displayed on top of the screen.
27.In the left navigation pane, expand the Load Balancing section and choose Load Balancers.
28.Choose Create load balancer.
29.In the Load balancer types section, for Application Load Balancer card, choose Create.
A Successfully created load balancer: LabAppALB message is displayed on top of the screen.
The load balancer is in the Provisioning state for a few minutes and then changes to Active.
In this task, you created an Application Load Balancer and added EC2 instances as targets for the
load balancer, demonstrating how to register targets with a load balancer. In addition to individual
EC2 instances, Auto Scaling groups can also be registered as targets. When you use an Auto
Scaling group as a target for load balancing, the instances launched by the group are automatically
registered with the load balancer. Likewise, EC2 instances terminated by the group are automatically
deregistered from the load balancer. Using Auto Scaling groups with a load balancer is
demonstrated in the next lab.
Congratulations! You have successfully created a load balancer, created target groups, and
registered the EC2 instances with the target group.
36.At the top of the console, in the search bar, search for and choose
RDS.
37.In the navigation pane, choose Databases.
38.From the list of DB identifiers, select the hyperlink for the cluster named aurora.
39.On the Connectivity & security tab, you can find the endpoint and port number for the
database cluster. In general, you need the endpoints and the port number to connect to the
database.
40.Copy and paste the Endpoint name of the writer instance value to a notepad. You need this
value later in the lab.
41.On the Configuration tab, you can find details regarding how the database is currently
configured.
42.On the Monitoring tab, you can monitor metrics for the following items of your database:
● The number of connections to a database instance
● The amount of read and write operations to a database instance
● The amount of storage that a database instance is currently using
● The amount of memory and CPU being used for a database instance
● The amount of network traffic to and from a database instance
WARNING: Wait for the Status of the aurora DB instance to show as Available before continuing to
the next task.
Congratulations! You have successfully reviewed the Amazon RDS DB instance metadata through
the console.
43.At the top of the console, in the search bar, search for and choose
EC2.
44.In the left navigation pane, choose Target Groups.
45.Select ALBTargetGroup.
46.In the Targets tab, wait until the instance status is displayed as healthy.
Learn more: Elastic Load Balancing periodically tests the ping path on your web server instance to
determine health. A 200 HTTP response code indicates a healthy status, and any other response
code indicates an unhealthy status. If an instance is unhealthy and continues in that state for a
successive number of checks (unhealthy threshold), the load balancer removes it from service until it
recovers. For more information, see Health checks for your target groups.
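Note: You can also check target health from the AWS CLI; a minimal sketch (the ARN is a placeholder) would be:
aws elbv2 describe-target-health --target-group-arn TARGET_GROUP_ARN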
48.Copy the DNS name and paste the value in a new browser tab to invoke the load balancer.
Tip: To copy the DNS name, hover on it and select the copy icon.
The application connects to the database, loads some initial data, and displays information. With
this application, you can add, edit, or delete an item from a store’s inventory.
The inventory information is stored in the Amazon RDS MySQL-compatible database you created
earlier in the lab. This means that if the web application server fails, the data won’t be lost. It also
means that multiple application servers can access the same data.
Congratulations! You have successfully accessed the web application installed on the EC2 instance
through the load balancer.
Optional Task: Creating an Amazon RDS read replica in a
different AWS Region
In this challenge task, you create a cross-Region read replica from the source DB instance. You
create a read replica in a different AWS Region to improve your disaster recovery capabilities, scale
read operations into an AWS Region closer to your users, and to make it easier to migrate from a
data center in one AWS Region to a data center in another AWS Region.
Note: This challenge task is optional and is provided in case you have lab time remaining. You can
complete this task or skip to the end of the lab.
51.Switch back to the browser tab open to the AWS Management Console.
52.At the top of the console, in the search bar, search for and choose
RDS.
53.In the left navigation pane, choose Databases.
54.Select aurora DB instance as the source for a read replica.
55.Choose Actions and select Create cross-Region read replica.
The remaining settings in this section can be left at their default values.
The remaining settings in this section can be left at their default values.
58.Choose Create.
A Your Read Replica creation has been initiated. message is displayed on the screen.
59.To review the cross-Region read replica in the destination region, choose the hyperlink on
the same page labeled here.
60.Otherwise, choose Close.
Congratulations! You have successfully completed the optional task and started the creation of a
cross-Region read replica for the Amazon RDS database.
Conclusion
Congratulations! You have now successfully completed the following:
In this lab, you learned how to deploy various resources needed for a prototype web application in
your Amazon VPC. However, the architecture that was created in this lab does not meet AWS Cloud
best practices because it is not an elastic, durable, highly available design. By relying on only a
single Availability Zone in the architecture, there is a single point of failure. You learn how to
configure your architecture for redundancy, failover, and high availability in the next lab.
End lab
Follow these steps to close the console and end your lab.
Objectives
After completing this lab, you should be able to do the following:
● Create an Amazon EC2 Auto Scaling group and register it with an Application Load Balancer
spanning across multiple Availability Zones.
● Create a highly available Amazon Aurora database (DB) cluster.
● Modify an Aurora DB cluster to be highly available.
● Modify an Amazon Virtual Private Cloud (Amazon VPC) configuration to be highly available
using redundant NAT gateways.
● Confirm that your database can perform a failover to a read replica instance.
Prerequisites
This lab requires the following:
● Access to a notebook computer with Wi-Fi and Microsoft Windows, macOS, or Linux
(Ubuntu, SuSE, or Red Hat)
● An internet browser, such as Chrome, Firefox, or Microsoft Edge
● A plaintext editor
Icon key
Various icons are used throughout this lab to call attention to different types of instructions and
notes. The following list explains the purpose for each icon:
Start lab
1. To launch the lab, at the top of the page, choose Start Lab.
Caution: You must wait for the provisioned AWS services to be ready before you can
continue.
2. To open the lab, choose Open Console.
You are automatically signed in to the AWS Management Console in a new web browser tab.
Warning: Do not change the Region unless instructed.
In some cases, certain pop-up or script blocker web browser extensions might prevent the Start Lab
button from working as intended. If you experience an issue starting the lab:
● Add the lab domain name to your pop-up or script blocker’s allow list or turn it off.
● Refresh the page and try again.
● An Amazon VPC
● Public and private subnets in two Availability Zones
● An internet gateway (not shown in the diagram) associated with the public subnets
● A NAT gateway in one of the public subnets
● An Application Load Balancer deployed across the two public subnets to receive and forward
incoming application traffic
● An EC2 instance in one of the private subnets, running a basic inventory tracking application
● An Aurora DB cluster containing a single DB instance in one of the private subnets to store
inventory data
The following image shows the initial architecture:
3. At the top of the AWS Management Console, in the search bar, search for and choose VPC.
Note: The Lab VPC was created for you by the lab environment, and all of the application resources
used by this lab exercise exist inside this VPC.
Your Lab VPC appears on the list along with the default VPC.
The subnets that are part of the Lab VPC are displayed in a list. Examine the following details listed
in the columns for Public Subnet 1:
● In the VPC column, you can identify which VPC this subnet is associated with. This subnet
exists inside the Lab VPC.
● In the IPv4 Classless Inter-Domain Routing (CIDR) column, the value of 10.0.0.0/24 means
this subnet includes the 256 IPs (five of which are reserved and unusable) between 10.0.0.0
and 10.0.0.255.
● In the Availability Zone column, you can identify the Availability Zone in which this subnet
resides. This subnet resides in the Availability Zone ending with an “a”.
6. To reveal more details at the bottom of the page, select Public Subnet 1.
Note: To expand the lower window pane, drag the divider up and down. Alternatively, to choose a
preset size for the lower pane you can choose one of the three square icons.
7. On the lower half of the page, choose the Route table tab.
This tab displays details about the routing for this subnet:
● The first entry specifies that traffic destined for the VPC’s CIDR range (10.0.0.0/20) is
routed within the VPC (local).
● The second entry specifies that any traffic destined for the internet (0.0.0.0/0) is routed to the
internet gateway (igw-xxxx). This configuration makes it a public subnet.
8. Choose the Network ACL tab.
This tab displays the network access control list (ACL) associated with the subnet. The rules
currently permit all traffic to flow in and out of the subnet. You can further restrict the traffic by
modifying the network ACL rules or by using security groups.
An internet gateway called Lab IG is already associated with the Lab VPC.
This is the security group used to control incoming traffic to the Application Load Balancer.
12.On the lower half of the page, choose the Inbound rules tab.
The security group permits inbound web traffic (port 80) from everywhere (0.0.0.0/0).
By default, security groups allow all outbound traffic. However, you can modify these rules as
necessary.
14.Select the Inventory-App security group. Ensure that it is the only security group selected.
This is the security group used to control incoming traffic to the AppServer EC2 instance.
15.On the lower half of the page, choose the Inbound rules tab.
The security group only permits inbound web traffic (port 80) from the Application Load Balancer
security group (Inventory-ALB).
17.Select the Inventory-DB security group. Ensure that it is the only security group selected.
This is the security group used to control incoming traffic to the database.
18.On the lower half of the page, choose the Inbound rules tab.
The security group permits inbound MYSQL/Aurora traffic (port 3306) from the application server
security group (Inventory-App).
By default, security groups allow all outbound traffic. As with the outbound rules for the previous
security groups, you can modify these rules as necessary.
20.At the top of the console, in the search bar, search for and choose
EC2.
21.In the left navigation pane, choose Instances.
22.Select the AppServer instance to reveal more details at the bottom of the page.
23.After reviewing the instance details, choose the Actions dropdown menu, choose Instance
settings, and then choose Edit user data.
24.On the Edit user data page, choose Copy user data.
25.Paste the user data you just copied into a text editor. You use it in a later task.
26.Expand the navigation menu by choosing the menu icon in the upper-left corner.
27.In the left navigation pane, choose Target Groups.
28.Select the Inventory-App target group to reveal more details at the bottom of the page.
29.On the lower half of the page, choose the Targets tab.
The Application Load Balancer forwards incoming requests to all targets on the list. The AppServer
EC2 instance you examined earlier is already registered as a target.
32.Copy the InventoryAppSettingsPageURL on the left side of these lab instructions to your
clipboard.
33.Open a new web browser tab, paste the URL you copied in the previous step, and press
Enter.
The settings page for the inventory application is displayed. The database endpoint, database name,
and login details are already populated with the values for the Aurora database.
34.Leave all the settings on the inventory app settings page as the default configurations.
35.Choose Save.
After saving the settings, the inventory application redirects to the main page, and the inventory for
various items is displayed. You can add items to the inventory or modify the details of existing
inventory items. When you interact with this application, the load balancer forwards your requests to
the AppServer instance that you examined earlier, which is registered in the load balancer's target
group. The AppServer instance registers any inventory changes in the Aurora database. The bottom
of the page displays the instance ID and the Availability Zone where the instance resides.
Note: Leave this inventory application web browser tab open while working on the remaining lab
tasks. You return to it in later tasks.
Congratulations! You have now finished inspecting all of the resources created for you in the lab
environment and successfully accessed the provided inventory application. Next, you create a
launch template to use with Amazon EC2 Auto Scaling to make the inventory application highly
available.
36.At the top of the console, in the search bar, search for and choose
EC2.
37.In the left navigation pane, below Instances, choose Launch Templates.
38.Choose Create launch template.
39.In the Launch template name and description section, configure the following:
● Launch template name: Enter Lab-template-NUMBER
Note: Replace NUMBER with a random number, such as the following example:
Lab-template-98469549
Note: If the template name already exists, try again with a different number.
You must choose an AMI. An AMI is an image defining the root volume of the instance along with its
operating system, applications, and related details. Without this information, your template would be
unable to launch new instances.
AMIs are available for various operating systems (OSs). In this lab, you launch instances running the
Amazon Linux 2023 OS.
40.For Application and OS Images (Amazon Machine Image) Info, choose the Quick Start tab.
41.Choose Amazon Linux as the OS.
42.For Amazon Machine Image, choose Amazon Linux 2023 AMI.
43.For Instance type, choose t3.micro from the dropdown menu.
When you launch an instance, the instance type determines the hardware allocated to your instance.
Each instance type offers different compute, memory, and storage capabilities, and they are grouped
in instance families based on these capabilities.
44.In the Network Settings section, for Security groups, choose Inventory-App.
45.Scroll down to the Advanced details section.
46.Expand Advanced details.
47.For IAM instance profile, choose Inventory-App-Role.
48.For Metadata version, choose V2 only (token required).
49.In the User data section, paste the user data you saved to your text editor during Task 1.2.
50.Choose Create launch template.
51.Choose View launch templates.
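Note: For reference only, a similar launch template could also be created through the AWS API. The following is a minimal boto3 sketch, not the lab's required method; the template name matches the example above, while the AMI ID, security group ID, and user data shown are placeholder assumptions that you would replace with the values from your own environment.
import base64
import boto3

ec2 = boto3.client("ec2")

# Placeholder user data; in the lab you reuse the user data copied from the AppServer instance.
user_data = "#!/bin/bash\necho 'replace with the user data copied in Task 1.2'\n"

ec2.create_launch_template(
    LaunchTemplateName="Lab-template-98469549",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",            # placeholder Amazon Linux 2023 AMI ID
        "InstanceType": "t3.micro",
        "SecurityGroupIds": ["sg-0123456789abcdef0"],  # placeholder Inventory-App security group ID
        "IamInstanceProfile": {"Name": "Inventory-App-Role"},
        "MetadataOptions": {"HttpTokens": "required"}, # metadata V2 only (token required)
        "UserData": base64.b64encode(user_data.encode()).decode(),
    },
)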
52.In the left navigation pane, below Auto Scaling, choose Auto Scaling Groups.
53.Choose Create Auto Scaling group and configure the following:
● Auto Scaling group name: Enter Inventory-ASG
● Launch template: From the dropdown menu, select the launch template that you created
earlier.
54.Choose Next.
This tells the Auto Scaling group to register new EC2 instances as part of the Inventory-App target
group that you examined earlier. The load balancer sends traffic to instances that are in this target
group.
● Health check grace period: Enter 300
58.Choose Next.
59.On the Configure group size and scaling - optional page, configure the following:
● Desired capacity: Enter 2
● Min desired capacity: Enter 2
● Max desired capacity: Enter 2
60.In the Additional settings section, choose Enable group metrics collection within
CloudWatch.
61.Choose Next.
For this lab, you always maintain two instances to ensure high availability. If the application is
expected to receive varying loads of traffic, it is also possible to create scaling policies that define
when to launch and terminate instances. However, this is not necessary for the Inventory application
in this lab.
This tags the Auto Scaling group with a name, which also applies to the EC2 instances launched by
the Auto Scaling group. This helps you identify which EC2 instances are associated with which
application or with business concepts, such as cost centers.
64.Choose Next.
65.Review the Auto Scaling group configuration for accuracy, and then choose Create Auto
Scaling group.
Your application will soon be running across two Availability Zones. Amazon EC2 Auto Scaling
maintains the configuration even if an instance or Availability Zone fails.
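Note: For reference only, an equivalent Auto Scaling group could also be created with the AWS SDK. The following boto3 sketch uses placeholder private subnet IDs and a placeholder target group ARN; these values are assumptions, not values provided by the lab.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="Inventory-ASG",
    LaunchTemplate={"LaunchTemplateName": "Lab-template-98469549", "Version": "$Latest"},
    MinSize=2,
    MaxSize=2,
    DesiredCapacity=2,
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
    # Placeholder private subnet IDs (comma-separated) and target group ARN.
    VPCZoneIdentifier="subnet-0aaa1111bbb2222cc,subnet-0ddd3333eee4444ff",
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/Inventory-App/0123456789abcdef"
    ],
)

# Equivalent to selecting Enable group metrics collection within CloudWatch in the console.
autoscaling.enable_metrics_collection(
    AutoScalingGroupName="Inventory-ASG", Granularity="1Minute"
)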
Now that you have created your Auto Scaling group, you can verify that the group has launched your
EC2 instances.
The Activity history section maintains a record of events that have occurred in your Auto Scaling
group. The Status column contains the status of your instances. When your instances are launching,
the status column shows PreInService. After an instance is launched, the status changes to
Successful.
Refresh: If your instances have not reached the InService state yet, you need to wait a few minutes.
You can choose refresh to retrieve the current lifecycle state of your instances.
70.Choose the Monitoring tab. Here, you can review monitoring-related information for your
Auto Scaling group.
Learn more: This page provides information about activity in your Auto Scaling group and the usage
and health status of your instances. The Auto Scaling tab displays Amazon CloudWatch metrics
about your Auto Scaling group, and the EC2 tab displays metrics for the EC2 instances managed by
the Auto Scaling group. For more information, see Monitor your Auto Scaling instances and groups.
Congratulations! You have now successfully created an Auto Scaling group, which maintains your
application’s availability and makes it resilient to instance or Availability Zone failures. Next, you test
the high availability of the application.
71.Expand the navigation menu by choosing the menu icon in the upper-left corner.
72.In the left navigation pane, choose Target Groups.
73.Under Name, select Inventory-App.
74.On the lower half of the page, choose the Targets tab.
In the Registered targets section, there are three instances. This includes the two Auto Scaling
instances named Inventory-App and the original instance you examined in Task 1, named
AppServer. The Health status column shows the results of the load balancer health check that you
performed against the instances. In this task, you remove the original AppServer instance from the
target group, leaving only the two instances managed by Amazon EC2 Auto Scaling.
The load balancer stops routing requests to a target as soon as it is deregistered. The Health status
column for the AppServer instance displays a draining state, and the Health Status Details column
displays Target deregistration is in progress until in-flight requests have completed. After a few
minutes, the AppServer instance finishes deregistering, and only the two Auto Scaling instances
remain on the list of registered targets.
Note: Deregistering the instance only detaches it from the load balancer. The AppServer instance
continues to run indefinitely until you terminate it.
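Note: Deregistration can also be performed through the Elastic Load Balancing API. A minimal boto3 sketch follows; the target group ARN and instance ID are placeholders that you would replace with the Inventory-App target group and the AppServer instance from your environment.
import boto3

elbv2 = boto3.client("elbv2")

# Remove the original AppServer instance from the target group; the load balancer
# then drains in-flight requests before fully deregistering the target.
elbv2.deregister_targets(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/Inventory-App/0123456789abcdef",
    Targets=[{"Id": "i-0123456789abcdef0"}],
)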
77.If the Health status column for the Inventory-App instances does not display healthy yet,
update the list of instances every 30 seconds using the refresh button at the top-right corner
of the page until both Inventory-App instances display healthy in the Health status column. It
might take a few minutes for the instances to finish initializing.
If the status does not eventually change to healthy, ask your instructor for help diagnosing the
problem. Hovering over the information icon in the Health status column provides more information
about the status.
The application is ready for testing. You test the application by connecting to the Application Load
Balancer, which sends your request to one of the EC2 instances managed by Amazon EC2 Auto
Scaling.
Note: If you closed the browser tab, you can reopen the inventory application by doing the following:
● Open a new web browser tab, paste the DNS name from your clipboard, and press Enter.
The load balancer forwards your request to one of the EC2 instances. The bottom of the page
displays the instance ID and Availability Zone.
79. Refresh: Refresh the page in your web browser a few times. The instance ID and Availability
Zone sometimes change between the two instances.
● You send the request to the Application Load Balancer, which resides in the public subnets.
The public subnets are connected to the internet.
● The Application Load Balancer chooses one of the EC2 instances that reside in the private
subnets and forwards the request to the instance.
● The EC2 instance then returns the web page to the Application Load Balancer, which returns
the page to your web browser.
The following image displays the flow of information for this web application:
Congratulations! You have now confirmed that Amazon EC2 Auto Scaling successfully launched
two new Inventory-App instances across two Availability Zones, and you deregistered the original
AppServer instance from the load balancer. The Auto Scaling group maintains high availability for
your application in the event of failure. Next, you simulate a failure by terminating one of the
Inventory-App instances managed by Amazon EC2 Auto Scaling.
80.Return to the EC2 Management Console, but do not close the application tab. You return to it
in later tasks.
81.In the left navigation pane, choose Instances.
82.Choose one of the Inventory-App instances. (It does not matter which one you choose.)
83.Choose Instance State and then choose Terminate instance.
84.Choose Terminate.
After a short period of time, the load balancer health checks will notice that the instance is not
responding and automatically route all incoming requests to the remaining instance.
85.Leaving the console open, switch to the Inventory Application tab in your web browser and
refresh the page several times.
The Availability Zone shown at the bottom of the page stays the same. Even though an instance has
failed, your application remains available.
After a few minutes, Amazon EC2 Auto Scaling also detects the instance failure. You configured
Amazon EC2 Auto Scaling to keep two instances running, so Amazon EC2 Auto Scaling
automatically launches a replacement instance.
86. Refresh: Return to the EC2 Management Console. Reload the list of instances using the
refresh button every 30 seconds until a new EC2 instance named Inventory-App appears.
The newly launched instance displays Initializing under the Status check column. After a few
minutes, the health check for the new instance should become healthy, and the load balancer
resumes distributing traffic between two Availability Zones.
87. Refresh: Return to the Inventory Application tab and refresh the page several times. The
instance ID and Availability Zone change as you refresh the page.
Congratulations! You have successfully verified that your application is highly available.
Task 6.1: Configure the database to run across multiple Availability Zones
In this task, you make the Aurora database highly available by configuring it to run across multiple
Availability Zones.
88.At the top of the console, in the search bar, search for and choose
RDS.
89.In the left navigation pane, choose Databases.
90.Locate the row that contains the inventory-primary value.
91.In the fifth column, labeled Region & AZ, note in which Availability Zone the primary is
located.
Caution: In the following steps you create an additional instance for the database cluster. For true
high-availability architecture, the second instance must be located in an Availability Zone that is
different from that of the primary instance.
92.Select the inventory-cluster radio button associated with your Aurora database cluster.
93.Choose Actions and then choose Add reader.
94.In the Settings section, configure the following:
● DB instance identifier: Enter inventory-replica
95.In the Connectivity section, under Availability Zone, select a different Availability Zone from
the one you noted above where the inventory-primary is located.
96.At the bottom of the page, choose Add reader.
A new DB identifier named inventory-replica appears on the list, and its status is Creating. This is
your Aurora Replica instance. You can continue to the next task without waiting.
Learn more: When your Aurora Replica finishes launching, your database is deployed in a highly
available configuration across multiple Availability Zones. This does not mean that the database is
distributed across multiple instances. Although both the primary DB instance and the Aurora Replica
access the same shared storage, only the primary DB instance can be used for writes. Aurora
Replicas have two main purposes. You can issue queries to them to scale the read operations for
your application. You typically do so by connecting to the reader endpoint of the cluster. That way,
Aurora can spread the load for read-only connections across as many Aurora Replicas as you have
in the cluster. Aurora Replicas also help to increase availability. If the writer instance in a cluster
becomes unavailable, Aurora automatically promotes one of the reader instances to take its place as
the new writer. For more information, see Replication with Amazon Aurora.
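Note: An Aurora Replica can also be added through the Amazon RDS API by creating a new DB instance in the existing cluster. The following boto3 sketch assumes the cluster identifier inventory-cluster; the instance class and Availability Zone shown are placeholders that you would match to your own primary instance.
import boto3

rds = boto3.client("rds")

# Add a reader to the existing Aurora cluster. Aurora places it on the
# cluster's shared storage; only the writer instance accepts writes.
rds.create_db_instance(
    DBInstanceIdentifier="inventory-replica",
    DBClusterIdentifier="inventory-cluster",
    Engine="aurora-mysql",
    DBInstanceClass="db.r5.large",      # placeholder; match the primary's instance class
    AvailabilityZone="us-east-1b",      # choose an AZ different from the primary
)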
While the Aurora Replica launches, continue to the next task to configure high availability for the
NAT gateway, and then return to the Amazon RDS console in the final task to confirm high
availability of the database after the creation of the replica is complete.
Congratulations! You have successfully configured high availability for the database tier.
The Inventory-App servers are deployed in private subnets across two Availability Zones. If they
need to access the internet (for example, to download data), the requests must be redirected
through a NAT gateway (located in a public subnet). The current architecture has only one NAT
gateway in Public Subnet 1, and all of the Inventory-App servers use this NAT gateway to reach the
internet. This means that if Availability Zone 1 failed, none of the application servers would be able to
communicate with the internet. Adding a second NAT gateway in Availability Zone 2 ensures that
resources in private subnets can still reach the internet even if Availability Zone 1 fails.
The existing NAT gateway is displayed. Now create one for the other Availability Zone.
Details for the newly created route table are displayed. There is currently one route, which directs all
traffic locally. Now, add a route to send internet-bound traffic through the new NAT gateway.
A Updated routes for rtb-xxxxxxxxxxxx / Private Route Table 2. message is displayed on top of the
screen.
You have created the route table and configured it to route internet-bound traffic through the new
NAT gateway. Next, associate the route table with Private Subnet 2.
A You have successfully updated subnet associations for rtb-xxxxxxxxxxxx / Private Route Table 2
message is displayed on top of the screen.
Internet-bound traffic from Private Subnet 2 is now sent to the NAT gateway in the same Availability
Zone.
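Note: For reference, the same NAT gateway and route table configuration can also be performed through the Amazon EC2 API. The following boto3 sketch uses placeholder VPC and subnet IDs, which are assumptions you would replace with the values from your lab environment.
import boto3

ec2 = boto3.client("ec2")

vpc_id = "vpc-0123456789abcdef0"             # placeholder Lab VPC ID
public_subnet_2 = "subnet-0aaa1111bbb2222cc"  # placeholder Public Subnet 2 ID
private_subnet_2 = "subnet-0ddd3333eee4444ff" # placeholder Private Subnet 2 ID

# Allocate an Elastic IP and create the second NAT gateway in Public Subnet 2.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(SubnetId=public_subnet_2, AllocationId=eip["AllocationId"])
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Create Private Route Table 2, add a default route through the new NAT gateway,
# and associate the route table with Private Subnet 2.
rtb_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rtb_id, DestinationCidrBlock="0.0.0.0/0", NatGatewayId=nat_id)
ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=private_subnet_2)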
Your NAT gateways are now highly available. A failure in one Availability Zone does not impact traffic
in the other Availability Zone.
Congratulations! You have successfully verified that your NAT gateways are highly available.
112. At the top of the console, in the search bar, search for and choose
RDS.
113. In the left navigation pane, choose Databases.
Caution: Verify that the inventory-replica DB instance status has changed to Available before
continuing to the next step.
114. For the DB identifier, select the inventory-primary DB identifier associated with your
Aurora primary DB instance.
Note: The primary DB instance with DB identifier inventory-primary currently displays Writer under
the Role column. This is the only database node in the cluster that can currently be used for writes.
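Note: If you prefer, the same failover can also be initiated through the Amazon RDS API rather than the console. A minimal boto3 sketch, assuming the cluster identifier inventory-cluster:
import boto3

rds = boto3.client("rds")

# Force a failover so the inventory-replica reader is promoted to be the new writer.
rds.failover_db_cluster(
    DBClusterIdentifier="inventory-cluster",
    TargetDBInstanceIdentifier="inventory-replica",
)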
Observe that the application continues to function correctly after the failover.
Congratulations! You have successfully verified that your database can successfully complete a
failover and is highly available.
Conclusion
Congratulations! You now have successfully completed the following:
● Created an Amazon EC2 Auto Scaling group and registered it with an Application Load
Balancer spanning across multiple Availability Zones.
● Created a highly available Aurora DB cluster.
● Modified an Aurora DB cluster to be highly available.
● Modified an Amazon VPC configuration to be highly available using redundant NAT
gateways.
● Confirmed your database can perform a failover to a read replica instance.
End lab
Follow these steps to close the console and end your lab.
Objectives
By the end of this lab, you should be able to do the following:
Lab environment
You are tasked with evaluating and improving an event-driven architecture. Currently, Customer
Care professionals take snapshots of products and upload them into a specific S3 bucket to store
the images. The development team runs Python scripts to resize the images after they are uploaded
to the ingest S3 bucket. Uploading a file to the ingest bucket invokes an event notification to an
Amazon SNS topic. Amazon SNS then distributes the notifications to three separate SQS queues.
The initial design was to run EC2 instances in Auto Scaling groups for each resizing operation. After
reviewing the initial design, you recommend replacing the EC2 instances with Lambda functions.
The Lambda functions process the stored images into different formats and store the output in a
separate S3 bucket. This proposed design is more cost-effective.
Icon key
Various icons are used throughout this lab to call attention to different types of instructions and
notes. The following list explains the purpose for each icon:
Start lab
1. To launch the lab, at the top of the page, choose Start Lab.
Caution: You must wait for the provisioned AWS services to be ready before you can
continue.
2. To open the lab, choose Open Console .
You are automatically signed in to the AWS Management Console in a new web browser tab.
Warning: Do not change the Region unless instructed.
If you see the message, You must first log out before logging into a different AWS account:
In some cases, certain pop-up or script blocker web browser extensions might prevent the Start Lab
button from working as intended. If you experience an issue starting the lab:
● Add the lab domain name to your pop-up or script blocker’s allow list or turn it off.
● Refresh the page and try again.
3. At the top of the AWS Management Console, in the search box, search for and choose
Simple Notification Service.
4. Expand the navigation menu by choosing the menu icon in the upper-left corner.
5. From the left navigation menu, choose Topics.
6. Choose Create topic.
7. On the Create topic page, in the Details section, configure the following:
● Type: Choose Standard.
● Name: Enter a unique SNS topic name, such as resize-image-topic-, followed by four
random numbers.
8. Choose Create topic.
The topic is created and the resize-image-topic-XXXX page is displayed. The topic’s Name, Amazon
Resource Name (ARN), (optional) Display name, and topic owner’s AWS account ID are displayed in
the Details section.
9. Copy the topic ARN and Topic owner values to a notepad. You need these values later in the
lab.
Example:
Task 2.1: Create an Amazon SQS queue for the thumbnail image
10.At the top of the AWS Management Console, in the search box, search for and choose
Simple Queue Service.
11. On the SQS home page, choose Create queue.
12.On the Create queue page, in the Details section, configure the following:
● Type: Choose Standard (the Standard queue type is set by default).
● Name: Enter thumbnail-queue.
13.The console sets default values for the queue Configuration parameters. Leave the default
values.
14.Choose Create queue.
Amazon SQS creates the queue and displays a page with details about the queue.
15.On the queue’s detail page, choose the SNS subscriptions tab.
16.Choose Subscribe to Amazon SNS topic.
17.From the Specify an Amazon SNS topic available for this queue section, choose the
resize-image-topic SNS topic you created previously under Use existing resource.
Note: If the SNS topic is not listed in the menu, choose Enter Amazon SNS topic ARN and then
enter the topic's ARN that was copied earlier.
18.Choose Save.
Your SQS queue is now subscribed to the SNS topic named resize-image-topic-XXXX.
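Note: For reference only, the same topic, queue, and subscription can be created through the AWS API. The following boto3 sketch is illustrative; the topic name suffix is a placeholder, and unlike the console, the API does not automatically add a queue access policy that allows the topic to send messages (you would set one separately with sqs.set_queue_attributes).
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# Create the topic and the queue (the topic name suffix is a placeholder).
topic_arn = sns.create_topic(Name="resize-image-topic-1234")["TopicArn"]
queue_url = sqs.create_queue(QueueName="thumbnail-queue")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Subscribe the queue to the topic so published notifications are delivered to it.
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)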
Task 2.2: Create an Amazon SQS queue for the mobile image
19.On the SQS console, expand the navigation menu on the left, and choose Queues.
20.Choose Create queue.
21.On the Create queue page, in the Details section, configure the following:
● Type: Choose Standard (the Standard queue type is set by default).
● Name: Enter mobile-queue.
22.The console sets default values for the queue Configuration parameters. Leave the default
values.
23.Choose Create queue.
Amazon SQS creates the queue and displays a page with details about the queue.
24.On the queue’s detail page, choose the SNS subscriptions tab.
25.Choose Subscribe to Amazon SNS topic.
26.From the Specify an Amazon SNS topic available for this queue section, choose the
resize-image-topic SNS topic you created previously under Use existing resource.
Note: If the SNS topic is not listed in the menu, choose Enter Amazon SNS topic ARN and then
enter the topic’s ARN that was copied earlier.
27.Choose Save.
Your SQS queue is now subscribed to the SNS topic named resize-image-topic-XXXX.
28.At the top of the AWS Management Console, in the search box, search for and choose
Simple Notification Service.
29.In the left navigation pane, choose Topics.
30.On the Topics page, choose resize-image-topic-XXXX.
31.Choose Publish message.
The message is published to the topic, and the console opens the topic’s detail page. To investigate
the published message, navigate to Amazon SQS.
36.At the top of the AWS Management Console, in the search box, search for and choose
Simple Queue Service.
37.Choose any queue from the list.
38.Choose Send and receive messages.
39.On the Send and receive messages page, in the Receive messages section, choose Poll for
messages.
40.Locate the Message section. Choose any ID link in the list to review the Details, Body, and
Attributes of the message.
The Message Details box contains a JSON document that contains the subject and message that
you published to the topic.
41.Choose Done.
Congratulations! You have successfully created two Amazon SQS queues and published to a topic
that sends notification messages to a queue.
Task 3.1: Configure the Amazon SNS access policy to allow the Amazon S3
bucket to publish to a topic
42.At the top of the AWS Management Console, in the search box, search for and choose
Simple Notification Service.
43.From the left navigation menu, choose Topics.
44.Choose the resize-image-topic-XXXX topic.
45.Choose Edit.
46.Navigate to the Access policy - optional section and expand it, if necessary.
47.Delete the existing content of the JSON editor panel.
48.Copy the following code block and paste it into the JSON Editor section.
{
    "Version": "2008-10-17",
    "Id": "__default_policy_ID",
    "Statement": [
        {
            "Sid": "__default_statement_ID",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "SNS:GetTopicAttributes",
                "SNS:SetTopicAttributes",
                "SNS:AddPermission",
                "SNS:RemovePermission",
                "SNS:DeleteTopic",
                "SNS:Subscribe",
                "SNS:ListSubscriptionsByTopic",
                "SNS:Publish"
            ],
            "Resource": "SNS_TOPIC_ARN",
            "Condition": {
                "StringEquals": {
                    "AWS:SourceAccount": "SNS_TOPIC_OWNER"
                }
            }
        },
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "s3.amazonaws.com"
            },
            "Action": "SNS:Publish",
            "Resource": "SNS_TOPIC_ARN",
            "Condition": {
                "StringEquals": {
                    "AWS:SourceAccount": "SNS_TOPIC_OWNER"
                }
            }
        }
    ]
}
49.Replace the two occurrences of SNS_TOPIC_OWNER with the Topic owner (12-digit AWS
Account ID) value that you copied earlier in Task 1. Make sure to leave the double quotes.
50.Replace the two occurrences of SNS_TOPIC_ARN with the SNS topic ARN value copied
earlier in Task 1. Make sure to leave the double quotes.
51.Choose Save changes .
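Note: If you prefer to apply the access policy programmatically, the topic's Policy attribute can be set through the API. The following boto3 sketch assumes you saved the completed policy document (with SNS_TOPIC_ARN and SNS_TOPIC_OWNER already replaced) to a local file named topic-policy.json; the file name and topic ARN are placeholders.
import boto3

sns = boto3.client("sns")

# Read the completed policy document from a local file and apply it to the topic.
with open("topic-policy.json") as f:
    policy_document = f.read()

sns.set_topic_attributes(
    TopicArn="arn:aws:sns:us-east-1:111122223333:resize-image-topic-1234",  # placeholder ARN
    AttributeName="Policy",
    AttributeValue=policy_document,
)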
Note: In this lab, you set up a prefix filter so that you receive notifications only when files are added
to a specific folder (ingest).
Note: In this lab, you set up a suffix filter so that you receive notifications only when .jpg files are
uploaded.
58.In the Event types section, select All object create events.
59.In the Destination section, configure the following:
● Destination: Select SNS topic.
● Specify SNS topic: Select Choose from your SNS topics.
● SNS topic: Choose the resize-image-topic-XXXX SNS topic from the dropdown menu.
Or, if you prefer to specify an ARN, choose Enter ARN and enter the ARN of the SNS topic copied
earlier.
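Note: For reference only, the same event notification, including the ingest/ prefix and .jpg suffix filters described in the preceding notes, can be configured through the Amazon S3 API. The bucket name and topic ARN in this boto3 sketch are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="xxxxx-labbucket-xxxxx",  # placeholder ingest bucket name
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "TopicArn": "arn:aws:sns:us-east-1:111122223333:resize-image-topic-1234",
                "Events": ["s3:ObjectCreated:*"],  # all object create events
                "Filter": {
                    "Key": {
                        "FilterRules": [
                            {"Name": "prefix", "Value": "ingest/"},
                            {"Name": "suffix", "Value": ".jpg"},
                        ]
                    }
                },
            }
        ]
    },
)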
61.At the top of the AWS Management Console, in the search box, search for and choose
Lambda.
62.Choose Create function.
63.In the Create function window, select Author from scratch.
64.In the Basic information section, configure the following:
● Function name: Enter CreateThumbnail.
● Runtime: Choose Python 3.9.
● Expand the Change default execution role section.
● Execution role: Select Use an existing role.
● Existing role: Choose the role with the name like XXXXX-LabExecutionRole-XXXXX.
This role provides your Lambda function with the permissions it needs to access Amazon S3 and
Amazon SQS.
Caution: Make sure to choose Python 3.9 under Other supported runtime. If you choose Python
3.10 or the Latest supported, the code in this lab fails as it is configured specifically for Python 3.9.
At the top of the page there is a message like, Successfully created the function CreateThumbnail.
You can now change its code and configuration. To invoke your function with a test event, choose
“Test”.
The SQS trigger is added to your Function overview page. Now configure the Lambda function.
Caution: Do not copy this code—it is just an example to show what is in the zip file.
Code
74.Examine the preceding code. It is performing the following steps:
● Receives an event, which contains the name of the incoming object (Bucket, Key)
● Downloads the image to local storage
● Resizes the image using the Pillow library
● Creates and uploads the resized image to a new folder
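To make this flow concrete, the following is a simplified, hypothetical Python sketch of a handler that performs these steps. It is not the lab's packaged code: the event parsing, output folder name, output bucket environment variable, and thumbnail size are assumptions for illustration only.
import json
import os

import boto3
from PIL import Image  # Pillow library, packaged with the function

s3 = boto3.client("s3")


def handler(event, context):
    # The SQS event wraps the SNS notification, which wraps the S3 event.
    for record in event["Records"]:
        sns_message = json.loads(record["body"])
        s3_event = json.loads(sns_message["Message"])
        for s3_record in s3_event["Records"]:
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]

            # Download the image to local storage.
            local_path = os.path.join("/tmp", os.path.basename(key))
            s3.download_file(bucket, key, local_path)

            # Resize the image using the Pillow library (128x128 is an assumed size).
            with Image.open(local_path) as image:
                image.thumbnail((128, 128))
                resized_path = os.path.join("/tmp", "resized-" + os.path.basename(key))
                image.save(resized_path)

            # Upload the resized image to a new folder in the output bucket.
            output_key = "thumbnails/" + os.path.basename(key)
            s3.upload_file(resized_path, os.environ.get("OUTPUT_BUCKET", bucket), output_key)
When deploying, continue to use the zip file provided by the lab rather than this sketch.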
75.In the Runtime settings section, choose Edit.
● For Handler, enter CreateThumbnail.handler.
76.Choose Save.
At the top of the page there is a message like, Successfully updated the function CreateThumbnail.
Caution: Make sure you set the Handler field to the preceding value, otherwise the Lambda function
will not be found.
Leave the other settings at the default settings. Here is a brief explanation of these settings:
● Memory defines the resources that are allocated to your function. Increasing memory also
increases CPU allocated to the function.
● Timeout sets the maximum duration for function processing.
80.Choose Save.
A message is displayed at the top of the page with text like, Successfully updated the function
CreateThumbnail.
The CreateThumbnail Lambda function has now been configured.
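Note: The handler, memory, and timeout settings can also be updated through the AWS Lambda API. A minimal boto3 sketch follows; the memory and timeout values shown are placeholders, not required lab values.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="CreateThumbnail",
    Handler="CreateThumbnail.handler",
    MemorySize=128,   # MB; increasing memory also increases the CPU allocated to the function
    Timeout=30,       # seconds; maximum duration for a single invocation
)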
81.At the top of the AWS Management Console, in the search box, search for and choose
Lambda.
82.Choose Create function.
83.In the Create function window, select Author from scratch.
84.In the Basic information section, configure the following:
● Function name: Enter CreateMobileImage.
● Runtime: Choose Python 3.9.
● Expand the Change default execution role section.
● Execution role: Select Use an existing role.
● Existing role: Choose the role with the name like XXXXX-LabExecutionRole-XXXXX.
This role provides your Lambda function with the permissions it needs to access Amazon S3 and
Amazon SQS.
Caution: Make sure to choose Python 3.9 under Other supported runtime. If you choose Python
3.10 or the Latest supported, the code in this lab fails as it is configured specifically for Python 3.9.
At the top of the page there is a message like, Successfully created the function CreateMobileImage.
You can now change its code and configuration. To invoke your function with a test event, choose
“Test”.
At the top of the page there is a message like, The trigger mobile-queue was successfully added to
function CreateMobileImage. The trigger is in a disabled state.
The SQS trigger is added to your Function overview page. Now configure the Lambda function.
Caution: Do not copy this code—it is just an example to show what is in the zip file.
Code
94.In the Runtime settings section, choose Edit.
● For Handler, enter CreateMobileImage.handler.
95.Choose Save.
At the top of the page there is a message like, Successfully updated the function
CreateMobileImage.
Caution: Make sure you set the Handler field to the preceding value, otherwise the Lambda function
will not be found.
Leave the other settings at the default settings. Here is a brief explanation of these settings:
● Memory defines the resources that are allocated to your function. Increasing memory also
increases CPU allocated to the function.
● Timeout sets the maximum duration for function processing.
99.Choose Save.
A message is displayed at the top of the page with text like, Successfully updated the function
CreateMobileImage.
Congratulations! You have successfully created two AWS Lambda functions for the serverless
architecture and set the appropriate SQS queue as the trigger for each function.
Caution: Firefox users – Make sure the saved file name is InputFile.jpg (not .jpeg).
101. At the top of the AWS Management Console, in the search box, search for and choose
S3.
102. In the S3 Management Console, choose the xxxxx-labbucket-xxxxx bucket hyperlink.
103. Choose the ingest/ link.
104. Choose Upload.
105. In the Upload window, choose Add files.
106. Browse to and choose the XXXXX.jpg picture you downloaded.
107. Choose Upload.
108. At the top of the AWS Management Console, in the search box, search for and choose
Lambda.
109. Choose the hyperlink for one of your Create- functions.
110. Choose the Monitor tab.
Log messages from Lambda functions are retained in Amazon CloudWatch Logs.
In addition, the logs display any logging messages or print statements from the functions. This
assists in debugging Lambda functions.
Note: When reviewing the logs, you may notice that the Lambda function has been invoked multiple
times. One invocation was for the test message posted to the SNS topic in Task 2. Another log was
generated when the event notification for your S3 bucket was created. The third log was generated
when an object was uploaded to the S3 bucket, which triggered the functions.
If you find the resized image here, you have successfully resized the image from its original format
into different formats.
Congratulations! You have successfully validated the processed image files by browsing Amazon S3
and by reviewing the logs generated by the function code in Amazon CloudWatch Logs.
Optional Tasks
Challenge tasks are optional and are provided in case you have extra time remaining in your lab.
You can complete the optional tasks or skip to the end of the lab.
● (Optional) Task 1: Create a lifecycle configuration to delete files in the ingest bucket after 30
days.
Note: If you have trouble completing the optional task, refer to the Optional Task 1 Solution
Appendix section at the end of the lab.
● (Optional) Task 2: Add an SNS email notification to the existing SNS topic.
Note: If you have trouble completing the optional task, refer to the Optional Task 2 Solution
Appendix section at the end of the lab.
Conclusion
Congratulations! You now have successfully:
End lab
Follow these steps to close the console and end your lab.
Objectives
After completing this lab, you should be able to do the following:
Prerequisites
This lab requires the following:
● Access to a notebook computer with Wi-Fi and Microsoft Windows, macOS, or Linux
(Ubuntu, SuSE, or Red Hat)
● An internet browser, such as Chrome, Firefox, or Microsoft Edge
Icon key
Various icons are used throughout this lab to call attention to different types of instructions and
notes. The following list explains the purpose for each icon:
Start lab
1. To launch the lab, at the top of the page, choose Start Lab.
Caution: You must wait for the provisioned AWS services to be ready before you can
continue.
2. To open the lab, choose Open Console .
You are automatically signed in to the AWS Management Console in a new web browser tab.
Warning: Do not change the Region unless instructed.
If you see the message, You must first log out before logging into a different AWS account:
In some cases, certain pop-up or script blocker web browser extensions might prevent the Start Lab
button from working as intended. If you experience an issue starting the lab:
● Add the lab domain name to your pop-up or script blocker’s allow list or turn it off.
● Refresh the page and try again.
Lab Environment
The lab environment provides you with some resources to get started. There is an Auto Scaling
group of EC2 instances used as publicly accessible web servers. The web server infrastructure is
deployed in an Amazon Virtual Private Cloud (Amazon VPC), configured for multiple Availability
Zones, and fronted by a load balancer. The lab also provides a CloudFront distribution with this load
balancer as an origin.
The following diagram shows the general expected architecture you should have at the end of this
lab. During this lab, you create a new S3 bucket for the existing lab environment. You then configure
this bucket as a new, secure origin to the existing CloudFront distribution.
Amazon CloudFront
CloudFront is a content delivery web service. It integrates with other AWS products so that
developers and businesses can distribute content to end users with low latency, high data transfer
speeds, and no minimum usage commitments.
You can use CloudFront to deliver your entire website, including dynamic, static, streaming, and
interactive content, using a global network of edge locations. CloudFront automatically routes
requests for your content to the nearest edge location to deliver content with the best possible
performance. CloudFront is optimized to work with other AWS services, like Amazon S3, Amazon
Elastic Compute Cloud (Amazon EC2), Elastic Load Balancing (ELB), and Amazon Route 53.
CloudFront also works seamlessly with any non-AWS origin server that stores the original, definitive
versions of your files.
Amazon S3 provides developers and information technology teams with secure, durable, highly
scalable object storage. Amazon S3 has a simple web services interface to store and retrieve any
amount of data from anywhere on the web.
You can use Amazon S3 alone or together with other AWS services such as Amazon EC2, Amazon
Elastic Block Store (Amazon EBS), and Amazon Simple Storage Service Glacier (Amazon S3
Glacier), along with third-party storage repositories and gateways. Amazon S3 provides
cost-effective object storage for a wide variety of use cases, including cloud applications, content
distribution, backup and archiving, disaster recovery, and big data analytics.
Note: If you do not find the list of distributions, ensure that you are at the correct page. Choose
Distributions from the CloudFront navigation menu located on the left side of the console.
This tab contains the details about the current configuration of this particular CloudFront distribution.
It contains the most generally needed information about a distribution. It is also where you configure
the common high-level items for the distribution, such as activating the distribution, logging, and
certificate settings.
7. Copy edit: From the Details section, in the General tab, copy the ARN value and save it in a
text editor. You need this value for a later task.
8. Copy edit: From the Details section, in the General tab, copy the Distribution domain name
value.
The Distribution domain name is also found to the left of these lab instructions under the listing
LabCloudFrontDistributionDNS.
9. Paste the Distribution domain value you copied into a new browser tab.
A simple web page is loaded displaying the information of the web server from which CloudFront
retrieved the content. By requesting content from the Distribution domain value for the CloudFront
distribution, you are verifying that the existing cache is working.
This tab contains the distribution's security configuration, which you can use to protect your
application from the most common web threats with AWS WAF or to prevent users in specific
countries from accessing your content with geographic restrictions. These features are not
configured for use in this lab.
Note: The only origin currently on the distribution is an ELB load balancer. This load balancer is
accepting and directing web traffic for the auto scaling web servers in its target group.
13. Copy edit: Copy the load balancer’s Domain Name System (DNS) value for this origin from
the column labeled Origin domain.
Note: You can adjust the widths of most columns in the console by dragging the dividers in the
header.
14.Paste the DNS value for the load balancer into a new browser tab.
The DNS value for this distribution is also found to the left of these lab instructions under the listing
LabLoadBalancerDNS.
The simple web page hosted on the EC2 instances is displayed again. This web page displays the
same content that was delivered by the CloudFront distribution earlier. However, by requesting from
the load balancer directly you are not using the existing CloudFront caching system. In any single
request, the IP address displayed on the page might differ because traffic is not always routed to the
same EC2 instance behind the load balancer.
This step demonstrates that the origins defined for a distribution are the locations that CloudFront
uses to retrieve content that is not already cached when a request is made to the distribution.
Behaviors define the actions that the CloudFront distribution takes when there is a request for
content, such as which origin serves which content, the Time To Live (TTL) of content in the cache,
how cookies are handled, and how various headers are handled.
This tab contains a list of current behaviors defined for the distribution. You configure new or existing
behaviors here. Behaviors for the distribution are evaluated in the explicit order in which you define
them on this tab.
● Select the radio button in the row next to the behavior you want to modify.
● Choose Edit.
● Choose Cancel to close the page and return to the console.
There is only one behavior currently configured in this lab environment. The behavior accepts HTTP
and HTTPS for both GET and HEAD requests to the load balancer origin.
17.Choose the Error Pages tab.
This tab details which error page is to be returned to the user when the content requested results in
an HTTP 4xx or 5xx status code. You can configure custom error pages for specific error codes
here.
This tab contains the distribution's configuration for object invalidation. Invalidated objects are
removed from CloudFront edge caches. A faster and less expensive alternative to invalidation is to
use versioned object or directory names. There are no invalidations configured for CloudFront
distributions by default.
This tab contains the configuration for any tags applied to the distribution. You can view and edit
existing tags and create new tags here. Tags help you identify and organize your distributions.
20.At the top of the console, in the search bar, search for and choose
S3.
21.In the Buckets section, choose Create bucket.
Note: If you do not find the Create bucket button, ensure you are at the correct page. Choose
Buckets from the navigation menu located on the left side of the console.
22.Copy the LabBucketName value from the left of the lab instructions and paste it into the Bucket
name field.
Note: To simplify the written instructions in this lab, this newly created bucket is referred to as the
LabBucket for the remainder of the instructions.
The AWS Region should match the PrimaryRegion value found to the left of these lab instructions.
The Amazon S3 console is displayed. The newly created bucket is displayed among the list of all the
buckets for the account.
Congratulations! You have created a new S3 bucket with the default configuration.
A message window titled Edit Block public access (bucket settings) is displayed.
You have removed the block on all public access policies for the LabBucket. You are now able to
create access policies for the bucket that allow for public access. The bucket is currently not public,
but anyone with the appropriate permissions can grant public access to objects stored within the
bucket.
35. Copy edit: Copy and paste the Bucket ARN value into a text editor to save the information
for later. It is a string value like arn:aws:s3:::LabBucket located above the Policy box.
The ARN value uniquely identifies this S3 bucket. You need this specific ARN value when creating
bucket based policies.
36. File contents: Copy and paste the following JSON into a text editor.
{
    "Version": "2012-10-17",
    "Id": "Policy1621958846486",
    "Statement": [
        {
            "Sid": "OriginalPublicReadPolicy",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": "RESOURCE_ARN"
        }
    ]
}
37.Replace the RESOURCE_ARN value in the JSON with the Bucket ARN value you copied in
a previous step and append a /* to the end of the pasted Bucket ARN value.
By appending the /* wildcard to the end of the ARN, the policy definition applies to all objects located
in the bucket.
{
    "Version": "2012-10-17",
    "Id": "Policy1621958846486",
    "Statement": [
        {
            "Sid": "OriginalPublicReadPolicy",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": "arn:aws:s3:::lab-bucket-1234/*"
        }
    ]
}
38.Return to the Amazon S3 console.
39.Paste the completed JSON into the Policy box.
40.Choose Save changes.
Caution: If you receive an error message at the bottom of the screen, it is probably caused by a
syntax error in the JSON. The policy does not save until the JSON is valid. You can expand the error
message in the Amazon S3 console for more information about correcting the policy.
By using the * wildcard as the Principal value, all identities requesting the actions defined in the
policy document are allowed to do so. By appending the /* wildcard to the allowed Resources, this
policy applies to all objects located in the bucket.
Note: The policies currently applied to the bucket make the objects in this bucket publicly readable.
In later lab steps, you configure the bucket to be accessible only from the CloudFront distribution.
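Note: For reference only, a bucket policy can also be applied with the Amazon S3 API. The following boto3 sketch applies the same public-read policy; the bucket name is a placeholder that you would replace with your LabBucket name.
import json
import boto3

s3 = boto3.client("s3")

bucket_name = "lab-bucket-1234"  # placeholder; use your LabBucket name

policy = {
    "Version": "2012-10-17",
    "Id": "Policy1621958846486",
    "Statement": [
        {
            "Sid": "OriginalPublicReadPolicy",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:GetObjectVersion"],
            # The /* suffix applies the policy to all objects in the bucket.
            "Resource": f"arn:aws:s3:::{bucket_name}/*",
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket_name, Policy=json.dumps(policy))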
55.Inspect the URL for the object and notice it is an Amazon S3 URL.
56.Close this page with the object.
Congratulations! You have created a folder in an S3 bucket, uploaded an object, and tested that the
object can be retrieved from the S3 URL.
57.At the top of the console, in the search bar, search for and choose S3.
58.Select the link for the LabBucket found in the Buckets section.
62. Copy edit: Copy and paste the Bucket ARN value into a text editor to save the information
for later. It is a string value like arn:aws:s3:::LabBucket located above the Policy box.
The ARN value uniquely identifies this S3 bucket. You need this specific ARN value when creating
bucket based policies.
63. File contents: Copy and paste the following JSON into a text editor.
{
    "Version": "2012-10-17",
    "Statement": {
        "Sid": "AllowCloudFrontServicePrincipalReadOnly",
        "Effect": "Allow",
        "Principal": {
            "Service": "cloudfront.amazonaws.com"
        },
        "Action": [
            "s3:GetObject",
            "s3:GetObjectVersion"
        ],
        "Resource": "RESOURCE_ARN",
        "Condition": {
            "StringEquals": {
                "AWS:SourceArn": "CLOUDFRONT_DISTRIBUTION_ARN"
            }
        }
    }
}
64.Replace the RESOURCE_ARN value in the JSON with the Bucket ARN value you copied in
a previous step and append a /* to the end of the pasted Bucket ARN value.
65.Replace the CLOUDFRONT_DISTRIBUTION_ARN value in the JSON with the ARN value
you copied in a previous step.
Here is an example of the updated policy JSON:
{
    "Version": "2012-10-17",
    "Statement": {
        "Sid": "AllowCloudFrontServicePrincipalReadOnly",
        "Effect": "Allow",
        "Principal": {
            "Service": "cloudfront.amazonaws.com"
        },
        "Action": [
            "s3:GetObject",
            "s3:GetObjectVersion"
        ],
        "Resource": "arn:aws:s3:::lab-bucket-1234/*",
        "Condition": {
            "StringEquals": {
                "AWS:SourceArn": "arn:aws:cloudfront::123456789:distribution/E3LU8VQUNZACBE"
            }
        }
    }
}
A message window titled Edit Block public access (bucket settings) is displayed.
A Successfully edited Block Public Access settings for this bucket. message is displayed on top of
the screen.
Congratulations! You have edited the S3 bucket policy so that the only principal allowed to read
objects is the CloudFront distribution.
Task 5.3: Create a new origin with Origin Access Control (OAC)
In this task, you add the LabBucket as a new origin to the existing CloudFront distribution.
75.At the top of the console, in the search bar, search for and choose
CloudFront.
76.From the CloudFront Distributions page, choose the ID link for the only available distribution.
A page showing the details of the distribution is displayed.
79.From the Origin domain field, choose the name of your LabBucket from the Amazon S3
section.
Note: Recall that the S3 bucket in this lab is never configured as a website. You have only changed
the bucket policy that controls who is allowed to perform GetObject API requests against the S3
bucket.
Note: The Origin Path field is optional; it specifies the directory in the origin to which CloudFront
forwards requests. In this lab, rather than configuring the origin path, you leave it blank and instead
configure a behavior to return only objects matching a specific pattern in the requests.
A Successfully created origin My Amazon S3 Origin message is displayed on top of the screen.
You can safely ignore any message like, The S3 bucket policy needs to be updated, because you
have already updated the bucket policy.
This field configures which matching patterns of object requests the origin can return. Specifically, in
this behavior only .png objects stored in the CachedObjects folder of the Amazon S3 origin can be
returned. Unless there is a behavior configured for them, all other requests to the Amazon S3 origin
would result in an error being returned to the requester. Typically, users would not be requesting
objects directly from the CloudFront distribution URL in this manner; instead, your frontend
application would generate the correct object URL to return to the user.
89.From the Origin and origin groups dropdown menu, choose My Amazon S3 Origin.
90.From the Cache key and origin requests section, ensure Cache policy and origin request
policy (recommended) is selected.
91.From the Cache policy dropdown menu, ensure CachingOptimized is selected.
92.Leave all other settings on the page at the default values.
93.Choose Create behavior.
Congratulations! You have created: a new origin for the Amazon S3 bucket, an Origin Access
Control, and distribution behavior on a CloudFront distribution for the objects stored in the Amazon
S3 bucket for the lab.
94.At the top of the console, in the search bar, search for and choose S3.
95.Select the link for the LabBucket found in the Buckets section.
An error page with an Access denied message is displayed. This is expected because the new
bucket policy does not allow access to the object directly from Amazon S3 URLs. By denying direct
access to S3 objects, users can no longer bypass the controls provided by CloudFront, such as
logging, behaviors, signed URLs, or signed cookies.
Congratulations! You have confirmed the object is no longer directly accessible from the Amazon S3
URL.
Task 7: Test access to the object in the bucket using the
CloudFront distribution
In this task, you confirm that you can access objects in the Amazon S3 origin for the CloudFront
distribution.
100. Copy edit: Copy the CloudFront distribution’s domain DNS value from the left side of
these lab instructions under the listing LabCloudFrontDistributionDNS.
101. Paste the DNS value into a new browser tab.
A simple web page is loaded displaying the information of the web server where CloudFront
retrieved the content from.
The browser makes a request to the CloudFront distribution and the object is returned from the
Amazon S3 origin.
Hint: If the CloudFront URL redirects you to the Amazon S3 URL, or if the object isn’t immediately
available, the CloudFront distribution might still be updating from your recent changes. Return to the
CloudFront console. Select Distributions from the navigation menu. Confirm that the Status column
is Enabled and the Last modified column has a timestamp. You need to wait for this before testing
the new origin and behavior. After you have confirmed the status of the distribution, wait a few
minutes and try this task again.
Congratulations! You have confirmed that the object is returned from a CloudFront request.
Cross-Region replication is a feature of Amazon S3 that automatically copies your data from one
bucket to another bucket located in a different AWS Region. It is a useful feature for disaster
recovery. After cross-Region replication is enabled for a bucket, every new object created in the
source bucket that you have read permissions for is replicated into the destination bucket you define.
Replicated objects keep the same names in the destination bucket, and objects encrypted using an
Amazon S3 managed encryption key are encrypted in the destination bucket in the same manner as
in the source bucket.
To perform cross-Region replication, you must enable object versioning for both the source and
destination buckets. To keep the versioned data manageable, you can deploy lifecycle policies to
automatically archive objects to Amazon S3 Glacier or to delete them.
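Note: For reference only, versioning and a replication rule can also be configured through the Amazon S3 API. The following boto3 sketch uses placeholder bucket names and a placeholder IAM role ARN (Amazon S3 needs a role with permission to replicate objects); all of these values are assumptions.
import boto3

s3 = boto3.client("s3")

source_bucket = "lab-bucket-1234"               # placeholder LabBucket name
destination_bucket = "destination-bucket-1234"  # placeholder DestinationBucket name
replication_role_arn = "arn:aws:iam::111122223333:role/s3-replication-role"  # placeholder

# Versioning must be enabled on both buckets before replication can be configured.
for bucket in (source_bucket, destination_bucket):
    s3.put_bucket_versioning(
        Bucket=bucket, VersioningConfiguration={"Status": "Enabled"}
    )

# Replicate every new object in the source bucket to the destination bucket.
s3.put_bucket_replication(
    Bucket=source_bucket,
    ReplicationConfiguration={
        "Role": replication_role_arn,
        "Rules": [
            {
                "ID": "ReplicateAllNewObjects",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},  # empty filter applies the rule to all new objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": f"arn:aws:s3:::{destination_bucket}"},
            }
        ],
    },
)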
Caution: You do not need to have public access enabled for your personal buckets to use the
cross-Region replication feature. It is enabled in this lab so that you can quickly test if objects are
replicated and retrievable using the Amazon S3 URL.
Note: To simplify the narrative in this lab, this newly created bucket is referred to as the
DestinationBucket in the remainder of instructions.
Optional Task 8.3: Configure a public read policy for the new destination
bucket
You now create a public object read policy for this bucket. You use the public read policy in this lab
only to demonstrate that objects are replicated and retrievable using the Amazon S3 URL. Bucket
policies that allow public access are not recommended for most use cases.
124. Copy edit: Copy and paste the Bucket ARN value into a text editor to save the
information for later. It is a string value like arn:aws:s3:::LabBucket located above the Policy
box.
The ARN value uniquely identifies this S3 bucket. You need this specific ARN value when creating
bucket-based policies.
125. File contents: Copy and paste the following JSON into a text editor:
{
    "Version": "2012-10-17",
    "Id": "Policy1621958846486",
    "Statement": [
        {
            "Sid": "OriginalPublicReadPolicy",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": "RESOURCE_ARN"
        }
    ]
}
126. Replace the RESOURCE_ARN value in the JSON with the Bucket ARN value you
copied in a previous step and append a /* to the end of the pasted Bucket ARN value.
Here is the example of the updated policy JSON:
{
    "Version": "2012-10-17",
    "Id": "Policy1621958846486",
    "Statement": [
        {
            "Sid": "OriginalPublicReadPolicy",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": "arn:aws:s3:::DestinationBucket/*"
        }
    ]
}
Note: The policies currently applied to the bucket make the objects in this bucket publicly readable.
All newly created objects in the LabBucket are replicated into the DestinationBucket.
Note: It is possible to replicate existing objects between buckets, but that is beyond the scope of this
lab. You can find more information about this topic in the document linked in the Appendix section.
Note: If you do not find the CachedObjects folder, choose Buckets from the navigation menu located
on the left side of the console. Then choose the link for the LabBucket from the list. Finally, choose
the Objects tab to ensure that you are at the correct page.
156. Choose the link for the logo2.png from the Files and folders section.
157. In the Object management overview section, examine Replication status and refresh the
page periodically until it changes from PENDING to COMPLETED.
158. From the Amazon S3 navigation menu, select Buckets.
159. In the Buckets section, choose the link for the DestinationBucket.
162. In the Object management overview section, examine Replication Status. It displays
REPLICA.
163. Choose the link located in the Object URL field.
Congratulations! You have completed setting up cross-Region replication for all new objects
uploaded into the LabBucket.
Conclusion
Congratulations! You now have successfully done the following:
End lab
Follow these steps to close the console and end your lab.
Note: Do not include any personal, identifying, or confidential information into the lab environment.
Information entered may be visible to others.
Lab overview
You are tasked with applying your new knowledge to solve several architectural challenges within a
specific business case. First, you are given a list of requirements related to the design. Then, you
perform a series of actions to deploy and configure the services needed to meet the requirements.
The task scenarios provide relevant background and help you understand how the requirements
solve a real-world business problem. Use the templates and the requirements list to complete all of
the tasks in the capstone. Now that you are familiar with concepts and services, this lab solidifies
your knowledge through practice. In the real world, you encounter problems that are not well-defined
or sequenced logically. By the end of this capstone, you should have a better understanding of how
you can apply architectural best practices to real-world problems.
Objectives
After completing this lab, you should be able to do the following:
● Deploy a virtual network spread across multiple Availability Zones in a Region using a
CloudFormation template.
● Deploy a highly available and fully managed relational database across those Availability
Zones (AZ) using Amazon Relational Database Service (Amazon RDS).
● Use Amazon Elastic File System (Amazon EFS) to provision a shared storage layer across
multiple Availability Zones for the application tier, powered by Network File System (NFS).
● Create a group of web servers that automatically scales in response to load variations to
complete your application tier.
Icon key
Various icons are used throughout this lab to call attention to different types of instructions and
notes. The following list explains the purpose for each icon:
● Expected output: A sample output that you can use to verify the output of a command or
edited file.
● Note: A hint, tip, or important guidance.
● Learn more: Where to find more information.
● Security: An opportunity to incorporate security best practices.
● Refresh: A time when you might need to refresh a web browser page or list to show new
information.
● Copy edit: A time when copying a command, script, or other text to a text editor (to edit
specific variables within it) might be easier than editing directly in the command line or
terminal.
● Hint: A hint to a question or challenge.
Start lab
1. To launch the lab, at the top of the page, choose Start Lab.
Caution: You must wait for the provisioned AWS services to be ready before you can
continue.
2. To open the lab, choose Open Console .
You are automatically signed in to the AWS Management Console in a new web browser tab.
Warning: Do not change the Region unless instructed.
If you see the message, You must first log out before logging into a different AWS account:
● Add the lab domain name to your pop-up or script blocker’s allow list or turn it off.
● Refresh the page and try again.
Lab scenario
Example Corp. creates marketing campaigns for small-sized to medium-sized businesses. They
recently hired you to work with the engineering teams to build out a proof of concept for their
business. To date, they have hosted their client-facing application in an on-premises data center, but
they recently decided to move their operations to the cloud in an effort to save money and transform
their business with a cloud-first approach. Some members of their team have cloud experience and
recommended the AWS Cloud services to build their solution.
In addition, they decided to redesign their web portal. Customers use the portal to access their
accounts, create marketing plans, and run data analysis on their marketing campaigns. They would
like to have a working prototype in two weeks. You must design an architecture to support this
application. Your solution must be fast, durable, scalable, and more cost-effective than their existing
on-premises infrastructure.
The following image shows the final architecture of the designed solution:
Note: If the console starts you on the Stacks page instead of the AWS CloudFormation landing
page, then you can get to the Create stack page in two steps.
13.Choose Next.
The Review and create page is displayed. This page is a summary of all settings.
The Resources tab lists the resources that are being created. CloudFormation determines the
optimal order in which to create resources, such as creating the VPC before the subnet.
The Events tab lists (in reverse chronological order) the activities performed by CloudFormation,
such as starting to create a resource and then completing the resource creation. Any errors
encountered during the creation of the stack are also listed on this tab.
Congratulations! You have learned to configure the stack and created all of the resources using the
provided CloudFormation template.
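Learn more: You can create and monitor a stack like this one from the AWS CLI that is installed on
the lab instance. The following commands are a minimal sketch; the stack name, template file name,
and parameter key are placeholders, not values from this lab.

# Create a stack from a local template file (all names below are placeholders)
aws cloudformation create-stack \
  --stack-name LabNetworkStack \
  --template-body file://lab-network-template.yaml \
  --parameters ParameterKey=EnvironmentName,ParameterValue=Lab

# Wait for the stack to reach CREATE_COMPLETE, then list its events (newest first)
aws cloudformation wait stack-create-complete --stack-name LabNetworkStack
aws cloudformation describe-stack-events --stack-name LabNetworkStack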
A Successfully created AuroraSubnetGroup. View subnet group message is displayed at the top of
the screen.
31.In the Choose a database creation method section, select Standard create.
32.In the Engine options section, configure the following:
● In Engine type, select Aurora (MySQL Compatible).
33.In the Templates section, select Production.
34.In the Settings section, configure the following:
● DB cluster identifier: Enter MyDBCluster.
● Master username: Enter admin.
● Credentials management: Select Self managed.
● Master password: Paste the LabPassword value from the left side of these lab instructions.
● Confirm master password: Paste the LabPassword value from the left side of these lab
instructions.
Note: Your Aurora MySQL DB cluster is in the process of launching. The cluster you configured
consists of two instances, each in a different Availability Zone. The Amazon Aurora DB cluster can
take up to 5 minutes to launch. Wait for the mydbcluster status to change to Available. You do not
need to wait for the instances to become available before continuing.
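Learn more: A similar cluster can be launched from the AWS CLI. The following is a minimal sketch,
assuming placeholder values for the password, DB subnet group, security group, and instance class;
it is not a substitute for the console steps in this lab.

# Create the Aurora MySQL DB cluster (password, subnet group, and security group are placeholders)
aws rds create-db-cluster \
  --db-cluster-identifier mydbcluster \
  --engine aurora-mysql \
  --master-username admin \
  --master-user-password <LabPassword> \
  --db-subnet-group-name <aurora-subnet-group> \
  --vpc-security-group-ids <security-group-id>

# Add two DB instances to the cluster; the first becomes the writer and the second an Aurora replica
aws rds create-db-instance --db-instance-identifier mydbcluster-instance-1 \
  --db-cluster-identifier mydbcluster --db-instance-class db.r5.large --engine aurora-mysql
aws rds create-db-instance --db-instance-identifier mydbcluster-instance-2 \
  --db-cluster-identifier mydbcluster --db-instance-class db.r5.large --engine aurora-mysql

# Check the cluster status until it changes to available
aws rds describe-db-clusters --db-cluster-identifier mydbcluster --query 'DBClusters[0].Status'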
42.Choose View connection details in the success message banner to save the connection details
of your mydbcluster database to a text editor.
Note: If you see the error “Failed to turn on enhanced monitoring for database mydbcluster because
of missing permissions”, you can safely ignore it.
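Learn more: You can also retrieve the cluster connection details with the AWS CLI instead of copying
them from the console. This is a minimal sketch; it assumes the cluster identifier used earlier in this
task.

# Print the writer endpoint, reader endpoint, and port for the cluster
aws rds describe-db-clusters --db-cluster-identifier mydbcluster \
  --query 'DBClusters[0].[Endpoint,ReaderEndpoint,Port]'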
56.Choose Customize.
59.From the Virtual Private Cloud (VPC) dropdown menu, select LabVPC.
60.For Mount targets, configure the following:
● Availability zone: Select the Availability Zone ending in “a” from the dropdown menu.
● Subnet ID: Select AppSubnet1 from the dropdown menu.
● Security groups: Select EFSMountTargetSecurityGroup from the dropdown menu.
● To remove the default Security group, choose the X.
● Availability zone: Select the Availability Zone ending in “b” from the dropdown menu.
● Subnet ID: Select AppSubnet2 from the dropdown menu.
● Security groups: Select EFSMountTargetSecurityGroup from the dropdown menu.
● To remove the default Security group, choose the X.
61.Choose Next.
62.Choose Next.
A Success! File system (fs-xxxxxxx) is available. message is displayed at the top of the screen.
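Learn more: A comparable file system and mount targets can be created from the AWS CLI. The
following sketch uses placeholder IDs for the file system, subnets, and security group; the creation
token is an arbitrary example.

# Create the file system (the creation token is an arbitrary placeholder)
aws efs create-file-system --creation-token lab-efs --tags Key=Name,Value=LabEFS

# Create one mount target per application subnet (subnet and security group IDs are placeholders)
aws efs create-mount-target --file-system-id <fs-id> \
  --subnet-id <AppSubnet1-id> --security-groups <EFSMountTargetSecurityGroup-id>
aws efs create-mount-target --file-system-id <fs-id> \
  --subnet-id <AppSubnet2-id> --security-groups <EFSMountTargetSecurityGroup-id>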
71.Choose Next.
The Register targets page is displayed. There are currently no targets to register.
72.Scroll to the bottom of the page and choose Create target group.
A Successfully created target group: myWPTargetGroup message is displayed at the top of the screen.
75.In the Load balancer types section, for Application Load Balancer, choose Create.
The load balancer is in the Provisioning state for a few minutes and then changes to Active.
Congratulations, you have created the target group and an Application Load Balancer.
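Learn more: An equivalent target group, load balancer, and listener can be created from the AWS
CLI. The following is a minimal sketch; the VPC, subnet, security group, and ARN values are
placeholders.

# Create a target group for the web servers
aws elbv2 create-target-group --name myWPTargetGroup \
  --protocol HTTP --port 80 --vpc-id <LabVPC-id> --target-type instance

# Create an internet-facing Application Load Balancer in the public subnets
aws elbv2 create-load-balancer --name myWPAppELB \
  --subnets <PublicSubnet1-id> <PublicSubnet2-id> --security-groups <ALB-security-group-id>

# Forward HTTP traffic from the load balancer to the target group
aws elbv2 create-listener --load-balancer-arn <load-balancer-arn> \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>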
Note: If the console starts you on the Stacks page instead of the AWS CloudFormation landing
page, then you can get to the Create stack page in two steps.
The Configure stack options page is displayed. You can use this page to specify additional
parameters. You can browse the page, but leave settings at their default values.
92.Choose Next.
The Review and create page is displayed. This page is a summary of all settings.
Congratulations, you have created the stack using the provided CloudFormation template.
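Learn more: After the stack is created, you can confirm its status and review its outputs from the
AWS CLI. This is a minimal sketch; replace the placeholder with the name you entered for this stack.

# Show the stack status and any outputs the template exports
aws cloudformation describe-stacks --stack-name <application-stack-name> \
  --query 'Stacks[0].[StackStatus,Outputs]'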
Task 6 instructions: Create the application servers by
configuring an Auto Scaling group and a scaling policy
Task 6.1: Create an Auto Scaling group
98.At the top of the AWS Management Console, in the search box, search for and choose
EC2.
99.In the left navigation pane, under the Auto Scaling section, choose Auto Scaling Groups.
100. Choose Create Auto Scaling group.
105. On the Configure advanced options - optional page, configure the following:
● Select Attach to an existing load balancer.
● Select Choose from your load balancer target groups.
● From the Existing load balancer target groups dropdown menu, select myWPTargetGroup |
HTTP.
● For Additional health check types - optional: Select Turn on Elastic Load Balancing health
checks.
● Health check grace period: Leave at the default value of 300 seconds or more.
● Monitoring: Select Enable group metrics collection within CloudWatch.
106. Choose Next.
107. On the Configure group size and scaling - optional page, configure the following:
● In the Group size section:
○ Desired capacity: Enter 2.
● In the Scaling section:
○ Min desired capacity: Enter 2.
○ Max desired capacity: Enter 4.
108. In the Automatic scaling - optional section, configure the following:
● Select Target tracking scaling policy.
The remaining settings in this section can be left at their default values.
113. Review the Auto Scaling group configuration for accuracy, and then at the bottom of the
page, choose Create Auto Scaling group.
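Learn more: A similar group and scaling policy can be created from the AWS CLI. The following is a
minimal sketch; the group name, launch template name, subnet IDs, target group ARN, and the 50
percent CPU target are placeholders, not values prescribed by this lab.

# Create the Auto Scaling group from a launch template and attach it to the target group
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name myWPAutoScalingGroup \
  --launch-template LaunchTemplateName=<launch-template-name> \
  --min-size 2 --max-size 4 --desired-capacity 2 \
  --vpc-zone-identifier "<AppSubnet1-id>,<AppSubnet2-id>" \
  --target-group-arns <target-group-arn> \
  --health-check-type ELB --health-check-grace-period 300

# Add a target tracking scaling policy that keeps average CPU utilization near the target value
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name myWPAutoScalingGroup \
  --policy-name cpu-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":50.0}'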
Now that you have created your Auto Scaling group, you can verify that the group has launched your
EC2 instances.
The Activity history section maintains a record of events that have occurred in your Auto Scaling
group. The Status column contains the current status of your instances. When your instances are
launching, the Status column shows PreInService. The status changes to Successful after an
instance is launched.
Your Auto Scaling group has launched two Amazon EC2 instances and they are in the InService
lifecycle state. The Health Status column shows the result of the Amazon EC2 instance health check
on your instances.
If your instances have not reached the InService state yet, you need to wait a few minutes. You can
choose the refresh button to retrieve the current lifecycle state of your instances.
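Learn more: The lifecycle state and health status shown in the console can also be retrieved with the
AWS CLI; the group name below is a placeholder.

# List each instance with its lifecycle state and health status
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names <auto-scaling-group-name> \
  --query 'AutoScalingGroups[0].Instances[].{Id:InstanceId,State:LifecycleState,Health:HealthStatus}'

# Review the scaling activity history, similar to the Activity history section
aws autoscaling describe-scaling-activities --auto-scaling-group-name <auto-scaling-group-name>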
118. Choose the Monitoring tab. Here, you can review monitoring-related information for your
Auto Scaling group.
This page provides information about activity in your Auto Scaling group, as well as the usage and
health status of your instances. The Auto Scaling tab displays Amazon CloudWatch metrics about
your Auto Scaling group, while the EC2 tab displays metrics for the Amazon EC2 instances
managed by the Auto Scaling group.
Note: It can take up to 5 minutes for the health checks to show as healthy. Wait for the Health status
to display healthy before continuing.
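Learn more: You can also watch the health checks from the AWS CLI instead of the console; the
target group ARN below is a placeholder.

# Show the health state reported by the load balancer for each registered target
aws elbv2 describe-target-health --target-group-arn <target-group-arn> \
  --query 'TargetHealthDescriptions[].{Target:Target.Id,State:TargetHealth.State}'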
Expected output: The WordPress application URL resembles the following:
myWPAppELB-4e009e86b4f704cc.elb.us-west-2.amazonaws.com/wp-login.php
125. Paste the WordPress application URL value into a new browser tab.
Congratulations, you have created the Auto Scaling group and successfully launched the
WordPress application.
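Note: You can also verify the application from the command line. The following is a minimal check;
substitute your own load balancer DNS name for the example shown above.

# Request the WordPress login page headers through the load balancer
curl -I http://<load-balancer-DNS-name>/wp-login.php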
Conclusion
Congratulations! You have now successfully:
● Deployed a virtual network spread across multiple Availability Zones in a Region using a
CloudFormation template.
● Deployed a highly available and fully managed relational database across those Availability
Zones using Amazon RDS.
● Used Amazon EFS to provision a shared storage layer across multiple Availability Zones for
the application tier, powered by NFS.
● Created a group of web servers that automatically scales in response to load variations to
complete your application tier.
End lab
Follow these steps to close the console and end your lab.
For more information about AWS Training and Certification, see https://aws.amazon.com/training/.