
Lab 1: Exploring and Interacting with the AWS Management Console and AWS CLI

Lab overview
The Amazon Web Services (AWS) environment is an integrated collection of hardware and software
services designed to provide quick and inexpensive use of resources. The AWS API sits atop the
AWS environment. An API represents a way to communicate with a resource. There are different
ways to interact with AWS resources, but all interaction uses the AWS API. The AWS Management
Console provides a simple web interface for AWS. The AWS Command Line Interface (AWS CLI) is
a unified tool to manage your AWS services through the command line. Whether you access AWS
through the AWS Management Console or using the command line tools, you are using tools that
make calls to the AWS API.
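
For example, assuming the AWS CLI is installed and configured with credentials, the following command calls the same ListBuckets API operation that the console uses when it displays your buckets:

aws s3api list-buckets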

This lab follows the Architecting Fundamentals module, which focuses on the core requirements for
creating workloads in AWS. This lab reinforces module discussions on the what, where, and how of
building AWS workloads. Students first explore the features of the AWS Management Console and
then use the Amazon Simple Storage Service (Amazon S3) API to deploy and test connectivity to an
Amazon S3 bucket using two different methods:

●​ AWS Management Console


●​ AWS CLI

Objectives
After completing this lab, you should be able to do the following:

●​ Explore and interact with the AWS Management Console.


●​ Create resources using the AWS Management Console.
●​ Explore and interact with the AWS CLI.
●​ Create resources using the AWS CLI.

Icon key
Various icons are used throughout this lab to call attention to different types of instructions and
notes. The following list explains the purpose for each icon:

●​ Note: A hint, tip, or important guidance.


●​ Learn more: Where to find more information.
●​ Caution: Information of special interest or importance (not so important to cause problems
with the equipment or data if you miss it, but it could result in the need to repeat certain
steps).
●​ WARNING: An irreversible action that could cause the failure of a command or process (including warnings about configurations that cannot be changed after they are made).
●​ Expected output: A sample output that you can use to verify the output of a command or
edited file.
●​ Command: A command that you must run.
●​ Consider: A moment to pause to consider how you might apply a concept in your own
environment or to initiate a conversation about the topic at hand.

Start lab
1.​ To launch the lab, at the top of the page, choose Start Lab.​
Caution: You must wait for the provisioned AWS services to be ready before you can
continue.
2.​ To open the lab, choose Open Console .​
You are automatically signed in to the AWS Management Console in a new web browser tab.​
Warning: Do not change the Region unless instructed.

Common sign-in errors

Error: You must first sign out

If you see the message, You must first log out before logging into a different AWS account:

●​ Choose the click here link.


●​ Close your Amazon Web Services Sign In web browser tab and return to your initial lab
page.
●​ Choose Open Console again.

Error: Choosing Start Lab has no effect

In some cases, certain pop-up or script blocker web browser extensions might prevent the Start Lab
button from working as intended. If you experience an issue starting the lab:

●​ Add the lab domain name to your pop-up or script blocker’s allow list or turn it off.
●​ Refresh the page and try again.
Lab environment
The lab environment provides you with the following resources to get started: an Amazon Virtual
Private Cloud (Amazon VPC), the necessary underlying network structure, a security group allowing
the HTTP protocol over port 80, an Amazon Elastic Compute Cloud (Amazon EC2) instance with the
AWS CLI installed, and an associated Amazon EC2 instance profile. The instance profile
contains the permissions necessary to allow Session Manager, a capability of AWS Systems
Manager, to access the Amazon EC2 instance.

The following diagram shows the interactive flow of the AWS API for creating AWS services and
resources used in the lab through the AWS Management Console and AWS CLI.

AWS services not used in this lab


AWS services not used in this lab are deactivated in the lab environment. In addition, the capabilities
of the services used in this lab are limited to only what the lab requires. Expect errors when
accessing other services or performing actions beyond those provided in this lab guide.

Task 1: Explore and configure the AWS Management Console
In this task, you explore the AWS Management Console and the unified search tool. You then
configure the Region, widgets, and services.

Learn more: The AWS Management Console provides secure sign-in using your AWS account root
user credentials or AWS Identity and Access Management (IAM) account credentials. When you first
sign in, the user credentials are authenticated and the home page is displayed. The home page
provides access to each service console and offers a single place to access the information you
need to perform your AWS related tasks. For more information, see What is the AWS Management
Console?.

Task 1.1: Choose an AWS Region


In this task, you choose an AWS Region that specifies where your resources are managed. Regions
are sets of AWS resources located in the same geographical area.

3.​ On the navigation bar, choose the Region selector displayed at the top-right corner of the
console, and then choose the Region to which you want to switch.

The Region on the console home page is now changed to the Region you chose.

Caution: If the chosen Region opens up a different webpage instead of the console home page,
choose Cancel and try to choose a different Region.

Next, you configure the default Region.

4.​ To open the General settings page, choose the gear icon on the navigation bar.
5.​ Choose More user settings.

The Unified Settings page is displayed.

6.​ In the Localization and default Region section, choose Edit.


7.​ For Default Region, select any Region from the dropdown menu.
8.​ Choose Save settings.

A Successfully updated localization and Region settings message is displayed on top of the screen.

Caution: If the current Region shown on the Region selector in the top-right corner is the same
Region you chose in the default Region dropdown list, you will not see the success message with
Go to new default Region. Try choosing a different Region from the dropdown menu to see this
message and complete the next step.

9.​ Choose Go to new default Region.

The Unified Settings page is displayed with the Region set to the Default Region you chose.

Note: If you do not choose a default Region, the last Region you visited becomes your default.

10.​Choose the AWS logo displayed in the upper-left-hand corner to return to the console home
page.
11.​ On the navigation bar, choose the Region selector displayed at the top-right corner of the
console, and then choose the Region that matches the LabRegion value located to the left of
these instructions.

Caution: Verify that you are in the correct Region that matches the LabRegion value located to the
left of these instructions.
Task 1.2: Search with the AWS Management Console
In this task, you explore the search box on the navigation bar, which provides a unified search tool
for locating AWS services and features, service documentation, and the AWS Marketplace.

12.​To open a console for a service, go to the Search box in the navigation bar of the AWS
Management Console, and enter cloud.

The more characters you type, the more the search refines your results.

13.​To narrow the results to the type of content that you want, choose one of the categories on
the left navigation pane.
14.​To quickly navigate to a service or popular features of a service, in the Services section,
hover over the AWS Cloud Map service name in the results and choose the link.

The AWS Cloud Map console page is displayed.

Note: For more details about a documentation result or AWS Marketplace result, hover on the result
title and choose a link.

15.​Choose the AWS logo displayed in the upper-left-hand corner to return to the console home
page.

Task 1.3: Add and remove favorites


In this task, you explore the AWS Management Console to add AWS services to your Favorites list
and remove added services from the Favorites list.

Add a service to the list of favorites

16.​On the navigation bar, choose Services to open a full list of services.
17.​From the left navigation menu, choose All services or Recently visited, and then choose a
service from the list that you want to add as a favorite.
18.​To the left of the service name, select the star.

Note: Repeat the previous step to add more services to your Favorites list.

19.​To view the list of favorite services, from the left navigation menu, choose Favorites.

Note: Alternatively, Favorites are pinned and visible on the navigation bar at the top of the console
window.

Remove a service from the list of favorites

20.​On the navigation bar, choose Services to open a full list of services.
21.​In the Favorites list, deselect the star next to the name of a service you wish to remove.

Note: Alternatively, in the Recently visited list or All services list, deselect the star next to the name
of a service that is in your Favorites list.
Task 1.4: Open a console for a service
22.​On the navigation bar, choose Services to open a full list of services.
23.​Choose a service under Favorites or Recently visited or All services to quickly navigate to a
specific service.

The chosen service console page is displayed.

24.​Choose the AWS logo displayed in the upper-left-hand corner to return to the AWS
Management Console home page.

Task 1.5: Create and use dashboard widgets


In this task, you learn about the widgets that display important information about your AWS
environment and provide shortcuts to your services. You can customize your experience by adding
and removing widgets, rearranging them, or changing their size.

25.​To add a widget, choose + Add widgets.

The Add widgets window is displayed.

26.​In the Add widgets menu, choose the title bar at the top of the widget that you want to add
and then drag the widget on the console page.
27.​To rearrange a widget, configure the following:
●​ Choose the title bar at the top of the widget, for example, Favorites, and then drag the widget
to a new location on the console page.
28.​To resize a widget, configure the following:
●​ Choose the Recently Visited widget.
●​ Drag the bottom-right corner of the widget to resize.

Note: You cannot adjust the size of the Welcome to AWS, Explore AWS, and AWS Health widgets.

29.​To remove a widget, configure the following:


●​ Choose the Welcome to AWS widget.
●​ In the upper-right corner of the widget, choose the widget actions ellipsis icon, represented
by three vertical dots.
●​ Choose Remove widget.

Congratulations! You have explored the AWS Management Console and learned to customize your
console home screen.

Task 2: Create an Amazon S3 bucket using the AWS Management Console
In this task, you create and configure a new Amazon S3 bucket in the LabRegion using the AWS
Management Console.

Caution: Verify that you are in the correct Region that matches the LabRegion value located to the
left of these instructions.

Learn more: Amazon S3 is an object storage service that offers industry-leading scalability, data
availability, security, and performance. Customers can use Amazon S3 to store and protect any
amount of data for a range of use cases, such as data lakes, websites, mobile applications, backup
and restore, archive, enterprise applications, Internet of Things (IoT) devices, and big data analytics.
For more information, see What is Amazon S3?.

30.​On the Services menu, choose All Services.


31.​On the left navigation menu, scroll down the list and choose Storage.
32.​From the Storage list, choose S3.

Note: You can also search for S3 in the Search bar at the top of the console.

33.​In the navigation pane on the left-hand side of the console, choose Buckets.
34.​Choose Create bucket.

The Create bucket page is displayed.

35.​In the General configuration section, for Bucket name, enter labbucket-NUMBER.

Note: Replace NUMBER in the bucket name with a random number. This ensures that you have a
unique name.

●​ Example bucket name: labbucket-987987

Note: Amazon S3 bucket names must be globally unique and Domain Name System (DNS)
compliant.
36.​The AWS Region should match the LabRegion value found to the left of these lab
instructions.
37.​Leave all other settings on this page as the default configurations.
38.​Choose Create bucket at the bottom of the screen.

In terms of implementation, you could have created the bucket by calling the Amazon S3 API
directly, but you performed the same operation through the Amazon S3 console instead. The console
uses the Amazon S3 APIs to send requests to Amazon S3.

A Successfully created bucket “labbucket-xxxxx” message is displayed on top of the screen.

The S3 console is displayed. The newly created bucket is displayed among the list of all the buckets
for the account.

Congratulations! You have created a new Amazon S3 bucket with the default configuration.
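
For comparison, the console action you just performed maps to the Amazon S3 CreateBucket API operation. The following is a minimal sketch of the equivalent low-level AWS CLI call; the bucket name and Region are placeholders that you would replace with your own values (if your Region is us-east-1, omit the --create-bucket-configuration option):

aws s3api create-bucket --bucket labbucket-987987 --create-bucket-configuration LocationConstraint=us-west-2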

Task 3: Upload an object into the Amazon S3 bucket using the S3 console
In this task, you upload an object into the previously created S3 bucket using the S3 console.

39.​Open the context (right-click) menu on this image link, and then choose the option to save the
image to your computer.
●​ Name the file something similar to HappyFace.jpg.

Note: The method to save files varies by web browser. Choose the appropriately worded option
from your context menu.

40.​In the Amazon S3 console, choose the labbucket-xxxxx bucket.


41.​Choose Upload.

The Upload page is displayed.

42.​Choose Add files.


43.​Browse to and choose the HappyFace.jpg picture you downloaded.
44.​Choose Upload.

An Upload succeeded message is displayed on top of the screen.

45.​Choose Close.

Congratulations! You have uploaded an object into the Amazon S3 bucket.


Task 4: Create an Amazon S3 bucket and upload an object using the AWS CLI
In this task, you use the AWS CLI to create an Amazon S3 bucket. The AWS CLI is an open-source
tool that you can use to interact with AWS services using commands in your command line shell.
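
If you want to confirm that the AWS CLI is available in a shell before you begin, you can run the following command; the exact version string depends on the CLI release installed on the host:

aws --version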

Task 4.1: Create a connection to the Command Host using Session Manager
An Amazon EC2 instance pre-configured with the AWS CLI has been provided for you to use in this
lab. It has the name Command Host.

46.​At the top of the AWS Management Console, in the search box, search for and choose ​
EC2.
47.​In the navigation pane on the left-hand side of the console, choose Instances.
48.​Select Command Host.
49.​Choose Connect.

The Connect to instance page is displayed.

50.​Choose the Session Manager tab.

Learn more: With Session Manager, you can connect to Amazon EC2 instances without having to
expose the SSH port on your firewall or Amazon VPC security group. For more information, see
AWS Systems Manager Session Manager.

51.​Choose Connect.

Note: Alternatively, you can copy the CommandHostSessionUrl value from the left side of these lab
instructions and paste it in a new browser tab. The terminal for the Command Host instance opens.

A new browser tab or window opens with a connection to the Command Host instance.
Task 4.2: Use high-level S3 commands with the AWS CLI
In this task, you access the high-level features of Amazon S3 using the AWS CLI.

52.​ Command: Enter the following command in your Command Host session:

Tip: To copy the command, hover on it and choose the copy icon. Paste the command in the
Command Host session.

Note: The following ls command lists all of the buckets owned by the user.

aws s3 ls

53.​ Command: Copy the following command to a text editor, replace NUMBER with the random
number you chose for your bucket, and paste the command in the Command Host session.

Note: The following mb command creates a bucket.

aws s3 mb s3://labclibucket-NUMBER

●​ Example bucket name: labclibucket-787787


54.​To run the modified command in your Command Host session, press Enter.

Expected output:

make_bucket: labclibucket-xxxxx

Note: To simplify the instructions in this lab, this newly created bucket will be referred to as the
labclibucket-NUMBER for the remainder of the instructions, regardless of what bucket name you
actually choose in this step.

55.​ Command: Enter the following command in your Command Host session:

aws s3 ls

Notice the newly created bucket in the output list.

56.​ Command: Copy the following command to a text editor, replace labclibucket-NUMBER with
the name of the S3 bucket you created in the previous step, and paste the command in the
Command Host session.

Note: The following cp command copies a single file to a specified bucket.

aws s3 cp /home/ssm-user/HappyFace.jpg s3://labclibucket-NUMBER

57.​To run the modified command in your Command Host session, press Enter.

Expected output:
upload: ../../home/ssm-user/HappyFace.jpg to
s3://labclibucket-xxxxx/HappyFace.jpg

58.​ Command: Copy the following command to a text editor, replace labclibucket-NUMBER with
the name of the S3 bucket you created in the previous step, and paste the command in the
Command Host session.

Note: The following ls command lists objects under a specified bucket.

aws s3 ls s3://labclibucket-NUMBER

Notice the uploaded object in the newly created bucket in the output list. You can close the browser
tab.

As demonstrated in this task, the high-level Amazon S3 commands simplify managing Amazon S3
objects. Using these commands, you can manage content both within Amazon S3 itself and between
Amazon S3 and local directories. The high-level S3 commands are built on top of the operations
provided by the low-level S3 API commands.
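
As an illustration of that layering, the object listing from the previous step can also be produced with the low-level API command shown below (a sketch only; replace labclibucket-NUMBER with your actual bucket name). It returns the full ListObjectsV2 response rather than the simplified high-level output:

aws s3api list-objects-v2 --bucket labclibucket-NUMBER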

Congratulations! You have used the AWS CLI to create, list, and copy objects into the Amazon S3
bucket.

Conclusion
Congratulations! You now have successfully:

●​ Explored and interacted with the AWS Management Console.


●​ Created resources using the AWS Management Console.
●​ Explored and interacted with the AWS CLI.
●​ Created resources using the AWS CLI.

End lab
Follow these steps to close the console and end your lab.

59.​Return to the AWS Management Console.


60.​At the upper-right corner of the page, choose AWSLabsUser, and then choose Sign out.
61.​Choose End Lab and then confirm that you want to end your lab.
Lab 2: Building your Amazon VPC Infrastructure
© 2024 Amazon Web Services, Inc. or its affiliates. All rights reserved. This work may not be
reproduced or redistributed, in whole or in part, without prior written permission from Amazon Web
Services, Inc. Commercial copying, lending, or selling is prohibited. All trademarks are the property
of their owners.

Note: Do not include any personal, identifying, or confidential information into the lab environment.
Information entered may be visible to others.

Corrections, feedback, or other questions? Contact us at AWS Training and Certification.

Lab overview
As an AWS solutions architect, it is important that you understand the overall functionality and
capabilities of Amazon Web Services (AWS) and the relationship between the AWS networking
components. In this lab, you create an Amazon Virtual Private Cloud (Amazon VPC), a public and a
private subnet in a single Availability Zone, public and private routes, a NAT gateway, and an internet
gateway. These services are the foundation of networking architecture inside of AWS. This
architecture design covers concepts of infrastructure, design, routing, and security.

The following image shows the final architecture for this lab environment:
Objectives
After completing this lab, you should know how to do the following:

●​ Create an Amazon VPC.


●​ Create public and private subnets.
●​ Create an internet gateway.
●​ Configure a route table and associate it to a subnet.
●​ Create an Amazon Elastic Compute Cloud (Amazon EC2) instance and make the instance
publicly accessible.
●​ Isolate an Amazon EC2 instance in a private subnet.
●​ Create and assign security groups to Amazon EC2 instances.
●​ Connect to Amazon EC2 instances using Session Manager, a capability of AWS Systems
Manager.

Icon key
Various icons are used throughout this lab to call attention to different types of instructions and
notes. The following list explains the purpose for each icon:

●​ Command: A command that you must run.


●​ Expected output: A sample output that you can use to verify the output of a command or
edited file.
●​ Note: A hint, tip, or important guidance.
●​ Learn more: Where to find more information.
●​ Security: An opportunity to incorporate security best practices.
●​ Caution: Information of special interest or importance (not so important to cause problems
with the equipment or data if you miss it, but it could result in the need to repeat certain
steps).
●​ WARNING: An irreversible action that could cause the failure of a command or process (including warnings about configurations that cannot be changed after they are made).

Start lab
1.​ To launch the lab, at the top of the page, choose Start Lab.​
Caution: You must wait for the provisioned AWS services to be ready before you can
continue.
2.​ To open the lab, choose Open Console .​
You are automatically signed in to the AWS Management Console in a new web browser tab.​
Warning: Do not change the Region unless instructed.

Common sign-in errors

Error: You must first sign out


If you see the message, You must first log out before logging into a different AWS account:

●​ Choose the click here link.


●​ Close your Amazon Web Services Sign In web browser tab and return to your initial lab
page.
●​ Choose Open Console again.

Error: Choosing Start Lab has no effect

In some cases, certain pop-up or script blocker web browser extensions might prevent the Start Lab
button from working as intended. If you experience an issue starting the lab:

●​ Add the lab domain name to your pop-up or script blocker’s allow list or turn it off.
●​ Refresh the page and try again.

Scenario
Your team has been tasked with prototyping an architecture for a new web-based application. To
define your architecture, you need to have a better understanding of public and private subnets,
routing, and Amazon EC2 instance options.

AWS services not used in this lab


AWS services not used in this lab are deactivated in the lab environment. In addition, the capabilities
of the services used in this lab are limited to only what the lab requires. Expect errors when
accessing other services or performing actions beyond those provided in this lab guide.

Task 1: Create an Amazon VPC in a Region


In this task, you create a new Amazon VPC in the AWS Cloud.

Learn more: With Amazon VPC, you can provision a logically isolated section of the AWS Cloud
where you can launch AWS resources in a virtual network that you define. You have complete
control over your virtual networking environment, including selection of your own IP address ranges,
creation of subnets, and configuration of route tables and network gateways. You can also use the
enhanced security options in Amazon VPC to provide more granular access to and from the Amazon
EC2 instances in your virtual network.

3.​ At the top of the AWS Management Console, in the search bar, search for and choose VPC.

Caution: Verify that the Region displayed in the top-right corner of the console is the same as the
Region value on the left side of this lab page.

Note: The VPC management console offers a VPC Wizard, which can automatically create several
VPC architectures. However, in this lab you create the VPC components manually.

4.​ In the left navigation pane, choose Your VPCs.

The console displays a list of your currently available VPCs. A default VPC is provided so that you
can launch resources as soon as you start using AWS.

5.​ Choose Create VPC and configure the following:


●​ Resources to create: Choose VPC only.
●​ Name tag - optional: Enter Lab VPC
●​ IPv4 CIDR: Enter 10.0.0.0/16
6.​ Choose Create VPC.

A You successfully created vpc-xxxxxxxxxx / Lab VPC message is displayed on top of the screen.

The VPC Details page is displayed.

7.​ Verify the state of the Lab VPC.


Expected output: It should display the following:

●​ State: Available

The lab VPC has a Classless Inter-Domain Routing (CIDR) range of 10.0.0.0/16, which includes all
IP addresses that start with 10.0.x.x. This range contains over 65,000 addresses. You later divide
the addresses into separate subnets.

8.​ From the same page, choose Actions and choose Edit VPC settings.

The Edit VPC settings page is displayed.

9.​ From the DNS settings section, select Enable DNS hostnames.

This option assigns a friendly Domain Name System (DNS) name to Amazon EC2 instances in the
VPC, such as the following:

ec2-52-42-133-255.us-west-2.compute.amazonaws.com

10.​Choose Save.

A You have successfully modified the settings for vpc-xxxxxxxxxx / Lab VPC. message is displayed
on top of the screen.

Any Amazon EC2 instances launched into this Amazon VPC now automatically receive a DNS
hostname. You can also create a more meaningful DNS name (for example, app.company.com)
using records in Amazon Route 53.
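
Optionally, you can confirm the setting from the command line with the following sketch, where vpc-xxxxxxxxxx is a placeholder for your VPC ID:

aws ec2 describe-vpc-attribute --vpc-id vpc-xxxxxxxxxx --attribute enableDnsHostnames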

Congratulations! You have successfully created your own VPC and now you can launch the AWS
resources in this defined virtual network.

Task 2: Create public subnets and private subnets


In this task, you create a public subnet and a private subnet in the lab VPC. To add a new subnet to
your VPC, you must specify an IPv4 CIDR block for the subnet from the range of your VPC. You can
specify the Availability Zone in which you want the subnet to reside. You can have multiple subnets
in the same Availability Zone.
Note: A subnet is a sub-range of IP addresses within a network. You can launch AWS resources
into a specified subnet. Use a public subnet for resources that must be connected to the internet,
and use a private subnet for resources that are to remain isolated from the internet.

Task 2.1: Create your public subnet


The public subnet is for internet-facing resources.

11.​ In the left navigation pane, choose Subnets.


12.​Choose Create subnet and configure the following:
●​ VPC ID: Select Lab VPC from the dropdown menu.
●​ Subnet name: Enter Public Subnet.
●​ Availability Zone: Select the first Availability Zone in the list. (Do not choose No Preference.)
●​ IPv4 subnet CIDR block: Enter 10.0.0.0/24.
13.​Choose Create subnet.

A You have successfully created 1 subnet: subnet-xxxxxx message is displayed on top of the
screen.

14.​Verify the state.

Expected output: It should display the following:

●​ State: Available

Note: The VPC has a CIDR range of 10.0.0.0/16, which includes all 10.0.x.x IP addresses. The
subnet you just created has a CIDR range of 10.0.0.0/24, which includes all 10.0.0.x IP addresses.
These ranges might look similar, but the subnet is smaller than the VPC because of the /24 in the
CIDR range.
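
As a quick worked example of the CIDR math: a /16 leaves 32 - 16 = 16 host bits, or 2^16 = 65,536 addresses, while a /24 leaves 8 host bits, or 2^8 = 256 addresses. AWS reserves the first four addresses and the last address in every subnet, so a /24 subnet has 251 usable addresses.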

Now, configure the subnet to automatically assign a public IP address for all instances launched
within it.

15.​Select Public Subnet.


16.​Choose Actions and choose Edit subnet settings.

The Edit subnet settings page is displayed.

17.​From the Auto-assign IP settings section, select Enable auto-assign public IPv4 address.
18.​Choose Save.

A You have successfully changed subnet settings: Enable auto-assign public IPv4 address
message is displayed on top of the screen.

Note: Even though this subnet is named Public Subnet, it is not yet public. A public subnet must
have an internet gateway and route to the gateway. You create and attach the internet gateway and
route tables in this lab.

Task 2.2: Create your private subnet


The private subnet is for resources that are to remain isolated from the internet.

19.​Choose Create subnet, and then configure the following:


●​ VPC ID: Select Lab VPC from the dropdown menu.
●​ Subnet name: Enter Private Subnet.
●​ Availability Zone: Select the first Availability Zone in the list. (Do not choose No Preference.)
●​ IPv4 subnet CIDR block: Enter 10.0.2.0/23.
20.​Choose Create subnet.

A You have successfully created 1 subnet: subnet-xxxxxx message is displayed on top of the
screen.

21.​Verify the state.

Expected output: It should display the following:

●​ State: Available

Note: The CIDR block of 10.0.2.0/23 includes all IP addresses that start with 10.0.2.x and 10.0.3.x.
This is twice as large as the public subnet because most resources should be kept private, unless
they specifically need to be accessible from the internet.

Your VPC now has two subnets. However, these subnets are isolated and cannot communicate with
resources outside the VPC. Next, you configure the public subnet to connect to the internet through
an internet gateway.
Congratulations! You have successfully created a public subnet and a private subnet in the lab VPC.

Task 3: Create an internet gateway


In this task, you create an internet gateway so that internet traffic can access the public subnet. To
grant access to or from the internet for instances in a subnet in a VPC, you create an internet
gateway and attach it to your VPC. Then you add a route to your subnet’s route table that directs
internet-bound traffic to the internet gateway.

Learn more: An internet gateway serves two purposes: To provide a target in your VPC route tables
for internet-bound traffic, and to perform network address translation (NAT) for instances that have
been assigned public IPv4 addresses.

22.​In the left navigation pane, choose Internet gateways.


23.​Choose Create internet gateway and configure the following:
●​ Name tag: Enter Lab IGW.
24.​Choose Create internet gateway.

A The following internet gateway was created: igw-xxxxxx - Lab IGW. You can now attach to a VPC
to enable the VPC to communicate with the internet. message is displayed on top of the screen.

You can now attach the internet gateway to your Lab VPC.

25.​From the same page, choose Actions and choose Attach to VPC.
26.​For Available VPCs, select Lab VPC from the dropdown menu.
27.​Choose Attach internet gateway.

A Internet gateway igw-xxxxx successfully attached to vpc-xxxxx message is displayed on top of the
screen.

28.​Verify the state.

Expected output: It should display the following:

●​ State: Attached

The internet gateway is now attached to your Lab VPC. Even though you have created an internet
gateway and attached it to your VPC, you must also configure the route table of the public subnet to
use the internet gateway.

Congratulations! You have successfully created an internet gateway so that internet traffic can
access the public subnet.
Task 4: Route internet traffic in the public subnet to the
internet gateway
In this task, you create a route table and add a route to the route table to direct internet-bound traffic
to your internet gateway and associate your public subnets with your route table. Each subnet in
your VPC must be associated with a route table; the table controls the routing for the subnet. A
subnet can only be associated with one route table at a time, but you can associate multiple subnets
with the same route table.

Learn more: A route table contains a set of rules, called routes, that are used to determine where
network traffic is directed. To use an internet gateway, your subnet’s route table must contain a route
that directs internet-bound traffic to the internet gateway. You can scope the route to all destinations
not explicitly known to the route table (0.0.0.0/0 for IPv4 or ::/0 for IPv6), or you can scope the route
to a narrower range of IP addresses. If your subnet is associated with a route table that has a route
to an internet gateway, it’s known as a public subnet.
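
For reference, the default route you create in the console below could also be added with the AWS CLI, as in the following sketch, where the route table and internet gateway IDs are placeholders:

aws ec2 create-route --route-table-id rtb-xxxxxxx --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxxxx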

29.​In the left navigation pane, choose Route tables.

There is currently one default route table associated with the VPC, Lab VPC. This routes traffic
locally. You now create an additional route table to route public traffic to your internet gateway.

30.​Choose Create route table, and then configure the following:


●​ Name - optional: Enter ​
Public Route Table.
●​ VPC: Select Lab VPC from the dropdown menu.
31.​Choose Create route table.

A Route table rtb-xxxxxxx | Public Route Table was created successfully. message is displayed on
top of the screen.

32.​Choose the Routes tab in the lower half of the page.

Note: There is one route in your route table that allows traffic within the 10.0.0.0/16 network to flow
within the network, but it does not route traffic outside of the network.

You now add a new route to permit public traffic.

33.​Choose Edit routes.


34.​Choose Add route, and then configure the following:
●​ Destination: Enter 0.0.0.0/0.
●​ Target: Choose Internet Gateway in the dropdown menu, and then choose the displayed
internet gateway ID.
35.​Choose Save changes.

A Updated routes for rtb-xxxxxxx / Public Route Table successfully message is displayed on top of
the screen.
36.​Choose the Subnet associations tab.
37.​Choose Edit subnet associations.
38.​Select Public Subnet
39.​Choose Save associations.

A You have successfully updated subnet associations for rtb-xxxxxxx / Public Route Table. message
is displayed on top of the screen.

Note: The subnet is now public because it has a route to the internet through the internet gateway.

Congratulations! You have successfully configured the route table.

Task 5: Create a public security group


In this task, you create a security group so that users can access your Amazon EC2 instance.
Security groups in a VPC specify which traffic is allowed to or from an Amazon EC2 instance.

Learn more: You can use Amazon EC2 security groups to help secure instances within an Amazon
VPC. By using security groups in a VPC, you can specify both inbound and outbound network traffic
that is allowed to or from each Amazon EC2 instance. Traffic that is not explicitly allowed to or from
an instance is automatically denied.

Security: It is recommended to use HTTPS protocol to improve web traffic security. However, to
simplify this lab, only HTTP protocol is used.

40.​In the left navigation pane, choose Security groups.


41.​Choose Create security group, and then configure the following:
●​ Security group name: Enter Public SG.
●​ Description: Enter Allows incoming traffic to public instance.
●​ VPC: Select Lab VPC from the dropdown menu.
42.​In the Inbound rules section, choose Add rule and configure the following:
●​ Type: Select HTTP from the dropdown menu.
●​ Source: Select Anywhere-IPv4 from the dropdown menu.
43.​In the Tags - optional section, choose Add new tag and configure the following:
●​ Key: Enter Name.
●​ Value: Enter Public SG.
44.​Choose Create security group.

A Security group (sg-xxxxxxx | Public SG) was created successfully message is displayed on top of
the screen.

Congratulations! You have successfully created a security group that allows HTTP traffic. You need
this in the next task when you launch an Amazon EC2 instance in the public subnet.
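
For reference, the equivalent inbound rule could be added from the AWS CLI with a command like the following sketch, where sg-xxxxxxx is a placeholder for the Public SG security group ID:

aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxx --protocol tcp --port 80 --cidr 0.0.0.0/0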
Task 6: Launch an Amazon EC2 instance into a public
subnet
In this task, you launch an Amazon EC2 instance into a public subnet. To activate communication
over the internet for IPv4, your instance must have a public IPv4 address that’s associated with a
private IPv4 address on your instance. By default, your instance is only aware of the private
(internal) IP address space defined within the VPC and subnet.

Learn more: The internet gateway that you created logically provides the one-to-one NAT on behalf
of your instance. So when traffic leaves your VPC subnet and goes to the internet, the reply address
field is set to the public IPv4 address or Elastic IP address of your instance, and not its private IP
address.

45.​At the top of the AWS Management Console, in the search bar, search for and choose EC2.

The Amazon EC2 Management Console is displayed.

Task 6.1: Begin the instance configuration


46.​From the console navigation menu on the left, choose EC2 Dashboard.
47.​From the Launch instances section, choose Launch instances.

The Launch an instance page is displayed.

Task 6.2: Add tags to the instance


You can use tags to categorize your AWS resources in different ways, such as by purpose, owner, or
environment. You can apply tags to most AWS Cloud resources. Each tag consists of a key and a
value, both of which you define. One use of tags is for when you must manage many resources of
the same type. You can quickly search for and identify a specific resource by the tag you have
applied to it.

In this task, you add a tag to the Amazon EC2 instance.

48.​Locate the Name and tags section.


49.​In the Name field, enter Public Instance.

Note: No additional instance tags are required for this lab.

Task 6.3: Select an AMI


In this task, you choose an Amazon Machine Image (AMI). The AMI contains a copy of the disk
volume used to launch the instance.

50.​Locate the Application and OS Images (Amazon Machine Image) section.


51.​Ensure that Amazon Linux is selected as the OS.
52.​Ensure that Amazon Linux 2023 AMI is selected in the dropdown menu.

Task 6.4: Choose the Amazon EC2 instance type


Each instance type allocates a specific combination of virtual CPUs (vCPUs), memory, disk storage,
and network performance.

For this lab, use a t3.micro instance type. This instance type has 2 vCPUs and 1 GiB of memory.

53.​Locate the Instance type section.


54.​From the Instance type dropdown menu, choose t3.micro.

Task 6.5: Configure key pair for login


55.​Locate the Key pair (login) section.
56.​From the Key pair name - required dropdown menu, choose Proceed without a key pair (Not
recommended) .

Task 6.6: Configure instance networking


57.​Locate the Network settings section.
58.​Choose Edit.
59.​Configure the following settings from the dropdown menus:
●​ VPC - required: Select Lab VPC.
●​ Subnet: Select Public Subnet.
●​ Auto-assign public IP: Select Enable.
Task 6.7: Configure instance security groups
You can use security groups to define which inbound and outbound traffic is allowed to and from
the elastic network interface. The network interface is attached to an Amazon EC2 instance. Port 80
is the default port for HTTP traffic, and opening it is necessary for the web server you launch in this
lab to work correctly.

60.​For Firewall (security groups), choose Select existing security group.


61.​From the Common security groups dropdown menu, choose the security group that has a
name like Public SG.

Task 6.8: Add storage


You can use the Configure storage section to specify or modify the storage options for the instance
and add additional Amazon Elastic Block Store (Amazon EBS) disk volumes attached to the
instance. The EBS volumes can be configured in both their size and performance.

In this lab, the default storage settings are all that is needed. No changes are required.

Task 6.9: Configure user data


62.​Locate and expand the Advanced details section.
63.​From the IAM instance profile dropdown menu, select the role that has a name like
EC2InstProfile.

Note: To install and configure the new instance as a web server, you provide a user data script that
automatically runs when the instance launches.

64.​In the User data - optional section, copy and paste the following:

#!/bin/bash
# To connect to your EC2 instance and install the Apache web server with PHP
yum update -y
yum install -y httpd php8.1
systemctl enable httpd.service
systemctl start httpd
cd /var/www/html
wget https://us-west-2-tcprod.s3.amazonaws.com/courses/ILT-TF-200-ARCHIT/v7.9.2.prod-7555a90f/lab-2-VPC/scripts/instanceData.zip
unzip instanceData.zip

The remaining settings on the page can be left at their default values.
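
Note: If the web page does not load later in the lab, one way to troubleshoot the user data script is to connect to the instance and review the cloud-init output log, which records what ran at launch on Amazon Linux:

sudo cat /var/log/cloud-init-output.log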

Task 6.10: Review the instance launch


Take a moment to review that the configuration for the Amazon EC2 instance you are about to
launch is correct.

65.​Locate the Summary section.


66.​Choose Launch instance.

The Launch an instance page is displayed.

Your Amazon EC2 instance is now launched and configured as you specified.

67.​Choose View all instances.

The Amazon EC2 console is displayed.

68.​Occasionally choose the console refresh button, and wait for Public Instance to display an
Instance state of Running and a Status check of 3/3 checks passed.

Note: The Amazon EC2 instance named Public Instance is initially in a Pending state. The instance
state then changes to Running indicating that the instance has finished booting.

Congratulations! You have successfully launched an Amazon EC2 instance into a public subnet.

Task 7: Connect to a public instance through HTTP


In this task, you connect to the public instance and launch the basic Apache web server page. The
inbound rules added earlier that allow HTTP access (port 80) allow you to connect to the web server
running Apache.

69.​In the left navigation pane, choose Instances.


70.​Select Public Instance.
71.​Choose the Networking tab in the lower pane.

Note: If you need to make any section of the console larger, you can resize the horizontal edges of
the containers displayed on the console.

72.​Locate the Public IPv4 DNS value.


73.​Copy the public DNS value. Do not choose the open address option, because HTTPS is not
set up for this lab environment.
74.​Open a new browser tab and paste the public DNS value for Public Instance in the URL
address bar.

The web page hosted on the Amazon EC2 instance is displayed. The page displays the instance ID
and the AWS Availability Zone where the Amazon EC2 instance is located.

75.​Close the browser tab and return to the console.


Congratulations! You have successfully launched an Apache web server in the public subnet and
tested the HTTP connection. You can safely close the tab and return to the console.

Task 8: Connect to the Amazon EC2 instance in the public subnet through Session Manager
In this task, you connect to your Amazon EC2 instance in the public subnet using Session Manager.

Learn more: Session Manager is a fully managed AWS Systems Manager capability that you use to
manage your Amazon EC2 instances through an interactive one-click browser-based shell or
through the AWS Command Line Interface (AWS CLI). You can use Session Manager to start a
session with an Amazon EC2 instance in your account. After starting the session, you can run bash
commands as you would through any other connection type.
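
If you prefer the command line, the same type of session can be opened with the AWS CLI, as in the following sketch; it assumes the Session Manager plugin is installed, and i-xxxxxxxxxxxx is a placeholder for the instance ID:

aws ssm start-session --target i-xxxxxxxxxxxx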

76.​At the top of the AWS Management Console, in the search bar, search for and choose ​
EC2.
77.​In the left navigation pane, choose Instances.
78.​Select Public Instance and choose Connect.

The Connect to instance page is displayed.

79.​Choose the Session Manager tab.

Learn more: With Session Manager, you can connect to Amazon EC2 instances without needing to
expose the SSH port on your firewall or Amazon VPC security group. For more information, see
AWS Systems Manager Session Manager.

80.​Choose Connect.

A new browser tab or window opens with a connection to the Public Instance.

Note: The Session Manager service is not updated in real time. If you experience errors with
Session Manager connecting to an Amazon EC2 instance you just launched, ensure that you have
given the instance a few minutes to launch, pass health checks, and communicate with the Session
Manager service before trying to open a session connection again.

81.​ Command: Enter the following command to change to the home directory (/home/ssm-user/)
and test web connectivity using the cURL command:

cd ~
curl -I https://aws.amazon.com/training/

Expected output:

HTTP/2 200
content-type: text/html;charset=UTF-8
server: Server
date: Wed, 19 Apr 2023 14:43:47 GMT
x-amz-rid: 6HVPS1JY1XW2S1K34Q3Z
set-cookie: aws-priv=eyJ2IjoxLCJldSI6MCwic3QiOjB9; Version=1;
Comment="Anonymous cookie for privacy regulations";
Domain=.aws.amazon.com; Max-Age=31536000; Expires=Thu, 18-Apr-2024
14:43:47 GMT; Path=/; Secure
set-cookie: aws_lang=en; Domain=.amazon.com; Path=/
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
strict-transport-security: max-age=63072000
x-content-type-options: nosniff
x-amz-id-1: 6HVPS1JY1XW2S1K34Q3Z
last-modified: Thu, 30 Mar 2023 15:58:02 GMT
content-security-policy-report-only: default-src *; connect-src *;
font-src * data:; frame-src *; img-src * data:; media-src *; object-src *;
script-src *; style-src 'unsafe-inline' *; report-uri
https://prod-us-west-2.csp-report.marketing.aws.dev/submit
vary: accept-encoding,Content-Type,Accept-Encoding,User-Agent
x-cache: Miss from cloudfront
via: 1.1 88c333921d5c405e037b84bb8c2dc33e.cloudfront.net (CloudFront)
x-amz-cf-pop: GRU3-P1
x-amz-cf-id: 89R1wtM9vYV0kIQXrEVkcoNzg_C3UfQJIEVkC5BA3xiIH3FD0nVnYw==

Congratulations! You have successfully connected to your public instance using Session Manager.
You can safely close the tab and return to the console.

Task 9: Create a NAT gateway and configure routing in the private subnet
In this task, you create a NAT gateway and then create a route table to route non-local traffic to the
NAT gateway. You then attach the route table to the private subnet. You can use a NAT gateway to
allow instances in a private subnet to connect to the internet or other AWS services, but prevent the
internet from initiating a connection with those instances.

Note: To create a NAT gateway, you must specify the public subnet in which the NAT gateway
should reside. You must also specify an Elastic IP address to associate with the NAT gateway when
you create it. You cannot change the Elastic IP address after you associate it with the NAT gateway.
After you’ve created a NAT gateway, you must update the route table associated with one or more of
your private subnets to point internet-bound traffic to the NAT gateway. This allows instances in your
private subnets to communicate with the internet.
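
For reference, the same NAT gateway could be created from the AWS CLI after allocating an Elastic IP address, as in the following sketch, where the subnet and allocation IDs are placeholders:

aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-xxxxxx --allocation-id eipalloc-xxxxxxxx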

82.​Return to the AWS Management Console browser tab.


83.​At the top of the AWS Management Console, in the search box, search for and choose ​
VPC.
84.​In the left navigation pane, choose NAT gateways.
85.​Choose Create NAT gateway and configure the following:
●​ Name - optional: Enter Lab NGW.
●​ Subnet: Select Public Subnet from the dropdown menu.
●​ For Elastic IP allocation ID, choose Allocate Elastic IP.
86.​Choose Create NAT gateway.

A NAT gateway nat-xxxxxxx | Lab NGW was created successfully. message is displayed on top of
the screen.

In the next step, you create a new route table for a private subnet that redirects non-local traffic to
the NAT gateway.

87.​In the left navigation pane, choose Route tables.


88.​Choose Create route table and configure the following:
●​ Name - optional: Enter Private Route Table.
●​ VPC: Select Lab VPC from the dropdown menu.
89.​Choose Create route table.

A Route table rtb-xxxxxxx | Private Route Table was created successfully. message is displayed on
top of the screen.

The private route table is created and the details page for the private route table is displayed.

90.​Choose the Routes tab.

There is currently one route that directs all traffic locally.

You now add a route to send internet-bound traffic through the NAT gateway.

91.​Choose Edit routes.


92.​Choose Add route and then configure the following:
●​ Destination: Enter 0.0.0.0/0.
●​ Target: Choose NAT Gateway in the dropdown menu, and then choose the displayed NAT
Gateway ID.
93.​Choose Save changes.

A Updated routes for rtb-xxxxxxx / Private Route Table successfully message is displayed on top of
the screen.

94.​Choose the Subnet associations tab.


95.​Choose Edit subnet associations.
96.​Select Private Subnet.
97.​Choose Save associations.
A You have successfully updated subnet associations for rtb-xxxxxxx / Private Route Table.
message is displayed on top of the screen.

This route sends internet-bound traffic from the private subnet to the NAT gateway that is in the
same Availability Zone.

Congratulations! You have successfully created the NAT gateway and configured the private route
table.

Task 10: Create a security group for private resources


In this task, you create a security group that allows incoming HTTP traffic from resources assigned
to the public security group. In a multi-tiered architecture, resources in a private subnet should not
be directly accessible from the internet; however, there is a common use case for routing web traffic
from publicly accessible resources to private resources.

Learn more: When you specify a security group as the source for a rule, traffic is allowed from the
network interfaces that are associated with the source security group for the specified port and
protocol. Incoming traffic is allowed based on the private IP addresses of the network interfaces that
are associated with the source security group (and not the public IP or Elastic IP addresses). Adding
a security group as a source does not add rules from the source security group.
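
For reference, referencing one security group from another looks like the following sketch in the AWS CLI, where the group IDs are placeholders and --source-group points at Public SG:

aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxx --protocol tcp --port 80 --source-group sg-yyyyyyy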

98.​In the left navigation pane, choose Security groups.


99.​Choose Create security group, and then configure the following:
●​ Security group name: Enter Private SG.
●​ Description: Enter Allows incoming traffic to private instance using
public security group.
●​ VPC: Select Lab VPC from the dropdown menu.
100.​ In the Inbound rules section, choose Add rule and configure the following:
●​ Type: Select HTTP.
●​ Source: Select Custom.
○​ In the box to the right of Custom, type sg.
○​ Choose Public SG from the list.
101.​ In the Tags - optional section, choose Add new tag and configure the following:
●​ Key: Enter Name.
●​ Value: Enter Private SG.
102.​ Choose Create security group.

A Security group (sg-xxxxxxx | Private SG) was created successfully message is displayed on top of
the screen.

Congratulations! You have successfully created the private security group.


Task 11: Launch an Amazon EC2 instance into a private
subnet
In this task, you launch an Amazon EC2 instance into a private subnet.

Learn more: Private instances can route their traffic through a NAT gateway or a NAT instance to
access the internet. Private instances use the public IP address of the NAT gateway or NAT instance
to traverse the internet. The NAT gateway or NAT instance allows outbound communication but
doesn’t allow machines on the internet to initiate a connection to the privately addressed instances.

103.​ At the top of the AWS Management Console, in the search bar, search for and choose
EC2.

The Amazon EC2 console is displayed.

Task 11.1: Begin the instance configuration


104.​ Choose EC2 Dashboard from the console navigation menu on the left.
105.​ Choose Launch instance from the Launch instance section.

The Launch an instance page is displayed.

Task 11.2: Add tags to the instance

In this task, you add a tag to the Amazon EC2 instance.

106.​ Locate the Name and tags section.


107.​ Enter Private Instance in the Name field.

Note: No additional instance tags are required for this lab.

Task 11.3: Select an AMI


In this task, you choose an AMI. The AMI contains a copy of the disk volume used to launch the
instance.

108.​ Locate the Application and OS Images (Amazon Machine Image) section.
109.​ Ensure that Amazon Linux is selected as the OS.
110.​ Ensure that Amazon Linux 2023 AMI is selected in the dropdown menu.

Task 11.4: Choose the Amazon EC2 instance type


Each instance type allocates a specific combination of vCPUs, memory, disk storage, and network
performance.

For this lab, use a t3.micro instance type. This instance type has 2 vCPUs and 1 GiB of memory.

111.​ Locate the Instance type section.


112.​ Choose t3.micro from the Instance type dropdown menu.

Task 11.5: Configure key pair for login


113.​ Locate the Key pair (login) section.
114.​ Choose Proceed without a key pair (Not recommended) from the Key pair name -
required dropdown menu.

Task 11.6: Configure instance networking


115.​ Locate the Network settings section.
116.​ Choose Edit and configure the following settings from the dropdown menus:
●​ VPC - required: Select Lab VPC.
●​ Subnet: Select Private Subnet.
●​ Auto-assign public IP: Select Disable.

Task 11.7: Configure instance security groups


117.​ For Firewall (security groups), choose Select existing security group
118.​ Choose the security group that has a name like Private SG from the Common security
groups dropdown menu.

Task 11.8: Add storage


You can use the Configure storage section to specify or modify the storage options for the instance
and add additional Amazon Elastic Block Store (Amazon EBS) disk volumes attached to the
instance. The EBS volumes can be configured in both their size and performance.

In this lab, the default storage settings are all that is needed. No changes are required.

Task 11.9: Configure the IAM instance profile


119.​ Locate and expand the Advanced details section.
120.​ Choose the EC2InstProfile role from the IAM instance profile dropdown menu.

The remaining settings on the page can be left at their default values.

Task 11.10: Configure user data


121.​ Locate and expand the Advanced details section.
122.​ From the IAM instance profile dropdown menu, confirm that the role that has a name like
EC2InstProfile is selected.

Note: To install and configure the new instance as a web server, you provide a user data script that
automatically runs when the instance launches.

123.​ In the User data - optional section, copy and paste the following:

#!/bin/bash
# To connect to your EC2 instance and install the Apache web server with PHP
yum update -y
yum install -y httpd php8.1
systemctl enable httpd.service
systemctl start httpd
cd /var/www/html
wget https://us-west-2-tcprod.s3.amazonaws.com/courses/ILT-TF-200-ARCHIT/v7.9.2.prod-7555a90f/lab-2-VPC/scripts/instanceData.zip
unzip instanceData.zip

The remaining settings on the page can be left at their default values.

Task 11.11: Review the instance launch


Take a moment to review that the configuration for the Amazon EC2 instance you are about to
launch is correct.

124.​ Locate the Summary section.


125.​ Choose Launch instance.

The Launch an instance page is displayed.

Your Amazon EC2 instance is now launched and configured as you specified.

126.​ Choose View all instances.

The Amazon EC2 console is displayed.

The Amazon EC2 instance named Private Instance is initially in a Pending state. The state then
changes to Running, indicating that the instance has finished booting.

127.​ Occasionally choose the console refresh button and wait for the Instance state to
change to Running.

Congratulations! You have successfully launched an Amazon EC2 instance into a private subnet.

Task 12: Connect to the Amazon EC2 instance in the private subnet
In this task, you connect to the Amazon EC2 instance in the private subnet using Session Manager.

128.​ In the left navigation pane, choose Instances.


129.​ Select Private Instance and choose Connect.

The Connect to instance page is displayed.


130.​ Choose the Session Manager tab.
131.​ Choose Connect.

A new browser tab or window opens with a connection to the Private Instance.

Note: The Session Manager service is not updated in real time. If you experience errors with
Session Manager connecting to an Amazon EC2 instance you just launched, ensure that you have
given the instance a few minutes to launch, pass health checks, and communicate with the Session
Manager service before trying to open a session connection again.

132.​ Command: Enter the following command to change to the home directory
(/home/ssm-user/) and test web connectivity using the cURL command:

cd ~
curl -I https://aws.amazon.com/training/

Expected output:

HTTP/2 200
content-type: text/html;charset=UTF-8
server: Server
date: Wed, 19 Apr 2023 14:59:09 GMT
x-amz-rid: AZPXJ57K93ERATZV588Z
set-cookie: aws-priv=eyJ2IjoxLCJldSI6MCwic3QiOjB9; Version=1;
Comment="Anonymous cookie for privacy regulations";
Domain=.aws.amazon.com; Max-Age=31536000; Expires=Thu, 18-Apr-2024
14:59:08 GMT; Path=/; Secure
set-cookie: aws_lang=en; Domain=.amazon.com; Path=/
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
strict-transport-security: max-age=63072000
x-content-type-options: nosniff
x-amz-id-1: AZPXJ57K93ERATZV588Z
last-modified: Thu, 30 Mar 2023 15:58:02 GMT
content-security-policy-report-only: default-src *; connect-src *;
font-src * data:; frame-src *; img-src * data:; media-src *; object-src *;
script-src *; style-src 'unsafe-inline' *; report-uri
https://prod-us-west-2.csp-report.marketing.aws.dev/submit
vary: accept-encoding,Content-Type,Accept-Encoding,User-Agent
x-cache: Miss from cloudfront
via: 1.1 fb6a4eca9caced7b791557c24b8c6606.cloudfront.net (CloudFront)
x-amz-cf-pop: GRU3-P1
x-amz-cf-id: Tjphb1UhSXmtyHvybuq4QIFwzTurEI0g_saLB2nLjlYRiBbHbqn85Q==

133.​ Close the Session Manager tab and return to the console.

Congratulations! You have successfully connected to a private instance using Session Manager.
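Note: If you have the AWS CLI and the Session Manager plugin installed on your own machine, an equivalent session can also be opened from a terminal. This is a sketch only; the instance ID is a placeholder for the ID of Private Instance.

# List the instances currently registered with Systems Manager.
aws ssm describe-instance-information

# Open a shell session on the instance (placeholder instance ID).
aws ssm start-session --target i-1234567890abcdef0
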
(Optional) Task 1: Troubleshooting connectivity between
the private instance and the public instance
In this optional task, you use the Internet Control Message Protocol (ICMP) to validate a private
instance’s network reachability from the public instance.

Note: This task is optional and is provided in case you have lab time remaining. You can complete
this task or skip to the end of the lab.

134.​ Return to the AWS Management Console browser tab.


135.​ In the left navigation pane, choose Instances.
136.​ Select Private Instance.
137.​ On the Details tab, copy the value of Private IPv4 addresses to your clipboard.

Note: To copy the private IPv4 address, hover over it and choose the copy icon.

138.​ Unselect Private Instance.


139.​ Select Public Instance.
140.​ Choose Connect.

The Connect to instance page is displayed.

141.​ Choose the Session Manager tab.


142.​ Choose Connect.

A new browser tab or window opens with a connection to the Public Instance.

First, use a curl command to send an HTTP request to the private instance and confirm that the web app hosted on it is reachable from the public instance.

143. Command: Copy the following command to your notepad, replace PRIVATE_IP with the value of the Private IPv4 address for the Private Instance, and then run the updated command in the terminal:

curl PRIVATE_IP

Expected output:

<html><body><h1>It works!</h1></body></html>

144.​ Command: Copy the following command to your notepad. Replace PRIVATE_IP with the
value of the Private IPv4 address for the Private Instance:

ping PRIVATE_IP

145.​ Command: Copy and paste the updated command in your terminal and press Enter.

This is a sample command only. Do not use the following command.

ping 10.0.2.131
146.​ After a few seconds, stop the ICMP ping request by pressing CTRL+C.

The ping request to the private instance fails. Your challenge is to use the console and figure out the
correct inbound rule required in the Private SG to be able to successfully ping the private instance.

If you have trouble completing the optional task, refer to the Optional Task Solution section at the
end of the lab.

(Optional) Task 2: Retrieving instance metadata


In this optional task, you retrieve instance metadata from the command line using a tool such as cURL.
Instance metadata is available from your running Amazon EC2 instance. This can be helpful when
you write scripts to run from your Amazon EC2 instance.

Note: This task is optional and is provided in case you have lab time remaining. You can complete
this task or skip to the end of the lab.

147.​ Return to the browser tab with the AWS Management Console open.
148.​ In the left navigation pane, choose Instances.
149.​ Select Public Instance.
150.​ Choose Connect.

The Connect to instance page is displayed.

151.​ Choose the Session Manager tab.


152.​ Choose Connect.

A new browser tab or window opens with a connection to the Public Instance.

153.​ Command: To view all categories of instance metadata from within a running instance,
run the following command:

TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
&& curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/

154.​ Command: Run the following command to retrieve the public-hostname (one of the
top-level metadata items that were obtained in the preceding command):

curl http://169.254.169.254/latest/meta-data/public-hostname -H "X-aws-ec2-metadata-token: $TOKEN"

Note: The IP address 169.254.169.254 is a link-local address and is valid only from the instance.
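The same token can be reused against other standard metadata paths. For example, the following commands return the instance ID and the Availability Zone of the running instance:

curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id
curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/placement/availability-zone
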

You have successfully learned how to retrieve instance metadata from your running Amazon EC2
instance.
Conclusion
Creating a VPC with both public and private subnets provides you the flexibility to launch tasks and
services in either a public or private subnet. Tasks and services in the private subnets can access
the internet through a NAT gateway.

Congratulations! You now have successfully:

●​ Created an Amazon VPC.


●​ Created public and private subnets.
●​ Created an internet gateway.
●​ Configured a route table and associated it to a subnet.
●​ Created an Amazon EC2 instance and made the instance publicly accessible.
●​ Isolated an Amazon EC2 instance in a private subnet.
●​ Created and assigned security groups to Amazon EC2 instances.
●​ Connected to Amazon EC2 instances using Session Manager.

End lab
Follow these steps to close the console and end your lab.

155.​ Return to the AWS Management Console.


156.​ At the upper-right corner of the page, choose AWSLabsUser, and then choose Sign out.
157.​ Choose End Lab and then confirm that you want to end your lab.

Additional resources
●​ What is Amazon VPC?
●​ Subnets for Your VPC
●​ Connect to the internet using an internet gateway
●​ Configure route tables
●​ Control traffic to resources using security groups
●​ NAT gateways
●​ Public IPv4 addresses
●​ Understanding the basics of IPv6 networking on AWS

Optional task solution


158.​ Return to the AWS Management Console browser tab.
159.​ At the top of the AWS Management Console, in the search box, search for and choose ​
EC2.
160.​ In the left navigation pane, choose Security Groups.
161.​ Select Private SG.
162.​ Choose Actions and then choose Edit inbound rules.
163.​ On the Edit inbound rules page, in the Inbound rules, choose Add rule and configure the
following:
●​ Type: Select Custom ICMP - IPV4.
●​ Source: Select Custom.
○​ In the box to the right of Custom, type sg.
○​ Choose Public SG from the list.
164.​ Choose Save rules.
165. Select the Optional Task link to return to the optional task and re-run the ping steps. The Public Instance should now be able to successfully ping the Private Instance.
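Note: The same inbound rule can also be added with the AWS CLI. The command below is a sketch only; the two security group IDs are placeholders for the IDs of Private SG and Public SG.

# Allow ICMP from the Public SG into the Private SG (placeholder group IDs).
aws ec2 authorize-security-group-ingress \
  --group-id sg-1111111111aaaaaaa \
  --ip-permissions 'IpProtocol=icmp,FromPort=-1,ToPort=-1,UserIdGroupPairs=[{GroupId=sg-2222222222bbbbbbb}]'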
Lab 3: Creating a Database Layer in Your
Amazon VPC Infrastructure
Lab overview
A backend database plays an important role in any environment, and the security and access control
to this critical resource is vital to any architecture. In this lab, you create an Amazon Aurora
database (DB) cluster to manage a MySQL database and an Application Load Balancer (ALB). The
Amazon Web Services (AWS) Security pillar of the Well-Architected Framework recommends
keeping people away from data; as such, the database is separated from the front end using the
Application Load Balancer. The Application Load Balancer routes traffic to healthy Amazon Elastic
Compute Cloud (Amazon EC2) instances that host the front-end application. This provides high availability and allows communication with the database to happen behind the Application Load Balancer in a private subnet.

Objectives
By the end of this lab, you will be able to do the following:

●​ Create an Amazon Relational Database Service (Amazon RDS) database instance.


●​ Create an Application Load Balancer.
●​ Create an HTTP listener for the Application Load Balancer.
●​ Create a target group.
●​ Register targets with a target group.
●​ Test the load balancer and the application connectivity to the database.
●​ Review the Amazon RDS DB instance metadata using the console.
●​ Optional Task: Create an Amazon RDS read replica in a different AWS Region.

Prerequisites
This lab requires the following:

●​ Access to a notebook computer with Wi-Fi and Microsoft Windows, macOS, or Linux
(Ubuntu, SuSE, or Red Hat)
●​ An internet browser, such as Chrome, Firefox, or Microsoft Edge
●​ A plaintext editor

Icon key
Various icons are used throughout this lab to call attention to different types of instructions and
notes. The following list explains the purpose for each icon:

●​ Note: A hint, tip, or important guidance.


●​ Learn more: Where to find more information.
●​ Caution: Information of special interest or importance (not so important to cause problems
with the equipment or data if you miss it, but it could result in the need to repeat certain
steps).
●​ WARNING: An action that is irreversible and could potentially impact the failure of a
command or process (including warnings about configurations that cannot be changed after
they are made).
●​ Expected output: A sample output that you can use to verify the output of a command or
edited file.
●​ Command: A command that you must run.
●​ Consider: A moment to pause to consider how you might apply a concept in your own
environment or to initiate a conversation about the topic at hand.

Start lab
1.​ To launch the lab, at the top of the page, choose Start Lab.​
Caution: You must wait for the provisioned AWS services to be ready before you can
continue.
2.​ To open the lab, choose Open Console .​
You are automatically signed in to the AWS Management Console in a new web browser tab.​
Warning: Do not change the Region unless instructed.

Common sign-in errors

Error: You must first sign out

If you see the message, You must first log out before logging into a different AWS account:

●​ Choose the click here link.


●​ Close your Amazon Web Services Sign In web browser tab and return to your initial lab
page.
●​ Choose Open Console again.

Error: Choosing Start Lab has no effect

In some cases, certain pop-up or script blocker web browser extensions might prevent the Start Lab
button from working as intended. If you experience an issue starting the lab:

●​ Add the lab domain name to your pop-up or script blocker’s allow list or turn it off.
●​ Refresh the page and try again.

Scenario
Your team has been tasked with prototyping an architecture for a new web-based application. To
define your architecture, you need to have a better understanding of load balancers and managed
databases, such as Amazon RDS.

Lab environment
The lab environment provides you with the following resources to get started: an Amazon Virtual
Private Cloud (Amazon VPC), underlying necessary network structure, three security groups to
control inbound and outbound traffic, two EC2 instances in a private subnet, and an associated EC2
instance profile. The instance profile contains the permissions necessary to allow the AWS Systems
Manager Session Manager feature to access the EC2 instance.

The following diagram shows the expected architecture of the important lab resources you build and
how they should be connected at the end of the lab.

AWS services not used in this lab


AWS services not used in this lab are turned off in the lab environment. In addition, the capabilities
of the services used in this lab are limited to only what the lab requires. Expect to receive errors
when accessing other services or performing actions beyond those provided in this lab guide.
Task 1: Create an Amazon RDS database
In this task, you create an Aurora DB cluster that is compatible with MySQL. An Aurora DB cluster
consists of one or more DB instances and a cluster volume that manages the data for those DB
instances.

Learn more: Amazon Aurora is a fully managed relational database engine that is compatible with
MySQL and PostgreSQL. Aurora is part of the managed database service, Amazon RDS. Amazon
RDS is a web service that makes it easier to set up, operate, and scale a relational database in the
cloud. For more information, see What is Amazon Aurora?.

3.​ At the top of the AWS Management Console, in the search bar, search for and choose ​
RDS.
4.​ In the left navigation pane, choose Databases.
5.​ Choose Create database.

The Create database page is displayed.

6.​ In the Choose a database creation method section, select Standard create.
7.​ In the Engine options section, configure the following:
●​ Engine type: Select Aurora (MySQL Compatible).
8.​ In the Templates section, select Dev/Test.
9.​ In the Settings section, configure the following:
●​ DB cluster identifier: Enter aurora.
●​ Master username: Enter dbadmin.
● Credentials management: Choose the Self managed option.
●​ Master password: Paste the LabPassword value from the left side of these lab instructions.
●​ Confirm master password: Paste the LabPassword value from the left side of these lab
instructions.
10.​In the Instance configuration section, configure the following:
●​ DB instance class: Select Burstable classes (includes t classes).
●​ From the dropdown menu, choose the db.t3.medium instance type.
11.​ In the Availability & durability section, for Multi-AZ deployment, select Don’t create an Aurora
Replica.

Learn more: Amazon RDS Multi-AZ deployments provide enhanced availability and durability for DB
instances, making them a natural fit for production database workloads. When you provision a
Multi-AZ DB instance, Amazon RDS automatically creates a primary DB instance and synchronously
replicates the data to a standby instance in a different Availability Zone. For more information, see
Amazon RDS Multi-AZ.

Note: Since this lab is about knowing the resources required to build a multi-tier architecture, you do
not need to perform a Multi-AZ deployment. You learn how to deploy a Multi-AZ architecture in the
next lab.
12.​In the Connectivity section, configure the following:
●​ Virtual private cloud (VPC): Select LabVPC from the dropdown menu.
●​ DB subnet group: Select labdbsubnetgroup from the dropdown menu.
●​ Public access: Select No.
●​ VPC security group (firewall): Select Choose existing.
●​ Existing VPC security groups:
○​ To remove the default security group from the Existing VPC security groups field,
select the X.
○​ In the Existing VPC security groups dropdown menu, enter LabDBSecurityGroup
to choose this option.

Learn more: Subnets are segments of an IP address range in an Amazon VPC that you designate
to group your resources based on security and operational needs. A DB subnet group is a collection
of subnets (typically private) that you create in an Amazon VPC and then designate for your DB
instances. With a DB subnet group, you can specify an Amazon VPC when creating DB instances
using the command line interface or API. If you use the console, you can just select the Amazon
VPC and subnets you want to use. For more information, see Working with DB subnet groups.

Learn more: With Amazon VPC, you can launch AWS resources into a virtual network that you have
defined. This virtual network closely resembles a traditional network that you would operate in your
own data center, with the benefits of using the scalable infrastructure of AWS. For more information,
see Amazon VPC VPCs and Amazon RDS.

13. In the Monitoring section, deselect Enable Enhanced monitoring.


14.​Expand the Additional configuration main section at the end of the page.
15.​In the Database options section, configure the following:
●​ Initial database name: Enter inventory
●​ DB cluster parameter group: Choose the value from the dropdown menu that matches the
DBClusterParameterGroup value from the left side of this page.

Caution: Ensure the correct value for DB cluster parameter group is selected from the dropdown
menu. An incorrect value results in errors when building the database replicas.

16.​In the Encryption section, unselect Enable encryption.

Learn more: You can encrypt your Amazon RDS instances and snapshots at rest by activating the
encryption option for your Amazon RDS DB instance. Data that is encrypted at rest includes the
underlying storage for a DB instance, its automated backups, read replicas, and snapshots. For
more information, see Encrypting Amazon RDS resources.

17.​In the Maintenance section, unselect Enable auto minor version upgrade.

Note: Because this lab is short-lived, there is no need to set up a maintenance schedule for the database.

18.​Scroll to the bottom of the screen, then choose Create database.


19.​On the Suggested add-ons for aurora pop-up window, choose Close.
A Successfully created database aurora message is displayed on top of the screen.

Your Aurora MySQL DB cluster is in the process of launching. The Amazon RDS database can take
up to 5 minutes to launch. However, you can continue to the next task.

Congratulations! You have successfully created an Amazon RDS database.
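Note: For comparison, a similar cluster can be sketched with the AWS CLI. The commands below are illustrative only: the security group ID is a placeholder, EXAMPLE-PASSWORD stands in for the LabPassword value, and the writer instance identifier is an arbitrary example name.

# Create the Aurora MySQL-compatible cluster.
aws rds create-db-cluster \
  --db-cluster-identifier aurora \
  --engine aurora-mysql \
  --master-username dbadmin \
  --master-user-password EXAMPLE-PASSWORD \
  --database-name inventory \
  --db-subnet-group-name labdbsubnetgroup \
  --vpc-security-group-ids sg-1234567890abcdef0

# Add the writer DB instance to the cluster.
aws rds create-db-instance \
  --db-instance-identifier aurora-instance-1 \
  --db-cluster-identifier aurora \
  --engine aurora-mysql \
  --db-instance-class db.t3.medium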

Task 2: Create and configure an Application Load Balancer
In this task, you create an Application Load Balancer in the public subnets to access the application
from a browser. You navigate to the Amazon EC2 console and create an Application Load Balancer
into the existing Amazon VPC infrastructure and add the private EC2 instances as a target.

A load balancer serves as the single point of contact for clients. Clients send requests to the load
balancer, and the load balancer sends them to targets, such as EC2 instances. To configure your
load balancer, you create target groups and then register targets with your target groups.

Task 2.1: Create a target group


In this task, you create a target group and register your targets with the target group. By default, the
load balancer sends requests to registered targets using the port and protocol that you specified for
the target group.

20.​At the top of the console, in the search bar, search for and choose ​
EC2.
21.​In the left navigation pane, expand the Load Balancing section and choose Target Groups.
22.​Choose Create target group.
The Specify group details page is displayed.

23.​In the Basic configuration section, configure the following:


●​ Choose a target type: Select Instances.
●​ Target group name: Enter ALBTargetGroup.
●​ VPC: Select LabVPC from the dropdown menu.

The remaining settings on the page can be left at their default values.

24.​Choose Next.

The Register targets page is displayed.

25.​In the Available instances section, configure the following:


● Select the EC2 instances named AppServer1 and AppServer2.
● Choose Include as pending below.

The instances appear under the Targets section of the page.

26.​Choose Create target group.

A Successfully created target group: ALBTargetGroup message is displayed on top of the screen.
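Note: An equivalent target group can be created and populated from the AWS CLI. This is a sketch only; the VPC ID, the target group ARN returned by the first command, and the instance IDs are placeholders.

# Create the target group (placeholder VPC ID).
aws elbv2 create-target-group \
  --name ALBTargetGroup \
  --protocol HTTP \
  --port 80 \
  --target-type instance \
  --vpc-id vpc-1234567890abcdef0

# Register the two application servers (placeholder ARN and instance IDs).
aws elbv2 register-targets \
  --target-group-arn arn:aws:elasticloadbalancing:us-west-2:111122223333:targetgroup/ALBTargetGroup/1234567890abcdef \
  --targets Id=i-1111111111aaaaaaa Id=i-2222222222bbbbbbb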

Task 2.2: Create an Application Load Balancer


In this task, you create an Application Load Balancer. To do that, you must first provide basic
configuration information for your load balancer, such as a name, scheme, and IP address type.
Then, you provide information about your network and one or more listeners.

27.​In the left navigation pane, expand the Load Balancing section and choose Load Balancers.
28.​Choose Create load balancer.

The Select load balancer type page is displayed.

29. In the Load balancer types section, on the Application Load Balancer card, choose Create.

The Create Application Load Balancer page is displayed.

30.​In the Basic configuration section, configure the following:


●​ Load balancer name: Enter LabAppALB.
31.​In the Network mapping section, configure the following:
●​ VPC: Select LabVPC from the dropdown menu.
●​ Mappings:
○​ Select the check box for the first Availability Zone listed, and select PublicSubnet1
from the Subnet list dropdown menu.
○​ Select the check box for the second Availability Zone listed, and select
PublicSubnet2 from the Subnet list dropdown menu.
32.​In the Security groups section, configure the following:
●​ Select the X to remove the default security group.
●​ Select LabALBSecurityGroup from the dropdown menu.
33.​In the Listeners and routing section, configure the following:
●​ For Listener HTTP:80: From the Default action dropdown menu, select ALBTargetGroup.
34.​Choose Create load balancer.

A Successfully created load balancer: LabAppALB message is displayed on top of the screen.

35.​Choose View load balancer.

The load balancer remains in the Provisioning state for a few minutes and then changes to Active.

In this task, you created an Application Load Balancer and added EC2 instances as targets for the load balancer, demonstrating how to register targets with a load balancer. In addition to individual EC2 instances, Auto Scaling groups can also be registered as targets for the load balancer. When you use an Auto Scaling group as a target for load balancing, the instances launched by the Auto Scaling group are automatically registered with the load balancer. Likewise, EC2 instances that are terminated by the Auto Scaling group are automatically deregistered from the load balancer. Using Auto Scaling groups with a load balancer is demonstrated in the next lab.

Congratulations! You have successfully created a load balancer, created target groups, and
registered the EC2 instances with the target group.
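Note: For reference, a load balancer and listener with the same settings could also be created from the AWS CLI. The subnet IDs, security group ID, and ARNs below are placeholders.

# Create the Application Load Balancer in the two public subnets.
aws elbv2 create-load-balancer \
  --name LabAppALB \
  --subnets subnet-1111111111aaaaaaa subnet-2222222222bbbbbbb \
  --security-groups sg-1234567890abcdef0

# Create an HTTP:80 listener that forwards to the target group.
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-west-2:111122223333:loadbalancer/app/LabAppALB/1234567890abcdef \
  --protocol HTTP \
  --port 80 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-west-2:111122223333:targetgroup/ALBTargetGroup/1234567890abcdef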

Task 3: Review the Amazon RDS DB instance metadata through the console
In this task, you navigate through the Amazon RDS console to ensure the instance created in Task 1
has completed and is active. You explore the console to learn how to find the connection information
for a DB instance. The connection information for a DB instance includes its endpoint, port, and a
valid database user.

36.​At the top of the console, in the search bar, search for and choose ​
RDS.
37.​In the navigation pane, choose Databases.
38.​From the list of DB identifiers, select the hyperlink for the cluster named aurora.

A page with details about the database is displayed.

39.​On the Connectivity & security tab, you can find the endpoint and port number for the
database cluster. In general, you need the endpoints and the port number to connect to the
database.
40.​Copy and paste the Endpoint name of the writer instance value to a notepad. You need this
value later in the lab.

It should look similar to aurora.cluster-crwxbgqad61a.us-west-2.rds.amazonaws.com.


Tip: To copy the writer instance endpoint, hover on it and choose the copy icon.

Notice that the status for the endpoints is Available.

41.​On the Configuration tab, you can find details regarding how the database is currently
configured.
42.​On the Monitoring tab, you can monitor metrics for the following items of your database:
●​ The number of connections to a database instance
●​ The amount of read and write operations to a database instance
●​ The amount of storage that a database instance is currently using
●​ The amount of memory and CPU being used for a database instance
●​ The amount of network traffic to and from a database instance

WARNING: Wait for the Status of the aurora DB instance to show as Available before continuing to
the next task.

Congratulations! You have successfully reviewed the Amazon RDS DB instance metadata through
the console.
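Note: The same connection details can also be retrieved with the AWS CLI, which is convenient when scripting. A minimal sketch:

aws rds describe-db-clusters \
  --db-cluster-identifier aurora \
  --query "DBClusters[0].[Endpoint,ReaderEndpoint,Port,Status]" \
  --output table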

Task 4: Test the application connectivity to the database


In this task, you identify the Application Load Balancer URL and run a basic HTTP request through
the load balancer. You launch the web application installed on the EC2 instances and test the
application connectivity to the database.

43.​At the top of the console, in the search bar, search for and choose ​
EC2.
44.​In the left navigation pane, choose Target Groups.
45.​Select ALBTargetGroup.
46. On the Targets tab, wait until the status of the registered instances is displayed as healthy.

Learn more: Elastic Load Balancing periodically tests the ping path on your web server instance to
determine health. A 200 HTTP response code indicates a healthy status, and any other response
code indicates an unhealthy status. If an instance is unhealthy and continues in that state for a
successive number of checks (unhealthy threshold), the load balancer removes it from service until it
recovers. For more information, see Health checks for your target groups.

47.​In the left navigation pane, choose Load Balancers.

The Load balancers page is displayed.

48.​Copy the DNS name and paste the value in a new browser tab to invoke the load balancer.

Tip: To copy the DNS name, hover on it and select the copy icon.

Expected output: A web page like this is displayed.

49.​Choose the Settings tab and then configure the following:


●​ Endpoint: Paste the writer instance endpoint you copied earlier.
●​ Database: Enter inventory.
●​ Username: Enter dbadmin.
●​ Password: Paste the LabPassword value from the left side of these lab instructions.
50.​Choose Save.

The application connects to the database, loads some initial data, and displays information. With
this application, you can add, edit, or delete an item from a store’s inventory.

The inventory information is stored in the Amazon RDS MySQL-compatible database you created
earlier in the lab. This means that if the web application server fails, the data won’t be lost. It also
means that multiple application servers can access the same data.

Congratulations! You have successfully accessed the web application installed on the EC2 instance
through the load balancer.
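Note: The same checks can be performed from the AWS CLI. The sketch below lists the target health, looks up the load balancer DNS name, and sends a test request; the target group ARN is a placeholder.

# Check the health of the registered targets (placeholder ARN).
aws elbv2 describe-target-health \
  --target-group-arn arn:aws:elasticloadbalancing:us-west-2:111122223333:targetgroup/ALBTargetGroup/1234567890abcdef

# Look up the load balancer DNS name and send a test request to it.
DNS_NAME=$(aws elbv2 describe-load-balancers \
  --names LabAppALB \
  --query "LoadBalancers[0].DNSName" \
  --output text)
curl -I "http://${DNS_NAME}"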
Optional Task: Creating an Amazon RDS read replica in a
different AWS Region
In this challenge task, you create a cross-Region read replica from the source DB instance. You
create a read replica in a different AWS Region to improve your disaster recovery capabilities, scale
read operations into an AWS Region closer to your users, and to make it easier to migrate from a
data center in one AWS Region to a data center in another AWS Region.

Note: This challenge task is optional and is provided in case you have lab time remaining. You can
complete this task or skip to the end of the lab here.

51.​Switch back to the browser tab open to the AWS Management Console.
52.​At the top of the console, in the search bar, search for and choose ​
RDS.
53.​In the left navigation pane, choose Databases.
54.​Select aurora DB instance as the source for a read replica.
55.​Choose Actions and select Create cross-Region read replica.

The Create cross region read replica page is displayed.

For Multi-AZ deployment: Select Don’t create an Aurora Replica.

The remaining settings in this section can be left at their default values.

56.​In the Connectivity section, configure the following:


●​ Destination Region: From the dropdown menu, select the region that matches the
RemoteRegion value from the lab instructions.
●​ Virtual private cloud (VPC): LabVPC
●​ Public access: Select No.
●​ For Existing VPC security groups:
○​ To remove the default security group, select the X.
○​ From the dropdown menu, enter LabDBSecurityGroup to choose this option. The
remaining settings in this section can be left at their default values.
57.​In the Settings section, configure the following:
●​ DB instance identifier: Enter LabDBreplica.

The remaining settings in this section can be left at their default values.

58.​Choose Create.

A Your Read Replica creation has been initiated. message is displayed on the screen.

59.​To review the cross-Region read replica in the destination region, choose the hyperlink on
the same page labeled here.
60.​Otherwise, choose Close.
Congratulations! You have successfully completed the optional task and started the creation of a
cross-Region read replica for the Amazon RDS database.
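Note: You can monitor the replica from the AWS CLI as well. The sketch below assumes us-east-1 stands in for the RemoteRegion value shown in your lab instructions.

aws rds describe-db-instances \
  --region us-east-1 \
  --query "DBInstances[].[DBInstanceIdentifier,DBInstanceStatus]" \
  --output table
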

Conclusion
Congratulations! You have now successfully completed the following:

●​ Created an Amazon RDS DB instance.


●​ Created an Application Load Balancer.
●​ Created an HTTP listener for the Application Load Balancer.
●​ Created a target group.
●​ Registered targets with a target group.
●​ Tested the load balancer and the application connectivity to the database.
●​ Reviewed the Amazon RDS DB instance metadata using the console.

In this lab, you learned how to deploy various resources needed for a prototype web application in
your Amazon VPC. However, the architecture that was created in this lab does not meet AWS Cloud
best practices because it is not an elastic, durable, highly available design. By relying on only a
single Availability Zone in the architecture, there is a single point of failure. You learn how to
configure your architecture for redundancy, failover, and high availability in the next lab.

End lab
Follow these steps to close the console and end your lab.

61.​Return to the AWS Management Console.


62.​At the upper-right corner of the page, choose AWSLabsUser, and then choose Sign out.
63.​Choose End Lab and then confirm that you want to end your lab.
Lab 4: Configuring High Availability in
Your Amazon VPC
Lab overview
Amazon Web Services (AWS) provides services and infrastructure to build reliable, fault-tolerant,
and highly available systems in the cloud. Fault tolerance is a system’s ability to remain in operation
even if some of the components used to build the system fail. High availability is not about
preventing system failure but the ability of the system to recover quickly from it. As an AWS solutions
architect, it is important to design your systems to be highly available and fault tolerant when
needed. You must also understand the benefits and costs of those designs. In this lab, you integrate
two powerful AWS services: Elastic Load Balancing and Auto Scaling groups. You create an Auto
Scaling group of Amazon Elastic Compute Cloud (Amazon EC2) instances operating as application
servers. You then configure an Application Load Balancer to load balance between the instances
inside that Auto Scaling group. You continue to work with the Amazon Relational Database Service
(Amazon RDS) by permitting Multi-AZ, creating a read replica, and promoting a read replica. With
read replicas, you can write to the primary database and read from the read replica. Because a read
replica can be promoted to be the primary database, it is a useful tool in high availability and disaster
recovery.

The following image shows the final architecture:

Objectives
After completing this lab, you should be able to do the following:

●​ Create an Amazon EC2 Auto Scaling group and register it with an Application Load Balancer
spanning across multiple Availability Zones.
●​ Create a highly available Amazon Aurora database (DB) cluster.
●​ Modify an Aurora DB cluster to be highly available.
●​ Modify an Amazon Virtual Private Cloud (Amazon VPC) configuration to be highly available
using redundant NAT gateways.
● Confirm that your database can perform a failover to a read replica instance.

Prerequisites
This lab requires the following:

●​ Access to a notebook computer with Wi-Fi and Microsoft Windows, macOS, or Linux
(Ubuntu, SuSE, or Red Hat)
●​ An internet browser, such as Chrome, Firefox, or Microsoft Edge
●​ A plaintext editor

Icon key
Various icons are used throughout this lab to call attention to different types of instructions and
notes. The following list explains the purpose for each icon:

●​ Note: A hint, tip, or important guidance.


●​ Learn more: Where to find more information.
●​ Caution: Information of special interest or importance (not so important to cause problems
with the equipment or data if you miss it, but it could result in the need to repeat certain
steps).
●​ Refresh: A time when you might need to refresh a web browser page or list to show new
information.

Start lab
1.​ To launch the lab, at the top of the page, choose Start Lab.​
Caution: You must wait for the provisioned AWS services to be ready before you can
continue.
2.​ To open the lab, choose Open Console .​
You are automatically signed in to the AWS Management Console in a new web browser tab.​
Warning: Do not change the Region unless instructed.

Common sign-in errors

Error: You must first sign out


If you see the message, You must first log out before logging into a different AWS account:

●​ Choose the click here link.


●​ Close your Amazon Web Services Sign In web browser tab and return to your initial lab
page.
●​ Choose Open Console again.

Error: Choosing Start Lab has no effect

In some cases, certain pop-up or script blocker web browser extensions might prevent the Start Lab
button from working as intended. If you experience an issue starting the lab:

●​ Add the lab domain name to your pop-up or script blocker’s allow list or turn it off.
●​ Refresh the page and try again.

AWS services not used in this lab


AWS service capabilities used in this lab are limited to what the lab requires. Expect errors when
accessing other services or performing actions beyond those provided in this lab guide.

Task 1: Inspect your existing lab environment


Review the configuration of the existing environment. The following resources have been
provisioned for you through AWS CloudFormation:

●​ An Amazon VPC
●​ Public and private subnets in two Availability Zones
●​ An internet gateway (not shown in the diagram) associated with the public subnets
●​ A NAT gateway in one of the public subnets
●​ An Application Load Balancer deployed across the two public subnets to receive and forward
incoming application traffic
●​ An EC2 instance in one of the private subnets, running a basic inventory tracking application
●​ An Aurora DB cluster containing a single DB instance in one of the private subnets to store
inventory data
The following image shows the initial architecture:

Task 1.1: Examine the network infrastructure


In this task, you review the network configuration details for the lab environment.

3.​ At the top of the AWS Management Console, in the search bar, search for and choose VPC.

Note: The Lab VPC was created for you by the lab environment, and all of the application resources
used by this lab exercise exist inside this VPC.

4.​ In the left navigation pane, choose Your VPCs.

Your Lab VPC appears on the list along with the default VPC.

5.​ In the left navigation pane, choose Subnets.

The subnets that are part of the Lab VPC are displayed in a list. Examine the following details listed
in the columns for Public Subnet 1:

●​ In the VPC column, you can identify which VPC this subnet is associated with. This subnet
exists inside the Lab VPC.
●​ In the IPv4 Classless Inter-Domain Routing (CIDR) column, the value of 10.0.0.0/24 means
this subnet includes the 256 IPs (five of which are reserved and unusable) between 10.0.0.0
and 10.0.0.255.
●​ In the Availability Zone column, you can identify the Availability Zone in which this subnet
resides. This subnet resides in the Availability Zone ending with an “a”.
6.​ To reveal more details at the bottom of the page, select Public Subnet 1.

Note: To expand the lower window pane, drag the divider up and down. Alternatively, to choose a
preset size for the lower pane you can choose one of the three square icons.

7.​ On the lower half of the page, choose the Route table tab.

This tab displays details about the routing for this subnet:

●​ The first entry specifies that traffic destined within the VPC’s CIDR range (10.0.0.0/20) is
routed within the VPC (local).
●​ The second entry specifies that any traffic destined for the internet (0.0.0.0/0) is routed to the
internet gateway (igw-xxxx). This configuration makes it a public subnet.
8.​ Choose the Network ACL tab.

This tab displays the network access control list (ACL) associated with the subnet. The rules
currently permit all traffic to flow in and out of the subnet. You can further restrict the traffic by
modifying the network ACL rules or by using security groups.

9.​ In the left navigation pane, choose Internet gateways.

An internet gateway called Lab IG is already associated with the Lab VPC.

10.​In the left navigation pane, choose Security groups.


11.​ Select the Inventory-ALB security group.

This is the security group used to control incoming traffic to the Application Load Balancer.

12.​On the lower half of the page, choose the Inbound rules tab.

The security group permits inbound web traffic (port 80) from everywhere (0.0.0.0/0).

13.​Choose the Outbound rules tab.

By default, security groups allow all outbound traffic. However, you can modify these rules as
necessary.

14.​Select the Inventory-App security group. Ensure that it is the only security group selected.

This is the security group used to control incoming traffic to the AppServer EC2 instance.

15.​On the lower half of the page, choose the Inbound rules tab.

The security group only permits inbound web traffic (port 80) from the Application Load Balancer
security group (Inventory-ALB).

16.​Choose the Outbound rules tab.


By default, security groups allow all outbound traffic. As with the outbound rules for the Application
Load Balancer security group, you can modify these rules as necessary.

17.​Select the Inventory-DB security group. Ensure that it is the only security group selected.

This is the security group used to control incoming traffic to the database.

18.​On the lower half of the page, choose the Inbound rules tab.

The security group permits inbound MYSQL/Aurora traffic (port 3306) from the application server
security group (Inventory-App).

19.​Choose the Outbound rules tab.

By default, security groups allow all outbound traffic. As with the outbound rules for the previous
security groups, you can modify these rules as necessary.
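Note: The same inspection can be done from the AWS CLI. The following sketch lists the subnets of the Lab VPC and the rules of the three security groups; the VPC ID is a placeholder.

# List the subnets in the Lab VPC (placeholder VPC ID).
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=vpc-1234567890abcdef0" \
  --query "Subnets[].[SubnetId,CidrBlock,AvailabilityZone]" \
  --output table

# Review the rules of the Inventory security groups.
aws ec2 describe-security-groups \
  --filters "Name=group-name,Values=Inventory-ALB,Inventory-App,Inventory-DB" \
  --query "SecurityGroups[].[GroupName,IpPermissions]"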

Task 1.2: Examine the EC2 instance


An EC2 instance has been provided for you. This instance runs a basic Hypertext Preprocessor
(PHP) application that tracks inventory in a database. In this task, you inspect the instance details.

20.​At the top of the console, in the search bar, search for and choose ​
EC2.
21.​In the left navigation pane, choose Instances.
22.​Select the AppServer instance to reveal more details at the bottom of the page.
23.​After reviewing the instance details, choose the Actions dropdown menu, choose Instance
settings, and then choose Edit user data.
24.​On the Edit user data page, choose Copy user data.
25.​Paste the user data you just copied into a text editor. You use it in a later task.
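Note: The same user data can be retrieved with the AWS CLI; it is returned Base64 encoded, so decode it before saving. The instance ID below is a placeholder for the ID of the AppServer instance.

aws ec2 describe-instance-attribute \
  --instance-id i-1234567890abcdef0 \
  --attribute userData \
  --query "UserData.Value" \
  --output text | base64 --decode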

Task 1.3: Examine the load balancer configuration


An Application Load Balancer and target group have been provided for you. In this task, you review
their configuration.

26.​Expand the navigation menu by choosing the menu icon in the upper-left corner.
27.​In the left navigation pane, choose Target Groups.
28.​Select the Inventory-App target group to reveal more details at the bottom of the page.
29.​On the lower half of the page, choose the Targets tab.

The Application Load Balancer forwards incoming requests to all targets on the list. The AppServer
EC2 instance you examined earlier is already registered as a target.

30.​In the left navigation pane, choose Load Balancers.


31.​Choose the Inventory-LB load balancer name to reveal more details.

Task 1.4: Open the PHP inventory application in a web browser


To confirm the inventory application is working correctly, you need to retrieve the URL for the
inventory application settings page.

32.​Copy the InventoryAppSettingsPageURL on the left side of these lab instructions to your
clipboard.

Note: It should be similar to http://Inventory-LB-xxxx.elb.amazonaws.com/settings.php.

33.​Open a new web browser tab, paste the URL you copied in the previous step, and press
Enter.

The settings page for the inventory application is displayed. The database endpoint, database name,
and login details are already populated with the values for the Aurora database.

34.​Leave all the settings on the inventory app settings page as the default configurations.
35.​Choose Save.

After saving the settings, the inventory application redirects to the main page, and inventory for
various items are displayed. You can add items to the inventory or modify the details of the existing
inventory items. When you interact with this application, the load balancer forwards your requests to
the previous AppServer in the load balancer’s target group. The AppServer registers any inventory
changes in the Aurora database. The bottom of the page displays the instance ID and the Availability
Zone where the instance resides.

Note: Leave this inventory application web browser tab open while working on the remaining lab
tasks. You return to it in later tasks.

Congratulations! You have now finished inspecting all of the resources created for you in the lab
environment and successfully accessed the provided inventory application. Next, you create a
launch template to use with Amazon EC2 Auto Scaling to make the inventory application highly
available.

Task 2: Create a launch template


Note: Before you can create an Auto Scaling group, you must create a launch template that
includes the parameters required to launch an EC2 instance, such as the ID of the Amazon Machine
Image (AMI) and an instance type.

In this task, you create a launch template.

36.​At the top of the console, in the search bar, search for and choose ​
EC2.
37.​In the left navigation pane, below Instances, choose Launch Templates.
38.​Choose Create launch template.
39.​In the Launch template name and description section, configure the following:
●​ Launch template name: Enter Lab-template-NUMBER
Note: Replace NUMBER with a random number, such as the following example:

Lab-template-98469549

●​ Template version description: Enter version 1

Note: If the template name already exists, try again with a different number.

You must choose an AMI. An AMI is an image defining the root volume of the instance along with its
operating system, applications, and related details. Without this information, your template would be
unable to launch new instances.

AMIs are available for various operating systems (OSs). In this lab, you launch instances running the
Amazon Linux 2023 OS.

40.​For Application and OS Images (Amazon Machine Image) Info, choose the Quick Start tab.
41.​Choose Amazon Linux as the OS.
42.​For Amazon Machine Image, choose Amazon Linux 2023 AMI.
43.​For Instance type, choose t3.micro from the dropdown menu.

When you launch an instance, the instance type determines the hardware allocated to your instance.
Each instance type offers different compute, memory, and storage capabilities, and they are grouped
in instance families based on these capabilities.

44.​In the Network Settings section, for Security groups, choose Inventory-App.
45.​Scroll down to the Advanced details section.
46.​Expand Advanced details.
47.​For IAM instance profile, choose Inventory-App-Role.
48.​For Metadata version, choose V2 only (token required).
49.​In the User data section, paste the user data you saved to your text editor during Task 1.2.
50.​Choose Create launch template.
51.​Choose View launch templates.

Congratulations! You have successfully created the launch template.
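Note: An equivalent launch template can be sketched with the AWS CLI. The AMI ID and security group ID are placeholders, and BASE64-ENCODED-USER-DATA stands in for the Base64-encoded user data you copied in Task 1.2.

aws ec2 create-launch-template \
  --launch-template-name Lab-template-98469549 \
  --version-description "version 1" \
  --launch-template-data '{
    "ImageId": "ami-1234567890abcdef0",
    "InstanceType": "t3.micro",
    "SecurityGroupIds": ["sg-1234567890abcdef0"],
    "IamInstanceProfile": {"Name": "Inventory-App-Role"},
    "MetadataOptions": {"HttpTokens": "required"},
    "UserData": "BASE64-ENCODED-USER-DATA"
  }'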

Task 3: Create an Auto Scaling group


In this task, you create an Auto Scaling group that deploys EC2 instances across your private
subnets. This is a security best practice when deploying applications because instances in a private
subnet cannot be accessed from the internet. Instead, users send requests to the Application Load
Balancer, which forwards the requests to the EC2 instances in the private subnets, as shown in the
following diagram:
Learn more: Amazon EC2 Auto Scaling is a service designed to launch or terminate EC2 instances
automatically based on user-defined policies, schedules, and health checks. The service also
automatically distributes instances across multiple Availability Zones to make applications highly
available. For more information, see What is Amazon EC2 Auto Scaling?.

52.​In the left navigation pane, below Auto Scaling, choose Auto Scaling Groups.
53.​Choose Create Auto Scaling group and configure the following:
●​ Auto Scaling group name: Enter Inventory-ASG
●​ Launch template: From the dropdown menu, select the launch template that you created
earlier.
54.​Choose Next.

The Choose instance launch options page is displayed.

55.​In the Network section, configure the following:


●​ VPC: Select Lab VPC from the dropdown menu.
●​ Availability Zones and subnets: Select Private Subnet 1 and Private Subnet 2 from the
dropdown menu.
56.​Choose Next.
57.​On the Integrate with other services - optional page, configure the following:
●​ Select Attach to an existing load balancer.
●​ Select Choose from your load balancer target groups.
●​ From the Existing load balancer target groups dropdown menu, select Inventory-App | HTTP.

This tells the Auto Scaling group to register new EC2 instances as part of the Inventory-App target
group that you examined earlier. The load balancer sends traffic to instances that are in this target
group.
●​ Health check grace period: Enter 300

By default, the health check grace period is set to 300.

58.​Choose Next.
59.​On the Configure group size and scaling - optional page, configure the following:
●​ Desired capacity: Enter 2
●​ Min desired capacity: Enter 2
●​ Max desired capacity: Enter 2
60.​In the Additional settings section, choose Enable group metrics collection within
CloudWatch.
61.​Choose Next.

For this lab, you always maintain two instances to ensure high availability. If the application is
expected to receive varying loads of traffic, it is also possible to create scaling policies that define
when to launch and terminate instances. However, this is not necessary for the Inventory application
in this lab.

62.​Choose Next until the Add tags - optional page is displayed.


63.​Choose Add tag and then configure the following:
●​ Key: Enter Name
●​ Value - optional: Enter Inventory-App

This tags the Auto Scaling group with a name, which also applies to the EC2 instances launched by
the Auto Scaling group. This helps you identify which EC2 instances are associated with which
application or with business concepts, such as cost centers.

64.​Choose Next.
65.​Review the Auto Scaling group configuration for accuracy, and then choose Create Auto
Scaling group.

Your application will soon be running across two Availability Zones. Amazon EC2 Auto Scaling
maintains the configuration even if an instance or Availability Zone fails.

Now that you have created your Auto Scaling group, you can verify that the group has launched your
EC2 instances.

66.​Choose your Auto Scaling group.


67.​Examine the Group details section to review information about the Auto Scaling group.
68.​Choose the Activity tab.

The Activity history section maintains a record of events that have occurred in your Auto Scaling
group. The Status column contains the status of your instances. When your instances are launching,
the status column shows PreInService. After an instance is launched, the status changes to
Successful.

69.​Choose the Instance management tab.


Your Auto Scaling group has launched two EC2 instances, and they are in the InService lifecycle
state. The Health status column shows the result of the EC2 instance health check on your
instances.

Refresh: If your instances have not reached the InService state yet, you need to wait a few minutes.
You can choose refresh to retrieve the current lifecycle state of your instances.

70.​Choose the Monitoring tab. Here, you can review monitoring-related information for your
Auto Scaling group.

Learn more: This page provides information about activity in your Auto Scaling group and the usage
and health status of your instances. The Auto Scaling tab displays Amazon CloudWatch metrics
about your Auto Scaling group, and the EC2 tab displays metrics for the EC2 instances managed by
the Auto Scaling group. For more information, see Monitor your Auto Scaling instances and groups.

Congratulations! You have now successfully created an Auto Scaling group, which maintains your
application’s availability and makes it resilient to instance or Availability Zone failures. Next, you test
the high availability of the application.
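Note: For reference, a similar Auto Scaling group could be created with the AWS CLI. The subnet IDs and target group ARN below are placeholders, and the launch template name is the example name used earlier.

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name Inventory-ASG \
  --launch-template LaunchTemplateName=Lab-template-98469549,Version='$Latest' \
  --min-size 2 --max-size 2 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-1111111111aaaaaaa,subnet-2222222222bbbbbbb" \
  --target-group-arns arn:aws:elasticloadbalancing:us-west-2:111122223333:targetgroup/Inventory-App/1234567890abcdef \
  --health-check-grace-period 300 \
  --tags Key=Name,Value=Inventory-App,PropagateAtLaunch=true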

Task 4: Test the application


In this task, you confirm that your web application is running and highly available.

71.​Expand the navigation menu by choosing the menu icon in the upper-left corner.
72.​In the left navigation pane, choose Target Groups.
73.​Under Name, select Inventory-App.
74.​On the lower half of the page, choose the Targets tab.

In the Registered targets section, there are three instances. This includes the two Auto Scaling
instances named Inventory-App and the original instance you examined in Task 1, named
AppServer. The Health status column shows the results of the load balancer health check that you
performed against the instances. In this task, you remove the original AppServer instance from the
target group, leaving only the two instances managed by Amazon EC2 Auto Scaling.

75.​For the instance, select AppServer.


76.​To remove the instance from the load balancer’s target group, choose Deregister.

A Successfully deregistered 1 target. message is displayed on top of the screen.

The load balancer stops routing requests to a target as soon as it is deregistered. The Health status
column for the AppServer instance displays a draining state, and the Health Status Details column
displays Target deregistration is in progress until in-flight requests have completed. After a few
minutes, the AppServer instance finishes deregistering, and only the two Auto Scaling instances
remain on the list of registered targets.
Note: Deregistering the instance only detaches it from the load balancer. The AppServer instance
continues to run indefinitely until you terminate it.

77.​If the Health status column for the Inventory-App instances does not display healthy yet,
update the list of instances every 30 seconds using the refresh button at the top-right corner
of the page until both Inventory-App instances display healthy in the Health status column. It
might take a few minutes for the instances to finish initializing.

If the status does not eventually change to healthy, ask your instructor for help diagnosing the
problem. Hovering on the information icon in the Health status column provides more information
about the status.

The application is ready for testing. You test the application by connecting to the Application Load
Balancer, which sends your request to one of the EC2 instances managed by Amazon EC2 Auto
Scaling.

78.​Return to the Inventory Application tab in your web browser.

Note: If you closed the browser tab, you can reopen the inventory application by doing the following:

●​ In the left navigation pane, choose Load Balancers.


●​ Select the Inventory-LB load balancer.
●​ In the Details tab on the lower half of the page, copy the DNS name to your clipboard.

It should be similar to Inventory-LB-xxxx.elb.amazonaws.com.

●​ Open a new web browser tab, paste the DNS name from your clipboard, and press Enter.

The load balancer forwards your request to one of the EC2 instances. The bottom of the page
displays the instance ID and Availability Zone.

79.​ Refresh: Refresh the page in your web browser a few times. The instance ID and Availability
Zone sometimes change between the two instances.

Note: The flow of information is as follows:

●​ You send the request to the Application Load Balancer, which resides in the public subnets.
The public subnets are connected to the internet.
●​ The Application Load Balancer chooses one of the EC2 instances that reside in the private
subnets and forwards the request to the instance.
●​ The EC2 instance then returns the web page to the Application Load Balancer, which returns
the page to your web browser.

The following image displays the flow of information for this web application:
Congratulations! You have now confirmed that Amazon EC2 Auto Scaling successfully launched
two new Inventory-App instances across two Availability Zones, and you deregistered the original
AppServer instance from the load balancer. The Auto Scaling group maintains high availability for
your application in the event of failure. Next, you simulate a failure by terminating one of the
Inventory-App instances managed by Amazon EC2 Auto Scaling.
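Note: The deregistration and health checks in this task can also be performed with the AWS CLI. The target group ARN and instance ID below are placeholders.

# Deregister the original AppServer instance.
aws elbv2 deregister-targets \
  --target-group-arn arn:aws:elasticloadbalancing:us-west-2:111122223333:targetgroup/Inventory-App/1234567890abcdef \
  --targets Id=i-1234567890abcdef0

# Watch the health status of the remaining targets.
aws elbv2 describe-target-health \
  --target-group-arn arn:aws:elasticloadbalancing:us-west-2:111122223333:targetgroup/Inventory-App/1234567890abcdef \
  --query "TargetHealthDescriptions[].[Target.Id,TargetHealth.State]" \
  --output table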

Task 5: Test high availability of the application tier


In this task, you test the high availability configuration of your application by terminating one of the
EC2 instances.

80.​Return to the EC2 Management Console, but do not close the application tab. You return to it
in later tasks.
81.​In the left navigation pane, choose Instances.

Now, terminate one of the web application instances to simulate a failure.

82.​Choose one of the Inventory-App instances. (It does not matter which one you choose.)
83.​Choose Instance State and then choose Terminate instance.
84.​Choose Terminate.

After a short period of time, the load balancer health checks will notice that the instance is not
responding and automatically route all incoming requests to the remaining instance.

85.​Leaving the console open, switch to the Inventory Application tab in your web browser and
refresh the page several times.

The Availability Zone shown at the bottom of the page stays the same. Even though an instance has
failed, your application remains available.

After a few minutes, Amazon EC2 Auto Scaling also detects the instance failure. You configured
Amazon EC2 Auto Scaling to keep two instances running, so Amazon EC2 Auto Scaling
automatically launches a replacement instance.
86.​ Refresh: Return to the EC2 Management Console. Reload the list of instances using the
refresh button every 30 seconds until a new EC2 instance named Inventory-App appears.

The newly launched instance displays Initializing under the Status check column. After a few
minutes, the health check for the new instance should become healthy, and the load balancer
resumes distributing traffic between two Availability Zones.

87.​ Refresh: Return to the Inventory Application tab and refresh the page several times. The
instance ID and Availability Zone change as you refresh the page.

This demonstrates that your application is now highly available.

Congratulations! You have successfully verified that your application is highly available.
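Note: The failure simulation can also be scripted. The sketch below terminates one instance (placeholder instance ID) and then reviews the Auto Scaling activity history for the replacement launch.

# Terminate one of the Inventory-App instances (placeholder instance ID).
aws ec2 terminate-instances --instance-ids i-1234567890abcdef0

# Review the Auto Scaling group's recent activity.
aws autoscaling describe-scaling-activities \
  --auto-scaling-group-name Inventory-ASG \
  --max-items 5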

Task 6: Configure high availability of the database tier


You verified that the application tier was highly available in the previous task. However, the Aurora
database is still operating from only one database instance.

Task 6.1: Configure the database to run across multiple Availability Zones
In this task, you make the Aurora database highly available by configuring it to run across multiple
Availability Zones.

88.​At the top of the console, in the search bar, search for and choose ​
RDS.
89.​In the left navigation pane, choose Databases.
90.​Locate the row that contains the inventory-primary value.
91.​In the fifth column, labeled Region & AZ, note in which Availability Zone the primary is
located.

Caution: In the following steps you create an additional instance for the database cluster. For true
high-availability architecture, the second instance must be located in an Availability Zone that is
different from that of the primary instance.

92.​Select the inventory-cluster radio button associated with your Aurora database cluster.
93.​Choose Actions and then choose Add reader.
94.​In the Settings section, configure the following:
●​ DB instance identifier: Enter inventory-replica
95.​In the Connectivity section, under Availability Zone, select a different Availability Zone from
the one you noted above where the inventory-primary is located.
96.​At the bottom of the page, choose Add reader.

A new DB identifier named inventory-replica appears on the list, and its status is Creating. This is
your Aurora Replica instance. You can continue to the next task without waiting.
Learn more: When your Aurora Replica finishes launching, your database is deployed in a highly
available configuration across multiple Availability Zones. This does not mean that the database is
distributed across multiple instances. Although both the primary DB instance and the Aurora Replica
access the same shared storage, only the primary DB instance can be used for writes. Aurora
Replicas have two main purposes. You can issue queries to them to scale the read operations for
your application. You typically do so by connecting to the reader endpoint of the cluster. That way,
Aurora can spread the load for read-only connections across as many Aurora Replicas as you have
in the cluster. Aurora Replicas also help to increase availability. If the writer instance in a cluster
becomes unavailable, Aurora automatically promotes one of the reader instances to take its place as
the new writer. For more information, see Replication with Amazon Aurora.
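Note: For reference, the reader could also be added with the AWS CLI. This is a hedged sketch: the aurora-mysql engine, db.r5.large instance class, and us-west-2b Availability Zone are assumed placeholder values that must match your cluster and environment.

# Add an Aurora Replica to the existing cluster in a different Availability Zone
aws rds create-db-instance \
    --db-instance-identifier inventory-replica \
    --db-cluster-identifier inventory-cluster \
    --engine aurora-mysql \
    --db-instance-class db.r5.large \
    --availability-zone us-west-2b

# Wait for the replica status to become available
aws rds describe-db-instances \
    --db-instance-identifier inventory-replica \
    --query "DBInstances[].DBInstanceStatus"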

While the Aurora Replica launches, continue to the next task to configure high availability for the
NAT gateway, and then return to the Amazon RDS console in the final task to confirm high
availability of the database after the creation of the replica is complete.

Congratulations! You have successfully configured high availability for the database tier.

Task 7: Make the NAT gateway highly available


In this task, you make the NAT gateway highly available by launching another NAT gateway in the
second Availability Zone.

The Inventory-App servers are deployed in private subnets across two Availability Zones. If they
need to access the internet (for example, to download data), the requests must be redirected
through a NAT gateway (located in a public subnet). The current architecture has only one NAT
gateway in Public Subnet 1, and all of the Inventory-App servers use this NAT gateway to reach the
internet. This means that if Availability Zone 1 failed, none of the application servers would be able to
communicate with the internet. Adding a second NAT gateway in Availability Zone 2 ensures that
resources in private subnets can still reach the internet even if Availability Zone 1 fails.

The resulting architecture shown in the following diagram is highly available:


Task 7.1: Create a second NAT gateway
97.​At the top of the console, in the search bar, search for and choose ​
VPC.
98.​In the left navigation pane, choose NAT gateways.

The existing NAT gateway is displayed. Now create one for the other Availability Zone.

99.​Choose Create NAT gateway and configure the following:


●​ Name - optional: Enter my-nat-gateway.
●​ Subnet: Select Public Subnet 2 from the dropdown menu.
100.​ Choose Allocate Elastic IP.
101.​ Choose Create NAT gateway.

A NAT gateway nat-xxxxxxxx | my-nat-gateway was created successfully. message is displayed on top of the screen.
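Note: An approximately equivalent AWS CLI sketch is shown below; the subnet and allocation IDs are placeholders you would replace with the values from your environment.

# Allocate an Elastic IP address and note the AllocationId that is returned
aws ec2 allocate-address --domain vpc --query AllocationId --output text

# Create the second NAT gateway in Public Subnet 2 (both IDs are placeholders)
aws ec2 create-nat-gateway \
    --subnet-id subnet-22222222 \
    --allocation-id eipalloc-0abc1234def567890 \
    --tag-specifications 'ResourceType=natgateway,Tags=[{Key=Name,Value=my-nat-gateway}]'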

Task 7.2: Create and configure a new route table


Now, create a new route table for Private Subnet 2 that redirects traffic to the new NAT gateway.

102.​ In the left navigation pane, choose Route tables.


103.​ Choose Create route table and configure the following:
●​ Name - optional: Enter Private Route Table 2.
●​ VPC: Select Lab VPC from the dropdown menu.
104.​ Choose Create route table.
A Route table rtb-xxxxxxx | Private Route Table 2 was created successfully. message is displayed
on top of the screen.

Details for the newly created route table are displayed. There is currently one route, which directs all
traffic locally. Now, add a route to send internet-bound traffic through the new NAT gateway.

105.​ Choose Edit routes.


106.​ Choose Add route and configure the following:
●​ Destination: Enter 0.0.0.0/0
●​ Target: Choose NAT Gateway > my-nat-gateway.
107.​ Choose Save changes.

An Updated routes for rtb-xxxxxxxxxxxx / Private Route Table 2. message is displayed on top of the screen.
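Note: The same route table and route could be created from the AWS CLI, roughly as sketched below; the VPC, route table, and NAT gateway IDs are placeholders.

# Create a route table in the Lab VPC and give it a Name tag
aws ec2 create-route-table --vpc-id vpc-11111111 --query RouteTable.RouteTableId --output text
aws ec2 create-tags --resources rtb-33333333 --tags Key=Name,Value="Private Route Table 2"

# Send internet-bound traffic through the new NAT gateway
aws ec2 create-route \
    --route-table-id rtb-33333333 \
    --destination-cidr-block 0.0.0.0/0 \
    --nat-gateway-id nat-44444444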

You have created the route table and configured it to route internet-bound traffic through the new
NAT gateway. Next, associate the route table with Private Subnet 2.

Task 7.3: Configure routing for Private Subnet 2


108.​ Choose the Subnet associations tab.
109.​ Choose Edit subnet associations.
110.​ Select Private Subnet 2.
111.​ Choose Save associations.

A You have successfully updated subnet associations for rtb-xxxxxxxxxxxx / Private Route Table 2. message is displayed on top of the screen.

Internet-bound traffic from Private Subnet 2 is now sent to the NAT gateway in the same Availability
Zone.
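Note: The subnet association can also be made with the AWS CLI; the IDs below are placeholders for your Private Route Table 2 and Private Subnet 2.

# Associate the new route table with Private Subnet 2
aws ec2 associate-route-table \
    --route-table-id rtb-33333333 \
    --subnet-id subnet-55555555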

Your NAT gateways are now highly available. A failure in one Availability Zone does not impact traffic
in the other Availability Zone.

Congratulations! You have successfully verified that your NAT gateways are highly available.

Task 8: Force a failover of the Aurora database


In this task, to demonstrate that your database is capable of performing a failover, you force the cluster to fail over to the Aurora Replica instance you created in an earlier task.

112.​ At the top of the console, in the search bar, search for and choose ​
RDS.
113.​ In the left navigation pane, choose Databases.

Caution: Verify that the inventory-replica DB instance status has changed to Available before continuing to the next step.
114.​ For the DB identifier, select the inventory-primary DB identifier associated with your
Aurora primary DB instance.

Note: The primary DB instance with DB identifier inventory-primary currently displays Writer under
the Role column. This is the only database node in the cluster that can currently be used for writes.

115.​ Choose Actions.


116.​ Choose Failover.

The RDS console displays the Failover DB Cluster page.

117.​ Choose Failover.

The inventory-cluster status is now Failing over.
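Note: The same failover can be requested from the AWS CLI, as sketched below. The cluster identifier comes from this lab; the optional target parameter naming the reader to promote is an assumption.

# Fail the cluster over, promoting the reader to become the new writer
aws rds failover-db-cluster \
    --db-cluster-identifier inventory-cluster \
    --target-db-instance-identifier inventory-replica

# Watch the cluster members change roles
aws rds describe-db-clusters \
    --db-cluster-identifier inventory-cluster \
    --query "DBClusters[].DBClusterMembers[].[DBInstanceIdentifier,IsClusterWriter]"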

118.​ From the navigation menu on the left, choose Events.


119. Review the logs as the failover occurs. Notice that the Aurora Replica instance is shut down, promoted to the writer, and then rebooted. After the replica finishes rebooting, the inventory-primary instance is rebooted.

Observe that the application continues to function correctly after the failover.

Congratulations! You have verified that your database can complete a failover and is highly available.

Conclusion
Congratulations! You now have successfully completed the following:

●​ Created an Amazon EC2 Auto Scaling group and registered it with an Application Load Balancer spanning multiple Availability Zones.
●​ Created a highly available Aurora DB cluster.
●​ Modified an Aurora DB cluster to be highly available.
●​ Modified an Amazon VPC configuration to be highly available using redundant NAT
gateways.
●​ Confirmed your database can perform a failover to a read replica instance.

End lab
Follow these steps to close the console and end your lab.

120.​ Return to the AWS Management Console.


121.​ At the upper-right corner of the page, choose AWSLabsUser, and then choose Sign out.
122.​ Choose End Lab and then confirm that you want to end your lab.
Lab 5: Building a Serverless Architecture
Lab overview
AWS solutions architects increasingly adopt event-driven architectures to decouple distributed
applications. Often, these events must be propagated in a strictly ordered way to all subscribed
applications. Using Amazon Simple Notification Service (Amazon SNS) topics and Amazon Simple
Queue Service (Amazon SQS) queues, you can address use cases that require end-to-end
message ordering, deduplication, filtering, and encryption. In this lab, you configure an Amazon
Simple Storage Service (Amazon S3) bucket to invoke an Amazon SNS notification whenever an
object is added to an S3 bucket. You learn how to create and interact with SQS queues, and learn
how to invoke an AWS Lambda function using Amazon SQS. This scenario will help you understand
how you can architect your application to respond to Amazon S3 bucket events using serverless
services such as Amazon SNS, AWS Lambda, and Amazon SQS.

Objectives
By the end of this lab, you should be able to do the following:

●​ Understand the value of decoupling resources.


●​ Understand the potential value of replacing Amazon Elastic Compute Cloud (Amazon EC2)
instances with Lambda functions.
●​ Create an Amazon SNS topic.
●​ Create Amazon SQS queues.
●​ Create event notifications in Amazon S3.
●​ Create AWS Lambda functions using preexisting code.
●​ Invoke an AWS Lambda function from SQS queues.
●​ Monitor AWS Lambda S3 functions through Amazon CloudWatch Logs.

Lab environment
You are tasked with evaluating and improving an event-driven architecture. Currently, Customer
Care professionals take snapshots of products and upload them into a specific S3 bucket to store
the images. The development team runs Python scripts to resize the images after they are uploaded
to the ingest S3 bucket. Uploading a file to the ingest bucket invokes an event notification to an
Amazon SNS topic. Amazon SNS then distributes the notifications to three separate SQS queues.
The initial design was to run EC2 instances in Auto Scaling groups for each resizing operation. After
reviewing the initial design, you recommend replacing the EC2 instances with Lambda functions.
The Lambda functions process the stored images into different formats and store the output in a separate S3 bucket. This proposed design is more cost-effective.

The following diagram shows the workflow:


The scenario workflow is as follows:

●​ You upload an image file to an Amazon S3 bucket.


●​ Uploading a file to the ingest folder in the bucket invokes an event notification to an Amazon
SNS topic.
●​ Amazon SNS then distributes the notifications to separate SQS queues.
●​ The Lambda functions process the images into different formats and store the output in S3 bucket folders.
●​ You validate the processed images in the S3 bucket folders and the logs in Amazon
CloudWatch.

Icon key
Various icons are used throughout this lab to call attention to different types of instructions and
notes. The following list explains the purpose for each icon:

●​ Note: A hint, tip, or important guidance


●​ Learn more: Where to find more information
●​ WARNING: An action that is irreversible and can potentially cause the failure of a command or process (including warnings about configurations that cannot be changed after they are made)

Start lab
1.​ To launch the lab, at the top of the page, choose Start Lab.​
Caution: You must wait for the provisioned AWS services to be ready before you can
continue.
2.​ To open the lab, choose Open Console .​
You are automatically signed in to the AWS Management Console in a new web browser tab.​
Warning: Do not change the Region unless instructed.

Common sign-in errors

Error: You must first sign out

If you see the message, You must first log out before logging into a different AWS account:

●​ Choose the click here link.


●​ Close your Amazon Web Services Sign In web browser tab and return to your initial lab
page.
●​ Choose Open Console again.

Error: Choosing Start Lab has no effect

In some cases, certain pop-up or script blocker web browser extensions might prevent the Start Lab
button from working as intended. If you experience an issue starting the lab:

●​ Add the lab domain name to your pop-up or script blocker’s allow list or turn it off.
●​ Refresh the page and try again.

Task 1: Create a standard Amazon SNS topic


In this task, you create an Amazon SNS topic. In a later task, you subscribe Amazon SQS queues to this topic.

3.​ At the top of the AWS Management Console, in the search box, search for and choose
Simple Notification Service.
4.​ Expand the navigation menu by choosing the menu icon in the upper-left corner.
5.​ From the left navigation menu, choose Topics.
6.​ Choose Create topic.

The Create topic page is displayed.

7.​ On the Create topic page, in the Details section, configure the following:
●​ Type: Choose Standard.
●​ Name: Enter a unique SNS topic name, such as resize-image-topic-, followed by four
random numbers.
8.​ Choose Create topic.

The topic is created and the resize-image-topic-XXXX page is displayed. The topic’s Name, Amazon
Resource Name (ARN), (optional) Display name, and topic owner’s AWS account ID are displayed in
the Details section.

9.​ Copy the topic ARN and Topic owner values to a notepad. You need these values later in the
lab.

Example:

ARN: arn:aws:sns:us-east-2:123456789012:resize-image-topic
Topic owner: 123456789123 (12-digit AWS account ID)
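Note: The same topic could be created from the AWS CLI; the four-digit suffix below is an assumed example.

# Create a standard SNS topic and print its ARN
aws sns create-topic --name resize-image-topic-1234 --query TopicArn --output text

# Retrieve the 12-digit account ID (the topic owner)
aws sts get-caller-identity --query Account --output text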

Congratulations! You have created an Amazon SNS topic.

Task 2: Create two Amazon SQS queues


In this task, you create two Amazon SQS queues, each for a specific purpose, and then subscribe the queues to the previously created Amazon SNS topic.

Task 2.1: Create an Amazon SQS queue for the thumbnail image
10.​At the top of the AWS Management Console, in the search box, search for and choose ​
Simple Queue Service.
11.​ On the SQS home page, choose Create queue.

The Create queue page is displayed.

12.​On the Create queue page, in the Details section, configure the following:
●​ Type: Choose Standard (the Standard queue type is set by default).
●​ Name: Enter thumbnail-queue.
13.​The console sets default values for the queue Configuration parameters. Leave the default
values.
14.​Choose Create queue.

Amazon SQS creates the queue and displays a page with details about the queue.

15.​On the queue’s detail page, choose the SNS subscriptions tab.
16.​Choose Subscribe to Amazon SNS topic.

A new Subscribe to Amazon SNS topic page opens.


17.​From the Specify an Amazon SNS topic available for this queue section, choose the
resize-image-topic SNS topic you created previously under Use existing resource.

Note: If the SNS topic is not listed in the menu, choose Enter Amazon SNS topic ARN and then
enter the topic’s ARN that was copied earlier.

18.​Choose Save.

Your SQS queue is now subscribed to the SNS topic named resize-image-topic-XXXX.
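Note: For reference, a rough AWS CLI equivalent of the queue creation and subscription follows; the ARNs and queue URL are placeholders. Be aware that when you subscribe from the CLI, you must also add an SQS queue access policy that allows the topic to send messages to the queue, which the console configures for you automatically.

# Create the queue and note the URL that is returned
aws sqs create-queue --queue-name thumbnail-queue --query QueueUrl --output text

# Look up the queue ARN (the queue URL is a placeholder)
aws sqs get-queue-attributes \
    --queue-url https://sqs.us-east-2.amazonaws.com/123456789012/thumbnail-queue \
    --attribute-names QueueArn

# Subscribe the queue to the SNS topic (both ARNs are placeholders)
aws sns subscribe \
    --topic-arn arn:aws:sns:us-east-2:123456789012:resize-image-topic-1234 \
    --protocol sqs \
    --notification-endpoint arn:aws:sqs:us-east-2:123456789012:thumbnail-queue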

Task 2.2: Create an Amazon SQS queue for the mobile image
19.​On the SQS console, expand the navigation menu on the left, and choose Queues.
20.​Choose Create queue.

The Create queue page is displayed.

21.​On the Create queue page, in the Details section, configure the following:
●​ Type: Choose Standard (the Standard queue type is set by default).
●​ Name: Enter mobile-queue.
22.​The console sets default values for the queue Configuration parameters. Leave the default
values.
23.​Choose Create queue.

Amazon SQS creates the queue and displays a page with details about the queue.

24.​On the queue’s detail page, choose the SNS subscriptions tab.
25.​Choose Subscribe to Amazon SNS topic.

A new Subscribe to Amazon SNS topic page opens.

26.​From the Specify an Amazon SNS topic available for this queue section, choose the
resize-image-topic SNS topic you created previously under Use existing resource.

Note: If the SNS topic is not listed in the menu, choose Enter Amazon SNS topic ARN and then
enter the topic’s ARN that was copied earlier.

27.​Choose Save.

Your SQS queue is now subscribed to the SNS topic named resize-image-topic-XXXX.

Task 2.3: Verify the Amazon SNS subscriptions


To verify the result of the subscriptions, publish to the topic and then view the message that the topic
sends to the queue.

28.​At the top of the AWS Management Console, in the search box, search for and choose ​
Simple Notification Service.
29.​In the left navigation pane, choose Topics.
30.​On the Topics page, choose resize-image-topic-XXXX.
31.​Choose Publish message.

The console opens the Publish message to topic page.

32.​In the Message details section, configure the following:


●​ Subject - optional: Enter Hello world.
33.​In the Message body section, configure the following:
●​ For Message structure, select Identical payload for all delivery protocols.
●​ For Message body sent to the endpoint, enter Testing Hello world or any message of
your choice.
34.​In the Message attributes section, configure the following:
●​ For Type, choose String.
●​ For Name, enter Message.
●​ For Value, enter Hello World.
35.​Choose Publish message .

The message is published to the topic, and the console opens the topic’s detail page. To investigate
the published message, navigate to Amazon SQS.

36.​At the top of the AWS Management Console, in the search box, search for and choose ​
Simple Queue Service.
37.​Choose any queue from the list.
38. Choose Send and receive messages.
39. On the Send and receive messages page, in the Receive messages section, choose Poll for messages.
40.​Locate the Message section. Choose any ID link in the list to review the Details, Body, and
Attributes of the message.

The Message Details box contains a JSON document that contains the subject and message that
you published to the topic.

41.​Choose Done .

Congratulations! You have successfully created two Amazon SQS queues and published to a topic
that sends notification messages to a queue.
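Note: The same publish-and-poll check can be performed from the AWS CLI, roughly as follows; the topic ARN and queue URL are placeholders.

# Publish a test message with a String message attribute
aws sns publish \
    --topic-arn arn:aws:sns:us-east-2:123456789012:resize-image-topic-1234 \
    --subject "Hello world" \
    --message "Testing Hello world" \
    --message-attributes '{"Message":{"DataType":"String","StringValue":"Hello World"}}'

# Poll one of the subscribed queues for the delivered notification
aws sqs receive-message \
    --queue-url https://sqs.us-east-2.amazonaws.com/123456789012/thumbnail-queue \
    --wait-time-seconds 10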

Task 3: Create an Amazon S3 event notification


In this task, you create an Amazon S3 event notification with Amazon SNS as the notification destination so that the topic is notified when certain events happen in the S3 bucket.

Task 3.1: Configure the Amazon SNS access policy to allow the Amazon S3
bucket to publish to a topic
42.​At the top of the AWS Management Console, in the search box, search for and choose ​
Simple Notification Service.
43.​From the left navigation menu, choose Topics.
44.​Choose the resize-image-topic-XXXX topic.
45.​Choose Edit .
46. Navigate to the Access policy - optional section and expand it, if necessary.
47.​Delete the existing content of the JSON editor panel.
48.​Copy the following code block and paste it into the JSON Editor section.

{
"Version": "2008-10-17",
"Id": "__default_policy_ID",
"Statement": [
{
"Sid": "__default_statement_ID",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": [
"SNS:GetTopicAttributes",
"SNS:SetTopicAttributes",
"SNS:AddPermission",
"SNS:RemovePermission",
"SNS:DeleteTopic",
"SNS:Subscribe",
"SNS:ListSubscriptionsByTopic",
"SNS:Publish"
],
"Resource": "SNS_TOPIC_ARN",
"Condition": {
"StringEquals": {
"AWS:SourceAccount": "SNS_TOPIC_OWNER"
}
}
},
{
"Effect": "Allow",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Action": "SNS:Publish",
"Resource": "SNS_TOPIC_ARN",
"Condition": {
"StringEquals": {
"AWS:SourceAccount": "SNS_TOPIC_OWNER"
}
}
}
]
}

49.​Replace the two occurrences of SNS_TOPIC_OWNER with the Topic owner (12-digit AWS
Account ID) value that you copied earlier in Task 1. Make sure to leave the double quotes.
50.​Replace the two occurrences of SNS_TOPIC_ARN with the SNS topic ARN value copied
earlier in Task 1. Make sure to leave the double quotes.
51.​Choose Save changes .
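Note: If you prefer to apply the edited access policy from the AWS CLI, you could save the JSON to a file and set it as the topic's Policy attribute, roughly as sketched below; the file name and topic ARN are assumptions.

# Apply the edited access policy to the topic
aws sns set-topic-attributes \
    --topic-arn arn:aws:sns:us-east-2:123456789012:resize-image-topic-1234 \
    --attribute-name Policy \
    --attribute-value file://topic-policy.json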

Task 3.2: Create a single S3 event notification on uploads to the ingest S3 bucket
52.​At the top of the AWS Management Console, in the search box, search for and choose ​
S3.
53.​On the Buckets page, choose the bucket hyperlink with a name like xxxxx-labbucket-xxxxx.
54.​Choose the Properties tab.
55.​Scroll to the Event notifications section.
56.​Choose Create event notification .
57.​In the General configuration section, do the following:
●​ Event name: Enter resize-image-event.
●​ Prefix - optional: Enter ingest/.

Note: In this lab, you set up a prefix filter so that you receive notifications only when files are added
to a specific folder (ingest).

●​ Suffix - optional: Enter .jpg.

Note: In this lab, you set up a suffix filter so that you receive notifications only when .jpg files are
uploaded.

58.​In the Event types section, select All object create events.
59.​In the Destination section, configure the following:
●​ Destination: Select SNS topic.
●​ Specify SNS topic: Select Choose from your SNS topics.
●​ SNS topic: Choose the resize-image-topic-XXXX SNS topic from the dropdown menu.

Or, if you prefer to specify an ARN, choose Enter ARN and enter the ARN of the SNS topic copied
earlier.

60.​Choose Save changes.

Congratulations! You have successfully created an Amazon S3 event notification.
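Note: The same event notification could be configured from the AWS CLI, roughly as sketched below; the bucket name and topic ARN are placeholders.

# Notify the SNS topic when .jpg objects are created under the ingest/ prefix
aws s3api put-bucket-notification-configuration \
    --bucket xxxxx-labbucket-xxxxx \
    --notification-configuration '{
      "TopicConfigurations": [{
        "TopicArn": "arn:aws:sns:us-east-2:123456789012:resize-image-topic-1234",
        "Events": ["s3:ObjectCreated:*"],
        "Filter": {"Key": {"FilterRules": [
          {"Name": "prefix", "Value": "ingest/"},
          {"Name": "suffix", "Value": ".jpg"}
        ]}}
      }]
    }'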


Task 4: Create and configure two AWS Lambda functions
In this task, you create two AWS Lambda functions, deploy the respective code to each function by uploading a deployment package, and configure each function with an SQS trigger.

Task 4.1: Create a Lambda function to generate a thumbnail image


In this task, you create an AWS Lambda function with an SQS trigger that reads an image from
Amazon S3, resizes the image, and then stores the new image in an Amazon S3 bucket folder.

61.​At the top of the AWS Management Console, in the search box, search for and choose ​
Lambda.
62.​Choose Create function.
63.​In the Create function window, select Author from scratch.
64.​In the Basic information section, configure the following:
●​ Function name: Enter CreateThumbnail.
●​ Runtime: Choose Python 3.9.
●​ Expand the Change default execution role section.
●​ Execution role: Select Use an existing role.
●​ Existing role: Choose the role with the name like XXXXX-LabExecutionRole-XXXXX.

This role provides your Lambda function with the permissions it needs to access Amazon S3 and
Amazon SQS.

Caution: Make sure to choose Python 3.9 under Other supported runtime. If you choose Python
3.10 or the Latest supported, the code in this lab fails as it is configured specifically for Python 3.9.

65.​Choose Create function.

At the top of the page there is a message like, Successfully created the function CreateThumbnail.
You can now change its code and configuration. To invoke your function with a test event, choose
“Test”.

Task 4.2: Configure the CreateThumbnail Lambda function to add an SQS trigger and upload the Python deployment package
AWS Lambda functions can be invoked automatically by activities such as data being received by Amazon Kinesis or data being updated in an Amazon DynamoDB table. For this lab, you invoke the Lambda function whenever a new message arrives in your Amazon SQS queue.

66.​Choose Add trigger, and then configure the following:


●​ For Select a source, choose SQS.
●​ For SQS Queue, choose thumbnail-queue.
●​ For Batch size - optional, enter 1.
67.​Scroll to the bottom of the page, and then choose Add .
At the top of the page there is a message like, The trigger thumbnail-queue was successfully added
to function CreateThumbnail. The trigger is in a disabled state.
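Note: The equivalent trigger can be created from the AWS CLI as an event source mapping, roughly as sketched below; the queue ARN is a placeholder.

# Map the SQS queue to the Lambda function with a batch size of 1
aws lambda create-event-source-mapping \
    --function-name CreateThumbnail \
    --event-source-arn arn:aws:sqs:us-east-2:123456789012:thumbnail-queue \
    --batch-size 1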

The SQS trigger is added to your Function overview page. Now configure the Lambda function.

68.​Choose the Code tab.


69. Configure the following settings (and ignore any settings that are not listed):
70. Copy the CreateThumbnailZIPLocation value that is listed to the left of these instructions.
71. Choose Upload from, and choose Amazon S3 location.
72. Paste the CreateThumbnailZIPLocation value you copied from the instructions in the Amazon S3 link URL field.
73. Choose Save.

The CreateThumbnail.zip file contains the following code:

Caution: Do not copy this code—it is just an example to show what is in the zip file.

Code
74.​Examine the preceding code. It is performing the following steps:
●​ Receives an event, which contains the name of the incoming object (Bucket, Key)
●​ Downloads the image to local storage
●​ Resizes the image using the Pillow library
●​ Creates and uploads the resized image to a new folder
75.​In the Runtime settings section, choose Edit.
●​ For Handler, enter CreateThumbnail.handler.
76.​Choose Save.

At the top of the page there is a message like, Successfully updated the function CreateThumbnail.

Caution: Make sure you set the Handler field to the preceding value, otherwise the Lambda function
will not be found.

77.​Choose the Configuration tab.


78.​From the left navigation menu, choose General configuration.
79.​Choose Edit .
●​ For Description, enter Create a thumbnail-sized image.

Leave the other settings at the default settings. Here is a brief explanation of these settings:

●​ Memory defines the resources that are allocated to your function. Increasing memory also
increases CPU allocated to the function.
●​ Timeout sets the maximum duration for function processing.
80.​Choose Save.

A message is displayed at the top of the page with text like, Successfully updated the function
CreateThumbnail.
The CreateThumbnail Lambda function has now been configured.

Task 4.3: Create a Lambda function to generate a mobile image


In this task, you create an AWS Lambda function with an SQS trigger that reads an image from
Amazon S3, resizes the image, and then stores the new image in an Amazon S3 bucket folder.

81.​At the top of the AWS Management Console, in the search box, search for and choose ​
Lambda.
82.​Choose Create function.
83.​In the Create function window, select Author from scratch.
84.​In the Basic information section, configure the following:
●​ Function name: Enter CreateMobileImage.
●​ Runtime: Choose Python 3.9.
●​ Expand the Change default execution role section.
●​ Execution role: Select Use an existing role.
●​ Existing role: Choose the role with the name like XXXXX-LabExecutionRole-XXXXX.

This role provides your Lambda function with the permissions it needs to access Amazon S3 and
Amazon SQS.

Caution: Make sure to choose Python 3.9 under Other supported runtime. If you choose Python
3.10 or the Latest supported, the code in this lab fails as it is configured specifically for Python 3.9.

85.​Choose Create function.

At the top of the page there is a message like, Successfully created the function CreateMobileImage. You can now change its code and configuration. To invoke your function with a test event, choose “Test”.

Task 4.4: Configure the CreateMobileImage Lambda function to add an SQS trigger and upload the Python deployment package
AWS Lambda functions can be invoked automatically by activities such as data being received by Amazon Kinesis or data being updated in an Amazon DynamoDB table. For this lab, you invoke the Lambda function whenever a new message arrives in your Amazon SQS queue.

86.​Choose Add trigger, and then configure the following:


●​ For Select a source, choose SQS.
●​ For SQS Queue, choose mobile-queue.
●​ For Batch size - optional, enter 1.
87.​Scroll to the bottom of the page, and then choose Add .

At the top of the page there is a message like, The trigger mobile-queue was successfully added to
function CreateMobileImage. The trigger is in a disabled state.

The SQS trigger is added to your Function overview page. Now configure the Lambda function.

88.​Choose the Code tab.


89. Configure the following settings (and ignore any settings that are not listed):
90. Copy the CreateMobileZIPLocation value that is listed to the left of these instructions.
91. Choose Upload from, and choose Amazon S3 location.
92. Paste the CreateMobileImageZIPLocation value you copied from the instructions in the Amazon S3 link URL field.
93. Choose Save.

The CreateMobileImage.zip file contains the following code:

Caution: Do not copy this code—it is just an example to show what is in the zip file.

Code
94.​In the Runtime settings section, choose Edit.
●​ For Handler, enter CreateMobileImage.handler.
95.​Choose Save.

At the top of the page there is a message like, Successfully updated the function
CreateMobileImage.

Caution: Make sure you set the Handler field to the preceding value, otherwise the Lambda function
will not be found.

96.​Choose the Configuration tab.


97.​From the left navigation menu, choose General configuration.
98.​Choose Edit .
●​ For Description, enter Create a mobile friendly image.

Leave the other settings at the default settings. Here is a brief explanation of these settings:

●​ Memory defines the resources that are allocated to your function. Increasing memory also
increases CPU allocated to the function.
●​ Timeout sets the maximum duration for function processing.
99.​Choose Save.

A message is displayed at the top of the page with text like, Successfully updated the function
CreateMobileImage.

The CreateMobileImage Lambda function has now been configured.

Congratulations! You have successfully created two AWS Lambda functions for the serverless architecture and set the appropriate SQS queue as the trigger for each function.

Task 5: Upload an object to an Amazon S3 bucket


In this task, you upload an object to the previously created S3 bucket using the S3 console.

Task 5.1: Upload an image to the S3 bucket folder for processing


The following diagram shows the workflow:

Upload a picture to test what you have built.

100.​ Choose to download one image from the following options:


●​ Open the context menu for the AWS.jpg link to download the picture to your computer.
●​ Open the context menu for the MonaLisa.jpg link to download the picture to your computer.
●​ Open the context menu for the HappyFace.jpg link to download the picture to your computer.
●​ Save the file with a name similar to InputFile.jpg.

Caution: Firefox users – Make sure the saved file name is InputFile.jpg (not .jpeg).

101.​ At the top of the AWS Management Console, in the search box, search for and choose ​
S3.
102.​ In the S3 Management Console, choose the xxxxx-labbucket-xxxxx bucket hyperlink.
103.​ Choose the ingest/ link.
104.​ Choose Upload.
105.​ In the Upload window, choose Add files.
106.​ Browse to and choose the XXXXX.jpg picture you downloaded.
107.​ Choose Upload.

At the top of the page, there is a message like, Upload succeeded.

Congratulations! You have successfully uploaded a JPG image to the S3 bucket.
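Note: The same upload can be performed from the AWS CLI; the bucket name is a placeholder for your xxxxx-labbucket-xxxxx bucket.

# Copy the test image into the ingest/ folder of the bucket
aws s3 cp InputFile.jpg s3://xxxxx-labbucket-xxxxx/ingest/InputFile.jpg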

Task 6: Validate the processed file


In this task, you validate the processed files by reviewing the logs that the function code generates in Amazon CloudWatch Logs.

Task 6.1: Review Amazon CloudWatch Logs for Lambda activity


You can monitor AWS Lambda functions to identify problems and view log files to assist in
debugging.

108.​ At the top of the AWS Management Console, in the search box, search for and choose ​
Lambda.
109.​ Choose the hyperlink for one of your Create- functions.
110.​ Choose the Monitor tab.

The console displays graphs showing the following:

●​ Invocations: The number of times that the function was invoked.


●​ Duration: The average, minimum, and maximum execution times.
●​ Error count and success rate (%): The number of errors and the percentage of executions
that completed without error.
●​ Throttles: When too many functions are invoked simultaneously, they are throttled. The
default is 1000 concurrent executions.
●​ Async delivery failures: The number of errors that occurred when Lambda attempted to write
to a destination or dead-letter queue.
●​ Iterator Age: Measures the age of the last record processed from streaming triggers
(Amazon Kinesis and Amazon DynamoDB Streams).
●​ Concurrent executions: The number of function instances that are processing events.

Log messages from Lambda functions are retained in Amazon CloudWatch Logs.

111.​ Choose View CloudWatch logs .


112.​ Choose the hyperlink for the newest Log stream that appears.
113.​ Expand each message to view the log message details.

The REPORT line provides the following details:

●​ RequestId: The unique request ID for the invocation


●​ Duration: The amount of time that your function’s handler method spent processing the event
●​ Billed Duration: The amount of time billed for the invocation
●​ Memory Size: The amount of memory allocated to the function
●​ Max Memory Used: The amount of memory used by the function
●​ Init Duration: For the first request served, the amount of time it took the runtime to load the
function and run code outside of the handler method

In addition, the logs display any logging messages or print statements from the functions. This
assists in debugging Lambda functions.

Note: When reviewing the logs, you may notice that the Lambda function ran multiple times. One invocation was caused by the test message published to the SNS topic in Task 2. Another log was generated when the event notification for your S3 bucket was created. The third log was generated when an object was uploaded to the S3 bucket and invoked the functions.
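Note: With AWS CLI version 2, you can also stream a function's logs from the command line. The sketch below assumes the default log group naming convention of /aws/lambda/<function name>.

# Stream recent log events for the CreateThumbnail function
aws logs tail /aws/lambda/CreateThumbnail --since 15m --follow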

Task 6.2: Validate the S3 bucket for processed files


114.​ At the top of the AWS Management Console, in the search box, search for and choose ​
S3.
115.​ Choose the hyperlink for xxxxx-labbucket-xxxxx to enter the bucket.
116.​ Navigate through these folders to find the resized images (for example,
Thumbnail-AWS.jpg, MobileImage-MonaLisa.jpg).

If you find the resized images here, you have successfully converted the original image into different formats.

Congratulations! You have successfully validated the processed image files by browsing Amazon S3 and by reviewing the logs that the function code generated in Amazon CloudWatch Logs.

Optional Tasks
Challenge tasks are optional and are provided in case you have extra time remaining in your lab.
You can complete the optional tasks or skip to the end of the lab.

●​ (Optional) Task 1: Create a lifecycle configuration to delete files in the ingest bucket after 30
days.

Note: If you have trouble completing the optional task, refer to the Optional Task 1 Solution
Appendix section at the end of the lab.

●​ (Optional) Task 2: Add an SNS email notification to the existing SNS topic.

Note: If you have trouble completing the optional task, refer to the Optional Task 2 Solution
Appendix section at the end of the lab.

Conclusion
Congratulations! You now have successfully:

●​ Created an Amazon SNS topic


●​ Created Amazon SQS queues
●​ Created event notifications in Amazon S3
●​ Created AWS Lambda functions using preexisting code
●​ Invoked an AWS Lambda function from SQS queues
●​ Monitored AWS Lambda S3 functions through Amazon CloudWatch Logs

End lab
Follow these steps to close the console and end your lab.

117.​ Return to the AWS Management Console.


118.​ At the upper-right corner of the page, choose AWSLabsUser, and then choose Sign out.
119.​ Choose End Lab and then confirm that you want to end your lab.
Lab 6: Configuring an Amazon CloudFront Distribution with an Amazon S3 Origin
Lab overview
Amazon Web Services (AWS) solutions architects must frequently design and build secure,
high-performing, resilient, efficient architectures for applications and workloads to deliver content.
Amazon CloudFront is a web service that provides a cost-effective way to distribute content with low
latency and high data transfer speeds. You can use CloudFront to accelerate static website content
delivery, serve video on demand or live streaming video, and even run serverless code at the edge
location. In this lab, you configure a CloudFront distribution in front of an Amazon Simple Storage
Service (Amazon S3) bucket and secure it using origin access control (OAC) provided by
CloudFront.

Objectives
After completing this lab, you should be able to do the following:

●​ Create an S3 bucket with default security settings.


●​ Configure an S3 bucket for public access.
●​ Add an S3 bucket as a new origin to an existing CloudFront distribution.
●​ Secure an S3 bucket to permit access only through the CloudFront distribution.
●​ Configure OAC to lock down security to an S3 bucket.
●​ Configure Amazon S3 resource policies for public or OAC access.

Prerequisites
This lab requires the following:

●​ Access to a notebook computer with Wi-Fi and Microsoft Windows, macOS, or Linux
(Ubuntu, SuSE, or Red Hat)
●​ An internet browser, such as Chrome, Firefox, or Microsoft Edge

Technical knowledge prerequisites


To successfully complete this lab, you should be familiar with the AWS Management Console and
have a basic understanding of edge services in the AWS Cloud.

Icon key
Various icons are used throughout this lab to call attention to different types of instructions and
notes. The following list explains the purpose for each icon:

●​ Note: A hint, tip, or important guidance.


●​ Learn more: Where to find more information.
●​ Caution: Information of special interest or importance (not important enough to cause
problems with the equipment or data if you miss it, but it can result in the need to repeat
certain steps).
●​ WARNING: An action that is irreversible and can potentially cause the failure of a command or process (including warnings about configurations that cannot be changed after they are made).
●​ Hint: A hint to a question or challenge.
●​ File contents: A code block that displays the contents of a script or file you need to run that
has been pre-created for you.
●​ Copy edit: A time when copying a command, script, or other text to a text editor (to edit
specific variables within it) might be easier than editing directly in the command line or
terminal.

Start lab
1.​ To launch the lab, at the top of the page, choose Start Lab.​
Caution: You must wait for the provisioned AWS services to be ready before you can
continue.
2.​ To open the lab, choose Open Console .​
You are automatically signed in to the AWS Management Console in a new web browser tab.​
Warning: Do not change the Region unless instructed.

Common sign-in errors

Error: You must first sign out

If you see the message, You must first log out before logging into a different AWS account:

●​ Choose the click here link.


●​ Close your Amazon Web Services Sign In web browser tab and return to your initial lab
page.
●​ Choose Open Console again.
Error: Choosing Start Lab has no effect

In some cases, certain pop-up or script blocker web browser extensions might prevent the Start Lab
button from working as intended. If you experience an issue starting the lab:

●​ Add the lab domain name to your pop-up or script blocker’s allow list or turn it off.
●​ Refresh the page and try again.

Lab Environment
The lab environment provides you with some resources to get started. There is an Auto Scaling group of EC2 instances being used as publicly accessible web servers. The web server infrastructure is deployed in an Amazon Virtual Private Cloud (Amazon VPC), configured for multiple Availability Zones, and fronted by a load balancer. The lab also provides a CloudFront distribution with this load balancer as an origin.

The following diagram shows the general expected architecture you should have at the end of this
lab. During this lab, you create a new S3 bucket for the existing lab environment. You then configure
this bucket as a new, secure origin to the existing CloudFront distribution.

Services used in this lab

Amazon CloudFront
CloudFront is a content delivery web service. It integrates with other AWS products so that
developers and businesses can distribute content to end users with low latency, high data transfer
speeds, and no minimum usage commitments.

You can use CloudFront to deliver your entire website, including dynamic, static, streaming, and
interactive content, using a global network of edge locations. CloudFront automatically routes
requests for your content to the nearest edge location to deliver content with the best possible
performance. CloudFront is optimized to work with other AWS services, like Amazon S3, Amazon
Elastic Compute Cloud (Amazon EC2), Elastic Load Balancing (ELB), and Amazon Route 53.
CloudFront also works seamlessly with any non-AWS origin server that stores the original, definitive versions of your files.

Amazon Simple Storage Service (Amazon S3)

Amazon S3 provides developers and information technology teams with secure, durable, highly
scalable object storage. Amazon S3 has a simple web services interface to store and retrieve any
amount of data from anywhere on the web.

You can use Amazon S3 alone or together with other AWS services such as Amazon EC2, Amazon
Elastic Block Store (Amazon EBS), and Amazon Simple Storage Service Glacier (Amazon S3
Glacier), along with third-party storage repositories and gateways. Amazon S3 provides
cost-effective object storage for a wide variety of use cases, including cloud applications, content
distribution, backup and archiving, disaster recovery, and big data analytics.

AWS services not used in this lab


AWS services not used in this lab are turned off in the lab environment. In addition, the capabilities
of the services used in this lab are limited to only what the lab requires. Expect errors when
accessing other services or performing actions beyond those provided in this lab guide.

Task 1: Explore the existing CloudFront distribution


In this task, you examine the existing CloudFront distribution that was built for web server content.
Before making changes to an environment, it is a good practice to understand the existing
configuration. If you want to use CloudFront distributions for your personal AWS environments, you
need to build and configure the distribution itself first. In later tasks, you add an S3 bucket as a new
origin to this CloudFront distribution.

Task 1.1: Open the CloudFront console


3.​ If you have not already opened the console, follow the instructions in the Start Lab section to
log in to the console.
4.​ At the top of the console, in the search bar, search for and choose ​
CloudFront.
Task 1.2: Open the existing CloudFront distribution
5.​ Choose the ID link for the only available distribution.

Note: If you do not find the list of distributions, ensure that you are at the correct page. Choose
Distributions from the CloudFront navigation menu located on the left side of the console.

A page showing the details of the distribution is displayed.

Task 1.3: Explore the properties of the existing distribution


In this task, you explore each tab of the distribution to review the existing configuration. In this lab,
you are not configuring this CloudFront distribution in great detail. However, it is useful to know
where all of the configurations you might need for managing a CloudFront distribution are located.

6.​ Examine the contents of the General tab.

This tab contains the details about the current configuration of this particular CloudFront distribution.
It contains the most generally needed information about a distribution. It is also where you configure
the common high-level items for the distribution, such as activating the distribution, logging, and
certificate settings.

7.​ Copy edit: From the Details section, in the General tab, copy the ARN value and save it in a
text editor. You need this value for a later task.
8.​ Copy edit: From the Details section, in the General tab, copy the Distribution domain name
value.

The Distribution domain name is also found to the left of these lab instructions under the listing
LabCloudFrontDistributionDNS.

9.​ Paste the Distribution domain value you copied into a new browser tab.

A simple web page is loaded displaying the information of the web server from which CloudFront
retrieved the content. By requesting content from the Distribution domain value for the CloudFront
distribution, you are verifying that the existing cache is working.

You can close this tab.

10.​Return to the CloudFront console.


11.​ Choose the Security tab.

This tab contains the distribution’s configuration if you need to keep your application secure from the
most common web threats using AWS WAF or need to prevent users in specific countries from
accessing your content using geographic restrictions. These features are not configured for use in
this lab.

12.​Choose the Origins tab.


This tab contains the details about the current origins that exist for this particular CloudFront
distribution. It is also the area of the console you can use to configure new or existing CloudFront
origins. A CloudFront Origin defines the location of the definitive, original version of the content that
is delivered through the CloudFront distribution.

Note: The only origin currently on the distribution is an ELB load balancer. This load balancer is
accepting and directing web traffic for the auto scaling web servers in its target group.

13.​ Copy edit: Copy the load balancer’s Domain Name System (DNS) value for this origin from
the column labeled Origin domain.

Note: You can adjust the widths of most columns in the console by dragging the dividers in the
header.

14.​Paste the DNS value for the load balancer into a new browser tab.

The DNS value for this distribution is also found to the left of these lab instructions under the listing
LabLoadBalancerDNS.

The simple web page hosted on the EC2 instances is displayed again. This web page displays the
same content that was delivered by the CloudFront distribution earlier. However, by requesting from
the load balancer directly you are not using the existing CloudFront caching system. In any single
request, the IP address displayed on the page might differ because traffic is not always routed to the
same EC2 instance behind the load balancer.

This step demonstrates that the origins defined for a distribution are the locations used to retrieve content that is not yet cached when a request is made to the CloudFront distribution's frontend.

You can close this tab.

15.​Return to the CloudFront console.


16.​Choose the Behaviors tab.

Behaviors define the actions that the CloudFront distribution takes when there is a request for
content, such as which origin to serve which content, Time To Live of content in the cache, cookies,
and how to handle various headers.

This tab contains a list of current behaviors defined for the distribution. You configure new or existing
behaviors here. Behaviors for the distribution are evaluated in the explicit order in which you define
them on this tab.

Do the following to review or edit the configuration of any single behavior:

●​ Select the radio button in the row next to the behavior you want to modify.
●​ Choose Edit.
●​ Choose Cancel to close the page and return to the console.

There is only one behavior currently configured in this lab environment. The behavior accepts HTTP
and HTTPS for both GET and HEAD requests to the load balancer origin.
17.​Choose the Error Pages tab.

This tab details which error page is to be returned to the user when the content requested results in
an HTTP 4xx or 5xx status code. You can configure custom error pages for specific error codes
here.

18.​Choose the Invalidations tab.

This tab contains the distribution’s configuration for object invalidation. Invalidated objects are
removed from CloudFront edge caches. A faster and less expensive method is to use versioned
objects or directory names. There are no invalidations configured for CloudFront distributions by
default.

19.​Choose the Tags tab.

This tab contains the configuration for any tags applied to the distribution. You can view and edit
existing tags and create new tags here. Tags help you identify and organize your distributions.

Congratulations! You have explored the existing CloudFront distribution.

Task 2: Create an S3 bucket


In this task, you create and configure a new S3 bucket. This bucket is used as a new origin for the
CloudFront distribution.

20.​At the top of the console, in the search bar, search for and choose ​
S3.
21.​In the Buckets section, choose Create bucket.

Note: If you do not find the Create bucket button, ensure you are at the correct page. Choose
Buckets from the navigation menu located on the left side of the console.

The Create bucket page is displayed.

22. Copy the LabBucketName value from the left of the lab instructions and paste it into the Bucket name field.

Note: To simplify the written instructions in this lab, this newly created bucket is referred to as the
LabBucket for the remainder of the instructions.

The AWS Region should match the PrimaryRegion value found to the left of these lab instructions.

23.​Leave all other settings on this page as the default configurations.


24.​Choose Create bucket.

The Amazon S3 console is displayed. The newly created bucket is displayed among the list of all the
buckets for the account.
Congratulations! You have created a new S3 bucket with the default configuration.
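Note: A roughly equivalent AWS CLI sketch is shown below; the bucket name and Region are placeholders, and the --create-bucket-configuration option must be omitted if your Region is us-east-1.

# Create the bucket with default (private) settings
aws s3api create-bucket \
    --bucket lab-bucket-1234 \
    --region us-west-2 \
    --create-bucket-configuration LocationConstraint=us-west-2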

Task 3: Configure the S3 LabBucket for public access


In this task, you review the default access setting for S3 buckets. Next, you modify the permissions
settings to allow public access to the bucket.

Task 3.1: Configure the LabBucket to allow public policies to be created


25.​Select the link for the newly created LabBucket found in the Buckets section.

A page with all of the bucket details is displayed.

26.​Choose the Permissions tab.


27.​Locate the Block public access (bucket settings) section.
28.​Choose Edit.

The Edit Block public access (bucket settings) page is displayed.

29.​Unselect Block all public access.


30.​Choose Save Changes.

A message window titled Edit Block public access (bucket settings) is displayed.

31. In the message field, enter confirm.
32.​Choose Confirm.

You have removed the block on all public access policies for the LabBucket. You are now able to
create access policies for the bucket that allow for public access. The bucket is currently not public,
but anyone with the appropriate permissions can grant public access to objects stored within the
bucket.
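Note: The same Block Public Access change can be made from the AWS CLI, roughly as sketched below; the bucket name is a placeholder.

# Turn off all four Block Public Access settings for the bucket
aws s3api put-public-access-block \
    --bucket lab-bucket-1234 \
    --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false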

Task 3.2: Configure a public read policy for the LabBucket


You now create a public object read policy for this bucket.

33.​On the Permissions tab, locate the Bucket policy section.


34.​Choose Edit.

The Edit bucket policy page is displayed.

35.​ Copy edit: Copy and paste the Bucket ARN value into a text editor to save the information
for later. It is a string value like arn:aws:s3:::LabBucket located above the Policy box.
The ARN value uniquely identifies this S3 bucket. You need this specific ARN value when creating
bucket based policies.

36.​ File contents: Copy and paste the following JSON into a text editor.

{
"Version": "2012-10-17",
"Id": "Policy1621958846486",
"Statement": [
{
"Sid": "OriginalPublicReadPolicy",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": "RESOURCE_ARN"
}
]
}

37.​Replace the RESOURCE_ARN value in the JSON with the Bucket ARN value you copied in
a previous step and append a /* to the end of the pasted Bucket ARN value.

By appending the /* wildcard to the end of the ARN, the policy definition applies to all objects located in the bucket.

Here is the example of the updated policy JSON:

{
"Version": "2012-10-17",
"Id": "Policy1621958846486",
"Statement": [
{
"Sid": "OriginalPublicReadPolicy",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": "arn:aws:s3:::lab-bucket-1234/*"
}
]
}
38.​Return to the Amazon S3 console.
39.​Paste the completed JSON into the Policy box.
40.​Choose Save changes.

Caution: If you receive an error message at the bottom of the screen, it’s probably caused by a
syntax error with JSON. The policy will not save until the JSON is valid. You can expand the error
message in the Amazon S3 console for more information about correcting the policy.

By using the * wildcard as the Principal value, all identities requesting the actions defined in the policy document are allowed to do so. By appending the /* wildcard to the allowed Resources, this policy applies to all objects located in the bucket.

Note: The policies currently applied to the bucket make the objects in this bucket publicly readable.

In later lab steps, you configure the bucket to be accessible only from the CloudFront distribution.

Congratulations! You have configured an S3 bucket for public read access.
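Note: If you saved the completed policy JSON to a file, you could apply it from the AWS CLI instead; the file and bucket names below are placeholders.

# Attach the public read policy to the bucket
aws s3api put-bucket-policy \
    --bucket lab-bucket-1234 \
    --policy file://public-read-policy.json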

Task 4: Upload an object into the bucket and test public access
In this task, you upload a single object to the LabBucket. You use this object to test access in the
remaining lab tasks.

Task 4.1: Create a new folder in the bucket


41.​Choose the Objects tab.
42.​Choose Create folder.
43.​Enter ​
CachedObjects into the Folder name field.
44.​Leave all other settings on the page at the default values.
45.​Choose Create folder.

Task 4.2: Upload an object to the bucket


46.​Download the object for these lab instructions by choosing logo.png and saving it to your
local device.
47.​Return to the Amazon S3 Console.
48.​Choose the link for the CachedObjects/ folder that you created previously.
49.​Choose Upload.

The Upload page is displayed.


50.​Choose Add files.
51.​Choose the logo.png object from your local storage location.
52.​Choose Upload.

The Upload: status page is displayed.

An Upload succeeded message is displayed on top of the screen.

Task 4.3: Test public access to an object


53.​Choose the logo.png link from the Files and folders section.

A page with details about the Amazon S3 object is displayed.

54.​Select the link located in the Object URL field.

The picture is displayed in a browser tab.

55.​Inspect the URL for the object and notice it is an Amazon S3 URL.
56.​Close this page with the object.

Congratulations! You have created a folder in an S3 bucket, uploaded an object, and tested that the
object can be retrieved from the S3 URL.

Task 5: Secure the bucket with Amazon CloudFront and Origin Access Control
In the previous task, you confirmed that public access to the LabBucket works, but you are not yet using the CloudFront distribution for object access. In this task, you add the LabBucket as a new origin to the CloudFront distribution and make the objects in the bucket accessible only through the distribution.

Task 5.1: Update the bucket policy for the LabBucket


Update the bucket policy to allow read-only access from the CloudFront distribution.

57.​At the top of the console, in the search bar, search for and choose S3.
58.​Select the link for the LabBucket found in the Buckets section.

A page with all of the bucket details is displayed.

59.​Choose the Permissions tab.


60.​Locate the Bucket policy section.
61.​Choose Edit.
The Edit bucket policy page is displayed.

62.​ Copy edit: Copy and paste the Bucket ARN value into a text editor to save the information
for later. It is a string value like arn:aws:s3:::LabBucket located above the Policy box.

The ARN value uniquely identifies this S3 bucket. You need this specific ARN value when creating
bucket based policies.

63.​ File contents: Copy and paste the following JSON into a text editor.

{
"Version": "2012-10-17",
"Statement": {
"Sid": "AllowCloudFrontServicePrincipalReadOnly",
"Effect": "Allow",
"Principal": {
"Service": "cloudfront.amazonaws.com"
},
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": "RESOURCE_ARN",
"Condition": {
"StringEquals": {
"AWS:SourceArn": "CLOUDFRONT_DISTRIBUTION_ARN"
}
}
}
}

64.​Replace the RESOURCE_ARN value in the JSON with the Bucket ARN value you copied in
a previous step and append a ​
/* to the end of the pasted Bucket ARN value.
65.​Replace the CLOUDFRONT_DISTRIBUTION_ARN value in the JSON with the ARN value
you copied in a previous step.

Here is the example of the updated policy JSON:

{
"Version": "2012-10-17",
"Statement": {
"Sid": "AllowCloudFrontServicePrincipalReadOnly",
"Effect": "Allow",
"Principal": {
"Service": "cloudfront.amazonaws.com"
},
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": "arn:aws:s3:::lab-bucket-1234/*",
"Condition": {
"StringEquals": {
"AWS:SourceArn": "arn:aws:cloudfront::123456789:distribution/E3LU8VQUNZACBE"
}
}
}
}

66.​Return to the Amazon S3 console.


67.​Paste the completed JSON into the Policy box.
68.​Choose Save changes.

Task 5.2: Enable the public access blockers


69.​On the Permissions tab, locate the Block public access (bucket settings) section.
70.​Choose Edit.

The Edit Block public access (bucket settings) page is displayed.

71.​Select Block all public access.


72.​Choose Save changes.

A message window titled Edit Block public access (bucket settings) is displayed.

73. In the field of the message window, enter confirm.
74.​Choose Confirm.

A Successfully edited Block Public Access settings for this bucket. message is displayed on top of
the screen.

A page with all the bucket details is displayed.

Congratulations! You have edited the S3 bucket policy so that the only principal allowed to read objects in the bucket is the CloudFront distribution.

Task 5.3: Create a new origin with Origin Access Control (OAC)
In this task, you add the LabBucket as a new origin to the existing CloudFront distribution.

75.​At the top of the console, in the search bar, search for and choose ​
CloudFront.
76.​From the CloudFront Distributions page, choose the ID link for the only available distribution.
A page showing the details of the distribution is displayed.

77.​Choose the Origins tab.


78.​Choose Create origin.

The Create Origin page is displayed.

79.​From the Origin domain field, choose the name of your LabBucket from the Amazon S3
section.

Note: Recall that the S3 bucket in this lab is never configured as a website. You have only changed the bucket policy that controls who is allowed to perform GetObject API requests against the S3 bucket.

80.​Leave the entry for Origin path empty.

Note: The Origin Path field is optional and configures which directory in the origin CloudFront should
forward requests to. In this lab, rather than configuring the origin path, you leave it blank and instead
configure a behavior to return only objects matching a specific pattern in the requests.

81. For Name, enter My Amazon S3 Origin
82.​For Origin access, select Origin access control settings (recommended).
83.​Choose Create new OAC.

The console displays the Create new OAC message window.

84.​Leave the default settings and choose Create.


85.​Choose Create origin.

A Successfully created origin My Amazon S3 Origin message is displayed on top of the screen.

You can safely ignore any message like The S3 bucket policy needs to be updated, because you
have already updated the bucket policy.
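
Note: For reference, an origin access control can also be created with the AWS CLI (a sketch; the OAC name is an illustrative assumption, and attaching the OAC still requires updating the distribution configuration, which the console does for you in this lab):

# Create an OAC that signs requests to Amazon S3 with SigV4.
aws cloudfront create-origin-access-control \
    --origin-access-control-config Name=MyLabOAC,SigningProtocol=sigv4,SigningBehavior=always,OriginAccessControlOriginType=s3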

Task 5.4: Create a new behavior for the Amazon S3 origin


In this task, you create a new behavior for the Amazon S3 origin so that the distribution has
instructions for how to handle incoming requests for the origin.

86.​Choose the Behaviors tab.


87.​Choose Create behavior.

The Create behavior page is displayed.

88.​In the Path pattern field, enter CachedObjects/*.png

This field configures which request patterns this behavior applies to. Specifically, with this behavior,
only .png objects stored in the CachedObjects folder of the Amazon S3 origin can be returned.
Unless a behavior is configured for them, all other requests to the Amazon S3 origin result in an error
being returned to the requester. Typically, users would not request objects directly from the
CloudFront distribution URL in this manner; instead, your frontend application would generate the
correct object URL to return to the user.

89.​From the Origin and origin groups dropdown menu, choose My Amazon S3 Origin.
90.​From the Cache key and origin requests section, ensure Cache policy and origin request
policy (recommended) is selected.
91.​From the Cache policy dropdown menu, ensure CachingOptimized is selected.
92.​Leave all other settings on the page at the default values.
93.​Choose Create behavior.

A Successfully created new cache behavior CachedObjects/*.png. message is displayed on top of
the screen.

Congratulations! You have created a new origin for the Amazon S3 bucket, an origin access control
(OAC), and a distribution behavior on the CloudFront distribution for the objects stored in the lab's
Amazon S3 bucket.

Task 6: Test direct access to a file in the bucket using the Amazon S3 URL
In this task, you test if the object can still be directly accessed using the Amazon S3 URL.

94.​At the top of the console, in the search bar, search for and choose S3.
95.​Select the link for the LabBucket found in the Buckets section.

A page with all of the bucket details is displayed.

96.​Choose the Objects tab.


97.​Choose the link for the CachedObjects/ folder.
98.​Choose the link for the logo.png object.
99.​Select the link located in the Object URL field.

An error message with Access denied is displayed. This is expected because the new bucket policy
does not allow access to the object directly from the Amazon S3 URL. By denying direct access to
objects through Amazon S3, users can no longer bypass the controls provided by the CloudFront
distribution, which can include logging, behaviors, signed URLs, or signed cookies.
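
Note: You can also confirm the denied response from a command line (a sketch; the bucket name and Region are placeholders, not values from this lab):

# A direct request to the S3 object URL should now return HTTP 403 (Access Denied).
curl -I https://lab-bucket-1234.s3.us-east-1.amazonaws.com/CachedObjects/logo.png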

Congratulations! You have confirmed the object is no longer directly accessible from the Amazon S3
URL.
Task 7: Test access to the object in the bucket using the
CloudFront distribution
In this task, you confirm that you can access objects in the Amazon S3 origin for the CloudFront
distribution.

100.​ Copy edit: Copy the CloudFront distribution’s domain DNS value from the left side of
these lab instructions under the listing LabCloudFrontDistributionDNS.
101.​ Paste the DNS value into a new browser tab.

A simple web page is displayed showing information about the web server from which CloudFront
retrieved the content.

102. Append /CachedObjects/logo.png to the end of the CloudFront distribution’s domain DNS and press Enter.

The browser makes a request to the CloudFront distribution and the object is returned from the
Amazon S3 origin.
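
Note: The same test can be run from a command line (a sketch; the domain below is a placeholder for your LabCloudFrontDistributionDNS value):

# A request through the CloudFront distribution should return HTTP 200 and the object.
curl -I https://d1234example.cloudfront.net/CachedObjects/logo.png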

Hint: If the CloudFront URL redirects you to the Amazon S3 URL, or if the object isn’t immediately
available, the CloudFront distribution might still be updating from your recent changes. Return to the
CloudFront console. Select Distributions from the navigation menu. Confirm that the Status column
is Enabled and the Last modified column has a timestamp. You need to wait for this before testing
the new origin and behavior. After you have confirmed the status of the distribution, wait a few
minutes and try this task again.

Congratulations! You have confirmed that the object is returned from a CloudFront request.

Optional Task 8: Replicate an S3 bucket across AWS Regions
This optional task is provided to you if you have extra lab time or want to learn something a little
more advanced. This task is not necessary to complete. You can end the lab now if you choose by
following the steps to end the lab; otherwise, keep reading.

Cross-Region replication is a feature of Amazon S3 that automatically copies your data from one
bucket to another bucket located in a different AWS Region. It is a useful feature for disaster
recovery. After cross-Region replication is enabled for a bucket, every new object created in the
source bucket that you have read permissions for is replicated into the destination bucket you define.
Replicated objects keep the same names (object keys) in the destination bucket. Objects encrypted
using an Amazon S3 managed encryption key are encrypted in the destination bucket in the same
manner as in the source bucket.
To perform cross-Region replication, you must enable object versioning for both the source and
destination buckets. To keep versioned data manageable, you can deploy lifecycle policies to
automatically archive objects to Amazon S3 Glacier or to delete them.

Optional Task 8.1: Enable versioning on your source bucket


103.​ Return to the browser tab open to the AWS Management Console.
104.​ At the top of the console, in the search bar, search for and choose ​
S3.
105.​ Select the link for the LabBucket found in the Buckets section.

A page with all the bucket details is displayed.

106.​ Select the Properties tab.


107.​ Locate the Bucket Versioning section.
108.​ Choose Edit.

The Edit Bucket Versioning page is displayed.

109.​ Select Enable for Bucket Versioning.


110.​ Choose Save changes.
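
Note: Versioning can also be enabled with the AWS CLI (a sketch; the bucket name is a placeholder). Remember that both the source and destination buckets need versioning enabled for replication.

aws s3api put-bucket-versioning \
    --bucket lab-bucket-1234 \
    --versioning-configuration Status=Enabled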

Optional Task 8.2: Create a destination bucket for cross-Region replication


111.​ From the Amazon S3 navigation menu, choose Buckets.
112.​ At the top of the screen, choose the drop-down next to the region you are in and choose
the SecondaryRegion value displayed to the left of these instructions.

This navigates you to the SecondaryRegion to create the bucket.

113.​ Choose Create bucket.

The Create bucket page is displayed.

114.​ In the Bucket name field, enter a unique bucket name.


115.​ In the Block Public Access settings for this bucket section, unselect Block all public
access.
116.​ In the warning message, select I acknowledge that the current settings might result in
this bucket and the objects within becoming public.

Caution: You do not need to have public access enabled for your personal buckets to use the
cross-Region replication feature. It is enabled in this lab so that you can quickly test if objects are
replicated and retrievable using the Amazon S3 URL.

117.​ For Bucket Versioning, select Enable.


118.​ Choose Create bucket.

The Amazon S3 console is displayed.


The newly created bucket is displayed among the list of all the buckets for the account.

Note: To simplify the narrative in this lab, this newly created bucket is referred to as the
DestinationBucket in the remainder of instructions.

Optional Task 8.3: Configure a public read policy for the new destination
bucket
You now create a public object read policy for this bucket. The public read policy is used in this lab
only to demonstrate that objects are replicated and retrievable using the Amazon S3 URL. Bucket
policies that allow public access are not recommended for most use cases.

119.​ From the Amazon S3 navigation menu, choose Buckets.


120.​ Choose the link for the DestinationBucket from the list of buckets.
121.​ Choose the Permissions tab.
122.​ Locate the Bucket policy section.
123.​ Choose Edit.

The Edit bucket policy page is displayed.

124.​ Copy edit: Copy and paste the Bucket ARN value into a text editor to save the
information for later. It is a string value like arn:aws:s3:::LabBucket located above the Policy
box.

The ARN value uniquely identifies this S3 bucket. You need this specific ARN value when creating
bucket-based policies.

125.​ File contents: Copy and paste the following JSON into a text editor:

{
"Version": "2012-10-17",
"Id": "Policy1621958846486",
"Statement": [
{
"Sid": "OriginalPublicReadPolicy",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": "RESOURCE_ARN"
}
]
}

126.​ Replace the RESOURCE_ARN value in the JSON with the Bucket ARN value you
copied in a previous step and append a /* to the end of the pasted Bucket ARN value.
Here is the example of the updated policy JSON:

{
"Version": "2012-10-17",
"Id": "Policy1621958846486",
"Statement": [
{
"Sid": "OriginalPublicReadPolicy",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": "arn:aws:s3:::DestinationBucket/*"
}
]
}

127.​ Return to the Amazon S3 console.


128.​ Paste the completed JSON into the Policy box.
129.​ Choose Save changes.

The bucket details page is displayed.

Note: The policies currently applied to the bucket make the objects in this bucket publicly readable.

Optional Task 8.4: Create a replication rule


130.​ From the Amazon S3 navigation menu, choose Buckets.
131.​ In the Buckets section, choose the link for the LabBucket.
132.​ Choose the Management tab.
133.​ Locate the Replication rules section.
134.​ Choose Create replication rule.

The Create replication rule page is displayed.

135. In the Replication rule name field, enter MyCrossRegionReplication.
136.​ Verify that LabBucket is set for Source bucket name. If it is not, then you chose the
incorrect bucket before choosing the replication rules.
137.​ In the Choose a rule scope section, select Apply to all objects in the bucket.
138.​ Locate the Destination section.
139.​ Choose Browse S3.
140.​ Select the DestinationBucket.
141.​ Select Choose path.
142.​ Locate the IAM Role section.
143.​ Select Create new role.
144.​ Leave all other options as their default selection.
145.​ Choose Save.
146.​ If the Replicate existing objects window is displayed, select No, do not replicate existing
objects then choose Submit.

The Replication rules page for the LabBucket is displayed.

A Replication configuration successfully updated. If changes to the configuration aren’t displayed,
choose the refresh button. Changes apply only to new objects. To replicate existing objects with this
configuration, choose Create replication job. message is displayed on top of the screen.

All newly created objects in the LabBucket are replicated into the DestinationBucket.

Note: It is possible to replicate existing objects between buckets, but that is beyond the scope of this
lab. You can find more information about this topic in the document linked in the Appendix section.
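
Note: For reference, a replication rule can also be applied with the AWS CLI. The sketch below assumes you have already created an IAM role that Amazon S3 can assume for replication; the role ARN and bucket names are placeholders, not values from this lab.

# replication.json
{
  "Role": "arn:aws:iam::123456789012:role/MyReplicationRole",
  "Rules": [
    {
      "ID": "MyCrossRegionReplication",
      "Status": "Enabled",
      "Priority": 1,
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Filter": {},
      "Destination": { "Bucket": "arn:aws:s3:::destination-bucket-1234" }
    }
  ]
}

# Apply the configuration to the source bucket.
aws s3api put-bucket-replication \
    --bucket lab-bucket-1234 \
    --replication-configuration file://replication.json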

Optional Task 8.5: Verify object replication


147.​ From the Amazon S3 navigation menu, choose Buckets. You might need to expand the
menu by choosing the menu icon.
148.​ In the Buckets section, choose the link for the LabBucket.
149. Download the logo2.png object from these lab instructions by right-clicking the logo2.png link and saving it to your local device.
150.​ Return to the Amazon S3 console.
151.​ Choose the link for the CachedObjects/ folder.

Note: If you do not find the CachedObjects folder, choose Buckets from the navigation menu located
on the left side of the console. Then choose the link for the LabBucket from the list. Finally, choose
the Objects tab to ensure that you are at the correct page.

152.​ Choose Upload.

The Upload page is displayed.

153.​ Choose Add files.


154.​ Choose the logo2.png object from your local storage location.
155.​ Choose Upload.

The Upload: status page is displayed.

An Upload succeeded message is displayed on top of the screen.

156.​ Choose the link for the logo2.png from the Files and folders section.

A page with details about the Amazon S3 object is displayed.

157.​ In the Object management overview section, examine Replication status and refresh the
page periodically until it changes from PENDING to COMPLETED.
158.​ From the Amazon S3 navigation menu, select Buckets.
159.​ In the Buckets section, choose the link for the DestinationBucket.

A page with all the bucket details is displayed.

160.​ Choose the link for the CachedObjects/ folder.


161.​ In the Files and folders section, choose the link for the logo2.png.

A page with details about the Amazon S3 object is displayed.

162.​ In the Object management overview section, examine Replication Status. It displays
REPLICA.
163.​ Choose the link located in the Object URL field.

The picture is displayed in a browser tab.

Congratulations! You have completed setting up cross-Region replication for all new objects
uploaded into the LabBucket.
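
Note: Replication status can also be checked with the AWS CLI (a sketch; the bucket names are placeholders, and you might need to pass --region for the destination bucket's Region):

# Upload a test object to the source bucket.
aws s3 cp logo2.png s3://lab-bucket-1234/CachedObjects/logo2.png

# On the source object, ReplicationStatus moves from PENDING to COMPLETED.
aws s3api head-object --bucket lab-bucket-1234 --key CachedObjects/logo2.png

# On the copy in the destination bucket, ReplicationStatus is REPLICA.
aws s3api head-object --bucket destination-bucket-1234 --key CachedObjects/logo2.png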

Consider these follow-up questions to this optional task:

●​ How can you restrict access to objects in the DestinationBucket?


●​ What steps are needed to add the DestinationBucket to the CloudFront distribution?

Conclusion
Congratulations! You now have successfully done the following:

●​ Created an S3 bucket with default security settings.


●​ Configured an S3 bucket for public access.
●​ Added an S3 bucket as a new origin to an existing CloudFront distribution.
●​ Secured an S3 bucket to allow access only through the CloudFront distribution.
●​ Configured OAC to lock down security to an S3 bucket.
●​ Configured Amazon S3 resource policies for public or OAC access.

End lab
Follow these steps to close the console and end your lab.

164.​ Return to the AWS Management Console.


165.​ At the upper-right corner of the page, choose AWSLabsUser, and then choose Sign out.
166.​ Choose End Lab and then confirm that you want to end your lab.
Lab 7: Capstone Lab
© 2024 Amazon Web Services, Inc. or its affiliates. All rights reserved. This work may not be
reproduced or redistributed, in whole or in part, without prior written permission from Amazon Web
Services, Inc. Commercial copying, lending, or selling is prohibited. All trademarks are the property
of their owners.

Note: Do not include any personal, identifying, or confidential information into the lab environment.
Information entered may be visible to others.

Corrections, feedback, or other questions? Contact us at AWS Training and Certification.

Lab overview
You are tasked with applying your new knowledge to solve several architectural challenges within a
specific business case. First, you are given a list of requirements related to the design. Then, you
perform a series of actions to deploy and configure the services needed to meet the requirements.

During this capstone lab, you have access to the following:

●​ Downloadable AWS CloudFormation templates and other lab files


●​ Task scenarios, descriptions, and requirements
●​ Optional step-by-step instructions, if needed

The task scenarios provide relevant background and help you understand how the requirements
solve a real-world business problem. Use the templates and the requirements list to complete all of
the tasks in the capstone. Now that you are familiar with concepts and services, this lab solidifies
your knowledge through practice. In the real world, you encounter problems that are not well-defined
or sequenced logically. By the end of this capstone, you should have a better understanding of how
you can apply architectural best practices to real-world problems.

Objectives
After completing this lab, you should be able to do the following:

●​ Deploy a virtual network spread across multiple Availability Zones in a Region using a
CloudFormation template.
●​ Deploy a highly available and fully managed relational database across those Availability
Zones (AZ) using Amazon Relational Database Service (Amazon RDS).
●​ Use Amazon Elastic File System (Amazon EFS) to provision a shared storage layer across
multiple Availability Zones for the application tier, powered by Network File System (NFS).
●​ Create a group of web servers that automatically scales in response to load variations to
complete your application tier.

Icon key
Various icons are used throughout this lab to call attention to different types of instructions and
notes. The following list explains the purpose for each icon:

●​ Expected output: A sample output that you can use to verify the output of a command or
edited file.
●​ Note: A hint, tip, or important guidance.
●​ Learn more: Where to find more information.
●​ Security: An opportunity to incorporate security best practices.
●​ Refresh: A time when you might need to refresh a web browser page or list to show new
information.
●​ Copy edit: A time when copying a command, script, or other text to a text editor (to edit
specific variables within it) might be easier than editing directly in the command line or
terminal.
●​ Hint: A hint to a question or challenge.

Start lab
1.​ To launch the lab, at the top of the page, choose Start Lab.​
Caution: You must wait for the provisioned AWS services to be ready before you can
continue.
2. To open the lab, choose Open Console.
You are automatically signed in to the AWS Management Console in a new web browser tab.​
Warning: Do not change the Region unless instructed.

Common sign-in errors

Error: You must first sign out

If you see the message, You must first log out before logging into a different AWS account:

●​ Choose the click here link.


●​ Close your Amazon Web Services Sign In web browser tab and return to your initial lab
page.
●​ Choose Open Console again.

Error: Choosing Start Lab has no effect


In some cases, certain pop-up or script blocker web browser extensions might prevent the Start Lab
button from working as intended. If you experience an issue starting the lab:

●​ Add the lab domain name to your pop-up or script blocker’s allow list or turn it off.
●​ Refresh the page and try again

Lab scenario
Example Corp. creates marketing campaigns for small to medium-sized businesses. They recently
hired you to work with their engineering teams to build a proof of concept for their business. To date,
they have hosted their client-facing application in an on-premises data center, but they recently
decided to move their operations to the cloud in an effort to save money and transform their
business with a cloud-first approach. Some members of their team have cloud experience and
recommended AWS Cloud services to build their solution.

In addition, they decided to redesign their web portal. Customers use the portal to access their
accounts, create marketing plans, and run data analysis on their marketing campaigns. They would
like to have a working prototype in two weeks. You must design an architecture to support this
application. Your solution must be fast, durable, scalable, and more cost-effective than their existing
on-premises infrastructure.

The following image shows the final architecture of the designed solution:

AWS services not used in this lab


AWS services not used in this lab are deactivated in the lab environment. In addition, the capabilities
of the services used in this lab are limited to only what the lab requires. Expect errors when
accessing other services or performing actions beyond those provided in this lab guide.

Task 1 instructions: Review and run a pre-configured


CloudFormation template
Task 1.1: Navigate to the CloudFormation console
3.​ At the top of the AWS Management Console, in the search box, search for and choose
CloudFormation.

Task 1.2: Obtain and review the CloudFormation template


4.​ Open the context (right-click) menu on this Task1.yaml link, and choose the option to save
the CloudFormation template to your computer.
5.​ Open the downloaded file in a text editor (not a word processor).
6.​ Review the CloudFormation template.
7.​ Predict what resources are created by this template.

Task 1.3: Create the CloudFormation stack


8.​ Choose Create stack.

Note: If the console starts you on the Stacks page instead of the AWS CloudFormation landing
page, then you can get to the Create stack page in two steps.

●​ Choose the Create stack dropdown menu.


●​ Choose With new resources (standard).

The Create stack page is displayed.

9.​Configure the following:


●​ Select Choose an existing template.
●​ Select Amazon S3 URL.
●​ Copy the Task1TemplateUrl value from the left side of these lab instructions and paste it in
the Amazon S3 URL text box.
●​ Choose Next.

The Specify stack details page is displayed.

10.​Stack name: Enter VPCStack.


11.​ Parameters: Keep the default values.
12.​Choose Next.
The Configure stack options page is displayed. You can use this page to specify additional
parameters. You can browse the page, but leave settings at the default values.

13.​Choose Next.

The Review and create page is displayed. This page is a summary of all settings.

14.​At the bottom of the page, choose Submit.

The stack details page is displayed.

The stack enters the CREATE_IN_PROGRESS status.

15.​Choose the Stack info tab.


16. Occasionally choose the refresh icon in the Overview section.
17.​Wait for the status to change to CREATE_COMPLETE.

Note: This stack can take up to 5 minutes to deploy the resources.
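
Note: For reference, the same stack can be created and monitored with the AWS CLI (a sketch; the template URL is a placeholder for the Task1TemplateUrl value from the lab page):

aws cloudformation create-stack \
    --stack-name VPCStack \
    --template-url https://example-bucket.s3.amazonaws.com/Task1.yaml

# Wait until the stack reaches CREATE_COMPLETE, then list its outputs.
aws cloudformation wait stack-create-complete --stack-name VPCStack
aws cloudformation describe-stacks --stack-name VPCStack --query "Stacks[0].Outputs"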

Task 1.4: View created resources from the console


18.​Choose the Resources tab.

The list shows the resources that are being created. CloudFormation determines the optimal order
for resources to be created, such as creating the VPC before the subnet.

19.​Review the resources that were deployed in the stack.


20.​Choose the Events tab and scroll through the list.

The list shows (in reverse order) the activities performed by CloudFormation, such as starting to
create a resource and then completing the resource creation. Any errors encountered during the
creation of the stack are listed in this tab.

21.​Choose the Outputs tab.


22.​Review the key-value pairs in the Outputs section. These values might be useful in later lab
tasks.

Congratulations! You have learned to configure the stack and created all of the resources using the
provided CloudFormation template.

Task 2 instructions: Create an Amazon RDS database


Task 2.1: Navigate to the Amazon RDS console
23.​At the top of the AWS Management Console, in the search box, search for and choose RDS.
The Amazon RDS console page is displayed.

Task 2.2: Create a new DB subnet group


24.​In the left navigation pane, choose Subnet groups.
25.​Choose Create DB subnet group.

The Create DB subnet group page is displayed.

26.​In the Subnet group details section, configure the following:


●​ Name: Enter AuroraSubnetGroup.
●​ Description: Enter A 2 AZ subnet group for my database.
●​ VPC: Select LabVPC from the dropdown menu.
27.​In the Add subnets section, configure the following:
●​ From the Availability Zones dropdown menu:
○​ Select the Availability Zone ending in a.
○​ Select the Availability Zone ending in b.
●​ From the Subnets dropdown menu:
○​ Select the subnet with the CIDR block 10.0.4.0/24 from the Availability Zone ending
in a.
○​ Select the subnet with the CIDR block 10.0.5.0/24 from the Availability Zone ending
in b.
28.​Choose Create.

A Successfully created AuroraSubnetGroup. View subnet group message is displayed on top of the
screen.
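
Note: The same subnet group can be created with the AWS CLI (a sketch; the subnet IDs are placeholders for the two database subnets):

aws rds create-db-subnet-group \
    --db-subnet-group-name aurorasubnetgroup \
    --db-subnet-group-description "A 2 AZ subnet group for my database" \
    --subnet-ids subnet-0aaa1111bbbb2222c subnet-0ddd3333eeee4444f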

Task 2.3: Create a new Amazon Aurora database


29.​In the left navigation pane, choose Databases.
30.​Choose Create database.

The Create database page is displayed.

31.​In the Choose a database creation method section, select Standard create.
32.​In the Engine options section, configure the following:
●​ In Engine type, select Aurora(MySQL Compatible).
33.​In the Templates section, select Production.
34.​In the Settings section, configure the following:
●​ DB cluster identifier: Enter MyDBCluster.
●​ Master username: Enter admin.
●​ Credentials management: Select Self managed
●​ Master password: Paste the LabPassword value from the left side of these lab instructions.
●​ Confirm master password: Paste the LabPassword value from the left side of these lab
instructions.

Note: Remember your credentials.


35.​In the Instance configuration section, configure the following:
●​ DB instance class: Select Burstable classes (includes t classes).
●​ From the instance type dropdown menu, choose db.t3.medium.
36.​In the Availability & durability section, for Multi-AZ deployment, select Create an Aurora
Replica or Reader node in a different AZ (recommended for scaled availability).
37.​In the Connectivity section, configure the following:
●​ Virtual private cloud (VPC): Select LabVPC from the dropdown menu.
●​ DB subnet group: Select aurorasubnetgroup from the dropdown menu.
●​ Public access: Select No.
●​ VPC security group (firewall): Select Choose existing.
●​ Existing VPC security groups:
○​ Select only the xxxxx-RDSSecurityGroup-xxxxx group from the dropdown menu.
○​ To remove the default security group, choose the X.
●​ Expand the Additional configuration section and configure the following:
○​ Database port: Leave the configuration at the default value.
38.​In the Monitoring section, clear Enable Enhanced monitoring.
39.​Scroll to the bottom of the page and expand the main Additional configuration section.
●​ In the Database options section, configure the following:
○​ Initial database name: Enter WPDatabase.
●​ In the Encryption section, clear Enable encryption.
●​ In the Maintenance section, clear Enable auto minor version upgrade.
●​ In the Deletion protection section, clear Enable deletion protection.
40.​Scroll to the bottom of the screen and choose Create database.
41.​On the Suggested add-ons for mydbcluster pop-up window, choose Close.

Note: Your Aurora MySQL DB cluster is in the process of launching. The cluster you configured
consists of two instances, each in a different Availability Zone. The Amazon Aurora DB cluster can
take up to 5 minutes to launch. Wait for the mydbcluster status to change to Available. You do not
have to wait for the availability of the instances to continue.

A Successfully created database mydbcluster message is displayed on top of the screen.

42.​Choose View connection details displayed on the success message border to save the
connection details of your mydbcluster database to a text editor.

Much of this information can also be found in the next task.

Note: If you notice the error “Failed to turn on enhanced monitoring for database mydbcluster
because of missing permissions”, you can safely ignore it.
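
Note: For reference, a comparable Aurora MySQL cluster can be created with the AWS CLI (a sketch; the password and security group ID are placeholders, and the engine version is left at the default):

# Create the DB cluster.
aws rds create-db-cluster \
    --db-cluster-identifier mydbcluster \
    --engine aurora-mysql \
    --master-username admin \
    --master-user-password "EXAMPLE-PASSWORD" \
    --database-name WPDatabase \
    --db-subnet-group-name aurorasubnetgroup \
    --vpc-security-group-ids sg-0123456789abcdef0

# Add a db.t3.medium instance to the cluster (repeat in a second AZ for a reader).
aws rds create-db-instance \
    --db-instance-identifier mydbcluster-instance-1 \
    --db-cluster-identifier mydbcluster \
    --db-instance-class db.t3.medium \
    --engine aurora-mysql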

Task 2.4: Copy database metadata


43.​In the left navigation pane, choose Databases.
44.​Choose the mydbcluster link.
45.​Choose the Connectivity & security tab.
46.​Copy the endpoint value for the Writer instance to a text editor.
Tip: To copy the Writer instance endpoint, hover over it and choose the copy icon.

47.​Choose the Configuration tab.


48.​Copy the Master username value to a text editor.
49.​For Master password: Use the LabPassword value from the left side of these lab instructions.
50.​In the left navigation pane, choose Databases.
51.​Choose the mydbcluster-instance-x writer instance link.
52.​Choose the Configuration tab.
53.​Copy the DB name value to a text editor.

Congratulations! You have created the Amazon Aurora database.

Task 3 instructions: Creating an Amazon EFS file system


Task 3.1: Navigate to the Amazon EFS console
54.​At the top of the AWS Management Console, in the search box, search for and choose EFS.

The Amazon Elastic File System console page is displayed.

Task 3.2: Create a new file system


55.​Choose Create file system.

The Create file system page is displayed.

56.​Choose Customize.

The File system settings page is displayed.

57.​In the General section, configure the following:


●​ Name - optional: Enter myWPEFS.
●​ Clear Enable automatic backups.
●​ Clear Enable encryption of data at rest.
●​ Expand the Tags - optional section, configure the following:
○​ Tag key: Enter Name.
○​ Tag value – optional: Enter myWPEFS.
○​ Leave all other settings at their default value.
58.​Choose Next.

The Network access page is displayed.

59.​From the Virtual Private Cloud (VPC) dropdown menu, select LabVPC.
60.​For Mount targets, configure the following:
●​ Availability zone: Select the Availability Zone ending in “a” from the dropdown menu.
●​ Subnet ID: Select AppSubnet1 from the dropdown menu.
●​ Security groups: Select EFSMountTargetSecurityGroup from the dropdown menu.
●​ To remove the default Security group, choose the X.
●​ Availability zone: Select Availability Zone ending in “b” from the dropdown menu.
●​ Subnet ID: Select AppSubnet2 from the dropdown menu.
●​ Security groups: Select EFSMountTargetSecurityGroup from the dropdown menu.
●​ To remove the default Security group, choose the X.
61.​Choose Next.

The File system policy – optional page is displayed.

Note: Configuring this page is not necessary in this lab.

62.​Choose Next.

The Review and create page is displayed.

63.​Scroll to the bottom of the page, and choose Create.

A Success! File system (fs-xxxxxxx) is available. message is displayed on top of the screen.

The file system state displays as Available after several minutes.
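
Note: The file system and its mount targets can also be created with the AWS CLI (a sketch; the file system, subnet, and security group IDs are placeholders):

# Create the file system with a Name tag.
aws efs create-file-system \
    --creation-token myWPEFS \
    --tags Key=Name,Value=myWPEFS

# Create one mount target per application subnet, using the EFS mount target security group.
aws efs create-mount-target \
    --file-system-id fs-0123456789abcdef0 \
    --subnet-id subnet-0aaa1111bbbb2222c \
    --security-groups sg-0123456789abcdef0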

Task 3.3: Copy Amazon EFS metadata


64.​In the left navigation pane, select File systems.
65.​Copy the File system ID generated for myWPEFS to a text editor. It has a format like
fs-a1234567.

Congratulations, you have created the Amazon EFS file system.

Task 4 instructions: Create an Application Load Balancer


Task 4.1: Navigate to the Amazon EC2 console
66.​At the top of the AWS Management Console, in the search box, search for and choose EC2.

Task 4.2: Create a Target group


67.​In the left navigation pane, under the Load Balancing section, choose Target Groups.
68.​Choose Create target group.

The Specify group details page is displayed.

69.​In the Basic configuration section, configure the following:


●​ For Choose a target type: Select Instances.
●​ For Target group name: Enter myWPTargetGroup.
●​ For VPC: Select LabVPC from the dropdown menu.
70.​In the Health checks section, configure the following:
●​ For Health check path: Enter /wp-login.php.
●​ Expand the Advanced health check settings section and configure the following:
○​ Healthy threshold: Enter 2.
○​ Unhealthy threshold: Enter 10.
○​ Timeout: Enter 50.
○​ Interval: Enter 60.

Leave the remaining settings on the page at their default values.

71.​Choose Next.

The Register targets page is displayed. There are no targets to register currently.

72.​Scroll to the bottom of the page and choose Create target group.

A Successfully created target group: myWPTargetGroup message is displayed on top of the screen.

Task 4.3: Create an Application Load Balancer


73.​In the left navigation pane, choose Load Balancers.
74.​Choose Create load balancer.

The Compare and select load balancer type page is displayed.

75.​In the Load balancer types section, for Application Load Balancer, choose Create.

The Create Application Load Balancer page is displayed.

76.​In the Basic configuration section, configure the following:


●​ For Load balancer name: Enter myWPAppALB.
77.​In the Network mapping section, configure the following:
●​ VPC: Select LabVPC from the dropdown menu.
●​ Mappings:
○​ Select the first Availability Zone listed, and select PublicSubnet1 from the Subnet
dropdown menu.
○​ Select the second Availability Zone listed, and select PublicSubnet2 from the
Subnet dropdown menu.
78.​In the Security groups section, configure the following:
●​ From the Security groups dropdown menu, select AppInstanceSecurityGroup.
●​ To remove the default security group, choose the X.
79.​In the Listeners and routing section, configure the following:
●​ For Listener HTTP:80: from the Default action dropdown menu, select myWPTargetGroup.
80.​Scroll to the bottom of the page and choose Create load balancer.
A Successfully created load balancer: myWPAppALB message is displayed on top of the screen.

The load balancer is in the Provisioning state for a few minutes and then changes to Active.

81.​Copy the myWPAppALB load balancer DNS name to a text editor.

Congratulations, you have created the target group and an Application Load Balancer.
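
Note: For reference, the target group, load balancer, and listener can also be created with the AWS CLI (a sketch; the VPC, subnet, security group, and ARN values are placeholders):

# Target group with the same health check settings used in the console.
aws elbv2 create-target-group \
    --name myWPTargetGroup \
    --protocol HTTP --port 80 \
    --target-type instance \
    --vpc-id vpc-0123456789abcdef0 \
    --health-check-path /wp-login.php \
    --healthy-threshold-count 2 --unhealthy-threshold-count 10 \
    --health-check-timeout-seconds 50 --health-check-interval-seconds 60

# Internet-facing Application Load Balancer in the two public subnets.
aws elbv2 create-load-balancer \
    --name myWPAppALB \
    --subnets subnet-0pub1111aaaa2222b subnet-0pub3333cccc4444d \
    --security-groups sg-0app123456789abcd

# Listener that forwards HTTP:80 traffic to the target group.
aws elbv2 create-listener \
    --load-balancer-arn <load-balancer-arn> \
    --protocol HTTP --port 80 \
    --default-actions Type=forward,TargetGroupArn=<target-group-arn>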

Task 5 instructions: Creating a launch template using


CloudFormation
Task 5.1: Navigate to the CloudFormation console
82.​At the top of the AWS Management Console, in the search box, search for and choose
CloudFormation.

Task 5.2: Obtain and review the CloudFormation template


83.​Open the context (right-click) menu on this Task5.yaml link, and choose the option to save
the CloudFormation template to your computer.
84.​Open the downloaded file in a text editor (not a word processor).
85.​Review the CloudFormation template.
86.​Predict what resources are created by this template.

Task 5.3: Create the CloudFormation stack


87.​Choose Create stack.

Note: If the console starts you on the Stacks page instead of the AWS CloudFormation landing
page, then you can get to the Create stack page in two steps.

●​ Choose the Create stack dropdown menu.


●​ Choose With new resources (standard).

The Create stack page is displayed.

88.​Configure the following:


●​ Select Choose an existing template.
●​ Select Amazon S3 URL.
●​ Copy the Task5TemplateUrl value from the left side of these lab instructions and paste it in
the Amazon S3 URL text box.
●​ Choose Next.

The Specify stack details page is displayed.


89.​Set the Stack name as ​
WPLaunchConfigStack.
90.​Configure the following parameters:
●​ DB name: Paste the initial database name you copied in Task 2.​
Note: Make sure that you paste the initial database name, not the cluster name.
●​ Database endpoint: Paste the writer endpoint you copied in Task 2.
●​ Database User Name: Paste the Master username you copied in Task 2.
●​ Database Password: Paste the Master password you copied in Task 2.
●​ WordPress admin username: Defaults to ​
wpadmin.
●​ WordPress admin password: Paste the LabPassword value from the left side of these lab
instructions.
●​ WordPress admin email address: Input a valid email address.
●​ Instance Type: Leave the default value of t3.medium.
●​ ALBDnsName: Paste the DNS name value you copied in Task 4.
●​ LatestAL2023AmiId: Leave the default value.
●​ WPElasticFileSystemID: Paste the File system ID value you copied in Task 3.
91.​Choose Next.

The Configure stack options page is displayed. You can use this page to specify additional
parameters. You can browse the page, but leave settings at their default values.

92.​Choose Next.

The Review and create page is displayed. This page is a summary of all settings.

93.​Scroll to the bottom of the page and choose Submit.

The stack details page is displayed.

The stack enters the CREATE_IN_PROGRESS status.

94.​Choose the Stack info tab.


95. Occasionally choose the refresh icon in the Overview section.
96.​Wait for the stack status to change to CREATE_COMPLETE.

Note: This stack can take up to 5 minutes to deploy the resources.

Task 5.4: View created resources from the console


97.​Choose the Resources tab.

The list shows the resources that are created.

Congratulations, you have created the stack using the provided CloudFormation template.
Task 6 instructions: Create the application servers by
configuring an Auto Scaling group and a scaling policy
Task 6.1: Create an Auto Scaling group
98.​At the top of the AWS Management Console, in the search box, search for and choose ​
EC2.
99.​In the left navigation pane, under the Auto Scaling section, choose Auto Scaling Groups.
100.​ Choose Create Auto Scaling group.

The Choose launch template or configuration page is displayed.

101.​ Configure the following:


●​ Auto Scaling group name: Enter WP-ASG.
●​ Launch template: Select the launch template that you created in Task 5.
102.​ Choose Next.

The Choose instance launch options page is displayed.

103.​ In the Network section, configure the following:


●​ VPC: Select LabVPC from the dropdown menu.
●​ Availability Zones and subnets: Select AppSubnet1 and AppSubnet2 from the dropdown
menu.
104.​ Choose Next.

The Configure advanced options - optional page is displayed.

105.​ On the Configure advanced options - optional page, configure the following:
●​ Select Attach to an existing load balancer.
●​ Select Choose from your load balancer target groups.
●​ From the Existing load balancer target groups dropdown menu, select myWPTargetGroup |
HTTP.
●​ For Additional health check types - optional: Select Turn on Elastic Load Balancing health
checks.
●​ Health check grace period: Leave at the default value of 300 or more.
●​ Monitoring: Select Enable group metrics collection within CloudWatch.
106.​ Choose Next.

The Configure group size and scaling - optional page is displayed.

107.​ On the Configure group size and scaling - optional page, configure the following:
●​ In the Group size section:
○​ Desired capacity: Enter 2.
●​ In the Scaling section:
○​ Min desired capacity: Enter 2
○​ Max desired capacity: Enter 4
108.​ In the Automatic scaling - optional section, configure the following:
●​ Select Target tracking scaling policy.

The remaining settings in this section can be left at their default values.

109.​ Choose Next.

The Add notifications - optional page is displayed.

110.​ Choose Next.

The Add tags - optional page is displayed.

111.​ Choose Add tag and then configure the following:


●​ Key: Enter Name.
●​ Value - optional: Enter WP-App.
112.​ Choose Next.

The Review page is displayed.

113.​ Review the Auto Scaling group configuration for accuracy, and then at the bottom of the
page, choose Create Auto Scaling group.

The Auto Scaling groups page is displayed.

Now that you have created your Auto Scaling group, you can verify that the group has launched your
EC2 instances.

114.​ Choose the Auto Scaling group WP-ASG link.


115.​ To review information about the Auto Scaling group, examine the Group details section.
116.​ Choose the Activity tab.

The Activity history section maintains a record of events that have occurred in your Auto Scaling
group. The Status column contains the current status of your instances. When your instances are
launching, the status column shows PreInService. The status changes to Successful after an
instance is launched.

117.​ Choose the Instance management tab.

Your Auto Scaling group has launched two Amazon EC2 instances and they are in the InService
lifecycle state. The Health Status column shows the result of the Amazon EC2 instance health check
on your instances.

If your instances have not reached the InService state yet, you need to wait a few minutes. You can
choose the refresh button to retrieve the current lifecycle state of your instances.

118.​ Choose the Monitoring tab. Here, you can review monitoring-related information for your
Auto Scaling group.
This page provides information about activity in your Auto Scaling group, as well as the usage and
health status of your instances. The Auto Scaling tab displays Amazon CloudWatch metrics about
your Auto Scaling group, while the EC2 tab displays metrics for the Amazon EC2 instances
managed by the Auto Scaling group.
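
Note: A comparable Auto Scaling group can be created with the AWS CLI (a sketch; the launch template name, subnet IDs, and target group ARN are placeholders for the values created earlier in this lab):

aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name WP-ASG \
    --launch-template LaunchTemplateName=<launch-template-from-task-5>,Version='$Latest' \
    --min-size 2 --max-size 4 --desired-capacity 2 \
    --vpc-zone-identifier "subnet-0app1111aaaa2222b,subnet-0app3333cccc4444d" \
    --target-group-arns <target-group-arn> \
    --health-check-type ELB --health-check-grace-period 300 \
    --tags Key=Name,Value=WP-App,PropagateAtLaunch=true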

Task 6.2: Verify the target groups are healthy


119.​ Expand the navigation menu by choosing the menu icon in the upper-left corner.
120.​ In the left navigation pane, choose Target Groups.
121.​ Choose the myWPTargetGroup link.
122.​ In the Targets tab, wait until the instance Health status is displayed as healthy.

Note: It can take up to 5 minutes for the health checks to show as healthy. Wait for the Health status
to display healthy before continuing.
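
Note: Target health can also be checked with the AWS CLI (a sketch; the target group ARN is a placeholder). Healthy targets report a "State" of "healthy".

aws elbv2 describe-target-health --target-group-arn <target-group-arn>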

Task 6.3: Log in to the WordPress web application


123.​ In the left navigation pane, choose Load Balancers.
124.​ Copy the myWPAppALB load balancer DNS name to a text editor and append the value
/wp-login.php to the end of the DNS name to complete your WordPress application URL.

Expected output: Example of a completed WordPress application URL:

myWPAppALB-4e009e86b4f704cc.us-west-2.elb.amazonaws.com/wp-login.php

125.​ Paste the WordPress application URL value into a new browser tab.

The WordPress login page is displayed.

126.​ Enter the following:


●​ Username or Email Address: Enter wpadmin.
●​ Password: Paste the LabPassword value from the left side of these lab instructions.
127.​ Choose the Log in button.

Congratulations, you have created the Auto Scaling group and successfully launched the
WordPress application.

Conclusion
Congratulations! You now have successfully done the following:

●​ Deployed a virtual network spread across multiple Availability Zones in a Region using a
CloudFormation template.
●​ Deployed a highly available and fully managed relational database across those Availability
Zones using Amazon RDS.
●​ Used Amazon EFS to provision a shared storage layer across multiple Availability Zones for
the application tier, powered by NFS.
●​ Created a group of web servers that automatically scales in response to load variations to
complete your application tier.

End lab
Follow these steps to close the console and end your lab.

128.​ Return to the AWS Management Console.


129.​ At the upper-right corner of the page, choose AWSLabsUser, and then choose Sign out.
130.​ Choose End Lab and then confirm that you want to end your lab.

For more information about AWS Training and Certification, see https://aws.amazon.com/training/.

Your feedback is welcome and appreciated.​


If you would like to share any feedback, suggestions, or corrections, please provide the details in our
AWS Training and Certification Contact Form.
