PT AWS Project


CLOUD COMPUTING

PROJECT

Philip T
May 4th, 2022

CONTENTS

OVERVIEW

SERVICES USED

ARCHITECTURE

SERVICE CONFIGURATION AND DEPLOYMENT

IDENTITY AND ACCESS MANAGEMENT (IAM)
COMMAND LINE INTERFACE CLIENT (CLI)
VIRTUAL MACHINES
ELASTIC IP ADDRESS
ROUTE 53
LOAD BALANCER
S3 STORAGE BUCKET
CLOUDFRONT
ELASTIC FILE SYSTEM
CLOUDWATCH
BILLING ALARMS

TESTING AND DEMONSTRATION

LESSONS LEARNED

CONCLUSION

REFERENCES


Amazon Web Services Project


Overview
Amazon Web Services (AWS), one of the top two cloud computing providers, offers an
ever-expanding catalog of infrastructure, platform, and software services. As demand
for cloud services grows, I imagine more and more corporations will be turning to AWS
for their cloud computing needs. The increased availability, redundancy, and scalability
offer plenty of incentives for business owners to transition to cloud infrastructure.
Additionally, cloud services scale easily, making them a cost-effective option for
customers, who can pay for their infrastructure "à la carte".

My goal for this AWS project was to deploy a simple WordPress video-hosting website
with redundancy while keeping the service owner (me) apprised of all associated costs
and usage statistics. In deploying this project, I have only encountered a small number
of the products offered by AWS.

The instructor can click here to log in to the backend project console using the Identity
and Access Management URL (login credentials are detailed below in the IAM section).

The public-facing portion of my AWS project can be found at http://philsdomain.site/.

Services Used
I have kept my project infrastructure simple by design while achieving the full desired
capability of the web application/servers.

Since my AWS account already included a Virtual Private Cloud (VPC), I did not have to
worry about deploying a Virtual Network to provide connectivity for my infrastructure
components. I established an IAM account for the professor and generated an access
key so I could manage my resources through the AWS CLI. I utilized three Ubuntu EC2
virtual machines and set up a few Elastic IP addresses for them. I also configured a
domain and utilized Route 53 to manage its DNS. I created a security group to set
inbound and outbound rules for my systems. Next, an Application Load Balancer was
attached to manage traffic between my IPs. To store my web content, I stood up an S3
storage bucket and used CloudFront to serve my content efficiently. Additionally, I
attached an Elastic File System (EFS) to my machines to have a central repository for
my files. Lastly, I enabled alerts for CPU usage, storage usage, and billing using
CloudWatch to stay informed about the cost and performance of my services.

Here is a list of all services used within the project resource group:


§ IAM (Tier 1)
§ CLI (Tier 1 New)
§ EC2 (Tier 1)
§ Elastic IP (Tier 2)
§ Route 53 (Tier 1)
§ Load Balancer (Tier 1)
§ S3 Storage Buckets (Tier 1)
§ CloudFront (Tier 1)
§ EFS (Tier 1 New)
§ CloudWatch (Tier 2)
§ Billing Alarms (Tier 2)
§ Usage Alarms (Tier 2)

Architecture
I have generated a network diagram showing all the services used in this project and
how they are connected to each other within the cloud.

Service Configuration and Deployment


Each of my resources was deployed in the us-east-1 region. I believe I attained a good
balance of deployments using both the AWS Console (GUI) and the Command Line
Interface (CLI) client. When creating resources through the console, I searched for the
service from the home page search bar, selected the service, then clicked "Create"
within the service. For each resource used in my project, I will outline how it was
deployed and list the steps involved in configuring the service.

Identity and Access Management (IAM)


To allow the instructor to access my AWS infrastructure, I created a user profile with the
following credentials.

Username:
Password:

This profile was assigned full administrative privileges. Follow this link to log in to the
IAM portal to view my service configurations.

I also created an additional administrative user profile for myself to avoid using the root
login every time. Multi-factor authentication was also enabled for the root login to further
protect my account. Lastly, I created an access key to use with the CLI client to access
my account.
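
For reference, an equivalent setup can be scripted through the CLI. This is a minimal
sketch, assuming a placeholder user name and password (I performed these steps in the
console):

# Create the instructor's user and grant full administrative privileges.
aws iam create-user --user-name instructor
aws iam attach-user-policy \
    --user-name instructor \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# Give the user a console password, then generate an access key for CLI use.
aws iam create-login-profile --user-name instructor --password '<initial-password>'
aws iam create-access-key --user-name instructor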

Command Line Interface Client (CLI)


To make resource deployment and management a little easier, I installed the AWS CLI
client (version 2) on two of my personal machines. Before installation, I needed to
update my Python version to 3.10 (at least 3.6 is required).

After installation, I used aws configure to input my access key to connect to my
account. I primarily used the CLI to deploy my EC2 instances. This was a great tool to
bring resource management right to my local desktop.
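
The configuration prompts look roughly like this (key values redacted; us-east-1
matches my deployment region), and aws sts get-caller-identity is a quick sanity check
that the credentials work:

$ aws configure
AWS Access Key ID [None]: AKIA................
AWS Secret Access Key [None]: ........................................
Default region name [None]: us-east-1
Default output format [None]: json

# Prints the account ID and user ARN if the key is valid.
$ aws sts get-caller-identity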

Virtual Machines
I chose to deploy three Ubuntu Linux virtual machines (EC2 instances) as my web
servers, all running Ubuntu 18.04. Before launching an instance, I created a security
group to be used by the VMs and opened inbound port 22 (for SSH, restricted to my IP
for added security) and port 80 (for HTTP internet traffic). I also set an outbound rule to
allow an internet connection through the gateway. Then I created a key pair to be used
for the machines, as sketched below.
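
A rough CLI equivalent of the security group and key pair setup (the group name and
the home IP 203.0.113.5 are placeholders):

# Create the security group in the default VPC.
aws ec2 create-security-group \
    --group-name web-sg \
    --description "Web server security group"

# Allow SSH only from my own IP, and HTTP from anywhere.
aws ec2 authorize-security-group-ingress \
    --group-name web-sg --protocol tcp --port 22 --cidr 203.0.113.5/32
aws ec2 authorize-security-group-ingress \
    --group-name web-sg --protocol tcp --port 80 --cidr 0.0.0.0/0

# Create the key pair and save the private key locally.
aws ec2 create-key-pair --key-name project-key2 \
    --query 'KeyMaterial' --output text > project-key2.pem
chmod 400 project-key2.pem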

To start, I deployed one EC2 instance to configure Apache and WordPress before
creating an image (AMI) from which to deploy the other machines. I used the following
command in the CLI to deploy the first instance from an existing AMI:
aws ec2 run-instances \
    --image-id ami-05e284a3f59b905d3 \
    --count 1 --instance-type t2.micro \
    --key-name project-key2 \
    --security-group-ids sg-01d5e3b6e6dd2914f


I connected to the new EC2 instance remotely using my SSH client (Terminal) and
started configuration by upgrading all packages and installing the LAMP stack.

I created a MySQL database for WordPress and then installed WordPress on the
machine using wget https://wordpress.org/latest.tar.gz. Then I unpacked the
WordPress files and created my wp-config.php file. I opened this file using nano and
entered my database information.
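
In outline, the server-side setup looked roughly like this (package names are for
Ubuntu 18.04; the database name and credentials are placeholders):

# Upgrade packages and install the LAMP stack.
sudo apt update && sudo apt upgrade -y
sudo apt install -y apache2 mysql-server php php-mysql libapache2-mod-php

# Create a database and user for WordPress (placeholder credentials).
sudo mysql -e "CREATE DATABASE wordpress;
    CREATE USER 'wpuser'@'localhost' IDENTIFIED BY 'changeme';
    GRANT ALL PRIVILEGES ON wordpress.* TO 'wpuser'@'localhost';
    FLUSH PRIVILEGES;"

# Download and unpack WordPress, then create and edit the config file.
wget https://wordpress.org/latest.tar.gz
tar -xzf latest.tar.gz
cp wordpress/wp-config-sample.php wordpress/wp-config.php
nano wordpress/wp-config.php   # enter the database name, user, and password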

After configuration, I navigated to my server's WordPress admin page at
3.215.243.191/wp-admin. From there I was able to log in to WordPress and manage my
website.

When I first tested my website homepage, it wouldn't load anything besides the default
Apache "It works!" page. I tried disabling the 000-default.conf virtual host file for
apache2 and enabling my own, then restarting the apache2 service, without success. I
was deep in the Stack Overflow forums when, after a few reboots and some trial and
error, my website finally displayed.

With my WordPress site working, I made some simple edits to the theme and added
content, including a few sub-pages. I created a HOME page, where I linked three drone
footage clips, and added three more pages (Salisbury, Rockport, and Lancaster) to host
the individual embedded videos. These page URLs are hyperlinked to the sample
photos on my homepage.

With my WordPress server configured, it was time to create an EC2 AMI and deploy a
few more machines. I created the AMI and then used the following command to deploy
the two machines from the CLI.
aws ec2 run-instances \
    --image-id ami-057c4ca132a290efa \
    --count 2 \
    --instance-type t2.micro \
    --key-name project-key2 \
    --security-group-ids sg-01d5e3b6e6dd2914f

I logged into each of these machines using SSH and confirmed my content and
configurations were successfully copied over.

One convenient feature I noticed is that AWS EC2s come with a network interface
already attached. This allows me to skip the step of configuring a VNIC for each
machine to connect to my network. Each machine was assigned a subnet address on
my network (172.31.26.92, 172.31.19.139, and 172.31.18.42).

Elastic IP Address
In this step, I created an Elastic (public) IP Address for each of my EC2 instances to
maintain static IPs and allow users to access my web servers using these addresses.
After creation, I associated each address with one of my EC2 instances. The addresses
produced were 3.215.243.191 for VM1, 34.199.190.28 for VM2, and 52.4.130.79 for
VM3. This was probably the easiest part of my project.
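
For reference, each allocation/association pair from the CLI looks like this (the instance
and allocation IDs are placeholders):

# Allocate a new Elastic IP, then associate it with an instance.
aws ec2 allocate-address --domain vpc
aws ec2 associate-address \
    --instance-id i-0123456789abcdef0 \
    --allocation-id eipalloc-0123456789abcdef0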

Route 53
To increase accessibility for my site, I was able to use the domain name I purchased
earlier in the semester (philsdomain.site). I used Route 53 to set up a hosted zone and
added an "A" record for philsdomain.site. After this, I logged into my GoDaddy DNS
manager and entered the nameservers produced by Route 53. It took a bit of time for
the DNS routing to take effect, but I was finally able to navigate to my webserver using
my new domain.
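
A minimal CLI sketch of the hosted zone and record setup (the hosted zone ID is a
placeholder; I created mine through the console):

# Create the hosted zone (the caller reference must be unique per request).
aws route53 create-hosted-zone \
    --name philsdomain.site --caller-reference "$(date +%s)"

# Add an A record pointing the domain at a web server's Elastic IP.
aws route53 change-resource-record-sets \
    --hosted-zone-id Z0123456789EXAMPLE \
    --change-batch '{
      "Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
          "Name": "philsdomain.site",
          "Type": "A",
          "TTL": 300,
          "ResourceRecords": [{"Value": "3.215.243.191"}]
        }
      }]
    }'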


Load Balancer
An Application Load Balancer was deployed to manage traffic between my IP addresses
across availability zones. First, I created a target group containing the IP addresses of
my EC2 instances. Then I configured the load balancer to distribute traffic across
availability zones us-east-1a, us-east-1b, us-east-1c, and us-east-1d.

A health check was added to determine which VMs can receive inbound traffic. The
health check was configured to use port 80 (HTTP) with the path /health.html. The load
balancer helps maintain optimal traffic routing and provides an enhanced level of
redundancy for my infrastructure. I am also able to view my load balancer metrics and
see the number of requests being processed through it.
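
A rough CLI sketch of this setup using elbv2 (the ARNs, subnet IDs, and VPC ID are
placeholders; IP target groups take the instances' private in-VPC addresses):

# Target group of IP targets with an HTTP health check on /health.html.
aws elbv2 create-target-group \
    --name web-targets --protocol HTTP --port 80 \
    --vpc-id vpc-0123456789abcdef0 --target-type ip \
    --health-check-protocol HTTP --health-check-path /health.html

# Register the three web servers by private address.
aws elbv2 register-targets \
    --target-group-arn <target-group-arn> \
    --targets Id=172.31.26.92 Id=172.31.19.139 Id=172.31.18.42

# Internet-facing ALB spanning the chosen availability zones' subnets.
aws elbv2 create-load-balancer \
    --name web-alb --type application --scheme internet-facing \
    --subnets subnet-aaaa subnet-bbbb subnet-cccc subnet-dddd \
    --security-groups sg-01d5e3b6e6dd2914f

# Forward HTTP traffic on port 80 to the target group.
aws elbv2 create-listener \
    --load-balancer-arn <load-balancer-arn> \
    --protocol HTTP --port 80 \
    --default-actions Type=forward,TargetGroupArn=<target-group-arn>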

S3 Storage Bucket
Next, I created an S3 Storage Bucket to host some of my files and web content from the
cloud. I uploaded video files and pictures to the bucket, which I will use to embed on my
WordPress site. I changed the ACL permissions to allow public access to the files,
which was required to make them publicly viewable on my site.
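
The CLI equivalent is short (the bucket name is a placeholder, and the public-read
ACLs assume the bucket's public access settings permit them):

# Create the bucket and upload content with a public-read ACL.
aws s3 mb s3://phils-project-content
aws s3 cp Rockport_Screenshot.png s3://phils-project-content/ --acl public-read
aws s3 cp rockport.mp4 s3://phils-project-content/ --acl public-read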


CloudFront
While exploring the AWS services, I found CloudFront, a web service that speeds up
the distribution of web content by delivering it through edge locations. I simply had to
create my distribution, connect it to my S3 bucket, and copy the link to use for my
content embeds. To use CloudFront, I just needed to paste the new link for my bucket
and add my file name to the end of the link (e.g.,
https://dy3fj9ll1nen4.cloudfront.net/Rockport_Screenshot.png). I did this for each of my
picture and video files before embedding them into my WordPress site.
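
Creating the distribution can also be done in a single CLI call; a minimal sketch with a
placeholder bucket domain:

# Create a distribution with the S3 bucket as its origin; the response
# includes the new dxxxxxxxxxxxxx.cloudfront.net domain name.
aws cloudfront create-distribution \
    --origin-domain-name phils-project-content.s3.amazonaws.com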

Elastic File System


To implement a central repository for my files on the Ubuntu servers, I created an
Elastic File System (EFS). I installed the NFS utility on my machines, created an efs
directory, and mounted the EFS to my new directory. After this, I relocated my
wordpress directory and files to the EFS and updated the vhost files to point to this
directory. Effectively, the WordPress files are now hosted on my EFS and can be
accessed and managed using any one of my three virtual machines, which greatly
improves management efficiency and redundancy for my servers.
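
On each server, the mount steps looked roughly like this (the fs-... DNS name is a
placeholder):

# Install the NFS client, create the mount point, and mount the file system.
sudo apt install -y nfs-common
mkdir -p ~/efs
sudo mount -t nfs4 -o nfsvers=4.1 \
    fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ ~/efs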

I logged into each EC2 and navigated to the efs folder to ensure my files were syncing
properly.

CloudWatch
I enabled some account alerts to monitor my services and stay informed about the
performance and cost of my resources. I started by using Simple Notification Service
(SNS) to create topics and subscriptions for the alert types, with the notification
preference set to email me at my student address. Then I used CloudWatch to
configure performance alerts for my EC2 instances. The alert was configured to notify
me if CPU usage surpassed 75% (which will surely not happen with the current use
case). However, I temporarily changed the threshold to trigger the alert at 0.9% just to
see if the alert worked. I received the email notification as demonstrated in the next
section.
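
A minimal sketch of this alerting pipeline (the topic ARN, email address, and instance
ID are placeholders):

# SNS topic and email subscription for the alerts.
aws sns create-topic --name project-alerts
aws sns subscribe \
    --topic-arn <topic-arn> \
    --protocol email --notification-endpoint student@example.edu

# Alarm when an instance's average CPU exceeds 75% over five minutes.
aws cloudwatch put-metric-alarm \
    --alarm-name ec2-high-cpu \
    --namespace AWS/EC2 --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --statistic Average --period 300 --evaluation-periods 1 \
    --threshold 75 --comparison-operator GreaterThanThreshold \
    --alarm-actions <topic-arn>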


I also created an S3 storage usage alert to notify me if the bucket surpasses a certain
usage limit. This alert is likewise triggered in the next section.

Billing Alarms
Lastly, from the Billing Console, I created a budget of $200 and added incremental
alerts at 50%, 75%, and 90%. I do not anticipate reaching any of these thresholds for
this project, so I added an alert for 3.4% ($6.80) to try to trigger this alarm.

I additionally created an EC2 budget of $10 (I’m cheap, I know) and set alerts at the
50% and 90% thresholds in hopes of triggering the alarm.
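
Budgets can likewise be scripted; a sketch assuming the definitions live in small JSON
files (the account ID is a placeholder):

# budget.json (sketch): {"BudgetName": "project-budget",
#   "BudgetLimit": {"Amount": "200", "Unit": "USD"},
#   "TimeUnit": "MONTHLY", "BudgetType": "COST"}
# notifications.json holds the 50/75/90% thresholds and email subscribers.
aws budgets create-budget \
    --account-id 111122223333 \
    --budget file://budget.json \
    --notifications-with-subscribers file://notifications.json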


Testing and Demonstration


Although I had been testing throughout the entire creation process, I performed a final
functionality test and demonstration after all my resources were deployed and
configured. It was my goal to demonstrate that I could navigate to my domain and be
properly routed to my web servers. I also wanted to demonstrate that my content was
properly accessible from the website and could be accessed using the CloudFront links.

I used a web browser to navigate to my domain (http://philsdomain.site). The DNS
properly routed to one of my servers' Elastic IP addresses, and I was greeted with my
WordPress homepage.


I clicked each of the photo links, which brought me to their respective pages. I played
the videos on each page, which were successfully served (quickly and efficiently)
through CloudFront.

Next, I tested the CloudWatch and Billing alerts I configured. First, I triggered the EC2
CPU alarm, which I had set to notify at 0.9% usage. I received the following email
notification:


The S3 storage alarm was triggered by setting the threshold to 0.4 gigabytes, which
successfully dispatched the following email alert to me:

Lastly, I tested the functionality of my billing alerts by setting a low threshold and tacking
on a few extra resources to boost the cost. After a few hours, the alarms sent me the
emails below.

After triggering these alarms, I removed the low-threshold alerts to avoid inundating my
inbox.


Lessons Learned
This project was a fairly straightforward implementation, and the AWS resources proved
to be quite cohesive. That being said, with no previous AWS experience, I had some
difficulty during some of the deployments and want to share what I learned to maybe
provide a bit of insight for others. While a cloud infrastructure can be quick and easy to
set up, it can be just as easily misconfigured. One must exercise caution when making
configuration changes (as in any environment) to avoid catastrophic modifications. I
also want to emphasize the importance of having a "game plan" before beginning the
deployments. Setting up resources out of order can cost valuable time and force certain
integrations to be redeployed. As much as I tried to plan ahead, I found myself wishing I
had handled certain parts in a different order (partly because I was not previously aware
of how AWS resources interact).

One issue I encountered was when I attempted to set up my own VPC (on top of my
default one). I connected my EC2s and some other resources to the network, but for
some reason could not establish a successful SSH session to any of the EC2s from my
local client. I am not very network-oriented, so I gave up on this separate VPC endeavor
and decided to use the default VPC, which proved to be much more rewarding.

Another piece of information I wish I had known prior to this project is that DNS routing
changes can take hours to go into effect. When I modified the nameservers on my
GoDaddy management portal, I could not validate that the routing was properly
occurring until late the next day. If I had known this, I would not have spent so much
time refreshing my browser session hoping to see my website.

Lastly, as I mentioned earlier, I had difficulty getting my webserver to display anything
besides the default Apache page. I believe the solution was to disable the default vhost
file in /etc/apache2/sites-available and enable one of my own. I created one called
wordpress.conf, configured it as displayed below, and changed the DocumentRoot to
the proper path of my wordpress directory. After making this modification, and a few
reboots, I was able to get my WordPress homepage to display when navigating to my
IP.
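
The original screenshot is not reproduced here; this is a representative reconstruction
of the vhost swap (the DocumentRoot path is illustrative):

# Write the wordpress.conf vhost, swap out the default site, and restart Apache.
sudo tee /etc/apache2/sites-available/wordpress.conf <<'EOF' > /dev/null
<VirtualHost *:80>
    DocumentRoot /var/www/html/wordpress
    <Directory /var/www/html/wordpress>
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
EOF
sudo a2dissite 000-default.conf
sudo a2ensite wordpress.conf
sudo systemctl restart apache2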


Conclusion
This AWS project has been a valuable learning experience for me in the cloud
computing environment. After learning the basics and deploying a simple infrastructure
in the Azure project, I did not anticipate that this project would be much different.
However, AWS proved different enough to teach me more about the basic resources I
believed myself to be already familiar with.

I found the AWS console quite easy to interact with. The user interface is simple to
learn, and the best part is that whenever I had to research an issue, the first result was
usually a guide on docs.aws.amazon.com that helped me work through the problem.
Another feature that impressed me was the built-in cost estimation for each resource.
This allows me to make informed decisions about the types and sizes of resources I
want to utilize, as cost can be a factor. I thought my project would approach at least
50% of my $200 budget, but to this point I have not even reached $10, which is
impressive and a testament to the cost-effectiveness of cloud computing. This has
motivated me to explore some other AWS deployments, given that they will not cost me
much and I still have remaining credits.

I noticed many differences between Azure and AWS, one being the lack of resource
groups. I was able to use tags for all my resources, but this was not as helpful to me as
Azure's resource groups. I hope AWS plans to implement something similar in the
future, in addition to a comprehensive list of active resources outside of the Billing
Dashboard. Otherwise, I had the impression that AWS resources were easier to deploy
and configure, shaving time off the deployment process.

In my observation, the AWS services interact with each other seamlessly, making for a
cohesive network infrastructure and a happy system administrator. The CLI was also a
very useful tool to make resource deployment and management just a click away from
my personal desktop. This brought my AWS account right “next door” to my EC2 SSH
sessions in Terminal.


Although I hit a few time-eating obstacles, I feel that I achieved the desired outcome for
my cloud infrastructure and am satisfied with the way my project turned out. The
textbook labs were a useful guide throughout the semester in working through each
resource, which was beneficial to me as a new cloud user. AWS has revealed to me
more of the endless capabilities that cloud services can provide, and I plan to return to
this service for my future computing needs.


References
Clinton, David. Learn Amazon Web Services in a Month of Lunches. Manning
Publications, August 2017.
https://learning.oreilly.com/library/view/learn-amazon-web/9781617294440/
