Implementation UCD AU19B1014

The document describes implementing a user-centered design project by integrating analysis, design, architecture, optimization, experiments and simulation to build an application subject to given constraints. It lists the table of contents for a project on creating a secured AWS account and infrastructure for an animal care organization. The objectives include platform consolidation using the AWS Organizations service, creating an elastic architecture with CloudFormation, fast delivery of static and dynamic websites with S3 and CloudFront, detailed monitoring with CloudWatch and the CloudWatch Agent, and setting up a central artifact repository with Nexus.

Implementation

CS7201
Integrate analysis, design, architecture, optimization,
experiments and simulation to build an application
subject to given constraints for a user-centered design
project.

Under the Guidance of

Sourabh Sharma

Sheikh M. Tadeeb AU19B1014


TABLE OF CONTENTS

Company IP-Planning ....................................... 01-03

Creating Secured Account................................... 04

Objective-1 (Platform Consolidation) ............... 05-06

Objective-2 (Elastic Architecture)....................... 07-08

Objective-3 (Fast Delivery of Website)............... 09-13

Objective-4 (Detailed Monitoring)....................... 14-16

Central Artifact Repository-Setup....................... 17-21

Central Storage Solution........................................ 22-24


IP-Ranges to Avoid
These are the pre-existing IP ranges the company is already using on its different LANs.

01
A few considerations for designing the VPC:
1. It is always good practice not to reuse the existing IP ranges of our
organization's local area networks.

2. Plan for future changes as well.

3. Reserve 2+ network ranges per region, per account.

E.g., 3 regions in the US, 1 in Europe, 1 in Australia

(5 regions) × 2 ranges each = 10 ranges per account. Assume 4 accounts,
as we did for our project, i.e., the animal_care organisation.

Total: 40 ranges (ideally)

4. Estimate beforehand the number of regions where the organization will operate, and also the
number of accounts in the organization.

5. An AWS VPC CIDR block can be at minimum /28 (16 IPs) and at maximum /16 (65,536 IPs).

6. My personal preference is the 10.x.y.z range.

(I avoid the common ranges 10.0.x.x through 10.15.x.x to prevent future clashes, as it is human
tendency to pick a range between those limits.)
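The range-reservation arithmetic above (5 regions × 2 ranges × 4 accounts = 40 ranges, starting past the commonly picked 10.0–10.15 space) can be sketched with the standard library; the starting octet and counts are the assumptions from this plan:

```python
import ipaddress

def plan_vpc_ranges(accounts=4, regions=5, ranges_per_region=2):
    """Allocate non-overlapping /16 ranges starting at 10.16.0.0,
    deliberately skipping 10.0.x.x-10.15.x.x (the ranges people
    habitually pick), one /16 per value of the second octet."""
    needed = accounts * regions * ranges_per_region
    return [ipaddress.ip_network(f"10.{16 + i}.0.0/16") for i in range(needed)]

ranges = plan_vpc_ranges()
print(len(ranges))    # 40
print(ranges[0])      # 10.16.0.0/16
print(ranges[-1])     # 10.55.0.0/16
```

Using `ipaddress.ip_network` also validates each range (it raises if host bits are set), which catches typos before they reach a CloudFormation template.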

Deciding how many subnets I need:

1. In AWS, services are not launched and run directly in the VPC; we launch them in
subnets.

2. A VPC is attached/bound to a region, whereas a subnet is bound to an availability zone.
The number of AZs in a region decides the maximum number of subnets, but as a safe and
ideal practice I always choose three networks inside a VPC plus one future
network, and I assume three tiers plus one future tier.

3 subnets + 1 future subnet = 4 networks

3 tiers + 1 spare tier = 4 networks per AZ

As these tiers exist in all 4 AZs, 4 × 4 = 16 networks.

Hence, we require 16 subnets for our organisation.
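The 4-tier × 4-AZ layout above maps neatly onto splitting one /16 VPC range into 16 equal /20 subnets. A sketch, assuming the 10.16.0.0/16 range and the tier/AZ names used in this project:

```python
import ipaddress

vpc = ipaddress.ip_network("10.16.0.0/16")   # one range from the IP plan
subnets = list(vpc.subnets(new_prefix=20))   # a /16 splits into 16 x /20

tiers = ["web", "app", "db", "reserved"]     # 3 tiers + 1 spare
azs = ["a", "b", "c", "d"]                   # 3 AZs in use + 1 future

# Assign subnets tier by tier, AZ by AZ: web-a, web-b, ... reserved-d.
layout = {f"{tier}-{az}": str(net)
          for net, (tier, az) in zip(subnets, [(t, z) for t in tiers for z in azs])}

print(len(subnets))           # 16
print(layout["web-a"])        # 10.16.0.0/20
print(layout["reserved-d"])   # 10.16.240.0/20
```

Each /20 holds 4,096 addresses, which leaves generous headroom per tier per AZ.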

02
Snippet from the IP-Structure Excel sheet
I have submitted this IP structure in the zip file, under the IP Planning folder.

Visually, this is how our CIDR layout will look (utilizing the above IP plan).

03
Adding multi-factor authentication to all our accounts adds an extra layer of security.
For MFA I used Authy, a multi-platform tool.

04
Service Used
AWS Organizations

Step 1) Creating the AWS Organization

Step 2) Creating organizational units and adding accounts to them.

Step 3) We can see that the bills for the different accounts are consolidated.

05
This is how our final AWS Organization looks, and how I have added the different
features to achieve Objective 1.

[Diagram: trust policy and permissions policy enabling role switching between the accounts]

06
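Role switching between accounts hinges on a trust policy attached to the role in the target account. A minimal sketch of such a policy, assuming a hypothetical management-account ID (111111111111) and requiring the MFA set up earlier:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "sts:AssumeRole",
      "Condition": { "Bool": { "aws:MultiFactorAuthPresent": "true" } }
    }
  ]
}
```

The permissions policy on the other side then governs what the assumed role may do once switched in.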
Service Used: CloudFormation
Language Used: YAML
Editor Used: Brackets

Designer View of IaC

These are the visuals of the elastic architecture I implemented on AWS using VPC, ELB,
ASG, EFS and RDS.

Visuals of the Elastic Architecture


07
Software used for making the architecture diagrams: Cloudcraft

NOTE: The following Infrastructure-as-Code templates are submitted in the zip file under the
Elastic Architecture folder.

Code Snippet - (Instance Role)


Code Snippet - (Base VPC)

Code Snippet - (EFS)
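For reference, a minimal CloudFormation sketch of the EFS piece; the `SubnetAppA` and `EFSSecurityGroup` logical IDs are assumptions (the actual templates are in the zip file):

```yaml
Resources:
  FileSystem:
    Type: AWS::EFS::FileSystem
    Properties:
      Encrypted: true
  MountTargetA:
    Type: AWS::EFS::MountTarget
    Properties:
      FileSystemId: !Ref FileSystem
      SubnetId: !Ref SubnetAppA        # assumed logical ID of the app-a subnet
      SecurityGroups:
        - !Ref EFSSecurityGroup        # assumed SG allowing NFS (TCP 2049)
```

One mount target per AZ lets every instance in the ASG mount the same shared filesystem.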


08
MAKING A STATIC WEBSITE
Step 1) Uploading all the objects to the S3 bucket.

Step 2) Enabling static website hosting on the S3 bucket and attaching a public-read policy to it.
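The public-read bucket policy attached in Step 2 would look roughly like this (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-static-site-bucket/*"
    }
  ]
}
```

It grants anonymous read on objects only; it deliberately does not grant `s3:ListBucket`, so the bucket contents cannot be enumerated.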

09
CLOUDFRONT
Step 1) Using the principle of least privilege, I will assign the proper permissions policy to the
developer who will be utilizing S3 and CloudFront.

Permissions for the Developer

When we access the website using the S3 URL, we can see that our static
website is not secure, i.e., it is not using the HTTPS protocol.

10
Now there exist two major problems with our website:

1. As it is created in the N. Virginia region, we will only get good performance near
N. Virginia; elsewhere there will be global performance issues.

2. Our website is running on HTTP (i.e., it is not secured; data is not
encrypted in transit), and even if we put https in the URL, the page won't load, as
S3 static website hosting is not capable of serving content over HTTPS.

Step 2) So, to solve the first problem, I'll make use of CloudFront, starting by
creating a CloudFront distribution.

11
Step 3) We needed to increase the CloudFront limit; for that, I logged a ticket to raise my
CloudFront service limit.

Step 4) Once the limit was increased by AWS, I created the CloudFront distribution and
accessed the website again. I can see that both websites are now secured using HTTPS.
12
MAKING A DYNAMIC WEBSITE

I will create a pre-signed URL for each image to make my website lightweight,
because currently the whole website is stored on the EC2 instance's EBS volume,
which is a SPOF, i.e., a single point of failure. To avoid this, we must store our
media files on secondary storage such as S3 and use its pre-signed URL feature.

We can see below that the dynamic website can be accessed using the EC2 machine's IP address, but
the images, video and audio are fetched onto the website using S3 pre-signed URLs.

13
CLOUDWATCH
Below is the resource, i.e., the running EC2 instance. To monitor its hardware resources we have a pre-built
service, CloudWatch, which provides different metrics such as CPUUtilization, disk read/write,
network in/out, etc. However, it does not give any application metrics or a MemoryUtilization metric.

All the respective metrics

Now, to log the custom metrics, I installed the CloudWatch Agent on the EC2 instance so that
we can ship application logs as well as generate custom metrics.

CLOUDWATCH AGENT (FOR CUSTOM METRICS)
Practical where I download, install and configure the CloudWatch Agent on an
EC2 instance:

First, using my CloudFormation template, I created a VPC with 4 subnets per AZ (web, app,
db, reserved); the region has 4 AZs, but we created subnets in 3 of them. Alongside these
I created a route table, an IGW, a security group, and 1 EC2 instance (in the web-a subnet)
with Apache, a WordPress site and MariaDB already installed on it (monolithic style).

14
Step 1) We download the CloudWatch Agent on our EC2 instance.

wget https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm

Step 2) We install the CloudWatch Agent on our EC2 instance.

sudo rpm -U ./amazon-cloudwatch-agent.rpm

Step 3) To allow the CloudWatch Agent to interact with the CloudWatch service, we
must attach a proper role to the EC2 instance on which we downloaded and installed
the agent.

# IAM ROLE
# EC2 Role
# Name: CloudWatchRole

# These are the two AWS managed policies we'll attach to this role:

1. CloudWatchAgentServerPolicy
2. AmazonSSMFullAccess
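Once the role is attached, the agent is driven by a JSON configuration file. A minimal sketch that collects the missing memory/disk metrics and ships an Apache log; the log group name and file path are assumptions for this WordPress setup:

```json
{
  "metrics": {
    "metrics_collected": {
      "mem":  { "measurement": ["mem_used_percent"] },
      "disk": { "measurement": ["used_percent"], "resources": ["/"] }
    }
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/httpd/access_log",
            "log_group_name": "wordpress-apache-access",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
```

The agent can also be started against this file with `amazon-cloudwatch-agent-ctl -a fetch-config`.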

IaC Code Snippet 15


Step 4) After configuring the CloudWatch Agent, we can see the application
logs successfully arriving in CloudWatch.

Below is the WordPress site configured on our machine using IaC.

16
• Installing Nexus on a cloud server on DigitalOcean

• Creating a Nexus user

17
• Configuring Cloud Server

18
• Starting the Nexus Software
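To have Nexus restart with the server, one common approach is a systemd unit. A sketch, assuming Nexus is unpacked at /opt/nexus and runs as the `nexus` user created earlier (paths may differ on your install):

```ini
# /etc/systemd/system/nexus.service
[Unit]
Description=Sonatype Nexus Repository
After=network.target

[Service]
Type=forking
LimitNOFILE=65536
User=nexus
ExecStart=/opt/nexus/bin/nexus start
ExecStop=/opt/nexus/bin/nexus stop
Restart=on-abort

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now nexus`.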

19
• Checking which port Nexus started on

• Configuring the cloud server's firewall

20
• Creating Users over Nexus

21
• Creating an S3 Bucket

As it is production data, we need to restrict access from the outside
world. Hence I will block all public access.
22
• Encryption of S3 Data

SSE-S3 is where the keys are provided and managed by S3, so we have no
admin overhead, but also no control over the keys.

What S3 does is generate an individual key for each object and encrypt that
object's data with this plaintext key. After encrypting the data, it encrypts
the plaintext key with its master key and discards the plaintext key.

It then stores the ciphertext key alongside the encrypted data in S3 storage.

When S3 needs to decrypt the data, it first decrypts the ciphertext key with its
master key to obtain the plaintext key, and with that plaintext key it decrypts
the ciphertext data back to plaintext. Finally, it discards the key again.

But SSE-S3 does have three disadvantages:

1. We can't perform role separation (i.e., decide who gets encryption rights,
who gets decryption rights, who can manage keys, etc.), as the keys are neither
owned and managed by us nor under our control (they are AWS-owned).
2. We can't rotate the keys.
3. We don't have control over the keys.

SSE-S3 is the default object encryption in S3, and it uses the AES-256 algorithm
for encryption and decryption.
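Putting the public-access block and SSE-S3 default encryption together, a CloudFormation sketch of such a production bucket (the logical ID is illustrative):

```yaml
Resources:
  ProductionDataBucket:
    Type: AWS::S3::Bucket
    Properties:
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256   # SSE-S3: keys managed by S3
```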
23
• SSE-S3

• Adding Lifecycle Rules for Automation
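Such lifecycle automation can be expressed in the same bucket's `Properties` (this fragment extends the bucket resource; the day thresholds and rule name are illustrative):

```yaml
      LifecycleConfiguration:
        Rules:
          - Id: ArchiveProductionData
            Status: Enabled
            Transitions:
              - TransitionInDays: 30     # rarely accessed after a month
                StorageClass: STANDARD_IA
              - TransitionInDays: 90     # archive after a quarter
                StorageClass: GLACIER
```

Transitioning cold objects to STANDARD_IA and then Glacier cuts storage cost without any application changes.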

24
