
Project Documentation

Table of contents

Introduction
Architecture options
- Multi-tenant with EC2 and ASG using independent/shared DBs
- Multi-tenant with EKS using independent/shared DB

High availability setup


Monitoring setup for the infra
Infra as code using CloudFormation
Scaling of environment
Security setup
CICD setup for automation

Introduction

In this demo architecture, I explain two architecture options covering EC2 and EKS deployments. After describing the deployment methods, I cover the IaC creation, the monitoring setup, and finally the full CICD setup for automating a new customer build. I will start with the architecture options.

Architecture options

Multi-tenancy is the better option for keeping infrastructure cost down, so I focus on the multi-tenant architecture options below.

Multi-tenant with EC2 and ASG using independent/shared DB

In this architecture we run EC2 instances in an Auto Scaling group behind a load balancer, and the PHP and nginx workload for all tenants is shared across that EC2 group. The application URL is mapped to the load balancer, and an autoscaling rule based on CPU utilization metrics is attached to the group. Whenever the application load is high, the scaling group adds the required instances, and it scales back in automatically when usage is low.
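As an illustration only (not part of the original template), a CPU-based target-tracking scaling policy for the web-tier Auto Scaling group could be declared in CloudFormation roughly as follows; the resource names are placeholders:

WebTierScalingPolicy:
  Type: 'AWS::AutoScaling::ScalingPolicy'
  Properties:
    # Hypothetical Auto Scaling group running the PHP/nginx web tier
    AutoScalingGroupName: !Ref WebTierAutoScalingGroup
    PolicyType: TargetTrackingScaling
    TargetTrackingConfiguration:
      PredefinedMetricSpecification:
        PredefinedMetricType: ASGAverageCPUUtilization
      # Keep the average CPU across the group at roughly 60%
      TargetValue: 60.0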

For the database tier we can use one of two methods:

1. Assign an independent DB to each tenant
2. Use a shared DB and provide user authentication (AWS Cognito can be used for user management)
Assign independent DB for each tenant: In this method each tenant gets its own database while the web tier is shared under an Auto Scaling group. The diagram below illustrates this setup.

Use shared DB and provide user authentication: Here a shared RDS database stores the application data, which saves on RDS cost, while the web server layer is again managed by EC2 instances in an Auto Scaling group. Tenant authentication for the DB can be achieved using the AWS Cognito service; more details on the authentication model are available in reference [1]:

[1] https://docs.aws.amazon.com/cognito/latest/developerguide/bp_user-pool-based-multi-tenancy.html
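As a rough, hypothetical sketch (not taken from the referenced guide), a shared Cognito user pool with one app client per tenant could be declared in CloudFormation like this; all names are placeholders:

SharedUserPool:
  Type: 'AWS::Cognito::UserPool'
  Properties:
    UserPoolName: shared-tenant-pool

TenantAUserPoolClient:
  Type: 'AWS::Cognito::UserPoolClient'
  Properties:
    # One app client per tenant keeps tenant A's tokens and settings separate
    ClientName: tenant-a-app
    UserPoolId: !Ref SharedUserPool
    GenerateSecret: false

The application layer can then map the tenant claim in the issued token to the appropriate database credentials or schema.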

This RDS sharing method reduces the application cost; however, there can be cross-tenant impact if the DB authorization is not set up correctly, and it can also create compliance challenges.

Implementation details, together with the CICD setup, are presented in the CICD section at the end of this document.

Multi-tenant with EKS using shared DB: In this method, assuming the application can be deployed as microservices, EKS is used for the application tier and RDS for database management. Here too the RDS database is shared between tenants. Tenants access the application URL, which points to the cloud load balancer, and the EKS control plane balances and scales the nodes according to the load. The diagram below illustrates this setup.

Further multi-tenant design considerations for Amazon EKS clusters are discussed in reference [2]:

[2] https://aws.amazon.com/blogs/containers/multi-tenant-design-considerations-for-amazon-eks-clusters/
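One common soft multi-tenancy pattern discussed in that context is a Kubernetes namespace per tenant with a resource quota; a minimal sketch (tenant-a is a placeholder) could look like this:

apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    # Caps the compute a single tenant can consume on the shared cluster
    requests.cpu: "2"
    requests.memory: 4Gi
    pods: "20"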
Monitoring setup:

Here I’m describing the monitoring setup for the EKS-based multi-tenant application architecture.

EKS integrates control plane logs with CloudWatch (the desired log types can be enabled on the cluster). For node monitoring we can also use CloudWatch by installing the agent on the nodes, which collects the container logs as well. For cluster-, node-, pod-, task-, and service-level monitoring we can use CloudWatch Container Insights; however, this service incurs additional cost.
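If Container Insights is chosen, one way to enable it (an assumption on my side, not part of the original text) is the CloudWatch Observability EKS add-on, which can be declared in the same CloudFormation stack:

CloudWatchObservabilityAddon:
  Type: 'AWS::EKS::Addon'
  Properties:
    # Installs the CloudWatch agent and Fluent Bit used by Container Insights
    AddonName: amazon-cloudwatch-observability
    ClusterName: !Ref ControlPlane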

Another option is to use open-source tools for monitoring, which can be implemented with Prometheus and Grafana. EKS exposes metrics in Prometheus format, and these can later be imported into a Grafana dashboard. A sample architecture diagram for the monitoring setup is below:

Prometheus can be deployed using Helm:

helm upgrade -i prometheus prometheus-community/prometheus --namespace prometheus \
  --set alertmanager.persistentVolume.storageClass="gp2",server.persistentVolume.storageClass="gp2"

After setting up port forwarding, the metrics will be available on the dashboard:

kubectl --namespace=prometheus port-forward deploy/prometheus-server 9090

Now we can use these metrics as a source for the Grafana dashboard. More details are available in reference [3]:

[3] https://docs.aws.amazon.com/eks/latest/userguide/prometheus.html

Grafana setup for the dashboard: Grafana can also be deployed using Helm:

helm upgrade --install grafana grafana/grafana -n grafana_namespace -f ./amp_query_override_values.yaml

After that we need to add the Prometheus data source to Grafana, and then the dashboards can be generated. More details are available in reference [4]:

[4] https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-onboard-query-grafana-7.3.html
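As an illustrative sketch (assuming the prometheus-server service created by the Helm release above), the Prometheus data source can also be provisioned declaratively with a Grafana provisioning file:

apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    # In-cluster DNS name of the prometheus-server service installed earlier
    url: http://prometheus-server.prometheus.svc.cluster.local
    isDefault: true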

We can also integrate Prometheus with CloudWatch alarms to send custom notifications based on the alerts. This is well documented in reference [5]:

[5] https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-CW-examples.html

Refer to the diagram for a sample view.

Infra as Code (IaC): Here I’m using CloudFormation as the infrastructure-as-code tool for the EKS cluster. The cluster is managed through a CloudFormation stack, and the same stack can be used as the destination for the CICD pipeline: when the pipeline publishes an application update, it triggers an update of the CloudFormation stack, which rolls out the new infrastructure setup.

As an example, below is a slice of the control plane infrastructure code for a cluster (partial code only):
….
ControlPlane:
  Type: 'AWS::EKS::Cluster'
  Properties:
    KubernetesNetworkConfig:
      IpFamily: ipv4
    Name: my-cluster
    ResourcesVpcConfig:
      EndpointPrivateAccess: false
      EndpointPublicAccess: true
      SecurityGroupIds:
        - !Ref ControlPlaneSecurityGroup
      SubnetIds:
        - !Ref SubnetPublicAPSOUTH1A
        - !Ref SubnetPublicAPSOUTH1C
    RoleArn: !GetAtt
      - ServiceRole
      - Arn
    Tags:
      - Key: Name
        Value: !Sub '${AWS::StackName}/ControlPlane'
    Version: '1.24'
ControlPlaneSecurityGroup:
  Type: 'AWS::EC2::SecurityGroup'
  Properties:
    GroupDescription: Communication between the control plane and worker nodegroups
    Tags:
      - Key: Name
        Value: !Sub '${AWS::StackName}/ControlPlaneSecurityGroup'
    VpcId: !Ref VPC
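To complete the picture, a managed worker node group could be added to the same stack; the sketch below is illustrative only and assumes a NodeInstanceRole IAM role defined elsewhere in the template:

ManagedNodeGroup:
  Type: 'AWS::EKS::Nodegroup'
  Properties:
    ClusterName: !Ref ControlPlane
    # Placeholder IAM role for the worker nodes
    NodeRole: !GetAtt NodeInstanceRole.Arn
    ScalingConfig:
      # Sized small here; the Cluster Autoscaler section below covers further scaling
      MinSize: 2
      DesiredSize: 2
      MaxSize: 6
    Subnets:
      - !Ref SubnetPublicAPSOUTH1A
      - !Ref SubnetPublicAPSOUTH1C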

The sample diagram shows an EKS deployment using CloudFormation. The same template can also be added to the CICD pipeline with the CloudFormation stack as the deployment destination, so that the stack is updated on each trigger.

Scalability:

Auto scaling is possible in both the EC2 and the EKS methods. An Auto Scaling group is the straightforward way to scale the server layer on EC2, so I am not describing that process again. EKS scaling can be done using different methods, depending on the application usage:
HPA:

A HorizontalPodAutoscaler (HPA for short) automatically updates a workload resource with the aim of automatically scaling the workload to match demand. Horizontal scaling means that the response to increased load is to deploy more Pods. This is different from vertical scaling, which for Kubernetes would mean assigning more resources (for example, memory or CPU) to the Pods that are already running for the workload. If the load decreases and the number of Pods is above the configured minimum, the HorizontalPodAutoscaler instructs the workload resource (the Deployment, StatefulSet, or other similar resource) to scale back down.

Sample horizontal pod autoscaler configuration (here shown as the autoscaling block of a typical Helm chart values file, with the HPA enabled):

autoscaling:
  enabled: true
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
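For reference, an equivalent standalone HorizontalPodAutoscaler manifest (autoscaling/v2) is sketched below; the target Deployment name php-app is only a placeholder:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-app
  minReplicas: 1
  maxReplicas: 100
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          # Add Pods when the average CPU across Pods exceeds 80%
          averageUtilization: 80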

The diagram below also illustrates this example:

More details can be found in the documentation below [6]:

[6] https://docs.aws.amazon.com/eks/latest/userguide/horizontal-pod-autoscaler.html

Cluster Autoscaler

The Cluster Autoscaler automatically adjusts the number of nodes in the cluster when pods fail or are rescheduled onto other nodes. It adjusts the size of the Kubernetes cluster so that all pods have a place to run and there are no unneeded nodes.

More details are available at the link below [7]:

[7] https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html
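As a hedged sketch, the relevant part of the Cluster Autoscaler Deployment (assuming a cluster named my-cluster and auto-discovery of tagged Auto Scaling groups) typically looks like this:

# excerpt from the cluster-autoscaler container spec
command:
  - ./cluster-autoscaler
  - --cloud-provider=aws
  - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/my-cluster
  - --balance-similar-node-groups
  - --skip-nodes-with-system-pods=false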

The diagram below is a representation of cluster autoscaling.

In our case we can set up cluster scaling based on application usage, and EKS will handle the scaling without affecting application availability. Also, whenever a new user is created, the corresponding pod deployment is created and scaled according to the new user accounts.

Security:
Network Security:

We can use a VPC with public and private subnets. The nodes and the database are created in private subnets; outbound traffic from the private subnets goes through a NAT gateway, which means nothing can connect to them from the internet, while the application can still reach the database internally.
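To make the database isolation concrete, a rough CloudFormation sketch of a security group that only accepts MySQL traffic from the application nodes is shown below; NodeSecurityGroup is a placeholder for the worker node security group:

DatabaseSecurityGroup:
  Type: 'AWS::EC2::SecurityGroup'
  Properties:
    GroupDescription: Allow MySQL access only from the application nodes
    VpcId: !Ref VPC
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 3306
        ToPort: 3306
        # Placeholder reference to the worker node security group
        SourceSecurityGroupId: !Ref NodeSecurityGroup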

Additionally, we can use AWS WAF (Web Application Firewall) for security. Below is a sample diagram:
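In addition to the diagram, a rough CloudFormation sketch of a WAF web ACL using an AWS managed rule group is shown below; the names are placeholders, and attaching it to the load balancer would be done with a separate AWS::WAFv2::WebACLAssociation resource:

WebACL:
  Type: 'AWS::WAFv2::WebACL'
  Properties:
    Name: app-web-acl
    Scope: REGIONAL
    DefaultAction:
      Allow: {}
    VisibilityConfig:
      SampledRequestsEnabled: true
      CloudWatchMetricsEnabled: true
      MetricName: app-web-acl
    Rules:
      - Name: common-rule-set
        Priority: 0
        OverrideAction:
          None: {}
        Statement:
          ManagedRuleGroupStatement:
            # AWS-managed baseline protections against common web exploits
            VendorName: AWS
            Name: AWSManagedRulesCommonRuleSet
        VisibilityConfig:
          SampledRequestsEnabled: true
          CloudWatchMetricsEnabled: true
          MetricName: common-rule-set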
Single tenant vs multi-tenant

The better approach is multi-tenant because it provides high availability through recovery and scaling, and deployment is very fast. The main advantage is the cost saving on infrastructure. In this setup we can use separate Pods for the applications, one RDS instance with auto scaling, and Cognito for user management and authorization. When a new customer is created, a new database is created inside RDS. Multi-tenancy in RDS can be achieved via different methods; more details are available in reference [8]:

[8] https://docs.aws.amazon.com/whitepapers/latest/multi-tenant-saas-storage-strategies/multitenancy-on-rds.html
CICD setup for multi-tenant EKS deployment

CICD is the main part of setting up automation for the deployment. For this
we can use the services below.

- AWS CodeCommit for the application repo
- AWS CodeBuild for building and deploying the app
- AWS CodePipeline for automating the triggers
- ECR/Docker Hub for storing the container images
- For multiple environments or customers, we can create automated dynamic Docker container images (explained below)
- The deployment to the EKS cluster can be defined in the buildspec.yaml file used by CodeBuild.
- Once the pipeline is triggered, CodeBuild pulls the buildspec.yaml file from the source and executes the commands defined within it.

The EKS deployment can be triggered in two ways: we can specify the deployment in buildspec.yaml by applying the deploy.yaml manifest file directly, or we can deploy to EKS as a container workload using Helm charts.

Here is a sample buildspec.yaml file I have created for this process:

---
version: 0.2
phases:
  install:
    commands:
      - curl -sS -o aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/aws-iam-authenticator
      - curl -sS -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.20.4/2021-04-12/bin/linux/amd64/kubectl
      - chmod +x ./kubectl ./aws-iam-authenticator
      - export PATH=$PWD/:$PATH
  pre_build:
    commands:
      - echo Logging in to Docker Hub...
      - docker login -u $user -p $pass
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker image build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
      - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $hub/$IMAGE_REPO_NAME:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push $hub/$IMAGE_REPO_NAME:$IMAGE_TAG
      - export AWS_ACCESS_KEY_ID=$
      - export AWS_SECRET_ACCESS_KEY=$
      - export AWS_EXPIRATION=$
      - aws eks update-kubeconfig --name $cluster
      - kubectl apply -f deploy.yaml --force
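If the Helm-based deployment mentioned above is preferred instead of applying deploy.yaml, the post_build phase could be swapped for something like the following sketch; it assumes the Helm CLI is also downloaded in the install phase and that the chart lives in ./chart in the repository:

  post_build:
    commands:
      - echo Build completed on `date`
      - docker push $hub/$IMAGE_REPO_NAME:$IMAGE_TAG
      - aws eks update-kubeconfig --name $cluster
      # Deploy or upgrade the release using the freshly built image tag
      - helm upgrade --install php-app ./chart --set image.repository=$hub/$IMAGE_REPO_NAME --set image.tag=$IMAGE_TAG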

This pipeline will work as below:



1. Commit the application code changes to an AWS CodeCommit repository (this includes the new-user-creation trigger as well).
2. An Amazon CloudWatch Events event is generated by the new commit.
3. The CloudWatch Events event initiates AWS CodePipeline.
4. CodePipeline runs the build phase (continuous integration).
5. CodeBuild creates the container image and pushes it to ECR or Docker Hub.
6. The CodeBuild deploy phase completes the deployment to EKS; this can also be done with a Helm deploy.

Here is a diagram showing the entire pipeline structure.
