Project Documentation
Table of contents
Introduction
Architecture options
- Multi-tenant with EC2 and ASG using independent/shared DBs
- Multi-tenant with EKS using independent/shared DBs
Introduction
Multi-tenant architecture is the better option for reducing infrastructure cost, so the sections below describe the multi-tenant architecture options.
Architecture options
In this architecture the PHP and NGINX tenant workload runs on EC2 instances in an Auto Scaling group behind a load balancer, with the tenants sharing the EC2 fleet. Here we can map the application URL to the load balancer DNS name and set up an autoscaling rule for the EC2 machines using CPU utilization metrics. In this method, whenever the application load is higher, the scaling group adds the required additional instances and scales them back in automatically when usage is low; a minimal example of such a scaling rule is sketched below.
For the database setup we can use two methods.
Use a shared DB and provide user authentication: here a shared RDS database stores the application data for all tenants, which saves the cost of separate RDS servers, while EC2 Auto Scaling still manages the web-server layer. Tenant authentication for the DB can be achieved using the AWS Cognito service; more details on user-pool-based multi-tenancy are available in the reference link [1].
[1] https://docs.aws.amazon.com/cognito/latest/developerguide/bp_user-pool-based-multi-tenancy.html
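As a hedged sketch of the Cognito side (one app client per tenant is only one of the patterns described in [1]; the pool and client names here are illustrative assumptions):

TenantUserPool:
  Type: 'AWS::Cognito::UserPool'
  Properties:
    UserPoolName: shared-app-users     # assumed pool name
TenantAppClient:
  Type: 'AWS::Cognito::UserPoolClient'
  Properties:
    ClientName: tenant-a               # assumed: one app client per tenant
    UserPoolId: !Ref TenantUserPool
    GenerateSecret: false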
This RDS sharing method helps reduce the application cost; however, there can be cross-tenant impact if the DB authorization setup is not correct, and it can also cause compliance challenges.
Further multi-tenant design considerations for Amazon EKS clusters are covered in the link below [2].
[2] https://aws.amazon.com/blogs/containers/multi-tenant-design-considerations-for-amazon-eks-clusters/
Monitoring setup:
Here I'm describing the monitoring setup for the EKS-based multi-tenant application architecture. EKS can send control-plane logs to CloudWatch once control-plane logging is enabled on the cluster. For node monitoring we can also use CloudWatch by installing the CloudWatch agent on the nodes; this will collect the container logs too. For cluster-, node-, pod-, task-, and service-level logging and metrics we can use CloudWatch Container Insights; however, this service costs extra money.
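As an example of turning on the control-plane logs mentioned above (the cluster name my-cluster is a placeholder), the AWS CLI call is roughly:

aws eks update-cluster-config \
  --name my-cluster \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'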
Another option is to use open-source tools for monitoring; this can be implemented with Prometheus and Grafana. EKS exposes metrics in Prometheus format, which can later be visualized in a Grafana dashboard. A sample architecture diagram for the monitoring setup is below:
These metrics can then be used as the data source for the Grafana dashboard. More details are available in reference [3].
[3] https://docs.aws.amazon.com/eks/latest/userguide/prometheus.html
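As a hedged sketch of installing Prometheus on the cluster with Helm (the release name and namespace are assumptions; prometheus-community is the widely used community chart):

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/prometheus \
  --namespace prometheus --create-namespace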
Grafana setup for dashboard: Grafana can also be deployed using Helm:
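For example (the release name, namespace, and LoadBalancer service type are assumptions; grafana/grafana is the standard community chart):

helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install grafana grafana/grafana \
  --namespace grafana --create-namespace \
  --set service.type=LoadBalancer      # assumed: expose the dashboard through an ELB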
After that we need to add the Prometheus data source to Grafana and can generate the dashboards; more details are available at [4].
[4] https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-onboard-query-grafana-7.3.html
We can also integrate Prometheus with CloudWatch alarms to send custom notifications based on the alerts; this is well documented in link [5].
[5] https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-CW-examples.html
Infra as Code (IaC): Here I'm using CloudFormation as the infrastructure-as-code tool for the EKS cluster. The cluster can be managed through CloudFormation, and the same stack can be used as the deployment destination for the CI/CD pipeline: when the pipeline publishes an update to the application, it triggers an update of the CloudFormation stack, which rolls out the new infra setup.
As an example, below is a slice of the control-plane infra code for a cluster (partial code only):
….
ControlPlane:
  # EKS control plane in the public subnets, guarded by its own security group
  Type: 'AWS::EKS::Cluster'
  Properties:
    KubernetesNetworkConfig:
      IpFamily: ipv4
    Name: my-cluster
    ResourcesVpcConfig:
      EndpointPrivateAccess: false
      EndpointPublicAccess: true
      SecurityGroupIds:
        - !Ref ControlPlaneSecurityGroup
      SubnetIds:
        - !Ref SubnetPublicAPSOUTH1A
        - !Ref SubnetPublicAPSOUTH1C
    RoleArn: !GetAtt
      - ServiceRole
      - Arn
    Tags:
      - Key: Name
        Value: !Sub '${AWS::StackName}/ControlPlane'
    Version: '1.24'
ControlPlaneSecurityGroup:
  Type: 'AWS::EC2::SecurityGroup'
  Properties:
    GroupDescription: Communication between the control plane and worker nodegroups
    Tags:
      - Key: Name
        Value: !Sub '${AWS::StackName}/ControlPlaneSecurityGroup'
    VpcId: !Ref VPC
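To tie this to the pipeline idea above, a hedged example of creating or updating the stack (the template file and stack name are placeholders):

aws cloudformation deploy \
  --template-file eks-cluster.yaml \
  --stack-name my-eks-stack \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM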
Sample diagram showing the EKS deployment using CloudFormation. The same template can also be added to the CI/CD pipeline with the CloudFormation stack as the destination, so the stack is updated on each trigger.
Scalability:
Auto scaling is possible in both the EC2 and EKS methods. An Auto Scaling group is the straightforward way to scale the server layer on EC2, so I'm not explaining that process here. EKS scaling can be done using different methods depending on the application usage:
HPA (Horizontal Pod Autoscaler): scales the number of pods in a deployment up or down based on observed metrics such as CPU utilization. [6]
[6] https://docs.aws.amazon.com/eks/latest/userguide/horizontal-pod-autoscaler.html
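A minimal HPA sketch (the php-app deployment name and the 60% CPU target are assumptions; the Kubernetes Metrics Server must be installed for this to work):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-app                  # assumed deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60   # assumed CPU target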
Cluster Autoscaler
The Cluster Autoscaler automatically adjusts the number of nodes in the cluster when pods fail to schedule or when nodes are underutilized and their pods can be rescheduled onto other nodes. It adjusts the size of the Kubernetes cluster so that all pods have a place to run and there are no unneeded nodes.
In our case we can set up cluster scaling based on application usage, and EKS will handle the scaling without affecting application availability. Also, whenever a new user is created, the corresponding pod deployment is created and scaled according to the new user accounts.
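As a hedged sketch of enabling the Cluster Autoscaler (my-cluster and the region are placeholders; the tags shown are the standard auto-discovery tags the autoscaler looks for on the node group's Auto Scaling group):

# Tags expected on the node group's Auto Scaling group for auto-discovery:
#   k8s.io/cluster-autoscaler/enabled    = true
#   k8s.io/cluster-autoscaler/my-cluster = owned
helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm install cluster-autoscaler autoscaler/cluster-autoscaler \
  --namespace kube-system \
  --set autoDiscovery.clusterName=my-cluster \
  --set awsRegion=ap-south-1           # assumed region, matching the ap-south-1 subnets above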
Security:
Network Security:
We can use a VPC with public and private subnets. The nodes and the database are created in private subnets, which route outbound traffic through a NAT gateway; this means nothing can connect to them directly from the internet, while the application can still reach the database internally. Additionally, we can use WAF (web application firewall) for security. Below is a sample diagram:
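As a hedged illustration of putting WAF in front of the load balancer (resource names are placeholders and the single managed rule group is only an example), a WAFv2 web ACL could be defined and attached like this:

WebACL:
  Type: 'AWS::WAFv2::WebACL'
  Properties:
    Name: app-web-acl
    Scope: REGIONAL
    DefaultAction:
      Allow: {}
    VisibilityConfig:
      SampledRequestsEnabled: true
      CloudWatchMetricsEnabled: true
      MetricName: app-web-acl
    Rules:
      - Name: AWSManagedCommonRules
        Priority: 0
        OverrideAction:
          None: {}
        Statement:
          ManagedRuleGroupStatement:
            VendorName: AWS
            Name: AWSManagedRulesCommonRuleSet
        VisibilityConfig:
          SampledRequestsEnabled: true
          CloudWatchMetricsEnabled: true
          MetricName: common-rules
WebACLAssociation:
  Type: 'AWS::WAFv2::WebACLAssociation'
  Properties:
    ResourceArn: !Ref LoadBalancer     # placeholder ALB resource
    WebACLArn: !GetAtt WebACL.Arn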
Single tenant vs multi-tenant
The better approach is multi-tenant because it provides high availability through recovery and scaling, and it is also very fast to deploy. The main advantage is the cost saving on infrastructure. In this setup we can use separate pods per application, one RDS instance with auto scaling, and Cognito for user management and authorization. When a new customer is created, a new database is created inside the shared RDS instance. Multi-tenancy in RDS can be achieved via different methods; more details are available in the reference link [8].
[8] https://docs.aws.amazon.com/whitepapers/latest/multi-tenant-saas-storage-strategies/multitenancy-on-rds.html
CICD setup for multi-tenant EKS deployment
CI/CD is the main part of setting up automation for the deployment. For this we can use the services below; the sample build specification builds the Docker image, pushes it to the registry, and applies the Kubernetes deployment to the EKS cluster.
---
version: 0.2
phases:
  install:
    commands:
      # Download the aws-iam-authenticator and kubectl binaries used to talk to EKS
      - curl -sS -o aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/aws-iam-authenticator
      - curl -sS -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.20.4/2021-04-12/bin/linux/amd64/kubectl
      - chmod +x ./kubectl ./aws-iam-authenticator
      - export PATH=$PWD/:$PATH
  pre_build:
    commands:
      - echo Logging in to Docker Hub...
      - docker login -u $user -p $pass
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker image build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
      - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $hub/$IMAGE_REPO_NAME:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push $hub/$IMAGE_REPO_NAME:$IMAGE_TAG
      # Temporary AWS credentials for the deploy step (values omitted)
      - export AWS_ACCESS_KEY_ID=$
      - export AWS_SECRET_ACCESS_KEY=$
      - export AWS_EXPIRATION=$
      # Point kubectl at the EKS cluster and roll out the new manifest
      - aws eks update-kubeconfig --name $cluster
      - kubectl apply -f deploy.yaml --force
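The buildspec applies a deploy.yaml manifest that is not included here; a minimal sketch of what it could contain (the name, image reference, and replica count are all assumptions) is:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: php-app
  template:
    metadata:
      labels:
        app: php-app
    spec:
      containers:
        - name: php-app
          image: myhub/myapp:latest   # placeholder for $hub/$IMAGE_REPO_NAME:$IMAGE_TAG
          ports:
            - containerPort: 80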