Kubernetes Deployment Strategy
In this article, we will look at three main Kubernetes deployment strategies and
how to implement each of them.
1. Rolling Update
2. Canary Deployment
3. Blue/Green Deployment
Rolling Update
We will use minikube for this demo. Once minikube is up and running, we will
create a deployment with an nginx image and observe the rolling update.
minikube start
4. Let us increase the replica count to 4, since we need 4 replicas for the
rolling-update example; run the command below to edit the deployment.
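The edit command itself is not shown above; a minimal sketch, assuming the deployment is named nginx, is to scale it directly:

```shell
# Scale the (assumed) nginx deployment to 4 replicas.
# Equivalently: run `kubectl edit deployment nginx` and set spec.replicas to 4.
kubectl scale deployment nginx --replicas=4
```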
5. Check the pods again to confirm that the deployment has scaled to 4 replicas.
This command will watch the pods, so we can observe the rolling update in this
tab.
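The watch command is not shown above; it is presumably the standard watch flag on kubectl get:

```shell
# Watch pod status continuously in a second terminal.
kubectl get pods -w
```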
The nginx:v2 image does not exist, so the pods will go into ImagePullBackOff as
soon as we apply the change.
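The update can be triggered in several ways; a sketch, assuming both the deployment and its container are named nginx:

```shell
# Update the container image to a (deliberately non-existent) tag
# to observe how the rolling update handles failure.
kubectl set image deployment/nginx nginx=nginx:v2
```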
8. You can see that it replaces only one pod of the old version at a time and
tries to create a new pod with the new version. But since we provided a
non-existent image, the new pod goes into the ImagePullBackOff state.
9. Now when you run the kubectl get pods command, you can see that only 3 of
the 4 replicas are running.
11. So if you are using the rolling-update deployment strategy, even if
something goes wrong, the old copies keep running, so users will not see any
downtime. If you provide an existing image, the pods are updated to the new
version one by one and the application is deployed successfully.
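When an update gets stuck like this, the deployment can also be rolled back explicitly; a sketch, again assuming the deployment is named nginx:

```shell
# Inspect the rollout history, then revert to the previous revision.
kubectl rollout history deployment/nginx
kubectl rollout undo deployment/nginx
```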
Canary Deployment
Once confidence in the new version grows, you can gradually roll it out to the
entire infrastructure. This is achieved with certain parameters in your load
balancer's spec section: when users hit your application's load balancer, 90%
of them reach version 1 and 10% reach version 2. Let us see a hands-on demo of
how this is achieved in practice.
The reason for using a load balancer here is to manage traffic distribution
across multiple versions and to roll back smoothly if the new version isn't stable.
3. For this example we will use the canary example for the default NGINX
Ingress Controller, which is available on GitHub. Visit the page below:
https://kubernetes.github.io/ingress-nginx/examples/canary/
Ingress-NGINX has the ability to handle canary routing by setting specific
annotations; the following is an example of how to configure a canary deployment
with weighted canary routing.
4. Copy the deployment and service example, and run it in your terminal.
echo "
---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: production
  labels:
    app: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: production
  template:
    metadata:
      labels:
        app: production
6. Now let us create the deployment and service for the canary (version 2).
Copy the code below.
echo "
---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: canary
  labels:
    app: canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: canary
  template:
    metadata:
      labels:
        app: canary
    spec:
      containers:
      - name: canary
        image: registry.k8s.io/ingress-nginx/e2e-test-echo@sha25
        ports:
echo "
---
# Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: production
  annotations:
spec:
  ingressClassName: nginx
  rules:
  - host: echo.prod.mydomain.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: production
            port:
              number: 80
" | kubectl apply -f -
8. Now let us create an ingress for the canary deployment (version 2), in which
you will notice an additional annotations field. These annotations tell the
controller that the canary traffic percentage is 10%.
echo "
---
# Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary
  annotations:
    nginx.ingress.kubernetes.io/canary: \"true\"
    nginx.ingress.kubernetes.io/canary-weight: \"10\"
spec:
  ingressClassName: nginx
  rules:
  - host: echo.prod.mydomain.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: canary
            port:
              number: 80
" | kubectl apply -f -
9. As per the above ingress, the traffic is now split: only 10% of users will
be directed to the new version, and the remaining 90% will be directed to the
old version.
10. We can check this using the curl command. First, let us get the minikube IP:
minikube ip
11. Copy the IP and keep it aside. To test our setup, run minikube ssh, and
once inside the cluster run the command below.
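The test command itself is not shown above; a sketch, assuming a minikube IP of 192.168.49.2 and the host from the ingress rules:

```shell
# Send 10 requests with the ingress host header; the echo backend's
# responses reveal whether production or canary handled each request.
for i in $(seq 1 10); do
  curl -s -H "Host: echo.prod.mydomain.com" http://192.168.49.2/
done
```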
13. You can see that the load balancer forwards roughly 90% of the requests to
the old version (version 1) and 10% to the new version (version 2).
14. Once the canary is tested and all its features are working fine, we can
gradually shift more traffic to it by increasing the canary weight annotation
in the ingress file.
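For example, to send half the traffic to the canary, the weight annotation on the canary ingress would be raised like this (annotation names as documented by ingress-nginx):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "50"
```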
15. Lastly, we will set the canary weight to 100 and see what happens.
Finally, all requests are sent to the canary. This is how the canary deployment
strategy is achieved.
Blue/Green Deployment
This method involves running two identical environments: a new version (green)
alongside the old version (blue). The blue/green strategy keeps only one version
live at any given time: traffic is routed to the blue deployment while the green
deployment is created and verified, and then switched over in one step.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      env: blue
  template:
    metadata:
      labels:
        app: myapp
        env: blue
    spec:
      containers:
      - name: myapp
        image: balav8/blue-env
        ports:
        - containerPort: 3000
This code creates a deployment with 3 replicas, the env label set to blue, and
the blue-env image.
3. The next step is to create the service; use the code below for the service file.
apiVersion: v1
kind: Service
metadata:
  name: blue
spec:
  selector:
    app: myapp
    env: blue
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
  type: NodePort
This code creates a service that selects the blue environment, with targetPort
set to 3000, the port on which our application runs. The type of service we are
using here is NodePort.
4. Apply the service file to expose the application, so that we can access our
app via the NodePort.
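Assuming the service manifest above was saved as blue-service.yaml (a hypothetical filename), applying and checking it looks like:

```shell
# Create the NodePort service and confirm its assigned node port.
kubectl apply -f blue-service.yaml
kubectl get service blue
```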
6. But before that, since we are using minikube in WSL, we need to use minikube
tunnel to expose the service; only then will we be able to access it from the
host machine.
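The tunnel command itself is elided above; a sketch, assuming the service created earlier:

```shell
# Expose the NodePort service and print a URL reachable from the host.
# (Alternatively, keep `minikube tunnel` running in a separate terminal.)
minikube service blue --url
```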
Here blue represents the service name. Keep this terminal running and copy the
URL provided by the tunnel for the blue service. Open that URL in a browser.
9. The next step is to create the green deployment. Use the code below for the
green deployment.
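The green manifest itself is elided here; based on the description that follows (0 replicas, env set to green, a green-env image), it presumably mirrors the blue deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: green
spec:
  replicas: 0          # defined but inactive until we scale it up
  selector:
    matchLabels:
      app: myapp
      env: green
  template:
    metadata:
      labels:
        app: myapp
        env: green
    spec:
      containers:
      - name: myapp
        image: balav8/green-env
        ports:
        - containerPort: 3000
```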
This code creates a deployment with 0 replicas, the env label set to green, and
the green-env image.
You can see that the green deployment's replica count is set to 0. This means
that the green deployment is defined and ready to be activated, but it is not
consuming resources until needed. We will scale the green environment up only
when we are ready to route traffic to it. This avoids the overhead of running
additional pods unnecessarily, optimizing resource utilization.
Scale the green deployment up, or just edit the deployment file and change the
replica count to 3.
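The scale command is elided above; a sketch, assuming the deployment is named green:

```shell
# Bring the green environment up alongside blue.
kubectl scale deployment green --replicas=3
```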
As you can see, both the blue and the green deployment pods are now running.
13. Now switch the traffic to the green deployment. To do this, just edit the
service you created earlier so that its selector points at env: green instead
of env: blue.
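One way to flip the selector without opening an editor, assuming the service is named blue:

```shell
# Repoint the service selector from the blue pods to the green pods.
kubectl patch service blue -p '{"spec":{"selector":{"app":"myapp","env":"green"}}}'
```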
14. Once you have applied the change, go to your browser and refresh the page
to see the new version deployed.
No single strategy fits all scenarios. The decision should come from thoroughly
analyzing the application requirements, organizational context and capabilities.
Conclusion
To sum up, there are different ways to deploy an application. Each
strategy - Rolling Update, Canary Deployment, and Blue/Green Deployment -
offers unique benefits and challenges. Rolling Update provides a smooth transition
with minimal downtime, Canary Deployment allows for controlled testing with real
users, and Blue/Green Deployment offers quick rollback capabilities. The key is to
choose the strategy that best aligns with your application's needs, your team's
expertise, and your organization's goals.