PROGRESSIVE DELIVERY WITH GITOPS
A Pocket Guide
With the push to deliver cutting-edge features faster than ever before, customer experience has become a top priority for software delivery teams. These teams are torn between maintaining an outstanding customer experience and building and releasing features faster than before. To balance both priorities, the most successful organizations, such as Google, Amazon, Netflix, and Uber, have adopted an innovative approach that allows them not only to release faster, but also to maintain very high customer experience ratings. This approach has been termed Progressive Delivery.
Organizations can massively reduce the risk of failed deployments by adopting progressive delivery approaches. We say 'approaches' because there are many ways to implement progressive delivery: adding flag management to gate new code, or rolling out applications progressively through techniques like blue-green and canary deployments. In essence, you decide which user group gets the release, and how widely, to limit the risk exposure.
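The idea of deciding which user group gets a release can be sketched as deterministic percentage bucketing, a common flag-management technique. This is an illustrative sketch only; the function and feature names are invented:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically place a user in one of 100 buckets and expose
    the feature to the first `percent` of them. The same user always
    lands in the same bucket, so exposure stays stable as the rollout
    percentage grows."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# A 10% rollout: roughly one user in ten sees the new feature.
exposed = sum(in_rollout(f"user-{i}", "new-checkout", 10) for i in range(10_000))
```

Because the bucket is derived from the user ID rather than a random draw, widening the rollout from 10% to 25% only adds new users; nobody who already saw the feature loses it.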
[Figure: Progressive delivery as staged rollouts, expanding from the Team, to Beta users, to Everyone.]
Efficient source code management allows you to push, test, and roll back feature
developments seamlessly.
Below are the top three commonly used feature release strategies:
A. Blue-Green Deployment
B. A/B Testing
C. Canary Deployment
4 Observability
Implementing progressive delivery is essentially about observing how changes in your application affect your customers' experience with the software. It leverages information, metrics, and logging capabilities to ensure that the new version of the software is safe to release. These metrics need to be reported and monitored in real time.
5 Service Mesh
A service mesh facilitates service-to-service communication and adds capabilities like observability, traffic management, and user segmentation. It plays a key role in facilitating communication between the various microservices that make up the application, and it gives the network administrator superpowers over what happens at the networking layer of cloud-native applications. As traffic routing is integral to progressive delivery, service meshes play a key role in enabling it.
As you can tell, progressive delivery is a collection of tactics and tools that together make up an entire strategy. You will not use all of them at any given time; which ones you choose depends on your end goals.
1. Your team writes code, commits it to Git, and initiates its release.
2. The released update is rolled out to a small, specific segment of your users.
3. You gather intel on the feature's performance, detecting errors and performance bottlenecks, if any.
4. You use the insights to plan and implement follow-up improvements.
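The four steps above can be sketched as a simple control flow. The callback names and thresholds here are invented for illustration:

```python
def progressive_release(deploy, measure, promote, rollback,
                        steps=(5, 25, 50, 100), max_error_rate=0.01):
    """Sketch of the loop above: release to a growing slice of users,
    gather metrics after each step, and widen or roll back accordingly."""
    for percent in steps:
        deploy(percent)                      # step 2: limited rollout
        metrics = measure()                  # step 3: gather intel
        if metrics["error_rate"] > max_error_rate:
            rollback()                       # step 4: act on the insights
            return "rolled back"
    promote()
    return "promoted"

# With healthy metrics, the release is widened step by step and promoted:
events = []
result = progressive_release(
    deploy=lambda p: events.append(("deploy", p)),
    measure=lambda: {"error_rate": 0.002},
    promote=lambda: events.append(("promote", 100)),
    rollback=lambda: events.append(("rollback", 0)),
)
```

In practice the deploy, measure, and rollback steps are handled by tooling (Flux, Flagger, Prometheus) rather than hand-written code; the sketch only shows the decision structure.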
Though progressive delivery is built on the principles of continuous delivery, there are some fundamental distinctions between the two in terms of testing.
Progressive delivery is an upgrade to the rules of continuous delivery, with greater emphasis on developer control and software reliability. It enforces delivery speed without compromising quality by going granular, i.e., down to the feature level.
Continuous delivery: the entire codebase is rolled out or rolled back together.
Progressive delivery: features can be rolled out or rolled back independently.
In addition to this, the deployment strategies bring their own complexities. For example, in blue-green mirroring, if a customer calls for support, you first have to verify which version group the user belongs to.
Additionally, many available progressive delivery solutions lock organizations into a particular product or platform, as they are often vendor-specific. This predicament pushes organizations to seek tools that are more open. One such approach is GitOps.
The GitOps practice is built on four foundational principles, which are critical for implementing a successful GitOps methodology.
1. Declarative
2. Versioned and Immutable
3. Pulled Automatically
4. Continuously Reconciled
A Git-based hosting service to store and manage source code and infrastructure code is a vital aspect of GitOps. A Git repository enables your team to write code, collaborate, manage code changes, document version history, and push changes to be implemented. Popular Git hosting services include GitHub, GitLab, and Bitbucket.
Once you have the code in Git, the next step is to package the code using containers. You create container images and store them in a container registry. The container images contain application executables and dependencies, and are used as templates to scale software deployment at speed. Therefore, they need to be stored in a secure registry. Docker Hub, GitHub Container Registry, and Amazon Elastic Container Registry (ECR) are popular container registries.
In this part of the pipeline, Flux plays a key role by automatically detecting the new container image. When this happens, Flux updates the configuration in Git, either directly or via a pull request. The merged configuration change is pulled automatically, and continuous reconciliation updates the Kubernetes cluster. Flux uses Kubernetes tools such as Helm and Kustomize to implement the automated workflow efficiently.
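Continuous reconciliation boils down to repeatedly comparing the desired state declared in Git with the cluster's actual state and applying only the drift. A minimal sketch, in which plain dicts stand in for Kubernetes manifests and the `apply` callback stands in for Flux applying a manifest:

```python
def reconcile(desired: dict, actual: dict, apply) -> dict:
    """One reconciliation cycle: find resources whose actual state
    differs from the desired state in Git, and apply the difference."""
    drift = {name: spec for name, spec in desired.items()
             if actual.get(name) != spec}
    for name, spec in drift.items():
        apply(name, spec)        # Flux would apply the manifest here
        actual[name] = spec
    return drift                 # what changed this cycle

desired = {"app": {"image": "app:2.0"}, "db": {"image": "db:1.1"}}
actual = {"app": {"image": "app:1.9"}, "db": {"image": "db:1.1"}}
changed = reconcile(desired, actual, apply=lambda name, spec: None)
```

After the first cycle only the drifted resource is touched; a second cycle finds nothing to do, which is exactly the steady state a GitOps operator sits in.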
Kustomize is another Kubernetes tool that automates configuration changes across clusters. Keeping a base layer of a resource (such as a Helm chart), Kustomize adds patch layers (such as staging or prod) to customize the base.
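The base-plus-patches idea can be illustrated with a greatly simplified merge, where patch values win and nested maps merge recursively. This is only a sketch of the concept, not Kustomize's actual strategic-merge behavior, and the field names are invented:

```python
def overlay(base: dict, patch: dict) -> dict:
    """Merge a patch layer over a base layer: patch values win,
    nested dicts merge recursively, everything else is inherited."""
    out = dict(base)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = overlay(out[key], value)
        else:
            out[key] = value
    return out

base = {"image": "app:1.0", "replicas": 1,
        "env": {"LOG_LEVEL": "info", "REGION": "eu"}}
# A 'prod' patch layer overrides only what differs from the base:
prod = overlay(base, {"replicas": 5, "env": {"LOG_LEVEL": "warn"}})
```

The prod overlay ends up with five replicas and a quieter log level while inheriting the image and region untouched, which is the point: each environment declares only its differences.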
GitOps employs progressive delivery strategies, enabling traffic routing in a phased manner to keep negative ramifications to a bare minimum. As a leading progressive delivery tool, Flagger integrates tightly with Flux and with service meshes like Linkerd and Istio to automate canary deployments. We discuss Flagger in greater detail below.
Monitoring
Monitoring and observability are critical parts of the GitOps pipeline. Once deployments begin, the system constantly monitors the pipeline for errors, latencies, and security risks. Prometheus is the leading monitoring tool for Kubernetes systems and the default monitoring tool used by Flux and Flagger. Prometheus, or other observability (o11y) tools such as New Relic and Datadog, collects real-time performance metrics and integrates with Grafana to visualize this data.
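Prometheus exposes these metrics through its HTTP API; an instant PromQL query goes to `GET /api/v1/query`. A minimal sketch using only the standard library (the server URL and the metric names in the example query are assumptions, not part of this guide):

```python
import json
import urllib.parse
import urllib.request

def parse_instant_value(body: dict) -> float:
    """Pull the first sample value out of a Prometheus instant-query response."""
    if body.get("status") != "success" or not body["data"]["result"]:
        raise RuntimeError("query failed or returned no samples")
    return float(body["data"]["result"][0]["value"][1])

def query_prometheus(base_url: str, promql: str) -> float:
    """Run an instant query against Prometheus's HTTP API (GET /api/v1/query)."""
    url = f"{base_url}/api/v1/query?" + urllib.parse.urlencode({"query": promql})
    with urllib.request.urlopen(url) as resp:
        return parse_instant_value(json.load(resp))

# Response shape, abbreviated from the Prometheus API docs:
sample = {"status": "success",
          "data": {"result": [{"value": [1700000000.0, "0.998"]}]}}
rate = parse_instant_value(sample)

# e.g. query_prometheus("http://prometheus:9090",
#     'sum(rate(http_requests_total{code!~"5.."}[1m]))'
#     ' / sum(rate(http_requests_total[1m]))')
```

A success-rate ratio like the commented example is exactly the kind of KPI Flagger evaluates during canary analysis.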
Flagger is a progressive delivery operator for Kubernetes that automates the release of applications securely. It mitigates the underlying risks of introducing new software versions in production through deployment strategies like canary releases, blue-green mirroring, and A/B testing. It gradually shifts traffic to the new version using service meshes like Istio, Linkerd, and AWS App Mesh, or an ingress controller like Skipper, NGINX, or Contour. Flagger integrates with GitOps tools like Flux, since it is declarative and made for Kubernetes.
[Figure: Flagger canary workflow. Flux applies manifests from the cluster repo; the ingress routes user HTTPS traffic 90% to the primary deployment and 10% to the canary; Flagger queries Prometheus, analyzes the metrics, and promotes the canary.]
Source: https://github.com/stefanprodan/gitops-progressive-delivery
Flagger also comes with capabilities to automate the analysis and testing of code changes to ensure their stability before releasing them across the whole cluster. It shifts traffic to the new version while keeping the old one as a backup. You can also integrate Flagger with tools like Slack and Microsoft Teams to send timely updates to your team.
Flagger employs the strategies below to automate the end-to-end process of application analysis, testing, and rollback:
Flagger implements a canary release as a control loop that gradually shifts traffic to the new software version. The loop ends once the entire traffic is shifted to the new version. It constantly measures the health of the canary by monitoring performance metrics like HTTP request success rate and average request duration. If the KPIs do not meet the criteria, traffic routing is aborted and the release is rolled back.
[Figure: canary rollout in six steps. Traffic to v2 grows from 5%, through 10-50%, until v1 receives 0% and all instances run v2.]
Under normal circumstances, the canary analysis runs at regular intervals until all traffic is moved to the new version or the failed-checks threshold is met.
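The canary control loop described above can be sketched as follows. The step size, thresholds, and callback names are illustrative only, not Flagger's actual defaults or API:

```python
def canary_analysis(set_weight, success_rate, avg_duration_ms,
                    step=10, max_weight=50,
                    min_success=0.99, max_duration=500, failed_threshold=3):
    """Simplified canary loop: advance traffic in steps while the KPIs
    hold, and roll back after too many consecutive failed checks."""
    weight, failures = 0, 0
    while weight < max_weight:
        if success_rate() >= min_success and avg_duration_ms() <= max_duration:
            failures = 0
            weight += step
            set_weight(weight)          # route `weight`% of traffic to the canary
        else:
            failures += 1
            if failures >= failed_threshold:
                set_weight(0)           # abort: all traffic back to the primary
                return "rolled back"
    return "promoted"                   # max weight held: the canary takes over

healthy = canary_analysis(lambda w: None, lambda: 0.999, lambda: 120)
failing = canary_analysis(lambda w: None, lambda: 0.80, lambda: 120)
```

Note that a single failed check does not abort the release; only a run of failures hitting the threshold does, which tolerates transient metric blips.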
With Flagger, you run A/B testing by implementing match conditions based on HTTP headers or cookies, ensuring that a given subset of users is routed to the canary for the whole duration of the analysis. This Flagger configuration is particularly important for frontend applications that need session affinity.
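The header- and cookie-based routing decision amounts to matching request attributes against configured conditions; a minimal sketch with invented header names:

```python
def ab_route(headers: dict, match: dict) -> str:
    """A/B-testing sketch: requests whose headers satisfy every match
    condition are pinned to the canary; everyone else stays on primary."""
    if all(headers.get(name) == value for name, value in match.items()):
        return "canary"
    return "primary"

# e.g. only users carrying this (hypothetical) header see version 2:
match = {"x-canary": "insider"}
```

Because the decision depends only on request attributes rather than a random traffic weight, matched users keep hitting the canary for the entire analysis, which is what preserves session affinity.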
[Figure: A/B testing in six steps. Matched users are pinned to v2 while the rest stay on v1, until v2 serves everyone.]
When orchestrating a blue-green deployment strategy, Flagger first runs tests to ensure that the canary pods are stable. If the analysis passes, traffic is gradually shifted from the primary version (blue) to the canary (green). However, if the failure threshold is met, Flagger aborts the traffic routing.
[Figure: blue-green rollout in six steps. Traffic moves from the blue (v1) instances to the green (v2) instances until v2 serves all traffic.]
Once traffic is routed to the canary (green), the primary (blue) is updated with the canary spec. Following this, traffic is routed back to the primary, now running the updated code.
The monitoring mechanism, meanwhile, collects metrics from both versions and evaluates factors such as request success rate and request duration. If the canary metrics are healthy, the gradual traffic routing to the canary begins.
Once 100% of the traffic is routed to the canary, the primary version is updated, and once the primary pods pass their checks, traffic is sent back to the primary.
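The blue-green flow above follows a fixed sequence: test green, cut over, sync blue to the new spec, then return traffic to the updated primary. A sketch with invented callback names:

```python
def blue_green_release(test_green, route_to, sync_blue_to_green):
    """Blue-green sketch: validate the green (canary) pods, shift all
    traffic over, update the blue (primary) spec, then route traffic
    back to the now-updated primary."""
    if not test_green():            # conformance/load tests on the green pods
        return "aborted"
    route_to("green")               # 100% of traffic to the new version
    sync_blue_to_green()            # primary updated with the canary spec
    route_to("blue")                # traffic returns to the updated primary
    return "promoted"

steps = []
outcome = blue_green_release(
    test_green=lambda: True,
    route_to=lambda color: steps.append(color),
    sync_blue_to_green=lambda: steps.append("sync"),
)
```

The appeal of this sequence is that the old version stays fully deployed until the new one has carried live traffic, so an abort at any point leaves a working primary in place.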
4. Greater observability
Flagger offers strong observability capabilities to validate service level objectives (SLOs) based on application-specific metrics. It monitors and measures objectives like availability, error-rate percentage, and average response time. Traffic routing and shifting decisions are based on the analysis of these factors to ensure minimal impact on users.
Consider a scenario where the development cluster and the production cluster run on two different cloud platforms. Progressive delivery won't be a problem here, because you initiate a change from the Git repository through a pull request, which is approved through a merge. With a Git-centric workflow, you don't need to communicate with the clusters directly; the deployments are executed right from Git.
Executing a similar process manually would be a hassle, with constant traffic rebalancing and numerous manual deployments. With GitOps and Flagger, teams can automate the workflow through a set of rules. The deployment takes care of itself; all you need to do is monitor the live implementation of progressive delivery and fine-tune it whenever necessary. In case of a failure, the rollback process is as simple as reverting the change within Git. Other benefits of Git are code versioning, code reviews, change history, and security.
6 With Weave GitOps Enterprise, which includes Flux and Flagger, deployments run in an automated mode while you monitor the live implementation of progressive delivery and make changes whenever needed. Weave GitOps is the easiest and most controlled way to implement progressive delivery at scale.
[email protected] weave.works