
VMware Tanzu Application Catalog Documentation - Tutorials

VMware Tanzu Application Catalog services

You can find the most up-to-date technical documentation on the VMware by Broadcom website at:

https://docs.vmware.com/

VMware by Broadcom
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com

Copyright © 2024 Broadcom. All Rights Reserved. The term “Broadcom” refers to Broadcom Inc. and/or its
subsidiaries. For more information, go to https://www.broadcom.com. All trademarks, trade names, service
marks, and logos referenced herein belong to their respective companies. Copyright and trademark
information.


Contents

VMware Tanzu Application Catalog (Applications Tutorials) Documentation 12

Intended Audience 13

Create Your First Helm Chart 14


Introduction 14
Step 1: Generate your first chart 15
Templates 15
Values 16
Helpers and other functions 16
Documentation 16
Metadata 17
Step 2: Deploy your first chart 17
Step 3: Modify chart to deploy a custom service 18
Step 4: Package it all up to share 20
Repositories 20
Dependencies 21
Useful links 22

Best practices writing a Dockerfile 24


Introduction 24
Refreshing basic concepts about Docker images and Dockerfiles 24
What is a Docker image? 24
What is a Dockerfile? 24
What does "Docker build" mean? 24
What is a Docker layer? 24
What is the Docker build cache? 24
Concerns when building images 25
Prerequisites 25
Enable BuildKit 25
Install a Linter for Dockerfiles on your editor 25
A real case: improving a Node.js application's Docker image 26
Taking advantage of the build cache 27
Avoiding packaging dependencies that you do not need 28
Using minideb 29


Reusing maintained images when possible 29


Being specific about your base image tag 30
Using multi-stage builds to separate build and runtime environments 30
Pro Tip: Using multi-stage builds to build platform-specific images 32
Using the non-root approach to enforce container security 32
Setting the WORKDIR instruction 34
Mounting the application configuration and using the volume instruction 34
Redirecting the application logs to the stdout/stderr stream 35
Defining an entrypoint 36
Storing credentials and other sensitive data securely 36
End of the journey: container images ready for production environments 36

Best Practices for Creating Production-Ready Helm charts 38


Introduction 38
Best practices for creating production-Ready Helm charts 38
Use non-root containers 38
Do not persist the configuration 39
Integrate charts with logging and monitoring tools 40
Production workloads in Kubernetes are possible 43
Useful Links 43

Running non-root containers on Openshift 44


Introduction 44
What are non-root containers? 44
Why use a non-root container? 44
How to create a non-root container? 44
How to deploy Ghost in OpenShift 45
Lessons learned: issues and troubleshooting 47
Look and feel 47
Debugging experience 47
Mounted volumes 47
Volumes in Kubernetes 47
Config Maps in Kubernetes 48
Issues with specific utilities or servers 48
Non-root containers' lights and shadows 49

Work With Non-Root Containers 50


Introduction 50
Advantages of non-root containers 50
Potential issues with non-root containers for development 50


Using non-root containers as root containers 51


Useful links 51

Why non-root containers are important for security 52


Introduction 52
Differences between root and non-root containers 52
Advantages of non-root containers 53
Security 53
Avoid platform restrictions 53
How does Bitnami create non-root containers? 53
Useful links 55

Develop a REST API with VMware Tanzu Application Catalog's Node.js and MongoDB Containers 56

Introduction 56
Assumptions and prerequisites 56
Step 1: Create a skeleton Node.js application 56
Step 2: Create and start a local MongoDB service 58
Step 3: Create and configure a REST API endpoint 58
Step 4: Test the REST API 60
Useful links 62

Deploy a Go Application on Kubernetes with Helm 63


Introduction 63
Go features 63
Assumptions and prerequisites 63
Step 1: Obtain the application source code 64
Step 2: Build the Docker image 65
Step 3: Publish the Docker image 65
Step 4: Create the Helm chart 66
Edit the values.yaml file 66
Edit the templates/deployment.yaml file 67
Step 5: Deploy the example application in Kubernetes 67
Step 6: Update the source code and the Helm chart 69
Useful links 70

Simplify Kubernetes Resource Access Control using RBAC Impersonation 71

Introduction 71
Assumptions and prerequisites 71
Overview of Kubernetes authentication 71


Overview of Kubernetes Authorization and RBAC 72


Using impersonated "virtual-users" to control access 73
A working example with RBAC rules 74
Step 1: Prepare the RBAC manifests 74
Step 2: Test the setup 75
Step 3: Save the impersonation setup to your Kubernetes config file 75

Audit trails 76
Conclusion 77
Useful links 77

Assign Pods to Nodes with Bitnami Helm Chart Affinity Rules 78


Introduction 78
Assumptions and prerequisites 78
How Affinity Rules Work in Bitnami Helm charts 78
Deploying a chart using the podAntiAffinity rule 79
Use case 1: Install the chart with the default podAntiAffinity value 79
Use case 2: Change the podAntiAffinity value from Soft to Hard 80
Useful links 81

Resolve Chart Upgrade Issues After Migrating to Helm v3 82


Introduction 82
Assumptions and prerequisites 82
Method 1: Backup and restore data using built-in application tools 83
Step 1: Back up data using built-in PostgreSQL tools 83
Step 2: Restore the data into a new PostgreSQL release 83
Step 3: Test the upgrade process (optional) 85
Method 2: Back up and restore persistent data volumes 85
Step 1: Install Velero 85
Step 2: Back up the persistent data volumes 86
Step 3: Copy the persistent volumes to a new PostgreSQL release 86
Step 4: Test the upgrade process (optional) 87
Useful links 88

Troubleshoot Kubernetes deployments 89


Introduction 89
How to detect 89
Common issues 89
Troubleshooting checklist 90
Does your pod status show Pending? 90


Does your pod status show ImagePullBackOff or ErrImagePull? 91


Does your PVC status show Pending? 91
Are you unable to look up or resolve Kubernetes service names using DNS? 92
Is kubectl unable to find your nodes? 92
Is kubectl not permitting access to certain resources? 92
Are you unable to find the external IP address of a node? 92
Is your pod running but not responding? 93
Useful links 94

Secure your Kubernetes application with Network policies 95


Introduction 95
Assumptions and prerequisites 95
Network policies in Kubernetes 96
Kubernetes network plugins 96
Installing a network plugin in kubeadm 97
Use case: WordPress installation 97
Step 1: Deploy a WordPress + MariaDB installation using Helm 97
Step 2: Restrict traffic between pods 97
Step 3: Allow traffic only from the WordPress pod to the MariaDB pod 98
Useful links 98

Secure a Kubernetes cluster with pod security policies 100


Introduction 100
Assumptions and prerequisites 100
Understand pod security policies 101
Case 1: Prevent pods from running with root privileges 102
Case 2: Prevent pods from accessing certain volume types 104
Case 3: Prevent pods from accessing host ports 106
Useful links 108

Best Practices for Securing and Hardening Container Images 109


Introduction 109
Rolling and immutable tags 109
Rolling tags 109
Immutable tags 110
Usage recommendations 110
Root and non-root containers 110
Non-root containers 110
Advantages of non-root containers 110
Potential issues with non-root containers for development 111


Use non-root containers as root containers 111


Use arbitrary UUIDs 111
Execute one process per container 112
Secure processes, ports and credentials 112
Improve performance by keeping images small 112
Daily builds and release process 113
CVE and virus scanning 113
Verification and functional testing 113
FIPS 114
Conclusion 114

Best Practices for Securing and Hardening Helm Charts 115


Introduction 115
The Bitnami pipeline 115
ConfigMaps for configuration 115
Integration with logging and monitoring tools 117
Bitnami release process and tests 120
CVE scanning 121
Verification and functional tests 121
Upgrade tests 122
Conclusions 122
Useful links 122

Understand Bitnami's Rolling tags for container images 123


Introduction 123
Rolling tags 123
Immutable tags 124
Usage recommendations 124
Useful links 124

Backup and restore Bitnami container deployments 125


Introduction 125
Assumptions and prerequisites 125
Step 1: Backup data volumes 125
Step 2: Restore the data on each destination container 126
Useful links 127

Backup and Restore VMware Tanzu Application Catalog Helm Chart Deployments with Velero 128

Introduction 128
Assumptions and prerequisites 128


Step 1: Deploy and customize WordPress on the source cluster 128


Step 2: Install Velero on the source cluster 130
Step 3: Backup the WordPress deployment on the source cluster 131
Step 4: Restore the WordPress deployment on the destination cluster 132
Useful links 133

Backup and Restore Apache Cassandra Deployments on Kubernetes 134


Introduction 134
Assumptions and prerequisites 134
Step 1: Install Velero on the source cluster 135
Step 2: Back up the Apache Cassandra deployment on the source cluster 136
Step 3: Restore the Apache Cassandra deployment on the destination cluster 136
Useful links 138

Backup and Restore Apache Kafka Deployments on Kubernetes 139


Introduction 139
Assumptions and prerequisites 139
Step 1: Install Velero on the source cluster 140
Step 2: Back up the Apache Kafka deployment on the source cluster 141
Step 3: Restore the Apache Kafka deployment on the destination cluster 141
Useful links 142

Backup and Restore Etcd Deployments on Kubernetes 144


Introduction 144
Assumptions and prerequisites 144
Method 1: Backup and restore data using etcd's built-in tools 145
Step 1: Create a data snapshot 145
Step 2: Copy the snapshot to a PVC 146
Step 3: Restore the snapshot in a new cluster 147
Method 2: Back up and restore persistent data volumes 148
Step 1: Install Velero on the source cluster 148
Step 2: Back up the etcd deployment on the source cluster 149
Step 3: Restore the etcd deployment on the destination cluster 150
Useful links 151

Backup and Restore MongoDB Deployments on Kubernetes 152


Introduction 152
Assumptions and prerequisites 152
Method 1: Backup and restore data using MongoDB's built-in tools 153
Step 1: Backup data with mongodump 153


Step 2: Restore data with mongorestore 154


Method 2: Back up and restore persistent data volumes 155
Step 1: Install Velero on the source cluster 155
Step 2: Back up the MongoDB deployment on the source cluster 156
Step 3: Restore the MongoDB deployment on the destination cluster 157
Useful links 158

Backup and Restore MariaDB Galera Deployments on Kubernetes 159


Introduction 159
Assumptions and prerequisites 159
Method 1: Backup and restore data using MariaDB's built-in tools 160
Step 1: Backup data with mysqldump 160
Step 2: Restore data with mysql 161
Method 2: Back up and restore persistent data volumes 162
Step 1: Install Velero on the source cluster 162
Step 2: Back up the MariaDB Galera Cluster deployment on the source cluster 163
Step 3: Restore the MariaDB Galera Cluster deployment on the destination cluster 164
Useful links 165

Backup and Restore Redis Cluster Deployments on Kubernetes 166


Introduction 166
Assumptions and prerequisites 166
Step 1: Install Velero on the source cluster 167
Step 2: Back up the Redis Cluster deployment on the source cluster 168
Step 3: Restore the Redis Cluster deployment on the destination cluster 168
Useful links 169

Backup and Restore RabbitMQ Deployments on Kubernetes 170


Introduction 170
Assumptions and prerequisites 170
Step 1: Install Velero on the source cluster 171
Step 2: Back up the RabbitMQ deployment on the source cluster 172
Step 3: Restore the RabbitMQ deployment on the destination cluster 172
Useful links 173

Migrate Data Between Kubernetes Clusters with VMware Tanzu Application Catalog and Velero 175

Introduction 175
Assumptions and prerequisites 175
Step 1: Deploy PostgreSQL on the source cluster and add data to it 176


Step 2: Install Velero on the source cluster 177


Step 3: Backup the persistent PostgreSQL volumes on the source cluster 178
Step 4: Use the PostgreSQL volumes with a new deployment on the destination cluster 179
Useful links 181


VMware Tanzu Application Catalog (Applications Tutorials) Documentation

This section provides information about how to use OSS applications available via VMware Tanzu
Application Catalog.

Create Your First Helm Chart

Best practices writing a Dockerfile

Best Practices for Creating Production-Ready Helm charts

Running non-root containers on Openshift

Work With Non-Root Containers for Bitnami Applications

Why non-root containers are important for security

Develop a REST API with Node.js and MongoDB Containers

Deploy a Go Application on Kubernetes with Helm

Simplify Kubernetes Resource Access Control using RBAC Impersonation

Assign Pods to Nodes with Bitnami Helm Chart Affinity Rules

Resolve Chart Upgrade Issues After Migrating to Helm v3

Troubleshoot Kubernetes Deployments

Secure Your Kubernetes Application With Network Policies

Secure A Kubernetes Cluster With Pod Security Policies

Best Practices for Securing and Hardening Container Images

Best Practices for Securing and Hardening Helm Charts

Understand Bitnami's Rolling Tags for Container Images

Backup and Restore Bitnami Container Deployments

Backup and Restore Tanzu Application Catalog Helm Chart Deployments with Velero

Backup and Restore Apache Cassandra Deployments on Kubernetes

Backup and Restore Apache Kafka Deployments on Kubernetes

Backup and Restore Etcd Deployments on Kubernetes

Backup and Restore MongoDB Deployments on Kubernetes

Backup and Restore MariaDB Galera Deployments on Kubernetes

Backup and Restore Redis Cluster Deployments on Kubernetes


Backup and Restore RabbitMQ Deployments on Kubernetes

Migrate Data Between Kubernetes Clusters with Tanzu Application Catalog and Velero

Intended Audience
This information is intended for everyone who wants to get started with applications provided via
VMware Tanzu Application Catalog. The information is written for users who have a basic
understanding of Kubernetes and are familiar with container deployment concepts. In-depth
knowledge of Kubernetes is not required.


Create Your First Helm Chart

Introduction
So, you've got your Kubernetes cluster up and running and set up Helm v3.x, but how do you run
your applications on it? This guide walks you through the process of creating your first ever chart,
explaining what goes inside these packages and the tools you use to develop them. By the end of it
you should have an understanding of the advantages of using Helm to deliver your own applications
to your cluster.

For a typical cloud-native application with a 3-tier architecture, the diagram below illustrates how it
might be described in terms of Kubernetes objects. In this example, each tier consists of a
Deployment and Service object, and may additionally define ConfigMap or Secret objects. Each of
these objects is typically defined in a separate YAML file and fed into the kubectl command line tool.

A Helm chart encapsulates each of these YAML definitions, provides a mechanism for configuration
at deploy-time and allows you to define metadata and documentation that might be useful when
sharing the package. Helm can be useful in different scenarios:

Find and use popular software packaged as Kubernetes charts

Share your own applications as Kubernetes charts

Create reproducible builds of your Kubernetes applications

Intelligently manage your Kubernetes object definitions

Manage releases of Helm packages

Let's explore the second and third scenarios by creating our first chart.


Step 1: Generate your first chart


The best way to get started with a new chart is to use the helm create command to scaffold out an
example we can build on. Use this command to create a new chart named mychart in a new
directory:

helm create mychart

Helm will create a new directory in your project called mychart with the structure shown below. Let's
navigate our new chart (pun intended) to find out how it works.

mychart
|-- Chart.yaml
|-- charts
|-- templates
| |-- NOTES.txt
| |-- _helpers.tpl
| |-- deployment.yaml
| |-- ingress.yaml
| `-- service.yaml
`-- values.yaml

Templates

The most important piece of the puzzle is the templates/ directory. This is where Helm finds the
YAML definitions for your Services, Deployments and other Kubernetes objects. If you already have
definitions for your application, all you need to do is replace the generated YAML files with your own.
What you end up with is a working chart that can be deployed using the helm install command.

It's worth noting, however, that the directory is named templates, and Helm runs each file in this
directory through a Go template rendering engine. Helm extends the template language, adding a
number of utility functions for writing charts. Open the service.yaml file to see what this looks like:

apiVersion: v1
kind: Service
metadata:
  name: {{ template "fullname" . }}
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.externalPort }}
      targetPort: {{ .Values.service.internalPort }}
      protocol: TCP
      name: {{ .Values.service.name }}
  selector:
    app: {{ template "fullname" . }}

This is a basic Service definition using templating. When deploying the chart, Helm will generate a
definition that will look a lot more like a valid Service. We can do a dry-run of a helm install and
enable debug to inspect the generated definitions:


helm install mychart --dry-run --debug ./mychart


...
## Source: mychart/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: pouring-puma-mychart
  labels:
    chart: "mychart-0.1.0"
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: nginx
  selector:
    app: pouring-puma-mychart
...

Values

The template in service.yaml makes use of the Helm-specific objects .Chart and .Values. The
former provides metadata about the chart to your definitions such as the name, or version. The latter
.Values object is a key element of Helm charts, used to expose configuration that can be set at the
time of deployment. The defaults for this object are defined in the values.yaml file. Try changing the
default value for service.internalPort and execute another dry-run; you should find that the
targetPort in the Service and the containerPort in the Deployment change. The
service.internalPort value is used here to ensure that the Service and Deployment objects work
together correctly. The use of templating can greatly reduce boilerplate and simplify your definitions.
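For reference, the service block in the generated values.yaml, which feeds the template above, looks roughly like this (the exact keys depend on the Helm version used to scaffold the chart):

service:
  name: nginx
  type: ClusterIP
  externalPort: 80
  internalPort: 80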

If a user of your chart wanted to change the default configuration, they could provide overrides
directly on the command-line:

helm install mychart --dry-run --debug ./mychart --set service.internalPort=8080

For more advanced configuration, a user can specify a YAML file containing overrides with the
--values option.
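For example, you could put the overrides in a file and pass it at install time; the file name used here is just an illustration:

cat > custom-values.yaml <<EOF
service:
  type: NodePort
  internalPort: 8080
EOF

helm install mychart --dry-run --debug ./mychart --values custom-values.yaml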

Helpers and other functions

The service.yaml template also makes use of partials defined in _helpers.tpl, as well as functions
like replace. The Helm documentation has a deeper walkthrough of the templating language,
explaining how functions, partials and flow control can be used when developing your chart.
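As a quick illustration of how partials work, a named template is defined with define and rendered with template; this is a minimal sketch, not the exact contents of the generated _helpers.tpl:

{{- define "mychart.chartLabel" -}}
{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
{{- end -}}

# Inside a template, render the partial like this:
chart: "{{ template "mychart.chartLabel" . }}"

The generated _helpers.tpl defines the fullname partial used throughout the chart in a similar way.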

Documentation

Another useful file in the templates/ directory is the NOTES.txt file. This is a templated, plaintext file
that gets printed out after the chart is successfully deployed. As we'll see when we deploy our first
chart, this is a useful place to briefly describe the next steps for using a chart. Since NOTES.txt is run
through the template engine, you can use templating to print out working commands for obtaining
an IP address, or getting a password from a Secret object.
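A minimal sketch of what such a NOTES.txt might contain, modelled on the install output shown later in this guide:

1. Get the application URL by running these commands:
{{- if contains "NodePort" .Values.service.type }}
  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "fullname" . }})
  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT/
{{- end }}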

Metadata

As mentioned earlier, a Helm chart consists of metadata that is used to help describe what the
application is, define constraints on the minimum required Kubernetes and/or Helm version and
manage the version of your chart. All of this metadata lives in the Chart.yaml file. The Helm
documentation describes the different fields for this file.
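For example, the Chart.yaml of the generated chart contains fields like the following; the kubeVersion constraint is an optional addition shown here only to illustrate how a version requirement can be expressed:

apiVersion: v1
name: mychart
version: 0.1.0
description: A Helm chart for Kubernetes
# Optional: restrict the Kubernetes versions the chart supports
kubeVersion: ">=1.14.0"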

Step 2: Deploy your first chart


The chart you generated in the previous step is set up to run an NGINX server exposed via a
Kubernetes Service. By default, the chart will create a ClusterIP type Service, so NGINX will only be
exposed internally in the cluster. To access it externally, we'll use the NodePort type instead. We can
also set the name of the Helm release so we can easily refer back to it. Let's go ahead and deploy
our NGINX chart using the helm install command:

helm install example ./mychart --set service.type=NodePort


NAME: example
LAST DEPLOYED: Tue May 2 20:03:27 2017
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
example-mychart 10.0.0.24 <nodes> 80:30630/TCP 0s

==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
example-mychart 1 1 1 0 0s

NOTES:
1. Get the application URL by running these commands:
export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services example-mychart)
export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT/

The output of helm install displays a handy summary of the state of the release, what objects were
created, and the rendered NOTES.txt file to explain what to do next. Run the commands in the
output to get a URL to access the NGINX service and pull it up in your browser.


If all went well, you should see the NGINX welcome page as shown above. Congratulations! You've
just deployed your very first service packaged as a Helm chart!

Step 3: Modify chart to deploy a custom service


The generated chart creates a Deployment object designed to run an image provided by the default
values. This means all we need to do to run a different service is to change the referenced image in
values.yaml.

We are going to update the chart to run a todo list application available on Docker Hub. In
values.yaml, update the image keys to reference the todo list image:

image:
  repository: prydonius/todo
  tag: 1.0.0
  pullPolicy: IfNotPresent

As you develop your chart, it's a good idea to run it through the linter to ensure you're following
best practices and that your templates are well-formed. Run the helm lint command to see the
linter in action:

helm lint ./mychart


==> Linting ./mychart
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures

The linter didn't complain about any major issues with the chart, so we're good to go. However, as
an example, here is what the linter might output if you managed to get something wrong:


echo "malformed" > mychart/values.yaml


helm lint ./mychart
==> Linting mychart
[INFO] Chart.yaml: icon is recommended
[ERROR] values.yaml: unable to parse YAML
error converting YAML to JSON: yaml: line 34: could not find expected ':'

Error: 1 chart(s) linted, 1 chart(s) failed

This time, the linter tells us that it was unable to parse the values.yaml file correctly. With the line
number hint, we can easily find and fix the bug we introduced.

Now that the chart is once again valid, run helm install again to deploy the todo list application:

helm install example2 ./mychart --set service.type=NodePort


NAME: example2
LAST DEPLOYED: Wed May 3 12:10:03 2017
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
example2-mychart 10.0.0.78 <nodes> 80:31381/TCP 0s

==> apps/v1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
example2-mychart 1 1 1 0 0s

NOTES:
1. Get the application URL by running these commands:
export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services example2-mychart)
export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT/

Once again, we can run the commands in the NOTES to get a URL to access our application.


If you have already built containers for your applications, you can run them with your chart by
updating the default values or the Deployment template.
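For example, assuming your image has been pushed to a registry, you could point the chart at it without editing any files; the repository and tag below are placeholders:

helm install example-custom ./mychart \
  --set image.repository=registry.example.com/myapp \
  --set image.tag=1.2.3 \
  --set service.type=NodePort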

Step 4: Package it all up to share


So far in this tutorial, we've been using the helm install command to install a local, unpacked chart.
However, if you are looking to share your charts with your team or the community, your consumers
will typically install the charts from a tar package. We can use helm package to create the tar
package:

helm package ./mychart

Helm will create a mychart-0.1.0.tgz package in our working directory, using the name and version
from the metadata defined in the Chart.yaml file. A user can install from this package instead of a
local directory by passing the package as the parameter to helm install.

helm install example3 mychart-0.1.0.tgz --set service.type=NodePort

Repositories


In order to make it much easier to share packages, Helm has built-in support for installing packages
from an HTTP server. Helm reads a repository index hosted on the server which describes what
chart packages are available and where they are located.

We can use the helm serve command to run a local repository to serve our chart.

helm serve
Regenerating index. This may take a moment.
Now serving you on 127.0.0.1:8879

Now, in a separate terminal window, you should be able to see your chart in the local repository and
install it from there:

helm search local


NAME VERSION DESCRIPTION
local/mychart 0.1.0 A Helm chart for Kubernetes

helm install example4 local/mychart --set service.type=NodePort

To set up a remote repository you can follow the guide in the Helm documentation.
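At a high level, a chart repository is simply an HTTP server that hosts packaged charts together with an index.yaml. A rough sketch of the workflow, using placeholder URLs:

# Generate (or refresh) the index for a directory containing .tgz packages
helm repo index ./charts --url https://charts.example.com

# After uploading the packages and index.yaml to the server, consumers can run:
helm repo add myrepo https://charts.example.com
helm install example6 myrepo/mychart --set service.type=NodePort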

Dependencies
As the applications you're packaging as charts increase in complexity, you might find you need to pull
in a dependency such as a database. Helm allows you to specify sub-charts that will be created as
part of the same release. To define a dependency, create a requirements.yaml file in the chart root
directory:

cat > ./mychart/requirements.yaml <<EOF
dependencies:
  - name: mariadb
    version: 0.6.0
    repository: https://charts.helm.sh/stable
EOF

Much like a runtime language dependency file (such as Python's requirements.txt), the
requirements.yaml file allows you to manage your chart's dependencies and their versions. When
updating dependencies, a lockfile is generated so that subsequent fetches of dependencies use a
known, working version. Run the following command to pull in the MariaDB dependency we defined:

helm dep update ./mychart


Hang tight while we grab the latest from your chart repositories...
...Unable to get an update from the "local" chart repository (http://127.0.0.1:8879/charts):
Get http://127.0.0.1:8879/charts/index.yaml: dial tcp 127.0.0.1:8879: getsockopt: connection refused
...Successfully got an update from the "bitnami" chart repository
...Successfully got an update from the "incubator" chart repository
Update Complete. `Happy Helming!`
Saving 1 charts
Downloading mariadb from repo
$ ls ./mychart/charts
mariadb-0.6.0.tgz


Helm has found a matching version in the bitnami repository and has fetched it into the chart's sub-
chart directory. Now when we go and install the chart, we'll see that MariaDB's objects are created
too:

helm install example5 ./mychart --set service.type=NodePort


NAME: example5
LAST DEPLOYED: Wed May 3 16:28:18 2017
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME TYPE DATA AGE
example5-mariadb Opaque 2 1s

==> v1/ConfigMap
NAME DATA AGE
example5-mariadb 1 1s

==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
example5-mariadb Bound pvc-229f9ed6-3015-11e7-945a-66fc987ccf32 8Gi RWO 1s

==> v1/Service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
example5-mychart 10.0.0.144 <nodes> 80:30896/TCP 1s
example5-mariadb 10.0.0.108 <none> 3306/TCP 1s

==> apps/v1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
example5-mariadb 1 1 1 0 1s
example5-mychart 1 1 1 0 1s

NOTES:
1. Get the application URL by running these commands:
export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services example5-mychart)
export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT/

Useful links
We've walked through some of the ways Helm supercharges the delivery of applications on
Kubernetes. From an empty directory, you were able to get a working Helm chart out of a single
command, deploy it to your cluster and access an NGINX server. Then, by simply changing a few
lines and re-deploying, you had a much more useful todo list application running on your cluster!
Beyond templating, linting, sharing and managing dependencies, here are some other useful tools
available to chart authors:

Define hooks to run Jobs before or after installing and upgrading releases

Sign chart packages to help users verify their integrity


Write integration/validation tests for your charts

Employ a handful of tricks in your chart templates


Best practices writing a Dockerfile

Introduction
Since Bitnami published its first Docker container in 2015, the techniques for writing Dockerfiles have
significantly evolved. As part of the team which maintains a container catalog with more than 130
apps, I have worked on adapting the containers and their Dockerfiles to meet the community
requirements.

In this tutorial, I will go over these lessons learned, describing some of the best practices and
common pitfalls that you are likely to encounter when developing Dockerfiles, by applying them on
practical examples. First, I will briefly explain some basic concepts that you need to refresh before
examining specific cases. Then, I will walk you through some practical examples, to improve the
build time, the size, and the security of a Docker image. To do that, I have provided you with a
GitHub repository that contains all the files you need to follow the tips and tricks shown in this post.

Refreshing basic concepts about Docker images and


Dockerfiles
This guide assumes you are familiar with Docker and its build environment. Let's review some of the
basic concepts before you start to put them into practice.

What is a Docker image?


A Docker image is a template that allows you to instantiate running containers. It is composed of a
series of filesystem layers, each generated by an instruction in its Dockerfile.

What is a Dockerfile?
A Dockerfile is just a blueprint that contains the instructions to build a Docker image. Currently, more
than a million Dockerfiles are on GitHub.

What does "Docker build" mean?


The process of building a Docker image from a Dockerfile is known as a Docker build.

For more information, see Dockerfile reference.

What is a Docker layer?


Each layer in a Docker context represents an instruction included in a Docker image's Dockerfile.
The layers can also be referred to as "build steps".

What is the Docker build cache?


Every time you build a Docker image, each build step is cached. Reusing cached layers that do not
change during the rebuild process improves the build time.

Concerns when building images


These are the main areas of improvement that are covered in this guide:

Consistency: If you are consistent in designing your images, they are easier to maintain and you
will reduce the time spent when developing new images.

Build Time: Especially when your builds are integrated in a Continuous Integration (CI) pipeline,
reducing the build time can significantly reduce your apps' development cost.

Image Size: Reduce the size of your images to improve the security, performance, efficiency,
and maintainability of your containers.

Security: Critical for production environments, securing your containers is very important to
protect your applications from external threats and attacks.

Prerequisites
There are two tools that will help you develop your Dockerfiles. Before starting the tutorial, I advise
you to:

Enable BuildKit

Install a Linter for Dockerfiles on your editor

Enable BuildKit
BuildKit is a toolkit, part of the Moby project, that improves performance when building
Docker images. It can be enabled in two different ways:

Exporting the DOCKER_BUILDKIT environment variable:

$ export DOCKER_BUILDKIT=1

Note

Add this instruction to your ~/.bashrc file.

Or configuring the Docker daemon to enable the BuildKit feature:

{
  "features": {
    "buildkit": true
  }
}

Install a Linter for Dockerfiles on your editor


A Linter helps you detect syntax errors on your Dockerfiles and gives you some suggestions based
on common practices.

There are plugins that provide these functionalities for almost every Integrated Development
Environment (IDE). Here are some suggestions:


Atom: linter-docker

Eclipse: Docker Editor

Visual Studio: Docker Linter
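If you prefer a standalone tool over an editor plugin, hadolint is a commonly used Dockerfile linter that can also run in a CI pipeline; for example:

$ hadolint Dockerfile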

A real case: improving a Node.js application's Docker image


In order to help you follow the examples below, here is a GitHub repository which contains all the
files you need during each step of the tutorial.

The examples are based on building a very simple Node.js application's Docker image using the files
below:

A Dockerfile that contains the image definition.

A LICENSE.

A package.json that describes the application and its dependencies (which are basically the
Express NPM module).

A server.js that defines the web application using Express framework.

A README.md with some instructions.

The Dockerfile is pretty simple:

FROM debian
# Copy application files
COPY . /app
# Install required system packages
RUN apt-get update
RUN apt-get -y install imagemagick curl software-properties-common gnupg vim ssh
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash -
RUN apt-get -y install nodejs
# Install NPM dependencies
RUN npm install --prefix /app
EXPOSE 80
CMD ["npm", "start", "--prefix", "app"]

This is how the lines above can be read:

Using debian as the base image, it installs nodejs and npm in the system using the apt-get
command. To run the application, it's necessary to install some extra system packages for the
Node.js setup script to work, such as curl, imagemagick, software-properties-common, or gnupg.
Furthermore, it installs vim and ssh packages for debugging purposes.

Once the image has all that it needs to build the application, it installs the application dependencies
and uses the npm start command to start the application. Port 80 is exposed with the EXPOSE
instruction since the application listens on it. To build the Docker image for this
application, use the command below:

$ docker build . -t express-image:0.0.1

Note

You can specify the image tag using the format: IMAGE_NAME:TAG.


It takes 127.8 seconds to build the image and it is 554MB. We can improve the results by following
some good practices.

Taking advantage of the build cache


The build cache reuses the layers produced by previous build steps: a step is only re-executed when it,
or a step before it, changes. Keep this in mind and order your instructions so that existing layers can be
reused to reduce the build time.

Let's try to emulate the process of rebuilding your apps' image to introduce a new change in the
code, so you can understand how the cache works. To do so, edit the message used in the
console.log at server.js and rebuild the image using the command below:

$ docker build . -t express-image:0.0.2

It takes 114.8 seconds to build the image.

Using the current approach, you can't reuse the build cache to avoid installing the system packages
if a single bit changes in the application's code. However, if you switch the order of the layers, you
will be able to avoid reinstalling the system packages:

FROM debian
- # Copy application files
- COPY . /app
# Install required system packages
RUN apt-get update
...
RUN apt-get -y install nodejs
+ # Copy application files
+ COPY . /app
# Install NPM dependencies
...

Rebuild the image using the same command; this time the cached layers are reused and the system
packages are not reinstalled. The result: it takes 5.8 seconds to build, a significant improvement.

If a single character changed in the README.md file (or in any other file which is in the repository
but is not related to the application), the entire directory would be copied into the image again,
which would invalidate the cache once more.

You have to be specific about the files you copy to make sure that you are not invalidating the
cache with changes that do not affect the application.

...
# Copy application files
- COPY . /app
+ COPY package.json server.js /app/
# Install NPM dependencies
...
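A complementary option, not used in the original Dockerfile, is a .dockerignore file that keeps files such as README.md out of the build context altogether; a minimal sketch:

README.md
LICENSE
.git
node_modules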

Note

Use "COPY" instead of "ADD" when possible. Both commands do basically the same

VMware by Broadcom 27
VMware Tanzu Application Catalog Documentation - Tutorials

thing, but "ADD" is more complex: it has extra features like extracting files or
copying them from remote sources. From a security perspective too, using "ADD"
increases the risk of malware injection in your image if the remote source you are
using is unverified or insecure.

Avoiding packaging dependencies that you do not need


When building containers to run in production, every unused package, or those included for
debugging purposes, should be removed.

The current Dockerfile includes the ssh system package. However, you can access your containers
using the docker exec command instead of ssh'ing into them. Apart from that, it also includes vim
for debugging purposes, which can be installed when required, instead of packaged by default. Both
packages are removable from the image.

In addition, you can configure the package manager to avoid installing packages that you don't
need. To do so, use the --no-install-recommends flag on your apt-get calls:

...
RUN apt-get update
- RUN apt-get -y install imagemagick curl software-properties-common gnupg vim ssh
+ RUN apt-get -y install --no-install-recommends imagemagick curl software-properties-common gnupg
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash -
- RUN apt-get -y install nodejs
+ RUN apt-get -y install --no-install-recommends nodejs
# Install NPM dependencies
...

On the other hand, it is not a good idea to have separate steps for updating and installing system packages.
This might result in installing outdated packages when you rebuild the image. So, let's combine them
into a single step to avoid this issue.

...
- RUN apt-get update
- RUN apt-get install -y --no-install-recommends imagemagick curl software-properties-common gnupg
+ RUN apt-get update && apt-get -y install --no-install-recommends imagemagick curl software-properties-common gnupg
- RUN curl -sL https://deb.nodesource.com/setup_10.x | bash -
- RUN apt-get -y install --no-install-recommends nodejs
+ RUN curl -sL https://deb.nodesource.com/setup_10.x | bash - && apt-get -y install --no-install-recommends nodejs
# Install NPM dependencies
...

Finally, remove the package manager cache to reduce the image size:

...
RUN apt-get update && apt-get -y install --no-install-recommends imagemagick curl software-properties-common gnupg
- RUN curl -sL https://deb.nodesource.com/setup_10.x | bash - && apt-get -y install --no-install-recommends nodejs
+ RUN curl -sL https://deb.nodesource.com/setup_10.x | bash - && apt-get -y install --no-install-recommends nodejs && rm -rf /var/lib/apt/lists/*
# Install NPM dependencies
...

If you rebuild the image again...

$ docker build . -t express-image:0.0.3

... The image was reduced to 340MB!! That's almost half of its original size.

Using minideb
Minideb is a minimalist Debian-based image built specifically to be used as a base image for
containers. To significantly reduce the image size, use it as the base image.

- FROM debian
+ FROM bitnami/minideb
# Install required system packages
...

Minideb includes a command called install_packages that:

Installs the named packages, skips prompts, etc.

Cleans up the apt metadata afterwards to keep the image small.

Retries the installation if the apt-get instructions fail.

Replace the apt-get instructions with the command as follows:

...
# Install required system packages
- RUN apt-get update && apt-get -y install --no-install-recommends imagemagick curl software-properties-common gnupg
+ RUN install_packages imagemagick curl software-properties-common gnupg
- RUN curl -sL https://deb.nodesource.com/setup_10.x | bash - && apt-get -y install --no-install-recommends nodejs && rm -rf /var/lib/apt/lists/*
+ RUN curl -sL https://deb.nodesource.com/setup_10.x | bash - && install_packages nodejs
# Copy application files
...

Build the image again:

$ docker build . -t express-image:0.0.4

As you can see, you saved 63MB more. The image size is now 277MB.

Reusing maintained images when possible


Using Bitnami-maintained images gives you some benefits:

Reducing the size by sharing layers between images.

Ensuring all the components are packaged with the latest available patches since they are
rebuilt every day.


Instead of installing the system packages you need to run the application (Node.js in this case), use
the bitnami/node image:

- FROM bitnami/minideb
+ FROM bitnami/node
- # Install required system packages
- RUN install_packages imagemagick curl software-properties-common gnupg
- RUN curl -sL https://deb.nodesource.com/setup_10.x | bash - && install_packages nodejs
# Copy application files
...

Being specific about your base image tag

Maintained images usually have different tags, used to specify their different flavors. For instance, the
bitnami/node image is built for different Node.js versions and it has a prod flavor which includes the
minimal needed packages to run a Node application (see Supported Tags).

Following this example, imagine that the application is requesting node >= 10 in the package.json.
Therefore, you should use the 10-prod tag to ensure that you are using Node.js 10 with the minimal
packages:

- FROM bitnami/node
+ FROM bitnami/node:10-prod
# Copy application files
...
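For reference, the node >= 10 requirement mentioned above would typically be declared in the engines field of package.json; the snippet below is a minimal sketch rather than the exact file from the example repository:

{
  "name": "express-example",
  "version": "0.0.1",
  "engines": {
    "node": ">=10"
  },
  "dependencies": {
    "express": "^4.16.4"
  }
}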

Once you add that tag, rebuild the image again:

$ docker build . -t express-image:0.0.5

These are the results: 48MB have been saved since the image size is now 229MB. With this change, you
also no longer need to worry about installing the system packages yourself, since the base image takes care of them.

Using multi-stage builds to separate build and runtime


environments
Look at the current Dockerfile (after applying the improvements above) to see the following:

FROM bitnami/node:10-prod
# Copy application files
COPY package.json server.js /app/
# Install NPM dependencies
RUN npm install --prefix /app
EXPOSE 80
CMD ["npm", "start", "--prefix", "/app"]

The current status of the sample Dockerfile shows two kinds of identifiable build steps:

Building the application from source code and installing its dependencies.

Running the application.

To continue improving the efficiency and size of the image, split the build process into different
stages. That way, the final image will be as simple as possible.


Using multi-stage builds is good practice to only copy the artifacts needed in the final image. Let's
see how to do it in this example:

FROM bitnami/node:10 AS builder


COPY package.json server.js /app/
RUN npm install --prefix /app

FROM bitnami/node:10-prod
COPY --from=builder /app/package.json /app/server.js /app/
COPY --from=builder /app/node_modules /app/node_modules
EXPOSE 80
CMD ["node", "/app/server.js"]

This is a short summary of the steps performed:

Using bitnami/node:10 to build our application, I added AS builder to name our first stage
"builder". Then, I used COPY --from=builder to copy files from that stage. That way, the artifacts
copied are only those needed to run the minimal image bitnami/node:10-prod.

This approach is extremely effective when building images for compiled applications. In the example
below, I have made some tweaks to dramatically decrease the image size. The sample image is the
one that builds Kubeapps Tiller Proxy, one of the core components of Kubeapps:

ARG VERSION

FROM bitnami/minideb:stretch AS builder


RUN install_packages ca-certificates curl git
RUN curl https://dl.google.com/go/go1.11.4.linux-amd64.tar.gz | tar -xzf - -C /usr/local
ENV PATH="/usr/local/go/bin:$PATH" CGO_ENABLED=0
RUN go get -u github.com/golang/glog && go get -u github.com/kubeapps/kubeapps/cmd/tiller-proxy
RUN go build -a -installsuffix cgo -ldflags "-X main.version=$VERSION" github.com/kubeapps/kubeapps/cmd/tiller-proxy

FROM scratch
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /tiller-proxy /proxy
EXPOSE 80
CMD ["/proxy"]

The final image uses scratch (which indicates that the next command in the Dockerfile is the first
filesystem layer in the image) and it contains only what we need: the binary and the SSL
certificates.

Note

Use ARG and --build-arg K=V to modify your builds from the command line.

Build the image using the command:

$ docker build . -t tiller-proxy-example --build-arg VERSION=1.0.0


The final image size is only 37.7MB!! If you include both building and running instructions in the
same image, the image size will be > 800MB.

Pro Tip: Using multi-stage builds to build platform-specific images

Reuse those artifacts built on the builder stage to create platform-specific images. For instance,
following the Kubeapps Tiller Proxy example, use a Dockerfile to create different images for
different platforms. In the Dockerfile below, Debian Stretch and Oracle Linux 7 are the platforms
specified for the build:

...
FROM oraclelinux:7-slim AS target-oraclelinux
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /tiller-proxy /proxy
EXPOSE 80
CMD ["/proxy"]

FROM bitnami/minideb:stretch AS target-debian


COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /tiller-proxy /proxy
EXPOSE 80
CMD ["/proxy"]

In the build commands, just add the --target X flag to indicate which platform you want to build the
image for:

$ docker build . -t tiller-proxy-example:debian --target target-debian --build-arg VERSION=1.0.0
$ docker build . -t tiller-proxy-example:oracle --target target-oraclelinux --build-arg VERSION=1.0.0

Using a single Dockerfile, you built images for two different platforms, while keeping the build
process very simple.

Using the non-root approach to enforce container security


Running containers as a non-root user is one of the most popular best practices for security.

This approach prevents malicious code from gaining permissions in the container host. It also allows
running containers on Kubernetes distributions that don’t allow running containers as root, such as
OpenShift. For more information about the reasons to use a non-root container, check these blog
posts:

Why Non-Root Containers Are Important For Security.

Running Non-Root Containers On Openshift.

To convert the Docker image into a non-root container, change the default user from root to
nonroot:

...
EXPOSE 80
+ RUN useradd -r -u 1001 -g root nonroot
+ USER nonroot
CMD ["node", "/app/server.js"]


...

Note

Add the nonroot user to the root group.

Take these details into consideration when moving a container to non-root:

File permissions: What directories should be writable by the application? Adapt them by
giving writing permissions to the non-root users. Check Linux Wiki for more information
about changing permissions.

Port access: You cannot use privileged (1-1023) ports anymore.

Debugging: You cannot perform any action that requires privileged permissions for
debugging purposes.

Note

It is important to understand that you should not move a container to a non-root


approach and then use sudo to gain higher-level privileges, as this defeats the
purpose of using a non-root approach. Similarly, you should also ensure that the
non-root user account is not part of the sudoers group, to maximize security and
avoid any risk of it obtaining root privileges.

Our sample application uses port 80 to listen for connections. Adapt it to use an alternative port such
as 8080:

Dockerfile:

...
COPY --from=builder /tiller-proxy /proxy
- EXPOSE 80
+ EXPOSE 8080
RUN useradd -r -u 1001 -g root nonroot
...

server.js:

...
const serverHost = '127.0.0.1';
- const serverPort = 80;
+ const serverPort = 8080;
...

On the other hand, the application writes its log to the /var/log/app.log file. Give the nonroot user
write permissions on that directory:

...
RUN useradd -r -u 1001 -g root nonroot
EXPOSE 8080
+ RUN chmod -R g+rwX /var/log


USER nonroot
...

Test it:

$ docker build . -t express-image:0.0.7


$ docker run --rm -p 8080:8080 -d --name express-app express-image:0.0.7
$ curl http://127.0.0.1:8080
Hello world
$ docker exec express-app whoami
nonroot
$ docker stop express-app

As you can see, everything is working as expected and now your container is not running as root
anymore.

Setting the WORKDIR instruction


The default value for the working directory is /. However, unless you use FROM scratch images, it is
likely that the base image you are using has already set it. It is a good practice to set the WORKDIR
instruction to a value that suits your application.

Our application code is under the directory /app. Therefore, it makes sense to adapt the working
directory to it:

...
USER nonroot
+ WORKDIR /app
- CMD ["node", "/app/server.js"]
+ CMD ["node", "server.js"]
...

Note

Using absolute paths to set this instruction is recommended.

Mounting the application configuration and using the volume


instruction
When running your container on Kubernetes, chances are that you want to import your
configuration from configMaps or secrets resources. To use these kinds of resources, mount them
as configuration files in the container filesystem. Then, adapt your application so it reads the settings
from those configuration files.

Using the VOLUME instruction to create mount points is strongly recommended. Docker marks these
mount points as "holding externally mounted volumes", so the host or other containers know what
data is exposed.
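For context, on Kubernetes the /settings directory used below would typically be populated from a ConfigMap mounted as a volume; the fragment below is a hedged sketch with placeholder names, not part of the example repository:

apiVersion: v1
kind: Pod
metadata:
  name: express-app
spec:
  containers:
    - name: express-app
      image: express-image:0.0.8
      volumeMounts:
        - name: settings
          mountPath: /settings
  volumes:
    - name: settings
      configMap:
        name: express-settings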

Let's modify our application so the hostname and port are retrieved from a configuration file. Follow
these steps:

In the server.js file, make the following changes:


...
// Constants
- const serverHost = '127.0.0.1';
- const serverPort = 8080;
+ const settings = require('/settings/settings.json');
+ const serverHost = settings.host;
+ const serverPort = settings.port;
...

Create the settings.json file as shown below:

$ mkdir settings && cat > settings/settings.json <<'EOF'
{
  "host": "127.0.0.1",
  "port": "8080"
}
EOF

Add the mount point to the Dockerfile:

...
EXPOSE 8080
+ VOLUME /settings
RUN useradd -r -u 1001 -g root nonroot
...

At this point, rebuild the image, and mount its configuration settings as shown below:

$ docker build . -t express-image:0.0.8


$ docker run -v $(pwd)/settings:/settings --rm -p 8080:8080 -d --name express-app express-image:0.0.8

Redirecting the application logs to the stdout/stderr stream


The applications should redirect their logs to stdout/stderr so the host can collect them.

On platforms like Kubernetes, it is very common to have a logging system (such as ELK) that
collects logs from every container so they're available for the sysadmins. Making the logs available
for the host to collect is mandatory for these kinds of solutions.

Our application writes its log in the /var/log/app.log file. Redirect the logs to stdout using the
workaround below:

...
VOLUME /settings
+ RUN ln -sf /dev/stdout /var/log/app.log
RUN useradd -r -u 1001 -g root nonroot
...

With that change, execute the following commands to check that Docker correctly retrieved the
logs:

$ docker build . -t express-image:0.0.9


$ docker run -v $(pwd)/settings:/settings --rm -p 8080:8080 -d --name express-app express-image:0.0.9

$ docker logs express-app


Running on http://127.0.0.1:8080

Defining an entrypoint
To make the container more flexible, set an entrypoint to act as the main command of the image.
Then, use the CMD instruction to specify the arguments/flags of the command:

...
- CMD ["node", "server.js"]
+ ENTRYPOINT ["node"]
+ CMD ["server.js"]

This way, you can modify the container behavior depending on the arguments used to run it. For
instance, use the command below to maintain the original behavior:

$ docker build . -t express-image:0.0.10


$ docker run -v $(pwd)/settings:/settings --rm -p 8080:8080 -d --name express-app express-image:0.0.10

Or use the command below to check the code syntax:

$ docker run --rm express-image:0.0.10 --check server.js

You can always override the entrypoint using the --entrypoint flag. For instance, to check the files
available at /app, run:

$ docker run --rm --entrypoint "/bin/ls" express-image:0.0.10 -l /app


total 12
drwxr-xr-x 51 root root 4096 Jan 24 12:45 node_modules
-rw-r--r-- 1 root root 301 Jan 24 10:11 package.json
-rw-r--r-- 1 root root 542 Jan 24 12:43 server.js

When an application requires initialization, use a script as your entrypoint. Find an example of the one
used in the bitnami/redis image here.

Storing credentials and other sensitive data securely


The example shown in this guide did not need any credentials, but the secure storage of credentials
is an important consideration when writing Dockerfiles.

It is considered bad security practice to store sensitive information, such as login credentials or API
tokens, as plaintext in a Dockerfile. A better approach, especially for containers that will run on
Kubernetes, is to encrypt this sensitive information in a Kubernetes SealedSecret. SealedSecrets can
be safely stored in public repositories and can only be decrypted by the Kubernetes controller
running in the target cluster.

Refer to the SealedSecrets documentation for more information.
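As an illustration only (the resource names are hypothetical and the encrypted value is a placeholder that would normally be generated with the kubeseal CLI against your own controller), a SealedSecret manifest looks roughly like this:

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: express-app-credentials      # hypothetical name
  namespace: default
spec:
  encryptedData:
    api-token: AgBy3i4OJSWK...       # placeholder ciphertext produced by kubeseal
  template:
    metadata:
      name: express-app-credentials
      namespace: default

Only the controller running in the target cluster holds the private key needed to turn this back into a regular Secret, which is why the manifest itself can be committed to a public repository.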

End of the journey: container images ready for production environments

The intention of this blog post was to show you how to improve a Dockerfile in order to build containers in a more effective and faster way.

To demonstrate how to implement some changes on a given Dockerfile, I used an example with
several defects that would be corrected by applying these good practices. The initial Dockerfile had
the following issues:

It did not make good use of the build cache.

It was packaging too many unnecessary components.

It required too much maintenance due to its complexity.

It was not secure (running as root).

It did not export any logs to the host, so sysadmins could not analyze them.

Upon implementing these minor adjustments, the resulting Dockerfile is ready to be used to build
containers for production environments.

Apart from these steps to write the best Dockerfiles for your production containers, here are a few
more tips for becoming proficient at building containers.

Whenever a container is rebuilt, you should run a battery of validation, functional, and
integration tests for it. The more tests you perform, the better.

Rebuild your containers as frequently as possible (on a daily basis preferably) to ensure you
do not package old components in them.

Implement a CI/CD pipeline automating the building, testing, and publishing of container
images (see the sketch after this list).

Analyze the images and look for CVEs.
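To make those last two tips more concrete, here is a minimal sketch of such a pipeline as a GitHub Actions workflow. Everything in it is an assumption for illustration (file name, registry credentials, image name); it is not part of the original example, and a CVE scanner such as Trivy or Grype would be added as an extra step.

# .github/workflows/build.yml (hypothetical file)
name: build-test-publish
on:
  push:
    branches: [main]
  schedule:
    - cron: '0 3 * * *'        # daily rebuild so old base layers are not reused
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the image
        run: docker build -t USERNAME/express-image:${{ github.sha }} .
      - name: Smoke-test the image
        run: docker run --rm USERNAME/express-image:${{ github.sha }} --check server.js
      - name: Log in and publish
        run: |
          echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin
          docker push USERNAME/express-image:${{ github.sha }}
      # An image-scanning step (for example Trivy or Grype) would go here to look for CVEs.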


Best Practices for Creating Production-Ready Helm charts

Introduction
Three years have passed since the first release of Helm, and it has indeed made a name for itself.
Both avowed fans and fervent haters agree that the Kubernetes "apt-get equivalent" is the standard
way of deploying to production (at least for now, let's see what Operators end up bringing to the
table). During this time, Bitnami has contributed to the project in many ways. You can find us in PRs
in Helm's code, in solutions like Kubeapps, and especially in what we are mostly known for: our huge
application library.


As maintainers of a collection of more than 45 Helm charts, we know that creating a maintainable,
secure and production-ready chart is far from trivial. In this sense, this blog post shows essential
features that any chart developer should know.

Note

Good practices guides are available in our Useful Links section.

Best practices for creating production-ready Helm charts


Use non-root containers
Ensuring that a container is able to perform only a very limited set of operations is vital for production
deployments. This is possible thanks to the use of non-root containers, which are executed by a
user different from root. Although creating a non-root container is a bit more complex than a root
container (especially regarding filesystem permissions), it is absolutely worth it. Also, in environments
like Openshift, using non-root containers is mandatory.

In order to make your Helm chart work with non-root containers, add the securityContext section to
your yaml files.

This is what we do, for instance, in the Bitnami Elasticsearch Helm chart. This chart deploys several
Elasticsearch StatefulSets and Deployments (data, ingestion, coordinating and master nodes), all of
them with non-root containers. If we check the master node StatefulSet, we see the following:

spec:
  {{- if .Values.securityContext.enabled }}
  securityContext:
    fsGroup: {{ .Values.securityContext.fsGroup }}
  {{- end }}

The snippet above changes the permissions of the mounted volumes, so the container user can
access them for read/write operations. In addition to this, inside the container definition, we see
another securityContext block:

{{- if .Values.securityContext.enabled }}
securityContext:
  runAsUser: {{ .Values.securityContext.runAsUser }}
{{- end }}

In this part we specify the user running the container. In the values.yaml file, we set the default
values for these parameters:

## Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
##
securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 1001

With these changes, the chart will work as non-root in platforms like GKE, Minikube or Openshift.
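For reference, with the default values above, the rendered StatefulSet ends up containing roughly the following (a simplified sketch; the real manifest includes many more fields and the container name is illustrative):

spec:
  securityContext:
    fsGroup: 1001
  containers:
    - name: elasticsearch-master
      securityContext:
        runAsUser: 1001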

Do not persist the configuration


Adding persistence is an essential part of deploying stateful applications. In our experience, deciding
what to persist and what not to persist can be tricky. After several iterations in our charts, we found that
persisting the application configuration is not a recommended practice. One advantage of
Kubernetes is that you can change the deployment parameters very easily by just doing kubectl
edit deployment or helm upgrade. If the configuration is persisted, none of the changes would be
applied. So, when developing a production-ready Helm chart, make sure that the configuration can
be easily changed with kubectl or helm upgrade. One common practice is to create a ConfigMap
with the configuration and have it mounted in the container. Let's use the Bitnami RabbitMQ chart as
an example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "rabbitmq.fullname" . }}-config
  labels:
    app: {{ template "rabbitmq.name" . }}
    chart: {{ template "rabbitmq.chart" . }}
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
data:
  enabled_plugins: |-
    {{ template "rabbitmq.plugins" . }}
  rabbitmq.conf: |-
    ##username and password
    default_user={{.Values.rabbitmq.username}}
    default_pass=CHANGEME
{{ .Values.rabbitmq.configuration | indent 4 }}
{{ .Values.rabbitmq.extraConfiguration | indent 4 }}


Note that there is a section in the values.yaml file that allows you to include any custom
configuration:

## Configuration file content: required cluster configuration
## Do not override unless you know what you are doing. To add more configuration, use `extraConfiguration` instead
configuration: |-
  ## Clustering
  cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
  cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
  cluster_formation.node_cleanup.interval = 10
  cluster_formation.node_cleanup.only_log_warning = true
  cluster_partition_handling = autoheal
  # queue master locator
  queue_master_locator=min-masters
  # enable guest user
  loopback_users.guest = false

## Configuration file content: extra configuration
## Use this instead of `configuration` to add more configuration
extraConfiguration: |-
  #disk_free_limit.absolute = 50MB
  #management.load_definitions = /app/load_definition.json

This ConfigMap then gets mounted in the container filesystem, as shown in this extract of the
StatefulSet spec:

volumes:
  - name: config-volume
    configMap:
      name: {{ template "rabbitmq.fullname" . }}-config

If the application needs to write in the configuration file, then you'll need to create a copy inside the
container, as ConfigMaps are mounted as read-only. This is done in the same spec:

containers:
  - name: rabbitmq
    image: {{ template "rabbitmq.image" . }}
    imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
    command:
      # ...
      #copy the mounted configuration to both places
      cp /opt/bitnami/rabbitmq/conf/* /opt/bitnami/rabbitmq/etc/rabbitmq
      # ...

This will make your chart not only easy to upgrade, but also more adaptable to user needs, as they
can provide their custom configuration file.

Integrate charts with logging and monitoring tools


If we are talking about production environments, we are talking about observability. It is essential to
have our deployments properly monitored so we can detect potential issues early. It is also essential
to have application usage, cost and resource consumption metrics. In order to gather this
information, you would commonly deploy logging stacks like EFK (Elasticsearch, Fluentd, and Kibana)
and monitoring tools like Prometheus. Bitnami offers the Bitnami Kubernetes Production Runtime
(BKPR), which easily installs these tools (along with others) so your cluster is ready to handle production
workloads.

When writing your chart, make sure that your deployment is able to work with the above tools
seamlessly. To do so, ensure the following:

All the containers log to stdout/stderr (so the EFK stack can easily ingest all the logging
information)

Prometheus exporters are included (either using sidecar containers or having a separate
deployment)

All Bitnami charts work with BKPR (which includes EFK and Prometheus) out of the box. Let's take a
look at the Bitnami PostgreSQL chart and Bitnami PostgreSQL container to see how we did it.

To begin with, the process inside the container runs in the foreground, so all the logging information
is written to stdout/stderr, as shown below:

info "** Starting PostgreSQL **"


if am_i_root; then
exec gosu "$POSTGRESQL_DAEMON_USER" "${cmd}" "${flags[@]}"
else
exec "${cmd}" "${flags[@]}"
fi

With this, we ensured that it works with EFK. Then, in the chart we added a sidecar container for the
Prometheus metrics:

{{- if .Values.metrics.enabled }}
- name: metrics
  image: {{ template "postgresql.metrics.image" . }}
  imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
  {{- if .Values.metrics.securityContext.enabled }}
  securityContext:
    runAsUser: {{ .Values.metrics.securityContext.runAsUser }}
  {{- end }}
  env:
    {{- $database := required "In order to enable metrics you need to specify a database (.Values.postgresqlDatabase or .Values.global.postgresql.postgresqlDatabase)" (include "postgresql.database" .) }}
    - name: DATA_SOURCE_URI
      value: {{ printf "127.0.0.1:%d/%s?sslmode=disable" (int (include "postgresql.port" .)) $database | quote }}
    {{- if .Values.usePasswordFile }}
    - name: DATA_SOURCE_PASS_FILE
      value: "/opt/bitnami/postgresql/secrets/postgresql-password"
    {{- else }}
    - name: DATA_SOURCE_PASS
      valueFrom:
        secretKeyRef:
          name: {{ template "postgresql.secretName" . }}
          key: postgresql-password
    {{- end }}
    - name: DATA_SOURCE_USER
      value: {{ template "postgresql.username" . }}
  {{- if .Values.livenessProbe.enabled }}
  livenessProbe:
    httpGet:
      path: /
      port: http-metrics
    initialDelaySeconds: {{ .Values.metrics.livenessProbe.initialDelaySeconds }}
    periodSeconds: {{ .Values.metrics.livenessProbe.periodSeconds }}
    timeoutSeconds: {{ .Values.metrics.livenessProbe.timeoutSeconds }}
    successThreshold: {{ .Values.metrics.livenessProbe.successThreshold }}
    failureThreshold: {{ .Values.metrics.livenessProbe.failureThreshold }}
  {{- end }}
  {{- if .Values.readinessProbe.enabled }}
  readinessProbe:
    httpGet:
      path: /
      port: http-metrics
    initialDelaySeconds: {{ .Values.metrics.readinessProbe.initialDelaySeconds }}
    periodSeconds: {{ .Values.metrics.readinessProbe.periodSeconds }}
    timeoutSeconds: {{ .Values.metrics.readinessProbe.timeoutSeconds }}
    successThreshold: {{ .Values.metrics.readinessProbe.successThreshold }}
    failureThreshold: {{ .Values.metrics.readinessProbe.failureThreshold }}
  {{- end }}
  volumeMounts:
    {{- if .Values.usePasswordFile }}
    - name: postgresql-password
      mountPath: /opt/bitnami/postgresql/secrets/
    {{- end }}
    {{- if .Values.metrics.customMetrics }}
    - name: custom-metrics
      mountPath: /conf
      readOnly: true
  args: ["--extend.query-path", "/conf/custom-metrics.yaml"]
  {{- end }}
  ports:
    - name: http-metrics
      containerPort: 9187
  {{- if .Values.metrics.resources }}
  resources: {{- toYaml .Values.metrics.resources | nindent 12 }}
  {{- end }}
{{- end }}

We also made sure that the pods or services contain the proper annotations that Prometheus uses to
detect exporters. In this case, we defined them in the chart's values.yaml file, as shown below:

metrics:
  enabled: false
  service:
    type: ClusterIP
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9187"
  # ...

In the case of the PostgreSQL chart, these annotations go to a metrics service, separate from the
PostgreSQL service, which is defined as below:

{{- if .Values.metrics.enabled }}
apiVersion: v1
kind: Service
metadata:
  name: {{ template "postgresql.fullname" . }}-metrics
  labels:
    app: {{ template "postgresql.name" . }}
    chart: {{ template "postgresql.chart" . }}
    release: {{ .Release.Name | quote }}
    heritage: {{ .Release.Service | quote }}
  annotations:
{{ toYaml .Values.metrics.service.annotations | indent 4 }}
spec:
  type: {{ .Values.metrics.service.type }}
  {{- if and (eq .Values.metrics.service.type "LoadBalancer") .Values.metrics.service.loadBalancerIP }}
  loadBalancerIP: {{ .Values.metrics.service.loadBalancerIP }}
  {{- end }}
  ports:
    - name: http-metrics
      port: 9187
      targetPort: http-metrics
  selector:
    app: {{ template "postgresql.name" . }}
    release: {{ .Release.Name }}
    role: master
{{- end }}

With these modifications, your chart will seamlessly integrate with your monitoring platform. All the
obtained metrics will be crucial for maintaining the deployment in good shape.

Production workloads in Kubernetes are possible


Now you know some essential guidelines for creating secure (with non-root containers), adaptable
(with proper configuration management), and observable (with proper monitoring) charts. With these
features, you have covered the basics to ensure that your application can be deployed to
production. However, this is just another step in your journey to mastering Helm. You should also
take into account other features like upgradability, usability, stability and testing.

Useful Links
To learn more, check the following links:

Official Helm chart good practice guidelines

Helm best practices by CodeFresh

Bitnami Kubernetes Production Runtime

Bitnami Helm charts


Running non-root containers on Openshift

Introduction
Over the past few months, Bitnami have been working with non-root containers. We realized that
non-root images adds an extra layer of security to the containers. Therefore, we decided to release
a selected subset of our containers as non-root images so that our users could benefit from them.

In this blog post we see how a Bitnami non-root Dockerfile looks like by checking the Bitnami Nginx
Docker image. As an example of how the non-root containers can be used, we go through how to
deploy Ghost on Openshift. Finally, we will cover some of the issues we faced while moving all of
these containers to non-root containers

What are non-root containers?


By default, Docker containers are run as the root user. This means that you can do whatever you want
in your container, such as install system packages, edit configuration files, bind privileged ports, adjust
permissions, create system users and groups, or access networking information.

With a non-root container you can't do any of this. A non-root container should be configured for
its main purpose, for example, running the Nginx server.

Why use a non-root container?


Mainly because it is a best practice for security. If there is a container engine security issue,
running the container as an unprivileged user will prevent the malicious code from escalating
permissions on the host node. To learn more about Docker's security features, see the Docker
security documentation.

Another reason for using non-root containers is that some Kubernetes distributions force you
to use them, for example Openshift, a Red Hat Kubernetes distribution. This platform runs
whichever container you want with a random UID, so unless the Docker image is prepared to work
as a non-root user, it probably won't work due to permissions issues. The Bitnami Docker images
that have been migrated to non-root containers work out-of-the-box on Openshift.

How to create a non-root container?


To explain how to build a non-root container image, we will use our Nginx non-root container and its
Dockerfile.

FROM bitnami/minideb-extras:jessie-r22
LABEL maintainer "Bitnami <[email protected]>"

ENV BITNAMI_PKG_CHMOD="-R g+rwX"
...
RUN bitnami-pkg unpack nginx-1.12.2-0 --checksum cb54ea083954cddbd3d9a93eeae0b81247176235c966a7b5e70abc3c944d4339
...
USER 1001

ENTRYPOINT ["/app-entrypoint.sh"]
CMD ["nginx","-g","daemon off;"]

The BITNAMI_PKG_CHMOD env var is used to define file permissions for the folders where
we want to write, read or execute. The bitnami-pkg script reads this env var and performs
the changes.

The bitnami-pkg unpack nginx unpacks the Nginx files and changes the permissions as
stated by the BITNAMI_PKG_CHMOD env var.

Up until this point, everything is running as the root user.

Later, the USER 1001 directive switches the user from the default root to 1001. Although we
specify the user 1001, keep in mind that this is not a special user. It might just be any UID
that doesn't match an existing user in the image. Moreover, Openshift ignores the USER
directive of the Dockerfile and launches the container with a random UID.

Because of this, the non-root images cannot have configuration specific to the user running the
container. From this point to the end of the Dockerfile, everything is run by the 1001 user.

Finally, the entrypoint is in charge of configuring Nginx. It is worth mentioning that no nginx,
www-data or similar user is created, as the Nginx process will be running as the 1001 user.
Also, because we are running the Nginx service as an unprivileged user, we cannot bind to
port 80; therefore, we must configure port 8080.

How to deploy Ghost in OpenShift


As an example, let's deploy Ghost, the blog platform. We need a database that runs on Openshift,
like the Bitnami MariaDB container:

For simplicity we will use Minishift, a tool that helps you run OpenShift locally.

Start the cluster and load the Openshift Client environment.

$ minishift start
$ eval $(minishift oc-env)

Deploy both MariaDB and Ghost images:

$ oc new-app --name=mariadb ALLOW_EMPTY_PASSWORD=yes --docker-image=bitnami/mariadb


$ oc new-app --name=ghost --docker-image=bitnami/ghost

Finally expose the Ghost service and access the URL:

$ oc expose svc/ghost
$ oc status


At this point, launch the Minishift dashboard with the following command, check the Ghost logs, and
access the application:

$ minishift dashboard

The logs from the Ghost container show that it has been successfully initialized.

Access the Ghost application by clicking the service URL, which you can find in the top-right corner of
the Minishift dashboard.


Lessons learned: issues and troubleshooting


All that glitters is not gold. Non-root containers have some disadvantages. Below are some issues
we've run into as well as their possible solutions.

Look and feel


When you run the container, the command prompt appears unusual as the user does not exist.

I have no name!@a0e5d8399c5b:/$

Debugging experience
Troubleshooting problems on non-root containers can be challenging. Installing system packages
like a text editor or running network utilities is restricted due to insufficient permissions.

As a workaround, it is possible to edit the Dockerfile to install a system package. Or, we can start the
container as the root user using the --user root flag for Docker or the user: root directive for
docker-compose.

Mounted volumes
Additional challenges arise when you try to mount a folder from your host. Docker preserves the UID
and GID from the host when mounting the host volume, potentially leading to permission issues
within the Docker volume. The user executing the container might lack the necessary privileges to
write to the volume.

Possible solutions are running the container with the same UID and GID as the host, or changing
the permissions of the host folder before mounting it in the container.


Volumes in Kubernetes
Data persistence is configured using persistent volumes. Due to the fact that Kubernetes mounts
these volumes with the root user as the owner, the non-root containers don't have permissions to
write to the persistent directory.

Here are some steps we can take to address these permission issues:

Use an init-container to change the permissions of the volume before mounting it in the
non-root container. Example:

spec:
  initContainers:
    - name: volume-permissions
      image: busybox
      command: ['sh', '-c', 'chmod -R g+rwX /bitnami']
      volumeMounts:
        - mountPath: /bitnami
          name: nginx-data
  containers:
    - image: bitnami/nginx:latest
      name: nginx
      volumeMounts:
        - mountPath: /bitnami
          name: nginx-data

Use Pod Security Policies to specify the user ID and the FSGroup that will own the pod
volumes. (Recommended)

spec:
  securityContext:
    runAsUser: 1001
    fsGroup: 1001
  containers:
    - image: bitnami/nginx:latest
      name: nginx
      volumeMounts:
        - mountPath: /bitnami
          name: nginx-data

Config Maps in Kubernetes


This is a very similar issue to the previous one. Mounting a ConfigMap in a non-root container
creates the file path with root permissions. Therefore, if the container tries to write something else in
that path, it will result in a permissions error. Since Pod Security Policies don't seem to work for
ConfigMaps, we'll need to employ an init-container to address any permission issues if they arise, as
sketched below.

Issues with specific utilities or servers


Some utilities or servers may run some user checks and try to find the user in the /etc/passwd file.

For example, Git versions prior to 2.6.5 required commands to be run as an existing user;
otherwise, they triggered errors like the one below.

$ git clone https://github.com/tompizmor/charts

Cloning into 'charts'...
remote: Counting objects: 7, done.
remote: Total 7 (delta 0), reused 0 (delta 0), pack-reused 7
Unpacking objects: 100% (7/7), done.
Checking connectivity... done.
fatal: unable to look up current user in the passwd file: no such user

Another example of a server facing this problem is Zookeeper. During the startup process,
Zookeeper encounters difficulty in determining the user name or user home. However, this issue is
non-disruptive, as Zookeeper operates flawlessly thereafter.

zookeeper_1 | 2017-10-19 09:55:16,405 [myid:] - INFO [main:Environment@100] - Server environment:os.name=Linux
zookeeper_1 | 2017-10-19 09:55:16,405 [myid:] - INFO [main:Environment@100] - Server environment:os.arch=amd64
zookeeper_1 | 2017-10-19 09:55:16,405 [myid:] - INFO [main:Environment@100] - Server environment:os.version=4.4.0-93-generic
zookeeper_1 | 2017-10-19 09:55:16,405 [myid:] - INFO [main:Environment@100] - Server environment:user.name=?
zookeeper_1 | 2017-10-19 09:55:16,405 [myid:] - INFO [main:Environment@100] - Server environment:user.home=?
zookeeper_1 | 2017-10-19 09:55:16,405 [myid:] - INFO [main:Environment@100] - Server environment:user.dir=/

As we can see above, Zookeeper is unable to determine the user name or the user home.

Non-root containers' lights and shadows


We've observed that creating a Docker image without root privileges is simple and can be a crucial
precaution in the event of a security issue. Deploying such images on an Openshift platform is also
simple. These are compelling reasons to adopt non-root containers more frequently.

Nevertheless, in addition to the aforementioned advantages, we outlined a set of drawbacks that
should be considered before transitioning to a non-root approach, particularly concerning file
permissions.

For a hands-on exploration of the features and issues, check out one of the following Bitnami non-
root containers.

Nginx Kafka Zookeeper Memcached Redis Ghost MariaDB

Also, if you are interested in non-root containers and Kubernetes security, I encourage you to take a
look at the following articles:

Non-Root Containers To Show Openshift Some Love

Unprivileged Containers With Azure Container Instances

How to secure a Kubernetes cluster


Work With Non-Root Containers

Introduction
There are two types of VMware Tanzu Application Catalog (Tanzu Application Catalog) container
images: root and non-root. Non-root images add an extra layer of security and are generally
recommended for production environments. However, because they run as a non-root user,
privileged tasks such as installing system packages, editing configuration files, creating system users
and groups, and modifying network information, are typically off-limits.

This guide gives you a quick introduction to non-root container images, explains possible issues you
might face using them, and also shows you how to modify them to work as root images.

Advantages of non-root containers


Non-root containers are recommended for the following reasons:

Security: Non-root containers are automatically more secure. If there is a container engine
security issue, running the container as an unprivileged user will prevent any malicious code
from gaining elevated permissions on the container host. To learn more about Docker's
security features, see this guide.

Platform restrictions: Some Kubernetes distributions (such as OpenShift) run containers using
random UIDs. This approach is not compatible with root containers, which must always run
with the root user's UID. In such cases, root-only container images will simply not run and
a non-root image is a must.

Potential issues with non-root containers for development


Non-root containers could also have some issues when used for local development:

Failed writes on mounted volumes: Docker mounts host volumes preserving the host UID
and GID. This can lead to permission conflicts with non-root containers, as the user running
the container may not have the appropriate privileges to write to the host volume.

Failed writes on persistent volumes in Kubernetes: Data persistence in Kubernetes is


configured using persistent volumes. Kubernetes mounts these volumes with the root user
as the owner; therefore, non-root containers don't have permissions to write to the
persistent directory.

Issues with specific utilities or services: Some utilities (e.g. Git) or servers (e.g. PostgreSQL)
run additional checks to find the user in the /etc/passwd file. These checks will fail for non-
root container images.

Tanzu Application Catalog non-root containers already fix these issues:

For Kubernetes, the chart uses an initContainer to change the volume permissions properly.

For specific utilities, Tanzu Application Catalog ships the libnss-wrapper package, which
defines custom user space files to ensure the software acts correctly.

Using non-root containers as root containers


If you wish to run a Tanzu Application Catalog non-root container image as a root container image,
you can do so by adding the line user: root right after the image: directive in the container's
docker-compose.yml file. After making this change, simply restart the container and it will run as the
root user with all privileges instead of an unprivileged user.
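For example, a minimal sketch of that change in a docker-compose.yml file (the service name, image, and port are illustrative):

version: '2'
services:
  mariadb:
    image: REGISTRY/mariadb:latest    # replace REGISTRY with your Tanzu Application Catalog registry
    user: root                        # run as root instead of the default unprivileged user
    ports:
      - '3306:3306'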

Useful links
To learn more about the topics discussed in this guide, consider visiting the following links:

Docker security page

Running non-root containers on Openshift


Why non-root containers are important for security

Introduction
As you probably already know, Docker containers typically run with root privileges by default. This
allows for unrestricted container management, which means you can do things like install system
packages, edit config files, bind privileged ports, etc. This is really useful for development purposes,
but can expose you to high risk once you put your containers into a production environment.

Why? Because anyone who accesses your container running as root can start undesirable processes
in it, such as injecting malicious code. And running a process in your container as root makes it
possible to change the user id (UID) or group id (GID) when starting the container, which makes your
application vulnerable (see https://www.projectatomic.io/blog/2016/01/how-to-run-a-more-secure-non-root-user-container/).

Changing the configuration of your containers to make them run as non-root adds an extra layer
of security. Doing so limits the processes that can be executed and who can execute them. In our
quest to continually deliver the latest, most up-to-date and secure applications, Bitnami produces
and maintains a selection of non-root image containers - you can find them in our GitHub repository,
tagged as "non-root".

In this blog post, I will discuss root and non-root containers in more detail, exploring the difference
between the two and the benefits and disadvantages of each. I will also show you an example of how
Bitnami creates non-root containers by editing its Dockerfile to change its user permissions and
environment variables.

Differences between root and non-root containers


As is explained in the Docker security documentation, running containers and applications with
Docker involves running the Docker daemon, and this requires root privileges. Docker needs to
have enough permissions to modify the host filesystem to run; otherwise, your container won't be
initialized.

But containers don't need to be run as the root user. Moreover, applications, databases, load balancers,
etc. shouldn't ever be run as root.

Why not provide your containers with security from the beginning, by running them as a non-root
user?

Following the Principle of Least Privilege (PoLP), the main difference between root and non-root
containers is that the latter are focused on ensuring the minimum amount of privileges necessary to
run a process.

In this sense, root containers offer the following capabilities:

Modifying the container system, allowing the user to do things like edit the host filesystem,
install system packages at runtime, etc.

Binding ports under 1024.

Meanwhile, non-root containers regulate which users in which namespaces are able to execute
specific processes, what volumes can be accessed, and what ports your container is allowed to
access.

Advantages of non-root containers


Security
The most important advantage of running your containers as non-root is to ensure that your
application environment is secure. To put this in perspective, ask yourself this:

"Would I run any process or application as root in my server?" The answer, of course, would be no,
right? So why would you do so in your containers?

Running your containers as non-root prevents malicious code from gaining permissions in the
container host and means that not just anyone who has pulled your container from the Docker Hub
can gain access to everything on your server, for example. If your container gives users privileges,
then anyone could run undesired processes, change the UIDs, or gain access to secrets. You
will also probably want to work with non-root containers in a multi-tenant Kubernetes cluster to
enforce security.

While security is the foremost advantage of non-root containers, there are others.

Avoid platform restrictions


Some Kubernetes distributions, such as Openshift, don't allow you to run containers as root. In the
case of Openshift, for example, it runs containers with random UIDs which are not compatible with
root containers.

Root-only containers simply do not run in that distro. So running non-root containers enables you to
use Kubernetes distributions like Openshift. For more information on this, check out the following
post about Running Non-Root Containers on Openshift.

How does Bitnami create non-root containers?


Bitnami maintains a catalog of more than 80 containers. Some of the infrastructure containers have
been released as non-root.


These are some of the Docker containers that Bitnami has released as non-root:

Nginx Kafka Zookeeper Memcached Node Exporter Prometheus Alert Manager Blackbox Exporter
PHP-FPM Redis Ghost MariaDB

But there are many more Bitnami containers available with non-root privileges. To view all of them,
take a look at those tagged as non-root in the Bitnami GitHub repository.

Let me now explain what tweaks Bitnami made to transform a root container into a non-root
container. To do so, I will use the Bitnami Redis Docker image.

In the Dockerfile of this image, three lines are particularly relevant; I am going to explain the
meaning and behavior of each:

The BITNAMI_PKG_CHMOD env var defines the file permissions for the folders: write, read or
execute.

The ../libcomponent.sh && component_unpack "redis" script is used for unpacking the
Redis files and changing the permissions as stated in the BITNAMI_PKG_CHMOD env var.

At this point, everything has been executed as root user at build time of the container. But the last
highlighted line indicates that the default user must be changed from root to 1001:

USER 1001: this is a non-root user UID, and here it is assigned to the image in order to run
the current container as an unprivileged user. By doing so, the added security and other
restrictions mentioned above are applied to the container.

Basically, an environment variable has been introduced to set the file permissions, and a user has
been specified in the Dockerfile, to avoid running the container as root. That way, any time you run the
container, it will already have the "instructions" to run as a non-root user.

This is only one of the many ways to secure your containers. I encourage you to research other ways
to turn your Docker images into non-root containers, or to take advantage of the ready-to-run non-
root containers already available from Bitnami.

Useful links
If you want to learn more about non-root containers and Docker and Kubernetes security, check out
the following articles:

Docker Security documentation

Work with non-root containers for Bitnami applications

Running non-root containers on Openshift

Bitnami How-to guides for containers

Understanding how uid and gid work in Docker containers by Marc Campbell

Processes In Containers Should Not Run As Root

Just say no to root (containers) by Daniel J. Walsh

Running a Docker container as a non-root user by Lucas Willson-Richter

How to run a more secure non-root user container by Dan Wash


Develop a REST API with VMware Tanzu Application Catalog's Node.js and MongoDB Containers

Introduction
For developers building cloud-native applications and APIs for Kubernetes, VMware Tanzu
Application Catalog (Tanzu Application Catalog) offers a variety of containers and Helm charts to ease
the process. These ready-to-use assets make it easier to develop and deploy applications
consistently, follow best practices and focus on code rather than infrastructure configuration. Tanzu
Application Catalog containers and charts are also always secure, optimized and up-to-date, so you
can rest assured that your applications always have access to the latest language features and
security fixes.

To illustrate these benefits, this tutorial will walk you through the process of developing and
deploying a sample Node.js REST API locally using Tanzu Application Catalog containers. You will
create and run a sample REST API locally on your development system using the Sails framework.
You will also create a local MongoDB service for API data storage, and integrate and test your REST
API with this MongoDB service. To perform these tasks, you can either use your existing Node.js
development environment or, if you don't have one, you can use the following Tanzu Application
Catalog container images:

Tanzu Application Catalog's Node.js container image contains the Node.js runtime together
with all required dependencies and development tools.

Tanzu Application Catalog's MongoDB container image contains the official MongoDB
Community binaries together with support for persistence, SSL and replica sets.

Assumptions and prerequisites


This guide assumes that:

You have Docker installed and configured. Learn more about installing Docker.

You have a basic understanding of Node.js and REST API concepts. Learn more about
Node.js and REST.

Step 1: Create a skeleton Node.js application


The first step is to create a skeleton Node.js application. This article will use the Tanzu Application
Catalog Node.js container image and the popular Sails MVC framework; however, there are multiple
tools and methods to do this and you should feel free to use a different approach or a different
framework. For example, if you already have a Node.js development environment, you can use that

VMware by Broadcom 56
VMware Tanzu Application Catalog Documentation - Tutorials

instead and skip the Docker commands below.

1. Begin by creating a directory for your application and making it the current working
directory:

mkdir myapp
cd myapp

2. Use the following Docker commands to create and start a Tanzu Application Catalog Node.js
container on your host (replace the REGISTRY placeholder with your Tanzu Application
Catalog container registry):

docker create -v $(pwd):/app -t --net="host" --name node REGISTRY/node:13


docker start node

The -v argument to the first command tells Docker to mount the host's current directory into
the container's /app path, so that the effects of commands run in the container are seen on
the host. The --net="host" parameter tells Docker to use the host's network stack for the
container. The container is named node.

Once the container is running, connect to the container console with the command below.
This will give you a command shell and allow you to use the Node.js tools available in the
image for subsequent tasks.

docker exec -it node /bin/bash

3. Install Sails and then use the Sails CLI to create the scaffolding for a skeleton application.
When prompted for the application type, choose an "Empty" application.

npm install -g sails


sails new .

Once the application scaffolding has been generated, start the application:

sails lift

4. By default, a Sails application starts in development mode and runs at port 1337. Browse to
http://DOCKER-HOST-ADDRESS:1337, where DOCKER-HOST-ADDRESS is the IP address of your host, and confirm
that you see the Sails welcome page shown below:


5. Exit the container console. This will terminate the Sails application process, although the
container will continue to run in the background.

Step 2: Create and start a local MongoDB service


MongoDB is a scalable and popular data storage accompaniment for Node.js applications. Tanzu
Application Catalog's MongoDB image makes it easy to create a local MongoDB service which can
be used to store, retrieve and modify data related to your REST API. Alternatively, if you already
have the MongoDB server and a MongoDB database on your host, you can use that instead and skip
the Docker commands below.

Create and start a MongoDB database service using the Tanzu Application Catalog MongoDB
container on your host. If you wish, you can replace the database credentials and other variables
shown below with your own values, but make a note of them as you will need them in the next step.
Replace the REGISTRY placeholder with a reference to your Tanzu Application Catalog container
registry.

docker create -e MONGODB_USERNAME=myapp -e MONGODB_PASSWORD=myapp -e MONGODB_DATABASE=mydb -e MONGODB_ROOT_PASSWORD=root --net="host" --name mongodb REGISTRY/mongodb
docker start mongodb

The environment variables passed to the first command set the administrator password for the
MongoDB instance and also create a new database named mydb with corresponding user credentials.
This database will be used to store data for the REST API. As before, the --net="host" parameter
tells Docker to use the host's network stack for this container as well. The container is named
mongodb and, once started, the MongoDB service will be available on the Docker host at port 27017.

Step 3: Create and configure a REST API endpoint


At this point, you have a skeleton Node.js application and a MongoDB database service. You can
now start creating your REST API. As before, if you're using an existing Node.js development
environment, skip the Docker commands below.


1. Connect to the container console again with the command below:

docker exec -it node /bin/bash

2. Sails comes with a built-in generator for API endpoints. Use this to generate the scaffolding
for a new sample REST API endpoint for Item objects. By default, this endpoint will be
exposed at the /item URI.

sails generate api item

3. Install the MongoDB adapter for Sails:

npm install sails-mongo --save

Exit the Docker container once the installation is complete.

4. Follow the steps outlined in the Sails documentation to configure the generated application
to use MongoDB for data storage. First, edit the myapp/config/datastores.js file and modify
the default datastore entry as shown below.

default: {
  adapter: 'sails-mongo',
  url: 'mongodb://myapp:myapp@localhost/mydb'
}

If you used different values when creating the MongoDB container, or if you're using a
different MongoDB installation, remember to replace the values shown above as needed.

Then, update the id and migrate attributes in the myapp/config/models.js file:

migrate: 'alter',
id: { type: 'string', columnName: '_id' },

5. Create a data model for the REST API Item object. For this article, use a simple model with
just two attributes: a name and a quantity. Edit the myapp/api/models/Item.js and update it
to look like this:

module.exports = {

  attributes: {
    name: 'string',
    quantity: 'number'
  }

};

6. Connect to the container console again. Start the application and put it in the background:

docker exec -it node /bin/bash


sails lift &

Exit the Docker container once the application starts.

As before, the application will start in development mode and become available at port 1337 of the
host.


Step 4: Test the REST API


Your REST API is now active and configured to use MongoDB. You can now proceed to test it from
your host, by sending it various types of HTTP requests and inspecting the responses. If you're using
the Tanzu Application Catalog containers, remember that they are using the host's network stack and
so will be available at ports 1337 (Node.js) and 27017 (MongoDB) respectively.

1. At the host console, send a POST request to the API using curl to create a new item record:

curl -H "Content-Type: application/json" -X POST -d '{"name":"milk","quantity":


"10"}' http://localhost:1337/item

You should see output similar to that shown below:

2. Check if the item record was created with a GET request:

curl http://localhost:1337/item

You should see output similar to that shown below:

You can also connect to the running MongoDB container and use the mongo CLI to see the
data in the MongoDB database.

docker exec -it mongodb /bin/bash

mongo --authenticationDatabase mydb -u myapp -p myapp mydb --eval "db.item.find()"

You should see output similar to that shown below:


3. Modify the item record with a PUT request. Replace the ID placeholder in the command
below with the document's unique identifier from the previous commands.

curl -H "Content-Type: application/json" -X PUT -d '{"name":"milk","quantity":"


5"}' http://localhost:1337/item/ID

You should see output similar to that shown below:

4. Delete the item record with a DELETE request:

curl -H "Content-Type: application/json" -X DELETE http://localhost:1337/item/I


D

You should see output similar to that shown below:

You can also connect to the running MongoDB container and use the mongo CLI to confirm
that the data has been deleted from the MongoDB database:

docker exec -it mongodb /bin/bash

mongo --authenticationDatabase mydb -u myapp -p myapp mydb --eval "db.item.count()"

You should see output similar to that shown below:


At this point, you have a working Node.js REST API integrated with a MongoDB database.

Useful links
To learn more about the topics discussed in this article, use the links below:

Sails documentation

MongoDB documentation


Deploy a Go Application on Kubernetes with Helm

Introduction
This guide walks you through the process of running an example Go application on a Kubernetes
cluster. It uses a simple API for a "to-do list" application. The first step is to create the Go program
binary, insert the binary into a minimal Dockerfile and use it as a starting point for creating a custom
Helm chart to automate the application deployment in a Kubernetes cluster. Once the application is
deployed and working, it also explores how to modify the source code for publishing a new
application release and how to perform rolling updates in Kubernetes using the Helm CLI.

Go features
Go applications are generally suitable for running in containers because they can be compiled as a
single binary that contains all the dependencies they need to run. The compiled binary can run
without the Go runtime, so container images are smaller.

Assumptions and prerequisites


This guide will show you how to deploy an example Go application in a Kubernetes cluster running
on Minikube. The example application is a typical "to-do list" application.

This guide makes the following assumptions:

You have basic knowledge of the Go programming language and have run Go applications
before.

You have a Go development environment.

You have basic knowledge of Helm charts and how to create them.

You have a Docker environment running.

You have an account in a container registry (this tutorial assumes that you are using Docker
Hub).

You have Minikube installed on your local computer.

You have a Kubernetes cluster running.

You have the kubectl command line (kubectl CLI) installed.

You have Helm v3.x installed.

To create your own application in Go and deploy it on Kubernetes using Helm you will typically follow
these steps:


Step 1: Obtain the application source code

Step 2: Build the Docker image

Step 3: Publish the Docker image

Step 4: Create the Helm Chart

Step 5: Deploy the example application in Kubernetes

Step 6: Update the source code and the Helm chart

Step 1: Obtain the application source code


To begin the process, clone the tutorials repository, as shown below for the sample application:

git clone https://github.com/bitnami/tutorials
cd tutorials/go-k8s

This will clone the entire tutorials repository. In this repository you will find the following:

Dockerfile: a text file that contains instructions on how to build a Docker image. Each line
indicates an instruction and a command for assembling all the pieces that comprise the
image.

app-code directory: a folder that contains the application source code and a pre-built binary.

helm-chart directory: a folder that contains the ready-made Helm chart used in this tutorial.

This tutorial skips the step of creating a Go binary, as it is out of the scope of the current guide. It
uses the existing Go binary that is in the app-code directory in the project code repository. Check
the official Go documentation for further information on how to build a Go binary.

Note

The following steps only apply if you already have a Go binary. If you don't, skip
them and build the Docker image as shown in Step 2.

If you have Go already installed, you can simply build it with the following command:

GOOS=linux GOARCH=amd64 go build -tags netgo -o http-sample

There are different ways to build a binary with Go. Using the -tags netgo option prevents the net
package from linking to the host resolver, ensuring that the binary is not linked to any system library.
This is good for containers because we can deploy a very minimal image without any required
libraries (for example, the Docker "scratch" image).

One drawback of this approach is that you have to include any other files your web application needs,
like CA certificates, and there is no shell available inside the image. For these reasons, this tutorial deploys
the Go web application on top of Bitnami's minideb image, which is a minimal version of Debian
designed for use in containers.

Copy the binary to the tutorials/go-k8s/app-code/ folder:

cp http-sample tutorials/go-k8s/app-code/


Once you have completed the steps above, running a Go application is very easy. You only need to
add the binary to a Docker image. Check the Dockerfile you will find in the next step to see how
simple this is.

Step 2: Build the Docker image


To build the Docker image, run the docker build command in the directory containing the Dockerfile.
Add a tag to identify the current version of the application: 0.1.0. Remember to replace the
USERNAME placeholder with your Docker ID.

docker build -t USERNAME/go-k8s:0.1.0 .

You will see output like this during the build process:

Step 3: Publish the Docker image


Now that your Docker image is built and contains your application code, you can upload it into a
public registry. This tutorial uses Docker Hub, but you can select one of your own choice such as:


Google Container Registry

Amazon EC2 Container Registry

Azure Container Registry

To upload the image to Docker Hub, follow the steps below:

Log in to Docker Hub:

docker login

Push the image to your Docker Hub account. Replace the DOCKER_USERNAME
placeholder with the username of your Docker Hub account and my-custom-app:latest with
the name and the version of your Docker image:

docker push DOCKER_USERNAME/my-custom-app:latest

Confirm that you see the image in your Docker Hub repositories dashboard.

Step 4: Create the Helm chart


To create a brand new chart, you just need to run the helm create command. This creates a scaffold
with sample files that you can modify to build your custom chart. For this tutorial, we'll provide you
with a ready-made Helm chart.

If you examine the repository you just downloaded, there is a directory named helm-chart that
already contains the files you need. Check out the chart's file structure:

go-k8s
|-- Chart.yaml
|-- charts
|-- templates
| |-- NOTES.txt
| |-- _helpers.tpl
| |-- deployment.yaml
| |-- ingress.yaml
| `-- service.yaml
`-- values.yaml

With the scaffolding in place, we'll perform the following steps:

Edit the values.yaml file

Edit the templates/deployment.yaml file

Edit the values.yaml file


This file declares variables to be passed into the templates. We have modified the file to include our
Docker image, tag, and service name, ports, type, and database values. USERNAME is a
placeholder. Remember to replace it with your Docker ID.
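The complete file ships with the chart in the repository; as a rough sketch, the relevant values look something like the following (field names and defaults are assumptions and may differ slightly from the actual file):

image:
  repository: USERNAME/go-k8s        # replace USERNAME with your Docker ID
  tag: 0.1.0
  pullPolicy: IfNotPresent

service:
  type: NodePort
  port: 8080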


Edit the templates/deployment.yaml file


This file contains the instructions to launch the application container.
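Again as a rough sketch (the actual template in the repository may differ), the container section of this template typically consumes the values defined above:

containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
    ports:
      - containerPort: {{ .Values.service.port }}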

Step 5: Deploy the example application in Kubernetes

Note

Before performing the following steps, make sure you have a Kubernetes cluster
running with Helm v3.x installed. For detailed instructions, refer to our starter tutorial.

At this point, you have built a Docker image, published it in a container registry and created your
custom Helm chart. It's now time to deploy the example Go application to the Kubernetes cluster. To
deploy the example application using the current Helm chart, follow these steps:

Make sure that you can connect to your Kubernetes cluster by executing the command
below:

kubectl cluster-info

Deploy the Helm chart by executing the following. RELEASE-NAME is a placeholder, so


please replace it with the right value:

helm install RELEASE-NAME ./helm-chart/go-k8s/

This will create a single pod within the cluster.


Once the chart has been installed, you will see a lot of useful information about the deployment. The
application won't be available until database configuration is complete. Follow these instructions to
check the database status:

Run the kubectl get pods command to get a list of running pods:

kubectl get pods

Once you've configured the database, you can access the application. To obtain the application URL,
run the commands shown in the "Notes" section below.

If you're using Minikube, you can also check the application service to get the application's URL. You
have two ways to do this:

Option 1: List the services by executing the minikube service list command. You'll notice two
service names:

minikube service list


Option 2: Check the application service using the minikube service SERVICE command
(SERVICE is a placeholder. Remember to replace it with the name of the service you want to
check):

minikube service SERVICE --url

Congratulations! Your Go application has been successfully deployed on Kubernetes.

Step 6: Update the source code and the Helm chart


As a developer, you'll understand that your application may need new features or bug fixes in the
future. To release a new Docker image, you only have to perform a few basic steps: change your
application source code, rebuild and republish the image in your selected container registry. Once
the new image release has been pushed to the registry, you need to update the Helm chart.

Follow these instructions to complete the application update process:

Compile the new binary and copy it to tutorials/go-k8s/app-code/ folder.

Build the new image and tag it as 0.2.0. Remember to replace the USERNAME placeholder
with your Docker ID in the following steps.

docker build -t USERNAME/go-k8s:0.2.0 .

Note

Before performing the next step, make sure that you are logged into Docker
Hub. Run the docker login command to access your account (if applicable).

Publish the new image following the same steps as in Step 3, but using the new version
number.

docker push USERNAME/go-k8s:0.2.0

Change to the helm-chart/go-k8s/ directory, where you have the Helm chart files. Edit the
values.yaml file to replace the current image tag with the new one:

Run the helm upgrade command followed by the name of the chart. After that, you will see
the information about the new deployment. RELEASE-NAME is a placeholder, replace it with
the right value:

helm upgrade RELEASE-NAME helm-chart/go-k8s/

In your default browser, refresh the page to see the changes:

See what revisions have been made to the chart by executing the helm history command:

helm history RELEASE-NAME

Follow these steps every time you want to update your Docker image and Helm chart.

Useful links
To learn more about the topics discussed in this guide, use the links below:

Go official documentation

Bitnami GitHub

Minikube

Kubernetes

Bitnami development containers

Get Started with Kubernetes

Dealing With JSON With Non-Homogeneous Types In Go

An example of real Kubernetes

Simplify Kubernetes Resource Access Control using RBAC Impersonation

Introduction
Kubernetes, like any other secure system, supports the following concepts:

Authentication: Verifying and proving identities for users, groups, and service accounts

Authorization: Allowing users to perform specific actions with Kubernetes resources

Accounting: Storing subjects' actions, typically for auditing purposes

Authorization - the process of handling users' access to resources - is always a challenge, especially when access is to be controlled by either team membership or project membership. Two key challenges are:

As Kubernetes group membership is handled externally to the API itself by an Identity Provider (IdP), the cluster administrator needs to interact with the Identity Provider administrator to set up those group memberships, making the workflow potentially cumbersome.

Identity Providers may not provide group membership at all, forcing the cluster administrator to handle access on a per-user basis, i.e. Kubernetes RoleBindings containing the "full" list of allowed end-users.

In this tutorial, we propose a way to "mimic" group memberships - which can be either by team,
project or any other aggregation you may need - using stock Kubernetes authorization features.

Assumptions and prerequisites


This article assumes that you:

Have some knowledge about general end-user security concepts

Have some knowledge and experience with RBAC roles and bindings

Understand the difference between authentication and authorization

Have configured your cluster with Kubernetes RBAC enabled (the default since the 1.6 release)

Overview of Kubernetes authentication


Authentication is a key piece of the strategy that any cluster administrator should follow to secure the
Kubernetes cluster infrastructure and make sure that only allowed users can access it.

Here is a quick recap on how Kubernetes approaches authentication. There are two main categories
of users:

ServiceAccounts (SAs):

The ID is managed in-cluster by Kubernetes itself.

Every ServiceAccount has an authentication token (JWT) which serves as its credential.

Users (external Personas or Bot users).

The ID is externally provided, usually by the IdP. There are many mechanisms to
provide this ID, such as:
x509 certs

Static token or user/password files

OpenID Connect Tokens (OIDC) via an external Identity Provider (IdP)

Webhook tokens

Managed Kubernetes providers (e.g. GKE, AKS, EKS) integrated with their
own Cloud authentication mechanisms

The user ID is included in every call to the Kubernetes API, which in turn is authorized by access
control mechanisms.

It's common to adopt OIDC for authentication as it provides a Single-Sign-On (SSO) experience, although some organizations may still use end-user x509 certs as these can be issued without any external IdP intervention. However, these common approaches present the following challenges:

x509 certs: Although they may be easy to set up, users end up owning an x509 bundle (key
and certificate) that cannot be revoked. This forces the cluster owner to specify low
expiration times, obviously depending on staff mobility. Additionally, the user's Group is
written into the x509 certificate itself. This forces the cluster administrator to re-issue the
certificate every time the user changes membership, while being unable to revoke the
previous certificate (i.e. the user will continue to remain a member of older groups until the
previous certificate expires).

OIDC authentication: This method is convenient to provide SSO using the IdP in use by the
organization. The challenge here arises when the provided identity lacks group membership,
or group membership (as setup by the organization) doesn't directly map to users' team or
project memberships regarding their Kubernetes workloads needs.

With the user now authenticated, we need to take a look at how we authorize them to use the
Kubernetes Cluster.

Overview of Kubernetes Authorization and RBAC


There are many resources on the web regarding Kubernetes RBAC. If you are not fully familiar with
these concepts, I recommend this great tutorial on demystifying RBAC in Kubernetes. To learn more
about how to configure RBAC in your cluster, check out this tutorial. Kubernetes RBAC enables
specifying:

A. SUBJECTS who are allowed to

B. do VERBS on RESOURCE KINDS (optionally narrowed to specific RESOURCE NAMES)

In the above model, B) is implemented as a Kubernetes Role (or ClusterRole), while A) → B) binding
is modeled as a Kubernetes RoleBinding (or ClusterRoleBinding), as per below example diagram:
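As a reference, a minimal Role and RoleBinding pair implementing this model could look like the following sketch (the names, namespace and rules are illustrative and not taken from the diagram):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev-app-fe
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev-app-fe
subjects:
  - kind: User
    name: [email protected]
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io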

Using impersonated "virtual-users" to control access


Kubernetes RBAC includes a special impersonate verb that can be used to allow Subjects (i.e. Users, Groups, ServiceAccounts) to acquire another Kubernetes User or Group identity.

As these acquired identities do not necessarily need to exist - recall that the Kubernetes control
plane itself doesn't have a Users or Groups store - we will call them "virtual-users" in this article. This
feature enables setting up “virtual-users” as “role accounts” security Principals. For example:

[email protected], as a member of the Application Frontend (app-fe) team, can impersonate the app-fe-user virtual-user

[email protected], as a member of the Application Backend (app-be) team, can impersonate the app-be-user virtual-user

RBAC rules can be created to allow such "virtual-users" to access the Kubernetes resources they
need, as per below diagram:

As shown above, using stock Kubernetes RBAC features, authorization handling is divided into:

team membership: RBAC ClusterRoles and ClusterRoleBindings that state which Users are allowed to impersonate their team's virtual-user.

team duties: RBAC Roles and RoleBindings that state which actual Kubernetes resources the
team's virtual-user can access.

The actual impersonate action is specified via headers in the Kubernetes API call, which is
conveniently implemented by kubectl via:

kubectl --as <user-to-impersonate> ...


kubectl --as <user-to-impersonate> --as-group <group-to-impersonate> ...

Note

kubectl --as-group … without the --as parameter is not valid. To simplify the CLI
usage, this article proposes using the first form above, by modeling the user-to-
impersonate as a "virtual-user" representing the user group or team memberships.

Following the example shown in the previous diagram, the User "[email protected]" can easily impersonate the virtual team-user "app-fe-user" by passing the --as flag to the kubectl CLI, as shown below:

kubectl --as app-fe-user ...

A working example with RBAC rules


Now that virtual users have been "created", let's see a working example of the RBAC rules in
practice. For this use case, assume the following scenario:

Users [email protected] and [email protected] have been authenticated by some form of SSO (e.g. an OIDC connector to the Google IdP).

They are members of the app-fe (i.e. Application Frontend) team.

The Kubernetes cluster has three namespaces (NS) relevant to app-fe workloads:
development, staging, and production.

A virtual-user named app-fe-user will be created, allowing Alice and Alanis to impersonate it.

The app-fe-user will be granted the following access:


dev-app-fe NS: full admin

staging-app-fe NS: edit access

prod-app-fe NS: only view access

Note

For the sake of simplicity, we will use stock Kubernetes ClusterRoles (usable for
namespace scoped RoleBindings) to implement the above access rules.

Step 1: Prepare the RBAC manifests

The example below implements the idea, using k14s/ytt as the templating language (you can find the ytt source code and the resulting manifest YAML here):

#@ load("impersonate.lib.yml",
#@ "ImpersonateCRBinding", "ImpersonateCRole", "RoleBinding"
#@ )
#@ members = ["[email protected]", "[email protected]"]
#@ prod_namespace = "prod-app-fe"
#@ stag_namespace = "staging-app-fe"
#@ dev_namespace = "dev-app-fe"
#@ team_user = "app-fe-user"

#! Add impersonation bindings <members> -> team_user


--- #@ ImpersonateCRBinding(team_user, members)
--- #@ ImpersonateCRole(team_user)

#! Allow *team_user* virtual-user respective access to below namespaces


--- #@ RoleBinding(team_user, prod_namespace, "view")
--- #@ RoleBinding(team_user, stag_namespace, "edit")
--- #@ RoleBinding(team_user, dev_namespace, "admin")

The resulting YAML output can be pushed via kubectl as usual by executing the following command:

$ ytt -f . | kubectl apply -f- [ --dry-run=client ]
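For reference, the rendered output consists of standard RBAC objects. A minimal sketch of what the impersonation pieces could expand to is shown below; the exact names depend on impersonate.lib.yml, so treat this as an illustration rather than the template's literal output:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: impersonate-app-fe-user
rules:
  - apiGroups: [""]
    resources: ["users"]
    verbs: ["impersonate"]
    resourceNames: ["app-fe-user"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: impersonate-app-fe-user
subjects:
  # one entry per member listed in the ytt template
  - kind: User
    name: [email protected]
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: impersonate-app-fe-user
  apiGroup: rbac.authorization.k8s.io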

Note

User Identity ("[email protected]" in this example) may be provided by any of the authentication mechanisms discussed at the beginning of this article.

Step 2: Test the setup

After pushing the above RBAC resources to the cluster, Alice can use the kubectl auth can-i command to verify the setup. For example:

$ kubectl auth can-i delete pod -n dev-app-fe
no
$ kubectl --as app-fe-user auth can-i delete pod -n dev-app-fe
yes
$ kubectl --as foo-user auth can-i get pod
Error from server (Forbidden): users "foo-user" is forbidden: User "[email protected]"
cannot impersonate resource "users" in API group "" at the cluster scope

This totally feels like sudo for Kubernetes, doesn't it?

Step 3: Save the impersonation setup to your Kubernetes config file

To have the configuration pre-set for impersonation, there are some not-so-well-documented fields that can be added to the "user:" entry in the user's KUBECONFIG file:

- name: [email protected]@CLUSTER
  user:
    as: app-fe-user
    auth-provider:
      config:
        client-id: <...>.apps.googleusercontent.com
        client-secret: <...>
        id-token: <... JWT ...>
        idp-issuer-url: https://accounts.google.com
        refresh-token: 1//<...>
      name: oidc

This persistent setup is useful as it avoids the need to:

provide --as … argument to kubectl on each invocation

require other Kubernetes tools to support impersonation, e.g. helm is a notable example
lacking this feature

Audit trails

Kubernetes impersonation is well designed regarding audit trails, as API calls get logged with the full original identity (user) and the impersonated user (impersonatedUser). The following code block shows a kube-audit log trace generated by kubectl --as app-fe-user get pod -n dev-app-fe:

{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "Request",
  "auditID": "032beea1-8a58-434e-acc0-1d3a0a98b108",
  "stage": "ResponseComplete",
  "requestURI": "/api/v1/namespaces/dev-app-fe/pods?limit=500",
  "verb": "list",
  "user": {
    "username": "[email protected]",
    "groups": ["system:authenticated"]
  },
  "impersonatedUser": {
    "username": "app-fe-user",
    "groups": ["system:authenticated"]
  },
  "sourceIPs": ["10.x.x.x"],
  "userAgent": "kubectl/v1.18.6 (linux/amd64) kubernetes/dff82dc",
  "objectRef": {
    "resource": "pods",
    "namespace": "dev-app-fe",
    "apiVersion": "v1"
  },
  "responseStatus": {
    "metadata": {},
    "code": 200
  },
  "requestReceivedTimestamp": "2020-07-24T21:25:50.156032Z",
  "stageTimestamp": "2020-07-24T21:25:50.161565Z",
  "annotations": {
    "authorization.k8s.io/decision": "allow",
    "authorization.k8s.io/reason": "RBAC: allowed by RoleBinding \"rb-app-fe-user-admin\" of ClusterRole \"admin\" to User \"app-fe-user\""
  }
}

Picky readers will also note that in the "user" field above, only "username" is relevant to the User identity, as "system:authenticated" is a generic group value.

Conclusion
By using stock Kubernetes RBAC features, a cluster administrator can create virtual user security
Principals which are impersonated by Persona Users, to model "role-account" authorization
schemes. This approach provides numerous benefits related to the Kubernetes security
configuration, as follows:

It requires the authentication mechanism to only provide User identity data (i.e. no Group
needed).

It allows the Kubernetes cluster administrator to build team membership schemes using the
stock Kubernetes RBAC impersonate feature.

It allows the Kubernetes cluster administrator to create RBAC rules granting access to Kubernetes Resources to these impersonated "virtual-users" (Kubernetes RoleBinding "subjects", usually just a single entry).

It decouples membership from actual resource access rules which allows creating cleaner
RBAC entries. Such entries are easier to maintain and audit, reducing complexity and
workload for cluster administrators.

It benefits the Organization as the cluster administrator can more precisely implement team
and project controls, easing staff on/off boarding and related security aspects.

Useful links
RBAC, from Wikipedia

Kubernetes authentication

Kubernetes RBAC

Kubernetes user impersonation

k14s/ytt templating tool

Assign Pods to Nodes with Bitnami Helm Chart Affinity Rules

Introduction
When you install an application in a Kubernetes cluster, the Kubernetes scheduler decides on which nodes the application pods will be placed unless certain constraints are defined. For example, the Kubernetes scheduler may decide to place application pods on the node with the most available memory. This behavior is usually desirable, except when cluster administrators prefer to distribute a group of pods across the cluster in a specific manner. For this use case, they need a way to force Kubernetes to follow custom rules specified by the user.

Affinity rules supply a way to force the scheduler to follow specific rules that determine where pods should be distributed. To help users implement affinity rules, Bitnami has enhanced its Helm charts by including opinionated affinities in their manifest files. Cluster administrators now only need to define the criteria to be followed by the scheduler when placing application pods on cluster nodes. They can then enable this feature via a simple install-time parameter.

This tutorial will demonstrate the available affinity rules and how they can be adapted to your needs.

Assumptions and prerequisites

Note

This guide uses a Kubernetes cluster created in GKE. These steps are the same for all Kubernetes engines. They don't work, however, in Minikube, since with Minikube you can only create single-node clusters.

This article assumes that:

You have a Google Cloud account. To register, see Google Cloud account.

You have a Kubernetes cluster running with Helm v3.x and kubectl installed. To learn more,
see Getting started with Kubernetes and Helm using different cloud providers.

How Affinity Rules Work in Bitnami Helm charts


All Bitnami infrastructure solutions available in the Bitnami Helm charts catalog now include pre-defined affinity rules exposed through the podAffinityPreset and podAntiAffinityPreset parameters in their values.yaml file:

Pod affinity and anti-affinity rules enable you to define how the scheduler should operate when
locating application pods in your cluster's eligible nodes. Based on the option you select, the
scheduler will operate in the following manner.

podAffinityPreset: Using the podAffinity rule, the scheduler will locate a new pod on the same node where other pods with the same label are located. This approach is especially helpful to group, on the same node, pods that meet specific pre-defined patterns.

podAntiAffinityPreset: Using the podAntiAffinity rule, the scheduler places one pod on each node, preventing a new pod from being located on the same node where other matching pods are already running. This option is convenient if your deployment demands high availability.

Distributing the pods across all nodes enables Kubernetes to maintain high availability in your cluster by keeping the application running on the remaining nodes in the event of a node failure.

These are the values you can set for both pod affinity and anti-affinity rules:

Soft: Use this value to make the scheduler enforce a rule wherever it can be met (a best-effort approach). If the rule cannot be met, the scheduler will deploy the required pods on the nodes with enough resources.

Hard: Use this value to make the scheduler strictly enforce a rule. This means that if there are remaining pods that do not comply with the pre-defined rule, they won't be scheduled on any node.

Bitnami Helm charts enable the podAntiAffinity rule with the soft value by default. Hence, if there are not enough nodes to place one pod per node, the scheduler is left to decide where the remaining pods should be located.
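For reference, when an anti-affinity preset is enabled the chart renders a standard Kubernetes affinity stanza into the pod spec. The sketch below shows roughly what a soft preset translates to; the exact labels, weight and topology key depend on the chart, so check the rendered manifests of your release:

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        podAffinityTerm:
          topologyKey: kubernetes.io/hostname
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: mysql
              app.kubernetes.io/instance: mysql

A hard preset uses requiredDuringSchedulingIgnoredDuringExecution instead, which is why pods that cannot satisfy the rule remain unscheduled.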

The following section shows two different use cases of configuring the podAntiAffinity parameter.

Deploying a chart using the podAntiAffinity rule


The following examples illustrate how the podAntiAffinity rule works in the context of the Bitnami
MySQL Helm chart. They cover two use cases: installing the chart with the default podAntiAffinity
value and changing the podAntiAffinity value from soft to hard.

Use case 1: Install the chart with the default podAntiAffinity value
Install the Bitnami Helm charts repository by running:

helm repo add bitnami https://charts.bitnami.com/bitnami

Deploy the MySQL Helm chart by executing the command below. Note that the chart will be deployed in a cluster with three nodes and two replicas - one primary and one secondary.

To make the scheduler follow the default podAntiAffinity rule, set the parameter as follows:

helm install mysql bitnami/mysql --set architecture=replication --set secondary.replicaCount=2 --set secondary.podAntiAffinityPreset=soft

Verify the cluster by checking the nodes. To list the connected nodes, execute:

kubectl get nodes

You will see an output message showing that three nodes are running in the cluster.

Check how the pods are distributed. Execute:

kubectl get pods -o wide

As expected, the primary and the secondary pods are on different nodes.

To verify how the scheduler acts when the soft value is defined, scale up the cluster by setting the
number of secondary replicas to three instead of one. Thus, the resulting number of pods will be
four, instead of two.

To scale the cluster, execute:

kubectl scale sts/mysql-secondary --replicas 3

Check the pods by running the kubectl get pods command again. Because the soft value was used, the scheduler was left to place the remaining pod that couldn't comply with the "one-pod-per-node" rule: note that two pods are running on the same node.

Use case 2: Change the podAntiAffinity value from soft to hard



To try the hard type of the podAntiAffinity rule, deploy the chart again, changing the secondary.podAntiAffinityPreset value from soft to hard as shown below. The chart will be deployed in the same three-node cluster with two replicas - one primary and one secondary.

helm install mysql-hard bitnami/mysql --set architecture=replication --set secondary.replicaCount=2 --set secondary.podAntiAffinityPreset=hard

Check the nodes and the pods by running the kubectl get nodes and the kubectl get pods -o wide commands:

Both the primary and secondary pods are running on the same node.

To verify how the scheduler acts when the hard value is defined, scale up the cluster by
setting the number of secondary replicas to three instead of one. Thus, the resulting number
of pods will be four, instead of two.

To scale up the cluster, execute:

kubectl scale sts/mysql-hard-secondary --replicas 3

When checking the pods, you will see that the scheduler has enforced the "one-pod-per-node" rule and has placed only as many pods as there are nodes. The fourth pod was not deployed, as there are only three nodes available.
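For illustration only, the output of kubectl get pods would look roughly like the sketch below (pod names, counts and ages depend on your release); note the Pending pod that could not be scheduled:

NAME                     READY   STATUS    RESTARTS   AGE
mysql-hard-primary-0     1/1     Running   0          10m
mysql-hard-secondary-0   1/1     Running   0          10m
mysql-hard-secondary-1   1/1     Running   0          2m
mysql-hard-secondary-2   0/1     Pending   0          2m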

The podAntiAffinity rule is an easy way to control how application pods will be distributed across the
cluster nodes when installing a Helm chart. Deploy your favorite Bitnami applications and enable this
feature via a simple install-time parameter.

Useful links
To learn more about the topics discussed in this article, use the links below:

Bitnami Helm charts catalog

Bitnami Helm charts documentation

Kubernetes scheduler documentation

Kubernetes pod affinity documentation

Resolve Chart Upgrade Issues After Migrating to Helm v3

Introduction
Helm v3 was released a few months ago, bringing with it a number of architectural changes and new
features - most notably, the removal of Tiller and an improved upgrade process. To make it easier
for users to transfer their Helm v2 releases to Helm v3, the Helm maintainers also released a plugin
that takes care of migration tasks automatically.

After migrating your releases to Helm v3, you might come across some cases where subsequent
upgrades fail - for example, when an upgrade attempts to modify an immutable field in a StatefulSet.
In these situations, you can attempt to resolve the issue using one of the following methods:

Back up the data from the migrated release and restore it in a new release using the
application's built-in backup/restore tools.

Back up the persistent volumes from the migrated release and re-deploy them in a new
release using Velero, a Kubernetes backup/restore tool.

This guide walks you through both these methods.

Assumptions and prerequisites


This guide makes the following assumptions:

You have a Kubernetes cluster with kubectl and Helm v3 installed. This guide uses a Google
Kubernetes Engine (GKE) cluster but you can also use any other Kubernetes provider. Learn
about deploying a Kubernetes cluster on different cloud platforms and how to install kubectl
and Helm.

You have previously deployed a Bitnami Helm chart using Helm v2, added data to it and then
migrated it to Helm v3 using the Helm migration plugin. Example command sequences to
perform these tasks are shown below, where the PASSWORD and REPL-PASSWORD
placeholders refer to the database and replication user passwords respectively.

helm2 repo add bitnami https://charts.bitnami.com/bitnami


helm2 install --name postgres bitnami/postgresql \
--set postgresqlPassword=PASSWORD \
--set replication.password=REPL-PASSWORD \
--set replication.slaveReplicas=1 \
--set replication.enabled=true \
--namespace default
helm3 plugin install https://github.com/helm/helm-2to3
helm3 2to3 move config
helm3 2to3 convert postgres

Note

For illustrative purposes, this guide demonstrates how to resolve post-migration


upgrade issues using the Bitnami PostgreSQL Helm chart. However, the same
approach can also be followed for other Bitnami Helm charts, subject to certain
caveats explained in the following sections.

Throughout this guide, helm2 refers to the Helm v2 CLI and helm3 refers to the Helm v3 CLI.

Method 1: Back up and restore data using built-in application tools

This method involves using the application's built-in backup/restore functionality to back up the data in the existing release and then restore this data in a new Helm v3 release. This method is only suitable for applications which have built-in backup/restore functionality.

Step 1: Back up data using built-in PostgreSQL tools


The first step is to back up the data in the running PostgreSQL release. Follow these steps:

Obtain the PostgreSQL password:

export POSTGRES_PASSWORD=$(kubectl get secret --namespace default postgres-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)

Forward the PostgreSQL service port:

kubectl port-forward --namespace default svc/postgres-postgresql 5432:5432 &

Back up the contents of all the databases to a file using the PostgreSQL pg_dumpall tool. If
this tool is not installed on your system, use Bitnami's PostgreSQL Docker container image,
which contains this and other PostgreSQL client tools, to perform the backup, as shown
below:

docker run --rm --name postgresql -e "PGPASSWORD=$POSTGRES_PASSWORD" --net="host" bitnami/postgresql:latest pg_dumpall -h 127.0.0.1 -U postgres > all.sql

Here, the --net parameter lets the Docker container use the host's network stack and
thereby gain access to the forwarded port. The pg_dumpall command connects to the
PostgreSQL service and creates an SQL output file containing all the database structures and
records in the PostgreSQL cluster. Finally, the --rm parameter deletes the container after the
pg_dumpall command completes execution.

Stop the service port forwarding.

At the end of this step, you should have a backup file containing the data from your running
PostgreSQL release.

Step 2: Restore the data into a new PostgreSQL release

The next step is to create an empty PostgreSQL cluster and restore the data into it:

Create a new PostgreSQL release in a separate namespace using Helm v3. Replace the
PASSWORD and REPL-PASSWORD placeholders with the database and replication user
passwords respectively.

kubectl create namespace postgres-new


helm3 repo add bitnami https://charts.bitnami.com/bitnami
helm3 install postgres bitnami/postgresql \
--namespace postgres-new \
--set postgresqlPassword=PASSWORD \
--set replication.password=REPL-PASSWORD \
--set replication.slaveReplicas=1 \
--set replication.enabled=true

Note

It is important to create the new release using the same credentials as the
original release to avoid authentication problems.

Create an environment variable with the password for the new release:

export POSTGRES_PASSWORD=$(kubectl get secret --namespace postgres-new postgres-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)

Forward the PostgreSQL service port for the new release:

kubectl port-forward --namespace postgres-new svc/postgres-postgresql 5432:5432 &

Restore the contents of the backup file into the new release using the psql tool. If this tool is
not available on your system, mount the directory containing the backup file as a volume in
Bitnami's PostgreSQL Docker container and use the psql client tool in the container image to
import the backup file's contents into the new cluster, as shown below:

docker run --rm --name postgresql -v $(pwd):/app -e "PGPASSWORD=$POSTGRES_PASSWORD" --net="host" bitnami/postgresql:latest psql -h 127.0.0.1 -U postgres -d postgres -f /app/all.sql

Here, the -v parameter mounts the current directory (containing the backup file) to the
container's /app path. Then, the psql client tool is used to connect to the PostgreSQL service
and execute the SQL commands in the backup file, thereby restoring the data from the
original deployment. As before, the --rm parameter destroys the container after the
command completes execution.

Stop the service port forwarding.

Connect to the new deployment and confirm that your data has been successfully restored:

kubectl run postgres-postgresql-client --rm --tty -i --restart='Never' --namespace postgres-new --image docker.io/bitnami/postgresql:latest --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host postgres-postgresql -U postgres -d postgres -p 5432

Step 3: Test the upgrade process (optional)


You should now be able to upgrade to a new release. You can test this with the following command,
replacing the VERSION placeholder with the chart version you wish to upgrade to:

helm3 upgrade --version VERSION postgres bitnami/postgresql \
--namespace postgres-new \
--set postgresqlPassword=PASSWORD \
--set replication.password=REPL-PASSWORD \
--set replication.slaveReplicas=1 \
--set replication.enabled=true

Note

When upgrading the release, use the same parameters as when you installed it.

After confirming that all is in order, you can optionally delete your original release.

Method 2: Back up and restore persistent data volumes


This method involves copying the application's persistent data volumes and reusing them in a new
release. This method is only suitable for charts that allow using an existing persistent volume claim at
install-time and on platforms supported by Velero. Many Bitnami Helm charts support this feature;
review your specific chart's available parameters for more information.

Step 1: Install Velero


Velero is an open source tool that makes it easy to backup and restore Kubernetes resources. It can
be used to back up an entire cluster or, as shown below, it can be fine-tuned to only backup specific
resources such as persistent volumes.

Follow the Velero plugin setup instructions for your cloud provider. For example, if you are
using Google Cloud Platform (as this guide does), follow the GCP plugin setup instructions to
create a service account and storage bucket and obtain a credentials file.

Then, install Velero by executing the command below, remembering to replace the
BUCKET-NAME placeholder with the name of your storage bucket and the SECRET-
FILENAME placeholder with the path to your credentials file:

velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.0.0 --bucket BUCKET-NAME --secret-file SECRET-FILENAME

You should see output similar to the screenshot below as Velero is installed:

Confirm that the Velero deployment is successful by checking for a running pod using the
command below:

kubectl get pods -n velero

Step 2: Back up the persistent data volumes


Next, back up the persistent volumes using Velero.

Create a backup of the volumes in the running PostgreSQL deployment. This backup will
contain both the master and slave volumes.

velero backup create pgb --include-resources pvc,pv --selector release=postgres

To view the contents of the backup and confirm that it contains all the required resources,
execute:

velero backup describe pgb --details

To avoid the backup data being overwritten, switch the bucket to read-only access:

kubectl patch backupstoragelocation default -n velero --type merge --patch '{"spec":{"accessMode":"ReadOnly"}}'

Step 3: Copy the persistent volumes to a new PostgreSQL release


You can now restore the persistent volumes and integrate them with a new Helm v3 release.

Restore the persistent volumes in a separate namespace using Velero. The --namespace-
mappings parameter allows you to map resources from the original namespace to the new
one.

kubectl create namespace postgres-new

velero restore create --from-backup pgb --namespace-mappings default:postgres-new

Confirm that the persistent volumes have been restored in the target namespace and note
the volume name for the master database node:

kubectl get pvc --namespace postgres-new
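The output typically lists one claim per database node, following the data-RELEASE-postgresql-... naming pattern used by the chart. A sketch of what you might see (volume names, sizes and storage classes will differ):

NAME                                STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-postgres-postgresql-master-0   Bound    pvc-...   8Gi        RWO            standard       5m
data-postgres-postgresql-slave-0    Bound    pvc-...   8Gi        RWO            standard       5m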

Delete the persistent volume corresponding to the slave node and retain only the volume
corresponding to the master node. If there is more than one slave volume (depending on
how you originally deployed the chart), delete all the slave volumes.

kubectl delete pvc --namespace postgres-new SLAVE-PVC-NAME

Create a new PostgreSQL release in the target namespace using Helm v3. Use the chart's
persistence.existingClaim parameter to create a release with an existing volume instead of a
fresh one. Replace the PASSWORD and REPL-PASSWORD placeholders with the same
database and replication user passwords used in the original release, and the MASTER-PVC-
NAME placeholder with the name of the restored master node volume.

helm3 repo add bitnami https://charts.bitnami.com/bitnami


helm3 install postgres bitnami/postgresql \
--set postgresqlPassword=PASSWORD \
--set replication.password=REPL-PASSWORD \
--set replication.slaveReplicas=1 \
--set replication.enabled=true \
--namespace postgres-new \
--set persistence.existingClaim=MASTER-PVC-NAME

Note

It is important to create the new release using the same credentials as the
original release to avoid authentication problems.

This will create a new release that uses the original master volume (and hence the original
data). Note that if replication is enabled, as in the example above, installing the chart will
automatically create a new slave volume for each slave node.

Connect to the new deployment and confirm that your original data is intact:

kubectl run postgres-postgresql-client --rm --tty -i --restart='Never' --namespace postgres-new --image docker.io/bitnami/postgresql:latest --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host postgres-postgresql -U postgres -d postgres -p 5432

Step 4: Test the upgrade process (optional)


You should now be able to upgrade to a new release. You can test this with the following command,
replacing the VERSION placeholder with the chart version you wish to upgrade to:

helm3 upgrade --version VERSION postgres bitnami/postgresql --set postgresqlPassword=hell0 --set replication.password=repl --set replication.slaveReplicas=1 --set replication.enabled=true --namespace postgres-new --set persistence.existingClaim=data-postgres-postgresql-master-0

Note

When upgrading the release, use the same parameters as when you installed it.

After confirming that all is in order, you can optionally delete your original release.

Useful links
Bitnami PostgreSQL Helm chart

PostgreSQL client applications pg_dumpall and psql.

Velero documentation

Troubleshoot Kubernetes deployments

Introduction
Kubernetes is the recommended way to manage containers in production. To make it even easier to
work with Kubernetes, Bitnami offers:

Stable, production-ready Helm charts to deploy popular software applications, such as WordPress, Magento, Redmine and many more, in a Kubernetes cluster

Kubeapps, a set of tools to super-charge your Kubernetes cluster with an in-cluster deployment dashboard and out-of-the-box support for Kubeless, a Kubernetes-native Serverless Framework, and SealedSecrets.

In many cases, though, you might find that your Bitnami application has not been correctly deployed
to your Kubernetes cluster, or that it is not behaving the way it should. This guide provides some
broad troubleshooting ideas and solutions to common problems you might encounter with your
Kubernetes cluster.

How to detect
Kubernetes issues are not always very easy to detect. In some cases, errors are obvious and can be
resolved easily; in others, you might find that your deployed application is not responding, but
detecting the error requires you to dig deeper and run various commands to identify what went
wrong.

Typically, running kubectl get pods gives you a top-level overview of whether pods are running
correctly, or which pods have issues. You can then use kubectl describe and kubectl logs to obtain
more detailed information.

Common issues
The following are the most common issues that Bitnami users face:

Pods in Pending, CrashLoopBackOff or Waiting state

PVCs in Pending state

DNS lookup failures for exposed services

Non-responsive pods or containers

Authentication failures

Difficulties in finding the external IP address of a node

Troubleshooting checklist
The following checklist covers the majority of the cases described above and will help you to find and
debug most Kubernetes deployment issues.

Does your pod status show Pending?

If kubectl get pods shows that your pod status is Pending, this means that the pod could not be scheduled on a node. Typically, this is because of insufficient CPU or memory resources; it could also arise due to the absence of a network overlay or a volume provider. A CrashLoopBackOff status, on the other hand, means that the pod was scheduled but its container keeps crashing and being restarted. To confirm the cause, note the pod identifier from the previous command and then run the command below, replacing the POD-UID placeholder with the correct identifier:

kubectl describe pod POD-UID

The output of the command should provide information about why the pod is pending. Here's an
example:
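As an illustration, the relevant part of the output for a pod that cannot be scheduled due to insufficient resources usually contains an event similar to the sketch below (the exact message and numbers depend on your cluster):

Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  10s   default-scheduler  0/3 nodes are available: 3 Insufficient cpu.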

If available resources are insufficient, try freeing up existing cluster resources or adding more nodes
to the cluster to increase the available cluster resources.

Does your pod status show ImagePullBackOff or ErrImagePull?

If kubectl get pods shows that your pod status is ImagePullBackOff or ErrImagePull, this means that
the pod could not run because it could not pull the image. To confirm this, note the pod identifier
from the previous command and then run the command below, replacing the POD-UID placeholder
with the correct identifier:

kubectl describe pod POD-UID

The output of the command should provide more information about the failed pull. Check that the
image name is correct and try pulling the image manually on the host using docker pull. For
example, to manually pull an image from Docker Hub, use the command below:

docker pull IMAGE

Does your PVC status show Pending?

If kubectl get pvc shows that your PVC status is Pending when using a Bitnami Helm chart, this may
be because your cluster does not support dynamic provisioning (such as a bare metal cluster). In this
case, the pod is unable to start because the cluster is unable to fulfil the request for a persistent
volume and attach it to the container.

To fix this, you must manually configure some persistent volumes or set up a StorageClass resource and provisioner for dynamic volume provisioning, such as the NFS provisioner. Learn more about dynamic provisioning and storage classes.

Are you unable to look up or resolve Kubernetes service names using DNS?

This usually occurs when the service has not been properly registered. The service may be using a
different namespace or may simply not be available. Try these steps:

Check if the service name you are using is correct.

Run these commands to check if the service is registered and the pods selected:

kubectl get svc


kubectl get endpoints

If the service is registered, run kubectl get pods, get the UID for your pod and then run the
command below. Replace the POD-UID placeholder with the pod UID and the SERVICE-
NAME placeholder with the DNS name of the service. This will give you an indication if the
DNS resolution is working or not.

kubectl exec -ti POD-UID nslookup SERVICE-NAME

If the error persists, then confirm that DNS is enabled for your Kubernetes cluster. If you're
using minikube, the command minikube addons list will give you a list of all enabled features.
If it is disabled, enable it and try again.

Is kubectl unable to find your nodes?

When running kubectl get nodes, you may see the following error:

the server doesn't have a resource type "nodes"

This occurs because the authentication credentials are not correctly set. To resolve this, copy the
configuration file /etc/kubernetes/admin.conf to ~/.kube/config in a regular user account (with sudo
if necessary) and try again. This command should not be performed in the root user account.

cp /etc/kubernetes/admin.conf ~/.kube/config

Is kubectl not permitting access to certain resources?

When using kubectl, you may see the following error:

the server does not allow access to the requested resource

This typically occurs when Kubernetes Role-Based Access Control (RBAC) is enabled, the default
situation from Kubernetes 1.6 onwards. To resolve this, you must create and deploy the necessary
RBAC policies for users and resources. Read our RBAC guide for more information and examples.

Are you unable to find the external IP address of a node?

If using minikube, the command minikube ip will return the IP address of the node.

If not using minikube, the command kubectl get nodes -o yaml will show you, amongst other data, the IP address of the node.

Is your pod running but not responding?

If your pod is running but appears to be non-responsive, it could be due to a failed process in the
container - for example, because of an invalid configuration or insufficient storage space. To check
this, you can log into the running container and check that the required processes are running.

To do this, first open a shell to the container and then, at the container shell, use ps to check for
running processes:

kubectl exec -ti POD-UID -- /bin/bash


> ps ax

If the required process is not running, inspect the container logs to identify and resolve the issue.

It is also helpful to look at the RESTARTS column of the kubectl get pods output to know if the pod is
repeatedly being restarted and the READY column to find out if the readiness probe (health check)
is executing positively.

Useful links
The following resources may be of interest to you:

Application troubleshooting guide

Configure RBAC in your Kubernetes cluster

Secure Kubernetes with pod security policies or network policies

Secure your Kubernetes application with network policies

Introduction
This guide walks you through the basic Kubernetes network policy operations. First you are
introduced to the Kubernetes network policies and network plugins. Then you apply these concepts
to a multi-tier WordPress installation in kubeadm by completing the following steps:

Install a Network plugin.

Deploy a WordPress Helm chart.

Apply two network policies for securing the WordPress deployment.

At the end of this guide, you should be able to install a Kubernetes network plugin and apply
network policies in your cluster. The examples described in this guide were tested in kubeadm, but
they can be applied to any Kubernetes cluster.

By default, all pods in a Kubernetes cluster can communicate freely with each other without any
issues. In many environments, you can isolate the services running in pods from each other by
applying network restrictions. For example, the following can improve your cluster's security:

Only allow traffic between pods that form part of the same application. For example, in a
frontend-backend application, only allow communication to the backend from frontend pods.

Isolate pod traffic in namespaces. That is, a pod can only communicate with pods that belong
to the same namespace.

This guide shows you how you can use the Kubernetes network policies to apply these kinds of
restrictions. These restrictions can be combined with pod security policies which are explained in this
guide.

Assumptions and prerequisites


This guide makes the following assumptions:

You have kubeadm installed.

You have a Kubernetes cluster running.

You have the kubectl command line (kubectl CLI) installed.

You have Helm v3.x installed.

You have an advanced level of understanding of how Kubernetes works, and its core
resources and operations. You are expected to be familiar with concepts like:
Pods

Deployments

Namespaces

Replicasets

PersistentVolumes

ConfigMaps

Nodes

Network policies in Kubernetes


A network policy is a set of network traffic rules applied to a given group of pods in a Kubernetes
cluster. Just like every element in Kubernetes, it is modeled using an API Resource: NetworkPolicy.
The following describes the broad structure of a network policy:

The metadata section of the policy specifies its name.

The spec section of the policy outlines the key criteria a pod must fulfil in order to be allowed
to run.

Here is a brief description of the main options available (you can find more details in the
official Kubernetes API Reference):

podSelector: establishes the pods to which the policy applies - that is, the destination pods that can accept the traffic allowed by the rules defined in the next element. Pods can be specified using the following criteria:

namespaceSelector: a pod belongs to a given namespace.

labelSelector: a pod contains a given label.

Network Policy Ingress Rules (ingress): establishes a set of allowed traffic rules. You
can specify:

from (origin pods): specifies which pods are allowed to access the previously
specified destination pods. Just like with destination pods, these origin pods
can be specified using NamespaceSelectors and LabelSelectors.

ports (allowed ports): specifies which destination pod's ports can be accessed
by the origin pods.
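Putting these fields together, a minimal network policy skeleton looks like the sketch below (the names, labels and port are illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080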

For more information about these fields, see the official Kubernetes documentation.

Kubernetes network plugins


In order to implement network policies in your cluster, you must use a compatible container network
plugin.

In a usable Kubernetes cluster, pods must be able to communicate between themselves, with
services and outside the cluster. Kubernetes is flexible enough (thanks to the use of the Container Networking Interface, or CNI) to allow the administrator to choose between numerous container
network technologies (known as CNI plugins). Each one has its own properties and advantages (it is
out of the scope of this guide to go through all of them), but not all of them are compatible with the
Kubernetes network policies. The following are examples of compatible technologies:

Calico

Weave

Romana

You can find more information on Kubernetes networking in Kubernetes official documentation.

Installing a network plugin in kubeadm


To use the network policies, you must install a network plugin in kubeadm. You can choose among
any of the ones detailed in the previous section. For this guide, we will use the Calico plugin.

Install the Calico CNI plugin.

kubectl apply -f https://docs.projectcalico.org/v2.4/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml

Use case: WordPress installation


In this example, you will add the following network policies:

Restrict all traffic between pods.

Allow only traffic from WordPress to MariaDB through port 3306 in a WordPress+MariaDB installation.

Step 1: Deploy a WordPress + MariaDB installation using Helm


If you do not have helm installed, install Helm v3.x using the following guide.

Deploy a WordPress Helm chart. We will use the name np-test so we know that the
WordPress pod will have the app: np-test-wordpress label and the MariaDB pod will have the
app: np-test-mariadb label.

helm repo add bitnami https://charts.bitnami.com/bitnami


helm install np-test bitnami/wordpress

Step 2: Restrict traffic between pods


Create a policy-deny-all.yaml using the content below. In this yaml file you are creating a
network rule with an empty PodSelector, which is equal to a wildcard. As there is no ingress
section, no traffic is allowed. This means that, unless there are other network rules applied,
no pod can communicate with any other pod.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}

Create the network policy using the kubectl create command:

kubectl create -f policy-deny-all.yaml

After some time, check that the WordPress pod crashes because it cannot connect to the
MariaDB pod. This is expected as we restricted traffic between all pods.

Step 3: Allow traffic only from the WordPress pod to the MariaDB
pod
Now that we have restricted all network traffic between pods, we create an additional policy that
allows WordPress to connect to the MariaDB service. We can create additional network rules such as
this to ensure that we only allow traffic where needed.

Create a policy-wordpress-mariadb.yaml with the content below. In this yaml file you are
creating a network rule with the MariaDB pods (i.e. all pods with label app: np-test-mariadb)
as destination pods. In the network policy ingress rules, you set the WordPress pods (that is,
all pods with label app: np-test-wordpress) as the origin pods, and 3306 as the allowed port.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-frontend-allow
spec:
  podSelector:
    matchLabels:
      app: np-test-mariadb
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: np-test-wordpress
      ports:
        - protocol: TCP
          port: 3306

Create the network policy using the kubectl create command:

kubectl create -f policy-wordpress-mariadb.yaml

Check that the WordPress pod now works without issues.

kubectl get pods

NAME                                 READY   STATUS    RESTARTS   AGE
np-test-mariadb-2450482308-4sw28     1/1     Running   0          2d
np-test-wordpress-2742268951-kg4km   1/1     Running   7          2d

Useful links
In this guide, we walked you through the concepts of network policies and network plugins, and you performed basic network policy operations. Using them, you secured a multi-tier application (the WordPress+MariaDB Helm chart) by isolating its pods from the rest of the cluster. This is only a first step towards the many possibilities that network policies and network plugins offer for your cluster. For more information, see the following links:

Official Kubernetes documentation for network policies

Official Kubernetes example with an NGINX container

Configure RBAC in your Kubernetes Cluster

Secure a Kubernetes Cluster with Pod Security Policies

Sealed Secrets: Protecting Your Passwords Before They Reach Kubernetes

Kubeadm Setup Tool

Bitnami

Secure a Kubernetes cluster with pod security policies

Introduction
As container technologies mature and more applications transition to clustered environments,
defining and implementing cluster security policies becomes ever more important. Cluster security
policies provide a framework to ensure that pods and containers run only with the appropriate
privileges and access only a finite set of resources. Security policies also provide a way for cluster
administrators to control resource creation, by limiting the capabilities available to specific roles,
groups or namespaces.

This guide introduces you to pod security policies in Kubernetes. It provides the definition, the
process of creation and activation, and the testing procedures. However, considering that pod
security policies are often tailored to an organization's rules and specific application requirements,
there is no universal solution. Instead, this guide will delve into three typical scenarios and guide you
through creating pod security policies tailored to each.

Note

To know more about configuring role-based access control (RBAC) in your Kubernetes cluster, see the RBAC guide.

Assumptions and prerequisites


This guide makes the following assumptions:

You have a Kubernetes cluster running with the kubectl command-line tool installed and
support for pod security policies enabled.


You have an intermediate understanding of how Kubernetes works, and its core resources
and operations.

It's important to know that you can only use pod security policies if your Kubernetes cluster's
admission controller has them enabled. You can check this by running the command kubectl get
psp. If it's not supported, you'll see an error when you run this command.

the server doesn't have a resource type "podSecurityPolicies".

The following image illustrates the difference in command output between a server that does not
support pod security policies and one that does:

To enable support for pod security policies:

If you are using Minikube, enable support for pod security policies and the other
recommended admission controller plugins by starting Minikube with the following command
(note the PodSecurityPolicy option at the end):

minikube start --extra-config=apiserver.GenericServerRunOptions.AdmissionControl=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,PodSecurityPolicy

If you're using a different platform for your Kubernetes cluster, check the documentation
from the company that provides your cluster to see if they already support pod security
policies. If not, look for instructions on how to turn them on.

The examples in this guide have been tested using a Minikube cluster running Kubernetes v1.6.4,
but should be generally applicable to any Kubernetes cluster with pod security policy support.

Understand pod security policies


In Kubernetes, a pod security policy is represented by a PodSecurityPolicy resource. This resource
lists the conditions a pod must meet in order to run in the cluster. Here's an example of a pod
security policy, expressed in YAML:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - 'nfs'
  hostPorts:
    - min: 100
      max: 100

Briefly, this pod security policy implements the following security rules:

Disallow containers running in privileged mode

Disallow containers that require root privileges

Disallow containers that access volumes apart from NFS volumes

Disallow containers that access host ports apart from port 100

Later parts of this guide will delve into these rules with more detail. For now, let's examine the
overall structure of a pod security policy.

The metadata section of the policy specifies its name.

The spec section of the policy outlines the key criteria a pod must fulfil in order to be allowed
to run.

Here is a brief description of the main options available. To learn more, see the official Kubernetes
documentation.

The privileged field indicates whether to allow containers that use privileged mode. Learn
more about privileged mode.
The runAsUser field defines which users a container can run as. Most commonly, it is
used to prevent pods from running as the root user.

The seLinux field defines the Security-Enhanced Linux (SELinux) security context for
containers and only allows containers that match that context. Learn more about
SELinux.

The supplementalGroups and fsGroup fields define the user groups or fsGroup-
owned volumes that a container may access. Learn more about fsGroups and
supplemental groups.

The volumes field defines the type(s) of volumes a container may access. Learn more
about volumes.

The hostPorts field, together with related fields like hostNetwork, hostPID and
hostIPC, restrict the ports (and other networking capabilities) that a container may
access on the host system.

To better explain how pod security policies work in practice, the following sections illustrate common
use cases and also walk you through the commands to add, view and remove pod security policies in
a cluster.

Case 1: Prevent pods from running with root privileges


Restricting containers in pods from running as the root user and thereby creating a more secure
cluster environment is one of the most common uses for pod security policies. To see this in action,
create the following pod security policy and save it as restrict-root.yaml:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restrict-root
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - '*'

Activate the policy by executing the command below, which creates the new policy from the file:

kubectl create -f restrict-root.yaml

Check that the policy has been installed with the command below, which lists all active policies in the
cluster:

kubectl get psp

Here's a sample of what you should see:

Once the policy has been installed, the next step is to test it, by attempting to run a container that
requires root privileges. One such example is Bitnami's MariaDB container. Try to deploy it using the
command below:

kubectl run --image=bitnami/mariadb:10.1.24-r2 mymariadb --port=3306 --env="MARIADB_ROOT_PASSWORD=gue55m3"

Since the pod security policy explicitly disallows pods or containers with root privileges, this request
should be rejected and you should see an error like this when you check pod status:

container has runAsNonRoot and image will run as root

Here's an example of the error output from kubectl get pods:

Delete the pod security policy as follows:


kubectl delete psp restrict-root

Then, create a more permissive policy by setting the runAsUser field to runAsAny and save it as
permit-root.yaml. Here's what this more permissive policy looks like:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: permit-root
spec:
  privileged: false
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - '*'

As before, install and activate the policy with the following command:

kubectl create -f permit-root.yaml

Now, delete the previous deployment and try deploying Bitnami's MariaDB container again. This
time, the deployment should take place successfully, as the new security policy allows containers to
run as any user, including the root user. Check the status with kubectl get pods and you should see
the pod running, as shown below:

Note

When a Kubernetes cluster is started with pod security policy support, Kubernetes
follows a "default-deny" approach. This means that, by default, pods are not allowed
to run unless they match the criteria outlined in a pod security policy. This also
means that if your cluster does not have at least one pod security policy in place, no
pods will run and Kubernetes will let you know that you should activate a pod security
policy with the error message no providers available to validate pod request.


Case 2: Prevent pods from accessing certain volume types


As a cluster administrator, you may wish to limit the available storage choices for containers, to
minimize costs or prevent information access. This can be accomplished by specifying the available
volume types in the volumes key of a pod security policy. To illustrate, consider the following policy
which restricts containers to only NFS volumes:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restrict-volumes
spec:
  privileged: false
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - 'nfs'

Install this policy as explained in the previous section and then test it, by attempting to deploy a pod
that requests a different volume type. One such example is Bitnami's WordPress deployment, which
uses PersistentVolumeClaims (PVCs) and secrets for its data. Attempt to deploy it using the following
command:

kubectl create -f https://raw.githubusercontent.com/bitnami/bitnami-docker-wordpress/master/kubernetes.yml

Since the pod security policy only allows pods that use NFS volumes, this request should be rejected
and you should see an error like this when you check status:

Invalid value: "persistentVolumeClaim": persistentVolumeClaim volumes are not allowed to be used

Here's an example of the error output:


Delete the policy and the failed deployment, and install a different policy that allows access to both
secret and PVC volume types, as shown below:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: permit-volumes-pvc-secret
spec:
  privileged: false
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - 'persistentVolumeClaim'
    - 'secret'

With this, your next attempt at deploying Bitnami WordPress should be successful:

Case 3: Prevent pods from accessing host ports


Another common security concern is containers gaining access to host resources, such as host ports or network interfaces. Pod security policies allow cluster administrators to implement in-depth security rules to restrict such access. A simple example is restricting a container from accessing any host ports, as shown below:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restrict-ports
spec:
  privileged: false
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - '*'
  hostPorts:
    - min: 0
      max: 0

Install this policy as explained previously and then test it, by deploying a container that attempts to
map a container port to a host port. Bitnami's MariaDB container is well-suited for this experiment.
Try to deploy it and make it available on host port 3306 using the following command:

kubectl run --image=bitnami/mariadb:10.1.24-r2 mymariadb --port=3306 --hostport=3306 --env="MARIADB_ROOT_PASSWORD=gue55m3"

Since the pod security policy explicitly disallows access to host ports, your request will be denied. When you check the deployment status, you'll likely see an error message like this:

Invalid value: 3306: Host port 3306 is not allowed to be used. Allowed ports: [{0 0}]

Here's an example of the error output:

Delete the policy and the failed deployment, and install a different policy that allows restricted host port access, as shown below.

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: permit-port-3306
spec:
  privileged: false
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - '*'
  hostPorts:
    - min: 3306
      max: 3306

You should now be able to deploy the Bitnami MariaDB container using the command shown
previously. Here is the result:

Useful links
These three examples show that pod security policies let cluster administrators control the pods and
containers in a Kubernetes cluster. They can allow or block access to resources based on
organizational rules and each application's needs. Pod security policies are easy to understand,
create, and implement, making them a valuable tool in a cluster administrator's security kit.

To learn more about the topics discussed in this guide, visit the following links:

Official Kubernetes documentation for pod security policies

Pod security policy implementation details and use cases

Bitnami's Kubernetes RBAC tutorial

Bitnami's Kubernetes starter tutorial

Bitnami's production-ready Kubernetes applications

Minikube

Bitnami


Best Practices for Securing and Hardening


Container Images

Introduction
When a container is built and/or used, it is important to ensure that the image is built by following
best practices in terms of security, efficiency, performance, etc. This article will go over some of the
key points VMware Tanzu Application Catalog (Tanzu Application Catalog) takes into account when
publishing containers. It covers image tagging, non-root configuration and arbitrary UIDs, the
importance of reducing size and dependencies, and the release process, including CVE scanning
and tests.

Rolling and immutable tags


A Docker tag is a label used to uniquely identify a Docker image. It allows users to deploy a specific
version of an image. A single image can have multiple tags associated with it.

Every time Tanzu Application Catalog publishes a new version of an image, the associated tags are
also updated to make it easier for users to get the latest version.

Rolling tags
Tanzu Application Catalog uses rolling tags (a tag that may not always point to the same image) for its
Docker container images. To understand how this works, let's use the Tanzu Application Catalog
etcd container image tags as an example:

3, 3-debian-10, 3.4.13, 3.4.13-debian-10-r8, latest

The latest tag always points to the latest revision of the etcd image.

The 3 tag is a rolling tag that always points to the latest revision of etcd 3.x.

The 3.4.13 tag is a rolling tag that points to the latest revision of etcd 3.4.13. It will be
updated with different revisions or daily releases but only for etcd 3.4.13.

The 3-debian-10 tag points to the latest revision of etcd 3.x for Debian 10, in case there are
other distros supported.

When Tanzu Application Catalog releases container images - typically to upgrade system packages, fix bugs or improve the system configuration - it also updates the container tags to point to the latest revision of the image. Therefore, the rolling tags shown above are dynamic; they will always point to the latest revision or daily release of the corresponding image.

Continuing with the example above, the 3.4.13 tag might point to the etcd 3.4.13 revision 8 today, but it will refer to the etcd 3.4.13 revision 9 when Tanzu Application Catalog next updates the container image.

The suffix revision number (rXX) is incremented every time that Tanzu Application Catalog releases
an updated version of the image for the same version of the application. As explained in the next
section, suffixed tags are also known as immutable tags.

Immutable tags
A static, or immutable, tag always points to the same image. This is useful when you depend on a specific revision of an image. For example, if you use the tag 3.4.13-debian-10-r8, this tag will always refer to etcd 3.4.13 revision 8. The use of this tag ensures that users get the same image every time.

Usage recommendations
Which tag should you use and when? Follow these guidelines:

If you are using containers in a production environment (such as Kubernetes), use immutable
tags. Tanzu Application Catalog uses immutable tags by default in the Tanzu Application
Catalog Helm Charts. This ensures that your deployment won't be affected if a new revision
inadvertently breaks existing functionality.

If you are using containers for development, use rolling tags. This ensures that you are
always using the latest version. Rolling tags also make it easier to use a specific version of a
development tool (such as REGISTRY/node:12 for Node.js 12).
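
For example, when deploying with a Tanzu Application Catalog Helm chart, the image tag can be pinned to an immutable value at install time. The command below is only an illustrative sketch; the exact value name (image.tag here) can vary between charts:

helm install etcd REPOSITORY/etcd --set image.tag=3.4.13-debian-10-r8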

Root and non-root containers


There are two types of Tanzu Application Catalog container images: root and non-root. Non-root
images add an extra layer of security and are generally recommended for production environments.
However, because they run as a non-root user, privileged tasks such as installing system packages,
editing configuration files, creating system users and groups, and modifying network information, are
typically off-limits.

This section gives you a quick introduction to non-root container images, explains possible issues
you might face using them, and also shows how to modify them to work as root images.

Non-root containers
By default, Docker containers are run as root users. This means that you can do whatever you want
in the container, such as install system packages, edit configuration files, bind privilege ports, adjust
permissions, create system users and groups, or access networking information.

With a non-root container, you can't do any of this. A non-root container must be configured only
for its main purpose, for example, run the NGINX server.

A non-root container is a container in which the user executing the processes is not the root user but an unprivileged user, such as 1001. This is usually set through the USER instruction in the Dockerfile.
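
As a minimal sketch of this approach (the base image, directory and UID below are assumptions for the example, not taken from any specific Tanzu Application Catalog image):

FROM bitnami/minideb:bookworm
# Create an unprivileged user and give it ownership of the application directory
RUN useradd -r -u 1001 -g root appuser && \
    mkdir -p /app && chown -R 1001:root /app
# Every process started after this instruction runs as UID 1001, not root
USER 1001
CMD ["sleep", "infinity"]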

Advantages of non-root containers


Non-root containers are recommended for the following reasons:


Security: Non-root containers are more secure. If there is a container engine security issue,
running the container as an unprivileged user will prevent any malicious code from gaining
elevated permissions on the container host. Learn more about Docker's security features.

Platform restrictions: Some Kubernetes distributions (such as OpenShift) run containers using random UIDs. This approach is not compatible with root containers, which must always run with the root user's UID. In such cases, root-only container images will simply not run and a non-root image is a must. Learn more about random UIDs

Potential issues with non-root containers for development


Non-root containers could also have some issues when used for local development:

Write failures on mounted volumes: Docker mounts host volumes preserving the host UID and GID. This can lead to permission conflicts with non-root containers, as the user running the container may not have the appropriate privileges to write on the host volume.

Write failures on persistent volumes in Kubernetes: Data persistence in Kubernetes is configured using persistent volumes. Kubernetes mounts these volumes with the root user as the owner; therefore, non-root containers don't have permissions to write to the persistent directory.

Issues with specific utilities or services: Some utilities (e.g. Git) or servers (e.g. PostgreSQL) run additional checks to find the user in the /etc/passwd file. These checks will fail for non-root container images.

Tanzu Application Catalog non-root containers fix the above issues:

For Kubernetes, Tanzu Application Catalog Helm charts use an initContainer for changing the volume permissions properly. As the image runs as non-root by default, it is necessary to adjust the ownership of the persistent volume so that the container can write data to it. By default, the charts are configured to use Kubernetes Security Context to automatically change the ownership of the volume. However, this feature does not work in all Kubernetes distributions. As an alternative, the charts support using an initContainer to change the ownership of the volume before mounting it in the final destination (see the sketch after this list).

For specific utilities, Tanzu Application Catalog ships the libnss-wrapper package, which
defines custom userspace files to ensure the software acts correctly.
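
As an illustration of the initContainer approach described above, a chart fragment might look like the following minimal sketch. The helper image name, UID and mount path are assumptions for the example, not the exact values used by the Tanzu Application Catalog charts:

initContainers:
  - name: volume-permissions
    image: REGISTRY/os-shell:latest
    # Runs as root only to hand ownership of the volume to the unprivileged user (1001)
    securityContext:
      runAsUser: 0
    command: ['/bin/sh', '-c', 'chown -R 1001:1001 /bitnami/data']
    volumeMounts:
      - name: data
        mountPath: /bitnami/data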

Use non-root containers as root containers


If you wish to run a Tanzu Application Catalog non-root container image as a root container image,
you can do it by adding the line user: root right after the image: directive in the container's docker-
compose.yml file. After making this change, restart the container and it will run as the root user with
all privileges instead of an unprivileged user.

In Kubernetes, the user that executes the container can be customized by using Security Context.
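
For example, in a docker-compose.yml fragment for a hypothetical service (the service and image names are illustrative):

services:
  myapp:
    image: REGISTRY/nginx:latest
    # Run the non-root image with full root privileges
    user: root

In Kubernetes, the equivalent is setting securityContext.runAsUser: 0 in the pod or container spec.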

Use arbitrary UIDs


On some platforms like OpenShift, to support running containers with volumes mounted in a secure
way, images must run as an arbitrary user ID. When those platforms mount volumes for a container,
they configure the volume so it can only be written to by a particular user ID, and then run the image using that same user ID. This ensures the volume is only accessible to the appropriate container, but requires that the image is able to run as an arbitrary user ID.

That means a non-root container executing on a platform with this policy can't assume anything about its UID. These platforms change the default container user to an arbitrary UID, but the GID is unmodified and containers are executed as XXX:root (where XXX is the arbitrary UID).

Tanzu Application Catalog images are configured with the proper permissions for the user and group in order to meet the requirements of these platforms. They do this by ensuring that the XXX user belongs to the root group and that the directories have the appropriate read, write and execution permissions.
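
A common way to meet this requirement at build time is to give the root group the same permissions as the default user on the directories the application writes to. The Dockerfile fragment below is a hedged sketch; the paths are assumptions for the example:

# Make runtime directories writable by GID 0 so that any arbitrary UID in the root group can use them
RUN chgrp -R 0 /opt/app /app-data && \
    chmod -R g=u /opt/app /app-data
USER 1001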

Execute one process per container


Each container should have only one concern. Decoupling applications into multiple containers
makes it easier to scale horizontally and reuse containers. For instance, a web application stack might
consist of three separate containers, each with its own unique image, to manage the web application,
database, and an in-memory cache in a decoupled manner.

Although all Tanzu Application Catalog images follow this good practice, there are cases where two
or more processes need to be executed at the same time in the same image. One such case is that
of the Tanzu Application Catalog PostgreSQL with Replication Manager Docker Image where, apart
from the postgres process, there is a separate process for the repmgr daemon. There are also other
cases where the application spawns additional processes on its own.

It is therefore important to take a decision about the number of processes per container, keeping in
mind the goal of keeping each container as clean and modular as possible.

Secure processes, ports and credentials


Keep the following important security considerations in mind:

Do not allow containers to bypass the host system's authentication or ingress. For example,
Tanzu Application Catalog images do not run an SSH daemon inside the container.

Do not allow executables in the container that can escalate permissions.

Do not place passwords, secrets or credentials in container images.

Change privileged ports to non-privileged ports.
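
For instance, a Dockerfile can expose a non-privileged port and expect credentials to be supplied at runtime rather than baked into a layer (the port and variable name below are illustrative):

# Listen on a non-privileged port (above 1024) so the process needs no extra capabilities
EXPOSE 8080
# The real value is injected at runtime (docker run --env or a Kubernetes Secret), never stored in the image
ENV APP_PASSWORD=""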

Improve performance by keeping images small


As indirectly described in the previous sections, it is important to follow the "Principle of least
privilege" (POLP), an important concept in computer security. This refers to the practice of limiting
access rights for users to the bare minimum permissions they need to perform their work.

In the same way, a good security practice is to install and maintain only the minimum necessary
dependencies in a container image. It is also important to reduce the size of the images to improve
the security, performance, efficiency, and maintainability of the containers.

Package installation in Tanzu Application Catalog images (also applicable to already-installed packages) is usually done using the install_packages script. This tool was created to install system packages in a smart way for container environments. Apart from installing packages only with the required dependencies (no recommended packages or documentation), it also removes the cache and unnecessary package repositories.
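
For example, a Dockerfile that extends a minideb-based image could add system packages with this script instead of calling apt-get directly (the package names are only an example):

FROM bitnami/minideb:bookworm
# install_packages installs only the required dependencies and cleans the apt cache afterwards
RUN install_packages ca-certificates curl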

Daily builds and release process


Tanzu Application Catalog automatically re-releases its container catalog every 24 hours (this can be
modified by the customer). In terms of security, releasing the Tanzu Application Catalog containers
on a daily basis ensures that the system packages and components bundled in the image are up-to-
date from the package repositories.

As explained previously, this approach means that a new immutable tag is produced every day,
increasing the revision number. At the same time, rolling tags are updated to point to this new
immutable tag.

Apart from daily releases, there are other processes that can trigger a new release. For example, if there is a new version (major, minor, or patch) of the main component, Tanzu Application Catalog's tracking system detects this new upstream release and triggers a new release of the Tanzu Application Catalog image, which uses the -r0 tag suffix.

Before a new image is released, antivirus scanners and other tests are executed. If these are unsuccessful, the release is blocked. These are discussed in the following sections.

CVE and virus scanning


If you are running development containers to create a proof of concept or for production workloads,
you will probably already be aware of CVEs that may affect the container's operating system and
packages. There are various tools/scanners to check containers for CVEs, such as Clair, Anchore,
Notary and others.

There are two ways of ensuring the health of containers: using a virus scan or a CVE scan.

The virus scan is executed during the release process. The virus scan performed by Tanzu
Application Catalog uses antivirus engines for scanning the files present in the container,
stopping the release if a positive is detected.

While the antivirus scan is a blocking step when releasing a container, the CVE scan is a tool
executed periodically to trigger new releases. This tool analyzes the containers bundled by
the Tanzu Application Catalog Helm charts. If it finds a CVE, it triggers the release of the
affected container.

Verification and functional testing


During the release process, all containers are tested to work with all deployment technologies with
which they are likely to be used:

Docker Compose, using several Docker Compose files to test different features like LDAP,
cluster topologies, etc.

Helm charts, tested on different Kubernetes platforms such as GKE, AKS, IKS, TKG, etc., and
under different scenarios.

Two types of tests are executed for each deployment method:

Verification tests: This type of testing involves inspecting a deployment to check certain properties. For example, checking if a particular file exists on the system and if it has the correct permissions.

Functional tests: This type of testing is used to verify that an application is behaving as
expected from the user's perspective. For example, if the application must be accessible
using a web browser, functional testing uses a headless browser to interact with the
application and perform common actions such as logging in and out and adding users.

FIPS
Containers should follow modern cryptographic standards for security. If customers require
compliance with FIPS 140-2, Tanzu Application Catalog containers can ship a FIPS-enabled version
of OpenSSL. In a FIPS-enabled kernel, OpenSSL (and the applications using it) will only use FIPS-
approved encryption algorithms. In the case of applications that have a FIPS mode (such as
Elasticsearch), this would be enabled as well.

Conclusion
By implementing the above points in the Tanzu Application Catalog build and release process, Tanzu
Application Catalog ensures that its container images are built following best practices in terms of
security and performance and can be safely used on most platforms as part of production
deployments.


Best Practices for Securing and Hardening


Helm Charts

Introduction
When developing a chart, it is important to ensure that the packaged content (chart source code,
container images, and subcharts) is created by following best practices in terms of security, efficiency
and performance.

This article will go over the key points Bitnami takes into account when publishing Bitnami Helm
charts. It covers the best practices applied to the bundled containers, the use of configuration as
ConfigMaps, integration with logging and monitoring tools, and the release process, including CVE
scanning and tests.

The Bitnami pipeline


A Helm chart is composed of different containers and subcharts. Therefore, when securing and
hardening Helm charts, it is important to ensure that the containers used in the main chart and
subcharts are also secure and hardened.

In order to have full control over the published charts, an indispensable requirement for all Bitnami
Helm charts is that all the bundled images are released through the Bitnami pipeline following
Bitnami's best practices for securing and hardening containers.

ConfigMaps for configuration

Note

In our experience, deciding which data should or should not be persistent can be
complicated. After several iterations, our recommended approach has been to use
ConfigMaps, but this recommendation could change depending on the configuration
file or scenario. One advantage of Kubernetes is that users can change the
deployment parameters very easily by just executing kubectl edit deployment or
helm upgrade. If the configuration is persistent, none of the changes will be applied.
So, when developing Bitnami Helm charts, we make sure that the configuration can
be easily changed with kubectl or helm upgrade.

One common practice is to create a ConfigMap with the configuration and have it mounted in the
container. Let's use the Bitnami RabbitMQ chart as an example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "rabbitmq.fullname" . }}-config
  namespace: {{ .Release.Namespace }}
  labels: {{- include "common.labels.standard" . | nindent 4 }}
data:
  rabbitmq.conf: |-
    {{- include "common.tplvalues.render" (dict "value" .Values.configuration "context" $) | nindent 4 }}
  {{- if .Values.advancedConfiguration}}
  advanced.config: |-
    {{- include "common.tplvalues.render" (dict "value" .Values.advancedConfiguration "context" $) | nindent 4 }}
  {{- end }}

Note that there is a section in the values.yaml file which allows you to include custom configuration:

## Configuration file content: required cluster configuration
## Do not override unless you know what you are doing.
## To add more configuration, use `extraConfiguration` or `advancedConfiguration` instead
##
configuration: |-
  ## Username and password
  default_user = {{ .Values.auth.username }}
  default_pass = CHANGEME
  ## Clustering
  cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
  cluster_formation.k8s.host = kubernetes.default.svc.{{ .Values.clusterDomain }}
  cluster_formation.node_cleanup.interval = 10
  cluster_formation.node_cleanup.only_log_warning = true
  cluster_partition_handling = autoheal

## Configuration file content: extra configuration
## Use this instead of `configuration` to add more configuration
##
extraConfiguration: |-
  #default_vhost = {{ .Release.Namespace }}-vhost
  #disk_free_limit.absolute = 50MB
  #load_definitions = /app/load_definition.json

This ConfigMap then gets mounted in the container filesystem, as shown in this extract of the
StatefulSet spec:

volumes:
  - name: configuration
    configMap:
      name: {{ template "rabbitmq.fullname" . }}-config
      items:
        - key: rabbitmq.conf
          path: rabbitmq.conf
        {{- if .Values.advancedConfiguration}}
        - key: advanced.config
          path: advanced.config
        {{- end }}

This approach makes Bitnami charts easy to upgrade and also more adaptable to user needs, as
users can provide their own custom configuration file.
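
For instance, a user could supply their own configuration through a values file at install or upgrade time. The file name and settings below are purely illustrative:

# custom-values.yaml
configuration: |-
  default_user = admin
  default_pass = CHANGEME
  cluster_partition_handling = pause_minority

The file would then be applied with helm install rabbitmq REPOSITORY/rabbitmq -f custom-values.yaml, or with helm upgrade on an existing release.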


Integration with logging and monitoring tools


One of the key concerns when deploying charts in production environments is observability. It is
essential to have deployments properly monitored for early detection of potential issues. It is also
important to have application usage, cost, and resource consumption metrics. In order to gather this
information, users commonly deploy logging stacks like EFK (Elasticsearch, Fluentd, and Kibana) and monitoring tools like Prometheus. In the same way, there are Bitnami charts available for each of those solutions.

Bitnami charts are developed ensuring that deployments are able to work with the above tools
seamlessly. To achieve this, the Bitnami charts ensure that:

All the containers log to stdout/stderr (so that the EFK stack can easily ingest all the logging
information)

Prometheus exporters are included (either using sidecar containers or having a separate
deployment)

Bitnami offers the Bitnami Kubernetes Production Runtime (BKPR) that installs all these tools (along
with others) and makes your cluster capable of handling production workloads. All Bitnami charts
work with BKPR (which includes EFK and Prometheus) out of the box. Let's take a look at the
Bitnami PostgreSQL chart and Bitnami PostgreSQL container to see how this is achieved.

To begin with, the process inside the container runs in the foreground, so all the logging information
is written to stdout/stderr, as shown below:

info "** Starting PostgreSQL **"
if am_i_root; then
    exec gosu "$POSTGRESQL_DAEMON_USER" "${cmd}" "${flags[@]}"
else
    exec "${cmd}" "${flags[@]}"
fi

This ensures that it works with EFK.

Although there are different approaches to implement logging capabilities, such as adding a logging
agent at the node level or configuring the application to push the info to the backend, the most
common approach is to use sidecar containers. For more information, see logging architectures.

In the example above, the chart adds a sidecar container for Prometheus metrics:

containers:
{{- if .Values.metrics.enabled }}
- name: metrics
image: {{ template "postgresql.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
{{- if .Values.metrics.securityContext.enabled }}
securityContext:
runAsUser: {{ .Values.metrics.securityContext.runAsUser }}
{{- end }}
env:
{{- $database := required "In order to enable metrics you need to specify
a database (.Values.postgresqlDatabase or .Values.global.postgresql.postgresqlDatabase
)" (include "postgresql.database" .) }}
{{- $sslmode := ternary "require" "disable" .Values.tls.enabled }}

{{- if and .Values.tls.enabled .Values.tls.certCAFilename }}


- name: DATA_SOURCE_NAME
value: {{ printf "host=127.0.0.1 port=%d user=%s sslmode=%s sslcert=%s s
slkey=%s" (int (include "postgresql.port" .)) (include "postgresql.username" .) $sslmo
de (include "postgresql.tlsCert" .) (include "postgresql.tlsCertKey" .) }}
{{- else }}
- name: DATA_SOURCE_URI
value: {{ printf "127.0.0.1:%d/%s?sslmode=%s" (int (include "postgresql.
port" .)) $database $sslmode }}
{{- end }}
{{- if .Values.usePasswordFile }}
- name: DATA_SOURCE_PASS_FILE
value: "/opt/bitnami/postgresql/secrets/postgresql-password"
{{- else }}
- name: DATA_SOURCE_PASS
valueFrom:
secretKeyRef:
name: {{ template "postgresql.secretName" . }}
key: postgresql-password
{{- end }}
- name: DATA_SOURCE_USER
value: {{ template "postgresql.username" . }}
{{- if .Values.metrics.extraEnvVars }}
{{- include "postgresql.tplValue" (dict "value" .Values.metrics.extraEnvVa
rs "context" $) | nindent 12 }}
{{- end }}
{{- if .Values.livenessProbe.enabled }}
livenessProbe:
httpGet:
path: /
port: http-metrics
initialDelaySeconds: {{ .Values.metrics.livenessProbe.initialDelaySeconds
}}
periodSeconds: {{ .Values.metrics.livenessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.metrics.livenessProbe.timeoutSeconds }}
successThreshold: {{ .Values.metrics.livenessProbe.successThreshold }}
failureThreshold: {{ .Values.metrics.livenessProbe.failureThreshold }}
{{- end }}
{{- if .Values.readinessProbe.enabled }}
readinessProbe:
httpGet:
path: /
port: http-metrics
initialDelaySeconds: {{ .Values.metrics.readinessProbe.initialDelaySeconds
}}
periodSeconds: {{ .Values.metrics.readinessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.metrics.readinessProbe.timeoutSeconds }}
successThreshold: {{ .Values.metrics.readinessProbe.successThreshold }}
failureThreshold: {{ .Values.metrics.readinessProbe.failureThreshold }}
{{- end }}
volumeMounts:
{{- if .Values.usePasswordFile }}
- name: postgresql-password
mountPath: /opt/bitnami/postgresql/secrets/
{{- end }}
{{- if .Values.tls.enabled }}
- name: postgresql-certificates
mountPath: /opt/bitnami/postgresql/certs
readOnly: true

{{- end }}
{{- if .Values.metrics.customMetrics }}
- name: custom-metrics
mountPath: /conf
readOnly: true
args: ["--extend.query-path", "/conf/custom-metrics.yaml"]
{{- end }}
ports:
- name: http-metrics
containerPort: 9187
{{- if .Values.metrics.resources }}
resources: {{- toYaml .Values.metrics.resources | nindent 12 }}
{{- end }}
{{- end }}

Bitnami also ensures that the pods or services contain the proper annotations for Prometheus to
detect exporters. In this case, they are defined in the chart's values.yaml file, as shown below:

## Configure metrics exporter
##
metrics:
  enabled: false
  # resources: {}
  service:
    type: ClusterIP
    annotations:
      prometheus.io/scrape: 'true'
      prometheus.io/port: '9187'

In the case of the PostgreSQL chart, these annotations go to a metrics service, separate from the
PostgreSQL service, which is defined as below:

{{- if .Values.metrics.enabled }}
apiVersion: v1
kind: Service
metadata:
  name: {{ template "postgresql.fullname" . }}-metrics
  labels:
    {{- include "common.labels.standard" . | nindent 4 }}
  annotations:
    {{- if .Values.commonAnnotations }}
    {{- include "postgresql.tplValue" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
    {{- end }}
    {{- toYaml .Values.metrics.service.annotations | nindent 4 }}
spec:
  type: {{ .Values.metrics.service.type }}
  {{- if and (eq .Values.metrics.service.type "LoadBalancer") .Values.metrics.service.loadBalancerIP }}
  loadBalancerIP: {{ .Values.metrics.service.loadBalancerIP }}
  {{- end }}
  ports:
    - name: http-metrics
      port: 9187
      targetPort: http-metrics
  selector:
    {{- include "common.labels.matchLabels" . | nindent 4 }}
    role: master
{{- end }}

Apart from that, a ConfigMap is created to support a custom configuration file:

{{- if and .Values.metrics.enabled .Values.metrics.customMetrics }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "postgresql.metricsCM" . }}
  labels:
    {{- include "common.labels.standard" . | nindent 4 }}
  {{- if .Values.commonAnnotations }}
  annotations: {{- include "postgresql.tplValue" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
  {{- end }}
data:
  custom-metrics.yaml: {{ toYaml .Values.metrics.customMetrics | quote }}
{{- end }}

Some parameters related to the metrics exporter can be configured in the values.yaml:

## Define additional custom metrics
## ref: https://github.com/wrouesnel/postgres_exporter#adding-new-metrics-via-a-config-file
customMetrics:
  pg_database:
    query: "SELECT d.datname AS name, CASE WHEN pg_catalog.has_database_privilege(d.datname, 'CONNECT') THEN pg_catalog.pg_database_size(d.datname) ELSE 0 END AS size_bytes FROM pg_catalog.pg_database d where datname not in ('template0', 'template1', 'postgres')"
    metrics:
      - name:
          usage: "LABEL"
          description: "Name of the database"
      - size_bytes:
          usage: "GAUGE"
          description: "Size of the database in bytes"

Apart from all of the above, this container has its own probes, environment variables and security
context.

These modifications ensure that Bitnami charts seamlessly integrate with monitoring platforms. The
metrics obtained can be used to keep the deployment in good condition throughout its lifetime.

Bitnami release process and tests


This Bitnami release process is another important pillar for keeping Bitnami charts safe, updated and
fully functional.

Charts releases are triggered under different conditions:

Upstream new version: If there is a new version of the main container bundled in the chart,
Bitnami triggers a new release.

Maintenance release: If there was no update in the last 30 days (this time period can be customized for charts released as part of the VMware Tanzu Application Catalog(TM)), Bitnami triggers a new release.

CVE detected: Bitnami triggers a release of a chart when a package that includes a fix for a
CVE is detected.

User PR: When a pull request or any other change performed by a user or the Bitnami team
is merged into the Bitnami GitHub repository, Bitnami triggers a new release.

In all the above cases, the main image is updated to the latest version and the secondary containers
(such as metrics exporters) and chart dependencies are also updated to the latest published version
at that time.

However, before the release is performed, various scanners and tests are executed. These stop the
release from proceeding if the result is not successful.

CVE scanning
To ensure that all Bitnami images include the latest security fixes, Bitnami implements the following
policies:

Bitnami triggers a release of a new Helm chart when a new version of the main server or
application is detected. For example, if the system automatically detects a new version of
RabbitMQ, Bitnami’s pipeline automatically releases a new container with that version and
also releases the corresponding Helm chart if it passes all tests. This way, Bitnami ensures
that the application version released is always the latest stable one and has the latest security
fixes.

The system scans all Bitnami containers and releases new images daily with the latest
available system packages. Once the pipeline detects there is a new package that fixes a
CVE, our team triggers the release of a new Helm chart to point to the latest container
images.

The Bitnami team monitors different CVE feeds - such as Heartbleed or Shellshock - to fix
the most critical issues as soon as possible. Once a critical issue is detected in any of the
charts included in the Bitnami catalog (or any of the assets that Bitnami distributes amongst its
different cloud providers), a new solution is released. Usually, Bitnami provides updates in
less than 48 business hours.

Note

Open CVEs are CVEs that depend directly on the Linux distribution maintainers and
have not yet been fixed by those maintainers. Bitnami is not able to fix such CVEs
directly. Learn more about Bitnami's open CVE policy.

Verification and functional tests


During the release process, Bitnami charts are tested on different Kubernetes platforms such as
GKE, AKS, IKS, TKG, and others. Charts are tested using different Kubernetes server versions and
Helm versions. Apart from these tests, different scenarios are configured in order to test other
functionalities beyond the default parameters, like SMTP or LDAP configuration.

Two types of tests are executed for each Kubernetes platform:

Verification tests: This type of testing involves inspecting a deployment to check certain properties. For example, checking if a particular file exists on the system and if it has the correct permissions.

Functional tests: This type of testing is used to verify that an application is behaving as
expected from the user's perspective. For example, if the application must be accessible
using a web browser, functional testing uses a headless browser to interact with the
application and perform common actions such as logging in and out and adding users.

Upgrade tests
One of the most common use cases for Helm charts is the upgrade process. Helm charts follow the
SemVer specification (MAJOR.MINOR.PATCH) to handle chart version numbering. According to that
specification, backward compatibility should be guaranteed for MINOR and PATCH versions, while it
is not guaranteed in MAJOR versions.

At Bitnami, when a new change is implemented in a chart, the Bitnami team determines the version number change required. Typically, bug fixes are PATCH changes, feature additions are MINOR changes and MAJOR changes occur when backward compatibility cannot be guaranteed.

In the case of MAJOR versions, the changes and, if possible, an upgrade path is documented in the
README. Here is an example from the PostgreSQL chart.

When the changes are not MAJOR changes, backward compatibility should be guaranteed. To test
this, Bitnami applies a "Chart Upgrade" test as part of the Bitnami release pipeline to ensure that
helm upgrade works for non-major changes. In this test:

The first available version in the current major version is installed (such as X.0.0).

Tests are executed to populate some data or create content.

The helm upgrade command is executed to install the most recent version (such as X.3.5).

Tests are executed to check that the populated data is still present in the upgraded chart.
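
The flow of this test can be sketched with plain Helm commands, using hypothetical release and version names:

# Install the first available version of the current major series
helm install my-release bitnami/postgresql --version X.0.0
# ... populate sample data ...
# Upgrade in place to the latest version of the same major series
helm upgrade my-release bitnami/postgresql --version X.3.5
# ... verify the sample data is still present ...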

Bitnami has also published some guides about how to backup and restore deployments in
Kubernetes for common infrastructure charts like MongoDB and MariaDB Galera.

Conclusions
By implementing the above steps in the Bitnami package and release process, Bitnami ensures that
its Helm charts are packaged following best practices in terms of security and performance and can
be safely used on most platforms as part of production deployments.

Useful links
To learn more about the topics discussed in this guide, use the links below:

Best Practices for Creating Production-Ready Helm charts

Running Helm in Production - Security Best Practices

Exploring the Security of Helm


Understand Bitnami's Rolling tags for


container images

Introduction

Note

Did you know Bitnami automatically releases new tags under the following
circumstances?

When a new version of the application is detected (always using revision 0)

When a fixable CVE in any system package is detected

If there are changes in the configuration scripts (new features, improvements, bug fixes, etc.) that impact the Dockerfile, bash logic, and so on

Container image tags uniquely identify a container image, allowing you to deploy a specific version of
an image. A single image can have multiple tags associated with it. Typically, every time you publish
a new version of an image, you will also update its tags to make it easier for your users to get the
latest version.

This guide will explain Bitnami's tagging system and how you can use it to identify different versions
of its container images.

Rolling tags
Bitnami uses rolling tags for its container images. To understand how this works, let's look at the tags
for the Bitnami WordPress container image:

latest, 6, 6-debian-12, 6.4.3

The latest tag always points to the latest revision of the WordPress image.

The 6 tag is a rolling tag that always points to the latest revision of WordPress 6.y.z

The 6-debian-12 tag points to the latest revision of WordPress 6.y.z for Debian 12.

The 6.4.3 tag is a rolling tag that points to the latest revision of WordPress 6.4.3. It will be
updated with different revisions or daily releases but only for WordPress 6.4.3.

When Bitnami revises container images, typically to upgrade system packages, fix bugs or improve
system configuration, it also updates the container tags to point to the latest revision of the image.
Therefore, the rolling tags shown above are dynamic; they will always point to the latest revision or
daily release for the corresponding image.


As an example, the 6.4.3 tag might point to WordPress 6.4.3 revision 10 now but will refer to
WordPress 6.4.3 revision 11 when Bitnami next updates the container image. The suffix revision
number (rXX) is incremented every time Bitnami releases an updated version of the image for the
same version of the application.

It is worth noting that any tags that do not explicitly specify a distribution should be assumed to refer
to the base image used in the Bitnami Application Catalog, at this moment, Debian 12.

Immutable tags
What if you depend on a specific revision of an image? For these scenarios, Bitnami also attaches a
static (immutable) tag to each revision. In this example, the 6.4.3-debian-12-r10 tag refers to
WordPress 6.4.3 revision 10, and using this tag ensures that users always get the same image every
time.

Usage recommendations
Which tag should you use and when? Follow these guidelines:

If you are using containers in a production environment (such as Kubernetes), Bitnami recommends using immutable tags. This ensures that your deployment is not affected if a new revision inadvertently breaks existing functionality.

If you are using containers for development, Bitnami suggests using rolling tags. This ensures
that you are always using the latest version. Rolling tags also make it easier to use a specific
version of a development tool (such as bitnami/node:18 for Node.js 18).
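
For example, using the WordPress tags described above (revision numbers are illustrative and change over time):

# Production: pin an immutable tag so the image never changes underneath the deployment
docker pull bitnami/wordpress:6.4.3-debian-12-r10
# Development: track a rolling tag to always get the latest WordPress 6.x revision
docker pull bitnami/wordpress:6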

Useful links
To learn more, consider visiting the following links:

Docker tagging command reference

Docker tagging best practices


Backup and restore Bitnami container


deployments

Introduction
Developers are increasingly adopting containers as their preferred way to build cloud-native
applications. They are portable, easy to use and consistent. Bitnami provides a wide range of pre-
packaged Docker containers. These ready-to-use assets follow the industry best practices, and
bundle the most up to date and secure versions of applications and their components.

Creating regular backups of your container deployments is a good practice that prevents data loss,
and gives the ability to restore your backup elsewhere.

This guide shows you how to create a backup of your container's persisted data and restore it using
Bitnami container images. It uses the Bitnami WordPress image as an example, but you can follow
this guide using any other Bitnami container image.

Assumptions and prerequisites


This guide makes the following assumptions:

You are running a Bitnami container image that mounts persistent volumes and includes
custom data that you wish to back up.

You already have a Docker environment with Docker Compose installed.

You are running a solution which is comprised of two containers: one for the application and
another for the database.

IMPORTANT Some Bitnami images use a single container and others use more than one container.
The example WordPress container image used in this guide uses two separate containers, one for
the application and another for the database. To backup and restore the application data, you must
backup and restore all volumes mounted in each container.

Step 1: Backup data volumes


To back up the data stored in the running container, it is essential to backup all the volumes for all
the containers created when running the container. In the case of the Bitnami WordPress image,
there are two different containers that persist the data and mount volumes: one for the application,
and another for the database.

Note

To learn how the image is configured, see docker-compose.yml file.


To back up your container data, you must generate a tar.gz file within the backup directory in each
container. In this case, backup both the volumes mounted in the application container and the
database container. Follow these instructions:

TIP Some Bitnami containers, such as Magento, create extra volumes apart from the application and database ones. Check the container's docker-compose.yml file to learn which volumes your container will mount.

Copy the container ID of each container:

$ docker-compose ps -q wordpress
$ docker-compose ps -q mariadb

Stop the containers you want to back up.

$ docker-compose stop
Stopping wp_wordpress_1 ... done
Stopping wp_mariadb_1 ... done

Execute the commands below to create a backup file for each container. Remember to
replace the CONTAINER-WORDPRESS and CONTAINER-MARIADB placeholder with the
corresponding WordPress or MariaDB container ID.

$ docker run --rm --volumes-from=CONTAINER-WORDPRESS -v $(pwd)/backup:/tmp bitnami/minideb tar czf /tmp/wordpress_data_backup.tar.gz -C /bitnami/wordpress .
$ docker run --rm --volumes-from=CONTAINER-MARIADB -v $(pwd)/backup:/tmp bitnami/minideb tar czf /tmp/mariadb_data_backup.tar.gz -C /bitnami/mariadb .

Check that you have the correct permissions on the backup file:

$ ls -lah backup/wordpress_data_backup.tar.gz

You should see something similar to this:

-rw-r--r-- 1 root root backup/wordpress_data_backup.tar.gz

Step 2: Restore the data on each destination container


You can now restore the backed up data on a new application image. Restart each container
(application and database) as if they have empty volumes. Execute the commands below.
Remember to replace the CONTAINER-WORDPRESS and CONTAINER-MARIADB placeholder with
the corresponding WordPress or MariaDB container ID.

Restore the application data:

$ docker run --rm --volumes-from=CONTAINER-WORDPRESS -v $(pwd)/backup:/tmp bitnami/minideb bash -c "rm -rf /bitnami/wordpress/* && tar xzf /tmp/wordpress_data_backup.tar.gz -C /bitnami/wordpress"
$ docker restart CONTAINER-WORDPRESS

Restore the database data:


$ docker run --rm --volumes-from=CONTAINER-MARIADB -v $(pwd)/backup:/tmp bitnami/minideb bash -c "rm -rf /bitnami/mariadb/* && tar xzf /tmp/mariadb_data_backup.tar.gz -C /bitnami/mariadb"
$ docker restart CONTAINER-MARIADB

Your data is now restored. You can check it by accessing the application and verifying that the data
exists and the application is functioning correctly.

Useful links
To learn more about the topics discussed in this guide, use the links below:

Bitnami Containers

Bitnami How-To Guides for Containers

Understand Bitnami's Rolling Tags for Container Images

Develop Locally a Custom WordPress Using Bitnami Containers

Bitnami's Best Practices for Securing and Hardening Container Images


Backup and Restore VMware Tanzu


Application Catalog Helm Chart
Deployments with Velero

Introduction
VMware Tanzu Application Catalog (Tanzu Application Catalog) offers Helm charts for popular
applications and infrastructure components like WordPress, MySQL, Elasticsearch and many others.
These charts let you deploy your applications and infrastructure on Kubernetes in a secure and
reliable manner without worrying about packaging, dependencies or Kubernetes YAML file
configurations.

Once you have your applications and your infrastructure running on Kubernetes, you need to start
thinking about how to backup the data flowing in and out of your cluster, so that you can protect
yourself from a failure or service outage. That's where Velero comes in.

Velero is an open source tool that makes it easy to backup and restore Kubernetes resources. It can
be used to back up an entire cluster, or it can be fine-tuned to only backup specific deployments
and/or namespaces. This guide gets you started with Velero by showing you how to use it to backup
and restore deployments created with Tanzu Application Catalog's Helm charts.

Assumptions and prerequisites


This guide makes the following assumptions:

You have two separate multi-node Kubernetes clusters - a source cluster and a destination cluster - running on the same cloud provider. This guide uses the Google Kubernetes Engine (GKE) service from Google Cloud Platform but you can use any cloud provider supported by Velero. Learn about the providers supported by Velero.

You have configured Helm to use the Tanzu Application Catalog chart repository following
the instructions for Tanzu Application Catalog or the instructions for VMware Tanzu
Application Catalog for Tanzu Advanced.

You have the kubectl CLI and the Helm v3.x package manager installed and configured to
work with your Kubernetes clusters. Learn how to install kubectl and Helm v3.x.

This guide uses the Tanzu Application Catalog WordPress Helm chart as an example and describes
how to backup and restore all the components of a Tanzu Application Catalog WordPress
deployment created with this chart from one cluster to another. The steps are similar for other Tanzu
Application Catalog Helm charts.

Step 1: Deploy and customize WordPress on the source cluster

NOTE

This step creates a fresh WordPress deployment using Tanzu Application Catalog's
Helm chart and then customizes it to simulate a real-world backup/restore scenario.
If you already have a customized Tanzu Application Catalog WordPress deployment,
you can go straight to Step 2.

Follow the steps below:

1. Modify your context to reflect the source cluster. Deploy WordPress on the source cluster
and make it available at a public load balancer IP address. Replace the PASSWORD
placeholder with a password for your WordPress dashboard and the REPOSITORY
placeholder with a reference to your Tanzu Application Catalog chart repository.

helm install wordpress REPOSITORY/wordpress --set service.type=LoadBalancer --set wordpressPassword=PASSWORD

2. Wait for the deployment to complete and then use the command below to obtain the load
balancer IP address:

kubectl get svc --namespace default wordpress --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"

3. Browse to the IP address and log in to the WordPress dashboard using the password
specified at deployment-time. Create and publish a sample post with a title, body, category
and image.

4. Confirm that you see the new post in the WordPress blog, as shown below:


Step 2: Install Velero on the source cluster


The next step is to install Velero on the source cluster using the appropriate plugin for your cloud
provider. To do this, follow the steps below:

1. Modify your context to reflect the source cluster (if not already done).

2. Follow the plugin setup instructions for your cloud provider. For example, if you are using
Google Cloud Platform (as this guide does), follow the GCP plugin setup instructions to create
a service account and storage bucket and obtain a credentials file.

3. Then, install Velero by executing the command below, remembering to replace the
BUCKET-NAME placeholder with the name of your storage bucket and the SECRET-
FILENAME placeholder with the path to your credentials file:

velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.0.0 --bucket BUCKET-NAME --secret-file SECRET-FILENAME

You should see output similar to the screenshot below as Velero is installed:


4. Confirm that the Velero deployment is successful by checking for a running pod using the
command below:

kubectl get pods -n velero

Step 3: Backup the WordPress deployment on the source


cluster
Once Velero is running, create a backup of the WordPress deployment:

velero backup create wpb --selector release=wordpress

TIP: The previous command uses a label to select and backup only the resources related to the
WordPress deployment. Optionally, you can backup all deployments in a specific namespace with
the --include-namespaces parameter, or backup the entire cluster by omitting all selectors.

Execute the command below to view the contents of the backup and confirm that it contains all the
required resources:

velero backup describe wpb --details


At this point, your backup is ready. You can repeat this step every time you wish to have a manual
backup, or you can configure a schedule for automatic backups.
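
For example, a daily backup of the same WordPress resources could be scheduled with a command like the one below (the schedule name and cron expression are illustrative):

velero schedule create wordpress-daily --schedule="0 2 * * *" --selector release=wordpress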

Step 4: Restore the WordPress deployment on the


destination cluster
Once your backup is complete and confirmed, you can now turn your attention to restoring it. For
illustrative purposes, this guide will assume that you wish to restore your WordPress backup to the
second (destination) cluster.

1. Modify your context to reflect the destination cluster.

2. Install Velero on the destination cluster as described in Step 2. Remember to use the same
values for the BUCKET-NAME and SECRET-FILENAME placeholders as you did originally, so
that Velero is able to access the previously-saved backups.

velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.0.0 --bucket BUCKET-NAME --secret-file SECRET-FILENAME

3. Confirm that the Velero deployment is successful by checking for a running pod using the
command below:

kubectl get pods -n velero

4. To avoid the backup data being overwritten, switch the bucket to read-only access:

kubectl patch backupstoragelocation default -n velero --type merge --patch '{"spec":{"accessMode":"ReadOnly"}}'


5. Confirm that Velero is able to access the original backup:

velero backup describe wpb --details

6. Restore the backup. Note that this may take a few minutes to complete.

velero restore create --from-backup wpb

Wait until the backed-up resources are fully deployed and active. Use the kubectl get pods and
kubectl get svc commands to track the status of the pods and service endpoint. Once the
deployment has been restored, browse to the load balancer IP address and confirm that you see the
same post content as that on the source cluster.
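
For example, the following commands (using the same release label and output template seen earlier in this guide) list the restored pods and print the new load balancer IP address:

kubectl get pods --namespace default -l release=wordpress
kubectl get svc --namespace default wordpress --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"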

At this point, you have successfully restored the Tanzu Application Catalog WordPress Helm chart
deployment using Velero.

TIP: A new public IP address will be associated with the load balancer service after the deployment is
restored. If you configured a domain to point to the original public IP address, remember to
reconfigure your DNS settings to use the new public IP address after restoring the deployment.

Useful links
To learn more about the topics discussed in this guide, use the links below:

Velero documentation


Backup and Restore Apache Cassandra Deployments on Kubernetes

Introduction
VMware Tanzu Application Catalog's (Tanzu Application Catalog) Apache Cassandra Helm chart
makes it easy to deploy a scalable Apache Cassandra database cluster on Kubernetes. This Helm
chart is compliant with current best practices and can also be easily upgraded to ensure that you
always have the latest fixes and security updates.

Once the database cluster is deployed and in use, it's necessary to put a data backup/restore
strategy in place. This backup/restore strategy is needed for many operational scenarios, including
disaster recovery planning, off-site data analysis or application load testing.

This guide explains how to back up and restore an Apache Cassandra deployment on Kubernetes
using Velero, an open-source Kubernetes backup/restore tool.

Assumptions and prerequisites


This guide makes the following assumptions:

You have two separate Kubernetes clusters - a source cluster and a destination cluster - with
kubectl and Helm v3 installed. This guide uses Google Kubernetes Engine (GKE) clusters
but you can also use any other Kubernetes provider. Learn how to install kubectl and Helm
v3.x.

You have configured Helm to use the Tanzu Application Catalog chart repository following
the instructions for Tanzu Application Catalog or the instructions for VMware Tanzu
Application Catalog for Tanzu Advanced.

You have previously deployed the Tanzu Application Catalog Apache Cassandra Helm chart
on the source cluster and added some data to it. Example command sequences to perform
these tasks are shown below, where the PASSWORD placeholder refers to the database
administrator password and the cluster is deployed with 3 replicas. Replace the
REPOSITORY and REGISTRY placeholders with references to your Tanzu Application
Catalog chart repository and container registry.

helm install cassandra REPOSITORY/cassandra \
  --set replicaCount=3 \
  --set cluster.seedCount=2 \
  --set dbUser.user=admin \
  --set dbUser.password=PASSWORD \
  --set cluster.minimumAvailable=2
kubectl run --namespace default cassandra-client --rm --tty -i --restart='Never' --env CASSANDRA_PASSWORD=PASSWORD --image REGISTRY/cassandra:3.11.8-debian-10-r20 -- bash
cqlsh -u admin -p $CASSANDRA_PASSWORD cassandra
CREATE KEYSPACE IF NOT EXISTS test WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor': '2'};
USE test;
CREATE TABLE items (id UUID PRIMARY KEY, name TEXT);
INSERT INTO items (id, name) VALUES (now(), 'milk');
INSERT INTO items (id, name) VALUES (now(), 'eggs');
exit

The Kubernetes provider is supported by Velero.

Both clusters are on the same Kubernetes provider, as this is a requirement of Velero's
native support for migrating persistent volumes.

The restored deployment on the destination cluster will have the same name, namespace
and credentials as the original deployment on the source cluster.

NOTE

For persistent volume migration across cloud providers with Velero, you have the
option of using Velero's Restic integration. This integration is not covered in this
guide.

Step 1: Install Velero on the source cluster


Velero is an open source tool that makes it easy to back up and restore Kubernetes resources. It can
be used to back up an entire cluster or specific resources such as persistent volumes.
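
Before you begin, you may want to confirm that the Velero CLI is available on your workstation; for example (the --client-only flag skips querying the cluster and may vary between Velero releases):

velero version --client-only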

1. Modify your context to reflect the source cluster (if not already done).

2. Follow the Velero plugin setup instructions for your cloud provider. For example, if you are
using Google Cloud Platform (as this guide does), follow the GCP plugin setup instructions to
create a service account and storage bucket and obtain a credentials file.

3. Then, install Velero on the source cluster by executing the command below, remembering
to replace the BUCKET-NAME placeholder with the name of your storage bucket and the
SECRET-FILENAME placeholder with the path to your credentials file:

velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.0.0 --bucket BUCKET-NAME --secret-file SECRET-FILENAME

You should see output similar to the screenshot below as Velero is installed:


4. Confirm that the Velero deployment is successful by checking for a running pod using the
command below:

kubectl get pods -n velero

Step 2: Back up the Apache Cassandra deployment on the source cluster
The next step involves using Velero to copy the persistent data volumes for the Apache Cassandra
pods. These copied data volumes can then be reused in a new deployment.

1. Create a backup of the volumes in the running Apache Cassandra deployment on the source
cluster. This backup will contain both the primary and secondary node volumes.

velero backup create cassandra-backup --include-resources=pvc,pv --selector app.kubernetes.io/instance=cassandra

2. Execute the command below to view the contents of the backup and confirm that it contains
all the required resources:

velero backup describe cassandra-backup --details

3. To avoid the backup data being overwritten, switch the bucket to read-only access:

kubectl patch backupstoragelocation default -n velero --type merge --patch '{"spec":{"accessMode":"ReadOnly"}}'

Step 3: Restore the Apache Cassandra deployment on the destination cluster
You can now restore the persistent volumes and integrate them with a new Apache Cassandra
deployment on the destination cluster.


1. Modify your context to reflect the destination cluster.

2. Install Velero on the destination cluster as described in Step 1. Remember to use the same
values for the BUCKET-NAME and SECRET-FILENAME placeholders as you did originally, so
that Velero is able to access the previously-saved backups.

velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.0.0 --bucket BUCKET-NAME --secret-file SECRET-FILENAME

3. Confirm that the Velero deployment is successful by checking for a running pod using the
command below:

kubectl get pods -n velero

4. Restore the persistent volumes in the same namespace as the source cluster using Velero.

velero restore create --from-backup cassandra-backup

5. Confirm that the persistent volumes have been restored:

kubectl get pvc

6. Create a new Apache Cassandra deployment. Use the same name, namespace and cluster
topology as the original deployment. Replace the PASSWORD placeholder with the same
database administrator password used in the original deployment and the REPOSITORY
placeholder with a reference to your Tanzu Application Catalog chart repository.

helm install cassandra REPOSITORY/cassandra \
  --set replicaCount=3 \
  --set cluster.seedCount=2 \
  --set dbUser.user=admin \
  --set dbUser.password=PASSWORD \
  --set cluster.minimumAvailable=2

NOTE: If using Tanzu Application Catalog for Tanzu Advanced, install the chart following the
steps described in the VMware Tanzu Application Catalog for Tanzu Advanced
documentation instead.

NOTE: The deployment command shown above is only an example. It is important to create
the new deployment on the destination cluster using the same namespace, deployment
name, credentials and cluster topology as the original deployment on the source cluster.

This will create a new deployment that uses the original pod volumes (and hence the original
data).

7. Connect to the new deployment and confirm that your original data is intact using a query
like the example shown below. Replace the PASSWORD placeholder with the database
administrator password and the REGISTRY placeholder with a reference to your Tanzu
Application Catalog container registry.

kubectl run --namespace default cassandra-client --rm --tty -i --restart='Never' --env CASSANDRA_PASSWORD=PASSWORD --image REGISTRY/cassandra:3.11.8-debian-10-r20 -- bash


cqlsh -u admin -p $CASSANDRA_PASSWORD cassandra
USE test;
SELECT * FROM items;

Confirm that your original data is intact.

Useful links
Apache Cassandra Helm chart

Velero documentation


Backup and Restore Apache Kafka Deployments on Kubernetes

Introduction
Apache Kafka is a scalable and highly-available data streaming platform. It is a powerful tool for
stream processing and is available under an open source license.

VMware Tanzu Application Catalog's (Tanzu Application Catalog) Apache Kafka Helm chart makes it
easy to get started with an Apache Kafka cluster on Kubernetes. This Helm chart is compliant with
current best practices and can also be easily upgraded to ensure that you always have the latest fixes
and security updates.

Once the cluster is deployed and in operation, it is important to back up its data regularly and ensure
that it can be easily restored as needed. Data backup and restore procedures are also important for
other scenarios, such as off-site data migration/data analysis or application load testing.

This guide explains how to back up and restore an Apache Kafka deployment on Kubernetes using
Velero, an open-source Kubernetes backup/restore tool.

Assumptions and prerequisites


This guide makes the following assumptions:

You have two separate Kubernetes clusters - a source cluster and a destination cluster - with
kubectl and Helm v3 installed. This guide uses Google Kubernetes Engine (GKE) clusters
but you can also use any other Kubernetes provider. Learn how to install kubectl and Helm
v3.x.

You have configured Helm to use the Tanzu Application Catalog chart repository following
the instructions for Tanzu Application Catalog or the instructions for VMware Tanzu
Application Catalog for Tanzu Advanced.

You have previously deployed the Tanzu Application Catalog Apache Kafka Helm chart on
the source cluster and added some data to it. Example command sequences to perform
these tasks are shown below. Replace the REPOSITORY and REGISTRY placeholders with
references to your Tanzu Application Catalog chart repository and container registry.

helm install kafka REPOSITORY/kafka
kubectl run kafka-client --restart='Never' --image REGISTRY/kafka:2.8.0-debian-10-r27 --namespace default --command -- sleep infinity
kubectl exec --tty -i kafka-client --namespace default -- bash
kafka-console-producer.sh --broker-list kafka-0.kafka-headless.default.svc.cluster.local:9092 --topic test
>first
>second
>third
exit

The Kubernetes provider is supported by Velero.

Both clusters are on the same Kubernetes provider, as this is a requirement of Velero's
native support for migrating persistent volumes.

The restored deployment on the destination cluster will have the same name, namespace
and credentials as the original deployment on the source cluster.

NOTE

For persistent volume migration across cloud providers with Velero, you have the
option of using Velero's Restic integration. This integration is not covered in this
guide.

Step 1: Install Velero on the source cluster


Velero is an open source tool that makes it easy to back up and restore Kubernetes resources. It can
be used to back up an entire cluster or specific resources such as persistent volumes.

1. Modify your context to reflect the source cluster (if not already done).

2. Follow the Velero plugin setup instructions for your cloud provider. For example, if you are
using Google Cloud Platform (as this guide does), follow the GCP plugin setup instructions to
create a service account and storage bucket and obtain a credentials file.

3. Then, install Velero on the source cluster by executing the command below, remembering
to replace the BUCKET-NAME placeholder with the name of your storage bucket and the
SECRET-FILENAME placeholder with the path to your credentials file:

velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.2.0 --bucket BUCKET-NAME --secret-file SECRET-FILENAME

You should see output similar to the screenshot below as Velero is installed:


4. Confirm that the Velero deployment is successful by checking for a running pod using the
command below:

kubectl get pods -n velero

Step 2: Back up the Apache Kafka deployment on the source cluster
The next step involves using Velero to copy the persistent data volumes for the Apache Kafka pods.
These copied data volumes can then be reused in a new deployment.

1. Create a backup of the volumes in the running Apache Kafka deployment on the source
cluster. This backup will contain the data volumes for all the nodes in the Kafka release.

velero backup create kafka-backup --include-resources=pvc,pv --selector app.kubernetes.io/instance=kafka

2. Execute the command below to view the contents of the backup and confirm that it contains
all the required resources:

velero backup describe kafka-backup --details

3. To avoid the backup data being overwritten, switch the bucket to read-only access:

kubectl patch backupstoragelocation default -n velero --type merge --patch '{"spec":{"accessMode":"ReadOnly"}}'

Step 3: Restore the Apache Kafka deployment on the destination cluster
You can now restore the persistent volumes and integrate them with a new Apache Kafka
deployment on the destination cluster.


1. Modify your context to reflect the destination cluster.

2. Install Velero on the destination cluster as described in Step 1. Remember to use the same
values for the BUCKET-NAME and SECRET-FILENAME placeholders as you did originally, so
that Velero is able to access the previously-saved backups.

velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.2.0 --bucket BUCKET-NAME --secret-file SECRET-FILENAME

3. Confirm that the Velero deployment is successful by checking for a running pod using the
command below:

kubectl get pods -n velero

4. Restore the persistent volumes in the same namespace as the source cluster using Velero.

velero restore create --from-backup kafka-backup

5. Confirm that the persistent volumes have been restored:

kubectl get pvc

6. Create a new Apache Kafka deployment. Use the same name, namespace and cluster
topology as the original deployment. Replace the REPOSITORY placeholder with a reference
to your Tanzu Application Catalog chart repository.

helm install kafka REPOSITORY/kafka

NOTE: If using Tanzu Application Catalog for Tanzu Advanced, install the chart following the
steps described in the Tanzu Application Catalog for Tanzu Advanced documentation
instead.

NOTE: The deployment command shown above is only an example. It is important to create
the new deployment on the destination cluster using the same namespace, deployment
name, credentials and cluster topology as the original deployment on the source cluster.

This will create a new deployment that uses the original pod volumes (and hence the original
data).

7. Connect to the new deployment and confirm that your original messages are intact using a
query like the example shown below. Replace the REGISTRY placeholder with a reference to
your Tanzu Application Catalog container registry.

kubectl run kafka-client --restart='Never' --image REGISTRY/kafka:2.8.0-debian-10-r27 --namespace default --command -- sleep infinity
kubectl exec --tty -i kafka-client --namespace default -- bash
kafka-console-consumer.sh --bootstrap-server kafka.default.svc.cluster.local:9092 --topic test --from-beginning

Confirm that your original data is intact.
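
Once the messages have been verified, you can optionally remove the temporary client pod created above:

kubectl delete pod kafka-client --namespace default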


Useful links
Apache Kafka Helm chart

Velero documentation


Backup and Restore Etcd Deployments on Kubernetes

Introduction
etcd is a reliable and efficient key-value store, most commonly used for data storage in distributed
systems. It offers a simple interface for reading and writing data and is available under an open
source license.

VMware Tanzu Application Catalog (Tanzu Application Catalog) offers an etcd Helm chart that
enables quick and easy deployment of an etcd cluster on Kubernetes. This Helm chart is compliant
with current best practices and is suitable for use in production environments, with built-in features
for role-based access control (RBAC), horizontal scaling, disaster recovery and TLS.

Of course, the true business value of an etcd cluster comes not from the cluster itself, but from the
data that resides within it. It is critical to protect this data, by backing it up regularly and ensuring that
it can be easily restored as needed. Data backup and restore procedures are also important for other
scenarios, such as off-site data migration/data analysis or application load testing.

This guide walks you through two different approaches you can follow when backing up and
restoring Tanzu Application Catalog etcd Helm chart deployments on Kubernetes:

Back up the data from the source deployment and restore it in a new deployment using
etcd's built-in backup/restore tools.

Back up the persistent volumes from the source deployment and attach them to a new
deployment using Velero, a Kubernetes backup/restore tool.

Assumptions and prerequisites


This guide makes the following assumptions:

You have two separate Kubernetes clusters - a source cluster and a destination cluster - with
kubectl and Helm v3 installed. This guide uses Google Kubernetes Engine (GKE) clusters
but you can also use any other Kubernetes provider. Learn how to install kubectl and Helm
v3.x.

You have configured Helm to use the Tanzu Application Catalog chart repository following
the instructions for Tanzu Application Catalog or the instructions for VMware Tanzu
Application Catalog for Tanzu Advanced.

You have previously deployed the Tanzu Application Catalog etcd Helm chart on the source
cluster and added some data to it. Example command sequences to perform these tasks are
shown below, where the PASSWORD placeholder refers to the etcd administrator password.
Replace the REPOSITORY placeholder with a reference to your Tanzu Application Catalog
chart repository.

helm install etcd REPOSITORY/etcd \
  --set auth.rbac.rootPassword=PASSWORD \
  --set statefulset.replicaCount=3
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=etcd,app.kubernetes.io/instance=etcd" -o jsonpath="{.items[0].metadata.name}")
kubectl exec -it $POD_NAME -- etcdctl --user root:PASSWORD put /message1 foo
kubectl exec -it $POD_NAME -- etcdctl --user root:PASSWORD put /message2 bar

Method 1: Backup and restore data using etcd's built-in tools


This method involves using etcd's etcdctl tool to create a snapshot of the data in the source cluster,
and the Tanzu Application Catalog etcd Helm chart's recovery features to create a new cluster using
the data from the snapshot.

Step 1: Create a data snapshot


The first step is to back up the data in the etcd deployment on the source cluster. Follow these steps:

1. Forward the etcd service port and place the process in the background:

kubectl port-forward --namespace default svc/etcd 2379:2379 &

2. Create a directory for the backup files and make it the current working directory:

mkdir etcd-backup
chmod o+w etcd-backup
cd etcd-backup

3. Use the etcdctl tool to create a snapshot of the etcd cluster and save it to the current
directory. If this tool is not installed on your system, use Tanzu Application Catalog's etcd
Docker image to perform the backup, as shown below. Replace the PASSWORD placeholder
with the administrator password set at deployment-time and the REGISTRY placeholder with
a reference to your Tanzu Application Catalog container registry.

docker run -it --env ALLOW_NONE_AUTHENTICATION=yes --rm --network host -v $(pwd):/backup REGISTRY/etcd etcdctl --user root:PASSWORD --endpoints http://127.0.0.1:2379 snapshot save /backup/mybackup

Here, the --network host parameter lets the Docker container use the host's network stack and
thereby gain access to the forwarded port. The etcdctl command connects to the etcd
service and creates a snapshot in the /backup directory, which is mapped to the current
directory (etcd-backup/) on the Docker host with the -v parameter. Finally, the --rm
parameter deletes the container after the etcdctl command completes execution.

4. Stop the service port forwarding by terminating the corresponding background process.

5. Adjust the permissions of the backup file to make it world-readable:

sudo chmod -R 644 mybackup


At the end of this step, the backup directory should contain a file named mybackup, which is a
snapshot of the data from the etcd deployment.
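
Optionally, you can verify the snapshot before proceeding by running etcdctl's snapshot status subcommand through the same container image used above; this is a minimal sketch (output format may vary with the etcd version):

docker run -it --env ALLOW_NONE_AUTHENTICATION=yes --rm -v $(pwd):/backup REGISTRY/etcd etcdctl snapshot status /backup/mybackup --write-out=table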

Step 2: Copy the snapshot to a PVC


The etcd cluster can be restored from the snapshot created in the previous step. There are different
ways to do this; one simple approach is to make the snapshot available to the pods using a
Kubernetes PersistentVolumeClaim (PVC). Therefore, the next step is to create a PVC and copy the
snapshot file into it. Further, since each node of the restored cluster will access the PVC, it is
important to create the PVC using a storage class that supports ReadWriteMany access, such as NFS.

1. Begin by installing the NFS Server Provisioner. The easiest way to get this running on any
platform is with the stable Helm chart. Use the command below, remembering to adjust the
storage size to reflect your cluster's settings:

helm repo add stable https://charts.helm.sh/stable
helm install nfs stable/nfs-server-provisioner \
  --set persistence.enabled=true,persistence.size=5Gi

2. Create a Kubernetes manifest file named etcd.yaml to configure an NFS-backed PVC and a
pod that uses it, as below. Replace the REGISTRY placeholder with a reference to your
Tanzu Application Catalog container registry.

## etcd.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: etcd-backup-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: nfs
---
apiVersion: v1
kind: Pod
metadata:
  name: etcd-backup-pod
spec:
  volumes:
    - name: etcd-backup
      persistentVolumeClaim:
        claimName: etcd-backup-pvc
  containers:
    - name: inspector
      image: REGISTRY/tac-shell:latest
      command:
        - sleep
        - infinity
      volumeMounts:
        - mountPath: "/backup"
          name: etcd-backup


3. Apply the manifest to the Kubernetes cluster:

kubectl apply -f etcd.yaml

This will create a pod named etcd-backup-pod with an attached PVC named etcd-backup-
pvc. The PVC will be mounted at the /backup mount point of the pod.

4. Copy the snapshot to the PVC using the mount point:

kubectl cp mybackup etcd-backup-pod:/backup/mybackup

5. Verify that the snapshot exists in the PVC, by connecting to the pod command-line shell and
inspecting the /backup directory:

kubectl exec -it etcd-backup-pod -- ls -al /backup

The command output should display a directory listing containing the snapshot file, as shown
below:

6. Delete the pod, as it is no longer required:

kubectl delete pod etcd-backup-pod

Step 3: Restore the snapshot in a new cluster


The next step is to create an empty etcd deployment on the destination cluster and restore the data
snapshot into it. The Tanzu Application Catalog etcd Helm chart provides built-in capabilities to do
this, via its startFromSnapshot.* parameters.

1. Create a new etcd deployment. Replace the PASSWORD placeholder with the same
password used in the original deployment and replace the REPOSITORY placeholder with a
reference to your Tanzu Application Catalog chart repository.

helm install etcd-new REPOSITORY/etcd \
  --set startFromSnapshot.enabled=true \
  --set startFromSnapshot.existingClaim=etcd-backup-pvc \
  --set startFromSnapshot.snapshotFilename=mybackup \
  --set auth.rbac.rootPassword=PASSWORD \
  --set statefulset.replicaCount=3

This command creates a new etcd cluster and initializes it using an existing data snapshot.
The startFromSnapshot.existingClaim and startFromSnapshot.snapshotFilename parameters
define the source PVC and the snapshot filename respectively.

NOTE: It is important to create the new deployment on the destination cluster using the
same credentials as the original deployment on the source cluster.

2. Connect to the new deployment and confirm that your data has been successfully restored.


Replace the PASSWORD placeholder with the administrator password set at deployment-
time.

export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=etcd,app.kubernetes.io/instance=etcd-new" -o jsonpath="{.items[0].metadata.name}")
kubectl exec -it $POD_NAME -- etcdctl --user root:PASSWORD get /message1
kubectl exec -it $POD_NAME -- etcdctl --user root:PASSWORD get /message2

Here is an example of what you should see:

NOTE

Tanzu Application Catalog's etcd Helm chart also supports auto disaster recovery by
periodically snapshotting the keyspace. If the cluster permanently loses more than
(N-1)/2 members, it tries to recover the cluster from a previous snapshot. Learn more
about this feature.
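
As an illustration only, enabling this feature at deployment-time typically involves parameters along the following lines; the exact parameter names and requirements (such as a ReadWriteMany-capable storage class) depend on your chart version, so check the chart's values documentation first:

helm install etcd REPOSITORY/etcd \
  --set auth.rbac.rootPassword=PASSWORD \
  --set statefulset.replicaCount=3 \
  --set disasterRecovery.enabled=true \
  --set disasterRecovery.cronjob.schedule="*/30 * * * *"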

Method 2: Back up and restore persistent data volumes


This method involves copying the persistent data volumes for the etcd nodes and reusing them in a
new deployment with Velero, an open source Kubernetes backup/restore tool. This method is only
suitable when:

The Kubernetes provider is supported by Velero.

Both clusters are on the same Kubernetes provider, as this is a requirement of Velero's
native support for migrating persistent volumes.

The restored deployment on the destination cluster will have the same name, namespace,
topology and credentials as the original deployment on the source cluster.

NOTE

For persistent volume migration across cloud providers with Velero, you have the
option of using Velero's Restic integration. This integration is not covered in this
guide.

Step 1: Install Velero on the source cluster


Velero is an open source tool that makes it easy to back up and restore Kubernetes resources. It can
be used to back up an entire cluster or specific resources such as persistent volumes.

1. Modify your context to reflect the source cluster (if not already done).


2. Follow the Velero plugin setup instructions for your cloud provider. For example, if you are
using Google Cloud Platform (as this guide does), follow the GCP plugin setup instructions to
create a service account and storage bucket and obtain a credentials file.

3. Then, install Velero on the source cluster by executing the command below, remembering
to replace the BUCKET-NAME placeholder with the name of your storage bucket and the
SECRET-FILENAME placeholder with the path to your credentials file:

velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.0.0 --bucket BUCKET-NAME --secret-file SECRET-FILENAME

You should see output similar to the screenshot below as Velero is installed:

4. Confirm that the Velero deployment is successful by checking for a running pod using the
command below:

kubectl get pods -n velero

Step 2: Back up the etcd deployment on the source cluster


Next, back up the persistent volumes using Velero.

1. Create a backup of the volumes in the running etcd deployment on the source cluster. This
backup will contain all the node volumes.

velero backup create etcd-backup --include-resources pvc,pv --selector app.kubernetes.io/instance=etcd

2. Execute the command below to view the contents of the backup and confirm that it contains
all the required resources:

velero backup describe etcd-backup --details

3. To avoid the backup data being overwritten, switch the bucket to read-only access:


kubectl patch backupstoragelocation default -n velero --type merge --patch '{"spec":{"accessMode":"ReadOnly"}}'

Step 3: Restore the etcd deployment on the destination cluster


You can now restore the persistent volumes and integrate them with a new etcd deployment on the
destination cluster.

1. Modify your context to reflect the destination cluster.

2. Install Velero on the destination cluster as described in Step 1. Remember to use the same
values for the BUCKET-NAME and SECRET-FILENAME placeholders as you did originally, so
that Velero is able to access the previously-saved backups.

velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.0.0 --bucket BUCKET-NAME --secret-file SECRET-FILENAME

3. Confirm that the Velero deployment is successful by checking for a running pod using the
command below:

kubectl get pods -n velero

4. Restore the persistent volumes in the same namespace as the source cluster using Velero.

velero restore create --from-backup etcd-backup

5. Confirm that the persistent volumes have been restored:

kubectl get pvc --namespace default

6. Create a new etcd deployment. Use the same deployment name, credentials and other
parameters as the original deployment. Replace the PASSWORD placeholder with the same
database password used in the original release and the REPOSITORY placeholder with a
reference to your Tanzu Application Catalog chart repository.

helm install etcd REPOSITORY/etcd \
  --set auth.rbac.rootPassword=PASSWORD \
  --set statefulset.replicaCount=3

NOTE: It is important to create the new deployment on the destination cluster using the
same namespace, deployment name, credentials and number of replicas as the original
deployment on the source cluster.

This will create a new deployment that uses the original volumes (and hence the original
data).

7. Connect to the new deployment and confirm that your original data is intact:

export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=etcd,app.kubernetes.io/instance=etcd" -o jsonpath="{.items[0].metadata.name}")
kubectl exec -it $POD_NAME -- etcdctl --user root:PASSWORD get /message1
kubectl exec -it $POD_NAME -- etcdctl --user root:PASSWORD get /message2


Here is an example of what you should see:

Useful links
Tanzu Application Catalog etcd Helm chart

etcd client application etcdctl

Velero documentation


Backup and Restore MongoDB Deployments on Kubernetes

Introduction
VMware Tanzu Application Catalog (Tanzu Application Catalog) offers a MongoDB Helm chart that
makes it quick and easy to deploy a horizontally-scalable MongoDB cluster on Kubernetes with
separate primary, secondary and arbiter nodes. This Helm chart is compliant with current best
practices and can also be easily upgraded to ensure that you always have the latest fixes and security
updates.

However, setting up a scalable MongoDB service is just the beginning; you also need to regularly
back up the data being stored in the service, and to have the ability to restore it elsewhere if needed.
Common scenarios for such backup/restore operations include disaster recovery, off-site data
analysis or application load testing.

This guide walks you through two different approaches you can follow when backing up and
restoring MongoDB deployments on Kubernetes:

Back up the data from the source deployment and restore it in a new deployment using
MongoDB's built-in backup/restore tools.

Back up the persistent volumes from the source deployment and attach them to a new
deployment using Velero, a Kubernetes backup/restore tool.

Assumptions and prerequisites


This guide makes the following assumptions:

You have two separate Kubernetes clusters - a source cluster and a destination cluster - with
kubectl and Helm v3 installed. This guide uses Google Kubernetes Engine (GKE) clusters
but you can also use any other Kubernetes provider. Learn how to install kubectl and Helm
v3.x.

You have configured Helm to use the Tanzu Application Catalog chart repository following
the instructions for Tanzu Application Catalog or the instructions for VMware Tanzu
Application Catalog for Tanzu Advanced.

You have previously deployed the Tanzu Application Catalog MongoDB Helm chart with
replication on the source cluster and added some data to it. Example command sequences
to perform these tasks are shown below, where the PASSWORD placeholder refers to the
database administrator password. Replace the REPOSITORY and REGISTRY placeholders
with references to your Tanzu Application Catalog chart repository and container registry.

helm install mongodb REPOSITORY/mongodb \
  --namespace default \
  --set replicaSet.enabled=true \
  --set mongodbRootPassword=PASSWORD
kubectl run --namespace default mongodb-client --rm --tty -i --restart='Never' --image REGISTRY/mongodb:4.2.5-debian-10-r35 --command -- mongo admin --host mongodb --authenticationDatabase admin -u root -p PASSWORD
use mydb
db.accounts.insert({name:"john", total: "1058"})
db.accounts.insert({name:"jane", total: "6283"})
db.accounts.insert({name:"james", total: "472"})
exit

Method 1: Backup and restore data using MongoDB's built-in tools
This method involves using MongoDB's mongodump tool to back up the data in the source cluster and
MongoDB's mongorestore tool to restore this data on the destination cluster.

Step 1: Backup data with mongodump


The first step is to back up the data in the MongoDB deployment on the source cluster. Follow these
steps:

1. Obtain the MongoDB administrator password:

export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)

2. Forward the MongoDB service port and place the process in the background:

kubectl port-forward --namespace default svc/mongodb 27017:27017 &

3. Create a directory for the backup files and make it the current working directory:

mkdir mybackup
chmod o+w mybackup
cd mybackup

4. Back up the contents of all the databases to the current directory using the mongodump tool. If
this tool is not installed on your system, use Tanzu Application Catalog's MongoDB Docker
image to perform the backup, as shown below (replace the REGISTRY placeholder with your
Tanzu Application Catalog container registry):

docker run --rm --name mongodb -v $(pwd):/app --net="host" REGISTRY/mongodb:latest mongodump -u root -p $MONGODB_ROOT_PASSWORD -o /app

Here, the --net parameter lets the Docker container use the host's network stack and
thereby gain access to the forwarded port. The mongodump command connects to the
MongoDB service and creates backup files in the /app directory, which is mapped to the
current directory (mybackup/) on the Docker host with the -v parameter. Finally, the --rm
parameter deletes the container after the mongodump command completes execution.

5. Stop the service port forwarding by terminating the corresponding background process.


At the end of this step, the backup directory should contain the data from your running MongoDB
deployment.
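
As a quick sanity check, you can list the directory contents; mongodump creates one subdirectory per database (for example, admin and the mydb database used in this guide), each holding .bson and metadata files:

ls -l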

Step 2: Restore data with mongorestore


The next step is to create an empty MongoDB deployment on the destination cluster and restore the
data into it. You can also use the procedure shown below with a MongoDB deployment in a separate
namespace in the same cluster.

1. Create a new MongoDB deployment. Replace the PASSWORD placeholder with the
database administrator password and the REPOSITORY placeholder with a reference to your
Tanzu Application Catalog chart repository.

helm install mongodb-new REPOSITORY/mongodb \
  --set replicaSet.enabled=true \
  --set mongodbRootPassword=PASSWORD

2. Create an environment variable with the password for the new deployment:

export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default mongodb-new -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)

3. Forward the MongoDB service port for the new deployment and place the process in the
background:

kubectl port-forward --namespace default svc/mongodb-new 27017:27017 &

4. Restore the contents of the backup into the new release using the mongorestore tool. If this
tool is not available on your system, mount the directory containing the backup files as a
volume in Tanzu Application Catalog's MongoDB Docker container and use the mongorestore
client tool in the container image to import the backup into the new cluster, as shown below
(replace the REGISTRY placeholder with your Tanzu Application Catalog container registry):

cd mybackup
docker run --rm --name mongodb -v $(pwd):/app --net="host" REGISTRY/mongodb:latest mongorestore -u root -p $MONGODB_ROOT_PASSWORD /app

Here, the -v parameter mounts the current directory (containing the backup files) to the container's
/app path. Then, the mongorestore client tool is used to connect to the new MongoDB service and
restore the data from the original deployment. As before, the --rm parameter destroys the container
after the command completes execution.

5. Stop the service port forwarding by terminating the background process.

6. Connect to the new deployment and confirm that your data has been successfully restored
(replace the REGISTRY placeholder with your Tanzu Application Catalog container registry):

kubectl run --namespace default mongodb-new-client --rm --tty -i --restart='Never' --image REGISTRY/mongodb:4.2.5-debian-10-r35 --command -- mongo mydb --host mongodb-new --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD --eval "db.accounts.find()"


Here is an example of what you should see:

Method 2: Back up and restore persistent data volumes


This method involves copying the persistent data volume for the primary MongoDB node and
reusing it in a new deployment with Velero, an open source Kubernetes backup/restore tool. This
method is only suitable when:

The cloud provider is supported by Velero.

Both clusters are on the same cloud provider, because Velero does not support the
migration of persistent volumes across cloud providers.

The restored deployment on the destination cluster will have the same name, namespace
and credentials as the original deployment on the source cluster.

NOTE

For persistent volume migration across cloud providers with Velero, you have the
option of using Velero's Restic integration. This integration is currently beta quality
and is not covered in this guide.

Step 1: Install Velero on the source cluster


Velero is an open source tool that makes it easy to back up and restore Kubernetes resources. It can
be used to back up an entire cluster or specific resources such as persistent volumes.

1. Modify your context to reflect the source cluster (if not already done).

2. Follow the Velero plugin setup instructions for your cloud provider. For example, if you are
using Google Cloud Platform (as this guide does), follow the GCP plugin setup instructions to
create a service account and storage bucket and obtain a credentials file.

3. Then, install Velero on the source cluster by executing the command below, remembering
to replace the BUCKET-NAME placeholder with the name of your storage bucket and the
SECRET-FILENAME placeholder with the path to your credentials file:


velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.0.0 --bucket BUCKET-NAME --secret-file SECRET-FILENAME

You should see output similar to the screenshot below as Velero is installed:

4. Confirm that the Velero deployment is successful by checking for a running pod using the
command below:

kubectl get pods -n velero

Step 2: Back up the MongoDB deployment on the source cluster


Next, back up the persistent volumes using Velero.

1. Create a backup of the volumes in the running MongoDB deployment on the source cluster.
This backup will contain both the primary and secondary node volumes.

velero backup create mongo-backup --include-resources pvc,pv --selector release=mongodb

2. Execute the command below to view the contents of the backup and confirm that it contains
all the required resources:

velero backup describe mongo-backup --details

3. To avoid the backup data being overwritten, switch the bucket to read-only access:

kubectl patch backupstoragelocation default -n velero --type merge --patch '{"spec":{"accessMode":"ReadOnly"}}'

4. Obtain and note the replicaset key from the deployment:

kubectl get secret mongodb -o jsonpath="{.data.mongodb-replica-set-key}" | base64 --decode


Step 3: Restore the MongoDB deployment on the destination cluster
You can now restore the persistent volumes and integrate them with a new MongoDB deployment
on the destination cluster.

1. Modify your context to reflect the destination cluster.

2. Install Velero on the destination cluster as described in Step 1. Remember to use the same
values for the BUCKET-NAME and SECRET-FILENAME placeholders as you did originally, so
that Velero is able to access the previously-saved backups.

velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.0.0 --bucket BUCKET-NAME --secret-file SECRET-FILENAME

3. Confirm that the Velero deployment is successful by checking for a running pod using the
command below:

kubectl get pods -n velero

4. Restore the persistent volumes in the same namespace as the source cluster using Velero.

velero restore create --from-backup mongo-backup

5. Confirm that the persistent volumes have been restored and note the volume name for the
primary node:

kubectl get pvc --namespace default

6. Delete the persistent volume corresponding to the secondary node and retain only the
volume corresponding to the primary node. If there is more than one secondary volume
(depending on how you originally deployed the chart), delete all the secondary volumes.

kubectl delete pvc --namespace default SECONDARY-PVC-NAME

7. Create a new MongoDB deployment. Use the same name and namespace as the original
deployment and use the chart's persistence.existingClaim parameter to attach the existing
volume. Replace the PASSWORD placeholder with the same database administrative
password used in the original release, the PRIMARY-PVC-NAME placeholder with the name
of the restored primary node volume, the REPLICASET-KEY with the key obtained at the
end of the previous step and the REPOSITORY placeholder with a reference to your Tanzu
Application Catalog chart repository.

helm install mongodb REPOSITORY/mongodb \
  --namespace default \
  --set replicaSet.enabled=true \
  --set mongodbRootPassword=PASSWORD \
  --set persistence.existingClaim=PRIMARY-PVC-NAME \
  --set replicaSet.key=REPLICASET-KEY

NOTE: It is important to create the new deployment on the destination cluster using the
same namespace, deployment name, credentials and replicaset key as the original
deployment on the source cluster.

This will create a new deployment that uses the original primary node volume (and hence the
original data). Note that if replication is enabled, as in the example above, installing the chart
will automatically create a new volume for each secondary node.

8. Connect to the new deployment and confirm that your original data is intact (replace the
REGISTRY placeholder with your Tanzu Application Catalog container registry):

export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
kubectl run --namespace default mongodb-client --rm --tty -i --restart='Never' --image REGISTRY/mongodb:4.2.5-debian-10-r35 --command -- mongo mydb --host mongodb --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD --eval "db.accounts.find()"

Here is an example of what you should see:

Useful links
Tanzu Application Catalog MongoDB Helm chart

MongoDB client applications mongodump and mongorestore

Velero documentation


Backup and Restore MariaDB Galera Deployments on Kubernetes

Introduction
MariaDB Galera Cluster makes it easy to create a high-availability database cluster with synchronous
replication while still retaining all the familiar MariaDB clients and tools. VMware Tanzu Application
Catalog (Tanzu Application Catalog) offers a MariaDB Galera Helm chart that makes it quick and easy
to deploy such a cluster on Kubernetes. This Helm chart is compliant with current best practices and
can also be easily upgraded to ensure that you always have the latest fixes and security updates.

Once you have a MariaDB Galera Cluster deployed, you need to start thinking about ongoing
maintenance and disaster recovery, and begin putting a data backup/restore strategy in place. This
backup/restore strategy is needed for many operational scenarios, including disaster recovery
planning, off-site data analysis or application load testing.

This guide walks you through two different approaches you can follow when backing up and
restoring MariaDB Galera Cluster deployments on Kubernetes:

Back up the data from the source deployment and restore it in a new deployment using
MariaDB's built-in backup/restore tools.

Back up the persistent volumes from the source deployment and attach them to a new
deployment using Velero, a Kubernetes backup/restore tool.

Assumptions and prerequisites


This guide makes the following assumptions:

You have two separate Kubernetes clusters - a source cluster and a destination cluster - with
kubectl and Helm v3 installed. This guide uses Google Kubernetes Engine (GKE) clusters
but you can also use any other Kubernetes provider. Learn how to install kubectl and Helm
v3.x.

You have configured Helm to use the Tanzu Application Catalog chart repository following
the instructions for Tanzu Application Catalog or the instructions for VMware Tanzu
Application Catalog for Tanzu Advanced.

You have previously deployed the Tanzu Application Catalog MariaDB Galera Helm chart on
the source cluster and added some data to it. Example command sequences to perform
these tasks are shown below, where the PASSWORD and REPL-PASSWORD placeholders
refer to the database administrator and replication user passwords respectively. Replace the
REPOSITORY and REGISTRY placeholders with references to your Tanzu Application
Catalog chart repository and container registry respectively.


helm install galera REPOSITORY/mariadb-galera \
  --namespace default \
  --set rootUser.password=PASSWORD \
  --set galera.mariabackup.password=REPL-PASSWORD
kubectl run galera-mariadb-galera-client --rm --tty -i --restart='Never' --namespace default --image REGISTRY/mariadb-galera:10.4.12-debian-10-r78 --command -- mysql -h galera-mariadb-galera -P 3306 -uroot -pPASSWORD
CREATE DATABASE mydb;
USE mydb;
CREATE TABLE accounts (name VARCHAR(255) NOT NULL, total INT NOT NULL);
INSERT INTO accounts VALUES ('user1', '647'), ('user2', '573');
exit

Method 1: Backup and restore data using MariaDB's built-in tools
The method described below involves using MariaDB's mysqldump tool to create a point-in-time
backup of the data in the source cluster, and then using the mysql tool to restore this data on the
destination cluster.

Step 1: Backup data with mysqldump


The first step is to back up the data in the MariaDB Galera source cluster. Follow these steps:

1. Obtain the MariaDB Galera Cluster's administrator password:

export PASSWORD=$(kubectl get secret --namespace default galera-mariadb-galera -o jsonpath="{.data.mariadb-root-password}" | base64 --decode)

2. Forward the MariaDB Galera Cluster service port and place the process in the background:

kubectl port-forward --namespace default svc/galera-mariadb-galera 3306:3306 &

3. Create a directory for the backup files and make it the current working directory:

mkdir mybackup
cd mybackup

4. Back up the contents of all the databases to the current directory using the mysqldump tool. If
this tool is not installed on your system, use Tanzu Application Catalog's MariaDB Galera
Docker image to perform the backup, as shown below. Replace the PASSWORD
placeholder with the database administrator password and the REGISTRY placeholder with a
reference to your Tanzu Application Catalog container registry.

docker run --rm --name mysqldump -v $(pwd):/app --net="host" REGISTRY/mariadb-galera:latest mysqldump -h 127.0.0.1 -u root -pPASSWORD -A > backup.sql

Here, the --net parameter lets the Docker container use the host's network stack and
thereby gain access to the forwarded port. The mysqldump command connects to the
forwarded MariaDB Galera service and creates an SQL backup file in the /app directory,
which is mapped to the current directory (mybackup/) on the Docker host. Finally, the --rm
parameter deletes the container after the mysqldump command completes execution.


5. Stop the service port forwarding by terminating the corresponding background process.

At the end of this step, the backup directory should contain a file with the data from your running
MariaDB Galera Cluster deployment.
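
As a quick sanity check, confirm that the dump file is non-empty and starts with the expected SQL dump header:

ls -lh backup.sql
head -n 5 backup.sql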

Step 2: Restore data with mysql


The next step is to create an empty MariaDB Galera Cluster deployment on the destination cluster
and restore the data into it. You can also use the procedure shown below with a new MariaDB Galera
Cluster deployment in a separate namespace in the same cluster.

1. Create a new MariaDB Galera Cluster deployment. Replace the PASSWORD, REPL-
PASSWORD and REPOSITORY placeholders with the database administrator password,
replication user password and a reference to your Tanzu Application Catalog chart repository.

helm install galera-new REPOSITORY/mariadb-galera \
  --set rootUser.password=PASSWORD \
  --set galera.mariabackup.password=REPL-PASSWORD

NOTE: If using Tanzu Application Catalog for Tanzu Advanced, install the chart following the
steps described in the VMware Tanzu Application Catalog for Tanzu Advanced
documentation instead.

2. Forward the MariaDB Galera Cluster service port for the new deployment and place the
process in the background:

kubectl port-forward --namespace default svc/galera-new-mariadb-galera 3306:3306 &

3. Create and start a Tanzu Application Catalog MariaDB Galera container image. Mount the
directory containing the backup file as a volume in this container and use the mysql client
tool in the container image to import the backup into the new cluster, as shown below.
Replace the PASSWORD placeholder with the database administrator password and the
REGISTRY placeholder with a reference to your Tanzu Application Catalog container registry.

cd mybackup
docker create -t --rm --name galera-client -v $(pwd):/app --net="host" REGISTRY/mariadb-galera:latest bash
docker start galera-client
docker exec -i galera-client mysql -h 127.0.0.1 -uroot -pPASSWORD < backup.sql

Here, the -v parameter mounts the current directory (containing the backup file) to the
container's /app path. Then, the mysql client tool is used to connect to the new MariaDB
Galera Cluster service and restore the data from the original deployment. The --rm
parameter destroys the container on exit.

4. Stop the service port forwarding by terminating the background process.

5. Connect to the new deployment and confirm that your data has been successfully restored.
Replace the PASSWORD placeholder with the database administrator password and the
REGISTRY placeholder with a reference to your Tanzu Application Catalog container registry.

kubectl run galera-new-mariadb-galera-client --rm --tty -i --restart='Never' --namespace default --image REGISTRY/mariadb-galera:10.4.12-debian-10-r78 --command -- mysql -h galera-new-mariadb-galera -P 3306 -uroot -pPASSWORD -e "SELECT * FROM mydb.accounts"

Here is an example of what you should see:

Method 2: Back up and restore persistent data volumes


This method involves copying the persistent data volume for the primary MariaDB Galera Cluster
node and reusing it in a new deployment with Velero, an open source Kubernetes backup/restore
tool. This method is only suitable when:

The cloud provider is supported by Velero.

Both clusters are on the same cloud provider, because Velero does not support the
migration of persistent volumes across cloud providers.

The restored deployment on the destination cluster will have the same name, namespace
and credentials as the original deployment on the source cluster.

NOTE

This approach requires scaling the cluster down to a single node to perform the
backup.

NOTE

For persistent volume migration across cloud providers with Velero, you have the
option of using Velero's Restic integration. This integration is currently beta quality
and is not covered in this guide.

Step 1: Install Velero on the source cluster


Velero is an open source tool that makes it easy to back up and restore Kubernetes resources. It can
be used to back up an entire cluster or specific resources such as persistent volumes.

1. Modify your context to reflect the source cluster (if not already done).

2. Follow the Velero plugin setup instructions for your cloud provider. For example, if you are
using Google Cloud Platform (as this guide does), follow the GCP plugin setup instructions to
create a service account and storage bucket and obtain a credentials file.


3. Then, install Velero on the source cluster by executing the command below, remembering
to replace the BUCKET-NAME placeholder with the name of your storage bucket and the
SECRET-FILENAME placeholder with the path to your credentials file:

velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.1.0 --bucket BUCKET-NAME --secret-file SECRET-FILENAME

You should see output confirming that the Velero resources are being created as the installation proceeds.

4. Confirm that the Velero deployment is successful by checking for a running pod using the
command below:

kubectl get pods -n velero

Step 2: Back up the MariaDB Galera Cluster deployment on the source cluster

Next, back up the main persistent volume using Velero.

1. Scale down the cluster to only a single node:

kubectl scale statefulset --replicas=1 galera-mariadb-galera

NOTE: In a multi-node cluster, Galera's grastate.dat file typically has the safe_to_bootstrap value
set to 0. When restoring node PVCs and attempting to bootstrap a new Galera cluster with any one
of them, Galera will fail to bootstrap due to this value. One approach is to examine each PVC,
identify the one with the highest transaction seqno and manually set the safe_to_bootstrap value to
1 for that PVC. This is tedious when dealing with a large number of PVCs. The alternative approach
discussed in this guide involves scaling the cluster down to one node (this sets the
safe_to_bootstrap value to 1 on that node) and then using the corresponding PVC to bootstrap the
new cluster, without needing to manually inspect or edit Galera's grastate.dat files.
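For reference, if you do need to inspect grastate.dat manually, you can view it on a running node as shown in the sketch below and check its seqno and safe_to_bootstrap fields. The data path is an assumption based on the Tanzu Application Catalog MariaDB Galera container image and may differ in your environment.

kubectl exec -it galera-mariadb-galera-0 -- cat /bitnami/mariadb/data/grastate.dat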

2. Obtain the name of the running pod. Make a note of the node number which is suffixed to
the name. For example, if the running pod is galera-mariadb-galera-0, the node number is
0.

kubectl get pods | grep galera

3. Create a backup of the persistent volumes on the source cluster:

velero backup create galera-backup --include-resources=pvc,pv --selector app.kubernetes.io/instance=galera

4. Execute the command below to view the contents of the backup and confirm that it contains
all the required resources:

velero backup describe galera-backup --details
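You can also check the overall status of the backup (for example, whether it is still InProgress or has Completed) with the command below:

velero backup get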

5. To avoid the backup data being overwritten, switch the bucket to read-only access:

kubectl patch backupstoragelocation default -n velero --type merge --patch '{"spec":{"accessMode":"ReadOnly"}}'

6. If required, scale the cluster back to its original size:

kubectl scale statefulset --replicas=3 galera-mariadb-galera

Step 3: Restore the MariaDB Galera Cluster deployment on the destination cluster

You can now restore the persistent volumes and integrate them with a new MariaDB Galera Cluster
deployment on the destination cluster.

1. Modify your context to reflect the destination cluster.

2. Install Velero on the destination cluster as described in Step 1. Remember to use the same
values for the BUCKET-NAME and SECRET-FILENAME placeholders as you did originally, so
that Velero is able to access the previously-saved backups.

velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.1.0 --bucket BUCKET-NAME --secret-file SECRET-FILENAME

3. Confirm that the Velero deployment is successful by checking for a running pod using the
command below:

kubectl get pods -n velero

4. Restore the persistent volumes in the same namespace as the source cluster using Velero.

velero restore create --from-backup galera-backup

5. Confirm that the persistent volumes have been restored:

kubectl get pvc --namespace default


6. Delete all the restored persistent volumes except the one whose number corresponds to the
running node number noted in Step 2. For example, if the node number noted in Step 2 is 0,
delete all the persistent volumes except the volume named data-galera-mariadb-galera-0.

kubectl delete pvc --namespace default PVC-NAME-1 PVC-NAME-2

7. Create a new MariaDB Galera Cluster deployment. Use the same name and namespace as
the original deployment. Replace the PASSWORD and REPL-PASSWORD placeholders with
the same database administrator and replication user passwords used in the original
deployment and the REPOSITORY placeholder with a reference to your Tanzu Application
Catalog chart repository.

helm install galera REPOSITORY/mariadb-galera \
  --namespace default \
  --set rootUser.password=PASSWORD \
  --set galera.mariabackup.password=REPL-PASSWORD

NOTE: It is important to create the new deployment on the destination cluster using the
same namespace, deployment name and credentials as the original deployment on the
source cluster.

This will create a new deployment that bootstraps from the original node volume (and hence
the original data).

8. Connect to the new deployment and confirm that your data has been successfully restored.
Replace the PASSWORD placeholder with the database administrator password and the
REGISTRY placeholder with a reference to your Tanzu Application Catalog container registry.

kubectl run galera-mariadb-galera-client --rm --tty -i --restart='Never' --namespace default --image REGISTRY/mariadb-galera:10.4.12-debian-10-r78 --command -- mysql -h galera-mariadb-galera -P 3306 -uroot -pPASSWORD -e "SELECT * FROM mydb.accounts"

The output should show the records from the mydb.accounts table of the original deployment, confirming that the data has been restored.

Useful links
MariaDB Galera Helm chart

MariaDB client applications mysqldump and mysql

Velero documentation


Backup and Restore Redis Cluster Deployments on Kubernetes

Introduction
Redis is a popular open source in-memory data store with a number of advanced features for high
availability and data optimization. VMware Tanzu Application Catalog (Tanzu Application Catalog)
offers a Redis Cluster Helm chart which makes it quick and easy to deploy a Redis cluster with
sharding and multiple write points on Kubernetes. This Helm chart is compliant with current best
practices and can also be upgraded to ensure that you always have the latest fixes and security
updates.

Once you have your Redis cluster deployed on Kubernetes, it is essential to put a data backup
strategy in place to protect the data within it. This backup strategy is needed for many operational
scenarios, including disaster recovery planning, off-site data analysis or application load testing.

This guide walks you through the process of backing up and restoring a Redis cluster on Kubernetes
using Velero, a Kubernetes backup/restore tool.

Assumptions and prerequisites


This guide makes the following assumptions:

You have two separate Kubernetes clusters - a source cluster and a destination cluster - with
kubectl and Helm v3 installed. This guide uses Google Kubernetes Engine (GKE) clusters
but you can also use any other Kubernetes provider. Learn how to install kubectl and Helm
v3.x.

You have configured Helm to use the Tanzu Application Catalog chart repository following
the instructions for Tanzu Application Catalog or the instructions for VMware Tanzu
Application Catalog for Tanzu Advanced.

You have previously deployed the Redis Cluster Helm chart on the source cluster and added
some data to it. Example command sequences to perform these tasks are shown below,
where the PASSWORD placeholder refers to the Redis password. Replace the REPOSITORY
and REGISTRY placeholders with references to your Tanzu Application Catalog chart
repository and container registry respectively.

helm install redis REPOSITORY/redis-cluster \
  --set global.redis.password=PASSWORD
kubectl run --namespace default redis-redis-cluster-client --rm --tty -i --restart='Never' --image REGISTRY/redis-cluster:6.0.5-debian-10-r2 -- bash
redis-cli -c -h redis-redis-cluster -a PASSWORD
set foo 100
set bar 200
exit

This method involves copying the persistent data volumes for the Redis Cluster nodes and reusing
them in a new deployment with Velero, an open source Kubernetes backup/restore tool. This
method is only suitable when:

The cloud provider is supported by Velero.

Both clusters are on the same cloud provider, because Velero does not support the
migration of persistent volumes across cloud providers.

The restored deployment on the destination cluster will have the same name, namespace,
topology and credentials as the original deployment on the source cluster.

Step 1: Install Velero on the source cluster


Velero is an open source tool that makes it easy to back up and restore Kubernetes resources. It can
be used to back up an entire cluster or specific resources such as persistent volumes.

1. Modify your context to reflect the source cluster (if not already done).

2. Follow the Velero plugin setup instructions for your cloud provider. For example, if you are
using Google Cloud Platform (as this guide does), follow the GCP plugin setup instructions to
create a service account and storage bucket and obtain a credentials file.

3. Then, install Velero on the source cluster by executing the command below, remembering
to replace the BUCKET-NAME placeholder with the name of your storage bucket and the
SECRET-FILENAME placeholder with the path to your credentials file:

velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.1.0 --bucket BUCKET-NAME --secret-file SECRET-FILENAME

You should see output confirming that the Velero resources are being created as the installation proceeds.

4. Confirm that the Velero deployment is successful by checking for a running pod using the
command below:


kubectl get pods -n velero

Step 2: Back up the Redis Cluster deployment on the source cluster


Next, back up all the persistent volumes using Velero.

1. Create a backup of the persistent volumes on the source cluster:

velero backup create redis-backup --include-resources=pvc,pv --selector app.kubernetes.io/name=redis-cluster

2. Execute the command below to view the contents of the backup and confirm that it contains
all the required resources:

velero backup describe redis-backup --details

3. To avoid the backup data being overwritten, switch the bucket to read-only access:

kubectl patch backupstoragelocation default -n velero --type merge --patch '{"spec":{"accessMode":"ReadOnly"}}'

Step 3: Restore the Redis Cluster deployment on the destination cluster

You can now restore the persistent volumes and integrate them with a new Redis Cluster
deployment on the destination cluster.

1. Modify your context to reflect the destination cluster.

2. Install Velero on the destination cluster as described in Step 1. Remember to use the same
values for the BUCKET-NAME and SECRET-FILENAME placeholders as you did originally, so
that Velero is able to access the previously-saved backups.

velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.1.0 --bucket BUCKET-NAME --secret-file SECRET-FILENAME

3. Confirm that the Velero deployment is successful by checking for a running pod using the
command below:

kubectl get pods -n velero

4. Restore the persistent volumes in the same namespace as the source cluster using Velero.

velero restore create --from-backup redis-backup

5. Confirm that the persistent volumes have been restored:

kubectl get pvc --namespace default

6. Create a new Redis Cluster deployment. Use the same name, topology and namespace as
the original deployment. Replace the PASSWORD placeholder with the same password used
in the original deployment and the REPOSITORY placeholder with a reference to your Tanzu
Application Catalog chart repository.

helm install redis REPOSITORY/redis-cluster \
  --set global.redis.password=PASSWORD

NOTE: If using Tanzu Application Catalog for Tanzu Advanced, install the chart following the
steps described in the VMware Tanzu Application Catalog for Tanzu Advanced
documentation instead.

NOTE: It is important to create the new deployment on the destination cluster using the
same namespace, deployment name, topology and credentials as the original deployment on
the source cluster.

This will create a new deployment that uses the restored persistent volumes (and hence the
original data).

7. Connect to the new deployment and confirm that your data has been successfully restored.
Replace the PASSWORD placeholder with the correct password and the REGISTRY
placeholder with a reference to your Tanzu Application Catalog container registry.

kubectl run --namespace default redis-redis-cluster-client --rm --tty -i --restart='Never' --image REGISTRY/redis-cluster:6.0.5-debian-10-r2 -- bash
redis-cli -c -h redis-redis-cluster -a PASSWORD
get foo
get bar

The output should return the values originally stored under the foo and bar keys (100 and 200), confirming that the data has been restored.

Useful links
Redis Cluster Helm chart

Redis client application

Velero documentation


Backup and Restore RabbitMQ Deployments on Kubernetes

Introduction
RabbitMQ is a highly scalable and reliable open source message broker. It supports a number of
different messaging protocols, as well as message queueing and plugins for additional
customization.

VMware Tanzu Application Catalog's (Tanzu Application Catalog) RabbitMQ Helm chart makes it easy
to deploy a scalable RabbitMQ cluster on Kubernetes. This Helm chart is compliant with current best
practices and can also be easily upgraded to ensure that you always have the latest fixes and security
updates.

Once the RabbitMQ cluster is operational, backing up the data held within it becomes an important
and ongoing administrative task. A data backup/restore strategy is required not only for data security
and disaster recovery planning, but also for other tasks like off-site data analysis or application load
testing.

This guide explains how to back up and restore a RabbitMQ deployment on Kubernetes using
Velero, an open-source Kubernetes backup/restore tool.

Assumptions and prerequisites


This guide makes the following assumptions:

You have two separate Kubernetes clusters - a source cluster and a destination cluster - with
kubectl and Helm v3 installed. This guide uses Google Kubernetes Engine (GKE) clusters
but you can also use any other Kubernetes provider. Learn how to install kubectl and Helm
v3.x.

You have configured Helm to use the Tanzu Application Catalog chart repository following
the instructions for Tanzu Application Catalog or the instructions for VMware Tanzu
Application Catalog for Tanzu Advanced.

You have previously deployed the Tanzu Application Catalog RabbitMQ Helm chart on the
source cluster and added some data to it. Example command sequences to perform these
tasks are shown below, where the PASSWORD placeholder refers to the administrator
password and the cluster is deployed with a single node. Replace the REPOSITORY
placeholder with a reference to your Tanzu Application Catalog chart repository.

helm install rabbitmq REPOSITORY/rabbitmq --set auth.password=PASSWORD --set service.type=LoadBalancer --set replicaCount=1
export SERVICE_IP=$(kubectl get svc --namespace default rabbitmq --template "{{range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
rabbitmqadmin -H $SERVICE_IP -u user -p PASSWORD declare queue name=my-queue durable=true
rabbitmqadmin -H $SERVICE_IP -u user -p PASSWORD publish routing_key=my-queue payload="message 1" properties="{\"delivery_mode\":2}"
rabbitmqadmin -H $SERVICE_IP -u user -p PASSWORD publish routing_key=my-queue payload="message 2" properties="{\"delivery_mode\":2}"
exit

The Kubernetes provider is supported by Velero.

Both clusters are on the same Kubernetes provider, as this is a requirement of Velero's
native support for migrating persistent volumes.

The restored deployment on the destination cluster will have the same name, namespace
and credentials as the original deployment on the source cluster.

NOTE

The procedure outlined in this guide can only be used to back up and restore
persistent messages in the source RabbitMQ cluster. Transient messages will not be
backed up or restored.
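If you want to verify which queues are durable and how many of their messages are persistent before taking the backup, a quick check with rabbitmqadmin might look like the sketch below. The column names are assumptions based on the RabbitMQ management API and may vary between RabbitMQ versions:

rabbitmqadmin -H $SERVICE_IP -u user -p PASSWORD list queues name durable messages messages_persistent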

NOTE

For persistent volume migration across cloud providers with Velero, you have the
option of using Velero's Restic integration. This integration is not covered in this
guide.

Step 1: Install Velero on the source cluster


Velero is an open source tool that makes it easy to back up and restore Kubernetes resources. It can
be used to back up an entire cluster or specific resources such as persistent volumes.

1. Modify your context to reflect the source cluster (if not already done).

2. Follow the Velero plugin setup instructions for your cloud provider. For example, if you are
using Google Cloud Platform (as this guide does), follow the GCP plugin setup instructions to
create a service account and storage bucket and obtain a credentials file.

3. Then, install Velero on the source cluster by executing the command below, remembering
to replace the BUCKET-NAME placeholder with the name of your storage bucket and the
SECRET-FILENAME placeholder with the path to your credentials file:

velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.2.0 --bucket BUCKET-NAME --secret-file SECRET-FILENAME

You should see output confirming that the Velero resources are being created as the installation proceeds.


4. Confirm that the Velero deployment is successful by checking for a running pod using the
command below:

kubectl get pods -n velero

Step 2: Back up the RabbitMQ deployment on the source cluster


The next step involves using Velero to copy the persistent data volumes for the RabbitMQ pods.
These copied data volumes can then be reused in a new deployment.

1. Create a backup of the volumes in the running RabbitMQ deployment on the source cluster.
This backup will contain the data volumes for all RabbitMQ nodes in the deployment.

velero backup create rabbitmq-backup --include-resources=pvc,pv --selector app.kubernetes.io/instance=rabbitmq

2. Execute the command below to view the contents of the backup and confirm that it contains
all the required resources:

velero backup describe rabbitmq-backup --details

3. To avoid the backup data being overwritten, switch the bucket to read-only access:

kubectl patch backupstoragelocation default -n velero --type merge --patch '{"spec":{"accessMode":"ReadOnly"}}'

Step 3: Restore the RabbitMQ deployment on the destination cluster

You can now restore the persistent volumes and integrate them with a new RabbitMQ deployment
on the destination cluster.

1. Modify your context to reflect the destination cluster.


2. Install Velero on the destination cluster as described in Step 1. Remember to use the same
values for the BUCKET-NAME and SECRET-FILENAME placeholders as you did originally, so
that Velero is able to access the previously-saved backups.

velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.2.0 --bucket BUCKET-NAME --secret-file SECRET-FILENAME

3. Confirm that the Velero deployment is successful by checking for a running pod using the
command below:

kubectl get pods -n velero

4. Restore the persistent volumes in the same namespace as the source cluster using Velero.

velero restore create --from-backup rabbitmq-backup

5. Confirm that the persistent volumes have been restored:

kubectl get pvc

6. Create a new RabbitMQ deployment. Use the same name, namespace and cluster topology
as the original deployment. Replace the PASSWORD placeholder with the same
administrator password used in the original deployment and the REPOSITORY placeholder
with a reference to your Tanzu Application Catalog chart repository.

helm install rabbitmq REPOSITORY/rabbitmq --set auth.password=PASSWORD --set service.type=LoadBalancer --set replicaCount=1

NOTE: If using Tanzu Application Catalog for Tanzu Advanced, install the chart following the
steps described in the VMware Tanzu Application Catalog for Tanzu Advanced
documentation instead.

NOTE: The deployment command shown above is only an example. It is important to create
the new deployment on the destination cluster using the same namespace, deployment
name, credentials and cluster topology as the original deployment on the source cluster.

This will create a new deployment that uses the original pod volumes (and hence the original
data).

7. Connect to the new deployment and confirm that your original queues and messages are
intact using a query like the example shown below. Replace the PASSWORD placeholder
with the administrator password.

export SERVICE_IP=$(kubectl get svc --namespace default rabbitmq --template "{{range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
./rabbitmqadmin -H $SERVICE_IP -u user -p PASSWORD list queues
./rabbitmqadmin -H $SERVICE_IP -u user -p PASSWORD get queue=my-queue ackmode=ack_requeue_true count=10

Confirm that your original data is intact.

Useful links


RabbitMQ Helm chart

Velero documentation


Migrate Data Between Kubernetes Clusters with VMware Tanzu Application Catalog and Velero

Introduction
VMware Tanzu Application Catalog (Tanzu Application Catalog) offers Helm charts for popular
database solutions like MySQL, MariaDB and PostgreSQL. These charts let you deploy database
services on Kubernetes in a secure and reliable manner whilst also ensuring compliance with current
best practices. Tanzu Application Catalog's Helm charts can also be easily upgraded to ensure that
you always have the latest fixes and security updates.

As time passes, there may be situations where you need to migrate the data stored in these
database deployments to other clusters. For example, you might want to transfer a copy of your data
to a separate cluster for isolated integration testing or load testing. Or, you might want to experiment
with alpha Kubernetes features that are not yet available in production deployments.

Velero, combined with Tanzu Application Catalog's charts, offers one possible solution to this
migration problem. Many of Tanzu Application Catalog's Helm charts let you create new
deployments that make use of existing persistent volume claims (PVCs). This means that you can use
Velero to back up and copy the persistent data volumes from your source cluster, and then use
Tanzu Application Catalog's Helm charts to create new deployments in your destination cluster using
the copied volumes.

Assumptions and prerequisites


This guide makes the following assumptions:

You have two separate Kubernetes clusters - a source cluster and a destination cluster - with
kubectl and Helm v3 installed. This guide uses Google Kubernetes Engine (GKE) clusters
but you can also use any other Kubernetes provider. Learn how to install kubectl and Helm
v3.x.

You have configured Helm to use the Tanzu Application Catalog chart repository following
the instructions for Tanzu Application Catalog or the instructions for VMware Tanzu
Application Catalog for Tanzu Advanced.

The cloud provider is supported by Velero.

Both clusters are on the same cloud provider, because Velero does not support the
migration of persistent volumes across cloud providers.

This guide walks you through the process of migrating a PostgreSQL database from one cluster to
another using Velero and VMware Application Catalog's PostgreSQL Helm chart. The steps
described will also work with other VMware Application Catalog charts that support using existing
persistent volumes.

Step 1: Deploy PostgreSQL on the source cluster and add data to it

NOTE

This step creates a fresh PostgreSQL deployment using VMware Application Catalog's Helm chart
and then adds an example database and records to it to simulate a real-world migration scenario. If
you already have a VMware Application Catalog PostgreSQL deployment with one or more custom
databases, you can go straight to Step 2.

Follow the steps below:

1. Deploy PostgreSQL on the source cluster. Replace the PASSWORD placeholder with a
password for your PostgreSQL deployment and REPOSITORY placeholder with a reference
to your VMware Application Catalog chart repository.

helm install postgresql REPOSITORY/postgresql --set postgresqlPassword=PASSWORD

2. Wait for the deployment to complete and then use the commands below to connect to the
database service. Replace the REGISTRY placeholder with a reference to your VMware
Application Catalog container registry.

export POSTGRES_PASSWORD=$(kubectl get secret --namespace default postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
kubectl run postgresql-client --rm --tty -i --restart='Never' --namespace default --image REGISTRY/postgresql:11.7.0-debian-10-r0 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host postgresql -U postgres -d postgres -p 5432

3. At the command prompt, create an empty database and table and enter some dummy
records into it:

CREATE DATABASE example;
\c example
CREATE TABLE users (userid serial PRIMARY KEY, username VARCHAR (50) UNIQUE NOT NULL);
INSERT INTO users VALUES (11, 'john');
INSERT INTO users VALUES (22, 'jane');

4. Confirm that the records entered exist in the database:

SELECT * FROM users;
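The output should list the two records created above, similar to the following:

 userid | username
--------+----------
     11 | john
     22 | jane
(2 rows)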


Step 2: Install Velero on the source cluster


The next step is to install Velero on the source cluster using the appropriate plugin for your cloud
provider. To do this, follow the steps below:

1. Follow the plugin setup instructions for your cloud provider. For example, if you are using
Google Cloud Platform (as this guide does), follow the GCP plugin setup instructions to create
a service account and storage bucket and obtain a credentials file.

2. Then, install Velero by executing the command below, remembering to replace the
BUCKET-NAME placeholder with the name of your storage bucket and the SECRET-
FILENAME placeholder with the path to your credentials file:

velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.0.0 --bucket BUCKET-NAME --secret-file SECRET-FILENAME

You should see output confirming that the Velero resources are being created as the installation proceeds.


3. Confirm that the Velero deployment is successful by checking for a running pod using the
command below:

kubectl get pods -n velero

Step 3: Back up the persistent PostgreSQL volumes on the source cluster

Once Velero is running, create a backup of the PVCs from the PostgreSQL deployment:

velero backup create pgb --include-resources pvc,pv --selector release=postgresql

TIP: The previous command uses additional resource and label selectors to back up only the PVCs
related to the PostgreSQL deployment. Optionally, you can back up all deployments in a specific
namespace with the --include-namespaces parameter, or back up the entire cluster by omitting all
selectors.
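For instance, a namespace-scoped backup of all resources in the default namespace might look like the following (the backup name is an arbitrary example):

velero backup create default-ns-backup --include-namespaces default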

Execute the command below to view the contents of the backup and confirm that it contains all the
required resources:

velero backup describe pgb --details


At this point, your data is ready for migration.

Step 4: Use the PostgreSQL volumes with a new deployment on the destination cluster
Once your data backup is complete and confirmed, you can now turn your attention to migrating it to
another cluster. For illustrative purposes, this guide will assume that you wish to transfer your
PostgreSQL database content to the second (destination) cluster.

1. Modify your context to reflect the destination cluster.

2. Install Velero on the destination cluster as described in Step 2. Remember to use the same
values for the BUCKET-NAME and SECRET-FILENAME placeholders as you did originally, so
that Velero is able to access the previously-saved backups.

velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.0.0 --bucket BUCKET-NAME --secret-file SECRET-FILENAME

3. Confirm that the Velero deployment is successful by checking for a running pod using the
command below:

kubectl get pods -n velero

4. To avoid the backup data being overwritten, switch the bucket to read-only access:

kubectl patch backupstoragelocation default -n velero --type merge --patch '{"spec":{"accessMode":"ReadOnly"}}'

5. Confirm that Velero is able to access the original backup:

velero backup describe pgb --details

6. Restore the backed-up volumes. Note that this may take a few minutes to complete.


velero restore create --from-backup pgb
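You can monitor the progress of the restore with the commands below (the restore name is generated by Velero and can be obtained from the first command):

velero restore get
velero restore describe RESTORE-NAME --details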

7. Confirm that the persistent volumes have been restored on the destination cluster and note
the PVC name:

kubectl get pvc
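Optionally, you can capture the restored PVC name in a shell variable for use in the next step. This sketch assumes the restored PVC still carries the release=postgresql label used by the backup selector earlier:

PVC_NAME=$(kubectl get pvc --selector release=postgresql -o jsonpath='{.items[0].metadata.name}')
echo $PVC_NAME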

8. Create a new PostgreSQL deployment using VMware Application Catalog's PostgreSQL
Helm chart. This chart includes a persistence.existingClaim parameter which, when used,
creates a deployment with an existing volume instead of a fresh one. Replace the
PASSWORD placeholder with the same password as your original deployment, the PVC-
NAME placeholder with the name of the migrated volume from the previous command and
the REPOSITORY placeholder with a reference to your VMware Application Catalog chart
repository.

helm install migrated-db REPOSITORY/postgresql --set postgresqlPassword=PASSWORD --set persistence.existingClaim=PVC-NAME

9. Wait until the deployment is complete. Then, log in to the PostgreSQL console and confirm
that your original data has been successfully migrated. Use the commands below, replacing
the REGISTRY placeholder with a reference to your VMware Application Catalog container
registry, and confirm that the records created in Step 1 are returned.

export POSTGRES_PASSWORD=$(kubectl get secret --namespace default migrated-db-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
kubectl run migrated-db-postgresql-client --rm --tty -i --restart='Never' --namespace default --image REGISTRY/postgresql:11.7.0-debian-10-r0 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host migrated-db-postgresql -U postgres -d example -p 5432
SELECT * FROM users;


At this point, you have successfully migrated your database content using Velero.

Useful links
To learn more about the topics discussed in this guide, use the links below:

VMware Application Catalog PostgreSQL Helm chart

Velero documentation
