In our previous blog post, we introduced CloudNativePG (CNPG) and showed how to create a single-instance cluster. In this post, we'll walk through creating a PostgreSQL cluster using `initdb` with custom options, managing roles, and creating databases, all declaratively with Kubernetes manifests. Additionally, we'll explain how to connect to our database from outside the Kubernetes cluster.
initdb
We'll start by creating a PostgreSQL cluster named `cluster-example-initdb` using the `initdb` bootstrap method.
```yaml
apiVersion: v1
data:
  username: Y3liZXJ0ZWM=
  password: Y3liZXJ0ZWMxMjM=
kind: Secret
metadata:
  name: app-secret
type: kubernetes.io/basic-auth
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example-initdb
spec:
  instances: 1
  bootstrap:
    initdb:
      database: cybertec
      owner: cybertec
      dataChecksums: true
      encoding: 'LATIN1'
      secret:
        name: app-secret
  managed:
    roles:
      - name: nikola
        login: true
        superuser: false
        createdb: true
      - name: tesla
        superuser: true
        login: true
  storage:
    size: 1Gi
```
This manifest:

- Creates a database named `cybertec` owned by the user `cybertec`, whose credentials come from the `app-secret` Secret
- Enables data checksums and sets the encoding to `LATIN1`
- Creates two managed roles: `nikola` (with `createdb`) and `tesla` (as a `superuser`)

For the other parameters that can be used with `initdb`, you can check the bootstrap initdb documentation.
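A quick aside on the Secret: the `username` and `password` values are base64-encoded (they decode to `cybertec` and `cybertec123`). A minimal sketch of producing and verifying such values with standard shell tools:

```bash
# Encode the credentials for the Secret (-n avoids encoding a trailing newline)
echo -n 'cybertec' | base64      # Y3liZXJ0ZWM=
echo -n 'cybertec123' | base64   # Y3liZXJ0ZWMxMjM=

# Decode to double-check what we put into the manifest
echo 'Y3liZXJ0ZWM=' | base64 -d  # cybertec
```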
Apply the manifest:
```bash
kubectl apply -f single_initdb.yaml
```
Check that the pod is running:
```
kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
cluster-example-initdb-1                  1/1     Running   0          2m34s
cnpg-controller-manager-6848689f4-j756l   1/1     Running   0          41h
```
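Besides the pods, it is worth inspecting the `Cluster` resource itself. A quick sketch; the second command assumes the cnpg kubectl plugin is installed:

```bash
# High-level state of the Cluster resource (a CNPG CRD)
kubectl get cluster cluster-example-initdb

# Richer, human-friendly status view via the cnpg plugin (if installed)
kubectl cnpg status cluster-example-initdb
```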
Enter the pod to connect to PostgreSQL:
```
kubectl exec -it pod/cluster-example-initdb-1 -- /bin/bash
Defaulted container "postgres" out of: postgres, bootstrap-controller (init)
postgres@cluster-example-initdb-1:/$ psql
psql (17.5 (Debian 17.5-1.pgdg110+1))
Type "help" for help.

postgres=# \du
                             List of roles
     Role name     |                         Attributes
-------------------+------------------------------------------------------------
 cybertec          |
 nikola            | Create DB
 postgres          | Superuser, Create role, Create DB, Replication, Bypass RLS
 streaming_replica | Replication
 tesla             | Superuser

postgres=# \l cybertec
                                List of databases
   Name   |  Owner   | Encoding | Locale Provider | Collate | Ctype | Locale | ICU Rules | Access privileges
----------+----------+----------+-----------------+---------+-------+--------+-----------+-------------------
 cybertec | cybertec | LATIN1   | libc            | C       | C     |        |           |
(1 row)

postgres=# show data_checksums;
 data_checksums
----------------
 on
(1 row)
```
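We can also verify the credentials from `app-secret` by connecting as the application user. A sketch, assuming the default CNPG `pg_hba` rules that allow password authentication over TCP; the password is the decoded value from the Secret:

```bash
# Connect as the cybertec user over localhost inside the pod
kubectl exec -it pod/cluster-example-initdb-1 -- \
  env PGPASSWORD=cybertec123 psql -h localhost -U cybertec -d cybertec \
  -c "SELECT current_user;"
```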
We can still create additional roles and databases manually via `kubectl exec`:
```bash
kubectl exec -it pod/cluster-example-initdb-1 -- \
  psql -U postgres -c "CREATE ROLE marie LOGIN PASSWORD 'secret';"
kubectl exec -it pod/cluster-example-initdb-1 -- \
  psql -U postgres -c "CREATE DATABASE curie OWNER marie;"
```
We can also manage additional roles and databases declaratively, using the `managed` section and `Database` CRs. First, the cluster manifest (single.yaml):
```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cybertec-single-instance-example
spec:
  instances: 1
  imageName: ghcr.io/cloudnative-pg/postgresql:17
  managed:
    roles:
      - name: nikola
        login: true
        superuser: false
        createdb: true
      - name: tesla
        superuser: true
        login: true
  storage:
    size: 1Gi
```
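As a side note, passwords for managed roles can also be handled declaratively. A minimal sketch, assuming a `kubernetes.io/basic-auth` Secret named `nikola-secret` (a hypothetical name) already exists:

```yaml
  managed:
    roles:
      - name: nikola
        login: true
        createdb: true
        # Hypothetical Secret holding nikola's password
        passwordSecret:
          name: nikola-secret
```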
We will create another manifest (database.yaml) for the database:
```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Database
metadata:
  name: db-one
spec:
  name: one
  owner: app
  cluster:
    name: cybertec-single-instance-example
```
Apply both manifests:
```bash
kubectl apply -f single.yaml
kubectl apply -f database.yaml
```
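Since `Database` is a regular custom resource, the operator reconciles it against the running cluster, and we can inspect it like any other object:

```bash
# The cluster and the declaratively managed database
kubectl get cluster cybertec-single-instance-example
kubectl get database db-one
```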
This way, we can adopt different strategies in our infrastructure-as-code setup to manage our database infrastructure.
As a last example for this section, we will use the following manifest:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cybertec-secret
type: Opaque
stringData:
  secret.sql: |
    CREATE ROLE cybertec WITH LOGIN PASSWORD 's3cr3t';
    GRANT ALL PRIVILEGES ON DATABASE app TO cybertec;
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  configmap.sql: |
    CREATE TABLE IF NOT EXISTS demo (
      id SERIAL PRIMARY KEY,
      name TEXT NOT NULL
    );
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example-initdb
spec:
  instances: 3
  bootstrap:
    initdb:
      database: app
      owner: app
      postInitApplicationSQLRefs:
        secretRefs:
          - name: cybertec-secret
            key: secret.sql
        configMapRefs:
          - name: my-configmap
            key: configmap.sql
  storage:
    size: 1Gi
```
This manifest:

- Creates a Secret (`cybertec-secret`) containing SQL that creates the `cybertec` role and grants it privileges on the `app` database
- Creates a ConfigMap (`my-configmap`) containing SQL that creates a `demo` table
- Bootstraps a three-instance cluster whose `initdb` runs both SQL files against the application database via `postInitApplicationSQLRefs`

Depending on the goal, we can use `postInitSQL`, `postInitApplicationSQL`, or `postInitTemplateSQL` to customize our cluster and database(s), as sketched below.
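To illustrate how the three inline variants differ, here is a rough sketch of a `bootstrap` section; the SQL statements themselves are only illustrative:

```yaml
  bootstrap:
    initdb:
      database: app
      owner: app
      # Runs as superuser in the postgres database
      postInitSQL:
        - ALTER ROLE app CONNECTION LIMIT 50
      # Runs in template1, so databases created later inherit the result
      postInitTemplateSQL:
        - CREATE EXTENSION IF NOT EXISTS pgcrypto
      # Runs in the application database (app)
      postInitApplicationSQL:
        - CREATE SCHEMA IF NOT EXISTS reporting
```

Back to our example, let's verify the results: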
```
kubectl exec -it pod/cluster-example-initdb-1 -- psql -U postgres -d app -c "\du"
Defaulted container "postgres" out of: postgres, bootstrap-controller (init)
                             List of roles
 Role name |                         Attributes
-----------+------------------------------------------------------------
 app       |
 cybertec  |
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS

kubectl exec -it pod/cluster-example-initdb-1 -- psql -U postgres -d app -c "\l app"
Defaulted container "postgres" out of: postgres, bootstrap-controller (init)
                                List of databases
 Name | Owner | Encoding | Locale Provider | Collate | Ctype | Locale | ICU Rules | Access privileges
------+-------+----------+-----------------+---------+-------+--------+-----------+-------------------
 app  | app   | UTF8     | libc            | C       | C     |        |           | =Tc/app          +
      |       |          |                 |         |       |        |           | app=CTc/app      +
      |       |          |                 |         |       |        |           | cybertec=CTc/app
(1 row)

kubectl exec -it pod/cluster-example-initdb-1 -- psql -U postgres -d app -c "\d"
Defaulted container "postgres" out of: postgres, bootstrap-controller (init)
             List of relations
 Schema |    Name     |   Type   |  Owner
--------+-------------+----------+----------
 public | demo        | table    | postgres
 public | demo_id_seq | sequence | postgres
(2 rows)
```
As shown above, the `cybertec` role is created and granted privileges on the `app` database, and the `demo` table is created as expected.
In this second part, we will show how to enable applications and users to connect to our database from outside the Kubernetes cluster. For this example, we will use the following manifest:
```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example-initdb
spec:
  instances: 1
  bootstrap:
    initdb:
      database: cybertec
      owner: cybertec
      secret:
        name: app-secret
  managed:
    roles:
      - name: nikola
        login: true
        superuser: false
        createdb: true
      - name: tesla
        superuser: true
        login: true
    services:
      ## disable the default services
      disabledDefaultServices: ["ro", "r"]
      additional:
        - selectorType: rw
          serviceTemplate:
            metadata:
              name: "test-rw"
              labels:
                test-label: "true"
              annotations:
                test-annotation: "true"
            spec:
              type: LoadBalancer
              ## to request a static IP address:
              ## loadBalancerIP: <your static IP>
              ports:
                - port: 5432
                  targetPort: 5432
                  protocol: TCP
                  name: postgres
  storage:
    size: 1Gi
```
This config will:

- Create a `LoadBalancer` service named `test-rw` pointing at the primary (the `rw` selector)
- Disable the default read-only and replica services (`-ro`, `-r`). If we don't disable them, CNPG creates three services by default: `-rw`, `-ro`, and `-r`.

Apply the manifest:
```bash
kubectl apply -f cluster-initdb-lb.yaml
```
Wait for the pod and service to be ready:
```
kubectl get service
NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)          AGE
cluster-example-initdb-rw   ClusterIP      10.106.204.97   <none>         5432/TCP         5s
cnpg-webhook-service        ClusterIP      10.98.146.114   <none>         443/TCP          2d10h
test-rw                     LoadBalancer   10.109.19.23    10.109.19.23   5432:31306/TCP   5s

kubectl describe service test-rw
Name:                     test-rw
Namespace:                cnpg-system
Labels:                   cnpg.io/cluster=cluster-example-initdb
                          cnpg.io/isManaged=true
                          test-label=true
Annotations:              cnpg.io/operatorVersion: 1.26.0
                          cnpg.io/updateStrategy: patch
                          test-annotation: true
Selector:                 cnpg.io/cluster=cluster-example-initdb,cnpg.io/instanceRole=primary
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.109.19.23
IPs:                      10.109.19.23
LoadBalancer Ingress:     10.109.19.23 (VIP)
Port:                     postgres  5432/TCP
TargetPort:               5432/TCP
NodePort:                 postgres  31306/TCP
Endpoints:                10.244.0.51:5432
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
```
The key value to notice is `LoadBalancer Ingress: 10.109.19.23`.
Even though we didn't specify a static IP address, minikube assigned one to our service. However, this IP address and the target port (5432) cannot be used to connect to our database from outside the cluster. That is why we will use the minikube node's IP address together with the NodePort instead.
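A minimal sketch of collecting those two pieces of information by hand:

```bash
# IP address of the minikube node
minikube ip

# NodePort assigned to our test-rw service
kubectl get svc test-rw -n cnpg-system \
  -o jsonpath='{.spec.ports[0].nodePort}'
```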
Alternatively, minikube can print the full connection URL for the service:
```bash
minikube service test-rw -n cnpg-system --url
```
Sample output:
```
http://192.168.39.229:31306
```
Then connect using `psql`:
```bash
psql -h 192.168.39.229 -U postgres -p 31306
```
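If you don't have the postgres password at hand: assuming superuser access is enabled on the cluster, CNPG keeps it in an auto-generated Secret named `<cluster>-superuser`, so it can be read back like this:

```bash
kubectl get secret cluster-example-initdb-superuser -n cnpg-system \
  -o jsonpath='{.data.password}' | base64 -d
```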
Enter your password when prompted. If everything is configured correctly, voilà:
```
psql (15.13, server 17.5)
SSL connection (protocol: TLSv1.3)
Type "help" for help.
```
We can now query the cluster as usual!
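Applications can use the same endpoint as a standard connection URI. A sketch; substitute your own IP address, NodePort, and password:

```bash
psql "postgresql://postgres:<password>@192.168.39.229:31306/cybertec"
```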
By leveraging CloudNativePG, you bring true infrastructure as code principles to your PostgreSQL environment to gain operational consistency and automation. Whether you're running on Minikube or in a full-blown cloud environment, CNPG helps bridge the gap between DevOps and database administration.