Kubernetes (and OpenShift) has become a highly popular way to run PostgreSQL at scale. Beyond being an excellent runtime platform, Kubernetes makes goals such as consistency, compliance and much more far easier to achieve. In this post you will learn how to deploy the Zalando operator for PostgreSQL and get a database cluster running quickly and easily.
To keep things simple, I have used Minikube, an easy way to run Kubernetes locally on your development machine or even on your laptop.
If you are new to Minikube, I highly recommend taking a look at the "Getting started" guide, which explains the basic steps in detail.
Let us assume Minikube is already deployed on your machine. You can start the system with the following command:
$ minikube start
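If the default VM resources turn out to be too small for the operator plus a couple of database pods, Minikube lets you size them explicitly. The flags below are standard Minikube options; the values are just a suggestion:

$ minikube start --cpus=4 --memory=4096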
We can verify that things are working as follows:

$ kubectl get pod
...

This command lists the pods in the 'default' namespace.
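If you want a broader health check before moving on, the following standard commands give a quick overview of the cluster (output omitted):

$ minikube status
$ kubectl cluster-info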
Once we have ensured that Kubernetes is indeed running, we can download the Zalando operator from its GitHub repository:
$ git clone https://github.com/zalando/postgres-operator.git
$ cd postgres-operator
The repository is public and can be cloned directly. In the next step we simply apply templates. Such a template describes what we want from Kubernetes: it contains all the vital information about the service we want to deploy.
Before looking at the content of the files, we can simply apply the templates. The first thing to send to Kubernetes is the ConfigMap:
$ kubectl create -f manifests/configmap.yaml
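To get a feeling for what such a template looks like, here is a trimmed, illustrative sketch of a ConfigMap of this kind. The keys and values shown are examples only; the real manifests/configmap.yaml in your checkout documents the full list of options:

apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-operator
data:
  # all values are strings; most options have sensible defaults
  watched_namespace: "*"   # which namespaces the operator watches
  workers: "8"             # number of parallel worker routines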
The next thing to set up are identities and permissions, which are stored in a simple YAML file as well. Here is how it works:
$ kubectl create -f manifests/operator-service-account-rbac.yaml
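Conceptually, this file creates a ServiceAccount for the operator together with the RBAC objects that grant it the permissions it needs. A heavily trimmed sketch of that structure might look as follows; the real file grants many more permissions:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: postgres-operator
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: postgres-operator
rules:
  # the real file lists all API groups, resources and verbs the operator may use
  - apiGroups: ["acid.zalan.do"]
    resources: ["postgresqls"]
    verbs: ["get", "list", "watch", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: postgres-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: postgres-operator
subjects:
  - kind: ServiceAccount
    name: postgres-operator
    namespace: default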
In the next step we deploy the operator that is going to control all our future PostgreSQL instances and clusters:
$ kubectl create -f manifests/postgres-operator.yaml
Those calls are expected to work flawlessly. However, it is important to understand what this file actually contains:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-operator
  labels:
    application: postgres-operator
spec:
  replicas: 1
  strategy:
    type: "Recreate"
  selector:
    matchLabels:
      name: postgres-operator
  template:
    metadata:
      labels:
        name: postgres-operator
    spec:
      serviceAccountName: postgres-operator
      containers:
      - name: postgres-operator
        image: ghcr.io/zalando/postgres-operator:v1.13.0
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            cpu: 100m
            memory: 250Mi
          limits:
            cpu: 500m
            memory: 500Mi
        securityContext:
          runAsUser: 1000
          runAsNonRoot: true
          readOnlyRootFilesystem: true
          allowPrivilegeEscalation: false
        env:
        # provided additional ENV vars can overwrite individual config map entries
        - name: CONFIG_MAP_NAME
          value: "postgres-operator"
The template contains a handful of important settings, such as the number of replicas and the CPU and memory requests and limits. It is the most basic description of the service there is.
Under normal circumstances the process is expected to work flawlessly. However, it is better to be safe than sorry. It is always a good idea to verify what we have just done:
$ kubectl get pod -l name=postgres-operator
NAME                                 READY   STATUS    RESTARTS      AGE
postgres-operator-5bcb7b8d94-sb8pv   1/1     Running   1 (63s ago)   77
If the status is shown as "Running" we have achieved success.
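If the pod does not reach the "Running" state, or you simply want more detail, the operator's log is the place to look. The command below uses the same label as the check above:

$ kubectl logs -l name=postgres-operator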
The great moment has come! It is time to spin up the first PostgreSQL instance and see how it operates. Once the operator pod is running, it listens for new events related to "postgresql" resources. For the purpose of demonstration it makes sense to edit "manifests/minimal-postgres-manifest.yaml" and set numberOfInstances: 1 in line 9. Now we can apply our first Postgres manifest:
$ kubectl create -f manifests/minimal-postgres-manifest.yaml
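For reference, the manifest we have just applied looks roughly like the sketch below. It is illustrative only; the exact content, line numbers and PostgreSQL version in your checkout may differ:

apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-minimal-cluster
spec:
  teamId: "acid"
  volume:
    size: 1Gi
  numberOfInstances: 1
  users:
    # database roles the operator should create
    zalando:
    - superuser
    - createdb
  databases:
    # database name: owner role
    foo: zalando
  postgresql:
    version: "16"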
After the cluster manifest has been submitted and has passed validation, the operator creates "Service" and "Endpoint" resources as well as a "StatefulSet", which spins up new Pod(s) according to the number of instances specified in the manifest.
The following command shows what is going on:
$ kubectl get postgres
Let us dig a little deeper and check the database pods and service resources that were created:
$ kubectl get pods -l application=spilo -L spilo-role
...
$ kubectl get svc -l application=spilo -L spilo-role
...
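The StatefulSet mentioned above can be inspected with the same label selector (output omitted):

$ kubectl get statefulset -l application=spilo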
After those checks we can move on to the next big thing: Connecting to the database instance we have just created.
The final stage has to do with networking. Keep in mind that thousands of services might be running inside a single Kubernetes cluster, so we have to make sure the networking is right. In order to connect to the database, we can use port-forwarding. This can be done with a tool such as k9s, or manually with kubectl:
$ export PGMASTER=$(kubectl get pods -o jsonpath={.items..metadata.name} -l application=spilo,cluster-name=acid-minimal-cluster,spilo-role=master -n default)
$ kubectl port-forward $PGMASTER 6432:5432 -n default
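Note that kubectl port-forward blocks in the foreground, so run it in a separate terminal or send it to the background. To confirm that the forwarded port is reachable, you can use PostgreSQL's pg_isready tool; 6432 is the local port we picked above:

$ pg_isready -h localhost -p 6432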
After wiring up the network we can fetch the password from the Kubernetes secret and pass it to psql in order to connect:
$ export PGPASSWORD=$(kubectl get secret postgres.acid-minimal-cluster.credentials.postgresql.acid.zalan.do -o 'jsonpath={.data.password}' | base64 -d)
$ export PGSSLMODE=require
$ psql -U postgres -h localhost -p 6432
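The operator creates one secret per database role it manages, following the same naming pattern. Assuming your manifest defines the zalando user (as the minimal manifest does), you can connect with that role in exactly the same way:

$ export PGPASSWORD=$(kubectl get secret zalando.acid-minimal-cluster.credentials.postgresql.acid.zalan.do -o 'jsonpath={.data.password}' | base64 -d)
$ psql -U zalando -h localhost -p 6432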
Voila, you have just successfully deployed the Zalando operator and created your first database instance in your personal Kubernetes cluster.
After this success we can refine our deployment by adding a replica. From now on, changes to our environment are quick and easy to make. To add a replica to the cluster, edit "manifests/minimal-postgres-manifest.yaml" again, set numberOfInstances: 2 in line 9 and apply the change:
$ kubectl apply -f manifests/minimal-postgres-manifest.yaml
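If you prefer not to edit the file, the same change can be applied directly to the custom resource. kubectl patch and the -w (watch) flag are standard kubectl features; the resource name is the cluster we created earlier:

$ kubectl patch postgresql acid-minimal-cluster --type merge -p '{"spec": {"numberOfInstances": 2}}'
$ kubectl get pods -l application=spilo -L spilo-role -w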
What we have essentially done here is change the desired specification of our deployment. Kubernetes and the operator take care of whatever is necessary to transition to that desired state behind the scenes.
The operator ships many more sample manifests that allow us to run various operations, including backups, recovery and a lot more. Make sure you check out our blog on a regular basis to find out more.