Kubernetes is a powerful and popular container orchestration platform that offers numerous benefits for organisations and developers. Many companies out there these days rely on Kubernetes (or some flavour of it including Rancher, OpenShift, etc.) to automate their environments.
PostgreSQL is perfectly suitable to run on Kubernetes. The way to handle a large relational database on a container orchestration platform is to use a thing called an "operator", which takes care of all the important operational tasks around the database.
A Kubernetes operator is basically supposed to be a replacement for a human "operator" handling all things "PostgreSQL" inside a Kubernetes deployment.
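The core idea behind any operator is a reconcile loop: compare the desired state (taken from a manifest) with the actual state of the cluster and act on the difference. The following toy shell sketch is purely illustrative - it is not taken from the operator's code base - but it shows the principle:

```shell
# Toy reconcile loop - illustrative only, not the operator's actual logic.
desired=3   # instances requested in the manifest
actual=1    # instances currently running

# Keep acting until the actual state matches the desired state
while [ "$actual" -lt "$desired" ]; do
    actual=$((actual + 1))
    echo "reconcile: scaled up to $actual of $desired instances"
done
echo "reconcile: state converged"
```

A real operator does the same thing continuously: it watches the custom resources and keeps nudging pods, services, and volumes toward the declared state.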
There are various operators for PostgreSQL out there. However, today we want to focus on the CYBERTEC PG operator, which is available as Open Source in our Github repository. Feel free to check it out and run it in your environment without restrictions.
What is important to mention here: We are already using the CYBERTEC PG operator to manage thousands of instances around the world. In other words: this code is reliable and just works. And if you find an area of improvement, we are open to pull requests.
In this blog you will learn how to use Minikube to run your first PostgreSQL cluster in your own local Kubernetes deployment.
Minikube is the easiest way to test Kubernetes locally on your machine. However, the procedure is the same in “real Kubernetes”. To make life easy for you we have included the deployment procedure of Minikube:
```shell
[hs@hansmacbook ~]$ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
…
[hs@hansmacbook ~]$ sudo install minikube-darwin-amd64 /usr/local/bin/minikube
Password:
```
Now that we have installed Minikube, we can start it. In this case I have used Docker for virtualization. However, things such as VirtualBox are fine as well. Before running the following command make sure Docker is up and running:
```shell
[hs@hansmacbook ~]$ minikube start --driver=docker
minikube v1.31.2 on Darwin 13.2
✨  Using the docker driver based on user configuration
Using Docker Desktop driver with root privileges
Starting control plane node minikube in cluster minikube
Pulling base image ...
    > gcr.io/k8s-minikube/kicbase...:  447.62 MiB / 447.62 MiB  100.00% 25.95 M
Creating docker container (CPUs=2, Memory=4000MB) ...
Preparing Kubernetes v1.27.4 on Docker 24.0.4 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
Configuring bridge CNI (Container Networking Interface) ...
Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
Enabled addons: default-storageclass, storage-provisioner
Done! kubectl is now configured to use 'minikube' cluster and 'default' namespace by default
```
Voila, Minikube is up and running. We can now work on PostgreSQL and test our shiny CYBERTEC PG operator. Deploying PostgreSQL is easiest when using a Helm chart, so we first need to ensure that Helm is readily available. On Mac OS X it can be installed via Homebrew:
```shell
[hs@hansmacbook ~]$ brew install helm
…
[hs@hansmacbook ~]$ which helm
/usr/local/bin/helm
```
Using brew makes this really easy.
Now we have everything we need: Kubernetes, Helm, etc. We can move on to deploying the operator itself.
The entire process is really quick and should not take more than a couple of minutes. Of course, this is only true for Minikube - deploying a full Enterprise-grade Kubernetes / OpenShift infrastructure requires more knowledge than a sample cluster. However, it is a good way to get started.
After deploying a simple Minikube installation we can take a look at the next steps. A good place to start is our Github page, which contains a lot more information.
The first thing is to clone the desired Github repo:
```shell
git clone https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials.git
cd CYBERTEC-operator-tutorials
```
Once we have cloned the repository and entered it, we can create a Kubernetes namespace. For those of you who happen to use OpenShift, we have also included the OpenShift equivalent:
```shell
# kubectl (Kubernetes)
kubectl create namespace cpo

# oc (OpenShift)
oc create namespace cpo
```
Then we can deploy the Helm charts. This is super easy, as everything is fully automated inside the charts: the operator and all additional components are deployed seamlessly.
```shell
# install Helm charts for the operator
$ helm install cpo -n cpo setup/helm/operator/.
NAME: cpo
LAST DEPLOYED: Mon Nov  6 14:51:46 2023
NAMESPACE: cpo
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
[... CPO ASCII-art banner ...]

The operator has been successfully installed and is ready for use.
Support for use can be found here:
Community mailing list: https://groups.google.com/g/cpo-community
Github project: https://github.com/cybertec-postgresql/postgres-pg-operator
Documentation: https://cybertec-postgresql.github.io/CYBERTEC-pg-operator/

Apache License Version 2.0
Copyright 2022-2023 CYBERTEC PostgreSQL International GmbH
```
Wow, that was it. That is all it takes. The following command (for OpenShift) checks whether the operator is indeed running:
```shell
# Show Operator Pod
$ oc get pods -n cpo
NAME                                 READY   STATUS    RESTARTS   AGE
postgres-operator-8477679896-72d7c   1/1     Running   0          16s
```
Finally, we can deploy our first cluster by applying a YAML file containing our requirements. Here is what the YAML file might contain:
```yaml
apiVersion: cpo.opensource.cybertec.at/v1
kind: postgresql
metadata:
  name: cluster-1
spec:
  dockerImage: 'docker.io/cybertecpostgresql/cybertec-pg-container:postgres-rocky9-16.0-1-1'
  numberOfInstances: 1
  postgresql:
    version: '16'
    parameters:
      password_encryption: 'scram-sha-256'
  resources:
    limits:
      cpu: 500m
      memory: 500Mi
    requests:
      cpu: 500m
      memory: 500Mi
  teamId: acid
  volume:
    size: 5Gi
```
What we see here is the amount of CPU and memory we want assigned, the size of storage we want, and a lot more. In short: the YAML file describes what kind of PostgreSQL instance we are looking for.
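If you want to sanity-check such a manifest before applying it, simple text tools go a long way. A minimal sketch using a trimmed copy of the manifest above (the file path is made up for illustration):

```shell
# Write a trimmed copy of the manifest above to a temporary file (path is arbitrary)
cat > /tmp/postgres-demo.yaml <<'EOF'
apiVersion: cpo.opensource.cybertec.at/v1
kind: postgresql
metadata:
  name: cluster-1
spec:
  numberOfInstances: 1
  teamId: acid
  volume:
    size: 5Gi
EOF

# Pull out individual fields to double-check them before applying
awk '/numberOfInstances:/ {print $2}' /tmp/postgres-demo.yaml
awk '/size:/ {print $2}' /tmp/postgres-demo.yaml
```

This prints the instance count and the volume size, which is a quick way to catch copy-paste mistakes before the manifest ever reaches the cluster.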
In the next step we can use this file, which is also part of our Github repo to make life easy for you, and send it to the Kubernetes cluster:
```shell
# Create Cluster
$ kubectl apply -f cluster-tutorials/single-cluster/postgres.yaml -n cpo
postgresql.cpo.opensource.cybertec.at/cluster-1 created

$ oc get pods -n cpo
NAME                                 READY   STATUS              RESTARTS   AGE
cluster-1-0                          0/1     ContainerCreating   0          9s
postgres-operator-86c7f5cd78-zvdw6   1/1     Running             0          5m34s
```
If you want to do the same thing on OpenShift rather than Kubernetes, use this instead:
```shell
oc apply -n cpo -f cluster-tutorials/single-cluster/postgres.yaml
```
Once the new PostgreSQL instance has been deployed we can check what is going on inside Kubernetes to validate that all is working:
```shell
# Show Cluster
$ kubectl get postgresqls.cpo.opensource.cybertec.at -n cpo
NAME        TEAM   VERSION   PODS   VOLUME   CPU-REQUEST   MEMORY-REQUEST   AGE   STATUS
cluster-1   acid   16        1      5Gi      500m          500Mi            18s   Running
```
Note that this is optional - it is only needed for demonstration purposes to show that everything is indeed working.
Your first PostgreSQL database is now working inside Kubernetes. The next goal is to be able to connect to this server. To do that we first have to allow traffic to the instance and then figure out which passwords and so on we can use to actually connect and establish a database connection.
The first step is to sort out networking:
```shell
# Connect to database
# patch svc to allow traffic via NodePort
$ kubectl patch svc cluster-1 -n cpo -p '{"spec": {"type": "NodePort"}}'
service/cluster-1 patched
```
Voila, the service has been patched. It is now reachable from outside the cluster via a NodePort:
```shell
$ kubectl get svc -n cpo
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
cluster-1        NodePort    10.99.250.53    <none>        5432:32551/TCP   3m30s
cluster-1-repl   ClusterIP   10.104.92.153   <none>        5432/TCP         3m30s
```
Mind the cluster-1 line in the listing - it reveals that the service is now of type NodePort, with PostgreSQL mapped to port 32551 on the node.
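The PORT(S) column encodes the mapping from the service port to the node port. If you want to extract the node port in a script, a small sketch using the value from the listing above:

```shell
# PORT(S) value as printed by kubectl, e.g. "5432:32551/TCP"
ports='5432:32551/TCP'

# Strip everything up to the colon, then everything from the slash onwards
node_port=${ports#*:}
node_port=${node_port%%/*}

echo "$node_port"   # → 32551
```

This uses plain shell parameter expansion, so it works without jq or other extra tools.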
The next piece of information we need is the password to create a database connection:
```shell
# Get Password from secret
$ kubectl get secret -n cpo
NAME                                                                          TYPE                 DATA   AGE
cpo-monitoring-role                                                           Opaque               2      38m
cpo-replication.cluster-1.credentials.postgresql.cpo.opensource.cybertec.at   Opaque               2      5m54s
postgres.cluster-1.credentials.postgresql.cpo.opensource.cybertec.at          Opaque               2      5m54s
sh.helm.release.v1.cpo.v1                                                     helm.sh/release.v1   1      38m

$ kubectl get secret postgres.cluster-1.credentials.postgresql.cpo.opensource.cybertec.at -n cpo -o jsonpath='{.data.password}' | base64 --decode
Vm4q1DErS5zjuuX7j2zI0NNhfRyPmGpNJWPTqi4DBq6fEbCOXnzgRUtu4b11XscX
```
But there is more. We can determine the nodes in operation:
```shell
$ kubectl get nodes -o wide
NAME       STATUS   ROLES           AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
minikube   Ready    control-plane   5h25m   v1.27.4   192.168.49.2   <none>        Ubuntu 22.04.2 LTS   6.5.9-1-MANJARO   docker://24.0.4
```
Voila, our first cluster is now operational. We can already run psql to connect to our shiny new database instance:
```shell
# connect to database
$ psql -h 192.168.49.2 -p 32551 -U postgres
Password for user postgres:
psql (16.0)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
Type "help" for help.

postgres=#

# Connect to Database via Pod-Shell
$ kubectl exec --stdin --tty cluster-1-0 -n cpo -- /bin/bash
[postgres@cluster-1-0 ~]$ psql
psql (16.0)
Type "help" for help.
```
As you can see there are even two methods. We can create a normal database connection using psql and use the same information in our applications. Alternatively we can make use of the Pod-Shell to talk to PostgreSQL via Kubernetes.
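For applications, the same pieces of information can be combined into a libpq-style connection URI. A sketch using the values from this walkthrough (your node IP, port and credentials will differ):

```shell
# Values taken from the walkthrough above; substitute your own
NODE_IP='192.168.49.2'
NODE_PORT='32551'
PGUSER='postgres'
PGDATABASE='postgres'

# libpq connection URI, accepted by psql and most PostgreSQL drivers
CONN_URI="postgresql://${PGUSER}@${NODE_IP}:${NODE_PORT}/${PGDATABASE}"
echo "$CONN_URI"
# → postgresql://postgres@192.168.49.2:32551/postgres
```

The password is deliberately left out of the URI here; it is better supplied via the PGPASSWORD environment variable or a .pgpass file than embedded in a connection string.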
Deploying new PostgreSQL instances in Kubernetes is easy, and the same is true if we want to get rid of those database instances again:
```shell
$ kubectl delete postgresqls.cpo.opensource.cybertec.at cluster-1 -n cpo
postgresql.cpo.opensource.cybertec.at "cluster-1" deleted
```
Finally here is proof that the operation was indeed successful:
```shell
$ kubectl get pods -n cpo
NAME                                 READY   STATUS        RESTARTS   AGE
cluster-1-0                          1/1     Terminating   0          3m
postgres-operator-86c7f5cd78-zvdw6   1/1     Running       0          18m

$ kubectl get pods -n cpo
NAME                                 READY   STATUS    RESTARTS   AGE
postgres-operator-86c7f5cd78-zvdw6   1/1     Running   0          18m
```
Running PostgreSQL on Kubernetes is easy and offers a great deal of automation. Our operator is Open Source under the Apache 2.0 license and can therefore be used free of charge (we of course provide full support). It has proven to work on thousands of systems worldwide - in short: we are talking about reliable, easy-to-use software. Please check out our blog on a regular basis - we will keep publishing PostgreSQL and Kubernetes related content, so make sure you don't miss out.