
PostgreSQL clustering: vip-manager

10.2020

Patroni has dramatically simplified the setup of PostgreSQL High-Availability clusters. Personally, I consider Patroni to be by far the best tool out there for clustering PostgreSQL and ensuring HA in the most reliable and straightforward way possible.

The main questions asked by many people are: How can you make a PostgreSQL cluster transparent? How does the application figure out where the primary database is? The solution to the problem is a tool we implemented some time ago. Let me introduce you to vip-manager (Virtual IP Manager), a simple-to-use tool to handle service IPs in a PostgreSQL Patroni cluster.

vip-manager: Automatic IP failover

High availability with Patroni


Before we dig a bit deeper into vip-manager, I want to briefly explain the basic architecture of Patroni itself. As you might know, Patroni is a tool initially developed by Zalando, and it was a milestone in the development of PostgreSQL High-Availability. Over the years, we (CYBERTEC) have contributed countless patches to Patroni. We hope that we have made a difference in this crucial area relevant to so many people relying on PostgreSQL HA.

Patroni uses a key-value store (usually etcd, ZooKeeper, or Consul) to handle consensus. The idea is to keep a leader key in the key-value store, which identifies the current primary. In other words: the cluster knows who happens to be primary and who happens to be secondary. Why not use this information to move a service IP around inside the cluster and assign it to the active primary? This is precisely what vip-manager does. It runs on every database node and checks whether the node it runs on happens to be the "chosen one". If vip-manager sees that it runs on the current primary, it grabs the IP and creates an IP alias. If it sees that its node is not the primary, it ensures that the IP is removed.
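The decision loop described above can be sketched in a few lines of Go. This is purely illustrative and not the actual vip-manager source: the real tool watches the leader key in the DCS and manipulates the network interface (or a hosting API), but the core comparison boils down to this:

```go
package main

import "fmt"

// decideAction compares the leader name stored in the key-value store
// with this node's own name and decides whether the virtual IP must be
// added ("register"), removed ("release"), or left alone ("none").
// Simplified sketch only; names and return values are made up for
// illustration.
func decideAction(leader, self string, hasVIP bool) string {
	switch {
	case leader == self && !hasVIP:
		return "register" // we are primary: add the IP alias
	case leader != self && hasVIP:
		return "release" // we are not primary: drop the alias
	default:
		return "none" // state already matches, nothing to do
	}
}

func main() {
	fmt.Println(decideAction("pg1", "pg1", false)) // primary without VIP
	fmt.Println(decideAction("pg1", "pg2", true))  // replica still holding VIP
}
```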

In a cluster consisting of 3 database servers, you will therefore need 4 IPs: one static IP per server, plus one floating IP that moves around and follows the primary. Of course, the floating IP is what you should use in your applications.

A typical node in a Patroni cluster has to run two services: Patroni and vip-manager. Patroni controls PostgreSQL (stop, start, sync, etc.). vip-manager is a simple Go application that does nothing but control the floating IP pointing to the primary.
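On a systemd-based distribution, both services typically run side by side as units. The unit file below is a hypothetical sketch; the binary path, config path, and unit names depend on your packaging:

```ini
# /etc/systemd/system/vip-manager.service -- illustrative example
[Unit]
Description=Manages the virtual IP for the Patroni primary
After=network-online.target patroni.service

[Service]
Type=simple
ExecStart=/usr/bin/vip-manager --config=/etc/patroni/vip.yml
Restart=on-failure

[Install]
WantedBy=multi-user.target
```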

Configuring vip-manager

Configuring vip-manager is simple. Here is an example:
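A minimal configuration sketch might look like the following. The key names and values here are illustrative and can differ between vip-manager versions, so check the sample config shipped with your package:

```yaml
# vip.yml -- illustrative values only
interval: 1000                            # check interval in milliseconds
trigger-key: "/service/pgcluster/leader"  # leader key maintained by Patroni
trigger-value: "pg1"                      # this node's name in the cluster
ip: 192.168.0.123                         # the floating service IP
netmask: 24
interface: eth0
dcs-type: etcd
dcs-endpoints:
  - http://192.168.0.11:2379
etcd-user: patroni
etcd-password: secret
retry-num: 2
retry-after: 250                          # milliseconds between retries
```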

The configuration boils down to a handful of settings: vip-manager has to know where etcd is, where to find the leader key, and which IP to bind to which network interface. The rest is login information for etcd and some retry configuration. The core questions are really: Where is my etcd, and how can I log in? Where in etcd is my leader key, and which IP is assigned to which device? That is the entire magic.

vip-manager helps to make a cluster fully transparent. It is available as binary packages for the most common Linux distributions and for Windows. We maintain the package and add improvements as needed. One improvement worth pointing out is that vip-manager is also able to work on Hetzner (a leading German hosting company): we can talk to the Hetzner API to move IPs around. The reason I am mentioning this is that if you've got special requirements, vip-manager can be adjusted to your needs with reasonable effort.

Finally …

I hope you enjoyed this posting about virtual IPs and PostgreSQL clustering. If you want to know more about recently released software, I want to point you to Scalefield, a solution to automate PostgreSQL deployments using Kubernetes.
If you want to read something right now, I want to tell you about one more tool we have recently implemented in Go (golang). pg_timetable is a cutting-edge job scheduler for PostgreSQL, which allows you to run SQL code as well as built-ins (email, etc.) and other executables. Check out our posts about it.

3 responses to “PostgreSQL clustering: vip-manager”

  1. Thanks for the helpful content. I want to ask: can we implement hot standby routing for read-only queries with the same vip-manager service? For instance: 192.168.0.123 for R/W and 192.168.0.124 for R.

  2. Hello
    I have a question similar to Багир Гварамадзе's.
    We create two-node clusters, so we always have only one replica.
    Does a solution already exist (based on vip-manager) to manage an additional VIP for the replica node?
    Thank you!

  3. You can manage this by setting up keepalived with a config that might look like this:

    First you need to create a tracking script that checks whether the current host is the leader:


    #!/bin/bash

    # Settings for Patroni API
    PATRONI_API_URL="http://127.0.0.1:8008"

    # Check role via Patroni API
    role=$(curl -s $PATRONI_API_URL/patroni | jq -r .role)

    if [ "$role" == "leader" ]; then
        exit 0 # Leader
    else
        exit 1 # Not leader
    fi

    And then a script for checking whether the server is a replica:


    #!/bin/bash

    # Settings for Patroni API
    PATRONI_API_URL="http://127.0.0.1:8008"

    # Check role via Patroni API
    role=$(curl -s $PATRONI_API_URL/patroni | jq -r .role)

    if [ "$role" == "replica" ]; then
        exit 0 # Is replica
    else
        exit 1 # Is not replica
    fi

    Then the keepalived configuration /etc/keepalived/keepalived.conf:


    vrrp_script check_leader {
        script "/etc/keepalived/scripts/check_leader.sh"
        interval 5
        timeout 3
        fall 2
        rise 2
    }

    vrrp_script check_replica {
        script "/etc/keepalived/scripts/check_replica.sh"
        interval 5
        timeout 3
        fall 2
        rise 2
    }

    # Configuration for leader vrrp instance
    vrrp_instance VI_1 {
        state BACKUP
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1

        authentication {
            auth_type PASS
            auth_pass your_password
        }

        virtual_ipaddress {
            192.168.1.10 # VIP for leader
        }

        track_script {
            check_leader
        }

        notify_master "/etc/keepalived/scripts/notify_master.sh"
        notify_backup "/etc/keepalived/scripts/notify_backup.sh"
        notify_fault "/etc/keepalived/scripts/notify_fault.sh"
    }

    # Configuration for replica vrrp instance
    vrrp_instance VI_2 {
        state BACKUP
        interface eth0
        virtual_router_id 52
        priority 100
        advert_int 1

        authentication {
            auth_type PASS
            auth_pass your_password
        }

        virtual_ipaddress {
            192.168.1.20 # VIP for replica
        }

        track_script {
            check_replica
        }

        notify_master "/etc/keepalived/scripts/notify_replica_master.sh"
        notify_backup "/etc/keepalived/scripts/notify_replica_backup.sh"
        notify_fault "/etc/keepalived/scripts/notify_replica_fault.sh"
    }

    Notify scripts could be a curl call to a Slack channel or something like that:


    #!/bin/bash
    echo "VIP switched to leader" | logger
