Blog post updated 05.12.2022.

People keep asking: "What are the best ways to upgrade and update my PostgreSQL to the latest version?" This blog post covers how you can upgrade or update to the latest PostgreSQL release.
Before we get started, we have to make a distinction between two things:

- updating to a new minor release within the same major version (e.g. 15.0 → 15.1)
- upgrading to a new major version (e.g. 12 → 15)
Let’s take a look at both scenarios.
The first scenario I want to shed some light on is updating within a major version of PostgreSQL. What does that mean? Suppose you want to move from PostgreSQL 15.0 to PostgreSQL 15.1. In that case, all you have to do is install the new binaries and restart the database.

There is no need to reload the data, and the downtime required is minimal. The binary format does not change; a quick restart is all you need. The same is true if you are running an HA (high availability) cluster solution such as Patroni: simply restart all nodes in your PostgreSQL cluster and you are ready to go.
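On a typical Linux installation, a minor update boils down to something like the following. This is only a sketch; the package and service names are assumptions and depend on your distribution and packaging:

```bash
# Minor release update: install the new binaries, then restart the server.
sudo apt-get update && sudo apt-get install postgresql-15   # or: sudo dnf update postgresql15-server
sudo systemctl restart postgresql                           # or: pg_ctl -D /data/db15 restart
psql -c "SELECT version();"                                 # verify that the new minor release is running
```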
The following table contains a little summary:
| Tooling | Task | Reloading data | Downtime needed |
| --- | --- | --- | --- |
| yum / dnf / apt | Minor release update | Not needed | Close to zero |
Now let’s take a look at upgrades: if you want to move from PostgreSQL 9.6, 10, 13 or some other older version to PostgreSQL 15, an upgrade is needed. To do that, you have various options:
- pg_dumpall: Dump / reload (see the sketch below)
- pg_upgrade: Copy data on a binary level
- pg_upgrade --link: In-place upgrades

If you dump and reload data, it might take a lot of time. The bigger your database is, the more time you will need to do the upgrade. It follows that pg_dump / pg_dumpall and restore are not the right tools to upgrade a large multi-terabyte database.

pg_upgrade is here to do a binary upgrade. It copies all the data files from the old directory to the new one. Depending on the amount of data, this can take quite a lot of time and cause serious downtime. However, if the new and the old data directory are on the same filesystem, there is a better option: "pg_upgrade --link". Instead of copying all the files, pg_upgrade will create hard links for those data files. The amount of data is not a limiting factor anymore, because hard links can be created quickly. "pg_upgrade --link" therefore promises close to zero downtime.
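Here is the dump / reload sketch mentioned above. It is only a minimal illustration; the paths and port numbers are assumptions based on the example used later in this post:

```bash
# Dump everything from the old instance and load it into the new one.
# Simple, but slow for large databases.
/path/pg12/bin/pg_dumpall -p 5432 > dump.sql
/usr/local/Cellar/postgresql/15.1/bin/psql -p 5433 -d postgres -f dump.sql
```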
| Tooling | Task | Reloading data | Downtime needed |
| --- | --- | --- | --- |
| pg_upgrade | Major release update | Metadata and files are copied | Downtime is needed |
| pg_upgrade --link | Major release update | Only metadata are copied | Close to zero |
What is important to note here is that pg_upgrade is never destructive. If something goes wrong, you can always delete the new data directory and start from scratch.
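In practice, a failed attempt can be cleaned up and retried roughly like this; the paths match the example below:

```bash
# The old cluster in /data/db12 is untouched; only the new directory is discarded.
rm -rf /data/db15
initdb -D /data/db15
```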
To show pg_upgrade in action, I have created a little sample database:
```sql
test=# CREATE TABLE a AS SELECT id AS a, id AS b, id AS c FROM generate_series(1, 50000000) AS id;
SELECT 50000000
test=# CREATE TABLE b AS SELECT * FROM a;
SELECT 50000000
test=# CREATE INDEX idx_a_a ON a (a);
CREATE INDEX
test=# CREATE INDEX idx_a_b ON a (b);
CREATE INDEX
test=# CREATE INDEX idx_b_a ON b (a);
CREATE INDEX
```
This is a fairly small database, but it is already large enough so that users can feel the difference when doing the upgrade:
```sql
test=# SELECT pg_size_pretty(pg_database_size('test'));
 pg_size_pretty
----------------
 7444 MB
(1 row)
```
7.4 GB are ready to be upgraded. Let’s see how it works.
To upgrade a database, three steps are needed:
- run initdb ...
- copy pg_hba.conf and pg_ident.conf and adapt the new postgresql.conf
- run pg_upgrade ...

Let us start with initdb:
```bash
iMac:tmp hs$ initdb -D /data/db15
The files belonging to this database system will be owned by user 'hs'.
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US
  CTYPE:    UTF-8
  MESSAGES: en_US
  MONETARY: en_US
  NUMERIC:  en_US
  TIME:     en_US
The default database encoding has accordingly been set to 'UTF8'.
initdb: could not find suitable text search configuration for locale 'UTF-8'
The default text search configuration will be set to 'simple'.

Data page checksums are disabled.

creating directory /data/db15 ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... Europe/Vienna
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

initdb: warning: enabling 'trust' authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /data/db15 -l logfile start
```
Note that pg_upgrade is only going to work if the encodings of the old and the new database instance match. Otherwise, it will fail.
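The second step is not shown here: copy the authentication configuration over from the old cluster and adapt the new postgresql.conf. A minimal sketch, assuming the directory layout used in this example:

```bash
# Carry over the authentication config from the old cluster.
cp /data/db12/pg_hba.conf   /data/db15/pg_hba.conf
cp /data/db12/pg_ident.conf /data/db15/pg_ident.conf
# Then merge your custom settings (shared_buffers, listen_addresses, ...) into /data/db15/postgresql.conf
```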
After adapting the configuration files, we can run pg_upgrade. Basically, we need four pieces of information here: the old and the new data directory, as well as the path to the old and the new binaries:
```bash
iMac:tmp hs$ time pg_upgrade -d /data/db12/ -D /data/db15 -b /path/pg12/bin/ -B /usr/local/Cellar/postgresql/15.1/bin/

Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Creating dump of global objects                             ok
Creating dump of database schemas                           ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Analyzing all rows in the new cluster                       ok
Freezing all rows in the new cluster                        ok
Deleting files from new pg_xact                             ok
Copying old pg_xact to new server                           ok
Setting next transaction ID and epoch for new cluster       ok
Deleting files from new pg_multixact/offsets                ok
Copying old pg_multixact/offsets to new server              ok
Deleting files from new pg_multixact/members                ok
Copying old pg_multixact/members to new server              ok
Setting next multixact ID and offset for new cluster        ok
Resetting WAL archives                                      ok
Setting frozenxid and minmxid counters in new cluster       ok
Restoring global objects in the new cluster                 ok
Restoring database schemas in the new cluster               ok
Copying user relation files                                 ok
Setting next OID for new cluster                            ok
Sync data directory to disk                                 ok
Creating script to analyze new cluster                      ok
Creating script to delete old cluster                       ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    ./analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh

real    4m11.702s
user    0m0.255s
sys     0m13.385s
```
What is worth mentioning here is that the upgrade process takes over four minutes, because all the data has to be copied to the new directory. My old Mac is not very fast, and copying simply takes a long time.
To reduce downtime, we can clean out the directory, run initdb again, and add the --link option:
```bash
iMac:~ hs$ time pg_upgrade -d /data/db12/ -D /data/db15 -b /path/pg12/bin/ -B /usr/local/Cellar/postgresql/15.1/bin/ --link

Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
...

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    ./analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh

real    0m3.538s
user    0m0.198s
sys     0m0.322s
```
In this case, it took only 3.5 seconds to upgrade. We can start the database instance directly and keep working with the new instance.
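As the pg_upgrade output pointed out, optimizer statistics are not transferred, so a typical follow-up looks roughly like this (paths as above):

```bash
pg_ctl -D /data/db15 -l logfile start     # start the upgraded cluster
vacuumdb --all --analyze-in-stages        # rebuild optimizer statistics, or run the generated ./analyze_new_cluster.sh
```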
pg_upgrade --link under the hood

The --link option can reduce the downtime to close to zero. The question is: what happens under the hood? Let's check out the data files:
```bash
-rw-------  2 hs  wheel  1073741824 Nov 22 11:55 16388
-rw-------  2 hs  wheel  1073741824 Nov 22 11:58 16388.1
-rw-------  2 hs  wheel    66576384 Nov 22 12:01 16388.2
-rw-------  2 hs  wheel  1073741824 Nov 22 12:12 16391
-rw-------  2 hs  wheel  1073741824 Nov 22 12:13 16391.1
-rw-------  2 hs  wheel    66576384 Nov 22 12:14 16391.2
-rw-------  2 hs  wheel  1073741824 Nov 22 12:01 16394
-rw-------  2 hs  wheel    49373184 Nov 22 12:01 16394.1
-rw-------  2 hs  wheel  1073741824 Nov 22 12:03 16395
-rw-------  2 hs  wheel    49373184 Nov 22 12:03 16395.1
-rw-------  2 hs  wheel  1073741824 Nov 22 12:15 16396
-rw-------  2 hs  wheel    49373184 Nov 22 12:15 16396.1
-rw-------  1 hs  wheel        8192 Nov 24 10:03 174
-rw-------  1 hs  wheel        8192 Nov 24 10:03 175
```
What you see here is that the relevant data files have a link count of 2, which means the same file (inode) is referenced by two directory entries: one in the old and one in the new data directory. pg_upgrade creates hard links, and this is exactly what we see here.
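If you want to verify this yourself, compare the inodes of the old and the new files. The exact paths depend on your database OIDs, so treat the following only as a sketch:

```bash
# A link count of 2 and identical inode numbers show that the old and the new cluster share the file.
stat -c "%i %h %n" /data/db12/base/*/16388 /data/db15/base/*/16388   # Linux
# on macOS use: stat -f "%i %l %N" ...
```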
That's why it is super important to make sure that the mount point is /data and not /data/db12: hard links are only possible if the old and the new data directory are on the same filesystem.
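A quick way to check whether --link is possible at all is to make sure both data directories report the same filesystem:

```bash
df /data/db12 /data/db15   # both lines must show the same filesystem / mount point
```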
pg_upgrade is an important tool. However, there are more ways to upgrade: logical replication is an alternative method. Check out our blog post about this exciting technology.
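To give you a rough idea of that approach, here is a minimal sketch rather than a complete recipe; host names, database and user are assumptions:

```bash
# Requires wal_level = logical on the old server (PostgreSQL 10 or later).
# On the old primary: publish all tables.
psql -h old-server -d test -c "CREATE PUBLICATION upgrade_pub FOR ALL TABLES;"
# On the new PostgreSQL 15 instance: the schema must already exist there (e.g. restored with pg_dump --schema-only).
psql -h new-server -d test -c "CREATE SUBSCRIPTION upgrade_sub CONNECTION 'host=old-server dbname=test user=repuser' PUBLICATION upgrade_pub;"
# Once the subscriber has caught up, switch the application over to the new server.
```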
https://uploads.disquscdn.com/images/40f117442e2b213df3223ef1b9655e5c67dc91509657a77446acd8548c5028bb.png
pg_upgrade --link failed while migrating from PostgreSQL 10 to 12 with the error below:
[unrecognized win32 error code: 50]
[could not create hard link between old and new data directories: Invalid argument]
[In link mode the old and new data directories must be on the same file system.]
D:\PostgreSQL\bin\pg_upgrade --old-datadir=D:\OldPostgreSQL\data --new-datadir=D:\PostgreSQL\data --old-bindir=D:\OldPostgreSQL\bin --new-bindir=D:\PostgreSQL\bin --old-port=5433 --new-port=49121 --username=postgres --retain --link
Is the failure due to a mounting issue between the old and the new data directory?
Note: I'm getting this issue only on one of the machines.
Machine OS: Windows Server 2019
Can you please help me with this?
How to upgrade from 9.4 to 13.2?
How to upgrade from 8 to 9?
You don't upgrade to an old, outdated version like 9.X.
Use a more recent version.
The method is dump/restore.
Thank you, but in 2024 our business software relies on Postgres 13, and the upgrade from 8 to 13 failed. The developer said to upgrade from 8 to 9, then 9 to 13, and if you want, 13 to 16 (or 9 to 16). Dump/restore has to be the best solution, the developer told us.
Your developer is wrong.
Dear Laurenz, do you want to keep in contact with me outside of your great webpage? I need help with the upgrade of Postgres, and I am willing to pay a fair price. Are you Mr. Schönig in real life?
I am not Mr. Schönig, I am Mr. Albe.
We are happy to provide support to you, simply contact sales@cybertec.at and order a consulting pool.
I'm also interested in upgrading from v8 to v13. Is it directly possible, or only via path 8->9->13(->16) ?
(Did not find any official info - perhaps it's my bad.)
How can I upgrade my replica environment, where each replica is syncing data using a separate slot per node? During the upgrade with --link, the slot is not copied to the new version. rsync has been done for the replicas, but there is no slot, and nowhere can I see from which LSN the replica will start. Please guide me on how to do that.
You have to rebuild the standby servers from scratch after upgrading the primary.
There is some documentation on how to do it faster using rsync, but those instructions are not for the faint of heart and cannot be recommended.