UPDATED September 2023: If you use transaction log shipping (or Point-In-Time-Recovery) in PostgreSQL to keep your data safe, you surely already know about pg_basebackup. The idea behind pg_basebackup is to allow users to create a binary copy of the data which can serve as the basis for Point-In-Time-Recovery. However, we recently saw some support cases related to improper use of pg_basebackup. So let's clear up some of the myths surrounding this command and shed some light on how things really work. Note that the information in this blog is true for PostgreSQL 10 and higher; everything said and described here therefore works perfectly for PostgreSQL 15, PostgreSQL 16 and higher.

How does pg_basebackup create self-sufficient backups?
Basically, pg_basebackup is a way to “tar” data over a database connection. When you start pg_basebackup, it waits for the server to perform a checkpoint and then starts copying the PostgreSQL data directory over. In most databases, a checkpoint is a fairly fundamental operation: if you want to know more, read about PostgreSQL checkpoints in our blog.
Once the checkpoint is done, data is copied over while the “donor” server (usually the primary, but replicas work too) is fully operational. Thousands of transactions (reads and writes) may happen while the base backup is running. This means that the data files you are copying are incomplete, inconsistent, and basically not usable on their own. Yes, you have read this correctly: we are copying data while it is being modified, and this will produce broken, incomplete and inconsistent data files.
However, let us reflect a bit on these “errors”. The content of the base backup is something like a mixture of the data as it was at the beginning of the base backup and of the data as it was at the end of the base backup. Since there is a transaction log entry for every change, we can always repair any inconsistencies using the PostgreSQL transaction log. In other words: corrupted base backup + WAL = consistent data.
By default, a base backup does not contain the WAL which was created while the base backup was taken. In other words, only if NOTHING happened during the base backup (which is highly unlikely) might it work without a WAL archive. However, as stated before – it “MIGHT”. There is no guarantee. For example, an autovacuum process could start during the backup, which would change the data and cause inconsistency.
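For completeness: when you do rely on an external WAL archive for Point-In-Time-Recovery, the restored server needs to know how to fetch archived WAL. A minimal configuration sketch, assuming WAL segments were archived to the hypothetical path /wal_archive:

```
# postgresql.conf on the restored server (sketch; /wal_archive is an assumed path)
restore_command = 'cp /wal_archive/%f "%p"'

# optionally stop replay at a given point in time:
# recovery_target_time = '2023-09-01 12:00:00'
```

On PostgreSQL 12 and higher, you additionally create an empty recovery.signal file in the data directory to put the server into targeted recovery mode.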
To make sure that the base backup contains enough WAL to reach at least a consistent state, I recommend adding --wal-method=stream when you call pg_basebackup. It will open a second stream which fetches the WAL created during the backup. The advantage is that the backup then has everything it needs for a consistent restore, without an external WAL archive.
Here is my favorite method of running pg_basebackup:

```
pg_basebackup -D /target_dir \
    -h master.example.com \
    --checkpoint=fast --wal-method=stream -R
```
-R will automatically set the parameters necessary to run streaming replication. For Point-In-Time-Recovery, -R is neither necessary nor useful. Once this is done, we can already start the replica. Since the WAL was streamed along with the data, we'll have a consistent database, which is guaranteed to be free of corruption.
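For illustration, on PostgreSQL 12 and higher -R creates an empty standby.signal file in the target directory and appends the connection settings to postgresql.auto.conf. The result looks roughly like this (host and user values are placeholders, not taken from a real setup):

```
# /target_dir/postgresql.auto.conf (excerpt written by -R; values are placeholders)
primary_conninfo = 'host=master.example.com port=5432 user=replicator'

# /target_dir/standby.signal is created as an empty file;
# its mere presence makes the server start as a standby.
```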
By default, pg_basebackup will operate at full speed. This can lead to trouble, because your network connection or your disk might run at peak capacity. The way to fix this problem is by reducing the speed of pg_basebackup in the first place:
```
-r, --max-rate=RATE    maximum transfer rate to transfer data directory
                       (in kB/s, or use suffix 'k' or 'M')
```
Throttling your backups will of course increase your backup times, but in exchange, your network connections will have some spare capacity.
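To pick a sensible rate, a quick back-of-the-envelope calculation helps. A minimal sketch, where the data directory size and the chosen rate are purely assumed example values:

```shell
# Estimate the duration of a throttled base backup (sizes are assumptions)
DATA_MB=51200          # ~50 GB data directory
RATE_MB_PER_S=30       # corresponds to --max-rate=30M
# integer estimate of the transfer time in minutes
echo "$(( DATA_MB / RATE_MB_PER_S / 60 )) minutes"   # prints "28 minutes"
```

With --max-rate=30M, a 50 GB cluster thus takes roughly half an hour to copy, instead of saturating the link for a shorter burst.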
In order to receive regular updates on important changes in PostgreSQL, subscribe to our newsletter, or follow us on Facebook or LinkedIn.