
Upgrade to a new PostgreSQL major version

09.2019

by Kaarel Moppel

It’s almost that time of year again—PostgreSQL's next major version, v12, is on the way! Upgrading databases, including Postgres, is a rare event for many users, leading to misconceptions about the difficulty of the process. In reality, upgrading is straightforward, which makes it surprising that many still use outdated versions. This article aims to encourage faster upgrades in the future. For a quick overview, check the TL;DR table at the end.

Why should I upgrade in the first place?

Some main points that come to mind:

  • More security through an upgrade
    Needless to say, I think security is massively important nowadays! Due to various government regulations and the threat of being sued over data leaks, mistakes often carry a hefty price tag. New and more modern authentication methods, as well as newer SSL/TLS version support, are regularly added to new PostgreSQL versions.
  • More performance without changing anything on the application level!
    Based on my gut feeling and on some previous testing, typically around 5-15% total runtime improvement can be observed following an upgrade (if not IO-bound). So not too much... but on some releases (parallel query in 9.6, JIT in 11, for example) a possibly game-changing 30-40% could be observed. As a rule, the upgrades only resulted in upsides – over the last 8 years, I've only seen a single case where the new optimizer chose a different plan so that things got slower for a particular query, and it needed to be re-written.
  • New features for developers
    Every year, new SQL standard implementations, non-standard feature additions, and functions and constructs for developers are introduced, in order to reduce the amount of code written - or to achieve something that was previously very hard, or possible only on the application layer.
  • Better DBA / management features
    Better high-availability, backup, monitoring and scaling features for DBAs. The last couple of releases have been a real boon in these matters, for example with logical replication, partitioning and built-in monitoring roles.

And to make it even easier for you - there’s a really cool website (much kudos to Depesz!) where you can directly get a highlighted listing of all the changes between two target Postgres versions! Link here!

Minor Updates

Minor version changes, a.k.a. bugfix releases, happen regularly and can be used as a warm-up here. There is a minimum of one minor release per quarter, if no security-critical issues are found. It is, of course, highly recommended to apply them as soon as they are released.

The good thing about minor releases is that applying them is as fast as restarting your server! I recommend prefixing the restart with a manually issued CHECKPOINT, so that the effective downtime for applications is minimized - “pg_ctl stop / restart” performs its shutdown checkpoint only after active sessions have been kicked off the server, so flushing most of the dirty buffers beforehand shortens that window.
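For illustration, a minimal sketch of such a restart (assuming a systemd-managed instance and passwordless access for the postgres OS user - adapt package, service and cluster names to your distribution):

    # flush most dirty buffers ahead of time so the shutdown checkpoint is quick
    sudo -u postgres psql -c "CHECKPOINT;"

    # restart the service so the freshly installed minor-version binaries are picked up
    sudo systemctl restart postgresql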

Caution! When using distribution-specific Postgres packages, some extra attention is needed. For example, running “apt upgrade postgresql-X” might mean an immediate restart of the running instance once the new binaries have been pulled in! On RedHat / CentOS it’s usually a bit better, as there you need to restart explicitly.
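If you want to control when that restart happens, one option (shown here for Debian / Ubuntu as an illustration) is to put the server package on hold and only upgrade it inside a maintenance window:

    # keep a routine "apt upgrade" from replacing the binaries and restarting the cluster
    sudo apt-mark hold postgresql-12

    # later, inside the maintenance window
    sudo apt-mark unhold postgresql-12
    sudo apt-get install --only-upgrade postgresql-12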

Also, when running a clustered streaming-replication setup where downtime on the primary node is scarce and restarts are to be avoided, my recommendation is to immediately update all replicas to the latest minor version (minor versions don’t need to match, only the major one). That way, when some unplanned server issue forces a failover (reboot, hardware maintenance etc.), the replica you promote will already be on the latest version - you get the update with a sort of “free” downtime.

Let's now move on to the real thing - major upgrade options.

Logical Dump / Restore - the old-fashioned way to upgrade

This is the original way of doing major version upgrades, and it was the only option up until version 9.0, which introduced the binary / in-place option. Logical dumps use the “pg_dumpall” or “pg_dump” / “pg_restore” tools, which basically “reverse engineer” the database state into plain SQL statements. When those statements are executed sequentially on a target DB, they re-create exactly the same state for all user-level objects as on the source cluster / DB. NB! “pg_dumpall” works on the instance level and “pg_dump” on the database level.
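As a rough illustration (host names, database name and job counts are made up), such a flow could look like this:

    # dump global objects (roles, tablespaces) from the old instance
    pg_dumpall -h old-host -U postgres --globals-only > globals.sql

    # dump one database in parallel, using the "directory" format
    pg_dump -h old-host -U postgres -Fd -j 4 -f /backups/appdb.dump appdb

    # restore on the new instance: globals first, then the database, again in parallel
    psql -h new-host -U postgres -f globals.sql
    psql -h new-host -U postgres -c "CREATE DATABASE appdb;"
    pg_restore -h new-host -U postgres -j 4 -d appdb /backups/appdb.dump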

PROS:

  • 100% safe - the original instance is not changed in any way, so testing is quite easy and straightforward, and rolling back is also no problem.
  • Dumps can theoretically be pulled online or from a degraded “read only” instance. The only thing to watch out for: make sure your applications are stopped (or access is limited on the Postgres side) when pulling the “dump”, as it will only contain a snapshot of the data from the exact moment the process was launched. Any rows changed after that will be lost, unless they are picked up by some 3rd party “merge” tool (dbForge, for example).
  • Flexible - there are lots of flags to only dump / restore specific objects or schemas, re-map or remove ownership / access privileges. Data can also be represented with SQL standard INSERT statements so theoretically one could also move between different database systems.

CONS:

  • Slow - although pg_dump / pg_restore on modern PG versions can be parallelized very easily for better performance, longer downtimes might be unavoidable. In practice you start to notice problems around 100 GB, especially when you have a lot of indexes and constraints - this is where most of the restore time is typically burnt, as it’s both CPU and disk intensive.
  • Possibly a “per database” approach is needed, where consistency on the instance level is not guaranteed out of the box (see the “pg_dump --snapshot” parameter for a workaround).

One more recommendation - when dumping out the DB contents, it’s usually best to use the tools from the latest Postgres version. Note that this might not be officially supported if you fall out of the “5 latest releases supported” window. By the way, on the topic of dump / restore and Postgres versions, I’d recommend reading this nice FAQ by our colleague Pavlo. It’s a few years old, but still correct.

Binary In-Place Upgrade

Binary in-place upgrades are quite different from logical ones, as they happen on the filesystem level (i.e. always on the same machine, if no shared storage is in play) and are a multi-step process with some additional complexity. However, things are still quite straightforward - after installing the new binaries and a bit of preparation, it basically boils down to running a single command that typically finishes in 10 to 20 seconds in “--link” mode! That's also why it's the most common upgrade option nowadays: nobody wants extended downtimes.

High-level steps to take (see the documentation for details; a command-line sketch follows the list):

1. Install new major version binaries.
2. Initialize the new instance with new binaries (with the same major settings as on the old instance).
3. Do a test run with “pg_upgrade --check”. FYI - I tend to test first on a completely spare replica or a PITR-restored instance.
4. Stop the old instance.
5. Run “pg_upgrade” with or without file linking. Using hard-linking of files from the old data directory into the new one is the fastest way to upgrade - basically only the system catalogs are dumped and reloaded. Without linking, all data files are copied over and the duration boils down to filesystem speed vs. instance size, with the benefit that the old cluster is not modified.
6. Start the new instance (it is usually also necessary to change its port back to 5432).
7. Run the “analyze” script generated and hinted at by the “pg_upgrade” tool. Performance for complex queries might suffer until it finishes.
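A minimal sketch of steps 3 to 6, with made-up Debian-style paths and versions (run the pg_upgrade commands as the postgres OS user from a directory it can write to; on Debian / Ubuntu the “pg_upgradecluster” wrapper mentioned below handles the distribution-specific layout for you):

    # dry run - check the two clusters for compatibility without changing anything
    /usr/lib/postgresql/12/bin/pg_upgrade \
      --old-bindir=/usr/lib/postgresql/11/bin \
      --new-bindir=/usr/lib/postgresql/12/bin \
      --old-datadir=/var/lib/postgresql/11/main \
      --new-datadir=/var/lib/postgresql/12/main \
      --check

    # stop the old instance, then do the real thing with hard links and parallel jobs
    sudo systemctl stop postgresql@11-main
    /usr/lib/postgresql/12/bin/pg_upgrade \
      --old-bindir=/usr/lib/postgresql/11/bin \
      --new-bindir=/usr/lib/postgresql/12/bin \
      --old-datadir=/var/lib/postgresql/11/main \
      --new-datadir=/var/lib/postgresql/12/main \
      --link --jobs=4

    # start the new instance (after pointing it back to port 5432)
    sudo systemctl start postgresql@12-main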

PROS:

  • Typically very fast. Especially with “pg_upgrade --link --jobs X”.

CONS:

  • More risky: more things can theoretically go wrong (especially in “--link” mode), so some extra testing would be nice. At a minimum, after a successful upgrade (for more important DBs) I do a quick logical dump / restore on the side, heavily parallelized if possible. If that would take too long for large instances, I at least make sure that a dump to /dev/null runs through cleanly (see the sketch after this list). For large instances I also tend to use some home-brewed scripts that dump single tables independently, which avoids the negative effects of a prolonged snapshot - as long as the DB is not one humongous single table (which we see far more often than we should).
  • All extensions and other modules also need to be present for the new version - this could be especially burdensome if you use a lot of 3rd party extensions.
  • No semi-online mode - always real downtime incurred, albeit short.
  • You lose all standby servers if not using “--link” mode! With “--link” there are some ways to get around that problem, though, if an extended downtime window allows it - see here for details.
  • Degraded first startup. The upgrade process currently does not copy over any statistics about data distributions! This includes histograms, most common values and their frequencies, and could mean a considerable performance hit for complex queries before “vacuumdb --all --analyze-in-stages” finishes.
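As a minimal post-upgrade sanity-check sketch covering the dump, extension and statistics points above (database name is made up):

    # make sure all required extensions are also installed for the new major version
    psql -d appdb -c "SELECT extname, extversion FROM pg_extension;"

    # verify that every row can at least be read back without errors; output is discarded
    pg_dump -d appdb > /dev/null && echo "dump OK"

    # rebuild planner statistics in stages so simple queries get usable estimates quickly
    vacuumdb --all --analyze-in-stages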

NB! Also note that some distributions provide wrapper scripts (like “pg_upgradecluster” on Debian-based systems) to assist with in-place upgrades. They might be worth a look.

Logical Replication

Logical Replication (LR) is the newest option for Postgres major version upgrades, available in core from version 10 and up (and unofficially from 9.4 via the pglogical extension). It allows you to transfer data and changes from an old primary server to a newly created, independent master server without extended downtime. LR operates at the database level, decoupling from the binary on-disk format and streaming changes as abstract data objects (somewhat similar to JSON). This enables near real-time syncing, leaving time to test and validate results before finalizing the switch. For more details, check the sample code here.

It looks awesome and relatively simple, doesn’t it? And it mostly is! To spoil the party a bit - there's also a lot of "small print" to read, and an out-of-the-box LR process might not always be possible; it depends a bit on the data model being used. Some tweaking / changing might be needed.
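As a rough sketch of the happy path (host, database and role names are made up; the target instance is assumed to already exist on the new major version, and the schema has to be carried over separately, since LR only transfers data):

    # 0. on the old (source) primary, wal_level = logical must be set (needs a restart)

    # 1. publish the tables on the source database
    psql -h old-host -d appdb -c "CREATE PUBLICATION upgrade_pub FOR ALL TABLES;"

    # 2. copy the schema over to the new instance
    pg_dump -h old-host -d appdb --schema-only | psql -h new-host -d appdb

    # 3. subscribe on the new instance: initial data copy plus streaming of changes
    psql -h new-host -d appdb -c "CREATE SUBSCRIPTION upgrade_sub CONNECTION 'host=old-host dbname=appdb user=replicator' PUBLICATION upgrade_pub;"

    # 4. watch the lag, test the new instance, and switch the application over once caught up
    psql -h old-host -d appdb -c "SELECT * FROM pg_stat_replication;"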

PROS:

  • Flexible. One can for example already make some changes on the target DB - new columns, indexes etc...
  • Safe. One should only watch out for the replication slot on the source DB if the process is canceled or the target server is taken down suddenly - an abandoned slot keeps the source from recycling WAL and can eventually fill up its disk.
  • Minimal downtime required.

CONS:

  • Quite a few steps to take.
  • Could take a long time for big databases.
  • Possibly there will be a need to modify the schema, or at least to set REPLICA IDENTITY on tables that lack a primary key (see the sketch below).
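For illustration - under logical replication, published tables without a primary key reject UPDATE / DELETE until they get a replica identity. One way (database and table names are hypothetical) to find and fix such tables:

    # list ordinary user tables that have neither a primary key nor an explicit replica identity
    psql -d appdb -c "
      SELECT c.oid::regclass
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
      WHERE c.relkind = 'r'
        AND n.nspname NOT IN ('pg_catalog', 'information_schema')
        AND c.relreplident = 'd'
        AND NOT EXISTS (SELECT 1 FROM pg_constraint p
                        WHERE p.conrelid = c.oid AND p.contype = 'p');"

    # heavyweight but simple fix: log the complete old row for every change on such a table
    psql -d appdb -c "ALTER TABLE some_schema.some_table REPLICA IDENTITY FULL;"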

Summary of Pros / Cons for Different Upgrade Methods

Dump / restore
  Pro:
    • Simple
    • Safe
    • Somewhat flexible
  Contra:
    • Slowest method
    • Per-database approach has some pitfalls lurking

Binary in-place
  Pro:
    • Fast / very fast (depending on chosen mode)
    • Old instance not affected in default mode
  Contra:
    • More complex than dump / restore
    • Somewhat risky in “link” mode
    • Possibly loses standby servers
    • Double the disk space required in default mode

Logical Replication
  Pro:
    • Shortest possible downtime
    • Safe, with possibility of thorough "live tests"
    • Very flexible
  Contra:
    • Most complex method
    • Possibly some schema changes needed
    • Not everything is transferred (sequence state, large objects)
    • Possibly "slowish"
    • Always per database

Some General Advice and Parting Words

Besides the obvious pre-upgrade DBA activities such as testing, backups and thoroughly analyzing the release notes, there are other important considerations: it’s beneficial to discuss application compatibility, as well as the compatibility of any UI or monitoring tools, with end users and application teams. All in all, there is a very low risk factor involved in upgrading - the main SQL features are backwards compatible.

I can't, of course, 100% recommend doing "lone ranger" types of covert database migrations, where the database disappears for a minute only to reappear in a shiny new “coat” without anyone raising an eyebrow - although I’ve done my fair share of these, they do come with risks. However, if your schema consists only of a handful of plain tables with the most common data types, with no stored procedures or other PostgreSQL-specific features like LISTEN / NOTIFY involved, and only standard ANSI SQL is performed, then most likely everything will be just fine. The Postgres wire protocol has not changed since v7-point-something, and changing it for basic SQL operations would be such a massive problem for users globally that it has been avoided.

If you're not running an absolutely time-critical 24/7 operation (something like a space shuttle), in my opinion there are no excuses not to upgrade at least once every couple of years. Keep in mind: if you ignore the topic for too long, you’ll stop receiving security patches at some point, and then, when it is most needed, the upgrade process becomes slightly more tricky - the PostgreSQL Global Development Group only supports the releases of the previous five years in its tooling, so if you've waited too long you might need to run "pg_upgrade" twice in a row. There's also some extra risk involved, as you may hit corner-case backward compatibility issues that nobody routinely tests for. In short, better to upgrade sooner than later!

