With the launch of Postgres 9.5, a new setting called "wal_compression" was introduced. It decreases the I/O load at the cost of some extra CPU, which is a desirable compromise given the constraints of typical modern hardware. But somehow the feature didn't get the publicity it deserved - it wasn't even mentioned in the "what's new" roundups, and it is turned off by default. So I thought I'd try to broadcast the message again and provide a simple test scenario to highlight the benefits.
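For reference, here is a minimal sketch of flipping the setting on a running instance (assuming superuser rights); a configuration reload is enough, no restart is needed:

ALTER SYSTEM SET wal_compression = on;   -- or edit postgresql.conf directly
SELECT pg_reload_conf();                 -- pick up the change without a restart
SHOW wal_compression;                    -- verify the active value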
First, we'll run the small script you can find below (for the "do try this at home" folks out there) twice: once with wal_compression "on" and once with it "off". The test scenario itself is pretty simple - we rely on our good old friend pgbench again. First we initialize the pgbench schema with a "scale" value of 100, translating into 10 million rows in the main "pgbench_accounts" table and about 1.3 GB of "on disk" size. Then we do a short benchmark run with 200k transactions (100k per client, two clients). The default pgbench transaction consists mostly of random updates over the whole primary key space.
Here we'll only be interested in the amount of WAL written though, and we take a note of it after every step. One thing to notice is that for the WAL files to accumulate we need to set wal_keep_segments (min_wal_size would also do the trick) to a high value, so that no WAL files get recycled and we can measure the write activity directly off the filesystem.
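As a side note, instead of watching the pg_xlog directory you could also note the WAL position before and after each step and diff the two values from within the database. A small sketch - the LSN literal below is just a placeholder for the value you noted down before the workload:

SELECT pg_current_xlog_location();                                      -- note this before the test step
SELECT pg_xlog_location_diff(pg_current_xlog_location(), '0/3000000');  -- bytes of WAL written since the noted position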
Here is our test script for WAL compression.
FOLDER=wal_compression
WAL_COMPRESSION=off
PORT=5433
SCALE=100
TX=100000    # per client, we use 2 clients

echo 'running initdb...'
initdb -D $FOLDER &>/dev/null

# settings for our test instance - wal_keep_segments just needs to be high
# enough that no WAL files get recycled during the run
OPTS="-c wal_compression=$WAL_COMPRESSION -c wal_keep_segments=1000 -c port=$PORT"

pg_ctl -D $FOLDER -o "$OPTS" -w start &>/dev/null

echo 'size after initdb'
du -b -s $FOLDER/pg_xlog

pgbench -p $PORT -i -s $SCALE &>/dev/null
echo 'size after testdata creation'
du -b -s $FOLDER/pg_xlog

pgbench -p $PORT -c 2 -j 2 -t $TX -M prepared &>/dev/null
echo "size after $TX transactions"
du -b -s $FOLDER/pg_xlog

pg_ctl -D $FOLDER stop
#rm -rf $FOLDER
After going through 2 runs I got the following numbers:

1st run, wal_compression = on
size after initdb
16785408	wal_compression2/pg_xlog/
size after testdata creation
1207967744	wal_compression2/pg_xlog
size after 100000 transactions
1509957632	wal_compression2/pg_xlog
2nd run, wal_compression = off
size after initdb
16785408	wal_compression3/pg_xlog/
size after testdata creation
1308631040	wal_compression3/pg_xlog
size after 100000 transactions
2382381056	wal_compression3/pg_xlog
Now let's do some calculations...
krl@postgres=# select (1308631040 - 1207967744 - 2*16785408) / (1207967744 - 16785408)::float;
      ?column?
────────────────────
 0.0563242737676056
(1 row)

krl@postgres=# with
  q_no_comp_diff as (select 2382381056 - 1308631040 as value),
  q_comp_diff as (select 1509957632 - 1207967744 as value)
select (q_no_comp_diff.value - q_comp_diff.value) / q_comp_diff.value::float
from q_no_comp_diff, q_comp_diff;
     ?column?
──────────────────
 2.55558268229167
(1 row)
What we can see is that the size difference during the initialization (sequential INSERTs) of the test schema yields a moderate ~6%, but random updates exhibit a whopping ~255% difference in the amount of WAL written! This makes sense, as wal_compression works by compressing the full-page images that are written into WAL the first time a page is modified after a checkpoint - and random updates touch a lot of different pages.
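If you want to see where the savings would come from on your own system, one option (a sketch only - the WAL segment name below is just a placeholder, yours will differ) is to let pg_xlogdump summarize the WAL records, which also reports how many bytes went into full-page images:

pg_xlogdump --stats wal_compression3/pg_xlog/000000010000000000000010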
Ok, conclusion time. As we used a synthetic dataset, take the exact numbers with a grain of salt, but we can say that if you're doing mostly random updates over the whole dataset, enabling wal_compression could bring you a noticeable improvement in the form of: a replication lag decrease when the network between replicas is slow; disk space savings for your WAL archive when doing PITR; or, most importantly, a general decrease of server disk I/O contention. You should definitely give this setting a try! And the best part is that it's as easy as flipping the flag in your config and reloading the configuration - a restart isn't even needed.
In case you need any assistance, please feel free to contact us.
Hello,
I ran many tests around this parameter on Ubuntu, with INSERTs and UPDATEs of 1,500,000 rows. Here are my observations:
The parameter can be changed dynamically.
In a replicated environment, wal_compression needs to be enabled on all servers (master and slaves) to get the benefits.
INSERTs were 22% faster and UPDATEs were 31% faster with wal_compression enabled. I ran this test 4 times, and every time the response was faster.
There is little positive impact on network bandwidth utilization due to wal_compression.
WAL space consumption was around 5% less than normal. I didn't get a big benefit in space, however I can see an overall improvement in the response time of INSERT and UPDATE.
To reduce network bandwidth you can apply SSL compression. Sadly this has started to be disabled by default, and has even been removed in later versions (i.e. distros build the OpenSSL package with no-zlib rather than zlib-dynamic). You have to build the OpenSSL package yourself to enable it again - as well as patch the code in 1.1.0, because PostgreSQL doesn't do what it says and enable SSL compression by default; it merely doesn't clear SSL compression by default.
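For completeness, a hypothetical example of requesting SSL compression via libpq and checking whether it was actually negotiated (the host name is made up, and on most modern OpenSSL builds the compression column will simply show false, for the reasons described above):

psql "host=standby.example.com dbname=postgres sslmode=require sslcompression=1" \
     -c "SELECT ssl, compression FROM pg_stat_ssl WHERE pid = pg_backend_pid();"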