By Kaarel Moppel - By the time you read this blog post, the new PostgreSQL version will probably already have been officially released to the public for wider usage... but it seems some eager DBA already installed last week's Release Candidate 1 and took it for a spin 😉 The "spin", though, takes 3 days to run with my scripts, which is why I didn't want to wait for the official release.
As this is only an RC and some things could still change, this will be a very brief posting: some synthetic pgbench test numbers from my runs, laid out for you, plus a mini conclusion at the end.
The source code statistics, by the way, again look very similar to last year's v12 release, with a very slight decrease in the number of changes (which, on the other hand, could just be chunked into larger pieces), so one might assume that overall we will get a very stable release.
git diff --shortstat REL_12_4..REL_13_RC1
 3109 files changed, 257376 insertions(+), 198880 deletions(-)

git log --oneline REL_12_4..REL_13_RC1 | wc -l
2216
Test hardware was my good old testing workstation: 4 CPU, 16GB RAM, SATA SSD
OS: A clean install of Ubuntu Server 18.04.5 LTS (Bionic Beaver)
Postgres: v12.4 and v13rc1 self-compiled, all default postgresql.conf parameters except a few “standard” tunings:
shared_buffers=4GB
logging_collector=on
shared_preload_libraries='pg_stat_statements'
checkpoint_completion_target=0.9
synchronous_commit=off
Note, though, that the last parameter, synchronous_commit, is not really a standard setting... but disabling it for testing generally makes sense, as we're not really interested in testing IO syncing capabilities here; we'd rather move forward with our transactions and push through more "transactions per second" to possibly spot some algorithmic differences. Also note that the pgbench "scaling factor" was selected so that the generated dataset would fit mostly into memory, for the same purpose.
All timings measured via ‘pg_stat_statements’, which should give the most accurate information.
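For the curious, here is a minimal sketch of the kind of query that pulls those numbers out of the extension; the exact query is my assumption, not taken from the test scripts, and note that v13 renamed the timing columns (total_time / mean_time / stddev_time became total_exec_time / mean_exec_time / stddev_exec_time):

SELECT query,
       calls,
       mean_exec_time,    -- mean_time on v12.4
       stddev_exec_time   -- stddev_time on v12.4
  FROM pg_stat_statements
 WHERE query ILIKE '%pgbench_accounts%'
 ORDER BY total_exec_time DESC
 LIMIT 5;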
1 transaction = 5 operations (3x UPDATE, 1x INSERT, 1x SELECT); 4h runtime per PG version / scale pair; 2 concurrent clients.
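For context, those five operations are simply the statements of pgbench's built-in "tpcb-like" transaction script, roughly as documented in the pgbench manual (the exact pgbench invocation and flags used for these runs are not reproduced here):

\set aid random(1, 100000 * :scale)
\set bid random(1, 1 * :scale)
\set tid random(1, 10 * :scale)
\set delta random(-5000, 5000)
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
END;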
TPS improvement for scale 200 (Shared buffers): 4662 vs 4660 ~ 0%
TPS improvement for scale 1200: 249.6 vs 247.9 ~ -0.7%
Timing info for the most costly operation: UPDATE pgbench_accounts SET abalance = abalance + $1 WHERE aid = $2
Version | Scale | Mean time (ms) | Change % | Stddev (ms) | Change %
12.4 | 200 (Shared Buffers) | 0.0291 | – | 3.47 | –
13rc1 | 200 (Shared Buffers) | 0.0275 | -5.5 | 2.85 | -17.8
12.4 | 1200 (75% in RAM) | 7.515 | – | 19.38 | –
13rc1 | 1200 (75% in RAM) | 7.562 | +0.6 | 20.93 | +8.0
1 transaction = 1 indexed SELECT; 4h runtime per PG version / scale pair; 4 concurrent clients.
Query: SELECT abalance FROM pgbench_accounts WHERE aid = $1
Version | Scale | Mean time (ms) | Change % | Stddev (ms) | Change %
12.4 | 500 (fits RAM) | 0.0085 | – | 0.0027 | –
13rc1 | 500 (fits RAM) | 0.0087 | +2.6 | 0.0030 | +11.6
12.4 | 1200 (75% in RAM) | 0.0976 | – | 0.1043 | –
13rc1 | 1200 (75% in RAM) | 0.1020 | +4.5 | 0.1093 | +4.8
These SELECT-s are some home-brewed queries that I threw together a few years ago for the purpose of testing out the freshly released v11 in a very similar manner. The queries do some bulkier aggregations of the kind one sees quite often, over a larger subset or all of the data rows.
Scaling factor = 500 (data fits 100% in RAM), clients = 2
/* 1st let's create a copy of the accounts table to be able to test massive JOIN-s */
CREATE TABLE pgbench_accounts_copy AS SELECT * FROM pgbench_accounts;
CREATE UNIQUE INDEX ON pgbench_accounts_copy (aid);

/* Query 1 */
SELECT avg(abalance) FROM pgbench_accounts JOIN pgbench_branches USING (bid) WHERE bid % 2 = 0;

/* Query 2 */
SELECT COUNT(DISTINCT aid) FROM pgbench_accounts;

/* Query 3 */
SELECT count(*) FROM pgbench_accounts JOIN pgbench_accounts_copy USING (aid) WHERE aid % 2 = 0;

/* Query 4 */
SELECT sum(a.abalance) FROM pgbench_accounts a JOIN pgbench_accounts_copy USING (aid) WHERE a.bid % 10 = 0;
Query | Mean time v12.4 (ms) | Mean time v13rc1 (ms) | Change % | Stddev v12.4 (ms) | Stddev v13rc1 (ms) | Change %
Q 1 | 2384.2 | 2118.5 | -11.1 | 314.05 | 365.22 | +16.3 |
Q 2 | 12198 | 14552 | +19.3 | 169.4 | 272.3 | +60.8 |
Q 3 | 17584 | 14036 | -20.2 | 1458.0 | 1636.1 | +12.2 |
Q 4 | 4725.7 | 4527.4 | -4.2 | 1099.8 | 1118.4 | +1.7 |
So what do we think of the test numbers? Well... looking good in general - if we sum up all the percentage "winnings" for the mean operation times, we get a 14% speedup!
By the way, 14% is actually more than I observed in previous years, so I'm pretty excited about this release now. The only thing holding me back from being even more excited is that, on the other hand, the sum of all the percentage standard deviation changes increased quite noticeably, by +97%! This increase mostly comes from our analytical query nr. 2, the "COUNT DISTINCT" one, which could be rewritten to be more efficient (but which most people of course don't do...); still, it seems to hint that there is now a bit more jitter in play with the improved algorithms. Or it might just be my test rig, of course... so I'm waiting to see some other people's results in the near future to see what they get.
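To illustrate the kind of rewrite meant above (my own sketch, not part of the actual test set): pre-deduplicating in a subquery instead of using the DISTINCT aggregate typically gives the planner more freedom, for example to pick a HashAggregate or lean on the unique index on aid.

-- Query 2 as used in the test
SELECT count(DISTINCT aid) FROM pgbench_accounts;

-- an often faster equivalent: deduplicate first, then count
SELECT count(*)
  FROM (SELECT DISTINCT aid FROM pgbench_accounts) AS deduped;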
So in the end - some test items were a bit slower, others faster... and most importantly, there seem to be no grave problems. Anything seriously broken would most likely have been caught by the project's "build farm" already, so I'm not really worried on that front.
And in real life it mostly comes down to how you access your data and how well it fits into shared buffers or RAM - when moving beyond that, we see order-of-magnitude fall-offs, so any 10 or 20% algorithmic improvement is powerless there anyway. But still, the v13 release will be a great one, with the indexing improvements at the forefront - I hope to write about those soon as well. In the meantime, you can all start preparing for that imminent version upgrade, mkay!