By Kaarel Moppel - In recent years there has been quite a lot of buzz about “insert-only” approaches, and some database-like products (Datomic, Apache Samza) have emerged that are inspired by the idea of an immutable data store. In light of cheap storage and powerful hardware, I see the idea as definitely having potential for certain use cases. So why not apply the concept using a PostgreSQL database? There are no inherent limitations there; one just needs to find a suitable use case and design the data model accordingly.
My own interest was more in the performance of the approach compared to the standard “update” way. In theory, the “insert-only” approach should provide some performance benefits, because Postgres implements an UPDATE as effectively a DELETE plus an INSERT, together with, in the worst case, an update of all index entries.
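To see this row-versioning in action, one can look at the system column ctid, which shows the physical location of the current row version and changes whenever a new row version is created; the locations shown below are only illustrative:

-- physical location of the current row version
SELECT ctid, abalance FROM pgbench_accounts WHERE aid = 1;   -- e.g. (0,1)

UPDATE pgbench_accounts SET abalance = abalance + 10 WHERE aid = 1;

-- the row now lives elsewhere; the old version lingers as bloat until VACUUM
SELECT ctid, abalance FROM pgbench_accounts WHERE aid = 1;   -- e.g. (1639,54)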
When doing only INSERTs instead, I would reckon that altogether fewer IO operations would be needed, bloat would be minimized (meaning no AUTOVACUUM hiccups), writes would be mostly sequential (index pages could still split), and a lot fewer full page images would need to be written into the XLOG. Plus, on the logical level one would get the full history of changes, making audit tables unnecessary.
Of course there would also be downsides: more disk space and additional indexing would be needed, queries could easily get unwieldy (for joins etc.), foreign key usage would be limited, and for really big amounts of data one might at some point still need custom batch jobs for cleaning up old row versions, to name a few considerations.
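To give an idea of the query side, reading the current state out of an insert-only table typically means picking the latest row version per key. A minimal sketch, assuming an added change-timestamp column named mtime (not part of the standard pgbench schema), could look like this:

-- current balance per account = the most recently inserted row version
SELECT DISTINCT ON (aid) aid, abalance
FROM pgbench_accounts
ORDER BY aid, mtime DESC;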
So after some thinking about a good but simple test case for my theoretical hunch about possible performance benefits, I decided to model my “insert-only” schema to match the short version (using the --skip-some-updates flag) of the default “pgbench” TPC-B scenario. The short version leaves out the balance updates on the smaller “pgbench_branches” and “pgbench_tellers” tables (where locking could occur, thus not maxing out IO), meaning that my customized “insert-only” script would become very simple: it only does inserts into “pgbench_accounts” and also leaves out the insert on “pgbench_history”, as we would already have the full history in the accounts table itself anyway.
Now on to the testing itself. Files for converting the schema to insert-only mode and the custom “pgbench” test script can be found here.
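For orientation, the two files boil down to something like the following. This is only a rough sketch of the idea, not the exact contents of the linked files; the constraint name, the added mtime column and the pgbench variables are assumptions:

-- insert_only_data_model_changes.sql (sketch)
-- drop the primary key so the same aid can be inserted many times,
-- keep aid indexed so the latest row version can still be found quickly,
-- and add a change-timestamp column
ALTER TABLE pgbench_accounts DROP CONSTRAINT pgbench_accounts_pkey;
CREATE INDEX ON pgbench_accounts (aid);
ALTER TABLE pgbench_accounts ADD COLUMN mtime timestamptz NOT NULL DEFAULT now();

-- insert_only_tx_pgbench.sql (sketch of a pgbench custom script)
-- on pgbench versions before 9.6 the variables would be set with \setrandom instead
\set aid random(1, 100000 * :scale)
\set delta random(-5000, 5000)
INSERT INTO pgbench_accounts (aid, abalance) VALUES (:aid, :delta);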
# Initialize the normal "pgbench" schema with a small 1.3GB dataset (fits in RAM)
pgbench -i -s 100 &>/dev/null

# run the short version for 5min
pgbench -T 300 --skip-some-updates -c4 -j4 --protocol=prepared

starting vacuum...end.
transaction type: Update only pgbench_accounts
scaling factor: 100
query mode: prepared
number of clients: 4
number of threads: 4
duration: 300 s
number of transactions actually processed: 1555859
latency average: 0.771 ms
tps = 5186.167970 (including connections establishing)
tps = 5186.221344 (excluding connections establishing)

# Initialize the insert-only schema with a small 1.3GB dataset
pgbench -i -s 100 &>/dev/null

# convert the schema to insert-only mode
psql -f insert_only_data_model_changes.sql

# run the custom version for 5min
pgbench -s 100 -T 300 -c4 -j4 --protocol=prepared -f insert_only_tx_pgbench.sql

starting vacuum...end.
transaction type: Custom query
scaling factor: 100
query mode: prepared
number of clients: 4
number of threads: 4
duration: 300 s
number of transactions actually processed: 2228239
latency average: 0.539 ms
tps = 7427.369377 (including connections establishing)
tps = 7427.546212 (excluding connections establishing)
Looking at the TPS numbers for both runs, we see that the performance benefit expressed in TPS was +43% here! Behind the scenes I also measured the amount of WAL generated, and there the difference was even bigger at +135%.
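For reference, the WAL volume of such a run can be measured by noting the current write-ahead log position before and after the benchmark and taking the difference. A small sketch for psql (on PostgreSQL versions before 10 the functions are named pg_current_xlog_location() and pg_xlog_location_diff() instead):

-- before the pgbench run
SELECT pg_current_wal_lsn() AS start_lsn \gset

-- ... run pgbench ...

-- after the run: amount of WAL written in between
SELECT pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), :'start_lsn'));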
To sum it up - although this was a very simplistic example - “insert-only” data modelling provided a nice TPS boost for our test scenario, while providing exactly the same information (and even more: the full history). So in short it could be worth investigating when one has to deal with sudden peaks, for example, as it should give you more predictable IO when applied correctly. A nice thought exercise in any case.
You may also be interested in my post about insert-only data modelling to smooth peaks on slow disks.
Hi there,
can you show details about the scripts that you used? There's no link for the "custom pgbench".
Thanks.
Thanks for the hint! Indeed the link went missing somehow. I've corrected the link in the post now.
Did you adjust the FILLFACTOR on the tables since you no longer need to leave space for updates?
Nope, no need, as for tables it is 100 by default.
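For completeness, the setting could of course still be declared explicitly; a small example (tables default to a fillfactor of 100, B-tree indexes to 90):

-- packs heap pages completely full, which suits append-only tables
ALTER TABLE pgbench_accounts SET (fillfactor = 100);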