
More on Postgres trigger performance

06.2018

By Kaarel Moppel - In my last post I described what kind of performance degradation to expect from simple PL/pgSQL triggers that inspect or modify the incoming row data. The conclusion for the most common "audit fields" type of use case was that we shouldn't worry about it too much and should just create those triggers. But in which use cases would it make sense to start worrying a bit?

So, to get more insights, I conjured up some more complex trigger use cases and again measured transaction latencies on them over an extended period of time. Please read on for some extra info on the performed tests, or just jump to the concluding results table at the end of the article.

Default pgbench vs audit triggers for all updated tables

This was the initial test that I ran for the original blog post - default pgbench transactions, with the schema slightly modified to include 2 auditing columns on all tables being updated, doing 3 updates, 1 select and 1 insert (see here for what the default transaction looks like) vs PL/pgSQL audit triggers on all 3 tables getting updates. The triggers just set the last modification timestamp to the current time and the username to the current user, if not already specified in the incoming row.
Results: 1.173 ms vs 1.178 ms, i.e. <1% penalty for the version with triggers.
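For illustration, a minimal sketch of such an audit trigger could look roughly like the one below - the column names last_modified_on / last_modified_by and the trigger/function names are assumptions here, not necessarily the exact schema used in the test:

    CREATE OR REPLACE FUNCTION set_audit_fields() RETURNS trigger AS
    $$
    BEGIN
        -- fill the audit columns only if the incoming row left them empty
        IF NEW.last_modified_on IS NULL THEN
            NEW.last_modified_on := now();
        END IF;
        IF NEW.last_modified_by IS NULL THEN
            NEW.last_modified_by := current_user;
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    -- attach it to one of the updated tables (repeated for all 3 tables in the test)
    CREATE TRIGGER pgbench_accounts_audit
        BEFORE UPDATE ON pgbench_accounts
        FOR EACH ROW EXECUTE PROCEDURE set_audit_fields();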

Single row update use case

With multi-statement transactions a lot of time is actually spent on communication over the network. To get rid of that, the next test consisted of just a single update on the pgbench_accounts table (again with 2 audit columns added to the schema), and then again the same with a PL/pgSQL auditing trigger enabled that sets the modification timestamp and username if left empty.
Results: 0.390 ms vs 0.405 ms, ~4% penalty for the trigger version. Already a bit visible, but still quite dismissible, I believe.
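The single-update workload itself can be approximated with a small custom pgbench script, run via "pgbench -f script.sql -s <scale>" (the -s is needed so that :scale is set correctly for custom scripts). The exact script used for the measurements may of course differ slightly; something along these lines:

    -- single-row update on pgbench_accounts, one transaction per statement
    \set aid random(1, 100000 * :scale)
    \set delta random(-5000, 5000)
    UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;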

Single row update with a trigger written in C

But what if the above 4% performance degradation is not acceptable, and it adds up because we are actually touching a dozen tables (ca 60% hit)? Can we somehow shave off some microseconds?
Well, one could try to write the triggers in Postgres' native language - "C"! As with optimizing normal functions, it should also help with triggers. But ugh, "C" you think... sounds daunting? Well... sure, it's not going to be all fun and games, but there are actually quite a lot of examples included in the Postgres source code to get you going - see here for example.

So after some tinkering around (I'm more of a Python / Go guy) I arrived at these numbers: 0.405 ms for the PL/pgSQL trigger vs 0.401 ms for the "C" version, meaning only a ~1% speedup! So in short - absolutely not worth the time for such simple trigger functionality. But why so little speedup against an interpreted PL language, you might wonder? Yes, PL/pgSQL is kind of an interpreted language, but with the nice property that execution plans and the resulting prepared statements actually stay cached within one session. So if we used pgbench in "re-connect" mode, I'm pretty sure we'd see some very different numbers.

See here for my full “C” code.
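Note that even with a trigger written in "C", the wiring on the SQL side stays the same - one just points the trigger function at the compiled shared library. Roughly like below; the library and symbol names are assumptions here, not the ones from my actual test code:

    -- assumes the compiled trigger lives in audit_trigger.so under $libdir
    CREATE FUNCTION set_audit_fields_c() RETURNS trigger
        AS 'audit_trigger', 'set_audit_fields_c'
        LANGUAGE C;

    CREATE TRIGGER pgbench_accounts_audit_c
        BEFORE UPDATE ON pgbench_accounts
        FOR EACH ROW EXECUTE PROCEDURE set_audit_fields_c();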

Single row update with a trigger doing “logging insert”

Here things actually get a bit less comparable, as we're adding some new data which is not there in the "un-triggered" version. Basically, the trigger does the same as the insert portion (into pgbench_history) of the default pgbench transaction. Important to note though - although we're seeing some slowdown, it's most probably still faster than doing that insert from the user transaction, as we spare a couple of network bytes plus the parsing (in our default pgbench case, statements are always re-parsed from text, whereas PL/pgSQL code is parsed only once - think "prepared statements"). By the way, to test how pgbench works with prepared statements (used mostly to test max IO throughput), set the "protocol" parameter to "prepared".
Results: 0.390 ms vs 0.436 ms, ~12% penalty. Not too bad at all, given that we double the amount of data written!
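A rough sketch of such a "logging insert" trigger, mimicking the pgbench_history insert of the default transaction, could look like the block below - the delta calculation and the NULL teller id are assumptions made for illustration:

    CREATE OR REPLACE FUNCTION log_account_change() RETURNS trigger AS
    $$
    BEGIN
        -- mirror the history insert normally done by the pgbench transaction
        INSERT INTO pgbench_history (tid, bid, aid, delta, mtime)
            VALUES (NULL, NEW.bid, NEW.aid, NEW.abalance - OLD.abalance, now());
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER pgbench_accounts_log
        AFTER UPDATE ON pgbench_accounts
        FOR EACH ROW EXECUTE PROCEDURE log_account_change();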

Default pgbench vs 3 “logging insert” triggers

Here we basically double the amount of data written - all updated tables get a logging entry (including pgbench_accounts, which already gets an insert as part of the normal transaction). Results: 1.173 ms vs 1.285 ms, ~10% penalty. Very tolerable penalties again - we're almost doubling the dataset here and only paying a fraction of the price! This again shows that the communication latency and transaction mechanics, together with the costly but essential fsync during commit, have more influence than a bit of extra data itself (given we don't have tons of indexes on the data, of course). For reference - the full test script can be found here if you want to try it out yourself.

Summary table

Use Case | Latencies (ms) | Penalty per TX (%)
Pgbench default vs with audit triggers for all 3 updated tables | 1.173 vs 1.178 | 0.4%
Single table update (pgbench_accounts) vs with 1 audit trigger | 0.390 vs 0.405 | 3.9%
Single table update (pgbench_accounts) vs with 1 audit trigger written in "C" | 0.390 vs 0.401 | 2.8%
Single table update vs with 1 "insert logging" trigger | 0.390 vs 0.436 | 11.8%
Pgbench default vs with 3 "insert logging" triggers on updated tables | 1.173 vs 1.285 | 9.6%

Bonus track - trigger trivia!

Did you know that in Postgres one can also write DDL triggers, so that you can capture/reject/log structural changes for all kinds of database objects? The most prominent use case might be checking for full table re-writes during business hours.
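As a quick sketch of that idea (the function/trigger names and the time window are made up), a "table_rewrite" event trigger could look something like this:

    CREATE OR REPLACE FUNCTION no_rewrites_during_business_hours() RETURNS event_trigger AS
    $$
    BEGIN
        -- reject any full table rewrite between 09:00 and 17:59
        IF extract(hour FROM now()) BETWEEN 9 AND 17 THEN
            RAISE EXCEPTION 'table rewrite of % not allowed during business hours',
                pg_event_trigger_table_rewrite_oid()::regclass;
        END IF;
    END;
    $$ LANGUAGE plpgsql;

    CREATE EVENT TRIGGER block_rewrites ON table_rewrite
        EXECUTE PROCEDURE no_rewrites_during_business_hours();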

  • Also there are "statement level" triggers that are executed only once per SQL statement. They're actually the default if you don't specify the level. And in Postgres 10 they were extended with the "transition tables" feature, allowing you to inspect all rows changed by the statement to possibly do some summary aggregations or validation (see the small sketch after this list).
  • When you have many triggers on a table, the execution order is alphabetical by trigger name! Additionally, in the case of BEFORE and INSTEAD OF triggers, the possibly-modified row returned by each trigger becomes the input to the next trigger.
  • Row-level BEFORE triggers are much "cheaper" than AFTER triggers when updating a lot of rows, as they fire immediately, whereas AFTER triggers fire at the end of the statement, in which case Postgres needs to temporarily store the row state information. The situation can usually be alleviated, though, with some sane WHEN conditions in the trigger declarations.
  • And yes, it's possible to create, for example, an insert trigger that inserts again into the same table that caused the trigger to fire :) Won't there then be an infinite loop, eating up all your disk space? Yes, there would be... if max_stack_depth didn't kick in 😉 But of course I'd advise you to always keep triggers as simple as possible.
  • For writing triggers you're not actually tied to the most popular trigger language, PL/pgSQL, or the abovementioned "C" - at least PL/Python and PL/Perl also support triggers, and there might be more.
  • Postgres 11 will include support for triggers on partitioned tables, allowing you to declare them only once! Currently one has to define them separately for all sub-partitions.
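Since the transition tables feature mentioned above might be the least known item here, a small sketch of a statement-level trigger using it (table, trigger and function names are made up for illustration):

    CREATE OR REPLACE FUNCTION count_updated_rows() RETURNS trigger AS
    $$
    BEGIN
        -- the transition table "new_rows" holds all rows changed by the statement
        RAISE NOTICE 'statement updated % rows', (SELECT count(*) FROM new_rows);
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER pgbench_accounts_stmt_audit
        AFTER UPDATE ON pgbench_accounts
        REFERENCING NEW TABLE AS new_rows
        FOR EACH STATEMENT EXECUTE PROCEDURE count_updated_rows();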
