
Exposing PostgreSQL server logs to users via SQL

11.2016

By Kaarel Moppel - During the last training session, a curious participant asked if there's a way to easily expose the PostgreSQL database logs to users - and indeed, there's a pretty neat way to get SQL-based access! So this time I'll give you a quick demo of it. The approach, which takes advantage of the file_fdw (File Foreign Data Wrapper) extension, is actually mentioned in the official docs as an "obvious use case", but is still not too well known 😉 I must say that this approach is best suited for development setups, as under normal circumstances you would most probably want to keep a lid on your logs.

Setup steps

1. First you need to change the server configuration (postgresql.conf) and enable CSV logging, as described in detail here. This might add some overhead on busy systems compared to plain stderr logging, since every "column" of information Postgres has about the logged event gets written out - especially problematic with log_statement = 'all'.
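
Something along the following lines should do the trick (shown here via ALTER SYSTEM for convenience - the same settings can of course go straight into postgresql.conf; the weekday-based filename is just one possible choice, picked because it ties in nicely with step 4 below):

    -- enable the logging collector and CSV output (superuser required);
    -- logging_collector needs a server restart to take effect
    ALTER SYSTEM SET logging_collector = on;
    ALTER SYSTEM SET log_destination = 'csvlog';
    -- one file per weekday, overwritten after a week
    ALTER SYSTEM SET log_filename = 'postgresql-%a';
    ALTER SYSTEM SET log_truncate_on_rotation = on;
    ALTER SYSTEM SET log_rotation_age = '1d';
    SELECT pg_reload_conf();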

2. Install the "file_fdw" extension (the "contrib" package is needed) and create a foreign server plus a foreign table, linking to the log file name configured above.
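
Roughly along the lines of the file_fdw example in the official documentation - note that the column list below matches the CSV log format of the 9.x servers this post was written for (newer major versions add a few more columns), and the file path is just a placeholder for whatever was configured in step 1:

    CREATE EXTENSION file_fdw;

    CREATE SERVER pglog FOREIGN DATA WRAPPER file_fdw;

    CREATE FOREIGN TABLE pglog (
        log_time               timestamp(3) with time zone,
        user_name              text,
        database_name          text,
        process_id             integer,
        connection_from        text,
        session_id             text,
        session_line_num       bigint,
        command_tag            text,
        session_start_time     timestamp with time zone,
        virtual_transaction_id text,
        transaction_id         bigint,
        error_severity         text,
        sql_state_code         text,
        message                text,
        detail                 text,
        hint                   text,
        internal_query         text,
        internal_query_pos     integer,
        context                text,
        query                  text,
        query_pos              integer,
        location               text,
        application_name       text
    ) SERVER pglog
    OPTIONS ( filename '/var/lib/postgresql/data/pg_log/postgresql-Mon.csv', format 'csv' );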

3. Grant access as needed - or, if you want every user to see only his/her own entries, bring views into play, with security_barrier set when security matters. On 9.5+ servers one could even use the flashy Row Level Security mechanisms to set up some more obscure row-visibility rules. The downside is that you then need to set up a parent-child relationship, as RLS cannot work with the "virtual" foreign table directly.
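
A minimal sketch of the views-based variant (the view name and the monitoring role are just made-up examples):

    -- simple case: one role that may see everything
    GRANT SELECT ON pglog TO log_reader;

    -- or: every user sees only his/her own entries
    CREATE VIEW my_log_entries WITH (security_barrier) AS
        SELECT * FROM pglog WHERE user_name = current_user;

    GRANT SELECT ON my_log_entries TO PUBLIC;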

4. One additional idea: a handy way to expose, and physically keep around (with automatic truncation), only the last 7 days of logs is to define 7 child tables under one master table - one per weekday. The process would then look something like this:
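
Only a sketch, of course - it assumes the weekday-based log_filename from step 1 (so each CSV file gets truncated and re-written after a week) and reuses the pglog column list from step 2:

    -- a plain parent table with the same columns as the foreign table from step 2
    -- (the standalone pglog table could then serve purely as a column template)
    CREATE TABLE pglog_7d (LIKE pglog);

    -- one file_fdw child per weekday (repeat accordingly for Tue .. Sun)
    CREATE FOREIGN TABLE pglog_mon ()
        INHERITS (pglog_7d)
        SERVER pglog
        OPTIONS ( filename '/var/lib/postgresql/data/pg_log/postgresql-Mon.csv', format 'csv' );

    -- querying the parent now covers (at most) the last 7 days of logs
    SELECT error_severity, count(*) FROM pglog_7d GROUP BY 1;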

Not a "one size fits all" solution

The only problem with the approach I've laid out is that it might not be a perfect fit if you need to run monitoring queries on the logs relatively frequently, since every query has to read through all of the logfiles from scratch every single time.

We can see this via EXPLAIN:
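
Illustrative output only (costs and row estimates elided) - the important part is that there's always a Foreign Scan node, i.e. the whole CSV file gets parsed on every execution, no matter how selective the filter is:

    EXPLAIN SELECT * FROM pglog WHERE error_severity = 'ERROR';

                                  QUERY PLAN
    ---------------------------------------------------------------------------
     Foreign Scan on pglog  (cost=... rows=... width=...)
       Filter: (error_severity = 'ERROR'::text)
       Foreign File: /var/lib/postgresql/data/pg_log/postgresql-Mon.csv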

In such cases, a typical approach would be to write a simple log-importing cronjob (in Python, for example) that scans and parses the CSV logfiles and inserts the entries into an actual table (typically on a dedicated logging database), where the "log_time" column can be indexed for better performance. Another direction (if you're not too worried about privacy) would be to use a 3rd-party SaaS provider like Loggly or Scalyr, which offer means for shipping your logs to them.
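
A rough sketch of the import-table side of that idea (the table name is made up; the actual parsing and inserting would live in the external job):

    -- a real table with the same structure as the foreign log table,
    -- so that the interesting columns can be indexed
    CREATE TABLE imported_pglog (LIKE pglog);

    CREATE INDEX ON imported_pglog (log_time);

    -- the cronjob would then parse new CSV entries and insert them, e.g. via
    -- INSERT INTO imported_pglog VALUES (...) or COPY imported_pglog FROM STDIN CSV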

4 responses to “Exposing PostgreSQL server logs to users via SQL”

    • Glad to hear you found it useful! Btw, there's one more interesting non-standard option available: with an extension one can hook into the "log message emitted" event and send the info to some central place without parsing, here for example to Redis (and then to Elasticsearch): https://github.com/2ndquadrant-it/redislog

      • Ahh interesting. Would you recommend this as a production solution? The "without parsing" is a loss though. I like querying for specific error messages and getting their related data. I see "csv support" is on their to do list as of writing this.
