
3 ways to detect slow queries in PostgreSQL

08.2018

(Last updated 18.01.2023) When digging into PostgreSQL performance, it is always good to know which options one has to spot performance problems, and to figure out what is really happening on a server. Finding slow queries in PostgreSQL and pinpointing performance weak spots is therefore exactly what this post is all about.

There are many ways to approach performance problems. However, three methods have proven to be really useful to quickly assess a problem. Here are my top three suggestions to handle bad performance:

  • Make use of the slow query log
  • Check execution plans with auto_explain
  • Rely on aggregate information in pg_stat_statements

 


Each method has its own advantages and disadvantages, which will be discussed in this document.

Making use of the PostgreSQL slow query log - method 1

A more traditional way to attack slow queries is to make use of PostgreSQL's slow query log. The idea is: If a query takes longer than a certain amount of time, a line will be sent to the log. This way, slow queries can easily be spotted so that developers and administrators can quickly react and know where to look.

Turn on the slow query log

In a default configuration the slow query log is not active. Therefore it is necessary to turn it on. You have various choices: if you want to turn the slow query log on globally, you can change postgresql.conf:
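A minimal sketch of the relevant postgresql.conf setting (the 5000 millisecond threshold is just an example value; pick whatever suits your workload):

```
log_min_duration_statement = 5000
```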

If you set log_min_duration_statement in postgresql.conf to 5000, PostgreSQL will consider queries which take longer than 5 seconds to be slow queries and send them to the logfile. If you change this line in postgresql.conf there is no need for a server restart. A “reload” will be enough:
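The reload can be triggered directly from SQL:

```sql
SELECT pg_reload_conf();
```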

You can do this using an init script or simply by calling the pg_reload_conf() function in SQL.

If you change postgresql.conf, the change will be done for the entire instance, which might be too much. In many cases you want to be a lot more precise.

Therefore it can make sense to make the change only for a certain user or for a certain database:
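A sketch of how to scope the setting to one database (the database name "test" is made up for the example):

```sql
ALTER DATABASE test SET log_min_duration_statement = 5000;
```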

ALTER DATABASE allows you to change the configuration parameter for a single database.

Let's reconnect and run a slow query:
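pg_sleep is a convenient way to simulate a slow statement:

```sql
SELECT pg_sleep(10);
```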

In my example I use pg_sleep to just make the system wait for 10 seconds. When we inspect the logfile, we already see the desired entry:
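The entry will look roughly like this (the exact duration and line prefix depend on your log settings):

```
LOG:  duration: 10009.370 ms  statement: SELECT pg_sleep(10);
```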

Now you can take the statement and analyze why it is slow. A good way to do that is to run “explain analyze”, which will run the statement and provide you with an execution plan.
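For example, substituting whichever statement showed up in your log:

```sql
EXPLAIN ANALYZE SELECT pg_sleep(10);
```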

Advantages and disadvantages of the slow query log

The advantage of the slow query log is that you can instantly inspect a slow query. Whenever something is slow, you can respond instantly to any individual query which exceeds the desired threshold. However, the strength of this approach is also its main weakness. The slow query log will track single queries. But what if bad performance is caused by a ton of not quite so slow queries? We can all agree that 10 seconds can be seen as an expensive query. But what if we are running 1 million queries which take 500 milliseconds each?

All those queries will never show up in the slow query log because they are still considered to be “fast”. What you might find instead are backups, CREATE INDEX runs, bulk loads and so on. You might never find the root cause if you only rely on the slow query log. The purpose of the slow query log is therefore to track down individual slow statements.

Checking unstable execution plans - method 2 to fix slow queries in PostgreSQL

The same applies to our next method. Sometimes your database is just fine, but once in a while a query goes crazy. The goal is now to find those queries and fix them. One way to do that is to make use of the auto_explain module.

The idea is similar to what the slow query log does: Whenever something is slow, create log entries. In case of auto_explain you will find the complete execution plan in the logfile - not just the query. Why does it matter?

Consider the following example:
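A possible setup (table, column and index names are invented for this demo):

```sql
CREATE TABLE t_demo (id serial8, name text);
INSERT INTO t_demo (name)
    SELECT 'dummy' FROM generate_series(1, 10000000);
CREATE INDEX idx_id ON t_demo (id);
ANALYZE t_demo;
```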

The table I have just created contains 10 million rows. In addition to that an index has been defined.

Let's take a look at two almost identical queries:
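For a large table with an indexed id column (t_demo is a made-up name), the pair could look like this:

```sql
SELECT * FROM t_demo WHERE id < 10;   -- a handful of rows: index scan
SELECT * FROM t_demo WHERE id > 10;   -- almost all rows: sequential scan
```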

The queries are basically the same, but PostgreSQL will use totally different execution plans. The first query will only fetch a handful of rows and therefore go for an index scan. The second query will fetch all the data and therefore prefer a sequential scan. Although the queries appear to be similar, the runtime will be totally different. The first query will execute in a millisecond or so while the second query might very well take up to half a second or even a second (depending on hardware, load, caching and all that). The trouble now is: A million queries might be fast because the parameters are suitable - however, in some rare cases somebody might want something which leads to a bad plan or simply returns a lot of data. What can you do? Use auto_explain.

Spotting an individual query execution which takes too long is exactly what auto_explain is made for.

This is the idea: If a query exceeds a certain threshold, PostgreSQL can send the plan to the logfile for later inspection.

Here's an example:
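A minimal session-level setup (the 500 ms threshold is just an example; log_analyze makes auto_explain record actual runtimes, not just the plan):

```sql
LOAD 'auto_explain';
SET auto_explain.log_min_duration TO 500;
SET auto_explain.log_analyze TO true;
```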

Using LOAD

The LOAD command will load the auto_explain module into a database connection. For the demo we can easily do that. In a production system you would use postgresql.conf or ALTER DATABASE / ALTER USER to load the module. If you want to make the change in postgresql.conf, consider adding the following line to the config file:
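```
session_preload_libraries = 'auto_explain'
```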

session_preload_libraries will ensure that the module is loaded into every database connection by default. There is no need for the LOAD command anymore. Once the change has been made to the configuration (don't forget to call pg_reload_conf()), you can try to run the following query:
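Any statement exceeding the 500 ms threshold qualifies; for example, an aggregate over a large table (t_demo is a made-up name for a 10-million-row demo table):

```sql
SELECT count(*) FROM t_demo GROUP BY id % 2;
```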

The query will need more than 500ms and therefore show up in the logfile as expected:
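Abbreviated and illustrative only (the duration, query text and plan nodes depend entirely on your data and hardware), the entry looks something like this:

```
LOG:  duration: 4281.332 ms  plan:
  Query Text: SELECT count(*) FROM t_demo GROUP BY id % 2;
  HashAggregate  (cost=... rows=... width=...) (actual time=... rows=2 loops=1)
    ...
```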

As you can see, a full “explain analyze” will be sent to the logfile.

Advantages and disadvantages of checking unstable execution plans

The advantage of this approach is that you can have a deep look at certain slow queries and see when a query decides on a bad plan. However, it's still hard to gather overall information, because there might be millions of queries running on your system.

Checking pg_stat_statements - the 3rd way to fix slow queries in PostgreSQL

The third method is to use pg_stat_statements. The idea behind pg_stat_statements is to group identical queries which are just used with different parameters and aggregate runtime information in a system view.

In my personal judgement pg_stat_statements is really like a Swiss army knife. It allows you to understand what is really happening in your system.

To enable pg_stat_statements, add the following line to postgresql.conf and restart your server:
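The line in question is:

```
shared_preload_libraries = 'pg_stat_statements'
```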

Then run “CREATE EXTENSION pg_stat_statements” in your database.

PostgreSQL will then provide the pg_stat_statements view for you (column list updated 26.04.2022).

The view tells you which kind of query has been executed how often, and tells you about the total runtime of this type of query as well as about the distribution of runtimes for those particular queries. The data presented by pg_stat_statements can then be analyzed. Some time ago I wrote a blog post about this issue which can be found on our website.
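A common starting point is to rank query groups by their total execution time (column names as of PostgreSQL 13 and later):

```sql
SELECT query, calls, total_exec_time, mean_exec_time
FROM   pg_stat_statements
ORDER  BY total_exec_time DESC
LIMIT  10;
```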

When to use pg_stat_statements

The advantage of this module is that you will even be able to find millions of fairly fast queries which could be the reason for a high load. In addition to that, pg_stat_statements will tell you about the I/O behavior of various types of queries. The downside is that it can be fairly hard to track down individual slow queries which are usually fast but sometimes slow.

Conclusion - fix slow queries in PostgreSQL with one of three methods

Tracking down slow queries and bottlenecks in PostgreSQL is easy, assuming that you know which technique to use when. Use the slow query log to track down individual slow statements. auto_explain will help you to understand which slow queries are due to bad execution plans. Finally, pg_stat_statements, the Swiss army knife of PostgreSQL, will help you to understand which groups of queries might all be working together to slow your system down. This post gives you a quick overview of what's possible and what can be done to track down performance issues.

Update: see my 2021 blog post on pg_stat_statements for more detailed information on that method.

Check out Kaarel Moppel's blogpost on pg_stat_statements for a real-life example of how to fix slow queries.

Update on pg_stat_statements 26.04.2022:

From PG 12 to PG 13 the following has changed:

  • There are now counters for planning, i.e. how often the query was planned, how long it took in total, minimum and maximum.
  • Many columns were renamed accordingly, to avoid confusion (smart thinking!): total_time became total_exec_time because now there is also a total_plan_time.
  • There are now counters that record how much transaction log (WAL) was caused by each statement.
  • There may be changes on the horizon coming up for pg_stat_statements... we'll keep you posted!

Update on pg_stat_statements 18.01.2023:

PostgreSQL v15 added counters for JIT (Just-In-Time compiler) statistics.


Please leave comments below. In order to receive regular updates on important changes in PostgreSQL, subscribe to our newsletter, or follow us on Facebook or LinkedIn.

 
