If you really want to track down slow queries, heavy I/O and poor performance, there is no way around the pg_stat_statements extension.
However, the pg_stat_statements system view contains a lot of information, and many people get lost in it. It can therefore make sense to come up with a clever query that provides administrators with the truly relevant information. Without the truly relevant information, tuning is somewhat pointless.
Here is my personal favorite query to track down slow queries:
test=# SELECT substring(query, 1, 50) AS short_query,
              round(total_exec_time::numeric, 2) AS total_exec_time,
              calls,
              round(mean_exec_time::numeric, 2) AS mean,
              round((100 * total_exec_time /
                     sum(total_exec_time::numeric) OVER ())::numeric, 2) AS percentage_cpu
       FROM   pg_stat_statements
       ORDER BY total_exec_time DESC
       LIMIT  20;
The output contains a short version of the query (this can be handy if you are using a text terminal as I do). Then there is the total_exec_time of the query along with the number of calls and the mean execution time.
Personally, I have found it useful to calculate an overall percentage for each query. It gives me a feeling for how much there is to gain in case I manage to optimize a certain query. The percentage value tells me about relevance, because it is pretty pointless to work on queries which only account for 0.5% of the time:
                    short_query                     | total_exec_time | calls  |  mean  | percentage_cpu
----------------------------------------------------+-----------------+--------+--------+----------------
 UPDATE pgbench_tellers SET tbalance = tbalance + ? |       585005.76 | 143881 |   4.07 |          54.03
 UPDATE pgbench_branches SET bbalance = bbalance +  |       481968.27 | 143881 |   3.35 |          44.51
 UPDATE pgbench_accounts SET abalance = abalance +  |         9801.43 | 143881 |   0.07 |           0.91
 SELECT abalance FROM pgbench_accounts WHERE aid =  |         2928.61 | 143881 |   0.02 |           0.27
 INSERT INTO pgbench_history (tid, bid, aid, delta, |         2236.59 | 143881 |   0.02 |           0.21
 copy pgbench_accounts from stdin                   |          371.97 |      1 | 371.97 |           0.03
 vacuum analyze pgbench_accounts                    |          128.22 |      1 | 128.22 |           0.01
 BEGIN;                                             |          108.18 | 143881 |   0.00 |           0.01
 END;                                               |           88.91 | 143881 |   0.00 |           0.01
 alter table pgbench_accounts add primary key (aid) |           74.88 |      1 |  74.88 |           0.01
 vacuum analyze pgbench_branches                    |           14.99 |      1 |  14.99 |           0.00
 vacuum pgbench_branches                            |           11.57 |      1 |  11.57 |           0.00
 create table pgbench_history(tid int,bid int,aid   |            4.36 |      1 |   4.36 |           0.00
 vacuum analyze pgbench_tellers                     |            2.28 |      1 |   2.28 |           0.00
 alter table pgbench_branches add primary key (bid) |            2.03 |      1 |   2.03 |           0.00
 SELECT substring(query, ?, ?) AS short_query,      |            1.93 |      2 |   0.96 |           0.00
 SELECT substring(query, ?, ?) AS short_query,      |            1.82 |      1 |   1.82 |           0.00
 SELECT a.attname,                                 +|            1.39 |      2 |   0.69 |           0.00
   pg_catalog.format_type(a.attty                   |                 |        |        |
 alter table pgbench_tellers add primary key (tid)  |            1.07 |      1 |   1.07 |           0.00
 vacuum analyze pgbench_history                     |            1.02 |      1 |   1.02 |           0.00
(20 rows)
Working top down is usually a good idea.
Of course, everybody will have their own ideas about how to approach the problem, and the information provided by the query is not sufficient to fully optimize a system. However, I have found it useful for gaining a quick overview of what is going on.
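One possible variation on the query above: if you care about individually slow queries rather than overall load, you can sort by the mean execution time instead of the total. A sketch (the WHERE filter and its threshold are my own addition, not part of the query above):

```sql
-- rank queries by average runtime per call instead of total time
SELECT substring(query, 1, 50) AS short_query,
       calls,
       round(mean_exec_time::numeric, 2) AS mean
FROM   pg_stat_statements
WHERE  calls > 10          -- skip one-off statements such as DDL
ORDER  BY mean_exec_time DESC
LIMIT  20;
```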
See also my blog about speeding up slow queries.
Thanks for this!!! Now, do we have to restart Postgres to reset the queries?
You can use pg_stat_statements_reset()
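For the record, no restart is needed; the function discards all statistics gathered so far:

```sql
-- throw away everything pg_stat_statements has collected so far
SELECT pg_stat_statements_reset();
```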
In Azure PostgreSQL, I get <insufficient privilege> for the short_query field on most of the rows, except for a few python scripts I used to create the tables. Any tips on seeing the queries?
You have to use an account with superuser privileges, such as the user postgres, to see those queries.
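On managed services where full superuser access is not available, an alternative is to grant the predefined monitoring role (available since PostgreSQL 10); the role name `monitoring_user` below is just a placeholder:

```sql
-- allow a regular account to read other users' query texts and statistics
GRANT pg_read_all_stats TO monitoring_user;
```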
Hello
I do not understand how total_exec_time became a CPU percentage.
Can you please shed light on this?
The query should be updated, because there is no total_time or mean_time anymore; instead there are total_exec_time and mean_exec_time.
Thanks, updated.
Please find the updated query for PostgreSQL 14:

SELECT substring(query, 1, 50) AS short_query,
       round(total_exec_time::numeric, 2) AS total_exec_time,
       calls,
       round(mean_exec_time::numeric, 2) AS mean,
       round((100 * total_exec_time / sum(total_exec_time::numeric)
              OVER ())::numeric, 2) AS percentage_overall
FROM   pg_stat_statements
ORDER  BY total_exec_time DESC
LIMIT  20;
Thanks, updated.