In 99% of all cases the PostgreSQL planner does a perfect job of optimizing your queries and makes sure that you enjoy high performance and low response times. The infrastructure ensuring this is both sophisticated and robust. However, there are some corner cases which can turn out to be quite nasty. In some of them the planner simply has no real chance and has to rely on guesswork to come up with a reasonable plan.
The question now is: How can a user figure out where things go wrong? Here is an example …
The goal here is to see how a simple query can be dissected and analyzed. Here is the query we want to optimize:
SELECT * FROM pg_stats;
Looks like a simple thing, doesn't it? Well, not quite … Let us take a look at the definition of this system view:
SELECT ...
       ...
FROM   pg_statistic s
       JOIN pg_class c ON c.oid = s.starelid
       JOIN pg_attribute a ON c.oid = a.attrelid
                          AND a.attnum = s.staattnum
       LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE  NOT a.attisdropped
  AND  has_column_privilege(c.oid, a.attnum, 'select'::text);
Still, this looks like a simple query. Let us take a look at the execution plan:
test=# explain SELECT * FROM pg_stats;
                                             QUERY PLAN
----------------------------------------------------------------------------------------------------
 Nested Loop Left Join  (cost=47.28..135.42 rows=6 width=535)
   ->  Hash Join  (cost=47.14..133.69 rows=6 width=475)
         Hash Cond: ((a.attrelid = c.oid) AND (a.attnum = s.staattnum))
         Join Filter: has_column_privilege(c.oid, a.attnum, 'select'::text)
         ->  Seq Scan on pg_attribute a  (cost=0.00..68.19 rows=2419 width=70)
               Filter: (NOT attisdropped)
         ->  Hash  (cost=40.75..40.75 rows=426 width=421)
               ->  Hash Join  (cost=14.64..40.75 rows=426 width=421)
                     Hash Cond: (s.starelid = c.oid)
                     ->  Seq Scan on pg_statistic s  (cost=0.00..20.26 rows=426 width=349)
                     ->  Hash  (cost=10.95..10.95 rows=295 width=72)
                           ->  Seq Scan on pg_class c  (cost=0.00..10.95 rows=295 width=72)
   ->  Index Scan using pg_namespace_oid_index on pg_namespace n  (cost=0.13..0.19 rows=1 width=68)
         Index Cond: (oid = c.relnamespace)
 Planning time: 1.746 ms
(15 rows)
Again, this one looks pretty sane. As always, we need to read the plan from the inside out. The first thing to see is therefore the sequential scan on pg_class: 295 rows are expected. Then come the next sequential scans and the hash joins. With a total cost of around 135 and an estimate of just 6 rows, the plan looks very promising.
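As a side note: the cost numbers are not magic either - they are computed from a handful of planner parameters such as the cost of reading a page sequentially or processing a single row. If you are curious which settings your system is using, a quick look at pg_settings is enough (this is just a pointer, not an invitation to change them blindly):

SELECT name, setting
FROM   pg_settings
WHERE  name IN ('seq_page_cost', 'random_page_cost',
                'cpu_tuple_cost', 'cpu_index_tuple_cost',
                'cpu_operator_cost');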
Let us look at what happens underneath. "explain analyze" will reveal what is really going on:
test=# explain analyze SELECT * FROM pg_stats;
                                             QUERY PLAN
---------------------------------------------------------------------------------------------
 Nested Loop Left Join  (cost=47.28..135.42 rows=6 width=535) (actual time=9.284..14.197 rows=400 loops=1)
   ->  Hash Join  (cost=47.14..133.69 rows=6 width=475) (actual time=9.245..13.215 rows=400 loops=1)
         Hash Cond: ((a.attrelid = c.oid) AND (a.attnum = s.staattnum))
         Join Filter: has_column_privilege(c.oid, a.attnum, 'select'::text)
         ->  Seq Scan on pg_attribute a  (cost=0.00..68.19 rows=2419 width=70) (actual time=0.006..3.282 rows=2328 loops=1)
               Filter: (NOT attisdropped)
         ->  Hash  (cost=40.75..40.75 rows=426 width=421) (actual time=0.827..0.827 rows=400 loops=1)
               Buckets: 1024  Batches: 1  Memory Usage: 143kB
               ->  Hash Join  (cost=14.64..40.75 rows=426 width=421) (actual time=0.133..0.538 rows=400 loops=1)
                     Hash Cond: (s.starelid = c.oid)
                     ->  Seq Scan on pg_statistic s  (cost=0.00..20.26 rows=426 width=349) (actual time=0.011..0.106 rows=400 loops=1)
                     ->  Hash  (cost=10.95..10.95 rows=295 width=72) (actual time=0.113..0.113 rows=299 loops=1)
                           Buckets: 1024  Batches: 1  Memory Usage: 31kB
                           ->  Seq Scan on pg_class c  (cost=0.00..10.95 rows=295 width=72) (actual time=0.002..0.050 rows=299 loops=1)
   ->  Index Scan using pg_namespace_oid_index on pg_namespace n  (cost=0.13..0.19 rows=1 width=68) (actual time=0.001..0.001 rows=1 loops=400)
         Index Cond: (oid = c.relnamespace)
 Planning time: 0.471 ms
 Execution time: 14.294 ms
(18 rows)
Again, the idea is to read things from the inside out. Now there are two sets of parentheses for each node: one for "cost" and one for "actual time". The important thing here is that costs are measured in abstract units, whereas "actual time" is measured in milliseconds. What we see here is that the planner estimated 295 rows for pg_class and we got 299 rows in reality. Pretty good, no? The time spent on this operation is close to zero - a very good sign. Let us move up the plan now: the sequential scan on pg_statistic expects 426 rows. We get slightly fewer (400), which is pretty good.
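In case you wonder where those row estimates for the sequential scans come from: ANALYZE (and autovacuum) stores an estimated row count for every table in pg_class.reltuples, and that is exactly what the planner uses here. The numbers on your system will differ, but checking them is a one-liner:

SELECT relname, reltuples::bigint AS estimated_rows
FROM   pg_class
WHERE  relname IN ('pg_class', 'pg_statistic', 'pg_attribute');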
Moving up the plan, the scan on pg_attribute is also close to what PostgreSQL predicted: 2419 estimated vs. 2328 actual rows is a perfectly reasonable result as well.
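The selectivity of the "NOT attisdropped" filter is, in turn, derived from the per-column statistics exposed through pg_stats - the very view we are analyzing. Assuming the column has been analyzed, you can inspect the raw material the planner works with like this:

SELECT null_frac, n_distinct, most_common_vals, most_common_freqs
FROM   pg_stats
WHERE  schemaname = 'pg_catalog'
  AND  tablename  = 'pg_attribute'
  AND  attname    = 'attisdropped';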
Then we can see a couple of interesting lines:
->  Hash Join  (cost=47.14..133.69 rows=6 width=475) (actual time=9.245..13.215 rows=400 loops=1)
      Hash Cond: ((a.attrelid = c.oid) AND (a.attnum = s.staattnum))
      Join Filter: has_column_privilege(c.oid, a.attnum, 'select'::text)
The estimate here is just 6 rows - in reality, however, we get a set of 400 rows. What a whopping difference.
Why does it matter? If an estimate is far too low, it can easily happen that the planner puts a "nested loop" on top of it, based on that underestimate. With just 400 rows we cannot expect problems here - but what if a nested loop suddenly has to process a million rows when the planner expected only 500? It can easily turn into a disaster.
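As a side note, a common way to check whether a suspicious nested loop really is the problem is to turn it off for the current session only and compare the resulting plans. This is strictly a diagnostic experiment, never a permanent setting:

-- diagnostic only, never set this in postgresql.conf
SET enable_nestloop = off;
EXPLAIN ANALYZE SELECT * FROM pg_stats;
RESET enable_nestloop;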
So, how do we end up with this wrong estimate? The problem here is the join filter. To check whether a column may be read or not, the view calls the has_column_privilege() function. The catch is that PostgreSQL has absolutely no statistics about the return values of this function, so it has to fall back to a default selectivity. As I am logged in as a superuser, I happen to be able to read everything - and voilà, this is where the wrong estimate comes from.
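To illustrate the point with a query that is not part of the view itself: as a superuser the join filter does not remove anything at all, so both counts below should come out at roughly 400 rows, while the planner assumed the function would filter out most of them.

SELECT count(*) AS candidate_rows,
       count(*) FILTER (WHERE has_column_privilege(c.oid, a.attnum, 'select')) AS visible_rows
FROM   pg_statistic s
       JOIN pg_class c ON c.oid = s.starelid
       JOIN pg_attribute a ON a.attrelid = c.oid AND a.attnum = s.staattnum
WHERE  NOT a.attisdropped;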
Of course, in my example this is not dangerous at all. It just shows nicely how such a misestimate can become a hidden bomb.
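One way to spot such hidden bombs before they go off is the auto_explain module, which logs the actual plans of slow queries. A minimal, session-level sketch (for real workloads it is usually loaded via shared_preload_libraries, and log_analyze adds measurable overhead) could look like this:

LOAD 'auto_explain';
SET auto_explain.log_min_duration = '250ms';  -- log plans of queries running longer than this
SET auto_explain.log_analyze = on;            -- include actual row counts and timings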
Great walk-through and analysis. The comment about being root, though, is off: I see similar values for root vs. an unprivileged user on a mostly empty db as well. Perhaps in this case the discrepancy might show up if you had a lot of private schemas?
This kind of visibility isn't taken into consideration when handling statistics within the planner. Pretty much everything comes from pg_stats combined with various GUC settings like random_page_cost. The issue is that the planner has long had issues with correlations---as in, it can't do them. If two columns have a relationship that defines the statistics between them, PG will not notice this, but will use the raw ANALYZE data instead, which can be off by several orders of magnitude. This becomes especially pronounced when handling multiple tables via JOIN. Even with a direct equality, the proportions are unknown except via local table stats per column.
Since PostgreSQL doesn't (and shouldn't, really) gather statistics between tables, this can't really improve. About the only possible way forward here is to analyze the returning data and stash any observed correlations in a pg_observed_stats table or something. Then the planner could use it as an override when it sees that join set again. I believe Oracle does something like this, actually.
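For what it's worth, the within-table part of this has since improved: PostgreSQL 10 introduced extended statistics, which let you tell the planner about correlated columns inside a single table (cross-table correlations remain uncovered). A minimal sketch - the table and column names here are made up for illustration:

-- hypothetical table where zip code largely determines city
CREATE STATISTICS addr_dep (dependencies) ON zip, city FROM address;
ANALYZE address;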