“Our database is slow. What if we just buy more disks? Will that fix things?” I think every PostgreSQL database consultant in the world has heard this kind of question more than once. Of course the very same questions are asked by our customers here at CYBERTEC. While more disks are surely a nice thing to have, it is not always economical to buy more hardware to fix problems which, in many cases, are not caused by bad disk performance at all.
To answer the question whether additional disks make sense or not, it is important to extract statistics from the system. In my opinion, the best tool to do that is pg_stat_statements, which ships with PostgreSQL as a contrib module. It gives you deep insights into what is going on inside the server, and it also gives a clue about what happens on the I/O side. In short: it is possible to measure “disk wait”. Therefore, it is always a good idea to enable this module by default. The overhead is minimal, so it is definitely worth adding to the server.
First of all you have to set …

```
shared_preload_libraries = 'pg_stat_statements'
```

… in postgresql.conf and restart the server.
Then you can run …

```sql
CREATE EXTENSION pg_stat_statements;
```
… in your database. This will create a view that contains most of the information you will need. This includes, but is not limited to, how often a query was called, the total runtime of a certain type of query, caching behavior and so on.
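As a quick sketch of how the view is typically consulted (assuming the pre-PostgreSQL-13 column name total_time; newer versions renamed it to total_exec_time), the most expensive queries can be listed like this:

```sql
-- top 10 queries by accumulated runtime
SELECT query,
       calls,
       total_time
FROM   pg_stat_statements
ORDER  BY total_time DESC
LIMIT  10;
```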
The pg_stat_statements view will contain 4 fields, which will be vital to our investigation: query, total_time, blk_read_time and blk_write_time.
The blk_* fields tell us how much time a certain query has spent on reading and writing blocks. We can then compare this to total_time to see whether I/O time is relevant or not. If you have enough memory, the data will reside in RAM anyway, so the disk might only be needed to persist changes.
There is one important aspect which is often missed: the blk_* fields are zero by default, because PostgreSQL does not measure I/O time out of the box due to the potentially high overhead involved.
pg_test_timing: Measuring overhead

To sum up I/O times, set track_io_timing to true in postgresql.conf. pg_stat_statements will then start to show the data you need.
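Since track_io_timing can also be changed at runtime by a superuser, a sketch without hand-editing postgresql.conf (assuming a PostgreSQL version with ALTER SYSTEM, i.e. 9.4 or later) looks like this:

```sql
-- enable I/O timing cluster-wide; picked up on configuration reload
ALTER SYSTEM SET track_io_timing = on;
SELECT pg_reload_conf();

-- verify the setting in the current session
SHOW track_io_timing;
```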
However, before you do that, consider running pg_test_timing. Remember: if you want to measure I/O timing, the clock has to be read twice per block to determine the duration, and this can cause overhead. pg_test_timing will show how much overhead there is:
```
iMac:~ hs$ pg_test_timing
Testing timing overhead for 3 seconds.
Per loop time including overhead: 37.97 nsec
Histogram of timing durations:
  < usec   % of total      count
       1     96.25135   76053866
       2      3.74404    2958388
       4      0.00022        172
       8      0.00004         31
      16      0.00410       3238
      32      0.00019        150
      64      0.00004         32
     128      0.00001          6
     256      0.00001          6
     512      0.00000          2
    1024      0.00001          4
```
On my iMac, the average overhead per call is 37.97 nanoseconds. On a good Intel server you can maybe reach 14-15 nsec. If you happen to run on a bad virtualization solution, this number can easily explode to 1400 or even 1900 nsec. The value pg_test_timing returns also depends on the operating system you are using: it seems that Windows 8.1 was the first version of Windows that managed to come close to what Linux is able to deliver.
Having good data is really the key to making good decisions. Buying more and better disks only makes sense if you can actually detect a disk bottleneck using pg_stat_statements. However, before you do that: try to figure out whether the queries causing the problems can be improved. More hardware is really just the last thing you want to try.
In case you need any assistance, please feel free to contact us.