
PostgreSQL: Parallel CREATE INDEX for better performance

09.2018

As of PostgreSQL 11, users have the ability to create indexes in parallel.

For many years, various commercial database vendors have offered this feature, and we are glad that PostgreSQL has joined this elite club. Multi-core index creation will dramatically improve the usability of large database deployments in the future.

Creating large tables in PostgreSQL

Since version 11, PostgreSQL supports classical “stored procedures”. The beauty is that a procedure can run more than one transaction, which is ideal if you want to generate huge amounts of random data. When you call generate_series to generate 1 million rows in a single statement, PostgreSQL has to keep all of this data in memory. Consequently, generating hundreds of millions of random rows using more than one transaction can be really useful to reduce the memory footprint. Here is how it works:
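The original listing is not preserved here, so what follows is a minimal sketch of such a procedure. The table t_demo and its data column do appear later in this post; the procedure name insert_data, the batch size, and the loop logic are illustrative assumptions:

test=# CREATE TABLE t_demo (data numeric);
CREATE TABLE
test=# CREATE OR REPLACE PROCEDURE insert_data(buckets integer)
LANGUAGE plpgsql
AS $$
DECLARE
   i int := 0;
BEGIN
   WHILE i < buckets
   LOOP
      -- each batch inserts 1 million random numeric values
      INSERT INTO t_demo SELECT random()
         FROM generate_series(1, 1000000);
      i := i + 1;
      RAISE NOTICE 'inserted % buckets', i;
      COMMIT;   -- end the transaction to keep the memory footprint low
   END LOOP;
   RETURN;
END;
$$;
CREATE PROCEDURE
test=# CALL insert_data(500);   -- 500 batches x 1 million rows = 500 million rows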

This tiny bit of code loads 500 million random numeric values, which should be enough to demonstrate how CREATE INDEX can be improved in PostgreSQL 11. In our example, 500 million rows translate to roughly 21 GB of data:
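One way to check the size (a sketch; the original listing is not preserved, and the figure shown is the approximate value quoted in the text):

test=# SELECT pg_size_pretty(pg_relation_size('t_demo'));
 pg_size_pretty
----------------
 21 GB
(1 row)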

The reason I went for numeric is that numeric causes the most overhead of all number data types. Creating a numeric index is a lot more costly than indexing, say, int4 or int8. The goal is to see how much CPU time we can save by building a large index on a fairly expensive field.

CREATE INDEX: Using just 1 CPU core

In PostgreSQL 11 parallel index creation is on by default. The parameter in charge of this feature is called max_parallel_maintenance_workers, which can be set in postgresql.conf:
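The built-in default is 2:

# postgresql.conf
max_parallel_maintenance_workers = 2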

The default value here tells PostgreSQL that if the table is sufficiently large, it can launch two workers to help with index creation. To compare a “traditional” way to create the index with the new settings, I have set max_parallel_maintenance_workers to 0. This will ensure that no multicore indexing is available:
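For a single session this can be done as follows (a sketch of the original listing):

test=# SET max_parallel_maintenance_workers TO 0;
SET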

The consequence is that indexing will take forever. When running the CREATE INDEX statement we will see a lot of I/O and a lot of CPU. To make things worse, I left all memory parameters at their default values, which means that the index creation has to work with only 64 MB of maintenance memory (the default maintenance_work_mem), which is nothing given the size of the table.

Here are the results on my “Intel(R) Core(TM) i5-4460 CPU @ 3.20GHz”:
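A sketch of the single-core run (the index name idx1 is illustrative and the exact timing output is not preserved; the roughly 17 minute runtime is quoted in the text below):

test=# \timing
Timing is on.
test=# CREATE INDEX idx1 ON t_demo (data);
CREATE INDEX
-- runtime on this box: roughly 17 minutes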

17 minutes, not too bad. Remember, we are talking about 500 million rows of really nasty data.

Using more than just one core

Let us run the same type of indexing on 2 cores:
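Again a sketch (the index name idx2 is illustrative; the runtime is quoted in the text below):

test=# SET max_parallel_maintenance_workers TO 2;
SET
test=# CREATE INDEX idx2 ON t_demo (data);
CREATE INDEX
-- runtime: roughly 11 minutes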

Wow, we are down to 11 minutes. Of course the operation does not scale perfectly linearly, because we have to keep in mind that those partial results have to be merged together and all that. But there is a catch: we set max_parallel_maintenance_workers to 2 and what we saw is 2 cores, right? So what if we set the value to 4? In my case 4 is the number of physical cores in the machine, so it makes no sense to use any higher values. What you will see is that PostgreSQL still uses only two cores.

How can we change that? The answer can be found in the next listing: ALTER TABLE … SET … allows us to lift this restriction and use more workers:
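Something along these lines (a sketch; parallel_workers is the relevant storage parameter here):

test=# ALTER TABLE t_demo SET (parallel_workers = 4);
ALTER TABLE
test=# SET max_parallel_maintenance_workers TO 4;
SET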

In this case both max_parallel_maintenance_workers and the table's parallel_workers parameter are set to 4. What we will see now is that PostgreSQL will utilize 5 processes. Why does that happen? What you will see is one leader process and 4 worker processes helping with index creation. That might not be totally obvious, but it makes sense when you think about it.

Of course we cannot add an infinite number of workers and expect performance to grow linearly. At this stage our (single) SSD also starts to run into performance limitations, and we won't see a twofold increase anymore:
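The listing below, including the timing, is reproduced from the reader comment at the end of this post, which apparently quotes this step of the article:

test=# CREATE INDEX idx3 ON t_demo (data);
CREATE INDEX
Time: 534775.040 ms (08:54.775)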

Everybody is doing the same thing at pretty much the same time, so we will see wild swings in our I/O curve, which naturally makes the entire thing a bit slower and not linear. Still, we managed to speed up index creation from 17 minutes to close to 9 minutes simply by adding more cores to the system.

Using more memory for CREATE INDEX

CPU cores are not the only limiting factor during index creation. Memory is also of significant importance. By default maintenance_work_mem is set to a really low value (64 MB), which greatly limits the amount of data that can be sorted in memory. Therefore, the next logical step is to increase this parameter and set it to a higher value before creating the new index:
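A sketch of the session-level change (the 4 GB figure comes from the next paragraph):

test=# SET maintenance_work_mem TO '4 GB';
SET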

In my case I decided to pump the value up to 4 GB. My server has 32 GB of memory, and we have to keep in mind that we are not the only ones who might create an index, so 4 GB x 5 processes might already be a really aggressive value in a real-world scenario.

What we will see while creating the index is a lot more parallelism in the first phase of the index creation, which is exactly what we expected. You can also see quite clearly that towards the end CPU usage is pretty low and PostgreSQL is waiting for the disk to do its job. The entire system has been set up with default values, so writes have not been optimized yet and are therefore going to be an issue.

However, we will still see a nice improvement:
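A sketch of the run (the index name idx4 is illustrative; the runtime is taken from the text below):

test=# CREATE INDEX idx4 ON t_demo (data);
CREATE INDEX
-- runtime: 7 minutes 28 seconds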

7 minutes and 28 seconds. That is already very nice. But let us see whether we can do even better. What we have seen so far is that checkpoints and I/O have started to become a limiting factor. Therefore we will try to improve on that by telling PostgreSQL to use larger checkpoint distances. In this example I have decided to change postgresql.conf to the following values:
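The exact values used in the original post are not preserved; the following are merely plausible settings that increase the checkpoint distance:

# postgresql.conf (illustrative values)
max_wal_size = 100GB
min_wal_size = 1GB
checkpoint_timeout = 30min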

Those settings can easily be activated by reloading the config file:
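For example with pg_reload_conf():

test=# SELECT pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)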

Let us create a new index using those larger checkpoint distances:
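A sketch (the index name idx5 is illustrative):

test=# CREATE INDEX idx5 ON t_demo (data);
CREATE INDEX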

When looking at the process table while building the index, you will notice that PostgreSQL spends quite a lot of time writing the WAL to disk. As long as we stick to a single SSD, there is not much more we can do about it. However, what will happen if we play our next trump card: additional hardware? What if we created all our temporary data on one disk, sent the WAL to the main disk, and created the index on a third SSD? This way we could split the amount of I/O needed quite nicely and see what happens.

Using tablespaces in PostgreSQL to speed up indexing

As already stated, adding more hardware by using tablespaces might be a good idea. I am well aware that this might not be possible in a modern cloud environment. However, on my test server I still have the luxury items: a couple of real physical SSD drives.

So let us give them a try and create two tablespaces that can store the data. On top of that, I will tell PostgreSQL to use those tablespaces for sorting and for storing the new index:
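A sketch of the tablespace setup (the tablespace names and mount point paths are illustrative; adjust them to your disks):

test=# CREATE TABLESPACE indexspace LOCATION '/ssd1/postgres/index';
CREATE TABLESPACE
test=# CREATE TABLESPACE sortspace LOCATION '/ssd2/postgres/sort';
CREATE TABLESPACE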

Then we can tell PostgreSQL where to put temporary data:
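Using the temp_tablespaces parameter (tablespace name as above):

test=# SET temp_tablespaces TO sortspace;
SET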

In the next step the index creation can start:
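A sketch (the index name idx6 is illustrative; the runtime figure comes from the text below):

test=# CREATE INDEX idx6 ON t_demo (data) TABLESPACE indexspace;
CREATE INDEX
-- runtime: just below 7 minutes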

[Figure: index creation in detail]

What we see during the index creation is that our throughput peaks at higher values than before, because more than one SSD can work at the same time. Instead of a 500 MB/sec peak, our throughput goes up to as much as 900 MB/sec at times. The overall speed has improved as well: we are already below 7 minutes, which is really nice.

If you add more hardware to the box, it might be worth considering creating one filesystem using all disks at once. I did not have time to test this option, but I assume that it might yield similar and maybe even better results than what I was able to come up with in this first test.

[Figure: multicore index creation in PostgreSQL: CREATE INDEX can use more than one CPU]

Extra tip

Don't underestimate the importance of the data type in use. If we did the same test using normal integer values, we could create the index in 3 minutes 51 seconds. In other words: the data type is of significant importance.

In this post you have seen that creating indexes can be sped up considerably. However, keep in mind that new indexes are not always beneficial; pointless indexes can even slow things down. To figure out which indexes might not be needed, consider reading a post written by Laurenz Albe, who explains how to tackle this kind of problem.

 


In order to receive regular updates on important changes in PostgreSQL, subscribe to our newsletter, or follow us on Facebook or LinkedIn.

3 responses to “PostgreSQL: Parallel CREATE INDEX for better performance”

  1. Hi, thanks for the info, it will be nice to try.
    But one question: how do we know the execution time when creating an index?

    Like this, I mean:
    test=# CREATE INDEX idx3 ON t_demo (data);
    CREATE INDEX
    Time: 534775.040 ms (08:54.775)

  2. max_parallel_maintenance_workers = 64 on the system.
    I have changed work_mem to 4 GB (old value 332800 kB).
    The table's parallel_workers was also changed to 32, then 64 (old value NULL).

    I have tried the same, but there is no difference. Am I doing anything wrong?

    Table size is 48 GB. It took 3 minutes 32 seconds with both the old and the new values of the parameters you mentioned.

    Please help.

