Wikipedia is one of the most relevant collections of text data on the Internet today: countless entries covering pretty much every topic conceivable. That makes it an interesting corpus to play with. To prepare for my talk at the PostgreSQL conference in Vienna, we decided to import Wikipedia into PostgreSQL. The goal is to figure out which settings are most suitable for indexing Wikipedia, using the creation of a GIN index as the benchmark.
The structure of the table is pretty simple:
test=# \d wiki_articles
     Table "public.wiki_articles"
  Column  |  Type   | Modifiers
----------+---------+-----------
 page_id  | integer | not null
 title    | text    |
 content  | text    |
 filtered | text    |
Indexes:
    "wiki_articles_pkey" PRIMARY KEY, btree (page_id)
Tablespace: "myspace"
Overall, the size of the table adds up to around 27 GB. We store the title, the raw content, as well as a filtered version of the content, which no longer contains any mark-up.
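How the filtered column was produced is not the topic of this post. Just to give an idea, a purely hypothetical sketch could strip the most common wiki constructs with regular expressions; real wikitext needs a proper parser, so treat this as an approximation only:

-- hypothetical sketch: remove {{templates}} and unwrap [[links]]
-- a real import should use a proper wikitext parser instead
UPDATE wiki_articles
SET    filtered = regexp_replace(
           regexp_replace(content, '\{\{[^}]*\}\}', '', 'g'),
           '\[\[([^]|]*\|)?([^]]*)\]\]', '\2', 'g');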
The size of the filtered column is as follows:
test=# \timing
Timing is on.
test=# SELECT sum(length(filtered)) FROM wiki_articles;
     sum
-------------
 15326058215
(1 row)

Time: 181819.396 ms
Given the very special nature of this corpus, it seems interesting to inspect the distribution of values inside those texts. PostgreSQL has some simple means to gather this information:
test=# CREATE TABLE wiki_stats AS
           SELECT *
           FROM   ts_stat('SELECT to_tsvector(''english'', filtered)
                           FROM wiki_articles');
Time: 3925449.024 ms
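Once wiki_stats exists, pulling out the most frequent lexemes is trivial. ts_stat returns one row per lexeme, along with the number of documents it appears in (ndoc) and the total number of occurrences (nentry):

-- the ten most frequent lexemes in the corpus
SELECT word, ndoc, nentry
FROM   wiki_stats
ORDER BY nentry DESC
LIMIT  10;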
While this query is running, a fully busy CPU core consumes only about 4-7 MB / sec from disk. That is far below what the test hardware can provide: the operation is surprisingly expensive and clearly CPU-bound rather than I/O-bound.
Here is what vmstat says:
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free  buff    cache   si   so    bi    bo   in   cs us sy id wa st
 2  0  51012  88076   992 15708304    0    0  3896     0 1154  489 12  0 87  0  0
 1  0  51012  88324   992 15706008    0    0  4024     0 1155  512 12  0 87  0  0
 1  0  51012  94276   992 15696848    0    0  3964     0 1158  488 12  0 87  0  0
 2  0  51012  96260   992 15692184    0    0  3892     0 1161  508 12  0 87  0  0
 1  0  51012  95388   992 15689188    0    0  4104    10 1187  566 12  0 87  0  0
 1  0  51004  96344   992 15686012    8    0  4296     0 1190  660 12  0 87  0  0
Let's do some speed tests now. The first observation is that creating the index on an expression is incredibly slow:
test=# SET maintenance_work_mem TO '1 MB';
SET
Time: 0.353 ms
test=# CREATE INDEX idx_gin ON wiki_articles
           USING gin(to_tsvector('english', filtered));
CREATE INDEX
Time: 7411870.102 ms
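Keep in mind that such an expression index is only used when a query contains the very same expression. A search against idx_gin would therefore look roughly like this (the search term is, of course, just an example):

-- example search making use of the expression index
SELECT page_id, title
FROM   wiki_articles
WHERE  to_tsvector('english', filtered)
       @@ to_tsquery('english', 'database & index');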
After materializing the tsvector column, things get a lot faster; calculating the stemmed representation of the input is a major cost factor:
test=# CREATE TABLE wiki_ts AS
           SELECT page_id,
                  to_tsvector('english', title) AS title,
                  to_tsvector('english', filtered)
           FROM   wiki_articles;
test=# SET maintenance_work_mem TO '1 MB';
SET
Time: 0.326 ms
test=# \d wiki_ts
       Table "public.wiki_ts"
   Column    |   Type   | Modifiers
-------------+----------+-----------
 page_id     | integer  |
 title       | tsvector |
 to_tsvector | tsvector |
Tablespace: "myspace"

test=# CREATE INDEX idx_ts_to ON wiki_ts USING gin (to_tsvector);
CREATE INDEX
Time: 4478601.021 ms
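With the stemmed representation stored in its own column, the same kind of search no longer has to call to_tsvector at query time (again, the search term is made up):

-- example search against the materialized tsvector column
SELECT page_id
FROM   wiki_ts
WHERE  to_tsvector @@ to_tsquery('english', 'database & index');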
Increasing maintenance_work_mem certainly helps in the case of GIN.
NOTE: This is not necessarily the case if you are using btree indexes: /adjusting-maintenance_work_mem/
maintenance_work_mem | index creation time
---------------------+--------------------
    1 MB             | 4478 seconds
   16 MB             | 2307 seconds
   64 MB             | 1410 seconds
  256 MB             | 1173 seconds
 1024 MB             | 1171 seconds
 4096 MB             | 1160 seconds
   12 GB             | 1160 seconds
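For reference, each run in the table above boils down to nothing more than a sketch like this, executed once per memory setting:

-- one benchmark iteration (memory setting varies per run)
SET maintenance_work_mem TO '256MB';
DROP INDEX IF EXISTS idx_ts_to;
CREATE INDEX idx_ts_to ON wiki_ts USING gin (to_tsvector);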
The build process speeds up dramatically with more memory, but it is interesting to see that the curve flattens out pretty soon: at 256 MB we already get pretty much the best result we can expect, and adding more memory does not improve the situation anymore. It is also interesting to note that we are not I/O bound but CPU bound here, so a better I/O system would not speed things up either.
Finally, let's check the size of the resulting index:
test=# SELECT pg_size_pretty(pg_relation_size('idx_ts_to'));
 pg_size_pretty
----------------
 2993 MB
(1 row)

Time: 0.524 ms
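To put those 2993 MB into perspective, the index can be compared to the size of the underlying table; pg_total_relation_size includes all indexes and TOAST data:

-- compare heap size to total size including indexes and TOAST
SELECT pg_size_pretty(pg_relation_size('wiki_ts'))       AS table_size,
       pg_size_pretty(pg_total_relation_size('wiki_ts')) AS total_size;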