When an index is missing,
good performance won’t be kissing
a PostgreSQL user looking for efficiency
but instead feels like a legacy.
To satisfy a DBA’s desire and thirst,
let us load some data first.
pgbench is the tool of the day
but the next listing will explain that anyway:
[hs@fedora ~]$ pgbench -s 100 -i test
dropping old tables...
NOTICE:  table "pgbench_accounts" does not exist, skipping
NOTICE:  table "pgbench_branches" does not exist, skipping
NOTICE:  table "pgbench_history" does not exist, skipping
NOTICE:  table "pgbench_tellers" does not exist, skipping
creating tables...
generating data (client-side)...
10000000 of 10000000 tuples (100%) done (elapsed 9.96 s, remaining 0.00 s)
vacuuming...
creating primary keys...
done in 14.58 s (drop tables 0.00 s, create tables 0.07 s, client-side generate 10.03 s, vacuum 1.68 s, primary keys 2.80 s).
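In case you want to double-check what pgbench just produced, a quick sanity check along these lines (against the standard pgbench tables) confirms the row count and the on-disk size:

-- the count should match the 10000000 tuples reported above
SELECT count(*) FROM pgbench_accounts;

-- rough on-disk size of the freshly loaded table
SELECT pg_size_pretty(pg_relation_size('pgbench_accounts'));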
Loading millions of rows into PostgreSQL is not hard,
that is not the tricky part.
Data so quickly created,
is usually not hated.
10 seconds for 10 million rows,
the DBA in charge bows.
Performance will be good
ensuring all users’ good mood.
To celebrate the success
let the database run again and confess.
This time we are looking for fast reads,
an important thing when aiming for high speeds:
[hs@fedora ~]$ pgbench -S -c 10 -T 30 -j 10 test
starting vacuum...end.
transaction type: <builtin: select only>
scaling factor: 100
query mode: simple
number of clients: 10
number of threads: 10
duration: 30 s
number of transactions actually processed: 3054408
latency average = 0.098 ms
initial connection time = 13.872 ms
tps = 101859.582811 (without initial connection time)
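For context: the -S flag selects pgbench's built-in select-only workload, which fires single-row lookups against the primary key. The script boils down to something like this:

-- pick a random account id within the scale range
\set aid random(1, 100000 * :scale)
-- single-row lookup that can use the primary key index on aid
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;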
pgbench is the name of the game.
My slow box, 101 thousand transactions, insane.
30 seconds to execute the test,
10 concurrent connections that did not rest.
Even 10 threads for the client code,
running in super quick mode.
However, only indexes kept us happy and fine,
bringing good performance, almost divine.
Without indexing life is harsh to the one that suffers,
just like misconfigured shared_buffers.
Our holy lord, the superior B-tree:
without it, bad performance we see.
A single missing index for a test,
the entire database feels like it is at rest.
Do you want to know why?
Let us give it a try:
test=# \d+
                                          List of relations
 Schema |       Name       | Type  | Owner | Persistence | Access method |  Size   | Description
--------+------------------+-------+-------+-------------+---------------+---------+-------------
 public | pgbench_accounts | table | hs    | permanent   | heap          | 1281 MB |
 public | pgbench_branches | table | hs    | permanent   | heap          | 40 kB   |
 public | pgbench_history  | table | hs    | permanent   | heap          | 0 bytes |
 public | pgbench_tellers  | table | hs    | permanent   | heap          | 80 kB   |
(4 rows)

test=# \d pgbench_accounts
              Table "public.pgbench_accounts"
  Column  |     Type      | Collation | Nullable | Default
----------+---------------+-----------+----------+---------
 aid      | integer       |           | not null |
 bid      | integer       |           |          |
 abalance | integer       |           |          |
 filler   | character(84) |           |          |
Indexes:
    "pgbench_accounts_pkey" PRIMARY KEY, btree (aid)
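To watch that index earn its keep, one can EXPLAIN one of those lookups; with the primary key in place, the plan should show an Index Scan using pgbench_accounts_pkey (exact cost figures will differ from system to system):

-- with the primary key present, expect an Index Scan on pgbench_accounts_pkey
EXPLAIN SELECT abalance FROM pgbench_accounts WHERE aid = 42;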
Killing an index, only one …
All the performance will be gone:
test=# ALTER TABLE pgbench_accounts DROP CONSTRAINT pgbench_accounts_pkey;
ALTER TABLE
test=# \d pgbench_accounts
              Table "public.pgbench_accounts"
  Column  |     Type      | Collation | Nullable | Default
----------+---------------+-----------+----------+---------
 aid      | integer       |           | not null |
 bid      | integer       |           |          |
 abalance | integer       |           |          |
 filler   | character(84) |           |          |
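Running the same EXPLAIN again now tells a sadder story: without the primary key there is no index to fall back on, so PostgreSQL has to read the whole table and the plan should show a (possibly parallel) sequential scan:

-- without the index, the planner has to read all 10 million rows
EXPLAIN SELECT abalance FROM pgbench_accounts WHERE aid = 42;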
Let me run the test again.
Good performance? Back then?
Good lord? What happened to my data?
Will speed be back later?
[hs@fedora ~]$ pgbench -S -c 10 -T 30 -j 10 test
pgbench (14beta2)
starting vacuum...end.
transaction type: <builtin: select only>
scaling factor: 100
query mode: simple
number of clients: 10
number of threads: 10
duration: 30 s
number of transactions actually processed: 259
latency average = 1189.653 ms
initial connection time = 5.727 ms
tps = 8.405815 (without initial connection time)
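If you suspect a missing index is slowing down a system of your own, the statistics views are a good place to start: tables that are read mostly via sequential scans, and rarely via indexes, deserve a closer look. A rough starting point (ordering and thresholds are a matter of taste):

-- tables with many sequential scans but few index scans are the usual suspects
SELECT schemaname, relname, seq_scan, idx_scan, seq_tup_read
FROM   pg_stat_user_tables
ORDER  BY seq_tup_read DESC
LIMIT  10;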
8 transactions per second will reveal
that bad performance is hard to conceal.
A single index is broken,
all end users have been walking.
Calling support hotlines like crazy,
No more chances for DBAs to be lazy.
The conclusion of the day.
Make sure no important indexes go away.
It is a really important affair
to index data with care.
Don’t forget a single thing
and performance will be king.
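And should an important index ever go missing, as in the example above, bringing it back is a single statement away (here recreating the pgbench primary key that was dropped earlier):

-- restore the primary key on pgbench_accounts
ALTER TABLE pgbench_accounts
    ADD CONSTRAINT pgbench_accounts_pkey PRIMARY KEY (aid);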
I hope this little poem makes people happy.
Make sure your databases are not crappy.