
Four reasons why VACUUM won't remove dead rows from a table

03.2018

Why vacuum?

Whenever rows in a PostgreSQL table are updated or deleted, dead rows are left behind. VACUUM gets rid of them so that the space can be reused. If a table doesn't get vacuumed, it will get bloated, which wastes disk space and slows down sequential table scans (and – to a smaller extent – index scans).

VACUUM also takes care of freezing table rows to avoid problems when the transaction ID counter wraps around, but that's a different story.

Normally you don't have to take care of all that, because the autovacuum daemon built into PostgreSQL does it for you. To find out more about enabling and disabling autovacuum, read this post.

Problems with vacuum: bloated tables

If your tables get bloated, the first thing to check is whether autovacuum has processed them or not:
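A query like the following against the pg_stat_user_tables statistics view will do (a sketch; you may prefer to sort by a more refined bloat estimate):

```sql
SELECT schemaname, relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC    -- tables with the most dead tuples first
LIMIT 10;
```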

If your bloated table does not show up here, n_dead_tup is zero and last_autovacuum is NULL, you might have a problem with the statistics collector.

If the bloated table is right there on top, but last_autovacuum is NULL, you might need to configure autovacuum to be more aggressive so that it finishes the table.

But sometimes the result will look like this:
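For illustration, with hypothetical values and a hypothetical table name vacme:

```
 schemaname | relname | n_live_tup | n_dead_tup |        last_autovacuum
------------+---------+------------+------------+-------------------------------
 public     | vacme   |          0 |     100000 | 2018-02-22 13:20:16.28645+01
```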

Here autovacuum ran recently, but it didn't free the dead tuples!

We can verify the problem by running VACUUM (VERBOSE):
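For example, on a hypothetical table vacme (the output is a reconstruction; the exact wording and numbers depend on the PostgreSQL version):

```
test=> VACUUM (VERBOSE) vacme;
INFO:  vacuuming "public.vacme"
INFO:  "vacme": found 0 removable, 100000 nonremovable row versions in 443 pages
DETAIL:  100000 dead row versions cannot be removed yet, oldest xmin: 22300
VACUUM
```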

Why won't VACUUM remove the dead rows?

VACUUM only removes those row versions (also known as “tuples”) that are no longer needed. A tuple is no longer needed if the transaction ID of the deleting transaction (as stored in the xmax system column) is older than the oldest transaction still active in the PostgreSQL database (or in the whole cluster, for shared tables).

This value (22300 in the VACUUM output above) is called the “xmin horizon”.

There are four things that can hold back this xmin horizon in a PostgreSQL cluster:

  1. Long-running transactions and VACUUM:

    You can find those and their xmin value with the following query:
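    A query along these lines against the pg_stat_activity view will show them (a sketch; the available columns vary slightly between PostgreSQL versions):

    ```sql
    SELECT pid, datname, usename, state, backend_xmin, backend_xid
    FROM pg_stat_activity
    WHERE backend_xmin IS NOT NULL
       OR backend_xid IS NOT NULL
    ORDER BY greatest(age(backend_xmin), age(backend_xid)) DESC;
    ```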

    You can use the pg_terminate_backend() function to terminate the database session that is blocking your VACUUM.
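    For example (the pid 12345 is hypothetical; use the pid from the query output):

    ```sql
    SELECT pg_terminate_backend(12345);
    ```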

     

  2. Abandoned replication slots and VACUUM:

    A replication slot is a data structure that keeps the PostgreSQL server from discarding information that is still needed by a standby server to catch up with the primary.

    If replication is delayed or the standby server is down, the replication slot will prevent VACUUM from deleting old rows.

    You can find all replication slots and their xmin value with this query:
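    For example (a sketch using the pg_replication_slots view):

    ```sql
    SELECT slot_name, slot_type, database, xmin
    FROM pg_replication_slots
    ORDER BY age(xmin) DESC;
    ```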

    Use the pg_drop_replication_slot() function to drop replication slots that are no longer needed.
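    For example (the slot name is hypothetical):

    ```sql
    SELECT pg_drop_replication_slot('defunct_slot');
    ```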

    Note: This can only happen with physical replication if hot_standby_feedback = on. For logical replication there is a similar hazard, but it only affects system catalogs. Examine the catalog_xmin column in that case.

     

  3. Orphaned prepared transactions and VACUUM:

    During two-phase commit, a distributed transaction is first prepared with the PREPARE statement and then committed with the COMMIT PREPARED statement.

    Once Postgres prepares a transaction, the transaction is kept “hanging around” until Postgres commits or aborts it. It even has to survive a server restart! Normally, transactions don't remain in the prepared state for long, but sometimes things go wrong and the administrator has to remove a prepared transaction manually.

    You can find all prepared transactions and their xmin value with the following query:
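    For example (a sketch using the pg_prepared_xacts view; the transaction column holds the xmin):

    ```sql
    SELECT gid, prepared, owner, database, transaction AS xmin
    FROM pg_prepared_xacts
    ORDER BY age(transaction) DESC;
    ```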

    Use the ROLLBACK PREPARED SQL statement to remove prepared transactions.
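    For example (the transaction identifier is hypothetical; use the gid from the query output):

    ```sql
    ROLLBACK PREPARED 'some_gid';
    ```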

     

  4. Standby server with hot_standby_feedback = on and VACUUM:

    Normally, the primary server in a streaming replication setup does not care about queries running on the standby server. Thus, VACUUM will happily remove dead rows which may still be needed by a long-running query on the standby, which can lead to replication conflicts. To reduce replication conflicts, you can set hot_standby_feedback = on on the standby server. Then the standby will keep the primary informed about the oldest open transaction, and VACUUM on the primary will not remove old row versions still needed on the standby.

    To find out the xmin of all standby servers, you can run the following query on the primary server:
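    A sketch using the pg_stat_replication view:

    ```sql
    SELECT application_name, client_addr, backend_xmin
    FROM pg_stat_replication
    ORDER BY age(backend_xmin) DESC;
    ```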

Read more about PostgreSQL table bloat and autocommit in my post here.

13 responses to “Four reasons why VACUUM won't remove dead rows from a table”

  1. Hi Laurenz, Thanks for this post. It was helpful to us in identifying some issues related to auto-vacuum.

    We are on postgresql 9.6 and when we run the vacuum verbose cmd, it does not show us the oldest xmin as seen in your output above. Wondering if you tried this on a newer version.

    Regards

  2. Hi Laurenz, thanks a lot for post.
    I got a bloated table because of the oldest xmin, but this xmin belongs to physical replication. How can I solve this problem without losing replication?
    Best regards.

    • It's a bit unclear what your problem is, and it seems unrelated to the article, but perhaps you need to drop the replication slot.

  3. Hi Laurenz,
    I've got an issue with this auto-vacuum
    [2020-08-05 16:45:17.157 07][][][][][431][XX001]ERROR: found xmin 2756976979 from before relfrozenxid 300006063
    [2020-08-05 16:45:17.157 07][][][][][431][XX001]CONTEXT: automatic vacuum of table "template1.pg_catalog.pg_authid"

    Could you please help me with this problem?

  4. Hello Laurenz,
    on 'long-running transactions'. In isolation level 'read committed'. If I just do selects and keep the transaction open in between. Can that cause problems for vacuum? I guess not as it does not guarantee read consistency? Thank you!

    • You are right. What holds back VACUUM is open snapshots, and in READ COMMITTED isolation, each statement takes a new snapshot. If you look at the query I provide, you will see that it checks backend_xmin and backend_xid. You will see that your read-only READ COMMITTED transaction has both values set to NULL between queries.

  5. Hi Laurenz,
    we have a database where a vacuum on pg_largeobject does not remove the dead items.
    I've checked the 4 reasons but I'm not sure if any of the reasons apply here. From my understanding, I would rather say that nothing applies.
    Have I overlooked anything? And what else could it be?

    VACUUM VERBOSE ANALYSE pg_largeobject;
    INFO: vacuuming "pg_catalog.pg_largeobject"
    INFO: table "pg_largeobject": index scan bypassed: 420500 pages from table (1.19% of total) have 690946 dead item identifiers
    INFO: table "pg_largeobject": found 0 removable, 1816641 nonremovable row versions in 533548 out of 35474423 pages
    DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 25410676
    Skipped 0 pages due to buffer pins, 0 frozen pages.
    CPU: user: 2.66 s, system: 1.43 s, elapsed: 4.29 s.
    INFO: analyzing "pg_catalog.pg_largeobject"
    INFO: "pg_largeobject": scanned 30000 of 35474423 pages, containing 106967 live rows and 587 dead rows; 30000 rows in sample, 126486420 estimated total rows

    Please find the results of the queries enclosed:
    1) Long-running transactions and VACUUM
     pid | datname | usename |    state    | backend_xmin | backend_xid
    --------+----------+----------+---------------------+--------------+-------------
     235573 | ........ | ........ | idle in transaction |  25410676 |
     236156 | ........ | ........ | idle in transaction |  25410676 |
     237118 | ........ | ........ | idle in transaction |  25410676 |
     240698 | ........ | ........ | idle in transaction |  25410676 |
     242505 | ........ | postgres | active       |  25410677 |

    2) Abandoned replication slots and VACUUM
     slot_name | slot_type | database | xmin
    -----------+-----------+----------+------
     stndb01 | physical |     |
     stndb02 | physical |     |
    (2 rows)

    3) Orphaned prepared transactions and VACUUM
     gid | prepared | owner | database | xmin
    -----+----------+-------+----------+------
    (0 rows) 

    4) Standby server with hot_standby_feedback = on and VACUUM
     application_name | client_addr | backend_xmin
    ------------------+---------------+--------------
     stndb01     | 172.16.24.24 |
     stndb02     | 172.16.24.22 |
    (2 rows)

    FYI:
    hot_standby = on
    hot_standby_feedback = off (default)

    Thanks in advance!
    Best regards

    • Everything is fine, and VACUUM removed all dead rows. See the line "0 dead row versions cannot be removed yet", which is the same as "all dead row versions could be removed". The "nonremovable row versions" are the live rows in the table.
      To actually shrink the table, you would need VACUUM (FULL).

  6. I just want to say thank you for this! I did not understand that long-lived transactions would block vacuuming across all tables (including those the transaction doesn't use), but this information helped me figure out what was going on. You are a saint for all the help you provide!
