Many people know that explicit table locks with LOCK TABLE
are bad style and usually a consequence of bad design. The main reason is that they hamper concurrency and hence performance.
Through a recent support case I learned that there are even worse effects of explicit table locks.
Before an SQL statement uses a table, it takes the appropriate table lock. This prevents concurrent use that would conflict with its operation. For example, reading from a table takes an ACCESS SHARE lock, which conflicts with the ACCESS EXCLUSIVE lock that TRUNCATE needs.
You can find a description of the individual lock levels in the documentation. There is also a matrix that shows which lock levels conflict with each other.
You don't have to perform these table locks explicitly, PostgreSQL does it for you automatically.
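You can see this in action by provoking such a conflict with two sessions (atable stands in for any table of yours):

-- Session 1: an open transaction that merely reads from the table
BEGIN;
SELECT count(*) FROM atable;   -- takes ACCESS SHARE, held until end of transaction

-- Session 2: TRUNCATE needs ACCESS EXCLUSIVE, so it blocks
TRUNCATE atable;               -- hangs until session 1 runs COMMIT or ROLLBACK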
The LOCK TABLE statement

You can also explicitly request locks on a table with the LOCK statement:
LOCK [ TABLE ] [ ONLY ] name [ * ] [, ...] [ IN lockmode MODE ] [ NOWAIT ]
There are some cases where it is useful and appropriate to take such an explicit table lock. One example is a bulk update of a table, where you want to avoid deadlocks with other transactions that modify the table at the same time. In that case you would use a SHARE lock on the table, which prevents concurrent data modifications:
LOCK atable IN SHARE MODE;
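Since the lock is only released at the end of the transaction, the complete bulk update would look somewhat like this (the table and column names are made up for illustration):

BEGIN;
LOCK atable IN SHARE MODE;              -- blocks concurrent INSERT/UPDATE/DELETE, allows reads
UPDATE atable SET price = price * 1.1;  -- the hypothetical bulk update
COMMIT;                                 -- the lock is released here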
Typical mistakes with LOCK TABLE
Unfortunately, most people don't think hard enough and just use “LOCK atable” without thinking that the default lock mode is ACCESS EXCLUSIVE, which blocks all concurrent access to the table, even read access. This harms performance more than necessary.
But most of the time, tables are locked because developers don't know that there are less restrictive ways to achieve what they want:

- You want to make sure that nobody changes a row between your SELECT and a later UPDATE? Use SELECT ... FOR UPDATE! A REPEATABLE READ transaction may be even better. That means that you have to be ready to retry the operation if the UPDATE fails due to a serialization error.
- You run several SELECTs on the table and want to be sure that nobody modifies the table between your statements? Use a transaction with REPEATABLE READ isolation level, so that you see a consistent snapshot of the database!
- You want to read and remove rows in a single step? Use DELETE ... RETURNING, then the row will be locked immediately!
- You want several workers to take jobs from a queue table without blocking each other? Use SELECT ... LIMIT 1 FOR UPDATE SKIP LOCKED (see the sketch below)!
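For the queue case, a minimal sketch could look like this (the jobs table and its columns are made up for illustration):

-- a hypothetical queue table
CREATE TABLE jobs (
   id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
   payload text NOT NULL,
   done    boolean NOT NULL DEFAULT false
);

-- each worker grabs one pending job; SKIP LOCKED keeps workers from blocking each other
BEGIN;
SELECT id, payload
FROM jobs
WHERE NOT done
ORDER BY id
LIMIT 1
FOR UPDATE SKIP LOCKED;

-- ... process the job in the application, then mark it done ...
UPDATE jobs SET done = true WHERE id = 42;  -- 42 stands for the id fetched above
COMMIT;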
LOCK TABLE versus autovacuum

It is necessary that autovacuum processes a table from time to time so that dead tuples get removed and live tuples get frozen.
Now VACUUM requires a SHARE UPDATE EXCLUSIVE lock on the table. This conflicts with the lock levels people typically use to explicitly lock tables, namely SHARE and ACCESS EXCLUSIVE. (As I said, the latter lock is usually used by mistake.)
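You can watch such a conflict from another session; a simple query against the pg_locks view (using atable from the example above) would be:

-- who holds or waits for locks on the table?
SELECT l.pid, l.mode, l.granted, a.query
FROM pg_locks AS l
   JOIN pg_stat_activity AS a USING (pid)
WHERE l.relation = 'atable'::regclass;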
Now autovacuum is designed to be non-intrusive. If any transaction that wants to lock a table is blocked by autovacuum, the deadlock detector will cancel the autovacuum process after a second of waiting. You will see this message in the database log:
ERROR: canceling autovacuum task
CONTEXT: automatic vacuum of table "xyz"
The autovacuum launcher process will soon start another autovacuum worker for this table, so this is normally no big problem. Note that “normal” table modifications like INSERT, UPDATE and DELETE do not require locks that conflict with VACUUM!
If you use LOCK
on a table frequently, there is a good chance that autovacuum will never be able to successfully process that table. This is because it is designed to run slowly, again in an attempt not to be intrusive.
Then dead tuples won't get removed, live tuples won't get frozen, and the table will grow (“get bloated” in PostgreSQL jargon). The bigger the table grows, the less likely it becomes that autovacuum can finish processing it. This can go undetected for a long time unless you monitor the number of dead tuples for each table.
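One way to monitor this is via the statistics views; for example (what counts as “too many” dead tuples depends on your workload):

-- tables with the most dead tuples, and when they were last vacuumed
SELECT schemaname, relname, n_live_tup, n_dead_tup,
       last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;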
Eventually, though, the sticky brown substance is going to hit the ventilation device. This will happen when there are non-frozen live rows in the table that are older than autovacuum_freeze_max_age
. Then PostgreSQL knows that something has to be done to prevent data corruption due to transaction counter wrap-around. It will start autovacuum in “anti-wraparound mode” (you can see that in pg_stat_activity
in recent PostgreSQL versions).
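You can check how close each database is to that limit by looking at the age of its oldest unfrozen transaction ID:

-- compare this with autovacuum_freeze_max_age (200 million by default)
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;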
Such an anti-wraparound autovacuum will not back down if it blocks other processes. The next LOCK
statement will block until autovacuum is done, and if it is an ACCESS EXCLUSIVE
lock, all other transactions will queue behind it. Processing will come to a sudden stop. Since by now the table is probably bloated out of proportion and autovacuum is slow, this will take a long time.
If you cancel the autovacuum process or restart the database, the autovacuum will just start running again. Even if you disable autovacuum (which is a really bad idea), PostgreSQL will launch the anti-wraparound autovacuum. The only way to resume operation for a while is to increase autovacuum_freeze_max_age
, but that will only make things worse eventually: 1 million transactions before the point at which you would suffer data corruption from transaction counter wrap-around, PostgreSQL will shut down and can only be started in single-user mode for a manual VACUUM
.
First, if you already have the problem, declare downtime, launch an explicit VACUUM (FULL, FREEZE)
on the table and wait until it is done.
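That is, something like the following (VACUUM (FULL) takes an ACCESS EXCLUSIVE lock and rewrites the whole table, hence the downtime):

-- run during the declared downtime
VACUUM (FULL, FREEZE, VERBOSE) atable;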
To avoid the problem:

- Don't use LOCK on a routine basis. Once a day for the nightly bulk load is fine, as long as autovacuum has enough time to finish during the day.
- Make autovacuum faster by increasing autovacuum_vacuum_cost_limit and reducing autovacuum_vacuum_cost_delay (see the example below).
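The values below are only illustrative starting points; the parameters can be changed globally or, via storage parameters, for the affected table alone:

-- globally (requires a superuser; takes effect after a configuration reload)
ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 1000;
ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '2ms';
SELECT pg_reload_conf();

-- or only for the problematic table
ALTER TABLE atable SET (autovacuum_vacuum_cost_limit = 1000,
                        autovacuum_vacuum_cost_delay = 1);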