PostgreSQL allows end users to store BLOBs (binary large objects) in the database. Many people use the large object functions to handle binary data directly in SQL. There has long been a discussion about whether this is a good thing or not, and we expect that particular discussion to go on indefinitely. However, if you are in the “pro BLOB” camp, we want to share some insights into how binary data can be handled in PostgreSQL with maximum efficiency.
BLOBs are important. To show how they work, I have created a file which we can import into the database later on:
iMac:~ hs$ echo abcdefgh > /tmp/file.txt
To import this file, we can call the lo_import function and pass it the name of the file we want to load (as superuser). Here is how it works:
test=# SELECT lo_import('/tmp/file.txt');
 lo_import
-----------
     98425
(1 row)
What we see here is that PostgreSQL gives us a number (= object ID). Note that the filename has “vanished”, so you have to store it somewhere if you want to remember those values. The way to do this is to create a basic table:
test=# CREATE TABLE t_file (name text, oid_number oid);
CREATE TABLE
test=# INSERT INTO t_file
       VALUES ('/tmp/file.txt', lo_import('/tmp/file.txt'))
       RETURNING *;
     name      | oid_number
---------------+------------
 /tmp/file.txt |      98432
(1 row)
PostgreSQL gives you the option of using the OID data type to store object IDs. Storing the filename is easy: a text or varchar column will do the job, in case the path is needed later on. Keep in mind that the large object is completely independent of the file in the filesystem; storing the filename merely serves to remember what we imported.
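To convince yourself that the object now lives inside the database, you can write it back out to a new file using lo_export (again as superuser). This little check is my own addition, and the target path is just an example:

test=# SELECT lo_export(98432, '/tmp/file_copy.txt');
 lo_export
-----------
         1
(1 row)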
However, let’s not worry about a single file. Let’s import it a million times and see what happens:
test=# INSERT INTO t_file
       SELECT '/tmp/file.txt', lo_import('/tmp/file.txt')
       FROM generate_series(1, 1000000);
INSERT 0 1000000
test=# TABLE t_file LIMIT 10;
     name      | oid_number
---------------+------------
 /tmp/file.txt |      98432
 /tmp/file.txt |      98433
 /tmp/file.txt |      98434
 /tmp/file.txt |      98435
 /tmp/file.txt |      98436
 /tmp/file.txt |      98437
 /tmp/file.txt |      98438
 /tmp/file.txt |      98439
 /tmp/file.txt |      98440
 /tmp/file.txt |      98441
(10 rows)
The file has been imported nicely. Each copy has received a new object ID, as shown in the previous listing.
Now that you have seen how to import data, we can take a look at the internals to figure out how data is stored behind the scenes. The way PostgreSQL handles things is by adding the data to pg_largeobject:
test=# \d pg_largeobject
              Table "pg_catalog.pg_largeobject"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 loid   | oid     |           | not null |
 pageno | integer |           | not null |
 data   | bytea   |           | not null |
Indexes:
    "pg_largeobject_loid_pn_index" UNIQUE, btree (loid, pageno)
The “loid” column contains the object ID we have stored in our table. The real data is stored in a “bytea” (byte array) column. Since some files might be really large, PostgreSQL breaks them up into chunks (2048 bytes per chunk by default), one row per chunk.
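Our demo file is tiny, so each object fits into a single chunk. To see the chunking for yourself, you can count the rows per object ID; the following query is merely my own illustration:

test=# SELECT loid, count(*) AS chunks, sum(length(data)) AS bytes
       FROM pg_largeobject
       WHERE loid IN (98425, 98432)
       GROUP BY loid
       ORDER BY loid;
 loid  | chunks | bytes
-------+--------+-------
 98425 |      1 |     9
 98432 |      1 |     9
(2 rows)

A larger object would simply show more than one chunk per loid.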
Byte array fields are easy to display. However, the default output format (“hex”) is not really human readable - which hardly matters, since we are usually dealing with binary data anyway. I have imported some text in my example, so the “escape” format is a bit more readable:
test=# SET bytea_output TO escape;
SET
test=# SELECT * FROM pg_largeobject WHERE loid = 98437 ORDER BY pageno;
 loid  | pageno |     data
-------+--------+--------------
 98437 |      0 | abcdefgh\012
(1 row)
What we see here is that the BLOB interface is really a wrapper around the “bytea” data type. That is the real magic.
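By the way, you don’t have to query pg_largeobject directly to read the contents: the lo_get function returns a large object as bytea. This example is my own addition, reusing the same object ID as above:

test=# SELECT lo_get(98437);
    lo_get
--------------
 abcdefgh\012
(1 row)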
Now let’s move on to deleting BLOBs. The way to do that is to call the “lo_unlink” function, which removes the object from the system catalog:
test=# SELECT lo_unlink(98432);
 lo_unlink
-----------
         1
(1 row)
Removing a single entry is easy. However, let’s try to answer the following question: what happens when millions of BLOBs are removed from the system within the very same transaction?
Running the deletion is straightforward. The result, on the other hand, is fairly unexpected:
test=# BEGIN;
BEGIN
test=*# SELECT lo_unlink(oid_number) FROM t_file;
ERROR:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
test=!# ROLLBACK;
ROLLBACK
PostgreSQL is not able to run the transaction because we have run out of shared memory! How can that happen? When a large object is deleted, it has to be locked - and the lock table in shared memory can only hold a limited number of entries.
NOTE: Please bear in mind that row locks are NOT stored in shared memory; only object locks (table locks, large object locks, etc.) are.
As a result, we run out of lock table entries. The main question now becomes: how many locks can we store? The answer can be found in three config variables (located in postgresql.conf):
test=# SHOW max_locks_per_transaction;
 max_locks_per_transaction
---------------------------
 64
(1 row)

test=# SHOW max_connections;
 max_connections
-----------------
 100
(1 row)

test=# SHOW max_prepared_transactions;
 max_prepared_transactions
---------------------------
 0
(1 row)
The number of locks available is:
number_of_locks = max_locks_per_transaction * (max_connections + max_prepared_transactions)
So in my case it is 64 * (100 + 0) = 6,400 locks. In short, there is no way we can handle these deletions in one transaction.
Basically, we’ve got two ways to solve this problem: increase max_locks_per_transaction in postgresql.conf (which requires a restart of PostgreSQL), or unlink the BLOBs in smaller batches spread across many transactions.
Those are the only ways we can get rid of those BLOBs in the database.
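Here is a rough sketch of the batch approach - my own illustration, not part of the original example. Each run unlinks at most 1,000 objects, which stays well below the 6,400-lock limit, and the block is simply repeated until no rows are left:

-- unlink and delete BLOBs in batches of 1,000, one batch per transaction
BEGIN;
WITH batch AS (
    DELETE FROM t_file
    WHERE oid_number IN (SELECT oid_number FROM t_file LIMIT 1000)
    RETURNING oid_number
)
SELECT count(lo_unlink(oid_number)) AS unlinked FROM batch;
COMMIT;
-- repeat until the query reports 0 unlinked objects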
Sometimes, BLOBs are lost because the reference to the OID goes missing, but the object itself is not unlinked. In this case, a command line tool is available:
iMac:~ hs$ vacuumlo --help
vacuumlo removes unreferenced large objects from databases.

Usage:
  vacuumlo [OPTION]... DBNAME...

Options:
  -l, --limit=LIMIT         commit after removing each LIMIT large objects
  -n, --dry-run             don't remove large objects, just show what would be done
  -v, --verbose             write a lot of progress messages
  -V, --version             output version information, then exit
  -?, --help                show this help, then exit

Connection options:
  -h, --host=HOSTNAME       database server host or socket directory
  -p, --port=PORT           database server port
  -U, --username=USERNAME   user name to connect as
  -w, --no-password         never prompt for password
  -W, --password            force password prompt
vacuumlo will look for unreferenced large objects and delete them. This is a good way to clean a database, in case your application is buggy or something bad happens.
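For example, to clean up the test database while committing after every 1,000 removed objects, a dry run followed by the real cleanup could look like this (database name and batch size are just examples):

iMac:~ hs$ vacuumlo -n -v test
iMac:~ hs$ vacuumlo -v -l 1000 test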
If you are struggling with BLOBs, we want to point out that we offer PostgreSQL consulting as well as 24x7 database support. Our highly professional team can definitely help you with your BLOB-related problems. Contact us.
If you want to learn about the latest PostgreSQL tips and tricks, consider checking out this blogpost.
In order to receive regular updates on important changes in PostgreSQL, subscribe to our newsletter, or follow us on Facebook or LinkedIn.