
Checking per-memory context memory consumption

09.2014

Writing a complex database server like PostgreSQL is not an easy task. Memory management in particular needs special attention. Internally, PostgreSQL makes use of so-called “memory contexts”. The idea of a memory context is to organize memory in groups, which are arranged hierarchically. The main advantage is that in case of an error, all relevant memory can be freed at once.

Understanding PostgreSQL memory contexts can be useful for solving a whole bunch of interesting support cases. Here is an example: Recently we stumbled across a problem. A database server kept running out of memory and was killed by the OOM killer over and over again. The backend processes were constantly increasing their memory consumption for non-obvious reasons. How can a problem like this be approached?

GDB comes to the rescue

GDB can come to the rescue and solve the riddle of memory consumption nicely. The basic procedure works as follows:

  • Create a core dump of the process in question
  • Come up with a GDB macro to debug memory
  • Run the macro

The first part is actually quite simple. To extract a core dump of a running process, we have to find out the process ID first:
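Any process listing will do. A sketch (the exact ps options are a matter of taste):

    # list the running PostgreSQL processes and pick the backend in question
    ps ax | grep postgres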

In this example the process ID of the process we want to inspect is 1999 (a simple, idle local backend).
Then it is time to create the core file. gcore can do exactly that for you:
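For the backend with PID 1999 this is a one-liner (gcore ships with gdb):

    # dump the memory image of process 1999 to a core file
    gcore 1999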

The beauty here is that gcore is just a simple shell script calling some gdb magic internally. The result will be a core file we can then make use of:
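By default gcore writes the dump to core.<pid>, so in this example a quick check could look like this:

    # the dump of backend 1999 ends up in core.1999
    ls -l core.1999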

The harder part

Then comes the harder part: Writing the gdb macro to debug those memory contexts. gdb has a scripting language to handle that. Here is the code:
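A minimal sketch of such a script is shown below, saved under the hypothetical name memory_walk.gdb. It assumes a PostgreSQL 9.x backend built with debug symbols; the types MemoryContext, AllocSet and AllocBlock and the fields blocks, endptr, firstchild, nextchild and name come from the backend's memory-context code and may differ between versions. To keep the sketch short it only covers TopMemoryContext and its direct children; a full version would recurse through the whole tree:

    # memory_walk.gdb -- sketch of a context-walking script (hypothetical file name)
    # assumes PostgreSQL 9.x struct layout and a binary with debug symbols

    define sum_context_blocks
      # sum up the raw block sizes of one memory context
      set $ctx = (MemoryContext) $arg0
      set $block = ((AllocSet) $ctx)->blocks
      set $size = 0
      while ($block)
        set $size = $size + (((AllocBlock) $block)->endptr - ((char *) $block))
        set $block = ((AllocBlock) $block)->next
      end
      printf "%s: %ld bytes\n", $ctx->name, $size
    end

    define walk_contexts
      # TopMemoryContext itself, then its direct children via the
      # firstchild / nextchild pointers (a full version recurses deeper)
      sum_context_blocks TopMemoryContext
      set $child = TopMemoryContext->firstchild
      while ($child)
        sum_context_blocks $child
        set $child = $child->nextchild
      end
    end

    walk_contexts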

The last line in the file is the actual call executing the code just written. walk_contexts will go through those memory contexts starting at the TopMemoryContext.
To run the script, the following line will be useful. The script can simply be piped into gdb. The result will list information about memory consumption:
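Assuming the script was saved as memory_walk.gdb, it can be fed to gdb together with the postgres binary and the core file:

    # run the script against the core file; the postgres binary must be
    # the one that produced the dump
    gdb $(which postgres) core.1999 < memory_walk.gdb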

The output is actually quite long, so I decided to remove a couple of lines. What you see here is how the memory contexts are organized and how much memory is in each memory context.
If you happen to see a context that uses an insane amount of memory, it will definitely bring you one step closer to finding the root cause of a memory-related problem.

In order to receive regular updates on important changes in PostgreSQL, subscribe to our newsletter, or follow us on Facebook or LinkedIn.

One response to “Checking per-memory context memory consumption”

  1. If you can afford to briefly pause a backend's operation (such as to use gdb's gcore on it) you can probably also afford to take a memory context dump directly with:

    print MemoryContextStats(TopMemoryContext)

    Per https://wiki.postgresql.org/wiki/Developer_FAQ#Examining_backend_memory_use

    It'd be nice if Linux offered non-blocking coredumps of live processes (say, by cloning the memory mapping copy-on-write like it does for fork()ed processes) but AFAIK it doesn't support that, so writing a coredump could actually be more intrusive than just asking the running Pg to print a dump.
