Keith's Ramblings…

WARNING: If accidentally read, induce vomiting

Cleaning Up PostgreSQL Bloat


As a followup to my previous post on checking for bloat, I figured I'd share some methods for actually cleaning up bloat once you find it. I'll also provide some updates on the script I wrote, prompted by issues I encountered and by feedback from people that have already used it.

First, as these examples will show, the most important thing you need to clean up bloat is extra disk space. This means it is critically important to monitor your disk space usage if bloat turns out to be an issue for you. And if your database is of any reasonably large size, and you regularly do updates & deletes, bloat will be an issue at some point. I'd say a good goal is to always stay below 75% disk usage, either by archiving and/or pruning old data that's no longer needed, or simply by adding more disk space or migrating to new hardware altogether. Having less than 25% free can put you in a precarious situation where you may have a whole lot of disk space you can free up, but not enough working room to actually do any of the cleanup, or at least not without significantly impacting performance (Ex. you have to drop & recreate a bloated index instead of rebuilding it concurrently, making previously fast queries extremely slow).


The above graph (y-axis terabytes) shows my recent adventures in bloat cleanup after using this new scan method, and validates that what the script reports is actually bloat. In both this graph and the one below, there were no data purges going on, and each of the significant line changes coincided exactly with a bloat cleanup session. Back on May 26-27th you can see a huge drop in size: an initial tiny drop, followed by a fairly big increase, then the huge drop. This was me first fixing one small, but very bloated, index followed by running pg_repack to take care of both table bloat and a lot of index bloat. This is actually the group_members table I used as the example in my previous post. Over the next week or so I worked through roughly 80 bloated objects to recover about 270GB of disk space. Now, it may turn out that some of these objects will quickly return to their previous bloat levels, and those could be candidates for exclusion from the regular report. But I figured I'd go through everything wasting more than a few hundred MB just so I can better assess what the actual normal bloat level of this database is. Here's another example from another client that hadn't really had any bloat monitoring in place before (that I was aware of anyway). It's showing disk space available instead of total usage, hence the line going the opposite direction, and db12 is a slave of db11.


The easiest, but most intrusive, bloat removal method is to just run a VACUUM FULL on the given table. This will take an exclusive lock on the table (blocks all reads and writes) and completely rebuild the table to new underlying files on disk. This clears out 100% of the bloat in both the table and all indexes it contains at the expense of blocking all access for the duration. If you can afford the outage, it’s the easiest, most reliable method available. For very small tables this is likely your best option.
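For reference, a minimal sketch of that approach (the table name is just a placeholder):

    -- Rewrites the table and all of its indexes to new files; blocks all reads & writes
    VACUUM FULL my_small_table;
    -- Refresh planner statistics afterward
    ANALYZE my_small_table;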

The next option is to use the REINDEX command. This can be run on several levels: INDEX, TABLE, DATABASE. 9.5 introduced the SCHEMA level as well. Running it on the TABLE level has the same consequence of likely locking the entire table for the duration, so if you're going that route, you might as well just run a VACUUM FULL. Same for running it at the DATABASE level, although if you're running 9.5+, that version did introduce parallel vacuuming to the vacuumdb console command, which would be much more efficient. When running on the INDEX level, things are a little more flexible. All writes to the table are blocked, but a read-only query that does not hit the index you're rebuilding is not blocked. If you can afford several shorter outages on a given table, or the index is rather small, this is the best route to take for bloat cleanup.
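A quick sketch of the different levels (object names are hypothetical; SCHEMA requires 9.5+):

    REINDEX INDEX my_schema.my_index;   -- single index; blocks writes to its table
    REINDEX TABLE my_schema.my_table;   -- every index on one table
    REINDEX SCHEMA my_schema;           -- 9.5+ only
    REINDEX DATABASE my_database;       -- every index in the current database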

If you’ve got tables that can’t really afford long outages, then things start getting tricky. Before getting into pg_repack, I’d like to share some methods that can be used without third-party tools. Index bloat is the most common occurrence, so I’ll start with that.

If you’ve just got a plain old index (b-tree, gin or gist), there’s a combination of 3 commands that can clear up bloat with minimal downtime (depending on database activity). The CONCURRENTLY flag to the CREATE INDEX command allows an index to be built without blocking any reads or writes to the table. So say we had this bloated index

No dead tuples (so autovacuum is running efficiently) and 60% of the total index is free space that can be reclaimed. A handy command to get the definition of an index is pg_get_indexdef(regclass). In this case it’s a very easy index definition, but when you start getting into some really complicated functional or partial indexes, having a definition you can copy-n-paste is a lot safer.
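For example, a call like this (the index name is made up) returns the full CREATE INDEX statement needed to recreate it:

    SELECT pg_get_indexdef('public.group_members_user_id_idx'::regclass);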

Now we can write our set of commands to rebuild the index
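Here's a sketch of that sequence, assuming a simple single-column index (object names are placeholders; the original post showed the real commands along with their runtimes):

    ANALYZE public.group_members;

    -- Build the replacement index without blocking reads or writes
    CREATE INDEX CONCURRENTLY group_members_user_id_idx1
        ON public.group_members (user_id);

    ANALYZE public.group_members;

    -- Drop the bloated original (DROP INDEX CONCURRENTLY requires 9.2+)
    DROP INDEX CONCURRENTLY public.group_members_user_id_idx;

    -- Optional: give the new index the old name. This can briefly block sessions.
    ALTER INDEX public.group_members_user_id_idx1
        RENAME TO group_members_user_id_idx;

    ANALYZE public.group_members;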

I threw the ANALYZE calls in there just to ensure that the catalogs are up to date for any queries coming in during this rebuild. It may not really be necessary, but I was doing this on a very busy table, so I'd rather be paranoid about it. Neither the CREATE nor the DROP command will block any other sessions that happen to come in while this is running. However, that final ALTER INDEX call can block other sessions coming in that try to use the given table. But the rename is optional and can be done at any time later. After the DROP command, your bloat has been cleaned up. While concurrent index creation does not block, there are some caveats with it, the major one being that it can take much longer to rebuild the index. One of these for the second client above took 4.5 hours to complete. The documentation on building indexes concurrently goes into more detail on this, and how to deal with it possibly failing.

If you're running this on a UNIQUE index, you may run into an issue if it was created as a UNIQUE CONSTRAINT vs a UNIQUE INDEX. Functionally, both are the same as far as PostgreSQL is concerned, and under the hood creating a unique constraint will just create a unique index anyway. The big difference is you will not be able to drop a unique constraint concurrently. You will have to do an ALTER TABLE … DROP CONSTRAINT … call, which requires an exclusive lock, just like the RENAME above. Also, the index is more flexible since you can make a partial unique index as well. So it's better to just make a unique index vs a constraint if possible.

PRIMARY KEYs are another special case. Functionally, they're no different than a unique index with a NOT NULL constraint on the column. But they are marked specially in the catalog and some applications specifically look for them. You can do something very similar to the above, taking advantage of the USING INDEX clause of the ADD CONSTRAINT … PRIMARY KEY command.
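A sketch of the swap, assuming an integer id column (table and index names are made up; the original post showed the real commands and their runtimes):

    -- Build a replacement unique index without blocking reads or writes
    CREATE UNIQUE INDEX CONCURRENTLY group_members_pkey_new
        ON public.group_members (id);

    -- Swap it in as the primary key; these statements take a brief exclusive lock
    BEGIN;
    ALTER TABLE public.group_members DROP CONSTRAINT group_members_pkey;
    ALTER TABLE public.group_members
        ADD CONSTRAINT group_members_pkey PRIMARY KEY USING INDEX group_members_pkey_new;
    COMMIT;
    -- The adopted index is automatically renamed to match the constraint name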

I gave full command examples here so you can see the runtimes involved. The concurrent index creation took quite a while (about 46 minutes), but everything besides the analyze commands was sub-second. Giving the ADD PRIMARY KEY command an already existing unique index to use allows it to skip the index creation and validation that command would normally have to do.

As always, there are caveats to this. If the primary key, or any unique index for that matter, has any FOREIGN KEY references to it, you will not be able to drop that index without first dropping the foreign key(s). If there are only 1 or 2 of those, you can likely do this in a transaction surrounding the drop & recreation of the primary key with commands that also drop and recreate the foreign keys. But the more of them there are, the longer the outage, since the foreign key validation locks all tables involved. It also increases the likelihood of an error in the DDL you're writing to manage recreating everything. In that case, it may just be better to take the outage and rebuild the primary key with the REINDEX command.

In all cases where I can use the above methods, I always try to use those first. They're the native methods built into the database and, as long as you don't typo the DDL commands, they're not likely to be prone to any issues cropping up later down the road. And since index bloat is primarily where I see the worst problems, they solve most cases (the second graph above was all index bloat). If you're unable to use any of them, though, the pg_repack tool is very handy for removing table bloat or handling situations with very busy or complicated tables that cannot take extended outages. It's gotten pretty stable over the last year or so, but having seen some of the bugs that were encountered with it previously, I use it as a last resort for bloat removal. Also, if you're running low on disk space, you may not have enough room for pg_repack since it requires rebuilding the entire table and all indexes in secondary tables before it can remove the original bloated table. As I said above, I did use it where you see that initial huge drop in disk space on the first graph, but before that there was a rather large spike to get there. In that case, the table had many, many foreign keys & triggers and was a very busy table, so it was easier to let pg_repack handle it.

For table bloat, Depesz wrote some blog posts a while ago that are still relevant with some interesting methods of moving data around on disk. This can also be handy when you are very low on disk space.

Bloat Removal Without Table Swapping

Bloat Removal By Tuples Moving

Since I initially wrote my blog post, I've had some great feedback from people already using the script. I've gotten several bugs fixed as well as added some new features, with version 2.1.0 being the latest available as of this blog post. json is now the preferred, structured output method if you need to see more details outside of querying the stats table in the database. I also added an additional --exclude_object_file option that allows more fine-grained filtering when you want to ignore certain objects in the regular report, but not forever in case they get out of hand. I updated the README with some examples of that since it's a little more complex.

I also made note of the fact that this script isn't something that's made for real-time monitoring of bloat status. Since it's doing full scans on both tables and indexes, it has the potential to force data out of shared buffers. So if you keep running it often, you may affect query performance of things that rely on data being readily available there. It's best to run it maybe once a month, or once a week at most, during off-peak hours. If you have particularly troublesome tables you want to keep an eye on more regularly, the --tablename option allows you to scan just that specific table and nothing else. Once you've cleaned up the majority of your bloat issues after the first few runs of the script and seen how bad things can get, bloat shouldn't grow so quickly that you need to run it very often. If it does, you may want to re-evaluate how you're using PostgreSQL (Ex. MVCC makes it not great as a queuing system).

If anyone else has some handy tips for bloat cleanup, I’d definitely be interested in hearing them.

Written by Keith

June 8th, 2016 at 5:25 pm

Posted in PostgreSQL


Checking for PostgreSQL Bloat


My post almost 2 years ago about checking for PostgreSQL bloat is still one of the most popular ones on my blog (according to Google Analytics anyway). Since that’s the case, I’ve gone and changed the URL to my old post and reused that one for this post. I’d rather people be directed to correct and current information as quickly as possible instead of adding an update to my old post pointing to a new one. I’ve included my summary on just what exactly bloat is again below since that seemed to be the most popular part.

UPDATE: I wrote a followup post on methods for actually Cleaning Up PostgreSQL Bloat once you’ve identified it.

The intent of the original post was to discuss a python script I'd written for monitoring bloat status. Since that time, I've been noticing that the query used in v1.x of that script (obtained from the check_postgres module) was not always accurate and was often not reporting on bloat that I knew for a fact was there (Ex: I just deleted over 300 million rows, vacuumed & analyzed the table, and still no bloat? Sure it could happen, but highly unlikely). So I continued looking around and discovered the pgstattuple contrib module that comes with PostgreSQL. After discussing it with several of the core developers at recent PostgreSQL conferences (PGConfUS & PGCon), I believe this is a much, much better way to get an accurate assessment of the bloat situation. This encouraged me to do a near complete rewrite of my script, and v2.0.0 is now available. It's not a drop-in replacement for v1.x, so please check the --help for new options.

pgstattuple is a very simple, but powerful, extension. It doesn't require any additional libraries to be loaded and just adds a few functions you can call on database objects to get some statistics about them. The key function for bloat is the default one, pgstattuple(regclass), which returns information about live & dead tuples and the free space contained in the given object. If you read the description below on what bloat actually is, you'll see that those data points are exactly what we're looking for. The difference between what this function is doing and what the check_postgres query is doing is quite significant, though. The check_postgres query is doing its best to guess what is dead & free space based on the current statistics in the system catalogs. pgstattuple actually goes through and does a full scan on the given table or index to see what the actual situation is. This does mean it can be very, very slow on large tables. The database I got the examples below from is 1.2TB, and a full bloat check on it takes just under 1 hour. But with the inaccuracies I've seen being returned by the simpler query, this time can be well worth it. The script stores the statistics gathered in a table so they can be easily reviewed at any time and even used for monitoring purposes, just like check_postgres.
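Using it directly is straightforward (the object names here are just examples):

    CREATE EXTENSION pgstattuple;

    -- Full scan of a table; returns live/dead tuple counts & sizes and free space
    SELECT * FROM pgstattuple('public.group_members');

    -- Works on indexes as well
    SELECT * FROM pgstattuple('public.group_members_user_id_idx');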

Before showing what the script can do, I just want to re-iterate some things from my old post because they’re important. Bloat percentage alone is a poor indicator of actual system health. Small tables may always have a higher than average bloat, or there may always be 1 or 2 pages considered waste, and in reality that has next to zero impact on database performance. Constantly “debloating” them is more a waste of time than the space used. So the script has some filters for object size, wasted space and wasted percentage. This allows the final output of the bloat report to provide a more accurate representation of where there may actually be problems that need to be looked into.

Another option is a filter for individual tables or indexes to be ignored. If you understand why bloat happens, you will come across cases where a table is stuck at a certain bloat point at all times, no matter how many times you VACUUM FULL it or run pg_repack on it (those two things do remove it, but it quickly comes back). This happens with tables that have a specific level of churn with the rows being inserted, updated & deleted. The number of rows being updated/deleted is balanced with the number of rows being inserted/updated as well as the autovacuum schedule to mark space for reuse. Removing the bloat from tables like this can actually cause decreased performance because instead of re-using the space that VACUUM marks as available, Postgres has to again allocate more pages to that object from disk first before the data can be added. So bloat is actually not always a bad thing and the nature of MVCC can lead to improved write performance on some tables. On to the new script!

So as an example of why this new, slower method can be worth it, here’s the bloat report for a table and its indexes from the old script using check_postgres

Here are the results from the statistics table in the new version

Yes, all those indexes did exist before. The old query just didn't think they had any bloat at all. There's also a nearly 4x difference in wasted space in the table alone. It's only 37% of the table in this case, but if you're trying to clean up bloat due to low disk space, 12GB can be a lot. Another really nice thing pgstattuple provides is a distinction between dead tuples and reusable (free) space. You can see the dead tuple space is quite low in this example. That means autovacuum is running efficiently on this table and marking dead rows from updates & deletes as re-usable. If you see that dead tuples are high, that could indicate autovacuum is not running properly and you may need to adjust some of the vacuum tuning parameters that are available. In this case, even a normal vacuum was not freeing the reusable space back to the operating system (see below for why that is), so either a VACUUM FULL or pg_repack run is required to reclaim it. Here's the result from making a new index on user_id:
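The new index was built concurrently, roughly like this (the exact statement and output from the original post aren't reproduced here):

    CREATE INDEX CONCURRENTLY group_members_user_id_idx1
        ON public.group_members (user_id);
    ANALYZE public.group_members;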

You can see the new index group_members_user_id_idx1 is now down to only 9% wasted space and much smaller. Here’s the result after running pg_repack to clear both the table and all index bloat:

PostgreSQL 9.5 introduced the pgstattuple_approx(regclass) function, which tries to take advantage of visibility map statistics to speed up gathering tuple statistics, possibly sacrificing some accuracy since it's not hitting each individual tuple. It only works on tables, though. This option is available in the script using the --quick argument. There's also pgstatindex(regclass), which gives some more details on index pages and how the data in them is laid out, but I haven't found a use for that in the script yet.
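For example (table and index names are placeholders):

    -- 9.5+, tables only; uses the visibility map to avoid scanning every tuple
    SELECT * FROM pgstattuple_approx('public.group_members');

    -- B-tree index page statistics
    SELECT * FROM pgstatindex('public.group_members_user_id_idx');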

The same output options the old script had are still available: --simple to provide a text summary useful for emails, and --dict, which is a python dictionary that provides structured output as well as greater detail on the raw statistics (basically just the data straight from the table). UPDATE: As of version 2.1.0 of the script, the --json & --jsonpretty options have been added and are the preferred structured output format unless you actually need a python dictionary. The table inside the database provides a new, easy method for reviewing the bloat information as well, but just be aware it is rebuilt from scratch every time the script runs. There's also a new option which I used above (-t, --tablename) that you can use to get the bloat information on just a single table. See the --help for more information on all the options that are available.

Why Bloat Happens

For those of you newer to PostgreSQL administration, where this may be the first time you're hearing about bloat, I figured I'd take the time to explain why this scenario exists and why tools like this are necessary (until they're hopefully built into the database itself someday). It's something most people don't understand unless someone explains it to them first, or until they run into the headaches it causes when it's not monitored and learn about it the hard way.

MVCC (multi-version concurrency control) is how Postgres has chosen to deal with multiple transactions/sessions hitting the same rows at (nearly) the same time. The documentation, along with Wikipedia, provides excellent and extensive explanations of how it all works, so I refer you there for all the details. Bloat is a result of one particular part of MVCC, concentrated around the handling of updates and deletes.

Whenever you delete a row, it’s not actually deleted, it is only marked as unavailable to all future transactions taking place after the delete occurs. The same happens with an update: the old version of a row is kept active until all currently running transactions have finished, then it is marked as unavailable. I emphasize the word unavailable because the row still exists on disk, it’s just not visible any longer. The VACUUM process in Postgres then comes along and marks any unavailable rows as space that is now available for future inserts or updates. The auto-vacuum process is configured to run VACUUM automatically after so many writes to a table (follow the link for the configuration options), so it’s not something you typically have to worry about doing manually very often (at least with more modern versions of Postgres).

People often assume that VACUUM is the process that should return the disk space to the file system. It does do this, but only in very specific cases. That used space is contained in page files that make up the tables and indexes (called objects from now on) in the Postgres database system. Page files all have the same size, and differently sized objects just have as many page files as they need. If VACUUM happens to mark every row in a page file as unavailable AND that page also happens to be the final page for the entire object, THEN the disk space is returned to the file system. If there is a single available row, or the page file is anything other than the last one, the disk space is never returned by a normal VACUUM. This is bloat. Hopefully this explanation of what bloat actually is shows you how it can sometimes be advantageous for certain usage patterns of tables, and why I've included the option to ignore objects in the report.

If you give the VACUUM command the special flag FULL, then all of that reusable space is returned to the file system. But VACUUM FULL does this by completely rewriting the entire table (and all its indexes) to new pages and takes an exclusive lock on the table the entire time it takes to run (CLUSTER does the same thing, but what that does is outside the scope of this post). For large tables in frequent use, this is problematic. pg_repack has been the most common tool we’ve used to get around that. It recreates the table in the background, tracking changes to it, and then takes a brief lock to swap the old bloated table with the new one.
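As a point of reference, a typical pg_repack invocation looks roughly like this (database and table names are placeholders; check pg_repack's documentation for the full option list):

    pg_repack -d mydb --table public.group_members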

Why bloat is actually a problem when it gets out of hand is not just the disk space it uses up. Every time a query is run against a table, the visibility flags on individual rows and index entries are checked to see if they are actually available to that transaction. On large tables (or small tables with a lot of bloat), the time spent checking those flags builds up. This is especially noticeable with indexes, where you expect an index scan to improve your query performance and it seems to be making no difference, or is actually worse than a sequential scan of the whole table. And this is why index bloat is checked independently of table bloat, since a table could have little to no bloat but one or more of its indexes could be badly bloated. Index bloat (as long as it's not a primary key) is easier to solve because you can either just reindex that one index, or you can concurrently create a new index on the same column and then drop the old one when it's done.

In all cases when you run VACUUM, it’s a good idea to run ANALYZE as well, either at the same time in one command or as two separate commands. This updates the internal statistics that Postgres uses when creating query plans. The number of live and dead rows in a table/index is a part of how Postgres decides to plan and run your queries. It’s a much smaller part of the plan than other statistics, but every little bit can help.

I hope this explanation of what bloat is, and how this tool can help with your database administration, has been helpful.

Written by Keith

May 27th, 2016 at 11:55 am

Posted in PostgreSQL


Document Storage in PostgreSQL & Open Source Benefits


Update 2016-07-20: Since this blog post, I’ve recently gone back and updated pg_doc_store to take advantage of the new INSERT ON CONFLICT (upsert) feature in 9.5. So the extension is much more ready for possible production use if anyone finds it useful.

This past week I’ve had two experiences that show the amazing benefits of having your code be part of an open source community. The first involves my pg_partman extension. People have been asking me for quite some time about having a more generalized partitioning solution beyond just time/serial. I’ve resisted because that’s really not the focus of the tool since outside of those two types, partitioning is usually a once and done setup and only rarely needs further maintenance. Also, that would add quite a bit more complexity if I wanted to support, for example, all the things that MySQL does (range, list, hash, key) and all the variations that can be possible in there. I’ve already been feeling feature creep as it is, so wasn’t in a hurry for this. But, a user of pg_partman submitted some great work for getting a generalized range partition setup going with hopes of possibly integrating it with pg_partman. I instead thought it would be better released as its own tool. You can find his repo here and give it a try if you’ve been needing that feature: Range Partitioning.

I told that story to tell another similar story with the roles reversed. Around the same time I was testing out that range partitioning code, I came across this talk from Rob Conery about the new JSONB document storage features in PostgreSQL 9.4 (as well as learning about his interesting Star Wars Theory (possible episode 7 spoiler alert)).

I still hadn’t had any chances for practical use of JSON in any work I’ve done, but the pg_docs_api he demos at the end caught my interest. And then his note at the bottom of his README requesting a plpgsql version gave me something practical to work on to learn a bit more.

The closest I’ve come to actually using any NoSQL has been using Redis as a caching frontend and even then it was for a third-party tool where I don’t really look at the contents much. I did do the MongoDB tutorial program so I’m familiar with how it basically works, but never used it in production. One thing I always thought was neat was how you can just throw schemaless data at it and it (mostly) stores it without argument, adding some extra meta-data to make it more useful later. The pg_docs_api is the first attempt I saw to bring something similar to this to PostgreSQL. And even if it turns out there’s something like this already out there and I’m duplicating work, or it doesn’t actually end up being very useful for others in the end, it was still useful for me as a learning opportunity.

So I forked the repo and did the work. I submitted it back as a pull request, turning it into an extension as well. Turns out the author decided to do exactly what I did above: he suggested I release it as my own project since he felt it was unique enough from his. So I give you pg_doc_store.

Just as a quick demo, there are some examples below. More details can be found in the docs. The original author looked to be replicating the create, save, & find commands in MongoDB, so I tried to keep that theme going. Just throw your json at it with the save_document() command and things should "just work". The only caveat at this time is that it will require the UPSERT feature in PostgreSQL 9.5 to actually be 100% atomic and cause no consistency errors. I just use an exception loop for now, so beware of race conditions.
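That exception loop follows the classic retry pattern from the PostgreSQL documentation. A generic sketch of the idea, assuming a collection table with a uuid primary key id and a jsonb body column (this is not the extension's actual code):

    DO $$
    DECLARE
        v_id   uuid  := 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11';
        v_body jsonb := '{"name": "keith"}';
    BEGIN
        LOOP
            -- Try to update an existing document first
            UPDATE mycollection SET body = v_body WHERE id = v_id;
            IF FOUND THEN
                EXIT;
            END IF;
            BEGIN
                INSERT INTO mycollection (id, body) VALUES (v_id, v_body);
                EXIT;
            EXCEPTION WHEN unique_violation THEN
                -- Another session inserted the same id first; loop back and retry the UPDATE
            END;
        END LOOP;
    END
    $$;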

And while this functionality is currently all based on MongoDB, I don’t have any plans on restricting the interface or underlying structure to the way MongoDB is. Like Rob found, it does provide a good template to get an API going and it’s widely known so its basic functionality isn’t so foreign.

The create_document() function can be thought of like the create command in MongoDB. The tablename given is your collection. All functions return a set, so you can either call them with just SELECT or you can do SELECT * FROM to get more formatted output.
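Something like this, assuming the function takes just the collection's table name:

    SELECT * FROM create_document('public.mycollection');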

The save_document() function is the main workhorse, just as the save command is in Mongo.
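A sketch of the demo, assuming save_document(tablename, json) as the signature (the result rows from the original post aren't reproduced here):

    DROP TABLE public.mycollection;
    SELECT * FROM save_document('public.mycollection', '{"name": "keith", "relation": "author"}');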

You can see I dropped the table so it didn’t exist anymore. Just pass the tablename anyway, and it will create the table if it’s not already there. You can see it also added the primary key id value into the document itself. These are always kept in sync so you can then update your document later. If you pass an id value that does not exist, this will insert it as a new row. Keep in mind the id value here is the uuid data type, so it must meet certain criteria and can’t just be any random number/string. So it’s better to just let it generate it on its own if it’s a new document. This function is where the UPSERT is really needed.

Here I inserted another row and use the find_document() function to look for a document with a matching json key/value pair.
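Assuming the same hypothetical signatures as above, that sequence looks roughly like:

    SELECT * FROM save_document('public.mycollection', '{"name": "sam", "relation": "editor"}');
    SELECT * FROM find_document('public.mycollection', '{"relation": "editor"}');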

By default the result is sorted by id. There are parameters to find_document() to sort by different keys and in a different order. There is also a search_document() function that tries to make use of full-text search. I've not really done much with that before, and it seems pretty limited to me right now. It's only populating the tsvector search column with data based on the document values. I hope to learn more about this and make it more effective. A trigger keeps this up to date after every insert/update. Both these methods take advantage of GIN indexing, so finding document data via either method should be extremely fast and efficient.

So two different PostgreSQL projects out there taking advantage of others building off their ideas. Some other interesting news following this same theme is Greenplum has open sourced their PostgreSQL fork and looks to try to contribute some of its features back to the core system. Hopefully the potential patent issues can be resolved and we’ll see some great new features in future versions of Postgres.

So thank you to Rob for releasing your code under the BSD license and letting me do this!

Written by Keith

November 6th, 2015 at 11:48 am

PG Partition Manager v2.0.0 – Background Worker, Better Triggers & Extension Versioning Woes


PG Partition Manager has been the most popular project I’ve ever done a significant amount of work on and I really appreciate everyone’s feedback for the roughly 2 years it’s been out there. I’ve got plenty more ideas for development and features and look forward to being able to move forward on them with this new major version released.

PostgreSQL 9.3 introduced the ability for user created, programmable background workers (BGW). 9.4 then introduced the ability to dynamically start & stop these with an already running cluster. The first thing that popped into my mind when I heard about this was hopefully having some sort of built-in scheduling system. There still hasn’t been a generalized version of anything like this, so in the mean time I studied the worker_spi contrib module. This is a very simple BGW example with a basic scheduler that runs a process with a configurable interval. This is basically all pg_partman needs for partition maintenance, and what required an external scheduler like cron before.
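As an example, the settings look roughly like this in postgresql.conf (the values are placeholders; the BGW library also has to be loaded via shared_preload_libraries, which is why a restart is needed to enable it):

    shared_preload_libraries = 'pg_partman_bgw'

    pg_partman_bgw.interval = 3600          # seconds between maintenance runs
    pg_partman_bgw.role = 'keith'           # role that calls run_maintenance()
    pg_partman_bgw.dbname = 'mydb'          # database(s) to run maintenance in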

The above are the new postgresql.conf options that pg_partman v2.0.0 can use to control the background worker it now comes with. These options can be changed at any time with a simple reload, but to start the BGW requires a restart of the database. This is my first venture into writing C, so it’s still very simplistic, and essentially does nothing different than an external scheduler would do. But now you no longer need that external scheduler if you don’t require calling run_maintenance() on any individual partition sets!

The BGW is completely optional and the Makefile has an option to not compile it if you’d just like to stick with the plain, plpgsql version of the extension.

Another major change with v2 is the contents of the partitioning trigger function. If you look back at my first post about pg_partman, you’ll see me mention there are two possible methods to writing the trigger: static & dynamic. You can read that post for more details. All you need to know now is that there are no longer distinct static & dynamic partitioning types. The different trigger methods have been combined into one trigger (thanks to a suggestion from Andrew Dunstan). An example below from the daily partitioning unit test:
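The actual generated function from that test is fairly long, so here's a heavily trimmed sketch of the shape it takes (a daily set on a col3 timestamp column; table names and dates are illustrative, not copied from the unit test):

    CREATE OR REPLACE FUNCTION partman_test.time_taptest_table_part_trig_func()
    RETURNS trigger LANGUAGE plpgsql AS $$
    DECLARE
        v_count     int;
        v_partition text;
    BEGIN
        IF TG_OP = 'INSERT' THEN
            -- "Static" portion: explicit branches for the premade child tables
            IF NEW.col3 >= '2015-06-11' AND NEW.col3 < '2015-06-12' THEN
                INSERT INTO partman_test.time_taptest_table_p2015_06_11 VALUES (NEW.*);
            ELSIF NEW.col3 >= '2015-06-12' AND NEW.col3 < '2015-06-13' THEN
                INSERT INTO partman_test.time_taptest_table_p2015_06_12 VALUES (NEW.*);
            ELSE
                -- "Dynamic" fallback: insert into the right child if it exists...
                v_partition := 'time_taptest_table_p'||to_char(NEW.col3, 'YYYY_MM_DD');
                SELECT count(*) INTO v_count FROM pg_catalog.pg_tables
                WHERE schemaname = 'partman_test' AND tablename = v_partition;
                IF v_count > 0 THEN
                    EXECUTE format('INSERT INTO partman_test.%I VALUES ($1.*)', v_partition) USING NEW;
                ELSE
                    RETURN NEW;  -- ...otherwise let the row land in the parent
                END IF;
            END IF;
        END IF;
        RETURN NULL;  -- row already routed to a child; skip the insert into the parent
    END
    $$;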

You can see that the trigger function starts out like the static method, explicitly giving the SQL statements for inserting into each child table that it handles. But then the final condition falls back to using the dynamic method: first checking whether the relevant child table exists and then inserting into it (otherwise the row goes to the parent). So this hybrid trigger now allows the performance benefit of the static method and the flexibility of the dynamic method, only losing performance when you try and insert data outside of the premake range. The time-custom type still exists and still uses the lookup-table method from v1, so nothing has changed there. This new hybrid function is only available for the standard time intervals and serial partitioning.

For those updating from the v1.x.x version of pg_partman, this new trigger function will be put in place automatically the next time a new child table is created for a partition set. If you’d like to convert all your trigger functions immediately, see the v2.0.0 extension update file for some simple plpgsql that can do that for you. I couldn’t make updating the trigger functions part of the extension update itself because doing so would then make all the trigger functions part of the extension (that was a fun accident to run into during testing).

There have been some other fixes to the extension as well, but these are the biggest and most relevant to most users. Please see the CHANGELOG for more info.

I've also taken this time to make pg_partman only compatible with PostgreSQL 9.4+. It uses the dynamic BGW feature, so I couldn't go with 9.3. And while you can compile the extension without the BGW feature, I did include some new code that is only compatible with 9.2+. I know this will inconvenience some users, but with the way extension versions are managed within the database, I couldn't see any easy way to maintain both a v1 & v2 branch. For every new v1 release I'd make, I'd also have to maintain a separate update file to get that version to v2, as well as having to go back and update every previous v1 -> v2 update to include any relevant changes for people sticking with v1 that want to change to v2 in the future. I just don't have that kind of time for extension management, and I could see it very easily leading to a mess if I didn't find a way to keep all that versioning straight. I will still fix bugs in v1, but all new feature development will be on v2 only. Drawing the line at 9.4 also gives me the freedom to use the latest and greatest features in Postgres and not have to compromise for backward compatibility, at least for the near future.

Upgrading to 9.4 is definitely worth the time to do if you’d like to keep using this extension easily and get the new features it will have in the future. If you’re 2 or 3 major versions behind (or more) and are curious why upgrading is worth the time, I’ll be giving an updated version of my Don’t Forget the Elephant talk at this year’s PGOpen in Dallas, Tx.

Written by Keith

June 11th, 2015 at 3:30 pm

PG Partman – Sub-partitioning


After my talk at PGCon 2014 where I discussed pg_partman, someone I met at the bar track said they’d use it in a heartbeat if it supported sub-partitioning. Discussing this with others and reading online, I found that there is quite a demand for this feature and the partitioning methods in MySQL & Oracle both support this as well. So I set out to see if I could incorporate it. I thought I’d had it figured out pretty easily and started writing this blog post a while ago (last October) to precede the release of version 1.8.0. Then I started working on the examples here and realized this is a trickier problem to manage than I anticipated. The tricky part being managing the context relationship between the top level parent and their child sub-partitions in a general manner that would work for all partitioning types pg_partman supports. When I first started working on the feature, I’d get things like this:

Obviously, having 2014 child sub-partitions in the 2013 parent partition set doesn’t make sense. I believe I’ve gotten this figured out and handled now in version 1.8.0 and fixed several issues encountered since then in 1.8.1 and 1.8.2 (thanks to the users that reported them!). Also if the parent is serial and the child is time (or vice versa), there’s no contextual relationship and all the child tables will be created in every partition set.

When I first saw that all sub-partitioning did was move the data further down into the inheritance tree, I wondered what the expected gains from it were, outside of just organizing the data better. The use case of the person I mentioned in the first sentence gave a bit of a hint to the gains. If I remember correctly, they had an extremely large amount of time-series data that needed to be queried as efficiently as possible. One of the advantages of partitioning is the constraint exclusion feature (see my other post for more details on this), which allows the query plan to skip tables that it knows don't contain that data. But postgres still has to do some work in order to figure out that those tables can be excluded in the first place. For very large partition sets, even this is a noticeable performance hit. Or so I'm told, as I do not have data sets near big enough to really evaluate this claim. But knowing how constraint exclusion works, I could see that as a possibility. Sub-partitioning, with a known naming pattern to the child tables, allows an application to directly target the exact child tables it needs and avoid even the minor overhead of constraint exclusion in the query plan.

Let’s see how it works.

First I create a standard, yearly partitioned table set
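Something along these lines, assuming the v1.8 create_parent(parent_table, control_column, type, interval) signature (the schema and columns here are a made-up test setup, not the post's original):

    CREATE SCHEMA partman_test;

    CREATE TABLE partman_test.time_taptest_table (
        col1 serial PRIMARY KEY,
        col2 text,
        col3 timestamptz NOT NULL DEFAULT now()
    );

    SELECT partman.create_parent('partman_test.time_taptest_table', 'col3', 'time-static', 'yearly');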

Next some data is added and I check that everything looks right
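For example, using the hypothetical test table from above:

    INSERT INTO partman_test.time_taptest_table (col3)
    SELECT generate_series('2013-01-01'::timestamptz, '2015-03-06'::timestamptz, '1 hour');

    -- Quick check that rows landed in the yearly children, not the parent
    SELECT count(*) FROM ONLY partman_test.time_taptest_table;   -- should be 0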

Say now we want to subpartition by day to better organize our data because we’re expecting to get A LOT of it. The new create_sub_parent() function works just like create_parent() except the first parameter is instead an already existing parent table whose children we want to partition. In this case, we’ll be telling it we want each yearly child table to be further partitioned by day.
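Assuming the same argument pattern as create_parent() (so this is a sketch, not the post's original command):

    SELECT partman.create_sub_parent('partman_test.time_taptest_table', 'col3', 'time-static', 'daily');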

Hopefully you've realized that all the data we inserted hasn't been partitioned to the new daily tables yet. It all still resides in each one of the yearly sub-parent tables, and the only daily tables that were created in 2015 are the ones around the current date of March 6th, 2015 +/- 4 days (since the premake config value is set to 4). For previous and future years, only a single partition was created for the lowest possible values. All parent tables in a partition set managed by pg_partman, at all partitioning levels, have at least one child, even if they have no data. You can see that for 2014 below. I don't yet have things figured out for the data partitioning functions & scripts to handle sub-partitioning, but in the mean time, a query like the one below the table definitions can generate the script lines for every sub-parent table for a given parent.
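Something like this could generate a partition_data.py command line for each yearly sub-parent (the exact flags for that script may differ from what's shown; check its --help):

    SELECT format('partition_data.py -c "host=localhost dbname=mydb" -p %s -t time',
                  inhrelid::regclass)
    FROM pg_inherits
    WHERE inhparent = 'partman_test.time_taptest_table'::regclass
    ORDER BY 1;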

After running the partitioning script for each parent, you can see it automatically created 365 child partitions for 2014 (because there was data in the tables for every day) and only 69 for 2015 since we’re only partway into the year. It did so for the other years as well, but I figured showing one should be proof enough it worked.

Since this is the first sub-partition level, the parent table argument to create_sub_parent() just happens to be the same as we originally used for create_parent(). If you then wanted to again further sub-partition one of the new child tables, you would feed that child table to create_sub_parent() and it would be different.

I’ve also included a file in pg_partman now that gives some more detailed instructions on this and also how to undo such partitioning as well. If anyone has any issues with this feature, I’d appreciate the feedback.

Also, as a sneak preview for what’s currently in development, I believe I’ve gotten a very simple background worker process to handle partition maintenance working. This means, for the general maintenance where you call run_maintenance() with no parent table argument, you will no longer need an external scheduler such as cron! Just set a few variables in postgresql.conf and pg_partman will take care of things all within postgres itself. This does mean the next major version of pg_partman (2.0.0) will be 9.4+ only (I’m creating a dynamic BGW), but it also allows me to simplify a lot of code I’d been keeping around for 9.1 compatibility and add more features that are only available in later versions. So, think of this as more motivation to get your systems upgraded if you want to keep up with new features in this extension!

Written by Keith

March 9th, 2015 at 11:40 am