Keith's Ramblings…


PostgreSQL Extension Developer Tips – Part 4


So my latest release of mimeo (v0.10.0) taught me a lesson: before using a new feature, look up when it was added to make sure it's compatible with all the versions you're trying to support. That feature was the new GET STACKED DIAGNOSTICS command, which allows the capture and display of more details in the exception block to help make debugging problems easier. However, it was introduced in 9.2, and I'm trying to stay compatible with 9.1. I tried wrapping the call in an IF statement that checked the current postgres version, but apparently that command gets parsed when the function is installed, not at execution time. So I've had to remove what I thought was a nice additional feature for now.
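
For reference, this is roughly what the feature looks like in an exception block (a throwaway example function, not mimeo's actual code; it requires 9.2 or later):

```sql
-- Requires PostgreSQL 9.2+. GET STACKED DIAGNOSTICS is parsed when the
-- function is created, which is why an IF version-check can't hide it on 9.1.
CREATE OR REPLACE FUNCTION error_demo() RETURNS void AS $$
DECLARE
    v_context text;
    v_detail  text;
BEGIN
    PERFORM 1/0;  -- force an error
EXCEPTION WHEN OTHERS THEN
    GET STACKED DIAGNOSTICS
        v_context = PG_EXCEPTION_CONTEXT,
        v_detail  = PG_EXCEPTION_DETAIL;
    RAISE NOTICE 'Context: %, Detail: %', v_context, v_detail;
END
$$ LANGUAGE plpgsql;
```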

The reason this post is an extension developer tip is the way I handled releasing a fix for this. If you're upgrading an extension and you are several released versions behind, when you run the ALTER EXTENSION … UPDATE TO … command, all of the updates between your version and the target version are run in sequence. However, due to the issue described above, the 0.9.3 -> 0.10.0 update script would fail on anything older than postgres 9.2. This means anyone updating from 0.9.3 or earlier to 0.10.0 or later would never be able to get past the intermediary update script, and hence never upgrade. The key to allowing a bypass of this version's update is in the way the update scripts are named.


The name you give the file must be the version you're upgrading from and the version it upgrades to. The trick to bypass the 0.10.0 update is to create a single update script that jumps from 0.9.3 straight to 0.10.1:

mimeo--0.9.3--0.10.1.sql

Seeing this file, the extension update planner will recognize it as a shorter update path and choose it instead of running the intermediate update step from 0.9.3 to 0.10.0. The important thing to remember is to include ALL the changes that occurred between the given versions in the bypass script.
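
One way to build the bypass script is to simply concatenate the intermediate update scripts in version order and then hand-edit out the 9.2-only statements. A sketch, with placeholder files standing in for the real scripts:

```shell
# Hypothetical stand-ins for the real update scripts:
printf -- '-- changes from the 0.9.3 to 0.10.0 script\n' > mimeo--0.9.3--0.10.0.sql
printf -- '-- changes from the 0.10.0 to 0.10.1 script\n' > mimeo--0.10.0--0.10.1.sql

# Concatenate them, in version order, into the single bypass script.
# The 9.2-only statements then get removed from the result by hand.
cat mimeo--0.9.3--0.10.0.sql mimeo--0.10.0--0.10.1.sql > mimeo--0.9.3--0.10.1.sql
```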

There's a handy function that can show you every update (or downgrade) path an extension can take, so you can be sure things will work as expected:

SELECT * FROM pg_extension_update_paths('extension_name');

This will show you every single update and downgrade path possible, and if a valid path exists, all the steps along that path. This can be quite spammy for extensions with a long update history. Luckily this function returns a record set, so you can filter with a WHERE condition. Here’s an example with mimeo 0.9.2 installed and the path that would be taken without the bypass script.

keith=# select * from pg_extension_update_paths('mimeo') where source = '0.9.2' and path is not null;
 source | target |             path             
--------+--------+------------------------------
 0.9.2  | 0.9.3  | 0.9.2--0.9.3
 0.9.2  | 0.10.0 | 0.9.2--0.9.3--0.10.0
 0.9.2  | 0.10.1 | 0.9.2--0.9.3--0.10.0--0.10.1
(3 rows)

With the bypass script available, the update paths turn into this.

keith=# select * from pg_extension_update_paths('mimeo') where source = '0.9.2' and path is not null;
 source | target |         path         
--------+--------+----------------------
 0.9.2  | 0.9.3  | 0.9.2--0.9.3
 0.9.2  | 0.10.0 | 0.9.2--0.9.3--0.10.0
 0.9.2  | 0.10.1 | 0.9.2--0.9.3--0.10.1
(3 rows)

So now when I give the update command

ALTER EXTENSION mimeo UPDATE TO '0.10.1';

it will skip right over the 0.10.0 update script. This allows PostgreSQL versions older than 9.2 to update without any issues. So, my apologies to anyone who grabbed my extension right away after my blog post and ran into issues. At least this gave me an opportunity for another tip!

Written by Keith

February 9th, 2013 at 11:42 am

Posted in PostgreSQL


PostgreSQL Extension Developer Tips – Part 3


My venture into PostgreSQL extension development was the first time I'd actually had to worry about the contents of a Makefile. The PostgreSQL Extension Network's How To gave enough of an example that I was able to just copy-n-paste it and it worked; the fact that my extensions have no code to actually compile has a lot to do with that. The Makefile PGXN provides assumes that all your SQL code is in a single file. At first that wasn't a big deal, but once my extensions grew past several thousand lines spread across many separate functions, maintenance started becoming a pain. So I started learning a bit more about Makefiles, specifically the part that builds the extension sql file that gets installed.

sql/$(EXTENSION)--$(EXTVERSION).sql: sql/$(EXTENSION).sql
	cp $< $@

Basically, that just copies your extension sql file (ex. pg_jobmon.sql) that resides in the /sql folder to the specially formatted file required by postgres (ex. pg_jobmon--0.1.2.sql). So, it works great if all your sql is in a single file. But I wanted to be able to have each of my functions in its own file, and maybe other folders and files for things like tables and types. Looking through the GNU make docs, I found the variables I needed to do this and how to use them:

sql/$(EXTENSION)--$(EXTVERSION).sql: sql/tables/*.sql sql/functions/*.sql
	cat $^ > $@

The $^ variable represents all the prerequisites (part after the colon) for the make rule that you list on the previous line. In my case I have two folders, tables & functions, that contain .sql files. The $@ variable represents the target of the rule (part before the colon). For extensions, this is the specially formatted file they require. Using the cat command and the appropriate redirect, this command just takes all the files in the given folders and dumps them into a single file. One thing I had to be careful of was that the functions required that the tables be made first in the resulting SQL file if the install was going to run properly. The cat operation is done in the order that the prerequisites are listed, so the tables folder was placed before the functions folder.
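
Putting those pieces together, the relevant portion of the Makefile ends up looking something like this (the extension name and version here are illustrative; PGXN's boilerplate normally derives them from the control file):

```make
# Illustrative values
EXTENSION  = pg_jobmon
EXTVERSION = 0.1.2

# Tables must land in the output file before the functions that depend
# on them, so the prerequisite folders are listed in that order
sql/$(EXTENSION)--$(EXTVERSION).sql: sql/tables/*.sql sql/functions/*.sql
	cat $^ > $@
```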

Since my extensions are all SQL based, doing this has made my extension development tremendously easier to maintain. To see real examples, take a look at these extensions: pg_jobmon or mimeo.

And another tip that concerns the Makefile is something I just came across recently. At OmniTI, we've developed our own operating system called OmniOS. To make things easier for us internally, I started learning how to create packages for it, specifically for my PostgreSQL extensions so they're easier for anyone to install. During the learning process I discovered that the PGXN Makefile I copied from makes some assumptions about the environment it's being built on, specifically that it expects grep to be GNU grep. This isn't the case in OmniOS, and may not be the case for many other environments. So one of my coworkers showed me how to make the Makefile a little more platform independent by adding this line:

GREP ?= grep

This allows the build environment to set the grep command that is used during the build process, or if it doesn’t set it, defaults to “grep”. OmniOS does have gnu grep available, it’s just called ggrep instead.
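
For context, the PGXN boilerplate uses grep to pull the default version out of the extension's control file; with the variable in place, that line would look something like this (shown as a sketch in the style of the PGXN How To):

```make
# Allow the environment to override which grep is used (e.g. ggrep on OmniOS)
GREP ?= grep

# PGXN-style boilerplate, rewritten to use the overridable $(GREP)
EXTVERSION = $(shell $(GREP) default_version $(EXTENSION).control | \
             sed -e "s/default_version[[:space:]]*=[[:space:]]*'\([^']*\)'/\1/")
```

A build on OmniOS can then simply run gmake GREP=ggrep.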

So in addition to making extension development easier, I’ve learned my first lesson in making a saner build environment.

Written by Keith

November 19th, 2012 at 4:39 pm

Posted in PostgreSQL

Tagged with , ,

PostgreSQL Extension Developer Tips – Part 2


Following up on my previous tip, there’s a way you can make other extensions optional using the last tip’s method of finding another extension’s schema. A use case for this is my pg_jobmon extension.

First, you do the same check for the other extension’s schema

SELECT nspname INTO v_jobmon_schema FROM pg_namespace n, pg_extension e WHERE e.extname = 'pg_jobmon' AND e.extnamespace = n.oid;

Then, wherever you would optionally use your other extension, just check and see if the schema variable is null

IF v_jobmon_schema IS NOT NULL THEN
    v_job_id := add_job('PARTMAN CREATE FUNCTION: '||p_parent_table);
    v_step_id := add_step(v_job_id, 'Creating partition function for table '||p_parent_table);
END IF;

If you want to see a working example, the partition management extension I’m working on does this. If pg_jobmon is installed, it just automatically does the logging. If it’s not there, all logging steps are simply ignored and the extension does only its core functionality.
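
A condensed sketch of the pattern (the function and job names here are hypothetical; add_job and close_job are pg_jobmon functions, called dynamically since their schema isn't known in advance):

```sql
CREATE OR REPLACE FUNCTION do_work(p_parent_table text) RETURNS void AS $$
DECLARE
    v_jobmon_schema text;
    v_job_id        bigint;
BEGIN
    -- Stays NULL if pg_jobmon is not installed
    SELECT nspname INTO v_jobmon_schema
    FROM pg_namespace n, pg_extension e
    WHERE e.extname = 'pg_jobmon' AND e.extnamespace = n.oid;

    IF v_jobmon_schema IS NOT NULL THEN
        EXECUTE 'SELECT '||v_jobmon_schema||'.add_job(''HYPOTHETICAL JOB: '||p_parent_table||''')'
            INTO v_job_id;
    END IF;

    -- ... core functionality runs whether or not pg_jobmon exists ...

    IF v_jobmon_schema IS NOT NULL THEN
        EXECUTE 'SELECT '||v_jobmon_schema||'.close_job('||v_job_id||')';
    END IF;
END
$$ LANGUAGE plpgsql;
```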

You can also use this method to check for a specific version of another extension, allowing you to support multiple versions of it over time:

SELECT replace(extversion,'.','')::int INTO v_jobmon_version FROM pg_extension e WHERE e.extname = 'pg_jobmon';

IF v_jobmon_version >= 30 THEN
    ... do some stuff only in 0.3.0 or later ...
ELSE
    ... do stuff the pre 0.3.0 way ...
END IF;

Written by Keith

October 7th, 2012 at 10:44 pm

Posted in PostgreSQL


PostgreSQL Extension Developer Tips – Part 1


I’ve been working on several extensions for PostgreSQL over the last few months. I was going to do one big blog post of some developer tips, but figured I’d just do this as an ongoing series as I come across something useful to share.

One of the first issues I came across was when my extension required another extension to work. I was designing my extension to be released to the public, so I couldn’t assume what schema someone may put those other extensions in. After some discussion with coworkers and at PGCon 2012 in the Royal Oak track, I managed to find a method to dynamically use any other extension’s schema.

I’ve been working on a per-table replication extension that requires both dblink and another extension I’ve blogged about called pg_jobmon. I can’t control where someone may install dblink and I didn’t set any requirements for pg_jobmon’s schema either. The first step is to grab what schema these extensions were installed to and save that for use later.

SELECT nspname INTO v_dblink_schema FROM pg_namespace n, pg_extension e WHERE e.extname = 'dblink' AND e.extnamespace = n.oid;
SELECT nspname INTO v_jobmon_schema FROM pg_namespace n, pg_extension e WHERE e.extname = 'pg_jobmon' AND e.extnamespace = n.oid;

Now, I could just use those variables and dynamically write EXECUTE statements where those schema names are needed for the rest of the function. I found this to be quite a pain when using pg_jobmon frequently to log different steps of a function’s execution. Also, using a lot of dynamic EXECUTE statements isn’t that great for efficiency. So, just for the running of this function, I change the search path, first saving the original one so I can restore it later.

SELECT current_setting('search_path') INTO v_old_search_path;
EXECUTE 'SELECT set_config(''search_path'',''@extschema@,'||v_jobmon_schema||','||v_dblink_schema||''',''false'')';

Note that I added the extension's own schema in there as well, to make calling its own functions easier too. Then, after the function body, I restore the search path to what it was originally:

EXECUTE 'SELECT set_config(''search_path'','''||v_old_search_path||''',''false'')';

So, it’s still using EXECUTE statements, but at most it’s two instead of potentially many more. And it makes calling the functions in other schemas much easier since I can just use SELECT or PERFORM statements most of the time.
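
Here's the whole pattern in one place, as a sketch (the function name is hypothetical; @extschema@ is only substituted inside extension install scripts):

```sql
CREATE OR REPLACE FUNCTION refresh_something() RETURNS void AS $$
DECLARE
    v_dblink_schema   text;
    v_jobmon_schema   text;
    v_old_search_path text;
BEGIN
    -- Find where the required extensions actually live
    SELECT nspname INTO v_dblink_schema FROM pg_namespace n, pg_extension e
        WHERE e.extname = 'dblink' AND e.extnamespace = n.oid;
    SELECT nspname INTO v_jobmon_schema FROM pg_namespace n, pg_extension e
        WHERE e.extname = 'pg_jobmon' AND e.extnamespace = n.oid;

    -- Save the caller's search path, then prepend the schemas we need
    SELECT current_setting('search_path') INTO v_old_search_path;
    EXECUTE 'SELECT set_config(''search_path'',''@extschema@,'||v_jobmon_schema||','||v_dblink_schema||''',''false'')';

    -- ... function body: plain SELECT/PERFORM calls now resolve via the search path ...

    -- Put the search path back the way we found it
    EXECUTE 'SELECT set_config(''search_path'','''||v_old_search_path||''',''false'')';
END
$$ LANGUAGE plpgsql;
```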

One issue I ran into with this is during exceptions. The exception block resets the search path, so if you have to use any of the other extensions' functions in the exception block, you either have to write EXECUTE statements using the schema variables or set the search path again the same way.

EXECUTE 'SELECT set_config(''search_path'',''@extschema@,'||v_jobmon_schema||','||v_dblink_schema||''',''false'')';
... do stuff with other extensions ...
EXECUTE 'SELECT set_config(''search_path'','''||v_old_search_path||''',''false'')';

Now, if you look at the documentation for set_config, setting the last parameter to 'true' instead of 'false' makes the change apply only for the current transaction rather than the rest of the session. I was running into some weird edge cases if/when the function failed where the path would not reset properly and could potentially cause other issues. So I just went with 'false' and manually reset the path to ensure it doesn't interfere with other queries in the session.

Of course this can be applied to function authoring in general for PostgreSQL. But extension development is where I’ve come across it the most so far since this is most likely where you’re going to be sharing code that cannot assume the destination’s environment.

Written by Keith

October 7th, 2012 at 12:57 pm

Posted in PostgreSQL
