At work I am using Django for a web-based management interface, using PostgreSQL as the database back-end.
Django does not yet come with a migration framework to evolve the database schema, so I wrote a really simple one, based on Rails migrations.
The system is simple. You put fragments of SQL code in separate text files at a predetermined location on disk, and prefix each one of them with a number representing the version of the database schema. Your migration code also maintains a schema table in the database to keep track of the current version, and whether the last migration attempt was successful.
So, on upgrade, if the schema version is 30 and the fragments on disk go up to 43, then you run 31 through 43, wrapping each in a transaction, and rolling back at the first failure. So, if 41 fails, you’ll get as far as version 40, store an error, and you’re done.
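Here’s a rough sketch of such a runner in Python with psycopg2; the schema_version table, its last_error column, and the fragment naming scheme are assumptions for illustration, not the real code:

import os
import re
import psycopg2

def migrate(dsn, fragments_dir):
    # Assumed layout: fragments named like 0031_add_clients.sql, and a
    # one-row schema_version table with version and last_error columns.
    conn = psycopg2.connect(dsn)
    cur = conn.cursor()
    cur.execute("select version from schema_version")
    version = cur.fetchone()[0]
    fragments = sorted(
        (int(re.match(r"\d+", name).group()), name)
        for name in os.listdir(fragments_dir) if name.endswith(".sql"))
    for number, name in fragments:
        if number <= version:
            continue
        try:
            with open(os.path.join(fragments_dir, name)) as f:
                cur.execute(f.read())
            cur.execute("update schema_version set version = %s", (number,))
            conn.commit()    # each fragment runs in its own transaction
        except psycopg2.Error as exc:
            conn.rollback()  # undo the failed fragment
            cur.execute("update schema_version set last_error = %s", (str(exc),))
            conn.commit()
            break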
Sounds ok, right? Well, if this is for a single database instance, sure. Unfortunately for this scheme, the product that I work on has roughly 6000 instances in the field, on servers all over the world, with fewer than half running the most recent release. So, managing multiple release streams becomes an issue. And stream management is something I find most of these modern frameworks overlook.
What if a bugfix that I just made on HEAD has an associated schema change, and I want to backport that fix to the previous maintenance release? If the last schema version of the previous release is, say, 16, and I just added migration fragment 62, then we have a problem. Here it is:
Every migration fragment is dependent on the success of the previous one.
So, I can’t just backport fragment 62; I’d have to backport fragments 17 through 62. Yikes.
The solution is actually simple, and it’s something that SMEServer’s native databases do already. Each migration fragment is not raw SQL, it’s code, in this case Perl, but it could be anything. So, instead of blindly executing the migration fragments in order, you blindly execute the migration code fragments and let each one determine whether it needs to do its particular job.
Need to make a varchar(512) a varchar(1024)? No problem: check its size now, and if it’s already 1024 then you don’t need to do anything. Now each fragment doesn’t depend on the one that came before it, and you can safely backport only what you wanted to backport.
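As a sketch, such a self-checking fragment might look like this in Python; column_length is a hypothetical helper (one possible implementation appears further down):

def migrate(cur):
    # Idempotent: only widen clients.name if it is still the old size,
    # so this fragment is safe to run regardless of what ran before it.
    length = column_length(cur, "clients", "name")  # hypothetical helper
    if length is not None and length < 1024:
        cur.execute("alter table clients alter column name type varchar(1024)")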
So how do we know what the database, in this case a relational database like PostgreSQL, looks like now? As it turns out, the standard does have some support for that, and it’s in the information_schema views.
This would fetch all of the column names from a table called “clients”:
select column_name from information_schema.columns
where table_name = 'clients';
And this would determine the current length of the character field in that table called “name”:
select character_maximum_length from information_schema.columns
where table_name = 'clients'
and column_name = 'name';
So, an ideal migration framework would provide this information to the migration fragments, to keep their job simple. Migration isn’t done constantly, so we’re not that concerned with performance. Keep the code simple and bug-free.
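A minimal sketch of that idea, again in Python with psycopg2, implementing the hypothetical column_length helper from earlier straight from information_schema:

def column_length(cur, table, column):
    # One way the framework could implement the helper used above.
    cur.execute("""select character_maximum_length
                     from information_schema.columns
                    where table_name = %s and column_name = %s""",
                (table, column))
    row = cur.fetchone()
    return row[0] if row else None

def run_all(conn, fragments):
    # Run every fragment, every time; each one decides for itself
    # whether it has any work left to do.
    for fragment in fragments:
        fragment(conn.cursor())
        conn.commit()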
Anyone feel like writing it? I suspect I may have to.