Backup PostgreSQL Database Schema

07-24-2020

If you’re using a PostgreSQL database and need to back up the database schema, this article walks through some of the handy scripts I use to back up and restore a schema. I like to use Docker for everything, especially for my development environment: I can easily automate my environment and be confident I will get the same results time and time again. This article highlights some of the ways I use Docker for development, backup, and restoration of my PostgreSQL databases, but Docker is not a requirement.
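For reference, here is roughly how I spin up a disposable PostgreSQL container for local development. This is a minimal sketch; the container name, user, password, database name, and image tag are assumptions chosen to line up with the variable names in the scripts below.

docker run -d \
  --name my-db-container \
  -e POSTGRES_USER=my-db-user \
  -e POSTGRES_PASSWORD=my-db-password \
  -e POSTGRES_DB=my-database-name \
  -p 5432:5432 \
  postgres:12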

While developing an application that uses PostgreSQL as a database, I often need to back up the latest version of the database schema. I might change or add columns or tables and want to keep track of the latest version while still being able to reference a prior one. Once I’m happy with my updates, I run a simple script to snapshot just the schema (ignoring the data), save it with a timestamp, and check it into version control.

The script looks something like this:

db-dump.sh

#!/bin/bash
date=$(date +%Y-%m-%d-%H:%M)
DB_HOST=my-db-container
DB_USER=my-db-user

# -c include clean (DROP) commands in the dump
# -s dump the schema only, no data
docker exec -t $DB_HOST pg_dumpall -c -s -U $DB_USER > database/db-$date.sql

How it works:

With the database running inside a container, identified by the container name ($DB_HOST), I use the docker exec command to run pg_dumpall inside the container and write the result to a new timestamped file, e.g. db-1970-01-01-00:00.sql. The new timestamped file is a snapshot of the current database schema. I include the date, hour, and minute, but that level of precision isn’t required.
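Since Docker is not a requirement, the same schema-only snapshot can be taken by pointing pg_dumpall directly at the server. A minimal sketch, assuming the database is reachable on localhost; the host, port, and user shown are placeholders:

#!/bin/bash
# Schema-only dump without Docker; host, port, and user are illustrative.
# pg_dumpall will prompt for a password unless one is configured in ~/.pgpass.
date=$(date +%Y-%m-%d-%H:%M)
pg_dumpall -c -s -h localhost -p 5432 -U my-db-user > database/db-$date.sql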

I can run this database schema backup script regularly to ensure updates to the database are captured and shared with others through version control.
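One way to make the regular runs automatic is a cron entry. A sketch, assuming the project lives at $HOME/my-project (a hypothetical path) and db-dump.sh sits in the project root:

# Dump the schema every day at 6pm; the project path is a placeholder.
0 18 * * * cd "$HOME/my-project" && ./db-dump.sh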

To use the backed-up schema to set up a new database, I do the opposite: read the schema into a new database container instance. Again, I don’t care about the data at this point of development.

initial-setup.sh

#!/bin/bash

LATEST=./database/$(ls -t ./database/ | head -1)
DATABASE=my-database-name
DB_HOST=my-db-container
DB_USER=my-db-user

echo $LATEST # A quick sanity check

# Prepend commands that disconnect active clients and drop the existing
# database so the dump can recreate everything from scratch.
echo "SELECT * FROM pg_database WHERE datname = '$DATABASE';
UPDATE pg_database SET datallowconn = 'false' WHERE datname = '$DATABASE';
ALTER DATABASE $DATABASE CONNECTION LIMIT 1;
SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = '$DATABASE';
DROP DATABASE IF EXISTS $DATABASE;" | cat - $LATEST > temp && mv temp $LATEST

# Seed the database from the modified dump
cat $LATEST | docker exec -i $DB_HOST psql -U $DB_USER

How it works:

This script finds the latest file based on the time it was saved (NOTE: not the timestamp in the filename) and reads its contents into the database container by piping the file, via cat, into psql. Before reading the schema file into the container, the script prepends a few SQL statements to the .sql file that disconnect any active sessions and drop the existing database, so the dump can overwrite all existing content. I’m sure there is a better way to manage this complete overwrite, but this process works.
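Once the restore finishes, a quick way to confirm the schema made it across is to list the tables in the new database. A sketch, reusing the container, user, and database names from the script above:

# List tables in the restored database; names match the script's placeholders.
docker exec -it my-db-container psql -U my-db-user -d my-database-name -c '\dt'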

This approach should be used with caution: it drops the existing database, so it works well for local development but isn’t something to run against a database whose data you care about.