PostgreSQL Backup¶
The `postgres` backup type creates logical backups of PostgreSQL databases using `pg_dump`.
Overview¶
- Backup Method: `pg_dump` per database, packaged into a tar archive
- Compression: zstd
- Output Format: `.tar.zst` containing individual `.sql` files per database
- Restore Method: `psql` to restore the SQL dumps
Configuration¶
```yaml
labels:
  - docker-backup.enable=true
  - docker-backup.db.type=postgres
  - docker-backup.db.schedule=0 3 * * *
  - docker-backup.db.retention=7
```
Requirements¶
Environment Variables¶
The container must have PostgreSQL credentials set via environment variables:
| Variable | Required | Description |
|---|---|---|
| `POSTGRES_USER` | Yes* | PostgreSQL username |
| `PGUSER` | Yes* | Alternative to `POSTGRES_USER` |
| `POSTGRES_PASSWORD` | No | Password (used by `pg_dump`) |
| `POSTGRES_DB` | No | Default database |

*At least one of `POSTGRES_USER` or `PGUSER` must be set.
Container Requirements¶
- PostgreSQL client tools (`pg_dump`, `psql`) must be available in the container
- The user must have permissions to dump all databases
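A quick way to check the first requirement from the host (a sketch only; `postgres` stands in for your actual container or service name):

```shell
# Verify the client tools exist inside the container
docker exec postgres pg_dump --version
docker exec postgres psql --version
```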
How It Works¶
Backup Process¶
1. List Databases: Queries `pg_database` to get all non-template databases
2. Dump Each Database: Runs `pg_dump` for each database with the options:
    - `--clean`: Include DROP statements
    - `--if-exists`: Use IF EXISTS with DROP
    - `--create`: Include the CREATE DATABASE statement
3. Package: Creates a tar archive with each database as a separate `.sql` file
4. Compress: Applies zstd compression to the archive
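The steps above correspond roughly to the following pipeline (a sketch only, assuming `psql`, `pg_dump`, `tar`, and `zstd` are on the PATH and a server is reachable; the tool's exact invocation may differ):

```shell
# Sketch only: requires a running PostgreSQL server
user="${POSTGRES_USER:-$PGUSER}"

# 1. List all non-template databases
dbs=$(psql -U "$user" -At -c \
  "SELECT datname FROM pg_database WHERE NOT datistemplate;")

# 2. Dump each database with DROP/CREATE statements included
for db in $dbs; do
  pg_dump -U "$user" --clean --if-exists --create -f "$db.sql" "$db"
done

# 3 + 4. Package the dumps into a tar archive and compress with zstd
tar -cf - ./*.sql | zstd -o backup.tar.zst
```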
Backup Contents¶
The backup file (`.tar.zst`) contains:

```
backup.tar.zst
├── myapp.sql      # Database 'myapp' dump
├── users.sql      # Database 'users' dump
└── analytics.sql  # Database 'analytics' dump
```
Restore Process¶
1. Decompress: Reads the zstd-compressed tar archive
2. Extract: Processes each `.sql` file in the archive
3. Restore: Pipes each SQL dump to `psql` connected to the `postgres` database
4. Recreate: The `CREATE DATABASE` statements in the dump recreate the databases
Example Configurations¶
Basic Setup¶
```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: myapp
    labels:
      - docker-backup.enable=true
      - docker-backup.db.type=postgres
      - docker-backup.db.schedule=0 3 * * *
      - docker-backup.db.retention=7
```
Multiple Backup Schedules¶
```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: secret
    labels:
      - docker-backup.enable=true
      - docker-backup.notify=telegram
      # Hourly backups (short retention)
      - docker-backup.hourly.type=postgres
      - docker-backup.hourly.schedule=0 * * * *
      - docker-backup.hourly.retention=24
      - docker-backup.hourly.storage=local-fast
      # Daily backups (long retention)
      - docker-backup.daily.type=postgres
      - docker-backup.daily.schedule=0 2 * * *
      - docker-backup.daily.retention=30
      - docker-backup.daily.storage=s3
```
With Replication/Standby¶
For replicated setups, back up from the standby:
```yaml
services:
  postgres-primary:
    image: postgres:16
    # Primary configuration...

  postgres-standby:
    image: postgres:16
    environment:
      POSTGRES_USER: replicator
      POSTGRES_PASSWORD: secret
    labels:
      - docker-backup.enable=true
      - docker-backup.db.type=postgres
      - docker-backup.db.schedule=0 3 * * *
```
Manual Operations¶
Trigger Backup¶
List Backups¶
Output:
```
KEY                                     SIZE    DATE
postgres/db/2024-01-15/030000.tar.zst   2.1 MB  2024-01-15 03:00:00
postgres/db/2024-01-14/030000.tar.zst   2.0 MB  2024-01-14 03:00:00
```
Restore Backup¶
Restore Behavior
Restoring will:
- Drop and recreate databases included in the backup
- Overwrite any existing data in those databases
- Not affect databases not included in the backup
Download Backup¶
Via CLI:
Via Dashboard: Navigate to Backups > postgres and click the download button.
Extracting Backups Manually¶
To manually extract and inspect a backup:
```shell
# Decompress and extract
zstd -d backup.tar.zst
tar -xf backup.tar

# View contents
ls -la
# myapp.sql users.sql analytics.sql

# Restore a single database manually
psql -U postgres -d postgres < myapp.sql
```
Troubleshooting¶
"Missing PostgreSQL user" Error¶
Ensure the container has the `POSTGRES_USER` or `PGUSER` environment variable set:
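For example, in a compose file (the values here are placeholders):

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: myuser        # or set PGUSER instead
      POSTGRES_PASSWORD: mypassword
```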
Backup Fails with Permission Error¶
The configured user must have permissions to:
- Connect to all databases
- Run `pg_dump` on all databases
For full backups, use a superuser or grant necessary permissions:
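For example, a dedicated backup role could be granted read access like this (a sketch; `backup_user` is a hypothetical role name, and the `pg_read_all_data` predefined role requires PostgreSQL 14+):

```sql
-- Hypothetical role used only for dumps
CREATE ROLE backup_user WITH LOGIN PASSWORD 'change-me';

-- Allow connecting to each database that should be backed up
GRANT CONNECT ON DATABASE myapp TO backup_user;

-- PostgreSQL 14+: read access to all tables in all databases
GRANT pg_read_all_data TO backup_user;
```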
Large Databases Timing Out¶
For very large databases, consider:
- Increasing the write timeout on the daemon
- Using incremental backup strategies
- Backing up during low-traffic periods