// the problem with database backups
Every developer knows they should back up their database.
Almost no one does it properly.
Not because they're lazy. Because doing it right is genuinely painful. You write a cron job, then forget to test the restore. You set up a script, then it silently fails for three weeks. You use a managed service, then get a bill you didn't expect. You finally get something working, then you switch databases and start over.
The result: most projects run in production either with no backups at all, or with backups that have never been tested and probably don't work.
Archon is my fix for this.
// what it is
Archon is a database backup sidecar for Docker projects.
You drop one block into your docker-compose.yml. That's it. No code changes to your application. No new infrastructure to manage. No backup scripts to write or maintain.
Archon runs as a separate container alongside your existing stack, reads a single config.yaml, and handles everything -- scheduled backups, encryption, integrity verification, retention, and restore.
Your existing project                  Archon sidecar
─────────────────────                  ──────────────────────────────
┌───────────┐ ┌─────────┐             ┌────────────────────────────┐
│    App    │ │ DB      │ ◄─ dump ──  │ pg_dump / mongodump /      │
│ Container │ │ postgres│             │ mysqldump / sqlite3        │
└───────────┘ └─────────┘             │              │             │
                                      │              ▼             │
                                      │        [ Encrypt ]         │
                                      │   AES-256-CBC (optional)   │
                                      │              │             │
                                      │              ▼             │
                                      │    [ SHA-256 checksum ]    │
                                      │       always written       │
                                      │              │             │
                                      │              ▼             │
                                      │        [ Storage ]         │
                                      │  Local / S3 / Azure Blob   │
                                      │              │             │
                                      │              ▼             │
                                      │       [ Retention ]        │
                                      │  auto-delete old backups   │
                                      └────────────────────────────┘
On restore, the pipeline reverses -- checksum verified before any database is touched.
// zero code changes
This is the part that actually matters.
Most backup solutions require you to change how your application runs, add a library, modify your database config, or set up a separate service with its own credentials and dashboard.
Archon requires none of that. Your application never knows Archon exists. It reads directly from your database using standard CLI tools (pg_dump, mongodump, mysqldump, sqlite3) and stores the result wherever you tell it to.
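To make "standard CLI tools" concrete, here is a rough sketch of what a PostgreSQL dump step boils down to. This is my illustration, not Archon's actual code, and the config field names (host, port, user, db, password) are assumptions borrowed from the config example further down:

```python
import subprocess


def build_dump_command(cfg: dict) -> list[str]:
    """Assemble a pg_dump argv list from a database config entry."""
    return [
        "pg_dump",
        "--host", cfg["host"],
        "--port", str(cfg["port"]),
        "--username", cfg["user"],
        "--format", "plain",
        cfg["db"],
    ]


def run_dump(cfg: dict, out_path: str) -> None:
    # pg_dump reads the password from PGPASSWORD, so the secret
    # never appears on the command line or in `ps` output.
    with open(out_path, "wb") as out:
        subprocess.run(
            build_dump_command(cfg),
            env={"PGPASSWORD": cfg["password"]},
            stdout=out,
            check=True,
        )
```

The point is that the whole integration surface is a CLI invocation against the database container, nothing inside your app.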
The entire integration is this:
archon:
  image: archon:latest
  volumes:
    - ./archon.config.yaml:/app/config.yaml
    - ./backups:/app/backups
  env_file: .env
  depends_on:
    postgres:
      condition: service_healthy
  restart: unless-stopped
That's the only change to your docker-compose.yml.
// what it supports
Databases: PostgreSQL, MongoDB, MySQL, SQLite
Storage: Local filesystem, AWS S3, Azure Blob Storage
Scheduling: Hourly, daily, weekly, monthly, or raw cron expressions with full IANA timezone support
Encryption: AES-256-CBC at rest, toggled with one line in config
Integrity: SHA-256 checksum sidecar written with every backup, verified before every restore
Retention: Auto-delete old backups with separate limits per rotation tier
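The per-tier retention rule is simple enough to sketch. This is my illustration of the idea, not Archon's implementation: keep the newest N backups in each rotation tier, delete the rest.

```python
from collections import defaultdict


def prune(backups: list[dict], limits: dict[str, int]) -> list[str]:
    """Return the names of backups to delete, keeping the newest
    `limits[tier]` in each tier. Each backup dict carries
    'name', 'tier' ('daily'/'weekly'/'monthly'), and 'created'."""
    by_tier = defaultdict(list)
    for b in backups:
        by_tier[b["tier"]].append(b)

    doomed = []
    for tier, items in by_tier.items():
        items.sort(key=lambda b: b["created"], reverse=True)  # newest first
        doomed += [b["name"] for b in items[limits.get(tier, 0):]]
    return doomed
```

With `daily: 7`, a tenth daily backup pushes the three oldest onto the delete list; weekly and monthly tiers are pruned independently.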
// the config
Minimal config for a PostgreSQL database:
databases:
  - name: my_app_db
    type: postgres
    host: postgres
    port: 5432
    db: myapp
    user: ${DB_USER}
    password: ${DB_PASS}
    schedule:
      frequency: daily
      at: "02:00"
      timezone: "Asia/Kolkata"
    storage: local

storage:
  local:
    path: /app/backups

encryption:
  enabled: true
  key: ${ENCRYPTION_KEY}

retention:
  daily: 7
  weekly: 4
  monthly: 12

api:
  port: 8765
  api_key: ${ARCHON_API_KEY}
One file. Every backup decision in one place. Environment variables for secrets so nothing sensitive lives in the config.
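The ${VAR} substitution is the whole trick for keeping secrets out of the file. A standard-library sketch of how that kind of expansion works (my helper, not Archon's actual substitution code):

```python
from __future__ import annotations

import os
import re

_VAR = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")


def expand_env(text: str, env: dict[str, str] | None = None) -> str:
    """Replace every ${NAME} in `text` with its value from `env`
    (defaults to os.environ). Fail loudly on a missing variable so
    a typo'd secret name can't silently become an empty password."""
    env = dict(os.environ) if env is None else env

    def sub(match: re.Match) -> str:
        name = match.group(1)
        if name not in env:
            raise KeyError(f"config references undefined variable {name}")
        return env[name]

    return _VAR.sub(sub, text)
```

Run this over the raw config text before parsing the YAML and the secrets only ever live in the environment.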
// the integrity guarantee
This is the piece most backup tools skip.
Every backup Archon creates gets a .sha256 sidecar file written alongside it:
archon_mydb_2026-03-20T02-00-00_daily.sql.enc
archon_mydb_2026-03-20T02-00-00_daily.sql.enc.sha256
Before any restore, Archon recomputes the hash and compares it against the sidecar. If they don't match -- corrupted file, partial download, accidental modification -- Archon refuses to proceed.
A backup you can't trust is worse than no backup. It gives you false confidence and fails exactly when you need it most.
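The scheme itself is easy to reproduce. A standard-library sketch of writing and verifying a .sha256 sidecar (the helper names are mine, not Archon's; hashing in chunks so a multi-gigabyte dump never has to fit in memory):

```python
import hashlib
from pathlib import Path


def _sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()


def write_sidecar(backup: Path) -> Path:
    """Write <backup>.sha256 alongside the backup file."""
    sidecar = backup.with_name(backup.name + ".sha256")
    sidecar.write_text(_sha256(backup) + "\n")
    return sidecar


def verify(backup: Path) -> None:
    """Recompute the hash and compare against the sidecar.
    Raises before any restore work touches the database."""
    sidecar = backup.with_name(backup.name + ".sha256")
    expected = sidecar.read_text().split()[0]
    if _sha256(backup) != expected:
        raise ValueError(f"checksum mismatch for {backup.name}, refusing to restore")
```

Flip one byte in the backup and verify() refuses, which is exactly the behavior you want at 3 a.m. during an incident.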
// granular restore
Full database restores are the nuclear option. Sometimes you just need one table back. Or ten rows.
Archon supports row-level recovery from any SQL backup:
- Open a session -- Archon parses the backup into a browsable structure
- Browse tables and row data without touching the live database
- Select the rows you want to restore
- Archon resolves foreign key dependencies automatically
- Choose a conflict strategy: skip, replace, or merge
- Apply -- only the selected rows are written
# Open session
curl -X POST http://localhost:8765/granular/session \
  -H "X-API-Key: $ARCHON_API_KEY" \
  -d '{"filename": "archon_mydb_2026-03-20T02-00-00_daily.sql.enc"}'

# Browse tables
curl http://localhost:8765/granular/session/{id}/tables \
  -H "X-API-Key: $ARCHON_API_KEY"

# Apply selected rows
curl -X POST http://localhost:8765/granular/session/{id}/restore-multi \
  -H "X-API-Key: $ARCHON_API_KEY" \
  -d '{"tables": ["orders"], "conflict_strategy": "replace"}'
The React dashboard exposes this as a step-by-step wizard if you prefer a UI over curl.
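Foreign key resolution is essentially a topological sort: parent tables have to be written before the rows that reference them. A sketch of that ordering step using Python's graphlib (assumed behavior; Archon's actual resolver isn't shown here):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+


def insert_order(deps: dict) -> list[str]:
    """Given a map of table -> set of tables it references via
    foreign keys, return an order in which selected rows can be
    inserted without violating any constraint."""
    return list(TopologicalSorter(deps).static_order())
```

For a schema where order_items references orders and products, and orders references customers, customers comes out first and order_items last.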
// up in 5 minutes
git clone https://github.com/AnonymousCoderDev/Archon.git
cd Archon/testing
docker compose up --build
The testing/ directory is a fully self-contained environment -- a real PostgreSQL database with seed data, no configuration required. Trigger a backup, simulate data loss, restore, verify. The full cycle in under 5 minutes.
# Trigger a backup
curl -s -X POST http://localhost:8765/backup \
  -H "X-API-Key: test-api-key-12345" | python -m json.tool

# Drop a table (simulate data loss)
docker exec -it testing-postgres-1 psql -U testuser -d testdb \
  -c "DROP TABLE products;"

# Restore
curl -s -X POST http://localhost:8765/restore \
  -H "X-API-Key: test-api-key-12345" \
  -H "Content-Type: application/json" \
  -d '{"filename": "PASTE_FILENAME_HERE", "confirm": true}'

# Verify
docker exec -it testing-postgres-1 psql -U testuser -d testdb \
  -c "SELECT * FROM products;"
// the api
Every operation is available via REST. Full reference:
| Method | Path | Description |
|---|---|---|
| POST | /backup | Queue a backup job |
| GET | /jobs/{job_id} | Poll job status |
| POST | /restore | Restore from a backup file |
| GET | /backups | List all backups with metadata |
| DELETE | /backups/(unknown) | Delete a backup |
| GET | /status | Next scheduled run per database |
| GET | /logs/stream | Live log tail via SSE |
| GET | /health | Liveness probe, no auth required |
| POST | /reload | Re-read config without restarting |
Every endpoint except /health requires the X-API-Key header; a missing or incorrect key returns 401.
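If you're scripting against the API, the only wiring you need is that header. A minimal Python request builder using only the standard library (the helper is mine; the endpoints come from the table above, and the response field names in the trailing comment are assumptions):

```python
import json
import urllib.request


def archon_request(path, api_key, method="GET", body=None,
                   base="http://localhost:8765"):
    """Build an authenticated urllib Request for the Archon API."""
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(base + path, data=data, method=method)
    req.add_header("X-API-Key", api_key)
    if data is not None:
        req.add_header("Content-Type", "application/json")
    return req


# Example flow (against a running Archon; response shape assumed):
# resp = urllib.request.urlopen(archon_request("/backup", key, "POST", {}))
# then poll archon_request(f"/jobs/{job_id}", key) until the job finishes.
```

Same pattern for /restore, /status, and the granular-restore session endpoints.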
// why a sidecar
The alternative is baking backup logic into your application. Libraries, cron jobs inside your app container, scheduled tasks tied to your framework.
That approach has two problems.
First, if your application container dies -- which is exactly when you're most likely to need a backup -- your backup system dies with it.
Second, it creates coupling between your application and your backup strategy. Change your database, rewrite your app, migrate to a new framework -- now you have to redo your backups too.
A sidecar is operationally independent. It doesn't care what language your app is written in, what framework you use, or how your application is structured. It speaks directly to the database and stores the result. Your app is irrelevant to it.
// get it
github.com/AnonymousCoderDev/Archon
MIT licensed. Fork it, modify it, use it in production. No attribution required.
If you find a bug, open an issue. If you want a feature, open a pull request.
// end of transmission //