Self-Hosted Backup Strategy Guide

Why You Need a Backup Strategy

Self-hosting means your data lives on hardware you control. That’s the point. But it also means there’s no “contact support” button when things go wrong. A failed drive, a bad update, an accidental `docker compose down -v`, or ransomware — any of these can destroy everything.

A backup strategy is the plan that turns “disaster” into “minor inconvenience.” This guide covers the tools, the schedule, the automation, and the testing that makes your self-hosted setup resilient.

The 3-2-1 Rule

Every backup strategy starts with the 3-2-1 rule:

| Rule | Meaning | Why |
| --- | --- | --- |
| 3 copies | Your live data + 2 backups | One backup can fail. Two failing simultaneously is extremely unlikely. |
| 2 media types | Store backups on different physical media (SSD + HDD, local + cloud) | A power surge that kills your SSD won’t touch your HDD in a different machine. |
| 1 offsite | At least one backup physically away from your server | A fire, flood, or theft takes out everything in one location. |

For a deeper dive into implementing the 3-2-1 rule with specific examples, see The 3-2-1 Backup Rule Explained.

What to Back Up

Not everything on your server needs the same backup treatment.

| Data Type | Priority | Backup Frequency | Examples |
| --- | --- | --- | --- |
| Application databases | Critical | Every 4-6 hours | PostgreSQL, MariaDB, SQLite databases |
| User-generated content | Critical | Daily | Photos (Immich), documents (Paperless-ngx), notes (BookStack) |
| Configuration files | High | Daily or on change | Docker Compose files, .env files, reverse proxy configs |
| Docker volumes | High | Daily | Named volumes with persistent app state |
| Media libraries | Medium | Weekly | Jellyfin/Plex media (often replaceable from original sources) |
| Container images | Low | Not needed | Pulled from registries on demand |
| Logs | Low | Optional | Rotate and archive if needed for compliance |

Rule of thumb: If losing it would cost you more than 30 minutes to recreate, back it up. If losing it would be permanent (photos, personal data), back it up with the highest priority.

For Docker-specific backup procedures (named volumes, bind mounts, database dumps), see Backing Up Docker Volumes.
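Before assigning priorities, it helps to know how big each category actually is, since size drives both backend choice and schedule. A minimal sketch (the paths in the commented example are placeholders; point it at your own layout):

```shell
#!/bin/sh
# Print the on-disk size of each candidate backup path; flag paths that
# don't exist so nothing is silently skipped.
estimate_sizes() {
  for path in "$@"; do
    if [ -d "$path" ]; then
      du -sh "$path"
    else
      echo "missing: $path"
    fi
  done
}

# Example (paths are placeholders):
# estimate_sizes /opt/docker /var/lib/docker/volumes /srv/media
```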

Backup Tools Compared

| Tool | Type | Deduplication | Encryption | Compression | Docker Image | Best For |
| --- | --- | --- | --- | --- | --- | --- |
| Restic | CLI | Yes (content-defined) | AES-256 (always on) | zstd | `restic/restic:0.18.1` | Most self-hosters — fast, simple, works with every backend |
| BorgBackup | CLI | Yes (content-defined) | AES-256 (optional) | lz4/zstd/zlib | `b3vis/borgmatic:1.9.12` (via Borgmatic) | Large datasets — best dedup ratio, mature |
| Kopia | CLI + GUI | Yes | AES-256 (optional) | Multiple algorithms | `kopia/kopia:0.22.3` | Users who want a web UI for managing backups |
| Duplicati | GUI | Yes (block-level) | AES-256 | Zip | `lscr.io/linuxserver/duplicati:v2.1.0.108` | Beginners — point-and-click web interface |

Our Recommendation

Restic is the best choice for most self-hosters. It’s fast, always encrypts your data, supports every major storage backend (local, S3, SFTP, Backblaze B2, Wasabi, rclone), and has excellent documentation. Borgmatic (BorgBackup with a config file wrapper) is the runner-up for users who want slightly better dedup ratios on very large datasets.

Storage Backends

Where you send your backups matters as much as how you create them.

| Backend | Cost | Speed | Offsite | Setup Complexity |
| --- | --- | --- | --- | --- |
| Local HDD/SSD | One-time hardware cost | Fast | No | Lowest |
| USB external drive | $50-150 | Fast | Manual (rotate drives) | Low |
| NAS (Synology, TrueNAS) | $200-500+ | Fast (LAN) | No (unless remote NAS) | Medium |
| Backblaze B2 | $0.006/GB/mo | Medium | Yes | Low |
| Wasabi | $0.0069/GB/mo, no egress fees | Medium | Yes | Low |
| SFTP to second server | Cost of second server | Medium | Yes | Medium |
| Hetzner Storage Box | From €3.81/mo for 1TB | Medium | Yes | Low |
| S3-compatible (MinIO) | Self-hosted | Fast (LAN) | Depends on location | Medium |
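Restic addresses every one of these backends through a repository string, so switching backends is a one-line change. A few examples (the host names, bucket names, and paths below are placeholders):

```shell
#!/bin/sh
# Restic repository strings for common backends (all values are placeholders).
LOCAL_REPO="/mnt/backup/restic-repo"               # local HDD/SSD or mounted NAS share
SFTP_REPO="sftp:backup@second-server:/srv/restic"  # SFTP to a second server
B2_REPO="b2:my-backup-bucket:server1"              # Backblaze B2 (bucket:path)
S3_REPO="s3:https://minio.local:9000/backups"      # S3-compatible, e.g. MinIO
```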

Cost-Effective Offsite Strategy

For most self-hosters, the cheapest reliable offsite setup is:

  1. Primary backup: Local HDD or NAS (fast restores)
  2. Offsite backup: Backblaze B2 or Hetzner Storage Box (disaster recovery)

At Backblaze B2 rates, 500GB of backup data costs about $3/month. That’s cheap insurance.
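The arithmetic behind that figure, as a small reusable helper:

```shell
#!/bin/sh
# Monthly storage cost for a given size at a per-GB rate.
monthly_cost() {  # usage: monthly_cost <size-GB> <rate-per-GB-month>
  awk -v gb="$1" -v rate="$2" 'BEGIN { printf "%.2f\n", gb * rate }'
}

monthly_cost 500 0.006   # 500 GB at Backblaze B2 rates: prints 3.00
```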

Backup Schedule

| What | How Often | When | Retention |
| --- | --- | --- | --- |
| Database dumps | Every 6 hours | 00:00, 06:00, 12:00, 18:00 UTC | 7 days of 6-hourly, 4 weeks of daily, 6 months of weekly |
| Docker volumes | Daily | 02:00 UTC (low activity) | 7 daily, 4 weekly, 12 monthly |
| Config files | On change + daily | 03:00 UTC | 30 daily, 12 monthly |
| Full system | Weekly | Sunday 04:00 UTC | 4 weekly, 6 monthly |

Stagger your backups. Don’t run everything at midnight. Spread jobs across the early morning hours to avoid I/O contention that slows down your services.
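The staggered schedule above maps directly onto systemd OnCalendar= expressions, one per timer unit (values mirror the schedule table):

```
# Staggered OnCalendar= values, one per timer unit
OnCalendar=*-*-* 00/6:00:00    # database dumps: 00:00, 06:00, 12:00, 18:00
OnCalendar=*-*-* 02:00:00      # Docker volumes, daily
OnCalendar=*-*-* 03:00:00      # config files, daily
OnCalendar=Sun *-*-* 04:00:00  # full system, weekly on Sunday
```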

Automating Backups with Restic

Here’s a practical setup using Restic with a local backup target and Backblaze B2 for offsite.

Local Backup Script

Initialize the repository once with `restic init -r /mnt/backup/restic-repo --password-file /opt/backups/.restic-password` (keep the password file mode 600; if you lose the password, the backups are unrecoverable). Then create /opt/backups/backup.sh:

```shell
#!/bin/bash
set -euo pipefail

BACKUP_DIR="/mnt/backup/restic-repo"
RESTIC_PASSWORD_FILE="/opt/backups/.restic-password"

# Back up Docker volumes (databases are captured separately via SQL dumps,
# since copying live database files can produce inconsistent backups)
restic -r "$BACKUP_DIR" --password-file "$RESTIC_PASSWORD_FILE" \
  backup /var/lib/docker/volumes \
  --tag docker-volumes \
  --exclude="*.tmp" \
  --exclude="*.log"

# Back up configuration
restic -r "$BACKUP_DIR" --password-file "$RESTIC_PASSWORD_FILE" \
  backup /opt/docker /etc/docker \
  --tag config

# Prune old snapshots (keep 7 daily, 4 weekly, 6 monthly)
restic -r "$BACKUP_DIR" --password-file "$RESTIC_PASSWORD_FILE" \
  forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
```

Database Dump Script

Create /opt/backups/dump-databases.sh:

```shell
#!/bin/bash
set -euo pipefail

DUMP_DIR="/opt/backups/db-dumps"
mkdir -p "$DUMP_DIR"

# PostgreSQL (used by Immich, Nextcloud, etc.)
docker exec postgres pg_dumpall -U postgres > "$DUMP_DIR/postgres-$(date +%Y%m%d-%H%M).sql"

# MariaDB (used by BookStack, etc.). Single-quote the inner command so
# $MARIADB_ROOT_PASSWORD expands inside the container, where it is set.
docker exec mariadb sh -c 'mariadb-dump --all-databases -u root -p"$MARIADB_ROOT_PASSWORD"' > "$DUMP_DIR/mariadb-$(date +%Y%m%d-%H%M).sql"

# Clean up dumps older than 7 days
find "$DUMP_DIR" -name "*.sql" -mtime +7 -delete
```
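If disk space in DUMP_DIR is tight, finished dumps can be compressed in place before Restic picks them up. One caveat worth weighing: compressed files deduplicate poorly across runs, so skip this if repository size matters more than raw disk. A sketch:

```shell
#!/bin/sh
# Compress finished SQL dumps in place. Note: gzip output changes wholesale
# between runs, so Restic's deduplication works less well on compressed dumps.
compress_dumps() {  # usage: compress_dumps <dump-dir>
  find "$1" -name '*.sql' -exec gzip -f {} +
}
```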

Systemd Timer (Preferred Over Cron)

Create /etc/systemd/system/backup.service:

```ini
[Unit]
Description=Run Restic backup
After=docker.service

[Service]
Type=oneshot
ExecStart=/opt/backups/backup.sh
Environment=HOME=/root
```

Create /etc/systemd/system/backup.timer:

```ini
[Unit]
Description=Daily backup at 2 AM

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true
RandomizedDelaySec=300

[Install]
WantedBy=timers.target
```

Enable with:

```shell
systemctl daemon-reload
systemctl enable --now backup.timer
systemctl list-timers backup.timer   # confirm the next run is scheduled
```
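For the "on change" half of the config-file schedule, a systemd path unit can trigger the backup service whenever the compose directory changes. A sketch (the unit name and watched path are examples, not part of the setup above):

```ini
# /etc/systemd/system/backup-config.path (hypothetical unit)
[Unit]
Description=Back up when Docker configs change

[Path]
PathChanged=/opt/docker
Unit=backup.service

[Install]
WantedBy=paths.target
```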

Testing Restores

A backup you haven’t tested is not a backup. Schedule restore tests monthly.

Restore Test Checklist

| Step | Command | What to Verify |
| --- | --- | --- |
| List snapshots | `restic -r /path/to/repo snapshots` | Snapshots exist and are recent |
| Restore to temp dir | `restic -r /path/to/repo restore latest --target /tmp/restore-test` | Files are intact and readable |
| Verify database dump | `psql -f /tmp/restore-test/dump.sql` (on test instance) | Database restores without errors |
| Check file counts | `find /tmp/restore-test -type f \| wc -l` | File count matches expectations |
| Verify integrity | `restic -r /path/to/repo check` | No corruption in repository |
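Steps 2 and 4 can be folded into one checksum comparison between the restored copy and the live data. This is most meaningful right after a backup, before the live files drift. A sketch assuming GNU coreutils:

```shell
#!/bin/sh
# Compare every file in a restored directory against the live original.
# Exits nonzero if any file differs or is missing from the restore.
verify_restore() {  # usage: verify_restore <live-dir> <restored-dir>
  sums=$(mktemp)
  (cd "$1" && find . -type f -exec sha256sum {} +) > "$sums"
  (cd "$2" && sha256sum --check --quiet "$sums")
  status=$?
  rm -f "$sums"
  return "$status"
}
```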

Automate Restore Verification

Add this to your backup script:

```shell
# Verify repository integrity after backup
# (periodically add --read-data-subset=5% to spot-check actual data blobs)
restic -r "$BACKUP_DIR" --password-file "$RESTIC_PASSWORD_FILE" check

# Verify latest snapshot is readable
restic -r "$BACKUP_DIR" --password-file "$RESTIC_PASSWORD_FILE" \
  ls latest | tail -5
```

Monitoring Backups

A backup that silently fails is worse than no backup — it gives you false confidence.

| Monitoring Method | Tool | How |
| --- | --- | --- |
| Heartbeat monitoring | Uptime Kuma, Healthchecks.io | Backup script pings a URL on success. Alert if no ping received. |
| Systemd timer status | `systemctl list-timers` | Check that the backup timer last triggered recently |
| Backup age check | Custom script | Alert if the newest snapshot is older than 48 hours |
| Disk space monitoring | Netdata, Beszel | Alert if backup volume drops below 20% free |
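The "backup age check" row can be a few lines of shell run from cron or a timer, pointed at a dump directory. (For Restic itself, `restic snapshots --json` is the more direct source; this sketch assumes GNU find.)

```shell
#!/bin/sh
# Exit 0 if the newest file under a directory is younger than max-age-hours,
# nonzero otherwise. Wire the nonzero path into your alerting.
backup_age_ok() {  # usage: backup_age_ok <dir> <max-age-hours>
  newest=$(find "$1" -type f -printf '%T@\n' | sort -n | tail -n 1)
  [ -n "$newest" ] || return 1   # no backups at all
  awk -v newest="$newest" -v now="$(date +%s)" -v max="$2" \
    'BEGIN { exit (now - newest <= max * 3600) ? 0 : 1 }'
}
```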

Healthchecks Integration

Add to the end of your backup script:

```shell
# Notify healthcheck on success
curl -fsS --retry 3 https://hc-ping.com/YOUR-UUID-HERE > /dev/null

# Or for an Uptime Kuma push monitor
curl -fsS "http://uptime-kuma:3001/api/push/YOUR-TOKEN?status=up&msg=OK" > /dev/null
```

Healthchecks.io also accepts the same ping URL with a /fail suffix, so an error branch or a bash ERR trap can report failures explicitly instead of relying on a missed ping.

Common Mistakes

| Mistake | Why It’s Bad | Fix |
| --- | --- | --- |
| Only backing up to the same disk | Drive failure takes live data AND backup | Use a separate physical drive or offsite storage |
| No encryption on offsite backups | Anyone who accesses the storage can read your data | Restic encrypts by default. BorgBackup: use `--encryption repokey` |
| Never testing restores | You discover your backups are corrupted when you need them most | Schedule monthly restore tests |
| Backing up running databases by copying files | Results in corrupted, unusable database backups | Always use `pg_dump`/`mariadb-dump` for database backups |
| No retention policy | Backup storage grows forever until the disk is full | Set `--keep-daily 7 --keep-weekly 4 --keep-monthly 6` |
| Running backups during peak hours | Backup I/O slows down your services | Schedule backups for early morning (02:00-05:00) |
| Forgetting .env files | Losing environment variables means losing app configuration | Include /opt/docker/ (or wherever your compose files live) in backups |

Next Steps

  1. Pick a tool. Restic for most people. Kopia if you want a web UI.
  2. Set up local backup. Get a working backup to a local HDD or NAS first.
  3. Add offsite. Configure Backblaze B2 or another cloud backend as your second target.
  4. Automate. Set up systemd timers so backups run without you thinking about it.
  5. Monitor. Integrate with Uptime Kuma or similar to alert on failures.
  6. Test. Restore from backup at least once a month to verify it works.
