How to Self-Host Duplicati with Docker
What Is Duplicati?
Duplicati is a free, open-source backup client that stores encrypted, incremental, compressed backups on cloud storage services and remote file servers. It supports over 20 backends including Amazon S3, Backblaze B2, Google Drive, OneDrive, SFTP, WebDAV, and local storage. If you are paying for Backblaze, CrashPlan, or any cloud backup service, Duplicati lets you replace them with a self-hosted solution that gives you full control over your backup encryption keys and destination choices.
Prerequisites
- A Linux server (Ubuntu 22.04+ recommended)
- Docker and Docker Compose installed (guide)
- 1 GB of free RAM (minimum)
- Disk space for local backup staging or a configured remote backend (S3, B2, SFTP, etc.)
- A domain name (optional, for remote access to the web UI)
Docker Compose Configuration
Create a directory for your Duplicati configuration:
```shell
mkdir -p /opt/duplicati
cd /opt/duplicati
```
Create a docker-compose.yml file:
```yaml
services:
  duplicati:
    image: lscr.io/linuxserver/duplicati:v2.2.0.3_stable_2026-01-06-ls284
    container_name: duplicati
    restart: unless-stopped
    environment:
      - PUID=${PUID}           # User ID for file permissions
      - PGID=${PGID}           # Group ID for file permissions
      - TZ=${TZ}               # Timezone for scheduled backups
      - CLI_ARGS=${CLI_ARGS:-} # Optional extra CLI arguments for Duplicati
    ports:
      - "8200:8200" # Web UI
    volumes:
      - ./config:/config                  # Duplicati database and settings
      - /opt/duplicati/backups:/backups   # Local backup destination
      - /:/source:ro                      # Source data to back up (read-only)
```
Create a .env file alongside it:

```shell
# User/group ID - run `id` to find yours
PUID=1000
PGID=1000
# Timezone - controls when scheduled backups run
TZ=America/New_York
# Optional: extra CLI arguments passed to the Duplicati server
# Example: --webservice-allowed-hostnames=* to allow reverse proxy access
CLI_ARGS=--webservice-allowed-hostnames=*
```
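The PUID and PGID values the .env file expects come straight from the `id` command on the host:

```shell
# Print the numeric user and group IDs of the current user;
# use these as PUID and PGID in the .env file
id -u
id -g
```

Pick the user that owns (or at least can read) the directories you plan to back up.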
Start the stack:
```shell
docker compose up -d
```
Duplicati’s web UI is now available at http://your-server-ip:8200.
A note on the /source volume: The configuration above mounts the entire host filesystem as /source inside the container in read-only mode. This gives Duplicati access to back up anything on the server. If you prefer to limit what Duplicati can see, mount only specific directories:
```yaml
volumes:
  - ./config:/config
  - /opt/duplicati/backups:/backups
  - /home:/source/home:ro
  - /opt:/source/opt:ro
  - /etc:/source/etc:ro
```
Initial Setup
- Open http://your-server-ip:8200 in your browser.
- On first launch, Duplicati asks whether this is a single-user machine or if others can access it. If your server is accessible on a network, select Yes and set a UI password. This password protects the web interface only; it is separate from your backup encryption passphrases.
- You will land on the Duplicati home screen. There are no backup jobs configured yet.
- Click Add backup to create your first backup job. The wizard walks you through five steps: General settings, Destination, Source data, Schedule, and Options.
Configuration
Creating a Backup Job
Step 1 — General:
- Give the backup a descriptive name (e.g., “Server Config Backup”).
- Set an encryption passphrase. Duplicati uses AES-256 encryption by default. Store this passphrase somewhere safe — without it, your backups are unrecoverable.
- Choose the encryption module. AES-256 is the default and recommended choice.
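A quick way to produce a strong random passphrase is `openssl rand` (assuming OpenSSL is installed on your server):

```shell
# Generate a 32-byte random passphrase, base64-encoded (44 characters)
openssl rand -base64 32
```

Save the output in a password manager before pasting it into the wizard; Duplicati cannot recover it for you later.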
Step 2 — Destination:
- Select your backup destination. Options include local folder, S3-compatible storage, Backblaze B2, SFTP, FTP, Google Drive, OneDrive, WebDAV, and more.
- For local backups, point to /backups (which maps to /opt/duplicati/backups on the host).
- For remote destinations, enter the connection details. Duplicati will test the connection before proceeding.
Step 3 — Source Data:
- Select folders under /source to back up. Since /source maps to your host filesystem, /source/opt corresponds to /opt on the host.
- Use filters to exclude files by extension, size, or path pattern. Common exclusions: *.tmp, *.log, node_modules/, .cache/.
Step 4 — Schedule:
- Set how often the backup runs. Daily at 2:00 AM is a sensible default.
- The schedule respects the TZ environment variable you set in the .env file.
Step 5 — Options:
- Set the remote volume size. The default of 50 MB works for most setups. Increase to 200-500 MB for large backups over fast connections.
- Set retention policy (see Advanced Configuration below).
- Click Save to finish.
Key Settings
- Block size: Controls deduplication granularity. Default 100 KB works well. Smaller values improve deduplication but increase database size.
- Upload speed limit: Useful if your ISP has limited upstream bandwidth. Set under each backup job’s options.
- Concurrency: Duplicati runs one backup job at a time by default. If you have multiple jobs, they queue sequentially.
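To get a feel for why block size matters, here is a rough back-of-the-envelope count of the blocks Duplicati must track for a 500 GB dataset at the default block size (ignoring deduplication):

```shell
# Blocks to track for a 500 GB dataset at the default 100 KB block size
DATASET_BYTES=$((500 * 1024 * 1024 * 1024))
BLOCK_BYTES=$((100 * 1024))
echo $((DATASET_BYTES / BLOCK_BYTES))   # 5242880 blocks (~5.2 million)
```

Each tracked block adds overhead to the local SQLite database, which is why halving the block size roughly doubles the database's growth for the same dataset.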
Advanced Configuration
Backup Encryption
Duplicati encrypts all data before it leaves your server. The default AES-256 module is solid and recommended for most users. Key points:
- Encryption happens client-side. Your backup destination never sees unencrypted data.
- The passphrase is not stored on the destination. If you lose it, your backups are gone.
- You can also use GPG encryption if you prefer key-based encryption over passphrases. Select “GNU Privacy Guard” as the encryption module when creating a backup job.
- To disable encryption entirely (only recommended for trusted local destinations), select “No encryption.”
Cloud Backend Configuration
Amazon S3 / S3-compatible (MinIO, Wasabi):
- Storage type: “S3 Compatible”
- Server: s3.amazonaws.com (or your MinIO/Wasabi endpoint)
- Bucket name: your bucket
- AWS Access ID and Secret Key
- Region: match your bucket region
- Storage class: STANDARD, or STANDARD_IA for infrequent-access backups
Backblaze B2:
- Storage type: “B2 Cloud Storage”
- Bucket name, Account ID, and Application Key from your B2 dashboard
- B2 is one of the cheapest cloud storage options at $0.006/GB/month
SFTP:
- Storage type: “SFTP (SSH)”
- Server, port (default 22), path, username
- Authentication: password or SSH key file
- Point to a key file mounted into the container if using key-based auth
Google Drive / OneDrive:
- Duplicati uses OAuth2. The web UI will redirect you to Google/Microsoft to authorize access.
- Note: OAuth tokens can expire. Check your backup logs periodically for authentication failures.
Retention Policies
Set retention in each backup job under Options. Common strategies:
- Keep all backups: Not recommended long-term. Storage costs grow unbounded.
- Delete backups older than X days: Simple. Example: 30D keeps 30 days of history.
- Smart retention: Keep one backup per day for the last 7 days, one per week for the last 4 weeks, and one per month for 12 months. Set with: 1W:1D,4W:1W,12M:1M
- Keep a specific number: --keep-versions=10 keeps the last 10 backup versions regardless of age.
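The same retention settings can be supplied as options when running a job from the CLI. A sketch, with `<backup-url>` standing in for your destination URL:

```shell
# Smart retention expressed as a CLI option
docker exec duplicati duplicati-cli backup <backup-url> /source \
  --passphrase="your-passphrase" \
  --retention-policy="1W:1D,4W:1W,12M:1M"

# Time-based alternative:  --keep-time=30D      (keep 30 days of history)
# Count-based alternative: --keep-versions=10   (keep the last 10 versions)
```

In the web UI these map to the retention choices on the Options step of the job wizard.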
CLI Usage
You can interact with Duplicati from the command line inside the container:
```shell
# List backup versions stored at a destination
docker exec duplicati duplicati-cli list <backup-url> --passphrase="your-passphrase"

# Run a backup
docker exec duplicati duplicati-cli backup <backup-url> <source-path> --passphrase="your-passphrase"

# Verify backup integrity
docker exec duplicati duplicati-cli test <backup-url> --passphrase="your-passphrase"

# Restore files
docker exec duplicati duplicati-cli restore <backup-url> --passphrase="your-passphrase" --restore-path=/tmp/restore
```
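If you are unsure what `<backup-url>` should look like, a local destination uses a plain `file://` URL; for remote backends, the web UI can generate the exact URL for an existing job via Export > As Command-line. A sketch using the local layout from this guide (the `server-config` folder name is a placeholder):

```shell
# Back up /source/opt to the local /backups mount (file:// backend)
docker exec duplicati duplicati-cli backup \
  "file:///backups/server-config" /source/opt \
  --passphrase="your-passphrase"
```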
The CLI_ARGS environment variable in your .env file passes additional arguments to the Duplicati server process. Useful flags:
- --webservice-allowed-hostnames=* - allows access via reverse proxy or non-localhost hostnames
- --webservice-port=8200 - change the web UI port (default 8200)
Reverse Proxy
If you are running Nginx Proxy Manager, create a proxy host:
- Domain: duplicati.yourdomain.com
- Scheme: http
- Forward Hostname/IP: duplicati (if on the same Docker network) or your server's IP
- Forward Port: 8200
- Enable SSL with Let's Encrypt
- Under Advanced, add:

```nginx
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
```
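If you run plain nginx instead of Nginx Proxy Manager, an equivalent minimal vhost might look like this (a sketch; the domain and certificate paths are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name duplicati.yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/duplicati.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/duplicati.yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8200;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```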
Make sure you set CLI_ARGS=--webservice-allowed-hostnames=* in your .env file, otherwise Duplicati will reject requests that arrive via the reverse proxy with a hostname other than localhost.
For other reverse proxy setups, see Reverse Proxy Explained.
Backup
Duplicati itself needs backing up. The critical data is in the /config volume, which contains:
- The Duplicati SQLite database — stores all backup job configurations, schedules, and metadata about what has been backed up
- Server settings — web UI password, encryption settings
Back up the ./config directory (or the named volume) to a separate location. A simple approach:
```shell
# Stop Duplicati to ensure database consistency
docker compose stop duplicati

# Copy the config directory
cp -r /opt/duplicati/config /path/to/safe/location/duplicati-config-$(date +%Y%m%d)

# Restart
docker compose start duplicati
```
Alternatively, configure a second Duplicati backup job that backs up /config to a different destination than your main backups. This way, you can restore Duplicati itself if your server dies.
Troubleshooting
Web UI Not Accessible via Reverse Proxy
Symptom: Duplicati works on http://localhost:8200 but returns “Host header not allowed” through a reverse proxy.
Fix: Set CLI_ARGS=--webservice-allowed-hostnames=* in your .env file and recreate the container:
```shell
docker compose down && docker compose up -d
```
Backup Fails with “Unauthorized” on Cloud Backend
Symptom: Backup jobs fail with authentication errors on Google Drive, OneDrive, or other OAuth-based backends.
Fix: OAuth tokens expire. Open the Duplicati web UI, edit the affected backup job, go to the Destination step, and re-authenticate with your cloud provider. For headless servers, consider using S3-compatible or SFTP backends that use API keys instead of OAuth.
Database Corruption or “Failed to Connect to Database”
Symptom: Duplicati fails to start or shows database errors in the log.
Fix: The SQLite database in /config can become corrupted after an unclean shutdown. Try the repair tool:
```shell
docker exec duplicati duplicati-cli repair <backup-url> --passphrase="your-passphrase"
```
If the server database itself is corrupted (not a backup database), delete the file and recreate it:
```shell
docker compose down
# The server database is typically at /config/Duplicati-server.sqlite
mv /opt/duplicati/config/Duplicati-server.sqlite /opt/duplicati/config/Duplicati-server.sqlite.bak
docker compose up -d
```
You will need to reconfigure your backup jobs, but your actual backup data on the remote destination is untouched.
Backup Runs Slowly or Uses Too Much Memory
Symptom: Backup jobs take hours or Duplicati consumes excessive RAM during large backup operations.
Fix: Reduce the block size (default 100 KB) if you have many small files, or increase the remote volume size (default 50 MB) to reduce the number of uploads. For very large source datasets (multiple TB), increase the Docker container’s memory limit:
```yaml
    deploy:
      resources:
        limits:
          memory: 2G
```
Also consider excluding large files that do not need backup (VM images, media libraries already stored elsewhere).
Permission Denied Errors on Source Files
Symptom: Backup job logs show “Access to the path is denied” for certain files.
Fix: Ensure the PUID and PGID in your .env file match a user that has read access to the source directories. The /source mount is read-only by design, but the container process still needs filesystem-level read permission. Check with:
```shell
docker exec duplicati id
# Compare with file ownership on the host
ls -la /path/to/problematic/file
```
Frequently Asked Questions
Is Duplicati reliable for large backups (multiple terabytes)?
Duplicati works well for datasets up to a few hundred gigabytes, but it has a history of database corruption issues on very large, long-running backup sets. The local SQLite database that tracks block hashes can grow large and become slow or corrupt over time. For multi-terabyte datasets, BorgBackup (via Borgmatic) and Restic are significantly more reliable. If you use Duplicati for large backups, run the database repair tool periodically and keep database backups.
Can Duplicati back up to S3, B2, Google Drive, and other cloud storage?
Yes. Duplicati supports 25+ storage backends natively, including Amazon S3, Backblaze B2, Google Drive, Google Cloud Storage, Azure Blob, OneDrive, SFTP, FTP, WebDAV, Dropbox, and local/network paths. All backups are encrypted before upload, so you do not need to trust the cloud provider with your data. Configure the destination during backup job creation in the web UI.
How does Duplicati’s deduplication compare to BorgBackup?
Duplicati uses fixed-size blocks (default 100 KB) for deduplication. BorgBackup uses content-defined chunking, which adapts to file content and produces better deduplication ratios — especially when files are modified by inserting or removing data. In practice, BorgBackup achieves 5-20x deduplication on incremental backups, while Duplicati achieves 3-10x for similar workloads. BorgBackup is also faster for large datasets.
Can I access individual files from a Duplicati backup without restoring everything?
Yes. In the web UI, go to Restore and browse the backup archive. You can select individual files or folders to restore without downloading the entire backup. Duplicati downloads only the blocks needed for the selected files. This works for all storage backends including cloud storage.
Does Duplicati support scheduling backups automatically?
Yes. Each backup job has a built-in scheduler. During job creation or editing, set the frequency (hourly, daily, weekly, custom cron expression) and the allowed time window. The Duplicati service runs continuously in the Docker container and executes jobs on schedule. You can also trigger backups manually from the web UI.
How do I migrate Duplicati to a new server?
Copy the entire /config volume to the new server. This contains the local database, backup configuration, encryption keys, and job definitions. The backup data itself remains on the remote storage backend. On the new server, start Duplicati with the same config volume — all jobs and settings are preserved. You may need to update local path references if source directories have different mount points on the new server.
Resource Requirements
- RAM: 256 MB idle, 512 MB to 2 GB during active backup operations depending on dataset size and block settings
- CPU: Low when idle. Moderate during backup and encryption (single-threaded per job)
- Disk: The /config volume is typically 50 to 500 MB depending on the number of backup jobs and the size of the local database. The /backups volume depends entirely on your backup size and retention policy
Verdict
Duplicati is the best self-hosted backup solution for users who want a web UI, encrypted cloud backups, and broad backend support without writing scripts. The web-based setup wizard makes it accessible even if you have never configured backup software before, and AES-256 encryption is on by default.
Where Duplicati falls short is performance on very large datasets. If you are backing up multiple terabytes, BorgBackup (via Borgmatic) will be significantly faster thanks to content-defined chunking and better deduplication. Borg is also more battle-tested for bare-metal disaster recovery scenarios.
Choose Duplicati if you want a GUI-driven backup tool with native support for S3, B2, Google Drive, and dozens of other cloud backends. Choose Borgmatic if you prefer CLI-driven backups, need to handle multi-terabyte datasets efficiently, or want the best deduplication ratios.
Related
- BackupPC vs Duplicati: Backup Tools Compared
- Duplicati vs BorgBackup: Backup Tools Compared
- Duplicati vs Duplicacy: Backup Tools Compared
- Duplicati vs Kopia: Backup Tools Compared
- How to Self-Host Borgmatic with Docker
- Duplicati vs Borgmatic
- Duplicati vs Restic
- Best Self-Hosted Backup Solutions
- Replace Backblaze
- Replace CrashPlan
- Docker Compose Basics
- Reverse Proxy Explained
- Backup Strategy: The 3-2-1 Rule