Self-Hosting Duplicacy for Cloud Backup
What Is Duplicacy?
Duplicacy is a backup tool that combines lock-free deduplication with broad cloud storage support. Its defining feature is cross-computer deduplication — multiple machines backing up to the same storage can share dedup data, dramatically reducing total storage needs. Duplicacy comes in two editions: a free CLI and a paid web UI ($20 one-time for personal use).
Prerequisites
- A Linux server (Ubuntu 22.04+ recommended)
- Docker and Docker Compose installed (guide)
- 512 MB of free RAM (minimum)
- A backup destination (local disk, NAS, S3, B2, Wasabi, etc.)
Docker Compose Configuration
Duplicacy’s web UI edition has a Docker image maintained by the community. Create a docker-compose.yml file:
```yaml
services:
  duplicacy:
    image: saspus/duplicacy-web:mini_v1.8.1
    container_name: duplicacy
    restart: unless-stopped
    hostname: duplicacy-server
    ports:
      - "3875:3875"                        # Web UI
    volumes:
      - duplicacy-config:/config           # Duplicacy configuration and logs
      - duplicacy-cache:/cache             # Dedup cache (speeds up backups)
      - /path/to/backup-source:/data:ro    # CHANGE: directories to back up
      - /path/to/local-storage:/storage    # CHANGE: local backup destination (optional)
    environment:
      TZ: UTC
      USR_ID: 1000                         # Host user ID for file permissions
      GRP_ID: 1000                         # Host group ID for file permissions
    networks:
      - backup

networks:
  backup:
    driver: bridge

volumes:
  duplicacy-config:
  duplicacy-cache:
```
Important: Replace /path/to/backup-source with the directory you want to back up, and /path/to/local-storage with your local backup destination. For cloud-only backup, you can remove the /storage volume.
Start the server:
```shell
docker compose up -d
```
Initial Setup
1. Open `http://your-server-ip:3875` in your browser
2. Create an admin password on first access
3. Add a new backup:
   - Repository: Path to your data inside the container (e.g., `/data`)
   - Storage: Choose your backup destination:
     - Local disk: `/storage`
     - S3: `s3://region@bucket-name`
     - Backblaze B2: `b2://bucket-name`
     - SFTP: `sftp://user@host/path`
     - Wasabi: `wasabi://region@bucket-name`
   - Schedule: Set backup frequency (hourly, daily, etc.)
| Default Setting | Value |
|---|---|
| Web UI port | 3875 |
| Default auth | Set on first access |
| Config location | /config (container) |
| Cache location | /cache (container) |
Configuration
Supported Storage Backends
| Backend | URI Format | Cost |
|---|---|---|
| Local disk | /path/to/storage | $0 (your hardware) |
| SFTP | sftp://user@host/path | $0 (your server) |
| Amazon S3 | s3://region@bucket | ~$23/TB/month |
| Backblaze B2 | b2://bucket | $6/TB/month |
| Wasabi | wasabi://region@bucket | $7/TB/month |
| Google Cloud Storage | gcs://bucket | ~$20/TB/month |
| Azure Blob | azure://container | ~$18/TB/month |
| MinIO (S3-compatible) | minio://endpoint/bucket | $0 (self-hosted) |
Cross-Computer Deduplication
Duplicacy’s unique feature: multiple machines can share a single storage backend and deduplicate data across all of them. This is particularly valuable for teams or households where multiple computers contain similar files.
To enable: point all machines at the same storage backend with the same repository ID prefix. Duplicacy handles dedup automatically.
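A minimal sketch of what this looks like with the CLI (hostnames, paths, and repository IDs below are placeholders): each machine initializes its own repository ID against the same storage URL, and chunks already uploaded by one machine are skipped by the others.

```shell
# On machine A: initialize against the shared SFTP storage
cd /home/alice/documents
duplicacy init laptop-alice sftp://backup@nas.local/duplicacy

# On machine B: same storage URL, different repository ID
cd /home/bob/documents
duplicacy init desktop-bob sftp://backup@nas.local/duplicacy

# Chunks machine A already uploaded are not re-uploaded by machine B
duplicacy backup
```

The web UI follows the same model: give each machine a unique backup ID but point them all at one storage entry.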
Retention Policies
Configure in the web UI under each backup’s schedule:
| Policy | Description |
|---|---|
| Keep all | Keep all snapshots (default) |
| Smart pruning | Keep daily for 7d, weekly for 30d, monthly for 365d |
| Custom | Define your own retention periods |
Example CLI pruning:
```shell
duplicacy prune -keep 0:365 -keep 30:30 -keep 7:7 -keep 1:1
```
Each `-keep n:m` rule keeps one snapshot every n days for snapshots older than m days (`-keep 0:m` deletes everything older than m days). So this keeps every snapshot from the last day, one per day after a day, one per week after a week, one per month after a month, and deletes anything older than a year.
Advanced Configuration (Optional)
Encryption
Enable encryption when initializing a repository:
```shell
duplicacy init -e my-repo-id sftp://user@host/backup
```
Duplicacy uses AES-256-GCM encryption. The encryption key is derived from your password — do not lose it, as there is no recovery mechanism.
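For unattended (scheduled) backups of an encrypted storage, the CLI can read the storage password from the `DUPLICACY_PASSWORD` environment variable instead of prompting interactively. A sketch, assuming a cron or systemd context where the variable can be set securely:

```shell
# Supply the storage password non-interactively (keep this out of shell history;
# prefer an environment file readable only by the backup user)
export DUPLICACY_PASSWORD='your-storage-password'
duplicacy backup
```

The web UI edition stores credentials in its own configuration, so this applies to CLI-driven setups.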
Filters (Include/Exclude)
Create a .duplicacy/filters file in your repository root:
```
# Exclude patterns
-node_modules/
-.git/
-*.tmp
-__pycache__/
-.cache/

# Include patterns
+Documents/
+Photos/
```
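Before trusting a filters file with real data, it helps to verify which files it actually matches. A dry-run backup with debug logging prints each include/exclude decision without uploading anything (flag names per the Duplicacy CLI; confirm against `duplicacy backup -help` on your version):

```shell
# -d enables debug logging; -dry-run walks the repository without uploading
duplicacy -d backup -dry-run
```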
CLI Usage
For users who prefer the free CLI edition:
```shell
# Initialize a repository
cd /path/to/backup-source
duplicacy init my-repo sftp://user@host/backup

# Run a backup
duplicacy backup

# List snapshots
duplicacy list

# Restore from snapshot
duplicacy restore -r 42    # Restore revision 42

# Prune old snapshots
duplicacy prune -keep 0:180 -keep 7:30 -keep 1:7
```
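The CLI edition has no built-in scheduler, so backups are typically driven by cron. An illustrative crontab fragment (paths and times are placeholders; `duplicacy` must be on the path or referenced absolutely):

```shell
# Nightly incremental backup at 02:00, weekly prune on Sundays at 03:00
0 2 * * *  cd /path/to/backup-source && /usr/local/bin/duplicacy backup
0 3 * * 0  cd /path/to/backup-source && /usr/local/bin/duplicacy prune -keep 0:180 -keep 7:30 -keep 1:7
```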
Reverse Proxy
To access Duplicacy’s web UI behind SSL, proxy to port 3875. For setup details, see our Reverse Proxy Setup guide.
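As a rough sketch, an nginx server block for this might look like the following (the hostname and certificate paths are placeholders; adapt to whichever proxy you use):

```nginx
server {
    listen 443 ssl;
    server_name duplicacy.example.com;

    ssl_certificate     /etc/ssl/certs/duplicacy.example.com.pem;
    ssl_certificate_key /etc/ssl/private/duplicacy.example.com.key;

    location / {
        proxy_pass http://127.0.0.1:3875;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```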
Backup
Back up the Duplicacy configuration itself:
```shell
# The config volume contains all settings and credentials
docker compose exec duplicacy tar czf /tmp/config-backup.tar.gz /config
docker compose cp duplicacy:/tmp/config-backup.tar.gz ./
```
For a comprehensive backup strategy, see our Backup Strategy guide.
Troubleshooting
Web UI not accessible
Symptom: Cannot connect to port 3875.
Fix: Check container logs: docker compose logs duplicacy. Ensure the port mapping is correct and no other service is using 3875.
Slow initial backup
Symptom: First backup takes much longer than expected.
Fix: This is normal: the initial backup uploads all data. Subsequent incremental backups are fast. For large datasets, consider running the first backup to a local destination and then migrating the repository.
Permission errors on backup source
Symptom: “Permission denied” when backing up mounted directories.
Fix: Ensure USR_ID and GRP_ID environment variables match the owner of the source files. Check with ls -la /path/to/backup-source.
Cache growing too large
Symptom: The cache volume consumes significant disk space.
Fix: Duplicacy caches chunk metadata locally for faster dedup. For very large repositories, the cache can grow to several GB. You can safely delete the cache contents; Duplicacy rebuilds it automatically (the first backup after clearing will be slower).
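With the Docker setup above, clearing the cache can be done inside the running container, for example:

```shell
# Safe to run while the container is idle; the cache is rebuilt on the next backup
docker compose exec duplicacy sh -c 'rm -rf /cache/*'
```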
Resource Requirements
| Resource | Requirement |
|---|---|
| RAM | 256 MB idle, 512 MB during backup |
| CPU | Low to Medium (dedup is CPU-intensive during initial backup) |
| Disk | Cache: 1-5 GB. Storage: depends on source data. |
Verdict
Duplicacy fills a specific niche: cross-computer deduplication with cloud storage support. If you back up multiple machines to the same storage, Duplicacy’s shared dedup can save 30-50% more space than per-machine tools. The paid web UI ($20 one-time) provides the most polished management experience in the self-hosted backup space. However, for single-machine backup, Restic (free, larger community) or Kopia (free, built-in UI) are better choices.
Related
- BorgBackup vs Duplicacy: Which Backup Tool Wins?
- Duplicacy vs Kopia: Which Backup Tool to Self-Host?
- Duplicacy vs Restic: Which Backup Tool to Self-Host?
- Duplicati vs Duplicacy: Backup Tools Compared
- Best Self-Hosted Backup Solutions
- Restic vs Kopia vs BorgBackup
- Duplicati vs Restic
- Kopia vs Restic
- Self-Hosted Alternatives to CrashPlan
- Self-Hosted Alternatives to Backblaze
- Docker Compose Basics
- Reverse Proxy Setup
- Backup Strategy