Self-Hosting Duplicacy for Cloud Backup

What Is Duplicacy?

Duplicacy is a backup tool that combines lock-free deduplication with broad cloud storage support. Its defining feature is cross-computer deduplication: multiple machines backing up to the same storage share deduplicated chunks, dramatically reducing total storage use. Duplicacy comes in two editions: a free CLI and a paid web UI ($20 one-time for personal use).

Prerequisites

  • A Linux server (Ubuntu 22.04+ recommended)
  • Docker and Docker Compose installed
  • 512 MB of free RAM (minimum)
  • A backup destination (local disk, NAS, S3, B2, Wasabi, etc.)

Docker Compose Configuration

Duplicacy’s web UI edition has a Docker image maintained by the community. Create a docker-compose.yml file:

services:
  duplicacy:
    image: saspus/duplicacy-web:mini_v1.8.1
    container_name: duplicacy
    restart: unless-stopped
    hostname: duplicacy-server
    ports:
      - "3875:3875"    # Web UI
    volumes:
      - duplicacy-config:/config        # Duplicacy configuration and logs
      - duplicacy-cache:/cache          # Dedup cache (speeds up backups)
      - /path/to/backup-source:/data:ro # CHANGE: directories to back up
      - /path/to/local-storage:/storage # CHANGE: local backup destination (optional)
    environment:
      TZ: UTC
      USR_ID: 1000         # Host user ID for file permissions
      GRP_ID: 1000         # Host group ID for file permissions
    networks:
      - backup

networks:
  backup:
    driver: bridge

volumes:
  duplicacy-config:
  duplicacy-cache:

Important: Replace /path/to/backup-source with the directory you want to back up, and /path/to/local-storage with your local backup destination. For cloud-only backup, you can remove the /storage volume.

Start the server:

docker compose up -d
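To confirm the container came up cleanly, check its status and logs with standard Docker Compose commands (the service name `duplicacy` matches the compose file above):

```shell
# Confirm the container is running and the port mapping is active
docker compose ps duplicacy

# Tail startup logs; look for the line announcing the listening address
docker compose logs --tail=50 duplicacy

# Optional: check that the web UI answers on port 3875
curl -sI http://localhost:3875 | head -n 1
```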

Initial Setup

  1. Open http://your-server-ip:3875 in your browser
  2. Create an admin password on first access
  3. Add a new backup:
    • Repository: Path to your data inside the container (e.g., /data)
    • Storage: Choose your backup destination:
      • Local disk: /storage
      • S3: s3://region@bucket-name
      • Backblaze B2: b2://bucket-name
      • SFTP: sftp://user@host/path
      • Wasabi: wasabi://region@bucket-name
    • Schedule: Set backup frequency (hourly, daily, etc.)

| Default Setting | Value |
| --- | --- |
| Web UI port | 3875 |
| Default auth | Set on first access |
| Config location | /config (container) |
| Cache location | /cache (container) |

Configuration

Supported Storage Backends

| Backend | URI Format | Cost |
| --- | --- | --- |
| Local disk | /path/to/storage | $0 (your hardware) |
| SFTP | sftp://user@host/path | $0 (your server) |
| Amazon S3 | s3://region@bucket | ~$23/TB/month |
| Backblaze B2 | b2://bucket | $6/TB/month |
| Wasabi | wasabi://region@bucket | $7/TB/month |
| Google Cloud Storage | gcs://bucket | ~$20/TB/month |
| Azure Blob | azure://container | ~$18/TB/month |
| MinIO (S3-compatible) | minio://endpoint/bucket | $0 (self-hosted) |

Cross-Computer Deduplication

Duplicacy’s unique feature: multiple machines can share a single storage backend and deduplicate data across all of them. This is particularly valuable for teams or households where multiple computers contain similar files.

To enable it, point every machine at the same storage backend, giving each machine its own unique snapshot ID. Deduplication happens at the chunk level across all snapshots in the storage, so no further configuration is needed.
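As a sketch, two machines sharing one storage could be set up like this (the snapshot IDs, paths, and the `backup.example.com` SFTP server are placeholders; run each block on the respective machine):

```shell
# On machine A: unique snapshot ID, shared storage URL
cd /home/alice/data
duplicacy init laptop-alice sftp://backup@backup.example.com/duplicacy
duplicacy backup

# On machine B: different snapshot ID, same storage URL
cd /home/bob/data
duplicacy init desktop-bob sftp://backup@backup.example.com/duplicacy
duplicacy backup   # chunks already uploaded by machine A are not re-uploaded
```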

Retention Policies

Configure in the web UI under each backup’s schedule:

| Policy | Description |
| --- | --- |
| Keep all | Keep all snapshots (default) |
| Smart pruning | Keep daily for 7d, weekly for 30d, monthly for 365d |
| Custom | Define your own retention periods |

Example CLI pruning:

duplicacy prune -keep 0:365 -keep 30:30 -keep 7:7 -keep 1:1

Each -keep n:m option keeps one snapshot every n days for snapshots older than m days (n of 0 means delete). So this keeps all snapshots from the last day, one per day for a week, one per week for a month, one per month for a year, and deletes anything older than 365 days.

Advanced Configuration (Optional)

Encryption

Enable encryption when initializing a repository:

duplicacy init -e my-repo-id sftp://user@host/backup

Duplicacy uses AES-256-GCM encryption. The encryption key is derived from your password — do not lose it, as there is no recovery mechanism.
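For unattended runs against an encrypted repository, the CLI reads the password from the DUPLICACY_PASSWORD environment variable, avoiding the interactive prompt. A sketch, assuming the password is kept in a root-only file:

```shell
# Initialize an encrypted repository (-e), then back up non-interactively.
cd /path/to/backup-source
duplicacy init -e my-repo-id sftp://user@host/backup

# Supply the password via the environment for cron/systemd runs;
# prefer sourcing it from a root-readable file over hardcoding it.
export DUPLICACY_PASSWORD="$(cat /root/.duplicacy-password)"
duplicacy backup -stats
```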

Filters (Include/Exclude)

Create a .duplicacy/filters file in your repository root:

# Exclude patterns
-node_modules/
-.git/
-*.tmp
-__pycache__/
-.cache/

# Include patterns
+Documents/
+Photos/
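To check what a filters file will actually select before committing to a real backup, the CLI's -dry-run flag walks the repository and applies the patterns without uploading anything:

```shell
# Walk the source tree, applying .duplicacy/filters, without uploading.
cd /path/to/backup-source
duplicacy backup -dry-run -stats
```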

CLI Usage

For users who prefer the free CLI edition:

# Initialize a repository
cd /path/to/backup-source
duplicacy init my-repo sftp://user@host/backup

# Run a backup
duplicacy backup

# List snapshots
duplicacy list

# Restore from snapshot
duplicacy restore -r 42    # Restore revision 42

# Prune old snapshots
duplicacy prune -keep 0:180 -keep 7:30 -keep 1:7
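The CLI edition has no built-in scheduler, so backups are typically driven by cron. A minimal /etc/cron.d entry might look like this (paths and times are examples):

```shell
# /etc/cron.d/duplicacy — nightly backup at 02:00, weekly prune on Sundays
0 2 * * *  root  cd /path/to/backup-source && /usr/local/bin/duplicacy backup -stats >> /var/log/duplicacy.log 2>&1
0 3 * * 0  root  cd /path/to/backup-source && /usr/local/bin/duplicacy prune -keep 0:180 -keep 7:30 -keep 1:7 >> /var/log/duplicacy.log 2>&1
```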

Reverse Proxy

To access Duplicacy’s web UI behind SSL, proxy to port 3875. For setup details, see our Reverse Proxy Setup guide.
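As a minimal example, an nginx server block proxying to the container might look like this (the domain and certificate paths are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name duplicacy.example.com;

    ssl_certificate     /etc/ssl/certs/duplicacy.example.com.pem;
    ssl_certificate_key /etc/ssl/private/duplicacy.example.com.key;

    location / {
        # Forward to the Duplicacy web UI published on the host
        proxy_pass http://127.0.0.1:3875;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```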

Backup

Back up the Duplicacy configuration itself:

# The config volume contains all settings and credentials
docker compose exec duplicacy tar czf /tmp/config-backup.tar.gz /config
docker compose cp duplicacy:/tmp/config-backup.tar.gz ./

For a comprehensive backup strategy, see our Backup Strategy guide.

Troubleshooting

Web UI not accessible

Symptom: Cannot connect to port 3875. Fix: Check container logs: docker compose logs duplicacy. Ensure the port mapping is correct and no other service is using 3875.

Slow initial backup

Symptom: First backup takes much longer than expected. Fix: This is normal — the initial backup uploads all data. Subsequent incremental backups are fast. For large datasets, consider running the first backup to a local destination and then migrating the repository.

Permission errors on backup source

Symptom: “Permission denied” when backing up mounted directories. Fix: Ensure USR_ID and GRP_ID environment variables match the owner of the source files. Check with ls -la /path/to/backup-source.

Cache growing too large

Symptom: The cache volume consumes significant disk space. Fix: Duplicacy caches chunk metadata locally for faster dedup. For very large repositories, the cache can grow to several GB. You can safely delete the cache contents — Duplicacy rebuilds it automatically (first backup after clearing will be slower).
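Clearing the cache from the host, assuming the compose setup above (the named volume is mounted at /cache inside the container):

```shell
# Empty the cache inside the running container; Duplicacy rebuilds it
# on the next backup, which will be slower than usual.
docker compose exec duplicacy sh -c 'rm -rf /cache/*'
```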

Resource Requirements

| Resource | Requirement |
| --- | --- |
| RAM | 256 MB idle, 512 MB during backup |
| CPU | Low to medium (dedup is CPU-intensive during the initial backup) |
| Disk | Cache: 1-5 GB. Storage: depends on source data. |

Verdict

Duplicacy fills a specific niche: cross-computer deduplication with cloud storage support. If you back up multiple machines to the same storage, Duplicacy’s shared dedup can save 30-50% more space than per-machine tools. The paid web UI ($20 one-time) provides the most polished management experience in the self-hosted backup space. However, for single-machine backup, Restic (free, larger community) or Kopia (free, built-in UI) are better choices.
