Duplicati vs Duplicacy: Backup Tools Compared

The Problem

If you need encrypted backups to cloud storage without paying monthly for Backblaze Personal or CrashPlan, Duplicati and Duplicacy both solve it — but they approach deduplication, multi-machine backup, and licensing very differently. The confusing names don’t help. Here is what actually matters.

Quick Verdict

Duplicati is free, has a polished web UI, and supports 20+ cloud backends out of the box. Duplicacy’s killer feature is lock-free cross-computer deduplication — multiple machines backing up to the same cloud storage share dedup data, cutting total storage costs dramatically. For single-server backups, Duplicati wins on simplicity and cost. For multi-machine environments, Duplicacy’s dedup architecture saves real money on cloud storage bills.

Feature Comparison

| Feature | Duplicati | Duplicacy |
| --- | --- | --- |
| Deduplication | Block-level, per-backup-job | Lock-free, cross-computer |
| Cloud backends | 20+ (S3, B2, Google Drive, OneDrive, SFTP, WebDAV, etc.) | 15+ (S3, B2, Wasabi, Azure, GCS, SFTP, etc.) |
| Web UI | Free, included | Paid ($20 one-time personal, $50 commercial) |
| CLI | Yes (free) | Yes (free) |
| Encryption | AES-256 (built-in) | AES-256-GCM (built-in) |
| Compression | Zip, 7z | LZ4, zstd |
| Scheduling | Built-in scheduler with cron-like options | Built-in scheduler (web UI) or system cron (CLI) |
| Restore granularity | File-level with version browsing | File-level with snapshot browsing |
| Bandwidth throttling | Yes | Yes |
| Email notifications | Built-in | Via scripts or web UI |
| Multi-machine dedup | No — each backup job deduplicates independently | Yes — multiple machines share a single storage pool |
| Docker image | LinuxServer.io (free) | Community-maintained (saspus/duplicacy-web) |
| License | LGPL (fully free) | Free CLI / paid web UI |
| Language | C# (.NET) | Go |

Installation Complexity

Duplicati deploys with a single container and a web UI accessible immediately:

services:
  duplicati:
    image: lscr.io/linuxserver/duplicati:v2.2.0.3-ls5
    container_name: duplicati
    restart: unless-stopped
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=UTC
    ports:
      - "8200:8200"
    volumes:
      - duplicati-config:/config
      - /path/to/backups:/backups
      - /path/to/source:/source:ro

volumes:
  duplicati-config:

Open http://your-server:8200, create a backup job, pick a destination, set a schedule. Done.

Duplicacy’s web UI edition also deploys as a single container:

services:
  duplicacy:
    image: saspus/duplicacy-web:mini_v1.8.1
    container_name: duplicacy
    restart: unless-stopped
    hostname: duplicacy-server
    ports:
      - "3875:3875"
    volumes:
      - duplicacy-config:/config
      - duplicacy-cache:/cache
      - /path/to/source:/data:ro
    environment:
      - TZ=UTC

volumes:
  duplicacy-config:
  duplicacy-cache:

The setup workflow is similar — open the web UI, configure storage, add directories. Duplicacy’s UI is functional but less polished than Duplicati’s. The CLI version requires no Docker at all — it is a single Go binary.

Winner: Duplicati. Both are straightforward Docker deployments, but Duplicati’s web UI is free and more intuitive. Duplicacy’s web UI costs $20.

Deduplication Architecture

This is the core technical difference.

Duplicati deduplicates within a single backup job. If you back up /data on Server A and /data on Server B to the same S3 bucket, they create two independent sets of deduplicated blocks. Identical files on both servers are stored twice.

Duplicacy deduplicates across all machines backing up to the same storage. If Server A and Server B both have the same 5 GB video file, it is stored once. This is possible because Duplicacy uses a lock-free algorithm — no coordination between machines is needed. Each machine independently identifies chunks and checks whether they already exist in storage.
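The idea can be sketched in a few lines of Python. This is a toy illustration of content-addressed chunk storage, not Duplicacy's actual implementation — Duplicacy uses variable-size chunking and its own chunk format — but it shows why no locking is needed: a chunk's name is derived from its content, so two machines writing the "same" chunk write identical data.

```python
import hashlib

def backup(files: dict[str, bytes], storage: dict[str, bytes],
           chunk_size: int = 4) -> dict[str, list[str]]:
    """Toy content-addressed backup: split each file into fixed-size
    chunks, name each chunk by its SHA-256 hash, and upload only
    chunks the shared storage does not already contain."""
    snapshot = {}
    for path, data in files.items():
        chunk_ids = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            chunk_id = hashlib.sha256(chunk).hexdigest()
            # Lock-free dedup: no coordination between machines is
            # needed, because writing the same chunk twice is harmless
            # -- identical content produces an identical name.
            storage.setdefault(chunk_id, chunk)
            chunk_ids.append(chunk_id)
        snapshot[path] = chunk_ids
    return snapshot

# Two "machines" back up identical data to the same storage pool.
storage: dict[str, bytes] = {}
backup({"a.txt": b"hello world!"}, storage)
backup({"b.txt": b"hello world!"}, storage)  # second machine
print(len(storage))  # 3 chunks stored once, not six
```

The snapshot (the per-machine list of chunk IDs) is all each machine needs to record; the chunks themselves are shared.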

| Scenario | Duplicati storage used | Duplicacy storage used |
| --- | --- | --- |
| 1 machine, 100 GB unique data | ~100 GB | ~100 GB |
| 3 machines, 100 GB each, 50% overlap | ~300 GB | ~200 GB |
| 10 machines, 100 GB each, 70% overlap | ~1 TB | ~370 GB |
| Same machine, daily backups (1% daily change) | Incremental — small | Incremental — small |

For single-server backups, the difference is negligible. For multi-machine environments with overlapping data (development teams, media libraries, shared codebases), Duplicacy can cut cloud storage costs by 50–70%.
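The scenario numbers above follow from simple arithmetic. With n machines holding d GB each and an overlap fraction f, per-job dedup stores roughly n·d, while shared dedup stores d for the first machine plus the non-overlapping fraction of each additional one. A quick sketch of that assumed model (it ignores compression and metadata overhead):

```python
def per_job_dedup_gb(machines: int, data_gb: float) -> float:
    # Duplicati-style: each backup job stores its own block set.
    return machines * data_gb

def shared_dedup_gb(machines: int, data_gb: float, overlap: float) -> float:
    # Duplicacy-style: the first machine stores everything; each
    # additional machine adds only its non-overlapping fraction.
    return data_gb + (machines - 1) * data_gb * (1 - overlap)

print(round(per_job_dedup_gb(3, 100)))       # 300
print(round(shared_dedup_gb(3, 100, 0.5)))   # 200
print(round(shared_dedup_gb(10, 100, 0.7)))  # 370
```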

Performance

| Metric | Duplicati | Duplicacy |
| --- | --- | --- |
| RAM (idle) | 200–400 MB | 100–200 MB |
| RAM (backing up 100 GB) | 500 MB–1 GB | 200–500 MB |
| Backup speed (local → S3) | Moderate (limited by .NET runtime) | Fast (Go, parallel uploads) |
| Restore speed | Moderate | Fast |
| CPU during backup | Moderate–high (compression + encryption) | Moderate (efficient compression) |

Duplicacy is generally faster for large backups because Go handles concurrency better than Duplicati’s .NET runtime. Duplicacy also uses LZ4 compression by default, which trades slightly larger file sizes for much faster compression speeds. Duplicati defaults to Zip compression, which is slower but produces smaller archives.
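The speed-versus-size tradeoff is easy to feel with a stand-in experiment. Neither LZ4 nor zstd ships in the Python standard library, so this sketch uses stdlib zlib at its fastest and slowest levels as an analogy: the low level finishes much faster and produces a somewhat larger output, the same shape of tradeoff described above.

```python
import time
import zlib

# Highly repetitive input, so both levels compress well.
data = b"the quick brown fox jumps over the lazy dog " * 20000

for level in (1, 9):
    start = time.perf_counter()
    out = zlib.compress(data, level)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"level {level}: {len(out)} bytes in {elapsed_ms:.1f} ms")
```

Exact numbers depend on the machine, but level 9 output is never larger than level 1 output while taking noticeably longer on sizable inputs.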

Reliability

Duplicati has a well-documented history of database corruption issues (the local SQLite database that tracks backup metadata). The Duplicati team has improved this significantly in v2.1+, but the reputation persists. If the local database corrupts, you can still restore from the remote backup — but you need to rebuild the database first.
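Duplicati's CLI exposes the rebuild as a command. A hedged sketch — the storage URL and passphrase value here are illustrative placeholders; check `duplicati-cli help repair` for the exact options on your version:

```shell
# Rebuild the local SQLite database from the remote backup data.
duplicati-cli repair "b2://my-bucket/duplicati" --passphrase="..."
```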

Duplicacy’s lock-free design means there is no local database to corrupt. Backup metadata is stored alongside the backup data in cloud storage. This makes Duplicacy inherently more resilient to local failures — your backup state lives in the same place as your backup data.

Winner: Duplicacy. The lock-free, database-free architecture is fundamentally more reliable.

Use Cases

Choose Duplicati If…

  • You back up a single server or workstation
  • You want a free, polished web UI with no licensing costs
  • You need a wide range of cloud backends (Google Drive, OneDrive especially)
  • You prefer point-and-click configuration over CLI
  • Budget is the primary concern — Duplicati is 100% free

Choose Duplicacy If…

  • You back up multiple machines to the same cloud storage
  • Storage cost optimization matters — cross-computer dedup saves 30–70%
  • You need fast, reliable backups with minimal RAM overhead
  • You are comfortable with CLI-based tools (the free tier is CLI-only)
  • Reliability is critical — no local database to corrupt

Final Verdict

For a single homelab server with straightforward backup needs, Duplicati is the better choice — it is completely free, has a capable web UI, and supports every cloud storage service you would want. The dedup difference is irrelevant with one machine.

For anyone backing up 3+ machines to cloud storage, Duplicacy wins on economics alone. Cross-computer deduplication can save hundreds of dollars per year in storage costs, easily justifying the $20 web UI license. The Go-based CLI is also noticeably faster than Duplicati’s .NET engine for large data sets.

If reliability is your top priority, Duplicacy’s architecture is fundamentally safer: with no local database, there is nothing local to corrupt.

FAQ

Are these the same project? The names are confusing.

No. Duplicati and Duplicacy are completely separate projects by different developers. Duplicati is a C#/.NET project primarily developed by Kenneth Skovhede. Duplicacy is a Go project by Gilbert Chen. They share a naming coincidence, not code.

Can I migrate from Duplicati to Duplicacy?

Not directly. They use incompatible backup formats. You would need to create new backups with Duplicacy and maintain Duplicati temporarily for restoring old data. Once you are confident in Duplicacy backups, decommission Duplicati.

Is Duplicacy’s free CLI enough?

For automated backups via cron, yes. The CLI handles init, backup, restore, prune, and check operations. The web UI adds scheduling, monitoring, and a graphical configuration interface — worth $20 if you manage multiple backup jobs.
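A minimal cron-driven setup using those subcommands might look like the fragment below. The repository path, snapshot ID, and B2 bucket are illustrative; consult `duplicacy -h` for exact flag syntax before copying anything.

```shell
# One-time setup, run from the directory to back up:
#   cd /data && duplicacy init my-repo b2://my-backup-bucket

# Crontab entries: nightly backup, weekly prune and integrity check.
0 2 * * *  cd /data && duplicacy backup
0 4 * * 0  cd /data && duplicacy prune
0 5 * * 0  cd /data && duplicacy check
```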

Which is better for Backblaze B2?

Both support B2 natively. Duplicacy tends to be faster due to parallel uploads. For a single machine, pick either. For multiple machines backing up to the same B2 bucket, Duplicacy’s cross-dedup makes it the clear winner.
