BorgBackup: Slow Backup Speed — Fix
The Problem
BorgBackup is running significantly slower than expected. Initial backups take hours for moderate data sets (~100-500 GB), or incremental backups that should finish in minutes are taking 30+ minutes. Common symptoms:
- Backup speed drops below 10 MB/s for local repositories
- SSH-based backups are noticeably slower than local
- Incremental backups re-process files that haven’t changed
- High CPU usage during backup without corresponding throughput
- `borg create` appears to hang on certain directories
The Cause
BorgBackup performance depends on multiple factors. The most common bottlenecks:
| Bottleneck | Symptoms | Frequency |
|---|---|---|
| Heavy compression algorithm | High CPU, low throughput | Very common |
| SSH overhead / no multiplexing | Slow remote backups | Common |
| Stale cache / full re-scan | Incremental behaves like full | Common |
| Many small files | Slow regardless of total size | Common |
| Slow disk I/O (HDD, USB drives) | Low throughput, high iowait | Common |
| Network bandwidth limit | Slow remote backups | Occasional |
The Fix
Method 1: Switch to a Faster Compression Algorithm
The default compression (lz4) is already fast, but if you’ve configured zstd or zlib at high levels, that’s likely your bottleneck.
| Algorithm | Speed | Ratio | Recommendation |
|---|---|---|---|
| `none` | Fastest | 1:1 | Use if network/disk is the bottleneck |
| `lz4` | Very fast | ~1.5:1 | Best default for most users |
| `zstd,1` | Fast | ~2:1 | Good balance of speed and ratio |
| `zstd,6` | Moderate | ~2.5:1 | Noticeably slower |
| `zstd,19` | Slow | ~3:1 | Only for archival (very CPU-heavy) |
| `zlib,6` | Slow | ~2.3:1 | Legacy: use zstd instead |
Change compression:
# Use lz4 for speed
borg create --compression lz4 /repo::backup /data
# Or zstd level 1 for better ratio without much speed loss
borg create --compression zstd,1 /repo::backup /data
# In Borgmatic config.yaml:
compression: lz4
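If you want to measure the tradeoff on your own data, a rough timing comparison looks like the sketch below. The repository and data paths are hypothetical; it assumes an already-initialized repo and skips cleanly if borg is not installed.

```shell
# Hypothetical paths -- point these at a real repo and a representative data set.
REPO=/backup/borg-repo
DATA=/data/sample

if command -v borg >/dev/null 2>&1; then
    for algo in lz4 zstd,1 zstd,6; do
        name="bench-$(echo "$algo" | tr ',' '-')"
        echo "=== $algo ==="
        # --stats reports original, compressed, and deduplicated sizes.
        time borg create --stats --compression "$algo" "$REPO::$name" "$DATA"
        borg delete "$REPO::$name"   # remove the test archive
    done
else
    echo "borg not installed; skipping benchmark"
fi
```

Compare the wall-clock time against the compressed size in the `--stats` output to pick the level your CPU can sustain.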
Method 2: Fix SSH Performance for Remote Backups
SSH-based repositories are inherently slower due to encryption overhead and round-trip latency. Optimize with:
Enable SSH multiplexing (~/.ssh/config):
Host backup-server
HostName nas.local
User backup
ControlMaster auto
ControlPath ~/.ssh/sockets/%r@%h-%p
ControlPersist 600
Compression no
ServerAliveInterval 60
mkdir -p ~/.ssh/sockets
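You can confirm the master connection is actually being reused. `backup-server` is the example Host alias from the config above; substitute your own.

```shell
# Query the control socket for an active master connection.
if ssh -O check backup-server 2>/dev/null; then
    status="multiplexing active"
else
    status="no master connection yet (open one with: ssh backup-server true)"
fi
echo "$status"
```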
Key optimizations:
- `ControlMaster auto`: reuses a single SSH connection for all Borg operations
- `Compression no`: Borg handles its own compression; SSH compression adds unnecessary CPU overhead
- `ControlPersist 600`: keeps the connection alive for 10 minutes between operations
Use a faster SSH cipher:
borg create --rsh "ssh -c [email protected]" /repo::backup /data
AES-128-GCM is hardware-accelerated on most modern CPUs and significantly faster than the default.
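Before forcing a cipher with `--rsh`, confirm your OpenSSH build actually offers it. The AES-NI check below is Linux/x86-specific.

```shell
# List the GCM ciphers this OpenSSH build supports.
if command -v ssh >/dev/null 2>&1; then
    gcm=$(ssh -Q cipher | grep gcm || true)
else
    gcm=""
fi
echo "${gcm:-no GCM cipher found}"

# On x86 Linux, the "aes" CPU flag indicates AES-NI hardware acceleration.
grep -m1 -o -w aes /proc/cpuinfo 2>/dev/null || echo "no AES flag (or not Linux/x86)"
```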
Method 3: Rebuild the Cache
If incremental backups are slow (re-scanning files that haven’t changed), the Borg cache may be corrupted or out of sync:
# Delete and rebuild the cache
borg delete --cache-only /repo
borg create /repo::backup /data
The first backup after cache rebuild will be slower (full scan), but subsequent incremental backups should be fast again.
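Before deleting anything, it can help to look at the cache itself. Borg keeps it under `~/.cache/borg` unless `BORG_CACHE_DIR` overrides the location; an unexpectedly large files cache is one hint that a rebuild may help.

```shell
# Locate and size the Borg cache directory.
cache_dir="${BORG_CACHE_DIR:-$HOME/.cache/borg}"
du -sh "$cache_dir" 2>/dev/null || echo "no Borg cache at $cache_dir"
```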
Method 4: Exclude Unnecessary Files
Large numbers of small files (node_modules, .git directories, caches) dramatically slow down Borg:
borg create --exclude '*.tmp' \
--exclude 'node_modules' \
--exclude '.git' \
--exclude '__pycache__' \
--exclude '.cache' \
--exclude '*.log' \
/repo::backup /data
In Borgmatic:
exclude_patterns:
- 'node_modules'
- '.git'
- '__pycache__'
- '*.tmp'
- '.cache'
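To verify your exclude patterns before committing to a real run, Borg 1.2 and later support a dry-run preview (the repo and data paths below are placeholders). Excluded paths should appear in the listing with file status `x`.

```shell
# Preview the file list without writing anything to the repository.
if command -v borg >/dev/null 2>&1; then
    borg create --list --dry-run \
        --exclude 'node_modules' --exclude '.git' \
        /repo::test /data
    previewed=1
else
    echo "borg not installed; skipping preview"
    previewed=0
fi
```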
Method 5: Use Chunker Tuning for Large Files
For repositories primarily containing large files (media, databases, VMs), adjust the chunker:
The parameters are `buzhash,CHUNK_MIN_EXP,CHUNK_MAX_EXP,HASH_MASK_BITS,HASH_WINDOW_SIZE`, where the three middle values are powers-of-two exponents. The default, `buzhash,19,23,21,4095`, produces chunks between 512 KiB and 8 MiB with a ~2 MiB target. Larger chunks are faster for large files at the cost of deduplication granularity. For large-file workloads, increase the minimum and maximum chunk sizes:
# Min 2 MiB, target ~8 MiB, max 32 MiB chunks
borg create --chunker-params buzhash,21,25,23,4095 /repo::backup /data
Larger chunks reduce metadata overhead but decrease deduplication effectiveness for small changes within large files.
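The exponent-to-size mapping is just powers of two, so the sizes for the example above can be checked directly:

```shell
# For buzhash,21,25,23,4095 the chunk sizes are 2^exponent bytes.
min_chunk=$((2 ** 21))    # minimum chunk size: 2 MiB
max_chunk=$((2 ** 25))    # maximum chunk size: 32 MiB
target=$((2 ** 23))       # average target size: ~8 MiB
echo "min=$min_chunk target=$target max=$max_chunk"
```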
Method 6: Check Hardware Bottlenecks
# Monitor disk I/O during backup
iostat -x 1
# Check if the repository disk is the bottleneck
iotop -o
| Hardware Issue | Solution |
|---|---|
| Repository on USB HDD | Move to internal SATA/NVMe drive |
| Repository on slow NAS (SMB/NFS) | Switch to SSH-based repository on the NAS |
| Source files on slow disk | Nothing Borg can do — disk speed is the limit |
| Low RAM (Borg uses 200-500 MB) | Increase available RAM or reduce `--checkpoint-interval` |
Prevention
- Always use `lz4` or `zstd,1` unless you specifically need maximum compression for archival
- Configure SSH multiplexing before setting up remote repositories
- Exclude build artifacts and caches — they churn constantly and waste backup time
- Monitor backup duration via Borgmatic’s Healthchecks.io integration to catch slowdowns early
- Run `borg compact` periodically to reclaim space and improve repository performance
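Compaction is easy to automate. For example, a monthly crontab entry (with a hypothetical repository path) could handle it:

```shell
# m h dom mon dow  command
0 3 1 * *  borg compact /backup/borg-repo
```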