BorgBackup: Slow Backup Speed — Fix

The Problem

BorgBackup is running significantly slower than expected. Initial backups take hours for moderate data sets (~100-500 GB), or incremental backups that should finish in minutes are taking 30+ minutes. Common symptoms:

  • Backup speed drops below 10 MB/s for local repositories
  • SSH-based backups are noticeably slower than local
  • Incremental backups re-process files that haven’t changed
  • High CPU usage during backup without corresponding throughput
  • borg create appears to hang on certain directories
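Before tuning anything, measure where the time actually goes. A quick diagnostic run (using the /repo and /data placeholder paths from the examples below) prints per-archive sizes and duration, which tells you whether compression, deduplication, or raw I/O dominates:

```
# --stats prints original/compressed/deduplicated sizes and total duration;
# --progress shows live throughput while the backup runs
borg create --stats --progress /repo::diag-{now} /data
```

If the compressed size is close to the original size but CPU is pegged, compression is wasted effort; if throughput is low with idle CPU, look at disk or network instead.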

The Cause

BorgBackup performance depends on multiple factors. The most common bottlenecks:

| Bottleneck | Symptoms | Frequency |
|---|---|---|
| Heavy compression algorithm | High CPU, low throughput | Very common |
| SSH overhead / no multiplexing | Slow remote backups | Common |
| Stale cache / full re-scan | Incremental behaves like full | Common |
| Many small files | Slow regardless of total size | Common |
| Slow disk I/O (HDD, USB drives) | Low throughput, high iowait | Common |
| Network bandwidth limit | Slow remote backups | Occasional |

The Fix

Method 1: Switch to a Faster Compression Algorithm

The default compression (lz4) is already fast, but if you’ve configured zstd or zlib at high levels, that’s likely your bottleneck.

| Algorithm | Speed | Ratio | Recommendation |
|---|---|---|---|
| none | Fastest | 1:1 | Use if network/disk is the bottleneck |
| lz4 | Very fast | ~1.5:1 | Best default for most users |
| zstd,1 | Fast | ~2:1 | Good balance of speed and ratio |
| zstd,6 | Moderate | ~2.5:1 | Noticeably slower |
| zstd,19 | Slow | ~3:1 | Only for archival (very CPU-heavy) |
| zlib,6 | Slow | ~2.3:1 | Legacy — use zstd instead |

Change compression:

# Use lz4 for speed
borg create --compression lz4 /repo::backup /data

# Or zstd level 1 for better ratio without much speed loss
borg create --compression zstd,1 /repo::backup /data

# In Borgmatic config.yaml:
compression: lz4
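If you are unsure which setting suits your data, a rough benchmark is to time a backup of a representative subset with each algorithm. The repo path and subset path below are placeholders; note that deduplication against earlier test archives will skew later runs, so delete the test archives (or use a fresh repo) between runs:

```
# Time each compression setting against the same sample data
# (/tmp/test-repo and /data/subset are illustrative paths)
for comp in none lz4 zstd,1 zstd,6; do
    echo "== $comp =="
    time borg create --compression "$comp" /tmp/test-repo::"bench-$comp" /data/subset
    borg delete /tmp/test-repo::"bench-$comp"   # avoid dedup skewing the next run
done
```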

Method 2: Fix SSH Performance for Remote Backups

SSH-based repositories are inherently slower due to encryption overhead and round-trip latency. Optimize with:

Enable SSH multiplexing (~/.ssh/config):

Host backup-server
    HostName nas.local
    User backup
    ControlMaster auto
    ControlPath ~/.ssh/sockets/%r@%h-%p
    ControlPersist 600
    Compression no
    ServerAliveInterval 60

Create the socket directory referenced by ControlPath:

mkdir -p ~/.ssh/sockets

Key optimizations:

  • ControlMaster auto — reuses a single SSH connection for all Borg operations
  • Compression no — Borg handles its own compression; SSH compression adds unnecessary CPU overhead
  • ControlPersist 600 — keeps the connection alive for 10 minutes between operations
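To confirm multiplexing is actually in effect (assuming the Host alias from the config above), open a connection and query the master:

```
# First command establishes the master connection;
# -O check reports "Master running (pid=...)" if multiplexing is active
ssh backup-server true
ssh -O check backup-server
```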

Use a faster SSH cipher:

borg create --rsh "ssh -c aes128-gcm@openssh.com" /repo::backup /data

AES-128-GCM is hardware-accelerated (AES-NI) on most modern x86 CPUs, where it is typically faster than OpenSSH's default chacha20-poly1305 cipher.

Method 3: Rebuild the Cache

If incremental backups are slow (re-scanning files that haven’t changed), the Borg cache may be corrupted or out of sync:

# Delete and rebuild the cache
borg delete --cache-only /repo
borg create /repo::backup /data

The first backup after cache rebuild will be slower (full scan), but subsequent incremental backups should be fast again.
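If cache rebuilds keep recurring, check where the cache lives and how large it has grown. By default Borg keeps it under ~/.cache/borg (the BORG_CACHE_DIR environment variable overrides this):

```
# Size of each repository's cache; harmless if the directory does not exist yet
du -sh ~/.cache/borg/* 2>/dev/null
```

A cache on a slow or nearly-full disk can itself be the bottleneck; moving BORG_CACHE_DIR to a fast local disk sometimes helps.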

Method 4: Exclude Unnecessary Files

Large numbers of small files (node_modules, .git directories, caches) dramatically slow down Borg:

borg create --exclude '*.tmp' \
            --exclude 'node_modules' \
            --exclude '.git' \
            --exclude '__pycache__' \
            --exclude '.cache' \
            --exclude '*.log' \
            /repo::backup /data

In Borgmatic:

exclude_patterns:
  - 'node_modules'
  - '.git'
  - '__pycache__'
  - '*.tmp'
  - '.cache'
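To verify the patterns match what you expect before committing to a full run, recent Borg versions support a dry run with a per-file listing (paths as in the example above):

```
# Lists files with their status characters; nothing is written to the repository
borg create --list --dry-run \
    --exclude 'node_modules' --exclude '.git' \
    /repo::test /data 2>&1 | head -50
```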

Method 5: Use Chunker Tuning for Large Files

For repositories primarily containing large files (media, databases, VMs), adjust the chunker:

# Default parameters: min 2^19 = 512 KB, max 2^23 = 8 MB, target 2^21 = 2 MB
borg create --chunker-params buzhash,19,23,21,4095 /repo::backup /data

The parameters are buzhash,min_exp,max_exp,mask_bits,window_size, where the first three are powers-of-two sizes. The default is buzhash,19,23,21,4095. For large-file workloads, increase the minimum and maximum chunk sizes:

# Min 2MB, target 8MB, max 32MB chunks
borg create --chunker-params buzhash,21,25,23,4095 /repo::backup /data

Larger chunks reduce metadata overhead but decrease deduplication effectiveness for small changes within large files.
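Since the parameters are exponents, the actual sizes are easy to sanity-check in the shell before committing to a re-chunking run. This computes the sizes implied by the tuned buzhash,21,25,23 example above:

```shell
# chunker-params are exponents: size = 2^exp bytes
echo "min:    $(( (1 << 21) / 1024 / 1024 )) MB"   # 2 MB
echo "target: $(( (1 << 23) / 1024 / 1024 )) MB"   # 8 MB
echo "max:    $(( (1 << 25) / 1024 / 1024 )) MB"   # 32 MB
```

Note that changing chunker params means existing chunks no longer match, so the first backup after the change behaves like a full backup.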

Method 6: Check Hardware Bottlenecks

# Monitor disk I/O during backup
iostat -x 1

# Check if the repository disk is the bottleneck
iotop -o

| Hardware Issue | Solution |
|---|---|
| Repository on USB HDD | Move to internal SATA/NVMe drive |
| Repository on slow NAS (SMB/NFS) | Switch to SSH-based repository on the NAS |
| Source files on slow disk | Nothing Borg can do — disk speed is the limit |
| Low RAM (Borg uses 200-500 MB) | Increase available RAM or reduce --checkpoint-interval |
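To get a baseline for sequential write speed on the repository disk, a simple dd test works. The /tmp path below is a placeholder: point TESTFILE at the repository's mount point to test the right disk:

```shell
# Rough sequential-write test; dd reports throughput on completion.
# conv=fdatasync forces the data to disk so the number is honest.
TESTFILE=/tmp/borg-disk-test   # replace /tmp with the repo's mount point
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync
rm -f "$TESTFILE"
```

If dd reports, say, 30 MB/s, Borg cannot go faster than that no matter how it is tuned.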

Prevention

  1. Always use lz4 or zstd,1 unless you specifically need maximum compression for archival
  2. Configure SSH multiplexing before setting up remote repositories
  3. Exclude build artifacts and caches — they churn constantly and waste backup time
  4. Monitor backup duration via Borgmatic’s Healthchecks.io integration to catch slowdowns early
  5. Run borg compact periodically to reclaim space and improve repository performance
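The last two points can be automated. A crontab sketch (times and paths are illustrative; borg compact requires Borg 1.2+):

```
# Nightly backup at 02:00, weekly compact on Sundays at 04:00
0 2 * * * borg create /repo::{now} /data
0 4 * * 0 borg compact /repo
```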
