UrBackup vs BackupPC: Self-Hosted Backup Compared
Quick Verdict
UrBackup is the better choice for most self-hosters. It handles both file and full disk image backups, auto-discovers clients on your LAN, and gives end-users a lightweight agent that lets them trigger restores themselves. BackupPC is the stronger pick if you need to back up many Linux/macOS servers over SSH or rsync without installing any agent software. Both are mature, both deduplicate, but UrBackup’s image backup support and easier setup give it the edge for the typical homelab.
How They Differ at the Architecture Level
These two tools solve backup differently, and understanding the architecture gap matters more than any feature table.
UrBackup is a true client-server system. You install a dedicated agent on every machine you want to protect. The agent monitors file changes in real time (via the NTFS change journal on Windows, inotify on Linux), so incremental backups finish fast — the server already knows what changed. The agent also handles VSS snapshots on Windows and LVM snapshots on Linux for consistent image-level backups of live disks. Communication runs over a custom protocol on TCP ports 55413-55415, and the server broadcasts on UDP port 35623 for automatic client discovery on the local network.
BackupPC is agentless. It connects to target machines using standard protocols — rsync over SSH, SMB/CIFS shares, tar over SSH, or rsyncd. No software needs to be installed on the client. This is a huge advantage when backing up servers you do not control or machines where installing software is not practical. The trade-off: BackupPC cannot do disk image backups, only file-level. It also cannot use filesystem-level change tracking, so it relies on rsync’s delta algorithm to determine what changed, which is slower for large file trees.
Deduplication works differently too. UrBackup deduplicates identical files across clients using hardlinks on the filesystem, plus optional support for BTRFS or ZFS snapshots for copy-on-write deduplication of image backups. BackupPC uses its own pooling engine — files are checksummed and stored in a content-addressed pool, with hardlinks pointing from each backup’s directory tree into the pool. BackupPC 4.x replaced the old per-file pooling with a more efficient rsync-based approach that deduplicates at the block level.
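The hardlink pooling idea both tools lean on is easy to see in miniature. This sketch uses made-up paths, not either tool's actual on-disk layout:

```shell
# Illustrative sketch of hardlink-based dedup pooling (paths are
# invented for the demo, not UrBackup's or BackupPC's real layout).
mkdir -p pool backups/host1/etc backups/host2/etc

# Store one copy of the content in a content-addressed pool...
printf 'same content\n' > pool/ab12cd

# ...and hardlink it into each backup's directory tree.
ln pool/ab12cd backups/host1/etc/motd
ln pool/ab12cd backups/host2/etc/motd

# Link count is 3: two backups reference the file, one copy on disk.
stat -c %h pool/ab12cd
```

Restores just read the backup tree like a normal directory, and deleting an old backup only frees disk space once a pool file's link count drops to zero references.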
Feature Comparison
| Feature | UrBackup | BackupPC |
|---|---|---|
| File backup | Yes | Yes |
| Disk image backup | Yes (VHD/VHDX/raw) | No |
| Client agent required | Yes (Windows, Linux, macOS) | No — uses rsync, SSH, SMB, tar |
| Auto-discovery (LAN) | Yes (UDP broadcast) | No — manual host configuration |
| Web interface | Yes (port 55414) | Yes (port 8080 via lighttpd) |
| Deduplication | Hardlinks + BTRFS/ZFS CoW | Content-addressed pool with hardlinks |
| Incremental strategy | Filesystem journal tracking | Rsync delta algorithm |
| Bare metal restore | Yes (bootable USB) | No — file-level only |
| Backup over internet | Yes (built-in internet mode) | Yes (via SSH tunnels) |
| Email notifications | Yes | Yes (SMTP) |
| User self-service restore | Yes (client UI + web) | Yes (web UI per-user view) |
| LDAP/AD authentication | No (web UI only) | Yes |
| Prometheus metrics | No | Yes (BackupPC 4.4.0+) |
| Scheduling | Per-client intervals | Per-host schedules with blackout periods |
| Compression | Optional | Yes (built-in, configurable) |
| Encryption in transit | Yes (custom protocol + TLS) | Yes (SSH) |
| License | AGPL-3.0 | GPL-3.0 |
| Server platforms | Windows, Linux, FreeBSD | Linux, macOS (Perl-based) |
| Active development | Yes (v2.5.35, 2024) | Slower (v4.4.0, June 2023) |
Docker Compose: UrBackup
UrBackup’s Docker image uses `network_mode: host` by default because the server needs to send UDP broadcasts for client discovery. If you only back up clients you configure manually (or remote clients over the internet), you can use standard port mapping instead. This config uses host networking for the simplest setup.
```yaml
services:
  urbackup:
    image: uroni/urbackup-server:2.5.35
    container_name: urbackup
    network_mode: host
    environment:
      # User/group ID for file ownership — match your host user
      PUID: "1000"
      PGID: "1000"
      # Timezone
      TZ: "America/New_York"
    volumes:
      # Database and server configuration
      - urbackup-data:/var/urbackup
      # Backup storage — point this to your large disk
      - /mnt/backups/urbackup:/backups
    restart: unless-stopped

volumes:
  urbackup-data:
```
Start it:
```shell
docker compose up -d
```
Open http://your-server-ip:55414 in your browser. There are no default credentials — the web UI starts completely open, so lock it down before exposing it: create an admin account under Settings → Users, or put a reverse proxy or firewall rules in front of it. Install the UrBackup client agent on each machine you want to back up. On the local network, clients auto-discover the server within minutes.
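If you go the reverse-proxy route, a minimal nginx vhost with basic auth is enough. This is a sketch, not UrBackup documentation: the hostname and htpasswd path are placeholders, and TLS is left out for brevity.

```nginx
# Hypothetical nginx vhost putting basic auth in front of the UrBackup web UI.
server {
    listen 80;
    server_name backup.example.org;               # placeholder hostname

    location / {
        auth_basic           "UrBackup";
        auth_basic_user_file /etc/nginx/htpasswd;  # create with htpasswd(1)
        proxy_pass           http://127.0.0.1:55414;
    }
}
```

With this in place, block direct access to port 55414 from outside the host so the proxy cannot be bypassed.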
Optional: Bridge Networking (No Auto-Discovery)
If you do not need LAN auto-discovery, use explicit port mapping:
```yaml
services:
  urbackup:
    image: uroni/urbackup-server:2.5.35
    container_name: urbackup
    ports:
      # Web interface
      - "55414:55414"
      # Backup data transfer
      - "55415:55415"
    environment:
      PUID: "1000"
      PGID: "1000"
      TZ: "America/New_York"
    volumes:
      - urbackup-data:/var/urbackup
      - /mnt/backups/urbackup:/backups
    restart: unless-stopped

volumes:
  urbackup-data:
```
With bridge networking, you must manually configure each client with the server’s IP address instead of relying on auto-discovery.
Docker Compose: BackupPC
BackupPC’s most maintained Docker image is adferrand/backuppc, which packages BackupPC 4.4.0 on Alpine Linux. The image is under 80 MB.
```yaml
services:
  backuppc:
    image: adferrand/backuppc:4.4.0-12
    container_name: backuppc
    ports:
      # Web UI
      - "8080:8080"
    environment:
      # User/group ID for the backuppc process
      BACKUPPC_UUID: "1000"
      BACKUPPC_GUID: "1000"
      # Web UI credentials — change the password before first run
      BACKUPPC_WEB_USER: "backuppc"
      BACKUPPC_WEB_PASSWD: "change-me-immediately"
      # Authentication method: file or ldap
      AUTH_METHOD: "file"
      # Enable HTTPS with a self-signed certificate
      USE_SSL: "false"
      # SMTP for email notifications
      SMTP_HOST: "smtp.example.org"
      SMTP_MAIL_DOMAIN: "example.org"
      # Timezone
      TZ: "America/New_York"
    volumes:
      # BackupPC configuration (config.pl, per-host configs)
      - backuppc-config:/etc/backuppc
      # SSH keys for connecting to backup targets
      - backuppc-home:/home/backuppc
      # Backup pool and logs — point to your large disk
      - /mnt/backups/backuppc:/data/backuppc
    restart: unless-stopped

volumes:
  backuppc-config:
  backuppc-home:
```
Start it:
```shell
docker compose up -d
```
Open http://your-server-ip:8080 and log in with the credentials you set in the environment variables. To back up remote hosts, generate an SSH key pair inside the container and distribute the public key:
```shell
# Generate an SSH key for the backuppc user (use /bin/sh — the Alpine-based
# image may not ship bash)
docker exec -it backuppc su -s /bin/sh backuppc -c \
  "mkdir -p ~/.ssh && ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519"

# Print the public key, then add it to ~/.ssh/authorized_keys on each target host
docker exec backuppc cat /home/backuppc/.ssh/id_ed25519.pub
```
Then add hosts through the web UI under “Edit Hosts” and configure each one to use rsync over SSH.
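Under the hood, each host's settings are a small Perl file in `/etc/backuppc/pc/` (the `backuppc-config` volume above); the web UI writes these for you, but you can also drop one in by hand. A minimal sketch for a hypothetical host named `web01` — the share paths and SSH user are examples, while `XferMethod`, `RsyncShareName`, and `RsyncSshArgs` are real BackupPC config variables:

```shell
# Write a minimal per-host override for a hypothetical target "web01".
# "pc" here stands in for /etc/backuppc/pc inside the container.
mkdir -p pc
cat > pc/web01.pl <<'EOF'
$Conf{XferMethod}     = 'rsync';
$Conf{RsyncShareName} = ['/etc', '/var/www'];
$Conf{RsyncSshArgs}   = ['-e', '$sshPath -l root'];
EOF
```

Anything not set in the per-host file falls back to the global defaults in `config.pl`, so overrides stay short.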
Performance and Resources
UrBackup is written in C++ and is efficient with CPU and memory. The server idles at around 50-100 MB of RAM with a handful of clients. During active backups, memory usage scales with the number of concurrent backup streams but rarely exceeds 500 MB even with dozens of clients. The client agent is lightweight — under 20 MB of RAM. The big performance advantage is incremental speed: because the agent tracks filesystem changes in real time, incremental file backups often complete in seconds for machines with minimal daily changes. On Windows, the Changed Block Tracking (CBT) driver — a paid add-on — lets incremental image backups read only changed disk sectors instead of rescanning the whole disk.
BackupPC is written in Perl, which uses more memory per process. The server typically consumes 200-500 MB of RAM at idle, scaling to 1+ GB with many active backup streams. CPU usage during backup is dominated by rsync compression and checksumming. BackupPC’s deduplication pool operations can be CPU-intensive during cleanup. Incremental performance depends entirely on rsync’s ability to scan the filesystem on the remote host, which gets slower as file counts grow into the millions. For a typical server with a few hundred thousand files, incremental backups take minutes rather than seconds.
| Resource | UrBackup | BackupPC |
|---|---|---|
| Idle RAM | 50-100 MB | 200-500 MB |
| Active RAM (10 clients) | 200-500 MB | 500 MB-1 GB |
| CPU during backup | Low (client does heavy lifting) | Medium (rsync + pool ops) |
| Disk overhead | Low (hardlinks/CoW) | Medium (pool metadata) |
| Incremental backup speed | Seconds (journal-based) | Minutes (rsync scan) |
| Language | C++ | Perl |
Use Cases
Choose UrBackup If…
- You want both file and disk image backups from a single tool
- You need bare metal restore capability (boot from USB, restore entire disk)
- Your backup targets are Windows workstations or desktops where you can install the agent
- You want automatic client discovery on your LAN — plug in a new machine and it starts backing up
- You want end-users to restore their own files without admin involvement
- Fast incremental backups matter (seconds, not minutes)
- You back up machines over the internet and want built-in support for that
Choose BackupPC If…
- You back up Linux servers where installing a client agent is undesirable or not permitted
- You already use rsync/SSH for file transfers and want backup tooling that fits that workflow
- You need LDAP authentication on the web interface
- You manage many hosts and want per-host scheduling with blackout periods
- You want Prometheus metrics for monitoring backup health in Grafana
- Agentless operation is a hard requirement (network appliances, embedded systems, NAS devices)
- You only need file-level backup, not disk images
Neither Is Ideal If…
- You want encrypted, deduplicated backups to cloud storage — look at Restic, BorgBackup, or Kopia instead
- You need a single-machine backup tool without a server component — Duplicati is simpler
- You want a backup orchestration layer on top of existing tools — Borgmatic wraps Borg with scheduling and notifications
Final Verdict
UrBackup wins for the typical self-hoster running a homelab. The combination of file backup, disk image backup, bare metal restore, automatic client discovery, and fast journal-based incrementals covers what most people actually need. The C++ server is lighter on resources, and the client agent takes care of all the hard parts (VSS snapshots, change tracking, consistent backups of open files). Setup is straightforward: deploy the server container, install the agent on your machines, and backups start automatically.
BackupPC wins in environments with many Linux servers or mixed infrastructure where installing agents is impractical. Its agentless model using standard protocols (rsync, SSH, SMB) means you can back up anything that exposes files over a network, including machines where you have SSH access but cannot install software. The LDAP integration and Prometheus metrics also make it a better fit for larger, more structured environments.
For a homelab with a mix of desktops, laptops, and a few servers, start with UrBackup. For a fleet of headless Linux servers, BackupPC is the more natural choice.