Restic: Repository Locked — Fix
The Problem
When running a Restic backup, you see:
Fatal: unable to create lock in backend: repository is already locked by PID xxxxx on hostname by user (UID xxx, GID xxx)
Or:
Fatal: unable to create lock in backend: repository is already locked exclusively by PID xxxxx
This happens when Restic detects an existing lock file in the repository. Only one Restic operation can modify a repository at a time — the lock prevents concurrent writes that could corrupt data.
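Restic stores each lock as a small file under the repository's locks/ directory, and you can inspect them before deciding anything. A quick sketch (the repository path and lock ID are placeholders):

```shell
# List the IDs of lock files currently in the repository
restic -r /path/to/repo list locks

# Show a lock's details: PID, hostname, user, and whether it is exclusive
restic -r /path/to/repo cat lock <lock-id>
```

The PID and hostname in the lock tell you which machine to check for a running process.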
The Cause
Lock errors have three common causes:
| Cause | Frequency | Risk of Data Loss |
|---|---|---|
| Previous backup was interrupted (crash, timeout, killed process) | Most common | None (stale lock) |
| Another Restic process is actively running | Common | Yes if you force-unlock |
| NFS/SFTP connection dropped mid-operation | Occasional | None (stale lock) |
When Restic starts a backup, prune, or check operation, it creates a lock file in the repository. If the process terminates abnormally (OOM kill, power loss, Docker container restart, SSH timeout), the lock file remains — Restic has no way to clean it up automatically.
The Fix
Method 1: Check if Another Process Is Running (Do This First)
Before removing any lock, verify no Restic process is actually running:
# Check for running restic processes (the [r] trick stops grep from matching itself)
ps aux | grep '[r]estic'
# If running in Docker
docker ps | grep restic
docker exec restic-container ps aux | grep restic
If another process is running, wait for it to finish. Do not unlock the repository while a backup is in progress — this can cause corruption.
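If you want to script the wait instead of checking by hand, a minimal sketch (the process name, poll interval, and one-hour timeout are assumptions to adjust):

```shell
#!/bin/sh
# Poll for a running restic process and wait until it exits,
# giving up after TIMEOUT seconds.
TIMEOUT=3600
ELAPSED=0
while pgrep -x restic >/dev/null; do
    if [ "$ELAPSED" -ge "$TIMEOUT" ]; then
        echo "restic still running after ${TIMEOUT}s; not unlocking" >&2
        exit 1
    fi
    sleep 10
    ELAPSED=$((ELAPSED + 10))
done
echo "no restic process running; safe to unlock"
```

Run this before any unlock step so a live backup is never interrupted.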
Method 2: Remove Stale Locks
If no Restic process is running, the lock is stale and safe to remove:
# Remove all stale locks
restic -r /path/to/repo unlock
# For S3 backends
restic -r s3:s3.amazonaws.com/bucket unlock
# For SFTP backends
restic -r sftp:user@host:/backup unlock
The unlock command removes stale locks — locks left behind by processes that no longer exist. It does not modify backup data.
Method 3: Force Unlock (Use With Caution)
If a normal unlock doesn’t work (rare), use --remove-all:
restic -r /path/to/repo unlock --remove-all
This removes all locks including exclusive locks. Only use this when you’re certain no other process is accessing the repository.
Method 4: Manual Lock Removal (Last Resort)
If the unlock command itself fails, you can manually remove the lock file:
# For local repositories
ls /path/to/repo/locks/
rm /path/to/repo/locks/*
# For S3 backends
aws s3 rm s3://bucket/locks/ --recursive
After manual removal, run a repository check:
restic -r /path/to/repo check
Prevention
Clean Up Stale Cache Data
Add --cleanup-cache to your backup command to remove old local cache directories. This keeps the local cache tidy (it does not remove repository locks):
restic backup /data --cleanup-cache
Use --retry-lock (Restic 0.16+)
Instead of failing immediately on a locked repository, Restic can wait and retry:
# Wait up to 2 hours for the lock to clear
restic backup /data --retry-lock 2h
This is the best approach for scheduled backups that might overlap — the second run waits for the first to finish instead of failing.
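For cron-driven backups, the flag goes straight into the crontab entry. A sketch, assuming an hourly job, a repository at /path/to/repo, and a log file path of your choosing:

```shell
# m h dom mon dow  command
0 * * * * restic -r /path/to/repo backup /data --retry-lock 2h >> /var/log/restic-backup.log 2>&1
```

If the previous run still holds the lock, the new run waits up to two hours instead of failing.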
Handle Timeouts in Docker
If running Restic in Docker with cron or a scheduler, ensure the container isn’t killed before the backup completes:
services:
  restic:
    image: restic/restic:0.17.3
    stop_grace_period: 30m  # Give backup time to finish on container stop
    # ...
Add Pre-Backup Lock Check to Scripts
#!/bin/bash
# Pre-backup stale lock cleanup (REPO holds the repository path or URL)
restic -r "$REPO" unlock 2>/dev/null || true
restic -r "$REPO" backup /data
Running unlock before every backup is safe — it only removes locks from dead processes. If a backup is actively running, unlock won’t remove its lock.