How to Self-Host Garage with Docker Compose

What Is Garage?

Garage is a lightweight, self-hosted, S3-compatible object storage system written in Rust. It’s designed for small to medium deployments that need S3 API compatibility without the operational complexity of MinIO or Ceph. Garage supports multi-node replication and static website hosting, and is light enough to run on a Raspberry Pi.

Official site: garagehq.deuxfleurs.fr

Prerequisites

  • A Linux server (Ubuntu 22.04+ recommended)
  • Docker and Docker Compose installed
  • 1 GB of free disk space (plus storage for your data)
  • 512 MB of RAM (minimum)
  • A domain name (optional, for web hosting and S3 endpoint)

Docker Compose Configuration

Create a garage.toml configuration file first:

metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"
db_engine = "sqlite"

replication_factor = 1

rpc_bind_addr = "[::]:3901"

# Generate with: openssl rand -hex 32
rpc_secret = "CHANGE_THIS_generate_with_openssl_rand_hex_32"

[s3_api]
s3_region = "garage"
api_bind_addr = "[::]:3900"
root_domain = ".s3.garage.localhost"

[s3_web]
bind_addr = "[::]:3902"
root_domain = ".web.garage.localhost"

[k2v_api]
api_bind_addr = "[::]:3904"

[admin]
api_bind_addr = "[::]:3903"
# Generate with: openssl rand -hex 32
admin_token = "CHANGE_THIS_generate_admin_token"
metrics_token = "CHANGE_THIS_generate_metrics_token"

Generate the required secrets:

# RPC secret (shared between cluster nodes)
openssl rand -hex 32

# Admin token
openssl rand -hex 32

# Metrics token
openssl rand -hex 32
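The three invocations can also be combined into one snippet that labels each secret as it is generated (a convenience sketch; paste the values into the matching garage.toml fields):

```shell
# Generate all three secrets in one go
RPC_SECRET=$(openssl rand -hex 32)
ADMIN_TOKEN=$(openssl rand -hex 32)
METRICS_TOKEN=$(openssl rand -hex 32)

# Print them labeled, ready to copy into garage.toml
echo "rpc_secret:    $RPC_SECRET"
echo "admin_token:   $ADMIN_TOKEN"
echo "metrics_token: $METRICS_TOKEN"
```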

Create a docker-compose.yml file:

services:
  garage:
    container_name: garage
    image: dxflrs/garage:v2.2.0
    restart: unless-stopped
    ports:
      - "3900:3900"   # S3 API
      - "3901:3901"   # RPC (only expose if running multi-node)
      - "3902:3902"   # S3 web hosting
      - "3903:3903"   # Admin API
    volumes:
      - ./garage.toml:/etc/garage.toml:ro
      - garage-meta:/var/lib/garage/meta
      - garage-data:/var/lib/garage/data
    networks:
      - garage

networks:
  garage:
    driver: bridge

volumes:
  garage-meta:
  garage-data:

Start the stack:

docker compose up -d

Initial Setup

After starting the container, configure the cluster layout:

# Get the node ID
docker compose exec garage /garage status

# Assign a zone and a storage capacity (e.g. 100GB) to the node
# Replace NODE_ID with the actual ID from the status command
docker compose exec garage /garage layout assign NODE_ID -z dc1 -c 100GB

# Apply the layout
docker compose exec garage /garage layout apply --version 1

Create an API Key

docker compose exec garage /garage key create my-app-key

This outputs an access key ID and secret key. Save both — you’ll need them for S3 clients.

Create a Bucket

# Create a bucket
docker compose exec garage /garage bucket create my-bucket

# Grant read-write access to your key
docker compose exec garage /garage bucket allow --read --write --owner my-bucket --key my-app-key

Test With AWS CLI

aws s3 ls --endpoint-url http://your-server-ip:3900 \
  --region garage

Configure credentials with aws configure using the key ID and secret from the creation step.
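A profile-based setup keeps the Garage credentials separate from any real AWS ones. A sketch, with the key ID, secret, server address, and bucket name as placeholders:

```shell
# Store the Garage credentials in a dedicated AWS CLI profile
aws configure set aws_access_key_id GK_YOUR_KEY_ID --profile garage
aws configure set aws_secret_access_key YOUR_SECRET_KEY --profile garage
aws configure set region garage --profile garage

# Upload a test file and list the bucket contents
echo "hello from garage" > test.txt
aws s3 cp test.txt s3://my-bucket/ \
  --endpoint-url http://your-server-ip:3900 --profile garage
aws s3 ls s3://my-bucket/ \
  --endpoint-url http://your-server-ip:3900 --profile garage
```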

Configuration

Using a Custom S3 Endpoint Domain

For production, point a domain at your server and update garage.toml:

[s3_api]
s3_region = "garage"
api_bind_addr = "[::]:3900"
root_domain = ".s3.example.com"

This enables virtual-hosted-style bucket URLs like my-bucket.s3.example.com.

Static Website Hosting

Garage can serve S3 buckets as static websites:

# Enable website hosting for a bucket
docker compose exec garage /garage bucket website --allow my-bucket

Upload an index.html and access via my-bucket.web.garage.localhost:3902.
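Publishing a one-page site could look like this (endpoint and bucket name follow the defaults above; the curl call uses an explicit Host header, which is how Garage resolves the bucket when you haven’t set up wildcard DNS):

```shell
# Upload an index page to the bucket
echo '<h1>Hello from Garage</h1>' > index.html
aws s3 cp index.html s3://my-bucket/ \
  --endpoint-url http://your-server-ip:3900 --region garage

# Fetch it through the web endpoint on port 3902
curl -H "Host: my-bucket.web.garage.localhost" http://your-server-ip:3902/
```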

Replication (Multi-Node)

For redundancy, run Garage on multiple servers. Set replication_factor = 3 in garage.toml, use the same rpc_secret on all nodes, and connect them:

docker compose exec garage /garage node connect OTHER_NODE_ID@other-server:3901
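A sketch of the per-node garage.toml settings for a three-node cluster (hostnames are placeholders; rpc_public_addr is the address at which each node is reachable by its peers):

```toml
replication_factor = 3

rpc_bind_addr = "[::]:3901"
# Address other nodes use to reach this node
rpc_public_addr = "node1.example.com:3901"
# Must be identical on all nodes in the cluster
rpc_secret = "SAME_64_CHAR_HEX_SECRET_ON_ALL_NODES"
```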

Reverse Proxy

For S3 API access behind a reverse proxy, proxy to port 3900. Ensure your proxy passes the Host header correctly — S3 virtual-hosted-style requests depend on it.
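As a sketch, a minimal Nginx server block for the S3 endpoint might look like this (domain and upstream address are placeholders; TLS is omitted for brevity):

```nginx
server {
    listen 80;
    # The wildcard covers virtual-hosted-style bucket subdomains
    server_name s3.example.com *.s3.example.com;

    # Don't cap upload sizes at the proxy
    client_max_body_size 0;

    location / {
        proxy_pass http://127.0.0.1:3900;
        # Garage needs the original Host header to resolve the bucket
        proxy_set_header Host $host;
    }
}
```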

For Nginx Proxy Manager, create a proxy host pointing to port 3900. Set the domain to s3.example.com.

See Reverse Proxy Setup for details.

Backup

Garage stores metadata and data separately:

  • Meta volume — SQLite database with bucket/key/object metadata
  • Data volume — actual object data blocks

To back up a single-node deployment, stop the stack and archive both volumes (Docker prefixes the volume names with the Compose project name, garage here):

docker compose stop
tar czf garage-backup-$(date +%Y%m%d).tar.gz \
  $(docker volume inspect garage_garage-meta --format '{{ .Mountpoint }}') \
  $(docker volume inspect garage_garage-data --format '{{ .Mountpoint }}')
docker compose start

For multi-node deployments with replication, losing one node doesn’t lose data — but back up metadata regularly regardless.

See Backup Strategy for a comprehensive approach.

Troubleshooting

“NoSuchBucket” Error

Symptom: S3 clients return NoSuchBucket even though the bucket exists. Fix: Check that the bucket name in the request matches exactly (case-sensitive). Also verify the key has --read and --write permissions on the bucket.

Layout Not Applied

Symptom: garage status shows “no current cluster layout.” Fix: You must assign capacity and apply the layout after first start:

docker compose exec garage /garage layout assign NODE_ID -z dc1 -c 100GB
docker compose exec garage /garage layout apply --version 1

High Memory Usage

Symptom: Garage uses more memory than expected. Fix: SQLite is the default database engine and works well for most deployments. If you have millions of objects, consider LMDB (db_engine = "lmdb"), which generally performs better at that scale.

Slow Uploads

Symptom: S3 PUT operations are slower than expected. Fix: Garage chunks objects into blocks (1 MB by default, configurable via block_size in garage.toml). For large files, ensure the network between client and server isn’t the bottleneck. Consider using multipart uploads for files over 100 MB.
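If you use the AWS CLI as your client, its multipart behavior can be tuned in its local config; a sketch with illustrative values:

```shell
# Use multipart uploads for anything over 100 MB, in 32 MB parts
aws configure set default.s3.multipart_threshold 100MB
aws configure set default.s3.multipart_chunksize 32MB
```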

Resource Requirements

  • RAM: 100-200 MB idle, scales with number of concurrent connections and object count
  • CPU: Low — Rust binary is efficient
  • Disk: Minimal for the application. Data storage depends on your usage.

Verdict

Garage is the best self-hosted S3-compatible storage for small to medium deployments, and a compelling lightweight alternative to MinIO. The Rust implementation is memory-efficient, the multi-node replication works well, and the S3 API compatibility means it integrates with any tool that speaks S3.

Use Garage when you need S3 API compatibility for backups, application storage, or static site hosting. For file sync and sharing with a web UI, look at Nextcloud or Seafile instead.

Frequently Asked Questions

Can I use Garage as a backup target for Restic or Borg?

Yes. Restic supports S3 backends natively. Point it at your Garage endpoint. Borg doesn’t support S3 directly, but you can use rclone as a transport layer.
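A minimal Restic workflow against Garage could look like this (endpoint, bucket, and credentials are placeholders from the key-creation step; Restic encrypts everything client-side before upload):

```shell
# Garage credentials created with `garage key create`
export AWS_ACCESS_KEY_ID=GK_YOUR_KEY_ID
export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_KEY
export RESTIC_PASSWORD=choose-a-strong-password

# Initialize a repository in a Garage bucket, then back up a directory
restic -r s3:http://your-server-ip:3900/my-backups init
restic -r s3:http://your-server-ip:3900/my-backups backup /home/user/documents
```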

Is Garage a drop-in MinIO replacement?

For most S3 API operations, yes. Garage supports the core S3 API (GET, PUT, DELETE, list, multipart upload). Some advanced features like S3 Select or bucket notifications aren’t supported.

How does Garage compare to SeaweedFS?

Garage is simpler to deploy and more lightweight. SeaweedFS supports more features (FUSE mount, HDFS API, Kafka integration) but requires more resources. For pure S3 storage on modest hardware, Garage wins.

Can I run Garage on a Raspberry Pi?

Yes. Garage’s ARM64 Docker image works on Raspberry Pi 4/5. With 1 GB of RAM allocated, it handles personal storage workloads well.

Does Garage support encryption at rest?

Not natively. Garage stores objects as-is on disk. For encryption at rest, either use client-side encryption (most S3 clients and tools like Restic encrypt before uploading) or use an encrypted filesystem (LUKS) on the storage volume. Client-side encryption is preferred because it means the storage server never sees unencrypted data.

Can I use Garage with Terraform or Pulumi?

Yes. Any tool that supports the S3 API as a backend can use Garage. Terraform’s S3 backend works with Garage by setting the endpoint, region, and skip_credentials_validation options. This lets you store Terraform state on your own infrastructure instead of AWS.
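A sketch of a Terraform backend block pointing at Garage (Terraform 1.6+ syntax; bucket name and address are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket = "terraform-state"
    key    = "prod/terraform.tfstate"
    region = "garage"

    # Point the backend at Garage instead of AWS
    endpoints = {
      s3 = "http://your-server-ip:3900"
    }

    # Skip AWS-specific checks that Garage cannot answer
    skip_credentials_validation = true
    skip_region_validation      = true
    skip_requesting_account_id  = true
    skip_metadata_api_check     = true
    use_path_style              = true
  }
}
```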

How does Garage handle disk failures in multi-node setups?

With replication_factor = 3, Garage stores three copies of each data block across different nodes. If one node fails, the remaining copies serve requests. Garage automatically re-replicates data to restore the replication factor when a new node is added to replace the failed one. Single-node deployments have no redundancy — use regular backups.
