Self-Hosting Grafana Loki with Docker Compose
What Is Grafana Loki?
Loki is a log aggregation system from Grafana Labs that stores and queries logs without full-text indexing. Unlike Elasticsearch, which indexes every word in every log line, Loki only indexes log metadata (labels) and stores compressed log chunks. This makes it dramatically lighter on resources — perfect for self-hosters who want centralized logging without dedicating 16 GB of RAM to Elasticsearch.
The typical stack is Loki (storage + queries) + Alloy (log collection agent) + Grafana (visualization). Think of it as Prometheus, but for logs.
Official site: grafana.com/oss/loki
Prerequisites
- A Linux server (Ubuntu 22.04+ recommended)
- Docker and Docker Compose installed (guide)
- 2 GB of free RAM (minimum for the full stack)
- 20 GB of free disk space (grows with log retention)
- A domain name (optional, for remote access)
Docker Compose Configuration
This deploys the complete stack: Loki for storage, Alloy for log collection, and Grafana for visualization.
Create a project directory:
```shell
mkdir -p ~/loki-stack && cd ~/loki-stack
```
Create docker-compose.yml:
```yaml
services:
  loki:
    image: grafana/loki:3.6.7
    container_name: loki
    ports:
      - "3100:3100"
    volumes:
      - ./loki-config.yaml:/etc/loki/local-config.yaml:ro
      - loki-storage:/loki
    command: -config.file=/etc/loki/local-config.yaml
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3100/ready"]
      interval: 30s
      timeout: 10s
      retries: 3
    restart: unless-stopped

  alloy:
    image: grafana/alloy:v1.13.2
    container_name: alloy
    volumes:
      - ./alloy-config.alloy:/etc/alloy/config.alloy:ro
      - /var/log:/var/log:ro                          # host logs
      - /var/run/docker.sock:/var/run/docker.sock:ro  # Docker container logs
    command: run /etc/alloy/config.alloy
    depends_on:
      loki:
        condition: service_healthy
    restart: unless-stopped

  grafana:
    image: grafana/grafana:12.3.3
    container_name: grafana
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana
    environment:
      GF_SECURITY_ADMIN_USER: admin        # CHANGE THIS
      GF_SECURITY_ADMIN_PASSWORD: changeme # CHANGE THIS
    depends_on:
      loki:
        condition: service_healthy
    restart: unless-stopped

volumes:
  loki-storage:
  grafana-data:
```
Create loki-config.yaml:
```yaml
auth_enabled: false

server:
  http_listen_port: 3100

common:
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2024-01-01
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

limits_config:
  retention_period: 168h # 7 days — adjust to your needs
  max_query_lookback: 168h

compactor:
  working_directory: /loki/compactor
  compaction_interval: 10m
  retention_enabled: true
  retention_delete_delay: 2h
  retention_delete_worker_count: 150
  delete_request_store: filesystem # required when retention_enabled is true
```
Create alloy-config.alloy:
```alloy
// Discover running containers via the Docker socket
discovery.docker "containers" {
  host = "unix:///var/run/docker.sock"
}

// Collect Docker container logs
loki.source.docker "containers" {
  host       = "unix:///var/run/docker.sock"
  targets    = discovery.docker.containers.targets
  forward_to = [loki.write.local.receiver]
}

// Collect host syslog
loki.source.file "syslog" {
  targets = [
    {__path__ = "/var/log/syslog", job = "syslog"},
    {__path__ = "/var/log/auth.log", job = "authlog"},
  ]
  forward_to = [loki.write.local.receiver]
}

// Send everything to Loki
loki.write "local" {
  endpoint {
    url = "http://loki:3100/loki/api/v1/push"
  }
}
```
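The LogQL examples later in this guide filter on a `container` label. Depending on your Alloy version, `loki.source.docker` may not attach a friendly container name on its own; a `discovery.relabel` stage can map Docker's metadata label onto one. A sketch, assuming the standard `__meta_docker_container_name` metadata label (Docker prefixes the name with a slash, which the regex strips):

```alloy
// Map the Docker container name (e.g. "/nginx") onto a "container" label
discovery.relabel "containers" {
  targets = discovery.docker.containers.targets

  rule {
    source_labels = ["__meta_docker_container_name"]
    regex         = "/(.*)"
    target_label  = "container"
  }
}

// Then point loki.source.docker at the relabeled targets instead:
//   targets = discovery.relabel.containers.output
```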
Start the stack:
```shell
docker compose up -d
```
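Once the containers are up, you can sanity-check Loki from the shell before touching Grafana. A rough smoke test, assuming the port mapping from the compose file above (the `smoketest` job label is just an illustrative name):

```shell
# Loki readiness — prints "ready" once startup has finished
curl -s http://localhost:3100/ready

# Push one test log line; Loki's push API expects nanosecond epoch timestamps
TS=$(date +%s%N)
curl -s -H "Content-Type: application/json" -X POST http://localhost:3100/loki/api/v1/push \
  --data "{\"streams\":[{\"stream\":{\"job\":\"smoketest\"},\"values\":[[\"$TS\",\"hello from the shell\"]]}]}"

# Query it back
curl -s -G http://localhost:3100/loki/api/v1/query_range --data-urlencode 'query={job="smoketest"}'
```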
Initial Setup
1. Open Grafana at `http://your-server-ip:3000`
2. Log in with the credentials from your environment variables
3. Navigate to Connections → Data Sources → Add data source
4. Select Loki
5. Set the URL to `http://loki:3100`
6. Click Save & Test — you should see “Data source successfully connected”
7. Go to Explore and select the Loki data source to start querying logs
Querying with LogQL
LogQL is Loki’s query language — it works like PromQL but for logs.
| Query | What It Does |
|---|---|
| `{job="syslog"}` | All syslog entries |
| `{container="nginx"} \|= "error"` | Nginx container logs containing “error” |
| `{job="authlog"} \|~ "Failed password"` | Failed SSH login attempts |
| `rate({container="nginx"}[5m])` | Log volume per second over 5 minutes |
| `{job="syslog"} \| json \| level="error"` | Parse JSON logs, filter by level |
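LogQL also supports Prometheus-style aggregations over log streams. For example, to chart failed SSH logins per minute (building on the `authlog` job label configured earlier):

```logql
sum(count_over_time({job="authlog"} |~ "Failed password" [1m]))
```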
Configuration
Retention Period
Adjust retention_period in loki-config.yaml to control how long logs are kept:
```yaml
limits_config:
  retention_period: 720h # 30 days
```
After changing the config, restart Loki:
```shell
docker compose restart loki
```
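Loki serves its effective configuration over HTTP, which is a quick way to confirm the change took. Assuming the port mapping from the compose file:

```shell
curl -s http://localhost:3100/config | grep retention_period
```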
Alerting
Loki supports alerting rules that fire when log patterns match. Create a rules file and mount it into the Loki container. For most self-hosters, setting up alerts in Grafana is simpler — use the Grafana alerting UI with Loki as a data source.
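If you do want Loki-native rules, the ruler reads Prometheus-style rule files from the `rules_directory` configured earlier (`/loki/rules`). A minimal sketch — the `fake` tenant subdirectory is what Loki uses when `auth_enabled: false`, and the alert name and threshold are illustrative; you would also need a `ruler` block in the Loki config pointing at an Alertmanager for notifications to be delivered:

```yaml
# /loki/rules/fake/alerts.yaml
groups:
  - name: ssh
    rules:
      - alert: SshBruteForce
        expr: sum(count_over_time({job="authlog"} |~ "Failed password" [5m])) > 10
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "More than 10 failed SSH logins in 5 minutes"
```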
Why Loki Over Elasticsearch?
| Aspect | Loki | Elasticsearch |
|---|---|---|
| Indexing | Labels only (metadata) | Full-text (every word) |
| RAM usage | 512 MB – 2 GB | 8 – 16 GB minimum |
| Disk usage | Compressed chunks | Large Lucene indices |
| Query language | LogQL (Prometheus-like) | Query DSL (complex) |
| Learning curve | Low if you know Prometheus | Steep |
| Setup complexity | 3 containers | 5+ containers (ELK stack) |
| Best for | Self-hosters, small–medium logs | Enterprise full-text search |
Loki trades query power for efficiency. You can’t do arbitrary full-text search across all fields like Elasticsearch. Instead, you label your logs and filter by those labels, then grep within matching streams. For 95% of self-hosting log analysis, this is more than enough.
Backup
The critical data is in the loki-storage volume:
```shell
docker compose stop loki
# NOTE: Compose usually prefixes named volumes with the project directory
# (e.g. loki-stack_loki-storage) — check `docker volume ls` for the exact name.
docker run --rm -v loki-storage:/data -v "$(pwd)":/backup alpine tar czf /backup/loki-backup.tar.gz -C /data .
docker compose up -d loki
```
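Restoring is the reverse: stop Loki, unpack the archive into the volume, and start it again. A sketch assuming the backup file created above (as with the backup command, check `docker volume ls` for the exact volume name, since Compose may prefix it with the project directory):

```shell
docker compose stop loki
docker run --rm -v loki-storage:/data -v "$(pwd)":/backup alpine \
  sh -c "rm -rf /data/* && tar xzf /backup/loki-backup.tar.gz -C /data"
docker compose up -d loki
```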
Grafana dashboards and data source configs are in grafana-data. Back up both volumes.
See the Backup Strategy guide for automated approaches.
Troubleshooting
Loki shows “not ready” in healthcheck
Symptom: Grafana can’t connect to Loki. Container logs show readiness probe failures.
Fix: Loki needs time to initialize TSDB indices on first start. Wait 30–60 seconds. Check logs:
```shell
docker compose logs loki
```
No logs appearing in Grafana
Symptom: Data source connects but queries return nothing.
Fix: Check that Alloy is running and forwarding logs:
```shell
docker compose logs alloy
```
Verify the Docker socket is mounted (for container log collection) and /var/log permissions allow read access.
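You can also ask Loki directly which labels it has ingested; an empty list means nothing is arriving at all, which points at Alloy rather than Grafana. Assuming the default port mapping:

```shell
curl -s http://localhost:3100/loki/api/v1/labels
# and the values seen for one label:
curl -s http://localhost:3100/loki/api/v1/label/job/values
```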
Disk usage growing fast
Symptom: Loki storage volume consuming unexpected disk space.
Fix: Ensure retention_enabled: true and retention_period is set in your config. The compactor needs to run — check that compaction_interval is configured.
Resource Requirements
| Component | RAM (idle) | RAM (load) | CPU |
|---|---|---|---|
| Loki | 300–500 MB | 1–2 GB | 0.5–1 core |
| Alloy | 50–128 MB | 128–256 MB | 0.1–0.5 core |
| Grafana | 200–400 MB | 500 MB–1 GB | 0.5–1 core |
| Total | ~700 MB | ~2–3 GB | 1–2 cores |
For a homelab monitoring 10–20 containers, the idle footprint is well under 1 GB. Loki scales to millions of log lines per day on modest hardware.
Verdict
Loki is the best self-hosted logging solution for most people. It gives you centralized log aggregation, powerful queries, and Grafana integration at a fraction of the resource cost of Elasticsearch. If you’re already running Grafana and Prometheus for metrics, adding Loki for logs is the natural next step. If you need full-text search across massive log volumes and have 16+ GB of RAM to spare, Graylog or Elasticsearch are still the right tools — but most self-hosters don’t.