Matrix Synapse: High Memory Usage — Fix

The Problem

Matrix Synapse consumes excessive memory — often 1-4 GB or more — even for small deployments with just a handful of users. Symptoms include:

  • Synapse process using 500 MB+ for a single-user server
  • Memory climbing over time until the server runs out of RAM
  • OOM (Out of Memory) kills restarting Synapse repeatedly
  • Slow response times as the server swaps to disk
A typical warning in the logs:

synapse        | WARNING - Memory usage is high: 2.1 GB RSS

A single-user Synapse deployment typically uses ~220 MB for Synapse + ~120 MB for PostgreSQL (~350 MB total). If you’re seeing significantly more, something needs tuning.

The Cause

Synapse’s memory usage is driven by three factors:

  1. Federated room state. For every room your server joins, Synapse downloads and caches the room's full state. Large public rooms like #matrix:matrix.org (200K+ members) carry enormous numbers of state events (at least one per member), so joining even a few popular rooms can add hundreds of megabytes.

  2. PostgreSQL shared memory. Docker gives each container only 64 MB of shared memory (/dev/shm) by default, while PostgreSQL's shared_buffers setting defaults to 128 MB or higher. With too little /dev/shm, PostgreSQL falls back to slower disk-based operations (and parallel queries can fail outright), which slows Synapse's queries and raises its memory pressure while results are held in flight.

  3. Cache factor. Synapse caches heavily by default. The SYNAPSE_CACHE_FACTOR controls the overall cache size multiplier. The default (0.5) is tuned for medium deployments but oversized for small ones.
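To make the multiplier concrete, here is a toy model of how a global cache factor scales per-cache sizes (the cache names and base sizes below are illustrative, not Synapse's real internals):

```python
# Toy model of Synapse's cache factor: every cache has a base size,
# and the global factor scales all of them uniformly.
# Base sizes here are invented for illustration only.
BASE_CACHE_SIZES = {
    "get_users_in_room": 100_000,  # hypothetical base entry count
    "stateGroupCache": 50_000,
}

def scaled_cache_sizes(global_factor: float) -> dict[str, int]:
    """Scale every cache's base size by the global factor."""
    return {name: int(size * global_factor)
            for name, size in BASE_CACHE_SIZES.items()}
```

With a factor of 0.2, every cache in this model holds a fifth of its base entries, which is roughly why lowering the factor shrinks resident memory.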

The Fix

Method 1: Tune Synapse Cache Factor

Add the SYNAPSE_CACHE_FACTOR environment variable to reduce in-memory caching:

services:
  synapse:
    image: matrixdotorg/synapse:v1.149.1
    environment:
      SYNAPSE_CONFIG_PATH: /data/homeserver.yaml
      SYNAPSE_CACHE_FACTOR: "0.2"
    # ... rest of config

For small deployments (1-10 users), 0.2 is appropriate. For 10-50 users, try 0.5 (the default). For larger deployments, 1.0 or higher.
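That sizing guidance can be captured in a small helper (recommended_cache_factor is a name invented here; the thresholds are the rules of thumb above):

```python
def recommended_cache_factor(active_users: int) -> float:
    """Suggest a SYNAPSE_CACHE_FACTOR from rough deployment size."""
    if active_users <= 10:
        return 0.2   # small deployment
    if active_users <= 50:
        return 0.5   # Synapse's default
    return 1.0       # larger deployments may go higher
```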

You can also set per-cache factors in homeserver.yaml:

caches:
  global_factor: 0.2
  per_cache_factors:
    get_users_in_room: 0.5
    get_users_who_share_room_with_user: 0.3

Method 2: Fix PostgreSQL Shared Memory

Set shm_size in your Docker Compose to match PostgreSQL’s shared_buffers:

services:
  db:
    image: postgres:16-alpine
    shm_size: "256m"
    environment:
      POSTGRES_USER: synapse
      POSTGRES_PASSWORD: change-this-password
      POSTGRES_DB: synapse
      POSTGRES_INITDB_ARGS: "--encoding=UTF-8 --lc-collate=C --lc-ctype=C"
    command: >
      postgres
      -c shared_buffers=256MB
      -c effective_cache_size=512MB
      -c work_mem=4MB
      -c maintenance_work_mem=64MB
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    restart: unless-stopped

The shm_size must be equal to or larger than shared_buffers. Without this, PostgreSQL performance degrades and Synapse compensates by using more memory.
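A quick way to sanity-check the rule (a standalone sketch; parse_size handles the unit suffixes Docker and PostgreSQL accept in the examples above):

```python
def parse_size(value: str) -> int:
    """Parse sizes like '256m', '256MB', or '1g' into bytes."""
    units = {"k": 1024, "m": 1024**2, "g": 1024**3}
    v = value.strip().lower().rstrip("b")  # '256MB' -> '256m'
    if v and v[-1] in units:
        return int(float(v[:-1]) * units[v[-1]])
    return int(v)  # bare number: already bytes

def shm_is_sufficient(shm_size: str, shared_buffers: str) -> bool:
    """True when Docker's /dev/shm can hold PostgreSQL's shared_buffers."""
    return parse_size(shm_size) >= parse_size(shared_buffers)
```

For the compose file above, shm_is_sufficient("256m", "256MB") holds; the Docker default of "64m" against the PostgreSQL default of "128MB" does not.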

Method 3: Leave Resource-Heavy Rooms

If you joined large public rooms for testing, leave them:

# List rooms your server is in
docker exec synapse curl -s -H "Authorization: Bearer YOUR_ADMIN_TOKEN" \
  http://localhost:8008/_synapse/admin/v1/rooms?limit=50 | python3 -m json.tool

# Leave a specific room and purge all local data
docker exec synapse curl -X DELETE \
  -H "Authorization: Bearer YOUR_ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"purge": true}' \
  "http://localhost:8008/_synapse/admin/v1/rooms/!roomid:matrix.org"

Leaving and purging #matrix:matrix.org alone can free 200-500 MB.
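To pick targets for purging, the List Rooms response can be filtered by state size (field names follow the admin API's room list response; the 10,000-event default threshold is an arbitrary cutoff chosen here):

```python
import json

def heavy_rooms(admin_response: str, min_state_events: int = 10_000) -> list[dict]:
    """Return rooms with large state, biggest first.

    admin_response is the JSON body returned by
    GET /_synapse/admin/v1/rooms, whose "rooms" array includes
    room_id, joined_members, and state_events for each room.
    """
    rooms = json.loads(admin_response)["rooms"]
    big = [r for r in rooms if r.get("state_events", 0) >= min_state_events]
    return sorted(big, key=lambda r: r["state_events"], reverse=True)
```

Feed it the output of the listing command above and purge the rooms at the top of the list first.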

Method 4: Enable the Synapse Compressor

The synapse-compress-state project compresses historical room state in the database, significantly reducing both disk and memory usage. Use its synapse_auto_compressor binary, which applies changes directly (the older synapse_compress_state binary only writes SQL to the file given by -o, so sending that output to /dev/null would discard the compression entirely):

docker run --rm --network host \
  ghcr.io/matrix-org/rust-synapse-compress-state \
  synapse_auto_compressor \
  -p "postgresql://synapse:password@localhost:5432/synapse" \
  -c 500 -n 100

Here -c is the number of state groups processed per chunk and -n the number of chunks per run.

Run this periodically (weekly) to keep state tables compact. It can reduce database size by 30-60%.
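One way to schedule that run (a crontab sketch; the command mirrors the example above, and the schedule, credentials, and flags are placeholders to adapt):

```cron
# Run the state compressor every Sunday at 03:00
0 3 * * 0  docker run --rm --network host ghcr.io/matrix-org/rust-synapse-compress-state synapse_auto_compressor -p "postgresql://synapse:password@localhost:5432/synapse" -c 500 -n 100
```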

Method 5: Set Docker Memory Limits

Prevent Synapse from consuming all available RAM with Docker resource limits:

services:
  synapse:
    image: matrixdotorg/synapse:v1.149.1
    deploy:
      resources:
        limits:
          memory: 1G
        reservations:
          memory: 512M
    # ... rest of config

This won’t fix the root cause, but it prevents Synapse from taking down other services on the same host. Note that deploy.resources limits are enforced by Docker Compose v2; the legacy docker-compose v1 ignored them outside Swarm mode.

Prevention

  • Start small. Don’t join large public rooms on a resource-constrained server. Federate with smaller community rooms first.
  • Monitor memory. Use docker stats or a monitoring stack (Uptime Kuma, Beszel) to track Synapse memory over time.
  • Run the compressor regularly. Schedule synapse_auto_compressor as a weekly or monthly cron job to prevent state table bloat.
  • Keep Synapse updated. Memory leaks are fixed regularly — v1.147.0 fixed a looping call leak. Pin to the latest stable version.
  • Consider Conduit or Dendrite for very resource-constrained environments. These alternative Matrix homeservers use 50-100 MB but lack some Synapse features.
Synapse Version   Key Memory Fix
v1.147.0          Fixed memory leak from looping calls
v1.145.0+         Improved cache eviction under pressure
v1.140.0+         Jemalloc embedded in Docker image (reduces fragmentation)
