Mastodon: Self-Hosted Social Network Setup

Why Run Your Own Mastodon Instance?

Mastodon is the most widely used decentralized social network, federating with thousands of other servers via ActivityPub. Running your own instance gives you complete control over your data, your moderation rules, and your online identity. No algorithm decides what you see. No corporation can suspend your account. Your posts, your server, your rules.

Self-hosting Mastodon is more complex than most apps on this site. It runs five services (web, streaming, Sidekiq, PostgreSQL, Redis), requires working email, and needs regular maintenance. But for anyone serious about owning their social media presence, it’s worth the effort.

Prerequisites

  • A Linux server (Ubuntu 22.04+ recommended)
  • Docker and Docker Compose installed
  • 4 GB RAM minimum (2 GB possible with swap)
  • 20 GB of free disk space (grows with media cache)
  • A domain name with DNS configured
  • Working SMTP credentials (required for user registration)

Requirement | Minimum       | Recommended
CPU         | 2 cores       | 4 cores
RAM         | 2 GB + swap   | 4 GB
Disk        | 20 GB         | 50 GB+ (media storage)
Email       | SMTP required | Transactional service

Docker Compose Configuration

Create a project directory and the following docker-compose.yml:

services:
  db:
    image: postgres:14-alpine
    restart: unless-stopped
    shm_size: 256mb
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "mastodon"]
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - postgres-data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: mastodon
      POSTGRES_USER: mastodon
      POSTGRES_PASSWORD: change_this_password
    networks:
      - internal

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - redis-data:/data
    networks:
      - internal

  web:
    image: ghcr.io/mastodon/mastodon:v4.5.6
    restart: unless-stopped
    env_file: .env.production
    command: bundle exec puma -C config/puma.rb
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "wget -q --spider --proxy=off localhost:3000/health || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3
    ports:
      - "127.0.0.1:3000:3000"
    volumes:
      - mastodon-public:/mastodon/public/system
    networks:
      - internal
      - external

  streaming:
    image: ghcr.io/mastodon/mastodon-streaming:v4.5.6
    restart: unless-stopped
    env_file: .env.production
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "wget -q --spider --proxy=off localhost:4000/api/v1/streaming/health || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3
    ports:
      - "127.0.0.1:4000:4000"
    networks:
      - internal
      - external

  sidekiq:
    image: ghcr.io/mastodon/mastodon:v4.5.6
    restart: unless-stopped
    env_file: .env.production
    command: bundle exec sidekiq
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    volumes:
      - mastodon-public:/mastodon/public/system
    networks:
      - internal

networks:
  external:
  internal:
    internal: true

volumes:
  postgres-data:
  redis-data:
  mastodon-public:

Environment Configuration

Create .env.production alongside your docker-compose.yml:

# Federation
LOCAL_DOMAIN=social.example.com
SINGLE_USER_MODE=false

# PostgreSQL
DB_HOST=db
DB_PORT=5432
DB_NAME=mastodon
DB_USER=mastodon
DB_PASS=change_this_password

# Redis
REDIS_HOST=redis
REDIS_PORT=6379

# Secrets — generate with: docker compose run --rm web bundle exec rake secret
SECRET_KEY_BASE=generate_this_with_rake_secret
OTP_SECRET=generate_this_with_rake_secret

# Web Push VAPID keys — generate with: docker compose run --rm web bundle exec rake mastodon:webpush:generate_vapid_key
VAPID_PRIVATE_KEY=generate_this
VAPID_PUBLIC_KEY=generate_this

# SMTP (REQUIRED — Mastodon will not function without email)
SMTP_SERVER=smtp.mailgun.org
SMTP_PORT=587
SMTP_LOGIN=[email protected]
SMTP_PASSWORD=your_smtp_password
SMTP_AUTH_METHOD=plain
SMTP_OPENSSL_VERIFY_MODE=peer
SMTP_ENABLE_STARTTLS=auto
SMTP_FROM_ADDRESS=Mastodon <[email protected]>

# Active Record encryption (required on Mastodon 4.3+)
# Generate with: docker compose run --rm web bin/rails db:encryption:init
ACTIVE_RECORD_ENCRYPTION_DETERMINISTIC_KEY=generate_this
ACTIVE_RECORD_ENCRYPTION_KEY_DERIVATION_SALT=generate_this
ACTIVE_RECORD_ENCRYPTION_PRIMARY_KEY=generate_this

# Optional — S3-compatible object storage for media
# S3_ENABLED=true
# S3_BUCKET=mastodon-media
# AWS_ACCESS_KEY_ID=your_key
# AWS_SECRET_ACCESS_KEY=your_secret
# S3_REGION=us-east-1
# S3_HOSTNAME=s3.example.com

Generate Secrets

Before first launch, generate all required secrets:

# Generate SECRET_KEY_BASE and OTP_SECRET (run once per secret)
docker compose run --rm web bundle exec rake secret
docker compose run --rm web bundle exec rake secret

# Generate VAPID keys
docker compose run --rm web bundle exec rake mastodon:webpush:generate_vapid_key

# Generate the Active Record encryption keys (required on Mastodon 4.3+)
docker compose run --rm web bin/rails db:encryption:init

Copy the output values into .env.production.
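The placeholder database password should be replaced too. Any long random string works; one way to generate one locally, assuming openssl is available:

```shell
# Generate a 64-character hex string suitable for the database password
openssl rand -hex 32
```

Set the same value in both docker-compose.yml (POSTGRES_PASSWORD) and .env.production (DB_PASS).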

Initial Setup

# Create the database and run migrations
docker compose run --rm web bundle exec rake db:setup

# Pre-compile assets (may take a few minutes)
docker compose run --rm web bundle exec rake assets:precompile

# Start all services
docker compose up -d

# Create your admin account
docker compose run --rm web tootctl accounts create admin \
  --email [email protected] \
  --confirmed \
  --role Owner

The command will output a randomly generated password. Save it and log in at https://social.example.com.
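Before logging in, it’s worth confirming the stack came up healthy. These checks assume the compose file above with its loopback port bindings:

```shell
# All services should report "healthy" after a minute or two
docker compose ps

# The web service exposes a health endpoint on the loopback binding
curl -s http://127.0.0.1:3000/health
```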

Configuration

Admin Settings

Access the admin panel at /admin/settings:

  • Site Settings: Instance name, description, thumbnail, contact info
  • Registration: Open, approval-required, or closed
  • Content: Character limit, media attachment sizes, poll options
  • Federation: Domain blocks, allowed federation, authorized fetch

Key Environment Variables

Variable         | Purpose                                                       | Default
LOCAL_DOMAIN     | Your instance domain                                          | Required
SINGLE_USER_MODE | Disable registration, show one profile                        | false
MAX_TOOT_CHARS   | Character limit per post (forks such as glitch-soc only; vanilla Mastodon hardcodes 500) | 500
MAX_PINNED_TOOTS | Pinned posts per user (fork-specific)                         | 5
DEFAULT_LOCALE   | Instance default language                                     | en
AUTHORIZED_FETCH | Require HTTP signatures on federation requests                | false

Advanced Configuration

Full-Text Search with Elasticsearch

Add Elasticsearch to your docker-compose.yml:

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.27
    restart: unless-stopped
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    networks:
      - internal
    healthcheck:
      test: ["CMD-SHELL", "curl -s http://localhost:9200/_cluster/health | grep -vq '\"status\":\"red\"'"]
      interval: 30s
      timeout: 10s
      retries: 5

Add to .env.production:

ES_ENABLED=true
ES_HOST=elasticsearch
ES_PORT=9200

After starting Elasticsearch, populate the search index:

docker compose run --rm web tootctl search deploy

S3 Object Storage

For instances with heavy media usage, offload media to S3-compatible storage (MinIO, Backblaze B2, Wasabi):

S3_ENABLED=true
S3_BUCKET=mastodon-media
AWS_ACCESS_KEY_ID=your_key
AWS_SECRET_ACCESS_KEY=your_secret
S3_REGION=us-east-1
S3_HOSTNAME=s3.example.com
S3_PROTOCOL=https

Reverse Proxy

Mastodon requires a reverse proxy for HTTPS. The web UI runs on port 3000, streaming API on port 4000.

Caddy example:

social.example.com {
    handle /api/v1/streaming* {
        reverse_proxy localhost:4000
    }
    handle {
        reverse_proxy localhost:3000
    }
}

Nginx — see the official Mastodon nginx config or our Reverse Proxy Setup guide.

Backup

Critical data to back up:

  • PostgreSQL database: docker compose exec db pg_dump -U mastodon mastodon > mastodon_backup.sql
  • Media uploads: The mastodon-public volume (or S3 bucket)
  • Environment file: .env.production contains your secrets

# Full database backup
docker compose exec db pg_dump -U mastodon -Fc mastodon > backup_$(date +%F).dump

# Restore from backup
docker compose exec -T db pg_restore -U mastodon -d mastodon < backup_2026-02-22.dump

See our Backup Strategy guide for automated approaches.
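As a sketch of automation, a root crontab could run the dump nightly and prune old copies. The compose project path, backup directory, and 14-day retention below are all illustrative:

```shell
# /etc/crontab entries — adjust the project path and backup directory to your setup
0 3 * * * root cd /opt/mastodon && docker compose exec -T db pg_dump -U mastodon -Fc mastodon > /backups/mastodon_$(date +\%F).dump
30 3 * * * root find /backups -name 'mastodon_*.dump' -mtime +14 -delete
```

Note the escaped `\%` in the date format: cron treats a bare `%` as a newline.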

Troubleshooting

Federation Not Working

Symptom: Can’t find users on other instances, posts don’t federate. Fix: Check that your domain resolves correctly and that port 443 is reachable from the internet. Verify HTTPS is working. Check the Sidekiq queue at /sidekiq for failed ActivityPub delivery jobs. Running docker compose run --rm web tootctl accounts refresh --all will re-fetch remote account data.
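A quick way to confirm that other servers can discover your accounts is to query your own WebFinger endpoint, substituting your domain and a real username:

```shell
# Should return a JSON document whose "links" point at the account's actor URL
curl -s "https://social.example.com/.well-known/webfinger?resource=acct:[email protected]"
```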

Media Uploads Failing

Symptom: Users can’t upload images or videos. Fix: Check volume permissions. The Mastodon container runs as UID 991:

docker compose exec web ls -la /mastodon/public/system/
# If permission denied:
sudo chown -R 991:991 /var/lib/docker/volumes/mastodon_mastodon-public/_data/

Sidekiq Queue Backed Up

Symptom: Notifications delayed, federation slow, emails delayed. Fix: Check the Sidekiq dashboard at /sidekiq (admin access required). To scale up, raise the worker concurrency by editing the sidekiq service’s command in docker-compose.yml and recreating the container:

command: bundle exec sidekiq -c 25

Then run docker compose up -d sidekiq. Or add a second Sidekiq container for specific queues in your docker-compose.yml.
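A sketch of such a dedicated worker, reusing the image and env file from the compose file above (the service name and concurrency value are illustrative; push, pull, and ingress are Mastodon’s standard federation queues):

```yaml
  sidekiq-federation:
    image: ghcr.io/mastodon/mastodon:v4.5.6
    restart: unless-stopped
    env_file: .env.production
    # Handles only federation traffic; the original sidekiq service keeps the rest
    command: bundle exec sidekiq -c 10 -q push -q pull -q ingress
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    volumes:
      - mastodon-public:/mastodon/public/system
    networks:
      - internal
```

Keep exactly one Sidekiq process handling the scheduler queue; running the schedule in two processes duplicates periodic jobs.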

High Disk Usage from Remote Media

Symptom: Disk fills up with cached media from other instances. Fix: Mastodon caches remote media locally. Clean up old cached media:

# Remove remote media older than 7 days
docker compose run --rm web tootctl media remove --days=7

# Remove preview cards older than 14 days
docker compose run --rm web tootctl preview_cards remove --days=14

Set this as a cron job for automatic cleanup.
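The cleanup can be scheduled like so (crontab syntax; the project path is illustrative):

```shell
# Weekly, Sunday 04:00 — prune cached remote media and preview cards
0 4 * * 0 cd /opt/mastodon && docker compose run --rm web tootctl media remove --days=7
30 4 * * 0 cd /opt/mastodon && docker compose run --rm web tootctl preview_cards remove --days=14
```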

500 Errors After Upgrade

Symptom: Internal server errors after pulling a new image version. Fix: Run database migrations and asset precompilation:

docker compose run --rm web bundle exec rake db:migrate
docker compose run --rm web bundle exec rake assets:precompile
docker compose up -d
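On larger instances you can reduce upgrade downtime by splitting migrations into pre- and post-deployment phases, as Mastodon’s release notes describe. A sketch of that flow, assuming you have already bumped the image tags in docker-compose.yml:

```shell
# 1. Pull the new images
docker compose pull

# 2. Run pre-deployment migrations with the new image while the old version still serves
docker compose run --rm -e SKIP_POST_DEPLOYMENT_MIGRATIONS=true web bundle exec rake db:migrate

# 3. Restart on the new version
docker compose up -d

# 4. Run post-deployment migrations
docker compose run --rm web bundle exec rake db:migrate
```

Always read the release notes for the specific version; some upgrades have additional steps.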

Resource Requirements

Resource  | Single User | Small Instance (50 users) | Medium Instance (500+ users)
RAM       | 1.5 GB      | 3-4 GB                    | 8+ GB
CPU       | 2 cores     | 2-4 cores                 | 4-8 cores
Disk      | 10 GB       | 30-50 GB                  | 100+ GB
Bandwidth | 5 GB/month  | 50 GB/month               | 200+ GB/month

Mastodon is resource-hungry. The five services (web, streaming, Sidekiq, PostgreSQL, Redis) add up. Plan for media storage growth — federated media caching is the biggest disk consumer.

Verdict

Mastodon is the gold standard for self-hosted social networking. It has the largest user base in the fediverse, the most polished clients (official iOS and Android apps), and the most active development. If you want to own your social media presence and participate in the fediverse, Mastodon is the most capable option.

Choose Mastodon if you want a fully-featured Twitter/X alternative with mobile apps, a huge federation network, and active development. You need a server with at least 4 GB RAM and you’re comfortable with Docker.

Look elsewhere if you want something lightweight. Mastodon is a heavy application. GoToSocial gives you fediverse participation with a fraction of the resources. Pleroma is another lighter alternative that’s still API-compatible with many Mastodon clients.

FAQ

Can I use Mastodon apps with my own instance?

Yes. All official and third-party Mastodon clients (Ivory, Ice Cubes, Megalodon, Tusky, etc.) work with any Mastodon instance. Just enter your instance domain when logging in.

How do I migrate my account from another instance?

Mastodon supports account migration. On your old instance, set an alias pointing to your new account. On your new instance, initiate the move. Followers transfer automatically. Posts do not migrate — only followers and block/mute lists.

What’s the maintenance burden?

Moderate. Plan for: weekly media cleanup (cron job), monthly version updates, occasional Sidekiq queue monitoring. The biggest ongoing task is moderation if you run a public instance.

Can I run a single-user instance?

Yes. Set SINGLE_USER_MODE=true in .env.production. The landing page becomes your profile. This is a popular setup for personal fediverse presence.

How much does it cost to run?

A single-user instance runs on a $5-10/month VPS. A community instance for 50+ users needs $20-40/month. The main cost driver is disk space for media storage — consider S3-compatible storage for large instances.
