Self-Hosting motionEye with Docker Compose

Unlike Frigate which focuses on AI-powered object detection, motionEye takes a simpler approach: it wraps the battle-tested motion daemon in a clean web interface and gets out of your way. Add cameras, configure motion detection thresholds, view live feeds, and manage recordings — all from a browser.

motionEye works well for straightforward surveillance needs: monitoring a front door, watching a pet, or keeping an eye on a workshop. No GPU required, no machine learning models, no Coral TPU. Just motion detection that works.

Prerequisites

  • A Linux server (Ubuntu 22.04+ recommended)
  • Docker and Docker Compose installed (guide)
  • At least 512 MB of free RAM (budget roughly 200-300 MB per camera)
  • Storage space for recordings (varies by camera count and retention)
  • IP cameras with RTSP/MJPEG streams, or USB webcams

Updated March 2026: Verified with latest Docker images and configurations.

Docker Compose Configuration

Create a docker-compose.yml file:

services:
  motioneye:
    image: ghcr.io/motioneye-project/motioneye:0.43.1
    container_name: motioneye
    restart: unless-stopped
    ports:
      - "8765:8765"
    environment:
      - TZ=UTC
    volumes:
      - motioneye-config:/etc/motioneye
      - motioneye-recordings:/var/lib/motioneye
      - /etc/localtime:/etc/localtime:ro
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8765"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 15s

volumes:
  motioneye-config:
  motioneye-recordings:

Start the stack:

docker compose up -d

No database, no cache, no dependencies. motionEye runs as a single container.
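Once the container is up, a quick sanity check confirms it is healthy and the UI is responding (assumes curl is available on the host):

```shell
# Show the container and its reported health status
docker compose ps motioneye

# The web UI should answer with HTTP 200 on the published port
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8765
```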

Initial Setup

  1. Open http://your-server:8765 in your browser
  2. Log in with username admin and blank password
  3. Immediately set an admin password: click the person icon (top-left) → set Admin Password
  4. Optionally create a surveillance user with view-only access

Adding Cameras

Click the dropdown in the top-left → Add Camera:

  • Network Camera: Enter the RTSP or MJPEG URL from your IP camera
    • Example RTSP: rtsp://user:[email protected]:554/stream1
    • Example MJPEG: http://192.168.1.100:8080/video
  • Local Camera: If you passed through a USB webcam via devices: in Docker Compose
  • Remote motionEye Camera: Connect to another motionEye instance
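If you are unsure of a camera's stream URL, it helps to probe it from the Docker host before adding it. A sketch using ffprobe (ships with ffmpeg; the URL and credentials are placeholders matching the example above):

```shell
# Print the stream's codec and resolution if the URL is reachable
# and the credentials are valid; errors out otherwise
ffprobe -v error -rtsp_transport tcp \
  -show_entries stream=codec_name,width,height \
  -of default=noprint_wrappers=1 \
  "rtsp://user:[email protected]:554/stream1"
```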

Configuration

| Setting | Location | Description |
|---|---|---|
| Admin password | User icon → Admin Password | Set immediately after first login |
| Motion detection | Camera → Motion Detection | Sensitivity, threshold, mask areas |
| Recording mode | Camera → File Storage | Continuous, motion-triggered, or both |
| Retention | Camera → File Storage → Preserve | Auto-delete recordings after N days |
| Notifications | Camera → Motion Notifications | Email, webhook, or command on detection |
| Streaming | Camera → Video Streaming | Enable/disable live streaming port |

Motion Detection Tuning

The default motion sensitivity works for most setups. Fine-tune with:

  • Frame Change Threshold: Lower = more sensitive (more false positives). Default is usually fine.
  • Noise Level: Increase if wind/lighting changes trigger false alerts
  • Mask: Draw rectangles over areas to ignore (trees, busy roads)
  • Minimum Motion Frames: Require N consecutive frames of motion before triggering
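Behind the scenes, these UI controls map onto options in the motion daemon's per-camera configuration, stored in the config volume (typically a file like /etc/motioneye/camera-1.conf). A sketch of the corresponding options, with illustrative values only:

```
# Changed-pixel count needed to register motion (Frame Change Threshold in the UI)
threshold 1500

# Tolerance for pixel noise; raise if lighting flicker causes false events
noise_level 32

# Consecutive frames of motion required before an event fires
minimum_motion_frames 3
```

Editing these by hand is rarely necessary — the web UI writes them for you — but knowing where they live helps when debugging or scripting bulk changes.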

Recording Storage

motionEye saves recordings to /var/lib/motioneye inside the container. With the volume mount, these persist on your host. Plan storage based on:

| Cameras | Resolution | Retention | Estimated Storage |
|---|---|---|---|
| 1 | 720p | 7 days | ~50 GB |
| 1 | 1080p | 7 days | ~100 GB |
| 4 | 1080p | 7 days | ~400 GB |
| 4 | 1080p | 30 days | ~1.7 TB |
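These estimates follow directly from stream bitrate. A back-of-the-envelope helper (the ~1.3 Mbps figure is an assumption chosen to roughly match the table, not a motionEye default — substitute your camera's actual bitrate):

```python
def estimate_storage_gb(bitrate_mbps: float, hours_per_day: float, days: int) -> float:
    """Approximate recording footprint: bitrate -> bytes/sec -> total GB."""
    bytes_per_sec = bitrate_mbps * 1_000_000 / 8
    return bytes_per_sec * hours_per_day * 3600 * days / 1e9

# One 1080p camera at ~1.3 Mbps, recording continuously for a week:
print(round(estimate_storage_gb(1.3, 24, 7)))  # ≈ 98 GB, in line with the ~100 GB row
```

Motion-triggered recording shrinks the hours_per_day factor dramatically, which is why real-world usage often lands well under the continuous estimates.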

Advanced Configuration

USB Webcam Passthrough

To use a USB webcam directly, add a devices mapping to the motioneye service in your docker-compose.yml:

services:
  motioneye:
    devices:
      - /dev/video0:/dev/video0

WebDAV or S3 Upload

motionEye can automatically upload recordings to a remote server or S3-compatible storage. Configure in Camera → Upload Media Files. This offloads storage from your local disk and provides off-site backup.

Running Behind a Reverse Proxy

motionEye serves its web UI on port 8765. For HTTPS, proxy with Nginx Proxy Manager or Caddy pointing to motioneye:8765.
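With Caddy, this is a one-block Caddyfile. A minimal sketch, assuming Caddy and motionEye share a Docker network and cameras.example.com stands in for your real domain:

```
cameras.example.com {
    reverse_proxy motioneye:8765
}
```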

See Reverse Proxy Setup for detailed instructions.

Backup

Two volumes matter:

  • motioneye-config — camera configurations, user accounts, motion settings
  • motioneye-recordings — video files (back up selectively based on retention needs)

Archive the config volume with a throwaway container:

docker run --rm -v motioneye-config:/data -v $(pwd):/backup alpine tar czf /backup/motioneye-config.tar.gz -C /data .
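A matching restore sketch (assumes the archive produced by the command above is in the current directory):

```shell
# Stop the stack, repopulate the config volume from the archive, restart
docker compose down
docker run --rm -v motioneye-config:/data -v "$(pwd)":/backup alpine \
  sh -c "cd /data && tar xzf /backup/motioneye-config.tar.gz"
docker compose up -d
```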

See Backup Strategy for a complete approach.

Troubleshooting

Camera Shows “Error” or Black Screen

Symptom: Camera added but shows error icon or black frame. Fix: Verify the RTSP/MJPEG URL works outside motionEye. Test with VLC: vlc rtsp://user:pass@ip:554/stream. Common issues: wrong port, incorrect credentials, camera firewall blocking the server.

High CPU Usage with Multiple Cameras

Symptom: CPU pegged at 100% with 3+ cameras. Fix: Reduce resolution to 720p in camera settings. Lower the frame rate (10 fps is sufficient for surveillance). Disable motion detection on cameras that don’t need it. Each camera’s motion detection runs a separate motion process.
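If motion processing still starves other workloads on the host, Compose can cap the container's CPU time. A sketch to merge into the service definition (the limit value is an assumption — tune it for your hardware):

```yaml
services:
  motioneye:
    # Cap the container at two CPU cores' worth of time
    cpus: "2.0"
```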

Recordings Missing or Not Saving

Symptom: Motion events detected but no video files saved. Fix: Check that File Storage is enabled for the camera (Camera → File Storage → toggle on). Verify the recordings volume has free space: docker exec motioneye df -h /var/lib/motioneye.

Can’t Access Web UI After Password Set

Symptom: Locked out after setting admin password. Fix: Delete the config volume to reset: docker compose down && docker volume rm motioneye-config && docker compose up -d. You’ll need to reconfigure cameras.

Resource Requirements

| Resource | 1 Camera | 4 Cameras |
|---|---|---|
| RAM | 200-300 MB | 600-1000 MB |
| CPU | 1 core (10-20%) | 2-4 cores |
| Disk | 50-100 GB/week | 200-400 GB/week |

Motion detection is CPU-bound. Each camera runs its own motion daemon process. Plan CPU allocation accordingly — a Raspberry Pi 4 handles 2-3 cameras at 720p comfortably.

Verdict

motionEye is the right choice when you want simple, reliable surveillance without the complexity of AI-powered NVRs. It runs on minimal hardware (Raspberry Pi included), needs no GPU, and the web UI covers all the basics: live view, motion detection, recording, and notifications.

For AI object detection (person vs car vs animal), use Frigate instead — it requires more hardware but eliminates false positives from trees and shadows. For a middle ground, Shinobi offers more features than motionEye without Frigate’s hardware requirements.

FAQ

Can motionEye do person detection?

No. motionEye uses pixel-change motion detection only — it can’t distinguish between a person, a car, or a tree moving in the wind. For AI-powered detection, use Frigate with a Google Coral TPU.

Does motionEye work with Raspberry Pi cameras?

Yes. The Docker image supports ARM architectures. For the Raspberry Pi camera module, you may need to expose /dev/vchiq or use the legacy camera stack. USB webcams are simpler to set up.

How many cameras can motionEye handle?

On a modern x86 server (4 cores, 4 GB RAM), 4-6 cameras at 720p with motion detection. On a Raspberry Pi 4, 2-3 cameras at 720p. Beyond that, consider Frigate or ZoneMinder which handle large camera counts more efficiently.
