Self-Host Viseron NVR with Docker Compose
A Different Kind of NVR
Viseron is a self-hosted NVR that uses AI computer vision to make your security cameras smart. Object detection, motion detection, face recognition — all running locally on your hardware with no cloud dependency. It supports TensorFlow, YOLO, and hardware accelerators like Google Coral EdgeTPU and NVIDIA CUDA GPUs.
Where Frigate focuses specifically on real-time detection for Home Assistant users, Viseron aims to be a standalone surveillance platform with a built-in web interface for viewing recordings, snapshots, and event clips.
| Feature | Viseron |
|---|---|
| License | MIT |
| Language | Python + TypeScript |
| Object detection | TensorFlow, YOLO, Darknet |
| Face recognition | Yes (built-in) |
| Hardware acceleration | CUDA, Google Coral EdgeTPU |
| Web UI | Built-in (React) |
| MQTT | Yes |
| Home Assistant | Integration available |
| API | RESTful |
| Latest version | v3.4.1 |
Prerequisites
- A Linux server (Ubuntu 22.04+ recommended)
- Docker and Docker Compose installed
- IP cameras with RTSP output
- 2 GB of RAM minimum (4 GB+ recommended for AI detection)
- 20+ GB of disk space for recordings
- Optional: NVIDIA GPU or Google Coral USB Accelerator for faster inference
- A domain name (optional, for remote access)
Docker Compose Configuration
Create a docker-compose.yml:
```yaml
services:
  viseron:
    image: roflcoopter/viseron:3.4.1
    container_name: viseron
    restart: unless-stopped
    shm_size: "1024mb"
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - ./config:/config
      - ./segments:/segments
      - ./snapshots:/snapshots
      - ./thumbnails:/thumbnails
      - ./event_clips:/event_clips
      - ./timelapse:/timelapse
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "8888:8888"
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:8888/ || exit 1"]
      interval: 60s
      timeout: 10s
      retries: 3
      start_period: 60s
```
For NVIDIA GPU acceleration, add:
```yaml
services:
  viseron:
    image: roflcoopter/viseron:3.4.1-cuda
    # ... same as above, plus:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```
For Google Coral EdgeTPU:
```yaml
services:
  viseron:
    image: roflcoopter/viseron:3.4.1
    # ... same as above, plus:
    devices:
      - /dev/bus/usb:/dev/bus/usb
    privileged: true
```
Start the stack:
```bash
docker compose up -d
```
Initial Setup
Access the web UI at http://your-server-ip:8888.
On first launch, Viseron creates a default configuration file at ./config/config.yaml. The web interface provides a built-in config editor — click the settings icon to configure cameras, detection zones, and recording options.
A minimal camera configuration in ./config/config.yaml:
```yaml
cameras:
  - name: Front Door
    host: 192.168.1.100
    port: 554
    path: /stream1
    username: admin
    password: camera-password
    width: 1920
    height: 1080
    fps: 15
    object_detection:
      type: darknet
      model_path: /detectors/models/darknet/yolov4-tiny.weights
      model_config: /detectors/models/darknet/yolov4-tiny.cfg
      label_path: /detectors/models/darknet/coco.names
    motion_detection:
      trigger_detector: true
      area: 0.08
      frames: 3
    recorder:
      segments_folder: /segments
      lookback: 10
      idle_timeout: 30
```
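The host, port, path, and credential fields above combine into a standard RTSP URL, which is worth testing in a player like VLC before handing it to Viseron. A small Python sketch (the helper name and the percent-encoding behavior are this example's own choices, not part of Viseron):

```python
from urllib.parse import quote

def rtsp_url(host: str, port: int, path: str,
             username: str = "", password: str = "") -> str:
    """Assemble an rtsp:// URL from the fields used in the camera config.

    Credentials are percent-encoded so characters like '@' in a
    password do not break the URL.
    """
    auth = ""
    if username:
        auth = quote(username, safe="")
        if password:
            auth += ":" + quote(password, safe="")
        auth += "@"
    if not path.startswith("/"):
        path = "/" + path
    return f"rtsp://{auth}{host}:{port}{path}"

# Matches the Front Door example above:
print(rtsp_url("192.168.1.100", 554, "/stream1", "admin", "camera-password"))
# rtsp://admin:camera-password@192.168.1.100:554/stream1
```

If the stream plays in VLC with this URL but Viseron still fails, the problem is in the config, not the camera.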
Configuration
Detection Zones
Restrict object detection to specific areas of the camera frame to reduce false positives:
```yaml
cameras:
  - name: Front Door
    zones:
      - name: driveway
        coordinates:
          - [0, 500]
          - [900, 500]
          - [900, 1080]
          - [0, 1080]
        objects_in_zone:
          - label: person
          - label: car
```
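Zone coordinates are pixel positions in the camera frame, listed as the corners of a polygon; a detection counts only if it falls inside that polygon. Conceptually this is the classic ray-casting point-in-polygon test, sketched below (illustrative only — Viseron's actual implementation may differ):

```python
def point_in_zone(x: float, y: float, zone: list[tuple[float, float]]) -> bool:
    """Ray-casting test: cast a ray to the right and count edge crossings.

    An odd number of crossings means the point is inside the polygon.
    """
    inside = False
    n = len(zone)
    for i in range(n):
        x1, y1 = zone[i]
        x2, y2 = zone[(i + 1) % n]
        # Does this edge straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

driveway = [(0, 500), (900, 500), (900, 1080), (0, 1080)]
print(point_in_zone(450, 800, driveway))  # True  -- inside the driveway zone
print(point_in_zone(450, 200, driveway))  # False -- above the zone
```

On a 1920x1080 frame, the example zone covers the bottom-left region, so a person walking along the top of the frame would be ignored.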
Face Recognition
Viseron supports face recognition out of the box. Place reference images of known faces in the config directory:
```
config/
  face_recognition/
    known_faces/
      john/
        photo1.jpg
        photo2.jpg
      jane/
        photo1.jpg
```
Enable in config:
```yaml
face_recognition:
  type: dlib
  model: hog
```
MQTT Integration
Connect Viseron to your MQTT broker for Home Assistant integration:
```yaml
mqtt:
  broker: 192.168.1.50
  port: 1883
  username: mqtt_user
  password: mqtt_password
```
Viseron publishes detection events, camera status, and snapshots to MQTT topics.
Event Clips and Timelapses
Viseron can automatically generate event clips (short videos around detections) and daily timelapse videos:
```yaml
recorder:
  segments_folder: /segments
  event_clips_folder: /event_clips
  thumbnails_folder: /thumbnails

timelapse:
  cameras:
    - camera_name: Front Door
      fps: 30
      save_folder: /timelapse
```
Reverse Proxy
For remote access over HTTPS, put Viseron behind a reverse proxy. With Nginx Proxy Manager, create a proxy host pointing to http://viseron:8888 and enable WebSocket support.
See Reverse Proxy Setup for detailed configuration.
Backup
The critical data is the config/ directory (your configuration and face recognition data). Recordings in segments/ and event_clips/ are typically expendable.
```bash
tar -czf viseron-config-$(date +%Y%m%d).tar.gz ./config
```
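The same idea can be automated with rotation so old archives don't accumulate. A sketch using only the Python standard library (the retention policy and file-name pattern are choices of this example, not anything Viseron prescribes):

```python
import tarfile
import time
from pathlib import Path

def backup_config(config_dir: Path, dest_dir: Path, keep: int = 7) -> Path:
    """Archive config_dir into dest_dir, keeping only the `keep` newest archives."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = dest_dir / f"viseron-config-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(config_dir, arcname="config")
    # Prune: timestamped names sort chronologically, so newest-first is a reverse sort.
    archives = sorted(dest_dir.glob("viseron-config-*.tar.gz"), reverse=True)
    for old in archives[keep:]:
        old.unlink()
    return archive
```

Run it from cron or a systemd timer, writing to a destination outside the Viseron host if possible.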
See Backup Strategy for automated approaches.
Troubleshooting
Object detection is slow or missing detections
Symptom: Detections are delayed or objects pass through the frame without being detected.
Fix: Without hardware acceleration, CPU-based inference is slow. Options:
- Switch to yolov4-tiny instead of the full YOLOv4 model
- Reduce camera resolution to 720p for detection (keep full resolution for recording)
- Add a Google Coral USB Accelerator (~$60) for 10x faster inference
- Use an NVIDIA GPU with the CUDA image variant
Camera stream disconnects frequently
Symptom: Camera shows as offline periodically.
Fix: Increase the shared memory size. Viseron needs adequate shared memory for video processing:
```yaml
shm_size: "2048mb"  # Increase from default 1024mb
```
Also check your camera’s RTSP implementation — some cameras limit concurrent RTSP connections. Ensure only Viseron is accessing the stream.
High CPU usage
Symptom: CPU pegged at 100% with one or two cameras.
Fix: Full YOLO models are CPU-intensive. Switch to a lighter model:
```yaml
object_detection:
  type: darknet
  model_path: /detectors/models/darknet/yolov4-tiny.weights
```
Or use motion detection as a trigger — Viseron only runs object detection when motion is detected, saving significant CPU:
```yaml
motion_detection:
  trigger_detector: true
```
Face recognition not matching
Symptom: Known faces are not recognized.
Fix: Provide multiple reference photos per person (3-5 minimum), with varied angles and lighting. Ensure photos are clear, well-lit, and show the face prominently. The dlib model works best with frontal face images.
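A quick sanity check on the reference library helps here. This sketch (the helper and the warning threshold are this example's own, not a Viseron tool) counts images per person under known_faces/ and flags folders that fall short:

```python
from pathlib import Path

def check_reference_photos(known_faces: Path, minimum: int = 3) -> dict[str, int]:
    """Count reference images per person; warn on folders below `minimum`."""
    counts: dict[str, int] = {}
    for person_dir in sorted(p for p in known_faces.iterdir() if p.is_dir()):
        images = [f for f in person_dir.iterdir()
                  if f.suffix.lower() in {".jpg", ".jpeg", ".png"}]
        counts[person_dir.name] = len(images)
        if len(images) < minimum:
            print(f"warning: {person_dir.name} has only {len(images)} photo(s)")
    return counts
```

Anyone flagged by the warning is a likely cause of missed matches.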
Container crashes with OOM
Symptom: Container is killed by the system OOM killer.
Fix: Viseron with AI detection needs more RAM than basic NVRs. Set memory limits:
```yaml
deploy:
  resources:
    limits:
      memory: 4G
```
Reduce the number of concurrent detection cameras, or use motion-triggered detection to lower sustained memory usage.
Resource Requirements
| Resource | CPU Only | With Coral EdgeTPU | With NVIDIA GPU |
|---|---|---|---|
| RAM | 2-4 GB | 2-4 GB | 4-8 GB |
| CPU | High (1+ core per camera) | Low-Moderate | Low |
| GPU VRAM | N/A | N/A | 2+ GB |
| Disk | 20+ GB per camera/month | Same | Same |
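Disk usage scales with the camera's bitrate and how much of the day you actually record. A rough estimator (assuming continuous recording; event-triggered recording stores far less, which is why the table's figure is much lower than a continuous 2 Mbit/s stream would suggest):

```python
def gb_per_month(bitrate_mbps: float, hours_per_day: float = 24) -> float:
    """Approximate storage for one camera: Mbit/s over 30 days -> GB."""
    seconds = hours_per_day * 3600 * 30
    return bitrate_mbps * seconds / 8 / 1000  # Mbit -> MB -> GB

print(round(gb_per_month(2.0)))       # 648 GB/month recording 24/7 at 2 Mbit/s
print(round(gb_per_month(2.0, 1.0)))  # 27 GB/month if events total ~1 h/day
```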
Verdict
Viseron occupies an interesting middle ground between Frigate and basic NVRs like Moonfire. It offers AI detection with face recognition — something Frigate doesn’t have built-in — and provides a standalone web UI that doesn’t require Home Assistant. The MIT license is also more permissive than Frigate’s.
The trade-off is maturity. Frigate has a larger community, more documentation, and deeper Home Assistant integration. If you’re in the Home Assistant ecosystem, Frigate is still the better choice. If you want a standalone AI-powered NVR with face recognition and don’t need Home Assistant, Viseron is worth a serious look.
FAQ
Can Viseron work without AI detection?
Yes. You can use it as a pure motion-detection NVR without object detection models. This drastically reduces CPU requirements.
Does it support ONVIF cameras?
Viseron connects via RTSP, which most ONVIF cameras support. It doesn’t use the ONVIF protocol directly for PTZ or other advanced features.
How does Viseron compare to Frigate?
Frigate is more mature, has a larger community, and integrates deeply with Home Assistant. Viseron has built-in face recognition and a standalone web UI. See our Frigate vs Shinobi comparison for more on NVR options.