How to Self-Host ComfyUI with Docker Compose
What Is ComfyUI?
ComfyUI is a node-based workflow editor for AI image generation. Instead of a traditional form-based UI, you build image generation pipelines by connecting nodes — model loaders, samplers, VAE decoders, ControlNet processors, and more. This gives you full control over every step of the generation process. Workflows can be saved, shared, and reproduced exactly.
Prerequisites
- A Linux server (Ubuntu 22.04+ recommended)
- Docker and Docker Compose installed
- NVIDIA GPU with 4+ GB VRAM (8+ GB recommended)
- 8 GB+ system RAM
- 20 GB+ free disk space
- NVIDIA Container Toolkit installed
Docker Compose Configuration
ComfyUI doesn’t have an official Docker image. Here’s a Docker setup using a custom Dockerfile:
Create a Dockerfile:

```dockerfile
FROM nvidia/cuda:12.4.1-runtime-ubuntu22.04

RUN apt-get update && apt-get install -y \
    git python3 python3-pip python3-venv \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

RUN git clone https://github.com/comfyanonymous/ComfyUI.git . && \
    git checkout v0.14.2

RUN pip3 install --no-cache-dir torch torchvision torchaudio \
    --index-url https://download.pytorch.org/whl/cu124 && \
    pip3 install --no-cache-dir -r requirements.txt

EXPOSE 8188

CMD ["python3", "main.py", "--listen", "0.0.0.0"]
```
Create a docker-compose.yml:
```yaml
services:
  comfyui:
    build: .
    container_name: comfyui
    ports:
      - "8188:8188"
    volumes:
      - ./models:/app/models
      - ./output:/app/output
      - ./input:/app/input
      - ./custom_nodes:/app/custom_nodes
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    restart: unless-stopped
```
Alternative: Source installation (simpler):
```shell
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
pip install -r requirements.txt
python main.py --listen
```
Build and start:
```shell
docker compose up -d --build
```
Initial Setup
1. Open `http://your-server:8188` in your browser. You’ll see the node editor with a default workflow.
2. Download a model and place it in `models/checkpoints/`:

   ```shell
   wget -P models/checkpoints/ \
     https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors
   ```

3. Click Queue Prompt to generate your first image.
Understanding the Node Editor
The default workflow contains:
- Load Checkpoint — Loads the image generation model
- CLIP Text Encode (Prompt) — Encodes your positive prompt
- CLIP Text Encode (Negative) — Encodes what to avoid
- KSampler — The actual generation step (steps, CFG, sampler)
- VAE Decode — Converts the latent image to pixels
- Save Image — Saves the output
Right-click anywhere to add new nodes. Connect outputs to inputs by dragging.
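Under the hood, every workflow serializes to JSON: each node has a `class_type` and an `inputs` map, and a connection is a reference to another node’s id and output slot. An illustrative, hand-trimmed fragment (not a complete runnable workflow; it follows the shape of the exported API format):

```json
{
  "4": { "class_type": "CheckpointLoaderSimple",
         "inputs": { "ckpt_name": "sd_xl_base_1.0.safetensors" } },
  "6": { "class_type": "CLIPTextEncode",
         "inputs": { "text": "a watercolor lighthouse", "clip": ["4", 1] } },
  "3": { "class_type": "KSampler",
         "inputs": { "model": ["4", 0], "positive": ["6", 0],
                     "seed": 42, "steps": 20, "cfg": 7.0 } }
}
```

Here `["4", 0]` means “output slot 0 of node 4” — this reference structure is what makes workflows reproducible and shareable.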
Configuration
Model Directories
| Directory | Contents |
|---|---|
| `models/checkpoints/` | SD 1.5, SDXL, Flux checkpoint files |
| `models/vae/` | VAE models |
| `models/loras/` | LoRA adapters |
| `models/controlnet/` | ControlNet models |
| `models/upscale_models/` | Upscaler models (ESRGAN, etc.) |
| `models/clip/` | CLIP text encoder models |
| `models/embeddings/` | Textual inversion embeddings |
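On Linux, Docker creates missing bind-mount sources as root-owned directories, so it can help to pre-create the host-side directories as your own user before the first `docker compose up`. A sketch, mirroring the table above (run from the directory containing `docker-compose.yml`):

```shell
# Model subdirectories ComfyUI looks in (see table above)
mkdir -p models/checkpoints models/vae models/loras models/controlnet \
         models/upscale_models models/clip models/embeddings

# Other bind-mounted directories from the compose file
mkdir -p output input custom_nodes
```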
CLI Arguments
| Argument | Description |
|---|---|
| `--listen ADDRESS` | Listen address (use `0.0.0.0` for Docker) |
| `--port PORT` | Port (default: 8188) |
| `--cpu` | Run on CPU only |
| `--lowvram` | Optimize for low-VRAM GPUs |
| `--novram` | Run with minimal VRAM (~256 MB) |
| `--disable-auto-launch` | Don’t open a browser on start |
| `--preview-method auto` | Enable generation previews |
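With the Docker setup above, these flags are passed by overriding the container command. A compose sketch enabling low-VRAM mode and previews (verify flag names against your ComfyUI version with `python3 main.py --help`):

```yaml
services:
  comfyui:
    # Overrides the Dockerfile CMD; keep --listen 0.0.0.0 so the
    # server is reachable from outside the container
    command: ["python3", "main.py", "--listen", "0.0.0.0",
              "--lowvram", "--preview-method", "auto"]
```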
Advanced Configuration
Custom Nodes (ComfyUI-Manager)
Install ComfyUI-Manager for easy custom node management:
```shell
cd custom_nodes
git clone https://github.com/ltdrdata/ComfyUI-Manager.git
```
Restart ComfyUI. Click the Manager button to browse and install custom nodes from the UI.
API Usage
ComfyUI has a WebSocket API for programmatic use:
```python
import json
import urllib.request

prompt = {
    # Export a workflow as API format from the UI:
    # Save → API Format → copy the JSON here
}

data = json.dumps({"prompt": prompt}).encode("utf-8")
req = urllib.request.Request(
    "http://localhost:8188/prompt",
    data=data,
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)
```
Export any workflow as API-compatible JSON using the Save (API Format) button.
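The `/prompt` endpoint responds with a `prompt_id`, which can then be polled via the `/history` endpoint to find the finished outputs. A minimal client sketch (helper names are my own; endpoint paths follow ComfyUI’s server API, but verify against your version):

```python
import json
import time
import urllib.request

SERVER = "http://localhost:8188"  # adjust to your server address


def build_payload(workflow: dict) -> bytes:
    """Wrap an API-format workflow dict in the JSON body /prompt expects."""
    return json.dumps({"prompt": workflow}).encode("utf-8")


def queue_prompt(workflow: dict) -> str:
    """Submit a workflow; the response includes a prompt_id for tracking."""
    req = urllib.request.Request(
        f"{SERVER}/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prompt_id"]


def wait_for_history(prompt_id: str, poll_interval: float = 2.0) -> dict:
    """Poll /history/<prompt_id> until the finished entry appears."""
    while True:
        with urllib.request.urlopen(f"{SERVER}/history/{prompt_id}") as resp:
            history = json.loads(resp.read())
        if prompt_id in history:
            return history[prompt_id]  # contains the output image filenames
        time.sleep(poll_interval)
```

For live progress updates rather than polling, the WebSocket endpoint is the better fit; the sketch above is the simplest request/response path.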
Flux Model Support
ComfyUI supports Flux models. Download the Flux checkpoint and use the appropriate workflow nodes (different from SD 1.5/SDXL workflows).
Reverse Proxy
Configure your reverse proxy to forward to port 8188. WebSocket support is required for the node editor and generation progress. See Reverse Proxy Setup.
Backup
Back up these directories:
- `output/` — Generated images (irreplaceable)
- `custom_nodes/` — Installed custom nodes (can be re-downloaded)
- `models/` — Downloaded models (large, can be re-downloaded)
Priority: Save your workflow JSON files — they capture your entire generation pipeline and are small. See Backup Strategy.
Troubleshooting
Out of VRAM
Symptom: Generation fails with CUDA OOM error.
Fix: Add --lowvram or --novram to the command. Reduce image resolution. Use FP16 models instead of FP32.
Custom Node Errors
Symptom: Workflow fails with “missing node” errors.
Fix: Install the required custom nodes. Check ComfyUI-Manager for one-click installation. Some workflows require specific custom node versions.
Models Not Appearing
Symptom: Checkpoint dropdown is empty.
Fix: Verify model files are in models/checkpoints/ (not a subdirectory). File format must be .safetensors or .ckpt. Refresh the page after adding models.
WebSocket Connection Lost
Symptom: UI disconnects from the server.
Fix: Check that your reverse proxy supports WebSocket connections. If using Nginx, ensure proxy_http_version 1.1, Upgrade, and Connection headers are configured.
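For Nginx, a minimal location-block sketch with the headers mentioned above (the upstream address is an assumption; adapt it to your server block):

```nginx
location / {
    proxy_pass http://127.0.0.1:8188;

    # Required for WebSocket upgrade (node editor, progress updates)
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    proxy_set_header Host $host;
    proxy_read_timeout 86400;  # keep long-running generations connected
}
```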
Resource Requirements
- VRAM: 4 GB minimum, 8 GB recommended, 12+ GB for SDXL/Flux
- RAM: 8-16 GB
- CPU: Low (GPU does the computation)
- Disk: 4-7 GB per model, plus generated images
Verdict
ComfyUI is the power user’s image generation tool. The node-based workflow gives you complete control over every step of the generation pipeline — something no other interface provides. Workflows are reproducible, shareable, and composable. The trade-off is a steeper learning curve compared to Stable Diffusion WebUI.
Choose ComfyUI if you want maximum control over image generation pipelines, reproducible workflows, and the ability to build complex generation chains. Choose Stable Diffusion WebUI if you want a simpler, more traditional interface.
Frequently Asked Questions
Do I need an NVIDIA GPU to run ComfyUI?
An NVIDIA GPU with 4+ GB VRAM is strongly recommended. ComfyUI can run on CPU with the --cpu flag, but generation will be extremely slow (minutes per image instead of seconds). AMD GPUs are partially supported via ROCm on Linux. For serious use, an NVIDIA GPU with 8+ GB VRAM is the practical minimum.
How does ComfyUI compare to Stable Diffusion WebUI?
ComfyUI uses a node-based workflow editor where you connect processing blocks visually, giving full control over the generation pipeline. Stable Diffusion WebUI (AUTOMATIC1111/Forge) uses a traditional form-based interface that’s simpler to use. ComfyUI is more powerful and flexible for advanced users; Stable Diffusion WebUI is easier for beginners. ComfyUI workflows are fully reproducible and shareable as JSON files.
How much disk space do AI models need?
Each Stable Diffusion model is 2-7 GB. SDXL models are typically 6-7 GB. Flux models can be 20+ GB. LoRAs are 10-200 MB each. A practical setup with 3-4 base models, several LoRAs, and a ControlNet model needs 30-50 GB. Plan for at least 20 GB for your first model.
Can I run ComfyUI on a Raspberry Pi?
No. ComfyUI requires GPU acceleration for practical use, and Raspberry Pi lacks a compatible GPU. Even on CPU, the Raspberry Pi’s ARM processor and limited RAM make image generation impractical. Use a desktop/server with an NVIDIA GPU.
What is ComfyUI Manager?
ComfyUI Manager is a custom node that adds a GUI for browsing, installing, and managing other custom nodes. Install it by cloning the GitHub repo into the custom_nodes/ directory. It’s effectively the package manager for ComfyUI’s extension ecosystem.
Can I use ComfyUI workflows from others?
Yes. ComfyUI workflows can be saved as JSON files and shared. Load a workflow with File → Load in the UI. If the workflow uses custom nodes you don’t have, ComfyUI shows missing node errors — install the required custom nodes through ComfyUI Manager and reload.