How to Self-Host Langflow with Docker Compose
What Is Langflow?
Langflow is a visual AI workflow builder that lets you create LLM-powered applications by connecting components in a drag-and-drop canvas. Built on LangChain, it supports multi-agent orchestration, RAG pipelines, and custom Python components. Flows can be deployed as REST APIs or MCP servers, making Langflow both a prototyping tool and a deployment platform.
Prerequisites
- A Linux server (Ubuntu 22.04+ recommended)
- Docker and Docker Compose installed
- 2 GB+ RAM
- 5 GB free disk space
- An LLM backend (Ollama, OpenAI API key, or similar)
Docker Compose Configuration
Create a docker-compose.yml file:
```yaml
services:
  langflow:
    image: langflowai/langflow:1.8.1
    container_name: langflow
    ports:
      - "7860:7860"
    volumes:
      - langflow_data:/app/langflow
    environment:
      # Database (SQLite by default)
      - LANGFLOW_DATABASE_URL=sqlite:////app/langflow/langflow.db
      - LANGFLOW_CONFIG_DIR=/app/langflow
      # Authentication (optional)
      - LANGFLOW_AUTO_LOGIN=false
      - LANGFLOW_SUPERUSER=admin
      - LANGFLOW_SUPERUSER_PASSWORD=changeme-use-strong-password
      # Worker settings
      - LANGFLOW_WORKERS=1
    restart: unless-stopped

volumes:
  langflow_data:
```
For production with PostgreSQL:
```yaml
services:
  langflow:
    image: langflowai/langflow:1.8.1
    container_name: langflow
    ports:
      - "7860:7860"
    volumes:
      - langflow_data:/app/langflow
    environment:
      - LANGFLOW_DATABASE_URL=postgresql://langflow:langflow-db-password@langflow-db:5432/langflow
      - LANGFLOW_CONFIG_DIR=/app/langflow
      - LANGFLOW_AUTO_LOGIN=false
      - LANGFLOW_SUPERUSER=admin
      - LANGFLOW_SUPERUSER_PASSWORD=changeme-use-strong-password
    depends_on:
      langflow-db:
        condition: service_healthy
    restart: unless-stopped

  langflow-db:
    image: postgres:16-alpine
    container_name: langflow-db
    volumes:
      - langflow_db_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=langflow
      - POSTGRES_PASSWORD=langflow-db-password
      - POSTGRES_DB=langflow
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U langflow"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

volumes:
  langflow_data:
  langflow_db_data:
```
Start the stack:
```shell
docker compose up -d
```
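Before opening the UI, it's worth confirming the container is up and the server answers. A quick check from the host (the `/health` path is what recent Langflow versions expose; adjust if your version differs):

```shell
docker compose ps
curl -s http://localhost:7860/health
```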
Initial Setup
- Open http://your-server:7860 in your browser
- Log in with the superuser credentials
- Click New Flow to create your first workflow
- Use the sidebar to drag components onto the canvas
Quick Start: Ollama Chat
- Drag Ollama from the Models section
- Set Base URL to http://host.docker.internal:11434
- Set Model Name to llama3.2
- Drag a Chat Output component
- Connect Ollama’s output to Chat Output’s input
- Click the Playground button to test
Configuration
Key Environment Variables
| Variable | Default | Description |
|---|---|---|
| LANGFLOW_DATABASE_URL | SQLite | Database connection string |
| LANGFLOW_CONFIG_DIR | /app/langflow | Config and data directory |
| LANGFLOW_AUTO_LOGIN | true | Skip login (disable for production) |
| LANGFLOW_SUPERUSER | (none) | Admin username |
| LANGFLOW_SUPERUSER_PASSWORD | (none) | Admin password |
| LANGFLOW_WORKERS | 1 | Number of worker processes |
| LANGFLOW_PORT | 7860 | Server port |
| LANGFLOW_HOST | 0.0.0.0 | Server bind address |
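To keep secrets out of docker-compose.yml, Docker Compose substitutes variables from a .env file in the same directory. A minimal sketch (values are placeholders):

```
# .env — picked up automatically by `docker compose`
LANGFLOW_SUPERUSER=admin
LANGFLOW_SUPERUSER_PASSWORD=use-a-long-random-password
```

Then reference it in the compose file as `- LANGFLOW_SUPERUSER_PASSWORD=${LANGFLOW_SUPERUSER_PASSWORD}` instead of hard-coding the value.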
Advanced Configuration
Custom Python Components
Langflow supports custom Python components. Create a component that adds custom logic:
```python
from langflow.custom import Component
from langflow.io import MessageTextInput, Output


class MyComponent(Component):
    display_name = "Custom Processor"
    inputs = [MessageTextInput(name="input_text", display_name="Input")]
    outputs = [Output(display_name="Output", name="output", method="process")]

    def process(self) -> str:
        # Example logic: upper-case the incoming text
        return self.input_text.upper()
```
Place custom components in the Langflow data directory and they appear in the sidebar.
Deploy as API
Every flow automatically gets an API endpoint:
```shell
curl -X POST http://localhost:7860/api/v1/run/your-flow-id \
  -H "Content-Type: application/json" \
  -d '{"input_value": "What is self-hosting?", "output_type": "chat"}'
```
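The same call from application code, here sketched in Python with only the standard library. The payload shape mirrors the curl example above; `build_payload` and `run_flow` are hypothetical helper names, and the flow ID placeholder must be replaced with the ID Langflow shows for your flow:

```python
import json
import urllib.request


def build_payload(input_value: str, output_type: str = "chat") -> bytes:
    # Mirror the JSON body from the curl example
    return json.dumps({"input_value": input_value, "output_type": output_type}).encode()


def run_flow(base_url: str, flow_id: str, input_value: str) -> dict:
    # POST to /api/v1/run/<flow-id> and parse the JSON response
    req = urllib.request.Request(
        f"{base_url}/api/v1/run/{flow_id}",
        data=build_payload(input_value),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Usage: `run_flow("http://localhost:7860", "your-flow-id", "What is self-hosting?")` returns the flow's JSON output as a dict.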
MCP Server Deployment
Langflow can deploy flows as MCP (Model Context Protocol) servers, allowing AI assistants to use your flows as tools.
Reverse Proxy
Configure your reverse proxy to forward to port 7860. WebSocket support is required for the interactive playground. See Reverse Proxy Setup.
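With nginx, the WebSocket upgrade headers are the part that is easy to miss. A minimal location block as a sketch (server name, TLS, and the rest of the server block omitted):

```nginx
location / {
    proxy_pass http://127.0.0.1:7860;
    proxy_http_version 1.1;
    # Required for the interactive playground's WebSocket connections
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```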
Backup
Back up the Langflow data volume:
```shell
docker run --rm -v langflow_data:/data -v $(pwd):/backup alpine \
  tar czf /backup/langflow-backup.tar.gz /data
```
This contains flows, credentials, custom components, and the SQLite database. See Backup Strategy.
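To restore, stop the stack and unpack the archive back into the volume. Since tar stores the paths relative to the root (`data/...`), extracting at `/` recreates the volume contents. A sketch assuming the backup file created above:

```shell
docker compose down
docker run --rm -v langflow_data:/data -v "$(pwd)":/backup alpine \
  sh -c "rm -rf /data/* && tar xzf /backup/langflow-backup.tar.gz -C /"
docker compose up -d
```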
Troubleshooting
Cannot Connect to Ollama
Symptom: Ollama component fails to connect.
Fix: Use http://host.docker.internal:11434 on Docker Desktop, or http://172.17.0.1:11434 on Linux. Ensure Ollama has OLLAMA_HOST=0.0.0.0:11434.
Flow Runs Slowly
Symptom: Playground responses take a long time.
Fix: Check that your LLM backend is running efficiently. Increase LANGFLOW_WORKERS for better concurrency. Ensure sufficient RAM for the Langflow process.
Login Page Loops
Symptom: Login redirects back to login page.
Fix: Clear browser cookies. Ensure LANGFLOW_SUPERUSER and LANGFLOW_SUPERUSER_PASSWORD are set correctly. Check LANGFLOW_AUTO_LOGIN is set to false (not an empty string).
Resource Requirements
- RAM: 300-600 MB (Langflow itself; LLM backends are separate)
- CPU: Low-medium
- Disk: 1-5 GB depending on stored flows and custom components
Verdict
Langflow is the more powerful visual AI builder. Its Python component system, multi-agent support, and API deployment capabilities make it a real development platform, not just a chatbot builder. The trade-off is a higher learning curve and heavier resource footprint compared to Flowise.
Choose Langflow for building AI applications with custom logic and API deployment. Choose Flowise for simpler chatbot building with a lower barrier to entry.
FAQ
How does Langflow compare to Flowise?
Both are visual AI workflow builders, but Langflow is more powerful and complex. Langflow supports custom Python components, multi-agent orchestration, MCP server deployment, and has a larger component library. Flowise is simpler with a lower learning curve, focused on chatbot building with no-code configuration. Choose Langflow for complex AI applications; choose Flowise for quick chatbot prototyping.
Do I need my own LLM to use Langflow?
You need access to an LLM backend — either a self-hosted model via Ollama, a cloud API (OpenAI, Anthropic, Google), or any OpenAI-compatible endpoint. Langflow itself is the workflow orchestrator, not the AI model. The most cost-effective approach for self-hosters is pairing Langflow with Ollama running local models.
Can I deploy Langflow flows as APIs?
Yes. Every flow automatically gets a REST API endpoint. Send POST requests with input data, and Langflow runs the flow and returns the output. This turns Langflow from a prototyping tool into a deployment platform — build workflows visually, then call them from your applications via HTTP.
What is MCP server deployment?
Model Context Protocol (MCP) is a standard for AI assistants to use external tools. Langflow can deploy flows as MCP servers, allowing AI assistants (like Claude) to call your Langflow workflows as tools. This enables complex integrations where an AI assistant uses your custom-built workflows as capabilities.
Can I write custom components in Langflow?
Yes. Create Python classes that inherit from Component, define inputs and outputs, and implement processing logic. Custom components appear in the sidebar alongside built-in ones. This is the key advantage over Flowise — arbitrary Python code gives you unlimited flexibility for data processing, API integrations, and custom logic.
Is Langflow suitable for production workloads?
For moderate-scale production, yes. Use PostgreSQL (not SQLite) for the database, increase LANGFLOW_WORKERS for concurrent request handling, and deploy behind a reverse proxy with HTTPS. For high-throughput production, consider running multiple Langflow instances behind a load balancer. The main limitation is that flow execution is synchronous per worker.