Install Nextcloud on Proxmox VE
Why Proxmox for Nextcloud?
Running Nextcloud in a Proxmox VM gives you snapshot-based backups (revert a broken update in seconds), resource isolation (Nextcloud can’t starve your other services), and the ability to migrate the VM between Proxmox nodes. You can also pass through storage controllers or mount NAS shares for file storage while keeping the database on fast local disks.
Updated March 2026: Verified with latest Docker images and configurations.
This guide covers VM creation, storage planning for Nextcloud’s dual needs (fast database I/O + large file capacity), NAS integration, and Proxmox Backup Server integration. It assumes a working Proxmox VE 8.x installation.
For Nextcloud’s features, configuration, and troubleshooting, see the main Nextcloud guide.
Prerequisites
- Proxmox VE 8.0+ installed and accessible via the web UI
- Hardware: Minimum 2 vCPUs and 4 GB RAM for the VM (4 vCPUs / 8 GB recommended for multiple users)
- Storage: 20 GB for the VM OS disk, plus dedicated storage for files
- Network: A bridge interface configured (default vmbr0)
- Ubuntu 24.04 LTS cloud image (downloaded in the setup steps below)
LXC vs VM: Which to Choose
| | LXC Container | VM |
|---|---|---|
| Performance | Near-native, shared kernel | ~5% overhead from virtualization |
| Docker inside | Requires nesting + keyctl features | Works natively |
| NFS/CIFS mounts | Supported (with correct apparmor config) | Supported natively |
| Snapshots | Supported | Supported |
| Isolation | Shared kernel with host | Full isolation |
| Recommendation | Use if you want minimal overhead and are comfortable with LXC-Docker quirks | Recommended. Docker runs natively, no special config needed, easier to troubleshoot. |
This guide uses a VM. Docker in LXC requires enabling nesting and keyctl in the container options, and some Nextcloud features (file locking, cron) can behave unexpectedly in nested container environments.
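If you do choose LXC anyway, the required features can be enabled from the Proxmox host. A minimal sketch, where CT_ID stands in for your container ID:

```shell
# Enable nesting and keyctl so Docker can run inside the LXC container
pct set CT_ID --features nesting=1,keyctl=1

# Restart the container for the feature change to take effect
pct reboot CT_ID
```

The same toggles are available in the web UI under the container's Options > Features.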
Platform Setup
Download the Ubuntu Cloud Image
SSH into your Proxmox host:
cd /var/lib/vz/template/iso/
wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
Create the VM
Replace VM_ID with your preferred ID (e.g., 300):
VM_ID=300
# Create the VM
qm create $VM_ID --name nextcloud --memory 4096 --cores 2 --cpu host \
--net0 virtio,bridge=vmbr0 --ostype l26 --agent enabled=1
# Import the cloud image as the boot disk
qm importdisk $VM_ID /var/lib/vz/template/iso/noble-server-cloudimg-amd64.img local-lvm
# Attach the disk with IO thread for better performance
qm set $VM_ID --scsihw virtio-scsi-single --scsi0 local-lvm:vm-${VM_ID}-disk-0,iothread=1,discard=on
# Resize the OS disk to 20 GB
qm disk resize $VM_ID scsi0 20G
# Add cloud-init drive
qm set $VM_ID --ide2 local-lvm:cloudinit
# Set boot order
qm set $VM_ID --boot order=scsi0
# Configure cloud-init
qm set $VM_ID --ciuser nextcloud --cipassword YOUR_PASSWORD \
--ipconfig0 ip=dhcp --sshkeys ~/.ssh/authorized_keys
Adjust these values:
- --memory 4096: 4 GB RAM. Use 8192 for 5+ users or if running Collabora/OnlyOffice.
- --cores 2: 2 vCPUs. Increase for preview generation and Collabora.
- bridge=vmbr0: Your network bridge.
- local-lvm: Your storage pool. Replace if using ZFS, Ceph, or another pool.
Start the VM:
qm start $VM_ID
Initial VM Configuration
SSH into the VM:
ssh nextcloud@VM_IP_ADDRESS
Update and install Docker:
sudo apt update && sudo apt upgrade -y
curl -fsSL https://get.docker.com | sudo sh
sudo usermod -aG docker $USER
newgrp docker
Install the QEMU guest agent:
sudo apt install -y qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent
Add a Dedicated Data Disk
Keep user files on a separate virtual disk from the OS. This simplifies backups and allows different storage tiers (fast SSD for database, large HDD pool for files).
On the Proxmox host:
# Add a data disk — adjust size to your needs
qm set $VM_ID --scsi1 local-lvm:200,iothread=1,discard=on
Inside the VM:
# Find the new disk
lsblk
# Typically appears as /dev/sdb
# Partition and format
sudo fdisk /dev/sdb # n, p, defaults, w
sudo mkfs.ext4 /dev/sdb1
# Mount
sudo mkdir -p /mnt/data
sudo mount /dev/sdb1 /mnt/data
echo "/dev/sdb1 /mnt/data ext4 defaults,noatime,discard 0 2" | sudo tee -a /etc/fstab
sudo chown -R $USER:$USER /mnt/data
NFS/CIFS Storage for File Library (Alternative)
If you store files on a TrueNAS, Synology, or other NAS:
# NFS
sudo apt install -y nfs-common
sudo mkdir -p /mnt/nextcloud-files
echo "NAS_IP:/volume1/nextcloud /mnt/nextcloud-files nfs defaults,noatime 0 0" | sudo tee -a /etc/fstab
sudo mount -a
# CIFS/SMB
sudo apt install -y cifs-utils
echo "//NAS_IP/nextcloud /mnt/nextcloud-files cifs credentials=/etc/nextcloud-nas-creds,uid=33,gid=33,iocharset=utf8 0 0" | sudo tee -a /etc/fstab
Critical: Mount network storage only for user files. PostgreSQL’s data directory must be on local storage — running PostgreSQL on NFS or CIFS causes data corruption.
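To double-check before deploying, you can verify which filesystem backs Docker's storage area. A quick sketch, assuming Docker's default data root of /var/lib/docker:

```shell
# Show the filesystem that actually holds Docker's volumes
findmnt -T /var/lib/docker -o TARGET,SOURCE,FSTYPE
# FSTYPE should be a local filesystem (ext4, xfs, btrfs, zfs) —
# if it reports nfs or cifs, move Docker's data-root to local storage first
```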
Docker Compose Configuration
Create the project directory:
mkdir -p /mnt/data/nextcloud && cd /mnt/data/nextcloud
Create docker-compose.yml:
services:
  db:
    image: postgres:17-alpine
    container_name: nextcloud-db
    restart: unless-stopped
    volumes:
      # Database on fast local storage — NEVER on NFS/CIFS
      - nextcloud-db:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: nextcloud
      POSTGRES_USER: nextcloud
      # CHANGE THIS — use a strong password
      POSTGRES_PASSWORD: "change-this-db-password"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U nextcloud"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    container_name: nextcloud-redis
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  app:
    image: nextcloud:33.0.0-apache
    container_name: nextcloud
    restart: unless-stopped
    ports:
      - "8080:80"
    volumes:
      - nextcloud-html:/var/www/html
      # User files on dedicated data disk or NAS mount
      - /mnt/data/nextcloud-files:/var/www/html/data
    environment:
      # Database
      POSTGRES_HOST: db
      POSTGRES_DB: nextcloud
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: "change-this-db-password"
      # Cache
      REDIS_HOST: redis
      # Admin account (first run only)
      NEXTCLOUD_ADMIN_USER: admin
      NEXTCLOUD_ADMIN_PASSWORD: "change-this-admin-password"
      # Trusted domains — add your VM IP and domain
      NEXTCLOUD_TRUSTED_DOMAINS: "localhost vm-ip your-domain.com"
      # PHP tuning
      PHP_MEMORY_LIMIT: "512M"
      PHP_UPLOAD_LIMIT: "16G"
      APACHE_BODY_LIMIT: "0"
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy

  cron:
    image: nextcloud:33.0.0-apache
    container_name: nextcloud-cron
    restart: unless-stopped
    volumes:
      - nextcloud-html:/var/www/html
      - /mnt/data/nextcloud-files:/var/www/html/data
    entrypoint: /cron.sh
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy

volumes:
  nextcloud-db:
  nextcloud-html:
Create the data directory and start:
mkdir -p /mnt/data/nextcloud-files
docker compose up -d
First startup takes 1-2 minutes for database initialization.
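To watch the initialization rather than guess, you can follow the app container's logs and then confirm the install with occ:

```shell
# Follow the app container logs; installation messages appear here
docker compose logs -f app

# Once the container is up, check installation status via occ
docker compose exec -u www-data app php occ status
```

When `occ status` reports `installed: true`, the web UI is ready.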
Initial Setup
- Open http://vm-ip:8080
- Admin account is created from environment variables
- Install recommended apps (Calendar, Contacts, Talk)
- Administration > Basic settings — confirm the background jobs mode is set to Cron
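The background job mode can also be set from the CLI instead of the web UI:

```shell
# Switch the background jobs mode to cron (same effect as the Basic settings toggle)
docker compose exec -u www-data app php occ background:cron
```

The dedicated cron container then runs /var/www/html/cron.php every 5 minutes.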
Configure Trusted Proxies (if using reverse proxy)
docker compose exec -u www-data app php occ config:system:set overwriteprotocol --value="https"
docker compose exec -u www-data app php occ config:system:set trusted_proxies 0 --value="172.16.0.0/12"
Proxmox-Specific Configuration
VM Backups with Proxmox Backup Server
Proxmox can back up the entire Nextcloud VM, including all Docker volumes. This is the simplest backup strategy — one backup captures everything.
In the Proxmox web UI:
- Go to Datacenter > Backup
- Add a backup job for VM $VM_ID
- Schedule: daily at a low-usage hour
- Mode: Snapshot (no downtime) or Stop (consistent but brief downtime)
- Retention: keep 7 daily, 4 weekly
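The same backup can be triggered from the host CLI for a one-off run. A sketch, where the storage name `pbs` is an assumption (substitute your PBS storage ID):

```shell
# One-off snapshot-mode backup of the Nextcloud VM to a PBS storage
vzdump 300 --mode snapshot --storage pbs
```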
For application-level backups (more granular, faster restores):
# Inside the VM — dump the database
mkdir -p /mnt/data/backups
docker compose exec -T db pg_dump -U nextcloud nextcloud > /mnt/data/backups/nextcloud-db-$(date +%Y%m%d).sql
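Restoring such a dump is roughly the reverse. A sketch, assuming you are replaying into a freshly created database (the date in the filename is a placeholder):

```shell
# Stop the containers that write to the database first
docker compose stop app cron

# Replay the SQL dump into the db service
cat /mnt/data/backups/nextcloud-db-YYYYMMDD.sql | docker compose exec -T db psql -U nextcloud nextcloud

# Bring the application back up
docker compose start app cron
```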
Snapshot Before Updates
Before updating Nextcloud to a new version, take a Proxmox snapshot:
# On the Proxmox host
qm snapshot $VM_ID pre-update --description "Before Nextcloud 33.x update"
If the update breaks something, roll back instantly:
qm rollback $VM_ID pre-update
This is one of the biggest advantages of running Nextcloud in a Proxmox VM — update rollbacks take seconds instead of hours of troubleshooting.
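Snapshots do consume storage, so clean them up once an update is verified:

```shell
# List existing snapshots for the VM
qm listsnapshot $VM_ID

# Remove the pre-update snapshot after confirming the upgrade is stable
qm delsnapshot $VM_ID pre-update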
Resource Monitoring via Proxmox
The Proxmox web UI shows real-time CPU, RAM, disk I/O, and network for the Nextcloud VM. Use this to identify bottlenecks:
- High CPU during file scanning → increase vCPUs or reduce preview generation sizes
- High disk I/O wait → database is on slow storage. Move to SSD-backed pool.
- Memory near limit → increase VM RAM allocation or add swap inside the VM
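Resource allocations can be raised from the Proxmox host without recreating the VM. A sketch (the new values take effect after a VM restart unless CPU/memory hotplug is enabled):

```shell
# Bump the Nextcloud VM to 8 GB RAM and 4 vCPUs
qm set $VM_ID --memory 8192 --cores 4
```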
Live Migration
If you have a Proxmox cluster, you can live-migrate the Nextcloud VM between nodes with zero downtime:
qm migrate $VM_ID target-node --online
The only requirement is shared storage (Ceph, NFS, iSCSI) for the VM disks, or local-to-local migration with storage replication.
Reverse Proxy
You can run the reverse proxy inside the VM alongside Nextcloud, or as a separate VM/LXC. For a single-service VM, running Caddy inside is simplest:
  caddy:
    image: caddy:2.11.2-alpine
    container_name: nextcloud-caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # Also declare caddy-data in the top-level volumes block
      - caddy-data:/data
      - ./Caddyfile:/etc/caddy/Caddyfile
With a Caddyfile:
your-domain.com {
    reverse_proxy nextcloud:80
    request_body {
        max_size 16GB
    }
}
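Before relying on the proxy, you can check the Caddyfile syntax inside the running container, and reload it without downtime after edits:

```shell
# Validate the Caddyfile syntax
docker compose exec caddy caddy validate --config /etc/caddy/Caddyfile

# Apply Caddyfile changes with a zero-downtime reload
docker compose exec caddy caddy reload --config /etc/caddy/Caddyfile
```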
Alternatively, use a centralized reverse proxy on another VM. See Reverse Proxy Setup.
Troubleshooting
Database is slow despite SSD storage
Check that the VM disk is on an SSD-backed storage pool. In the Proxmox UI, go to Datacenter > Storage and verify the pool uses SSDs. If the pool is on HDDs, create a new SSD pool and move the VM disk.
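Moving the disk can be done live from the host CLI. A sketch, where `ssd-pool` is a placeholder for your SSD-backed storage ID:

```shell
# Move the boot disk to the SSD pool and delete the old copy afterwards
qm disk move $VM_ID scsi0 ssd-pool --delete 1
```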
NFS mount causes “Stale file handle” errors
NFS mounts can become stale if the NAS reboots or the network drops:
sudo umount -l /mnt/nextcloud-files
sudo mount -a
Add the hard option to the fstab entry so the client retries until the server returns (the legacy intr option is ignored on modern kernels).
VM won’t start after Proxmox upgrade
Check that the SCSI controller and boot order are correct:
qm config $VM_ID | grep -E "scsi|boot"
Reset boot order if needed: qm set $VM_ID --boot order=scsi0
File permissions wrong after NAS mount
Nextcloud runs as www-data (UID 33) inside the container. NAS mounts must map to this UID:
# For NFS: set the export to squash to uid=33
# For CIFS: use uid=33,gid=33 in mount options
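The CIFS fstab entry above references a credentials file. A sketch of creating it, where the username and password are placeholders for your NAS account:

```shell
# Write the SMB credentials file referenced by the fstab entry
sudo tee /etc/nextcloud-nas-creds >/dev/null <<'EOF'
username=YOUR_NAS_USER
password=YOUR_NAS_PASSWORD
EOF

# Restrict it to root only, since it contains a plaintext password
sudo chmod 600 /etc/nextcloud-nas-creds
```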
QEMU guest agent not reporting IP
Inside the VM:
sudo systemctl status qemu-guest-agent
# If stopped:
sudo systemctl enable --now qemu-guest-agent
Resource Requirements
- VM allocation: 2-4 vCPUs, 4-8 GB RAM, 20 GB OS disk
- Nextcloud overhead: ~512 MB RAM (app + PostgreSQL + Redis)
- Disk: OS disk (20 GB) + data disk (size depends on your file library)
- Proxmox host overhead: ~200 MB RAM for the VM process