Install Immich on Proxmox VE

Why Proxmox for Immich?

Proxmox VE is one of the most popular hypervisors for homelabs. Running Immich in a Proxmox VM gives you snapshot-based backups, resource isolation from your other services, and the ability to pass through a GPU for hardware-accelerated transcoding and ML inference. You can also migrate the VM between Proxmox nodes without touching the Immich configuration.

This guide covers VM creation, storage planning, GPU passthrough, and Proxmox-specific backup strategies. It assumes you have a working Proxmox VE 8.x installation.

Prerequisites

  • Proxmox VE 8.0+ installed and accessible via the web UI
  • Hardware: Minimum 4 vCPUs and 8 GB RAM available for the Immich VM (16 GB recommended if enabling ML)
  • Storage: 32 GB for the VM OS disk, plus dedicated storage for the photo library
  • Network: A bridge interface configured (default vmbr0)
  • Optional: Intel CPU with iGPU (for VAAPI/Quick Sync passthrough) or NVIDIA GPU (for NVENC/CUDA passthrough)
  • Ubuntu 24.04 LTS cloud image (downloaded in the setup steps below)

LXC vs VM: Which to Choose

  • Performance: LXC is near-native (shared kernel); a VM adds roughly 5% virtualization overhead.
  • GPU passthrough: limited in LXC (requires a privileged container and manual device mapping); a VM supports full PCI passthrough and is well-supported.
  • Isolation: an LXC container shares the host kernel; a VM is fully isolated.
  • Snapshots/backups: supported for both; more reliable for databases in a VM.
  • Docker inside: works in LXC but requires configuration (nesting, keyctl); works natively in a VM.
  • Recommendation: use LXC if you do not need GPU passthrough and want minimal overhead; otherwise use a VM for the simpler Docker setup, reliable GPU passthrough, and better isolation.

This guide uses a VM. LXC with Docker inside is possible but requires enabling nesting and keyctl features, and GPU passthrough in LXC is fragile. A VM avoids all of these issues.
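
If you do choose LXC anyway, the nesting and keyctl features mentioned above are enabled from the Proxmox host. A sketch; CTID 200 is an example, substitute your container ID:

```shell
# On the Proxmox host. nesting=1 lets Docker run inside the container;
# keyctl=1 allows the kernel keyring syscalls some container runtimes need.
pct set 200 --features nesting=1,keyctl=1
pct reboot 200
```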

Platform Setup

Download the Ubuntu Cloud Image

SSH into your Proxmox host and download the Ubuntu 24.04 cloud image:

cd /var/lib/vz/template/iso/
wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img

Create the VM

Create a VM using the cloud image. Replace VM_ID with your preferred ID (e.g., 200):

VM_ID=200

# Create the VM with basic settings
qm create $VM_ID --name immich --memory 8192 --cores 4 --cpu host \
  --net0 virtio,bridge=vmbr0 --ostype l26 --agent enabled=1

# Import the cloud image as the boot disk
qm importdisk $VM_ID /var/lib/vz/template/iso/noble-server-cloudimg-amd64.img local-lvm

# Attach the disk as SCSI with IO thread for better performance
qm set $VM_ID --scsihw virtio-scsi-single --scsi0 local-lvm:vm-${VM_ID}-disk-0,iothread=1,discard=on

# Resize the OS disk to 32 GB
qm disk resize $VM_ID scsi0 32G

# Add a cloud-init drive for initial configuration
qm set $VM_ID --ide2 local-lvm:cloudinit

# Set boot order
qm set $VM_ID --boot order=scsi0

# Configure cloud-init settings
qm set $VM_ID --ciuser immich --cipassword YOUR_PASSWORD \
  --ipconfig0 ip=dhcp --sshkeys ~/.ssh/authorized_keys

Adjust these values to your environment:

  • --memory 8192: 8 GB RAM. Use 16384 (16 GB) if you plan to enable ML with a large library.
  • --cores 4: 4 vCPUs. Increase if your host has cores to spare.
  • bridge=vmbr0: Your network bridge. Check Network in the Proxmox UI if you use a different bridge.
  • local-lvm: Your storage pool. Replace if you use ZFS, Ceph, or another pool.

Start the VM:

qm start $VM_ID

Initial VM Configuration

Find the VM’s IP address in the Proxmox UI (Summary tab) or from your DHCP server. SSH in:

ssh immich@VM_IP_ADDRESS

Update the system and install Docker:

sudo apt update && sudo apt upgrade -y

# Install Docker CE
curl -fsSL https://get.docker.com | sudo sh
sudo usermod -aG docker $USER
newgrp docker

Install the QEMU guest agent (for Proxmox integration):

sudo apt install -y qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent

Add a Dedicated Data Disk

The OS disk (32 GB) is for the operating system and Docker. Photo storage should be on a separate disk to make management and backups easier.

In the Proxmox UI or via CLI:

# On the Proxmox host -- add a 500 GB data disk
qm set $VM_ID --scsi1 local-lvm:500,iothread=1,discard=on

Inside the VM, format and mount the data disk:

# Identify the new disk
lsblk
# It will appear as /dev/sdb (or similar)

# Partition and format
sudo fdisk /dev/sdb
# n (new), p (primary), defaults, w (write)
sudo mkfs.ext4 /dev/sdb1

# Mount
sudo mkdir -p /mnt/data
sudo mount /dev/sdb1 /mnt/data

# Add to fstab
echo "/dev/sdb1 /mnt/data ext4 defaults,noatime,discard 0 2" | sudo tee -a /etc/fstab

# Set ownership
sudo chown -R $USER:$USER /mnt/data
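
The interactive fdisk session above can also be scripted for repeatable setups. A non-interactive sketch using parted; this is destructive, so double-check with lsblk that the target really is /dev/sdb:

```shell
# WARNING: wipes the target disk. Verify the device name with lsblk first.
DISK=/dev/sdb
sudo parted --script "$DISK" mklabel gpt mkpart primary ext4 1MiB 100%
sudo mkfs.ext4 "${DISK}1"
```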

NFS/CIFS Storage for Photo Library (Alternative)

If you store photos on a NAS, you can mount network storage inside the VM instead of using a local data disk:

# NFS mount
sudo apt install -y nfs-common
sudo mkdir -p /mnt/photos
echo "NAS_IP:/volume1/photos /mnt/photos nfs defaults,noatime 0 0" | sudo tee -a /etc/fstab
sudo mount -a

Important: Mount network storage for the photo library (UPLOAD_LOCATION) only. The PostgreSQL database (DB_DATA_LOCATION) must always live on local storage: running PostgreSQL over NFS can corrupt the database.
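
If you script your deployment, a small guard can catch this mistake before PostgreSQL ever starts. A sketch; the function name is ours:

```shell
# Fail fast if a directory sits on a network filesystem, which is unsafe
# for the PostgreSQL data directory (DB_DATA_LOCATION).
check_local_fs() {
    fstype=$(stat -f -c %T "$1")
    case "$fstype" in
        nfs*|cifs|smb*)
            echo "ERROR: $1 is on $fstype -- move it to local storage" >&2
            return 1 ;;
        *)
            echo "OK: $1 is on $fstype" ;;
    esac
}

# Usage: check_local_fs /mnt/data/immich/postgres
```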

Docker Compose Configuration

Create the project directory:

mkdir -p /mnt/data/immich && cd /mnt/data/immich

Create a .env file:

# /mnt/data/immich/.env

# Photo and video storage.
# Use the dedicated data disk or NFS mount.
UPLOAD_LOCATION=/mnt/data/immich/library

# PostgreSQL data -- MUST be on local storage, never NFS/CIFS.
DB_DATA_LOCATION=/mnt/data/immich/postgres

# Timezone
TZ=America/New_York

# Immich version -- always pin.
IMMICH_VERSION=v1.131.3

# PostgreSQL credentials. Change DB_PASSWORD to a strong random value.
DB_PASSWORD=CHANGE_ME_TO_A_RANDOM_STRING
DB_USERNAME=postgres
DB_DATABASE_NAME=immich
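
A strong random value for DB_PASSWORD can be generated on the VM. One option, assuming openssl is installed (it is by default on Ubuntu); sticking to alphanumerics avoids quoting surprises in the .env file:

```shell
# Print 32 alphanumeric characters of randomness for DB_PASSWORD.
openssl rand -base64 36 | tr -dc 'A-Za-z0-9' | head -c 32; echo
```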

Create the docker-compose.yml:

# /mnt/data/immich/docker-compose.yml
name: immich

services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION}
    volumes:
      - ${UPLOAD_LOCATION}:/data
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
    ports:
      - "2283:2283"
    depends_on:
      redis:
        condition: service_healthy
      database:
        condition: service_healthy
    restart: unless-stopped
    healthcheck:
      disable: false

  immich-machine-learning:
    container_name: immich_machine_learning
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION}
    volumes:
      - model-cache:/cache
    env_file:
      - .env
    restart: unless-stopped
    healthcheck:
      disable: false

  redis:
    container_name: immich_redis
    image: docker.io/valkey/valkey:8-alpine
    healthcheck:
      test: valkey-cli ping || exit 1
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

  database:
    container_name: immich_postgres
    image: docker.io/tensorchord/pgvecto-rs:pg16-v0.4.0
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
      POSTGRES_INITDB_ARGS: "--data-checksums"
    volumes:
      - ${DB_DATA_LOCATION}:/var/lib/postgresql/data
    restart: unless-stopped
    healthcheck:
      test: pg_isready -U ${DB_USERNAME} -d ${DB_DATABASE_NAME} || exit 1
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  model-cache:

Create directories and start the stack:

mkdir -p /mnt/data/immich/library /mnt/data/immich/postgres
cd /mnt/data/immich
docker compose up -d

Verify all containers are running:

docker compose ps

First-Time Setup

  1. Open http://VM_IP_ADDRESS:2283 in a browser.
  2. Click Getting Started to create your admin account.
  3. Navigate to Administration and configure:
    • Storage Template: Set a file organization pattern such as {{y}}/{{y}}-{{MM}}-{{dd}}/{{filename}}.
    • Machine Learning: Enabled by default.
    • Video Transcoding: If you have set up GPU passthrough (see below), enable hardware transcoding here.

Platform-Specific Optimization

Intel iGPU Passthrough (VAAPI/Quick Sync)

Intel integrated graphics passthrough enables hardware video transcoding and OpenVINO ML acceleration. This is the most common GPU setup in homelabs.

On the Proxmox host:

  1. Enable IOMMU in the BIOS (usually labeled “VT-d” for Intel).

  2. Enable IOMMU in the kernel. Edit /etc/default/grub:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

Update GRUB and reboot:

update-grub
reboot
  3. Verify IOMMU is active:
dmesg | grep -e DMAR -e IOMMU

You should see messages about IOMMU being enabled.

  4. Find the iGPU PCI device:
lspci | grep -i vga
# Example output: 00:02.0 VGA compatible controller: Intel Corporation...
  5. Add the PCI device to the VM. In the Proxmox UI: VM > Hardware > Add > PCI Device. Select the Intel VGA controller. Enable:
    • All Functions: Yes
    • Primary GPU: No (unless you want console output on the GPU)

Or via CLI:

qm set $VM_ID --hostpci0 0000:00:02.0
  6. Inside the VM, verify the GPU is visible:
ls -la /dev/dri/
# Should show card0 and renderD128

# Install VA-API tools
sudo apt install -y vainfo intel-media-va-driver-non-free

# Test VA-API
vainfo
  7. Add the device to the Immich server in docker-compose.yml:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION}
    devices:
      - /dev/dri:/dev/dri
    volumes:
      - ${UPLOAD_LOCATION}:/data
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
    ports:
      - "2283:2283"
    depends_on:
      redis:
        condition: service_healthy
      database:
        condition: service_healthy
    restart: unless-stopped
    healthcheck:
      disable: false

For OpenVINO ML acceleration, also switch the ML image:

  immich-machine-learning:
    container_name: immich_machine_learning
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION}-openvino
    devices:
      - /dev/dri:/dev/dri
    volumes:
      - model-cache:/cache
    env_file:
      - .env
    restart: unless-stopped
    healthcheck:
      disable: false

Redeploy with docker compose up -d and enable VAAPI or Quick Sync in Administration > Video Transcoding Settings.

NVIDIA GPU Passthrough

For dedicated NVIDIA GPUs (GTX/RTX series):

On the Proxmox host:

  1. Enable IOMMU (same as Intel section above).

  2. Blacklist the Nouveau driver so Proxmox does not grab the GPU:

echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
echo "options nouveau modeset=0" >> /etc/modprobe.d/blacklist.conf
update-initramfs -u
reboot
  3. Find the NVIDIA PCI device and its audio controller:
lspci | grep -i nvidia
# Example:
# 01:00.0 VGA compatible controller: NVIDIA Corporation...
# 01:00.1 Audio device: NVIDIA Corporation...
  4. Add the GPU to the VM. Specifying 0000:01:00 with no function suffix passes through all functions, so the GPU and its audio companion go together. Note that pcie=1 requires the q35 machine type:
qm set $VM_ID --machine q35
qm set $VM_ID --hostpci0 0000:01:00,pcie=1
  5. Inside the VM, install the NVIDIA driver and container toolkit:
# Install NVIDIA driver
sudo apt install -y nvidia-driver-550

# Reboot
sudo reboot

# Verify
nvidia-smi

# Install NVIDIA Container Toolkit
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

sudo apt update
sudo apt install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
  6. Update docker-compose.yml for GPU transcoding and CUDA ML:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION}
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities:
                - gpu
                - video
    volumes:
      - ${UPLOAD_LOCATION}:/data
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
    ports:
      - "2283:2283"
    depends_on:
      redis:
        condition: service_healthy
      database:
        condition: service_healthy
    restart: unless-stopped
    healthcheck:
      disable: false

  immich-machine-learning:
    container_name: immich_machine_learning
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION}-cuda
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities:
                - gpu
    volumes:
      - model-cache:/cache
    env_file:
      - .env
    restart: unless-stopped
    healthcheck:
      disable: false

Enable NVENC in Administration > Video Transcoding Settings after redeploying.

Resource Allocation Recommendations

Library size              vCPUs  RAM    ML service
< 10,000 photos           2      4 GB   Optional (disable to save resources)
10,000 - 100,000 photos   4      8 GB   Enabled, 1 worker
100,000+ photos           4-8    16 GB  Enabled, 2+ workers with GPU

Set resources in the Proxmox UI under VM > Hardware, or via CLI:

qm set $VM_ID --memory 16384 --cores 8

Enable ballooning for dynamic memory if you want Proxmox to reclaim unused RAM:

qm set $VM_ID --balloon 4096

This sets the minimum to 4 GB but allows the VM to use up to its configured maximum.

Backup with Proxmox Backup Server

Proxmox Backup Server (PBS) provides incremental, deduplicated VM backups. This is the simplest way to back up your entire Immich installation — VM disk, database, and photos in one backup job.

Set up a backup schedule in the Proxmox UI:

  1. Go to Datacenter > Backup in the Proxmox web UI.
  2. Click Add to create a new backup job.
  3. Select your PBS storage (or local storage for vzdump backups).
  4. Select the Immich VM.
  5. Set the schedule (daily at 2:00 AM is a good default).
  6. Set mode to Snapshot (backs up while the VM is running, no downtime).
  7. Set retention (e.g., keep daily: 7, weekly: 4, monthly: 3).
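
Before relying on the schedule, the same backup can be run once from the CLI to confirm it works end to end. A sketch; the storage name pbs is an assumption, substitute yours:

```shell
# On the Proxmox host -- one-off snapshot-mode backup of the Immich VM (ID 200)
vzdump 200 --storage pbs --mode snapshot
```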

For consistent database backups, also run a pg_dump inside the VM before the Proxmox backup. Create a cron job:

# Inside the VM
sudo mkdir -p /mnt/data/immich/backups

# Add to crontab
(crontab -l 2>/dev/null; echo "0 1 * * * docker exec immich_postgres pg_dump -U postgres -d immich | gzip > /mnt/data/immich/backups/immich-db-\$(date +\%Y\%m\%d).sql.gz") | crontab -

This dumps the database at 1:00 AM, and the Proxmox backup at 2:00 AM captures the fresh dump along with everything else.
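
The dump directory grows by one file per day. A small retention helper (sketch; the function name is ours) can run from the same crontab:

```shell
# Delete gzipped dumps older than N days.
#   $1 = backup directory, $2 = retention in days
prune_backups() {
    find "$1" -name 'immich-db-*.sql.gz' -mtime +"$2" -print -delete
}

# Example: keep roughly two weeks of daily dumps
# prune_backups /mnt/data/immich/backups 14
```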

Restore: Restore the VM from PBS, start it, and Immich is back exactly as it was. If you need to restore just the database, extract the SQL dump from inside the restored VM.

Network Configuration

The default bridged network (vmbr0) gives the VM a regular IP on your LAN. This is the simplest setup and works for most homelabs.

If you want a static IP (recommended for a server), configure it inside the VM:

# Using netplan (Ubuntu 24.04 default)
sudo tee /etc/netplan/01-static.yaml > /dev/null <<EOF
network:
  version: 2
  ethernets:
    ens18:
      dhcp4: false
      addresses:
        - 192.168.1.50/24
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses:
          - 1.1.1.1
          - 8.8.8.8
EOF

sudo netplan apply

Replace the IP addresses with your network’s configuration. ens18 is the default interface name in Proxmox VMs — verify with ip a.
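
When reconfiguring over SSH, netplan try is a safer first step than netplan apply: it rolls back automatically after 120 seconds unless you confirm, so a typo cannot lock you out:

```shell
# Applies the config, then reverts unless you press Enter to confirm.
sudo netplan try
```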

Troubleshooting

GPU Not Visible Inside VM

Symptom: lspci inside the VM does not show the GPU, or /dev/dri/ is empty.

Fix: Verify IOMMU is enabled on the Proxmox host:

# On the Proxmox host
dmesg | grep -e DMAR -e IOMMU

If no output, IOMMU is not active. Check:

  1. BIOS: VT-d / IOMMU must be enabled.
  2. /etc/default/grub: intel_iommu=on iommu=pt must be in GRUB_CMDLINE_LINUX_DEFAULT.
  3. Run update-grub and reboot.

For Intel iGPU, also verify the PCI device ID is correct in the VM configuration. The iGPU is typically 0000:00:02.0.

VM Boot Fails After GPU Passthrough

Symptom: The VM hangs on boot or shows a black screen after adding a PCI device.

Fix: Common causes:

  1. ROM bar issue: In the Proxmox UI, edit the PCI device and try toggling “ROM-Bar” on or off.
  2. Primary GPU conflict: If you set the passthrough device as Primary GPU, the VM expects console output on that GPU. Remove “Primary GPU” unless you have a monitor connected.
  3. IOMMU group conflict: Some devices share IOMMU groups. You may need to pass through all devices in the group. Check with:
# On the Proxmox host
find /sys/kernel/iommu_groups/ -type l | sort -V

Database Corruption After Snapshot Restore

Symptom: PostgreSQL fails to start after restoring a Proxmox snapshot, with errors about corrupt WAL files.

Fix: Proxmox snapshots are crash-consistent, not application-consistent. PostgreSQL may have had in-flight writes during the snapshot. Run the pg_dump cron job before scheduled backups (see Backup section) to have a consistent SQL dump available.

If the database is corrupted:

# Stop Immich
cd /mnt/data/immich
docker compose stop immich-server immich-machine-learning

# Restore from the SQL dump
docker compose up -d database
sleep 10
gunzip -c /mnt/data/immich/backups/immich-db-YYYYMMDD.sql.gz | docker exec -i immich_postgres psql -U postgres -d immich

# Restart everything
docker compose up -d

High Disk I/O Latency

Symptom: Immich is slow, especially thumbnail generation and ML processing. iostat shows high await times.

Fix: Check your storage backend:

  1. local-lvm (LVM-thin on SSD): Best performance. This is the recommended setup.
  2. local-lvm (LVM-thin on HDD): Acceptable for photo storage, poor for database. Consider moving the database to SSD storage.
  3. NFS storage: Do not put the database on NFS. If your photo library is on NFS and performance is poor, check network throughput and NFS server disk speed.

Enable discard (TRIM) on the VM disk for SSD backends:

# On the Proxmox host
qm set $VM_ID --scsi0 local-lvm:vm-${VM_ID}-disk-0,iothread=1,discard=on

Inside the VM, enable periodic TRIM:

sudo systemctl enable --now fstrim.timer
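
A one-off manual TRIM confirms the whole chain (guest filesystem, virtual disk with discard=on, storage backend) actually works:

```shell
# Prints how many bytes were trimmed; errors out if discard is not
# supported somewhere along the chain.
sudo fstrim -v /mnt/data
```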

Container Networking Issues After Proxmox Update

Symptom: After a Proxmox update, containers inside the VM cannot reach the internet.

Fix: Proxmox updates sometimes modify bridge or firewall settings. Inside the VM:

# Check DNS resolution
nslookup google.com

# If DNS fails, check /etc/resolv.conf
cat /etc/resolv.conf

# Check gateway
ip route

If the bridge configuration changed, verify vmbr0 settings in the Proxmox UI under Node > Network. Ensure the VM’s network device is still connected to the correct bridge.

Resource Requirements

  • RAM: 8 GB minimum for the VM; budget for OS and Docker overhead on top of Immich itself. 16 GB recommended with ML enabled.
  • CPU: 4 vCPUs minimum. 8 vCPUs recommended for libraries over 100,000 photos. Use host CPU type for best performance.
  • Disk: 32 GB OS disk + dedicated data disk sized to your photo library. Use SSD-backed storage for the OS and database.
  • GPU: Optional. Intel iGPU or NVIDIA GPU via PCI passthrough dramatically improves transcoding and ML performance.
