#proxmox #lxc #vm #self-hosting #benchmark

Proxmox VM vs LXC vs bare metal for the nutrition stack

Which abstraction earns its overhead and which doesn't, for a tracker stack on a small homelab.

Setup

Same hardware, three personalities:

  • Bare metal: Debian 12, no virtualisation. The control case.
  • Proxmox LXC: a container with the same Debian 12 rootfs, 2 cores assigned, 4 GiB RAM.
  • Proxmox VM: KVM, virtio-net, virtio-blk, 2 vCPU, 4 GiB RAM.

Hardware: Lenovo M715q SFF, AMD Ryzen 5 PRO 2400GE, 16 GiB DDR4, 256 GB Crucial M.2 SATA SSD.

Workload: the standard nutrition stack — OFF mirror (read-only mode, query-only), Postgres 16 with the USDA FDC dump loaded, Caddy ingress.
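For orientation, the stack under test looks roughly like this as a Compose file. This is a sketch, not our actual file: the `off-mirror` image name, the `OFF_READ_ONLY` variable, and the `fdc` database name are illustrative assumptions; only "OFF mirror + Postgres 16 + Caddy" is from the text above.

```yaml
# Sketch of the nutrition stack (names and images are assumptions)
services:
  off-mirror:
    image: ghcr.io/example/off-mirror:latest   # hypothetical image
    environment:
      OFF_READ_ONLY: "1"                       # query-only mode, per the setup
    depends_on: [db]
  db:
    image: postgres:16
    environment:
      POSTGRES_DB: fdc                         # holds the USDA FDC dump
    volumes:
      - pgdata:/var/lib/postgresql/data
  caddy:
    image: caddy:2                             # ingress in front of both
    ports: ["80:80", "443:443"]
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
volumes:
  pgdata:
```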

Benchmarks

OFF mirror barcode lookup, p99 (10,000 requests, mixed barcode set)

Layer          p99 latency   Throughput
Bare metal     38 ms         480 req/s
Proxmox LXC    41 ms         460 req/s
Proxmox VM     49 ms         380 req/s

LXC overhead is small (~8% on tail latency, ~4% on throughput). VM overhead is more visible (~29% on tail, ~21% on throughput).
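The overhead percentages are straight arithmetic on the table, relative to the bare-metal row; a quick check:

```shell
# Overhead vs. bare metal: p99 38/41/49 ms, throughput 480/460/380 req/s
awk 'BEGIN {
  printf "LXC p99 overhead:    +%.0f%%\n", (41 - 38) / 38 * 100
  printf "VM  p99 overhead:    +%.0f%%\n", (49 - 38) / 38 * 100
  printf "LXC throughput drop: -%.0f%%\n", (480 - 460) / 480 * 100
  printf "VM  throughput drop: -%.0f%%\n", (480 - 380) / 480 * 100
}'
```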

Postgres bulk restore (USDA FDC ~6 GiB)

Layer          Wall-clock
Bare metal     4m 12s
Proxmox LXC    4m 28s
Proxmox VM     6m 03s

VM disk I/O is the dominant penalty. LXC’s bind mounts to the underlying filesystem are essentially as fast as bare metal.
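The restore itself is a plain pg_restore run under time. The file name, database name, and the custom-format dump are assumptions; `-j 2` matches the two assigned cores.

```shell
# Rough reproduction of the restore benchmark
# (dump file, DB name, and dump format are assumptions)
createdb fdc
time pg_restore -j 2 --no-owner -d fdc usda_fdc.dump
```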

Caddy round-trip on cached path

Layer          Median RTT (LAN)
Bare metal     0.91 ms
Proxmox LXC    1.08 ms
Proxmox VM     1.62 ms

Sub-millisecond differences, but visible under sustained load.
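One way to measure that median, sketched here with curl's timing output (the hostname is a placeholder, and 100 samples is an arbitrary choice):

```shell
# Median round-trip on the cached path, 100 samples
# (hostname is an assumption)
for i in $(seq 1 100); do
  curl -s -o /dev/null -w '%{time_total}\n' http://nutrition.lan/
done | sort -n | awk '{ t[NR] = $1 } END { print "median:", t[int((NR + 1) / 2)], "s" }'
```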

Operational trade-offs

Bare metal

Pros: nothing in the way. Easy to reason about. Easy to recover from a snapshot of the disk.

Cons: one OS install per host. Every package conflict is a real problem. Migrations are weekend projects.

We run bare metal where the host is doing one thing and we know what that thing is.

Proxmox LXC

Pros: nearly bare-metal performance. Fast to spin up (~3s). Snapshots are cheap. Trivial rollback. Resource limits are real and enforced.

Cons: LXCs share the host kernel, so one kernel-level CVE exposes every container at once rather than one at a time. Some apps (anything that wants its own kernel modules, or runs nested Docker poorly) don’t fit.

Caveat: nested Docker inside LXC works, but we’ve had networking surprises with Docker Compose and bridge interfaces inside privileged LXCs. Allow extra debugging time the first time you do it.

Proxmox VM

Pros: full kernel isolation. Can run an entirely different OS. Memory ballooning. Live migration to another Proxmox host (if you have one).

Cons: 20–30% slower for I/O-heavy work. RAM costs more (you can’t share kernel pages across VMs the way LXCs do). Slower to start (~30s).

We use VMs when we genuinely need a different kernel (an OpenWRT VM for testing, a Windows VM for one specific reason, a router VM with PCIe passthrough).

What we run

The nutrition stack is in an LXC. CT ID 211, hostname nutrition, running Docker Compose inside.

$ pct config 211
arch: amd64
cores: 2
features: nesting=1
hostname: nutrition
memory: 4096
nameserver: 1.1.1.1
net0: name=eth0,bridge=vmbr0,gw=192.168.1.1,hwaddr=BC:24:11:..,ip=192.168.1.42/24
ostype: debian
rootfs: local-lvm:vm-211-disk-0,size=24G
swap: 0
unprivileged: 1

Two cores, 4 GiB RAM, 24 GiB rootfs. Unprivileged (so no root-on-host even if a process inside escapes the container). nesting=1 to allow Docker.
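Feature flags can be toggled after creation with pct set. Our config above only sets nesting; keyctl is another flag Docker in unprivileged containers often wants, so treat its inclusion here as an assumption, not something our setup required.

```shell
# Enable nesting (and, often needed for Docker, keyctl) on CT 211
pct set 211 --features nesting=1,keyctl=1
pct reboot 211
```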

Backups across these layers

  • Bare metal: Borgmatic on the host filesystem. Standard.
  • LXC: vzdump snapshots from Proxmox plus Borgmatic inside. Two layers, both useful.
  • VM: vzdump snapshots only. Don’t run Borg inside the VM if you can run it outside.

vzdump snapshots of the LXC are about 800 MiB and complete in ~30 seconds. That’s the actual reason we love LXCs.
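A manual version of that backup, for the record; scheduled backup jobs normally handle this, and the storage target name is an assumption about the local setup.

```shell
# Snapshot-mode backup of CT 211, zstd-compressed
vzdump 211 --mode snapshot --compress zstd --storage local
```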

When to bare-metal anyway

Three cases:

  • Pi 5. Proxmox doesn’t support it. Run Docker on the host. Done.
  • Single-purpose VPS. Spinning up Proxmox on a $4 Hetzner CX22 is silly. Just run Docker on the VPS.
  • Hardware where Proxmox doesn’t have good driver support. Rare on modern x86, occasional on ARM SBCs other than the Pi.

Which leaves the rules of thumb:

  • Multi-purpose homelab, x86: Proxmox + LXC for the nutrition stack.
  • Pi 5: Debian + Docker Compose, no virtualisation.
  • Cheap VPS: same.
  • Anything more complicated: usually a sign you should split the workload.
