A Raspberry Pi 5 nutrition stack with k3s
Pi 5, 8 GiB, k3s, Helm. Three pods that survive a power cut.
Why a Pi 5
We have run this stack on:
- Pi 4 (4 GiB) — kept hitting OOMs on the OFF mirror sync. Workable if you trim the dataset.
- Pi 5 (8 GiB) — fits comfortably with headroom for one more service.
- A used Lenovo M715q (8 GiB DDR4) — the actually-correct answer if you can find one for $80.
The Pi 5 is a reasonable middle ground. It draws under 6 W idle, has decent USB-3 throughput for an attached SSD, and mainline kernel support is in good shape. We document the Pi 5 path because it's the one we get asked about most.
Hardware
- Pi 5, 8 GiB
- Active cooler (the official one is fine; the Pi 5 absolutely benefits)
- 256 GB NVMe over the M.2 hat (or a USB-3 SSD)
- A UPS. Yes, even on a Pi. We use a NUT-compatible CyberPower CP425. Power events corrupt SQLite databases.
- Gigabit ethernet, ideally to a switch, not a powerline adapter.
OS
Raspberry Pi OS 64-bit, Bookworm. We've had issues with the lite image's cgroup defaults under k3s (the memory controller isn't enabled out of the box); append the following to the single line in /boot/firmware/cmdline.txt:
cgroup_enable=memory cgroup_memory=1
Reboot.
k3s install
curl -sfL https://get.k3s.io | sh -s - server \
--disable traefik \
--write-kubeconfig-mode 644 \
--node-name pi5-nutrition \
--tls-san pi5-nutrition.lan.example.org
We disable the bundled Traefik because we’ll run a Caddy ingress instead. Mostly habit.
kubectl get nodes
# pi5-nutrition Ready control-plane,master 30s v1.30.x+k3s1
Storage layout
k3s ships with local-path-provisioner which is fine for a single-node cluster. We bind the underlying directory to the NVMe:
sudo systemctl stop k3s
sudo mkdir -p /mnt/nvme/k3s-storage
# If the target directory already exists, ln -s would create the link *inside* it.
# It's empty on a fresh install; back it up first otherwise.
sudo rm -rf /var/lib/rancher/k3s/storage
sudo ln -s /mnt/nvme/k3s-storage /var/lib/rancher/k3s/storage
sudo systemctl start k3s
PVCs land on NVMe, which matters for the OFF mirror’s sustained I/O.
The three pods
1. OFF mirror
apiVersion: apps/v1
kind: Deployment
metadata:
  name: off-mirror
spec:
  replicas: 1
  selector:
    matchLabels: { app: off-mirror }
  template:
    metadata:
      labels: { app: off-mirror }
    spec:
      containers:
      - name: off-mirror
        image: openfoodfacts/openfoodfacts-server:nightly
        env:
        - name: OFF_MIRROR_MODE
          value: read-only
        ports:
        - containerPort: 8080
        resources:
          requests: { memory: "512Mi", cpu: "200m" }
          limits: { memory: "1Gi", cpu: "1000m" }
        volumeMounts:
        - { name: off-data, mountPath: /var/data/off }
      volumes:
      - name: off-data
        persistentVolumeClaim:
          claimName: off-data-pvc
PVC: 12 GiB. The mirror peaks around 9 GiB with some slack.
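For completeness, the claim the Deployment references, sized per that note (a sketch; `local-path` is the storage class k3s ships by default):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: off-data-pvc
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: local-path   # k3s default single-node provisioner
  resources:
    requests:
      storage: 12Gi
```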
2. Postgres (USDA cache)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pg-usda
spec:
  serviceName: pg-usda
  replicas: 1
  selector:
    matchLabels: { app: pg-usda }
  template:
    metadata:
      labels: { app: pg-usda }
    spec:
      containers:
      - name: postgres
        image: postgres:16-alpine
        env:
        - name: POSTGRES_DB
          value: usda
        - name: POSTGRES_USER
          value: usda
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef: { name: pg-usda-secret, key: password }
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        ports:
        - containerPort: 5432
        resources:
          requests: { memory: "256Mi", cpu: "100m" }
          limits: { memory: "768Mi", cpu: "500m" }
        volumeMounts:
        - { name: pg-data, mountPath: /var/lib/postgresql/data }
  volumeClaimTemplates:
  - metadata: { name: pg-data }
    spec:
      accessModes: [ReadWriteOnce]
      resources: { requests: { storage: 10Gi } }
Bulk-load the USDA FDC dump on first run; see USDA bulk → Postgres.
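One way to run the load in-cluster is a one-shot Job that streams the CSV straight into `\copy`, so nothing touches local disk. A sketch, with a placeholder URL and a hypothetical `food` table (the real schema and file list live in the USDA bulk → Postgres doc):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: usda-bulk-load
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: load
        image: postgres:16-alpine
        env:
        - name: PGPASSWORD
          valueFrom:
            secretKeyRef: { name: pg-usda-secret, key: password }
        command: ["sh", "-c"]
        # Stream the dump into psql over the pg-usda Service.
        args:
          - wget -qO- https://example.org/fdc/food.csv |
            psql -h pg-usda -U usda -d usda -c '\copy food FROM STDIN CSV HEADER'
```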
3. Caddy ingress
A tiny Caddy that routes /off/* to the mirror and /usda/* to a small wrapper around Postgres. The Caddyfile is ConfigMap-mounted; the Service is exposed as a NodePort or LoadBalancer (we use NodePort plus an external nginx on the router).
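A sketch of that ConfigMap, assuming the Postgres wrapper's Service is named usda-api (the names are ours to pick; Caddy's `handle_path` strips the matched prefix before proxying):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: caddy-config
data:
  Caddyfile: |
    :8080 {
      handle_path /off/* {
        reverse_proxy off-mirror:8080
      }
      handle_path /usda/* {
        reverse_proxy usda-api:8080
      }
    }
```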
Boot order and resilience
startup-order.yaml (an init-container pattern, patched into the OFF mirror's pod spec):
initContainers:
- name: wait-for-postgres
  image: busybox:1.36
  command: ['sh', '-c', 'until nc -z pg-usda 5432; do sleep 2; done']
After a power cut, k3s comes up first (~90s), Postgres comes up next (~30s), the OFF mirror waits for its PVC (~20s) and then starts (~60s). Caddy comes up last and starts serving once both upstreams pass health checks. End-to-end recovery: about six minutes.
Monitoring
We run node-exporter, postgres-exporter, and a tiny custom probe that hits the OFF mirror’s /healthz every 30 seconds. All three feed a Prometheus + Grafana on a different node (we wouldn’t put metrics storage on the Pi). Dashboard: nutrition-stack-overview.json in the homelab repo.
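The same /healthz check can also live in-cluster as a liveness probe on the off-mirror container, so the kubelet restarts a wedged mirror even when the external probe host is down (a sketch, reusing the probe's path and interval):

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 30
  failureThreshold: 3
```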
Costs
Pi 5 8 GiB: $80
Cooler + case: $20
NVMe + hat: $50
UPS: $80 (one-time)
Power: 6 W * 24 h * 365 d * $0.14/kWh = **$7.36/yr**
Five-year cost-of-ownership, including replacement SD card / minor parts: about $250.
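The power figure is easy to check with a one-liner (swap in your utility's $/kWh rate):

```shell
# 6 W, 24 h/day, 365 d/yr, $0.14/kWh -> yearly cost in dollars
awk 'BEGIN { printf "%.2f\n", 6 * 24 * 365 / 1000 * 0.14 }'
# prints 7.36
```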
What can go wrong
- SD card death. Don't use the SD card past the initial install; run the OS and all data from NVMe.
- Thermal throttle. Without active cooling, the Pi 5 throttles under sustained Postgres queries. Don't skip the cooler.
- k3s on cgroupv1. The Bookworm default is fine. Bullseye needs the cmdline tweaks above.
- OFF sync stuck on slow uplink. The mirror retries indefinitely. We have hit the 12-hour mark on a household 25 Mbps line.
References
- k3s: docs.k3s.io
- OpenFoodFacts server: github.com/openfoodfacts/openfoodfacts-server
- USDA bulk → Postgres
- Backups and restore