Backups and restore for self-hosted nutrition data
Borgmatic + restic, two locations, monthly restore drill. Yes, even for a calorie diary.
Why bother
Three reasons.
- You’ll be surprised what you regret losing. Five years of calorie data is five years of weight trend data. Long-term trend data is the entire point of self-tracking. Lose it and you lose the analysis.
- App developers ship schema migrations that fail. OpenNutriTracker has had two minor releases that ate the local DB on upgrade. We had a backup. Other people on /r/selfhosted that day did not.
- Hardware dies. SD cards die. NVMes die. Hetzner has had bad days. Two locations is not paranoia.
What to back up
Surprisingly little.
- The phone’s exported JSON snapshots from the tracker app (typically <50 MB).
- Postgres dumps from the USDA cache (<1 GB).
- The OFF mirror’s local cache (rebuildable from upstream, optional).
- Caddy logs (privacy-sensitive — encrypt and rotate).
- Anything you’ve manually annotated: recipes, custom foods, restaurant templates.
Maybe 1.5 GB total actually needs to survive. The OFF mirror data is rebuildable, so we don’t back it up; we snapshot only a manifest recording which upstream version we’re synced to.
Tools
We use Borgmatic for the homelab and restic for the off-site copy. We have used both for years and trust both equally; the choice mostly comes down to:
- Borg has better dedup ratios on text-heavy data (our case).
- restic has nicer S3-compatible backend support out of the box.
If you’re starting fresh, restic is probably the easier first choice. If you’re already running Borgmatic for the rest of your homelab, add nutrition into the existing config.
Borgmatic config
# /etc/borgmatic/config.yaml
location:
    source_directories:
        - /var/lib/postgresql/dumps
        - /home/ont-backups
        - /etc/caddy
    repositories:
        - path: /mnt/backup-disk/borg-nutrition
          label: local
        - path: ssh://borg@offsite.example.org/./nutrition
          label: offsite
storage:
    encryption_passcommand: cat /etc/borg/passphrase
    compression: zstd,9
retention:
    keep_daily: 7
    keep_weekly: 4
    keep_monthly: 6
    keep_yearly: 2
consistency:
    checks:
        - name: archives
          frequency: 1 month
        - name: data
          frequency: 1 week
hooks:
    before_backup:
        - pg_dump -U usda usda > /var/lib/postgresql/dumps/usda-pre-backup.sql
    after_backup:
        - rm /var/lib/postgresql/dumps/usda-pre-backup.sql
    on_error:
        - /usr/local/bin/notify-self.sh borg-failure
Run from a systemd timer at 02:30 nightly. The pre-backup pg_dump ensures the DB is consistent (rather than backing up live /var/lib/postgresql/data, which is a recipe for a corrupt restore).
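On Debian-family systems the borgmatic package already ships a `borgmatic.timer`; a drop-in override pins it to 02:30. The stock schedule varies by distro, so treat this as a sketch:

```ini
# /etc/systemd/system/borgmatic.timer.d/override.conf
[Timer]
# An empty OnCalendar= clears the packaged schedule before setting ours.
OnCalendar=
OnCalendar=*-*-* 02:30:00
# Catch up on runs missed while the box was powered off.
Persistent=true
```

After dropping it in: `systemctl daemon-reload && systemctl restart borgmatic.timer`, then `systemctl list-timers borgmatic.timer` to confirm the next run.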
restic for off-site
# /etc/profile.d/restic-nutrition.sh
export RESTIC_REPOSITORY="b2:nutrition-backup:nutrition"
export RESTIC_PASSWORD_FILE="/etc/restic/password"
export B2_ACCOUNT_ID="..."
export B2_ACCOUNT_KEY="..."
# /etc/cron.d/restic-nutrition
# (cron commands cannot span lines, so this stays on one line)
0 4 * * * root . /etc/profile.d/restic-nutrition.sh && restic backup /var/lib/postgresql/dumps /home/ont-backups && restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
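Note that `restic backup` and `restic forget` never read the stored data back; `restic check` does. At this repo size, a weekly structural check plus a random 10% data read is cheap insurance. Schedule and subset size here are assumptions, adjust to taste:

```shell
# /etc/cron.d/restic-nutrition-check
# Sunday 04:30: verify repo structure and read back a random 10% of pack data.
30 4 * * 0 root . /etc/profile.d/restic-nutrition.sh && restic check --read-data-subset=10%
```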
Backblaze B2 costs us about $0.40/mo for the 1.5 GB of data we keep. At that price it’s effectively free.
Restore drill
This is the part nobody does, and it’s the actual point of the whole setup.
Monthly
Pick a random archive from the last week. Restore it to a scratch directory. Verify:
borg extract --dry-run /mnt/backup-disk/borg-nutrition::nutrition-2026-02-15T02:30:01
# borg extract writes into the current directory; there is no --destination flag
mkdir -p /tmp/restore-test && cd /tmp/restore-test
borg extract /mnt/backup-disk/borg-nutrition::nutrition-2026-02-15T02:30:01
# verify pg dump (plain-format SQL, so pg_restore can't read it;
# check for pg_dump's end-of-dump marker instead)
tail -n 5 /tmp/restore-test/var/lib/postgresql/dumps/usda-pre-backup.sql \
  | grep -q 'PostgreSQL database dump complete'
# verify ONT json
unzip -t /tmp/restore-test/home/ont-backups/ont-backup-latest.zip
If any of these fail, fix them now. Not later.
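Worth scripting, so that “fix them now” actually happens. A small POSIX-sh helper for the pg-dump half of the check: a plain-format dump can’t be fed to pg_restore, but every complete one ends with a fixed marker comment, which is cheap to test. The path in the usage example is an assumption about this layout.

```shell
#!/bin/sh
# verify_pg_dump DUMPFILE -- sanity-check a plain-format pg_dump:
# the file must be non-empty and end with pg_dump's completion marker
# ("-- PostgreSQL database dump complete"), which truncated dumps lack.
verify_pg_dump() {
    dump="$1"
    [ -s "$dump" ] || { echo "FAIL: $dump missing or empty"; return 1; }
    tail -n 5 "$dump" | grep -q 'PostgreSQL database dump complete' \
        || { echo "FAIL: $dump has no completion marker (truncated?)"; return 1; }
    echo "OK: $dump"
}

# Example against the scratch restore:
# verify_pg_dump /tmp/restore-test/var/lib/postgresql/dumps/usda-pre-backup.sql
```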
Quarterly
Full disaster-recovery drill. We spin up a fresh VM on the homelab, run the Ansible playbook against it from scratch, restore the off-site (restic) backup into it, and verify the OFF mirror, Postgres, and ONT export all come up healthy. Total time: about 90 minutes.
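The “come up healthy” step benefits from being scripted too. A minimal sketch of a check runner in POSIX sh; the commented example checks (service endpoints, DB name, export path) are assumptions about this particular stack, not part of any tool:

```shell
#!/bin/sh
# run_checks NAME CMD [NAME CMD]... -- run each health check, print
# PASS/FAIL per check, and exit nonzero if any check failed.
run_checks() {
    rc=0
    while [ "$#" -ge 2 ]; do
        name="$1"; cmd="$2"; shift 2
        if sh -c "$cmd" > /dev/null 2>&1; then
            echo "PASS: $name"
        else
            echo "FAIL: $name"; rc=1
        fi
    done
    return $rc
}

# Example drill checks (all hypothetical for this stack):
# run_checks \
#     "postgres up"   "pg_isready -q" \
#     "usda db"       "psql -U usda -d usda -tAc 'select 1'" \
#     "off mirror"    "curl -fsS http://localhost:8080/" \
#     "ont export ok" "unzip -t /home/ont-backups/ont-backup-latest.zip"
```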
We have done this 11 times. Two failed, and each failure taught us something:
- Once because the restic password had been rotated and the keyring entry was stale.
- Once because Postgres 15 → 16 had introduced a silent default for wal_level that broke the dump-and-restore on the new version. Documented now.
What not to do
- Don’t only have one backup destination. The Pi’s SD card is not a backup, it’s the source.
- Don’t back up the OFF mirror’s full data dir. It’s 9 GiB of stuff you can re-download from upstream.
- Don’t store the encryption passphrase on the same machine as the backups. We keep ours in a password manager plus a printed paper copy in a fireproof box.
- Don’t skip the restore drill. Real backups are restores you’ve tested.
A small Ansible playbook
- hosts: backup-host
  become: yes
  tasks:
    - name: install borgmatic
      apt:
        name: borgmatic
        state: present
    - name: deploy borgmatic config
      template:
        src: borgmatic-config.yaml.j2
        dest: /etc/borgmatic/config.yaml
        mode: '0600'
    - name: enable borgmatic timer
      systemd:
        name: borgmatic.timer
        enabled: yes
        state: started
Full playbook in the homelab repo (private; ask if you want a redacted copy).
References
- Borgmatic: torsion.org/borgmatic
- restic: restic.net
- Borg: borgbackup.org
- Backblaze B2: backblaze.com/b2
- Pi 5 nutrition stack