Methodology

A standing document. Updated when the test bench changes. Last revised April 2026.

Test hardware

Sample food set

For barcode and database testing, we use the same 30-item shopping list, refreshed quarterly. It is intentionally heterogeneous.

The list lives in a private repo and rotates so apps cannot game it. Each item is logged on every app under test using whatever the app's "easiest" path is (barcode, search, AI estimate).
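To keep runs comparable across apps, each logging attempt is recorded in a uniform shape. A minimal sketch of such a record follows; the field names and the example values are our own illustration, not any app's API:

```python
from dataclasses import dataclass

# Hypothetical per-item log record; field names are assumptions,
# not taken from any app under test.
@dataclass
class LogEntry:
    item: str             # shopping-list item, e.g. "store-brand oat milk"
    app: str              # app under test
    path: str             # "barcode", "search", or "ai_estimate"
    calories_kcal: float  # value the app reported
    exact_match: bool     # did the app find the exact product?

entry = LogEntry("store-brand oat milk", "ExampleApp", "barcode", 120.0, True)
```

One record per item per app makes it straightforward to compare the same item across apps later, whatever path each app used.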

Scoring

Privacy testing

For commercial apps, we run each one on the GrapheneOS phone with a NextDNS profile capturing all DNS queries, and a mitmproxy session capturing decrypted HTTPS wherever the app's certificate pinning permits. Captures cover 24 hours of app use and are then archived. We publish destination domains and observed payload categories — never the full capture.
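The published destination list is essentially the capture collapsed to domains. A minimal sketch of that step, assuming one queried hostname per line (a simplified stand-in for a real DNS log export; proper eTLD+1 grouping would need a public-suffix list rather than this naive last-two-labels collapse):

```python
from collections import Counter

def destination_domains(dns_log_lines, labels=2):
    """Collapse queried hostnames to their trailing labels and count them.

    Naive sketch: keeps the last `labels` dot-separated labels of each
    hostname. Real registrable-domain extraction needs the Public Suffix
    List (e.g. "example.co.uk" would be mishandled here).
    """
    counts = Counter()
    for line in dns_log_lines:
        host = line.strip().lower().rstrip(".")
        if not host:
            continue
        parts = host.split(".")
        counts[".".join(parts[-labels:])] += 1
    return counts

log = ["api.tracker.example.com", "cdn.tracker.example.com", "ads.adnet.io"]
summary = destination_domains(log)
# → Counter({'example.com': 2, 'adnet.io': 1})
```

Only the aggregated domain counts would be published; the raw hostnames stay in the archive.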

FOSS evaluation

For an open-source app to be reviewed, we compile it from source in a clean container, run it on the test phone, and read at least the data-layer and network-layer source. Apps with no commits in the last 12 months are flagged "stale." Apps with no commits in the last 24 months are flagged "abandoned."
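The staleness rule above is mechanical, so it can be sketched directly. This is an illustration of the policy, not a tool we publish; it approximates the gap in whole calendar months:

```python
from datetime import datetime, timezone

def activity_flag(last_commit, now=None):
    """Classify repo activity: >= 24 months without a commit is
    "abandoned", >= 12 months is "stale", otherwise "active".

    Approximates the gap as whole calendar months, which is close
    enough for a coarse review flag.
    """
    now = now or datetime.now(timezone.utc)
    months = (now.year - last_commit.year) * 12 + (now.month - last_commit.month)
    if months >= 24:
        return "abandoned"
    if months >= 12:
        return "stale"
    return "active"

ref = datetime(2026, 4, 1, tzinfo=timezone.utc)
# last commit Jan 2024 → 27 months → "abandoned"
print(activity_flag(datetime(2024, 1, 1, tzinfo=timezone.utc), ref))
```

Passing `now` explicitly keeps the check reproducible when re-running an old review.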

What we do not do

Conflicts of interest

The editor has personally paid for (and then cancelled) MyFitnessPal Premium, Cronometer Gold, and MacroFactor. He has run the OpenNutriTracker codebase on a personal phone for over a year. He briefly trialed PlateLens while writing the "Would I pay" piece. He has not received any compensation, free subscriptions, or merchandise from any of the above.