Blog
Benchmark reports, methodology notes, model releases, and the occasional opinion piece.
Cross-chip benchmark report: Q3 2028
Three transformers caught up to YOLO. Hailo-10 dethroned Coral on the low-power tier. Thor justified its watts on VLA. The full numbers, and the surprises.
Releasing Depth-Anything-Edge: a 41 MB depth model that runs at 60 FPS on a Pi 5
A distilled monocular depth model with metric-stable outputs, 0.32 AbsRel on KITTI, and a memory footprint that fits inside the Hailo-8L on a Pi HAT. Weights, recipes, and HIL traces, all open.
How Meridian Autonomy compressed chip selection from 14 weeks to 9 days
When the SKU plan calls for four hardware tiers — and your perception stack is six models deep — the chip-selection cycle eats the whole roadmap. Meridian's engineering lead on what changed.
Inside the HIL lab: methodology for cross-vendor robotics benchmarks
What it means to call a number 'comparable' across silicon vendors — the rig, the harness, the protocol, and the ways we keep ourselves honest.
Now supporting NVIDIA Jetson Thor and Hailo-10H in HIL Sim
Both platforms are now first-class targets for HIL simulation runs and appear in the cross-chip leaderboards. What's new, what's still pending, and how to request runs against your own chip rotation.
The end of single-vendor edge AI
For two decades, the answer to 'which chip should we build on' was 'whichever one our prior product used'. The next product cycle is going to look different, and not because anyone wants it to.