Fo’c’sle

Depth-Anything-Edge

by Fo’c’sle

Distilled depth model that holds 0.32 AbsRel on KITTI while running 60 FPS on a Pi 5 + Hailo HAT. The reference release.

Depth · Focsle-Research · INT8 · FP16 · MIXED · distilled · monocular · kitti
198K downloads · 15K deployments · Updated Apr 16, 2028
Headline: 16.8 ms · Raspberry Pi 5 + Hailo HAT · INT8

About this model


Authored by focsle. Curated into the Fo’c’sle reference set on 2028-04-16. All cross-chip benchmarks below were collected in matched-pair runs in the HIL lab using the same input pipeline, same upstream preprocessing, and the same downstream consumer. See the methodology page for the full protocol.
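The 0.32 AbsRel figure uses the standard absolute relative error metric for monocular depth: the mean over valid pixels of |predicted − ground truth| / ground truth. A minimal sketch of that metric, assuming depths are given as flat lists of per-pixel values in metres (the actual KITTI eval also applies a validity mask and depth caps, which are omitted here):

```python
def abs_rel(pred, gt):
    """Mean absolute relative depth error: mean(|pred - gt| / gt).

    pred, gt: flat sequences of per-pixel depths in metres; gt values must be > 0.
    Masking of invalid pixels and depth capping (as in the full KITTI protocol)
    are assumed to have been applied by the caller.
    """
    assert len(pred) == len(gt) and len(gt) > 0
    return sum(abs(p - g) / g for p, g in zip(pred, gt)) / len(gt)

# Toy example (not KITTI data): one pixel off by 50%, one exact.
print(abs_rel([1.0, 2.0], [2.0, 2.0]))  # → 0.25
```

Lower is better; 0.32 means the model's per-pixel depth is off by about 32% of the true depth on average.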

Task: Depth
Parameters: 41.2 M
Benchmarked on: 9 chips
Deployments: 15K

Architecture

Encoder + DPT decoder
Inferred from upstream weights · simplified
Image → ViT-S/16 → Hybrid stem → Conv stage → DPT decoder → Refinement → Disparity

Headline benchmarks

Training data

Pretrained on the upstream maintainer’s released checkpoint. Edge-distillation pass uses 2.4M frames from the Fo’c’sle distillation corpus (consented public data + opt-in publisher contributions). Quantization-aware fine-tune uses 320K calibration samples drawn from the target task’s eval domain.

  • Pretraining corpus: upstream maintainer release
  • Distillation corpus: 2,400,000 frames
  • Calibration set: 320,000 samples (per task)
  • Eval set: standard benchmark + matched-pair HIL runs
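The card does not state the quantization scheme used for the INT8 build, so as an illustration only, here is how a per-tensor symmetric INT8 scale might be derived from calibration samples and applied (the actual Hailo toolchain may use a different scheme, e.g. per-channel or asymmetric):

```python
def calibrate_scale(samples):
    """Per-tensor symmetric INT8 scale from calibration activations.

    ASSUMPTION for illustration: symmetric quantization, scale chosen so the
    largest-magnitude calibration value maps to +/-127.
    """
    return max(abs(v) for v in samples) / 127.0

def quantize(x, scale):
    """Quantize one float to INT8, clamping to the representable range."""
    q = round(x / scale)
    return max(-128, min(127, q))

scale = calibrate_scale([-1.0, 0.25, 0.5])  # scale = 1/127
print(quantize(1.0, scale))    # saturates at 127 (above calibration range)
print(quantize(-2.0, scale))   # clamped to -128
```

In practice the 320K calibration samples would feed a pass like `calibrate_scale` per tensor before the quantization-aware fine-tune adjusts the weights to the resulting grid.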