Fo’c’sle

DINO-v3-Distilled

by Fo’c’sle

Distilled DINO-v3 backbone for downstream perception. It is the encoder behind most of our reference models.

Multimodal · Focsle-Research · INT8 · FP16 · backbones · ssl · dino
64K downloads · 2.4K deployments · Updated Apr 12, 2028
Headline: 14.8 ms · NVIDIA Jetson Orin Nano · FP16

Deploy DINO-v3-Distilled

Pick a chip family. We hand you the artifacts (HEF, TRT engine, Core ML, ONNX) plus a one-click endpoint deploy. For private endpoints, on-prem deploy, or air-gapped distribution, see Enterprise.

NVIDIA Jetson Orin Nano
# Build a TensorRT engine
$ focsle pull focsle/dino-v3-distilled --target jetson-orin-nano
$ focsle build trt --plan dino-v3-distilled.plan \
    --precision fp16 \
    --workspace 4G

# Run with TensorRT; `frame` is your preprocessed input tensor
import focsle.runtime as fr
m = fr.load("dino-v3-distilled.plan", target="trt")
out = m.run(frame)
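Downstream perception usually consumes the backbone's output as an embedding, and retrieval or matching pipelines typically L2-normalize embeddings so a dot product becomes cosine similarity. A minimal pure-Python sketch of that step; the 4-d vectors here are toy stand-ins for the model's real output, not its actual dimensionality:

```python
import math

def l2_normalize(v):
    """Scale v to unit length so a dot product equals cosine similarity."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def cosine(a, b):
    """Cosine similarity between two embeddings."""
    a, b = l2_normalize(a), l2_normalize(b)
    return sum(x * y for x, y in zip(a, b))

# Toy embeddings standing in for two frames' backbone outputs.
# Parallel vectors score ~1.0; orthogonal vectors score ~0.0.
sim = cosine([1.0, 0.0, 2.0, 0.0], [2.0, 0.0, 4.0, 0.0])
```

In practice you would pass `out` from the snippet above through the same normalization before comparing frames.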

One-click endpoint

Spins up a managed endpoint in the closest region. Pro and above.
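To show the shape of a JSON-over-HTTP inference call against an endpoint like this, here is a standard-library-only sketch. The `/v1/infer` route, payload fields, and response format are assumptions for illustration, not Focsle's documented API; a local stub server stands in for the managed endpoint so the snippet runs offline:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib import request

# Local stub standing in for the managed endpoint (assumed API shape).
class StubHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        # Echo a fake embedding sized by the requested dimension.
        reply = {"model": body["model"], "embedding": [0.0] * body.get("dim", 4)}
        data = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: POST a JSON payload, read back a JSON result.
url = f"http://127.0.0.1:{server.server_address[1]}/v1/infer"
payload = json.dumps({"model": "focsle/dino-v3-distilled", "dim": 4}).encode()
req = request.Request(url, data=payload,
                      headers={"Content-Type": "application/json"})
with request.urlopen(req) as resp:
    result = json.load(resp)

server.shutdown()
```

Against a real endpoint, only the URL and auth headers would change; check the endpoint's own docs for the actual route and payload.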

Or deploy yourself