by Baidu Research
RT-DETR-Edge
Real-time DETR optimized for transformer-friendly NPUs. Wins on transformer-class silicon, struggles on legacy NPUs.
Object detection · Apache-2.0 · INT8 · FP16 · transformer · coco · detection
92K downloads · 5.4K deployments · Updated Apr 2, 2028
Headline: 18.6 ms · NVIDIA Jetson Orin Nano · FP16
Deploy RT-DETR-Edge
Pick a chip family. We hand you the artifacts (HEF, TRT engine, Core ML, ONNX) plus a one-click endpoint deploy. For private endpoints, on-prem deploy, or air-gapped distribution, see Enterprise.
NVIDIA Jetson Orin Nano
# Build a TensorRT engine
$ focsle pull baidu-research/rt-detr-edge --target jetson-orin-nano
$ focsle build trt --plan rt-detr-edge.plan \
--precision fp16 \
--workspace 4G
# Run with TensorRT
import focsle.runtime as fr
m = fr.load("rt-detr-edge.plan", target="trt")
out = m.run(frame)  # frame: a decoded input image

One-click endpoint
Spins up a managed endpoint in the closest region. Pro and above.
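DETR-family detectors typically expect a fixed-size square input, so frames must be letterboxed before being passed to the runtime. A minimal sketch of that preprocessing step, using only NumPy; the 640×640 input size, NCHW float32 layout, and [0, 1] normalization are assumptions for illustration, not published specs for this model — check the downloaded artifact's input signature before relying on them:

```python
import numpy as np

def letterbox(img, size=640, pad_value=114):
    """Fit an HWC uint8 image into a size x size square, padding with a
    constant so the aspect ratio is preserved (assumed preprocessing)."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbour resize via index maps (avoids a cv2 dependency)
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    canvas = np.full((size, size, img.shape[2]), pad_value, dtype=img.dtype)
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    # HWC uint8 -> NCHW float32 in [0, 1]
    return canvas.transpose(2, 0, 1)[None].astype(np.float32) / 255.0

frame = letterbox(np.zeros((480, 640, 3), dtype=np.uint8))
print(frame.shape)  # (1, 3, 640, 640)
```

The resulting tensor would be what `m.run(frame)` receives in the snippet above; for the ONNX artifact the actual input name and shape can be read off the graph instead of assumed.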