Focsle

EfficientDet-Lite4

by Google

Reference EfficientDet variant pinned at INT8 for legacy NPUs. Maintained, not advanced.

Object detection · Apache-2.0 · INT8 · coco · legacy
312K downloads · 11K deployments · Updated Nov 8, 2027
Headline: 19.2 ms · Google Coral Edge TPU · INT8

Discussion

Issues, PRs, and methodology questions on EfficientDet-Lite4. Chip-vendor authors and reference-set maintainers are auto-pinged on threads tagged with their target.


INT4 calibration recipe — overshoot on long-tail classes?

The published INT4 calibration set is heavily weighted to head classes. We’re seeing a 6.4% drop on long-tail traffic categories vs the FP16 baseline; switching to a stratified calibration sample fully recovers it. Recipe attached.
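A minimal sketch of the stratified-sampling idea being described (the function name and the equal-per-class policy are illustrative, not the attached recipe):

```python
import random
from collections import defaultdict

def stratified_calibration_sample(samples, labels, n_total, seed=0):
    """Draw a calibration set with near-equal per-class representation,
    instead of sampling proportionally to head-heavy class frequencies."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for s, y in zip(samples, labels):
        by_class[y].append(s)
    # Equal budget per class; small tail classes contribute all they have.
    per_class = max(1, n_total // len(by_class))
    picked = []
    for pool in by_class.values():
        picked.extend(rng.sample(pool, min(per_class, len(pool))))
    rng.shuffle(picked)
    return picked[:n_total]
```

With a 90/5/5 class split and a budget of 30, each class is capped at 10, so the two tail classes keep all of their samples instead of being crowded out by the head class.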

Marcus Chen · 14 replies · updated 2h ago

Hailo-10H performance on multi-stream — 4x1080p hits a wall at 38 FPS

Posting matched-pair numbers. We see 38.2 FPS sustained across four 1080p streams on the Hailo-10H with the published HEF; the bottleneck is the post-processing thread, not the NPU. PR with a fused NMS path coming.
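For context on the post-processing cost: the per-box loop below is a minimal greedy NMS in plain Python, the kind of host-side work a fused path would move off the critical thread. This is an illustrative sketch, not the repo's implementation:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop overlapping boxes,
    repeat. O(n^2) in the worst case, which is why it shows up in profiles."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thresh]
    return keep
```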

Hailo · 8 replies · updated 1d ago

ONNX export breaks dynamic shapes >1024

Repro: any input axis above 1024 fails the symbolic shape inference pass. The workaround for now is to fix the pre-shape to a static size; will open an issue upstream.
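One way to sketch the static pre-shape workaround: pick a fixed input size under the 1024 limit before export, preserving aspect ratio. The helper name and the stride-of-32 rounding are assumptions for illustration, not part of the reported repro:

```python
def static_preshape(h, w, max_side=1024):
    """Compute a fixed (static) input size whose longest side is <= max_side,
    keeping aspect ratio and rounding down to multiples of 32 so typical
    detector feature-map strides divide evenly."""
    scale = min(1.0, max_side / max(h, w))
    nh, nw = int(h * scale), int(w * scale)
    nh = max(32, (nh // 32) * 32)
    nw = max(32, (nw // 32) * 32)
    return nh, nw, scale
```

Exporting with a shape computed this way keeps every axis at or below 1024, sidestepping the symbolic shape inference failure at the cost of a fixed input resolution.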

Dr. Anika Patel · 5 replies · updated 3d ago

Proposed: split FP16 and MIXED variants into separate model entries

The MIXED variant has materially different downstream behavior on transformer-class chips: latency drops and accuracy retention improves by 0.6 pp. Splitting it into a separate model card would let it compete on its own merits in the catalog.

EdgeML Collective · 22 replies · updated 1w ago