The end of single-vendor edge AI
For two decades, the answer to "which chip should we build on?" was "whichever one our prior product used." The next product cycle is going to look different, and not because anyone wants it to.
Single-vendor commitments worked when chips evolved in straight lines and the workload was, in practice, a known quantity. Neither of those conditions describes 2028. The model classes that matter most — the VLA stacks driving the next robotics generation, the small VLMs landing inside cameras, the streaming embedded ASR going into appliances — moved faster than any single silicon roadmap could keep up with. We've spent the last three years sitting next to customers who keep discovering, on an 18-month cadence, that the chip they architected around no longer fits the model they actually want to ship.
The reflexive defense of single-vendor architectures is that integration cost scales linearly with the number of silicon families you support. That's true, but it's the wrong frame. The cost that's actually killing roadmaps isn't integration — it's the cost of being wrong about which chip to integrate against. Every chip-selection mistake in this market now eats six months of engineering and forces a model retreat at the hardware boundary.
Multi-vendor isn't a posture. It's a hedge against the fact that the model landscape is moving faster than the silicon landscape, and you don't get to pick which one wins next. The product orgs that have already internalized this have moved their perception stacks onto compiler abstractions and benchmark surfaces that span vendors. Their roadmaps are the ones that aren't getting reset every 18 months.
We built Fo'c'sle to be the substrate this kind of multi-vendor work runs on. The benchmark matrix is the public-good half of that — it's the proof that the comparison can be made without conflict of interest. The compiler is the engineering half. The HIL lab is the part that turns numbers into ship/no-ship decisions. None of the three is sufficient on its own, which is why we don't ship them separately.
We don't think single-vendor edge AI is going to disappear — there will always be a tier of products where one chip dominates the workload — but it's no longer the default architecture for any team that wants to ship more than one product cycle on the current model class. Plan accordingly.