Wendy Labs SF: Robots Up in Minutes IRL
TL;DR
Wendy Labs is attacking the least glamorous robotics problem: setup hell — CTO Yanis says even MIT robotics grads can spend days just flashing a Jetson and wiring headless peripherals, so Wendy built a CLI and OS layer to get devices like Unitree robot dogs running in 10–20 minutes instead of burning half a hackathon.
Their core bet is Swift as the robotics language that doesn’t force a rewrite later — Yanis argues Swift gives C/C++-class performance with memory and concurrency safety, so teams can skip the usual pattern of prototyping in Python and then painfully porting everything to C++ for production.
They already have real demos, not just theory — one robot dog was brought up on Wendy’s stack in about 10 minutes and then, over roughly two days of integration work, made to follow people autonomously using YOLO, speech models, lidar, and RealSense-style sensing.
The hardware ladder is surprisingly cheap at the low end and serious at the top — Wendy highlighted Nvidia Jetson Nano-class hardware around $250 for multi-camera YOLO and speech tasks, AGX Orin with 64 GB for autonomous vehicles and VLMs, and Jetson Thor around $5,000 with 128 GB unified memory for heavier sensor and robotics workloads.
A big part of the vision is retroactive intelligence, not just preplanned autonomy — Yanis described using MCP and LLM-driven tooling to push new logic onto already-deployed devices, like asking a forest-fire drone mid-mission to also look for trapped people or using an overlooked microphone in an oil facility to detect a bad engine sound.
The company is tiny, distributed, and moving like an infra startup, not a robotics lab — Wendy is pre-seed with about eight developers spread across the Netherlands, Germany, Romania, Tasmania, the US, and Canada, and Yanis says they’ve cut OS CI builds to 5 minutes so nightly changes are ready before the Slack message finishes sending.
The Breakdown
A robotics lab full of dogs, drones, and one very specific pain point
Ray opens from Wendy Labs’ San Francisco space — robot dogs on the floor, a drone under construction, and hardware everywhere. The hook is practical, not sci-fi: at hackathons, he says, half the event gets eaten just trying to get software onto the robot at all, and Wendy’s pitch is to kill that friction first.
Why Jetsons are harder than iPhones — and why Wendy started at the bottom of the stack
Yanis makes the comparison bluntly: you can ship your first iPhone app in 10–20 minutes, but your first Jetson can take days just to flash, cable, and boot correctly. His rant is relatable and specific — weird flashing methods, missing adapters, broken updates, random peripherals — and it leads straight to Wendy’s thesis: robotics needs infrastructure before it needs more demos.
The robot dog demo: 10-minute setup, two days to get autonomous following
In the open lab area, they show a Unitree-style robot dog running on Nvidia Jetson hardware with Wendy’s stack. Yanis says the first 10–20 minutes were setup with their tools, and the rest of roughly two days was integrating VLMs, speech, and autonomy until the dog could follow a person by voice command; they keep it in a safer manual mode on stream so it doesn’t go chasing people around the room.
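To make the shape of that two-day integration concrete, here is a minimal Swift sketch of a follow-the-person loop: pick the largest person detection in the frame, steer toward it, and stop once it fills enough of the image. The types, thresholds, and stub implementations are invented for this write-up, not Wendy’s or Unitree’s actual APIs.

```swift
import Foundation

// Hypothetical types for illustration only: the rough shape of a follow loop.
struct Detection {
    let label: String
    let centerX: Double   // normalized 0...1 across the camera frame
    let area: Double      // normalized bounding-box area, a crude distance proxy
}

protocol PersonDetector {
    func detectPeople() -> [Detection]    // e.g. a YOLO model running on the Jetson
}

protocol MobileBase {
    func drive(linear: Double, angular: Double)   // velocity command to the dog
}

// Stubs so the sketch runs without any hardware attached.
struct FakeDetector: PersonDetector {
    func detectPeople() -> [Detection] {
        [Detection(label: "person", centerX: Double.random(in: 0.3...0.7), area: 0.12)]
    }
}

struct LoggingBase: MobileBase {
    func drive(linear: Double, angular: Double) {
        print(String(format: "drive linear=%.2f angular=%.2f", linear, angular))
    }
}

// One iteration of "follow the nearest person": turn toward the largest detection,
// move forward until its bounding box fills enough of the frame, stop otherwise.
func followStep(detector: PersonDetector, base: MobileBase) {
    guard let target = detector.detectPeople().max(by: { $0.area < $1.area }) else {
        base.drive(linear: 0, angular: 0)            // nobody visible: stand still
        return
    }
    let turn = (0.5 - target.centerX) * 2.0          // steer toward frame center
    let forward = target.area < 0.25 ? 0.4 : 0.0     // stop once close enough
    base.drive(linear: forward, angular: turn)
}

let detector = FakeDetector()
let base = LoggingBase()
for _ in 0..<5 { followStep(detector: detector, base: base) }
```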
Why this company keeps coming back to Swift
The most opinionated part of the conversation is Yanis defending Swift as the right robotics language. His case: robotics teams prototype in Python because it’s easy, then suffer through a C++ rewrite for performance, while Swift gives you a higher-level programming model with C/C++ interop, native speed, and enough low-level control to optimize vision and embedded workloads without starting over.
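For a sense of what that argument looks like in practice, here is a small, self-contained Swift sketch (our illustration, not Wendy’s code, and assuming a recent toolchain with top-level concurrency) where an actor and a Sendable value type let concurrent tasks feed lidar frames into shared state without data races — the kind of safety Yanis says you otherwise end up buying with a C++ rewrite.

```swift
import Foundation

// A value type for a sensor frame: cheap to copy between tasks, safe to share.
struct LidarFrame: Sendable {
    let ranges: [Double]                       // meters, one per beam
    var nearest: Double { ranges.min() ?? .infinity }
}

// An actor serializes access to mutable robot state, so no locks and no races.
actor ObstacleMonitor {
    private var closest = Double.infinity

    func ingest(_ frame: LidarFrame) {
        closest = min(closest, frame.nearest)
    }

    func shouldStop(threshold: Double) -> Bool {
        closest < threshold
    }
}

let monitor = ObstacleMonitor()

// Several "sensor" tasks push frames concurrently; the actor keeps it consistent.
await withTaskGroup(of: Void.self) { group in
    for i in 0..<4 {
        group.addTask {
            let frame = LidarFrame(ranges: [2.0, 1.5 - Double(i) * 0.3, 3.1])
            await monitor.ingest(frame)
        }
    }
}

print(await monitor.shouldStop(threshold: 0.8))
```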
Drones, patrol dogs, and the first real use cases
The examples stay grounded: warehouse patrol where a robot dog doesn’t just stream video but can detect intrusions on-device, and a drone with Pixhawk plus Jetson for forest-fire detection, missing-person search, or ocean rescue support. The important detail is local inference — vision models can keep running with no internet, finish the mission, and report back later.
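The offline-first pattern is simple enough to sketch. Below is an illustrative Swift example, with names invented for this write-up, where detections are queued on-device during the mission and only flushed once a link is available again.

```swift
import Foundation

// Sketch of "finish the mission offline, report back later". Everything here is
// hypothetical; the point is that results live on the Jetson until a link exists.
struct FireDetection: Codable {
    let latitude: Double
    let longitude: Double
    let confidence: Double
    let timestamp: Date
}

final class MissionLog {
    private var pending: [FireDetection] = []

    func record(_ detection: FireDetection) {
        pending.append(detection)                    // keep everything on-device
    }

    // Called whenever connectivity returns (landing, radio link, and so on).
    func flush(upload: (FireDetection) -> Bool) {
        pending = pending.filter { !upload($0) }     // keep anything that failed to send
    }
}

let log = MissionLog()
log.record(FireDetection(latitude: 37.77, longitude: -122.42, confidence: 0.91, timestamp: Date()))

// Pretend uplink: print instead of hitting a real ground-station endpoint.
log.flush { detection in
    print("uploading detection at \(detection.latitude), \(detection.longitude)")
    return true
}
```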
The weirder idea: retroactively teaching deployed robots new tricks
One of the more interesting moments is Yanis describing a future where an LLM can push fresh logic onto a device after it’s already in the field. His example is a drone initially tasked only with spotting fires, then suddenly needing to identify people nearby and respond; instead of months of telemetry and software work, Wendy wants MCP-style tooling to let the system inspect available sensors and improvise a new layer of capability.
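To ground the idea without pretending to know Wendy’s implementation or the exact MCP wire format, here is a rough Swift sketch of a device-side tool registry: every sensor or model becomes a named, described, callable tool, so a planner that shows up after deployment can discover the microphone nobody planned to use. All names and outputs below are made up.

```swift
import Foundation

// Loose sketch only: each on-board capability is exposed as a described tool
// with a handler, so an LLM-driven planner could enumerate and call them later.
struct DeviceTool {
    let name: String
    let description: String
    let run: (String) -> String
}

var registry: [String: DeviceTool] = [:]

func register(_ tool: DeviceTool) {
    registry[tool.name] = tool
}

register(DeviceTool(
    name: "thermal_camera.scan",
    description: "Return hotspots from the downward thermal camera",
    run: { _ in "hotspots: [(41.2, -122.1)]" }            // stubbed sensor read
))

register(DeviceTool(
    name: "microphone.classify",
    description: "Classify the dominant sound near the device",
    run: { _ in "label: engine_knock, confidence: 0.87" }  // stubbed model output
))

// A planner (human or LLM) can list what the deployed device can already do...
for tool in registry.values { print("\(tool.name): \(tool.description)") }

// ...and invoke a capability that was never part of the original mission plan.
if let mic = registry["microphone.classify"] {
    print(mic.run("listen for 5 seconds"))
}
```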
The hardware menu, from $8 sensors to $5,000 Jetsons
They walk through the stack: tiny microcontrollers for sensor streaming, Jetson Nano-class devices around $250, AGX Orin with 64 GB for VLM-class workloads, and Jetson Thor around $5,000 with 128 GB unified memory and clustering options. Yanis keeps translating specs into actual developer choices — YOLO on the cheap boards, heavier multimodal or autonomous-vehicle workloads on the bigger ones, and the same code moving between machines when resources allow.
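That “same code moving between machines” point boils down to capability detection at startup. Here is an illustrative Swift sketch with made-up tiers and thresholds, not Wendy’s actual configuration, that picks a model tier from the memory actually present rather than hard-coding for one board.

```swift
import Foundation

// Illustrative tiers only; the thresholds loosely track the boards discussed above.
enum ModelTier: String {
    case detectionOnly = "YOLO-class detection"
    case multimodal    = "VLM + speech"
    case fullStack     = "autonomous-vehicle / multi-sensor"
}

func pickTier(memoryGB: Double) -> ModelTier {
    switch memoryGB {
    case ..<16:  return .detectionOnly    // Nano-class boards
    case ..<96:  return .multimodal       // AGX Orin-class, ~64 GB
    default:     return .fullStack        // Thor-class, 128 GB unified memory
    }
}

let memoryGB = Double(ProcessInfo.processInfo.physicalMemory) / 1_073_741_824
print("detected \(Int(memoryGB)) GB, running: \(pickTier(memoryGB: memoryGB).rawValue)")
```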
Open source, remote debugging, and a tiny pre-seed team trying to make robotics feel like Vercel
The closing pitch is that Wendy is Apache 2 open source, works offline over a single USB-C cable, supports Wi-Fi and cloud-connected remote debugging, and wants robotics deployment to feel more like shipping a web app than hijacking a hotel TV for a demo. Wendy is still pre-seed with about eight developers, but Yanis says they’ve already compressed years of robotics-infra pain into a few months — and their internal mantra is simple: if something “simple” takes more than an hour, they should fix it.