How I Built an AI Agent That Designs Like Me
TL;DR
Tom spent $13,100 in 90 days building a personal agent stack — his setup uses OpenClaw as the harness, Obsidian as the knowledge layer, Slack as the interface, and four separate agents for different parts of his work.
His core claim is that the real leverage is in the harness, not just the model — he frames an agent as "an AI model dropped inside of a car" with tools, files, memory, rules, and permissions that you can customize.
The most valuable workflow is a curated knowledge vault, not raw chatbot memory — Tom drops tweets, studies, and videos into Slack, his agent summarizes and routes them into Obsidian, then later retrieves related material without hallucinating because he curates what gets remembered.
The agent already handles meaningful business work, not toy demos — it prepares meeting and podcast briefs, helps design and ship research surveys, monitors Vercel and Sentry, and even contributed to a survey project that generated $50,000 in sponsorship revenue.
He sees agents as collaborators, not full autopilot systems — Tom avoids overnight autonomous loops, stays in the approval/orchestration role, and says the quality ceiling is mostly about the system you build around the model.
His practical build recipe is five steps: interface, memory, work system, design canvas, cron job — he recommends adding a search layer like QMD, connecting tools via MCPs, and starting with one workflow you already understand instead of trying to automate your whole life.
The Breakdown
The $13K experiment: cloning his taste into an agent
Tom opens with the big claim: his AI agent helped design the Toolbenders app in his taste, but this was not "vibe coding" or a one-shot miracle. He was still orchestrating and approving the work, and he frames the whole thing as a real-world example of what becomes possible when you build a personal agent. The headline number gives it weight: $13,100 spent over 90 days.
Why Slack replaced ChatGPT for his daily work
He says he hasn't opened Claude or ChatGPT as standalone apps in about four months because Slack has become the surface where much of his work happens. Since January, he's been running OpenClaw as the container, Obsidian as the knowledge layer, and Slack as the interface, with four agents handling different areas of his workflow. The tone here is part demo, part challenge: most people are still thinking too small about what these systems can do.
OpenClaw as a project car, with markdown files as the engine room
Tom uses a memorable metaphor: OpenClaw is a starter-kit project car with a swappable engine, where the model is the engine and the harness is everything around it. He walks through files like soul.md, identity.md, user.md, agents.md, memory.md, tools.md, and heartbeat.md, emphasizing that these are plain-text instructions controlling behavior, permissions, memory, and tool use. His point is that agent quality varies wildly because everyone is effectively customizing their own vehicle.
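To make the "engine room" concrete, here's what one of those harness files might look like. This is a hypothetical sketch, not Tom's actual file; the section names and rules are illustrative assumptions about how a plain-text instruction file could be organized:

```markdown
# agents.md (hypothetical example of a plain-text harness file)

## Roles
- research-agent: summarizes captured links and routes notes into the vault
- ops-agent: watches deploy and error feeds, posts alerts to Slack

## Permissions
- May read and write inside the Obsidian vault folder only
- Must ask before sending outbound messages or running shell commands

## Memory
- Append durable facts to memory.md; never overwrite existing entries
```

The point of the metaphor holds either way: since every line here is editable, two people running the same model can get very different agents.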
The mindset shift: stop thinking chatbot, start thinking operating system
One of the stickier moments comes from a story about his 17-year-old son, who used Claude to teach himself how to build a personal agent tied to Discord with tagging and memory indexing. Tom says this is the skill now: learning how to surface the right question inside your frustration. Once you see an agent as an operating system, you naturally start asking better questions about memory, source of truth, rules, and what should stay manual.
The knowledge vault that makes vague questions actually useful
For Tom, the center of gravity is an Obsidian-based vault covering health, journal, worldview signals, project planning, his writer's room, and OpenClaw operations. His "capture loop" lets him drop a tweet, study, or video into Slack with a plus sign; the agent summarizes it, routes it to the right place in Obsidian, and pattern-matches it against related entries to build themes and tags. He says this is why retrieval works so well later: he's curating memory himself, so it doesn't feel like hallucination roulette.
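That routing step can be sketched in a few lines. Everything below is an assumption for illustration, not Tom's actual setup: the folder names, the keyword lists, and the idea of keyword-based routing (his agent presumably does something smarter with the model itself):

```python
from datetime import date
from pathlib import Path

# Hypothetical vault areas and keywords that route a capture to them.
ROUTES = {
    "health": ["sleep", "protein", "training"],
    "worldview": ["agents", "economy", "signals"],
    "projects": ["survey", "sponsor", "launch"],
}

def route_capture(summary: str, vault: Path) -> Path:
    """Pick a vault folder by keyword match and write a dated markdown note."""
    text = summary.lower()
    folder = next(
        (name for name, words in ROUTES.items() if any(w in text for w in words)),
        "inbox",  # unmatched captures land in an inbox for manual curation
    )
    note = vault / folder / f"{date.today()}-capture.md"
    note.parent.mkdir(parents=True, exist_ok=True)
    note.write_text(f"# Capture\n\n{summary}\n\ntags: #{folder}\n")
    return note
```

The curation Tom describes lives in exactly this kind of chokepoint: because a human decides what lands in the vault and where, later retrieval pulls from vetted notes instead of guessing.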
Meeting prep, research ops, and the moment the agent got proactive
The examples get more concrete fast. Every morning the agent reads his calendar, researches who he's meeting, pulls relevant context from Obsidian, and creates a prep doc; for podcast guests, it runs that through NotebookLM and turns it into an audio summary he can listen to over coffee. On survey projects, it helps shape hypotheses from voice notes, drafts research plans, writes the technical implementation, monitors Vercel and Sentry, analyzes responses, and once even started producing data visualizations on its own when it noticed a deadline approaching.
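The assembly step of that morning brief could look something like this. The event shape and brief layout are illustrative assumptions; the real pipeline draws on his calendar, Obsidian retrieval, and NotebookLM, none of which are modeled here:

```python
def build_prep_doc(event: dict, vault_notes: list[str]) -> str:
    """Assemble a markdown meeting brief from a calendar event and related notes."""
    lines = [
        f"# Prep: {event['title']} ({event['start']})",
        "",
        "## Attendees",
    ]
    lines += [f"- {name}" for name in event.get("attendees", [])]
    lines += ["", "## Related vault context"]
    # Fall back to a placeholder when retrieval finds nothing relevant.
    lines += [f"- {note}" for note in vault_notes] or ["- (nothing found)"]
    return "\n".join(lines)
```

The interesting part isn't the formatting; it's that the `vault_notes` input comes from the curated vault above, which is why the brief stays grounded in things Tom actually saved.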
The ROI is real, but so is the security risk
Tom says one survey alone generated $50,000 in sponsorship revenue, helping recoup the roughly $13,000 token spend. He also saves time on content distribution, cutting a repetitive 45-minute multi-platform publishing workflow down to about 10 minutes, four times a week. But he stops to underline the risk: every connected tool comes with secret keys and private data, so the security docs are not optional.
Why he still keeps himself in the loop — and how he wants you to start
He gets unexpectedly personal here, saying he misses parts of his old workflow and doesn't want agents running wild all night while he sleeps. Still, he says AI has improved both his work and his life: he can make documentaries, support his family, manage his health, and feel "a lot less" suffering "in this river." His build advice is simple and grounded: pick one workflow, choose an interface, add memory plus search with something like QMD, connect your source-of-truth tools via MCPs, point the agent at a canvas like Figma, then give it one recurring cron job and tune from there.
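The last step of that recipe, one recurring cron job, can be sketched as a minimal heartbeat loop. Everything here is an illustrative stand-in (the workflow, the interval, the tick cap), not Tom's code; a real setup would use the harness's own scheduler or an actual cron entry:

```python
import time
from typing import Callable

def heartbeat(workflow: Callable[[], str], interval_s: float, max_ticks: int) -> list[str]:
    """Minimal recurring 'cron': run one workflow on a fixed interval, a few times.

    Capping the tick count keeps a human in the loop, in the spirit of
    Tom's "no autonomous overnight runs" stance.
    """
    results = []
    for _ in range(max_ticks):
        results.append(workflow())
        time.sleep(interval_s)
    return results
```

Starting with one workflow you already understand means `workflow` begins as something boring and verifiable, like drafting tomorrow's brief, and only grows once you trust the output.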