Practice · 10 min read
The Slow Letter
From Idea to Prototype in Hours
Quixote spent a chapter composing a love letter that arrived too late to matter. Today, the gap between idea and testable prototype has collapsed to hours—but only if you know what to test and what to throw away.

The Letter That Arrived Too Late
In Part I of the novel, stranded in the Sierra Morena, Quixote decides to compose a love letter to Dulcinea. He doesn’t just write it—he agonizes. He drafts and revises. He debates whether to model it after Amadis of Gaul or invent his own form. He tears up lines and rewrites them, performing the role of the suffering lover for an audience of one very confused squire.
When the letter is finally ready, Sancho sets off to deliver it. Except Quixote forgets to hand over the notebook he wrote it in, and Sancho can’t read anyway, so he has to carry it in his memory. He forgets most of it, paraphrases the rest, and the priest and the barber from his own village end up hearing a garbled version that bears little resemblance to what Quixote wrote. Dulcinea never hears a word of it.
The letter was beautiful. It was also irrelevant the moment its delivery depended on Sancho’s memory. The act of composing it mattered more to him than whether it accomplished anything.
I think about this scene every time I catch myself polishing a spec that nobody has validated.
Last Thursday
Here’s something that actually happened to me last week. I’d been looking at session recordings from our onboarding flow, and I noticed a pattern: users were dropping off at the workspace setup screen. Not all of them—maybe 30%—but the ones who dropped tended to be solo users, not team leads. My hypothesis was simple: solo users don’t need a “workspace.” They need to get to the product. The setup screen was a gate designed for teams that was punishing individuals.
I could have written a spec. I could have put together a deck with the session recordings, framed the problem, proposed three options, and brought it to the next planning meeting. That’s what I would have done two years ago, and it would have been a perfectly reasonable letter to Dulcinea—carefully composed, formally structured, delivered weeks after the observation.
Instead, I opened Claude Code at about 9 AM and described what I wanted: a forked onboarding path that detects solo signups and skips the workspace configuration, dropping users straight into a personal default workspace. I had a working prototype by noon. Not a mockup in Figma. Not a flowchart. A functional fork in the actual onboarding flow, running locally, that I could put in front of people.
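The fork itself was almost embarrassingly small. Here is a minimal sketch of its shape, assuming the signup record already knows whether it belongs to a solo user or a team; the types and names are illustrative, not our actual code:

// Illustrative sketch of the prototype's fork, not production onboarding code.
type SignupKind = "solo" | "team";

interface Signup {
  userId: string;
  kind: SignupKind;
}

type OnboardingStep = "workspace-setup" | "product-home";

// Solo signups skip the workspace setup screen and land straight in the product;
// team signups keep the existing flow.
function firstOnboardingStep(signup: Signup): OnboardingStep {
  return signup.kind === "solo" ? "product-home" : "workspace-setup";
}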
That afternoon I pulled three solo users from our beta list into quick calls, shared my screen, and watched them go through both flows. Two of them didn’t even notice the workspace step was gone—they just arrived at the product faster and started using it. The third said, “Oh good, I was confused by that part last time.”
By Friday I had enough signal to kill the original hypothesis (skip it entirely) and replace it with a better one (keep the workspace concept but auto-create a default for solo users, let them rename it later). I wrote a focused spec over the weekend—not a guess about what might work, but a description of what I’d watched work, with the specific edge cases I’d already discovered.
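For concreteness, the difference between the two hypotheses is small in code but large in concept: instead of routing around the workspace, the revised version creates one silently. A rough sketch, with createDefaultWorkspace and the field names as stand-ins rather than anything from our real API:

// Illustrative sketch of the revised hypothesis, not the real implementation.
interface Workspace {
  id: string;
  name: string;
  ownerId: string;
}

// Stand-in for the real persistence call.
async function createDefaultWorkspace(ownerId: string): Promise<Workspace> {
  return { id: `ws_${Date.now()}`, name: "My workspace", ownerId };
}

// Solo users never see the setup screen; they get a renameable default
// workspace and can change the placeholder name later in settings.
async function onboardSoloUser(userId: string): Promise<Workspace> {
  return createDefaultWorkspace(userId);
}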
The whole cycle, from observation to validated spec, took four days. The spec I would have written on Monday would have taken a week to draft and another month to prove wrong.
The Part Nobody Talks About
That story sounds clean. Here’s the part that isn’t.
The first version Claude Code generated used a React hook pattern that had been deprecated two versions ago. It compiled. It ran. It looked correct. But if I’d handed it to engineering and said “build this,” they would have built on a foundation that would need to be ripped out in six months. The prototype passed the user test and would have failed a code review, and those are very different things.
This is the tension at the heart of AI-assisted prototyping. The output looks production-ready. It has proper component structure, reasonable naming, even inline comments. It presents itself with the confidence of finished work. But underneath, it’s making architectural choices based on pattern matching, not judgment. It doesn’t know your deployment environment. It doesn’t know which libraries your team has standardized on. It doesn’t know that the API it’s calling has a rate limit that will break at scale.
I’ve learned to think of AI-generated prototypes the way I think of concept cars at an auto show. They demonstrate what the experience could feel like. They are not the thing you manufacture. The door handles might be beautiful but structurally unsound. The engine might be a painted block of wood. That’s fine, as long as everyone involved understands that the purpose was to test whether people want the car—not to drive it off the lot.
The danger is when the prototype is so convincing that the team mistakes it for a head start on production. “We’ve already got 60% of the code!” No. You have 100% of a prototype and 0% of production code. Those might share some ideas. They almost certainly shouldn’t share implementation.
What Actually Changes
The real shift isn’t that AI writes code faster. Engineers were never the bottleneck in the way most people assumed. The bottleneck was always the gap between “we think users want this” and “we know users want this.” That gap used to cost months. Now it can cost days.
This changes the economics of being wrong. In the old model, testing an idea required committing engineering resources, which meant you needed high confidence before starting, which meant long research and planning cycles, which meant by the time you tested the idea the market had moved. Being wrong was expensive, so you optimized for being right on the first try. You wrote beautiful letters.
In the new model, you can test three ideas in the time it used to take to spec one. You don’t need high confidence before starting—you need a clear hypothesis and a willingness to throw away the prototype when it’s served its purpose. Being wrong is cheap. Being slow is expensive.
I run prototyping sessions roughly twice a week now. Some of them validate hypotheses. Most of them don’t. Last month I prototyped a notification batching feature that I was certain users wanted. Put it in front of five people. Not one of them cared. They didn’t dislike it—they were indifferent, which is worse. That prototype cost me a morning. The feature, if I’d committed to building it through the traditional planning process, would have consumed a team for a sprint before we discovered the same indifference.
The morning wasn’t wasted. It was the cheapest way to learn that the idea was dead. Sancho didn’t need to ride to the village. The letter didn’t need to be delivered. We just needed to know it wasn’t worth sending.
The Honest Toolkit
I use Claude Code for most of my prototyping because I’m comfortable describing systems in natural language and iterating on the output. But the prototype is only the middle step. Before it, I use Figma Make to explore interaction patterns quickly—not full designs, but rough flows that help me think through what I’m asking Claude Code to build. Sometimes a five-minute sketch in Figma reveals that the interaction I had in mind doesn’t actually make sense, and I skip the prototype entirely. After the prototype, if visual design matters for the test, I’ll use Midjourney to generate realistic UI assets so testers aren’t distracted by placeholder content.
None of these tools replace thinking. All of them compress the time between having a thought and testing whether the thought holds up. The sequence is always the same: observe something in user behavior, form a hypothesis, build the cheapest possible thing that tests the hypothesis, watch real people interact with it, update your understanding. The tools just make “build the cheapest possible thing” take hours instead of weeks.
There are sessions where I spend more time arguing with Claude Code about an architectural choice than I would have spent writing the code myself. There are sessions where Midjourney produces beautiful images of a UI that has nothing to do with what I described. The tools are powerful and unreliable in the way that all new tools are—they extend your reach while occasionally grabbing the wrong thing.
The skill isn’t using the tools. The skill is knowing what to test, recognizing when the prototype has answered your question, and having the discipline to throw it away once it has.
The Letter and the Lesson
Quixote’s letter to Dulcinea failed because he optimized for the wrong thing. He optimized for the beauty of the composition when he should have optimized for the speed of the feedback. He wanted the letter to be perfect. He should have wanted to know whether Dulcinea would write back.
The PM equivalent: we optimize for the completeness of the spec when we should optimize for the speed of the learning. We want the document to be thorough. We should want to know whether the idea survives contact with users.
A prototype built in a morning, tested in an afternoon, and thrown away by evening teaches you more than a spec reviewed by twelve people over three weeks. Not because the prototype is better—it’s almost certainly worse as a document. But because it generates the one thing a spec never can: actual user behavior in response to an actual experience.
The letter doesn’t need to be beautiful. It needs to arrive while the question is still worth asking.
Prototype Checklist
Before you invest engineering time in building something for production, make sure your prototype has answered these questions:
Did real users interact with it? Showing a prototype to your team doesn’t count. Showing it to your manager doesn’t count. Five minutes of screen-shared testing with an actual user is worth more than a week of internal review.
Did the core hypothesis survive? You built this to test a specific assumption about user behavior. Did users behave the way you expected? If they didn’t, that’s not a failure—that’s the entire point. Update the hypothesis and decide whether to prototype again or move on.
Have you identified what the prototype can’t tell you? Performance at scale. Security implications. Accessibility compliance. Data migration requirements. Integration complexity with existing systems. Write these down explicitly so the engineering team knows what still needs to be figured out.
Is the team clear this is a throwaway? If anyone is planning to “clean up the prototype code” and ship it, stop. Discuss what production actually requires. The prototype proved the idea works. Production engineering proves it works reliably, securely, and at scale. Those are different projects.
Can you articulate what changed? Before the prototype, you believed X. After testing, you now believe Y. If you can’t name what changed, the prototype didn’t teach you enough. Run another round of testing or sharpen your hypothesis.
Is the next step clear? A good prototype ends with a decision: build this for real, modify the approach and prototype again, or kill it. If the answer is “let’s discuss it more,” the prototype didn’t do its job. Go back to the users.
Put this into practice
Build a prototype-first workflow. The Weekly Status template helps you track what your team learned—not just what they shipped.

