Practice · 8 min read

The Second Sancho

How AI Changes What Small Teams Can Ship

Sancho made Quixote's quest possible by handling the practical work. AI coding tools shift what's possible for small product teams—but the shift isn't where most people think it is.


The Squire Who Changed the Math

Don Quixote had the vision. Sancho did the work.

While Quixote planned the next grand adventure, Sancho packed the bags, fed the donkey, negotiated with innkeepers, and kept them both alive. He didn’t replace Quixote’s vision—he made it executable. Without Sancho, Quixote was a philosopher on a horse. With Sancho, he was a knight on a quest.

And here’s what matters for us: Sancho didn’t make Quixote smarter. He didn’t improve the knight’s judgment about which battles to fight. He freed Quixote from logistics so the knight could spend more time—for better or worse—on decisions.

AI coding tools do something similar for engineering teams. They handle scaffolding, boilerplate, migrations, and documentation. They’re fast, tireless, and surprisingly capable at mechanical tasks. But they don’t decide what to build. They don’t know your users. And they introduce failure modes that didn’t exist before.

This post is about all three: what changed, what didn’t, and what to watch for.


What Changed

Prototyping got drastically cheaper. A feature that needed two days of boilerplate setup now needs two hours. When a senior engineer pairs with an AI coding tool, working prototypes—real ones, ones you can put in front of users—emerge in hours instead of sprints. This changes the economics of experimentation. Instead of agonizing over which idea to validate, a team can build and test three approaches in the time it used to take to spec one.

The backlog of tedious work became tractable. Every team has a graveyard of deferred work: the codebase migration nobody had bandwidth for, the documentation that’s perpetually outdated, the repetitive transformation across hundreds of files. These tasks are painful for humans and straightforward for AI tools. The two-year-old migration sitting in your backlog? It’s now a realistic candidate for the next quarter.

Small teams gained real leverage. A team of 3-5 people with AI tooling can genuinely ship more than the same team could a year ago—not because they work harder, but because less time goes to mechanical tasks. The boilerplate, scaffolding, test generation, and API wiring that used to consume a large share of engineering time shrinks. That recovered time can go toward architecture, design quality, and user research.

Coordination cost became the dominant constraint. When execution speed increases but coordination overhead stays the same, the overhead becomes more visible. Three people who talk every day and use AI tooling will often outpace ten people who need standups, planning sessions, and alignment meetings to stay coordinated. The advantage goes to focus and clarity, not headcount.


What Didn’t Change

Decision quality is still the bottleneck. AI tools make building the wrong thing exactly as fast as building the right thing. If your team can prototype three approaches in a week instead of one in a month, the question isn’t “which one can we afford to build?” It’s “which one should we build?” That question requires user research, strategic thinking, and product judgment. None of that got automated.

Architecture still needs humans. AI coding assistants are strong at generating code within well-defined boundaries. They struggle with the decisions that shape a system over time: how services communicate, where to draw domain boundaries, how data flows through the application. These choices compound. An AI tool that generates plausible-looking architecture can create months of technical debt if nobody with experience reviews it.

Estimation shifted but didn’t disappear. Tasks that were “two-week estimates” may now be “three-day estimates.” But the hard parts of estimation—edge case handling, integration complexity, the user experience thinking that makes a feature actually good—take the same time they always did. The scaffolding is faster. The judgment calls aren’t. If your sprint planning only adjusts the first number, you’ll consistently overpromise.

Adding people still has diminishing returns. Brooks’s Law didn’t get repealed. AI tooling makes small teams more productive, but it doesn’t solve the coordination problems that come with large teams. If your team is slow despite sufficient headcount, the bottleneck is probably process and clarity, not engineering capacity. More people won’t fix that. Better tools won’t either.

Organizational reactions are still the real obstacle. Some teams freeze, convinced AI will replace them. Others deny the shift matters and wait it out. Others slap an “AI-powered” label on the product without thinking about what problem it solves. In all three cases, adoption stays theoretical—leadership endorses AI in town halls but nobody changes a single workflow. The tools changed. The human tendency to charge windmills or hide from them didn’t.


What to Watch For

This is the section most AI commentary skips. The tools are genuinely useful, but they introduce risks that product teams should understand clearly.

Complex state management is an AI weak spot. AI coding tools excel at stateless transformations and well-contained functions. They struggle when logic spans multiple services, involves complex race conditions, or requires reasoning about state that changes over time. A feature that looks correct in isolation may break when it interacts with the rest of your system. Code review for AI-generated code isn’t optional—it’s more important than ever, specifically at the integration boundaries.
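To make that concrete, here is a minimal sketch of a check-then-act race, the kind of bug that reads correctly in isolation. The bank-withdrawal setup is hypothetical, and the sleep stands in for any I/O that widens the race window:

```python
import threading
import time

balance = 100  # shared state: an account, an inventory count, a quota

def withdraw(amount: int) -> None:
    global balance
    if balance >= amount:   # 1. check shared state
        time.sleep(0.01)    # stands in for I/O; widens the race window
        balance -= amount   # 2. act on a now-stale check

# Two concurrent "requests": both pass the check before either writes.
threads = [threading.Thread(target=withdraw, args=(80,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # often -60, sometimes 20; either way both withdrawals
                # got past a check that should have allowed only one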

Security is not something to delegate. AI tools generate code that works. They don’t reliably generate code that’s secure. Input validation, authentication flows, authorization logic, data handling—these require intentional, adversarial thinking. An AI tool optimizes for “does it run,” not “can it be exploited.” Treat AI-generated code touching authentication or data access as requiring extra scrutiny, not less.
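The classic illustration is query construction. Both versions below return the right row for normal input; only one survives hostile input. This is a generic SQLite sketch, not output from any particular tool:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Runs, and passes the happy-path test...
    query = f"SELECT id, role FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchone()
    # ...but username = "x' OR '1'='1" matches every row.

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver handles escaping, so input
    # can never change the shape of the statement.
    query = "SELECT id, role FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchone()
```

“Does it run” passes both functions. Only adversarial review catches the first.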

Team conventions erode silently. Every codebase has conventions—naming patterns, error handling approaches, architectural boundaries—that exist in the team’s shared understanding rather than in a linter config. AI tools don’t know these conventions. They’ll generate perfectly functional code that violates your team’s patterns in subtle ways. Over months, this drift makes the codebase harder to maintain. Teams adopting AI tooling need to invest in codifying conventions that used to be implicit: better linting rules, architecture decision records, and explicit style guides that AI tools can reference.
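Codifying a convention can be as small as a script in CI. Here is a minimal sketch enforcing a hypothetical boundary rule (“nothing outside the payments package imports payments.internal”); the package names are placeholders for your own:

```python
import ast
import pathlib

FORBIDDEN = "payments.internal"  # hypothetical internal package

def violations(root: str = "src"):
    """Yield files outside `payments` that import its internals."""
    for path in pathlib.Path(root).rglob("*.py"):
        if "payments" in path.parts:
            continue  # the package may use its own internals
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                names = [node.module]
            else:
                continue
            for name in names:
                if name.startswith(FORBIDDEN):
                    yield f"{path}:{node.lineno} imports {name}"

if __name__ == "__main__":
    found = list(violations())
    for v in found:
        print(v)
    raise SystemExit(1 if found else 0)
```

Once the rule is executable, an AI tool that violates it fails the build instead of quietly drifting the codebase.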

The quality illusion is real. AI-generated code looks correct, passes basic tests, and ships without deep review. Then edge cases surface. The code “worked” but wasn’t right. When output volume goes up and review thoroughness doesn’t keep pace, you accumulate invisible debt. Track not just how much your team ships, but whether shipped features hold up in production. Throughput that doesn’t translate to user outcomes is just Quixote on a faster horse.

Skill atrophy is a long-term risk. Engineers who rely on AI for scaffolding and boilerplate may gradually lose fluency with the foundations. This is fine until something breaks at a level below what the AI typically handles, and nobody on the team has the muscle memory to debug it quickly. Encourage engineers to periodically work without AI assistance—not as a productivity exercise, but as a skills maintenance practice.


The Partnership, Honestly

Sancho didn’t make Quixote unnecessary. He made him effective. The quest still needed a knight—someone with vision, judgment, and the willingness to charge forward.

AI coding tools work the same way. They remove the friction that prevents engineers from doing their best work. They amplify what small, focused teams can accomplish. And they amplify the consequences of PM decisions—good ones ship faster, bad ones fail faster.

The teams getting the most from these tools aren’t the ones treating AI as a magic multiplier. They’re the ones who were already clear about what to build and are now able to move faster toward it. Clarity was always the constraint. AI just made that visible.


Try saying this in your next sprint planning

Instead of rhetorical questions, here are specific things you can say that actually change the conversation:

When scoping a new feature: “Before we estimate this the traditional way—could we spend a day building a rough prototype with AI tooling and see how much of the scaffolding disappears? Then we’ll have a better sense of what the actual hard parts are.”

When the backlog is full of tedious work: “Let’s pick one of those deferred migrations and timebox two days with AI assistance. If it works, we know how to clear the backlog. If it doesn’t, we’ve only lost two days.”

When someone proposes adding headcount: “What if we invested in better tooling for the current team first and ran that experiment for a quarter? If the bottleneck is really capacity, we’ll see it clearly. If it’s clarity or process, more people won’t help.”

When reviewing AI-generated code: “Let’s spend extra time on the integration points and security boundaries. The straightforward logic is probably fine. The places where this code talks to other systems—that’s where we need to look hardest.”

When velocity is up but outcomes are flat: “We’re shipping more than ever, which is great. But let’s check—are our outcome metrics moving too? Speed only counts if it’s pointed at the right things.”


Put this into practice

Help your team adopt AI tooling with clear eyes. The Retro Template helps surface where the bottlenecks really are—and where AI tools can help most.

Get the Template →
