What Sprint Planning Looks Like When Throughput Has Doubled
Velocity was always a flawed proxy for progress. Teams learned to game it, inflate it, and defend it in ways that had little to do with delivering value. AI-assisted development has not fixed the underlying problem — it has made it worse, and in doing so forced a more honest conversation about what sprint planning is actually for.
When individual developer throughput increases significantly — and for many teams working with AI coding tools, the increase in raw output is real — the assumptions baked into story point estimation break down. A task estimated at three points based on historical coding time may now take a fraction of that time to produce, but the same amount of time to review, integrate, and validate. The estimate was wrong from the start; the error was simply masked by a different bottleneck.
The sprint planning sessions that work in this environment are the ones that have shifted the unit of estimation from effort to uncertainty. The question is no longer "how long will this take to write?" but "how well do we understand what we are building?" High understanding, low risk: size it small and move fast. Low understanding or novel architecture: the story needs a spike, not a point value, regardless of how quickly the code can be generated.
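The sizing rule above can be sketched as code. This is purely illustrative: the `Story` fields, the 0-to-1 `understanding` score, and the 0.7 threshold are all invented for the example — in practice the score would be a team judgment call during refinement, not a measured quantity.

```python
from dataclasses import dataclass
from enum import Enum


class Sizing(Enum):
    SMALL = "small"  # well understood: estimate it and commit
    SPIKE = "spike"  # poorly understood: time-box an investigation instead


@dataclass
class Story:
    title: str
    understanding: float       # 0.0 (novel) .. 1.0 (well understood); a team judgment
    novel_architecture: bool   # does this touch architecture we haven't built before?


def size(story: Story, threshold: float = 0.7) -> Sizing:
    """Uncertainty-based sizing: low understanding or novel architecture
    gets a spike, not a point value, no matter how fast the code comes."""
    if story.novel_architecture or story.understanding < threshold:
        return Sizing.SPIKE
    return Sizing.SMALL
```

The point of the sketch is that generation speed never appears as an input: the only variables are how well the team understands the work and whether the architecture is novel.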
This reframe has a structural implication. Backlog refinement becomes more important, not less. The scarcest resource in an AI-assisted team is not typing speed — it is clear requirements, well-understood interfaces, and architectural decisions made before generation begins. Sprint planning that does not account for that upstream investment will consistently produce sprints that generate volume and deliver noise.
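That upstream investment can be made concrete as a gate on the backlog. A minimal sketch, with all check names and descriptions invented for illustration — a real team's definition of ready would have its own items:

```python
# Hypothetical definition-of-ready checks, mirroring the scarce resources
# named above: clear requirements, well-understood interfaces, and
# architectural decisions made before generation begins.
READY_CHECKS = {
    "requirements_clear": "Acceptance criteria written and agreed",
    "interfaces_defined": "Contracts for every touched interface documented",
    "architecture_decided": "Any new architectural decision recorded up front",
}


def is_ready(story_flags: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (ready, missing-check descriptions) for a candidate story.

    A story with any missing check goes back to refinement rather than
    into the sprint, however quickly the code could be generated.
    """
    missing = [desc for key, desc in READY_CHECKS.items()
               if not story_flags.get(key, False)]
    return (not missing, missing)
```

A gate like this makes refinement's output inspectable: a sprint only admits stories whose clarity work is already done.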
The teams handling this well are treating AI throughput as a reason to invest more in definition of ready, not less. The code will come quickly. The clarity has to come first.