Why Solo Specs Fail: The Case for Collaborative Spec-Driven Development
Spec-driven development has given individual developers a much better way to work with AI coding agents. Write a spec, provide context, get better output. The methodology works; posts 1 and 2 in this series covered why.
But here’s what most SDD content isn’t talking about: the methodology breaks down the moment you move from one developer to a team.
Five developers. Five agents. Five different specs. Five different sets of context. Zero shared visibility into what anyone else planned, what constraints they followed, or what assumptions they made.
That’s not a methodology problem. It’s a collaboration problem. And it’s the reason spec-driven development needs to become a team practice, not just an individual one.
The Solo SDD Workflow
Most spec-driven development content today describes a workflow that looks like this: a developer writes a spec, gathers context about the codebase, hands it to an agent, reviews the output, and verifies it against the spec. It works well. For one person.
The problem is that modern software teams don’t build software one person at a time. They build it in parallel. Multiple developers working across different parts of the same codebase, often touching shared services, shared conventions, and shared infrastructure.
When each developer is running their own SDD workflow in isolation, a set of predictable problems emerges. These aren’t theoretical. They’re the daily reality for teams that have adopted AI coding agents.
What Breaks at Team Scale
Context divergence. Developer A writes a spec referencing the auth middleware as it existed last week. Developer B’s agent just refactored that middleware based on a different spec. Developer A’s agent builds against a system state that no longer exists. Neither developer knows about the conflict until code review, or worse, production.
Without shared visibility into what specs are active and what parts of the codebase they’re touching, every developer’s context is a snapshot that starts going stale the moment it’s created.
Duplicated effort. Three developers on the same team need rate limiting for three different endpoints. Each writes their own spec. Each agent builds its own rate limiting implementation. The codebase now has three different approaches to the same problem, none of which reference each other.
This isn’t hypothetical. Teams report this pattern constantly. When AI conversations are siloed, nobody knows what anyone else has already solved. The institutional knowledge that would normally prevent duplication (“hey, check what Sarah built last sprint”) doesn’t reach the agent.
Conflicting conventions. Developer A’s spec says “use the repository pattern for data access.” Developer B’s spec says “call the database service directly.” Both are following conventions they learned at different points in the codebase’s evolution. Without a shared, authoritative source of conventions, each developer’s spec encodes their own understanding of how things should be done. The agents faithfully follow conflicting instructions.
Invisible planning. An engineering manager asks: “What are our agents building this sprint? How are the plans looking? Are there any architectural risks I should know about?” In a solo SDD workflow, the answer is: nobody knows. Each developer’s specs live in their local files, their chat histories, or their heads. There’s no team-level view of what’s being planned.
This is the visibility gap that makes engineering managers nervous about AI adoption. They can see the code that gets committed. They can’t see the plans that produced it. And without seeing the plans, they can’t catch problems before they become expensive.
Knowledge loss. A senior developer builds excellent specs with deep context about the system. They leave the company. All of that context, all of those spec patterns, all of the architectural reasoning they encoded into their agent interactions disappears with them. It was never shared. It was never persistent. It lived in one person’s workflow.
The Cost Nobody’s Measuring
Most teams measure AI coding agent effectiveness at the individual level. Did the agent produce working code? How many iterations did it take? How fast was the developer?
Almost nobody measures the team-level cost of uncoordinated AI-assisted development. What’s the cost of three developers building three different rate limiting implementations? What’s the cost of a context conflict that doesn’t surface until integration testing? What’s the cost of an engineering manager who can’t assess risk because planning is invisible?
These costs are real and they compound. A team of ten developers each running isolated SDD workflows will produce more total code than a team without specs. But they’ll also produce more conflicts, more duplication, more architectural drift, and more rework during integration. The individual productivity gain gets eaten by coordination overhead.
This is a version of the AI productivity paradox applied to methodology. Spec-driven development makes individual developers more effective. Without collaboration infrastructure, it can make teams less coherent.
The irony is sharp. The methodology that was supposed to bring order to AI-assisted development creates a new kind of disorder when practiced in isolation at team scale.
What Collaborative SDD Looks Like
The fix isn’t to abandon spec-driven development. It’s to make it a team practice with shared infrastructure. Here’s what that looks like in practice.
Shared context libraries. Instead of each developer gathering their own context for every spec, the team maintains a shared set of context documents: architecture decision records, coding conventions, service maps, dependency graphs. These are the single source of truth that every spec references. When the auth middleware changes, the context library gets updated once and every future spec reflects the new state.
This isn’t just documentation for humans. It’s documentation specifically structured for agent consumption. The context library is the team’s shared knowledge, packaged so that any developer’s agent can access the same authoritative information.
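As an illustration of what "structured for agent consumption" can mean (the headings, service name, and identifiers below are invented for this sketch), a context-library entry favors explicit, current-state statements over narrative history:

```markdown
# Service Map (excerpt)

## auth-service
- Owns: login, token issuance, session validation
- Interface: call through `AuthClient`; do not query its tables directly
- Current state: middleware refactored under spec-014; the old
  `verify_token()` helper no longer exists
```

The "current state" line is what prevents the stale-snapshot problem described above: an agent reading this entry learns that the old helper is gone without needing to reconstruct the refactor from commit history.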
Visible planning surfaces. Specs and plans should exist in a shared space where the team can see what’s being built, what parts of the codebase are being touched, and where potential conflicts exist. This doesn’t mean every developer needs to review every spec. It means the information is available when it’s needed, and managers can get a team-level view of what’s in progress.
Think of it like the difference between developers committing directly to main versus working in branches with pull requests. The code is the same. The visibility and coordination infrastructure around it changes everything.
Authoritative conventions. The team agrees on one set of conventions and encodes them in a shared document that every spec references. When conventions change, they change in one place. There’s no ambiguity about which approach is current. Agents across the team all receive the same guidance.
This also solves the onboarding problem. A new developer doesn’t need to absorb months of tribal knowledge before writing effective specs. They reference the shared context library, and their agent receives the same foundational context as everyone else’s.
Persistent context across sessions. Individual SDD workflows suffer from context amnesia. Each new session starts from scratch. The developer re-explains the architecture, re-provides the conventions, re-describes the system state. Developers report spending 30-45% of their agent interaction time just re-establishing context.
Collaborative SDD solves this with persistent context that lives beyond any single session. The team’s shared knowledge base carries forward. The context from last sprint’s planning is available for this sprint’s work. The reasoning behind architectural decisions made six months ago is accessible to today’s agent.
Role-based visibility. Not everyone needs to see everything. A product manager might need to see what features are being planned and what verification criteria are defined. A senior developer might need to see what architectural decisions are being made across the team’s specs. An engineering manager might need aggregate metrics: how many specs are in progress, what areas of the codebase are being modified, what verification outcomes look like.
Collaborative SDD supports different views for different roles, all drawing from the same underlying planning data.
The Verification Multiplier
Collaborative spec-driven development also transforms verification. In a solo workflow, verification means one developer checking their own agent’s output against their own spec. It’s better than no verification, but it’s limited by one person’s perspective.
In a collaborative workflow, verification can check against the team’s shared context, not just the individual spec. Did the agent follow team conventions? Did it use shared services instead of building custom ones? Does the implementation conflict with what another developer’s agent is building in parallel?
This is where the real quality improvement lives. Individual verification catches whether the spec was implemented correctly. Team-level verification catches whether the implementation fits the system.
The Infrastructure Gap
If collaborative SDD is the obvious answer, why aren’t more teams doing it?
Because the tooling hasn’t caught up to the methodology. Most SDD tools today are built for individual workflows. GitHub Spec Kit scaffolds specs for one developer’s agent. Kiro structures a workflow for one developer’s session. The planning happens in isolation because the tools assume isolation.
What’s missing is the planning and verification layer that sits above individual agent interactions. A shared space where specs are created collaboratively, where context is maintained as a team resource, where plans are visible across the organization, and where verification checks output against team standards, not just individual specs.
This is an infrastructure problem, not a discipline problem. Teams that want to do collaborative SDD today end up stitching together wikis, shared docs, Slack channels, and manual processes. It works, but it’s fragile and it doesn’t scale.
Purpose-built planning and verification platforms are starting to emerge in this space, designed specifically for teams that need shared planning surfaces, persistent context, and systematic verification. The category is young, but the need is clear.
Getting Started With Collaborative SDD
Even without purpose-built tooling, teams can start moving toward collaborative spec-driven development today.
Build a shared context library. Start with three documents: architecture patterns, coding conventions, and a service map. Put them in a shared repo. Reference them in every spec. Update them when things change. This alone eliminates the biggest source of context divergence.
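As a minimal sketch of this first step (the file names, paths, and starter contents are illustrative assumptions, not a standard), the three-document library can be scaffolded programmatically so every repo starts from the same shape:

```python
from pathlib import Path

# Illustrative starter documents for a shared context library.
# Names and locations are assumptions for this sketch, not a standard.
STARTER_DOCS = {
    "context/architecture.md": "# Architecture Patterns\n\n<!-- ADRs and system diagrams live here -->\n",
    "context/conventions.md": "# Coding Conventions\n\n- Data access goes through the repository pattern.\n",
    "context/service-map.md": "# Service Map\n\n<!-- Which service owns which capability -->\n",
}

def scaffold_context_library(root: str = ".") -> list[Path]:
    """Create the starter documents under root, skipping any that exist."""
    created = []
    for rel_path, body in STARTER_DOCS.items():
        path = Path(root) / rel_path
        path.parent.mkdir(parents=True, exist_ok=True)
        if not path.exists():  # never clobber a doc the team has edited
            path.write_text(body, encoding="utf-8")
            created.append(path)
    return created
```

Because existing files are never overwritten, the script is safe to re-run; updating the documents as things change remains a human (or reviewed-agent) responsibility.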
Make specs visible. Pick a shared location for specs (even a folder in your repo works). The goal isn’t formal review of every spec. It’s making the information discoverable so developers can check what others are building before they start.
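One lightweight way to make a shared spec folder actionable is to give each spec a small header declaring which paths it touches, then flag overlaps before anyone starts. The `touches` field and spec records below are a hypothetical convention for this sketch, not an established format:

```python
from itertools import combinations

# In-progress specs, each declaring the codebase areas it touches.
# The "touches" field is a hypothetical team convention.
specs = [
    {"id": "spec-014", "owner": "dev-a", "touches": ["src/auth/middleware.py"]},
    {"id": "spec-019", "owner": "dev-b", "touches": ["src/auth/middleware.py", "src/auth/tokens.py"]},
    {"id": "spec-021", "owner": "dev-c", "touches": ["src/billing/invoices.py"]},
]

def find_conflicts(specs):
    """Return (spec_id, spec_id, shared_paths) for every overlapping pair."""
    conflicts = []
    for a, b in combinations(specs, 2):
        shared = set(a["touches"]) & set(b["touches"])
        if shared:
            conflicts.append((a["id"], b["id"], sorted(shared)))
    return conflicts

for spec_a, spec_b, paths in find_conflicts(specs):
    print(f"{spec_a} and {spec_b} both touch: {', '.join(paths)}")
```

This is exactly the auth-middleware collision from earlier in the post, caught at planning time instead of at code review.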
Agree on conventions explicitly. If your team has unwritten rules about how code should be structured, write them down. If two developers would give different answers about the “right” way to implement something, that ambiguity will show up in your agent output. Resolve it once in a conventions doc.
Add team context to your verification step. When checking agent output, don’t just verify against the individual spec. Also check: does this follow our conventions? Does it use shared services? Does it conflict with anything else in progress? This single practice catches the most expensive category of errors: code that meets its own spec perfectly but doesn’t fit the system.
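A first pass at the "does this follow our conventions?" check can be mechanical. As a sketch under stated assumptions (the rules below are invented, echoing the repository-pattern and rate-limiting examples from earlier; a real team would derive them from its conventions doc), scan the lines an agent's diff adds for constructs the conventions forbid:

```python
import re

# Hypothetical convention rules: a pattern to flag, plus the guidance to cite.
CONVENTION_RULES = [
    (re.compile(r"\bdb_service\.query\("),
     "Use the repository pattern, not direct database calls."),
    (re.compile(r"\bclass \w*RateLimiter\b"),
     "A shared rate limiter exists; don't add another implementation."),
]

def check_conventions(diff_text):
    """Flag added lines in a unified diff that match a forbidden pattern."""
    violations = []
    for line in diff_text.splitlines():
        # Only inspect added lines; skip the '+++' file header.
        if not line.startswith("+") or line.startswith("+++"):
            continue
        for pattern, guidance in CONVENTION_RULES:
            if pattern.search(line):
                violations.append((line.lstrip("+").strip(), guidance))
    return violations

diff = """\
+++ b/src/orders/handler.py
+    rows = db_service.query("SELECT * FROM orders")
+    log.info("fetched orders")
"""
for code, guidance in check_conventions(diff):
    print(f"VIOLATION: {code}\n  -> {guidance}")
```

A regex pass won't catch everything human review would, but it turns the fuzzy question "does this fit our system?" into a repeatable step that runs on every agent's output the same way.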
These are small investments that produce immediate returns. They won’t give you the full benefits of a purpose-built collaborative platform, but they’ll close the biggest gaps and set the foundation for scaling. The teams that figure out collaborative SDD early will have a compounding advantage as AI agents become more capable, because better agents executing coordinated plans produce far better results than better agents executing isolated ones.
We’re Building Brunel to Solve This
The collaborative SDD gap described in this post is exactly what Brunel is designed to close.
Brunel Agent is a planning and verification platform for teams using AI coding agents, built by Loadsys. It gives your team shared workspaces for building structured plans together, persistent context that carries forward across sessions, plan export in formats any coding agent can consume (Cursor, Claude Code, Copilot, or whatever your team prefers), and a verification engine that compares agent output against the original plan.
Brunel Agent doesn’t generate code. It doesn’t replace your agents. It’s the planning layer that sits before execution and the verification layer that sits after it. Plan, export, execute, verify.
If your team is feeling the pain of isolated specs, invisible planning, and no systematic way to check what your agents actually built, download Brunel Agent and see the difference collaborative planning makes.
The Next Problem
Collaborative spec-driven development gives teams shared planning, shared context, and shared visibility. But there’s one more piece of the puzzle that most teams are missing entirely: what happens after the agent says “done”?
Code review catches code that was built poorly. QA catches flows that are broken. But neither catches the features that were specified, that the agent reported as built, and that are simply absent from the implementation.
That’s the verification gap. And it’s the subject of the next post in this series.
This is Part 3 of a series on spec-driven development with AI and the planning infrastructure that makes coding agents work. Next up: the verification gap, and what happens when structured checks find 30-40% of the spec missing after an agent declares “complete.”