
Rebuild or Replace with SaaS? Why the Custom-App Calculation Changed in 2026


The “rebuild vs. replace with SaaS” decision used to come down to time, cost, and maintenance burden — and SaaS usually won, even when the fit was bad, because rebuilding was a $300–500K, nine-month commitment with a multi-year support tail. In 2026 that math is broken in three places at once. Spec-driven, AI-assisted modernization ships full rebuilds in 90 days for $80–150K. The rebuilt application can be AI-native — MCP servers, A2A interop, agent-orchestrated workflows — in ways no SaaS will ever be. And the rebuilt codebase can be continuously developed and maintained by AI agents working from skills, GitHub Issues, and automated review pipelines. Sometimes SaaS is still the cleaner answer. More often than buyers realize, it isn’t anymore.

I’ve sat in this meeting maybe a hundred times across 20 years.

The CTO opens her laptop, turns the screen, and shows me the diagram. There’s the legacy app — the thing the operations team has been running the business on for eight years. There’s a box on the right labeled “future state,” in dotted lines, with no commitment. And there’s the question that’s been on the agenda for three quarters running: do we finally rebuild it, or do we just buy a SaaS product to replace it?

The conversation always covers the same ground. The legacy app does six things the business needs, and nothing else does all six. A SaaS would cover four well, ignore the fifth, and force a re-architecture of the sixth. The rebuild quote from the dev shop down the road is $400K and nine months. Everyone in the room knows the answer they’re going to land on — “let’s revisit next quarter” — for the third year running.

In 2026, that meeting will have a different shape. The numbers that drove mid-market companies toward SaaS over the last decade were rewritten in roughly 18 months, and most teams haven’t run the new calculation yet. This article is the calculation.

The framework that pushed everyone toward SaaS

For about a decade — call it 2014 to 2023 — the rebuild-vs-SaaS decision had a fairly stable shape. Three numbers dominated.

Time. A traditional rebuild of a 50–150K LOC custom internal app took 9–12 months at a competent dev shop. SaaS onboarding took 8–16 weeks. The CTO who needed something working before next year picked SaaS by default.

Cost. Rebuild cost $300–500K up front, plus 15–25% of that amount annually for maintenance. SaaS was $50–200K/year all in. Over a five-year window, the totals were close, but the up-front capital ask was three to five times higher for rebuild. CFOs picked SaaS.

Maintenance burden. This was the quiet killer. Rebuilding meant owning the codebase forever. Owning the codebase forever meant hiring developers who knew the stack, fielding security patches, paying for runtime and framework upgrades every two to three years, and keeping someone around who remembered why a particular endpoint behaved the way it did. SaaS pushed all of that to the vendor. CTOs picked SaaS.

The 2022 decision tree, then, was usually: if a SaaS covers 80% of what we need, take it; the 20% mismatch is cheaper to absorb than the rebuild. That was a reasonable answer in 2022. It’s a worse answer in 2026, and the reason is that all three of those numbers moved independently, in the same direction.

Shift one: the rebuild cost and timeline collapsed

Spec-driven development with AI coding agents — done with the right discipline, not vibe coding — produces a different cost curve than traditional development. The largest published controlled experiment on AI-coding productivity, Peng et al. 2023 (GitHub + MIT), found developers using GitHub Copilot completed a coding task 55.8% faster than the control group (95% CI: 21–89%). That study was a narrow benchmark — building an HTTP server in JavaScript — not a full application rebuild. A field experiment by Cui et al. (Microsoft + Accenture) with 1,974 developers confirms positive but smaller real-world effects. We see the speed gain in our own delivery; we also see that it doesn’t materialize without methodological discipline.

The catch — and it’s a real one — is that AI-generated code introduces security vulnerabilities at a high rate when produced without that discipline. Veracode’s 2025 GenAI Code Security Report tested 100+ large language models across 80 coding tasks in Java, JavaScript, Python, and C# — and found that 45% of AI-generated code contains security flaws. Java was the worst at 72% failure. The “completion illusion” is the parallel failure mode at the workflow level: an AI coding agent reports a task as done when only a fraction of it has actually been built. Benchmark studies on agentic coding find that even leading agents fully complete only a minority of assigned tasks while reporting success. This is why “give us six months and Cursor, and we’ll rebuild it ourselves” is not a working substitute for a methodology. It’s a way to ship the rebuild that’s 70% done at month 4, 75% done at month 7, and abandoned at month 9.

The combination that works is structured spec-driven execution + AI agents on the keyboard + senior engineers on the architecture + verification against the spec at every step. That stack is what cuts the timeline and price without cutting the quality. It’s what we deliver, and the offer is concrete: 90 days, $80–150K, fixed price, with 25% of the fee forgiven if we miss the deadline. The path in is a two-week, $2,500 Discovery Sprint that produces a written specification you can walk away with — and, if you proceed, the $2,500 is credited against the project fee.

The capital ask for a rebuild dropped from $300–500K to under $150K. The timeline dropped from 9–12 months to 90 days. The five-year TCO comparison that used to be close is now lopsided in the rebuild’s favor for any custom app between 20K and 200K LOC. Run the math on your own app and you’ll see it. Most CFOs haven’t.
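That math is easy to run yourself. Here's a back-of-envelope sketch; the dollar figures are illustrative placeholders, not quotes, so substitute your own numbers:

```python
def five_year_tco(upfront: float, annual: float, years: int = 5) -> float:
    """Total cost of ownership: one-time build cost plus recurring annual cost."""
    return upfront + annual * years

# Placeholder numbers -- swap in your own quotes and subscription pricing.
rebuild = five_year_tco(upfront=120_000, annual=20_000)   # spec-driven rebuild + lighter maintenance
saas    = five_year_tco(upfront=0,       annual=120_000)  # per-seat SaaS subscription
print(f"rebuild: ${rebuild:,.0f}  saas: ${saas:,.0f}")
# prints: rebuild: $220,000  saas: $600,000
```

The point of the exercise isn't the exact numbers; it's that the up-front capital ask no longer dominates the five-year total the way it did when rebuilds started at $300K.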

Shift two: the rebuild can be AI-native — and SaaS can’t

This is the shift most teams are sleeping through, and it’s the more important one.

In November 2024, Anthropic released the Model Context Protocol (MCP), an open standard for giving AI agents structured access to data and tools. By mid-2025, MCP had become the industry-standard interface — adopted by OpenAI (March 2025), Google DeepMind, GitHub, and most major AI tooling vendors — and it’s now the default way any production application exposes its capabilities to AI agents. In April 2025, Google launched the A2A (Agent-to-Agent) protocol, which sits one layer up: it lets AI agents from different vendors and systems negotiate, delegate, and coordinate work directly with each other. Together MCP and A2A — both now governed by the Linux Foundation — define what “AI-native” actually means as an architectural property in 2026.

When you rebuild your custom application now, you can build it AI-native from the first commit:

  • Every domain capability ships as an MCP-callable tool, not just an internal function. Your inventory system, your order pipeline, your custom approval workflow — each becomes something an AI agent can introspect, call, and compose with structured arguments and typed responses. Internal teams stop writing one-off integrations against your app’s API; they point Claude or their agent of choice at the MCP server and the agent figures out the rest.
  • A2A endpoints enable your application to participate in agent workflows spanning vendors and tools. The CFO’s planning agent can negotiate directly with your operations app for live data; your support agent can hand off to a vendor’s claims agent without a human in the middle.
  • The data model and authorization layer are built with agent access in mind from day one — not retrofitted as an MCP layer bolted on top of an old framework. Permissioning, audit trails, and rate-limiting are first-class architectural concerns instead of afterthoughts.
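To make the first bullet concrete, here is a minimal sketch of the tool pattern MCP standardizes: each domain capability is registered with a name, a typed input schema, and a handler that an agent can introspect and call with structured arguments. This is plain Python for illustration only; a production server would use an actual MCP SDK, and every name below (`check_inventory`, the SKU data) is a hypothetical stand-in:

```python
import json
from typing import Callable

# Registry of agent-callable tools: name -> {schema, handler}.
TOOLS: dict[str, dict] = {}

def tool(name: str, schema: dict):
    """Register a domain capability as an introspectable, typed tool."""
    def register(fn: Callable):
        TOOLS[name] = {"schema": schema, "handler": fn}
        return fn
    return register

@tool("check_inventory", {"type": "object",
                          "properties": {"sku": {"type": "string"}},
                          "required": ["sku"]})
def check_inventory(sku: str) -> dict:
    # Stand-in for a query against the real inventory system.
    stock = {"WIDGET-1": 42}
    return {"sku": sku, "on_hand": stock.get(sku, 0)}

# An agent first lists the tools and their schemas, then calls one with
# structured arguments and gets a JSON-serializable, typed response back.
catalog = json.dumps({name: t["schema"] for name, t in TOOLS.items()})
result = TOOLS["check_inventory"]["handler"](sku="WIDGET-1")
```

The design point is the introspectable catalog: the agent discovers what the app can do and how to call it, instead of a developer hand-writing an integration per consumer.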

A SaaS, by contrast, gives you whatever AI surface the vendor decides to ship — on the vendor’s roadmap, in the vendor’s permission model, exposing the data the vendor decides to expose. Most SaaS vendors in 2026 have an “AI roadmap.” Approximately none of them have an MCP server you can plug into your own agent stack as a first-class peer. Some will. Eventually. On their schedule.

The custom rebuild, done right, makes your application an MCP-native and A2A-native participant in your own AI infrastructure. That’s not a feature you can add to a SaaS contract. It’s a structural property of an app you own.

Shift three: AI agents can continuously develop and maintain the codebase

The third shift addresses what was historically the biggest reason to choose SaaS: maintenance burden.

The math used to be that owning a custom application meant incurring recurring taxes — patches, framework upgrades, feature requests, bug triage, regression testing, security reviews. SaaS pushed all of that to the vendor. That tradeoff is what made SaaS attractive even when the product fit was mediocre.

In 2026, that tax is meaningfully lower for a custom app that was built to be developed by AI agents, not just used by them. Concretely:

  • Skills and agent definitions are added to the repository. Claude’s Skills (folders of best practices and procedures the agent loads when relevant) and equivalent constructs in other agent systems live next to the code, version-controlled, peer-reviewable. A new pattern, a new compliance constraint, or a new internal convention gets encoded once, and the agent applies it consistently across every future change.
  • GitHub Issues becomes the planning surface for AI agents. A bug report or feature request opened by a non-technical operations user is triaged by an agent: it pulls relevant code paths, drafts a reproduction, opens a PR with the proposed fix, and tags the right human reviewer. Anthropic itself reports that the majority of its code is now written by Claude Code; engineers focus on architecture, product judgment, and orchestration rather than line-by-line implementation. That’s not an aspirational pattern. It’s a 2026 production pattern.
  • AI code review runs on every PR. Not as a replacement for senior engineers — as a first pass that catches the obvious, enforces the architectural guardrails defined in your spec, and frees the senior reviewer to focus on the judgment calls. The senior engineer’s hour stays leveraged.
  • Continuous refactoring and documentation. Agents can keep architecture documentation, runbooks, and inline comments in sync with the code as it evolves. The slow drift that turns a clean codebase into a legacy codebase over five years is fought continuously rather than allowed to accumulate until the next big rebuild.
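The Issues-as-planning-surface loop in the second bullet can be sketched as a pipeline. Everything here is a hypothetical stand-in: a real setup would call the GitHub API and a coding agent, and the keyword-matching "triage" is a stub for whatever retrieval the agent actually does:

```python
from dataclasses import dataclass

@dataclass
class Issue:
    number: int
    title: str
    body: str

@dataclass
class DraftPR:
    issue: int
    touched_paths: list
    reviewer: str
    status: str = "awaiting-human-review"

def triage(issue: Issue, codeowners: dict) -> DraftPR:
    """Agent triage: locate code paths, draft a fix as a PR, tag the owner."""
    # 1. Locate relevant code paths (stub: keyword match against a path index).
    words = issue.title.lower().split()
    paths = [p for p in codeowners if any(w in p for w in words)]
    # 2. Draft the change as a PR and 3. route it to the owning human reviewer;
    #    a human always gates the merge.
    reviewer = codeowners[paths[0]] if paths else "oncall-engineer"
    return DraftPR(issue=issue.number, touched_paths=paths, reviewer=reviewer)

pr = triage(Issue(101, "billing export fails", "steps to reproduce..."),
            codeowners={"src/billing/export.py": "alice",
                        "src/auth/login.py": "bob"})
```

Note what the loop does not do: merge. The agent's output is always a draft routed to a named human, which is the structural difference between this pattern and "let the AI maintain it."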

The custom application rebuilt with this in mind is not the same maintenance burden as the one from 2018. The recurring cost looks materially different, and the team needed to maintain it is a fraction of the size, with senior engineers spending most of their time on architecture and product judgment rather than patching.

This is the shift that closes the case for SaaS. The historical “you’ll regret owning it” objection assumed maintenance costs that no longer hold for an AI-native, AI-maintained codebase.

When SaaS is still the right answer

I owe an honest answer here, because the temptation in this kind of post is to argue against SaaS in every case. That’s not the position.

SaaS is still the right answer when:

  • The capability you need is genuinely undifferentiated — payroll, expense management, calendar, video conferencing, basic CRM. There’s no business advantage to owning the code, and the vendor’s investment in the product exceeds what you’d ever justify spending.
  • A best-in-class SaaS exists and covers 95%+ of your use case with no painful workarounds. Don’t rebuild Salesforce.
  • Your current “custom app” is actually three years of someone’s prototype that nobody understands, with no users to speak of, and the SaaS would force the discipline of a clean process. Sometimes the rebuild conversation is a way of avoiding the harder conversation about whether the workflow the app encodes was ever good.
  • You don’t have the executive sponsor or operational maturity to own a custom app, even one that’s cheap to maintain. Custom apps need someone responsible. SaaS contracts need a procurement signature.

Most custom internal applications exist precisely because no SaaS fit the use case when they were built. That hasn’t changed. The buyer always asks the SaaS question, and they should — but in 2026, the answer for most internal portals, MES layers, ERP customizations, admin systems, vertical SaaS modules, and operational tooling tilts back toward rebuild for the first time in a decade.

When rebuild beats SaaS in 2026 — five signals

The rebuild path is the right one when at least three of these are true:

  1. Your application encodes business logic that the SaaS market hasn’t generalized. If you can’t find a SaaS that does what you do without a 30%+ workflow change, your domain is your differentiation. Don’t outsource your differentiation.
  2. Your team and your customers will benefit from agent-native access to that logic. If your roadmap includes AI workflows that span your data — internal copilots, agent-orchestrated operations, programmatic agent-to-agent integration — a custom MCP-first rebuild is a strategic asset; a SaaS is a black box.
  3. The application has 20K to 200K lines of code on a 5–12-year-old stack. That’s the band where 90-day spec-driven modernization works structurally. Above 200K, the spec itself becomes a multi-month engagement; below 20K, the rebuild is small enough that the SaaS-vs-rebuild question shouldn’t have taken three quarters to answer.
  4. You have an executive sponsor who is available weekly. This is not optional. The SaaS path can be procured; the rebuild path needs decisions every week. If nobody at the leadership table will commit 60 minutes weekly, defer the rebuild.
  5. A forcing function is moving on a clock. Cyber insurance renewal next year. EOL on the runtime in 18 months. The senior engineer who knows the codebase is six months from retirement. The growth curve means the system that works at 100K records will not work at 1M. The rebuild that wins the rebuild-vs-SaaS argument is the rebuild that ships before the forcing function fires.

If three or more of these hold, the rebuild is now the cheaper, faster, more strategic path. The 2022 framework would have pushed you the other way; the 2026 framework doesn’t.

How spec-driven modernization actually does this

A short note on the mechanism, because this only works if the methodology holds.

We deliver custom-application modernizations on a fixed price, in 90 days, with the spec as the contract between the team and the AI agents. The structured workflow is Plan → Execute → Verify: a working specification produced in a two-week, $2,500 Discovery Sprint (scope, user stories, risks, architecture, implementation plan), execution by AI coding agents under principal-engineer supervision, and verification of every commit against the spec rather than against a vibes-based “looks good” review. AI accelerates the build. Senior engineers own the architecture, security, and quality. No commit reaches production unsupervised.
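The Verify step is the part that kills the completion illusion, and it can be sketched in a few lines: the spec is held as a checklist of acceptance criteria, each backed by an automated check, and a change counts as done only when every check passes rather than when the agent reports success. The criteria and checks below are hypothetical stand-ins for real tests:

```python
# Spec as a checklist: criterion name -> automated check (stand-ins here).
spec = {
    "orders API returns typed errors":        lambda: True,
    "audit trail written on every mutation":  lambda: True,
    "p95 latency under 300ms on seeded data": lambda: False,  # not yet met
}

def verify(spec: dict):
    """A change is 'done' only if every spec criterion's check passes."""
    failing = [name for name, check in spec.items() if not check()]
    return (len(failing) == 0, failing)

done, failing = verify(spec)
# done is False: the agent's self-report is ignored; only the spec
# verdict (plus the human reviewer) gates the merge.
```

This is the whole trick: "verified against the spec" is mechanical, so an agent reporting 100% while delivering 70% gets caught at the first failing criterion.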

The output is a modern application built MCP-first and A2A-aware, with skills and agent definitions checked into the repository, GitHub Issues set up as a planning surface for ongoing development, and AI code review wired into the PR pipeline from day one. You own the code. You own the AI surface. Your team — augmented by agents — is responsible for maintenance. The completion illusion stays out because the spec is what gets verified, not the agent’s self-report.

That’s the full mechanism. It’s how the three shifts above stop being theoretical and start being a 90-day project.

FAQ

What’s the realistic cost difference between rebuild and SaaS over five years? For a typical mid-market internal application, a 2026 spec-driven rebuild is $80–150K up front plus a maintenance retainer that’s a small fraction of historical custom-app maintenance because AI agents handle most ongoing work. A comparable SaaS at $80–200K/year totals $400K–$1M over five years. For almost any internal app where SaaS pricing is per-seat or per-record, the rebuild breaks even by year three.

Can a SaaS eventually expose itself via MCP? Why not wait? Some will, eventually, on the vendor’s schedule. The MCP server that most SaaS vendors will ship in 2026–2027 will expose the surface area the vendor wants you to expose, with the rate limits and authorization model the vendor decides. That’s a different thing from an MCP server you control. If your AI roadmap depends on agent-native access to your domain logic, waiting for vendors is a strategic risk.

What if our custom app is bigger than 200K LOC? The 90-day model breaks structurally above 200K LOC because the spec itself becomes a multi-month engagement. For larger systems, the right answer is usually decomposition — modernize the highest-leverage 100K LOC first, leave the rest stable, and revisit. We’ll tell you in the Discovery Sprint whether your app fits.

Doesn’t AI-generated code introduce security risk? Unsupervised it does — Veracode’s 2025 GenAI Code Security Report found that 45% of AI-generated code introduces security vulnerabilities, based on tests across 100+ LLMs. The mitigation is methodological: spec-driven workflow, principal-engineer supervision, verification against the spec, and AI code review wired into the PR pipeline. Every commit is reviewed by a human. No AI runs to production unsupervised.

What does “AI agents maintaining the codebase” actually look like in practice? A new bug report or feature request opens as a GitHub Issue. An agent triages, identifies relevant code paths, drafts a reproduction or implementation plan, opens a PR with proposed changes, and tags the human reviewer. The reviewer accepts, requests changes, or rejects. Skills and agent definitions checked into the repo keep the agent’s behavior consistent with the team’s standards. Senior engineers spend their hours on architecture decisions and reviewing the agent’s work, not on writing CRUD endpoints.

How does this work if our CTO isn’t the buyer — if it’s a COO-sponsored internal app? The methodology is the same; the conversation is different. Internal apps that operations runs and engineering supports often have a COO or VP-Operations as the executive sponsor and the engineering team as a constrained delivery partner. We can explicitly structure a Discovery Sprint and a 90-day rebuild around that ownership pattern. The non-negotiable is an executive sponsor with weekly availability, regardless of their seat.

What if we want to start with a Discovery Sprint and decide later? That’s the design. The Discovery Sprint is $2,500, fixed, takes two weeks, and produces a working specification — scope, user stories, risks, architecture, and implementation plan. Even if you never engage us to execute, you leave with a blueprint you can take to any vendor or build with internally. If you proceed with the rebuild, the $2,500 is credited against the project fee. There’s an anonymized sample spec on the site so you can see exactly what the deliverable looks like before you commit. The walk-away clause is the qualifier — buyers willing to pay $2,500 for a real spec are buyers serious about the rebuild.


The recalculation

If your “rebuild vs. replace with SaaS” conversation has been on the agenda for three quarters, the 2022 framework is what’s keeping it there. The 2026 framework changes which path costs less, which path ships faster, and which path is strategically aligned with where your business is going.

A 30-minute call. We’ll tell you if your app fits in 90 days.

Schedule a Discovery Sprint conversation →

Or see a sample spec first — the actual deliverable from a two-week Discovery Sprint, anonymized.
