Bo Frese

March 2025

AI Is About to Make Your Technical Debt Problem Much Worse

Most organisations already carry more technical debt than they can handle. AI is about to accelerate its accumulation significantly — unless you address the underlying quality culture first.

Most engineering organisations I visit have the same quiet problem: a technical debt backlog that grows faster than it shrinks. They've learned to live with it — hiring more people as development becomes less efficient, building more support capacity to handle the bugs, accepting that new features take longer than they should.

AI coding tools are about to make this problem much harder to ignore.

What I've seen firsthand

I've spent the past year building products from scratch with AI — an iOS app, a CMS, a development tooling system. The productivity gains are real. I built things in that time I couldn't have built alone.

But several times I had to stop and rewrite sections from scratch. Not refactor — rewrite. AI had helped me build something that worked on the surface but carried enough inconsistency and accumulated cruft that cleaning it up was slower than starting over. I had that option because it was my own project and my own timeline.

Most teams don't.

The pattern

AI produces impressive code quickly. But beneath the initial productivity boost, technical debt accumulates faster than with traditional development.

As developers guide AI through iterations, codebases collect unused or duplicate functions, mixed implementation approaches, inconsistent patterns, and abandoned fragments nobody cleaned up. What makes this particularly difficult is that developers don't recognise problematic patterns in AI-generated code the way they do in code they wrote themselves. The code feels foreign even to the people who requested it.

AI also tends to follow the patterns already present in the codebase. If those patterns are good, it amplifies them. If they're not, it amplifies those too — consistently, at speed, across every file it touches.

Why this is an organisational problem, not a technical one

The technical part is the symptom. The cause is how organisations currently incentivise software development.

In most engineering organisations, teams are measured primarily on velocity and feature delivery. Quality is a secondary concern — important in principle, squeezed in practice. Technical debt reduction is always the thing that gets pushed to the next sprint.

When AI allows teams to demonstrate working features five to ten times faster, the pressure to prioritise short-term delivery over long-term quality becomes nearly irresistible. The debt doesn't announce itself — it accumulates quietly until one day the thing that was supposed to make you faster becomes the thing slowing you down.

The organisations already struggling with technical debt under traditional development will see that debt compound at a rate they haven't experienced before.

What actually needs to change

The fix isn't a policy about AI tools. It's addressing the quality culture that makes this dynamic possible in the first place.

That means making technical debt visible in planning — not as a separate "tech debt sprint" that gets cancelled under pressure, but as part of every feature conversation. It means involving the engineers who work directly in the codebase in decisions about technical priorities. It means treating code quality as a leadership responsibility, not just an engineering concern.

It also means thinking seriously about documentation — not as bureaucracy, but as the context that makes AI useful. AI tools need to understand your architecture, your patterns, the decisions you've made and why. If that context isn't written down anywhere, the AI is making decisions without the full picture. Every session.

The question to ask now

Before your organisation goes further with AI coding tools, honestly evaluate:

  • Do your teams have the time and authority to address technical debt as they go?
  • Are code quality and architectural consistency part of how you measure progress?
  • Does your codebase have enough documented context that AI can follow your patterns — not just invent new ones?

If the answer to most of these is no, proceed carefully. AI will amplify what's already there.

The teams that will get the most from AI aren't the ones who move fastest at first. They're the ones who have the quality foundation that allows speed to compound rather than decay.

The code your team ships bears your organisation's signature, not the AI's. That hasn't changed.

Is your codebase ready for AI?

If those questions gave you pause, that's worth addressing before you go further. I'd like to hear about your situation.