The Bottom-Up Approach

February 17, 2026

I find the bold visions for AI-powered software development genuinely intriguing. The idea that highly personalized applications could be generated and maintained from specs alone, that swarms of agents could build and iterate autonomously, that an AI with full access to your systems could act as a true personal assistant — these are fascinating possibilities, and I think about them regularly, in particular about how we might get there.

However, I'm not signing up for these ideas or trying to adopt them directly; instead, my approach looks more like what people are already doing in their day-to-day work: incremental adoption. It's the seemingly slower, less glamorous work of figuring out the smaller things that can be done with LLMs right now, through hands-on experimentation and learning. I believe the path to those bigger ideas runs directly through work that many seem eager to skip.

There are generally two modes of LLM adoption: bottom-up and top-down. Bottom-up approaches take existing processes and extend or replace parts of them: summarizing sprint notes, generating test cases, adding an AI reviewer before a human one. Top-down approaches aim to replace how software is built entirely: all code generated from specs, applications maintained without writing a line, agents operating autonomously across your systems and services.
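
To make that last example concrete, here's a minimal sketch of a bottom-up step: an AI review pass that runs on a diff before a human reviewer sees it. This assumes the `anthropic` Python SDK; the model name is a placeholder, and the whole thing is an illustration rather than a prescription.

```python
# Minimal sketch: ask an LLM to pre-review a diff before a human does.
# Assumes the `anthropic` SDK is installed and ANTHROPIC_API_KEY is set;
# the model name below is a placeholder.
import subprocess

import anthropic

def ai_review(base_branch: str = "main") -> str:
    # Collect the same diff a human reviewer would look at.
    diff = subprocess.run(
        ["git", "diff", base_branch],
        capture_output=True, text=True, check=True,
    ).stdout
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder; use whatever model you have
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "Review this diff and flag bugs, risky changes, and "
                       "anything a human reviewer should look at first:\n\n" + diff,
        }],
    )
    return response.content[0].text

if __name__ == "__main__":
    print(ai_review())
```

The point isn't the specific tool; it's that the step slots into an existing process — review still happens, a human still decides — rather than replacing it.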

Both matter. But right now, I see a lot of energy going into building and trying to adopt top-down visions directly and not nearly enough into bottom-up legwork, and I think that's a costly mistake.

The argument I hear most often goes something like: "This novel approach doesn't quite work yet, but it will once LLMs get good enough." I understand the appeal. If you believe the technology will eventually get there, why invest in incremental improvements to how things work today?

Here's why: the solutions to many software development problems are embedded in the systems and practices we have today, and those problems don't just disappear. Version control, CI/CD, testing frameworks, code review — none of these were built and adopted because they're theoretically elegant, or out of a love of busywork. They were developed because the lived, human experience without them was expensive, unsustainable chaos.

Now, top-down approaches don't usually throw these things away — they automate them, removing the human from the loop. But in doing so, they lose some of the value these practices were developed to provide in the first place, and some of that value is crucial. These practices aren't just checklists; they're structures built around deliberation and decision-making. Code review is valuable not because a diff gets examined for formatting or style, but because someone reads it and asks "wait, is this actually what we want?" Testing matters not just because it catches bugs, but because writing tests forces you to articulate what correct behavior actually looks like; written well, tests are a form of software specification. These systems aren't bottlenecked by humans — they're efficient systems for surfacing decisions to humans and enabling them to apply judgment.
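
As a small illustration of tests-as-specification, here's a hypothetical test whose name and assertion record a product decision; the `Charge` class is invented for the example.

```python
# Sketch: a test that reads as a spec. The Charge class is a stand-in
# invented for illustration.
class Charge:
    def __init__(self, amount_cents: int):
        self.amount_cents = amount_cents

    def refund(self, amount_cents: int) -> int:
        # The behavior the test below pins down: over-refunds are clamped
        # to the original charge, not rejected.
        return min(amount_cents, self.amount_cents)

def test_refund_never_exceeds_original_charge():
    charge = Charge(amount_cents=5000)
    # Writing this forces the question: should an over-refund be rejected,
    # clamped, or allowed? The assertion records the answer.
    assert charge.refund(amount_cents=7000) == 5000
```

Writing that test forces a decision that skipping straight to generated code would have left implicit.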

Skip that deliberation, and the problems don't disappear. They accumulate quietly and surface later, in ways that are harder and more expensive to fix. In the same way that it's important to shift testing left where possible, it's important to shift decision-making left too.

Adopting a whole new mode of software development is like a total rewrite, and it carries similar risks. Netscape's decision to rewrite Navigator from scratch — abandoning a complex, battle-tested codebase in favor of starting clean — is one of the most cited failures in the history of software. The company lost years of market position and never fully recovered. The existing code was messy, yes. But it also encoded solutions to thousands of problems the team had already solved and forgotten about. Starting fresh meant discovering and solving them all again, unnecessarily.

Applying that same instinct to how software development itself works, trying to rewrite the process from scratch without taking into account what the current one solved and what still applies, risks the same kind of expensive rediscovery.

At the same time, there are some really compelling reasons to do the bottom-up work head-on:

  • It compounds in ways top-down thinking doesn't. Say you use an agent to write unit tests and it cuts that work in half. That's a real, immediate gain. Now say you use an agent swarm to parallelize that further and cut it in half again. That second step saves only half as much total time as the first (see the quick sketch after this list), and it required significantly more complexity to achieve. The highest-leverage moves are almost always the first ones in any given area, which means your time is usually better spent finding the next untouched area than squeezing more out of one you've already improved. Bottom-up thinking keeps you hunting for those high-leverage moments and getting the most out of your time.
  • It keeps you honest about cost. We're in an unusual period where LLM usage is heavily subsidized and still surprisingly expensive. Approaches that work today by throwing more LLMs at a problem (e.g. having several models generate competing solutions and another choose between them) may not survive contact with real economics. Bottom-up work forces you to think about whether an LLM is actually the right tool for a given step, which builds the judgment you'll need when the economics normalize.
  • It shows you what the future actually requires. I find I can assess ambitious top-down ideas better the more incremental adoption work I do. You start to see which parts of a workflow can be automated cleanly and which are load-bearing in ways that aren't obvious. You learn what the real blockers are. Those learnings are genuinely useful for evaluating new ideas and products, and for knowing when someone's vision is missing a crucial piece.
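
Here's the quick numeric sketch promised in the first bullet, using a made-up 10-hour baseline: each successive halving saves half as much as the one before.

```python
# Diminishing returns from repeatedly halving a task.
# The 10-hour baseline is a made-up figure for illustration.
total_hours = 10.0
remaining = total_hours
for step in range(1, 4):
    saved = remaining / 2
    remaining -= saved
    print(f"halving #{step}: saves {saved:.2f}h, {remaining:.2f}h of work left")
# halving #1: saves 5.00h, 5.00h of work left
# halving #2: saves 2.50h, 2.50h of work left
# halving #3: saves 1.25h, 1.25h of work left
```

After two halvings you've already captured 75% of the possible savings, which is why the next untouched area usually beats further optimization of this one.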

None of this is an argument against ambitious thinking. Top-down ideas are valuable; they're how you know what direction to point. The best ones illuminate what problems need to be solved and why they matter.

The Apollo program, the development of the internet, the Manhattan Project: these are all examples of audacious top-down goals. All of them succeeded by solving enormous numbers of smaller, unglamorous problems first, building on centuries of prior work along the way. The vision and the legwork weren't in competition. They needed each other.

That's how I think about this. The exciting futures people are imagining for AI-native software are quite possible. But they'll be built bottom-up, one experiment at a time, by people who are willing to do the necessary legwork.

Meta note: writing this blog post is itself an example of why the bottom-up work can't be waved away. I started with a handful of thoughts and a rough outline; I could have handed those to an AI and had it write the post for me. Instead, I wrote the first draft, submitted it to Claude for feedback, and we iterated a few times: first with a whole rewrite, then targeting specific points that were still unsatisfying. Through that iteration I improved not only the post but also my own thinking and articulation. Without it, both would have stayed sloppy. AI-assisted work need not be slop!