
Three deletions for every build

We counted our own field notes. Far more 'we deleted X' posts than 'we built X' posts. That ratio is the startup loop — ship a hypothesis, learn it was wrong, delete it, ship the next one. The deletion is the receipt.

Denys Kuzin · 5 min read · brand · deletions · build-in-public · methodology

We did a count this week. Not a vanity count, an honest one. Open the field-notes folder, sort by title, and read down the column. Six of them are deletion stories: "We deleted the worker." "Killed MkDocs, kept the URLs." "Lanes as config — or how we killed the workflow artifact." "The docs-mcp-server experiment we deleted." "The mapping table the runtime never read." "Multi-tracker adapters before there were customers" — the last one is the inverted twin, a post about not deleting something we could have. Against those, the number of posts titled "we built X" and meant as a clean origin story is closer to two.

The shape of the corpus is three deletions for every build.

We were a little surprised by it, then we weren't. The honest reading is that this is not a confession. It is the loop. A startup that ships a hypothesis, learns it was wrong, deletes it, ships the next one — that startup writes more deletion posts than build posts, because the deletion is the receipt. It is what proves the hypothesis was a real bet, not a daydream, and that we let it go cheaply, not after a year of sunk cost stapling it to the roadmap.

Most startups don't die in the build phase. They die in the can't-delete phase. The code shipped, three customers depend on the side effect, the founder named a child after the feature, and now the thing has to be maintained forever even though it was never the right idea. The features compound, the team gets slower, the next bet can't fit because there's no room. That is the failure mode. Not building, not deleting.

Three from the corpus

The worker, the queue, the cache. We stood up a cloud topology in the morning with five containers: an API, a Console, a worker, a Redis queue, a repo cache. By the evening three of them were gone. The hypothesis had been that secret probes were slow, repo reads needed caching, and agent runs needed a queue. None of it held when we measured the system we actually had. What stayed was the API and the Console. The diagram fits in the palm of a hand.

MkDocs. We had a fine documentation runtime. We replaced it with a Next.js dynamic route in one commit, because the interface — every existing /docs/... URL — was the thing the rest of the methodology depended on, and the runtime was implementation. We built the runtime, learned the runtime wasn't the contract, deleted the runtime. The contract stayed.

The dark-factory metaphor. Not a code deletion. A mental-model deletion. We had been quietly thinking of the agent the way the industry was thinking of it — lights-out, one operator, the team shrinks. Writing the post forced us to delete that frame and replace it with the one we actually believe: the typing compresses, the deciding doesn't, the team grows in the part of the job that was always the job. What we built was the metaphor; what we deleted was the metaphor. What stayed is the working theory of the team.

Three different scales — infrastructure, runtime, frame — and in each case the same beat: ship, learn, delete, keep the thing under it that was real.

The discipline that makes this possible

A useful question to ask about anything you're about to build: if this turns out to be wrong, how long will it take to delete? If the answer is "an afternoon," the bet is small enough to make. If the answer is "a quarter," the bet is probably too expensive to test, and the actual right move is to find a cheaper version of the same question.

Most of the posts above describe deletions that took hours, not weeks. The worker rip-out was a single afternoon because the queue had always sat behind an interface. The MkDocs swap was one commit because the URLs had been the contract from day one. The mapping-table walk-back was a clean revert because the runtime had never read the field, so there was no migration to write. None of that was luck. It was a design decision, made earlier, that each module should be deletable in an afternoon. You write the seam in before you need it, knowing the version on the page right now is the version most likely to be wrong.
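The seam idea can be sketched in a few lines. This is a minimal illustration, not our actual code; the names (`TaskQueue`, `InProcessQueue`, `run_agent_jobs`) are hypothetical. The point is that callers depend on a small interface, so retiring an implementation — a Redis-backed queue, say — is deleting one class, not rewriting every call site.

```python
from typing import Protocol


class TaskQueue(Protocol):
    """The seam: callers depend on this interface, never on a backend."""

    def enqueue(self, task: str) -> None: ...
    def drain(self) -> list[str]: ...


class InProcessQueue:
    """The boring implementation that survives once the fancy one is deleted.

    Swapping out a Redis-backed version means removing one class and its
    container; no caller changes, because callers only ever saw TaskQueue.
    """

    def __init__(self) -> None:
        self._tasks: list[str] = []

    def enqueue(self, task: str) -> None:
        self._tasks.append(task)

    def drain(self) -> list[str]:
        tasks, self._tasks = self._tasks, []
        return tasks


def run_agent_jobs(queue: TaskQueue) -> list[str]:
    """A caller written against the seam, not the implementation."""
    queue.enqueue("scan-repo")
    queue.enqueue("probe-secrets")
    return queue.drain()
```

The afternoon-shaped delete falls out of the structure: the interface is the part you keep, the implementation is the part you bet on.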

The flip side is also true and worth saying out loud: there are things in the codebase right now that we can't delete in an afternoon, and we know which ones. They are the bets we are most confident in. Postgres is one. The artifact model is another. The methodology book. If those are wrong, we are in trouble — they are load-bearing for everything else. We make peace with that by being honest about which bets are afternoon-shaped and which are quarter-shaped, and by writing far more afternoon-shaped bets than quarter-shaped ones.

A hypothesis testable in days is a hypothesis. A build that sits on the team for years without a delete path is a decision — and decisions at that scale deserve more thought than hypotheses do.

The metric we'd actually look at

If we were sitting in a board meeting for an early-stage product company and someone asked for one number, features shipped this quarter would be a bad answer. So would commits per engineer. So would roadmap completion. All three reward the build half of the loop and ignore the half that decides whether the build was worth it.

The number we'd actually look at — and we are aware this is unusual — is rate of correct deletion. How often did the team identify that something it had built was wrong, and remove it, fast, before it compounded? Not deletions in general; bug-fix deletions don't count, and neither do refactors. Correct deletions — bets that were tested, found wrong, and retired within a working week. A team that produces a steady number of those is a team that is running the loop. A team that produces zero is a team that is either too cautious to bet or too attached to delete.

The corpus, this week, says we are running it.

Rate of correct deletion is a better health metric for an early-stage product company than features shipped.
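If you wanted to actually track the metric, the counting rule is simple enough to write down. A hypothetical sketch, assuming a bet log the team maintains by hand (`Bet`, `correct_deletions`, and the entries below are all illustrative, not real data):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Bet:
    """One shipped hypothesis in a hand-maintained bet log."""

    name: str
    found_wrong: bool              # did we learn the bet was wrong?
    days_to_delete: Optional[int]  # None = still in the codebase


def correct_deletions(bets: list[Bet], within_days: int = 5) -> int:
    """Count bets that were tested, found wrong, and retired fast.

    Bug fixes and refactors never enter the bet log, so they are
    excluded by construction; only retired hypotheses count.
    """
    return sum(
        1
        for b in bets
        if b.found_wrong
        and b.days_to_delete is not None
        and b.days_to_delete <= within_days
    )


# Illustrative entries, loosely shaped like the stories above.
bets = [
    Bet("redis-queue", found_wrong=True, days_to_delete=1),
    Bet("mkdocs-runtime", found_wrong=True, days_to_delete=1),
    Bet("mapping-table", found_wrong=True, days_to_delete=2),
    Bet("postgres", found_wrong=False, days_to_delete=None),
]
```

The `within_days` threshold encodes the "retired within a working week" rule; tighten it and only the afternoon-shaped deletes survive the count.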

The long read

This is one field note. The full argument lives in the book.

Read the book