Build in public
The page an agent can read
We rewrote the Lighthouse pitch page with the explicit constraint that an LLM agent reading the HTML should be able to self-configure its MCP client. Then we ran the test. Here is what changed and what the new constraint asked of the writing.
Halfway through rewriting /lighthouse this week we realised the page had two audiences — a buyer scrolling on a Monday, and an LLM coding agent doing what its user just asked it to do. The buyer wanted to know what Lighthouse is and why it should sit next to their workspace. The agent wanted three exact strings to paste into a config file, in the right shape, with the correct URL, so its human did not have to bounce between tabs.
We had been writing the page for the buyer. The agent showed up anyway.
The conventional version
A normal product pitch page tells you the product exists, why it matters, where the value is, and where to click. It treats the reader as a human who will read the prose, decide they are interested, and then go talk to another human or fill out a form. Every line is sized for a person who can hold ambiguity for a moment and then ask a follow-up question in the same room.
Drop an agent on that page and it gets nothing useful. The agent is not interested in our brand voice. It is trying to do a task — its user said "connect me to the global Lighthouse" — and it needs (a) the canonical URL, (b) the transport, (c) the client-specific config, and (d) a way to verify the connection worked. Marketing prose answers none of those four questions. The page that converts a human politely refuses to converse with the reader who can actually take the next step on its own.
The new constraint
Lighthouse just launched. The global instance is live at lighthouse.harborgang.com — a read-only public knowledge graph, with its MCP endpoint at https://lighthouse.harborgang.com/mcp/. So we sat with the rewrite and added one constraint to the brief: the rendered HTML must be sufficient for an agent to self-configure its MCP client. No additional docs trip. No "see the integration guide." The page itself must close the loop.
The page now opens with a buyer paragraph and then turns hard into a section called "Connect in 60 seconds" with three copy-paste configs for three clients. Here is the Claude Code one, on the page in plain view:
{
  "mcpServers": {
    "lighthouse": {
      "type": "http",
      "url": "https://lighthouse.harborgang.com/mcp/"
    }
  }
}
Claude Code accepts native HTTP transport, so the whole snippet is a server name, a transport type, and a URL. Cursor is even shorter — it takes just the URL in its MCP settings panel, no transport field at all. Claude Desktop is the awkward one: no native HTTP transport yet, so the config wraps the URL in the mcp-remote shim and shells out to npx. Three clients, three shapes, all on the page.
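The Claude Desktop shape is worth sketching too. This is an illustration of the mcp-remote wrapper the paragraph above describes, not the snippet from the live page — the command and args keys follow Claude Desktop's stdio config convention, and the exact mcp-remote arguments are an assumption:

```json
{
  "mcpServers": {
    "lighthouse": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://lighthouse.harborgang.com/mcp/"]
    }
  }
}
```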
Below the configs we listed the three tools the server exposes — search, fetch, propose — with one-sentence contracts. A curl https://lighthouse.harborgang.com/health line for the agent to verify the instance is up. A sample prompt that demonstrates that a real answer comes back with a citation, so the agent knows what success looks like before it runs the first query. The page reads slightly differently to a human now — there is a denser middle, the kind of middle a human eye skims and an agent eye fixates on — but the buyer paragraphs that bracket it are still doing their job.
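For an agent that wants to smoke-test a tool directly rather than through a client, an MCP call is a JSON-RPC message sent to the HTTP endpoint after the client completes the protocol's initialize handshake. A sketch of a search call: the tools/call method and the name/arguments envelope come from the MCP specification, while the query argument name is an assumption, since the page's one-sentence tool contracts are not reproduced in this post:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": { "query": "lighthouse global instance" }
  }
}
```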
We ran the test
After publishing, we ran the obvious experiment. Open a fresh coding-agent session, give it the rendered URL, and play the human: "I want to connect to the global Lighthouse instance. Read the page and tell me what to put in my config and how to verify it works."
We had eight questions in the rubric. What is Lighthouse. Where is the global instance hosted. Which MCP endpoint does it serve. Which clients are supported. What is the exact Claude Code config snippet. What tools does the server expose. How do you verify the connection. What does a successful query look like.
The agent answered all eight from the page alone. No follow-up requests for documentation, no "I'd need to check the integration guide," no plausible-sounding guesses about endpoint paths. It quoted the snippet verbatim, named the three tools with their contracts, and proposed the curl /health verification step before we asked. The whole exchange ran without a human in the loop after the initial prompt.
One piece of feedback came back, and it was real. The page had an internal link written as /lighthouse/evals — relative — and the agent had to resolve it against the host before it could follow. A human reader does that without noticing; a browser does that without noticing. The agent flagged it as an extra step. We made the link absolute in the next push. Small fix, but the kind of micro-friction you would only ever discover by handing the page to the reader who cannot just intuit "oh, this is on the same site."
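The resolution step the agent flagged is a single function call in most languages, but it is still a step the absolute link removes. A minimal sketch with Python's standard library; the page host here is hypothetical, since the post never names the domain the pitch page lives on:

```python
from urllib.parse import urljoin

# Hypothetical page URL; the post does not name the pitch page's host.
page_url = "https://example.com/lighthouse"

# A relative href like /lighthouse/evals must be resolved against the
# host before a non-browser reader can follow it.
resolved = urljoin(page_url, "/lighthouse/evals")
print(resolved)

# An absolute href needs no resolution at all — urljoin passes it through.
absolute = urljoin(page_url, "https://example.com/lighthouse/evals")
print(absolute)
```

Both calls yield the same URL; the difference is only in how much the reader has to know about the host to get there.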
Why this matters beyond Lighthouse
This is the operational sequel to Two readers, two manuals. That post said documentation splits into a human surface and an agent surface — different folders, different reviewers, different render targets — because the two readers want different artifacts and trying to serve both with one file costs you every week. Product pages are next. Pricing pages are next. Integration guides, status pages, "compare us to X" pages — every surface where a buyer used to be the only reader is now a surface where an agent is also a reader, often acting on behalf of that buyer.
We do not think the agent is a niche reader. For a product like Lighthouse — a public MCP server whose entire reason for existing is that other agents call it — the agent might already be the primary reader. The buyer scrolls, gets interested, and then asks their tool to do the wiring. If the page their tool reads is pretty to a human and useless to the agent, the conversion leaks. Not to a competitor's better salesperson — to a competitor's better-structured HTML.
The Lighthouse evals page does the same kind of work from the trust side. It is the sibling surface where the agent reader is looking for "do I believe this graph" rather than "how do I connect to it." Both pages were built with the same instinct: a reader who cannot ask a follow-up question deserves a page that does not need one.
We are not designing for agents instead of humans. We are designing for both, and noticing how much the design changed when we added the second reader. The buyer paragraphs got tighter — there was less room for them, so they had to earn their lines. The agent section got more disciplined — every snippet had to be paste-correct, every URL absolute, every claim verifiable on the page. Both readers got a better page out of the same rewrite.
The next surfaces are obvious enough that we have started a list. Pricing — agents are increasingly the ones answering "what would this cost my team if we adopted it." Status pages — the agent reading the incident timeline so it can decide whether to retry. Integration directories — every row is a question an agent will ask in production. None of those are going to write themselves to the new constraint without someone holding the constraint deliberately.
The work on /lighthouse was a single page. The pattern under it is going to take a while.