Becoming a Digital Octopus
I have been re-reading Adrian Tchaikovsky’s Children of Time trilogy. If you have any interest in science fiction, go read the series. You’ll thank me.
A couple of days ago I started the second book in the series: Children of Ruin. The story, among other things, describes an intelligent octopus civilization. These beings aren’t single minds; they’re described as having a “crown” and a “reach.” The crown possesses the sense of self, the core consciousness. The reach—the tentacles—are also intelligent, but they receive general directives from the crown and then go off to figure out how to accomplish the goal on their own. The crown is comfortable not having exclusive control over the “how,” and it can adjust its plans based on the learnings of its reach without fully “knowing” why the adjustment is correct.
This is the best analogy I have found for the experience of working with a growing collection of AI tools. I feel like the crown of some vast, digital octopus, directing my reach of coding agents and other AIs to perform tasks, conduct research, and synthesize data.
Directing the Reach
A typical working session has become a constant stream of demands for my attention. I might have three or more agents working at once: one writing a feature for a project, another helping me review a pull request, and a third researching a new topic. Any one of these tasks would previously have required my full, undivided attention. Now, it feels more like forking a tiny piece of my own brain to go do a thing.
At first, this was overwhelming. I felt I had to watch the agents at every step. Sometimes that’s still true, but I’ve been experimenting more with building clear, but non-specific, plans and then telling an agent to implement a feature in a single shot. This gives me longer periods of uninterrupted focus, and it makes me feel like I have time to let the completed work sink in.
There’s a narrative out there that this way of working will inevitably lead to garbage code that no one can debug. I see the point, but I think it misses the mark. The issue isn’t that an agent’s code is inherently more “garbage” than my own. The crucial difference is that when I write code myself, the process builds a deep mental model for me. When an agent does the writing, that step is skipped. The code might be pristine, but my understanding of it is secondhand.
This changes my role as the crown. I’m no longer just the originator of the idea; I’m also its first and most important student. My job is to use rigorous code reviews and testing not just to validate the code, but to build my own deep understanding of what my reach has produced. It’s about applying the same guardrails we’ve always used, but with a new purpose: ensuring the crown understands the work of its tentacles as well as it understands its own.
Letting Go of the Reins
This new model is a huge shift from my initial impulse, which was to control the chaos. I’ve written before about my attempts to build frameworks like context-monkey to tame these agents. I was trying to micromanage every tentacle.
I’ve completely ditched that approach. I’ve found that a lighter touch is more effective. For example, my bootstrap prompt for getting an agent up to speed on a repository has shrunk from a 300-line document of detailed instructions to this:
Review this repository. Develop a precise understanding of its architecture, dependencies, and design patterns as if you were a core contributor.
Analyze the repository, return with a concise summary of the project. Await further instructions.
The key is giving the AI wiggle room for the “how,” not for the “what.” The critical constraints and goals must be well-defined, but the implementation path can be flexible.
The Beauty of the Unexpected
This hands-off approach doesn’t just make me more productive; it also opens the door to surprising discoveries. This brings us to nondeterminism. I’ve said it before: an AI being nondeterministic is not a bug, any more than a human being nondeterministic is a bug. It’s a reality we need to adapt to. Sometimes, this leads to frustrating outcomes that get deleted. Other times, it leads to moments of surprising brilliance.
Recently, I asked two different agents how I could improve my dotfiles based on my command history. They came back with some fantastic new ways to manage git worktrees and zellij panes—practical, daily workflow improvements I hadn’t considered. It’s a small example of the reach optimizing its environment on its own initiative.
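To make that concrete, here is a hypothetical sketch of the kind of worktree helper such a suggestion might look like. The `wtadd` name and the sibling `<repo>.worktrees/<branch>` layout are my own inventions for illustration, not the actual suggestion the agents produced:

```shell
# Hypothetical dotfiles helper: open a git worktree for a branch in a
# sibling "<repo>.worktrees/<branch>" directory, creating the branch
# from HEAD if it does not exist yet.
wtadd() {
  branch="$1"
  root="$(git rev-parse --show-toplevel)" || return 1
  dir="${root}.worktrees/${branch}"
  mkdir -p "$(dirname "$dir")"
  if git show-ref --verify --quiet "refs/heads/${branch}"; then
    # Branch already exists: just check it out in a new worktree.
    git worktree add "$dir" "$branch"
  else
    # Branch is new: create it from the current HEAD.
    git worktree add -b "$branch" "$dir"
  fi
}
```

With something like this in a shell profile, `wtadd feature-x` gives each agent its own checkout to work in, so parallel tasks never trample each other’s working tree.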
Code is Cheap, Experimentation is Free
This partnership fundamentally changes the economics of software development. The crown can now afford to send its tentacles on dozens of exploratory missions, where before it could only fund one. The barrier to experimentation is now basically zero. It’s so easy to build prototypes or explore ideas that it feels irresponsible not to. We’ve all felt this when asking ChatGPT questions that were previously too daunting or time-consuming to research.
But there’s a limit. I can experiment with more, but I can’t become an expert in more. My agents, working at full speed, produce more knowledge than I can fully digest. This creates the need for an odd new kind of trust.
My artificial reach has access to all the combined knowledge of humanity stored in its weights. It’s like the leap we gained from search engines, but the difference is the sheer scope of what I can do with that knowledge.
What is Intelligence?
This brings up the question of what “intelligence” even means in this new partnership. The agent exhibits staggering ability in achieving a goal, but it’s the crown that provides the goal and, more importantly, the taste and judgment to know if the result is any good.
This is why deep experience is more critical than ever. I couldn’t get these kinds of results from a medical LLM because I lack the background to spot a bad idea or pressure-test its findings. The feedback loop between the expert and the tool is everything. My expertise as the crown is what allows me to guide, validate, and ultimately harness the immense power of my digital reach.
This is more than just a new workflow; it’s a new way of thinking. The line between my own intelligence and the tool’s capabilities blurs. I’m not just directing a reach; I’m extending my own. The question for me, then, isn’t just about what I’m building, but about what I’m becoming in the process.