The agentic whiplash
3/9/2026 by Tommy Falkowski


Breakneck speed and deliberate friction

#Society #Technology #AI

Got tentacles?

A couple of months ago, I wrote a piece about the octopus and the rake, where I discussed how we are increasingly using agentic AI tools like Claude Code, OpenAI Codex, opencode, pi, etc. In that piece, I described how I have started embracing this kind of work, delegating tasks to different agents in parallel. And I think it's safe to say that in the last couple of months, more and more people have jumped on board because they have realized how capable the underlying AI models have become. If you can keep a good mental model of all your tasks and projects, you can now technically delegate many of them to however many agents you run. Peter Steinberger, the developer of openclaw, has shown that this is his preferred way of working, even though he keeps things simple: no orchestration tools or elaborate cascading setups, just a bunch of agents running in parallel in his code base.

I have argued that this might be the future of work, that we might all need to learn how to commandeer and steer an army of agents to get stuff done. And I still think that, at the core, learning how to use these AI-based tools effectively is something we won't be able to skip. However, I've also come to realize that just because you technically can do something, it does not necessarily mean that you should. Of course you can try to spin up as many agents in parallel as possible. There are even people who have built elaborate AI factories, spending most of their time developing very detailed specifications up front and then letting their agents go wild. Even though this might eventually become the future, I am not sure it is something we as individual builders should embrace right now. And even more importantly, we don't know what this will do to our physical and mental health. I don't want to walk back my own argument. But I do want to augment it and explain what I have learned since then, how my workflow has evolved, and what I think people should know when getting into this kind of work.

Building for the sake of building

There is a recent study from the Harvard Business Review that I find really fascinating because it shows what I and many other people have been saying for the last couple of months: even though AI can do more and more work autonomously, we feel that instead of working less and trying to relax more, the opposite is actually happening. We work more. We try to achieve more. We aim for bigger things to build.

For me, at least, it's even worse than that, in the sense that I cannot relax anymore. If I have some free time on my hands, I feel the strong urge to delegate something to one of my agents, to have them always working in the background. And I acknowledge that this is definitely not healthy anymore. It has reached a point where instead of playing video games, instead of watching a movie with full attention, I'd rather just go in and command the agents, look at the results, and iterate. I think at the core, what this shows is that I love building. I love building stuff. I love trying out stuff. I love the feeling of power that conjuring something up almost out of thin air gives me. I love living in the true age of personal computing. But what is the end game here?

I mostly have been building for myself. The tools I build fit my workflow like a glove. I also always try to make the tools as ergonomic as possible for AI agents, because I think that is actually where a lot of things are going: I will not be the one stringing commands together indefinitely to quickly get something done. That said, direct use might, and will, remain the most effective way for some things. Checking your agenda for today, for example. I have a program called h8 that taps into the legacy Microsoft Exchange servers at my work and gives me instant access to information that I would otherwise have to click through the horrible Outlook interface to get. I built this for myself, but coincidentally it is perfect for agents, because they can now, with very few tokens, get the exact information I need in any given moment. The thing is, this was in no way a one-shot. I am constantly using my tools and optimizing every interaction to make them as ergonomic as possible.
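As a rough sketch of what "ergonomic for agents" can mean in practice (this is not h8 itself; the names, data, and format are made up for illustration), a tool can emit one dense, deterministic line per event, so an agent gets exactly the information it needs with minimal tokens:

```python
# Hypothetical sketch of an agent-ergonomic agenda tool: no markup,
# no chat, just one compact line per event in a stable format.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    start: datetime
    end: datetime
    title: str
    location: str

def format_agenda(events: list[Event]) -> str:
    """Render events as compact, deterministic one-liners, sorted by start time."""
    lines = []
    for e in sorted(events, key=lambda e: e.start):
        lines.append(f"{e.start:%H:%M}-{e.end:%H:%M} {e.title} @ {e.location}")
    return "\n".join(lines)

if __name__ == "__main__":
    today = [
        Event(datetime(2026, 3, 9, 13, 0), datetime(2026, 3, 9, 14, 0),
              "1:1 with Lena", "Teams"),
        Event(datetime(2026, 3, 9, 9, 0), datetime(2026, 3, 9, 9, 30),
              "Standup", "Room 2"),
    ]
    print(format_agenda(today))
```

The point is not the code but the shape of the output: a handful of predictable tokens instead of a screenful of UI, equally readable for me and for an agent.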

The feedback loop

I have found something that I think more and more people are realizing. Yes, you can spin up many parallel agents to chug through a long list of tasks, but one thing that can't really be replaced, at least not in my opinion, is trying something out yourself: iterating through different options, getting a feel for how something works or doesn't work, or maybe realizing that you made a wrong assumption at the beginning. This is where I see great potential for AI, because we can iterate so much faster with very tight feedback loops. And to get the best results out of agentic tools, you have to set up good feedback loops: good tests alongside the tools, so the agent can do end-to-end testing itself. This is increasingly feasible with browser tools like Vercel's agent-browser, which works surprisingly well. But it still can't replace using something yourself and looking at it from the human perspective.
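To make that concrete, here is a hypothetical sketch of such a feedback loop: a check the agent can run against the real binary after every change, asserting on the real output rather than on mocks. The "agenda" command and the compact line format are illustrative assumptions, not any real tool:

```python
# Hypothetical end-to-end check an agent could run after each change:
# invoke the real binary, assert on the real output.
import re
import subprocess

COMPACT = re.compile(r"^\d{2}:\d{2}-\d{2}:\d{2} .+")

def looks_compact(line: str) -> bool:
    # One dense line per event; format breakage shows up immediately.
    return bool(COMPACT.match(line))

def check_tool(cmd: list[str]) -> None:
    # Run the tool exactly as a user (or agent) would, capturing stdout.
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    lines = out.splitlines()
    assert all(looks_compact(l) for l in lines), "unexpected output format"
    assert lines == sorted(lines), "events not sorted chronologically"

# An agent would call e.g. check_tool(["agenda", "--today"]) after every edit.
```

A check like this closes the loop for the agent, but it only verifies what I thought to encode; the human pass over the actual interface still catches what the assertions don't.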

I'm not sure we will be able to replace every kind of interface with a text or speech input box where the agent just does it for us. I think humans need visual information for many tasks. Even if our interfaces are rendered on-the-fly by agents in the future, there will still be a need for something predictable, something deterministic, that we can repeat over and over again. I don't want my calendar to look slightly different every time I use it. And this is why I think some user interfaces, and especially good user interfaces, will still prevail.

Detaching yourself from the problem

More and more people have started speaking out about the downsides of agentic engineering: not having a good feel for the underlying code base, not being able to really appreciate what has been achieved. I have experienced this myself. A couple of months ago, I gave Codex hyper-specific requirements, iterating through a long list of them until I thought that I had explained every little detail. The idea was to develop a self-hostable audio server for music, podcasts, audiobooks, and audio stories for kids. A very minimalistic and easy-to-use alternative to the increasingly shitty stuff like Spotify et al. I wanted something that really makes it easy to navigate, find things quickly, and continue listening. For audiobooks and audio stories, for example, there's nothing worse than not having robust bookmarks.

I had the whole idea fleshed out in my head. I talked to Codex for a long, long time. But at the end of the day, after Codex implemented everything, I didn't touch this code base for months. I just recently started spinning it up and trying to get a feel for what it can do, but there's no attachment. It does not feel like mine at all. It has existed in a sort of Schrödinger's Code state where I didn't even know whether it worked or not.

This is where the fast feedback loop and fast iteration come into play. I think this is still very much necessary, as opposed to having a detailed plan at the beginning and letting your agents chug through it. When you do that, you become so detached from the project and the code base that it's really hard to get into it and evaluate it at a later stage.

Compartmentalization is key

Another thing I have learned: I find it extremely useful to take a compartmentalized approach to agents, where I have different topical agents that I can hand different tasks to. One for research, one per coding project, one for managing my servers, etc. This is basically what the OG idea of GPTs from OpenAI was: in theory not a bad idea, but I think it was too early. Capabilities were still lacking, and it didn't really catch on in the broad sense, even though for enterprises this is actually a really good approach, in my opinion.

I've started building my own agent platform using a similar approach, focusing on a multi-user architecture. Multiple users can register, each gets their own Linux user space, and they also have shared workspaces where multiple people can work with agents at the same time. I think this is especially powerful for getting a feel for what others are doing with their agents, how they are interacting with the agents, and what the agent can even do if prompted correctly. It's called oqto, continuing my vision of the octopus that we might all have to become, with multiple tentacles that are semi-autonomous.

But the compartmentalization is key: one tentacle doesn't always need to know what the others are doing. And another important part is that you remain the octopus's brain. Of course, you can try to have a coordination agent or orchestration layer, but I think the more layers you put between you and the unit of work being done, the harder it is to actually wrap your head around it, identify with the results, and take responsibility for them.

This also has practical implications for where agents run. Most tasks we delegate are not actually that complicated, and local models are increasingly capable of handling them. But only if you scope them well. One general agent that has to know everything will still fall apart on a smaller model. A compartmentalized agent with a clear, narrow focus can work surprisingly well, even on local models. And I think this is the way forward: self-hosted agents for the bulk of the work, cloud models for the complex stuff, and good compartmentalization as the thing that makes it all feasible.
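A minimal sketch of what such compartmentalization could look like (the profile names, model labels, and tool lists here are all hypothetical, not oqto's actual design): each agent gets a narrow system prompt, a small tool allowlist enforced by the platform rather than the model, and a model sized to the task.

```python
# Hypothetical compartmentalized agent profiles: narrow scope, small
# allowlist, model chosen per task. All names are illustrative.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentProfile:
    name: str
    model: str            # e.g. "local-small" vs "cloud-large"
    system_prompt: str
    tools: frozenset = field(default_factory=frozenset)

PROFILES = {
    "research": AgentProfile(
        name="research", model="local-small",
        system_prompt="You summarize sources. You never write code.",
        tools=frozenset({"web_search", "read_file"})),
    "sysadmin": AgentProfile(
        name="sysadmin", model="local-small",
        system_prompt="You manage servers via the allowed commands only.",
        tools=frozenset({"ssh", "systemctl_status"})),
    "coding": AgentProfile(
        name="coding", model="cloud-large",
        system_prompt="You work only inside the project repository.",
        tools=frozenset({"read_file", "write_file", "run_tests"})),
}

def can_use(profile: AgentProfile, tool: str) -> bool:
    # The platform enforces the allowlist; the model never gets to decide.
    return tool in profile.tools
```

The narrow scope is what makes the local-small profiles viable: a research agent that can only search and read has far less to get wrong than one generalist that has to know everything.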

Of course, there's also the question of security and safety. Simon Willison's "lethal trifecta" does a great job of describing the inherent issue with powerful agents that combine access to private data, exposure to untrusted content, and the ability to communicate externally: they are still susceptible to prompt injection, which can result in data exfiltration, data manipulation, etc. This is a real issue we urgently need to solve as more and more people put powerful agents onto their private hardware.

The keepers of the gates of hell

But the biggest hurdle for broad adoption of AI agents is something else, I think. Over the last decades, we have built up a complex web of regulations and directives that serves as the foundation of many professions and jobs. These barriers are not technical in nature. They are mostly rules that we have invented and agreed to adhere to. And no matter how good AI agents become, if you legally need a human to sign off on something, that human becomes the de facto keeper of the floodgates.

For certain situations, human involvement is still going to be essential. Hiring a new co-worker, for example, where cultural fit and vibes play a role that no ruleset will be able to fully capture. Or child custody cases, where every family is different and the stakes are too high for a deterministic system to have the final say. These are situations where human judgment genuinely matters because the problem doesn't fit a cookie-cutter recipe.

But a lot of what we require humans to sign off on is not like this at all. Filing your taxes, for example. The rules are explicit and the thresholds are defined. The deductions follow a formula. And yet we have entire professions built around navigating this complexity on our behalf. Or getting a building permit, where your plans either meet the specifications or they don't, but you still have to wait for multiple months until a human reviews what could be checked automatically. These are actually not problems that require human judgment. They should be deterministic, rule-based processes that we have simply never bothered to automate properly. And there are a million other examples like this, a million different paper cuts inside of our organizations and institutions that make full end-to-end automation almost impossible. And don't get me started on notaries. Someone who justifies their job by pointing to some kind of regulation is just pathetic. "You need to pay me because I as an individual need to sign off on this by law." Thanks for nothing!

Deliberate friction

Now, I am not against deliberate friction. On the contrary, the more I work with agents, the more I realize that we need some amount of it. It's actually good that we humans are the bottleneck, in the sense that we can't review the huge amounts of code that agents can write in a matter of minutes if we let them. Some might argue we don't even need to, because the agents will review it. But I think that to produce quality results, it's important that we have a good understanding of what we are actually building. Having to slow down to understand things sufficiently is actually a good thing.

But these are friction points that we control ourselves as individuals. Deliberate friction should be something we define voluntarily. The moment people put restrictions on something to gatekeep their profession is the moment it's all going to fall down. The gatekeeping professions I described above are exactly the barriers we will run into when trying to automate more and more processes. This is not friction that we as individuals can directly control. And it stands in stark contrast to the feeling of empowerment that AI agents can give you.

People have been talking about democratization of coding, of engineering. And I think to a certain extent this is true. Now anyone can easily start to build stuff. You might argue that this is bad because you need a firm understanding first, but I don't fully agree. Yes, we should always aim to learn as much as we can. We should not aim to let the AI do everything without us having any kind of understanding. I'm not advocating for that. But agents make everything much more accessible, and getting people to start with something, to experience building something, I think is awesome.

But if all of the gatekeeping remains in place, then the empowerment is limited. I'm not proposing to abolish all regulation. A progressive society needs a very good foundation of laws and regulation, in my opinion. But if the level of complexity we have created ourselves requires entire professions to help us sift through it, we have utterly failed. Why can't we aim to make things simpler?

The artisanal coder

Underneath the technical barriers, the security concerns, the institutional gatekeepers, and the deliberate friction, there is a more personal question that many people are wrestling with right now: What happens to us in this process?

The agentic whiplash is upon us. We've all gotten a taste of the power that AI agents can bring, but more and more people have realized that we as individual humans are not built to keep up with this kind of pace. And maybe that's a good thing. Maybe this natural bottleneck is something we should embrace and even cherish. As I've written in the past, I still think that letting something sink in, thinking hard about a problem for extended periods of time, and coming back to problems we have tried to address before is very valuable. I don't think we should aim for AI to replace this.

More and more people have started talking about the loss of their purpose. The purpose of solving difficult puzzles by means of writing code is becoming less and less something that people are willing to actually pay money for. So it might go the way of what many musicians are experiencing: most people won't ever make any money with music, but they still do it for the love of it. I think for a lot of people, coding will still be that. It's going to be a way of thinking, a way of problem solving. And there might still be room for artisanal, hand-coded projects that are of such high quality, because of the slowness and the thinking that went into them, that people still cherish and love them. But on the grand scale of things, we need to accept that we won't be writing code by hand most of the time anymore.

Mario Zechner, the developer of pi, has done a lot of things that go against the current grain of agentic development. He recently took an open source vacation, closing down the issue tracker on GitHub so that people could not submit pull requests. He keeps stressing that the way he works with agents is actually pretty bare bones: one or two agents working on something, reviewing everything thoroughly, having to yell at the agent more often than not to get what he wants done. Jeremy Howard is warning people about the dangerous illusion that he thinks AI coding can be. I think it's important to have these viewpoints, in contrast to the people who spin up their AI factories and proclaim this to be the only realistic future.

Honest opinions

At the end of the day, we need to acknowledge that increasingly, it does not matter who writes the code, whether it's a person or an agent. What still matters is the intent behind it, and that we know what we want to do and what we don't want to do. I still think AI agents are the future. They need to be made much more accessible and we need to tear down artificial boundaries that make it infeasible to use them in a productive manner outside of our own private little space. But we also need to find good answers for avoiding the agentic whiplash that might have a very detrimental effect on our mental and physical health.

I don't have the answers to most of this. I can ask questions and share my opinion. And I think the most important thing we can do is talk about it. Be realistic about how things are developing. Be realistic about our own capabilities. Be honest about how we are feeling. AI agents can be like slot machines where you pull the lever and wait for something to come out, and more often than not, the result might actually be good enough, and they might keep getting better with each model iteration. But the question is whether we all want to just play the slot machines for our entire lives, or whether we want to become the builders of something much better. I hope we'll aim for the latter.

Nobody really has any clue about how this is going to play out. Everyone is experiencing this shift in real time, throwing shit at the wall and seeing what sticks. The most honest take in my opinion: We don't know shit.