On Vibe Coding

This weekend I started throwing Claude Code at a third-person shooter idea I had begun implementing manually in the Godot engine. Initially, progress was blindingly fast: Claude got basic multiplayer working in a couple of prompts, something that had seemed especially daunting to me. But as the weekend drew on and my understanding of my own project dwindled, we started to run into more and more blockers.

Sometimes throwing out recent code and starting from scratch with a new prompt would get us through, but increasingly it felt like I was just acting as Claude's eyes and ears, because, unlike in traditional web development, Claude can't easily test game code itself. And without a human's understanding to guide it, Claude kept digging itself into issues that were harder and harder to debug, and telling it to try again a different way worked less and less.

The central paradox is that coding agents (especially Claude Code, in my experience) can clearly take a well-defined task in a common framework and produce something of value, but it takes an experienced developer with good problem-domain and framework knowledge to iterate on a project with agents without ending up with unmaintainable, buggy spaghetti code. The developer also has to guard against moving so fast that they end up with code they don't fully understand, leaving them unable to unstick the bot when it inevitably can't dig itself out of its own complexity.

Take someone with no coding experience, or even a developer who is unfamiliar with the framework and problem domain they are vibe-coding for: sooner or later, depending on their aptitude and caution, the bot will dig itself into a hole the human doesn't have the tools or knowledge to dig it out of.

So how can we best leverage coding agents?

Firstly, give the agent a way to test its own code. Have it launch the project via shell commands and read the error output itself, and for visual elements, give it a way to take screenshots of what it's working on. This way, the agent stays productive for longer without human intervention. Agents and humans alike need to run the code they write to find problems and bugs.
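As a concrete illustration, here is a minimal sketch of the kind of wrapper an agent could invoke to test a Godot project on its own. It assumes a Godot 4 binary named `godot` on the PATH and a project in the current directory; the frame count and timeout are illustrative choices, not from anything above.

```python
import subprocess

# Minimal sketch: run the Godot project headless so the agent can read
# the error output itself. Assumes Godot 4 with a `godot` binary on the
# PATH and a project.godot in the current directory (both assumptions).
result = subprocess.run(
    ["godot", "--headless", "--path", ".", "--quit-after", "600"],
    capture_output=True,
    text=True,
    timeout=120,  # don't let a hung game block the agent forever
)

# Godot writes script errors and push_error() output to stderr.
print(result.stdout)
print(result.stderr)
```

A similar wrapper that launches the game windowed and grabs a screenshot would cover the visual side.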

Developers should treat coding agents as "junior" developers, giving them tasks that are focused, concrete, and limited in scope, and then thoroughly reviewing and testing the generated code.

Just as important, the developer must keep doing some "manual" coding themselves, because without continuously pushing themselves as a code reviewer _and_ coder, their skills will atrophy, and they will end up like the non-coder who fails to vibe-code their app idea, eventually bogged down by architectural and knowledge debt.

Additional Reading

https://dylanbeattie.net/2025/04/11/the-problem-with-vibe-coding.html

https://diwank.space/field-notes-from-shipping-real-code-with-claude

https://blog.singleton.io/posts/2025-06-14-coding-agents-cross-a-chasm/

