Vibing With Kerim: Experiments in AI Coding


Loop

Published by P. Kerim Friedman · 2 min read

The single most useful thing you can do to improve the quality of code generated by Claude is to use an agent to review its plans, then review the revised plan, then review that one too … and keep going until the agent no longer finds any “critical” issues with the plan. This is a bit like the idea behind the Ralph Wiggum Plugin, but I run it manually and usually only two or three times.

It is important, however, to do this no matter how trivial the task. I’ve discovered that Claude can make critical errors with even the most innocuous requests. While running an endless loop like Ralph Wiggum would probably burn through all my tokens in a day, doing a couple of rounds of manual review like this will actually save you tokens, because you won’t then have to waste time debugging.
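The review loop described above can be sketched in a few lines. This is just a model of the workflow, not real Claude tooling: `request_plan_review` is a hypothetical stand-in for dispatching a code-reviewer agent, assumed to return a list of `(severity, note)` pairs, and the round cap mirrors the “only two or three times” practice rather than an endless Ralph Wiggum loop:

```python
# Sketch of the manual plan-review loop: keep asking a reviewer agent
# to critique the plan until it reports no "critical" issues, with a
# cap so a stubborn plan doesn't burn tokens forever.
# `request_plan_review` is a hypothetical stand-in for dispatching a
# code-reviewer agent; it returns a list of (severity, note) tuples.

def refine_plan(plan, request_plan_review, max_rounds=3):
    for round_number in range(1, max_rounds + 1):
        issues = request_plan_review(plan)
        critical = [note for severity, note in issues if severity == "critical"]
        if not critical:
            # Reviewer found nothing critical; the plan is good enough.
            return plan, round_number
        # Fold the reviewer's objections back into the plan before re-reviewing.
        plan = plan + "\n\nAddress these issues:\n" + "\n".join(critical)
    return plan, max_rounds
```

In practice each “round” here is just you pasting the reviewer’s objections back into Claude and asking for a revised plan; the cap is what keeps this cheaper than running the loop unattended.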

What agents do you use? It depends on the task. I generally install Superpowers and Swift Engineering when I’m doing macOS coding. There are also some built into Claude that you can use. I prompt Claude to pick the most suitable ones for the plan. Something like this:

Dispatch multiple code-reviewer agents to validate the plan’s diagnosis and proposed fixes against the actual code, including swift-engineering and superpowers code-review agents.

UPDATE: It seems that Claude has created a new built-in slash command /simplify which is designed to be run against recent changes to the code. I might take to running this regularly as well. (Haven’t tried it yet.)