The people getting the worst results from AI coding tools are also the most confident they understand how to use them. Fast, impatient, frustrated. They think prompting is like ordering from a menu. It isn’t. It’s rendering.
Rendering is always the same process: apply energy over time and what’s latent becomes clarified. When a path tracer resolves a 3D scene, each pass drives out noise and leaves behind light as it actually falls. When you fry bacon, you’re rendering the fat.
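The convergence the path tracer illustrates can be sketched in a few lines. This is a toy, not any real renderer: `render_pass` is a stand-in for one noisy sample (a single ray), and averaging passes is the "energy over time" that drives the noise out.

```python
import random

def render_pass(true_value: float, noise: float = 0.5) -> float:
    """One noisy sample of a pixel, like a single path-traced ray."""
    return true_value + random.uniform(-noise, noise)

def render(true_value: float, passes: int) -> float:
    """Accumulate passes; the running average converges on the true value."""
    total = 0.0
    for _ in range(passes):
        total += render_pass(true_value)
    return total / passes

random.seed(0)
rough = abs(render(0.8, 4) - 0.8)       # a few passes: still noisy
refined = abs(render(0.8, 4000) - 0.8)  # many passes: noise rendered out
```

Nothing about any single pass is impressive. The result comes from letting the passes accumulate.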
AI is rendering for text.
The first mistake: expecting the first output to be the final output. That’s like stopping a raytracer after the first ray and declaring ray tracing overrated.
Junior developers ask for code and get code-shaped objects: technically correct, architecturally wrong. Senior developers get what they pictured. The difference is vision. You can’t iterate toward a destination you can’t see.
The second mistake: impatience.
Frying bacon takes as long as it takes. You can turn the heat up and burn it before the fat renders. Or you can tend it, low and slow, and pull it at exactly the right moment.
AI coding is the same. Every feature has a natural resolution time. The ones who get garbage are typing furiously, overriding half-finished outputs, flipping the bacon before the fat has rendered. The ones who get something good read the output, understand what it did, and make one precise correction at a time.
Experience can work against you here. The more you know, the harder it is to stop interrupting the render.
When it really works, AI coding feels like thinking out loud to someone who finally understands you. Metaphors just appear. Connections you hadn’t made. It helps you paint. You’re not typing; you’re directing.
Every output is a render pass. You’re not evaluating whether it’s done; you’re evaluating whether it’s getting closer.
Ask it what’s wrong with what it just made. It’ll tell you. The brush and the critic are the same thing.
The rendering metaphor isn’t just an analogy; there’s a sense in which it’s literally true. A language model is a vast geometric space where every word, concept, and code pattern exists as a point. I’m the raycaster. My intent is the beam; the model is the geometry it passes through. Each prompt sets a direction, and the output is wherever my ray lands: a probable surface close to what I meant. Each iteration refines the angle.
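The geometry above can be made concrete with a toy. These hand-made three-dimensional vectors stand in for learned embeddings (real ones have thousands of dimensions); `cast` lands on whichever point lies closest to the intent vector by cosine similarity, and refining the intent changes where the ray lands.

```python
import math

def cosine(a, b):
    """Cosine similarity: how closely two directions align."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "geometry": hand-made vectors standing in for learned embeddings.
space = {
    "sorted list": [0.9, 0.1, 0.0],
    "binary search": [0.8, 0.2, 0.1],
    "linked list": [0.1, 0.9, 0.0],
}

def cast(intent):
    """The beam: land on the point in the space closest to the intent."""
    return max(space, key=lambda name: cosine(intent, space[name]))

vague = cast([0.2, 0.8, 0.0])     # a vague prompt lands somewhere probable
precise = cast([0.9, 0.1, 0.05])  # a refined angle lands where you meant
```

Each iteration doesn't move the geometry; it only adjusts the angle of the beam.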
I’ve built systems this way that I couldn’t have built alone. Not because the AI was smarter than me; it wasn’t. But because the render time collapsed to almost nothing. Every hour I used to spend on implementation I now spend on design.
It's not that AI writes code for you. It removes everything that isn't thinking and lets you find out how good your thinking actually is.
There’s a moment when the bacon is done. You know it when you see it. Keep going and the fat is gone; you’re burning meat. The same moment exists in code: the point where the output is genuinely good and the next prompt makes it worse.
Pull it at the right moment. Know when the render is done.
The gap between what you can imagine and what you can build has never been smaller. That’s not an accident; that’s rendering.
Put on Everything – Said the Sky and let it render.
First pass only. I'll tell you what's off.