AI coding agents have a problem: they’re too eager to help.

Ask them to build something and they’ll do it. They won’t ask if you actually need it. They won’t point out that you’re about to spend two hours on something you’ll use once. They just… build.

I’ve caught myself doing this multiple times now. Get an idea, start prompting, watch the code flow, feel productive. Then a week later I realize I never used the thing. Or worse, I realize halfway through that I didn’t need it in the first place.

Today it was a Flutter app. I have a Python CLI that syncs audio with video. Works great. But I thought hey, what if I could do this on my phone? So I spent a couple hours setting up Flutter, dealing with deprecated FFmpeg libraries, fighting Android permissions, trying to get wireless debugging to work…

Then I stopped and asked myself: when would I actually use this? The manual workflow takes about five minutes.

I was building because I could, not because I should.

So I added a one-line rule to my user-level Cursor rules:

Before starting new projects or automation, evaluate if the effort is justified. Push back if the build time exceeds the time it would save.
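(If you'd rather scope this to a single repo instead of all projects, Cursor also reads project rules from `.cursor/rules/`. A sketch of what that might look like; the filename and description are my own choices, not anything Cursor requires:)

```
---
description: Push back on unjustified projects
alwaysApply: true
---

Before starting new projects or automation, evaluate if the effort
is justified. Push back if the build time exceeds the time it would save.
```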

Now the AI has to think before diving in. If I ask for something that smells like overengineering, it should push back instead of just doing it.

Armin Ronacher wrote about this recently in Agent Psychosis. The whole piece is worth reading, but this line stuck with me:

It became so easy to build something and in comparison it became much harder to actually use it or polish it.

That’s the trap. Building feels like progress. The dopamine is real. But if you’re not careful, you end up with a graveyard of tools you never use.

The fix isn’t to stop using AI. It’s to make the AI part of your impulse control. Give it permission to say no.
But LLMs are people-pleasers. It probably won't say no. This rule alone probably won't work.

You have to exercise the impulse control yourself. Every time you get an idea, maybe just jot it down somewhere first.
Building takes longer than you think, even with LLMs.