I know, I know: another post about AI code generation. The hype cycle feels exhausting. But after months of using Claude, Gemini, and Cursor daily, I’ve noticed something that keeps nagging at me: the skills AI promises to eliminate are precisely the skills you need to use it effectively.

AI code generation is powerful, but only when you maintain agency.

The assistant can generate hundreds of lines of code in seconds, but you need to validate every line, understand the trade-offs, and recognize when it’s confidently wrong. That requires domain knowledge, experience with the technology, and the judgment to make decisions: exactly what we hoped AI would let us skip.

Here are the ironies I’ve encountered:

You Need to Learn How to Make AI Learn How to Help You

Prompts are not magic. I was initially overly optimistic about how much AI already knows or could deduce from context. It has zero understanding of your system’s constraints, your team’s principles, or your change management realities.
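The only reliable fix I’ve found is to hand over that context explicitly, every time. Here’s a minimal sketch, assuming the @anthropic-ai/sdk TypeScript client; the payments-service constraints and the `ask` helper are invented for illustration, and the constraint text matters far more than the plumbing:

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// None of this is deducible from whatever code you paste in. It has to be said.
const projectContext = [
  "You are helping on a payments service. Hard constraints:",
  "- Node 18, TypeScript strict mode, no new runtime dependencies.",
  "- Money is always integer cents; never floats.",
  "- Every change ships behind a feature flag; no schema migrations.",
].join("\n");

async function ask(task: string): Promise<void> {
  const response = await client.messages.create({
    model: "claude-sonnet-4-20250514", // substitute whatever model id is current
    max_tokens: 1024,
    system: projectContext, // the constraints ride along with every request
    messages: [{ role: "user", content: task }],
  });
  for (const block of response.content) {
    if (block.type === "text") console.log(block.text);
  }
}
```

Note what the sketch demands: you already have to know your system’s rules well enough to state them.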

AI performs much better with backend languages that have established conventions and strong type systems. On front-end applications, where patterns are looser and frameworks change rapidly, it’s a toss-up which solution it will invent. But who can blame it? The front-end ecosystem is fragmented and constantly shifting.
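To make the type-system point concrete, here’s a hedged sketch; the Invoice shape and invoiceTotalCents are invented for the example, not taken from any real codebase:

```typescript
// Invented domain types; the contract is the point, not the domain.
interface LineItem {
  description: string;
  cents: number; // integer cents by convention
}

interface Invoice {
  id: string;
  lineItems: LineItem[];
}

// With this signature, a generated implementation that returns a string,
// forgets the array, or renames a field fails to compile on the spot.
// In loosely typed front-end code, the same mistakes surface at runtime,
// if they surface at all.
function invoiceTotalCents(invoice: Invoice): number {
  return invoice.lineItems.reduce((sum, item) => sum + item.cents, 0);
}
```

The compiler still can’t check everything; cents versus dollars is a convention the types never see, so review stays on you.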

The irony is that teaching AI to help you requires the same clarity you’d need to explain the problem to a junior developer, or to solve it yourself.

You’re Paying to Contribute to the Service You’re Paying For

I’m not certain to what degree our conversations and feedback influence training, but I feel like I have to explain a lot. This is especially true when working with a package or API released within the last year or so, likely after the model’s training cutoff.

If I have to tell the AI that there are 5 bugs in the 100 lines of code it swears is the correct answer, I feel like an employee, not a customer. I’m doing quality assurance work on the product I’m paying to use.

You Use AI to Discover What Not to Do

In many cases, AI simply picks a direction and runs with it. It might explain its reasoning, but with even a slight nudge to reconsider, it will change its mind on fundamental premises. Probabilistic output is one thing; wholesale inconsistency is another.

Eventually I go back to the official documentation for an API, package, or language, learn the correct approach, and then tell the AI what I learned. The AI becomes a filter for bad ideas rather than a source of good ones. That’s useful, but it’s not the productivity miracle I expected.

You Spend More Time in Order to Save Time

When I first started using AI code generation, I lost so much time. I expected too much of the AI and assumed that actually learning “the old way” would be slower and less effective than asking my assistant.

But we learn by doing, and AI learns directly from us. It doesn’t take a genius to see where this is going: if you skip learning the fundamentals because the AI will “handle it,” you can’t validate what the AI produces. You end up debugging hallucinated code without the knowledge to recognize the hallucination.
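Here’s a hypothetical but representative flavor of that, sketched in Node-style TypeScript (run as an ES module):

```typescript
// What an assistant might plausibly generate (hypothetical hallucination):
//
//   import * as fs from "fs";
//   const raw = await fs.readFileAsync("config.json", "utf8");
//
// fs.readFileAsync does not exist in Node's core fs module. The name blends
// the old callback API with a promisified naming convention, so it looks
// right up until it throws "fs.readFileAsync is not a function" at runtime.

// The actual core API lives in fs/promises:
import { readFile } from "fs/promises";

const config = JSON.parse(await readFile("config.json", "utf8"));
console.log(config);
```

If you’ve only ever asked an assistant for file I/O, nothing about the hallucinated line looks wrong; spotting it takes exactly the familiarity the assistant was supposed to spare you.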

I now spend more time upfront learning the technology, frameworks, and APIs myself so I can use AI effectively. The time saved on typing is offset by the time spent validating, debugging, and correcting.

What This Means for Using AI

AI doesn’t eliminate the need for expertise; it amplifies it.