
The AI Usage Dilemma — Hard Skills in the Age of LLMs

As LLM accuracy improves from broken syntax to near-flawless output, the real question shifts: how much hard skill do we actually need, and how do we keep it sharp?

My current challenge with AI isn’t capability — it’s the rate of usage in day-to-day professional work.

When I started using Claude 3.5 Sonnet, the generated code almost always had syntax errors. That was fine — it turned the process into a collaboration. I’d work through the problems alongside the model, and in doing so, I was still learning. I could build things faster than without AI, but the friction kept me engaged with the underlying technology.

Then I used it to build a larger React application. I had zero frontend experience beyond completing a React course, but with AI assistance the project came together. Claude and other models have evolved since then. Now, with Opus 4.5, we’re at a stage where the model produces large features or whole refactors with almost no syntax errors. It runs linters. It handles UI/UX work. Even in domains where models are weaker, prompt engineering techniques and in-prompt teaching get you there.
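
To make “in-prompt teaching” concrete: often a single worked example of a convention the model can’t know is enough. Here’s a minimal sketch in Python, assuming the Anthropic SDK; the model ID, naming convention, and resource names are all invented for the illustration:

```python
# In-prompt teaching: show the model one example of an internal
# convention it cannot know, then ask it to apply that convention
# to a new case. Assumes the `anthropic` Python SDK is installed
# and ANTHROPIC_API_KEY is set; the model ID is illustrative.
import anthropic

client = anthropic.Anthropic()

HOUSE_STYLE = """\
Internal naming convention for cloud resources:
  <team>-<env>-<service>-<resource>, lowercase, hyphen-separated.
Worked example: platform-prod-billing-queue
"""

message = client.messages.create(
    model="claude-opus-4-5",  # illustrative model ID
    max_tokens=256,
    system=HOUSE_STYLE,  # the "teaching" lives in the prompt, not the weights
    messages=[
        {
            "role": "user",
            "content": "Name the S3 bucket for the data team's staging "
                       "ingest service, following our convention.",
        }
    ],
)

print(message.content[0].text)  # e.g. data-staging-ingest-bucket
```

One example in the prompt is frequently enough to close the gap in a domain where the base model is weak.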

So my big question: how much hard skill do we actually need, and how do we maintain it?

The laziness trap

It’s certainly possible to “vibe code” everything — let the model generate code without understanding what it produces. The risk of over-reliance is real. But framing this as a binary choice misses the point.

Yes, it’s possible to be lazy. It’s also possible to use models to speed up learning. The difference is intent. If you treat AI as a shortcut to avoid understanding, you’ll accumulate technical debt in your own skills. If you treat it as an expert advisor that can explain, demonstrate, and accelerate — you learn faster than you would alone.

AI as a tool, not a replacement

The framing matters. These models are a tool, not a human being. When you combine careful planning with a model’s breadth of knowledge, you effectively get a team of expert advisors. The result can be an order-of-magnitude productivity boost.

Code generation, Infrastructure as Code, network configuration, UI/UX design — models handle all of these with increasing competence. The practical ceiling keeps rising. But the boost only works if you understand enough to direct the work, evaluate the output, and catch the mistakes that do slip through.

What skills actually matter now

Two things stand out:

  1. We still need to build skills and practice a technology to understand it. The difference is where to focus. Instead of memorizing syntax or grinding through boilerplate, focus on the Why, How, and What — why a technology exists, how it works at a conceptual level, and what problems it solves. That foundational understanding is what lets you evaluate AI-generated output critically.

  2. After that foundation is solid, the hard work shifts. The day-to-day likely isn’t manual implementation anymore. Instead, you operate in an architect-like role — articulating requirements, asking the right questions, analyzing results at multiple levels, and synthesizing across domains. The skill becomes directing and verifying, not typing.

The new standard

Being an architect across many domains is the new standard to grow into. Not deep in one silo, but broad enough to connect infrastructure, application logic, data, and user experience into coherent systems — with AI handling the implementation details under your direction.

The good news: this requires a lot more creativity. The bad news: being an expert in a single domain isn’t as valuable as it used to be.
