The AI Usage Dilemma - Hard Skills in the Age of LLMs

My current challenge with AI isn't capability; it's deciding how heavily to rely on it in day-to-day professional work.

The accuracy of these models is improving rapidly with each generation. Take "Vibe Coding" as an example. In a React-based web development workflow, Sonnet 3.5 was the initial wow moment. Its output almost always had syntax errors; I'd help fix them, and together we'd get things working. It helped me, someone with no frontend development experience, build a fairly large React application (though I'd taken an excellent React course beforehand, so it wasn't zero knowledge).

Claude and other models have evolved since. Now with Opus 4.5, we're at a stage where it builds large features or refactors with almost no syntax errors (it runs linters too, of course). And it's not just accuracy; the overall skill set has broadened. Opus 4.5 is pretty good at UI/UX as well, and you can prompt it to take on different personas.

So my big question: how much hard skill do we actually need, and how do we maintain it?

These models are getting very good at code generation, network configuration, IaC, nearly everything. Even when they're weak in a domain, you can engineer around it with in-prompt teaching: adding examples of good and bad outputs and applying other prompt engineering techniques.
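
Here's a minimal sketch of what that in-prompt teaching looks like, assuming all you need is a plain prompt string. The firewall-rule task and both examples are hypothetical illustrations, not a real standard; the point is the structure: contrastive good/bad examples placed directly in the prompt.

```python
# A minimal sketch of in-prompt teaching: pair a good and a bad example
# with short explanations so a model that is weak in the domain can
# imitate the good pattern. Task and examples are purely illustrative.

GOOD = "allow tcp 10.0.0.0/8 -> 10.1.2.3 port 443  # explicit source, destination, port"
BAD = "allow all  # too broad: no source, destination, or port"

def build_prompt(task: str) -> str:
    """Assemble a few-shot prompt with contrastive good/bad examples."""
    return "\n".join([
        "You generate firewall rules.",
        "",
        "GOOD (imitate this style):",
        GOOD,
        "",
        "BAD (never produce this):",
        BAD,
        "",
        f"Task: {task}",
    ])

# The resulting string is what you'd send to whichever model you use.
print(build_prompt("Permit HTTPS from the office subnet to the wiki server."))
```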

What happens if we overuse the models and get lazy? It's certainly possible to "Vibe" everything out. But it's also possible to use models as a team of expert advisors, carefully plan work, and get an order-of-magnitude productivity boost.
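
As a sketch of that "team of expert advisors" idea (the personas and their review questions are my own hypothetical choices), you can run the same draft plan past several role-framed prompts and compare the answers before writing any code:

```python
# A sketch of the "team of expert advisors" pattern: the same draft plan
# is reviewed under several personas, each as a separate call to whatever
# model you use. The personas below are examples, not a fixed roster.

PERSONAS = {
    "security reviewer": "Point out the riskiest assumption in this plan.",
    "performance engineer": "Where does this design break at 10x the load?",
    "junior maintainer": "Which part of this plan would confuse you most?",
}

def advisory_prompts(plan: str) -> list[str]:
    """Build one review prompt per persona; each would be its own model call."""
    return [
        f"You are a {role}. {question}\n\nPlan:\n{plan}"
        for role, question in PERSONAS.items()
    ]

for prompt in advisory_prompts("Cache API responses in localStorage for offline use."):
    print(prompt, end="\n---\n")
```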

We're in an interesting time where it's unclear what to learn and how much today's skills will be valued in the future. I believe, however, that as long as AI systems remain far from matching a human engineer (which is the case today), we're more likely to see job losses from automation than from AI systems replacing even junior engineers. An AI system is a statistical model; I'd always refer to it as a Tool, not a human being. I don't think the current architecture will produce human-replacing behavior anytime soon.

That said, these capabilities are excellent for speeding up work and extending our mental reach. In the end, we are the ones who dream up the plan, the solution, the product, the system. The AI is the Tool that helps us achieve it.

Going back to laziness: yes, it's possible to be lazy. It's also possible to use models to speed up learning. The kind of knowledge we'll likely need in the future looks something like this:

  1. We still need to build skills and practice a technology to understand it. We still have to get our hands dirty to learn it (CLI, code, configuration, design, and so on). We should, however, focus on the Why, the How, and the What in the process.
  2. Once that's done, the hard work likely isn't required at that level anymore. We'll rarely write that syntax by hand. I believe we'll need to operate in an Architect-like mode: asking questions, articulating requirements, and understanding results at both a high and a low level.

The good news: this requires a lot more creativity. The bad news: being an expert in a single domain isn't as valuable as it used to be. Being an architect across many domains is the new standard to evolve toward.