AI Makes the Easy Part Easier and the Hard Part Harder — A Network Engineering Perspective

AI can generate Cisco configs and Terraform plans with impressive fluency, but the hard part of network engineering was never the syntax. It's interop, requirements, and the mental models we build to design and troubleshoot.

I came across AI Makes the Easy Part Easier and the Hard Part Harder by Matthew Hansen, written from a software development perspective. The core argument resonated immediately: AI handles the writing-code part well, but that was never the hard part. The hard part — investigation, context, validation — gets harder when you skip straight to generated output.

The parallel to network engineering is almost one-to-one.

The easy part was never the config

Writing a BGP neighbor statement, an OSPF area config, or a Terraform module for an AWS Transit Gateway — that’s syntax. It’s learnable, it’s pattern-based, and it’s exactly the kind of thing AI handles well. Ask any model to generate a Cisco IOS BGP configuration for a dual-homed customer with AS-path prepending, and you’ll get something that looks correct. It’ll probably parse cleanly. It might even work in a lab.
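
To make the point concrete, here is a minimal sketch of what that generated output tends to look like. Everything in it is illustrative: documentation addresses (RFC 5737), private-use AS numbers, and a made-up route-map name.

```
! Dual-homed customer edge: prepend our AS on the backup uplink so
! inbound traffic prefers the primary path.
router bgp 65001
 neighbor 192.0.2.1 remote-as 65000
 neighbor 192.0.2.1 description PRIMARY-UPLINK
 neighbor 198.51.100.1 remote-as 65010
 neighbor 198.51.100.1 description BACKUP-UPLINK
 neighbor 198.51.100.1 route-map PREPEND-OUT out
!
route-map PREPEND-OUT permit 10
 set as-path prepend 65001 65001 65001
```

Every line is syntactically fine, and a model will produce something like it in seconds. Whether three prepends are enough to steer inbound traffic on your particular upstreams is exactly what it cannot know.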

But pushing that config to a production PE router serving 200 customers on a shared MPLS backbone? That’s where the easy part ends.

The hard part of network engineering has always been everything around the config: understanding the customer’s actual requirements (not what they wrote in the ticket), knowing how that PE interacts with the rest of the topology, anticipating what happens to traffic during the change window, and understanding the interop quirks between the vendors in the path.

AI doesn’t have that context. It wasn’t in the design meeting. It doesn’t know that the customer’s CE router is running an old firmware version that handles extended communities differently. It doesn’t know the implicit agreements your team has about route-map naming conventions or that VLAN 100 is reserved for management across all sites.

Senior skill, junior trust — in networking terms

Hansen uses the phrase “senior skill, junior trust” to describe AI coding agents. They write like an expert but should be trusted like a junior. This maps perfectly to network engineering.

An AI can generate a configuration that a CCIE-level engineer might write. Clean syntax, correct address families, proper soft-reconfiguration settings. But it carries the judgment of someone who just passed their CCNA. It doesn’t know why you’d cap maximum-paths at 2 instead of running ECMP across all four available paths in this specific topology. It doesn’t understand that the “simple” VXLAN overlay it just generated will create a bridging loop with the legacy spanning-tree domain in Building C.
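
The syntax gap between those two choices is a single keyword, which is the point. A sketch, assuming Cisco IOS and an illustrative AS number:

```
router bgp 65001
 ! Limit BGP multipath to two routes. Changing the 2 to a 4 is trivial;
 ! knowing whether the downstream links, QoS policy, and failure domains
 ! can tolerate four-way ECMP is the judgment call.
 maximum-paths 2
```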

This is the trust gap. The output looks senior. The understanding behind it is not.

Mental models are the actual skill

Here’s where I think the software development and network engineering perspectives converge. The real value an experienced engineer brings isn’t the ability to type configuration faster. It’s the mental model — the internal representation of how the network behaves as a system.

When a senior network engineer designs a solution, they’re running a simulation in their head. Traffic enters here, gets policy-routed there, hits this firewall context, NATs through that pool, and egresses on this path — unless that link is down, in which case BFD triggers a failover and traffic shifts to the backup path, which has different QoS markings, which means the customer’s voice traffic might get reclassified. All of that happens before anyone touches a CLI.
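
Even the failover branch of that simulation maps to only a few lines of configuration, which is why the config alone tells you so little. A minimal sketch, assuming Cisco IOS; the interface, timers, and neighbor address are illustrative:

```
interface GigabitEthernet0/0
 ! 300 ms hellos, 3 missed = roughly 900 ms to declare the peer down
 bfd interval 300 min_rx 300 multiplier 3
!
router bgp 65001
 ! Tear down the session on BFD failure instead of waiting out the hold timer
 neighbor 192.0.2.1 fall-over bfd
```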

That mental model is built through years of designing, implementing, breaking, and fixing networks. It’s what makes troubleshooting possible — you can reason about what should be happening and compare it to what is happening. When AI generates the config for you, you skip the process that builds and reinforces that model.

This is the same dynamic Hansen describes in software development: developers who let AI write the code lose the context they’d normally build up by writing it themselves. In networking, you lose the intuition for how configuration maps to behavior.

The investigation gap

The most dangerous pattern I see is engineers using AI to generate configs or IaC without doing the investigation first. They describe the desired end state, get a Terraform plan or an Ansible playbook, and push it toward production.

What they skipped: reading the existing configuration. Understanding why it looks the way it does. Checking whether there are implicit dependencies — maybe that static route exists because of a known bug workaround. Maybe that ACL entry was added at 2 AM during an incident, never documented, and is now the only thing preventing a routing loop.
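
The investigation itself is mostly read-only and cheap. On Cisco IOS it might start like this (the ACL name is hypothetical, and the archive diff assumes configuration archiving is enabled):

```
! What is actually running today, and why might it look that way?
show running-config | section router bgp
show ip route static
show ip access-lists EDGE-IN
! What changed recently, according to the local config archive?
show archive config differences
```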

AI-generated network changes are other people’s code. And as Hansen puts it, reading and understanding other people’s code is harder than writing it. This is doubly true in networking, where the “code” is a distributed system configuration that interacts with hardware, firmware, and protocols that have vendor-specific behaviors.

Where AI actually helps with the hard part

This isn’t a case against using AI in network engineering. It’s a case for using it correctly.

AI is genuinely useful for investigation. Feed it the output of show ip bgp and show ip route and ask it to identify why a specific prefix isn’t being installed — it can parse that faster than you can scroll through it. Use it to compare running configs between two routers and flag inconsistencies. Ask it to explain an unfamiliar vendor’s syntax when you’re working in a multi-vendor environment.
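
For the missing-prefix case, the raw material you would hand the model is a handful of standard show commands. The prefix and neighbor address below are documentation values:

```
! Is the prefix in the BGP table, and which path (if any) is best?
show ip bgp 203.0.113.0/24
! Did it actually make it into the routing table?
show ip route 203.0.113.0 255.255.255.0
! What did the neighbor send us? (requires soft-reconfiguration inbound)
show ip bgp neighbors 192.0.2.1 received-routes
```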

That’s AI helping with the hard part — accelerating the investigation, not replacing it. You still provide the context: which prefix matters, why it should be there, what changed recently. The AI does the grunt work of parsing and correlating. You do the thinking.

The bottom line

The config is not the craft. The craft is the mental model you build — the understanding of how pieces interact across vendors, protocols, and failure domains. That’s what lets you design solutions that actually work in production, review generated configurations critically, and troubleshoot at 2 AM when the monitoring dashboard lights up.

Use AI to generate configs. Use it to accelerate investigation. But treat every output the way you’d treat a pull request from a talented junior who just joined the team and hasn’t seen your network yet.

Trust, but verify. Every time.
