One prompt, one MPLS L3VPN: Opus 4.7 on a cEOS lab
A single prompt turned a 3-node Arista cEOS lab into a working MPLS L3VPN — RR, LDP, VPNv4, two VRFs. Opus 4.7 patched its own skill's blind spot mid-run, caught an IACL ordering trap, and verified itself with VRF pings.
A 3-node Arista cEOS lab running in containerlab on an M4 Pro. One prompt: “Design a bit more complex configuration for the cEOS network using the MCP. It could include all three nodes. Add complex BGP, MPLS. VRF RED and VRF Green. Loopbacks in each.” A few minutes later the lab was a classical MPLS L3VPN — single AS, OSPF underlay, LDP-signalled data plane, iBGP VPNv4 overlay with spine-1 as a route reflector, and two customer VRFs reachable between every PE. Pings across the MPLS core worked. The deliberate cross-VRF ping failed, as it should.
This is the show-and-tell. I’m pulling the interesting parts of the Claude Code transcript straight through, with thin framing. The point is what the agent did — not what I told it to do.
The setup
- netmcp — my MCP server (early release soon) that exposes `send_configuration` and `send_command` as typed, approval-gated tools against the lab inventory. Every config push needs a one-time approval code confirmed via a local HTTP endpoint. Writes are gated; reads are not.
- arista-ceos-config skill — a Context7-style indexed reference pack for cEOS. BGP, VRFs, L3 routing, ACLs. Small, focused, skimmable by the agent.
- Lab — containerlab, cEOS 4.34.0F, 3-node triangle: `spine-1` (10.255.0.1), `leaf-1` (10.255.0.2), `leaf-2` (10.255.0.3). Before this session it ran plain eBGP IPv4 over three P2P /31 links. Fine for a smoke test. Not demo-worthy.
- Driver — Claude Opus 4.7.
Skill loaded. Gap noticed.
The first thing the agent did after reading the prompt was load the skill and dispatch three Explore subagents in parallel. One of them came back with a problem:
Lab is a 3-node triangle: spine-1 (172.20.20.11), leaf-1 (172.20.20.12), leaf-2 (172.20.20.13). cEOS 4.34.0F — MPLS data-plane, LDP and VPNv4 L3VPN are supported. The reference pack has no MPLS/LDP coverage, so I need to verify Arista-specific MPLS/LDP syntax from docs before composing the plan.
This is the moment that matters. The skill I built doesn’t cover MPLS. The agent noticed, said so, and went to fix it — before writing a single line of config. It tried arista.com user-guide URLs (several 404s), bounced off search engines, and eventually landed on the gold-standard source via the GitHub API:
```shell
gh api -X GET 'search/code' \
  -f q='"mpls ldp" "transport-address" arista extension:cfg' \
  --jq '.items[] | "\(.repository.full_name) \(.path)"'
```
That pulled up aristanetworks/avd/ansible_collections/arista/avd/examples/isis-ldp-ipvpn/intended/configs/pe3.cfg — Arista’s own reference design. The agent pulled one PE config, diffed it against what it was about to write, and caught the vendor-specific bit that trips up engineers coming from IOS-XR:
Key finding: Arista uses `address-family vpn-ipv4` (not `vpnv4`) and `route-target import/export vpn-ipv4 ASN:NN`. LDP syntax confirmed: global `mpls ip` + `mpls ldp` block with `router-id` / `transport-address interface Loopback0` / `interface disabled default`; per-interface `mpls ip` + `mpls ldp interface`.
That detail matters. `vpnv4` (the Cisco-style keyword) is a common reflex, and it would have silently failed the activation step in `router bgp`.
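For concreteness, here is a minimal sketch of what a leaf's overlay stanza looks like with the correct keyword, assembled from the findings above and the session's own names (peer group `OVERLAY`, spine-1 at 10.255.0.1); the exact lines are illustrative, not a verbatim excerpt of the push:

```
router bgp 65000
   router-id 10.255.0.2
   neighbor OVERLAY peer group
   neighbor OVERLAY remote-as 65000
   neighbor OVERLAY update-source Loopback0
   neighbor OVERLAY send-community extended
   neighbor 10.255.0.1 peer group OVERLAY
   !
   address-family vpn-ipv4
      neighbor OVERLAY activate
```

Swap `vpn-ipv4` for the Cisco-style `vpnv4` and the neighbors simply never get activated for the VPNv4 family — no error you'd notice until routes fail to arrive.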
Three forks, back to the human
Before committing to a plan, the agent surfaced three design decisions:
- AS topology? Classical MPLS L3VPN needs iBGP VPNv4. The lab had three different ASNs. Collapse to a single AS 65000 (recommended) or keep eBGP?
- IGP under LDP? LDP needs an IGP for loopback reachability. OSPFv2 area 0 (recommended) or IS-IS?
- iBGP overlay? Full mesh, or spine-1 as RR with the leaves as clients (recommended)?
I picked the recommended answer in each case. The point is that the agent asked — it didn’t plow ahead on defaults. With 3 PEs the RR is overkill, but it makes the output of `show bgp vpn-ipv4` more interesting, which is exactly what a show-and-tell wants.
The plan
Here’s the design the agent committed to:
| Layer | Choice |
|---|---|
| AS plan | Single AS 65000 on all three nodes (replaces 65001/65002/65003) |
| IGP | OSPFv2 area 0 on all core P2P links + Loopback0 |
| Label distribution | LDP — transport-address interface Loopback0, enabled per-core-interface |
| Overlay | iBGP address-family vpn-ipv4, spine-1 = RR, leaves = clients, on Loopback0 |
| Tenants | vrf RED (RT/RD 65000:10) and vrf GREEN (RT/RD 65000:20) on all three PEs |
And the IP plan:
| Resource | spine-1 | leaf-1 | leaf-2 |
|---|---|---|---|
| Loopback0 (BGP RID + LDP transport) | 10.255.0.1/32 | 10.255.0.2/32 | 10.255.0.3/32 |
| Loopback10 in VRF RED | 10.10.0.1/32 | 10.10.0.2/32 | 10.10.0.3/32 |
| Loopback20 in VRF GREEN | 10.20.0.1/32 | 10.20.0.2/32 | 10.20.0.3/32 |
RDs per-PE (`<LocalLoopback0>:10` and `:20`), RTs fabric-wide (65000:10, 65000:20). That matches Arista’s AVD template, and it keeps each PE’s advertised routes distinguishable in `show bgp vpn-ipv4`.
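As a sketch of how that plan lands on one PE — leaf-1, with values from the tables above — the tenant side looks roughly like this (formatting illustrative; `redistribute connected` is my assumption for how the VRF loopbacks get into BGP):

```
vrf instance RED
!
ip routing vrf RED
!
interface Loopback10
   vrf RED
   ip address 10.10.0.2/32
!
router bgp 65000
   vrf RED
      rd 10.255.0.2:10
      route-target import vpn-ipv4 65000:10
      route-target export vpn-ipv4 65000:10
      redistribute connected
```

GREEN is the same shape with `Loopback20`, RD `10.255.0.2:20`, and RT `65000:20`.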
The non-obvious bit
Every core interface had an IACL permitting only BGP (tcp/179) and ICMP, then denying everything with a log. Enabling OSPF (proto 89) and LDP (tcp/udp 646) on those interfaces without touching the IACL first would silently black-hole adjacency formation. The agent caught this:
Existing IACL on every core interface permits only tcp/bgp + icmp, then denies everything. That will silently break OSPF (proto 89) and LDP (TCP/UDP 646). The config delta updates the IACL first in each push block so the rest of the session succeeds. New permits added: `ospf` (proto 89), `tcp any any eq 646`, `udp any any eq 646`.
That’s the kind of ordering trap you discover the hard way, three hours into a troubleshooting session. The permits went in with explicit sequence numbers before the existing deny-log, keeping the control plane alive through every subsequent step of the push.
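A sketch of the shape of that fix — the ACL name, sequence numbers, and pre-existing entries here are hypothetical; the point is that the new permits are sequenced before the final deny-log, which stays last:

```
ip access-list CORE-IN
   10 permit tcp any any eq bgp
   20 permit icmp any any
   ! new permits slotted in ahead of the deny
   30 permit ospf any any
   40 permit tcp any any eq 646
   50 permit udp any any eq 646
   90 deny ip any any log
```

Push the protocol config first and the deny-log eats the OSPF hellos and LDP discovery before the permits ever land.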
The push
One send_configuration call per device, in a specific order that’s designed to never lose reachability mid-push:
1. IACL permits for OSPF + LDP
2. `vrf instance RED` / `vrf instance GREEN` + `ip routing vrf ...`
3. VRF loopbacks (`Loopback10`, `Loopback20`)
4. Global `mpls ip`
5. Per-core-interface `mpls ip` + `mpls ldp interface`
6. `mpls ldp` block (`router-id`, `transport-address interface Loopback0`, `interface disabled default`, `no shutdown`)
7. OSPF underlay
8. `no router bgp 6500x` — because Arista EOS only allows one BGP instance — then `router bgp 65000` with peer group `OVERLAY`, `address-family vpn-ipv4`, and the two VRF stanzas
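The MPLS/LDP portion of that push, sketched from the syntax the agent confirmed against the AVD reference (interface name and router-id value illustrative for leaf-1):

```
mpls ip
!
mpls ldp
   router-id 10.255.0.2
   transport-address interface Loopback0
   interface disabled default
   no shutdown
!
interface Ethernet1
   mpls ip
   mpls ldp interface
```

`interface disabled default` is the safety half of the pattern: LDP comes up only on interfaces explicitly opted in, so nothing speaks LDP toward the management side by accident.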
Each push was gated by a one-time approval code:
```
spine-1  DNA6-7ZWJ  http://localhost:8000/approve/DNA6-7ZWJ
leaf-1   6DFN-2HMZ  http://localhost:8000/approve/6DFN-2HMZ
leaf-2   LZJU-QNMA  http://localhost:8000/approve/LZJU-QNMA
```
I typed “approved.” The agent pushed spine-1 first (RR up before the clients), then the two leaves.
Management0 has no ACL, so SSH stays reachable throughout. That’s a deliberate lab convenience — in production you’d want the same human-in-the-loop gate, but you wouldn’t rely on an unfiltered management VRF.
Convergence, then patience
First verification attempt, 20 seconds after the last push:
OSPF is in EXSTART/EXCHANGE (not FULL yet), LDP is in TCP-retry, BGP in Active. Classic dependency chain — waiting for OSPF to converge so loopbacks become reachable, then LDP and iBGP will follow.
The agent read the dependency chain correctly: OSPF has to bring loopbacks up before LDP can open a TCP session to the transport address, and before iBGP can open a TCP session to the same loopback. It waited 45 seconds and re-ran the checks. Everything came up.
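The re-run checks are ordinary operational commands along these lines — a representative set, not the transcript's exact invocations:

```
show ip ospf neighbor
show mpls ldp neighbor
show bgp vpn-ipv4 summary
```

Each one maps to a link in the dependency chain: OSPF adjacencies FULL, then LDP sessions Operational, then the VPNv4 peers Established.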
Verification
Three successful VRF pings across the MPLS core, and one deliberate cross-VRF ping that must fail as isolation proof:
| Test | Source | Destination | Result |
|---|---|---|---|
| VRF RED leaf-1 → leaf-2 (direct link) | 10.10.0.2 (RED) | 10.10.0.3 (RED) | 3/3 |
| VRF RED leaf-1 → spine-1 (via spine) | 10.10.0.2 (RED) | 10.10.0.1 (RED) | 3/3 |
| VRF GREEN leaf-1 → spine-1 | 10.20.0.2 (GREEN) | 10.20.0.1 (GREEN) | 3/3 |
| Cross-VRF (isolation proof) | 10.10.0.2 (RED) | 10.20.0.3 (GREEN) | 0/2, dropped |
The 0/2 drop is a pass, not a fail. VRFs don’t leak routes into each other without explicit route-target import/export, so even with no ACL in the way, the path from RED to GREEN is supposed to die.
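In EOS terms, the positive and negative tests from leaf-1 look roughly like this (syntax illustrative):

```
! VRF RED leaf-1 -> leaf-2, label-switched across the core
ping vrf RED 10.10.0.3 source 10.10.0.2
! cross-VRF: RED has no route to the GREEN prefix, so this dies
ping vrf RED 10.20.0.3 source 10.10.0.2
```

The second ping fails inside RED's own routing table — the destination simply doesn't exist there, which is exactly the isolation being proven.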
The forwarding-path snippet on leaf-1 is the most satisfying line of output from the whole session:
```
B I   10.10.0.3/32 via 10.255.0.3/32, LDP tunnel index 2, label 100001
      via 10.0.3.1, Ethernet2, label imp-null(3)
```
VPN label 100001 for RED, 100000 for GREEN — each destination PE advertises a per-VRF label. Transport label is imp-null(3), which is PHP — expected for a 1-hop LSP. That’s textbook MPLS L3VPN in four lines of CLI output.
The RR is working too: `show bgp vpn-ipv4` on leaf-1 showed the leaf-2 prefix both as a direct best path and as an RR-reflected backup with `Or-ID: 10.255.0.3` and `C-LST: 10.255.0.1`. The cluster list proves it went through spine-1 as a reflector, not a full-mesh sibling.
What to take from this
The prompt was a single sentence. The agent designed a non-trivial multi-node L3VPN, caught that its own skill pack didn’t cover the topic, went out to authoritative sources, ran three design choices by me, built an order-sensitive plan that protected the control plane from the change it was about to make, pushed it behind human-approved gates, waited for convergence correctly, and verified its work — including a negative test for VRF isolation.
The pieces that made this work, and that I think generalise:
- A typed, approval-gated MCP surface into real devices. Reads are free; writes are gated. That split is the safety model.
- A focused vendor skill, Context7-style — small, indexed, easy for the agent to skim. The point isn’t that the skill has everything. The point is that the agent knows when it doesn’t, and where to go next.
- Permission to go fetch authoritative docs when the skill falls short. In this case, the AVD repo. Without that, the agent would have hallucinated `vpnv4` and broken the activation.
This example is not really about the configuration or the design. It’s agentic network engineering in action. Domain knowledge is what produces a proper spec for the network — not a single sentence like the one I used here. But once you have a real spec, an agentic workflow speeds up the work and the verification of the work significantly. That, my friends, is where we’re headed.
netmcp gets an early release soon. The follow-up is obvious: add an `mpls-l3vpn.md` reference file to the skill pack so the next session doesn’t have to re-derive all of this from the AVD repo.