Building a realistic Network lab


I would like to create a large-scale lab. It would use containerlab and, most likely, IOS XE in the form of IOL to build out a large network. There is nothing special about this on its own; I want to go back and practice CLI commands, but I also want to take advantage of everything available in today's world. LLMs are not silver bullets, but they certainly can be powerful tools when integrated thoughtfully.
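As a rough sketch of the starting point, a minimal containerlab topology file could look something like the one below. The node names, the interface assignments, and the image tag are assumptions about my own setup (an IOL image imported locally, e.g. via a vrnetlab-style build), so treat it as a placeholder rather than a finished lab definition.

```yaml
# isp-core.clab.yml -- minimal sketch; names and image tag are placeholders
name: isp-core
topology:
  defaults:
    kind: cisco_iol                     # containerlab's IOL kind
    image: vrnetlab/cisco_iol:17.12.01  # assumed local image tag, adjust to your build
  nodes:
    p1: {}      # provider core router
    pe1: {}     # provider edge towards customer HQ
    pe2: {}     # provider edge towards a branch site
    ce1: {}     # customer edge at HQ
  links:
    - endpoints: ["p1:Ethernet0/1", "pe1:Ethernet0/1"]
    - endpoints: ["p1:Ethernet0/2", "pe2:Ethernet0/1"]
    - endpoints: ["pe1:Ethernet0/2", "ce1:Ethernet0/1"]
```

From there it would just be a matter of `containerlab deploy -t isp-core.clab.yml` and growing the file as the lab story develops.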

My goal isn't to have an AI configure everything for me, thus bypassing the need to understand the underlying commands and technology. Rather, I envision using these modern advancements to augment my learning and lab experience.

For instance, an LLM could assist in creating the story behind the network lab: generating realistic, if synthetic, requirements and a topology built around them, which could then drive the development of the lab.

For example, it could create business requirements and customers. This could involve generating personas for key stakeholders, such as a multinational e-commerce company expanding into new markets, with departments for sales, IT security, and logistics, complete with specific needs such as secure VPN tunnels between headquarters and remote branches, QoS policies for video conferencing, or BGP peering for multi-homed internet connections.

Once those requirements are fleshed out, the LLM could help translate them into a network topology diagram (perhaps in Mermaid or PlantUML syntax for easy rendering in Markdown). For instance, start with a simple MPLS provider core, then add customers connected to it via various WAN links, branch offices, data centers, and even some cloud integrations for hybrid setups. The LLM could also suggest edge cases, like failover scenarios or integration with SD-WAN overlays, ensuring the lab isn't just a static diagram but a dynamic environment for troubleshooting.
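To give a flavour of what that could look like, here is a hand-written Mermaid sketch of the kind of diagram I have in mind. The node names and the split between core, customer sites, and cloud are purely illustrative, not output from any model.

```mermaid
graph TD
  subgraph "MPLS provider core"
    P1((P1)) --- PE1((PE1))
    P1 --- PE2((PE2))
  end
  HQ[Customer HQ] --- PE1
  BR1[Branch office] --- PE2
  DC[Data center] --- PE1
  CLOUD[(Cloud VPC)] --- PE2
```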

Basically, all the way from story creation to actual configuration and automation, we could use one of the state-of-the-art LLMs, such as Grok 4, OpenAI o3, or the Gemini 2.5 models, to streamline the process without replacing hands-on learning.

In the next blog post, I will continue this journey, so stay tuned.