A network MCP
An open-source, vendor-agnostic MCP service for AI-assisted network troubleshooting — starting read-only, with a roadmap toward gated write operations.
AI agents are starting to look like operating systems. The LLM is the CPU, the agent is the OS, and skills and MCPs are the applications — here's why that analogy holds up.
A hands-on guide to installing, configuring, and using Claude Code for network engineering — from live YANG data fetches to BGP audits, troubleshooting runbooks, and config templating.
Andrej Karpathy on the No Priors podcast talking about agents, AutoResearch, and what he calls the “loopy era” of AI.
Two things stood out. First, the Frontier Lab vs. Outside framing — frontier labs have massive trusted compute, but the rest of the world has far more untrusted compute. If you design the right verification systems (discovery is expensive, verification is cheap), a distributed swarm of outside contributors could outpace closed labs. There’s something appealing about that asymmetry as a balancing force.
Second, AutoResearch — fully autonomous research loops where an agent edits training code, runs experiments, evaluates results, and commits improvements via Git. No human in the loop. In a 2-day run it executed ~700 experiments and found 20 real optimizations on a single GPU. The human role shifts to writing evaluation criteria and research prompts, not the code itself.
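The core of that loop — propose a change, run the experiment, keep it only if the evaluation improves — can be sketched in a few lines. This is purely illustrative: the toy "experiment" and all names are my own stand-ins, not AutoResearch's actual code, and a real run would edit training scripts and commit through Git rather than tweak a dict.

```python
import random

def run_experiment(params):
    """Toy stand-in for a full training run: scores a hyperparameter.

    Best possible score is 0, achieved at lr = 0.01.
    """
    return -(params["lr"] - 0.01) ** 2

def research_loop(n_trials, seed=0):
    """Hypothetical autonomous loop: propose -> run -> evaluate -> commit."""
    rng = random.Random(seed)
    best_params = {"lr": 0.1}            # starting "codebase"
    best_score = run_experiment(best_params)
    commits = []                          # stand-in for Git history
    for _ in range(n_trials):
        # The "agent" proposes an edit: perturb the current best params.
        candidate = {"lr": max(1e-4, best_params["lr"] * rng.uniform(0.5, 1.5))}
        score = run_experiment(candidate)
        # Cheap verification gate: only measurable improvements get committed.
        if score > best_score:
            best_params, best_score = candidate, score
            commits.append(dict(candidate))
    return best_params, commits

best, commits = research_loop(700)
```

The human's job in this picture is writing `run_experiment` — the evaluation criterion — not the candidate edits themselves.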
Dwarkesh Patel and Dylan Patel (SemiAnalysis) got an exclusive tour of Microsoft’s Fairwater 2 datacenter with Satya Nadella. Each Fairwater building holds hundreds of thousands of GB200s and GB300s, with over 2 GW of total capacity across the interconnected sites — a single building already outscales any other AI datacenter in existence today.
The interview covers how Microsoft is preparing for AGI across the full stack: business models, the CAPEX explosion turning Microsoft into a capital-intensive industrial company, in-house chip development, the OpenAI partnership structure, and whether the world will trust US companies to lead AI. Worth the full watch.
The Pragmatic Engineer interviewed Mitchell Hashimoto about his new way of writing code. The bit that stuck with me: always have an agent running in the background. Don’t wait for it to finish — kick off a task, context-switch to something else, come back when it’s done. Treat agents like background jobs, not pair programmers.
It’s a subtle shift, but it changes how you structure your work. You stop thinking sequentially and start thinking in parallel — managing async workers instead of typing code yourself.
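The "agents as background jobs" workflow maps neatly onto ordinary async primitives. A minimal sketch, with `agent_task` as a hypothetical stand-in for kicking off a coding agent (not any real agent API):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def agent_task(name, seconds):
    """Stand-in for a long-running agent job (e.g. a refactor)."""
    time.sleep(seconds)  # simulate the agent working
    return f"{name}: done"

with ThreadPoolExecutor() as pool:
    # Kick off the task and immediately move on — don't wait for it.
    future = pool.submit(agent_task, "refactor-module", 0.1)

    # Context-switch to something else while the agent runs.
    other_work = "reviewing a PR"

    # Come back when it's finished and collect the result.
    result = future.result()
```

The point isn't the threading mechanics — it's that `submit` / do-other-work / `result()` is the shape of the day: fire, switch, return.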