
Networking.__init__

! networking, automation, and the occasional LLM experiment

the-prediction-machine.md 5 min

The prediction machine

The models are good now. The engineering discipline is managing certainty vs. uncertainty in their output — knowing when to trust, when to verify, and how to push the certainty floor higher.

--ai
vibe-coding-has-a-ceiling.md 4 min

Vibe coding has a ceiling

Vibe coding is great — until your project outgrows a single conversation. For complex, long-lived systems, spec-driven development with OpenSpec gives AI assistants deterministic input instead of fuzzy chat history.

--ai --tooling
a-network-mcp.md 5 min

A network MCP

What a network MCP server is, why multi-device correlation is the real value, and the hard problems: two paths for different use cases, command whitelists as a safety model, and auth delegation.

--ai --networking
$ ls /blog --all
router# show logging | tail
%BLOG-6-POST

Simon Willison is writing an evolving guide on agentic engineering patterns — not a blog post, more like a living book. It covers principles for working with coding agents, red/green TDD workflows, subagent patterns, and includes annotated prompt examples you can steal.

The framing around “writing code is cheap now” and the emphasis on building personal knowledge repos to feed into agents resonates. Worth bookmarking and checking back — he’s clearly adding to it over time.

%BLOG-6-POST

Andrej Karpathy on the No Priors podcast talking about agents, AutoResearch, and what he calls the “loopy era” of AI.

Two things stood out. First, the Frontier Lab vs. Outside framing — frontier labs have massive trusted compute, but the Earth has far more untrusted compute. If you design the right verification systems (discovery is expensive, verification is cheap), a distributed swarm of outside contributors could outpace the closed labs. There's something appealing about that asymmetry as a balancing force.

Second, AutoResearch — fully autonomous research loops where an agent edits training code, runs experiments, evaluates results, and commits improvements via Git. No human in the loop. In a 2-day run it executed ~700 experiments and found 20 real optimizations on a single GPU. The human role shifts to writing evaluation criteria and research prompts, not the code itself.
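The shape of that loop is easy to sketch. A hedged toy version in Python — every function name and the scoring logic here are hypothetical stand-ins of mine, not Karpathy's actual system; the real loop edits training code and commits each win via Git, while this one just hill-climbs a numeric config:

```python
import random

def propose_change(config):
    """Agent step: propose an edit to the 'training code' (here, a config dict)."""
    key = random.choice(list(config))
    return {**config, key: config[key] * random.uniform(0.5, 1.5)}

def evaluate(config):
    """Cheap verification step: score the candidate (lower is better).

    Purely illustrative: pretend the optimum is lr=0.1, batch=64.
    """
    return abs(config["lr"] - 0.1) + abs(config["batch"] - 64) / 64

def research_loop(n_experiments=700, seed=0):
    random.seed(seed)
    best = {"lr": 0.5, "batch": 256}
    best_score = evaluate(best)
    improvements = 0  # in the real system, each improvement is a Git commit
    for _ in range(n_experiments):
        candidate = propose_change(best)
        score = evaluate(candidate)
        if score < best_score:  # keep only verified improvements, no human review
            best, best_score, improvements = candidate, score, improvements + 1
    return best, best_score, improvements

best, score, commits = research_loop()
print(f"kept {commits} improvements, final score {score:.3f}")
```

The point of the sketch: the human contribution lives entirely in `evaluate` (the evaluation criteria) and in the prompt behind `propose_change`; the loop itself runs unattended.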

%BLOG-6-POST

Dwarkesh Patel and Dylan Patel (SemiAnalysis) got an exclusive tour of Microsoft’s Fairwater 2 datacenter with Satya Nadella. Each Fairwater building has hundreds of thousands of GB200s & GB300s, with over 2 GW of total capacity across the interconnected sites — a single building already outscales any other AI datacenter that exists today.

The interview covers how Microsoft is preparing for AGI across the full stack: business models, the CAPEX explosion turning Microsoft into a capital-intensive industrial company, in-house chip development, the OpenAI partnership structure, and whether the world will trust US companies to lead AI. Worth the full watch.

$ tail -f /syslog