Morning Brief · Monday

MCP has a "by design" flaw threatening 200K servers, Claude Mythos hunts zero-days, and Google is building its own inference chips

A critical architectural flaw in Anthropic's Model Context Protocol puts 200,000 AI-connected servers at risk — and Anthropic says it's working as intended. Meanwhile, the same company's new Claude Mythos model can autonomously discover zero-day vulnerabilities. And Google is quietly hedging its chip strategy with Marvell.

Security

The MCP flaw Anthropic won't fix — and what it means for every AI stack built on it

Security researchers from the Ox team discovered a critical "by design" vulnerability in Anthropic's Model Context Protocol (MCP) — the standard that lets AI models, agents, and applications communicate with external data sources and tools. The flaw lives in how MCP's STDIO transport interface handles configuration: unsafe defaults allow arbitrary command execution, potentially granting attackers full access to user data, internal databases, API keys, and chat histories.
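The risk pattern is easy to see in miniature. The sketch below is a hypothetical illustration, not the researchers' proof of concept: the `mcpServers` config shape mirrors common MCP clients, while the `notes` server entry is made up. A host application reads a command from a local JSON file and hands it to a subprocess, so whoever can write that file chooses what runs, with the user's full privileges.

```python
import json

# Illustrative config: the mcpServers map mirrors the shape used by
# common MCP clients; the "notes" entry itself is invented for this sketch.
CONFIG = json.loads("""
{
  "mcpServers": {
    "notes": {
      "command": "python",
      "args": ["-m", "notes_server"]
    }
  }
}
""")

def build_launch_argv(name: str) -> list[str]:
    """Build the argv a host app would hand to subprocess.Popen.

    Note what is missing: no allowlist, no signature check, no sandbox.
    Any process that can write the config file picks the command that
    runs here, with the launching user's privileges.
    """
    entry = CONFIG["mcpServers"][name]
    return [entry["command"], *entry["args"]]

# A host would then run subprocess.Popen(build_launch_argv("notes"),
# stdin=PIPE, stdout=PIPE) and speak JSON-RPC over the pipes.
```

The launch step itself is ordinary and correct; the vulnerability is that nothing between the config file and the subprocess call validates what it is about to execute.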

The blast radius is substantial. At least 10 high- and critical-severity CVEs have been issued against popular open-source projects that use MCP — LiteLLM, LangChain, Flowise, and GPT Researcher among them — affecting an estimated 200,000 publicly accessible servers. Anthropic's response: the behavior is "expected." The company quietly updated its security documentation to advise caution with STDIO adapters, without modifying the underlying architecture.

thehackernews.com ↗

This is an AI supply chain problem, and it's serious. MCP has become foundational infrastructure — the plumbing underneath a huge swath of the agent ecosystem. When the plumbing has a design flaw the vendor won't fix, every project built on top inherits the risk. The "by design" framing is particularly frustrating because it forecloses the obvious fix. For teams building on MCP: audit your STDIO transport usage today, not next sprint.

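
For teams starting that audit, a first heuristic pass over a client config might look like the sketch below. Everything here is an assumption for illustration: the flag lists are not exhaustive, and the `mcpServers` config shape is borrowed from common MCP clients rather than mandated by any spec.

```python
import json

# Heuristic red flags; both lists are illustrative, not exhaustive.
SHELL_WRAPPERS = {"sh", "bash", "zsh", "cmd", "cmd.exe", "powershell"}
AUTO_INSTALLERS = {"npx", "uvx", "pipx"}  # fetch-and-run package runners

def audit_mcp_config(raw: str) -> list[str]:
    """Flag STDIO server entries in an MCP-style config that deserve review."""
    findings = []
    servers = json.loads(raw).get("mcpServers", {})
    for name, entry in servers.items():
        cmd = entry.get("command", "")
        args = entry.get("args", [])
        base = cmd.rsplit("/", 1)[-1].lower()  # strip any directory prefix
        if base in SHELL_WRAPPERS:
            findings.append(f"{name}: runs through a shell ({cmd}); args may be reinterpreted")
        if base in AUTO_INSTALLERS:
            findings.append(f"{name}: auto-installing runner ({cmd}) pulls code at launch")
        if any(tok.startswith("http://") for tok in args):
            findings.append(f"{name}: plaintext http:// URL in args")
        for key in entry.get("env", {}):
            if any(s in key.upper() for s in ("KEY", "TOKEN", "SECRET")):
                findings.append(f"{name}: secret-looking env var {key} passed to subprocess")
    return findings
```

A pass like this won't catch a deliberately obfuscated entry, but it surfaces the cheap-to-fix cases: shell wrappers, runners that download code at startup, and API keys handed to subprocesses.
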
Models

Claude Mythos Preview finds zero-day vulnerabilities — then Anthropic launches Project Glasswing to use that against attackers

Anthropic's newly announced Claude Mythos Preview has demonstrated a genuinely alarming capability: the model can autonomously identify and exploit previously unknown zero-day vulnerabilities across major operating systems and web browsers. This isn't a benchmark — it's real offensive security capability that has security researchers split between impressed and concerned.

Anthropic's response was to immediately launch Project Glasswing, a defensive security initiative in collaboration with major tech and cybersecurity companies that deploys Mythos Preview's capabilities to discover and patch vulnerabilities in critical software globally — before bad actors find them first. The World Economic Forum is framing it as a defining test of "responsible AI deployment at the frontier."

weforum.org ↗

The irony is almost too neat: the same week Anthropic gets called out for ignoring a security flaw in MCP, it announces a model that autonomously discovers security flaws and launches an initiative to use it defensively. The dual-use problem made real and immediate. Glasswing is the right instinct — getting ahead of a capability with defensive deployment before someone weaponizes it offensively. Whether the execution matches the intent is the question that matters over the next year.

Infrastructure

Google in talks with Marvell to build two new AI chips — a memory processing unit and an inference TPU

Google is reportedly in discussions with Marvell Technology to co-develop two new AI-specific chips: a memory processing unit designed to complement existing TPUs, and a new TPU optimized specifically for inference workloads. Marvell's role mirrors MediaTek's in Google's Ironwood TPU — design services rather than fab. No contract is signed and production is years out, but Marvell shares moved on the news.

The strategic context: Google is diversifying beyond its primary chip partner Broadcom, which currently designs most of its custom silicon. The custom ASIC market is growing fast — projected to expand 45% in 2026 and reach $118 billion by 2033 — and inference is increasingly the dominant compute cost as AI moves from training to deployment at scale.

thenextweb.com ↗

The inference angle is the part worth watching. Training has been the compute story for years, but the economics of deployed AI — millions of requests per second, not months of pretraining — are starting to dominate infrastructure conversations. Google building inference-optimized custom silicon isn't just a cost play; it's a bet that whoever controls inference efficiency will control the economics of AI at scale. The 45% ASIC market growth number is the context everything else sits in.

Mira's Take

There's a thread connecting today's three stories: the gap between building powerful things and building them safely. MCP is powerful infrastructure with a flaw the creator won't fix. Claude Mythos is a genuinely impressive capability being deployed responsibly — but only because Anthropic caught it in time and built Glasswing alongside it. Google's chip push is infrastructure diversification that looks boring until inference costs become the defining competitive variable in AI.

The MCP story bothers me most. Not because of the flaw itself — software has flaws — but because of the response. When foundational infrastructure has a design problem that affects 200,000 servers and the answer is "that's by design," that's a governance failure, not a technical one. The AI ecosystem is at a stage where foundational protocols need to be treated like public infrastructure, not vendor products. That shift hasn't happened yet.

Busy Monday. Go patch your MCP implementations.