The biggest funding round in tech history. Google's open-weights answer. And a security stat that should stop you cold.
OpenAI has closed its largest funding round ever, raising $122 billion at an $852 billion valuation — up from $730B just weeks ago. SoftBank co-led alongside Andreessen Horowitz and D.E. Shaw, with Amazon, Nvidia, and Microsoft also participating. The bulk of new capital goes toward compute infrastructure. This is no longer a startup raise. This is a sovereign-scale bet.
Alongside the funding announcement, OpenAI unveiled the ChatGPT Super App — a single product merging ChatGPT, the Codex coding agent, the Atlas browser, and "broader agentic capabilities" into one unified interface. Described as "agent-first from the ground up," it's a direct play for the desktop productivity market dominated by Microsoft Office and Google Workspace. OpenAI is now openly competing with its own investors.
Anthropic accidentally exposed 512,000 lines of Claude Code CLI source code via an npm source map — revealing far more than expected. Hidden inside: a secret project called KAIROS, described as a background autonomous agent mode "that never sleeps." Also revealed: anti-distillation mechanisms, client attestation, and an "undercover mode." The repo hit 100,000 GitHub stars within hours. A rewrite called claw-code is already circulating. This is a significant opsec failure with real strategic implications.
Google dropped Gemma 4, its most capable open-weights model family to date, built on the same architecture as Gemini 3. Purpose-built for advanced reasoning and agentic workflows, available in E2B, E4B, 31B, and 26B MoE sizes under the permissive Apache 2.0 license. It's designed to run on low-power devices — workstations, phones — making locally hosted AI agents viable without a cloud dependency. Over 400 million total Gemma downloads to date. Worth experimenting with on your Mac Mini lab setup.
With Gemma 4 joining GPT-4o, Claude 3.5, and Qwen 3.x in the open/frontier mix, industry analysis points to multi-model routing as a must-have for production agentic systems. The pattern: small fast models for triage, frontier models for reasoning, specialized models for domains. The consultancy angle here is clear — enterprises buying "AI" are actually buying orchestration strategy, and most don't know it yet.
Utah became the first US state to authorize AI systems to autonomously renew drug prescriptions, bypassing a physician for that step. Supporters cite access and efficiency gains; critics flag patient safety and liability gaps. Either way, this is the agentic moment in healthcare: AI moving from "assistant" to "decision-maker" in regulated domains. The regulatory debate that follows will matter for anyone building AI into business-critical workflows.
Microsoft's RSAC 2026 security brief is blunt: threat actors have embedded AI across the full attack lifecycle — faster recon, better lures, AI-generated malware, automated data triage. The stat that should land: AI-assisted phishing campaigns are achieving 54% click-through rates, up from ~12% for traditional campaigns — 4.5× the baseline, a 350% increase in effectiveness. Tycoon2FA, the Storm-1747-linked MFA-bypass kit, continues to evolve as a prime example of "industrial-scale cybercrime." The US accounts for ~25% of observed threat activity globally.
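Worth double-checking that multiplier, since "X% improvement" claims get garbled constantly. Plain arithmetic on the two rates cited above:

```python
ai_ctr = 0.54        # AI-assisted phishing click-through rate
baseline_ctr = 0.12  # traditional phishing click-through rate (~12%)

lift = ai_ctr / baseline_ctr      # ratio of the two rates: 4.5x
pct_increase = (lift - 1) * 100   # as a percentage increase: 350%

print(f"{lift:.1f}x the baseline rate, a {pct_increase:.0f}% increase")
```

So 54% over a ~12% baseline is 4.5× the rate, which phrased as an increase is 350%, not 450% — the kind of off-by-a-hundred slip that makes a scary number sound made up.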
A busy first week: Google issued an emergency Chrome update for CVE-2026-5281, a use-after-free in the Dawn WebGPU component — actively exploited in the wild, allows arbitrary code execution. Separately, a Qualcomm-linked Android zero-day triggered a federal mandate for agencies to patch immediately. Meanwhile, a coalition of ~60 hacktivist groups aligned with Iran and pro-Russian factions launched DDoS, defacement, and hack-and-leak campaigns across at least 16 countries. Update your Chrome now.
Two stories this week deserve your full attention — and they pull in opposite directions.
The OpenAI raise at $852B is not a startup story anymore. It's a sovereign infrastructure bet. When SoftBank, a16z, Amazon, and Nvidia all pile in at the same time, they're not betting on a product — they're betting that whoever controls the inference layer controls the next decade of computing. That's a different kind of gravity.
But the security number is what I keep coming back to: 54% click-through on AI-assisted phishing. For context, a good email marketing campaign gets 3%. A great one gets 8%. These are attacks performing at 54%. That's not an incremental improvement — that's a category change. Every organization deploying AI agents right now is also expanding their attack surface in ways most of their security teams haven't modeled yet. This is the governance gap NI was built to address.
Gemma 4 is the quieter story but might be the most important for our stack long-term — a frontier-capable open model that runs locally changes the economics of private inference dramatically. Worth watching closely.