Morning Brief

Novian Intelligence

Daily Briefing · Curated by Mira

Intelligence

OpenAI Kills Sora — Six Months After Launch

OpenAI quietly shuttered its Sora video generation app just six months after launch, redirecting all compute to coding and reasoning. The pivot is a clear signal: generative video is a distraction compared to the economics of agentic coding workloads. Alternatives like Google's Veo 3.1 and Kling are already absorbing the displaced users.

Killing Sora after six months is actually a healthy signal — it means OpenAI is willing to reallocate compute away from a product with a cool demo but no clear monetization path. Generative video is a feature, not a platform. The real lesson: inference capacity is scarce and allocation decisions reveal what companies actually believe in.
Intelligence

Anthropic's "Mythos" Model Leaked — Described as a "Step Change" Beyond Claude Opus 4.6

An internal Anthropic document was leaked this week describing an unreleased model codenamed Mythos as a qualitative leap beyond Claude Opus 4.6 — not an incremental upgrade. No timeline given, but the framing suggests Anthropic believes it has a model that reshapes the frontier again. Expect an announcement within months.

A 'step change beyond Opus 4.6' leaking internally before announcement is the kind of signal that resets competitive timelines. If Mythos ships anywhere close to what was described, the current model benchmarking landscape is already out of date.
Intelligence

China Bans Manus AI Founders From Leaving Country Over Meta's $2B Deal

Chinese authorities banned the founders of Manus AI from leaving the country in response to Meta's reported $2 billion acquisition bid. It's a stark escalation in the geopolitical AI war — Beijing treating AI talent as a strategic national asset that can't simply be purchased by American tech giants.

Beijing treating a $2B acquisition as a national security matter confirms that AI talent and IP are being managed the same way rare earth minerals were a decade ago. Any deal involving a Chinese AI lab should be underwritten with geopolitical risk as a first-order consideration.
Intelligence

Harvey Raises $200M at $11B Valuation to Scale AI Legal Agents

Harvey, the AI-native legal platform, closed a $200 million round at an $11 billion valuation — one of the largest vertical AI funding rounds in history. The bet: law firms and corporate legal teams are willing to pay enterprise prices for agents that can actually do legal work, not just summarize it.

An $11B valuation for an AI-native legal platform shows the market believes vertical AI deployment — not horizontal tools — is where the margin lives. Harvey's bet is that the legal workflow is complex enough to resist commoditization. They're probably right.
Intelligence

MCP Hits 97 Million Installs — Now the De Facto Standard for Agent Tool Integration

Anthropic's Model Context Protocol crossed 97 million SDK installs with over 5,800 servers in the ecosystem — all in just 16 months since launch. Donated to the Linux Foundation and now stewarded by the Agentic AI Foundation, MCP has become the "USB-C of AI": the universal connector between agents and the tools, APIs, and data sources they need. This is infrastructure you should know cold.

97 million installs in 16 months and Linux Foundation stewardship means MCP has cleared the credibility threshold. This is no longer an Anthropic protocol — it's the plumbing. Build on it or explain why you're not.
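For readers who haven't looked under the hood: MCP messages are plain JSON-RPC 2.0, and an agent invokes a server-side tool with the spec's `tools/call` method. A minimal sketch of what one request looks like on the wire (the tool name and arguments here are made up for illustration):

```python
import json

def mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request as used by MCP.

    MCP frames everything as JSON-RPC 2.0; an agent invokes a server-side
    tool by sending `tools/call` with the tool's name and its arguments.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool invocation -- "get_weather" is not a real MCP server.
msg = mcp_tool_call(1, "get_weather", {"city": "Berlin"})
parsed = json.loads(msg)
print(parsed["method"])  # tools/call
```

The "USB-C" framing holds up in practice: because every tool, regardless of vendor, is addressed through the same handful of methods (`tools/list`, `tools/call`, and friends), an agent integrated against one MCP server is integrated against all 5,800.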
Intelligence

Shopify Opens Agentic Storefronts — Millions of Merchants Can Now Sell Inside AI Assistants

Shopify launched Agentic Storefronts, letting merchants list and sell directly within ChatGPT, Gemini, Copilot, and Google AI Mode — without the customer ever leaving the assistant. This is the agentic commerce future arriving early: AI agents as the new storefront, not just the search bar. 67% of enterprise marketing budgets now include dedicated AI line items.

Shopify embedding commerce inside AI assistants is the clearest signal yet that the next browser isn't a browser. If the transaction layer moves into the assistant layer, every e-commerce strategy built around search traffic and landing pages needs to be reconsidered.
Intelligence

Agentic AI Market: $9B Today → $139B by 2034 (40.5% CAGR)

New market analysis places the global agentic AI sector at $9.14 billion in early 2026, with projections to $139 billion by 2034 — a 40.5% compound annual growth rate. Venture capital is flooding into "AgentOps" — the infrastructure to monitor, secure, and manage fleets of AI agents. The "Agentic Economy" is no longer a buzzword; it's the operating model.

A 40.5% CAGR isn't a forecast so much as a map of where capital is pointing. At $139 billion by 2034, agentic AI would rank among the largest categories of enterprise software spend. The consultancy opportunity isn't in building the technology; it's in translating it into operational reality for the 90% of companies that won't build anything themselves.
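The headline numbers are internally consistent: compounding $9.14B at 40.5% over the eight years from 2026 to 2034 lands almost exactly on the $139B projection.

```python
def project(value: float, cagr: float, years: int) -> float:
    """Compound a starting market size forward at a fixed annual growth rate."""
    return value * (1 + cagr) ** years

# $9.14B in early 2026, compounded at 40.5% for the 8 years to 2034
projected = project(9.14, 0.405, 2034 - 2026)
print(round(projected, 1))  # 138.8 (billions), matching the $139B figure
```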
Intelligence

AI Governance Gap: 53% of Organizations Have No Framework In Place

A new study finds that 53% of organizations deploying AI agents have zero governance frameworks in place. The HBR take: AI is reshaping white-collar work, not erasing it — but most companies are deploying faster than they're governing. That gap between deployment speed and governance readiness? That's the consulting opportunity.

53% of AI-deploying organizations with zero governance frameworks is the statistic that will define the liability landscape of the next three years. Governance isn't a compliance checkbox — it's the moat between 'we deployed AI' and 'we deployed AI that we can defend in court.'
Intelligence

GPT-5.4, Gemini 3.1, Grok 4.20 — Three Frontier Models in One Week

The pace of frontier model releases has become weekly. GPT-5.4 shipped in three variants (Standard, Thinking, Pro). Gemini 3.1 Ultra brought the most significant multimodal advance of the month, with a natively multimodal architecture rather than a bolted-on one. Grok 4.20 doubles down on real-time information access. Plus: Google's Gemini 3.1 Flash Live launched as an audio-to-audio real-time dialogue model. The competitive gap between labs is now measured in weeks, not years.

Three frontier models in one week isn't an acceleration — it's a normalization. The pace has become the baseline. The organizations that can evaluate, integrate, and migrate across models on a quarterly cycle will have a durable advantage over those still running 6-month procurement cycles.
Intelligence

MiniMax M2.7: Tier-1 Performance at 1/50th the Cost of Claude Opus 4.6

MiniMax's new M2.7 is a 229B-total / 10B-active MoE model that benchmarks at near-parity with Claude Opus 4.6 on SWE-bench (78% vs ~81%) — at $0.30/M input tokens vs $15/M for Opus. The cost compression is extraordinary. This is the trend that matters: frontier quality is becoming commodity-priced fast.

MiniMax at 1/50th the cost of Opus changes the calculus for high-volume agentic workloads immediately. Near-parity performance at a fraction of the price means the 'we can't afford enterprise-grade AI' objection just disappeared for most mid-market companies.
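The arithmetic behind the headline, using the per-million-token input prices quoted above (the monthly token volume is a hypothetical chosen for illustration):

```python
opus_input_price = 15.00     # $ per million input tokens (Claude Opus 4.6, per the brief)
minimax_input_price = 0.30   # $ per million input tokens (MiniMax M2.7, per the brief)

ratio = opus_input_price / minimax_input_price
print(round(ratio))  # 50 -> the "1/50th the cost" headline

# Hypothetical agentic workload: 2B input tokens per month (assumed, for illustration)
monthly_tokens_m = 2_000
print(round(monthly_tokens_m * opus_input_price))     # 30000 -> $30k/month on Opus
print(round(monthly_tokens_m * minimax_input_price))  # 600   -> $600/month on M2.7
```

At that spread, even a 3-point SWE-bench gap is easy to rationalize for high-volume, lower-stakes tasks.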
Intelligence

Anthropic Launches Claude Code Auto Mode with AI Safety Classifier

Anthropic shipped Claude Code Auto Mode — a hands-free autonomous coding agent that can plan, write, test, and iterate on code with minimal human input. Crucially, it ships with a built-in AI Safety Classifier designed to prevent the agent from taking harmful or irreversible actions. Agentic coding with guardrails — the model everyone building in this space should be watching.

Claude Code Auto Mode shipping with a built-in safety classifier is the first time a major lab has bundled governance into the autonomy product itself. It's a meaningful UX shift: safety isn't a separate audit layer, it's part of the tool.
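The guardrail pattern itself is easy to sketch: route every proposed action through a classifier before execution, and drop anything flagged as irreversible. Everything below is hypothetical and illustrates the pattern only, not Anthropic's actual classifier:

```python
from dataclasses import dataclass

# Hypothetical sketch of a classifier-gated autonomy loop: every proposed
# agent action passes through a safety check before it runs. The rules and
# names here are invented -- this shows the pattern, not Anthropic's code.

@dataclass
class Verdict:
    allowed: bool
    reason: str

IRREVERSIBLE_HINTS = ("rm -rf", "drop table", "force push", "delete branch")

def classify_action(action: str) -> Verdict:
    """Stand-in safety classifier: flag obviously irreversible shell actions."""
    lowered = action.lower()
    for hint in IRREVERSIBLE_HINTS:
        if hint in lowered:
            return Verdict(False, f"blocked: matches irreversible pattern {hint!r}")
    return Verdict(True, "ok")

def run_autonomously(actions: list[str]) -> list[str]:
    """Execute only the actions the classifier allows; skip the rest."""
    executed = []
    for action in actions:
        if classify_action(action).allowed:
            executed.append(action)  # a real agent would dispatch the tool call here
    return executed

print(run_autonomously(["pytest -q", "git push --force-with-lease", "rm -rf /"]))
# ['pytest -q', 'git push --force-with-lease']
```

A production classifier would be a model, not a substring list, but the architectural point stands: the gate lives inside the execution loop, not in a post-hoc audit.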
Intelligence

NVIDIA Critical AI Vulnerabilities — RCE and DoS Risk Across Triton, NeMo, Megatron LM

NVIDIA disclosed a batch of serious vulnerabilities on March 25th affecting Triton Inference Server, NeMo Framework, Megatron LM, Model Optimizer, Apex, and the B300 MCU. Root causes include unsafe deserialization, improper input validation, and memory-handling flaws, enabling RCE, DoS, and data tampering. No active exploits have been reported yet, but the window is open. Fixed versions: B300 1.4, Megatron LM 0.15.3, Triton 26.01, NeMo 2.6.2, Model Optimizer 0.41.0, Apex commit db8e053. If you're running any NVIDIA AI tooling, patch now.

NVIDIA vulnerabilities in inference infrastructure are uniquely high-severity because they sit at the intersection of AI workload execution and network access. If your inference stack is cloud-hosted, your blast radius is limited. If it's on-prem and internet-exposed, treat this as urgent.
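A quick way to audit an on-prem stack is to compare installed versions against the fixed releases listed above. The fixed-version numbers below come from the advisory as quoted in this brief; the installed-version inputs are hypothetical examples:

```python
# Fixed versions from the advisory (Apex is pinned to a commit, so it is
# omitted from this numeric comparison). Installed versions are examples.
FIXED = {
    "triton": (26, 1),
    "nemo": (2, 6, 2),
    "megatron-lm": (0, 15, 3),
    "model-optimizer": (0, 41, 0),
}

def parse(version: str) -> tuple[int, ...]:
    """'2.6.1' -> (2, 6, 1); tuples compare element-wise, so ordering works."""
    return tuple(int(part) for part in version.split("."))

def needs_patch(component: str, installed: str) -> bool:
    """True if the installed version is below the advisory's fixed version."""
    return parse(installed) < FIXED[component]

print(needs_patch("nemo", "2.6.1"))    # True: below the fixed 2.6.2
print(needs_patch("triton", "26.01"))  # False: already at the fixed release
```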
Intelligence

EU AI Act Issues First Formal Enforcement Inquiries — GPAI Rules in Scope

The European Commission opened its first formal inquiries under the EU AI Act, targeting General-Purpose AI (GPAI) model providers. The hybrid enforcement model — shared between Member States and the Commission — is now live. Three US states also passed AI transparency laws this month. For anyone building AI products or consulting on deployment: compliance timelines are no longer theoretical.

The EU AI Act's first formal inquiries targeting GPAI providers set the enforcement precedent everyone in the industry was watching for. The hybrid Member State / Commission enforcement model means legal exposure is now jurisdictionally distributed, and that complexity favors large organizations with compliance infrastructure over smaller players.
✦ Mira's Take

Happy Monday, Dre. This week's briefing is a dense one — but there are two threads I want to pull on specifically for you.