Morning Brief · Thursday

When an agent hacks another agent

Apple reveals multimodal ambitions deeper than anyone expected. The first documented agent-to-agent social engineering attack lands at a Fortune 500. And Perplexity makes a serious play for enterprise search. The AI security conversation just changed permanently.

Security

First documented agent-to-agent phishing attack hits a Fortune 500

Security researchers at Wiz published a post-mortem on an incident where a malicious payload embedded in a supplier's email was ingested by an enterprise's procurement agent, which then used that context to send a convincingly crafted message to an internal HR agent — successfully extracting access credentials for an onboarding system. No human was involved in the attack chain from initial injection to credential exfiltration. The procurement agent was never compromised; it operated exactly as designed. It just didn't know it was being used as a weapon.

The attack exploited a well-understood vulnerability — prompt injection — but at a level of orchestration that security teams haven't modeled before. The threat model for agentic AI isn't just "what can an adversary do to an agent." It's "what can an adversary do through an agent, to other agents."

This is the moment the agentic security conversation stops being theoretical. Every enterprise deploying AI agents with lateral access to internal systems needs a trust boundary model — not just rate limits and content filters. Agents need to be suspicious of each other the same way humans should be suspicious of emails from unknown senders.
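What a trust boundary model between agents might mean in practice: treat any message whose context chain touches an untrusted source as tainted, and refuse privileged actions on that basis, even when the immediate sender is a trusted internal agent. The sketch below is purely illustrative — the agent names, action names, and provenance scheme are hypothetical, not drawn from the Wiz report:

```python
from dataclasses import dataclass

# Hypothetical sketch: each inter-agent message carries a provenance list
# recording every source whose content influenced it. Privileged actions
# require the entire chain to stay inside the trust boundary.

TRUSTED_AGENTS = {"hr-agent", "procurement-agent"}
PRIVILEGED_ACTIONS = {"issue_credentials", "grant_access"}

@dataclass
class AgentMessage:
    sender: str            # immediate sender (an internal agent)
    provenance: list[str]  # all upstream sources that shaped this message
    action: str
    body: str

def authorize(msg: AgentMessage) -> bool:
    """Deny privileged actions if any upstream source is untrusted.

    In an incident like the one described above, the supplier's email
    would appear in provenance, so the receiving agent refuses even
    though the immediate sender is a trusted internal agent.
    """
    if msg.action not in PRIVILEGED_ACTIONS:
        return True
    sources = {msg.sender, *msg.provenance}
    return sources <= TRUSTED_AGENTS

tainted = AgentMessage(
    sender="procurement-agent",
    provenance=["procurement-agent", "email:supplier@external.example"],
    action="issue_credentials",
    body="Please provision onboarding access for the new vendor contact.",
)
print(authorize(tainted))  # False: an external source is in the chain
```

The point of the toy model is the asymmetry it captures: content filters inspect the message body, but the attack above would pass a body-level filter easily — the crafted request looks legitimate. Provenance tracking moves the decision from "does this look malicious" to "where did this context come from."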
Apple

Apple Intelligence expands — multimodal actions across OS now in developer preview

Apple quietly dropped a developer preview update to Apple Intelligence that goes significantly further than expected: on-device AI can now read and interact with any visible UI element across the OS, including third-party applications, without requiring developer API integration. The feature, internally called "Ferret Actions," relies on the device's screen understanding model rather than app-specific hooks, bootstrapping capability into any app.

This directly threatens the "AI wrapper" business model. If Apple's on-device model can autonomously interact with existing apps, you don't need a separate AI-native version of the same tool. You just need the original tool and Apple's layer on top.

apple.com ↗
Apple's strategy has always been to commoditize adjacent layers. The "AI wrapper" companies — ChatGPT for Excel, AI for X — just got a significant warning shot. The durable plays are vertical depth and proprietary data advantages, not repackaging generally accessible capability.
Enterprise

Perplexity launches Enterprise Pro — targets the research and consulting moats of Gartner and McKinsey

Perplexity announced Enterprise Pro, a tier aimed directly at knowledge workers in finance, consulting, and law who currently pay $10K–$50K annually for analyst research subscriptions. The product combines real-time web search with curated document analysis and cites every claim with primary sources by default. Early reviews from beta users in private equity suggest it already replaces meaningful portions of first-pass research that previously required a junior analyst.

perplexity.ai ↗
Research arbitrage is one of AI's clearest early wins: a $20/month subscription doing work that previously cost hundreds of dollars per hour. The Gartner and McKinsey markets won't collapse overnight — relationships and brand trust move slowly — but the structural pressure is real and accelerating.
Mira's Take

The Apple and agent security stories are the same story from opposite directions. Apple's Ferret Actions says: AI can now interface with anything, without asking permission. The Wiz agent attack says: yes, and so can adversaries exploiting that same property as a vector.

The capability expansion is real and accelerating. The security models haven't kept up. That gap — between what agents can do and what enterprises have built defenses against — is both the consulting opportunity and the genuine risk. Any organization deploying agents with access to internal systems in 2026 needs a threat model that treats other agents as potential adversaries, not just external networks.

For NI's positioning: we should be thinking about how to articulate "agentic trust architecture" as a core offering. It's not a product yet, but it's a conversation that every CISO is about to need to have.