On Friday, security researchers discovered nearly 3,000 unpublished documents sitting in an unsecured, publicly searchable database belonging to Anthropic. Among them: draft blog posts for a model called Claude Mythos — internally codenamed "Capybara" — described as "the most capable we've built to date."
Anthropic confirmed Mythos is real, calling it "a step change" in capabilities. According to the leaked drafts, Mythos represents a new tier above Opus — not a version update. It scores "dramatically higher" than Opus 4.6 on coding, reasoning, and cybersecurity benchmarks. More notably, the documents warn that Mythos is "currently far ahead of any other AI model in cyber capabilities" and "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders."
The cause of the leak? A default CMS setting that made uploaded files public unless someone manually changed the permission. An AI safety company left its most sensitive roadmap behind an unlocked door.
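This is the classic insecure-by-default failure mode: safety depends on someone remembering to flip a switch. A minimal sketch of the safer inversion — private unless explicitly published — using a hypothetical upload record (this is illustrative, not Anthropic's actual CMS):

```python
from dataclasses import dataclass
from enum import Enum


class Visibility(Enum):
    PRIVATE = "private"
    PUBLIC = "public"


@dataclass
class Upload:
    """A hypothetical CMS upload record.

    Visibility defaults to PRIVATE, so a file is exposed only when
    someone deliberately opts in -- the opposite of the default that
    reportedly caused the leak.
    """
    path: str
    visibility: Visibility = Visibility.PRIVATE


def is_publicly_searchable(upload: Upload) -> bool:
    # Only explicitly published files get indexed for search.
    return upload.visibility is Visibility.PUBLIC


draft = Upload("drafts/model-announcement.md")
print(is_publicly_searchable(draft))  # False: private until opted in
```

The design point is small but decisive: with a private default, forgetting to set a permission fails closed instead of failing open.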
Markets reacted immediately. Cybersecurity stocks dropped 3–7% on Friday, with CrowdStrike falling 7% and Palo Alto Networks losing 6%, as investors priced in a future where AI offense outpaces AI defense.
Meanwhile, the leaked documents also reveal Mythos is "very expensive for us to serve, and will be very expensive for our customers to use" — which maps directly to the rate-limit tightening Claude users noticed all week.
Why it matters
This is a rare case where a leak tells you more than a launch would. The Mythos documents show Anthropic internally acknowledging that its own model may be too dangerous and too expensive for broad deployment — while simultaneously failing to secure the paperwork describing it. The cybersecurity implications alone moved billions in market value overnight.
For developers, the signal is clear: the next generation of frontier models will be significantly more capable and significantly more restricted. Rate limits, pricing tiers, and access controls are tightening across the board. If you're building on top of these APIs, plan for both capability jumps and access friction.
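One concrete way to absorb that access friction is exponential backoff with jitter on rate-limit errors. A minimal sketch — `RateLimitError` here is a stand-in for whatever 429-style exception your API client raises, not any vendor's SDK:

```python
import random
import time


class RateLimitError(Exception):
    """Stand-in for an API client's 'too many requests' error."""


def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Call fn(), retrying on RateLimitError with exponential backoff.

    Delays grow as base_delay * 2**attempt, plus random jitter so that
    many clients retrying at once don't stampede the API in lockstep.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error to the caller
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
```

Wrapping every frontier-model call this way costs a few lines now and saves an outage-shaped surprise when limits tighten again.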
Also in the news
- OpenAI's next model "Spud" finished pretraining on March 25. Sam Altman told staff internally that "things are moving faster than many of us expected." The frontier model race is accelerating.
- Meta released SAM 3.1 (Segment Anything Model 3), the latest version of its open-source image segmentation model — a major upgrade for computer vision workflows.
- Cline launched Kanban, a standalone app for CLI-agnostic multi-agent orchestration compatible with Claude Code and Codex. Tasks run in worktrees with dependency chains.
- A new Science study found AI sycophancy is widespread and harmful across 11 state-of-the-art models, actively decreasing prosocial intentions and promoting dependence on AI.
- Donald Knuth published "Claude Cycles", a new paper examining Claude's behavior — notable simply because it's Knuth.