May 6, 2026. Anthropic announced three things at their dev conference and the internet lost its mind over the wrong one. Yes, they signed a deal with SpaceX. Yes, the same Elon Musk who called them “misanthropic and evil” is now their GPU landlord. Funny. But that’s not what’s going to affect your daily work.
The Dreaming feature and the rate limit changes are the actual story. And almost everyone covered the Musk angle instead.
Here’s the deal.
Anthropic gets access to all of Colossus 1 in Memphis: 220,000+ Nvidia GPUs (H100s, H200s, and GB200s), 300+ megawatts of capacity, coming online within the month. They're also shipping a feature called Dreaming and doubling certain rate limits for Claude Code. Three announcements, zero of them boring.
Dreaming Isn’t What You Think
Most writeups called it “agent self-improvement” and moved on.
That undersells it and oversells it at the same time.
Here's what actually happens. Between active sessions, Claude Managed Agents can now run a scheduled review process that looks at past work, finds patterns in errors and successes, and rewrites the agent's own memory. You pick automatic mode or review-before-apply mode. It's like a sprint retrospective that actually changes what the agent does next, not just files notes nobody reads.
Harvey, a legal AI company, tested it during the research preview period. Completion rates went up about 6x. That’s Anthropic’s number, not a third-party audit, so take it as direction not proof. But direction matters.
If your agents make the same mistake 40 times before you catch it, Dreaming is supposed to eliminate that class of error automatically.
I’m not sold on any feature that promises self-improvement without some babysitting. Research preview means the label is doing heavy lifting. “Considerably raising API limits for Claude Opus” is concrete. “Agents learn from mistakes” is a promise.
Test it on throwaway workflows before you let it touch anything client-facing.
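To make the review-before-apply idea concrete, here's a minimal sketch of what such a cycle could look like. To be clear: none of these function names are Anthropic's real API; they stand in for whatever hooks your agent framework exposes, and the error-grouping heuristic is an invented stand-in for whatever pattern-finding Dreaming actually does.

```python
from collections import Counter

def find_error_patterns(transcripts):
    """Toy heuristic: group past failures by error-message prefix and
    keep only patterns that recur, since one-offs aren't worth a memory edit."""
    counts = Counter(
        t["error"].split(":")[0] for t in transcripts if t.get("error")
    )
    return [msg for msg, n in counts.items() if n >= 3]

def dream_cycle(transcripts, memory, approve):
    """One scheduled review pass in review-before-apply mode: propose
    memory edits from repeated errors, apply only those `approve` okays."""
    proposals = [f"Avoid known failure: {p}" for p in find_error_patterns(transcripts)]
    approved = [p for p in proposals if approve(p)]
    return memory + approved
```

The point of the `approve` callback is the babysitting: in automatic mode it would be `lambda p: True`, and that's exactly the mode to avoid on anything client-facing until you trust the patterns it finds.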
One exception to my skepticism: Outcomes, the rubric-based agent evaluation tool, moved from research preview to public beta alongside Dreaming.
That’s the feature I actually wanted. Now it’s here.
Request access at claude.ai/developers.
The Rate Limit Fine Print Nobody Mentioned
Anthropic doubled the 5-hour rate limits for Pro, Max, Team, and Company plans. Peak-hours token surcharges gone for Pro and Max. Fine.
Here’s what got buried: weekly limits didn’t change.
If you’re a heavy Claude Code user chewing through your weekly token budget by Wednesday, nothing in this announcement helps you. HN users noticed immediately. The announcement was about burst capacity, not total throughput.
Small teams feel this differently than enterprises. If you're running Claude Code as your actual coding environment (feature development, code review, not just autocomplete), you already know which limit you hit. Check your dashboard. If it's the weekly number, you're evaluating whether to downgrade or eat the overage. Neither is fun.
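The burst-versus-weekly distinction is just arithmetic, and a toy calculation makes it obvious why doubling the burst cap can change nothing. The numbers below are invented for illustration, not Anthropic's actual limits.

```python
def binding_limit(tokens_per_burst, bursts_per_week, burst_cap, weekly_cap):
    """Return which cap constrains a usage pattern: the 5-hour burst
    cap, the weekly total, or neither. All units are tokens."""
    if tokens_per_burst > burst_cap:
        return "burst"
    if tokens_per_burst * bursts_per_week > weekly_cap:
        return "weekly"
    return "neither"

# A heavy user under the burst cap but over the weekly total:
print(binding_limit(800, 10, burst_cap=1000, weekly_cap=5000))  # weekly
# Double the burst cap, as this announcement did. Same answer:
print(binding_limit(800, 10, burst_cap=2000, weekly_cap=5000))  # weekly
```

If your constraint is the second `if`, the announcement was not for you.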
SpaceX Hosting Anthropic’s Compute Is Stranger Than It Looks
Musk calling Anthropic evil and then renting them GPU capacity reads like a screenplay.
The business logic is less dramatic. SpaceX built Colossus 1 for Starlink, FSD, and internal training. The capacity they need didn’t materialize on schedule. Someone was going to rent the excess.
Anthropic’s probably the most credible tenant available. That’s the story. Not a thaw in Silicon Valley feuds, just a compute oversupply meeting a well-capitalized buyer.
For you, the implication matters more than the headline.
The big AI labs are all renting now. Google has its TPUs, sure. But Anthropic, OpenAI, and the rest are paying hyperscalers or, apparently, SpaceX. When multiple providers are competing for your inference dollar, your switching costs drop. The lock-in window isn't closed.
Jack Clark, Anthropic’s co-founder, published an essay predicting a 60% chance that frontier models autonomously train their successors by end of 2028. I don’t know if that’s right. But if it is, the moat isn’t compute. It’s code quality, memory management, and workflow design. Things you can control.
What You Do With This Today
Don’t wait on any of this. Check your Claude Code usage dashboard right now. Are you hitting the 5-hour burst cap or the weekly total? If it’s weekly, this announcement didn’t change your life.
If it’s the hourly cap, push a long session today and see if the doubling actually changes how you work.
Request Dreaming access if you're running managed agents in production. Even in research preview, a 6x completion improvement on legal AI is worth a test against your current error-correction overhead. Use review-before-apply mode: let it find the patterns, and you review them before anything auto-applies.
Build portability into your agent frameworks now, not later.
SpaceX is an infrastructure company now. Amazon, Google, and Microsoft are all competing for inference market share. The window to avoid deep lock-in to any one provider's tooling is open right now.
None of these three announcements is revolutionary alone.
Together they’re a signal: the AI infrastructure layer is still competitive enough that the big players are fighting for capacity rather than extracting rent. That competition is your leverage. It won’t last forever.
Sources
– Bloomberg — Anthropic-SpaceX compute deal
– The Verge — Claude Dreaming feature
– The Register — rate limits and compute deal
– Ars Technica — developer conference hands-on
– Anthropic official blog — managed agents
