Anthropic Is Now Running Claude on Elon's Data Center: Anthropic SpaceX Deal

Something interesting happened yesterday. Anthropic announced a deal with SpaceX, Elon Musk’s SpaceX, to use the full computing capacity of Colossus 1, a massive data center in Memphis, Tennessee. Over 220,000 Nvidia GPUs. 300 megawatts of power. The same facility that xAI built in record time and was using to train Grok.

The weird part? Just a few months ago, Musk was publicly calling Anthropic “misanthropic” and saying the company was “doomed.” In February he was accusing them of being hostile to Western civilization or something like that. And now he’s leasing them what is basically one of the biggest AI supercomputers on the planet. I spent like 20 minutes just sitting with that before I could start writing.

And then there’s this other part buried at the bottom of Anthropic’s announcement. They’re “expressing interest” in developing multiple gigawatts of compute capacity in orbit. With SpaceX. In space. If you’re a Claude subscriber who was annoyed about hitting rate limits while coding last week, this is a slightly surreal escalation.

What Actually Happened and Why It Matters Right Now

The immediate trigger for this deal is something most Claude users have been annoyed about for months. Demand for Claude Code, Anthropic’s AI coding tool, has been growing so fast that the infrastructure just couldn’t keep up. Last month, Anthropic straight-up admitted that demand had caused “inevitable strain” on their systems. Users on Claude Pro and Max were regularly hitting usage limits, especially during peak hours in the afternoons.

The fix, at least for now: double Claude Code rate limits for paid users, remove the peak-hour restrictions entirely, and sharply increase how much API access developers get to Claude Opus models. Those are the immediate practical benefits.

Anthropic confirmed that paid Claude subscribers would see usage limits double through Claude Code, and peak-hour rate limit reductions on Pro and Max plans are being removed. That’s real. That affects your actual day. But the bigger question is what this deal means beyond “I can finally use Claude in the afternoon without getting a rate limit error.”
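As an aside for anyone who lived through the throttled months: the standard client-side coping strategy is retrying with exponential backoff. Here’s a minimal sketch; the `RateLimitError` class and `with_backoff` helper are illustrative stand-ins, not Anthropic’s actual SDK:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 response from the API."""

def with_backoff(fn, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying with exponential backoff plus jitter on rate limits."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            # Delays grow 1s, 2s, 4s, ... with random jitter so that
            # many clients don't all retry in lockstep.
            sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```

Wrap your API call in a closure and hand it to `with_backoff`; the jitter is what keeps a fleet of clients from hammering the service in synchronized waves.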

The thing is, this deal also happened because xAI, which SpaceX absorbed earlier this year when they merged with Musk’s AI company, had already moved their own training work to a newer facility called Colossus 2. So Colossus 1 is basically sitting there. Renting it to a competitor makes business sense. SpaceX also has a massive IPO coming this fall, and having Anthropic as a marquee customer looks good on a prospectus. So it’s not exactly altruism from either side, which is fine. Good deals usually aren’t.

Elon Musk Passes the “Evil Detector” Test (His Words)

Here’s what caught me off guard. Anthropic cofounder Tom Brown wrote on X that the company is “going to need to move a lot of atoms in order to keep up with AI demand, and there’s nobody better at quickly moving atoms (on or off planet Earth).”

And Musk replied, not with skepticism but with a genuine endorsement. He said he spent a lot of time with senior members of the Anthropic team over the last week to understand what they do to ensure Claude is good for humanity, and he came away “impressed.” He wrote that “everyone I met was highly competent and cared a great deal about doing the right thing,” and that “no one set off my evil detector.”

“No one set off my evil detector.” That’s either the most reassuring or most cursed endorsement in tech history, I’m not sure which. But given how loudly Musk had been criticizing Anthropic just weeks earlier, it matters. He also said he’d provide computing capacity to other AI companies that make similar efforts to favor humanity, comparing it to how SpaceX launches satellites for competitors “with fair terms and pricing.”

That last bit is interesting because it hints at something bigger: SpaceX positioning itself as neutral infrastructure for the entire AI industry, an AWS for raw AI compute. And the space angle, the gigawatts of orbital compute they’re “expressing interest” in, is where this stops being a boring business story.

The Space Part Is Not a Joke

I want to be honest here: I initially read “orbital AI compute capacity” and assumed it was marketing fluff. Like something you say at a developer conference to make people gasp. I figured it was five years away minimum and nobody would ever seriously build it.

But the SpaceX blog post describes it plainly: “The compute required to train and operate the next generation of these systems is outpacing what terrestrial power, land, and cooling can deliver.” That’s not hype. That’s a logistics problem. Data centers need power, and power needs land and cooling. Land near power grids in acceptable climates is genuinely running out in the US. The companies that got to Memphis and Virginia and Oregon early are fine. Everyone else is in a bidding war.

Space solves some of this. You get solar power that’s always on (no day/night cycle, no clouds). You get natural cooling from radiating heat into space. You don’t need land. The physics actually work. The hard part isn’t the concept, it’s the reliability and latency. Running a satellite server farm that doesn’t crash and doesn’t introduce 500ms delays into every API call is a different kind of engineering problem than building a building in Tennessee.
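The latency worry deserves a back-of-envelope check, because the physics is less scary than it sounds. Round-trip light-speed propagation to a satellite directly overhead is just distance over c, twice. A quick sketch (550 km is the low-Earth-orbit altitude mentioned later in this piece; the geostationary figure is for contrast):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def round_trip_ms(altitude_km: float) -> float:
    """Round-trip light-speed delay to a satellite directly overhead, in ms."""
    return 2 * altitude_km * 1_000 / C * 1_000

leo = round_trip_ms(550)     # ~3.7 ms at a 550 km orbit
geo = round_trip_ms(35_786)  # ~239 ms at geostationary altitude
print(f"LEO: {leo:.1f} ms, GEO: {geo:.1f} ms")
```

So at LEO altitudes, raw distance adds single-digit milliseconds. A 500ms nightmare scenario would come from routing, queuing, and ground-station handoffs, which is exactly the reliability engineering the concept lives or dies on.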

SpaceX is the only company on Earth right now with the launch cadence and cost structure to make this commercially viable. They launched over 130 rockets last year. Nobody else is close. So if orbital data centers happen in the next decade, it almost certainly goes through them.

And Claude might run on some of that infrastructure. That’s kind of wild to think about.

What This Does to Prices and Competitors

So what does all this compute actually mean for Claude’s pricing? My guess, and this is just me reading the situation, is that prices won’t go down immediately but the pressure to cut them will go up. Here’s the logic.

Anthropic is currently in talks to raise money at a valuation near $900 billion. The SpaceX deal comes as Anthropic prepares to file for an IPO, reportedly planned for June. Right before you go public, you want to show growth and happy users. Throttling paid subscribers is bad for both. So more compute = better user experience = better IPO story.

But once you have more compute than you’re using, the cost per query drops. When cost per query drops, you can afford to either cut prices or increase what you give at the same price. Given how competitive the market is (OpenAI, Google, Meta’s Llama models, Mistral, and dozens of others), there’s real pressure to pass some of those savings on. I think we’ll see limits go up before prices go down, but prices will come down eventually.

For OpenAI, this is uncomfortable. Not disastrous, but uncomfortable. The surge in popularity for Claude Code has already forced OpenAI to scale back work on products like Sora to focus more on AI coding tools. OpenAI has its own compute deals, Azure’s infrastructure is massive, but they don’t have the flexibility Anthropic now has. And they don’t have a rocket company offering them space-based compute.

Google is less worried because they have TPUs, DeepMind, and the biggest data center footprint of anyone. But even Google doesn’t have a plan for orbital compute that’s this concrete.

This is where it gets rough for the smaller players: Mistral, Cohere, Perplexity. More compute for Anthropic means lower effective costs per token. Anthropic can subsidize Claude Pro more aggressively. If you’re running a startup and competing with “Claude Pro at $20/month, but now with double the limits,” that’s a hard product conversation to have with your own customers.

The Pentagon Problem Nobody Is Talking About

There’s something awkward sitting in the middle of all this. The deal comes as Anthropic has been navigating a contentious situation with the US government. In March, the Pentagon declared Anthropic a supply chain risk, blacklisting it from work with US federal systems. Last week, Microsoft, Google, and xAI all signed deals giving the US government access to test their AI tools on classified networks. Anthropic was not part of that.

So Anthropic now has a compute deal with the CEO of SpaceX, which just signed a government AI deal that Anthropic itself was excluded from. And Musk personally endorsed Anthropic’s safety work even as his other companies gain government access Anthropic doesn’t have. It’s messy.

I don’t think this kills the deal or the relationship. But it means the Anthropic-SpaceX partnership is happening against a backdrop where Anthropic’s government relationships are genuinely complicated right now. Whether access to Colossus 1 changes that calculation for the Pentagon, “well, SpaceX vouches for them,” I have no idea. But it’s not a completely irrelevant data point.

The Dreaming Feature Is Actually Interesting Too

I keep coming back to the orbital compute stuff but I should mention: the SpaceX deal wasn’t the only announcement at Anthropic’s developer day yesterday. Anthropic also unveiled a feature called “dreaming,” a tool that allows AI systems to review work between sessions, spot patterns, and update files that store user preferences and other context. It’s in research preview right now.

Basically, Claude can do background work between your conversations. Review what you’ve been doing. Update its own memory files with things it noticed. As Boris Cherny, Anthropic’s head of Claude Code, put it on stage: “The default isn’t ‘I’m going to prompt Claude Code.’ The default is now ‘I will have Claude prompt Claude Code.’”

That sentence is doing a lot. And the compute implications are significant. If Claude is now running tasks in the background on your behalf between sessions, that multiplies how many GPU cycles each user consumes. Which is exactly why you’d need 220,000 more GPUs before you ship something like that publicly.
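To make the multiplier point concrete, here’s an illustrative estimate. All numbers are invented for the sake of the arithmetic, not Anthropic’s figures:

```python
def load_multiplier(interactive_queries: int, background_passes: int,
                    background_cost_ratio: float = 1.0) -> float:
    """Ratio of total compute to interactive-only compute per user.

    background_cost_ratio scales how expensive one background pass is
    relative to an average interactive query.
    """
    total = interactive_queries + background_passes * background_cost_ratio
    return total / interactive_queries

# A user who runs 20 interactive queries a day, plus 10 nightly review
# passes each roughly as expensive as an interactive query:
print(load_multiplier(20, 10))  # 1.5x the GPU load per user
```

Even modest background activity pushes per-user load up by tens of percent, and long review passes over big codebases could push `background_cost_ratio` well above 1. Multiply that across millions of subscribers and the case for another 220,000 GPUs writes itself.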

Where This Goes From Here

The Tom Brown tweet is the one I keep thinking about. “Nobody better at quickly moving atoms on or off planet Earth.” That’s not just a fun line. It’s describing a world where the bottleneck for AI capability is physical infrastructure: power, cooling, land, launch access. And SpaceX is the only company that can credibly promise to break that bottleneck.

If orbital compute actually ships, and let’s say in 2030 or 2031 it’s at least partially real, the companies that built their infrastructure dependency on SpaceX early will have a meaningful advantage. Latency will still be a problem they’re working through. Reliability in space is genuinely hard. But the cost curve of compute could drop in a way that makes today’s prices look absurd.

For regular users, the near-term effect is simpler: fewer rate limit errors, better Claude Code performance, and more headroom on API usage. That’s already happening this week.

The longer-term thing, Claude potentially running on satellites orbiting at 550 kilometers above Earth, is either a brilliant engineering solution to a real physical constraint, or the most expensive way anyone has ever tried to fix a server overload problem. Probably both, honestly.

Musk’s “evil detector” comment still makes me laugh every time I read it. But the deal itself, stripped of all the drama, is pretty straightforward: Anthropic needed compute urgently, SpaceX had spare capacity, both sides benefit. The space part is real but it’s at least 4–5 years from maturity. And the immediate effect for people actually using Claude starts happening this month.

That’s the boring version. The exciting version involves satellites and gigawatts and two companies that were publicly feuding 90 days ago now talking about building data centers in orbit together. I find both versions interesting, to be honest.
