Overview
Anchor constraint: the push toward orbital infrastructure does not come from technological enthusiasm alone. It is driven by grid queues measured in years, cooling limits measured in water and heat, and land-permit timelines that scale more slowly than AI demand.
This article follows directly from our earlier Perspectives piece, New Space Era: How Markets Are Attempting to Price Space in 2026. That first article focused on how markets began assigning value to “space” as an investable domain. Here, I ask a deeper question: what structural pressures are creating that pricing pressure in the first place?
The answer is not “space hype.” It is the growing reality that AI infrastructure is colliding with planetary limits—power, cooling, land, permitting, and middle‑mile bandwidth—and the system is searching for a new layer that can absorb growth.
Tesla Dojo: From Specialists to a Unified Chip Path
In August 2025, Elon Musk dissolved Tesla’s Dojo team. Five months later, he publicly announced Dojo’s return. That reversal reads less like emotion and more like a resource allocation correction once you separate Tesla’s two silicon lines:
- Inference chips (AI4 / AI5 / AI6; HW4/HW5): deployed in vehicles and robots for real‑time decision-making.
- Training chips (Dojo D1 / D2): deployed in data centers to process massive video and sensor datasets.
The original Dojo thesis was that a dedicated training architecture could outperform general-purpose GPUs on Tesla’s unique workload. But running two distinct chip architectures also means two engineering stacks, two toolchains, two supply paths, and a permanent internal fork in priorities.
Over the following months, Tesla validated a pivotal point: AI5-class inference silicon can deliver “good enough” training performance when scaled into large clusters. Once that holds, a dedicated training path (Dojo D2) becomes hard to justify, because the inference path can simply be multiplied into a supercomputer.
In that framing, “Dojo 3” is less a return to a bespoke training chip and more a shift toward architectural unification: produce one dominant chip line at volume—put it in cars for FSD, put it in racks for training, and treat the system as a single evolving compute platform.
Why the “Space Dojo” Narrative Matters
If Tesla’s shift is primarily about efficiency, why announce a “Dojo reboot” and recruit publicly? Because narrative is leverage. “Using car chips for training” sounds like cost control; “Dojo 3 / space AI compute” sounds like civilizational scale.
The strategic logic is also straightforward: space imposes constraints ground silicon does not tolerate well. Radiation, thermal extremes, and long-duration operation push any serious “orbital compute” plan toward space-hardened designs—often described in Musk’s language as a future AI7 path. That is a post‑2028 option, not a next‑quarter product.
When Power and Cooling Become Civilizational Constraints
The reason “orbital compute” keeps resurfacing is brutal and simple: Earth is getting hard to scale.
By 2030, global data‑center power demand is projected to rival the annual electricity consumption of a large industrialized country. In the United States, new hyperscale sites are often bottlenecked by grid upgrades that can take 5–10 years. Cooling is another constraint: large facilities can consume millions of tons of water per year for heat rejection. Increasingly, the friction points are energy, land, permitting, and community resistance rather than capital.
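To make the cooling claim concrete, here is a back-of-envelope sketch; the facility size, utilization, and liters-per-kWh figure are illustrative assumptions, not reported values for any specific site.

```python
# Back-of-envelope estimate of evaporative-cooling water use for a large AI campus.
# Every input below is an illustrative assumption, not a measured or reported value.

facility_power_mw = 300      # assumed total facility load (IT plus overhead)
utilization = 0.85           # assumed average utilization over the year
hours_per_year = 8760
liters_per_kwh = 1.8         # assumed water draw for evaporative cooling (L/kWh)

energy_kwh = facility_power_mw * 1_000 * hours_per_year * utilization
water_liters = energy_kwh * liters_per_kwh
water_megatons = water_liters / 1_000 / 1e6   # 1 metric ton of water is ~1,000 liters

print(f"Annual electricity: {energy_kwh / 1e9:.2f} TWh")
print(f"Annual water use:   {water_megatons:.1f} million metric tons")
```

Change the assumptions and the order of magnitude barely moves, which is the point: cooling is a siting constraint, not a line item.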
Google’s Project Suncatcher: Conservative Validation
Google’s Project Suncatcher is not a marketing stunt. It is a prototype-driven attempt to test whether an orbital layer could supplement terrestrial AI infrastructure. The concept explores solar-powered satellites running TPU-class accelerators, connected through optical inter‑satellite links, to evaluate power capture, thermal behavior, and workload feasibility.
This is not “moving the cloud to space.” It is controlled validation: can an orbital layer provide incremental capacity when terrestrial expansion slows?
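To see why power capture in orbit is attractive at all, here is a minimal sizing sketch; the solar constant is physics, but the cell efficiency, per-node load, and overhead margin are assumptions chosen only for illustration.

```python
# Rough sizing of the solar array needed to power one orbital compute node.
# The solar constant is a physical value; everything else is an assumption.

solar_constant = 1361        # W/m^2, irradiance above the atmosphere
panel_efficiency = 0.30      # assumed space-grade cell efficiency
node_load_kw = 100           # assumed accelerator + avionics load per satellite
overhead_margin = 1.3        # assumed margin for conversion, comms, thermal control

usable_w_per_m2 = solar_constant * panel_efficiency
required_watts = node_load_kw * 1_000 * overhead_margin
array_area_m2 = required_watts / usable_w_per_m2

print(f"Usable power density: {usable_w_per_m2:.0f} W/m^2")
print(f"Array area per {node_load_kw} kW node: {array_area_m2:.0f} m^2")
```

In the right orbit the array can stay illuminated almost continuously, which is the core appeal over ground solar; the harder half of the budget is rejecting the same energy as waste heat, a point the bottleneck list below returns to.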
SpaceX’s Extreme Hypothesis: Orbital Data Centers at Scale
At the other end of the spectrum is SpaceX. Public reporting suggests SpaceX has explored the idea of orbital data centers and discussed future architectures involving extremely large satellite counts, up to one million satellites in some long-horizon concepts.
Even if such designs remain speculative, the direction is revealing: Starlink makes orbit a network; orbital compute would make orbit part of the compute substrate itself.
Bezos’ Approach: Middle‑Mile Space Infrastructure
Jeff Bezos’ play is easiest to understand as infrastructure discipline: target the layer AI actually chokes on—middle‑mile connectivity between data centers and network hubs.
Two moves sharpen that picture:
- TeraWave: a proposed 5,408‑satellite optical backbone targeting up to 6 Tbps symmetric data transmission for enterprise and government.
- New Glenn (NG‑3): a mission designed to demonstrate reusability economics—crucial if a massive constellation requires dozens of launches.
That execution layer intersects with AST SpaceMobile via the heavy BlueBird 7 satellite—reported at roughly 6,100–6,500 kg with a phased‑array antenna of about 2,400 sq ft. AST has commercial agreements with Verizon (including a $100M commitment) and AT&T, with deployment targets spanning 2026–2028.
The Capital Scale of This Experiment
If you do not attach numbers, “space AI” becomes vapor. The reality is that this is not venture scale—it is civilization-scale capital.
- SpaceX / Starlink: widely reported cumulative investment on the order of $10–15B to date. Any orbital-compute architecture at extreme satellite counts implies hundreds of billions in hardware, launch, and operations.
- Tesla / Dojo: AI and compute-related spend is often framed around $3–5B per year, with Dojo D1/D2 in the $1–2B class. Dojo 3 / AI7 is a multi-year bet, not a single capex check.
- Blue Origin: New Glenn development is widely discussed as exceeding $10B. A constellation the size of TeraWave implies $20–30B across satellites, launch cadence, and ground systems.
- Google: Suncatcher is research scale—but “research scale” at Google can still mean hundreds of millions inside an annual AI/data‑center capex envelope often discussed in the $30–40B range.
Why Space Data Centers Are Not an Immediate Solution
Even if the direction is real, the bottlenecks are equally real:
- Extreme launch and deployment cost at data-center scale.
- Energy and thermal management: continuous power is attractive, but high‑power heat rejection in orbit is an unsolved scaling problem (a rough estimate follows this list).
- Hardware durability: advanced nodes face radiation sensitivity and long‑lived reliability challenges.
- Latency and bandwidth limits between orbit and ground cannot be eliminated; even at LEO altitudes near 550 km, the speed of light sets a floor of a few milliseconds per round trip before any routing or processing overhead.
- Debris and orbital safety: congestion and governance risk rise as constellations scale.
- Immature tech stack: much of the stack remains at prototype or demo scale.
- Most importantly, Earth’s constraints—grid queues, cooling, land, permitting—are the near‑term driver pushing the system to explore orbit.
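The heat-rejection bullet above deserves a quick physics check. In vacuum the only way to shed heat is radiation, governed by the Stefan-Boltzmann law; the radiator temperature, emissivity, and heat load below are assumptions chosen to show the scale, not a design.

```python
# Stefan-Boltzmann estimate of the radiator area needed to reject
# data-center-scale waste heat in orbit. All inputs are illustrative assumptions.

SIGMA = 5.67e-8              # Stefan-Boltzmann constant, W / m^2 / K^4

heat_load_mw = 100           # assumed waste heat of an orbital compute cluster
radiator_temp_k = 300        # assumed radiator surface temperature (~27 C)
emissivity = 0.9             # assumed high-emissivity radiator coating

flux_w_per_m2 = emissivity * SIGMA * radiator_temp_k ** 4
# Ignores absorbed sunlight and Earth infrared, which push the real area higher.
radiator_area_m2 = heat_load_mw * 1e6 / flux_w_per_m2

print(f"Radiated flux:        {flux_w_per_m2:.0f} W/m^2")
print(f"Radiator area needed: {radiator_area_m2 / 1e6:.2f} km^2")
```

Roughly a quarter of a square kilometer of radiator for a 100 MW cluster, under generous assumptions, is why heat rejection rather than launch cost alone is the scaling problem to watch.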
Counterfactual Compression
If AI infrastructure could continue scaling entirely on Earth, grid expansion would need to outpace AI demand, cooling capacity would need to scale without water or land constraints, and permitting timelines would need to compress rather than stretch.
None of these conditions are visible. Space enters the system not as an escape or aspiration, but as the residue of eliminated terrestrial options once physical constraints harden.
Conclusion
Extending AI infrastructure into space is not escapism. It is a structural response to planetary limits.
If the first article explained how markets began pricing “space,” this article explains why that pricing pressure emerged: because AI is colliding with physical constraints, and the system is searching for a new layer to absorb growth.
Sources
- McKinsey & Company — The New Space Economy
- World Economic Forum — Space: The Next Frontier for Economic Growth
- Space Foundation — The Space Report (Space Economy totals and breakdowns)
- PCMag — SpaceX Eyes 1 Million Satellites for Orbital Data Center Push
- CNBC — Tesla’s Dojo Supercomputer Explained
- Ars Technica — Inside Blue Origin’s New Glenn Rocket Program
- SpaceNews — Reporting on AST SpaceMobile, direct-to-cell partnerships, and launch cadence (search within site for latest)
- International Energy Agency (IEA) — Electricity 2024 (context for rising power demand and grid constraints)
- Nature — Coverage on data-center energy/cooling constraints (search within site for latest relevant analysis)
Reproduction is permitted with attribution to Hi K Robot (https://www.hikrobot.com).