Compute Among the Stars
- Juan Manuel Ortiz de Zarate

Artificial intelligence has become the new electricity: a general-purpose technology[3] that quietly underpins economies, science, and daily life. Yet powering AI’s exponential appetite for computation is a growing planetary challenge[4,5,6]. While efficiency improvements have been remarkable (Google’s Gemini model reduced its per-query energy use thirty-threefold in a single year[2]), global data-center consumption continues to soar.
The team behind Google’s Project Suncatcher offers an audacious proposition: if AI demands astronomical energy, why not place its compute infrastructure among the stars that supply it? Their 2025 paper [1] outlines a scalable system for machine-learning computation in orbit: a constellation of solar-powered satellites carrying Google’s tensor processing units (TPUs) and communicating through free-space optical links. It reads like hard science fiction written by engineers with launch contracts.
The central claim is both elegant and subversive: rather than beam harvested energy down to Earth, let computation rise up to where the energy already flows freely.
Compute Gravity and Solar Abundance
Every industrial revolution finds its prime mover. For steam, it was coal; for electricity, copper and turbines; for the AI revolution, it is compute, which in turn is energy incarnate.
The Sun delivers an output of 3.86 × 10²⁶ watts, more than a hundred trillion times humanity’s total electricity production. Panels in orbit can receive up to eight times more solar energy annually than those at mid-latitudes on Earth[7], unshadowed by weather or night. The challenge that has thwarted previous space-based solar-power schemes has been transmission: sending the energy home safely. Suncatcher circumvents that step entirely by co-locating power generation and computation.
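As a rough sanity check, that factor of eight falls out of simple arithmetic. The sketch below assumes a dawn-dusk orbit in near-continuous sunlight and a mid-latitude site receiving about 1,500 kWh/m² per year; both figures are our illustrative choices, in line with NSRDB-style data[7], not the paper’s exact model.

```python
# Back-of-the-envelope check of the "up to eight times" claim.
SOLAR_CONSTANT_W_M2 = 1361      # mean solar irradiance above the atmosphere
HOURS_PER_YEAR = 8766           # 365.25 days

orbital_kwh_m2_yr = SOLAR_CONSTANT_W_M2 * HOURS_PER_YEAR / 1000
terrestrial_kwh_m2_yr = 1500    # assumed mid-latitude annual insolation

print(f"orbital:     {orbital_kwh_m2_yr:,.0f} kWh/m^2/yr")
print(f"terrestrial: {terrestrial_kwh_m2_yr:,.0f} kWh/m^2/yr")
print(f"ratio:       {orbital_kwh_m2_yr / terrestrial_kwh_m2_yr:.1f}x")
# -> roughly 8x, consistent with the figure cited above
```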
The vision is a self-sustaining “solar compute biosphere”: fleets of satellites that process data, train models, and communicate results to Earth, drawing their energy directly from sunlight. In the long term this could offload much of humanity’s AI workload from the planet’s finite grids and water-cooled data centers.
System Design
Working backward from a future in which most AI computation happens in space, the authors outline an intermediate milestone: building a cluster with performance comparable to a terrestrial data center.
Each satellite would host arrays of Google Trillium TPUs, powered by solar panels and cooled via passive thermal radiators. They would fly in a sun-synchronous dawn-dusk low-Earth orbit (LEO), maximizing solar exposure while keeping latency to ground stations low.
Inter-satellite communication, a critical bottleneck, is achieved through free-space optical interlinks (FSO ISLs). Flying close together, the satellites exchange data via laser beams rather than radio waves, enabling terabit-per-second connections. Formation flight is maintained by machine-learning-based control systems that continuously adjust thrusters to prevent collisions.
The design favors modularity over monolithic megastructures. Instead of building kilometer-scale “data castles” that would require orbital assembly, Google’s engineers propose a swarm of smaller, autonomous craft, each a computing cell in a vast orbital organism.
Achieving Terabit Links in Vacuum
Modern ML clusters rely on dense, low-latency networking: Google’s terrestrial TPUs use optical inter-chip interconnects capable of hundreds of gigabits per second per chip. No current satellite link approaches that throughput.
Suncatcher bridges the gap by shrinking the distance. Optical power received scales with the inverse square of distance, so reducing separation by orders of magnitude yields orders of magnitude more bandwidth.

The paper models an 81-satellite cluster with a 1 km radius, where neighbors maintain separations of 100–200 meters. At such proximity, commercially available Dense Wavelength Division Multiplexing (DWDM) transceivers, standard in Earth-based data centers, can provide aggregate bandwidths of ≈10 terabits per second per link.
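To make the proximity argument concrete, here is a minimal sketch; all values are our illustrative choices (a conventional ~1,000 km crosslink as the baseline, and one plausible DWDM channel plan), not the paper’s link budget.

```python
# Rough illustration of why close formation helps.
typical_isl_m = 1000e3   # a conventional inter-satellite link, ~1,000 km
suncatcher_sep_m = 150   # neighbor separation inside the cluster

# Far-field received power scales as 1/d^2, so proximity buys enormous margin.
power_gain = (typical_isl_m / suncatcher_sep_m) ** 2
print(f"received-power gain from proximity: {power_gain:.1e}x")   # ~4.4e7x

# That margin lets stock datacenter DWDM optics aggregate to the quoted rate,
# e.g. 100 channels x 100 Gbps (our illustrative channel plan):
channels, gbps = 100, 100
print(f"aggregate: {channels * gbps / 1000:.0f} Tbps per link")
```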
Bench-scale tests using off-the-shelf equipment achieved 1.6 Tbps bidirectional throughput across short free-space paths, validating feasibility. The authors even foresee spatial multiplexing, where multiple narrow beams operate in parallel within a single 10 cm aperture, scaling bandwidth inversely with distance.
In short, the vacuum between satellites becomes a fiber network without the fiber.
Keeping 81 Satellites in Harmony
Formation flight on this scale has no precedent. To test stability, the researchers simulated an 81-satellite planar constellation orbiting 650 km above Earth, subject to gravity and the planet’s slight equatorial bulge (the J₂ term).
The results reveal a delicate gravitational dance: the cluster oscillates through two “shape cycles” per orbit, its satellites tracing ellipses that stretch and compress without collisions. Differential accelerations from Earth’s oblateness cause slow drifts[8], but these can be corrected with minimal fuel, mere meters per second of velocity adjustment per year.
Scaling laws emerge naturally: for a given minimum separation, the number of satellites grows quadratically with cluster radius. Thus, capacity can expand simply by adding more rings around the core. The formation resembles a living lattice, a shimmering digital coral reef orbiting in perpetual sunlight.
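The quoted numbers are self-consistent, as a quick check shows (our arithmetic, using the paper’s figures): packing 81 satellites into a 1 km-radius planar disk implies a mean spacing right at the top of the 100–200 m band, and the quadratic law follows from area scaling.

```python
import math

# Consistency check of the scaling law, using the paper's numbers.
R = 1000.0   # cluster radius, m
N = 81       # satellites in the modeled cluster

# For a planar disk, N ~ pi * R^2 / s^2, so mean spacing s ~ sqrt(pi R^2 / N).
spacing = math.sqrt(math.pi * R**2 / N)
print(f"implied mean spacing: {spacing:.0f} m")        # ~197 m

# Quadratic growth: capacity scales with the disk's area.
for scale in (1, 2, 4):
    print(f"R = {scale} km -> ~{N * scale**2} satellites at the same spacing")
```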

TPUs Under Fire
Hardware reliability in space is non-negotiable. High-energy protons and cosmic rays can flip bits, degrade transistors, or fry entire chips. Traditionally, aerospace systems use radiation-hardened processors that lag a decade behind commercial performance. Google took the opposite route: testing its own Trillium (v6e) Cloud TPU and an AMD host server under a 67 MeV proton beam.
This is the first published radiation characterization of a modern ML accelerator. The verdict: Trillium survives a total ionizing dose of ≈ 15 krad(Si), twenty times the expected cumulative exposure for a five-year LEO mission. No permanent failures occurred.
The main vulnerability lies in the High-Bandwidth Memory (HBM) modules, which showed uncorrectable error-correction-code (ECC) faults roughly once per 50 rads. In orbital terms, that’s one failure per ten million inferences, acceptable for inference but risky for training, where silent data corruption could skew gradients.
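A back-of-the-envelope fault budget, using only the numbers quoted above (our arithmetic, deliberately crude), shows why training is the worry:

```python
# Back-of-the-envelope fault budget from the numbers quoted above.
tid_survived_krad = 15.0     # dose Trillium tolerated without permanent failure
mission_margin = 20.0        # that dose is ~20x a five-year LEO mission's
rad_per_ecc_fault = 50.0     # ~1 uncorrectable HBM ECC fault per 50 rad

mission_dose_rad = tid_survived_krad * 1000 / mission_margin    # ~750 rad
faults = mission_dose_rad / rad_per_ecc_fault                   # ~15 faults
print(f"expected 5-year dose: {mission_dose_rad:.0f} rad(Si)")
print(f"uncorrectable HBM faults: ~{faults:.0f} per TPU over 5 years (~3/yr)")
# negligible for stateless inference, but enough silent corruption to worry
# a multi-week training run
```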
Future mitigation could involve ECC reinforcement, redundant training runs, or even ML models that learn to detect their own cosmic amnesia.
The Economics of Escape Velocity
Energy abundance alone doesn’t make orbital AI practical; launch cost does. The team conducted a rigorous learning-curve analysis using SpaceX data from Falcon 1 through Falcon Heavy. Prices have historically fallen ≈ 20% with each doubling of cumulative mass launched.
At current rates, launch to LEO costs about $3,600 per kilogram, but the authors forecast ≤ $200/kg by 2035, assuming the success of fully reusable heavy-lift vehicles like Starship.
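The forecast follows from Wright’s-law arithmetic; a minimal sketch (our calculation, built only on the ~20%-per-doubling figure above):

```python
import math

# Wright's-law sketch: price falls ~20% per doubling of cumulative mass
# launched, i.e. price after n doublings = p0 * 0.8**n.
p0, target = 3600.0, 200.0   # $/kg today vs. the paper's parity threshold

n = math.log(target / p0) / math.log(0.8)
print(f"doublings needed: {n:.1f} (~{2**round(n):,}x today's cumulative mass)")
# -> ~13 doublings; aggressive, but a high-cadence, fully reusable heavy-lift
#    era is exactly the regime that could produce them by ~2035
```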
Why is $200/kg the magic number? Because at that threshold, the amortized “launched power price” (the cost per kilowatt of solar generation sent to orbit) matches or undercuts terrestrial data-center energy costs.
Their comparison is striking:
Terrestrial data centers in the U.S. spend ≈ $570–3,000 per kW per year on electricity and cooling. In other words, space could soon achieve cost parity with Earth.
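To see why $200/kg is roughly the parity point, here is an illustrative calculation; the specific power and operating life are our assumptions, not the paper’s, so treat the output as a sensitivity probe rather than a result.

```python
# Illustrative parity check; specific power and lifetime are OUR assumptions.
launch_cost_kg = 200.0        # $/kg, the threshold discussed above
specific_power_w_kg = 100.0   # assumed whole-satellite watts per kilogram
lifetime_years = 5.0          # assumed operating life in LEO

cost_per_kw = launch_cost_kg / (specific_power_w_kg / 1000)   # $/kW launched
annualized = cost_per_kw / lifetime_years
print(f"launched power price: ${cost_per_kw:,.0f}/kW, ~${annualized:,.0f}/kW-yr")
# -> ~$2,000/kW up front, ~$400/kW-yr amortized: at or below the terrestrial
#    $570-3,000/kW-yr range, which is the sense in which $200/kg buys parity
```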
SpaceX’s ambitions for 10×–100× component reuse push theoretical costs below $60/kg, even approaching $15/kg in optimistic scenarios. Competition from Blue Origin, Rocket Lab, and others may accelerate the descent.

Challenges Beyond Launch
The authors are candid that their proposal is an early milestone, not a blueprint for immediate deployment. Several formidable challenges remain:
Thermal management
TPUs are power-dense heat engines. In vacuum, heat can only escape via radiation, demanding large, lightweight radiators and advanced thermal-interface materials. Passive solutions are preferred to avoid moving parts and increase reliability.
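For a sense of scale, a minimal Stefan-Boltzmann sizing sketch; the ~300 K one-sided radiator with emissivity 0.9 is our assumption, and absorbed sunlight and Earthshine are ignored for simplicity.

```python
# Radiator sizing via the Stefan-Boltzmann law: P = eps * sigma * A * T^4.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def radiator_area_m2(power_w, temp_k=300.0, emissivity=0.9):
    """One-sided radiator area needed to reject power_w at temp_k."""
    return power_w / (emissivity * SIGMA * temp_k**4)

# e.g. a 1 kW TPU board radiating near room temperature:
print(f"{radiator_area_m2(1000):.1f} m^2 per kW")   # ~2.4 m^2
```

Cooler radiators shrink far slower than power grows (area scales as 1/T⁴), which is why power-dense accelerators make passive rejection such a demanding design constraint.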
Maintenance and redundancy
A failed TPU can be swapped on Earth but not 650 km above it. The simplest mitigation is redundancy, overprovisioning compute nodes, but long-term sustainability may require robotic servicing or self-healing architectures.
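Overprovisioning is straightforward to budget with a binomial tail; in the sketch below, the 95% annual node-survival probability is our illustrative assumption.

```python
from math import comb

# If each node survives a year with probability p, how many spares keep at
# least k nodes alive? P(alive >= k) for n nodes is a binomial tail sum.
def p_at_least(n, k, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

k, p = 81, 0.95          # need 81 working nodes; assumed 95% annual survival
for n in (81, 85, 90, 95):
    print(f"n={n}: P(>= {k} alive) = {p_at_least(n, k, p):.3f}")
# a handful of spare satellites takes availability from ~2% to near-certainty
```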
Ground communication
High-bandwidth laser links to Earth face atmospheric turbulence, cloud cover, and precision-pointing issues. Yet progress is rapid: NASA’s TBIRD mission demonstrated 200 Gbps optical downlinks in 2023, and organizations such as CAILABS and MIT are pushing laser-communication speeds ever higher.
Debris and regulation
An orbital AI cloud raises concerns about congestion and space-traffic management. Coordinating hundreds of reflective satellites in tight formation demands unprecedented precision and compliance with debris-mitigation policies.
Ethical and environmental considerations
Moving computation off-planet doesn’t absolve humanity of stewardship; it merely extends it beyond the atmosphere. Ensuring equitable access and preventing orbital pollution will be as critical as any engineering feat.
Toward Integrated Orbital Architectures
Project Suncatcher also gestures toward deeper technological convergence. Just as smartphones integrated cameras, sensors, and CPUs into system-on-chip (SoC) designs, space computing may evolve into system-on-satellite architectures, merging compute, radiators, and power generation into unified structures.

The authors even speculate on computational substrates based on neural cellular automata, hardware that mimics self-organizing biological growth, potentially merging computation and material structure. In that vision, satellites might not be manufactured but grown, evolving forms optimized for light capture, heat dissipation, and fault tolerance.
Suncatcher, then, is both a research program and a philosophy of design: decentralize, modularize, and evolve toward harmony with the energy gradient of the solar system.
Methods in Brief
To ground their proposal, the researchers employed both analytic and numerical modeling. Using Hill-Clohessy-Wiltshire and Tschauner-Hempel equations, they simulated relative motion in near-Keplerian orbits, refining with perturbations like Earth’s oblateness and atmospheric drag. Integration required centimeter-level precision across trajectories spanning tens of millions of meters, a computational feat in itself.
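For readers who want the flavor of those simulations, here is a minimal numerical sketch of the unperturbed Hill-Clohessy-Wiltshire dynamics; J₂ and drag are omitted, and the initial conditions are our illustrative choice.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Relative motion under the linearized HCW equations, in the rotating frame
# of a reference satellite at ~650 km altitude (x radial, y along-track,
# z cross-track). State: [x, y, z, vx, vy, vz].
MU = 3.986004418e14                  # Earth's GM, m^3/s^2
a = 6371e3 + 650e3                   # orbital radius, m
n = np.sqrt(MU / a**3)               # mean motion, rad/s

def hcw(t, s):
    x, y, z, vx, vy, vz = s
    return [vx, vy, vz,
            3 * n**2 * x + 2 * n * vy,   # radial acceleration
            -2 * n * vx,                 # along-track acceleration
            -n**2 * z]                   # cross-track acceleration

# Drift-free 2:1 relative ellipse: with x0 = 0 and vy0 = -2*n*x0 = 0 the
# secular along-track drift vanishes; vx0 sets the ellipse size.
x_amp = 75.0                                  # radial semi-axis, m
s0 = [0.0, 150.0, 0.0, n * x_amp, 0.0, 0.0]
T = 2 * np.pi / n                             # orbital period, s
sol = solve_ivp(hcw, (0, 2 * T), s0, rtol=1e-9, max_step=10.0)
print(f"period: {T/60:.1f} min, "
      f"max radial excursion: {abs(sol.y[0]).max():.0f} m")
```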
Their optical-link feasibility study used the Friis transmission equation and near-field approximations to compute received power and beam divergence. A 10 cm telescope aperture and 5 W transmitter yield workable power at inter-satellite distances below 10 km.
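A compact version of that link-budget calculation, with a simple Gaussian-beam model standing in for the paper’s near-field analysis; the 10 cm aperture and 5 W transmit power come from the text, while the 1550 nm DWDM wavelength is our assumption.

```python
import math

# Gaussian-beam link budget for a short free-space optical crosslink.
wavelength = 1550e-9     # m, standard DWDM C-band (assumed)
w0 = 0.05                # m, beam waist ~ aperture radius (10 cm aperture)
rx_radius = 0.05         # m, receive aperture radius
p_tx = 5.0               # W transmit power

z_r = math.pi * w0**2 / wavelength            # Rayleigh range (~5 km here)

def received_power(d):
    w = w0 * math.sqrt(1 + (d / z_r) ** 2)    # beam radius at distance d
    frac = 1 - math.exp(-2 * (rx_radius / w) ** 2)  # Gaussian power capture
    return p_tx * frac

for d in (200, 1000, 10_000):
    print(f"{d:>6} m: {received_power(d)*1000:8.1f} mW received")
# Received power stays in the watt range out to ~10 km, consistent with the
# conclusion that short-range links need no exotic optics.
```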
Radiation testing took place at UC Davis’s Crocker Nuclear Laboratory, where TPUs endured controlled proton bombardment under operational load. Beam energy losses through heatsinks, secondary particle generation, and error-logging software were all characterized in meticulous detail, fittingly, a stress test for hardware meant to run machine learning inside a particle storm.
From Moonshot to Infrastructure
The paper closes with cautious optimism. These early results (successful short-range optical links, stable orbital simulations, radiation-tolerant TPUs, and promising launch-cost trends) suggest that space-based AI compute is technically plausible within two decades.
Future milestones include on-orbit prototypes, integrated thermal-control demonstrations, and optical downlinks to ground stations. The approach echoes Google’s internal research philosophy: decompose the impossible into a chain of achievable experiments.
If successful, Project Suncatcher could redefine the geography of computation. Today, “the cloud” is a metaphor; tomorrow it might be literal, a luminous belt of servers orbiting Earth, powered by sunlight and linked by laser.
Computing Beyond the Planetary Scale
The significance of Suncatcher extends beyond Google’s infrastructure. It forces a rethink of energy, computation, and planetary ethics.
Energetically, it acknowledges that terrestrial sustainability has ceilings. Even if every data center runs on renewables, land, water, and grid stability impose limits. Moving compute to orbit decouples digital expansion from terrestrial ecosystems.
Technologically, it pioneers architectures resilient to cosmic radiation, latency, and isolation, skills that will underpin future interplanetary networks. The same systems could support Mars colonies, asteroid-mining operations, or planetary-scale environmental monitoring.
Philosophically, it gestures toward a Copernican shift in computation: intelligence no longer bound to the surface of one planet but distributed through the heliosphere. Humanity’s machines would literally think in sunlight.
Conclusion
Project Suncatcher stands where speculative fiction meets systems engineering. Its authors treat the cosmos not as a backdrop but as the next data-center campus. They marshal orbital mechanics, semiconductor physics, and economic modeling into a coherent roadmap toward solar-native artificial intelligence.
Whether this vision materializes or remains an elegant provocation, its logic is hard to dismiss. The Sun radiates freely, endlessly, impartially. If AI is to illuminate the future, perhaps it should learn directly from that example, running not on Earth, but above it.
References
[1] Agüera y Arcas, B., Beals, T., Biggs, M., Bloom, J. V., Fischbacher, T., Gromov, K., ... & Manyika, J. (2025). Towards a future space-based, highly scalable AI infrastructure system design.
[2] Elsworth, C., Huang, K., Patterson, D., Schneider, I., Sedivy, R., Goodman, S., ... & Manyika, J. (2025). Measuring the environmental impact of delivering AI at Google Scale. arXiv preprint arXiv:2508.15734.
[3] Baily, M., Byrne, D., Kane, A., & Soto, P. (2025). Generative AI at the Crossroads: Light Bulb, Dynamo, or Microscope?. arXiv preprint arXiv:2505.14588.
[4] Gohr, C., Rodríguez, G., Belomestnykh, S., Berg-Moelleken, D., Chauhan, N., Engler, J. O., ... & von Wehrden, H. (2025). Artificial intelligence in sustainable development research. Nature Sustainability, 1-9.
[5] McDuff, D., Schaekermann, M., Tu, T., Palepu, A., Wang, A., Garrison, J., ... & Natarajan, V. (2025). Towards accurate differential diagnosis with large language models. Nature, 1-7.
[6] Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O., ... & Hassabis, D. (2021). Highly accurate protein structure prediction with AlphaFold. Nature, 596(7873), 583-589.
[7] Sengupta, M., Xie, Y., Lopez, A., Habte, A., Maclaurin, G., & Shelby, J. (2018). The National Solar Radiation Data Base (NSRDB). Renewable and Sustainable Energy Reviews, 89, 51-60.
[8] D'Amico, S. (2010). Autonomous formation flying in low Earth orbit (Doctoral dissertation, TU Delft).



