Research Agenda — Phase I: Understanding

We don't just make AI work. We study how it learns.

There is no fundamental theory of learning yet. We are bringing the tools of statistical mechanics, information geometry, and field theory to build one.

The Physics Advantage

The same mathematics that maps the invisible universe can map the hidden states of neural networks.

For three decades, cosmologists have built mathematical infrastructure to extract knowledge from vast, noisy, incomplete datasets — where ground truth is fundamentally unknowable and brute-force approaches fail.

This accumulated expertise, developed for billion-dollar space missions under the most extreme conditions of uncertainty, has never been systematically brought to bear on the science of intelligence and learning.

Cosmology and machine learning share the same underlying mathematical structures. The transfer is not metaphorical — it is mathematical.

Research Directions

I · The Statistical Mechanics of Neural Networks

Why do neural networks learn at all? Why do models undergo sudden capability jumps? What governs the scaling of performance with data and parameters? We apply statistical mechanics and information geometry to derive the laws that govern learning.
Phase I · Core
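As a toy illustration of the kind of empirical regularity such a theory must explain, here is a minimal sketch of fitting a power-law scaling curve. The data are synthetic and every constant (amplitude, exponent, irreducible loss) is an illustrative assumption, not a measured value:

```python
import numpy as np

# Synthetic "loss vs. parameter count" data following an assumed power law
# L(N) = a * N**(-alpha) + c. All constants here are illustrative.
rng = np.random.default_rng(0)
a, alpha, c = 50.0, 0.35, 1.2
N = np.logspace(6, 10, 20)  # parameter counts from 1e6 to 1e10

# Multiplicative 1% noise on the reducible part of the loss.
loss = a * N ** (-alpha) * (1 + rng.normal(0, 0.01, N.size)) + c

# Recover the exponent by linear regression in log-log space,
# after subtracting the (assumed known) irreducible loss c.
slope, intercept = np.polyfit(np.log(N), np.log(loss - c), 1)
print(f"recovered scaling exponent: {-slope:.3f}")
```

The recovered exponent matches the one used to generate the data; the open scientific question is why real networks follow such laws at all.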
II · Hallucination Mechanics

Why do AI systems produce confident outputs when they are wrong? The absence of principled uncertainty quantification in current systems is not a feature gap — it is a scientific one. We adapt cosmological Bayesian inference to build AI that knows when it doesn't know.
Phase I · Core
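One standard proxy for "knowing when it doesn't know" — not necessarily the specific method pursued here — is the predictive entropy of an ensemble: when ensemble members disagree, the averaged predictive distribution spreads out and its entropy rises. A minimal sketch:

```python
import numpy as np

def predictive_entropy(member_probs):
    """Entropy of the mean predictive distribution of an ensemble.

    member_probs: array of shape (n_members, n_classes), each row a
    probability vector from one ensemble member. High entropy flags
    inputs the ensemble disagrees on or finds ambiguous.
    """
    p_mean = np.mean(member_probs, axis=0)
    return float(-np.sum(p_mean * np.log(p_mean + 1e-12)))

# A confident, agreeing ensemble -> low predictive entropy.
agree = np.array([[0.98, 0.01, 0.01]] * 5)

# A disagreeing ensemble -> high entropy: "the model doesn't know".
disagree = np.array([[0.90, 0.05, 0.05],
                     [0.05, 0.90, 0.05],
                     [0.05, 0.05, 0.90]])

print(predictive_entropy(agree), predictive_entropy(disagree))
```

The disagreeing ensemble's mean distribution is uniform over three classes, so its entropy approaches the maximum ln 3 — exactly the signal a confidently wrong single model cannot provide.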
III · Training Time Reduction

Understanding the mechanics of learning can reduce training time and improve accuracy. Principled architectures grounded in empirical science — not engineering workarounds — can produce models that learn faster from less data.
Phase I · Applied
IV · Biased Tracers of Underlying Truths

The same mathematics cosmologists use to infer the invisible dark matter field from galaxy positions can be applied to infer latent computational states from neural activations. We develop new architectures grounded in first principles.
Phase I · Applied
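The simplest version of this logic is a linear bias model: the observed tracer is a scaled, noisy view of the latent field, and a Wiener (minimum-variance linear) filter recovers the field better than naively dividing out the bias. The field, bias, and noise values below are illustrative stand-ins, not a claim about any particular dataset:

```python
import numpy as np

rng = np.random.default_rng(1)

# Latent field we want to recover (dark matter overdensity in cosmology,
# or a latent activation pattern in a network). Unit variance, zero mean.
field = rng.normal(0.0, 1.0, size=10_000)

b = 2.0          # linear bias of the tracer
noise_std = 0.5  # observational noise level
tracer = b * field + rng.normal(0.0, noise_std, size=field.size)

# Wiener filter: field_hat = [b * var(field) / (b^2 var(field) + var(noise))] * tracer
w = b * 1.0 / (b**2 * 1.0 + noise_std**2)
field_hat = w * tracer

resid_naive = np.mean((tracer / b - field) ** 2)   # naive de-biasing
resid_wiener = np.mean((field_hat - field) ** 2)   # Wiener estimate
print(resid_naive, resid_wiener)
```

The Wiener weight shrinks the estimate toward zero exactly as much as the noise warrants, which is why it beats simple division by the bias.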
V · Simulation-Based Inference

Many real-world systems are too complex for analytic formulas, but we can simulate them. We pioneered these methods in cosmology a decade before the ML community adopted them. Now we build the universal toolkit.
Cross-Phase
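The simplest member of the simulation-based inference family is rejection ABC: draw parameters from the prior, run the simulator, and keep only the draws whose simulated summary statistic lands near the observed one. A minimal sketch with a toy simulator (in practice the simulator is a black box with no tractable likelihood; all numbers here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(theta, n=200):
    """Stand-in simulator: draws n samples from N(theta, 1)."""
    return rng.normal(theta, 1.0, size=n)

observed = simulate(1.5)          # "data" generated with true theta = 1.5
obs_summary = observed.mean()     # summary statistic of the observation

# Rejection ABC: sample from the prior, simulate, keep draws whose
# summary falls within epsilon of the observed summary.
prior_draws = rng.uniform(-5, 5, size=20_000)
summaries = np.array([simulate(t).mean() for t in prior_draws])
accepted = prior_draws[np.abs(summaries - obs_summary) < 0.1]

print(f"approximate posterior mean: {accepted.mean():.2f}")
```

The accepted draws approximate the posterior over theta; modern SBI methods replace this brute-force rejection step with learned density estimators, but the logic is the same.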
VI · New AI/ML Architectures

The theoretical advances and analytical tools developed in Phase I become the foundation for novel machine learning architectures designed for applied settings — systems grounded in first principles rather than empirical trial and error.
Phase II · Creation

The Transfer Map

Cosmology's solved problems are AI's open problems.
Bayesian Inference
Cosmologists quantify uncertainty as a first principle. Current AI systems lack any principled uncertainty quantification.

Distribution Shift
Redshift evolution and selection effects are distribution shifts — cosmology has handled them for decades.

Intractable Likelihoods
Simulation-based inference was developed for cosmological models where the likelihood cannot be written analytically.

Biased Tracers
Galaxies are biased tracers of the dark matter field. Neural activations are biased tracers of latent computational states.

Get in Touch
If you believe this science matters, we'd like to hear from you.
Whether you're a researcher, funder, journalist, or simply someone who thinks the science underneath AI deserves serious attention — reach out.