Michael Y. Li

Hi! I'm a Computer Science PhD student at Stanford, where I'm advised by Noah Goodman and Emily Fox.

Previously, I graduated summa cum laude and Phi Beta Kappa from Princeton, where I was fortunate to work with Tom Griffiths and Ryan Adams.

This summer, I'm interning as a quantitative researcher at Two Sigma. Last year, I interned at Microsoft Research.

Email  /  GitHub  /  Google Scholar  /  LinkedIn


Research

I work broadly on reasoning in LLMs. My previous research explored how to integrate LLMs into statistical and data science workflows in principled, statistically rigorous ways. Before that, I worked on probabilistic methods (e.g., Sequential Monte Carlo). My email is firstname.middle_initial.lastname@stanford.edu!

Automated Hypothesis Validation with Agentic Sequential Falsifications


Kexin Huang*, Ying Jin*, Ryan Li*, Michael Y. Li, Emmanuel Candès, Jure Leskovec
preprint, 2025
paper

LLMs + sequential hypothesis tests with Type-I error control

BoxingGym: Benchmarking Progress in Automated Experimental Design and Model Discovery


Kanishk Gandhi*, Michael Y. Li*, Lyle Goodyear, Louise Li, Aditi Bhaskar, Mohammed Zaman, Noah D. Goodman
preprint, 2025
paper

A benchmark for LLM-driven experimental design and model discovery

CriticAL: Model Criticism Automation with Language Models


Michael Y. Li, Noah D. Goodman, Emily B. Fox
NeurIPS Statistical Foundations of LLMs and Foundation Models Workshop, 2024
paper

An automated method for scalably falsifying scientific models

What Should Embeddings Embed? Transformers Represent Latent Generating Distributions


Liyi Zhang, Michael Y. Li, Thomas L. Griffiths
preprint, 2024
paper

We study the embeddings of transformers through the lens of predictive sufficient statistics.

Automated Statistical Model Discovery with Language Models


Michael Y. Li, Emily B. Fox, Noah D. Goodman
ICML, 2024
paper

We propose a language-model-driven system for automated statistical model discovery.

NAS-X: Neural Adaptive Smoothing via Twisting


Dieterich Lawson*, Michael Y. Li*, Scott W. Linderman
NeurIPS, 2023
Advances in Approximate Bayesian Inference, 2023 [Oral Presentation]
paper / website

We introduce a new method for inference and model learning that combines reweighted wake-sleep and smoothing Sequential Monte Carlo. We theoretically analyze the bias and consistency of our method and then apply it to discrete latent variable modeling and fitting mechanistic models of neural dynamics.

Why think step-by-step? Reasoning emerges from the locality of experience


Ben Prystawski, Michael Y. Li, Noah D. Goodman
NeurIPS, 2023 [Oral Presentation, top 0.5%]
paper

We empirically and theoretically study when chain-of-thought reasoning emerges in large language models.

Gaussian Process Surrogate Models for Neural Networks


Michael Y. Li, Erin Grant, Thomas L. Griffiths
UAI, 2023
paper

We propose a framework that uses Gaussian processes to approximate neural networks. We use this framework to analyze neural network training dynamics and identify influential data points.

Learning to Learn Functions


Michael Y. Li, Fred Callaway, William D. Thompson, Ryan P. Adams, Thomas L. Griffiths
Cognitive Science, 2023
paper

We propose hierarchical Bayesian models of how people learn to learn functions and validate these models in behavioral experiments.


Design and source code from Jon Barron's website