Michael Y. Li

I am a second-year Computer Science PhD student at Stanford, where I'm advised by Noah Goodman and Emily Fox.

Previously, I graduated summa cum laude, Phi Beta Kappa, and Tau Beta Pi from Princeton, where I was fortunate to work with Tom Griffiths and Ryan Adams.

This summer, I'm interning at Microsoft Research in Redmond.

Email  /  GitHub  /  Google Scholar  /  LinkedIn



Recently, I've worked on integrating large language models into statistical modeling pipelines. I'm also interested in probabilistic modeling and inference, and in understanding large language models.

CriticAL: Model Criticism Automation with Language Models

Michael Y. Li, Noah D. Goodman, Emily B. Fox
preprint, 2024

We use language models for Bayesian model criticism and refinement.

What Should Embeddings Embed? Transformers Represent Latent Generating Distributions

Liyi Zhang, Michael Y. Li, Thomas L. Griffiths
preprint, 2024

We study the embeddings of transformers through the lens of predictive sufficient statistics.

BoxingGym: Benchmarking Progress in Automated Experimental Design and Model Discovery

Kanishk Gandhi*, Michael Y. Li*, Lyle Goodyear, Louise Li, Aditi Bhaskar, Mohammed Zaman, Noah D. Goodman
preprint, 2024

A benchmark for LLM-driven experimental design and model discovery.

Automated Statistical Model Discovery with Language Models

Michael Y. Li, Emily B. Fox, Noah D. Goodman
ICML, 2024

We propose a language-model-driven system for automated statistical model discovery.

NAS-X: Neural Adaptive Smoothing via Twisting

Dieterich Lawson*, Michael Y. Li*, Scott W. Linderman
NeurIPS, 2023
Advances in Approximate Bayesian Inference, 2023 [Oral Presentation]
paper / website

We introduce a new method for inference and model learning that combines reweighted wake-sleep and smoothing sequential Monte Carlo. We theoretically analyze the bias and consistency of our method and then apply it to discrete latent variable modeling and to fitting mechanistic models of neural dynamics.

Why think step-by-step? Reasoning emerges from the locality of experience

Ben Prystawski, Michael Y. Li, Noah D. Goodman
NeurIPS, 2023 [Oral Presentation, top 0.5%]

We empirically and theoretically study when chain-of-thought reasoning emerges in large language models.

Gaussian Process Surrogate Models for Neural Networks

Michael Y. Li, Erin Grant, Thomas L. Griffiths
UAI, 2023

We propose a framework that uses Gaussian processes to approximate neural networks. We use this framework to analyze neural network training dynamics and identify influential data points.

Learning to Learn Functions

Michael Y. Li, Fred Callaway, William D. Thompson, Ryan P. Adams, Thomas L. Griffiths
Cognitive Science, 2023

We propose hierarchical Bayesian models of how people learn to learn functions and validate our model in behavioral experiments.

Design and source code from Jon Barron's website