Michael Y. Li

I am a first-year Computer Science PhD student at Stanford University, where I'm advised by Noah Goodman and Emily Fox.

Previously, I graduated summa cum laude and Phi Beta Kappa from Princeton University, where I worked with Tom Griffiths and Ryan Adams.

Email  /  GitHub  /  LinkedIn


Research

My research interests are in understanding large language models and in probabilistic machine learning.

NAS-X: Neural Adaptive Smoothing via Twisting


Dieterich Lawson*, Michael Y. Li*, Scott Linderman
NeurIPS, 2023
Advances in Approximate Bayesian Inference, 2023 [Oral Presentation]
paper

We present NAS-X, an importance-sampling-based method for approximate Bayesian inference that uses smoothing sequential Monte Carlo to estimate gradients. We apply it to inference and model learning in discrete latent variable models and nonlinear dynamical systems.
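For intuition, here is a minimal sketch of the generic sequential Monte Carlo machinery the method builds on: a bootstrap particle filter estimating the log marginal likelihood of a toy one-dimensional linear-Gaussian state-space model. The model, parameters, and filter are illustrative assumptions only; NAS-X's smoothing and learned twisting functions are not implemented here.

# A minimal bootstrap particle filter for a 1D linear-Gaussian state-space model.
# Illustrative sketch only; NAS-X's smoothing and twisting are not shown.
import numpy as np

def bootstrap_particle_filter(obs, num_particles=1000, trans_std=1.0, obs_std=1.0, seed=0):
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, size=num_particles)  # samples from the prior p(x_0)
    log_marginal = 0.0
    for y in obs:
        # Propagate particles through the transition model x_t ~ N(x_{t-1}, trans_std^2).
        particles = particles + rng.normal(0.0, trans_std, size=num_particles)
        # Weight particles by the observation likelihood y_t ~ N(x_t, obs_std^2).
        log_w = -0.5 * ((y - particles) / obs_std) ** 2 - np.log(obs_std * np.sqrt(2 * np.pi))
        # Accumulate the estimate of log p(y_{1:t}) via the log-mean of the weights.
        log_norm = np.logaddexp.reduce(log_w)
        log_marginal += log_norm - np.log(num_particles)
        # Resample particles in proportion to their normalized weights.
        particles = rng.choice(particles, size=num_particles, p=np.exp(log_w - log_norm))
    return log_marginal

observations = np.array([0.2, -0.5, 1.1, 0.7])
print(bootstrap_particle_filter(observations))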

Why think step-by-step? Reasoning emerges from the locality of experience


Ben Prystawski, Michael Y. Li, Noah D. Goodman
NeurIPS, 2023 [Oral Presentation]
paper

We show that step-by-step reasoning emerges in language models when their training data has local structure, i.e., when variables that strongly influence each other are observed together.

Gaussian Process Surrogate Models for Neural Networks


Michael Y. Li, Erin Grant, Thomas L. Griffiths
UAI, 2023
paper

We propose a framework that uses Gaussian processes as surrogate models for neural networks, and apply it to analyze training dynamics and identify influential data points.
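As a rough illustration of the surrogate idea, the sketch below fits exact Gaussian process regression to the input-output behavior of a small network. The toy one-hidden-layer network, RBF kernel, and hyperparameters are assumptions for illustration, not the construction from the paper.

# Fit a GP surrogate to a toy neural network's input-output behavior (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

# A small random one-hidden-layer network whose function we want to approximate.
W1, b1 = rng.normal(size=(16, 1)), rng.normal(size=16)
w2 = rng.normal(size=16)
def network(x):
    return np.tanh(x @ W1.T + b1) @ w2

# Query the network at training inputs.
X_train = np.linspace(-3, 3, 40).reshape(-1, 1)
y_train = network(X_train)

# Exact GP regression with an RBF kernel and a small jitter term for numerical stability.
def rbf(A, B, lengthscale=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / lengthscale**2)

K = rbf(X_train, X_train) + 1e-6 * np.eye(len(X_train))
alpha = np.linalg.solve(K, y_train)

# Posterior mean of the surrogate at new inputs, compared against the network itself.
X_test = np.linspace(-3, 3, 200).reshape(-1, 1)
surrogate_mean = rbf(X_test, X_train) @ alpha
print(np.max(np.abs(surrogate_mean - network(X_test))))  # worst-case surrogate error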

Learning to Learn Functions


Michael Y. Li, Fred Callaway, William D. Thompson, Ryan P. Adams, Thomas L. Griffiths
Cognitive Science, 2023
paper

We propose hierarchical Bayesian models of how people learn to learn functions and validate these models in behavioral experiments.


Design and source code from Jon Barron's website