CriticAL: Model Criticism Automation with Language Models
Michael Y. Li, Noah D. Goodman, Emily B. Fox
NeurIPS Statistical Foundations of LLMs and Foundation Models Workshop, 2024
paper
We introduce a new statistical method for automatically and scalably falsifying LLM-generated hypotheses.
|
What Should Embeddings Embed? Transformers Represent Latent Generating Distributions
Liyi Zhang, Michael Y. Li, Thomas L. Griffiths
preprint, 2024
paper
We study the embeddings of transformers through the lens of predictive sufficient statistics.
|
BoxingGym: Benchmarking Progress in Automated Experimental Design and Model Discovery
Kanishk Gandhi*, Michael Y. Li*, Lyle Goodyear, Louise Li, Aditi Bhaskar, Mohammed Zaman, Noah D. Goodman
preprint, 2024
paper
A benchmark for LLM-driven experimental design and model discovery.
|
Automated Statistical Model Discovery with Language Models
Michael Y. Li, Emily B. Fox, Noah D. Goodman
ICML, 2024
paper
An iterative algorithm for generating structured hypotheses with LLMs.
|
NAS-X: Neural Adaptive Smoothing via Twisting
Dieterich Lawson*, Michael Y. Li*, Scott W. Linderman
NeurIPS, 2023
Advances in Approximate Bayesian Inference, 2023 [Oral Presentation]
paper
website
We introduce a new method for fitting sequential latent variable models that leverages recent advances in Sequential Monte Carlo.
We study our method theoretically and apply it to fitting discrete latent variable models and complex ODE-based models.
|
Why think step-by-step? Reasoning emerges from the locality of experience
Ben Prystawski, Michael Y. Li, Noah D. Goodman
NeurIPS, 2023 [Oral Presentation, top 0.5%]
paper
We empirically and theoretically study when chain-of-thought reasoning emerges in large language models.
|
Gaussian Process Surrogate Models for Neural Networks
Michael Y. Li, Erin Grant, Thomas L. Griffiths
UAI, 2023
paper
We propose a framework that uses Gaussian processes to approximate neural networks. We use this framework to analyze neural network training dynamics and identify influential data points.
|
Learning to Learn Functions
Michael Y. Li, Fred Callaway, William D. Thompson, Ryan P. Adams, Thomas L. Griffiths
Cognitive Science, 2023
paper
We propose hierarchical Bayesian models of how people learn to learn functions and validate our model in behavioral experiments.
|