I am a principal researcher at Microsoft Research, New England (and New York City), where I am a member of the Reinforcement Learning Group. Previously, I was a postdoctoral fellow at the MIT Institute for Foundations of Data Science in IDSS, and prior to this I received my PhD from the Department of Computer Science at Cornell University (2019), advised by Karthik Sridharan. I received my BS and MS in electrical engineering from the University of Southern California in 2014.
Research
I am interested in the mathematical foundations—algorithm design principles and fundamental limits—necessary to develop intelligent agents that learn from experience. I am currently most excited about:
The statistical and computational foundations of interactive decision making, including reinforcement learning and imitation learning.
Understanding and improving foundation models, from pre-training to post-training and test-time—particularly as a basis upon which to build interactive decision making agents.
More broadly, I like to dabble in almost all theoretical aspects of machine learning and adjacent topics (statistics, information theory, algorithms/complexity, optimization, …).
News
8/15/25: We will teach our course 9.522: Statistical Reinforcement Learning and Decision Making for the third time at MIT this Fall, with new content on RL for language models.
8/1/25: Adam Block, Max Simchowitz, and I will be presenting a NeurIPS 2025 tutorial, Foundations of Imitation Learning: From Language Modeling to Continuous Control.
7/4/25: We are organizing a workshop on Foundations of Reasoning in Language Models at NeurIPS 2025! The submission deadline is Sept 3, 2025.
7/1/25: Upcoming talks: July: EXAIT workshop at ICML; August: IAIFI Summer Workshop at Harvard and IFDS Summer Workshop at UW; September: Harvard Statistics Colloquium.
6/29/25: I was elected to the board of directors of the Association for Computational Learning for a four-year term.
5/10/25: We are organizing a workshop on Foundations of Post-Training at COLT 2025! The submission deadline is May 19, 2025.
Internships and Postdocs
I have been fortunate to work with the following amazing interns and postdocs at MSR:
Phil Amortila (2023), Adam Block (2023, PD ‘24-25), Fan Chen (2025), Noah Golowich (2022, PD ‘25-26), Audrey Huang (2024, 2025), Qinghua Liu (PD ‘24-25), Sadhika Malladi (PD ‘25-26), Nived Rajaraman (PD ‘25-), Dhruv Rohatgi (2024), Clayton Sanford (2023), Anikait Singh (2025), Yuda Song (2023), Jens Tuyls (2025), Andrew Wagenmaker (2022), Tengyang Xie (PD ‘23-24), Yunzong Xu (2021, PD ‘23-24), and Yinglun Zhu (2021).
Postdocs: For postdocs in AI/ML at MSR starting in 2026, apply here (theory) and here (empirical). The deadline is October 22, 2025.
Internships: For internships at MSR in spring and summer 2026, apply here.
Selected Papers
The Coverage Principle: How Pre-Training Enables Post-Training
Fan Chen, Audrey Huang, Noah Golowich, Sadhika Malladi, Adam Block, Jordan T. Ash, Akshay Krishnamurthy, and Dylan J. Foster.
Preprint (under review), 2025.
Self-Improvement in Language Models: The Sharpening Mechanism
Audrey Huang*, Adam Block*, Dylan J. Foster*, Dhruv Rohatgi, Cyril Zhang, Max Simchowitz, Jordan T. Ash, and Akshay Krishnamurthy.
ICLR, 2025. Oral presentation.
Is Behavior Cloning All You Need? Understanding Horizon in Imitation Learning
Dylan J. Foster, Adam Block, and Dipendra Misra.
NeurIPS, 2024. Spotlight presentation. [talk]
The Statistical Complexity of Interactive Decision Making
Dylan J. Foster, Sham M. Kakade, Jian Qian, and Alexander Rakhlin.
Preprint (under review), 2021. [talk]
Beyond UCB: Optimal and Efficient Contextual Bandits with Regression Oracles
Dylan J. Foster and Alexander Rakhlin.
ICML, 2020.
Orthogonal Statistical Learning
Dylan J. Foster and Vasilis Syrgkanis.
COLT, 2019. Best Paper Award. Journal version in Annals of Statistics (2023).
Spectrally-Normalized Margin Bounds for Neural Networks
Peter L. Bartlett, Dylan J. Foster, and Matus J. Telgarsky.
NeurIPS, 2017. Spotlight presentation.
*Equal contribution
Selected Awards
Best Paper Award (Orthogonal Statistical Learning)
Conference on Learning Theory (COLT), 2019.
Best Student Paper Award (Orthogonal Statistical Learning)
Conference on Learning Theory (COLT), 2019.
Best Student Paper Award (Logistic Regression: The Importance of Being Improper)
Conference on Learning Theory (COLT), 2018.
Facebook PhD Fellowship, 2018.
NDSEG PhD Fellowship, 2016.
