I am a principal researcher at Microsoft Research, New England (and New York City), where I am a member of the Reinforcement Learning Group. Previously, I was a postdoctoral fellow at the MIT Institute for Foundations of Data Science, and prior to this I received my PhD from the Department of Computer Science at Cornell University (2019), advised by Karthik Sridharan. I received my BS and MS in electrical engineering from the University of Southern California in 2014. I work on theory for machine learning.
My research lies at the intersection of machine learning and decision making, including data-driven reinforcement learning and control, contextual bandits, and statistical learning in causal/counterfactual settings. I am interested in uncovering new algorithmic principles and fundamental limits for data-driven decision making.
I am also excited about developing new models that better capture challenges and constraints faced when deploying data-driven decision making systems in practice.
More broadly, I am interested in all modern aspects of statistical learning, generalization, and algorithm design, especially in the context of deep learning.
Lecture Notes on Statistical Reinforcement Learning and Decision Making
Fall 2023 Course @ MIT : Statistical Reinforcement Learning and Decision Making
ICML 2022 Tutorial: Bridging Learning and Decision Making
Internships and postdocs. I have been fortunate to work with the following glorious interns and postdocs at MSR: Phil Amortila, Adam Block, Noah Golowich, Clayton Sanford, Yuda Song, Andrew Wagenmaker, Tengyang Xie, Yunzong Xu, Yinglun Zhu. If you are a PhD student interested in internships in ML & RL at MSR for 2024, apply here. For postdocs starting in 2024, apply here (theory) and here (empirical).
Efficient First-Order Contextual Bandits: Prediction, Allocation, and Triangular Discrimination
Dylan J. Foster and Akshay Krishnamurthy.
NeurIPS 2021. Oral presentation.
Dylan J. Foster, Alexander Rakhlin, David Simchi-Levi, and Yunzong Xu.
COLT 2021.
Learning the Linear Quadratic Regulator from Nonlinear Observations
Zakaria Mhammedi, Dylan J. Foster, Max Simchowitz, Dipendra Misra, Wen Sun, Akshay Krishnamurthy, Alexander Rakhlin, and John Langford.*
NeurIPS 2020.
Beyond UCB: Optimal and Efficient Contextual Bandits with Regression Oracles
Dylan J. Foster and Alexander Rakhlin.
ICML 2020.
Now in Vowpal Wabbit! Use the --squarecb option or see here for more information.
Model Selection for Contextual Bandits
Dylan J. Foster, Akshay Krishnamurthy, and Haipeng Luo.
NeurIPS 2019. Spotlight presentation.
Practical Contextual Bandits with Regression Oracles
Dylan J. Foster, Alekh Agarwal, Miroslav Dudík, Haipeng Luo, and Robert E. Schapire.*
ICML 2018. Long talk.
Now in Vowpal Wabbit (thanks to Alberto Bietti!). Try it with the --regcb or --regcbopt option.
Logistic Regression: The Importance of Being Improper
Dylan J. Foster, Satyen Kale, Haipeng Luo, Mehryar Mohri, and Karthik Sridharan.
COLT 2018. Best Student Paper Award.
Online Learning: Sufficient Statistics and the Burkholder Method
Dylan J. Foster, Alexander Rakhlin, and Karthik Sridharan.
COLT 2018.
Dylan J. Foster, Alexander Rakhlin, and Karthik Sridharan.
NeurIPS 2015. Spotlight presentation.
Orthogonal Statistical Learning (Statistical Learning with a Nuisance Component)
Dylan J. Foster and Vasilis Syrgkanis.
Annals of Statistics, 2023.
COLT 2019. Best Paper Award.
Lower Bounds for Non-Convex Stochastic Optimization
Yossi Arjevani, Yair Carmon, John C. Duchi, Dylan J. Foster, Nathan Srebro, and Blake Woodworth.
Mathematical Programming, Series A, 2022.
The Complexity of Making the Gradient Small in Stochastic Convex Optimization
Dylan J. Foster, Ayush Sekhari, Ohad Shamir, Nathan Srebro, Karthik Sridharan, and Blake Woodworth.
COLT 2019. Best Student Paper Award.
Spectrally-Normalized Margin Bounds for Neural Networks
Peter Bartlett, Dylan J. Foster, and Matus Telgarsky.
NeurIPS 2017. Spotlight presentation.
Adaptive Learning: Algorithms and Complexity
Dylan J. Foster
Ph.D. Thesis. Department of Computer Science, Cornell University, 2019.
Cornell CS Doctoral Dissertation Award.
Foundations of Reinforcement Learning and Interactive Decision Making
Dylan J. Foster and Alexander Rakhlin, 2023.
Lecture notes from 9.522: Statistical Reinforcement Learning and Decision Making.
Program Committee/Area Chair: COLT (Senior PC): 2020, 2021, 2022, 2023; NeurIPS (Area Chair): 2020, 2021, 2022, 2023; ICML (Area Chair): 2022; ALT: 2019, 2020, 2021, 2022, 2023, 2024; Learning for Dynamics and Control (L4DC): 2020, 2021, 2022.
Conference Reviewing: COLT, NeurIPS, ICML, STOC, FOCS, SODA, ALT, AISTATS, AAAI.
Journal Reviewing: JMLR, Journal of the ACM, Annals of Statistics, Mathematics of Operations Research, Operations Research, Biometrika.
Statistical Reinforcement Learning and Decision Making
MIT, Fall 2023.
Co-taught with Sasha Rakhlin.
Statistical Reinforcement Learning and Decision Making
MIT, Fall 2022.
Co-taught with Sasha Rakhlin.
Machine Learning Theory
Cornell University, Spring 2018.
Teaching assistant for Karthik Sridharan.
Introduction to Analysis of Algorithms
Cornell University, Spring 2015.
Teaching assistant for Éva Tardos and David Steurer.
Received outstanding teaching award.
Foundations of Artificial Intelligence
Cornell University, Fall 2014.
Teaching assistant for Bart Selman.
I can be reached at dylanfoster at microsoft dot com.
© Dylan Foster 2015.