Biography
Jason M. Klusowski is an Assistant Professor in the Department of Operations Research & Financial Engineering (ORFE) at Princeton University, where he is also a participating faculty member in the Center for Statistics and Machine Learning (CSML). His research broadly spans statistical machine learning for complex, large-scale models, with a focus on the trade-offs between interpretability, statistical accuracy, and computational feasibility. He works on topics such as transformers (encoding and decoding in sequence models), decision trees and ensemble learning (CART, random forests, stacking), neural networks (approximation theory and statistical properties), gradient-based optimization (Adam, SGD), and the limiting behavior of large-scale statistical models (Lasso, SLOPE).
Prior to joining Princeton, Jason was an Assistant Professor in the Department of Statistics at Rutgers University, New Brunswick. He completed his Ph.D. in Statistics and Data Science at Yale University in 2018 under the supervision of Andrew Barron. From 2017 to 2018, he was a visiting graduate student in the Statistics Department at The Wharton School, University of Pennsylvania.
Starting in January 2025, Jason will serve on the editorial board of Bernoulli, the journal of the Bernoulli Society. His research is supported by NSF CAREER grant DMS-2239448 (PI) and was previously supported by NSF grant DMS-2054808 (PI) and the TRIPODS DATA-INSPIRE Institute, CCF-1934924 (Senior Personnel).
Jason grew up in Winnipeg, in the heart of the Canadian Prairies. His spouse is an Assistant Professor of Marketing at Yale University.
News
- November 2024. Paper on sharp convergence rates for matching pursuit with Jonathan Siegel to appear in IEEE Transactions on Information Theory.
- November 2024. New paper on statistical-computational trade-offs for greedy recursive partitioning estimators with Yan Shuo Tan and Krishnakumar Balasubramanian.
- October 2024. New paper on decoding strategies for neural sequence models with Sijin Chen and Omar Hagrass.
- September 2024. Starting January 2025, I will be an associate editor for Bernoulli.
- September 2024. Two new papers on transformers to appear at NeurIPS: (1) global convergence in training large-scale transformers and (2) a one-layer transformer provably learns one-nearest neighbor in context, with Cheng Gao, Yuan Cao, Zihao Li, Yihan He, Mengdi Wang, Han Liu, and Jianqing Fan.
- May 2024. Paper on the implicit bias of Adam with Matias Cattaneo and Boris Shigida to appear in ICML.
- February 2024. I'm looking for a postdoctoral associate in statistics and machine learning at Princeton, starting summer/fall 2024!
- February 2024. I received a Project X Innovation Research Grant from the School of Engineering and Applied Science (SEAS) at Princeton.
- January 2024. Paper on convergence rates of oblique regression trees with Matias Cattaneo and Rajita Chandak to appear in the Annals of Statistics.