## Qi Lei (雷琦)
I am an associate research scholar at ECE, Princeton University. I am fortunate to have Prof. Jason Lee as my research mentor. I received my Ph.D. from the Oden Institute for Computational Engineering and Sciences at the University of Texas at Austin, advised by Alexandros G. Dimakis and Inderjit S. Dhillon. I was also a member of the Center for Big Data Analytics and the Wireless Networking & Communications Group. I visited IAS for the Theoretical Machine Learning program from September 2019 to July 2020. Prior to that, I was a Research Fellow at the Simons Institute for the Theory of Computing at UC Berkeley for the program on Foundations of Deep Learning. Starting in September 2022, I will be an Assistant Professor of Mathematics and Data Science at the Courant Institute of Mathematical Sciences and the Center for Data Science at NYU.

(Curriculum Vitae, Github, Google Scholar)

## News and Announcements

- 04/2022 Invited talk at Dartmouth ACMS.
- 03/2022 New papers out:
  - Sample Efficiency of Data Augmentation Consistency Regularization
  - Nearly Minimax Algorithms for Linear Bandits with Shared Representation
- 02/2022 Invited talk at the Adversarial Approaches in Machine Learning workshop at the Simons Institute.
- 12/2021 Invited talk at the USC Machine Learning Symposium.
- 12/2021 Invited talk in the CSIP seminar at Georgia Tech.
- 11/2021 Invited talk in the ELLIS talk series on "Provable representation learning".
- 10/2021 I'm honored to be selected as a rising star in Machine Learning at the University of Maryland.
- 10/2021 I'm honored to be selected as a rising star in EECS at MIT.
- 10/2021 I'm invited to give a talk on non-concave bandit optimization at the Sampling Algorithms and Geometries on Probability Distributions workshop at the Simons Institute.
- 09/2021 Four papers accepted at NeurIPS 2021:
  - Baihe Huang*, Kaixuan Huang*, Sham M. Kakade*, Jason D. Lee*, **Qi Lei***, Runzhe Wang*, Jiaqi Yang*. "Optimal Gradient-based Algorithms for Non-concave Bandit Optimization"
  - Baihe Huang*, Kaixuan Huang*, Sham M. Kakade*, Jason D. Lee*, **Qi Lei***, Runzhe Wang*, Jiaqi Yang*. "Going Beyond Linear RL: Sample Efficient Neural Function Approximation"
  - Kurtland Chua, **Qi Lei**, Jason D. Lee. "How Fine-tuning Allows for Effective Meta-learning"
  - Jason D. Lee*, **Qi Lei***, Nikunj Saunshi*, Jiacheng Zhuo*. "Predicting What You Already Know Helps: Provable Self-Supervised Learning"
- 09/2021 Invited talk at the BLISS seminar.
- 07/2021 New papers out:
  - Optimal Gradient-based Algorithms for Non-concave Bandit Optimization
  - Going Beyond Linear RL: Sample Efficient Neural Function Approximation
  - A Short Note on the Relationship of Information Gain and Eluder Dimension
- 05/2021 Three papers accepted at ICML 2021:
  - **Qi Lei**, Wei Hu, Jason D. Lee. "Near-Optimal Linear Regression under Distribution Shift"
  - Tianle Cai*, Ruiqi Gao*, Jason D. Lee*, **Qi Lei***. "A Theory of Label Propagation for Subpopulation Shift"
  - Jay Whang, **Qi Lei**, Alexandros G. Dimakis. "Solving Inverse Problems with a Flow-based Noise Model"
- 05/2021 I'm invited to give a talk on Provable Representation Learning at the Caltech Young Investigators Lecture.
- 05/2021 New paper out: How Fine-Tuning Allows for Effective Meta-Learning.

## Selected Papers

6. Baihe Huang*, Kaixuan Huang*, Sham M. Kakade*, Jason D. Lee*,
5. Jason D. Lee*,
4. Simon S. Du*, Wei Hu*, Sham M. Kakade*, Jason D. Lee*,
3.
2. Press coverage: <Nature Story> <VentureBeat> <Tech Talks> <机器之心>
1. Rashish Tandon,