Qi Lei (雷琦)

Qi Lei, New York University

Email: ql518 at nyu.edu

Website: http://cecilialeiqi.github.io

I am an Assistant Professor of Mathematics and Data Science at the Courant Institute of Mathematical Sciences and the Center for Data Science at NYU. Previously, I was an associate research scholar in the Department of Electrical and Computer Engineering at Princeton University, working with Prof. Jason D. Lee as my research mentor. I received my Ph.D. from the Oden Institute for Computational Engineering and Sciences at the University of Texas at Austin, advised by Alexandros G. Dimakis and Inderjit S. Dhillon. I was also a member of the Center for Big Data Analytics and the Wireless Networking & Communications Group. I visited the Institute for Advanced Study (IAS) for the Theoretical Machine Learning program from September 2019 to July 2020. Prior to that, I was a research fellow at the Simons Institute for the Theory of Computing at UC Berkeley for the program on Foundations of Deep Learning.

My research interests lie in machine learning, deep learning, and optimization. Specifically, I am interested in developing sample- and computationally efficient algorithms for fundamental machine learning problems.

(Curriculum Vitae, Github, Google Scholar)

For prospective students or interns who want to work with me: please fill out this form so that we can find a suitable project for you.

News and Announcements

05/2022 Presented my recent work on handling distribution shifts and won the Best Poster Award at New Advances in Statistics and Data Science

04/2022 Invited talk on the theoretical foundations of pre-trained models at the AlgML seminar at Princeton

04/2022 Invited talk at Dartmouth ACMS

03/2022 New papers out:

02/2022 Invited talk at the Adversarial Approaches in Machine Learning workshop at the Simons Institute

12/2021 Invited talk at the USC Machine Learning Symposium

12/2021 Invited talk at the CSIP seminar at Georgia Tech

11/2021 Invited talk in the ELLIS talk series on “Provable representation learning”

10/2021 I'm honored to be selected as a Rising Star in Machine Learning at the University of Maryland

10/2021 I'm honored to be selected as a Rising Star in EECS at MIT

10/2021 Invited talk on non-concave bandit optimization at the Sampling Algorithms and Geometries on Probability Distributions workshop at the Simons Institute

09/2021 All of my submissions were accepted at NeurIPS 2021!

09/2021 Invited talk at BLISS seminar

07/2021 New papers out:

Selected Papers

(full publication list)

6. Baihe Huang*, Kaixuan Huang*, Sham M. Kakade*, Jason D. Lee*, Qi Lei*, Runzhe Wang*, and Jiaqi Yang*. “Optimal Gradient-based Algorithms for Non-concave Bandit Optimization”, Advances in Neural Information Processing Systems (NeurIPS), 2021

5. Jason D. Lee*, Qi Lei*, Nikunj Saunshi*, and Jiacheng Zhuo*. “Predicting What You Already Know Helps: Provable Self-Supervised Learning”, Advances in Neural Information Processing Systems (NeurIPS), 2021

4. Simon S. Du*, Wei Hu*, Sham M. Kakade*, Jason D. Lee*, and Qi Lei*. “Few-Shot Learning via Learning the Representation, Provably”, International Conference on Learning Representations (ICLR), 2021

3. Qi Lei, Jason D. Lee, Alexandros G. Dimakis, and Constantinos Daskalakis. “SGD Learns One-Layer Networks in WGANs”, Proceedings of the International Conference on Machine Learning (ICML), 2020

2. Qi Lei*, Lingfei Wu*, Pin-Yu Chen, Alexandros G. Dimakis, Inderjit S. Dhillon, and Michael Witbrock. “Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification”, Conference on Systems and Machine Learning (SysML), 2019 (code, slides)

1. Rashish Tandon, Qi Lei, Alexandros G. Dimakis, and Nikos Karampatziakis. “Gradient Coding: Avoiding Stragglers in Distributed Learning”, Proceedings of the International Conference on Machine Learning (ICML), 2017 (code)