Qi Lei (雷琦)

Email: qilei at princeton.edu

Website: http://cecilialeiqi.github.io

I am a postdoctoral research associate in Electrical Engineering at Princeton University, where I am fortunate to have Prof. Jason Lee as my research mentor. I received my Ph.D. from the Oden Institute for Computational Engineering and Sciences at the University of Texas at Austin, advised by Alexandros G. Dimakis and Inderjit S. Dhillon; there I was also a member of the Center for Big Data Analytics and the Wireless Networking & Communications Group. I visited the Institute for Advanced Study (IAS) for the Theoretical Machine Learning program from September 2019 to July 2020. Prior to that, I was a Research Fellow at the Simons Institute for the Theory of Computing at UC Berkeley for the program on Foundations of Deep Learning.

My research interests lie in machine learning, deep learning, and optimization. In particular, I develop provably efficient and robust algorithms for fundamental machine learning problems.

(Curriculum Vitae, GitHub, Google Scholar)

News and Announcements

05/2021 Three papers were accepted at ICML 2021:

  • Qi Lei, Wei Hu, Jason D. Lee. “Near-Optimal Linear Regression under Distribution Shift”

  • Tianle Cai*, Ruiqi Gao*, Jason D. Lee*, Qi Lei*. “A Theory of Label Propagation for Subpopulation Shift”

  • Jay Whang, Qi Lei, Alexandros G. Dimakis. “Solving Inverse Problems with a Flow-based Noise Model”

05/2021 I was invited to give a talk on provable representation learning at the Caltech Young Investigators Lecture

05/2021 New paper out: “How Fine-Tuning Allows for Effective Meta-Learning”

03/2021 My Ph.D. thesis, “Provably Effective Algorithms for Min-Max Optimization,” won this year's Oden Institute Outstanding Dissertation Award

01/2021 Our paper was accepted at AISTATS 2021:

  • Qi Lei*, Sai Ganesh Nagarajan*, Ioannis Panageas*, Xiao Wang*. “Last iterate convergence in no-regret learning: constrained min-max optimization for convex-concave landscapes”

01/2021 Our paper was accepted at ICLR 2021:

  • Simon S. Du*, Wei Hu*, Sham M. Kakade*, Jason D. Lee*, Qi Lei*. “Few-Shot Learning via Learning the Representation, Provably”

12/2020 I was invited to give a talk on “SGD Learns One-Layer Networks in WGANs” at Learning and Testing in High Dimensions

11/2020 I was invited to give a Young Researcher Spotlight Talk on my recent work on provable representation learning at the “Seeking Low-dimensionality in Deep Learning” workshop

10/2020 I was invited to give a talk on provable self-supervised learning at the One-World ML seminar

10/2020 Our paper was accepted at NeurIPS 2020:

  • Xiao Wang, Qi Lei, Ioannis Panageas. “Fast Convergence of Langevin Dynamics on Manifold: Geodesics meet Log-Sobolev”

09/2020 I was selected as a 2020 Computing Innovation Fellow. Thank you, CRA!

08/2020 New paper out: “Predicting What You Already Know Helps: Provable Self-Supervised Learning”

06/2020 Our paper was accepted at ICML 2020:

  • Qi Lei, Jason D. Lee, Alexandros G. Dimakis, Constantinos Daskalakis. “SGD Learns One-Layer Networks in WGANs”

Selected Papers

(full publication list)

7. Jason D. Lee*, Qi Lei*, Nikunj Saunshi*, Jiacheng Zhuo*. “Predicting What You Already Know Helps: Provable Self-Supervised Learning”, arXiv preprint

6. Simon S. Du*, Wei Hu*, Sham M. Kakade*, Jason D. Lee*, Qi Lei*. “Few-Shot Learning via Learning the Representation, Provably”, Proc. of International Conference on Learning Representations (ICLR) 2021

5. Qi Lei, Jason D. Lee, Alexandros G. Dimakis, Constantinos Daskalakis. “SGD Learns One-Layer Networks in WGANs”, Proc. of International Conference on Machine Learning (ICML) 2020

4. Qi Lei, Jiacheng Zhuo, Constantine Caramanis, Inderjit S Dhillon, Alexandros G Dimakis. “Primal-Dual Block Frank-Wolfe”, Proc. of Neural Information Processing Systems (NeurIPS) 2019 (slides, code)

3. Qi Lei, Ajil Jalal, Inderjit S. Dhillon, Alexandros G. Dimakis. “Inverting Deep Generative Models, One Layer at a Time”, Proc. of Neural Information Processing Systems (NeurIPS) 2019 (poster, code)

2. Qi Lei*, Lingfei Wu*, Pin-Yu Chen, Alexandros G. Dimakis, Inderjit S. Dhillon, Michael Witbrock. “Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification”, Proc. of Systems and Machine Learning (SysML) 2019 (code, slides)

1. Rashish Tandon, Qi Lei, Alexandros G. Dimakis, Nikos Karampatziakis. “Gradient Coding: Avoiding Stragglers in Distributed Learning”, Proc. of International Conference on Machine Learning (ICML) 2017 (code)