Qi Lei (雷琦)


Email: qilei at princeton.edu

Website: http://cecilialeiqi.github.io

I am an associate research scholar in the Department of Electrical and Computer Engineering (ECE) at Princeton University, where I am fortunate to have Prof. Jason Lee as my research mentor. I received my Ph.D. from the Oden Institute for Computational Engineering and Sciences at the University of Texas at Austin, advised by Alexandros G. Dimakis and Inderjit S. Dhillon, and was a member of the Center for Big Data Analytics and the Wireless Networking & Communications Group. I visited the Institute for Advanced Study (IAS) for the Theoretical Machine Learning program from September 2019 to July 2020. Prior to that, I was a Research Fellow at the Simons Institute for the Theory of Computing at UC Berkeley for its program on Foundations of Deep Learning.

My research interests lie in machine learning, deep learning, and optimization. Specifically, I am interested in developing sample-efficient and computationally efficient algorithms for fundamental machine learning problems.

I will be on the 2021-22 job market.

(Curriculum Vitae, GitHub, Google Scholar)

News and Announcements

10/2021 I was invited to give a talk on non-concave bandit optimization at the Sampling Algorithms and Geometries on Probability Distributions workshop at the Simons Institute

09/2021 Four papers accepted at NeurIPS 2021

07/2021 New papers out

05/2021 Three papers accepted at ICML 2021

05/2021 I was invited to give a talk on Provable Representation Learning in the Caltech Young Investigators Lecture series

05/2021 New paper out: How Fine-Tuning Allows for Effective Meta-Learning

03/2021 My Ph.D. thesis, Provably Effective Algorithms for Min-Max Optimization, won the 2021 Oden Institute Outstanding Dissertation Award

01/2021 Our paper was accepted at AISTATS 2021:

  • Qi Lei*, Sai Ganesh Nagarajan*, Ioannis Panageas*, Xiao Wang*. “Last Iterate Convergence in No-Regret Learning: Constrained Min-Max Optimization for Convex-Concave Landscapes”

01/2021 Our paper was accepted at ICLR 2021:

  • Simon S. Du*, Wei Hu*, Sham M. Kakade*, Jason D. Lee*, Qi Lei*. “Few-Shot Learning via Learning the Representation, Provably”

12/2020 I was invited to give a talk on “SGD Learns One-Layer Networks in WGANs” at Learning and Testing in High Dimensions

11/2020 I was invited to give a Young Researcher Spotlight Talk at the “Seeking Low-dimensionality in Deep Learning” workshop on my recent work on provable representation learning

10/2020 I was invited to give a talk on provable self-supervised learning at the One World ML seminar

10/2020 Our paper was accepted at NeurIPS 2020:

  • Xiao Wang, Qi Lei, Ioannis Panageas. “Fast Convergence of Langevin Dynamics on Manifold: Geodesics meet Log-Sobolev”

Selected Papers

(full publication list)

6. Baihe Huang*, Kaixuan Huang*, Sham M. Kakade*, Jason D. Lee*, Qi Lei*, Runzhe Wang*, and Jiaqi Yang*. “Optimal Gradient-based Algorithms for Non-concave Bandit Optimization”, to appear at NeurIPS 2021

5. Jason D. Lee*, Qi Lei*, Nikunj Saunshi*, Jiacheng Zhuo*. “Predicting What You Already Know Helps: Provable Self-Supervised Learning”, to appear at NeurIPS 2021

4. Simon S. Du*, Wei Hu*, Sham M. Kakade*, Jason D. Lee*, Qi Lei*. “Few-Shot Learning via Learning the Representation, Provably”, International Conference on Learning Representations (ICLR), 2021

3. Qi Lei, Jason D. Lee, Alexandros G. Dimakis, Constantinos Daskalakis. “SGD Learns One-Layer Networks in WGANs”, Proceedings of the International Conference on Machine Learning (ICML), 2020

2. Qi Lei*, Lingfei Wu*, Pin-Yu Chen, Alexandros G. Dimakis, Inderjit S. Dhillon, Michael Witbrock. “Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification”, Conference on Systems and Machine Learning (SysML), 2019 (code, slides)

1. Rashish Tandon, Qi Lei, Alexandros G. Dimakis, Nikos Karampatziakis. “Gradient Coding: Avoiding Stragglers in Distributed Learning”, Proceedings of the International Conference on Machine Learning (ICML), 2017 (code)