Qi Lei (雷琦)


Assistant Professor of Mathematics and Data Science, and, by courtesy,
Assistant Professor of Computer Science,
Member of the CILVR Lab,
Member of the Math and Data (MaD) group,
Courant Institute of Mathematical Sciences and Center for Data Science,
New York University

Email: ql518 at nyu.edu

Research Overview

My research interests lie in machine learning, deep learning, and optimization. Specifically, I am interested in developing sample-efficient and computationally efficient algorithms for fundamental machine learning problems.

Recent research highlights: (Data Reconstruction Attack and Defense), (Theoretical Foundations of Pre-trained Models)

(Curriculum Vitae, Github, Google Scholar)

Advertisement

I am actively looking for self-motivated and proactive students to work with. You are welcome to send me an email with your CV and a brief description of your research plans/interests. (You may refer to this link to see whether our research interests match.)

For Ph.D. applicants, please apply to Courant Mathematics or the Center for Data Science, whichever you see fit, and mention my name in your application. I do not plan to admit students through Courant CS for now.

For prospective students or interns who want to work with me in the short term, please fill out this form so that we can find a suitable project for you.

For prospective postdoc applicants, I encourage you to apply for the CDS Faculty Fellow, Courant Instructor, and Flatiron Research Fellow positions.

News and Announcements

03/2024 Invited talk at CISS 2024 at Princeton on data reconstruction attack and defense (slides)

03/2024 New papers out:

01/2024 New papers out:

01/2024 Presented our recent paper Exploring Minimally Sufficient Representation in Active Learning through Label-Irrelevant Patch Augmentation (slides) at CPAL 2024

10/2023 I am a co-PI on ROME: Reduced Modeling with Extreme Data, a Department of Energy project under the Science Foundations for Energy Earthshots initiative. More information is in the DOE press release.

10/2023 Two papers (1 main + 1 findings) accepted at EMNLP 2023:

09/2023 Two papers accepted at NeurIPS 2023:

08/2023 Won the Whitehead Fellowship for Junior Faculty in Biomedical and Biological Sciences

06/2023 New papers out:

04/2023 Paper accepted at ICML 2023:

04/2023 Won the NYU Research Catalyst Prize jointly with Zhengyuan Zhou

Selected Papers

(full publication list)

7. Zihan Wang, Jason D. Lee, Qi Lei. “Reconstructing Training Data from Model Gradient, Provably”, AISTATS 2023

6. Baihe Huang*, Kaixuan Huang*, Sham M. Kakade*, Jason D. Lee*, Qi Lei*, Runzhe Wang*, and Jiaqi Yang*. “Optimal Gradient-based Algorithms for Non-concave Bandit Optimization”, NeurIPS 2021

5. Jason D. Lee*, Qi Lei*, Nikunj Saunshi*, Jiacheng Zhuo*. “Predicting What You Already Know Helps: Provable Self-Supervised Learning”, NeurIPS 2021

4. Simon S. Du*, Wei Hu*, Sham M. Kakade*, Jason D. Lee*, Qi Lei*. “Few-Shot Learning via Learning the Representation, Provably”, ICLR 2021

3. Qi Lei, Jason D. Lee, Alexandros G. Dimakis, Constantinos Daskalakis. “SGD Learns One-Layer Networks in WGANs”, ICML 2020

2. Qi Lei*, Lingfei Wu*, Pin-Yu Chen, Alexandros G. Dimakis, Inderjit S. Dhillon, Michael Witbrock. “Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification”, SysML 2019 (code, slides)

1. Rashish Tandon, Qi Lei, Alexandros G. Dimakis, Nikos Karampatziakis. “Gradient Coding: Avoiding Stragglers in Distributed Learning”, ICML 2017 (code)