Qi Lei (雷琦)
Research Overview

My research aims to bridge the theoretical and empirical understanding of modern machine learning algorithms, with a particular focus on AI safety: data privacy, distributionally robust algorithms, and sample- and parameter-efficient learning.

Recent research highlights: Weak-to-Strong Generalization; Data Reconstruction Attack and Defense; Data and Model Pruning; Theoretical Foundations of Pre-trained Models

(Curriculum Vitae, Github, Google Scholar)

Advertisement

I am actively looking for self-motivated and proactive students to work with. You are welcome to send me an email with your CV and a short description of your research plans and interests. (You may refer to this link to see whether our research interests match.) For Ph.D. applicants, please apply to Courant Mathematics or the Center for Data Science, whichever you see fit, and mention my name in your application. I do not plan to admit students from Courant CS for now. For prospective post-doc applicants, I encourage you to apply for the CDS Faculty Fellow, Courant Instructor, and Flatiron Research Fellow positions.

News and Announcements

08/2025 Invited talk at the Inaugural Workshop on Frontiers in Statistical Machine Learning on Weak-to-Strong Generalization
07/2025 Invited talk at the Inverse Methods for Complex Systems under Uncertainty Workshop on Data Reconstruction Attacks in AI Models are Inverse Problems
05/2025 Paper accepted at UAI 2025:
05/2025 Paper accepted at ICML 2025:
01/2025 Three papers accepted at AISTATS 2025:
01/2025 Paper accepted at ICLR 2025:
01/2025 Invited talk at IMS@NUS on Theoretical Bounds of Data Reconstruction Error and Induced Optimal Defenses (slides)
12/2024 Invited talk at ICSDS on Theoretical Bounds of Data Reconstruction Error and Induced Optimal Defenses (slides)
11/2024 Invited talk at Harvard Statistics on Distribution-aware Data and Model Pruning (slides)
10/2024 Organized the minisymposium "Efficient Computation and Learning with Randomized Sampling and Pruning" at SIAM MDS 2024
09/2024 Two papers accepted at NeurIPS 2024:
Selected Papers

8. Yijun Dong, Yicheng Li, Yunai Li, Jason D. Lee, Qi Lei. "Discrepancies are Virtue: Weak-to-Strong Generalization through Lens of Intrinsic Dimension", ICML 2025
7. Sheng Liu*, Zihan Wang*, Yuxiao Chen, Qi Lei. "Data Reconstruction Attacks and Defenses: A Systematic Evaluation", AISTATS 2025
6. Zihan Wang, Jason D. Lee, Qi Lei. "Reconstructing Training Data from Model Gradient, Provably", AISTATS 2023
5. Baihe Huang*, Kaixuan Huang*, Sham M. Kakade*, Jason D. Lee*, Qi Lei*, Runzhe Wang*, and Jiaqi Yang*. "Optimal Gradient-based Algorithms for Non-concave Bandit Optimization", NeurIPS 2021
4. Jason D. Lee*, Qi Lei*, Nikunj Saunshi*, Jiacheng Zhuo*. "Predicting What You Already Know Helps: Provable Self-Supervised Learning", NeurIPS 2021
3. Simon S. Du*, Wei Hu*, Sham M. Kakade*, Jason D. Lee*, Qi Lei*. "Few-Shot Learning via Learning the Representation, Provably", ICLR 2021
2. Qi Lei*, Lingfei Wu*, Pin-Yu Chen, Alexandros G. Dimakis, Inderjit S. Dhillon, Michael Witbrock. "Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification", Conference on Systems and Machine Learning (SysML), 2019 (code, slides)
1. Rashish Tandon, Qi Lei, Alexandros G. Dimakis, Nikos Karampatziakis. "Gradient Coding: Avoiding Stragglers in Distributed Learning", ICML 2017 (code)