Qi Lei (雷琦)
Research Overview

My research interests are machine learning, deep learning, and optimization. Specifically, I am interested in developing sample- and computationally efficient algorithms for fundamental machine learning problems. Recent research highlights: Data and Model Pruning; Data Reconstruction Attack and Defense; Theoretical Foundations of Pre-trained Models. (Curriculum Vitae, Github, Google Scholar)

Advertisement

I am actively looking for self-motivated and proactive students to work with. You are welcome to send me an email with your CV and a short description of your research plans and interests. (You may refer to this link to see whether our research interests match.)

- For Ph.D. applicants, please apply to Courant Mathematics or the Center for Data Science, whichever you see fit, and mention my name in your application. I do not plan to admit students from Courant CS for now.
- For prospective students or interns who want to work with me in the short term, please fill out this form so that we can find a suitable project for you.
- For prospective post-doc applicants, I encourage you to apply for the CDS Faculty Fellow, Courant Instructor, and Flatiron Research Fellow positions.

News and Announcements

10/2024  Organized the minisymposium “Efficient Computation and Learning with Randomized Sampling and Pruning” at SIAM MDS 2024
09/2024  Two papers accepted at NeurIPS:
08/2024  Invited talk at the 2024 Workshop on Data-driven PDE-based inverse problems, in theory and practice, on Data Reconstruction Error Analysis in the Lens of Inverse Problems
08/2024  Invited talk at the JSM session Harnessing Large Language Models: Opportunities and Challenges for Statistics, on LLM Pruning
06/2024  Invited talk at Mathematics of Deep Learning on Data Reconstruction Error Analysis
05/2024  Two papers accepted at ICML 2024:
04/2024  Congratulations to my Capstone team Christine Gao, Ciel Wang, and Yuqi Zhang on winning the Best Student-Voted Runner-Up Poster (out of 46 projects!) with their great work on Medical Data Leakage with Multi-site Collaborative Training
03/2024  Invited talk at CISS 2024 @ Princeton on data reconstruction attack and defense (slides)
03/2024  New papers out:
01/2024  New papers out:
01/2024  Presented our recent paper Exploring Minimally Sufficient Representation in Active Learning through Label-Irrelevant Patch Augmentation (slides) at CPAL 2024

Selected Papers

7. Sheng Liu*, Zihan Wang*, Yuxiao Chen, Qi Lei. “Data Reconstruction Attacks and Defenses: A Systematic Evaluation”, preprint, 2024
6. Zihan Wang, Jason D. Lee, Qi Lei. “Reconstructing Training Data from Model Gradient, Provably”, AISTATS 2023
5. Baihe Huang*, Kaixuan Huang*, Sham M. Kakade*, Jason D. Lee*, Qi Lei*, Runzhe Wang*, Jiaqi Yang*. “Optimal Gradient-based Algorithms for Non-concave Bandit Optimization”, NeurIPS 2021
4. Jason D. Lee*, Qi Lei*, Nikunj Saunshi*, Jiacheng Zhuo*. “Predicting What You Already Know Helps: Provable Self-Supervised Learning”, NeurIPS 2021
3. Simon S. Du*, Wei Hu*, Sham M. Kakade*, Jason D. Lee*, Qi Lei*. “Few-Shot Learning via Learning the Representation, Provably”, ICLR 2021
2. Qi Lei*, Lingfei Wu*, Pin-Yu Chen, Alexandros G. Dimakis, Inderjit S. Dhillon, Michael Witbrock. “Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification”, SysML 2019 (code, slides)
1. Rashish Tandon, Qi Lei, Alexandros G. Dimakis, Nikos Karampatziakis. “Gradient Coding: Avoiding Stragglers in Distributed Learning”, ICML 2017 (code)