Preprints ("*" denotes equal contribution; "\(\dagger\)" denotes corresponding author.)
(\(\alpha\)-\(\beta\) order) Po Chen, Rujun Jiang, Peng Wang\(^\dagger\). A Complete Loss Landscape Analysis of Regularized Deep Matrix Factorization, 2025. [paper]
(\(\alpha\)-\(\beta\) order) Laura Balzano\(^\dagger\), Tianjiao Ding\(^\dagger\), Benjamin D. Haeffele, Soo Min Kwon, Qing Qu, Peng Wang\(^\dagger\), Zhangyang Wang, Can Yaras. An Overview of Low-Rank Structures in the Training and Adaptation of Large Models. Under review in IEEE Signal Processing Magazine, 2025. [paper]
(\(\alpha\)-\(\beta\) order) Po Chen, Rujun Jiang, Peng Wang\(^\dagger\). Error Bound Analysis for the Regularized Loss of Deep Linear Neural Networks, 2025. [paper]
Peng Wang*\(^\dagger\), Huijie Zhang*, Zekai Zhang, Siyi Chen, Yi Ma, Qing Qu. Diffusion Models Learn Low-Dimensional Distributions via Subspace Clustering. Under review in Journal of Machine Learning Research, 2025. [paper]
Huijie Zhang, Zijian Huang, Siyi Chen, Jinfan Zhou, Zekai Zhang, Peng Wang, Qing Qu. Understanding Generalization in Diffusion Models via Probability Flow Distance. Under review in NeurIPS 2025. [paper]
Xiao Li*, Zekai Zhang*, Xiang Li, Siyi Chen, Zhihui Zhu, Peng Wang\(^\dagger\), Qing Qu. Understanding Representation Dynamics of Diffusion Models via Low-Dimensional Modeling. Under review in NeurIPS 2025. [paper]
Alec S. Xu, Can Yaras, Peng Wang, Qing Qu. Understanding How Nonlinear Layers Create Linearly Separable Features for Low-Dimensional Data. Under review in SIAM Journal on Mathematics of Data Science, 2025. [paper]
Can Yaras*, Peng Wang*, Wei Hu, Zhihui Zhu, Laura Balzano, Qing Qu. The Law of Parsimony in Gradient Descent for Learning Deep Linear Networks, 2023. To be submitted. [paper]
Taoli Zheng, Peng Wang, Anthony Man-Cho So. A Linearly Convergent Algorithm for Rotationally Invariant L1-Norm Principal Component Analysis, 2022. [paper]
Books
Learning Deep Representations of Data Distributions. Sam Buchanan, Druv Pai, Peng Wang, Yi Ma. An open-source book, 2025. [book]
Journal Papers
Peng Wang*, Xiao Li*, Can Yaras, Zhihui Zhu, Laura Balzano, Wei Hu, Qing Qu. Understanding Deep Representation Learning via Layerwise Feature Compression and Discrimination. Accepted for publication in Journal of Machine Learning Research, 2025. [paper]
Peng Wang, Rujun Jiang, Qingyuan Kong, Laura Balzano. A Proximal Difference-of-Convex Algorithm for Sample Average Approximation of Chance Constrained Programming. Accepted for publication in INFORMS Journal on Computing, 2025. [paper, code]
Peng Wang, Huikang Liu, Anthony Man-Cho So. Linear Convergence of Proximal Alternating Minimization Method with Extrapolation for L1-Norm Principal Component Analysis. SIAM Journal on Optimization (2023) 33(2):684-712. [paper]
Peng Wang, Zirui Zhou, Anthony Man-Cho So. Non-Convex Exact Community Recovery in Stochastic Block Model. Mathematical Programming, Series A (2022) 195(1-2):793-829. [paper]
Conference Papers
Peng Wang, Yifu Lu, Yaodong Yu, Druv Pai, Qing Qu, Yi Ma. Attention-Only Transformers via Unrolled Subspace Denoising. ICML 2025. [paper]
Can Yaras*, Siyi Chen*, Peng Wang, Qing Qu. Explaining and Mitigating the Modality Gap in Contrastive Multimodal Learning. CAPL 2025. [paper]
Siyi Chen*, Huijie Zhang*, Minzhe Guo, Yifu Lu, Peng Wang, Qing Qu. Exploring Low-Dimensional Subspaces in Diffusion Models for Controllable Image Editing. NeurIPS 2024. [paper]
Peng Wang, Huikang Liu, Druv Pai, Yaodong Yu, Zhihui Zhu, Qing Qu, Yi Ma. A Global Geometric Analysis of Maximal Coding Rate Reduction. ICML 2024. [paper]
Can Yaras, Peng Wang, Laura Balzano, Qing Qu. Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation. ICML 2024 (Oral, acceptance rate: 1.52%). [paper]
Huikang Liu*, Peng Wang*, Longxiu Huang, Qing Qu, Laura Balzano. Matrix Completion with ReLU Sampling. ICML 2024. [paper]
Jiachen Jiang, Jinxin Zhou, Peng Wang, Qing Qu, Dustin Mixon, Chong You, Zhihui Zhu. Generalized Neural Collapse for a Large Number of Classes. ICML 2024. [paper]
Huijie Zhang, Jinfan Zhou, Yifu Lu, Minzhe Guo, Peng Wang, Liyue Shen, Qing Qu. The Emergence of Reproducibility and Consistency in Diffusion Models. ICML 2024. [paper]
Can Yaras*, Peng Wang*, Wei Hu, Zhihui Zhu, Laura Balzano, Qing Qu. Invariant Low-Dimensional Subspaces in Gradient Descent for Learning Deep Matrix Factorizations. NeurIPS M3L Workshop 2023. [paper]
Jinxin Wang, Yuen-Man Pun, Xiaolu Wang, Peng Wang, Anthony Man-Cho So. Projected Tensor Power Method for Hypergraph Community Recovery. ICML 2023. [paper]
Peng Wang*, Huikang Liu*, Can Yaras*, Laura Balzano, Qing Qu. Linear Convergence Analysis of Neural Collapse with Unconstrained Features. NeurIPS OPT Workshop (Optimization for Machine Learning) 2022. [paper]
Can Yaras*, Peng Wang*, Zhihui Zhu, Laura Balzano, Qing Qu. Neural Collapse with Normalized Features: A Geometric Analysis over the Riemannian Manifold. NeurIPS 2022. [paper]
Peng Wang, Huikang Liu, Anthony Man-Cho So, Laura Balzano. Convergence and Recovery Guarantees of the K-Subspaces Method for Subspace Clustering. ICML 2022. [paper]
Xiaolu Wang, Peng Wang, Anthony Man-Cho So. Exact Community Recovery over Signed Graphs. AISTATS 2022. [paper]
Peng Wang, Huikang Liu, Zirui Zhou, Anthony Man-Cho So. Optimal Non-Convex Exact Recovery in Stochastic Block Model via Projected Power Method. ICML 2021. [paper]
Peng Wang*, Zirui Zhou*, Anthony Man-Cho So. A Nearly-Linear Time Algorithm for Exact Community Recovery in Stochastic Block Model. ICML 2020. [paper]
Peng Wang, Huikang Liu, Anthony Man-Cho So. Globally Convergent Accelerated Proximal Alternating Maximization Method for L1-Principal Component Analysis. ICASSP 2019 (IEEE SPS Student Travel Award). [paper]
Huikang Liu, Peng Wang, Anthony Man-Cho So. Fast First-Order Methods for the Massive Robust Multicast Beamforming Problem with Interference Temperature Constraints. ICASSP 2019. [paper]