Peng Wang

Postdoctoral Research Fellow
Department of Electrical Engineering and Computer Science
University of Michigan, Ann Arbor
Email: pengwa@umich.edu
Google Scholar

About Me

I am currently a postdoctoral research fellow advised by Professors Laura Balzano and Qing Qu at the University of Michigan. Before that, I received my Ph.D. in Systems Engineering and Engineering Management at The Chinese University of Hong Kong, advised by Professor Anthony Man-Cho So.

Research Interests

Broadly speaking, my research interests lie at the intersection of optimization, machine learning, and data science. Currently, I am devoted to understanding the mathematical foundations of deep learning models, including supervised learning models, diffusion models, and large language models. I mainly study how low-complexity structures (e.g., low-rankness, sparsity, over-parameterization) in practical problems lead to favorable optimization properties, and I leverage these structures to mitigate the challenges posed by worst-case scenarios, enable efficient optimization, and improve our understanding of learning phenomena.

Feel free to email me if you are interested in my research. Remote collaboration is also welcome!

Preprints (“*” denotes equal contribution, “†” denotes corresponding author.)

  • Alec S. Xu, Can Yaras, Peng Wang, Qing Qu. Understanding How Nonlinear Layers Create Linearly Separable Features for Low-Dimensional Data, 2025. [paper]

  • Peng Wang*, Huijie Zhang*, Zekai Zhang, Siyi Chen, Yi Ma, Qing Qu. Diffusion Models Learn Low-Dimensional Distributions via Subspace Clustering, 2024. Under review at ICLR 2025. [paper]

    • Accepted at the NeurIPS M3L Workshop.

  • Peng Wang*, Xiao Li*, Can Yaras, Zhihui Zhu, Laura Balzano, Wei Hu, Qing Qu. Understanding Deep Representation Learning via Layerwise Feature Compression and Discrimination. Under review at the Journal of Machine Learning Research, 2024. [paper]

  • Can Yaras*, Peng Wang*, Wei Hu, Zhihui Zhu, Laura Balzano, Qing Qu. The Law of Parsimony in Gradient Descent for Learning Deep Linear Networks, 2023. To be submitted. [paper]

  • Taoli Zheng, Peng Wang, Anthony Man-Cho So. A Linearly Convergent Algorithm for Rotationally Invariant L1-Norm Principal Component Analysis, 2022. [paper]

Journal Papers

  • Peng Wang, Rujun Jiang, Qingyuan Kong, Laura Balzano. A Proximal DC Algorithm for Sample Average Approximation of Chance Constrained Programming. Accepted for publication in INFORMS Journal on Computing, 2025. [paper]

  • Peng Wang, Huikang Liu, Anthony Man-Cho So. Linear Convergence of Proximal Alternating Minimization Method with Extrapolation for L1-Norm Principal Component Analysis. SIAM Journal on Optimization (2023) 33(2):684-712. [paper]

  • Peng Wang, Zirui Zhou, Anthony Man-Cho So. Non-Convex Exact Community Recovery in Stochastic Block Model. Mathematical Programming, Series A (2022) 195(1-2):793-829. [paper]

Conference Papers

  • Siyi Chen*, Huijie Zhang*, Minzhe Guo, Yifu Lu, Peng Wang, Qing Qu. Exploring Low-Dimensional Subspaces in Diffusion Models for Controllable Image Editing. NeurIPS 2024. [paper]

  • Peng Wang, Huikang Liu, Druv Pai, Yaodong Yu, Zhihui Zhu, Qing Qu, Yi Ma. A Global Geometric Analysis of Maximal Coding Rate Reduction. ICML 2024. [paper]

  • Can Yaras, Peng Wang, Laura Balzano, Qing Qu. Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation. ICML 2024 (Oral, acceptance rate: 1.52%). [paper]

  • Huikang Liu*, Peng Wang*, Longxiu Huang, Qing Qu, Laura Balzano. Matrix Completion with ReLU Sampling. ICML 2024. [paper]

  • Jiachen Jiang, Jinxin Zhou, Peng Wang, Qing Qu, Dustin Mixon, Chong You, Zhihui Zhu. Generalized Neural Collapse for a Large Number of Classes. ICML 2024. [paper]

  • Huijie Zhang, Jinfan Zhou, Yifu Lu, Minzhe Guo, Peng Wang, Liyue Shen, Qing Qu. The Emergence of Reproducibility and Consistency in Diffusion Models. ICML 2024. [paper]

  • Can Yaras*, Peng Wang*, Wei Hu, Zhihui Zhu, Laura Balzano, Qing Qu. Invariant Low-Dimensional Subspaces in Gradient Descent for Learning Deep Matrix Factorizations. NeurIPS M3L Workshop 2023. [paper]

  • Jinxin Wang, Yuen-Man Pun, Xiaolu Wang, Peng Wang, Anthony Man-Cho So. Projected Tensor Power Method for Hypergraph Community Recovery. ICML 2023. [paper]

  • Peng Wang*, Huikang Liu*, Can Yaras*, Laura Balzano, Qing Qu. Linear Convergence Analysis of Neural Collapse with Unconstrained Features. NeurIPS Workshop on Optimization for Machine Learning (OPT 2022). [paper]

  • Can Yaras*, Peng Wang*, Zhihui Zhu, Laura Balzano, Qing Qu. Neural Collapse with Normalized Features: A Geometric Analysis over the Riemannian Manifold. NeurIPS 2022. [paper]

  • Peng Wang, Huikang Liu, Anthony Man-Cho So, Laura Balzano. Convergence and Recovery Guarantees of the K-Subspaces Method for Subspace Clustering. ICML 2022. [paper]

  • Xiaolu Wang, Peng Wang, Anthony Man-Cho So. Exact Community Recovery over Signed Graphs. AISTATS 2022. [paper]

  • Peng Wang, Huikang Liu, Zirui Zhou, Anthony Man-Cho So. Optimal Non-Convex Exact Recovery in Stochastic Block Model via Projected Power Method. ICML 2021. [paper]

  • Peng Wang*, Zirui Zhou*, Anthony Man-Cho So. A Nearly-Linear Time Algorithm for Exact Community Recovery in Stochastic Block Model. ICML 2020. [paper]

  • Peng Wang, Huikang Liu, Anthony Man-Cho So. Globally Convergent Accelerated Proximal Alternating Maximization Method for L1-Principal Component Analysis. ICASSP 2019 (IEEE SPS Student Travel Award). [paper]

  • Huikang Liu, Peng Wang, Anthony Man-Cho So. Fast First-Order Methods for the Massive Robust Multicast Beamforming Problem with Interference Temperature Constraints. ICASSP 2019. [paper]