Peng Wang

Postdoctoral Research Fellow

Department of Electrical Engineering and Computer Science
University of Michigan, Ann Arbor

Office: Room 3218, EECS Building, 1301 Beal Avenue, Ann Arbor, MI 48109-2122
Email: pengwa@umich.edu
Google Scholar

About Me

I am currently a postdoctoral research fellow advised by Professors Laura Balzano and Qing Qu at the University of Michigan. Before that, I received my Ph.D. in Systems Engineering and Engineering Management at The Chinese University of Hong Kong, advised by Professor Anthony Man-Cho So.

Research Interests

Broadly speaking, my research lies at the intersection of optimization, machine learning, and data science. Specifically, I am interested in exploring low-complexity models of learning problems in applications and developing advanced theory and efficient algorithms to understand and solve these problems. I aim not only to enhance our understanding of data-driven processes but also to enable informed decisions, optimized workflows, and more effective and efficient solutions.

Feel free to email me if you are interested in my research. Remote collaboration is also welcome!

What's New

  • [May 2024] Five papers are accepted at ICML 2024!

  • [March 2024] One paper on the reproducibility of diffusion models in the memorization and generalization regimes is posted!

  • [November 2023] One paper on understanding hierarchical representation learning via low-dimensional modeling is posted!

  • [October 2023] I was selected to receive the Rising Star Award at CPAL 2024!

  • [October 2023] One paper on the generalized neural collapse is posted!

  • [June 2023] One paper on the low-dimensional updates of gradient descent for deep networks is posted!

  • [April 2023] One paper on chance constrained programming is posted!

Preprint Papers

  • Peng Wang*, Xiao Li*, Can Yaras, Zhihui Zhu, Laura Balzano, Wei Hu, Qing Qu. Understanding Deep Representation Learning via Layerwise Feature Compression and Discrimination. Submitted to Journal of Machine Learning Research, 2024. [paper]

  • Can Yaras*, Peng Wang*, Wei Hu, Zhihui Zhu, Laura Balzano, Qing Qu. The Law of Parsimony in Gradient Descent for Learning Deep Linear Networks, 2023. In submission. [paper]

  • Peng Wang, Rujun Jiang, Qingyuan Kong, Laura Balzano. Proximal DC Algorithm for Sample Average Approximation of Chance Constrained Programming: Convergence and Numerical Results. Submitted to SIAM Journal on Optimization, 2023. [paper]

  • Taoli Zheng, Peng Wang, Anthony Man-Cho So. A Linearly Convergent Algorithm for Rotationally Invariant L1-Norm Principal Component Analysis, 2022. [paper]

Journal Papers

  • Peng Wang, Huikang Liu, Anthony Man-Cho So. Linear Convergence of Proximal Alternating Minimization Method with Extrapolation for L1-Norm Principal Component Analysis. SIAM Journal on Optimization (2023) 33(2):684-712. [paper]

  • Peng Wang, Zirui Zhou, Anthony Man-Cho So. Non-Convex Exact Community Recovery in Stochastic Block Model. Mathematical Programming, Series A (2022) 195(1-2):793-829. [paper]

Conference Papers ("*" denotes equal contribution.)

  • Peng Wang, Huikang Liu, Druv Pai, Yaodong Yu, Zhihui Zhu, Qing Qu, Yi Ma. A Global Geometric Analysis of Maximal Coding Rate Reduction. To appear in ICML 2024.

  • Can Yaras, Peng Wang, Laura Balzano, Qing Qu. Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation. To appear in ICML 2024.

  • Huikang Liu*, Peng Wang*, Longxiu Huang, Qing Qu, Laura Balzano. Matrix Completion with ReLU Sampling. To appear in ICML 2024.

  • Jiachen Jiang, Jinxin Zhou, Peng Wang, Qing Qu, Dustin Mixon, Chong You, Zhihui Zhu. Generalized Neural Collapse for a Large Number of Classes. To appear in ICML 2024. [paper]

  • Huijie Zhang, Jinfan Zhou, Yifu Lu, Minzhe Guo, Peng Wang, Liyue Shen, and Qing Qu. The Emergence of Reproducibility and Consistency in Diffusion Models. To appear in ICML 2024. [paper]

  • Can Yaras*, Peng Wang*, Wei Hu, Zhihui Zhu, Laura Balzano, Qing Qu. Invariant Low-Dimensional Subspaces in Gradient Descent for Learning Deep Matrix Factorizations. NeurIPS M3L 2023.

  • Jinxin Wang, Yuen-Man Pun, Xiaolu Wang, Peng Wang, Anthony Man-Cho So. Projected Tensor Power Method for Hypergraph Community Recovery. ICML 2023. [paper]

  • Peng Wang*, Huikang Liu*, Can Yaras*, Laura Balzano, Qing Qu. Linear Convergence Analysis of Neural Collapse with Unconstrained Features. NeurIPS Workshop on Optimization for Machine Learning (OPT), 2022. [paper]

  • Can Yaras*, Peng Wang*, Zhihui Zhu, Laura Balzano, Qing Qu. Neural Collapse with Normalized Features: A Geometric Analysis over the Riemannian Manifold. NeurIPS 2022. [paper]

  • Peng Wang, Huikang Liu, Anthony Man-Cho So, Laura Balzano. Convergence and Recovery Guarantees of the K-Subspaces Method for Subspace Clustering. ICML 2022. [paper]

  • Xiaolu Wang, Peng Wang, Anthony Man-Cho So. Exact Community Recovery over Signed Graphs. AISTATS 2022. [paper]

  • Peng Wang, Huikang Liu, Zirui Zhou, Anthony Man-Cho So. Optimal Non-Convex Exact Recovery in Stochastic Block Model via Projected Power Method. ICML 2021. [paper]

  • Peng Wang*, Zirui Zhou*, Anthony Man-Cho So. A Nearly-Linear Time Algorithm for Exact Community Recovery in Stochastic Block Model. ICML 2020. [paper]

  • Peng Wang, Huikang Liu, Anthony Man-Cho So. Globally Convergent Accelerated Proximal Alternating Maximization Method for L1-Principal Component Analysis. ICASSP 2019 (IEEE SPS Student Travel Award). [paper]

  • Huikang Liu, Peng Wang, Anthony Man-Cho So. Fast First-Order Methods for the Massive Robust Multicast Beamforming Problem with Interference Temperature Constraints. ICASSP 2019. [paper]