Peng Wang
About Me
Currently, I am an assistant professor in the Department of Computer and Information Science at the University of Macau. Before this, I was a postdoctoral research fellow advised by Professors Laura Balzano and Qing Qu at the University of Michigan. I received my Ph.D. degree under the supervision of Professor Anthony Man-Cho So at The Chinese University of Hong Kong and my bachelor's degree from Beijing University of Posts and Telecommunications.
Open Positions (Fall 2026)
I am actively seeking PhD students with a strong background in mathematics or programming. Research assistant and postdoctoral positions are also available. If you are interested, please feel free to contact me directly! Please visit here for more information!
Research Interests
Broadly speaking, my research interests lie at the intersection of optimization, machine learning, and artificial intelligence. Currently, I am devoted to understanding the mathematical foundations of deep learning and generative AI models, including supervised learning models, diffusion models, and large language models. I mainly study how low-complexity structures (e.g., low-rankness, sparsity, over-parameterization) in practical problems lead to favorable optimization properties, and I use these structures to mitigate the challenges posed by worst-case scenarios, enable efficient optimization, and improve our understanding of learning phenomena.
Feel free to email me if you are interested in my research. Remote collaboration is also welcome!
Teaching Courses
What's New
[September 2025] One paper on understanding feature learning of diffusion models is accepted by NeurIPS 2025!
[August 2025] Our paper on understanding hierarchical representation learning of deep neural networks is accepted by JMLR!
[August 2025] Our new book Learning Deep Representations of Data Distributions is posted online! Thanks to my wonderful collaborators Dr. Sam Buchanan, Druv Pai, and Prof. Yi Ma!
[June 2025] One paper on the global loss landscape analysis of deep matrix factorization is posted!
[May 2025] One paper on understanding the mechanism of transformers has been accepted by ICML 2025!
[Apr 2025] Our recent work on the generalization of diffusion models appears on the SIAM News Blog!
[Mar 2025] A tutorial paper on understanding the role of low-rank structures in the training and adaptation of deep learning models is posted!
[Mar 2025] I will attend the Conference on Parsimony and Learning at Stanford University from March 24-27!
[Feb 2025] One paper on the local error bound of deep linear networks is posted!
[Jan 2025] One paper on the representation capabilities of diffusion models is posted!
[Jan 2025] One paper is accepted by INFORMS Journal on Computing!
[Jan 2025] One paper on the separation capabilities of shallow nonlinear networks is posted!
[Jan 2025] I will give a presentation in the 1W-MINDS Seminar online at 5PM (Beijing Time) on Jan 9, 2025!
[Jan 2025] I will give a presentation at the IMS Young Mathematical Scientists Forum at the National University of Singapore!
[Dec 2024] One paper on understanding the modality gap in multimodal learning is posted!
[Dec 2024] I will serve as an area chair of CPAL 2025!
[Oct 2024] I will chair a session and give a lecture presentation on diffusion models at Asilomar 2024 in Pacific Grove, CA!
[Oct 2024] Two papers on diffusion models are accepted by the NeurIPS Workshop on Mathematics of Modern Machine Learning!
[Sep 2024] I will give a talk on diffusion models at the ECE Communications and Signal Processing Seminar at the University of Michigan!
[May 2024] Five papers [paper1, paper2, paper3, paper4, paper5] are accepted by ICML 2024!