Liang (Divin) Yan

I am working on generative models, exploring both their theoretical foundations and practical applications. I firmly believe that the boundaries of generative models extend far beyond their current applications. My long-term goal is to develop the next generation of generative models that are more controllable, scalable, efficient, cost-effective, and, most importantly, grounded in first principles. Recently, I have been working on AI for Science and Science for AI, with an emphasis on the development of generative models. I want to build a more principled understanding of generative models from the perspective of science, especially physics and mathematics.

I received my research master's degree in Applied Mathematics from Fudan University, under the supervision of Prof. Zengfeng Huang. At Fudan, my work focused on the theory and real-world applications of graph learning and generative models.

Email: yanliangfdu[at]gmail.com

Google Scholar / Github / ORCID / Twitter / LinkedIn

Open Questions

In my view, the essence of a truly outstanding AI system is not just learning or imitation, but creativity and the capacity for divergent thinking. This perspective has led me to some simple yet unavoidable questions:

  • Are current AI systems truly capable of creating or expanding knowledge?
  • Is the distribution learned by generative models genuinely reflective of the true distribution of natural data? How can we determine this, given that the true distribution itself has no closed-form expression?
  • Do humans learn from the true distribution of natural data? If so, why must human-created knowledge necessarily be confined to such a distribution?
  • Our current understanding of AI is still grounded in statistics. Is statistics alone truly sufficient to understand intelligence?

These questions linger in my mind, and I hope that through future exploration, I can gradually gain deeper insights into them. Even so, I remain convinced that generative models are still the most promising path toward enabling true creativity in AI systems.

Research Topics

My research interests lie in the theoretical understanding of generative models and the development of effective algorithms to address key challenges in areas such as vision, language, and science. I am also interested in fundamental issues in deep learning, including robustness, generalization, and fairness. I love finding a sweet balance between mathematical theory and practical tricks.

Publications (* indicates equal contribution)

Manifold-Constrained Nucleus-Level Denoising Diffusion Model for Structure-Based Drug Design
Shengchao Liu*, Liang Yan*, Weitao Du, Weiyang Liu, Zhuoxinran Li, Hongyu Guo, Christian Borgs, Jennifer Chayes, Anima Anandkumar
[Project Page] [Arxiv] [OpenReview] [Code] [Slides]
Proceedings of the National Academy of Sciences 2025 (PNAS 2025)

An Adaptive Autoregressive Diffusion Approach to Design Active Humanized Antibody and Nanobody
Jian Ma, Fandi Wu, Tingyang Xu, Shaoyong Xu, Wei Liu, Liang Yan, Qifeng Bai, Jianhua Yao
[Project Page] [Arxiv] [OpenReview] [Code] [Slides]
Nature Machine Intelligence 2025

Hierarchical Graph Latent Diffusion Model for Conditional Molecule Generation
Tian Bian, Yifan Niu, Heng Chang, Divin Yan, Tingyang Xu, Yu Rong, Jia Li, Hong Cheng
[Project Page] [Paper] [OpenReview] [Code] [Slides]
CIKM 2024

Rethinking Semi-Supervised Imbalanced Node Classification from Bias-Variance Decomposition
Liang Yan, Gengchen Wei, Chen Yang, Shengzhong Zhang, Zengfeng Huang
[Project Page] [Arxiv] [OpenReview] [Code] [Slides] [Poster]
NeurIPS 2023

Preprints & Workshops (* indicates equal contribution)

Training Class-Imbalanced Diffusion Model Via Overlap Optimization
Divin Yan, Lu Qi, Vincent Tao Hu, Ming-Hsuan Yang, Meng Tang
[Project Page] [Arxiv] [OpenReview] [Code] [Slides]

UNREAL: Unlabeled Nodes Retrieval and Labeling for Heavily-Imbalanced Node Classification
Liang Yan, Shengzhong Zhang, Menglin Yang, Bisheng Li, Chen Yang, Min Zhou, Zengfeng Huang
[Project Page] [Arxiv] [OpenReview] [Code] [Slides]
ICML 2025 DataWorld Workshop

Physics Informed Spectral Element Network with Positional Parameter
Gengchen Wei, Liang Yan, Lei Bai, Wanli Ouyang, Chen Lin
[Project Page] [Arxiv] [OpenReview] [Code] [Slides]

A Multi-Grained Symmetric Differential Equation Model for Learning Protein-Ligand Binding Dynamics
Shengchao Liu, Weitao Du, Hannan Xu, Yanjing Li, Zhuoxinran Li, Vignesh Bhethanabotla, Liang Yan, Christian Borgs, Anima Anandkumar, Hongyu Guo, Jennifer Chayes
[Project Page] [Arxiv] [OpenReview] [Code] [Slides]
Under review at Nature Communications, 2025

Service

Reviewer: KDD (2024), ICLR (2025), ICML LXAI Workshop (2025), ICML AI4Math Workshop (2025), ICML AIW Workshop (2025), ICML DataWorld Workshop (2025), NeurIPS (2025), ACM MM (2025), ACM MM Datasets Track (2025).

Personal

I originally had no connection to the field of artificial intelligence. If everything had gone as expected, I might have become a bank manager or an accountant. However, during my undergraduate years, I happened to stumble upon a book on artificial intelligence while wandering through the library. That book had a profound impact on me, and from that moment, I made up my mind to devote myself to this exciting field. This is my origin, the path I started on, and I hope I will never forget the inspiration and determination I felt at the very beginning.

I am a fan of the late NBA star Kobe Bryant. He has been a great source of inspiration for me. He once said: "If you love a thing, you will overcome all difficulties." So, the most important thing is to find something you truly love. I hope I have already found mine. RIP, Kobe.



Updated in May 2025