|
Liang (Divin) Yan
I am working on physics-informed generative models and physics-informed representation learning, exploring both their theoretical foundations and practical applications, especially in the field of AI for Science and Science for AI.
I received my graduate degree in Applied Mathematics from Fudan University, where I was supervised by Prof. Zengfeng Huang and worked on the theory and real-world applications of graph learning and generative models. I worked in the Anima AI+Science Lab at the California Institute of Technology, advised by Prof. Anima Anandkumar, and was a visiting student in the Vision and Learning Lab at UC Merced, under the guidance of Prof. Ming-Hsuan Yang and Prof. Lu Qi. I was also a research intern at Tencent AI Lab and Shanghai AI Lab.
Feel free to reach out to me if you're interested in discussing research or potential collaborations!
Email: yanliangfdu[at]gmail.com.
Google Scholar /
Github /
ORCID /
Twitter /
LinkedIn
|
|
News
- 2025.10: 👏 👏 NeuralMD was accepted by Nature Communications 2025! Congrats Shengchao and Weitao!
- 2025.10: 👏 👏 NucleusDiff was covered by Caltech News! Check it out: https://www.caltech.edu/about/news/new-ai-model-for-drug-design-brings-more-physics-to-bear-in-predictions
- 2025.09: 👏 👏 MGB was accepted by the NeurIPS 2025 AI4Mat Workshop! We present the first comprehensive benchmark for material generation, covering LLMs, diffusion- and flow-based models, and VAE-based models!
- 2025.09: 👏 👏 UNREAL was accepted by NeurIPS 2025! We are the first to introduce the concept of geometric imbalance for GNNs on Riemannian manifolds!
- 2025.09: 👏 👏 NucleusDiff was accepted by PNAS 2025!
- 2025.08: 👏 👏 HuDiff was accepted by Nature Machine Intelligence 2025! Congrats Jian and Fandi!
|
|
Open Questions
In my view, the essence of a truly outstanding AI system is not just learning or imitation, but creativity and the capacity for divergent thinking. This perspective has led me to some simple yet unavoidable questions:
- Are current AI systems truly capable of creating or expanding knowledge?
- Is the distribution learned by generative models genuinely reflective of the true distribution of natural data? How can we determine this, given that the true distribution itself has no closed-form expression?
- Do humans learn from the true distribution of natural data? If so, why must human-created knowledge necessarily be confined to such a distribution?
- In my understanding, our current comprehension of AI is still grounded in statistics. Is statistics truly sufficient for understanding intelligence?
These questions linger in my mind, and I hope that through future exploration, I can gradually gain deeper insights into them. Even so, I remain convinced that generative models are still the most promising path toward enabling true creativity in AI systems.
|
|
Research Topics
I am working on generative models, exploring both their theoretical foundations and practical applications, and I firmly believe that the boundaries of generative models extend far beyond their current uses. My long-term goal is to develop the next generation of generative models that are more controllable, scalable, efficient, cost-effective, and, most importantly, grounded in first principles. Recently, I have been working on AI for Science and Science for AI, with an emphasis on the development of generative models. I want a more principled understanding of generative models from the perspective of science, especially physics and mathematics. My research interests lie in the theoretical understanding of generative models and the development of effective algorithms to address key challenges in science. I love finding a sweet balance between mathematical theory and practical tricks.
|
|
Publications (* indicates equal contribution)
MGB: The Material Generation Benchmark
Liang Yan, Beom Seok Kang, Maurice D. Hanisch, Jian Ma, Anima Anandkumar
[Paper] [Code] [Poster]
Presented at NeurIPS 2025 AI4Mat Workshop
In Submission
Geometric Imbalance in Semi-Supervised Node Classification
Liang Yan, Shengzhong Zhang, Bisheng Li, Menglin Yang, Chen Yang, Min Zhou, Weiyang Ding, Yutong Xie, Zengfeng Huang
[Project Page] [Paper] [Arxiv] [OpenReview] [Code] [Slides]
Presented at ICML 2025 DataWorld Workshop
NeurIPS 2025
Manifold-Constrained Nucleus-Level Denoising Diffusion Model for Structure-Based Drug Design
Shengchao Liu*, Liang Yan*, Weitao Du, Weiyang Liu, Zhuoxinran Li, Hongyu Guo, Christian Borgs, Jennifer Chayes, Anima Anandkumar
[Project Page] [Paper] [Arxiv] [OpenReview] [Code] [Slides]
Presented at ICML 2024 GRaM Workshop
Proceedings of the National Academy of Sciences 2025 (PNAS 2025)
An Adaptive Autoregressive Diffusion Approach to Design Active Humanized Antibody and Nanobody
Jian Ma, Fandi Wu, Tingyang Xu, Shaoyong Xu, Wei Liu, Liang Yan, Minghao Qu, Xiaoke Yang, Qifeng Bai, Junyu Xiao, Jianhua Yao
[Project Page] [Paper] [Arxiv] [OpenReview] [Code] [Slides]
Nature Machine Intelligence 2025
A Multi-Grained Symmetric Differential Equation Model for Learning Protein-Ligand Binding Dynamics
Shengchao Liu, Weitao Du, Hannan Xu, Yanjing Li, Zhuoxinran Li, Vignesh Bhethanabotla, Liang Yan, Christian Borgs, Anima Anandkumar, Hongyu Guo, Jennifer Chayes
[Project Page] [Arxiv] [OpenReview] [Code] [Slides]
ICLR AI4DifferentialEquations Workshop 2024 Oral
Nature Communications 2025
NeuralCrystal: A Geometric Foundation Model for Crystalline Material Discovery
Shengchao Liu*, Liang (Divin) Yan*, Weitao Du, Zhuoxinran Li, Zhiling Zheng, Omar Yaghi, Christian Borgs, Hongyu Guo, Anima Anandkumar, Jennifer Chayes
[Project Page] [Paper] [Code]
NeurIPS 2024 AI4Mat Workshop
CrystalFlow: An Equivariant Flow Matching Framework for Learning Molecular Crystallization
Shengchao Liu, Liang (Divin) Yan, Hongyu Guo, Anima Anandkumar
[Project Page] [Paper] [Code]
ICML 2024 GRaM Workshop, ICML 2024 ML4LMS Workshop
Hierarchical Graph Latent Diffusion Model for Conditional Molecule Generation
Tian Bian, Yifan Niu, Heng Chang, Liang (Divin) Yan, Tingyang Xu, Yu Rong, Jia Li, Hong Cheng
[Project Page] [Paper] [OpenReview] [Code] [Slides]
CIKM 2024
Rethinking Semi-Supervised Imbalanced Node Classification from Bias-Variance Decomposition
Liang Yan, Gengchen Wei, Chen Yang, Shengzhong Zhang, Zengfeng Huang
[Project Page] [Paper] [Arxiv] [Code] [Slides] [Poster]
NeurIPS 2023
Training Class-Imbalanced Diffusion Model Via Overlap Optimization
Liang (Divin) Yan, Lu Qi, Vincent Tao Hu, Ming-Hsuan Yang, Meng Tang
[Project Page] [Arxiv] [OpenReview] [Code] [Slides]
Arxiv 2024
Physics Informed Spectral Element Network with Positional Parameter
Gengchen Wei, Liang Yan, Lei Bai, Wanli Ouyang, Chen Lin
[Project Page] [Arxiv] [OpenReview] [Code] [Slides]
Preprint 2024
|
|
Service
Reviewer: KDD (2023, 2024), ICLR (2024, 2025, 2026), ICML (2024, 2025), ICML LXAI Workshop (2025), ICML AI4Math Workshop (2025), ICML AIW Workshop (2025), ICML DataWorld Workshop (2025), NeurIPS (2023, 2024, 2025), NeurIPS LXAI Workshop (2025), NeurIPS VLM4RWD Workshop (2025), ACM MM (2025), ACM MM Datasets Track (2025).
|
|
Personal
I originally had no connection to the field of artificial intelligence; if everything had gone as expected, I might have become a bank manager or an accountant. However, during my undergraduate years, I stumbled upon a book on artificial intelligence while wandering through the library. That book had a profound impact on me, and from that moment I made up my mind to devote myself to this exciting field. This is my origin, the path I started on, and I hope I will never forget the inspiration and determination I felt at the very beginning.
I am a fan of the late NBA star Kobe Bryant, who has been a great source of inspiration for me. He once said: "If you love a thing, you will overcome all difficulties." So the most important thing is to find something you truly love. I hope I have already found mine. RIP, Kobe.
|
|