
Escuela Politécnica Superior

Seminario: Learning to Generate Data by Estimating Gradients of the Data Distribution

Yang Song, final-year Ph.D. student supervised by Stefano Ermon
Home institution
Stanford University
Online (Teams)


Generating data with complex patterns, such as images, audio, and molecular structures, requires fitting very flexible statistical models to the data distribution. Even in the age of deep neural networks, building such models is difficult because they typically require an intractable normalization procedure to represent a probability distribution. To address this challenge, I propose to model the vector field of gradients of the data distribution (known as the score function), which does not require normalization and can therefore take full advantage of the flexibility of deep neural networks. I will show how to (1) estimate the score function from data with flexible deep neural networks and efficient statistical methods, (2) generate new data using stochastic differential equations and Markov chain Monte Carlo, and even (3) evaluate probability values accurately as in a traditional statistical model. The resulting method, called score-based generative modeling, achieves record-breaking performance in applications including image synthesis, text-to-speech generation, time series prediction, and point cloud generation, challenging the long-standing dominance of generative adversarial networks (GANs) on many of these tasks. Furthermore, unlike GANs, score-based generative models are suitable for Bayesian reasoning tasks such as solving ill-posed inverse problems, and I have demonstrated their superior performance on sparse-view computed tomography and accelerated magnetic resonance imaging. Finally, I will discuss my future research plan on improving the controllability and generalization of generative models, as well as their broader impacts on machine learning, science & engineering, and society.
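To make step (2) concrete, here is a minimal toy sketch (not the speaker's implementation) of sampling with Langevin dynamics, the Markov chain Monte Carlo procedure the abstract refers to. For illustration, the score of a 1-D Gaussian N(mu, sigma^2) is written in closed form; in score-based generative modeling this score would instead be estimated from data by a neural network. All names and parameter values below are illustrative assumptions.

```python
import numpy as np

# Target distribution: N(mu, sigma^2), whose score (gradient of the
# log-density) is known analytically: grad_x log p(x) = (mu - x) / sigma^2.
mu, sigma = 2.0, 0.5

def score(x):
    return (mu - x) / sigma**2

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000) * 3.0   # arbitrary initial samples
eps = 1e-3                              # Langevin step size

# Langevin dynamics: drift along the score plus Gaussian noise.
# The chain's stationary distribution approaches the target as eps -> 0.
for _ in range(5_000):
    z = rng.standard_normal(x.shape)
    x = x + 0.5 * eps * score(x) + np.sqrt(eps) * z

print(x.mean(), x.std())  # should be close to mu and sigma
```

Note the key point from the abstract: the update only needs the score, never the normalized density itself, which is why an unnormalized model suffices.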
This talk is based on and/or related to the following papers:
- Score-Based Generative Modeling through Stochastic Differential Equations (ICLR 2021, Best Paper Award)
- Generative Modeling by Estimating Gradients of the Data Distribution (NeurIPS 2019, oral presentation)
- Denoising Diffusion Probabilistic Models (NeurIPS 2020)
- Deep Unsupervised Learning using Nonequilibrium Thermodynamics (ICML 2015)
- Improved Techniques for Training Score-Based Generative Models (NeurIPS 2020)
- Maximum Likelihood Training of Score-Based Diffusion Models (NeurIPS 2021, spotlight)
- Variational Diffusion Models (NeurIPS 2021)

EPS-UAM Postgraduate channel on MS Teams:


Speaker bio

Yang Song is a final-year Ph.D. student at Stanford University. His research interest is in deep generative models and their applications to inverse problem solving and AI safety. His first-author papers have been recognized with an Outstanding Paper Award at ICLR 2021 and an oral presentation at NeurIPS 2019. He is a recipient of the Apple PhD Fellowship in AI/ML and the J.P. Morgan PhD Fellowship. He has done internships at Google Brain, Uber Advanced Technologies Group, and Microsoft Research.


More information

Escuela Politécnica Superior | Universidad Autónoma de Madrid | Francisco Tomás y Valiente, 11 | 28049 Madrid | Tel.: +34 91 497 2222 | e-mail: