

Wasserstein gradient flows and applications to sampling in machine learning - lecture 1
By Anna Korba


Wasserstein gradient flows and applications to sampling in machine learning - lecture 2
By Anna Korba
Appears in collection: 2019 - T1 - WS3 - Imaging and machine learning
Autoencoders and GANs can synthesize remarkably complex images, although we still do not understand the mathematical properties of the generated random processes. We introduce a mathematical and algorithmic framework to analyze the principles of such image syntheses. In Wasserstein autoencoders, the encoder is trained to transform the input random vector into lower-dimensional, nearly white noise. Images are synthesized from white noise with an inverse deep convolutional generator. We show that the encoder can be computed with a multiscale scattering transform, which mixes input variables at multiple scales. We prove that generating an image model then amounts to solving a sequence of linear deconvolutions at different scales. A deep convolutional generator regularizes this deconvolution by sparsity in dictionaries learned at each scale. Numerical image syntheses will be shown. Joint work with Tomas Anglès.
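
The sketch below is a minimal illustration (not the speakers' code) of the pipeline the abstract describes: an encoder mapping images to a low-dimensional code regularized toward white noise, and a deep convolutional generator that synthesizes images back from white noise. The layer sizes, the simple convolutional encoder standing in for a scattering transform, and the moment-matching whiteness penalty are all assumptions made for illustration.

```python
# Illustrative Wasserstein-autoencoder-style sketch (assumptions noted above).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 28x28 image to a low-dimensional code (stand-in for a
    multiscale scattering transform)."""
    def __init__(self, code_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 28 -> 14
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 14 -> 7
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, code_dim),
        )

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Inverse deep convolutional generator: white noise -> image."""
    def __init__(self, code_dim=64):
        super().__init__()
        self.fc = nn.Linear(code_dim, 64 * 7 * 7)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 7 -> 14
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),              # 14 -> 28
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 7, 7)
        return self.net(h)

def whiteness_penalty(z):
    # Push the code distribution toward white noise: zero mean and
    # identity covariance (a simple moment-matching surrogate).
    mu = z.mean(dim=0)
    cov = (z - mu).T @ (z - mu) / (z.shape[0] - 1)
    eye = torch.eye(z.shape[1])
    return mu.pow(2).sum() + (cov - eye).pow(2).sum()

# One training step on a dummy batch: reconstruct the input while
# whitening the code.
enc, gen = Encoder(), Generator()
opt = torch.optim.Adam(list(enc.parameters()) + list(gen.parameters()), lr=1e-3)
x = torch.rand(16, 1, 28, 28)
z = enc(x)
loss = (gen(z) - x).pow(2).mean() + 0.1 * whiteness_penalty(z)
opt.zero_grad(); loss.backward(); opt.step()

# Synthesis: sample white noise and decode it into images.
samples = gen(torch.randn(8, 64))
```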