
Appears in collection: Imaging and machine learning

Autoencoders and GANs can synthesize remarkably complex images, although we still do not understand the mathematical properties of the generated random processes. We introduce a mathematical and algorithmic framework to analyze the principles of such image syntheses. In Wasserstein autoencoders, the encoder is trained to transform the input random vector into a lower-dimensional, nearly white noise. Images are synthesized from white noise with an inverse deep convolutional generator. We show that the encoder can be computed with a multiscale scattering transform, which mixes input variables at multiple scales. We prove that generating an image model then amounts to solving a sequence of linear deconvolutions at different scales. A deep convolutional generator regularizes these deconvolutions by sparsity in dictionaries learned at each scale. Numerical image syntheses will be shown. Joint work with Tomas Anglès.
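To make the deconvolution step concrete, here is a minimal single-scale sketch, assuming a dense dictionary `D` and a linear operator `H` standing in for the convolutional degradation. It solves the sparsity-regularized deconvolution min_z 0.5 ||y - H D z||^2 + lam ||z||_1 with ISTA (iterative soft-thresholding). All names, operators, and parameters here are illustrative assumptions, not the talk's actual multiscale algorithm.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_deconvolution(y, H, D, lam=0.1, n_iter=200):
    """Recover x = D @ z from y = H @ x + noise, with z sparse.

    Solves  min_z 0.5 * ||y - H @ D @ z||^2 + lam * ||z||_1
    by ISTA. H is a (m, n) linear operator written as a matrix,
    D is a (n, p) dictionary whose columns are (assumed) learned atoms.
    """
    A = H @ D
    # Step size from the Lipschitz constant of the gradient of the data term.
    L = np.linalg.norm(A, 2) ** 2
    z = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)
        z = soft_threshold(z - grad / L, lam / L)
    return D @ z, z

# Toy example: random dictionary, identity "blur" placeholder.
rng = np.random.default_rng(0)
n, p, m = 64, 128, 64
D = rng.standard_normal((n, p)) / np.sqrt(n)  # hypothetical dictionary
H = np.eye(m, n)
z_true = np.zeros(p)
z_true[rng.choice(p, 5, replace=False)] = 1.0
y = H @ D @ z_true + 0.01 * rng.standard_normal(m)
x_hat, z_hat = sparse_deconvolution(y, H, D, lam=0.05)
```

In the multiscale framework of the abstract, one such problem would be solved per scale, each with a dictionary learned at that scale; this toy version works with dense matrices at a single scale only.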

Information about the video

  • Date of recording: 04/04/2019
  • Date of publication: 10/05/2019
  • Institution: IHP
  • Language: English
  • Format: MP4
  • Venue: Institut Henri Poincaré
