
One signal processing view on deep learning - lecture 1

By Edouard Oyallon

Appears in collection: Mathematics, Signal Processing and Learning

Since 2012, deep neural networks have led to outstanding results in a wide variety of applications, often far exceeding previously existing methods on text, images, sound, video, graphs... They consist of a cascade of parametrized linear and non-linear operators whose parameters are optimized to solve a given task. This talk addresses four aspects of deep learning through the lens of signal processing. First, we explain image classification in the context of supervised learning. Then, we present several empirical results that give some insight into the black box of neural networks. Third, we explain how neural networks create invariant representations: in the specific case of translation, it is possible to design predefined neural networks that are stable to translation, namely the Scattering Transform. Finally, we discuss several recent statistical learning results concerning the generalization and approximation properties of this deep machinery.
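
To make the "cascade of parametrized linear and non-linear operators" concrete, here is a minimal sketch (not code from the lecture) of such a cascade in NumPy. The layer sizes and the choice of a ReLU non-linearity are illustrative assumptions; in supervised learning the weight matrices W_k would be optimized on labelled data rather than drawn at random as they are here.

    # Minimal sketch of a neural network as a cascade of linear operators W_k
    # interleaved with a pointwise non-linearity (ReLU). Hypothetical sizes.
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(x, 0.0)

    def forward(x, weights):
        """Apply the cascade x -> relu(W_1 x) -> ... -> W_L x."""
        for W in weights[:-1]:
            x = relu(W @ x)
        return weights[-1] @ x  # final linear layer produces class scores

    # Hypothetical dimensions: a 64-dimensional input mapped to 10 class scores.
    sizes = [64, 128, 128, 10]
    weights = [rng.standard_normal((m, n)) / np.sqrt(n)
               for n, m in zip(sizes[:-1], sizes[1:])]

    x = rng.standard_normal(64)      # one input signal
    scores = forward(x, weights)     # class scores; argmax gives the prediction
    print(scores.argmax())

In practice the weights are fitted by minimizing a classification loss over a training set; the sketch only illustrates the layered linear/non-linear structure the abstract refers to.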

Information about the video

Citation data

  • DOI 10.24350/CIRM.V.19705703
  • Cite this video Oyallon, Edouard (28/01/2021). One signal processing view on deep learning - lecture 1. CIRM. Audiovisual resource. DOI: 10.24350/CIRM.V.19705703
  • URL https://dx.doi.org/10.24350/CIRM.V.19705703
