Overcoming the curse of dimensionality with deep neural networks

By Sophie Langer

Appears in collection : 2022 - T3 - WS1 - Non-Linear and High Dimensional Inference

Although the application of deep neural networks to real-world problems has become ubiquitous, the question of why they are so effective has not yet been satisfactorily answered. However, some progress has been made in establishing an underlying mathematical foundation. This talk surveys results on statistical risk bounds for deep neural networks. In particular, we focus on the question of when neural networks can circumvent the curse of dimensionality, discussing results for both vanilla feedforward and convolutional neural networks, in regression as well as classification settings.
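As a rough illustration of what "curse of dimensionality" means here (a standard fact from nonparametric regression, not a specific claim of this talk): for estimating a $(p,C)$-smooth regression function on $[0,1]^d$, no estimator can beat the minimax rate

$$
n^{-\frac{2p}{2p+d}},
$$

which deteriorates rapidly as the input dimension $d$ grows. Results of the kind surveyed in the talk show that, under structural assumptions such as a hierarchical composition of low-dimensional functions, suitably constructed deep networks can achieve rates of the form

$$
n^{-\frac{2p^{*}}{2p^{*}+d^{*}}},
$$

where $p^{*}$ and $d^{*}$ are the smoothness and (typically small) intrinsic input dimension of the component functions rather than the ambient dimension $d$ — so the rate no longer degrades with $d$ itself.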

Information about the video

Citation data

  • DOI 10.57987/IHP.2022.T3.WS1.002
  • Cite this video Langer, Sophie (03/10/2022). Overcoming the curse of dimensionality with deep neural networks. IHP. Audiovisual resource. DOI: 10.57987/IHP.2022.T3.WS1.002
  • URL https://dx.doi.org/10.57987/IHP.2022.T3.WS1.002
