By Francis Bach
Appears in the collection: Optimization for Machine Learning / Optimisation pour l'apprentissage automatique
Neural networks trained to minimize the logistic (a.k.a. cross-entropy) loss with gradient-based methods are observed to perform well in many supervised classification tasks. Towards understanding this phenomenon, we analyze the training and generalization behavior of infinitely wide two-layer neural networks with homogeneous activations. We show that the limits of the gradient flow on exponentially tailed losses can be fully characterized as a max-margin classifier in a certain non-Hilbertian space of functions.
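The training setting described above can be illustrated with a minimal NumPy sketch: gradient descent on the logistic loss for a two-layer network with a ReLU (positively homogeneous) activation. The data, widths, and step size below are toy choices for illustration only; this sketch shows the finite-width training dynamics, not the infinite-width analysis or the max-margin characterization of the limit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable binary data with labels y in {-1, +1}.
n, d = 200, 2
X = rng.normal(size=(n, d))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1.0, -1.0)

# Two-layer network f(x) = (1/m) * sum_j b_j * relu(w_j . x).
# ReLU is positively homogeneous, matching the setting in the abstract.
m = 500
W = rng.normal(size=(m, d))
b = rng.normal(size=m)

def forward(X, W, b):
    return np.maximum(X @ W.T, 0.0) @ b / m

def logistic_loss(X, y, W, b):
    # Average of log(1 + exp(-y * f(x))), the exponentially tailed loss.
    return np.mean(np.log1p(np.exp(-y * forward(X, W, b))))

lr = 2.0
loss_before = logistic_loss(X, y, W, b)
for _ in range(300):
    pre = X @ W.T                                   # (n, m) pre-activations
    h = np.maximum(pre, 0.0)                        # hidden layer outputs
    g = -y / (1.0 + np.exp(y * (h @ b / m))) / n    # d(loss)/d(prediction)
    grad_b = h.T @ g / m                            # gradient for output weights
    grad_W = ((g[:, None] * (pre > 0)) * b).T @ X / m  # gradient for input weights
    b -= lr * grad_b
    W -= lr * grad_W
loss_after = logistic_loss(X, y, W, b)
```

On this toy problem the loss decreases monotonically; the abstract's result concerns the direction the parameters converge to as training continues indefinitely in the infinite-width limit.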