Schlumberger workshop - Computational and statistical trade-offs in learning


Date(s): 16/05/2024

Highly-Smooth Zero-th Order Online Optimization

By Vianney Perchet

We consider online convex optimization with noisy zeroth-order information, that is, noisy function evaluations at any desired point. We focus on problems with high degrees of smoothness, such as online logistic regression. We show that, as opposed to gradient-based algorithms, high-order smoothness may be used to improve estimation rates, with a precise dependence on the degree of smoothness and the dimension. In particular, we show that for infinitely differentiable functions we recover the same dependence on sample size as gradient-based algorithms, up to an extra dimension-dependent factor. This holds for convex and strongly convex functions, in constrained or global optimization, and with either one-point or two-point noisy evaluations of the function. Joint work with F. Bach.
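To make the two-point evaluation setting concrete, here is a minimal Python sketch (not the algorithm from the talk): at each round the gradient is estimated from two noisy function evaluations along a random direction and plugged into a projected gradient step. The quadratic loss, noise level, step-size and smoothing schedules, and all names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
d = 5                          # dimension
x_star = rng.normal(size=d)    # unknown minimizer of the hypothetical loss

def noisy_eval(x):
    # Noisy zeroth-order oracle: f(x) + noise, with f a simple quadratic.
    return 0.5 * np.sum((x - x_star) ** 2) + 0.1 * rng.normal()

def project_ball(x, radius=10.0):
    # Projection onto the constraint set (a Euclidean ball).
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

x = np.zeros(d)
T = 5000
for t in range(1, T + 1):
    delta = t ** -0.25         # smoothing radius (illustrative schedule)
    eta = 1.0 / t              # step size suited to strong convexity
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)     # uniform random direction on the unit sphere
    # Two-point gradient estimate along the direction u.
    g = d * (noisy_eval(x + delta * u) - noisy_eval(x - delta * u)) / (2 * delta) * u
    x = project_ball(x - eta * g)

print("distance to minimizer:", np.linalg.norm(x - x_star))

With a one-point oracle, the estimator would instead use a single evaluation per round, at the cost of a larger variance.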

Video information

  • Recording date: 22/03/2016
  • Publication date: 27/03/2016
  • Institute: IHES
  • Format: MP4
