Bayesian generalized modeling when the model is wrong

By Alisa Kirichenko

Appears in the collection: Séminaire Parisien de Statistique

In recent years it has become clear that Bayesian inference can perform rather poorly under misspecification. A possible remedy is to use a generalized Bayesian method instead, i.e. to raise the likelihood in Bayes' equation to some power, called a learning rate. In this talk I present results on the theoretical and empirical performance of the generalized Bayesian method. I discuss the conditions under which the posterior with a suitably chosen learning rate concentrates around the best approximation of the truth within the model, even when the model is misspecified. In particular, these conditions can be shown to hold for generalized linear models (GLMs). Suitable inference algorithms (Gibbs samplers) for computing generalized posteriors in the context of GLMs are devised, and experiments show that the method significantly outperforms other Bayesian estimation procedures on both simulated and real data.
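The core idea above — raising the likelihood to a learning rate η in the Bayes update — can be sketched numerically. The following is a minimal illustration, not the talk's actual method: it uses a hypothetical 1-D logistic-regression slope on a parameter grid, with the prior, the simulated data, and the learning-rate values all chosen as assumptions for demonstration.

```python
import numpy as np

# Sketch of a generalized (tempered) Bayesian posterior, assuming a simple
# 1-D logistic-regression model evaluated on a parameter grid. The prior,
# the simulated data, and the learning-rate values are illustrative
# assumptions, not the setup from the talk.

rng = np.random.default_rng(0)

# Simulate n observations from a logistic model with true slope 1.5
n = 200
x = rng.normal(size=n)
true_theta = 1.5
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_theta * x)))

def log_likelihood(theta):
    """Bernoulli log-likelihood of (x, y) under slope theta."""
    z = theta * x
    # log p(y | x, theta) = y*z - log(1 + e^z); logaddexp keeps it stable
    return np.sum(y * z - np.logaddexp(0.0, z))

# Candidate slopes and a standard-normal prior (both assumptions)
grid = np.linspace(-5.0, 5.0, 2001)
dx = grid[1] - grid[0]
log_prior = -0.5 * grid ** 2

def generalized_posterior(eta):
    """Density on the grid with the likelihood raised to the power eta."""
    log_post = log_prior + eta * np.array([log_likelihood(t) for t in grid])
    log_post -= log_post.max()          # avoid overflow when exponentiating
    w = np.exp(log_post)
    return w / (w.sum() * dx)           # normalize to a proper density

# eta = 1 is standard Bayes; eta < 1 tempers the likelihood, widening the
# posterior and making it more robust when the model is misspecified.
for eta in (1.0, 0.5):
    dens = generalized_posterior(eta)
    mean = np.sum(grid * dens) * dx
    print(f"eta={eta}: posterior mean {mean:.2f}")
```

With a well-specified model, a smaller η simply yields a wider posterior around the same mode; the talk's point is that under misspecification a suitably chosen η < 1 also restores concentration around the best approximation within the model, which this grid toy does not demonstrate on its own.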

Video information

  • Publication date: 18/04/2024
  • Institute: IHP
  • Language: English
  • Format: MP4
