Bayesian generalized modeling when the model is wrong

By Alisa Kirichenko

Appears in collection: Séminaire Parisien de Statistique

Over recent years it has become clear that Bayesian inference can perform rather poorly under misspecification. A possible remedy is to use a generalized Bayesian method instead, i.e. to raise the likelihood in the Bayes equation to some power, called a learning rate. In this talk I present results on the theoretical and empirical performance of the generalized Bayesian method. I discuss conditions under which the posterior with a suitably chosen learning rate concentrates around the best approximation of the truth within the model, even when the model is misspecified. In particular, these conditions can be shown to hold for generalized linear models (GLMs). Suitable inference algorithms (Gibbs samplers) for computing generalized posteriors in the context of GLMs are devised, and experiments show that the method significantly outperforms other Bayesian estimation procedures on both simulated and real data.
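For concreteness, the generalized posterior referred to above (sometimes called a tempered or fractional posterior) replaces the likelihood in Bayes' rule by its η-th power,

    \pi_{n,\eta}(\theta \mid x_1, \dots, x_n) \propto \pi(\theta) \prod_{i=1}^{n} p_\theta(x_i)^{\eta},

where η > 0 is the learning rate; η = 1 recovers standard Bayesian inference, and smaller η downweights the data relative to the prior. As a minimal illustration of the effect of η (a toy sketch, not the GLM Gibbs samplers from the talk; the function and parameter names are illustrative assumptions), the tempered posterior is available in closed form for a normal location model with known variance and a conjugate normal prior:

    import numpy as np

    def generalized_posterior_normal(x, sigma2, m0, s0_2, eta):
        """Eta-generalized (tempered) posterior for the mean of a N(mu, sigma2)
        model with known variance sigma2 and a N(m0, s0_2) prior.

        Raising the likelihood to the power eta simply scales the data
        precision by eta, so the usual conjugate update goes through.
        """
        n = len(x)
        prec = 1.0 / s0_2 + eta * n / sigma2               # posterior precision
        s_n2 = 1.0 / prec                                  # posterior variance
        m_n = s_n2 * (m0 / s0_2 + eta * x.sum() / sigma2)  # posterior mean
        return m_n, s_n2

    # Misspecified toy setting (illustrative): data are drawn with scale 2.0,
    # while the model assumes sigma2 = 1.0.
    rng = np.random.default_rng(0)
    x = rng.normal(loc=1.0, scale=2.0, size=50)
    for eta in (1.0, 0.5, 0.1):  # eta = 1 is standard Bayes
        m, v = generalized_posterior_normal(x, sigma2=1.0, m0=0.0, s0_2=10.0, eta=eta)
        print(f"eta={eta}: posterior mean {m:.3f}, variance {v:.4f}")

Shrinking η widens the posterior, counteracting the overconfidence that a misspecified model would otherwise produce; choosing η well is exactly the problem that the concentration results above address.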

Information about the video

  • Date of publication: 18/04/2024
  • Institution: IHP
  • Language: English
  • Format: MP4
