Phase transitions on one-dimensional symbolic systems
By Tamara Kucherenko
Appears in collection : Séminaire Parisien de Statistique
In recent years it has become clear that Bayesian inference can perform rather poorly under misspecification. A possible remedy is to use a generalized Bayesian method instead, i.e. to raise the likelihood in the Bayes equation to some power, called a learning rate. In this talk I present results on the theoretical and empirical performance of the generalized Bayesian method. I discuss the conditions under which the posterior with a suitably chosen learning rate concentrates around the best approximation of the truth within the model, even when the model is misspecified. In particular, these conditions can be shown to hold for generalized linear models (GLMs). Suitable inference algorithms (Gibbs samplers) for computing generalized posteriors in the context of GLMs are devised, and experiments show that the method significantly outperforms other Bayesian estimation procedures on both simulated and real data.
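The generalized Bayesian update described above replaces the likelihood p(x|θ) in Bayes' rule by p(x|θ)^η for a learning rate η, so that η = 1 recovers the ordinary posterior and η < 1 downweights the data relative to the prior. A minimal sketch of this tempering on a parameter grid, using a hypothetical Bernoulli model with a flat prior purely for illustration (the talk's actual setting is GLMs with Gibbs sampling):

```python
import numpy as np

def tempered_posterior(log_lik, log_prior, eta):
    """Generalized Bayes on a grid: raise the likelihood to the
    power eta (the learning rate) before normalizing.
    eta = 1.0 recovers the standard posterior."""
    log_post = eta * log_lik + log_prior
    log_post -= log_post.max()          # subtract max for numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# Illustration: Bernoulli observations, flat prior on a grid of theta values.
rng = np.random.default_rng(0)
x = rng.binomial(1, 0.7, size=50)
theta = np.linspace(0.01, 0.99, 99)
log_lik = x.sum() * np.log(theta) + (len(x) - x.sum()) * np.log(1.0 - theta)
log_prior = np.zeros_like(theta)        # uniform prior on the grid

post_full = tempered_posterior(log_lik, log_prior, eta=1.0)  # standard Bayes
post_half = tempered_posterior(log_lik, log_prior, eta=0.5)  # tempered, flatter
```

With η = 0.5 the tempered posterior is proportional to the square root of the likelihood times the prior, hence more spread out than the standard posterior; choosing η well is exactly the tuning problem the conditions in the talk address.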