

By Chao Gao
Appears in collection: Meeting in Mathematical Statistics: Statistical thinking in the age of AI: robustness, fairness and privacy / Rencontre de Statistique Mathématique
The advent of large-scale inference has spurred a reexamination of conventional statistical thinking. In a series of highly original articles, Efron showed in several examples that the ensemble of null-distributed test statistics deviated grossly from the theoretical null distribution, and he persuasively illustrated the danger of assuming the theoretical null's veracity for downstream inference. Though intimidating in other contexts, the large-scale setting works to the statistician's benefit here: there is now the potential to estimate, rather than assume, the null distribution. In a model of n z-scores with at most k nonnulls, we adopt Efron's suggestion and consider estimation of the location and scale parameters of a Gaussian null distribution. Placing no assumptions on the nonnull effects, we consider rate-optimal estimation in the entire regime k < n/2, which is precisely the regime in which the null parameters are identifiable. The minimax upper bound is obtained by considering estimators based on the empirical characteristic function and the classical kernel mode estimator. Faster rates than those in Huber's contamination model are achievable by exploiting the Gaussian character of the data. As a consequence, it is shown that consistent estimation is indeed possible in the practically relevant regime k ≍ n. In a certain regime, the minimax lower bound involves constructing two marginal distributions whose characteristic functions match on a wide interval containing zero. The construction notably differs from those in the literature in that it sharply captures a second-order scaling of n/2 − k in the minimax rate.
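To illustrate the two ingredients named in the abstract, the following minimal Python sketch recovers a rough null location and scale from contaminated z-scores: the scale is read off the modulus of the empirical characteristic function, and the location is taken as a kernel mode estimate. This is not the rate-optimal estimator analysed in the talk; the function name, the fixed frequency t, and the rule-of-thumb bandwidth are illustrative assumptions, and no correction for the nonnull fraction is applied.

```python
import numpy as np

def estimate_null_params(z, t=1.0, bandwidth=None):
    """Crude sketch: estimate (mu, sigma) of a Gaussian null N(mu, sigma^2)
    from n z-scores, at most k < n/2 of which are arbitrary nonnull effects.

    Assumptions (not from the talk): frequency t = 1, a rule-of-thumb kernel
    bandwidth, and no correction for the contamination fraction.
    """
    z = np.asarray(z, dtype=float)
    n = z.size

    # Scale from the modulus of the empirical characteristic function:
    # for the null bulk, |E exp(itZ)| is roughly exp(-sigma^2 t^2 / 2).
    psi = np.mean(np.exp(1j * t * z))
    sigma_hat = np.sqrt(max(-2.0 * np.log(np.abs(psi)), 0.0)) / abs(t)

    # Location from a kernel mode estimator: the maximiser of an
    # (unnormalised) Gaussian kernel density estimate of the z-scores.
    if bandwidth is None:
        bandwidth = 1.06 * np.std(z) * n ** (-0.2)
    grid = np.linspace(z.min(), z.max(), 512)
    kde = np.exp(-0.5 * ((grid[:, None] - z[None, :]) / bandwidth) ** 2).mean(axis=1)
    mu_hat = grid[np.argmax(kde)]

    return mu_hat, sigma_hat

# Toy example: 4000 null z-scores from N(0.5, 1.2^2) plus 1000 nonnull effects.
rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0.5, 1.2, 4000), rng.normal(4.0, 1.0, 1000)])
# mu_hat lands near 0.5; sigma_hat is biased upward because the sketch
# ignores the contamination correction that a refined estimator would need.
print(estimate_null_params(z))
```

The kernel mode step exploits the same intuition as the abstract: the null bulk dominates and peaks at the null location, while the characteristic-function modulus is insensitive to that location and so isolates the scale; exploiting the Gaussian decay of the null is what a sharper, contamination-aware version of this scheme would build on.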