

Universal Optimal Control, Reinforcement Learning, and Reaching Goals in LLMs
By Yann Ollivier


Search, Reason or Recombine? Paradigms for Scaling Formal Proving
By Fabian Glöckle
Appears in the collection: 2025 Huawei-IHES Workshop on Causality in the Era of AI: From Theory to Practice
In many scientific domains, the cost of data annotation limits the scale and pace of experimentation. Yet modern machine learning systems offer a promising alternative, provided their predictions yield correct conclusions. We focus on Prediction-Powered Causal Inferences (PPCI): estimating the treatment effect in a target experiment whose factual outcomes are unlabeled but retrievable zero-shot from a pre-trained model. We first identify a conditional calibration property that guarantees valid PPCI at the population level. We then introduce a new necessary "causal lifting" constraint that transfers validity across experiments, which we propose to enforce in practice via Deconfounded Empirical Risk Minimization, a new model-agnostic training objective. We validate our method on synthetic and real-world scientific data, solving instances that vanilla Empirical Risk Minimization and invariant training cannot. In particular, we solve zero-shot PPCI on the ISTAnt dataset for the first time, by fine-tuning a foundation model on our replica of their ecological experiment, recorded with a different platform and treatment.
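For intuition, here is a minimal Python sketch of the PPCI setup described above: the target experiment's factual outcomes are never observed, so zero-shot predictions from a pre-trained model stand in for them inside a standard difference-in-means treatment-effect estimator. All names here (ppci_ate, predict_fn, the synthetic data) are illustrative placeholders, not code from the talk or paper; the contribution summarized above is the conditional calibration condition under which such an estimate is valid, which this sketch assumes rather than enforces.

```python
import numpy as np

def ppci_ate(features, treatment, predict_fn):
    """Illustrative Prediction-Powered Causal Inference (PPCI) estimate.

    The experiment's factual outcomes are unlabeled, so we substitute
    zero-shot predictions from a pre-trained model and compute a
    difference-in-means average treatment effect (ATE).

    features  : (n, d) covariates per experimental unit (e.g. video embeddings)
    treatment : (n,) binary randomized treatment assignment
    predict_fn: callable mapping features -> predicted factual outcomes
    """
    y_hat = np.asarray(predict_fn(features))  # predicted outcomes; no labels used
    # Randomization lets a difference in means identify the ATE; with
    # predicted rather than observed outcomes, validity is exactly what
    # conditional calibration of predict_fn is meant to guarantee.
    return y_hat[treatment == 1].mean() - y_hat[treatment == 0].mean()

# Toy usage (synthetic data; the lambda stands in for a pre-trained model).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
t = rng.integers(0, 2, size=500)
print(ppci_ate(X, t, lambda x: x[:, 0]))
```

The point of the calibration requirement is that the predicted-outcome difference in means should agree in expectation with the one that would have been computed from true labels; a predictor trained by vanilla empirical risk minimization need not satisfy this across experiments, which is what motivates the causal lifting constraint.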