Monte Carlo guided Diffusion for Bayesian linear inverse problems
By Sylvain Le Corff
Linear and nonlinear schemes for forward model reduction and inverse problems - Lecture 1
By Olga Mula Hernandez
Appears in the collection: FLAIM: Formal Languages, AI and Mathematics
Machine learning is now deployed at planetary scale, e.g. in voice assistants, targeted advertising, and content recommendation. Yet despite this widespread deployment, known cyber-attacks, and evident vulnerabilities, the theory of machine learning security remains underdeveloped and lagging behind. In this talk, I will highlight three leading security concerns (privacy, evasion, and poisoning). I will then focus in particular on poisoning. Unfortunately, as we will see, several impossibility theorems expose a fundamental vulnerability of any learning system, even under modest adversarial attacks. I will also discuss the current leading ideas for increasing, to some extent, the security of the training of machine learning models.
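To make the poisoning concern concrete, here is a minimal, self-contained sketch (not from the talk; all names and parameters are illustrative assumptions) of a label-flipping poisoning attack: an adversary who corrupts a fraction of the training labels can sharply degrade a 1-nearest-neighbor classifier on otherwise easy, well-separated data.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n_per_class):
    # Two well-separated Gaussian clusters: class 0 near (-2,-2), class 1 near (+2,+2).
    X0 = rng.normal(-2.0, 1.0, size=(n_per_class, 2))
    X1 = rng.normal(+2.0, 1.0, size=(n_per_class, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

def nn_predict(X_train, y_train, X):
    # 1-nearest-neighbor: each test point inherits the label of its closest training point.
    d = np.linalg.norm(X[:, None, :] - X_train[None, :, :], axis=2)
    return y_train[d.argmin(axis=1)]

X_train, y_train = make_data(200)
X_test, y_test = make_data(200)

# Clean training: the clusters are easy to separate.
acc_clean = (nn_predict(X_train, y_train, X_test) == y_test).mean()

# Poisoning attack (illustrative): the adversary flips 40% of the training labels.
y_poisoned = y_train.copy()
flip = rng.choice(len(y_poisoned), size=int(0.4 * len(y_poisoned)), replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]

acc_poisoned = (nn_predict(X_train, y_poisoned, X_test) == y_test).mean()

print(f"accuracy on clean training data:    {acc_clean:.2f}")
print(f"accuracy on poisoned training data: {acc_poisoned:.2f}")
```

Nearest-neighbor is deliberately chosen here because it memorizes labels: each flipped label directly corrupts a region of the input space, so even this simple attack noticeably hurts test accuracy.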