

Slow Convergence of Stochastic Optimization Algorithms Without Derivatives Is Avoidable
By Anne Auger


Appears in the collection: 2019 - T1 - WS1 - Variational methods and optimization in imaging
The optimisation of nonsmooth, nonconvex functions without access to gradients is a particularly challenging problem that is frequently encountered, for example in model parameter optimisation. Bilevel optimisation of parameters is a standard setting in areas such as variational regularisation and supervised machine learning. We present efficient and robust derivative-free methods called randomised Itoh--Abe methods. These are generalisations of the Itoh--Abe discrete gradient method, a well-known scheme from geometric integration which had previously only been considered in the smooth setting. We demonstrate that the method and its favourable energy dissipation properties are well defined in the nonsmooth setting. Furthermore, we prove that whenever the objective function is locally Lipschitz continuous, the iterates almost surely converge to a connected set of Clarke stationary points. We present an implementation of the methods and apply it to various test problems. The numerical results indicate that the randomised Itoh--Abe methods are superior to state-of-the-art derivative-free optimisation methods for nonsmooth problems, while remaining competitive in terms of efficiency. If time allows, we will also present results in the smooth setting, where convergence rates can be derived. This is joint work with Erlend Riis, Matthias Ehrhardt, Torbjørn Ringholm and Reinout Quispel.
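
To make the scheme concrete, below is a minimal sketch of one way a randomised Itoh--Abe iteration can be implemented from function values alone. It assumes the standard discrete-gradient form of the update: at each step a random unit direction d is drawn and a nonzero step alpha is sought that solves the scalar equation alpha = -tau * (V(x + alpha*d) - V(x)) / alpha, which forces V(x_{k+1}) - V(x_k) = -alpha^2 / tau <= 0. The function name, step-size choices and root-finding strategy are illustrative assumptions for this sketch, not the authors' implementation.

import numpy as np
from scipy.optimize import brentq

def randomised_itoh_abe(V, x0, tau=1.0, n_iter=500, eps=1e-8, rng=None):
    # Sketch of a randomised Itoh--Abe iteration (illustrative, not the authors' code).
    # Only evaluations of V are used: at each step, draw a random unit direction d and
    # solve the scalar discrete-gradient equation
    #     alpha = -tau * (V(x + alpha*d) - V(x)) / alpha
    # for a nonzero alpha, which guarantees V decreases by alpha**2 / tau.
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        d = rng.standard_normal(x.shape)
        d /= np.linalg.norm(d)

        Vx = V(x)
        # Finite-difference probe of the directional slope, used to orient d
        # and to bracket the nonzero root of the scalar equation.
        s = (V(x + eps * d) - Vx) / eps
        if s > 0.0:
            d = -d
            s = (V(x + eps * d) - Vx) / eps
        if eps + tau * s >= 0.0:
            continue               # essentially flat along d; try a new direction

        def h(alpha):
            # Residual of the scalar equation; its nonzero root is the step length.
            return alpha + tau * (V(x + alpha * d) - Vx) / alpha

        # Bracket a sign change: h(eps) < 0 by the test above, and h(alpha) ~ alpha > 0
        # for large alpha whenever V is bounded below along the ray.
        lo, hi = eps, max(2.0 * tau * abs(s), 10.0 * eps)
        for _ in range(60):
            if h(hi) > 0.0:
                break
            hi *= 2.0
        else:
            continue               # no bracket found; resample the direction
        alpha = brentq(h, lo, hi)
        x = x + alpha * d
    return x

As a quick check, the sketch can be run on a simple nonsmooth test function, chosen here purely for illustration:

V = lambda x: np.abs(x).sum() + 0.5 * ((x - 1.0) ** 2).sum()
x_est = randomised_itoh_abe(V, x0=np.zeros(5), tau=0.5, n_iter=2000)

By construction, every accepted step decreases V, which is the energy dissipation property referred to above; the almost-sure convergence of the iterates to a connected set of Clarke stationary points is the separate theoretical result of the work.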