Appears in the collection: 2019 - T1 - WS1 - Variational methods and optimization in imaging
Sparse regularization is a central technique in both machine learning and imaging sciences. Existing performance guarantees assume a separation of the spikes based on an ad-hoc (usually Euclidean) minimum-distance condition, which ignores the geometry of the problem. In this talk, we study the BLASSO (i.e. the off-the-grid version of ℓ1 LASSO regularization) and show that the Fisher-Rao distance is the natural way to ensure and quantify support recovery. Under a separation condition imposed in this distance, I will present results showing that stable recovery of a sparse measure can be achieved when the sampling complexity is (up to log factors) linear in the sparsity. For deconvolution problems, which are translation invariant, this generalizes existing results in the literature to the multi-dimensional setting. For more complex, translation-varying problems, such as Laplace transform inversion, this gives the first geometry-aware guarantees for sparse recovery. This is joint work with Nicolas Keriven and Gabriel Peyré.
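For context, a standard way to write the BLASSO (not spelled out in the abstract itself; the notation below is assumed) is as follows. Given a linear measurement operator Φ acting on measures over a domain 𝒳 and observations y = Φμ₀ + w of a sparse measure μ₀ = Σᵢ aᵢ δ_{xᵢ}, one solves

$$\min_{\mu \in \mathcal{M}(\mathcal{X})} \; \frac{1}{2}\,\| y - \Phi \mu \|^2 \;+\; \lambda\, |\mu|(\mathcal{X}),$$

where |μ|(𝒳) is the total variation norm of the measure, i.e. the off-the-grid analogue of the ℓ1 norm. The separation discussed in the talk is measured not in the Euclidean metric but in the Fisher-Rao geodesic distance induced by the measurement model; roughly speaking, for translation-invariant models such as Gaussian deconvolution this distance reduces to a rescaled Euclidean distance, whereas for translation-varying operators such as the Laplace transform it is genuinely non-Euclidean, which is what makes the geometry-aware guarantees possible.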