

Wasserstein gradient flows and applications to sampling in machine learning - lecture 1
By Anna Korba


Wasserstein gradient flows and applications to sampling in machine learning - lecture 2
By Anna Korba
By Péter Gács
Appears in the collection: 2016 - T1 - WS1 - Distributed computation and communication theme
Some versions of Kolmogorov complexity are better suited than others when finite sequences are regarded as initial segments of infinite sequences. Levin and Schnorr introduced monotone complexity, while Solomonoff and Levin introduced algorithmic probability: the output distribution of a universal monotone machine reading a random coin-tossing input sequence. Monotone complexity and the negative logarithm of algorithmic probability are very closely related, but can still be distinguished, as an old result of the author showed using a game argument. The talk will survey that result and its proof, the significant improvement by Adam Day, and the open questions.