Inverted steady states and LAMP models
In this talk we'll cover two variants of random walks on networks. In the first variant, the problem is inverted: the steady state is known, but the underlying Markov chain must be learned subject to some conditions. We motivate this problem of "inverting a steady state," describe it formally, and give an algorithmic solution. Second, we turn to situations in which the Markov assumption is too restrictive, as effective models must retain some information about the more distant past. We describe LAMP: linear additive Markov processes, which extend Markov chains to take the entire history of the process into account, while retaining a sparse parameterization and a clean mathematical interpretation. We'll define LAMP, characterize its properties, show how to learn such models, and present experimental results.
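To make the "known steady state, unknown chain" setting concrete, here is a minimal sketch using the classical Metropolis construction, which builds a transition matrix whose stationary distribution is a prescribed pi over a given graph. This is only a generic illustration of fixing the steady state first; the talk's actual algorithm and conditions are not specified in the abstract, and the function name `metropolis_kernel` is ours.

```python
def metropolis_kernel(pi, neighbors):
    """Classical Metropolis construction (illustrative, not the
    talk's algorithm): build a row-stochastic matrix P on a graph
    such that pi P = pi, via detailed balance."""
    n = len(pi)
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        d_i = len(neighbors[i])
        for j in neighbors[i]:
            if j != i:
                # propose j uniformly among i's neighbors, accept with
                # probability min(1, pi_j * d_i / (pi_i * d_j))
                d_j = len(neighbors[j])
                P[i][j] = (1.0 / d_i) * min(1.0, (pi[j] * d_i) / (pi[i] * d_j))
        # remaining mass becomes a self-loop, keeping the row stochastic
        P[i][i] = 1.0 - sum(P[i])
    return P
```

Detailed balance (pi_i P_ij = pi_j P_ji) holds by construction, so pi is stationary whenever the graph is connected.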
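LAMP's one-step rule admits a compact sketch: draw a lag from a weight distribution over past positions, then transition from the state at that lag using a single ordinary Markov kernel. This is a hedged sketch based on the standard LAMP formulation (a mixture over history sharing one transition matrix); the names `lamp_step`, `M`, and `w` are illustrative, not from the abstract.

```python
import random

def lamp_step(history, M, w, rng):
    """One step of a linear additive Markov process (sketch):
    pick a past state with lag probability w[i] (lag i+1 steps back),
    then take a Markov transition from it via kernel M."""
    # truncate the lag distribution to the available history;
    # rng.choices renormalizes the weights internally
    k = min(len(history), len(w))
    lag = rng.choices(range(k), weights=w[:k])[0]
    src = history[-(lag + 1)]
    # transition from the chosen past state using row src of M
    return rng.choices(range(len(M)), weights=M[src])[0]
```

With `w = [1.0]` this reduces to an ordinary Markov chain, which is why LAMP stays sparsely parameterized: one kernel plus one weight vector, rather than a kernel per history configuration.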