Google matrix: fundamentals, applications and beyond


Date(s) 18/04/2024

In this talk we will cover two variants of random walks on networks. In the first variant, the problem is inverted: the steady state is known, but the underlying Markov chain must be learned subject to some conditions. We motivate this problem of "inverting a steady state," describe it formally, and give an algorithmic solution. Second, we turn to situations in which the Markov assumption is too restrictive, because effective models must retain some information about the more distant past. We describe LAMP (Linear Additive Markov Processes), which extends Markov chains to take the entire history of the process into account while retaining a sparse parameterization and a clean mathematical interpretation. We will define LAMP, characterize its properties, show how to learn such models, and present experimental results.
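The abstract does not spell out either construction, so the two sketches below are only illustrative. For the steady-state inversion problem, a classical way to obtain some chain with a prescribed stationary distribution is the Metropolis–Hastings construction: start from any proposal chain Q supported on the network and reweight its moves so that the target distribution pi becomes stationary. This is a standard baseline, not the algorithm presented in the talk; the function and variable names are assumptions.

```python
import numpy as np

def metropolis_chain(Q, pi):
    """Build a transition matrix P with stationary distribution pi,
    starting from a row-stochastic proposal chain Q on the same states.

    Standard Metropolis-Hastings reweighting:
      P[i, j] = Q[i, j] * min(1, (pi[j] * Q[j, i]) / (pi[i] * Q[i, j]))  for j != i,
    with the leftover mass placed on the self-loop P[i, i].
    """
    n = len(pi)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j or Q[i, j] == 0:
                continue
            accept = min(1.0, (pi[j] * Q[j, i]) / (pi[i] * Q[i, j]))
            P[i, j] = Q[i, j] * accept
        P[i, i] = 1.0 - P[i].sum()   # self-loop absorbs rejected moves
    return P

# Tiny example: random-walk proposal on a 3-node path graph, target pi.
Q = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
pi = np.array([0.2, 0.5, 0.3])
P = metropolis_chain(Q, pi)
print(np.allclose(pi @ P, pi))   # True: pi is stationary for P
```

For the second part, a common formulation of a linear additive Markov process keeps a single base transition matrix M and a weight vector w over lags: the next state is generated by sampling a lag from w and then taking one step of M from the state observed that many steps ago, so the conditional distribution of the next state is a weighted sum of rows of M over the history. The minimal sampler below follows that formulation; the renormalization over the available history and the toy parameters are assumptions, not details taken from the talk.

```python
import numpy as np

def sample_lamp(M, w, x0, steps, rng=None):
    """Sample a trajectory from a linear additive Markov process (LAMP).

    M  : base row-stochastic transition matrix (n x n), as in a first-order chain.
    w  : weights over lags; w[i] is the probability of looking i+1 steps back.
    x0 : initial state (integer in 0..n-1).
    """
    rng = np.random.default_rng() if rng is None else rng
    history = [x0]
    for _ in range(steps):
        k = min(len(history), len(w))          # lags we can actually use so far
        probs = np.asarray(w[:k], dtype=float)
        probs /= probs.sum()                   # renormalize over available lags
        lag = rng.choice(k, p=probs) + 1       # 1 = most recent state
        anchor = history[-lag]                 # state seen `lag` steps ago
        nxt = rng.choice(len(M), p=M[anchor])  # one base-chain step from it
        history.append(nxt)
    return history

# Tiny example: 3-state base chain, next state depends on the last two states.
M = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.3, 0.3, 0.4]])
w = [0.7, 0.3]            # 70% last state, 30% the state before that
print(sample_lamp(M, w, x0=0, steps=10))
```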
