Appears in the collection: Google matrix: fundamentals, applications and beyond

In this talk we'll cover two variants of random walks on networks. In the first variant, the problem is inverted: the steady state is known, but the underlying Markov chain must be learned subject to some conditions. We motivate this problem of "inverting a steady state," describe it formally, and give an algorithmic solution. Second, we turn to situations in which the Markov assumption is too restrictive, as effective models must retain some information about the more distant past. We describe LAMP: linear additive Markov processes, which extend Markov chains to take into account the entire history of the process while retaining a sparse parameterization and a clean mathematical interpretation. We'll describe LAMP, characterize its properties, show how to learn such models, and present experimental results.
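The talk's own algorithm for inverting a steady state is not given in this abstract; as a hedged illustration of the problem, the classical Metropolis-Hastings construction below builds *some* Markov chain whose stationary distribution equals a given target π. The function name `chain_with_stationary` and the uniform proposal are assumptions for this sketch, not the speaker's method.

```python
import numpy as np

def chain_with_stationary(pi):
    """Build a transition matrix whose stationary distribution is pi.

    Sketch only: uses a Metropolis-Hastings rule with a uniform proposal
    over the n states, one classical way to realize a prescribed steady
    state; the talk's algorithm solves a more constrained version.
    """
    n = len(pi)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                # propose j uniformly, accept with probability min(1, pi[j]/pi[i])
                P[i, j] = (1.0 / n) * min(1.0, pi[j] / pi[i])
        # remaining mass stays at state i
        P[i, i] = 1.0 - P[i].sum()
    return P

pi = np.array([0.5, 0.3, 0.2])
P = chain_with_stationary(pi)
# detailed balance pi[i]*P[i,j] == pi[j]*P[j,i] holds, so pi @ P == pi
```

This construction satisfies detailed balance (π_i P_ij = min(π_i, π_j)/n is symmetric in i and j), which is sufficient, though not necessary, for π to be stationary.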
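A LAMP step can be sketched as follows: instead of transitioning from the current state only, the process samples a past state from its history according to fixed lag weights, then applies a single shared transition matrix. The matrix `M`, weights `w`, and 3-state setup below are illustrative assumptions, not the talk's experiments.

```python
import numpy as np

def lamp_step(history, M, w, rng):
    """Draw the next state of a linear additive Markov process (sketch).

    The next-state distribution is a weighted mixture over the history:
    sample a lag according to w (lag 1 = most recent state), then
    transition from that past state using the shared matrix M.
    """
    k = min(len(history), len(w))
    # renormalize weights over the lags actually available early on
    probs = w[:k] / w[:k].sum()
    lag = rng.choice(k, p=probs)
    past_state = history[-(lag + 1)]
    return rng.choice(M.shape[0], p=M[past_state])

# illustrative 3-state transition matrix and lag weights (assumed values)
M = np.array([[0.1, 0.9, 0.0],
              [0.0, 0.1, 0.9],
              [0.9, 0.0, 0.1]])
w = np.array([0.7, 0.2, 0.1])   # weight on lags 1, 2, 3

rng = np.random.default_rng(0)
history = [0]
for _ in range(20):
    history.append(lamp_step(history, M, w, rng))
```

Note the sparse parameterization the abstract mentions: one n×n matrix plus a short weight vector, rather than the n^k parameters a k-th order Markov chain would need.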

Video information

  • Recording date: 16/10/2018
  • Publication date: 29/10/2018
  • Institute: IHES
  • Language: English
  • Audience: Researchers, PhD students
  • Format: MP4

