Topos, stacks, semantic information and artificial neural networks

By Daniel Bennequin

Appears in collection : Toposes Online

Joint work with Jean-Claude Belfiore

Every known artificial deep neural network (DNN) corresponds to an object in a canonical Grothendieck topos; its learning dynamics correspond to a flow of morphisms in this topos. Invariance structures in the layers (as in CNNs or LSTMs) correspond to Giraud's stacks. This invariance is supposed to be responsible for the generalization property, that is, extrapolation from learning data under constraints. The fibers represent pre-semantic categories (Culioli, Thom), over which artificial languages are defined, with internal logics: intuitionistic, classical or linear (Girard). The semantic functioning of a network is its ability to express theories in such a language for answering questions in output about input data. Quantities and spaces of semantic information are defined by analogy with the homological interpretation of Shannon's entropy (P. Baudot and D.B.). They generalize the measures found by Carnap and Bar-Hillel (1952). Remarkably, the above semantic structures are classified by geometric fibrant objects in a closed model category of Quillen; they then give rise to homotopical invariants of DNNs and of their semantic functioning. Intensional type theories (Martin-Löf) organize these objects and the fibrations between them. Information contents and exchanges are analyzed by Grothendieck's derivators.
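The Carnap–Bar-Hillel (1952) measures mentioned above can be illustrated concretely. A minimal sketch in Python, under the standard assumption that all state descriptions of a propositional language are equiprobable: the logical probability m(h) of a sentence h is the fraction of states in which it holds, its content measure is cont(h) = 1 - m(h), and its information measure is inf(h) = -log2 m(h). The function name and encoding of sentences as predicates are illustrative choices, not from the talk.

```python
from itertools import product
from math import log2

def semantic_info(sentence, atoms):
    """Carnap–Bar-Hillel semantic information measures.

    sentence: a predicate taking a dict of atom name -> bool.
    atoms: list of atomic proposition names.
    Assumes all state descriptions are equiprobable.
    Returns (cont, inf): the content and information measures.
    """
    # Enumerate all state descriptions (truth assignments).
    states = [dict(zip(atoms, vals))
              for vals in product([False, True], repeat=len(atoms))]
    # Logical probability: fraction of states where the sentence holds.
    m = sum(sentence(s) for s in states) / len(states)
    cont = 1 - m                                 # content measure
    inf = -log2(m) if m > 0 else float("inf")    # information measure
    return cont, inf

# Example: h = "p and q" over atoms p, q holds in 1 of 4 states,
# so m(h) = 1/4, cont(h) = 0.75, inf(h) = 2 bits.
cont, inf = semantic_info(lambda s: s["p"] and s["q"], ["p", "q"])
```

The more a sentence excludes (the fewer states satisfying it), the higher both measures; a tautology carries zero semantic information, and a contradiction infinite information, which is the paradox the homological generalization in the talk addresses differently.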

Information about the video

  • Date of recording 6/28/21
  • Date of publication 6/28/21
  • Institution IHES
  • Language English
  • Audience Researchers
  • Format MP4

