Grounding LLMs in Execution

By Gabriel Synnaeve

Appears in collection: Mathematics for and by Large Language Models

Large language models (LLMs) are trained in a very simple way, and many of the properties we attribute to them are already present in the training data. In this talk we review how LLMs are trained today, and what new training paradigms aim at grounding these LLMs in the impact of their generations. In the context of code generation, this means, for instance, grounding the LLM with the feedback obtained by executing its generated code. For Lean proofstep prediction, we can similarly use tactic execution feedback. We believe that closing the loop between "open" generation and "grounding" with more formal systems can bridge the gap between informal and formal uses of LLMs.
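To make the generate-execute-regenerate loop concrete, here is a minimal sketch in Python. It is an illustration of the general technique, not the method presented in the talk: the generate function is a hypothetical stand-in for an LLM API call, and the test harness and feedback prompt format are assumptions chosen for the example.

import subprocess
import tempfile


def generate(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call returning a candidate program."""
    raise NotImplementedError  # replace with a real model API


def run_candidate(code: str, test_code: str, timeout: float = 5.0) -> tuple[bool, str]:
    """Execute the candidate together with its tests; return (passed, feedback)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + "\n\n" + test_code)
        path = f.name
    try:
        proc = subprocess.run(
            ["python", path], capture_output=True, text=True, timeout=timeout
        )
        return proc.returncode == 0, proc.stderr
    except subprocess.TimeoutExpired:
        return False, "timeout"


def grounded_generation(prompt: str, test_code: str, max_rounds: int = 4) -> str | None:
    """Close the loop: regenerate with execution feedback until the tests pass."""
    feedback = ""
    for _ in range(max_rounds):
        code = generate(prompt + feedback)
        passed, stderr = run_candidate(code, test_code)
        if passed:
            return code
        # Ground the next generation in the observed execution trace.
        feedback = f"\n\nPrevious attempt failed with:\n{stderr}\nFix the code."
    return None

The same loop structure applies to Lean proofstep prediction: the "execution" step becomes running the predicted tactic in the proof assistant, and the error message or resulting goal state plays the role of stderr.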

Information about the video

  • Date of recording 23/05/2024
  • Date of publication 25/05/2024
  • Institution IHES
  • Licence CC BY-NC-ND
  • Language English
  • Audience Researchers
  • Format MP4
