
Fair Classifiers via Transferable Representations

By Charlotte Laclau

Appears in collection : 10e Journée Statistique et Informatique pour la Science des Données à Paris-Saclay

Group fairness is a central research topic in text classification, where achieving fair treatment of sensitive groups (e.g., women and men) remains an open challenge. In this talk, I will present an approach that extends the use of the Wasserstein independence measure to learning unbiased neural text classifiers. Since fair and unfair information are hard to disentangle inside a text encoder, we draw inspiration from adversarial training and induce Wasserstein independence between the representations learned for the target label and those learned for the sensitive attribute. We further show that domain adaptation can be leveraged to remove the need for access to the sensitive attributes at training time. I will present theoretical and empirical evidence for the validity of this approach.
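The core idea, penalising statistical dependence between learned representations and a sensitive attribute, can be illustrated in a much-simplified one-dimensional form. The sketch below is not the estimator from the talk; it merely computes the empirical Wasserstein-1 distance between a classifier's scores on two hypothetical sensitive groups, a quantity that adversarial fairness methods drive toward zero as a training penalty. All names and the synthetic data are illustrative assumptions.

```python
import numpy as np

def wasserstein_1d(u: np.ndarray, v: np.ndarray) -> float:
    """Empirical 1-Wasserstein distance between two equal-size 1-D samples.

    In one dimension the optimal transport plan matches sorted samples,
    so the distance is the mean absolute difference of order statistics.
    """
    u, v = np.sort(u), np.sort(v)
    return float(np.mean(np.abs(u - v)))

# Hypothetical classifier scores for two sensitive groups (synthetic data):
# a mean shift of 0.5 mimics a biased encoder leaking group information.
rng = np.random.default_rng(0)
scores_group_a = rng.normal(0.0, 1.0, 1000)
scores_group_b = rng.normal(0.5, 1.0, 1000)

# A fairness-aware objective would add this term to the task loss and
# minimise it, pushing the two score distributions together.
penalty = wasserstein_1d(scores_group_a, scores_group_b)
```

In the actual neural setting the distance is not computed in closed form; a critic network estimates it adversarially (via the Kantorovich–Rubinstein dual), which is where the analogy with adversarial training in the abstract comes from.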

Information about the video

  • Date of recording 01/04/2025
  • Date of publication 10/04/2025
  • Institution IHES
  • Language English
  • Audience Researchers
  • Format MP4
