  1. Aug 2024
    1. Findings

      It is not necessary to resort to technically complex concepts and programming skills to carry out a useful, actionable assessment of biases in word embeddings and large language models (a minimal illustrative probe is sketched after these findings).

      Discrimination experts can obtain very valuable insights by exploring biases with our prototype and the proposed methodology; these insights give them strong evidence for planning actions to mitigate those biases in downstream applications.

      Discrimination experts find the obtained insights also useful for validating (or refuting) their intuitions about discrimination and for arguing their case with other actors in the discrimination scenario.

      The funding for this project has allowed us to work according to our own priorities rather than the usual requirements of academic publication, such as standard metrics and datasets that are irrelevant to our local context. Historically, such requirements have consumed most of the resources of research projects and have prevented us from pursuing an agenda that is locally relevant rather than aligned with the Global North agenda.
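
      To make the first finding concrete, the following is a minimal sketch of the kind of lightweight bias probe it refers to. It is not the project's prototype: the GloVe model, the occupation list, and the she/he comparison are illustrative assumptions, not part of the original methodology.

      ```python
      # Minimal illustrative probe (not the project's prototype): compare how
      # close a few occupation words sit to "she" versus "he" in an
      # off-the-shelf embedding space.
      import gensim.downloader as api

      # Small pre-trained GloVe model from gensim's downloader
      # (downloaded on first use; any local embedding would do).
      model = api.load("glove-wiki-gigaword-50")

      occupations = ["nurse", "engineer", "teacher", "scientist"]
      for word in occupations:
          # Cosine similarity to "she" minus similarity to "he":
          # positive values lean feminine, negative lean masculine, in this space.
          lean = model.similarity(word, "she") - model.similarity(word, "he")
          print(f"{word:>10}: {lean:+.3f}")
      ```

      The only decisions a discrimination expert must make here are the word lists themselves; a single similarity comparison is the entire technical machinery, which is the point of the finding.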