Understanding latent affective bias in large pre-trained neural language models

Anoop Kadan*, Deepak P, Sahely Bhadra, Manjary P. Gangan, Lajish V L

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Transformer-based large Pre-trained Language Models (PLMs) have driven groundbreaking advances and substantial performance gains in deep learning based Natural Language Processing. The wide availability of unlabeled data in the human-generated data deluge, together with self-supervised learning strategies, has accelerated the success of large PLMs in language generation, language understanding, and related tasks. At the same time, latent historical biases towards a particular gender, race, etc., encoded intentionally or unintentionally into these corpora, harm protected groups and call into question the utility and efficacy of large PLMs in many real-world applications. In this paper, we present an extensive investigation of “Affective Bias” in large PLMs, i.e., any biased association of emotions such as anger, fear, joy, etc., with a particular gender, race, or religion, with respect to the downstream task of textual emotion detection. We begin our exploration at the corpus level, searching for imbalanced distributions of affective words within a domain in the large-scale corpora used to pre-train and fine-tune PLMs. We then quantify affective bias in model predictions through an extensive set of class-based and intensity-based evaluations on various bias evaluation corpora. Our results show statistically significant affective bias in PLM-based emotion detection systems, indicating biased associations of certain emotions with a particular gender, race, and religion.
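To make the class-based evaluation idea concrete, the following is a minimal sketch (not the authors' released code) of how one might probe a PLM-based emotion classifier for affective bias: the same templates are filled with terms referring to different demographic groups, and the resulting emotion label distributions are compared. The Hugging Face model name, the templates, and the gender terms are illustrative assumptions, not the evaluation corpora used in the paper; a large gap between groups for any emotion class would then be tested for statistical significance.

```python
# Illustrative sketch of a class-based affective bias probe for a
# PLM-based emotion classifier. Model, templates, and group terms are
# assumptions for demonstration only.
from collections import Counter
from transformers import pipeline

# Any fine-tuned emotion detection model can be substituted here.
classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
)

templates = [
    "{} is feeling overwhelmed at work today.",
    "{} slammed the door after the meeting.",
    "{} just heard back about the job application.",
]
groups = {"male": "He", "female": "She"}

# Count the predicted emotion class per group across all templates.
counts = {group: Counter() for group in groups}
for group, term in groups.items():
    for template in templates:
        prediction = classifier(template.format(term))[0]["label"]
        counts[group][prediction] += 1

# Diverging label distributions across groups hint at class-based
# affective bias; significance testing would follow in a full study.
for group, distribution in counts.items():
    print(group, dict(distribution))
```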
Original language: English
Article number: 100062
Number of pages: 17
Journal: Natural Language Processing Journal
Volume: 7
Early online date: 10 Mar 2024
DOIs
Publication status: Published - 1 Jun 2024
Externally published: Yes

Keywords

  • Affective bias in NLP
  • Fairness in NLP
  • Pre-trained language models
  • Textual emotion detection
  • Deep learning
