Using Social and Linguistic Information to Adapt Pretrained Representations for Political Perspective Identification

Chang Li, Dan Goldwasser
Findings of the Association for Computational Linguistics (ACL-findings), 2021
[pdf]

Abstract

Understanding the political perspective shaping the way events are discussed in the media is increasingly important due to the dramatic change in news distribution. With advances in text classification models, the performance of political perspective detection is also improving rapidly. However, current deep-learning-based text models often require a large amount of supervised data for training, which can be very expensive to obtain for this task. Meanwhile, models pretrained on general sources and tasks (e.g., BERT) lack the ability to focus on bias-related text spans. In this paper, we propose a novel framework that pretrains the text model using signals from the rich social and linguistic context that is readily available, including entity mentions, news sharing, and frame indicators. The pretrained models benefit from these bias-related tasks and are therefore easier to train with the bias labels. We demonstrate the effectiveness of our proposed framework through experiments on two news bias datasets. The models with pretraining achieve significant improvements in performance and are better at identifying the text spans that express bias.
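
To make the adaptation idea concrete, below is a minimal sketch (not the authors' released code) of how a pretrained encoder such as BERT might be adapted with auxiliary prediction heads for the social and linguistic signals named in the abstract (e.g., frame indicators and news-sharing behavior) before fine-tuning a perspective classifier on the scarce bias labels. The head names, label counts, and the single training step shown are illustrative assumptions, not details taken from the paper.

  # Illustrative sketch in Python (PyTorch + Hugging Face Transformers).
  # Assumed structure: adapt a pretrained encoder on auxiliary signals,
  # then fine-tune a perspective head on the small labeled bias data.
  import torch
  import torch.nn as nn
  from transformers import AutoModel, AutoTokenizer

  class AdaptedPerspectiveModel(nn.Module):
      def __init__(self, encoder_name="bert-base-uncased",
                   n_frames=15, n_perspectives=2):
          super().__init__()
          self.encoder = AutoModel.from_pretrained(encoder_name)
          hidden = self.encoder.config.hidden_size
          # Auxiliary heads used during adaptation (hypothetical label sizes):
          # which frame a text evokes, and which side tends to share it.
          self.frame_head = nn.Linear(hidden, n_frames)
          self.share_head = nn.Linear(hidden, 2)
          # Final head trained on the (scarce) perspective labels.
          self.perspective_head = nn.Linear(hidden, n_perspectives)

      def forward(self, batch, head="perspective"):
          # Use the [CLS] token representation as the text encoding.
          cls = self.encoder(**batch).last_hidden_state[:, 0]
          heads = {"frame": self.frame_head,
                   "share": self.share_head,
                   "perspective": self.perspective_head}
          return heads[head](cls)

  tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
  model = AdaptedPerspectiveModel()
  batch = tokenizer(["Senate passes the relief bill."],
                    return_tensors="pt", truncation=True)

  # Adaptation step on a freely available signal: here, a frame
  # pseudo-label (index 3, chosen arbitrarily for illustration).
  loss = nn.functional.cross_entropy(model(batch, head="frame"),
                                     torch.tensor([3]))
  loss.backward()

After adaptation on such auxiliary objectives, the same encoder would be fine-tuned through the perspective head, the intuition being that the encoder has already learned to attend to bias-related spans.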


Bib Entry

  @InProceedings{LiG_facl_2021,
    author = "Chang Li and Dan Goldwasser",
    title = "Using Social and Linguistic Information to Adapt Pretrained Representations for Political Perspective Identification",
    booktitle = "Findings of the Association for Computational Linguistics (ACL-findings)",
    year = "2021"
  }