ConStance: Modeling Annotation Contexts to Improve Stance Classification

Kenneth Joseph, Lisa Friedland, William Hobbs, David Lazer, Oren Tsur


Abstract
Manual annotations are a prerequisite for many applications of machine learning. However, weaknesses in the annotation process itself are easy to overlook. In particular, scholars often choose what information to give to annotators without examining these decisions empirically. For subjective tasks such as sentiment analysis, sarcasm, and stance detection, such choices can impact results. Here, for the task of political stance detection on Twitter, we show that providing too little context can result in noisy and uncertain annotations, whereas providing too strong a context may cause it to outweigh other signals. To characterize and reduce these biases, we develop ConStance, a general model for reasoning about annotations across information conditions. Given conflicting labels produced by multiple annotators seeing the same instances with different contexts, ConStance simultaneously estimates gold standard labels and also learns a classifier for new instances. We show that the classifier learned by ConStance outperforms a variety of baselines at predicting political stance, while the model’s interpretable parameters shed light on the effects of each context.
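The abstract describes the general recipe at a high level: pool labels produced under different annotation contexts, learn how each context distorts the labels, and jointly recover gold-standard labels while training a classifier. The sketch below is not the paper's actual model; it is a minimal, assumption-laden illustration of that joint-estimation idea using an EM loop with one Dawid-Skene-style confusion matrix per context and a classifier acting as the prior over true labels. All function and variable names (aggregate_and_train, labels as (item, context, label) triples, etc.) are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

def aggregate_and_train(X, labels, n_classes, n_contexts, n_iter=20):
    """Illustrative EM-style aggregation across annotation contexts.

    X:      (n_items, n_features) feature matrix for the annotated items.
    labels: iterable of (item_index, context_index, observed_label) triples.
    Returns soft gold-label estimates q and a classifier fit on them.
    """
    n_items = X.shape[0]

    # Initialize soft labels with a smoothed per-item vote over all annotations.
    q = np.full((n_items, n_classes), 1.0 / n_classes)
    for i, _, y in labels:
        q[i, y] += 1.0
    q /= q.sum(axis=1, keepdims=True)

    clf = LogisticRegression(max_iter=1000)
    for _ in range(n_iter):
        # M-step: estimate one confusion matrix per annotation context,
        # with add-one smoothing. conf[c, t, y] ~ P(annotator says y | true t, context c).
        conf = np.ones((n_contexts, n_classes, n_classes))
        for i, c, y in labels:
            conf[c, :, y] += q[i]
        conf /= conf.sum(axis=2, keepdims=True)

        # M-step: refit the classifier on the current soft labels by
        # replicating each item once per class with its posterior as weight.
        X_rep = np.repeat(X, n_classes, axis=0)
        y_rep = np.tile(np.arange(n_classes), n_items)
        clf.fit(X_rep, y_rep, sample_weight=q.reshape(-1))

        # E-step: combine the classifier's prediction (prior over true labels)
        # with the context-specific noise model for every observed annotation.
        log_q = np.log(clf.predict_proba(X) + 1e-12)
        for i, c, y in labels:
            log_q[i] += np.log(conf[c, :, y] + 1e-12)
        q = np.exp(log_q - log_q.max(axis=1, keepdims=True))
        q /= q.sum(axis=1, keepdims=True)

    return q, clf

The per-context confusion matrices in this sketch play the role of the "interpretable parameters" mentioned in the abstract: each one summarizes how annotations made under a given information condition deviate from the estimated gold labels. The paper's actual parameterization and inference procedure may differ.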
Anthology ID:
D17-1116
Volume:
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
1115–1124
URL:
https://aclanthology.org/D17-1116
DOI:
10.18653/v1/D17-1116
Cite (ACL):
Kenneth Joseph, Lisa Friedland, William Hobbs, David Lazer, and Oren Tsur. 2017. ConStance: Modeling Annotation Contexts to Improve Stance Classification. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1115–1124, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
ConStance: Modeling Annotation Contexts to Improve Stance Classification (Joseph et al., EMNLP 2017)
PDF:
https://aclanthology.org/D17-1116.pdf
Attachment:
D17-1116.Attachment.zip
Video:
https://aclanthology.org/D17-1116.mp4