Understanding and Interpreting the Impact of User Context in Hate Speech Detection

Edoardo Mosca, Maximilian Wich, Georg Groh
Abstract
As hate speech spreads on social media and online communities, research continues to work on its automatic detection. Recently, detection performance has been increasing thanks to advances in deep learning and the integration of user features. This work investigates the effects that such features can have on a detection model. Unlike previous research, we show that a simple performance comparison does not expose the full impact of including contextual and user information. By leveraging explainability techniques, we show (1) that user features play a role in the model's decisions and (2) how they affect the feature space learned by the model. Besides revealing that user features are the reason for performance gains, and illustrating why, we show how such techniques can be combined to better understand the model and to detect unintended bias.
Anthology ID:
2021.socialnlp-1.8
Volume:
Proceedings of the Ninth International Workshop on Natural Language Processing for Social Media
Month:
June
Year:
2021
Address:
Online
Editors:
Lun-Wei Ku, Cheng-Te Li
Venue:
SocialNLP
Publisher:
Association for Computational Linguistics
Pages:
91–102
URL:
https://aclanthology.org/2021.socialnlp-1.8
DOI:
10.18653/v1/2021.socialnlp-1.8
Bibkey:
Cite (ACL):
Edoardo Mosca, Maximilian Wich, and Georg Groh. 2021. Understanding and Interpreting the Impact of User Context in Hate Speech Detection. In Proceedings of the Ninth International Workshop on Natural Language Processing for Social Media, pages 91–102, Online. Association for Computational Linguistics.
Cite (Informal):
Understanding and Interpreting the Impact of User Context in Hate Speech Detection (Mosca et al., SocialNLP 2021)
PDF:
https://aclanthology.org/2021.socialnlp-1.8.pdf