FALCoN: Detecting and classifying abusive language in social networks using context features and unlabeled data
Issued Date
2023-07-01
ISSN
0306-4573
Scopus ID
2-s2.0-85153572822
Journal Title
Information Processing and Management
Volume
60
Issue
4
Rights Holder(s)
SCOPUS
Bibliographic Citation
Information Processing and Management Vol.60 No.4 (2023)
Suggested Citation
Tuarob S., Satravisut M., Sangtunchai P., Nunthavanich S., Noraset T. FALCoN: Detecting and classifying abusive language in social networks using context features and unlabeled data. Information Processing and Management Vol.60 No.4 (2023). doi:10.1016/j.ipm.2023.103381 Retrieved from: https://repository.li.mahidol.ac.th/handle/20.500.14594/81322
Abstract
Social networks have grown into a widespread form of communication that allows large numbers of users to participate in conversations and consume information at any time. The casual nature of social media permits nonstandard terminology, some of which may be considered rude and derogatory. As a result, a significant portion of social media users has been found to use disrespectful language. This problem may intensify in certain developing countries where young children are granted unsupervised access to social media platforms. Furthermore, the sheer amount of social media data generated daily by millions of users makes it impractical for humans to monitor and regulate inappropriate content. If adolescents are exposed to these harmful language patterns without adequate supervision, they may be inclined to adopt them. In addition, unrestricted aggression in online forums may lead to cyberbullying and other harmful incidents. While computational linguistics research has addressed the difficulty of detecting abusive dialogue, challenges remain for low-resource languages with little annotated data, where most supervised techniques perform poorly. Moreover, social media content is often presented in complex, context-rich formats that encourage creative user involvement. We therefore propose to improve abusive language detection and classification in a low-resource setting by exploiting both the abundant unlabeled data and the context features through a co-training protocol, in which two machine learning models, each learning from an orthogonal set of features, teach each other, yielding an overall performance improvement. Empirical results reveal that our proposed framework achieves F1 scores of 0.922 and 0.827, surpassing the state-of-the-art baselines by 3.32% and 45.85% on binary and fine-grained classification tasks, respectively.
Beyond demonstrating the efficacy of co-training for abusive language detection and classification in a low-resource setting, the findings shed light on several opportunities to exploit unlabeled data and the contextual characteristics of social networks in a variety of social computing applications.
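The abstract's central mechanism is the co-training protocol: two models, each restricted to one of two orthogonal feature views, iteratively pseudo-label unlabeled examples for each other. The sketch below illustrates that generic protocol only; the classifier, feature views, round count, and selection rule are illustrative assumptions, not the authors' actual FALCoN configuration.

```python
import numpy as np

class CentroidClassifier:
    """Tiny stand-in classifier: predicts the class whose centroid is nearest."""
    def fit(self, X, y):
        y = np.asarray(y)
        self.centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])
        return self

    def predict_proba(self, X):
        # Softmax over negative centroid distances -> pseudo-probabilities.
        d = np.linalg.norm(X[:, None, :] - self.centroids[None, :, :], axis=2)
        p = np.exp(-d)
        return p / p.sum(axis=1, keepdims=True)

    def predict(self, X):
        return self.predict_proba(X).argmax(axis=1)

rng = np.random.default_rng(0)

# Synthetic data with two feature "views" (standing in for, e.g., textual
# features vs. social-context features); sizes and noise are illustrative.
n_labeled, n_unlabeled = 40, 400
n = n_labeled + n_unlabeled
y_all = rng.integers(0, 2, n)
view_a = y_all[:, None] + rng.normal(0, 0.5, (n, 5))
view_b = y_all[:, None] + rng.normal(0, 0.5, (n, 5))

labels = {i: int(y_all[i]) for i in range(n_labeled)}   # small seed labeled set
pool = list(range(n_labeled, n))                        # unlabeled pool
clf_a, clf_b = CentroidClassifier(), CentroidClassifier()

for _ in range(5):                        # a few co-training rounds
    idx = sorted(labels)
    y = [labels[i] for i in idx]
    clf_a.fit(view_a[idx], y)             # each model sees only its own view
    clf_b.fit(view_b[idx], y)
    if not pool:
        break
    # Each model pseudo-labels its most confident unlabeled examples; those
    # labels augment the shared training pool used by the other model.
    for clf, view in ((clf_a, view_a), (clf_b, view_b)):
        proba = clf.predict_proba(view[pool])
        conf = proba.max(axis=1)
        for j in np.argsort(conf)[-20:]:  # top-20 most confident
            labels.setdefault(pool[j], int(proba[j].argmax()))
    pool = [i for i in pool if i not in labels]

acc_a = float((clf_a.predict(view_a) == y_all).mean())
```

On this toy data, the labeled pool grows each round and the view-A model generalizes well despite the tiny seed set, which is the behavior co-training relies on: confident predictions under one view supply training signal for the other.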