Abstract:
With the proliferation of hateful and offensive speech on social media platforms such as Twitter, machine learning approaches to detecting such toxic content have gained prominence. Despite these advances, real-time detection of such speech, while it is being shared on these platforms, remains a challenge for two reasons. First, these approaches train complex models on a plethora of features, which calls into question their computational efficiency for real-time deployment. Second, they require sizeable, manually annotated data sets from the same context, and annotating large data sets is extremely time-consuming, error-prone and cumbersome. This paper proposes a parsimonious and practical approach to offensive speech detection that alleviates these challenges. The approach is parsimonious because, through a comprehensive evaluation of commonly used machine learning models (Logistic Regression, Random Forest, Neural Networks) on two public-domain data sets, it demonstrates that a simple Logistic Regression model trained on unigram frequency counts can detect hate speech with high accuracy of over 90%. It is practical because it demonstrates how an existing labeled training data set can be used to train models that detect offensive content in a completely unseen data set with moderate accuracy. Based on these findings, the paper offers guidance on the characteristics that may be desirable in benchmark training data sets for offensive speech detection.
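The pipeline described in the abstract (a Logistic Regression classifier over unigram frequency counts) can be sketched roughly as below. This is a minimal illustration using scikit-learn, not the authors' actual code; the toy data, train/test split, and hyperparameters are assumptions made purely for demonstration.

# Minimal sketch: Logistic Regression on unigram frequency counts.
# Data, split, and settings are illustrative assumptions, not the paper's setup.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical labeled tweets: 1 = offensive, 0 = not offensive.
texts = ["example tweet one", "example tweet two",
         "example tweet three", "example tweet four"]
labels = [0, 1, 0, 1]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=42)

# Unigram term counts (raw frequencies, no TF-IDF weighting).
vectorizer = CountVectorizer(ngram_range=(1, 1))
X_train_counts = vectorizer.fit_transform(X_train)
X_test_counts = vectorizer.transform(X_test)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train_counts, y_train)

print("Accuracy:", accuracy_score(y_test, clf.predict(X_test_counts)))

In practice, the cross-data-set scenario mentioned in the abstract would correspond to fitting the vectorizer and classifier on one labeled corpus and applying them, unchanged, to text from a different platform or collection.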
Published in: 2021 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS)
Date of Conference: 19-20 February 2021
Date Added to IEEE Xplore: 12 April 2021