Article

A Transparent Pipeline for Identifying Sexism in Social Media: Combining Explainability with Model Prediction

Department of Methodology and Statistics, Utrecht University, 3584 CS Utrecht, The Netherlands
*
Author to whom correspondence should be addressed.
Submission received: 20 August 2024 / Revised: 11 September 2024 / Accepted: 19 September 2024 / Published: 24 September 2024
(This article belongs to the Special Issue Data and Text Mining: New Approaches, Achievements and Applications)

Featured Application

We show illustrative examples of sexist language to describe the taxonomy and explainability analysis.

Abstract

In this study, we present a new approach that combines multiple Bidirectional Encoder Representations from Transformers (BERT) architectures with a Convolutional Neural Network (CNN) framework designed for sexism detection in text at a granular level. Our method relies on the analysis and identification of the most important terms contributing to sexist content using Shapley Additive Explanations (SHAP) values. This approach involves defining a range of Sexism Scores based on both model predictions and explainability, moving beyond binary classification to provide a deeper understanding of the sexism-detection process. Additionally, it enables us to identify specific parts of a sentence and their respective contributions to this range, which can be valuable for decision makers and future research. In conclusion, this study introduces an innovative method for enhancing the clarity of large language models (LLMs), which is particularly relevant in sensitive domains such as sexism detection. The incorporation of explainability into the model represents a significant advancement in this field. The objective of our study is to bridge the gap between advanced technology and human comprehension by providing a framework for creating AI models that are both efficient and transparent. This approach could serve as a pipeline for future studies to incorporate explainability into language models.

1. Introduction

The digital age has significantly transformed communication, especially with the emergence of online platforms facilitating real-time interactions. Although social media platforms can be useful tools for instant communication, they also facilitate the sharing of harmful content, such as hate speech and sexist comments. According to Kurasawa et al. [1], gender-based online violence (GBOV) involves digital forms of harassment and abuse targeted at women, posing a significant challenge to maintaining safe and respectful digital environments. Content regulations on these platforms [2], within the framework of the European Union’s Digital Services Act, highlight the challenge of effectively managing harmful content. The increasing rate of online sexism poses a serious threat to women’s mental health [3]. This challenge is worsened by social media platforms, where sexist ideologies can spread rapidly, leading to an increase in sexist comments [4].
Recognizing and addressing sexism in online spaces is crucial for advancing gender equality, as emphasized by Lee et al. [5] in their study on the role of online forums in movement dynamics and communication. Creating a safe digital space is about more than just protecting individuals from gender-based harm; it is also about fostering an environment where gender equality is possible. Identifying sexist content, on the other hand, is a challenging task. The sensitive and context-dependent nature of language presents challenges for accurately distinguishing between sexist and non-sexist content, as shown by Feng [6] in their development of a voting mechanism for identifying online sexist content. While studies such as Schütz et al. [7] and Ortiz [3] have extensively studied the detection of online sexism in various contexts, including non-English ones, these models often rely on a binary sexist/non-sexist classification [8,9,10].
To the best of our knowledge, no study has addressed the problem of sexism detection as a regression task. It is essential to address this gap for a thorough analysis that can effectively moderate content. Treating sexism detection as a regression task within a continuous range allows users and platform moderators to assess the intensity of sexism, recognizing that not all posts cause the same level of harm. This approach provides a more nuanced and detailed understanding than a binary classification, emphasizing the varying degrees of impact. Also, integrating explainable AI (XAI) techniques into sexism detection, as shown by Mehta and Passi [11] and Gil Bermejo et al. [12], can provide deeper insights into the rationale behind these classifications, thereby enhancing the transparency and trustworthiness of the models. This knowledge is essential for ensuring responsible and transparent content management on social media, particularly because these models often function as black boxes, making the integration of explainability crucial. The challenge lies in overcoming people’s reluctance to trust machine-based decisions, which often arises from their lack of understanding of how these decisions are made. Even though these models may achieve high performance, their decisions need to be reviewed to ensure that users are not unfairly blocked and to minimize false negatives, especially given the sensitivity of gender discrimination issues. The European Union’s General Data Protection Regulation (GDPR), with its emphasis on a “right to explanation” as highlighted by Hoofnagle et al. [13], underscores the need for clarity and transparency in these models, as argued by Mathew et al. [14]. A more detailed classification of sexist content allows policymakers to address online sexism with greater precision, reducing gender-based harm. This granular detection should be distinguished from explainability, as each serves a different but complementary purpose in managing online content.
While automated detection tools are crucial in identifying complex social issues such as sexism, recent models such as transformers function as black boxes, offering little to no insight into the reasoning behind their classifications. As highlighted in the review by Velankar et al. [15], and the analysis of online content challenges by Jiang [16], while the lack of transparency in these models does not necessarily affect their accuracy in detecting sexist content, the ‘black box’ nature of these models makes it difficult to understand their rationale and identify potential errors. This lack of transparency and explainability can lead to mistrust from users, which in turn might reduce their willingness to engage with or rely on the system, thereby undermining its overall effectiveness in detecting online sexism.
The need to understand the decision process of these models has led to the development of various techniques to make them more interpretable, as highlighted in recent surveys on the state of explainable natural language processing (XNLP) tasks [17,18]. XNLP offers an exciting response to the challenges of interpretability and trust in the context of online harm detection, including sexism detection. XNLP enables users to understand and critically evaluate the outcomes of automated detection systems by providing insights into the decision-making process of AI models. This is particularly important in sensitive fields like the detection of sexism, where the consequences of false positives or negatives extend beyond simple classification errors. In this scenario, a false negative (failure to identify a sexist comment) poses the risk of overlooking harmful behavior, whereas a false positive (incorrectly labeling a statement as sexist) may wrongly implicate individuals or content. These errors carry significant ethical and social consequences, in addition to being crucial from a data accuracy perspective.
Our research proposes the application of the XNLP technique to detect and understand sexism in social media posts. By prioritizing explainability, we aim to clarify the decision-making processes of algorithms, particularly when dealing with large language models (LLMs). Some researchers have tackled the problem of detecting sexist text using ensemble models, as shown by Mohammadi et al. [19] in their study on ensembling transformer models for identifying online sexism. Their approach combines multiple transformer models to improve detection accuracy. We also propose an ensemble method for this task; however, our method incorporates explainability techniques such as SHAP values to provide insights into the decision-making process of each model, offering a more transparent and interpretable solution compared to the previous work. The ensemble approach has advantages because it combines predictions from multiple pre-trained models, such as Bidirectional Encoder Representations from Transformers (BERT) and DistilBERT, enhancing the overall performance of the system. In summary, our study aims to contribute to the growing body of research on sexism detection by providing a novel ensemble approach that integrates technical accuracy with explainability.
The remainder of this paper is organized as follows: we start with a review of the most relevant works in Section 2. Then, we describe the data and our methodology in Section 3, which includes the data analysis, data preparation and augmentation, and our ensemble model design; we then introduce our approach to explainability analysis in Section 3.5. Subsequently, we discuss the experimentation and results in Section 4, covering the hyperparameter tuning of the model and the training progress, along with several examples of how explainability impacts the performance metrics. The paper concludes with the discussion, conclusions, and future work in Section 5.

2. Related Work

The detection of online sexism has received much attention in recent years. Some approaches have relied on machine learning algorithms and hand-crafted features to determine whether text is sexist or not [20,21]. These methods, however, often do not capture the context that is crucial for such a complex task. Lopez-Lopez et al. [22] proposed integrating transformer-based models with traditional machine learning approaches for detecting sexism in social networks, highlighting the dynamic nature of these methodologies. The quality and diversity of datasets, whether exclusively in English or multilingual, can create challenges that may affect the applicability of the results. This issue is addressed further by Samory et al. [23], who study sexism detection using psychological factors and adversarial samples to improve dataset annotation for more reliable sexism-detection methods.
Significant improvements in detecting online sexism have been made in recent years [24,25]. This investigation has extended beyond English, as evidenced by the work of Mohammadi et al. [19], Jiang et al. [26], and de Paula et al. [9], who applied LLMs to datasets in multiple languages. Despite these advancements, a common issue persists: all of these studies focus on a binary classification (sexist or not sexist) without delving into the underlying rationale behind why certain texts are perceived to contain sexist content. Das et al. [27] presented a method for addressing this limitation by combining user gender information with textual features, which improves classification performance over the typical binary categorization. Furthermore, this need led to competitions such as Explainable Detection of Online Sexism (https://codalab.lisn.upsaclay.fr/competitions/7124, (accessed on 15 May 2024)), which attempted to address this issue through supervised classification into more detailed categories [28]. Tasneem et al. [29], Kiritchenko et al. [30], and Lamsiyah et al. [31] have investigated the effectiveness of transfer learning models for explainable online sexism detection, the ethical and human rights perspective in confronting online abuse, and semi-supervised multi-task learning for explainable online sexism detection, respectively.
Recent advances in natural language processing (NLP) and machine learning have provided opportunities for more effective detection of online sexism. Deep learning, context-aware algorithms, and lexicon-based sentiment analysis have been employed to enhance the discernment of the nuances in sexist language. For instance, the utilization of BERT for sentiment analysis within textual data [32], alongside the refinement of sentiment-analysis methodologies to incorporate finer details, exemplifies notable advancements within this domain [33].
Furthermore, the introduction of a sentiment- and context-aware recurrent convolutional neural network (CNN) highlights advancements in handling complex language structures [34]. However, these methods face challenges, particularly in dealing with ambiguous or indirect expressions of sexism. In addition, it is crucial to address the issue of bias in the training data used to train such models, as it can significantly influence their performance and reliability.
Techniques such as Shapley Additive Explanations (SHAP) [35] and Local Interpretable Model-agnostic Explanations (LIME) [36] have increased the popularity of explainability in AI, especially in NLP. These methods enable more transparency and understanding of model predictions by offering insights into the machine learning models’ decision-making process. SHAP has been used, for example, to interpret complex NLP models and to provide more detailed knowledge of how specific features affect model output. LIME, in turn, has played a crucial role in clarifying the logic underlying individual predictions, facilitating comprehension of and confidence in AI judgments.
Studies that emphasize the importance of human–AI collaboration show that human involvement in AI-driven content filtering has gained increasing attention. For example, Lai et al. [37] and Molina and Sundar [38] show how human oversight can drastically minimize errors in AI moderation systems. Furthermore, the study by Rallabandi et al. [39] emphasizes the significance of balancing human moderation with algorithmic action in content moderation, especially in sensitive areas such as online harm detection.
Our study combines explainability with the power of transformer-based models to advance the field of sexism detection. In contrast to previous studies that focused on performance, our approach emphasizes understanding the decision process in sexism detection. This enables us not only to detect sexism in various forms but also to provide clear explanations for these classifications.

3. Materials and Methods

3.1. Data

For our study, we utilized the data provided by SemEval Task 10 [28], which includes labeled datasets from Gab and Reddit (https://github.com/rewire-online/edos (accessed on 15 July 2024)). Gab is recognized as a social networking platform supporting free speech and harboring a diverse user base, thereby hosting content spanning a wide range of topics. Conversely, Reddit serves as a network of communities wherein individuals engage with topics aligning with their interests, hobbies, and passions. The labeled dataset consists of 14,000 posts. The tasks include a binary classifier for categorizing posts as sexist or non-sexist (Subtask A), a four-class classification system for sexist posts (Subtask B), and an 11-class system for more specific labels of sexism (Subtask C). These subtasks ensure that texts labeled as sexist are given specific reasons for the classification. The overview of tasks and datasets is shown in Table 1.

3.2. Data Preparation and Augmentation

The first step in the data-preparation process was to convert all of the text to lowercase so that the analysis could recognize words uniformly. Additionally, we removed URLs, special characters, and punctuation. Then, we applied tokenization and lemmatization to divide the text into units and reduce words to their base forms.
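The paper does not list the exact preprocessing code; the following minimal Python sketch, assuming NLTK for tokenization and lemmatization and simple regular expressions for cleaning, illustrates the steps described above.

```python
# Minimal preprocessing sketch; the library choice (NLTK) and regex patterns are assumptions.
import re
import nltk
from nltk.stem import WordNetLemmatizer

# One-time setup: nltk.download("punkt"); nltk.download("wordnet")
lemmatizer = WordNetLemmatizer()

def preprocess(text):
    text = text.lower()                                   # uniform casing
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)    # remove URLs
    text = re.sub(r"[^a-z\s]", " ", text)                 # remove special characters and punctuation
    tokens = nltk.word_tokenize(text)                     # tokenization
    return [lemmatizer.lemmatize(tok) for tok in tokens]  # lemmatization

print(preprocess("Check https://example.com: Women DESERVE equal pay!!!"))
```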
We applied various augmentation strategies due to limited data availability in certain classes and the necessity to develop more robust models. Using techniques like ‘random deletion’ to remove random words trained the model to understand incomplete data, and techniques such as ‘synonym replacement’ for word and phrase replacement expanded the model’s understanding of context. Different spellings were also used to simulate errors found in the real world. The dataset was significantly imbalanced, particularly in Task A, where 76% of the training data was ‘Not sexist’ (10,602 instances) and only 24% was ‘Sexist’ (3398 instances). This imbalance was similarly reflected in the test data and persisted across Task B and Task C categories. For example, some classes in Task C, like ‘Threats of harm’, had only 56 training instances, representing a mere 2% of the data. The training dataset contained 14,000 records. After augmentation, this number increased to 42,000 records, significantly enhancing the data volume available for training.
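A minimal sketch of two of these augmentations (random deletion and WordNet-based synonym replacement) is shown below; the exact implementation used in the study is not specified, so the function names and probabilities here are illustrative.

```python
# Illustrative augmentation sketch; parameters (p, n) are assumptions, not the study's settings.
import random
from nltk.corpus import wordnet  # requires nltk.download("wordnet")

def random_deletion(tokens, p=0.1):
    # Drop each token with probability p so the model learns from incomplete inputs.
    kept = [t for t in tokens if random.random() > p]
    return kept if kept else tokens

def synonym_replacement(tokens, n=2):
    # Replace up to n tokens that have WordNet synonyms to vary the wording.
    tokens = tokens[:]
    candidates = [i for i, t in enumerate(tokens) if wordnet.synsets(t)]
    random.shuffle(candidates)
    for i in candidates[:n]:
        lemmas = wordnet.synsets(tokens[i])[0].lemma_names()
        if lemmas:
            tokens[i] = lemmas[0].replace("_", " ").lower()
    return tokens

sample = ["women", "do", "not", "belong", "in", "engineering"]
print(random_deletion(sample), synonym_replacement(sample))
```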
To further address the class imbalance, we used back translation to increase the number of ‘sexist’ texts. This means we first translated the ‘sexist’ texts into Dutch and then back into English. Dutch was selected for this purpose based on previous research indicating its efficacy for such tasks and due to its linguistic similarity to English, as both languages belong to the West Germanic language family [40]. Figure 1 illustrates an example of our back translation augmentation approach.
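A sketch of the English–Dutch–English back-translation step is given below; the MarianMT checkpoints named here are an assumption, since the paper does not state which translation system was used.

```python
# Back-translation sketch (English -> Dutch -> English); model names are assumptions.
from transformers import MarianMTModel, MarianTokenizer

def load(name):
    return MarianTokenizer.from_pretrained(name), MarianMTModel.from_pretrained(name)

tok_en_nl, mt_en_nl = load("Helsinki-NLP/opus-mt-en-nl")
tok_nl_en, mt_nl_en = load("Helsinki-NLP/opus-mt-nl-en")

def translate(texts, tok, model):
    batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
    generated = model.generate(**batch)
    return [tok.decode(g, skip_special_tokens=True) for g in generated]

def back_translate(texts):
    dutch = translate(texts, tok_en_nl, mt_en_nl)   # English -> Dutch
    return translate(dutch, tok_nl_en, mt_nl_en)    # Dutch -> English

print(back_translate(["Women belong in the kitchen, not in the boardroom."]))
```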
To further address the class imbalance, we used additional techniques such as the stratified K-fold cross-validation (https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedKFold.html, (accessed on 15 May 2024)) method to preserve an accurate class distribution throughout the data segments. Strategies including RandomOverSampler, SMOTE (https://imbalanced-learn.org/stable/references/generated/imblearn.over_sampling.SMOTE.html, (accessed on 15 May 2024)), and differential class weighting were also used during training to address the class imbalance [41].
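The snippet below sketches how these pieces can fit together (stratified folds, balanced class weights, and SMOTE applied to the training fold only); the toy data and parameter values are illustrative, not the study’s exact configuration.

```python
# Imbalance-handling sketch; X and y below are toy stand-ins for the real features and labels.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.utils.class_weight import compute_class_weight
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(42)
X = rng.random((1000, 16))                       # placeholder feature matrix
y = np.array([0] * 760 + [1] * 240)              # 76% / 24% split, mirroring Task A

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for train_idx, val_idx in skf.split(X, y):
    X_tr, y_tr = X[train_idx], y[train_idx]

    # Differential class weighting, passed to the loss during training.
    weights = compute_class_weight("balanced", classes=np.unique(y_tr), y=y_tr)
    class_weight = dict(zip(np.unique(y_tr), weights))

    # Alternatively, oversample the minority class on the training fold only.
    X_res, y_res = SMOTE(random_state=42).fit_resample(X_tr, y_tr)
    print(class_weight, np.bincount(y_res))
    break  # one fold shown for brevity
```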

3.3. Methodology

In this study, we propose a novel methodology that uses explainability to define a Sexism Score, which can lead to more specific results and identification. This approach is structured into two parts, whose phases are shown in Figure 2.
As illustrated in Figure 2, the first part focuses on sexism detection and starts with dataset analysis. This phase is followed by preprocessing and data augmentation. Then, we develop an ensemble model by combining various versions of BERT with a CNN architecture (detailed in Section 3.4).
In the second part of our methodology, we generate SHAP values to identify the most influential tokens in the model’s decision-making process [35] and we integrate the explainability scores into model prediction to define the Sexism Score, which we elaborate more on in Section 3.5.

3.4. CustomBERT—Ensemble Model Design

We developed the CustomBERT model, which combines different BERT versions and a CNN, inspired by the way CNNs recognize similarities in images, as investigated by Xu and Vaziri-Pashkam [42]. This approach is motivated by the need to capture diverse linguistic patterns from different transformer models. Each BERT variant offers distinct strengths: BERT multilingual excels in handling various aspects of language, XLM-RoBERTa is particularly adept at cross-lingual tasks, and DistilBERT provides efficiency with minimal loss in performance. By using these models, our ensemble aims to capture a wider range of semantic features. This method identifies similarities between various pre-trained transformer models using text input, similar to how CNNs identify similarities in images. The combination of transformer models and a CNN introduces additional layers of representation learning, allowing the system to extract more nuanced textual features that a single model might miss. Figure 3 shows the detailed architecture of our approach.
By using the advantages of various transformer models, this structure aims to provide an improved and flexible method to evaluate new text. We used BERT’s bidirectional nature to fully understand the context of words in sentences [43]. After the pre-trained models process the input, their outputs undergo two subsequent processes. First, we perform concatenation by merging the outputs from the pre-trained models. Then, we pass this concatenated input through a CNN layer, which is designed to identify important features from both the text data and the token scores. The CNN is chosen here to effectively capture local dependencies in the concatenated transformer outputs, enabling the model to recognize intricate relationships within the text that are critical for tasks like sexism detection. The pseudocode of Algorithm 1 is as follows:
Algorithm 1 CustomBERT model for sexism detection.
Require: Input sentence
Ensure: Output classes
1:  function PretrainedLanguageModels(sentence)
2:      bert_output ← BERTModel(sentence)
3:      xlmroberta_output ← XLMRoBERTaModel(sentence)
4:      distilbert_output ← DistilBERTModel(sentence)
5:      return bert_output, xlmroberta_output, distilbert_output
6:  end function
7:  function CustomBERT(sentence)
8:      bert_output, xlmroberta_output, distilbert_output ←
9:          PretrainedLanguageModels(sentence)
10:     concatenated_outputs ← concatenate(bert_output, xlmroberta_output, distilbert_output)
11:     cnn_output ← Conv1D(filters = 64, kernel = 3, activation = relu)(concatenated_outputs)
12:     flattened_cnn_output ← flatten(cnn_output)
13:     output ← Dense(units = 1, activation = sigmoid)(flattened_cnn_output)
14:     return output
15: end function
16: final_output ← CustomBERT(input_sentence)
As shown in Algorithm 1, the CustomBERT model combines the outputs of various pre-trained transformer models with token scores before classification. It consists of BERT multilingual [43], XLM-RoBERTa [44], and DistilBERT, a smaller but efficient version of BERT [45]. The main task for these transformer models is to encode the input sentence into a dense vector representation, capturing contextualized meaning and semantic nuances. We combine the outputs from each transformer model by concatenating their vector representations, which are then processed through a Conv1D layer (https://keras.io/api/layers/convolution_layers/convolution1d/, (accessed on 10 June 2023)), followed by MaxPooling1D (https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPooling1D, (accessed on 10 June 2023)) and a flattening step. This stage primes the data for the subsequent dense layers, enabling binary and multiclass classification across diverse tasks.
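The following Keras sketch illustrates this architecture. It follows the layer settings stated in Algorithm 1 (Conv1D with 64 filters, kernel size 3, ReLU activation; sigmoid output) but is a simplified reconstruction rather than the authors’ released code; the pooling size, L2 strength, and per-encoder input handling are assumptions.

```python
# Simplified CustomBERT sketch; each encoder gets its own tokenized inputs because the
# three checkpoints use different vocabularies. Hyperparameters not stated in the paper
# (pool_size, L2 strength) are assumptions.
import tensorflow as tf
from transformers import TFAutoModel

MAX_LEN = 512
CHECKPOINTS = ["bert-base-multilingual-uncased", "xlm-roberta-base",
               "distilbert-base-multilingual-cased"]

inputs, encoded = [], []
for i, name in enumerate(CHECKPOINTS):
    ids = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name=f"input_ids_{i}")
    mask = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name=f"attention_mask_{i}")
    encoder = TFAutoModel.from_pretrained(name)
    encoded.append(encoder(ids, attention_mask=mask).last_hidden_state)
    inputs += [ids, mask]

# Concatenate the three contextual representations along the feature axis.
concat = tf.keras.layers.Concatenate(axis=-1)(encoded)

# Conv1D + MaxPooling1D + Flatten extract local patterns from the fused representation.
x = tf.keras.layers.Conv1D(filters=64, kernel_size=3, activation="relu")(concat)
x = tf.keras.layers.MaxPooling1D(pool_size=2)(x)
x = tf.keras.layers.Flatten()(x)
output = tf.keras.layers.Dense(
    1, activation="sigmoid", kernel_regularizer=tf.keras.regularizers.L2(0.01))(x)

model = tf.keras.Model(inputs=inputs, outputs=output)
model.compile(optimizer=tf.keras.optimizers.Adam(3e-5),
              loss=tf.keras.losses.BinaryCrossentropy(), metrics=["accuracy"])
model.summary()
```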

3.5. Explainability Analysis

In this section, we introduce the symbols and parameters used throughout our explainability methodology, followed by a detailed description of the techniques applied. Table 2 shows the definitions of symbols and parameters used in our model.
We utilized SHAP to integrate the explainability component into our methodology. These values are instrumental in understanding the contribution of each token to the model’s predictions. SHAP generates scores indicative of the importance of a token in the prediction [35]. SHAP values are based on cooperative game theory and provide a method to attribute the output of the model to its input features. The application of SHAP values allows us to identify the key factors influencing the classification process. By analyzing these values, we can assign scores to the most influential tokens based on their perceived impact, leading to a better understanding of the textual content.
The SHAP value for each token is calculated as follows: let $S_t$ denote the SHAP value for token $t$, $T$ be the set of all tokens, $T' \subseteq T \setminus \{t\}$ be a subset of tokens excluding $t$, and $f(T' \cup \{t\})$ and $f(T')$ be the model predictions with and without the token $t$, respectively. The SHAP value $S_t$ is computed as:

$$S_t = \sum_{T' \subseteq T \setminus \{t\}} \frac{|T'|! \, (|T| - |T'| - 1)!}{|T|!} \left[ f(T' \cup \{t\}) - f(T') \right]$$
Here, the term $\frac{|T'|! \, (|T| - |T'| - 1)!}{|T|!}$ represents the weight assigned to the difference in model outputs, ensuring that the contribution of token $t$ is fairly distributed among all possible combinations of tokens. This weight is derived from the concept of Shapley values in cooperative game theory, which ensures a fair distribution of the total gain (or loss) among all contributors.
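In practice, these values can be obtained with the shap library by wrapping the classifier in a prediction function and using a text masker. The sketch below uses a toy keyword-based scorer as a stand-in for the trained CustomBERT model, so the resulting numbers are only illustrative.

```python
# SHAP sketch with a toy stand-in scorer; in the real pipeline, predict_proba would wrap
# the trained CustomBERT model and its tokenizers.
import numpy as np
import shap

SEED_TERMS = {"women", "kitchen", "belong"}      # toy vocabulary standing in for the model

def predict_proba(texts):
    # Returns a pseudo "probability of sexist" per text: fraction of seed terms present.
    scores = []
    for text in texts:
        toks = text.lower().split()
        scores.append(sum(tok in SEED_TERMS for tok in toks) / max(len(toks), 1))
    return np.array(scores)

explainer = shap.Explainer(predict_proba, shap.maskers.Text())
explanation = explainer(["Women belong in the kitchen."])

# One contribution per token: positive values push the prediction towards "sexist".
for token, value in zip(explanation[0].data, explanation[0].values):
    print(f"{token!r}: {float(value):+.4f}")
```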
To determine the most influential tokens based on SHAP values, we first calculate the SHAP importance for each token $t$ across all sentences in which the token appears, indexed by $i = 1, \ldots, N_t$. Here, $S_t^{(i)}$ denotes the SHAP value for token $t$ in the $i$-th sentence. The SHAP importance for token $t$, denoted $\mathrm{SI}_t$, is computed as:

$$\mathrm{SI}_t = \frac{1}{N_t} \sum_{i=1}^{N_t} \left| S_t^{(i)} \right| \cdot \mathbb{I}(y_i = \hat{y}_i)$$

In this formula, $N_t$ is the number of sentences in which token $t$ appears, and $S_t^{(i)}$ is the SHAP value for token $t$ in the $i$-th sentence. The term $\mathbb{I}(y_i = \hat{y}_i)$ is an indicator function that equals 1 if the predicted class $\hat{y}_i$ matches the true class $y_i$, and 0 otherwise. This ensures that we only consider SHAP values from sentences where the model’s prediction is correct, thereby focusing on the tokens that truly influence accurate predictions.
We remove data points that fall outside 99.7% of the SHAP value distribution to exclude outliers. This step ensures that our analysis focuses on the most representative data. Specifically, we retain only the SHAP scores $s$ that satisfy the following condition, where $\mu$ is the mean and $\sigma$ is the standard deviation of the SHAP scores:

$$\mu - 3\sigma \leq s \leq \mu + 3\sigma$$
After filtering outliers, we normalize the remaining SHAP values to derive the importance ratio for each token. The importance ratio for token $t$ is defined as:

$$\mathrm{IR}_t = \frac{\mathrm{SI}_t}{\sum_{k \in T} \mathrm{SI}_k}$$
This step converts the SHAP values into a proportional format, where each ratio represents the token’s share of the total impact on the model. Following the normalization, we calculate the cumulative importance of the tokens. Let $K$ be the total number of tokens sorted by descending importance, and $\mathrm{IR}_i$ be the importance ratio of the $i$-th token. The cumulative importance of the first $k$ tokens is given by:

$$\mathrm{CI}_k = \sum_{i=1}^{k} \mathrm{IR}_i \quad \text{such that} \quad \mathrm{CI}_k \leq T_c$$
We establish a threshold, $T_c = 0.95$, to select tokens based on their importance. Tokens are selected such that their cumulative importance is less than or equal to the threshold $T_c$, focusing on those that contribute to the first 95% of total SHAP importance.
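A compact sketch of this selection procedure (aggregation, 3σ filtering, normalization, and the 95% cumulative cut-off) is given below, with toy token-level SHAP records; the point at which the outlier filter is applied is one plausible reading of the description above.

```python
# Token-selection sketch; the toy records below are illustrative, not real SHAP output.
import pandas as pd

def select_tokens(records, t_c=0.95):
    """records: (token, shap_value, prediction_correct) tuples, one per token occurrence."""
    df = pd.DataFrame(records, columns=["token", "shap", "correct"])
    df = df[df["correct"]]                                         # keep correct predictions only

    mu, sigma = df["shap"].mean(), df["shap"].std()
    df = df[df["shap"].between(mu - 3 * sigma, mu + 3 * sigma)]    # 3-sigma outlier filter

    importance = df.groupby("token")["shap"].apply(lambda s: s.abs().mean())  # SI_t
    ratio = (importance / importance.sum()).sort_values(ascending=False)      # IR_t
    return ratio[ratio.cumsum() <= t_c]                           # tokens within the 95% cut-off

records = [("women", 0.42, True), ("the", 0.01, True), ("whore", 0.55, True),
           ("like", 0.08, True), ("pussy", 0.47, False)]
print(select_tokens(records))
```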
The selected tokens are then employed to calculate a ‘Sexism Score’ for each text entry in our dataset. This score combines the SHAP scores with a modified predicted hard label, Label($i$), which is set to $-1$ for ‘NO’ predictions and $1$ for ‘YES’ predictions:

$$\text{Label}(i) = \begin{cases} 1 & \text{if prediction} = \text{YES} \\ -1 & \text{if prediction} = \text{NO} \end{cases}$$
Let $\mathrm{SS}(i)$ be the Sexism Score for sentence $i$, derived from the summed SHAP importance values of the selected tokens, and Label($i$) be the modified label indicator. Denoting by $T_i$ the set of selected tokens appearing in sentence $i$, the Sexism Score for sentence $i$ is calculated as follows:

$$\mathrm{SS}(i) = \sum_{t \in T_i} \mathrm{SI}_t \times \text{Label}(i)$$
After calculating the Sexism Scores for the training dataset, we proceed to evaluate the test dataset. For each sentence in the test dataset, we utilize the trained model to generate predictions. Using the SHAP scores computed during the training phase, we calculate the Sexism Score for each sentence in the test dataset. This is achieved by summing the SHAP values of the selected effective tokens based on Table A1.
Subsequently, we categorize the test dataset into different bins based on the calculated Sexism Scores. This binning allows us to compare the model’s performance across varying levels of detected sexism. By analyzing the model’s predictive accuracy and other performance metrics within these bins, we can assess how well the model generalizes to new data, particularly in identifying and handling sexist content. The overall steps of the test dataset evaluation and performance comparison are shown below:
  • Model Prediction on Test Dataset: For each sentence $i$ in the test dataset, use the trained model to generate predictions $\hat{y}_i^{\text{test}}$.
  • Sexism Score Calculation for Test Dataset: Calculate the Sexism Score $\mathrm{SS}_{\text{test}}(i)$ for each sentence $i$ in the test dataset using the SHAP scores from the training dataset. This is achieved as follows:
    $$\mathrm{SS}_{\text{test}}(i) = \sum_{t \in T_i} \mathrm{SI}_t \times \text{Label}(\hat{y}_i^{\text{test}})$$
    where $\mathrm{SI}_t$ are the SHAP importance scores from the training dataset, and Label($\hat{y}_i^{\text{test}}$) is the predicted label for the test sentence $i$.
  • Binning Based on Sexism Scores: Divide the test dataset into bins based on the calculated Sexism Scores $\mathrm{SS}_{\text{test}}(i)$. Define bins $B_k$ such that:
    $$B_k = \{\, i \mid \alpha_{k-1} \leq \mathrm{SS}_{\text{test}}(i) < \alpha_k \,\}$$
    where $\alpha_k$ are the thresholds for the bins.
  • Performance Comparison: For each bin B k , evaluate the model’s performance by calculating metrics such as accuracy, precision, recall, and F1 score. Compare these metrics across different bins to assess the model’s performance in handling varying levels of detected sexism.
By following these steps, we can understand the model’s effectiveness and robustness in identifying and handling sexist content in the test dataset, highlighting any potential biases or areas for improvement.
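The steps above can be sketched in a few lines of Python; the bin edges, toy importance values, and toy test rows below are illustrative and do not correspond to the bins reported in Table 8.

```python
# Test-time scoring and binned evaluation sketch; all numbers here are toy values.
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score, f1_score

importance = {"women": 0.012, "whore": 0.010, "kitchen": 0.008}   # SI_t from training (toy)

def sexism_score(tokens, pred_label):
    # SS_test(i): summed training-set importances of selected tokens, signed by the prediction.
    sign = 1 if pred_label == 1 else -1
    return sign * sum(importance.get(tok, 0.0) for tok in tokens)

test = pd.DataFrame({
    "tokens": [["women", "belong", "kitchen"], ["nice", "weather", "today"], ["whore"]],
    "y_true": [1, 0, 1],
    "y_pred": [1, 0, 1],
})
test["score"] = [sexism_score(t, p) for t, p in zip(test["tokens"], test["y_pred"])]

# Bin by Sexism Score and report per-bin metrics.
test["bin"] = pd.cut(test["score"], bins=[-np.inf, -0.005, 0.005, np.inf],
                     labels=["negative", "near-zero", "positive"])
for name, group in test.groupby("bin", observed=True):
    print(name, "accuracy:", accuracy_score(group["y_true"], group["y_pred"]),
          "F1:", f1_score(group["y_true"], group["y_pred"], zero_division=0))
```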

4. Results

4.1. Exploratory Data Analysis (EDA)

In this part, we analyze the data by examining the text length distribution, frequency of unique words and n-grams across different categories of sexist texts, and sentiment analysis of common word pairs within each category. We aim to uncover linguistic patterns and characteristics that distinguish sexist texts from non-sexist ones. First, we looked at the relationship between text length and the presence of sexist content. We calculated the text length based on the number of characters in each text, after removing stop words, punctuation, etc. We divided the text length into five ranges, [2–50, 51–100, 101–150, 151–200, 201+], to simplify the analysis. A density distribution of text lengths within each label category (sexist or non-sexist) is shown in Figure 4.
We used several statistical methods, including logistic regression, chi-square tests, and t-tests, to analyze how text length influences sexist labeling. Table 3 reports the logistic regression results, and Table 4 reports the chi-square and t-test results. All three tests show that longer texts are more likely to be labeled as sexist; however, the logistic regression yields a low R-squared value (near 0.004), suggesting that text length alone is not a strong predictor.
Based on the results above, with p-values of 4.98 × 10^−14, 1.83 × 10^−13, and 4.21 × 10^−14 for the logistic regression, chi-square test, and t-test, respectively, it is evident that text length is a significant factor in determining whether a text is labeled as sexist. All three tests consistently indicate a strong and statistically significant relationship between text length and sexist labeling.
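The three tests can be reproduced with standard Python tooling as sketched below; the random toy data means the printed p-values will not match the ones reported above.

```python
# Length-vs-label test sketch (logistic regression, chi-square, t-test) on toy data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({"length": rng.integers(2, 250, size=500),
                   "sexist": rng.integers(0, 2, size=500)})

# Logistic regression: P(sexist) ~ text length
logit = sm.Logit(df["sexist"], sm.add_constant(df["length"])).fit(disp=0)
print("logit p:", logit.pvalues["length"], "pseudo R2:", logit.prsquared)

# Chi-square test on binned lengths vs label
df["length_bin"] = pd.cut(df["length"], bins=[1, 50, 100, 150, 200, np.inf])
chi2, p_chi, _, _ = stats.chi2_contingency(pd.crosstab(df["length_bin"], df["sexist"]))
print("chi-square p:", p_chi)

# Independent-samples t-test on lengths of sexist vs non-sexist texts
t, p_t = stats.ttest_ind(df.loc[df["sexist"] == 1, "length"],
                         df.loc[df["sexist"] == 0, "length"])
print("t-test p:", p_t)
```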
To further analyze the linguistic features of sexist content, we intend to understand the typical language used in it. We implemented n-gram analysis to identify common bigrams and unique words in sexist texts. Figure 5 and Figure 6 show the top 20 unique words and bigrams in sexist texts.
As we can see in Figure 5 and Figure 6, the word ‘women’ was the most frequently occurring word in sexist texts, highlighting the targeted nature of these texts. We also notice that some words indicate aggressive and derogatory language. In Task B, the sexist texts are divided into four main categories: ‘threats, plans to harm, and incitement’, ‘derogation’, ‘animosity’, and ‘prejudiced discussions’. Figure 7 demonstrates the proportions of different types of sexist content.
As we can see, ‘derogation’ was the most prevalent, comprising 46.8% of the sexist texts. In Figure 8, we identify the top unique words for each category of sexist content to understand the common words associated with each category. In category 1, common words included ‘women’, ‘beat’, and ‘shit’, reflecting violent and harmful intentions. In category 2, frequent words were ‘women’, ‘don’t’, and ‘want’, indicating derogatory and dismissive language. In category 3, words like ‘women’, ‘like’, and ‘bitch’ were prevalent, showing animosity. For category 4, top words included ‘women’, ‘years’, and ‘rape/sexual/divorce’, indicating prejudiced discussions that often revolve around stereotypes and harmful myths.
Here, we analyze the sentiment polarity of sentences containing top word pairs in each category of sexist content to understand the emotional impact of sexist content and how it varies across different types of sexism. Sentiment polarity refers to the classification of a sentence as positive, negative, or neutral, which allows us to assess the emotional tone associated with the content [46]. Figure 9 shows the distribution of sentiment scores within each category.
Based on the sentiment polarity distribution shown in the box plot, the analysis of the minimum and maximum scores within the interquartile range (IQR) reveals significant insights. For the first category, the IQR ranges from −0.2 to 0.0, indicating that most comments are moderately negative to neutral, reflecting the mixed nature of this category. In the derogation category, the IQR spans from −0.3 to 0.09, showing that most comments are generally negative to slightly positive, suggesting derogatory remarks often contain a blend of sentiments. Animosity has an IQR from −0.1 to 0.089, indicating that while extreme sentiments exist, the majority of comments are slightly negative to neutral. Prejudiced discussions have the narrowest IQR from 0.0 to 0.202, suggesting that sentiments within this category are more consistently neutral to positive. These IQRs reveal that while extreme sentiments exist, the bulk of comments tend to be less extreme and range from moderately negative to neutral, with derogation and threats categories showing the most variability in negative sentiments.
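Sentiment polarity for such sentences can be computed with an off-the-shelf tool; the TextBlob-based sketch below is an assumption about tooling, since the paper does not name the sentiment library it used.

```python
# Sentence-level sentiment polarity sketch; TextBlob is an assumed (not confirmed) choice.
from textblob import TextBlob

examples = {
    "threats / incitement": "Someone should beat some sense into these women.",
    "derogation": "Women don't want to work hard, they just want attention.",
    "prejudiced discussion": "Women just aren't built for leadership roles.",
}

for category, sentence in examples.items():
    polarity = TextBlob(sentence).sentiment.polarity   # ranges from -1 (negative) to +1 (positive)
    print(f"{category}: polarity = {polarity:+.3f}")
```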

4.2. Model Training and Optimization

The experimental steps of this study followed a progressive approach, initially based on the methodology introduced by Mohammadi et al. [47]. We further tested this approach using two datasets: EXIST 2023 (http://nlp.uned.es/exist2023/, (accessed on 10 April 2023)) and EDOS (https://github.com/rewire-online/edos, (accessed on 15 July 2024)). While EDOS mainly focuses on content from Gab and Reddit, EXIST is based on Twitter, a more mainstream and diverse platform. This combination of datasets allows us to test the model’s ability to generalize across both niche and widely used social media environments. The results of the model on the EXIST dataset can be found in [47]. However, although the first task in both datasets is similar (sexism detection), we did not compare the results due to the completely different dataset compositions and annotation processes. Initially, we started with more traditional models as our baseline. Over time, we improved and tested these models with different variations and techniques until we developed the final structure of our model. Furthermore, upon publication of this article, the comprehensive final model, accompanied by all requisite materials, will be accessible on GitHub (https://github.com/hadimh93/Explainable-Sexim-Detection (accessed on 10 August 2024)).
During the training phase, we used the Adam optimizer [48]. We selected the model’s hyperparameters, such as the learning rate and batch size, using a random search approach with Keras Tuner (https://blog.tensorflow.org/2020/01/hyperparameter-tuning-with-keras-tuner.html, (accessed on 10 June 2023)). The learning rate was set at 3 × 10^−5 and controlled by a TensorFlow-based learning rate scheduler (https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/LearningRateScheduler, (accessed on 10 June 2023)) following a cosine decay schedule (https://keras.io/api/optimizers/learning_rate_schedules/cosine_decay/, (accessed on 10 June 2023)) with a 200-step warm-up phase, calibrated to the dataset and the number of epochs. To enhance efficiency and learning, we used an early stopping mechanism (https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping, (accessed on 10 June 2023)) based on validation loss to prevent overfitting [49], together with mixed precision training (https://www.tensorflow.org/guide/mixed_precision, (accessed on 10 June 2023)). Our preprocessing step used a tokenization limit of 512.
The binary cross-entropy loss function (https://www.tensorflow.org/api_docs/python/tf/keras/losses/BinaryCrossentropy, (accessed on 10 June 2023)) and the Adam optimizer (https://keras.io/api/optimizers/adam/, (accessed on 10 June 2023)) were used for training, employing the ‘mixed float16’ precision training policy (https://keras.io/api/mixed_precision/, (accessed on 10 June 2023)). A custom function was developed for the model structure, employing transformers such as bert-base-multilingual-uncased, xlm-roberta-base, and distilbert-base-multilingual-cased. These transformers’ outputs were combined for binary classification with L2 regularization (https://www.tensorflow.org/api_docs/python/tf/keras/regularizers/L2, (accessed on 10 June 2023)). A comprehensive summary of the hyperparameters we examined is presented in Table 5.
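The configuration described in this subsection can be sketched as follows; values not stated in the text (steps per epoch, number of epochs, early-stopping patience) are placeholders.

```python
# Training-configuration sketch; only the learning rate (3e-5), the 200-step warm-up,
# cosine decay, early stopping, and mixed precision come from the text above.
import tensorflow as tf

tf.keras.mixed_precision.set_global_policy("mixed_float16")        # mixed precision training

BASE_LR, WARMUP_STEPS = 3e-5, 200
STEPS_PER_EPOCH, EPOCHS = 1000, 5                                  # placeholders

cosine = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=BASE_LR,
    decay_steps=STEPS_PER_EPOCH * EPOCHS - WARMUP_STEPS)

def lr_at(step):
    # Linear warm-up for the first 200 steps, cosine decay afterwards.
    if step < WARMUP_STEPS:
        return BASE_LR * (step + 1) / WARMUP_STEPS
    return float(cosine(step - WARMUP_STEPS))

callbacks = [
    tf.keras.callbacks.LearningRateScheduler(lambda epoch, lr: lr_at(epoch * STEPS_PER_EPOCH)),
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=2, restore_best_weights=True),
]

optimizer = tf.keras.optimizers.Adam(learning_rate=BASE_LR)
loss = tf.keras.losses.BinaryCrossentropy()
# model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=EPOCHS, callbacks=callbacks)
```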
In evaluating the performance of various models, we considered several key metrics, including accuracy, precision, recall, and F1 score across multiple tasks. Table 6 summarizes the results of our experiments, comparing the performance of different models on Tasks A (binary classification of posts as sexist or non-sexist), B (four-class classification of sexist posts), and C (11-class system for more specific labels of sexism).
According to Table 6, the CustomBERT model consistently outperforms the individual models across all tasks. For Task A, the CustomBERT model achieves the highest accuracy of 0.79, precision of 0.77, recall of 0.79, and F1 score of 0.76. Similarly, for Task B, the CustomBERT model leads with an accuracy of 0.71, precision of 0.69, recall of 0.71, and F1 score of 0.68. For Task C, the CustomBERT model again surpasses the others, showing superior performance with an accuracy of 0.67, precision of 0.65, recall of 0.67, and F1 score of 0.64.
The consistent performance improvements observed with the CustomBERT model highlight the benefits of an ensemble approach, effectively combining the strengths of various models to achieve better overall results. This ensemble strategy, therefore, was selected as the final model for our application due to its robust performance across multiple evaluation metrics and tasks.

4.3. Explainability Results

In this section, we explore the explainability of our model by analyzing the SHAP values to understand the contribution of individual tokens to the model’s predictions. Explainability is crucial for ensuring that our model’s decisions are transparent and interpretable, particularly in sensitive applications such as detecting sexist content.
Figure 10 illustrates the SHAP importance and cumulative importance of the top 20 tokens. These tokens significantly contribute to the model’s prediction outcomes, as described by Equations (1)–(5). Notably, the most effective tokens predominantly consist of offensive language directed towards women. Additionally, we computed the Sexism Score for texts labeled as ‘YES’ (indicative of sexism) using Equation (7). To provide a more comprehensive understanding, we present the statistical summary of the Sexism Scores for these texts in Table 7.
This statistical summary provides insights into the distribution of Sexism Scores among texts labeled as sexist. The mean score indicates a moderate level of sexism on average, as evidenced by the standard deviation. The median and percentile values offer additional details about the central tendency and variability of the scores. To take a closer look at the distribution of Sexism Scores, we created a histogram, as shown in Figure 11.
The histogram in Figure 11 depicts the density of Sexism Scores across the dataset. The x-axis represents the Sexism Scores, while the y-axis indicates the density of texts with those scores. Notably, the distribution features two prominent peaks, suggesting a bimodal distribution of Sexism Scores.
The first peak, centered around −0.4, corresponds to texts that have been labeled as ‘NO’ but contain tokens with relatively high SHAP importance in negative contexts, resulting in negative Sexism Scores. The second peak, centered around 0.4, represents texts labeled as ‘YES’ with high SHAP importance in positive contexts. Additionally, the histogram shows a notable gap around the zero mark, indicating a clear separation between texts identified as sexist and non-sexist by the model.
In this study, we opted to set the threshold at 0.95, indicating that we considered tokens contributing to 95% of the cumulative importance, based on Formula (4). Consequently, the number of selected tokens amounted to 197. With this configuration, we identified the most influential tokens and their contributions to the model’s decision-making process regarding sexist content. A comprehensive list of all effective tokens can be found in Appendix B.
Figure 12 shows the relationship between the threshold and the number of selected tokens. As the threshold increases, the number of tokens contributing to the cumulative importance also increases, with a marked rise observed near the higher thresholds. At a threshold of 0.95, we capture the most significant tokens influencing the model’s output.
Next, we will validate the effectiveness of the selected tokens and the threshold choice in accurately identifying sexist content. By ensuring that the selected tokens are both influential and relevant to the model’s predictions, we enhance the model’s interpretability and reliability.
To evaluate the impact of Sexism Scores on model performance, we segmented the data into bins based on Sexism Score ranges and calculated the performance metrics for each bin. The results for the test dataset are summarized in Table 8.
As depicted in Table 8, there is a positive correlation between the Sexism Score and the performance metrics. Lower and higher Sexism Scores generally correspond to better model performance, indicating increased confidence and accuracy in identifying strongly sexist content. This trend suggests that the model is more reliable in detecting content with higher Sexism Scores, which aligns with its design to emphasize the most impactful tokens.

4.4. Increasing Model Efficiency

To demonstrate the efficiency improvements achieved by our model, we compared its performance and runtime when processing the entire dataset versus only sentences with high Sexism Scores. Table 9 provides a detailed comparison of these scenarios.
As shown, by focusing on sentences with higher Sexism Scores, the runtime is reduced by 13% while maintaining comparable performance metrics. This significant reduction in processing time demonstrates that prioritizing sentences with higher Sexism Scores can enhance model efficiency without substantially compromising accuracy, precision, recall, or F1 score.
This approach leverages the insights gained from the explainability analysis, where tokens with higher SHAP values were identified as more impactful in the model’s decision-making process. By concentrating computational resources on these high-impact tokens, we achieve a more efficient processing pipeline. Details of the computing environment used for all experiments can be found in the Appendix A.

4.5. Usability for Decision Makers

The pipeline consists of two key elements: (1) the model’s prediction based on the annotated data, and (2) the SHAP value analysis highlighting the most influential tokens in the sentence. By introducing the Sexism Score range, which combines both predictions and influential tokens, decision makers are provided with a transparent view of why a certain sentence is classified as sexist. This can improve their ability to make informed decisions quickly, focusing on the most relevant content. As a result, organizations can have better confidence in their moderation efforts and lessen their dependency on costly, extensive manual annotation.
Moreover, this approach addresses one of the key limitations of traditional machine learning models—black-box decision making—by providing a human-interpretable explanation of how the model arrived at a decision. This transparency is essential for maintaining ethical standards and avoiding potential biases in automated content moderation, as well as mitigating the risk of false positives or negatives, which are particularly critical when dealing with sensitive content like sexism.
Also, human annotation, especially for sensitive topics like sexism, is a resource-intensive process, both in terms of time and cost. This approach offers a more transparent and interpretable pipeline where decision makers do not need to manually evaluate every sentence. Instead, they can focus on sentences flagged by the model with the highest Sexism Scores, and use SHAP values to validate the most significant parts of the text. This not only reduces the time required for manual review but also provides a clear rationale for each decision, enhancing trust in the system.
While this paper mainly evaluates the model on data from Gab and Reddit, future work will focus on validating the model across more platforms, to ensure its generalizability. This will help address concerns about cross-platform validation, expanding the model’s usability in diverse real-world settings. Additionally, by simplifying the complexity of the system and making the SHAP explanations more actionable, organizations without significant computational resources can implement the model more effectively, ensuring practical utility without sacrificing interpretability or performance.

5. Discussion and Conclusions

The present study introduces a new methodology for detecting sexism in textual content by using explainability to define a Sexism Score. This approach integrates ensemble modeling and SHAP values for understanding and identifying sexist language. Our methodology shows advancements in the field of sexism detection. The ensemble model design, CustomBERT, which combined various BERT versions with a CNN architecture, capitalized on the strengths of multiple transformer models, enhancing the overall accuracy and robustness of the system.
The explainability analysis using SHAP values provided a deeper understanding of the model’s decision-making process. By identifying the most influential tokens, we could assign a meaningful Sexism Score to each text entry, thereby offering a transparent and interpretable metric for sexism detection. This aspect is crucial, as it not only improves the trustworthiness of the model but also aids in highlighting specific elements within the text that contribute to its classification as sexist.
Experimental results indicated that our CustomBERT model outperforms individual transformer models across all tasks. Also, the implementation of Sexism Scores based on SHAP values showed a clear correlation between these scores and model performance. Texts with higher Sexism Scores were more reliably identified as sexist, highlighting the efficacy of our explainability-driven approach. Moreover, the efficiency improvements observed by prioritizing high-scoring sentences underscore the practical benefits of this methodology in real-world applications, where processing time and computational resources are critical considerations.
In conclusion, this study presents an interpretable framework for sexism detection in textual content. By integrating a sophisticated ensemble modeling approach, and a thorough explainability analysis, we have developed a model that provides valuable insights into its decision-making process. The introduction of Sexism Scores enhances the model’s transparency and interpretability, making it a valuable tool for both academic research and practical applications in combating online harassment and promoting respectful discourse.
Future work can explore the application of this methodology to other forms of hate speech and biased language, further refining the explainability components to address diverse linguistic and cultural contexts. Additionally, integrating user feedback into the explainability analysis could enhance the model’s adaptability and accuracy, ensuring its continued relevance and effectiveness in dynamic online environments.

Author Contributions

Conceptualization, H.M., A.G. and A.B.; Methodology, H.M., A.G. and A.B.; Resources, H.M.; Writing–original draft, H.M.; Writing–review & editing, A.G. and A.B.; Supervision, A.G. and A.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. This data can be found here: (https://github.com/rewire-online/edos).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI      Artificial Intelligence
BERT    Bidirectional Encoder Representations from Transformers
CNN     Convolutional Neural Network
LLM     Large Language Model
NLP     Natural Language Processing
SHAP    Shapley Additive Explanations
XAI     Explainable Artificial Intelligence
XNLP    Explainable Natural Language Processing

Appendix A. System Configuration

The experiments were conducted on a SURF (https://www.surf.nl/en, (accessed on 10 May 2023)) Azure cloud instance with the following configuration:
  • Operating System: Ubuntu 20.04 server
  • Instance Type: GPU 24 Core—220 GB RAM—1x A100 (Standard_NC24ads_A100_v4)

Appendix B. Effective Tokens

The following table lists all the effective tokens identified by our model:
Table A1. List of effective tokens.
Token | SHAP Importance | Importance Ratio | Cumulative Importance
pussy | 1.52 × 10^20 | 1.16 × 10^−2 | 1.16 × 10^−2
yeah | 1.50 × 10^20 | 1.14 × 10^−2 | 2.30 × 10^−2
shut | 1.39 × 10^20 | 1.06 × 10^−2 | 3.36 × 10^−2
baby | 1.36 × 10^20 | 1.04 × 10^−2 | 4.39 × 10^−2
think | 1.32 × 10^20 | 1.01 × 10^−2 | 5.40 × 10^−2
theyre | 1.31 × 10^20 | 9.97 × 10^−3 | 6.40 × 10^−2
fight | 1.31 × 10^20 | 9.95 × 10^−3 | 7.39 × 10^−2
company | 1.31 × 10^20 | 9.94 × 10^−3 | 8.39 × 10^−2
slut | 1.30 × 10^20 | 9.87 × 10^−3 | 9.37 × 10^−2
post | 1.29 × 10^20 | 9.82 × 10^−3 | 1.04 × 10^−1
whore | 1.28 × 10^20 | 9.77 × 10^−3 | 1.13 × 10^−1
ive | 1.25 × 10^20 | 9.51 × 10^−3 | 1.23 × 10^−1
day | 1.24 × 10^20 | 9.46 × 10^−3 | 1.32 × 10^−1
hell | 1.23 × 10^20 | 9.39 × 10^−3 | 1.42 × 10^−1
ugly | 1.23 × 10^20 | 9.37 × 10^−3 | 1.51 × 10^−1
vagina | 1.20 × 10^20 | 9.12 × 10^−3 | 1.60 × 10^−1
just | 1.16 × 10^20 | 8.81 × 10^−3 | 1.69 × 10^−1
yes | 1.16 × 10^20 | 8.80 × 10^−3 | 1.78 × 10^−1
avoid | 1.14 × 10^20 | 8.71 × 10^−3 | 1.86 × 10^−1
try | 1.11 × 10^20 | 8.45 × 10^−3 | 1.95 × 10^−1
credit | 1.10 × 10^20 | 8.39 × 10^−3 | 2.03 × 10^−1
talk | 1.08 × 10^20 | 8.20 × 10^−3 | 2.12 × 10^−1
lady | 1.07 × 10^20 | 8.17 × 10^−3 | 2.20 × 10^−1
turn | 1.06 × 10^20 | 8.05 × 10^−3 | 2.28 × 10^−1
sidebar | 1.05 × 10^20 | 7.96 × 10^−3 | 2.36 × 10^−1
wine | 1.04 × 10^20 | 7.95 × 10^−3 | 2.44 × 10^−1
fucking | 1.04 × 10^20 | 7.93 × 10^−3 | 2.52 × 10^−1
work | 1.03 × 10^20 | 7.83 × 10^−3 | 2.59 × 10^−1
crap | 1.02 × 10^20 | 7.77 × 10^−3 | 2.67 × 10^−1
rape | 1.02 × 10^20 | 7.75 × 10^−3 | 2.75 × 10^−1
equality | 1.01 × 10^20 | 7.71 × 10^−3 | 2.83 × 10^−1
feminism | 9.91 × 10^19 | 7.55 × 10^−3 | 2.90 × 10^−1
raped | 9.90 × 10^19 | 7.54 × 10^−3 | 2.98 × 10^−1
youre | 9.71 × 10^19 | 7.40 × 10^−3 | 3.05 × 10^−1
far | 9.70 × 10^19 | 7.38 × 10^−3 | 3.13 × 10^−1
making | 9.63 × 10^19 | 7.34 × 10^−3 | 3.20 × 10^−1
little | 9.35 × 10^19 | 7.13 × 10^−3 | 3.27 × 10^−1
sex | 9.28 × 10^19 | 7.07 × 10^−3 | 3.34 × 10^−1
personality | 9.26 × 10^19 | 7.05 × 10^−3 | 3.41 × 10^−1
commie | 9.23 × 10^19 | 7.03 × 10^−3 | 3.48 × 10^−1
muslim | 9.15 × 10^19 | 6.97 × 10^−3 | 3.55 × 10^−1
face | 9.08 × 10^19 | 6.91 × 10^−3 | 3.62 × 10^−1
cunt | 9.01 × 10^19 | 6.86 × 10^−3 | 3.69 × 10^−1
maybe | 8.93 × 10^19 | 6.80 × 10^−3 | 3.76 × 10^−1
girl | 8.88 × 10^19 | 6.77 × 10^−3 | 3.82 × 10^−1
bitch | 8.84 × 10^19 | 6.73 × 10^−3 | 3.89 × 10^−1
liberty | 8.83 × 10^19 | 6.72 × 10^−3 | 3.96 × 10^−1
kino | 8.61 × 10^19 | 6.56 × 10^−3 | 4.02 × 10^−1
life | 8.58 × 10^19 | 6.53 × 10^−3 | 4.09 × 10^−1
wtf | 8.56 × 10^19 | 6.52 × 10^−3 | 4.16 × 10^−1
stop | 8.53 × 10^19 | 6.50 × 10^−3 | 4.22 × 10^−1
dad | 8.51 × 10^19 | 6.48 × 10^−3 | 4.28 × 10^−1
burning | 8.35 × 10^19 | 6.36 × 10^−3 | 4.35 × 10^−1
sending | 8.28 × 10^19 | 6.31 × 10^−3 | 4.41 × 10^−1
guess | 8.28 × 10^19 | 6.30 × 10^−3 | 4.47 × 10^−1
guy | 8.25 × 10^19 | 6.28 × 10^−3 | 4.54 × 10^−1
rapist | 8.11 × 10^19 | 6.17 × 10^−3 | 4.60 × 10^−1
tbh | 7.85 × 10^19 | 5.98 × 10^−3 | 4.66 × 10^−1
rule | 7.77 × 10^19 | 5.92 × 10^−3 | 4.72 × 10^−1
care | 7.72 × 10^19 | 5.88 × 10^−3 | 4.78 × 10^−1
run | 7.65 × 10^19 | 5.82 × 10^−3 | 4.84 × 10^−1
dirty | 7.44 × 10^19 | 5.67 × 10^−3 | 4.89 × 10^−1
islam | 7.09 × 10^19 | 5.40 × 10^−3 | 4.95 × 10^−1
having | 6.82 × 10^19 | 5.19 × 10^−3 | 5.00 × 10^−1
college | 6.79 × 10^19 | 5.17 × 10^−3 | 5.05 × 10^−1
feminist | 6.69 × 10^19 | 5.10 × 10^−3 | 5.10 × 10^−1
tell | 6.54 × 10^19 | 4.98 × 10^−3 | 5.15 × 10^−1
make | 6.49 × 10^19 | 4.94 × 10^−3 | 5.20 × 10^−1
come | 6.42 × 10^19 | 4.89 × 10^−3 | 5.25 × 10^−1
role | 6.40 × 10^19 | 4.88 × 10^−3 | 5.30 × 10^−1
way | 6.33 × 10^19 | 4.82 × 10^−3 | 5.35 × 10^−1
whale | 6.11 × 10^19 | 4.65 × 10^−3 | 5.39 × 10^−1
red | 6.11 × 10^19 | 4.65 × 10^−3 | 5.44 × 10^−1
attractive | 5.99 × 10^19 | 4.56 × 10^−3 | 5.48 × 10^−1
theyd | 5.90 × 10^19 | 4.50 × 10^−3 | 5.53 × 10^−1
logic | 5.76 × 10^19 | 4.39 × 10^−3 | 5.57 × 10^−1
white | 5.60 × 10^19 | 4.27 × 10^−3 | 5.62 × 10^−1
liberal | 5.58 × 10^19 | 4.25 × 10^−3 | 5.66 × 10^−1
forget | 5.54 × 10^19 | 4.22 × 10^−3 | 5.70 × 10^−1
treat | 5.43 × 10^19 | 4.14 × 10^−3 | 5.74 × 10^−1
trump | 5.42 × 10^19 | 4.13 × 10^−3 | 5.78 × 10^−1
oppression | 5.29 × 10^19 | 4.03 × 10^−3 | 5.82 × 10^−1
left | 5.29 × 10^19 | 4.03 × 10^−3 | 5.86 × 10^−1
expect | 5.24 × 10^19 | 3.99 × 10^−3 | 5.90 × 10^−1
funny | 5.24 × 10^19 | 3.99 × 10^−3 | 5.94 × 10^−1
different | 5.15 × 10^19 | 3.92 × 10^−3 | 5.98 × 10^−1
marriage | 5.10 × 10^19 | 3.89 × 10^−3 | 6.02 × 10^−1
natural | 5.10 × 10^19 | 3.88 × 10^−3 | 6.06 × 10^−1
hope | 5.09 × 10^19 | 3.88 × 10^−3 | 6.10 × 10^−1
cuck | 4.98 × 10^19 | 3.79 × 10^−3 | 6.14 × 10^−1
surprised | 4.98 × 10^19 | 3.79 × 10^−3 | 6.18 × 10^−1
selfish | 4.93 × 10^19 | 3.76 × 10^−3 | 6.21 × 10^−1
picture | 4.88 × 10^19 | 3.72 × 10^−3 | 6.25 × 10^−1
wonder | 4.88 × 10^19 | 3.71 × 10^−3 | 6.29 × 10^−1
getting | 4.78 × 10^19 | 3.64 × 10^−3 | 6.32 × 10^−1
did | 4.73 × 10^19 | 3.60 × 10^−3 | 6.36 × 10^−1
rt | 4.71 × 10^19 | 3.59 × 10^−3 | 6.40 × 10^−1
dead | 4.67 × 10^19 | 3.55 × 10^−3 | 6.43 × 10^−1
rest | 4.59 × 10^19 | 3.49 × 10^−3 | 6.47 × 10^−1
suck | 4.58 × 10^19 | 3.49 × 10^−3 | 6.50 × 10^−1
vote | 4.41 × 10^19 | 3.36 × 10^−3 | 6.53 × 10^−1
course | 4.38 × 10^19 | 3.34 × 10^−3 | 6.57 × 10^−1
number | 4.32 × 10^19 | 3.29 × 10^−3 | 6.60 × 10^−1
idiot | 4.29 × 10^19 | 3.27 × 10^−3 | 6.63 × 10^−1
hard | 4.15 × 10^19 | 3.16 × 10^−3 | 6.66 × 10^−1
soros | 4.11 × 10^19 | 3.13 × 10^−3 | 6.70 × 10^−1
report | 4.09 × 10^19 | 3.11 × 10^−3 | 6.73 × 10^−1
begin | 4.08 × 10^19 | 3.11 × 10^−3 | 6.76 × 10^−1
space | 4.08 × 10^19 | 3.11 × 10^−3 | 6.79 × 10^−1
away | 4.07 × 10^19 | 3.10 × 10^−3 | 6.82 × 10^−1
spend | 4.06 × 10^19 | 3.09 × 10^−3 | 6.85 × 10^−1
ball | 4.04 × 10^19 | 3.08 × 10^−3 | 6.88 × 10^−1
fucked | 3.98 × 10^19 | 3.03 × 10^−3 | 6.91 × 10^−1
monkey | 3.96 × 10^19 | 3.02 × 10^−3 | 6.94 × 10^−1
enemy | 3.96 × 10^19 | 3.01 × 10^−3 | 6.97 × 10^−1
wait | 3.95 × 10^19 | 3.01 × 10^−3 | 7.00 × 10^−1
wont | 3.89 × 10^19 | 2.96 × 10^−3 | 7.03 × 10^−1
waiting | 3.86 × 10^19 | 2.94 × 10^−3 | 7.06 × 10^−1
oh | 3.84 × 10^19 | 2.93 × 10^−3 | 7.09 × 10^−1
send | 3.83 × 10^19 | 2.91 × 10^−3 | 7.12 × 10^−1
id | 3.78 × 10^19 | 2.88 × 10^−3 | 7.15 × 10^−1
going | 3.77 × 10^19 | 2.87 × 10^−3 | 7.18 × 10^−1
wife | 3.74 × 10^19 | 2.85 × 10^−3 | 7.21 × 10^−1
foid | 3.66 × 10^19 | 2.79 × 10^−3 | 7.23 × 10^−1
thanks | 3.64 × 10^19 | 2.78 × 10^−3 | 7.26 × 10^−1
thing | 3.60 × 10^19 | 2.74 × 10^−3 | 7.29 × 10^−1
hate | 3.58 × 10^19 | 2.73 × 10^−3 | 7.32 × 10^−1
place | 3.49 × 10^19 | 2.66 × 10^−3 | 7.34 × 10^−1
current | 3.47 × 10^19 | 2.64 × 10^−3 | 7.37 × 10^−1
easily | 3.45 × 10^19 | 2.63 × 10^−3 | 7.40 × 10^−1
need | 3.41 × 10^19 | 2.60 × 10^−3 | 7.42 × 10^−1
really | 3.28 × 10^19 | 2.50 × 10^−3 | 7.45 × 10^−1
word | 3.26 × 10^19 | 2.48 × 10^−3 | 7.47 × 10^−1
thank | 3.25 × 10^19 | 2.48 × 10^−3 | 7.50 × 10^−1
say | 3.25 × 10^19 | 2.48 × 10^−3 | 7.52 × 10^−1
lying | 3.25 × 10^19 | 2.47 × 10^−3 | 7.55 × 10^−1
mean | 3.24 × 10^19 | 2.47 × 10^−3 | 7.57 × 10^−1
female | 3.18 × 10^19 | 2.42 × 10^−3 | 7.60 × 10^−1
state | 3.15 × 10^19 | 2.40 × 10^−3 | 7.62 × 10^−1
men | 3.14 × 10^19 | 2.39 × 10^−3 | 7.64 × 10^−1
actually | 3.13 × 10^19 | 2.38 × 10^−3 | 7.67 × 10^−1
ground | 3.13 × 10^19 | 2.38 × 10^−3 | 7.69 × 10^−1
12 | 3.12 × 10^19 | 2.38 × 10^−3 | 7.71 × 10^−1
deserve | 3.07 × 10^19 | 2.34 × 10^−3 | 7.74 × 10^−1
exist | 3.06 × 10^19 | 2.33 × 10^−3 | 7.76 × 10^−1
wouldnt | 3.02 × 10^19 | 2.30 × 10^−3 | 7.78 × 10^−1
hang | 2.98 × 10^19 | 2.27 × 10^−3 | 7.81 × 10^−1
reason | 2.97 × 10^19 | 2.26 × 10^−3 | 7.83 × 10^−1
lmao | 2.94 × 10^19 | 2.24 × 10^−3 | 7.85 × 10^−1
daily | 2.91 × 10^19 | 2.22 × 10^−3 | 7.87 × 10^−1
stand | 2.87 × 10^19 | 2.19 × 10^−3 | 7.90 × 10^−1
wall | 2.85 × 10^19 | 2.17 × 10^−3 | 7.92 × 10^−1
youll | 2.82 × 10^19 | 2.15 × 10^−3 | 7.94 × 10^−1
ha | 2.80 × 10^19 | 2.13 × 10^−3 | 7.96 × 10^−1
fact | 2.80 × 10^19 | 2.13 × 10^−3 | 7.98 × 10^−1
potential | 2.77 × 10^19 | 2.11 × 10^−3 | 8.00 × 10^−1
damage | 2.76 × 10^19 | 2.10 × 10^−3 | 8.02 × 10^−1
gender | 2.75 × 10^19 | 2.09 × 10^−3 | 8.04 × 10^−1
agree | 2.74 × 10^19 | 2.08 × 10^−3 | 8.07 × 10^−1
giving | 2.71 × 10^19 | 2.06 × 10^−3 | 8.09 × 10^−1
trap | 2.71 × 10^19 | 2.06 × 10^−3 | 8.11 × 10^−1
use | 2.70 × 10^19 | 2.06 × 10^−3 | 8.13 × 10^−1
imagine | 2.69 × 10^19 | 2.05 × 10^−3 | 8.15 × 10^−1
thats | 2.68 × 10^19 | 2.04 × 10^−3 | 8.17 × 10^−1
art | 2.68 × 10^19 | 2.04 × 10^−3 | 8.19 × 10^−1
ring | 2.66 × 10^19 | 2.02 × 10^−3 | 8.21 × 10^−1
lot | 2.62 × 10^19 | 2.00 × 10^−3 | 8.23 × 10^−1
matter | 2.62 × 10^19 | 1.99 × 10^−3 | 8.25 × 10^−1
smart | 2.61 × 10^19 | 1.99 × 10^−3 | 8.27 × 10^−1
chance | 2.61 × 10^19 | 1.99 × 10^−3 | 8.29 × 10^−1
inside | 2.60 × 10^19 | 1.98 × 10^−3 | 8.31 × 10^−1
difference | 2.59 × 10^19 | 1.97 × 10^−3 | 8.33 × 10^−1
shes | 2.58 × 10^19 | 1.96 × 10^−3 | 8.35 × 10^−1
trying | 2.56 × 10^19 | 1.95 × 10^−3 | 8.37 × 10^−1
user | 2.54 × 10^19 | 1.93 × 10^−3 | 8.39 × 10^−1
want | 2.53 × 10^19 | 1.93 × 10^−3 | 8.41 × 10^−1
cope | 2.51 × 10^19 | 1.91 × 10^−3 | 8.43 × 10^−1
future | 2.51 × 10^19 | 1.91 × 10^−3 | 8.44 × 10^−1
got | 2.50 × 10^19 | 1.90 × 10^−3 | 8.46 × 10^−1
mom | 2.50 × 10^19 | 1.90 × 10^−3 | 8.48 × 10^−1
rich | 2.44 × 10^19 | 1.85 × 10^−3 | 8.50 × 10^−1
femininity | 2.42 × 10^19 | 1.84 × 10^−3 | 8.52 × 10^−1
friend | 2.41 × 10^19 | 1.84 × 10^−3 | 8.54 × 10^−1
instead | 2.40 × 10^19 | 1.83 × 10^−3 | 8.56 × 10^−1
clinton | 2.39 × 10^19 | 1.82 × 10^−3 | 8.57 × 10^−1
boring | 2.35 × 10^19 | 1.79 × 10^−3 | 8.59 × 10^−1
immediately | 2.32 × 10^19 | 1.77 × 10^−3 | 8.61 × 10^−1
plan | 2.31 × 10^19 | 1.76 × 10^−3 | 8.63 × 10^−1
working | 2.31 × 10^19 | 1.76 × 10^−3 | 8.65 × 10^−1
sister | 2.30 × 10^19 | 1.75 × 10^−3 | 8.66 × 10^−1
toe | 2.28 × 10^19 | 1.73 × 10^−3 | 8.68 × 10^−1
behavior | 2.27 × 10^19 | 1.73 × 10^−3 | 8.70 × 10^−1
blame | 2.27 × 10^19 | 1.73 × 10^−3 | 8.71 × 10^−1
bos | 2.24 × 10^19 | 1.71 × 10^−3 | 8.73 × 10^−1
fought | 2.23 × 10^19 | 1.69 × 10^−3 | 8.75 × 10^−1
dick | 2.23 × 10^19 | 1.69 × 10^−3 | 8.77 × 10^−1
feminine | 2.20 × 10^19 | 1.68 × 10^−3 | 8.78 × 10^−1
mgtow | 2.18 × 10^19 | 1.66 × 10^−3 | 8.80 × 10^−1
mother | 2.14 × 10^19 | 1.63 × 10^−3 | 8.82 × 10^−1
thinking | 2.14 × 10^19 | 1.63 × 10^−3 | 8.83 × 10^−1
american | 2.13 × 10^19 | 1.62 × 10^−3 | 8.85 × 10^−1
bang | 2.09 × 10^19 | 1.59 × 10^−3 | 8.86 × 10^−1
self | 2.08 × 10^19 | 1.59 × 10^−3 | 8.88 × 10^−1
problem | 2.08 × 10^19 | 1.59 × 10^−3 | 8.90 × 10^−1
male | 2.06 × 10^19 | 1.57 × 10^−3 | 8.91 × 10^−1
young | 2.04 × 10^19 | 1.55 × 10^−3 | 8.93 × 10^−1
jew | 2.04 × 10^19 | 1.55 × 10^−3 | 8.94 × 10^−1
worst | 2.03 × 10^19 | 1.55 × 10^−3 | 8.96 × 10^−1
attention | 2.03 × 10^19 | 1.55 × 10^−3 | 8.97 × 10^−1
said | 2.01 × 10^19 | 1.53 × 10^−3 | 8.99 × 10^−1
start | 2.00 × 10^19 | 1.52 × 10^−3 | 9.00 × 10^−1
shit | 2.00 × 10^19 | 1.52 × 10^−3 | 9.02 × 10^−1
black | 1.98 × 10^19 | 1.51 × 10^−3 | 9.03 × 10^−1
waste | 1.98 × 10^19 | 1.50 × 10^−3 | 9.05 × 10^−1
fuck | 1.96 × 10^19 | 1.49 × 10^−3 | 9.06 × 10^−1
kavanaugh | 1.94 × 10^19 | 1.48 × 10^−3 | 9.08 × 10^−1
lol | 1.93 × 10^19 | 1.47 × 10^−3 | 9.09 × 10^−1
le | 1.93 × 10^19 | 1.47 × 10^−3 | 9.11 × 10^−1
wild | 1.93 × 10^19 | 1.47 × 10^−3 | 9.12 × 10^−1
lead1.88 × 10191.43 × 10−39.14 × 10−1
virgin1.82 × 10191.39 × 10−39.15 × 10−1
realize1.80 × 10191.37 × 10−39.16 × 10−1
dont1.79 × 10191.36 × 10−39.18 × 10−1
hanging1.77 × 10191.35 × 10−39.19 × 10−1
typical1.74 × 10191.33 × 10−39.21 × 10−1
die1.73 × 10191.32 × 10−39.22 × 10−1
given1.73 × 10191.32 × 10−39.23 × 10−1
west1.69 × 10191.29 × 10−39.24 × 10−1
believe1.67 × 10191.27 × 10−39.26 × 10−1
super1.64 × 10191.25 × 10−39.27 × 10−1
doe1.59 × 10191.21 × 10−39.28 × 10−1
hold1.57 × 10191.20 × 10−39.29 × 10−1
maga1.57 × 10191.20 × 10−39.31 × 10−1
end1.57 × 10191.20 × 10−39.32 × 10−1
lonely1.51 × 10191.15 × 10−39.33 × 10−1
voter1.50 × 10191.14 × 10−39.34 × 10−1
incel1.48 × 10191.13 × 10−39.35 × 10−1
probably1.44 × 10191.10 × 10−39.36 × 10−1
plot1.40 × 10191.07 × 10−39.37 × 10−1
leg1.40 × 10191.06 × 10−39.38 × 10−1
loser1.37 × 10191.05 × 10−39.39 × 10−1
watch1.37 × 10191.04 × 10−39.41 × 10−1
sexual1.37 × 10191.04 × 10−39.42 × 10−1
wow1.33 × 10191.01 × 10−39.43 × 10−1
money1.32 × 10191.01 × 10−39.44 × 10−1
killed1.27 × 10199.70 × 10−49.45 × 10−1
lulz1.23 × 10199.34 × 10−49.45 × 10−1
cost1.22 × 10199.26 × 10−49.46 × 10−1
support1.21 × 10199.23 × 10−49.47 × 10−1
reply1.21 × 10199.20 × 10−49.48 × 10−1
willing1.19 × 10199.06 × 10−49.49 × 10−1

References

  1. Kurasawa, F.; Rondinelli, E.; Kilicaslan, G. Evidentiary activism in the digital age: On the rise of feminist struggles against gender-based online violence. Inf. Commun. Soc. 2021, 24, 2174–2194. [Google Scholar] [CrossRef]
  2. Papaevangelou, C. ‘The non-interference principle’: Debating online platforms’ treatment of editorial content in the European Union’s Digital Services Act. Eur. J. Commun. 2023, 38, 466–483. [Google Scholar] [CrossRef]
  3. Ortiz, S.M. “If Something Ever Happened, I’d Have No One to Tell:” how online sexism perpetuates young women’s silence. Fem. Media Stud. 2023, 24, 119–134. [Google Scholar] [CrossRef]
  4. Aldana-Bobadilla, E.; Molina-Villegas, A.; Montelongo-Padilla, Y.; Lopez-Arevalo, I.; Sordia, O.S. A language model for misogyny detection in Latin American Spanish driven by multisource feature extraction and transformers. Appl. Sci. 2021, 11, 10467. [Google Scholar] [CrossRef]
  5. Lee, F.L.; Liang, H.; Cheng, E.W.; Tang, G.K.; Yuen, S. Affordances, movement dynamics, and a centralized digital communication platform in a networked movement. Inf. Commun. Soc. 2022, 25, 1699–1716. [Google Scholar] [CrossRef]
  6. Feng, C. A simple voting mechanism for online sexist content identification. arXiv 2021, arXiv:2105.14309. [Google Scholar]
  7. Schütz, M.; Boeck, J.; Liakhovets, D.; Slijepcevic, D.; Kirchknopf, A.; Hecht, M.; Bogensperger, J.; Schlarb, S.; Schindler, A.; Zeppelzauer, M. Automatic Sexism Detection with Multilingual Transformer Models, CoRR abs/2106.04908. 2021. Available online: https://arxiv.org/abs/2106.04908 (accessed on 8 February 2023).
  8. Kumar, R.; Pal, S.; Pamula, R. Sexism Detection in English and Spanish Tweets. In Proceedings of the IberLEF@SEPLN 2021, pp. 500–505. Available online: https://ceur-ws.org/Vol-2943/exist_paper17.pdf (accessed on 1 September 2023).
  9. de Paula, A.F.M.; da Silva, R.F.; Schlicht, I.B. Sexism prediction in Spanish and English tweets using monolingual and multilingual BERT and ensemble models. arXiv 2021, arXiv:2111.04551. [Google Scholar]
  10. Altin, L.S.M.; Saggion, H. Automatic detection of sexism in social media with a multilingual approach. In Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2021), Málaga, Spain, 21 September 2021; CEUR Workshop Proceedings: Aachen, Germany, 2021; pp. 415–419. [Google Scholar]
  11. Mehta, H.; Passi, K. Social media hate speech detection using explainable artificial intelligence (XAI). Algorithms 2022, 15, 291. [Google Scholar] [CrossRef]
  12. Gil Bermejo, J.L.; Martos Sánchez, C.; Vázquez Aguado, O.; García-Navarro, E.B. Adolescents, ambivalent sexism and social networks, a conditioning factor in the healthcare of women. Healthcare 2021, 9, 721. [Google Scholar] [CrossRef]
  13. Hoofnagle, C.J.; Van Der Sloot, B.; Borgesius, F.Z. The European Union general data protection regulation: What it is and what it means. Inf. Commun. Technol. Law 2019, 28, 65–98. [Google Scholar] [CrossRef]
  14. Mathew, B.; Saha, P.; Yimam, S.M.; Biemann, C.; Goyal, P.; Mukherjee, A. Hatexplain: A benchmark dataset for explainable hate speech detection. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 2–9 February 2021; Volume 35, pp. 14867–14875. [Google Scholar]
  15. Velankar, A.; Patil, H.; Joshi, R. A review of challenges in machine learning based automated hate speech detection. arXiv 2022, arXiv:2209.05294. [Google Scholar]
  16. Jiang, J.A. Identifying and addressing design and policy challenges in online content moderation. In Proceedings of the Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–7. [Google Scholar]
  17. Danilevsky, M.; Qian, K.; Aharonov, R.; Katsis, Y.; Kawas, B.; Sen, P. A survey of the state of explainable AI for natural language processing. arXiv 2020, arXiv:2010.00711. [Google Scholar]
  18. Søgaard, A. Explainable Natural Language Processing; Morgan & Claypool Publishers: San Rafael, CA, USA, 2021. [Google Scholar]
  19. Mohammadi, H.; Giachanou, A.; Bagheri, A. Towards robust online sexism detection: A multi-model approach with BERT, XLM-RoBERTa, and DistilBERT for EXIST 2023 Tasks. In Working Notes of the Conference and Labs of the Evaluation Forum (CLEF 2023); CEUR Workshop Proceedings: Aachen, Germany, 2023. [Google Scholar]
  20. Böck, J.; Schütz, M.; Liakhovets, D.; Satriani, N.Q.; Babic, A.; Slijepčević, D.; Zeppelzauer, M.; Schindler, A. AIT_FHSTP at EXIST 2023 benchmark: Sexism detection by transfer learning, sentiment and toxicity embeddings and hand-crafted features. In Proceedings of the 14th International Conference of the CLEF Association, CLEF 2023, Thessaloniki, Greece, 17–21 September 2023. Working Notes of CLEF. [Google Scholar]
  21. Daouadi, K.E.; Boualleg, Y.; Guehairia, O. Deep Random Forest and AraBert for Hate Speech Detection from Arabic Tweets. J. Univers. Comput. Sci. 2023, 29, 1319–1335. [Google Scholar] [CrossRef]
  22. Lopez-Lopez, E.; Carrillo-de Albornoz, J.; Plaza, L. Combining Transformer-Based Models with Traditional Machine Learning Approaches for Sexism Identification in Social Networks at EXIST 2021. In Proceedings of the IberLEF@SEPLN 2021, pp. 431–441. Available online: https://ceur-ws.org/Vol-2943/exist_paper10.pdf (accessed on 1 September 2022). [Google Scholar]
  23. Samory, M.; Sen, I.; Kohne, J.; Flöck, F.; Wagner, C. “Call me sexist, but…”: Revisiting Sexism Detection Using Psychological Scales and Adversarial Samples. In Proceedings of the International AAAI Conference on Web and Social Media, Online, 7–10 June 2021; Volume 15, pp. 573–584. [Google Scholar]
  24. Rodríguez-Sánchez, F.; de Albornoz, J.C.; Plaza, L. Automatic Classification of Sexism in Social Networks: An Empirical Study on Twitter Data. IEEE Access 2020, 8, 219563–219576. [Google Scholar] [CrossRef]
  25. Jha, A.; Mamidi, R. When Does a Compliment Become Sexist? Analysis and Classification of Ambivalent Sexism Using Twitter Data. 2017, pp. 7–16. Available online: https://aclanthology.org/W17-2902/ (accessed on 3 August 2022).
  26. Jiang, A.; Yang, X.; Liu, Y.; Zubiaga, A. SWSR: A Chinese dataset and lexicon for online sexism detection. Online Soc. Netw. Media 2022, 27, 100182. [Google Scholar] [CrossRef]
  27. Das, A.; Rahgouy, M.; Zhang, Z.; Bhattacharya, T.; Dozier, G.; Seals, C.D. Online Sexism Detection and Classification by Injecting User Gender Information. In Proceedings of the 2023 IEEE International Conference on Artificial Intelligence, Blockchain, and Internet of Things (AIBThings), Mount Pleasant, MI, USA, 16–17 September 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–5. [Google Scholar]
  28. Kirk, H.R.; Yin, W.; Vidgen, B.; Röttger, P. SemEval-2023 Task 10: Explainable Detection of Online Sexism. In Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), Association for Computational Linguistics, Toronto, ON, Canada, 9–14 July 2023. [Google Scholar] [CrossRef]
  29. Tasneem, F.; Hossain, T.; Naim, J. KingsmanTrio at SemEval-2023 Task 10: Analyzing the Effectiveness of Transfer Learning Models for Explainable Online Sexism Detection. In Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), Toronto, ON, Canada, 31 January 2023; pp. 1916–1920.
  30. Kiritchenko, S.; Nejadgholi, I.; Fraser, K.C. Confronting abusive language online: A survey from the ethical and human rights perspective. J. Artif. Intell. Res. 2021, 71, 431–478. [Google Scholar] [CrossRef]
  31. Lamsiyah, S.; El Mahdaouy, A.; Alami, H.; Berrada, I.; Schommer, C. UL & UM6P at SemEval-2023 Task 10: Semi-Supervised Multi-task Learning for Explainable Detection of Online Sexism. In Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), Toronto, ON, Canada, 31 January 2023; pp. 644–650. [Google Scholar]
  32. Kotapati, G.; Gandhimathi, S.K.; Rao, P.A.; Muppagowni, G.K.; Bindu, K.R.; Reddy, M.S.C. A Natural Language Processing for Sentiment Analysis from Text using Deep Learning Algorithm. In Proceedings of the 2023 2nd International Conference on Edge Computing and Applications (ICECAA), Namakkal, India, 19–21 July 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1028–1034. [Google Scholar]
  33. Chauhan, R.; Gusain, A.; Kumar, P.; Bhatt, C.; Uniyal, I. Fine Grained Sentiment Analysis using Machine Learning and Deep Learning. In Proceedings of the 2023 International Conference on Sustainable Emerging Innovations in Engineering and Technology (ICSEIET), Ghaziabad, India, 14–15 September 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 423–427. [Google Scholar]
  34. Mariappan, U.; Balakrishnan, D.; Subhashini, S.; Kumar, N.V.A.S.; Rao, S.L.S.M.; Alagusundar, N. Sentiment and Context-Aware Recurrent Convolutional Neural Network for Sentiment Analysis. In Proceedings of the 2023 3rd Asian Conference on Innovation in Technology (ASIANCON), Pune, India, 25–27 August 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–6. [Google Scholar]
  35. Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. Adv. Neural Inf. Process. Syst. 2017, 30, 4765–4774. [Google Scholar]
  36. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why should i trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144. [Google Scholar]
  37. Lai, V.; Carton, S.; Bhatnagar, R.; Liao, Q.V.; Zhang, Y.; Tan, C. Human-ai collaboration via conditional delegation: A case study of content moderation. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April–5 May 2022; pp. 1–18. [Google Scholar]
  38. Molina, M.D.; Sundar, S.S. When AI moderates online content: Effects of human collaboration and interactive transparency on user trust. J. Comput.-Mediat. Commun. 2022, 27, zmac010. [Google Scholar] [CrossRef]
  39. Rallabandi, S.; Kakodkar, I.G.; Avuku, O. Ethical Use of AI in Social Media. In Proceedings of the 2023 International Workshop on Intelligent Systems (IWIS), Ulsan, Republic of Korea, 9–11 August 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–9. [Google Scholar]
  40. Beddiar, D.R.; Jahan, M.S.; Oussalah, M. Data expansion using back translation and paraphrasing for hate speech detection. Online Soc. Netw. Media 2021, 24, 100153. [Google Scholar] [CrossRef]
  41. Zheng, Z.; Cai, Y.; Li, Y. Oversampling method for imbalanced classification. Comput. Inform. 2015, 34, 1017–1037. [Google Scholar]
  42. Xu, Y.; Vaziri-Pashkam, M. Limits to visual representational correspondence between convolutional neural networks and the human brain. Nat. Commun. 2021, 12, 2065. [Google Scholar] [CrossRef] [PubMed]
  43. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805. [Google Scholar]
  44. Conneau, A.; Khandelwal, K.; Goyal, N.; Chaudhary, V.; Wenzek, G.; Guzmán, F.; Grave, E.; Ott, M.; Zettlemoyer, L.; Stoyanov, V. Unsupervised cross-lingual representation learning at scale. arXiv 2019, arXiv:1911.02116. [Google Scholar]
  45. Sanh, V.; Debut, L.; Chaumond, J.; Wolf, T. DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter. arXiv 2019, arXiv:1910.01108. [Google Scholar]
  46. Prabha, M.I.; Srikanth, G.U. Survey of sentiment analysis using deep learning techniques. In Proceedings of the 2019 1st International Conference on Innovations in Information and Communication Technology (ICIICT), Chennai, India, 25–26 April 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–9. [Google Scholar]
  47. Mohammadi, H.; Giachanou, A.; Bagheri, A. Code for “Towards Robust Online Sexism Detection: A Multi-Model Approach with BERT, XLM-RoBERTa, and DistilBERT for EXIST 2023 Tasks”. 2023. Available online: https://zenodo.org/records/8144300 (accessed on 13 July 2023).
  48. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  49. Brownlee, J. A gentle introduction to early stopping to avoid overtraining neural networks. Mach. Learn. Mastery 2018, 7. Available online: https://machinelearningmastery.com/early-stopping-to-avoid-overtraining-neural-network-models/ (accessed on 10 June 2023).
Figure 1. Back translation method for data augmentation (English ↔ Dutch).
Figure 2. Research methodology.
Figure 3. Architecture of our CustomBERT model.
Figure 4. Density distribution of text lengths by sexist label.
Figure 5. Top 20 unique words in sexist texts.
Figure 6. Top 20 bigrams in sexist texts.
Figure 7. Proportions of different categories of sexist texts.
Figure 8. Top unique words for each category of sexist content.
Figure 9. Sentiment polarity scores.
Figure 10. Cumulative importance of top 20 tokens.
Figure 11. Distribution of Sexism Scores for texts labeled as sexist.
Figure 12. Threshold vs. number of selected tokens.
Table 1. Data summary. The total number of records in each task is bolded.
Category | Records
Task A
Not sexist | 10,602
Sexist | 3398
Total | 14,000
Task B
1. Threats, plans to harm, and incitement | 310
2. Derogation | 1590
3. Animosity | 1165
4. Prejudiced discussion | 333
Total | 3398
Task C
1.1 Threats of harm | 56
1.2 Incitement and encouragement of harm | 254
2.1 Descriptive attacks | 717
2.2 Aggressive and emotive attacks | 673
2.3 Dehumanising attacks and overt sexual objectification | 200
3.1 Casual use of gendered slurs, profanities, and insults | 637
3.2 Immutable gender differences and gender stereotypes | 417
3.3 Backhanded gendered compliments | 64
3.4 Condescending explanations or unwelcome advice | 47
4.1 Supporting mistreatment of individual women | 75
4.2 Supporting systemic discrimination against women as a group | 258
Total | 3398
Table 2. Definitions of symbols and parameters.
Symbol | Description
t | Token in the sentences
S_t | SHAP value for token t
T | Set of all tokens
f(t) | Model prediction without the token t
SI_t | SHAP Importance for token t
N_t | Number of sentences in which token t appears
S_t^(i) | SHAP value for token t in the i-th sentence
μ | Mean of the SHAP scores
σ | Standard deviation of the SHAP scores
IR_t | Importance Ratio for token t
CI_k | Cumulative Importance up to the k-th token
T_c | Threshold for cumulative importance, set to 0.95
Label(i) | Modified predicted hard label indicator for sentence i
SS(i) | Sexism Score for sentence i
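To make the quantities in Table 2 concrete, the following is a minimal Python sketch (our illustration, not the authors' released code) of how Importance Ratio and Cumulative Importance values such as those in the token importance list earlier in this article could be derived; the toy shap_values dictionary and the use of summed absolute per-sentence SHAP values for SI_t are assumptions made only for this example.

```python
import numpy as np

# Hypothetical per-sentence SHAP values for each token (token -> list of S_t^(i)).
shap_values = {
    "foid":   [0.42, 0.31, 0.55],
    "girl":   [0.12, 0.08, 0.20, 0.05],
    "thanks": [-0.03, 0.01],
}

# SHAP Importance per token: here taken as the summed absolute SHAP value over the
# N_t sentences containing the token (one plausible reading of SI_t in Table 2).
si = {t: float(np.abs(v).sum()) for t, v in shap_values.items()}

# Importance Ratio IR_t: each token's share of the total importance.
total = sum(si.values())
ir = {t: s / total for t, s in si.items()}

# Rank tokens by importance and accumulate until the Cumulative Importance CI_k
# reaches the threshold T_c = 0.95; the tokens seen so far are the retained ones.
T_c = 0.95
selected, cumulative = [], 0.0
for token, ratio in sorted(ir.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += ratio
    selected.append((token, ratio, cumulative))
    if cumulative >= T_c:
        break

for token, ratio, ci in selected:
    print(f"{token}: IR = {ratio:.3f}, CI = {ci:.3f}")
```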
Table 3. Logistic regression results.
Parameter | Coefficient | Standard Error | z-Value | p-Value | 95% Confidence Interval
Intercept | −1.431497 | 0.044 | −32.262 | 2.37 × 10^−228 | [−1.518, −1.345]
Text Length | 0.003579 | 0.000 | 7.532 | 4.98 × 10^−14 | [0.003, 0.005]
Log-Likelihood: −7730.043
R-Squared: 0.003650
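As a rough illustration of the analysis summarized in Table 3, the sketch below fits a logistic regression of the binary sexist label on text length with statsmodels; the small data frame is a placeholder for the real corpus, not the study's data.

```python
import pandas as pd
import statsmodels.api as sm

# Toy stand-in for the corpus: text length (characters) and the binary sexist label.
df = pd.DataFrame({
    "text_length": [40, 55, 62, 48, 90, 35, 70, 58],
    "sexist":      [0,  1,  0,  1,  1,  0,  0,  1],
})

# Logistic regression: sexist ~ intercept + text_length.
X = sm.add_constant(df["text_length"])
result = sm.Logit(df["sexist"], X).fit(disp=0)
print(result.summary())  # coefficients, standard errors, z-values, p-values, 95% CIs as in Table 3
```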
Table 4. Chi-Square and T-test results.
Statistic | Value
Chi2 Statistic | 65.703
p-Value | 1.83 × 10^−13
Degrees of Freedom | 4
T-Statistic | 7.562
p-Value | 4.21 × 10^−14
Mean Text Length (Sexist) | 64.83
Mean Text Length (Non-Sexist) | 58.29
Standard Deviation (Sexist) | 51.41
Standard Deviation (Non-Sexist) | 50.77
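Similarly, statistics of the kind reported in Table 4 could be obtained along the following lines with SciPy; the simulated text lengths and the five-bin contingency table (which yields four degrees of freedom, as reported above) are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

# Hypothetical text lengths for sexist and non-sexist posts (placeholders for the real data).
rng = np.random.default_rng(0)
sexist_lengths = rng.normal(64.8, 51.4, size=500).clip(1)
non_sexist_lengths = rng.normal(58.3, 50.8, size=1500).clip(1)

# Welch t-test on mean text length between the two groups.
t_stat, t_p = stats.ttest_ind(sexist_lengths, non_sexist_lengths, equal_var=False)

# Chi-square test of independence on binned text length vs. label (5 bins -> dof = 4).
edges = np.quantile(np.concatenate([sexist_lengths, non_sexist_lengths]),
                    [0, 0.2, 0.4, 0.6, 0.8, 1.0])
sexist_counts, _ = np.histogram(sexist_lengths, bins=edges)
non_sexist_counts, _ = np.histogram(non_sexist_lengths, bins=edges)
chi2, chi_p, dof, _ = stats.chi2_contingency(np.vstack([sexist_counts, non_sexist_counts]))

print(f"t = {t_stat:.3f} (p = {t_p:.2e}); chi2 = {chi2:.3f} (p = {chi_p:.2e}, dof = {dof})")
```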
Table 5. Summary of model parameters and hyperparameters.
Parameter | Description
Tokenization Max Length | 512 tokens
Learning Rate Range | 1 × 10^−5 to 1 × 10^−4 (Default: 3 × 10^−5)
Batch Sizes | 32, 64, 128
Learning Rate Scheduler | Cosine decay schedule
Warm-up Steps | 200 steps
Early Stopping Patience | 5 epochs
Loss Function | Binary cross-entropy
Optimizer | Adam
Precision Training Policy | Mixed float16
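A minimal Keras-style sketch of the training configuration in Table 5 is given below; the stand-in model, the total number of decay steps, and the commented-out fit call are illustrative assumptions rather than the authors' implementation (the 200 warm-up steps would additionally require a warm-up-aware schedule).

```python
import tensorflow as tf

# Mixed-precision policy from Table 5.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# Toy stand-in classifier: the real CustomBERT encoder is described in Figure 3;
# this embedding model only illustrates the training configuration, not the architecture.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=30_522, output_dim=64),       # BERT-sized vocabulary
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid", dtype="float32"),  # keep outputs in float32
])

# Cosine-decay learning-rate schedule starting from the default 3e-5 (Table 5);
# decay_steps is an assumed total step count for this sketch.
lr_schedule = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=3e-5, decay_steps=10_000)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
    loss=tf.keras.losses.BinaryCrossentropy(),
    metrics=["accuracy"],
)

early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)
# model.fit(train_ids, train_labels, validation_data=(val_ids, val_labels),
#           batch_size=64, epochs=50, callbacks=[early_stopping])
```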
Table 6. Performance results of CustomBERT and baselines on Tasks A, B, and C. The best performance result for each task is bolded.
Model | Accuracy | Precision | Recall | F1 Score
Task A
Logistic Regression | 0.70 | 0.42 | 0.70 | 0.46
XGBOOST | 0.72 | 0.45 | 0.72 | 0.49
BERT | 0.76 | 0.57 | 0.76 | 0.65
XLM-RoBERTa | 0.76 | 0.57 | 0.76 | 0.65
DistilBERT | 0.77 | 0.74 | 0.77 | 0.72
CustomBERT | 0.79 | 0.77 | 0.79 | 0.76
Task B
Logistic Regression | 0.54 | 0.25 | 0.54 | 0.23
XGBOOST | 0.56 | 0.27 | 0.56 | 0.21
BERT | 0.68 | 0.52 | 0.68 | 0.59
XLM-RoBERTa | 0.67 | 0.51 | 0.67 | 0.58
DistilBERT | 0.69 | 0.66 | 0.69 | 0.65
CustomBERT | 0.71 | 0.69 | 0.71 | 0.68
Task C
Logistic Regression | 0.40 | 0.10 | 0.40 | 0.07
XGBOOST | 0.42 | 0.12 | 0.42 | 0.08
BERT | 0.63 | 0.44 | 0.63 | 0.52
XLM-RoBERTa | 0.64 | 0.46 | 0.64 | 0.53
DistilBERT | 0.65 | 0.62 | 0.65 | 0.60
CustomBERT | 0.67 | 0.65 | 0.67 | 0.64
Table 7. Statistical summary of SHAP scores.
Statistic | Value
Count | 14,000
Mean | −0.271917
Standard Deviation | 0.458922
Minimum | −0.963626
25th Percentile | −0.526098
50th Percentile (Median) | −0.483276
75th Percentile | −0.297328
Maximum | 0.962766
Table 8. Performance metrics for the test dataset in each bin across Tasks A, B, and C. The best performance result per task is bolded.
Bin | Accuracy | Precision | Recall | F1 Score
(−0.332, −0.0644] ⋃ (0.734, 0.984] | 0.79 | 0.78 | 0.79 | 0.78
(−0.0644, 0.202] ⋃ (0.202, 0.468] | 0.78 | 0.76 | 0.78 | 0.77
(0.468, 0.734] | 0.77 | 0.75 | 0.77 | 0.76
All data (Task A) | 0.79 | 0.77 | 0.79 | 0.76
(−0.332, −0.0644] ⋃ (0.734, 0.984] | 0.72 | 0.70 | 0.72 | 0.71
(−0.0644, 0.202] ⋃ (0.202, 0.468] | 0.71 | 0.69 | 0.71 | 0.70
(0.468, 0.734] | 0.70 | 0.68 | 0.70 | 0.69
All data (Task B) | 0.71 | 0.69 | 0.71 | 0.68
(−0.332, −0.0644] ⋃ (0.734, 0.984] | 0.68 | 0.66 | 0.68 | 0.67
(−0.0644, 0.202] ⋃ (0.202, 0.468] | 0.67 | 0.65 | 0.67 | 0.66
(0.468, 0.734] | 0.66 | 0.64 | 0.66 | 0.65
All data (Task C) | 0.67 | 0.65 | 0.67 | 0.64
Table 9. Efficiency comparison.
Dataset Portion | Accuracy | Precision | Recall | F1 Score | Runtime (s)
All Sentences | 0.79 | 0.77 | 0.79 | 0.76 | 1578 (26.3 min)
High Sexism Score (Top 80%) | 0.79 | 0.75 | 0.76 | 0.73 | 1342 (22.4 min)