
International Journal of Computational and Experimental Science and Engineering (IJCESEN)
Vol. 11, No. 2 (2025) pp. 2383-2395
http://www.ijcesen.com
Copyright © IJCESEN   ISSN: 2149-9144
Research Article

A Novel Cognitive Multi-Label Classification Model for Social Media Data Based on Dolphin Optimized Learning and Hybrid Classification Networks

Shumama Ansa 1*, G. Narsimha 2

1 Research Scholar, Computer Science and Engineering, Jawaharlal Nehru Technological University Hyderabad, Telangana, India
* Corresponding Author Email: [email protected] - ORCID: 0009-0007-7670-4185

2 Professor and Principal, Computer Science and Engineering, JNTUH University College of Engineering Sultanpur, Telangana, India
Email: [email protected] - ORCID: 0000-0002-0143-8122

Article Info:

DOI: 10.22399/ijcesen.1737
Received: 08 February 2025
Accepted: 05 April 2025

Keywords: Sentimental, Extreme Multi-Label Classification, Dolphin Optimized Learning Model, Stacked Gated Recurrent Units, Feed Forward Networks.

Abstract:

Social media plays a pivotal role in people's daily lives, where users share diverse material such as ideas, events, and emotions. As the user base grows, the extensive use of social platforms has produced vast amounts of information. These unstructured data must be labelled so that the relevant information can be understood and applied in domains such as healthcare, entertainment, and even politics. Because such data carry a very large number of candidate labels, annotation is needed to tag each document with its most relevant labels. Extreme Multi-Label Text Classification (XMTC) addresses this problem by automatically labelling a document with the most pertinent labels drawn from an extremely large label set. With the surge in big data, implementing XMTC has become a significant challenge, since millions of documents, features, and labels must be handled. Machine Learning (ML) and Deep Learning (DL) algorithms have eased this bottleneck, but the computational overhead of training these networks degrades XMTC performance on large social media data. To solve this problem, this paper proposes an ensemble combination of Dolphin Optimized Learning and hybrid classification networks. The proposed model comprises three stages: first, a multi-label dolphin optimized learning procedure recognizes the weight of every word in relation to the labels; second, the label structure and document details are used to ascertain the links between phrases and labels and to compress the label set; finally, label-aware classification networks built from Stacked Gated Recurrent networks and Feed Forward networks produce the final label-aware document representations. Comprehensive experimentation is carried out on the EURLEX57K benchmark, and performance metrics such as accuracy, precision, recall, and Hamming score are calculated. Compared against varied state-of-the-art models, the results demonstrate that the proposed model exhibits superior performance, notably on tail labels.

1. Introduction

Extreme Multi-Label Text Classification (XMTC) focuses on labeling a document with the pertinent tags from an immense label set. For example, there are many categories on Wikipedia, and one might want to develop a classifier that can annotate a given text with the set of most significant categories [1,2]. XMTC has gained increasing importance because of the explosion of big data, while it has become considerably more challenging, since it must simultaneously manage vast amounts of documents, features, and labels [3-5]. Therefore, it is crucial to create efficient XMTC for several real-world applications, including product categorization in e-commerce, news tagging, and so on.

Unlike traditional multi-class classification, multi-label text classification allows multiple labels to co-exist for a single record. In general, the numerous labels are often semantically related, and it is advantageous for a multi-label learning approach to utilize the correlations between distinct labels. At the same time, there can be a substantial count of tail labels with sparse supporting records, making them difficult to model. To address these challenges, researchers focus on two aspects: (1) how to represent labels so that the correlations between labels can be precisely mined, and (2) how to represent documents so that the interdependence between texts can be fully captured.

1.1 Role of ML and DL Models in XMTC

In recent times, DL models such as Convolutional Neural Networks (CNN) [6], XMTC Classification Neural Networks (CLNN) [7], and Recurrent Neural Networks (RNN) [8] have attained great success in representing text data, but these techniques are limited to a single level of text representation. Besides these deep learning models, Attentive Neural Networks (ANN), AttentionXML [9], and EXAM [10] have also gained popularity for solving XMTC problems. However, these algorithms concentrate solely on the document text and disregard the label arrangement within the extreme label set, which has been demonstrated to be crucial in XMTC learning problems. Additionally, computational complexity pushes the existing models toward inaccurate classification.

1.2 Motivation of the Research

Existing deep learning models have exhibited inaccurate classification because they ignore label structures, and they suffer from a computational complexity that remains a real problem among researchers. To solve this problem, this paper proposes a hybrid combination of Dolphin Optimized Neural Networks (DONN) and Stacked GRUs with Feed Forward networks to achieve better XMTC for massive social media text data. The proposed framework consists of three components: the Label Contributor Phase (LCP), the Compression Phase (CP), and the Label Aware Classification Networks (LACN).

1.3 Contribution of the Research Article

a) The paper proposes Dolphin Optimized Deep Neural Networks for detecting the relationship of words to labels. The DONN models with self-attention maps construct the label-aware document representation by concurrently examining the content and the label structure.
b) Hybrid Stacked GRU networks are proposed for achieving better classification over large label sets and document collections.
c) A label-aware semantic relationship based on attention maps is proposed to extract the relationship between every document and all of its labels for XMTC.
d) Extensive experimentation has been carried out on the EURLEX57K dataset, in which the performance metrics are measured and evaluated against other state-of-the-art procedures. The experimental findings highlight that the recommended approach exhibits superior XMTC performance.

1.4 Structure of the Paper

The manuscript is arranged as follows: Section 2 reviews the relevant studies by different authors. The proposed system, data pre-processing, and classifier architectures are presented in Section 3. The experimental details, comparative analysis, and results discussion appear in Section 4. Finally, Section 5 concludes the study along with the future enhancement.

2. Related Works

Bayu and Tegegn (2024) [11] conducted pioneering research on multi-label emotion classification for Amharic social media text, addressing a critical gap in the field. They collected 18,000 samples from social media platforms, annotated by psychologists and professionals. Utilizing word2vec and one-hot encoding, they trained four DL models: LSTM, BiLSTM, CNN, and GRU. The best accuracy was achieved by BiLSTM at 54.5%, followed by CNN at 54% and LSTM at 53.1%. However, the study's primary drawback lies in its relatively low overall accuracy, with GRU performing particularly poorly at 39.7%. The limited dataset size and the complexity of Amharic language processing pose significant challenges for comprehensive emotion classification.

Wang et al. (2024) [12] introduced a novel deep Active Learning method based on Bayesian DL and Expected confidence (BEAL) for multi-label text classification, addressing the challenge of limited labeled data. Their approach leverages Bayesian deep learning to obtain the model's posterior predictive distribution and specifies an innovative expected-confidence-based acquisition function. Experimental results with a BERT-based multi-label text classification (MLTC) technique demonstrated more efficient model training, enabling convergence with fewer labeled samples. However, the computational complexity of Bayesian deep learning approaches may pose challenges for real-world implementation.

Samy (2023) [13] explored sentiment analysis using hybrid BERT models to enhance classification accuracy across different scenarios. The research developed four deep learning models by combining BERT with BiLSTM and Bidirectional GRU algorithms, and utilized pre-trained word embedding vectors to assist the fine-tuning process. The developed models aimed to improve accuracy and evaluate the impact of hybridizing BiGRU and BiLSTM layers; the architectures incorporating BiGRU layers achieved the best results. However, the computational intensity of the hybrid models and their demand for extensive computational resources remain significant drawbacks.

De and Vats (2023) [14] developed a robust multi-label classifier to classify diverse concerns expressed in tweets about vaccination. Their comprehensive approach utilized advanced natural language processing methods and ML models, such as transformer models like BERT and GPT 3.5, alongside traditional methods such as Classifier Chains, Support Vector Machine (SVM), Random Forest (RF), and Naive Bayes (NB). The cutting-edge large language models outperformed the other methods, demonstrating the potential of advanced AI in understanding complex social media discourse. However, key drawbacks include potential bias in data collection and difficulty capturing subtle vaccine-related sentiments.

Ameer et al. (2023) [15] advanced multi-label emotion classification by introducing innovative transfer learning techniques with multiple attention mechanisms. Their approach utilized transformer networks like XLNet, DistilBERT, and RoBERTa to reveal each word's contribution to emotion classification. The XLNet-multi-attention model showed 45.6% accuracy on the Ren-CECps Chinese dataset. Although the results are impressive, the model's complexity, high computational demands, and difficulties in handling different languages and emotions limit its effectiveness.

Paranjape et al. (2023) [16] evaluated several transformer-driven approaches by fine-tuning them for multi-label classification. Oversampling techniques were applied to address the class imbalance in the dataset, and ensemble methods were employed to enhance the system's effectiveness. However, the technique's reliance on multiple sophisticated models may limit its practicality for resource-constrained environments.

Elangovan and Sasikala (2022) [17] introduced a novel approach utilizing the Enhanced Embedding from Language Model (EnELMo) for categorizing tweets into various classes. The proposed EWECNN approach consists of an elevated ELMo component to process Crisis word vectors. Their suggested method surpasses other microblog text classification techniques with an improved accuracy rate. However, significant drawbacks include the limited generalizability to other domains and the high computational requirements of the complex model architecture.

Ashraf et al. (2022) [18] created the first multi-label emotion dataset for Urdu, comprising 6,043 tweets representing six basic emotions. Recognizing the morphological and syntactic challenges of the Urdu language, they developed baseline classifiers, including ML, DL approaches, and BERT. The study employed various text representations, like stylometric attributes, pre-trained word embeddings, and n-grams. Nevertheless, the main drawback was the dataset's small size.

Khalil et al. (2021) [19] proposed a multi-label emotion classification model for Arabic tweets utilising a BiLSTM deep network. Their approach included comprehensive preprocessing steps, such as Arabic language stemming and emoji-to-text replacement. The study uniquely examined the effect of hyperparameter tuning on model performance, achieving a 9% improvement in validation accuracy. Nonetheless, the method's effectiveness may be constrained by its reliance on Aravec embeddings, which could limit adaptability to evolving language usage.

Bdeir and Ibrahim (2020) [20] developed an architecture for Arabic tweet multi-label classification using word embedding techniques and deep learning models. They constructed a dataset of 160,000 Arabic tweets and compared two DL methods: CNN and RNN. Their outcomes demonstrated nearly identical performance across both network types, with accuracy scores around 90% and Hamming loss approximately 0.02. However, the study's reliance on a single dataset and the lack of evaluation across diverse linguistic contexts limit its generalizability.

3. Proposed Methodology

3.1 System Overview

Figure 1 presents the proposed framework for an efficient XMTC model. As shown in Figure 1, the proposed XMTC framework consists of four components: Data Pre-processing, DONN Models for Handling the Multi-label Documents, Feature Extraction, and finally the Classification Layers. The detailed description of each component is as follows.


Table 1. Quick Summary of the Different Related Works

S.No | Authors | Year | Technology | Results Obtained | Advantages | Drawback
1 | Bayu and Tegegn | 2024 | Word2vec, one-hot encoding, deep learning (LSTM, BiLSTM, CNN, GRU) | BiLSTM: 54.5% accuracy, CNN: 54%, LSTM: 53.1% | Utilized multiple DL approaches | Low overall accuracy, small dataset size, complex language processing
2 | Wang et al. | 2024 | Bayesian deep learning, BEAL, BERT-based MLTC | Efficient model training with fewer labeled samples | Innovative expected-confidence-based acquisition function | High computational complexity of Bayesian deep learning
3 | Samy | 2023 | Hybrid BERT with BiLSTM and BiGRU, pre-trained word embedding | Best results with BiGRU layer architectures | Improved accuracy through model hybridization | High computational resource demands
4 | De and Vats | 2023 | Transformer models (BERT, GPT 3.5), traditional ML (SVM, RF, NB) | Large language models outperformed traditional methods | Comprehensive approach to understanding social media discourse | Potential data collection bias, difficulty capturing subtle sentiments
5 | Ameer et al. | 2023 | Transformer networks (XLNet, DistilBERT, RoBERTa) with multi-attention | RoBERTa-multi-attention: 62.4% accuracy (SemEval-2018), XLNet-multi-attention: 45.6% accuracy (Ren-CECps) | Innovative transfer learning with attention mechanisms | Complex model, high computational demands
6 | Paranjape et al. | 2023 | Transformer models (XLNet, Longformer, BERT, BigBird) | Macro F1-score: 0.5649 (official phase), 0.6605 (post-competition) | Oversampling and ensembling techniques | Reliance on multiple sophisticated models
7 | Elangovan and Sasikala | 2022 | Improved ELMo, CNN-RNN approaches | 93.46% accuracy, 92.99% F1-score | High performance in microblog text classification | Limited generalizability, high computational requirements
8 | Ashraf et al. | 2022 | ML, DL and BERT-based classifiers | Comprehensive evaluation using multiple metrics | First multi-label emotion dataset for Urdu | Small dataset size
9 | Khalil et al. | 2021 | BiLSTM with Aravec word embedding | Improved validation accuracy, higher micro F1 score | Comprehensive preprocessing, hyperparameter tuning | Limited adaptability to evolving language
10 | Bdeir and Ibrahim | 2020 | CNN and RNN with word embedding | ~90% accuracy, ~0.02 Hamming loss | Extensive dataset of 160,000 tweets | Limited generalizability, single dataset evaluation

3.2 Data Pre-processing Technique

Before training the proposed model with the datasets mentioned in [21], a pre-processing technique is applied to the parallel datasets through the following steps (a minimal sketch follows the list):

(i) converting all the text into lower case;
(ii) excluding special characters from the text, except apostrophes;
(iii) tokenizing the source and target parallel sentences into sub-word tokens using Keras libraries;
(iv) generating the sub-word embeddings as input to the DONN models.
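As a rough illustration of steps (i)-(iv), the following minimal sketch uses the Keras TextVectorization layer; it is not the paper's exact pipeline: the sub-word step of (iii) is approximated here by word-level tokenization, and names such as clean_text are illustrative only.

```python
import re
import tensorflow as tf

def clean_text(text: str) -> str:
    # steps (i)-(ii): lower-case and drop special characters except apostrophes
    return re.sub(r"[^a-z0-9'\s]", " ", text.lower())

docs = ["Users share Ideas, Events & Emotions!",
        "Extensive use of social platforms creates vast information."]
docs = [clean_text(d) for d in docs]

# step (iii): tokenize into integer ids (a word-level stand-in for sub-word tokens)
vectorizer = tf.keras.layers.TextVectorization(max_tokens=20000,
                                               output_sequence_length=128)
vectorizer.adapt(docs)
token_ids = vectorizer(tf.constant(docs))

# step (iv): map token ids to dense embeddings that feed the DONN models
embedding = tf.keras.layers.Embedding(input_dim=20000, output_dim=128)
doc_vectors = embedding(token_ids)
print(doc_vectors.shape)  # (2, 128, 128): documents x tokens x embedding size
```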


3.3 DONN Models for Handling the Multiple Labels

This section provides a general overview of the dolphin optimization model, the dolphin-optimized learning model, and label-aware documents.

Dolphin Optimization Model

In this paper, a new optimization technique inspired by dolphin echolocation is introduced. This approach simulates the hunting strategies dolphins employ, where they use sonar to detect and adjust to the location of their prey. This process of modifying sonar signals to pinpoint the target is emulated as the core feature of the proposed method. Figure 2 shows a living dolphin grabbing its meal.

Dolphin echolocation in nature

The concept of "echolocation" was introduced by Griffin [22,23] to describe the capability of flying bats to detect obstacles and prey by listening to the echoes reflected from the high-frequency clicks they produce. Echolocation is employed by several mammals and a small number of bird species. The most well-researched form of echolocation is found in marine mammals, particularly in bottlenose dolphins. A dolphin generates sound waves in the form of clicks, with frequencies that are higher than those used for communication and that vary among species. When these sounds hit an object, some of the sound energy is reflected back towards the dolphin. Upon receiving the echo, it emits another click. The time interval between the emission of the click and the return of the echo helps the dolphin assess the distance to the object. The dolphin can also gauge the direction of the object based on the varying intensity of the signal received on each side of its head. By repeatedly emitting clicks and receiving the echoes, it is able to track and approach objects.

Although bats also utilize echolocation, their sonar systems differ from those of dolphins. Bats generally rely on their sonar within shorter ranges of about 3-4 meters, whereas dolphins can recognize targets at ranges of 10-100 meters. Bats typically hunt fast-moving insects, whose traits differ significantly from those of the fish pursued by dolphins. Furthermore, sound travels in air at roughly one-fifth of its velocity in water, meaning that the rate of information exchange through sonar transmission is lower for bats than for dolphins. These environmental and prey-related differences necessitate distinct sonar systems, making a direct comparison between the two animals difficult.

Dolphin echolocation (DE) optimization

The concept of DE can be compared to optimization in several ways. In the process of finding prey using echolocation, dolphins explore their surroundings before honing in on the target, which resembles the process of attaining the ideal solution to a problem. Initially, dolphins search the environment broadly, but as they approach their prey, they restrict their search and increase the frequency of their echolocation clicks to zero in on the target. This process can be modeled in optimization problems by limiting the exploration phase based on the proximity to the target. The optimization process can be categorized into two phases: in the first stage, the algorithm performs a global search by exploring various points in the solution space, seeking unexplored areas. In the second phase, the algorithm focuses more locally around the better solutions found during the first phase. This behavior is common to many metaheuristic algorithms.

Figure 1. Proposed XMTC Framework for the Multi-Label Classification

Figure 2. A living dolphin grabbing its meal.

The DE approach encompasses modifying the proportion of solutions produced in the global search phase (phase 1) compared to those produced in the local search phase (phase 2). By utilizing the DE approach, this ratio can be modified according to a predefined curve, which dictates how the approach gradually shifts from global search to local search. The user specifies a curve that defines how the optimization should converge, and the algorithm then adjusts its parameters to follow this curve. In essence, the method considers the probability of selecting the best solution found so far compared to other alternatives in each iteration. This approach reduces the dependence on parameters, as the curve defines the convergence criterion. The curve can be any smooth, increasing function, with some recommendations on its form, which will be discussed further. Previous studies have shown a unified approach to parameter selection in metaheuristics [24]. In these methods, a metric known as the Convergence Factor (CF) is used. The CF represents the average likelihood of selecting the best solution. For instance, in an example where steel profiles are assigned to a structure with four elements, the CF is calculated as the mean of the frequencies of the modal profile for those elements. This method allows for consistent convergence tracking in optimization problems. Before proceeding with the optimization process, the search space must be arranged using the following criterion.

Search Space Arrangement

For every variable being optimized in the procedure, sort the available options in either ascending or descending order. In cases where the options involve multiple characteristics, the sorting should prioritize the most significant one. As a result, for a given variable j, a vector of length LAj is formed, containing all the potential alternatives for variable j. These vectors are then arranged side by side to form the Alternatives matrix of size MA × NV, where MA is the maximum of LAj over j = 1:NV, and NV denotes the total number of variables. It is also necessary to assign a curve that dictates how the convergence factor (CF) should evolve during the optimization; the variation in CF follows the designated curve during the optimization procedure:

PP(Loop_i) = PP_1 + (1 - PP_1) * (Loop_i^Power - 1) / (LoopsNumber^Power - 1)    (1)

The predefined probability is denoted as PP, with PP_1 representing the CF during the initial loop, where the solutions are chosen at random. Loop_i refers to the current iteration number, while Power signifies the curve's degree; as illustrated, the curve in Equation (1) follows a power law. The loop count, LoopsNumber, represents how many iterations the algorithm requires to reach the convergence point. This value must be selected by the user, depending on the computational resources available for the algorithm.

Figure 3. Overview Search Space Arrangement

Figure 4. Working Flow of the Proposed Dolphin Optimization Model
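The convergence curve of Equation (1) is easy to inspect numerically; the following is a minimal sketch in which the variable names mirror Eq. (1) and the parameter values are illustrative rather than taken from the paper:

```python
import numpy as np
import matplotlib.pyplot as plt

def predefined_probability(loop_i, pp1, power, loops_number):
    """Eq. (1): target probability of picking the best-known alternative."""
    return pp1 + (1.0 - pp1) * (loop_i**power - 1.0) / (loops_number**power - 1.0)

loops_number = 85   # iteration budget; the paper later settles on 85 iterations
pp1 = 0.1           # CF of the first loop, where choices are random
loops = np.arange(1, loops_number + 1)

# large Power keeps the search global for longer; small Power shifts
# the algorithm toward local search early
for power in (0.5, 1.0, 2.0, 5.0):
    plt.plot(loops, predefined_probability(loops, pp1, power, loops_number),
             label=f"Power = {power}")
plt.xlabel("Loop number")
plt.ylabel("PP (convergence factor target)")
plt.legend()
plt.show()
```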

Figure 3 shows the alterations of PP under variations of Power, using the recommended formula, Equation (1). Figure 4 shows the working flow of the proposed dolphin optimization model.

Hyper-parameter Tuned Deep Neural Networks

Adjusting hyper-parameters (HP) is performed to acquire the optimal HP for the proposed framework and to further lessen complexity. The HP that require modification in this research cover the count of hidden layers, the number of hidden units, the dropout rate, the epochs, and the batch size. The weights of the dense networks utilized by the classification layers are fine-tuned by utilizing the straightforward DEO. The training network initially fetches randomly chosen HP. The fitness value of the suggested approach is provided in Equation (2). The HP are recalculated using Algorithm-1 during every iteration, and the iteration concludes when the fitness function aligns with Equation (2).

Fitness Function = Max(Accuracy, Precision, Recall, F1-score)    (2)

The developed classification layer efficiently, and with minimal computation, identifies the multiple labels of the classes. Algorithm-1 outlines the proposed classification layers' functional procedure. According to the suggested algorithm, 78 hidden nodes, a momentum of 0.01, and 85 iterations are chosen.

Algorithm-1 // Pseudo-code for the Proposed Model
01 Input: bias weights, concealed units, epochs, learning rate
02 Target: multi-class labels
03 Assign the bias weights, concealed units, epochs, and learning rate at random
04 Set the three parameters
05 While true
06   Use Equation (2) to determine the fitness function
07   For t = 1 to Max. iteration
08     Use Equations (1) and (2) to allocate the bias weights and input layers
09     Use Equation (2) to quantize the fitness function
10     If the fitness function is equal to the threshold
11       go to Step 14
12     Else
13       go to Step 06
14 Stop

3.4 DEO Tuned Model for Word and Feature Embedding

To establish the recommended approach, the pre-processed text data are converted into low-dimensional dense vectors by word embedding techniques. The extreme labels are encoded into dense vectors through a label co-existence graph to facilitate the accurate identification of label correlations and local patterns. To better extract the label information, a label co-existence graph G = (V, S) is built from the training data, where V is the set of label nodes and S is the edge set; an edge connects the i-th label and the j-th document. For better extraction of the label correlation information, the graph was built using Python libraries. The objective is to map the extreme labels into a low-dimensional latent space, ensuring that labels positioned closely within the graph share similar representations. To achieve this, the widely recognized and robust Node2Vec [25] technique is employed. This method effectively explores the diverse neighborhoods of labels using a flexible, biased random walk approach, incorporating both breadth-first and depth-first sampling strategies. At last, every label is categorized into one of the groups positive, neutral, or negative. Each label is implemented as an r-dimensional dense vector, i.e., l(i) ∈ R^r for the i-th label (i = 1, ..., k), and the whole label set can be represented by

L = (l(1), l(2), ..., l(k)) ∈ R^(r×k)    (3)

The proposed DEO model seeks to enhance the representation of each document by leveraging both its content and the structure of the labels.
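A minimal sketch of the label co-existence graph and its Node2Vec embedding is given below; it assumes the third-party networkx and node2vec packages (the paper only states that Python libraries were used), and the toy labels are illustrative:

```python
import networkx as nx
from node2vec import Node2Vec  # pip install node2vec

# toy training set: each document carries several labels
doc_labels = [("economy", "tax"), ("tax", "law"), ("economy", "trade", "law")]

# label co-existence graph G = (V, S): an edge links labels that
# appear together in at least one training document
G = nx.Graph()
for labels in doc_labels:
    for i, u in enumerate(labels):
        for v in labels[i + 1:]:
            G.add_edge(u, v)

# biased random walks (the BFS/DFS mix is controlled by p and q),
# followed by skip-gram training on the generated walks
n2v = Node2Vec(G, dimensions=16, walk_length=10, num_walks=50, p=1.0, q=0.5)
model = n2v.fit(window=5, min_count=1)

# r-dimensional dense vector l(i) for each label, cf. Eq. (3)
label_vectors = {label: model.wv[label] for label in G.nodes}
print(label_vectors["tax"].shape)  # (16,)
```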


3.5 Self-Attention Maps

Recent studies predominantly focus on incorporating attention layers to reduce redundant features, thereby enhancing the accuracy of the classification process. The self-attention mechanism, also referred to as intra-attention, operates by generating three vectors, Q (Query), K (Key), and V (Value), for each input sequence; the input sequences of each layer are thereby converted into output sequences. In essence, this technique aligns the Query with the corresponding key-value pairs using scaled dot-product functions. In multi-label contexts, a document might carry multiple labels, and its representation should reflect the context most strongly related to every label. To focus on the different words in the documents, self-attention is applied to the outputs of the DEO-evoked neural networks. Mathematically, the dot product for self-attention is computed as follows:

F(K, Q) = Softmax(K Q^T / (d_K)^0.5)    (4)

where d_K is the dimension of the key vectors. Softmax is applied along the first dimension to illustrate the varying contributions of each word to each label.

3.6 Classification Layers

To achieve better classification of the multi-label data with the three classes, stacked GRU and feed forward networks are deployed. The detailed description of each module is presented as follows.

Stacked GRU networks

The gating mechanism in GRU and LSTM RNNs mirrors the simple RNN in terms of parameterization. The weights associated with these gates are updated using backpropagation through time (BPTT) via stochastic gradient descent, aiming to diminish a loss or cost function [26]. Consequently, every parameter update incorporates information related to the overall network state. The primary driving signal is the internal state of the network. Additionally, the adaptive parameter updates affect all aspects of the system's internal state [27]. This section examines three significant variants for deriving the gating equations, applied consistently to both gates.

GRU-First Variant
Here, every gate is calculated utilizing only the previous hidden state and the bias, thereby decreasing the total parameter count by 2 × nm compared to the standard GRU RNN:

z_t = σ(U_z h_{t-1} + b_z)    (5)
r_t = σ(U_r h_{t-1} + b_r)    (6)

GRU-Second Variant
Every gate is calculated solely utilizing the prior hidden state, thereby decreasing the overall parameter count by 2 × (nm + n) compared to the standard GRU RNN:

z_t^m = σ(U_z h_{t-1})    (7)
r_t^m = σ(U_r h_{t-1})    (8)

GRU-Third Variant
Here, every gate is calculated solely utilizing the bias, thereby diminishing the overall parameter count by 2 × (nm + n^2) compared to the standard GRU RNN:

z_t^l = σ(b_z)    (9)
r_t^l = σ(b_r)    (10)

As mentioned in [28], a single GRU is a shallow model with weak feature-extraction capability, so the proposed network stacks multiple variants of GRU units, as shown in Figure 5. The initial layer is built from the GRU first-variant structure and takes the original data as input; the middle layer consists of the GRU-variant-2 structure, which takes the hidden-layer output of the layer below it; and the upper layer is composed of GRU-variant-3 units. Finally, sigmoidal layers are added as the last layer of the GRU network, as given by the above equations.

Figure 5. Proposed Stacked GRU framework for the deep feature extraction process

The primary benefit of the recommended network lies in its potential for efficiently extracting high-level features from multi-label data. Furthermore, it effectively leverages information from time series, significantly enhancing classifier performance. Additionally, since the model parameters are not time-dependent, it eliminates the trade-off between time and accuracy highlighted in reference [29]. These low-level features are fed into the classification networks, which are designed based on the principles of Extreme Learning Machines (ELM). ELM represents a type of neural network characterized by a single hidden layer that operates on an auto-tuning principle. ELM demonstrates superior performance, faster execution, and reduced computational overhead compared to other learning techniques, including SVM, Bayesian Classifiers (BC), K-Nearest Neighbors (KNN), and even the RF algorithm.
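To make the three gate variants of Eqs. (5)-(10) concrete, here is a minimal NumPy sketch of a single recurrent step; the weight shapes follow the n hidden units and m inputs convention used in the parameter counts above, and all values are toy data:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W, U, b, Uz, Ur, bz, br, variant=1):
    """One GRU step with the gates of variant 1, 2 or 3 (Eqs. (5)-(10))."""
    if variant == 1:      # Eqs. (5)-(6): previous hidden state + bias
        z, r = sigmoid(Uz @ h_prev + bz), sigmoid(Ur @ h_prev + br)
    elif variant == 2:    # Eqs. (7)-(8): previous hidden state only
        z, r = sigmoid(Uz @ h_prev), sigmoid(Ur @ h_prev)
    else:                 # Eqs. (9)-(10): bias only
        z, r = sigmoid(bz), sigmoid(br)
    h_tilde = np.tanh(W @ x_t + U @ (r * h_prev) + b)  # candidate state
    return (1.0 - z) * h_prev + z * h_tilde

n, m = 8, 4  # hidden units, input size
rng = np.random.default_rng(0)
W, U = rng.normal(size=(n, m)), rng.normal(size=(n, n))
Uz, Ur = rng.normal(size=(n, n)), rng.normal(size=(n, n))
b, bz, br = np.zeros(n), np.zeros(n), np.zeros(n)

h = np.zeros(n)
for x_t in rng.normal(size=(5, m)):  # a toy sequence of five time steps
    h = gru_step(x_t, h, W, U, b, Uz, Ur, bz, br, variant=1)
print(h.shape)  # (8,)
```

Stacking the three variants as described above then amounts to feeding the hidden-state sequence of one variant into the next.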
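Likewise, the ELM-style output layer introduced here (and formalized in Eqs. (11)-(14) below) reduces to a random hidden mapping plus a closed-form ridge solution; the following is a minimal NumPy sketch on toy data, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 32))            # document feature vectors
O = np.eye(3)[rng.integers(0, 3, 200)]    # one-hot targets: pos/neutral/neg

# random, untrained hidden layer: Z = F(G(i), P), cf. Eq. (11)
W_in, b_in = rng.normal(size=(32, 64)), rng.normal(size=64)
Z = np.tanh(X @ W_in + b_in)

# closed-form output weights with ridge term I/C, cf. Eq. (12)
C = 10.0
beta = Z.T @ np.linalg.solve(np.eye(Z.shape[0]) / C + Z @ Z.T, O)

# class probabilities via softmax, cf. Eq. (14)
S = Z @ beta
probs = np.exp(S - S.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
print(probs[:2].round(3))
```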

A comprehensive explanation of the ELM's operational mechanism is provided in [30]. ELM is a kind of neural network that typically employs a single hidden layer, where tuning of this layer is not necessarily essential. When compared to other models like Support Vector Machines (SVM) and RF, ELM demonstrates superior performance, faster processing speeds, and reduced computational demands. By leveraging kernel functions, ELM achieves high accuracy and efficient performance. The primary benefits of ELM include minimal training errors and enhanced approximation capabilities. ELM employs automated adjustment of the weight biases and non-zero activation functions. The input feature mappings of ELM are represented by

Z = F(G(i), P)    (11)

The capsule features, characterized by dimension P, are represented as X. The function associated with the Extreme Learning Machine (ELM) output is described by

Y(i) = Z(i)β = Z(i) Z^T (I/C + Z Z^T)^(-1) O    (12)

S = α(Σ_{i=1}^{N} (Y(i), B(i), W(i)))    (13)

Let Z(i) represent the input feature maps. The output-weight matrix β is derived using the Moore-Penrose generalized inverse theorem, with Z^T denoting the transpose of Z. C is a constant, while B and W denote the biases and weights of the network, respectively. The probability for every category is then determined using the softmax function:

Y' = Softmax(S)    (14)

The predicted output Y' forecasts the target categories using the predefined datasets. To calculate the loss function, the cross-entropy function is employed, represented by the mathematical expression

Loss = -(1/K) Σ_{i=1}^{K} (Y(i) * log Y'(i)) + η||θ||^2    (15)

In this context, K denotes the length of the feature vector for the dimensional capsule, η denotes the regularization coefficient, and ||θ|| is the constant term.

4. Results and Discussions

The experimentation, results, and analysis section of this research presents a comprehensive examination of the empirical investigation conducted to assess the effectiveness of the algorithm, with detailed outcomes and a comprehensive comparative analysis. Through the systematic exploration of the algorithm's performance, its effectiveness in multi-label classification for sentiment analysis is analysed.

4.1 Implementation Details

The proposed framework was deployed on a system equipped with the following hardware and software configuration:

a. CPU and GPU: an Intel i9 CPU and an NVIDIA Titan GPU were used for training the proposed framework. This combination provided sufficient processing power for the deep learning models.
b. Storage RAM: the workstation has 16 GB of RAM, which supports efficient data handling and model training.
c. Data Storage Space: a total of 16 GB of disk space is used for storing the datasets, with an operating frequency of 3.4 GHz.
d. Software Design: Python 3.19 is used as the major programming tool for implementing the proposed framework, chosen for its versatility and robustness. Several libraries such as TensorFlow, Keras, Matplotlib, Pandas, NumPy, and Seaborn are utilised for developing the various tasks in the proposed framework.

4.2 Evaluation Metrics

Measures including accuracy, precision, recall, specificity, and F1-score are assessed and directly evaluated against varied state-of-the-art DL procedures used for multi-label classification to highlight the superiority of the recommended approach. All of these measures and statistical analyses are utilized to highlight the higher proficiency of the recommended approach. Table 2 shows the mathematical expressions used for calculating these metrics. The early stopping method is applied to overcome generalization and overfitting concerns; this method halts training when the validation performance of the proposed model no longer improves over time. TP and TN represent True Positives and True Negatives, while FP and FN denote False Positives and False Negatives. A TP is a value that is correctly recognized as true and is actually true; an FP is a value that is incorrectly recognized as true when it is actually false.

Table 2. Performance metrics used in the assessment

Performance Metric | Expression
Accuracy | (TP + TN) / (TP + TN + FP + FN)
Recall | TP / (TP + FN) × 100
Specificity | TN / (TN + FP)
Precision | TP / (TP + FP)
F1-Score | 2 × (Precision × Recall) / (Precision + Recall)

An FN is a value that is actually true yet is incorrectly recognized as negative, and a TN is a value that is negative and is correctly recognized as negative. Additionally, the Normalized Discounted Cumulative Gain (NDCG) is calculated for counting the relevant labels in the ground-truth vector V; the higher the NDCG, the higher the performance of the model.

Figure 6. Accuracy of Detection of the Proposed Model using the EURLEX57K Dataset
Figure 7. Precision of Detection of the Proposed Model using the EURLEX57K Dataset
Figure 8. Recall of Detection of the Proposed Model using the EURLEX57K Dataset
Figure 9. Specificity of Detection of the Proposed Model using the EURLEX57K Dataset
Figure 10. False Alarm Rate Detection of the Proposed Model using the EURLEX57K Dataset
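As a quick sanity check, the Table 2 metrics and NDCG can be computed on binary label-indicator arrays with scikit-learn; in this minimal sketch the arrays are toy values, and specificity is derived from the multilabel confusion matrix:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, ndcg_score, multilabel_confusion_matrix)

# toy ground truth and predictions for 4 documents x 5 labels
y_true = np.array([[1,0,1,0,0],[0,1,0,0,1],[1,1,0,0,0],[0,0,1,1,0]])
y_pred = np.array([[1,0,1,0,0],[0,1,0,1,1],[1,0,0,0,0],[0,0,1,1,0]])
y_score = y_pred * 0.9 + 0.05  # stand-in for predicted relevance scores

print("accuracy   :", accuracy_score(y_true, y_pred))  # exact-match ratio
print("precision  :", precision_score(y_true, y_pred, average="micro"))
print("recall     :", recall_score(y_true, y_pred, average="micro"))
print("f1-score   :", f1_score(y_true, y_pred, average="micro"))

# specificity = TN / (TN + FP), averaged over labels
mcm = multilabel_confusion_matrix(y_true, y_pred)
tn, fp = mcm[:, 0, 0], mcm[:, 0, 1]
print("specificity:", np.mean(tn / (tn + fp)))

# NDCG over the ground-truth relevance vector, cf. Section 4.2
print("NDCG       :", ndcg_score(y_true, y_score))
```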


Figure 6 presents the accuracy of detection for the recommended approach in detecting the sentiment labels. The model has produced 98% detection accuracy in multi-label sentiment analysis. Figures 7-10 show the precision, recall, specificity, and false alarm rate of the recommended approach in detecting the multi-label sentiments. From the figures, it is evident that the suggested method gives stable performance in detecting sentiments as the length of the testing text increases. Figure 11 shows the NDCG performance of the algorithm in detecting the ground-truth labels as the word length increases. As the word length increases, the NDCG performance is found to be 98.2%, which is far better performance in multi-label classification.

Figure 11. NDCG Detection of the Proposed Model using the EURLEX57K Dataset

4.3 Comparative Assessment

To prove the superiority of the recommended approach, it is compared with models such as AttentionXMTC, non-optimized GRU networks, LSTM networks, Bi-LSTM models, Bi-LSTM+Attention models, and Hybrid Attention Networks. Table 3 presents the comparative assessment of the evaluation metrics of these distinct methods in classifying the multi-label sentiment document data using the EURLEX57K dataset. From the table, it is evident that the suggested technique has produced superior results in multi-label classification and outperforms the other existing models in classifying the multi-label documents. Feed Forward Networks are applied in different fields [31-36].

Table 3. Performance Metrics of the Different Models in Classifying the Multi-labels using the EURLEX57K Dataset

Metrics | A-XMLC | GRU | Bi-LSTM | BiLA | HAN | Proposed Model
Accuracy | 90.0% | 87.2% | 91.5% | 93.5% | 94.6% | 98.2%
Precision | 89.2% | 87.3% | 90.2% | 92.6% | 93.4% | 97.2%
Recall | 88.7% | 86.5% | 90.1% | 92.1% | 93.0% | 97.0%
Specificity | 88.2% | 87.4% | 90.0% | 90.5% | 91.2% | 98.2%
F1-Score | 88.0% | 87.6% | 89.4% | 91.1% | 93.5% | 98.0%
NDCG | 84.2% | 83.8% | 82.5% | 87.4% | 89.4% | 97.6%

5. Conclusion and Future Enhancement

In this article, a novel XMTC framework was proposed which examines the document content and label structure concurrently to obtain the discriminative content for every label while maintaining the label content. The proposed XMTC framework consists of an ensemble of the Dolphin Optimized neural network and self-attention with Stacked GRUs for achieving better performance in multi-label classification. Extensive experiments are carried out using the EURLEX57K dataset, and performance metrics like accuracy, precision, recall, specificity, F1-score, and NDCG are measured and evaluated against other existing XMTC frameworks. The performance of the recommended approach is found to reach 98% accuracy, 97.5% precision, 97.2% recall, 97% F1-score, and 98.4% NDCG, respectively. In a nutshell, the proposed model provides a better platform with stronger discriminative ability than the baseline architectures. As a future direction, this framework needs further work on handling real-time massive multi-label datasets.

Author Statements:

 Ethical approval: The conducted research is not related to either human or animal use.
 Conflict of interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
 Acknowledgement: The authors declare that they have nobody or no-company to acknowledge.
 Author contributions: The authors declare that they have equal right on this paper.


 Funding information: The authors declare that there is no funding to be acknowledged.
 Data availability statement: The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.

References

[1] Mishra, N. K., & Singh, P. K. (2022). Linear ordering problem-based classifier chain using genetic algorithm for multi-label classification. Applied Soft Computing, 117, 108395. https://doi.org/10.1016/j.asoc.2022.108395
[2] Zhao, D., Wang, Q., Zhang, J., & Bai, C. (2023). Mine diversified contents of multi-spectral cloud images along with geographical information for multi-label classification. IEEE Transactions on Geoscience and Remote Sensing, 61, 1-15.
[3] Liu, Z., Tang, C., Abhadiomhen, S. E., Shen, X. J., & Li, Y. (2023). Robust label and feature space co-learning for multi-label classification. IEEE Transactions on Knowledge and Data Engineering, 1-14.
[4] Singh, K., Sharma, B., Singh, J., Srivastava, G., Sharma, S., Aggarwal, A., & Cheng, X. (2020). Local statistics-based speckle reducing bilateral filter for medical ultrasound images. Mobile Networks and Applications, 25, 2367-2389.
[5] Huang, J., Qian, W., Vong, C. M., Ding, W., Shu, W., & Huang, Q. (2023). Multi-label feature selection via label enhancement and analytic hierarchy process. IEEE Transactions on Emerging Topics in Computational Intelligence.
[6] Koundal, D., Sharma, B., & Guo, Y. (2020). Intuitionistic based segmentation of thyroid nodules in ultrasound images. Computers in Biology and Medicine, 121, 103776.
[7] Kalpana, P., Malleboina, K., Nikhitha, M., Saikiran, P., & Kumar, S. N. (2024). Predicting cyberbullying on social media in the big data era using machine learning algorithm. 2024 International Conference on Data Science and Network Security (ICDSNS), Tiptur, India, 1-7. https://doi.org/10.1109/ICDSNS62112.2024.10691297
[8] Huang, J., Vong, C. M., Chen, C. P., & Zhou, Y. (2022). Accurate and efficient large-scale multi-label learning with reduced feature broad learning system using label correlation. IEEE Transactions on Neural Networks and Learning Systems, 1-14.
[9] Bayati, H., Dowlatshahi, M. B., & Hashemi, A. (2022). MSSL: A memetic-based sparse subspace learning algorithm for multi-label classification. International Journal of Machine Learning and Cybernetics, 13, 3607-3624.
[10] Zhu, X., Li, J., Ren, J., Wang, J., & Wang, G. (2023). Dynamic ensemble learning for multi-label classification. Information Sciences, 623, 94-111.
[11] Bayu, Y., & Tegegn, T. (2024). Multi-label emotion classification on social media comments using deep learning. https://doi.org/10.21203/rs.3.rs-4431629/v1
[12] Wang, Q., Zhang, H., Zhang, W., et al. (2024). Deep active learning for multi-label text classification. Scientific Reports, 14, 28246. https://doi.org/10.1038/s41598-024-79249-7
[13] Samy, A. (2023). Sentiment analysis classification system using hybrid BERT models. Journal of Big Data, 10. https://doi.org/10.1186/s40537-023-00781-w
[14] De, S., & Vats, S. (2023). Decoding concerns: Multi-label classification of vaccine sentiments in social media. arXiv, abs/2312.10626.
[15] Ameer, I., Bölücü, N., Siddiqui, M. H. F., Can, B., Sidorov, G., & Gelbukh, A. (2023). Multi-label emotion classification in texts using transfer learning. Expert Systems with Applications, 213(Part A), 118534. https://doi.org/10.1016/j.eswa.2022.118534
[16] Paranjape, A., Kolhatkar, G., Patwardhan, Y., Gokhale, O., & Dharmadhikari, S. (2023). Converge at WASSA 2023 Empathy, Emotion and Personality Shared Task: A transformer-based approach for multi-label emotion classification. Pune Institute of Computer Technology, Pune, India.
[17] Elangovan, A., & Sasikala, S. (2022). A multi-label classification of disaster-related tweets with enhanced word embedding ensemble convolutional neural network model. Informatica: An International Journal, 46(7). https://doi.org/10.31449/inf.v46i7.4280
[18] Ashraf, N., Khan, L., Butt, S., Chang, H.-T., Sidorov, G., & Gelbukh, A. (2022). Multi-label emotion classification of Urdu tweets. PeerJ Computer Science.
[19] Khalil, E., El Houby, E., & Mohamed, H. (2021). Deep learning for emotion analysis in Arabic tweets. https://doi.org/10.21203/rs.3.rs-631537/v1
[20] Bdeir, A. M., & Ibrahim, F. (2020). A framework for Arabic tweets multi-label classification using word embedding and neural networks algorithms. In Proceedings of the 2020 2nd International Conference on Big Data Engineering (BDE '20) (pp. 105-112). Association for Computing Machinery. https://doi.org/10.1145/3404512.3404526
[21] https://huggingface.co/datasets/NLP-AUEB/eurlex
[22] Kalpana, P., Kodati, S. S., Sreekanth, D., Smerat, N., & Akram, M. (2025). Explainable AI-driven gait analysis using wearable Internet of Things (WIoT) and human activity recognition. Journal of Intelligent Systems and Internet of Things, 15(2), 55-75. https://doi.org/10.54216/JISIoT.150205
[23] Dhiman, P., Kukreja, V., Manoharan, P., Kaur, A., Kamruzzaman, M. M., Dhaou, I. B., & Iwendi, C. (2022). A novel deep learning model for detection of severity level of the disease in citrus fruits. Electronics, 11, 495.
[24] Kalpana, P., Narayana, P., Smitha, M. D., Keerthi, K., Smerat, A., & Akram, M. (2025). Health-Fots: A latency-aware fog-based IoT environment and efficient monitoring of body's vital parameters in smart healthcare environment. Journal of Intelligent Systems and Internet of Things, 15(1), 144-156. https://doi.org/10.54216/JISIoT.150112
[25] Abirami, A., Lakshmanaprakash, S., Priya, R. L., Hirlekar, V., & Dalal, B. (2024). Proactive analysis and detection of cyber-attacks using deep learning techniques. Indian Journal of Science and Technology, 17(15), 1596-1605. https://doi.org/10.17485/IJST/v17i15.3044
[26] Kalpana, P., Almusawi, M., Chanti, Y., Sunil Kumar, V., & Varaprasad Rao, M. (2024). A deep reinforcement learning-based task offloading framework for edge-cloud computing. Proceedings of the 2024 International Conference on Integrated Circuits and Communication Systems (ICICACS), Raichur, India, 1-5. https://doi.org/10.1109/ICICACS60521.2024.10498232
[27] Salam, A., Ullah, F., Amin, F., & Abrar, M. (2023). Deep learning techniques for web-based attack detection in Industry 5.0: A novel approach. Technologies, 11(4), 107. https://doi.org/10.3390/technologies11040107
[28] Molaei, R., Rahsepar Fard, K., & Bouyer, A. (2025). A mathematical multi-objective optimization model and metaheuristic algorithm for effective advertising in the social Internet of Things. Neural Computing and Applications, 37, 4797-4821. https://doi.org/10.1007/s00521-024-10793-z
[29] Akhavan-Hejazi, Z. S., Esmaeili, M., & Ghobaei-Arani, M. (2024). Identifying influential users using a homophily-based approach in location-based social networks. Journal of Supercomputing, 80, 19091-19126.
[30] Saqia, B., Khan, K., Rahman, A. U., et al. (2025). Immoral post detection using a one-dimensional convolutional neural network-based LSTM network. Multimedia Tools and Applications. https://doi.org/10.1007/s11042-025-20757-7
[31] S, P., & A, P. (2024). Secured Fog-Body-Torrent: A hybrid symmetric cryptography with multi-layer feed forward networks tuned chaotic maps for physiological data transmission in Fog-BAN environment. International Journal of Computational and Experimental Science and Engineering, 10(4). https://doi.org/10.22399/ijcesen.490
[32] Hafez, I. Y., & El-Mageed, A. A. A. (2025). Enhancing digital finance security: AI-based approaches for credit card and cryptocurrency fraud detection. International Journal of Applied Sciences and Radiation Research, 2(1). https://doi.org/10.22399/ijasrar.21
[33] Fowowe, O. O., & Agboluaje, R. (2025). Leveraging predictive analytics for customer churn: A cross-industry approach in the US market. International Journal of Applied Sciences and Radiation Research, 2(1). https://doi.org/10.22399/ijasrar.20
[34] Ibeh, C. V., & Adegbola, A. (2025). AI and machine learning for sustainable energy: Predictive modelling, optimization and socioeconomic impact in the USA. International Journal of Applied Sciences and Radiation Research, 2(1). https://doi.org/10.22399/ijasrar.19
[35] Sadula, V., & Ramesh, D. (2025). Enhancing cross language for English-Telugu pairs through the modified transformer model based neural machine translation. International Journal of Computational and Experimental Science and Engineering, 11(2). https://doi.org/10.22399/ijcesen.1740
[36] García, R., Garzon, C., & Estrella, J. (2025). Generative artificial intelligence to optimize lifting lugs: Weight reduction and sustainability in AISI 304 steel. International Journal of Applied Sciences and Radiation Research, 2(1). https://doi.org/10.22399/ijasrar.22
