A Novel Cognitive Multi-Label Classification Model for Social Media Data Based
on Dolphin Optimized Learning and Hybrid Classification networks
Shumama Ansa1*, G. Narsimha2
1 Research Scholar, Computer Science and Engineering, Jawaharlal Nehru Technological University Hyderabad, Telangana, India
* Corresponding Author Email: [email protected] - ORCID: 0009-0007-7670-4185
2 Professor and Principal, Computer Science and Engineering, JNTUH University College of Engineering Sultanpur, Telangana, India
Email: [email protected] - ORCID: 0000-0002-0143-8122
DOI: 10.22399/ijcesen.1737
Received: 08 February 2025
Accepted: 05 April 2025

Keywords: Extreme Multi-Label Classification, Dolphin Optimized Learning Model, Stacked Gated Recurrent Units, Feed Forward Networks.

Abstract: Social media plays a pivotal role in people's daily lives, where users share diverse material on subjects such as ideas, events, and emotions. As the number of users grows, the extensive use of social platforms has resulted in the creation of vast amounts of information. These unstructured data need to be labelled so that the relevant information can be understood and used in applications such as healthcare, entertainment and even politics. Such data carry a large number of labels and therefore require annotation that tags each document with its most relevant labels. Extreme Multi-Label Text Classification (XMTC) aims to solve this problem by automatically labelling a document with the most pertinent labels drawn from a very large label set. Because of the surge in big data, implementing XMTC has become a significant challenge when millions of data points, features and labels must be handled. This bottleneck was eased by the arrival of Machine Learning (ML) and Deep Learning (DL) algorithms, but the computational overhead of training these networks degrades XMTC performance on larger social media data. To solve this problem, this paper proposes an ensemble of Dolphin Optimized Learning and hybrid classification networks. The proposed model comprises three stages: first, a multi-label dolphin optimized learning procedure recognizes the weight of every word in relation to the labels; next, the label structure and document details are used to ascertain the links between phrases and labels and to compress the labels; finally, label-aware classification networks formulated with Stacked Gated Recurrent Units and Feed Forward networks produce the final label-aware representation of the massive document collection. Comprehensive experimentation is carried out on the EURLEX57K benchmark, and performance metrics such as accuracy, precision, recall and Hamming score are calculated to compare the recommended XMTC model with varied state-of-the-art models. Results demonstrate that the proposed model exhibits superior performance, notably on the tail labels.
the same time, there might be a substantial count of tail labels with sparse supporting records, making them difficult to model. To address these challenges, researchers focus on two aspects: (1) how to represent labels so that the correlation between labels can be precisely mined, and (2) how to represent documents so that the interdependence between texts can be fully captured.

1.1 Role of ML and DL Models in XMTC

In recent times, DL models such as Convolutional Neural Networks (CNN) [6], Classification Neural Networks (CLNN) [7] and Recurrent Neural Networks (RNN) [8] have attained great success in representing text data. However, these techniques are limited to a single level of text representation. Alongside these deep learning models, Attentive Neural Networks (ANN), AttentionXML [9] and EXAM [10] have also gained popularity for solving XMTC problems. These algorithms, however, concentrate solely on the document text and disregard the label structure within the extreme label set, which has been demonstrated to be crucial in XMTC learning problems. Additionally, their computational complexity drives the existing models towards inaccurate classification.

1.2 Motivation of the Research

Existing deep learning models exhibit inaccurate classification because they ignore the label structures, and they suffer from a computational complexity that remains a real problem for researchers. To solve this problem, this paper proposes a hybrid combination of Dolphin Optimized Neural Networks (DONN) and Stacked GRUs with Feed Forward networks to achieve better XMTC for massive social media text data. The proposed framework consists of three components: the Label Contributor Phase (LCP), the Compression Phase (CP) and the Label Aware Classification Networks (LACN).

1.3 Contribution of the Research Article

a) The paper proposes Dolphin Optimized Deep Neural Networks for detecting the relationship of the words to the labels. The DONN models with self-attention maps construct the label-aware document representation by concurrently examining the content and the label structure.
b) Hybrid Stacked GRU networks are proposed for achieving better classification over the larger label sets and documents.
c) A label-aware semantic relationship using attention maps is proposed to extract the relationship between every document and all of its multiple labels for XMTC.
d) Extensive experimentation has been carried out using the EURLEX57K dataset, in which the performance metrics are measured and evaluated against other state-of-the-art procedures. Experimental findings highlight that the recommended approach exhibits superior XMTC performance.

1.4 Structure of the Paper

The manuscript is arranged as follows: Section 2 reviews the relevant studies by different authors. The proposed system, data pre-processing and classifier architectures are presented in Section 3. The experimental details, comparative analysis and results discussion are given in Section 4. Finally, the study concludes with future enhancements in Section 5.

2. Related Works

Bayu and Tegegn (2024) [11] conducted pioneering research on multi-label emotion classification for Amharic social media text, addressing a critical gap in the field. They collected 18,000 samples from social media platforms, annotated by psychologists and professionals. Utilizing word2vec and one-hot encoding, they trained four DL models: LSTM, BiLSTM, CNN, and GRU. The best accuracy was achieved with BiLSTM at 54.5%, followed by CNN at 54%, and LSTM at 53.1%. However, the study's primary drawback lies in its relatively low overall accuracy, with GRU performing particularly poorly at 39.7%. The limited dataset size and the complexity of Amharic language processing pose significant challenges for comprehensive emotion classification. Wang et al. (2024) [12] introduced a novel deep active learning method that relies on Bayesian DL and expected confidence (BEAL) for multi-label text classification, addressing the challenge of limited labeled data. Their approach leverages Bayesian deep learning to obtain the model's posterior predictive probability distribution and specifies an innovative expected-confidence-based acquisition function. Experimental results with a BERT-based multi-label text classification (MLTC) technique demonstrated more efficient model training, enabling convergence with fewer labeled samples. However, the computational
complexity of Bayesian deep learning approaches may pose challenges for real-world implementation. Samy (2023) [13] explored sentiment analysis using hybrid BERT models to enhance classification accuracy across different scenarios. The research developed four deep learning models by combining BERT with BiLSTM and Bidirectional GRU algorithms, using pre-trained word embedding vectors to assist the fine-tuning process. The developed models aimed to improve accuracy and evaluate the impact of hybridizing BiGRU and BiLSTM layers; the architectures incorporating BiGRU layers achieved the best results. However, the computational intensity of the hybrid models and their demand for extensive computational resources remain significant drawbacks. De and Vats (2023) [14] developed a robust multi-label classifier to classify diverse concerns expressed in tweets about vaccination. Their comprehensive approach utilized advanced natural language processing methods and ML models, including transformer models such as BERT and GPT-3.5, alongside traditional methods such as Classifier Chains, Support Vector Machine (SVM), Random Forest (RF), and Naive Bayes (NB). The cutting-edge large language models outperformed the other methods, demonstrating the potential of advanced AI in understanding complex social media discourse. Key drawbacks, however, include potential bias in data collection and difficulty capturing subtle vaccine-related sentiments. Ameer et al. (2023) [15] advanced multi-label emotion classification by introducing innovative transfer learning techniques with multiple attention mechanisms. Their approach utilized transformer networks such as XLNet, DistilBERT, and RoBERTa to reveal each word's contribution to emotion classification. The XLNet multi-attention model showed 45.6% accuracy on the Ren-CECps Chinese dataset. Although the results are impressive, the model's complexity, high computational demands, and difficulties in handling different languages and emotions limit its effectiveness. Paranjape et al. (2023) [16] evaluated several transformer-driven approaches by fine-tuning them for multi-label classification. Oversampling techniques were applied to address the class imbalance in the dataset, and ensemble methods were employed to enhance the system's effectiveness. The technique's reliance on multiple sophisticated models may limit its practicality for resource-constrained environments. Elangovan and Sasikala (2022) [17] introduced a novel approach utilizing the Enhanced Embedding from Language Model (EnELMo) for categorizing tweets into various classes. The proposed EWECNN approach consists of an elevated ELMo component to process crisis word vectors. Their suggested method surpasses other techniques in the classification of microblog texts with an improved accuracy rate. However, significant drawbacks include the limited generalizability to other domains and the high computational requirements of the complex model architecture. Ashraf et al. (2022) [18] created the first multi-label emotion dataset for Urdu, comprising 6,043 tweets representing six basic emotions. Recognizing the morphological and syntactic challenges of the Urdu language, they developed baseline classifiers, including ML and DL approaches and BERT. The study employed various text representations, such as stylometric attributes, pre-trained word embeddings, and n-grams. Nevertheless, the main drawback was the dataset's small size. Khalil et al. (2021) [19] proposed a multi-label emotion classification model for Arabic tweets utilising a BiLSTM deep network. Their approach included comprehensive preprocessing steps, such as Arabic language stemming and emoji-to-text replacement. The study uniquely examined the effect of hyperparameter tuning on model performance, achieving a 9% improvement in validation accuracy. Nonetheless, the method's effectiveness may be constrained by its reliance on Aravec embeddings, which could limit adaptability to evolving language usage. Bdeir and Ibrahim (2020) [20] developed an architecture for Arabic tweet multi-label classification using word embedding techniques and deep learning models. They constructed a dataset of 160,000 Arabic tweets and compared two DL methods: CNN and RNN. Their outcomes demonstrated nearly identical performance across both network types, with accuracy scores around 90% and Hamming loss of approximately 0.02. However, the study's reliance on a single dataset and the lack of evaluation across diverse linguistic contexts limit its generalizability.

3. Proposed Methodology

3.1 System Overview

Figure 1 presents the proposed framework for an efficient XMTC model. As shown in Figure 1, the proposed XMTC framework consists of four components: data pre-processing, DONN models for handling the multi-label documents, feature extraction and, finally, the classification layers. The detailed description of each component is as follows.
3.2 Data Pre-processing Technique

Before training the proposed model with the datasets mentioned in [21], the following pre-processing steps are adopted for the parallel datasets (a minimal code sketch of these steps is given after Section 3.3):
(i) converting all the texts into lower case;
(ii) excluding special characters from the text, except apostrophes;
(iii) tokenizing the source and target parallel sentences into sub-word tokens using Keras libraries;
(iv) generating the sub-word embeddings as input to the DONN models.

3.3 DONN Models for Handling the Multiple Labels

This section discusses the general overview of the dolphin optimization model, the dolphin optimized model and the label-aware documents.

Dolphin Optimization Model
In this paper, a new optimization technique inspired by dolphin echolocation is introduced. This approach simulates the hunting strategies dolphins employ, where they use sonar to detect and adjust to the location of their prey. This process of modifying sonar signals to pinpoint the target is emulated as the core feature of the proposed method. Figure 2 shows a living dolphin grabbing its meal.
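As a rough illustration of the pre-processing pipeline in Section 3.2, the sketch below lowercases the text, removes special characters except apostrophes, and tokenizes with the Keras Tokenizer; it is a simplified, word-level stand-in for the sub-word tokenization described above, and the vocabulary size and sequence length are illustrative assumptions rather than values taken from this paper.

import re
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

def clean_text(text: str) -> str:
    """Steps (i)-(ii): lower-case and drop special characters except apostrophes."""
    text = text.lower()
    return re.sub(r"[^a-z0-9'\s]", " ", text)

# Illustrative corpus; the paper uses the EURLEX57K documents [21].
documents = ["An Example Social-Media Post!", "Another post, with labels..."]
cleaned = [clean_text(d) for d in documents]

# Step (iii): tokenization with Keras (word-level stand-in for sub-word tokens).
tokenizer = Tokenizer(num_words=20000, oov_token="<unk>")   # assumed vocabulary size
tokenizer.fit_on_texts(cleaned)
sequences = tokenizer.texts_to_sequences(cleaned)

# Step (iv): fixed-length index sequences that an embedding layer can consume.
padded = pad_sequences(sequences, maxlen=256, padding="post")  # assumed max length
print(padded.shape)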
encompasses modifying the proportion of solutions produced in the global search phase (phase 1) compared to those produced in the local search phase (phase 2). By utilizing the DE approach, this ratio can be modified according to a predefined curve. This curve dictates how the approach gradually shifts from global search to local search. The user specifies a curve that defines how the optimization should converge, and the algorithm then adjusts its parameters to follow this curve. In essence, the method considers the probability of selecting the best solution found so far compared to the other alternatives in each iteration. This approach reduces the dependence on the parameters, as the curve defines the convergence criterion. The curve can be any smooth, increasing function, with some recommendations on its form, which will be discussed further. Previous studies have shown a unified approach to parameter selection in metaheuristics [24]. In these methods, a metric known as the Convergence Factor (CF) is used. The CF represents the average likelihood of selecting the best solution. For instance, in an example where steel profiles are assigned to a structure with four elements, the CF is calculated as the mean of the frequencies of the modal profile for those elements. This method allows for consistent convergence tracking in optimization problems. Before proceeding with the optimization process, the search space must be arranged using the following criterion:

PP(Loop_i) = PP_1 + (1 − PP_1) × (Loop_i^Power − 1) / (LoopsNumber^Power − 1)    (1)

The convergence curve in Equation (1) has a degree given by Power. The loops count, also referred to as the Loop count (LoopsNumber), represents how many iterations the algorithm requires to reach the convergence point. This value must be selected by the user, depending on the computational resources available for the algorithm.

Figure 3. Overview of the Search Space Arrangement

Figure 3 shows the alterations of PP caused by moderating the Power in the recommended formula, Equation (1). Figure 4 shows the working flow of the proposed dolphin optimization model.
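To make the convergence schedule in Equation (1) concrete, the following minimal sketch computes the predefined probability PP for each loop; the values PP_1 = 0.1, Power = 1 and LoopsNumber = 85 are assumed only for illustration, and this is the curve that steers the shift from global to local search, not the full dolphin optimization procedure.

def pp_schedule(loop_i: int, pp1: float, power: float, loops_number: int) -> float:
    """Predefined probability of choosing the best-so-far solution at loop i, Equation (1)."""
    return pp1 + (1.0 - pp1) * (loop_i ** power - 1.0) / (loops_number ** power - 1.0)

pp1, power, loops_number = 0.1, 1.0, 85        # assumed illustrative settings
curve = [pp_schedule(i, pp1, power, loops_number) for i in range(1, loops_number + 1)]
print(round(curve[0], 3), round(curve[-1], 3))  # starts at PP1 and rises to 1.0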
Hyper-parameter Tuned Deep Neural Networks
Hyper-parameter (HP) tuning is performed to acquire the optimal HP for the proposed framework and to further lessen its complexity. The HP that require modification in this research are the count of hidden layers, the number of hidden units, the dropout rate, the number of epochs, and the batch size. The weights of the dense networks utilized by the classification layers are fine-tuned using the straightforward DEO. The training network initially fetches randomly chosen HP. The fitness value of the suggested approach is provided in Equation (2). The HP are re-calculated using Algorithm 1 during every iteration, and the iteration concludes when the fitness function aligns with Equation (2).

Fitness Function = Max(Accuracy, Precision, Recall, F1-score)    (2)

The developed classification layer identifies the multiple class labels efficiently and with minimal computation. Algorithm 1 outlines the functional procedure of the proposed classification layers. According to the suggested algorithm, 78 hidden nodes, a momentum of 0.01, and 85 iterations are chosen.
Algorithm 1. Pseudo-code for the proposed model
01 Input: bias weight, concealed units, epochs, learning rate
02 Target: multi-class labels
03 Assign the bias weight, concealed units, epochs and learning rate at random
04 Set the three parameters
05 While true
06   Use Equation (2) to determine the fitness function
07   For t = 1 to Max. iteration
08     Use Equations (1) and (2) to allocate the bias weights and input layers
09     Use Equation (2) to quantize the fitness function
10     If the fitness function is equal to the threshold
11       Jump to Step 14
12     Otherwise
13       Jump to Step 06
14 Stop
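A minimal sketch of the random hyper-parameter search that Algorithm 1 describes is given below; the candidate ranges, the build_and_evaluate stub and the 0.99 convergence threshold are illustrative assumptions, while the fitness follows Equation (2) as the maximum of the validation metrics.

import random

def fitness(accuracy, precision, recall, f1):
    """Equation (2): fitness is the maximum of the evaluation metrics."""
    return max(accuracy, precision, recall, f1)

def build_and_evaluate(hp):
    """Stub: train the classification layers with the given HP and return metrics."""
    # In the real pipeline this would train the DEO-tuned network and evaluate it.
    return random.random(), random.random(), random.random(), random.random()

search_space = {
    "hidden_units": [64, 78, 128],      # assumed candidate values
    "dropout": [0.2, 0.3, 0.5],
    "epochs": [50, 85, 100],
    "batch_size": [32, 64],
}

best_hp, best_fit = None, -1.0
for _ in range(85):                      # loop count as in Algorithm 1 (85 iterations)
    hp = {k: random.choice(v) for k, v in search_space.items()}
    fit = fitness(*build_and_evaluate(hp))
    if fit > best_fit:
        best_hp, best_fit = hp, fit
    if fit >= 0.99:                      # assumed convergence threshold
        break
print(best_hp, round(best_fit, 3))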
3.4 DEO Tuned Model for Word and Feature Embedding

To establish the recommended approach, the pre-processed text data is first converted into low-dimensional dense vectors by word embedding techniques. The extreme labels are encoded into dense vectors through a co-existence graph to facilitate the accurate identification of label correlations and local patterns. To better extract the label information, a label co-existence graph G = (V, S) is built from the training data, where V is the label node set and S the edge set; there is an edge connecting the i-th label and the j-th document. For better extraction of label correlation information, the label co-existence graph G = (V, S) was built using Python libraries. The objective is to map the extreme labels into a low-dimensional latent space, ensuring that labels positioned closely within the graph share similar representations. To achieve this, the widely recognized and robust Node2Vec [25] technique is employed. This method effectively explores the diverse neighborhoods of labels using a flexible, biased random walk approach, incorporating both breadth-first and depth-first sampling strategies. At last, every label is categorized into the group of positive, neutral or negative. Each label is implemented as an r-dimensional dense vector, i.e., l(i) ∈ R^r for the i-th label (i = 1, ..., k), and the whole label set can be presented as

L = (l(1), l(2), ..., l(k)) ∈ R^(r×k)    (3)

The proposed DEO model seeks to enhance the representation of each document by leveraging both its content and the structure of the labels.
3.5 Self-Attention Maps

Recent studies predominantly focus on incorporating attention layers to reduce redundant features, thereby enhancing the accuracy of the classification process. The self-attention mechanism, also referred to as intra-attention, operates by generating three vectors, Q (Query), K (Key) and V (Value), for each input sequence; consequently, the input sequences from each layer are converted into output sequences. In essence, this technique aligns the Query with the corresponding key-value pairs using scaled dot-product functions. In multi-label contexts, a document might fetch multiple labels, and it should reflect the context most strongly related to every label. To focus on the different words of the documents, self-attention is applied to the outputs of the DEO-evoked neural networks.
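As a compact illustration of the scaled dot-product self-attention described in Section 3.5, the sketch below projects a toy sequence into Q, K and V and returns the attention-weighted outputs; the random projections and dimensions are illustrative, not the trained weights of the proposed network.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (seq_len, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # alignment of queries with keys
    weights = softmax(scores, axis=-1)             # attention map over the sequence
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 6, 16, 8                   # assumed toy dimensions
X = rng.normal(size=(seq_len, d_model))            # e.g. DEO-evoked token features
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
print(out.shape, attn.shape)                       # (6, 8) (6, 6)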
Recall = TP / (TP + FN) × 100
Specificity = TN / (TN + FP)
Precision = TP / (TP + FP)
F1-Score = 2 × (Precision × Recall) / (Precision + Recall)

The next one is FN, where the value is true yet is incorrectly recognised as negative. The last one is TN, where the value is negative and is correctly recognised as negative. Additionally, the Normalized Discounted Cumulative Gain (NDCG) is calculated for counting the relevant labels in the ground-truth vector V. The higher the NDCG, the higher the performance of the model.
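A short sketch of how these metrics and NDCG can be computed for multi-label predictions is shown below, using scikit-learn for illustration (an assumption; the paper does not name the evaluation library) and toy label vectors in place of the EURLEX57K ground truth.

import numpy as np
from sklearn.metrics import precision_score, recall_score, ndcg_score

# Toy multi-label ground truth V and predictions for 3 documents over 4 labels.
y_true = np.array([[1, 0, 1, 0], [0, 1, 0, 0], [1, 1, 0, 1]])
y_pred = np.array([[1, 0, 1, 0], [0, 1, 1, 0], [1, 0, 0, 1]])
y_scores = np.array([[0.9, 0.1, 0.8, 0.2], [0.2, 0.7, 0.6, 0.1], [0.8, 0.4, 0.3, 0.9]])

precision = precision_score(y_true, y_pred, average="samples", zero_division=0)
recall = recall_score(y_true, y_pred, average="samples", zero_division=0)
f1 = 2 * precision * recall / (precision + recall)   # F1 formula as in the table above
ndcg = ndcg_score(y_true, y_scores)                  # rewards ranking relevant labels near the top

print(round(precision, 3), round(recall, 3), round(f1, 3), round(ndcg, 3))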
Figure 6. Accuracy of Detection of the Proposed Model using the EURLEX57K Dataset
Figure 7. Precision of Detection of the Proposed Model using the EURLEX57K Dataset
Figure 8. Recall of Detection of the Proposed Model using the EURLEX57K Dataset
Figure 9. Specificity of Detection of the Proposed Model using the EURLEX57K Dataset
Figure 10. False Alarm Rate of Detection of the Proposed Model using the EURLEX57K Dataset
Figure 6 presents the accuracy of detection for the recommended approach in detecting the sentiment labels. The model has produced 98% detection accuracy in multi-label sentiment analysis. Figures 7-10 show the precision, recall, specificity and F1-score of the recommended approach in detecting the multi-label sentiments. From the figures, it is evident that the suggested method gives a stable performance in detecting sentiments as the testing word length increases. Figure 11 shows the NDCG performance of the algorithm in detecting the ground-truth labels with increasing word length. As the word length increases, the NDCG performance is found to be 98.2%, which is a far better performance in multi-label classification.

Figure 11. NDCG Detection

The suggested technique is compared with Bi-LSTM models, Bi-LSTM+Attention models and Hybrid Attention Networks. Table 3 presents the comparative assessment of the evaluation metrics for the various algorithms in classifying the multi-label sentimental document data of the EURLEX57K dataset. From the table, it is evident that the suggested technique produces superior results in multi-label classification and outperforms the other existing models in classifying the multi-label documents. Feed Forward Networks are applied in different fields [31-36].

5. Conclusion and Future Enhancement

In this article, a novel XMTC framework was proposed which examines the document content and the label structure concurrently to obtain the discriminative content for every label while maintaining the label content. The proposed XMTC framework consists of an ensemble of the Dolphin Optimized neural network, self-attention with the
Funding information: The authors declare that there is no funding to be acknowledged.

Data availability statement: The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.

References

[1] Mishra, N. K., & Singh, P. K. (2022). Linear ordering problem-based classifier chain using genetic algorithm for multi-label classification. Applied Soft Computing, 117, 108395. https://doi.org/10.1016/j.asoc.2022.108395
[2] Zhao, D., Wang, Q., Zhang, J., & Bai, C. (2023). Mine diversified contents of multi-spectral cloud images along with geographical information for multi-label classification. IEEE Transactions on Geoscience and Remote Sensing, 61, 1–15.
[3] Liu, Z., Tang, C., Abhadiomhen, S. E., Shen, X. J., & Li, Y. (2023). Robust label and feature space co-learning for multi-label classification. IEEE Transactions on Knowledge and Data Engineering, 1–14.
[4] Singh, K., Sharma, B., Singh, J., Srivastava, G., Sharma, S., Aggarwal, A., & Cheng, X. (2020). Local statistics-based speckle reducing bilateral filter for medical ultrasound images. Mobile Networks and Applications, 25, 2367–2389.
[5] Huang, J., Qian, W., Vong, C. M., Ding, W., Shu, W., & Huang, Q. (2023). Multi-label feature selection via label enhancement and analytic hierarchy process. IEEE Transactions on Emerging Topics in Computational Intelligence.
[6] Koundal, D., Sharma, B., & Guo, Y. (2020). Intuitionistic based segmentation of thyroid nodules in ultrasound images. Computers in Biology and Medicine, 121, 103776.
[7] Kalpana, P., Malleboina, K., Nikhitha, M., Saikiran, P., & Kumar, S. N. (2024). Predicting cyberbullying on social media in the big data era using machine learning algorithm. 2024 International Conference on Data Science and Network Security (ICDSNS), Tiptur, India, 1–7. https://doi.org/10.1109/ICDSNS62112.2024.10691297
[8] Huang, J., Vong, C. M., Chen, C. P., & Zhou, Y. (2022). Accurate and efficient large-scale multi-label learning with reduced feature broad learning system using label correlation. IEEE Transactions on Neural Networks and Learning Systems, 1–14.
[9] Bayati, H., Dowlatshahi, M. B., & Hashemi, A. (2022). MSSL: A memetic-based sparse subspace learning algorithm for multi-label classification. International Journal of Machine Learning and Cybernetics, 13, 3607–3624.
[10] Zhu, X., Li, J., Ren, J., Wang, J., & Wang, G. (2023). Dynamic ensemble learning for multi-label classification. Information Sciences, 623, 94–111.
[11] Bayu, Y., & Tegegn, T. (2024). Multi-label emotion classification on social media comments using deep learning. https://doi.org/10.21203/rs.3.rs-4431629/v1
[12] Wang, Q., Zhang, H., Zhang, W., et al. (2024). Deep active learning for multi-label text classification. Scientific Reports, 14, 28246. https://doi.org/10.1038/s41598-024-79249-7
[13] Samy, A. (2023). Sentiment analysis classification system using hybrid BERT models. Journal of Big Data, 10. https://doi.org/10.1186/s40537-023-00781-w
[14] De, S., & Vats, S. (2023). Decoding concerns: Multi-label classification of vaccine sentiments in social media. ArXiv, abs/2312.10626.
[15] Ameer, I., Bölücü, N., Siddiqui, M. H. F., Can, B., Sidorov, G., & Gelbukh, A. (2023). Multi-label emotion classification in texts using transfer learning. Expert Systems with Applications, 213(Part A), 118534. https://doi.org/10.1016/j.eswa.2022.118534
[16] Paranjape, A., Kolhatkar, G., Patwardhan, Y., Gokhale, O., & Dharmadhikari, S. (2023). Converge at WASSA 2023 Empathy, Emotion and Personality Shared Task: A transformer-based approach for multi-label emotion classification. Pune Institute of Computer Technology, Pune, India.
[17] Elangovan, A., & Sasikala, S. (2022). A multi-label classification of disaster-related tweets with enhanced word embedding ensemble convolutional neural network model. Informatica: An International Journal, 46(7). https://doi.org/10.31449/inf.v46i7.4280
[18] Ashraf, N., Khan, L., Butt, S., Chang, H.-T., Sidorov, G., & Gelbukh, A. (2022). Multi-label emotion classification of Urdu tweets. PeerJ Computer Science.
[19] Khalil, E., El Houby, E., & Mohamed, H. (2021). Deep learning for emotion analysis in Arabic tweets. https://doi.org/10.21203/rs.3.rs-631537/v1
[20] Bdeir, A. M., & Ibrahim, F. (2020). A framework for Arabic tweets multi-label classification using word embedding and neural networks algorithms. In Proceedings of the 2020 2nd International Conference on Big Data Engineering (BDE '20) (pp. 105–112). Association for Computing Machinery. https://doi.org/10.1145/3404512.3404526
[21] https://huggingface.co/datasets/NLP-AUEB/eurlex
[22] Kalpana, P., Kodati, S. S., Sreekanth, D., Smerat, N., & Akram, M. (2025). Explainable AI-driven gait analysis using wearable Internet of Things (WIoT) and human activity recognition. Journal of Intelligent Systems and Internet of Things, 15(2), 55–75. https://doi.org/10.54216/JISIoT.150205
[23] Dhiman, P., Kukreja, V., Manoharan, P., Kaur, A., Kamruzzaman, M. M., Dhaou, I. B., & Iwendi, C. (2022). A novel deep learning model for detection of severity level of the disease in citrus fruits. Electronics, 11(495).
[24] Kalpana, P., Narayana, P., Smitha, M. D., Keerthi, K., Smerat, A., & Akram, M. (2025). Health-Fots: A latency-aware fog-based IoT environment and efficient monitoring of body's vital parameters in
smart healthcare environment. Journal of Intelligent Systems and Internet of Things, 15(1), 144–156. https://doi.org/10.54216/JISIoT.150112
[25] Abirami, A., Sanmugaraja, L., R L, P., Hirlekar, V., & Dalal, B. (2024). Proactive analysis and detection of cyber-attacks using deep learning techniques. Indian Journal of Science and Technology, 17, 1596–1605. https://doi.org/10.17485/IJST/v17i15.3044
[26] Kalpana, P., Almusawi, M., Chanti, Y., Sunil Kumar, V., & Varaprasad Rao, M. (2024). A deep reinforcement learning-based task offloading framework for edge-cloud computing. Proceedings of the 2024 International Conference on Integrated Circuits and Communication Systems (ICICACS), Raichur, India, 1–5. https://doi.org/10.1109/ICICACS60521.2024.10498232
[27] Salam, A., Ullah, F., Amin, F., & Abrar, M. (2023). Deep learning techniques for web-based attack detection in Industry 5.0: A novel approach. Technologies, 11(4), 107. https://doi.org/10.3390/technologies11040107
[28] Molaei, R., Rahsepar Fard, K., & Bouyer, A. (2025). A mathematical multi-objective optimization model and metaheuristic algorithm for effective advertising in the social Internet of Things. Neural Computing and Applications, 37, 4797–4821. https://doi.org/10.1007/s00521-024-10793-z
[29] Akhavan-Hejazi, Z. S., Esmaeili, M., & Ghobaei-Arani, M. (2024). Identifying influential users using a homophily-based approach in location-based social networks. Journal of Supercomputing, 80, 19091–19126.
[30] Saqia, B., Khan, K., Rahman, A. U., et al. (2025). Immoral post detection using a one-dimensional convolutional neural network-based LSTM network. Multimedia Tools and Applications. https://doi.org/10.1007/s11042-025-20757-7
[31] S, P., & A, P. (2024). Secured Fog-Body-Torrent: A hybrid symmetric cryptography with multi-layer feed forward networks tuned chaotic maps for physiological data transmission in Fog-BAN environment. International Journal of Computational and Experimental Science and Engineering, 10(4). https://doi.org/10.22399/ijcesen.490
[32] Hafez, I. Y., & El-Mageed, A. A. A. (2025). Enhancing digital finance security: AI-based approaches for credit card and cryptocurrency fraud detection. International Journal of Applied Sciences and Radiation Research, 2(1). https://doi.org/10.22399/ijasrar.21
[33] Fowowe, O. O., & Agboluaje, R. (2025). Leveraging predictive analytics for customer churn: A cross-industry approach in the US market. International Journal of Applied Sciences and Radiation Research, 2(1). https://doi.org/10.22399/ijasrar.20
[34] Ibeh, C. V., & Adegbola, A. (2025). AI and machine learning for sustainable energy: Predictive modelling, optimization and socioeconomic impact in the USA. International Journal of Applied Sciences and Radiation Research, 2(1). https://doi.org/10.22399/ijasrar.19
[35] Sadula, V., & Ramesh, D. (2025). Enhancing cross language for English-Telugu pairs through the modified transformer model based neural machine translation. International Journal of Computational and Experimental Science and Engineering, 11(2). https://doi.org/10.22399/ijcesen.1740
[36] García, R., Garzon, C., & Estrella, J. (2025). Generative artificial intelligence to optimize lifting lugs: Weight reduction and sustainability in AISI 304 steel. International Journal of Applied Sciences and Radiation Research, 2(1). https://doi.org/10.22399/ijasrar.22