Search Results (465)

Search Parameters:
Keywords = XAI

15 pages, 3766 KiB  
Article
Smart Vision Transparency: Efficient Ocular Disease Prediction Model Using Explainable Artificial Intelligence
by Sagheer Abbas, Adnan Qaisar, Muhammad Sajid Farooq, Muhammad Saleem, Munir Ahmad and Muhammad Adnan Khan
Sensors 2024, 24(20), 6618; https://doi.org/10.3390/s24206618 - 14 Oct 2024
Abstract
Early prediction of ocular disease is a pressing concern in ophthalmic medicine. Although recent advances have shown the potential of artificial intelligence (AI) and machine learning for detecting eye diseases, explainability remains a crucial challenge in this area of research. Traditional methods, despite considerable effort, cannot reliably predict the correct ocular disease. Incorporating AI into eye-disease diagnosis further complicates matters because the decision-making process of AI models is opaque, which is a significant concern in high-stakes settings such as ocular disease prediction. This lack of transparency can undermine the confidence and trust of doctors and patients, as well as their perception of the AI and its abilities. Accordingly, explainable AI is essential for ensuring trust in the technology, enhancing clinical decision-making, and deploying ocular disease detection in practice. This research proposes an efficient transfer learning model for eye disease prediction that addresses the shortcomings of conventional approaches while integrating explainable artificial intelligence (XAI). The integration of XAI makes the decision-making process transparent by providing a rationale for each prediction. The proposed model achieves promising results with 95.74% accuracy, illustrating the transformative potential of XAI in advancing ocular healthcare, and it outperforms previously published methods in identifying various types of ocular disease. Full article
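
The abstract describes a transfer-learning classifier with an XAI component but does not name the backbone or the explanation technique. A rough, hedged sketch of that general pattern might look as follows, assuming an ImageNet-pretrained ResNet-18 and a simple gradient-saliency explanation; neither choice is confirmed by the paper, and the data here is random.

```python
# Hypothetical sketch only: transfer learning for ocular-disease classification
# plus a simple gradient-saliency explanation. Not the paper's actual model.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # assumed classes, e.g. normal / cataract / glaucoma / retinopathy

# Transfer learning: reuse ImageNet weights, replace only the classifier head.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)
backbone.eval()

# A random tensor stands in for a batch of preprocessed fundus images.
images = torch.randn(2, 3, 224, 224, requires_grad=True)
logits = backbone(images)
pred = logits.argmax(dim=1)

# Saliency "explanation": gradient of the predicted class score w.r.t. the pixels
# (a crude stand-in for Grad-CAM or SHAP, which the paper does not specify).
score = logits.gather(1, pred.unsqueeze(1)).sum()
score.backward()
saliency = images.grad.abs().max(dim=1).values  # per-image heatmaps, 224 x 224
print(pred.tolist(), saliency.shape)
```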

21 pages, 3914 KiB  
Article
Asset Returns: Reimagining Generative ESG Indexes and Market Interconnectedness
by Gordon Dash, Nina Kajiji and Bruno G. Kamdem
J. Risk Financial Manag. 2024, 17(10), 463; https://doi.org/10.3390/jrfm17100463 - 13 Oct 2024
Abstract
Financial economists have long studied factors related to risk premiums, pricing biases, and diversification impediments. This study examines the relationship between a firm’s commitment to environmental, social, and governance principles (ESGs) and asset market returns. We incorporate an algorithmic protocol to identify three nonobservable but pervasive E, S, and G time-series factors to meet the study’s objectives. The novel factors were tested for information content by constructing a six-factor Fama and French model following the imposition of the isolation and disentanglement algorithm. Realizing that nonlinear relationships characterize models incorporating both observable and nonobservable factors, the Fama and French model statement was estimated using an enhanced shallow-learning neural network. Finally, as a post hoc measure, we integrated explainable AI (XAI) to simplify the machine learning outputs. Our study extends the literature on the disentanglement of investment factors across two dimensions. We first identify new time-series-based E, S, and G factors. Second, we demonstrate how machine learning can be used to model asset returns, considering the complex interconnectedness of sustainability factors. Our approach is further supported by comparing neural-network-estimated E, S, and G weights with London Stock Exchange ESG ratings. Full article
(This article belongs to the Special Issue Business, Finance, and Economic Development)
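
As a toy illustration of the modelling pattern the abstract describes, the sketch below regresses synthetic excess returns on market factors plus assumed E, S, and G time-series factors with a shallow network, then ranks the factors with a basic XAI step. The factor names, data, and use of permutation importance are assumptions for illustration, not the authors' protocol.

```python
# Illustrative only: a shallow network regresses synthetic excess returns on
# three market factors plus assumed E, S and G factors, then a basic XAI step
# (permutation importance) ranks the factors. Names and data are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
factors = ["MKT", "SMB", "HML", "E", "S", "G"]            # six-factor design
X = rng.normal(size=(500, len(factors)))                   # synthetic factor returns
beta = np.array([0.9, 0.2, 0.1, 0.3, 0.15, 0.05])          # invented loadings
y = X @ beta + rng.normal(scale=0.1, size=500)             # synthetic excess returns

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0)
model.fit(X, y)

imp = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in sorted(zip(factors, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```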

14 pages, 1765 KiB  
Article
XAI-Augmented Voting Ensemble Models for Heart Disease Prediction: A SHAP and LIME-Based Approach
by Nermeen Gamal Rezk, Samah Alshathri, Amged Sayed, Ezz El-Din Hemdan and Heba El-Behery
Bioengineering 2024, 11(10), 1016; https://doi.org/10.3390/bioengineering11101016 - 12 Oct 2024
Abstract
Ensemble Learning (EL) has been used for almost ten years to classify heart diseases, but it remains difficult to grasp how these “black boxes”, or non-interpretable models, behave internally. Predicting heart disease is crucial to healthcare, since it allows for prompt diagnosis and treatment of the patient’s true condition. Nonetheless, forecasting the illness with a high degree of accuracy is still challenging. In this study, we propose a framework for heart disease prediction based on explainable artificial intelligence (XAI) and hybrid EL models, such as the LightBoost and XGBoost algorithms. The main goals are to build predictive models and to apply SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) analysis to improve the interpretability of the models. We carefully construct our systems and test different hybrid ensemble learning algorithms to determine which model is best for heart disease prediction (HDP). The approach promotes interpretability and transparency when examining these widespread health issues. By combining hybrid EL models with XAI, the important factors and risk signals that underpin the occurrence of heart disease are made visible. The accuracy, precision, and recall of the models were used to evaluate their efficacy. This study highlights how crucial it is for healthcare models to be transparent and recommends including XAI to improve interpretability and medical decision-making. Full article
(This article belongs to the Special Issue Artificial Intelligence for Better Healthcare and Precision Medicine)
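
A minimal, hedged sketch of the workflow outlined above: a soft-voting ensemble is explained globally with SHAP and locally with LIME. The synthetic data, feature names, and choice of base learners (the paper names LightBoost and XGBoost; generic scikit-learn ensembles are substituted here) are placeholders, not the study's implementation.

```python
# Minimal sketch (synthetic data, placeholder base learners): a soft-voting
# ensemble explained globally with SHAP and locally with LIME.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=8, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("gb", GradientBoostingClassifier(random_state=0)),
                ("rf", RandomForestClassifier(random_state=0))],
    voting="soft",
).fit(X_tr, y_tr)

# Global view: SHAP values for the ensemble's positive-class probability.
explainer = shap.KernelExplainer(lambda d: ensemble.predict_proba(d)[:, 1], X_tr[:25])
shap_values = explainer.shap_values(X_te[:5], nsamples=100)
print(np.abs(shap_values).mean(axis=0))            # average impact per feature

# Local view: a LIME explanation for a single (synthetic) patient record.
lime_exp = LimeTabularExplainer(X_tr, feature_names=feature_names, mode="classification")
print(lime_exp.explain_instance(X_te[0], ensemble.predict_proba, num_features=5).as_list())
```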

45 pages, 8086 KiB  
Article
Helping CNAs Generate CVSS Scores Faster and More Confidently Using XAI
by Elyes Manai, Mohamed Mejri and Jaouhar Fattahi
Appl. Sci. 2024, 14(20), 9231; https://doi.org/10.3390/app14209231 - 11 Oct 2024
Abstract
The number of cybersecurity vulnerabilities keeps growing every year. Each vulnerability must be reported to the MITRE Corporation and assessed by a CVE Numbering Authority (CNA), which generates a metrics vector that determines its severity score. This process can take up to several weeks, with higher-severity vulnerabilities taking more time. Several authors have successfully used deep learning to automate the score-generation process and used explainable AI to build trust with users. However, the explanations shown so far have been limited to surface-level input saliency for binary classification. This is a limitation, as several metrics are multi-class and there is much more we can achieve with XAI than just visualizing saliency. In this work, we look for actionable steps CNAs can take using XAI. We achieve state-of-the-art results using an interpretable XGBoost model, generate explanations for multi-class labels using SHAP, and use the raw Shapley values to calculate cumulative word importance and generate IF rules that allow a more transparent look at how the model classified vulnerabilities. Finally, we have made the code and dataset open source for reproducibility. Full article
(This article belongs to the Special Issue Recent Applications of Explainable AI (XAI))
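
A hedged sketch of the explanation pipeline the abstract describes: Shapley values from an XGBoost model over TF-IDF features are aggregated into cumulative word importance and a simple IF-style rule. The tiny corpus, the binary "network attack vector" label, and the rule format are illustrative assumptions; the paper works with several multi-class CVSS metrics.

```python
# Illustrative sketch (not the paper's code): classify vulnerability descriptions
# for one CVSS-like label, then turn raw SHAP values into cumulative word
# importance and a simple IF-style rule.
import numpy as np
import shap
from sklearn.feature_extraction.text import TfidfVectorizer
from xgboost import XGBClassifier

descriptions = [
    "remote attacker can execute code via crafted http request",
    "local user gains privileges through race condition",
    "remote denial of service via malformed network packet",
    "local buffer overflow requires physical access",
] * 25                                     # tiny synthetic corpus
labels = np.array([1, 0, 1, 0] * 25)       # 1 = network attack vector (assumed)

vec = TfidfVectorizer()
X = vec.fit_transform(descriptions).toarray()
words = vec.get_feature_names_out()

model = XGBClassifier(n_estimators=50, max_depth=3, eval_metric="logloss")
model.fit(X, labels)

# Raw Shapley values per (document, word), then cumulative importance per word.
shap_values = shap.TreeExplainer(model).shap_values(X)
cumulative = np.abs(shap_values).sum(axis=0)
top = words[np.argmax(cumulative)]
print(f"IF description contains '{top}' THEN lean towards class 1 (network)")
```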

30 pages, 716 KiB  
Review
Advancing Arctic Sea Ice Remote Sensing with AI and Deep Learning: Opportunities and Challenges
by Wenwen Li, Chia-Yu Hsu and Marco Tedesco
Remote Sens. 2024, 16(20), 3764; https://doi.org/10.3390/rs16203764 - 10 Oct 2024
Abstract
Revolutionary advances in artificial intelligence (AI) in the past decade have brought transformative innovation across science and engineering disciplines. In the field of Arctic science, we have witnessed an increasing trend in the adoption of AI, especially deep learning, to support the analysis of Arctic big data and facilitate new discoveries. In this paper, we provide a comprehensive review of the applications of deep learning in sea ice remote sensing domains, focusing on problems such as sea ice lead detection, thickness estimation, sea ice concentration and extent forecasting, motion detection, and sea ice type classification. In addition to discussing these applications, we also summarize technological advances that provide customized deep learning solutions, including new loss functions and learning strategies to better understand sea ice dynamics. To promote the growth of this exciting interdisciplinary field, we further explore several research areas where the Arctic sea ice community can benefit from cutting-edge AI technology. These areas include improving multimodal deep learning capabilities, enhancing model accuracy in measuring prediction uncertainty, better leveraging AI foundation models, and deepening integration with physics-based models. We hope that this paper can serve as a cornerstone in the progress of Arctic sea ice research using AI and inspire further advances in this field. Full article
(This article belongs to the Section AI Remote Sensing)

21 pages, 3313 KiB  
Article
Understanding Public Opinion towards ESG and Green Finance with the Use of Explainable Artificial Intelligence
by Wihan van der Heever, Ranjan Satapathy, Ji Min Park and Erik Cambria
Mathematics 2024, 12(19), 3119; https://doi.org/10.3390/math12193119 - 5 Oct 2024
Abstract
This study leverages explainable artificial intelligence (XAI) techniques to analyze public sentiment towards Environmental, Social, and Governance (ESG) factors, climate change, and green finance. It does so by developing a novel multi-task learning framework combining aspect-based sentiment analysis, co-reference resolution, and contrastive learning to extract nuanced insights from a large corpus of social media data. Our approach integrates state-of-the-art models, including the SenticNet API, for sentiment analysis and implements multiple XAI methods such as LIME, SHAP, and Permutation Importance to enhance interpretability. Results reveal predominantly positive sentiment towards environmental topics, with notable variations across ESG categories. The contrastive learning visualization demonstrates clear sentiment clustering while highlighting areas of uncertainty. This research contributes to the field by providing an interpretable, trustworthy AI system for ESG sentiment analysis, offering valuable insights for policymakers and business stakeholders navigating the complex landscape of sustainable finance and climate action. The methodology proposed in this paper advances the current state of AI in ESG and green finance in several ways. By combining aspect-based sentiment analysis, co-reference resolution, and contrastive learning, our approach provides a more comprehensive understanding of public sentiment towards ESG factors than traditional methods. The integration of multiple XAI techniques (LIME, SHAP, and Permutation Importance) offers a transparent view of the subtlety of the model’s decision-making process, which is crucial for building trust in AI-driven ESG assessments. Our approach enables a more accurate representation of public opinion, essential for informed decision-making in sustainable finance. This paper paves the way for more transparent and explainable AI applications in critical domains like ESG. Full article
(This article belongs to the Special Issue Explainable and Trustworthy AI Models for Data Analytics)
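
As a much-reduced illustration of the interpretability step (not the paper's multi-task pipeline with SenticNet, co-reference resolution, and contrastive learning), the sketch below explains a plain TF-IDF sentiment model on invented ESG posts with LIME's text explainer; all texts and labels are placeholders.

```python
# Reduced sketch: TF-IDF + logistic regression sentiment model over invented
# ESG-related posts, explained with LIME's text explainer.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "the new green bond program is a great step for the climate",
    "another empty ESG pledge with no real emission cuts",
    "strong governance and transparent reporting this quarter",
    "the company keeps greenwashing its fossil fuel investments",
] * 30
labels = [1, 0, 1, 0] * 30                 # 1 = positive sentiment (synthetic)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance(texts[1], clf.predict_proba, num_features=4)
print(exp.as_list())    # words pushing the prediction up or down
```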

111 pages, 1410 KiB  
Systematic Review
Recent Applications of Explainable AI (XAI): A Systematic Literature Review
by Mirka Saarela and Vili Podgorelec
Appl. Sci. 2024, 14(19), 8884; https://doi.org/10.3390/app14198884 - 2 Oct 2024
Abstract
This systematic literature review employs the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to investigate recent applications of explainable AI (XAI) over the past three years. From an initial pool of 664 articles identified through the Web of Science database, 512 peer-reviewed journal articles met the inclusion criteria—namely, being recent, high-quality XAI application articles published in English—and were analyzed in detail. Both qualitative and quantitative statistical techniques were used to analyze the identified articles: qualitatively by summarizing the characteristics of the included studies based on predefined codes, and quantitatively through statistical analysis of the data. These articles were categorized according to their application domains, techniques, and evaluation methods. Health-related applications were particularly prevalent, with a strong focus on cancer diagnosis, COVID-19 management, and medical imaging. Other significant areas of application included environmental and agricultural management, industrial optimization, cybersecurity, finance, transportation, and entertainment. Additionally, emerging applications in law, education, and social care highlight XAI’s expanding impact. The review reveals a predominant use of local explanation methods, particularly SHAP and LIME, with SHAP being favored for its stability and mathematical guarantees. However, a critical gap in the evaluation of XAI results is identified, as most studies rely on anecdotal evidence or expert opinion rather than robust quantitative metrics. This underscores the urgent need for standardized evaluation frameworks to ensure the reliability and effectiveness of XAI applications. Future research should focus on developing comprehensive evaluation standards and improving the interpretability and stability of explanations. These advancements are essential for addressing the diverse demands of various application domains while ensuring trust and transparency in AI systems. Full article
(This article belongs to the Special Issue Recent Applications of Explainable AI (XAI))

50 pages, 19482 KiB  
Article
The Use of eXplainable Artificial Intelligence and Machine Learning Operation Principles to Support the Continuous Development of Machine Learning-Based Solutions in Fault Detection and Identification
by Tuan-Anh Tran, Tamás Ruppert and János Abonyi
Computers 2024, 13(10), 252; https://doi.org/10.3390/computers13100252 - 2 Oct 2024
Abstract
Machine learning (ML) has revolutionized traditional machine fault detection and identification (FDI), as complex models with well-designed unsupervised learning strategies can detect abnormal patterns in abundant data, significantly reducing the total cost of ownership. However, their opaqueness has raised concern among practitioners and motivated the concept of eXplainable artificial intelligence (XAI). Furthermore, the development of ML-based FDI models can be improved fundamentally with machine learning operations (MLOps) guidelines, enhancing reproducibility and operational quality. This study proposes a framework for the continuous development of ML-based FDI solutions, which contains a general structure to simultaneously visualize and check the performance of the ML model while directing a resource-efficient development process. A use case is conducted on sensor data from a hydraulic system with a simple long short-term memory (LSTM) network. The proposed XAI principles and tools supported model engineering and monitoring, while additional system optimization can be made regarding input data preparation, feature selection, and model usage. The suggested MLOps principles help developers create a minimum viable solution and place it in a continuous improvement loop. The promising results motivate further adoption of XAI and MLOps while endorsing the generalization of modern ML-based FDI applications with the human-in-the-loop (HITL) concept. Full article
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence)
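
A minimal sketch, under assumptions, of the kind of LSTM-based fault detector the abstract refers to: windows of multivariate hydraulic-sensor readings are classified as normal or faulty. The shapes, labels, and architecture are illustrative, and the paper's surrounding XAI and MLOps tooling is not reproduced.

```python
# Minimal sketch (assumed shapes and labels): a small LSTM classifier scoring
# windows of multivariate sensor readings as normal vs. faulty.
import torch
import torch.nn as nn

class LSTMFaultDetector(nn.Module):
    def __init__(self, n_sensors: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)           # normal / fault

    def forward(self, x):                          # x: (batch, time, sensors)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])                  # logits from last hidden state

model = LSTMFaultDetector(n_sensors=6)
windows = torch.randn(8, 60, 6)                    # 8 windows, 60 time steps, 6 sensors
labels = torch.randint(0, 2, (8,))                 # synthetic fault labels

loss = nn.CrossEntropyLoss()(model(windows), labels)
loss.backward()                                    # one illustrative training step
print(model(windows).softmax(dim=1)[:2])
```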

39 pages, 3642 KiB  
Review
Recent Methods for Evaluating Crop Water Stress Using AI Techniques: A Review
by Soo Been Cho, Hidayat Mohamad Soleh, Ji Won Choi, Woon-Ha Hwang, Hoonsoo Lee, Young-Son Cho, Byoung-Kwan Cho, Moon S. Kim, Insuck Baek and Geonwoo Kim
Sensors 2024, 24(19), 6313; https://doi.org/10.3390/s24196313 - 29 Sep 2024
Abstract
This study systematically reviews the integration of artificial intelligence (AI) and remote sensing technologies to address the issue of crop water stress caused by rising global temperatures and climate change; in particular, it evaluates the effectiveness of various non-destructive remote sensing platforms (RGB, thermal imaging, and hyperspectral imaging) and AI techniques (machine learning, deep learning, ensemble methods, GAN, and XAI) in monitoring and predicting crop water stress. The analysis focuses on variability in precipitation due to climate change and explores how these technologies can be strategically combined under data-limited conditions to enhance agricultural productivity. Furthermore, this study is expected to contribute to improving sustainable agricultural practices and mitigating the negative impacts of climate change on crop yield and quality. Full article
(This article belongs to the Special Issue Feature Papers in Smart Agriculture 2024)

27 pages, 13823 KiB  
Article
Application of Remote Sensing and Explainable Artificial Intelligence (XAI) for Wildfire Occurrence Mapping in the Mountainous Region of Southwest China
by Jia Liu, Yukuan Wang, Yafeng Lu, Pengguo Zhao, Shunjiu Wang, Yu Sun and Yu Luo
Remote Sens. 2024, 16(19), 3602; https://doi.org/10.3390/rs16193602 - 27 Sep 2024
Abstract
The ecosystems in the mountainous region of Southwest China are exceptionally fragile and constitute one of the global hotspots for wildfire occurrences. Understanding the complex interactions between wildfires and their environmental and anthropogenic factors is crucial for effective wildfire modeling and management. Despite significant advancements in wildfire modeling using machine learning (ML) methods, their limited explainability remains a barrier to utilizing them for in-depth wildfire analysis. This paper employs Logistic Regression (LR), Random Forest (RF), and Extreme Gradient Boosting (XGBoost) models along with the MODIS global fire atlas dataset (2004–2020) to study the influence of meteorological, topographic, vegetation, and human factors on wildfire occurrences in the mountainous region of Southwest China. It also utilizes Shapley Additive exPlanations (SHAP) values, a method within explainable artificial intelligence (XAI), to demonstrate the influence of key controlling factors on the frequency of fire occurrences. The results indicate that wildfires in this region are primarily influenced by meteorological conditions, particularly sunshine duration, relative humidity (seasonal and daily), seasonal precipitation, and daily land surface temperature. Among local variables, altitude, proximity to roads, railways, residential areas, and population density are significant factors. All models demonstrate strong predictive capabilities with AUC values over 0.8 and prediction accuracies ranging from 76.0% to 95.0%. XGBoost outperforms LR and RF in predictive accuracy across all factor groups (climatic, local, and combinations thereof). The inclusion of topographic factors and human activities enhances model optimization to some extent. SHAP results reveal critical features that significantly influence wildfire occurrences, and the thresholds of positive or negative changes, highlighting that relative humidity, rain-free days, and land use land cover changes (LULC) are primary contributors to frequent wildfires in this region. Based on regional differences in wildfire drivers, a wildfire-risk zoning map for the mountainous region of Southwest China is created. Areas identified as high risk are predominantly located in the Northwestern and Southern parts of the study area, particularly in Yanyuan and Miyi, while areas assessed as low risk are mainly distributed in the Northeastern region. Full article
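
A hedged sketch of the modelling-plus-SHAP workflow described above, using gradient-boosted trees on tabular wildfire drivers; the driver names, data-generating rule, and fire/no-fire labels are invented for illustration and are not the study's dataset.

```python
# Hedged illustration: gradient-boosted trees on synthetic wildfire drivers,
# inspected with SHAP. Driver names and labels are placeholders only.
import numpy as np
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
drivers = ["sunshine_h", "rel_humidity", "precip_mm", "lst_c",
           "altitude_m", "dist_road_km", "pop_density"]
X = rng.normal(size=(1000, len(drivers)))
# Synthetic rule: drier, sunnier, closer-to-road cells burn more often.
logit = 1.2 * X[:, 0] - 1.5 * X[:, 1] - 0.8 * X[:, 2] - 0.6 * X[:, 5]
y = (logit + rng.normal(size=1000) > 0).astype(int)

model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss").fit(X, y)

shap_values = shap.TreeExplainer(model).shap_values(X)   # (n_samples, n_features)
mean_impact = np.abs(shap_values).mean(axis=0)
for name, val in sorted(zip(drivers, mean_impact), key=lambda t: -t[1]):
    print(f"{name:>13}: {val:.3f}")
```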

31 pages, 2928 KiB  
Review
Literature Review of Explainable Tabular Data Analysis
by Helen O’Brien Quinn, Mohamed Sedky, Janet Francis and Michael Streeton
Electronics 2024, 13(19), 3806; https://doi.org/10.3390/electronics13193806 - 26 Sep 2024
Abstract
Explainable artificial intelligence (XAI) is crucial for enhancing transparency and trust in machine learning models, especially for tabular data used in finance, healthcare, and marketing. This paper surveys XAI techniques for tabular data, building on previous work, specifically an earlier survey of explainable artificial intelligence for tabular data, and analyzes recent advancements. It categorizes and describes XAI methods relevant to tabular data, identifies domain-specific challenges and gaps, and examines potential applications and trends. Future research directions emphasize clarifying terminology, ensuring data security, creating user-centered explanations, improving interaction, developing robust evaluation metrics, and advancing adversarial example analysis. This contribution aims to bolster effective, trustworthy, and transparent decision making in the field of XAI. Full article

26 pages, 17483 KiB  
Article
A Survey on Explainable Artificial Intelligence (XAI) Techniques for Visualizing Deep Learning Models in Medical Imaging
by Deepshikha Bhati, Fnu Neha and Md Amiruzzaman
J. Imaging 2024, 10(10), 239; https://doi.org/10.3390/jimaging10100239 - 25 Sep 2024
Abstract
The combination of medical imaging and deep learning has significantly improved diagnostic and prognostic capabilities in the healthcare domain. Nevertheless, the inherent complexity of deep learning models poses challenges in understanding their decision-making processes. Interpretability and visualization techniques have emerged as crucial tools to unravel the black-box nature of these models, providing insights into their inner workings and enhancing trust in their predictions. This survey paper comprehensively examines various interpretation and visualization techniques applied to deep learning models in medical imaging. The paper reviews methodologies, discusses their applications, and evaluates their effectiveness in enhancing the interpretability, reliability, and clinical relevance of deep learning models in medical image analysis. Full article
(This article belongs to the Section Medical Imaging)

24 pages, 1353 KiB  
Article
Application of Deep Learning for Heart Attack Prediction with Explainable Artificial Intelligence
by Elias Dritsas and Maria Trigka
Computers 2024, 13(10), 244; https://doi.org/10.3390/computers13100244 - 25 Sep 2024
Abstract
Heart disease remains a leading cause of mortality worldwide, and the timely and accurate prediction of heart attack is crucial yet challenging due to the complexity of the condition and the limitations of traditional diagnostic methods. These challenges include the need for resource-intensive diagnostics and the difficulty of interpreting complex predictive models in clinical settings. In this study, we apply and compare the performance of five well-known Deep Learning (DL) models, namely Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU), as well as a Hybrid model, to a heart attack prediction dataset. Each model was properly tuned and evaluated using accuracy, precision, recall, F1-score, and Area Under the Receiver Operating Characteristic Curve (AUC) as performance metrics. Additionally, by integrating an Explainable Artificial Intelligence (XAI) technique, specifically Shapley Additive Explanations (SHAP), we enhance the interpretability of the predictions, making them actionable for healthcare professionals and thereby enhancing clinical applicability. The experimental results revealed that the Hybrid model prevailed, achieving the highest performance across all metrics. Specifically, the Hybrid model attained an accuracy of 91%, precision of 89%, recall of 90%, F1-score of 89%, and an AUC of 0.95. These results highlight the Hybrid model’s superior ability to predict heart attacks, attributed to its efficient handling of sequential data and long-term dependencies. Full article
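
A short sketch of the evaluation protocol the abstract describes: candidate models are scored on a common held-out split with accuracy, precision, recall, F1, and AUC. Synthetic tabular data and two simple scikit-learn models stand in for the paper's DL models and heart-attack dataset.

```python
# Sketch of the evaluation protocol only (not the authors' code or models):
# score several candidates on the same held-out split with common metrics.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=12, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

models = {
    "MLP": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1500, random_state=1),
    "LogReg": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    proba = model.predict_proba(X_te)[:, 1]
    print(name,
          f"acc={accuracy_score(y_te, pred):.2f}",
          f"prec={precision_score(y_te, pred):.2f}",
          f"rec={recall_score(y_te, pred):.2f}",
          f"f1={f1_score(y_te, pred):.2f}",
          f"auc={roc_auc_score(y_te, proba):.2f}")
```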

27 pages, 2051 KiB  
Article
A Transparent Pipeline for Identifying Sexism in Social Media: Combining Explainability with Model Prediction
by Hadi Mohammadi, Anastasia Giachanou and Ayoub Bagheri
Appl. Sci. 2024, 14(19), 8620; https://doi.org/10.3390/app14198620 - 24 Sep 2024
Abstract
In this study, we present a new approach that combines multiple Bidirectional Encoder Representations from Transformers (BERT) architectures with a Convolutional Neural Network (CNN) framework designed for sexism detection in text at a granular level. Our method relies on the analysis and identification of the most important terms contributing to sexist content using Shapley Additive Explanations (SHAP) values. This approach involves defining a range of Sexism Scores based on both model predictions and explainability, moving beyond binary classification to provide a deeper understanding of the sexism-detection process. Additionally, it enables us to identify specific parts of a sentence and their respective contributions to this range, which can be valuable for decision makers and future research. In conclusion, this study introduces an innovative method for enhancing the clarity of large language models (LLMs), which is particularly relevant in sensitive domains such as sexism detection. The incorporation of explainability into the model represents a significant advancement in this field. The objective of our study is to bridge the gap between advanced technology and human comprehension by providing a framework for creating AI models that are both efficient and transparent. This approach could serve as a pipeline for future studies to incorporate explainability into language models. Full article
(This article belongs to the Special Issue Data and Text Mining: New Approaches, Achievements and Applications)
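
A toy illustration of the graded "Sexism Score" idea: a model's predicted probability is blended with explanation weights over individual terms, so the output is a score rather than a bare binary label. A linear bag-of-words model and closed-form linear SHAP values stand in for the paper's BERT+CNN ensemble; the blending rule and all example texts are invented and deliberately mild.

```python
# Toy illustration (not the authors' BERT+CNN system): blend a classifier's
# predicted probability with per-term explanation weights into a graded score.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["women cannot do this job", "great teamwork from everyone today",
         "she only got the role because she is a woman",
         "the meeting starts at noon"] * 30
labels = [1, 0, 1, 0] * 30                      # 1 = sexist (synthetic labels)

vec = TfidfVectorizer()
X = vec.fit_transform(texts).toarray()
clf = LogisticRegression(max_iter=1000).fit(X, labels)
baseline = X.mean(axis=0)                       # background for the explanation

def sexism_score(text: str) -> float:
    x = vec.transform([text]).toarray()[0]
    proba = clf.predict_proba([x])[0, 1]
    # For a linear model with independent features, SHAP values have the
    # closed form coef * (x - E[x]); used here as a simple stand-in.
    contrib = clf.coef_[0] * (x - baseline)
    positive_share = contrib[contrib > 0].sum() / (np.abs(contrib).sum() + 1e-9)
    return float(0.5 * proba + 0.5 * positive_share)   # assumed blending rule

print(round(sexism_score("she is only here because she is a woman"), 2))
```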

23 pages, 5336 KiB  
Article
Enhancing the Interpretability of Malaria and Typhoid Diagnosis with Explainable AI and Large Language Models
by Kingsley Attai, Moses Ekpenyong, Constance Amannah, Daniel Asuquo, Peterben Ajuga, Okure Obot, Ekemini Johnson, Anietie John, Omosivie Maduka, Christie Akwaowo and Faith-Michael Uzoka
Trop. Med. Infect. Dis. 2024, 9(9), 216; https://doi.org/10.3390/tropicalmed9090216 - 16 Sep 2024
Abstract
Malaria and typhoid fever are prevalent diseases in tropical regions, and both are exacerbated by unclear protocols, drug resistance, and environmental factors. Prompt and accurate diagnosis is crucial to improve accessibility and reduce mortality rates. Traditional diagnosis methods cannot effectively capture the complexities of these diseases because their symptoms are similar. Although machine learning (ML) models offer accurate predictions, they operate as “black boxes” with non-interpretable decision-making processes, making it challenging for healthcare providers to comprehend how the conclusions are reached. This study employs explainable AI (XAI) models such as Local Interpretable Model-agnostic Explanations (LIME) and Large Language Models (LLMs) like GPT to clarify diagnostic results for healthcare workers, building trust and transparency in medical diagnostics by describing which symptoms had the greatest impact on the model’s decisions and providing clear, understandable explanations. The models were implemented on Google Colab and Visual Studio Code because of their rich libraries and extensions. Results showed that the Random Forest (RF) model outperformed the other tested models; in addition, important features were identified with the LIME plots, while ChatGPT 3.5 had a comparative advantage over the other LLMs. The study integrates RF, LIME, and GPT into a mobile app to enhance interpretability and transparency in a malaria and typhoid diagnosis system. Despite its promising results, the system’s performance is constrained by the quality of the dataset. Additionally, while LIME and GPT improve transparency, they may introduce complexities in real-time deployment due to computational demands and the need for internet service to maintain relevance and accuracy. The findings suggest that AI-driven diagnostic systems can significantly enhance healthcare delivery in resource-limited environments, and future work can explore the applicability of this framework to other medical conditions and datasets. Full article
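
A hedged sketch of the diagnostic pipeline outlined above: a random forest over binary symptom indicators, a LIME explanation for one case, and a plain-text prompt that an LLM could turn into a lay explanation. The symptom names, data, and prompt format are assumptions, and no GPT API call is made.

```python
# Hedged sketch: random forest over assumed binary symptom indicators, a LIME
# explanation for one case, and an illustrative prompt for an LLM (not called).
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
symptoms = ["fever", "headache", "abdominal_pain", "chills", "diarrhoea", "fatigue"]
X = rng.integers(0, 2, size=(400, len(symptoms))).astype(float)
y = ((X[:, 0] + X[:, 3] + rng.normal(0, 0.5, 400)) > 1.5).astype(int)  # synthetic labels

rf = RandomForestClassifier(n_estimators=200, random_state=7).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=symptoms,
                                 class_names=["typhoid-like", "malaria-like"],
                                 mode="classification")
case = X[0]
exp = explainer.explain_instance(case, rf.predict_proba, num_features=3)

top_factors = "; ".join(f"{feat} (weight {w:+.2f})" for feat, w in exp.as_list())
prompt = (f"The model predicts class {rf.predict([case])[0]} for this patient. "
          f"The most influential symptoms were: {top_factors}. "
          "Explain this result to a community health worker in simple terms.")
print(prompt)
```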
