Search Results (851)

Search Parameters:
Keywords = Explainable AI

34 pages, 15730 KiB  
Article
Empowering Brain Tumor Diagnosis through Explainable Deep Learning
by Zhengkun Li and Omar Dib
Mach. Learn. Knowl. Extr. 2024, 6(4), 2248-2281; https://doi.org/10.3390/make6040111 - 7 Oct 2024
Viewed by 424
Abstract: Brain tumors are among the most lethal diseases, and early detection is crucial for improving patient outcomes. Currently, magnetic resonance imaging (MRI) is the most effective method for early brain tumor detection due to its superior imaging quality for soft tissues. However, manual analysis of brain MRI scans is prone to errors, largely influenced by the radiologists’ experience and fatigue. To address these challenges, computer-aided diagnosis (CAD) systems have become increasingly important. These systems apply advanced computer vision techniques, such as deep learning, to provide accurate predictions based on medical images, enhancing diagnostic precision and reliability. This paper presents a novel CAD framework for multi-class brain tumor classification. The framework employs six pre-trained deep learning models as the base and incorporates comprehensive data preprocessing and augmentation strategies to enhance computational efficiency. To address issues related to transparency and interpretability in deep learning models, Gradient-weighted Class Activation Mapping (Grad-CAM) is utilized to visualize the decision-making processes involved in tumor classification from MRI scans. Additionally, a user-friendly Brain Tumor Detection System has been developed using Streamlit, demonstrating its practical applicability in real-world settings and providing a valuable tool for clinicians. All simulation results are derived from a public benchmark dataset, showing that the proposed framework achieves state-of-the-art performance, with accuracy approaching 99% for the ResNet-50, Xception, and InceptionV3 models.
(This article belongs to the Section Learning)
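The Grad-CAM step described above can be sketched in a few lines of PyTorch. This is a minimal illustration, assuming a stock torchvision ResNet-50 and a placeholder input tensor; the paper's fine-tuned weights, preprocessing, and Streamlit front end are not reproduced here.

```python
# Minimal Grad-CAM sketch for a ResNet-50 classifier (PyTorch).
# `img_tensor` is a placeholder for a preprocessed MRI slice.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(weights="IMAGENET1K_V2").eval()
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["feat"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0].detach()

# Hook the last convolutional stage, whose spatial maps Grad-CAM weights.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

img_tensor = torch.randn(1, 3, 224, 224)   # placeholder input
logits = model(img_tensor)
logits[0, logits.argmax()].backward()      # gradient of the top-scoring class

# Channel weights = global-average-pooled gradients; CAM = weighted feature sum.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=img_tensor.shape[2:], mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]
```

Overlaying `cam` on the input slice yields the saliency maps that make the tumor classification visually auditable.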

21 pages, 3313 KiB  
Article
Understanding Public Opinion towards ESG and Green Finance with the Use of Explainable Artificial Intelligence
by Wihan van der Heever, Ranjan Satapathy, Ji Min Park and Erik Cambria
Mathematics 2024, 12(19), 3119; https://doi.org/10.3390/math12193119 - 5 Oct 2024
Viewed by 612
Abstract: This study leverages explainable artificial intelligence (XAI) techniques to analyze public sentiment towards Environmental, Social, and Governance (ESG) factors, climate change, and green finance. It does so by developing a novel multi-task learning framework combining aspect-based sentiment analysis, co-reference resolution, and contrastive learning to extract nuanced insights from a large corpus of social media data. Our approach integrates state-of-the-art models, including the SenticNet API, for sentiment analysis and implements multiple XAI methods such as LIME, SHAP, and Permutation Importance to enhance interpretability. Results reveal predominantly positive sentiment towards environmental topics, with notable variations across ESG categories. The contrastive learning visualization demonstrates clear sentiment clustering while highlighting areas of uncertainty. This research contributes to the field by providing an interpretable, trustworthy AI system for ESG sentiment analysis, offering valuable insights for policymakers and business stakeholders navigating the complex landscape of sustainable finance and climate action. The methodology proposed in this paper advances the current state of AI in ESG and green finance in several ways. By combining aspect-based sentiment analysis, co-reference resolution, and contrastive learning, our approach provides a more comprehensive understanding of public sentiment towards ESG factors than traditional methods. The integration of multiple XAI techniques (LIME, SHAP, and Permutation Importance) offers a transparent view of the model’s decision-making process, which is crucial for building trust in AI-driven ESG assessments. Our approach enables a more accurate representation of public opinion, essential for informed decision-making in sustainable finance. This paper paves the way for more transparent and explainable AI applications in critical domains like ESG.
(This article belongs to the Special Issue Explainable and Trustworthy AI Models for Data Analytics)
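As a rough illustration of the LIME component of such a pipeline, the sketch below explains a prediction of a toy TF-IDF + logistic regression sentiment classifier. The stand-in model, the two-document training set, and the example texts are all assumptions; the study's multi-task model and SenticNet integration are not reproduced.

```python
# Illustrative LIME explanation for a simple ESG-sentiment text classifier.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["renewables cut emissions", "the spill damaged the coastline"]
train_labels = [1, 0]  # 1 = positive, 0 = negative (toy data)

pipe = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipe.fit(train_texts, train_labels)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance(
    "green bonds fund solar projects",
    pipe.predict_proba,          # LIME perturbs the text and queries this
    num_features=5,
)
print(exp.as_list())             # (token, weight) pairs driving the prediction
```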

13 pages, 753 KiB  
Article
How Do Flemish Laying Hen Farmers and Private Bird Keepers Comply with and Think about Measures to Control Avian Influenza?
by Femke Delanglez, Bart Ampe, Anneleen Watteyn, Liesbeth G. W. Van Damme and Frank A. M. Tuyttens
Vet. Sci. 2024, 11(10), 475; https://doi.org/10.3390/vetsci11100475 - 5 Oct 2024
Viewed by 466
Abstract: Competent authorities of many countries, including Belgium, impose control measures (preventing wild bird access to feeders and water facilities, indoor confinement of captive birds, or fencing off outdoor ranges with nets) on professional and non-professional keepers of birds to prevent the spread of avian influenza (AI). Flemish laying hen farmers (FAR, n = 33) and private keepers of captive birds (PRI, n = 263) were surveyed about their opinion of and compliance with the AI measures legally imposed during the most recent high-risk period before this survey in 2021. Participants answered questions on a 5-point Likert scale (1 = the worst, 3 = neutral, and 5 = the best). FAR reported better compliance with the AI measures than PRI, except for net confinement. FAR indicated that they and other poultry farmers complied better with the AI measures than PRI, while PRI indicated that they complied better than other private keepers. FAR regarded the AI measures as more effective than PRI did. To prevent the spread of AI more effectively, national authorities could focus on information campaigns explaining to private bird keepers the need for the various control measures that they impose. If these campaigns fail, local authorities may need stricter enforcement or alternative ways to increase compliance.
(This article belongs to the Section Veterinary Food Safety and Zoonosis)

111 pages, 1410 KiB  
Systematic Review
Recent Applications of Explainable AI (XAI): A Systematic Literature Review
by Mirka Saarela and Vili Podgorelec
Appl. Sci. 2024, 14(19), 8884; https://doi.org/10.3390/app14198884 - 2 Oct 2024
Viewed by 1226
Abstract: This systematic literature review employs the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to investigate recent applications of explainable AI (XAI) over the past three years. From an initial pool of 664 articles identified through the Web of Science database, 512 peer-reviewed journal articles met the inclusion criteria—namely, being recent, high-quality XAI application articles published in English—and were analyzed in detail. Both qualitative and quantitative statistical techniques were used to analyze the identified articles: qualitatively by summarizing the characteristics of the included studies based on predefined codes, and quantitatively through statistical analysis of the data. These articles were categorized according to their application domains, techniques, and evaluation methods. Health-related applications were particularly prevalent, with a strong focus on cancer diagnosis, COVID-19 management, and medical imaging. Other significant areas of application included environmental and agricultural management, industrial optimization, cybersecurity, finance, transportation, and entertainment. Additionally, emerging applications in law, education, and social care highlight XAI’s expanding impact. The review reveals a predominant use of local explanation methods, particularly SHAP and LIME, with SHAP being favored for its stability and mathematical guarantees. However, a critical gap in the evaluation of XAI results is identified, as most studies rely on anecdotal evidence or expert opinion rather than robust quantitative metrics. This underscores the urgent need for standardized evaluation frameworks to ensure the reliability and effectiveness of XAI applications. Future research should focus on developing comprehensive evaluation standards and improving the interpretability and stability of explanations. These advancements are essential for addressing the diverse demands of various application domains while ensuring trust and transparency in AI systems.
(This article belongs to the Special Issue Recent Applications of Explainable AI (XAI))
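The local-explanation pattern the review identifies as dominant can be illustrated with SHAP on a generic tabular model. This is a minimal sketch, assuming scikit-learn's breast cancer dataset and a random forest as stand-ins for any reviewed application.

```python
# Sketch of the dominant local-explanation pattern: SHAP values for
# individual predictions, aggregated into a global importance view.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Model-agnostic permutation explainer over the positive-class probability.
explainer = shap.Explainer(lambda data: model.predict_proba(data)[:, 1], X[:100])
sv = explainer(X[:20])                           # local explanations
global_importance = np.abs(sv.values).mean(axis=0)
print(np.argsort(global_importance)[::-1][:5])   # top-5 feature indices
```

Aggregating absolute SHAP values, as in the last lines, is a common global summary; the review's point is that such outputs are rarely validated with quantitative metrics.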

50 pages, 19482 KiB  
Article
The Use of eXplainable Artificial Intelligence and Machine Learning Operation Principles to Support the Continuous Development of Machine Learning-Based Solutions in Fault Detection and Identification
by Tuan-Anh Tran, Tamás Ruppert and János Abonyi
Computers 2024, 13(10), 252; https://doi.org/10.3390/computers13100252 - 2 Oct 2024
Viewed by 257
Abstract: Machine learning (ML) has revolutionized traditional machine fault detection and identification (FDI), as complex-structured models with well-designed unsupervised learning strategies can detect abnormal patterns in abundant data, which significantly reduces the total cost of ownership. However, their opaqueness has raised human concern and motivated the eXplainable artificial intelligence (XAI) concept. Furthermore, the development of ML-based FDI models can be improved fundamentally with machine learning operations (MLOps) guidelines, enhancing reproducibility and operational quality. This study proposes a framework for the continuous development of ML-based FDI solutions, which contains a general structure to simultaneously visualize and check the performance of the ML model while directing the resource-efficient development process. A use case is conducted on sensor data of a hydraulic system with a simple long short-term memory (LSTM) network. The proposed XAI principles and tools supported the model engineering and monitoring, while additional system optimization can be made regarding input data preparation, feature selection, and model usage. The suggested MLOps principles help developers create a minimum viable solution and involve it in a continuous improvement loop. The promising results motivate further adoption of XAI and MLOps while endorsing the generalization of modern ML-based FDI applications with the human-in-the-loop (HITL) concept.
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence)
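The paper pairs an LSTM with hydraulic-system sensor data; one common realization of such an ML-based FDI model is an LSTM autoencoder whose reconstruction error flags faults. The sketch below is a generic illustration under that assumption: the window size, feature count, training data, and threshold are arbitrary placeholders, not the authors' settings.

```python
# LSTM autoencoder sketch for sensor-based fault detection (PyTorch).
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """Reconstructs sensor windows; high reconstruction error flags faults."""
    def __init__(self, n_features: int = 8, hidden: int = 32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, x):
        z, _ = self.encoder(x)      # (batch, time, hidden)
        h, _ = self.decoder(z)
        return self.head(h)         # (batch, time, n_features)

model = LSTMAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

healthy = torch.randn(64, 50, 8)    # placeholder "normal" sensor windows
for _ in range(10):                 # unsupervised training on normal data only
    opt.zero_grad()
    loss = loss_fn(model(healthy), healthy)
    loss.backward()
    opt.step()

window = torch.randn(1, 50, 8)
error = loss_fn(model(window), window).item()
print("fault suspected" if error > 1.0 else "normal")  # threshold is illustrative
```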

21 pages, 8291 KiB  
Article
An Explainable AI-Based Modified YOLOv8 Model for Efficient Fire Detection
by Md. Waliul Hasan, Shahria Shanto, Jannatun Nayeema, Rashik Rahman, Tanjina Helaly, Ziaur Rahman and Sk. Tanzir Mehedi
Mathematics 2024, 12(19), 3042; https://doi.org/10.3390/math12193042 - 28 Sep 2024
Viewed by 662
Abstract: Early fire detection is the key to saving lives and limiting property damage. Advanced technology can detect fires in high-risk zones with minimal human presence before they escalate beyond control. This study provides a more advanced model structure based on the YOLOv8 architecture to enhance early recognition of fire. Although YOLOv8 is excellent at real-time object detection, it can still be better adjusted to the nuances of fire detection. We achieved this advancement by incorporating an additional context-to-flow layer, enabling the YOLOv8 model to more effectively capture both local and global contextual information. The context-to-flow layer enhances the model’s ability to recognize complex patterns like smoke and flames, leading to more effective feature extraction. This extra layer helps the model better detect fires and smoke by improving its ability to focus on fine-grained details and minor variations, which is crucial in challenging environments with low visibility, dynamic fire behavior, and complex backgrounds. Our proposed model achieved 2.9% higher precision, 4.7% higher recall, and a 4% higher F1-score than the default YOLOv8 model. This study found that the architectural modification increases information flow and improves fire detection at all fire sizes, from tiny sparks to massive flames. We also included explainable AI strategies to explain the model’s decision-making, thus adding transparency and improving trust in its predictions. Ultimately, this enhanced system demonstrates remarkable efficacy and accuracy, which allows additional improvements in autonomous fire detection systems.
(This article belongs to the Section Mathematics and Computer Science)
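For reference, fine-tuning a stock YOLOv8 model with the ultralytics package looks as follows. The authors' context-to-flow layer is a custom architectural modification that this standard API does not include, and the dataset config and image paths are placeholders.

```python
# Baseline YOLOv8 fine-tuning sketch with the ultralytics package.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")              # pretrained small variant
model.train(data="fire_smoke.yaml",     # placeholder dataset config
            epochs=100, imgsz=640)

results = model("example_frame.jpg")    # inference on a single image
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)  # class, confidence, coordinates
```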

15 pages, 817 KiB  
Article
Exploration of Deep-Learning-Based Approaches for False Fact Identification in Social Judicial Systems
by Yuzhuo Zou, Jiepin Chen, Jiebin Cai, Mengen Zhou and Yinghui Pan
Electronics 2024, 13(19), 3831; https://doi.org/10.3390/electronics13193831 - 27 Sep 2024
Viewed by 385
Abstract: With the many applications of artificial intelligence (AI) in social judicial systems, false fact identification becomes a challenging issue when the system is expected to be more autonomous and intelligent in assisting a judicial review. In particular, private lending disputes often involve false facts that are intentionally concealed and manipulated due to unique and dynamic relationships and their nonconfrontational nature in the judicial system. In this article, we investigate deep learning techniques to identify false facts in loan cases for the purpose of reducing the judicial workload. Specifically, we adapt deep-learning-based natural language processing techniques to a dataset of over 100 real-world judicial rulings spanning four courts of different levels in China. A BERT (bidirectional encoder representations from transformers)-based classifier and T5 text generation models were trained to classify false litigation claims semantically. The experimental results demonstrate that T5 has a robust learning capability with a small number of legal text samples, outperforms BERT in identifying falsified facts, and provides explainable decisions to judges. This research shows that deep-learning-based false fact identification approaches provide promising solutions for addressing concealed information and manipulation in private lending lawsuits. This highlights the feasibility of deep learning to strengthen fact-finding and reduce labor costs in the judicial field.
(This article belongs to the Special Issue Data-Driven Intelligence in Autonomous Systems)
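A minimal sketch of the BERT-based classification step, assuming a generic Chinese BERT checkpoint and an invented binary label scheme; the authors' fine-tuned weights and judicial dataset are not public, so the untrained head below would still need fine-tuning on labeled claims.

```python
# Claim classification sketch with a BERT checkpoint (Hugging Face).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=2)   # 0 = genuine, 1 = falsified (assumed)

claims = ["借款人声称已通过现金偿还全部借款。"]  # "Borrower claims full repayment in cash."
inputs = tokenizer(claims, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)   # meaningful only after fine-tuning on labeled claims
```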

23 pages, 2242 KiB  
Article
Financial Distress Prediction in the Nordics: Early Warnings from Machine Learning Models
by Nils-Gunnar Birkeland Abrahamsen, Emil Nylén-Forthun, Mats Møller, Petter Eilif de Lange and Morten Risstad
J. Risk Financial Manag. 2024, 17(10), 432; https://doi.org/10.3390/jrfm17100432 - 27 Sep 2024
Viewed by 622
Abstract: This paper proposes an explicable early warning machine learning model for predicting financial distress, which generalizes across listed Nordic corporations. We develop a novel dataset, covering the period from Q1 2001 to Q2 2022, in which we combine idiosyncratic quarterly financial statement data, information from financial markets, and indicators of macroeconomic trends. The preferred LightGBM model, whose features are selected by applying explainable artificial intelligence, outperforms the benchmark models by a notable margin across evaluation metrics. We find that features related to liquidity, solvency, and size are highly important indicators of financial health and thus crucial variables for forecasting financial distress. Furthermore, we show that explicitly accounting for seasonality, in combination with entity, market, and macro information, improves model performance.
(This article belongs to the Special Issue Machine Learning Applications in Finance, 2nd Edition)
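A minimal sketch of the modeling setup, assuming synthetic placeholder features in place of the Nordic panel data. The paper selects features with XAI methods; the impurity-based ranking shown here is only a simple stand-in for that step.

```python
# LightGBM early-warning classifier sketch on placeholder data.
import lightgbm as lgb
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))               # e.g., liquidity/solvency ratios
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
model.fit(X_tr, y_tr, eval_set=[(X_te, y_te)])

# Rank features; the paper instead applies XAI (e.g., SHAP-style) selection.
ranking = np.argsort(model.feature_importances_)[::-1]
print(ranking[:5])
```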

13 pages, 548 KiB  
Article
A Novel Tsetlin Machine with Enhanced Generalization
by Usman Anjum and Justin Zhan
Electronics 2024, 13(19), 3825; https://doi.org/10.3390/electronics13193825 - 27 Sep 2024
Viewed by 313
Abstract: The Tsetlin Machine (TM) is a novel machine learning approach that implements propositional logic to perform various tasks such as classification and regression. The TM not only achieves competitive accuracy in these tasks but also provides results that are explainable and easy to implement using simple hardware. The TM learns using clauses based on the features of the data, and final classification is done using a combination of these clauses. In this paper, we propose the novel idea of adding regularizers to the TM, referred to as Regularized TM (RegTM), to improve generalization. Regularizers have been widely used in machine learning to enhance accuracy. We explore different regularization strategies and their influence on performance. We demonstrate the feasibility of our methodology through various experiments on benchmark datasets.
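To make the clause mechanism concrete, the toy sketch below implements only the TM's inference step, with hand-written clauses standing in for learned ones; the Tsetlin-automata training and the proposed regularizers are beyond this illustration.

```python
# Toy illustration of Tsetlin Machine inference: conjunctive clauses over
# Boolean literals vote for or against a class. Clauses here are
# hand-written for illustration, not learned.
def clause(x, include, include_negated):
    """A clause is an AND over chosen literals and negated literals."""
    return all(x[i] for i in include) and all(not x[i] for i in include_negated)

# (include, include_negated, polarity): +1 votes for the class, -1 against.
clauses = [
    ((0, 1), (), +1),       # x0 AND x1         -> vote for
    ((2,), (0,), -1),       # x2 AND NOT x0     -> vote against
    ((), (1, 2), +1),       # NOT x1 AND NOT x2 -> vote for
]

def classify(x):
    score = sum(p for inc, neg, p in clauses if clause(x, inc, neg))
    return int(score > 0)   # class decided by the clause majority vote

print(classify([1, 1, 0]))  # only the first clause fires -> class 1
```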

24 pages, 1240 KiB  
Article
Hospital Re-Admission Prediction Using Named Entity Recognition and Explainable Machine Learning
by Safaa Dafrallah and Moulay A. Akhloufi
Diagnostics 2024, 14(19), 2151; https://doi.org/10.3390/diagnostics14192151 - 27 Sep 2024
Viewed by 255
Abstract: Early hospital readmission refers to unplanned emergency admission of patients within 30 days of discharge. Predicting early readmission risk before discharge can help to reduce the cost of readmissions for hospitals and decrease the death rate for Intensive Care Unit patients. In this paper, we propose a novel approach for prediction of unplanned hospital readmissions using discharge notes from the MIMIC-III database. This approach is based on first extracting relevant information from clinical reports using a pretrained Named Entity Recognition model called BioMedical-NER, which is built on Bidirectional Encoder Representations from Transformers architecture, with the extracted features then used to train machine learning models to predict unplanned readmissions. Our proposed approach achieves better results on clinical reports compared to the state-of-the-art methods, with an average precision of 88.4% achieved by the Gradient Boosting algorithm. In addition, explainable Artificial Intelligence techniques are applied to provide deeper comprehension of the predictive results.
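The entity-extraction step can be sketched with the Hugging Face pipeline API. The checkpoint name "d4data/biomedical-ner-all" is an assumption for the BioMedical-NER model, and the clinical note is synthetic, since MIMIC-III text is access-restricted.

```python
# Entity extraction from a discharge note with a pretrained biomedical
# NER model ("d4data/biomedical-ner-all" is an assumed checkpoint name).
from transformers import pipeline

ner = pipeline("token-classification",
               model="d4data/biomedical-ner-all",
               aggregation_strategy="simple")

note = "Patient discharged on metformin after treatment for CHF exacerbation."
for e in ner(note):
    print(e["entity_group"], e["word"], round(e["score"], 3))
# The extracted entities become features for the readmission classifiers.
```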

26 pages, 17483 KiB  
Article
A Survey on Explainable Artificial Intelligence (XAI) Techniques for Visualizing Deep Learning Models in Medical Imaging
by Deepshikha Bhati, Fnu Neha and Md Amiruzzaman
J. Imaging 2024, 10(10), 239; https://doi.org/10.3390/jimaging10100239 - 25 Sep 2024
Viewed by 1428
Abstract: The combination of medical imaging and deep learning has significantly improved diagnostic and prognostic capabilities in the healthcare domain. Nevertheless, the inherent complexity of deep learning models poses challenges in understanding their decision-making processes. Interpretability and visualization techniques have emerged as crucial tools to unravel the black-box nature of these models, providing insights into their inner workings and enhancing trust in their predictions. This survey paper comprehensively examines various interpretation and visualization techniques applied to deep learning models in medical imaging. The paper reviews methodologies, discusses their applications, and evaluates their effectiveness in enhancing the interpretability, reliability, and clinical relevance of deep learning models in medical image analysis.
(This article belongs to the Section Medical Imaging)

27 pages, 2051 KiB  
Article
A Transparent Pipeline for Identifying Sexism in Social Media: Combining Explainability with Model Prediction
by Hadi Mohammadi, Anastasia Giachanou and Ayoub Bagheri
Appl. Sci. 2024, 14(19), 8620; https://doi.org/10.3390/app14198620 - 24 Sep 2024
Viewed by 483
Abstract: In this study, we present a new approach that combines multiple Bidirectional Encoder Representations from Transformers (BERT) architectures with a Convolutional Neural Network (CNN) framework designed for sexism detection in text at a granular level. Our method relies on the analysis and identification of the most important terms contributing to sexist content using Shapley Additive Explanations (SHAP) values. This approach involves defining a range of Sexism Scores based on both model predictions and explainability, moving beyond binary classification to provide a deeper understanding of the sexism-detection process. Additionally, it enables us to identify specific parts of a sentence and their respective contributions to this range, which can be valuable for decision makers and future research. In conclusion, this study introduces an innovative method for enhancing the clarity of large language models (LLMs), which is particularly relevant in sensitive domains such as sexism detection. The incorporation of explainability into the model represents a significant advancement in this field. The objective of our study is to bridge the gap between advanced technology and human comprehension by providing a framework for creating AI models that are both efficient and transparent. This approach could serve as a pipeline for future studies to incorporate explainability into language models.
(This article belongs to the Special Issue Data and Text Mining: New Approaches, Achievements and Applications)
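A rough sketch of term-level SHAP attributions over a transformer classifier, the mechanism behind the described Sexism Scores. A public sentiment checkpoint stands in for the authors' fine-tuned sexism detector, and the score aggregation is only indicated in a comment.

```python
# Token-level SHAP attributions for a transformer text classifier.
import shap
from transformers import pipeline

clf = pipeline("text-classification",
               model="distilbert-base-uncased-finetuned-sst-2-english",
               top_k=None)              # return scores for all classes

explainer = shap.Explainer(clf)         # shap wraps transformers pipelines
sv = explainer(["an example sentence to score"])

# Per-token signed contributions; combining them with the prediction
# probability is one way to build a graded score rather than a binary label.
print(list(zip(sv.data[0], sv.values[0][:, 0])))
```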

25 pages, 2816 KiB  
Article
GastricAITool: A Clinical Decision Support Tool for the Diagnosis and Prognosis of Gastric Cancer
by Rocío Aznar-Gimeno, María Asunción García-González, Rubén Muñoz-Sierra, Patricia Carrera-Lasfuentes, María de la Vega Rodrigálvarez-Chamarro, Carlos González-Muñoz, Enrique Meléndez-Estrada, Ángel Lanas and Rafael del Hoyo-Alonso
Biomedicines 2024, 12(9), 2162; https://doi.org/10.3390/biomedicines12092162 - 23 Sep 2024
Viewed by 653
Abstract: Background/Objective: Gastric cancer (GC) is a complex disease representing a significant global health concern. Advanced tools for the early diagnosis and prediction of adverse outcomes are crucial. In this context, artificial intelligence (AI) plays a fundamental role. The aim of this work was to develop a diagnostic and prognostic tool for GC, providing support to clinicians in critical decision-making and enabling personalised strategies. Methods: Different machine learning and deep learning techniques were explored to build diagnostic and prognostic models, ensuring model interpretability and transparency through explainable AI methods. These models were developed and cross-validated using data from 590 Spanish Caucasian patients with primary GC and 633 cancer-free individuals. Up to 261 variables were analysed, including demographic, environmental, clinical, tumoral, and genetic data. Variables such as Helicobacter pylori infection, tobacco use, family history of GC, TNM staging, metastasis, tumour location, treatment received, gender, age, and genetic factors (single nucleotide polymorphisms) were selected as inputs due to their association with the risk and progression of the disease. Results: The XGBoost algorithm (version 1.7.4) achieved the best performance for diagnosis, with an AUC value of 0.68 using 5-fold cross-validation. As for prognosis, the Random Survival Forest algorithm achieved a C-index of 0.77. Of interest, the incorporation of genetic data into the clinical–demographics models significantly increased discriminatory ability in both diagnostic and prognostic models. Conclusions: This article presents GastricAITool, a simple and intuitive decision support tool for the diagnosis and prognosis of GC.
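The diagnostic evaluation protocol (XGBoost with 5-fold cross-validated AUC) can be sketched as follows; random placeholder data stands in for the non-public clinical and genetic variables, and the hyperparameters are illustrative.

```python
# XGBoost diagnostic model with 5-fold cross-validated AUC (sketch).
import numpy as np
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1223, 50))     # placeholder for the selected variables
y = rng.integers(0, 2, size=1223)   # 1 = gastric cancer, 0 = cancer-free

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                      eval_metric="logloss")
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(auc.mean())   # the paper reports AUC ≈ 0.68 on the real cohort
```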

22 pages, 10035 KiB  
Article
Mobile Platforms as the Alleged Culprit for Work–Life Imbalance: A Data-Driven Method Using Co-Occurrence Network and Explainable AI Framework
by Xizi Wang, Yakun Ma and Guangwei Hu
Sustainability 2024, 16(18), 8192; https://doi.org/10.3390/su16188192 - 20 Sep 2024
Viewed by 746
Abstract: The digital transformation of organizations has propelled the widespread adoption of mobile platforms. Extended availability and prolonged engagement with platform-mediated work have blurred boundaries, making it increasingly difficult for individuals to balance work and life. Criticism of mobile platforms has intensified, hindering digital transformation towards a sustainable future. This study examines the complex relationship between mobile platforms and work–life imbalance using a comprehensive data-driven methodology. We employed a co-occurrence network technique to extract relevant features based on previous findings. Subsequently, we applied an explainable AI framework to analyze the nonlinear relationships underlying technology-induced work–life imbalance and to detect behavior patterns. Our results indicate that there is a threshold for the beneficial effects of availability demands on integration behavior; beyond this tolerance range, no further positive increase can be observed. For organizations aiming to either constrain or foster employees’ integration behavior, our findings provide tailored strategies to meet different needs. By extending the application of advanced machine learning algorithms to predict integration behaviors, this study offers nuanced insights that counter the alleged issue of technology-induced imbalance. This, in turn, promotes the sustainable success of digital transformation initiatives. This study has significant theoretical and practical implications for organizational digital transformation.
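A minimal sketch of the co-occurrence network step, assuming tokenized documents as input: terms become nodes, within-document co-occurrences become weighted edges, and central terms are candidate features. The token lists below are placeholders for the study's corpus.

```python
# Word co-occurrence network for feature extraction (networkx).
from itertools import combinations

import networkx as nx

docs = [
    ["availability", "after-hours", "stress"],
    ["availability", "integration", "boundary"],
    ["integration", "boundary", "stress"],
]

G = nx.Graph()
for tokens in docs:
    for a, b in combinations(sorted(set(tokens)), 2):
        w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=w)

# Central terms are candidate model features (weighted degree is one choice).
print(sorted(G.degree(weight="weight"), key=lambda kv: -kv[1]))
```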

19 pages, 1677 KiB  
Review
Beyond Clinical Factors: Harnessing Artificial Intelligence and Multimodal Cardiac Imaging to Predict Atrial Fibrillation Recurrence Post-Catheter Ablation
by Edward T. Truong, Yiheng Lyu, Abdul Rahman Ihdayhid, Nick S. R. Lan and Girish Dwivedi
J. Cardiovasc. Dev. Dis. 2024, 11(9), 291; https://doi.org/10.3390/jcdd11090291 - 19 Sep 2024
Viewed by 1403
Abstract: Atrial fibrillation (AF) is the most common type of cardiac arrhythmia, with catheter ablation being a key alternative to medical treatment for restoring normal sinus rhythm. Despite advances in understanding AF pathogenesis, approximately 35% of patients experience AF recurrence at 12 months after catheter ablation. Therefore, accurate prediction of AF recurrence occurring after catheter ablation is important for patient selection and management. Conventional methods for predicting post-catheter ablation AF recurrence, which involve the use of univariate predictors and scoring systems, have played a supportive role in clinical decision-making. In an ever-changing landscape where technology is becoming ubiquitous within medicine, cardiac imaging and artificial intelligence (AI) could prove pivotal in enhancing AF recurrence predictions by providing data with independent predictive power and identifying key relationships in the data. This review comprehensively explores the existing methods for predicting the recurrence of AF following catheter ablation from different perspectives, including conventional predictors and scoring systems, cardiac imaging-based methods, and AI-based methods developed using a combination of demographic and imaging variables. By summarising state-of-the-art technologies, this review serves as a roadmap for developing future prediction models with enhanced accuracy, generalisability, and explainability, potentially contributing to improved care for patients with AF.
