Search Results (86)

Search Parameters:
Keywords = rate biased technique

17 pages, 13090 KiB  
Article
Dynamic Imaging of Projected Electric Potentials of Operando Semiconductor Devices by Time-Resolved Electron Holography
by Tolga Wagner, Hüseyin Çelik, Simon Gaebel, Dirk Berger, Peng-Han Lu, Ines Häusler, Nina Owschimikow, Michael Lehmann, Rafal E. Dunin-Borkowski, Christoph T. Koch and Fariba Hatami
Electronics 2025, 14(1), 199; https://doi.org/10.3390/electronics14010199 - 5 Jan 2025
Abstract
Interference gating (iGate) has emerged as a groundbreaking technique for ultrafast time-resolved electron holography in transmission electron microscopy, delivering nanometer spatial and nanosecond temporal resolution with minimal technological overhead. This study employs iGate to dynamically observe the local projected electric potential within the space-charge region of a contacted transmission electron microscopy (TEM) lamella manufactured from a silicon diode during switching between unbiased and reverse-biased conditions, achieving a temporal resolution of 25 ns at a repetition rate of 3 MHz. By synchronizing the holographic acquisition with the applied voltage, this approach enables the direct visualization of time-dependent potential distributions with high precision. Complementary static and dynamic experiments reveal a remarkable correspondence between modeled and measured projected potentials, validating the method’s robustness. The observed dynamic phase progressions resolve localized switching dynamics and allow them to be differentiated from preparation-induced effects, such as charge recombination near the sample edges. These results establish iGate as a transformative tool for operando investigations of semiconductor devices, paving the way for advancing the nanoscale imaging of high-speed electronic processes.
(This article belongs to the Section Optoelectronics)
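
In off-axis electron holography the reconstructed phase maps linearly onto the projected electric potential, which is how the potential distributions above are obtained. The sketch below shows only this standard phase-to-potential conversion, not the iGate acquisition pipeline; the 200 kV beam energy and 200 nm lamella thickness are assumptions for illustration.

```python
import numpy as np

def interaction_constant(e_kin_ev=200e3):
    """Relativistic interaction constant C_E in rad/(V*m) for a given beam energy in eV."""
    e0 = 510.999e3                                    # electron rest energy (eV)
    h, m, q = 6.626e-34, 9.109e-31, 1.602e-19         # Planck constant, electron mass, elementary charge (SI)
    lam = h / np.sqrt(2 * m * q * e_kin_ev * (1 + e_kin_ev / (2 * e0)))   # relativistic wavelength (m)
    return (2 * np.pi / lam) * (e_kin_ev + e0) / (e_kin_ev * (e_kin_ev + 2 * e0))

def mean_projected_potential(phase_rad, thickness_m=200e-9, e_kin_ev=200e3):
    """Convert a measured holographic phase shift (rad) into a mean projected potential (V)."""
    return phase_rad / (interaction_constant(e_kin_ev) * thickness_m)
```

With these assumed numbers, a phase shift of 1 rad over a 200 nm thick lamella corresponds to roughly 0.7 V of mean projected potential.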

38 pages, 9348 KiB  
Article
Bayesian Hierarchical Risk Premium Modeling with Model Risk: Addressing Non-Differential Berkson Error
by Minkun Kim, Marija Bezbradica and Martin Crane
Appl. Sci. 2025, 15(1), 210; https://doi.org/10.3390/app15010210 - 29 Dec 2024
Abstract
For general insurance pricing, aligning losses with accurate premiums is crucial for insurance companies’ competitiveness. Traditional actuarial models often face challenges like data heterogeneity and mismeasured covariates, leading to misspecification bias. This paper addresses these issues from a Bayesian perspective, exploring connections between Bayesian hierarchical modeling, partial pooling techniques, and the Gustafson correction method for mismeasured covariates. We focus on Non-Differential Berkson (NDB) mismeasurement and propose an approach that corrects such errors without relying on gold standard data. We identify unique prior knowledge regarding the variance of the NDB errors and use it to adjust the biased parameter estimates built upon the NDB covariate. Using simulated datasets developed with varying error rate scenarios, we demonstrate the superiority of Bayesian methods in correcting parameter estimates. However, our modeling process highlights the challenge of accurately identifying the variance of NDB errors. This emphasizes the need for a thorough sensitivity analysis of the relationship between our prior knowledge of NDB error variance and varying error rate scenarios.
(This article belongs to the Special Issue Novel Applications of Machine Learning and Bayesian Optimization)
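
The correction described above places an informative prior on the Berkson error variance and treats the true covariate as latent. A minimal PyMC sketch of such a measurement-error model is given below, assuming Gaussian errors and a Gaussian outcome; it is not the authors' hierarchical premium model, and the data, priors, and variable names are illustrative only.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
w_obs = rng.normal(0.0, 1.0, 200)                                    # assigned (Berkson) covariate
y_obs = 2.0 + 1.5 * (w_obs + rng.normal(0.0, 0.3, 200)) + rng.normal(0.0, 0.5, 200)

with pm.Model() as berkson_model:
    sigma_u = pm.HalfNormal("sigma_u", sigma=0.5)                    # prior knowledge about the NDB error spread
    x_true = pm.Normal("x_true", mu=w_obs, sigma=sigma_u, shape=len(w_obs))  # latent truth: X = W + U
    beta0 = pm.Normal("beta0", 0.0, 5.0)
    beta1 = pm.Normal("beta1", 0.0, 5.0)
    sigma_y = pm.HalfNormal("sigma_y", sigma=1.0)
    pm.Normal("y", mu=beta0 + beta1 * x_true, sigma=sigma_y, observed=y_obs)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```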

18 pages, 5635 KiB  
Article
Toward Robust Lung Cancer Diagnosis: Integrating Multiple CT Datasets, Curriculum Learning, and Explainable AI
by Amira Bouamrane, Makhlouf Derdour, Akram Bennour, Taiseer Abdalla Elfadil Eisa, Abdel-Hamid M. Emara, Mohammed Al-Sarem and Neesrin Ali Kurdi
Diagnostics 2025, 15(1), 1; https://doi.org/10.3390/diagnostics15010001 - 24 Dec 2024
Abstract
Background and Objectives: Computer-aided diagnostic systems have achieved remarkable success in the medical field, particularly in diagnosing malignant tumors, and have done so at a rapid pace. However, the generalizability of the results remains a challenge for researchers and decreases the credibility of these models, which represents a point of criticism by physicians and specialists, especially given the sensitivity of the field. This study proposes a novel model based on deep learning to enhance lung cancer diagnosis quality, understandability, and generalizability. Methods: The proposed approach uses five computed tomography (CT) datasets to assess diversity and heterogeneity. Moreover, the mixup augmentation technique was adopted to facilitate the reliance on salient characteristics by combining features and CT scan labels from datasets to reduce their biases and subjectivity, thus improving the model’s generalization ability and enhancing its robustness. Curriculum learning was used to train the model, starting with simple sets to learn complicated ones quickly. Results: The proposed approach achieved promising results, with an accuracy of 99.38%; precision, specificity, and area under the curve (AUC) of 100%; sensitivity of 98.76%; and F1-score of 99.37%. Additionally, it scored a 0% false positive rate and only a 1.23% false negative rate. An external dataset was used to further validate the proposed method’s effectiveness. The proposed approach achieved optimal results of 100% in all metrics, with 0% false positive and false negative rates. Finally, explainable artificial intelligence (XAI) using Gradient-weighted Class Activation Mapping (Grad-CAM) was employed to better understand the model. Conclusions: This research proposes a robust and interpretable model for lung cancer diagnostics with improved generalizability and validity. Incorporating mixup and curriculum training supported by several datasets underlines its promise for employment as a diagnostic device in the medical industry.
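
Mixup, as described above, blends pairs of inputs and their labels with a Beta-distributed coefficient so the model cannot latch onto dataset-specific cues. A minimal NumPy sketch follows; the alpha value and one-hot label format are assumptions, not the paper's settings.

```python
import numpy as np

def mixup_batch(x_a, y_a, x_b, y_b, alpha=0.4, rng=np.random.default_rng(0)):
    """Blend two batches of CT volumes and their one-hot labels (mixup augmentation)."""
    lam = rng.beta(alpha, alpha)               # mixing coefficient in (0, 1)
    x_mix = lam * x_a + (1.0 - lam) * x_b      # convex combination of inputs
    y_mix = lam * y_a + (1.0 - lam) * y_b      # soft labels inherit the same weights
    return x_mix, y_mix
```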

17 pages, 27303 KiB  
Article
Evaluation of the Degradation Properties of Plasma Electrolytically Oxidized Mg Alloy AZ31 Using Fluid Dynamic Accelerated Tests for Biodegradable Implants
by Muhammad Saqib, Kerstin Kremmer, Joerg Opitz, Michael Schneider and Natalia Beshchasna
J. Funct. Biomater. 2024, 15(12), 366; https://doi.org/10.3390/jfb15120366 - 3 Dec 2024
Abstract
Magnesium alloys are promising biodegradable implant materials due to their excellent biocompatibility and non-toxicity. However, their poor corrosion resistance limits their application in vivo. Plasma electrolytic oxidation (PEO) is a powerful technique to improve the corrosion resistance of magnesium alloys. In this study, we present the accelerated degradation of PEO-treated AZ31 samples using a fluid dynamic test. The samples were prepared using different concentrations of KOH as an electrolyte along with NaSiO3. The anodizing time and the biasing time were optimized to obtain increased corrosion resistance. Analysis of the degraded samples by microscopy and SEM EDX measurements, together with the calculated mass loss and corrosion rates, showed a significant increase in the corrosion resistance after the polymer (Resomer© LG 855 S) coating was applied to the anodized samples. The results confirm that PEO treatment is an effective way to improve the corrosion resistance of AZ31 magnesium alloy. The fluid dynamic test can be used as an accelerated degradation test for biodegradable alloys in simulated body fluids at a physiological temperature. The polymer coating further improves the corrosion resistance of the PEO-treated AZ31 samples.
(This article belongs to the Special Issue Medical Application of Functional Biomaterials (2nd Edition))
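
Corrosion rates from immersion or fluid dynamic tests are usually derived from the measured mass loss. The helper below uses the standard ASTM G31-style mass-loss formula; it is a generic sketch, not the authors' exact procedure, and the sample values are made up for illustration.

```python
def corrosion_rate_mm_per_year(mass_loss_g, area_cm2, hours, density_g_cm3=1.74):
    """Standard mass-loss corrosion rate: CR = K * W / (A * t * D), with K = 8.76e4 for mm/year."""
    return 8.76e4 * mass_loss_g / (area_cm2 * hours * density_g_cm3)

# e.g., 12 mg lost from a 2 cm^2 magnesium coupon over 72 h (illustrative numbers only)
print(round(corrosion_rate_mm_per_year(0.012, 2.0, 72), 3), "mm/year")
```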

20 pages, 7765 KiB  
Article
Rapid High-Precision Ranging Technique for Multi-Frequency BDS Signals
by Jie Sun, Jiaolong Wei, Zuping Tang and Yuze Duan
Remote Sens. 2024, 16(23), 4352; https://doi.org/10.3390/rs16234352 - 21 Nov 2024
Abstract
The rapid expansion of BeiDou satellite navigation applications has led to a growing demand for real-time high-precision positioning services. Currently, high-precision positioning services face challenges such as a long initialization time and heavy reliance on reference station networks, thereby failing to fulfill the requirements for real-time, wide-area, and centimeter-level positioning. In this study, we exploit the fact that the multi-frequency signals broadcast by a satellite share a common reference clock and identical RF channels and propagation paths, with strict temporal, spectral, and spatial coupling between signal components, resulting in strongly coherent propagation delays. Firstly, we accurately establish a multi-frequency signal model that fully exploits these coherent characteristics among the multi-frequency BDS signals. Subsequently, we propose a rapid high-precision ranging technique using the code and carrier phases of multi-frequency signals. The proposed method unifies the multi-frequency signals via a coherent joint processing unit consisting of a joint tracking state estimator and a coherent signal generator. The joint tracking state estimator simultaneously estimates the biased pseudorange and its change rate, the ionospheric delay and its change rate, and the ambiguities. The coherent signal generator updates the numerically controlled oscillator (NCO) to adjust the code and carrier replicas of the local reference signals at the different frequencies according to the state estimated by the joint tracking state estimator. Finally, the simulation results indicate that the proposed method efficiently diminishes the estimated biased pseudorange and ionospheric delay errors to below 0.1 m. Furthermore, this method reduces the carrier phase errors by more than 60% compared with conventional single-frequency independent tracking methods. Consequently, the proposed method can achieve rapid centimeter-level ranging results for up to 1 min without using precise atmosphere corrections and provide enhanced tracking sensitivity and robustness.
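
The joint estimator above exploits the fact that code delays on different frequencies differ only through the dispersive ionosphere. A minimal illustration of that dependence is the classical dual-frequency geometry-free combination shown below; it is not the paper's joint tracking filter, and the B1I/B3I frequencies are just one possible BDS pair.

```python
def iono_delay_on_f1(pseudorange_f1_m, pseudorange_f2_m, f1_hz=1561.098e6, f2_hz=1268.52e6):
    """First-order ionospheric delay on f1 from dual-frequency code measurements:
    I1 = (P2 - P1) / (gamma - 1), with gamma = (f1 / f2)**2."""
    gamma = (f1_hz / f2_hz) ** 2
    return (pseudorange_f2_m - pseudorange_f1_m) / (gamma - 1.0)
```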

16 pages, 672 KiB  
Article
AI-Enhanced Personality Identification of Websites
by Shafquat Ali Chishti, Iman Ardekani and Soheil Varastehpour
Information 2024, 15(10), 623; https://doi.org/10.3390/info15100623 - 10 Oct 2024
Abstract
This paper addresses the challenge of objectively determining a website’s personality by developing a methodology based on automated quantitative analysis, thus avoiding the biases inherent in human surveys. Utilizing a database of 3000 websites, data extraction tools gather relevant data, which are then analyzed using Artificial Intelligence (AI) techniques, including machine learning (ML) and natural language processing. Four ML algorithms—K-means, Expectation Maximization, Hierarchical Agglomerative Clustering, and DBSCAN—are implemented to assess and classify website personality traits. Each algorithm’s strengths and weaknesses are evaluated in terms of data organization, cluster flexibility, and handling of outliers. A software tool is developed to facilitate the research process, from database creation and data extraction to ML application and results analysis. Experimental validation, conducted with identical training and testing datasets, achieves a success rate of up to 94% (with an error of 50%) in accurately identifying website personality, which is validated by subsequent surveys. The research highlights significant relationships between website attributes and personality traits, offering practical applications for website developers. For instance, developers can use these insights to design websites that align with business goals, enhance customer engagement, and foster brand loyalty. Additionally, the methodology can be applied to creating culturally resonant websites, thus supporting New Zealand’s cultural initiatives and promoting cross-cultural understanding. This research lays the groundwork for future studies and has broad applicability across various domains, demonstrating the potential for automated, unbiased website personality classification.
(This article belongs to the Special Issue Recent Developments and Implications in Web Analysis)
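
The four clustering algorithms named above are all available in scikit-learn, which makes the comparison straightforward to reproduce in outline. The sketch below is a generic setup under assumed parameters (four clusters, standardized numeric features); the synthetic `features` array stands in for the extracted website attributes and none of the settings are the authors' tuning.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN
from sklearn.mixture import GaussianMixture

features = np.random.default_rng(0).normal(size=(300, 8))   # placeholder for extracted website attributes
X = StandardScaler().fit_transform(features)
clusterings = {
    "k-means": KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X),
    "expectation-maximization": GaussianMixture(n_components=4, random_state=0).fit_predict(X),
    "hierarchical agglomerative": AgglomerativeClustering(n_clusters=4).fit_predict(X),
    "DBSCAN": DBSCAN(eps=0.8, min_samples=10).fit_predict(X),   # label -1 marks outliers
}
```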

45 pages, 3370 KiB  
Article
Adaptive Cybersecurity Neural Networks: An Evolutionary Approach for Enhanced Attack Detection and Classification
by Ahmad K. Al Hwaitat and Hussam N. Fakhouri
Appl. Sci. 2024, 14(19), 9142; https://doi.org/10.3390/app14199142 - 9 Oct 2024
Abstract
The increasing sophistication and frequency of cyber threats necessitate the development of advanced techniques for detecting and mitigating attacks. This paper introduces a novel cybersecurity-focused Multi-Layer Perceptron (MLP) trainer that utilizes evolutionary computation methods, specifically tailored to improve the training process of neural networks in the cybersecurity domain. The proposed trainer dynamically optimizes the MLP’s weights and biases, enhancing its accuracy and robustness in defending against various attack vectors. To evaluate its effectiveness, the trainer was tested on five widely recognized security-related datasets: NSL-KDD, CICIDS2017, UNSW-NB15, Bot-IoT, and CSE-CIC-IDS2018. Its performance was compared with several state-of-the-art optimization algorithms, including Cybersecurity Chimp, CPO, ROA, WOA, MFO, WSO, SHIO, ZOA, DOA, and HHO. The results demonstrated that the proposed trainer consistently outperformed the other algorithms, achieving the lowest Mean Square Error (MSE) and highest classification accuracy across all datasets. Notably, the trainer reached a classification rate of 99.5% on the Bot-IoT dataset and 98.8% on the CSE-CIC-IDS2018 dataset, underscoring its effectiveness in detecting and classifying diverse cyber threats.
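
An evolutionary MLP trainer of the kind described treats the network's weights and biases as a flat vector and searches that space with selection and mutation instead of gradients. Below is a generic truncation-selection sketch in NumPy that minimizes MSE on a binary attack label; the architecture, population size, and mutation scale are assumptions, and the code does not reproduce the proposed trainer.

```python
import numpy as np

def mlp_forward(w, X, n_hidden):
    """Unpack a flat weight vector into a 1-hidden-layer MLP and run a forward pass."""
    d = X.shape[1]
    W1 = w[:d * n_hidden].reshape(d, n_hidden)
    b1 = w[d * n_hidden:d * n_hidden + n_hidden]
    W2 = w[d * n_hidden + n_hidden:-1].reshape(n_hidden, 1)
    b2 = w[-1]
    h = np.tanh(X @ W1 + b1)
    z = (h @ W2).ravel() + b2
    return 1.0 / (1.0 + np.exp(-z))                      # sigmoid "attack probability"

def evolve_mlp(X, y, n_hidden=16, pop_size=40, generations=200, seed=0):
    """Truncation-selection evolution of MLP weight vectors, minimizing mean squared error."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1] * n_hidden + n_hidden + n_hidden + 1
    population = rng.normal(0.0, 0.5, size=(pop_size, dim))
    mse = lambda w: np.mean((mlp_forward(w, X, n_hidden) - y) ** 2)
    for _ in range(generations):
        fitness = np.array([mse(w) for w in population])
        parents = population[np.argsort(fitness)[: pop_size // 4]]     # keep the best quarter
        population = np.repeat(parents, 4, axis=0) + rng.normal(0.0, 0.1, size=(pop_size, dim))  # mutate
    return min(population, key=mse)                                    # best individual found

# illustrative usage with synthetic data
rng = np.random.default_rng(1)
X_demo = rng.normal(size=(200, 10))
y_demo = (X_demo[:, 0] + X_demo[:, 1] > 0).astype(float)
best_w = evolve_mlp(X_demo, y_demo)
```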

24 pages, 11772 KiB  
Article
Performance Evaluation of the Two-Input Buck Converter as a Visible Light Communication High-Brightness LED Driver Based on Split Power
by Daniel G. Aller, Diego G. Lamar, Juan R. García-Mere, Manuel Arias, Juan Rodriguez and Javier Sebastian
Sensors 2024, 24(19), 6392; https://doi.org/10.3390/s24196392 - 2 Oct 2024
Abstract
This work proposes a high-efficiency High-Brightness LED (HB-LED) driver for Visible Light Communication (VLC) based on a Two-Input Buck (TIBuck) DC/DC converter. This solution not only outperforms previous approaches based on Buck DC/DC converters, but also simplifies previous proposals for VLC drivers that use the split power technique with two DC/DC converters: one is in charge of the communication tasks and the other controls the biasing of the HB-LED (i.e., lighting tasks). The real implementation of this scheme requires either two input voltage sources, one of which is isolated, or one DC/DC converter with galvanic isolation. The proposed implementation of splitting the power is based on a TIBuck DC/DC converter that avoids the isolation requirement, overcoming the major drawback of this technique while maintaining high efficiency and high communication capability thanks to the lower voltage stress both across the switches and at the switching node. This allows for operation at very high frequency for communication purposes, minimizing switching power losses, achieving high efficiency and reducing the filtering effort. Moreover, the duty ratio range can also be adapted to the useful voltage range of the HB-LED load to maximize the resolution of the tracking of the output voltage. The power is split by means of an auxiliary Buck DC/DC converter operating at a low switching frequency, which generates the secondary voltage source needed by the TIBuck DC/DC converter. This defines a natural split of power, with only the power delivered for communication purposes being processed at high frequency. A 7 W output-power experimental prototype of the proposed VLC driver was built and tested. Based on the experimental results, the prototype achieved 94% efficiency while reproducing a 64-QAM digital modulation scheme and achieving a bit rate of 1.5 Mbps with an error in communication of 12%.
(This article belongs to the Collection Visible Light Communication (VLC))
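
In a two-input buck stage the switching node alternates between the two input voltages rather than between one input and ground, so the averaged output and the switch voltage stress depend on the difference of the inputs. The snippet below sketches those two textbook ideal-converter relations (Vout = D·V1 + (1 − D)·V2, stress ≈ V1 − V2); the 48 V / 36 V values are illustrative assumptions, not the prototype's operating point.

```python
def tibuck_operating_point(v1, v2, duty):
    """Averaged output voltage and switch voltage stress of an ideal two-input buck (TIBuck)."""
    v_out = duty * v1 + (1.0 - duty) * v2   # output swings only between v2 and v1
    v_stress = v1 - v2                      # switches block the input difference, not the full input
    return v_out, v_stress

print(tibuck_operating_point(48.0, 36.0, duty=0.5))   # -> (42.0, 12.0), illustrative numbers only
```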

19 pages, 1789 KiB  
Article
User Sentiment Analysis Based on Securities Application Elements
by Minji Kim, Subeen Kim, Yoonha Park, Sangwoo Bahn, Sung Hee Ahn and Bhavadharani NambiNarayanan
Behav. Sci. 2024, 14(9), 814; https://doi.org/10.3390/bs14090814 - 13 Sep 2024
Abstract
Designing securities applications for mobile devices is challenging due to their inherent complexity, necessitating improvement through the analysis of online reviews. However, research applying deep learning techniques to the sentiment analysis of Korean text remains limited. This study explores the use of Aspect-Based Sentiment Analysis (ABSA) as an effective alternative to traditional user research methods for securities application design. By analyzing large volumes of text-based user review data of Korean securities applications, the study identifies critical elements like “update”, “screen”, “chart”, “login”, “access”, “authentication”, “account”, and “transaction”, revealing nuanced user sentiments through techniques such as PMI, SVD, and Word2Vec. ABSA offers deeper insights compared to overall ratings, uncovering hidden areas of dissatisfaction despite positive biases in reviews. This research demonstrates the scalability and cost-effectiveness of ABSA in mobile-application design research.
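
Of the techniques listed, PMI is the simplest to show concretely: it scores how much more often two terms co-occur in a review than chance would predict. The function below is a generic document-level PMI sketch over tokenized reviews; it is not the authors' pipeline, and the toy reviews and count threshold are assumptions.

```python
import numpy as np
from collections import Counter
from itertools import combinations

def pmi_scores(tokenized_reviews, min_pair_count=2):
    """PMI(a, b) = log2( P(a, b) / (P(a) * P(b)) ) over per-review co-occurrence."""
    n = len(tokenized_reviews)
    word_counts = Counter(w for toks in tokenized_reviews for w in set(toks))
    pair_counts = Counter(p for toks in tokenized_reviews
                          for p in combinations(sorted(set(toks)), 2))
    return {
        (a, b): np.log2((c / n) / ((word_counts[a] / n) * (word_counts[b] / n)))
        for (a, b), c in pair_counts.items() if c >= min_pair_count
    }

reviews = [["login", "error", "slow"], ["chart", "update", "great"], ["login", "error", "crash"]]
print(pmi_scores(reviews))   # ("error", "login") co-occur more often than chance
```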

12 pages, 521 KiB  
Article
Clipping Noise in Visible Light Communication Systems with OFDM and PAPR Reduction
by Hussien Alrakah, Mohamad Hijazi, Sinan Sinanovic and Wasiu Popoola
Photonics 2024, 11(7), 643; https://doi.org/10.3390/photonics11070643 - 6 Jul 2024
Abstract
This paper presents an analytical study of signal clipping that leads to the noise/distortion in the waveform of DC-biased optical orthogonal frequency division multiplexing (DCO-OFDM)-based visible light communication (VLC) systems. The pilot-assisted (PA) technique is used to reduce the high peak-to-average power ratio (PAPR) of the time-domain waveform of the DCO-OFDM system. The bit error rate (BER) performance of the PA DCO-OFDM system is investigated analytically at three different clipping levels as well as without any clipping. The analytical BER performance is verified through simulation and then compared to that of the conventional DCO-OFDM without PAPR reduction at the selected clipping levels. The PA DCO-OFDM system shows improved BER performance at all three clipping levels.
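
The clipping noise analyzed above arises because the real-valued DCO-OFDM waveform must be biased and then truncated to the LED's non-negative, limited dynamic range. A minimal NumPy sketch of that chain (Hermitian-symmetric IFFT, proportional DC bias, hard clipping) follows; the bias factor and clip level are illustrative, and the pilot-assisted PAPR reduction itself is not implemented here.

```python
import numpy as np

def dco_ofdm_waveform(symbols_freq, mu=3.0):
    """Hermitian-symmetric IFFT -> real OFDM waveform, DC bias of mu*sigma, then hard clipping."""
    K = len(symbols_freq)
    N = 2 * (K + 1)
    spectrum = np.zeros(N, dtype=complex)
    spectrum[1:K + 1] = symbols_freq                   # data subcarriers
    spectrum[K + 2:] = np.conj(symbols_freq[::-1])     # Hermitian symmetry -> real time-domain signal
    x = np.fft.ifft(spectrum).real * N
    papr_db = 10 * np.log10(np.max(x ** 2) / np.mean(x ** 2))
    sigma = np.std(x)
    x_dc = x + mu * sigma                              # DC bias proportional to the signal spread
    x_tx = np.clip(x_dc, 0.0, 2 * mu * sigma)          # LED: non-negative, limited range -> clipping noise
    return x_tx, papr_db

rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], 255) + 1j * rng.choice([-1, 1], 255)) / np.sqrt(2)   # QPSK placeholders
waveform, papr_db = dco_ofdm_waveform(qpsk)
```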

25 pages, 766 KiB  
Article
A Comparison of Bias Mitigation Techniques for Educational Classification Tasks Using Supervised Machine Learning
by Tarid Wongvorachan, Okan Bulut, Joyce Xinle Liu and Elisabetta Mazzullo
Information 2024, 15(6), 326; https://doi.org/10.3390/info15060326 - 4 Jun 2024
Abstract
Machine learning (ML) has become integral in educational decision-making through technologies such as learning analytics and educational data mining. However, the adoption of machine learning-driven tools without scrutiny risks perpetuating biases. Despite ongoing efforts to tackle fairness issues, their application to educational datasets remains limited. To address the mentioned gap in the literature, this research evaluates the effectiveness of four bias mitigation techniques in an educational dataset aiming at predicting students’ dropout rate. The overarching research question is: “How effective are the techniques of reweighting, resampling, and Reject Option-based Classification (ROC) pivoting in mitigating the predictive bias associated with high school dropout rates in the HSLS:09 dataset?” The effectiveness of these techniques was assessed based on performance metrics including false positive rate (FPR), accuracy, and F1 score. The study focused on the biological sex of students as the protected attribute. The reweighting technique was found to be ineffective, showing results identical to the baseline condition. Both uniform and preferential resampling techniques significantly reduced predictive bias, especially in the FPR metric but at the cost of reduced accuracy and F1 scores. The ROC pivot technique marginally reduced predictive bias while maintaining the original performance of the classifier, emerging as the optimal method for the HSLS:09 dataset. This research extends the understanding of bias mitigation in educational contexts, demonstrating practical applications of various techniques and providing insights for educators and policymakers. By focusing on an educational dataset, it contributes novel insights beyond the commonly studied datasets, highlighting the importance of context-specific approaches in bias mitigation.
(This article belongs to the Special Issue Real-World Applications of Machine Learning Techniques)
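
Of the techniques compared, reweighting is easy to state exactly: each training instance receives the weight P(A=a)·P(Y=y)/P(A=a, Y=y), so that the protected attribute and the label become statistically independent in the weighted data. A minimal pandas sketch follows; the column names and toy data are hypothetical placeholders, not the HSLS:09 field names.

```python
import pandas as pd

def reweighing_weights(df, group_col="sex", label_col="dropout"):
    """Kamiran-Calders reweighing: w(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y)."""
    p_a = df[group_col].value_counts(normalize=True)
    p_y = df[label_col].value_counts(normalize=True)
    p_ay = df.groupby([group_col, label_col]).size() / len(df)
    return df.apply(
        lambda row: p_a[row[group_col]] * p_y[row[label_col]] / p_ay[(row[group_col], row[label_col])],
        axis=1,
    )

df = pd.DataFrame({"sex": ["f", "f", "m", "m", "m"], "dropout": [1, 0, 0, 0, 1]})
df["weight"] = reweighing_weights(df)   # under-represented (group, label) cells get weights above 1
```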

16 pages, 10529 KiB  
Article
Drought Stress Might Induce Sexual Spatial Segregation in Dioecious Populus euphratica—Insights from Long-Term Water Use Efficiency and Growth Rates
by Honghua Zhou, Zhaoxia Ye, Yuhai Yang and Chenggang Zhu
Biology 2024, 13(5), 318; https://doi.org/10.3390/biology13050318 - 2 May 2024
Abstract
P. euphratica stands as the pioneering and dominant tree within desert riparian forests in arid and semi-arid regions. The aim of our work was to reveal why dioecious P. euphratica in natural desert riparian forests in the lower Tarim River exhibits sexual spatial distribution differences, by combining field investigation, tree ring techniques, isotope analysis techniques, and statistical analyses. The results showed that P. euphratica was a male-biased population, with the operational sex ratio (OSR) exhibiting spatial distribution differences in response to variations in drought stress resulting from changes in groundwater depth. The highest OSR was observed under mild drought stress (groundwater depth of 6–7 m), and it was reduced under non-drought stress (groundwater depth below 6 m) or severe drought stress (groundwater depth exceeding 7 m). As drought stress escalated, the degradation and aging of the P. euphratica forest became more pronounced. Males exhibited significantly higher growth rates and WUEi than females under mild drought stress. However, under severe drought stress, males’ growth rates significantly slowed down, accompanied by significantly lower WUEi than in females. This divergence determined the sexual spatial segregation of P. euphratica in the natural desert riparian forests of the lower Tarim River. Furthermore, the current ecological water conveyance project (EWCP) in the lower Tarim River can hardly reverse the degradation and aging of the P. euphratica forest, owing to inadequate population regeneration. Consequently, we advocated for an optimized ecological water conveyance mode to restore, conserve, and rejuvenate natural P. euphratica forests.
(This article belongs to the Special Issue Dendrochronology in Arid and Semiarid Regions)

21 pages, 3643 KiB  
Article
Enhancing Legal Sentiment Analysis: A Convolutional Neural Network–Long Short-Term Memory Document-Level Model
by Bolanle Abimbola, Enrique de La Cal Marin and Qing Tan
Mach. Learn. Knowl. Extr. 2024, 6(2), 877-897; https://doi.org/10.3390/make6020041 - 19 Apr 2024
Abstract
This research investigates the application of deep learning in sentiment analysis of Canadian maritime case law. It offers a framework for improving maritime law and legal analytic policy-making procedures. The automation of legal document extraction takes center stage, underscoring the vital role sentiment analysis plays at the document level. Therefore, this study introduces a novel strategy for sentiment analysis in Canadian maritime case law, combining sentiment case law approaches with state-of-the-art deep learning techniques. The overarching goal is to systematically unearth hidden biases within case law and investigate their impact on legal outcomes. Employing Convolutional Neural Network (CNN)- and long short-term memory (LSTM)-based models, this research achieves a remarkable accuracy of 98.05% for categorizing instances. In contrast, conventional machine learning techniques such as support vector machine (SVM) yield an accuracy rate of 52.57%, naïve Bayes at 57.44%, and logistic regression at 61.86%. The superior accuracy of the CNN and LSTM model combination underscores its usefulness in legal sentiment analysis, offering promising future applications in diverse fields like legal analytics and policy design. These findings mark a significant advance for AI-powered legal tools, presenting more sophisticated and sentiment-aware options for the legal profession.
(This article belongs to the Section Learning)
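
A document-level CNN-LSTM classifier of the kind described stacks a convolutional layer for local n-gram features in front of an LSTM that aggregates them across the document. The Keras sketch below shows that shape only; the vocabulary size, layer widths, and three sentiment classes are assumptions rather than the paper's configuration.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Embedding(input_dim=20000, output_dim=128),   # vocabulary size is an assumption
    layers.Conv1D(64, 5, activation="relu"),             # local n-gram features
    layers.MaxPooling1D(4),
    layers.LSTM(64),                                      # long-range document context
    layers.Dense(3, activation="softmax"),                # e.g., negative / neutral / positive
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```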

19 pages, 587 KiB  
Article
CRAS: Curriculum Regularization and Adaptive Semi-Supervised Learning with Noisy Labels
by Ryota Higashimoto, Soh Yoshida and Mitsuji Muneyasu
Appl. Sci. 2024, 14(3), 1208; https://doi.org/10.3390/app14031208 - 31 Jan 2024
Abstract
This paper addresses the performance degradation of deep neural networks caused by learning with noisy labels. Recent research on this topic has exploited the memorization effect: networks fit data with clean labels during the early stages of learning and eventually memorize data with noisy labels. This property allows for the separation of clean and noisy samples from a loss distribution. In recent years, semi-supervised learning, which divides training data into a set of labeled clean samples and a set of unlabeled noisy samples, has achieved impressive results. However, this strategy has two significant problems: (1) the accuracy of dividing the data into clean and noisy samples depends strongly on the network’s performance, and (2) if the divided data are biased towards the unlabeled samples, there are few labeled samples, causing the network to overfit to the labels and leading to a poor generalization performance. To solve these problems, we propose the curriculum regularization and adaptive semi-supervised learning (CRAS) method. Its key ideas are (1) to train the network with robust regularization techniques as a warm-up before dividing the data, and (2) to control the strength of the regularization using loss weights that adaptively respond to data bias, which varies with each split at each training epoch. We evaluated the performance of CRAS on benchmark image classification datasets, CIFAR-10 and CIFAR-100, and real-world datasets, mini-WebVision and Clothing1M. The findings demonstrate that CRAS excels in handling noisy labels, resulting in superior generalization and robustness to a range of noise rates, compared with the existing method.
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
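
The clean/noisy split that CRAS and related methods start from typically relies on the memorization effect: per-sample losses are fitted with a two-component mixture, and the low-loss mode is treated as clean. The sketch below shows that common GMM-based split, not CRAS's curriculum regularization or adaptive weighting; the probability threshold is an assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def split_clean_noisy(per_sample_loss, threshold=0.5):
    """Fit a 2-component GMM to per-sample losses; the low-loss mode is treated as 'clean'."""
    losses = np.asarray(per_sample_loss, dtype=float).reshape(-1, 1)
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)   # normalize to [0, 1]
    gmm = GaussianMixture(n_components=2, random_state=0).fit(losses)
    clean_component = int(np.argmin(gmm.means_.ravel()))
    p_clean = gmm.predict_proba(losses)[:, clean_component]
    return p_clean > threshold        # boolean mask: True -> labeled (clean) set, False -> unlabeled set
```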

24 pages, 1274 KiB  
Article
Assessing Disparities in Predictive Modeling Outcomes for College Student Success: The Impact of Imputation Techniques on Model Performance and Fairness
by Nazanin Nezami, Parian Haghighat, Denisa Gándara and Hadis Anahideh
Educ. Sci. 2024, 14(2), 136; https://doi.org/10.3390/educsci14020136 - 29 Jan 2024
Abstract
The education sector has been quick to recognize the power of predictive analytics to enhance student success rates. However, there are challenges to widespread adoption, including the lack of accessibility and the potential perpetuation of inequalities. These challenges arise at different stages of modeling, including data preparation, model development, and evaluation. These steps can introduce additional bias to the system if not appropriately performed. Substantial incompleteness in responses is a common problem in nationally representative education data at a large scale. This can lead to missing data and can potentially impact the representativeness and accuracy of the results. While many education-related studies address the challenges of missing data, little is known about the impact of handling missing values on the fairness of predictive outcomes in practice. In this paper, we aim to assess the disparities in predictive modeling outcomes for college student success and investigate the impact of imputation techniques on model performance and fairness using various notions. We conduct a prospective evaluation to provide a less biased estimation of future performance and fairness than an evaluation of historical data. Our comprehensive analysis of a real large-scale education dataset reveals key insights on modeling disparities and the impact of imputation techniques on the fairness of the predictive outcome under different testing scenarios. Our results indicate that imputation introduces bias if the testing set follows the historical distribution. However, if the injustice in society is addressed and, consequently, the upcoming batch of observations is equalized, the model would be less biased.
(This article belongs to the Section Higher Education)
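
Measuring the fairness impact of an imputation choice reduces to imputing, fitting a model, and comparing a group metric such as the false positive rate gap, as discussed above. The sketch below uses scikit-learn with a median imputation, a random forest, and synthetic placeholder data; the array names, the binary protected attribute, and the specific imputer are assumptions for illustration, not the paper's setup.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def fpr_gap(y_true, y_pred, group):
    """Absolute difference in false positive rates between two protected groups."""
    rates = [np.mean(y_pred[(group == g) & (y_true == 0)] == 1) for g in np.unique(group)]
    return abs(rates[0] - rates[1])

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10)); X[rng.random(X.shape) < 0.1] = np.nan   # placeholder data with missingness
y = rng.integers(0, 2, 500)
group = rng.integers(0, 2, 500)                                          # binary protected attribute

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, test_size=0.3, random_state=0)
imputer = SimpleImputer(strategy="median")            # one of several imputation choices to compare
clf = RandomForestClassifier(random_state=0).fit(imputer.fit_transform(X_tr), y_tr)
y_pred = clf.predict(imputer.transform(X_te))
print("accuracy:", np.mean(y_pred == y_te), "FPR gap:", fpr_gap(y_te, y_pred, g_te))
```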