Computers, Volume 13, Issue 6 (June 2024) – 30 articles

Cover Story: During the pandemic, container vessels queued in open waters, waiting to be handled at container terminals. This raised the question of how such challenging operational situations could be mitigated in the future. Digital Twins seem to be a promising approach since they have already been used in production and logistics to increase flexibility in operations and to support operational decision-making based on real-time information. To investigate this idea, a Delphi study was conducted with 17 experts. The results indicate that they see great potential in analyzing past working shift data to identify the reasons for poor terminal performance. Moreover, they agree on best practices and support the use of emulation for detailed ad hoc simulation studies.
17 pages, 5897 KiB  
Article
A Contextual Model for Visual Information Processing
by Illia Khurtin and Mukesh Prasad
Computers 2024, 13(6), 155; https://doi.org/10.3390/computers13060155 - 20 Jun 2024
Viewed by 737
Abstract
Despite significant achievements in the artificial narrow intelligence sphere, the mechanisms of human-like (general) intelligence are still undeveloped. There is a theory stating that the human brain extracts the meaning of information rather than recognizes the features of a phenomenon. Extracting the meaning is finding a set of transformation rules (context) and applying them to the incoming information, producing an interpretation. Then, the interpretation is compared to something already seen and is stored in memory. Information can have different meanings in different contexts. A mathematical model of a context processor and a differential contextual space which can perform the interpretation is discussed and developed in this paper. This study examines whether the basic principles of differential contextual spaces work in practice. The model is developed in the Rust programming language and trained on black and white images which are rotated and shifted both horizontally and vertically according to the saccades and torsion movements of a human eye. Then, a picture that has never been seen in the particular transformation, but has been seen in another one, is exposed to the model. The model considers the image in all known contexts and extracts the meaning. The results show that the program can successfully process black and white images which are transformed by shifts and rotations. This research lays the groundwork for further investigation of the contextual model principles with which general intelligence might operate. Full article
(This article belongs to the Special Issue Feature Papers in Computers 2024)
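The core mechanism described, interpreting an image by searching for a known transformation (context) that maps it onto something already stored in memory, can be illustrated with a short sketch. The following Python example is purely illustrative (the authors' implementation is in Rust, and the transformation set and memory layout here are assumptions):

```python
import numpy as np

# Assumed context set: invertible image transformations (shifts wrap around).
CONTEXTS = {
    "identity": lambda img: img,
    "rot90": lambda img: np.rot90(img, k=1),
    "rot180": lambda img: np.rot90(img, k=2),
    "rot270": lambda img: np.rot90(img, k=3),
    "shift_right": lambda img: np.roll(img, shift=1, axis=1),
    "shift_down": lambda img: np.roll(img, shift=1, axis=0),
}

def interpret(image, memory):
    """Consider the image in all known contexts; return (context, label)
    if some transformation of the input reproduces a stored image."""
    for name, transform in CONTEXTS.items():
        candidate = transform(image)
        for label, stored in memory.items():
            if np.array_equal(candidate, stored):
                return name, label   # the meaning: an interpretation in a context
    return None                      # unseen in every known context

# An asymmetric "L" stored upright is still recognised when shown rotated.
letter_l = np.zeros((5, 5), dtype=np.uint8)
letter_l[:, 1] = 1
letter_l[4, 1:4] = 1
memory = {"letter_L": letter_l}
print(interpret(np.rot90(letter_l, k=3), memory))   # ('rot90', 'letter_L')
```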

27 pages, 6904 KiB  
Article
Deep Convolutional Generative Adversarial Networks in Image-Based Android Malware Detection
by Francesco Mercaldo, Fabio Martinelli and Antonella Santone
Computers 2024, 13(6), 154; https://doi.org/10.3390/computers13060154 - 19 Jun 2024
Cited by 2 | Viewed by 1285
Abstract
The recent advancements in generative adversarial networks have showcased their remarkable ability to create images that are indistinguishable from real ones. This has prompted both the academic and industrial communities to tackle the challenge of distinguishing fake images from genuine ones. We introduce a method to assess whether images generated by generative adversarial networks, using a dataset of real-world Android malware applications, can be distinguished from actual images. Our experiments involved two types of deep convolutional generative adversarial networks, and utilized images derived from both static analysis (which does not require running the application) and dynamic analysis (which does require running the application). After generating the images, we trained several supervised machine learning models to determine whether these classifiers can differentiate between real and generated malicious applications. Our results indicate that, despite being visually indistinguishable to the human eye, the generated images were correctly identified by a classifier with an F-measure of approximately 0.8. While most generated images were accurately recognized as fake, some were not, leading them to be mistaken for images produced by real applications. Full article
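Image-based Android malware detection typically starts by reshaping an application's raw bytes into a grayscale image, which then feeds the GANs and classifiers. A minimal sketch of that conversion step (the file name and row width are illustrative assumptions, not details from the paper):

```python
import numpy as np
from PIL import Image

def apk_to_grayscale(path: str, width: int = 256) -> Image.Image:
    """Reshape the raw bytes of an (assumed) APK/DEX file into a 2-D
    grayscale image, the usual input for image-based malware detectors."""
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    height = len(data) // width            # drop the ragged last row
    img = data[: height * width].reshape(height, width)
    return Image.fromarray(img, mode="L")

# Hypothetical usage; the resulting images can feed a DCGAN or a classifier.
# apk_to_grayscale("sample.apk").save("sample.png")
```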

24 pages, 4449 KiB  
Article
Empowering Communication: A Deep Learning Framework for Arabic Sign Language Recognition with an Attention Mechanism
by R. S. Abdul Ameer, M. A. Ahmed, Z. T. Al-Qaysi, M. M. Salih and Moceheb Lazam Shuwandy
Computers 2024, 13(6), 153; https://doi.org/10.3390/computers13060153 - 19 Jun 2024
Cited by 1 | Viewed by 1291
Abstract
This article emphasises the urgent need for appropriate communication tools for communities of people who are deaf or hard-of-hearing, with a specific emphasis on Arabic Sign Language (ArSL). In this study, we use long short-term memory (LSTM) models in conjunction with MediaPipe to reduce the barriers to effective communication and social integration for deaf communities. The model design incorporates LSTM units and an attention mechanism to handle the input sequences of extracted keypoints from recorded gestures. The attention layer selectively directs its focus toward relevant segments of the input sequence, whereas the LSTM layer handles temporal relationships and encodes the sequential data. A comprehensive dataset comprising fifty frequently used words and numbers in ArSL was collected for developing the recognition model. This dataset comprises many instances of gestures recorded by five volunteers. The results of the experiment support the effectiveness of the proposed approach, as the model achieved accuracies of more than 85% (individual volunteers) and 83% (combined data). The high level of precision emphasises the potential of artificial intelligence-powered translation software to improve effective communication for people with hearing impairments and to enable them to interact with the larger community more easily. Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision)
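The described architecture, an LSTM encoding sequences of MediaPipe keypoints with an attention layer weighting the relevant frames, can be sketched in Keras roughly as follows. Sequence length, feature count, and layer sizes are assumptions; only the 50-word output follows the abstract:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

T, F, NUM_CLASSES = 30, 126, 50   # frames, keypoint features, ArSL words (assumed sizes)

inputs = layers.Input(shape=(T, F))
h = layers.LSTM(128, return_sequences=True)(inputs)   # temporal encoding per frame
scores = layers.Dense(1, activation="tanh")(h)        # attention score per frame
weights = layers.Softmax(axis=1)(scores)              # normalise over time steps
# Weighted sum over time: the attention layer's focus on relevant segments.
context = layers.Lambda(lambda z: tf.reduce_sum(z[0] * z[1], axis=1))([h, weights])
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(context)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```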

40 pages, 4392 KiB  
Review
Hybrid Architectures Used in the Protection of Large Healthcare Records Based on Cloud and Blockchain Integration: A Review
by Leonardo Juan Ramirez Lopez, David Millan Mayorga, Luis Hernando Martinez Poveda, Andres Felipe Carbonell Amaya and Wilson Rojas Reales
Computers 2024, 13(6), 152; https://doi.org/10.3390/computers13060152 - 12 Jun 2024
Cited by 1 | Viewed by 2127
Abstract
The management of large medical files poses a critical challenge in the health sector, with conventional systems facing deficiencies in security, scalability, and efficiency. Blockchain ensures the immutability and traceability of medical records, while the cloud allows scalable and efficient storage. Together, they can transform the data management of electronic health record applications. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology was used to select the relevant studies that contribute to this research, with special emphasis placed on maintaining the integrity and security of the blockchain while harnessing the potential and efficiency of cloud infrastructures. The study’s focus is to provide a comprehensive and insightful examination of the modern landscape concerning the integration of blockchain and cloud advances, highlighting the current challenges and building a solid foundation for future development. Furthermore, it is very important to deepen the integration of blockchain security with the dynamic potential of cloud computing while guaranteeing that information integrity and security remain uncompromised. In conclusion, this paper serves as an important resource for analysts, specialists, and partners looking to delve into and develop the integration of blockchain and cloud innovations. Full article
(This article belongs to the Section Cloud Continuum and Enabled Applications)

27 pages, 1367 KiB  
Article
InfoSTGCAN: An Information-Maximizing Spatial-Temporal Graph Convolutional Attention Network for Heterogeneous Human Trajectory Prediction
by Kangrui Ruan and Xuan Di
Computers 2024, 13(6), 151; https://doi.org/10.3390/computers13060151 - 11 Jun 2024
Cited by 1 | Viewed by 1148
Abstract
Predicting the future trajectories of multiple interacting pedestrians within a scene has increasingly gained importance in various fields, e.g., autonomous driving, human–robot interaction, and so on. The complexity of this problem is heightened due to the social dynamics among different pedestrians and their heterogeneous implicit preferences. In this paper, we present the Information Maximizing Spatial-Temporal Graph Convolutional Attention Network (InfoSTGCAN), which takes into account both pedestrian interactions and heterogeneous behavior choice modeling. To effectively capture the complex interactions among pedestrians, we integrate spatial-temporal graph convolution and spatial-temporal graph attention. To grasp the heterogeneity in pedestrians’ behavior choices, our model goes a step further by learning to predict an individual-level latent code for each pedestrian. Each latent code represents a distinct pattern of movement choice. Finally, based on the observed historical trajectory and the learned latent code, the proposed method is trained to cover the ground-truth future trajectory of this pedestrian with a bivariate Gaussian distribution. We evaluate the proposed method through a comprehensive set of experiments and demonstrate that our method outperforms all baseline methods on the commonly used metrics, Average Displacement Error and Final Displacement Error. Notably, visualizations of the generated trajectories reveal our method’s capacity to handle different scenarios. Full article
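Training a model to "cover the ground-truth future trajectory with a bivariate Gaussian distribution" is usually done by minimizing the negative log-likelihood of the observed positions under the predicted Gaussian parameters. A common formulation of that loss, offered as a sketch rather than the authors' exact code:

```python
import torch

def bivariate_gaussian_nll(pred, target, eps=1e-6):
    """pred: (..., 5) = (mu_x, mu_y, log_sx, log_sy, rho_raw); target: (..., 2).
    Returns the mean negative log-likelihood of target under the Gaussian."""
    mu_x, mu_y, log_sx, log_sy, rho_raw = pred.unbind(dim=-1)
    sx, sy = log_sx.exp(), log_sy.exp()      # std devs kept positive via exp
    rho = torch.tanh(rho_raw)                # correlation constrained to (-1, 1)
    dx = (target[..., 0] - mu_x) / (sx + eps)
    dy = (target[..., 1] - mu_y) / (sy + eps)
    one_m_r2 = (1 - rho ** 2).clamp(min=eps)
    z = dx ** 2 + dy ** 2 - 2 * rho * dx * dy
    log_pdf = -(z / (2 * one_m_r2)) \
              - torch.log(2 * torch.pi * sx * sy * one_m_r2.sqrt())
    return -log_pdf.mean()
```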

13 pages, 830 KiB  
Article
On Predicting Exam Performance Using Version Control Systems’ Features
by Lorenzo Canale, Luca Cagliero, Laura Farinetti and Marco Torchiano
Computers 2024, 13(6), 150; https://doi.org/10.3390/computers13060150 - 9 Jun 2024
Viewed by 888
Abstract
The advent of Version Control Systems (VCS) in computer science education has significantly improved the learning experience. The Learning Analytics community has started to analyze the interactions between students and VCSs to evaluate the behavioral and cognitive aspects of the learning process. Within the aforesaid scope, a promising research direction is the use of Artificial Intelligence (AI) to predict students’ exam outcomes early based on VCS usage data. Previous AI-based solutions have two main drawbacks: (i) they rely on static models, which disregard temporal changes in the student–VCS interactions; (ii) AI reasoning is not transparent to end-users. This paper proposes a time-dependent approach to predict student performance early from VCS data. It applies and compares different classification models trained at various course stages. To gain insights into exam performance predictions, it combines classification with explainable AI techniques. It visualizes the explanations of the time-varying performance predictors. The results of a real case study show that the effect of VCS-based features on the exam success rate is relevant much earlier than the end of the course, whereas the timely submission of the first lab assignment is a reliable predictor of the exam grade. Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning)
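The time-dependent idea, one classifier per course stage trained only on the VCS features available up to that point, can be sketched with scikit-learn. The feature schema and weekly granularity below are illustrative assumptions, and the explainable-AI step (e.g., SHAP) is omitted:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def train_stage_models(vcs_features_by_week, exam_outcomes):
    """vcs_features_by_week: dict week -> (n_students, n_features) matrix of
    commit counts, submission timeliness, etc. (illustrative features).
    Returns one fitted classifier per course stage."""
    models = {}
    for week, X in sorted(vcs_features_by_week.items()):
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        score = cross_val_score(clf, X, exam_outcomes, cv=5).mean()
        models[week] = clf.fit(X, exam_outcomes)
        # Shows from which week onward the prediction becomes reliable.
        print(f"week {week}: CV accuracy {score:.2f}")
    return models
```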

15 pages, 442 KiB  
Article
A Clustering and PL/SQL-Based Method for Assessing MLP-Kmeans Modeling
by Victor Hugo Silva-Blancas, Hugo Jiménez-Hernández, Ana Marcela Herrera-Navarro, José M. Álvarez-Alvarado, Diana Margarita Córdova-Esparza and Juvenal Rodríguez-Reséndiz
Computers 2024, 13(6), 149; https://doi.org/10.3390/computers13060149 - 9 Jun 2024
Viewed by 945
Abstract
With new high-performance server technology in data centers and bunkers, optimizing search engines to process time and resource consumption efficiently is necessary. The database query system, upheld by the standard SQL language, has maintained the same functional design since the advent of PL/SQL. This situation is caused by recent research focusing on computer resource management, encryption, and security rather than on improving data mining based on AI tools, machine learning (ML), and artificial neural networks (ANNs). This work presents a projected methodology integrating a multilayer perceptron (MLP) with Kmeans. This methodology is compared with traditional PL/SQL tools and aims to improve the database response time while outlining future advantages for ML and Kmeans in data processing. We propose a new corollary: hkH = SSE(C), where k > 0 and X, executed on application software querying data collections with more than 306 thousand records. This study produced a comparative table between PL/SQL and MLP-Kmeans based on three hypotheses: line query, group query, and total query. The results show that line query increased to 9 ms, group query increased from 88 to 2460 ms, and total query from 13 to 279 ms. Testing one methodology against the other not only shows the incremental fatigue and time consumption that training brings to database querying, but also that a neural network, despite its complexity, is capable of producing more precise results than the simple use of PL/SQL instructions; this will become even more important for domain-specific problems in the future. Full article
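The SSE(C) term is the standard k-means within-cluster sum of squared errors. A toy sketch of the MLP-plus-Kmeans pairing (random stand-in data; sizes and layer widths are assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

X = np.random.rand(1000, 8)                 # stand-in for queryable records

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
sse = km.inertia_                           # SSE(C): squared distances to centroids
print(f"SSE(C) = {sse:.2f}")

# An MLP learns the cluster assignment so future lookups can be routed to
# one partition instead of scanning the whole table (the PL/SQL baseline).
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
mlp.fit(X, km.labels_)
print("routing accuracy:", mlp.score(X, km.labels_))
```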

28 pages, 6199 KiB  
Article
ARPocketLab—A Mobile Augmented Reality System for Pedagogic Applications
by Miguel Nunes, Telmo Adão, Somayeh Shahrabadi, António Capela, Diana Carneiro, Pedro Branco, Luís Magalhães, Raul Morais and Emanuel Peres
Computers 2024, 13(6), 148; https://doi.org/10.3390/computers13060148 - 8 Jun 2024
Cited by 1 | Viewed by 1052
Abstract
The widespread adoption of digital technologies in educational systems has been globally reflecting a shift in pedagogic content delivery that seems to fit modern generations of students while tackling relevant challenges faced by the current scholar context, e.g., progress traceability, fair access to pedagogic content and intuitive visual representativeness, mobility issue mitigation, and sustainability in crisis situations. Among these technologies, augmented reality (AR) emerges as a particularly promising approach, allowing the visualization of computer-generated interactive data on top of real-world elements, thus enhancing comprehension and intuition regarding educational content, often in mobile settings. While the application of AR to education has been widely addressed, most works focus on interaction and cognitive performance, with less attention paid to the limitations associated with setup complexity, mostly related to the tools used to configure experiences, or contextual range, i.e., versatility in targeting technical/scientific domains. Therefore, this paper introduces ARPocketLab, a digital, mobile, flexible, and scalable solution designed for the dynamic needs of modern tutorship. With a dual-interface system, it allows both educators and students to interactively design and engage with AR content directly tied to educational outcomes. Moreover, ARPocketLab’s design, aimed at handheld operationalization using a minimal set of physical resources, is particularly relevant in environments where educational materials are scarce or in situations where remote learning becomes necessary. Its versatility stems from the fact that it only requires a marker or a surface (e.g., a table) to function at full capacity. To evaluate the solution, tests were conducted with 8th-grade Portuguese students within the context of the Physics and Chemistry subject. Results demonstrate the application’s effectiveness in providing didactic assistance, with positive feedback not only in terms of usability but also regarding learning performance. The participants also reported openness to the adoption of AR in pedagogic contexts. Full article
(This article belongs to the Special Issue Extended or Mixed Reality (AR + VR): Technology and Applications)

17 pages, 714 KiB  
Review
Integrating Machine Learning with Non-Fungible Tokens
by Elias Iosif and Leonidas Katelaris
Computers 2024, 13(6), 147; https://doi.org/10.3390/computers13060147 - 7 Jun 2024
Viewed by 958
Abstract
In this paper, we undertake a thorough comparative examination of data resources pertinent to Non-Fungible Tokens (NFTs) within the framework of Machine Learning (ML). The core research question of the present work is how the integration of ML techniques and NFTs manifests across various domains. Our primary contribution lies in proposing a structured perspective for this analysis, encompassing a comprehensive array of criteria that collectively span the entire spectrum of NFT-related data. To demonstrate the application of the proposed perspective, we systematically survey a selection of indicative research works, drawing insights from diverse sources. By evaluating these data resources against established criteria, we aim to provide a nuanced understanding of their respective strengths, limitations, and potential applications within the intersection of NFTs and ML. Full article
(This article belongs to the Special Issue When Blockchain Meets IoT: Challenges and Potentials)

28 pages, 789 KiB  
Article
Unlocking Blockchain UTXO Transactional Patterns and Their Effect on Storage and Throughput Trade-Offs
by David Melo, Saúl Eduardo Pomares-Hernández, Lil María Xibai Rodríguez-Henríquez and Julio César Pérez-Sansalvador
Computers 2024, 13(6), 146; https://doi.org/10.3390/computers13060146 - 7 Jun 2024
Viewed by 1219
Abstract
Blockchain technology ensures record-keeping by redundantly storing and verifying transactions on a distributed network of nodes. Permissionless blockchains have pushed the development of decentralized applications (DApps) characterized by distributed business logic, resilience to centralized failures, and data immutability. However, storage scalability without sacrificing throughput is one of the remaining open challenges in permissionless blockchains. Enhancing throughput often compromises storage, as seen in projects such as Elastico, OmniLedger, and RapidChain. On the other hand, solutions seeking to save storage, such as CUB, Jidar, SASLedger, and SE-Chain, reduce the transactional throughput. To our knowledge, no analysis has been performed that relates storage growth to transactional throughput. In this article, we delve into the execution of the Bitcoin and Ethereum transactional models, unlocking patterns that represent any transaction on the blockchain. We reveal the trade-off between transactional throughput and storage. To achieve this, we introduce the spent-by relation, a new abstraction of the UTXO model that utilizes a directed acyclic graph (DAG) to reveal the patterns and allows for a graph with granular information. We then analyze the transactional patterns to identify the most storage-intensive ones and those that offer greater flexibility in the throughput/storage trade-off. Finally, we present an analytical study showing that the UTXO model is more storage-intensive than the account model but scales better in transactional throughput. Full article
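The spent-by relation can be made concrete as a directed acyclic graph whose edge (a, b) reads "an output of transaction a is spent by transaction b". A toy sketch with networkx (transaction names invented for illustration):

```python
import networkx as nx

# Toy UTXO history; an edge (a, b) means "an output of a is spent by b".
spent_by = nx.DiGraph()
spent_by.add_edges_from([
    ("coinbase_0", "tx_1"),   # tx_1 spends the coinbase output
    ("tx_1", "tx_2"),         # one output spent downstream
    ("tx_1", "tx_3"),         # tx_1 had two outputs -> fan-out (split pattern)
    ("tx_2", "tx_4"),
    ("tx_3", "tx_4"),         # tx_4 merges two outputs -> fan-in (join pattern)
])

assert nx.is_directed_acyclic_graph(spent_by)   # spending can never cycle
# Fan-out/fan-in degrees expose which transactional patterns dominate,
# and hence where the storage/throughput trade-off bites.
print(sorted(spent_by.out_degree()))
print(sorted(spent_by.in_degree()))
```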

18 pages, 5386 KiB  
Article
Quasi/Periodic Noise Reduction in Images Using Modified Multiresolution-Convolutional Neural Networks for 3D Object Reconstructions and Comparison with Other Convolutional Neural Network Models
by Osmar Antonio Espinosa-Bernal, Jesús Carlos Pedraza-Ortega, Marco Antonio Aceves-Fernandez, Victor Manuel Martínez-Suárez, Saul Tovar-Arriaga, Juan Manuel Ramos-Arreguín and Efrén Gorrostieta-Hurtado
Computers 2024, 13(6), 145; https://doi.org/10.3390/computers13060145 - 7 Jun 2024
Cited by 1 | Viewed by 760
Abstract
The modeling of real objects digitally is an area that has generated high demand due to the need for systems able to reproduce 3D objects from real objects. To this end, several techniques have been proposed to model objects in a computer, with the fringe profilometry technique being the most researched. However, this technique has the disadvantage of generating Moiré noise that ends up affecting the accuracy of the final 3D reconstructed object. In order to obtain 3D objects as close as possible to the original object, different techniques have been developed to attenuate the quasi/periodic noise, namely the application of convolutional neural networks (CNNs), a method recently applied to restore and reduce and/or eliminate noise in images as a pre-processing step in the generation of 3D objects. To this end, this work attenuates the quasi/periodic noise in images acquired by the fringe profilometry technique, using a modified CNN-Multiresolution network. The results obtained are compared with the original CNN-Multiresolution network, the UNet network, and the FCN32s network, and a quantitative comparison is made using the Image Mean Square Error (IMMSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Profile Mean Square Error (MSE) metrics. Full article
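Of the metrics used, PSNR and SSIM are available off the shelf in scikit-image, and the image MSE is a one-liner; a minimal comparison harness (inputs assumed to be same-shape uint8 arrays) might look like this:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare(reference: np.ndarray, denoised: np.ndarray) -> dict:
    """Score a denoised fringe image against its clean uint8 reference."""
    mse = float(np.mean((reference.astype(float) - denoised.astype(float)) ** 2))
    return {
        "MSE": mse,
        "PSNR": peak_signal_noise_ratio(reference, denoised, data_range=255),
        "SSIM": structural_similarity(reference, denoised, data_range=255),
    }

# Usage with hypothetical arrays: scores = compare(clean_img, network_output)
```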

23 pages, 477 KiB  
Article
Searching Questions and Learning Problems in Large Problem Banks: Constructing Tests and Assignments on the Fly
by Oleg Sychev
Computers 2024, 13(6), 144; https://doi.org/10.3390/computers13060144 - 5 Jun 2024
Viewed by 785
Abstract
Modern advances in creating shared banks of learning problems and automatic question and problem generation have led to the creation of large question banks in which human teachers cannot view every question. These questions are classified according to the knowledge necessary to solve them and the question difficulties. Constructing tests and assignments on the fly at the teacher’s request eliminates the possibility of cheating by sharing solutions because each student receives a unique set of questions. However, the random generation of predictable and effective assignments from a set of problems is a non-trivial task. In this article, an algorithm for generating assignments based on teachers’ requests for their content is proposed. The algorithm is evaluated on a bank of expression-evaluation questions containing more than 5000 questions. The evaluation shows that the proposed algorithm can guarantee the minimum expected number of target concepts (rules) in an exercise with any settings. The available bank and exercise difficulty chiefly determine the difficulty of the found questions. It depends only weakly on the number of target concepts per item in the exercise: teaching more rules is achieved by rotating them among the exercise items on lower difficulty settings. An ablation study shows that all the principal components of the algorithm contribute to its performance. The proposed algorithm can be used to reliably generate individual exercises from large, automatically generated question banks according to teachers’ requests, which is important in massive open online courses. Full article
(This article belongs to the Special Issue Future Trends in Computer Programming Education)
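One simple way to guarantee a minimum number of target concepts per generated exercise is greedy selection over the bank. The sketch below is a drastic simplification of the paper's algorithm: it scores each candidate question by how many still-missing target rules it covers and breaks ties by distance from the requested difficulty, rotating the rules once all are covered:

```python
def build_exercise(bank, target_concepts, n_items, difficulty):
    """bank: list of dicts like {"id": 17, "concepts": {"precedence"}, "difficulty": 0.4}
    (illustrative schema). Greedily pick n_items questions covering target_concepts."""
    remaining, chosen = set(target_concepts), []
    pool = list(bank)
    for _ in range(n_items):
        # Prefer questions covering more uncovered rules, then closest difficulty.
        pool.sort(key=lambda q: (-len(remaining & q["concepts"]),
                                 abs(q["difficulty"] - difficulty)))
        pick = pool.pop(0)
        chosen.append(pick)
        remaining -= pick["concepts"]
        if not remaining:              # all rules covered: rotate them across items
            remaining = set(target_concepts)
    return chosen
```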

18 pages, 9307 KiB  
Article
A GIS-Based Fuzzy Model to Detect Critical Polluted Urban Areas in Presence of Heatwave Scenarios
by Barbara Cardone, Ferdinando Di Martino and Vittorio Miraglia
Computers 2024, 13(6), 143; https://doi.org/10.3390/computers13060143 - 5 Jun 2024
Viewed by 796
Abstract
This research presents a new method for detecting urban areas critical for the presence of air pollutants during periods of heatwaves. The proposed method uses a geospatial model based on the construction of Thiessen polygons and a fuzzy model that assesses, starting from air quality control unit measurement data, how concentrations of air pollutants are distributed in the urban study area during periods of heatwaves and determines the most critical areas as hotspots. The proposed method represents an optimal trade-off between the accuracy of the detection of critical areas and computational speed; the use of fuzzy techniques for assessing the intensity of concentrations of air pollutants allows evaluators to model the assessments of critical areas more naturally. The method is implemented in a GIS-based platform and has been tested in the city of Bologna, Italy. The resulting criticality maps of PM10, NO2, and PM2.5 pollutants during a heatwave period that occurred from 10 to 14 July 2023 revealed highly critical hotspots with high pollutant concentrations in densely populated areas. This framework provides a portable and easily interpretable decision support tool which allows users to evaluate which urban areas are most affected by air pollution during heatwaves, potentially posing health risks to the exposed population. Full article
(This article belongs to the Special Issue Feature Papers in Computers 2024)
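The two building blocks, Thiessen (Voronoi) polygons around the monitoring units and fuzzy grading of pollutant concentrations, have standard open-source equivalents. An illustrative sketch, with invented coordinates, measurements, and membership thresholds:

```python
import numpy as np
from scipy.spatial import Voronoi

def criticality(concentration, low, high):
    """Trapezoidal fuzzy membership for 'critical concentration':
    0 below `low`, 1 above `high`, linear in between (illustrative thresholds)."""
    return float(np.clip((concentration - low) / (high - low), 0.0, 1.0))

# Hypothetical monitoring-unit coordinates; the Voronoi regions play the role
# of Thiessen polygons assigning each city location to its nearest unit.
stations = np.array([[0.0, 0.0], [2.0, 0.5], [1.0, 2.0], [3.0, 2.5]])
vor = Voronoi(stations)
print(len(vor.regions), "Voronoi regions computed")

pm10 = [38.0, 55.2, 61.7, 24.9]          # one measurement per unit (made up)
for unit, value in enumerate(pm10):
    print(f"unit {unit}: PM10={value} -> criticality {criticality(value, 30, 60):.2f}")
```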

17 pages, 2327 KiB  
Article
Observer-Based Suboptimal Controller Design for Permanent Magnet Synchronous Motors: State-Dependent Riccati Equation Controller and Impulsive Observer Approaches
by Nasrin Kalamian, Masoud Soltani, Fariba Bouzari Liavoli and Mona Faraji Niri
Computers 2024, 13(6), 142; https://doi.org/10.3390/computers13060142 - 4 Jun 2024
Viewed by 935
Abstract
Permanent Magnet Synchronous Motors (PMSMs) with high energy efficiency, reliable performance, and a relatively simple structure are widely utilised in various applications. In this paper, a suboptimal controller is proposed for sensorless PMSMs based on the state-dependent Riccati equation (SDRE) technique combined with customised impulsive observers (IOs). Here, the SDRE technique facilitates a pseudo-linearised representation of the motor with state-dependent coefficients (SDCs) while preserving all its nonlinear features. Considering the risk of non-available/non-measurable states in the motor due to sensor and instrumentation costs, the SDRE is combined with IOs to estimate the PMSM speed and position states. Customised IOs are proven to be capable of obtaining quality, continuous estimates of the motor states despite the discrete format of the output signals. The simulation results in this work illustrate an accurate state estimation and control mechanism for the speed of the PMSM in the presence of load torque disturbances and reference speed changes. It is clearly shown that the SDRE-IO design is superior compared to the most popular existing regulators in the literature for sensorless speed control. Full article
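The SDRE recipe is: factor the dynamics as x' = A(x)x + B(x)u using state-dependent coefficients, then at each control step solve the continuous algebraic Riccati equation at the current state and apply the resulting LQR-like gain. A sketch with SciPy (the two-state SDC model below is a placeholder, not the PMSM dynamics):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def sdre_gain(A, B, Q, R):
    """One SDRE step: solve A'P + PA - P B R^{-1} B' P + Q = 0 at the
    current state and return the LQR-like gain K = R^{-1} B' P."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Illustrative 2-state system with a state-dependent coefficient a(x).
Q, R = np.eye(2), np.eye(1)
x = np.array([1.0, -0.5])
A = np.array([[0.0, 1.0],
              [-1.0 - x[0] ** 2, -0.5]])   # SDC factorisation (toy model)
B = np.array([[0.0], [1.0]])
u = -sdre_gain(A, B, Q, R) @ x             # control applied this step
print(u)
```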

9 pages, 275 KiB  
Article
Mitigating Large Language Model Bias: Automated Dataset Augmentation and Prejudice Quantification
by Devam Mondal and Carlo Lipizzi
Computers 2024, 13(6), 141; https://doi.org/10.3390/computers13060141 - 4 Jun 2024
Cited by 1 | Viewed by 1353
Abstract
Despite the growing capabilities of large language models, concerns exist about the biases they develop. In this paper, we propose a novel, automated mechanism for debiasing through specified dataset augmentation, framed in terms of bias producers, that can be useful in a variety of industries, especially ones that are “restricted” and have limited data. We consider that bias can occur due to intrinsic model architecture and dataset quality. The two aspects are evaluated using two different metrics we created. We show that our dataset augmentation algorithm reduces bias as measured by our metrics. Our code can be found on an online GitHub repository. Full article
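Debiasing through dataset augmentation is commonly implemented as counterfactual substitution, duplicating records with paired identity terms swapped so that group co-occurrence counts balance out. The sketch below shows only that general idea, with invented term pairs; it is not the paper's algorithm or its metrics:

```python
# Counterfactual augmentation: each sentence containing a term from one group
# is duplicated with the paired term, balancing group co-occurrence counts.
SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his"}  # illustrative pairs

def augment(sentences):
    augmented = list(sentences)
    for s in sentences:
        tokens = s.split()
        if any(t.lower() in SWAPS for t in tokens):
            augmented.append(" ".join(SWAPS.get(t.lower(), t) for t in tokens))
    return augmented

corpus = ["He is a talented engineer", "The award went to her team"]
print(augment(corpus))
```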

17 pages, 2534 KiB  
Article
LeakPred: An Approach for Identifying Components with Resource Leaks in Android Mobile Applications
by Josias Gomes Lima, Rafael Giusti and Arilo Claudio Dias-Neto
Computers 2024, 13(6), 140; https://doi.org/10.3390/computers13060140 - 3 Jun 2024
Viewed by 642
Abstract
Context: Mobile devices contain resources, such as the camera, battery, and memory, that are allocated, used, and then deallocated by mobile applications. Whenever a resource is allocated and not correctly released, a defect called a resource leak occurs, which can cause crashes and slowdowns. Objective: In this study, we intended to demonstrate the usefulness of the LeakPred approach in terms of the number of components with resource leak problems identified in applications. Method: We compared the approach’s effectiveness with three state-of-the-art methods in identifying leaks in 15 Android applications. Result: LeakPred obtained the best median (85.37%) of components with identified leaks, the best coverage (96.15%) of the classes of leaks that could be identified in the applications, and an accuracy of 81.25%. The Android Lint method achieved the second-best median (76.92%) and the highest accuracy (100%), but only covered 1.92% of the leak classes. Conclusions: LeakPred is effective in identifying leaky components in applications. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

26 pages, 8365 KiB  
Article
High-Performance Computing Storage Performance and Design Patterns—Btrfs and ZFS Performance for Different Use Cases
by Vedran Dakic, Mario Kovac and Igor Videc
Computers 2024, 13(6), 139; https://doi.org/10.3390/computers13060139 - 3 Jun 2024
Viewed by 4729
Abstract
Filesystems are essential components in contemporary computer systems that organize and manage data. Their performance is crucial in various applications, from web servers to data storage systems. This paper helps readers pick a suitable filesystem by comparing btrfs with ZFS across multiple situations and applications, ranging from sequential and random performance in the most common use cases to extreme use cases like high-performance computing (HPC). It showcases each option’s benefits and drawbacks, considering different usage scenarios. The performance of btrfs and ZFS is evaluated through rigorous testing, assessing their capabilities in handling huge files, managing numerous small files, and the speed of data reads and writes across varied usage levels. The analysis indicates no definitive answer; the selection of the optimal filesystem is contingent upon individual data-access requirements. Full article

16 pages, 902 KiB  
Article
Insights into How to Enhance Container Terminal Operations with Digital Twins
by Marvin Kastner, Nicolò Saporiti, Ann-Kathrin Lange and Tommaso Rossi
Computers 2024, 13(6), 138; https://doi.org/10.3390/computers13060138 - 30 May 2024
Viewed by 944
Abstract
The years 2021 and 2022 showed that maritime logistics are prone to interruptions. Ports especially turned out to be bottlenecks with long queues of waiting vessels. This leads to the question of whether this can be (at least partly) mitigated by means of better and more flexible terminal operations. Digital Twins have been in use in production and logistics to increase flexibility in operations and to support operational decision-making based on real-time information. However, the true potential of Digital Twins to enhance terminal operations still needs to be further investigated. A Delphi study is conducted to explore the operational pain points, the best practices to counter them, and how these best practices can be supported by Digital Twins. A questionnaire with 16 propositions is developed, and a panel of 17 experts is asked for their degrees of confirmation for each. The results indicate that today’s terminal operations are far from ideal, and leave space for optimisation. The experts see great potential in analysing the past working shift data to identify the reasons for poor terminal performance. Moreover, they agree on the proposed best practices and support the use of emulation for detailed ad hoc simulation studies to improve operational decision-making. Full article
(This article belongs to the Special Issue IT in Production and Logistics)

27 pages, 4827 KiB  
Article
Machine Learning-Based Crop Yield Prediction in South India: Performance Analysis of Various Models
by Uppugunduri Vijay Nikhil, Athiya M. Pandiyan, S. P. Raja and Zoran Stamenkovic
Computers 2024, 13(6), 137; https://doi.org/10.3390/computers13060137 - 29 May 2024
Cited by 2 | Viewed by 2926
Abstract
Agriculture is one of the most important activities, producing the crops and food crucial for human sustenance. In the present day, agricultural products and crops are not only used for local demand; globalization has allowed us to export produce to and import it from other countries. India is an agricultural nation and depends a lot on its agricultural activities. Prediction of crop production and yield is a necessary activity that allows farmers to estimate storage, optimize resources, increase efficiency and decrease costs. However, farmers usually predict crops based on the region, soil, weather conditions and the crop itself using experience and estimates, which may not be very accurate, especially with the constantly changing and unpredictable climatic conditions of the present day. To solve this problem, we aim to predict the production and yield of various crops such as rice, sorghum, cotton, sugarcane and rabi using Machine Learning (ML) models. We train these models with the weather, soil and crop data to predict future crop production and yields of these crops. We have compiled a dataset of attributes that impact crop production and yield from specific states in India and performed a comprehensive study of the performance of various ML Regression Models in predicting crop production and yield. The results indicated that the Extra Trees Regressor achieved the highest performance among the models examined. It attained an R-squared score of 0.9615 and showed the lowest Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) of 21.06 and 33.99, respectively. Following closely behind are the Random Forest Regressor and LGBM Regressor, achieving R-squared scores of 0.9437 and 0.9398, respectively. Moreover, additional analysis revealed that tree-based models, with an R-squared score of 0.9353, perform better than linear and neighbors-based models, which achieved R-squared scores of 0.8568 and 0.9002, respectively. Full article
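The winning model and all three reported metrics are standard scikit-learn components; a compact scaffold of such an experiment (the CSV file and column names are assumptions) could be:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("crop_data.csv")          # hypothetical compiled dataset
X = df.drop(columns=["yield"])             # weather, soil, and crop features
y = df["yield"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

model = ExtraTreesRegressor(n_estimators=300, random_state=42).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R2  :", r2_score(y_te, pred))
print("MAE :", mean_absolute_error(y_te, pred))
print("RMSE:", np.sqrt(mean_squared_error(y_te, pred)))
```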

44 pages, 4162 KiB  
Review
Object Tracking Using Computer Vision: A Review
by Pushkar Kadam, Gu Fang and Ju Jia Zou
Computers 2024, 13(6), 136; https://doi.org/10.3390/computers13060136 - 28 May 2024
Cited by 2 | Viewed by 5773
Abstract
Object tracking is one of the most important problems in computer vision applications such as robotics, autonomous driving, and pedestrian movement. There has been a significant development in camera hardware where researchers are experimenting with the fusion of different sensors and developing image processing algorithms to track objects. Image processing and deep learning methods have significantly progressed in the last few decades. Different data association methods accompanied by image processing and deep learning are becoming crucial in object tracking tasks. The data requirement for deep learning methods has led to different public datasets that allow researchers to benchmark their methods. While there has been an improvement in object tracking methods, technology, and the availability of annotated object tracking datasets, there is still scope for improvement. This review contributes by systematically identifying different sensor equipment, datasets, methods, and applications, providing a taxonomy of the literature and the strengths and limitations of different approaches, thereby providing guidelines for selecting equipment, methods, and applications. Research questions and future scope to address the unresolved issues in the object tracking field are also presented with research direction guidelines. Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision)

22 pages, 3497 KiB  
Article
Two-Phase Fuzzy Real-Time Approach for Fuzzy Demand Electric Vehicle Routing Problem with Soft Time Windows
by Mohamed A. Wahby Shalaby and Sally S. Kassem
Computers 2024, 13(6), 135; https://doi.org/10.3390/computers13060135 - 27 May 2024
Viewed by 706
Abstract
Environmental concerns have called for several measures to be taken within the logistics and transportation fields. Among these measures is the adoption of electric vehicles instead of diesel-operated vehicles for personal and commercial-delivery use. The optimized routing of electric vehicles for the commercial delivery of products is the focus of this paper. We study the effect of several practical challenges that are faced when routing electric vehicles. Electric vehicle routing faces the additional challenge of the potential need for recharging while en route, leading to more travel time, and hence cost. Therefore, in this work, we address the issue of electric vehicle routing problem, allowing for partial recharging while en route. In addition, the practical mandate of the time windows set by customers is also considered, where electric vehicle routing problems with soft time windows are studied. Real-life experience shows that the delivery of customers’ demands might be uncertain. In addition, real-time traffic conditions are usually uncertain due to congestion. Therefore, in this work, uncertainties in customers’ demands and traffic conditions are modeled and solved using fuzzy methods, addressing the fuzzy real-time, fuzzy demand electric vehicle routing problem with soft time windows. A mixed-integer programming mathematical model to represent the problem is developed. A novel two-phase solution approach is proposed to solve the problem. In phase I, the classical genetic algorithm (GA) is utilized to obtain an optimum/near-optimum solution for the fuzzy demand electric vehicle routing problem with soft time windows (FD-EVRPSTW). In phase II, a novel fuzzy real-time-adaptive optimizer (FRTAO) is developed to overcome the challenges of recharging and real-time traffic conditions facing FD-EVRPSTW. The proposed solution approach is tested on several modified benchmark instances, and the results show the significance of recharging and congestion challenges for routing costs. In addition, the results show the efficiency of the proposed two-phase approach in overcoming the challenges and reducing the total costs. Full article
(This article belongs to the Special Issue Recent Advances in Autonomous Vehicle Solutions)

16 pages, 23626 KiB  
Article
DCTE-LLIE: A Dual Color-and-Texture-Enhancement-Based Method for Low-Light Image Enhancement
by Hua Wang, Jianzhong Cao, Lei Yang and Jijiang Huang
Computers 2024, 13(6), 134; https://doi.org/10.3390/computers13060134 - 27 May 2024
Viewed by 849
Abstract
The enhancement of images captured under low-light conditions plays a vital role in the area of image processing and can significantly affect the performance of subsequent operations. In recent years, deep learning techniques have been leveraged in the area of low-light image enhancement tasks, and deep-learning-based low-light image enhancement methods have been the mainstream for low-light image enhancement tasks. However, due to the inability of existing methods to effectively maintain the color distribution of the original input image and to effectively handle feature descriptions at different scales, the final enhanced image exhibits color distortion and local blurring phenomena. Thus, in this paper, a novel dual color-and-texture-enhancement-based low-light image enhancement method is proposed, which can effectively enhance low-light images. Firstly, a novel color enhancement block is leveraged to help maintain color distribution during the enhancement process, which can further eliminate the color distortion effect; after that, an attention-based multiscale texture enhancement block is proposed to help the network focus on multiscale local regions and extract more reliable texture representations automatically, and a fusion strategy is leveraged to fuse the multiscale feature representations automatically and finally generate the enhanced reflection component. The experimental results on public datasets and real-world low-light images established the effectiveness of the proposed method on low-light image enhancement tasks. Full article

19 pages, 3161 KiB  
Article
Modeling and Analysis of Dekker-Based Mutual Exclusion Algorithms
by Libero Nigro, Franco Cicirelli and Francesco Pupo
Computers 2024, 13(6), 133; https://doi.org/10.3390/computers13060133 - 25 May 2024
Cited by 2 | Viewed by 1323
Abstract
Mutual exclusion is a fundamental problem in concurrent/parallel/distributed systems. The first pure-software solution to this problem for two processes, which is not based on hardware instructions like test-and-set, was proposed in 1965 by Th.J. Dekker and communicated by E.W. Dijkstra. The correctness of this algorithm has generally been studied under the strong memory model, where the read and write operations on a memory cell are atomic or indivisible. In recent years, some variants of the algorithm have been proposed to make it RW-safe when using the weak memory model, which makes it possible, e.g., for multiple read operations to occur simultaneously with a write operation on the same variable, with the read operations returning (flickering) a non-deterministic value. This paper proposes a novel approach to formal modeling and reasoning on a mutual exclusion algorithm using Timed Automata and the Uppaal tool, and it applies this approach through exhaustive model checking to conduct a thorough analysis of Dekker’s algorithm and some of its variants proposed in the literature. This paper aims to demonstrate that model checking, although necessarily limited in the scalability of the number N of processes due to the state explosion problem, is effective yet powerful for reasoning on concurrency and process action interleaving, and it can provide significant results about the correctness and robustness of the basic version and variants of Dekker’s algorithm under both the strong and weak memory models. In addition, the properties of these algorithms are also carefully studied in the context of a tournament-based binary tree for N > 2 processes. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
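For reference, the two-process algorithm under analysis runs as follows (Python threads stand in for the processes; note that CPython's GIL effectively provides the strong memory model the basic version assumes, so the weak-memory flickering the variants address does not show up here):

```python
import threading

wants = [False, False]   # wants[i]: process i intends to enter
turn = 0                 # whose turn it is to insist on entering
counter = 0              # shared resource touched inside the critical section

def dekker(i, n_iters=10_000):
    global turn, counter
    j = 1 - i
    for _ in range(n_iters):
        wants[i] = True
        while wants[j]:                  # contention: the other also wants in
            if turn == j:                # not our turn -> back off and wait
                wants[i] = False
                while turn == j:
                    pass
                wants[i] = True
        counter += 1                     # critical section
        turn = j                         # hand over priority
        wants[i] = False                 # exit protocol

threads = [threading.Thread(target=dekker, args=(i,)) for i in (0, 1)]
[t.start() for t in threads]
[t.join() for t in threads]
print(counter)   # 20000 when mutual exclusion holds
```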

26 pages, 2756 KiB  
Article
A Blockchain-Based Electronic Health Record (EHR) System for Edge Computing Enhancing Security and Cost Efficiency
by Valerio Mandarino, Giuseppe Pappalardo and Emiliano Tramontana
Computers 2024, 13(6), 132; https://doi.org/10.3390/computers13060132 - 24 May 2024
Cited by 4 | Viewed by 2246
Abstract
Blockchain technology offers unique features, such as transparency, the immutability of data, and the capacity to establish trust without a central authority. Such characteristics can be leveraged to support the collaboration among several different software systems operating within the healthcare ecosystem, while ensuring data integrity and making electronic health records (EHRs) more easily accessible. To provide a solution based on blockchain technology, this paper has evaluated the main issues that arise when large amounts of data are expected, i.e., mainly cost and performance. A balanced approach that maximizes the benefits and mitigates the constraints of the blockchain has been designed. The proposed decentralized application (dApp) architecture employs a hybrid storage strategy that involves storing medical records locally, on users’ devices, while utilizing blockchain to manage an index of these data. The dApp clients facilitate interactions among participants, leveraging a smart contract to enable patients to set authorization policies, thereby ensuring that only designated healthcare providers and authorized entities have access to specific medical records. The blockchain data-immutability property is used to validate data stored externally. This solution significantly reduces the costs related to the utilization of the blockchain, while retaining its advantages, and improves performance, since the majority of data are available off-chain. Full article
(This article belongs to the Special Issue When Blockchain Meets IoT: Challenges and Potentials)
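The hybrid pattern, keeping the record off-chain and anchoring only a content hash plus an authorization policy on-chain, is what cuts cost while preserving integrity checking. A minimal sketch with the on-chain index mocked as a dictionary (no real blockchain client or smart-contract code):

```python
import hashlib, json

onchain_index = {}   # stands in for the smart contract's storage

def register_record(patient, record_bytes, authorized):
    """Anchor only the record's digest and the authorization policy 'on-chain'."""
    digest = hashlib.sha256(record_bytes).hexdigest()
    onchain_index[(patient, digest)] = {"authorized": set(authorized)}
    return digest

def verify_record(patient, record_bytes, reader):
    """An off-chain record is valid iff its digest is anchored and reader is allowed."""
    digest = hashlib.sha256(record_bytes).hexdigest()
    entry = onchain_index.get((patient, digest))
    return entry is not None and reader in entry["authorized"]

rec = json.dumps({"diagnosis": "example"}).encode()
register_record("patient-1", rec, {"dr-lopez"})
print(verify_record("patient-1", rec, "dr-lopez"))         # True
print(verify_record("patient-1", rec + b"x", "dr-lopez"))  # False: tampered copy
```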

31 pages, 2472 KiB  
Article
A Step-by-Step Methodology for Obtaining the Reliability of Building Microgrids Using Fault Tree Analysis
by Gustavo A. Patiño-Álvarez, Johan S. Arias-Pérez and Nicolás Muñoz-Galeano
Computers 2024, 13(6), 131; https://doi.org/10.3390/computers13060131 - 24 May 2024
Viewed by 838
Abstract
This paper introduces an improved methodology designed to address a practical deficit of existing methodologies by incorporating circuit-level analysis in the assessment of building microgrid reliability. The scientific problem at hand involves devising a systematic approach that integrates circuit modeling, Probability Density Function (PDF) selection, formulation of reliability functions, and Fault Tree Analysis (FTA) tailored specifically for the distinctive features of building microgrids. This method entails analyzing inter-component relationships to gain comprehensive insights into system behavior. By harnessing the circuit models and theoretical framework proposed herein, precise estimations of microgrid failure rates can be attained. To complement this approach, we propose a thorough investigation utilizing reliability curves and importance measures, providing valuable insights into individual device failure probabilities over time. Such time-based analysis plays a crucial role in proactively identifying potential failures and facilitating efficient maintenance planning for microgrid devices. We demonstrate the application of this methodology to the University of Antioquia (UdeA) Microgrid, a low-voltage system comprising critical components such as solar panels, microinverters, inverters/chargers, batteries, and charge controllers. Full article
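At the core of FTA is the combination of component failure probabilities through gates: for independent inputs, an AND gate multiplies failure probabilities, while an OR gate complements the product of the survival probabilities; with an exponential PDF, component reliability is R(t) = exp(-lambda * t). A sketch with invented failure rates:

```python
import math

def reliability(lam, t):
    """Exponential-PDF component reliability R(t) = exp(-lambda * t)."""
    return math.exp(-lam * t)

def and_gate(failure_probs):           # top event needs all inputs to fail
    p = 1.0
    for q in failure_probs:
        p *= q
    return p

def or_gate(failure_probs):            # any single input failing suffices
    s = 1.0
    for q in failure_probs:
        s *= (1.0 - q)
    return 1.0 - s

# Illustrative microgrid top event: (panel OR microinverter fails) AND battery fails.
t = 8760.0                              # one year, in hours
f_panel = 1 - reliability(2e-6, t)      # failure rates below are made up
f_micro = 1 - reliability(5e-6, t)
f_batt = 1 - reliability(1e-5, t)
print(and_gate([or_gate([f_panel, f_micro]), f_batt]))
```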

23 pages, 519 KiB  
Article
Exploiting Anytime Algorithms for Collaborative Service Execution in Edge Computing
by Luís Nogueira, Jorge Coelho and David Pereira
Computers 2024, 13(6), 130; https://doi.org/10.3390/computers13060130 - 23 May 2024
Viewed by 680
Abstract
The diversity and scarcity of resources across devices in heterogeneous computing environments can impact their ability to meet users’ quality-of-service (QoS) requirements, especially in open real-time environments where computational loads are unpredictable. Despite this uncertainty, timely responses to events remain essential to ensure desired performance levels. To address this challenge, this paper introduces collaborative service execution, enabling resource-constrained IoT devices to collaboratively execute services with more powerful neighbors at the edge, thus meeting non-functional requirements that might be unattainable through individual execution. Nodes dynamically form clusters, allocating resources to each service and establishing initial configurations that maximize QoS satisfaction while minimizing global QoS impact. However, the complexity of open real-time environments may hinder the computation of optimal local and global resource allocations within reasonable timeframes. Thus, we reformulate the QoS optimization problem as a heuristic-based anytime optimization problem, capable of being interrupted and of quickly adapting to environmental changes. Extensive simulations demonstrate that our anytime algorithms rapidly yield satisfactory initial service solutions and effectively optimize the solution quality over iterations, with negligible overhead compared to the benefits gained. Full article
(This article belongs to the Special Issue Intelligent Edge: When AI Meets Edge Computing)
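The anytime pattern described here is illustrated by the minimal Python sketch below: an optimizer that can be stopped at any point and always returns the best solution found so far. The QoS objective and the local-search move are placeholder stand-ins, not the paper's heuristics.

```python
import random
import time

def qos_score(allocation):
    """Placeholder QoS objective: prefer balanced resource allocations.
    The paper instead maximizes local and global QoS satisfaction."""
    mean = sum(allocation) / len(allocation)
    return -sum((x - mean) ** 2 for x in allocation)

def anytime_optimize(initial, budget_s=0.05):
    """Heuristic anytime optimizer: improves a solution until the time
    budget expires, always keeping the best solution found so far."""
    best, best_score = initial[:], qos_score(initial)
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        cand = best[:]
        i, j = random.sample(range(len(cand)), 2)
        delta = random.uniform(0.0, cand[i])
        cand[i] -= delta          # move a resource share between services
        cand[j] += delta
        score = qos_score(cand)
        if score > best_score:    # keep only improving moves
            best, best_score = cand, score
    return best, best_score

# Usage: start from an arbitrary allocation of 100 resource units to 4 services.
allocation, score = anytime_optimize([70.0, 10.0, 15.0, 5.0])
print(allocation, score)
```

The key property is that interrupting the loop earlier only degrades solution quality gracefully, which is what makes the approach viable under unpredictable loads.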
14 pages, 968 KiB  
Article
Robust Algorithms for the Analysis of Fast-Field-Cycling Nuclear Magnetic Resonance Dispersion Curves
by Villiam Bortolotti, Pellegrino Conte, Germana Landi, Paolo Lo Meo, Anastasiia Nagmutdinova, Giovanni Vito Spinelli and Fabiana Zama
Computers 2024, 13(6), 129; https://doi.org/10.3390/computers13060129 - 23 May 2024
Viewed by 808
Abstract
Fast-Field-Cycling (FFC) Nuclear Magnetic Resonance (NMR) relaxometry is a powerful, non-destructive magnetic resonance technique that enables, among other things, the investigation of slow molecular dynamics at low magnetic field intensities. FFC-NMR relaxometry measurements provide insight into molecular motion across various timescales within a single experiment. This study focuses on a model-free approach, representing the NMRD profile R1 as a linear combination of Lorentzian functions, thereby addressing the challenges of fitting data within an ill-conditioned linear least-squares framework. To tackle this problem, we present a comprehensive review and experimental validation of three regularization approaches that implement the model-free analysis of NMRD profiles: (1) MF-UPen, utilizing locally adapted L2 regularization; (2) MF-L1, based on L1 penalties; and (3) MF-MUPen, a hybrid approach combining locally adapted L2 and global L1 penalties. Each method’s regularization parameters are determined automatically according to the Balancing and Uniform Penalty principles. Our contributions include the implementation and experimental validation of the MF-UPen and MF-MUPen algorithms, and the development of a “dispersion analysis” technique to assess the existence range of the estimated parameters. The objective of this work is to delineate the variance in fit quality and correlation time distribution yielded by each algorithm, thus broadening the set of software tools for the analysis of sample structures in FFC-NMR studies. The findings underline the efficacy and applicability of these algorithms in the analysis of NMRD profiles from samples representing different potential scenarios. Full article
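As a rough illustration of the model-free idea, the sketch below fits a synthetic NMRD profile as a non-negative combination of Lorentzian basis functions under a plain global L2 (Tikhonov) penalty. The correlation-time grid, penalty weight, and data are hypothetical, and the locally adapted and L1 penalties with automatic parameter selection used by MF-UPen, MF-L1, and MF-MUPen are deliberately not reproduced.

```python
import numpy as np
from scipy.optimize import nnls

# Correlation-time grid for the Lorentzian basis (hypothetical range, seconds).
tau = np.logspace(-9, -3, 60)

def lorentzian_basis(omega, tau):
    """Column j holds the Lorentzian tau_j / (1 + (omega * tau_j)^2)."""
    return tau[None, :] / (1.0 + (omega[:, None] * tau[None, :]) ** 2)

# Synthetic NMRD profile R1(omega) for demonstration; real data would come
# from an FFC-NMR relaxometer.
omega = 2 * np.pi * np.logspace(4, 7, 40)     # angular frequencies
K = lorentzian_basis(omega, tau)
c_true = np.zeros_like(tau)
c_true[[15, 40]] = [2e8, 5e7]                 # two hypothetical motion modes
r1 = K @ c_true + 0.01 * np.random.default_rng(0).normal(size=omega.size)

# Tikhonov-regularized non-negative least squares, solved by stacking
# sqrt(alpha) * I under the kernel matrix K.
alpha = 1e-4                                  # fixed here; the paper selects
I = np.sqrt(alpha) * np.eye(tau.size)         # penalty weights automatically
K_aug = np.vstack([K, I])
y_aug = np.concatenate([r1, np.zeros(tau.size)])
c_hat, _ = nnls(K_aug, y_aug)

print("recovered nonzero modes:", np.flatnonzero(c_hat > 0.05 * c_hat.max()))
```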
27 pages, 2040 KiB  
Article
Machine Learning Decision System on the Empirical Analysis of the Actual Usage of Interactive Entertainment: A Perspective of Sustainable Innovative Technology
by Rex Revian A. Guste and Ardvin Kester S. Ong
Computers 2024, 13(6), 128; https://doi.org/10.3390/computers13060128 - 23 May 2024
Cited by 1 | Viewed by 1534
Abstract
This study examined the impact of Netflix’s interactive entertainment on Filipino consumers, combining perspectives from consumer behavior with data analytics. It underlines the transformative role of interactive entertainment in the rapidly expanding digital media ecosystem, particularly as Netflix pioneers new content distribution techniques. The main objective was to identify the factors affecting the actual usage of Netflix’s interactive entertainment among Filipino viewers, filling a critical gap in the existing literature. Advanced data analytics techniques were applied to understand the subtle dynamics shaping customer behavior in this setting. Specifically, random forest classifiers with both hard and soft voting were assessed, the random forest was compared against LightGBM, and several artificial neural network architectures were evaluated. Purposive sampling was used to obtain responses from 258 people who had experienced Netflix’s interactive entertainment, resulting in a comprehensive dataset. The findings emphasize the importance of hedonic motivation, underlining the need for highly engaging and rewarding interactive material. Factors such as customer service and device compatibility also have a significant impact on user uptake. Furthermore, behavioral intention and habit emerged as key drivers, revealing interactive entertainment’s long-term influence on user engagement. Practically, the research recommends platform strategies that emphasize continuous innovation, user-friendly interfaces, and user-centric methods. The study fills a gap in the literature on interactive entertainment, contributing to a better understanding of consumer consumption and laying the groundwork for future research in the dynamic field of digital media. It also offers insights into the intricate interaction of consumer preferences, technological breakthroughs, and societal influences in the ever-expanding digital entertainment environment. Lastly, the comparative approach to machine learning algorithms provides guidance for future work on human factors and consumer behavior. Full article
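A minimal sketch of the hard- versus soft-voting comparison is given below, using synthetic data as a stand-in for the 258 survey responses. The sample generation and model settings are illustrative assumptions, and the LightGBM and neural network comparisons would plug in analogously.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 258 survey responses; the real features are
# behavioral constructs such as hedonic motivation and habit.
X, y = make_classification(n_samples=258, n_features=8, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
lr = LogisticRegression(max_iter=1000)

# Hard voting takes a majority of predicted labels; soft voting averages
# predicted class probabilities across the base classifiers.
for voting in ("hard", "soft"):
    ens = VotingClassifier([("rf", rf), ("lr", lr)], voting=voting)
    acc = cross_val_score(ens, X, y, cv=5).mean()
    print(f"{voting}-voting ensemble accuracy: {acc:.3f}")

# A LightGBM baseline could be compared analogously via
# lightgbm.LGBMClassifier if the library is installed.
```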
24 pages, 5079 KiB  
Article
Machine Learning for Predicting Key Factors to Identify Misinformation in Football Transfer News
by Ife Runsewe, Majid Latifi, Mominul Ahsan and Julfikar Haider
Computers 2024, 13(6), 127; https://doi.org/10.3390/computers13060127 - 23 May 2024
Cited by 3 | Viewed by 1166
Abstract
The spread of misinformation in football transfer news has become a growing concern. To address this challenge, this study introduces a novel approach by employing ensemble learning techniques to identify key factors for predicting such misinformation. The performance of three ensemble learning models, namely Random Forest, AdaBoost, and XGBoost, was analyzed on a dataset of transfer rumors. Natural language processing (NLP) techniques were employed to extract structured data from the text, and the veracity of each rumor was verified using factual transfer data. The study also investigated the relationships between specific features and rumor veracity. Key predictive features such as a player’s market value, age, and timing of the transfer window were identified. The Random Forest model outperformed the other two models, achieving a cross-validated accuracy of 95.54%. The top features identified by the model were a player’s market value, time to the start/end of the transfer window, and age. The study revealed weak negative relationships between a player’s age, time to the start/end of the transfer window, and rumor veracity, suggesting that for older players and times further from the transfer window, rumors are slightly less likely to be true. In contrast, a player’s market value did not have a statistically significant relationship with rumor veracity. This study contributes to the existing knowledge of misinformation detection and ensemble learning techniques. Despite some limitations, this study has significant implications for media agencies, football clubs, and fans. By discerning the credibility of transfer news, stakeholders can make informed decisions, reduce the spread of misinformation, and foster a more transparent transfer market. Full article
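The sketch below illustrates the general workflow of cross-validating a random forest and inspecting its feature importances. The data are synthetic, with labels generated to mimic the weak negative age and timing relationships reported above; it is not the paper's dataset or pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 1000
# Hypothetical features mirroring those in the study: market value (millions),
# player age, and days to the start/end of the transfer window.
X = np.column_stack([
    rng.lognormal(2.5, 1.0, n),     # market value
    rng.integers(17, 38, n),        # age
    rng.integers(0, 120, n),        # days to transfer window
])

# Synthetic labels with weak negative age and timing effects, echoing the
# abstract's finding that rumors about older players further from the
# window are slightly less likely to be true.
logits = 0.8 - 0.03 * (X[:, 1] - 27) - 0.005 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
print("cv accuracy:", cross_val_score(rf, X, y, cv=5).mean().round(3))
print("importances (value, age, timing):", rf.feature_importances_.round(3))
```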
17 pages, 3025 KiB  
Article
An Improved Ensemble-Based Cardiovascular Disease Detection System with Chi-Square Feature Selection
by Ayad E. Korial, Ivan Isho Gorial and Amjad J. Humaidi
Computers 2024, 13(6), 126; https://doi.org/10.3390/computers13060126 - 22 May 2024
Cited by 6 | Viewed by 1603
Abstract
Cardiovascular disease (CVD) is a leading cause of death globally; therefore, early detection of CVD is crucial. Many intelligent technologies, including deep learning and machine learning (ML), are being integrated into healthcare systems for disease prediction. This paper uses a voting ensemble ML approach with chi-square feature selection to detect CVD early. Our approach involved applying multiple ML classifiers, including naïve Bayes, random forest, logistic regression (LR), and k-nearest neighbor. These classifiers were evaluated through metrics including accuracy, specificity, sensitivity, F1-score, confusion matrix, and area under the curve (AUC). We created an ensemble model by combining predictions from the different ML classifiers through a voting mechanism, and its performance was then measured against the individual classifiers. Furthermore, we applied the chi-square feature selection method to the 303 records across 13 clinical features in the Cleveland cardiac disease dataset to identify the 5 most important features. This approach improved the overall accuracy of our ensemble model and reduced the computational load by more than 50%. Demonstrating superior effectiveness, our voting ensemble model achieved an accuracy of 92.11%, representing an average improvement of 2.95% over the single best classifier (LR). These results indicate that the ensemble method is a viable and practical approach to improving the accuracy of CVD prediction. Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain 2024)
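A minimal sketch of this pipeline, assuming scikit-learn and a synthetic stand-in for the Cleveland dataset, is shown below: chi-square selection of the 5 best features feeding a soft-voting ensemble of the four classifiers named in the abstract. All parameters are illustrative, not the paper's.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in: 303 records with 13 features, like the Cleveland dataset.
X, y = make_classification(n_samples=303, n_features=13, n_informative=5,
                           random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("nb", GaussianNB()),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier()),
    ],
    voting="soft",  # soft voting averages predicted class probabilities
)

# chi2 requires non-negative inputs, hence the MinMaxScaler before selection;
# SelectKBest keeps the 5 features with the highest chi-square scores.
pipe = make_pipeline(MinMaxScaler(), SelectKBest(chi2, k=5), ensemble)
print("cv accuracy:", cross_val_score(pipe, X, y, cv=5).mean().round(3))
```

Selecting 5 of 13 features before fitting is also what drives the reduction in computational load the abstract reports: every classifier in the ensemble trains on a much smaller input.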