Computers, Volume 13, Issue 8 (August 2024) – 31 articles

Cover Story: As the global deployment of Fifth-Generation (5G) technology continues, the exploration of potential Sixth-Generation (6G) wireless networks has begun. Sixth-Generation technology aims to surpass 5G by providing a ubiquitous communication experience through integrating diverse Radio Access Networks (RANs) and fixed-access networks, creating a “network of networks.” This paper proposes a novel user plane protocol architecture called 6G Recursive User Plane Architecture (6G-RUPA). It focuses on scalability, flexibility, and energy efficiency, improving network federation, protocol overhead, and mobility management mechanisms, among other elements. 6G-RUPA enhances overall performance and sustainability, bringing mobile networks one step closer to the “network of networks” vision.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
31 pages, 13095 KiB  
Article
Self-Adaptive Evolutionary Info Variational Autoencoder
by Toby A. Emm and Yu Zhang
Computers 2024, 13(8), 214; https://doi.org/10.3390/computers13080214 - 22 Aug 2024
Viewed by 869
Abstract
With the advent of increasingly powerful machine learning algorithms and the ability to rapidly obtain accurate aerodynamic performance data, there has been a steady rise in the use of algorithms for automated aerodynamic design optimisation. However, long training times, high-dimensional design spaces and rapid geometry alteration pose barriers to this becoming an efficient and worthwhile process. The variational autoencoder (VAE) is a probabilistic generative model capable of learning a low-dimensional representation of high-dimensional input data. Despite their impressive power, VAEs suffer from several issues, resulting in poor model performance and limiting optimisation capability. Several approaches have been proposed in attempts to fix these issues. This study combines the approaches of loss function modification with evolutionary hyperparameter tuning, introducing a new self-adaptive evolutionary info variational autoencoder (SA-eInfoVAE). The proposed model is validated against previous models on the MNIST handwritten digits dataset, assessing the total model performance. The proposed model is then applied to an aircraft image dataset to assess the applicability and complications involved with complex datasets such as those used for aerodynamic design optimisation. The results obtained on the MNIST dataset show improved inference in conjunction with increased generative and reconstructive performance. This is validated through a thorough comparison against baseline models, including the quantitative metrics of reconstruction error, loss function calculation and disentanglement percentage. A number of qualitative image plots provide further comparison of the generative and reconstructive performance, as well as the strength of latent encodings. Furthermore, the results on the aircraft image dataset show the proposed model can produce high-quality reconstructions and latent encodings. The analysis suggests that, given a high-quality dataset and an optimal network structure, the proposed model is capable of outperforming the current VAE models, reducing the training time cost and improving the quality of automated aerodynamic design optimisation. Full article
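For readers unfamiliar with the InfoVAE family, the sketch below shows the general shape of an InfoVAE-style objective: a reconstruction term, a KL term, and a maximum mean discrepancy (MMD) term that matches the aggregate posterior to the prior. It is a generic PyTorch illustration, not the authors' SA-eInfoVAE; the weighting values and the evolutionary hyperparameter tuning described in the abstract are not reproduced.

```python
import torch
import torch.nn.functional as F

def rbf_mmd(x, y, sigma=1.0):
    """Maximum mean discrepancy between two batches of latent codes (RBF kernel)."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def info_vae_loss(x, x_hat, mu, log_var, z, alpha=0.0, lam=10.0):
    """Reconstruction + (1 - alpha) * KL + (alpha + lam - 1) * MMD, the InfoVAE form."""
    recon = F.mse_loss(x_hat, x)
    kl = -0.5 * torch.mean(1 + log_var - mu.pow(2) - log_var.exp())
    mmd = rbf_mmd(z, torch.randn_like(z))  # compare q(z) against the N(0, I) prior
    return recon + (1 - alpha) * kl + (alpha + lam - 1) * mmd
```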

29 pages, 742 KiB  
Review
Forensic Investigation, Challenges, and Issues of Cloud Data: A Systematic Literature Review
by Munirah Maher Alshabibi, Alanood Khaled Bu dookhi and M. M. Hafizur Rahman
Computers 2024, 13(8), 213; https://doi.org/10.3390/computers13080213 - 22 Aug 2024
Viewed by 5512
Abstract
Cloud computing technology delivers services, resources, and computer systems over the internet, enabling the easy modification of resources. Each field has its challenges, and the challenges of data transfer in the cloud pose unique obstacles for forensic analysts, making it necessary for them to investigate and adapt to the evolving landscape of cloud computing. This is where cloud forensics emerges as a critical component. Cloud forensics, a specialized field within digital forensics, focuses on uncovering evidence of exploitation, conducting thorough investigations, and presenting findings to law enforcement for legal action against perpetrators. This paper examines the primary challenges encountered in cloud forensics, reviews the relevant literature, and analyzes the strategies implemented to address these obstacles. Full article

22 pages, 1588 KiB  
Article
Investigating the Challenges and Opportunities in Persian Language Information Retrieval through Standardized Data Collections and Deep Learning
by Sara Moniri, Tobias Schlosser and Danny Kowerko
Computers 2024, 13(8), 212; https://doi.org/10.3390/computers13080212 - 21 Aug 2024
Viewed by 1200
Abstract
The Persian language, also known as Farsi, is distinguished by its intricate morphological richness, yet it contends with a paucity of linguistic resources. With an estimated 110 million speakers, it finds prevalence across Iran, Tajikistan, Uzbekistan, Iraq, Russia, Azerbaijan, and Afghanistan. However, despite its widespread usage, scholarly investigations into Persian document retrieval remain notably scarce. This circumstance is primarily attributed to the absence of standardized test collections, which impedes the advancement of comprehensive research endeavors within this realm. As data corpora are the foundation of natural language processing applications, this work aims at Persian language datasets to address their availability and structure. Subsequently, we motivate a learning-based framework for the processing of Persian texts and their recognition, for which current state-of-the-art approaches from deep learning, such as deep neural networks, are further discussed. Our investigations highlight the challenges of realizing such a system while emphasizing its possible benefits for an otherwise rarely covered language. Full article
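As background for the retrieval side of this work, a minimal lexical baseline (TF-IDF with cosine similarity) is sketched below. The toy documents and query are placeholders, not the paper's test collection, and the deep learning models the authors actually discuss are not shown.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy Persian documents and query (illustrative only)
docs = ["بازیابی اطلاعات فارسی", "یادگیری عمیق برای پردازش متن", "این یک سند نمونه است"]
query = ["بازیابی اطلاعات"]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)
scores = cosine_similarity(vectorizer.transform(query), doc_vectors).ravel()
ranking = scores.argsort()[::-1]  # indices of the best-matching documents first
print(ranking, scores[ranking])
```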

24 pages, 22050 KiB  
Article
SOD: A Corpus for Saudi Offensive Language Detection Classification
by Afefa Asiri and Mostafa Saleh
Computers 2024, 13(8), 211; https://doi.org/10.3390/computers13080211 - 20 Aug 2024
Viewed by 696
Abstract
Social media platforms like X (formerly known as Twitter) are integral to modern communication, enabling the sharing of news, emotions, and ideas. However, they also facilitate the spread of harmful content, and manual moderation of these platforms is impractical. Automated moderation tools, predominantly developed for English, are insufficient for addressing online offensive language in Arabic, a language rich in dialects and informally used on social media. This gap underscores the need for dedicated, dialect-specific resources. This study introduces the Saudi Offensive Dialectal dataset (SOD), consisting of over 24,000 tweets annotated across three levels: offensive or non-offensive, with offensive tweets further categorized as general insults, hate speech, or sarcasm. A deeper analysis of hate speech identifies subtypes related to sports, religion, politics, race, and violence. A comprehensive descriptive analysis of the SOD is also provided to offer deeper insights into its composition. Using machine learning, traditional deep learning, and transformer-based deep learning models, particularly AraBERT, our research achieves a significant F1-Score of 87% in identifying offensive language. This score improves to 91% with data augmentation techniques addressing dataset imbalances. These results, which surpass many existing studies, demonstrate that a specialized dialectal dataset enhances detection efficacy compared to mixed-language datasets. Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
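To make the transformer-based part of the pipeline concrete, here is a generic fine-tuning step for a BERT-style Arabic classifier using Hugging Face Transformers. The checkpoint name, label scheme, and sample tweet are assumptions for illustration, not details taken from the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "aubmindlab/bert-base-arabertv2"  # assumed AraBERT checkpoint, not confirmed by the paper
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

batch = tokenizer(["مثال تغريدة"], padding=True, truncation=True, return_tensors="pt")
labels = torch.tensor([0])                 # 0 = non-offensive, 1 = offensive (illustrative labels)
loss = model(**batch, labels=labels).loss  # loss for a single fine-tuning step
loss.backward()
```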

29 pages, 1108 KiB  
Article
Improved Hybrid Bagging Resampling Framework for Deep Learning-Based Side-Channel Analysis
by Faisal Hameed, Sumesh Manjunath Ramesh and Hoda Alkhzaimi
Computers 2024, 13(8), 210; https://doi.org/10.3390/computers13080210 - 20 Aug 2024
Viewed by 617
Abstract
As cryptographic implementations leak secret information through side-channel emissions, the Hamming weight (HW) leakage model is widely used in deep learning profiling side-channel analysis (SCA) attacks to expose the leaked model. However, imbalanced datasets often arise from the HW leakage model, increasing the attack complexity and limiting the performance of deep learning-based SCA attacks. Effective management of class imbalance is vital for training deep neural network models to achieve optimized and improved performance results. Recent works focus on either improved deep-learning methodologies or data augmentation techniques. In this work, we propose the hybrid bagging resampling framework, a two-pronged strategy for tackling class imbalance in side-channel datasets, consisting of data augmentation and ensemble learning. We show that adopting this framework can boost attack performance results in a practical setup. From our experimental results, the SMOTEENN ensemble achieved the best performance in the ASCAD dataset, and the basic ensemble performed the best in the CHES dataset, with both contributing over 70% practical improvements in performance compared to the original imbalanced dataset, and accelerating practical attack space in comparison to the classical setup of the attack. Full article
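The two-pronged idea (resample, then train an ensemble) can be pictured with off-the-shelf components; the sketch below uses imbalanced-learn's SMOTEENN and a scikit-learn bagging ensemble on synthetic stand-in data. It is not the authors' pipeline: the traces, labels, and classifier choice here are placeholders.

```python
import numpy as np
from imblearn.combine import SMOTEENN
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 100))      # stand-in for profiled side-channel traces
y = rng.binomial(4, 0.5, size=2000)   # toy imbalanced labels mimicking a Hamming-weight distribution

X_res, y_res = SMOTEENN(random_state=0).fit_resample(X, y)   # oversample, then clean noisy samples
clf = BaggingClassifier(MLPClassifier(hidden_layer_sizes=(64,), max_iter=150),
                        n_estimators=5, random_state=0)
clf.fit(X_res, y_res)
```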

13 pages, 2278 KiB  
Article
Developing and Testing a Portable Soil Nutrient Detector in Irrigated and Rainfed Paddy Soils from Java, Indonesia
by Yiyi Sulaeman, Eko Sutanto, Antonius Kasno, Nandang Sunandar and Runik D. Purwaningrahayu
Computers 2024, 13(8), 209; https://doi.org/10.3390/computers13080209 - 20 Aug 2024
Viewed by 895
Abstract
Data on the soil nutrient content are required to calculate fertilizer rate recommendations. The soil laboratory determines these soil properties, yet the measurement is time-consuming and costly. Meanwhile, portable devices to assess the soil nutrient content in real-time are limited. However, a proprietary and low-cost NPK sensor is available and commonly used in IoT for agriculture. This study aimed to assemble and test a portable, NPK sensor-based device in irrigated and rainfed paddy soils from Java, Indonesia. The device development followed a prototyping approach. The device building included market surveys and opted for an inexpensive, light, and compact soil sensor, power storage, monitor, and wire connectors. Arduino programming language was used to write scripts for data display and sub-device communication. The result is a real-time, portable soil nutrient detector that measures the soil temperature, moisture, pH, electrical conductivity, and N, P, and K contents. Field tests show that the device is sensitive to soil properties and location. The portable soil nutrient detector may be an alternative tool for the real-time measurement of soil nutrients in paddy fields in Indonesia. Full article

26 pages, 3806 KiB  
Article
Proposed Supercluster-Based UMBBFS Routing Protocol for Emergency Message Dissemination in Edge-RSU for 5G VANET
by Maath A. Albeyar, Ikram Smaoui, Hassene Mnif and Sameer Alani
Computers 2024, 13(8), 208; https://doi.org/10.3390/computers13080208 - 19 Aug 2024
Viewed by 633
Abstract
Vehicular ad hoc networks (VANETs) can bolster road safety through the proactive dissemination of emergency messages (EMs) among vehicles, effectively reducing the occurrence of traffic-related accidents. It is difficult to transmit EMs quickly and reliably due to the high-speed mobility of VANET and the attenuation of the wireless signal. However, poor network design and high vehicle mobility are the two most difficult problems that affect VANET’s network performance. The real-time traffic situation and network dependability will also be significantly impacted by route selection and message delivery. Many current works have focused on forwarder selection and message transmission to address these problems. However, these earlier approaches, while effective in forwarder selection and routing, have overlooked the critical aspects of communication overhead and excessive energy consumption, resulting in transmission delays. To address the prevailing challenges, the proposed solution uses edge computing to process and analyze data locally from surrounding cars and infrastructure. EDGE-RSUs are positioned by the side of the road. In intelligent transportation systems, this lowers latency and enhances real-time decision-making by employing proficient forwarder selection techniques and optimizing the dissemination of EMs. In the context of 5G-enabled VANET, this paper introduces a novel routing protocol, namely, the supercluster-based urban multi-hop broadcast and best forwarder selection protocol (UMB-BFS). The improved twin delay deep deterministic policy gradient (IT3DPG) method is used to select the target region for emergency message distribution after route selection. Clustering is conducted using modified density peak clustering (MDPC). Improved firefly optimization (IFO) is used for optimal path selection. In this way, all emergency messages are quickly disseminated in multiple directions, which also helps manage traffic in the VANET. Finally, we plotted graphs for the following metrics: throughput (3.9 kbps), end-to-end delay (70), coverage (90%), packet delivery ratio (98%), packet received (12.75 k), and transmission delay (57 ms). Our approach’s performance is examined using numerical analysis, demonstrating that it performs better than the current methodologies across all measures. Full article

17 pages, 2893 KiB  
Article
Student Teachers’ Perceptions of a Game-Based Exam in the Genial.ly App
by Elina Gravelsina and Linda Daniela
Computers 2024, 13(8), 207; https://doi.org/10.3390/computers13080207 - 19 Aug 2024
Viewed by 934
Abstract
This research examines student teachers’ perceptions of a game-based exam conducted in the Genial.ly app in the study course ”Legal Aspects of the Pedagogical Process”. This study aims to find out the pros and cons of game-based exams and understand which digital solutions can enable the development and analysis of digital game data. At the beginning of the course, students were introduced to the research and asked to provide feedback throughout the course on what they saw as the most important aspects of each class and insights on how they envisioned the game-based exam could proceed. The game-based exam was built using the digital platform Genial.ly after its update, which provided the possibility to include open-ended questions and collect data for analyses. It was designed with a narrative in which a new teacher comes to a school and is asked for help in different situations. After reading a description of each situation, the students answered questions about how they would resolve them based on Latvia’s regulations. After the exam, students wrote feedback indicating that the game-based exam helped them visualize the situations presented, resulting in lower stress levels compared to a traditional exam. This research was structured based on design-based principles and the data were analyzed from the perspective of how educators can use freely available solutions to develop game-based exams to test students’ knowledge gained during a course. The results show that Genial.ly can be used as an examination tool, as indicated by positive student teachers’ responses. However, this research has limitations as it was conducted with only one test group due to administrative reasons. Future research could address this by including multiple groups within the same course as well as testing game-based exams in other subject courses for comparison. Full article
(This article belongs to the Special Issue Smart Learning Environments)

21 pages, 3069 KiB  
Article
Caputo Fabrizio Bézier Curve with Fractional and Shape Parameters
by Muhammad Awais, Syed Khawar Nadeem Kirmani, Maheen Rana and Raheel Ahmad
Computers 2024, 13(8), 206; https://doi.org/10.3390/computers13080206 - 19 Aug 2024
Viewed by 578
Abstract
In recent research in computer-aided geometric design (CAGD), one of the most popular concerns has been the creation of new basis functions for the Bézier curve. Bézier curves with high degrees often overshoot, which makes it challenging to maintain control over the ideal length of the curved trajectory. To get around this restriction, free-form surfaces and curves can be created using the Caputo Fabrizio basis function. In this study, the Caputo Fabrizio fractional order derivative is used to construct the Caputo Fabrizio basis function, which contains fractional parameter and shape parameters. The Caputo Fabrizio Bézier curve and surface are defined using the Caputo Fabrizio basis function, and their geometric properties are examined. Using fractional and shape parameters in the implementation of the Caputo Fabrizio basis function offers an alternative perspective on how the Caputo Fabrizio basis function can construct complicated curves and surfaces beyond a limited formulation. Curves and surfaces can have additional shape and length control by adjusting a number of fractional and shape parameters without affecting their control points. The Caputo Fabrizio Bézier curve’s flexibility and versatility make it more effective in creating complex engineering curves and surfaces. Full article
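For orientation, the classical Bernstein-Bézier curve that such bases generalize is the standard textbook definition below; this is background only, not the Caputo Fabrizio basis introduced in the paper.

```latex
% Classical degree-n Bezier curve over control points P_0, ..., P_n (background, not the new basis)
B(t) = \sum_{i=0}^{n} \binom{n}{i}\, t^{i} (1 - t)^{n-i}\, P_i, \qquad t \in [0, 1]
```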

15 pages, 3941 KiB  
Article
An Educational Escape Room Game to Develop Cybersecurity Skills
by Alessia Spatafora, Markus Wagemann, Charlotte Sandoval, Manfred Leisenberg and Carlos Vaz de Carvalho
Computers 2024, 13(8), 205; https://doi.org/10.3390/computers13080205 - 19 Aug 2024
Viewed by 1112
Abstract
The global rise in cybercrime is fueled by the pervasive digitization of work and personal life, compounded by the shift to online formats during the COVID-19 pandemic. As digital channels flourish, so too do the opportunities for cyberattacks, particularly those exposing small and medium-sized enterprises (SMEs) to potential economic devastation. These businesses often lack comprehensive defense strategies and/or the necessary resources to implement effective cybersecurity measures. The authors have addressed this issue by developing an Educational Escape Room (EER) that supports scenario-based learning to enhance cybersecurity awareness among SME employees, enabling them to handle cyber threats more effectively. By integrating hands-on scenarios based on real-life examples, the authors aimed to improve the knowledge retention and the operational performance of SME staff in terms of cybersafe practices. The results achieved during pilot testing with more than 200 participants suggest that the EER approach engaged the trainees and boosted their cybersecurity awareness, marking a step forward in cybersecurity education. Full article
(This article belongs to the Special Issue Game-Based Learning, Gamification in Education and Serious Games 2023)

16 pages, 10945 KiB  
Article
Impact of Video Motion Content on HEVC Coding Efficiency
by Khalid A. M. Salih, Ismail Amin Ali and Ramadhan J. Mstafa
Computers 2024, 13(8), 204; https://doi.org/10.3390/computers13080204 - 18 Aug 2024
Viewed by 804
Abstract
Digital video coding aims to reduce the bitrate and keep the integrity of visual presentation. High-Efficiency Video Coding (HEVC) can effectively compress video content to be suitable for delivery over various networks and platforms. Finding the optimal coding configuration is challenging as the compression performance highly depends on the complexity of the encoded video sequence. This paper evaluates the effects of motion content on coding performance and suggests an adaptive encoding scheme based on the motion content of encoded video. To evaluate the effects of motion content on the compression performance of HEVC, we tested three coding configurations with different Group of Pictures (GOP) structures and intra refresh mechanisms. Namely, open GOP IPPP, open GOP Periodic-I, and closed GOP periodic-IDR coding structures were tested using several test sequences with a range of resolutions and motion activity. All sequences were first tested to check their motion activity. The rate–distortion curves were produced for all the test sequences and coding configurations. Our results show that the performance of IPPP coding configuration is significantly better (up to 4 dB) than periodic-I and periodic-IDR configurations for sequences with low motion activity. For test sequences with intermediate motion activity, IPPP configuration can still achieve a reasonable quality improvement over periodic-I and periodic-IDR configurations. However, for test sequences with high motion activity, IPPP configuration has a very small performance advantage over periodic-I and periodic-IDR configurations. Our results indicate the importance of selecting the appropriate coding structure according to the motion activity of the video being encoded. Full article

14 pages, 1107 KiB  
Article
Semantic-Aware Adaptive Binary Search for Hard-Label Black-Box Attack
by Yiqing Ma, Kyle Lucke, Min Xian and Aleksandar Vakanski
Computers 2024, 13(8), 203; https://doi.org/10.3390/computers13080203 - 18 Aug 2024
Viewed by 803
Abstract
Despite the widely reported potential of deep neural networks for automated breast tumor classification and detection, these models are vulnerable to adversarial attacks, which leads to significant performance degradation on different datasets. In this paper, we introduce a novel adversarial attack approach under the decision-based black-box setting, where the attack does not have access to the model parameters, and the returned information from querying the target model consists of only the final class label prediction (i.e., hard-label attack). The proposed attack approach has two major components: adaptive binary search and semantic-aware search. The adaptive binary search utilizes a coarse-to-fine strategy that applies adaptive tolerance values in different searching stages to reduce unnecessary queries. The proposed semantic mask-aware search crops the search space by using breast anatomy, which significantly avoids invalid searches. We validate the proposed approach using a dataset of 3378 breast ultrasound images and compare it with another state-of-the-art method by attacking five deep learning models. The results demonstrate that the proposed approach generates imperceptible adversarial samples at a high success rate (between 99.52% and 100%), and dramatically reduces the average and median queries by 23.96% and 31.79%, respectively, compared with the state-of-the-art approach. Full article
(This article belongs to the Special Issue Feature Papers in Computers 2024)
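To make the "adaptive binary search" idea concrete, here is the generic boundary search used in hard-label attacks: walk along the segment between a clean image and an adversarial one, querying only the predicted label at each step. The adaptive tolerance schedule and the semantic (anatomy-based) mask from the paper are omitted, and predict_label is a hypothetical stand-in for a query to the target model.

```python
import numpy as np

def predict_label(x):
    """Hypothetical hard-label query to the target model (returns only a class id)."""
    return int(x.mean() > 0.5)

def boundary_binary_search(x_clean, x_adv, tol=1e-3):
    """Move the adversarial point as close to the decision boundary as the tolerance allows."""
    target = predict_label(x_adv)
    lo, hi = 0.0, 1.0                       # interpolation weight toward x_adv
    while hi - lo > tol:
        mid = (lo + hi) / 2
        x_mid = (1 - mid) * x_clean + mid * x_adv
        if predict_label(x_mid) == target:
            hi = mid                        # still adversarial: tighten toward x_clean
        else:
            lo = mid
    return (1 - hi) * x_clean + hi * x_adv

x_boundary = boundary_binary_search(np.zeros((32, 32)), np.ones((32, 32)))
```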

17 pages, 1241 KiB  
Article
Time Series Forecasting via Derivative Spike Encoding and Bespoke Loss Functions for Spiking Neural Networks
by Davide Liberato Manna, Alex Vicente-Sola, Paul Kirkland, Trevor Joseph Bihl and Gaetano Di Caterina
Computers 2024, 13(8), 202; https://doi.org/10.3390/computers13080202 - 15 Aug 2024
Viewed by 860
Abstract
The potential of neuromorphic (NM) solutions often lies in their low-SWaP (Size, Weight, and Power) capabilities, which often drive their application to domains that could benefit from this. Nevertheless, spiking neural networks (SNNs), with their inherent time-based nature, present an attractive alternative also for areas where data features are present in the time dimension, such as time series forecasting. Time series data, characterized by seasonality and trends, can benefit from the unique processing capabilities of SNNs, which offer a novel approach for this type of task. Additionally, time series data can serve as a benchmark for evaluating SNN performance, providing a valuable alternative to traditional datasets. However, the challenge lies in the real-valued nature of time series data, which is not inherently suited for SNN processing. In this work, we propose a novel spike-encoding mechanism and two loss functions to address this challenge. Our encoding system, inspired by NM event-based sensors, converts the derivative of a signal into spikes, enhancing interoperability with the NM technology and also making the data suitable for SNN processing. Our loss functions then optimize the learning of subsequent spikes by the SNN. We train a simple SNN using SLAYER as a learning rule and conduct experiments using two electricity load forecasting datasets. Our results demonstrate that SNNs can effectively learn from encoded data, and our proposed DecodingLoss function consistently outperforms SLAYER’s SpikeTime loss function. This underscores the potential of SNNs for time series forecasting and sets the stage for further research in this promising area of research. Full article
(This article belongs to the Special Issue Uncertainty-Aware Artificial Intelligence)
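The derivative-based encoding can be pictured as a software analogue of an event camera: emit a positive or negative spike whenever the signal has moved by more than a threshold since the last emitted spike. The sketch below is a plain illustration of that idea, not the authors' encoder or their DecodingLoss; the threshold and toy load curve are placeholders.

```python
import numpy as np

def derivative_spike_encode(signal, threshold=0.05):
    """Two-channel (positive/negative) delta encoding of a 1-D signal."""
    spikes = np.zeros((len(signal), 2))
    ref = signal[0]
    for t in range(1, len(signal)):
        delta = signal[t] - ref
        if delta >= threshold:
            spikes[t, 0], ref = 1, signal[t]      # upward change large enough: positive spike
        elif delta <= -threshold:
            spikes[t, 1], ref = 1, signal[t]      # downward change large enough: negative spike
    return spikes

load = np.sin(np.linspace(0, 6 * np.pi, 200))     # toy electricity-load curve
events = derivative_spike_encode(load)
```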

29 pages, 3409 KiB  
Article
Training and Certification of Competences through Serious Games
by Ricardo Baptista, António Coelho and Carlos Vaz de Carvalho
Computers 2024, 13(8), 201; https://doi.org/10.3390/computers13080201 - 15 Aug 2024
Viewed by 918
Abstract
The potential of digital games, when transformed into Serious Games (SGs), Games for Learning (GLs), or game-based learning (GBL), is truly inspiring. These forms of games hold immense potential as effective learning tools as they have a unique ability to provide challenges that align with learning objectives and adapt to the learner’s level. This adaptability empowers educators to create a flexible and customizable learning experience, crucial in acquiring knowledge, experience, and professional skills. However, the lack of a standardised design methodology for challenges that promote skill acquisition often hampers the effectiveness of games-based training. The four-step Triadic Certification Method directly responds to this challenge, although implementing it may require significant resources and expertise and adapting it to different training contexts may be challenging. This method, built on a triad of components: competencies, mechanics, and training levels, offers a new approach for game designers to create games with embedded in-game assessment towards the certification of competencies. The model combines the competencies defined for each training plan with the challenges designed for the game on a matrix that aligns needs and levels, ensuring a comprehensive and practical learning experience. The practicality of the model is evident in its ability to balance the various components of a certification process. To validate this method, a case study was developed in the context of learning how to drive, supported by a game coupled with a realistic driving simulator. The real-time collection of game and training data and its processing, based on predefined settings, learning metrics (performance) and game elements (mechanics and parameterisations), defined by both experts and game designers, makes it possible to visualise the progression of learning and to give visual and auditory feedback to the student on their behaviour. The results demonstrate that it is possible to use the data generated by the player and his/her interaction with the game to certify the competencies acquired. Full article
(This article belongs to the Special Issue Game-Based Learning, Gamification in Education and Serious Games 2023)

12 pages, 1968 KiB  
Article
A Deep Learning Approach for Early Detection of Facial Palsy in Video Using Convolutional Neural Networks: A Computational Study
by Anuja Arora, Jasir Mohammad Zaeem, Vibhor Garg, Ambikesh Jayal and Zahid Akhtar
Computers 2024, 13(8), 200; https://doi.org/10.3390/computers13080200 - 15 Aug 2024
Viewed by 803
Abstract
Facial palsy causes the face to droop due to sudden weakness in the muscles on one side of the face. Computer-aided assistance systems for the automatic recognition of palsy faces present a promising solution to recognizing the paralysis of faces at an early stage. A few studies have already addressed this issue using automatic deep feature extraction with deep learning and handcrafted machine learning approaches. This empirical research work designed a multi-model facial palsy framework which combines two convolutional models: a multi-task cascaded convolutional network (MTCNN) for face and landmark detection and a hyperparameter-tuned convolutional neural network for facial palsy classification. Using the proposed multi-model facial palsy framework, we presented results on a dataset of YouTube videos featuring patients with palsy. The results indicate that the proposed framework can detect facial palsy efficiently. Furthermore, the achieved accuracy, precision, recall, and F1-score values of the proposed framework for facial palsy detection are 97%, 94%, 90%, and 97%, respectively, for the training dataset. For the validation dataset, the accuracy achieved is 95%, precision is 90%, recall is 75.6%, and F-score is 76%. As a result, this framework can easily be used for facial palsy detection. Full article
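The face and landmark detection stage can be sketched with the facenet-pytorch MTCNN implementation, as below. This is only an illustration of that first stage under assumptions (the frame filename is hypothetical); the palsy-classification CNN from the paper is not included.

```python
from facenet_pytorch import MTCNN
from PIL import Image

mtcnn = MTCNN(keep_all=True)                       # detect every face in the frame
frame = Image.open("video_frame.jpg")              # hypothetical frame extracted from a video
boxes, probs, landmarks = mtcnn.detect(frame, landmarks=True)
# boxes: one bounding box per face; landmarks: five keypoints per face, which a
# downstream palsy classifier could use to measure left/right facial asymmetry.
```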

18 pages, 10704 KiB  
Article
Virtual Reality Integration for Enhanced Engineering Education and Experimentation: A Focus on Active Thermography
by Ilario Strazzeri, Arnaud Notebaert, Camila Barros, Julien Quinten and Anthonin Demarbaix
Computers 2024, 13(8), 199; https://doi.org/10.3390/computers13080199 - 15 Aug 2024
Viewed by 815
Abstract
The interconnection between engineering simulations, real-world experiments, and virtual reality remains underutilised in engineering. This study addresses this gap by implementing such interconnections, focusing on active thermography for a carbon fibre plate in the aerospace domain. Six scenarios based on three parameters were simulated using ComSol Multiphysics 6.2 and validated experimentally. The results were then integrated into a virtual reality serious game developed with Unreal Engine 5.3.2 and aimed at educating users on thermography principles and aiding rapid experimental condition analysis. Users are immersed in a 3D representation of the research laboratory, allowing interaction with the environment, understanding thermographic setups, accessing instructional videos, and analysing results as graphs or animations. This serious game helps users determine the optimal scenario for a given problem, enhance thermography principle comprehension, and achieve results more swiftly than through real-world experimentation. This innovative approach bridges the gap between simulations and practical experiments, providing a more engaging and efficient learning experience in engineering education. It highlights the potential of integrating simulations, experiments, and virtual reality to improve understanding and efficiency in engineering. Full article

27 pages, 5663 KiB  
Article
A Platform for Integrating Internet of Things, Machine Learning, and Big Data Practicum in Electrical Engineering Curricula
by Nandana Jayachandran, Atef Abdrabou, Naod Yamane and Anwer Al-Dulaimi
Computers 2024, 13(8), 198; https://doi.org/10.3390/computers13080198 - 15 Aug 2024
Viewed by 1018
Abstract
The integration of the Internet of Things (IoT), big data, and machine learning (ML) has pioneered a transformation across several fields. Equipping electrical engineering students to remain abreast of the dynamic technological landscape is vital. This underscores the necessity for an educational tool that can be integrated into electrical engineering curricula to offer a practical way of learning the concepts and the integration of IoT, big data, and ML. Thus, this paper offers the IoT-Edu-ML-Stream open-source platform, a graphical user interface (GUI)-based emulation software tool to help electrical engineering students design and emulate IoT-based use cases with big data analytics. The tool supports the emulation or the actual connectivity of a large number of IoT devices. The emulated devices can generate realistic correlated IoT data and stream it via the message queuing telemetry transport (MQTT) protocol to a big data platform. The tool allows students to design ML models with different algorithms for their chosen use cases and train them for decision-making based on the streamed data. Moreover, the paper proposes learning outcomes to be targeted when integrating the tool into an electrical engineering curriculum. The tool is evaluated using a comprehensive survey. The survey results show that the students gained significant knowledge about IoT concepts after using the tool, even though many of them already had prior knowledge of IoT. The results also indicate that the tool noticeably improved the students’ practical skills in designing real-world use cases and helped them understand fundamental machine learning analytics with an intuitive user interface. Full article
(This article belongs to the Special Issue Smart Learning Environments)
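The emulated-device-to-big-data path described in the abstract rests on plain MQTT publishing; a minimal sketch with paho-mqtt is shown below. The broker address, topic, and payload fields are assumptions for illustration, not the IoT-Edu-ML-Stream defaults.

```python
import json, random, time
import paho.mqtt.client as mqtt

client = mqtt.Client()            # for paho-mqtt 2.x, pass mqtt.CallbackAPIVersion.VERSION2 first
client.connect("localhost", 1883) # assumed local broker

for _ in range(10):               # stream ten emulated sensor readings
    reading = {"device_id": "sensor-01",
               "temperature": round(random.gauss(25.0, 2.0), 2),
               "timestamp": time.time()}
    client.publish("lab/iot/temperature", json.dumps(reading), qos=1)
    time.sleep(1)

client.disconnect()
```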

31 pages, 2279 KiB  
Article
Achieving High Accuracy in Android Malware Detection through Genetic Programming Symbolic Classifier
by Nikola Anđelić and Sandi Baressi Šegota
Computers 2024, 13(8), 197; https://doi.org/10.3390/computers13080197 - 15 Aug 2024
Viewed by 811
Abstract
The detection of Android malware is of paramount importance for safeguarding users’ personal and financial data from theft and misuse. It plays a critical role in ensuring the security and privacy of sensitive information on mobile devices, thereby preventing unauthorized access and potential damage. Moreover, effective malware detection is essential for maintaining device performance and reliability by mitigating the risks posed by malicious software. This paper introduces a novel approach to Android malware detection, leveraging a publicly available dataset in conjunction with a Genetic Programming Symbolic Classifier (GPSC). The primary objective is to generate symbolic expressions (SEs) that can accurately identify malware with high precision. To address the challenge of imbalanced class distribution within the dataset, various oversampling techniques are employed. Optimal hyperparameter configurations for GPSC are determined through a random hyperparameter values search (RHVS) method developed in this research. The GPSC model is trained using a 10-fold cross-validation (10FCV) technique, producing a set of 10 SEs for each dataset variation. Subsequently, the most effective SEs are integrated into a threshold-based voting ensemble (TBVE) system, which is then evaluated on the original dataset. The proposed methodology achieves a maximum accuracy of 0.956, thereby demonstrating its effectiveness for Android malware detection. Full article
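As a rough sketch of the pipeline described (oversample the minority class, then evolve a symbolic classifier), the snippet below uses gplearn on a synthetic dataset. The features and hyperparameters are placeholders, and the random hyperparameter search and threshold-based voting ensemble from the paper are not reproduced.

```python
from gplearn.genetic import SymbolicClassifier
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)          # rebalance the classes
clf = SymbolicClassifier(population_size=500, generations=10, random_state=0)
clf.fit(X_bal, y_bal)
print(accuracy_score(y_te, clf.predict(X_te)))                          # accuracy of the evolved expression
```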

23 pages, 671 KiB  
Review
Usage of Gamification Techniques in Software Engineering Education and Training: A Systematic Review
by Vincenzo Di Nardo, Riccardo Fino, Marco Fiore, Giovanni Mignogna, Marina Mongiello and Gaetano Simeone
Computers 2024, 13(8), 196; https://doi.org/10.3390/computers13080196 - 14 Aug 2024
Viewed by 2125
Abstract
Gamification, the integration of game design elements into non-game contexts, has gained prominence in the software engineering education and training realm. By incorporating elements such as points, badges, quests, and challenges, gamification aims to motivate and engage learners, potentially transforming traditional educational methods. This paper addresses the gap in systematic evaluations of gamification’s effectiveness in software engineering education and training by conducting a comprehensive literature review of 68 primary studies. This review explores the advantages of gamification, including active learning, individualized pacing, and enhanced collaboration, as well as the psychological drawbacks such as increased stress and responsibility for students. Despite the promising results, this study highlights that gamification should be considered a supplementary tool rather than a replacement for traditional teaching methods. Our findings reveal significant interest in integrating gamification in educational settings, driven by the growing need for digital content to improve learning. Full article
(This article belongs to the Special Issue Game-Based Learning, Gamification in Education and Serious Games 2023)

12 pages, 3959 KiB  
Article
An Efficient QC-LDPC Decoder Architecture for 5G-NR Wireless Communication Standards Targeting FPGA
by Bilal Mejmaa, Malika Alami Marktani, Ismail Akharraz and Abdelaziz Ahaitouf
Computers 2024, 13(8), 195; https://doi.org/10.3390/computers13080195 - 14 Aug 2024
Cited by 1 | Viewed by 1380
Abstract
This novel research introduces a game-changing architecture design for Quasi-Cyclic Low-Density Parity-Check (QC-LDPC) decoders in Fifth-Generation New-Radio (5G-NR) wireless communications, specifically designed to meet precise specifications and leveraging the layered Min-Sum (MS) algorithm. Our innovative approach presents a fully parallel architecture that is precisely engineered to cater to the demanding high-throughput requirements of enhanced Mobile Broadband (eMBB) applications. To ensure smooth computation in the MS algorithm, we use the Sub-Optimal Low-Latency (SOLL) technique to optimize the critical check node process. Thus, our design has the potential to greatly benefit certain Ultra-Reliable Low-Latency Communications (URLLC) scenarios. We conducted precise Bit Error Rate (BER) performance analysis on our LDPC decoder using a Hardware Description Language (HDL) Co-Simulation (MATLAB/Simulink/ModelSim) for two codeword rates (2/3 and 1/3), simulating the challenging Additive White Gaussian Noise (AWGN) channel environment. Full article
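The core of the Min-Sum algorithm is the check-node update: each outgoing message takes the product of the signs and the minimum magnitude of the other incoming messages. A toy scalar version is shown below; the 5G-NR base graphs, lifting sizes, layered scheduling, and the SOLL optimisation are not modelled here.

```python
import numpy as np

def min_sum_check_update(llrs):
    """Extrinsic Min-Sum messages from one check node to each connected variable node."""
    signs = np.sign(llrs)
    mags = np.abs(llrs)
    total_sign = np.prod(signs)
    out = np.empty_like(llrs, dtype=float)
    for i in range(len(llrs)):
        out[i] = total_sign * signs[i] * np.delete(mags, i).min()   # exclude the i-th input
    return out

print(min_sum_check_update(np.array([1.5, -0.4, 2.0, -3.1])))
```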

17 pages, 4387 KiB  
Article
Adaptive Load Balancing Approach to Mitigate Network Congestion in VANETS
by Syed Ehsan Haider, Muhammad Faizan Khan and Yousaf Saeed
Computers 2024, 13(8), 194; https://doi.org/10.3390/computers13080194 - 13 Aug 2024
Viewed by 791
Abstract
Load balancing to alleviate network congestion remains a critical challenge in Vehicular Ad Hoc Networks (VANETs). During route and response scheduling, road side units (RSUs) risk being overloaded beyond their calculated capacity. Despite recent advancements like RSU-based load transfer, NP-Hard hierarchical geography routing, RSU-based medium access control (MAC) schemes, simplified clustering, and network activity control, a significant gap persists in employing a load-balancing server for effective traffic management. We propose a server-based network congestion handling mechanism (SBNC) in VANETs to bridge this gap. Our approach clusters RSUs within specified ranges and incorporates dedicated load balancing and network scheduler RSUs to manage route selection and request–response scheduling, thereby balancing RSU loads. We introduce three key algorithms: optimal placement of dedicated RSUs, a scheduling policy for packets/data/requests/responses, and a congestion control algorithm for load balancing. Using the VanetMobiSim library of Network Simulator-2 (NS-2), we evaluate our approach based on residual energy consumption, end-to-end delay, packet delivery ratio (PDR), and control packet overhead. Results indicate substantial improvements in load balancing through our proposed server-based approach. Full article

25 pages, 3068 KiB  
Article
Metaverse Unveiled: From the Lens of Science to Common People Perspective
by Mónica Cruz, Abílio Oliveira and Alessandro Pinheiro
Computers 2024, 13(8), 193; https://doi.org/10.3390/computers13080193 - 13 Aug 2024
Viewed by 985
Abstract
Everyone forms a perception about everything, including the Metaverse. Still, we may expect a gap or disconnection between what has been expressed by various researchers and the widespread perceptions of technology and related concepts. However, the degree to which these two frames of representation differ awaits further investigation. This study seeks to compare the Metaverse perceptions between the scientific findings and the common people’s perceptions using the data from two previous qualitative studies about the representations of the Metaverse from a scientific perspective versus a common perspective (by adults). Is there a common ground between these two perspectives? Or are they in opposition? As goals for this research, we aim to contrast the depiction of the Metaverse in pertinent studies (published in indexed journals) with the portrayal of the Metaverse among adults (non-researchers); ascertain the most prevalent depiction of virtual reality; and determine the significance of gaming within the representations of the Metaverse and virtual reality. This investigation encapsulates crucial findings on the Metaverse concept, contrasting the discoveries made by researchers in prior studies with the common public’s interpretation of this concept. It helps with understanding the differences between the Metaverse representations, the immersion and perception concepts, and a disagreement from the past vs. future perspective. Full article
(This article belongs to the Special Issue Extended or Mixed Reality (AR + VR): Technology and Applications)

13 pages, 385 KiB  
Article
Availability, Scalability, and Security in the Migration from Container-Based to Cloud-Native Applications
by Bruno Nascimento, Rui Santos, João Henriques, Marco V. Bernardo and Filipe Caldeira
Computers 2024, 13(8), 192; https://doi.org/10.3390/computers13080192 - 9 Aug 2024
Viewed by 2158
Abstract
The shift from traditional monolithic architectures to container-based solutions has revolutionized application deployment by enabling consistent, isolated environments across various platforms. However, as organizations look for improved efficiency, resilience, security, and scalability, the limitations of container-based applications, such as their manual scaling, resource management challenges, potential single points of failure, and operational complexities, become apparent. These challenges, coupled with the need for sophisticated tools and expertise for monitoring and security, drive the move towards cloud-native architectures. Cloud-native approaches offer a more robust integration with cloud services, including managed databases and AI/ML services, providing enhanced agility and efficiency beyond what standalone containers can achieve. Availability, scalability, and security are the cornerstone requirements of these cloud-native applications. This work explores how containerized applications can be customized to address such requirements during their shift to cloud-native orchestrated environments. A Proof of Concept (PoC) demonstrated the technical aspects of such a move into a Kubernetes environment in Azure. The results from its evaluation highlighted the suitability of Kubernetes in addressing such a demand for availability and scalability while safeguarding security when moving containerized applications to cloud-native environments. Full article
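One concrete way scalability is typically addressed in such a migration is a Horizontal Pod Autoscaler; the sketch below creates one through the official Kubernetes Python client. The deployment name, namespace, and thresholds are assumptions, and this is not the configuration used in the paper's PoC on Azure.

```python
from kubernetes import client, config

config.load_kube_config()   # assumes a local kubeconfig pointing at the target cluster

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"),  # hypothetical deployment
        min_replicas=2,          # keep at least two replicas for availability
        max_replicas=10,
        target_cpu_utilization_percentage=70))

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
```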

18 pages, 3533 KiB  
Article
Rice Yield Forecasting Using Hybrid Quantum Deep Learning Model
by De Rosal Ignatius Moses Setiadi, Ajib Susanto, Kristiawan Nugroho, Ahmad Rofiqul Muslikh, Arnold Adimabua Ojugo and Hong-Seng Gan
Computers 2024, 13(8), 191; https://doi.org/10.3390/computers13080191 - 7 Aug 2024
Cited by 1 | Viewed by 1870
Abstract
In recent advancements in agricultural technology, quantum mechanics and deep learning integration have shown promising potential to revolutionize rice yield forecasting methods. This research introduces a novel Hybrid Quantum Deep Learning model that leverages the intricate processing capabilities of quantum computing combined with the robust pattern recognition prowess of deep learning algorithms such as Extreme Gradient Boosting (XGBoost) and Bidirectional Long Short-Term Memory (Bi-LSTM). Bi-LSTM networks are used for temporal feature extraction and quantum circuits for quantum feature processing. Quantum circuits leverage quantum superposition and entanglement to enhance data representation by capturing intricate feature interactions. These enriched quantum features are combined with the temporal features extracted by Bi-LSTM and fed into an XGBoost regressor. By synthesizing quantum feature processing and classical machine learning techniques, our model aims to improve prediction accuracy significantly. Based on measurements of mean square error (MSE), the coefficient of determination (R2), and mean average error (MAE), the results are 1.191621 × 10−5, 0.999929482, and 0.001392724, respectively. This value is so close to perfect that it helps make essential decisions in global agricultural planning and management. Full article
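Setting the quantum circuit aside, the classical half of the hybrid can be pictured as a Bi-LSTM feature extractor feeding an XGBoost regressor. The sketch below (untrained Bi-LSTM, synthetic data) only illustrates how the two stages are wired together; it is not the authors' model, and the quantum feature processing is omitted.

```python
import numpy as np
import torch
import torch.nn as nn
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
series = rng.normal(size=(500, 12, 4)).astype("float32")   # toy (samples, timesteps, features)
yields = rng.normal(size=500).astype("float32")            # toy rice-yield targets

bilstm = nn.LSTM(input_size=4, hidden_size=16, batch_first=True, bidirectional=True)
with torch.no_grad():                                       # untrained extractor, illustration only
    out, _ = bilstm(torch.from_numpy(series))
    features = out[:, -1, :].numpy()                        # last-step features, shape (500, 32)

reg = XGBRegressor(n_estimators=200, max_depth=4)
reg.fit(features, yields)
```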

13 pages, 3310 KiB  
Article
Dynamic Opinion Formation in Networks: A Multi-Issue and Evidence-Based Approach
by Joel Weijia Lai
Computers 2024, 13(8), 190; https://doi.org/10.3390/computers13080190 - 7 Aug 2024
Viewed by 1134
Abstract
In this study, we present a computational model for simulating opinion dynamics within social networks, incorporating cognitive and social psychological principles such as homophily, confirmation bias, and selective exposure. We enhance our model using Dempster–Shafer theory to address uncertainties in belief updating. Mathematical formalism and simulations were performed to derive empirical results from showcasing how this method might be useful for modeling real-world opinion consensus and fragmentation. By constructing a scale-free network, we assign initial opinions and iteratively update them based on neighbor influences and belief masses. Lastly, we examine how the presence of “truth” nodes with high connectivity, used to simulate the influence of objective truth on the network, alters opinions. Our simulations reveal insights into the formation of opinion clusters, the role of cognitive biases, and the impact of uncertainty on belief evolution, providing a robust framework for understanding complex opinion dynamics in social systems. Full article
(This article belongs to the Special Issue Recent Advances in Social Networks and Social Media)
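A stripped-down version of the simulation loop helps fix ideas: a scale-free (Barabási-Albert) network, random initial opinions, and a neighbour-averaging update restricted to like-minded neighbours as a crude stand-in for homophily and selective exposure. The Dempster-Shafer belief combination and the highly connected "truth" nodes from the paper are not modelled in this sketch.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
G = nx.barabasi_albert_graph(n=200, m=3, seed=0)      # scale-free topology
opinion = {v: rng.uniform(-1, 1) for v in G.nodes}

for _ in range(50):                                   # synchronous update rounds
    updated = {}
    for v in G.nodes:
        close = [opinion[u] for u in G.neighbors(v)
                 if abs(opinion[u] - opinion[v]) < 0.5]   # only like-minded neighbours count
        updated[v] = (0.8 * opinion[v] + 0.2 * float(np.mean(close))) if close else opinion[v]
    opinion = updated
```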

20 pages, 9632 KiB  
Article
Force-Directed Immersive 3D Network Visualization
by Alexander Brezani, Jozef Kostolny and Michal Zabovsky
Computers 2024, 13(8), 189; https://doi.org/10.3390/computers13080189 - 5 Aug 2024
Viewed by 997
Abstract
Network visualization, in mathematics often referred to as graph visualization, has evolved significantly over time, offering various methods to effectively represent complex data structures. New methods and devices advance the possibilities of visualization both from the point of view of the quality of displayed information and of the possibilities of visualizing a larger amount of data. Immersive visualization includes the user directly in presented visual representation but requires a native 3D environment for direct interaction with visualized information. This article describes an approach to creating a force-directed immersive 3D network visualization algorithm available for application in immersive environments, such as a cave automatic virtual environment or virtual reality. The algorithm aims to address the challenge of creating visually appealing and easily interpretable visualizations by utilizing 3D space and the Unity engine. The results show successfully visualized data and developed interactive visualization methods, overcoming limitations of basic force-directed implementations. The main contribution of the presented research is the force-directed algorithm with springs and controlled placement as an immersive visualization technique that combines the use of springs and attractive forces to stabilize a network in a 3D environment. Full article
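The basic force-directed step this work builds on can be written in a few lines of NumPy: every pair of nodes repels, every edge pulls its endpoints together like a spring, and positions are nudged each iteration. The Unity engine, VR interaction, and the controlled-placement refinements described in the paper are beyond this sketch, and the force constants here are arbitrary.

```python
import networkx as nx
import numpy as np

G = nx.gnm_random_graph(60, 120, seed=1)
pos = np.random.default_rng(1).normal(size=(60, 3))        # random initial 3D layout

for _ in range(200):
    disp = np.zeros_like(pos)
    for i in range(len(pos)):                              # pairwise repulsion
        diff = pos[i] - pos
        dist = np.linalg.norm(diff, axis=1) + 1e-9
        disp[i] += 0.1 * (diff / dist[:, None] ** 3).sum(axis=0)
    for u, v in G.edges:                                   # spring attraction along edges
        d = pos[u] - pos[v]
        disp[u] -= 0.05 * d
        disp[v] += 0.05 * d
    pos += 0.1 * np.clip(disp, -1.0, 1.0)                  # small, clamped step keeps it stable
```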
22 pages, 3039 KiB  
Article
Measuring Undergraduates’ Motivation Levels When Learning to Program in Virtual Worlds
by Juan Gabriel López Solórzano, Christian Jonathan Ángel Rueda and Osslan Osiris Vergara Villegas
Computers 2024, 13(8), 188; https://doi.org/10.3390/computers13080188 - 31 Jul 2024
Viewed by 1069
Abstract
Teaching and learning programming is complex, and conventional classes often fail to spark students’ motivation in this discipline. Therefore, teachers should look for alternative methods for teaching programming. Information and communication technologies (ICTs), and virtual worlds in particular, can be a valuable alternative. This study measures students’ motivation levels when using virtual worlds to learn introductory programming skills and compares them with their motivation levels in a traditional teaching setting. First-semester university students participated in a pedagogical experiment on learning the programming subject through virtual worlds, following a pre-test/post-test design. In the pre-test, 102 students participated, and their motivation level was measured while a professor taught in the traditional modality. A post-test was then applied to 60 students learning in virtual worlds. We found that the activity conducted in virtual worlds produced higher motivation levels than traditional teacher-led learning. Moreover, regarding gender, women reported higher confidence than men. Based on our findings, we recommend that teachers try this innovation with their students. However, teachers must design a didactic model to integrate virtual worlds into daily teaching activities. Full article
(This article belongs to the Special Issue Future Trends in Computer Programming Education)
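For readers who wish to run a similar comparison on their own course data, the following is a minimal sketch (the scores are synthetic and the test choice is only an example, not necessarily the statistical procedure used in the paper) of comparing two independent groups of motivation scores:

```python
# Illustrative sketch only: comparing a traditional-class pre-test group with a
# virtual-worlds post-test group using Welch's t-test. Synthetic data, not the paper's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
traditional = rng.normal(loc=3.4, scale=0.6, size=102)     # e.g. Likert-style means, 102 students
virtual_worlds = rng.normal(loc=3.9, scale=0.5, size=60)   # 60 students

t_stat, p_value = stats.ttest_ind(virtual_worlds, traditional, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.4f}")
```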
31 pages, 2905 KiB  
Article
On Using GeoGebra and ChatGPT for Geometric Discovery
by Francisco Botana, Tomas Recio and María Pilar Vélez
Computers 2024, 13(8), 187; https://doi.org/10.3390/computers13080187 - 30 Jul 2024
Viewed by 1773
Abstract
This paper explores the performance of ChatGPT and GeoGebra Discovery when dealing with automatic geometric reasoning and discovery. The emergence of Large Language Models has attracted considerable attention in mathematics, among other fields where intelligence is expected. We revisit a couple of elementary Euclidean geometry theorems discussed at the birth of Artificial Intelligence and a non-trivial inequality concerning triangles. GeoGebra Discovery succeeds in proving all the selected examples, while ChatGPT fails in one case. Our thesis is that GeoGebra and ChatGPT could be used as complementary systems, where the natural language abilities of ChatGPT and the certified computer algebra methods in GeoGebra Discovery cooperate to obtain sound and, more importantly, interesting results. Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
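As a small analogy to how certified symbolic computation can settle elementary geometric statements (this is not GeoGebra Discovery's internal machinery, only an illustration in SymPy), the sketch below uses symbolic coordinates to verify that the three medians of a triangle are concurrent:

```python
# Illustrative sketch only: a coordinate-based symbolic check of an elementary
# Euclidean theorem (the medians of a triangle meet in one point).
import sympy as sp

x, y = sp.symbols('x y', real=True)
A, B, C = sp.Matrix([0, 0]), sp.Matrix([1, 0]), sp.Matrix([x, y])
G = (A + B + C) / 3                      # candidate point of concurrency (centroid)

def collinear(P, Q, R):
    """True iff P, Q, R lie on one line (determinant of direction vectors is 0)."""
    M = sp.Matrix.hstack(Q - P, R - P)
    return sp.simplify(M.det()) == 0

midpoints = [(B + C) / 2, (A + C) / 2, (A + B) / 2]
# Each median (vertex to opposite midpoint) passes through G for every triangle (x, y).
print(all(collinear(V, Mid, G) for V, Mid in zip([A, B, C], midpoints)))   # True
```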
18 pages, 2365 KiB  
Article
6G-RUPA: A Flexible, Scalable, and Energy-Efficient User Plane Architecture for Next-Generation Mobile Networks
by Sergio Giménez-Antón, Eduard Grasa, Jordi Perelló and Andrés Cárdenas
Computers 2024, 13(8), 186; https://doi.org/10.3390/computers13080186 - 25 Jul 2024
Viewed by 1036
Abstract
As the global deployment of Fifth-Generation (5G) technology consolidates, the exploration of Sixth-Generation (6G) wireless networks has intensified, focusing on novel Key Performance Indicators (KPIs) and Key Value Indicators (KVIs) that extend beyond traditional metrics like throughput and latency. As 5G transitions toward vertical-oriented applications, 6G aims to go beyond, providing a ubiquitous communication experience by integrating diverse Radio Access Networks (RANs) and fixed-access networks to form a hyper-converged edge. This unified platform will enable seamless network federation, thus realizing the so-called network of networks vision. Emphasizing energy efficiency, the present paper discusses the importance of reducing telecommunications’ environmental impact, aligning with global sustainability goals. Central to this vision is the proposal of a novel user plane network protocol architecture, called the 6G Recursive User Plane Architecture (6G-RUPA), designed to be scalable, flexible, and energy-efficient. In brief, 6G-RUPA offers superior flexibility in network adaptation, federation, scalability, and mobility management, aiming to enhance overall network performance and sustainability. This study provides a comprehensive analysis of 6G’s potential, from its conceptual framework to the high-level design of 6G-RUPA, addressing current challenges and proposing actionable solutions for next-generation mobile networks. Full article
(This article belongs to the Special Issue Advances in High-Performance Switching and Routing)
22 pages, 4930 KiB  
Review
Quantum Image Compression: Fundamentals, Algorithms, and Advances
by Sowmik Kanti Deb and W. David Pan
Computers 2024, 13(8), 185; https://doi.org/10.3390/computers13080185 - 25 Jul 2024
Viewed by 1379
Abstract
Quantum computing has emerged as a transformative paradigm with revolutionary potential in numerous fields, including quantum image processing and compression. Applications that depend on large-scale image data could benefit greatly from parallelism and quantum entanglement, which would allow images to be encoded and decoded with unprecedented efficiency and data reduction capability. This paper provides a comprehensive overview of the rapidly evolving field of quantum image compression, including its foundational principles, methods, challenges, and potential uses. The paper also features a thorough exploration of fundamental concepts: quantum qubits as image pixels, quantum gates as image transformation tools, quantum image representation, and basic quantum compression operations. Our survey shows that work on the practical implementation of quantum image compression algorithms on physical quantum computers is still sparse. Thus, further research is needed to realize the full advantage and potential of quantum image compression algorithms on large-scale fault-tolerant quantum computers. Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision)
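As a small illustration of one widely discussed quantum image representation, the NumPy sketch below builds the FRQI (Flexible Representation of Quantum Images) statevector of a tiny 2 × 2 grayscale image; it is a didactic statevector construction only, not circuit-level code and not code from the review:

```python
# Illustrative sketch only: the FRQI statevector of a 2x2 grayscale image in NumPy.
import numpy as np

pixels = np.array([[0.0, 0.25],
                   [0.5, 1.0]])                  # normalized intensities in [0, 1]
flat = pixels.flatten()
n_pix = flat.size                                # 4 pixels -> 2 position qubits
thetas = flat * np.pi / 2                        # intensity -> rotation angle

# |I> = (1/sqrt(n_pix)) * sum_i (cos(theta_i)|0> + sin(theta_i)|1>) (x) |i>
# Ordering: color qubit first, then the position register.
state = np.zeros(2 * n_pix)
for i, th in enumerate(thetas):
    pos = np.zeros(n_pix)
    pos[i] = 1.0                                 # basis state |i> of the position register
    state += np.kron([np.cos(th), np.sin(th)], pos) / np.sqrt(n_pix)

print(np.isclose(np.linalg.norm(state), 1.0))    # True: a valid, normalized quantum state
print(state.round(3))
# A 2^k x 2^k image needs only 2k + 1 qubits, which is where the potential for
# exponential compactness (and downstream compression) comes from.
```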