Intelligent Sensing and Computing for Smart and Autonomous Vehicles

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (30 November 2024) | Viewed by 6219

Special Issue Editors

Dr. Wei Shao
1. Data61, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Sydney 2015, Australia
2. School of Electrical and Computer Engineering, UC Davis, Davis, CA 95616, USA
Interests: spatio-temporal data; Internet of Things; reinforcement learning; ubiquitous computing

Dr. Yilong Hui
State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an, China
Interests: vehicular networks; autonomous driving; intelligent transportation systems

Special Issue Information

Dear Colleagues,

Rapid advances in intelligent sensing and computing technologies have given rise to a number of emerging concepts, such as smart and autonomous vehicles (SAVs). With intelligent onboard units, SAVs can replace human drivers to improve driving safety. In addition, as intelligent robots that integrate perception, communication, storage, and computation, SAVs can provide passengers with rich data services during their travels. However, realizing such a blueprint is not trivial, because the practical application of SAVs still faces multiple challenges.

This Special Issue will include a selection of papers covering topics related to intelligent sensing and computing technologies for SAVs. There will be specific emphasis on data sensing and processing, the integration of sensing and computing, topology and collaboration analysis, intelligent and autonomous systems, accident prediction and diagnosis, path planning and traffic scheduling, computer vision and pattern recognition techniques, cloud and edge computing, digital twin and metaverse technology, integrated space–air–ground networks, learning models and techniques, and artificial intelligence algorithms and applications.

This Special Issue aims to showcase novel and prominent research studies on intelligent sensing and computing technologies for SAVs. This will help to guide research activities that investigate system designs for SAVs. Topics of interest include, but are not limited to, the following:

  • Sensing and big data analysis;
  • Intelligent sensing and computing;
  • Integration of sensing and computing;
  • Cloud and edge computing;
  • Learning models and techniques;
  • Intelligent and autonomous systems;
  • Computer vision and pattern recognition; 
  • Digital twin and metaverse technology;
  • Resource allocation and management;
  • Advanced algorithm design and optimization;
  • Smart AV platforms and tools.

Dr. Wei Shao
Dr. Yilong Hui
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)

Research

23 pages, 2379 KiB  
Article
Driving-Related Cognitive Abilities Prediction Based on Transformer’s Multimodal Fusion Framework
by Yifan Li, Bo Liu and Wenli Zhang
Sensors 2025, 25(1), 174; https://doi.org/10.3390/s25010174 - 31 Dec 2024
Viewed by 381
Abstract
With the increasing complexity of urban roads and rising traffic flow, traffic safety has become a critical societal concern. Current research primarily addresses drivers’ attention, reaction speed, and perceptual abilities, but comprehensive assessments of cognitive abilities in complex traffic environments are lacking. This study, grounded in cognitive science and neuropsychology, identifies and quantitatively evaluates ten cognitive components related to driving decision-making, execution, and psychological states by analyzing video footage of drivers’ actions. Physiological data (e.g., Electrocardiogram (ECG), Electrodermal Activity (EDA)) and non-physiological data (e.g., Eye Tracking (ET)) are collected from simulated driving scenarios. A dual-branch Transformer network model is developed to extract temporal features from multimodal data, integrating these features through a weight adjustment strategy to predict driving-related cognitive abilities. Experiments on a multimodal driving dataset from the Computational Physiology Laboratory at the University of Houston, USA, yield an Accuracy (ACC) of 0.9908 and an F1-score of 0.9832, confirming the model’s effectiveness. This method effectively combines scale measurements and driving behavior under secondary tasks to assess cognitive abilities, providing a novel approach for driving risk assessment and traffic safety strategy development.
(This article belongs to the Special Issue Intelligent Sensing and Computing for Smart and Autonomous Vehicles)
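
As a rough illustration of the dual-branch idea described in the abstract, the sketch below encodes two modality streams with separate Transformer encoders and fuses them through learnable weights before a prediction head. The layer sizes, the softmax weight adjustment, and the ten-output head are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a dual-branch Transformer fusion model in the spirit of the
# abstract above. Layer sizes, the softmax "weight adjustment", and the ten
# cognitive-ability outputs are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class ModalityBranch(nn.Module):
    """Encodes one modality's time series with a small Transformer encoder."""
    def __init__(self, in_dim, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.proj = nn.Linear(in_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, x):               # x: (batch, time, in_dim)
        h = self.encoder(self.proj(x))  # (batch, time, d_model)
        return h.mean(dim=1)            # temporal pooling -> (batch, d_model)

class DualBranchFusion(nn.Module):
    """Physiological and eye-tracking branches fused by learnable weights."""
    def __init__(self, physio_dim=2, eye_dim=4, d_model=64, n_abilities=10):
        super().__init__()
        self.physio = ModalityBranch(physio_dim, d_model)
        self.eye = ModalityBranch(eye_dim, d_model)
        self.fusion_logits = nn.Parameter(torch.zeros(2))  # weight adjustment
        self.head = nn.Linear(d_model, n_abilities)

    def forward(self, physio_seq, eye_seq):
        w = torch.softmax(self.fusion_logits, dim=0)
        fused = w[0] * self.physio(physio_seq) + w[1] * self.eye(eye_seq)
        return self.head(fused)          # one score per cognitive component

model = DualBranchFusion()
scores = model(torch.randn(8, 200, 2), torch.randn(8, 200, 4))
print(scores.shape)  # torch.Size([8, 10])
```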

18 pages, 2648 KiB  
Article
Semantic Scene Completion in Autonomous Driving: A Two-Stream Multi-Vehicle Collaboration Approach
by Junxuan Li, Yuanfang Zhang, Jiayi Han, Peng Han and Kaiqing Luo
Sensors 2024, 24(23), 7702; https://doi.org/10.3390/s24237702 - 2 Dec 2024
Viewed by 774
Abstract
Vehicle-to-vehicle communication enables the capture of sensor information from diverse perspectives, greatly aiding semantic scene completion in autonomous driving. However, the misalignment of features between the ego vehicle and cooperative vehicles leads to ambiguity problems, affecting accuracy and semantic information. In this paper, we propose a Two-Stream Multi-Vehicle collaboration approach (TSMV), which divides the features of collaborative vehicles into two streams and regresses them interactively. To overcome the problems caused by feature misalignment, the Neighborhood Self-Cross Attention Transformer (NSCAT) module is designed to enable the ego vehicle to query the most similar local features from collaborative vehicles through cross-attention, rather than assuming spatio-temporal synchronization. A 3D occupancy map is finally generated from the aggregated collaborative-vehicle features. Experimental results on both the V2VSSC and SemanticOPV2V datasets demonstrate that TSMV outperforms state-of-the-art collaborative semantic scene completion techniques.
(This article belongs to the Special Issue Intelligent Sensing and Computing for Smart and Autonomous Vehicles)
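
The cross-attention idea behind NSCAT can be sketched as follows: the ego vehicle's flattened BEV features act as queries and a cooperating vehicle's features as keys and values, so correspondence is learned rather than assumed. The global (rather than neighborhood-restricted) attention and the tensor shapes below are simplifications, not the paper's code.

```python
# Hedged sketch of the cross-attention idea behind NSCAT: the ego vehicle's
# flattened BEV features act as queries and a cooperating vehicle's features as
# keys/values, so correspondence is learned rather than assumed. Global (rather
# than neighborhood-restricted) attention and the shapes are simplifications.
import torch
import torch.nn as nn

class EgoCrossAttention(nn.Module):
    def __init__(self, d_model=128, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, ego_feats, coop_feats):
        # ego_feats:  (batch, H*W, d_model) flattened ego BEV features (queries)
        # coop_feats: (batch, H*W, d_model) collaborator features (keys/values)
        fused, _ = self.attn(query=ego_feats, key=coop_feats, value=coop_feats)
        return self.norm(ego_feats + fused)  # residual fusion of aligned features

B, HW, D = 2, 32 * 32, 128
out = EgoCrossAttention()(torch.randn(B, HW, D), torch.randn(B, HW, D))
print(out.shape)  # torch.Size([2, 1024, 128])
```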

25 pages, 28035 KiB  
Article
Research on Visual Perception of Speed Bumps for Intelligent Connected Vehicles Based on Lightweight FPNet
by Ruochen Wang, Xiaoguo Luo, Qing Ye, Yu Jiang and Wei Liu
Sensors 2024, 24(7), 2130; https://doi.org/10.3390/s24072130 - 27 Mar 2024
Cited by 1 | Viewed by 1604
Abstract
In the field of intelligent connected vehicles, the precise and real-time identification of speed bumps is critically important for the safety of autonomous driving. To address the issue that existing visual perception algorithms struggle to simultaneously maintain identification accuracy and real-time performance amidst image distortion and complex environmental conditions, this study proposes an enhanced lightweight neural network framework, YOLOv5-FPNet. This framework strengthens perception capabilities in two key phases: feature extraction and loss constraint. Firstly, FPNet, based on FasterNet and Dynamic Snake Convolution, is developed to adaptively and accurately extract structural features of distorted speed bumps. Subsequently, the C3-SFC module is proposed to augment the adaptability of the neck and head components to distorted features. Furthermore, the SimAM attention mechanism is embedded within the backbone to enhance key feature extraction. Finally, an adaptive loss function, Inner–WiseIoU, based on a dynamic non-monotonic focusing mechanism, is designed to improve the generalization and fitting ability of bounding boxes. Experimental evaluations on a custom speed bump dataset demonstrate the superior performance of FPNet, with significant improvements in key metrics such as the mAP, mAP50_95, and FPS by 38.76%, 143.15%, and 51.23%, respectively, compared to conventional lightweight neural networks. Ablation studies confirm the effectiveness of the proposed improvements. This research provides a fast and accurate speed bump detection solution for autonomous vehicles, offering theoretical insights for obstacle recognition in intelligent vehicle systems.
(This article belongs to the Special Issue Intelligent Sensing and Computing for Smart and Autonomous Vehicles)
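
Of the components listed in the abstract, the SimAM attention has a compact, parameter-free formulation; the sketch below follows the published SimAM energy formula, while its placement inside YOLOv5-FPNet and the epsilon value are assumptions made for illustration.

```python
# A small sketch of the parameter-free SimAM attention mentioned in the abstract;
# it follows the published SimAM energy formula, but where it sits inside
# YOLOv5-FPNet and the epsilon value are assumptions made for illustration.
import torch
import torch.nn as nn

class SimAM(nn.Module):
    def __init__(self, eps=1e-4):
        super().__init__()
        self.eps = eps

    def forward(self, x):                                   # x: (B, C, H, W)
        n = x.shape[2] * x.shape[3] - 1
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)   # squared deviation
        v = d.sum(dim=(2, 3), keepdim=True) / n             # channel-wise variance
        energy = d / (4 * (v + self.eps)) + 0.5             # per-pixel energy
        return x * torch.sigmoid(energy)                    # reweight features

feats = torch.randn(1, 64, 80, 80)   # e.g. one backbone feature map
print(SimAM()(feats).shape)          # torch.Size([1, 64, 80, 80])
```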

19 pages, 4270 KiB  
Article
Intelligent Head-Mounted Obstacle Avoidance Wearable for the Blind and Visually Impaired
by Peijie Xu, Andy Song and Ke Wang
Sensors 2023, 23(23), 9598; https://doi.org/10.3390/s23239598 - 4 Dec 2023
Cited by 3 | Viewed by 2421
Abstract
Individuals who are Blind and Visually Impaired (BVI) face significant risks from obstacles, particularly when they are unaccompanied. We propose an intelligent head-mounted device to assist BVI people with this challenge. The objective of this study is to develop a computationally efficient mechanism that can effectively detect obstacles in real time and provide warnings. The learned model aims to be both reliable and compact so that it can be integrated into a small wearable device. Additionally, it should be capable of handling natural head turns, which can generally impact the accuracy of readings from the device’s sensors. Over thirty models with different hyper-parameters were explored, and their key metrics were compared to identify the most suitable model that strikes a balance between accuracy and real-time performance. Our study demonstrates the feasibility of a highly efficient wearable device that can assist BVI individuals in avoiding obstacles with a high level of accuracy.
(This article belongs to the Special Issue Intelligent Sensing and Computing for Smart and Autonomous Vehicles)
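
The warning logic such a wearable needs can be outlined in a few lines: score each frame with a compact detector and smooth the decision over a short window so brief head turns do not trigger spurious alarms. The functions and threshold below are hypothetical stand-ins, not the authors' system.

```python
# Illustrative sketch of a real-time obstacle-warning loop: score each frame
# with a compact detector and smooth the decision over a short window so brief
# head turns do not trigger spurious alarms. read_frame(), detect_obstacle(),
# and the 0.55 threshold are hypothetical stand-ins, not the authors' system.
from collections import deque
import random

def read_frame():
    """Stand-in for reading one frame from the head-mounted sensor."""
    return [random.random() for _ in range(16)]

def detect_obstacle(frame):
    """Stand-in for a compact model returning an obstacle-likelihood score."""
    return sum(frame) / len(frame)

def warning_loop(window=5, threshold=0.55, steps=50):
    recent = deque(maxlen=window)            # rolling scores to absorb head turns
    for _ in range(steps):
        recent.append(detect_obstacle(read_frame()))
        if len(recent) == window and sum(recent) / window > threshold:
            print("Obstacle ahead - issue warning")  # e.g. audio or haptic cue

if __name__ == "__main__":
    warning_loop()
```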
