Search Results (5,437)

Search Parameters:
Keywords = point clouds

17 pages, 2672 KiB  
Article
Point Cloud Densification Algorithm for Multiple Cameras and Lidars Data Fusion
by Jakub Winter and Robert Nowak
Sensors 2024, 24(17), 5786; https://doi.org/10.3390/s24175786 - 5 Sep 2024
Abstract
Fusing data from many sources helps to achieve improved analysis and results. In this work, we present a new algorithm to fuse data from multiple cameras with data from multiple lidars. This algorithm was developed to increase the sensitivity and specificity of autonomous vehicle perception systems, where the most accurate sensors measuring the vehicle’s surroundings are cameras and lidar devices. Perception systems based on data from one type of sensor do not use complete information and have lower quality. The camera provides two-dimensional images; lidar produces three-dimensional point clouds. We developed a method for matching pixels on a pair of stereoscopic images using dynamic programming, inspired by the algorithms used in bioinformatics to align amino acid sequences. We improve the quality of the basic algorithm using additional data from edge detectors, and we improve its performance by restricting the range of candidate pixel matches based on feasible vehicle speeds. We perform point cloud densification in the final step of our method, fusing lidar output data with the stereo vision output. We implemented our algorithm in C++ with a Python API and provide it as the open-source library Stereo PCD, which fuses data from multiple cameras and multiple lidars very efficiently. In the article, we evaluate our approach on benchmark databases in terms of quality and performance and compare our algorithm with other popular methods. Full article
(This article belongs to the Section Sensing and Imaging)
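The bioinformatics-style pixel matching described above can be sketched as a Needleman–Wunsch-style dynamic program over two scanline intensity sequences. This is an illustrative toy, not the Stereo PCD implementation; the function name and the unit gap cost are invented here:

```python
import numpy as np

def align_scanlines(left, right, gap=1.0):
    """Needleman-Wunsch-style alignment of two scanline intensity
    sequences: the match cost is the absolute intensity difference,
    and gaps model pixels visible in only one view (occlusions)."""
    n, m = len(left), len(right)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1) * gap
    D[0, :] = np.arange(m + 1) * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = D[i - 1, j - 1] + abs(left[i - 1] - right[j - 1])
            D[i, j] = min(match, D[i - 1, j] + gap, D[i, j - 1] + gap)
    # Backtrack to recover matched pixel pairs (disparities).
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if D[i, j] == D[i - 1, j - 1] + abs(left[i - 1] - right[j - 1]):
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif D[i, j] == D[i - 1, j] + gap:
            i -= 1
        else:
            j -= 1
    return D[n, m], pairs[::-1]
```

The index differences of the returned pairs are the per-pixel disparities a densification step would triangulate.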
20 pages, 8518 KiB  
Article
An Anti-Occlusion Approach for Enhanced Unmanned Surface Vehicle Target Detection and Tracking with Multimodal Sensor Data
by Minjie Zheng, Dingyuan Li, Guoquan Chen, Weijun Wang and Shenhua Yang
J. Mar. Sci. Eng. 2024, 12(9), 1558; https://doi.org/10.3390/jmse12091558 - 5 Sep 2024
Abstract
Multimodal sensors are often employed by USVs (unmanned surface vehicles) to enhance situational awareness, and the fusion of LiDAR and monocular vision is widely used in near-field perception scenarios. However, this strategy of fusing data from LiDAR and monocular vision may lead to the incorrect matching of image targets and LiDAR point cloud targets when targets occlude one another. To address this issue, a target matching network with an attention module was developed to process occlusion information. Additionally, an image target occlusion detection branch was incorporated into YOLOv9 to extract the occlusion relationships of the image targets. The introduction of the attention module and the occlusion detection branch allows for the consideration of occlusion information in matching point cloud and image targets, thereby achieving more accurate target matching. Based on the target matching network, a method for water surface target detection and multi-target tracking was proposed. This method fuses LiDAR point cloud and image data while considering occlusion information. Its effectiveness was confirmed through experimental verification. The experimental results show that the proposed method improved the correct matching rate in complex scenarios by 13.83% compared to IoU-based target matching methods, with an MOTA metric of 0.879 and an average frame rate of 21.98. The results demonstrate that the method effectively reduces the mismatch rate between point cloud and image targets. The method’s frame rate meets real-time requirements, and the method itself offers a promising solution for unmanned surface vehicles (USVs) to perform water surface target detection and multi-target tracking. Full article
(This article belongs to the Special Issue Unmanned Marine Vehicles: Navigation, Control and Sensing)
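For context, the IoU-based target matching that the proposed occlusion-aware network improves on can be sketched as a greedy box-matching routine. This is a hypothetical minimal baseline, not the paper's code:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def match_targets(image_boxes, cloud_boxes, thresh=0.3):
    """Greedily pair each image box with the unused projected
    point-cloud box of highest IoU above a threshold. When targets
    occlude one another, their boxes overlap and this baseline can
    mismatch them -- the failure mode the paper addresses."""
    pairs, used = [], set()
    for i, ib in enumerate(image_boxes):
        best, best_j = thresh, None
        for j, cb in enumerate(cloud_boxes):
            if j not in used and box_iou(ib, cb) > best:
                best, best_j = box_iou(ib, cb), j
        if best_j is not None:
            used.add(best_j)
            pairs.append((i, best_j))
    return pairs
```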
32 pages, 2464 KiB  
Article
Wasserstein-Based Evolutionary Operators for Optimizing Sets of Points: Application to Wind-Farm Layout Design
by Babacar Sow, Rodolphe Le Riche, Julien Pelamatti, Merlin Keller and Sanaa Zannane
Appl. Sci. 2024, 14(17), 7916; https://doi.org/10.3390/app14177916 - 5 Sep 2024
Abstract
This paper introduces an evolutionary algorithm for objective functions defined over clouds of points of varying sizes. Such design variables are modeled as uniform discrete measures with finite support and the crossover and mutation operators of the algorithm are defined using the Wasserstein barycenter. We prove that the Wasserstein-based crossover has a contracting property in the sense that the support of the generated measure is included in the closed convex hull of the union of the two parents’ supports. We introduce boundary mutations to counteract this contraction. Variants of evolutionary operators based on Wasserstein barycenters are studied. We compare the resulting algorithm to a more classical, sequence-based, evolutionary algorithm on a family of test functions that include a wind-farm layout problem. The results show that Wasserstein-based evolutionary operators better capture the underlying geometrical structures of the considered test functions and outperform a reference evolutionary algorithm in the vast majority of the cases. The tests indicate that the mutation operators play a major part in the performances of the algorithms. Full article
(This article belongs to the Special Issue New Insights into Multidisciplinary Design Optimization)
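The Wasserstein-barycenter crossover can be illustrated for two equal-size point sets: pair the points by an optimal assignment under squared Euclidean cost, then interpolate each pair. This is a toy sketch with invented names and a brute-force assignment, but it exhibits the contracting property the paper proves: every child point lies on a segment between two parent points, hence inside the convex hull of the parents' supports.

```python
import itertools
import numpy as np

def wasserstein_crossover(parent_a, parent_b, t=0.5):
    """Crossover of two equal-size point sets as a Wasserstein
    barycenter: pair points by a minimum-cost assignment and
    interpolate each pair. Brute-force assignment keeps the sketch
    dependency-free; real code would use a Hungarian or OT solver."""
    a = np.asarray(parent_a, float)
    b = np.asarray(parent_b, float)
    best_perm, best_cost = None, np.inf
    for perm in itertools.permutations(range(len(a))):
        cost = sum(((a[i] - b[j]) ** 2).sum() for i, j in enumerate(perm))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    # Each child point is a convex combination of two parent points.
    return (1 - t) * a + t * b[list(best_perm)]
```

The boundary mutations the paper introduces counteract exactly this contraction, pushing points back toward the design-space boundary.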
20 pages, 6719 KiB  
Article
Tracking Method of GM-APD LiDAR Based on Adaptive Fusion of Intensity Image and Point Cloud
by Bo Xiao, Yuchao Wang, Tingsheng Huang, Xuelian Liu, Da Xie, Xulang Zhou, Zhanwen Liu and Chunyang Wang
Appl. Sci. 2024, 14(17), 7884; https://doi.org/10.3390/app14177884 - 5 Sep 2024
Viewed by 209
Abstract
In dynamic tracking scenes, the target is often obstructed by obstacles, leading to a loss of target information and a decrease in tracking accuracy, or even complete failure. To address these challenges, we leverage the capabilities of Geiger-mode Avalanche Photodiode (GM-APD) LiDAR to acquire both intensity images and point cloud data, and develop a target tracking method that fuses the two. Building upon kernelized correlation filtering (KCF), we introduce Fourier descriptors based on intensity images to enhance the representational capacity of target features, thereby achieving precise target tracking using intensity images. Additionally, an adaptive factor is designed based on the peak sidelobe ratio and the intrinsic shape signature to accurately detect occlusions. Finally, by fusing the tracking results from the Kalman filter and KCF with adaptive factors following occlusion detection, we obtain the location of the target's central point. The proposed method is validated through simulations on the KITTI tracking dataset, yielding an average position error of 0.1182 m for the central point of the target. Moreover, our approach achieves an average tracking accuracy 21.67% higher than the Kalman filtering algorithm and 7.94% higher than the extended Kalman filtering algorithm. Full article
(This article belongs to the Special Issue Optical Sensors: Applications, Performance and Challenges)
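The adaptive fusion step can be sketched as a confidence-weighted blend of the two trackers' outputs, with the weight driven by the peak sidelobe ratio (PSR). The linear ramp and its thresholds below are illustrative assumptions, not the paper's adaptive factor:

```python
import numpy as np

def fuse_estimates(kcf_pos, kalman_pos, psr, psr_min=5.0, psr_max=20.0):
    """Blend KCF and Kalman-filter position estimates: a confident
    correlation peak (high PSR) trusts the KCF result, while an
    occluded target (low PSR) falls back to the Kalman prediction."""
    alpha = float(np.clip((psr - psr_min) / (psr_max - psr_min), 0.0, 1.0))
    return (alpha * np.asarray(kcf_pos, float)
            + (1 - alpha) * np.asarray(kalman_pos, float))
```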
24 pages, 7868 KiB  
Article
Target Fitting Method for Spherical Point Clouds Based on Projection Filtering and K-Means Clustered Voxelization
by Zhe Wang, Jiacheng Hu, Yushu Shi, Jinhui Cai and Lei Pi
Sensors 2024, 24(17), 5762; https://doi.org/10.3390/s24175762 - 4 Sep 2024
Viewed by 355
Abstract
Industrial computed tomography (CT) is widely used in the measurement field owing to its advantages such as non-contact and high precision. To obtain accurate size parameters, fitting parameters can be obtained rapidly by processing volume data in the form of point clouds. However, due to factors such as artifacts in the CT reconstruction process, many abnormal interference points exist in the point clouds obtained after segmentation. The classic least squares algorithm is easily affected by these points, resulting in significant deviation of the solution of linear equations from the normal value and poor robustness, while the random sample consensus (RANSAC) approach has insufficient fitting accuracy within a limited timeframe and the number of iterations. To address these shortcomings, we propose a spherical point cloud fitting algorithm based on projection filtering and K-Means clustering (PK-RANSAC), which strategically integrates and enhances these two methods to achieve excellent accuracy and robustness. The proposed method first uses RANSAC for rough parameter estimation, then corrects the deviation of the spherical center coordinates through two-dimensional projection, and finally obtains the spherical center point set by sampling and performing K-Means clustering. The largest cluster is weighted to obtain accurate fitting parameters. We conducted a comparative experiment using a three-dimensional ball-plate standard. The sphere center fitting deviation of PK-RANSAC was 1.91 μm, which is significantly better than RANSAC’s value of 25.41 μm. The experimental results demonstrate that PK-RANSAC has higher accuracy and stronger robustness for fitting geometric parameters. Full article
(This article belongs to the Section Sensing and Imaging)
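The rough RANSAC stage of a PK-RANSAC-style pipeline can be sketched with an algebraic least-squares sphere fit inside a RANSAC loop. This is an illustrative reconstruction under stated assumptions; the projection-based center correction and K-Means refinement stages of the paper are omitted:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit: |p - c|^2 = r^2 rearranges
    into the linear system 2 p . c + d = |p|^2 with d = r^2 - |c|^2."""
    p = np.asarray(points, float)
    A = np.hstack([2 * p, np.ones((len(p), 1))])
    b = (p ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = sol[:3], sol[3]
    radius = np.sqrt(max(d + center @ center, 0.0))
    return center, radius

def ransac_sphere(points, n_iter=200, tol=0.01, seed=0):
    """Rough estimation stage: fit minimal 4-point samples and keep
    the sphere with the most inliers, which resists the abnormal
    interference points that break a plain least-squares fit."""
    rng = np.random.default_rng(seed)
    p = np.asarray(points, float)
    best_c, best_r, best_n = None, None, -1
    for _ in range(n_iter):
        sample = p[rng.choice(len(p), 4, replace=False)]
        c, r = fit_sphere(sample)
        inliers = int((np.abs(np.linalg.norm(p - c, axis=1) - r) < tol).sum())
        if inliers > best_n:
            best_c, best_r, best_n = c, r, inliers
    return best_c, best_r
```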
14 pages, 1578 KiB  
Technical Note
Instantaneous Material Classification Using a Polarization-Diverse RMCW LIDAR
by Cibby Pulikkaseril, Duncan Ross, Alexander Tofini, Yannick K. Lize and Federico Collarte
Sensors 2024, 24(17), 5761; https://doi.org/10.3390/s24175761 - 4 Sep 2024
Viewed by 211
Abstract
Light detection and ranging (LIDAR) sensors using a polarization-diverse receiver are able to capture polarimetric information about the target under measurement. We demonstrate this capability using a silicon photonic receiver architecture that operates on a shot-by-shot basis, enabling nearly instantaneous polarization analysis in the point cloud, and then use these data to train a material classification neural network. Using this classifier, we show an accuracy of 85.4% for classifying plastic, wood, concrete, and coated aluminum. Full article
(This article belongs to the Section Radar Sensors)
28 pages, 4219 KiB  
Review
Delving into the Potential of Deep Learning Algorithms for Point Cloud Segmentation at Organ Level in Plant Phenotyping
by Kai Xie, Jianzhong Zhu, He Ren, Yinghua Wang, Wanneng Yang, Gang Chen, Chengda Lin and Ruifang Zhai
Remote Sens. 2024, 16(17), 3290; https://doi.org/10.3390/rs16173290 - 4 Sep 2024
Viewed by 291
Abstract
Three-dimensional point clouds, as an advanced imaging technique, enable researchers to capture plant traits more precisely and comprehensively. The task of plant segmentation is crucial in plant phenotyping, yet current methods face limitations in computational cost, accuracy, and high-throughput capabilities. Consequently, many researchers have adopted 3D point cloud technology for organ-level segmentation, extending beyond manual and 2D visual measurement methods. However, analyzing plant phenotypic traits using 3D point cloud technology is influenced by various factors such as data acquisition environment, sensors, research subjects, and model selection. Although the existing literature has summarized the application of this technology in plant phenotyping, there has been a lack of in-depth comparison and analysis at the algorithm model level. This paper evaluates the segmentation performance of various deep learning models on point clouds collected or generated under different scenarios. These methods include outdoor real planting scenarios and indoor controlled environments, employing both active and passive acquisition methods. Nine classical point cloud segmentation models were comprehensively evaluated: PointNet, PointNet++, PointMLP, DGCNN, PointCNN, PAConv, CurveNet, Point Transformer (PT), and Stratified Transformer (ST). The results indicate that ST achieved optimal performance across almost all environments and sensors, albeit at a significant computational cost. The transformer architecture for points has demonstrated considerable advantages over traditional feature extractors by accommodating features over longer ranges. Additionally, PAConv constructs weight matrices in a data-driven manner, enabling better adaptation to various scales of plant organs. Finally, a thorough analysis and discussion of the models were conducted from multiple perspectives, including model construction, data collection environments, and platforms. Full article
(This article belongs to the Section Remote Sensing Image Processing)
34 pages, 17617 KiB  
Article
Integration of a Mobile Laser Scanning System with a Forest Harvester for Accurate Localization and Tree Stem Measurements
by Tamás Faitli, Eric Hyyppä, Heikki Hyyti, Teemu Hakala, Harri Kaartinen, Antero Kukko, Jesse Muhojoki and Juha Hyyppä
Remote Sens. 2024, 16(17), 3292; https://doi.org/10.3390/rs16173292 - 4 Sep 2024
Viewed by 252
Abstract
Automating forest machines to optimize the forest value chain requires the ability to map the surroundings of the machine and to conduct accurate measurements of nearby trees. In the near-to-medium term, integrating a forest harvester with a mobile laser scanner system may have multiple applications, including real-time assistance of the harvester operator using laser-scanner-derived tree measurements and the collection of vast amounts of training data for large-scale airborne laser scanning-based surveys at the individual tree level. In this work, we present a comprehensive processing flow for a mobile laser scanning (MLS) system mounted on a forest harvester, starting from the localization of the harvester under the forest canopy and followed by accurate and automatic estimation of tree attributes, such as diameter at breast height (DBH) and stem curve. To evaluate our processing flow, we recorded and processed MLS data from a commercial thinning operation on three test strips with a total driven length ranging from 270 to 447 m in a managed Finnish spruce forest stand containing a total of 658 reference trees within a distance of 15 m from the harvester trajectory. Localization reference was obtained by a robotic total station, while reference tree attributes were derived using a high-quality handheld laser scanning system. As some applications of harvester-based MLS require real-time capabilities while others do not, we investigated the positioning accuracy both for real-time localization of the harvester and after the optimization of the full trajectory. In the real-time positioning mode, the absolute localization error was on average 2.44 m, while the corresponding error after the full optimization was 0.21 m. Applying our automatic stem diameter estimation algorithm to the constructed point clouds, we measured DBH and stem curve with a root-mean-square error (RMSE) of 3.2 cm and 3.6 cm, respectively, while detecting approximately 90% of the reference trees with DBH > 20 cm that were located within 15 m of the harvester trajectory. To achieve these results, we demonstrated a distance-adjusted bias correction method mitigating diameter estimation errors caused by the high beam divergence of the laser scanner used. Full article
(This article belongs to the Special Issue Remote Sensing and Smart Forestry II)
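The stem-diameter step can be illustrated by fitting a circle to a horizontal slice of stem points and applying a range-dependent bias correction. The linear correction model and its coefficient `k` are hypothetical stand-ins for the paper's calibrated distance-adjusted correction:

```python
import numpy as np

def fit_stem_diameter(slice_xy):
    """Least-squares circle fit to a horizontal stem slice, the core
    of DBH estimation: |p - c|^2 = r^2 becomes the linear system
    2 p . c + d = |p|^2 with d = r^2 - |c|^2."""
    p = np.asarray(slice_xy, float)
    A = np.hstack([2 * p, np.ones((len(p), 1))])
    b = (p ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c, d = sol[:2], sol[2]
    return 2.0 * np.sqrt(d + c @ c)  # diameter in the slice's units

def corrected_diameter(raw_diameter, range_m, k=0.004):
    """Distance-adjusted bias correction: beam divergence inflates the
    apparent diameter roughly in proportion to range, so subtract a
    range-proportional term (illustrative linear form, hypothetical k)."""
    return raw_diameter - k * range_m
```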
17 pages, 6523 KiB  
Article
Lightweight Model Development for Forest Region Unstructured Road Recognition Based on Tightly Coupled Multisource Information
by Guannan Lei, Peng Guan, Yili Zheng, Jinjie Zhou and Xingquan Shen
Forests 2024, 15(9), 1559; https://doi.org/10.3390/f15091559 - 4 Sep 2024
Viewed by 116
Abstract
Promoting the deployment and application of embedded systems in complex forest scenarios is an inevitable developmental trend in advanced intelligent forestry equipment. Unstructured roads, which lack effective artificial traffic signs and reference objects, pose significant challenges for driverless technology in forest scenarios, owing to their high nonlinearity and uncertainty. In this research, an unstructured road parameterization construction method, “DeepLab-Road”, based on tight coupling of multisource information is proposed, which aims to provide a new segmentation architecture scheme for the embedded deployment of a forestry engineering vehicle driving assistance system. DeepLab-Road utilizes MobileNetV2 as the backbone network, which improves the completeness of feature extraction through the inverted residual strategy. It then integrates pluggable modules, including DenseASPP and strip-pooling mechanisms, which connect the dilated convolutions in a denser manner to improve feature resolution without significantly increasing the model size. Boundary pixel tensor expansion is then completed through a cascade of two-dimensional Lidar point cloud information. Combined with the coordinate transformation, a quasi-structured road parameterization model in the vehicle coordinate system is established. The strategy was trained on a self-built Unstructured Road Scene Dataset and transplanted onto our intelligent experimental platform to verify its effectiveness. Experimental results show that the system can meet real-time data processing requirements (≥12 frames/s) under low-speed conditions (≤1.5 m/s). For the trackable road centerline, the average matching error between the image and the Lidar was 0.11 m. This study offers valuable technical support for autonomous navigation in unstructured, satellite-denied environments devoid of high-precision maps, in tasks such as forest product transportation, agricultural and forestry management, autonomous inspection and spraying, nursery stock harvesting, skidding, and transportation. Full article
(This article belongs to the Special Issue Modeling of Vehicle Mobility in Forests and Rugged Terrain)
27 pages, 17955 KiB  
Article
Characterization of Complex Rock Mass Discontinuities from LiDAR Point Clouds
by Yanan Liu, Weihua Hua, Qihao Chen and Xiuguo Liu
Remote Sens. 2024, 16(17), 3291; https://doi.org/10.3390/rs16173291 - 4 Sep 2024
Viewed by 187
Abstract
The distribution and development of rock mass discontinuities in 3D space control the deformation and failure characteristics of the rock mass, which in turn affect the strength, permeability, and stability of rock masses. Therefore, it is essential to accurately and efficiently characterize these discontinuities. Light Detection and Ranging (LiDAR) now allows for fast and precise 3D data collection, which supports the creation of new methods for characterizing rock mass discontinuities. However, uneven density distribution and local surface undulations can limit the accuracy of discontinuity characterization. To address this, we propose a method for characterizing complex rock mass discontinuities based on laser point cloud data. This method is capable of processing datasets with varying densities and can reduce over-segmentation in non-planar areas. The suggested approach involves five stages: (1) adaptive resampling of the point cloud data based on density comparison; (2) normal vector calculation using Principal Component Analysis (PCA); (3) identification of non-planar areas using a watershed-like algorithm and determination of the main discontinuity sets using Multi-threshold Mean Shift (MTMS); (4) identification of single-discontinuity clusters using Density-Based Spatial Clustering of Applications with Noise (DBSCAN); (5) fitting of discontinuity planes with Random Sample Consensus (RANSAC) and determination of discontinuity orientations using analytic geometry. This method was applied to three rock slope datasets and compared with previous research results and manual measurements. The results indicate that the method effectively reduces over-segmentation and that its characterization results have high accuracy. Full article
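Stage (2) of the pipeline, PCA normal estimation, reduces to an eigen-decomposition of the local covariance matrix; a minimal sketch:

```python
import numpy as np

def pca_normal(neighborhood):
    """Surface normal of a local point neighbourhood via PCA: the
    eigenvector of the covariance matrix with the smallest eigenvalue
    (np.linalg.eigh returns eigenvalues in ascending order, so it is
    the first column)."""
    p = np.asarray(neighborhood, float)
    cov = np.cov((p - p.mean(axis=0)).T)
    _, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, 0]
```

The ratio between that smallest eigenvalue and the others is also a common planarity cue, which is how non-planar areas can be flagged before clustering.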
17 pages, 11761 KiB  
Article
Prediction of Useful Eggplant Seedling Transplants Using Multi-View Images
by Xiangyang Yuan, Jingyan Liu, Huanyue Wang, Yunfei Zhang, Ruitao Tian and Xiaofei Fan
Agronomy 2024, 14(9), 2016; https://doi.org/10.3390/agronomy14092016 - 4 Sep 2024
Viewed by 124
Abstract
Traditional deep learning methods employing 2D images can only classify healthy and unhealthy seedlings; consequently, this study proposes a method to further classify healthy seedlings into primary and secondary seedlings, and finally to differentiate the three classes of seedling via a 3D point cloud for the detection of useful eggplant seedling transplants. Initially, RGB images of three types of substrate-cultivated eggplant seedlings (primary, secondary, and unhealthy) were collected, and healthy and unhealthy seedlings were classified using ResNet50, VGG16, and MobileNetV2. Subsequently, a 3D point cloud was generated for the three seedling types, and a series of filtering processes (fast Euclidean clustering, point cloud filtering, and voxel filtering) were employed to remove noise. Parameters (number of leaves, plant height, and stem diameter) extracted from the point cloud were found to be highly correlated with the manually measured values. The box plot shows that the primary and secondary seedlings were clearly differentiated for the extracted parameters. The point clouds of the three seedling types were ultimately classified directly using the 3D classification models PointNet++, dynamic graph convolutional neural network (DGCNN), and PointConv, together with a point cloud completion operation for plants with missing leaves. The PointConv model demonstrated the best performance, with an average accuracy, precision, and recall of 95.83%, 95.83%, and 95.88%, respectively, and a model loss of 0.01. This method employs spatial feature information to analyse different seedling categories more effectively than two-dimensional (2D) image classification and three-dimensional (3D) feature extraction methods. However, few studies have applied 3D classification methods to predict useful eggplant seedling transplants, so this method has the potential to identify different eggplant seedling types with high accuracy. Furthermore, it enables the quality inspection of seedlings during agricultural production. Full article
(This article belongs to the Section Precision and Digital Agriculture)
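One of the filtering steps mentioned, voxel filtering, can be sketched as centroid-based voxel-grid downsampling (an illustrative implementation, not the paper's code):

```python
import numpy as np

def voxel_filter(points, voxel=0.05):
    """Voxel-grid downsampling: bucket points into cubic voxels of
    edge length `voxel` and replace each occupied voxel by the
    centroid of its points, thinning dense noisy regions."""
    p = np.asarray(points, float)
    keys = np.floor(p / voxel).astype(int)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()  # guard against NumPy-version shape quirks
    counts = np.bincount(inverse)
    out = np.empty((len(counts), 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inverse, weights=p[:, dim]) / counts
    return out
```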
19 pages, 3131 KiB  
Article
Three-Dimensional Object Detection Network Based on Multi-Layer and Multi-Modal Fusion
by Wenming Zhu, Jia Zhou, Zizhe Wang, Xuehua Zhou, Feng Zhou, Jingwen Sun, Mingrui Song and Zhiguo Zhou
Electronics 2024, 13(17), 3512; https://doi.org/10.3390/electronics13173512 - 4 Sep 2024
Viewed by 220
Abstract
Cameras and LiDAR are important sensors in autonomous driving systems that can provide complementary information to each other. However, most LiDAR-only methods outperform fusion methods on the main benchmark datasets. Current studies attribute this to the misalignment of views and the difficulty of matching heterogeneous features. Specifically, single-stage fusion methods struggle to fully fuse the features of the image and the point cloud. In this work, we propose a 3D object detection network based on the multi-layer and multi-modal fusion (3DMMF) method. 3DMMF works by painting and encoding the point cloud in the frustum proposed by the 2D object detection network. The painted point cloud is then fed to a LiDAR-only object detection network with expanded channels and a self-attention mechanism module. Finally, the camera-LiDAR object candidates fusion (CLOCs) method is used to match the geometric direction features and category semantic features of the 2D and 3D detection results. Experiments on the KITTI dataset (a public dataset) show that this fusion method yields a significant improvement over the LiDAR-only baseline, with an average mAP improvement of 6.3%. Full article
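The painting step can be illustrated by projecting LiDAR points into the image and appending the per-pixel class scores as extra point channels. This is a simplified sketch that assumes the points are already in the camera frame; a real pipeline first applies the LiDAR-to-camera extrinsics and restricts painting to the 2D detector's frustums:

```python
import numpy as np

def paint_points(points_cam, scores, K):
    """Point painting sketch: project camera-frame points with the
    intrinsic matrix K, look up each projected pixel's class-score
    vector, and append it as extra channels on the point."""
    pts = np.asarray(points_cam, float)
    uvw = pts @ np.asarray(K, float).T
    uv = np.floor(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    h, w = scores.shape[:2]
    u = np.clip(uv[:, 0], 0, w - 1)
    v = np.clip(uv[:, 1], 0, h - 1)
    return np.hstack([pts, scores[v, u]])
```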
21 pages, 9533 KiB  
Article
An Algorithm for Generating Outdoor Floor Plans and 3D Models of Rural Houses Based on Backpack LiDAR
by Quanshun Zhu, Bingjie Zhang and Lailiang Cai
Sensors 2024, 24(17), 5723; https://doi.org/10.3390/s24175723 - 3 Sep 2024
Viewed by 279
Abstract
As the Rural Revitalization Strategy continues to progress, there is an increasing demand for the digitization of rural houses, roads, and roadside trees. Given the characteristics of rural areas, such as narrow roads, high building density, and low-rise buildings, the precise and automated generation of outdoor floor plans and 3D models for rural areas is the core research issue of this paper. The specific research content is as follows: Using the point cloud data of the outer walls of rural houses collected by backpack LiDAR as the data source, this paper proposes an algorithm for drawing outdoor floor plans based on the topological relationship of sliced and rasterized wall point clouds. This algorithm aims to meet the needs of periodically updating large-scale rural house floor plans. By comparing the coordinates of house corner points measured with RTK, it is verified that the floor plans drawn by this algorithm can meet the accuracy requirements of 1:1000 topographic maps. Additionally, based on the generated outdoor floor plans, this paper proposes an algorithm for quickly generating outdoor 3D models of rural houses using the height information of wall point clouds. This algorithm can quickly generate outdoor 3D models of rural houses by longitudinally stretching the floor plans, meeting the requirements for 3D models in spatial analyses such as lighting and inundation. By measuring the distance from the wall point clouds to the 3D models and conducting statistical analysis, results show that the distances are concentrated between −0.1 m and 0.1 m. The 3D model generated by the method proposed in this paper can be used as one of the basic data for real 3D construction. Full article
(This article belongs to the Section Radar Sensors)
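The slicing-and-rasterization idea can be sketched as follows: take wall points in a height band, rasterize their XY footprint into an occupancy grid, and (not shown) trace the occupied cells' topology into a floor plan, which extruding by the walls' height turns into a block 3D model. The band and cell size below are illustrative assumptions:

```python
import numpy as np

def rasterize_walls(points, z_band=(1.0, 1.5), cell=0.5):
    """Slice wall points in a height band and rasterize their XY
    footprint into a boolean occupancy grid -- the first step of a
    slice-and-rasterize floor-plan pipeline. Returns the grid and its
    world-coordinate origin so cells can be mapped back to metres."""
    p = np.asarray(points, float)
    band = p[(p[:, 2] >= z_band[0]) & (p[:, 2] < z_band[1])]
    origin = band[:, :2].min(axis=0)
    ij = np.floor((band[:, :2] - origin) / cell + 1e-9).astype(int)
    grid = np.zeros(ij.max(axis=0) + 1, dtype=bool)
    grid[ij[:, 0], ij[:, 1]] = True
    return grid, origin
```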
15 pages, 4612 KiB  
Article
RD-SLAM: Real-Time Dense SLAM Using Gaussian Splatting
by Chaoyang Guo, Chunyan Gao, Yiyang Bai and Xiaoling Lv
Appl. Sci. 2024, 14(17), 7767; https://doi.org/10.3390/app14177767 - 3 Sep 2024
Abstract
Simultaneous localization and mapping (SLAM) is fundamental for intelligent mobile units to perform diverse tasks. Recent work has shown that integrating neural rendering into SLAM yields promising results in photorealistic environment reconstruction. However, existing methods estimate pose by minimizing the error between rendered and input images, which is time-consuming, cannot run in real time, and thus deviates from the original intention of SLAM. In this paper, we propose a dense RGB-D SLAM based on 3D Gaussian splatting (3DGS) that employs generalized iterative closest point (G-ICP) for pose estimation. We actively exploit 3D point cloud information to improve the tracking accuracy and operating speed of the system. We also propose a dual keyframe selection strategy and a corresponding densification method, which effectively reconstruct newly observed scenes and improve the quality of previously constructed maps. In addition, we introduce a regularization loss to address the scale explosion of the 3D Gaussians and their over-elongation along the camera viewing direction. Experiments on the Replica, TUM-RGBD, and ScanNet datasets show that our method achieves state-of-the-art tracking accuracy and runtime while remaining competitive in rendering quality. Full article
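The paper's G-ICP tracker is not reproduced here, but the rigid-alignment solve at the core of ICP variants can be illustrated. The sketch below is a closed-form least-squares rigid transform (the Kabsch algorithm) under known correspondences; ICP and G-ICP re-run a weighted version of this solve each iteration after re-matching points. The function name and test data are my own assumptions.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst for
    Nx3 point sets with known correspondences (Kabsch algorithm)."""
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # reject reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Recover a known pose from a synthetic cloud.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
moved = cloud @ R_true.T + t_true
R_est, t_est = best_rigid_transform(cloud, moved)
```

G-ICP generalizes this by modeling each point with a local covariance (plane-to-plane matching), which is part of why it tracks faster and more robustly than photometric pose refinement.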
27 pages, 595 KiB  
Review
Centralized vs. Decentralized Cloud Computing in Healthcare
by Mona Abughazalah, Wafaa Alsaggaf, Shireen Saifuddin and Shahenda Sarhan
Appl. Sci. 2024, 14(17), 7765; https://doi.org/10.3390/app14177765 - 3 Sep 2024
Abstract
Healthcare is one of the industries that seeks to deliver medical services to patients on time. One of the issues it currently grapples with is real-time patient data exchange between healthcare organizations, a challenge addressed by both centralized and decentralized cloud computing architectures. In this paper, we review the current state of these two cloud computing architectures in the health sector with regard to their effect on the efficiency of Health Information Exchange (HIE) systems. Our study seeks to determine how these cloud computing approaches can assist healthcare facilities in deciding whether to adopt HIE systems. The paper considers system performance, patient data privacy, and cost, and identifies research directions for each architecture. This study shows that both cloud architectures have benefits as well as drawbacks. The defining characteristic of centralized cloud computing is that all data and information are stored together in a single data center, which offers integration, effectiveness, simplicity, and rapid information access. However, it raises data privacy and confidentiality concerns and carries the hazard of a single point of failure. Decentralized cloud computing, by contrast, is built to safeguard data privacy and security: data are distributed across several nodes, forming mini-data centers. This increases the system's ability to cope with a node failure, achieving continuity and lower latency. Nevertheless, it poses integration issues, since managing data from several sites can be a problem, and operating several data centers is costlier and more complex. The paper also examines differences in efficiency, capacity, and cost, and assists healthcare organizations in determining the most suitable cloud architecture for deploying secure and effective HIE systems. Full article
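The single-point-of-failure contrast drawn above can be made concrete with a toy availability estimate. This is my own illustration, not a calculation from the review, and it assumes independent node failures, which real deployments rarely satisfy exactly.

```python
def system_availability(node_availability, replicas):
    """Probability that at least one of `replicas` independent nodes
    is up; replicas=1 models a single central data center."""
    return 1.0 - (1.0 - node_availability) ** replicas

# A single 99%-available data center vs three replicated mini-data centers.
centralized = system_availability(0.99, 1)    # fails whenever the one node fails
decentralized = system_availability(0.99, 3)  # fails only if all three fail
```

The same arithmetic cuts the other way for integration cost: every added replica is another site whose data must be synchronized and operated, which is the trade-off the review highlights.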
