Article

Evaluation of Rapeseed Leaf Segmentation Accuracy Using Binocular Stereo Vision 3D Point Clouds

1 Cultivation and Construction Site of National Key Laboratory for Crop Genetics and Physiology in Jiangsu Province, Yangzhou University, Yangzhou 225009, China
2 Jiangsu Co-Innovation Center for Modern Production Technology of Grain Crops, Yangzhou University, Yangzhou 225009, China
3 Research Institute of Smart Agriculture, Yangzhou University, Yangzhou 225009, China
* Author to whom correspondence should be addressed.
Submission received: 31 December 2024 / Revised: 15 January 2025 / Accepted: 17 January 2025 / Published: 20 January 2025
(This article belongs to the Special Issue Unmanned Farms in Smart Agriculture)

Abstract

Point cloud segmentation is necessary for obtaining highly precise morphological traits in plant phenotyping. Although considerable progress has been made in point cloud segmentation, segmenting point clouds of complex plant leaves remains challenging. Rapeseed leaves are critical in cultivation and breeding, yet traditional two-dimensional imaging is susceptible to reduced segmentation accuracy due to occlusions between plants. The current study proposes the use of binocular stereo-vision technology to obtain three-dimensional (3D) point clouds of rapeseed leaves at the seedling and bolting stages. The point clouds were colorized based on elevation values to facilitate processing of the 3D point cloud data and extraction of rapeseed phenotypic parameters. Denoising methods were selected according to the source and type of point cloud noise: for ground point clouds, plane fitting was combined with pass-through filtering, while statistical filtering was used to remove outliers generated during scanning. During the seedling stage of rapeseed, a region-growing segmentation method with suitable parameter thresholds was used for leaf segmentation, and the Locally Convex Connected Patches (LCCP) clustering method was used at the bolting stage. The study results show that combining plane fitting with pass-through filtering effectively removes ground point cloud noise, while statistical filtering successfully removes outlier noise points generated during scanning. Using the region-growing algorithm at the seedling stage with a normal angle threshold of 5.0/180.0*M_PI and a curvature threshold of 1.5 avoids under-segmentation and over-segmentation, achieving complete segmentation of rapeseed seedling leaves, while the LCCP clustering method fully segments rapeseed leaves at the bolting stage. The proposed method provides insights for improving the accuracy of subsequent point cloud phenotypic parameter extraction, such as rapeseed leaf area, and is beneficial for the 3D reconstruction of rapeseed.

1. Introduction

Rapeseed is one of the most important oil crops globally, after oil palm and soybean, and is also a major source of biofuel [1,2]. In 2023, rapeseed occupied the largest planted area among oilseed crops in China, reaching 7253 thousand hectares, with a production volume of 15.53 million tons [3]. After oil extraction, the resulting rapeseed cake is rich in amino acids and other substances and can be used not only as animal feed [4,5] but also to improve the soil around plant roots [6]. Phenotypic traits reflect the quality and growth status of rapeseed [7] and also facilitate precision breeding [8,9]. Among these traits, leaf area and leaf area index reflect the canopy structure [10], support crop growth monitoring [11,12], and are closely related to yield [8]. In field production, obtaining leaf data often relies on visual estimation and destructive sampling; such methods are time-consuming, laborious, and can damage crops [13]. Therefore, non-destructive automated monitoring of rapeseed leaves is of great research significance.
In recent years, remote sensing technology has provided technical support for efficiently acquiring images of large-scale farmland [14]. Compared to 2D images, 3D data can capture the spatial morphology of crops, which can be used to analyze and study crop morphological structure and growth processes [15,16,17]. Three-dimensional scanning technology is also of great significance for extracting crop phenotypic features and conducting 3D visualization. Furthermore, compared with active sensors such as ultrasonic sensors and LiDAR, binocular stereo vision is a passive sensing technique that mainly uses the principle of binocular disparity to obtain three-dimensional spatial information at low cost and high resolution [18]. Binocular stereo vision has previously been applied successfully to many plants [19,20,21,22,23]. Ge et al. [24] integrated binocular stereo vision technology with a Gaussian mixture model to successfully identify broccoli seedlings under various weed conditions. The instance segmentation model YOLO-TomatoSeg, developed by Zheng et al. [25], combined with the parameters of a binocular camera, achieved tomato localization precision that met the positioning requirements of harvesting robots.
During the acquisition process, a large amount of noise is generated by the equipment and the collection environment, which has a detrimental effect on subsequent point cloud stitching and analysis. At the same time, when point cloud datasets are very large, direct processing and feature extraction are challenging, making point cloud segmentation necessary. Point cloud segmentation is the process of classifying point clouds based on local features, grouping points with similar attributes into regions, and dividing point clouds into blocks for further processing [26]. Point cloud segmentation algorithms can be categorized into the following types: region-growing-based [27,28], edge-based [29,30], model-based [31,32], clustering-based [33,34], and deep learning-based [35,36,37]. Accurate and effective segmentation of 3D point cloud data is a crucial step in point cloud processing, as it directly impacts the accuracy of subsequent tasks. Factors such as the size and structure of the point cloud may pose challenges for segmentation. Thus, point cloud denoising and segmentation are important steps in point cloud processing. Liu et al. [38] removed background noise and outliers through pass-through filtering and statistical filtering, followed by reconstructing a 3D model of peanut plants using point cloud spatial coordinates. Wang et al. [39] acquired a potato 3D point cloud based on binocular stereo vision, removed outliers through filtering and k-means clustering, and calculated the crop water stress index (CWSI). Bao et al. [40] utilized voxel grid downsampling, ground plane fitting with the random sample consensus (RANSAC) algorithm, statistical filtering, and Euclidean clustering to eliminate noise points and obtain sorghum plant height. Filtering algorithms for point cloud denoising have been widely applied in many plant 3D reconstruction studies [41,42,43].
Achieving accurate and efficient segmentation remains a research hotspot in the field. Miao et al. [44] utilized the RANSAC algorithm and Euclidean clustering to remove ground points from point clouds and proposed a point-cloud-to-image conversion method to achieve stem and leaf segmentation. Zhu et al. [45] employed conditional filtering and statistical outlier removal filtering to remove the background point cloud based on a three-dimensional reconstruction of the tomato canopy and effectively obtained clear point clouds of the tomato canopy and leaves. Wang et al. [46] achieved the extraction of organ-level parameters in maize without organ-level segmentation by employing a distance-field-based segmentation pipeline. Hao et al. [47] developed a high-throughput method for cotton 3D phenotype acquisition and analysis using the PointSegAt deep learning network model. This method includes models for segmenting plant stems and leaves from point cloud data, as well as an automatic segmentation algorithm. Their findings demonstrate the model's strong performance in stem-leaf segmentation and single-leaf segmentation, along with automated measurements of plant height, leaf number, and wilting leaf area.
There have been significant achievements in the 3D visualization of crops such as maize and cotton, but research on rapeseed is relatively scarce, and studies on rapeseed leaves have been limited to a single growth stage [48]. Qiao et al. [49] used a DGCNN-sparse-dense point cloud mapping method to segment crown siliques, which improved point cloud recognition and counting accuracy for rapeseed siliques. Xiong et al. [50] accurately quantified the 3D canopy structure and single-leaf traits in rapeseed seedlings, achieving a mean absolute percentage error of 3.68% for automated leaf area measurements; however, other growth stages of rapeseed were not verified. Teng et al. [51] utilized an RGB-D camera for the 3D reconstruction of rapeseed and proposed an improved method based on the classical iterative closest point algorithm with a point cloud registration success rate of 92.5%. We therefore designed a study to obtain 3D point clouds of rapeseed leaves at the seedling and bolting stages. Background noise was removed using pass-through and statistical filtering, and rapeseed leaves at different stages were extracted using region-growing and LCCP algorithms. The objectives of the study were as follows: (a) to obtain rapeseed point clouds using binocular stereo vision technology; (b) to combine different filtering methods to remove ground point clouds and outliers, filtering out most noise points; and (c) to segment rapeseed leaves at the seedling and bolting stages with the aim of improving the accuracy of subsequent point cloud phenotypic parameter extraction, such as rapeseed leaf area.

2. Materials and Methods

2.1. Study Site and Experiment Design

The experiment was conducted in an experimental field of the College of Agriculture, Yangzhou University, Yangzhou City, Jiangsu Province, China (32°38′ N, 119°42′ E) during the 2021–2022 growing season. Three rapeseed varieties, namely Huyou 039, Qinyou 7, and Zheyouza 108, were selected for the experiment. The planting method involved strip sowing with a soil covering of 2–4 cm in depth. The experiment was replicated three times, with each plot area measuring 9 square meters, and a 0.5 m-wide isolation strip was set up between each plot. Standard irrigation and fertilization practices were followed throughout the experiment, and, due to the late sowing period for rapeseed, plastic film was used for protection during extremely low temperatures.

2.2. Point Cloud Data Acquisition

In this experiment, we utilized the Weijing intelligent stereo vision system (Vizum, Beijing, China), which integrates various sensors, such as hyperspectral cameras, texture cameras, and infrared cameras, into a smart phenotyping platform. The equipment employs line laser binocular stereo vision technology with multiple high-precision wide-field stereo cameras: four mounted on the top and two on the side. Table 1 presents the specific parameters of the stereo camera. During installation, the cameras were fixed in place while the laser emitter's angle was kept perpendicular to the cameras. Data were output to a 5G transmission device via Ethernet cables, and the results were transmitted to a control room for overall management using 5G communication technology.
Data collection was conducted at the seedling and bolting stages of rapeseed. To minimize external noise interference, 3D scans were performed in favorable weather with suitable lighting and no wind. The scanning mechanism of the smart phenotyping platform is shown in Figure 1. The specific steps were as follows: (1) Power on and access the main interface of the device. (2) Release the emergency stop mode to allow the vehicle to automatically leave the warehouse, then manually zero the X and Z axes of the scanning mechanism. (3) Perform region of interest (ROI) detection for each camera. (4) After setting the plot position, move the vehicle to the plot and adjust the scanning mechanism so that the crop to be photographed is within the camera's field of view, with the height set to around one meter above the crop top. If the point cloud image quality is poor, check whether the laser line is within the camera's field of view; if not, redefine the ROI. (5) Complete the scan and power off the device for storage.

2.3. Point Cloud Coloring

After acquiring the point cloud data, processing was performed using PCL 1.11.1 (https://pointclouds.org/downloads/, accessed on 16 January 2025) on the Windows 10 operating system. The point cloud data were saved in TXT format with six columns; the last three columns, representing color information, were all set to 0, indicating no color variation when visualizing the point cloud. In this experiment, elevation-based rendering was used to color the point cloud: the cloud is rendered based on a selected field, establishing a mapping between coordinate changes and color changes, typically using red, green, and blue. The specific coloring steps were as follows. First, the maximum, median, and minimum elevation values of the point cloud along the z-axis were calculated, and red, green, and blue were selected for rendering: the point with the minimum z-coordinate was set to blue, the median to green, and the maximum to red. For the lower part of the point cloud, the ratio of each laser point's elevation within the interval between the minimum and median values was calculated, and green was blended into blue according to this ratio. Similarly, for the upper part of the point cloud, red was blended into green according to the ratio of each laser point's elevation within the interval between the median and maximum values. The effect of the colored point cloud is shown in Figure 2.
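To make this procedure concrete, the following is a minimal C++ sketch against the PCL API; the function name colorByElevation is our own, and the median split and linear blending follow the scheme described above rather than a specific implementation from this study.

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/memory.h>
#include <algorithm>
#include <cstdint>
#include <vector>

// Color a cloud along z with a blue -> green -> red gradient split at the median elevation.
pcl::PointCloud<pcl::PointXYZRGB>::Ptr
colorByElevation(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& in)
{
  std::vector<float> zs;
  zs.reserve(in->size());
  for (const auto& p : *in) zs.push_back(p.z);
  std::nth_element(zs.begin(), zs.begin() + zs.size() / 2, zs.end());
  const float zMid = zs[zs.size() / 2];                    // median elevation -> green
  const auto mm = std::minmax_element(zs.begin(), zs.end());
  const float zMin = *mm.first, zMax = *mm.second;         // minimum -> blue, maximum -> red

  auto out = pcl::make_shared<pcl::PointCloud<pcl::PointXYZRGB>>();
  out->reserve(in->size());
  for (const auto& p : *in) {
    pcl::PointXYZRGB q;
    q.x = p.x; q.y = p.y; q.z = p.z;
    if (p.z <= zMid) {  // lower part: blend green into blue by the elevation ratio
      const float t = (p.z - zMin) / std::max(zMid - zMin, 1e-6f);
      q.r = 0;
      q.g = static_cast<std::uint8_t>(255 * t);
      q.b = static_cast<std::uint8_t>(255 * (1 - t));
    } else {            // upper part: blend red into green by the elevation ratio
      const float t = (p.z - zMid) / std::max(zMax - zMid, 1e-6f);
      q.r = static_cast<std::uint8_t>(255 * t);
      q.g = static_cast<std::uint8_t>(255 * (1 - t));
      q.b = 0;
    }
    out->push_back(q);
  }
  return out;
}
```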

2.4. Point Cloud Denoising

2.4.1. Pass-Through Filtering to Filter Ground Point Cloud

In the field, it is challenging to manually remove non-target point clouds, such as weeds, when the soil surface is uneven. Therefore, before analyzing the rapeseed point cloud data, the noise must be processed. Visualizing the scanned rapeseed point cloud reveals that some ground points are similar in height to the plants, so filtering the ground point cloud by height alone would also remove parts of the rapeseed plants. When the entire point cloud is sliced along the z-axis, there will always be a slice containing only soil points. Therefore, the entire rapeseed plot point cloud was first subjected to RANSAC-based plane fitting. The main idea of this method is to select the minimum number of points from the point cloud that can define a plane, calculate the plane parameters, and compute the distance 'l' from each remaining point to this plane. A threshold 'L' was set; if 'l < L', the point was considered to lie on the plane. When the number of points on the plane reached 'n', the plane was saved and those points were considered matched. This process was repeated, and the plane with the largest 'n' was output as the best-fitting main plane of the point cloud, as shown in Figure 3, where the shaded grid represents the fitted plane.
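As a reference, a minimal PCL sketch of this RANSAC plane fit is shown below; the distance threshold standing in for 'L' is an illustrative value, not one reported in this study.

```cpp
#include <pcl/point_types.h>
#include <pcl/memory.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/sample_consensus/method_types.h>
#include <pcl/sample_consensus/model_types.h>

// Fit the dominant (ground) plane with RANSAC; 'inliers' receives the matched points.
pcl::ModelCoefficients::Ptr
fitGroundPlane(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud,
               pcl::PointIndices& inliers)
{
  auto coefficients = pcl::make_shared<pcl::ModelCoefficients>();
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setOptimizeCoefficients(true);      // refine the plane from all inliers
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.01);         // 'L': max point-to-plane distance (illustrative)
  seg.setInputCloud(cloud);
  seg.segment(inliers, *coefficients);    // best plane = the one with the most inliers
  return coefficients;
}
```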
In this experiment, pass-through filtering was used to filter the noise of the ground point cloud. The principle of pass-through filtering is to first specify a dimension and set a threshold range. Then, iterate through each point in the point cloud and determine if the value of that point in the specified dimension is within the threshold. Points with values outside the threshold range are directly filtered out, and the remaining points after the iteration are considered the filtered point cloud. Since the coloring of the point cloud was performed based on elevation along the z-axis, the elevation values were used to determine the threshold range for denoising during pass-through filtering. The soil noise scanned in each plot was different, so the threshold range had to be selected accordingly.
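The z-axis pass-through step can be sketched as follows; zLow and zHigh stand for the plot-specific elevation limits chosen as described above.

```cpp
#include <pcl/point_types.h>
#include <pcl/filters/passthrough.h>

// Keep only points whose elevation lies inside [zLow, zHigh]; everything else is removed.
void filterGroundByElevation(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& in,
                             pcl::PointCloud<pcl::PointXYZ>& out,
                             float zLow, float zHigh)
{
  pcl::PassThrough<pcl::PointXYZ> pass;
  pass.setInputCloud(in);
  pass.setFilterFieldName("z");        // the dimension specified for filtering
  pass.setFilterLimits(zLow, zHigh);   // the threshold range
  pass.filter(out);
}
```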

2.4.2. Statistical Filtering for Denoising

During the scanning process, due to equipment issues and other factors, unevenly dense point cloud datasets are often generated. Within these datasets, there are outliers, which are points that deviate significantly from the majority of the data. These outliers are sparsely distributed in space and need to be filtered out promptly during point cloud preprocessing to reduce the difficulty of subsequent processing.
The main idea of statistical filtering is to analyze the neighborhood of each point statistically and calculate the average distance from each point to its k nearest neighbors. Assuming these distances follow a Gaussian distribution, the mean and standard deviation become the key quantities: by setting thresholds based on them in the algorithm, outliers can be filtered out. The main steps of statistical filtering are as follows:
(1) Set the neighborhood size to k and calculate the average distance $S_i$ from each point to its k nearest neighbors. For any two non-coincident points $P_1(X_i, Y_i, Z_i)$ and $P_2(X_j, Y_j, Z_j)$ in the point cloud, Equation (1) gives the average distance from a point to its k-neighborhood:

$$S_i = \frac{1}{k}\sum_{j=1}^{k}\sqrt{(X_i - X_j)^2 + (Y_i - Y_j)^2 + (Z_i - Z_j)^2} \tag{1}$$

Here, $(X_i, Y_i, Z_i)$ denotes the coordinates of point $P_1$, and $(X_j, Y_j, Z_j)$ denotes the coordinates of point $P_2$.
(2) Equations (2) and (3) give the mean distance µ and the standard deviation σ over the entire point cloud dataset:

$$\mu = \frac{1}{n}\sum_{i=1}^{n} S_i \tag{2}$$

$$\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(S_i - \mu\right)^2} \tag{3}$$
(3) Once the mean distance and standard deviation were calculated, the distance threshold $d_{max}$ was determined using Equation (4):

$$d_{max} = \mu + \alpha \times \sigma \tag{4}$$

Here, α is the standard deviation multiple used to control the effect of the distance standard deviation on the distance threshold. In the algorithm, given the values of k and α, a point is retained if the average distance to its k nearest neighbors falls within the range $(\mu - \alpha \times \sigma,\ \mu + \alpha \times \sigma)$.
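PCL's StatisticalOutlierRemoval implements exactly this mean/standard-deviation thresholding; a minimal sketch with the parameters ultimately selected in Section 3.1.2 (k = 50, α = 0.1) follows.

```cpp
#include <pcl/point_types.h>
#include <pcl/filters/statistical_outlier_removal.h>

// Remove points whose mean distance to their k nearest neighbors exceeds d_max = μ + α·σ.
void removeScanOutliers(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& in,
                        pcl::PointCloud<pcl::PointXYZ>& out)
{
  pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
  sor.setInputCloud(in);
  sor.setMeanK(50);              // k: neighborhood size (Section 3.1.2)
  sor.setStddevMulThresh(0.1);   // α: standard deviation multiple (Section 3.1.2)
  sor.filter(out);
}
```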

2.5. Target Point Cloud Extraction

2.5.1. Segmentation of Rapeseed Leaves at Seedling Stage

The segmentation of the main stems and leaves of individual rapeseed plants was carried out using conditional filtering. During the seedling stage, a region-growing algorithm was applied to segment the leaves. The specific algorithmic process (Algorithm 1) is outlined as follows:
Algorithm 1. Pseudocode for Conditional Filtering
  • FUNCTION VisualizeCloud(cloud, filter_cloud)
  •   CREATE a viewer for point cloud visualization
  •   CREATE two viewports (v1, v2)
  •   SET the background color for v1 to black
  •   ADD text “point clouds” to v1
  •   SET the background color for v2 to dark gray
  •   ADD text “filtered point clouds” to v2
  •   DEFINE color handler for cloud based on z field
  •   ADD cloud to v1 with color handler
  •   ADD filter_cloud to v2 with green color
  •   WHILE viewer is not stopped
  •     UPDATE viewer
  •     SLEEP for a short duration
  • FUNCTION main()
  •   CREATE cloud_in and cloud_conditional point clouds
  •   LOAD point cloud data from “HY-1p.pcd” into cloud_in
  •   IF loading fails THEN
  •     RETURN -1
  •   PRINT the number of points in cloud_in
  •   CREATE a condition filter (range_cond)
  •   ADD conditions for filtering based on x, y, and z fields
  •   CREATE conditional removal filter (condrem)
  •   SET condition for condrem
  •   SET input cloud for condrem
  •   SET keep organized to true
  •   APPLY filter to cloud_conditional
  •   PRINT the number of points in cloud_conditional before removing NaNs
  •   REMOVE NaN points from cloud_conditional
  •   PRINT the number of points in cloud_conditional after removing NaNs
  •   CALL VisualizeCloud with cloud_in and cloud_conditional
  •   RETURN 0
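For reference, the conditional-filtering core of Algorithm 1 can be sketched in C++ with PCL as follows; the x/y bounds are illustrative placeholders for a plot-specific crop box rather than values used in the experiment.

```cpp
#include <pcl/point_types.h>
#include <pcl/memory.h>
#include <pcl/filters/conditional_removal.h>
#include <pcl/filters/filter.h>   // pcl::removeNaNFromPointCloud
#include <vector>

// Crop a single plant out of the plot cloud with a box-shaped condition on x and y.
void cropSinglePlant(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& in,
                     pcl::PointCloud<pcl::PointXYZ>& out)
{
  using FieldCmp = pcl::FieldComparison<pcl::PointXYZ>;
  auto range_cond = pcl::make_shared<pcl::ConditionAnd<pcl::PointXYZ>>();
  range_cond->addComparison(FieldCmp::ConstPtr(new FieldCmp("x", pcl::ComparisonOps::GT, -0.5)));
  range_cond->addComparison(FieldCmp::ConstPtr(new FieldCmp("x", pcl::ComparisonOps::LT,  0.5)));
  range_cond->addComparison(FieldCmp::ConstPtr(new FieldCmp("y", pcl::ComparisonOps::GT, -0.5)));
  range_cond->addComparison(FieldCmp::ConstPtr(new FieldCmp("y", pcl::ComparisonOps::LT,  0.5)));

  pcl::ConditionalRemoval<pcl::PointXYZ> condrem;
  condrem.setCondition(range_cond);
  condrem.setInputCloud(in);
  condrem.setKeepOrganized(true);   // removed points become NaN, as in Algorithm 1
  condrem.filter(out);

  std::vector<int> index;
  pcl::removeNaNFromPointCloud(out, out, index);   // then drop the NaN points
}
```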
In summary, region-growing segmentation involves the establishment of seed regions and region growth [52]. When establishing seed regions, we selected the point with the smallest curvature value as the initial seed point to ensure a smooth region, thereby reducing the number of segments during region growth. Normal vectors and curvature values were estimated using a method based on local surface fitting. First, the k nearest neighbors of each point in the cloud were found, and a least-squares local plane was fitted to these points. Decomposing the covariance matrix of the neighborhood, the eigenvector corresponding to the smallest eigenvalue gives the normal vector of the plane. With the eigenvalues sorted such that λ1 ≤ λ2 ≤ λ3, Equation (5) gives the curvature value:
$$H = \frac{\lambda_1}{\lambda_1 + \lambda_2 + \lambda_3} \tag{5}$$
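A minimal PCL sketch of this region-growing segmentation, configured with the thresholds selected in Section 3.2.1, is given below; the neighbor counts and minimum cluster size are illustrative assumptions. PCL's normal estimation derives the curvature of Equation (5) from the same covariance eigenvalues.

```cpp
#include <pcl/point_types.h>
#include <pcl/memory.h>
#include <pcl/features/normal_3d.h>
#include <pcl/segmentation/region_growing.h>
#include <pcl/search/kdtree.h>
#include <cmath>
#include <vector>

// Segment leaves by growing regions from low-curvature seeds under normal/curvature thresholds.
std::vector<pcl::PointIndices>
segmentSeedlingLeaves(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud)
{
  auto tree    = pcl::make_shared<pcl::search::KdTree<pcl::PointXYZ>>();
  auto normals = pcl::make_shared<pcl::PointCloud<pcl::Normal>>();

  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;   // local plane fit per point
  ne.setInputCloud(cloud);
  ne.setSearchMethod(tree);
  ne.setKSearch(30);                                      // k nearest neighbors (assumed)
  ne.compute(*normals);                                   // normals + curvature of Eq. (5)

  pcl::RegionGrowing<pcl::PointXYZ, pcl::Normal> reg;
  reg.setMinClusterSize(100);                             // discard tiny fragments (assumed)
  reg.setSearchMethod(tree);
  reg.setNumberOfNeighbours(30);                          // assumed
  reg.setInputCloud(cloud);
  reg.setInputNormals(normals);
  reg.setSmoothnessThreshold(5.0f / 180.0f * static_cast<float>(M_PI)); // normal angle threshold
  reg.setCurvatureThreshold(1.5f);                        // curvature threshold

  std::vector<pcl::PointIndices> clusters;                // ideally one cluster per leaf
  reg.extract(clusters);
  return clusters;
}
```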

2.5.2. Segmentation of Rapeseed Leaves at Bolting Stage

When the region-growing algorithm was used for leaf segmentation during the bolting stage of rapeseed, it was found that certain leaves overlapped due to the growth habit of the plant. Therefore, this study employed the LCCP algorithm for segmenting rapeseed leaves at the bolting stage. The LCCP algorithm is a segmentation method based on the concavity and convexity relationships of neighboring patches in a point cloud. It consists of two main stages: first, the point cloud is divided into supervoxels based on the angle between normal vectors and spatial distance; then, the connectivity of adjacent supervoxels recorded in the supervoxel adjacency graph is determined, and all convexly connected supervoxels are merged to form the segmentation result.
When over-segmenting the point cloud into supervoxels, the data were initially divided into voxels, and a voxel cloud with a specific resolution was created. The voxel cloud was then gridded, and the voxel closest to the center of each grid cell was selected as an initial seed voxel. After filtering out isolated seed points, a search region was established for the remaining seed voxels on the object surface. The number of voxels within the neighborhood radius of each seed point was calculated, and seed voxels whose voxel count in the intersection of the neighborhood and the search range fell below a fixed threshold were removed. Following the initialization of the clustering algorithm, the edge properties, geometric features, and spatial distances of the point cloud were jointly considered to measure the similarity between voxels in the feature space. The voxel data were then clustered in the feature space, and an over-segmentation of the voxel data was performed using a flow-constrained clustering algorithm to obtain supervoxels. After over-segmentation, the Convexity Criterion (CC) and the Sanity Criterion (SC) were used to determine the concavity and convexity relationships between different patches. The CC determines whether two adjacent supervoxels are connected concavely or convexly based on the vector connecting their centers and their normal vectors. Stein et al. [53] provided a criterion diagram for the CC, as shown in Figure 4.
Figure 4 shows that when α1 < α2, the two supervoxels are connected convexly; otherwise, they are connected concavely. In the process of using the algorithm, Equations (6) and (7) were used for the CC criterion to reduce misjudgments of concave–convex relationships.
$$CC_b(P_i, P_j) = \begin{cases} \text{true}, & (\vec{n}_1 - \vec{n}_2)\cdot \vec{d} > 0 \ \lor\ \beta < \beta_{Thresh} \\ \text{false}, & \text{otherwise} \end{cases} \tag{6}$$

$$CC_e(P_i, P_j) = CC_b(P_i, P_j)\ \land\ CC_b(P_i, P_c)\ \land\ CC_b(P_j, P_c) \tag{7}$$

where $CC_b$ is the basic convexity criterion, β is the angle between the normals of the two adjacent supervoxels, $\beta_{Thresh}$ is the concavity tolerance threshold, $\vec{d}$ is the vector from point $x_2$ to point $x_1$, $CC_e$ is the extended convexity criterion evaluated with a common neighbor $P_c$, and $\vec{n}_1$, $\vec{n}_2$ are the normals of the two adjacent supervoxels [53].
If two adjacent supervoxels are not genuinely connected, with one side isolated, the SC is introduced, as given in Equations (8)–(10).
$$\vartheta(P_i, P_j) = \min\left(\angle(\vec{d}, \vec{s}),\ 180^{\circ} - \angle(\vec{d}, \vec{s})\right) \tag{8}$$

$$\vartheta_{thresh}(\beta) = \vartheta_{thresh}^{max}\cdot\left(1 + \exp\left[-a\cdot\left(\beta - \beta_{off}\right)\right]\right)^{-1} \tag{9}$$

$$SC(P_i, P_j) = \begin{cases} \text{true}, & \vartheta(P_i, P_j) > \vartheta_{thresh}\left(\beta(\vec{n}_1, \vec{n}_2)\right) \\ \text{false}, & \text{otherwise} \end{cases} \tag{10}$$

where ϑ is the minimum angle between the connection direction $\vec{d}$ and the direction $\vec{s}$ of the intersection line of the two supervoxel planes, $\vartheta_{thresh}^{max}$ is 60°, a is 0.25, and $\beta_{off}$ is 25° [53].
Equation (11) shows the formula for the convex edge criterion between two adjacent supervoxels.
$$conv(P_i, P_j) = CC_{b,e}(P_i, P_j)\ \land\ SC(P_i, P_j) \tag{11}$$

where $conv(P_i, P_j)$ denotes the concave–convex relationship between the two supervoxels, $CC_{b,e}(P_i, P_j)$ is the result of applying the CC to the two supervoxels, and $SC(P_i, P_j)$ is the result of applying the SC to the two supervoxels.
After marking the concave–convex relationships of each small region, these regions were clustered into larger objects using a region-growing algorithm, completing the segmentation process.
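For reference, the supervoxel + LCCP pipeline can be sketched with PCL's implementation using the parameter values reported in Section 3.2.2 (voxel resolution 2.0, seed resolution 100.0, concavity tolerance 100, smoothness threshold 10); the input point type and relabeling step below are assumptions, not necessarily the exact configuration used here.

```cpp
#include <pcl/point_types.h>
#include <pcl/segmentation/supervoxel_clustering.h>
#include <pcl/segmentation/lccp_segmentation.h>
#include <cstdint>
#include <map>

// Over-segment into supervoxels, then merge convexly connected patches with LCCP.
void segmentBoltingLeaves(const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr& cloud,
                          pcl::PointCloud<pcl::PointXYZL>& labeled_out)
{
  // 1) Supervoxel over-segmentation (voxel resolution 2.0, seed resolution 100.0).
  pcl::SupervoxelClustering<pcl::PointXYZRGBA> svc(2.0f, 100.0f);
  svc.setInputCloud(cloud);
  std::map<std::uint32_t, pcl::Supervoxel<pcl::PointXYZRGBA>::Ptr> supervoxels;
  svc.extract(supervoxels);
  std::multimap<std::uint32_t, std::uint32_t> adjacency;   // supervoxel adjacency graph
  svc.getSupervoxelAdjacency(adjacency);

  // 2) LCCP merging of convexly connected supervoxels (CC and SC criteria).
  pcl::LCCPSegmentation<pcl::PointXYZRGBA> lccp;
  lccp.setConcavityToleranceThreshold(100.0f);             // concavity tolerance
  lccp.setSmoothnessCheck(true, 2.0f, 100.0f, 10.0f);      // smoothness threshold 10
  lccp.setSanityCheck(true);                               // enable the SC criterion
  lccp.setInputSupervoxels(supervoxels, adjacency);
  lccp.segment();

  // 3) Copy the supervoxel labels and relabel them with the merged segment ids.
  labeled_out = *svc.getLabeledCloud();
  lccp.relabelCloud(labeled_out);
}
```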

3. Results

3.1. Evaluation of Point Cloud Denoising Accuracy

3.1.1. Evaluation of Pass-Through Filtering Denoising Accuracy

Taking the denoising of the first plot of Huyou 039 as an example, the rapeseed point cloud was filtered using pass-through filtering based on elevation values; the same steps were followed for the subsequent plots. Figure 5a shows the original rapeseed point cloud, where the red box highlights the target point cloud. When coloring the point cloud, the z-axis direction was colored from blue to green to red, and the z-coordinate of a randomly selected point on the fitted plane served as one limit of the pass-through filtering threshold, with the blue (minimum-elevation) value as the other limit. Denoising was performed using this threshold, and the filtered point cloud is shown in Figure 5b, colored along the z-axis from red to green to blue. The blank areas in the image indicate that the low-lying ground points have been filtered out; if filtering were continued upwards, rapeseed points would also be removed. Therefore, the appearance of blank areas in the denoised point cloud image serves as the stopping criterion for pass-through filtering.
The point cloud structure scanned in this experiment was complex, with a lot of noise. Point cloud filtering was performed to improve the quality and accuracy of point cloud data. The denoising ratio served as an evaluation metric to verify the effectiveness of pass-through filtering on denoising point cloud data. Equation (12) shows the formula for the denoising ratio.
$$\alpha = \frac{p - p_i}{p} \tag{12}$$
where α represents the percentage of removed points compared to the total number of original points, p is the total number of original points, and pi is the number of points after denoising.
Six parts of scanned point cloud data were denoised using pass-through filtering in this experiment. The number of denoised points and denoising ratios are shown in Table 2.
A comparison between the denoised point cloud image and the original point cloud image reveals that ground noise was successfully removed through pass-through filtering. The results in Table 2 show that the original point cloud datasets were very large, and after pass-through filtering, even the lowest denoising ratio reached 42.2%. This substantially streamlines the large point cloud data for further processing.

3.1.2. Evaluation of Statistical Filtering Denoising Accuracy

The previous denoising of the point cloud was only aimed at filtering the ground point cloud of the entire rapeseed plot. Noise generated by the rapeseed plants during scanning was not addressed. When segmenting the point cloud of individual rapeseed plants using conditional filtering, small-scale noise needed to be filtered out to reduce errors in subsequent point cloud processing. Therefore, statistical filtering was also required.
To prevent incomplete denoising or over-denoising, suitable threshold ranges needed to be selected for statistical filtering. The parameters involved were the number of neighboring points (k) and the standard deviation multiple (α). In this study, values of 5, 10, 20, 30, 50, and 100 were selected for k, and values of 0.01, 0.05, 0.1, 0.5, 1, 5, and 10 were selected for α. Combinations of k and α were tested for statistical filtering. A single rapeseed plant was selected as a demonstration, and line graphs of the standard deviation multiple, the number of neighboring points, and the filtered point cloud count were plotted, as shown in Figure 6.
From the graph, it can be observed that, with the number of neighboring points held constant, the number of removed points decreases as the standard deviation multiple increases; when α reaches 5 or higher, this decreasing trend stabilizes. When α is between 0.01 and 0.1, the number of removed points for k = 5 significantly exceeds that for other values of k; at α = 0.1, however, the number of removed points for k = 100 significantly exceeds that for the other values. When α is between 0.1 and 0.5, the number of removed points for k = 5 decreases noticeably. Additionally, when the standard deviation multiple was between 0.5 and 5, the number of removed points increased as k increased. The line graph provides a rough understanding of the relationship between the number of removed points and the standard deviation multiple under different neighboring point counts, but further testing is needed to narrow down the ranges of these two parameters. The comparison of the point cloud images after statistical filtering is shown in Figure 7.
We found that, when the standard deviation multiple is 0.01, as the number of neighboring points increases, not only were the noise point clouds removed, but valid point clouds were also significantly filtered out, as shown in Figure 7a,b. Therefore, it is important to determine the upper limit of k. Comparing Figure 7a,c,d, it is evident that when the number of neighboring points k remains constant, as α increases, the filtering effect on the noise point clouds (represented by the red portion) becomes less pronounced, while the filtering of valid points is not as significant. Additionally, it was observed that only a small number of valid points were removed under low α values (Figure 7a,c).
Further, the above results reveal that, in statistical filtering, the number of neighboring points mainly affects the filtering of non-target noise, while the standard deviation multiple primarily influences the removal of target noise. Therefore, k values of 5 and 100 can be excluded, as can α values of 0.01, 5, and 10, leaving k values of 10, 20, 30, and 50 and α values of 0.05, 0.1, 0.5, and 1.0 as suitable candidates. Additionally, when α is set to 0.1, the four lines almost overlap regardless of the value of k, indicating minimal variation in the number of removed points; hence, this study selected α = 0.1. Subsequently, the number of points before and after statistical filtering was compared for k values of 10, 20, 30, and 50, and the experimental results are shown in Table 3.
We found that when k was set to 50, the proportion of removed points relative to the total points before filtering was the highest (15.21%), as presented in Table 3. Therefore, this study set the number of neighboring points to 50 and the standard deviation multiple to 0.1 for statistical filtering.

3.2. Evaluation of Segmentation Accuracy in Target Point Clouds

3.2.1. Evaluation of Rapeseed Leaf Segmentation Accuracy at the Seedling Stage

Leaf-related parameters were extracted from the point cloud of a single rapeseed plant after segmentation algorithms were applied. Since the number of leaves in the rapeseed seedlings was small and the leaves did not overlap, the region-growing algorithm was used for segmenting individual rapeseed plant leaves. This method requires determining suitable normal angle thresholds and curvature thresholds. In this study, normal angle thresholds were taken as 3.0/180.0*M_PI, 5.0/180.0*M_PI, and 7.0/180.0*M_PI, and curvature thresholds were taken as 0.5, 1.0, and 1.5. The constant “M_PI” represents the numerical value of pi (approximately 3.1416) in programming languages.
First, different normal angle threshold values were tested, and it was found that segmentation was not possible when the threshold was set to 7.0/180.0*M_PI. Since the rapeseed plants at the seedling stage had four leaves, setting the normal angle threshold to 3.0/180.0*M_PI, which produced fewer than four segmented regions, indicated incorrect segmentation. When the threshold was set to 5.0/180.0*M_PI, four regions were accurately segmented; hence, this threshold was chosen.
During point cloud segmentation, the entire point cloud was divided into target point clouds and non-target point clouds, where non-target point clouds could also be considered as background points outside the target. Segmentation is often not perfect in a single step, and under-segmentation and over-segmentation are common issues. Generally, under-segmentation occurs when target point clouds are mistakenly segmented as background point clouds, leading to incomplete segmentation of the target point clouds. Over-segmentation occurs when background point clouds are mistakenly considered as target point clouds and segmented as such. After setting the normal angle threshold to 5.0/180.0*M_PI, tests were conducted with curvature thresholds of 0.5, 1.0, and 1.5. Different colors in the segmentation results represent different segmented regions, and the results are shown in Figure 8.
The results presented in Figure 8 reveal that under all three curvature values, most of the leaves could be segmented. However, when the curvature values were set to 0.5 and 1.0, the leaf edges and petiole parts were mistakenly identified as background point clouds, leading to under-segmentation. Setting the curvature value to 1.5 helped to alleviate this issue. Thus, based on the tuning results of this study, we set the normal angle threshold to 5.0/180.0*M_PI and the curvature threshold to 1.5.
Figure 9 shows the accuracy assessment of the Huyou039 leaf area, comparing the manually measured leaf area with the leaf area extracted from the point cloud. The R2 and RMSE were 0.995 and 0.2589 cm2, respectively.

3.2.2. Evaluation of Rapeseed Leaf Segmentation Accuracy at the Bolting Stage

Due to the unique characteristics of rapeseed leaves at the bolting stage, using the same segmentation method as in the seedling stage resulted in uneven segmentation of overlapping leaves, as shown in Figure 10. Different colors in the segmentation results represent different segmented regions. Therefore, this study employed the LCCP algorithm for leaf segmentation of rapeseed plants at the bolting stage. The main parameters involved in this algorithm are voxel resolution, seed resolution, concavity tolerance threshold, and smoothness threshold. Through experimentation with different parameter values, it was found that the best segmentation results were achieved when the values of these four parameters were set to 2.0, 100.0, 100, and 10, respectively. The results of the LCCP algorithm are shown in Figure 10.
After the segmentation using LCCP, rapeseed leaves were segmented and differentiated by different colors. By zooming in on a region where leaves overlap, it was observed that there is no over- or under-segmentation; rather, complete segmentation of the overlapping leaves occurs, as shown in Figure 11.

4. Discussion

4.1. Denoising of Rapeseed 3D Point Clouds

This study aimed to address the challenges of noise removal and leaf segmentation in rapeseed 3D reconstruction, providing methodologies for subsequent 3D reconstructions to obtain more comprehensive phenotypic parameters. The experimental design utilized rapeseed cultivated under field conditions as the data source. Previously, most research on rapeseed phenotypes relied on unmanned aerial vehicles equipped with RGB cameras [54] or LiDAR [55], which do not facilitate the convenient acquisition of 3D structural information about rapeseed. Scanning objects with 3D technologies such as LiDAR yields substantial volumes of point cloud data [56] and introduces significant noise, leading to a considerable processing workload. Research on 3D crop phenotyping has predominantly focused on species such as maize and cotton, with relatively few studies dedicated to the three-dimensional analysis of rapeseed leaves. The binocular stereo vision technology utilized in this study enabled the simultaneous acquisition of both color and structural information about the target while also being more cost-effective [34].
The point cloud acquired in this study contained a large number of non-target points, such as ground points, as well as outliers surrounding the primary point cloud. Since the crops grew in the ground, ground points were inevitably included in the scan if plant integrity was to be preserved. To address this noise, a combination of RANSAC and pass-through filtering algorithms was employed for denoising, while statistical filtering was applied to eliminate outliers. The RANSAC algorithm was used to fit a ground plane [57,58]. This approach of using pass-through filtering with z-axis thresholds effectively removed large areas of noisy points, which is similar to the results of Teng et al. [51]. In comparison to the research conducted by Hu et al. [48], the primary distinction of this study lies in the adoption of the RANSAC algorithm for fitting the ground plane. This method demonstrates greater robustness than conventional techniques and further optimizes the z-axis threshold setting during pass-through filtering, leading to a more pronounced filtering effect. Additionally, the colorization of point clouds allowed the effectiveness of noise removal to be visualized, providing a more intuitive and convenient reference for data analysis and subsequent processing. However, the number of points affected the speed of the denoising operation, and parameter thresholds needed to be fine-tuned. Therefore, future research should focus on improving the speed and other aspects of this method to achieve faster and more accurate denoising of point clouds.

4.2. Segmentation of Rapeseed Leaf

In the field of point cloud segmentation, the region-growing algorithm can successfully segment small plants such as green beans [59] and greenhouse ornamentals [60]. This study further supports the capability of the algorithm to efficiently maintain the integrity of rapeseed leaves at the seedling stage, and our results align with the successful application of this technique by Li et al. [42] in segmenting the stalks of corn seedlings. However, rapeseed undergoes morphological changes as it grows; in particular, at the bolting stage the leaf surface undulates greatly and there is significant leaf overlap. Thus, employing traditional region-growing segmentation to separate rapeseed leaves at the bolting stage into whole single leaves is challenging.
In light of the aforementioned challenges, this study explores the application of the LCCP algorithm for leaf segmentation at the bolting stage. The research conducted by Wang and Chen [61] on the 3D reconstruction of early-stage green pepper seedlings inspired this investigation, as they observed that LCCP effectively leverages the natural concave–convex relationships between stems and leaves for segmentation. In this experiment, the LCCP algorithm was used to segment the leaves of rapeseed plants during the bolting stage. The results showed that the leaves were segmented accurately without any over-segmentation in the overlapping regions. This effectiveness is particularly notable in managing structural variations in complex plant morphologies. In contrast to the sophisticated methods that rely on deep learning approaches [62], the LCCP algorithm employed in this study does not require a large-scale training dataset, thereby reducing computational resource consumption and implementation complexity while significantly shortening processing times. This finding enriches the technical tools for plant point cloud segmentation. It also provides an efficient and cost-effective solution for monitoring and analyzing leaf conditions throughout the growth period of crops like rapeseed.
Zhang et al. [37] used the PointNet++ network to automatically segment maize tassel branch point clouds, achieving Intersection over Union (IoU), precision, and recall values of 96.29%, 96.36%, and 93.01%, respectively. By combining PointNet++ to quantify canopy height and extracting dynamic numerical phenotypes, Chen et al. [63] conducted a genome-wide association study of 160 wheat varieties to find reliable loci associated with height and NUE. The leaf segmentation method proposed in this paper demonstrates significant effectiveness in addressing the issue of overlapping rapeseed leaves, thereby providing a technical foundation for the subsequent accurate extraction of leaf area and 3D morphological reconstruction. However, the current work has certain limitations, primarily its focus on point cloud data from only two growth stages of the rapeseed plants, without a comprehensive analysis covering the entire growth period. Therefore, to enhance the universality and applicability of this segmentation algorithm, future research should explore how to adjust and optimize algorithm parameters and strategies. This will ensure that the algorithm can effectively cope with and accurately segment the complex leaf shapes and dense overlapping characteristics exhibited by rapeseed plants in different late growth stages, thereby advancing the development of rapeseed growth monitoring and precision agricultural management technologies.

5. Conclusions

The current study combined line laser binocular stereo vision technology, filtering algorithms, and segmentation algorithms to separate the point cloud of rapeseed leaves from the original rapeseed field and to segment rapeseed leaves at the seedling and bolting stages. The results demonstrated that combining elevation-based colorization with filtering algorithms effectively eliminated ground point clouds and outliers, yielding relatively smooth point cloud data and thereby enhancing the quality and clarity of the point cloud. The region-growing algorithm performed well during the seedling stage but struggled to segment overlapping leaves during the bolting stage; the LCCP clustering method proved a more effective solution to this issue. Because plant morphology varies across growth stages, different segmentation algorithms are required for rapeseed leaves at different stages, and a universal method for leaf segmentation has not yet been established. The method proposed in this study produced satisfactory results in the early developmental stages of rapeseed, offering a feasible basis for extracting subsequent point cloud phenotypic parameters such as leaf area. Future research should aim to identify a universal algorithm for leaf segmentation, thereby streamlining the segmentation process.

Author Contributions

Conceptualization, L.Z. and C.S.; methodology, L.Z., B.S., S.S. and C.S.; software, M.Z. and S.S.; validation, M.Z.; formal analysis, L.Z.; investigation, B.S. and M.Z.; resources, D.H. and M.Z.; data curation, L.Z., M.Z. and D.H.; writing—original draft preparation, L.Z. and M.Z.; writing—review and editing, L.Z., M.Z., B.S., D.H. and C.S.; supervision, C.S.; funding acquisition, C.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Key Research and Development Program (Modern Agriculture) of Jiangsu Province (BE2022335, BE2022338).

Data Availability Statement

The original contributions presented in the study are included in the article. Further inquiries should be directed to the corresponding author.

Acknowledgments

We would like to express our sincere gratitude to the editor and the reviewers for their valuable feedback and insightful comments, which have significantly improved the quality of our manuscript. Additionally, we would like to extend our thanks to all contributing authors for their hard work and collaboration throughout the research process. This study would not have been possible without their commitment and expertise.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hu, J.; Chen, B.; Zhao, J.; Zhang, F.; Xie, T.; Xu, K.; Gao, G.; Yan, G.; Li, H.; Li, L.; et al. Genomic selection and genetic architecture of agronomic traits during modern rapeseed breeding. Nat. Genet. 2022, 54, 694–704. [Google Scholar] [CrossRef] [PubMed]
  2. Zheng, M.; Terzaghi, W.; Wang, H.; Hua, W. Integrated strategies for increasing rapeseed yield. Trends Plant Sci. 2022, 27, 742–745. [Google Scholar] [CrossRef] [PubMed]
  3. National Bureau of Statistics of China. China Statistical Yearbook-2023; China Statistics Press: Beijing, China, 2023. [Google Scholar]
  4. Pirgozliev, V.R.; Mansbridge, S.C.; Kendal, T.; Watts, E.S.; Rose, S.P.; Brearley, C.A.; Bedford, M.R. Rapeseed meal processing and dietary enzymes modulate excreta inositol phosphate profile, nutrient availability, and production performance of broiler chickens. Poult. Sci. 2022, 101, 102067. [Google Scholar] [CrossRef] [PubMed]
  5. Taheri, M.; Dastar, B.; Ashayerizadeh, O.; Mirshekar, R. The effect of fermented rapeseed meal on production performance, egg quality and hatchability in broiler breeders after peak production. Br. Poult. Sci. 2023, 64, 259–267. [Google Scholar] [CrossRef] [PubMed]
  6. Cen, Y.; Guo, L.; Liu, M.; Gu, X.; Li, C.; Jiang, G. Using organic fertilizers to increase crop yield, economic growth, and soil quality in a temperate farmland. PeerJ 2020, 8, e9668. [Google Scholar] [CrossRef] [PubMed]
  7. Chen, H.; Gao, L.; Li, M.; Liao, Y.; Liao, Q. Fertilization depth effect on mechanized direct-seeded winter rapeseed yield and fertilizer use efficiency. J. Sci. Food Agric. 2023, 103, 2574–2584. [Google Scholar] [CrossRef] [PubMed]
  8. Wang, C.; Yang, J.; Chen, W.; Zhao, X.; Wang, Z. Contribution of the leaf and silique photosynthesis to the seeds yield and quality of oilseed rape (Brassica napus L.) in reproductive stage. Sci. Rep. 2023, 13, 4721. [Google Scholar] [CrossRef] [PubMed]
  9. Jin, S.; Su, Y.; Wu, F.; Pang, S.; Gao, S.; Hu, T.; Liu, J.; Guo, Q. Stem–Leaf Segmentation and Phenotypic Trait Extraction of Individual Maize Using Terrestrial LiDAR Data. IEEE Trans. Geosci. Remote Sens. 2019, 57, 1336–1346. [Google Scholar] [CrossRef]
  10. Hussain, S.; Gao, K.; Din, M.; Gao, Y.; Shi, Z.; Wang, S. Assessment of UAV-Onboard Multispectral Sensor for Non-Destructive Site-Specific Rapeseed Crop Phenotype Variable at Different Phenological Stages and Resolutions. Remote Sens. 2020, 12, 397. [Google Scholar] [CrossRef]
  11. Liu, Y.; An, L.; Wang, N.; Tang, W.; Liu, M.; Liu, G.; Sun, H.; Li, M.; Ma, Y. Leaf area index estimation under wheat powdery mildew stress by integrating UAV-based spectral, textural and structural features. Comput. Electron. Agric. 2023, 213, 108169. [Google Scholar] [CrossRef]
  12. Li, W.; Li, D.; Liu, S.; Baret, F.; Ma, Z.; He, C.; Warner, T.A.; Guo, C.; Cheng, T.; Zhu, Y.; et al. RSARE: A physically-based vegetation index for estimating wheat green LAI to mitigate the impact of leaf chlorophyll content and residue-soil background. ISPRS J. Photogramm. Remote Sens. 2023, 200, 138–152. [Google Scholar] [CrossRef]
  13. Liu, S.; Jin, X.; Nie, C.; Wang, S.; Yu, X.; Cheng, M.; Shao, M.; Wang, Z.; Tuohuti, N.; Bai, Y.; et al. Estimating leaf area index using unmanned aerial vehicle data: Shallow vs. deep machine learning algorithms. Plant Physiol. 2021, 187, 1551–1576. [Google Scholar] [CrossRef] [PubMed]
  14. Li, Y.; Yang, B.; Zhou, S.; Cui, Q. Identification lodging degree of wheat using point cloud data and convolutional neural network. Front. Plant Sci. 2022, 13, 968479. [Google Scholar] [CrossRef] [PubMed]
  15. Wahabzada, M.; Paulus, S.; Kersting, K.; Mahlein, A.-K. Automated interpretation of 3D laserscanned point clouds for plant organ segmentation. BMC Bioinform. 2015, 16, 248. [Google Scholar] [CrossRef] [PubMed]
  16. Hämmerle, M.; Höfle, B. Effects of Reduced Terrestrial LiDAR Point Density on High-Resolution Grain Crop Surface Models in Precision Agriculture. Sensors 2014, 14, 24212–24230. [Google Scholar] [CrossRef]
  17. Volpato, L.; Pinto, F.; González-Pérez, L.; Thompson, I.G.; Borém, A.; Reynolds, M.; Gérard, B.; Molero, G.; Rodrigues, F.A. High Throughput Field Phenotyping for Plant Height Using UAV-Based RGB Imagery in Wheat Breeding Lines: Feasibility and Validation. Front. Plant Sci. 2021, 12, 591587. [Google Scholar] [CrossRef]
  18. Paturkar, A.; Gupta, G.S.; Bailey, D. Non-destructive and cost-effective 3D plant growth monitoring system in outdoor conditions. Multimed. Tools Appl. 2020, 79, 34955–34971. [Google Scholar] [CrossRef]
  19. Dandrifosse, S.; Bouvry, A.; Leemans, V.; Dumont, B.; Mercatoris, B. Imaging Wheat Canopy Through Stereo Vision: Overcoming the Challenges of the Laboratory to Field Transition for Morphological Features Extraction. Front. Plant Sci. 2020, 11, 96. [Google Scholar] [CrossRef] [PubMed]
  20. Lei, X.; Wu, M.; Li, Y.; Liu, A.; Tang, Z.; Chen, S.; Xiang, Y. Detection and Positioning of Camellia oleifera Fruit Based on LBP Image Texture Matching and Binocular Stereo Vision. Agronomy 2023, 13, 2153. [Google Scholar] [CrossRef]
  21. Liu, S.; Zhang, X.; Wang, X.; Hou, X.; Chen, X.; Xu, J. Tomato flower pollination features recognition based on binocular gray value-deformation coupled template matching. Comput. Electron. Agric. 2023, 214, 108345. [Google Scholar] [CrossRef]
  22. Liu, T.-H.; Nie, X.-N.; Wu, J.-M.; Zhang, D.; Liu, W.; Cheng, Y.-F.; Zheng, Y.; Qiu, J.; Qi, L. Pineapple (Ananas comosus) fruit detection and localization in natural environment based on binocular stereo vision and improved YOLOv3 model. Precis. Agric. 2023, 24, 139–160. [Google Scholar] [CrossRef]
  23. Zhang, H.; Tang, C.; Sun, X.; Fu, L. A Refined Apple Binocular Positioning Method with Segmentation-Based Deep Learning for Robotic Picking. Agronomy 2023, 13, 1469. [Google Scholar] [CrossRef]
  24. Ge, L.; Yang, Z.; Sun, Z.; Zhang, G.; Zhang, M.; Zhang, K.; Zhang, C.; Tan, Y.; Li, W. A Method for Broccoli Seedling Recognition in Natural Environment Based on Binocular Stereo Vision and Gaussian Mixture Model. Sensors 2019, 19, 1132. [Google Scholar] [CrossRef] [PubMed]
  25. Zheng, S.; Liu, Y.; Weng, W.; Jia, X.; Yu, S.; Wu, Z. Tomato Recognition and Localization Method Based on Improved YOLOv5n-seg Model and Binocular Stereo Vision. Agronomy 2023, 13, 2339. [Google Scholar] [CrossRef]
  26. Nguyen, A.; Le, B. 3D point cloud segmentation: A survey. In Proceedings of the 2013 6th IEEE Conference on Robotics, Automation and Mechatronics (RAM), Manila, Philippines, 12–15 November 2013; pp. 225–230. [Google Scholar]
  27. Wang, W.; Zhang, Y.; Ge, G.; Jiang, Q.; Wang, Y.; Hu, L. Indoor Point Cloud Segmentation Using a Modified Region Growing Algorithm and Accurate Normal Estimation. IEEE Access 2023, 11, 42510–42520. [Google Scholar] [CrossRef]
  28. Fu, Y.; Niu, Y.; Wang, L.; Li, W. Individual-Tree Segmentation from UAV–LiDAR Data Using a Region-Growing Segmentation and Supervoxel-Weighted Fuzzy Clustering Approach. Remote Sens. 2024, 16, 608. [Google Scholar] [CrossRef]
  29. Yang, X.; Huang, Y.; Zhang, Q. Automatic Stockpile Extraction and Measurement Using 3D Point Cloud and Multi-Scale Directional Curvature. Remote Sens. 2020, 12, 960. [Google Scholar] [CrossRef]
  30. Zhu, B.; Zhang, Y.; Sun, Y.; Shi, Y.; Ma, Y.; Guo, Y. Quantitative estimation of organ-scale phenotypic parameters of field crops through 3D modeling using extremely low altitude UAV images. Comput. Electron. Agric. 2023, 210, 107910. [Google Scholar] [CrossRef]
31. Fugacci, U.; Romanengo, C.; Falcidieno, B.; Biasotti, S. Reconstruction and Preservation of Feature Curves in 3D Point Cloud Processing. Comput.-Aided Des. 2024, 167, 103649. [Google Scholar] [CrossRef]
  32. Ghahremani, M.; Williams, K.; Corke, F.; Tiddeman, B.; Liu, Y.; Wang, X.; Doonan, J.H. Direct and accurate feature extraction from 3D point clouds of plants using RANSAC. Comput. Electron. Agric. 2021, 187, 106240. [Google Scholar] [CrossRef]
  33. Miao, Y.; Li, S.; Wang, L.; Li, H.; Qiu, R.; Zhang, M. A single plant segmentation method of maize point cloud based on Euclidean clustering and K-means clustering. Comput. Electron. Agric. 2023, 210, 107951. [Google Scholar] [CrossRef]
  34. Zou, R.; Zhang, Y.; Chen, J.; Li, J.; Dai, W.; Mu, S. Density estimation method of mature wheat based on point cloud segmentation and clustering. Comput. Electron. Agric. 2023, 205, 107626. [Google Scholar] [CrossRef]
  35. Du, R.; Ma, Z.; Xie, P.; He, Y.; Cen, H. PST: Plant segmentation transformer for 3D point clouds of rapeseed plants at the podding stage. ISPRS J. Photogramm. Remote Sens. 2023, 195, 380–392. [Google Scholar] [CrossRef]
  36. Yan, J.; Tan, F.; Li, C.; Jin, S.; Zhang, C.; Gao, P.; Xu, W. Stem–Leaf segmentation and phenotypic trait extraction of individual plant using a precise and efficient point cloud segmentation network. Comput. Electron. Agric. 2024, 220, 108839. [Google Scholar] [CrossRef]
  37. Zhang, W.; Wu, S.; Wen, W.; Lu, X.; Wang, C.; Gou, W.; Li, Y.; Guo, X.; Zhao, C. Three-dimensional branch segmentation and phenotype extraction of maize tassel based on deep learning. Plant Methods 2023, 19, 76. [Google Scholar] [CrossRef] [PubMed]
  38. Liu, Y.; Yuan, H.; Zhao, X.; Fan, C.; Cheng, M. Fast reconstruction method of three-dimension model based on dual RGB-D cameras for peanut plant. Plant Methods 2023, 19, 17. [Google Scholar] [CrossRef]
  39. Wang, L.; Miao, Y.; Han, Y.; Li, H.; Zhang, M.; Peng, C. Extraction of 3D distribution of potato plant CWSI based on thermal infrared image and binocular stereovision system. Front. Plant Sci. 2023, 13, 1104390. [Google Scholar] [CrossRef] [PubMed]
  40. Bao, Y.; Tang, L.; Breitzman, M.W.; Salas Fernandez, M.G.; Schnable, P.S. Field-based robotic phenotyping of sorghum plant architecture using stereo vision. J. Field Robot. 2019, 36, 397–415. [Google Scholar] [CrossRef]
  41. Ma, X.; Zhu, K.; Guan, H.; Feng, J.; Yu, S.; Liu, G. Calculation Method for Phenotypic Traits Based on the 3D Reconstruction of Maize Canopies. Sensors 2019, 19, 1201. [Google Scholar] [CrossRef] [PubMed]
  42. Li, Y.; Liu, J.; Zhang, B.; Wang, Y.; Yao, J.; Zhang, X.; Fan, B.; Li, X.; Hai, Y.; Fan, X. Three-dimensional reconstruction and phenotype measurement of maize seedlings based on multi-view image sequences. Front. Plant Sci. 2022, 13, 974339. [Google Scholar] [CrossRef] [PubMed]
  43. Wei, B.; Ma, X.; Guan, H.; Yu, M.; Yang, C.; He, H.; Wang, F.; Shen, P. Dynamic simulation of leaf area index for the soybean canopy based on 3D reconstruction. Ecol. Inform. 2023, 75, 102070. [Google Scholar] [CrossRef]
  44. Miao, Y.; Peng, C.; Wang, L.; Qiu, R.; Li, H.; Zhang, M. Measurement method of maize morphological parameters based on point cloud image conversion. Comput. Electron. Agric. 2022, 199, 107174. [Google Scholar] [CrossRef]
  45. Zhu, T.; Ma, X.; Guan, H.; Wu, X.; Wang, F.; Yang, C.; Jiang, Q. A calculation method of phenotypic traits based on three-dimensional reconstruction of tomato canopy. Comput. Electron. Agric. 2023, 204, 107515. [Google Scholar] [CrossRef]
  46. Wang, D.; Song, Z.; Miao, T.; Zhu, C.; Yang, X.; Yang, T.; Zhou, Y.; Den, H.; Xu, T. DFSP: A fast and automatic distance field-based stem-leaf segmentation pipeline for point cloud of maize shoot. Front. Plant Sci. 2023, 14, 1109314. [Google Scholar] [CrossRef] [PubMed]
  47. Hao, H.; Wu, S.; Li, Y.; Wen, W.; Fan, J.; Zhang, Y.; Zhuang, L.; Xu, L.; Li, H.; Guo, X.; et al. Automatic acquisition, analysis and wilting measurement of cotton 3D phenotype based on point cloud. Biosyst. Eng. 2024, 239, 173–189. [Google Scholar] [CrossRef]
  48. Hu, F.; Lin, C.; Peng, J.; Wang, J.; Zhai, R. Rapeseed Leaf Estimation Methods at Field Scale by Using Terrestrial LiDAR Point Cloud. Agronomy 2022, 12, 2409. [Google Scholar] [CrossRef]
  49. Qiao, Y.; Liao, Q.; Zhang, M.; Han, B.; Peng, C.; Huang, Z.; Wang, S.; Zhou, G.; Xu, S. Point clouds segmentation of rapeseed siliques based on sparse-dense point clouds mapping. Front. Plant Sci. 2023, 14, 1188286. [Google Scholar] [CrossRef]
  50. Xiong, X.; Yu, L.; Yang, W.; Liu, M.; Jiang, N.; Wu, D.; Chen, G.; Xiong, L.; Liu, K.; Liu, Q. A high-throughput stereo-imaging system for quantifying rape leaf traits during the seedling stage. Plant Methods 2017, 13, 7. [Google Scholar] [CrossRef]
  51. Teng, X.; Zhou, G.; Wu, Y.; Huang, C.; Dong, W.; Xu, S. Three-Dimensional Reconstruction Method of Rapeseed Plants in the Whole Growth Period Using RGB-D Camera. Sensors 2021, 21, 4628. [Google Scholar] [CrossRef] [PubMed]
  52. Besl, P.J.; Jain, R.C. Segmentation through variable-order surface fitting. IEEE Trans. Pattern Anal. Mach. Intell. 1988, 10, 167–192. [Google Scholar] [CrossRef]
  53. Stein, S.C.; Schoeler, M.; Papon, J.; Wörgötter, F. Object Partitioning Using Local Convexity. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 304–311. [Google Scholar]
  54. Wang, C.; Xu, S.; Yang, C.; You, Y.; Zhang, J.; Kuai, J.; Xie, J.; Zuo, Q.; Yan, M.; Du, H.; et al. Determining rapeseed lodging angles and types for lodging phenotyping using morphological traits derived from UAV images. Eur. J. Agron. 2024, 155, 127104. [Google Scholar] [CrossRef]
  55. Jiang, Y.; Wu, F.; Zhu, S.; Zhang, W.; Wu, F.; Yang, T.; Yang, G.; Zhao, Y.; Sun, C.; Liu, T. Research on Rapeseed Above-Ground Biomass Estimation Based on Spectral and LiDAR Data. Agronomy 2024, 14, 1610. [Google Scholar] [CrossRef]
  56. Paulus, S.; Dupuis, J.; Mahlein, A.-K.; Kuhlmann, H. Surface feature based classification of plant organs from 3D laserscanned point clouds for plant phenotyping. BMC Bioinform. 2013, 14, 238. [Google Scholar] [CrossRef] [PubMed]
  57. Xiang, L.; Bao, Y.; Tang, L.; Ortiz, D.; Salas-Fernandez, M.G. Automated morphological traits extraction for sorghum plants via 3D point cloud data analysis. Comput. Electron. Agric. 2019, 162, 951–961. [Google Scholar] [CrossRef]
  58. Xiao, S.; Chai, H.; Shao, K.; Shen, M.; Wang, Q.; Wang, R.; Sui, Y.; Ma, Y. Image-Based Dynamic Quantification of Aboveground Structure of Sugar Beet in Field. Remote Sens. 2020, 12, 269. [Google Scholar] [CrossRef]
  59. Kartal, S.; Choudhary, S.; Masner, J.; Kholová, J.; Stočes, M.; Gattu, P.; Schwartz, S.; Kissel, E. Machine Learning-Based Plant Detection Algorithms to Automate Counting Tasks Using 3D Canopy Scans. Sensors 2021, 21, 8022. [Google Scholar] [CrossRef]
  60. Li, D.; Cao, Y.; Tang, X.-s.; Yan, S.; Cai, X. Leaf Segmentation on Dense Plant Point Clouds with Facet Region Growing. Sensors 2018, 18, 3625. [Google Scholar] [CrossRef]
  61. Wang, Y.; Chen, Y. Non-Destructive Measurement of Three-Dimensional Plants Based on Point Cloud. Plants 2020, 9, 571. [Google Scholar] [CrossRef]
  62. Chen, H.; Liu, S.; Wang, C.; Wang, C.; Gong, K.; Li, Y.; Lan, Y. Point Cloud Completion of Plant Leaves under Occlusion Conditions Based on Deep Learning. Plant Phenomics 2023, 5, 0117. [Google Scholar] [CrossRef]
  63. Chen, J.; Li, Q.; Jiang, D. From Images to Loci: Applying 3D Deep Learning to Enable Multivariate and Multitemporal Digital Phenotyping and Mapping the Genetics Underlying Nitrogen Use Efficiency in Wheat. Plant Phenomics 2024, 6, 0270. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Diagram of the scanning process in the field: (a) a smart phenotyping platform; (b) a sidebar camera; (c) a data flow diagram.
Figure 2. Diagram of the colored point clouds: (a) cross-section of the point clouds; (b) three-dimensional (3D) view of rapeseed.
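For readers who want to reproduce the elevation-based coloring shown in Figure 2, the following C++/PCL sketch maps each point's z value onto a blue-to-red ramp. The function name and the particular color ramp are our illustrative choices, not details specified by the study.

```cpp
#include <algorithm>
#include <cstdint>
#include <limits>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Color each point along a blue-to-red ramp according to its elevation (z),
// so height differences are visible at a glance (illustrative sketch).
pcl::PointCloud<pcl::PointXYZRGB>::Ptr
colorizeByElevation(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& in)
{
  float z_min = std::numeric_limits<float>::max();
  float z_max = std::numeric_limits<float>::lowest();
  for (const auto& p : in->points) {  // find the elevation range
    z_min = std::min(z_min, p.z);
    z_max = std::max(z_max, p.z);
  }
  const float span = std::max(z_max - z_min, 1e-6f);  // guard against flat clouds

  pcl::PointCloud<pcl::PointXYZRGB>::Ptr out(new pcl::PointCloud<pcl::PointXYZRGB>);
  out->reserve(in->size());
  for (const auto& p : in->points) {
    const float t = (p.z - z_min) / span;  // 0 = lowest point, 1 = highest
    pcl::PointXYZRGB q;
    q.x = p.x; q.y = p.y; q.z = p.z;
    q.r = static_cast<std::uint8_t>(255.0f * t);           // red grows with height
    q.g = 0;
    q.b = static_cast<std::uint8_t>(255.0f * (1.0f - t));  // blue marks low points
    out->push_back(q);
  }
  return out;
}
```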
Figure 3. The point cloud image after fitting the plane.
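The ground-plane fitting behind Figure 3 can be approximated with PCL's RANSAC plane segmentation followed by inlier removal. This is a minimal sketch, assuming a pcl::PointXYZ cloud; the 1 cm distance threshold is an illustrative value, not the study's calibrated one.

```cpp
#include <pcl/ModelCoefficients.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/point_types.h>
#include <pcl/segmentation/sac_segmentation.h>

// Fit the dominant (ground) plane with RANSAC and discard its inliers,
// leaving only the above-ground rapeseed points.
void removeGroundPlane(pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud)
{
  pcl::ModelCoefficients::Ptr coeffs(new pcl::ModelCoefficients);
  pcl::PointIndices::Ptr inliers(new pcl::PointIndices);

  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setOptimizeCoefficients(true);
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.01);  // 1 cm band around the plane (illustrative)
  seg.setInputCloud(cloud);
  seg.segment(*inliers, *coeffs);  // inliers = points belonging to the ground

  pcl::ExtractIndices<pcl::PointXYZ> extract;
  extract.setInputCloud(cloud);
  extract.setIndices(inliers);
  extract.setNegative(true);       // keep everything except the fitted plane
  extract.filter(*cloud);
}
```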
Figure 4. Illustration of the Extended Convexity Criterion (CC) theory.
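As a brief aside on the criterion illustrated in Figure 4: in Stein et al.'s original formulation, two adjacent supervoxels with centroids x1, x2 and normals n1, n2 are treated as convexly connected, in essence, when the normals open away from each other along the line joining the centroids. The following is our paraphrase of that check, not a formula stated in this paper:

```latex
% Convexity check between two adjacent supervoxels (our paraphrase of the
% CC criterion; the extended version adds a sanity check for singular cases):
(\vec{n}_1 - \vec{n}_2) \cdot \hat{d} > 0,
\qquad
\hat{d} = \frac{\vec{x}_1 - \vec{x}_2}{\lVert \vec{x}_1 - \vec{x}_2 \rVert}
```

LCCP merges supervoxels only across convex (or mildly concave, within a tolerance) connections, which is why it tends to keep each mostly convex leaf intact while splitting at the concave junctions between leaves and stems.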
Figure 5. Pass-through filtering effect diagram: (a) the original point cloud image of the rapeseed plot; (b) the point cloud image of rapeseed after pass-through filtering.
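Pass-through filtering of the kind shown in Figure 5 maps directly onto pcl::PassThrough. A minimal sketch, assuming the elevation axis is z; the limits passed in are placeholders rather than the study's values:

```cpp
#include <pcl/filters/passthrough.h>
#include <pcl/point_types.h>

// Keep only the points whose elevation (z) lies inside [z_low, z_high];
// everything outside the band, e.g., residual ground returns, is removed.
void passThroughZ(pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,
                  float z_low, float z_high)
{
  pcl::PassThrough<pcl::PointXYZ> pass;
  pass.setInputCloud(cloud);
  pass.setFilterFieldName("z");         // filter along the elevation axis
  pass.setFilterLimits(z_low, z_high);  // retain only the band of interest
  pass.filter(*cloud);
}
```

A call such as `passThroughZ(cloud, 0.02f, 1.5f);` (limits in meters, illustrative) would keep the canopy while dropping near-ground points.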
Figure 6. The relationship between the number of removed points and the standard deviation multiple under various nearest-neighbor numbers.
Figure 7. The denoising results from the point cloud image of rapeseed after statistical filtering: (a) when k = 5, α = 0.01; (b) when k = 100, α = 0.01; (c) when k = 5, α = 0.5; (d) when k = 5, α = 5.
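The two parameters varied in Figures 6 and 7, the nearest-neighbor count k and the standard-deviation multiplier α, correspond directly to PCL's statistical outlier removal settings. A minimal sketch:

```cpp
#include <pcl/filters/statistical_outlier_removal.h>
#include <pcl/point_types.h>

// A point is rejected when its mean distance to its k nearest neighbors
// exceeds the cloud-wide mean distance by more than alpha standard deviations.
void statisticalDenoise(pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,
                        int k, double alpha)
{
  pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
  sor.setInputCloud(cloud);
  sor.setMeanK(k);                // k in Figures 6 and 7 (e.g., 5 or 100)
  sor.setStddevMulThresh(alpha);  // alpha in Figure 7 (e.g., 0.01, 0.5, or 5)
  sor.filter(*cloud);
}
```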
Figure 8. Segmentation results of a single rapeseed plant based on region growing, with curvature thresholds of (a) 0.5, (b) 1.0, and (c) 1.5.
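A sketch of the region-growing configuration behind Figure 8 is given below. The smoothness (normal-angle) threshold of 5.0/180.0 * M_PI rad and the curvature threshold of 1.5 are the values reported in this study; the neighbor count, normal-estimation settings, and minimum cluster size are our illustrative defaults.

```cpp
#include <cmath>
#include <vector>
#include <pcl/features/normal_3d.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/region_growing.h>

// Region-growing leaf segmentation: points are merged into a region while
// their normals stay within the smoothness threshold and their curvature
// stays below the curvature threshold.
std::vector<pcl::PointIndices>
segmentLeaves(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud)
{
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(
      new pcl::search::KdTree<pcl::PointXYZ>);

  // Estimate per-point normals and curvature first.
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud(cloud);
  ne.setSearchMethod(tree);
  ne.setKSearch(30);                              // illustrative
  ne.compute(*normals);

  pcl::RegionGrowing<pcl::PointXYZ, pcl::Normal> rg;
  rg.setSearchMethod(tree);
  rg.setNumberOfNeighbours(30);                   // illustrative
  rg.setMinClusterSize(100);                      // illustrative
  rg.setInputCloud(cloud);
  rg.setInputNormals(normals);
  rg.setSmoothnessThreshold(5.0 / 180.0 * M_PI);  // normal-angle threshold (paper)
  rg.setCurvatureThreshold(1.5);                  // curvature threshold (paper)

  std::vector<pcl::PointIndices> clusters;
  rg.extract(clusters);                           // ideally one cluster per leaf
  return clusters;
}
```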
Figure 9. Evaluation of the leaf area accuracy of Huyou 039.
Figure 10. Segmentation results of rapeseed leaves at the bolting stage using (a) the region-growing algorithm and (b) the LCCP algorithm.
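For the LCCP result in Figure 10b, a pipeline of the following shape is typical: over-segment the cloud into supervoxels, then merge supervoxels connected across locally convex edges. This sketch follows PCL's supervoxel + LCCP interface; every numeric parameter below is illustrative, not the study's tuned value.

```cpp
#include <cstdint>
#include <map>
#include <pcl/point_types.h>
#include <pcl/segmentation/lccp_segmentation.h>
#include <pcl/segmentation/supervoxel_clustering.h>

// LCCP segmentation: supervoxels that meet across convex edges are merged,
// so each mostly convex leaf surface ends up as a single labeled segment.
pcl::PointCloud<pcl::PointXYZL>::Ptr
lccpSegment(const pcl::PointCloud<pcl::PointXYZRGBA>::Ptr& cloud)
{
  // Step 1: over-segment into supervoxels (resolutions in meters, illustrative).
  pcl::SupervoxelClustering<pcl::PointXYZRGBA> super(0.005f /*voxel*/,
                                                     0.03f  /*seed*/);
  super.setInputCloud(cloud);
  super.setColorImportance(0.0f);
  super.setSpatialImportance(1.0f);
  super.setNormalImportance(4.0f);

  std::map<std::uint32_t, pcl::Supervoxel<pcl::PointXYZRGBA>::Ptr> svs;
  super.extract(svs);
  std::multimap<std::uint32_t, std::uint32_t> adjacency;
  super.getSupervoxelAdjacency(adjacency);

  // Step 2: merge supervoxels across (near-)convex connections.
  pcl::LCCPSegmentation<pcl::PointXYZRGBA> lccp;
  lccp.setConcavityToleranceThreshold(10.0f);  // tolerated concavity, degrees
  lccp.setSanityCheck(true);                   // extended convexity criterion
  lccp.setInputSupervoxels(svs, adjacency);
  lccp.segment();

  // Step 3: relabel the supervoxel-labeled cloud with the merged segments.
  pcl::PointCloud<pcl::PointXYZL>::Ptr labeled = super.getLabeledCloud();
  lccp.relabelCloud(*labeled);
  return labeled;
}
```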
Figure 11. The point cloud of overlapping leaves: (a) the red circle highlights the overlapping region; (b) an enlarged view of the overlapping area.
Table 1. The specific parameters of the stereo camera.

| Parameter Name | Detailed Description |
|---|---|
| Camera size (L × W × H) | Topside: 400 × 66 × 75 mm; sidebar: 260 × 66 × 75 mm |
| Output data | X/Y/Z depth point cloud data |
| Weight | 0.75 kg |
| Baseline distance | Topside: 320 mm; sidebar: 160 mm |
| Resolution | 1536 × 2048 |
| Lens focus | 6 mm |
| Detection accuracy | Spatial resolution: ±1 mm; positional repeatability: ±0.5 mm |
| Lens interface | M12 |
| Maximum scanning frequency | 2000 Hz |
| Exposure mode | Global shutter |
| External interface | Gigabit Ethernet port |
| Communication method | Communication SDK |
| Laser perspective | 60° |
| Laser wavelength | 850 nm |
| Laser power | 1000 mW |
| Temperature | Working: −10~50 °C; storage: −20~70 °C |
Table 2. Denoising accuracy of the rapeseed point cloud using the pass-through filtering method.

| Point cloud no. | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| Number of original points | 18,386,979 | 20,343,619 | 20,404,340 | 20,356,679 | 20,226,673 | 20,337,138 |
| Number of denoised points | 9,526,272 | 10,274,000 | 10,428,856 | 11,763,012 | 10,844,220 | 10,093,445 |
| Denoising ratio α (%) | 48.2 | 49.5 | 48.9 | 42.2 | 46.4 | 50.4 |
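The denoising ratio α in Table 2 is the percentage of points removed; this reading is consistent with every column of the table. Note that this α is distinct from the standard-deviation multiplier α used in Figures 6 and 7.

```latex
% Denoising ratio in Table 2: the percentage of points removed,
% verified against column 1 of the table.
\alpha = \left(1 - \frac{N_{\text{denoised}}}{N_{\text{original}}}\right) \times 100\%,
\qquad
\text{e.g.}\;\left(1 - \frac{9{,}526{,}272}{18{,}386{,}979}\right) \times 100\% \approx 48.2\%
```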
Table 3. Comparison of the total number of points before and after statistical filtering, for different numbers of neighboring points k.

| Metric | k = 10 | k = 20 | k = 30 | k = 50 |
|---|---|---|---|---|
| Total points before denoising | 20,259 | 20,259 | 20,259 | 20,259 |
| Total points after denoising | 17,190 | 17,182 | 17,192 | 17,177 |
| Number of points removed | 3069 | 3077 | 3067 | 3082 |
| Percentage of points removed (%) | 15.15 | 15.19 | 15.14 | 15.21 |