
sensors

Article
Body Weight Estimation for Pigs Based on 3D Hybrid Filter and
Convolutional Neural Network
Zihao Liu 1,2,†, Jingyi Hua 2,3,†, Hongxiang Xue 1,2,*, Haonan Tian 2,3, Yang Chen 3 and Haowei Liu 1

1 College of Engineering, Nanjing Agricultural University, Nanjing 210031, China


2 Key Laboratory of Breeding Equipment, Ministry of Agriculture and Rural Affairs, Nanjing 210031, China
3 College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210031, China
* Correspondence: [email protected]
† These authors contributed equally to this work.

Abstract: The measurement of pig weight holds significant importance for producers as it plays a
crucial role in managing pig growth, health, and marketing, thereby facilitating informed decisions
regarding scientific feeding practices. On one hand, the conventional manual weighing approach
is characterized by inefficiency and time consumption. On the other hand, it has the potential to
induce heightened stress levels in pigs. This research introduces a hybrid 3D point cloud denoising
approach for precise pig weight estimation. By integrating statistical filtering and DBSCAN clustering
techniques, we mitigate weight estimation bias and overcome limitations in feature extraction. The
convex hull technique refines the dataset to the pig’s back, while voxel down-sampling enhances
real-time efficiency. Our model integrates pig back parameters with a convolutional neural network
(CNN) for accurate weight estimation. Experimental analysis indicates that the mean absolute error
(MAE), mean absolute percent error (MAPE), and root mean square error (RMSE) of the weight
estimation model proposed in this research are 12.45 kg, 5.36%, and 12.91 kg, respectively. In contrast
to the currently available weight estimation methods based on 2D and 3D techniques, the suggested
approach offers the advantages of simplified equipment configuration and reduced data processing
complexity. These benefits are achieved without compromising the accuracy of weight estimation.
Consequently, the proposed method offers an effective monitoring solution for precise pig feeding
management, reducing labor requirements and improving welfare in pig breeding.
Keywords: pig weight estimation; 3D sensor; point cloud segmentation; convolutional neural network

1. Introduction

Pork is an essential source of animal protein and fat for European, Chinese, and some other populations, and plays a crucial role in ensuring food security and social stability [1–3]. In China, the pig industry is a traditionally advantageous industry that contributes greatly to sustaining the agricultural market, fostering the growth of the national economy, and increasing farmers' incomes. As such, the quantity of pig breeding has witnessed a notable increase. However, conventional pig breeding methods have proven inadequate for keeping pace with the rapid development of pig output. The integration of science and technology has facilitated the widespread adoption of contemporary technologies in many aspects of livestock production.

Consequently, the improvement of pig production and quality has consistently remained a central objective in the development of agricultural strategies [4–6]. Among these objectives, pig weight stands out as one of the most influential factors in pig production and pork quality. It provides essential information on feed conversion rate, growth rate, disease, development uniformity, and the health status of pig herds [7–10]. Presently, the traditional manual pig-driving weighing method involves direct contact weighing on a scale. This method is known for its time-consuming, labor-intensive nature and is prone to causing stress reactions in pigs. These limitations make it challenging to meet the requirements of large-scale breeding for real-time monitoring of pig weight [11].
Non-contact monitoring of pig weight has progressively become a research hotspot, with weight estimation primarily relying on 2D and 3D images [12–15]. In 2D imaging, segmentation methods are employed to delineate the pig's outline, while weight estimation is based on a relationship model using geometric parameters and actual body weight, as illustrated in Figure 1. In 2017, Okinda [16] used HD cameras to capture overhead images of pigs, applied the background difference method to eliminate background interference, and devised a weight estimation model using linear regression and adaptive fuzzy reasoning. This model necessitates manual selection of optimal inference images due to the influence of porcine posture and movement. In 2018, Suwannakhun et al. [17] used an adaptive threshold segmentation method to identify the pig's body region and developed a body weight estimation model leveraging eight features, including color, texture, center of mass, main axis length, minimum axis length, eccentricity, and area, to mitigate the impact of pig movement. Nevertheless, 2D methods often fall short in accurately estimating the animal's volume or parameters associated with its body surface. Furthermore, variations in camera parameters, object distance, and lighting conditions have a significant impact on the measurement findings, resulting in limited adaptability and posing challenges in enhancing accuracy in real-world multi-scene applications. In the context of weight regression estimation, the conventional regression approach exhibits limited capability in feature learning and fails to comprehensively capture the interplay among the defining factors of pig body size.

Figure 1. The fundamental stages involved in 2D image processing for the purpose of estimating pig weight [12].

Research aimed at estimating the weight of pigs based on three-dimensional data has been undertaken to address these problems. Liu et al. [18] used binocular vision technology to collect three-dimensional image data of pigs. They extracted growth-related data, including chest circumference, buttock height, body length, body height, and body width, through image processing methods. After that, they developed estimation models based on linear functions, nonlinear functions, machine learning algorithms, and deep learning algorithms, among others, to estimate pig weight. The correlation coefficient with the measured value was 97%, and the average relative error was 2.5%, outperforming models solely relying on back area measurements. As the price of depth cameras continues to fall and their performance steadily improves, they have been increasingly employed in studies of animal body measurement, body condition and weight estimation, automatic behavioral identification, etc. In 2021, He et al. [19] used a D430 camera to capture depth images of pigs. They employed instance segmentation, distance independence, noise reduction, and rotational correction to eliminate background interference and constructed a BotNet-based model for estimating the body weight of pigs. In 2023, Kwon et al. [20] introduced a method for accurately estimating pig weight using point clouds, as depicted in Figure 2. They used a rapid and robust reconstructed grid model to address the problem of unrecognized noise or missing points and optimized deep neural networks for precise weight estimation. While the aforementioned techniques have demonstrated enhanced precision in estimating the weight of pigs, it is crucial to acknowledge that the hardware systems employed in these experiments are intricate. In real-world production settings, the available space is often limited, hindering the ability to conduct multi-directional or azimuthal livestock detection. Meanwhile, data collection conditions may not be optimal, posing challenges in meeting current manufacturing demands.

Figure 2. Configuration of the acquisition system for point clouds [20].

In conclusion, there is a pressing requirement for a precise and cost-effective intelligent system to estimate pig weight accurately. Such a system should be applicable to large-scale pig farms, facilitating the monitoring and analysis of pig growth processes and serving as an effective tool for precise pig feeding and management.

The primary contributions of this article can be summarized as follows:

(1) Point cloud filtering: the denoising of large-scale and small-scale discrete points is achieved using statistical filtering and DBSCAN point cloud segmentation, respectively, resulting in accurate segmentation of the pig back image. What is more, the voxel down-sampling approach is employed to reduce the resolution of the point cloud data on the pig's back, so as to reduce the volume of point cloud data while preserving the inherent characteristics of the original dataset.

(2) In terms of weight estimation, the plane projection technique is employed to determine the body size of the pig, while the convex hull detection method is utilized to eliminate the head and tail regions of the pig. The point set information in the pig point cloud data is input into a convolutional neural network to construct the pig weight estimation model, which realizes the full learning of features and accurate estimation of pig weight.

(3) As for actual breeding, the estimation method constructed in this research overcomes the problems of high cost, data processing difficulties, space occupation, and layout restrictions while ensuring estimation accuracy, which can provide technical support for precise feeding, selection, and breeding of pigs.

This paper is structured as follows: Section 2 introduces the point cloud data collection, noise reduction segmentation, feature extraction, and weight estimation methods; in Section 3, the performance of denoising segmentation and the weight estimation model is analyzed; Section 4 discusses the comparison with existing methods, the limitations and improvements of the research, and outlines potential future work; finally, Section 5 presents the conclusions.

2. Materials and Methods

This section primarily describes the experimental objects and techniques employed. Figure 3 depicts the technical path of this research.

Figure 3. The technical path.

2.1. Animals and Housing

The experimental period spanned from 7 June to 27 June 2022, and the experimental site was located in Wens Pig Farm, Xinxing County, Yunfu City, Guangdong Province, China. The experimental subjects were 198 pigs (Large White × Landrace breed), with body weights ranging from 190 kg to 300 kg. The equipment was installed in the passageway connecting the gestation house to the farrowing house of the farm, which had a width of 1.7 m and a height of 2.8 m. All experimental design and procedures conducted in this study received approval from the Animal Care and Use Committee of Nanjing Agricultural University, in compliance with the Regulations for the Administration of Affairs Concerning Experimental Animals of China (Certification NJAU.No20220530088).

2.2. Data Acquisition

A 3D depth Intel RealSense D435i camera (manufactured by Intel Corp., Santa Clara, CA, USA) was used to acquire color and depth images. This camera features a complete depth processing module, a vision processing module, and a signal processing module, and is capable of delivering depth data and color information at a rate of 90 frames per second. What is more, it was installed 2.5 m directly above the XK3190-A12 weighing scale (manufactured by Henan Nanshang Husbandry Technology Co., Ltd., Nanyang, China). An illustration of the pig weight data acquisition system is presented in Figure 4.

Figure 4. The pig weight data acquisition system.


2.3. Data Set Construction

To improve the efficiency of data acquisition, this research used ROS parsing of the recorded video streams to acquire 3D point cloud data in PLY format. Each data set comprises a color image and a depth image, captured at a resolution of 640 × 480 pixels and a frame rate of 30 frames per second. A portion of the dataset is depicted in Figure 5.

Figure 5. The dataset fragment. (a) Color images; (b) depth images; (c) color images after mapping.
Point cloud files with missing point clouds and incomplete pig images were excluded, as shown in Figure 6; only point cloud files featuring complete pig images were retained. To ensure data sufficiency and meet the requirements of the pig weight estimation model, 50 point cloud files were selected for each pig. Finally, a total of 10,000 sets of pig 3D point cloud data were acquired.

Figure 6. Incomplete pig picture. (a) No pig; (b) pig with missing body; (c) pig with missing rump; (d) pig with missing head.

2.4. Model Construction

2.4.1. Overall Process

The primary components of the method proposed in this research are point cloud segmentation and weight estimation, as illustrated in Figure 7. In order to address the adhesion problem between pigs and the background in the segmentation of the pig point cloud, statistical filtering and DBSCAN were used to separate the pig point cloud from the background. To obtain complete point cloud data of the pigs' backs, plane projection was employed to capture images of the pig's back. After that, convex hull detection was applied to remove extraneous head and tail point cloud data. To reduce the computational complexity of subsequent processing, voxel down-sampling was used to reduce the quantity of point cloud data on the pigs' backs. Finally, for precise pig weight estimation, we devised a weight estimation model based on back parameter characteristics and a convolutional neural network (CNN) during the weight estimation phase.

Figure 7. Overall process.
2.4.2. Data Denoising

The collected data contain some three-dimensional point cloud noise, attributed to occluders, equipment limitations, and external conditions [21]. This noise can compromise the accuracy of expressing useful information, which results in a large error in pig weight estimation. To address this issue, the noise points within the point cloud were separated into large-scale outlier points and small-scale discrete points based on their distribution characteristics. Prior research has demonstrated that employing a diverse range of hybrid filtering algorithms tailored to specific attributes of noise might yield superior outcomes [22]. In this research, a statistical filter was used to remove large-scale discrete points, and DBSCAN was used to remove small-scale discrete points.
Preserving the local features of point cloud data is of utmost importance when capturing the physical characteristics of pigs. Within the collected 3D point cloud data, there exists large-scale noise, primarily consisting of sparse points that deviate from the pig's body point cloud and hover above it. What is more, there are smaller, denser point clouds that are more distant from the center of the pig's body point cloud. At present, techniques such as passthrough filtering, statistical filtering, conditional filtering, and guided filtering are usually employed to remove such discrete points [23]. Passthrough filtering is effective at removing noise in large-scale backgrounds; however, it performs poorly on randomly generated discrete noise and may even inadvertently remove pig point cloud data.
This study mainly employs the open-source library Open3D (version 0.13.0) to process three-dimensional point cloud images. Open3D is an open-source library that supports the rapid development of software for handling 3D data. The statistical filtering algorithm uses the distances between each queried point in the point cloud data and all neighboring points in its vicinity to perform statistical analysis and eliminate large-scale discrete points [24]. When the point cloud density of a particular location falls below a predetermined threshold, those points are removed as discrete points. The exact procedure is outlined as follows:
Read in the 3D point cloud data set $P(p_1, p_2, \ldots, p_n)$ and establish the k-d tree data structure of the point cloud data.

For each point $p_i$ in the point cloud, define the required nearest-neighbor parameter $k$, establish the $k$-nn neighborhood based on the value of $k$, and calculate the average distance to the $k$ nearest neighbors, as shown in Equations (1) and (2), where $d_{ij}$ is the spatial distance between point $p_i$ and point $p_j$, and $\bar{d}_i$ is the average distance between point $p_i$ and its $k$ nearest neighbors.

$d_{ij} = \sqrt{(p_{ix} - p_{jx})^2 + (p_{iy} - p_{jy})^2 + (p_{iz} - p_{jz})^2}$ (1)

$\bar{d}_i = \frac{1}{k} \sum_{j=1}^{k} d_{ij}$ (2)

On the basis of the last step, calculate the mean $\bar{d}_{ni}$ of the average distances $\bar{d}_i$ over all points, together with their standard deviation $d_{std}$, as shown in Equation (3):

$d_{std} = \sqrt{\dfrac{\sum_{i=1}^{n} \left(\bar{d}_{ni} - \bar{d}_i\right)^2}{n-1}}$ (3)

Determine whether the average distance $\bar{d}_i$ between the current point $p_i$ and its $k$ nearest neighbors is greater than the set threshold $L$: when $\bar{d}_i > L$, delete point $p_i$; when $\bar{d}_i \le L$, retain point $p_i$. As shown in Equation (4), $\sigma$ is a coefficient whose value is generally chosen according to the distribution of the measured point cloud data.

$L = \bar{d}_{ni} + \sigma \times d_{std}$ (4)


Traverse all points in the point cloud for calculation until all point cloud data are processed, and output the filtered point cloud data. The statistical filtering process is presented in Figure 8.

Figure 8. Statistical filtering process diagram.
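To make Equations (1)–(4) concrete, the following is a minimal Python sketch of the statistical filter, assuming the point cloud is an n × 3 NumPy array; it uses SciPy's cKDTree for the k-d tree, and the commented Open3D call at the end is the equivalent library routine for the parameters adopted in this study (30 neighborhood points and a threshold of 2; see Section 3.1).

import numpy as np
from scipy.spatial import cKDTree

def statistical_filter(points, k=30, sigma=2.0):
    # Build the k-d tree over the n x 3 point array.
    tree = cKDTree(points)
    # Query k + 1 neighbors per point; the first hit is the point itself.
    dists, _ = tree.query(points, k=k + 1)
    d_i = dists[:, 1:].mean(axis=1)   # mean neighbor distance, Eq. (2)
    d_ni = d_i.mean()                 # global mean of the d_i values
    d_std = d_i.std(ddof=1)           # standard deviation, Eq. (3)
    L = d_ni + sigma * d_std          # threshold, Eq. (4)
    return points[d_i <= L]           # retain points with d_i <= L

# Equivalent Open3D routine on a PointCloud object pcd:
# cl, ind = pcd.remove_statistical_outlier(nb_neighbors=30, std_ratio=2.0)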

Using statistical filtering to denoise 3D point cloud data removes some of the large-scale noise, but some still remains. These noise points, mixed with the pig's data points, can interfere with the surface smoothness of 3D model reconstruction. To address this issue, a secondary denoising step incorporating additional filtering algorithms is required. Employing a hybrid filtering approach serves a dual purpose: it removes small-scale noise and mitigates the remaining large-scale noise. Clustering algorithms help tackle small-scale noise to a certain degree; by identifying points with high similarity within clusters, noise points can be distinguished from valid data points. Common clustering algorithms include K-Means clustering, DBSCAN clustering, and region growing clustering. K-Means clustering relies on geometric distance and proves advantageous when dealing with point clouds where the number and location of seed points are known [23]. However, when applied to interconnected point clouds, it may cause issues such as over-segmentation or under-segmentation due to its reliance on a single growth criterion or stopping condition. In contrast, DBSCAN clustering leverages point cloud density for clustering. During the clustering process, it randomly selects a core object as a seed and proceeds to identify all sample sets that are density-reachable from this core object. This process repeats until all core objects are clustered. DBSCAN clustering excels in situations where different regions of the pig's body exhibit similar point cloud densities, yielding robust clustering results.
The density clustering algorithm has been employed to remove noise from point clouds because of its sensitivity and effectiveness in recognizing small-scale noise points [25]. In this section, the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm was used for noise removal from the point cloud. DBSCAN is a density-based spatial clustering algorithm that clusters regions with sufficient density [26]. It also defines a cluster as the largest set of density-connected points, allowing for the identification of clusters with arbitrary shapes in a noisy spatial database [27,28], as illustrated in Figure 9.

Figure 9. Cluster region division.
The length parameter epsilon (Eps) and minimum points (MinPts) are crucial factors in determining the clustering effect of the DBSCAN clustering algorithm [29]. MinPts sets the threshold for the minimum number of data points needed to form a density region, while Eps quantifies the distance between a data point and its neighbors. Core points, border points, and noise points can be derived from these two parameters, as illustrated in Figure 10. Notably, there exist four relationships between these three types of data points: directly density-reachable, density-reachable, density-connected, and non-density-connected, as illustrated in Figure 11.

Figure 10. Cluster point classification.

Figure 11. Cluster density relation.

The pseudocode for the clustering algorithm based on DBSCAN used in this research is shown in Algorithm 1.

Algorithm 1. The pseudocode for the clustering algorithm based on DBSCAN

Input: Dataset Ds, parameters c and MinPts
Output: Clustering result Cl
DBSCAN (Ds, c, MinPts)
  Cl ← 0
  every point in Ds is unlabeled
  for each unlabeled point p ∈ Ds do
    mark p as labeled
    set c_neighborhood as p's c_neighborhood
    if sizeof (c_neighborhood) < MinPts then
      mark p as noise point
    else
      Cl ← Cl + 1
      add p to cluster Cl
      neighbor ← c_neighborhood
      for each point q in neighbor do
        if q is unlabeled then
          mark q as labeled
          set c_neighborhood' as q's c_neighborhood
          if sizeof (c_neighborhood') >= MinPts then
            neighbor ← neighbor ∪ c_neighborhood'
          end if
        end if
        if q is not assigned to any cluster then
          add q to cluster Cl
        end if
      end for
    end if
  end for
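In practice, this clustering step can be run with Open3D's built-in DBSCAN, as in the hedged sketch below; the Eps value of 0.02 follows Section 3.2, while the MinPts value of 10, the input file name, and the largest-cluster heuristic are illustrative assumptions rather than the study's exact settings.

import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("pig.ply")  # hypothetical input file
# Label every point; DBSCAN marks noise points with the label -1.
labels = np.array(pcd.cluster_dbscan(eps=0.02, min_points=10))
valid = labels[labels >= 0]
if valid.size > 0:
    # Keep the largest cluster, assumed here to be the pig's body.
    largest = int(np.bincount(valid).argmax())
    pig = pcd.select_by_index(np.where(labels == largest)[0])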

2.4.3. Splitting the Head and Tail

It is necessary to remove the head and tail point cloud data because the poses of pigs' heads and tails frequently change, which has a remarkable impact on such essential characteristic parameters as the envelope volume, envelope surface area, back area, and body length. Initially, the point cloud data were projected onto a plane to create a 2D outline of the pig's back. The minimum envelope polygon of the pig's body was derived from the envelope vertices and line segments corresponding to the pig's head and tail. The sag degree of the pig contour was calculated based on the points where the contour envelope overlapped with the pig's contour and the pixel distance between neighboring points and the pig's contour. A deeper depression indicated a higher probability of head and tail split points. Given the fixed direction of the pig's head, the short axis of some particles in the pig's body was chosen as the dividing line; the point furthest from this axis was considered the dividing point of the tail, while the point closest to the axis was considered the dividing point of the head. Figure 12 depicts the comparison before and after head and tail removal.

Figure 12. Comparison before and after head and tail removal: (a) original image; (b) processed image.

2.4.4. Point Cloud Sampling

This research performs voxel down-sampling of the point cloud data because the amount of 3D point cloud data is exceedingly large and irregular, which has a profound effect on the model's input. The particular procedure is as follows [30]:

Based on the point cloud data within the point cloud files, the maximum and minimum values of X, Y, Z in the 3D space coordinates are obtained. The maximum values of the 3D coordinates are represented as $x_{max}$, $y_{max}$, $z_{max}$, while the minimum values are denoted as $x_{min}$, $y_{min}$, $z_{min}$. We used the NumPy library to construct a variant of the sinc function, generating an $n \times 3$ matrix of $x$, $y$, $z$. Each entry represents the 3D coordinates $xyz$ of a point, with the relationship between $z$, $x$, and $y$ as follows:

$z = \dfrac{\sin(x^2 + y^2)}{x^2 + y^2}$ (5)

The side length of the regular cube grid is represented as $r$. The smallest bounding body of the point cloud data is determined by the maximum and minimum values of the 3D coordinate points in the point cloud data.

Calculate the size of the voxel grid:

$D_x = \dfrac{x_{max} - x_{min}}{r}, \quad D_y = \dfrac{y_{max} - y_{min}}{r}, \quad D_z = \dfrac{z_{max} - z_{min}}{r}$ (6)

Calculate the index of each regular small cube grid, where the index is represented by $h$:

$h_x = \dfrac{x - x_{min}}{r}, \quad h_y = \dfrac{y - y_{min}}{r}, \quad h_z = \dfrac{z - z_{min}}{r}, \quad h = h_x + h_y \times D_x + h_z \times D_x \times D_y$ (7)

Sort the elements in $h$ by size, and then calculate the center of gravity of the points in each regular cube to represent all points in that grid cell.
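A minimal NumPy sketch of this voxel-grid procedure, under the assumption that the divisions in Equation (7) are floored to integer indices, is given below; in Open3D the same step reduces to pcd.voxel_down_sample(voxel_size=r).

import numpy as np

def voxel_downsample(points, r):
    # Voxel grid size, Eq. (6), and per-point voxel indices, Eq. (7).
    mins = points.min(axis=0)
    D = np.ceil((points.max(axis=0) - mins) / r).astype(int)
    hx, hy, hz = np.floor((points - mins) / r).astype(int).T
    h = hx + hy * D[0] + hz * D[0] * D[1]
    # Sort by voxel index, then replace each voxel's points by their
    # center of gravity.
    order = np.argsort(h)
    pts = points[order]
    _, starts = np.unique(h[order], return_index=True)
    return np.array([seg.mean(axis=0) for seg in np.split(pts, starts[1:])])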

2.4.5. Back Feature Extraction

Several studies have demonstrated a correlation between pig body measurements, body weight, and other growth parameters; a relationship model can be used to estimate the body weight of pigs effectively [31,32]. Body size parameters were extracted based on critical points along the back contour of the pigs, and a relationship model between body size and body weight was established.

During the process of acquiring pig 3D point cloud data, the pig's body is often inclined, making it difficult to extract body size parameters. In this research, a minimum horizontal bounding box was created for the point cloud of the pig's back. The bounding box kept its borders parallel to the X and Y axes, ensuring that the rear image remained horizontal. As shown in Figure 13, the 3D point cloud data were projected onto the horizontal plane to obtain the horizontal projection of the point cloud of the pig's back.

Figure 13. Pig point cloud projection. (a) Before horizontal processing; (b) after horizontal processing.
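One way to realize this alignment with Open3D is sketched below; approximating the minimum horizontal bounding box with Open3D's oriented bounding box is an assumption on our part, not necessarily the study's exact implementation.

import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("pig_back.ply")  # hypothetical input file
# Rotate the cloud so that its oriented bounding box becomes axis-aligned,
# keeping the box borders parallel to the X and Y axes.
obb = pcd.get_oriented_bounding_box()
pcd.rotate(obb.R.T, center=obb.center)
# Project onto the horizontal plane by dropping the z coordinate.
xy = np.asarray(pcd.points)[:, :2]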
The six key points along the pig's outline were extracted. $L_2$ represented the shortest distance between the two ends of the outline, while $L_1$ represented the greatest distance between the ends of the left side and $L_3$ signified the longest distance between the ends of the right side. The calculation formulas are as follows, where $x$ and $y$ represent the coordinates of the respective points on the plane:

$L_1 = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$ (8)

$L_2 = \sqrt{(x_3 - x_4)^2 + (y_3 - y_4)^2}$ (9)

$L_3 = \sqrt{(x_5 - x_6)^2 + (y_5 - y_6)^2}$ (10)
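Equations (8)–(10) are plain Euclidean distances, so a small NumPy helper suffices; the key-point coordinates below are placeholders, since the actual values come from the extracted contour.

import numpy as np

# Six extracted key points as (x, y) pairs; the values are placeholders.
key_points = np.array([[0.0, 0.0], [1.2, 0.1], [0.1, 0.4],
                       [1.1, 0.5], [0.0, -0.4], [1.2, -0.3]])

def dist(p, q):
    # Euclidean distance between two planar key points, Eqs. (8)-(10).
    return float(np.hypot(*(p - q)))

L1 = dist(key_points[0], key_points[1])  # Eq. (8)
L2 = dist(key_points[2], key_points[3])  # Eq. (9)
L3 = dist(key_points[4], key_points[5])  # Eq. (10)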
Then, the convex hull model of the pig's back was constructed, and the point cloud of the pig's back was encapsulated in the form of a minimum convex hull so that the area and volume of the convex hull model could be calculated. The extraction results are depicted in Figure 14.

Figure 14. Pig back envelope.
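The hull area and volume can be read directly from SciPy's ConvexHull (Open3D's compute_convex_hull offers the same); the sketch assumes back_points is the n × 3 back cloud left after head and tail removal, with random placeholder data standing in for it.

import numpy as np
from scipy.spatial import ConvexHull

# n x 3 pig-back cloud after head/tail removal (placeholder data here).
back_points = np.random.rand(500, 3)

hull = ConvexHull(back_points)
envelope_area = hull.area      # surface area of the minimum convex hull
envelope_volume = hull.volume  # volume enclosed by the hull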

2.4.6. Weight Estimation

Nowadays, methods for estimating pigs' weight mainly fall into three groups: classical regression methods (including linear regression, polynomial regression, ridge regression, and logistic regression), machine learning methods (including decision tree regression, random forest regression, and support vector regression (SVR)), and deep learning methods (such as the multi-layer perceptron (MLP) and the convolutional neural network (CNN)). Classical regression models, though commonly used, may be unstable due to the nonlinear relationships and complex features among body feature parameters. Machine learning regression methods, though flexible in handling nonlinear relationships, can be sensitive to outliers and noise within the training data. In contrast, deep learning-based regression methods excel in handling nonlinear patterns and demonstrate great scalability compared to the first two groups of regression estimation methods. CNN possesses strong data-processing capabilities and can delve deeply into data relationship information [33]. As a network with multiple layers, a CNN consists primarily of an input layer, convolutional layers, down-sampling layers, fully connected layers, and an output layer [34]. Its fundamental concept lies in the use of neuron weight sharing, which reduces the number of network parameters, simplifies the network, and enhances execution efficiency [35].
Input layer: The initial sample data are input into the network to facilitate the feature
learning of the convolutional layer, and the input data are preprocessed to ensure that the
data align more closely with the requirements of network training.
Convolution layer: In this layer, the input features from the previous layer are convolved with filters, and a bias term is added. Nonlinear activation functions are then applied to obtain the final output of the convolutional layer.
The output of the convolution layer is expressed as follows:

$u_{j'}^{l} = \sum_{i' \in R_{j'}} \chi_{i'}^{l-1} * W_{i'j'}^{l} + B_{j'}^{l}$ (11)

$\chi_{j'}^{l} = f\left(u_{j'}^{l}\right)$ (12)

where $\chi_{i'}^{l-1}$ represents the activation value of feature $i'$ in layer $l-1$, $W_{i'j'}^{l}$ represents the convolution kernel between feature $j'$ of layer $l$ and feature $i'$ of the previous layer, and $B$ denotes the bias value. $u_{j'}^{l}$ describes the weighted sum of feature $j'$ in layer $l$, $f(\cdot)$ represents the activation function, and $*$ indicates the convolution operation.
All outputs are connected to adjacent neurons in the previous layer, which helps the network learn sample features, reduces the number of parameter connections, and improves learning efficiency.
Down-sampling layer: In this layer, the output features are processed via pooling using the down-sampling function. After that, weighted and biased calculations are performed on the processed features to obtain the output of the down-sampling layer:

$u_{j'}^{l} = \beta_{j'}^{l} \, \mathrm{down}\left(\chi_{i'}^{l-1}\right) + B_{j'}^{l}$ (13)

$\chi_{j'}^{l} = f\left(u_{j'}^{l}\right)$ (14)

where $\beta$ represents the weight coefficient, and $\mathrm{down}(\cdot)$ is the down-sampling function.

Fully connected layer: In the fully connected layer, all two-dimensional features are transformed into one-dimensional features, which serve as the output of the layer, and the output is obtained through weighted summation:

$\chi^{l} = f\left(u^{l}\right)$ (15)

$u^{l} = \omega'^{\,l} \chi^{l-1} + B^{l}$ (16)
Output layer: The main job of this layer is to map the feature vector obtained by the CNN model to the final result and output it.
The training process of CNN mainly involves two phases: the forward propagation
phase and the backward propagation phase. During the former one, input data are pro-
cessed through intermediate layers to generate output values and a loss function. In the
latter phase, when there is an error between the output values and the target values, the
parameters of each layer in the network are optimized through gradient descent based on
the loss function.
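The paper does not publish the exact network configuration, so the PyTorch sketch below is only one plausible reading of the description above: a one-dimensional convolutional stack over the sampled point set, with the back parameters concatenated before the fully connected layers; every layer size here is an assumption.

import torch
import torch.nn as nn

class WeightCNN(nn.Module):
    # Illustrative architecture; the layer sizes are assumptions and not
    # the configuration used in the paper.
    def __init__(self, n_back_params=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),                      # down-sampling layer
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),              # global pooling
        )
        self.regressor = nn.Sequential(           # fully connected layers
            nn.Linear(64 + n_back_params, 64), nn.ReLU(),
            nn.Linear(64, 1),                     # scalar weight estimate
        )

    def forward(self, points, back_params):
        # points: (batch, 3, n_points); back_params: (batch, n_back_params)
        x = self.features(points).squeeze(-1)
        return self.regressor(torch.cat([x, back_params], dim=1))

# One forward/backward step would use a mean-squared-error loss, e.g.:
# loss = nn.functional.mse_loss(model(points, params).squeeze(1), weights)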

2.5. Model Evaluation

To test the performance of the pig weight estimation model, metrics such as the mean absolute error (MAE), mean absolute percentage error (MAPE), and root mean square error (RMSE) were calculated by comparing the actual weight of the pig with the estimated weight.

$MAE = \frac{1}{N} \sum_{k=1}^{N} |P_k - R_k|$ (17)

$MAPE = \frac{1}{N} \sum_{k=1}^{N} \frac{|P_k - R_k|}{R_k} \times 100\%$ (18)

$RMSE = \sqrt{\frac{1}{N} \sum_{k=1}^{N} (P_k - R_k)^2}$ (19)

where $P$ represents the estimated value of the model, $R$ represents the actual value, $N$ signifies the number of samples, and $k$ denotes the current sample number.
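Equations (17)–(19) translate directly into NumPy; the helper below assumes two aligned arrays of estimated and actual weights.

import numpy as np

def evaluate(pred, actual):
    pred = np.asarray(pred, dtype=float)
    actual = np.asarray(actual, dtype=float)
    mae = np.mean(np.abs(pred - actual))                  # Eq. (17)
    mape = np.mean(np.abs(pred - actual) / actual) * 100  # Eq. (18), in %
    rmse = np.sqrt(np.mean((pred - actual) ** 2))         # Eq. (19)
    return mae, mape, rmse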

3. Results

3.1. Performance Comparison of Different Point Cloud Filtering Methods

To investigate the filtering effects of different thresholds, a comparative experiment was conducted with thresholds of 1, 2, and 4 and 30 neighborhood points. Figure 15 depicts the filtering effects of the distinct thresholds. When the threshold value is set to 1, there is excessive filtering of the pig point cloud, which leads to partial loss of pig head point cloud data. On the other hand, when the threshold is set to 2, most of the discrete points can be effectively eliminated. When the threshold is set to 4, only a small number of discrete points are removed, and the effect is not apparent. Consequently, a threshold value of 2 was selected for the statistical filtering parameter in this study.

Figure 15. Comparison of statistical filtering effects: (a) original image; (b) filtering effect with a threshold of 1; (c) filtering effect with a threshold of 2; (d) filtering effect with a threshold of 4.

To compare the filtering effects of different filters, the guided filter was used as a comparison in this research. The guided filter is a kind of edge-preserving smoothing method commonly employed in image enhancement, image dehazing, and other applications [36]. Figure 16 depicts the effect of guided filtering. The comparison reveals that while the guided filter was capable of preserving the entire point cloud data of the pigs, it did not filter the discrete point cloud data from the surrounding environment of the pigs. That is why its filtering effect was worse than that of statistical filtering.

Figure 16. Comparison of guided filtering effect: (a) original image; (b) guided filtering effect image.

3.2.
3.2. Performance
PerformanceComparison
Comparison of
of Different
Different Segmentation
Segmentation Methods
Methods for
for Point
Point Cloud
Cloud
For the
For the purpose
purpose of of exploring
exploring the the influence
influence of of different
different radius
radius of of neighborhood
neighborhood on on the
the
clustering effect,
clustering effect, aa comparative
comparative experiment
experiment was was conducted
conducted in in this
this study
study using
using the
the radius
radius
of neighborhood
of neighborhood (Eps) (Eps) ofof 0.01,
0.01, 0.02,
0.02, and
and 0.025.
0.025. TheThe clustering
clustering effect
effect ofof different
different Eps
Eps isis
shown in
shown in Figure
Figure 17.
17. When
When EpsEps isis set
set to
to 0.01,
0.01, the
the pig’s
pig’s body
body trunk
trunk is is divided,
divided, butbut the
the head
head
and tail
and tail are
are categorized
categorized as as background.
background. Also, Also,the
theoutline
outlineedge
edgeof ofthe
thepig
pigbody
bodyisisblurred,
blurred,
and the point cloud part of the pig is missing. Setting Eps to 0.02 bring
and the point cloud part of the pig is missing. Setting Eps to 0.02 bring about a complete about a complete
separation of
separation of the
the pig,
pig, with
with aa clear
clear and
and smooth
smooth outline.
outline. When
When thethe Eps
Eps is is set
set to
to 0.025,
0.025, parts
parts
of the
of the pig
pig outline
outline become
become visible,
visible, but
but the
the pig
pig cannot
cannotbe be separated
separatedfrom fromthethebackground.
background.
Therefore, the
Therefore, the threshold
threshold value
value for
for Eps
Eps was
was set
set to
to 0.02
0.02 as
as the
the Eps
Eps of
of DBSCAN
DBSCAN point point cloud
cloud
density clustering.
density clustering.
Figure 17. Comparison of DBSCAN point cloud segmentation with different Eps values: (a) Original image; (b) segmentation effect with an Eps of 0.01; (c) segmentation effect with an Eps of 0.02; (d) segmentation effect with an Eps of 0.025.

This research selected region growing segmentation for point clouds as a reference for comparing the effects of various point cloud segmentation methods. The region growing segmentation method is recognized for its flexibility, preservation of similarity, extensibility, computational efficiency, and interpretability, making it one of the most favored methods in point cloud segmentation [37]. The algorithm selects a seed point as the starting point and expands the seed region by adding neighboring points that meet specific conditions [38]. The curvature threshold (ct) is an important parameter influencing the region growing segmentation algorithm. Figure 18 illustrates the segmentation outcomes under different curvature thresholds. When ct = 0.01, a substantial portion of the pig's back point cloud is missing, and some of it is erroneously classified as outliers. With ct = 0.02, the number of clusters in the point cloud data decreases, which brings about substantial filling of the pig's back point cloud. Moreover, the segmentation correctly identifies and categorizes the pig's back point cloud as the primary class, but the adhesion of the pig point cloud to the background remains severe. Finally, when ct = 0.05, the pig point cloud blends with the background, which means a failure. In comparison to the region growing segmentation algorithm, the DBSCAN-based segmentation model demonstrates greater robustness in handling parameters and noise. DBSCAN, as a clustering algorithm, offers advantages in point cloud segmentation tasks, especially in settings where there is no need to predefine the number of clusters. It exhibits robustness against noise and outliers while remaining capable of handling clusters with irregular shapes.
Figure 18. Comparison of segmentation effect of region growing segmentation for point cloud: (a) Original image; (b) segmentation effect with a ct of 0.01; (c) segmentation effect with a ct of 0.02; (d) segmentation effect with a ct of 0.05.
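For readers who wish to reproduce the region growing baseline, the sketch below implements the algorithm's core loop in plain Python/NumPy. The neighborhood size (k = 30) and the 15-degree normal-angle threshold are assumed values, not settings reported in this research; the sketch is a simplified stand-in, not the exact implementation used here.

import numpy as np
from scipy.spatial import cKDTree

def normals_and_curvature(points, k=30):
    """PCA over the k nearest neighbors of each point: the eigenvector of the
    smallest eigenvalue approximates the surface normal, and the curvature is
    estimated as lambda_0 / (lambda_0 + lambda_1 + lambda_2)."""
    idx = cKDTree(points).query(points, k=k)[1]
    normals = np.empty_like(points)
    curvature = np.empty(len(points))
    for i, nb in enumerate(idx):
        w, v = np.linalg.eigh(np.cov(points[nb].T))  # ascending eigenvalues
        normals[i] = v[:, 0]
        curvature[i] = w[0] / w.sum()
    return normals, curvature

def region_growing(points, ct=0.02, angle_deg=15.0, k=30):
    """Grow regions from low-curvature seeds: a neighbor joins the region when
    its normal is close enough to the current point's, and it spawns further
    growth only while its own curvature stays below the threshold ct."""
    normals, curvature = normals_and_curvature(points, k)
    tree = cKDTree(points)
    labels = np.full(len(points), -1)
    cos_thr = np.cos(np.deg2rad(angle_deg))
    region = 0
    for seed in np.argsort(curvature):  # flattest points become seeds first
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = region
        while stack:
            i = stack.pop()
            for j in tree.query(points[i], k=k)[1]:
                if labels[j] == -1 and abs(normals[i] @ normals[j]) > cos_thr:
                    labels[j] = region
                    if curvature[j] < ct:
                        stack.append(j)
        region += 1
    return labels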
3.3. Analysis of Down-Sampling Results

Selecting an appropriate down-sampling voxel parameter is critical to improving the efficiency of body dimension feature extraction. Our experiments have shown that as the parameter value for the voxel's regular cube increases, the number of points retained after down-sampling decreases. This parameter value determines whether the down-sampled point cloud remains stable and representative. Figure 19 illustrates the down-sampling effects of different parameter values. When the voxel parameter is set to 0.005, the down-sampling rate is 85%. This ensures both the preservation of point cloud features and a significant reduction in the computational load of point cloud segmentation. More importantly, the computational efficiency is enhanced.

Figure 19. Down-sampling results of point clouds with varying voxel parameters: (a) voxel parameter of 0.05; (b) voxel parameter of 0.06; (c) voxel parameter of 0.07; (d) voxel parameter of 0.08.
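Assuming Open3D and a voxel edge expressed in the cloud's length units, the down-sampling step described above reduces to a single call, as in the sketch below; "pig_back.pcd" is a placeholder file name, and the printed figure corresponds to the 85% rate above only if the rate is read as the fraction of points removed.

import open3d as o3d

# "pig_back.pcd" is a placeholder for the segmented back point cloud.
pcd = o3d.io.read_point_cloud("pig_back.pcd")

# Voxel-grid down-sampling: every point falling inside the same 0.005-unit
# cube is replaced by the centroid of that cube's points.
down = pcd.voxel_down_sample(voxel_size=0.005)

removed = 1.0 - len(down.points) / len(pcd.points)
print(f"kept {len(down.points)} of {len(pcd.points)} points "
      f"({removed:.0%} removed)")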

3.4. Extraction Results of Point Cloud on the Back of Pig

In this research, the eigenvalues of envelope volume, envelope area, projection area, shoulder width, belly width, and hip width of the pig's back were obtained. The partial extraction results of the pig's back features are shown in Table 1. A Pearson coefficient test was employed to assess the correlation of the pigs' back features. The correlation coefficients between the different feature parameters are shown in Table 2. Among them, the correlation coefficients between shoulder width and envelope area and projection area are 0.759 and 0.753, respectively. This indicates that there is a substantial correlation between these characteristics.
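As an illustration of how the envelope and projection features can be obtained, the sketch below derives them from convex hulls with SciPy. The input file name is a placeholder, and the three width features would additionally require slicing the cloud at the shoulder, belly, and hip positions, which is omitted here.

import numpy as np
from scipy.spatial import ConvexHull

# N x 3 coordinates of the segmented, down-sampled back point cloud;
# "pig_back_points.npy" is a placeholder file name.
pts = np.load("pig_back_points.npy")

hull3d = ConvexHull(pts)
envelope_volume = hull3d.volume  # volume of the 3D convex envelope
envelope_area = hull3d.area      # surface area of the 3D convex envelope

# Projection area: hull of the top-view (x, y) coordinates. For a 2D hull,
# SciPy reports the enclosed area in the `volume` attribute.
projection_area = ConvexHull(pts[:, :2]).volume

print(envelope_volume, envelope_area, projection_area)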
Table 1. Part of the pig characteristic parameter.

Number   Envelope Volume   Envelope Area   Projection Area   Shoulder Width   Belly Width   Hip Width
001      1.092             0.145           3.278             0.385            0.492         0.414
002      1.049             0.113           4.83              0.449            0.357         0.365
003      0.893             0.044           2.884             0.283            0.304         0.306
004      0.927             0.081           3.867             0.395            0.325         0.434
005      0.989             0.042           3.978             0.275            0.317         0.323
006      0.946             0.086           3.465             0.427            0.373         0.362
007      0.887             8.672           0.082             0.285            0.035         0.032
008      0.983             0.067           3.391             0.456            0.412         0.373
009      0.883             0.035           2.774             0.283            0.353         0.305
010      0.901             0.046           3.325             0.346            0.364         0.332
Table 2. Pearson correlation coefficient between different parameters.

                  Envelope Volume   Envelope Area   Projection Area   Shoulder Width   Belly Width   Hip Width
Envelope volume   1                 -               -                 -                -             -
Envelope area     0.341             1               -                 -                -             -
Projection area   0.264             0.686           1                 -                -             -
Shoulder width    0.391             0.759           0.753             1                -             -
Belly width       0.223             0.442           0.547             0.661            1             -
Hip width         0.251             0.571           0.671             0.819            0.648         1
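The Pearson test behind Table 2 can be reproduced, for example, with NumPy. The sketch below uses only the first three rows of Table 1, so its output will not match Table 2, which was computed over the full dataset.

import numpy as np

# One row per pig: envelope volume, envelope area, projection area,
# shoulder width, belly width, hip width (first three rows of Table 1).
X = np.array([
    [1.092, 0.145, 3.278, 0.385, 0.492, 0.414],  # pig 001
    [1.049, 0.113, 4.830, 0.449, 0.357, 0.365],  # pig 002
    [0.893, 0.044, 2.884, 0.283, 0.304, 0.306],  # pig 003
])

# np.corrcoef expects variables in rows, so the sample matrix is transposed.
R = np.corrcoef(X.T)
names = ["env_vol", "env_area", "proj_area", "shoulder", "belly", "hip"]
for i in range(len(names)):
    for j in range(i):
        print(f"r({names[i]}, {names[j]}) = {R[i, j]:+.3f}")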
3.5. Validation of Weight Estimation Model
To validate the accuracy of the pig weight estimation model, the actual and estimated values of pig weights were compared. Part of the results
for the test dataset are presented in Table 3. It is evident from the table that the relative
estimation error for the majority of pigs falls within 6%, which meets the production
demands. Nevertheless, due to the relatively smooth surface of the weighing scale at the
bottom of the data collection system, certain pigs may lean forward or backward, causing
some features to be obscured or distorted. This can hinder the model’s ability to accurately
extract the correct feature information, which would make it difficult to achieve precise
weight estimation. It would be helpful to integrate a posture detection algorithm within
the model for enhancing weight estimation accuracy.
Table 3. Part of the results for the test dataset.

Pig No.   Actual Weight/kg   Estimated Weight/kg   Relative Error
1         238                246.2                 3.865%
2         239.5              232.5                 3.340%
3         243                226.6                 7.160%
4         244.5              248.1                 1.063%
5         246.5              252.8                 2.961%
6         247.5              272.6                 9.737%
7         249                252.8                 1.927%
8         250                252.6                 0.64%
9         250.5              236.4                 5.229%
10        251                261.5                 3.784%
11        251.5              225.7                 9.860%
12        252.5              268.3                 5.861%
13        254                247.6                 2.125%
14        256.5              248.9                 3.352%
15        259.5              274.2                 5.279%
16        262.5              253.4                 3.085%
17        263                256.3                 2.927%
18        264                268.7                 1.401%
19        264.5              274.3                 4.083%
20        265.5              246.8                 7.419%
3.6. Comparison of Different Weight Estimation Methods
The dataset consisted of 140 pigs used as training samples for the CNN, with the
remaining 58 pigs designated as test samples. The scatter plot between the true and
estimated weight of pigs is shown in Figure 20. It demonstrates that the CNN-based pig
weight estimation model exhibits a higher degree of accuracy in estimating pig weights.
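Since the excerpt does not specify the network architecture, the following PyTorch sketch shows one plausible shape of such a model: a small 1D CNN that regresses body weight from the six back feature parameters. The layer sizes and the choice of framework are illustrative assumptions, not the architecture used in this research.

import torch
import torch.nn as nn

class WeightCNN(nn.Module):
    """Minimal 1D-CNN regressor over the six back features (envelope
    volume/area, projection area, shoulder/belly/hip width)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 6, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, x):   # x: (batch, 6) feature vectors
        return self.head(self.conv(x.unsqueeze(1))).squeeze(1)

model = WeightCNN()
x = torch.randn(140, 6)     # 140 training pigs, 6 features each
y_hat = model(x)            # estimated weights, shape (140,)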
Figure 20. Scatter chart for weight estimation.
Radial Basis Function (RBF) neural networks were selected to compare the efficacy of different weight estimation models. RBF is a one-way propagation neural network with three or more layers that provides the most accurate approximation of nonlinear functions and the greatest global performance [39]. Figure 21 presents the results of the two weight estimation models using mean absolute error (MAE), mean absolute percent error (MAPE), and root mean square error (RMSE) as evaluation parameters. This comparison illustrates that the CNN outperforms the RBF neural network on all evaluation parameters.

Figure 21. Comparison of pig weight estimation and evaluation indicators.
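The three evaluation parameters are straightforward to compute; the sketch below defines them in NumPy and, as a usage example, evaluates the first three rows of Table 3.

import numpy as np

def evaluate(y_true, y_pred):
    """MAE, MAPE and RMSE as used to compare the CNN and RBF models."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    mape = np.mean(np.abs(err) / y_true) * 100.0
    rmse = np.sqrt(np.mean(err ** 2))
    return mae, mape, rmse

# Actual vs. estimated weights (kg) for pigs 1-3 of Table 3.
mae, mape, rmse = evaluate([238.0, 239.5, 243.0], [246.2, 232.5, 226.6])
print(f"MAE={mae:.2f} kg, MAPE={mape:.2f}%, RMSE={rmse:.2f} kg")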
4. Discussion

4.1. Comparison with Previous Research

To evaluate the effectiveness and practicality of the algorithm proposed in this research, a comparative analysis was conducted against previous research. In reference [14], a 2D camera is employed to capture dorsal images of pigs, followed by the extraction of the dorsal image area. The correlation between dorsal area and weight is investigated, leading to the development of a model for estimating pig weight. In contrast, our proposed approach, building upon the extraction of the dorsal projection area, extensively extracts various dorsal body dimension features, including envelope volume, envelope area, projection area, shoulder width, belly width, and hip width. All of these contribute to a more precise regression of pig weight. Leveraging a 3D-based weight estimation system allows for accurate capture of intricate details from various angles, effectively addressing distortions arising from changes in perspective and scale. This system provides enhanced accuracy and reliability in weight estimation. In the realm of 3D image-based weight estimation methodology, reference [20] employs four Kinect cameras to capture point cloud data of pigs, subsequently employing mesh reconstruction and a deep neural network (DNN) to construct a weight estimation model. However, the utilization of multiple cameras, while comprehensive in capturing morphological features, presents challenges including high costs, intricate data fusion and processing, spatial constraints, and layout limitations, rendering it unsuitable for deployment in large-scale pig farms.
4.2. Deficiencies and Improvements

The postures of the pigs were found to influence the reliability and accuracy of back feature extraction. Changes in pig posture impact the model's feature extraction, resulting in deviations between the actual feature parameters and the extracted feature parameters, which in turn affect the efficacy of the pig weight estimation model. In future research, we will consider the feature differences under various postures and construct different models to estimate the body weight of pigs in different postures.

Furthermore, due to uncontrolled movements of pigs, the acquired point cloud data are susceptible to noise, non-uniform sampling or under-sampling, voids, and omissions, all of which can affect the effectiveness of body dimension parameter extraction. In future work, we plan to integrate multiple consecutive frames of point cloud data to densify and fill in the sparse and missing portions of the pig's point cloud. To facilitate the practical implementation of non-contact weight estimation for pigs based on 3D imagery, further research will broaden the scope of target subjects and enhance the weight estimation model's generalization capabilities. Drawing upon the output of pig weight estimation, we intend to assess the growth status of the pigs and offer tailored feeding recommendations. A weight management system was constructed in this research, as illustrated in Figure 22. The weight estimation model can be deployed on application terminals to provide information on pig weight gain status, weight-based feeding recommendations, and alert notifications.

Figure 22. The weight management system.
5. Conclusions
This study introduces a 3D point cloud-based method for estimating pig weight, so as to address the challenges related to pig weight monitoring and pigs' susceptibility to stress.
In this method, we separated the pig point cloud from the background using statistical
filtering and DBSCAN, thereby resolving the problem of pig and background adhesion.
Using convex hull detection, the point cloud data of the pig’s head and tail was removed
in order to obtain complete point cloud data of their backs. Voxel down-sampling was
used to reduce the number of point cloud data on the backs of pigs and enhance the weight
estimation model’s efficiency and real-time performance. The weight estimation model
based on back parameter characteristics and CNN was constructed. Also, the MAPE of
the weight estimation model is only 5.357%, which could meet the demand of automatic
weight monitoring in actual production.
It is worth mentioning that all the data used for training and validation in this method
were collected from a real production environment, and only a depth camera above the
drive channel was needed to implement it. The method is therefore easy to popularize and apply, and it can provide technical support for pig weight monitoring in both breeding and slaughterhouse settings.
Author Contributions: Z.L.: writing—original draft, review and editing, conceptualization, method-
ology, equipment adjustment, data curation, software. J.H.: writing—original draft, data curation,
software, formal analysis, investigation, conceptualization. H.X.: review and editing, conceptu-
alization, methodology, formal analysis, funding acquisition. H.T.: writing—review and editing,
conceptualization. Y.C.: review and editing, conceptualization, formal analysis. H.L.: review and
editing, conceptualization, formal analysis. All authors have read and agreed to the published version
of the manuscript.
Funding: This research was funded by the Key Projects of Intergovernmental Cooperation in Interna-
tional Scientific and Technological Innovation (Project No. 2017YFE0114400), China.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Acknowledgments: The authors would like to acknowledge the support from the Wens Foodstuffs
Group Co., Ltd. (Guangzhou, China) for the use of their animals and facilities.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Noya, I.; Aldea, X.; González-García, S.; Gasol, C.M.; Moreira, M.T.; Amores, M.J.; Marin, D.; Boschmonart-Rives, J. Environmental
assessment of the entire pork value chain in catalonia—A strategy to work towards circular economy. Sci. Total Environ. 2017, 589,
122–129. [CrossRef]
2. Secco, C.; da Luz, L.M.; Pinheiro, E.; Puglieri, F.N.; Freire, F.M.C.S. Circular economy in the pig farming chain: Proposing a model
for measurement. J. Clean. Prod. 2020, 260, 121003. [CrossRef]
3. He, W.; Mi, Y.; Ding, X.; Liu, G.; Li, T. Two-stream cross-attention vision Transformer based on RGB-D images for pig weight
estimation. Comput. Electron. Agric. 2023, 212, 107986. [CrossRef]
4. Bonneau, M.; Lebret, B. Production systems and influence on eating quality of pork. Meat Sci. 2010, 84, 293–300. [CrossRef]
5. Lebret, B.; Čandek-Potokar, M. Pork quality attributes from farm to fork. Part I. Carcass and fresh meat. Animal 2022, 16, 100402.
[CrossRef]
6. Li, J.; Yang, Y.; Zhan, T.; Zhao, Q.; Zhang, J.; Ao, X.; He, J.; Zhou, J.; Tang, C. Effect of slaughter weight on carcass characteristics,
meat quality, and lipidomics profiling in longissimus thoracis of finishing pigs. LWT 2021, 140, 110705. [CrossRef]
7. Jensen, D.B.; Toft, N.; Cornou, C. The effect of wind shielding and pen position on the average daily weight gain and feed
conversion rate of grower/finisher pigs. Livest. Sci. 2014, 167, 353–361. [CrossRef]
8. Valros, A.; Sali, V.; Hälli, O.; Saari, S.; Heinonen, M. Does weight matter? Exploring links between birth weight, growth and
pig-directed manipulative behaviour in growing-finishing pigs. Appl. Anim. Behav. Sci. 2021, 245, 105506. [CrossRef]
9. Kashiha, M.; Bahr, C.; Ott, S.; Moons, C.P.; Niewold, T.A.; Ödberg, F.O.; Berckmans, D. Automatic weight estimation of individual
pigs using image analysis. Comput. Electron. Agric. 2014, 107, 38–44. [CrossRef]
Sensors 2023, 23, 7730 22 of 23

10. Apichottanakul, A.; Pathumnakul, S.; Piewthongngam, K. The role of pig size prediction in supply chain planning. Biosyst. Eng.
2012, 113, 298–307. [CrossRef]
11. Dokmanović, M.; Velarde, A.; Tomović, V.; Glamočlija, N.; Marković, R.; Janjić, J.; Baltić, M.Ž. The effects of lairage time and
handling procedure prior to slaughter on stress and meat quality parameters in pigs. Meat Sci. 2014, 98, 220–226. [CrossRef]
[PubMed]
12. Bhoj, S.; Tarafdar, A.; Chauhan, A.; Singh, M.; Gaur, G.K. Image processing strategies for pig liveweight measurement: Updates
and challenges. Comput. Electron. Agric. 2022, 193, 106693. [CrossRef]
13. Tu, G.J.; Jørgensen, E. Vision analysis and prediction for estimation of pig weight in slaughter pens. Expert Syst. Appl. 2023,
220, 119684. [CrossRef]
14. Minagawa, H.; Hosono, D. A light projection method to estimate pig height. In Swine Housing, Proceedings of the First International
Conference, Des Moines, IA, USA, 9–11 October 2000; ASABE: St. Joseph, MI, USA, 2000; pp. 120–125.
15. Li, G.; Liu, X.; Ma, Y.; Wang, B.; Zheng, L.; Wang, M. Body size measurement and live body weight estimation for pigs based on
back surface point clouds. Biosyst. Eng. 2022, 218, 10–22. [CrossRef]
16. Sean, O.C. Research and Implementation of a Pig Weight Estimation System Based on Fuzzy Neural Network. Master’s Thesis,
Nanjing Agricultural University, Nanjing, China, 2017.
17. Suwannakhun, S.; Daungmala, P. Estimating pig weight with digital image processing using deep learning. In Proceedings of the
2018 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Las Palmas de Gran Canaria,
Spain, 26–29 November 2018; pp. 320–326.
18. Liu, T.H. Optimization and 3D Reconstruction of Pig Body Size Parameter Extraction Algorithm Based on Binocular Vision. Ph.D.
Thesis, China Agricultural University, Beijing, China, 2014.
19. He, H.; Qiao, Y.; Li, X.; Chen, C.; Zhang, X. Automatic weight measurement of pigs based on 3D images and regression network.
Comput. Electron. Agric. 2021, 187, 106299. [CrossRef]
20. Kwon, K.; Park, A.; Lee, H.; Mun, D. Deep learning-based weight estimation using a fast-reconstructed mesh model from the
point cloud of a pig. Comput. Electron. Agric. 2023, 210, 107903. [CrossRef]
21. Guo, R.; Xie, J.; Zhu, J.; Cheng, R.; Zhang, Y.; Zhang, X.; Gong, X.; Zhang, R.; Wang, H.; Meng, F. Improved 3D point cloud
segmentation for accurate phenotypic analysis of cabbage plants using deep learning and clustering algorithms. Comput. Electron.
Agric. 2023, 211, 108014. [CrossRef]
22. Ali, Z.A.; Zhangang, H. Multi-unmanned aerial vehicle swarm formation control using hybrid strategy. Trans. Inst. Meas. Control
2021, 43, 2689–2701. [CrossRef]
23. Li, J.; Ma, W.; Li, Q.; Zhao, C.; Tulpan, D.; Yang, S.; Ding, L.; Gao, R.; Yu, L.; Wang, Z. Multi-view real-time acquisition and 3D
reconstruction of point clouds for beef cattle. Comput. Electron. Agric. 2022, 197, 106987. [CrossRef]
24. Zhao, Q.; Gao, X.; Li, J.; Luo, L. Optimization algorithm for point cloud quality enhancement based on statistical filtering. J. Sens.
2021, 2021, 7325600. [CrossRef]
25. Yang, Q.F.; Gao, W.Y.; Han, G.; Li, Z.Y.; Tian, M.; Zhu, S.H.; Deng, Y.H. HCDC: A novel hierarchical clustering algorithm based on
density-distance cores for data sets with varying density. Inf. Syst. 2023, 114, 102159. [CrossRef]
26. Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise.
In Proceedings of the KDD, München, Germany, 2 August 1996; pp. 226–231.
27. Chen, H.; Liang, M.; Liu, W.; Wang, W.; Liu, P.X. An approach to boundary detection for 3D point clouds based on DBSCAN
clustering. Pattern Recognit. 2022, 124, 108431. [CrossRef]
28. Yu, T.; Hu, C.; Xie, Y.; Liu, J.; Li, P. Mature pomegranate fruit detection and location combining improved F-PointNet with 3D
point cloud clustering in orchard. Comput. Electron. Agric. 2022, 200, 107233. [CrossRef]
29. Ma, B.; Du, J.; Wang, L.; Jiang, H.; Zhou, M. Automatic branch detection of jujube trees based on 3D reconstruction for dormant
pruning using the deep learning-based method. Comput. Electron. Agric. 2021, 190, 106484. [CrossRef]
30. Lu, M.; Xiong, Y.; Li, K.; Liu, L.; Yan, L.; Ding, Y.; Lin, X.; Yang, X.; Shen, M. An automatic splitting method for the adhesive
piglets’ gray scale image based on the ellipse shape feature. Comput. Electron. Agric. 2016, 120, 53–62. [CrossRef]
31. Fernandes, A.F.; Dórea, J.R.; Fitzgerald, R.; Herring, W.; Rosa, G.J. A novel automated system to acquire biometric and
morphological measurements and predict body weight of pigs via 3D computer vision. J. Anim. Sci. 2019, 97, 496–508. [CrossRef]
32. Panda, S.; Gaur, G.K.; Chauhan, A.; Kar, J.; Mehrotra, A. Accurate assessment of body weights using morphometric measurements
in Landlly pigs. Trop. Anim. Health Prod. 2021, 53, 362. [CrossRef]
33. Li, Z.; Liu, F.; Yang, W.; Peng, S.; Zhou, J. A survey of convolutional neural networks: Analysis, applications, and prospects. IEEE
Trans. Neural Netw. Learn. Syst. 2022, 33, 6999–7019. [CrossRef]
34. Jiang, Q.; Jia, M.; Bi, L.; Zhuang, Z.; Gao, K. Development of a core feature identification application based on the Faster R-CNN
algorithm. Eng. Appl. Artif. Intell. 2022, 115, 105200. [CrossRef]
35. Diao, Z.; Yan, J.; He, Z.; Zhao, S.; Guo, P. Corn seedling recognition algorithm based on hyperspectral image and lightweight-3D-
CNN. Comput. Electron. Agric. 2022, 201, 107343. [CrossRef]
36. Kong, W.; Chen, Y.; Lei, Y. Medical image fusion using guided filter random walks and spatial frequency in framelet domain.
Signal Process. 2021, 181, 107921. [CrossRef]
Sensors 2023, 23, 7730 23 of 23

37. Ma, X.; Luo, W.; Chen, M.; Li, J.; Yan, X.; Zhang, X.; Wei, W. A fast point cloud segmentation algorithm based on region growth. In
Proceedings of the 2019 18th International Conference on Optical Communications and Networks (ICOCN), Huangshan, China,
5–8 August 2019; pp. 1–2.
38. Han, X.; Wang, X.; Leng, Y.; Zhou, W. A plane extraction approach in inverse depth images based on region-growing. Sensors
2021, 21, 1141. [CrossRef] [PubMed]
39. Xu, Y.; Jung, C.; Chang, Y. Head pose estimation using deep neural networks and 3D point clouds. Pattern Recognit. 2022,
121, 108210. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.