Article
Body Weight Estimation for Pigs Based on 3D Hybrid Filter and
Convolutional Neural Network
Zihao Liu 1,2,† , Jingyi Hua 2,3,† , Hongxiang Xue 1,2, *, Haonan Tian 2,3 , Yang Chen 3 and Haowei Liu 1
Abstract: The measurement of pig weight holds significant importance for producers as it plays a
crucial role in managing pig growth, health, and marketing, thereby facilitating informed decisions
regarding scientific feeding practices. On one hand, the conventional manual weighing approach
is characterized by inefficiency and time consumption. On the other hand, it has the potential to
induce heightened stress levels in pigs. This research introduces a hybrid 3D point cloud denoising
approach for precise pig weight estimation. By integrating statistical filtering and DBSCAN clustering
techniques, we mitigate weight estimation bias and overcome limitations in feature extraction. The
convex hull technique refines the dataset to the pig’s back, while voxel down-sampling enhances
real-time efficiency. Our model integrates pig back parameters with a convolutional neural network
(CNN) for accurate weight estimation. Experimental analysis indicates that the mean absolute error
(MAE), mean absolute percent error (MAPE), and root mean square error (RMSE) of the weight
estimation model proposed in this research are 12.45 kg, 5.36%, and 12.91 kg, respectively. In contrast
to the currently available weight estimation methods based on 2D and 3D techniques, the suggested
approach offers the advantages of simplified equipment configuration and reduced data processing
complexity. These benefits are achieved without compromising the accuracy of weight estimation.
Consequently, the proposed method presents an effective monitoring solution for precise pig feeding
management, reducing labor demands and improving welfare in pig breeding.
Keywords: pig weight estimation; 3D sensor; point cloud segmentation; convolutional neural network

1. Introduction
Pork is an essential source of animal protein and fat for European, Chinese, and some other populations, and it plays a crucial role in ensuring food security and social stability [1–3]. In China, the pig industry is a traditionally advantageous industry that contributes greatly to sustaining the agricultural market, fostering the growth of the national economy, and increasing farmers' incomes. As such, the quantity of pig breeding has witnessed a notable increase. However, conventional pig breeding methods have proven inadequate for keeping pace with the rapid development of pig output. The integration of science and technology has facilitated the widespread adoption of contemporary technologies in various aspects of livestock production.
Consequently, the improvement of pig production and quality has consistently remained a central objective in the development of agricultural strategies [4–6]. Among these objectives, pig weight stands out as one of the most influential factors in pig production and pork quality. It provides essential information on feed conversion rate, growth rate, disease, development uniformity, and the health status of pig herds [7–10]. Presently, the traditional manual pig driving weighing method involves direct contact weighing on a scale. This method is known for its time-consuming, labor-intensive nature and is prone to causing stress reactions in pigs. These limitations make it challenging to meet the requirements of large-scale breeding for real-time monitoring of pig weight [11].
Non-contact monitoring of pig weight has progressively become a research hotspot, with weight estimation primarily relying on 2D and 3D images [12–15]. In 2D imaging, segmentation methods are employed to delineate the pig's outline, while weight estimation is based on a relationship model using geometric parameters and actual body weight, as illustrated in Figure 1. In 2017, Okinda [16] used HD cameras to capture overhead images of pigs, applied the background difference method to eliminate background interference, and devised a weight estimation model using linear regression and adaptive fuzzy reasoning. This model necessitates manual selection of optimal inference images due to the influence of porcine posture and movement. In 2018, Suwannakhun et al. [17] used an adaptive threshold segmentation method to identify the pig's body region and developed a body weight estimation model leveraging eight features, including color, texture, center of mass, main axis length, minimum axis length, eccentricity, and area, to mitigate the impact of pig movement. Nevertheless, 2D methods for estimation often fall short in accurately estimating the animal's volume or parameters associated with its body surface. Furthermore, variations in camera parameters, object distance, and lighting conditions have a significant impact on the measurement findings, resulting in limited adaptability and posing challenges in enhancing accuracy in real-world multi-scene applications. In the context of weight regression estimation, the conventional regression approach exhibits limited capability in feature learning and fails to comprehensively capture the interplay among the defining factors of pig body size.
Figure 1. The fundamental stages involved in 2D image processing for the purpose of estimating pig weight [12].
Research aimed at estimating the weight of pigs based on three-dimensional data has been undertaken to address these problems. Liu et al. [18] used binocular vision technology to collect three-dimensional image data of pigs. They extracted growth-related data, including chest circumference, buttock height, body length, body height, and body width, through image processing methods. After that, they developed estimation models based on linear functions, nonlinear functions, machine learning algorithms, and deep learning algorithms, among others, to estimate pig weight. The correlation coefficient with the measured value was 97%, and the average relative error was 2.5%, outperforming models solely relying on back area measurements. As the price of depth cameras continues to fall, and their performance steadily improves, they have been increasingly employed in studies of animal body measurement, body condition and weight estimation, automatic behavioral identification, etc. In 2021, He et al. [19] used a D430 camera to capture depth images of pigs. They employed instance segmentation, distance independence, noise reduction, and rotational correction to eliminate background interference and constructed a BotNet-based model for estimating the body weight of pigs. In 2023, Kwon et al. [20] introduced a method for accurately estimating pig weight using point clouds, as depicted in Figure 2. They used a rapid and robust reconstructed grid model to address the problem of unrecognized noise or missing points and optimized deep neural networks for precise weight estimation. While the aforementioned techniques have demonstrated enhanced precision in estimating the weight of pigs, it is crucial to acknowledge that the hardware systems employed in these experiments are intricate. In real-world production settings, the available space is often limited, hindering the ability to conduct multi-directional or azimuthal livestock detection. Meanwhile, data collection conditions may not be optimal, posing challenges in meeting current manufacturing demands.
Figure 2. Configuration of the acquisition system for point clouds [20].
In conclusion, there is a pressing requirement for a precise and cost-effective intelligent system to estimate pig weight accurately. Such a system should be applicable to large-scale pig farms, facilitating the monitoring and analysis of pig growth processes and serving as an effective tool for precise pig feeding and management.

The primary contributions of this article can be summarized as follows:
(1) Point cloud filtering: the denoising of large-scale and small-scale discrete points is achieved using statistical filtering and DBSCAN point cloud segmentation, respectively, resulting in the accurate segmentation of the pig back image. What is more, the voxel down-sampling approach is employed to reduce the resolution of the point cloud data on the pig's back, so as to reduce the volume of point cloud data while preserving the inherent characteristics of the original dataset.
(2) In terms of the weight estimation, the plane projection technique is employed to determine the body size of the pig, while the convex hull detection method is utilized to eliminate the head and tail regions of the pig. The point set information in the pig point cloud data is input into the convolutional neural network to construct the pig weight estimation model, which realizes the full learning of features and accurate estimation of pig weight.
(3) As for actual breeding, the estimation method constructed in this research overcomes the problems of high cost, data processing difficulties, space occupation, and layout restrictions while ensuring estimation accuracy, which can provide technical support for precise feeding, selection, and breeding of pigs.

This paper is structured as follows: Section 2 introduces the point cloud data collection, noise reduction segmentation, feature extraction, and weight estimation methods; in Section 3, the performance of denoising segmentation and the weight estimation model is analyzed. Section 4 discusses the comparison with existing methods and the limitations and improvements of the research, and outlines potential future work. Finally, Section 5 presents the conclusions.
2.2. Data Acquisition
A 3D depth Intel RealSense D435i camera (manufactured by Intel Corp., Santa Clara, CA, USA) was used to acquire color and depth images. This camera features a complete depth processing module, a vision processing module, and a signal processing module, capable of delivering depth data and color information data at a rate of 90 frames per second. What is more, it was installed 2.5 m directly above the XK3190-A12 weighing scale (manufactured by Henan Nanshang Husbandry Technology Co., Ltd., Nanyang, China). An illustration of the pig weight data acquisition system is presented in Figure 4.
Figure 6. Incomplete pig picture. (a) No pig; (b) pig with missing body; (c) pig with missing rump; (d) pig with missing head.
2.4. Model Construction

2.4.1. Overall Process
The primary components of the method proposed in this research are point cloud segmentation and weight estimation, as illustrated in Figure 7. In order to address the adhesion problem between pigs and the background in the segmentation of the pig point cloud, statistical filtering and DBSCAN were used to separate the pig point cloud from the background. To obtain complete point cloud data of the pigs' backs, plane projection is employed to capture images of the pig's back. After that, convex hull detection was applied to remove extraneous head and tail point cloud data. To reduce the computational complexity of subsequent processing, voxel down-sampling was used to reduce the quantity of point cloud data on pigs' backs. Finally, for precise pig weight estimation, we devised a weight estimation model based on back parameter characteristics and a convolutional neural network (CNN) during the weight estimation phase.
Calculate whether the average distance d̄_i between the current point p_i and its k nearest neighbors is greater than the set threshold L. When d̄_i > L, delete point p_i; when d̄_i ≤ L, retain point p_i. As shown in Equation (4), σ is the calculation coefficient, which is generally valued according to the distribution of the measured point cloud data.
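As a concrete illustration, the following is a minimal sketch of this statistical filtering step using the Open3D library, assuming the threshold takes the usual form L = μ_d + σ·s_d (global mean neighbor distance plus σ standard deviations), which is exactly what Open3D's std_ratio parameter expresses. The file name is a placeholder; the parameter values follow the k = 30 neighbors and σ = 2 selected in Section 3.1.

```python
import open3d as o3d

# Load the raw point cloud of the pig and its surroundings (file name is illustrative).
pcd = o3d.io.read_point_cloud("pig_scene.ply")

# Statistical filtering: for each point, the mean distance to its k nearest
# neighbors is compared against the threshold L; points with d_i > L are deleted.
filtered, kept_idx = pcd.remove_statistical_outlier(
    nb_neighbors=30,  # k nearest neighbors, as used in the experiments
    std_ratio=2.0,    # sigma, the threshold coefficient chosen in Section 3.1
)
```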
Using statistical filtering for denoising 3D point cloud data can remove some of the large-scale noise, though some still remains. Noise points mixed with the pig's data can interfere with the surface smoothness of 3D model reconstruction. To address this issue, a secondary denoising step is required, incorporating additional filtering algorithms. Employing a hybrid filtering approach serves a dual purpose: it removes small-scale noise and mitigates the remaining large-scale noise. Clustering algorithms help tackle small-scale noise to a certain degree. By identifying points with high similarity within clusters, noise points can be distinguished from valid data points. Clustering algorithms usually include K-Means clustering, DBSCAN clustering, region growing clustering, etc. K-Means clustering relies on geometric distance and proves advantageous when dealing with point clouds where the number and location of seed points are known [23]. However, when applied to interconnected point clouds, it may cause issues such as over-segmentation or under-segmentation due to the reliance on single growth criteria or stopping conditions. In contrast, DBSCAN clustering leverages point cloud density for clustering. During the clustering process, it randomly selects a core object as a seed and proceeds to identify all sample sets that are density-reachable from this core object. This process repeats until all core objects are clustered. DBSCAN clustering excels in situations where different regions of the pig's body exhibit similar point cloud densities, yielding robust clustering results.
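Before formalizing DBSCAN below, here is a minimal sketch of this clustering-based secondary denoising using Open3D's built-in DBSCAN. Retaining the largest density-connected cluster as the pig is our illustrative choice, min_points is an assumed value, and eps anticipates the 0.02 neighborhood radius selected in Section 3.2.

```python
import numpy as np
import open3d as o3d

# Statistically filtered cloud from the previous step (file name is illustrative).
filtered = o3d.io.read_point_cloud("pig_filtered.ply")

# DBSCAN groups density-connected points; label -1 marks density-unreachable
# points, i.e., the remaining small-scale noise.
labels = np.array(filtered.cluster_dbscan(eps=0.02, min_points=10))

# Keep the largest cluster as the pig body and discard noise and background.
largest = np.bincount(labels[labels >= 0]).argmax()
pig = filtered.select_by_index(np.where(labels == largest)[0])
```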
The density clustering algorithm has been employed to remove noise from a point cloud because of its sensitivity and effectiveness in recognizing small-scale noise points [25]. In this section, the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm was used for noise removal from the point cloud. DBSCAN is an algorithm for density-based spatial clustering that clusters regions with sufficient density [26]. It also defines a cluster as the largest set of density-connected points, allowing for the identification of clusters with arbitrary shapes in a noisy spatial database [27,28], as illustrated in Figure 9.

Figure 10. Cluster point classification.

2.4.3. Splitting the Head and Tail
It is necessary to remove the head and tail point cloud data because the poses of pigs' heads and tails frequently change, which has a remarkable impact on such essential characteristic parameters as the envelope volume, envelope surface area, back area, and body length. Initially, the data from the point cloud were projected onto a plane to create a 2D outline of the pig's back. The minimum envelope polygon of the pig's body was derived from the envelope vertices and line segments corresponding to the pig's head and tail. The sag degree of the pig contour was calculated based on the points where the contour envelope overlapped with the pig's contour and the pixel distance between neighboring points and the pig's contour. A deeper depression indicated a higher probability of head and tail split points. Given the fixed direction of the pig's head, the short axis of some particles in the pig's body was chosen as the dividing line; the point furthest from this axis was considered the dividing point of the tail, while the point closest to the axis was considered the dividing point of the head. Figure 12 depicts the comparison before and after head and tail removal.
Figure 12. Comparison before and after head and tail removal: (a) original image; (b) processed image.
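A simplified sketch of the sag-degree computation described above is given below, assuming the back outline is available as an array of 2D points; it approximates the depression depth of each contour point as its distance to the nearest convex hull edge. The function name and this approximation are ours for illustration, not the authors' exact procedure.

```python
import numpy as np
from scipy.spatial import ConvexHull

def contour_sag(contour_xy: np.ndarray) -> np.ndarray:
    """Sag degree per contour point: distance to the nearest edge of the
    contour's convex hull (zero where the contour touches the hull).
    Deep depressions are candidate head/tail split points."""
    hull = ConvexHull(contour_xy)
    # hull.equations stores unit outward normals (A, B) and offsets C, so
    # A*x + B*y + C <= 0 holds for points inside the hull.
    signed = contour_xy @ hull.equations[:, :2].T + hull.equations[:, 2]
    return -signed.max(axis=1)  # depth below the closest hull edge
```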
z = sin(x^2 + y^2) / (x^2 + y^2)        (5)
The side length of the regular cube grid is represented as r. The smallest bounding body of the point cloud data is determined by the maximum and minimum values of the 3D coordinate points in the point cloud data. Calculate the size of the voxel grid:

D_x = (x_max − x_min) / r
D_y = (y_max − y_min) / r        (6)
D_z = (z_max − z_min) / r
Calculate the index of each regular small cube grid, where the index is represented by h as follows:

h_x = (x − x_min) / r
h_y = (y − y_min) / r        (7)
h_z = (z − z_min) / r
h = h_x + h_y × D_x + h_z × D_x × D_y

Sort the elements in h by size, and then calculate the center of gravity for each regular cube to represent each point in the small grid.
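The following is a direct NumPy transcription of Equations (6) and (7), written as a sketch under the assumption that the index division in Equation (7) is floored to an integer cell index.

```python
import numpy as np

def voxel_downsample(points: np.ndarray, r: float) -> np.ndarray:
    """Voxel down-sampling per Equations (6) and (7): bucket the points into
    cubes of side r and replace each bucket with its center of gravity."""
    mins = points.min(axis=0)
    # Grid size D_x, D_y, D_z from Equation (6).
    dims = np.floor((points.max(axis=0) - mins) / r).astype(int) + 1
    # Cell indices h_x, h_y, h_z and flattened index h from Equation (7).
    idx = np.floor((points - mins) / r).astype(int)
    h = idx[:, 0] + idx[:, 1] * dims[0] + idx[:, 2] * dims[0] * dims[1]
    # Sort by h, then average the points falling in each occupied voxel.
    order = np.argsort(h)
    h, points = h[order], points[order]
    _, starts = np.unique(h, return_index=True)
    return np.array([seg.mean(axis=0) for seg in np.split(points, starts[1:])])
```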
pig’s back.
Figure 13. Pig point cloud projection. (a) Before horizontal processing; (b) after horizontal processing.
The six key points along the pig's outline were extracted. L2 represented the shortest distance between the two ends of the outline, while L1 represented the greatest distance between the ends of the left side, and L3 signified the longest distance between the ends of the right side. The calculation formula is as follows, where x and y represent the coordinates of the respective points on the plane:

L1 = sqrt((x_1 − x_2)^2 + (y_1 − y_2)^2)        (8)
Then, the convex hull model of the pig back was constructed, and the point cloud of the pig back was encapsulated in the form of a minimum convex hull so that the area and volume of the convex hull model could be calculated. The extraction results are depicted in Figure 14.
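The envelope features can be obtained with SciPy's convex hull, for example; the array and file names below are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull

back = np.load("pig_back_points.npy")  # (N, 3) back point cloud (illustrative)

hull = ConvexHull(back)
envelope_volume = hull.volume  # envelope volume feature
envelope_area = hull.area      # envelope surface area feature

# Projection area: hull of the (x, y) projection; for 2D input SciPy
# reports the enclosed area in the `volume` attribute.
projection_area = ConvexHull(back[:, :2]).volume
```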
Figure 14. Pig back envelope.

2.4.6. Weight Estimation
Nowadays, methods for estimating pigs' weight mainly include three groups: classical regression methods (including Linear Regression, Polynomial Regression, Ridge Regression, and Logistic Regression), machine learning methods (including Decision Tree Regression, Random Forest Regression, and Support Vector Regression, SVR), and deep learning methods (like Multi-Layer Perceptron, MLP, and Convolutional Neural Network, CNN). Classical regression models, though commonly used, may be unstable due to the nonlinear relationships and complex features among body feature parameters. Machine learning regression methods, though flexible in handling nonlinear relationships, can be sensitive to outliers and noise within the training data. In contrast, deep learning-based regression methods excel in handling nonlinear patterns and demonstrate great scalability compared to the first two groups of regression estimation methods. CNN possesses strong data-solving capabilities and can delve deeply into data relationship information [33]. As a network with multiple layers, a CNN consists primarily of an input layer, convolutional layers, down-sampling layers, fully connected layers, and an output layer [34]. Its fundamental concept lies in the use of neuron weight sharing, which reduces the number of network parameters, simplifies the network, and enhances execution efficiency [35].
Input layer: The initial sample data are input into the network to facilitate the feature
learning of the convolutional layer, and the input data are preprocessed to ensure that the
data align more closely with the requirements of network training.
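For instance, a common preprocessing step of this kind is per-feature standardization; the z-score choice and file name below are illustrative assumptions, not the authors' stated procedure.

```python
import numpy as np

# Standardize each back-feature column so the inputs suit network training.
X = np.load("back_features.npy")          # (n_samples, n_features), illustrative
X = (X - X.mean(axis=0)) / X.std(axis=0)  # zero mean, unit variance per feature
```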
Convolution layer: In this layer, the input features from the previous layer undergo convolution with filters, and a bias term is added. Nonlinear functions are applied to obtain the final output of the convolutional layer.
The output of the convolution layer is expressed as follows:

u_{j'}^l = Σ_{i'} χ_{i'}^{l−1} ∗ W_{i'j'}^l + B_{j'}^l        (11)

χ_{j'}^l = f(u_{j'}^l)        (12)

where χ_{i'}^{l−1} represents the activation value of feature i' in layer l − 1, W_{i'j'}^l represents the convolution kernel between feature j' of layer l and feature i' of the previous layer, and B denotes the bias value. u_{j'}^l describes the weighted sum of feature j' in layer l, and f(·) represents the activation function, with ∗ indicating the convolution operation.
All outputs are connected with the adjacent neurons in the upper layer, which helps
train the sample features, reduce the parameter connections, and improve learning efficiency.
Down-sampling layer: In this layer, the output features are processed via pooling using the down-sampling function. After that, weighted and biased calculations are performed on the processed features to obtain the output of the down-sampling layer:

u_{j'}^l = β_{j'}^l down(χ_{i'}^{l−1}) + B_{j'}^l        (13)

χ_{j'}^l = f(u_{j'}^l)        (14)
where β represents the weight coefficient, and down(·) is the down-sampling function.
Fully connected layer: In the fully connected layer, all two-dimensional features are transformed into one-dimensional features, which serve as the output of the layer. The output of the layer is obtained through weighted summation:

χ^l = f(u^l)        (15)

u^l = ω'^l χ^{l−1} + B^l        (16)
Output layer: The main job is to classify and identify the feature vector obtained by the CNN model and output the result.

The training process of CNN mainly involves two phases: the forward propagation phase and the backward propagation phase. During the former, input data are processed through intermediate layers to generate output values and a loss function. In the latter phase, when there is an error between the output values and the target values, the parameters of each layer in the network are optimized through gradient descent based on the loss function.
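To make the two training phases concrete, the following is a minimal PyTorch sketch of a CNN regressor over the extracted back features; the layer sizes, kernel width, optimizer, and feature count are our assumptions for illustration, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class WeightCNN(nn.Module):
    """Minimal 1D-CNN regressor mapping a back-feature vector to body weight."""
    def __init__(self, n_features: int = 6):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),  # convolution layer, Eq. (11)
            nn.ReLU(),                                   # activation f(.), Eq. (12)
            nn.MaxPool1d(2),                             # down-sampling layer, Eq. (13)
        )
        self.fc = nn.Linear(16 * (n_features // 2), 1)   # fully connected layer, Eq. (16)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.conv(x.unsqueeze(1))  # (batch, 1, n_features)
        return self.fc(x.flatten(1)).squeeze(1)

model = WeightCNN()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One forward pass (loss computation) and one backward pass (gradient descent).
features = torch.randn(8, 6)  # batch of back-feature vectors (illustrative data)
targets = torch.randn(8)      # measured weights (illustrative data)
loss = loss_fn(model(features), targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```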
MAPE = (1/N) Σ_{k=1}^{N} |P_k − R_k| / R_k × 100%        (18)

RMSE = sqrt( (1/N) Σ_{k=1}^{N} (P_k − R_k)^2 )        (19)

where P represents the estimated value of the model, R represents the actual value, N signifies the number of samples, and k denotes the current sample number.
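The evaluation metrics transcribe directly into NumPy; MAE, whose equation precedes Equation (18) but was lost here, is included as the standard mean absolute error.

```python
import numpy as np

def evaluate(P: np.ndarray, R: np.ndarray) -> dict:
    """MAE, MAPE (Eq. 18), and RMSE (Eq. 19) between estimates P and measurements R."""
    err = P - R
    return {
        "MAE": float(np.abs(err).mean()),
        "MAPE": float((np.abs(err) / R).mean() * 100.0),
        "RMSE": float(np.sqrt((err ** 2).mean())),
    }
```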
3. Results
3.1. Performance Comparison of Different Point Cloud Filtering Methods
To investigate the filtering effects of different thresholds, a comparative experiment was conducted with thresholds of 1, 2, and 4, and 30 neighborhood points. Figure 15 depicts the filtering effects of distinct thresholds. When the threshold value is set to 1, there is excessive filtering of the pig point cloud, which leads to partial loss of pig head point cloud data. On the other hand, when the threshold is set to 2, most of the discrete points can be effectively eliminated. When the threshold is set to 4, only a small number of discrete points are removed, and the impact is not apparent. Consequently, a threshold value of 2 was selected for the statistical filtering parameter in this study.
To compare the filtering effects of different filters, the guided filter was used as a comparison in this research. The guided filter is a kind of edge-preserving smoothing method commonly employed in image enhancement, image dehazing, and other applications [36]. Figure 16 depicts the effect of guided filtering. The comparison reveals that while the guided filter was capable of preserving the entire point cloud data of the pigs, it did not filter the discrete point cloud data from the surrounding environment of the pigs. That is why the filtering effect was worse than that of statistical filtering.
Figure 15. Comparison of statistical filtering effects: (a) original image; (b) filtering effect with a threshold of 1; (c) filtering effect with a threshold of 2; (d) filtering effect with a threshold of 4.
Figure 16. Comparison of guided filtering effect: (a) original image; (b) guided filtering effect image.
3.2. Performance Comparison of Different Segmentation Methods for Point Cloud
For the purpose of exploring the influence of different neighborhood radii on the clustering effect, a comparative experiment was conducted in this study using neighborhood radii (Eps) of 0.01, 0.02, and 0.025. The clustering effect of different Eps values is shown in Figure 17. When Eps is set to 0.01, the pig's body trunk is divided, but the head and tail are categorized as background. Also, the outline edge of the pig body is blurred, and part of the pig's point cloud is missing. Setting Eps to 0.02 brings about a complete separation of the pig, with a clear and smooth outline. When Eps is set to 0.025, parts of the pig outline become visible, but the pig cannot be separated from the background. Therefore, the threshold value for Eps was set to 0.02 as the Eps of DBSCAN point cloud density clustering.
Figure 17. Comparison of DBSCAN point cloud segmentation with different Eps values: (a) original image; (b) segmentation effect with an Eps of 0.01; (c) segmentation effect with an Eps of 0.02; (d) segmentation effect with an Eps of 0.025.

This research selected the region growing segmentation for point cloud as a reference for comparing the effects of various cloud segmentation methods. The region growing segmentation method is recognized for its flexibility, preservation of similarity, extensibility, computational efficiency, and interpretability, making it one of the most favored methods in point cloud segmentation [37]. The algorithm selects the seed point as the starting point and expands the seed region by adding neighboring points that meet specific conditions [38]. The curvature threshold (ct) is an important parameter influencing the region growing segmentation algorithm. Figure 18 illustrates the segmentation outcomes under different curvature thresholds. When ct = 0.01, a substantial portion of the pig's back point cloud is missing, and some is erroneously classified as outliers. With ct = 0.02, the number of clusters in the point cloud data decreases, which brings about substantial filling of the pig's back point cloud. Moreover, the segmentation correctly identifies and categorizes pig back point clouds as the primary class, but the issue of pig point cloud adhesion to the background remains severe. Finally, when ct = 0.05, the pig point cloud blends with the background, which means a failure. In comparison to the region growing segmentation algorithm, the DBSCAN-based segmentation model demonstrates greater robustness in handling parameters and noise. DBSCAN, as a clustering algorithm, offers advantages in point cloud segmentation tasks, especially in settings where there is no need to predefine the number of clusters. It exhibits robustness against noise and outliers while being capable of handling clusters with irregular shapes.
Figure 18. Comparison of segmentation effect of region growing segmentation for point cloud: (a) original image; (b) segmentation effect with a ct of 0.01; (c) segmentation effect with a ct of 0.02; (d) segmentation effect with a ct of 0.05.
3.3. Analysis of Down-Sampling Results

Selecting an appropriate down-sampling voxel parameter is critical to improving the efficiency of body dimension feature extraction. Our experiments have shown that as the parameter value for the voxel's regular cube increases, the number of points retained after down-sampling decreases. This parameter value determines whether the down-sampled point cloud remains stable and representative. Figure 19 illustrates the down-sampling effects of different parameter values. When the voxel parameter is set to 0.005, the down-
Figure 19. Down-sampling results of point clouds with varying voxel parameters. (a) Voxel parameter of 0.05; (b) voxel parameter of 0.06; (c) voxel parameter of 0.07; (d) voxel parameter of 0.08.
3.4. Extraction Results of Point Cloud on the Back of Pig

In this research, the eigenvalues of envelope volume, envelope area, projection area, shoulder width, belly width, and hip width of the pig's back were obtained. The partial extraction results of the pig's back features are shown in Table 1. A Pearson coefficient test was employed to assess the back feature correlation of pigs. The correlation coefficients between different feature parameters are shown in Table 2. Among them, the correlation coefficients between shoulder width and envelope area and projection area are 0.753 and 0.759, respectively. This indicates that there is a substantial correlation between the characteristics.

Table 1. Part of the pig characteristic parameters.
The Radial Basis Function (RBF) Neural Network was selected to compare the efficacy of different weight estimation models. RBF is a one-way propagation neural network with three or more layers that provides the most accurate approximation of nonlinear functions and the greatest global performance [39]. Figure 21 presents the results of the two weight estimation models using mean absolute error (MAE), mean absolute percent error (MAPE), and root mean square error (RMSE) as evaluation parameters. This comparison illustrates that the CNN outperforms the RBF neural network on all evaluation parameters.

Figure 20. Scatter chart for weight estimation.
Figure 21. Comparison of pig weight estimation and evaluation indicators.

4. Discussion

4.1. Comparison with Previous Research

To evaluate the effectiveness and practicality of the algorithm proposed in this research, a comparative analysis is conducted against previous research. In reference [14], a 2D camera is employed to capture dorsal images of pigs, followed by the extraction of the dorsal image area of pigs. The correlation between dorsal area and weight is investigated, leading to the development of a model for estimating pig weight. In
contrast, our proposed approach, building upon the extraction of dorsal projection area, extensively extracts various dorsal body dimension features, including envelope volume, envelope area, projection area, shoulder width, belly width, and hip width. All of these contribute to a more precise regression of pig weight. Leveraging a 3D-based weight estimation system allows for accurate capture of intricate details from various angles, effectively addressing distortions arising from changes in perspective and scale. This system provides enhanced accuracy and reliability in weight estimation. In the realm of 3D image-based weight estimation methodology, reference [20] employs four Kinect cameras to capture point cloud data of pigs, subsequently employing mesh reconstruction and a deep neural network (DNN) to construct a weight estimation model. However, the utilization of multiple cameras, while comprehensive in capturing morphological features, presents challenges including high costs, intricate data fusion and processing, spatial constraints, and layout limitations, rendering it unsuitable for deployment in large-scale pig farms.
4.2. Deficiencies and Improvements

The postures of the pigs were found to influence the reliability and accuracy of back feature extraction. Changes in pig posture impact the model's feature extraction, resulting in deviations between the actual feature parameters and the extracted feature parameters, which in turn affect the efficacy of the pig weight estimation model. In future research, we will consider the feature differences under various postures and construct different models to estimate the body weight of pigs in different postures.

Furthermore, due to uncontrolled movements of pigs, the acquired point cloud data are susceptible to noise, non-uniform or under-sampling, voids, and omissions, all of which can affect the effectiveness of body dimension parameter extraction. In future work, we plan to integrate multiple consecutive frames of point cloud data to densify and fill in the sparse and missing portions of the pig's point cloud. To facilitate the practical implementation of non-contact weight estimation for pigs based on 3D imagery, further research will broaden the scope of target subjects and enhance the weight estimation model's generalization capabilities. Drawing upon the output of pig weight estimation, we intend to assess the growth status of the pigs and offer tailored feeding recommendations. A weight management system was constructed in this research, as illustrated in Figure 22. The weight estimation model can be deployed on application terminals to provide information on pig weight gain status, weight-based feeding recommendations, and alert notifications.
5. Conclusions

This study introduces a 3D point cloud-based method for estimating pig weight, so as to address the challenges related to pig weight monitoring and pigs' susceptibility to stress. In this method, we separated the pig point cloud from the background using statistical filtering and DBSCAN, thereby resolving the problem of pig and background adhesion. Using convex hull detection, the point cloud data of the pig's head and tail were removed in order to obtain complete point cloud data of their backs. Voxel down-sampling was used to reduce the number of point cloud data on the backs of pigs and enhance the weight estimation model's efficiency and real-time performance. A weight estimation model based on back parameter characteristics and CNN was constructed. Also, the MAPE of the weight estimation model is only 5.357%, which could meet the demand of automatic weight monitoring in actual production.
It is worth mentioning that all the data used for training and validation in this method were collected from a real production environment, and only a depth camera above the drive channel was needed to implement this method. Therefore, the method is easy to popularize and apply. Notably, it can provide technical support for pig weight monitoring within both breeding and slaughterhouse settings.
Author Contributions: Z.L.: writing—original draft, review and editing, conceptualization, methodology, equipment adjustment, data curation, software. J.H.: writing—original draft, data curation, software, formal analysis, investigation, conceptualization. H.X.: review and editing, conceptualization, methodology, formal analysis, funding acquisition. H.T.: writing—review and editing, conceptualization. Y.C.: review and editing, conceptualization, formal analysis. H.L.: review and editing, conceptualization, formal analysis. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the Key Projects of Intergovernmental Cooperation in International Scientific and Technological Innovation (Project No. 2017YFE0114400), China.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Acknowledgments: The authors would like to acknowledge the support from the Wens Foodstuffs
Group Co., Ltd. (Guangzhou, China) for the use of their animals and facilities.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Noya, I.; Aldea, X.; González-García, S.; Gasol, C.M.; Moreira, M.T.; Amores, M.J.; Marin, D.; Boschmonart-Rives, J. Environmental
assessment of the entire pork value chain in catalonia—A strategy to work towards circular economy. Sci. Total Environ. 2017, 589,
122–129. [CrossRef]
2. Secco, C.; da Luz, L.M.; Pinheiro, E.; Puglieri, F.N.; Freire, F.M.C.S. Circular economy in the pig farming chain: Proposing a model
for measurement. J. Clean. Prod. 2020, 260, 121003. [CrossRef]
3. He, W.; Mi, Y.; Ding, X.; Liu, G.; Li, T. Two-stream cross-attention vision Transformer based on RGB-D images for pig weight
estimation. Comput. Electron. Agric. 2023, 212, 107986. [CrossRef]
4. Bonneau, M.; Lebret, B. Production systems and influence on eating quality of pork. Meat Sci. 2010, 84, 293–300. [CrossRef]
5. Lebret, B.; Čandek-Potokar, M. Pork quality attributes from farm to fork. Part I. Carcass and fresh meat. Animal 2022, 16, 100402.
[CrossRef]
6. Li, J.; Yang, Y.; Zhan, T.; Zhao, Q.; Zhang, J.; Ao, X.; He, J.; Zhou, J.; Tang, C. Effect of slaughter weight on carcass characteristics,
meat quality, and lipidomics profiling in longissimus thoracis of finishing pigs. LWT 2021, 140, 110705. [CrossRef]
7. Jensen, D.B.; Toft, N.; Cornou, C. The effect of wind shielding and pen position on the average daily weight gain and feed
conversion rate of grower/finisher pigs. Livest. Sci. 2014, 167, 353–361. [CrossRef]
8. Valros, A.; Sali, V.; Hälli, O.; Saari, S.; Heinonen, M. Does weight matter? Exploring links between birth weight, growth and
pig-directed manipulative behaviour in growing-finishing pigs. Appl. Anim. Behav. Sci. 2021, 245, 105506. [CrossRef]
9. Kashiha, M.; Bahr, C.; Ott, S.; Moons, C.P.; Niewold, T.A.; Ödberg, F.O.; Berckmans, D. Automatic weight estimation of individual
pigs using image analysis. Comput. Electron. Agric. 2014, 107, 38–44. [CrossRef]
10. Apichottanakul, A.; Pathumnakul, S.; Piewthongngam, K. The role of pig size prediction in supply chain planning. Biosyst. Eng.
2012, 113, 298–307. [CrossRef]
11. Dokmanović, M.; Velarde, A.; Tomović, V.; Glamočlija, N.; Marković, R.; Janjić, J.; Baltić, M.Ž. The effects of lairage time and
handling procedure prior to slaughter on stress and meat quality parameters in pigs. Meat Sci. 2014, 98, 220–226. [CrossRef]
[PubMed]
12. Bhoj, S.; Tarafdar, A.; Chauhan, A.; Singh, M.; Gaur, G.K. Image processing strategies for pig liveweight measurement: Updates
and challenges. Comput. Electron. Agric. 2022, 193, 106693. [CrossRef]
13. Tu, G.J.; Jørgensen, E. Vision analysis and prediction for estimation of pig weight in slaughter pens. Expert Syst. Appl. 2023,
220, 119684. [CrossRef]
14. Minagawa, H.; Hosono, D. A light projection method to estimate pig height. In Swine Housing, Proceedings of the First International
Conference, Des Moines, IA, USA, 9–11 October 2000; ASABE: St. Joseph, MI, USA, 2000; pp. 120–125.
15. Li, G.; Liu, X.; Ma, Y.; Wang, B.; Zheng, L.; Wang, M. Body size measurement and live body weight estimation for pigs based on
back surface point clouds. Biosyst. Eng. 2022, 218, 10–22. [CrossRef]
16. Sean, O.C. Research and Implementation of a Pig Weight Estimation System Based on Fuzzy Neural Network. Master’s Thesis,
Nanjing Agricultural University, Nanjing, China, 2017.
17. Suwannakhun, S.; Daungmala, P. Estimating pig weight with digital image processing using deep learning. In Proceedings of the
2018 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Las Palmas de Gran Canaria,
Spain, 26–29 November 2018; pp. 320–326.
18. Liu, T.H. Optimization and 3D Reconstruction of Pig Body Size Parameter Extraction Algorithm Based on Binocular Vision. Ph.D.
Thesis, China Agricultural University, Beijing, China, 2014.
19. He, H.; Qiao, Y.; Li, X.; Chen, C.; Zhang, X. Automatic weight measurement of pigs based on 3D images and regression network.
Comput. Electron. Agric. 2021, 187, 106299. [CrossRef]
20. Kwon, K.; Park, A.; Lee, H.; Mun, D. Deep learning-based weight estimation using a fast-reconstructed mesh model from the
point cloud of a pig. Comput. Electron. Agric. 2023, 210, 107903. [CrossRef]
21. Guo, R.; Xie, J.; Zhu, J.; Cheng, R.; Zhang, Y.; Zhang, X.; Gong, X.; Zhang, R.; Wang, H.; Meng, F. Improved 3D point cloud
segmentation for accurate phenotypic analysis of cabbage plants using deep learning and clustering algorithms. Comput. Electron.
Agric. 2023, 211, 108014. [CrossRef]
22. Ali, Z.A.; Zhangang, H. Multi-unmanned aerial vehicle swarm formation control using hybrid strategy. Trans. Inst. Meas. Control
2021, 43, 2689–2701. [CrossRef]
23. Li, J.; Ma, W.; Li, Q.; Zhao, C.; Tulpan, D.; Yang, S.; Ding, L.; Gao, R.; Yu, L.; Wang, Z. Multi-view real-time acquisition and 3D
reconstruction of point clouds for beef cattle. Comput. Electron. Agric. 2022, 197, 106987. [CrossRef]
24. Zhao, Q.; Gao, X.; Li, J.; Luo, L. Optimization algorithm for point cloud quality enhancement based on statistical filtering. J. Sens.
2021, 2021, 7325600. [CrossRef]
25. Yang, Q.F.; Gao, W.Y.; Han, G.; Li, Z.Y.; Tian, M.; Zhu, S.H.; Deng, Y.H. HCDC: A novel hierarchical clustering algorithm based on
density-distance cores for data sets with varying density. Inf. Syst. 2023, 114, 102159. [CrossRef]
26. Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-96), Portland, OR, USA, 2–4 August 1996; pp. 226–231.
27. Chen, H.; Liang, M.; Liu, W.; Wang, W.; Liu, P.X. An approach to boundary detection for 3D point clouds based on DBSCAN
clustering. Pattern Recognit. 2022, 124, 108431. [CrossRef]
28. Yu, T.; Hu, C.; Xie, Y.; Liu, J.; Li, P. Mature pomegranate fruit detection and location combining improved F-PointNet with 3D
point cloud clustering in orchard. Comput. Electron. Agric. 2022, 200, 107233. [CrossRef]
29. Ma, B.; Du, J.; Wang, L.; Jiang, H.; Zhou, M. Automatic branch detection of jujube trees based on 3D reconstruction for dormant
pruning using the deep learning-based method. Comput. Electron. Agric. 2021, 190, 106484. [CrossRef]
30. Lu, M.; Xiong, Y.; Li, K.; Liu, L.; Yan, L.; Ding, Y.; Lin, X.; Yang, X.; Shen, M. An automatic splitting method for the adhesive
piglets’ gray scale image based on the ellipse shape feature. Comput. Electron. Agric. 2016, 120, 53–62. [CrossRef]
31. Fernandes, A.F.; Dórea, J.R.; Fitzgerald, R.; Herring, W.; Rosa, G.J. A novel automated system to acquire biometric and
morphological measurements and predict body weight of pigs via 3D computer vision. J. Anim. Sci. 2019, 97, 496–508. [CrossRef]
32. Panda, S.; Gaur, G.K.; Chauhan, A.; Kar, J.; Mehrotra, A. Accurate assessment of body weights using morphometric measurements
in Landlly pigs. Trop. Anim. Health Prod. 2021, 53, 362. [CrossRef]
33. Li, Z.; Liu, F.; Yang, W.; Peng, S.; Zhou, J. A survey of convolutional neural networks: Analysis, applications, and prospects. IEEE
Trans. Neural Netw. Learn. Syst. 2022, 33, 6999–7019. [CrossRef]
34. Jiang, Q.; Jia, M.; Bi, L.; Zhuang, Z.; Gao, K. Development of a core feature identification application based on the Faster R-CNN
algorithm. Eng. Appl. Artif. Intell. 2022, 115, 105200. [CrossRef]
35. Diao, Z.; Yan, J.; He, Z.; Zhao, S.; Guo, P. Corn seedling recognition algorithm based on hyperspectral image and lightweight-3D-
CNN. Comput. Electron. Agric. 2022, 201, 107343. [CrossRef]
36. Kong, W.; Chen, Y.; Lei, Y. Medical image fusion using guided filter random walks and spatial frequency in framelet domain.
Signal Process. 2021, 181, 107921. [CrossRef]
37. Ma, X.; Luo, W.; Chen, M.; Li, J.; Yan, X.; Zhang, X.; Wei, W. A fast point cloud segmentation algorithm based on region growth. In
Proceedings of the 2019 18th International Conference on Optical Communications and Networks (ICOCN), Huangshan, China,
5–8 August 2019; pp. 1–2.
38. Han, X.; Wang, X.; Leng, Y.; Zhou, W. A plane extraction approach in inverse depth images based on region-growing. Sensors
2021, 21, 1141. [CrossRef] [PubMed]
39. Xu, Y.; Jung, C.; Chang, Y. Head pose estimation using deep neural networks and 3D point clouds. Pattern Recognit. 2022,
121, 108210. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.