Enhanced Particle Swarm Optimization Algorithm for Sea Clutter
Parameter Estimation in Generalized Pareto Distribution
Bin Yang 1,2, * and Qing Li 1,2
Abstract: Accurate parameter estimation is essential for modeling the statistical characteristics of
ocean clutter. Common parameter estimation methods in generalized Pareto distribution models
have limitations, such as restricted parameter ranges, lack of closed-form expressions, and low
estimation accuracy. In this study, the particle swarm optimization (PSO) algorithm is used to solve
the non-closed-form parameter estimation equations of the generalized Pareto distribution. The
goodness-of-fit experiments show that the PSO algorithm effectively solves the non-closed parameter
estimation problem and enhances the robustness of fitting the generalized Pareto distribution to
heavy-tailed oceanic clutter data. In addition, a new parameter estimation method for the generalized
Pareto distribution is proposed in this study. By using the difference between the statistical histogram
of the data and the probability density function/cumulative distribution function of the generalized
Pareto distribution as the target, an adaptive function with weighted coefficients is constructed to
estimate the distribution parameters. A hybrid PSO (HPSO) algorithm is used to search for the best
position of the fitness function to achieve the best parameter estimation of the generalized Pareto
distribution. Simulation analysis shows that the HPSO algorithm outperforms the PSO algorithm in
solving the parameter optimization task of the generalized Pareto distribution. A comparison with
other traditional parameter estimation methods for generalized Pareto distribution shows that the
HPSO algorithm exhibits strong parameter estimation performance, is efficient and stable, and
is not limited by the parameter range.
Keywords: sea clutter; generalized Pareto distribution; hybridized particles; particle swarm optimization; parameter estimation; estimation performance

Citation: Yang, B.; Li, Q. Enhanced Particle Swarm Optimization Algorithm for Sea Clutter Parameter Estimation in Generalized Pareto Distribution. Appl. Sci. 2023, 13, 9115. https://doi.org/10.3390/app13169115

Academic Editor: Vincent A. Cicirello
Received: 4 July 2023; Revised: 30 July 2023; Accepted: 9 August 2023; Published: 10 August 2023

1. Introduction

Understanding the characteristics of sea clutter is crucial for designing effective radar target-detection algorithms. The statistical properties of sea clutter play a significant role in determining the constant false alarm characteristics of target detection [1–3]. Specifically, sea clutter data collected by shore-based radar exhibit a pronounced long tail in their statistical distribution [4–8], commonly referred to as the heavy-tailing phenomenon.
Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The composite Gaussian model is well suited for capturing the heavy-tailed characteristics observed in sea clutter data [9–11]. This model comprises a slowly varying structural component that modulates a fast-varying scattering component [12], providing a plausible explanation for the formation mechanism of sea clutter. Depending on the composition of the texture components, three distinct statistical distributions are commonly used, namely the K distribution [13], the generalized Pareto distribution [14], and the IG-CG distribution [15]. The probability density function of the K distribution can be regarded as the product of the texture component and the speckle component of sea clutter, with the texture component following a gamma distribution. In contrast, the texture component of the generalized Pareto distribution follows an inverse gamma distribution, while the IG-CG is a compound Gaussian distribution model with an inverse Gaussian texture. Each distribution model is associated with its optimal detectors, and accurate parameter estimation plays a pivotal role in determining the performance of these detectors. Hence, precise parameter estimation is of utmost importance for the effective application of distribution models.
Parameter estimation methods for the generalized Pareto distribution can be cate-
gorized into moment estimation [16–19], maximum likelihood estimation [20,21], and
quantile parameter estimation [22,23]. Moment estimation involves estimating integer
order moments, fractional order moments, logarithmic moments, and other variants. The
accuracy of parameter estimation varies across different moment estimation methods,
with challenges such as non-closed expressions and a limited range for shape parameter
estimation. Maximum likelihood estimation provides the highest accuracy but involves
computationally intensive solutions of nonlinear expressions [24,25]. Quantile-based parameter estimation methods effectively mitigate estimation errors caused by anomalous clutter data samples [23]. However, they exploit only part of the echo data information, and practical application often requires combining estimation results from multiple quantile points, thereby increasing computational complexity.
Liang et al. [26] proposed a multi-scan recursive Bayesian estimation method for
the parameter estimation of the generalized Pareto distribution in large-scale sea clutter
scenes, which demonstrated convergence and robustness. Shui et al. [20] addressed sea
clutter modeling with outliers using the generalized Pareto distribution and proposed
an iterative algorithm to efficiently solve the truncated maximum likelihood equation.
Yu et al. [23] employed a double-percentile parameter estimation method for effective parameter estimation of the generalized Pareto distribution from measured sea clutter,
providing an analysis of its effectiveness. Solving non-closed expressions poses challenges
for parameter estimators in practical applications of the generalized Pareto distribution.
These non-closed expressions can be treated as nonlinear optimization problems of the
objective function, and population intelligence search methods from the field of artificial
intelligence [27,28] offer common approaches for solving such problems. The application
of artificial intelligence methods to parameter estimation of sea clutter distribution models
is a relatively new research direction [29].
In response to the problem of non-closed parameter estimation equations in some
parameter estimation methods for the generalized Pareto distribution model, this study pro-
poses the use of the particle swarm optimization (PSO) algorithm to search for non-closed
expressions and solve the parameter estimation problem. The PSO algorithm is applied
to nonlinear optimization problems, including the non-closed expressions of parameter
estimation methods such as 0.5th/1st-order moment estimation, 0.25th-order logarithmic
moment estimation, and maximum likelihood estimation. Furthermore, to address the
limitations of low estimation accuracy, restricted estimation range, and non-closed expres-
sions in traditional parameter estimation methods for the generalized Pareto distribution,
this study aims to find an optimal parameter estimation model for the generalized Pareto
distribution that does not rely on complex mathematical expressions. To achieve this, a
hybrid particle swarm optimization algorithm (HPSO) is proposed to perform the optimal
parameter search task for the target objectives in parameter estimation of the generalized
Pareto distribution.
The main work and innovations of this study are summarized as follows:
• In response to the non-closed expression phenomenon in different parameter estimation methods, this study investigates the construction of fitness functions for the PSO and HPSO algorithms when solving target optimization problems.
• By using simulated random data samples of the generalized Pareto distribution, the
impact of parameters such as population size and iteration count in the PSO algorithm
on its performance is examined, and the optimal parameter configuration for each
targeted objective is determined.
2. Background
In this section, we briefly introduce the necessary background of the generalized Pareto distribution model and its basic parameter estimation methods.

2.1. Generalized Pareto Distribution Model

Under the compound Gaussian model, sea clutter is represented as the product of two random processes,

$$x = \sqrt{\tau}\,\mu \quad (1)$$

where the scattered component µ is a complex Gaussian random variable with zero mean and unit covariance, and the structural component τ is a positive random process obeying the inverse gamma distribution, i.e., 1/τ satisfies the gamma distribution, and the probability density function is defined as

$$f_\tau(\tau; a, b) = \frac{1}{b^a \Gamma(a)}\, \tau^{-(a+1)}\, e^{-\frac{1}{b\tau}} \quad (2)$$

where a is the shape parameter, reflecting the degree of tailing in the statistical distribution of the sea clutter data (the smaller the shape parameter a, the more pronounced the tailing phenomenon, consistent with the limiting behavior discussed below), and b is the scale parameter, characterizing the intensity level of the echo signal.
According to the product form of the two random processes in Equation (1), the amplitude probability density function (PDF) of the generalized Pareto distribution can be deduced as

$$f_x(x) = \int_0^\infty f_\tau(\tau; a, b)\, f_\mu(x \mid \tau)\, d\tau \quad (3)$$

where f_µ(x|τ) is the PDF of the scattered component µ, which has the following mathematical expression:

$$f_\mu(x \mid \tau) = \frac{2x}{\tau}\, e^{-\frac{x^2}{\tau}} \quad (4)$$
Substituting Equations (2) and (4) into Equation (3) above, the following generalized Pareto distribution amplitude PDF can be obtained:

$$f_x(x) = \int_0^\infty \frac{1}{b^a \Gamma(a)}\, \tau^{-(a+1)}\, e^{-\frac{1}{b\tau}} \cdot \frac{2x}{\tau}\, e^{-\frac{x^2}{\tau}}\, d\tau = \frac{2xb}{(1+bx^2)^{a+1}\, \Gamma(a)} \int_0^\infty y^a e^{-y}\, dy = \frac{2xab}{(1+bx^2)^{a+1}} \quad (5)$$
where the substitution y = (1 + bx²)/(bτ) has been used. In addition, when the structural component in Equation (3) is in the form of an exponential distribution, the intensity distribution of the generalized Pareto distribution can be deduced as

$$p(x) = \frac{ab}{(1+bx)^{a+1}} \quad (6)$$
From the above Equation (5), it can be seen that the amplitude PDF of the generalized Pareto distribution approximates the Rayleigh distribution when the shape parameter a → ∞, and the sea clutter amplitude trailing phenomenon is gradually aggravated when a → 0. With the scale parameter b fixed, the variation of the amplitude PDF curve of the generalized Pareto distribution with the shape parameter a is shown in Figure 1. As can be seen from Figure 1, when the scale parameter b is fixed at 1.5, the peak of the amplitude PDF curve of the generalized Pareto distribution increases with the shape parameter, the peak gradually moves in the direction of smaller amplitude, and the trailing becomes shorter.
2.2. Basic Parameter Estimation Methods for the Generalized Pareto Distribution
The most critical step in the practical application of the generalized Pareto distribution for modeling the statistical properties of sea clutter is to achieve accurate parameter estimation and thereby improve the detection performance of maritime radar. For different distribution models, a robust parameter estimation method is the basis for the application of the
distribution model. The commonly used parameter estimation methods include moment
estimation class methods, maximum likelihood estimation methods, quantile parameter
estimation methods, and artificial intelligence class parameter estimation methods.
Among the moment estimation class methods, both positive 2nd/4th-order moment
estimation and 1st-order logarithmic moment estimation have closed expressions for pa-
rameter estimation, which can easily obtain the results of parameter estimation, but are
limited by the estimation range of shape parameters. The positive 0.5th/1st-order moment
estimation extends the estimation range of the shape parameter to the fractional domain,
but the method does not have a closed expression. The positive 0.25th-order logarith-
mic moment estimation method can achieve a further extension of the shape parameter,
thus widening the space where the generalized Pareto distribution should be used in the
statistical modeling of sea clutter, but the method also cannot derive a closed parameter
estimation expression. Maximum likelihood estimation can obtain relatively high parameter estimation accuracy, but the estimates of the scale parameter are given by a nonlinear system of equations, which again has no explicit solution. In the following, the mathematical forms of each type of parameter estimation method introduced above are derived one by one and serve as the basis for the subsequent optimization of parameter estimation of the generalized Pareto distribution.
The method of moment estimation class parameter estimation equates the moments
of each order of the probability distribution model of the sample to the cumulants of the
statistical distribution model and obtains all parameter values of the distribution model by
solving the system of moment equations of each order. Assuming the unknown parameter
θ = ( a, b) of the generalized Pareto distribution, the r-th-order moments of the origin of the
probability density function can be calculated from Equation (5) as
$$E(x^r) = \int_0^\infty x^r f(x;\theta)\, dx = \int_0^\infty \frac{2abx^{r+1}}{(1+bx^2)^{a+1}}\, dx = \frac{a}{b^{r/2}} \int_0^\infty \frac{\lambda^{r/2}}{(1+\lambda)^{a+1}}\, d\lambda = \frac{\Gamma\!\left(1+\frac{r}{2}\right)\Gamma\!\left(a-\frac{r}{2}\right)}{b^{r/2}\, \Gamma(a)}, \quad r \in \mathbb{R},\ r \neq 0 \quad (7)$$
where the variable λ = bx2 , the solution process involves some of the relevant properties
of the gamma function operation, and it is known from its properties that the above
equation needs to satisfy a > r/2 to hold, i.e., the estimation range of the shape parameter
will be limited. If the sea clutter sample data $X = [x_1, x_2, \cdots, x_n]$ are known, the r-th-order sample origin moment of the random data X is $m_r = \frac{1}{n}\sum_{i=1}^n x_i^r$. The generalized
Pareto distribution contains two parameters to be estimated, so it is necessary to list the
equations of two different order moments and complete the equation solution to obtain the
determined distribution model.
When r is taken as 2 and 4, respectively, the parameter estimation expression for
positive 2nd/4th-order moment estimation is:
$$\hat{a} = 2 + \frac{2m_2^2}{m_4 - 2m_2^2}, \qquad \hat{b} = \frac{1}{(\hat{a}-1)\, m_2} \quad (8)$$
When r is taken as 0.5 and 1, respectively, the parameter estimation expression for positive 0.5th/1st-order moment estimation is:

$$\frac{m_1}{m_{0.5}^2} = \frac{\Gamma(1.5)\, \Gamma(a-0.5)\, \Gamma(a)}{\Gamma^2(1.25)\, \Gamma^2(a-0.25)}, \qquad \hat{b} = \left[\frac{\Gamma(1.5)\, \Gamma(\hat{a}-0.5)}{\Gamma(\hat{a})\, m_1}\right]^2 \quad (9)$$
The above Equation (8) can be obtained by substituting the calculated sample 2nd-
and 4th-order moments of origin into the set of equations to obtain the estimation results
of the parameters. From Equation (9), it can be seen that there is no closed expression
for the estimation of the shape parameters, so the parameter estimation results cannot be
calculated directly by substituting the values of the sample moments.
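Equation (8) is the only fully closed-form pair above, so it can be sketched end to end. The following hypothetical example (not from the paper) simulates generalized Pareto amplitude data via the compound model and recovers the parameters with the positive 2nd/4th-order moment estimator; the true values a = 6 and b = 1.5 are illustrative, chosen so that the sample 4th moment is well behaved.

```python
import math
import random

def sample_gp_amplitude(a, b, rng):
    # Compound-model draw: inverse-gamma texture times Rayleigh speckle
    tau = 1.0 / rng.gammavariate(a, b)
    return math.sqrt(tau) * math.sqrt(-math.log(rng.random()))

def moment_estimate_2_4(xs):
    # Positive 2nd/4th-order moment estimator, Equation (8)
    n = len(xs)
    m2 = sum(x ** 2 for x in xs) / n
    m4 = sum(x ** 4 for x in xs) / n
    a_hat = 2.0 + 2.0 * m2 ** 2 / (m4 - 2.0 * m2 ** 2)
    b_hat = 1.0 / ((a_hat - 1.0) * m2)
    return a_hat, b_hat

rng = random.Random(1)
a_true, b_true = 6.0, 1.5
xs = [sample_gp_amplitude(a_true, b_true, rng) for _ in range(200_000)]
a_hat, b_hat = moment_estimate_2_4(xs)
```

Note that the sample 4th moment becomes very noisy as the shape parameter approaches the a > 2 validity boundary, which is precisely the limitation discussed in the text.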
In addition, the method of estimating log moments of the generalized Pareto dis-
tribution proposed in the literature [32] also belongs to the moment estimation class of
parameter estimation methods, assuming Z = x2 , then the r-th-order origin moments of
the intensity probability density function of the generalized Pareto distribution and the
corresponding sample moments can be defined as

$$E(z^r \ln z) = \int_0^\infty z^r \ln z \cdot p(z;\theta)\, dz, \qquad \kappa_r = \frac{1}{n}\sum_{i=1}^n z_i^r \ln z_i, \quad r \in \mathbb{R} \quad (10)$$
Replacing the population moment with the sample moment in Equation (10) yields

$$\frac{\kappa_r}{m_{2r}} - \kappa_0 = \psi(1+r) - \psi(1) + \psi(\hat{a}) - \psi(\hat{a}-r), \qquad \hat{b} = \exp(\psi(1) - \kappa_0 - \psi(\hat{a})) \quad (11)$$
where ψ(·) is the digamma function, obtained by differentiating the logarithm of the gamma function, and satisfies ψ(1 + x) = ψ(x) + 1/x. By the nature of the digamma function, the effective estimation range of the shape parameter is a > r.
When r is equal to 1, the parameter estimation expression for positive 1st-order logarithmic moment estimation is

$$\hat{a} = 1 + \frac{1}{\kappa_1/m_2 - \kappa_0 - 1}, \qquad \hat{b} = \exp(\psi(1) - \kappa_0 - \psi(\hat{a})) \quad (12)$$
The positive 1st-order logarithmic moment estimation has an explicit solution, and the parameter estimates can be obtained by calculating the logarithmic moment and the integer-order moment of the corresponding order from the sample and then substituting them into Equation (12).
When r is equal to 0.25, the logarithmic moment estimation corresponds to the positive
0.25th-order logarithmic moment estimation method, and it can be seen from Equation (11)
that the shape parameter estimator does not have a closed expression at this time, and the
parameters cannot be solved directly through the substitution of sample moments.
Maximum likelihood estimation is a relatively common parameter estimation method
with specific applications in the estimation of parameters of various types of statistical
distribution models. The method has a relatively high accuracy of parameter estimation
and is close to the lower bound of what can be achieved in terms of parameter estimation
accuracy. The log-likelihood function of the generalized Pareto distribution is as follows:

$$L(a,b) = \ln \prod_{i=1}^n f_x(x_i; a, b) = \ln\left[(2ab)^n \prod_{i=1}^n \frac{x_i}{(1+bx_i^2)^{a+1}}\right] = n\ln(2ab) - (a+1)\sum_{i=1}^n \ln\!\left(1+bx_i^2\right) + \sum_{i=1}^n \ln x_i \quad (13)$$
To find the optimal parameters at which the log-likelihood function is maximized, Equation (13) is differentiated with respect to the shape parameter a and the scale parameter b, respectively, and each derivative is set to zero, resulting in the following likelihood equations:
$$\frac{\partial L(a,b)}{\partial a} = \frac{n}{a} - \sum_{i=1}^n \ln\!\left(1+bx_i^2\right) = 0, \qquad \frac{\partial L(a,b)}{\partial b} = \frac{n}{b} - (a+1)\sum_{i=1}^n \frac{x_i^2}{1+bx_i^2} = 0 \quad (14)$$
By eliminating the shape parameter a from the above system of equations through substitution, an equation involving only the scale parameter b is obtained; once its root is found, it is re-substituted to obtain the estimator of the shape parameter. The maximum likelihood estimator of the generalized Pareto distribution is as follows:

$$\hat{a} = \frac{n}{\sum_{i=1}^n \frac{\hat{b}x_i^2}{1+\hat{b}x_i^2}} - 1, \qquad \frac{1}{n}\sum_{i=1}^n \ln\!\left(1+\hat{b}x_i^2\right)\left(1 - \frac{1}{n}\sum_{i=1}^n \frac{\hat{b}x_i^2}{1+\hat{b}x_i^2}\right) = \frac{1}{n}\sum_{i=1}^n \frac{\hat{b}x_i^2}{1+\hat{b}x_i^2} \quad (15)$$

where the calculation of the shape parameter â requires the value of the scale parameter b̂ first; since b̂ cannot be obtained directly by calculation, a corresponding objective function can be established and its optimal value found by search.
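As a baseline for comparison with swarm search, the residual of the b̂-equation in Equation (15) changes sign exactly once for moderately heavy tails (it is positive below the root and negative above it when the shape parameter exceeds 2), so a plain bisection already solves it. This sketch, with hypothetical simulated data, illustrates that baseline; it is not the paper's method.

```python
import math
import random

def sample_gp_amplitude(a, b, rng):
    # Compound-model draw: inverse-gamma texture times Rayleigh speckle
    tau = 1.0 / rng.gammavariate(a, b)
    return math.sqrt(tau) * math.sqrt(-math.log(rng.random()))

def mle_residual(b, xs):
    # Left side minus right side of the b-equation in Equation (15)
    n = len(xs)
    l = sum(math.log1p(b * x * x) for x in xs) / n
    s = sum(b * x * x / (1.0 + b * x * x) for x in xs) / n
    return l * (1.0 - s) - s

def mle_estimate(xs, lo=1e-3, hi=1e3, iters=60):
    # The residual is positive for b below the root and negative above it
    # (for shape parameters a > 2), so bisect on a logarithmic scale.
    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        if mle_residual(mid, xs) > 0.0:
            lo = mid
        else:
            hi = mid
    b_hat = math.sqrt(lo * hi)
    s = sum(b_hat * x * x / (1.0 + b_hat * x * x) for x in xs)
    a_hat = len(xs) / s - 1.0   # shape estimator from Equation (15)
    return a_hat, b_hat

rng = random.Random(2)
xs = [sample_gp_amplitude(3.0, 1.5, rng) for _ in range(50_000)]
a_hat, b_hat = mle_estimate(xs)
```

For shape parameters near or below 2 the single-crossing argument no longer holds, which is where a population-based search such as PSO becomes attractive.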
The basic parameter estimation methods of the generalized Pareto distribution are
introduced above, the characteristics of various estimation methods are explained, specific
parameter estimation expressions are given, and Table 1 shows the comparison of the
characteristics of each parameter estimation method.
As can be seen from Table 1, some of the parameter estimation methods for the generalized Pareto distribution have non-closed expressions, and to obtain parameter estimation results for such methods, intelligent algorithms can be used to solve them. All the non-closed expressions involved in the table are nonlinear functions, so they can be transformed into nonlinear optimization problems. Intelligent algorithms themselves have nonlinear characteristics, and a suitable intelligent algorithm can search the feasible solution space for the optimal solution of the nonlinear function, thereby obtaining estimates for the non-closed expressions in the parameter estimation methods of the generalized Pareto distribution.
3. Method
3.1. Particle Swarm Optimization Algorithm
The particle swarm optimization (PSO) algorithm is a swarm intelligence class op-
timization algorithm that originated from the study of bird predation behavior, which
was proposed by Kennedy and Eberhart in 1995 [33], and has been widely used in many fields after decades of continuous research and development [34,35]. The PSO algorithm randomly initializes particles in the feasible solution space of the problem, each particle is given a certain velocity of motion at its initial position, and the trajectory of each particle is influenced by its own experience and the overall behavior of the population throughout the iteration cycle.
The fitness function is the target object of the particle population activity because it
determines the activity space and search path of the particles. The particle population
has the ability of memory, and all particles calculate the corresponding fitness value in
each iteration, then compare it with the historical value and select the optimal value of the
historical record, and then guide the whole population toward the direction of the global
optimal solution. The basic process of the particle swarm algorithm is:
(a) The positions X and velocities V of the particles are randomly initialized in the D-
dimensional space of feasible solutions, where the i-th particle position and velocity
can be expressed as
$$X_i = (x_{i1}, x_{i2}, \cdots, x_{iD}), \qquad V_i = (v_{i1}, v_{i2}, \cdots, v_{iD}), \qquad i = 1, 2, \cdots, N \quad (16)$$
(b) Based on the determined fitness function (particle population search object) and the
corresponding position of each particle, the corresponding fitness value is calculated,
and then the global optimum is evaluated, where the historical optimum of the particle
and the global optimum of the population is assumed to be Pbest and Gbest , respectively,
that is, we have
$$P_{best} = (p_{i1}, p_{i2}, \cdots, p_{iD}), \qquad G_{best} = (g_1, g_2, \cdots, g_D), \qquad i = 1, 2, \cdots, N \quad (17)$$
(c) The particle population is continuously updated iteratively to search for the extreme
value solution of the fitness function, and the velocity and position of each particle in
the next iteration are updated by the individual historical optimal value Pbest and the
current velocity V i , and the k + 1th update of the particle is given by
Vidk+1 = ωVidk + c1 r1 Pid
k − X k + c r Pk − X k
id 2 2 gd id
(18)
k +1 k +V k + 1
Xid = Xid id 0
where w is the inertia weight, which gives the particle the inertia of motion trend;
k is the number of current iterations; c1 and c2 are learning factors; and r1 and r2
are random numbers distributed in the interval of [0, 1]. To prevent the particles
from searching blindly and falling into the risk of local optimum, the velocity and
position of the particles are usually limited to a certain interval. The learning factor
and inertia weight are particularly important parameters of the PSO algorithm, where
the learning factor c1 of the particle itself, also known as the cognitive parameter, is
an important indicator of the particle’s search ability; the learning factor c2 of the
population is a social cognitive parameter that affects the search behavior of the
whole particle population; and the size of the inertia weight is an expression of the
movement ability of each particle.
(d) Finally, the algorithm is terminated by setting the corresponding end conditions. There
are generally two kinds of termination conditions: the first is to set the maximum
number of iterations of the particle population, and the second criterion is to terminate
when the optimal solution of the particle swarm has remained unchanged for five or
more consecutive iterations.
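The steps (a)–(d) above can be sketched in a minimal one-dimensional form. The sketch below uses illustrative defaults (ω = 0.7, c1 = c2 = 2, a simple quadratic test function), not the configurations studied later in the paper.

```python
import random

def pso_minimize(fitness, lo, hi, n_particles=20, iters=100,
                 w=0.7, c1=2.0, c2=2.0, seed=0):
    # Minimal 1-D PSO following steps (a)-(d): random initialization,
    # personal/global bests, and velocity/position updates per Equation (18).
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [rng.uniform(-(hi - lo), hi - lo) * 0.1 for _ in range(n_particles)]
    pbest = xs[:]                          # personal best positions
    pval = [fitness(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]        # global best position/value
    vmax = 0.2 * (hi - lo)                 # clamp velocity: avoid blind search
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (w * vs[i] + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            vs[i] = max(-vmax, min(vmax, vs[i]))
            xs[i] = max(lo, min(hi, xs[i] + vs[i]))
            f = fitness(xs[i])
            if f < pval[i]:
                pbest[i], pval[i] = xs[i], f
                if f < gval:
                    gbest, gval = xs[i], f
    return gbest, gval

# illustrative run on a convex test function with minimum at x = 3
best_x, best_f = pso_minimize(lambda x: (x - 3.0) ** 2, 0.0, 10.0)
```

The same routine can be pointed at any of the one-dimensional fitness functions f1(a), f2(a), or f3(b) constructed later in Section 3.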
The most critical issue in the practical application of particle swarm algorithms is to balance the relationship between the search capability of the particles and the performance and parameter settings of the algorithm. For a specific search problem, an appropriate combination of algorithm parameters must be chosen; in this paper, the problems to be solved are the non-closed expressions of the generalized Pareto distribution for positive 0.5th/1st-order moment estimation, positive 0.25th-order logarithmic moment estimation, and maximum likelihood estimation. Therefore, this paper uses an appropriate combination of parameters to solve each corresponding target problem.
To address the problem of solving non-closed expressions in the parameter estimation methods of the generalized Pareto distribution, the overall approach of applying the PSO algorithm to this problem is described in detail, including the construction of the fitness function and the setting of algorithm parameters. According to the derivation in Section 2, the positive 0.5th/1st-order moment estimation, positive 0.25th-order logarithmic moment estimation, and maximum likelihood estimation have non-closed expressions, so the fitness functions of the three types of estimation methods are constructed as follows:
$$f_1(a) = \frac{\Gamma(a-0.5)\, \Gamma(a)}{\Gamma^2(a-0.25)} - \frac{\Gamma^2(1.25)\, m_1}{\Gamma(1.5)\, m_{0.5}^2}$$

$$f_2(a) = \frac{\kappa_{0.25}}{m_{0.5}} - \kappa_0 - \psi(1.25) + \psi(1) - \psi(a) + \psi(a-0.25)$$

$$f_3(b) = \frac{1}{n}\sum_{i=1}^n \ln\!\left(1+bx_i^2\right)\left(1 - \frac{1}{n}\sum_{i=1}^n \frac{bx_i^2}{1+bx_i^2}\right) - \frac{1}{n}\sum_{i=1}^n \frac{bx_i^2}{1+bx_i^2} \quad (19)$$
where parent (x) and parent (v) represent the position and velocity of the hybrid particles,
while child (x) and child (v) represent the position and velocity of the offspring particles
resulting from the hybridization process. When two particles trapped in different local
optima undergo hybridization, they can escape from their respective local optima and
improve the global search capability of the population. Additionally, if the hybrid particle
group adopts a fixed hybridization probability Pc , it may result in repetitive and ineffective
hybridization operations within the local range during the later iterations, leading to
decreased algorithm efficiency and the loss of the advantages of hybridization. To address
this issue, a nonlinearly decreasing hybridization probability is employed, where the
hybridization probability of the parent particles decreases nonlinearly with the increase in
iteration count. The updated formula for the hybridization probability Pc , is as follows:
$$P_c = (P_{c1} - P_{c2})\left(1 - \frac{k}{N_{max}}\right)^2 + P_{c2} \quad (22)$$
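The schedule of Equation (22) is easy to sketch. Formula (20) for the inertia weight is not reproduced in this excerpt; the sketch below assumes it shares the same quadratic form, which is consistent with the stated initial value 0.9 and ending value 0.4 for both quantities.

```python
def nonlinear_decreasing(start, end, k, n_max):
    # Quadratic decay of Equation (22): (start - end) * (1 - k/n_max)^2 + end
    return (start - end) * (1.0 - k / n_max) ** 2 + end

# crossover probability schedule Pc1 = 0.9 -> Pc2 = 0.4 over Nmax = 50
pc = [nonlinear_decreasing(0.9, 0.4, k, 50) for k in range(51)]
```

Because of the squared term, the probability falls quickly in early iterations (encouraging global search) and flattens near the end (avoiding wasted local crossovers), matching the rationale given in the text.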
where Pc1 represents the initial crossover probability and Pc2 represents the crossover
probability at the maximum number of iterations. The introduction of a nonlinearly
decreasing crossover probability enhances the global search capability of particles in the
early iterations while addressing the issue of local search efficiency in the later iterations.
The basic steps of the proposed HPSO algorithm in this study are as follows:
Step 1. Initialize the particle swarm, including the swarm size N, the inertia weight
ω for particle updates, the learning factors c1 and c2 , as well as parameters for crossover
operations such as the crossover pool size ratio Ps and the crossover probability Pc . Set the
position X and velocity V of each particle using uniform distribution random numbers
within a certain range.
Step 2. Compute the fitness value for each particle based on the objective function
and store it in the variable Pid . Select the best fitness value among the particles and store it
in the variable Pgd . The variables Pid and Pgd , respectively, represent the fitness value of the
particles and the fitness value of the best position in the population for the current iteration.
Step 3. Update the inertia weight factor ω using the nonlinearly decreasing Formula (20).
Step 4. Update the velocity and position of each particle using the current iteration’s
inertia weight, learning factors, and other algorithm parameters, following Formula (18).
Step 5. Compute the fitness value for each particle in the current iteration and compare
it with the particle’s personal best position. Update the variable Pid based on the fitness
value. Then, compare all updated Pid values with the global best position, updating the
swarm’s variable Pgd .
Step 6. Update the crossover probability Pc using the nonlinearly decreasing Formula (22).
Step 7. Select the fraction of particles specified by Ps and place them in the crossover pool. Randomly select two particles from the pool to participate in a crossover, and update the position and velocity of the resulting offspring using Formula (21). The crossover operation generates the same number of offspring as parent particles.
Step 8. Check whether the termination condition of the algorithm is met. If satisfied,
stop the swarm search and save the global best position of the particles. Otherwise, return
to Steps 3 to 7 to continue the search.
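Formula (21) itself is not reproduced in this excerpt; a commonly used hybridization rule (arithmetic crossover of positions, with offspring velocities taking the parents' mean direction) is sketched below as an assumption, to make Step 7 concrete for one-dimensional particles.

```python
import math
import random

def hybridize(p1, p2, rng):
    # Hypothetical stand-in for Formula (21): offspring positions are random
    # arithmetic mixes of the parents; each offspring keeps the magnitude of
    # one parent's velocity but takes the parents' mean direction.
    x1, v1 = p1
    x2, v2 = p2
    p = rng.random()
    cx1 = p * x1 + (1.0 - p) * x2          # child positions
    cx2 = p * x2 + (1.0 - p) * x1
    vsum = v1 + v2
    direction = math.copysign(1.0, vsum) if vsum != 0.0 else 1.0
    cv1 = direction * abs(v1)              # child velocities
    cv2 = direction * abs(v2)
    return (cx1, cv1), (cx2, cv2)

rng = random.Random(3)
# two parents stuck at different positions, moving in opposite directions
child_a, child_b = hybridize((1.0, -0.5), (5.0, 0.9), rng)
```

Because each child position is a convex combination of the parents, two particles trapped in different local optima produce offspring between them, which is the escape mechanism described above.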
The HPSO algorithm provides two ways to terminate the particle swarm search.
The first approach is to set a maximum number of iterations for the particles, while the
second approach involves specifying the number of generations to maintain the global
best particle. Since determining the maximum number of iterations depends on the nature of the objective function and particle activity exhibits inherent randomness during each iteration, this section adopts the method of setting the number of generations for which the global best particle is retained, ensuring an effective search for the optimal position of the objective function.
Different parameters of the generalized Pareto distribution can describe the amplitude
distribution characteristics of ocean clutter under different background environments. The
shape and scale parameters are the main factors influencing the statistical characteristics
curve of the generalized Pareto distribution. To achieve an accurate fit of the generalized
Pareto distribution to clutter data, the statistical histogram of clutter data is compared with
the cumulative error on the theoretical distribution curve, which serves as the objective
function for optimizing the parameter estimation of the generalized Pareto distribution.
Furthermore, to balance the fitting discrepancies between the probability density func-
tion (PDF) curve and the cumulative distribution function (CDF) curve of clutter data, an
adapted fitness function is constructed by appropriately weighting and combining two dis-
crepancy functions. The HPSO algorithm is then employed to perform the corresponding
optimal parameter search task. The fitness function for the parameter estimation of the
generalized Pareto distribution is defined as follows:
$$f_{fitness}(\theta) = \frac{1}{1 + \lambda_1 \sum_{n=1}^N (f(n;\theta) - h(n))^2 + \lambda_2 \sum_{n=1}^N (F(n;\theta) - H(n))^2} \quad (23)$$
where f(n; θ) and F(n; θ), respectively, denote the PDF and CDF of the generalized Pareto distribution, and h(n) and H(n) represent the values of the clutter statistical histogram at the PDF and CDF curve sampling intervals. The weighting coefficients are set as λ1 = 0.8 and λ2 = 0.2 in the experiments conducted in this paper.
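As a concrete illustration, the fitness evaluation of Equation (23) can be sketched as follows. This is a minimal Python sketch, not the authors' code: the GPD parameterization f(x; a, b) = (a/b)(1 + x/b)^(−(a+1)) with shape a and scale b is an assumption for illustration (smaller a giving a heavier tail), as is the use of histogram bin centers as the sampling intervals.

```python
import numpy as np

def gpd_pdf(x, a, b):
    # Assumed GPD parameterization: shape a > 0, scale b > 0 (smaller a = heavier tail).
    return (a / b) * (1.0 + x / b) ** (-(a + 1.0))

def gpd_cdf(x, a, b):
    return 1.0 - (1.0 + x / b) ** (-a)

def fitness(theta, data, n_bins=50, lam1=0.8, lam2=0.2):
    """Weighted histogram-discrepancy fitness of Equation (23)."""
    a, b = theta
    h, edges = np.histogram(data, bins=n_bins, density=True)   # empirical PDF h(n)
    centers = 0.5 * (edges[:-1] + edges[1:])
    H = np.array([np.mean(data <= c) for c in centers])        # empirical CDF H(n)
    d1 = np.sum((gpd_pdf(centers, a, b) - h) ** 2)
    d2 = np.sum((gpd_cdf(centers, a, b) - H) ** 2)
    return 1.0 / (1.0 + lam1 * d1 + lam2 * d2)
```

Values closer to 1 indicate a better fit, so the swarm search maximizes this function over (a, b).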
The parameters of the HPSO algorithm are configured as follows: the population size is set to 20, the learning factors are c1 = c2 = 2, the hybrid pool size ratio is Ps = 0.5, and the minimum number of generations to maintain the global best position is set to 15. The initial values of the non-linearly decreasing inertia weight and hybrid probability are both 0.9, and their ending values are set to 0.4. Additionally, the maximum number of iterations is denoted as Nmax and is set to 50. Figure 2 illustrates the flowchart of the HPSO algorithm optimizing the objective function for estimating the parameters of the generalized Pareto distribution.
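A non-linearly decreasing schedule of this kind can be sketched as follows. The paper does not state the exact decay law, so the quadratic form below is an assumption; the same schedule would apply to both the inertia weight and the hybrid probability, from 0.9 down to 0.4 over Nmax = 50 iterations.

```python
def nonlinear_decreasing(start=0.9, end=0.4, n_max=50):
    # Quadratic decay from `start` to `end` over n_max iterations (assumed form):
    # the value falls quickly in early iterations and levels off near the end value.
    return [end + (start - end) * (1.0 - t / (n_max - 1)) ** 2 for t in range(n_max)]
```

The sequence starts at 0.9 and ends at 0.4, favoring global exploration early and local exploitation late in the search.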
Figure 2. HPSO algorithm flowchart. (The flowchart constructs the fitness function from the sampling intervals of the statistical histogram of the simulation data, then calculates the fitness value of each particle and records the initial individual extremum Pbest and global extremum Gbest.)
For comparative analysis, the parameters of the PSO algorithm are set as follows: the population size is 20, the learning factors are c1 = c2 = 2, the inertia weight is ω = 0.4, and the maximum number of iterations is set to 50.
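For reference, a baseline PSO with these settings can be sketched as follows. This is an illustrative minimal implementation, not the authors' code; it minimizes a generic objective, whereas the paper maximizes the fitness function, so in practice the negative of Equation (23) would be supplied.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=20, n_iter=50, w=0.4, c1=2.0, c2=2.0, seed=0):
    """Plain PSO with the settings quoted above (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                 # keep particles inside the bounds
        val = np.array([f(p) for p in x])
        better = val < pbest_val
        pbest[better] = x[better]
        pbest_val[better] = val[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())
```

For the present task, the search variables would be (a, b) restricted to the admissible parameter bounds discussed below.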
Figure 3. Statistical histogram of simulation data.
Figure 4. The curve of the fitness function. (Fitness functions 1–3 are shown.)
The fitness functions 1 and 2 correspond to the non-closed expressions for positive 0.5th/1st-order moment estimation and positive 0.25th-order logarithmic moment estimation, respectively. The fitness function 3 is significantly different from the first two functions because the horizontal coordinate of this function corresponds to the scale parameter domain and to the non-closed expression of the maximum likelihood estimate of the scale parameter. As can be seen from Figure 4, the ranges of values of the fitness functions 1 and 2 are both restricted to a certain space, [0.5, +∞] and [0.25, +∞], respectively, which must match the ranges of values of the shape parameters of the two-parameter estimation methods. The notches at the extreme value points of these three functions are relatively deep, so the particle population can quickly find the optimal location point during the search, but the number of inert particles increases sharply as the search process proceeds. The setting of the parameters of the PSO algorithm will determine the trajectory of the particles, and the influence of the number of particle populations and the number of iterations on the search behavior of the particle populations will be studied below.
According to the recommended values of optimal parameter settings given in the literature [37], the learning factors and inertia weight are set to ω = 0.4 and c1 = c2 = 2. To prevent the particles from crossing the boundary when searching, the particle population search spaces on fitness functions 1 and 2 are restricted to [0.5, +∞] and [0.25, +∞], respectively, while the particle population search range of fitness function 3 is restricted to [0.15, +∞]. The number of iterations is fixed to 20, and the trajectories of each particle population are then observed separately for population sizes of 5 and 10. Figures 5–7 give the positions of each particle in the 1st, 5th, 10th, 15th, and 20th iterations for the three groups of particle populations, respectively.
Figure 5. Particle trajectory of the first fitness function.

Figure 6. Particle trajectory of the second fitness function.

Figure 7. Particle trajectory of the third fitness function.

(Figures 5–7 each show the ordinary and optimal particle positions at the 1st, 5th, 10th, 15th, and 20th iterations for populations of 5 and 10 particles.)
For the one-dimensional search of the maximum likelihood function, the search range of the particle population is also set to (0, +∞]. As can be seen from the figure, after expanding the search range of particles in the one-dimensional space, the particle population is initially trapped in an undefined position, making the particles inactive and falsely dead. This phenomenon does not occur when the particle population searches in the two-dimensional space of the likelihood function, and higher parameter estimation accuracy is obtained by searching directly in the two-dimensional space. Although the particle search in two dimensions solves the problem of the one-dimensional search, the expansion of the spatial search domain requires a larger number of particles and iterations to complete the search task, which undoubtedly increases the computational time complexity. Table 2 lists the time consumed by each parameter estimation method of the generalized Pareto distribution.

Figure 8. Comparison of spatial search in different dimensions of the maximum likelihood function.
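The two-dimensional objective in this search is the non-closed maximum likelihood expression. A sketch of a GPD negative log-likelihood over (a, b) is given below, using an assumed parameterization f(x; a, b) = (a/b)(1 + x/b)^(−(a+1)); the paper's exact expression may differ.

```python
import numpy as np

def gpd_neg_loglik(theta, x):
    # Negative log-likelihood of the assumed GPD form; infinite outside the
    # admissible region so a swarm search is pushed back into bounds.
    a, b = theta
    if a <= 0.0 or b <= 0.0:
        return np.inf
    return -np.sum(np.log(a / b) - (a + 1.0) * np.log1p(x / b))
```

A particle swarm minimizing this function over the (a, b) plane corresponds to the two-dimensional MLE search; the one-dimensional variant instead searches a single parameter of the non-closed estimating equation.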
Table 2. Time consumed by each parameter estimation method of the generalized Pareto distribution.

Parameter Estimation Method     Time/s
MoM (2nd/4th-order)             2.43 × 10⁻²
ZlogZ (1st-order)               8.94 × 10⁻³
PSO-MFoM (0.5th/1st-order)      3.02 × 10⁻²
PSO-ZlogZ (0.25th-order)        3.66 × 10⁻²
PSO-MLE (1D)                    8.51 × 10⁻²
PSO-MLE (2D)                    9.37 × 10⁻¹
From the data in Table 2, it can be seen that positive 1st-order logarithmic moment estimation is the most efficient, and the parameter estimation times of positive 0.5th/1st-order moment estimation, positive 0.25th-order moment estimation, and maximum likelihood estimation based on particle swarm optimization of the non-closed expressions are comparable to that of positive 2nd/4th-order moment estimation; the introduction of the particle swarm search therefore does not significantly increase the computational time complexity of the parameter estimation method itself. The two-dimensional space search of the maximum likelihood estimation method increases its computation time by roughly an order of magnitude compared to the one-dimensional space, which is also consistent with the previous analysis. The operational efficiency of the algorithm is particularly important in real-time processing applications, and the advantages and disadvantages discussed above can be weighed to choose a suitable parameter estimation method.
The estimation ranges of shape parameters for each parameter estimation method are detailed in Section 2. Here, to investigate the effect of shape parameters on the parameter estimation performance, we fixed the sample length of the simulation data at 10⁴, and the scale parameter b was set to 0.3. The estimation performance of the two-parameter model was then calculated by varying the shape parameter a from 0.1 to 10 in equal intervals. The results were evaluated using the relative root mean square error (RRMSE). Figure 9 shows the results of this experiment, where the specific calculation of the Cramér–Rao bound (CRB) can be found in the literature [38], and the values of each parameter node in the figure are obtained from 30 independent replications of the experiment.
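The RRMSE metric can be computed as below, assuming the common definition of the RMSE normalized by the true parameter value (the paper does not spell out its exact normalization).

```python
import numpy as np

def rrmse(estimates, true_value):
    # Relative root mean square error over independent replications:
    # RMSE of the estimates divided by the true parameter value.
    estimates = np.asarray(estimates, dtype=float)
    return np.sqrt(np.mean((estimates - true_value) ** 2)) / true_value
```

Under this definition, each point in a performance curve would be the RRMSE over the independent replications at one value of the varied parameter.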
Figure 9. Parameter estimation performance as a function of shape parameters. (a,b) indicate the trends of RRMSE metrics with shape parameters for scale and shape parameters, respectively.
Observing Figure 9a,b, it can be found that the parameter estimation results obtained using PSO for the one-dimensional likelihood function are unstable, exhibiting significant fluctuations with errors oscillating beyond 0.1. Conversely, the results obtained from the two-dimensional likelihood function space search show more stable parameter estimation performance, with an RRMSE smaller than 0.1 that approaches the CRB lower bound, which is consistent with the previous analysis. The parameter estimation performance of the maximum likelihood estimation method is not limited by the range of the shape parameters, while the other parameter estimation methods are to some extent limited by the range of the shape parameters, i.e., accurate estimation of the shape parameters can be achieved only within the specified range. When the shape parameter is small (corresponding to the heavy-tailed phenomenon of sea clutter), the estimation of higher-order moments (positive 2nd/4th-order and positive 1st-order logarithmic moments in Figure 9) is affected by the cumulative error of the sample, which leads to poor parameter estimation performance. In addition, the estimation performance of all parameter estimation methods decreases with the increase in shape parameters.
4.2. Analysis of the Fit of the Measured Data

To verify the fitting effect of the generalized Pareto distribution model on the statistical properties of heavy-tailed sea clutter data and to analyze the adaptability of different parameter estimation methods to the parameter estimation of real data, this section performs a goodness-of-fit analysis on the statistical properties of a set of high-intensity data from the IPIX radar dataset [39,40] and an X-band radar open-source dataset [41,42], respectively.

Many scholars have carried out a large amount of research work on the perception of sea clutter properties based on the IPIX radar dataset, so this dataset is very reliable for the goodness-of-fit analysis. The effect of the distribution model on the sea clutter amplitude PDF fit is shown in Figure 10, and the two-parameter estimates of the sea clutter data obtained via different parameter estimation methods are given in Table 3. Each dataset of the IPIX radar dataset contains 14 distance units, and a total of 131,702 data samples are collected for each distance unit. To reduce the running time, this experiment selects 50,000 data samples within a non-target distance unit (pure clutter data) as the object of parameter estimation.
Figure 11. Fitting effect of generalized Pareto distribution. (Each panel compares the measured data with fits obtained by positive 2nd/4th-order MoM, positive 1st-order ZlogZ, positive 0.5th/1st-order PSO-MFoM, positive 0.25th-order PSO-ZlogZ, and the 1D and 2D PSO-MLE methods.)
Both sets of test experiments above observe the fitting effect of the distribution model directly from the plots. To quantitatively characterize the strengths and weaknesses of the fitting effect and then make a more accurate analysis of it, the results of the Kolmogorov–Smirnov (K-S) test, the mean square deviation (MSD) test, and the root mean square deviation (RMSD) test for the IPIX radar data and the X-band radar open-source measured data are given in Tables 4 and 5, respectively. The three test rules mentioned above are common methods for analyzing the fitting effect of distribution models, which are important for guiding the construction of statistical models and evaluating the performance of parameter estimation.

Table 4. The fitting test results of IPIX radar measured data.

Assessment Metrics   MoM (2/4)     ZlogZ (1)     MFoM (0.5/1)   ZlogZ (0.25)   MLE (1D)      MLE (2D)
MSD                  2.73 × 10⁻⁴   5.98 × 10⁻⁵   4.26 × 10⁻⁵    4.10 × 10⁻⁵    3.96 × 10⁻⁵   3.94 × 10⁻⁵
RMSD                 1.65 × 10⁻²   7.70 × 10⁻³   6.50 × 10⁻³    6.40 × 10⁻³    6.30 × 10⁻³   6.30 × 10⁻³
K-S                  4.39 × 10⁻²   2.01 × 10⁻²   1.91 × 10⁻²    1.74 × 10⁻²    1.90 × 10⁻²   1.90 × 10⁻²

Table 5. The fitting test results of a domestic open-source measured dataset.

Assessment Metrics   MoM (2/4)     ZlogZ (1)     MFoM (0.5/1)   ZlogZ (0.25)   MLE (1D)      MLE (2D)
MSD                  3.30 × 10⁻³   2.40 × 10⁻³   8.36 × 10⁻⁴    4.17 × 10⁻⁴    1.20 × 10⁻³   1.20 × 10⁻³
RMSD                 5.70 × 10⁻²   4.90 × 10⁻²   2.89 × 10⁻²    2.04 × 10⁻²    3.43 × 10⁻²   3.44 × 10⁻²
K-S                  5.84 × 10⁻²   6.09 × 10⁻²   3.37 × 10⁻²    2.84 × 10⁻²    3.00 × 10⁻²   3.01 × 10⁻²
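The three test statistics can be sketched as follows, assuming their standard definitions; note that under these definitions RMSD = sqrt(MSD), which is consistent with the tabulated values (e.g., sqrt(2.73 × 10⁻⁴) ≈ 1.65 × 10⁻²).

```python
import numpy as np

def fit_tests(data, model_cdf, n_grid=100):
    """MSD, RMSD and K-S statistics comparing data against a fitted CDF."""
    data = np.sort(np.asarray(data, dtype=float))
    n = data.size
    # K-S statistic: largest gap between the empirical and model CDFs
    ecdf = np.arange(1, n + 1) / n
    ks = np.max(np.abs(ecdf - model_cdf(data)))
    # MSD/RMSD: squared CDF error evaluated on an even sampling grid
    grid = np.linspace(data[0], data[-1], n_grid)
    emp = np.searchsorted(data, grid, side="right") / n
    msd = np.mean((emp - model_cdf(grid)) ** 2)
    return msd, np.sqrt(msd), ks
```

Smaller values of all three statistics indicate a better fit of the distribution model to the measured data.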
From the data in Table 4, it can be seen that in terms of moment estimation, the higher-order moment estimation methods (positive 2nd/4th-order moments and positive 1st-order logarithmic moments) result in poorer fitting performance of the generalized Pareto distribution model for the observed data compared to the lower-order moment estimation methods (positive 0.5th/1st-order fractional moments and positive 0.25th-order logarithmic moments). Among them, the positive 0.25th-order logarithmic moment achieves the best estimation results (MSD test: 4.10 × 10⁻⁵; RMSD: 6.40 × 10⁻³; and K-S test: 1.74 × 10⁻²), with estimation errors reduced by 84.98%, 61.21%, and 60.36% compared to the worst-performing positive 2nd/4th-order moment estimation. The inferior performance of higher-order moment estimation is due to larger accumulated sample errors in higher-order moments. Additionally, the maximum likelihood estimation method shows comparable fitting performance to lower-order moment estimation. Compared to the optimal lower-order moment estimation (i.e., the positive 0.25th-order logarithmic moment), the performance improvements are 4.10% (MSD), 1.59% (RMSD), and −8.42% (K-S). There is no significant difference between the fitting results of the one-dimensional and two-dimensional non-closed expression searches of the maximum likelihood estimation for the measured data. The fitting results of the sea clutter data in Table 5 yield conclusions consistent with the above. The fitting test results indicate that the PSO algorithm used in this paper not only efficiently and accurately finds the parameter estimates of each low-order moment estimation method but also improves the fitting effect of the generalized Pareto distribution model on the measured data.

In addition, some scholars used the segmented mean square deviation (SMSD) test [43,44] to observe the local fitting effect of the distribution model on the measured data, and Figure 12 shows the SMSD test effect of the two sets of data. From Figure 12, it can be seen that the generalized Pareto distribution is relatively stable in fitting the real measurements within each interval segment, without substantial fluctuations, so a target detector design based on the generalized Pareto distribution model can obtain a more robust performance.
To further investigate the performance of the HPSO algorithm in optimizing the ob-
Appl.
Appl. Sci.
Sci. 2023,
2023, 13,
13, x9115
FOR PEER REVIEW
jective function for parameter estimation of the generalized Pareto distribution,21we 20ofoffixed
26
24
either the shape parameter or the scale parameter and varied the other parameter. The
PSO algorithm and the HPSO algorithm were employed to search for the global optimal
0
value of the fitness function for parameter estimation of the generalized Pareto distribu-
tion. In this way, the corresponding parameter estimation values were obtained. Firstly,
HPSO algorithm
杂交PSO算法
PSO algorithm
标准PSO算法
-0.2
we fixed the scale parameter as b = 0.3 and varied the shape parameter of each set of sim-
ulated clutter samples from 0.1 to 10 with equal intervals. The number of simulated sam-
-0.4
ples for the generalized Pareto distribution was set as 5 × 104. The experimental results
适应度 were obtained by averaging the results of 10 repeated experiments. Figure 14a shows the
variation curve of the absolute distance between the parameter estimation values obtained
-0.6
by the PSO algorithm and the HPSO algorithm and the actual values.
-0.8 From Figure 14a, it can be observed that the estimation error of the shape parameter
a increases with the increase in the simulated data parameter a, while the estimation error
of the scale parameter undergoes a trend of decreasing and then increasing. In addition,
-1
the0 parameter
10 20
estimation 30error of 40 the PSO 50 algorithm shows significant variations when
迭代次数
Number of iteration
the shape parameter a is small, indicating the poor robustness of the parameter estimation
using
Figure
Figure 13.the
13. PSO algorithm.
Comparison
Comparison of two
of two different
different algorithms’
algorithms’ iterative
iterative processes.
processes.
To further investigate the performance of the HPSO algorithm in optimizing the ob-
HPSO
HPSO algorithm
algorithmjective function for parameter estimation of the generalized Pareto distribution, we fixed
Shape parameter estimation error
Shape parameter estimation error
PSO algorithm
PSO algorithm
either the shape parameter or the scale parameter and varied the other parameter. The
PSO algorithm and the HPSO algorithm were employed to search for the global optimal
value of the fitness function for parameter estimation of the generalized Pareto distribu-
tion. In this way, the corresponding parameter estimation values were obtained. Firstly,
we fixed the scale parameter as b = 0.3 and varied the shape parameter of each set of sim-
ulated clutter samples from 0.1 to 10 with equal intervals. The number of simulated sam-
ples for the generalized Pareto distribution was set as 5 × 104. The experimental results
were obtained by averaging the results of 10 repeated experiments. Figure 14a shows the
variation curve of the absolute distance between the parameter estimation values obtained
by the PSO algorithm and the HPSO algorithm and the actual values.
From Figure 14a, it can be observed that the estimation error of the shape parameter
Shape parameters
Shape parameters a increases with
Scale the increase in the simulated
parameters
Scale parameters Shapedata parameter a, while
parameters
Shape parameters the
Scale estimation
parameters error
parameters
Scale
of(a)
the scale parameter undergoes a trend of decreasing and (b) then increasing. In addition,
the parameter estimation error of the PSO algorithm shows significant variations when
the Figure
Figure 14.14.
shape Parameter
parameter
Parameter aestimation
estimation error
is small,error curves
indicating
curves of
ofthe
twotwo
poor different
robustness
different algorithms.
of the
algorithms. (a,b) denote
parameter
(a,b) denote the estimation
estimation
the estimation
errors of the corresponding parameters when the shape and scale parameters are varied at equal
using
errors the PSO
of the algorithm. parameters when the shape and scale parameters are varied at equal
corresponding
intervals, respectively.
intervals, respectively.
0.3 10 1
HPSO algorithm
杂交PSO算法 杂交PSO算法
From Figure 杂交PSO算法
14a, it can be observed that 杂交PSO算法
the estimation error of the shape parameter a
Shape parameter estimation error
标准PSO算法
PSO algorithm increases with
标准PSO算法 标准PSO算法
the increase in the simulated data parameter a, while标准PSO算法
the estimation error
12 8 0.8
Scale形状参数b估计误差
尺度参数b估计误差
parameter estimation error
parameter estimation error
形状参数a估计误差
of the scale parameter undergoes a trend of decreasing and then increasing. In addition,
形状参数a估计误差
0.2parameter estimation error of the PSO algorithm shows significant variations when
the
6
the shape parameter a is small, indicating the poor robustness 0.6 of the parameter estimation
8
using the PSO algorithm.
To fix the shape parameter at4 a = 2.5, we varied the0.4 scale parameter of the clutter
0.1
samples from 0.1 to 10 with equal intervals, while keeping the length of the simulated
4 data constant. Similarly, the variation curves of the parameter estimation values for both
Scale
Shape
2 0.2
algorithms were plotted (Figure 14b). From Figure 14b, it can be observed that as the scale parameter increases, there is no significant change in the parameter estimation error of the shape parameter a, while the parameter estimation error of the scale parameter b shows an increasing trend. The PSO algorithm still exhibits pronounced oscillations, indicating that the particles in the algorithm are prone to premature convergence, resulting in the population being trapped in local optimal positions.

Figure 14. Parameter estimation error curves of two different algorithms. (a,b) denote the estimation errors of the corresponding parameters when the shape and scale parameters are varied at equal intervals, respectively.

Appl. Sci. 2023, 13, 9115 21 of 24

Based on the experimental results mentioned above, it can be concluded that the HPSO algorithm not only achieves higher parameter estimation accuracy compared to the PSO algorithm but also exhibits better stability in parameter estimation. Therefore, in this study, the HPSO algorithm was employed to perform the optimal parameter search task for the fitness function of parameter estimation in the generalized Pareto distribution.
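The swarm search described above can be sketched in outline. The following is a minimal, generic PSO, not the authors' HPSO implementation (which adds a hybridization step to counter premature convergence); it searches a two-dimensional (a, b) box for the minimum of a supplied fitness function, and the inertia weight and acceleration coefficients are common textbook defaults rather than the values tuned in this study.

```python
import numpy as np

def pso_minimize(fitness, bounds, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard PSO over a box-constrained search space.

    fitness: callable mapping an (n, d) array of positions to (n,) values.
    bounds:  (d, 2) array of [low, high] per dimension.
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    d = bounds.shape[0]
    x = rng.uniform(lo, hi, size=(n_particles, d))   # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest, pbest_f = x.copy(), fitness(x)            # personal bests
    g = pbest[np.argmin(pbest_f)].copy()             # global best position
    g_f = pbest_f.min()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, d))
        # velocity update: inertia + cognitive pull + social pull
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                   # keep particles in bounds
        f = fitness(x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        if f.min() < g_f:
            g, g_f = x[np.argmin(f)].copy(), f.min()
    return g, g_f

# Toy usage: recover the minimum of a quadratic bowl centred at (2.5, 0.5).
best, best_f = pso_minimize(lambda p: ((p - [2.5, 0.5]) ** 2).sum(axis=1),
                            bounds=[[0.1, 10.0], [0.1, 10.0]])
```

In the paper's setting the quadratic toy fitness would be replaced by the histogram-based fitness function of the generalized Pareto distribution, with (a, b) as the two search dimensions.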
To compare the performance differences of various parameter estimation methods for the generalized Pareto distribution after introducing the HPSO algorithm, this experiment employed the 2nd/4th-moment estimation (2/4th-MoM), 1st-order logarithmic moment estimation (1st-order ZlogZ), and maximum likelihood estimation (MLE) methods. The known parameters were estimated for simulated random sequences of the generalized Pareto distribution, and the results of these three commonly used parameter estimation methods were compared with those of the HPSO algorithm. The scale parameter b was fixed at 0.5, and the number of clutter samples was 5 × 10^4. The shape parameter a for each set of clutter data was varied evenly from 0.1 to 10, with 10 independent repeated experiments conducted. The curves of the mean squared deviation (MSD) fitting results for each parameter estimation method were obtained, as shown in Figure 15a. Similarly, by fixing the shape parameter a at 3.5 and varying the scale parameter b for each set of clutter data from 0.1 to 10, the variation of the MSD fitting results for each parameter estimation method under different scale parameters was studied (Figure 15b).
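The MSD fitting target used in these experiments can be illustrated with a simplified sketch. The parameterization below, f(x; a, b) = (a/b)(1 + x/b)^−(a+1) with shape a and scale b, and the unweighted histogram-versus-PDF deviation are assumptions made for illustration; the authors' adaptive fitness function additionally applies weighting coefficients and a CDF term.

```python
import numpy as np

def gpd_pdf(x, a, b):
    """Assumed generalized Pareto (Lomax-type) intensity PDF:
    f(x; a, b) = (a / b) * (1 + x / b) ** -(a + 1), shape a, scale b."""
    return (a / b) * (1.0 + x / b) ** (-(a + 1.0))

def msd_fitness(samples, a, b, n_bins=100):
    """Mean squared deviation between the normalized sample histogram and the
    model PDF; a simplified, unweighted stand-in for the paper's fitness."""
    hist, edges = np.histogram(samples, bins=n_bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return float(np.mean((hist - gpd_pdf(centers, a, b)) ** 2))

# Toy data: inverse-CDF sampling, F^{-1}(u) = b * ((1 - u) ** (-1 / a) - 1).
rng = np.random.default_rng(1)
a_true, b_true = 3.5, 0.5
u = rng.random(50_000)
samples = b_true * ((1.0 - u) ** (-1.0 / a_true) - 1.0)
# The deviation is smaller at the true parameters than at a mismatched pair.
```

A swarm optimizer then treats msd_fitness as the objective and searches over (a, b); the smaller the deviation, the better the candidate pair explains the clutter histogram.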
Figure 15. Parameter estimation error curves of four different parameter estimation methods. (a,b) denote the fitting errors when the shape and scale parameters are varied at equal intervals, respectively.
Observing Figure 15a, it can be seen that the differences in the MSD fitting curves among the parameter estimation methods gradually decrease as the shape parameter increases. In the range where the shape parameter a is less than 2, the HPSO algorithm demonstrates significantly better MSD fitting performance than the 2nd/4th-MoM estimation and the 1st-order ZlogZ estimation methods, approaching the fitting performance of the MLE method. Figure 15b shows that as the scale parameter increases, the MSD fitting curves for all four parameter estimation methods gradually decrease. In addition, the MSD test of the MLE method exhibits significant fluctuations in the range of scale parameters from 1 to 2. Moreover, in the region where the scale parameter b is greater than 2, its MSD test values are consistently more than 10^−2 higher than those of the other estimation methods. This indicates that the fitting performance of the MLE method is poorer than that of the HPSO algorithm, the 2nd/4th-MoM estimation method, and the 1st-order ZlogZ estimation method.

Based on the results presented in Figure 15, it can be concluded that, compared to the other three commonly used parameter estimation methods, the PDF curve of the generalized Pareto distribution approximated by the HPSO algorithm exhibits better robustness and does not suffer from restricted parameter estimation range issues. This algorithm provides a new solution approach for accurately estimating the parameters of the generalized Pareto distribution.

The computational time complexity of the parameter estimation methods is also an indicator for evaluating their performance. Table 6 lists the running times of each parameter estimation method under the same conditions. From Table 6, it can be observed that the 2nd/4th-MoM estimation and the 1st-order ZlogZ estimation methods achieve fast parameter estimation. The MLE method requires the longest estimation time. The PSO algorithm and the HPSO algorithm have the same computational time complexity. Therefore, considering overall performance, the proposed HPSO algorithm demonstrates promising effectiveness in solving the parameter estimation problem for the generalized Pareto distribution.

Table 6. Estimation time of different parameter estimation methods.

Parameter Estimation Methods   HPSO          PSO           2nd/4th-MoM   1st-Order ZlogZ   MLE
Running time (s)               3.42 × 10^−1  3.22 × 10^−1  5.81 × 10^−2  2.53 × 10^−2      5.56 × 10^−1
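The speed of the moment-based estimators in Table 6, and the restricted parameter range they suffer from, can both be illustrated with a closed-form 2nd/4th-moment sketch. The Lomax-type parameterization and its moment formulas below (E[x^n] = b^n · n! · Γ(a − n)/Γ(a) for a > n) are assumptions made for illustration; the key point is that the estimator only exists for shape a > 4, which is exactly the kind of range restriction the HPSO approach avoids.

```python
import numpy as np

def mom24_estimate(samples):
    """2nd/4th-moment (2/4th-MoM) estimator sketch for a Lomax-type
    generalized Pareto intensity model, assuming
      E[x^2] = 2 b^2 / ((a - 1)(a - 2)),
      E[x^4] = 24 b^4 / ((a - 1)(a - 2)(a - 3)(a - 4)),
    so the moment ratio r = m4 / m2^2 = 6 (a - 1)(a - 2) / ((a - 3)(a - 4)),
    which only admits a solution with a > 4: the MoM range restriction."""
    m2 = np.mean(samples ** 2)
    m4 = np.mean(samples ** 4)
    r = m4 / m2 ** 2
    # Rearranging r (a-3)(a-4) = 6 (a-1)(a-2) gives the quadratic
    # (r - 6) a^2 + (18 - 7 r) a + (12 r - 12) = 0; the larger root is a > 4.
    A, B, C = r - 6.0, 18.0 - 7.0 * r, 12.0 * r - 12.0
    disc = B * B - 4.0 * A * C
    if A <= 0.0 or disc < 0.0:
        raise ValueError("moment ratio outside the valid range (requires a > 4)")
    a = (-B + np.sqrt(disc)) / (2.0 * A)   # admissible (larger) root
    b = np.sqrt(m2 * (a - 1.0) * (a - 2.0) / 2.0)
    return a, b

# Toy check: inverse-CDF Lomax samples with a = 10, b = 1.
rng = np.random.default_rng(2)
u = rng.random(500_000)
a_hat, b_hat = mom24_estimate((1.0 - u) ** (-1.0 / 10.0) - 1.0)
```

The whole estimate is a handful of array reductions and a quadratic formula, which is why moment methods run in tens of milliseconds in Table 6; the heavy-tailed fourth moment, however, is noisy for small shape parameters and undefined for a ≤ 4.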
5. Conclusions
To address the issues in parameter estimation for the generalized Pareto distribution,
this study successfully solved the problem of non-closed-form expression using the PSO
algorithm and obtained accurate parameter estimates. By expanding the search space of the
likelihood function to two dimensions, the stagnation issue in the one-dimensional search
was resolved. Fit analysis experiments showed that the positive 0.5th/1st-order fractional
moment and positive 0.25th-order logarithmic moment estimation methods performed
well in fitting heavy-tailed sea clutter data. To improve the parameter estimation of the
PSO algorithm, this research proposed an HPSO algorithm for parameter estimation of
the generalized Pareto distribution. Through simulation verification and analysis, the
following results were obtained:
• The HPSO algorithm overcame the premature convergence problem of the PSO algo-
rithm and demonstrated better parameter estimation performance.
• The parameters of the HPSO algorithm were optimized, resulting in good performance.
• Through the analysis of parameter estimation variations, it was found that the param-
eter estimation results of the PSO algorithm were unstable.
• Compared to other methods, the generalized Pareto distribution estimated using the
HPSO algorithm exhibited the most stable and optimal fitting results for real data, and
it was not influenced by the range of shape parameter values.
• The HPSO algorithm achieved high-precision parameter estimation results while
maintaining fast computational speed.
These research findings provide new insights and practical value for parameter estimation of the generalized Pareto distribution. For future research, it is suggested that the influence of different parameter settings in the HPSO algorithm on the optimization process be further investigated and that detailed performance analyses using real-world data be conducted.
Author Contributions: Conceptualization, B.Y. and Q.L.; data collection, B.Y.; data analysis, B.Y.
and Q.L.; data interpretation, B.Y.; methodology, B.Y.; software, B.Y.; writing—original draft, B.Y.;
writing—review and editing, B.Y. and Q.L.; final approval, B.Y. and Q.L. All authors have read and
agreed to the published version of the manuscript.
Funding: This work was supported in part by the Director’s Foundation of Institute of Microelec-
tronics, Chinese Academy of Sciences, under grant no. E0518101.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data that support the findings of this study are available within
the article.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Guo, Z.-X.; Bai, X.-H.; Shui, P.-L.; Wang, L.; Su, J. Fast Dual Trifeature-Based Detection of Small Targets in Sea Clutter by Using
Median Normalized Doppler Amplitude Spectra. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 4050–4063. [CrossRef]
2. Huang, P.; Yang, H.; Zou, Z.; Xia, X.-G.; Liao, G.; Zhang, Y. Range-Ambiguous Sea Clutter Suppression for Multi-channel
Spaceborne Radar Applications Via Alternating APC Processing. IEEE Trans. Aerosp. Electron. Syst. 2023, 1–18. [CrossRef]
3. Yin, J.; Unal, C.; Schleiss, M.; Russchenberg, H. Radar Target and Moving Clutter Separation Based on the Low-Rank Matrix
Optimization. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4765–4780. [CrossRef]
4. Luo, F.; Feng, Y.; Liao, G.; Zhang, L. The Dynamic Sea Clutter Simulation of Shore-Based Radar Based on Stokes Waves. Remote
Sens. 2022, 14, 3915. [CrossRef]
5. Guidoum, N.; Soltani, F.; Mezache, A. Modeling of High-Resolution Radar Sea Clutter Using Two Approximations of the Weibull
Plus Thermal Noise Distribution. Arab. J. Sci. Eng. 2022, 47, 14957–14967. [CrossRef]
6. Watts, S.; Rosenberg, L. Challenges in radar sea clutter modelling. IET Radar Sonar Navig. 2022, 16, 1403–1414. [CrossRef]
7. Zhao, J.; Jiang, R.; Li, R. Modeling of Non-homogeneous Sea Clutter with Texture Modulated Doppler Spectra. In Proceedings
of the 2022 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), Xi’an, China,
25–27 October 2022; IEEE: New York, NY, USA, 2022.
8. Wang, R.; Li, X.; Zhang, Z.; Ma, H.-G. Modeling and simulation methods of sea clutter based on measured data. Int. J. Model.
Simul. Sci. Comput. 2020, 12, 2050068. [CrossRef]
9. Amani, M.; Moghimi, A.; Mirmazloumi, S.M.; Ranjgar, B.; Ghorbanian, A.; Ojaghi, S.; Ebrahimy, H.; Naboureh, A.; Nazari, M.E.;
Mahdavi, S.; et al. Ocean Remote Sensing Techniques and Applications: A Review (Part I). Water 2022, 14, 3400. [CrossRef]
10. El Mashade, M.B. Heterogeneous Performance Assessment of New Approach for Partially-Correlated χ2-Targets Adaptive
Detection. Radioelectron. Commun. Syst. 2021, 64, 633–648. [CrossRef]
11. Rosenberg, L.; Bocquet, S. The Pareto distribution for high grazing angle sea-clutter. In Proceedings of the 2013 IEEE International
Geoscience and Remote Sensing Symposium—IGARSS, Melbourne, VIC, Australia, 21–26 July 2013; IEEE: New York, NY, USA, 2013.
12. Mezache, A.; Bentoumi, A.; Sahed, M. Parameter estimation for compound-Gaussian clutter with inverse-Gaussian texture. IET
Radar Sonar Navig. 2017, 11, 586–596. [CrossRef]
13. Medeiros, D.S.; Garcia, F.D.A.; Machado, R.; Filho, J.C.S.S.; Saotome, O. CA-CFAR Performance in K-Distributed Sea Clutter With
Fully Correlated Texture. IEEE Geosci. Remote Sens. Lett. 2023, 20, 1–5. [CrossRef]
14. Mahgoun, H.; Taieb, A.; Azmedroub, B.; Souissi, B. Generalized Pareto distribution exploited for ship detection as a model for sea
clutter in a Pol-SAR application. In Proceedings of the 2022 7th International Conference on Image and Signal Processing and
their Applications (ISPA), Mostaganem, Algeria, 8–9 May 2022; IEEE: New York, NY, USA, 2022.
15. Wang, J.; Wang, Z.; He, Z.; Li, J. GLRT-Based Polarimetric Detection in Compound-Gaussian Sea Clutter With Inverse-Gaussian
Texture. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [CrossRef]
16. Cao, C.; Zhang, J.; Zhangs, X.; Gao, G.; Zhang, Y.; Meng, J.; Liu, G.; Zhang, Z.; Han, Q.; Jia, Y.; et al. Modeling and Parameter
Representation of Sea Clutter Amplitude at Different Grazing Angles. IEEE J. Miniat. Air Space Syst. 2022, 3, 284–293. [CrossRef]
17. Fan, Y.; Chen, D.; Tao, M.; Su, J.; Wang, L. Parameter Estimation for Sea Clutter Pareto Distribution Model Based on Variable
Interval. Remote Sens. 2022, 14, 2326. [CrossRef]
18. Zebiri, K.; Mezache, A. Triple-order statistics-based CFAR detection for heterogeneous Pareto type I background. Signal Image
Video Process. 2023, 17, 1105–1111. [CrossRef]
19. Hu, C.; Luo, F.; Zhang, L.; Fan, Y.; Chen, S. Widening valid estimation range of multilook Pareto shape parameter with closed-form
estimators. Electron. Lett. 2016, 52, 1486–1488. [CrossRef]
20. Shui, P.L.; Zou, P.J.; Feng, T. Outlier-robust truncated maximum likelihood parameter estimators of generalized Pareto distribu-
tions. Digit. Signal Process. 2022, 127, 103527. [CrossRef]
21. Tian, C.; Shui, P.-L. Outlier-Robust Truncated Maximum Likelihood Parameter Estimation of Compound-Gaussian Clutter with
Inverse Gaussian Texture. Remote Sens. 2022, 14, 4004. [CrossRef]
22. Shui, P.L.; Tian, C.; Feng, T. Outlier-robust Tri-percentile Parameter Estimation Method of Compound-Gaussian Clutter with
Inverse Gaussian Textures. J. Electron. Inf. Technol. 2023, 45, 542–549. [CrossRef]
23. Yu, H.; Shui, P.L.; Shi, S.N.; Yang, C.J. Combined Bipercentile Parameter Estimation of Generalized Pareto Distributed Sea Clutter
Model. J. Electron. Inf. Technol. 2019, 41, 2836–2843. [CrossRef]
24. Xue, J.; Xu, S.; Liu, J.; Shui, P. Model for Non-Gaussian Sea Clutter Amplitudes Using Generalized Inverse Gaussian Texture.
IEEE Geosci. Remote Sens. Lett. 2019, 16, 892–896. [CrossRef]
25. Xia, X.-Y.; Shui, P.-L.; Zhang, Y.-S.; Li, X.; Xu, X.-Y. An Empirical Model of Shape Parameter of Sea Clutter Based on X-Band
Island-Based Radar Database. IEEE Geosci. Remote Sens. Lett. 2023, 20, 1–5. [CrossRef]
26. Liang, X.; Yu, H.; Zou, P.-J.; Shui, P.-L.; Su, H.-T. Multiscan Recursive Bayesian Parameter Estimation of Large-Scene Spatial-
Temporally Varying Generalized Pareto Distribution Model of Sea Clutter. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–16.
[CrossRef]
27. Tang, J.; Liu, G.; Pan, Q. A Review on Representative Swarm Intelligence Algorithms for Solving Optimization Problems:
Applications and Trends. IEEE/CAA J. Autom. Sin. 2021, 8, 1627–1643. [CrossRef]
28. Wei, X.; Huang, H. A Survey on Several New Popular Swarm Intelligence Optimization Algorithms; Research Square Platform LLC:
Durham, NC, USA, 2023.
29. Hong, S.-H.; Kim, J.; Jung, H.-S. Special Issue on Selected Papers from “International Symposium on Remote Sensing 2021”.
Remote Sens. 2023, 15, 2993. [CrossRef]
30. Shui, P.-L.; Yu, H.; Shi, L.-X.; Yang, C.-J. Explicit bipercentile parameter estimation of compound-Gaussian clutter with inverse
gamma distributed texture. IET Radar Sonar Navig. 2018, 12, 202–208. [CrossRef]
31. Sergievskaya, I.A.; Ermakov, S.A.; Ermoshkin, A.V.; Kapustin, I.A.; Shomina, O.V.; Kupaev, A.V. The Role of Micro Breaking of
Small-Scale Wind Waves in Radar Backscattering from Sea Surface. Remote Sens. 2020, 12, 4159. [CrossRef]
32. Hu, C.; Luo, F.; Zhang, L.R.; Fan, Y.F.; Chen, S.L. Widening Efficacious Parameter Estimation Range of Multi-look Pareto
Distribution. J. Electron. Inf. Technol. 2017, 39, 412–416. [CrossRef]
33. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural
Networks, Perth, WA, Australia, 27 November–1 December 1995; IEEE: New York, NY, USA, 2002.
34. Wu, J.; Hu, J.; Yang, Y. Optimized Design of Large-Body Structure of Pile Driver Based on Particle Swarm Optimization Improved
BP Neural Network. Appl. Sci. 2023, 13, 7200. [CrossRef]
35. Xu, Z.; Xia, D.; Yong, N.; Wang, J.; Lin, J.; Wang, F.; Xu, S.; Ge, D. Hybrid Particle Swarm Optimization for High-Dimensional
Latin Hypercube Design Problem. Appl. Sci. 2023, 13, 7066. [CrossRef]
36. Chandrashekar, C.; Krishnadoss, P.; Kedalu Poornachary, V.; Ananthakrishnan, B.; Rangasamy, K. HWACOA Scheduler: Hybrid
Weighted Ant Colony Optimization Algorithm for Task Scheduling in Cloud Computing. Appl. Sci. 2023, 13, 3433. [CrossRef]
37. Wang, D.; Meng, L. Performance Analysis and Parameter Selection of PSO Algorithms. Acta Autom. Sin. 2016, 42, 1552–1561.
38. Xu, S.; Wang, L.; Shui, P.; Li, X.; Zhang, J. Iterative maximum likelihood and zFlogz estimation of parameters of compound-
Gaussian clutter with inverse gamma texture. In Proceedings of the 2018 IEEE International Conference on Signal Processing,
Communications and Computing (ICSPCC), Qingdao, China, 14–16 September 2018; IEEE: New York, NY, USA, 2018.
39. Xu, S.; Ru, H.; Li, D.; Shui, P.; Xue, J. Marine Radar Small Target Classification Based on Block-Whitened Time–Frequency
Spectrogram and Pre-Trained CNN. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–11. [CrossRef]
40. Li, D.; Zhao, Z.; Zhao, Y. Analysis of Experimental Data of IPIX Radar. In Proceedings of the 2018 IEEE International Conference
on Computational Electromagnetics (ICCEM), Chengdu, China, 26–28 March 2018; IEEE: New York, NY, USA, 2018.
41. Ding, H.; Liu, N.B.; Dong, Y.L.; Chen, X.L.; Guan, J. Overview and Prospects of Radar Sea Clutter Measurement Experiments.
J. Radars 2019, 8, 281–302. [CrossRef]
42. Liu, N.B.; Ding, H.; Huang, Y.; Dong, Y.L.; Wang, G.Q.; Dong, K. Annual Progress of the Sea-detecting X-band Radar and Data
Acquisition Program. J. Radars 2021, 10, 173–182. [CrossRef]
43. Fan, Y.; Tao, M.; Su, J.; Wang, L. Analysis of goodness-of-fit method based on local property of statistical model for airborne sea
clutter data. Digit. Signal Process. 2020, 99, 102653. [CrossRef]
44. Huang, P.; Zou, Z.; Xia, X.-G.; Liu, X.; Liao, G. A Statistical Model Based on Modified Generalized-K Distribution for Sea Clutter.
IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.