
Volume 9, Issue 10, October 2024 — International Journal of Innovative Science and Research Technology
ISSN No: 2456-2165 — https://doi.org/10.38124/ijisrt/IJISRT24OCT1779

A Study of Various Clustering Algorithms Used for Radar Signal Sorting

Fathi Elnour Hammed1; Ahmed Abdalla Ali2; Ahmed Awad3
Department of Electrical and Computer Engineering, College of Engineering,
Karary University, Khartoum, Sudan

Abstract:- Radar signal sorting plays a significant part in radar countermeasure technology and reconnaissance systems. By means of radar signal sorting, several radars and their parameters in the battlefield are precisely recognized and placed in the radar records for subsequent positioning and jamming processing. The basic sorting methods cannot fulfill the sorting process with accurate and efficient results. Therefore, in this paper we conduct a study on two main classes of clustering techniques, the first being hierarchical based clustering and the second partition based clustering, which group pulses with different characteristics and can sort and handle a large number of radar sequences with high precision and accuracy. The numerical simulations study and compare both methods from different perspectives to clarify new directions based on insightful investigation of these sorting techniques.

Keywords:- Electronic Warfare (EW); Electronic Support Measure (ESM); Radar Signal Sorting; Clustering.

I. INTRODUCTION

Due to the recent increase in the sophistication of weapons systems and the great progress in signal processing techniques, the ability to find the position of enemy equipment and implement effective countermeasures to minimize hostile threats and maximize the success of our own weapons is absolutely essential [1]. This is the primary objective of electronic warfare (EW), which was initiated during the Second World War. EW takes many forms, such as detecting hostile emitters and degrading their performance. There are three parts of EW, that is, electronic support measures (ESM), electronic countermeasures (ECM), and electronic counter-countermeasures (ECCM) [2-4].

Generally, the ESM system consists of three parts: receiver, processor, and identifier [5]. Firstly, the ESM receiver converts the interleaved signal into digital form using a unit called a pulse analyzer, and the parameters of each pulse are stored in small files called pulse descriptor words (PDWs). Generally, PDWs consist of time of arrival (ToA), pulse amplitude (PA), angle of arrival (AoA), pulse width (PW), and radio frequency (RF); one or more PDW parameters must be used to accomplish the sorting process. Secondly, the main processing part sorts the interleaved signals into different groups, such that the pulses of each radar cannot be placed into more than one group. Finally, the sorted pulses are entered into the emitter table in order to update the previous information in the table, and also to identify whether the new pulses are associated with new emitters or not. Figure 1 shows the ESM receiving interleaved radar signals emanating from multiple emitters; the processor is used to associate each pulse with its emitter.

It should be noted that the words sorting and deinterleaving are almost interchangeable. However, the ToA is often used for deinterleaving, while the PDW parameters are often used for sorting. It is preferable to demonstrate the general behavior of the PDW parameters before delving more deeply into the sorting process.

Fig 1 General Demonstration of Radar Signal Sorting.

IJISRT24OCT1779 www.ijisrt.com 1801



II. FUNDAMENTALS OF CLUSTERING

Due to modern radar technologies and the ample application of radar signals, the basic ToA deinterleaving methods cannot fulfill the deinterleaving process with accurate and efficient results. Clustering is an unsupervised technique, which can be defined as an explorative method that organizes data samples (objects) into groups based on the similarities between the objects. From this definition, we can deploy clustering techniques to group a large number of interleaved pulses into meaningful partitions, such that the pulses in each partition are associated with a unique emitter. Implementing a clustering technique is more flexible for modern radar signals such as staggered, hopping, and jittered signals. Generally, clustering techniques can be classified into three main classes: the first is hierarchical based clustering, the second is partition based clustering [6], and the last is density based clustering.

In the partition based clustering technique, the data samples are partitioned into a pre-defined number of clusters [7]. The main idea of partition based clustering is to minimize a cost function ("objective function") based on the measured distance between clusters and prototypes. Commonly, partition based clustering is further classified into two main classes, that is, hard (crisp) clustering and soft (fuzzy) clustering [8]. In hard clustering each data sample belongs to only one cluster, while in soft clustering each data sample belongs to all clusters with a specific degree of membership. Fuzzy clustering is computationally heavier than hard clustering; however, it is more suitable and accurate. Partition clustering is simple and particularly appropriate for spherical clusters. On the other hand, partition clustering suffers from bad initialization of the clusters, which leads to wrong clustering results. Besides, it needs foreknowledge of the number of clusters, which is a difficult issue in real-time processing.

Hierarchical cluster analysis is an important technique in the data mining field. It has been widely used in various fields, such as pattern recognition, data analysis, and biological studies. The dynamic distance clustering (DDC) algorithm is dynamic, that is, the clustering result has dynamic class centers and an unfixed number of classes which depends on the input data. These properties are important for radar signal sorting.

A. Clustering Concept
There is no single notion of a cluster, since there are many clustering types and each method defines the concept of a cluster in a different way. Consider Fig. 2: in (a) all the entities are treated as one cluster, in (b) the entities are considered as two clusters, and in (c) the entities are treated as three clusters. Therefore, the definition of a cluster changes from one type to another. Before we start with the clustering types, some concepts must be known.

Fig 2 (a) One Cluster. (b) Two Clusters. (c) Three Clusters

• Preprocessing Step:
Let the received signal $Y = \{y_j\},\ j = 1,\ldots,n$ consist of $n$ pulses, each pulse having an $N$-dimensional feature vector, i.e. $y_j \in \mathbb{R}^N$. Mathematically, this signal can be represented in matrix form as follows.

$$Y = \begin{bmatrix} y_{11} & y_{12} & \cdots & y_{1n} \\ y_{21} & y_{22} & \cdots & y_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ y_{N1} & y_{N2} & \cdots & y_{Nn} \end{bmatrix} \in \mathbb{R}^{N \times n} \qquad (1)$$

Generally, before we start any type of clustering, some kind of preprocessing has to be done. The preprocessing equation, also called the normalization equation, can be represented as

$$y_{ij} \leftarrow \frac{y_{ij} - \min(y_i)}{\max(y_i) - \min(y_i)} \qquad (2)$$

where $i = 1,\ldots,N$ and $j = 1,\ldots,n$.

• Distance Measure:
Clustering is a technique for grouping samples with similar attributes. In order to achieve this goal, some distance measure must define whether the samples are similar or not. There are many types of distance measures; however, the most famous is the Euclidean distance. The Euclidean distance $d$ between $y_1 = (y_{11}, y_{21}, \ldots, y_{N1})^T$ and $y_2 = (y_{12}, y_{22}, \ldots, y_{N2})^T$ is defined as follows:

$$d = \sqrt{(y_{11} - y_{12})^2 + (y_{21} - y_{22})^2 + \cdots + (y_{N1} - y_{N2})^2} \qquad (3)$$

• Global Minima and Local Minima:
The global minimum can be defined as the point corresponding to the smallest value of the error function over its whole domain, while local minima are minimum values of the error function compared only with nearby values. Fig. 3 delineates the concept of local minima and the global minimum.

Fig 3 Global and Local Minima
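The normalization of Eq. (2) and the Euclidean distance of Eq. (3) can be sketched in a few lines of Python (an illustrative aside, not part of the original paper; the pulse values below are made up):

```python
import numpy as np

# Hypothetical pulse matrix Y: N = 3 PDW features (rows), n = 5 pulses (columns),
# mirroring Eq. (1); the values are invented for illustration only.
Y = np.array([[2200.0, 2800.0, 3400.0, 2600.0, 3000.0],   # RF (MHz)
              [40.0,   65.0,   100.0,  120.0,  70.0],     # PW (us)
              [38.0,   41.0,   60.0,   45.0,   56.0]])    # DoA (deg)

# Eq. (2): min-max normalization applied per feature (per row).
y_min = Y.min(axis=1, keepdims=True)
y_max = Y.max(axis=1, keepdims=True)
Y_norm = (Y - y_min) / (y_max - y_min)

# Eq. (3): Euclidean distance between the first two (normalized) pulses.
d = np.sqrt(np.sum((Y_norm[:, 0] - Y_norm[:, 1]) ** 2))
print(Y_norm.min(), Y_norm.max())   # each feature now spans [0, 1]
print(d)
```

Normalizing per feature prevents a large-valued parameter such as RF from dominating the distance of Eq. (3).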

III. THE PROPOSED CLUSTERING ALGORITHMS

A. Hierarchical Clustering Methods
Hierarchical based clustering is a method which creates a hierarchical structure of clusters. Its main concept is that objects are more likely to be linked with nearby data samples than with farther ones. In general, hierarchical clustering is categorized into two types, i.e. agglomerative clustering and divisive clustering [8,9]. Agglomerative clustering first considers each data sample as a cluster, and then the algorithm merges the elements into larger clusters based on their distances; at the end, these clusters are merged to form one cluster. Divisive clustering, by contrast, considers all data samples as a single cluster, and then the algorithm splits the cluster until each data sample represents a cluster on its own. Both agglomerative and divisive clustering are represented by a cluster tree or "dendrogram". Fig. 4 shows the difference between agglomerative and divisive clustering. The main merits of hierarchical clustering are that it does not need prior knowledge of the number of clusters, and there is no effect of initialization. However, its drawbacks are that it only deals with local neighbors, it cannot incorporate information about cluster shape and size, and it is a static algorithm, so a data sample assigned to a cluster in the initial steps cannot move to another cluster during the final steps.

Fig 4 The difference between Agglomerative clustering and Divisive clustering
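As an illustrative sketch (not from the paper), a naive agglomerative procedure can be written directly from this description; the function names and toy data are assumptions, and the linkage rules follow Eqs. (4)-(6) below:

```python
import numpy as np

def linkage_dist(A, B, method="single"):
    """Cluster-to-cluster distance. A, B are (m, d) arrays of points.
    'single' = nearest pair, 'complete' = farthest pair,
    'average' = distance between centroids (Eq. (6) of the paper)."""
    if method == "average":
        return np.linalg.norm(A.mean(axis=0) - B.mean(axis=0))
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # all pair distances
    return D.min() if method == "single" else D.max()

def agglomerate(X, n_clusters, method="single"):
    """Naive agglomerative clustering: start with one cluster per
    sample and repeatedly merge the closest pair of clusters."""
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > n_clusters:
        best, pair = np.inf, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = linkage_dist(X[clusters[i]], X[clusters[j]], method)
                if d < best:
                    best, pair = d, (i, j)
        i, j = pair
        clusters[i] += clusters.pop(j)   # merge the closest pair of clusters
    return clusters

# Two well-separated toy pulse groups (made-up 2-D features).
X = np.array([[0.1, 0.1], [0.12, 0.09], [0.9, 0.88], [0.92, 0.9]])
print(agglomerate(X, 2, "single"))
```

Stopping the merge loop at a chosen cluster count corresponds to cutting the dendrogram at one level; running it to a single cluster reproduces the full agglomerative tree.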




• There are three commonly used strategies to calculate the distance in hierarchical agglomerative clustering:

• Single Linkage Method [6]:
Also called neighbor joining, the minimum method, or the nearest neighbor method. The rule of combination is based on "the shortest distance", that is, the distance between two clusters is the nearest distance between any object in the first cluster and any object in the second cluster. Suppose that $A$ and $B$ are two clusters in a 2-D space $\mathbb{R}^2$; then the single linkage distance $d_{SL}$ is defined as

$$d_{SL}(A, B) = \min_{a \in A,\, b \in B} d(a, b) \qquad (4)$$

• Complete Linkage Method [7]:
Also called the maximum method or the furthest neighbor method. Its rule of combination is based on the "maximum distance", that is, the distance between two clusters is the farthest distance between any object in the first cluster and any object in the second cluster, i.e.

$$d_{CL}(A, B) = \max_{a \in A,\, b \in B} d(a, b) \qquad (5)$$

where $d_{CL}$ is the complete linkage distance.

• Average Linkage Method [8]:
Also called the minimum variance method. The distance between two clusters is the distance between their centers (mean values), i.e.

$$d_{AL}(A, B) = d(\mu_A, \mu_B) \qquad (6)$$

where $d_{AL}$ is the average linkage distance, and

$$\mu_A = \frac{\sum_{a \in A} a}{|A|}, \qquad \mu_B = \frac{\sum_{b \in B} b}{|B|} \qquad (7)$$

B. Partition Based Clustering
In the partition based clustering technique, the data samples are partitioned into a pre-defined number of clusters. The main concept of partition based clustering is to minimize the cost function, or objective function, based on the distance measured between clusters and prototypes. Generally, partition based clustering is further classified into two main classes, that is, hard (crisp) clustering [10] and soft (fuzzy) clustering [11]. In hard clustering each data sample belongs to only one cluster, while in soft clustering each data sample belongs to all clusters with a specific degree of membership. Fuzzy clustering methods are computationally heavier than hard clustering methods; however, they are more suitable for clustering the data samples.

• Hard Clustering

• k-means Clustering Algorithm
The best-known hard clustering algorithm is k-means [12,13], which has been used in many fields such as pattern recognition, image processing, and data mining. k-means is a very simple algorithm, used to cluster data samples given a number of clusters. The steps of the algorithm are as follows:

• Initialize the algorithm by setting k centers randomly.
• Assign each sample (object) to the nearest center.
• Update the centers by computing the average of the samples in each cluster.
• Repeat the previous two steps until the number of samples in each cluster remains constant or some stopping criterion is satisfied.

Fig. 5 shows the final clustering result for a data set with the number of clusters set to three. There are three types of data samples, represented by circles, and the centers are denoted by a separate marker. In case A, the centers are located at the center of each cluster, and the algorithm converged to the correct result. In case B, one center is located between two circles, while two centers are located in one circle, and the algorithm converged to a false result. In case C, the algorithm converged to two clusters, and hence one cluster is empty.

Fig 5 Three k-means for one Data set.
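The k-means steps above can be sketched as follows (an illustrative aside; the data, seed, and function names are assumptions, and a poor random initialization can still produce the false results described for cases B and C):

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means following the four steps listed above."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # step 1: random centers
    for _ in range(iters):
        # step 2: assign each sample to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # step 3: move each center to the mean of its assigned samples
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):                  # step 4: assignments stable
            break
        centers = new
    return labels, centers

# Three made-up pulse groups in a normalized (RF, PW) plane.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.02, (30, 2))
               for c in ([0.2, 0.2], [0.5, 0.8], [0.9, 0.3])])
labels, centers = kmeans(X, 3)
print(np.unique(labels))
```

Rerunning with different seeds illustrates the initialization sensitivity discussed next: some seeds converge to the correct partition, others to a split or merged one.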




Therefore, it is clear that the k-means algorithm is very sensitive to the initial centers; thus it may not converge or may not yield accurate results (the global solution). Besides, it needs foreknowledge of the number of clusters, which is a difficult issue in real-time processing.

• Kernel K-means
Given a number $k$ of prototypes, the k-means clustering algorithm looks for the clusters

$$J\left(\{P_l\}_{l=1}^{k}\right) = \sum_{l=1}^{k} \sum_{u_i \in P_l} \| u_i - m_l \|^2, \quad \text{where } m_l = \frac{\sum_{u_i \in P_l} u_i}{|P_l|} \qquad (8)$$

that minimize the cost function, where $m_l$ is the mean of the $l$-th cluster and $u_i$ is defined before. One of the main drawbacks of the k-means algorithm is its inability to find optimal clusters that are nonlinearly separable in the input space. To overcome this drawback, we use the kernel method as follows. The objective function of kernel k-means can be represented as

$$J\left(\{P_l\}_{l=1}^{k}\right) = \sum_{l=1}^{k} \sum_{u_i \in P_l} \| \Phi(u_i) - m_l \|^2, \quad \text{where } m_l = \frac{\sum_{u_i \in P_l} \Phi(u_i)}{|P_l|} \qquad (9)$$

The squared Euclidean distance $\| \Phi(u_i) - m_l \|^2$ may be represented as

$$\| \Phi(u_i) - m_l \|^2 = \Phi(u_i) \cdot \Phi(u_i) - \frac{2 \sum_{u_j \in P_l} \Phi(u_i) \cdot \Phi(u_j)}{|P_l|} + \frac{\sum_{u_j, u_{l'} \in P_l} \Phi(u_j) \cdot \Phi(u_{l'})}{|P_l|^2} \qquad (10)$$

Table 1 The Kernel K-means Algorithm

Algorithm: Kernel K-means
Input: K, k, T = kernel matrix, number of prototypes, and maximum number of iterations, respectively
Output: assignment of the data samples to their clusters
1- Initialize the clusters randomly; set the iteration counter t = 0.
2- Compute the distance of every sample to every cluster mean using Eq. (10).
3- Update the clusters according to the computed distances; set t = t + 1.
4- If t > T, stop; otherwise go to step 2.
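A minimal sketch of the tabulated procedure, using the kernel-matrix expansion of Eq. (10) (an illustrative aside; function names, toy data, and parameter values are assumptions):

```python
import numpy as np

def kernel_kmeans(K, k, T=50, seed=0):
    """Kernel k-means as tabulated above: K is the n-by-n kernel matrix,
    k the number of prototypes, T the maximum number of iterations."""
    n = len(K)
    labels = np.random.default_rng(seed).integers(0, k, n)  # 1- random clusters
    for _ in range(T):                                      # 4- at most T passes
        dist = np.full((n, k), np.inf)
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if idx.size == 0:
                continue
            # 2- Eq. (10): ||Phi(u_i) - m_c||^2 using kernel entries only
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, idx].sum(axis=1) / idx.size
                          + K[np.ix_(idx, idx)].sum() / idx.size ** 2)
        new = dist.argmin(axis=1)                           # 3- update clusters
        if np.array_equal(new, labels):
            break
        labels = new
    return labels

# Gaussian kernel on two made-up, well-separated pulse groups.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(m, 0.05, (20, 2)) for m in ([0.0, 0.0], [1.0, 1.0])])
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
K = np.exp(-5.0 * sq)
labels = kernel_kmeans(K, 2)
print(np.unique(labels))
```

Note that the samples themselves are never touched inside the loop; only kernel entries are, which is what allows nonlinearly separable clusters in the input space.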

• Soft Clustering

• Fuzzy C-Means (FCM)
The FCM algorithm is used to partition $n$ samples into a number of known clusters $C$. It offers a degree of membership $\mu_{ji}$ between every sample and every cluster, with the cost function given by

$$J = \sum_{j=1}^{C} \sum_{i=1}^{N} (\mu_{ji})^l \, \| x_i - v_j \|^2 \qquad (11)$$

where $l$ ($l > 1$) is a fuzziness constant, $\sum_{j=1}^{C} \mu_{ji} = 1$, and $v_j$ is the $j$-th clustering center. We can minimize the cost function by

$$v_j = \frac{\sum_{i=1}^{N} (\mu_{ji})^l \, x_i}{\sum_{i=1}^{N} (\mu_{ji})^l} \qquad (12)$$

$$\mu_{ji} = \frac{\left( 1 / \| x_i - v_j \|^2 \right)^{1/(l-1)}}{\sum_{k=1}^{C} \left( 1 / \| x_i - v_k \|^2 \right)^{1/(l-1)}} \qquad (13)$$

The FCM algorithm is then carried out by setting the number of clusters, initializing $v_j$, and iteratively solving (12) and (13).

• KFCM
KFCM is an improvement of FCM that maps the data points from the input space into a kernel space. The objective function can be represented as

$$J\left(\{P_l\}_{l=1}^{k}\right) = \sum_{l=1}^{k} \sum_{i=1}^{N} \mu_{il}^{h} \, \| \Phi(u_i) - \Phi(m_l) \|^2 \qquad (14)$$

where $\Phi(m_l)$ and $\| \Phi(u_i) - \Phi(m_l) \|^2$ are defined previously.

IV. EVALUATION OF SORTING ALGORITHMS

In this part, we compare some of the sorting algorithms introduced before. It is worth noting that each of these algorithms might combine different types of clustering in order to enhance the efficiency of the sorting.




• Dynamic Distance Clustering (DDC) and (IDDC) Algorithms:
DDC belongs to partition clustering, and it aims to sort radar pulses without prior knowledge of the number of radars. It is mainly based on the minimum distance principle. The steps of this algorithm are summarized as follows:

• Choose any sample, usually the first one, to be the first cluster center, $z_1$.
• Compute the Euclidean distances between all samples and the obtained cluster center, and then select the sample corresponding to the maximum distance as the next cluster center, $z_2$.
• Calculate the threshold based on

$$\delta = \alpha \, \| z_1 - z_2 \| \qquad (15)$$

$$\alpha_i = \frac{\max(y_i)}{\max(y_i) - \min(y_i)}, \quad 1 \le i \le N \qquad (16)$$

$$\alpha = \left( \sum_{i} \alpha_i^2 \right)^{0.5} \qquad (17)$$

• Compute the minimum distance between all samples and the obtained centers; if the maximum of these distances, $D_{\max}$, satisfies $D_{\max} > \delta$, then the corresponding sample is taken as the next cluster center.
• Continue the previous step until $D_{\max} \le \delta$, at which point all centers have been estimated.
• Assign each sample to the nearest center.
• Combine adjacent clusters which have too few pulses (samples).

In order to verify the validity of the sorting algorithm, nine radars' pulse signals, mixed according to their ToA, are simulated. Considering radar signals from the same direction, we use four-dimensional clustering. The clustering parameters include RF, PW, the modulation slope k for linear frequency modulation (LFM), and the bit rate R for phase shift keying (PSK). Taking into account inevitable measurement error, random quantities are added to the parameters in the simulation. The bias of RF is set to 2% or less, and the bias of PW, k, and R to 10% or less. The variation range of stable PRI and staggered PRI is 1% to 3% of the mean PRI value, and the variation range of jittered PRI is 5% to 10% of the mean value. The preset parameters of the nine radars are shown in Table 2.

To improve the DDC method, we combine the last four steps into one step, that is, the cluster centers are estimated together with their corresponding samples simultaneously. The nine radar signals of Table 2 are clustered with the DDC and IDDC algorithms, and the results are depicted in Fig. 6(a) and (b), respectively.
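The DDC center-search steps above can be sketched as follows (an illustrative aside; the toy data are assumptions, and the scale factor alpha is passed in directly rather than derived from Eqs. (16)-(17)):

```python
import numpy as np

def ddc_centers(Y, alpha):
    """DDC center search per the steps above: Y is an (n, N) array of
    normalized pulses, alpha the scale factor of the Eq. (15) threshold."""
    centers = [Y[0]]                                   # first sample = first center
    d = np.linalg.norm(Y - centers[0], axis=1)
    centers.append(Y[d.argmax()])                      # farthest sample = 2nd center
    delta = alpha * np.linalg.norm(centers[0] - centers[1])   # Eq. (15)
    while True:
        # minimum distance of every sample to the centers found so far
        D = np.min([np.linalg.norm(Y - c, axis=1) for c in centers], axis=0)
        if D.max() <= delta:                           # all centers estimated
            break
        centers.append(Y[D.argmax()])                  # next center
    labels = np.argmin([np.linalg.norm(Y - c, axis=1) for c in centers], axis=0)
    return np.array(centers), labels

# Three made-up, well-separated pulse groups in a normalized (RF, PW) plane.
rng = np.random.default_rng(4)
Y = np.vstack([rng.normal(c, 0.01, (40, 2))
               for c in ([0.1, 0.1], [0.5, 0.9], [0.9, 0.2])])
centers, labels = ddc_centers(Y, alpha=0.3)
print(len(centers))
```

Because the number of centers emerges from the threshold test rather than being fixed in advance, no prior knowledge of the number of radars is required, which is the point of DDC.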

Table 2 The DDC Method Radar Parameters

No | Type      | RF (MHz)        | PW (µs) | PRI (µs)             | k   | R
1  | LFM       | 2200-2800 A(32) | 40      | 3400 (F)             | 50  | 0
2  | PSK       | 3400-3700 A(16) | 65      | 2000 (F)             | 0   | 2
3  | Monopulse | 3200            | 65      | 3400 (J)             | 0   | 0
4  | PSK       | 2600            | 100     | 1100, 1200 (S)       | 0   | 2.5
5  | Monopulse | 3500            | 120     | 900, 850, 1200 (S)   | 0   | 0
6  | LFM       | 3800            | 100     | 2500 (F)             | 150 | 0
7  | PSK       | 3800            | 70      | 38 (J)               | 0   | 5
8  | Monopulse | 3000            | 150     | 4000 (F)             | 0   | 0
9  | LFM       | 2400-3000 A(16) | 40      | 1300, 1100, 1600 (S) | 100 | 0




[Scatter plots of normalized RF versus PW: (a) the DDC result, with twelve clusters; (b) the IDDC result, with nine clusters.]
Fig 6 (a) DDC Clustering Results. (b) IDDC Clustering Results.

It is obvious that the DDC estimated twelve clusters instead of nine, whereas the IDDC obtained the correct number of clusters, which indicates the superiority of the IDDC.

• Tolerance Threshold Clustering (TTC) Algorithm
The concept of tolerance in this method means the allowable level in each parameter that can correspond to some cluster [10]. For simplicity, consider Fig. 7, which consists of one cluster and some noise pulses. Suppose the center of the cluster is denoted by $c_1$; then the tolerance region becomes $\left( c_1 \pm \frac{\Delta_{RF}}{2},\ c_1 \pm \frac{\Delta_{PW}}{2} \right)$.

Fig 7 The Concept of Tolerance in Clustering




Therefore, if we have a good estimate of this tolerance (threshold), then we can obtain our clusters. The algorithm works as follows:

• Select an arbitrary point, usually the first one, as the first center $c_j$.
• Compute the distance between $c_j$ and the other pulses:

$$\text{if } d(c_j, y_i) \le Th \Rightarrow y_i \in c_j, \quad \text{else } y_i \notin c_j \qquad (18)$$

• Update the center when a new sample belongs to it, as

$$c_j = \operatorname{avg}(c_j \cup y_i) \qquad (19)$$

• Discard all samples which belong to the current cluster.
• If the remaining number of samples is larger than 5, go to the first step; else, stop the algorithm.

Let us consider radar pulses distributed as shown in Table 3.

Table 3 The Radar Parameters

Radar | PRI (µs) | RF (GHz) | PW (µs) | DoA (°) | Total pulses
1     | 3~4      | 8.9~9.3  | 0.3~0.4 | 38~41   | 130
2     | 30~60    | 9.1~9.3  | 0.8~0.5 | 36~39   | 26
3     | 4~5      | 9.4~9.8  | 0.4~0.5 | 40~43   | 130
4     | 40~70    | 9.6~9.8  | 0.9~1   | 42~45   | 18
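The TTC steps above can be sketched as follows (an illustrative aside; the toy data, threshold, and function names are assumptions):

```python
import numpy as np

def ttc(Y, th, min_remaining=5):
    """Tolerance threshold clustering per the steps above: grow a cluster
    around the first unassigned pulse (Eq. 18), updating the center as a
    running average (Eq. 19), then discard the cluster and repeat."""
    Y = np.asarray(Y, dtype=float)
    clusters = []
    remaining = Y
    while len(remaining) > min_remaining:
        c = remaining[0].copy()                     # first unassigned point = center
        member = np.zeros(len(remaining), bool)
        member[0] = True
        for i in range(1, len(remaining)):
            if np.linalg.norm(remaining[i] - c) <= th:       # Eq. (18)
                member[i] = True
                c = remaining[member].mean(axis=0)           # Eq. (19) update
        clusters.append(remaining[member])
        remaining = remaining[~member]              # discard clustered samples
    return clusters

# Two made-up groups of normalized (RF, PW, DoA) pulses.
rng = np.random.default_rng(5)
Y = np.vstack([rng.normal(0.2, 0.02, (30, 3)), rng.normal(0.7, 0.02, (30, 3))])
print([len(c) for c in ttc(Y, th=0.35)])
```

The single threshold `th` plays exactly the role studied in Fig. 8: too large and groups merge, too small and each pulse ends up alone.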

Fig. 8 shows the results of clustering using the TTC algorithm with different thresholds, to indicate the importance of the threshold criterion. In addition, the selection of the first class center has little effect on the overall performance of both approaches, which indicates that DDC and IDDC are not sensitive to the order of the input data.

[3-D scatter plots of normalized RF, PW, and DOA showing the TTC results: (a) threshold = 0.6, three clusters; (b) threshold = 0.35, four clusters; (c) threshold = 0.2, ten clusters.]
Fig 8 The Clustering Results for Different Threshold

In Fig. 8(a) the threshold is very large, i.e. Th = 0.6, and hence two clusters are merged into one cluster. As the threshold decreases, the clustering performance becomes good, at Th = 0.35, as depicted in Fig. 8(b). In Fig. 8(c) the threshold is very small, and thus a large number of clusters is obtained. Accordingly, if the threshold is very large then all samples will be clustered into one cluster, while if the threshold is very small then every sample may be considered a cluster of its own.

• SVC and k-means Algorithm [8]
The SVC method is used to cluster data points with nonlinear boundaries in the data space. Its main idea is to map data samples from a low-dimensional feature space to a high-dimensional feature space by a nonlinear transformation. The common nonlinear transformations are shown in Table 4.

Let $u_i \in U,\ i = 1,\ldots,N$ be a radar pulse chain consisting of $N$ pulses with $U \subseteq \mathbb{R}^d$, where $d$ is the features' dimension. Applying a nonlinear transformation $\Phi$ from $U$ to some high-dimensional space, the clusters take a far better form. We look for the smallest sphere which comprises almost all the data samples.

Table 4 The Common Nonlinear Transformations

Gaussian kernel:   $K(u_i, u_j) = \exp\left( -h \, \| u_i - u_j \|^2 \right)$
Polynomial kernel: $K(u_i, u_j) = \left( u_i \cdot u_j + c \right)^2$
Sigmoid kernel:    $K(u_i, u_j) = \tanh\left( c \, (u_i \cdot u_j) + \theta \right)$
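The three transformations of Table 4 can be written directly as kernel functions (an illustrative sketch; the parameter values h, c, and theta are arbitrary assumptions):

```python
import numpy as np

# The three kernels of Table 4; h, c, and theta are free parameters.
def gaussian_kernel(u, v, h=1.0):
    return np.exp(-h * np.sum((u - v) ** 2))

def polynomial_kernel(u, v, c=1.0):
    return (np.dot(u, v) + c) ** 2

def sigmoid_kernel(u, v, c=1.0, theta=0.0):
    return np.tanh(c * np.dot(u, v) + theta)

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
print(gaussian_kernel(u, v), polynomial_kernel(u, v), sigmoid_kernel(u, v))
```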

The enclosing sphere of radius $R$ which contains all data samples can be represented by

$$\| \Phi(u_i) - a \|^2 \le R^2 \qquad (20)$$

where $\| \cdot \|^2$ is the squared Euclidean norm and $a$ is the sphere's center. The soft constraint is obtained by adding slack variables $\xi_j \ge 0$:

$$\| \Phi(u_j) - a \|^2 \le R^2 + \xi_j \qquad (21)$$

To solve this constrained problem we apply the Lagrangian

$$L = R^2 - \sum_j \left( R^2 + \xi_j - \| \Phi(u_j) - a \|^2 \right) \beta_j - \sum_j \xi_j \mu_j + C \sum_j \xi_j \qquad (22)$$

where $\beta_j \ge 0$ and $\mu_j \ge 0$ are the Lagrange multipliers, $C$ is a constant, and $C \sum_j \xi_j$ is a penalty term. Setting to zero the derivatives of $L$ with respect to $R$, $a$, and $\xi_j$, respectively, leads to

$$\sum_j \beta_j = 1 \qquad (23)$$

$$a = \sum_j \beta_j \Phi(u_j) \qquad (24)$$

$$\beta_j = C - \mu_j \qquad (25)$$



The Karush-Kuhn-Tucker (KKT) conditions are

$$\left( R^2 + \xi_j - \| \Phi(u_j) - a \|^2 \right) \beta_j = 0 \qquad (26)$$

$$\xi_j \mu_j = 0 \qquad (27)$$

The first KKT condition implies that an image $\Phi(u_j)$ with $\xi_j > 0$ and $\beta_j > 0$ lies outside the sphere, while the second condition then implies $\mu_j = 0$. So, if $\beta_j = C$, the image $\Phi(u_j)$ lies outside the boundary of the sphere and is known as an outlier. If $0 < \beta_j < C$, the image $\Phi(u_j)$ lies on the boundary surface of the sphere and is known as a support vector (SV). The remaining points lie inside the sphere, i.e. $\beta_j = 0$.

Using the above relations, we can write the Lagrangian in Wolfe dual form as

$$W = \sum_j \Phi(u_j)^2 \beta_j - \sum_{i,j} \beta_i \beta_j \, \Phi(u_i) \cdot \Phi(u_j) \qquad (28)$$

where $0 \le \beta_j \le C$. In this section we use the Gaussian kernel defined in Table 4; thus the Lagrangian $W$ can be written as

$$W = \sum_j K(u_j, u_j) \, \beta_j - \sum_{i,j} \beta_i \beta_j \, K(u_i, u_j) \qquad (29)$$

The squared distance from each image sample $\Phi(u)$ to the sphere's center $a$ can be represented as

$$R^2(u) = \| \Phi(u) - a \|^2 \qquad (30)$$

The radius of the sphere is $R = \{ R(u_i) \mid u_i \text{ is a support vector} \}$. The tightness of the boundaries is controlled by the two parameters $h$ and $C$, while the number of outliers is governed by $C$.

To decide whether a pair of data samples belong to the same cluster or not, the following steps should be considered. First, connect the path between the pair of points in the feature space. Next, divide the path into a set of segment points $z$. Finally, if $R(z) \le R$ for all segment points, then the pair of points belong to the same cluster; otherwise, they belong to different clusters.

The main idea of SVC & k-means is to partition the data into small parts in order to decrease the computation time of SVC, and then to apply k-means to each small segment. In order to illustrate the performance of SVC & k-means, four radar signals are simulated as shown in Table 5.
Table 5 The Radars' Parameters

Radar | RF (GHz)  | PW (µs)  | DOA (°) | No. of pulses
1     | 2.08~2.25 | 1.2~1.3  | 48~50   | 824
2     | 2.75~2.85 | 1~1.1    | 60~65   | 823
3     | 2.25~2.35 | 1.2~1.25 | 68~70   | 2149
4     | 2.22~2.75 | 1.3~1.4  | 56~60   | 1891
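Eq. (30), combined with Eqs. (24) and (29), lets the sphere distances be computed purely from kernel entries. A sketch follows (illustrative only: uniform multipliers satisfying Eq. (23) are assumed rather than multipliers obtained by actually solving the dual problem, and the data are made up):

```python
import numpy as np

def sphere_radius2(K, beta):
    """Squared distance of every sample image from the sphere center a,
    Eq. (30) in kernel form: with a = sum_j beta_j Phi(u_j),
    R^2(u_i) = K(u_i,u_i) - 2 sum_j beta_j K(u_j,u_i)
               + sum_{j,l} beta_j beta_l K(u_j,u_l)."""
    return np.diag(K) - 2.0 * K @ beta + beta @ K @ beta

# Toy check with a Gaussian kernel (h = 1) on made-up 2-D samples and
# uniform multipliers beta_j = 1/N, so that sum_j beta_j = 1 (Eq. 23).
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [1.0, 1.0]])
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
K = np.exp(-sq)
beta = np.full(len(X), 1.0 / len(X))
R2 = sphere_radius2(K, beta)
print(np.round(R2, 4))   # the outlying sample [1, 1] lies farthest from the center
```

The same kernel-only expression evaluated at the segment points z of a path is what the cluster-assignment test above needs, so Phi never has to be computed explicitly.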

[Bar charts of the number of samples per cluster: (a) k-means with k = 4; (b) k-means with k = 6; (c) SVC followed by k-means.]
Fig 9 (a) Clustering result with k-means, k = 4. (b) Clustering result with k-means, k = 6. (c) Clustering result with SVC and k-means.

In Fig. 9(a) we cluster the received signal using k-means with the number of clusters set to 4; it is clear that all radar signals are clustered well. In Fig. 9(b) we set the number of clusters to 6, and the algorithm accordingly converged to six clusters, since k-means largely depends on two factors: the number of clusters and the positions of the centers. Finally, this information was obtained using SVC, and the signal was then clustered using k-means. The method worked well; however, it may lead to false estimation, since a small part of the signal cannot always reflect the actual number of clusters, as depicted in Fig. 9(c).

V. CONCLUSION

In this paper we presented a comparison between hierarchical and partition clustering methods in order to obtain an amenable algorithm for sorting radar signals. It is clear that all compared algorithms have good performance when the initial centers are inherent in the problem. Additionally, we observe that the hard algorithms are highly sensitive to the initial values compared with the soft algorithms. Initialization, missing pulses, and noise pulses are all considered in the comparison. Next, a new fuzzy c-means validity index is proposed, and the simulation results show the efficiency of this method in terms of time consumption and resistance to noise. Additionally, an improved density clustering algorithm for sorting radar signals is proposed to improve the processing time of the previous version; it is also compared with other benchmark density clustering algorithms.




REFERENCES

[1]. Ahmed Abdalla Ali, Mohammed Ramadan, Yongjian Liao, Shijie Zhou, "An adaptive filtering algorithm in pulse-Doppler radar for counteracting range-velocity jamming", International Journal of Electronics, 109(10), pp. 1695-1713, 2022.
[2]. Ahmed Abdalla Ali., Wang, W. Q., Yuan, Z.,
Mohamed, S., & Bin, T. “Subarray-based FDA radar
to counteract deceptive ECM signals”, EURASIP
Journal on Advances in Signal Processing, 104, 1–
11,2016.
[3]. Ahmed Abdalla Ali., Yuan, Z., & Bin, B. “ECCM
schemes in netted radar system based on temporal
pulse diversity”. Journal of Systems Engineering and
Electronics, 27(5), 1001–1009, 2016.
[4]. Ahmed Abdalla Ali., Shokrallah, A. M. G., Yuan, Z.
H. A. O., Ying, X., & Bin, T. A. N. G. “ Deceptive
jamming suppression in multistatic radar based on
coherent clustering”, Journal of Systems Engineering
and Electronics, 29(2), 269–277, April 2018.
[5]. Rokach, Lior, and Oded Maimon. "Clustering
methods." Data mining and knowledge discovery
handbook. Springer US, 2005. 321-352.
[6]. Murtagh, F. A survey of recent advances in
hierarchical clustering algorithms which use cluster
centers. Comput. J. 26 354-359, 1984.
[7]. Chen, Long, C. L. Philip Chen, and Mingzhu Lu. "A multiple-kernel fuzzy C-means algorithm for image segmentation." IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 41(5): 1263-1274, 2011.
[8]. He, Ai-Ling, et al. "Multi-Parameter Signal Sorting Algorithm Based on Dynamic Distance Clustering." Journal of Electronic Science and Technology of China, 7(3): 249-253, 2009.
[9]. D. Xu, M. Xu, H. Wang, F. Feng, L. Tang and M. Gu,
"A Real-time Radar Signal Sorting Method and
Implementation Based on DSP," 2020 IEEE MTT-S
International Wireless Symposium (IWS), Shanghai,
China, 2020, pp. 1-3, 2020.
[10]. Z. Cui, X. Fu, P. Lang, J. Dong, F. Wu and H. Gao,
"Radar Signal Sorting Based on Adaptive SOFM and
Coyote optimization," 2022 7th International
Conference on Signal and Image Processing (ICSIP),
Suzhou, China, 2022, pp. 157-161,2022.
[11]. M. Wan, Y. Zhang, Y. Bai, Y. Sun, Q. Yu and Q.
Wang, "A Real-Time Radar Signal Sorting Method
Under Bayesian Framework With Dynamic Cluster
Merging," in IEEE Sensors Journal, vol. 24, no. 17,
pp. 27859-27869, 1 Sept.1, 2024.
[12]. Z. Zhou, X. Fu, J. Dong and M. Gao, "Radar Signal
Sorting With Multiple Self-Attention Coupling
Mechanism Based Transformer Network," in IEEE
Signal Processing Letters, vol. 31, pp. 1765-1769,
2024.
[13]. Zhizhong Zhang, Xiaoran Shi, Xinyi Guo, Feng Zhou,
"TR-RAGCN-AFF-RESS: A Method for Radar
Emitter Signal Sorting", Remote Sensing, vol.16,
no.7, pp.1121, 2024.

