This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/JSEN.2018.2822712, IEEE Sensors Journal.
1558-1748 (c) 2018 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
maximizing fusion rule for complementary fusion [16]. In the last decade, various soft computing techniques have been applied in multimodal image fusion applications. Kavitha and Chellamuthu proposed Ant Colony Optimization (ACO) based edge feature detection with fusion by a Pulse Coupled Neural Network (PCNN) [17]. Several other soft computing techniques have been implemented in image fusion applications, such as Quantum-Behaved Particle Swarm Optimization (QPSO) [18], deep convolutional neural networks [19], human visual system (HVS) models [20], neuro-fuzzy systems [21] and Grey Wolf Optimization (GWO) [22, 23]. Intelligent medical image fusion techniques are used in computer-aided brain surgery, Alzheimer's treatment, tumor detection and other clinical diagnoses [24-30].

In this paper, we propose an optimum wavelet domain homomorphic filter which provides multi-level decomposition in the homomorphic domain. The optimum scale values are selected using a novel optimization approach called the Hybrid Genetic Grey Wolf Optimization algorithm.

The organization of this paper is as follows. Section II describes generalized homomorphic filtering and the need for multi-scale filtering. Section III presents the proposed wavelet-based homomorphic fusion in detail. Section IV presents the results and discussion. Lastly, Section V presents the conclusions.

II. GENERALIZED HOMOMORPHIC FILTERING

Homomorphic filtering is an image enhancement technique which decomposes the original image. The input image f(x,y) is the product of the illumination i(x,y) and reflection r(x,y) coefficients, as shown in Eq. (1).

    f(x,y) = i(x,y) r(x,y)                                                  (1)

The original image is first passed through a logarithmic operator. The result z(x,y) is the sum of the illumination and reflection coefficients in the logarithmic domain, as shown in Eq. (2).

    z(x,y) = ln(i(x,y)) + ln(r(x,y))                                        (2)

The logarithmic coefficients are then transformed using the Discrete Fourier Transform (DFT), as shown in Eq. (3), and the result of the DFT is operated on by a selective filter H(u,v), as shown in Eqs. (4) and (5), where S(u,v) is the selectively filtered image.

    Z(u,v) = F_i(u,v) + F_r(u,v)                                            (3)
    S(u,v) = H(u,v) F_i(u,v) + H(u,v) F_r(u,v)                              (4)
    S(u,v) = H(u,v) Z(u,v)                                                  (5)

The resulting homomorphic image lies in the logarithmic frequency domain. To recover the spatial domain information, the inverse Fourier transform of the image is computed as in Eq. (6); expanding it with Eq. (4) gives Eq. (7). The resulting image consists of both illumination and reflection coefficients in the logarithmic range, as given in Eq. (8).

    s(x,y) = F^{-1}{S(u,v)}                                                 (6)
    s(x,y) = F^{-1}{H(u,v) F_i(u,v)} + F^{-1}{H(u,v) F_r(u,v)}              (7)
    s(x,y) = i'(x,y) + r'(x,y)                                              (8)

In the second stage of homomorphic filtering, the exponential of the image is computed. The output image consists of both illumination and reflection coefficients in the spatial domain, as shown in Eq. (9).

    g(x,y) = e^{s(x,y)} = e^{i'(x,y) + r'(x,y)} = e^{i'(x,y)} e^{r'(x,y)} = i_0(x,y) r_0(x,y)   (9)

The final homomorphic output image is given in Eq. (9). It consists of illumination coefficients, which represent the low frequency components, and reflection coefficients, which represent the high frequency components. The conventional flow of homomorphic filtering is given in Fig. 1.

A. Need for multi-scale decomposition in the homomorphic filter

Conventional homomorphic filtering provides only two levels of scaling: the illumination coefficients and the reflection coefficients. In medical imaging applications, multi-scaling is essential for extracting soft and hard tissue details. In our proposed approach, the discrete wavelet transform is introduced for multi-level decomposition in homomorphic filtering. The wavelet transform divides the input signal into multi-level bands, which provides enhanced details of soft as well as hard tissues in medical imaging.

B. Proposed wavelet-based homomorphic filter

Our proposed technique is an optimum multi-scale homomorphic filter in which the multi-scale decomposition is performed using the wavelet transform, as shown in Fig. 2. The original image consists of illumination as well as reflection coefficients, as given in Eq. (1); it is passed through the logarithmic operator, and the resulting image, as given in Eq. (2), is transformed into the wavelet domain.

    W1_phi(j0, m, n)  = (1/sqrt(MN)) sum_{x=0}^{M-1} sum_{y=0}^{N-1} Z1(x,y) phi_{j0,m,n}(x,y)
    W1^i_psi(j, m, n) = (1/sqrt(MN)) sum_{x=0}^{M-1} sum_{y=0}^{N-1} Z1(x,y) psi^i_{j,m,n}(x,y)     (10)

    W2_phi(j0, m, n)  = (1/sqrt(MN)) sum_{x=0}^{M-1} sum_{y=0}^{N-1} Z2(x,y) phi_{j0,m,n}(x,y)
    W2^i_psi(j, m, n) = (1/sqrt(MN)) sum_{x=0}^{M-1} sum_{y=0}^{N-1} Z2(x,y) psi^i_{j,m,n}(x,y)     (11)

The decomposed low-pass and high-pass filtered features of modality 1 and modality 2 are given in Eq. (10) and Eq. (11) respectively, where i = {H, V, D} provides the horizontal, vertical and diagonal details of the input modality.

    Z1(u,v) = W2_phi(j0, m, n) + W1^i_psi(j, m, n)                          (12)
    Z2(u,v) = sigma1 W_phi(j0, m, n) + sigma2 W^i_psi(j, m, n)              (13)

Here, Z1(u,v) and Z2(u,v) are the adder 1 and adder 2 outputs respectively.

    Z_fused(u,v) = Average(Z1(u,v), Z2(u,v))                                (14)
    Z(x,y) = W^{-1}(Z_fused(u,v))                                           (15)
    g(x,y) = e^{Z(x,y)}                                                     (16)
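The conventional homomorphic pipeline of Eqs. (1)-(9) (logarithm, DFT, selective filter, inverse DFT, exponential) can be sketched in a few lines. The paper's experiments use MATLAB; the NumPy version below is only an illustrative sketch, and the choice of the selective filter H(u,v) is left entirely to the caller.

```python
import numpy as np

def homomorphic_filter(f, H):
    """Conventional homomorphic filtering, Eqs. (1)-(9).

    f : 2-D array, strictly positive input image f(x,y) = i(x,y) r(x,y).
    H : 2-D array, frequency-domain selective filter H(u,v), same shape as f.
    """
    z = np.log(f)                 # Eq. (2): z = ln(i) + ln(r)
    Z = np.fft.fft2(z)            # Eq. (3): DFT of the log-image
    S = H * Z                     # Eq. (5): selective filtering
    s = np.real(np.fft.ifft2(S))  # Eq. (6): back to the (log) spatial domain
    return np.exp(s)              # Eq. (9): g = exp(s)
```

With the all-pass filter H = 1 the pipeline reduces to an identity, which gives a quick sanity check of the log/exp and forward/inverse DFT pairing.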
Here, Z_fused(u,v) is the fused wavelet homomorphic output, as given in Eq. (14), and g(x,y) is the spatial domain fused image, as given in Eq. (16).

III. HYBRID GENETIC BASED GREY WOLF OPTIMIZATION

This section is divided into the following three subsections:
• Grey wolf optimization
• Need for hybrid grey wolf optimization
• Proposed hybrid genetic grey wolf optimization

A. Grey Wolf Optimization

Grey Wolf Optimization is one of the biologically inspired optimization algorithms used in various fusion applications in the literature [22, 23]. The GWO algorithm imitates the hunting behaviour of the grey wolf family, which usually prefers to live in a pack. Grey wolves have a strict four-level social dominance hierarchy, and the size of a pack varies from 5 to 12. In the grey wolf social hierarchy, the best candidates are called alphas (α); they are the dominant wolves of the pack. The second best in the hierarchy are called betas (β); they are subordinate to the alphas, and if the alphas are absent from the pack, the betas lead it. The lowest ranking wolves are called omegas (ω); they must submit to all other wolves and are allowed to eat last. The wolves that are neither alpha, beta nor omega are called deltas (δ); they obey the alphas and betas but dominate the omegas in their pack [31].

The mathematical modeling of the GWO algorithm consists of three stages: tracking, encircling and attacking the prey [31]. The mathematical model for encircling the prey can be represented by the following equations:

    D = |C · X_p(t) − X(t)|                                                 (17)
    X(t+1) = X_p(t) − A · D                                                 (18)

where t is the current iteration, A and C are coefficient vectors, X_p is the position vector of the prey and X is the position vector of a grey wolf. A and C are calculated using Eq. (19) and Eq. (20).

    A = 2a · r1 − a                                                         (19)
    C = 2 · r2                                                              (20)

Here, a is a variable decreasing linearly from 2 to 0 over the course of the iterations, and r1 and r2 are random vectors, as used in Eqs. (19) and (20). The grey wolves can update their position anywhere in the search space based on the random vectors r1 and r2 [31, 32].

The hunt is usually led according to the social hierarchy given in Fig. 3. The hunting behaviour can be expressed mathematically by the following equations:

    D_alpha = |C1 · X_alpha − X|
    D_beta  = |C2 · X_beta − X|                                             (21)
    D_delta = |C3 · X_delta − X|

    X1 = X_alpha − A1 · D_alpha
    X2 = X_beta  − A2 · D_beta                                              (22)
    X3 = X_delta − A3 · D_delta
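The encircling and hierarchy-guided update described above can be sketched as a minimal GWO loop. This is a Python sketch of the standard algorithm for minimization, not the paper's MATLAB implementation; the bounds, pack size and iteration count are illustrative assumptions.

```python
import numpy as np

def gwo(fitness, dim, n_wolves=20, n_iter=100, lb=-10.0, ub=10.0, seed=0):
    """Minimal Grey Wolf Optimizer sketch (minimization)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))          # initial pack positions
    for t in range(n_iter):
        scores = np.array([fitness(x) for x in X])
        order = np.argsort(scores)
        # alpha, beta, delta: the three fittest wolves lead the hunt
        x_alpha, x_beta, x_delta = (X[i].copy() for i in order[:3])
        a = 2.0 - 2.0 * t / n_iter                    # a decreases linearly from 2 to 0
        for k in range(n_wolves):
            x_new = np.zeros(dim)
            for leader in (x_alpha, x_beta, x_delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a                  # coefficient vector A
                C = 2.0 * r2                          # coefficient vector C
                D = np.abs(C * leader - X[k])         # distance to this leader
                x_new += leader - A * D               # candidate pulled toward the leader
            X[k] = np.clip(x_new / 3.0, lb, ub)       # average of the three candidates
    scores = np.array([fitness(x) for x in X])
    i_best = int(np.argmin(scores))
    return X[i_best], float(scores[i_best])
```

For example, `gwo(lambda x: float(np.sum(x * x)), dim=2)` drives the pack toward the minimum of the sphere function at the origin.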
    X(t+1) = (X1 + X2 + X3) / 3                                             (23)

Here, the first three solutions are considered optimum and the remaining solutions are discarded; Eq. (23) updates each wolf with the average of the three best solutions. Grey wolves finish the hunt by attacking the prey when it stops moving. If |A| < 1, the grey wolves attack the prey; if |A| > 1, they diverge from the current prey to find a fitter one.

B. Need for Hybrid Grey Wolf Optimization

The variable a is varied linearly, whereas the variables r1 and r2 are fixed randomly. The drawback of GWO is that these randomly selected variables decide the optimum position. In this paper, we introduce a hybrid genetic algorithm based grey wolf optimization to overcome this limitation of conventional GWO: the GWO stage searches for the optimum prey position, while the genetic algorithm selects the optimum control variables r1 and r2.

C. Proposed Hybrid Genetic based Grey Wolf Optimization

The prime motivation of the hybrid HG-GWO algorithm is to combine the advantages of the genetic algorithm with the conventional grey wolf algorithm in order to overcome the static scale selection problem in GWO. Conventional grey wolf optimization uses randomly selected control elements r1 and r2, and static scale values may lead the search into a local minimum region. In the proposed technique, genetic operators, namely crossover and mutation, are included to select the optimum control parameters. The implementation parameters are given in Table 1. The details of HG-GWO are presented in the following three subsections.

1) Generation of the initial population
GWO and GA are population based optimization algorithms. HG-GWO generates a population consisting of n1 positions for the grey wolves and n2 initial chromosomes for the GA. In HG-GWO, n1 can be 20, 50, 100, 200 or any higher value, whereas n2 is 10, 20, 50 or any smaller value; n2 should be less than n1 in each iteration.

2) Optimum selection of the control parameters
During the hunting process, the grey wolves first encircle the prey and make sure that it is not moving; the mathematical modeling of encircling and hunting is given in Section III-A. In HG-GWO, the two control parameters r1 and r2 are selected using genetic operators. Conventional GA uses static crossover and mutation values, as reported in the literature. HG-GWO uses dynamic crossover and mutation ratios, which involve the following three steps: 1) rank the entire population based on fitness; 2) compute the average fitness value and fix it as the threshold; 3) replace all members of the population with fitness below the threshold by new members [31, 32].

3) Generating the social hierarchy
In the grey wolf family, the hunt is led according to the social hierarchy. The selection of the optimum position is based on fitness, where X_alpha is the first search agent, X_beta the second search agent and X_delta the third search agent. The direction and position of the pack are calculated using the equations of Section III-A. The flow chart of the hybrid HG-GWO is given in Fig. 2.

IV. RESULTS AND DISCUSSION

In this section, we analyze the performance of the proposed OHWF fusion technique. The proposed OHWF is tested on various MR-SPECT, MR-PET, MR-CT and MR-T1-T2 input modalities. The database images used for our fusion analysis are available at http://www.med.harvard.edu/AANLIB/home.html.

The implementation parameters of the proposed HG-GWO are given in Table 1, and the optimum scale values obtained using the proposed HG-GWO are listed in Table 2. In the optimum weight analysis, sigma1-best and sigma2-best are the optimum values corresponding to input modality 1 and modality 2. All computational experiments in this work were performed using Matlab 2010a on a PC with a 2.30 GHz Pentium dual-core processor.

TABLE 1
IMPLEMENTATION PARAMETERS OF HG-GWO

n1, number of positions for grey wolves : 50
n2, size of the initial GA population   : 20
Crossover ratio                         : dynamic
Mutation ratio                          : dynamic
Number of GWO iterations                : 50
Number of GA iterations                 : 30

TABLE 2
OPTIMUM SCALE VALUES OBTAINED USING HG-GWO

Type of image | sigma1 | sigma2
MR-SPECT      | 0.2412 | 0.0141
MR-PET        | 0.4675 | 0.0352
MR-CT         | 0.5105 | 0.2302
MR-T1-T2      | 0.4813 | 0.3264

A. Quantitative and qualitative analysis of the OHWF fusion technique

In this section, the proposed fusion technique is tested on MR-SPECT, MR-PET, MR-CT and MR-T1-T2 modalities. The quantitative results of the proposed OHWF based fusion technique are evaluated using several metrics: edge quality (Q^(AB/F)) [33], Mutual Information (MI) [9, 34], Entropy (E) [33] and Standard Deviation (STD) [35]. The proposed fusion technique is compared with various existing fusion algorithms, such as DCT [36], DWT [37], FFT [38] and IHS [39], and also with the well-known optimization algorithms GA and PSO. Examples of two cases of input medical image models are given in Fig. 5 and Fig. 6.

B. MR-SPECT Medical Image Fusion

In this section, MR and SPECT images are fused using the proposed OHWF technique. Examples of fused MR-SPECT dataset images are given in Fig. 8, in which the metabolic information of SPECT is mapped over the anatomical features of the MR image. The quantitative results of MR-SPECT fusion for dataset 1 and dataset 2 are given in Tables 3 and 4. With regard to the attained results in Tables 3 and 4, the proposed technique reveals superior results in terms of mutual information, signal quantity and edge information compared to the existing techniques.
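The three-step dynamic selection described in Section III-C.2 can be sketched as a population-refresh routine. This is a hedged Python sketch: the paper does not state the sampling range for the new members or the optimization direction, so the [0, 1] range (natural for the random vectors r1 and r2) and the minimization convention are assumptions.

```python
import numpy as np

def refresh_population(pop, fitness, rng, low=0.0, high=1.0):
    """Steps 1-3 of the dynamic HG-GWO genetic stage (minimization assumed):
    rank the population by fitness, take the mean fitness as the threshold,
    and replace every member worse than the threshold with a fresh random one."""
    scores = np.array([fitness(p) for p in pop])
    order = np.argsort(scores)                 # step 1: rank the population by fitness
    pop, scores = pop[order].copy(), scores[order]
    threshold = scores.mean()                  # step 2: mean fitness fixed as threshold
    bad = scores > threshold                   # for minimization, larger fitness is worse
    pop[bad] = rng.uniform(low, high, (int(bad.sum()), pop.shape[1]))  # step 3: new members
    return pop
```

Because the threshold is recomputed from the current population each call, the crossover/mutation pool adapts dynamically rather than using a static ratio, which is the behaviour the text attributes to HG-GWO.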
TABLE 3
RESULTS OF CASE 1 MR-SPECT IMAGE FUSION ANALYSIS

Metric    | DCT [36] | DWT [37] | FFT [38] | IHS [39] | OHWF-GA | Proposed
MI        | 2.3586   | 3.1351   | 3.2415   | 3.5214   | 3.8412  | 3.9426
Entropy   | 3.1201   | 4.7516   | 4.9534   | 5.1142   | 5.7564  | 5.9560
Q^(AB/F)  | 0.1285   | 0.1314   | 0.1502   | 0.1267   | 0.2056  | 0.2256
STD       | 48.4182  | 56.6352  | 61.2376  | 59.2354  | 73.5624 | 78.0651

TABLE 4
RESULTS OF CASE 2 MR-SPECT IMAGE FUSION ANALYSIS

Metric    | DCT [36] | DWT [37] | FFT [38] | IHS [39] | OHWF-GA | Proposed
MI        | 2.7965   | 3.2341   | 3.6210   | 3.8514   | 4.4214  | 4.7283
Entropy   | 4.1256   | 5.4213   | 5.3541   | 5.6327   | 5.8854  | 5.9861
Q^(AB/F)  | 0.0854   | 0.0527   | 0.1031   | 0.0887   | 0.1726  | 0.1924
STD       | 57.5641  | 61.2350  | 58.3641  | 59.6351  | 68.2750 | 71.8351

TABLE 5
RESULTS OF CASE 1 MR-PET IMAGE FUSION ANALYSIS

Metric    | DCT [36] | DWT [37] | FFT [38] | IHS [39] | OHWF-GA | Proposed
MI        | 2.8745   | 3.0214   | 2.9685   | 3.2415   | 3.5471  | 3.8094
Entropy   | 4.5451   | 4.9204   | 4.7521   | 5.1124   | 5.5023  | 5.8135
Q^(AB/F)  | 0.0481   | 0.0621   | 0.0741   | 0.0687   | 0.1240  | 0.1526
STD       | 41.2543  | 44.5219  | 50.2341  | 47.3625  | 61.3623 | 68.4725
TABLE 6
RESULTS OF CASE 2 MR-PET IMAGE FUSION ANALYSIS

Metric    | DCT [36] | DWT [37] | FFT [38] | IHS [39] | OHWF-GA | Proposed
MI        | 2.4517   | 2.6324   | 3.1024   | 2.8451   | 3.4210  | 3.8371
Entropy   | 3.3126   | 3.5124   | 3.9874   | 3.6851   | 4.1254  | 4.8120
Q^(AB/F)  | 0.0841   | 0.0712   | 0.1042   | 0.1204   | 1.4102  | 1.6530
STD       | 2.4517   | 2.6324   | 3.1024   | 2.8451   | 3.5210  | 3.7453

TABLE 7
RESULTS OF CASE 1 MR-CT IMAGE FUSION ANALYSIS

Metric    | DCT [36] | DWT [37] | FFT [38] | IHS [39] | OHWF-GA | Proposed
MI        | 2.8412   | 3.1475   | 3.2134   | 3.5721   | 3.7241  | 3.9706
Entropy   | 3.6541   | 3.9412   | 4.0231   | 4.3541   | 4.8121  | 5.0143
Q^(AB/F)  | 0.0415   | 0.0652   | 0.0748   | 0.0875   | 0.1241  | 0.1586
STD       | 45.5210  | 53.2651  | 61.2351  | 64.4571  | 72.2327 | 78.9420

TABLE 8
RESULTS OF CASE 2 MR-CT IMAGE FUSION ANALYSIS

Metric    | DCT [36] | DWT [37] | FFT [38] | IHS [39] | OHWF-GA | Proposed
MI        | 2.8741   | 3.2014   | 2.9124   | 3.4102   | 3.9812  | 4.0723
Entropy   | 3.6545   | 4.1207   | 3.8231   | 4.3145   | 4.7125  | 5.3854
Q^(AB/F)  | 0.0641   | 0.0742   | 0.1041   | 0.1254   | 0.1452  | 0.1816
STD       | 53.6541  | 58.8472  | 61.2351  | 68.8214  | 74.3251 | 81.5372

TABLE 9
RESULTS OF CASE 1 MR-T1-T2 IMAGE FUSION ANALYSIS

Metric    | DCT [36] | DWT [37] | FFT [38] | IHS [39] | OHWF-GA | Proposed
MI        | 2.6452   | 2.9874   | 2.8741   | 3.1025   | 3.6241  | 3.9356
Entropy   | 3.4512   | 3.7841   | 3.6571   | 3.9874   | 4.3214  | 4.7832
Q^(AB/F)  | 0.0741   | 0.1240   | 0.0891   | 0.1342   | 0.1542  | 0.1894
STD       | 47.8541  | 56.3201  | 54.1278  | 59.8514  | 76.3214 | 82.0821

TABLE 10
RESULTS OF CASE 2 MR-T1-T2 IMAGE FUSION ANALYSIS

Metric    | DCT [36] | DWT [37] | FFT [38] | IHS [39] | OHWF-GA | Proposed
MI        | 2.4751   | 2.8915   | 2.5417   | 3.1241   | 3.8457  | 4.1382
Entropy   | 3.4152   | 4.0815   | 3.6514   | 4.3451   | 4.9421  | 5.3290
Q^(AB/F)  | 0.0778   | 0.1087   | 0.1143   | 0.1341   | 0.1641  | 0.2190
STD       | 56.6214  | 63.5961  | 59.6345  | 68.8741  | 81.2354 | 85.4781
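The entropy and mutual-information metrics reported in the tables above can be sketched as histogram-based estimates. This is an illustrative Python sketch, not the paper's evaluation code: the 256-bin, 8-bit intensity range is an assumption, and the STD metric is simply `np.std` of the fused image, so no separate function is shown for it.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram of an 8-bit image."""
    counts, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = counts / counts.sum()
    p = p[p > 0]                                  # drop empty bins before the log
    return -np.sum(p * np.log2(p))

def mutual_information(a, b, bins=256):
    """MI between a source image and the fused image via the joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                                 range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()                     # joint probability
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)     # marginals
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz]))
```

A useful sanity check: the MI of an image with itself equals its entropy, and a constant image has zero entropy.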
C. MR-PET Medical Image Fusion

In this section, MR and PET images are fused using the proposed OHWF technique; examples of fused MR-PET images for dataset 3 and dataset 4 are given in Fig. 8. The quantitative results are given in Table 5 and Table 6, which show that the proposed technique outperforms the existing techniques.

D. MR-CT and MR-T1-T2 Medical Image Fusion

In this section, MR-CT and MR-T1-T2 fusion are performed using the proposed OHWF technique. The MR soft tissues are mapped to the CT images, so that soft tissue as well as hard tissue information is combined in a single frame. The quantitative results are reported in Tables 7 to 10. The attained results demonstrate that the proposed technique achieves higher quantitative results than the existing techniques in terms of mutual information, signal strength and structural similarity. The results of the MR-CT fused dataset 1 and dataset 2 images, and of the MR-T1-T2 fused dataset 1 and dataset 2 images, are shown in Fig. 8.

The proposed OHWF with HG-GWO produced better fusion quality than the fusion and optimization techniques reported in the literature. The optimum homomorphic domain fusion in our proposed approach mapped larger amounts of anatomical as well as metabolic features into a single frame. Our second contribution, the hybrid genetic grey wolf algorithm, provided a dynamic range of scale values.

V. CONCLUSION AND FUTURE DIRECTIONS

In this paper, we proposed Optimum Homomorphic Wavelet Fusion (OHWF) for multimodal medical image fusion. The advantages of the wavelet transform and the homomorphic filter are combined in a single image frame. The proposed technique enhances the quality of fusion by combining anatomical and functional features using multi-level decomposition. MR-PET, MR-SPECT, MR-T1-T2 and MR-CT fusion was performed on a large number of database images. The optimum scale values are selected using Hybrid Genetic Grey Wolf Optimization (HG-GWO).
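As a rough end-to-end sketch of the OHWF pipeline of Section II-B (Eqs. 10-16), the Python code below uses a hand-rolled one-level orthonormal Haar transform in place of a general DWT. The exact wiring of the two adders in Eqs. (12)-(13) is left implicit in the text, so the subband assignment here (modality 2 approximation plus modality 1 details for adder 1, and the sigma-weighted complementary pair for adder 2) is our assumption, not the authors' confirmed design.

```python
import numpy as np

def haar2(z):
    """One-level 2-D orthonormal Haar analysis: (A, H, V, D) subbands."""
    a, b = z[0::2, 0::2], z[0::2, 1::2]
    c, d = z[1::2, 0::2], z[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def ihaar2(A, H, V, D):
    """Exact inverse of haar2."""
    z = np.empty((2 * A.shape[0], 2 * A.shape[1]))
    z[0::2, 0::2] = (A + H + V + D) / 2
    z[0::2, 1::2] = (A - H + V - D) / 2
    z[1::2, 0::2] = (A + H - V - D) / 2
    z[1::2, 1::2] = (A - H - V + D) / 2
    return z

def ohwf_fuse(f1, f2, sigma1, sigma2):
    """OHWF sketch: log -> Haar DWT -> adders (Eqs. 12-13) -> average (Eq. 14)
    -> inverse DWT (Eq. 15) -> exponential (Eq. 16). Inputs must be positive."""
    W1, W2 = haar2(np.log(f1)), haar2(np.log(f2))
    Z1 = (W2[0],) + W1[1:]                        # adder 1: approx of f2, details of f1
    Z2 = (sigma1 * W1[0],) + tuple(sigma2 * w for w in W2[1:])  # adder 2 (assumed wiring)
    fused = tuple((z1 + z2) / 2 for z1, z2 in zip(Z1, Z2))      # subband-wise average
    return np.exp(ihaar2(*fused))
```

With identical inputs and sigma1 = sigma2 = 1 the pipeline collapses to an identity, which checks that the log/exp and analysis/synthesis pairs invert each other; real MR/SPECT pairs would of course use the optimized sigma values from Table 2.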
[Figure: MR-SPECT fused images, data set 1]
transform," Biomedical Signal Processing and Control, vol. 40, pp. 1014-1024, Feb. 2018.
6. S. M. Darwish, "Multi-level fuzzy contourlet-based image fusion for medical applications," IET Image Process., vol. 7, no. 7, pp. 694-700, Oct. 2013.
7. J. Du, W. Li, B. Xiao and Q. Nawaz, "Union Laplacian pyramid with multiple features for medical image fusion," Neurocomputing, vol. 194, pp. 326-339, Jun. 2016.
8. Y. Yang, Y. Que, S. Huang and P. Lin, "Multimodal sensor medical image fusion based on type-2 fuzzy logic in NSCT domain," IEEE Sensors Journal, vol. 16, no. 10, pp. 3735-3745, May 2016.
9. L. Wang, B. Li and L. F. Tian, "EGGDD: An explicit dependency model for multi-modal medical image fusion in shift-invariant shearlet transform domain," Information Fusion, vol. 19, pp. 29-37, Sep. 2014.
10. Y. Lifeng, Z. Donglin, W. Weidong and B. Shanglian, "Multi-modality medical image fusion based on wavelet analysis and quality evaluation," Journal of Systems Engineering and Electronics, vol. 12, no. 1, pp. 42-48, Mar. 2016.
11. P. Chai, X. Luo and Z. Zhang, "Image fusion using quaternion wavelet transform and multiple features," IEEE Access, vol. 5, pp. 6724-6734, Mar. 2017.
12. X. Xu, Y. Wang and S. Chen, "Medical image fusion using discrete fractional wavelet transform," Biomedical Signal Processing and Control, vol. 27, pp. 103-111, May 2016.
13. Z. Zhu, Y. Chai, H. Yin, Y. Li and Z. Liu, "A novel dictionary learning approach for multimodality medical image fusion," Neurocomputing, vol. 214, no. 19, pp. 471-482, Nov. 2016.
14. C. I. Chen, "Fusion of PET and MR brain images based on IHS and Log-Gabor transforms," IEEE Sensors Journal, vol. 17, no. 21, pp. 6995-7010, Nov. 2017.
15. J. Du, W. Li and B. Xiao, "Fusion of anatomical and functional images using parallel saliency features," Information Sciences, vols. 430-431, pp. 567-576, Mar. 2018.
16. S. S. Chavan, A. Mahajan, S. N. Talbar, S. Desai, M. Thakur and A. Dcruz, "Nonsubsampled rotated complex wavelet transform (NSRCxWT) for medical image fusion related to clinical aspects in neurocysticercosis," Computers in Biology and Medicine, vol. 81, no. 1, pp. 64-78, Feb. 2017.
17. C. T. Kavitha and C. Chellamuthu, "Medical image fusion based on hybrid intelligence," Applied Soft Computing, vol. 20, pp. 83-94, Jul. 2014.
18. X. Xu, D. Shan, G. Wang and X. Jiang, "Multimodal medical image fusion using PCNN optimized by the QPSO algorithm," Applied Soft Computing, vol. 46, pp. 588-595, Sep. 2016.
19. Y. Liu, X. Chen, H. Peng and Z. Wang, "Multi-focus image fusion with a deep convolutional neural network," Information Fusion, vol. 36, pp. 191-207, Jul. 2017.
20. G. Bhatnagar, Q. M. J. Wu and Z. Liu, "Human visual system inspired multi-modal medical image fusion framework," Expert Systems with Applications, vol. 40, no. 5, pp. 1708-1720, Apr. 2013.
21. S. Das and M. K. Kundu, "A neuro-fuzzy approach for medical image fusion," IEEE Transactions on Biomedical Engineering, vol. 60, no. 12, pp. 3347-3353, Dec. 2013.
22. E. Daniel, J. Anitha and J. Gnanaraj, "Optimum laplacian wavelet mask based medical image fusion using hybrid cuckoo search - grey wolf optimization algorithm," Knowledge-Based Systems, vol. 131, no. 1, pp. 58-69, Sep. 2017.
23. E. Daniel, J. Anitha, K. K. Kamaleshwaran and I. Rani, "Optimum spectrum mask based medical image fusion using Gray Wolf Optimization," Biomedical Signal Processing and Control, vol. 34, pp. 36-43, Apr. 2017.
24. A. P. James and B. V. Dasarathy, "Medical image fusion: A survey of the state of the art," Information Fusion, vol. 19, pp. 4-19, 2014.
25. S. Li, X. Kang, L. Fang, J. Hu and H. Yin, "Pixel-level image fusion: A survey of the state of the art," Information Fusion, vol. 33, pp. 100-112, 2017.
26. A. Dogra, B. Goyal and S. Agrawal, "From multi-scale decomposition to non-multi-scale decomposition methods: a comprehensive survey of image fusion techniques and its applications," IEEE Access, vol. 5, pp. 16040-16067, Aug. 2017.
27. R. S. Alves and J. M. R. S. Tavares, "Computer image registration techniques applied to nuclear medicine images," in Computational and Experimental Biomedical Sciences: Methods and Applications, Lecture Notes in Computational Vision and Biomechanics, vol. 21, 2015, pp. 173-191.
28. F. P. Oliveira and J. M. R. Tavares, "Medical image registration: a review," Comput. Methods Biomech. Biomed. Eng., vol. 17, pp. 73-93, 2014.
29. J. M. R. S. Tavares, "Analysis of biomedical images based on automated methods of image registration," in Advances in Visual Computing, Lecture Notes in Computer Science, vol. 8887, Springer, 2014, pp. 21-30.
30. P. M. O. Francisco, T. C. Pataky and J. M. R. S. Tavares, "Registration of pedobarographic image data in the frequency domain," Comput. Methods Biomech. Biomed. Eng., vol. 13, no. 6, pp. 731-740, 2010.
31. E. Daniel and J. Anitha, "Optimum green plane masking for the contrast enhancement of retinal images using enhanced genetic algorithm," Optik, vol. 126, pp. 1726-1730, 2015.
32. E. Daniel and J. Anitha, "Optimum wavelet based masking for the contrast enhancement of medical images using enhanced cuckoo search algorithm," Computers in Biology and Medicine, vol. 71, pp. 149-155, 2016.
33. Y. Liu, S. Liu and Z. Wang, "A general framework for image fusion based on multi-scale transform and sparse representation," Information Fusion, vol. 24, pp. 147-164, 2015.
34. Y. Zhuang, K. Gao, X. Miu, L. Han and X. Gong, "Infrared and visual image registration based on mutual information with a combined particle swarm optimization - Powell search algorithm," Optik, vol. 127, pp. 188-191, 2016.
35. J. Li and Z. Peng, "Multi-source image fusion algorithm based on cellular neural networks with genetic algorithm," Optik, vol. 126, pp. 5230-5236, 2015.
36. L. Cao, L. Jin, H. Tao, G. Li, Z. Zhuang and Y. Zhang, "Multi-focus image fusion based on spatial frequency in discrete cosine transform domain," IEEE Signal Processing Letters, vol. 22, pp. 220-224, 2015.
37. R. Vijayarajan and S. Muttan, "Discrete wavelet transform based principal component averaging fusion for medical images," Int. J. Electron. Commun. (AEÜ), vol. 69, pp. 896-902, 2015.
38. J. B. Sharma, K. K. Sharma and V. Sahula, "Hybrid image fusion scheme using self-fractional Fourier functions and multivariate empirical mode decomposition," Signal Processing, vol. 100, pp. 146-159, 2014.
39. C. He, Q. Liu, H. Li and H. Wang, "Multimodal medical image fusion based on IHS and PCA," Procedia Eng., vol. 7, pp. 280-285, 2010.

Ebenezer Daniel was born in Kerala, India, in 1989. He received the B.Tech degree in electronics and communication engineering from Mahatma Gandhi University, Kerala, India, in 2011, and the M.Tech and Ph.D degrees in electronics and communication engineering from the Karunya Institute of Science and Technology, Coimbatore, India, in 2014 and 2017, respectively. He is currently an Assistant Professor in the Department of Electronics & Communication Engineering, Vignan's Foundation for Science, Technology & Research, Andhra Pradesh, India. His research areas include image processing, medical image analysis and bio-inspired optimization techniques.