
International Journal of Computer Applications Technology and Research, Volume 2, Issue 6, 676–679, 2013

Implementation of Medical Image Fusion Using DWT Process on FPGA

D. Khasim Hussain, C. Laxmikanth Reddy, V. Ashok Kumar
P.R.R.M. Engineering College, Shabad, Ranga Reddy, India

Abstract: Image fusion is a data fusion technology whose main research content is images. It refers to techniques that integrate multiple images of the same scene from several image sensors, or images of the same scene taken at different times by one sensor. The wavelet transform has good time-frequency characteristics and has been applied successfully in the image processing field. Nevertheless, its excellent properties in one dimension do not extend simply to two or more dimensions: a separable wavelet spanned by one-dimensional wavelets has limited directivity. The experiments show that the method extracts useful information from the source images into the fused image, so that clear images are obtained. The selection principles for the low- and high-frequency coefficients are set according to the different frequency bands produced by the wavelet decomposition: for the low-frequency coefficients, local-area variance is used as the measuring criterion; for the high-frequency coefficients, the window property and local characteristics of the pixels are analyzed.

Keywords: Fusion oriented images, FPGA, Application of computer vision, DWT, Wavelet coefficient maps.

1. INTRODUCTION

The actual fusion process can take place at different levels of information representation [1]; a generic categorization is to consider the different levels, sorted in ascending order of abstraction, as the signal, pixel, feature, and symbolic levels. This paper focuses on the so-called pixel-level fusion process, where a composite image has to be built from several input images. To date, the result of pixel-level image fusion is considered primarily to be presented to the human observer, especially in image sequence fusion (where the input data consists of image sequences). A possible application is the fusion of forward-looking infrared (FLIR) and low-light visible (LLTV) images obtained by an airborne sensor platform to aid a pilot navigating in poor weather conditions or darkness.

In pixel-level image fusion [2], some generic requirements can be imposed on the fusion result. The fusion process should preserve all relevant information of the input imagery in the composite image (pattern conservation). The fusion scheme should not introduce any artifacts or inconsistencies which would distract the human observer or subsequent processing stages. The fusion process should be shift and rotation invariant, i.e. the fusion result should not depend on the location or orientation of an object in the input imagery. In the case of image sequence fusion, the additional problem of temporal stability and consistency of the fused image sequence arises.

The main target of these techniques is to produce an effective representation of the combined multispectral image data, i.e., an application-oriented visualization in a reduced data set [3]–[8]. The human visual system is primarily sensitive to moving light stimuli, so moving artifacts or time-dependent contrast changes introduced by the fusion process are highly distracting to the human observer. Hence, in the case of image sequence fusion, two additional requirements apply. Temporal stability: the fused image sequence should be temporally stable, i.e. gray-level changes in the fused sequence must only be caused by gray-level changes in the input sequences; they must not be introduced by the fusion scheme itself. Temporal consistency: gray-level changes occurring in the input sequences must be present in the fused sequence without any delay or contrast change.

2. IMAGE FUSION [1]

2.1 Image Fusion Process

Figure 1: Block diagram of the image fusion process. Each input image (CT and MRI) is decomposed by the DWT into wavelet coefficient maps, the maps are combined by the fusion rules, and the inverse transform yields the fused image.

When constructing each wavelet coefficient for the fused image, we have to determine which source image describes this coefficient better. This information is kept in the fusion decision map, which has the same size as the original image. Each value in it is the index of the source image that may be more informative for the corresponding wavelet coefficient. Thus, we actually make a decision on each coefficient. There are two frequently used methods in previous research. To make the decision on one of the coefficients of the fused image, one way is to consider only the corresponding coefficients in the source images, as illustrated by the red pixels; this is called the pixel-based fusion rule. The other way is to consider not only the corresponding coefficients but also their close neighbors, say a 3x3 or 5x5 window, as illustrated by the blue and shadowed pixels; this is called the window-based fusion rule. The window-based method takes into account the fact that neighboring pixels usually have high correlation.
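As an illustration of the two rules, the following Python sketch (our notation, not from the paper: NumPy and SciPy stand in for the hardware implementation, and the 3x3 window size is an assumed default) selects each fused coefficient either from the corresponding coefficient alone or by comparing local energy over a small window:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def fuse_pixel_based(c1, c2):
        # Pixel-based rule: keep whichever source coefficient
        # has the larger magnitude at each position.
        return np.where(np.abs(c1) >= np.abs(c2), c1, c2)

    def fuse_window_based(c1, c2, win=3):
        # Window-based rule: compare the local energy of the
        # coefficients over a win x win neighborhood, exploiting
        # the correlation among neighboring pixels.
        e1 = uniform_filter(c1 * c1, size=win)
        e2 = uniform_filter(c2 * c2, size=win)
        return np.where(e1 >= e2, c1, c2)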


Figure 2: Fusion process on the FPGA kit. The registered source images (CT scan and MRI scan) are decomposed by the DWT into wavelet coefficient maps, the fusion rules running on the MicroBlaze processor produce the fused wavelet coefficient map, and the IDWT yields the fused image.

In our research, we consider that objects carry the information of interest, and that each pixel or small neighborhood of pixels is just one part of an object. Thus, we propose a region-based fusion scheme. When making the decision on each coefficient, we consider not only the corresponding coefficients and their close neighborhoods but also the regions the coefficients lie in. We regard these regions as representing the objects of interest. More details of the scheme are provided in the following sections.

2.2 Wavelet Transform

Wavelets are mathematical functions, defined over a finite interval and having an average value of zero, that transform data into different frequency components, representing each component with a resolution matched to its scale.

Figure 3: Wavelet coefficient representation

2.3 2-D Transform Hierarchy

The 1-D wavelet transform can be extended to a two-dimensional (2-D) wavelet transform using separable wavelet filters. With separable filters, the 2-D transform can be computed by applying a 1-D transform to all the rows of the input and then repeating on all of the columns.

2.4 Discrete Wavelet Transform

Calculating wavelet coefficients at every possible scale is a fair amount of work, and it generates an awful lot of data. If the scales and positions are chosen based on powers of two, the so-called dyadic scales and positions, then calculating the wavelet coefficients is efficient and just as accurate. This is what the discrete wavelet transform (DWT) provides.

The 2-D subband decomposition is just an extension of 1-D subband decomposition. The entire process is carried out by executing the 1-D subband decomposition twice, first in one direction (horizontal), then in the orthogonal (vertical) direction. For example, the low-pass subband (Li) resulting from the horizontal direction is further decomposed in the vertical direction, leading to the LLi and LHi subbands. Similarly, the high-pass subband (Hi) is further decomposed into HLi and HHi. After one level of transform, the image can be decomposed further by applying the 2-D subband decomposition to the existing LLi subband. This iterative process results in multiple "transform levels". In Fig. 4, the first level of transform results in LH1, HL1, and HH1, in addition to LL1, which is further decomposed into LH2, HL2, HH2, and LL2 at the second level; the information of LL2 is used for the third-level transform. The subband LLi is a low-resolution subband, and the high-pass subbands LHi, HLi, and HHi are the horizontal, vertical, and diagonal subbands respectively, since they represent the horizontal, vertical, and diagonal residual information of the original image.

Figure 4: Subband labeling scheme for a three-level, 2-D wavelet transform

To obtain a two-dimensional wavelet transform, the one-dimensional transform is applied first along the rows and then along the columns to produce four subbands: low-resolution, horizontal, vertical, and diagonal. (The vertical subband is created by applying a horizontal high-pass filter, which yields vertical edges.) At each level, the wavelet transform can be reapplied to the low-resolution subband to further decorrelate the image. Fig. 5 illustrates the image decomposition, defining the level and subband conventions. The final configuration contains a small low-resolution subband. In addition to the various transform levels, the phrase "level 0" is used to refer to the original image data. When the user requests zero levels of transform, the original image data (level 0) is treated as a low-pass band and processing follows its natural flow.
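A minimal sketch of this subband hierarchy, using the PyWavelets library rather than the paper's SystemC/FPGA implementation (the Haar wavelet, the 256x256 size, and the three-level depth are assumed purely for illustration):

    import numpy as np
    import pywt

    image = np.random.rand(256, 256)  # stand-in for a registered scan slice

    # One level of 2-D subband decomposition: the approximation (LL)
    # plus the horizontal, vertical, and diagonal detail subbands.
    LL, (LH, HL, HH) = pywt.dwt2(image, 'haar')

    # Three-level decomposition: the LL subband is re-decomposed at each
    # level, matching the labeling scheme of Fig. 4. coeffs[0] is LL3;
    # the remaining entries hold the detail subbands from coarse to fine.
    coeffs = pywt.wavedec2(image, 'haar', level=3)
    LL3 = coeffs[0]
    for i, (lh, hl, hh) in enumerate(coeffs[1:]):
        print('level', 3 - i, 'detail subband shape:', lh.shape)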


Figure 5: Image decomposition using wavelets, showing the level and subband conventions (low-resolution, horizontal (LH), vertical (HL), and diagonal (HH) subbands across multiple transform levels)

The wavelet transform is first performed on each source image, and a fusion decision map is then generated based on a set of fusion rules. The fused wavelet coefficient map is constructed from the wavelet coefficients of the source images according to the fusion decision map. Finally, the fused image is obtained by performing the inverse wavelet transform. From the above diagram, we can see that the fusion rules play a very important role during the fusion process.
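A compact software model of this pipeline can be sketched in Python with PyWavelets (a hedged illustration: the max-absolute high-frequency rule, the averaging of the low-frequency band, the Haar wavelet, and the two-level depth are our assumptions, and the random arrays merely stand in for the registered CT and MRI images, which the paper processes in hardware):

    import numpy as np
    import pywt

    def dwt_fuse(img_a, img_b, wavelet='haar', level=2):
        # Decompose both registered source images.
        ca = pywt.wavedec2(img_a, wavelet, level=level)
        cb = pywt.wavedec2(img_b, wavelet, level=level)

        # Low-frequency (approximation) band: average the two sources.
        fused = [0.5 * (ca[0] + cb[0])]

        # High-frequency bands: the decision map keeps, per coefficient,
        # whichever source has the larger magnitude (more salient detail).
        for da, db in zip(ca[1:], cb[1:]):
            fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                               for a, b in zip(da, db)))

        # Inverse transform of the fused coefficient map.
        return pywt.waverec2(fused, wavelet)

    ct = np.random.rand(256, 256)   # stand-in for the CT slice
    mri = np.random.rand(256, 256)  # stand-in for the MRI slice
    fused_image = dwt_fuse(ct, mri)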
3. RESULTS

The image fusion process is implemented with SystemC code for the DWT method to produce the fused image on the FPGA.

Figure 6: DWT results and fused images of the CT and MRI scans, panels (a)–(e)

The figure above shows the DWT-level and fused images taken from the Xilinx Platform Studio tool. Registered source images, a CT scan (Fig. 6(a)) and an MRI scan (Fig. 6(c)), are fused by considering the pixel coefficients of the images using the DWT. The results of the DWT process for the CT and MRI scans are shown in Figs. 6(b) and 6(d). Fusion is performed on the low-resolution subband images of the decomposed CT and MRI scans using the DWT method, and the inverse DWT is applied to obtain the final fused image. The design is implemented on a Spartan-3 kit using the Xilinx tool, shown in Figure 7.

Figure 7: Spartan-3 kit
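The paper does not reproduce its SystemC source. As a rough sketch of the kind of arithmetic that maps well onto such an FPGA, the following (assumed, illustrative) integer Haar lifting step uses only additions, subtractions, and shifts, and is exactly invertible in fixed point:

    import numpy as np

    def haar_lift_forward(x):
        # Integer-to-integer Haar step (S-transform) on an even-length
        # signal: difference then update, using only adds, subtracts,
        # and an arithmetic shift.
        even, odd = x[0::2].astype(np.int32), x[1::2].astype(np.int32)
        d = odd - even            # high-pass (detail) band
        s = even + (d >> 1)       # low-pass (approximation) band
        return s, d

    def haar_lift_inverse(s, d):
        # Exact inverse of the forward step.
        even = s - (d >> 1)
        odd = d + even
        x = np.empty(even.size + odd.size, dtype=np.int32)
        x[0::2], x[1::2] = even, odd
        return x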


A new approach to 3-D image fusion using wavelet transforms has also been proposed, in which several known 2-D DWT fusion schemes are extended to handle 3-D images. Wavelet transform fusion diagrams have been introduced as a convenient tool to describe different image fusion schemes visually. A very important advantage of 3-D DWT image fusion over alternative image fusion algorithms is that it may be combined with other 3-D image processing algorithms working in the wavelet domain.
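Under the same assumptions as the 2-D sketch above (Haar wavelet, max-absolute detail rule, averaged approximation band; the random volumes are illustrative stand-ins), PyWavelets' n-dimensional routines make the 3-D extension direct:

    import numpy as np
    import pywt

    vol_a = np.random.rand(64, 64, 64)  # stand-in for a 3-D scan volume
    vol_b = np.random.rand(64, 64, 64)

    ca = pywt.wavedecn(vol_a, 'haar', level=2)
    cb = pywt.wavedecn(vol_b, 'haar', level=2)

    # Average the 3-D approximation band.
    fused = [0.5 * (ca[0] + cb[0])]
    # Each remaining level is a dict of detail subbands (keys such as
    # 'aad', 'dda', ...); keep the larger-magnitude coefficient per voxel.
    for da, db in zip(ca[1:], cb[1:]):
        fused.append({k: np.where(np.abs(da[k]) >= np.abs(db[k]), da[k], db[k])
                      for k in da})

    fused_volume = pywt.waverecn(fused, 'haar')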

4. CONCLUSION

In order to evaluate the results and compare these methods, two quantitative assessment criteria, information entropy and root mean square error, were employed. Experimental results indicated that there are no considerable differences in performance between the two methods. The fusion has been implemented for medical images and remote sensing images. It is hoped that the techniques can be extended to color images and to the fusion of multiple sensor images under memory constraints.
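Both criteria are simple to compute; a minimal sketch follows (the 256-bin histogram for the entropy is our assumption, since the paper does not state its binning):

    import numpy as np

    def information_entropy(img, bins=256):
        # Shannon entropy of the image histogram, in bits.
        hist, _ = np.histogram(img, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]              # drop empty bins so log2 is defined
        return -np.sum(p * np.log2(p))

    def rmse(reference, fused):
        # Root mean square error between reference and fused images.
        diff = reference.astype(np.float64) - fused.astype(np.float64)
        return np.sqrt(np.mean(diff ** 2))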


5. REFERENCES

[1] A. Goshtaby and S. Nikolov, "Image fusion: Advances in the state of the art," Inf. Fusion, vol. 8, no. 2, pp. 114–118, Apr. 2007.

[2] V. Tsagaris and V. Anastassopoulos, "Multispectral image fusion for improved RGB representation based on perceptual attributes," Int. J. Remote Sens., vol. 26, no. 15, pp. 3241–3254, Aug. 2005.

[3] J. Tyo, A. Konsolakis, D. Diersen, and R. C. Olsen, "Principal-components-based display strategy for spectral imagery," IEEE Trans. Geosci. Remote Sens., vol. 41, no. 3, pp. 708–718, Mar. 2003.

[4] W. Zhang and J. Kang, "QuickBird panchromatic and multi-spectral image fusion using wavelet packet transform," in Lecture Notes in Control and Information Sciences, vol. 344. Berlin, Germany: Springer-Verlag, 2006, pp. 976–981.

[5] V. Shah, N. Younan, and R. King, "An efficient pan-sharpening method via a combined adaptive PCA approach and contourlets," IEEE Trans. Geosci. Remote Sens., vol. 46, no. 5, pp. 1323–1335, May 2008.

[6] K. Kotwal and S. Chaudhuri, "Visualization of hyperspectral images using bilateral filtering," IEEE Trans. Geosci. Remote Sens., vol. 48, no. 5, pp. 2308–2316, May 2010.

[7] Q. Du, N. Raksuntorn, S. Cai, and R. J. Moorhead, "Color display for hyperspectral imagery," IEEE Trans. Geosci. Remote Sens., vol. 46, no. 6, pp. 1858–1866, Jun. 2008.

[8] S. Cai, Q. Du, and R. J. Moorhead, "Feature-driven multilayer visualization for remotely sensed hyperspectral imagery," IEEE Trans. Geosci. Remote Sens., vol. 48, no. 9, pp. 3471–3481, Sep. 2010.

