All content following this page was uploaded by Rasha Muthana Jameel on 24 May 2021.
A Thesis
Submitted to the Council of the Faculty of Computer Science and
Mathematics / University of Kufa in Partial Fulfillment of the
Requirement for the Degree of Master of Science in Computer Science
By:
Supervised by
يَرْفَعِ اللَّهُ الَّذِينَ آمَنُوا مِنكُمْ وَالَّذِينَ أُوتُوا الْعِلْمَ دَرَجَاتٍ وَاللَّهُ بِمَا تَعْمَلُونَ خَبِيرٌ
"Allah will raise those who have believed among you and those who were given knowledge, by degrees; and Allah is Aware of what you do." (Al-Mujadila, 58:11)
صَدَقَ اللَّهُ الْعَلِيُّ الْعَظِيمُ
"Allah, the Most High, the Most Great, has spoken the truth."
Signature:
Supervisor’s Name: Asst. Prof. Dr. Asaad Noori Hashim Al-shareefi.
Degree: Asst. Prof. Dr.
Date: / /2020
Signature:
Name: Asst. Prof. Dr. Asaad Noori Hashim Al-shareefi.
Head of Computer Science Department, Faculty of
Computer science and mathematics, University of Kufa.
Date: / /2020
Certification of Linguistic Expert
Signature:
Name: Dr. Alaa L. Alnajm
Degree: Lecturer
Address: University of Kufa, Faculty of Languages
Date: 22 / 6 /2020
Certification of Scientific Expert
Signature:
Name: Asst. Prof. Dr. Enas Hamood Al-Saadi
Degree: Assistant Professor
Address: University of Babylon / College of Education for Pure Sciences
Date: 25 / 6 /2020
Committee’s Report
We certify that we have read this thesis, "Enhancement Blurred Images
Based on Hybrid Method", as an examining committee, examined the student in
its content, and found it qualified as a thesis for the degree of Master of Science in
Computer Science.
Signature:
Name: Prof. Dr. Abbas H. Hassin Alasadi
Degree: Chairman
Date: / / 2020

Signature:
Name: Asst. Prof. Dr. Wafaa Mohammed Saeed Al Hameed
Degree: Member
Date: / / 2020

Signature:
Name: Dr. Ahmed Jabbar Obaid
Degree: Member
Date: / / 2020

Signature:
Name: Asst. Prof. Dr. Asaad Noori Hashim Al-shareefi
Degree: Member & Supervisor
Date: / / 2020
Signature:
Date: / /2020
Dedication
Imams
Rasha
Acknowledgments
The suggested algorithm was applied to 150 images and showed promising results,
judged by the measured values and the visual appearance of the images, and it
handled all of the blurred images.
List of Abbreviations
Abbreviation  Meaning
1D  One-dimensional image
2D  Two-dimensional image
AV  Average Difference
BMP  Bitmap picture
BPG  Better Portable Graphics
CMY  Cyan, Magenta and Yellow color model
Deconvblind  Blind deconvolution
Deconvwnr  Wiener deconvolution
DWT  Discrete Wavelet Transform
GIF  Graphics Interchange Format
HD  The decomposition high-pass filter
HR  The reconstruction high-pass filter
HSV  Hue, Saturation and Value
Imfill  Image fill region
JPEG 2000  An enhancement of the JPEG standard
JPEG  Joint Photographic Experts Group
LD  The decomposition low-pass filter
LR  The reconstruction low-pass filter
MD  Maximum Difference
MSE  Mean Square Error
Table of Contents
Scaling
3 CHAPTER THREE THE PROPOSED SYSTEM
7 References

List of Figures
2.1 2D wavelet transform, (a) Original image (b) Wavelet transform of one level
2.15 CIE L*a*b* color model when lightness is 75, 50, 25

List of Tables
1 CHAPTER ONE
GENERAL INTRODUCTION
1.1 Introduction
[3] Whyte, Oliver, et al. (2014): They addressed the problem of deblurring images degraded by camera-shake blur and saturated (over-exposed) pixels. Saturated pixels violate the common assumption that the image-formation process is linear, and often cause ringing in de-blurred outputs. They provided an analysis of ringing in general, and showed that in order to prevent ringing, it is insufficient to simply discard saturated pixels: even when saturated pixels are removed, ringing is caused by attempting to estimate the values of latent pixels that are brighter than the sensor's maximum output. Estimating these latent pixels is likely to cause large errors, and these errors propagate across the rest of the image. Results are shown for non-blind deblurring of real photographs containing saturated regions, demonstrating improved de-blurred image quality compared to previous work.
[4] Saini, Sonia and Lalit (2014): Focused on image restoration, which is sometimes referred to as image deblurring or image deconvolution. Image restoration is concerned with the reconstruction or estimation of the blur parameters of the uncorrupted image from a blurred and noisy one. The goal of blur identification is to estimate the attributes of the imperfect imaging system from the observed degraded image itself, prior to the restoration process. The blind deconvolution algorithm can be used effectively when no information about the blurring is available. They showed an effective blind deconvolution algorithm for image restoration, which recovers a sharp version of the blurred image when the blur kernel is unknown.
[6] Kumari, Neeraj (2015): The motivation to use wavelets as a possible alternative is to explore new ways to reduce computational complexity and to achieve better noise-reduction performance. Wavelets provide a powerful tool to represent signals. In this work, it can be seen that the Reverse Biorthogonal Wavelet and the Biorthogonal Wavelet are superior wavelets for reducing noise from a speech signal.
How to deal with a degraded image resulting from blind blur through a transmission or
acquisition operation, without any known information about the type or degree of the
blur? Blur may arise from lens aberrations, relative motion between sensors and
objects, moving objects, out-of-focus objects, use of a wide-angle lens, environmental
disturbance, a short exposure duration that decreases the number of captured photons,
motion during the image-capture procedure, a long exposure duration applied during
capture, or hand shake while capturing the image.
1. The primary purpose of removing image blur is to retain all the important visual
information contained in the original images.
2. De-blurring is also used to increase the spatial resolution of the original image
and to improve image properties, increasing quality and clarity.
3. The objective of removing blur from an image by using wavelets is to provide an
effective way to reduce blurriness while preserving the information and important
details of the image.
4. Using a hybrid method to remove blurring from an image, relying on the wavelet
transform, blind deconvolution, and the imsharpen filter, yields an image that is
more informative and loses less important information than the input image.
5. Measurements such as Average Difference, Peak Signal to Noise Ratio, Maximum
Difference, and Mean Square Error assess the quality of the output image.
As well as the subjects explained previously, the other chapters of this thesis are as
follows:
Chapter two, "Theoretical Background", includes blur, removal methods, the
discrete wavelet transform, and the de-blurring wavelet process.
Chapter three, "The Proposed System", contains the procedure steps for the
algorithms together with the proposed method.
2 CHAPTER TWO
THEORETICAL BACKGROUND
2.1 Introduction
This chapter deals with blur definitions and some types and techniques that are
used to remove blur in the spatial and frequency domains. It also presents the
reverse biorthogonal discrete wavelet transform (rbio3.9), the Gaussian smoothing filter,
and their applications in image processing. It introduces image enhancement and
image restoration and the techniques used to remove blur. Finally, it introduces some
measures used in image processing to compare two images (the input blurred
image and the output de-blurred image).
widespread and a success in the domain of image processing. The samples in low
pass sub band are called the scaling coefficients with low pass filter being the scaling
filter. The scaling coefficients are called "average", "approximation", or "smooth"
coefficients, while the samples in the high pass sub band are called the wavelet
coefficients with the high-pass filter being the wavelet filter. The wavelet
coefficients are called "detail" or "difference" coefficients [13, 14]. The discrete wavelet
transform is simple to execute and decreases the computation time. First, wavelet
decomposition is implemented on the rows of the image; then it is executed on the
columns. This process is repeated for the required number of levels. Figure (2.1)
clarifies the one-level discrete wavelet transform [15].
Figure (2.1): 2D wavelet transform, (a) Original image (b) Wavelet transform of
one level
Wavelet transforms are a mathematical means for performing signal analysis when
signal frequency varies over time. For certain classes of signals and images, wavelet
analysis provides more precise information about signal data than other signal
analysis techniques. Common applications of wavelet transforms include:
image, including important edges. A wavelet is a signal that is local in scale and time
and generally has an irregular shape [16].
low pass filter while detail coefficient is computed by the original image convolution
2.5 Blurring
Blur can be defined as an un-sharp image region that happens through camera
motion, object motion, or image acquisition. There are many types of blur,
as shown in Figure (2.5). Equation (2.3) for forming a blurred image in the spatial
domain is as follows:

d(x, y) = h(x, y) * f(x, y)        (2.3)

where d(x, y) refers to the image degraded by blur, h(x, y) is the blur kernel, f(x, y)
refers to the input image without blur, and * denotes convolution.
When constructing the blurred image in the frequency domain, the discrete
Fourier transform can be applied to the previous equation as follows:

D(u, v) = H(u, v) F(u, v)        (2.4)
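Equation (2.3) describes spatial-domain blurring as a convolution; a minimal pure-Python sketch (zero padding at the borders; the 5 × 5 image and the normalized 3 × 3 box kernel are illustrative, not taken from the thesis):

```python
def blur2d(f, h):
    """Spatial-domain blurring per Eq. (2.3): d = h * f (convolution, zero padding)."""
    H, W = len(f), len(f[0])
    kh, kw = len(h), len(h[0])
    cy, cx = kh // 2, kw // 2
    d = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            s = 0.0
            for i in range(kh):
                for j in range(kw):
                    yy, xx = y + i - cy, x + j - cx
                    if 0 <= yy < H and 0 <= xx < W:   # zero padding outside the image
                        s += h[i][j] * f[yy][xx]
            d[y][x] = s
    return d

box = [[1 / 9] * 3 for _ in range(3)]      # normalized blur kernel h(x, y)
img = [[100.0] * 5 for _ in range(5)]      # constant test image f(x, y)
out = blur2d(img, box)                     # interior pixels keep the value 100
```

For the symmetric kernels used here, convolution and correlation coincide, so the kernel is not flipped.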
Motion blur occurs due to camera motion or object motion during the time of the
camera exposure. Defocus blur results from photographing images that are out
of focus.
The captured images are more or less blurred, due to a large amount of overlapping
in the natural world as well as in the camera. The registered image must be of
good quality. When using a blurry image for helpful information in several
applications, it is necessary to de-blur it. De-blurring an image is applied to
create a sharp image and to restore as much detailed information of the image as
possible [21].
where hr is the blur size in the horizontal direction, vr is the blur size in the vertical
direction, and R is the radius of the circular averaging blur.
In this blur type, pixel weights are unequal: the blur is highest at the center and
decreases toward the edges following a bell-shaped curve. To control the blur
effect, a Gaussian blur is added to the image. Gaussian blur depends on the
kernel size and on alpha (the standard deviation). It is used widely in many graphics
packages to reduce image detail. The function is given by [24]:

G(x, y) = (1 / (2πσ²)) e^(−(x² + y²) / (2σ²))        (2.6)

where x is the distance on the x-axis, y is the distance on the y-axis, and σ represents
the standard deviation.
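Equation (2.6) can be sampled on a discrete grid to build a blur kernel; a sketch in pure Python (the grid size and σ are illustrative, and the kernel is renormalized so its weights sum to one):

```python
import math

def gaussian_kernel(size, sigma):
    """Sample Eq. (2.6) on a size x size grid centered on the middle cell, then normalize."""
    c = size // 2
    k = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2)) / (2 * math.pi * sigma ** 2)
          for x in range(size)]
         for y in range(size)]
    total = sum(sum(row) for row in k)
    return [[v / total for v in row] for row in k]   # weights now sum to 1

k = gaussian_kernel(5, 1.0)   # bell-shaped: largest weight at the center
```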
Motion blur occurs when there is motion between the device taking the
picture and the object. It can occur in many forms, such as rotation, translation, a
sudden change of scale, or a combination of these. The motion is controlled
by the angle (0 to 360 degrees, or −90 to +90) and the density in pixel values (0 to 999),
depending on the software used [25].
Defocus blur arises because of wrong focus: an out-of-focus image has unclear
detail information. Equation (2.7) represents this function [24]:
d(x, y) = { 1/(πr²)   if √(x² + y²) ≤ r
          { 0         if √(x² + y²) > r        (2.7)

where r refers to the radius. The PSF size is calculated from the radius, i.e.,
k = 2r + 1 and the kernel size is k × k.
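Equation (2.7) together with the k = 2r + 1 rule can be sketched as follows (pure Python; the radius value is illustrative):

```python
import math

def disk_psf(r):
    """Defocus PSF of Eq. (2.7): uniform disk of radius r on a k x k grid, k = 2r + 1.
    Note: the continuous formula only sums to approximately 1 on a discrete grid."""
    k = 2 * r + 1
    inside = 1.0 / (math.pi * r ** 2)
    psf = [[inside if math.hypot(x - r, y - r) <= r else 0.0
            for x in range(k)]
           for y in range(k)]
    return psf

p = disk_psf(3)   # 7 x 7 kernel
```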
A box blur (also known as a box linear filter) is a spatial-domain linear filter
in which each pixel in the resulting image has a value equal to the average value of
its neighboring pixels in the input image. It is a form of low-pass ("blurring") filter.
A 3 by 3 box blur can be written as the matrix [26]:

        [ 1  1  1 ]
(1/9) · [ 1  1  1 ]        (2.8)
        [ 1  1  1 ]
In the frequency domain, a box blur has zeros and negative components. That is, a
sine wave with a period equal to the size of the box will be blurred away entirely,
and wavelengths shorter than the size of the box may be phase-reversed, as seen
when two bokeh circles touch and form a bright spot where there would be a dark
spot between two bright spots in the original image.
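A sketch of the 3 × 3 box blur of Equation (2.8); border pixels are handled by clamping indices to the image edge (an assumption, since the text does not specify border handling):

```python
def box_blur3(img):
    """3x3 box blur (Eq. 2.8): each output pixel is the average of its 3x3 neighborhood."""
    H, W = len(img), len(img[0])
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            vals = [img[min(max(y + dy, 0), H - 1)][min(max(x + dx, 0), W - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]   # clamped neighborhood
            out[y][x] = sum(vals) / 9.0
    return out

spot = [[0.0] * 5 for _ in range(5)]
spot[2][2] = 9.0              # a single bright pixel...
out = box_blur3(spot)         # ...spreads into a 3x3 block of value 1
```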
Image de-blurring uses the point spread function (PSF) to de-convolve the blurred
image [36]. Deconvolution is categorized into two kinds: blind deconvolution and
non-blind deconvolution. Blind deconvolution uses only the blurry image for
de-blurring, while non-blind deconvolution uses the blurry image together with a PSF.
Blind deconvolution is more complicated and takes more time than non-blind
deconvolution, because it estimates the point spread function at each iteration [21].

f_(n+1) = f_n · H(g / (H f_n))

where f_(n+1) refers to the new approximation of the prior f_n, g is the captured blurry
image, n is the number of iterations, H refers to the PSF, and in the first iteration f_n
is the blurry image g itself. It reduces the differences between the blurred image
and the predicted image under Poisson statistics [25].
f_(n+1) = f_n + β (g − H f_n)

Here, f_(n+1) refers to the new approximation of the prior f_n, g is the blurry image, n
is the number of iterations, H is the PSF (i.e., the blurring function), and β is a
constant which controls the amount of sharpening; f_n in the initial iteration is identical
to the blurred image g. The disadvantage of such an algorithm is that it takes extra time.
f_(n+1) = f_n · e^[H(g / (H f_n)) − 1]        (2.12)

Here, f_(n+1) is the new approximation of the prior f_n, g is the captured blurry image, n
is the number of iterations, and H is the PSF; from the initial iteration, the value of f_n
is the same as the blurry image g. There are limitations, such as the complex calculation
of the exponential function, which makes this algorithm slow.
    [ 0  1  0 ]   [ 1  1  1 ]   [ −1 −1 −1 ]
L = [ 1 −4  1 ] , [ 1 −8  1 ] , [ −1 −9 −1 ]
    [ 0  1  0 ]   [ 1  1  1 ]   [ −1 −1 −1 ]

F = I − [I ⊗ L_K]        (2.13)

Here, F is the retrieved image, I is the corrupted image, L is the Laplacian blur mask,
and ⊗ represents the convolution operation. This method is useful in increasing the
visibility of images and takes a minimal amount of time to compute, but it is not
iterative.
involves the simultaneous evaluation of the recovered image and the PSF, which leads
to a more sophisticated computational algorithm [35, 36].

G(k, l) = H*(k, l) / (|H(k, l)|² + S_u(k, l) / S_x(k, l))        (2.14)

where S_x is the signal power spectrum and S_u is the noise power spectrum.
H is computed so that it has the same size as the input image.
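Equation (2.14) is applied pointwise to frequency-domain coefficients; a sketch over a short 1D list, where nsr stands for the per-bin ratio S_u/S_x (the variable names and sample values are illustrative):

```python
def wiener_filter(Hf, nsr):
    """Pointwise Wiener deconvolution filter, Eq. (2.14):
    G = conj(H) / (|H|^2 + S_u/S_x), evaluated per frequency bin."""
    return [h.conjugate() / (abs(h) ** 2 + r) for h, r in zip(Hf, nsr)]

# With zero noise (nsr = 0) the filter reduces to the exact inverse 1/H.
G = wiener_filter([0.5 + 0j, 2j], [0.0, 0.0])
```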
2.8 Interpolation
Interpolation is the operation of estimating the signal value at positions
intermediate between the original samples. In general, this is done by fitting
a continuous function to the known samples and evaluating the function at the desired
locations. In order to avoid aliasing without any degradation or change to the original
signal, a low-pass filter should be used; in the spatial domain this filter is
represented by the sinc function, which has infinite spatial extent. Cubic
interpolation uses up to four samples from the original signal to calculate the value
of an interpolated sample [39].
2.9 Preprocessing
One of the most common geometric operations is resizing (zooming, shrinking).
A distinction is made between resizing the real image, where the produced
image is resized in pixels, and resizing the image for human display, which means
magnification (zooming in) or shrinking (zooming out). Both image-processing
operations are useful and often rely on the same basic algorithms [40].
The three conversion curves show how the values are set when gamma is less than,
equal, and greater than 1. In each graph, the x-axis represents the intensity values in
the input image, and the y-axis represents the intensity values in the output image.
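The three gamma cases can be sketched as a mapping on normalized intensities (a simple stand-in for the gamma parameter of MATLAB's intensity-adjustment function; the sample values are illustrative):

```python
def adjust_gamma(v, gamma):
    """Map a normalized intensity v in [0, 1] through the curve v ** gamma.
    gamma < 1 brightens mid-tones, gamma > 1 darkens them, gamma == 1 is the identity."""
    return v ** gamma

mid = 0.5
brighter = adjust_gamma(mid, 0.5)   # gamma < 1: output above the input
darker = adjust_gamma(mid, 2.0)     # gamma > 1: output below the input
```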
• One can smooth repeatedly with a small-width kernel and get the same result as a
single larger-width kernel would have given.

G(x) = (1 / (√(2π) σ)) e^(−x² / (2σ²))        (2.15)
G(x, y) = (1 / (2πσ²)) e^(−(x² + y²) / (2σ²))        (2.16)
Scaling
A Gaussian of standard deviation σ, when convolved with itself, gives a larger
Gaussian of standard deviation √2 σ. If an image has been filtered using a Gaussian
with spread σ, and the image is to be filtered using the larger Gaussian with
√2 σ spread, then instead of filtering the image with the larger Gaussian, it is more
efficient to filter the previous result again with spread σ, obtaining the image filtered
with √2 σ.
2.11.2 Separability
The two-dimensional Gaussian filter can be separated into one-dimensional
Gaussians, one along the x-direction and the other along the y-direction. Thus, the
Gaussian filter can be applied to an image by first convolving it with a one-dimensional
Gaussian along each row and then convolving the result with a one-dimensional
Gaussian along each column, as shown in Figure (2.11) and Figure (2.12).
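The row-then-column procedure can be sketched as two 1D passes (pure Python, zero padding; the kernel size and σ are illustrative):

```python
import math

def gauss1d(size, sigma):
    """Normalized 1D Gaussian kernel."""
    c = size // 2
    k = [math.exp(-((i - c) ** 2) / (2 * sigma ** 2)) for i in range(size)]
    s = sum(k)
    return [v / s for v in k]

def conv_rows(img, k):
    """Convolve each row with the 1D kernel k (zero padding)."""
    W, c = len(img[0]), len(k) // 2
    return [[sum(k[j] * row[x + j - c] for j in range(len(k)) if 0 <= x + j - c < W)
             for x in range(W)]
            for row in img]

def separable_gaussian(img, size, sigma):
    """Filter rows, then columns (via transpose), per the separability property."""
    k = gauss1d(size, sigma)
    rows = conv_rows(img, k)
    cols = [list(col) for col in zip(*rows)]                 # transpose
    return [list(col) for col in zip(*conv_rows(cols, k))]   # filter, transpose back

impulse = [[0.0] * 7 for _ in range(7)]
impulse[3][3] = 1.0
out = separable_gaussian(impulse, 3, 1.0)   # impulse spreads into a 3x3 Gaussian patch
```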
they reflect is often associated with the name of the color [39]. A colored light source
can be described by three basic quantities:
• Intensity (or radiance): the total amount of energy that flows from the light source,
measured in watts (W).
• Luminance: a measure of the amount of information an observer perceives from a
light source, measured in lumens (lm). It corresponds to the radiant power of a light
source weighted by a spectral sensitivity function (a characteristic of the HVS).
• Brightness: the subjective perception of (achromatic) luminous intensity.
For color mixtures of light, the primary colors are red, green, and blue, while the
secondary colors are cyan, yellow, and magenta (Figure 2.13). It is important
to notice that for dyes (or paints), the color is named after the part of the spectrum it
absorbs, while the color of light is defined by the part of the spectrum that is emitted.
Thus, blending the three primary colors of light leads to white (that is, a full spectrum
of visible light), while blending the three primary colors of dyes leads to black (that is,
all colors are absorbed, so nothing remains to reflect the incoming light).
The following are the most common color models used in image processing:
The number of discrete values for R, G, and B determines the pixel depth,
defined as the number of bits used to represent each pixel. The standard value is 24
bits, which equals 3 image channels × 8 bits per channel.
color model, there are three components: the luminance (Y) and two chrominance
(color-difference) signals, (I) and (Q). The conversion from RGB into YIQ is applied
by employing the conversion below:
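The conversion matrix itself did not survive in the extracted text; the standard NTSC RGB-to-YIQ coefficients (an assumption, not taken from the thesis) give this sketch:

```python
def rgb_to_yiq(r, g, b):
    """Convert normalized RGB in [0, 1] to YIQ using the standard NTSC matrix
    (assumed coefficients; the thesis's own matrix was lost in extraction)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b     # luminance
    i = 0.596 * r - 0.274 * g - 0.322 * b     # chrominance I
    q = 0.211 * r - 0.523 * g + 0.312 * b     # chrominance Q
    return y, i, q

y, i, q = rgb_to_yiq(1.0, 1.0, 1.0)   # white: full luminance, zero chrominance
```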
Figure (2.15): CIE L*a*b* color model when lightness is 75, 50, 25.
There are no simple formulas to convert from RGB to Lab; we must first convert
RGB to an intermediate color space such as CIEXYZ and then to Lab. The CIELab
equations are:
L* = 116 g(Y1/Y1n) − 16        (2.18)

a* = 500 (g(X1/X1n) − g(Y1/Y1n))        (2.19)

b* = 200 (g(Y1/Y1n) − g(Z1/Z1n))        (2.20)

g(t) = { t^(1/3)               if t > σ³
       { t/(3σ²) + 4/29        otherwise        (2.21)

X1 = X1n g⁻¹((L* + 16)/116 + a*/500)        (2.22)

Y1 = Y1n g⁻¹((L* + 16)/116)        (2.23)

Z1 = Z1n g⁻¹((L* + 16)/116 − b*/200)        (2.24)

g⁻¹(t) = { t³                  if t > σ
         { 3σ²(t − 4/29)       otherwise        (2.25)

where σ = 6/29.
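The companding function g of Equation (2.21) and its inverse of Equation (2.25) can be sketched and round-tripped directly:

```python
SIGMA = 6 / 29   # the constant sigma from Eq. (2.25)

def g(t):
    """CIELab companding function, Eq. (2.21)."""
    return t ** (1 / 3) if t > SIGMA ** 3 else t / (3 * SIGMA ** 2) + 4 / 29

def g_inv(t):
    """Inverse companding function, Eq. (2.25): g_inv(g(t)) == t."""
    return t ** 3 if t > SIGMA else 3 * SIGMA ** 2 * (t - 4 / 29)
```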
2.13 Histogram
The x-axis represents the color values of the image, while the y-axis represents the
frequency of each color value within the image. The left side of the x-axis represents
the dark areas, the middle represents the average gray areas, and the right side
represents the light areas [39]. The histogram of a color image consists of three
colored curves: the red curve gives the frequencies of the color values of the red layer
of the color image, the green curve those of the green layer, and the blue curve those
of the blue layer.
1. Subjective measurement
2. Objective measurement
1. Subjective measurement
A number of observers are selected, tested for their visual capabilities, shown a series
of test scenes, and asked to score the quality of the scenes. It is the only "correct"
method of quantifying visual image quality. However, subjective evaluation is
usually too inconvenient, time-consuming, and expensive.
2. Objective measurement
These are automatic algorithms for quality assessment that can analyze images
and report their quality without human involvement.
Such methods could eliminate the need for expensive subjective studies. Objective
image quality metrics can be classified according to the availability of an original
(distortion-free) image, with which the distorted image is to be compared.
The work in this thesis is based on the design of No-reference image quality measure.
of the error signal. The mean squared error (MSE) is the simplest, and the most
widely used, full-reference image quality measurement [42].

MSE = (1 / (M·N)) Σᵢ₌₁^M Σⱼ₌₁^N (I(i, j) − C(i, j))²        (2.29)

where I is the original image, C is the processed image, and M × N is the image size.
The value is zero when the two images are identical and increases as the difference
between the images grows.
5- Peak Signal to Noise Ratio (PSNR):
A measure used to quantify the similarity, or the extent of the difference, between
two images of the same structure. It is determined by:

PSNR = 10 log₁₀(255² / MSE)        (2.30)

The resulting value grows without bound as the two images approach being identical,
and it falls as the difference between the two images increases [42].
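Equations (2.29) and (2.30) can be sketched directly on grayscale images stored as nested lists (the 2 × 2 sample images are illustrative):

```python
import math

def mse(I, C):
    """Mean squared error between two equal-size images, Eq. (2.29)."""
    M, N = len(I), len(I[0])
    return sum((I[i][j] - C[i][j]) ** 2 for i in range(M) for j in range(N)) / (M * N)

def psnr(I, C):
    """Peak signal-to-noise ratio for 8-bit images, Eq. (2.30);
    unbounded (infinite) when the images are identical."""
    e = mse(I, C)
    return math.inf if e == 0 else 10 * math.log10(255 ** 2 / e)

a = [[10, 20], [30, 40]]
b = [[10, 20], [30, 44]]   # one pixel differs by 4, so MSE = 16 / 4 = 4
```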
3 CHAPTER THREE
THE PROPOSED SYSTEM
3.1 Introduction
This chapter describes the practical stages of the proposed system for removing
blur from images. The blurring of an image is a major cause of image
degradation; due to blurring, it is impossible to get the exact details of the original
image. The proposed de-blurring system is a process to remove the blur and restore
the image with high quality while preserving the features and edges as much as
possible. This method relies on concepts such as the discrete wavelet transform, blind
deconvolution, image sharpening, adjusting image intensity values, Gaussian
smoothing, and converting the color model of the image. We applied this proposed
method to 150 blurred images from a public dataset, without any known information
about the type or degree of blurring, as shown in Figure (3.1).
Start
Display (Y)
Use a Gaussian filter to smooth the image
    im[r, c+1] ← v[i, j, k]
    im[r+1, c] ← v[i, j, k]
    im[r+1, c+1] ← v[i, j, k]
    c ← c + 2
  end
  r ← r + 2
end
End
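The loop fragment above appears to copy each value v[i, j, k] into a 2 × 2 block of the output image im; a sketch of that 2× pixel-replication upsampling for a single-channel image (an assumption about the fragment's intent):

```python
def upsample2x(v):
    """Replicate each input pixel into a 2x2 output block (nearest-neighbor 2x upsampling)."""
    H, W = len(v), len(v[0])
    im = [[0] * (2 * W) for _ in range(2 * H)]
    for i in range(H):
        for j in range(W):
            r, c = 2 * i, 2 * j
            im[r][c] = im[r][c + 1] = im[r + 1][c] = im[r + 1][c + 1] = v[i][j]
    return im

big = upsample2x([[1, 2], [3, 4]])
```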
Step 1: Convolve the image with the second derivative of a Gaussian filter
(gyy(y)) along each column.
Step 2: Convolve the resulting image from Step 1 with a Gaussian filter (g(x))
along each row. Call the result Lx.
Step 3: Convolve the original image with a Gaussian filter (g(y)) along each
column.
Step 4: Convolve the resulting image from Step 3 with the second derivative of a
Gaussian filter (gxx(x)) along each row. Call the result Ly.
Step 5: Add Lx to Ly.
Step 6: Display the smoothed image.
End
Y = k * X + n        (3.2)

where X is the input RGB image, Y is the degraded image, k is the kernel (the
convolution matrix) that is convolved with the input image X to transform it into the
blurry image Y, n is additive noise, and * is the convolution operator. The goal of
blind deconvolution is to invert this process and recover both X and k.
This proposal applies a convolution method that convolves the image with low-pass
and high-pass filters and then performs a down-sampling operation on both rows
and columns to produce the wavelet coefficients for processing. The discrete wavelet
transform procedure deletes the rows or columns at odd locations of the image matrix.
Step 2: Set g = [1/√2, 1/√2]    // low-pass filter
        h = [1/√2, −1/√2]   // high-pass filter
Step 3: Compute the approximation coefficients along the rows by multiplying the
input image with the low-pass filter:
  For i = 0 to number of rows − 1
    For j = 0 to number of columns − 1
      Let s = 0
      For k = 0 to number of columns − 1
        s = s + image(i, k) * g(j − k)
      Save s in a matrix named LL(i, j)
Step 4: Delete the columns at odd positions    // down-sampling
Step 5: Repeat Step 3, but compute the approximation along the columns by
multiplying the image resulting from Step 3 with the low-pass filter, exchanging
the roles of i and j.
Step 6: Repeat Step 4, deleting rows instead of columns at odd positions    // down-sampling
Step 7: Repeat Steps 1-6 to calculate the detail coefficients, using the high-pass
filter in addition to the low-pass filter.
Step 8: End For
Step 9: End For
Step 10: End For
End
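The filter pair of Step 2 and the filter-then-downsample steps can be sketched for a single 1D row (Haar analysis filters, even-length input assumed; the 2D transform applies this first to rows, then to columns):

```python
import math

S = 1 / math.sqrt(2)   # filter coefficient from Step 2: g = [S, S], h = [S, -S]

def dwt1d(row):
    """One level of the 1D DWT: filter with g and h, keeping every second sample.
    Returns (approximation, detail) coefficients."""
    approx = [S * row[i] + S * row[i + 1] for i in range(0, len(row), 2)]
    detail = [S * row[i] - S * row[i + 1] for i in range(0, len(row), 2)]
    return approx, detail

a, d = dwt1d([4.0, 4.0, 2.0, 0.0])   # the smooth pair (4, 4) gives zero detail
```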
translated pixel values that are close to the output pixel values. Create the output
matrix by replacing each input pixel value with the translated value nearest to it.
This is used to modify the image intensity values, because the adjusted image
corresponds more closely to human visual perception, as shown in Algorithm (3.4).
End
1- Radius is the standard deviation of the Gaussian low-pass filter; Radius is a positive
number with a default of one. This value controls the size of the region around the
edge pixels that is affected by sharpening: a large value sharpens wider regions
around the edges, whereas a small value sharpens narrower regions around edges.
2- Amount is the strength of the sharpening effect, specified as a numeric scalar with
a default of 0.8. A higher value leads to a larger increase in the contrast of the
sharpened pixels. Typical values for this parameter are within the range [0, 2],
although values greater than 2 are allowed; very large values may create undesirable
effects in the output image.
3- Threshold is the minimum contrast required for a pixel to be considered an edge
pixel, specified as a scalar in the range [0, 1] with a default of zero. Higher values
(closer to 1) allow sharpening only in high-contrast regions, such as strong edges,
while leaving low-contrast regions unaffected. Lower values (closer to 0) additionally
allow sharpening in relatively smoother regions of the image. This parameter is
useful for avoiding the sharpening of noise in the output image.
In our work we used imsharpen to preserve and sharpen the edges in an image, as
shown in Algorithm (3.5).
Begin
Step 1: Convert each image from RGB into the Lab color space.
Step 2: Create a mask with a different size.
Step 3: Define a set of coefficients and set a standard deviation of 1 to increase
the intensity around the edges.
Step 4: Use the amount coefficient with a value of 0.8 to increase the contrast of
the sharpened pixels.
Step 5: Use the threshold with its default value of zero: the minimum contrast
required for a pixel to be considered an edge pixel, specified as a scalar in the
range [0, 1]. Higher values (closer to 1) allow sharpening only in high-contrast
regions, such as strong edges, while leaving low-contrast regions unaffected;
lower values (closer to 0) additionally allow sharpening in relatively smoother
regions of the image.
Step 6: Apply the filter to the image: a window is passed over the L layer of the
image, after converting it to the Lab color space, with the mentioned parameters,
to highlight the edges of the image.
Step 7: Convert the image from Step 6 back to an RGB image.
End Algorithm
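The sharpening in Algorithm (3.5) can be sketched as classic unsharp masking on a single channel; here a 1D signal and a 3-tap average stand in for MATLAB's imsharpen and its Gaussian low-pass (both assumptions), with the Amount value 0.8 from Step 4:

```python
def unsharp(signal, amount=0.8):
    """Unsharp masking: sharpened = original + amount * (original - blurred).
    A 3-tap average (edges clamped) stands in for the Gaussian low-pass of Radius."""
    n = len(signal)
    blurred = [(signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, n - 1)]) / 3
               for i in range(n)]
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

edge = [0, 0, 0, 10, 10, 10]
sharp = unsharp(edge)   # contrast across the step increases (over/undershoot at the edge)
```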
4 CHAPTER FOUR
THE EXPERIMENTAL RESULTS
4.1 Introduction
This chapter displays the results of the proposed system. The proposed system
was implemented in MATLAB 2018 and applied to a public dataset without knowing
any information about the size of the input image or the kind of blur. The goodness
of the de-blurred image is measured through the peak signal-to-noise ratio (PSNR),
Maximum Difference (MD), and Mean Square Error (MSE), which compare the
input image (the blurred image) and the resulting image (the de-blurred image).
Image   PCC          MD    MSE          PSNR      Time (seconds)
1       3.6064e+04   142   1.5743e+03   16.1601   8.835409
2       4.8278e+04   140   853.8346     18.8171   8.709530
3       8.4931e+04   233   765.4394     19.2917   13.323320
4       4.3427e+04   191   644.1223     20.0411   21.200954
5       3.7576e+04   142   966.4136     18.2792   6.646867
6       3.4789e+04   125   3.1559e+03   13.1396   7.467728
7       1.5015e+05   135   761.0532     19.3167   22.388760
8       3.7381e+04   179   1.8062e+03   15.5631   7.095356
9       1.4057e+05   218   807.1285     19.0614   22.916523
10      4.4216e+04   207   684.0111     19.7802   7.624593
11      1.4107e+05   189   858.4725     18.7935   23.379268
12      4.4504e+04   120   685.0607     19.7735   8.978696
Through the previous experiments, the discrete wavelet transform (rbio3.9) combined
with the other operations has been shown to be useful in removing blur from images,
and its effect depends on the type of the image and the type of the blur. More
de-blurred image results appear in Appendix 1.
6 CHAPTER FIVE
CONCLUSION AND FUTURE WORK
5.1 Conclusions
There are a set of points concluded from the proposed system; they can be summarized as follows:
1- The proposed system is successful in removing blur from the image compared with previous work: the lowest PSNR value reported in previous work is 26.10, while in our work the lowest value is 13.139.
2- The time required to produce the de-blurred image depends on the size and nature of the input images.
3- Most previous research in the field of image deblurring used the PSNR measure to test the results, and some also used the MD and MSE measures. In our work, we use four measures to evaluate the algorithm's performance.
4- Wavelet de-blurring is applied to the LL approximation coefficients because they contain the important information of the image. The approximation coefficients are low-frequency and include less noise, while the detail coefficients (LH, HL, HH) are high-frequency and contain more of the blurring.
5- The de-blurring of the proposed system can be applied to a sub-band of the image or to the whole image.
6- Wavelet de-blurring is successful for image deblurring: it makes the image smoother, sharper, and clearer.
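The role of the LL band in point 4 can be illustrated with a single decomposition level. The thesis uses the rbio3.9 wavelet in MATLAB; the sketch below substitutes the much simpler Haar filters, purely to show how the LL (approximation) band keeps the low-frequency content while LH, HL, and HH carry the high-frequency detail.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar decomposition: returns LL, LH, HL, HH.
    (A stand-in for the rbio3.9 wavelet used in the thesis.)"""
    x = np.asarray(img, dtype=np.float64)
    # Pairwise averages (low-pass) and differences (high-pass) along rows...
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
    # ...then along columns, yielding the four sub-bands.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh
```

For a smooth region all three detail bands are near zero, which is why processing only the LL band preserves most of the image information.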
5.2 Future Work
There are a number of suggestions proposed for future work:
The proposed system can be applied to other multimedia, such as video. In video de-blurring, a shot is a sequence of frames in the same scene, from which the optimal frame is selected based on PSNR, MD, and MSE.
Also, de-blurring can be considered a preprocessing step for several applications, such as biometrics.
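The frame-selection idea above can be sketched as follows; the `psnr` helper and the reference frame are hypothetical choices for illustration, not part of the implemented system (MD and MSE could be combined in the same way).

```python
import numpy as np

def psnr(ref, frame):
    """PSNR of a frame against a reference, on the 8-bit scale."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(frame, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def best_frame(reference, shot):
    """Select the frame of a shot closest to the reference by PSNR."""
    return max(shot, key=lambda frame: psnr(reference, frame))
```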
References
[1] Mohapatra, Biswa Ranjan, Ansuman Mishra and Sarat Kumar Rout. "A
Comprehensive Review on Image Restoration Techniques". International Journal
of Research in Advent Technology, Vol.2, No.3, pp. 101-102, (2014).
[2] Al-amri, Salem Saleh, N.V. Kalyankar and Dr. Khamitkar S.D." Deblured
Gaussian Blurred Images". Journal of Computing, Vol.2, No.4, pp. 33-34, (2010).
[3] Whyte, Oliver, Josef Sivic, et al. "Deblurring shaken and partially saturated
images". International Journal of Computer Vision 110.2, pp. 185-201, (2014).
[4] Saini, Sonia and Lalit. "Image Processing Using Blind Deconvolution Deblurring
Technique”. International Journal of Applied Engineering and Technology,
Vol.4, pp.115-118, (2014).
[5] Biswas, Prodip, Abu Sufian Sarkar, et al. "Deblurring images using a Wiener
filter". International Journal of Computer Applications 109.7, pp. 36-38, (2015).
[6] Kumari Neeraj, and Shelly Chugh. "Reduction of Noise from Audio Signals
Using Wavelets." International Journal for Advance Research in Engineering and
Technology 3 (2015).
[7] Ruikar Sachin, and Rohit Kabade, "Image Deblurring and Restoration using
Wavelet Transform." International Journal of Control Theory and Applications
(IJCTA) 9.22, pp. 95-104(2016).
[8] Mastriani Mario, "Denoising based on wavelets and deblurring via self-
organizing map for Synthetic Aperture Radar images." arXiv preprint
arXiv:1608.00274 (2016).
No-Reference Image Quality Assessment for Blur and Noise Distortions”. Vol.
6, 2846585(2018).
[11] Xiaoyang li (Rebecca). “Multiplex Channels De-noising and De-blurring by
Wavelet Transform”. 2018-5 University of Houston.
[12] Joshi, Piyush, et al. "Continuous Wavelet Transform Based No-Reference
Image Quality Assessment for Blur and Noise Distortions". IEEE Access, (2018).
[13] AL-Sultani, Hamed Sadeq Mahdi Saoud. "Digital Image Compression Using
Discrete Cosine Transform Method". M.Sc. Thesis, College of Sciences,
Al-Mustansiriya University, (2006).
[14] Zhang, Xiaoyin. "Efficient Architecture for Discrete Wavelet Transform
Using Daubechies". M.Sc. Thesis, Electrical Engineering, Songkla University,
(2010).
[15] Jana, Debalina and Kaushik Sinha. "Wavelet Thresholding for Image Noise
Removal". International Journal on Recent and Innovation Trends in Computing
and Communication, Vol.2, No.6, pp.1400 – 1405, (2014).
[16] Saini, Neeraj, and Pramod Sethy. "Performance based analysis of wavelets
family for image compression-a practical approach." Int J Comput Appl 129.9,
pp. 206-223(2015).
[17] Prasad, B. Rajendra, K. Vinayaka Kota, and B. Mysura Reddy. "Biorthogonal
wavelet transform digital image watermarking." International Journal of
Advanced Computer Research 2.3, pp. 84-89(2012).
[18] Neeraj, and Pramod Sethy. "Performance based analysis of wavelets family
for image compression-a practical approach." Int J Comput Appl 129.9, pp. 206-
223(2015).
[19] Kamthan, Surbhi and Jaimala Jha. "A Performance Based Analysis of
Various Image Deblurring Techniques" .AEIJST, Vol.3, No.12, pp.1-5, (2015).
[20] Wu, Jiunn-Lin, Chia-Feng Chang and Chun-Shih Chen. "An Adaptive
Richardson-Lucy Algorithm for Single Image Deblurring Using Local Extreme
Filtering". Journal of Applied Science and Engineering, Vol.16, No.3, pp.
270-271, (2013).
[21] Rohina Ansari, Himanshu Yadav, Anurag Jain,” A Survey on Blurred Images
with Restoration and Transformation Techniques”, International Journal of
Computer Applications (0975 – 8887) Vol. 68– No.22, April (2013).
[22] Al-amri, Salem Saleh, N.V. Kalyankar and Dr. Khamitkar S.D. "Deblured
Gaussian Blurred Images". Journal of Computing, Vol.2, No.4, pp. 33-34, (2010).
[23] Karan, Leena. "Study of Deblurring Techniques for Restored Motion Blurred
Images”. New Man International Journal of Multidisciplinary Studies, Vol.3,
No.5, pp.15-17, (2016).
[24] Dejee Singh1, Mr R. K. Sahu, “A Survey on Various Image Deblurring
Techniques”, International Journal of Advanced Research in Computer and
Communication Engineering Vol. 2, Issue 12, December (2013).
[25] Jiunn-Lin Wu, Chia-Feng Chang, Chun-Shih Chen,” An Improved
Richardson-Lucy Algorithm for Single Image Deblurring Using Local Extrema
Filtering”, IEEE International Symposium on Intelligent Signal Processing and
Communication Systems (ISPACS 2012) November 4-7, (2012).
[26] Vankawala, Fagun, Amit Ganatra, and Amit Patel. "A Survey on different
Image Deblurring Techniques." International Journal of Computer Applications
116.13 (2015).
[27] A. Bennia and S. M. Riad, “Filtering Capabilities and Convergence of the
Van-Cittert Deconvolution Technique”, IEEE Transactions on Instrumentation
and Measurement, vol. 41, no. 2, (1992).
[28] L. Lang and Y. Xu, “Adaptive Landweber method to deblur images”, IEEE
Signal Processing Letters, vol. 10, no. 5, (2003).
[29] O. Marques, “Practical image and video processing using MATLAB”, John
Wiley & Sons, (2011).
[30] Zohair Al-Ameen, Ghazali Sulong and Md. Gapar Md. Johar,” A
Comprehensive Study on Fast image Deblurring Techniques”, International
Journal of Advanced Science and Technology Vol. 44, July, (2012).
[31] A. Levin, Y. Weiss, F. Durand, and W. T. Freeman, "Understanding Blind
Deconvolution Algorithms," IEEE Transactions on Pattern Analysis and
Machine Intelligence, vol. 33, no. 12, pp. 2354-2367, (2011).
[32] P. Campisi and K. Egiazarian, Blind image deconvolution: theory and
applications. (2016).
[33] S. Yadav, C. Jain, and C. Aarti, "Evaluation of Image Deblurring
Techniques," International Journal of Computer Applications, vol. 139, no. 12,
pp. 32-36, (2016).
[34] A. P. Abhilasha, S. Vasudha, N. Reddy, V. Maik, and K. Karibassappa, "Point
Spread Function Estimation and Deblurring Using Code V," IEEE international
Conference on Electronics, Information and Communications, pp. 1-4, (2016).
[35] M. Poulose, "Literature Survey on Image Deblurring Techniques,"
International Journal of Computer Applications Technology and Research, vol.
2, no. 3, pp. 286-288, (2013).
[36] D. Singh and R. K. Sahu, "A Survey on Various Image Deblurring
Techniques," International Journal of Advanced Research in Computer and
Communicational Engineering, vol. 2, no. 12, pp. 4736-4739, (2013).
[37] L. Zhong, S. Cho, D. Metaxas, S. Paris, and J. Wang, "Handling Noise in
Single Image Deblurring using Directional Filters," In proceeding of IEEE
conference on computer vision and pattern recognition, pp. 612-619, (2013).
[38] J. Pan, D. Sun, and M. Yang, "Blind Image Deblurring Using Dark Channel
Prior," IEEE Conference on Computer Vision and Pattern Recognition, pp. 1628-
1636, (2016).
[39] Gallagher, Andrew C. "Detection of linear and cubic interpolation in JPEG
compressed images." Computer and Robot Vision, 2005. Proceedings. The 2nd
Canadian Conference on. IEEE, (2005).
[40] “Practical Image and Video Processing Using MATLAB®”. By Oge
Marques. © 2011 John Wiley & Sons, Inc. (2011).
[41] Mubarak .Shah, “Fundamentals of Computer Vision”, Computer Science
Department, University of central Florida, Orlando, FL 32816 (1997).
[42] Memon, F., M. A. Unar, and S. Memon. "Image quality assessment for
performance evaluation of focus measure operators". Mehran University Research
Journal of Engineering & Technology, Vol. 34, No. 4, October, (2016).
[43] A. F. Hassan, D. Cailin, and Z. M. Hussain. "An information theoretic
image quality measure: Comparison with statistical similarity". Journal of
Computer Science, Vol. 10, No. 11, pp. 2269-2283, (2014).
Appendix 1
In the original, each row of the appendix table shows the blurred image with its histogram and the de-blurred image with its histogram; the results of some measures on each image are:

Image   PCC          MD    MSE          PSNR      Elapsed time (s)
1       7.4777e+05   177   408.4957     22.0189    98.549830
2       3.6489e+04   227   2.0078e+03   15.1037     8.243140
3       3.9541e+04   143   551.9738     20.7116     8.826724
4       1.4403e+05   218   968.0428     18.2719    23.302129
5       2.8133e+04   219   1.6953e+03   15.8382     8.381572
6       1.4752e+05   125   1.3870e+03   16.7100    21.911811
7       4.7462e+04   232   1.0988e+03   17.725      7.260409
8       4.0075e+04   241   1.4375e+03   16.5548     7.092364
9       4.5288e+04   112   743.6509     19.4171     6.606667
10      2.2733e+06   169   182.1088     25.5275   252.169154
11      4.0770e+04   212   936.2864     18.4167    18.4167
12      3.6101e+04   157   1.0370e+03   17.9728    14.675177
13      2.9272e+05   185   447.0674     21.6271    35.282512
14      1.1643e+05   174   717.4452     19.5729    17.946415
Abstract (in Arabic)

In general, noise can be defined as unwanted information introduced into an image during transmission or acquisition, and blurring is considered a special kind of noise. Blurring effects reduce the effectiveness of vision, so removing the blur from an image facilitates its processing. The problem with de-blurring in the spatial domain is that it smooths the data and the edges. Wavelet de-blurring is the process in which the blur is removed using wavelets in the frequency domain; it therefore preserves the edges of the image, suppresses several kinds of noise, and retains the important image features. The proposed system comprises a set of steps: zooming, a Gaussian filter, a Wiener filter, blind deconvolution, the discrete wavelet transform, nearest-neighbour interpolation, the NTSC colour model, and image sharpening. Several operations were applied: zooming to enlarge the image and update its intensity values; Gaussian filtering to smooth the image; the Wiener filter algorithm and the blind deconvolution algorithm to remove some of the blur from the image; the discrete wavelet transform of the image; interpolation; conversion of the colour model from RGB to YIQ; and, finally, image sharpening. Some measures were applied to the images to test the quality of the proposed system (Pearson correlation coefficient, maximum difference, mean squared error, and peak signal-to-noise ratio). The images used in this thesis are colour images from a public dataset with different dimensions, and the proposed system is implemented using the Matlab 2018 programming language.
The proposed algorithm was applied to 150 images and showed a promising result, depending on the values of the measures and on the visibility and processing of the image, for all the blurred images.
Republic of Iraq
University of Kufa

A thesis submitted to the Council of the Faculty of Computer Science and Mathematics / University of Kufa in partial fulfillment of the requirements for the degree of Master of Science in Computer Science

By the student:

Supervised by

2020 AD / 1441 AH