
See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/351821456

Enhancement Blurred Images Based on Hybrid Method

Thesis · February 2020
DOI: 10.13140/RG.2.2.22293.45289

Author: Rasha Muthana Jameel, University of Al-Qadisiyah

All content following this page was uploaded by Rasha Muthana Jameel on 24 May 2021.


Republic of Iraq
Ministry of Higher Education and
Scientific Research
University of Kufa
Faculty of Computer Science and Mathematics
Department of Computer Science

Enhancement Blurred Images Based on Hybrid


Method

A Thesis
Submitted to the Council of the Faculty of Computer Science and
Mathematics / University of Kufa in Partial Fulfillment of the
Requirement for the Degree of Master of Science in Computer Science
By:

Rasha Muthna Jameel Al Fatlawy

Supervised by

Asst. Prof. Dr. Asaad Noori Hashim Al-shareefi

2020 A.D 1441 A.H


In the name of Allah, the Most Gracious, the Most Merciful.

"Allah will raise those who have believed among you and those who were given knowledge, by degrees. And Allah is Aware of what you do."

Allah, the Most High, the Most Great, has spoken the truth.

Surah Al-Mujadila, Verse 11


Approval of Scientific Supervisor

I certify that this thesis “Enhancement Blurred Images Based on


Hybrid Method" was prepared under my supervision at the University of
Kufa, Faculty of Computer Science and Mathematics, in partial
fulfillment of the requirements for the degree of Master of Science in
Computer Science.

Signature:
Supervisor’s Name: Asst. Prof. Dr. Asaad Noori Hashim Al-shareefi.
Degree: Asst. Prof. Dr.
Date: / /2020

In view of the available recommendations, I forward this thesis for


debate by the examining committee.

Signature:
Name: Asst. Prof. Dr. Asaad Noori Hashim Al-shareefi.
Head of the Computer Science Department, Faculty of
Computer Science and Mathematics, University of Kufa.
Date: / /2020
Certification of Linguistic Expert

I certify that I have read this thesis “Enhancement Blurred Images


Based on Hybrid Method” and corrected its grammatical mistakes;
therefore, it has become qualified for debate.

Signature:
Name: Dr. Alaa L. Alnajm
Degree: Lecturer
Address: University of Kufa, Faculty of Languages
Date: 22 / 6 /2020
Certification of Scientific Expert

I certify that I have corrected the scientific content of this thesis


“Enhancement Blurred Images Based on Hybrid Method"; therefore, it has
become qualified for debate.

Signature:
Name: Asst. Prof. Dr. Enas Hamood Al-Saadi
Degree: Assistant Professor
Address: University of Babylon / College of Education for Pure Sciences
Date: 25 / 6 /2020
Committee’s Report
We certify that we have read this thesis "Enhancement Blurred Images
Based on Hybrid Method" as an examining committee, examined the student in
its content, and found it qualified as a thesis for the degree of Master of Science in
Computer Science.

Signature:
Name: Prof. Dr. Abbas H. Hassin Alasadi
Degree: Chairman
Date: / / 2020

Signature:
Name: Asst. Prof. Dr. Wafaa Mohammed Saeed Al Hameed
Degree: Member
Date: / / 2020

Signature:
Name: Dr. Ahmed Jabbar Obaid
Degree: Member
Date: / / 2020

Signature:
Name: Asst. Prof. Dr. Asaad Noori Hashim Al-shareefi
Degree: Member & Supervisor
Date: / / 2020

Approved for the Council of the Faculty of Computer Science and


Mathematics, University of Kufa.

Signature:

Name: Prof. Dr. Kadhim B. Swadi Aljanabi

Dean of the Faculty of Computer Science and Mathematics

Date: / /2020
Dedication

I would like to dedicate this work...

To the city of science and the mercy of the worlds, Prophet

Muhammad (PBUH), Fatima Alzahraa, and the Twelve

Imams.

To my sun and my moon, who watched over me and tired

themselves for what I am now (my mother and father).

To my strength (my husband and daughters).

Rasha
Acknowledgments

In the beginning and before everything, I thank and praise
"Allah" for helping and blessing me to finish my thesis.

All thanks and appreciation are due to my supervisor
"Asst. Prof. Dr. Asaad Noori Hashim Al-shareefi" for
granting me the time, effort, and encouragement to complete
my thesis, and all thanks to Prof. Hind Rostom Mohammed
Shaban; I am very grateful to them.

I wish to express my thanks to all the teaching staff in the
Faculty of Computer Science and Mathematics. I wish them
the best of luck and success.

Finally, thanks to everyone who supported and helped me,
from friends and family.
Abstract
Generally, noise may be defined as unwanted information introduced into an
image during transmission or acquisition, and blur is considered a special type of
noise. Blur reduces the effectiveness of vision, so removing it from the image eases
further processing. The problem with blur removal in the spatial domain is that it
smooths the data and the edges. Wavelet de-blurring is the process by which blur is
removed using wavelets in the frequency domain; it is therefore used to keep the
edges of the image, suppress several types of noise, and preserve the significant
features of the image. The proposed system consists of a set of steps: zooming, a
Gaussian filter, a Wiener filter, blind deconvolution, the discrete wavelet transform,
nearest-neighbor interpolation, the YIQ color model, and image sharpening. Several
processes are applied in turn: a zoom to enlarge the image, an update of the image
intensity values, a Gaussian filter to smooth the image, the Wiener filter algorithm,
the blind deconvolution algorithm to remove some blur from the image, the discrete
wavelet transform, interpolation, conversion of the image color model from RGB to
YIQ, and finally image sharpening. Several measures are then applied to the images
to test the quality of the proposed system: the Pearson correlation coefficient,
maximum difference, mean square error, and peak signal-to-noise ratio. The images
used in this thesis are public-dataset color images with different dimensions, and the
proposed system is implemented in the MATLAB 2018 programming language.

The suggested algorithm was applied to 150 images and showed promising
results, judged both by the measure values and by visual inspection of the images,
and it treated all of the blurred images.
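The four quality measures named above have simple closed forms. The following is a minimal pure-Python sketch for illustration only (the thesis itself is implemented in MATLAB 2018, and the two small sample "images" below are invented for demonstration):

```python
import math

def mse(a, b):
    """Mean Square Error between two equal-sized grayscale images (lists of rows)."""
    n = len(a) * len(a[0])
    return sum((pa - pb) ** 2 for ra, rb in zip(a, b) for pa, pb in zip(ra, rb)) / n

def psnr(a, b, peak=255):
    """Peak Signal to Noise Ratio in dB; higher means the two images are closer."""
    e = mse(a, b)
    return float('inf') if e == 0 else 10 * math.log10(peak ** 2 / e)

def max_difference(a, b):
    """Maximum absolute pixel difference (MD)."""
    return max(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def pearson(a, b):
    """Pearson Correlation Coefficient (PCC) over the two pixel populations."""
    xs = [p for r in a for p in r]
    ys = [p for r in b for p in r]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den

original = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
restored = [[12, 20, 28], [40, 52, 60], [70, 78, 90]]
print(mse(original, restored))             # small average squared pixel error
print(psnr(original, restored))            # high PSNR for a good restoration
print(max_difference(original, restored))
print(pearson(original, restored))         # close to 1 when images agree
```

A restoration is judged good when MSE and MD are low while PSNR and PCC are high.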

List of Abbreviations

Abbreviation Meaning
1D One dimension image
2D Two dimension image
AV Average Difference
BMP Bit map picture
BPG Better Portable Graphics
CMY Cyan, Magenta and Yellow color model
Deconvblind blind deconvolution
Deconvwnr Wiener deconvolution
DWT Discrete Wavelet Transform
GIF Graphics Interchange Format
HD the decomposition high-pass filter
HR the reconstruction high-pass filter
HSV Hue, Saturation and Value
Imfill Image fill region
JPEG 2000 JPEG 2000 is an enhancement of the JPEG standard
JPEG Joint Photographic Experts Group
LD the decomposition low-pass filter
LR the reconstruction low-pass filter
MD Maximum Difference
MSE Mean Square Error

NTSC National Television System Committee


PCC Pearson Correlation Coefficient
PNG Portable network graphics
PSF Point Spread Function
PSNR Peak Signal to Noise Ratio
Rbio Reverse biorthogonal wavelet
RGB Red Green and Blue color image model
TIF/TIFF Tagged image (file) format

Table of Contents

1 CHAPTER ONE GENERAL INTRODUCTION .................................... 1

1.1 Introduction .............................................................................................. 1

1.2 Related Works…………………………………………………………..1

1.3 Problem Statement ................................................................................... 9

1.4 Aims of Thesis ....................................................................................... 10

1.5 Thesis Outline ........................................................................................ 10

2 CHAPTER TWO THEORETICAL BACKGROUND .......................... 12

2.1 Introduction ............................................................................................ 12

2.2 Wavelet Transform................................................................................. 12

2.3 Discrete Wavelet Transform (DWT) ..................................................... 15

2.4 Reverse Biorthogonal Wavelet .............................................................. 17

2.5 Blurring ............................................................................................ 18

2.6 Types of Blur.................................................................................... 19

2.6.1 Average Blur………………………………………………………19

2.6.2 Gaussian Blur .................................................................................. 20

2.6.3 Motion Blur……………………………………………………….20

2.6.4 Out Of Focus……………………………………………………...20

2.6.5 Box Blur .......................................................................................... 21

2.7 De-blurring Techniques ……………………………………………….21


2.7.1 Richardson-Lucy Iterative Algorithm …………………..…….…22

2.7.2 Van Cittert Iterative Algorithm ………………………..…………22

2.7.3 Landweber Iterative Algorithm ………………….……………….23

2.7.4 Poisson Map Iterative Algorithm ............................................…...23

2.7.5 Laplacian Sharpening Filters ………………………….………….24

2.7.6 Blind Image Deconvolution ………………………………………24

2.7.7 Non-Blind Deconvolution ………………………………………..25

2.7.8 Wiener Deconvolution Filter .......................................................... 25

2.8 Interpolation ……………………………………………………….25

2.9 Preprocessing ................................................................................... 26

2.10 Image Enhancement ......................................................................... 26

2.11 Properties of Gaussian ..................................................................... 28

2.11.1 Scaling .......................................................................................... 29

2.11.2 Separability ................................................................................... 30

2.11.3 Symmetry ...................................................................................... 31

2.12 Color Model .................................................................................... 32

2.12.1 RGB Color Model ............................................................................. 34

2.12.2 YIQ (NTSC) Color Model ............................................................ 34

2.12.3 CIE L*a*b* (CIELAB) color space .............................................. 35

2.13 Histogram ......................................................................................... 36

2.14 Types of Quality Measures .............................................................. 37

3 CHAPTER THREE THE PROPOSED SYSTEM ................................... 40

3.1 Introduction ……………………………………………………………40

3.2 Proposed System Steps ……………………………..…………………42

3.2.1 Input Image ......................................................................................... 44

3.2.2 Enlarge image.................................................................................. 44

3.2.3 Gaussian Smoothing ....................................................................... 44

3.2.4 Blind Deconvolution ....................................................................... 45

3.2.5 Split the Layers of Color Image ...................................................... 46

3.2.6 Discrete Wavelet Transform ........................................................... 46

3.2.7 Nearest interpolation ....................................................................... 47

3.2.8 RGB to YIQ Color model ............................................................... 48

3.2.9 Sharpen Image and Unsharp Mask ................................................. 49

3.3 Deburred Image .................................................................................. 51

4 CHAPTER FOUR THE EXPERIMENTAL RESULTS ......................... 52

4.1 Introduction ...................................................................................... 52

4.2 Cases Study of De-blur Image ......................................................... 52

4.3 Performance of the Proposed System .............................................. 52

5 CHAPTER FIVE CONCLUSION AND FUTURE WORK ........................ 60

References .................................................................................................... 62

List of Figures

2.1 2D wavelet transform, (a) Original image (b) Wavelet transform of one-
level…………………………………..……………………………...………13

2.2 Decomposition of Image at 3rd Level Using Wavelet…………………..15

2.3 2nd Level of Decomposition of an Image ....................................................17

2.4 Reverse Biorthogonal rbio3.9 Wavelet …………………….…………..18

2.5 Different types of blur...........................................................................……19

2.6 Different Gamma Correction…………………………………….………27

2.7 Histogram of gray image……………………………………….………27

2.8 1-D Gaussian distribution with mean 0 and σ = 1 ....................................28

2.9 2-D Gaussian distribution with mean (0, 0) and σ = 1 ..............................29

2.10 Convolving Gaussian with Itself…………….. .............................…....30

2.11 Gaussian x and y direction .......................................................…........31

2.12 Separability Gaussian filter……………………………………………..31

2.13 Color mixtures (a) Additive and (b) subtractive…………………….…33

2.14 RGB color cube ....................................................................................34

2.15 CIE L*a*b* color model when lightness is 75, 50, 25……………....35

List of Tables

Table 1.1: Related Work. ..................................................................................................... 2


Table 4.1: Results of the proposed system. ........................................................ 53

Chapter
One
General Introduction
Chapter One General Introduction

1 CHAPTER ONE
GENERAL INTRODUCTION

1.1 Introduction

Image enhancement differs from image restoration. Image enhancement is

a process that takes an image and makes it more pleasing to the human eye. Motion
blur is a main problem in computer vision because it affects image quality.
Image restoration is the task of improving the quality of an image by
estimating the noise and blur in it. The distortion is modeled as a
degradation function or a blur, and an image is reconstructed from noise by filters
such as the median and mean filters. An image destroyed by blurring is improved
using techniques such as the inverse kernel, the Wiener filter, and blind
deconvolution. Image de-blurring is performed by frequency-domain filters, while
the de-noising process is achieved by spatial-domain filters that apply small masks
in the spatial domain [1]. There are different types of blur in digital images, such as
Gaussian, median, box, and motion blur. Image de-blurring can be
interpreted as a process that removes blur from images using one of the de-blurring
methods. Many techniques are used to restore images and to remove or reduce blur
from images, such as the regularized filter, the Wiener filter, blind deconvolution,
and the Lucy-Richardson algorithm [2].
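The mean and median filters mentioned above can be sketched in a few lines. This is a minimal pure-Python 3x3 sliding-window illustration only (the thesis implementation is in MATLAB; this toy version simply leaves border pixels unchanged):

```python
from statistics import median

def filter3x3(img, reducer):
    """Apply a 3x3 window reducer (e.g. mean or median) to each interior pixel."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # copy; border pixels keep their original values
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = [img[i + di][j + dj] for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = reducer(window)
    return out

mean = lambda w: sum(w) / len(w)

# A flat patch with one impulse ("salt") noise pixel in the centre.
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]

print(filter3x3(noisy, median))  # the median suppresses the outlier completely
print(filter3x3(noisy, mean))    # the mean only attenuates and spreads it
```

This contrast is why the median filter is preferred for impulse noise, while the mean filter smooths everything, including edges.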

1.2 Related Works


The following Table (1.1) shows the related works, selected for their
proximity to this work.


Table (1.1): Related Works.

[3] Whyte Oliver et al., 2014.
Technique: They addressed the problem of deblurring images degraded by camera-shake blur and saturated (over-exposed) pixels. Saturated pixels violate the common assumption that the image-formation process is linear, and often cause ringing in deblurred outputs. They provided an analysis of ringing in general and showed that, in order to prevent ringing, it is insufficient to simply discard saturated pixels: even when saturated pixels are removed, ringing is caused by attempting to estimate the values of latent pixels that are brighter than the sensor's maximum output. Estimating these latent pixels is likely to cause large errors, and these errors propagate across the rest of the image in the form of ringing. They proposed a new deblurring algorithm that locates these error-prone bright pixels in the latent sharp image and, by decoupling them from the remainder of the latent image, greatly reduces ringing.
Advantage: Results are shown for non-blind deblurring of real photographs containing saturated regions, demonstrating improved deblurred image quality compared to previous work.

[4] Sonia and Lalit, 2014.
Technique: Focused on image restoration, which is sometimes referred to as image deblurring or image deconvolution. Image restoration is concerned with the reconstruction or estimation of the blur parameters of the uncorrupted image from a blurred and noisy one. The goal of blur identification is to estimate the attributes of the imperfect imaging system from the observed degraded image itself, prior to the restoration process. The blind deconvolution algorithm can be used effectively when no information about the blurring and noise is known; the algorithm restores both the image and the point spread function (PSF).
Advantage: They showed an effective blind deconvolution algorithm for image restoration, which recovers a sharp version of the blurred image when the blur kernel is unknown.

[5] Biswas Prodip et al., 2015.
Technique: Presented how to deblur images using a Wiener filter. Basically, the Wiener filter is used to produce an estimate of a desired or target random process by linear time-invariant filtering of an observed noisy process, assuming known stationary signal and noise spectra and additive noise.
Advantage: The Wiener filter minimizes the mean square error between the estimated random process and the desired process. It is a very important and widely used process in which images are processed to retrieve information that is not visible to the naked eye.

[6] Kumari Neeraj, 2015.
Technique: The motivation to use wavelets as a possible alternative is to explore new ways to reduce computational complexity and to achieve better noise-reduction performance. Wavelets provide a powerful tool to represent signals. Wavelet de-noising attempts to remove the noise present in the signal while preserving the signal's characteristics. It involves three steps: a forward wavelet transform, thresholding, and an inverse wavelet transform, where the data are modeled as observations of a signal contaminated with additive Gaussian noise. The wavelet de-noising technique is called thresholding; thresholding is a simple and efficient technique used for signal and image de-noising. In this work, wavelets are used as the de-noising algorithm, and the performance of the Haar, reverse biorthogonal, and biorthogonal wavelets is experimentally evaluated.
Advantage: In this work, the reverse biorthogonal and biorthogonal wavelets prove superior to the Haar wavelet for reducing noise from a speech signal.

[7] Ruikar Sachin and Rohit Kabade, 2016.
Technique: Introduced an image de-blurring process used for the removal of noise, as well as blur, from an image in the frequency domain. The wavelet transform is applied to delete the low-frequency noise components in the degraded image, and various thresholding techniques are used to obtain better results. These thresholding techniques are applied to both color and grey-level images, and the PSNR parameter is considered for comparison between the blurred and restored images.
Advantage: It preserves the important information of the image while retrieving it. This work analyzed the various techniques of image deblurring and concluded that the image can be further enhanced by using wavelet transforms.

[8] Mastriani Mario, 2016.
Technique: Presented a new deblurring procedure for images provided by low-resolution synthetic aperture radar (SAR), or simply by multimedia, in the presence of multiplicative (speckle) or additive noise, respectively. The proposed method is defined as a two-step process: first, an original technique is used for noise reduction in the wavelet domain; then, a Kohonen self-organizing map (SOM) is learned directly on the de-noised image to remove the blur from it.
Advantage: This technique has been successfully applied to real SAR images, and the simulation results are presented to demonstrate the effectiveness of the proposed algorithms.

[9] Varsha Sharma and Ajay Goyal, 2017.
Technique: DWT (Haar).
Advantage: This work applies the Haar wavelet transform for filtering the image in order to reconstruct an image that contains noise and blur. The results of this paper show that the proposed method gives better results than the previous methods; the PSNR, mean square error, and execution time all improve.

[10] Piyush Joshi and Surya Prakash, 2018.
Technique: Used the continuous wavelet transform (with the Mexican hat as the mother wavelet) in order to assess the quality of the image.
Advantage: It is observed from the obtained results that the proposed technique outperforms the latest state-of-the-art techniques.

[11] Xiaoyang Li (Rebecca), 2018.
Technique: Wavelet transform.
Advantage: The wavelet transform can restore a high-resolution image from a low-resolution noised signal, and it is a way to analyze the frequency components of signals. Compared to the Fourier transform, the wavelet transform offers a good time-frequency representation of the signals, locating both time and frequency.

[12] Piyush Joshi et al., 2018.
Technique: Continuous wavelet transform. Proposed a no-reference quality assessment technique to assess the quality of images degraded by blur and noise. The technique is motivated by the fact that when distortion occurs in a natural image, its naturalness gets disturbed. This disturbance is analyzed using the continuous wavelet transform in order to assess the quality of the image. The proposed image quality estimation technique is free from training and prior knowledge of the distortion (noise or blur) present in the image. The technique was evaluated on the LIVE and CSIQ databases, and the obtained results are found to be very similar to the subjective scores provided by the databases.
Advantage: It is observed from the obtained results that the proposed technique outperforms the latest state-of-the-art techniques.

1.3 Problem Statement

How to deal with a degraded image resulting from blind blur introduced during
transmission or acquisition, without any information about the type or degree of
the blur. The blur may also come from lens aberrations, relative motion between
the sensor and the scene (including moving objects), out-of-focus objects, use of a
wide-angle lens, environmental disturbance, a very short exposure duration that
decreases the number of captured photons, motion during the image-capture
procedure, a long exposure duration, or hand shake while the image is captured.

1.4 Aims of Thesis

1. The primary purpose of removing image blur is to retain all the important
   visual information contained in the original images.
2. De-blurring is also used to increase the spatial resolution of the original
   image and to improve image properties, increasing quality and clarity.
3. The objective of removing blur from an image by using wavelets is to
   provide an effective way to reduce blur while preserving the information
   and important details of the image.
4. A hybrid method for removing blur from an image, relying on wavelets,
   blind deconvolution, and the imsharpen filter, is used to obtain an image
   that is more informative and loses less important information than the
   input image.
5. Measurements such as average difference, peak signal-to-noise ratio,
   maximum difference, and mean square error verify the high quality of the
   output image.

1.5 Thesis Outline

In addition to the subjects explained previously, the other chapters of this
thesis are as follows:
 Chapter Two: "Theoretical Background" includes blur, removal
methods, the discrete wavelet transform, and the wavelet de-blurring process.
 Chapter Three: "The Proposed System" contains the procedural steps of
the algorithms together with the proposed method.
 Chapter Four: "The Experimental Results" displays the results obtained
by the proposed system.
 Chapter Five: "Conclusions and Future Work" discusses the results as well
as the future work.

Chapter Two
Theoretical Background

Chapter Two Theoretical Background

2 CHAPTER TWO
THEORETICAL BACKGROUND
2.1 Introduction

This chapter deals with blur definitions, some blur types, and the techniques
used to remove blur in the spatial and frequency domains. It also presents the
reverse biorthogonal discrete wavelet transform (rbio3.9), the Gaussian smoothing
filter, and their applications in image processing. It introduces image enhancement,
image restoration, and the techniques used to remove blur. Finally, it introduces some
measures used in image processing to compare two images (the input blurred
image and the output de-blurred image).

2.2 Wavelet Transform

The wavelet transform is a dominant mathematical tool for the classification of non-
stationary signals. It has the potential to decompose source signals with varying
degrees of time and frequency, which are the characteristics of the defect
mechanisms. The wavelet transform converts the signal into different forms of
wavelets to extract the hidden features. It has the additional advantage of filtering the
raw waveform and compressing large data without loss of information. Wavelets
contain both the time and the frequency information of the signal. Wavelet analysis
decomposes the signal into scaled and translated versions of a base wavelet. The
wavelet transform can be realized as the Continuous Wavelet Transform (CWT) or
the Discrete Wavelet Transform (DWT) [13]. CWT takes a lot of computation time,
generates a large amount of redundant information, and requires more memory
space. Since DWT does not have these disadvantages, it is preferred over CWT. In
this study, DWT is used to decompose the signals into wavelets. Many of the ideas
behind the wavelet transform have been in existence for a long time. However,
DWT is a powerful tool that has acquired


widespread success in the domain of image processing. The samples in the low-pass
sub-band are called the scaling coefficients, with the low-pass filter being the scaling
filter. The scaling coefficients are called "average", "approximation", or "smooth"
coefficients, while the samples in the high-pass sub-band are called the wavelet
coefficients, with the high-pass filter being the wavelet filter. The wavelet
coefficients are called "detail" or "difference" coefficients [13, 14]. The discrete
wavelet transform is simple to execute and decreases the computation time. First,
wavelet decomposition is applied to the rows of the image. Then, wavelet
decomposition is applied to the columns. This process is repeated for the required
number of levels. Figure (2.1) clarifies a one-level discrete wavelet transform [15].

Figure (2.1): 2D wavelet transform: (a) original image; (b) one-level wavelet
transform.
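The split into "approximation" and "detail" coefficients described above can be made concrete with the simplest wavelet, the Haar wavelet. The following pure-Python sketch is an illustration only (the thesis itself works with rbio3.9 in MATLAB): one level of a 1-D Haar DWT by pairwise averaging and differencing, plus the inverse transform showing perfect reconstruction.

```python
import math

def haar_dwt_1d(signal):
    """One level of the 1-D Haar DWT: scaling (approximation) and wavelet (detail) coefficients."""
    s = 1 / math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt_1d(approx, detail):
    """Inverse transform: reconstructs the original signal exactly."""
    s = 1 / math.sqrt(2)
    out = []
    for a_k, d_k in zip(approx, detail):
        out.extend([(a_k + d_k) * s, (a_k - d_k) * s])
    return out

x = [4, 6, 10, 12, 8, 8, 0, 2]   # even-length demo signal
a, d = haar_dwt_1d(x)
print(a)                  # smooth "average" coefficients
print(d)                  # "difference" coefficients; zero where the signal is flat
print(haar_idwt_1d(a, d)) # perfect reconstruction of x
```

Note how the detail coefficient for the flat pair (8, 8) is exactly zero: smooth regions concentrate their energy in the approximation band, which is what makes thresholding the detail band a de-noising strategy.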

Wavelet transforms are a mathematical means for performing signal analysis when
signal frequency varies over time. For certain classes of signals and images, wavelet
analysis provides more precise information about signal data than other signal
analysis techniques. Common applications of wavelet transforms include:

- Speech and audio recognition.



- Image and video processing.


- Biomedical imaging.
- 1D and 2D applications in communications and geophysics.
- Protein and DNA analyses.
- Electrocardiogram (ECG) analysis: heart rate and blood pressure.
- Computer graphics.

The advantages of Wavelet transform

- It is useful for compression.


- It is useful for storing images in less memory.
- It is useful for transmitting images faster.
- It is useful for reliable transmission.
- It is also useful for cleaning images i.e. reducing undesirable noise and
blurring.

Wavelet coefficients of vibration signals were extracted using Haar, Daubechies
(db11, db12, db13 and db14), Symlets (sym7, sym8, and sym9), Coiflets (coif3,
coif4 and coif5), Biorthogonal (bior 3.9, bior 4.4, bior 5.5 and bior 6.8) and Reverse
biorthogonal (rbio 4.4, rbio 5.5, rbio 3.9 and rbio 6.8) wavelets.

Wavelet-based encoding is robust to transmission errors and decoding. Wavelets
are a way to analyze signals, such as images, in order of increasing resolution. The
wavelet transform can be applied to decompose signals into wavelet components.
Wavelets have the major advantage of being able to distinguish fine time details. A
wavelet is a small wave that concentrates its energy in time. All of these functions
are formed from a single function, called the mother wavelet, through dilation and
translation in the time domain. Wavelets produce a natural multi-resolution
representation of each image, including its important edges. A wavelet is localized
in scale and time and generally has an irregular shape [16].

2.3 Discrete Wavelet Transform (DWT)

The 2D DWT performs a single-level analysis and calculates

the approximation coefficient matrix and the detail coefficient matrices based on the
selected wavelet filters (low-pass and high-pass) [17]. In the 2D wavelet
transform, the digital data decompose into four sub-bands: the discrete wavelet
transform converts the original image into the four frequency sub-bands LL, HL, LH,
and HH, as shown in Figure (2.2) and Figure (2.3). The LL sub-band carries the
approximation details. The LH sub-band holds the vertical details of the image.
The HL sub-band includes the horizontal details of the image, while the HH sub-band
contains the diagonal elements of the image. The LL sub-band, which approximates
the digital image, can itself be decomposed by applying the discrete wavelet
transform again to obtain any level of decomposition, creating more than four
sub-bands. Thus, multiple levels of decomposition can be obtained by repeatedly
applying the wavelet transform to the approximation (LL) portion of the digital
content, as desired.

Figure (2.2): Decomposition of Image at 3rd Level Using Wavelet.


These sub-bands are the decomposition of the original image. The LL sub-band
holds an approximate version of the image, LH contains the vertical elements of the
image, HL contains the horizontal elements, and HH contains the diagonal
elements; the image information is thus stored in decomposed form in these
sub-bands. The wavelet transform deals with both spatial correlation and data
redundancy through contractions (or dilations) and translations of the mother
wavelet over the input data. It can be performed at different levels according to the
required level of detail, allowing gradual refinement without additional storage.
The wavelet transform has acquired wide acceptance in signal processing and image
compression. Due to their inherently multi-resolution nature, wavelet coding
schemes are particularly suitable for applications where scalability and graceful
degradation are important [18]. The approximation coefficients and detail
coefficients of an image can be computed according to Equations (2.1) and (2.2),
respectively.

𝒂(𝒓, 𝒄) = ∑𝒌 𝒙(𝒓, 𝒌)𝒈(𝒄 − 𝒌) (2.1)

𝒅(𝒓, 𝒄) = ∑𝒌 𝒙(𝒓, 𝒌)𝒉(𝒄 − 𝒌) (2.2)

The approximation coefficient is calculated by convolving the original image with the low-pass filter, while the detail coefficient is computed by convolving the original image with the high-pass filter.
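As an illustrative sketch (pure NumPy, not the thesis's implementation), Equations (2.1) and (2.2) can be applied along the rows of an image, followed by downsampling by two. The Haar analysis filters are used here only because they are short; the rbio3.9 filters are longer but are applied in exactly the same way. The image values are made up for the example.

```python
import numpy as np

def analyse_rows(x, g, h):
    """Eq. (2.1)/(2.2) along each row: convolve with the low-pass filter g and
    the high-pass filter h, then keep every second sample (downsampling by 2)."""
    a = np.array([np.convolve(row, g)[1::2] for row in x])
    d = np.array([np.convolve(row, h)[1::2] for row in x])
    return a, d

# Haar analysis filters; the rbio3.9 filters are longer but used identically
g = np.array([1.0, 1.0]) / np.sqrt(2)    # low pass  -> approximation
h = np.array([1.0, -1.0]) / np.sqrt(2)   # high pass -> detail

img = np.array([[4.0, 4.0, 6.0, 6.0],
                [4.0, 4.0, 6.0, 6.0]])
a, d = analyse_rows(img, g, h)           # flat pairs give zero detail
```

Applying the same pass to the columns of `a` and `d` yields the four sub-bands LL, LH, HL, and HH described above.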


Figure (2.3): 2nd Level of Decomposition of an Image

2.4 Reverse Biorthogonal Wavelet


This family of wavelets exhibits the linear-phase property, which is required for signal and image reconstruction. It uses two functions, one for decomposition (on the analysis side) and another for reconstruction (on the synthesis side), in place of a single one, from which interesting properties are derived. The biorthogonal and reverse biorthogonal wavelet families are biorthogonal and symmetric, and the symmetry property ensures that they have linear-phase characteristics. The rbio3.9 wavelet is a good member of this family, distinguished by its speed and memory efficiency; it has more filter coefficients than most other members of the wavelet family [18].


Figure (2.4): Reverse Biorthogonal rbio3.9 Wavelet

Figure (2.4) shows the shape of rbio3.9, whose reconstruction and decomposition functions are derived from the scaling filter. This wavelet has vanishing moments on the decomposition (analysis) side and vanishing moments on the reconstruction (synthesis) side. It is referred to by the prefix (rbio).

2.5 Blurring
Blur can be defined as an un-sharp image region that arises through camera motion, object motion, or the image acquisition process. There are many types of blur, as shown in Figure (2.5). A blurred image is formed in the spatial domain according to Equation (2.3):

𝒅(𝒙, 𝒚) = 𝒉(𝒙, 𝒚) ∗ 𝒇(𝒙, 𝒚) (2.3)

Where d(x, y) refers to the image degraded by blur, h(x, y) is the blur kernel, f(x, y) refers to the input image without blur, and * denotes convolution.

To construct the blurred image in the frequency domain, the discrete Fourier transform is applied to the previous equation; convolution then becomes point-wise multiplication:

D(X, Y) = H(X, Y) · F(X, Y)        (2.4)

Motion blur occurs due to camera motion or object motion during the camera exposure time. Defocus blur results from photographing a scene that is out of the camera's focus. Gaussian blur is caused by atmospheric disturbance or lens errors. Shake blur can happen, for example, when a person's hands shake while taking a picture [19, 20].

Figure (2.5): Different type of blur.

Captured images are more or less blurred because of the large amount of overlap in the natural world as well as in the camera, yet the registered image must be of good quality. When a blurry image is used as a source of information in an application, it is necessary to de-blur it. De-blurring an image is applied to create a sharp image and to restore as much of the detailed image information as possible [21].

2.6 Types of Blur


There are many types of blur in digital images, such as average blur, Gaussian blur, and motion blur [5, 6].

2.6.1 Average Blur


This type spreads in both the vertical and horizontal directions, as in the following equation:

𝑹 = √ 𝒉𝒓𝟐 + 𝒗𝒓𝟐 (2.5)


Where hr is the blur size in the horizontal direction, vr is the blur size in the vertical direction, and R is the radius of the circular average blur.

2.6.2 Gaussian Blur

In this blur type, pixel weights are unequal: the blur is highest at the center and decreases toward the edges following a bell-shaped curve. To control the blur effect, a Gaussian blur is added to the image; it depends on the kernel size and on alpha (the standard deviation). It is widely used in graphics software to reduce image detail. The function is [24]:

G(x, y) = ( 1 / (2πσ²) ) e^( −(x² + y²) / (2σ²) )        (2.6)

Where x is the distance on the x-axis, y is the distance on the y-axis, and σ represents the standard deviation.
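A minimal sketch of how the discrete Gaussian kernel can be built from Equation (2.6); this is an illustration in NumPy, not the thesis's implementation, and the size and sigma values are arbitrary. The kernel is normalized so its weights sum to 1, which keeps the overall image brightness unchanged after blurring.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Sample Eq. (2.6) on a size x size grid centred at the origin,
    then normalise so the weights sum to 1 (the blur preserves brightness)."""
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return k / k.sum()

k = gaussian_kernel(5, 1.0)   # weights peak at the centre and fall off radially
```

Convolving an image with `k` produces the Gaussian blur described above.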

2.6.3 Motion Blur

Motion blur occurs when there is motion between the capturing device and the object. It can take many forms, such as rotation, translation, a sudden change of scale, or a combination of these. Motion is controlled by an angle (0 to 360 degrees, or −90 to +90 degrees) and a length in pixel values (0 to 999), depending on the software used [25].

2.6.4 Out of Focus

This blur arises from wrong focus: a defocused image is out of focus, so its detailed information is unclear. Equation (2.7) represents this function [24]:


d(x, y) = 0             if √(x² + y²) > r
d(x, y) = 1 / (πr²)     if √(x² + y²) ≤ r        (2.7)

Where r refers to the radius of the defocus disk. The PSF size is calculated from r as k = 2r + 1, and the kernel size is k × k.

2.6.5 Box Blur

A box blur (also known as a box linear filter) is a spatial-domain linear filter in which each pixel in the resulting image has a value equal to the average value of its neighboring pixels in the input image. It is a form of low-pass ("blurring") filter. A 3 by 3 box blur can be written as the matrix [26]:

        [1 1 1]
(1/9) · [1 1 1]        (2.8)
        [1 1 1]

In the frequency domain, a box blur has zeros and negative components. That is, a sine wave with a period equal to the size of the box is blurred away entirely, and wavelengths shorter than the size of the box may be phase-reversed, as seen when two bokeh circles touch and form a bright spot where there would be a dark spot between two bright spots in the original image.
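The 3×3 averaging of Equation (2.8) can be sketched directly in NumPy (an illustration only; border handling here simply leaves edge pixels untouched, which is one of several possible choices):

```python
import numpy as np

def box_blur3(img):
    """Apply the 3x3 box blur of Eq. (2.8): every interior pixel becomes
    the mean of its 3x3 neighbourhood (edge pixels are left unchanged)."""
    out = img.astype(float).copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = img[i-1:i+2, j-1:j+2].mean()
    return out

img = np.array([[0, 0, 0],
                [0, 9, 0],
                [0, 0, 0]], dtype=float)
blurred = box_blur3(img)   # the isolated bright pixel is spread out: 9/9 = 1
```

A constant image passes through unchanged, which confirms that the kernel weights sum to one.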

2.7 De-blurring Techniques

Image de-blurring uses the point spread function (PSF) to de-convolve the blurred image [36]. Deconvolution is categorized into two kinds: blind deconvolution and non-blind deconvolution. Blind deconvolution uses only the blurry image for de-blurring, while non-blind deconvolution uses the blurry image together with a PSF. Blind deconvolution is more complicated and takes more time than the non-blind kind, because it estimates the point spread function at each iteration [21].

2.7.1 Richardson-Lucy Iterative Algorithm


It is the most widely used of the de-blurring algorithms, for several reasons: it does not depend on the type of noise, and it is an iterative algorithm. Its update formula is:

f_{n+1} = f_n · H( g / (H f_n) )        (2.9)

Where f_{n+1} refers to the new approximation of the prior f_n, g is the captured blurry image, n counts the repetitions, H refers to the PSF, and in the first iteration f_n is the blurry image g itself. The algorithm reduces the difference between the blurred image and the predicted image under Poisson statistics [25].
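A hedged 1-D sketch of the iteration in Equation (2.9), written in matrix form with the PSF expressed as a small circulant matrix H whose columns sum to 1 (the signal, kernel, and iteration count are illustrative, and a tiny epsilon guards the 0/0 pixels):

```python
import numpy as np

# Circulant blur matrix H for the 1-D kernel [0.25, 0.5, 0.25]; columns sum to 1
n = 8
H = np.zeros((n, n))
for i in range(n):
    H[i, (i - 1) % n] = 0.25
    H[i, i] = 0.5
    H[i, (i + 1) % n] = 0.25

f_true = np.array([0, 0, 5, 9, 2, 0, 0, 0], dtype=float)
g = H @ f_true                 # noiseless blurred observation

f = g.copy()                   # Eq. (2.9): the first estimate is the blurry image itself
for _ in range(200):
    ratio = g / (H @ f + 1e-12)        # small epsilon guards the 0/0 pixels
    f = f * (H.T @ ratio)              # multiplicative Richardson-Lucy update
```

The multiplicative form keeps the estimate non-negative at every iteration, which is one reason the algorithm suits Poisson-distributed image data.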

2.7.2 Van Cittert Iterative Algorithm


An iterative algorithm to eliminate image blur. It reduces the disparity between the acquired image and the re-blurred estimate by adding the residual back to the current estimate. Its mathematical formula is as follows [27]:

𝒇𝒏+𝟏 = 𝒇𝒏 + (𝒈 − 𝑯𝒇𝒏 ) (2.10)


Where f_{n+1} refers to the new approximation of the prior f_n, g is the captured blurry image, n counts the repetitions, H is the PSF, and in the first iteration f_n is the blurry image g itself. One advantage of this algorithm is its easy calculations. Its limitations are sensitivity to noise in the image and instability after a certain number of iterations, after which the image can start to look faint.


2.7.3 Landweber Iterative Algorithm


This is an improved version of the Van Cittert algorithm, and likewise an iterative algorithm. With the Landweber iteration, more stable and reliable results can be obtained when a greater number of iterations is performed. The algorithm's equation is [28]:

𝒇𝒏+𝟏 = 𝒇𝒏 + 𝜷𝑯(𝒈 − 𝑯𝒇𝒏 ) (2.11)

Where f_{n+1} refers to the new approximation of the prior f_n, g is the blurry image, n is the number of iterations, H is the PSF (the blurring function), β is a constant that controls the amount of sharpening, and in the first iteration f_n is identical to the blurred image g. The disadvantage of this algorithm is that it takes extra time.
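A hedged 1-D sketch of Equation (2.11), again with the PSF written as a small circulant matrix H (signal, kernel, β, and iteration count are all illustrative; β = 1 is stable here because the largest eigenvalue of HᵀH is 1 for this kernel):

```python
import numpy as np

# Small circulant blur matrix H for the 1-D kernel [0.25, 0.5, 0.25]
n = 8
H = np.zeros((n, n))
for i in range(n):
    H[i, (i - 1) % n] = 0.25
    H[i, i] = 0.5
    H[i, (i + 1) % n] = 0.25

f_true = np.array([0, 1, 5, 9, 2, 1, 0, 0], dtype=float)
g = H @ f_true                 # noiseless blurred observation

beta = 1.0                     # stable: the largest eigenvalue of H'H is 1 here
f = g.copy()                   # first iteration starts from the blurred image
for _ in range(100):
    f = f + beta * (H.T @ (g - H @ f))   # Eq. (2.11): additive Landweber update
```

Unlike the multiplicative Richardson-Lucy update, this additive correction can produce negative pixel values, which is one reason β must be kept small enough for stability.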

2.7.4 Poisson Map Iterative Algorithm


An iterative algorithm, similar to the Richardson-Lucy algorithm. The difference between the two is that the Poisson MAP employs a formulation in exponential form. The formula for this algorithm is given below [29]:

f_{n+1} = f_n · e^( H( g / (H f_n) ) − 1 )        (2.12)

Where f_{n+1} is the new approximation of the prior f_n, g is the captured blurry image, n is the number of iterations, H is the PSF, and from the first iteration the value of f_n is the blurry image g itself. Its main limitation is the complex calculation required by the exponential function, which makes the algorithm slow.


2.7.5 Laplacian Sharpening Filters


There are several filters used to sharpen an image; a common one is the Laplacian filter, and image sharpening is a form of image de-blurring. The Laplacian filter is a 3×3 matrix with three basic variants, whose center coefficients are −4, −8 and 9. The following are the forms of the Laplacian masks:

    [ 0  1  0]   [ 1  1  1]   [−1 −1 −1]
L = [ 1 −4  1] , [ 1 −8  1] , [−1  9 −1]
    [ 0  1  0]   [ 1  1  1]   [−1 −1 −1]

The formula is [30]:

𝑭 = 𝑰 − [𝑰⨂𝐋𝐊] (2.13)

Where F is the retrieved image, I is the image corrupted by blur, LK is the Laplacian mask, and ⨂ represents the convolution operation. This filter is useful for increasing the visibility of images and takes a minimal amount of time to compute, but it is not iterative.
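A minimal NumPy sketch of Equation (2.13) using the first mask above (illustrative only; border pixels are simply copied, and the test image is made up):

```python
import numpy as np

# Laplacian mask with centre -4 (first of the masks shown above)
L = np.array([[0.,  1., 0.],
              [1., -4., 1.],
              [0.,  1., 0.]])

def sharpen(img):
    """Eq. (2.13): F = I - (I convolved with L); border pixels are copied unchanged."""
    out = img.astype(float).copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            # L is symmetric, so this correlation equals the convolution
            lap = (img[i-1:i+2, j-1:j+2] * L).sum()
            out[i, j] = img[i, j] - lap
    return out

flat = np.full((4, 4), 7.0)        # constant region: Laplacian is zero, image unchanged
bump = np.zeros((3, 3)); bump[1, 1] = 10.0   # isolated peak gets amplified
```

Because the Laplacian of a flat region is zero, smooth areas pass through unchanged while edges and peaks are boosted, which is exactly the sharpening behaviour described above.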

2.7.6 Blind Image Deconvolution


Image de-blurring includes blind image deconvolution, which jointly estimates the clear image and the blur kernel [31]. This is an ill-posed problem due to the loss of information about both the image and the blurring process [32]. Blind deconvolution, as the name indicates, works blindly, with no information about the point spread function. The PSF describes a point input, represented as a single pixel in the "ideal" image, which is produced as something other than a single pixel in the "real" image [33, 34]. There are two approaches to blind deconvolution: projection based and maximum-likelihood restoration based. The former restores the true image and the PSF by making an initial estimate; the technique is cyclic in nature, and this cyclic process is repeated until a predefined convergence criterion is met. The advantage of this technique is that it is insensitive to noise. The latter approach involves the simultaneous evaluation of the recovered image and the PSF, which leads to a more sophisticated computational algorithm [35, 36].

2.7.7 Non-Blind Deconvolution


Non-blind deconvolution estimates only the clear image, using a known kernel; prior knowledge about the parameters of the blur kernel (point spread function length and angle) is required. Non-blind deconvolution is widely used in the literature [37] and is performed with an already estimated kernel. Kernel estimation can be performed using the Radon transform, and it is also known that a small amount of noise can affect the kernel estimation task [38].

2.7.8 Wiener Deconvolution Filter


The Wiener deconvolution algorithm estimates the original image from a distorted image: it approximately reverses the filtering (e.g. blurring) that degraded the original image together with additive noise, restoring the image that was degraded by blur. The algorithm is optimal in the sense of the least mean square error between the estimated and the true image, and it uses the correlation matrices of the image and the noise. In the absence of noise, the Wiener filter reduces to the ideal inverse filter. The Wiener restoration filter is computed as:

G(k, l) = H*(k, l) / ( |H(k, l)|² + S_u(k, l) / S_x(k, l) )        (2.14)

Where S_x is the signal power spectrum and S_u is the noise power spectrum. H is computed so that it has the same size as the input image.
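A hedged 1-D sketch of Equation (2.14) using the FFT (illustrative only: the blur is applied circularly so the frequency-domain model is exact, the kernel is made up, and the noise-to-signal ratio S_u/S_x is approximated by a small constant K):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(64)                         # "true" 1-D signal

# Circular blur with kernel [0.6, 0.2, 0.2] applied in the frequency domain,
# so the convolution model matches the FFT exactly
h = np.zeros(64)
h[0], h[1], h[-1] = 0.6, 0.2, 0.2
H = np.fft.fft(h)
y = np.fft.ifft(H * np.fft.fft(x)).real    # blurred (noise-free) observation

# Eq. (2.14), with the noise-to-signal ratio S_u/S_x taken as a small constant K
K = 1e-6
G = np.conj(H) / (np.abs(H) ** 2 + K)
x_hat = np.fft.ifft(G * np.fft.fft(y)).real
```

With no noise and a tiny K, the filter behaves almost like the ideal inverse filter and the restoration is essentially exact; with real noise, K trades off de-blurring against noise amplification.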

2.8 Interpolation
Interpolation is the operation of estimating the signal value at positions intermediate between the original samples. In general, this is done by fitting a continuous function to the known samples and evaluating that function at the desired locations. To avoid aliasing without any degradation or change to the original signal, a low-pass filter should be used; in the spatial domain this ideal filter is the sinc function, which has infinite spatial extent. Cubic interpolation uses up to four samples from the original signal to calculate the value of an interpolated sample [39].

2.9 Preprocessing
Zoom, shrink, and resize: resizing is one of the most common geometric operations. A distinction is made between resizing the real image, where the produced image changes its size in pixels, and resizing the image for human display, which means magnification (zooming in) or shrinking (minimizing). Both kinds of image processing are useful and often rely on the same basic algorithms [40].

2.10 Image Enhancement


Enhancement improves the contrast of the image and brings out its details. Among several enhancement methods is gamma correction, a non-linear adjustment of individual pixel values. While normalizing an image performs linear operations on individual pixels, such as multiplication, addition and subtraction, gamma correction performs a non-linear operation on the pixels of the source image and can change the image saturation [41]. Depending on the gamma value, the mapping between the values in the input and the output image may be nonlinear. Gamma can be any value between 0 and infinity. If the gamma value is 1 (the default), the mapping is linear. If the gamma value is less than 1, the mapping is weighted toward higher (brighter) output values; if it is greater than 1, the mapping is weighted toward lower (darker) output values. In this work, gamma less than 1 is used. Figure (2.6) illustrates this relation, and Figure (2.7) shows the enhancement of an image together with its histogram.

The three conversion curves show how the values are mapped when gamma is less than, equal to, and greater than 1. In each graph, the x-axis represents the intensity values of the input image, and the y-axis represents the intensity values of the output image.

Figure (2.6): Different Gamma Correction.

Figure (2.7): Histogram of gray image.
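The gamma mapping described above can be sketched in a few lines of NumPy (an illustration only, with made-up pixel values; gamma < 1, as used in this thesis, brightens the image):

```python
import numpy as np

def gamma_correct(img, gamma):
    """Normalise to [0, 1], map through out = in ** gamma, rescale to [0, 255].
    gamma < 1 brightens the image, gamma > 1 darkens it."""
    x = img.astype(float) / 255.0
    return (x ** gamma) * 255.0

img = np.array([[0, 64, 128, 255]], dtype=np.uint8)
bright = gamma_correct(img, 0.5)   # gamma < 1: mid-tones are pushed upward
```

Black (0) and white (255) are fixed points of the mapping; only the mid-tones move, which is what the conversion curves in Figure (2.6) depict.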


2.11 Properties of Gaussian


The Gaussian smoothing operator is a 2-D convolution operator that is used to smooth images and remove noise. The Gaussian has good characteristics, such as scaling, separability, and symmetry, which can be exploited for efficient execution [41]. This filter has some special properties, detailed below:

• It removes "high-frequency" components from the image (it is a low-pass filter).

• Convolving a Gaussian with itself gives another Gaussian, as in Figure (2.10).

• Therefore one can smooth repeatedly with a small-width kernel and get the same result as a single larger-width kernel would give.

• Convolving twice with a Gaussian kernel of width σ is the same as convolving once with a kernel of width σ√2.

The Gaussian distribution in 1-D has the form:

G(x) = ( 1 / (√(2π) σ) ) e^( −x² / (2σ²) )        (2.15)

The distribution is illustrated in Figure (2.8).

Figure (2.8): 1-D Gaussian distribution with mean 0 and σ = 1


Where σ is the standard deviation of the distribution, which is assumed to have a mean of 0.

In 2-D, an isotropic (i.e. circularly symmetric) Gaussian has the form:

G(x, y) = ( 1 / (2πσ²) ) e^( −(x² + y²) / (2σ²) )        (2.16)

This distribution is shown in Figure (2.9).

Figure (2.9): 2-D Gaussian distribution with mean (0, 0) and σ = 1

2.11.1 Scaling
Convolving a Gaussian of standard deviation σ with itself gives a wider Gaussian of standard deviation √2 σ. Thus, if an image has already been filtered with a Gaussian of spread σ and is to be filtered with the larger Gaussian of spread √2 σ, then instead of filtering the original image with the larger Gaussian, it is more efficient to filter the previous result once more with spread σ, obtaining the image filtered with √2 σ.


Figure (2.10): Convolving Gaussian with Itself.

2.11.2 Separability
The two-dimensional Gaussian filter can be separated into one-dimensional Gaussians, one along the x-direction and the other along the y-direction. Thus, the Gaussian filter can be applied to an image by first convolving it with a one-dimensional Gaussian along each row and then convolving the result with a one-dimensional Gaussian along each column, as shown in Figure (2.11) and Figure (2.12).


Figure (2.11): Gaussian x and y direction.

Figure (2.12): Separability Gaussian filter.
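Separability can be verified numerically: the 2-D Gaussian kernel is the outer product of two 1-D Gaussians, and one 2-D convolution equals a row pass followed by a column pass. The sketch below (pure NumPy, illustrative filter size and image) spells out both routes:

```python
import numpy as np

def gauss1d(size, sigma):
    """Sampled, normalised 1-D Gaussian."""
    r = np.arange(size) - size // 2
    g = np.exp(-r**2 / (2 * sigma**2))
    return g / g.sum()

def conv2_full(img, K):
    """Direct (full) 2-D convolution, written out explicitly."""
    H, W = img.shape
    kh, kw = K.shape
    out = np.zeros((H + kh - 1, W + kw - 1))
    for i in range(H):
        for j in range(W):
            out[i:i + kh, j:j + kw] += img[i, j] * K
    return out

g = gauss1d(7, 1.5)
G2 = np.outer(g, g)          # separability: the 2-D kernel is an outer product

rng = np.random.default_rng(1)
img = rng.random((5, 6))
direct = conv2_full(img, G2)                            # one 2-D pass
rows = np.array([np.convolve(r, g) for r in img])       # 1-D pass along rows
sep = np.array([np.convolve(c, g) for c in rows.T]).T   # then along columns
```

The two results agree exactly, while the separable route needs only 2k multiplications per pixel instead of k² for a k×k kernel, which is the efficiency gain this property provides.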


2.11.3 Symmetry
A Gaussian is symmetric around the origin, i.e. g(x) = g(−x) for any x. This property can be exploited to minimize the number of multiplication operations.

The Gaussian function is used in numerous research areas:



 It defines a probability distribution for noise or data.


 It is a smoothing operator.
 It is used in mathematics.
The Gaussian function has further important properties:

 It is the most common natural model.

 It is a smooth function with an infinite number of derivatives.
 The Fourier transform of a Gaussian is a Gaussian.
 The convolution of a Gaussian with itself is a Gaussian.
 There are cells in the eye that perform Gaussian filtering.

2.12 Color model


A color model is a specification of a coordinate system and a subspace within that system where each color is represented by a single point (it is also called a color space or color system). Many different color models have been proposed over the last 400 years, evolving into distinct models for a variety of reasons (such as physical measurements of light, photography, color mixing, and so on). Color perception begins with a chromatic light source capable of emitting electromagnetic radiation at wavelengths between approximately 400 and 700 nm. Part of this radiation is reflected by the surfaces of the objects in a scene, and the resulting reflected light reaches the human eye, leading to a sense of color. An object that reflects light almost equally at all wavelengths of the visible spectrum is perceived as white, while an object that absorbs most of the incoming light, regardless of wavelength, is perceived as black. The many gray shades between pure white and pure black are usually referred to as gray. Objects with selective reflective properties appear chromatic, and the range of the spectrum


they reflect is often associated with the name of a color [39]. A colored light source can be described by three basic quantities:
• Intensity (or Radiance): the total amount of energy that flows from the light source,
measured in watts (W).
• Luminance: a measure of the amount of information an observer perceives from a
light source, measured in lumen (lm). It corresponds to the radiant power of a light
source weighted by a spectral sensitivity function (characteristic of the HVS).
• Brightness: the subjective perception of (achromatic) luminous intensity.

For color mixtures using light, the primary colors are red, green, and blue, while for dyes (or paints) the primaries are cyan, magenta, and yellow (Figure 2.13). It is important to notice that a dye color is named after the part of the spectrum it absorbs, while the color of light is defined by the part of the spectrum that is emitted. Thus, blending the three primary colors of light leads to white (that is, a full spectrum of visible light), while blending the three primary colors of dyes leads to black (that is, all colors are absorbed, so nothing remains to reflect the incoming light).

Figure (2.13): Color mixtures (a) Additive and (b) subtractive.



The following are the most common color models used in image processing:

2.12.1 RGB Color Model


The RGB color model is based on a Cartesian coordinate system whose axes represent the three primary colors of light (red, green, and blue), usually normalized to the range [0, 1], as in Figure (2.14). The eight vertices of the resulting cube correspond to the three primary colors of light, the three secondary colors, pure white, and pure black.

Figure (2.14): RGB color model.

The number of discrete values of R, G, and B is a function of the pixel depth, defined as the number of bits used to represent each pixel. The standard value is 24 bits, equal to 3 image planes × 8 bits per plane.

2.12.2 YIQ (NTSC) Color Model


This color model (NTSC) was used in the American standard television. A key advantage of this model was the ability to separate the grayscale content from the color data, a key design requirement at a time when emerging color TV sets and transmitters had to be compatible with earlier black-and-white ones. In this color model there are three components: luminance (Y) and two color-difference signals, hue (I) and saturation (Q). The conversion from RGB to YIQ is applied by employing the transformation below:

[Y]   [ 0.299   0.587   0.114 ] [R]
[I] = [ 0.595  −0.274  −0.321 ] [G]        (2.17)
[Q]   [ 0.211  −0.522   0.311 ] [B]
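The matrix of Equation (2.17) can be applied directly as a matrix-vector product (an illustrative NumPy sketch, with the coefficients exactly as given in the text):

```python
import numpy as np

# Eq. (2.17): RGB -> YIQ conversion matrix, coefficients as in the text
M = np.array([[0.299,  0.587,  0.114],
              [0.595, -0.274, -0.321],
              [0.211, -0.522,  0.311]])

def rgb_to_yiq(rgb):
    """rgb: array of shape (..., 3) with components in [0, 1]."""
    return rgb @ M.T

white = np.array([1.0, 1.0, 1.0])
yiq = rgb_to_yiq(white)        # neutral colors carry luminance only
```

For any neutral gray (R = G = B) the I and Q rows sum to zero, so the chromatic channels vanish, which is exactly the grayscale/color separation the NTSC design required.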

2.12.3 CIE L*a*b* (CIELAB) color space


CIE L*a*b* is a model developed from the CIEXYZ color space. It covers all colors visible to the human eye and is designed to be device independent. Its three coordinates are: L*, representing lightness (0 represents black and 100 represents white); a*, representing the green-magenta axis (range −128 to 128), where negative values represent green and positive values magenta; and b*, representing the blue-yellow axis (range −128 to 128), where negative values represent blue and positive values yellow. Figure (2.15) shows the CIE L*a*b* color space at different lightness values.

Figure (2.15): CIE L*a*b* color model when lightness is 75, 50, 25.

There are no simple formulas to convert directly from RGB to Lab, so RGB must first be converted to a definite color space such as CIEXYZ and then to Lab. The CIELAB transform is:

L* = 116 g(Y / Yn) − 16        (2.18)

a* = 500 ( g(X / Xn) − g(Y / Yn) )        (2.19)

b* = 200 ( g(Y / Yn) − g(Z / Zn) )        (2.20)

g(t) = t^(1/3)              if t > σ³
g(t) = t / (3σ²) + 4/29     otherwise        (2.21)

The inverse transform back to XYZ is:

X = Xn · g⁻¹( (L* + 16)/116 + a*/500 )        (2.22)

Y = Yn · g⁻¹( (L* + 16)/116 )        (2.23)

Z = Zn · g⁻¹( (L* + 16)/116 − b*/200 )        (2.24)

g⁻¹(t) = t³                 if t > σ
g⁻¹(t) = 3σ² (t − 4/29)     otherwise        (2.25)

Where σ = 6/29.

2.13 Histogram

Histograms provide an easy, practical, and straightforward way of evaluating image attributes such as overall contrast and average brightness. The curves in a histogram reflect the distribution of color values in the digital image. It is composed of two axes: the x-axis represents the color values of the image, while the y-axis represents the frequency of each color value within the image. The left side of the x-axis represents the dark areas, the middle the mid-gray areas, and the right side the light areas [39]. The histogram of a color image consists of three colored curves: red represents the frequencies of the color values of the red layer of the image, green those of the green layer, and blue those of the blue layer.
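One curve of such a histogram is just a frequency count over the 256 gray levels of one layer, which can be sketched in NumPy (illustrative image values):

```python
import numpy as np

def histogram256(channel):
    """Frequency of each gray level 0..255 in one image layer (one histogram curve)."""
    return np.bincount(channel.ravel().astype(np.int64), minlength=256)

img = np.array([[0, 0, 255],
                [128, 128, 128]], dtype=np.uint8)
h = histogram256(img)          # h[v] = number of pixels with value v
```

For a color image, applying this to the R, G, and B layers separately yields the three colored curves described above.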

2.14 Types of Quality Measure

There are basically two approaches for image Quality measurement:-

1. Objective measurement

2. Subjective measurement

1. Subjective measurement

A number of observers are selected, tested for their visual capabilities, shown a series of test scenes, and asked to score the quality of the scenes. This is the only "correct" method of quantifying visual image quality; however, subjective evaluation is usually too inconvenient, time-consuming and expensive.

2. Objective measurement

These are automatic algorithms for quality assessment that can analyze images and report their quality without human involvement.

Such methods could eliminate the need for expensive subjective studies. Objective
image quality metrics can be classified according to the availability of an original
(distortion-free) image, with which the distorted image is to be compared.

Most existing approaches are known as: -


(i) Full-reference: meaning that a complete reference image is assumed to be known.

(ii) No-reference: In many practical applications, however, the reference image is


not available, and a no-reference or “blind” quality assessment approach is desirable.

(iii) Reduced-reference: In a third type of method, the reference image is only


partially available, in the form of a set of extracted features made available as side
information to help evaluate the quality of the distorted image.

The work in this thesis is based on the design of No-reference image quality measure.

1- Maximum Difference (MD)


This represents the maximum value of the difference between the two images:

MD = max |I(i, j) − C(i, j)|        (2.26)

When the maximum of the difference between images I and C is zero, the two images are identical; if there is any difference, the value is greater than zero and grows with the difference between the two images [42].
2- Correlation Coefficient (CC)
A very helpful statistical formula used to determine how strong the relationship between two variables is:

r = Σᵢ Σⱼ (I(i, j) − Ī)(C(i, j) − C̄) / √( [Σᵢ Σⱼ (I(i, j) − Ī)²] · [Σᵢ Σⱼ (C(i, j) − C̄)²] )        (2.28)

Where Ī = mean(I) and C̄ = mean(C). The resulting value lies between zero and one: if the two images are identical the value equals one; for different images the value is less than one; and for completely different images the result is zero [43].
3- Mean Squared Error (MSE):
One obvious way of measuring this similarity is to compute an error signal by subtracting the test signal from the reference and then computing the average energy of the error signal. The mean squared error (MSE) is the simplest and most widely used full-reference image quality measurement [42].

MSE = ( Σᵢ₌₁ᴹ Σⱼ₌₁ᴺ (I(i, j) − C(i, j))² ) / (M · N)        (2.29)

Where I is the original image, C is the processed image, and (M × N) is the image size. The value is zero when the two images are identical and grows as the difference between the images increases.
5- Peak Signal to Noise Ratio (PSNR):
A measure used to quantify the similarity, or the extent of the difference, between two images of the same structure. It is determined by:

PSNR = 10 log₁₀ (255² / MSE)        (2.30)

The resulting value is taken as 100 when the two images are identical (the ratio is infinite in that case) and falls below that amount as the two images differ; it approaches zero when the two images are completely different [42].
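Equations (2.29) and (2.30) can be sketched together in NumPy (illustrative only; the 100 dB cap for identical images follows the convention stated above, since the true ratio is infinite when MSE = 0):

```python
import numpy as np

def mse(I, C):
    """Eq. (2.29): mean of the squared pixel differences."""
    d = I.astype(float) - C.astype(float)
    return (d ** 2).mean()

def psnr(I, C, cap=100.0):
    """Eq. (2.30); identical images give an infinite PSNR, capped here at 100 dB."""
    m = mse(I, C)
    return cap if m == 0 else min(cap, 10 * np.log10(255.0 ** 2 / m))

a = np.full((4, 4), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110                  # one pixel off by 10 gray levels
```

A single pixel off by 10 levels in a 16-pixel image gives MSE = 100/16 = 6.25 and a PSNR of roughly 40 dB, which is why high PSNR values indicate close agreement.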


CHAPTER THREE
THE PROPOSED SYSTEM
3.1 Introduction

This chapter describes the practical stages of the proposed system for removing blur from images. The blurring of an image is a major cause of image degradation; due to blurring, it is impossible to get exact details of the original image. The proposed de-blurring system removes the blur and restores the image with high quality while keeping the features and edges as much as possible. The method relies on concepts such as the discrete wavelet transform, blind deconvolution, image sharpening, adjustment of image intensity values, Gaussian smoothing, and conversion of the color model of the image. The proposed method was applied to 150 blurred images from a public data set, with no prior information about the type or degree of blur, as shown in Figure (3.1).


Figure (3.1): Flowchart of the Proposed System. The flow reads the blurry image; enlarges it with first-order zoom; adjusts the image intensity values; smooths it with a Gaussian filter; applies the Wiener filter; applies the blind deconvolution algorithm; splits the layers of the color image; applies the wavelet transform (rbio3.9); converts the class of the image to uint8; enhances the brightness; applies nearest-neighbor interpolation; converts the color model from RGB to YIQ; sharpens the image; then, if the de-blurred image Y is satisfactory, displays it, and otherwise another image is processed.

3.2 Proposed System Steps


The proposed system consists of three main stages: the discrete wavelet transform, Wiener deconvolution, and the blind deconvolution algorithm. The key idea of this proposal is to apply blind deconvolution to the image to reduce its blurring, then use the DWT on the resulting image to obtain the de-blurred image, and finally sharpen the image to produce the output. Algorithm (3.1) shows the steps of the proposed method.

Algorithm (3.1): De-blurring Image.

Input: Blurred image.
Output: De-blurred image.
Begin
Step 1: Read the RGB image: im ← load image.
Step 2: Scale the input image: im ← resize image to the size [n, m].
Step 3: Enlarge the image using first-order zoom (each pixel is replicated into a 2×2 block):
    r ← 1
    for i ← 1 to number of rows, for each band k
        c ← 1
        for j ← 1 to number of columns
            im[r, c] ← v[i, j, k]
            im[r, c+1] ← v[i, j, k]
            im[r+1, c] ← v[i, j, k]
            im[r+1, c+1] ← v[i, j, k]
            c ← c + 2
        end
        r ← r + 2
    end
Step 4: Adjust the image intensity values.
Step 5: Apply a Gaussian filter to smooth the image.
Step 6: Apply the Wiener filter algorithm, convolving its 3×3 mask G with the image:
    for s ← 1 to 3, for ss ← 1 to 3: im ← im[i, j] * G[s, ss].
Step 7: Apply the blind deconvolution algorithm, convolving the 3×3 PSF mask BD with the image:
    for s ← 1 to 3, for ss ← 1 to 3: im ← im[i, j] * BD[s, ss].
Step 8: Split the layers of the color image.
Step 9: Apply the discrete wavelet transform (rbio3.9) to the image.
Step 10: Convert the class of the image to uint8.
Step 11: Enhance the image brightness.
Step 12: Apply nearest-neighbor interpolation to the image.
Step 13: Convert the color model of the image from RGB to NTSC (YIQ).
Step 14: Sharpen the image (imsharpen).
Step 15: Display the de-blurred image.
End


3.2.1 Input Image


The proposed system takes blind-blurred images with no information about the type of blur or the blur degree. These images consist of a 2-D array of size M×N whose pixels hold RGB color values. The image is then scaled.

3.2.2 Enlarge image


There are several methods for enlarging an image, such as zooming, of which there are three types: first-order (pixel replication), average, and convolution zoom. The proposed method uses first-order zoom to enlarge the image, converting the original image to double its size. Zoom is used because the wavelet transform reduces the size of the image, so the input image size is enlarged before the wavelet transform processing.
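First-order zoom as described above (and as written out in Step 3 of Algorithm 3.1) is plain pixel replication, which can be sketched in one NumPy call (illustrative image values):

```python
import numpy as np

def first_zoom(img):
    """First-order (pixel-replication) zoom: every pixel becomes a 2x2 block,
    doubling both dimensions before the size-halving wavelet step."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

img = np.array([[1, 2],
                [3, 4]], dtype=np.uint8)
big = first_zoom(img)          # 2x2 input becomes a 4x4 output
```

Because each DWT level halves the image dimensions, doubling them first means the approximation sub-band comes back at the original resolution.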

3.2.3 Gaussian Smoothing


The idea of Gaussian smoothing is to use the 2-D Gaussian distribution as a PSF, which is achieved by convolution. Since the image is stored as a collection of discrete pixels, a discrete approximation of the Gaussian function must be produced before the convolution can be performed, as shown in Algorithm (3.2).

Algorithm (3.2): Gaussian Smoothing on Image.


Input: blur image.
Output: smooth image.
Begin

1. Step1:- Convolution the image with the second derivative of Gaussian filter
(gyy(y)) along each column.
2. Step2:- Convolution the result image from step (1) by a Gaussian filter (g(x))
along each row. Call the result image Lx.

44
Chapter Three The Proposed System

3. Step3:- Convolve the original image with a Gaussian filter (g(y)) along each column.
4. Step4:- Convolve the result of step (3) with the second derivative of the Gaussian filter (gxx(x)) along each row. Call the resulting image Ly.
5. Step5:- Add Lx to Ly.
6. Step6:- Display the smoothed image.

End
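The separable row/column convolutions used in Algorithm (3.2) can be sketched in Python/NumPy as follows. This is a plain Gaussian low-pass applied in both directions with illustrative `sigma` and `radius` values; the derivative-of-Gaussian variants follow the same row-then-column pattern with a different 1-D kernel:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Discrete approximation of a normalized 1-D Gaussian."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def smooth(img, sigma=1.0, radius=3):
    """Separable Gaussian smoothing: convolve every row with g(x),
    then every column of the result with g(y)."""
    g = gaussian_kernel(sigma, radius)
    rows = np.apply_along_axis(lambda r: np.convolve(r, g, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, g, mode='same'), 0, rows)

img = np.zeros((9, 9)); img[4, 4] = 1.0   # impulse image
out = smooth(img)
print(round(out.sum(), 6))                 # total mass is preserved (~1.0)
```

Because the 2-D Gaussian is separable, two 1-D passes give the same result as one 2-D convolution at far lower cost.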

3.2.4 Blind Deconvolution


In this technique we restore an image without prior knowledge of the degradation process, i.e. without knowing the PSF. Blind deconvolution restores a degraded/blurred image without any knowledge of the blur kernel (PSF). A kernel is a mask in the form of a small matrix used to de-blur an image; this small matrix is known as the convolution matrix. Because the blur kernel is unknown, the technique is called "blind". In the absence of any a priori information about the imaging system and the true image, the estimation is normally done by trial-and-error experimentation until an acceptable restored image quality is obtained. Deconvolution is a technique to sharpen or de-blur a blurry image, and collectively this is known as the blind deconvolution technique. It can be represented as equation (3.2):

𝒀 = 𝒌 ∗ 𝑿 + 𝒏 (3.2)

where X is the input RGB image, Y is the degraded image, k is the kernel (convolution matrix) that is convolved with the input image X to transform it into the blurry image Y, * is the convolution operator, and n is additive noise. The goal of blind deconvolution is to invert this process and recover both X and k.
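A minimal Python/NumPy sketch of the degradation model in equation (3.2), simulating Y = k * X + n with a naive 2-D convolution on toy data (the kernel is known here only to build the example; blind deconvolution would have to estimate both X and k from Y alone):

```python
import numpy as np

def convolve2d_same(x, k):
    """Naive 'same'-size 2-D convolution used to model Y = k * X."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))   # zero-pad the borders
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            # flip the kernel for true convolution (not correlation)
            out[i, j] = np.sum(xp[i:i+kh, j:j+kw] * k[::-1, ::-1])
    return out

X = np.eye(5)                    # toy "true" image
k = np.full((3, 3), 1.0 / 9.0)   # blur kernel (PSF), unknown in practice
n = 0.0                          # noise term, zero here for clarity
Y = convolve2d_same(X, k) + n    # degraded image per Eq. (3.2)
print(Y.shape)  # (5, 5)
```

In practice, routines such as MATLAB's blind deconvolution start from an initial PSF guess and refine X and k iteratively against this forward model.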


3.2.5 Split the Layers of Color Image


The RGB image is read and split into its three color layers so that the discrete wavelet transform can be applied to each layer of the image separately.

3.2.6 Discrete Wavelet Transform


In this section, the discrete wavelet transform is applied to the image, using the "rbio3.9" wavelet, a member of the reverse biorthogonal wavelet family. A wavelet decomposition of a digital image is performed by first going through the image row by row and decomposing each row as if it were a standard one-dimensional signal. After all rows have been processed, a new image can be built in which the left side represents the low-frequency part of each row while the right side shows the high-frequency parts. The same steps are then repeated column by column, which finally gives an image arranged in four quadrants. The upper-left square contains only low frequencies, while the lower-right square shows only the very high-frequency details; the other two sub-bands contain a mixture of low- and high-frequency data, as shown in Algorithm (3.3).

The proposed method applies the convolution approach: the image is convolved with low-pass and high-pass filters, and a down-sampling operation is then performed on both rows and columns to produce the wavelet coefficients for processing. The discrete wavelet transform procedure deletes the rows or columns at odd locations of the image matrix.

Algorithm (3.3): Discrete Wavelet Transform.


Input: image as a matrix of size (N*M).
Output: wavelet image as four matrices (LL, LH, HL, HH).
Begin
Step1:- Convert the image into a gray image.


Step2:- Set g = [1/√2, 1/√2]   // low-pass filter
        h = [1/√2, −1/√2]      // high-pass filter

Step3:- Compute the approximation coefficients along the rows by convolving the input image with the low-pass filter:
        For i = 0 to number of rows − 1
          For j = 0 to number of columns − 1
            Let s = 0
            For k = 0 to number of columns − 1
              s = s + image(i, k) * g(j − k)
            End For
            Save s in the matrix LL(i, j)
          End For
        End For
Step4:- Delete the columns in odd positions. // down-sampling
Step5:- Repeat step 3, but compute the approximation along the columns by convolving the image resulting from step 3 with the low-pass filter, exchanging the roles of i and j.
Step6:- Repeat step 4, deleting rows instead of columns in odd positions. // down-sampling
Step7:- Repeat steps 1–6 using the high-pass filter, in addition to the low-pass filter, to compute the detail coefficients.
End
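The row-then-column filtering and down-sampling of Algorithm (3.3) can be sketched with the two-tap filters stated in Step 2 (illustrative NumPy code; the thesis uses rbio3.9, whose longer filter banks follow exactly the same scheme):

```python
import numpy as np

INV_SQRT2 = 1.0 / np.sqrt(2.0)
g = np.array([INV_SQRT2,  INV_SQRT2])    # low-pass filter
h = np.array([INV_SQRT2, -INV_SQRT2])    # high-pass filter

def analyze(rows, filt):
    """Filter each row with `filt`, then keep every second sample
    (the down-sampling step of Algorithm 3.3)."""
    return np.apply_along_axis(lambda r: np.convolve(r, filt)[1::2], 1, rows)

def dwt2(img):
    """One decomposition level: rows first, then columns."""
    lo = analyze(img, g).T    # low-pass rows; transpose to process columns
    hi = analyze(img, h).T    # high-pass rows
    LL = analyze(lo, g).T
    LH = analyze(lo, h).T
    HL = analyze(hi, g).T
    HH = analyze(hi, h).T
    return LL, LH, HL, HH

img = np.arange(16, dtype=float).reshape(4, 4)
LL, LH, HL, HH = dwt2(img)
print(LL.shape)   # (2, 2) -- quarter-size approximation sub-band
```

The LL sub-band is the low-frequency approximation on which the later de-blurring operates; LH, HL and HH hold the directional detail coefficients.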

3.2.7 Nearest interpolation


For nearest-neighbor interpolation, the block uses the values of nearby translated pixels that are close to the output pixel. The output matrix is created by replacing each input pixel value with the translated value nearest to it. It is used here to restore valid image values, because the image has already undergone calculations such as division, addition, and subtraction.
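A sketch of nearest-neighbor resizing as described above; the index-mapping convention used here (floor of the scaled coordinate) is one common choice, and MATLAB's own rounding rule may differ slightly:

```python
import numpy as np

def nearest_resize(img, new_h, new_w):
    """Nearest-neighbor interpolation: each output pixel copies the
    value of the closest input pixel (no new intensities are created)."""
    h, w = img.shape[:2]
    rows = np.minimum((np.arange(new_h) * h / new_h).astype(int), h - 1)
    cols = np.minimum((np.arange(new_w) * w / new_w).astype(int), w - 1)
    return img[rows[:, None], cols]

img = np.array([[10, 20],
                [30, 40]], dtype=np.uint8)
out = nearest_resize(img, 4, 4)
print(out[0])   # [10 10 20 20]
```

Because output pixels are copies of input pixels, nearest interpolation never invents intermediate values, which is why it is suitable after arithmetic that may have disturbed the intensity range.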

3.2.8 RGB to YIQ Color model


In this section, the YIQ color model is used, which helps present the image in a way closer to human visual perception, as shown in Algorithm (3.4).

Algorithm (3.4): NTSC color model Image.


Input: RGB image.
Output: NTSC image.
Begin

1. Step1:- Scale the RGB image.
2. Step2:- Convert the image to the NTSC color model.
3. Step3:- Assume threshold values (a1, a2, and a3).
4. Step4:- Subtract the average of layer 1; call the result MA.
5. Step5:- Sum layer 1 with MA * layer 1.
6. Step6:- Subtract the average of layer 2 from a1.
7. Step7:- Sum layer 2 with MA * layer 2.
8. Step8:- Subtract the average of layer 3 from a2.
9. Step9:- Sum layer 3 with MA * layer 3.
Step10:- Convert the image to the RGB color model.
Step11:- Return the image to its original range by multiplying it by 255.
Step12:- Convert the image into 1-D and sort it.
Step13:- Find (image_min and image_max).
Step14:- Convert each pixel from RGB to NTSC.


Step15:- Apply normalization: (image − min) / (max − min).

Step16:- Convert the range to between 0 and 200.

Step17:- Display the NTSC image.

End
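The RGB-to-NTSC (YIQ) conversion at the heart of Algorithm (3.4) amounts to multiplying each pixel by a fixed 3x3 matrix. The sketch below uses the commonly quoted coefficients (the same ones MATLAB's rgb2ntsc documents), assuming RGB values scaled to [0, 1]:

```python
import numpy as np

# Standard NTSC (YIQ) conversion matrix: rows produce Y, I, Q
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def rgb_to_yiq(rgb):
    """Convert an RGB image (values in [0, 1]) to the YIQ/NTSC model."""
    return rgb @ RGB2YIQ.T

white = np.ones((1, 1, 3))       # a single pure-white pixel
yiq = rgb_to_yiq(white)
print(yiq[0, 0, 0])              # luminance Y of white is 1.0
```

Y carries the luminance (what a monochrome display would show), while I and Q carry chrominance; for a gray pixel both I and Q are zero, which is why processing Y alone tracks human brightness perception.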

3.2.9 Sharpen Image and Unsharp Mask


Unsharp masking is the most powerful sharpening method; however, it is a little more complicated to use. When Unsharp Masking is selected, the Sharpen dialog box expands to add two additional sliders for Radius and Threshold. The Radius slider controls the amount of blurring: generally, the radius should correspond to the degree to which the original image is blurred, so the blurrier the image, the higher the radius. Choosing too large a radius creates a sort of ghosting effect around the edges of objects; if the radius is too small, the sharpening effect is minimized. The Threshold setting restricts the sharpening to those pixels whose difference from their neighbors exceeds a specified threshold value. The idea behind the threshold is to select a value that still brings out edge detail without creating unwanted texture in smooth areas like clouds or clear blue skies. Sharpening is applied with unsharp masking using MATLAB's imsharpen, whose input is specified as an RGB image. If the input is a true-color (RGB) image, imsharpen converts the image to the L*a*b* color space, applies sharpening to the L* channel only, and then converts the image back to the RGB color space before returning it as the output image. When imsharpen is applied, its name-value pair arguments are as follows:

1- Radius is the standard deviation of the Gaussian low-pass filter, specified as a positive number with a default of one. This value controls the size of the region around the edge pixels that is affected by sharpening: a large value sharpens wider regions around the edges, whereas a small value sharpens narrower regions.
2- Amount is the strength of the sharpening effect, specified as a numeric scalar with a default of 0.8. A higher value leads to a larger increase in the contrast of the sharpened pixels. Typical values for this parameter are within the range [0, 2], although values greater than 2 are allowed; very large values may create undesirable effects in the output image.
3- Threshold is the minimum contrast required for a pixel to be considered an edge pixel, specified as a scalar in the range [0, 1] with a default of zero. Higher values (closer to 1) allow sharpening only in high-contrast regions, such as strong edges, while leaving low-contrast regions unaffected. Lower values (closer to 0) additionally allow sharpening in relatively smoother regions of the image. This parameter is useful for avoiding the sharpening of noise in the output image.

In our work we use imsharpen to preserve and sharpen the edges in an image, as shown in Algorithm (3.5).

Algorithm (3.5): Unsharp Mask.

Input: image as a matrix of size (N*M), and an unsharp mask.

Output: Sharpen Image.

Begin
Step1:- Convert the image from RGB into the L*a*b* color space.
Step2:- Create a mask of the chosen size.
Step3:- Define a set of coefficients and set a standard deviation of 1 to increase the intensity around the edges.
Step4:- Use the amount coefficient with a value of 0.8 to increase the contrast of the sharpened pixels.
Step5:- Use the threshold with its default value of zero: the minimum contrast required for a pixel to be considered an edge pixel, specified as a scalar in the range [0, 1]. Higher values (closer to 1) allow sharpening only in high-contrast regions, such as strong edges, while leaving low-contrast regions unaffected; lower values (closer to 0) additionally allow sharpening in relatively smoother regions.
Step6:- Apply the filter to the image: a window is passed over the L* layer of the image (after the color-space conversion, with the mentioned parameters) to highlight the edges of the image.
Step7:- Convert the image from step 6 back to an RGB image.
End Algorithm
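The unsharp-masking idea (sharpened = original + amount × (original − blurred), gated by a threshold) can be sketched as follows. This uses a box blur as a stand-in for imsharpen's Gaussian low-pass, so it is an approximation of the technique, not MATLAB's exact implementation:

```python
import numpy as np

def unsharp(img, radius=1, amount=0.8, threshold=0.0):
    """Unsharp masking sketch: out = img + amount * mask, where
    mask = img - blurred(img), applied only where |mask| > threshold."""
    k = 2 * radius + 1                      # blur window size
    pad = np.pad(img.astype(float), radius, mode='edge')
    blurred = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            blurred[i, j] = pad[i:i+k, j:j+k].mean()
    mask = img - blurred
    mask[np.abs(mask) <= threshold] = 0.0   # leave flat regions untouched
    return img + amount * mask

img = np.zeros((5, 5)); img[:, 2:] = 1.0    # vertical step edge
out = unsharp(img)
print(out.max() > 1.0)   # overshoot at the edge => True
```

The overshoot/undershoot around the step is exactly the added edge contrast that makes the image look sharper; the threshold prevents the same boost from amplifying noise in flat regions.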

3.3 Deblurred Image


Finally, after applying the previous steps to the blurred image, it is enhanced and deblurred, producing the resulting image.


CHAPTER FOUR
THE EXPERIMENTAL RESULTS

4.1 Introduction
This chapter presents the results of the proposed system. The system is implemented in MATLAB 2018 and applied to a public dataset without any prior information about the size of the input images or the kind of blur. The quality of the de-blurred image is measured with the peak signal-to-noise ratio (PSNR), maximum difference (MD), and mean square error (MSE), computed between the input (blurred) image and the resulting (de-blurred) image.
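The MSE, PSNR, and MD measures used here can be sketched directly in Python (PCC, also reported in Table (4.1), is the standard Pearson correlation coefficient and is omitted for brevity):

```python
import numpy as np

def mse(a, b):
    """Mean square error between two images."""
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB (higher = closer images)."""
    return 10 * np.log10(peak ** 2 / mse(a, b))

def max_diff(a, b):
    """Maximum absolute pixel difference (MD)."""
    return np.max(np.abs(a.astype(float) - b.astype(float)))

a = np.full((4, 4), 100.0)
b = a.copy(); b[0, 0] = 110.0      # one pixel off by 10
print(mse(a, b), max_diff(a, b))   # 6.25 10.0
```

Note that PSNR is undefined when MSE is zero (identical images), so implementations typically special-case that situation.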

4.2 Cases Study of De-blur Image


There are four general cases for testing this system. First, blurred RGB images are used as input. Second, the RGB image is de-blurred using the Gaussian smoothing filter, Wiener-filter deconvolution, and blind deconvolution. Third, the reverse biorthogonal wavelet (rbio3.9) is used to de-blur the image, followed by unsharp masking. Finally, the color of the resulting image is converted from RGB to NTSC to make the image more suitable for visual perception. When the reverse biorthogonal wavelet transform (rbio3.9) is used together with the other operations and the Gaussian smoothing filter, we obtain a smoother, less blurry image than the input image, according to the histogram and the measurements taken for each input image.

4.3 Performance of the Proposed System


After merging the methods to enhance the input blurred image, we found that the metrics (MD, PSNR, and MSE) were good and the total processing time was acceptable, as shown in Table (4.1).


Table (4.1): Results of the proposed system. Columns: case, blurred image with its histogram, de-blurred image with its histogram, and measures on the image.

Image
1
PCC: 3.6064e+04

MD: 142

MSE: 1.5743e+03

PSNR: 16.1601

TIME:

Elapsed time is
8.835409 seconds

Image
2
PCC : 4.8278e+04

MD: 140

MSE: 853.8346

PSNR: 18.8171

TIME:


Elapsed time is
8.709530 seconds

Image

3 PCC : 8.4931e+04
MD: 233
MSE: 765.4394
PSNR: 19.2917
TIME:
Elapsed time is
13.323320 seconds


Image

4 PCC: 4.3427e+04
MD: 191
MSE: 644.1223
PSNR: 20.0411
TIME:
Elapsed time is
21.200954 seconds

Image
5
PCC: 3.7576e+04
MD: 142
MSE: 966.4136
PSNR: 18.2792
TIME:

Elapsed time is
6.646867 seconds


Image

6 PCC: 3.4789e+04
MD: 125
MSE: 3.1559e+03
PSNR: 13.1396
TIME:

Elapsed time is
7.467728 seconds

Image

7 PCC: 1.5015e+05
MD: 135
MSE: 761.0532
PSNR: 19.3167
TIME: Elapsed
time is 22.388760
seconds


Image

8 PCC: 3.7381e+04
MD: 179
MSE: 1.8062e+03
PSNR: 15.5631
TIME: Elapsed
time is 7.095356
seconds

Image
9
PCC: 1.4057e+05
MD: 218
MSE: 807.1285
PSNR: 19.0614
TIME: Elapsed
time is 22.916523
seconds


Image
10
PCC: 4.4216e+04
MD: 207
MSE: 684.0111
PSNR: 19.7802
TIME: Elapsed
time is
7.624593 seconds

Image
11
PCC: 1.4107e+05
MD: 189
MSE: 858.4725
PSNR: 18.7935
TIME: Elapsed
time is 23.379268
seconds


Image

12 PCC: 4.4504e+04
MD: 120
MSE: 685.0607
PSNR: 19.7735
TIME: Elapsed
time is 8.978696
seconds

The preceding experiments show that the discrete wavelet transform (rbio3.9), combined with the other operations, is useful in removing blur from the images, and that the result is affected by the type of image and the type of blur. More de-blurred image results are given in Appendix 1.


CHAPTER FIVE
CONCLUSION AND FUTURE WORK
5.1 Conclusions

There is a set of points concluded from the proposed system; they can be summarized as follows:

1- The proposed system succeeds in removing blur from the image in comparison with previous work: the lowest reported PSNR value there is 26.10, while in our work the lowest value is 13.139.
2- The time required to produce the de-blurred image depends on the size and nature of the input images.
3- Most previous research in the field of image deblurring used the PSNR measure to test the results, and some also used MD and MSE. In our work, we used four measures to assess the algorithm's performance.
4- Wavelet de-blurring is applied to the LL approximation coefficients because they contain the important information of the image. The approximation coefficients are low-frequency and include less noise, while the detail coefficients (LH, HL, HH) are high-frequency and contain more blurring.
5- The de-blurring of the proposed system can be applied to a sub-band of the image or to the whole image.
6- Wavelet de-blurring is successful for deblurring images and makes the image smoother, sharper, and clearer.

5.2 Future Works

There are a number of suggestions proposed for future work:

The proposed system can be applied to other multimedia, such as video: it can be used for video deblurring, where a shot is a sequence of frames of the same scene from which the optimal frame is selected based on PSNR, MD, and MSE.
De-blurring can also serve as a preprocessing step for several applications, such as biometrics.
References
[1] Mohapatra, Biswa Ranjan, Ansuman Mishra and Sarat Kumar Rout. "A
Comprehensive Review on Image Restoration Techniques". International Journal
of Research in Advent Technology, Vol.2, No.3, pp. 101-102, (2014).
[2] Al-amri, Salem Saleh, N.V. Kalyankar and Dr. Khamitkar S.D." Deblured
Gaussian Blurred Images". Journal of Computing, Vol.2, No.4, pp. 33-34, (2010).
[3] Whyte Oliver, Josef Sivic, and et al., "Deblurring shaken and partially saturated
images." International journal of computer vision 110.2, pp.185-201(2014).
[4] Saini, Sonia and Lalit. "Image Processing Using Blind Deconvolution Deblurring
Technique”. International Journal of Applied Engineering and Technology,
Vol.4, pp.115-118, (2014).
[5] Biswas Prodip, Abu Sufian Sarkar, and et al., "Deblurring images using a Wiener
filter". International Journal of Computer Applications 109.7, pp. 36-38(2015).
[6] Kumari Neeraj, and Shelly Chugh. "Reduction of Noise from Audio Signals
Using Wavelets." International Journal for Advance Research in Engineering and
Technology 3 (2015).
[7] Ruikar Sachin, and Rohit Kabade, "Image Deblurring and Restoration using
Wavelet Transform." International Journal of Control Theory and Applications
(IJCTA) 9.22, pp. 95-104(2016).
[8] Mastriani Mario, "Denoising based on wavelets and deblurring via self-
organizing map for Synthetic Aperture Radar images." arXiv preprint
arXiv:1608.00274 (2016).

[9] Varsha Sharma and Ajay Goyal,” An Effective Technique of Image


Degradation using DWT based Padding Kernel Detection”. Vol .170 ,No.3,
July (2017).
[10] Piyush joshi and surya Prakash,” Continuous Wavelet Transform Based

No-Reference Image Quality Assessment for Blur and Noise Distortions”. Vol.
6, 2846585(2018).
[11] Xiaoyang li (Rebecca). “Multiplex Channels De-noising and De-blurring by
Wavelet Transform”. 2018-5 University of Houston.
[12] Piyush Joshi, and et al. “Continuous Wavelet Transform Based No-Reference
Image Quality Assessment for Blur and Noise Distortions”. IEEE Access (2018).
[13] AL_Sultani, Hamed Sadeq Mahdi Saoud. "Digital Image Compression Using
Discrete Cosine Transform Method". M.Sc. Thesis, College of Sciences,
Al-Mustansiriya University, (2006).
[14] Xiaoyin, Mr. Zhang. "Efficient Architecture for Discrete Wavelet Transform
Using Daubechies". M.Sc. Thesis, Electrical Engineering, Songkla University,
(2010).
[15] Jana, Debalina and Kaushik Sinha. "Wavelet Thresholding for Image Noise
Removal". International Journal on Recent and Innovation Trends in Computing
and Communication, Vol.2, No.6, pp.1400 – 1405, (2014).
[16] Saini, Neeraj, and Pramod Sethy. "Performance based analysis of wavelets
family for image compression-a practical approach." Int J Comput Appl 129.9,
pp. 206-223(2015).
[17] Prasad, B. Rajendra, K. Vinayaka Kota, and B. Mysura Reddy. "Biorthogonal
wavelet transform digital image watermarking." International Journal of
Advanced Computer Research 2.3, pp. 84-89(2012).
[18] Neeraj, and Pramod Sethy. "Performance based analysis of wavelets family
for image compression-a practical approach." Int J Comput Appl 129.9, pp. 206-
223(2015).
[19] Kamthan, Surbhi and Jaimala Jha. "A Performance Based Analysis of
Various Image Deblurring Techniques" .AEIJST, Vol.3, No.12, pp.1-5, (2015).


[20] Wu, Jiunn-Lin, Chia-Feng Chang and Chun-Shih Chen. "An Adaptive
Richardson-Lucy Algorithm for Single Image Deblurring Using Local Extreme
Filtering" .Journal of Applied Science and Engineering, Vol.16, No.3, pp.
270271, (2013).
[21] Rohina Ansari, Himanshu Yadav, Anurag Jain,” A Survey on Blurred Images
with Restoration and Transformation Techniques”, International Journal of
Computer Applications (0975 – 8887) Vol. 68– No.22, April (2013).
[22] Al-amri, Salem Saleh, N.V. Kalyankar and Dr. Khamitkar S.D. "Deblured
Gaussian Blurred Images". Journal of Computing, Vol.2, No.4, pp. 33-34, (2010).
[23] Karan, Leena. "Study of Deblurring Techniques for Restored Motion Blurred
Images”. New Man International Journal of Multidisciplinary Studies, Vol.3,
No.5, pp.15-17, (2016).
[24] Dejee Singh1, Mr R. K. Sahu, “A Survey on Various Image Deblurring
Techniques”, International Journal of Advanced Research in Computer and
Communication Engineering Vol. 2, Issue 12, December (2013).
[25] Jiunn-Lin Wu, Chia-Feng Chang, Chun-Shih Chen,” An Improved
Richardson-Lucy Algorithm for Single Image Deblurring Using Local Extrema
Filtering”, IEEE International Symposium on Intelligent Signal Processing and
Communication Systems (ISPACS 2012) November 4-7, (2012).
[26] Vankawala, Fagun, Amit Ganatra, and Amit Patel. "A Survey on different
Image Deblurring Techniques." International Journal of Computer Applications
116.13 (2015).
[27] A. Bennia and S. M. Riad, “Filtering Capabilities and Convergence of the
Van-Cittert Deconvolution Technique”, IEEE Transactions on Instrumentation
and Measurement, vol. 41, no. 2, (1992).
[28] L. Lang and Y. Xu, “Adaptive Landweber method to deblur images”, IEEE
Signal Processing Letters, vol. 10, no. 5, (2003).

[29] O. Marques, “Practical image and video processing using MATLAB”, John
Wiley & Sons, (2011).
[30] Zohair Al-Ameen, Ghazali Sulong and Md. Gapar Md. Johar,” A
Comprehensive Study on Fast image Deblurring Techniques”, International
Journal of Advanced Science and Technology Vol. 44, July, (2012).
[31] A. Levin, Y. Weiss, F. Durand, and W. T. Freeman, "Understanding Blind
Deconvolution Algorithms," IEEE Transactions on Pattern Analysis and
Machine Intelligence, vol. 33, no. 12, pp. 2354-2367, (2011).
[32] P. Campisi and K. Egiazarian, Blind image deconvolution: theory and
applications. (2016).
[33] S. Yadav, C. Jain, and C. Aarti, "Evaluation of Image Deblurring
Techniques," International Journal of Computer Applications, vol. 139, no. 12,
pp. 32-36, (2016).
[34] A. P. Abhilasha, S. Vasudha, N. Reddy, V. Maik, and K. Karibassappa, "Point
Spread Function Estimation and Deblurring Using Code V," IEEE international
Conference on Electronics, Information and Communications, pp. 1-4, (2016).
[35] M. Poulose, "Literature Survey on Image Deblurring Techniques,"
International Journal of Computer Applications Technology and Research, vol.
2, no. 3, pp. 286-288, (2013).
[36] D. Singh and R. K. Sahu, "A Survey on Various Image Deblurring
Techniques," International Journal of Advanced Research in Computer and
Communicational Engineering, vol. 2, no. 12, pp. 4736-4739, (2013).
[37] L. Zhong, S. Cho, D. Metaxas, S. Paris, and J. Wang, "Handling Noise in
Single Image Deblurring using Directional Filters," In proceeding of IEEE
conference on computer vision and pattern recognition, pp. 612-619, (2013).


[38] J. Pan, D. Sun, and M. Yang, "Blind Image Deblurring Using Dark Channel
Prior," IEEE Conference on Computer Vision and Pattern Recognition, pp. 1628-
1636, (2016).
[39] Gallagher, Andrew C. "Detection of linear and cubic interpolation in JPEG
compressed images." Computer and Robot Vision, 2005. Proceedings. The 2nd
Canadian Conference on. IEEE, (2005).
[40] “Practical Image and Video Processing Using MATLAB®”. By Oge
Marques. © 2011 John Wiley & Sons, Inc. (2011).
[41] Mubarak Shah, “Fundamentals of Computer Vision”, Computer Science
Department, University of central Florida, Orlando, FL 32816 (1997).
[42] Memon, F., Unar, M. A., & Memon, S. (2016). Image quality assessment for
performance evaluation of focus measure operators. Mehran University Research
Journal of Engineering & Technology, ISSN 0254-7821, Vol. 34, No. 4, October,
2016.
[43] A. F. Hassan, D. Cailin, and Z. M. Hussain (2014). An information theoretic
image quality measure: Comparison with statistical similarity. Journal of
Computer Science, Vol. 10, No. 11, pp. 2269-2283.


Appendix 1

Columns: case, blurred image with its histogram, de-blurred image with its histogram, and measures on the image.

Image
1
PCC: 7.4777e+05

MD: 177

MSE: 408.4957

PSNR: 22.0189

TIME:

Elapsed time is
98.549830 seconds


Image
2
PCC : 3.6489e+04

MD: 227

MSE: 2.0078e+03

PSNR: 15.1037

TIME:

Elapsed time is
8.243140 seconds

Image

3 PCC : 3.9541e+04
MD: 143
MSE: 551.9738
PSNR: 20.7116
TIME:
Elapsed time is
8.826724 seconds


Image

4 PCC: 1.4403e+05
MD: 218
MSE: 968.0428
PSNR: 18.2719
TIME:
Elapsed time is
23.302129 seconds

Image
5
PCC: 2.8133e+04
MD: 219
MSE: 1.6953e+03
PSNR: 15.8382
TIME:

Elapsed time is
8.381572 seconds


Image

6 PCC: 1.4752e+05
MD: 125
MSE: 1.3870e+03
PSNR: 16.7100
TIME:

Elapsed time is
21.911811 seconds

Image

7 PCC: 4.7462e+04
MD: 232
MSE: 1.0988e+03
PSNR: 17.725
TIME: Elapsed
time is 7.260409
seconds


Image

8 PCC: 4.0075e+04
MD: 241
MSE: 1.4375e+03
PSNR: 16.5548
TIME: Elapsed
time is 7.092364
seconds

Image
9
PCC: 4.5288e+04
MD: 112
MSE: 743.6509
PSNR: 19.4171
TIME: Elapsed
time is 6.606667
seconds


Image
10
PCC: 2.2733e+06
MD: 169
MSE: 182.1088
PSNR: 25.5275
TIME: Elapsed
time is 252.169154
seconds.

Image
11
PCC: 4.0770e+04
MD: 212
MSE: 936.2864
PSNR: 18.4167
TIME: Elapsed
time is 18.4167
seconds


Image

12 PCC: 3.6101e+04
MD: 157
MSE: 1.0370e+03
PSNR: 17.9728
TIME: Elapsed
time is 14.675177
seconds.

Image

13 PCC: 2.9272e+05
MD: 185
MSE: 447.0674
PSNR: 21.6271
TIME: Elapsed
time is 35.282512
seconds.


Image

14 PCC: 1.1643e+05
MD: 174
MSE: 717.4452
PSNR: 19.5729
TIME: Elapsed
time is 17.946415
seconds.

Abstract

In general, noise can be defined as unwanted information introduced into an image during transmission or acquisition, and blur is considered a special type of noise. Blurring effects reduce the effectiveness of vision, so removing the blur from an image facilitates further processing. The problem with deblurring in the spatial domain is that it smooths the data and the edges. Wavelet deblurring is the process in which blur is removed using wavelets in the frequency domain; it is therefore used to preserve the edges of the image, suppress types of noise, and preserve important image features. The proposed system comprises a set of steps: zooming, a Gaussian filter, a Wiener filter, blind deconvolution, the discrete wavelet transform, nearest-neighbor interpolation, the NTSC color model, and image sharpening. Several operations were applied: zooming was applied to enlarge the image and update its intensity values; a Gaussian filter was applied to smooth the image; the Wiener filter algorithm and the blind deconvolution algorithm were applied to remove some of the blur from the image; the discrete wavelet transform was applied to the image; interpolation was used; the color model of the image was converted from RGB to YIQ; and image sharpening was applied. Finally, some measures were applied to the images to test the quality of the proposed system (Pearson correlation coefficient, maximum difference, mean square error, and peak signal-to-noise ratio). The images used in this thesis are color images from a public dataset with different dimensions, and the proposed system is implemented using the MATLAB 2018 programming language.

The proposed algorithm was applied to 151 images and showed promising results, judged by the values of the measures and the appearance of the images, and it processed all the blurred images.
Republic of Iraq

Ministry of Higher Education and Scientific Research

University of Kufa

Faculty of Computer Science and Mathematics

Department of Computer Science

A Hybrid Method for Enhancing Blurred Images

A thesis submitted to the Council of the Faculty of Computer Science and Mathematics / University of Kufa in partial fulfillment of the requirements for the degree of Master of Science in Computer Science, by the student:

Rasha Muthna Jameel Al-Fatlawy

Supervised by

Asst. Prof. Dr. Asaad Noori Hashim Salman Al-Shareefi

1441 A.H    2020 A.D