
An Industry Oriented Mini Project Report on

SPECKLE NOISE REDUCTION OF SENTINEL-1 SAR DATA USING FAST FOURIER TRANSFORM TEMPORAL FILTERING TO MONITOR PADDY FIELD AREA

Submitted in partial fulfilment of the requirements for the award of the degree of

BACHELOR OF TECHNOLOGY

IN

ELECTRONICS AND COMMUNICATION ENGINEERING


Submitted By

M. RAMESH 208R1A04L3

M. BHAVANA 208R1A04L4

N. RAHUL 208R1A04L5

N. DURGAPRASAD 208R1A04L6

Under the Guidance of

Dr. S. POONGODI
Professor
ECE DEPARTMENT

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING

CMR ENGINEERING COLLEGE

UGC AUTONOMOUS

(Approved by AICTE, Affiliated to JNTUH, Accredited by NBA, NAAC)

Kandlakoya (v), Medchal, Telangana.

2023-24
CMR ENGINEERING COLLEGE

UGC AUTONOMOUS

(Approved by AICTE, Affiliated to JNTUH, Accredited by NBA, NAAC)


Kandlakoya (v), Medchal, Telangana.

Department of Electronics and Communication Engineering

CERTIFICATE
This is to certify that the Industry Oriented mini-project work entitled
“SPECKLE NOISE REDUCTION OF SENTINEL-1 SAR DATA USING FAST
FOURIER TRANSFORM TEMPORAL FILTERING TO MONITOR PADDY
FIELD AREA” is being submitted by M. RAMESH bearing Roll No: 208R1A04L3,
M. BHAVANA bearing Roll No: 208R1A04L4, N. RAHUL bearing Roll
No: 208R1A04L5 and N. DURGAPRASAD bearing Roll No: 208R1A04L6 of the IV-1
semester batch, Electronics and Communication Engineering, and is a record of bonafide
work carried out by them during the academic year 2023-2024. The results embodied in
this report have not been submitted to any other University for the award of any degree.

INTERNAL GUIDE                                HEAD OF THE DEPARTMENT

Dr. S. POONGODI                               Dr. SUMAN MISHRA

EXTERNAL EXAMINER
ACKNOWLEDGEMENT
We sincerely thank the management of our college, CMR ENGINEERING
COLLEGE, for providing support during our project work.

We derive great pleasure in expressing our sincere gratitude to our principal
Dr. A. S. REDDY for his timely suggestions, which helped us to complete the project
work successfully.

We take this opportunity to express our gratitude to Dr. SUMAN MISHRA,
Head of the Department, ECE, for his consistent encouragement during the progress of
this project.

We take it as a privilege to thank our project coordinator Dr. T.
SATYANARAYANA, Associate Professor, Department of ECE, for the ideas that helped
us complete the project work, and we also thank him for his continuous guidance,
support and unfailing patience throughout the course of this work.

We sincerely thank our project internal guide Dr. S. POONGODI, Professor,
Department of ECE, for her guidance and encouragement in carrying out this project.
DECLARATION

We hereby declare that the project entitled “SPECKLE NOISE REDUCTION
OF SENTINEL-1 SAR DATA USING FAST FOURIER TRANSFORM
TEMPORAL FILTERING TO MONITOR PADDY FIELD AREA” is the work
done by us on campus at CMR ENGINEERING COLLEGE (UGC Autonomous),
Kandlakoya, during the academic year 2023-2024, and is submitted as an Industry
Oriented Mini Project in partial fulfilment of the requirements for the award of the
degree of BACHELOR OF TECHNOLOGY in ELECTRONICS AND
COMMUNICATION ENGINEERING from JAWAHARLAL NEHRU
TECHNOLOGICAL UNIVERSITY, HYDERABAD.

M. RAMESH 208R1A04L3

M. BHAVANA 208R1A04L4

N. RAHUL 208R1A04L5

N. DURGAPRASAD 208R1A04L6
CONTENTS

CHAPTERS PAGE

LIST OF ABBREVIATIONS i

LIST OF FIGURES ii

LIST OF TABLES iii

ABSTRACT iv

CHAPTER-1 INTRODUCTION

1.1 Overview of the Project 1


1.2 Objective of the project 3
1.3 Problem Statement 3
1.4 Organization of the project 4

CHAPTER-2 LITERATURE SURVEY

2.1 Introduction to Literature Survey 5

CHAPTER-3 EXISTING SYSTEM

3.1 Introduction 8

3.2 Drawbacks 9

CHAPTER-4 PROPOSED SYSTEM

4.1 Introduction 12

4.2 Proposed Method 13

CHAPTER-5 SOFTWARE DESCRIPTION

5.1 Introduction to Python 16

5.2 Python Installation and Setup 27

5.3 Wavelets 34

5.3.1 Modules 41
CHAPTER-6 RESULT AND DISCUSSION 44

CHAPTER-7 APPLICATIONS AND ADVANTAGES

7.1 Applications 48

7.2 Advantages 50

CHAPTER-8 CONCLUSION AND FUTURE SCOPE 53

REFERENCES 54

ANNEXURE 56
LIST OF ABBREVIATIONS

ACRONYM      FULL FORM

ANN ARTIFICIAL NEURAL NETWORKS

CNN CONVOLUTIONAL NEURAL NETWORKS

CWT COMPLEX WAVELET TRANSFORM

CWT CONTINUOUS WAVELET TRANSFORM

DWT DISCRETE WAVELET TRANSFORM

FFT FAST FOURIER TRANSFORM

PSNR PEAK SIGNAL TO NOISE RATIO

MSE MEAN SQUARE ERROR

SRCNN SUPER RESOLUTION CONVOLUTION NEURAL NETWORK

SSIM STRUCTURAL SIMILARITY INDEX

SWT STATIONARY WAVELET TRANSFORM

LIST OF FIGURES

FIGURE NO DESCRIPTION PAGE NO


Fig 4.2.1 Processing Flow of Proposed Method 13

Fig 5.1.1 Operators in Python 17

Fig 5.2.1 Python Releases for Windows 28

Fig 5.2.2 Install Python 3.10.10 (64-bit) 29

Fig 5.2.3 Optional Features 30

Fig 5.2.4 Advanced Options 31

Fig 5.2.5 Python 3.10.10 (64-bit) Setup 33

Fig 5.2.6 Python IDLE Shell 3.10.10 34

Fig 6.1 Original Image 44

Fig 6.2 Image Resize 45

Fig 6.3 FastNDenoising 45

Fig 6.4 Gaussian Blur 46

Fig 6.5 Algorithm comparison 46

LIST OF TABLES

TABLE NO NAME OF THE TABLE PAGE NO

Table 4.2.1 Sentinel-1 data specification 14

Table 5.1.1 Arithmetic operators 18

Table 5.1.2 Bitwise operators 18

Table 5.1.3 Identity operators 18

Table 5.1.4 Logical operators 19

Table 5.1.5 Membership operators 19

Table 5.1.6 Assignment operators 20

Table 5.1.7 Relational operators 20

Table 5.3.1 Wavelet Family 36

Table 6.1 Performance comparison 47

ABSTRACT
Synthetic Aperture Radar (SAR) images can be acquired in any weather
condition. It is almost impossible to obtain a cloud-free optical image every month in
Indonesia, especially in Subang, so the use of SAR imagery for the monthly monitoring
of paddy is recommended. The disadvantage of SAR images is noise, known as speckle
noise. This noise reduces the quality of the image, so reducing the speckle noise is
necessary. This research proposes an FFT-based algorithm to remove the speckle noise.
Because the frequency pattern of the FFT is periodic and the research focuses on the
paddy field area, one year of paddy growth data was used. There are from one to three
planting times a year in the research area; most of the area is planted twice a year. Six
scenarios of the FFT filter were proposed and investigated to select the optimum
scenario. The scenarios differ in the number of FFT components used in filtering: the
first scenario used FFT1 to FFT2, and the sixth scenario used FFT1 to FFT8. The
performance was measured by the correlation between the original image and the result
of FFT filtering. The results show that increasing the number of FFT components in the
filtering process increases the correlation. The minimal scenario is FFT1-FFT3
filtering, with an average correlation of 87%, but some acquisitions still have a
correlation of less than 85%. The optimum scenario is FFT1-FFT5 filtering, with an
average correlation of 92%, and all correlations are above 85% for all data during the
year. Using the optimum scenario, the backscatter trend of paddy growth can be
identified more easily, so this technique is recommended for paddy growth monitoring.

CHAPTER-1

INTRODUCTION

1.1 OVERVIEW OF THE PROJECT


High cloud cover in equatorial regions such as Indonesia makes it difficult to
obtain cloud-free optical remote sensing data. This research uses Synthetic Aperture
Radar (SAR) data, which can penetrate haze and cloud but contains speckle noise that
has to be reduced. Noise reduction can be done using spatial-domain or frequency-domain
filtering techniques. In the spatial domain, a box filter, median filter or local statistic
filter can be used. Most research on noise reduction of SAR images uses the spatial
domain, such as Lee filtering, a combined Lee and median filter, the Local Adaptive
Median Filter, and Gaussian Markov Random Fields. The use of the frequency domain
is limited, for example iterative thresholding of wavelet coefficients. Other researchers
focus on preserving image edges, or focus on pasture areas. On the other hand, several
studies on FFT-based estimation have been carried out; the FFT is used in many
applications, including earth science, chemistry, communications, and signal
processing. FFT processing needs large computational capacity, so careful design is
needed to make processing faster. The FFT can be computed as a continuous or discrete
FFT, and as a fixed-point or integer FFT. This research proposes a noise reduction
algorithm using the FFT of temporal data. A one-year period of Sentinel-1 data was
used, corresponding with the period of the paddy growth cycle. The optimum result was
selected by comparing six scenarios of FFT filtering using the correlation between the
original image and the results of FFT filtering. It is hoped that this research can be
implemented over the whole area of Indonesia to monitor paddy fields.

Synthetic Aperture Radar (SAR) images employ active sensors that detect
microwave radiation, which has a longer wavelength than the visible light detected by
passive sensors, such as the optical sensor. Therefore, the surface of the Earth can be
observed at high resolution, regardless of weather conditions and sun illumination. The
active sensor of SAR is also used on satellites and unmanned aerial vehicles (UAVs), as
the development of active sensor technology applied to SAR images has enabled
high-resolution target detection and identification. SAR images are widely used in a
variety of fields, such as the military, agriculture, weather forecasting, and
environmental analysis. Due to the advantages of SAR images and their various
applications, research on the technology behind SAR images is being actively
conducted around the world.

In contrast with the optical sensor, the active SAR sensor is accompanied by
speckle noise that arises from the coherent imaging mechanism. Speckle noise in SAR
images is generated by the random interference of many elementary reflectors within
one resolution cell. This noise has different features from the noise observed in images
obtained by passive sensors, such as the optical sensor. Speckle noise appears as a form
of multiplicative noise in SAR images, and it has the characteristics of a Rayleigh
distribution. SAR images are used by observers to extract information and identify
targets. Speckle noise degrades SAR images and thus interferes with the transfer of
image information to the observer. Therefore, the development of effective filtering
methods for the reduction of speckle noise is critical for the analysis of information
contained in SAR images.

Numerous studies have been conducted with the aim of extracting image
information from SAR images by removing speckle noise. Five main categories of
methods were applied in these studies: linear filtering, nonlinear filtering, partial
differential equation (PDE) filtering, hybrid methods, and filtering methods based on
the discrete wavelet transform (DWT).

The linear filter convolves an image with a symmetric mask and then
reconstructs each pixel value as a weighted average of the neighboring pixel values. The
mean filter and the Gaussian filter are typical linear filtering techniques that are simple
and effective in smoothing speckle noise. The mean filter replaces the target pixel,
located at the center of the window, with the mean of the surrounding pixel values. The
mean filter exhibits low edge-preservation performance because it does not distinguish
edge areas from flat, homogeneous areas in the image.

The Gaussian filter uses a two-dimensional (2D) Gaussian function as a
convolution mask. This filtering technique uses the Gaussian function as the mask
weight, giving a larger weight to the center of the mask. The Gaussian filter shows
excellent performance in removing noise with a small variance; however, a blurring
phenomenon appears in the edge areas.
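As an illustration of the two linear filters described above, the following is a minimal sketch on synthetic data, assuming NumPy and SciPy are available (the window size, Gaussian sigma, and the gamma speckle model are illustrative choices, not the report's actual parameters):

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

# Synthetic SAR-like intensity patch with multiplicative speckle:
# observed = clean * speckle, where the speckle has unit mean.
rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)
clean[:, 32:] = 200.0                      # a vertical edge
speckle = rng.gamma(shape=4.0, scale=0.25, size=clean.shape)  # mean 1
observed = clean * speckle

# Mean filter: each pixel becomes the average of its 5x5 neighborhood.
mean_filtered = uniform_filter(observed, size=5)

# Gaussian filter: weighted average with the largest weight at the center.
gauss_filtered = gaussian_filter(observed, sigma=1.5)

# Both smooth the speckle: the variance in a flat region drops sharply.
flat = np.s_[8:24, 8:24]                   # region away from the edge
print(observed[flat].var() > mean_filtered[flat].var())
print(observed[flat].var() > gauss_filtered[flat].var())
```

Inspecting the edge columns of either output would also show the blurring effect mentioned above: both filters smear the transition between the two halves.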

1.2 OBJECTIVE OF THE PROJECT

The objective of "Speckle Noise Reduction of Sentinel-1 SAR Data Using Fast
Fourier Transform Temporal Filtering to Monitor Paddy Field Area" is to improve the
quality of Synthetic Aperture Radar (SAR) data acquired by the Sentinel-1 satellite in
order to better monitor and analyze changes in paddy fields, and to obtain more precise
and consistent information about changes in these agricultural areas. The improved data
can be used for various applications, including agricultural monitoring, land use
planning, and environmental assessment.

1.3 PROBLEM STATEMENT

Synthetic Aperture Radar (SAR) imagery from Sentinel-1 satellites is a valuable


resource for monitoring paddy fields and assessing agricultural conditions. However,
SAR images often suffer from speckle noise, which can compromise the accuracy of
monitoring and analysis.

 Speckle Noise in SAR Data: Sentinel-1 SAR images of paddy fields are
inherently affected by speckle noise, a granular interference that hinders the
extraction of accurate information.
 Impact on Agricultural Monitoring: Speckle noise in SAR images poses a
significant challenge for reliable monitoring of paddy fields, as it can obscure
crucial details related to crop growth, water levels, and other dynamic factors.
 Specialized Speckle Reduction Techniques: There is a need for specialized
algorithms capable of effectively reducing speckle noise in SAR data while preserving
the essential features, such as edges and fine details, crucial for agricultural
monitoring.
 Temporal Dynamics and Continuous Monitoring: Monitoring paddy fields requires
consideration of temporal dynamics, capturing changes over time. A method that
integrates temporal information through Fast Fourier Transform (FFT) is desirable.

1.4 ORGANIZATION OF THE PROJECT

Chapter 1 deals with the overview and objective of the project, and the problem statement.

Chapter 2 deals with the literature survey.

Chapter 3 deals with the introduction to the existing system.

Chapter 4 deals with the proposed system and its method.

Chapter 5 deals with the software used in the project.

Chapter 6 deals with the results of the project.

Chapter 7 deals with the advantages and applications of the project.

Chapter 8 deals with the conclusion and future scope of the project.

CHAPTER-2

LITERATURE SURVEY

2.1 INTRODUCTION TO LITERATURE SURVEY

The following base papers deal with various techniques of speckle suppression
in SAR images. These include Lee filtering (Lee, 1980), Kuan filtering (Kuan et al.,
1985), Frost filtering (Frost et al., 1982) and gamma MAP filtering (Hsiao et al., 2002).
The Lee and Kuan filters are adaptive mean filters, whereas the Frost filter is an
adaptive weighted mean filter. The GMAP filter requires prior knowledge of the
probability density function (PDF) of the image before it is applied (Jia et al., 2019).
The image reflectivity is considered to follow a gamma distribution instead of a
Gaussian distribution. The gamma distribution is a two-parameter class of continuous
probability density functions used in statistical hypothesis testing; it includes special
cases such as the exponential, Erlang and chi-square distributions. The Gaussian
distribution, otherwise called the normal distribution, is a bell-shaped curve in which
the data points are distributed symmetrically about the mean.

A radar cross section model for designing a speckle noise removal filter is
introduced in Achim et al. (2006), based on the heavy-tailed Rayleigh density function.
The radar cross section of a target is the equivalent area seen by the radar: the fictional
area that intercepts an amount of energy which, if scattered evenly in all directions,
would produce a radar echo equal to that of the target. Bayesian filters are explained
using the Bayesian theorem, which describes a posterior probability in terms of a prior
PDF. (In the e-mail domain, a Bayesian filter is software that evaluates the header and
content of an arriving message and determines the likelihood that it is junk using
Bayesian reasoning, also known as Bayesian analysis; anti-virus tools should be used in
combination with such filtering.)

Wavelet-based despeckling algorithms are proposed in Gleich and Datcu (2009)
and Argenti et al. (2006), developed using MAP estimation and undecimated wavelet
decomposition. Here, the assumption is that the PDF of each wavelet coefficient is
generalized Gaussian (GG). The parameters of the GG PDF are used to obtain space
variation in every wavelet frame; therefore, they can be adapted to the spatial context of
the image in addition to orientation and scale. The GARCH method is used for
prediction and can be applied to a range of financial data, such as market indicators; it
is frequently applied by organizations to predict the volatility of stock, bond and
macroeconomic indicator values. A 2-D GARCH model is used for despeckling SAR
images in Amirmazlaghani et al. (2008): a logarithmic transformation of the original
SAR image is decomposed into the multi-scale wavelet domain, and the wavelet
coefficients of SAR images have non-Gaussian statistics that are fully specified by this
model. A sequential Monte Carlo method proposed in Gleich and Datcu (2009) is a
model-based Bayesian approach.

Second-generation wavelets such as the bandelet (Lu et al., 2014), chirplet
(Lian & Jiang, 2017) and contourlet (Metwalli et al., 2014) have been developed in the
last few years. Despeckling SAR images using the contourlet transform (Li et al., 2006)
and the bandelet transform (Sveinsson et al., 2008) gives better despeckling results than
wavelet-based methods. The contourlet transform is a two-dimensional image
transformation technique: the contours of natural images, which are their most
prominent elements, can be captured with just a few parameters. The bandelet is a
multiresolution linear transformation that maintains the geometric integrity of images
and textures; it takes advantage of the geometric regularity of an image's structure and
is useful for analyzing image edges and textures. In the wavelet domain, the noise and
image models defined in Gleich and Datcu (2009) and Argenti and Alparone (2002) are
evaluated using MAP estimation of the denoised image. A maximum a posteriori
(MAP) estimate in Bayesian inference is an estimate of an unknown quantity that
equals the mode of the posterior distribution; based on experimental data, MAP
estimation may be used to produce point estimates of a multitude of parameters.

Speckle noise is a type of multiplicative noise present in the SAR image (Yuan
et al., 2014). Speckle is a granular interference that occurs naturally in coherent imaging
such as synthetic aperture radar (SAR), clinical ultrasonography and optical coherence
tomography, and decreases their clarity; the coherent summation of backscattered
signals from many distributed scatterers is the reason. A RADARSAT-2 dataset was
subjected to speckle filtering, which included modified Lee, boxcar, enhanced
Lee-Sigma and IDAN filters. Speckle filtering must reduce random noise while
maintaining the spatial detail required for analysis; hybrid polarimetric information is
used to assess speckle filtering. Following other wavelet denoising techniques, Liu et al.
(2017) use the logarithmic transform to change the multiplicative noise model into an
additive noise model (Achim et al., 2006). The logarithmically transformed image is
modeled using approaches such as zero-mean Gaussian distributions and zero-location
Cauchy distributions (Bhuiyan et al., 2007), to derive the MAP estimator and the
minimum mean absolute error estimator.

The Wiener filter can be seen as a special case of the Kalman filter. A recursive
unscented Kalman filter (UKF) was presented in Subrahmanyam et al. (2008); it does
not require any parameter estimation to incorporate a non-Gaussian prior. The
unscented Kalman filter is a derivative-free nonlinear filtering technique that, unlike the
EKF or LKF, employs the unscented transform (UT) instead of analytic linearization of
the nonlinear functions. The Kalman filter algorithm employs a dynamical model to
describe the state evolution and is the optimal approach for estimating the state when
the system model is linear and the process and measurement noises are additive and
Gaussian. The well-known extended Kalman filter provides a nonlinear and
non-Gaussian model; it relies on linearization of the evolution and measurement
models using the Taylor series. The particle filtering method (Gleich & Datcu, 2009)
presents a solution to nonlinear and non-Gaussian filtering problems using numerical
Bayesian techniques. Particle filters were adopted for solving tracking problems
(Godsill & Clapp, 2001); recently they have been applied to problems such as source
separation (Everson & Roberts, 2000), object collision (Tamminen & Lampinen, 2006),
segmentation (Li et al., 2006) and road detection (Chen et al., 2006). The Kalman filter
can be applied for despeckling SAR images (Geling & Ionescu, 1994), but the filter
parameters modify the local area of the SAR image. Using the Kalman filter, the
segmented regions of SAR images are despeckled (Tsuchida et al., 2003). A
two-dimensional adaptive block Kalman filter is presented in Azimi-Sadjadi and
Bannour (1991). Despeckling of SAR images with a particle filter is proposed in
Gencaga et al. (2005).

Despeckling of SAR images using total variation (TV) models is presented in
Woo and Yun (2011): an alternating minimization algorithm with shift technique
(AMAST) based on the Lagrangian function is used to remove the speckle noise. A
linearized proximal alternating minimization algorithm (LPAMA) (Yun & Woo, 2012)
is used to reduce the multiplicative noise in SAR images; it is an m-th root transformed
total variation model. Two despeckling models are presented in Feng et al. (2014)
based on total generalized variation (TGV) regularization. These two models are
developed from a primal-dual method with a TGV penalty: the first is called the
PDTGV exponential model and the second the PDTGV I-divergence model. The
PDTGV models can obtain smooth regions and discontinuities at object boundaries,
and they remove the staircasing artifacts produced by TV-based algorithms (Ahmed et
al., 2012; Balamurugan et al., 2019; Nguyen et al., 2018). The PDTGV I-divergence
model avoids any inverse operations or inner iterations, whereby it obtains a
considerable speed advantage.
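The logarithmic transform used by several of the wavelet-based methods above rests on the identity log(x·n) = log x + log n, which turns the multiplicative speckle model into an additive one. A minimal NumPy sketch of this step, on synthetic data (the gamma speckle model is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
clean = rng.uniform(50.0, 200.0, size=(32, 32))               # reflectivity
speckle = rng.gamma(shape=4.0, scale=0.25, size=clean.shape)  # unit-mean noise
observed = clean * speckle                                     # multiplicative model

# After the log transform the noise term is additive:
log_obs = np.log(observed)
additive_noise = log_obs - np.log(clean)                       # equals log(speckle)
print(np.allclose(additive_noise, np.log(speckle)))            # True

# An additive-noise denoiser can now run on log_obs; the exponential
# transform (np.exp) maps the result back to the intensity domain.
```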

CHAPTER-3

EXISTING SYSTEM

3.1 INTRODUCTION

Speckle noise reduction is a crucial step in improving the visual quality
and interpretability of synthetic aperture radar (SAR) images. SAR images are often
affected by speckle noise, which appears as granular or salt-and-pepper noise and can
obscure fine details in the images. To reduce speckle noise in SAR images, various
techniques have been developed, including those based on the statistical characteristics
of speckle. The following is an overview of existing speckle noise reduction techniques
for SAR images.

 Lee Filter: The Lee filter is one of the earliest and simplest speckle reduction
techniques. It's a statistical filter that uses local statistical properties of the image
to reduce speckle noise. The filter estimates the local mean and variance of the
image and then filters the image accordingly.
 Frost Filter: The Frost filter is an adaptive speckle reduction technique that
takes into account the local statistics of the image. It uses an estimate of the local
signal-to-noise ratio (SNR) to adaptively filter the image, making it effective in
both homogeneous and heterogeneous regions.
 Kuan Filter: The Kuan filter is another adaptive speckle reduction technique
that uses the local statistics of the image. It estimates the local signal level and
the speckle noise level and then applies a filter based on these estimates.
 Gamma-MAP Filter: This filter is based on a maximum a posteriori (MAP)
estimation framework and takes into account the statistical distribution of
speckle noise in SAR images. It provides effective speckle reduction while
preserving image details.
 Non-Local Means (NLM) Filter: The NLM filter is a powerful denoising
technique that uses non-local similarity information to reduce speckle noise. It

compares patches of pixels within the image to find similar regions and then
applies a weighted average to reduce noise.
 Wavelet-Based Techniques: Wavelet-based methods decompose the SAR image
into different frequency components using wavelet transforms and then apply
speckle reduction to specific scales or components.
 Markov Random Fields (MRF): MRF-based techniques model the
relationships between neighboring pixels in SAR images as a Markov random
field. By considering these relationships, they can reduce speckle noise while
preserving image structure.
 Sparse Representation-Based Techniques: Sparse representation methods
exploit the sparsity property of SAR images in certain domains (e.g., wavelet or
dictionary-based representations) to separate noise from the signal. This can lead
to effective speckle reduction.
 Deep Learning-Based Approaches: Deep neural networks, such as
convolutional neural networks (CNNs) and autoencoders, have also been applied
to speckle noise reduction in SAR images. These approaches learn complex
representations from the data and can achieve impressive results.

3.2 DRAWBACKS

 Lee Filter: The Lee filter, while one of the earliest and simplest techniques for speckle
reduction in SAR images, has certain limitations:
1. Limited in Low SNR Environments
2. Parameter Dependency
3. Smoothing Effect
4. Edge Preservation
 Frost Filter: The Frost filter, an adaptive speckle reduction technique used in SAR
image processing, addresses certain limitations:
1. Complex Computation
2. Adaptability Challenges
3. Performance in Low SNR Areas
 Kuan filter: The Kuan filter, another adaptive speckle reduction technique used in SAR
image processing, presents its own set of limitations despite its adaptive nature:
1. Limited Adaptability

2. Parameter Sensitivity

 Gamma-MAP Filter: The Gamma-MAP filter, operating within a maximum a


posteriori (MAP) estimation framework for speckle reduction in SAR images, exhibits
certain limitations despite its effective noise reduction capabilities:
1. Complexity in Implementation
2. Adaptability in Heterogeneous Regions
3. Computationally Intensive
4. Trade-off Between Noise Reduction and Details
5. Parameter Sensitivity
 Non-Local Means (NLM) Filter: The Non-Local Means (NLM) filter, a powerful
denoising technique used in SAR image processing, offers notable advantages but also
presents certain limitations:
1. Computational Demands
2. Parameter Tuning
3. Handling Large-Scale Structures
4. Memory Consumption
5. Trade-off Between Efficiency and Detail Preservation
 Wavelet-Based Techniques: Wavelet-based techniques in SAR image processing offer
various advantages but also come with certain limitations:
1. Decomposition Level Selection
2. Artifacts at Borders
3. Complexity in Implementation
 Markov Random Fields (MRF): Markov Random Fields (MRF) techniques in SAR
image processing offer distinct advantages but also come with certain limitations:
1. Limited Adaptability to Rapid Changes
2. Border Artifacts
3. Parameter Sensitivity
4. Computational Complexity
 Sparse Representation-Based Techniques: Sparse Representation-Based Techniques in
SAR image processing have their unique advantages but also come with certain
limitations:

1. Parameter Tuning
2. Artifact Generation
3. Complexity in Implementation
4. Computational Overhead
 Deep Learning-Based Approaches: Deep Learning-Based Approaches in SAR image
processing offer significant advantages along with some specific limitations:
1. Data Dependency
2. Overfitting and Generalization
3. Black Box Nature
4. Computational Demands

Understanding these limitations across different speckle noise reduction


techniques helps in selecting appropriate methods based on specific application
requirements and challenges encountered in SAR image processing tasks.

CHAPTER-4

PROPOSED SYSTEM

4.1 INTRODUCTION

We propose an algorithm for reducing speckle noise while preserving edges in SAR images. The proposed algorithm employs the SRAD filter as a preprocessing step rather than moving directly to the wavelet domain, because SRAD can be applied to the SAR image directly, without log-compressed data. However, the SRAD-filtered image still contains speckle noise, which is a form of multiplicative noise. Since most filtering methods are developed to reduce AWGN, a logarithmic transform is applied to the SRAD result to convert the multiplicative noise into additive noise. Subsequently, the two-dimensional (2D) DWT decomposes the log-transformed SRAD result into four sub-band images: the vertical sub-band (LH), horizontal sub-band (HL), diagonal sub-band (HH), and approximate sub-band (LL). We performed the DWT up to two decomposition levels; the effect of the algorithm was tested at one and two levels, and two-level decomposition gave the best results. Most of the speckle noise occurs in the high-frequency sub-band images. Therefore, soft thresholding of the wavelet coefficients is applied only to the horizontal and vertical sub-band images, which have similar energy, to preserve the original signal while removing the noise. The diagonal sub-band image, in contrast, has low energy compared to the vertical and horizontal sub-band images.

For the diagonal sub-band image, we employ an IGF based on a new edge-aware weighting method, which preserves the weak original signal while suppressing the noise signal. The approximate sub-band image contains the significant components of the image and is less affected by noise; however, some noise still exists in it, so the GF is employed to reduce the speckle noise and preserve the edges in the approximate sub-band image. Once the noise is removed, each sub-band image is reconstructed by wavelet reconstruction, and an exponential transform is performed to reverse the logarithmic transform. Finally, we obtain the despeckled image.

4.2 PROPOSED METHOD

Fig 4.2.1: Processing Flow of Proposed Method

The processing flow is shown in Fig 4.2.1. Pre-processing was done online using GEE; the results were downloaded and then processed offline. Missing data were interpolated using the previous and following acquisition dates. All 31 acquisitions in the one-year period were transformed to the frequency domain using a temporal FFT, giving components FFT1 to FFT31. Components FFT1 to FFT-n were then selected and transformed back using the inverse FFT (IFFT), with the remaining components set to zero. The IFFT output is backscatter in the time domain; this is the filtered image. Six FFT filtering scenarios were investigated to select the optimum one: the first scenario used FFT1 to FFT2, and the sixth used FFT1 to FFT8. Performance was measured by the correlation between the original image and the FFT-filtered result.

 Data:
Sentinel-1 data were acquired from October 2019 to September 2020, giving 31 acquisitions. Each acquisition was pre-processed in Google Earth Engine (GEE) and downloaded.

Table 4.2.1: Sentinel-1 data specifications

Item                    Description
Satellite name          Sentinel-1
Wavelength              C band
Mode                    IW (Interferometric Wide swath)
Resolution              10 x 10 meter
Product level           GRD
Polarization            VH and VV
Ascending/Descending    Descending

 Pre-processing:
The GRD Sentinel-1 collection in GEE is already provided as sigma0 backscatter. Additional processing was applied: first a transformation from sigma0 to gamma0, flattened to correct the topographic effect. The final pixel resolution was saved at 20x20 meter. Gap-filling was applied to fill the no-data pixels caused by missing acquisitions and by layover during the topographic flattening process; linear interpolation was used in the gap-filling
process.
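The gap-filling step can be sketched with NumPy's linear interpolation, assuming missing acquisition dates are marked as NaN in a per-pixel backscatter time series (the values below are illustrative, not real Sentinel-1 data):

```python
import numpy as np

def fill_gaps(series):
    """Linearly interpolate missing (NaN) values in a 1-D backscatter time series."""
    series = np.asarray(series, dtype=float)
    missing = np.isnan(series)
    valid = ~missing
    # np.interp fills each gap from the nearest valid samples before and after it
    series[missing] = np.interp(np.flatnonzero(missing),
                                np.flatnonzero(valid),
                                series[valid])
    return series

# A toy series with two missing acquisitions
ts = [10.0, np.nan, 14.0, 16.0, np.nan, 20.0]
print(fill_gaps(ts))  # → [10. 12. 14. 16. 18. 20.]
```

Each missing sample is reconstructed from the acquisitions immediately before and after it, matching the interpolation described above.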
 FFT:
The Fourier transform was introduced by Jean Baptiste Joseph Fourier and is used to reduce computational complexity in a wide variety of fields, including the earth sciences. The FFT converts a signal from the time domain to the frequency domain. It is a basic tool in remote sensing image processing, applied to hyperspectral, high spatial resolution, and high temporal resolution data; FFT algorithms can be used for stripe noise removal, image compression, and image registration.
 FFT1 – FFTn:
All 31 acquisitions in the one-year period were transformed to the frequency domain using a temporal FFT, giving components FFT1 to FFT31.
 FFT1 - FFT2:
The first scenario used FFT1 to FFT2. Performance was measured by the correlation between the original image and the FFT-filtered result.

 FFT1 to FFT3:
The minimal filtering scenario is FFT1 to FFT3, with an average correlation of 87%, but some acquisitions still have a correlation below 85%.
 FFT1 to FFT5:
The optimum is FFT1 to FFT5 filtering, with an average correlation of 92% and all correlations above 85% for all data during the year.
 Inverse FFT:
Components FFT1 to FFT-n were selected and transformed back using the inverse FFT (IFFT), with the remaining components set to zero. The IFFT output is backscatter in the time domain; this is the filtered image.
 FFT Filtered:
Using the optimum scenario, the backscatter trend of paddy growth can be identified more easily, so this technique is recommended for paddy growth monitoring.
 Comparison and analysis:
To assess the FFT results, the analysis uses both the temporal and spatial domains. In the temporal domain, the original temporal trend is compared with the FFT-filtered trend. In the spatial domain, the original image is compared with the FFT filtering results; the comparison is done for all images in the year.
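The temporal FFT filtering described above can be sketched with NumPy. This is a minimal per-pixel sketch on a simulated 31-acquisition backscatter series (a seasonal trend plus speckle-like noise); the values and noise level are illustrative, not real Sentinel-1 data:

```python
import numpy as np

def fft_filter(series, n_keep):
    """Keep FFT components 1..n_keep (plus the mean term) and zero the rest,
    then transform back to the time domain with the inverse FFT."""
    spectrum = np.fft.fft(series)
    filtered = np.zeros_like(spectrum)
    filtered[0] = spectrum[0]                      # mean (DC) term
    filtered[1:n_keep + 1] = spectrum[1:n_keep + 1]
    filtered[-n_keep:] = spectrum[-n_keep:]        # conjugate-symmetric half
    return np.real(np.fft.ifft(filtered))

# 31 simulated acquisitions: a slow seasonal trend plus speckle-like noise
rng = np.random.default_rng(0)
t = np.arange(31)
clean = -15 + 5 * np.sin(2 * np.pi * t / 31)
noisy = clean + rng.normal(0, 1.5, 31)

smoothed = fft_filter(noisy, 5)                    # the FFT1–FFT5 scenario
corr = np.corrcoef(noisy, smoothed)[0, 1]
print(round(corr, 2))
```

Zeroing the high-order harmonics removes the rapid speckle fluctuations while preserving the slow growth trend; the printed correlation between the original and filtered series is the performance measure used above.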

CHAPTER-5

SOFTWARE DESCRIPTION

5.1 INTRODUCTION TO PYTHON

Python is a high-level, interpreted, interactive and object-oriented scripting language.

Python is designed to be highly readable: it frequently uses English keywords where other languages use punctuation, and it has fewer syntactical constructions than other languages.

 Python is Interpreted: Python is processed at runtime by the interpreter. You do not


need to compile your program before executing it. This is similar to PERL and PHP.
 Python is Interactive: You can actually sit at a Python prompt and interact with the
interpreter directly to write your programs.
 Python is Object-Oriented: Python supports Object-Oriented style or technique of
programming that encapsulates code within objects.
 Python is a Beginner's Language: Python is a great language for the beginner-level
programmers and supports the development of a wide range of applications from
simple text processing to WWW browsers to games.

• History of Python

Python was developed by Guido van Rossum in the late eighties and early nineties at
the National Research Institute for Mathematics and Computer Science in the Netherlands.

Python is derived from many other languages, including ABC, Modula-3, C, C++, Algol-68,
SmallTalk, and Unix shell and other scripting languages.

Python is copyrighted. Its source code is available under the Python Software Foundation License, which permits free use and distribution. Python is now maintained by a core development team, with Guido van Rossum still holding a vital role in directing its progress.

• Python Features

Python's features include:

1. Easy-to-learn: Python has few keywords, simple structure, and a clearly defined
syntax. This allows the student to pick up the language quickly.
2. Easy-to-read: Python code is more clearly defined and visible to the eyes.
3. Easy-to-maintain: Python's source code is fairly easy-to-maintain.
4. A broad standard library: Python's bulk of the library is very portable and cross-
platform compatible on UNIX, Windows, and Macintosh.
5. Interactive Mode: Python has support for an interactive mode which allows
interactive testing and debugging of snippets of code.
6. Portable: Python can run on a wide variety of hardware platforms and has the
same interface on all platforms.
7. Extendable: You can add low-level modules to the Python interpreter. These
modules enable programmers to add to or customize their tools to be more
efficient.
8. Databases: Python provides interfaces to all major commercial databases.
9. GUI Programming: Python supports GUI applications that can be created and
ported to many system calls, libraries and windows systems, such as Windows
MFC, Macintosh, and the X Window system of Unix.
10. Scalable: Python provides a better structure and support for large programs than
shell scripting.

• Operators in Python

Fig 5.1.1: Operators in Python

TABLE 5.1.1: ARITHMETIC OPERATORS

TABLE 5.1.2: BITWISE OPERATORS

TABLE 5.1.3: IDENTITY OPERATORS

TABLE 5.1.4: LOGICAL OPERATORS

Operator          Description                                      Example
and (Logical AND) If both the operands are true, then the          (a and b) is true.
                  condition becomes true.
or (Logical OR)   If any of the two operands are non-zero,         (a or b) is true.
                  then the condition becomes true.
not (Logical NOT) Used to reverse the logical state of its         not(a and b) is false.
                  operand.

TABLE 5.1.5: MEMBERSHIP OPERATORS

Operator   Description                                            Example
in         Evaluates to true if it finds a value in the           x in y is true if x is a
           specified sequence and false otherwise.                member of sequence y.
not in     Evaluates to true if it does not find a value in       x not in y is true if x is
           the specified sequence and false otherwise.            not a member of sequence y.

TABLE 5.1.6: ASSIGNMENT OPERATORS

TABLE 5.1.7: RELATIONAL OPERATORS

• List

The list is the most versatile data type available in Python; it can be written as a list of comma-separated values (items) between square brackets. An important point about a list is that its items need not all be of the same type.
Creating a list is as simple as putting different comma-separated values between square brackets. For example –

list1 = ['physics', 'chemistry', 1997, 2000]

list2 = [1, 2, 3, 4, 5]

list3 = ["a", "b", "c", "d"]

• Tuples

A tuple is a sequence of immutable Python objects. Tuples are sequences, just like lists. The differences are that tuples cannot be changed, unlike lists, and that tuples use parentheses, whereas lists use square brackets.

Creating a tuple is as simple as putting different comma-separated values. Optionally, we can put these comma-separated values between parentheses as well. For example –

tup1 = ('physics', 'chemistry', 1997, 2000)

tup2 = (1, 2, 3, 4, 5)

tup3 = "a", "b", "c", "d"

The empty tuple is written as two parentheses containing nothing –

tup1 = ()

To write a tuple containing a single value you have to include a comma, even though
there is only one value –

tup1 = (50,)

Like string indices, tuple indices start at 0, and they can be sliced, concatenated, and so on.

• Accessing Values in Tuples:

To access values in tuple, use the square brackets for slicing along with the index or
indices to obtain value available at that index. For example –

tup1 = ('physics', 'chemistry', 1997, 2000)

tup2 = (1, 2, 3, 4, 5, 6, 7)

print("tup1[0]:", tup1[0])

print("tup2[1:5]:", tup2[1:5])

When the code is executed, it produces the following result –

tup1[0]: physics

tup2[1:5]: (2, 3, 4, 5)

• Updating Tuples:

Tuples are immutable which means you cannot update or change the values of tuple
elements. We are able to take portions of existing tuples to create new tuples as the following
example demonstrates –

tup1 = (12, 34.56)

tup2 = ('abc', 'xyz')

tup3 = tup1 + tup2

print(tup3)

When the above code is executed, it produces the following result –

(12, 34.56, 'abc', 'xyz')

• Delete Tuple Elements

Removing individual tuple elements is not possible. There is, of course, nothing
wrong with putting together another tuple with the undesired elements discarded.

To explicitly remove an entire tuple, just use the del statement. For example:

tup = ('physics', 'chemistry', 1997, 2000)

print(tup)
del tup

print("After deleting tup : ")
print(tup)   # raises NameError: name 'tup' is not defined

• Dictionary

In a dictionary, each key is separated from its value by a colon (:), the items are separated by commas, and the whole thing is enclosed in curly braces. An empty dictionary without any items is written with just two curly braces: {}.

Keys are unique within a dictionary while values may not be. The values of a
dictionary can be of any type, but the keys must be of an immutable data type such as strings,
numbers, or tuples.

• Accessing Values in Dictionary:

To access dictionary elements, you can use the familiar square brackets along with the
key to obtain its value. Following is a simple example –

dict = {'Name': 'Zara', 'Age': 7, 'Class': 'First'}

print("dict['Name']:", dict['Name'])

print("dict['Age']:", dict['Age'])

Result – dict['Name']: Zara  dict['Age']: 7

• Updating Dictionary:

We can update a dictionary by adding a new entry or a key-value pair, modifying an


existing entry, or deleting an existing entry as shown below in the simple example –

dict = {'Name': 'Zara', 'Age': 7, 'Class': 'First'}

dict['Age'] = 8                  # update existing entry

dict['School'] = "DPS School"    # add new entry

print("dict['Age']:", dict['Age'])

print("dict['School']:", dict['School'])

Result –

dict['Age']: 8  dict['School']: DPS School

• Delete Dictionary Elements:

We can either remove individual dictionary elements or clear the entire contents of a dictionary. You can also delete the entire dictionary in a single operation.

To explicitly remove an entire dictionary, just use the del statement. Following is a
simple example –

dict = {'Name': 'Zara', 'Age': 7, 'Class': 'First'}

del dict['Name']   # remove entry with key 'Name'

dict.clear()       # remove all entries in dict

del dict           # delete entire dictionary

print("dict['Age']:", dict['Age'])       # raises an error: the dictionary no longer exists

print("dict['School']:", dict['School'])

A function is a block of organized, reusable code that is used to perform a single, related action. Functions provide better modularity for your application and a high degree of code reuse. Python gives you many built-in functions like print(), but you can also create your own functions, called user-defined functions.

• Defining a Function
Simple rules to define a function in Python.

 Function blocks begin with the keyword def followed by the function name and
parentheses ( ( ) ).

 Any input parameters or arguments should be placed within these parentheses.
 The first statement of a function can be an optional statement - the documentation
string of the function or docstring.
 The code block within every function starts with a colon (:) and is indented.
 The statement return [expression] exits a function, optionally passing back an
expression to the caller. A return statement with no arguments is the same as return
None.
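The rules above can be illustrated with a small user-defined function (the name and values are illustrative):

```python
def greet(name, punctuation="!"):
    """Return a greeting for the given name (this docstring is the optional first statement)."""
    return "Hello, " + name + punctuation

print(greet("Python"))        # → Hello, Python!
print(greet("World", "."))    # → Hello, World.
```

The def keyword, the parenthesized parameters, the indented colon-introduced block, and the return statement all follow the rules listed above.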

• Calling a Function
Defining a function only gives it a name, specifies the parameters that are to be included in the function, and structures the blocks of code. Once the basic structure of a function is finalized, you can execute it by calling it from another function or directly from the Python prompt.

• Function Arguments
You can call a function by using the following types of formal arguments:

 Required arguments
 Keyword arguments
 Default arguments
 Variable-length arguments
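A small sketch showing all four argument types in one function (the function name and values are illustrative):

```python
def describe(item, count=1, *extras, **details):
    """item is a required argument, count has a default value,
    *extras collects extra positional arguments, and
    **details collects extra keyword arguments."""
    return (item, count, extras, details)

print(describe("pen"))                         # → ('pen', 1, (), {})
print(describe("pen", 3, "red", "blue"))       # → ('pen', 3, ('red', 'blue'), {})
print(describe("pen", count=2, owner="Zara"))  # → ('pen', 2, (), {'owner': 'Zara'})
```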

• Opening and closing files


Python provides basic functions and methods necessary to manipulate files by default.
You can do most of the file manipulation using a file object.

The open Function

Before you can read or write a file, you have to open it using Python's built-in open()
function. This function creates a file object, which would be utilized to call other support
methods associated with it.

Syntax:

file object = open(file_name [, access_mode][, buffering])

The close()Method

The close() method of a file object flushes any unwritten information and closes the file object, after which no more writing can be done. Python automatically closes a file when the reference object of a file is reassigned to another file, but it is good practice to use the close() method to close a file explicitly.

Syntax:

fileObject.close()
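A short sketch combining open() and close() (the file name is illustrative; a temporary directory is used so the example is self-contained):

```python
import os
import tempfile

# Create, write, and read back a small text file, closing it after each step
path = os.path.join(tempfile.gettempdir(), "demo_open_close.txt")

fo = open(path, "w")        # "w" creates or truncates the file for writing
fo.write("Python file handling\n")
fo.close()                  # flush and release the file object

fo = open(path, "r")        # "r" (the default mode) opens for reading
content = fo.read()
fo.close()

print(content.strip())      # → Python file handling
os.remove(path)             # clean up the temporary file
```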

• Exception
An exception is an event, occurring during the execution of a program, that disrupts the normal flow of the program's instructions. In general, when a Python script encounters a situation that it cannot cope with, it raises an exception. An exception is a Python object that represents an error. When a Python script raises an exception, it must either handle the exception immediately or terminate and quit.

• Handling an exception
If you have some suspicious code that may raise an exception, you can defend your
program by placing the suspicious code in a try: block. After the try: block, include
an except: statement, followed by a block of code which handles the problem as elegantly as
possible.
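A minimal try/except sketch (the function is illustrative):

```python
def safe_divide(a, b):
    """Guard a risky operation with a try: block and handle the
    failure in the except: block instead of letting the script quit."""
    try:
        return a / b
    except ZeroDivisionError:
        return None   # handle the problem as elegantly as possible

print(safe_divide(10, 2))   # → 5.0
print(safe_divide(10, 0))   # → None
```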

The Python standard for database interfaces is the Python DB-API. Most Python
database interfaces adhere to this standard.

You can choose the right database for your application. Python Database API supports a
wide range of database servers such as −

 GadFly
 mSQL
 MySQL
 PostgreSQL
 Microsoft SQL Server 2000
 Informix

 Interbase
 Oracle
 Sybase

The DB API provides a minimal standard for working with databases using Python
structures and syntax wherever possible. This API includes the following:

 Importing the API module.


 Acquiring a connection with the database.
 Issuing SQL statements and stored procedures.
 Closing the connection.
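These four steps can be sketched with the sqlite3 module, which ships with Python and follows the DB-API, although SQLite is not in the server list above (the table and values are illustrative):

```python
import sqlite3  # 1. Import the API module (DB-API 2.0 compliant)

# 2. Acquire a connection (an in-memory database keeps this sketch self-contained)
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# 3. Issue SQL statements
cur.execute("CREATE TABLE student (name TEXT, marks INTEGER)")
cur.execute("INSERT INTO student VALUES (?, ?)", ("Zara", 90))
conn.commit()

cur.execute("SELECT name, marks FROM student")
rows = cur.fetchall()
print(rows)          # → [('Zara', 90)]

# 4. Close the connection
conn.close()
```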

5.2 PYTHON INSTALLATION AND SETUP

You’ll need a computer running Windows 10 with administrative privileges and an


internet connection.

Step 1 — Downloading the Python Installer


1. Go to the official Python download page for Windows.
2. Find a stable Python 3 release. This tutorial was tested with Python version 3.10.10.
3. Click the appropriate link for your system to download the executable file:
Windows installer (64-bit) or Windows installer (32-bit).

Fig 5.2.1: Python Releases for Windows

Step 2 — Running the Executable Installer


1. After the installer is downloaded, double-click the .exe file, for example python-
3.10.10-amd64.exe, to run the Python installer.
2. Select the Install launcher for all users checkbox, which enables all users of the
computer to access the Python launcher application.
3. Select the Add python.exe to PATH checkbox, which enables users to launch Python
from the command line.

Fig 5.2.2: Install Python 3.10.10 (64-bit)

4. If you’re just getting started with Python and you want to install it with default features
as described in the dialog, then click Install Now and go to Step 4 - Verify the Python
Installation. To install other optional and advanced features, click Customize
installation and continue.
5. The Optional Features include common tools and resources for Python and you can
install all of them, even if you don’t plan to use them.

Fig 5.2.3: Optional Features

Select some or all of the following options:

(i) Documentation: recommended


(ii) pip: recommended if you want to install other Python packages, such as
NumPy or pandas
(iii) TCL/TK and IDLE: recommended if you plan to use IDLE or follow tutorials that
use it
(iv) Python test suite: recommended for testing and learning
(v) py launcher and for all users: recommended to enable users to launch
Python from the command line
6. Click Next.
7. The Advanced Options dialog displays.

Fig 5.2.4: Advanced Options

Select the options that suit your requirements:

(i) Install for all users: recommended if you’re not the only user on this computer

(ii) Associate files with Python: recommended, because this option associates

all the Python file types with the launcher or editor

(iii) Create shortcuts for installed applications: recommended to enable

shortcuts for Python applications

(iv) Add Python to environment variables: recommended to enable launching

Python

(v) Precompile standard library: not required, it might slow down the installation

(vi) Download debugging symbols and Download debug binaries: recommended only

if you plan to create C or C++ extensions.

Make note of the Python installation directory in case you need to reference it later.
8. Click Install to start the installation.
9. After the installation is complete, a Setup was successful message displays.

Fig 5.2.5: Python 3.10.10 (64-bit) Setup

Step 3 — Adding Python to the Environment Variables (optional)


Skip this step if you selected Add Python to environment variables during installation.

If you want to access Python through the command line but you didn’t add Python to
your environment variables during installation, then you can still do it manually.

Before you start, locate the Python installation directory on your system. The
following directories are examples of the default directory paths:

 C:\Program Files\Python310: if you selected Install for all users during installation,
then the directory will be system wide.

 C:\Users\Sammy\AppData\Local\Programs\Python\Python310: if you didn't select Install
for all users during installation, then the directory will be in the Windows user path.
Note that the folder name will be different if you installed a different version, but it will still
start with Python.
1. Go to Start and enter advanced system settings in the search bar.
2. Click View advanced system settings.
3. In the System Properties dialog, click the Advanced tab and then click Environment
Variables.
4. Depending on your installation:
 If you selected Install for all users during installation, select Path from the list
of System Variables and click Edit.
 If you didn’t select Install for all users during installation, select Path from the
list of User Variables and click Edit.
5. Click New and enter the Python directory path, then click OK until all the dialogs are
closed.

Step 4 — Verify the Python Installation

You can verify whether the Python installation is successful either through the
command line or through the Integrated Development Environment (IDLE) application, if you
chose to install it.

Go to Start and enter cmd in the search bar. Click Command Prompt.
Enter the following command in the command prompt:
python --version

An example of the output is:


Output
Python 3.10.10

You can also check the version of Python by opening the IDLE application. Go
to Start and enter python in the search bar and then click the IDLE app, for example IDLE
(Python 3.10 64-bit).

Fig 5.2.6: Python IDLE Shell 3.10.10

You can start coding in Python using IDLE or your preferred code editor.

Conclusion

You’ve installed Python on your Windows 10 computer and are ready to start learning
and programming in Python.

5.3 WAVELETS

A wavelet is a wave-like oscillation with an amplitude that starts out at zero, increases,
and then decreases back to zero. It can typically be visualized as a "brief oscillation" like one
might see recorded by a seismograph or heart monitor. Generally, wavelets are purposefully
crafted to have specific properties that make them useful for signal processing. Wavelets can
be combined, using a "shift, multiply and sum" technique called convolution, with portions of
an unknown signal to extract information from the unknown signal.

For example, a wavelet could be created to have a frequency of Middle C and a short
duration of roughly a 32nd note. If this wavelet were to be convolved at periodic intervals with
a signal created from the recording of a song, then the results of these convolutions would be
useful for determining when the Middle C note was being played in the song. Mathematically,
the wavelet will resonate if the unknown signal contains information of similar frequency -
just as a tuning fork physically resonates with sound waves of its specific tuning frequency.
This concept of resonance is at the core of many practical applications of wavelet theory.

As a mathematical tool, wavelets can be used to extract information from many
different kinds of data, including - but certainly not limited to - audio signals and images. Sets
of wavelets are generally needed to analyze data fully. A set of "complementary" wavelets
will deconstruct data without gaps or overlap so that the deconstruction process is
mathematically reversible. Thus, sets of complementary wavelets are useful in wavelet based
compression/decompression algorithms where it is desirable to recover the original
information with minimal loss.

Wavelet transforms

A wavelet is a mathematical function used to divide a given function or continuous-


time signal into different scale components. Usually one can assign a frequency range to each
scale component. Each scale component can then be studied with a resolution that matches its
scale. A wavelet transform is the representation of a function by wavelets. The wavelets are
scaled and translated copies (known as "daughter wavelets") of a finite-length or fast-decaying
oscillating waveform (known as the "mother wavelet"). Wavelet transforms have advantages
over traditional Fourier transforms for representing functions that have discontinuities and
sharp peaks, and for accurately deconstructing and reconstructing finite, non-periodic and/or
non-stationary signals.

Wavelet transforms are classified into discrete wavelet transforms and continuous
wavelet transforms. Note that both DWT and CWT are continuous-time (analog) transforms.
They can be used to represent continuous-time (analog) signals. CWTs operate over every
possible scale and translation whereas DWTs use a specific subset of scale and translation
values or representation grid.

Wavelet Transform:

A wavelet series is a representation of a square-integrable (real- or complex-valued) function by a certain orthonormal series generated by a wavelet. The integral wavelet transform is the integral transform defined as

[W_ψ f](a, b) = |a|^(-1/2) ∫ f(t) ψ*((t - b) / a) dt

The wavelet coefficients c_jk are then given by

c_jk = [W_ψ f](2^(-j), k 2^(-j))

Here, a = 2^(-j) is called the binary dilation or dyadic dilation, and b = k 2^(-j) is the binary or dyadic position.

TABLE 5.3.1: WAVELET FAMILY

There are a large number of wavelet transforms, each suitable for different applications. The common ones are listed below:

 Continuous wavelet transform


 Discrete wavelet transform
 Fast wavelet transform
 Stationary wavelet transform

 Lifting Scheme
 Complex Wavelet Transform

Continuous Wavelet Transform

A continuous wavelet transform is used to divide a continuous-time function into wavelets. Unlike the Fourier transform, the continuous wavelet transform possesses the ability to construct a time-frequency representation of a signal that offers very good time and frequency localization. In mathematics, the continuous wavelet transform of a continuous, square-integrable function x(t) at a scale a > 0 and translational value b is expressed by the following integral:

X_w(a, b) = |a|^(-1/2) ∫ x(t) ψ*((t - b) / a) dt

where ψ(t) is a continuous function in both the time domain and the frequency domain called the mother wavelet, and * represents the operation of complex conjugation. The main purpose of the mother wavelet is to provide a source function to generate the daughter wavelets, which are simply the translated and scaled versions of the mother wavelet. To recover the original signal x(t), the inverse continuous wavelet transform can be exploited:

x(t) = C_ψ^(-1) ∫∫ X_w(a, b) |a|^(-1/2) ψ̃((t - b) / a) db (da / a²)

where ψ̃(t) is the dual function of ψ(t). Sometimes ψ̃(t) = ψ(t), in which case

C_ψ = ∫ |ψ̂(ω)|² / |ω| dω

is called the admissibility constant, ψ̂ being the Fourier transform of ψ. For a successful inverse transform, the admissibility constant has to satisfy the admissibility condition 0 < C_ψ < ∞. It is possible to show that the admissibility condition implies ψ̂(0) = 0, so that a wavelet must integrate to zero.

Mother wavelet

In general, it is preferable to choose a mother wavelet that is continuously differentiable with a compactly supported scaling function and a high number of vanishing moments. A wavelet associated with a multiresolution analysis is defined by two functions: the wavelet function ψ(t) and the scaling function φ(t).

The scaling function is compactly supported if and only if the scaling filter h has finite support, and their supports are the same. For instance, if the support of the scaling function is [N1, N2], then the support of the wavelet is [(N1 - N2 + 1)/2, (N2 - N1 + 1)/2]. The k-th moments can be expressed by the following equation:

m_k = ∫ t^k ψ(t) dt

If m_0 = m_1 = ... = m_(p-1) = 0, we say that ψ(t) has p vanishing moments. The number of vanishing moments of a wavelet analysis represents the order of the wavelet transform. According to the Strang-Fix conditions, the error of an orthogonal wavelet approximation at scale a = 2^(-i) globally decays as a^L, where L is the order of the transform. In other words, a wavelet transform with higher order results in better signal approximations.

Mathematically, the process of Fourier analysis is represented by the Fourier transform:

F(ω) = ∫ f(t) e^(-jωt) dt

which is the sum over all time of the signal f(t) multiplied by a complex exponential.
The results of the CWT are many wavelet coefficients C, which are a function of scale
and position.

Multiplying each coefficient by the appropriately scaled and shifted wavelet yields the constituent wavelets of the original signal.

Continuous wavelets

Real-valued:

 Beta wavelet
 Hermitian wavelet
 Hermitian hat wavelet
 Mexican hat wavelet
 Shannon wavelet

Complex-valued:

 Complex Mexican hat wavelet


 Morlet wavelet
 Shannon wavelet
 Modified Morlet wavelet

Discrete Wavelet Transform

A discrete wavelet transform (DWT) is any wavelet transform for which the wavelets
are discretely sampled. As with other wavelet transforms, a key advantage it has over Fourier
transforms is temporal resolution: it captures both frequency and location information
(location in time).

Definition:

One level of the transform

The DWT of a signal x is calculated by passing it through a series of filters. First the samples are passed through a low-pass filter with impulse response g, resulting in a convolution of the two:

y[n] = (x * g)[n] = Σ_k x[k] g[n - k]

The signal is also decomposed simultaneously using a high-pass filter h. The outputs give the detail coefficients (from the high-pass filter) and the approximation coefficients (from the low-pass filter). It is important that the two filters are related to each other; together they are known as a quadrature mirror filter.

Since half the frequencies of the signal have now been removed, half the samples can be discarded according to Nyquist's rule, so the filter outputs are subsampled by 2 (in Mallat's notation, and commonly, the convention is the opposite: g denotes the high-pass and h the low-pass filter):

y_low[n] = Σ_k x[k] g[2n - k]
y_high[n] = Σ_k x[k] h[2n - k]

This decomposition has halved the time resolution, since only half of each filter output characterizes the signal. However, each output has half the frequency band of the input, so the frequency resolution has been doubled.
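A one-level DWT can be sketched with the Haar filters, whose low-pass and high-pass outputs after downsampling by 2 reduce to pairwise sums and differences (a simplified sketch for illustration, not the wavelet used in the proposed method):

```python
import numpy as np

def haar_dwt(x):
    """One level of the DWT with Haar filters: low-pass g = [1/√2, 1/√2],
    high-pass h = [1/√2, -1/√2], followed by downsampling by 2."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return approx, detail

a, d = haar_dwt([4.0, 4.0, 2.0, 6.0])
print(a)   # smooth trend of each sample pair
print(d)   # local differences (edges, noise)
```

The input of length 4 yields two approximation and two detail coefficients, showing how each filter output keeps only half the samples.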

5.3.1 MODULES:

 Image Selection
 Image preprocessing
 Image Denoising
 Classification
 Performance Analysis

IMAGE SELECTION:

 The input data was collected from a dataset repository.

 In our process, the BSD68 dataset is used.

 Data selection is the process of choosing the input images for the experiments.

 The dataset contains 68 natural images of various sizes.

IMAGE PREPROCESSING:

 Image pre-processing is the process of removing unwanted noise and artifacts from the input
image.

 In this step, we use interpolation.

 Image interpolation occurs when you resize or distort your image from one pixel grid
to another.

 Specifically, we use bilinear interpolation.

 Bilinear interpolation considers the 4 surrounding pixels to predict new pixel values.

 Image resizing is necessary when you need to increase or decrease the total number of
pixels, whereas remapping can occur when you are correcting for lens distortion or
rotating an image.

 An interpolation technique that reduces the visual distortion caused by the fractional
zoom calculation is the bilinear interpolation algorithm, where the fractional part of the
pixel address is used to compute a weighted average of pixel brightness values over a
small neighborhood of pixels in the source image.
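The weighted average described above can be sketched as follows (a minimal implementation for in-bounds fractional coordinates; the sample image is illustrative):

```python
import numpy as np

def bilinear(img, y, x):
    """Sample an image at fractional coordinates (y, x) using the four
    surrounding pixels, weighted by the fractional parts of the address."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0] +
            (1 - dy) * dx       * img[y0, x0 + 1] +
            dy       * (1 - dx) * img[y0 + 1, x0] +
            dy       * dx       * img[y0 + 1, x0 + 1])

img = np.array([[0.0, 10.0],
                [20.0, 30.0]])
print(bilinear(img, 0.5, 0.5))   # → 15.0 (average of the 4 neighbors)
```

At the exact center of the four pixels all weights are 0.25, so the result is their mean, which is the behavior the paragraph above describes.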

IMAGE DENOISING:

 Image denoising is the technique of removing noise or distortions from an image.

 There is a vast range of applications; for example, blurred images can be made clear.
 The noise present in images may be caused by various intrinsic or extrinsic
conditions which are practically hard to deal with.

CLASSIFICATION:

 The SRCNN is a deep convolutional neural network that learns end-to-end mapping of
low resolution to high resolution images.
 As a result, we can use it to improve the image quality of low resolution images.
 SRCNN takes a low-resolution image as input and converts (maps) it to a high-
resolution image.
 It learns this mapping method by using training data.
 A notable strength of SRCNN is its simplicity.
 It has only three layers including the output layer. The first layer corresponds to the
extraction of patches, the second layer, to non-linear mapping, and the third layer,
to reconstruction.
 For training, we need pairs of high-resolution images (ground truth) and low-
resolution images (input data). Note that the input and output images have the same
size in the SRCNN network, because the low-resolution image is first upscaled by
interpolation before being fed to the network.
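The three-layer structure can be sketched as a forward pass in plain NumPy. This is an illustrative toy with random weights, using the filter sizes (9, 1, 5) and filter counts (64, 32) of the original SRCNN design; it only demonstrates that the output keeps the input's spatial size.

```python
import numpy as np

def conv2d(x, w, b, relu=True):
    """'Same'-padded 2-D convolution: x is (H, W, Cin), w is (k, k, Cin, Cout)."""
    k, pad = w.shape[0], w.shape[0] // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    H, W = x.shape[:2]
    out = np.zeros((H, W, w.shape[3]))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]
            out[i, j] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2])) + b
    return np.maximum(out, 0) if relu else out

rng = np.random.default_rng(0)
lr = rng.random((16, 16, 1))                               # upscaled low-res input
w1, b1 = rng.random((9, 9, 1, 64)) * 0.01, np.zeros(64)    # layer 1: patch extraction
w2, b2 = rng.random((1, 1, 64, 32)) * 0.01, np.zeros(32)   # layer 2: non-linear mapping
w3, b3 = rng.random((5, 5, 32, 1)) * 0.01, np.zeros(1)     # layer 3: reconstruction

sr = conv2d(conv2d(conv2d(lr, w1, b1), w2, b2), w3, b3, relu=False)
print(sr.shape)  # (16, 16, 1) -- same spatial size as the input
```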

PERFORMANCE ANALYSIS

The final result is generated from the overall classification and prediction. The
performance of the proposed approach is evaluated using measures such as:

• PSNR

Peak Signal-to-Noise Ratio (PSNR) is a commonly used metric to quantify the similarity
between two images. It is calculated from the mean squared error (MSE) of the pixels and the
maximum possible pixel value (MAX_I) as follows:

PSNR = 10 · log10(MAX_I² / MSE)

A high PSNR value corresponds to high similarity between the two images, and a low
value to low similarity.
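A small worked example of the formula (illustrative; the project code in the annexure contains its own PSNR routine):

```python
import numpy as np

def psnr(original, compressed, max_i=255.0):
    """PSNR in dB: 10 * log10(MAX_I^2 / MSE)."""
    diff = original.astype(np.float64) - compressed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float('inf')  # identical images: unbounded PSNR
    return 10 * np.log10(max_i ** 2 / mse)

a = np.full((4, 4), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110                # one pixel differs by 10, so MSE = 100/16 = 6.25
print(round(psnr(a, b), 2))  # 40.17 dB
```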

• SSIM

The Structural Similarity Index (SSIM) was developed to improve on traditional metrics
such as PSNR, which have been shown to be inconsistent with human visual perception. It
takes the luminance, contrast, and structure of both images into account.

The SSIM index is calculated over various windows of an image. The measure between
two windows x and y of common size N×N is:

SSIM(x, y) = ((2·μx·μy + c1)(2·σxy + c2)) / ((μx² + μy² + c1)(σx² + σy² + c2))

where μx and μy are the window means, σx² and σy² the variances, σxy the covariance, and
c1, c2 small constants that stabilize the division.
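The single-window form of the index can be sketched as follows (illustrative; c1 and c2 use the conventional values k1 = 0.01, k2 = 0.03 with L = 255):

```python
import numpy as np

def ssim_window(x, y, L=255.0, k1=0.01, k2=0.03):
    """SSIM between two equally sized windows (single-window form)."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

x = np.random.default_rng(1).random((8, 8)) * 255
print(ssim_window(x, x))  # identical windows give SSIM = 1.0
```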

CHAPTER-6
RESULTS AND DISCUSSION

This section shows that the application of speckle noise reduction techniques resulted in a
significant improvement in the visual quality of Sentinel-1 SAR images. The following
figures demonstrate the effectiveness of the speckle noise reduction process.

Navigate to the destination folder containing the project code and datasets using the
command prompt, then execute the Python file. A popup box appears asking to select an
image; browse to the folder containing the datasets and choose one.

Fig 6.1: Original Image

The selected image then undergoes several processing steps, each opening a window as
shown in the figures below.

Fig 6.2: Image Resize

Resizing involves interpolation techniques to either increase or decrease the number of
pixels. When enlarging an image, interpolation methods such as bilinear or bicubic are used
to estimate new pixel values from neighboring pixels.

Fig 6.3: FastNDenoising

This step applies a specific denoising algorithm to the resized image. Once the image
has been resized using interpolation to adjust its dimensions, OpenCV's fast non-local
means denoising (fastNlMeansDenoisingColored) is applied to reduce noise within it. This
technique uses a fast and efficient algorithm based on non-local means to remove noise
while preserving image details, resulting in a cleaner and more visually appealing resized
image.

Fig 6.4: Gaussian Blur

Gaussian blur replaces the denoising method with a simpler one. It applies a uniform
blurring effect by averaging pixel values within a defined neighborhood, irrespective of
non-local similarities. This results in a smoother but less nuanced image, as Gaussian blur
lacks the detail-preserving behaviour of the non-local means denoiser.
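The averaging behaviour of Gaussian blur can be sketched without OpenCV. The following illustrative code builds a normalized Gaussian kernel and applies it as two separable 1-D passes (the project itself uses cv2.GaussianBlur):

```python
import numpy as np

def gaussian_kernel1d(size=7, sigma=1.5):
    """Normalized 1-D Gaussian kernel; a 2-D Gaussian blur is separable
    into a horizontal and a vertical pass with this kernel."""
    x = np.arange(size) - size // 2
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, size=7, sigma=1.5):
    """Separable Gaussian blur on a 2-D grayscale image."""
    k = gaussian_kernel1d(size, sigma)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blurred)
    return blurred

img = np.zeros((9, 9))
img[4, 4] = 1.0                 # a single bright pixel
out = gaussian_blur(img)
# Away from the borders, blurring spreads but conserves total brightness.
print(round(out.sum(), 6))      # 1.0
```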

Fig 6.5: Algorithm Comparison

TABLE 6.1: Performance Comparison

Type    PSNR Value    MSE Value    SSIM Value
CNN     32.8843109    33.469503    0.6951416

The results are clearly visible in each figure after the corresponding processing step. At
the end, the image is passed through the training runs, which allows a clear comparison
between the CNN and ANN models before execution ends.

CHAPTER-7

APPLICATIONS AND ADVANTAGES

7.1 APPLICATIONS

Speckle noise reduction and FFT temporal filtering for monitoring paddy field areas
using Sentinel-1 SAR data have diverse and impactful applications, ranging from agricultural
management to environmental assessment and disaster response.

1. Agricultural Monitoring:

Accurate monitoring of paddy fields over time, enabling farmers to assess crop
growth, detect changes in water levels, and make informed decisions regarding
irrigation and crop management.

2. Crop Health Assessment:

Identification of variations in crop health within paddy fields, allowing for early
detection of potential issues such as diseases, pests, or nutrient deficiencies.

3. Land Use Planning:

Contribution to effective land use planning by providing detailed information
about the dynamics of paddy fields. This is valuable for optimizing agricultural
resources and supporting sustainable land management practices.

4. Water Management:

Monitoring changes in water levels within paddy fields, aiding in efficient water
management. This is particularly crucial in regions where water resources are limited,
helping to optimize irrigation practices.

5. Environmental Impact Assessment:

Evaluation of the environmental impact of agricultural activities in paddy fields.
The project can contribute to assessing the sustainability of farming practices and their
effects on the surrounding ecosystem.

6. Disaster Response and Resilience:

Rapid assessment of damage to paddy fields caused by natural disasters such as
floods or storms. This information is valuable for disaster response efforts and
developing resilience strategies for the agricultural sector.

7. Precision Agriculture:

Integration with precision agriculture practices, enabling farmers to make data-
driven decisions for optimizing input usage, minimizing environmental impact, and
improving overall crop yield.

8. Government and Policy Planning:

Provision of accurate and up-to-date information for government agencies and
policymakers involved in agricultural planning, resource allocation, and policy
formulation.

9. Research and Academic Studies:

Support for research studies and academic projects related to agriculture,
environmental science, and remote sensing. The project's findings can contribute to a
better understanding of paddy field dynamics.

10. Satellite Data Utilization:

Demonstration of the practical application of Sentinel-1 SAR satellite data for
agriculture and land monitoring. This encourages the utilization of satellite data in
similar projects and research endeavors.

11. Global Food Security:

Contribution to global food security by improving the efficiency and sustainability
of paddy field agriculture. Monitoring and optimizing rice cultivation can have a
positive impact on food production.

12. Community Engagement:

Involvement of local communities in monitoring and managing paddy fields. The
project can provide accessible information to farmers, empowering them to make
informed decisions about their agricultural practices.

7.2 ADVANTAGES

The proposed system offers several advantages, making it a valuable tool for agricultural
and environmental monitoring:

1. Improved Data Quality:

The speckle noise reduction techniques enhance the quality of Sentinel-1 SAR
data, ensuring that the monitored paddy field areas are represented more accurately.
This leads to more reliable and precise information for analysis.

2. Accurate Monitoring Over Time:

The project utilizes Fast Fourier Transform temporal filtering, enabling accurate
monitoring of changes in paddy field areas over time. This is crucial for assessing crop
growth, water levels, and other dynamic factors in agriculture.

3. Enhanced Visualization:

By reducing speckle noise and applying temporal filtering, the project enhances
the visualization of paddy field areas in SAR data. This improved visual representation
aids in the interpretation of changes and patterns over time.

4. Automation and Efficiency:

The project introduces automation to the monitoring process, allowing for efficient
and continuous assessment of paddy fields. Automated analysis reduces the need for
manual intervention and accelerates the decision-making process.

5. Quantitative Analysis:

The use of image processing techniques facilitates quantitative analysis, enabling
the extraction of precise measurements and data from the SAR images. This
quantitative information is valuable for scientific research and agricultural planning.

6. Optimization of Agricultural Practices:

Farmers and agricultural stakeholders can benefit from the project's insights into
paddy field conditions. The data obtained can inform decisions related to irrigation,
crop health, and overall management practices, leading to optimized agricultural
outcomes.

7. Environmental Impact Assessment:

The project contributes to assessing the environmental impact of agricultural
activities. Understanding how paddy fields change over time allows for more informed
decisions regarding sustainable farming practices and ecosystem preservation.

8. Early Detection of Issues:

The continuous monitoring capability enables early detection of issues such as
changes in crop health or water levels. Early intervention based on timely information
can prevent or mitigate potential problems, enhancing overall crop productivity.

9. Data-driven Decision Making:

The project provides a data-driven approach to decision-making in agriculture.
Farmers, policymakers, and researchers can make informed decisions based on the
analyzed data, improving overall efficiency and resource utilization.

10. Remote Sensing Advantages:

Leveraging Sentinel-1 SAR satellite data demonstrates the advantages of remote
sensing in agriculture. The project showcases the potential of satellite technology for
large-scale and continuous monitoring of agricultural areas.

11. Contributions to Agricultural Research:

The project contributes to the field of agricultural research by providing a practical
application of image processing techniques in monitoring paddy fields. The outcomes
can serve as a basis for further studies and advancements in the domain.

12. Technology Integration:

Integration of image processing techniques and satellite data demonstrates the
synergy between technology and agriculture. This integration can inspire further
technological advancements and innovation in the agricultural sector.

CHAPTER-8

CONCLUSION AND FUTURE SCOPE


We developed a novel three-step technique based on the conventional PPB algorithm.
The proposed algorithm improved the calculation accuracy of the weighting by pre-processing
speckle noise with the LMMSE filter and reducing the influence of bright structures. The
algorithm also improves the accuracy of the homogeneity factor by adaptively changing
the size of the search window, and then corrects for the spreading and blurring of bright
structures. TerraSAR-X images with clear edges, uniform backgrounds, and complicated
internal structures were used to validate this technique. The algorithm retains the advantages
of the conventional PPB and offers better performance for both speckle suppression and the
preservation of edges and textures.
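The LMMSE weighting idea mentioned above can be sketched as a minimal Lee-style filter. This is an illustration of the principle only, not the three-step algorithm described here: each pixel is pulled toward its local mean by a gain that depends on how much the local variance exceeds the estimated noise variance.

```python
import numpy as np

def lee_filter(img, win=7, noise_var=None):
    """Minimal LMMSE (Lee) speckle filter sketch on a 2-D image."""
    pad = win // 2
    p = np.pad(img, pad, mode='reflect')
    H, W = img.shape
    out = np.empty((H, W), dtype=float)
    if noise_var is None:
        # Crude global estimate, assumed for this sketch only.
        noise_var = np.var(img) * 0.5
    for i in range(H):
        for j in range(W):
            w = p[i:i + win, j:j + win]
            mu, var = w.mean(), w.var()
            k = max(var - noise_var, 0.0) / (var + 1e-12)  # LMMSE gain in [0, 1]
            out[i, j] = mu + k * (img[i, j] - mu)
    return out

rng = np.random.default_rng(0)
noisy = 1.0 + 0.3 * rng.standard_normal((16, 16))  # synthetic noisy patch
smoothed = lee_filter(noisy)
# Homogeneous regions are smoothed (variance drops), while k -> 1
# near strong structure would leave edges mostly untouched.
```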

Firstly, computational complexity, a concern in ANN-based methods, can be mitigated
by CNNs' architecture, leveraging parallel processing and optimized convolutional operations.
Their capacity for hierarchical feature extraction is particularly advantageous in handling
heterogeneous regions, as CNNs autonomously learn relevant features from SAR images,
potentially enhancing performance in diverse environments.

CNNs also demonstrate less parameter sensitivity compared to ANNs. Their ability to
learn representations from data reduces the reliance on manual parameter tuning. Additionally,
the intricate structures present in complex noise patterns, which are challenging for ANNs, can
be better discerned by CNNs due to their proficiency in capturing hierarchical representations
through convolutional layers.

Moreover, CNNs excel in generalizing across diverse SAR datasets by learning robust
and adaptable features. Their capability to discern intricate patterns even in low signal-to-
noise ratio (SNR) regions stands as a promising solution. These attributes suggest that
transitioning to a CNN-based approach for SAR image denoising could address many
limitations inherent in existing ANN-based methods. Deeper neural network architectures
offer promising future scope for the project.

REFERENCES

[1] Zhao, R.; Li, Y.; Ma, M. Mapping Paddy Rice with Satellite Remote Sensing: A
Review. Sustainability Vol.13, 503, 2021.

[2] Wang, S.; di Tommaso, S.; Faulkner, J.; Friedel, T.; Kennepohl, A.; Strey, R.; Lobell,
D.B. Mapping crop types in Southeast India with smartphone crowdsourcing and
deep learning. Remote Sens. Vol.12, 2957, 2020.

[3] Ding, M.; Guan, Q.; Li, L.; Zhang, H.; Liu, C.; Zhang, L. Phenology-based rice
paddy mapping using multi-source satellite imagery and a fusion algorithm applied
to the Poyang Lake Plain, Southern China. Remote Sens. Vol.12, 1022, 2020.

[4] Ghani, H.A.; Razwan, M.; Malek, A.; Fadzli, M.; Azmi, K.; Muril, M.J.; Azizan, A. A
review on sparse Fast Fourier Transform applications in image processing. Int. J. Electr.
Comput. Eng. Vol.10, 1346–51, 2020.

[5] Hoang-Phi Phung, Nguyen Lam-Dao, Thong Nguyen-Huy, Thuy Le-Toan, Armando
A. Apan, “Monitoring rice growth status in the Mekong Delta, Vietnam using
multitemporal Sentinel-1 data,” J. Appl. Remote Sens. Vol.14(1), 014518, 2020.

[6] Lestari, A.I.; Kushardono, D. The Use of C-Band Synthetic Aperture Radar Satellite
Data for Rice Plant Growth Phase Identification. Int. J. Remote Sens. Earth Sci.
Vol.16, 31–4, 2019.

[7] Bazzi, H.; Baghdadi, N.; El Hajj, M.; Zribi, M.; Minh, D.H.T.; Ndikumana, E.;
Courault, D.; Belhouchette, H. Mapping paddy rice using Sentinel-1 SAR time series
in Camargue, France. Remote Sens. Vol.11, 887, 2019.

[8] Zhao, H.; Chen, Z.; Jiang, H.; Jing, W.; Sun, L.; Feng, M. Evaluation of three deep
learning models for early crop classification using sentinel-1A imagery time series—
A case study in Zhanjiang, China. Remote Sens. Vol.11, 2673, 2019.

[9] Yin, Q.; Liu, M.; Cheng, J.; Ke, Y.; Chen, X. Mapping rice planting area in
Northeastern China using spatiotemporal data fusion and phenology-based method.
Remote Sens. Vol.11, 1699, 2019.

[10] Huang, J.; Ma, H.; Sedano, F.; Lewis, P.; Liang, S.; Wu, Q.; Su, W.; Zhang, X.; Zhu,
D. Evaluation of regional estimates of winter wheat yield by assimilating three
remotely sensed reflectance datasets into the coupled WOFOST–PROSAIL
model. Eur. J. Agron. Vol.102, 1–13, 2019.

[11] Jiang, H.; Li, D.; Jing, W.; Xu, J.; Huang, J.; Yang, J.; Chen, S. Early Season
Mapping of Sugarcane by Applying Machine Learning Algorithms to Sentinel-1A/2
Time Series Data: A Case Study in Zhanjiang City, China. Remote Sens. Vol.11, 861,
2019.

[12] Zhong, L.; Hu, L.; Zhou, H. Deep learning based multi-temporal crop
classification. Remote Sens. Environ. Vol.221, 430–443, 2019.

[13] Guan, K.; Li, Z.; Rao, L.N.; Gao, F.; Xie, D.; Hien, N.T.; Zeng, Z. Mapping Paddy
Rice Area and Yields Over Thai Binh Province in Viet Nam From MODIS, Landsat,
and ALOS-2/PALSAR-2. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens, Vol. 11,
2238–2252, 2018.

[14] Sonobe, R.; Yamaya, Y.; Tani, H.; Wang, X.; Kobayashi, N.; Mochizuki, K.-I. Crop
classification from Sentinel-2-derived vegetation indices using ensemble learning. J.
Appl. Remote Sens. Vol.12, 026019, 2018.

[15] Li, H.; Wan, W.; Fang, Y.; Zhu, S.; Chen, X.; Liu, B.; Yang, H. A Google Earth
Engine-enabled Software for Efficiently Generating High-quality User-ready
Landsat Mosaic Images. Environ. Model. Softw., Vol.112, 2018.

[16] Dos Santos Luciano, A.C.; Picoli, M.C.A.; Rocha, J.V.; Franco, H.C.J.; Sanches,
G.M.; Leal, M.R.L.V.; Le Maire, G. Generalized space-time classifiers for
monitoring sugarcane areas in Brazil. Remote Sens. Environ. Vol.215, 438–451,
2018.

ANNEXURE

#========================= IMPORT PACKAGES =========================

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import cv2
from tkinter.filedialog import askopenfilename

#========================= READ AN IMAGE ===========================

filename = askopenfilename()
img = mpimg.imread(filename)

plt.imshow(img)
plt.title('ORIGINAL IMAGE')
plt.show()

#========================= PREPROCESSING ===========================

# Resize to 300x300 using bilinear interpolation
image_rescaled = cv2.resize(img, (300, 300), interpolation=cv2.INTER_LINEAR)

fig = plt.figure()
plt.imshow(image_rescaled)
plt.title('IMAGE RESIZE')
plt.show()

#========================= IMAGE DENOISING =========================

# Fast non-local means denoising for colour images
dst = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 15)

plt.imshow(dst)
plt.title('FastNDenoising')
plt.show()

# Gaussian blur with a 7x7 kernel for comparison
Gaussian = cv2.GaussianBlur(img, (7, 7), 0)

plt.imshow(Gaussian)
plt.title('Gaussian Blur')
plt.show()

#==================== CALCULATING PERFORMANCES =====================

from math import log10, sqrt

def PSNR(original, compressed):
    mse = np.mean((original - compressed) ** 2)
    if mse == 0:
        # An MSE of zero means no noise is present in the signal,
        # so PSNR has no importance; return a large value.
        return 100
    max_pixel = 255.0
    psnr = 20 * log10(max_pixel / sqrt(mse))
    return psnr

value = PSNR(img, Gaussian)
MSE = np.square(np.subtract(img, Gaussian)).mean()

#========================= DATA SPLITTING ==========================

import os

# ========================= test and train ==========================

from sklearn.model_selection import train_test_split

data_1 = os.listdir('DataSet/')
data_2 = os.listdir('DataSet/')

dot1 = []
labels1 = []

for img_name in data_1:
    img_1 = cv2.imread('DataSet/' + img_name)
    img_1 = cv2.resize(img_1, (50, 50))
    try:
        gray = cv2.cvtColor(img_1, cv2.COLOR_BGR2GRAY)
    except:
        gray = img_1
    dot1.append(np.array(gray))
    labels1.append(0)

for img_name in data_2:
    try:
        img_2 = cv2.imread('DataSet/' + img_name)
        img_2 = cv2.resize(img_2, (50, 50))
        try:
            gray = cv2.cvtColor(img_2, cv2.COLOR_BGR2GRAY)
        except:
            gray = img_2
        dot1.append(np.array(gray))
        labels1.append(1)
    except:
        pass

x_train, x_test, y_train, y_test = train_test_split(
    dot1, labels1, test_size=0.2, random_state=101)

from keras.utils import to_categorical

y_train1 = np.array(y_train)
y_test1 = np.array(y_test)
train_Y_one_hot = to_categorical(y_train1)
test_Y_one_hot = to_categorical(y_test)

# Replicate each grayscale image across 3 channels for the CNN input
x_train2 = np.zeros((len(x_train), 50, 50, 3))
for i in range(len(x_train)):
    x_train2[i, :, :, :] = np.array(x_train[i])[:, :, np.newaxis]

x_test2 = np.zeros((len(x_test), 50, 50, 3))
for i in range(len(x_test)):
    x_test2[i, :, :, :] = np.array(x_test[i])[:, :, np.newaxis]

#==================== Convolutional Neural Network =================

from keras.layers import Dense, Conv2D, Flatten, MaxPooling2D, Dropout
from tensorflow.keras.models import Sequential

# ======================= initialize the model ======================

model = Sequential()

# CNN layers
model.add(Conv2D(filters=16, kernel_size=2, padding="same",
                 activation="relu", input_shape=(50, 50, 3)))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=32, kernel_size=2, padding="same", activation="relu"))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=64, kernel_size=2, padding="same", activation="relu"))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(500, activation="relu"))
model.add(Dropout(0.2))
model.add(Dense(2, activation="softmax"))

# summarize the model
model.summary()

# compile the model (categorical cross-entropy matches the
# 2-unit softmax output trained with one-hot labels)
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])

# fit the model
history = model.fit(x_train2, train_Y_one_hot, batch_size=2,
                    epochs=10, verbose=1)
ssim = model.evaluate(x_test2, test_Y_one_hot, verbose=1)

print("=======================================")
print("------------- Convolutional Neural Network -------------")
print("=======================================")
print()

# Loss-based accuracy proxy used for the comparison chart
accuracy = max(history.history['loss'])
accuracy_cnn = 100 - accuracy
print()
print("Accuracy is :", accuracy_cnn, '%')

dot11 = []
labels11 = []

for img_name in data_1:
    img_1 = cv2.imread('DataSet/' + img_name)
    img_1 = cv2.resize(img_1, (50, 50))
    try:
        gray = cv2.cvtColor(img_1, cv2.COLOR_BGR2GRAY)
    except:
        gray = img_1
    dot11.append(np.array(gray))
    labels11.append(0)

for img_name in data_2:
    try:
        img_2 = cv2.imread('DataSet/' + img_name)
        img_2 = cv2.resize(img_2, (50, 50))
        try:
            gray = cv2.cvtColor(img_2, cv2.COLOR_BGR2GRAY)
        except:
            gray = img_2
        dot11.append(np.array(gray))
        labels11.append(1)
    except:
        pass

x_train, x_test, y_train, y_test = train_test_split(
    dot11, labels11, test_size=0.2, random_state=101)

y_train1 = np.array(y_train)
y_test1 = np.array(y_test)
train_Y_one_hot = to_categorical(y_train1)
test_Y_one_hot = to_categorical(y_test)

# Grayscale image stacks for the ANN input
x_train2 = np.zeros((len(x_train), 50, 50))
for i in range(len(x_train)):
    x_train2[i, :, :] = x_train[i]

x_test2 = np.zeros((len(x_test), 50, 50))
for i in range(len(x_test)):
    x_test2[i, :, :] = x_test[i]

print()
#======================== CLASSIFICATION ===========================

# initialize the model (ANN)
classifier = Sequential()

# defining the layers
classifier.add(Dense(activation="relu", input_dim=50, units=9,
                     kernel_initializer="uniform"))
classifier.add(Dense(activation="relu", units=14, kernel_initializer="uniform"))
classifier.add(Dense(activation="sigmoid", units=1, kernel_initializer="uniform"))

# compile the model
classifier.compile(optimizer='adam', loss='mae', metrics=['accuracy'])

# fitting the model
history = classifier.fit(x_train2, y_train1, batch_size=2, epochs=10)

print("=======================================")
print("------------- Artificial Neural Network --------------")
print("=======================================")
print()

accuracy = max(history.history['loss'])
accuracy_ann = 100 - accuracy
print()
print("Accuracy is :", accuracy_ann, '%')
print("The PSNR Value is ", value)
print()
print("The MSE Value is ", MSE)
print()
print("The SSIM Value is ", ssim[0])

#========================= COMPARISON ==============================

objects = ('CNN', 'ANN')
y_pos = np.arange(len(objects))
performance = [accuracy_cnn, accuracy_ann]

plt.bar(y_pos, performance, align='center', alpha=0.5)
plt.xticks(y_pos, objects)
plt.ylabel('Accuracy')
plt.title('Algorithm comparison')
plt.show()
