Mini D9
BACHELOR OF TECHNOLOGY
IN
ELECTRONICS AND COMMUNICATION ENGINEERING
Submitted by
M. RAMESH 208R1A04L3
M. BHAVANA 208R1A04L4
N. RAHUL 208R1A04L5
N. DURGAPRASAD 208R1A04L6
Dr. S. POONGODI
Professor
ECE DEPARTMENT
UGC AUTONOMOUS
2023-24
CMR ENGINEERING COLLEGE
UGC AUTONOMOUS
CERTIFICATE
This is to certify that the Industry Oriented mini-project work entitled
“SPECKLE NOISE REDUCTION OF SENTINEL-1 SAR DATA USING FAST
FOURIER TRANSFORM TEMPORAL FILTERING TO MONITOR PADDY
FIELD AREA” is being submitted by M.RAMESH bearing Roll No:208R1A04L3,
M.BHAVANA bearing Roll No:208R1A04L4, N.RAHUL bearing Roll
No: 208R1A04L5, and N. DURGAPRASAD bearing Roll No: 208R1A04L6, in Batch IV-1
semester, Electronics and Communication Engineering, and is a record of bonafide work
carried out by them during the academic year 2023-2024. The results embodied in this
report have not been submitted to any other University for the award of any degree.
EXTERNAL EXAMINER
ACKNOWLEDGEMENT
We sincerely thank the management of our college, CMR ENGINEERING
COLLEGE, for providing the facilities and support needed during our project work.
At this auspicious moment, we would like to express our gratitude to Dr.
SUMAN MISHRA, Head of the Department, ECE, for his consistent
encouragement during the progress of this project.
M.RAMESH 208R1A04L3
M. BHAVANA 208R1A04L4
N. RAHUL 208R1A04L5
N. DURGAPRASAD 208R1A04L6
CONTENTS
CHAPTERS PAGE
LIST OF ABBREVIATIONS i
LIST OF FIGURES ii
ABSTRACT iv
CHAPTER-1 INTRODUCTION
CHAPTER-2 LITERATURE SURVEY
CHAPTER-3 EXISTING SYSTEM
3.1 Introduction 8
3.2 Drawbacks 9
CHAPTER-4 PROPOSED SYSTEM
4.1 Introduction 12
CHAPTER-5 SOFTWARE DESCRIPTION
5.3 Wavelets 34
5.3.1 Modules 41
CHAPTER-6 RESULTS AND DISCUSSION 44
CHAPTER-7 APPLICATIONS AND ADVANTAGES
7.1 Applications 48
7.2 Advantages 50
CHAPTER-8 CONCLUSION AND FUTURE SCOPE
REFERENCES 54
ANNEXURE 56
LIST OF ABBREVIATIONS
ACRONYM ABBREVIATIONS
LIST OF FIGURES
LIST OF TABLES
Table 5.1.6: Assignment Operators 20
ABSTRACT
Synthetic Aperture Radar (SAR) images can be acquired in any weather
condition. It is nearly impossible to obtain a cloud-free optical image every month in
Indonesia, especially in Subang, so the use of SAR imagery for the monthly monitoring
of paddy is recommended. The disadvantage of SAR images is noise, known as speckle
noise. Because this noise reduces the quality of the image, reducing the speckle noise is
necessary. This research proposes an FFT-based algorithm to remove the speckle noise.
Because the frequency pattern of the FFT is periodic and the research focuses on the
paddy field area, one year of paddy growth data was used. There are one to three
planting times a year in the research area; most of the area is planted twice a year. Six
scenarios of the FFT filter were proposed and investigated to select the optimum
scenario. The scenarios differ in the number of FFT components used in the filtering.
The first scenario used FFT1 to FFT2, and the sixth scenario used FFT1 to FFT8. The
performance was measured by the correlation value between the original image and the
result of FFT filtering. The results show that the correlation increases as the number of
FFT components used in the filtering increases. The minimal FFT filtering is FFT1–FFT3
filtering, with an average correlation of 87%, but some acquisitions still have a
correlation of less than 85%. The optimum is FFT1–FFT5 filtering, with an average
correlation of 92%, and all correlations are above 85% for all data during a year. Using
the optimum scenario, the backscatter trend of paddy growth could be identified better
and more easily, so this technique is recommended for paddy growth monitoring.
CHAPTER-1
INTRODUCTION
Because SAR images are used in various applications, research on the technology behind SAR images is being actively
conducted around the world.
In contrast with the optical sensor, the active sensor of the SAR is accompanied
by speckle noise that arises from the coherent imaging mechanism. Speckle noise in
SAR images is generated by the random interference of many elementary reflectors
within one resolution cell. This noise has different features from the noise observed in
images that were obtained by passive sensors, such as the optical sensor. Speckle noise
appears as a form of multiplicative noise in SAR images and it has the characteristics of
a Rayleigh distribution. SAR images are used by observers to extract information and
identify targets. Speckle noise degrades SAR images and thus interferes with the
transfer of image information to the observer. Therefore, the development of effective
filtering methods in the reduction of speckle noise is critical for the analysis of
information that is contained in various SAR images.
Numerous studies have been conducted with the aim of extracting image
information from SAR images by removing speckle noise. Five main categories of
methods were applied in these studies: linear filtering, nonlinear filtering, partial
differential equation (PDE) filtering, hybrid methods, and filtering methods that are
based on the discrete wavelet transform (DWT).
The linear filter convolves an image with a symmetric mask and then
reconstructs each pixel value as a weighted average of the neighboring pixel values.
The mean filter and the Gaussian filter are typical linear filtering techniques that are
effective for simple, smoothing speckle noise reduction. The mean filter substitutes
each target pixel with the mean of several pixel values around it, with the target pixel
located in the center of the window. The mean filter exhibits low edge-preservation
performance because it does not distinguish flat, homogeneous areas from edges in the image.
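As an illustration, a 3×3 mean filter and a Gaussian filter can be applied to a single-band SAR amplitude image with OpenCV; this is a minimal sketch, and the file name and kernel sizes are assumed for illustration only:

import cv2

# hypothetical input file; any single-band SAR amplitude image will do
sar = cv2.imread('sar_image.png', cv2.IMREAD_GRAYSCALE)

mean_filtered = cv2.blur(sar, (3, 3))               # 3x3 mean (box) filter
gauss_filtered = cv2.GaussianBlur(sar, (3, 3), 0)   # 3x3 Gaussian filter

cv2.imwrite('sar_mean.png', mean_filtered)
cv2.imwrite('sar_gauss.png', gauss_filtered)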
1.2 OBJECTIVE OF THE PROJECT
The objective of "Speckle Noise Reduction of Sentinel-1 SAR Data Using Fast
Fourier Transform Temporal Filtering to Monitor Paddy Field Area" is to improve the
quality of Synthetic Aperture Radar (SAR) data acquired by the Sentinel-1 satellite in
order to better monitor and analyze changes in paddy fields.
Speckle Noise in SAR Data: Sentinel-1 SAR images of paddy fields are
inherently affected by speckle noise, a granular interference that hinders the
extraction of accurate information.
Impact on Agricultural Monitoring: Speckle noise in SAR images poses a
significant challenge for reliable monitoring of paddy fields, as it can obscure
crucial details related to crop growth, water levels, and other dynamic factors.
Specialized Speckle Reduction Techniques: There is a need for specialized
algorithms capable of effectively reducing speckle noise in SAR data while preserving
the essential features, such as edges and fine details, crucial for agricultural
monitoring.
Temporal Dynamics and Continuous Monitoring: Monitoring paddy fields requires
consideration of temporal dynamics, capturing changes over time. A method that
integrates temporal information through Fast Fourier Transform (FFT) is desirable.
1.4 ORGANIZATION OF THE PROJECT
Chapter 1 deals with the overview and objective of the project and the problem statement.
CHAPTER-2
LITERATURE SURVEY
The following base papers deal with various techniques for speckle removal in SAR
images. These include Lee filtering (Lee, 1980), Kuan filtering (Kuan et al., 1985), Frost
filtering (Frost et al., 1982) and gamma MAP filtering (Hsiao et al., 2002). Lee and Kuan
filters are adaptive mean filters, whereas the Frost filter is an adaptive weighted mean filter.
A GMAP filter requires prior knowledge of the probability density function (PDF) of the
image before it is applied (Jia et al., 2019). The image reflectivity is considered to follow a
gamma distribution instead of a Gaussian distribution. The gamma distribution is a two-parameter
family of continuous probability density functions used in statistical hypothesis testing;
special cases include the exponential, Erlang and chi-square distributions. The Gaussian
distribution is a bell-shaped curve that is symmetric about the mean, with an equal number of
data points on either side of the mean; it is otherwise called the normal distribution. A radar
cross section model for designing a speckle noise removal filter is
introduced in Achim et al. (2006) and it is based on the heavy-tailed Rayleigh density
function. The radar cross section is the effective area of the target as seen by the radar: a
fictitious area that intercepts the incident energy and, if that energy were scattered evenly in
all directions, would generate a radar echo equivalent to that of the target. Bayesian filters are
explained using Bayes' theorem, which derives a posterior probability from a prior pdf. A
Bayesian filter is software that evaluates the header as well as the content of an incoming
e-mail message and determines the likelihood that it is junk using Bayesian reasoning, also
known as Bayesian analysis; anti-virus tools should be used in combination with Bayesian
filtering. The wavelet-based despeckling algorithms are proposed in Gleich and
Datcu (2009) and Argenti et al. (2006); they are developed using MAP estimation and
undecimated wavelet decomposition. Here, the assumption is that each wavelet coefficient’s
pdf is generalized Gaussian (GG). The parameters of the GG pdf are estimated so as to be
space-varying within every wavelet frame; therefore, they can be adapted to the spatial context
of the image in addition to orientation and scale. The GARCH method is a prediction model
that may be used to examine a range of financial information, such as market indicators; it is
frequently applied by organizations to predict the volatility of stocks, bonds and
macroeconomic indicators. A 2-D GARCH model is used in despeckling of SAR
images as presented in Amirmazlaghani et al. (2008), a logarithmic transformation of the
original SAR image is decomposed into multi-scale wavelet domain. SAR images wavelet
coefficients have got non-Gaussian statistics which are fully specified by this model. A
sequential Monte Carlo method proposed in Gleich and Datcu (2009) is a model-based
Bayesian approach. Second-generation wavelets such as the bandelet (Lu et al., 2014), chirplet
(Lian & Jiang, 2017) and contourlet (Metwalli et al., 2014) have been developed in the last few
years. Despeckling SAR images using the contourlet transform (Li et al., 2006) and the
bandelet transform (Sveinsson et al., 2008) gives better despeckling results than purely
wavelet-based methods. The contourlet transform is a two-dimensional image representation in
which the contours of natural images, their most prominent elements, can be captured with
only a few coefficients. The bandelet is a multiresolution linear transformation that maintains
the geometrical integrity of images and textures; the bandelet transformation takes advantage
of the geometric regularity of an image's structure and is useful for analyzing image edges and
textures. In the wavelet domain, noise and image models have been defined and the denoised
image evaluated using MAP estimation in Gleich and Datcu (2009) and Argenti and Alparone
(2002). A maximum a posteriori probability (MAP) estimate in Bayesian inference is an
estimate of an unknown quantity that corresponds to the mode of the posterior distribution.
Based on observed data, MAP estimation may be used to produce point estimates of a set of
parameters. The speckle
noise is a type of multiplicative noise which is present in the SAR image (Yuan et al., 2014).
Speckle is a granular interference that occurs naturally in active radar, synthetic aperture
radar (SAR), clinical ultrasonography and optical coherence tomography images and decreases
their clarity; it is caused by the coherent reception of backscattered signals from many
distributed scatterers. The RADARSAT-2 dataset was subjected to speckle filtering, which
included modified Lee, boxcar, enhanced Lee–Sigma and IDAN filters. Speckle filtering
must reduce random noise while maintaining the spatial detail required for analysis. Hybrid
polarimetric information is used to assess the speckle filtering. Other wavelet denoising
techniques (Liu et al., 2017) use a logarithmic transform to change the multiplicative noise
model into an additive noise model (Achim et al., 2006). The logarithmically transformed
image is processed by employing filtering methods based on zero-mean Gaussian distributions
and zero-location Cauchy approaches (Bhuiyan et al., 2007). This is done to generate the MAP
estimator and the minimum mean absolute error estimator. The Kalman filter is an extension of
the Wiener filter. A recursive unscented Kalman filter (UKF) was presented in Subrahmanyam
et al. (2008), and it does not require any parameter estimation to incorporate a non-Gaussian
prior. The unscented Kalman filter is a nonlinear filtering technique that, unlike the EKF or
LKF, employs the unscented transform (UT) instead of Taylor-series linearization of the
nonlinear functions. The Kalman filter algorithm employs a dynamic model to describe the
state and is the optimal approach for estimating that state when the system model is linear and
the process and measurement noise are additive and Gaussian. A well-known algorithm is the
extended Kalman filter, which handles nonlinear models; it depends on linearization of the
evolution and measurement models using the Taylor series. The particle filtering
(Gleich & Datcu, 2009) method presents a solution for solving the nonlinear filtering and non-
Gaussian problems, which are having numerical Bayesian techniques. For solving tracking
problems (Godsill & Clapp, 2001), particle filters were adopted. Recently, it is seen applied
for resolving the problems related to source separation (Everson & Roberts, 2000), object
collision (Tamminen & Lampinen, 2006), segmentation (Li et al., 2006) and road detection
(Chen et al., 2006). Kalman filter can be applied for despeckling the SAR images (Geling &
Ionescu, 1994), but the filter parameters will modify the SAR image local area. Using the
Kalman filter, the segmented regions of SAR images are despeckled (Tsuchida et al., 2003). A
two-dimensional adaptive block Kalman filter process is presented in Azimi-Sadjadi and
Bannour (1991). Despeckling of SAR image with a particle filter is proposed in Gencaga et al.
(2005). Despeckling of SAR image using total variational (TV) models presented in Woo and
Yun (2011), an alternating minimization algorithm with shift technique (AMAST) algorithm
is used to remove the speckle noise, and it is based on the Lagrangian function. A linearized
proximal alternating minimization algorithm (LPAMA) (Yun & Woo, 2012) is used to reduce
the multiplicative noise in SAR images. It is an ‘m’th root transformed total variational model.
Two despeckling models are presented in Feng et al. (2014) based upon total generalized
variation (TGV) regularization. These two models are developed using an algorithm called the
primal–dual method with a TGV penalty. The first model is called the PDTGV exponential
model and the second is called the PDTGV I-divergence model. The PDTGV models can
recover smooth regions while preserving discontinuities at object boundaries, and they remove
the staircasing artifacts produced by TV methods (Ahmed et al., 2012; Balamurugan et al.,
2019; Nguyen et al., 2018). TV-based algorithms bring staircasing artifacts that are absent
when working with the PDTGV exponential model. The PDTGV I-divergence model avoids
inverse operations and inner iterations, whereby it obtains a considerable speed advantage.
CHAPTER-3
EXISTING SYSTEM
3.1 INTRODUCTION
Speckle noise reduction is a crucial step in improving the visual quality
and interpretability of synthetic aperture radar (SAR) images. SAR images are often
affected by speckle noise, which appears as granular or salt-and-pepper noise and can
obscure fine details in the images. To reduce speckle noise in SAR images, various
techniques have been developed, including those based on the statistical characteristics of
speckle. Here is an overview of existing speckle noise reduction techniques that use the
statistical characteristics of speckle in SAR images.
Lee Filter: The Lee filter is one of the earliest and simplest speckle reduction
techniques. It is a statistical filter that uses local statistical properties of the image
to reduce speckle noise. The filter estimates the local mean and variance of the
image and then filters the image accordingly (a minimal sketch is given after this list).
Frost Filter: The Frost filter is an adaptive speckle reduction technique that
takes into account the local statistics of the image. It uses an estimate of the local
signal-to-noise ratio (SNR) to adaptively filter the image, making it effective in
both homogeneous and heterogeneous regions.
Kuan Filter: The Kuan filter is another adaptive speckle reduction technique
that uses the local statistics of the image. It estimates the local signal level and
the speckle noise level and then applies a filter based on these estimates.
Gamma-MAP Filter: This filter is based on a maximum a posteriori (MAP)
estimation framework and takes into account the statistical distribution of
speckle noise in SAR images. It provides effective speckle reduction while
preserving image details.
Non-Local Means (NLM) Filter: The NLM filter is a powerful denoising
technique that uses non-local similarity information to reduce speckle noise. It
compares patches of pixels within the image to find similar regions and then
applies a weighted average to reduce noise.
Wavelet-Based Techniques: Wavelet-based methods decompose the SAR image
into different frequency components using wavelet transforms and then apply
speckle reduction to specific scales or components.
Markov Random Fields (MRF): MRF-based techniques model the
relationships between neighboring pixels in SAR images as a Markov random
field. By considering these relationships, they can reduce speckle noise while
preserving image structure.
Sparse Representation-Based Techniques: Sparse representation methods
exploit the sparsity property of SAR images in certain domains (e.g., wavelet or
dictionary-based representations) to separate noise from the signal. This can lead
to effective speckle reduction.
Deep Learning-Based Approaches: Deep neural networks, such as
convolutional neural networks (CNNs) and autoencoders, have also been applied
to speckle noise reduction in SAR images. These approaches learn complex
representations from the data and can achieve impressive results.
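As mentioned in the Lee filter item above, the sketch below illustrates one common way such an adaptive local-statistics filter is implemented. The window size and the assumed speckle-noise variance are illustrative parameters, so this is an outline rather than the exact filter used by any particular toolbox:

import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, window=7, noise_var=0.25):
    """Adaptive Lee-style filter: blends the local mean and the pixel value
    according to the estimated local signal variance."""
    img = img.astype(np.float64)
    local_mean = uniform_filter(img, size=window)
    local_sq_mean = uniform_filter(img ** 2, size=window)
    local_var = local_sq_mean - local_mean ** 2
    # weighting factor: close to 1 in textured areas, close to 0 in flat areas
    weights = local_var / (local_var + noise_var * local_mean ** 2 + 1e-12)
    return local_mean + weights * (img - local_mean)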
3.2 DRAWBACKS
Lee Filter: The Lee filter, while one of the earliest and simplest techniques for speckle
reduction in SAR images, has certain limitations:
1. Limited in Low SNR Environments
2. Parameter Dependency
3. Smoothing Effect
4. Edge Preservation
Frost Filter: The Frost filter, an adaptive speckle reduction technique used in SAR
image processing, addresses certain limitations:
1. Complex Computation
2. Adaptability Challenges
3. Performance in Low SNR Areas
Kuan filter: The Kuan filter, another adaptive speckle reduction technique used in SAR
image processing, presents its own set of limitations despite its adaptive nature:
1. Limited Adaptability
2. Parameter Sensitivity
1. Parameter Tuning
2. Artifact Generation
3. Complexity in Implementation
4. Computational Overhead
Deep Learning-Based Approaches: Deep Learning-Based Approaches in SAR image
processing offer significant advantages along with some specific limitations:
1. Data Dependency
2. Overfitting and Generalization
3. Black Box Nature
4. Computational Demands
CHAPTER-4
PROPOSED SYSTEM
4.1 INTRODUCTION
We propose an algorithm for the reduction of speckle noise and the preservation of
the edges in the SAR image. The proposed algorithm employs the SRAD (speckle reducing
anisotropic diffusion) filtering method as a preprocessing filter instead of moving directly to the
wavelet domain, since SRAD can be applied directly to the SAR image because it operates on the
image without log-compressed data. However, the SRAD-filtered image still includes speckle
noise, which is a form of multiplicative noise. Since most filtering methods are developed for
reducing additive white Gaussian noise (AWGN), the logarithmic transform is applied to the
SRAD result to convert the multiplicative noise into additive noise, after which the resulting
image contains additive noise. Subsequently, the two-dimensional (2D) DWT transforms the
log-transformed SRAD result into four sub-band images: the vertical sub-band image (LH), the
horizontal sub-band image (HL), the diagonal sub-band image (HH), and the approximate
sub-band image (LL). We employed the DWT up to two-level decomposition; the effect of the
algorithm was analyzed at one- and two-level decomposition, and the two-level decomposition of
the DWT shows the best results. Most of the speckle noise occurs in the high-frequency sub-band
images. Therefore, the soft threshold of the wavelet coefficients is applied only to the horizontal
and vertical sub-band images, which have similar energy, to preserve the original signal and
remove the noise signal. However, the diagonal sub-band image has low energy when compared
to the vertical and horizontal sub-band images.
For the diagonal sub-band image, we employ an IGF based on a new edge-aware
weighting method to preserve the weak original signal and suppress the noise signal. The
approximate sub-band image contains significant components of the image and is less affected
by the noise; however, noise still exists in the approximate sub-band image, so the GF is
employed to reduce the speckle noise and preserve the edges there. Once the noise is removed,
each sub-band image is reconstructed by wavelet reconstruction, and the exponential transform
is performed to reverse the logarithmic transform. Finally, we obtain the despeckled image.
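A compact sketch of the wavelet-domain portion of this pipeline is given below using PyWavelets. The SRAD pre-filter and the edge-aware guided filtering of the diagonal and approximate sub-bands are simplified (soft thresholding is applied to LH and HL only), and the wavelet family and threshold value are assumptions, so this is only an outline of the described flow:

import numpy as np
import pywt

def despeckle_wavelet(srad_img, wavelet='db2', threshold=0.04):
    # log transform turns multiplicative speckle into additive noise
    log_img = np.log1p(srad_img.astype(np.float64))

    # two-level 2D DWT, as described above
    coeffs = pywt.wavedec2(log_img, wavelet, level=2)
    new_coeffs = [coeffs[0]]                     # approximate sub-band kept (GF step omitted here)
    for (cH, cV, cD) in coeffs[1:]:
        cH = pywt.threshold(cH, threshold, mode='soft')   # horizontal sub-band
        cV = pywt.threshold(cV, threshold, mode='soft')   # vertical sub-band
        new_coeffs.append((cH, cV, cD))          # diagonal sub-band left for the IGF step
    rec = pywt.waverec2(new_coeffs, wavelet)

    # exponential transform reverses the logarithm
    return np.expm1(rec)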
Fig 4.2.1: Processing Flow of Proposed Method
The processing flow is shown in Fig 4.2.1. The pre-processing was done online
using GEE. The results were downloaded and then processed offline. The missing data
were interpolated using the previous and the following acquisition dates. All 31
acquisitions in the one-year period were transformed to the frequency domain using a
temporal FFT; the results are FFT1 to FFT31. Then FFT1 to FFTn were selected and
transformed back using the IFFT (inverse FFT), with the remaining FFT components set
to zero. The result of the IFFT is the backscatter in the time domain; this result is the
filtered image series. Six scenarios of FFT filtering were investigated to select the
optimum scenario. The first scenario used FFT1 to FFT2, and the sixth scenario used
FFT1 to FFT8. The performance was measured by the correlation value between the
original image and the result of FFT filtering.
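A minimal sketch of this temporal filtering with NumPy is given below. It assumes the 31 gap-filled acquisitions are stacked into an array of shape (31, rows, cols) and that a scenario keeps the first n FFT components; the array name gamma0_stack and the choice of n are illustrative:

import numpy as np

def fft_temporal_filter(stack, n_keep):
    """Keep FFT1..FFTn along the time axis, zero the rest, and invert."""
    spectrum = np.fft.fft(stack, axis=0)              # temporal FFT for every pixel
    filtered = np.zeros_like(spectrum)
    filtered[:n_keep] = spectrum[:n_keep]             # retain the first n components
    if n_keep > 1:                                    # keep conjugate pairs so the IFFT is real
        filtered[-(n_keep - 1):] = spectrum[-(n_keep - 1):]
    return np.real(np.fft.ifft(filtered, axis=0))

# e.g. the FFT1-FFT5 scenario on a (31, rows, cols) backscatter stack:
# filtered_stack = fft_temporal_filter(gamma0_stack, n_keep=5)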
Data:
Sentinel-1 data were acquired from October 2019 to September 2020; there are
31 acquisitions. Each single acquisition was pre-processed in Google Earth Engine
(GEE) and downloaded.
Ascending/Descending: Descending
Pre-processing:
The input data from the GRD Sentinel-1 collection in GEE are already in
Sigma0 backscatter. Additional processing was applied: first, a transformation from
Sigma0 to Gamma0, which was then flattened to correct the topographic effect. The
final pixel resolution was saved at 20 x 20 m. Gap-filling was applied to fill the
no-data pixels caused by missing acquisitions and by layover during the topographic
flattening process. The linear interpolation technique was used in the gap-filling
process.
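A sketch of this per-pixel gap-filling is shown below; it assumes missing values are stored as NaN in a (time, rows, cols) stack and uses simple linear interpolation between the previous and following acquisitions:

import numpy as np

def fill_gaps_linear(stack):
    """Linearly interpolate NaN values along the time axis of a 3-D stack."""
    t = np.arange(stack.shape[0])
    filled = stack.copy()
    flat = filled.reshape(stack.shape[0], -1)      # view: (time, pixels)
    for p in range(flat.shape[1]):
        series = flat[:, p]
        bad = np.isnan(series)
        if bad.any() and (~bad).any():
            series[bad] = np.interp(t[bad], t[~bad], series[~bad])
    return filled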
FFT:
The Fourier transform was introduced by Jean Baptiste Joseph Fourier and is
used to reduce computational complexity in a wide variety of fields, such as earth
science. The FFT converts a signal from the time domain to the frequency domain.
The FFT is a basic approach in remote sensing image processing; it is applied to
process hyperspectral, high-spatial-resolution and high-temporal-resolution data.
The FFT algorithm can be used for stripe-noise removal, image compression and
image registration.
FFT1 – FFTn:
All 31 acquisitions in the one-year period were transformed to the frequency
domain using the temporal FFT; the results are FFT1 to FFT31.
FFT1 - FFT2:
The first scenario used FFT1 to FFT2. The performance was measured by the
correlation value between the original image and the result of FFT filtering.
FFT1 to FFT3:
The minimal FFT filtering is FFT1–FFT3 filtering, with an average
correlation of 87%, but some acquisitions still have a correlation of less than 85%.
FFT1 to FFT-i:
The optimum is FFT1–FFTi filtering, with an average correlation of 92%,
and all correlations are above 85% for all data during the year.
Inverse FFT:
FFT1 to FFTn were selected and transformed back using the IFFT (inverse
FFT), with the remaining FFT components set to zero. The result of the IFFT is the
backscatter in the time domain; this result is the filtered image series.
FFT Filtered:
Using the optimum scenario, the backscatter trend of paddy growth could be
identified better and more easily, so this technique is recommended for paddy growth
monitoring.
Comparison and analysis:
To assess the FFT results, the analysis uses the temporal domain and the spatial
domain. In the temporal domain, the analysis compares the original temporal trend with
the FFT temporally filtered trend. In the spatial domain, the analysis compares the
original image with the FFT filtering results; the comparison is done for all images in a
year.
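The spatial comparison described here can be sketched as a per-date Pearson correlation between the original and the FFT-filtered image; the array names below are illustrative:

import numpy as np

def image_correlation(original, filtered):
    """Pearson correlation between two images of the same shape."""
    return np.corrcoef(original.ravel(), filtered.ravel())[0, 1]

# correlation for every acquisition date in the one-year stack
# correlations = [image_correlation(orig_stack[i], filt_stack[i])
#                 for i in range(orig_stack.shape[0])]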
CHAPTER-5
SOFTWARE DESCRIPTION
5.1 INTRODUCTION TO PYTHON
• History of Python
Python was developed by Guido van Rossum in the late eighties and early nineties at
the National Research Institute for Mathematics and Computer Science in the Netherlands.
Python is derived from many other languages, including ABC, Modula-3, C, C++, Algol-68,
SmallTalk, and Unix shell and other scripting languages.
Python is copyrighted. Like Perl, Python source code is now available under the GNU
General Public License (GPL). Python is now maintained by a core development team at the
institute, although Guido van Rossum still holds a vital role in directing its progress.
• Python Features
1. Easy-to-learn: Python has few keywords, simple structure, and a clearly defined
syntax. This allows the student to pick up the language quickly.
2. Easy-to-read: Python code is more clearly defined and visible to the eyes.
3. Easy-to-maintain: Python's source code is fairly easy-to-maintain.
4. A broad standard library: The bulk of Python's library is very portable and cross-
platform compatible on UNIX, Windows, and Macintosh.
5. Interactive Mode: Python has support for an interactive mode which allows
interactive testing and debugging of snippets of code.
6. Portable: Python can run on a wide variety of hardware platforms and has the
same interface on all platforms.
7. Extendable: You can add low-level modules to the Python interpreter. These
modules enable programmers to add to or customize their tools to be more
efficient.
8. Databases: Python provides interfaces to all major commercial databases.
9. GUI Programming: Python supports GUI applications that can be created and
ported to many system calls, libraries and windows systems, such as Windows
MFC, Macintosh, and the X Window system of Unix.
10. Scalable: Python provides a better structure and support for large programs than
shell scripting.
• Operators in Python
TABLE 5.1.2: BITWISE OPERATORS
TABLE 5.1.7: RELATIONAL OPERATORS
• List
The list is the most versatile data type available in Python; it can be written as a list
of comma-separated values (items) between square brackets. An important thing about a list is
that the items in a list need not be of the same type.
Creating a list is as simple as putting different comma-separated values between square
brackets. For example –
list2 = [1, 2, 3, 4, 5]
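Elements of a list can be accessed, sliced and modified in place; a short illustrative snippet:

print(list2[0])      # 1
print(list2[1:3])    # [2, 3]
list2[0] = 10        # lists are mutable
list2.append(6)
print(list2)         # [10, 2, 3, 4, 5, 6]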
• Tuples
A tuple is a sequence of immutable Python objects. Tuples are sequences, just like
lists. The differences between tuples and lists are that tuples cannot be changed, unlike lists,
and tuples use parentheses, whereas lists use square brackets.
tup2 = (1, 2, 3, 4, 5)
tup1 = ()
To write a tuple containing a single value you have to include a comma, even though
there is only one value –
tup1 = (50,)
Like string indices, tuple indices start at 0, and they can be sliced, concatenated, and so on.
• Accessing Values in Tuples:
To access values in a tuple, use the square brackets for slicing along with the index or
indices to obtain the value available at that index. For example –
tup1 = ('physics', 'chemistry', 1997, 2000)
tup2 = (1, 2, 3, 4, 5, 6, 7)
print(tup1[0])    # physics
print(tup2[1:5])  # (2, 3, 4, 5)
• Updating Tuples:
Tuples are immutable, which means you cannot update or change the values of tuple
elements. We are able to take portions of existing tuples to create new tuples, as the following
example demonstrates –
tup3 = tup1 + tup2
print(tup3)
Removing individual tuple elements is not possible. There is, of course, nothing
wrong with putting together another tuple with the undesired elements discarded.
To explicitly remove an entire tuple, just use the del statement. For example:
print(tup2)
del tup2
• Dictionary
Each key is separated from its value by a colon (:), the items are separated by commas,
and the whole thing is enclosed in curly braces. An empty dictionary without any items is
written with just two curly braces, like this: {}.
Keys are unique within a dictionary while values may not be. The values of a
dictionary can be of any type, but the keys must be of an immutable data type such as strings,
numbers, or tuples.
To access dictionary elements, you can use the familiar square brackets along with the
key to obtain its value. Following is a simple example –
• Updating Dictionary:
dict['School'] = "DPS School"; # Add new entry
Result –
We can either remove individual dictionary elements or clear the entire contents of a
dictionary. You can also delete entire dictionary in a single operation.
To explicitly remove an entire dictionary, just use the del statement. Following is a
simple example –
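The dictionary name and values below are illustrative; the snippet covers creating, accessing, updating and deleting entries:

student = {'Name': 'Zara', 'Age': 7, 'Class': 'First'}
print(student['Name'])            # access by key
student['School'] = "DPS School"  # add a new entry
student['Age'] = 8                # update an existing entry
del student['Class']              # remove one entry
student.clear()                   # remove all entries
del student                       # delete the entire dictionary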
• Defining a Function
Simple rules to define a function in Python.
Function blocks begin with the keyword def followed by the function name and
parentheses ( ( ) ).
Any input parameters or arguments should be placed within these parentheses. You
can also define parameters inside these parentheses.
The first statement of a function can be an optional statement - the documentation
string of the function or docstring.
The code block within every function starts with a colon (:) and is indented.
The statement return [expression] exits a function, optionally passing back an
expression to the caller. A return statement with no arguments is the same as return
None.
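A minimal illustration of these rules (the function name and message are assumed for the example):

def greet(name):
    "This docstring describes the function."
    message = "Hello, " + name
    return message

print(greet("Python"))   # Hello, Python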
• Calling a Function
Defining a function only gives it a name, specifies the parameters that are to be
included in the function and structures the blocks of code. Once the basic structure of a
function is finalized, you can execute it by calling it from another function or directly from
the Python prompt.
• Function Arguments
You can call a function by using the following types of formal arguments:
Required arguments
Keyword arguments
Default arguments
Variable-length arguments
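A small sketch illustrating all four kinds of arguments; the names and values are illustrative:

def describe(name, age=18, *hobbies, **extra):
    # name: required, age: default, *hobbies: variable-length, **extra: keyword
    print(name, age, hobbies, extra)

describe("Ravi")                               # required argument only
describe("Ravi", 21)                           # default argument overridden
describe(age=20, name="Ravi")                  # keyword arguments
describe("Ravi", 21, "cricket", "chess", city="Hyderabad")  # variable-length arguments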
Before you can read or write a file, you have to open it using Python's built-in open()
function. This function creates a file object, which would be utilized to call other support
methods associated with it.
Syntax:
file object = open(file_name[, access_mode][, buffering])
The close() Method
The close() method of a file object flushes any unwritten information and closes the
file object, after which no more writing can be done. Python automatically closes a file when
the reference object of a file is reassigned to another file. It is a good practice to use the
close() method to close a file.
Syntax:
fileObject.close();
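For example (the file name and contents are illustrative):

fileObject = open("notes.txt", "w")                  # open the file for writing
fileObject.write("Python file handling example\n")   # write a line of text
fileObject.close()                                   # flush and close the file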
• Exception
An exception is an event, which occurs during the execution of a program that disrupts
the normal flow of the program's instructions. In general, when a Python script encounters a
situation that it cannot cope with, it raises an exception. An exception is a Python object that
represents an error. When a Python script raises an exception, it must either handle the
exception immediately or it terminates and quits.
• Handling an exception
If you have some suspicious code that may raise an exception, you can defend your
program by placing the suspicious code in a try: block. After the try: block, include
an except: statement, followed by a block of code which handles the problem as elegantly as
possible.
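A small illustration of the try/except pattern:

try:
    value = int("abc")             # raises ValueError
except ValueError as error:
    print("Could not convert:", error)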
The Python standard for database interfaces is the Python DB-API. Most Python
database interfaces adhere to this standard.
You can choose the right database for your application. Python Database API supports a
wide range of database servers such as −
GadFly
mSQL
MySQL
PostgreSQL
Microsoft SQL Server 2000
Informix
Interbase
Oracle
Sybase
The DB API provides a minimal standard for working with databases using Python
structures and syntax wherever possible. This API includes the following: importing the API
module, acquiring a connection with the database, issuing SQL statements and stored
procedures, and closing the connection.
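As an illustration of this pattern, the standard-library sqlite3 module (a DB-API 2.0 implementation) can be used; the database file and table below are illustrative:

import sqlite3

conn = sqlite3.connect("project.db")          # connection object
cur = conn.cursor()                           # cursor object
cur.execute("CREATE TABLE IF NOT EXISTS results (name TEXT, psnr REAL)")
cur.execute("INSERT INTO results VALUES (?, ?)", ("Gaussian", 28.5))
conn.commit()
for row in cur.execute("SELECT * FROM results"):
    print(row)
conn.close()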
Fig 5.2.1: Python Releases for Windows
Fig 5.2.2: Install Python 3.10.10 (64-bit)
4. If you’re just getting started with Python and you want to install it with default features
as described in the dialog, then click Install Now and go to Step 4 - Verify the Python
Installation. To install other optional and advanced features, click Customize
installation and continue.
5. The Optional Features include common tools and resources for Python and you can
install all of them, even if you don’t plan to use them.
Fig 5.2.3: Optional Features
Fig 5.2.4: Advanced Options
(i) Install for all users: recommended if you’re not the only user on this computer
(ii) Associate files with Python: recommended, because this option associates files such as
.py scripts with Python
(v) Precompile standard library: not required, it might slow down the installation
(vi) Download debugging symbols and Download debug binaries: recommended only if you
intend to create C or C++ extensions
Make note of the Python installation directory in case you need to reference it later.
8. Click Install to start the installation.
9. After the installation is complete, a Setup was successful message displays.
If you want to access Python through the command line but you didn’t add Python to
your environment variables during installation, then you can still do it manually.
Before you start, locate the Python installation directory on your system. The
following directories are examples of the default directory paths:
C:\Program Files\Python310: if you selected Install for all users during installation,
then the directory will be system wide.
C:\Users\Sammy\AppData\Local\Programs\Python\Python310: if you didn’t select Install
for all users during installation, then the directory will be in the Windows user path.
Note that the folder name will be different if you installed a different version, but will still
start with Python.
1. Go to Start and enter advanced system settings in the search bar.
2. Click View advanced system settings.
3. In the System Properties dialog, click the Advanced tab and then click Environment
Variables.
4. Depending on your installation:
If you selected Install for all users during installation, select Path from the list
of System Variables and click Edit.
If you didn’t select Install for all users during installation, select Path from the
list of User Variables and click Edit.
5. Click New and enter the Python directory path, then click OK until all the dialogs are
closed.
You can verify whether the Python installation is successful either through the
command line or through the Integrated Development Environment (IDLE) application, if you
chose to install it.
Go to Start and enter cmd in the search bar. Click Command Prompt.
Enter the following command in the command prompt:
python --version
You can also check the version of Python by opening the IDLE application. Go
to Start and enter python in the search bar and then click the IDLE app, for example IDLE
(Python 3.10 64-bit).
Fig 5.2.6: Python IDLE Shell 3.10.10
You can start coding in Python using IDLE or your preferred code editor.
Conclusion
You’ve installed Python on your Windows 10 computer and are ready to start learning
and programming in Python.
5.3 WAVELETS
A wavelet is a wave-like oscillation with an amplitude that starts out at zero, increases,
and then decreases back to zero. It can typically be visualized as a "brief oscillation" like one
might see recorded by a seismograph or heart monitor. Generally, wavelets are purposefully
crafted to have specific properties that make them useful for signal processing. Wavelets can
be combined, using a "shift, multiply and sum" technique called convolution, with portions of
an unknown signal to extract information from the unknown signal.
For example, a wavelet could be created to have a frequency of Middle C and a short
duration of roughly a 32nd note. If this wavelet were to be convolved at periodic intervals with
a signal created from the recording of a song, then the results of these convolutions would be
useful for determining when the Middle C note was being played in the song. Mathematically,
the wavelet will resonate if the unknown signal contains information of similar frequency -
just as a tuning fork physically resonates with sound waves of its specific tuning frequency.
This concept of resonance is at the core of many practical applications of wavelet theory.
As a mathematical tool, wavelets can be used to extract information from many
different kinds of data, including - but certainly not limited to - audio signals and images. Sets
of wavelets are generally needed to analyze data fully. A set of "complementary" wavelets
will deconstruct data without gaps or overlap so that the deconstruction process is
mathematically reversible. Thus, sets of complementary wavelets are useful in wavelet based
compression/decompression algorithms where it is desirable to recover the original
information with minimal loss.
Wavelet transforms
Wavelet transforms are classified into discrete wavelet transforms and continuous
wavelet transforms. Note that both DWT and CWT are continuous-time (analog) transforms.
They can be used to represent continuous-time (analog) signals. CWTs operate over every
possible scale and translation whereas DWTs use a specific subset of scale and translation
values or representation grid.
Wavelet Transform:
The wavelet coefficients c_jk are then given by
c_jk = 2^(j/2) ∫ x(t) ψ(2^j t − k) dt
Here, a = 2^(−j) is called the binary dilation or dyadic dilation, and b = k·2^(−j) is the binary or
dyadic position.
There are a large number of wavelet transforms each suitable for different applications. For
full list see list of wavelet-related transforms but the common ones are listed below:
Lifting Scheme
Complex Wavelet Transform
The continuous wavelet transform of a signal x(t) can be written as
X_w(a, b) = (1/√|a|) ∫ x(t) ψ*((t − b)/a) dt
where ψ(t) is a continuous function in both the time domain and the frequency domain
called the mother wavelet and the asterisk represents the operation of complex conjugation. The
main purpose of the mother wavelet is to provide a source function to generate the daughter
wavelets, which are simply the translated and scaled versions of the mother wavelet. To recover
the original signal x(t), the inverse continuous wavelet transform can be exploited:
x(t) = C_ψ^(−1) ∫∫ X_w(a, b) (1/√|a|) ψ̃((t − b)/a) db (da/a²)
where ψ̃(t) is the dual function of ψ(t), and the dual function should satisfy the resolution of
the identity. Sometimes ψ̃(t) = ψ(t), where
C_ψ = ∫ |ψ̂(ω)|² / |ω| dω
is called the admissibility constant and ψ̂ is the Fourier transform of ψ. For a successful
inverse transform, the admissibility constant has to satisfy the admissibility condition
0 < C_ψ < +∞.
Mother wavelet
The scaling function is compactly supported if and only if the scaling filter h has a finite
support, and their supports are the same. For instance, if the support of the scaling function is
[N1, N2], then the wavelet is [(N1-N2+1)/2,(N2-N1+1)/2]. On the other hand, the kth moments
can be expressed by the following equation
The results of the CWT are many wavelet coefficients C, which are a function of scale
and position.
Multiplying each coefficient by the appropriately scaled and shifted wavelet yields the
constituent wavelets of the original signal.
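As a small illustration, PyWavelets can compute CWT coefficients as a function of scale and position; the test signal and the set of scales below are assumptions for the example:

import numpy as np
import pywt

t = np.linspace(0, 1, 400)
signal = np.sin(2 * np.pi * 12 * t)               # a 12 Hz test tone
scales = np.arange(1, 64)
coeffs, freqs = pywt.cwt(signal, scales, 'morl')  # coefficients vs. scale and position
print(coeffs.shape)                               # (63, 400)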
Continuous wavelets
Real-valued:
Beta wavelet
Hermitian wavelet
Hermitian hat wavelet
Mexican hat wavelet
Shannon wavelet
Complex-valued:
A discrete wavelet transform (DWT) is any wavelet transform for which the wavelets
are discretely sampled. As with other wavelet transforms, a key advantage it has over Fourier
transforms is temporal resolution: it captures both frequency and location information
(location in time).
Definition:
The DWT of a signal x is calculated by passing it through a series of filters. First the
samples are passed through a low pass filter with impulse response g resulting in a
convolution of the two:
The signal is also decomposed simultaneously using a high-pass filter h, the outputs
giving the detail coefficients (from the high-pass filter) and the approximation coefficients (from
the low-pass filter). It is important that the two filters are related to each other; they are known
as a quadrature mirror filter pair.
However, since half the frequencies of the signal have now been removed, half the
samples can be discarded according to Nyquist’s rule. The filter outputs are then subsampled
by 2 (in Mallat's and the common notation the convention is the opposite, g denoting the high-pass
and h the low-pass filter):
This decomposition has halved the time resolution since only half of each filter output
characterizes the signal. However, each output has half the frequency band of the input so the
frequency resolution has been doubled.
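This filter-bank decomposition can be illustrated with PyWavelets; the test signal is assumed:

import pywt

signal = [3, 7, 1, 1, -2, 5, 4, 6]
cA, cD = pywt.dwt(signal, 'haar')   # low-pass (approximation) and high-pass (detail) outputs
print(cA)                            # approximation coefficients, subsampled by 2
print(cD)                            # detail coefficients, subsampled by 2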
5.3.1 MODULES:
Image Selection
Image preprocessing
Image Denoising
Classification
Performance Analysis
IMAGE SELECTION:
IMAGE PREPROCESSING:
Image pre-processing is the process of removing the unwanted noise from the input
image.
Image interpolation occurs when you resize or distort your image from one pixel grid
to another.
Image resizing is necessary when you need to increase or decrease the total number of
pixels, whereas remapping can occur when you are correcting for lens distortion or
rotating an image.
An interpolation technique that reduces the visual distortion caused by the fractional
zoom calculation is the bilinear interpolation algorithm, where the fractional part of the
pixel address is used to compute a weighted average of pixel brightness values over a
small neighborhood of pixels in the source image.
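For example, bilinear interpolation is the default choice when resizing with OpenCV; the file name and target size below are illustrative:

import cv2

img = cv2.imread('input_image.png')
resized = cv2.resize(img, (256, 256), interpolation=cv2.INTER_LINEAR)  # bilinear interpolation
cv2.imwrite('resized_image.png', resized)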
IMAGE DENOISING:
CLASSIFICATION:
The SRCNN is a deep convolutional neural network that learns end-to-end mapping of
low resolution to high resolution images.
As a result, we can use it to improve the image quality of low resolution images.
SRCNN takes a low-resolution image as input and converts (maps) it to a high-
resolution image.
It learns this mapping method by using training data.
The great thing about SRCNN is that it is so simple.
It has only three layers including the output layer. The first layer corresponds to the
extraction of patches, the second layer, to non-linear mapping, and the third layer,
to reconstruction.
For training, we need a pair of high-resolution images (correct data) and low-
resolution images (input data). However, the size of the input and output images is
the same in the SRCNN network.
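A minimal Keras sketch of the three-layer SRCNN structure described above is given here; the filter counts and kernel sizes follow the original SRCNN paper and are assumptions as far as this project is concerned:

from keras.models import Sequential
from keras.layers import Conv2D

srcnn = Sequential()
# patch extraction and representation
srcnn.add(Conv2D(64, (9, 9), activation='relu', padding='same', input_shape=(None, None, 1)))
# non-linear mapping
srcnn.add(Conv2D(32, (1, 1), activation='relu', padding='same'))
# reconstruction (output layer)
srcnn.add(Conv2D(1, (5, 5), activation='linear', padding='same'))
srcnn.compile(optimizer='adam', loss='mse')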
PERFORMANCE ANALYSIS
The Final Result will get generated based on the overall classification and prediction.
The performance of this proposed approach is evaluated using some measures like
• PSNR
Peak Signal to Noise Ratio (PSNR) is a commonly used metric to define the similarity
between two images. It is calculated using the Mean Squared Error (MSE) of the pixels and the
maximum possible pixel value (MAX_I) as follows:
PSNR = 10 · log10( MAX_I² / MSE )
A high PSNR value corresponds to a high similarity between two images and a low
value corresponds to a low similarity respectively.
• SSIM
The SSIM index is calculated on various windows of an image. The measure between
two windows x and y of common size N×N is:
SSIM(x, y) = ((2 μ_x μ_y + c_1)(2 σ_xy + c_2)) / ((μ_x² + μ_y² + c_1)(σ_x² + σ_y² + c_2))
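Both measures can be computed with scikit-image; the short sketch below assumes two 8-bit images of equal size named original and denoised:

from skimage.metrics import peak_signal_noise_ratio, structural_similarity

psnr_value = peak_signal_noise_ratio(original, denoised, data_range=255)   # in dB
ssim_value = structural_similarity(original, denoised, data_range=255)     # in [0, 1]
print("PSNR:", psnr_value, "SSIM:", ssim_value)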
CHAPTER-6
RESULTS AND DISCUSSION
After heading to the destination folder containing the project code and datasets using the
command prompt, execute the Python file; a pop-up box will then appear asking you to select an
image. After navigating to the folder containing the datasets, browse to an image.
The selected image will then undergo several processes, popping up multiple windows as shown
in the figures below.
Fig 6.2: Image Resize
The FastNDenoising step applies advanced techniques, such as non-local means and
wavelet transforms, to remove noise while preserving image details, resulting in a cleaner and
more visually appealing resized image.
Gaussian blur involves a change in the specific denoising method used. It simplifies the
process, applying a uniform blurring effect by averaging pixel values within a defined
neighborhood, irrespective of non-local similarities. This shift results in a smoother but less
nuanced image, as Gaussian blur lacks the complexity of preserving specific image
characteristics present in FastNDenoising.
TABLE 6.1: Performance Comparison
Results are clearly visible in each figure after the specific processes are performed. In the
end, the image is passed to the final execution stage, which performs multiple runs on the image
and allows a better differentiation between the CNN and the ANN, after which the execution ends.
CHAPTER-7
APPLICATIONS AND ADVANTAGES
7.1 APPLICATIONS
The speckle noise reduction and FFT temporal filtering in monitoring paddy field areas
using Sentinel-1 SAR data have diverse and impactful applications, ranging from agricultural
management to environmental assessment and disaster response.
1. Agricultural Monitoring:
Accurate monitoring of paddy fields over time, enabling farmers to assess crop
growth, detect changes in water levels, and make informed decisions regarding
irrigation and crop management.
Identification of variations in crop health within paddy fields, allowing for early
detection of potential issues such as diseases, pests, or nutrient deficiencies.
4. Water Management:
Monitoring changes in water levels within paddy fields, aiding in efficient water
management. This is particularly crucial in regions where water resources are limited,
helping to optimize irrigation practices.
5. Environmental Impact Assessment:
7. Precision Agriculture:
Demonstration of the practical application of Sentinel-1 SAR satellite data for
agriculture and land monitoring. This encourages the utilization of satellite data in
similar projects and research endeavors.
7.2 ADVANTAGES
It offers several advantages, making it a valuable tool for agricultural and environmental
monitoring:
The speckle noise reduction techniques enhance the quality of Sentinel-1 SAR
data, ensuring that the monitored paddy field areas are represented more accurately.
This leads to more reliable and precise information for analysis.
The project utilizes Fast Fourier Transform temporal filtering, enabling accurate
monitoring of changes in paddy field areas over time. This is crucial for assessing crop
growth, water levels, and other dynamic factors in agriculture.
3. Enhanced Visualization:
By reducing speckle noise and applying temporal filtering, the project enhances
the visualization of paddy field areas in SAR data. This improved visual representation
aids in the interpretation of changes and patterns over time.
The project introduces automation to the monitoring process, allowing for efficient
and continuous assessment of paddy fields. Automated analysis reduces the need for
manual intervention and accelerates the decision-making process.
5. Quantitative Analysis:
Farmers and agricultural stakeholders can benefit from the project's insights into
paddy field conditions. The data obtained can inform decisions related to irrigation,
crop health, and overall management practices, leading to optimized agricultural
outcomes.
The project provides a data-driven approach to decision-making in agriculture.
Farmers, policymakers, and researchers can make informed decisions based on the
analyzed data, improving overall efficiency and resource utilization.
CHAPTER-8
CONCLUSION AND FUTURE SCOPE
CNNs also demonstrate less parameter sensitivity compared to ANNs. Their ability to
learn representations from data reduces the reliance on manual parameter tuning. Additionally,
the intricate structures present in complex noise patterns, which are challenging for ANNs, can
be better discerned by CNNs due to their proficiency in capturing hierarchical representations
through convolutional layers.
Moreover, CNNs excel in generalizing across diverse SAR datasets by learning robust
and adaptable features. Their capability to discern intricate patterns even in low signal-to-
noise ratio (SNR) regions stands as a promising solution. These attributes suggest that
transitioning to a CNN-based approach for SAR image denoising could address many
limitations inherent in existing ANN-based methods. Deep neural networks (DNNs) offer good
future scope for the project.
REFERENCES
[1] Zhao, R.; Li, Y.; Ma, M. Mapping Paddy Rice with Satellite Remote Sensing: A
Review. Sustainability Vol.13, 503, 2021.
[2] Wang, S.; di Tommaso, S.; Faulkner, J.; Friedel, T.; Kennepohl, A.; Strey, R.; Lobell,
D.B. Mapping crop types in Southeast India with smartphone crowdsourcing and
deep learning. Remote Sens. Vol.12, 2957, 2020.
[3] Ding, M.; Guan, Q.; Li, L.; Zhang, H.; Liu, C.; Zhang, L. Phenology-based rice
paddy mapping using multi-source satellite imagery and a fusion algorithm applied
to the Poyang Lake Plain, Southern China. Remote Sens. Vol.12, 1022, 2020.
[4] Ghani H A, Razwan M, Malek A, Fadzli M, Azmi K, Muril M J and Azizan A review
on sparse Fast Fourier Transform applications in image processing Int. J. Electr.
Comput. Eng. Vol.10, 1346–51, 2020.
[5] Hoang-Phi Phung, Nguyen Lam-Dao, Thong Nguyen-Huy, Thuy Le-Toan, Armando
A. Apan, “Monitoring rice growth status in the Mekong Delta, Vietnam using
multitemporal Sentinel-1 data,” J. Appl. Remote Sens. Vol.14(1), 014518, 2020.
[6] Lestari A I, Kushardono D and Sensing R The use of C-Band Synthetic Aperture
Radar Satellite Data for Rice Plant Growth Phase Identification Int. J. Remote Sens.
Earth Sci. Vol.16, 31–4, 2019.
[7] Bazzi, H.; Baghdadi, N.; El Hajj, M.; Zribi, M.; Minh, D.H.T.; Ndikumana, E.;
Courault, D.; Belhouchette, H. Mapping paddy rice using Sentinel-1 SAR time series
in Camargue, France. Remote Sens. Vol.11, 887, 2019.
[8] Zhao, H.; Chen, Z.; Jiang, H.; Jing, W.; Sun, L.; Feng, M. Evaluation of three deep
learning models for early crop classification using sentinel-1A imagery time series—
A case study in Zhanjiang, China. Remote Sens. Vol.11, 2673, 2019.
[9] Yin, Q.; Liu, M.; Cheng, J.; Ke, Y.; Chen, X. Mapping rice planting area in
Northeastern China using spatiotemporal data fusion and phenology-based method.
Remote Sens. Vol.11, 1699, 2019.
[10] Huang, J.; Ma, H.; Sedano, F.; Lewis, P.; Liang, S.; Wu, Q.; Su, W.; Zhang, X.; Zhu,
D. Evaluation of regional estimates of winter wheat yield by assimilating three
remotely sensed reflectance datasets into the coupled WOFOST–PROSAIL
model. Eur. J. Agron. Vol.102, 1–13, 2019.
[11] Jiang, H.; Li, D.; Jing, W.; Xu, J.; Huang, J.; Yang, J.; Chen, S. Early Season
Mapping of Sugarcane by Applying Machine Learning Algorithms to Sentinel-1A/2
Time Series Data: A Case Study in Zhanjiang City, China. Remote Sens. Vol.11, 861,
2019.
[12] Zhong, L.; Hu, L.; Zhou, H. Deep learning based multi-temporal crop
classification. Remote Sens. Environ. Vol.221, 430–443, 2019.
[13] Guan, K.; Li, Z.; Rao, L.N.; Gao, F.; Xie, D.; Hien, N.T.; Zeng, Z. Mapping Paddy
Rice Area and Yields Over Thai Binh Province in Viet Nam From MODIS, Landsat,
and ALOS-2/PALSAR-2. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens, Vol. 11,
2238–2252, 2018.
[14] Sonobe, R.; Yamaya, Y.; Tani, H.; Wang, X.; Kobayashi, N.; Mochizuki, K.-I. Crop
classification from Sentinel-2-derived vegetation indices using ensemble learning. J.
Appl. Remote Sens. Vol.12, 026019, 2018.
[15] Li, H.; Wan, W.; Fang, Y.; Zhu, S.; Chen, X.; Liu, B.; Yang, H. A Google Earth
Engine-enabled Software for Efficiently Generating High-quality User-ready
Landsat Mosaic Images. Environ. Model. Softw., Vol.112, 2018.
[16] Dos Santos Luciano, A.C.; Picoli, M.C.A.; Rocha, J.V.; Franco, H.C.J.; Sanches,
G.M.; Leal, M.R.L.V.; Le Maire, G. Generalized space-time classifiers for
monitoring sugarcane areas in Brazil. Remote Sens. Environ. Vol.215, 438–451,
2018.
ANNEXURE
import numpy as np
import cv2
import matplotlib.pyplot as plt                   # added: needed for the plots below
import matplotlib.image as mpimg                  # added: needed for mpimg.imread
from tkinter.filedialog import askopenfilename    # added: needed for the file dialog
from skimage.transform import rescale             # added: needed for the resize step

#=========================== IMAGE SELECTION ===========================
filename = askopenfilename()
img = mpimg.imread(filename)
plt.imshow(img)
plt.title('ORIGINAL IMAGE')
plt.show()

#============================ IMAGE RESIZE =============================
# the rescale call was missing in the listing; a 0.5 scale factor is assumed
image_rescaled = rescale(img, 0.5, anti_aliasing=True, channel_axis=-1)  # channel_axis=-1 for RGB input
fig = plt.figure()
plt.imshow(image_rescaled)
plt.title('IMAGE RESIZE')
plt.show()
#=========================== IMAGE DENOISING ===========================
# the denoising calls were missing in the listing; standard OpenCV calls are assumed
img8 = np.uint8(255 * img) if img.dtype != np.uint8 else img   # OpenCV expects an 8-bit image
dst = cv2.fastNlMeansDenoisingColored(img8, None, 10, 10, 7, 21)
plt.imshow(dst)
plt.title('FastNDenoising')
plt.show()

Gaussian = cv2.GaussianBlur(img8, (5, 5), 0)
plt.imshow(Gaussian)
plt.title('Gaussian Blur')
plt.show()
#========================= CALCULATING PERFORMANCES =========================
def PSNR(original, compressed):
    # mean squared error between the two images
    mse = np.mean((original.astype(np.float64) - compressed.astype(np.float64)) ** 2)
    if mse == 0:                 # identical images
        return 100
    max_pixel = 255.0
    psnr = 20 * np.log10(max_pixel / np.sqrt(mse))
    return psnr

value = PSNR(img8, Gaussian)
#=========================== DATA SPLITTING ===========================
import os
from sklearn.model_selection import train_test_split   # added: needed for the split below
from keras.utils import to_categorical                  # added: needed for one-hot labels

data_1 = os.listdir('DataSet/')
data_2 = os.listdir('DataSet/')

dot1 = []
labels1 = []
# NOTE: the loop headers and image reads were incomplete in the listing;
# the reconstruction below assumes two image classes read from 'DataSet/'.
for img_name in data_1:
    img_1 = cv2.imread('DataSet/' + "/" + img_name)
    try:
        gray = cv2.cvtColor(img_1, cv2.COLOR_BGR2GRAY)
        gray = cv2.resize(gray, (50, 50))
    except:
        gray = img_1
    dot1.append(np.array(gray))
    labels1.append(0)

for img_name in data_2:
    img_2 = cv2.imread('DataSet/' + "/" + img_name)
    try:
        gray = cv2.cvtColor(img_2, cv2.COLOR_BGR2GRAY)
        gray = cv2.resize(gray, (50, 50))
    except:
        gray = img_2
    dot1.append(np.array(gray))
    labels1.append(1)

# NOTE: the train/test split was missing in the listing; an 80/20 split is assumed
x_train, x_test, y_train, y_test = train_test_split(dot1, labels1, test_size=0.2, random_state=0)

y_train1 = np.array(y_train)
y_test1 = np.array(y_test)
train_Y_one_hot = to_categorical(y_train1)
test_Y_one_hot = to_categorical(y_test)

x_train2 = np.zeros((len(x_train), 50, 50, 3))
for i in range(0, len(x_train)):
    x_train2[i, :, :, :] = np.stack([x_train[i]] * 3, axis=-1)   # grayscale -> 3 channels (bug fix: was x_train2[i])

x_test2 = np.zeros((len(x_test), 50, 50, 3))
for i in range(0, len(x_test)):
    x_test2[i, :, :, :] = np.stack([x_test[i]] * 3, axis=-1)
#================================ CNN ================================
from keras.models import Sequential                                   # added: needed imports
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential()
# CNN layers
model.add(Conv2D(filters=16, kernel_size=2, padding="same", activation="relu", input_shape=(50, 50, 3)))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=32, kernel_size=2, padding="same", activation="relu"))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=64, kernel_size=2, padding="same", activation="relu"))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(500, activation="relu"))
model.add(Dropout(0.2))
model.add(Dense(2, activation="softmax"))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])  # added: compile step was missing
model.summary()

y_train1 = np.array(y_train)
train_Y_one_hot = to_categorical(y_train1)
test_Y_one_hot = to_categorical(y_test)

history = model.fit(x_train2, train_Y_one_hot, batch_size=2, epochs=10, verbose=1)

print("=======================================")
print("=======================================")
print()
accuracy = history.history['loss']
accuracy = max(accuracy)
accuracy_cnn = 100 - accuracy          # (100 - max loss) used as an accuracy score
print()
print("Accuracy is :", accuracy_cnn, '%')
#====================== DATA PREPARATION (ANN) ======================
# NOTE: this block repeats the data preparation for the ANN; the incomplete
# loops are reconstructed in the same way as for the CNN above.
dot11 = []
labels11 = []
for img_name in data_1:
    img_1 = cv2.imread('DataSet/' + "/" + img_name)
    try:
        gray = cv2.cvtColor(img_1, cv2.COLOR_BGR2GRAY)
        gray = cv2.resize(gray, (50, 50))
    except:
        gray = img_1
    dot11.append(np.array(gray))
    labels11.append(0)

for img_name in data_2:
    img_2 = cv2.imread('DataSet/' + "/" + img_name)
    try:
        gray = cv2.cvtColor(img_2, cv2.COLOR_BGR2GRAY)
        gray = cv2.resize(gray, (50, 50))
    except:
        gray = img_2
    dot11.append(np.array(gray))
    labels11.append(1)

x_train, x_test, y_train, y_test = train_test_split(dot11, labels11, test_size=0.2, random_state=0)  # assumed split

y_train1 = np.array(y_train)
y_test1 = np.array(y_test)
train_Y_one_hot = to_categorical(y_train1)
test_Y_one_hot = to_categorical(y_test)

x_train2 = np.zeros((len(x_train), 50, 50))
for i in range(0, len(x_train)):
    x_train2[i, :, :] = x_train[i]      # bug fix: was x_train2[i]

x_test2 = np.zeros((len(x_test), 50, 50))
for i in range(0, len(x_test)):
    x_test2[i, :, :] = x_test[i]

print()
#========================== CLASSIFICATION (ANN) ======================
classifier = Sequential()
# defining the layers
# NOTE: the ANN layer definitions and training call were missing from the
# listing; a simple fully connected network is assumed here.
classifier.add(Flatten(input_shape=(50, 50)))
classifier.add(Dense(128, activation="relu"))
classifier.add(Dense(2, activation="softmax"))
classifier.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = classifier.fit(x_train2, train_Y_one_hot, batch_size=2, epochs=10, verbose=1)

print("=======================================")
print("=======================================")
print()
accuracy = history.history['loss']
accuracy = max(accuracy)
accuracy_ann = 100 - accuracy
print()
print("Accuracy is :", accuracy_ann, '%')
print()
# NOTE: the SSIM computation and the bar-chart setup were missing from the
# listing; the calls below are assumed (scikit-image provides structural_similarity).
from skimage.metrics import structural_similarity
ssim = structural_similarity(cv2.cvtColor(img8, cv2.COLOR_RGB2GRAY),
                             cv2.cvtColor(Gaussian, cv2.COLOR_RGB2GRAY), full=True)
print("The SSIM Value is ", ssim[0])

objects = ('CNN', 'ANN')                       # assumed labels for the comparison chart
y_pos = np.arange(len(objects))
performance = [accuracy_cnn, accuracy_ann]
plt.bar(y_pos, performance, align='center')    # added: bar-plot call was missing
plt.xticks(y_pos, objects)
plt.ylabel('Accuracy')
plt.title('Algorithm comparison')
plt.show()