SAR Processing Training
Geomatica Banff
Version 2.7
© 2019 PCI Geomatics Enterprises, Inc ®. All rights reserved.
COPYRIGHT NOTICE
Software copyrighted © by:
PCI Geomatics Enterprises Inc., 90 Allstate Parkway, Suite 501, Markham, Ontario, L3R 6H3, CANADA
Telephone number: (905) 764-0614
RESTRICTED RIGHTS
Canadian Government
Use, duplication, or disclosure by the Government is subject to restrictions as set forth in DSS9400-18 “General Conditions —
Short Form — Licensed Software”.
U.S. Government
Use, duplication, or disclosure by the Government is subject to restrictions set forth in subparagraph (b)(3) of the Rights in
Technical Data and Computer Software clause of DFARS 252.227-7013 or subparagraph (c)(1) and (2) of the Commercial
Computer Software-Restricted Rights clause at 48 CFR 52.227-19 as amended, or any successor regulations thereto.
PCI, PCI Geomatics, PCI and design (logo), Geomatica, Committed to Image-Centric Excellence, GeoGateway, FLY!,
OrthoEngine, RADARSOFT, EASI/PACE, ImageWorks, GCPWorks, PCI Author, PCI Visual Modeler, and SPANS are
registered trademarks of PCI Geomatics Enterprises, Inc.
All other trademarks and registered trademarks are the property of their respective owners.
Appendix A
Processing with single-pol or dual-pol detected data
Applying ratio or difference to detected channels using EASI Modeling
Applying an intensity-change detection on detected data
Computing SAR texture measures
Appendix B
Polarimetric discriminators
Generating polarimetric discriminators from coherency matrix eigenvalues
Generating polarimetric discriminators from analysis of the Poincaré Sphere
Synthesizing a backscatter image for arbitrary transmit and receive polarizations
Maximizing the contrast between two targets
Course overview
Welcome to the SAR processing with Geomatica training course.
The course is designed for experienced users of geospatial software and introduces
you to the radar analysis tools available in Geomatica.
This guide contains six modules. The lessons in each module are designed around tasks
you are likely to perform in your analysis of radar imagery. They provide instruction
for carrying out key processes with the software while sampling the main Geomatica
applications and features.
Radar-processing workflows
Geomatica includes several programs you can use to work with radar data.
Each of the following programs is available from a button on the Geomatica toolbar.
Focus You use Focus to display radar data. If information about the
geolocation of the data is available, it is projected and
resampled on the fly (Module 1). Metadata is also imported,
which you can view (Module 1). You can resample, reproject,
and crop your data, and create histograms for entire channels
or for a specific area of interest (AOI).
Note: Topics not specific to radar data are covered in the
Geomatica I and Geomatica II training courses.
Focus features Algorithm Librarian, a collection of
algorithms you use to apply custom processing of radar data,
such as importing raw data, speckle filtering, polarimetric
decompositions and classification, change detection, and
more (Table 2). Each algorithm is presented in an easy-to-
use graphical user interface (GUI).
Focus includes a Python Scripting environment in which you
can create, edit, and run Python scripts.
SPTA With SPTA, which stands for SAR Polarimetry Target Analysis,
you can explore your SAR data. You can select targets in a
polarimetric SAR scene, draw a target (or load an existing
target), extract polarimetric parameters (from the image),
and display the results numerically and graphically. While you
can use SPTA to analyze all types of SAR data products, your
data must be fully polarimetric and in complex format to
exploit the full functionality of the program.
You can use SPTA with Focus to develop applications. For
example, in Module 4 and Module 5 an example of a land-
cover classification is presented.
EASI EASI is a command-line interface that provides you with
access to all the algorithms available in the Focus Algorithm
Librarian. With the EASI programming language, you can
automate processes and run processes in batches.
OrthoEngine You can use OrthoEngine to correct geometric distortions and
geocode SAR data based on radar-satellite modeling for
orthorectification or on other math models, such as the
polynomial or the thin-plate spline models. OrthoEngine
features powerful capabilities to collect ground control points
(GCP) either automatically or manually.
OrthoEngine also features support for complex data. This
means you can either geocode your data first, and then
process it, or vice versa.
Depending on needs and objectives, the programs that comprise Geomatica can be
used alone or in combination. For example, in Figure 1, a SAR scene characterized
by its header file and image data is first loaded and displayed in SPTA for data
exploration.
Based on selected areas of interest (AOI), several statistics related to the scattering
intensities or polarimetric parameters can be extracted simultaneously. After
finding the most relevant algorithms or parameters, the entire scene can be
analyzed in Focus by selecting the corresponding algorithm from Algorithm
Librarian or in EASI (Table 2).
Finally, the image data and the results extracted from it can be georeferenced in
OrthoEngine before distribution. Examples of workflows using the various
components of Geomatica are demonstrated in Module 4.3, Module 5.1, Module
5.3, Module 6.1, and Module 6.5.
SAR-processing algorithms
The following table identifies the most relevant algorithms available in Geomatica to
process SAR data.
1 To verify whether a sensor is supported by GDB, browse the GDB file formats in the
Technical Reference section of the Geomatica Help.
For example, transmitting (Tx) in H and V and receiving (Rx) in H and V yields the
HH, HV, VH, and VV channels plus the phase differences ΦHH-HV, ΦHH-VH, ΦHH-VV,
ΦHV-VH, ΦVV-HV, and ΦVV-VH.
Data structure
With Focus, you can work with data in a variety of formats by using GDB and the
PCIDSK file format. This means that most RADAR data is supported in its original
distribution format and can be opened in Focus or SPTA using the key file name.
Sensor [key file name]:
• RADARSAT-2, TerraSAR-X [*.xml]
• UAVSAR [*.ann]
• COSMO-SkyMed, KOMPSAT-5 [*.h5]
• ALOS-1 PALSAR, ALOS-2 PALSAR [LED-*, summary.txt]
To open a file in its original distribution format using its key file name
1. In Focus, click the File menu, and then click Open.
2. In the File Selector window, open the folder that contains the file you
want, select the key file, and then click Open.
3. If data calibration is supported for the sensor used, at the prompt, select
a calibration type, and then click OK.
4. If supported, select a projection, and then click OK.
For each sensor, depending on the product and acquisition type, different image
layers and auxiliary segments are available and can be imported.
Finally, you choose whether to display the file North Up or Raster Up.
Note Focus and SPTA read geolocation information "as is", due
to the variation in positional accuracy of sensors and
acquisition modes. SAR data is also resampled and
projected "on the fly", meaning the data always remains
in its original format, unprojected, and at full resolution.
Any processing will be applied on the original data format.
If you want to increase the positional accuracy by
collecting external GCPs and permanently apply map
projection to the data of a given spatial resolution, you
must do so in OrthoEngine.
To take advantage of all the features offered in Geomatica, convert your data files
to PCIDSK format (.pix). By doing so, you can, in particular, create overviews
faster, and store auxiliary layers, such as lookup tables (LUT), pseudocolor tables
(PCT), bitmaps, and vectors. More information on the PCIDSK format is provided in
Lesson 1.5: Conversion utilities.
Matrix type
Geomatica characterizes SAR data according to a matrix type, which is determined
from the file metadata, the channel type, and the transmit/receive configuration.
Lesson 1.1: Supported SAR sensors and data formats discussed that channel type
is either complex (indicated by "c") or detected (indicated by "r"). A SAR image can
contain only complex channels, only detected channels, or a mix of both. The
number of channels of each type determines the matrix type. The matrix type is
important because many SAR algorithms require the input to be of a particular
matrix type.
The following table describes the matrix types for complex data.
The following table describes the matrix types for detected data.
6. Click Import.
The File Selector window appears.
7. Enter a path and file name.
8. Click Save.
The image is imported (ingested) into a PCIDSK file.
SAR geometry
Most synthetic-aperture radars (SAR) used in geoscience applications are
side-looking. A series of waves, or pulses, is transmitted by the antenna toward
the ground. Between transmissions, the same antenna is used to receive the
returned signal.
Each transmitted pulse is carefully controlled; that is, the frequency, polarization,
and phase of the signal are known. After the signal is scattered back to the sensor,
its travel time, backscatter power, and phase are compared to those of the original pulse.
Typically, over a thousand pulses are coherently averaged to form a single pixel.
In a SAR image, the location of each pixel and its resolution are a combination of
the time it took for the pulse to be reflected back to the sensor (x, range resolution) and the
time between two pulses relative to the platform velocity (y, azimuth resolution).
The slant-range resolution itself is constant across the swath, but its projection
onto the ground is not: because the incidence angle is smaller in the near range,
the ground-range resolution is coarser at the near range than at the far range (Figure
14 and Figure 15).
• x: Ground-range direction
• y: Azimuth direction
• r: Antenna radial axis (radar line of sight [LOS])
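As a quick numeric illustration of how the projection onto the ground changes across the swath, the standard relation is ground-range resolution = slant-range resolution / sin(incidence angle). The sketch below uses hypothetical example values, not the parameters of any specific sensor:

import numpy as np

slant_range_resolution = 5.2                 # metres, constant across the swath (example value)
incidence_near, incidence_far = 20.0, 49.0   # degrees, hypothetical near- and far-range angles

for label, theta_deg in [("near range", incidence_near), ("far range", incidence_far)]:
    ground = slant_range_resolution / np.sin(np.deg2rad(theta_deg))
    print(f"{label}: ground-range resolution ~ {ground:.1f} m")
# The ground-range resolution is coarser (larger) in the near range, where the
# incidence angle is smaller.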
Geomatica also has tools for converting from DNs to both beta nought (β⁰) and
gamma nought (γ⁰).
The radar backscatter coefficient (σ⁰) is commonly used and is expressed per
unit area in ground range (Figure 16).
"A problem arises if there is a nonzero slope at the local terrain site. In this case,
the projected area is determined by the local incidence angle. It follows that the
correct values for σ° cannot be obtained unless one has at hand a reliable estimate
of the local slope (Raney, 1998)".
The radar brightness (β⁰) is the most natural and observable radar measurement
(Raney et al., 1994). It corresponds to the backscatter per unit area in slant
range and requires no knowledge of the local incidence angle (Figure 16, Figure 17, and
Figure 18). For detected products, the radar brightness corresponds to:

β⁰j = (DNj² + A3) / A2j

where DNj is the digital number that represents the magnitude of the jth pixel from
the start of a range line in the detected image data, A2j is the scaling-gain
value for the jth pixel, and A3 is the fixed offset. Radar brightness in decibels (dB)
is given by:

β⁰j (dB) = 10 · log10(β⁰j)
For complex (SLC) single-beam products, the pixel number j is related to the LUT
index i using the same procedure as for detected products. The radar brightness
for the jth range pixel is then given by:

β⁰j = (DNI,j / A2j)² + (DNQ,j / A2j)²

where DNI,j and DNQ,j are the digital values of the I and Q components of the jth
pixel from the start of the range line, and A2j is the corresponding range-dependent
gain. The offset A3 is not used in SLC-product generation. For complex
data, radar brightness in decibels (dB) is given by:

β⁰j (dB) = 10 · log10[(DNI,j / A2j)² + (DNQ,j / A2j)²]
β⁰ can be converted into σ⁰ or γ⁰ using:

σ⁰j = β⁰j · sin(θj)

or

γ⁰j = β⁰j · tan(θj)

where θj is the local incidence angle of the jth pixel.
In Figure 16, dR is the slant-range distance entering into the definition of β⁰.
σ⁰ represents the average reflectivity of a horizontal material sample, normalized
with respect to a unit area AL on the horizontal ground plane.
γ⁰ is defined with respect to the incident area Ai, orthogonal to the incident ray from
the radar.
Figure 16. Definition of the surface area and incident area used to determine σ⁰, β⁰,
and γ⁰
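The DN-to-backscatter relations above are easy to prototype outside Geomatica. The following sketch applies the detected-product and SLC formulas with numpy; the array names (dn, dn_i, dn_q, a2, theta) and the numeric values are hypothetical placeholders for values read from an image and its calibration lookup table:

import numpy as np

def beta0_detected(dn, a2, a3=0.0):
    # Radar brightness for a detected product: beta0_j = (DN_j^2 + A3) / A2_j
    return (dn.astype(np.float64) ** 2 + a3) / a2

def beta0_slc(dn_i, dn_q, a2):
    # Radar brightness for an SLC product: beta0_j = (I_j/A2_j)^2 + (Q_j/A2_j)^2
    return (dn_i / a2) ** 2 + (dn_q / a2) ** 2

def to_db(linear, eps=1e-12):
    # Convert a linear backscatter value to decibels
    return 10.0 * np.log10(np.maximum(linear, eps))

def sigma0_from_beta0(beta0, theta_rad):
    # sigma0_j = beta0_j * sin(theta_j)
    return beta0 * np.sin(theta_rad)

def gamma0_from_beta0(beta0, theta_rad):
    # gamma0_j = beta0_j * tan(theta_j)
    return beta0 * np.tan(theta_rad)

# Example with synthetic values for one range line (per-pixel gain and incidence angle).
dn = np.array([120.0, 430.0, 980.0])
a2 = np.array([8.1e6, 8.3e6, 8.5e6])            # range-dependent scaling gains
theta = np.deg2rad(np.array([25.0, 32.0, 41.0]))
beta0 = beta0_detected(dn, a2)
print(to_db(sigma0_from_beta0(beta0, theta)))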
With newer SAR sensors, such as TerraSAR-X or RADARSAT-2, the
calibrated data is read on the fly, or calibration is performed during ingest with the
SARINGEST algorithm.
Figure 17. Depression, elevation, and incidence angles over flat terrain (local and nominal incidence angles are equal)
Figure 18. Depression, elevation, and incidence angles over sloped terrain (local and nominal incidence angles are not equal)
You can use the PSCONV algorithm when a specific matrix format is required for a
polarimetric algorithm that does not automatically perform the conversion. Some
decomposition algorithms require a specific matrix representation as input. For
example, the Cloude-Pottier decomposition (PSEBA) requires a filtered-coherency
matrix while output from polarimetric filters corresponds to a covariance matrix.
Because the matrix type is read from the metadata, Geomatica will automatically
convert the covariance matrix to the coherency matrix (matrix type) before
applying the decomposition.
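As an illustration of the kind of conversion PSCONV performs, the following sketch converts a 3 x 3 covariance matrix to a coherency matrix with the standard unitary basis change, assuming the covariance matrix is built from the lexicographic vector [S_HH, √2·S_HV, S_VV]. It is a conceptual example only, not the PSCONV implementation, and the sample matrix values are hypothetical:

import numpy as np

# Basis-change matrix from the lexicographic basis to the Pauli basis.
A = (1.0 / np.sqrt(2.0)) * np.array([[1, 0, 1],
                                     [1, 0, -1],
                                     [0, np.sqrt(2.0), 0]], dtype=complex)

def covariance_to_coherency(c3):
    # Coherency matrix [T] = A [C] A^H
    return A @ c3 @ A.conj().T

# Hypothetical covariance matrix for a single averaged pixel (Hermitian).
c3 = np.array([[0.80, 0.05 + 0.02j, 0.30 - 0.10j],
               [0.05 - 0.02j, 0.10, 0.01 + 0.01j],
               [0.30 + 0.10j, 0.01 - 0.01j, 0.60]], dtype=complex)
t3 = covariance_to_coherency(c3)
print(np.round(t3, 3))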
Conversion s4c S3c s2c s1c c4r6c C3r3c t4r6c T3r3c k16r K9r
to ↓↓↓
s4c -- √
S3c √ --
s2c --
s1c --
c4r6c √ -- √ √ √
C3r3c √ √ √ -- √ √ √ √
c2r1c √
t4r6c √ √ --
T3r3c √ √ √ -- √ √
k16r √ √ --
K9r √ √ --
c2r √
c1r √
Radiometric enhancement
The goal of radiometric enhancement is to improve the interpretation of the
radiometric information in an image using speckle and spatial filters. These filters
can reduce speckle, detect edges, analyze texture, and visually enhance the image.
Image variance, or speckle, is a granular noise that is inherent to SAR imagery.
Speckle gives a grainy, salt-and-pepper appearance and tends to be a dominating
factor in radar imagery. Speckle filters are used primarily with radar data to
remove high-frequency noise (speckle), while preserving high-frequency features
(edges).
SAR-speckle filters
Coherent signal-scattering in SAR data often causes image speckle or a salt-and-
pepper effect. Speckle is inherent to most SAR images, and can inhibit accurate
image interpretation. There are several types of speckle filters, and they generally
fall into one of two categories:
• Nonadaptive, or template
• Adaptive
Nonadaptive filters apply to the parameters of the whole image. They do not take
into account the local properties of terrain backscatter or the nature of the sensor.
Examples of nonadaptive filters are Mean, Median, Edge Detection, and Sieve.
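To make the distinction concrete, here is a minimal sketch, assuming an intensity image held in a numpy array: a nonadaptive mean (boxcar) filter applied identically everywhere, and a simple adaptive Lee-type filter that weights the smoothing by local statistics. This is a generic illustration of the adaptive principle, not the enhanced Frost, Gamma, or Touzi filters implemented in Geomatica:

import numpy as np
from scipy.ndimage import uniform_filter

def mean_filter(img, size=7):
    # Nonadaptive boxcar filter: the same averaging is applied to every pixel.
    return uniform_filter(img, size=size)

def lee_filter(img, size=7, looks=4):
    # Simple adaptive (Lee-type) filter: smoothing is reduced where local
    # variability exceeds what speckle alone would explain (edges, point targets).
    cu2 = 1.0 / looks                                   # squared speckle coefficient of variation
    local_mean = uniform_filter(img, size=size)
    local_sq_mean = uniform_filter(img ** 2, size=size)
    local_var = local_sq_mean - local_mean ** 2
    ci2 = local_var / np.maximum(local_mean ** 2, 1e-12)    # squared local coefficient of variation
    weight = np.clip(1.0 - cu2 / np.maximum(ci2, 1e-12), 0.0, 1.0)
    return local_mean + weight * (img - local_mean)

# Hypothetical 4-look intensity image with simulated speckle and a radiometric edge.
rng = np.random.default_rng(0)
truth = np.ones((128, 128)); truth[:, 64:] = 4.0
img = truth * rng.gamma(shape=4, scale=1.0 / 4, size=truth.shape)
print(mean_filter(img).std(), lee_filter(img).std())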
[Workflow examples:
B: detected data (c1r, c2r) → Radiometric SAR filters → Analysis
C: complex data (s4c, S3c, s2c, s1c) → Polarimetric SAR filters [PSPOLFIL] or [PSBOXCAR] → Polarimetric decomposition (possible only with s4c and S3c data, see Module 3) → Analysis
E: complex data (s4c, S3c) → [PSPOLSYN], [PSPOLSYNR], [PSPOLSYNC] (Appendix B) → Radiometric SAR filters → Analysis]
Data preprocessing
Using the concepts learned in Lesson 1.4: Ingesting and extracting a calibrated
backscatter image and Lesson 1.5: Conversion utilities and the Vancouver scene
located in ~\SAR_Training\Radar\Vancouver_RS2_FQ2_SLC, do the following:
1. Ingest the product.xml file into a PCIDSK file using sigma as the
calibration type.
Name the file Van_RS2_FQ02sig.pix.
2. Use the PSIQINTERP algorithm to convert the complex data to detected
data.
Convert the HH channel to Intensity. Name the file
Van_RS2_FQ02sig_HH.pix.
This creates a (simulated) detected single-pol image in HH.
Exercise 1: With FSPEC, run the enhanced Frost filter again on the same file
(Van_RS2_FQ02sig_HH.pix). Specify a window size of 11. Specify the following
name in the Output: File Layer (s) port:
Van_RS2_FQ02sig_HH_FFrost11.pix.
Run the Average filter to get another comparison basis to evaluate the results
obtained from the SAR adaptive filters.
Compare the results to the original HH SLC channel (Van_RS2_FQ02sig_HH.pix),
both visually and numerically.
Stats: HH SLC
• Water: 0.343±0.35
(med: 0.226)
• Forest: 0.199±0.24
(med: 0.12)
•H-D. Urban:
2.159±5.56
(med: 0.50)
• L-D. urban:
0.212±0.32
(med: 0.11)
Stats: Touzi 11 x 11
• Water: 0.263±0.102
(med:0.258)
• Forest:
0.152±0.077
(med:0.148)
• H-D. Urban:
1.515±3.03
(med:0.592)
• L-D. urban:
0.163±0.117
(med:0.133)
Lesson summary
In this lesson, you:
• Filtered an image using enhanced Frost, Gamma, and Touzi adaptive
filters
• Compared visually and numerically the results of the different filter
operations
[Figure: filter results for 7 x 7, 9 x 9, 11 x 11, 15 x 15, 21 x 21, and 31 x 31 processing windows]
Lesson summary
In this lesson, you:
• Applied a polarimetric filter to the RADARSAT-2 image
• Compared the effect of various sizes of processing window
Exercise 3: When the module runs to completion, open the file in Focus, and then
compare the results of this filter to those of the boxcar filter.
Not applicable
[Figure: filter results for 7 x 7, 9 x 9, 11 x 11, 15 x 15, 21 x 21, and 31 x 31 processing windows]
Lesson summary
In this lesson, you:
• Filtered a quad-polarized image using the PSPOLFIL algorithm
• Compared the PSPOLFIL filtering with PSBOXCAR and the original
unfiltered data
Tx = [H, V], Rx = [H, V]

Tx • Rx → [S], where

[S] = | S_HH  S_HV |
      | S_VH  S_VV |
Canonical targets
Canonical targets correspond to simple geometric structures whose interpretation
of diffusion is facilitated by the presence of symmetry planes in the matrices used
to represent them. Interpretation of polarimetric responses, like that of the
parameters from a polarimetric decomposition, is often based on a comparison with
the canonical targets.
Sphere: S = | 1  0 |
            | 0  1 |

Trihedral: S = | 1  0 |
               | 0  1 |

Dihedral: S = | 1  0 |
              | 0  -1 |

Rotated dihedral (orientation angle ϕ): S = | cos 2ϕ   sin 2ϕ |
                                            | sin 2ϕ  -cos 2ϕ |

Horizontal dipole: S = | 1  0 |
                       | 0  0 |

Vertical dipole: S = | 0  0 |
                     | 0  1 |

Oriented dipole (orientation angle ϕ): S = | cos²ϕ      ½ sin 2ϕ |
                                           | ½ sin 2ϕ   sin²ϕ    |

Left helix: S = | 1  j |
                | j  -1 |

Right helix: S = | 1  -j |
                 | -j  -1 |
Figure 24. Canonical targets representation (in H-V basis). (Source: van Zyl and
Ulaby (1990), p. 33-45.)
The sphere and the trihedral are both characterized by an odd number of bounces,
which results in a phase difference of zero degrees in the backscatter alignment
(BSA) convention. Each produces a uniform scattering for all linear polarizations
(χ=0°), which results in HH = VV. For an even number of bounces, (dihedral, for
example) the target introduces a phase difference of 180 degrees between the HH
and VV polarization and HH = -VV.
However, the HH and VV channels are still equal in intensity. If the dihedral is
rotated in a plane perpendicular to the radar line of sight (LOS), it introduces a
depolarization of the signal, and the HV and VH channels are no longer equal to
zero. For a pure canonical dihedral, the signal is repolarized rather than depolarized.
If the value of the angle ϕ is found, it is possible to cancel out its effect and retrieve
a pure dihedral scattering.
A dipole will produce a strong scattering in only one polarization channel according
to its orientation. Like the dihedral, it is possible to cancel out the effect of the
orientation angle ϕ in a case where it is not equal to zero (HH) or 90 degrees (VV).
Finally, the helix is an abstract construction, because it does not correspond to a
real physical target. This kind of scattering can occur in an urban environment
where multiple scattering is common. One way to produce a pure-helix scattering is
to place two dihedrals, oriented at 45 degrees from each other. The phase
difference between the HH and VV channel will be 180 degrees.
If a pixel, or a small group of pixels, corresponds to a canonical target, it will
generally produce a highly polarized and strong scattering. These pixels are called
point targets or coherent-point targets.
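The odd-bounce versus even-bounce phase behaviour described above can be verified directly from the scattering matrices in Figure 24. A small sketch, assuming ideal, noise-free canonical targets:

import numpy as np

def copol_phase_deg(S):
    # Co-polarized phase difference arg(S_HH * conj(S_VV)) in degrees
    return np.degrees(np.angle(S[0, 0] * np.conj(S[1, 1])))

trihedral = np.array([[1, 0], [0, 1]], dtype=complex)    # odd number of bounces
dihedral = np.array([[1, 0], [0, -1]], dtype=complex)    # even number of bounces

print(copol_phase_deg(trihedral))   # 0 degrees   -> HH and VV in phase
print(copol_phase_deg(dihedral))    # 180 degrees -> HH = -VV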
Backscattering mechanisms
The reality, however, is often more complex. Only a small fraction of the pixels in
an image correspond to coherent point targets. The scattering mechanisms tend to be
horizontally and vertically superposed. In such a case, it is necessary to average
(multilook) several pixels, to estimate the dominant scattering mechanisms, if any,
and the degree of polarization.
For example, a "forest stand can be broken down into individual components that
drive radar backscatter, these components are (Lo, 1998):
• (a) Direct backscattering from the soil surface
• (b) Volume scattering from foliage, shrub canopy, and leaf litter, if present
• (c) Direct backscattering from big branches and trunks if they are rough
or at normal incidence
• (d) Interaction components due to corner reflection from tree trunks
• (e) Other interaction components due to multiple scattering between
foliage and ground surface, big branches and surface, foliage and shrub
canopy, and so forth"
With the same area, depending on the characteristics of the sensor (incidence and
orientation angle, spatial resolution, and wavelength), various scattering
mechanisms might dominate the scattered signal. Several polarimetric
decompositions have been proposed to facilitate the interpretation of the scattered
mechanism of a fully polarimetric image.
Polarimetric decomposition
There are two families of polarimetric decomposition: the coherent and the
incoherent decomposition.
The coherent target decompositions are applied only on SLC images and
generally pixel-by-pixel for the characterization of a point target. Although a
dominant scattering mechanism can be found for each pixel of an image, a
coherence test is applied generally on each before the decomposition to ensure a
meaningful result. This topic will be covered in Lesson 4.
As mentioned previously, most natural targets, called extended targets, are
incoherent; that is, they cover more than one pixel, are partially polarized, and in
most cases have more than one scattering mechanism for any given pixel. The
complex-scattering matrix ([S]) is no longer appropriate to represent an incoherent
target, and a second-order representation is needed. Several matrices can be used
to represent an incoherent target: the Mueller ([M]), Kennaugh ([K]), the
coherency ([T]) and the covariance ([C]) matrices are used commonly:
Symmetrized-covariance matrix
7 Source: Lo (1998)
Symmetrized-coherency matrix
Mathematical-incoherent decomposition
The mathematical decompositions are not based on a physical model; however,
they can be applied to the analysis of all kinds of land-use and land-cover classes.
The current mathematical decompositions are based on an eigenvector
decomposition of the coherency matrix, which is analogous to a principal-component
analysis (PCA).
8 Source: Cloude, S.R., Pottier, E. (1997). An Entropy based Classification scheme for Land
Applications of Polarimetric SARs. IEEE Transactions on Geoscience and Remote Sensing.
Vol. 35, no. 2, p. 68-78.
Figure 26. H-ᾱ classification plane9
The Touzi decomposition (PSTOUZIDEC) is based on the characteristic
decomposition of the coherency matrix. With reciprocal targets, the characteristic
decomposition leads to the representation of the coherency matrix as the
incoherent sum of three single scatterers, each weighted by its normalized and
positive eigenvalue (λi, i = 1, 2, 3).
The Touzi decomposition uses the Touzi scattering-vector model to represent each
coherency eigenvector with unique target characteristics. Each coherency
eigenvector is characterized uniquely by five independent parameters. Scattering
type is described with a complex entity whose magnitude (αsi) and phase (Φαsi)
characterize the magnitude and phase of target scattering. The helicity (τi)
characterizes the symmetric-asymmetric nature of target scattering.
The Touzi decomposition is similar to the Cloude-Pottier decomposition except that:
• It does not proceed to the weighted sum of each eigenvector’s parameters
by their respective eigenvalues
• It takes into account the polarimetric phase (Φαsi )
▪ Anisotropy (A)
▪ Entropy (H)
▪ Eigenvalues (λ1, λ2, λ3)
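The entropy, anisotropy, and alpha parameters all derive from the eigen-decomposition of the filtered coherency matrix. A minimal sketch of that computation for a single pixel, using the standard Cloude-Pottier definitions (a conceptual illustration, not the PSEABA implementation; the sample matrix is hypothetical):

import numpy as np

def cloude_pottier(t3):
    # Entropy H, anisotropy A, and mean alpha (degrees) from a 3x3 coherency matrix
    eigval, eigvec = np.linalg.eigh(t3)                 # eigenvalues in ascending order
    eigval = np.clip(eigval[::-1], 1e-12, None)         # sort descending, guard against zeros
    eigvec = eigvec[:, ::-1]
    p = eigval / eigval.sum()                           # pseudo-probabilities
    entropy = -np.sum(p * np.log(p) / np.log(3.0))
    anisotropy = (eigval[1] - eigval[2]) / (eigval[1] + eigval[2])
    alpha_i = np.degrees(np.arccos(np.abs(eigvec[0, :])))   # alpha of each eigenvector
    mean_alpha = np.sum(p * alpha_i)
    return entropy, anisotropy, mean_alpha

# Hypothetical filtered coherency matrix for one pixel (Hermitian).
t3 = np.array([[0.7, 0.1 + 0.05j, 0.0],
               [0.1 - 0.05j, 0.2, 0.0],
               [0.0, 0.0, 0.1]], dtype=complex)
print(cloude_pottier(t3))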
Exercise 2: Locate and examine some features that produce a strong scattering in
only one of the components, in two components, or in all components.
For example, locate the bright area centered on 1789P, 3073L (671 036E, 5 823
936N). This industrial area produces a strong double-bounce scattering, as
predicted by the theory, but many buildings also seem characterized by a stronger
volume scattering compared to the surrounding forest, which is less intuitive. This
is because the Freeman-Durden decomposition directly uses the HV channel to
calculate the volume contribution of the total backscattered signal.
In urban areas, many scattering processes can create a strong return in HV that are
not volumetric in nature. Among these are multiple scattering (the addition of
many scattering mechanisms in one resolution cell), a building not aligned with
the radar LOS (orientation effects), or a nonsymmetrical object.
Interpreting the Freeman-Durden decomposition can be made easier by normalizing
the radiometry to calculate the fraction of the total scattered power associated with
each component.
4. Click Run.
Both versions can now be compared; each has its own utility. The power version is
often preferred if the components are to be used in a classification, while the
normalized version helps to better understand how the backscattered power is split
among the scattering mechanisms.
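A minimal sketch of the normalization described above, assuming the three Freeman-Durden components (surface, double-bounce, and volume power) are available as numpy arrays; it is a conceptual equivalent of the EASI Modeling step, not the script used in the lesson:

import numpy as np

def normalize_freeman_durden(p_surface, p_double, p_volume, eps=1e-12):
    # Return the fraction of total backscattered power carried by each component.
    total = np.maximum(p_surface + p_double + p_volume, eps)   # avoid division by zero
    return p_surface / total, p_double / total, p_volume / total

# Hypothetical component powers for a few pixels.
p_s = np.array([0.6, 0.1, 0.2])
p_d = np.array([0.1, 0.7, 0.2])
p_v = np.array([0.3, 0.2, 0.6])
f_s, f_d, f_v = normalize_freeman_durden(p_s, p_d, p_v)
print(f_s + f_d + f_v)    # each pixel sums to 1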
Lesson summary
In this lesson, you:
• Ingested a RADARSAT-2 fully polarimetric image using SARINGEST
• Applied a PSBOXCAR filter on the ingested SAR data
• Used PSFREDUR to perform the Freeman-Durden decomposition
• Used the EASI Modeling feature in Focus to create a normalized version
of the Freeman-Durden parameters
To run this script, you must create two new bitmap layers by right-clicking the file
on the Files tab, and then selecting New > Bitmap Layer. This creates empty
bitmap segments 2 and 3.
Lesson summary
In this lesson, you:
• Used PSEABA to perform the Cloude-Pottier decomposition
• Used the EASI Modeling feature in Focus to analyze the Cloude-Pottier
parameters
Why do some of the parameters of the second component (i=2) and most of the
parameters of the third component (i=3) look noisy?
The Landsat-5 mosaic can be used to facilitate the interpretation:
~\SAR_Training\Landsat\Flevoland_L5_20100906_p198_r23r24.pix.
[Figure: Touzi decomposition parameters displayed in pseudocolor; normalized eigenvalues λ1 (min: 0.37, max: 0.99), λ2 (min: 0.23, max: 0.48), λ3 (min: 0.10, max: 0.29)]
Note The parameter values are important, but so too is their
spatial distribution (the texture). In some analyses, the
sign of the orientation (ψi) or helicity (τi) does not
matter. Calculating their absolute values (|ψi|, |τi|)
might facilitate interpretation.
Find areas in the Flevoland data set where the Touzi αs1 differs from the Cloude-Pottier α.
Can you explain the observed differences?
Calculate the global Touzi alpha angle (ᾱs). Do you still observe differences with the
Cloude-Pottier ᾱ? If yes, why?
[Figure: Touzi αs1, Touzi ᾱs, and Cloude-Pottier ᾱ, full scene and detail]
Exercise 6: Find areas where τ1 is low or high. What kinds of land use and land
cover are characterized by high and low helicity values? Use EASI Modeling to
create bitmaps containing high helicity with different thresholds, such as 5, 10, and
25 degrees.
Exercise 7: Compare the Touzi dominant orientation angle ψ1 with the Cloude-Pottier
average beta angle β̄.
Both are also sensitive to bare-soil roughness. Bare soil with a high roughness
tends to produce higher ψ1 (or β̄) values, while smooth surfaces result in ψ1 (or
β̄) values centered around zero degrees.
Exercise 8: Locate the urban or industrial areas in the Flevoland image, and
observe the main street orientation relative to the radar LOS. What are the average ψ1
and β̄ values when the streets are parallel to the LOS? What are the average ψ1
and β̄ values when the main street orientation departs from the LOS?
The following vector file can help to locate the urban areas and interpret the relation
between ψ1 and the LOS: ~\SAR_Training\Vectors\Flevoland_Streets.pix.
Exercise 9: Find some bare fields and observe the ψ1 and β̄ values.
TOUZI PHASE Φαs1
The Cloude-Pottier decomposition does not include a phase parameter, unlike the
Touzi decomposition with Φαs1. This phase is similar, but not equivalent, to the
ϕhh − ϕvv phase difference. To better understand the Φαs1 phase, the ϕhh − ϕvv
phase difference will be generated with the PSPHDIFF algorithm.
Exercise 10: Compare the Touzi Φαs1 and the ϕhh − ϕvv phase difference.
Exercise 11: Explore the differences between Φαs1 and ϕhh − ϕvv using the
absolute value of τ1 as a guide. The anisotropy (A) can also be used.
Figure 34. Comparison between the Touzi Φαs1 phase and the ϕhh − ϕvv phase difference,
full scene and detail (blue: single-bounce scattering; red: double-bounce scattering)
Lesson summary
In this lesson, you:
• Used PSTOUZIDEC to perform the Touzi decomposition
• Compared the Touzi decomposition to the Cloude-Pottier decomposition
To start SPTA
• On the Geomatica toolbar, click SPTA.
The SAR Polarimetry Target Analysis and Target Selection windows
appear.
To open an image
1. In SPTA, click the File menu, and then click Open.
The File Selector window appears.
2. Open the
~\SAR_Training\Radar\Flevoland_RS2_SLC\FQ29_20100507
folder.
3. Select the FQ29_20100507sig.pix file, and then click Open.
A pop-up window appears, prompting you to select a georeferencing
source: either from the file or from the math model (RPCs).
4. Click Select File.
A second pop-up window appears, prompting you to choose how to display the
image.
5. Click North Up, and then click OK.
The FQ29_20100507sig.pix file is displayed in the Target Selection
window with channels 1, 2, and 4 mapped to RGB, respectively.
You can change the channels displayed by selecting the channel you
want beside R, G, and B, respectively.
On the toolbar you can:
To draw a target
1. In SPTA, in the Target Selection window, pan or zoom to the area of
interest, as applicable.
2. In the SAR Polarimetry Target Analysis window, select a target-selection
mode.
3. Draw a target over the area you want.
If the Arbitrary region – polygon option is selected, double-click to
close the shape.
The new target is added to the list in the Target Manager window.
Figure 35. Target selection mode: pixel plus clutter estimation region
5. In the Target Manager window, click the target you want to apply, and
then click Set as Current.
The target displays in the imagery in the Target Selection window.
Figure 37. Target Selection Mode, Coherent Target Decomposition, and Symmetric
scattering parameters
A second target, located at 1585.5P, 4437.5L can be selected. If an Overall size of
5 and a Gap size of 1 are used, the target does not appear to be symmetric. Set
Overall size to 9 and Gap size to 3, select the target again, and then click
Compute.
The target is now defined against a larger clutter corresponding to a flat ground
surrounding what appears to be a pylon. The symmetric scattering characteristics
can now be estimated.
Exercise 1: Compare the statistics of the two selected targets (see Figure 38)
Target 2 – Overall size = 5, Gap size = 1
==== Huynen and Cameron parameters ====
Target is coherent
Eigenvalues: -1365.166016 + 1757.697144 i, -1365.167969 + 1757.695557 i
Eigenvectors: -14.322257 + 0.000000 i, 0.999996 + 0.000000 i, -1.000005 + 0.000000 i, 14.322258 - 0.000000 i
Maximum returned power density: 225.422 (linear)
Characteristic angle: 45 deg
Absolute phase of the target: -127.836 deg
Target skip angle: -1.59984e-05 deg
Maximum polarisation orientation relative to the horizontal: -89.999985
Maximum polarisation ellipticity: 0.000002504478
Rotation angle for maximum symmetrical scattering component: 45 deg
Degree of symmetry: 1.0000000
Real Component of Co-polarised ratio: 1.0000000
Imaginary Component of Co-polarised ratio: 0.000000
Nearest elemental symmetric scattering mechanism: Trihedral
Distance to nearest elemental symmetric scattering mechanism: 0.000000
==== Symmetric Scattering Characterization ====
Target is not symmetric

Target 2 – Overall size = 9, Gap size = 3
==== Huynen and Cameron parameters ====
(identical to the values reported for Overall size = 5, Gap size = 1)
==== Symmetric Scattering Characterization ====
Degree of Coherence: 0.988058
Scattering Vector Direction: 0 deg
Phase Difference: -0 deg
Target Sphere Angle (PSI): 0 deg
Target Sphere Angle (CHI): 0 deg
Rotation Angle: 45 deg
Bitmaps produced by the PSWHITE and PSSSCM algorithms can be viewed in Focus.
The resulting bitmap can be opened in Focus over the FQ29_20100507sig.pix file.
Figure 40 and Figure 41 show the detected coherent point target using various
configurations of window size.
In comparison, the PSWHITE algorithm is less strict in its definition of a coherent
point target. This algorithm relies mainly on a threshold-based detection to
discriminate bright-point targets.
No. 1 parameters: WS=9, CWOW=15, CWCW=3, DST=0.8, DCH=0.8, SCRT=12
No. 2 parameters: WS=15, CWOW=25, CWCW=15, DST=0.8, DCH=0.8, SCRT=12
Figure 40. Coherent targets detection using PSSSCM (HH channel in background)
Figure 41. Coherent targets detection using PSWHITE (HH channel in background)
Lesson summary
In this lesson, you:
• Selected coherent targets in SPTA
• Analyzed the scattering characteristics of a coherent target
• Automatically detected coherent targets
Exercise 3: You are now ready to select some regions. From the results obtained
in Module 3, Lesson 4.1, and Lesson 4.2, the major land-use and land-cover classes
in the Flevoland region are readily apparent. For this analysis, you will select
regions corresponding to the following land-use and land-cover classes:
• Open water
• Urban area 1 [double-bounce dominated, aligned with the radar LOS]
• Urban area 2 (with volume, multiple scattering, or both)
• Forested areas
• Wetland (meadow, reeds)
• Agriculture (with strong HV, vegetated, rough or both)
• Agriculture (strong HH, double-bounce)
• Agriculture (smooth bare field)
To select regions
1. In the Target Selection window, draw a polygon over an area of open
water in the image.
2. In the Target Manager window, in the Description box, type
a_OpenWater.
Exercise 4: Select open water and forested areas and compare their scattering
characteristics by selecting some of the available numerical-output options. You will
use the targets defined in the previous lesson.
Figure 43. SAR Polarimetry Target Analysis window, Numerical Output options
Exercise 5: With the urban and the forest target, produce and compare the:
1. Copolarized response plot using the normalized scaling
2. Scatter plot using the HH and VV channels
3. Copolarized response plot with the ones in Figure 24 for canonical targets
b_UrbanArea_DB_LOS d_Forest
response plot (normalized) response plot (normalized)
Figure 45. Examples of polarimetric response and scatter plot for an urban and a
forest target
Lesson summary
In this lesson, you:
• Defined and selected targets
• Produced numerical and graphical output for the selected targets
• Analyzed the numerical and graphical output
6. Click Run.
Lesson summary
In this lesson, you:
• Ran a Wishart unsupervised classification
• Analyzed the results of the unsupervised Wishart classification
Preclassification tasks
The supervised Wishart classifier (the PSSWIS algorithm) requires as input a series
of training areas representing each class to classify. These training sites can be in
bitmap or vector format and created in Focus and then imported.
The first task is to put all the training sites in the same PCIDSK file. Each class will
be stored in a different segment. When the training sites are imported from
multiple sources, you must ensure that each shares the same projection as the
image to be classified. If bitmaps are used as training sites, they must have the
same number of lines and columns as the image to be classified.
5. On the Focus toolbar, click the arrow beside New Shapes ( ), click
the type of shape you want (Polygon, Rectangle, Ellipse, or Trace),
and then draw a shape over an urban area on the new vector layer.
Note All shapes created on the same layer must belong to the
same class.
4. Double-click PSSWIS.
The PSSWIS Module Control Panel window appears.
Exercise 2: What are the main differences between these two classifications? You
can use an RGB composite made of the HH, HV, and VV channels of the
FQ29_20100507sig_PSBOXCAR_7.pix file to help in your interpretation.
Lesson summary
In this lesson, you:
• Set up preclassification tasks
• Ran a Wishart supervised classification
• Analyzed the results of the Wishart classification
• Visually compared the results of the Wishart supervised and unsupervised
classifications
References
Ulaby, F.T., Held, D., Dobson, M.C., McDonald, K.C., Senior, T.B.A (1987). Relating
Polarization Phase Difference of SAR Signals to Scene Properties. IEEE Transactions
on Geoscience and Remote Sensing. Vol. GE-25, no.1, p.83-92.
Lee, Jong-Sen, Pottier, Eric (2009). Polarimetric Radar Imaging: From Basics to
Applications. CRC Press, Taylor & Francis Group, Boca Raton, Florida, USA. 398
pages.
Lo, C.P. (1998). Applications of Imaging Radar to Land Use and Land Cover
Mapping. Published in: Manual of remote sensing: principles and applications of
imaging radar. R.A. Ryerson (ed). John Wiley & Sons, Inc., New York. 896 pages.
van Zyl, J.J., Ulaby, F.T (1990). Chapter 2: Scattering Matrix Representation for
Simple Targets. Published in: Radar Polarimetry for Geoscience Applications. F.T.
Ulaby, C. Elachi, editors. Artech House, Norwood, MA, USA. 388 pages.
The success of change detection depends on how well the sensor's spatial,
spectral, radiometric, and temporal resolution matches the nature of the change
you want to detect. The techniques you select also have a significant bearing
on the results you obtain.
There are many techniques for detecting change with remote-sensing data. At one
end of the spectrum there is hard-change detection based on comparison of land-
use or land-cover classification and the resulting changes in discrete (qualitative)
categories.
At the other end, it is possible to directly compare two images using a band or ratio
technique or a principal-component analysis, for example. These techniques reveal
the magnitude of change, but provide little information on the nature of the
changes.
Between these two extremes, a multitude of hybrid techniques exists, with
varying levels of sophistication10.
The Geomatica radar suite has three coherent change-detection algorithms that can
extract the magnitude of change between two images (Table 10).
To detect change, you can run CCDINTEN, CCDPHASE, and CCDWISH individually
or in combination to produce images showing change based on metrics (Table 10).
Similar to the output from polarimetric decompositions (Module 3), the output from
the change-detection algorithms is typically not the end of the workflow, but rather
the beginning.
One option is to extract areas of change based on image thresholding to produce
binary maps that represent changed versus unchanged areas. You can also convert binary
maps to vector layers and overlay or combine them with ancillary data to interpret
change.
Another option, if you are working with fully polarimetric data, is to apply one or
many polarimetric decompositions on the same data set. In this case, the change-
detection algorithms provide the magnitude of change (quantitative) while the
polarimetric decompositions help you to understand the nature of change
(qualitative) by identifying the scattering mechanisms. This strategy is
demonstrated in Lesson 6.4.
Finally, to evaluate the same area, use a similar size processing window for change
detection and polarimetric decompositions.
Figure 51. Typical workflow for detection of incoherent change. Dotted lines indicate
optional steps.
A similar workflow for change detection can be applied to coherent targets; that is,
hard targets that cover from one to a few tens of pixels that can be compared to
canonical targets (see Module 3, polarimetry fundamentals). A coregistration
between each SLC image pair is usually preferred over an orthorectification to
preserve the highest possible spatial resolution and to preserve the signal as close
as possible to its raw (calibrated) values. Because the characterization of coherent
targets relies essentially on the phase, use a smaller window size with algorithms
such as CCDPHASE and CCDWISH.
Distinguishing change that occurs over a few contiguous pixels from the speckle can
be a challenge. This is common because (conventional) speckle filtering is not
usually applied to the data before the change-detection analysis or coherent-target
characterization.
Lesson summary
In this lesson you:
• Learned about the options available for change detection in Geomatica
• Examined examples of possible workflows for detection of coherent and
incoherent change
7. Click the Input Params 1 tab, and then in the Window size list, click
15.
8. Click Run.
Exercise 2: Interpret and compare the results of the CCDINTEN change detection
for HH, HV, VV, and the span.
[Figure panels for HH, HV, and VV: RGB composite (R: 0507, G: 0531, B: 0507), 15 x 15 change metric, and 15 x 15 ranked change]
Lesson summary
In this lesson, you:
• Ran intensity-change detection on SAR data
• Analyzed the results of the intensity-change detection
Exercise 4: Compare the change-detection results for the span obtained with
CCDINTEN and CCDWISH. Which type of land use and land cover shows the most
difference, and which one shows the least difference?
Lesson summary
In this lesson, you:
• Ran a Wishart change detection on SAR data
• Analyzed the results of a Wishart change detection
• Compared the results of a Wishart change detection and an intensity-
change detection
• Compared the results of a Wishart change detection with a change
observed in the HH-VV correlation coefficient
[Figure: span change metric and ranked change]
Exercise 5: Run the CCDPHASE detection algorithm for HH, HV, and VV channels.
[Figure panels for HH, HV, and VV: 15 x 15 change metric and 15 x 15 ranked change]
Data preparation
To complete this lesson you need:
• The FQ29_20100507_0531sig_CCDWISH_15x15_span.pix file produced in
Lesson 6.3. If this file has not been produced, it can be generated with
CCDWISH (15 x 15 processing window) using as input the two Radarsat-2
Flevoland data sets ingested with the sigma nought calibration
(FQ29_20100507sig.pix and FQ29_20100531sig.pix).
• The results from the Touzi decomposition (PSTOUZIDEC) applied on
FQ29_20100507sig.pix and FQ29_20100531sig.pix. Remember to first
filter the files using PSBOXCAR with a 7 x 7 processing window. The
outputs can be named:
FQ29_20100507sig_PSBOXCAR_7_PSTOUZIDEC.pix and
FQ29_20100531sig_PSBOXCAR_7_PSTOUZIDEC.pix.
As a reminder, the change metric from CCDWISH (and CCDPHASE and CCDINTEN)
does not consider the direction of change; therefore, the higher the number, the
stronger the change is according to the metric. Based on a visual inspection of the
histogram, the majority of pixels are distributed between 24.1 and 31 with a mean
of 25.52. The shape of the histogram suggests that a value of approximately 26
may be a good starting point for setting the threshold value.
Using Focus EASI Modeling to set threshold value and extract areas of
change
The change metric will be thresholded and the result stored in a bitmap layer.
1. If not open already, in Focus, open
FQ29_20100507_0531sig_CCDWISH_15x15_span.pix.
2. Click the Files tab, and then select the
FQ29_20100507_0531sig_CCDWISH_15x15_span.pix file.
3. Right-click, point to New, and then click Bitmap Layer.
4. Load the (empty) bitmap.
5. Right-click the bitmap, and then click View.
The bitmap is displayed automatically at the top on the Map tab.
6. On the Tools menu, click EASI Modeling.
You are now ready to extract the areas of change.
7. Using the following script, extract all pixels that are above 26 and output
the results in the new bitmap (%%2).
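The thresholding step can also be prototyped outside Focus. A rough numpy equivalent, assuming the change metric has been read into an array (the threshold of 26 comes from the histogram inspection above; the array values and variable names are illustrative only):

import numpy as np

# Hypothetical change-metric values (in Focus this is the CCDWISH output channel).
change_metric = np.array([[22.0, 27.3, 31.0],
                          [25.9, 26.1, 24.4]])

threshold = 26.0
change_bitmap = (change_metric > threshold).astype(np.uint8)   # 1 = change, 0 = no change
print(change_bitmap)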
In the following procedure, you will use a filter to remove all polygons less than
3,000 meters squared in area, which corresponds approximately to 110 pixels.
Figure 61. Polygons greater than 3,000 meters squared (white) superposed on the
original bitmap layer in red (CCDWISH>27.5)
3. Under Primary Input, under Layer, click the File list, and then select
the vector file to add the new attributes (statistics); that is, select
polygons_CCDWISH_15_Change_sup27.5_sup3000sqm.pix.
4. Under Secondary Input, under Layer, click the File list, and then select
the file containing the layer of interest; that is, select
FQ29_20100507sig_PSBOXCAR_7_PSTOUZIDEC.pix.
If the file is already open in Focus it will be available in the list; otherwise,
to open the file, click Browse.
5. Click the Layer list, select (3 [32R] Dominant Touzi Alpha_S
Parameter, and then click Next.
6. In the table, select one or more attributes to compute (you may need to
click Advanced to see all of them), and then click Finish.
The selected attributes are calculated.
7. Repeat steps 1 to 6 for each layer of interest.
After collecting all of the statistics, the vector file should contain
approximately 1,660 records (polygons) and 10 fields. This file can be
found in ~\SAR_Training\Radar\Vectors\
(polygons_CCDWISH_15_Change_sup27.5_sup3000sqm.pix).
Interpreting change
The polygons representing the areas of change, the statistics collected for each
polygon, the original parameters from the Touzi decomposition, and the Landsat
images can all be used to interpret and understand change.
In Figure 60, you can see that most change is associated with surface scattering in
agricultural fields. This is confirmed in Figure 62, where αs1 shows values less than
40 degrees for the two dates for most of the polygons. In the same figure (αs1),
most of the polygons are clustered along the 1:1 line and do not show a big change
in the dominant scattering mechanisms, which, at first, might look counterintuitive.
In comparison, as expected, most polygons show a strong difference in the span
scatter plot because CCDWISH uses the trace of the coherency matrix (Figure 62,
12-D). A possible explanation is that soil moisture increases the backscatter power,
but with little effect on the phase relationship between the two orthogonal
polarizations used for signal Tx/Rx (usually HH-VV). The backscattering mechanism
remains the same (wet soil versus dry), especially because the backscattered
power is normalized in the calculation of αs1.
Some polygons are characterized by a change in both αs1 and the span. A typical
situation is fields occupied by row crops in which the combination of plant
growth and rows increases αs1 (from surface toward dipole or a weak double-bounce
scattering) while the span has decreased due to soil drying between May 7
and May 31 (Figure 62, A and B; Figure 63, A and B versus C).
Figure 64 and Figure 65 show change associated with moving targets over land or
water. Typically, these targets produce the most noticeable changes in the
scattering type, the span, and the purity of the backscattered power, represented
here by λ1N (the first normalized eigenvalue).
You can see in Figure 64, E) that the workflows discussed in this section worked
well in urban areas to filter out the few changes that were associated with small
differences in the viewing geometry between the two images. In the same figure,
you can also see the effect of using a large window size (15 x 15) in CCDWISH:
change associated with small targets has expanded to the areas that surround
them.
[Figure panels: A) May 7, 2010, αs1 dominant symmetric scattering type; B) May 31, 2010, αs1 dominant symmetric scattering type; C) Span (R: May 7, G: May 31, B: May 7); D) May 7, 2010, λ1N, first normalized eigenvalue; E) May 31, 2010, λ1N, first normalized eigenvalue]
Lesson summary
In this lesson you:
• Used CCDWISH to produce a map of change
• Extracted areas of change using channel thresholding
• Exported the areas of change to a vector file using BIT2POLY
• Interpreted the nature of change using a polarimetric decomposition
(PSTOUZIDEC)
References
Masroor Hussain, Dongmei Chen, Angela Cheng, Hui Wei, David Stanley (2013).
Change detection from remotely sensed images: From pixel-based to object-based
approaches. ISPRS Journal of Photogrammetry and Remote Sensing, Vol.80, p.91-
106.
Jensen, J. R (2005). Introductory Digital Image Processing: A Remote Sensing
Perspective. 3rd ed. Prentice-Hall Series in Geographic Information Science.
Pearson/Prentice Hall, Upper Saddle River, N.J., USA, 526 pages.
Data preprocessing
Using the concepts learned in Lesson 1.4, Lesson 1.5, and the Flevoland scene
located in ~\SAR_Training\Radar\Flevoland_RS2_SLC\FQ29_20100507:
1. Ingest the product.xml file into a PCIDSK file using sigma as the
calibration type.
Name the file FLE_FQ29_20100507sig.
2. Use the PSIQINTERP algorithm to convert complex data to detected data.
Convert the HH and HV channels to intensity.
Name the file FLE_FQ29_20100507sig_HH-HV.
Compare the ratio and difference results to the original HH and HV channels.
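The ratio and difference operations themselves are simple. A numpy sketch of what the EASI Modeling step computes (the channel values are hypothetical placeholders, and the ratio is often easier to interpret in dB):

import numpy as np

# Hypothetical detected intensity channels (HH and HV).
hh = np.array([[0.30, 0.80], [0.05, 0.12]])
hv = np.array([[0.03, 0.10], [0.02, 0.04]])

eps = 1e-12
difference = hh - hv                                   # band difference
ratio = hh / np.maximum(hv, eps)                       # band ratio
ratio_db = 10.0 * np.log10(np.maximum(ratio, eps))     # ratio expressed in dB

print(difference)
print(ratio_db)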
Lesson summary
In this lesson you:
• Applied a band ratio and a band difference between two polarimetric
channels using EASI Modeling
• Compared the band-ratio and band-difference result
Exercise 2: For which kind of land use and land cover is HH higher than HV? For
which kind of land use and land cover is HH similar to HV?
Compare the results with those from the previous lesson (ratio). What are the main
differences you can observe between these two techniques? Are some differences
related to the type of filter used?
To facilitate the interpretation of the results, a Landsat-5 image mosaic of the
region is provided. Select the Flevoland_L5_20100906_p198_r23r24.pix file located
in the ~\SAR_Training\Landsat folder.
HH (nonfiltered) HV (nonfiltered)
Lesson summary
In this lesson you:
• Applied an intensity-change detection between two polarimetric channels
• Analyzed the results
7. Under Basis of SAR Texture Measure, select all four of the check boxes.
8. In the Horizontal Window Size list, click 7.
9. In the Vertical Window Size list, click 7.
10. In the Image Units list, click Power.
Note: It is important to select the correct image format for the input
layer. Any required conversions are performed internally to use the
correct values for each computed texture measure.
11. Click Run.
Lesson summary
In this lesson you:
• Calculated SAR-specific texture measures by using SARTEX
• Calculated texture measures based on co-occurrence matrices by
running TEX
Polarimetric discriminators
Polarimetric decompositions are useful in identifying the scattering mechanisms
characterizing a point or a distributed target. However, they only characterize a
part of the polarimetric information available for a given target. It is possible to
deepen the characterization of a target by analyzing its eigenvalues (λi, i = 1, 2, 3)
produced by a polarimetric decomposition (Module 3).
The polarimetric response plots introduced in Module 3 and Module 4 can also be
used to identify the backscattering mechanisms characterizing a target. These
responses also provide an analysis of the backscatter power that is often
overlooked in the polarimetric decompositions.
It is particularly interesting to analyze the peaks and valleys of a polarimetric
response plot:
"the polarization plots have peaks at polarizations that give rise to maximum
received power, and valleys where the received power is smallest, in agreement
with the concept of Huynen's polarization fork in the Poincaré sphere"
—(CCRS, 2007, Boerner, et al. 1998, fig.5-3-9)
The polarimetric response plot corresponds to a projection of the Poincaré sphere,
where the orientation angle (ψ) represents the longitude and the ellipticity angle (χ)
represents the latitude. Using an increment of one degree, there are more than
16,000 possible combinations of ψ and χ for signal transmission, and a similar
number for reception (16,000 x 16,000 possible transmit-receive combinations).
Fortunately, the SAR Analysis toolbox provides algorithms to automatically analyze
the properties of a target using the concept of the Huynen fork on the Poincaré
sphere.
The PSPOLDIS algorithm calculates a number of polarimetric discriminators for a
fully polarimetric SAR (POLSAR) data set. The PSPOLSYN algorithm creates a
synthesized-backscatter SAR image for arbitrary transmit and receive polarizations.
The PSPOLSYNC algorithm creates a synthesized-backscatter image to maximize
the contrast between two targets.
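The synthesis that PSPOLSYN performs can be sketched with the standard Stokes-vector formulation: the power received for a transmit polarization (ψt, χt) and a receive polarization (ψr, χr) is proportional to gr · K · gt, where K is the Kennaugh matrix of the target. The following is a conceptual illustration only; the Kennaugh matrix values are hypothetical and the scaling constant is convention dependent:

import numpy as np

def stokes_vector(psi_deg, chi_deg):
    # Unit Stokes vector for a fully polarized wave with orientation psi and ellipticity chi.
    psi, chi = np.radians(psi_deg), np.radians(chi_deg)
    return np.array([1.0,
                     np.cos(2 * psi) * np.cos(2 * chi),
                     np.sin(2 * psi) * np.cos(2 * chi),
                     np.sin(2 * chi)])

def synthesized_power(kennaugh, psi_t, chi_t, psi_r, chi_r):
    # Received power (up to a constant factor) for arbitrary transmit/receive polarizations.
    g_t = stokes_vector(psi_t, chi_t)
    g_r = stokes_vector(psi_r, chi_r)
    return 0.5 * g_r @ kennaugh @ g_t

# Hypothetical 4x4 Kennaugh matrix of an averaged target.
K = np.diag([1.0, 0.4, 0.3, 0.2])
print(synthesized_power(K, psi_t=46.1, chi_t=12.5, psi_r=46.1, chi_r=12.5))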
Appendix B has four lessons:
• Generating polarimetric discriminators based on coherency matrix eigenvalues
• Generating polarimetric discriminators based on analysis of the Poincaré sphere
• Synthesizing a backscatter image for arbitrary transmit and receive polarizations
• Maximizing the contrast between two targets
Data preprocessing
Before you can calculate polarimetric discriminators, you must preprocess the data.
Exercise 1: Compare the different polarimetric discriminators based on the coherency
matrix eigenvalues.
There is some correlation between anisotropy, entropy, and the polarimetric
discriminators based on the coherency matrix eigenvalues. Which polarimetric
discriminators are less correlated and which are more correlated?
%1 %2 %6 %7 %8 %9 %10 %11
%2 -0.28 x x x x x x x
%6 -0.86 -0.19 x x x x x x
Legend: %1, entropy (H); %2, anisotropy (A); %6, dominant point target; %7, more than one scattering mechanism; %8, two strong scattering mechanisms; %9, fully diffused scattering
Lesson summary
In this lesson, you:
• Generated polarimetric discriminators based on the coherency matrix
eigenvalues
• Compared the polarimetric discriminators produced
[Figure panels: %3, Max. Int. C.P.C; %6, Min. Int. C.P.C; %1, Int. Max. Pol. part/total power; %9, Max. Int. C.U.C; %10, Min. Int. C.U.C; %16, Coef. Frac. Pol.]
Lesson summary
In this lesson, you:
• Generated different polarimetric discriminators by using PSPOLDIS
• Compared the polarimetric discriminators produced
[Figure panels:
T1: transmit ψ = 46.1°, χ = 12.5°; receive ψ = 46.1°, χ = 12.5°; mean = 1.25 (0.96 dB)
T2: transmit ψ = 46.1°, χ = 12.5°; receive ψ = -57.04°, χ = -12.95°; mean = 0.12 (-9.2 dB)
T3: transmit ψ = -57.0°, χ = -12.9°; receive ψ = -57.0°, χ = -12.9°; mean = 0.22 (-6.57 dB)]
Figure 73. Synthesized backscatter SAR images for various transmit and receive
polarization configurations
Lesson summary
In this lesson you:
• Created synthesized backscatter SAR images for arbitrary transmit and
receive polarizations
1. Name the first bitmap layer "urban" and the second bitmap layer "forest".
2. Load the new empty bitmaps in Focus. Right-click each layer, and then click View.
3. Click the Focus Maps tab.
4. Highlight the "urban" bitmap layer and draw a region similar to the one
presented in Figure 74. Save the "urban" bitmap layer.
5. Repeat step 4 to draw the "forest" bitmap layer.
Blue: Target 2, forest; mean = 0.054 (-12.7 dB)
Figure 75. Synthesized backscatter image maximizing the contrast between the two
targets