
REMOTE SENSING IMAGE ACQUISITION, ANALYSIS AND APPLICATIONS

Module Three

Computer-based interpretation in practice


Remote sensing with imaging radar

SOLUTIONS TO END-OF-LECTURE QUIZZES

John Richards
The University of New South Wales
The Australian National University

Lecture 1. Feature reduction

• What is the difference between feature reduction and feature selection?


Feature reduction is the more general term: it means reducing, by some means, the
number of features that need to be used in a classification. Feature selection is a
special case of feature reduction in which a subset of the original features is retained.

• The correlation matrix for a particular four-band image is

     1.00   0.85   0.31  −0.09
     0.85   1.00   0.39  −0.07
     0.31   0.39   1.00   0.86
    −0.09  −0.07   0.86   1.00

Which band would you discard if you were trying to retain the best three of the four
features?
The first and second bands are highly correlated (0.85), as are the third and fourth
bands (0.86). However, band 3 also shows stronger correlations with bands 1 and 2
(0.31 and 0.39) than band 4 does (−0.09 and −0.07), so band 3 would be the one to discard.
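
As a cross-check, one simple heuristic (my own sketch, not a method given in the course) is to compute each band's total absolute correlation with the other bands and discard the most redundant one:

    import numpy as np

    # Correlation matrix from the question (bands 1-4).
    R = np.array([
        [ 1.00,  0.85,  0.31, -0.09],
        [ 0.85,  1.00,  0.39, -0.07],
        [ 0.31,  0.39,  1.00,  0.86],
        [-0.09, -0.07,  0.86,  1.00],
    ])

    # Total absolute correlation of each band with the other bands.
    redundancy = np.abs(R).sum(axis=1) - 1.0   # subtract the diagonal term
    print(redundancy)                          # band 3 (index 2) is the largest
    print("Discard band", np.argmax(redundancy) + 1)

With these numbers the heuristic agrees with the reasoning above and selects band 3 for discarding.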

Lecture 2. Exploiting the structure of the covariance matrix

• Can you show that the classification time is quadratically dependent on the number
of bands for the maximum likelihood classifier? You may find the following
expression for the discriminant function helpful in this answer.

$g_i(\mathbf{x}) = \ln p(\omega_i) - \tfrac{1}{2}\ln|\mathbf{C}_i| - \tfrac{1}{2}(\mathbf{x}-\mathbf{m}_i)^T \mathbf{C}_i^{-1}(\mathbf{x}-\mathbf{m}_i)$

Carrying out a classification requires the discriminant function to be evaluated for each
class at each pixel. The first two terms do not depend on the number of bands, and so
don't enter this discussion. Evaluating the last (quadratic) term requires N² + N
multiplications, which is a quadratic function of the number of bands N.
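
A minimal numpy sketch of this evaluation (with hypothetical class statistics; in practice the inverse and log-determinant would be precomputed once per class):

    import numpy as np

    def ml_discriminant(x, prior, mean, cov):
        # Maximum likelihood discriminant g_i(x) for one class.
        # The quadratic term (x - m)^T C^{-1} (x - m) dominates the per-pixel
        # cost: the matrix-vector product needs N^2 multiplications and the
        # final dot product a further N, hence the quadratic dependence on N.
        d = x - mean
        cov_inv = np.linalg.inv(cov)        # normally precomputed per class
        quad = d @ cov_inv @ d
        return np.log(prior) - 0.5 * np.log(np.linalg.det(cov)) - 0.5 * quad

    # Hypothetical statistics for a single class with N = 4 bands.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(4, 4))
    cov = A @ A.T + 4.0 * np.eye(4)         # a valid covariance matrix
    print(ml_discriminant(rng.normal(size=4), 0.25, np.zeros(4), cov))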

• In the correlation matrix below, what do the large white blocks near the top right
hand and bottom left hand corners indicate?
That the corresponding sets of bands are highly correlated.

• In the same correlation matrix, what are the large vertical and horizontal black
stripes through the matrix?
They are bands which correspond to significant water absorption, for which little
signal is received by the sensor.

Lecture 3. Feature reduction by transformation

• Why is 𝜆 a scalar in equation (A) in this lecture?


Both the numerator and denominator are of the form $\mathbf{d}^T\mathbf{C}\mathbf{d}$. Since $\mathbf{d}$ is a column vector,
the product $\mathbf{C}\mathbf{d}$ is also a column vector with the same dimensions as $\mathbf{d}$. Pre-multiplying
that result by $\mathbf{d}^T$ is a scalar (inner) product, so each is a scalar, and the ratio of two scalars is a scalar.

• Suppose we wanted to compare directly the average within-class covariance matrix
$\mathbf{C}_W$ and the among-class covariance matrix $\mathbf{C}_A$, such as in the formula
$\mathbf{C}_W^{-1}\mathbf{C}_A$

Is this a scalar and, if not, how can a scalar measure be derived from it?
Would we seek to minimise or maximise the expression?
That matrix product is a matrix. We want to maximise it, so that we have the largest
separation among the classes and the smallest class variances. If we wanted to
replace it by an appropriate scalar, we could use its trace.
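
A small numpy sketch with hypothetical two-band covariance matrices, showing that the product is a matrix and that its trace provides the scalar measure:

    import numpy as np

    # Hypothetical within-class (C_W) and among-class (C_A) covariance matrices.
    C_W = np.array([[2.0, 0.3],
                    [0.3, 1.5]])
    C_A = np.array([[5.0, 1.0],
                    [1.0, 4.0]])

    M = np.linalg.inv(C_W) @ C_A     # this product is a matrix, not a scalar
    J = np.trace(M)                  # a scalar separability measure, to be maximised
    print(M)
    print(J)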

Lecture 4. Separability measures

• Why is feature reduction important?


Too many features compared with the number of training samples can lead to a poorly trained
classifier and one that does not generalize well. That leads to the so-called curse of
dimensionality, or the Hughes phenomenon. Also, reducing the number of features
reduces classification time, and thus cost.

• Can you verify the end points of the divergence and JM curves shown in the 7th slide
of this lecture?
Consider their formulas.

Divergence:
$d_{ij} = \tfrac{1}{2}\mathrm{tr}\{(\mathbf{C}_i - \mathbf{C}_j)(\mathbf{C}_j^{-1} - \mathbf{C}_i^{-1})\} + \tfrac{1}{2}\mathrm{tr}\{(\mathbf{C}_i^{-1} + \mathbf{C}_j^{-1})(\mathbf{m}_i - \mathbf{m}_j)(\mathbf{m}_i - \mathbf{m}_j)^T\}$

JM distance:
$J_{ij} = 2\left(1 - e^{-B_{ij}}\right)$

in which
$B_{ij} = \tfrac{1}{8}(\mathbf{m}_i - \mathbf{m}_j)^T \left[\tfrac{1}{2}(\mathbf{C}_i + \mathbf{C}_j)\right]^{-1}(\mathbf{m}_i - \mathbf{m}_j) + \tfrac{1}{2}\ln\left\{\frac{\left|\tfrac{1}{2}(\mathbf{C}_i + \mathbf{C}_j)\right|}{|\mathbf{C}_i|^{1/2}|\mathbf{C}_j|^{1/2}}\right\}$

The abscissa of the graphs is the distance between means. When that is zero, divergence
has a constant value equal to its first term, which is a function of the covariance matrices.
Likewise, for the JM distance 𝐵 is then just a function of the covariances, as is 𝐽, so that it
also has a constant value at the origin.
When the distance between the means is very large both divergence and 𝐵 increase
quadratically without bound. But the exponential term in the JM formula limits it to 2.
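
A short numpy sketch (my own, using the formulas above) that shows the saturation at 2. For simplicity the two classes are given identical, identity covariance matrices, so the value at zero separation is 0 rather than the covariance-dependent constant of the plotted curves:

    import numpy as np

    def jm_distance(m1, C1, m2, C2):
        # Jeffries-Matusita distance between two Gaussian class models.
        C = 0.5 * (C1 + C2)
        dm = m1 - m2
        B = dm @ np.linalg.inv(C) @ dm / 8.0 \
            + 0.5 * np.log(np.linalg.det(C)
                           / np.sqrt(np.linalg.det(C1) * np.linalg.det(C2)))
        return 2.0 * (1.0 - np.exp(-B))

    C = np.eye(2)
    m = np.zeros(2)
    for sep in (0.0, 1.0, 5.0, 50.0):
        # JM rises with the separation between the means and saturates at 2.
        print(sep, jm_distance(m, C, m + np.array([sep, 0.0]), C))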

Lecture 5. Distribution-free separability measures

• Why are non-parametric feature reduction techniques attractive?


Because they don’t depend on having reliable estimates of the parameters of
probability models, and thus can be applied to data of high dimensionality.

• Would the city block distance measure be preferable to Euclidean distance when
computing the weight adjustments in ReliefF?
The ReliefF algorithm, in common with most clustering techniques, is heavily reliant
on distance comparisons. The city block distance is faster to compute than the Euclidean
distance, so it would be an attractive choice.

• If you did not know the class prior probabilities how would the within and among
class scatter matrix definitions be modified in NDA?
Equal priors could be assumed. Thus $p(\omega_i) = 1/C$.

Lecture 6. Assessing classifier performance and map errors

• Is producer’s accuracy more important than user’s accuracy? You may wish to
answer this from two points of view—as the designer of a classification algorithm or
as a user interested in crop hectarages on a thematic map.
As the names imply, the user of a thematic map is interested in the map’s accuracy—
thus user’s accuracy is the key measure. On the other hand, the designer of a
classification strategy uses producer’s accuracy to see how well the algorithm and
strategy are working.

• If a classifier performed equally well on all classes would the user’s accuracies all be
the same?
Not necessarily; it depends on the errors of commission.

• Are there practical problems with cross validation? To answer this remember that a
classifier has to be trained as many times as there are partitions of the reference
data set.

The problem is one of cost (time to get an answer), which is directly related to the
number of classifications that have to be performed; in turn that is determined by
how many subsets the reference data is divided into.

• Some researchers average the producer’s accuracies to get a single measure of


classifier performance. Would you recommend that?
It’s OK but it can mask poor performance on some classes, as does using the
measure of overall accuracy.

Lecture 7. Classifier performance and map accuracy

• Explain carefully the third dot point in the summary of the previous slide
Assume, as a simple example, that a classifier has an accuracy of 100% on a water
class, but that class, although important, occupies only 1% of the scene. By
comparison, suppose a grassland class occupies the other 99% of the scene. Assume
the classifier has an accuracy of 80% on the grassland class. That means 20% of the
time it labels grassland as water (committing an error)—that’s equivalent to about
19.8% of the scene. The water class in the thematic map is therefore badly in
error: only about 1 in 20 of the pixels labelled water is actually water (even though the
classifier got all the actual water pixels right!).

• Verify the average map accuracy figures in the table on the sixth slide of this lecture
From the following slide (the seventh), the map accuracy can be calculated as the sum of the
classifier accuracies on each class weighted by the prior probabilities. Thus, for each
of the three sets of priors the calculations are, in order:

0.700*0.368+0.925*0.294+0.891*0.338=0.831
0.700*0.333+0.925*0.333+0.891*0.333=0.838
0.700*0.900+0.925*0.050+0.891*0.050=0.721
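
The arithmetic can be checked with a few lines of Python:

    # Map accuracy = sum over classes of (classifier accuracy x prior probability).
    class_accuracy = [0.700, 0.925, 0.891]
    prior_sets = [
        [0.368, 0.294, 0.338],
        [0.333, 0.333, 0.333],
        [0.900, 0.050, 0.050],
    ]
    for priors in prior_sets:
        map_acc = sum(a * p for a, p in zip(class_accuracy, priors))
        print(round(map_acc, 3))     # 0.831, 0.838, 0.721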

Lecture 8. Choosing testing pixels for assessing map accuracy

• In slide 11 of this lecture the number of testing pixels drops with the anticipated
level of map accuracy. Why?
Because fewer test pixels are needed if the accuracy is high. For example, if the
accuracy were suspected of being 100%, only one testing pixel would be needed to verify that
fact. If the accuracy were about 50%, then a large number would be needed so that,
over random trials (samples), the 50% estimate would be reached reliably.

• Consider the variance formula


$\mathrm{var}(p) = \dfrac{P(1-P)}{n}\,\dfrac{(N-n)}{(N-1)}$

If 𝑛 = 𝑁 what is the variance? What does that mean?



N is the number of pixels in the map and n is the number of testing pixels used to
assess map accuracy.
If n = N the variance will be zero. The implication is as follows: there
will be uncertainty in the map accuracy estimate derived from the n testing pixels,
and that uncertainty reduces as n gets larger. In the limit, when n = N (i.e. every map pixel is
checked), the true value of the map accuracy is found; in other words, there is then
no variance and thus no uncertainty.
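
A small Python sketch of the formula, with a hypothetical map size and accuracy, showing the variance shrinking to exactly zero as n approaches N:

    def var_p(P, n, N):
        # Variance of the estimated accuracy p from n test pixels out of N map pixels.
        return P * (1.0 - P) / n * (N - n) / (N - 1)

    N = 1_000_000                    # hypothetical number of map pixels
    for n in (100, 10_000, N):
        print(n, var_p(P=0.85, n=n, N=N))   # falls to 0 when n == N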

• Explain the difference between confidence level and the tolerance on the estimated
accuracy 𝜀.
A given level of confidence tells us how sure we are that the map accuracy
determined from using reference samples lies within ±𝜀 of the true value.

Lecture 9. Classification methodologies

• Verify the information class labels attached to the clusters in the ninth slide of this
lecture

[Figure: cluster centre mean brightness plotted against wavelength (0.511, 0.634, 0.847, 1.617 and 2.153 micrometres), for clusters labelled grass, bare, sparse veg, tracks, trees, buildings/roads, water/veg and water (some labels appear more than once).]

The cluster centres are plotted above, whereupon they can be identified by a knowledge of
spectral response curves. In general, those cover types that are predominantly vegetation
have a response at 0.634 (red) lower than that at 0.511 (green), and a high response at
0.847 (near IR). Those cover types that are predominantly bare/buildings etc. have a
response at 0.634 higher than that at 0.511, and a high response at 0.847. Both vegetated
and bare surfaces have lower responses at 2.153 (mid IR). Water tends to be low at all
wavelengths and tends to drop with increasing wavelength.

Lecture 10. Other interpretation methods

• The following is a scatter diagram in the red near infrared space showing a range of
ground covers. Show how the simple vegetation index and NDVI would plot in such
a space.
[Figure: scatter diagram of ground covers in the channel 15 (visible red) brightness versus channel 29 (NIR) brightness space, with regions marked vegetation, trees, soils and water.]

The two indices are plotted below.

[Figure: the same red versus NIR brightness space shown twice, with lines of constant simple vegetation index (VI = 4, 3, 2, 1) in the left panel and lines of constant NDVI (1, 0.5, 0.25, 0) in the right panel.]
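
Assuming the simple vegetation index here is the band ratio VI = NIR/red (consistent with the VI = 1 to 4 contour labels), and NDVI = (NIR − red)/(NIR + red), both depend only on the ratio of the two bands, so lines of constant index are straight lines through the origin of the red-NIR space. A matplotlib sketch (my own, not from the course material) that reproduces the two panels:

    import numpy as np
    import matplotlib.pyplot as plt

    red = np.linspace(1, 8000, 200)
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4), sharey=True)

    # Simple (ratio) vegetation index VI = NIR / red: slope of the line = VI.
    for vi in (4, 3, 2, 1):
        ax1.plot(red, vi * red, label=f"VI = {vi}")

    # NDVI = (NIR - red) / (NIR + red): slope = (1 + NDVI) / (1 - NDVI).
    for ndvi in (0.5, 0.25, 0.0):
        ax2.plot(red, red * (1 + ndvi) / (1 - ndvi), label=f"NDVI = {ndvi}")
    ax2.axvline(0.0, label="NDVI = 1")       # NDVI = 1 is the NIR axis itself

    for ax in (ax1, ax2):
        ax.set(xlim=(0, 8000), ylim=(0, 7000),
               xlabel="channel 15 (vis red) brightness")
        ax.legend()
    ax1.set_ylabel("channel 29 (NIR) brightness")
    plt.show()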

Lecture 11. Fundamentals of imaging radar

• Even though it is small there is some microwave energy naturally radiated by the
earth. If you wanted to form an image using natural microwave emissions what
would you have to consider in terms of pixel size?
In order to collect enough energy to create a measurable signal a very large pixel size
is needed.

• For 𝜏 = 1𝜇𝑠 plot a graph of ground range resolution for 𝜃 = 30 to 60 degrees.


Ground range resolution is given by $r_g = \dfrac{c\tau}{2\sin\theta}$. For the 1 μs ranging pulse we have
$r_g = \dfrac{150}{\sin\theta}$ metres, which is plotted below:

[Figure: ground range resolution in metres plotted against look angle in degrees over the range 20 to 70 degrees.]
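
A few lines of Python reproduce the curve:

    import numpy as np
    import matplotlib.pyplot as plt

    c = 3.0e8                        # speed of light, m/s
    tau = 1.0e-6                     # ranging pulse length, s
    theta = np.radians(np.linspace(30, 60, 200))

    r_g = c * tau / (2.0 * np.sin(theta))    # ground range resolution, metres

    plt.plot(np.degrees(theta), r_g)
    plt.xlabel("look angle (degrees)")
    plt.ylabel("ground range resolution (m)")
    plt.show()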

• Is there a benefit of using SAR instead of SLAR on an aircraft or drone platform?


Yes, because the azimuth resolution is independent of altitude, including random
altitude variations caused by atmospheric turbulence.

Lecture 12. Summary of SAR and its practical implications

• What would be the vertical dimension of a SAR antenna needed to achieve a swath width of
30 km at an altitude of 900 km, a look angle of 30 degrees and an operating
wavelength of 20 cm?

The beamwidth of an antenna is given by $\Theta = \lambda/l$ rad, where $l$ is the antenna dimension and
$\lambda$ the operating wavelength. When projected onto the ground at slant range $R$ this gives a swath of
$S = R\,\Theta = \dfrac{\lambda}{l}R$. Thus, for this example, to
achieve a 30 km swath width at a 30 degree look angle, the vertical
dimension of the antenna will be $l = \dfrac{\lambda R}{S} = 6.93$ m, where $R = 900\,\mathrm{km}/\cos 30^\circ \approx 1039$ km.

• For the same system as above what would be the values of the other system
parameters to achieve a resolution cell size of 30x30m?
The azimuth antenna length will be 60m (twice the azimuth resolution) and the
ranging pulse width would be 100ns.
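
The three numbers can be checked together in a short Python sketch, using the relations quoted above (vertical dimension l = λR/S, azimuth resolution = l_a/2 and ground range resolution = cτ/(2 sin θ)):

    import numpy as np

    c = 3.0e8                         # m/s
    wavelength = 0.20                 # m
    altitude = 900e3                  # m
    look_angle = np.radians(30.0)
    swath = 30e3                      # m
    resolution = 30.0                 # m, required in azimuth and ground range

    R = altitude / np.cos(look_angle)              # slant range to the swath
    l_vertical = wavelength * R / swath            # elevation (vertical) dimension
    l_azimuth = 2.0 * resolution                   # SAR azimuth resolution = l_a / 2
    tau = 2.0 * resolution * np.sin(look_angle) / c    # ranging pulse length

    print(f"vertical antenna dimension: {l_vertical:.2f} m")   # about 6.93 m
    print(f"azimuth antenna length:     {l_azimuth:.0f} m")    # 60 m
    print(f"ranging pulse length:       {tau * 1e9:.0f} ns")   # 100 ns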

• How would the resolution cell size change if the platform were at an altitude of
1100km?
Both the azimuth and range resolutions are independent of platform altitude, so
they don’t change.

Lecture 13. The scattering coefficient

• Refer to the slide in which we described radar cross section; would you expect 𝜎 to
change if the target were rotated so that it showed a different aspect to the
incoming wave front?
Radar cross section is defined such that the energy intercepted from the incoming
wave front is assumed to be re-radiated isotropically. The energy intercepted will depend upon
the geometry of the object facing the incoming wave front. If it were a flat plate,
then in rotation it could go from presenting a large area that intercepts energy from
the wave to an edge that would intercept virtually none. Thus, the orientation of
the target affects the measured radar cross section. Often the radar cross section of
an object will be displayed on polar coordinates, showing this dependence.

• With respect to your answer above, what about possible changes in σ⁰ with
incidence angle for earth surface scattering?
In general, the same applies. It depends, theoretically, on the "shapes" of the
incremental scatterers that reside within a resolution cell. If they were all roughly
spherical, which might be the case for some vegetation canopies, then one would
expect only a weak dependence on incidence angle. If they were shaped like facets
representing an undulating surface, then a stronger dependence would be seen.

• In words, describe what is meant by the gain of an antenna


The gain of an antenna tells us how much energy it will radiate in a given direction
compared with an isotropic (spherical) radiator. It will be dependent on the angle
with which the antenna is observed. Most antennas are designed to radiate
maximally in a given direction, and thus will have a high gain in that direction, but
they will also radiate in other directions, although at a much lower level. As
with radar cross section, the gain of an antenna can be plotted on polar coordinates
to demonstrate how it varies with angle from the antenna.

• By reference to the radar range equation, if the range is doubled how does that
affect the received power level?
Since the received power is proportional to the inverse fourth power of range, a
doubling of range leads to a sixteen-fold drop in received power.

Lecture 14. Speckle and an introduction to scattering mechanisms

• In the 6th slide of this lecture, describe how the pixel would appear in an image if it
included a large dominant scatterer, such as the building shown in slide 9

The radar cross section of the dominant scatterer will be larger than the scattering
coefficient multiplied by the size of the pixel, so that the pixel response becomes
essentially just that of the dominant scatterer. Among a group of pixels with the
same background, that pixel will show as a bright spot.

• The standard deviation of the speckle in a radar image, reduced by averaging over N
looks, is given by $\sigma(\mathrm{avge}) = \dfrac{\sigma(\mathrm{raw})}{\sqrt{N}}$, where $\sigma(\mathrm{raw})$ is the standard deviation of the
speckle in the image as recorded (i.e. the raw image). Note here $\sigma$ is the symbol for
standard deviation and not scattering coefficient.

The diagram below shows a pixel composed as the average of four raw recorded
pixels. Discuss what it implies, including the levels of speckle noise.

[Diagram: one square output pixel formed by averaging four adjacent raw pixels in the azimuth direction, each with an azimuth dimension one quarter of the ground range dimension.]

This diagram tells us that the radar system was designed as a four (azimuth) look
system. The azimuth resolution was designed to be four times finer than the
ground range resolution, so that the four pixels above could be averaged in azimuth
to produce one pixel with a square shape. In doing so, the four looks that are
averaged reduce the speckle noise (standard deviation) by one half.
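
A quick simulation of the 1/√N speckle reduction, modelling single-look intensity speckle as exponentially distributed (an assumption appropriate for fully developed speckle; not part of the course material):

    import numpy as np

    rng = np.random.default_rng(1)

    # Single-look intensity speckle modelled as exponential with unit mean.
    raw = rng.exponential(scale=1.0, size=(100_000, 4))

    single_look_std = raw[:, 0].std()        # ~1.0
    four_look_std = raw.mean(axis=1).std()   # ~0.5, i.e. reduced by 1/sqrt(4)

    print(single_look_std, four_look_std)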

Lecture 15. Radar scattering from the earth’s surface

• In the 5th slide the formula for reflection coefficient is given for both horizontal and
vertical polarisation. For vertical incidence show that polarization dependence
disappears.
For $\theta = 0$, $\cos\theta = 1$ and $\sin\theta = 0$. Thus we have
$\rho_H = \dfrac{1 - \sqrt{\varepsilon_r}}{1 + \sqrt{\varepsilon_r}}$ and $\rho_V = \dfrac{\sqrt{\varepsilon_r} - \varepsilon_r}{\varepsilon_r + \sqrt{\varepsilon_r}} = \dfrac{\sqrt{\varepsilon_r}\,(1 - \sqrt{\varepsilon_r})}{\sqrt{\varepsilon_r}\,(\sqrt{\varepsilon_r} + 1)} = \dfrac{1 - \sqrt{\varepsilon_r}}{1 + \sqrt{\varepsilon_r}}$.
Thus both reflection coefficients are the same, demonstrating
that there is no polarization difference at vertical incidence.
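
A numerical check, using the reflection coefficient expressions as reconstructed above (the sign convention for vertical polarization is assumed from the lecture) and a hypothetical dielectric constant:

    import numpy as np

    def rho_h(theta, eps_r):
        # Horizontal-polarization reflection coefficient of a smooth surface.
        s = np.sqrt(eps_r - np.sin(theta) ** 2)
        return (np.cos(theta) - s) / (np.cos(theta) + s)

    def rho_v(theta, eps_r):
        # Vertical polarization, with the sign convention assumed from the lecture.
        s = np.sqrt(eps_r - np.sin(theta) ** 2)
        return (s - eps_r * np.cos(theta)) / (s + eps_r * np.cos(theta))

    eps_r = 15.0     # hypothetical dielectric constant of a moist soil
    print(rho_h(0.0, eps_r), rho_v(0.0, eps_r))    # identical at vertical incidence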

• If you were interested in mapping surface roughness would you use small or large
incidence angles?
As seen in slide 8 there is a greater difference between the surface scattering
coefficients at larger angles of incidence. Therefore, larger angles would be
preferred. In addition, at smaller angles of incidence terrain distortion becomes a
problem which often masks (the small) surface roughness variations.

• Suppose the dielectric constant of a surface was close to unity. What does that say
about the reflection and transmission of the incident electric field (and thus power
density)?
If $\varepsilon_r = 1$, then from the formulas on slide 5,
$\rho_H = \dfrac{\cos\theta - \sqrt{1 - \sin^2\theta}}{\cos\theta + \sqrt{1 - \sin^2\theta}} = 0$ and $\rho_V = \dfrac{-\cos\theta + \sqrt{1 - \sin^2\theta}}{\cos\theta + \sqrt{1 - \sin^2\theta}} = 0$,
since $\sqrt{1 - \sin^2\theta} = \cos\theta$ and both numerators vanish. Thus there is no reflection from the surface; all incident energy
is transmitted into the medium.

Lecture 16. Sub-surface imaging and volume scattering

• In the last slide before the summary above, fence lines are evident in the image.
Why?
They are wire fences, acting as strong scatterers. There is however a cardinal effect
evident; the fences running roughly north-south are easier to discern.

• If the total path of travel through dry sand was equivalent to four penetration
depths, what is the power density scattered back to the air-sand interface compared
with the incident level of power density just under the interface?
The power density would have dropped to $1/e^4 \approx 1/55$ of its value just under the
interface.

• In slide 4 there seems to be other detail in the radar image adjacent to the paleo
river channel that also does not appear on the optical image. What might that detail
be?
Because that detail is not evident in the colour infrared image, which shows just the
sand sheet, it is below the sand. Typically, it would be the bedrock under the sand,
showing drainage patterns and surface roughness detail.

Lecture 17. Scattering from hard targets

• In slide 5 show that the two expressions are the same when 𝜃=45 degrees.
Physically, why should that be the case?
For $\theta = 45^\circ$ (i.e. $\pi/4$ radians), the $\sin^2(\theta + \pi/4)$ factor in the left-hand expression is 1. The $\sin^2\theta$
term in the right-hand expression is $\dfrac{1}{\sqrt{2}}\cdot\dfrac{1}{\sqrt{2}} = \dfrac{1}{2}$. With those values, both expressions are the same.
That happens because at 45 degrees the ground reflection of the vertical flat plate
in the right-hand diagram then has the same dimensions as that vertical element.

• How would the shapes of the curves in slide 8 be altered if they were computed for a
ship at sea, where the side of the vessel and the horizontal ocean surface form a
dihedral corner reflector?
They would not drop off at the higher angles of incidence because there is no
equivalent of canopy attenuation.

• There is an implicit assumption in slides 5, 6 and 7— that is, that the incoming radar
beam is square on to the corner reflector facets. What would happen if that were
not the case?
The equivalent radar cross section would drop significantly. Ray tracing shows that
the return ray is then not parallel to the incident ray and will not be received by the
radar.

Lecture 18. The cardinal effect, Bragg scattering and scattering from the sea

• Would scattering from ships at sea suffer the cardinal effect?


Yes, if they had an elongated shape.

• The figure below shows the cross-section of a bridge over a smooth water body. In
radar imagery it often shows up as three reflections as indicated. Why?

[Figure: cross section and view from above of a bridge over a smooth water body, with three reflections indicated.]


Each reflection corresponds to the reflected radar energy being received at different
times. Remember, that is how range detail is resolved in radar. The diagram below
shows three possible paths with different time delays, which could account for the
three different images of the bridge. The rays are intentionally drawn offset from
each other so the three mechanisms can be seen.

[Diagram: cross section and view from above showing the three ray paths: direct (single bounce) scattering, double bounce scattering and triple bounce scattering.]



Lecture 19. Geometric distortions in radar imagery

• In the phenomenon of layover demonstrated in slide 6 how important is the ground


range resolution compared with the height of the tower?
To see layover the projection of the tower onto the ground towards the radar has to
cover more than a single ground range resolution cell. Otherwise, all the tower detail
would be compressed into a single pixel and layover would not be observable.

• In slide 7 how would the back slope appear in a radar image if it was quite severe
such that it was hidden behind the summit of the mountain when viewed from the
radar?
The back slope would be in shadow and appear black in the radar image.

• In slide 7 how would the front slope appear in a radar image if it was a vertical cliff?
Layover would occur, but with scattering from the backslope mixing with that from
the cliff.

• In slide 9 there is a bright patch on the bottom right-hand corner of the top Seasat
image. What might that be?
It is an urban region (Harrisburg), not easily seen on the second Seasat image
because of the direction of illumination (cardinal effect).

Lecture 20. Geometric distortions in radar imagery, cont.

• Why are look angles in the mid-range of 35° - 50° good for most land applications in
radar remote sensing?
If the angle is too small, the surface scattering coefficient varies less with surface roughness than
it does at larger angles. Also, relief distortion is exaggerated at small angles.
At very large angles shadowing can be a problem, and all surface (and canopy)
responses fall off. So, a good compromise is mid-range angles.

• Why is a dihedral corner reflector not suitable as a control point?


Its backscatter response is only high when the incoming beam is exactly at right
angles to the two reflecting plates. If it is at an angle, the beam reflects away from
the direction of the radar, as seen in the following diagram.
[Diagram: a dihedral corner reflector formed by a vertical plate and a horizontal plate, with an obliquely incident ray and the reflected ray leaving in a direction away from the radar.]

• A particular imaging radar operates with 100ns long ranging pulses and a look angle
of 45o. If an active radar calibrator has an overall time delay of 500ns, how far will it
be displaced in the range direction in the recorded image?
The ground range resolution at a look angle of 45 degrees will be $c\tau/(2\sin 45^\circ) \approx 21$ m.
That corresponds to reflections from scatterers one resolution cell apart arriving 100 ns apart.
A reflection received 500 ns after the first will therefore appear 5 resolution
cells later in the range direction, which is the displacement of the ARC response.
Thus, on reception we know that the response came from an object placed 5
resolution cells earlier.
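
A short Python check of these numbers:

    import numpy as np

    c = 3.0e8                        # m/s
    tau = 100e-9                     # ranging pulse length, s
    look_angle = np.radians(45.0)
    delay = 500e-9                   # active radar calibrator time delay, s

    r_g = c * tau / (2.0 * np.sin(look_angle))             # ground range resolution, ~21 m
    cells = delay / tau                                    # displacement in resolution cells
    ground_shift = c * delay / (2.0 * np.sin(look_angle))  # displacement on the ground, ~106 m

    print(r_g, cells, ground_shift)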

Lecture 21. Radar interferometry

• How do the round-trip path lengths differ between ping pong and standard modes of
InSAR operation? Assume that the distance to the surface is sufficiently large that
the beams from both antennas could be considered to be parallel.
To answer this question, we examine the situation close to the antennas, as below.

[Diagram: radar 1 and radar 2 separated by the baseline, with parallel rays to the surface; in standard mode the extra path length travelled by the beam from the second antenna appears once, while in ping-pong mode it appears twice.]

In standard mode only one antenna transmits, so the extra path is incurred on reception only; in ping-pong mode each antenna transmits and receives its own signal, so the extra path is incurred on both transmit and receive, and the round-trip path difference is therefore twice that of standard mode.

• How is the so-called orthogonal baseline B⊥ related to the physical baseline B? Is it
better to have a large or small orthogonal baseline?
The orthogonal baseline is simply the projection of the physical baseline at right
angles to the transmitted ray. The sensitivity of the phase difference between the
two received signals depends directly on the orthogonal baseline, so it is good to
have that large.
[Diagram: radar 1 and radar 2 separated by the baseline B, with the orthogonal baseline B⊥ shown as the component of B at right angles to the look direction.]

• For the ERS-1 example on slide 4, what height variation corresponds to one full 2𝜋
cycle of phase variation?
From the result at the bottom of the slide 0.17rad of phase change corresponds to a
metre of height variation. Therefore 2𝜋 change in phase will be caused by a height
variation of 2𝜋/0.17 =37m.

Lecture 22. Radar interferometry for detecting change

• In the example on slide 2 what vertical change does the example represent, if the
phase change was due to vertical movement alone?
For a change in slant range of Δr = 14 mm the corresponding change in ground
range is seen on the slide to be 14 sin 23° = 5.5 mm. The corresponding change in
height would be 14 cos 23° = 12.9 mm, if it were a height variation that led to the
change in slant range.

• How should the temporal baseline in along-track interferometry relate to the time of
occurrence of the change of interest?
The temporal baseline is the time between the two acquisitions used to form the
along-track interferometer. The change of interest in the scene has to have
occurred between the two acquisitions.

• By looking at slide 2, is the degree of phase change due to range movement


dependent on the magnitude of the temporal baseline?
No; it just depends, as above, on the change happening within the period of the
temporal baseline.

Lecture 23. Some other considerations in radar remote sensing

• In radar tomography of forests would it be better to use a long or short imaging


wavelength?
We need to use long wavelengths so there is some canopy penetration. Otherwise
vertical detail within the canopy cannot be resolved.

• Why are there three images of the Sydney Harbour Bridge in slide 5?
Each image corresponds to a reflection that arrives back at the radar receiver at a
particular time. If the arrival times differ for different reflections from the same
target, the target will appear several times in the same image. For the bridge, there are at least
three pathways the incident radiation can take on its way to and from the bridge:
a direct reflection from the bridge itself (the shortest path); a double
bounce reflection from the bridge to the sea surface and then back to the receiver,
or vice versa (a longer path); and a triple bounce involving bridge-water-bridge
on the way back to the receiver (the longest path).

• Why is one image highly detailed?


Because of the high spatial resolution of the spotlight mode radar.

Lecture 24. The course in review

• A set of logical rules for labelling a pixel can be put in the form

If radar imagery says X and optical imagery says Y then the pixel should be labelled Z

Devise suitable rules that involve both data types to differentiate among the
following cover types in the same scene. Assume the optical data consists of just a
visible red band and a near infrared band, and the radar data is HH polarized, L band
imagery.
Grassland
Desert sand
Smooth dry agricultural soil
Coniferous forest
Lake water

If radar indicates a smooth surface
and the optical IR response is much higher than the red response
then the pixel should be labelled grassland

If radar indicates a smooth surface
and the optical red and IR responses are roughly equal and both are high
then the pixel should be labelled desert sand

If radar indicates a smooth surface
and the optical red and IR responses are roughly equal and both are low
then the pixel should be labelled smooth dry agricultural soil

If radar indicates a strong dihedral response
and the optical IR response is much higher than the red response
then the pixel should be labelled coniferous forest

If radar indicates a very smooth surface
and the optical IR response is lower than the red response
then the pixel should be labelled lake water
