Contours and Contouring in Hydrography Part II - Interpolation
This paper has already been published in Lighthouse, Edition No. 30, November 1984 and is
reproduced here with the kind permission of the Canadian Hydrographers' Association.
ABSTRACT
In Part I of this series, the authors discussed those issues which we feel are
fundamentally important and which must be addressed by any method which aims
to mechanize the drawing of depth contours for hydrographic charts.
In this article we begin the discussion of the How of contouring. In particular,
we concentrate on some of the most common methods used in the interpolation of
the synthetic surface upon which computed contours will lie.
INTRODUCTION
(*) Planning & Development, Canadian Hydrographic Service, Department of Fisheries &
Oceans, 615 Booth Street, Ottawa, Ontario K1A 0E6, Canada.
(**) Canadian Hydrographic Service, Headquarters, Department of Fisheries & Oceans, 615
Booth Street, Ottawa, Ontario K1A 0E6, Canada.
Before becoming immersed in the details of interpolation, let's examine the
situation at a higher level. Figure 1 illustrates the main ideas behind interpolation.
Figure 1a shows a sequence of measured sounding profiles. This is the data from
which we wish to draw our contour map. One can imagine the contours as a
sequence of shoreline snapshots — each one taken with the water level at progressively
lower elevations. Figure 1b shows the situation at a particular water level, say
10 m below datum. The lower level problem is this — how do we connect up the
protuberances above each water level in a meaningful way ?
In order to draw contours we need to predict the behaviour of the contours
between the survey lines. To do so we want depth estimates at regular intervals
between the observations. The closer together the depth estimates, the smoother the
contours. Figure 1c illustrates one popular approach called gridding. In this
method, one drops a uniform grid over the survey area and, at the grid intersections
(called ‘nodes’), estimates depths by using the observed depths. How these
estimates are made is the crux of the matter.
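As a concrete illustration of the gridding step, the following minimal sketch (written here in Python with NumPy; the survey extents, spacing and variable names are purely hypothetical) lays a uniform grid of nodes over a survey area, ready to receive depth estimates :

    import numpy as np

    # Hypothetical survey extents (local coordinates, metres) and node spacing.
    x_min, x_max = 0.0, 5000.0
    y_min, y_max = 0.0, 3000.0
    spacing = 100.0

    # Node coordinates of the uniform grid dropped over the survey area.
    xs = np.arange(x_min, x_max + spacing, spacing)
    ys = np.arange(y_min, y_max + spacing, spacing)
    grid_x, grid_y = np.meshgrid(xs, ys)

    # Depth estimates are filled in node by node from the observed soundings;
    # how those estimates are made is the subject of the rest of this paper.
    grid_z = np.full(grid_x.shape, np.nan)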
The threading of the individual contour lines through the survey area can be
a straightforward procedure if the data is established on a uniform and tightly
spaced sampling plan. Contouring a typical field sheet for instance, where the
soundings are spaced every 5 mm at scale, is relatively straightforward and a set
of rules can be established to define the contouring procedure. When the data is
sparse, however, the rules become less meaningful and, as a consequence, more and
more judgement is called for. This becomes a case of interpretation, not interpolation.
Such procedures cannot, in general, be mechanized. To overcome the problem
of sparse or non-uniform sampling, researchers have found it expeditious to re-cast
the survey so that it would appear to have been sampled in a more convenient
manner. Density and uniformity are the two characteristics of the data which make
machine contouring more viable. The uniform grid is an obvious choice but others,
including triangulation schemes, are used in practice. We concentrate on gridding
because that is the technique with the widest usage and because, in the end, the
differences between gridding and its alternatives are often academic.
The actual contouring itself is done by threading the individual contours
through the grid. Once depth values have been established for each grid node, these
grid nodes can be used as gate-posts, allowing or denying access to the interior of
the grid box. If access is allowed, then progressively finer grids can be established
inside the main box in order to guide the course of the contour. In this way the
contour’s apparent smoothness is governed by the fineness of the gridding. Finer
gridding can improve the smoothness of the contours and make them appear more
realistic. Appearances can be deceiving, however. The accuracy of the contour
position is governed by the survey sampling resolution — not by the resolution of
the grid.
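To make the "progressively finer grids" idea concrete, here is a minimal sketch (Python/NumPy; the function name refine_cell and the corner values are illustrative only) of sub-dividing one grid box by bilinear interpolation. As noted above, this adds apparent smoothness, not positional accuracy :

    import numpy as np

    def refine_cell(z00, z10, z01, z11, n=5):
        # Bilinearly interpolate an n-by-n sub-grid inside one grid box whose
        # corner depths are z00 (lower-left), z10 (lower-right), z01 (upper-left)
        # and z11 (upper-right).  The sub-grid guides the contour's path but adds
        # no information beyond the four corner estimates.
        u = np.linspace(0.0, 1.0, n)
        uu, vv = np.meshgrid(u, u)
        return (z00 * (1 - uu) * (1 - vv) + z10 * uu * (1 - vv)
                + z01 * (1 - uu) * vv + z11 * uu * vv)

    # Example : a box whose corners straddle the 10 m contour.
    sub_grid = refine_cell(9.2, 10.4, 9.8, 11.1)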
There are essentially two main methods for making these estimates : point
models and area models. Point models estimate depths at fixed points in the area
such as at the grid nodes. Area models, on the other hand, estimate the surface
continuously over an area. That is, a smooth mathematical function (often a
two-dimensional polynomial) is fitted to the data points within an area. This
estimated surface then defines depth estimates at every point within the region over
which the surface is fitted. Thus the knowledge of the contours’ location is
continuous — resulting in very smooth contours. Of course, if the surface model
is wrong (i.e. if the fit is not very good) then the contours are also wrong.
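A simple example of an area model is a quadratic trend surface fitted by least squares. The sketch below (Python/NumPy; the function names are our own, not those of any commercial package) fits such a surface to scattered soundings and evaluates it anywhere in the region :

    import numpy as np

    def fit_quadratic_trend(x, y, z):
        # Least-squares fit of z = a + b*x + c*y + d*x**2 + e*x*y + f*y**2
        # to scattered soundings (x, y, z given as 1-D arrays).
        A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
        coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
        return coeffs

    def eval_quadratic_trend(coeffs, x, y):
        # Depth predicted by the fitted surface at any point in the region,
        # giving continuous (and therefore very smooth) contour locations.
        a, b, c, d, e, f = coeffs
        return a + b * x + c * y + d * x**2 + e * x * y + f * y**2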
We now examine these two methods in some detail.
POINT MODELS
NEIGHBOURHOODS
A typical field sheet contains about 20,000 soundings. How should we use this
vast volume of data ? In general, the soundings in the lower-left corner of the field
sheet cannot be used to predict the behaviour of the bottom portrayed in the
upper-right corner. In hand contouring, the hydrographer examines only those
soundings which sit close to the spot where the pen lies. We need some similar
mechanism to limit the blind inclusion of excessive and unconnected data.
Neighbourhoods are required to limit the number of data points included in
the linear combination used to estimate each node. In its simplest form the neighbourhood is defined to be a
circular area of user-set radius, centred upon the point to be estimated. Any
observations found within this area are included in the computation. Figure 2b
shows that six observations were found in the neighbourhood of the central grid
node. The observations are found by doing a search of the data record and
checking each point to see if it falls within the neighbourhood.
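In code, the simplest form of this search might look like the following sketch (Python/NumPy; a brute-force scan of the data record, with the radius left to the user) :

    import numpy as np

    def neighbourhood(obs_x, obs_y, node_x, node_y, radius):
        # Indices of observations falling inside a circular neighbourhood of
        # user-set radius centred on the grid node.  Every point in the data
        # record is checked in turn.
        dist = np.hypot(obs_x - node_x, obs_y - node_y)
        return np.nonzero(dist <= radius)[0]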
Unfortunately, defining the neighbourhood as some simple circular area
surrounding the grid node won’t always work. Figure 3 shows some of the
drawbacks of using a simple, pre-defined, static neighbourhood.
In Figure 3a we have the problem of sparse data. In order to include a
minimum number of observations in the calculation, some programs expand the
search circle in increments until either the required minimum number of points is
included or some maximum radius is achieved. Alternatively, if there are too many
points in the standard neighbourhood, the radius is reduced incrementally until the
maximum number of desired observations is achieved or until some pre-set
minimum radius is reached. Note that the neighbourhood in these cases is
independent of the value of the data and is dependent only on its spatial structure.
Fig. 3. — Neighbourhood problems.
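A sketch of such an adaptive search follows (Python; the point-count limits, radius limits and growth factor are arbitrary illustrative choices, and the function builds on the neighbourhood search sketched earlier) :

    def adaptive_neighbourhood(obs_x, obs_y, node_x, node_y, radius,
                               min_pts=4, max_pts=12,
                               r_min=50.0, r_max=2000.0, factor=1.25):
        # Grow the circle until enough points are found (or r_max is reached),
        # then shrink it if too many were found (or until r_min is reached).
        # The neighbourhood depends only on the spatial structure of the data,
        # never on the depth values themselves.
        idx = neighbourhood(obs_x, obs_y, node_x, node_y, radius)
        while len(idx) < min_pts and radius < r_max:
            radius = min(radius * factor, r_max)
            idx = neighbourhood(obs_x, obs_y, node_x, node_y, radius)
        while len(idx) > max_pts and radius > r_min:
            radius = max(radius / factor, r_min)
            idx = neighbourhood(obs_x, obs_y, node_x, node_y, radius)
        return idx, radius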
In the depth estimation process, weights are applied to the observed data to
allow observations of higher “quality” to have a greater influence than points of
lesser “quality”. This quality feature can refer to the relative quality of the various
depth measurements, but is usually used as a means of ensuring that “closer”
observations have a higher weight than observations farther away. In this restrictive
sense, quality is a function of the distance between the observations and the grid
node to be estimated. The fact that closer observations should have a higher weight
than ones farther away is initially appealing, but is not universally applicable. This
can be seen in Figure 3b where the geometrically closest observation is hidden
behind a barrier. Hence a more sophisticated distance-weighting scheme is
required.
The distance weight can be simply the inverse-distance between the observations
and the grid node to be estimated but, usually, the inverse-square of the
distance is used. This ensures a faster drop-off of the influence with distance. This
inverse-square weighting is commonly referred to as the “Gravity Model” — the
relevance being the inverse-square relationship between two bodies in Newton’s
Universal Law of Gravitation. Note, again, that the weighting is independent of the
value of the observations but is based on the spatial relationship alone.
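As a sketch (Python/NumPy; the coincidence guard and the default power are our own choices), the inverse-square weighted average at one grid node might be computed like this :

    import numpy as np

    def gravity_model_estimate(obs_x, obs_y, obs_z, node_x, node_y, power=2):
        # Inverse-distance weighted average ('Gravity Model' when power = 2).
        # The weights depend only on geometry, never on the observed depths.
        dist = np.hypot(obs_x - node_x, obs_y - node_y)
        if np.any(dist < 1e-6):
            # The node coincides with an observation : return it directly.
            return obs_z[np.argmin(dist)]
        weights = 1.0 / dist**power
        return np.sum(weights * obs_z) / np.sum(weights)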
Figure 4b illustrates another problem associated with simple distance
weighting — trends in the data. The data points on the left hand track will clearly
have a greater influence on the estimate than those on the right hand track.
Suppose that there is a left-right linear trend to the data with the soundings on the
left considerably deeper than those on the right. Then the estimated depth at the
point indicated will be deeper than it should be. Weighted averaging systems have
no explicit way of handling data which has clear trends in it. Such data sets are
better handled by contouring algorithms which model these trends.
Clustering of data points can also place emphasis on the wrong data. This
situation is shown in Figure 4c. The data points to the upper right will have a
greater effect on the estimate than the lone point to the left — yet this point should
be included in any estimate. Why ? Because, firstly, it is closer than the other points,
and secondly, it acts as a means of determining trend.
Data shielding is also an important consideration. If one were to hand-
contour the data shown in Figure 4d, one would not consider the values of the data
points in the line to the right of the right-hand sounding, since that sounding shields
the line data. A simple weighted average, however, would include them, and this would be inappropriate. For instance, if
the track data was considerably deeper than the other two soundings, then the
interpolated depth would be influenced by the nearness of these deep soundings.
This would result in a depth estimate which is too deep — thus showing a
depression where one probably does not exist. To get around this problem, one can
apply a second level of weighting — directional weighting. In this case, the
algorithm must seek out observations within the neighbourhood which are shielded
by other observations. This can be accomplished by examining the spatial
relationships of the observations vis-a-vis the grid node, determining the associated
angles and de-weighting any observations which fall within the shadow cast by
closer observations. Many modern contouring programs feature such directional
weighting automatically.
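One possible (and deliberately simple) sketch of such directional de-weighting is given below; the shadow half-width and the de-weighting factor are arbitrary illustrative values, not those of any particular program :

    import numpy as np

    def shadow_weights(obs_x, obs_y, node_x, node_y, half_width=np.radians(15.0)):
        # Down-weight any observation that lies, as seen from the grid node,
        # within the angular 'shadow' cast by a closer observation.
        dx, dy = obs_x - node_x, obs_y - node_y
        dist = np.hypot(dx, dy)
        bearing = np.arctan2(dy, dx)
        weights = np.ones_like(dist)
        order = np.argsort(dist)                  # nearest observation first
        for rank, i in enumerate(order):
            for j in order[:rank]:                # every closer observation
                diff = np.abs(np.angle(np.exp(1j * (bearing[i] - bearing[j]))))
                if diff < half_width:
                    weights[i] *= 0.1             # shielded : reduce its influence
                    break
        return weights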
Another of the consequences of simple weighted averages is the fact that each
observation is considered as a local extremum. That is, each observation is
considered to lie on either the peak of a local hummock or the pit of a local
depression. This is a direct fall-out from the use of the weighted average. The grid
estimates will always be bounded by the observations. One cannot estimate a value
deeper than the deepest observation within the neighbourhood and neither can one
estimate one shallower than the shallowest. The effect of this is most apparent
when a rugged area is contoured at a close contour spacing. The observations are
all ringed by contours. This might make sense for topographic surveying where
observations are taken upon the local extrema, but it would never make sense in
hydrography where we never see the surface we are mapping and consequently the
chances of occupying a local peak or pit are slight.
Including some slope information is the way to get around this particular
problem. But slopes are not observable in hydrography, so they have to be inferred.
Geomorphologists use external information on the surrounding geology and
geomorphology to help them create models for the unseen surface. If this
information is not available, then the observations alone must be used to estimate
the slopes. Essentially this involves calculating the slopes from the differences in
depth of the observations. Several of the contouring packages available
commercially offer slope estimation as a program option. Once slope information is
available, it can be used to predict extrema other than at the observation points.
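A very simple sketch of inferring slope from the observations alone is a local plane fit (Python/NumPy; the function is illustrative rather than any package's actual option). Because the fitted plane is evaluated at the node, the estimate is no longer bounded by the observed depths, so extrema need not sit on the soundings :

    import numpy as np

    def plane_slope_estimate(obs_x, obs_y, obs_z, node_x, node_y):
        # Fit a local plane z = a + b*x + c*y to the neighbourhood observations
        # and evaluate it at the grid node.  The returned slope magnitude can
        # also be carried forward as auxiliary information for the contouring.
        A = np.column_stack([np.ones_like(obs_x), obs_x, obs_y])
        (a, b, c), *_ = np.linalg.lstsq(A, obs_z, rcond=None)
        slope = np.hypot(b, c)
        return a + b * node_x + c * node_y, slope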
AREA MODELS
QUADRATIC TREND
Fig. 9. — Problems in extrapolation (profile regression).
It will come as no surprise that one method is not clearly superior to another.
Point models offer safer interpolation in areas where the bottom undulates
randomly about some near-constant level whereas area models are safer where the
bottom has a clear trend. Area models are cosmetically cleaner and more efficient
in storage space and in execution - but these are, by and large, irrelevant issues
for hydrographers who have the time and equipment to do the job right. One can’t
help but feel uneasy about fitting smooth functions to surfaces which are by nature
rugged and unpredictable. Point models can handle the ruggedness but are
defeated by trends. Clearly some combination of the two techniques might be the
ticket; an area model to detect and model the trends and a point model to work
on the residual surface which rides on top of the trend. This is the essence behind
universal kriging — a topic beyond the scope of this particular paper. There are two
other techniques, however, which do, to some degree, incorporate features from
both area and point models, namely triangulation and parallel line techniques. We
investigate those now.
TRIANGULATED NETWORKS
Many, many techniques have been developed to generate the grid estimates
upon which the contour placement will be based. Literally dozens are available,
each one considered optimal in one way or another by its author. Some are
designed to exploit features inherent in the data itself or in the geometric structure
of the sampling program. For instance, some surfaces are very smooth and slowly
changing — such as our perception of a gravity surface. Others, including
bathymetry, can be very rugged. Some surveys are very dense and regular — like
that of Gestalt Orthophotography — while others, like borehole surveys, are often
sparse and irregular. A technique developed particularly for one type of data will
not necessarily perform as well on another, radically different, type of data. A
technique developed for a large main-frame computer will not perform well — or
at all — on a medium-size mini-computer. Storage and processing speed are two
of the chief considerations which many designers hold supreme.
With such a variety of programs and techniques available, it is not surprising
that a certain amount of controversy is apparent in the literature as to which
technique is really the best. One of the most frequently debated items is the grid
versus triangulated irregular network (TIN). The grid, as we have already discussed,
is a square mesh applied over the measured surface with the non-regularly-spaced
observations being used in some interpolation procedure to derive estimates at the
mesh nodes. A TIN, on the other hand, joins together the observations into a
triangulated network (see Figure 10) and then interpolates. New estimates inside
the network are interpolated by sub-dividing the network triangles into a series of
smaller clone triangles. Depth estimates are then made at each of the new vertices.
TINs are particularly appealing to hydrographers because they honour the data.
We have previously argued (Monahan and Casey, 1983) that honouring is an
important issue. We included it in our “Musts and Wants” list as a must. We can
now differentiate between two kinds of honouring : strict honouring and weak
honouring. By strict we mean that the observed data points are honoured in the
strict mathematical sense. The observations lie on the interpolated surface. Weak
honouring implies that the observations do not lie on the surface but lie so close
as to appear, for all intents, to be on the surface. This might seem like a minor
academic point but, in fact, is quite important. The important card triangulation
proponents have to play is that their technique strictly honours the observations
and most other systems do not. We do not differentiate between strict and weak
honouring, since to do so would be to unfairly categorize systems on what we feel
is a minor technicality. Most gridding systems do not honour the observations in
either sense and so, for hydrographic purposes, are suspect.
Interpolation in triangulation can be relatively straightforward. In its simplest
guise, the three data points at the vertices define a planar surface. Contour location
on this planar surface is then linearly interpolated (See Figure 11). The survey area
can be imagined as being built up with a network of triangular facets, like, say, a
geodesic dome. The contours drawn on such a surface have a very characteristic
“angular” look. This is a consequence of the discontinuity of slopes which occurs
at each triangular boundary. Much more elaborate and sophisticated techniques
are also available to overcome some of these limitations.
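For the planar-facet case, the linear interpolation of a contour across one triangle can be sketched as follows (Python; vertices are (x, y, z) triples, and the degenerate case of a vertex lying exactly on the contour level is ignored for brevity) :

    def contour_segment_in_triangle(p1, p2, p3, level):
        # Where does the contour of a given depth level cross this planar facet ?
        # An edge is crossed when the level lies strictly between its end depths;
        # the crossing point is then found by linear interpolation along the edge.
        # Returns a list of zero or two (x, y) points defining the segment.
        crossings = []
        for (xa, ya, za), (xb, yb, zb) in [(p1, p2), (p2, p3), (p3, p1)]:
            if (za - level) * (zb - level) < 0:
                t = (level - za) / (zb - za)
                crossings.append((xa + t * (xb - xa), ya + t * (yb - ya)))
        return crossings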
One of the most sophisticated of the triangulation algorithms is due to Akima
(Akima, 1978). This algorithm applies a 5th order bivariate polynomial to the
surface of each of the observation triangles. First and second partial derivatives are
computed at each of the vertices and directional derivatives computed along each
side to ensure the smoothness within the triangles and along the triangle’s borders.
These measures ensure that the strong contour angularity, which is so characteristic
of triangulation schemes, is minimized. Akima's algorithm is also designed to
suppress the unsubstantiated undulations or ringing effect which the fitting of
polynomials usually involves.
Triangulation systems are not without their problems. A re-examination of
Figure 10 will show that many other networks could be established from the same
data points. Early generation triangulation schemes did not address this problem
to any great extent, so the impression has grown that these schemes do not have
unique ways of defining the network. If different networks are used, the resulting
surfaces can look very different. Primitive triangulators used the order that the data
was entered as a guide to establishing the network. If the data was re-ordered, then
a different set of contours would be derived. This is clearly distressing. Such
incongruities have not helped the proponents of machine contouring in promoting
the use of computers in, what is for many, the final and most visible outcome of
their work. Fortunately, many researchers have been working on this problem. The
result is that there now are standards for the definition of triangulation networks.
The de facto standard is known as Delaunay Triangulation (Sibson, 1978).
One popular method for achieving Delaunay triangles is to first form a set
of polygons (called Thiessen Polygons) from which the triangles can be formed.
This is known as Dirichlet Tessellation (Green and Sibson, 1978). See Figure 12.
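Where a modern scientific library is available, both constructions are readily obtained. The sketch below (Python with scipy.spatial; the scattered points are randomly generated stand-ins for real soundings) builds the Delaunay network and its dual Thiessen (Voronoi) polygons :

    import numpy as np
    from scipy.spatial import Delaunay, Voronoi

    # Stand-in for the horizontal positions of scattered soundings.
    points = np.random.default_rng(0).uniform(0.0, 1000.0, size=(30, 2))

    tin = Delaunay(points)        # the Delaunay triangulated irregular network
    triangles = tin.simplices     # vertex indices of each network triangle

    thiessen = Voronoi(points)    # the dual Dirichlet (Thiessen) tessellation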
TINs and Thiessen networks have attracted much attention in the recent
literature and are the subjects of continued study but are relatively rare in
commercial usage (Sallaway, 1981).
Can triangulation schemes do the job in hydrography ? Yes, in some specific
cases. The vast amounts of data produced by area-mapping systems such as the
Navitronix or the Larsen could be successfully contoured using a triangulation
scheme if (and only if) sufficient controls were added to ensure a bias for safety.
This could be feasible.
If the object was to contour an existing digital field sheet using only the data
presented there, then a triangulation scheme would be superior to a gridding
scheme. But there is a far better way to contour digital hydrographic data and that
is by using all of the observed depths — not just the ones portrayed on the field
sheet. We investigate one method now which does just that.
PARALLEL-LINE DATA
All the contouring techniques we have discussed above assume that the data
is irregularly and randomly distributed throughout the survey area. Indeed, most
users of machine contouring packages have data which is in this form.
Hydrographic data, on the other hand, is blessed with a very important characteristic —
continuity of information along the sounding lines. This feature can, and should,
be exploited.
Most users of contouring packages have to be satisfied with interpolated
contours. In our case, however, the contour's position is known along the sounding lines
because we measured the depth there. This measured position can then be used to anchor
the contour's position as it intersects each sounding line. These points, commonly
known as contour intercepts, are as well known as any of the soundings we normally
plot on our field sheets. Thus only the contour’s path across the inter-line zone has
to be interpolated. The use of contour intercepts is the modus operandi of
geomorphologists who create bathymetric charts such as edition V of GEBCO. Its
use in conventional hydrography can be traced to Quirk (Quirk, 1966).
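A sketch of extracting contour intercepts from one digital sounding profile follows (Python/NumPy; along_track is distance along the line and depths the measured profile, both hypothetical names) :

    import numpy as np

    def contour_intercepts(along_track, depths, level):
        # Along-track positions at which the measured profile crosses the given
        # contour level.  Each crossing is located by linear interpolation
        # between the two soundings that straddle the level.
        intercepts = []
        for i in range(len(depths) - 1):
            z0, z1 = depths[i], depths[i + 1]
            if (z0 - level) * (z1 - level) < 0:
                t = (level - z0) / (z1 - z0)
                intercepts.append(along_track[i]
                                  + t * (along_track[i + 1] - along_track[i]))
        return np.array(intercepts)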
We can also exploit the parallel line nature of our sounding lines in
determining the contour. To control the contour’s position between the lines
requires an interpolated grid of one kind or another. The sounding lines can be
used to establish this grid. Consider the situation shown in Figure 13. In this case
we have a series of sounding lines crossed by a set of uniformly spaced parallel
lines. These lines will form the column lines in the regular grid we are about to
construct. At each point where the column lines intersect the sounding lines the
digital data record is searched for the appropriate depth associated with that
position. These cross-over points are called the intersection nodes. These nodal
soundings are then used in the generation of the grid estimates.
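A sketch of recovering the nodal soundings along one survey line (Python/NumPy; np.interp does the along-line interpolation, and the profile values shown are invented for illustration) :

    import numpy as np

    def intersection_node_depths(along_track, depths, column_positions):
        # Depths at the intersection nodes, i.e. where the uniformly spaced
        # column lines cross this sounding line.  The digital along-track record
        # (along_track, depths) is interpolated at each crossing position.
        return np.interp(column_positions, along_track, depths)

    # Example : column lines every 200 m along a hypothetical 1 km sounding line.
    track = np.array([0.0, 130.0, 260.0, 420.0, 610.0, 800.0, 1000.0])
    z = np.array([12.1, 11.8, 10.4, 9.7, 10.2, 11.5, 13.0])
    nodal_depths = intersection_node_depths(track, z, np.arange(0.0, 1001.0, 200.0))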
Fig. 13. — Survey lines (plan).
SUMMARY
ACKNOWLEDGEMENT
(*) This same reluctance apparently does not hold for positions, which are smoothed; tides, which
are modelled; speed of sound, which is averaged; or heave, which is filtered.
REFERENCES