
Modelling and identification of characteristic intensity variations

Karl Rohr

An approach is introduced for the modelling and identification of a certain class of characteristic intensity variations resulting essentially from polygonal structures of a depicted 3D-scene. We develop a general analytical model for the structural grey-value transitions in an image which is the superposition of elementary model functions. Special cases of this general model are the grey-value variations of step edges, grey-value corners (L-junctions), T-, Y-, ARROW-junctions, and all other junction types represented in the labelling system of Waltz1. It will be shown that this parametric model agrees fairly well with real image intensities. Estimates of the unknown model parameters are found by minimizing the sum of squared differences between the model and the observed grey values. The approach has been tested on real image data.

Keywords: model-based recognition, corners, junctions

MOTIVATION

The robust and precise recognition of characteristic (or prominent) intensity variations, such as grey-value corners and T-junctions, is important both for the visual perception of humans and for the automatic interpretation of images by computers. The more robustly and precisely those structures are derived from images, the more reliably can objects in the 3D-scene be described and recognized, and the 3D-scene be reconstructed.

Although many image analysis algorithms use prominent features as picture domain cues, there are hardly any detailed investigations of the behaviour of grey-value transitions in the vicinity of such two-dimensional grey-value structures. It is well known that edge detectors incorporating the (implicit or explicit) assumption of one-dimensional grey-value transitions (e.g. Canny3) have difficulties when the grey-value structure under consideration has variations in two dimensions (see, for example, Li et al.4). In many cases, the detected edge contours are interrupted at such positions. Especially these image features with a high content of information are in general poorly recovered. Often, rather heuristic approaches are used in a subsequent step to fill in missing contour points. Local operators for recovering two-dimensional intensity variations in many cases fail to infer the actual structure. To overcome this problem Lowe, for example, suggests a global search as opposed to local operations in the neighbourhood of each potential corner, i.e. to combine local and global information to recover characteristic intensity variations. Another problem arises from the usage of a Gaussian filter for noise reduction. Depending on the width of the filter, the blurring of the grey-value image leads in general to a displacement of the prominent structures (in some cases structures can disappear or new structures can even be created - see Damone). To cope with the tradeoff between noise reduction and localization, one can take into account the dependency of the displacement on the width of the filter (see, for instance, Bergholm). This can hardly be done without an explicit model of the characteristic grey-value structures.

In order to treat the problems mentioned above, we introduce in this contribution a model-based approach to recognize a certain class of characteristic grey-value variations. This class of grey-value transitions can be characterized in the image by several regions of constant image intensities joining in one prominent point, where the contours (grey-value edges) of the adjacent regions meet. It is supposed that the grey-value edges of such junctions are approximately straight lines in the vicinity of this point, i.e. the image intensities represent essentially polygonal structures of the scene. Furthermore, we assume symmetric grey-value transitions between two adjacent regions. This means that the grey-value transitions are (approximately) symmetric with respect to the line of steepest slope (locus of points where the gradient is maximal).

Institut für Algorithmen und Kognitive Systeme, Fakultät für Informatik der Universität Karlsruhe (TH), Postfach 6980, 7500 Karlsruhe 1, Germany
Paper received: 31 August 1990; revised paper received: 17 April 1991

0262-8856/92/002066-11 © 1992 Butterworth-Heinemann Ltd
66 image and vision computing
We also assume the prominent grey-value variations to be represented by a sufficiently large number of pixels. This implies a picture function of sufficient resolution. The considered grey-value structures should also be well isolated, so that no interaction effects from other features take place. Specularities in the image are not considered in the present contribution. It seems that the assumptions we make are very restrictive. However, most existing approaches make implicit use of them, producing inaccurate results when the assumptions are not met in the image.

In order to model the proposed class of grey-value transitions, we develop a general analytical model. It will be shown that the structural grey-value variations of step edges, grey-value corners (L-junctions), T-, Y-, ARROW-junctions and all other junction types represented in the labelling system of Waltz1, which is an expansion of the labelling system developed by Clowes and Huffman, are special cases of this general parametric model. The model will be used for identification, that is for the precise recognition of essential properties of characteristic intensity variations in the image.

RELATED LITERATURE

Examples of characteristic grey-value variations are grey-value corners (L-junctions; see, for example, Nagel). In the present paper, the term grey-value corner denotes the grey-value transitions in a sufficiently large area around the prominent point of the grey-value corner. This grey-value corner point is usually defined as the point of maximal planar curvature in the line of steepest grey-value slope. Corner detection can simply mean to localize this prominent point or can, in addition, include the determination of inherent attributes. For this task, different methods are described in the literature. One can distinguish between direct and indirect methods. Indirect methods first detect edge points in the image. Then the corner point is taken to be the point of intersection of straight lines fit to edge points. Alternatively, the corner points are identified with local curvature extrema on a contour chain. Methods operating directly on images define the corner point as local maxima of values, determined either by combinations of partial derivatives of the picture function (e.g. see References 12-13), or by applying other masks directly on the image intensities (e.g. see References 16 and 17). In most approaches there is an underlying (implicit or explicit) model of the grey-value corner.

In order to illustrate why it is difficult to compute a description of an image with measurements that are not directionally selective, Marr uses a 'corner-shaped' mask with aperture of 90°. The model we propose can be adapted to real corners appearing in any arbitrary orientation and is therefore, in the sense of Marr, directionally selective.

Bergholm locates edges by tracking edge points from coarse to fine resolution along the direction in which they are displaced when blurred with Gaussian masks of different width. He uses an explicit model of grey-value corners (L-junctions) to evaluate the displacement of the corner point depending on the mask size. This model is the convolution of an ideal wedge-shaped structure (characterized by the aperture of the sector and the height of the wedge) with a Gaussian function. The analytical corner model is similar to that of Berzins, who employed his model to analyse the accuracy of Laplacian edge detectors. The results of Bergholm are superior with respect to Canny3. However, in the vicinity of grey-value corners and T-junctions, for example, there are still gaps. Thus, junctions are not recovered as desired. In the present contribution we use a similar mathematical model for grey-value corners to Berzins and Bergholm. We compare this model directly with the observed image intensities to adapt the model to individual grey-value structures. The result of this process provides not only an estimate for the position of the corner point, but also determines all other parameters characterizing the corner model. Even if the image is smoothed to reduce the noise, an estimate of the position of the corner point in the unblurred image is found without explicit tracking.

In order to derive a corner detector, Rangarajan et al.17 extend the approach of Canny3 for detecting one-dimensional grey-value transitions to two-dimensional features. They derive two functions, one describing the corner detector inside and the other outside the sector of the underlying corner model. Contrary to Rangarajan et al.17, in the present contribution the analytical description for recognizing grey-value corners consists of just one function which characterizes the whole area around the corner point. By choosing suitable model parameters, all corners belonging to the proposed class can be modelled.

Guiducci characterizes corners by three parameters: amplitude, aperture and smoothness of the ideal wedge. Based on the grey-value corner characterization of Dreschler and Nagel he derives analytical expressions for these three parameters to estimate them from image data. In the present paper we introduce a grey-value corner model which is characterized by seven parameters: position of the corner point (2); orientation (1); aperture (1); intensities of the grey-value plateaus (2); and the strength of the blurring by a Gaussian function (1). All seven parameters are determined by fitting this parametric model to the observed image intensities. By this, we have a measure of how well the estimated model approximates the actual data. All parameters are assigned real numbers. So the position of the corner point is determined to subpixel accuracy and indicates the origin of the ideal (unsmoothed) grey-value structure. Moreover, this model is a special case of a more general model for characteristic grey-value structures introduced in the next section.

MODELLING OF CHARACTERISTIC INTENSITY VARIATIONS

A digital picture, recorded by an imaging sensor, is not only disturbed by noise but is also band-limited in the frequency range. Due to the recording process, sharp transitions (e.g. step edges) are rounded-off and corrupted by noise. In order to reduce the noise one can apply a Gaussian filter. The advantages of such an operation are described elsewhere (see, for example, Marr and Hildreth). However, the application of a


Figure 1 (left). Wedge shaped grey-value structure (ideal grey-value corner)
Figure 2 (right). Grey-value corner model
Figure 3. Grey-value corner as proposed by Nagel
Figure 4 (left). Ideal T-junction
Figure 5 (right). Blurred ideal T-junction of Figure 4
Figure 6 (left). Sketch of a continuous T-junction in Nagel23
Figure 7 (right). Real T-junction

Gaussian filter smooths both the noise and the structural intensity variations. The blurring leads to a further rounding-off of the original sharp transitions. If the signal-to-noise ratio is large, the blurred step edge can be interpreted as an ideal sharp step edge convolved with a Gaussian filter. We can also extend this interpretation to other grey-value transitions in the image.
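Under this interpretation, a blurred step edge has the closed form a0 + a·φ((x − x0)/σ), with φ the Gaussian error function used later in the paper. The following minimal Python sketch (function names are ours, not the paper's) cross-checks that closed form against a brute-force discrete convolution of the ideal step with a Gaussian:

```python
import math

def phi(x):
    # Gaussian error function: integral of the unit Gaussian up to x
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def step_edge(x, x0, a0, a, sigma):
    # closed form of an ideal step (plateau a0, jump of height a at x0)
    # convolved with a Gaussian of width sigma
    return a0 + a * phi((x - x0) / sigma)

def blurred_step_numeric(x, x0, a0, a, sigma, h=0.01, span=6.0):
    # brute-force discrete convolution of the ideal step with the Gaussian
    total = 0.0
    n = int(2 * span * sigma / h)
    for i in range(n + 1):
        xi = x - span * sigma + i * h
        ideal = a0 + (a if xi >= x0 else 0.0)
        g = math.exp(-(x - xi) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)
        total += ideal * g * h
    return total
```

Both evaluations agree closely, which is the one-dimensional special case of the corner model developed below.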

Experimental model

Analogously to the modelling of a step edge described above, a grey-value corner (L-junction) can be modelled by convolution of a wedge shaped structure (ideal grey-value corner, 'piece of cake') shown in Figure 1 with a two-dimensional Gaussian function (see Figure 2). The aperture of the wedge in this example is 90°. The size of the Gaussian filter, specified by the parameter σ, determines the smoothness of the blurred wedge. Comparison of Figure 2 with the qualitative model of Nagel, depicted in Figure 3, shows that the two models agree (see also Dreschler22).

Similarly to modelling step edges and grey-value corners, we can model intensity variations consisting of three adjacent regions. Figure 5 shows a T-junction obtained by convolution of the structure in Figure 4 with a Gaussian filter. If we compare this synthetic T-junction with a real T-junction (Figure 7 is the original image and Figure 8 shows the 3D-plot belonging to this T-junction), or with the blurred version in Figure 9, we see that the structures are very similar. In addition, Figure 8 suggests that the structural grey-value variations of this T-junction are more suitably described by a continuous function than by discrete description elements. The qualitative behaviour of this real T-junction agrees with the sketch of a continuous T-junction in Nagel23 (see Figure 6).

By superposition of a number n of wedge shaped structures like in Figure 1 (with arbitrary aperture and intensity) and subsequent Gaussian filtering, we can model arbitrarily complex structures of the proposed class of grey-value structures. The models can be created in any orientation by carrying out an additional rotation.

Figure 8 (left). 3D-plot of the T-junction in Figure 7
Figure 9 (right). Blurred real T-junction of Figure 8

Analytical model

In the preceding section, the grey-value structures were modelled by two separate steps: composition of ideal


wedge shaped structures, and subsequent smoothing with a Gaussian filter. Indeed, these two steps can be combined by just one analytical function.

Two adjacent regions (step edge, grey-value corner)

An ideal grey-value corner E(x, y) (wedge shaped structure) can be characterized, as shown in Figure 10, by three parameters: the aperture β, the amplitude a and an angle of rotation α which denotes the orientation of E(x, y) with respect to the x-axis. Let the local Cartesian coordinate system (ξ, η) be aligned in such a way that the sector of the wedge is symmetric to the ξ-axis. Then this sector is divided into two parts with aperture β/2. Let the origin of the coordinate system (origin of the grey-value corner, corner point) be the point (x0, y0). Convolution of E(x, y) with a two-dimensional Gaussian function G(x, y) results in the model of a grey-value corner (L-junction) given by g_ML(x, y) = g_ML(x, y, x0, y0, α, β, a, σ):

g_ML(x, y) = E(x, y) * G(x, y) = ∫∫ E(ξ, η) G(x − ξ, y − η) dξ dη   (1)

In the following we derive an analytical expression for g_ML(x, y, β, a, σ), that means we consider the problem in the local coordinate system (x0 = y0 = α = 0). The grey-value corner in the absolute coordinate system g_ML(x, y, x0, y0, α, β, a, σ) can then be evaluated by an additional translation (x0, y0) and a rotation about an angle α. With

G(x) = (1/(√(2π) σ)) exp(−x²/(2σ²)),   G(x, y) = G(x) G(y)

and t = tan(β/2) we hence obtain an integral over the region 0 ≤ η ≤ tξ for the upper part of the sector. Integration of G(x) yields the Gaussian error function φ(x). Using

φ(x) = ∫ from −∞ to x of G(ξ) dξ,   φ(x, y) = φ(x) φ(y),   D(x, y) = G(x) φ(y)

it follows that:

g(x, y) = a ( φ(x, y) − ∫ from 0 to ∞ of D(ξ − x, y − tξ) dξ )   (2)

The lower part of the sector is just a reflection with respect to the x-axis. So the model function of the whole sector can be obtained by superposition of two model functions in (2):

g_ML(x, y, β, a, σ) = g(x, y) + g(x, −y)   (3)

Figure 10. Grey-value corner in the absolute coordinate system

We use the formulation in (3) for describing the intensity variations of an L-junction because this function is valid for angles in the whole range of 0° ≤ β < 180°. Moreover, more complex junctions (see below) can again be obtained by superposition of this function. The construction of grey-value structures by superposition of model functions is possible because the convolution with a Gaussian function is a linear operation, and linear operations satisfy the superposition principle24. The whole grey-value structure can be decomposed into several elementary components (in our case wedge shaped structures) in such a way that the result of an operation (in our case the convolution with a Gaussian function) can 'easily' be calculated. The result of the operation onto the whole structure is then the composition of the results obtained for the elementary components.

In order to compute function values of g_ML(x, y), we chose Romberg's method25 for numerical integration, using rational approximations of the φ-function in Abramowitz and Stegun26. The employed limits of integration can be obtained by estimating the integrand of (2). Because the φ-function is monotonically increasing from 0 to 1, the integrand is surely smaller than or equal to ε if G(ξ) < ε, i.e. |ξ| ≥ σ √(−2 ln(√(2π) σ ε)).

A synthetic image of a grey-value corner can now be created by superposition of the model function g_ML(x, y) with a surface of constant intensity a0. However, we can also imagine this complete grey-value structure as the superposition of two model functions with aperture β and 360° − β.


Equation (3) is valid for 0° ≤ β < 180°. The choice β = 180° defines the model of a step edge:

g_MSE(x, y) = a φ(x)   (4)

If β > 180° then g_ML(x, y) can be computed using two model functions with half of the aperture and the same intensity, or by superposition of g_MSE(x, y) and g_ML(x, y). Thus, general intensity variations of two adjacent regions of the proposed class of grey-value structures, which will be denoted by g_M2(x, y), can be created using the model function of the step edge g_MSE(x, y) and the L-junction g_ML(x, y). In the general case, the grey-value structures are specified by seven parameters: g_M2(x, y) = g_M2(x, y, x0, y0, α, β, a0, a, σ).

Three adjacent regions (T-, Y-, ARROW-junction)

Grey-value variations which are characterized by three plateaus of constant intensities joining in one point can also be modelled by superposition of model functions g_ML(x, y). One possible parametrization of the image of a trihedral vertex is sketched in Figure 11. Each wedge is described by an angle of rotation, an aperture and an amplitude. Because the angle of rotation of the second sector α2 can be expressed by α2 = α1 − β1/2 − β2/2, the number of parameters characterizing such a grey-value structure is 9. Analogously to the realization of synthetic grey-value corners, we can model the intensity variations of three adjacent regions g_M3(x, y) by the superposition of two model functions g_ML(x, y) and one plateau of constant intensities. Likewise, we can create such a grey-value structure by superposing three model functions g_ML(x, y):

g_M3(x, y) = Σ from i=1 to 3 of g_MLi(x, y)   with β3 = 360° − β1 − β2   (5)

Particular choices of the parameters of g_M3(x, y) yield the grey-value variations of a T-, Y- or ARROW-junction. The T-junction is specified by β1 + β2 = 180°. A Y- and ARROW-junction is given by β1 + β2 > 180° and β1 + β2 < 180°, respectively, if β1 and β2 are the two smaller angles. The model of the Y-junction proposed in De Micheli et al.27 is a special case of (5) if we choose, for example, x0 = y0 = 0, α1 = 90°, β1 = 180° − 2θ, β2 = 90° + θ, a0 = A, a1 = 0 and a2 = B. In De Micheli et al.27, this model is employed to compare existing edge detection algorithms according to their accuracy of localization and sensitivity to noise.
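The junction-type criteria just stated can be collected in a small helper; the sketch below is our own illustration (degree-valued inputs, hypothetical names), not code from the paper:

```python
def junction_type(beta1, beta2):
    # beta1, beta2: the two smaller apertures of a three-region junction (degrees)
    # T-junction:     beta1 + beta2 = 180
    # Y-junction:     beta1 + beta2 > 180
    # ARROW-junction: beta1 + beta2 < 180
    s = beta1 + beta2
    if s == 180.0:
        return "T"
    return "Y" if s > 180.0 else "ARROW"
```

For example, the apertures estimated for the ARROW-junction in Table 4 (about 68.5° and 31.0°) sum to well under 180°, matching the ARROW criterion.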

n adjacent regions

The validity of the superposition principle is the reason why we can model grey-value structures by superimposing an arbitrary number n of wedge shaped structures (elementary components). Conversely, we can derive from the general case of n adjacent regions g_Mn(x, y) the grey-value structures for n = 2 and n = 3 introduced in the preceding sections:

g_Mn(x, y) = ( Σ from i=1 to n of E_i(x, y) ) * G(x, y) = Σ from i=1 to n of ( E_i(x, y) * G(x, y) ) = Σ from i=1 to n of g_MLi(x, y),   n ≥ 2   (6)

The choice n = 2 specifies the intensity variations of a step edge or an L-junction. For n = 3 we obtain T-, Y- and ARROW-junctions. In the case of n = 4 we have PEAK-, K-, X-, MULTI- and XX-junctions in the notation of the labelling system of Waltz1. For n = 5 we have KA- and KX-junctions, and n ≥ 6 leads to even more complex grey-value structures. The specific number of model parameters is m = 3 + 2n. For illustration the 3D-plot of a K-junction is depicted in Figure 12.
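The superposition argument behind (6) — blurring the sum of the ideal wedges equals summing the blurred wedges — can be checked numerically. The sketch below is our own illustration, using a crude sampled Gaussian with border replication rather than the paper's analytical model:

```python
import math

def gauss_kernel(sigma, radius):
    k = [math.exp(-i * i / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur(img, sigma=1.0, radius=3):
    # separable Gaussian filtering with border replication
    k = gauss_kernel(sigma, radius)
    n, m = len(img), len(img[0])
    tmp = [[sum(k[r + radius] * img[i][min(max(j + r, 0), m - 1)]
                for r in range(-radius, radius + 1)) for j in range(m)] for i in range(n)]
    return [[sum(k[r + radius] * tmp[min(max(i + r, 0), n - 1)][j]
                 for r in range(-radius, radius + 1)) for j in range(m)] for i in range(n)]

def wedge(n, a0, a1, amp):
    # indicator of a sector (angles a0 <= theta < a1, apex at the image centre) times amp
    c = n // 2
    img = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            th = math.atan2(i - c, j - c) % (2 * math.pi)
            if a0 <= th < a1:
                img[i][j] = amp
    return img

def add(p, q):
    return [[a + b for a, b in zip(rp, rq)] for rp, rq in zip(p, q)]

# superposition principle: blur(E1 + E2) equals blur(E1) + blur(E2)
E1 = wedge(15, 0.0, 1.2, 30.0)
E2 = wedge(15, 1.2, 2.8, 70.0)
lhs = blur(add(E1, E2))
rhs = add(blur(E1), blur(E2))
```

Because convolution is linear, the two results agree up to floating-point rounding; this is exactly what allows complex junctions to be assembled wedge by wedge.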

Figure 11. Three adjacent regions

IDENTIFICATION OF CHARACTERISTIC INTENSITY VARIATIONS

The aim of identification is to find a suitable description


for a real object. The task is to determine a mathematical model which reflects the essential properties of the real object sufficiently well28,29. First it is necessary to select a certain model class, and second to choose a representative of this class by fixing the model parameters. The model class is chosen by using a priori knowledge, and the estimation of the parameters takes into account current empirical observations (measurements). For the identification of grey-value variations in images, i.e. the precise recognition of grey-value structures with respect to their characteristics, the analytical model introduced above is selected as model class. Let us simplify the notation by denoting the image plane coordinates with x = (x, y) and the model parameters we wish to estimate with p = (p1, ..., pm). The function g_M(x, p), the model description, should be chosen in such a way that the agreement with the grey values in the image g(x) is as close as possible. We require the sum of the squared differences between the model function and the grey values in an area Ω around the origin of the grey-value structure to be as small as possible. Then the nonlinear cost function S depending on the parameters p is as follows:

S(p) = ∫ over Ω of (g_M(x, p) − g(x))² dx → min   (7)

and one discrete version of the above is:

S(p) = Σ over x_i in Ω of (g_M(x_i, p) − g(x_i))² → min   (8)

We search the values for p where S(p) takes a (local) minimum. In order to get first experimental results we use the descent method of Powell30. Reducing the computation time will be the subject of future work. After choosing an initial starting vector p0, the iteration process proceeds until the differences of the values of the function S(p) in subsequent iteration steps, normalized by the absolute values of the function, drop below a predefined threshold.

Figure 12 (left). 3D-plot of an analytical K-junction
Figure 13 (right). Original image of a cut cube
Figure 14 (left). Detected features of Figure 13
Figure 15 (right). Considered features of Figure 13

EXPERIMENTAL RESULTS

This section demonstrates the behaviour of the proposed method for real image data. We show identification results for the image depicted in Figure 13, which was recorded with a HDTV-camera (1024 × 1024 pixels) and the image in Figure 26 taken with an ordinary camera (512 × 512 pixels). The considered features are marked in Figures 15 and 28. Preliminary results are described by Rohr31.

Initial values for the optimization process

The initial values p0 needed for the optimization process should be chosen automatically. One possibility is to construct prototypes, representing certain grey-value variations, by using the analytical model outlined above. By applying these prototypes we would get a rough indication of the individual grey-value structure being observed. The analysis of the grey-value variations could then be refined by the proposed identification approach. Here, we suggest another possibility. Initial values for the position of the grey-value structures are determined by applying the approach of Rohr32, which detects image points indicating high intensity variations. This local approach operates directly on the image utilizing the matrix C suggested in Nagel33. The elements of C are combinations of partial derivatives of the picture function. In Rohr32 local maxima of the determinant of C are identified with points of high intensity variations. In order to suppress local maxima in homogeneous regions due to noise, det(C) is compared with a threshold. Edge points outside corners are removed using det(C)/(trace(C))² by Nagel and Enkelmann34. After estimating the partial derivatives with 5×5-masks35, the localization of the corner points is refined by applying 3×3-masks. Figure 14 shows the result of this approach for the image in Figure 13.

Initial values for the angles are evaluated using straight lines fit to linked edge points in the vicinity of the examined grey-value structure. Single edge points are localized by the approach of Korn36. The mean values of the intensities in the sectors (bounded by the straight lines) are taken as an estimate for the amplitude of the grey-value plateaus. For illustration we determine initial values for the Y-junction in Figure 16 appearing in the centre of the image shown in Figure 13. The initial position of this grey-value structure is marked by a square (see Figure 17). The straight lines indicate the boundaries of the sectors, and the marked points inside the sectors contribute to the estimation of the mean values for the grey-value plateaus. Only those points are considered where tr(C) is smaller than a threshold.

Identification results

We start with the identification of a step edge (see Figures 15 and 18). Because an edge point is only determined uniquely in one direction, i.e. the direction of the grey-value gradient belonging to it, we restrict the position of the edge point during the optimization
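A rough stand-in for this initialization step can be sketched as follows; this is a simplified illustration in our own notation (central-difference gradients, det(C) accumulated over a small window, local maxima above a threshold), not the exact operator of the cited work:

```python
def detect_corners(img, win=1, thresh=10.0):
    """Local maxima of det(C), where C sums products of partial
    derivatives over a (2*win+1)^2 window (simplified sketch)."""
    n, m = len(img), len(img[0])
    Ix = [[0.0] * m for _ in range(n)]
    Iy = [[0.0] * m for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            Ix[i][j] = (img[i][j + 1] - img[i][j - 1]) / 2.0
            Iy[i][j] = (img[i + 1][j] - img[i - 1][j]) / 2.0
    det = [[0.0] * m for _ in range(n)]
    for i in range(win + 1, n - win - 1):
        for j in range(win + 1, m - win - 1):
            a = b = c = 0.0
            for di in range(-win, win + 1):
                for dj in range(-win, win + 1):
                    gx, gy = Ix[i + di][j + dj], Iy[i + di][j + dj]
                    a += gx * gx
                    b += gy * gy
                    c += gx * gy
            det[i][j] = a * b - c * c
    corners = []
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            v = det[i][j]
            if v > thresh and all(v >= det[i + di][j + dj]
                                  for di in (-1, 0, 1) for dj in (-1, 0, 1)
                                  if (di, dj) != (0, 0)):
                corners.append((i, j))
    return corners

# synthetic L-corner: a bright block in the upper-left quadrant
test_img = [[10.0 if i < 7 and j < 7 else 0.0 for j in range(15)] for i in range(15)]
corners = detect_corners(test_img)
```

Along a straight edge only one gradient direction is active, so det(C) vanishes there; it becomes large only where gradients of both orientations meet, which is why its local maxima mark candidate corner and junction points.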


process by an additional requirement. In our case, we can choose a certain row in the image (fixing the value of y0). Then the number of the remaining unknown parameters is 5: (x0, α, a0, a, σ). Applying the identification approach to a ca. 20 × 20 portion of the original image, we get the parameters shown in the first row in Table 1. The mean error ē (positive root of the mean squared error) between the original image and the model function is 2.48. The model function approximates the original step edge fairly well (see Figures 18 and 19). The estimated value for σ is 1.52 and represents a measure for the width of the grey-value transition.

Figure 16. Original image of a Y-junction
Figure 17. Illustration of the initial data for the Y-junction in Figure 16: position, angles and grey values of the plateaus
Figure 18 (left). Original step edge
Figure 19 (right). Identified model of the step edge in Figure 18

Table 1. Identification result of the step edge in Figure 18 (σ_F = 0.0, 0.5, 1.0, 1.5)

In order to reduce the noise we will now smooth the original image with a Gaussian filter σ_F before applying the optimization method. For values σ_F = 0.5, 1.0 and 1.5 we get the estimated parameters as shown in Table 1. The mean error ē gets smaller with increasing σ_F. Obviously, the more we smooth the image beforehand, the better is the agreement of the image with the model function. In addition, we see that the parameters, with the exception of σ, remain nearly constant, i.e. they are (nearly) independent of σ_F. However, the changes of σ are to be expected because filtering the image increases the width of the grey-value transitions. Imagine the original step edge to be a smoothed ideal step edge; then the filtered original step edge is the result of two Gaussian filters σ0 (the optimal value for the unfiltered original image) and σ_F applied successively to the ideal step edge. Equally, an ideal step edge can be smoothed with just one Gaussian filter of σ = √(σ0² + σ_F²). This is not only true for step edges but for arbitrary grey-value structures g(x):

[g(x) * G(x, σ0)] * G(x, σ_F) = g(x) * [G(x, σ0) * G(x, σ_F)] = g(x) * G(x, √(σ0² + σ_F²))   (9)

Thus, the width of a grey-value transition in the original image σ0 can be evaluated using:

σ0 = √(σ² − σ_F²)   (10)

If we now consider the values for σ0 (last column in Table 1), we see that (10) is confirmed. Consequently,
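The cascade property exploited in (9) and (10) — two successive Gaussian filters of widths σ0 and σF act like one filter of width √(σ0² + σF²) — can be verified numerically. The sketch below is our own illustration with sampled, renormalized kernels, so the identity holds only approximately:

```python
import math

def kernel(sigma):
    r = int(4 * sigma + 1)
    k = [math.exp(-i * i / (2 * sigma ** 2)) for i in range(-r, r + 1)]
    s = sum(k)
    return [v / s for v in k], r

def blur(signal, sigma):
    # discrete Gaussian filtering with border replication
    k, r = kernel(sigma)
    n = len(signal)
    return [sum(k[d + r] * signal[min(max(i + d, 0), n - 1)]
                for d in range(-r, r + 1)) for i in range(n)]

step = [0.0] * 20 + [1.0] * 20
twice = blur(blur(step, 1.0), 1.0)   # sigma_0 = sigma_F = 1
once = blur(step, math.sqrt(2.0))    # sigma = sqrt(sigma_0^2 + sigma_F^2)
```

The two blurred step edges agree closely, which is the discrete counterpart of equation (9).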


step edge was located between columns 9 and IO. then
the optimal position of the identified model would be
9.5.
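To illustrate the identification idea on such an ideal edge, the sketch below fits the four step-edge parameters (x0, a0, a, σ) by least squares on noise-free synthetic data whose jump lies between columns 9 and 10, so the fitted position should come out near 9.5. We substitute a naive coordinate-descent search for the Powell routine used in the paper, and all names are our own:

```python
import math

def phi(x):
    # Gaussian error function (integral of the unit Gaussian)
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def edge_model(x, x0, a0, a, sigma):
    return a0 + a * phi((x - x0) / sigma)

def cost(p, data):
    x0, a0, a, sigma = p
    if sigma <= 0.0:
        return float("inf")
    return sum((edge_model(x, x0, a0, a, sigma) - g) ** 2 for x, g in data)

def fit(data, p0, step=1.0, iters=800):
    # naive coordinate descent with a shrinking step size
    p, best, s = list(p0), cost(p0, data), step
    for _ in range(iters):
        improved = False
        for k in range(len(p)):
            for d in (s, -s):
                q = list(p)
                q[k] += d
                c = cost(q, data)
                if c < best:
                    p, best, improved = q, c, True
        if not improved:
            s *= 0.5
            if s < 1e-7:
                break
    return p

# noise-free synthetic edge whose jump lies between columns 9 and 10
data = [(x, edge_model(x, 9.5, 20.0, 40.0, 0.8)) for x in range(20)]
est = fit(data, (8.0, 10.0, 30.0, 1.5))
```

Because the position parameter is real-valued, the fit localizes the edge to subpixel accuracy, mirroring the behaviour described in the text.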
For some further tests of our approach on other
images we refer to Bergholm and Rohr”‘. where the
estimates for the width and height of step edges were
compared with the estimates of the approach of Zhang
and Bergholm3x. The comparison showed that the two
different approaches (assuming the same step edge
model) yield similar results if the step edge model is
valid. If. however, the width of the transitions is large
Figure 20 (left). Original grey-value corner

Figure 21 (right). Identified model of the grey-value corner in Figure 20

Table 2. Identification result of the grey-value corner in Figure 20

σF   x0     y0     α       β       a0     a1     σ     ε     ε0
0.0  27.18  28.60  267.72  158.33  15.33  64.12  0.96  1.60  0.96
0.5  27.22  28.64  267.63  159.17  15.32  64.01  1.09  1.03  0.97
1.0  27.24  28.50  267.82  159.65  15.35  63.87  1.42  0.43  1.01
1.5  27.23  28.39  267.96  150.34  15.30  63.95  1.78  0.26  0.96

all parameters are nearly independent of smoothing the original image beforehand. However, if the image is smoothed too much, the obtained parameters will be inaccurate. The position of the prominent point of the grey-value structure is determined (nearly) independently of the width of the grey-value structure and of the amount of filtering, because the estimated position is the origin of the grey-value structure, i.e. the position for the model parameter σ = 0. Hence, the influence of, for example, a badly focused camera or of different sharpness of the grey-value structures due to variations of the distances of objects in the scene to the camera is reduced. The application of the identification method for ideal synthetic images (sharp transitions and no noise) also yielded good results. The step edges were identified almost exactly. The mean error would be approximately zero. If, for example, the jump of the
the approach described in the current contribution seems to be more precise.

Nalwa and Binford39 also fit an explicit model to step edges. They use the tanh function. However, they did not extend their approach to two-dimensional features such as grey-value corners and more complex junctions.

Next we want to identify the grey-value corner (see Figure 20) in the rear of the cube in Figure 13. The aperture of the sector is very big. Consequently, the recognition of this grey-value structure is difficult because the position of the corner point is not very well defined. The estimated model parameters p are shown in Table 2. The identified model is depicted in Figure 21. Similar to the example of the step edge, the deviation between the image and the model gets smaller with increasing values of σF. The obtained parameters are (nearly) independent of σF.

The identification result of the Y-junction in Figure 22 appearing in the middle of the image in Figure 13 is displayed in Figure 23. The values for the estimated parameter vectors p are presented in Table 3. Qualitatively, we recognize the same behaviour as for the step edge and the grey-value corner. For this more complex recognition problem, the parameters obtained are (nearly) independent of σF, too. The same holds for the identification of the ARROW-junction (see Figures 24 and 25, and Table 4).

We also show identification results for the image in Figure 26. The detected features can be seen in Figure 27. For the features marked in Figure 28 (L-, Y- and ARROW-junction) the identification results are shown
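The kind of parametric grey-value corner fitted in these experiments can be sketched as follows. This is an illustrative approximation only, not Rohr's exact model function: it composes an L-corner from Gaussian-smoothed step profiles (erf functions), with parameter names chosen to mirror the table columns (x0, y0, α, β, a0, a1, σ).

```python
import math

def smoothed_step(d, sigma):
    """Ideal unit step at d = 0, convolved with a Gaussian of width
    sigma: this gives the Gaussian error integral (erf profile)."""
    return 0.5 * (1.0 + math.erf(d / (sigma * math.sqrt(2.0))))

def corner_model(x, y, x0, y0, alpha, beta, a0, a1, sigma):
    """Sketch of an L-corner: a sector of aperture beta opening from
    orientation alpha at (x0, y0), with grey value a0 outside and
    roughly a0 + a1 inside, and erf-blurred borders. A product-of-
    steps approximation, hedged: Rohr's model is derived differently
    (as a superposition of elementary model functions)."""
    # signed distances to the two half-plane borders of the sector
    d1 = -(x - x0) * math.sin(alpha) + (y - y0) * math.cos(alpha)
    d2 = (x - x0) * math.sin(alpha + beta) - (y - y0) * math.cos(alpha + beta)
    return a0 + a1 * smoothed_step(d1, sigma) * smoothed_step(d2, sigma)
```

With alpha = 0 and beta = π/2 the sector is the positive quadrant: deep inside it the model approaches a0 + a1, far outside it approaches a0, and the transitions have width σ.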

Figure 22 (left). Original Y-junction

Figure 24 (left). Original ARROW-junction

Figure 23 (right). Identified model of the Y-junction in Figure 22

Figure 25 (right). Identified model of the ARROW-junction in Figure 24

Table 3. Identification result of the Y-junction in Figure 22

vol 10 no 2 march 1992 73


Table 4. Identification result of the ARROW-junction in Figure 24

σF   x0     y0     α       β1     β2     a0     a1     a2     σ     ε     ε0

0.0 21.85 32.00 230.15 68.53 31.01 12.93 42.10 69.01 1.00 2.47 1.00
0.5 21.84 32.03 229.80 67.36 31.16 12.88 42.15 69.10 1.10 2.15 0.98
1.0 21.82 32.13 228.58 64.14 31.43 12.76 42.30 69.10 1.40 1.73 0.97
1.5 21.77 32.16 228.51 63.56 31.45 12.76 42.34 68.95 1.72 1.41 0.83
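One pattern visible in Tables 2 and 4 is that the estimated transition width σ grows with the pre-smoothing σF while the other parameters stay (nearly) constant. For a Gaussian-blurred step this is exactly what the Gaussian semigroup property predicts: smoothing an already σ-blurred step with an additional Gaussian of width σF yields the same erf profile with effective width sqrt(σ² + σF²), and the edge position is unchanged. A small numerical check (illustrative only; function names are my own, not the paper's):

```python
import math

def smoothed_step(x, sigma):
    # unit step at x = 0, blurred by a Gaussian of width sigma
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

def gauss(t, s):
    return math.exp(-0.5 * (t / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

def presmooth(x, sigma, sigma_f, h=0.01, half_width=8.0):
    """Numerically convolve the sigma-blurred step with a Gaussian of
    width sigma_f (Riemann sum over +/- half_width * sigma_f)."""
    n = int(half_width * sigma_f / h)
    acc = 0.0
    for i in range(-n, n + 1):
        t = i * h
        acc += smoothed_step(x - t, sigma) * gauss(t, sigma_f) * h
    return acc

# semigroup property: presmooth(x, sigma, sigma_f) matches
# smoothed_step(x, sqrt(sigma**2 + sigma_f**2)) for all x, and the
# position of the half-height point (x = 0) does not move
```

For Table 2 this fits the data well: sqrt(0.96² + 0.5²) ≈ 1.08, sqrt(0.96² + 1.0²) ≈ 1.39 and sqrt(0.96² + 1.5²) ≈ 1.78, close to the σ values reported for σF = 0.5, 1.0 and 1.5.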

Figure 26. Original image

Figure 27. Detected features of Figure 26

Figure 28. Considered features of Figure 26

in Tables 5, 6 and 7. Again, these examples illustrate the properties of our approach mentioned above.

CONCLUSION AND FUTURE WORK

We have proposed an approach for the modelling and identification of a certain class of characteristic intensity variations. Although the underlying assumptions of the model class (see the first section) seem to be very restrictive, most existing approaches make implicit use of them, producing inaccurate results when the assumptions are not met in the image. For modelling this class of intensity variations, we have derived a parametric model which is the superposition of elemen-

Table 5. Identification result of the L-junction in Figure 28

σF   x0      y0      α       β       a0     a1      σ     ε     ε0
0.0  166.68  204.87  147.51  121.78  15.28  136.19  1.47  3.21  1.47
0.5  166.69  204.86  147.40  122.06  15.29  136.20  1.55  2.86  1.47
1.0  166.74  204.78  146.98  123.37  15.17  136.35  1.81  2.02  1.51
1.5  166.77  204.79  147.04  123.23  14.81  136.57  2.13  1.52  1.51

Table 6. Identification result of the Y-junction in Figure 28

σF   x0     y0     α       β1     β2     a0     a1     a2     σ     ε     ε0

0.0 165.51 99.45 329.67 118.64 104.40 93.72 138.92 54.22 1.50 2.32 1.50
0.5 165.50 99.45 329.83 118.63 104.05 93.69 138.91 54.36 1.58 2.11 1.49
1.0 165.51 99.43 330.52 118.67 103.99 93.48 138.78 54.72 1.82 1.60 1.52
1.5 165.53 99.43 330.56 118.70 104.22 93.47 139.00 54.38 2.14 1.20 1.52

Table 7. Identification result of the ARROW-junction in Figure 28

σF   x0     y0     α     β1     β2     a0     a1     a2     σ     ε     ε0
0.0  59.13  83.62  2.55  48.05  68.96  11.31  83.10  56.70  1.24  1.48  1.24
0.5  59.12  83.64  2.33  48.42  68.83  11.34  83.13  56.53  1.34  1.26  1.24
1.0  59.08  83.66  1.87  48.99  68.17  11.36  83.10  56.18  1.63  0.90  1.29
1.5  59.05  83.68  1.77  48.83  68.15  11.27  83.48  56.35  1.97  0.68  1.28
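The parameter vectors in these tables come from minimizing the squared difference between the model function and the observed grey values. A minimal, derivative-free sketch of such a fit for a 1D blurred step is given below. All names are hypothetical: the paper minimizes over the full 2D model (citing Powell-type methods30), whereas this uses a crude coordinate search purely for illustration.

```python
import math

def smoothed_step(x, a0, a1, x0, sigma):
    # hypothetical 1D edge model: step of height a1 on background a0,
    # located at x0 and blurred by a Gaussian of width sigma
    return a0 + 0.5 * a1 * (1.0 + math.erf((x - x0) / (sigma * math.sqrt(2.0))))

def sum_sq_residual(params, xs, g):
    a0, a1, x0, sigma = params
    return sum((g[i] - smoothed_step(xs[i], a0, a1, x0, sigma)) ** 2
               for i in range(len(xs)))

def fit_edge(xs, g, init, steps=200):
    """Greedy derivative-free coordinate search over the four model
    parameters; step sizes shrink geometrically each sweep."""
    p = list(init)
    h = [2.0, 5.0, 1.0, 0.3]              # initial step per parameter
    best = sum_sq_residual(p, xs, g)
    for _ in range(steps):
        for k in range(4):
            for delta in (h[k], -h[k]):
                q = list(p)
                q[k] += delta
                if k == 3 and q[3] <= 0.05:   # keep sigma positive
                    continue
                e = sum_sq_residual(q, xs, g)
                if e < best:
                    p, best = q, e
        h = [s * 0.95 for s in h]         # shrink the search steps
    return p, best
```

The final residual plays the role of the fit-quality measure reported in the tables: it quantifies how well the estimated model describes the actual data.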

74 image and vision computing


tary model functions. By this it is possible to describe step edges, grey-value corners (L-junctions), T-, Y-, ARROW-junctions and all other junction types represented in the labelling system of Waltz1. Each grey-value structure can be expressed by just one analytical function and is a special case of the proposed general model. The identification of real intensity variations is done by minimizing the squared difference between the image intensities and the model. By this, we have a measure of how well the estimated model describes the actual data. All parameters can be assigned real values. So the position of the prominent point is determined up to subpixel accuracy and indicates the origin of the ideal (unsmoothed) grey-value structure. We also obtain a measure for the width of the grey-value transitions. Moreover, the estimated parameters are (nearly) independent of smoothing the original image beforehand in order to reduce the noise. The application of the proposed method to real image data demonstrates that the identified model functions agree fairly well with the original grey-value structures.

Because of the relatively large size of the grey-value structures being considered (approximately 20 x 20 pixels), the approach is computationally expensive. But, in our opinion, it is necessary to examine such large neighbourhoods of each grey-value structure in order to restore prominent features with high accuracy. The high precision we achieve with our approach should increase the reliability of evaluated properties of the 3D-scene and should consequently justify the greater cost. Reducing the computation time will be the subject of future work. One possibility is to perform the algorithm within a pyramid structure of images. A possible extension of our work is the utilization of linearly increasing (descending) grey-value plateaus instead of constant plateaus. Another generalization could be the modelling of curved edges joining in the prominent point of the grey-value structure.

ACKNOWLEDGEMENTS

I thank H.-H. Nagel for his suggestion to take a high resolution image of a T-junction and for his idea to extend the model of the grey-value corner in Figure 2 to the model of the T-junction in Figure 5. I thank C. Schnörr and J. Rieger for many inspiring discussions and for their help in a variety of ways. Valuable critical comments by G. Hager, A. Korn, H.-H. Nagel, J. Rieger, C. Schnörr and the two referees on a draft version of this contribution are gratefully acknowledged. C. Müller helped me in providing Figure 13.

REFERENCES

1 Waltz, D 'Understanding line drawings of scenes with shadows', in Winston, P H (ed), The Psychology of Computer Vision, McGraw-Hill, New York (1975) pp 19-91
2 Nagel, H-H 'Wissensgestützte Ansätze beim maschinellen Sehen: Helfen sie in der Praxis?', in Brauer, W and Radig, B (eds), Wissensbasierte Systeme, GI-Fachkongreß München, Germany (28-29 October 1985) pp 170-198
3 Canny, J 'A computational approach to edge detection', IEEE Trans. PAMI, Vol 8 (1986) pp 679-698
4 Li, D, Sullivan, G D and Baker, K D 'Edge detection at junctions', Proc. 5th Alvey Vision Conf., Reading, UK (25-28 September 1989) pp 121-125
5 Lowe, D G 'Organization of smooth image curves at multiple scales', Int. J. Comput. Vision, Vol 3 (1989) pp 119-130
6 Damon, J Local Morse Theory for Solutions to the Heat Equation, Preliminary Announcement, Department of Mathematics, University of North Carolina (1989)
7 Bergholm, F 'Edge focusing', IEEE Trans. PAMI, Vol 9 (1987) pp 726-741
8 Bergholm, F On the Content of Information in Edges and Optical Flow, Dissertation, Royal Institute of Technology, Stockholm, Sweden (May 1989)
9 Clowes, M 'On seeing things', Artif. Intell., Vol 2 No 1 (1971) pp 79-116
10 Huffman, D 'Impossible objects as nonsense sentences', in Meltzer, B and Michie, D (eds), Machine Intelligence 6, Edinburgh University Press, Scotland (1971) pp 295-323
11 Nagel, H-H 'Displacement vectors derived from second-order intensity variations in image sequences', Comput. Vision, Graph. & Image Process., Vol 21 (1983) pp 85-117
12 Dreschler, L and Nagel, H-H 'Volumetric model and 3D-trajectory of a moving car derived from monocular TV-frame sequences of a street scene', Proc. IJCAI, Vancouver, BC (1981) pp 692-697 (see also Comput. Graph. & Image Process., Vol 20 (1982) pp 199-228)
13 Kitchen, L and Rosenfeld, A 'Gray-level corner detection', Patt. Recogn. Lett., Vol 1 (1982) pp 95-102
14 Zuniga, O A and Haralick, R M 'Corner detection using the facet model', Proc. IEEE Conf. on Comput. Vision & Patt. Recogn., Washington DC (June 19-23 1983) pp 30-37
15 Noble, J A 'Finding corners', Proc. 3rd Alvey Vision Conf., Cambridge, UK (15-17 September 1987) pp 267-274
16 Noble, J A 'Morphological feature detection', Proc. 4th Alvey Vision Conf., Manchester, UK (31 August-2 September 1988) pp 203-210
17 Rangarajan, K, Shah, M and Van Brackle, D 'Optimal corner detector', Comput. Vision, Graph. & Image Process., Vol 48 (1989) pp 230-245
18 Marr, D 'Early processing of visual information', Phil. Trans. Royal Soc. London, B, Vol 275 (Biol. Sciences) (1976) pp 483-519
19 Berzins, V 'Accuracy of Laplacian edge detectors', Comput. Vision, Graph. & Image Process., Vol 27 (1984) pp 195-210
20 Guiducci, A 'Corner characterization by differential geometry techniques', Patt. Recogn. Lett., Vol 8 (1988) pp 311-318
21 Marr, D and Hildreth, E 'Theory of edge detection', Proc. Royal Soc. Lond. B, Vol 207 (1980) pp 187-217
22 Dreschler, L Ermittlung markanter Punkte auf den Bildern bewegter Objekte und Berechnung einer 3D-Beschreibung auf dieser Grundlage, Dissertation, Universität Hamburg, Germany (1981)
23 Nagel, H-H 'Principles of (low-level) computer



vision’, in Haton, J-P fed), Fundamentals in FLusses in ~ildfo~gen, Diplomarbeit, Institut fiir
Computer Understanding: Speech and Vision, Nachri~htensysteme, Universit~t Karlsruhe, Ger-
Cambridge University Press, UK (1987) pp 113- many (1987)
139 33 Nagel, H-H ‘Constraints for the estimation of
24 Wolf, H Lineare Systeme und Netzwerke, Springer- displacement vector fields from image sequences’,
Verlag, Berlin (1985) Proc. Z./CAL, Karlsruhe, Germany (8-12 August
25 Press, W H, Flannery, B P, Teukolsky, S A and 1983) pp 94.5-951
Vetterling, W T Numerical Recipes, Cambridge 34 Nagel, H-H and Enkelmann, W ‘Iterative estima-
University Press, UK (1988) tion of displacement vector fields from TV-frame
26 Abramowitz, M and Stegun, I A Handbook of sequences’, Proc. 2nd Euro. Signal Process.
Mathematical Functions, Dover Publications, New Conf.: EUSZPCO-83, Erlangen, Germany (1983)
York (1965) pp 299-302
27 De Micheli, E, Caprile, B, Ottonello, P and Torre 35 Beaudet, P R ‘Rotationally invariant image oper-
V ‘Localization and noise in edge detection’, IEEE ators’, Proc. Int. Joint Conf. on Pattern Recogn.,
Trans. PAMZ, Vol 11 (1989) pp 1106-1117 Kyoto, Japan (November 7-10 1978) pp 579-583
28 Norton, J P An Introduction to Identification, 36 Korn, A ‘Towards a symbolic representation of
Academic Press, London (1986) intensity changes in images’, IEEE Trans. PAMI,
29 Hager, G Computational Methods for Sensor Data Vol 10 (1988) pp 610-625
Fusion and Sensor Planning, Kluwer, Boston 37 Bergholm, F and Robr, K A Comparison between
(1990) Two Approaches Applied for Estimating Diffuse-
30 Powell, M J D ‘An efficient method for finding the ness and Height of Step Edges, Hausbericht Nr.
minimum of a function of several variables without 10262, Fraunhofer-Institut fur Informations- und
calculating derivatives’, Comput. J., Vol 7 (1964) Datenverarbeitung (IITB), Karlsruhe, Germany
155-162 (March 1991)
31 Robr, K iiber die ModeiIierung und Identifikation 38 Zbang, W and Bergbolm, F An extension of Marr’s
ch~rakteristischer Grauwertverl~ufe in ~e~iweit- ‘signature’ based edge classification, Technical
biidern, 12. DAGM - Symposium Mustererken- Note, CVAP-TN04, Department of Numerical
nung (24-26 September 1990) Uberko~hen-Aale~~, Analysis and Computing Science, Royal Institute
Informatik-~achberichte 254, GroBkopf, RE (ed), of Technology, Stockholm, Sweden (September
Springer-Verlag, Berlin (1990) pp 217-224 1990)
32 Robr, K Untersuchung van grauwertabhiingigen 39 Nalwa, V S and Binford, T 0 ‘On detecting edges’,
Transformationen zur Ermittlung des opt&hen IEEE Trans. PA MI, Vo18 No 6 (1986) pp 699-714

