ICPR
&
the future of pattern recognition
Lev Goldfarb
ETS group
Faculty of Computer Science
UNB
Fredericton, Canada
Outline
1. The wisdom of modern physicists (3 slides)
2. The maturity of a science (1 slide)
3. The currently prevailing wisdom in our field (1 slide)
4. Why should this be the guiding wisdom? (4 slides)
5. Are we mature enough for the task? (2 slides)
6. The (social) reason for the status quo (1 slide)
7. The forgotten history: syntactic pattern recognition (8 slides)
8. Syntactic pattern recognition: the unrealized hopes (4 slides)
9. How should we apply the wisdom of physicists? (2 slides)
10. ETS formalism: its inspiration (2 slides)
11. ETS formalism: temporal information (3 slides)
12. ETS formalism (13 slides)
13. ETS formalism: representational completeness (1 slide)
14. ETS formalism: the intelligent process (1 slide)
15. Learning without representation? (2 slides)
16. Conclusion (3 slides)
The wisdom of modern physicists
From: Freeman Dyson, “Innovation in Physics,” Scientific American, September 1958:
The objection that they are not crazy enough applies to all the
attempts which have so far been launched at a radically new theory of
elementary particles. It applies especially to crackpots. Most of the
crackpot papers that are submitted to the Physical Review are rejected,
not because it is impossible to understand them, but because it is
possible. Those that are impossible to understand are usually
published. When the great innovation appears, it will almost certainly
be in a muddled, incomplete, and confusing form. To the discoverer
himself it will be only half-understood. To everybody else it will be a
mystery. For any speculation that does not at first glance look crazy,
there is no hope.
This is one of the main messages I would like you to keep in mind,
and I hope we will discuss it in this workshop.
The preceding rather superficial discourse should prove two general theses:
(1) Basic information theory concepts must and can be founded without
recourse to the probability theory . . . .
(2) Introduced in this manner, information theory concepts can form the basis
of the concept “random”, which [would then] naturally suggest that
randomness is the absence of periodicity.
One of the main reasons for the status quo is the forgotten part of our
history, owing to the emergence, during the last 15–20 years, of two “new”
popular areas, neural networks and machine learning. Both deal with the
same subject matter as pattern recognition, yet started, basically, all
over again, eventually rediscovering the importance of symbolic
representations.
(In contrast to pattern recognition, the professional milieu is no longer
engineering, but psychological and computational/statistical, respectively,
although both areas attracted many young physicists.)
Lev Goldfarb, ICPR, Aug. 2004
The forgotten history: syntactic pattern recognition
• Among the English-language books of the ’70s and ’80s devoted entirely
to this topic were those by Fu (Syntactic Pattern Recognition and
Applications), Grenander (Lectures in Pattern Theory), Gonzalez and
Thomason (Syntactic Pattern Recognition), Watanabe (Pattern Recognition)
and several others.
Narasimhan (1964):
“Fools can learn from their own experience; the wise learn from
the experience of others.”
____________
So, let’s try to be wise and learn as much as we can from the
experience of physicists, mathematicians, and biologists.
From the very beginning, the ETS framework has been inspired by
the formal/aesthetic beauty and power of a dynamic (and
generalized) version of the generative grammar model:
(In that sense, if besides ETS there is another formal realization of this vision, it
should definitely be investigated.)
Note how one event (particle on the left or string on the right) is immediately
followed by two events (two particles/strings).
ETS formalism: (class) primitive transformations

[figure: a primitive transformation, with initial sites at the top and terminal sites at the bottom; time flows downward]
• Think of a primitive as an “elementary” process that transforms the initial “objects” into
terminal ones: it is a symbolic “notation” for a typically nontrivial process (structured event).
• The circle and the square denote two site types; the letters {a, b} and {x, y} are names of
variables that are allowed to vary over non-overlapping sets of numeric labels.
• The brackets [ ] signify that we are, in fact, dealing with a class of (original) primitives, where
each original primitive carries concrete numeric labels.
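Purely as an illustration of the bookkeeping involved — the class names, fields, and encoding below are my own inventions, not ETS notation — a class primitive can be sketched as a record of typed sites carrying label variables, which an original primitive instantiates with concrete numeric labels:

```python
from dataclasses import dataclass

# A site pairs a site type ("circle" or "square" in the slide's figure)
# with a variable name that ranges over numeric labels.
@dataclass(frozen=True)
class Site:
    site_type: str   # e.g. "circle" or "square"
    var: str         # variable name: "a", "b" (circle) or "x", "y" (square)

# A class primitive: an "elementary" process taking initial sites to
# terminal sites, with variables still unbound.
@dataclass(frozen=True)
class ClassPrimitive:
    name: str
    initial: tuple
    terminal: tuple

    def instantiate(self, labels):
        """An original primitive: every variable bound to a concrete label."""
        bind = lambda s: (s.site_type, labels[s.var])
        return {"name": self.name,
                "initial": [bind(s) for s in self.initial],
                "terminal": [bind(s) for s in self.terminal]}

pi = ClassPrimitive("pi_1",
                    initial=(Site("circle", "a"),),
                    terminal=(Site("circle", "b"), Site("square", "x")))
print(pi.instantiate({"a": 1, "b": 2, "x": 7}))
```

The same `ClassPrimitive` thus stands for a whole class of original primitives, one per choice of numeric labels.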
ETS formalism: structs (segments of formative history)
Each πi denotes an ETS primitive transformation (the order in which the primitive
transformations are applied is captured in the representation).
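As a toy illustration only (the encoding is mine, not the ETS notation), the essential point is that a struct is an ordered record of primitive applications, so the same primitives applied in a different order constitute a different formative history:

```python
# A struct sketched as an ordered list of primitive applications; each entry
# names the primitive and the numeric site labels it consumed and produced.
struct_a = [("pi_1", {"in": (1,), "out": (2, 7)}),
            ("pi_2", {"in": (2,), "out": (3,)})]

# The same two applications, in the opposite order of application.
struct_b = [("pi_2", {"in": (2,), "out": (3,)}),
            ("pi_1", {"in": (1,), "out": (2, 7)})]

# Order of application is part of the representation:
print(struct_a != struct_b)   # True: these are different histories
```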
ETS formalism: extructs (contexts)
• Examples of extructs: heavy lines identify the interface sites and crosses identify
detached sites.
• Contexts should be thought of as parts of the formative history that are necessary
for the presence of the (immediately following) “important” segments of history.
ETS formalism: transformation

[figure: a transformation, composed of a context followed by a body]
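A hedged sketch of this decomposition (the names and the set-based attachment test are my own simplifications, not the ETS definition): a transformation pairs a context, exposing interface sites, with the body that may be appended at those sites:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Transformation:
    context: tuple         # extruct: the enabling fragment of formative history
    interface: frozenset   # interface sites shared by context and body
    body: tuple            # struct: the "important" segment that follows

    def applies_at(self, exposed_sites):
        # Simplification: the body may be attached only where all of the
        # context's interface sites are currently exposed.
        return self.interface <= exposed_sites

t = Transformation(context=(("pi_1", (1,), (2, 7)),),
                   interface=frozenset({2, 7}),
                   body=(("pi_3", (2, 7), (9,)),))

print(t.applies_at({2, 7, 11}))   # True: both interface sites exposed
print(t.applies_at({2, 5}))       # False: site 7 is missing
```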
ETS formalism: class supertransform
(structural class representation)
ETS formalism: (single level) class representation
ETS model basics: the evolution of a class
This is the job of the intelligent process (which includes the learning and
recognition stages):
• the modification of the structural memory (at various levels), i.e. of the class
supertransforms, and, occasionally, the introduction of new levels
• the modification of the numeric memory (at various levels), which is needed,
at present, to record the statistics related to various observed recurring
associations (between various primitives as well as between the contexts and
the bodies).
and, as a consequence,
• the results of (carefully crafted) vector-space (VS) learning algorithms can hardly be
used for any information-processing needs other than “classification”.
Going back to slide 6, I would like you to keep in mind the question
raised there.