Analogy
James Francis Ross, in Portraying Analogy (1982), the first substantive examination
of the topic since Cajetan's De Nominum Analogia, demonstrated that analogy is a
systematic and universal feature of natural languages, with identifiable and
law-like characteristics which explain how the meanings of words in a sentence are
interdependent.
Premises
a is C, D, E, F, G
b is C, D, E, F
Conclusion
b is probably G.
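The schema above can be sketched as a simple plausibility heuristic. The Python function below is purely illustrative (the name and the overlap-based score are hypothetical, not a standard formalisation of arguments from analogy):

```python
def analogical_support(a_props, b_props, conclusion):
    """Estimate support for 'b has `conclusion`' from the analogy with a.

    a is known to have all of a_props (including `conclusion`); b is known
    to have b_props. The more of a's other properties b shares, the more
    plausible it is that b also has `conclusion`.
    """
    if conclusion not in a_props:
        return 0.0  # the analogy lends no support at all
    shared = a_props & b_props
    return len(shared) / len(a_props - {conclusion})

# a is C, D, E, F, G; b is C, D, E, F -- how plausible is "b is G"?
a = {"C", "D", "E", "F", "G"}
b = {"C", "D", "E", "F"}
print(analogical_support(a, b, "G"))  # shares 4 of a's 4 other properties -> 1.0
```

As the schema indicates, the conclusion is only probable: a perfect overlap score still does not guarantee that b is G.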
Shared structure
According to Shelley (2003), the study of the coelacanth drew heavily on analogies
from other fish.
Contemporary cognitive scientists use a wide notion of analogy, extensionally close
to that of Plato and Aristotle, but framed by Gentner's (1983) structure mapping
theory.[14] The same idea of mapping between source and target is used by
conceptual metaphor and conceptual blending theorists. Structure mapping theory
concerns both psychology and computer science. According to this view, analogy
depends on the mapping or alignment of the elements of source and target. The
mapping takes place not only between objects, but also between relations of objects
and between relations of relations. The whole mapping yields the assignment of a
predicate or a relation to the target. Structure mapping theory has been applied
and has found considerable confirmation in psychology. It has had reasonable
success in computer science and artificial intelligence (see below). Some studies
extended the approach to specific subjects, such as metaphor and similarity.[15]
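The idea of aligning relations, not just objects, can be illustrated with Gentner's classic solar-system/atom example. The toy matcher below is a hypothetical sketch, far simpler than the actual Structure-Mapping Engine:

```python
# Toy illustration of structure mapping: each domain is a set of named
# relations over objects. Aligning relations induces a mapping of objects,
# and source-only relations become candidate inferences about the target.

source = {  # relation name -> (arg1, arg2)
    "attracts": ("sun", "planet"),
    "revolves_around": ("planet", "sun"),
    "more_massive": ("sun", "planet"),
}
target = {
    "attracts": ("nucleus", "electron"),
    "revolves_around": ("electron", "nucleus"),
}

def align(source, target):
    """Map source objects to target objects via shared relation names."""
    mapping = {}
    for rel, (s1, s2) in source.items():
        if rel in target:
            t1, t2 = target[rel]
            mapping[s1], mapping[s2] = t1, t2
    return mapping

mapping = align(source, target)
print(mapping)  # {'sun': 'nucleus', 'planet': 'electron'}

# Relations present only in the source are carried over by the mapping,
# yielding the assignment of a new relation to the target:
for rel, (s1, s2) in source.items():
    if rel not in target:
        print("infer:", rel, (mapping[s1], mapping[s2]))
# infer: more_massive ('nucleus', 'electron')
```

The final step corresponds to the claim in the text that the whole mapping yields the assignment of a predicate or relation to the target.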
Logic
Logicians analyze how analogical reasoning is used in arguments from analogy.
Linguistics
An analogy can be the linguistic process that reshapes word forms perceived to
break the rules into more common forms that follow those rules. For example, the
English verb help once had the preterite (simple past tense) holp and the past
participle holpen. These old-fashioned forms have been discarded and replaced by
helped through the power of analogy (or by applying the more frequently used verb +
-ed rule). This is called morphological leveling. Analogies can sometimes create
rule-breaking forms; one example is the American English past-tense form of dive:
dove, formed by analogy with words such as drive: drove.
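The leveling process can be sketched in code. This is a toy model only: the lexicon and the spelling rule below are simplified illustrations, not a serious linguistic analysis:

```python
# Toy model of morphological leveling: an irregular past-tense lexicon is
# gradually abandoned in favour of the productive "verb + -ed" rule.

irregular_past = {"help": "holp", "sing": "sang"}  # illustrative fragment

def past_tense(verb, leveled=frozenset()):
    """Return the past tense, using -ed for verbs that have been leveled."""
    if verb in irregular_past and verb not in leveled:
        return irregular_past[verb]
    if verb.endswith("e"):
        return verb + "d"
    return verb + "ed"

print(past_tense("help"))                    # 'holp'   (older English)
print(past_tense("help", leveled={"help"}))  # 'helped' (after leveling)
```

The same productive rule, applied to a verb it never previously covered, is what makes a neologism's inflected forms predictable from the start.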
Neologisms can also be formed by analogy with existing words. A good example is
software, formed by analogy with hardware; other analogous neologisms such as
firmware and vapourware have followed. Another example is the humorous[18] term
underwhelm, formed by analogy with overwhelm.
Some people present analogy as an alternative to generative rules for explaining
the productive formation of structures such as words. Others argue that they are in
fact the same and that rules are analogies that have essentially become standard
parts of the linguistic system, whereas clearer cases of analogy have simply not
(yet) done so (e.g. Langacker 1987: 445–447). This view agrees with the current
views of analogy in cognitive science which are discussed above.
Analogy is also a term used in the Neogrammarian school of thought as a catch-all
to describe any morphological change in a language that cannot be explained merely
by sound change or borrowing.
Science
Analogies are mainly used as a means of creating new ideas and hypotheses, or of
testing them; this is called the heuristic function of analogical reasoning.
Analogical arguments can also be probative, meaning that they serve as a means of
establishing the correctness of particular theses and theories. This application of
analogical reasoning in science is debatable. Analogy can help support important
theories, especially in those kinds of science in which logical or empirical proof
is not possible, such as theology, philosophy, or cosmology when it concerns those
areas of the cosmos (the universe) that are beyond any data-based observation, so
that knowledge of them stems from human insight and thinking beyond the senses.
Analogy can be used in theoretical and applied sciences in the form of models or
simulations which can be considered as strong indications of probable correctness.
Other, much weaker, analogies may also assist in understanding and describing
nuanced or key functional behaviours of systems that are otherwise difficult to
grasp or prove. For instance, an analogy used in physics textbooks compares
electrical circuits to hydraulic circuits.[19] Another example is the analogue ear
based on electrical, electronic or mechanical devices.
Mathematics
Some types of analogies can have a precise mathematical formulation through the
concept of isomorphism. In detail, this means that if two mathematical structures
are of the same type, an analogy between them can be thought of as a bijection
which preserves some or all of the relevant structure. For example, ℝ² and ℂ are
isomorphic as vector spaces, but the complex numbers ℂ have more structure than ℝ²
does: ℂ is a field as well as a vector space.
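The vector-space isomorphism in question can be written out explicitly; the map below is the standard one:

```latex
% The standard real-linear isomorphism between R^2 and C:
\varphi : \mathbb{R}^2 \to \mathbb{C}, \qquad \varphi(a, b) = a + bi .
% \varphi is a bijection and preserves the vector-space operations:
\varphi\big((a_1, b_1) + (a_2, b_2)\big) = \varphi(a_1, b_1) + \varphi(a_2, b_2),
\qquad \varphi\big(\lambda (a, b)\big) = \lambda\, \varphi(a, b) .
% But C carries extra structure, a field multiplication, with no
% counterpart among the vector-space operations of R^2:
(a_1 + b_1 i)(a_2 + b_2 i) = (a_1 a_2 - b_1 b_2) + (a_1 b_2 + a_2 b_1) i .
```

The analogy "ℝ² is like ℂ" is thus exact for the shared vector-space structure and silent about the field structure, which is precisely what "preserves some or all of the relevant structure" means.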
Category theory takes the idea of mathematical analogy much further with the
concept of functors. Given two categories C and D, a functor f from C to D can be
thought of as an analogy between C and D, because f has to map objects of C to
objects of D and arrows of C to arrows of D in such a way that the structure of
their respective parts is preserved. This is similar to the structure mapping
theory of analogy of Dedre Gentner, because it formalises the idea of analogy as a
function which makes certain conditions true.
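The functor condition can be illustrated with a minimal, hypothetical encoding of categories as objects plus named arrows. This is a toy consistency check, not a category-theory library:

```python
# Minimal illustration of the functor condition: a functor F must send an
# arrow f : X -> Y in C to an arrow F(f) : F(X) -> F(Y) in D, so that the
# structure of source and target is preserved by the mapping.

C_objects = {"X", "Y"}
C_arrows = {"f": ("X", "Y")}   # f : X -> Y

D_objects = {"A", "B"}
D_arrows = {"g": ("A", "B")}   # g : A -> B

F_obj = {"X": "A", "Y": "B"}   # object part of the functor
F_arr = {"f": "g"}             # arrow part of the functor

def is_functorial(c_arrows, d_arrows, f_obj, f_arr):
    """Check that each arrow's endpoints are mapped consistently."""
    for name, (src, dst) in c_arrows.items():
        image = d_arrows[f_arr[name]]
        if image != (f_obj[src], f_obj[dst]):
            return False
    return True

print(is_functorial(C_arrows, D_arrows, F_obj, F_arr))  # True
```

Swapping the object mapping (X ↦ B, Y ↦ A) while keeping f ↦ g would break the condition, just as a bad analogy maps objects without respecting the relations between them.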
Artificial intelligence
Further information: case-based reasoning
Further information: structure-mapping theory
A computer algorithm has achieved human-level performance on multiple-choice
analogy questions from the SAT test. The algorithm measures the similarity of
relations between pairs of words (e.g., the similarity between the pairs HAND:PALM
and FOOT:SOLE) by statistically analysing a large collection of text. It answers
SAT questions by selecting the choice with the highest relational similarity.[20]
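The cited system (Turney's Latent Relational Analysis) mines patterns from a large text corpus; the sketch below substitutes a much simpler proxy, cosine similarity of word-vector differences, with tiny hand-made vectors purely for illustration:

```python
import math

# Toy word vectors (hand-made for this sketch; a real system would derive
# relational statistics from a large corpus of text).
vec = {
    "hand": (1.0, 0.0), "palm": (0.8, -0.5),
    "foot": (0.0, 1.0), "sole": (-0.2, 0.5),
    "day":  (1.0, 1.0), "night": (-1.0, 1.0),
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def relational_similarity(pair1, pair2):
    """Compare the relations encoded by two word pairs via vector offsets."""
    (a, b), (c, d) = pair1, pair2
    off1 = tuple(x - y for x, y in zip(vec[a], vec[b]))
    off2 = tuple(x - y for x, y in zip(vec[c], vec[d]))
    return cosine(off1, off2)

# HAND:PALM should relate more like FOOT:SOLE than like DAY:NIGHT,
# so a multiple-choice answerer would pick FOOT:SOLE.
print(relational_similarity(("hand", "palm"), ("foot", "sole")) >
      relational_similarity(("hand", "palm"), ("day", "night")))  # True
```

Choosing the answer with the highest relational similarity, as the text describes, then reduces to an argmax over the candidate pairs.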
Analogical reasoning in the human mind is free of the false inferences that plague
conventional artificial intelligence models (a property called systematicity).
Steven Phillips and William H. Wilson[21][22] use category theory to demonstrate
mathematically how such reasoning could arise naturally, by using relationships
between the internal arrows that preserve the internal structures of the
categories, rather than the mere relationships between the objects (called
"representational states").
Thus, the mind, and more intelligent AIs, may use analogies between domains whose
internal structures transform naturally and reject those that do not.
Keith Holyoak and Paul Thagard (1997) developed their multiconstraint theory within
structure mapping theory. They argue that the "coherence" of an analogy depends on
structural consistency, semantic similarity and purpose. Structural consistency is
the highest when the analogy is an isomorphism, although lower levels can be used
as well. Similarity demands that the mapping connects similar elements and
relationships between source and target, at any level of abstraction. It is the
highest when there are identical relations and when connected elements have many
identical attributes. An analogy achieves its purpose if it helps solve the problem
at hand. The multiconstraint theory faces some difficulties when there are multiple
sources, but these can be overcome.[10] Hummel and Holyoak (2005) recast the
multiconstraint theory within a neural network architecture. A problem for the
multiconstraint theory arises from its concept of similarity, which, in this
respect, is not obviously different from analogy itself. Computer applications
demand that there are some identical attributes or relations at some level of
abstraction. The model was extended (Doumas, Hummel, and Sandhofer, 2008) to learn
relations from unstructured examples (providing the only current account of how
symbolic representations can be learned from examples).[23]
Mark Keane and Brayshaw (1988) developed their Incremental Analogy Machine (IAM) to
include working memory constraints as well as structural, semantic and pragmatic
constraints, so that a subset of the base analogue is selected and mapping from
base to target occurs in series.[24][25] Empirical evidence shows that humans are
better at using and creating analogies when the information is presented in an
order where an item and its analogue are placed together.[26]
Chalmers, French, and Hofstadter[27] challenged the shared structure theory and
especially its applications in computer science. They argue that there is no clear
line between
perception, including high-level perception, and analogical thinking. In fact,
analogy occurs not only after, but also before and at the same time as high-level
perception. In high-level perception, humans make representations by selecting
relevant information from low-level stimuli. Perception is necessary for analogy,
but analogy is also necessary for high-level perception. Chalmers et al. conclude
that analogy actually is high-level perception. Forbus et al. (1998) claim that
this is only a metaphor.[28] It has been argued (Morrison and Dietrich 1995) that
Hofstadter's and Gentner's groups do not defend opposite views, but are instead
dealing with different aspects of analogy.[29]