Computational Artefacts
Raymond Turner
Computer scientists construct things; they construct software, computers, tablets, embedded
systems, chips, type inference frameworks, natural language systems, compilers and
interpreters etc. These are computational artefacts, the technical artefacts of computer science.
A central activity of the subject involves their specification, design and construction. This paper
is concerned with some of the philosophical issues that surround and underpin this activity.
This paper complements the outline given in (Turner, 2013).
1. TECHNICAL ARTEFACTS
Technical artefacts are determined by two sets of properties: functional ones and structural
ones (Cummins, 1975; Kroes, 2006; McLaughlin, 2001; Vermaas, 2003; Franssen, 2009; Meyer,
2007). The former concern the artefact's purpose: a clock is for telling the time and a kettle is for
heating water. In contrast, structural properties pertain to an artefact’s physical makeup. In
particular, they provide the causal mechanisms that enable an artefact to meet its functional
requirements. Both characterisations seem necessary for something to count as a technical
artefact: without one it won’t work and without the other we don’t know what working
amounts to.
How do computational artefacts fit into this picture? Our starting point is the analysis of
computer programs given in (Moor, 1978). In his discussion of the nature of the software versus
hardware distinction Moor makes the following observation.
It is important to remember that computer programs can be understood on the physical level as well as the
symbolic level. The programming of early digital computers was commonly done by plugging in wires and
throwing switches. Some analogue computers are still programmed in this way. The resulting programs are
clearly as physical and as much a part of the computer system as any other part. Today digital machines
usually store a program internally to speed up the execution of the program. A program in such a form is
certainly physical and part of the computer system.
Seemingly programs may be both physical devices and symbolic entities. One interpretation of
this has it that both the symbolic and physical things are programs in their own right. So
understood, they are independent entities. But then how is the physical program characterised
and how does the stand-alone symbolic program achieve anything?
Imagine we find a physical device in nature. It has a small screen with a numerical keypad.
When two numbers are input and a key marked × is pressed, it outputs another number. If we
know nothing about multiplication the number output would not strike us as the multiplication
of the two inputs. We would be limited to pressing buttons and viewing the output. As such we
would be experimenting with the device. Only when we have the concept of multiplication can
the device be taken to be a multiplier. Given the duality thesis, a physical thing that has no
associated functional description is not yet a computational artefact.
While computational artefacts fit the general picture provided by technical artefacts they have
some distinctive features. One concerns the abstract nature of a good number of the
computational variety; the other relates to the form that functional characterisations take in
computer science.
2. THE LANGUAGES OF COMPUTER SCIENCE
Computational functions say what the artefacts of computer science are intended to do.
Normally they take the form of requirements that are expressed in specially designed formal
languages. These include architectural description, specification, programming, database query,
web ontology and hardware description languages. They encode various abstraction levels
(Floridi, 2008) in the design process. Generally, they are formal languages: each is defined by a
precise syntax that makes explicit its types and operations. This formality not only avoids
ambiguity and aids interoperability but also allows the construction of parsers, type checkers
and other design tools. We shall illustrate the nature of these languages with a range of
examples that span the abstraction hierarchy of contemporary computer science.
At the highest level of abstraction are architectural description languages (Bass, 2003). These
are employed to specify the large-scale structure of computational systems and, in particular,
the components and connections in a complex system. They include Rapide (Luckham, 1998),
Darwin (Darwin, 1997), Wright (Allen, 1997) and Acme (Garlan, Monroe, & Wile, 2000). David
Garlan provides an analysis of the shared concepts of these languages. In this analysis, the
common types of objects used for representation are Components, Connectors and Systems.
Components generally include the primary computational elements and data stores of a system.
Typical examples include clients, servers, objects, blackboards, and databases. In its simplest
form a component consists of two finite sets of ports, one input set and one output set. We
might express their formation rules as follows.

    I : Set(Port)    O : Set(Port)
    ──────────────────────────────
         ⟨I, O⟩ : Component
Connectors represent interactions among components; they are the glue of architectural design.
Examples include simple forms of interaction, such as pipes, procedure calls, and event
broadcasts. The following is a very rudimentary syntax for connecting two components to form
a new one via a pipe. Here an output port of the first is glued to an input port of the second.

    C : Component    C′ : Component    o ∈ Out(C)    i ∈ In(C′)
    ───────────────────────────────────────────────────────────
                  ⟨C, o⟩ ⩥ ⟨C′, i⟩ : Component
(Although we could do this in many ways, we shall employ the formalism of typed predicate logic
(Turner, 2009) to introduce the syntax and semantics of these languages in a uniform way.)
Connectors may also represent more complex interactions such as a SQL link between a
database and an application. Systems represent configurations (graphs) of components and
connectors. Components and systems are often represented pictorially.
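These formation rules admit a simple concrete reading. In the following Python sketch (all names are illustrative; no actual architectural description language is being modelled), a component is a pair of finite port sets, and the pipe connector glues an output port of one component to an input port of another, hiding the glued ports:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    """A component in its simplest form: two finite sets of ports."""
    inputs: frozenset   # input ports, In(C)
    outputs: frozenset  # output ports, Out(C)

def pipe(c1: Component, o: str, c2: Component, i: str) -> Component:
    """Glue output port o of c1 to an input port i of c2, forming a new component.
    The assertions mirror the side conditions of the formation rule."""
    assert o in c1.outputs, "o must be an output port of the first component"
    assert i in c2.inputs, "i must be an input port of the second component"
    # The glued ports become internal; the remaining ports are exposed.
    return Component(
        inputs=c1.inputs | (c2.inputs - {i}),
        outputs=(c1.outputs - {o}) | c2.outputs,
    )

client = Component(frozenset({"req_in"}), frozenset({"req_out"}))
server = Component(frozenset({"srv_in"}), frozenset({"srv_out"}))
system = pipe(client, "req_out", server, "srv_in")
print(sorted(system.inputs), sorted(system.outputs))  # exposes req_in and srv_out
```

A system, on this reading, is just the graph of components built up by repeated applications of such connectors.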
At a lower level of abstraction are the languages that are explicitly termed specification
languages, e.g., (Dawes, 1991; Jones, 1986; Woodcock, 1996; Diller, 1990; Abrial, 1996). In their
primary use these are aimed at individual module and
program specification. For example, in Z the central vehicle of expression is the notion of
schema. This holds two pieces of information: a declaration part that carries the type
information of the identifiers of the specification, and a predicate part that is an expression in a
logical language that constrains the identifiers to satisfy the predicate. Roughly, declarations
assign types to identifiers and predicates constrain them. The syntax might be given by the
following rules.
    x : Identifier    T : Type
    ──────────────────────────
        x : T : Declaration

    D : Declaration    φ : Predicate
    ────────────────────────────────
          [D | φ] : Schema
Further down the abstraction hierarchy are programming languages, the core languages of the
discipline (Tennent, 1981; Mitchell, 2003). For instance, imperative or procedural languages
have syntax rules such as the following.

    x : Variable    E : Expression        P : Program    Q : Program
    ──────────────────────────────        ─────────────────────────
          x := E : Program                     P ; Q : Program

    B : Boolean    P : Program
    ──────────────────────────
      while B do P : Program
The π-calculus of Milner (Milner, 2006) is a language of processes and concurrency. Some of its
syntax might be expressed by the following rules.

    c : Channel    x : Name    P : Process        c : Channel    x : Name    P : Process
    ──────────────────────────────────────        ──────────────────────────────────────
             c(x).P : Process                              c̄⟨x⟩.P : Process

    P : Process    Q : Process
    ──────────────────────────
         P | Q : Process
In the first rule c(x).P is a process waiting for a message that is sent on a communication
channel named c. After the message is passed, the computation proceeds with P. Likewise
c̄⟨x⟩.P is a process where the name x is emitted on channel c before proceeding as P. The last
rule introduces concurrency for processes.
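As a toy illustration of input, output and parallel composition (a fragment only, not Milner's full calculus; the class names and the example processes are our own), the following Python sketch performs a single communication step between two parallel processes:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Send:     # c̄⟨x⟩.P : emit name x on channel c, then continue as P
    chan: str
    name: str
    cont: object

@dataclass(frozen=True)
class Recv:     # c(y).P : receive a name on channel c, bind it to y in P
    chan: str
    var: str
    cont: object

@dataclass(frozen=True)
class Par:      # P | Q : parallel composition
    left: object
    right: object

@dataclass(frozen=True)
class Nil:      # the inert process 0
    pass

def subst(p, var, name):
    """Replace free occurrences of var by name (naively, ignoring capture)."""
    if isinstance(p, Send):
        return Send(name if p.chan == var else p.chan,
                    name if p.name == var else p.name,
                    subst(p.cont, var, name))
    if isinstance(p, Recv):
        if p.var == var:                  # var is rebound here; leave the body alone
            return p
        return Recv(name if p.chan == var else p.chan, p.var,
                    subst(p.cont, var, name))
    if isinstance(p, Par):
        return Par(subst(p.left, var, name), subst(p.right, var, name))
    return p                              # Nil

def step(p):
    """One reduction: a Send and a Recv on the same channel, in parallel, synchronise."""
    if isinstance(p, Par):
        l, r = p.left, p.right
        if isinstance(l, Send) and isinstance(r, Recv) and l.chan == r.chan:
            return Par(l.cont, subst(r.cont, r.var, l.name))
        if isinstance(l, Recv) and isinstance(r, Send) and l.chan == r.chan:
            return Par(subst(l.cont, l.var, r.name), r.cont)
    return None

# c̄⟨hello⟩.0 | c(y).ȳ⟨data⟩.0 : after one step the receiver uses the received name
p = Par(Send("c", "hello", Nil()), Recv("c", "y", Send("y", "data", Nil())))
q = step(p)
print(q)
```

Note that the received name may itself be used as a channel, which is the distinctive mobility feature of the calculus.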
At the other extreme are hardware description languages, e.g., VHDL (Ashenden, 2008). These
describe the behaviour of individual physical components. A digital system in VHDL consists of a
design entity that may contain other component entities. Each entity consists of a
declaration part and an architecture body. The former defines the input and output signals and
the latter contains the description of the entity, and is composed of interconnected entities,
processes and components, all operating concurrently.
    D : Declaration    B : ArchitectureBody
    ───────────────────────────────────────
             [D | B] : Entity
Throughout the artefact construction process functional requirements are expressed in these
languages. Admittedly, we have presented a rather simple and clean picture. The syntax rules
may be more complex, and languages may incorporate several of the above levels of abstraction,
but this does not affect the main point of this section, namely that the functional specifications
of computer science are expressed in an ever growing family of formal languages.
3. SEMANTICS
These are artificial languages. Consequently, to use them we require some account of their
intended meanings. Many computer scientists, including many leading ones, argue that any such
account must be mathematical in nature. In fact, there are several related arguments to be found
in the literature.
The first is best characterised as the argument from ambiguity. The following quote is due to
one of the founders of denotational semantics.
Any discussion on the foundations of computing runs into severe problems right at the start. The difficulty is
that although we all use words such as ‘name’, ‘value’, ‘program’, ‘expression’ or ‘command’ which we think
we understand, it often turns out on closer investigation that in point of fact we all mean different things by
these words, so that communication is at best precarious. These misunderstandings arise in at least two
ways. The first is straightforwardly incorrect or muddled thinking. An investigation of the meanings of these
basic terms is undoubtedly an exercise in mathematical logic and neither to the taste nor within the field of
competence of many people who work on programming languages. As a result the practice and development
of programming languages has outrun our ability to fit them into a secure mathematical framework so that
they have to be described in ad hoc ways. Because these start from various points they often use conflicting
and sometimes also inconsistent interpretations of the same basic terms. (Strachey, 2000)
With natural language descriptions there is a lack of semantic clarity over the basic notions.
There are two aspects to this. Words such as ‘name’, ‘value’, ‘program’, ‘expression’ or
‘command’ are not to be taken at face value as English words; they are technical terms.
Consequently, their meanings must be laid down. Semantic accounts do not describe natural
language notions in a precise way. Instead they are definitional: they tell us what these notions
are to mean. Moreover, their meanings must be laid down precisely. So the first requirement for
formal semantics is precision.
The second argument concerns the design of the languages themselves. Programming and
specification languages involve several complex notions and their interactions, and it is hard to
investigate their formal properties without some mathematical tools.
In particular, Java has integrated multithreading to a far greater extent than most programming languages.
It is also one of the only languages that specifies and requires safety guarantees for improperly synchronized
programs. It turns out that understanding these issues is far more subtle and difficult than was previously
thought. The existing specification makes guarantees that prohibit standard and proposed compiler
optimizations; it also omits guarantees that are necessary for safe execution of much existing code. (Pugh,
2000)
Notions such as threading and synchronization are complex concepts for which we require a
better conceptual understanding and a more exact formulation.
But to me, each revision of the document simply showed how far the initial F-level implementation had
progressed. Those parts of the language that were not yet implemented were still described in free-flowing
flowery prose giving promise of unalloyed delight. In the parts that had been implemented, the flowers had
withered; they were choked by an undergrowth of explanatory footnotes, placing arbitrary and unpleasant
restrictions on the use of each feature and loading upon a programmer the responsibility for controlling the
complex and unexpected side-effects and interaction effects with all the other features of the language.
(Hoare, 1981)
All this has led to the use of mathematical interpretations. There are many options here; domain
theoretic semantics is one such. For example, for a simple while language each program denotes
a function from states to states where states are functions from variables to numbers.
    ⟦x := E⟧s = s[x ← ⟦E⟧s]
    ⟦P ; Q⟧s = ⟦Q⟧(⟦P⟧s)
    ⟦skip⟧s = s
    ⟦if B then P else Q⟧s = if ⟦B⟧s then ⟦P⟧s else ⟦Q⟧s
    ⟦while B do P⟧s = if ⟦B⟧s then ⟦while B do P⟧(⟦P⟧s) else s
Here s[x ← n] is the same state as s except that x is assigned the value n. The clause for
sequencing interprets it as functional composition; that for the conditional returns the value of
the first program if the Boolean is true and of the second if it is false. The last reflects the fact
that iteration unfolds via conditionals and sequencing and proceeds until the Boolean becomes
false. Note that this
involves an implicit fixed point construction which is supported by the underlying mathematical
theory of domains: each program denotes a continuous function on the domain. This yields a
version of denotational semantics (Stoy, 1977; Gunter, 1992; Schmidt, 1986; Milne, 1975;
Gordon, 1979). We might have employed other semantic techniques such as category theoretic
(Oles, 1982). For the present concerns it does not matter. The important point is that some
precise mathematical account is provided.
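These clauses can be transcribed almost line by line into executable form. The following Python sketch is purely illustrative (the function names and the dictionary representation of states are ours, not from any text): each construct denotes a function from states to states, and the while clause unfolds exactly as the fixed-point equation dictates.

```python
# States are dictionaries from variable names to numbers.

def assign(x, e):
    """⟦x := e⟧s = s[x ← ⟦e⟧s]"""
    return lambda s: {**s, x: e(s)}

def seq(p, q):
    """⟦p ; q⟧s = ⟦q⟧(⟦p⟧s): sequencing is functional composition."""
    return lambda s: q(p(s))

def skip():
    """⟦skip⟧s = s"""
    return lambda s: s

def cond(b, p, q):
    """The conditional: the first branch if b holds in s, otherwise the second."""
    return lambda s: p(s) if b(s) else q(s)

def while_(b, p):
    """⟦while b do p⟧ unfolds until b becomes false (the implicit fixed point)."""
    def w(s):
        return w(p(s)) if b(s) else s
    return w

# Factorial as a while program: i := 0; fact := 1; while i < n do i := i+1; fact := fact*i
prog = seq(seq(assign("i", lambda s: 0), assign("fact", lambda s: 1)),
           while_(lambda s: s["i"] < s["n"],
                  seq(assign("i", lambda s: s["i"] + 1),
                      assign("fact", lambda s: s["fact"] * s["i"]))))
print(prog({"n": 5}))   # the final state maps "fact" to 120
```

A non-terminating loop corresponds to a Python function that never returns, which is the computational shadow of the bottom element in the domain-theoretic account.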
However, in practice, many will use these languages with a poor understanding of any precise
semantics. Indeed, some would claim that the average users’ understanding has nothing to do
with such mathematical structures. Instead they claim that many come to understand
programming languages by building mental models of the operators and types of the language.
Taken as an approach to semantics, this has it that the meanings of expressions in the language
are somehow stored in the minds of users. This mental-model approach to semantics is a special
case of the mental-models view of function, where the function is somehow located in the head
of some agent (Colburn & Shute, 2008).
But such an approach is problematic. Any form of semantics must play a normative role
(Boghossian, 1989). Although the exact form of any normative constraint (Glüer, 2009; Wright
C. a., 2002) is in question, there is agreement on a minimal requirement.
The fact that the expression means something implies that there is a whole set of normative truths
about my behavior with that expression; namely, that my use of it is correct in application to
certain objects and not in application to others. .... The normativity of meaning turns out to be, in
other words, simply a new name for the familiar fact that, regardless of whether one thinks of
meaning in truth-theoretic or assertion-theoretic terms, meaningful expressions possess conditions
of correct use. Kripke's insight was to realize that this observation may be converted into a
condition of adequacy on theories of the determination of meaning: any proposed candidate for
the property in virtue of which an expression has meaning, must be such as to ground the
'normativity' of meaning-it ought to be possible to read off from any alleged meaning constituting
property of a word, what is the correct use of that word (Boghossian, 1989).
The normative requirement on programming language semantics has two aspects.
Semantics must govern any implementation: it must fix what it means for an implementation to
be correct. In addition, it must also fix correctness
for the user in the sense that it must provide a means of articulating the required relationship
between any program and its specification. Semantics must determine what constitutes a
mistake in any implementation and what constitutes a user mistake. This cannot be cashed out
by what goes on inside some programmer’s head. The following is aimed at natural language
semantic theories but it equally applies to artificial ones.
Given . . . that everything in my mental history is compatible both with the conclusion that I meant plus and
with the conclusion that I meant quus, it is clear that the skeptical challenge is not really an epistemological
one. It purports to show that nothing in my mental history of past behavior -- not even what an omniscient
God would know -- could establish whether I meant plus or quus. But then it appears to follow that there was
no fact about me that constituted my having meant plus rather than quus. (Kripke, 1982)
Programmers may represent matters in all sorts of ways. Moreover, one woman's mental model
may be completely different to another's. Semantics cannot play its normative role unless there
is a shared understanding. And once this shared understanding is laid down, we move in the
direction of formal semantics.
4. AXIOMATIC THEORIES
The proof theory for such languages provides explicit rules for reasoning about the constructs
of the language. Traditionally, they stand to the denotational semantics as the proof rules of
predicate logic stand to its standard model theory. For illustrative purposes we shall employ the
formalism of typed predicate logic (Turner R. , 2009) to provide a uniform account of the proof
systems of these languages.
At the level of architectural design, we need to reason about components and connectors, and
for this we require some account of their properties. For components we might insist that
connecting devices be associative in the following sense.
    C, C′, C″ : Component    o ∈ Out(C)    i ∈ In(C′)    o′ ∈ Out(C′)    i′ ∈ In(C″)
    ────────────────────────────────────────────────────────────────────────────────
    ⟨⟨C, o⟩ ⩥ ⟨C′, i⟩, o′⟩ ⩥ ⟨C″, i′⟩ = ⟨C, o⟩ ⩥ ⟨⟨C′, o′⟩ ⩥ ⟨C″, i′⟩, i⟩

Here = is an equality relation for components and ports. This is just one of a minimal set
of rules that might be employed to reason about the structure of components, connectors and
systems.
At the level of individual specifications, one of the central mathematical devices is the notion of
a schema. In the specification language Z (Spivey, 1998) a simple specification has the form
    S ≐ [x : T | φ]
This introduces a new relation whose interpretation is fixed by the schema on the right. More
explicitly, it is given proof theoretic life through the following rule: for an object of type T, the
relation and the predicate of the schema coincide.
    t : T        S ≐ [x : T | φ]
    ─────────────────────────────
           S(t) ↔ φ[t/x]
For programming languages, a proof theoretic account embodies an operational definition. In its
modern guise (Plotkin, 2004; Fernandez, 2004) it uses a mechanism of evaluation where, in its
simplest form, the evaluation relation for imperative languages takes the following shape.
    ⟨P, s⟩ ⇓ s′
This indicates that evaluating P in state s terminates in state s'. For example, the rules for a
simple while loop would take the following form.
    ⟨B, s⟩ ⇓ true    ⟨P, s⟩ ⇓ s′    ⟨while B do P, s′⟩ ⇓ s″        ⟨B, s⟩ ⇓ false
    ────────────────────────────────────────────────────        ─────────────────────
               ⟨while B do P, s⟩ ⇓ s″                            ⟨while B do P, s⟩ ⇓ s
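Read as an algorithm, the big-step rules can be transcribed into an evaluator that computes the relation ⟨P, s⟩ ⇓ s′ directly. The following Python sketch is illustrative only (programs are nested tuples standing in for abstract syntax; the construct set is deliberately minimal):

```python
def evaluate(prog, s):
    """Big-step evaluation: compute the s' with ⟨prog, s⟩ ⇓ s'.
    States are dictionaries; Booleans and expressions are Python functions on states."""
    op = prog[0]
    if op == "assign":                      # ⟨x := e, s⟩ ⇓ s[x ← e(s)]
        _, x, e = prog
        return {**s, x: e(s)}
    if op == "seq":                         # evaluate the first program, then the second
        _, p, q = prog
        return evaluate(q, evaluate(p, s))
    if op == "while":                       # the two while rules: test the Boolean
        _, b, p = prog
        if b(s):                            # ⟨b, s⟩ ⇓ true: run the body, then loop again
            return evaluate(prog, evaluate(p, s))
        return s                            # ⟨b, s⟩ ⇓ false: the state is unchanged
    raise ValueError(f"unknown construct: {op}")

# The factorial loop once more, this time as a piece of abstract syntax.
loop = ("seq",
        ("seq", ("assign", "i", lambda s: 0), ("assign", "fact", lambda s: 1)),
        ("while", lambda s: s["i"] < s["n"],
         ("seq", ("assign", "i", lambda s: s["i"] + 1),
                 ("assign", "fact", lambda s: s["fact"] * s["i"]))))
print(evaluate(loop, {"n": 4}))   # "fact" ends at 24
```

Where the denotational account assigns each program a function outright, the operational account tells us, rule by rule, how a configuration is evaluated; for this simple language the two agree.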
For the π calculus, the following rules govern the parallel operator where ≡ is process
equivalence.
    P : Process    Q : Process        P : Process        P : Process    Q : Process    R : Process
    ───────────────────────────      ─────────────      ──────────────────────────────────────────
          P | Q ≡ Q | P                P | 0 ≡ P              (P | Q) | R ≡ P | (Q | R)
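These laws make parallel composition commutative and associative with 0 as unit, so equivalence of pure parallel compositions can be decided by flattening each side into a multiset of threads. A small illustrative Python sketch (atomic processes are represented as strings; the encoding is ours):

```python
def threads(p):
    """Flatten nested parallel compositions into a list of atomic processes,
    dropping the inert process "0" (the unit law P | 0 ≡ P)."""
    if isinstance(p, tuple) and p[0] == "par":
        return threads(p[1]) + threads(p[2])
    return [] if p == "0" else [p]

def congruent(p, q):
    """P ≡ Q under commutativity, associativity and the unit law:
    both sides flatten to the same multiset of threads."""
    return sorted(threads(p)) == sorted(threads(q))

P, Q, R = "P", "Q", "R"
assert congruent(("par", P, Q), ("par", Q, P))                          # P|Q ≡ Q|P
assert congruent(("par", P, "0"), P)                                    # P|0 ≡ P
assert congruent(("par", ("par", P, Q), R), ("par", P, ("par", Q, R)))  # associativity
```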
In axiomatic form, the languages of computer science are mathematical theories of programs,
processes, finite sets and functions i.e., mathematical theories of the various operations and
types that constitute them. Indeed, as mathematical structures these axiomatic systems have
ontological and definitional priority. The underlying concepts that form the basis of the
axiomatisation are intended to be the ones that are directly operated with by users. Generally,
the latter do not see these concepts as unpacked via translation into set theory or any other
existing mathematical notions. However, nothing of what follows depends upon which form of
language definition is taken as definitional.
Finally, to be clear, in suggesting that the languages of computer science should be formalised as
axiomatic systems, we are not resurrecting a form of mental-model semantics. Instead, we are
advocating axiomatic systems that directly encode the understanding reflected in practice. But
these systems are publicly available via their axiomatic forms. They do not live in agents'
heads, whatever that might mean.
5. DEFINITION
Consider the following recursion equations.

NewFac(0) = 78
NewFac(n+1) = (n+1)*NewFac(n)
In the standard model of arithmetic this introduces a one-place function from the natural
numbers to the natural numbers. Much of mathematics involves this kind of definition whereby
new notions are introduced and investigated. We are suggesting that, in the first instance,
expressions in our languages are definitions in much the same way. They introduce notions that
are unpacked in terms of the underlying mathematical ontology of the language. They are
stipulative definitions (Gupta, 2008).
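Transcribed into Python (purely for illustration), the recursion determines a unique total function on the naturals, whatever base value the definition stipulates (here 78, as in the equations above):

```python
def new_fac(n: int) -> int:
    """The stipulative definition: NewFac(0) = 78, NewFac(n+1) = (n+1) * NewFac(n)."""
    return 78 if n == 0 else n * new_fac(n - 1)

# The equations fix the function's value at every argument.
print(new_fac(0), new_fac(3))   # 78 and 3*2*1*78 = 468
```

Nothing in the equations describes an antecedently given notion; they introduce one.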
For example, in systems architecture a key property of systems design is that the overall
topology of a system is defined independently from the content of the components and
connectors that make up the system. A simple example is the diagrammatic definition of the
standard client- server architecture. In terms of its underlying ontology, this introduces a new
component.
[Diagram: a Client component connected to a Server component by an RPC connector.]
Lower down specialised specification languages are employed. The following are examples of
functional specifications written in Z. A Bank and withdrawal operation for an ATM might be
specified as follows.
    Bank ≐ [ Deposit : Customer ⇸ ℕ | ∀x • Deposit(x) > Min ]

    Withdraw ≐ [ ΔBank; n : ℕ; c : Customer | Deposit′(c) = Deposit(c) − n ]
The first defines a bank as a predicate whose type is inhabited with partial functions from
customers to numbers. These satisfy the demand that the deposits are always greater than some
minimum amount. The second defines the withdrawal operation as a relation between two
states of the bank: the before and after states.
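Read operationally, the two schemas can be modelled as a state invariant and a before/after relation. In the following illustrative Python sketch, Min is given an arbitrary value, since the specification leaves it open, and partial functions are modelled as dictionaries:

```python
MIN = 10  # a stand-in for the unspecified constant Min in the Bank schema

def bank_invariant(deposit: dict) -> bool:
    """Bank: Deposit is a (partial) function Customer ⇸ ℕ with every balance above Min."""
    return all(v > MIN for v in deposit.values())

def withdraw(deposit: dict, c, n: int) -> dict:
    """Withdraw: relates the before state to the after state, Deposit'(c) = Deposit(c) - n.
    ΔBank requires the Bank invariant to hold of both states."""
    after_state = {**deposit, c: deposit[c] - n}
    assert bank_invariant(deposit) and bank_invariant(after_state), "Bank invariant violated"
    return after_state

before = {"alice": 100, "bob": 50}
after = withdraw(before, "alice", 30)
print(after["alice"])   # 70
```

The sketch makes vivid that the schema itself is a predicate, not a procedure: it constrains which before/after pairs count as legitimate withdrawals.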
On the present view, programming itself is first and foremost a definitional activity. Programs
define processes whose abstract content is determined by the semantics of the languages. For
example, the following is a definition in a simple while programming language.
i := 0;
fact := 1;
while i < n do i := i+1; fact := fact*i
This defines an operation on the underlying state of the abstract machine. Likewise, the
following is a definition of the handover protocol for the GSM Public Land Mobile Network
expressed in the ∏-calculus.
    Car(talk, switch) ≐ talk.Car(talk, switch) + switch(talk′, switch′).Car(talk′, switch′)
The observer pattern has observers, i.e. objects that watch the state of another object, and an
object that is being watched, called the Subject. The latter contains a list of observers to notify of
any change in its state. Its methods enable observers to register and deregister themselves.
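The pattern just described can be rendered in a few lines of Python (a schematic sketch; the class names and the Logger example are ours, chosen to mirror the description):

```python
class Subject:
    """The watched object: keeps a register of observers and notifies them of changes."""
    def __init__(self):
        self._observers = []
        self._state = None

    def register(self, obs):
        self._observers.append(obs)

    def deregister(self, obs):
        self._observers.remove(obs)

    def set_state(self, state):
        self._state = state
        for obs in self._observers:      # notify every registered observer
            obs.update(self._state)

class Logger:
    """An observer: watches the state of the subject and records what it sees."""
    def __init__(self):
        self.seen = []
    def update(self, state):
        self.seen.append(state)

subject, logger = Subject(), Logger()
subject.register(logger)
subject.set_state("ready")
subject.set_state("running")
print(logger.seen)   # ['ready', 'running']
```

On the present view, such a definition fixes an abstract pattern of interaction; any particular rendering in Java or Python is one of its implementations.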
6. IMPLEMENTATION
Consider the recursion equations for factorial.

Fac(0) = 1
Fac(n+1) = (n+1)*Fac(n)

Taken alone these define the factorial function. But they might also be used as the functional
specification of the While program for factorial.
i := 0;
fact := 1;
while i < n do i := i+1; fact := fact*i
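The normative role of the specification can be made concrete: the recursion equations supply the criterion of correctness against which the program is judged. The following Python sketch (an illustration only; sampling tests agreement, it does not prove it) runs both side by side:

```python
def fac(n: int) -> int:
    """The functional specification: Fac(0) = 1, Fac(n+1) = (n+1) * Fac(n)."""
    return 1 if n == 0 else n * fac(n - 1)

def while_program(n: int) -> int:
    """A direct transcription of the While program for factorial."""
    i, fact = 0, 1
    while i < n:
        i = i + 1
        fact = fact * i
    return fact

# The specification supplies the criterion of correctness;
# the program is the candidate artefact judged against it.
assert all(while_program(n) == fac(n) for n in range(10))
print("agreement on 0..9")
```

A full correctness argument would proceed by induction on n, using the semantics of the loop; the point here is only the direction of governance, from specification to artefact.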
Finally, consider an abstract machine acting as a specification of a physical one. Here the
artefact will consist of a physical device with associated physical operations. The medium of
implementation will include the material of the physical machine.
Each of these examples determines a relationship between three things: a definition, a medium
of implementation and the artefact.
We shall write this as a relation schema I[F, A, M]. There is, however, not one relationship here:
each of the above examples constitutes a different relation. What they all have in
common is the normative governance of the function over the artefact i.e., F is taken to provide
the criteria of correctness and malfunction for A. This normative role of function is taken to be
part of any general theory of function (Kroes P. a., 2006). It is the essential logical relationship
that underlies specification and implementation (Turner R. , 2010).
7. ABSTRACT ARTEFACTS
These examples highlight a very distinctive feature of computational artefacts. Consider the
factorial function and the While Program that is taken to be its implementation. The latter
artefact is a program that semantically determines an abstract operation fixed by the underlying
abstract machine of the While language. The artefact is not a physical thing; it is mathematical
in nature. Matters are much the same with the UML (Booch, 1999) specification of a Java
program. Here the function is the UML specification whereas the artefact, at least from the
programmer’s perspective, is not a physical thing. From the programmer’s perspective, the
artefact is the abstract Java program. This is determined not by any physical realization but by
its linguistic form together with its semantic interpretation on its underlying abstract machine.
The programmer constructs the program from the medium of the Java programming language.
Indeed, the programmer may have little awareness or knowledge of any physical thing that will
eventually run on some physical device or computer. Certainly, in general she could not, in any
detail, describe any physical counterpart. Her design focus is the Java programming language
within which she must construct a program. And this is an abstract thing. This is not explained
in terms of causal processes, but in terms of the underlying abstract machine.
Next consider the situation in which finite sets are implemented as lists. Again the resulting
artefact is not a physical thing; it is an abstract object, an abstract list. While any such list will
get implemented in a concrete data structure in a programming language, and finally in a
physical store, from the perspective of the implementer, who is attempting to implement sets as
lists, in her consciousness, the artefacts are abstract things.
Finally consider a whole programming language where the artefact is a compiler (Turner,
2013). The function is fixed by some semantic
description of the language, for example, an operational semantics based upon an abstract
machine. In this case the structure prescribes the language that the compiler is written in e.g.
SML or Python. The artefact, the compiler itself, is a program in Python whose specification is
given as the semantic definition of the host language.
These examples should convince the reader that computational artefacts demand a
generalisation of the standard account of duality for technical artefacts. From the perspective of
the implementer, the medium of implementation is abstract. The artefact's functionality must be
realised through an appropriate structure, but it need not be physical; it may be another
abstract medium. In this generalised framework the structural description gives way to a more
embracing one that brings the medium of implementation to the forefront. Instead of properties
such as made of carbon fibre we use properties such as constructed from arrays in Pascal,
implemented in Java. Structural descriptions involve the medium of implementation (e.g. Pascal)
that may also include further requirements on the implementation (e.g. use stacks).
Software is produced in a series of layers of decreasing levels of abstraction, where in the early
layers both function and artefact are abstract. Such sequences of function/artefact pairings form
one of the central design tools of computer science. For instance, architectural description might
serve as the functional requirement for a suite of specifications whose structural description
insists that they be written in UML. The resulting UML specifications may then be employed as
functional descriptions for individual programs written in Java. In turn these can serve as
functional definitions of actual physical programs that are hidden from the user, and
automatically generated. This brings out the fact that the difference between function and
artefact is a relative one: what is artefact at one level becomes function at the next.
In summary, the technical artefacts of computer science can be abstract or physical, and are
produced as a sequence of function/artefact pairs. As technical artefacts, this complex logical
structure, spread over various levels of abstraction, makes them somewhat special, and rather
different to the common physical ones such as dominoes, bridges, steam irons and cycles.
8. SEMANTIC INTERPRETATION AND IMPLEMENTATION
In principle, the distinction between function and artefact should be absolute: there should be
no leakage between the two. However, there are claims to the effect that implementation is
somehow involved in semantic interpretation. For example, (Fetzer, 1988) observes that
programs have a different semantic significance to theorems. In particular, he asserts:
…programs are supposed to possess a semantic significance that theorems seem to lack. For the
sequences of lines that compose a program are intended to stand for operations and procedures
that can be performed by a machine, whereas the sequences of lines that constitute a proof do not.
(Fetzer, 1988)
This insists that programs are intended to stand for operations that can be performed by a
physical machine. If this is cashed out as the demand that programs need to be implemented as
physical operations then this is consistent with the present perspective. But the quote suggests
that this referential aspect is part of the semantic significance of programs. What does this
mean? Does it say that the physical properties of the implementation contribute to the meaning
of programs written in the language? Colburn (Colburn T. R., 2000) is a little more explicit when
he insists that the simple assignment statement A := 13×74 is semantically ambiguous between
something like the abstract operational semantics, and the physical one given as:
physical memory location A receives the value of physically computing 13 times 74. (Colburn,
2000)
This suggests that both the physical and the abstract machines are involved in the
interpretation of assignment. That is in order to understand assignment one needs to know not
only its interpretation on an abstract machine, but also what actually happens on a given
physical machine. This is leakage between the levels. This may happen where the abstract
account is not sufficient for the user to construct correct programs. In other words, the function
is not properly distinguished from the structure. But if an actual physical machine is taken to
contribute in any way to the meaning of the constructs of the language, then their meaning is
dependent upon the contingencies of the physical device. Consequently, in order to determine
what assignment actually means, we shall have to carry out a physical computation. In
particular, the meaning of the simple assignment statement may well vary with the physical
state of the device. So assignment does not mean assignment but rather what the physical
machine actually does. Similarly, for a physical device whose purported function is to multiply:
if the function is literally in the physical device, then there is no notion of malfunction. Under
this interpretation, multiplication does not mean multiplication but rather what the physical
machine actually does when it simulates multiplication.
Actual machines can malfunction: through melting wires or slipping gears they may give the
wrong answer. How is it determined when a malfunction occurs? By reference to the program of
the machine, as intended by its designer, not simply by reference to the machine itself. Depending
on the intent of the designer, any particular phenomenon may or may not count as a machine
malfunction. A programmer with suitable intentions might even have intended to make use of the
fact that wires melt or gears slip, so that a machine that is malfunctioning for me is behaving
perfectly for him. Whether a machine ever malfunctions and, if so, when, is not a property of the
machine itself as a physical object, but is well defined only in terms of its program, stipulated by its
designer. Given the program, once again, the physical object is superfluous for the purpose of
determining what function is meant. (Kripke, 1982).
Whether a machine malfunctions is not a property of the machine itself but is determined by its
functional description. If these objections are along the right lines, then the implementation
does not form any part of semantic interpretation. Indeed, the idea that it does is a special case
of the so-called causal view of function (Cummins, 1975). Generally, on the causal view, the
function is located in the physical device. Once we include abstract artefacts, the central claim
implicit in the causal theory has little to do with causation in any physical sense. Instead it rests
on the claim that the function is located in the artefact.
The semantic theory of implementation (Rapaport, 1999; Rapaport, 2000) is an example of this
generalised theory. It insists that we always interpret I[F, A, M] as semantic interpretation.
Coupled with the demand that a semantic interpretation should be normative, this would have it
that the artefact carries the normative force. And once again we have no independent notion of
correctness and malfunction. Running together semantic interpretation and implementation
locates the normative governance in the artefact.
9. CORRECTNESS
When is an artefact taken to be correct relative to its specification? According to our normative
perspective, the artefact must be in agreement with the definition functioning as a specification.
But what exactly is agreement/disagreement?
Consider the standard UML specification of the observer pattern. Any correct Java
implementation has to match the structure of the specification: each of its components has to
be implemented. In particular, there has to be a Java interface for Subject.
package com.journaldev.design.observer;

public interface Subject {
    public void register(Observer obj);
    public void unregister(Observer obj);
    public void notifyObservers();
    public Object getUpdate(Observer obj);
}
In the Java description, only the types of the individual methods are fixed by the UML
specification. At this level of abstraction, the correctness of the Java classes relative to the UML
specification amounts to little more than an agreement of method names and their types;
agreement amounts to type checking. But despite its simplicity, this is a formal mathematical
relationship.
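This kind of agreement can be seen in miniature: any class claiming to implement Subject must supply methods with exactly these names and types, and the Java compiler's type checker enforces that agreement. The following sketch assumes an Observer interface with a single update() method and a hypothetical ConcreteSubject; neither is fixed by the UML specification itself.

```java
import java.util.ArrayList;
import java.util.List;

// Assumed observer interface: a single update() method.
interface Observer { void update(); }

// The interface fixed by the UML specification: method names and types only.
interface Subject {
    void register(Observer obj);
    void unregister(Observer obj);
    void notifyObservers();
    Object getUpdate(Observer obj);
}

// A hypothetical implementation. Correctness at this level of abstraction is
// just that these signatures type-check against Subject.
class ConcreteSubject implements Subject {
    private final List<Observer> observers = new ArrayList<>();
    private Object state;

    public void register(Observer obj)    { observers.add(obj); }
    public void unregister(Observer obj)  { observers.remove(obj); }
    public void notifyObservers()         { for (Observer o : observers) o.update(); }
    public Object getUpdate(Observer obj) { return state; }

    // Not demanded by the specification: changes state and notifies observers.
    public void setState(Object s) { state = s; notifyObservers(); }
}
```

Nothing about the behaviour of the methods is checked here; a ConcreteSubject whose register did nothing would still type-check, which is exactly why this level of correctness is so weak.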
In contrast, the implementation of the individual methods employs a more traditional notion of
correctness. For example, consider the following specification of a square root function, written
in the style of the specification language VDM (Jones, 1986).

SQRT (x: Real) y: Real
pre x ≥ 0
post y × y = x ∧ y ≥ 0

Here x ≥ 0 is the precondition, which insists that the input is non-negative. Mathematically, the
specification defines an abstract relation SQRTP(x, y) that is determined as follows.

SQRTP(x, y) ≜ x ≥ 0 → (y × y = x ∧ y ≥ 0)

Here Real is a data type of real numbers, over which x and y range. What does it mean for a
Pascal program P to satisfy it?
More specifically, what does the following amount to?
I[SQRTP, P, Pascal]
Presumably, via the semantics of Pascal, P carves out a relationship RP between its input and
output: its extension. The minimal correctness condition insists that this input/output relation
satisfies the specification, i.e.

(C) ∀x: Real. ∀y: Real. RP(x, y) → SQRTP(x, y)
The abstract program, determined by the semantic interpretation of its language, satisfies the
abstract relation defined by the VDM expression. The statement (C) is a mathematical assertion
relating two abstract objects. Notice that proving correctness formally involves the semantic
interpretation of the two languages and is often facilitated by rules that relate the two. For
programs, these may take the form of logical rules such as Hoare triples (Hoare, 1969) or
rules of transformation (Morgan, 1990).
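The minimal correctness condition can be illustrated concretely. In the sketch below, sqrtp renders the VDM relation as a checkable predicate and sqrt stands in for a candidate program P; Newton's method and the floating-point tolerance are illustrative assumptions, not part of the specification.

```java
public class SqrtCheck {
    // SQRTP as a checkable predicate: the pair (x, y) satisfies the
    // specification when x >= 0 implies y*y = x and y >= 0
    // (up to a floating-point tolerance, an assumption forced on us by
    // working with doubles rather than mathematical reals).
    static boolean sqrtp(double x, double y) {
        return x < 0 || (y >= 0 && Math.abs(y * y - x) < 1e-9);
    }

    // A candidate program P. Newton's method is an illustrative choice;
    // the precondition x >= 0 is enforced explicitly.
    static double sqrt(double x) {
        if (x < 0) throw new IllegalArgumentException("precondition x >= 0 violated");
        if (x == 0) return 0.0;
        double y = x;                               // initial guess
        for (int i = 0; i < 100; i++) y = (y + x / y) / 2.0;  // Newton step
        return y;
    }
}
```

Checking sqrtp(x, sqrt(x)) on sample inputs is, of course, testing, not the mathematical assertion (C): (C) quantifies over the whole of Real and is established by proof, not by running cases.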
At higher levels of abstraction, we may have only to establish very simple structural connections
that may even be automatically checked. Lower down the relationships take on a more familiar
mathematical form. However, at every abstract level the relationship is mathematical in nature.
Matters are different at the interface between the abstract and the physical. Even if we have a
mathematical correctness proof, the physical program at the end of the chain of artefacts is a
physical object whose correctness is not a mathematical matter. This obvious fact has generated
considerable controversy.
The notion of program verification appears to trade upon an equivocation. Algorithms, as logical structures,
are appropriate subjects for deductive verification. Programs, as causal models of those structures, are not.
The success of program verification as a generally applicable and completely reliable method for
guaranteeing program performance is not even a theoretical possibility. (Fetzer, 1988)
In fact Hoare alludes to this issue in the very text that Fetzer employs to characterize Hoare's
mathematical stance on correctness.
When the correctness of a program, its compiler, and the hardware of the computer have all been
established with mathematical certainty, it will be possible to place great reliance on the results of the
program, and predict their properties with a confidence limited only by the reliability of the electronics.
(Hoare, 1969)
All parties seem agreed that computational systems are at bottom physical systems, and that
some unpredictable behaviour may arise at the causal endpoint. So it is hard to see what all
the fuss was about. However, there is a conceptual problem lurking behind the heated
discussion, one that has little to do with the controversy itself.
To see what is at stake, consider a square root program S and a physical device that is intended
to compute the square root of some class of numbers. What is physical correctness?
Presumably it is given by the following condition.

∀x: Real. ∀y: Real. RD(x, y) ↔ RS(x, y)

Here RD is the input/output relation of the physical device and RS that of the abstract program S.
In practice, Real will always be finite. To simplify matters, consider a simple case where there
are only 4 numbers in its domain, say 9, 16, 25 and 49. The computation table for the square root
program may easily be computed by hand, and takes the form of a table with only four entries: (9,3),
(16,4), (25,5) and (49,7). But this raises a problem, one that so far in the literature has
plagued only the issue of what constitutes a physical computation. However, as we shall see, it
actually applies to the whole correctness issue. For physical correctness we seem only to
require that the physical system be in extensional agreement with the abstract one, i.e., that the
abstract and concrete relations agree. So we could implement the abstract square root program as a
physical table of pairs of suitably labelled stones, laid out on the ground. This seems too
weak a demand: it leads to the conclusion that every physical device with enough bits
implements every finite abstract one. Indeed, the fact that extensional agreement seems to be all
that is required is laid out in the simple mapping account. The following provides the essence of the
idea (Piccinini, 2010).
According to the simple mapping account, a physical system S performs a correct implementation of an
abstract specification C just in case (i) there is a mapping from the states ascribed to S by a physical
description to the states defined by the abstract specification C, such that (ii) the state transitions between
the physical states mirror the state transitions between the abstract states. Clause (ii) requires that for any
abstract state transition of the form s1 → s2, if the system is in the physical state that maps onto s1, it then
goes into the physical state that maps onto s2.
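The table-of-stones point can be made concrete. In the sketch below, a Java Map stands in for the labelled stones: it agrees extensionally with the square-root relation on the four-element domain, yet carries none of the computational structure of a square-root program.

```java
import java.util.Map;

public class StoneTable {
    // The 'physical' table: four labelled pairs and nothing more.
    // (The Map stands in for stones laid out on the ground.)
    static final Map<Integer, Integer> TABLE = Map.of(9, 3, 16, 4, 25, 5, 49, 7);

    // Extensional agreement with the abstract square-root relation on the
    // finite domain {9, 16, 25, 49}: every pair (x, y) has y*y = x and y >= 0.
    static boolean extensionallyAgrees() {
        return TABLE.entrySet().stream()
                    .allMatch(e -> e.getValue() * e.getValue() == e.getKey()
                                   && e.getValue() >= 0);
    }
}
```

By the extensional criterion alone, this table is as good an implementation of square root on its domain as any Newton-iteration program, which is precisely the worry raised by the simple mapping account.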
This leads to some form of pancomputationalism, where almost anything implements almost any
functional demand: there have only to be enough states to go round. In the case of computation,
this has driven some authors (Chalmers, 1996; Copeland, 1996; Sprevak, 2012) to attempt to
provide an account of implementation that somehow restricts the class of possible extensional
interpretations.
One remedy appeals to a specification of the physical device: it is claimed that if we introduce
some notion of specification, then the above extensionality challenge evaporates. For example, a
physical device could be interpreted as an and-gate or an or-gate. But given that we take the
device to be a technical artefact, it must already have such a function. So how does this protect it
from the simple mapping challenge? The thought here might be that the challenge is resolved by
employing a functional description at a higher level of abstraction (Sprevak, 2012). For instance, a
physical program could have a VDM specification as its function, or its function could be given by
a program that describes its behaviour. These yield different computational artefacts. But it is
hard to see any difference between the two from the perspective of the simple mapping account.
And this centres upon the finiteness constraint: in actual practice we will always be dealing with
finite sets and types and finite operations on them. Specifically, consider a correctness condition
of the same form as (C), but with the VDM specification now serving as the functional description
of the physical device. Once more we could implement SQRTP as a physical table of pairs of
suitably labelled stones (Squarerootdevice). So even with a specification (a semantic account of
what it does) for the device, since we are dealing with finite structures, the simple mapping
account still applies.
An alternative solution demands a causal connection between the components of the physical
system: for example, the implication in the simple mapping account might be replaced by a
counterfactual one. However, consider the analogous case in which the abstract table is the
artefact and the VDM definition of square root acts as the specification. It would seem that the
abstract table is a correct implementation of the VDM definition: such an implementation would
be extensionally correct. And here no causal constraints are possible or relevant. So why is the
physical implementation different? The cases seem entirely parallel.
10. AGENCY
Exactly how the physical and intentional conceptualisations of our world are related remains a vexing
problem to which the long history of the mind-body problem in philosophy testifies. This situation also affects
our understanding of technical artefacts: a conceptual framework that combines the physical and
intentional (functional) aspects of technical artefacts is still lacking. (Kroes & Meijers, 2006)
What appears to be missing from the extensional account of correctness is some notion of agent
or agent’s intention. It is agents who ascribe functions to artefacts. Good examples of this
approach are (McLaughlin, 2001; Searle, 1995).
[t]he function of an artifact is derivative from the purpose of some agent in making or appropriating the
object; it is conferred on the object by the desires and beliefs of an agent. No agent, no purpose, no function.
(McLaughlin, 2001)
Presumably, any artefact may have a function conferred on it by the desires or beliefs of an
agent. For example, I may use a computer as a doorstop: its function is then what I say it is. If we
are to bring agency into the picture, something of this sort must be so. If an agent intends the VDM
definition to act as the function for the construction of a Java program, then this distinguishes
mere accidental extensional agreement from the intended variety. But what is involved in an
agent intending to take some definition as the functional description of an artefact? In his
commentary on Wittgenstein's notion of acting intentionally, David Pears suggests that anyone
who acts intentionally must know two things: she must know what activity she is engaged in,
and she must know when she has succeeded (Pears, 2006).
To illustrate matters, we shall consider the difference between specification and semantic
interpretation. More specifically, suppose that F is the simple while language and M is set theory.
The translation of F into M yields a semantic interpretation of F as sets of state-to-state
transformations. On this account M provides the meaning of the operations. Consequently, the
semantic interpretation has normative priority, and it is the rules of the operational semantics
that are under scrutiny: they need to be proven sound relative to the semantic interpretation. It
is the rules that are correct or not. But we may also interpret matters as implementation. The
medium of implementation is set theory, and the artefact consists of the set-theoretic operations
that form the interpretation. In this case, the operations are implemented as set operations. We
are no longer seeking soundness but the correctness of the implementation: now the operational
rules have normative priority, and it is the artefact, the state-to-state transformations, that is
correct or not. If any of the rules are not sound, we shall need to change the implementation. If
we implement iteration as union, it is not correct.
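The state-to-state reading of the while language can itself be sketched in code: each construct is implemented as a transformation on states, with sequencing as function composition and iteration as repeated application of the body. The names assign, seq and whileDo are illustrative; the point is that iteration must be repeated application, not a mere union of the body's relation.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;
import java.util.function.Predicate;

public class WhileSem {
    // A state maps variable names to values; a command denotes a
    // state-to-state transformation.
    static Function<Map<String, Integer>, Map<String, Integer>>
    assign(String x, Function<Map<String, Integer>, Integer> e) {
        return s -> {
            Map<String, Integer> t = new HashMap<>(s);  // states are immutable values
            t.put(x, e.apply(s));
            return t;
        };
    }

    // Sequencing is implemented as function composition.
    static Function<Map<String, Integer>, Map<String, Integer>>
    seq(Function<Map<String, Integer>, Map<String, Integer>> c1,
        Function<Map<String, Integer>, Map<String, Integer>> c2) {
        return c1.andThen(c2);
    }

    // Iteration: apply the body until the test fails. Implementing this as a
    // plain union of the body's relation would not be a correct implementation.
    static Function<Map<String, Integer>, Map<String, Integer>>
    whileDo(Predicate<Map<String, Integer>> b,
            Function<Map<String, Integer>, Map<String, Integer>> c) {
        return s -> {
            Map<String, Integer> t = s;
            while (b.test(t)) t = c.apply(t);
            return t;
        };
    }
}
```

Whether such a fragment counts as a semantic interpretation of the while language or as an implementation of it in a host medium is exactly the intentional distinction at issue in the text.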
The actual tests and proofs required to demonstrate the soundness of the rules would be the same
as those required to demonstrate the correctness of the implementation. Indeed, the rules would
be sound exactly when the implementation is correct. However, the intentional perspective is
different. Recall that anyone who acts intentionally must know what activity she is engaged in
and when she has succeeded. This leads to different observable behaviour in the two cases. In the
case of soundness, I must know that I am testing for soundness and know what soundness
amounts to. If the rules turn out not to be sound, we must change them. If I am asked whether F is
sound, I must be able to justify that it is by reference to the semantic interpretation. If I am asked
what I am doing, I will reply that I am trying to establish that F is sound. None of this is what
happens in the case of an implementation, where it is the artefact that is blamed if there is no
extensional agreement, and the artefact that is subsequently revised. So there is a fundamental
difference between semantic interpretation and implementation: the direction of normative
governance differs. Where there is disagreement, in the semantic case we blame the operational
rules, since the semantic interpretation has priority; in the implementation case we blame the
set-theoretic implementation, since the operational rules have priority. The two relations
represent different intentional perspectives, and it is the intentional perspective that the agent
brings to the party.
How different this is from the standard approach, where the function is located in the mental
states of the agent. As with the semantic case, this brings worse problems than it solves.
If functions are seen primarily as patterns of mental states … and exist, so to speak, in the heads of the
designers and users of artefacts only, then it becomes somewhat mysterious how a function relates to the
physical substrate in a particular artefact. (Kroes & Meijers, 2006)
How can the mental states of an agent fix the function of a device that is intended to perform
addition? If the expression of the function is located in the mental states of an agent, then it is
not accessible to public scrutiny or public agreement. But for normative functions we must have
agreement of judgement among the members of the relevant community of implementers and
users. We have here a general version of the problem that plagues mental-model semantics: such
intentional models of function are generalisations of the mental-model approaches to semantics,
and they suffer from the same inability to function as norms.
On the present view, the intentional aspect arises from the intentional perspective we take on the
relationship between the abstract function and the structure. The abstract function is expressed
in the appropriate language; as such, it is a public declaration. It is not what is in the head of any
agent that fixes the content of the function. The intentional aspect comes about through an agent
taking the function to have governance over the artefact. This provides a theory of function that
goes beyond its application to computational artefacts.
REFERENCES

Brooks, F. (1995). The Mythical Man Month and Other Essays on Software Engineering. Reading,
MA: Addison-Wesley.
Chalmers, D. (1996). Does a Rock Implement Every Finite-State Automaton? Synthese, 108, 309–333.
Colburn, T. R. (1999). Software, Abstraction, and Ontology. The Monist, 82(1), 3–19.
Colburn, T. R. (2000). Philosophy and Computer Science. Armonk, NY: M.E. Sharpe.
Copeland, B. J. (1996). What is Computation? Synthese, 108(3), 335–359.
Cummins, R. (1975). Functional Analysis. The Journal of Philosophy, 72(20), 741–765.
Diller, A. (1990). Z: An Introduction to Formal Methods. London: John Wiley & Sons.
Egan, F. (1992). Individualism, Computation, and Perceptual Content. Mind, 101, 443–459.
Fetzer, J. H. (1988). Program Verification: The Very Idea. Communications of the ACM, 31(9), 1048–1063.
Floridi, L. (2008). The Method of Levels of Abstraction. Minds and Machines, 18(3), 303–329.
Fowler, M. (2003). UML Distilled: A Brief Guide to the Standard Object Modeling Language.
Addison-Wesley.
Glüer, K., & Wikforss, Å. (2009). The Normativity of Meaning and Content. Stanford Encyclopedia
of Philosophy. http://plato.stanford.edu/entries/meaning-normativity/
Booch, G., Rumbaugh, J., & Jacobson, I. (1999). The Unified Modeling Language User Guide.
Reading, MA: Addison-Wesley.
Gunter, C. (1992). Semantics of Programming Languages: Structures and Techniques. MIT Press.
Hoare, C. A. R. (1969). An axiomatic basis for computer programming. Communications of the
ACM, 12(10), 576–580.
Jones, C. B. (1986). Systematic Software Development Using VDM. Hemel Hempstead: Prentice Hall.
Kripke, S. (1982). Wittgenstein on Rules and Private Language. Cambridge, MA: Harvard
University Press.
Kroes, P., & Meijers, A. (2006). The dual nature of technical artefacts. Studies in History and
Philosophy of Science, 37(1), 1–4.
Luckham, D. (1998). Rapide: A Language and Toolset for Causal Event Modeling of Distributed
System Architectures. In Proceedings of WWCA (pp. 88–96).
Magee, J. D. (1995). Specifying Distributed Software Architectures. In Proceedings of 5th
European Software Engineering Conference (ESEC 95).
McLaughlin, P. (2001). What Functions Explain: Functional Explanation and Self-Reproducing
Systems. Cambridge: Cambridge University Press.
Meijers, A. (2001). The relational ontology of technical artifacts. In P. A. Kroes & A. Meijers
(Eds.), The Empirical Turn in the Philosophy of Technology. Amsterdam: Elsevier.
Moor, J. H. (1978). Three Myths of Computer Science. The British Journal for the Philosophy of
Science, 29(3), 213–222.
Morgan, C. (1990). Programming from Specifications. Hemel Hempstead: Prentice Hall.
Pears, D. (2006). Paradox and Platitude in Wittgenstein's Philosophy. Oxford: Oxford University
Press.
Piccinini, G. (2010). Computation in Physical Systems. Stanford Encyclopedia of Philosophy.
http://plato.stanford.edu/entries/computation-physicalsystems/
Pierce, B. (2002). Types and Programming Languages. Cambridge, MA: MIT Press.
Plotkin, G. (2004). A structural approach to operational semantics. Journal of Logic and Algebraic
Programming, 60–61, 17–139.
Pugh, W. (2000). The Java Memory Model is Fatally Flawed. Concurrency: Practice and
Experience, 12(6), 445–455.
Rapaport, W. J. (1999). Implementation is Semantic Interpretation. The Monist, 82(1), 109–130.
Searle, J. R. (1995). The Construction of Social Reality. New York: Free Press.
Tennent, R. (1977). Language design methods based on semantic principles. Acta Informatica, 8,
97–112.
Thomasson, A. (2007). Artifacts and human concepts. In L. Margolise, Creations of the Mind:
Essays on Artifacts and Their Representations. Oxford.: Oxford University Press.
Turner, R. (2007). Understanding Programming Languages. Minds and Machines, 17(2), 129–133.
Turner, R. (2009). The Meaning of Programming Languages. APA Newsletters, 2–7.
http://cswww.essex.ac.uk/staff/turnr/Mypapers/v09n1_Computers.pdf
Turner, R. (2010). Specification. Minds and Machines, 21(2), 135–152.
Turner, R. (2012). Machines. In H. Zenil, A Computable Universe (pp. 63-76). World Scientific
Publishing Company.
Vallverdu, J. (2010). Thinking Machines and The Philosophy of Computer Science. IGI Global.
Van Vliet, H. (2008). Software Engineering: Principles and Practice. Chichester: John Wiley.