

COMPUTATION AND COGNITION: LOOKING THROUGH THE PHILOSOPHICAL LENS


By Bijoy H Boruah



The Notion of Computation

Lexically speaking, 'computation' is 'calculation or controlling operation that is expressible in numerical or logical terms'. The idea of computation, however, is rooted in a mathematical theory that defines it in the abstract sense of the execution of algorithms. Roughly speaking, an algorithm consists of a finite sequence of effective and exact instructions that operate on formal symbols and can be implemented (or executed) in some mechanism. The mechanism carries out the instructions for any given input in a deterministic, discrete, step-by-step fashion, and transforms the input into some output in a finite amount of time. For all the instructions specified by the algorithm, the mechanism goes through a sequence of atomic steps in such a way that these steps correspond to one or more of the specified instructions. The mechanism could be concrete or abstract, natural or artificial. Depending on the kind of mechanism, the algorithmic specification will take different forms: in the case of humans, instructions may be given in natural language (as long as the individual steps are distinct and precisely described, e.g. a cooking recipe or the instructions on public phones for making calls). The basic point is that the particular language in which an instruction is written does not matter much. So long as it meets the above conditions, it matters little whether the algorithm is written as a flow chart, a Java program, English prose, or a Japanese haiku. All that is important is that the 'environment' in which the algorithm is to be executed can interpret and follow its instructions exactly and unambiguously.
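As an illustration (mine, not from the article), Euclid's greatest-common-divisor procedure has exactly the properties just listed: a finite sequence of exact instructions, executed deterministically step by step, halting in a finite amount of time for any input. A minimal sketch in Python:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: an exact, deterministic, finite procedure."""
    # Each atomic step replaces the pair (a, b) with (b, a mod b).
    # Since the second element strictly decreases, the loop must halt.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
```

The same instructions could just as well be written as a flow chart or as a cooking-recipe style list in English; what matters is only that each step is unambiguous to the executing mechanism.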

A Brief History

The history of computation can be traced back at least to the seventeenth-century philosopher-mathematician Leibniz, who dared to ponder the possibility of constructing mechanical systems that could aid humans in performing calculations. Having constructed mechanical calculators himself, Leibniz was one of the first to envision a unique kind of calculator that would literally be a 'mechanical reasoner'. Indeed, the view that human thinking, in so far as it involves logical reasoning, could be mechanized, the view that lies at the heart of the notion of computation as used in Cognitive Science today, owes its theoretical origin to Leibniz. The modern notion of computation can also be seen as triggered by the concept of reasoning upheld by the masters of modern western philosophy, such as Descartes, Hobbes, Locke and others, namely the view that reasoning, or rational thinking in general, involves representations (or symbols). The mathematical practice of using marks and signs as representations in calculations became a paradigm for thought itself, as expressed by Hobbes's famous dictum that everything done by our mind is a computation (Pratt, 1987). The notion of computation was thus very much tied to the idea of mechanically manipulating representations, and prototypical manipulators were found in the mechanical calculators of the seventeenth century. These calculators, with very modest


Figure 1

computing capability, were in vogue till the nineteenth century. It is in the twentieth century that we witness major progress in the construction of computers with much higher capability, and this is largely due to two quite independent developments: 1) the thorough logical analysis of the notions of 'formal system' and 'formal proof' (leading to the notion of 'effectively computable function'), and 2) the rapid progression in the engineering of electronic components (from vacuum tubes, to transistors, to integrated circuits, and beyond).

A Logico-Philosophical Perspective

The philosophical groundwork for a well-defined formal notion of computation was laid by logicians, who gave the intuitive idea of 'effective computability', the class of functions (over positive integers) that can be effectively calculated (in principle), a formal characterization through a definitional postulate known as 'Church's Thesis'. Two ingredients that are implied by the concept of 'effectively calculable function', and that need to be referred to in spelling out what computation is, are: 1) a notion of effective procedure or algorithm, and 2) a notion of the function computed by an algorithm. A function computed by an algorithm is the mapping obtained by pairing all possible inputs with the corresponding outputs resulting from applying the algorithm to them. The notion of effective procedure or algorithm is in turn explicated in terms of the machine model of a computer, introduced by Alan Turing (1936), through an analysis of the possible processes that a human being can go through while performing a calculation using paper and pencil and applying rules from a given finite set. Turing defines a mathematical model of

an 'imagined mechanical device', referred to as a 'Turing Machine' (TM): a finite state controller (say, a type-writing device) able to read, write and move sequentially back and forth along an indefinitely long tape that is inscribed with a finite but indefinite number of tokens or symbols. By specifying more precisely what symbols the device can use, and how it is disposed to react to them as it passes along the tape, we can get the TM to transform one set of symbols and spaces (the 'input') into another set (the 'output'). Turing shows that this device can take any input and transform it into any output so long as there is some computable relation between them. Figure 1 above shows the device poised over a section of an infinitely long paper tape which is divided into squares, some of which are empty and some of which contain an 's' (standing for 'symbol'). The theory of computation that is central to theoretical computer science is largely an automata-theoretic conception based on the TM. It is the TM-based theory that explains the notion of a Universal Computer, on which the Church-Turing Thesis (CT thesis) rests: the thesis that anything that can be done algorithmically at all can be done by a computer. Turing intends his analysis to show that any function computable by a human being following a fixed set of rules can also be computed by a TM. The fundamental assumption that the CT thesis leads us to make is that, at some level of description more general than the neurophysiological, the brain is a device that receives complex inputs from the sensory systems and effects equally complex outputs to the motor systems. These relations between inputs and outputs are functionally precise enough to be describable by various mathematical relationships. And so long as such mathematical relationships exist between
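The controller-plus-tape picture just described can be made concrete in a few lines of code. The following is a minimal sketch of my own (not part of the article): the rule table, state names and tape contents are all hypothetical, chosen only to show a finite set of (state, symbol) rules deterministically transforming an input string into an output string.

```python
def run_tm(tape, rules, start, accept):
    """Simulate a one-tape Turing Machine.

    rules maps (state, read_symbol) -> (next_state, write_symbol, move),
    where move is -1 (left) or +1 (right) and '_' is the blank symbol.
    """
    cells = dict(enumerate(tape))  # sparse tape: index -> symbol
    state, head = start, 0
    while state != accept:
        symbol = cells.get(head, '_')          # read the scanned square
        state, write, move = rules[(state, symbol)]
        cells[head] = write                    # write, then move the head
        head += move
    return ''.join(cells[i] for i in sorted(cells)).strip('_')

# A one-state machine that flips every bit, halting at the first blank.
rules = {
    ('scan', '0'): ('scan', '1', +1),
    ('scan', '1'): ('scan', '0', +1),
    ('scan', '_'): ('done', '_', +1),
}
print(run_tm('1011', rules, 'scan', 'done'))  # 0100
```

Note that nothing in the simulation refers to any physical mechanism: the 'machine' is exhausted by its rule table, which is the point of the abstraction discussed below.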


inputs and outputs, some version of the TM will be able to mimic them. That is, there will be a TM that perfectly simulates the input-output structure of any specific brain. A philosophically significant feature of the idea of computation is that all the possible computational algorithmic formalisms share the common property of being independent of the physical. That is, computations in any of these formalisms are defined without recourse to the nature of the physical systems that (potentially) realize them. Even the TM model, which is often considered to be the prototype of a 'mechanical device', does not incorporate physical descriptions of its inner workings, but abstracts from the mechanical details of a physical realization.

Computationalism: Cognition as Computation

The independence of TM computations from their physical realizers, and the information-processing capabilities of computers, along with the potential of computer programs to specify exactly how information is processed, are together responsible for the 'cognitivist revolution' in contemporary psychology and the rise of cognitive science. Cognitivism views human cognition as the 'processing of information' and holds that cognitive functions are computations. Indeed, the slogan 'cognition is computation' has become the motto of cognitive science and artificial intelligence (AI). Johnson-Laird (1988) has famously summarized the view in his claim that 'the mind is to the brain as the [computer] program is to the [computer] hardware'. This view is now well known as Computationalism, or the computational theory of the mind. Computationalism in the above sense professes the view that the same kind of relation that obtains between programs (or software) and computer hardware (i.e. the implementation relation) also obtains between mental processes and brain events. Since mental processes are computational processes, cognitive functions and states can be described

by, and explained in terms of, programs. More precisely, mental states bear a relation to their neurophysiological realizers analogous to the relation 'abstract' computer programs bear to the 'concrete' hardware devices on which they run. Our brain constitutes the hardware (or 'wetware', as it is called) on which our mental software runs. Brain states represent states of the outside world, including the body itself, where transitions between states can be explained as computational operations on discrete representations. In so far as the brain is a device whose inputs and outputs can be characterized in terms of some algorithm, it can be mimicked by a TM, so much so that it can, in this formal sense, be considered equivalent to a TM. Thus, under computationalism, the nature of mental states is abstractly characterized as a formalizable process that mediates between environmental input and behavioral output. The explanation of cognitive functions can be given fully in terms of programs run in the brain, without having to take the neurophysiological implementation mechanism into consideration. It is emphasized that the right level of abstraction at which to understand cognition is the computational level, and not the level of the implementing mechanism, i.e. the brain. Indeed, a theoretically striking implication of this view is the assumption that the same mental program of cognition can be run, or implemented, in mechanisms different from the neurophysiological one. Mental states as bearers of cognition are multiply realizable.

Skeptical Observations and Alternatives

Computationalism in its classical variety is wedded to the view that cognition consists in the manipulation of formal symbols in accordance with fixed rules. Neural representations are symbols in this formal sense. Within cognitive science, the label 'computationalism' is often associated with the specific class of cognitive architecture that is symbolic (Fodor and Pylyshyn, 1988).
This architecture is held to be effective in dealing with the systematicity, productivity and


compositionality of high-level thought. That computational processes manipulate symbols by virtue of their causally efficacious formal properties is regarded by the computationalists as a virtue of their approach. But critics have seized upon this very fact to expose what they take to be the major weakness of computationalism. One fundamental criticism is John Lucas's Gödelian argument (1961) to the effect that human minds are not TM-computable. But there is another argument that has become a centerpiece in the criticism of strong AI. To the extent that computationalism is committed to strong AI, it has met with a strong criticism in what is known as the 'Chinese Room Thought Experiment' advanced by John Searle (1980). This criticism, directed at the TM-based account of intelligence and understanding, asserts that formal symbol manipulation is not sufficient for human intentionality and semantics. Intelligent thought and understanding cannot consist merely in the execution of a computer program, which consists merely in the making of transitions between strings of symbols in accordance with formal, syntactically defined rules. Besides, there is an important question concerning relevantly applicable general knowledge, which is believed to characterize ordinary human cognition. Can computationalism account for this feature? This is known as the Frame Problem: the apparent impossibility of writing a program which not only embodies the general knowledge available to an average human being, but which also specifies how that knowledge is to be applied appropriately in relevant circumstances (Pylyshyn, 1987; Dennett, 1984). For it seems to be impossible to specify in advance what all the relevant circumstances and appropriate applications might actually be. In other words, there is presumably no set of algorithms, or computational rules, that can embody the qualities of common sense and sound judgement which characterize a rational human being of average intelligence.
It would, of course, be foolhardy to proclaim too firmly that the challenge posed by the

Frame Problem is insuperable. Human ingenuity has often managed to overcome the restrictions that fellow humans have tried to impose upon it. Nevertheless, the problem at issue is a genuine one, and it persists so long as the circumstantial contingencies of everyday life often require us to shift from engaging in one intelligent activity to engaging in another, in the light of (sometimes unexpected) new information and our own priorities. The 'cognitive frame' of the human mind may appear to be notoriously elusive to the program-specific computational gaze. Even if a programmer attempts to build in cross-connections amongst multiple specialized programs in order to overcome limitations of the required 'frame', there might be no final success, because there probably is no end to the number of different ways in which one intelligent activity in one context may give way to another in the course of an intelligent being's life. Granted that such critiques are expressive of a radical scepticism about the viability of computationalism, there are alternative views that reflect significant modifications of, and departures within, the overall cognitive-scientific enterprise rather than a wholesale abandonment of the project of understanding cognition by reference to computation. A prominent example of this trend is Connectionism. Connectionist cognitive modeling rests on the view that the brain, in as much as it is an 'information processor', utilizes parallel rather than serial processing on a massive scale (Rumelhart and McClelland, 1986). There is no 'central processor' in the brain that carries out all the steps of a program in a sequential fashion in order to solve an information-processing problem. The brain rather is a network of interconnected neuronal units that exchange activation until they settle into stable patterns of activation. It represents information in a 'widely distributed' fashion, i.e.
representation is distributed over a large array of nodes or units, and computation is transformation of activation vectors of these units. No one piece of information is located in any particular set of units, but rather all of the information processed by the brain is


embodied in its overall pattern of activation and connection-strengths. Whereas the classical computationalist serial architecture is more suited to accounting for the cognitive function of logical inference, connectionist PDP architecture is better able to represent cognitive tasks such as pattern recognition. Controversies abound on the issue of which of these two cognitive performances is central to cognition proper. Perhaps we could say, with an optimistic outlook, that we are not forced to choose between two stark alternative paradigms of cognition, classical and connectionist, when it comes to deciding how best to model aspects of human cognition. There already are indications of the possibility of 'hybrid' approaches to integrate the alternatives (Clark, 1989; Smolensky, 1990). The more radical departure from, and critique of, classical computationalism has been proposed by Dynamicism, or Dynamic Systems Theory, which advocates explanation of human cognition in terms of dynamical systems, and regards the symbolic-representational level of computational description as entirely superfluous. The dynamicist argument, which is an extension of connectionism, is undergirded by neurobiologically relevant information. Stressing the time-dependent nature of nervous systems, some researchers see the brain, together with its body and environment, as a dynamical system, best characterized by systems of differential equations describing the temporal evolution of states of the brain (Port & van Gelder, 1995; van Gelder, 1998). The significant difference made by this approach is that it accepts temporal richness as a non-negligible variable, in contrast to the temporal austerity of classical computationalism. This 'real-time' orientation implies that there needs to be a close coupling between the temporality of the computational process and the temporality of the subject domain. Cognition as computational representation is taken to be something continuous with the environment, thereby questioning the very idea of 'inner' cognition so central to the classical view.
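To give a flavour of the dynamicist idiom (my own sketch, not from the article): the state x of a single hypothetical neural unit can be written as a differential equation, dx/dt = (-x + u)/tau, whose value relaxes continuously in time toward its input u. There are no discrete symbolic steps here, only the temporal evolution of a state variable. A minimal Euler-integration sketch in Python:

```python
def evolve(x0, u, tau=1.0, dt=0.01, steps=1000):
    """Euler-integrate dx/dt = (-x + u) / tau for a single leaky unit.

    The state x relaxes continuously toward the input u with time
    constant tau; dt and steps discretize the integration, not the model.
    """
    x = x0
    for _ in range(steps):
        x += dt * (-x + u) / tau
    return x

# After 10 time constants the state has settled near the input value.
print(round(evolve(0.0, 0.8), 3))  # settles near 0.8
```

The contrast with the TM picture is that what the system 'does' is read off from the trajectory of x through time, not from any rule-governed manipulation of symbols.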

References

Church, A. (1936) 'An Unsolvable Problem of Elementary Number Theory', American Journal of Mathematics 58, 345-363.
Clark, A. (1989) 'Microcognition: Philosophy, Cognitive Science, and Parallel Distributed Processing' (Cambridge, MA: MIT Press).
Dennett, D. (1984) 'Cognitive Wheels: The Frame Problem in AI', in Hookway (ed.), Minds, Machines and Evolution (Cambridge: Cambridge University Press).
Fodor, J. & Pylyshyn, Z. (1988) 'Connectionism and Cognitive Architecture: A Critical Analysis', Cognition 28, 3-71.
Johnson-Laird, P. (1988) 'The Computer and the Mind' (Cambridge, MA: Harvard University Press).
Lucas, J. (1961) 'Minds, Machines and Gödel', Philosophy 36, 122-127.
Port, R. & van Gelder, T. (1995) 'Mind as Motion: Explorations in the Dynamics of Cognition' (Cambridge, MA: MIT Press).
Pratt, V. (1987) 'Thinking Machines: The Evolution of Artificial Intelligence' (Oxford: Basil Blackwell).
Pylyshyn, Z. (1987) 'The Robot's Dilemma: The Frame Problem in Artificial Intelligence' (Norwood, NJ: Ablex).
Rumelhart, D. & McClelland, J. (1986) 'Parallel Distributed Processing: Explorations in the Microstructure of Cognition', Volume I (Cambridge, MA: MIT Press).
Searle, J. (1980) 'Minds, Brains and Programs', Behavioral and Brain Sciences 3, 417-424.
Smolensky, P. (1990) 'Tensor Product Variable Binding and the Representation of Symbolic Structures in Connectionist Systems', Artificial Intelligence 46, 217-257.
Turing, A. (1936) 'On Computable Numbers, with an Application to the Entscheidungsproblem', Proceedings of the London Mathematical Society, 2nd Series, 42, 230-265.
van Gelder, T. (1998) 'The Dynamical Hypothesis in Cognitive Science', Behavioral and Brain Sciences 21, 615-665.

About the author: Dr. Bijoy H Boruah is a Professor of Philosophy, Department of Humanities & Social Sciences, IIT Kanpur. His research interests include Philosophy of Mind, Philosophical Aesthetics, Metaphysics and Value Theory.

