Programming Languages
J.J. Horning, Editor

Algorithm = Logic + Control

Robert Kowalski
Imperial College, London

An algorithm can be regarded as consisting of a logic component, which specifies the knowledge to be used in solving problems, and a control component, which determines the problem-solving strategies by means of which that knowledge is used. The logic component determines the meaning of the algorithm whereas the control component only affects its efficiency. The efficiency of an algorithm can often be improved by improving the control component without changing the logic of the algorithm. We argue that computer programs would be more often correct and more easily improved and modified if their logic and control aspects were identified and separated in the program text.

Key Words and Phrases: control language, logic programming, nonprocedural language, programming methodology, program specification, relational data structures

CR Categories: 3.64, 4.20, 4.30, 5.21, 5.24

This research was supported by a grant from the Science Research Council. Author's address: R.A. Kowalski, Dept. of Computing and Control, Imperial College of Science and Technology, 180 Queens Gate, London SW7, England.

Introduction

Predicate logic is a high level, human-oriented language for describing problems and problem-solving methods to computers. In this paper, we are concerned not with the use of predicate logic as a programming language in its own right, but with its use as a tool for the analysis of algorithms. Our main aim will be to study ways in which logical analysis can contribute to improving the structure and efficiency of algorithms.
The notion that computation = controlled deduction was first proposed by Pat Hayes [19] and more recently by Bibel [2] and Vaughan Pratt [31]. A similar thesis, that database systems should be regarded as consisting of a relational component, which defines the logic of the data, and a control component, which stores and retrieves it, has been successfully argued by Codd [10]. Hewitt's argument [20] for the programming language PLANNER, though generally regarded as an argument against logic, can also be regarded as an argument for the thesis that algorithms be regarded as consisting of both logic and control components. In this paper we shall explore some of the useful consequences of that thesis.

We represent the analysis of an algorithm A into a logic component L, which defines the logic of the algorithm, and a control component C, which specifies the manner in which the definitions are used, symbolically by the equation

    A = L + C.

Algorithms for computing factorial are a simple example. The definition of factorial constitutes the logic component of the algorithms:

    1 is the factorial of 0 ←
    u is the factorial of x + 1 ← v is the factorial of x, u is v times x + 1.

The definition can be used bottom-up to derive a sequence of assertions about factorial, or it can be used top-down to reduce the problem of computing the factorial of x + 1 to the subproblems of computing the factorial of x and multiplying the result by x + 1. Different ways of using the same definition give rise to different algorithms. Bottom-up use of the definition behaves like iteration. Top-down use behaves like recursive evaluation.

The manner in which the logic component is used to solve problems constitutes the control component. As a first approximation, we restrict the control component C to general-purpose problem-solving strategies which do not affect the meaning of the algorithm as it is determined by the logic component L. Thus different algorithms A1 and A2, obtained by applying different methods of control C1 and C2 to the same logic definitions L, are equivalent in the sense that they solve the same problems with the same results. Symbolically, if A1 = L + C1 and A2 = L + C2, then A1 and A2 are equivalent. The relationship of equivalence between algorithms, because they have the same logic, is the basis for using logical analysis to improve the efficiency of an algorithm by retaining its logic but improving the way it is used. In particular, replacing bottom-up by top-down control often (but not always) improves efficiency, whereas replacing top-down sequential solution of subproblems by top-down parallel solution seems never to decrease efficiency.

Both the logic and the control components of an algorithm affect efficiency.
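The two uses of the factorial definition can be made concrete in Prolog, the Horn clause language cited later in the paper. The following is a minimal sketch under our own naming; factorial/2 and the arithmetic calls are our rendering, not the paper's notation:

    % Logic component: U is the factorial of X1.
    factorial(0, 1).
    factorial(X1, U) :-
        X1 > 0,
        X is X1 - 1,
        factorial(X, V),     % v is the factorial of x
        U is V * X1.         % u is v times x + 1

    % Top-down use (what a Prolog interpreter does):
    %   ?- factorial(4, U).    % U = 24, by reduction to subgoals
    % Bottom-up use of the same two clauses would instead derive
    % factorial(1,1), factorial(2,2), factorial(3,6), ... from
    % factorial(0,1), behaving like iteration.

The clauses are the logic component; whether they are run by goal reduction or by forward derivation is a property of the control component only.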
The logic component expresses the knowledge which can be used in solving problems, and the control component determines the way in which that knowledge can be used. The distinction between logic and control is not wholly unambiguous. The same algorithm A can often be analyzed in different ways:

    A = L1 + C1
    A = L2 + C2.

One analysis might include in the logic component what another analysis includes in the control component. In general, we prefer an analysis which places the greatest burden for achieving efficiency on the control component. Such an analysis has two advantages: (1) the logic component can afford to be a clearer and more obviously correct statement of the problem and the knowledge which can be used in its solution, and (2) the control component assumes greater responsibility for the efficiency of the algorithm, which consequently can be more readily improved by upgrading the efficiency of the control.

It is the intention that this paper should be self-contained. The first part, accordingly, introduces the clausal form of predicate logic and defines the top-down and bottom-up interpretations of Horn clauses. The body of the paper investigates the following decomposition of algorithms into their various components.

(Figure: decomposition of an algorithm into a logic component and a control component, each further subdivided into the subcomponents studied in the sections that follow.)

We study the effect of altering each of the above components of an algorithm. The final section of the paper introduces a graphical notation for expressing, more formally than in the rest of the paper, certain kinds of control information. Much of the material in this paper has been abstracted from lecture notes [23] prepared for the advanced summer school on foundations of computing held at the Mathematical Centre in Amsterdam in May 1974.

Notation

We use the clausal form of predicate logic. Simple assertions are expressed by clauses:

    Father (Zeus, Ares) ←
    Mother (Hera, Ares) ←
    Father (Ares, Harmonia) ←
    Mother (Semele, Dionisius) ←
    Father (Zeus, Dionisius) ←

Here Father (x, y) states that x is the father of y and Mother (x, y) states that x is the mother of y.

Clauses can also express general conditional propositions:

    Female (x) ← Mother (x, y)
    Male (x) ← Father (x, y)
    Parent (x, y) ← Mother (x, y)
    Parent (x, y) ← Father (x, y).

These state that

    x is female if x is mother of y,
    x is male if x is father of y,
    x is parent of y if x is mother of y, and
    x is parent of y if x is father of y.

The arrow ← is the logical connective "if"; "x" and "y" are variables representing any individuals; "Zeus," "Ares," etc. are constant symbols representing particular individuals; "Father," "Mother," "Female," etc. are predicate symbols representing relations among individuals. Variables in different clauses are distinct even if they have the same names.

A clause can have several joint conditions or several alternative conclusions. Thus

    Grandparent (x, y) ← Parent (x, z), Parent (z, y)
    Male (x), Female (x) ← Parent (x, y)
    Ancestor (x, y) ← Parent (x, y)
    Ancestor (x, y) ← Ancestor (x, z), Ancestor (z, y)

where x, y, and z are variables, state that for all x, y, and z

    x is grandparent of y if x is parent of z and z is parent of y;
    x is male or x is female if x is parent of y;
    x is ancestor of y if x is parent of y; and
    x is ancestor of y if x is ancestor of z and z is ancestor of y.

Problems to be solved are represented by clauses which are denials. The clauses

    ← Grandparent (Zeus, Harmonia)
    ← Ancestor (Zeus, x)
    ← Male (x), Ancestor (x, Dionisius)

where x is a variable, state that

    Zeus is not grandparent of Harmonia,
    for no x is Zeus ancestor of x, and
    for no x is x male and x an ancestor of Dionisius.
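Transcribed into Prolog syntax (lower-case constants and the ?- query form are Prolog conventions; the clauses are those given above), the family relationships read as follows. One adjustment: the paper's second Ancestor clause calls Ancestor twice, which is logically fine but can loop under a depth-first interpreter, so this sketch uses a parent-based variant and notes the change:

    father(zeus, ares).          mother(hera, ares).
    father(ares, harmonia).      mother(semele, dionisius).
    father(zeus, dionisius).

    female(X) :- mother(X, _).
    male(X)   :- father(X, _).
    parent(X, Y) :- mother(X, Y).
    parent(X, Y) :- father(X, Y).

    grandparent(X, Y) :- parent(X, Z), parent(Z, Y).

    % Paper's clause: ancestor(X, Y) :- ancestor(X, Z), ancestor(Z, Y).
    % Same relation, adjusted for Prolog's depth-first control:
    ancestor(X, Y) :- parent(X, Y).
    ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

    % Denials become queries:
    %   ?- grandparent(zeus, harmonia).     % succeeds
    %   ?- ancestor(zeus, X).               % X = ares ; X = dionisius ; X = harmonia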
A typical problem-solver (or theorem-prover) reacts to a denial by using other clauses to try to refute the denial. If the denial contains variables, then it is possible to extract from the refutation the values of the variables which account for the refutation and represent a solution of the problem to be solved. In this example, different refutations of the second denial find different x of which Zeus is the ancestor: x = Ares, x = Harmonia, x = Dionisius.

More generally, we define clauses and their interpretation as follows. A clause is an expression of the form

    B1, ..., Bm ← A1, ..., An        (m, n ≥ 0)

where B1, ..., Bm, A1, ..., An are atoms. The atoms A1, ..., An are conditions of the clause and the atoms B1, ..., Bm are alternative conclusions of the clause. If the clause contains the variables x1, ..., xk, then interpret it as stating that

    for all x1, ..., xk, B1 or ... or Bm if A1 and ... and An.

If n = 0, then interpret it as stating unconditionally that

    for all x1, ..., xk, B1 or ... or Bm.

If m = 0, then interpret it as stating that

    for no x1, ..., xk, A1 and ... and An.

If m = n = 0, then interpret the clause as a sentence which is always false.

An atom (or atomic formula) is an expression of the form

    P (t1, ..., tn)

where P is an n-place predicate symbol and t1, ..., tn are terms. Interpret the atom as asserting that the relation called P holds among the individuals called t1, ..., tn.

A term is a variable, a constant symbol, or an expression of the form

    f (t1, ..., tn)

where f is an n-place function symbol and t1, ..., tn are terms.

The sets of predicate symbols, function symbols, constant symbols, and variables are any mutually disjoint sets. (By convention, we reserve the lower case letters u, v, w, x, y, z, with or without adornments, for variables. The type of other kinds of symbols is identified by the position they occupy in clauses.)

Clausal form has the same expressive power as the standard formulation of predicate logic. All variables x1, ..., xk which occur in a clause C are implicitly governed by universal quantifiers ∀x1 ... ∀xk (for all x1 and ... and for all xk). Thus C is an abbreviation for ∀x1 ... ∀xk C. The existential quantifier ∃x (there exists an x) is avoided by using constant symbols or function symbols to name individuals. For example, the clauses

    Father (dad (x), x) ← Human (x)
    Mother (mum (x), x) ← Human (x)

state that for all humans x, there exists an individual called dad (x), who is father of x, and there exists an individual called mum (x), who is mother of x.

Although the clausal form has the same power as the standard form, it is not always as natural or as easy to use. The definition of subset is an example: "x is a subset of y if, for all z, z belongs to y if z belongs to x." The definition in the standard form of logic

    x ⊆ y ↔ ∀z (z ∈ x → z ∈ y)

is a direct translation of the English. The clausal form of the definition can be systematically derived from the standard form. It can take considerable effort, however, to recognize the notion of subset in the resulting pair of clauses:

    x ⊆ y, arb (x, y) ∈ x ←
    x ⊆ y ← arb (x, y) ∈ y.

(Here we have used infix notation for predicate symbols, writing xPy instead of P (x, y).)

In this paper we shall avoid the awkwardness of the clausal definition of subset by concentrating attention on clauses which contain at most one conclusion. Such clauses, called Horn clauses, can be further classified into four kinds:

    assertions, of the form                B ←
    procedure declarations, of the form    B ← A1, ..., An
    denials, of the form                   ← A1, ..., An
    and contradiction, of the form         ←

Assertions can be regarded as the special case of procedure declarations where n = 0.

The Horn clause subset of logic resembles conventional programming languages more closely than either the full clausal or standard forms of logic.
For example, the notion of subset can be defined recursively by means of Horn clauses:

    x ⊆ y ← Empty (x)
    x ⊆ y ← Split (x, z, x'), z ∈ y, x' ⊆ y.

Here it is intended that the relationship Empty (x) holds when x is empty, and Split (x, z, x') holds when x consists of element z and subset x'. Horn clauses used in this way, to define relations recursively, are related to the Herbrand-Gödel recursion equations as described by Kleene [22], elaborated by McCarthy [28], employed for program transformation by Burstall and Darlington [13], and augmented with control annotations by Schwarz [34].

Top-Down and Bottom-Up Interpretations of Horn Clauses

A typical Horn clause problem has the form of

(1) a set of clauses which defines a problem domain and
(2) a theorem which consists of (a) hypotheses represented by assertions A1 ←, ..., An ← and (b) a conclusion which is negated and represented by a denial ← B1, ..., Bm.

In top-down problem-solving, we reason backwards from the conclusion, repeatedly reducing goals to subgoals until eventually all subgoals are solved directly by the original assertions. In bottom-up problem-solving, we reason forwards from the hypotheses, repeatedly deriving new assertions from old ones until eventually the original goal is solved directly by derived assertions.

The problem of showing that Zeus is a grandparent of Harmonia can be solved either top-down or bottom-up. Reasoning bottom-up, we start with the assertions

    Father (Zeus, Ares) ←
    Father (Ares, Harmonia) ←

and use the clause Parent (x, y) ← Father (x, y) to derive new assertions

    Parent (Zeus, Ares) ←
    Parent (Ares, Harmonia) ←.

Continuing bottom-up, we derive, from the definition of grandparent, the new assertion

    Grandparent (Zeus, Harmonia) ←

which matches the original goal.

Reasoning top-down, we start with the original goal of showing that Zeus is a grandparent of Harmonia,

    ← Grandparent (Zeus, Harmonia),

and use the definition of grandparent to derive two new subgoals

    ← Parent (Zeus, z), Parent (z, Harmonia)

by denying that any z is both a child of Zeus and a parent of Harmonia. Continuing top-down and considering both subgoals (either one at a time or both simultaneously), we use the clause Parent (x, y) ← Father (x, y) to replace the subproblem Parent (Zeus, z) by Father (Zeus, z) and the subproblem Parent (z, Harmonia) by Father (z, Harmonia). The newly derived subproblems are solved compatibly by assertions which determine "Ares" as the desired value of z.

In both the top-down and bottom-up solutions of the grandparent problem, we have mentioned the derivation of only those clauses which directly contribute to the eventual solution. In addition to the derivation of relevant clauses, it is often unavoidable, during the course of searching for a solution, to derive assertions or subgoals which do not contribute to the solution. For example, in the bottom-up search for a solution to the grandparent problem, it is possible to derive the irrelevant assertions

    Parent (Hera, Ares) ←
    Male (Zeus) ←.

In the top-down search it is possible to replace the subproblem Parent (Zeus, z) by the irrelevant and unsolvable subproblem Mother (Zeus, z).

There are both proof procedures which understand logic top-down (e.g. model elimination [27], SL-resolution [26], and interconnectivity graphs [35]) as well as ones which understand logic bottom-up (notably hyper-resolution [32]). These proof procedures operate with the clausal form of predicate logic and deal with both Horn clauses and non-Horn clauses. Among clausal proof procedures, the connection graph procedure [25] is able to mix top-down and bottom-up reasoning. Among non-clausal proof procedures, Gentzen systems [1] and Bledsoe's related natural deduction systems [5] provide facilities for mixing top-down and bottom-up reasoning.
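The recursive Horn clause definition of subset at the head of this section can be written directly in Prolog; in this sketch lists stand for the sets, member/2 plays the role of ∈, and Empty and Split are realized by list patterns (these representation choices are ours):

    % x ⊆ y ← Empty(x)
    % x ⊆ y ← Split(x, z, x'), z ∈ y, x' ⊆ y
    subset_of([], _).
    subset_of([Z | Xs], Ys) :-
        member(Z, Ys),
        subset_of(Xs, Ys).

    % ?- subset_of([a, c], [a, b, c]).   % succeeds
    % ?- subset_of([a, d], [a, b, c]).   % fails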
The terminology "top-down" and "bottom-up" applied to proof procedures derives from our investigation of the parsing problem formulated in predicate logic [23, 25]. Given a grammar formulated in clausal form, top-down reasoning behaves as a top-down parsing algorithm and bottom-up reasoning behaves as a bottom-up algorithm. David Warren (unpublished) has shown how to define a general proof procedure for Horn clauses which, when applied to the parsing problem, behaves like the Earley parsing algorithm [16].

The Procedural Interpretation of Horn Clauses

The procedural interpretation is the top-down interpretation. A clause of the form

    B ← A1, ..., An        (n ≥ 0)

is interpreted as a procedure. The name of the procedure is the conclusion B, which identifies the form of the problems which the procedure can solve. The body of the procedure is the set of procedure calls Ai. A clause of the form

    ← B1, ..., Bm        (m ≥ 0)

consisting entirely of procedure calls (or problems to be solved) behaves as a goal statement. A procedure

    B ← A1, ..., An

is invoked by a procedure call Bi in the goal statement:

(1) by matching the call Bi with the name B of the procedure;
(2) by replacing the call Bi with the body of the procedure, obtaining the new goal statement

    ← B1, ..., Bi-1, A1, ..., An, Bi+1, ..., Bm;

and (3) by applying the matching substitution θ,

    ← (B1, ..., Bi-1, A1, ..., An, Bi+1, ..., Bm) θ.

(The matching substitution θ replaces variables by terms in a manner which makes B and Bi identical: Bθ = Biθ.) The part of the substitution θ which affects variables in the original procedure calls B1, ..., Bm transmits output. The part which affects variables in the new procedure calls A1, ..., An transmits input.

For example, invoking the grandparent procedure by the procedure call in

    ← Grandparent (Zeus, Harmonia)

derives the new goal statement

    ← Parent (Zeus, z), Parent (z, Harmonia).

The matching substitution

    x := Zeus, y := Harmonia

transmits input only. Invoking the assertional procedure

    Father (Zeus, Ares) ←

by the first procedure call in the goal statement

    ← Father (Zeus, z), Parent (z, Harmonia)

derives the new goal statement

    ← Parent (Ares, Harmonia).

The matching substitution z := Ares transmits output only. In general, however, a single procedure invocation may transmit both input and output.

The top-down interpretation of Horn clauses differs in several important respects from procedure invocation in conventional programming languages:

(1) The body of a procedure is a set rather than a sequence of procedure calls. This means that procedure calls can be executed in any sequence or in parallel.
(2) More than one procedure can have a name which matches a given procedure call. Finding the "right" procedure is a search problem which can be solved by trying the different procedures in sequence, in parallel, or in other more sophisticated ways.
(3) The input-output arguments of a procedure are not fixed but depend upon the procedure call. A procedure which tests that a relationship holds among given individuals can also be used to find individuals for which the relationship holds.
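Point (3) can be seen by running the grandparent procedure of the Notation section against different call patterns. The goal reductions sketched in the comments below follow the matching-substitution mechanism just described; the facts are those listed earlier, and the query outcomes shown assume exactly that fact base:

    grandparent(X, Y) :- parent(X, Z), parent(Z, Y).

    % Testing a relationship among given individuals:
    %   ?- grandparent(zeus, harmonia).
    %   reduces to   ?- parent(zeus, Z), parent(Z, harmonia).
    %   then, via parent(X, Y) :- father(X, Y) and father(zeus, ares),
    %   to           ?- parent(ares, harmonia).          % succeeds, Z = ares
    %
    % The same procedure finds individuals, output flowing back through
    % the matching substitution:
    %   ?- grandparent(zeus, Y).         % Y = harmonia
    %   ?- grandparent(X, harmonia).     % X = zeus ; X = hera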
The Relationship Between Logic and Control

In the preceding sections we considered alternative top-down and bottom-up control strategies for a fixed predicate logic representation of a problem domain. Different control strategies for the same logical representation generate different behaviors. However, information about a problem domain can be represented in logic in different ways. Alternative representations can have a more significant effect on the efficiency of an algorithm than alternative control strategies for the same representation.

Consider the problem of sorting a list. In one representation, we have the definition

    sorting x gives y ← y is a permutation of x, y is ordered.

(Here we have used distributed infix notation for predicate symbols, writing P1 x1 P2 x2 ... Pn xn Pn+1 instead of P (x1, ..., xn), where the Pi (possibly empty) are parts of P.)

As described in [24], different control strategies applied to the definition generate different behaviors. None of these behaviors, however, is efficient enough to qualify as a reasonable sorting algorithm. By contrast, even the simplest top-down, sequential control behaves efficiently with the logic of quicksort [17]:

    sorting x gives y ← x is empty, y is empty
    sorting x gives y ← first element of x is x1,
                        rest of x is x2,
                        partitioning x2 by x1 gives u and v,
                        sorting u gives u',
                        sorting v gives v',
                        appending w to u' gives y,
                        first element of w is x1,
                        rest of w is v'.

Like the predicates "permutation" and "ordered," the predicates "empty," "first," "rest," and "appending" can be defined independently from the definition of "sorting." (Partitioning x2 by x1 is intended to give the list u of the elements of x2 which are smaller than or equal to x1 and the list v of the elements of x2 which are greater than x1.)

Our thesis is that, in the systematic development of well-structured programs by successive refinement, the logic component needs to be specified before the control component. The logic component defines the problem-domain-specific part of an algorithm. It not only determines the meaning of the algorithm but also influences the way the algorithm behaves. The control component specifies the problem-solving strategy. It affects the behavior of the algorithm without affecting its meaning. Thus the efficiency of an algorithm can be improved by two very different approaches, either by improving the logic component or by leaving the logic component unchanged and improving the control over its use.

Bibel [3, 4], Clark, Darlington, and Sickel [7, 8, 9], and Hogger [21] have developed strategies for improving algorithms by ignoring the control component and using deduction to derive a new logic component. Their methods are similar to the ones used for transforming formal grammars and programs expressed as recursion equations [13].

In a logic programming system, specification of the control component is subordinate to specification of the logic component. The control component can either be expressed by the programmer in a separate control-specifying language, or it can be determined by the system itself. When logic is used, as in the relational calculus, for example [11], to specify queries for a database, the control component is determined entirely by the system. In general, the higher the level of the programming language and the less advanced the level of the programmer, the more the system needs to assume responsibility for efficiency and to exercise control over the use of the information which it is given.
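The two sorting definitions above can be sketched in Prolog over ordinary lists; both define the same sorting relation, and only the clauses differ. The predicate names are ours; permutation/2 and append/3 are standard list-library predicates:

    % sorting x gives y <- y is a permutation of x, y is ordered
    naive_sort(X, Y) :-
        permutation(X, Y),
        ordered(Y).

    ordered([]).
    ordered([_]).
    ordered([A, B | Rest]) :- A =< B, ordered([B | Rest]).

    % The logic of quicksort.
    quicksort([], []).
    quicksort([X1 | X2], Y) :-
        partition_le(X2, X1, U, V),     % U: elements =< X1, V: the rest
        quicksort(U, U1),
        quicksort(V, V1),
        append(U1, [X1 | V1], Y).

    partition_le([], _, [], []).
    partition_le([Z | Zs], X1, [Z | U], V) :- Z =< X1, partition_le(Zs, X1, U, V).
    partition_le([Z | Zs], X1, U, [Z | V]) :- Z >  X1, partition_le(Zs, X1, U, V).

Under the simplest top-down sequential control, naive_sort/2 enumerates permutations and tests each one, while quicksort/2 already behaves as a reasonable algorithm; this is the contrast the text draws.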
The provision of a separate control-specifying language is more likely to appeal to the more advanced programmer. Greater efficiency can often be achieved when the programmer is able to communicate control information to the computer. Such information might be, for example, that in the relation F (x, y) the value of y is a function of the argument x. This could be used by a backtracking interpreter to avoid looking for another solution to the first goal in the goal statement

    ← F (x, y), G (y)

when the second goal fails. Another example of such information might be that one procedure

    P ← Q

is more likely to solve P than another procedure

    P ← R.

This kind of information is common in fault diagnosis where, on the basis of past experience, it might be known that symptom P is more likely to have been caused by Q than by R.

Notice, in both of these examples, that the control information is problem-specific. However, if the control information is correct and the interpreter is correctly implemented, then the control information should not affect the meaning of the algorithm as determined by its logic component.

Data Structures

In a well-structured program it is desirable to separate data structures from the procedures which interrogate and manipulate them. Such separation means that the representation of data structures can be altered without altering the higher level procedures. Alteration of data structures is a way of improving algorithms by replacing an inefficient data structure by a more effective one. In a large, complex program, the demands for information made on the data structures are often fully determined only in the final stages of the program design. By separating data structures from procedures, the higher levels of the program can be written before the data structures have been finally decided.

The data structures of a program are already included in the logic component. Lists, for example, can be represented by terms, where

    nil            names the empty list and
    cons (x, y)    names the list with first element x and rest which is another list y.

Thus the term

    cons (2, cons (1, cons (3, nil)))

names the three-element list consisting of the individuals 2, 1, 3 in that order.

The data-structure-free definition of quicksort in the preceding section interacts with the data structure for lists via the definitions

    nil is empty ←
    first element of cons (x, y) is x ←
    rest of cons (x, y) is y ←.

If the predicates "empty," "first," and "rest" are eliminated from the definition of quicksort by a preliminary bottom-up deduction, then the original data-structure-free definition can be replaced by a definition which mixes the data structures with the procedures:

    sorting nil gives nil ←
    sorting cons (x1, x2) gives y ←
        partitioning x2 by x1 gives u and v,
        sorting u gives u',
        sorting v gives v',
        appending cons (x1, v') to u' gives y.

Clark and Tärnlund [6] show how to obtain a more efficient version of quicksort from the same abstract definition with a different data structure for lists.

Comparing the original data-structure-free definition with the new data-structure-dependent one, we notice another advantage of data-structure-independence: the fact that, with well-chosen names for the interfacing procedures, data-structure-independent programs are virtually self-documenting. For the conventional program which mixes procedures and data structures, the programmer has to provide documentation, external to the program, which explains the data structures. For the well-structured, data-structure-independent program, such documentation is provided by the interfacing procedures and is part of the program.
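A Prolog sketch of the corresponding data-structure-free program: only the interfacing procedures mention the cons/nil terms, so the list representation could be changed by rewriting them alone. The predicate names are our paraphrases of the paper's English forms:

    % Interfacing procedures: the only clauses that know about cons/nil.
    is_empty(nil).
    empty_list(nil).
    first_of(cons(X, _), X).
    rest_of(cons(_, Y), Y).
    make_list(X, Y, cons(X, Y)).

    % Data-structure-free quicksort: no cons/nil below this line.
    sorting(X, Y) :-
        is_empty(X), empty_list(Y).
    sorting(X, Y) :-
        first_of(X, X1), rest_of(X, X2),
        partitioning(X2, X1, U, V),
        sorting(U, U1),
        sorting(V, V1),
        make_list(X1, V1, W),
        appending(U1, W, Y).            % Y is U1 followed by W

    partitioning(X, _, U, V) :- is_empty(X), empty_list(U), empty_list(V).
    partitioning(X, P, U, V) :-
        first_of(X, Z), rest_of(X, Xs), Z =< P,
        make_list(Z, U1, U), partitioning(Xs, P, U1, V).
    partitioning(X, P, U, V) :-
        first_of(X, Z), rest_of(X, Xs), Z > P,
        make_list(Z, V1, V), partitioning(Xs, P, U, V1).

    appending(A, B, B) :- is_empty(A).
    appending(A, B, C) :-
        first_of(A, X), rest_of(A, As),
        appending(As, B, Cs), make_list(X, Cs, C).

    % ?- sorting(cons(2, cons(1, cons(3, nil))), Y).
    %    Y = cons(1, cons(2, cons(3, nil)))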
Despite the arguments for separating data structures and procedures, programmers mix them for the sake of run-time efficiency. One way of reconciling efficiency with good program structure is to use the macroexpansion facilities provided in some programming languages. Macroexpansion flattens the hierarchy of procedure calls before run-time and is the computational analog of the bottom-up and middle-out reasoning provided by some theorem-proving systems. Macroexpansion is also a feature of the program-improving transformations developed by Burstall and Darlington [13].

Notice how our terminology conflicts with Wirth's [39]: program = algorithm + data structure. In our terminology the definition of data structures belongs to the logic component of algorithms. Even more confusingly, we would like to call the logic component of algorithms "logic programs." This is because, given a fixed Horn clause interpreter, the programmer need only specify the logic component. The interpreter can exercise its own control over the way in which the information in the logic component is used. Of course, if the programmer knows how the interpreter behaves, then he can express himself in a manner which is designed to elicit the behavior he desires.

Top-Down Execution of Procedure Calls

In the simplest top-down execution strategy, procedure calls are executed one at a time in the sequence in which they are written. Typically an algorithm can be improved by executing the same procedure calls either as coroutines or as communicating parallel processes. The new algorithm

    A2 = L + C2

is obtained from the original algorithm

    A1 = L + C1

by replacing one control strategy by another, leaving the logic of the algorithm unchanged.

For example, executing procedure calls in sequence, the procedure

    sorting x gives y ← y is a permutation of x, y is ordered

first generates permutations of x and then tests whether they are ordered. Executing procedure calls as coroutines, the procedure generates permutations one element at a time. Whenever a new element is generated, the generation of other elements is suspended while it is determined whether the new element preserves the orderedness of the existing partial permutation. This example is discussed in more detail elsewhere [24].

Similarly, the procedure calls in the body of the quicksort definition can be executed either in sequence or as coroutines or parallel processes. Executed in parallel, partitioning the rest of x can be initiated as soon as the first elements of the rest are generated. Sorting the outputs, u and v, of the partitioning procedure can begin and proceed in parallel as soon as the first elements of u and v are generated. Appending can begin as soon as the first elements of u', the sorted version of u, are known.

Philippe Roussel [33] has investigated the problem of showing that two trees have the same lists of leaves:

    x and y have the same leaves ← the leaves of x are z,
                                   the leaves of y are z',
                                   z and z' are the same.

Executing procedure calls in the sequence in which they are written, the procedure first constructs the list z of leaves of x, then constructs the list z' of leaves of y, and finally tests that z and z' are the same. Roussel has argued that a more sophisticated algorithm is obtained, without changing the logic of the algorithm, by executing the same procedure calls as communicating parallel processes.
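A sequential Prolog rendering of Roussel's same-leaves logic; the tree representation leaf/1 and node/2 is our own choice. The coroutining and parallel controls discussed next would reuse exactly these clauses and change only the order in which the calls are executed:

    % leaves(Tree, Zs): Zs is the list of leaves of Tree, left to right.
    leaves(leaf(X), [X]).
    leaves(node(L, R), Zs) :-
        leaves(L, Ls),
        leaves(R, Rs),
        append(Ls, Rs, Zs).

    % x and y have the same leaves <- the leaves of x are z,
    %                                 the leaves of y are z',
    %                                 z and z' are the same.
    same_leaves(X, Y) :-
        leaves(X, Z),
        leaves(Y, Z1),
        Z = Z1.

    % ?- same_leaves(node(leaf(a), node(leaf(b), leaf(c))),
    %                node(node(leaf(a), leaf(b)), leaf(c))).   % succeeds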
When one process finds a leaf, it suspends activity and waits until the other process finds a leaf. A third process then tests whether the two leaves are identical. If the leaves are identical, then the first two processes resume. If the leaves are different, then the entire procedure fails and terminates.

The parallel algorithm is significantly more efficient than the simple sequential one when the two trees have different lists of leaves. In this case the sequential algorithm recognizes failure only after both lists have been completely generated. The parallel algorithm recognizes failure as soon as the two lists begin to differ.

The sequential algorithm, whether it eventually succeeds or fails, constructs the intermediate lists z and z', which are no longer needed when the procedure terminates. In contrast, the parallel algorithm can be implemented in such a way that it compares the two lists z and z' one element at a time, without constructing and saving the initial lists of those elements already compared and found to be identical.

In a high level programming language like SIMULA, it is possible to write both the usual sequential algorithms and also coroutining ones in which the programmer controls when coroutines are suspended and resumed. But, as in other conventional programming languages, logic and control are inextricably intertwined in the program text. It is not possible to change the control strategy of an algorithm without rewriting the program entirely.

The arguments for separating logic and control are like the ones for separating procedures and data structures. When procedures are separated from data structures, it is possible to distinguish (in the procedures) what functions the data structures fulfill from the manner in which the data structures fulfill them. When logic is separated from control, it is possible to distinguish (in the logic) what the program does from how the program does it (in the control). In both cases it is more obvious what the program does, and therefore it is easier to determine whether it correctly does what it is intended to do.

The work of Clark and Tärnlund [6] (on the correctness of sorting algorithms) and the unpublished work of Warren and Kowalski (on the correctness of plan-formation algorithms) supports the thesis that correctness proofs are simplified when they deal only with the logic component and ignore the control component of algorithms. Similarly, ignoring control simplifies the derivation of programs from specifications [3, 4, 7, 8, 9, 21].

Top-Down vs. Bottom-Up Execution

Recursive definitions are common in mathematics, where they are more likely to be understood bottom-up rather than top-down. Consider, for example, the definition of factorial:

    The factorial of 0 is 1.
    The factorial of x + 1 is v times x + 1 if the factorial of x is v.

The mathematician is likely to understand such a definition bottom-up, generating the sequence of assertions

    The factorial of 0 is 1 ←
    The factorial of 1 is 1 ←
    The factorial of 2 is 2 ←
    The factorial of 3 is 6 ←
    ...

Conventional programming language implementations understand recursion top-down. Programmers, accordingly, tend to identify recursive definitions with top-down execution. An interesting exception to the rule that recursive definitions are more efficiently executed top-down than bottom-up is the definition of a Fibonacci number,
which is both more intelligible and efficient when interpreted bottom-up:

    The 0th Fibonacci number is 1.
    The 1st Fibonacci number is 1.
    The u + 2nd Fibonacci number is x if the u + 1st Fibonacci number is y,
        the u-th Fibonacci number is z, and x is y plus z.

(Here the terms u + 2 and u + 1 are expressions to be evaluated rather than terms representing data structures. This notation is an abbreviation for the one which has explicit procedure calls in the body to evaluate u + 2 and u + 1.)

Interpreted top-down, finding the u + 1st Fibonacci number reintroduces the subproblem of finding the u-th Fibonacci number. The top-down computation is a tree whose nodes are procedure calls, the number of which is an exponential function of u. Interpreting the same definition bottom-up generates the sequence of assertions

    The 0th Fibonacci number is 1 ←
    The 1st Fibonacci number is 1 ←
    The 2nd Fibonacci number is 2 ←
    The 3rd Fibonacci number is 3 ←
    ...

The number of computation steps is a linear function of u.

In this example, bottom-up execution is also less space-consuming than top-down execution. Top-down execution uses space which is proportional to u, whereas bottom-up execution needs to store only two assertions and can use a small constant amount of storage. That only two assertions need to be stored during bottom-up execution is a consequence of the deletion rules for the connection graph proof procedure [25]. As Bibel observes, the greater efficiency of bottom-up execution disappears if similar procedure calls are executed top-down only once. This strategy is, in fact, an application of Warren's generalization of the Earley parsing algorithm.
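The contrast is easy to reproduce in Prolog. A direct top-down reading of the definition recomputes subproblems, while an accumulating version corresponds to the bottom-up execution, carrying only the last two Fibonacci numbers; the predicate names and the accumulator formulation are ours:

    % Top-down reading: the call tree is exponential in N.
    fib(0, 1).
    fib(1, 1).
    fib(N, F) :-
        N >= 2,
        N1 is N - 1, N2 is N - 2,
        fib(N1, F1), fib(N2, F2),
        F is F1 + F2.

    % Bottom-up behavior: a linear number of steps, keeping only the two
    % most recent values, as with the deletion rules mentioned above.
    fib_up(N, F) :- fib_up(N, 1, 1, F).

    fib_up(0, A, _, A).
    fib_up(N, A, B, F) :-
        N > 0,
        N1 is N - 1,
        C is A + B,
        fib_up(N1, B, C, F).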
Strategies for Investigating Alternative Procedures

When more than one procedure has a conclusion which matches a given procedure call, the logic component does not determine the manner in which the alternative procedures are investigated. Sequential exploration of alternatives gives rise to iterative algorithms. Although in theory all iterations can be replaced by top-down execution of recursive definitions, in practice some iterations might better be thought of as bottom-up execution of recursive definitions (as in the definition of factorial). Other iterations can better be regarded as controlling a sequential search among alternatives.

Assume, for example, that we are given data about individuals in the parenthood relationship:

    Parent (Zeus, Ares) ←
    Parent (Hera, Ares) ←
    Parent (Ares, Harmonia) ←
    Parent (Semele, Dionisius) ←
    Parent (Zeus, Dionisius) ←
    ...

Suppose that the problem is to find a grandchild of Zeus,

    ← Grandparent (Zeus, z),

using the definition of grandparent. In a conventional programming language, the parenthood relationship might be stored in a two-dimensional array. A general procedure for finding grandchildren (given grandparents) might involve two iterative loops, one nested inside the other, with a jump out when a solution has been found.

Similar behavior is obtained by interpreting the grandparent procedure top-down, executing procedure calls one at a time, in the sequence in which they are written, trying alternative procedures (assertions in this case) one at a time in the order in which they are written. The logical analysis of the conventional iterative algorithm does not concern recursion but involves sequential search through a space of alternatives. The sequential search strategy is identical to the backtracking strategy for executing nondeterministic programs [18].

Representation of data by means of clauses, as in the family relationships example, rather than by means of terms, is similar to the relational model of databases [10]. In both cases data is regarded as relationships among individuals. When data is represented by conventional data structures (terms), the programmer needs to specify in the logic component of programs and queries both how data is stored and how it is retrieved. When data is represented relationally (by clauses), the programmer needs only to specify data in the logic component; the control component manages both storage and retrieval.

The desirability of separating logic and control is now generally accepted in the field of databases. An important advantage is that storage and retrieval schemes can be changed and improved in the control component without affecting the user's view of the data as defined by the logic component.

The suitability of a search strategy for retrieving data depends upon the structure in which the data is stored. Iteration, regarded as sequential search, is suitable for data stored sequentially in arrays or linked lists. Other search strategies are more appropriate for other data structures, such as hash tables, binary trees, or semantic networks. McSkimin and Minker [29], for example, store clauses in a manner which facilitates both parallel search and finding all answers to a database query. Deliyanni and Kowalski [15], on the other hand, propose a path-finding strategy for retrieving data stored in semantic networks.

Representation of data by means of terms is a common feature of Horn clause programs written in PROLOG [12, 33, 38]. Tärnlund [36], in particular, has investigated the use of terms as data structures in Horn clause programs. Several PROLOG programs employ a relational representation of data. Notable among these are Warren's [37] for plan-formation and those for drug analysis written in the Ministry of Heavy Industry in Budapest [14].

Two Analyses of Path-Finding Algorithms

The same algorithm A can often be analyzed in different ways:

    A = L1 + C1
    A = L2 + C2.

Some of the behavior determined by the control component C1 in one analysis might be determined by the logic component L2 in another analysis. This has significance for understanding the relationship between programming style and execution facilities. In the short term sophisticated behavior can be obtained by employing simple execution strategies and by writing complicated programs. In the longer term the same behavior may be obtained from simpler programs by using more sophisticated execution strategies.

The path-finding problem illustrates a situation in which the same algorithm can be analyzed in different ways. Consider the problem of finding a path from A to Z in the following directed graph.

(Figure: a directed graph with initial node A and goal node Z.)

In one analysis, we employ a predicate Go (x) which states that it is possible to go to x. The problem of going from A to Z is then represented by two clauses. One asserts that it is possible to go to A. The other denies that it is possible to go to Z. The arc directed from A to B is represented by a clause which states that it is possible to go to B if it is possible to go to A:

    Go (A) ←
    ← Go (Z)
    Go (B) ← Go (A)        Go (C) ← Go (A)
    Go (D) ← Go (B)        Go (E) ← Go (B)
    Go (Y) ← Go (X)        Go (Z) ← Go (Y)

(the remaining arcs of the graph are represented by similar clauses).

Different control strategies determine different path-finding algorithms.
Forward search from the initial node A is bottom-up reasoning from the initial assertion Go (A) ←. Backward search from the goal node Z is top-down reasoning from the initial goal statement ← Go (Z). Bidirectional search from both the initial node and the goal node is the combination of top-down and bottom-up reasoning. Whether the path-finding algorithm investigates one path at a time (depth-first) or develops all paths simultaneously (breadth-first) is a matter of the search strategy used to investigate alternatives.

In the second analysis, we employ a predicate Go* (x, y) which states that it is possible to go from x to y. In addition to the assertions which describe the graph and the goal statement which describes the problem, there is a single clause which defines the logic of the path-finding algorithms:

    ← Go* (A, Z)
    Go* (A, B) ←        Go* (A, C) ←
    Go* (B, D) ←        Go* (B, E) ←
    Go* (X, Y) ←        Go* (Y, Z) ←
    ...
    Go* (x, y) ← Go* (x, z), Go* (z, y).

Here both forward search from the initial node A and backward search from the goal node Z are top-down reasoning from the initial goal statement ← Go* (A, Z). The difference between forward and backward search is the difference in the choice of subproblems in the body of the path-finding procedure. Solving Go* (x, z) before Go* (z, y) is forward search, and solving Go* (z, y) before Go* (x, z) is backward search. Coroutining between the two subproblems is bidirectional search. Bottom-up reasoning from the assertions which define the graph generates the transitive closure, effectively adding a new arc to the graph directed from node x to node y whenever there is a path from x to y.

Many problem domains have in common with path-finding that an initial state is given and the goal is to achieve some final state. The two representations of the path-finding problem exemplify alternative representations which apply more generally in other problem domains. Warren's plan-formation program [37], for example, is of the type which contains both the given and the goal state as different arguments of a single predicate. It runs efficiently with the sequential execution facilities provided by PROLOG. The alternative formulation, which employs a predicate having one state as argument, is easier to understand but more difficult to execute efficiently.

Even the definition of factorial can be represented in two ways. The formulation discussed earlier corresponds to the one-place predicate representation of path-finding. The formulation here corresponds to the two-place predicate representation. Read F (x, y, u, v) as stating that

    the factorial of x is y, given that the factorial of u is v.

    F (x, y, x, y) ←
    F (x, y, u, v) ← u plus 1 is u', u' times v is w, F (x, y, u', w).

To find the factorial of an integer represented by a term t, a single goal statement incorporates both the goal and the basis of the recursion:

    ← F (t, y, 0, 1).

The new formulation of factorial, executed in a simple top-down sequential manner, behaves in the same way as the original formulation executed in a mixed top-down, bottom-up fashion.
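Both representations of the path-finding problem, and the analogous two-place formulation of factorial, can be sketched in Prolog. The arc list below is only illustrative (the scanned figure does not preserve the paper's exact graph), and note that the paper's single Go* clause, with the arcs asserted directly as Go* facts, can loop under Prolog's depth-first control, so this sketch separates an arc relation:

    % First analysis: go(X) -- "it is possible to go to X".
    go(a).                      % it is possible to go to A
    go(b) :- go(a).             % one clause per arc
    go(c) :- go(a).
    go(d) :- go(b).
    go(e) :- go(b).
    go(x) :- go(c).             % illustrative arcs completing a path to z
    go(y) :- go(x).
    go(z) :- go(y).
    % ?- go(z).                 % backward (top-down) search from the goal

    % Second analysis: go2(X, Y) -- "it is possible to go from X to Y".
    arc(a, b).  arc(a, c).  arc(b, d).  arc(b, e).
    arc(c, x).  arc(x, y).  arc(y, z).

    go2(X, Y) :- arc(X, Y).
    go2(X, Y) :- arc(X, Z), go2(Z, Y).
    % ?- go2(a, z).             % succeeds via a, c, x, y, z

    % The two-place style of factorial, analogous to go2/2:
    % fact4(X, Y, U, V): the factorial of X is Y, given that the
    % factorial of U is V.
    fact4(X, Y, X, Y).
    fact4(X, Y, U, V) :-
        U < X,
        U1 is U + 1,
        W is U1 * V,
        fact4(X, Y, U1, W).
    % ?- fact4(5, Y, 0, 1).     % Y = 120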
‘Arrows are attached to atoms in clauses to indicate the direction of transmission of processing activity from clause to clause. For every pair of matching atoms in the initial set of clauses (one atom in the conclusion of a ‘clause and the other among the conditions of a clause), there is an arrow directed from one atom (0 the other For top-down execution, arrows are directed from ‘conditions to conclus. ns. For the grandparent problem, we have the graph Processing activity starts with the initial goal statement It transmits activity to the body of the grandparent procedure, whose procedure calls. in turn, activate the parenthood definitions. The database of assertions pas- sively accepts processing activity without transmitting it to other clauses. For bottom-up execution, arrows are directed from conclusions to conditions: A: Commsnications Jas 1979 or Volume 22 the ACM amber? Processing activity originates with the database of initial assertions. They transmit activity to the parenthood def- initions, which, in turn, activate the grandparent defini tion, Processing terminates when it reaches all the con- ditions in the passive initial goal statement. The grandparent definition can be used in a combi, nation of top-down and bottom-up methods. Using num bers to indicate sequencing, we can represent different combinations of top-down and bottom-up execution. For simplicity we only show the control notation associated with the grandparent definition. The combination of logic and control indicated by 3 Grandparent (x9) — Parent (3) Patent 2.99 1 v t 2 represents the algorithm which (1) waits until x is asserted to be parent of z, then (2) finds a child y of z. and finally GB) asserts that x is grandparent of The combination indicated by ' Grandpatent (x9) = Patent (2, Parent (2.3) 1 ' ' 2 represents the algorithm which (1) waits until x is asserted to be parent of. then. (2) waits until ic is given the problem of showing that x is grandparent of y (3). which it then attempts to solve by showing that is parent of y The algorithm represented by Grandparent (x9) = Patent (x21 Parent (29) t t 2 3 (1) responds to the problem of showing that xis grand- parent of y, 2) by waiting until x then (2) attempting to show that = is parent of y asserted to be parent of 2, and Using the arrow notation. we can be more precise than before about the bottom-up execution of the recur- sive definition of Fibonacci number. The bottom-up execution referred to previously is, in fact, a mixture of Dottom-up and top-down execution: 1 thew + 2 Fibs he + 1 Fibs 9 thew Fib is .y plus: ise 1 1 1 2 ' 3 a ‘Arrow notation can also be used to give a procedural interpretation of non-Horn clauses. The definition of subset, for example. "x is a subset of y if. for all z, if = belongs to x. then z belongs to 3” gives rise to a procedure which shows that x isa subset of y by showing that every member of x is a member of y. It does this by asserting that some individual belongs to x and by Attempting to show that the same individual belongs to y. The name of the individual must be different from the ‘name of any individual mentioned elsewhere, and it must depend upon x and y (being different for different x and»), In clausal notation with arrows to indicate control, the definition of subset becomes 1 2 4 1 isa subset of, arb Lx 9 belongs to © isa subaet of y arb (4 9) Belongs 10, t 1 Given the goal of showing that x isa subset of y. the first clause asserts that the individual named arb (x. 
Given the goal of showing that x is a subset of y, the first clause asserts that the individual named arb (x, y) belongs to x, and the second clause generates the goal of showing that arb (x, y) belongs to y.

The grandparent definition illustrates the inadequacy of the arrow notation for expressing certain kinds of control information. Suppose that the grandparent definition is to be used entirely top-down:

    Grandparent (x, y) ← Parent (x, z), Parent (z, y).

The effective sequencing of procedure calls in the body of the procedure depends upon the parameters of the activating procedure call:

(1) If the problem is to find a grandchild y of a given x, then it is more effective (i) first to find a child z of x, (ii) and then to find a child y of z.
(2) If the problem is to find a grandparent x of a given y, then it is better (i) first to find a parent z of y, (ii) and then to find a parent x of z.

Such sequencing of procedure calls depending on the pattern of input and output cannot be expressed in the arrow notation.

In relational database query languages, input-sensitive sequencing of procedure calls needs to be determined by the data retrieval system rather than by the user. Consider, for example, a database which defines the following relations:

    Supplier (x, y, z)    supplier number x has name y and status z
    Part (x, y, z)        part number x has name y and unit cost z
    Supply (x, y, z)      supplier number x supplies part number y in quantity z.

Given the query

    What are the names of suppliers of peas?
    ← Answer (y)
    Answer (y) ← Supplier (x, y, z), Supply (x, u, v), Part (u, pea, w)

the system needs to determine that, for the sake of efficiency, the last procedure call should be executed first; whereas given the query

    What are the names of parts supplied by Jones?
    ← Answer (y)
    Answer (y) ← Supplier (x, Jones, z), Supply (x, u, v), Part (u, y, w)

the first procedure call should be executed before the others.
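A sketch of the supplier-parts example as Prolog clauses; the sample rows are invented for illustration. The point is the call ordering: executed strictly left to right, the first query scans every supplier, whereas an input-sensitive retrieval system would call the part relation first because its arguments are the ones that are given:

    % supplier(SNo, SName, Status), part(PNo, PName, UnitCost),
    % supply(SNo, PNo, Quantity): illustrative rows only.
    supplier(s1, jones, 20).
    supplier(s2, smith, 10).
    part(p1, pea,  5).
    part(p2, bolt, 2).
    supply(s1, p2, 100).
    supply(s2, p1, 300).

    % "What are the names of suppliers of peas?"
    answer1(Y) :-
        supplier(X, Y, _),
        supply(X, U, _),
        part(U, pea, _).

    % "What are the names of parts supplied by Jones?"
    answer2(Y) :-
        supplier(X, jones, _),
        supply(X, U, _),
        part(U, Y, _).

    % For answer1/1 the last call should be executed first (it binds U);
    % for answer2/1 the supplier call should come first.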
The arrow notation can be used to control the behavior of a connection graph theorem-prover [25]. The links of a connection graph are turned into arrows by giving them a direction. A link may be activated (giving rise to a resolvent) only if the link is connected to a clause all of whose links are outgoing. The links of the derived resolvent inherit the direction of the links from which they descend in the parent clauses. Connection graphs controlled in such a manner are similar to Petri nets [30].

Conclusion

We have argued that conventional algorithms can usefully be regarded as consisting of two components: (1) a logic component which specifies what is to be done and (2) a control component which determines how it is to be done. The efficiency of an algorithm can often be improved by improving the efficiency of the control component without changing the logic and therefore without changing the meaning of the algorithm.

The same algorithm can often be formulated in different ways. One formulation might incorporate a clear statement, in the logic component, of the knowledge to be used in solving the problem and achieve efficiency by employing sophisticated problem-solving strategies in the control component. Another formulation might produce the same behavior by complicating the logic component and employing a simple problem-solving strategy.

Although the trend in databases is towards the separation of logic and control, programming languages today do not distinguish between them. The programmer specifies both logic and control in a single language, while the execution mechanism exercises only the most rudimentary problem-solving capabilities. Computer programs will be more often correct, more easily improved, and more readily adapted to new problems when programming languages separate logic and control, and when execution mechanisms provide more powerful problem-solving facilities of the kind provided by intelligent theorem-proving systems.

Acknowledgments. The author has benefited from valuable discussions with K. Clark, A. Colmerauer, M. van Emden, P. Hayes, P. Roussel, S. Tärnlund, and D. Warren. Special thanks are due to W. Bibel, K. Clark, M. van Emden, P. Hayes, and D. Warren for their helpful comments on earlier drafts of this paper. This research was supported by a grant from the Science Research Council. The final draft of this paper was completed during a visiting professorship held in the School of Computer and Information Science at the University of Syracuse.

Received December 197_.

References

1. Bibel, W., and Schreiber, J. Proof procedures in a Gentzen-like system of first-order logic. Proc. Int. Computing Symp., North-Holland Pub. Co., Amsterdam, 1975, pp. 208-212.
2. Bibel, W. Programmieren in der Sprache der Prädikatenlogik. Habilitationsarbeit, Fachbereich Mathematik, Techn. U. München, Jan. 1975. Shorter versions published as "Prädikatives Programmieren," Lecture Notes in Computer Science 33, GI-2. Fachtagung über Automatentheorie und formale Sprachen, Springer-Verlag, Berlin, Heidelberg, New York, 1975; and as "Prédicative programming," Séminaires IRIA: théorie des algorithmes, des langages et de la programmation, IRIA, Rocquencourt, France.
3. Bibel, W. Synthesis of strategic definitions and their control. Bericht, Abt. Mathematik, Techn. U. München, 1976.
4. Bibel, W. A uniform approach to programming. Bericht Nr. 7633, Abt. Mathematik, Techn. U. München, 1976.
5. Bledsoe, W.W., and Bruell, P. A man-machine theorem-proving system. Artif. Intell. 5, 1 (Spring 1974), 51-72.
6. Clark, K.L., and Tärnlund, S.-Å. A first order theory of data and programs. Information Processing 77, North-Holland Pub. Co., Amsterdam, 1977, pp. 939-944.
7. Clark, K., and Sickel, S. Predicate logic: a calculus for deriving programs. Proc. Int. Joint Conf. Artif. Intell., 1977.
8. Clark, K. The synthesis and verification of logic programs. Res. Rep., Dept. of Computing and Control, Imperial College, London, 1977.
9. Clark, K., and Darlington, J. Algorithm analysis through synthesis. Res. Rep., Dept. of Computing and Control, Imperial College, London, Oct. 1977.
10. Codd, E.F. A relational model of data for large shared data banks. Comm. ACM 13, 6 (June 1970), 377-387.
11. Codd, E.F. Relational completeness of data base sublanguages. In Data Base Systems, R. Rustin, Ed., Prentice-Hall, Englewood Cliffs, N.J., 1972.
12. Colmerauer, A., Kanoui, H., Pasero, R., and Roussel, P. Un système de communication homme-machine en français. Rapport préliminaire, Groupe de Recherche en Intelligence Artificielle, U. d'Aix-Marseille, Luminy, 1972.
13. Darlington, J., and Burstall, R.M. A system which automatically improves programs. Proc. Third Int. Joint Conf. Artif. Intell., SRI, Menlo Park, Calif., 1973.
14. Darvas, F., Futó, I., and Szeredi, P. Logic based program for predicting drug interactions. To appear in Int. J. of Biomedical Computing.
15. Deliyanni, A., and Kowalski, R.A. Logic and semantic networks. Comm. ACM 22, 3 (March 1979), 184-192.
16. Earley, J. An efficient context-free parsing algorithm. Comm. ACM 13, 2 (Feb. 1970), 94-102.
17. van Emden, M.H. Programming in resolution logic. To appear in Machine Representations of Knowledge, published as Machine Intelligence 8, E.W. Elcock and D. Michie, Eds., Ellis Horwood and John Wiley.
18. Floyd, R.W. Nondeterministic algorithms. J. ACM 14, 4 (Oct. 1967), 636-644.
19. Hayes, P.J. Computation and deduction. Proc. 2nd MFCS Symp., Czechoslovak Acad. of Sciences, 1973, pp. 105-118.
20. Hewitt, C. Planner: a language for proving theorems in robots. Proc. Int. Joint Conf. Artif. Intell., Washington, D.C., 1969, pp. 295-301.
21. Hogger, C. Deductive synthesis of logic programs. Res. Rep., Dept. of Computing and Control, Imperial College, London, 1977.
22. Kleene, S.C. Introduction to Metamathematics. Van Nostrand, New York, 1952.
23. Kowalski, R.A. Logic for problem solving. Memo No. 75, Dept. of Computational Logic, U. of Edinburgh, 1974.
24. Kowalski, R.A. Predicate logic as programming language. Information Processing 74, North-Holland Pub. Co., Amsterdam, 1974, pp. 569-574.
25. Kowalski, R.A. A proof procedure using connection graphs. J. ACM 22, 4 (Oct. 1975), 572-595.
26. Kowalski, R.A., and Kuehner, D. Linear resolution with selection function. Artif. Intell. 2 (1971), 227-260.
27. Loveland, D.W. A simplified format for the model-elimination theorem-proving procedure. J. ACM 16, 3 (July 1969), 349-363.
28. McCarthy, J. A basis for a mathematical theory of computation. In Computer Programming and Formal Systems, P. Braffort and D. Hirschberg, Eds., North-Holland Pub. Co., Amsterdam, 1963.
29. McSkimin, J.R., and Minker, J. The use of a semantic network in a deductive question-answering system. Proc. Int. Joint Conf. Artif. Intell., 1977.
30. Petri, C.A. Grundsätzliches zur Beschreibung diskreter Prozesse. 3. Colloquium über Automatentheorie, Birkhäuser Verlag, Basel, Switzerland, 1967.
31. Pratt, V.R. The competence/performance dichotomy in programming. Proc. Fourth ACM SIGACT-SIGPLAN Symp. on Principles of Programming Languages, Santa Monica, Calif., Jan. 1977.
32. Robinson, J.A. Automatic deduction with hyper-resolution. Int. J. Comput. Math. 1 (1965), 227-234.
33. Roussel, P. Prolog: Manuel de référence et d'utilisation. Groupe d'Intelligence Artificielle, UER de Luminy, U. d'Aix-Marseille, France, 1975.
34. Schwarz, J. Using annotations to make recursion equations behave. Res. Memo, Dept. of Artificial Intelligence, U. of Edinburgh, 1977.
35. Sickel, S. A search technique for clause interconnectivity graphs. IEEE Trans. Comptrs. (Special Issue on Automatic Theorem Proving), Aug. 1976.
36. Tärnlund, S.-Å. An interpreter for the programming language predicate logic. Proc. Int. Joint Conf. Artif. Intell., Tbilisi, 1975, pp. 601-608.
37. Warren, D. A system for generating plans. Memo No. 76, Dept. of Computational Logic, U. of Edinburgh, 1974.
38. Warren, D., Pereira, L.M., and Pereira, F. PROLOG: the language and its implementation compared with LISP. Proc. Symp. on Artif. Intell. and Programming Languages; SIGPLAN Notices (ACM) 12, 8, and SIGART Newsletter (ACM) 64 (Aug. 1977), pp. 109-115.
39. Wirth, N. Algorithms + Data Structures = Programs. Prentice-Hall, Englewood Cliffs, N.J., 1976.
