
LOGIC PROGRAMMING

DISCLAIMER

This document does not claim any originality and cannot be used
as a substitute for prescribed textbooks. The information
presented here is just for the completion of teaching
assignments. Contents of various sources as mentioned at the end
of the document were taken verbatim for preparing this document.
The ownership of the information lies with the respective
authors or institutions.

Sudhakar Singh
Department of Electronics and Communication
University of Allahabad
Logic Programming
Logic programs are declarative rather than procedural, which means that only the specifications
of the desired results are stated rather than detailed procedures for producing them. Programs in
logic programming languages are collections of facts and rules. Such a program is used by
asking it questions, which it attempts to answer by consulting the facts and rules.

Programming that uses a form of symbolic logic as a programming language is often
called logic programming, and languages based on symbolic logic are called logic
programming languages, or declarative languages. Prolog is the most widely used logic
programming language.

1. A Brief Introduction to Predicate Calculus


A proposition can be thought of as a logical statement that may or may not be true. It consists of
objects and the relationships among objects. Formal logic was developed to provide a method for
describing propositions, with the goal of allowing those formally stated propositions to be
checked for validity.

Symbolic logic can be used for the three basic needs of formal logic: to express
propositions, to express the relationships between propositions, and to describe how new
propositions can be inferred from other propositions that are assumed to be true.

The particular form of symbolic logic that is used for logic programming is called
first-order predicate calculus (though it is a bit imprecise, we will usually refer to it as predicate
calculus). In the following subsections, we present a brief look at predicate calculus. Our goal is
to lay the groundwork for a discussion of logic programming and the logic programming
language Prolog.

1.1 Propositions

The objects in logic programming propositions are represented by simple terms, which are
either constants or variables. A constant is a symbol that represents an object. A variable is a
symbol that can represent different objects at different times, although in a sense that is far closer
to mathematics than the variables in an imperative programming language.

The simplest propositions, which are called atomic propositions, consist of compound
terms. A compound term is one element of a mathematical relation, written in a form that has
the appearance of mathematical function notation.

A compound term is composed of two parts: a functor, which is the function symbol that
names the relation, and an ordered list of parameters, which together represent an element of the
relation. A compound term with a single parameter is a 1-tuple; one with two parameters is a
2-tuple, and so forth. For example, we might have the two propositions
man(jake)
like(bob, steak)

which state that {jake} is a 1-tuple in the relation named man, and that {bob, steak} is a 2-tuple
in the relation named like. If we added the proposition

man(fred)

to the two previous propositions, then the relation man would have two distinct elements, {jake}
and {fred}. All of the simple terms in these propositions—man, jake, like, bob, and steak—are
constants. Note that these propositions have no intrinsic semantics. They mean whatever we
want them to mean. For example, the second example may mean that bob likes steak, or that
steak likes bob, or that bob is in some way similar to a steak.
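The view of propositions as tuples in relations can be made concrete with a short sketch. The following Python fragment is purely a modeling notation (not Prolog): each relation is a set of tuples, and a ground atomic proposition is "true" exactly when its tuple is a member of the relation.

```python
# Model each relation as a set of tuples; the functor is the relation's name.
man = {("jake",), ("fred",)}      # two 1-tuples in the relation man
like = {("bob", "steak")}         # one 2-tuple in the relation like

# man(jake) holds because its tuple is in the relation.
print(("jake",) in man)

# Argument order matters: like(steak, bob) is a different tuple and is absent.
print(("steak", "bob") in like)
```

Note that, as in the text, the membership test carries no intrinsic semantics; the tuples mean whatever we decide they mean.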

Propositions can be stated in two modes: one in which the proposition is defined to be
true, and one in which the truth of the proposition is something that is to be determined. In other
words, propositions can be stated to be facts or queries.

Compound propositions have two or more atomic propositions, which are connected by
logical connectors, or operators, in the same way compound logic expressions are constructed in
imperative languages. The names, symbols, and meanings of the predicate calculus logical
connectors are as follows:

Name          Symbol   Example   Meaning
negation      ¬        ¬a        not a
conjunction   ∩        a ∩ b     a and b
disjunction   ∪        a ∪ b     a or b
equivalence   ≡        a ≡ b     a is equivalent to b
implication   ⊃        a ⊃ b     a implies b
              ⊂        a ⊂ b     b implies a

The following are examples of compound propositions:

a ∩ b ⊃ c
a ∩ ¬b ⊃ d

The ¬ operator has the highest precedence. The operators ∩, ∪, and ≡ all have higher
precedence than ⊃ and ⊂. So, the second example is equivalent to

(a ∩ (¬b)) ⊃ d
Variables can appear in propositions but only when introduced by special symbols called
quantifiers. Predicate calculus includes two quantifiers, as described below, where X is a variable
and P is a proposition:

Name          Example   Meaning
universal     ∀X.P      For all X, P is true
existential   ∃X.P      There exists a value of X such that P is true

The period between X and P simply separates the variable from the proposition. For example,
consider the following:

∀X.(woman(X) ⊃ human(X))
∃X.(mother(mary, X) ∩ male(X))

The first of these propositions means that for any value of X, if X is a woman, then X is a human.
The second means that there exists a value of X such that mary is the mother of X and X is a
male; in other words, mary has a son. The scope of the universal and existential quantifiers is the
atomic propositions to which they are attached. This scope can be extended using parentheses, as
in the two compound propositions just described. So, the universal and existential quantifiers
have higher precedence than any of the operators.
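Over a finite universe, both quantified propositions can be checked mechanically. The following Python sketch uses a made-up domain and predicate definitions (all names here are invented for illustration); `all` plays the role of ∀ and `any` the role of ∃.

```python
# Hypothetical finite universe and predicates, represented as sets.
domain = {"mary", "sue", "bob"}
woman  = {"mary", "sue"}
human  = {"mary", "sue", "bob"}
mother = {("mary", "bob")}       # mother(mary, bob)
male   = {"bob"}

# ∀X.(woman(X) ⊃ human(X)): the implication must hold for every X.
# Recall that (p ⊃ q) is equivalent to (¬p ∪ q).
forall_holds = all((x not in woman) or (x in human) for x in domain)

# ∃X.(mother(mary, X) ∩ male(X)): some X is both mary's child and male.
exists_holds = any(("mary", x) in mother and x in male for x in domain)

print(forall_holds, exists_holds)
```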

1.2 Clausal Form

One problem with predicate calculus is that there are too many different ways of stating
propositions that have the same meaning; that is, there is a great deal of redundancy. This is not
such a problem for logicians, but if predicate calculus is to be used in an automated
(computerized) system, it is a serious problem. To simplify matters, a standard form for
propositions is desirable. Clausal form, which is a relatively simple form of propositions, is one
such standard form. All propositions can be expressed in clausal form. A proposition in clausal
form has the following general syntax:

B1 ∪ B2 ∪ . . . ∪ Bn ⊂ A1 ∩ A2 ∩ . . . ∩ Am

in which the A’s and B’s are terms. The meaning of this clausal form proposition is as follows: If
all of the A’s are true, then at least one B is true.

The primary characteristics of clausal form propositions are the following: Existential quantifiers
are not required; universal quantifiers are implicit in the use of variables in the atomic
propositions; and no operators other than conjunction and disjunction are required. Also,
conjunction and disjunction need to appear only in the order shown in the general clausal form:
disjunction on the left side and conjunction on the right side. All predicate calculus propositions
can be algorithmically converted to clausal form.
The right side of a clausal form proposition is called the antecedent. The left side is
called the consequent because it is the consequence of the truth of the antecedent. As examples
of clausal form propositions, consider the following:

likes(bob, trout) ⊂ likes(bob, fish) ∩ fish(trout)


father(louis, alex) ∪ father(louis, violet) ⊂ father(alex, bob)
∩ mother(violet, bob) ∩ grandfather(louis, bob)

The English version of the first of these states that if bob likes fish and a trout is a fish, then bob
likes trout. The second states that if alex is bob’s father and violet is bob’s mother and louis is
bob’s grandfather, then louis is either alex’s father or violet’s father.

2. Predicate Calculus and Proving Theorems


Predicate calculus provides a method of expressing collections of propositions. One use of
collections of propositions is to determine whether any interesting or useful facts can be inferred
from them. This is exactly analogous to the work of mathematicians, who strive to discover new
theorems that can be inferred from known axioms and theorems.

The early days of computer science (the 1950s and early 1960s) saw a great deal of
interest in automating the theorem-proving process. One of the most significant breakthroughs in
automatic theorem proving was the discovery of the resolution principle by Alan Robinson
(1965) at Syracuse University.

Resolution is an inference rule that allows inferred propositions to be computed from
given propositions, thus providing a method with potential application to automatic theorem
proving. Resolution was devised to be applied to propositions in clausal form. The concept of
resolution is the following:

Suppose there are two propositions with the forms

P1 ⊂ P2
Q1 ⊂ Q2

Their meaning is that P2 implies P1 and Q2 implies Q1. Furthermore, suppose that P1 is
identical to Q2, so that we could rename P1 and Q2 as T. Then, we could rewrite the two
propositions as

T ⊂ P2
Q1 ⊂ T

Now, because P2 implies T and T implies Q1, it is logically obvious that P2 implies Q1, which
we could write as
Q1 ⊂ P2

The process of inferring this proposition from the original two propositions is resolution.

As another example, consider the two propositions:

older(joanne, jake) ⊂ mother(joanne, jake)


wiser(joanne, jake) ⊂ older(joanne, jake)

From these propositions, the following proposition can be constructed using resolution:

wiser(joanne, jake) ⊂ mother(joanne, jake)

The mechanics of this resolution construction are simple: The terms of the left sides of the two
clausal propositions are OR’d together to make the left side of the new proposition. Then the
right sides of the two clausal propositions are AND’d together to get the right side of the new
proposition. Next, any term that appears on both sides of the new proposition is removed from
both sides.

For example, if we have

father(bob, jake) ∪ mother(bob, jake) ⊂ parent(bob, jake)


grandfather(bob, fred) ⊂ father(bob, jake) ∩ father(jake, fred)

resolution says that

mother(bob, jake) ∪ grandfather(bob, fred) ⊂
parent(bob, jake) ∩ father(jake, fred)

In English, we would say

if: bob is the parent of jake implies that bob is either the father or mother of jake
and: bob is the father of jake and jake is the father of fred implies that bob is the grandfather of
fred
then: if bob is the parent of jake and jake is the father of fred then: either bob is jake’s mother or
bob is fred’s grandfather

Resolution is actually more complex than these simple examples illustrate. In particular,
the presence of variables in propositions requires resolution to find values for those variables that
allow the matching process to succeed. This process of determining useful values for variables is
called unification. The temporary assigning of values to variables to allow unification is called
instantiation.
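A minimal sketch of unification for flat (non-nested) terms follows, under the simplifying assumption that variables are strings beginning with an uppercase letter and everything else is a constant. This toy matcher ignores nested structures and other details a real system must handle.

```python
def is_var(t):
    """A term is treated as a variable if it starts with an uppercase letter."""
    return isinstance(t, str) and t[:1].isupper()

def unify(args1, args2, subst=None):
    """Try to make two equal-length argument lists identical by instantiating
    variables; return the substitution on success, or None on failure."""
    subst = dict(subst or {})
    for a, b in zip(args1, args2):
        a, b = subst.get(a, a), subst.get(b, b)   # apply bindings made so far
        if a == b:
            continue
        if is_var(a):
            subst[a] = b          # instantiate the variable
        elif is_var(b):
            subst[b] = a
        else:
            return None           # two different constants cannot match
    return subst

# Unifying older(X, jake) with older(joanne, jake) instantiates X to joanne.
print(unify(["X", "jake"], ["joanne", "jake"]))

# Two distinct constants fail to unify.
print(unify(["bob"], ["fred"]))
```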

A critically important property of resolution is its ability to detect any inconsistency in a
given set of propositions. This is based on the formal property of resolution called refutation
completeness. What this means is that given a set of inconsistent propositions, resolution can prove
them to be inconsistent. This allows resolution to be used to prove theorems, which can be done
as follows: We can envision a theorem proof in terms of predicate calculus as a given set of
pertinent propositions, with the negation of the theorem itself stated as a new proposition. The
theorem is negated so that resolution can be used to prove the theorem by finding an
inconsistency. This is proof by contradiction, a frequently used approach to proving theorems in
mathematics. Typically, the original propositions are called the hypotheses, and the negation of
the theorem is called the goal.

Theoretically, this process is valid and useful. The time required for resolution, however,
can be a problem. Although resolution is a finite process when the set of propositions is finite,
the time required to find an inconsistency in a large database of propositions may be huge.

Theorem proving is the basis for logic programming. Much of what is computed can be
couched in the form of a list of given facts and relationships as hypotheses, and a goal to be
inferred from the hypotheses, using resolution.

Resolution on hypotheses and a goal that are general propositions, even if they are in
clausal form, is often not practical. Although it may be possible to prove a theorem using clausal
form propositions, it may not happen in a reasonable amount of time. One way to simplify the
resolution process is to restrict the form of the propositions. One useful restriction is to require
the propositions to be Horn clauses. Horn clauses are named after Alfred Horn (Horn, 1951), who
studied clauses in this form.

Horn clauses can be in only two forms: They have either a single atomic proposition on
the left side or an empty left side. The left side of a clausal form proposition is sometimes called
the head, and Horn clauses with left sides are called headed Horn clauses. Headed Horn clauses
are used to state relationships, such as

likes(bob, trout) ⊂ likes(bob, fish) ∩ fish(trout)

Horn clauses with empty left sides, which are often used to state facts, are called headless Horn
clauses. For example,

father(bob, jake)

Most, but not all, propositions can be stated as Horn clauses. The restriction to Horn clauses
makes resolution a practical process for proving theorems.

3. An Overview of Logic Programming


Languages used for logic programming are called declarative languages, because programs
written in them consist of declarations rather than assignments and control flow statements.
These declarations are actually statements, or propositions, in symbolic logic.
One of the essential characteristics of logic programming languages is their semantics,
which is called declarative semantics. The basic concept of this semantics is that there is a
simple way to determine the meaning of each statement, and it does not depend on how the
statement might be used to solve a problem. Declarative semantics is considerably simpler than
the semantics of the imperative languages.

Programming in a logic programming language is nonprocedural. Programs in such
languages do not state exactly how a result is to be computed but rather describe the form of the
result. The difference is that we assume the computer system can somehow determine how the
result is to be computed. What is needed to provide this capability for logic programming
languages is a concise means of supplying the computer with both the relevant information and a
method of inference for computing desired results. Predicate calculus supplies the basic form of
communication to the computer, and resolution provides the inference technique.

4. The Origins of Prolog


Alain Colmerauer and Philippe Roussel at the University of Aix-Marseille, with some assistance
from Robert Kowalski at the University of Edinburgh, developed the fundamental design of
Prolog.

The development of Prolog and other research efforts in logic programming received
limited attention outside of Edinburgh and Marseille until the announcement in 1981 that the
Japanese government was launching a large research project called the Fifth Generation
Computing Systems (FGCS; Fuchi, 1981; Moto-oka, 1981). One of the primary objectives of the
project was to develop intelligent machines, and Prolog was chosen as the basis for this effort.
The announcement of FGCS aroused in researchers and the governments of the United States
and several European countries a sudden strong interest in artificial intelligence and logic
programming.

After a decade of effort, the FGCS project was quietly dropped. Despite the great
assumed potential of logic programming and Prolog, little of great significance had been
discovered. This led to the decline in the interest in and use of Prolog, although it still has its
applications and proponents.

5. The Basic Elements of Prolog


There are now a number of different dialects of Prolog. These can be grouped into several
categories: those that grew from the Marseille group, those that came from the Edinburgh group,
and some dialects that have been developed for microcomputers, such as micro-Prolog, which is
described by Clark and McCabe (1984). The syntactic forms of these are somewhat different.
Rather than attempt to describe the syntax of several dialects of Prolog or some hybrid of them,
we have chosen one particular, widely available dialect, which is the one developed at
Edinburgh. This form of the language is sometimes called Edinburgh syntax. Its first
implementation was on a DEC System-10 (Warren et al., 1979). Prolog implementations are
available for virtually all popular computer platforms, for example, from the Free Software
Foundation (http://www.gnu.org).

5.1 Terms

All Prolog statements, as well as Prolog data, are constructed from terms. A Prolog term is a
constant, a variable, or a structure. A constant is either an atom or an integer. Atoms are the
symbolic values of Prolog. In particular, an atom is either a string of letters, digits, and
underscores that begins with a lowercase letter or a string of any printable ASCII characters
delimited by apostrophes.

A variable is any string of letters, digits, and underscores that begins with an uppercase
letter or an underscore ( _ ). Variables are not bound to types by declarations. The binding of a
value, and thus a type, to a variable is called an instantiation. Instantiation occurs only in the
resolution process. A variable that has not been assigned a value is called uninstantiated.
Instantiations last only as long as it takes to satisfy one complete goal, which involves the proof
or disproof of one proposition.
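The lexical rules for atoms and variables can be summarized with a pair of regular expressions. The Python sketch below is an approximation for illustration only; real Prolog tokenizers handle more cases (for example, escape sequences inside quoted atoms).

```python
import re

# Atom: lowercase letter followed by letters/digits/underscores,
#       or any printable ASCII characters delimited by apostrophes.
ATOM = re.compile(r"[a-z][A-Za-z0-9_]*$|'[ -~]*'$")

# Variable: uppercase letter or underscore, then letters/digits/underscores.
VAR = re.compile(r"[A-Z_][A-Za-z0-9_]*$")

for token in ["likes", "bob_1", "'Mrs. Smith'", "X", "_tail", "Alpha"]:
    kind = ("atom" if ATOM.match(token)
            else "variable" if VAR.match(token)
            else "other")
    print(token, "->", kind)
```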

Structures represent the atomic propositions of predicate calculus, and their general form
is the same:

functor(parameter list)

The functor is any atom and is used to identify the structure. The parameter list can be any list of
atoms, variables, or other structures. Structures are the means of specifying facts in Prolog. They
can also be thought of as objects, in which case they allow facts to be stated in terms of several
related atoms. In this sense, structures are relations, for they state relationships among terms.

5.2 Fact Statements

Prolog has two basic statement forms; these correspond to the headless and headed Horn clauses
of predicate calculus. The simplest form of headless Horn clause in Prolog is a single structure,
which is interpreted as an unconditional assertion, or fact. Logically, facts are simply
propositions that are assumed to be true.

The following examples illustrate the kinds of facts one can have in a Prolog program.
Notice that every Prolog statement is terminated by a period.
female(shelley).
male(bill).
female(mary).
male(jake).
father(bill, jake).
father(bill, shelley).
mother(mary, jake).
mother(mary, shelley).

These simple structures state certain facts about jake, shelley, bill, and mary. For example, the
first states that shelley is a female. The last four connect their two parameters with a relationship
that is named in the functor atom; for example, the fifth proposition might be interpreted to mean
that bill is the father of jake. Note that these Prolog propositions, like those of predicate calculus,
have no intrinsic semantics. They mean whatever the programmer wants them to mean. For
example, the proposition
father(bill, jake).

could mean bill and jake have the same father or that jake is the father of bill. The most common
and straightforward meaning, however, might be that bill is the father of jake.

5.3 Rule Statements

The other basic form of Prolog statement for constructing the database corresponds to a headed
Horn clause. The right side is the antecedent, or if part, and the left side is the consequent, or
then part. If the antecedent of a Prolog statement is true, then the consequent of the statement
must also be true. Because they are Horn clauses, the consequent of a Prolog statement is a
single term, while the antecedent can be either a single term or a conjunction.

Conjunctions contain multiple terms that are separated by logical AND operations. In
Prolog, the structures that specify atomic propositions in a conjunction are separated by commas,
so one could consider the commas to be AND operators. As an example of a conjunction,
consider the following:
female(shelley), child(shelley).

The general form of the Prolog headed Horn clause statement is


consequence :- antecedent_expression.

It is read as follows: “consequence can be concluded if the antecedent expression is true or can
be made to be true by some instantiation of its variables.” For example,
ancestor(mary, shelley) :- mother(mary, shelley).

states that if mary is the mother of shelley, then mary is an ancestor of shelley. Headed Horn
clauses are called rules, because they state rules of implication between propositions.

As with clausal form propositions in predicate calculus, Prolog statements can use
variables to generalize their meaning. Recall that variables in clausal form provide a kind of
implied universal quantifier. The following demonstrates the use of variables in Prolog
statements:
parent(X, Y) :- mother(X, Y).
parent(X, Y) :- father(X, Y).
grandparent(X, Z) :- parent(X, Y) , parent(Y, Z).

These statements give rules of implication among some variables, or universal objects. In this
case, the universal objects are X, Y, and Z. The first rule states that if there are instantiations of X
and Y such that mother(X, Y) is true, then for those same instantiations of X and Y, parent(X, Y) is
true.
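Under the simplifying assumption that all facts are ground, the effect of these rules can be simulated in a few lines of Python. This is a naive forward-chaining illustration of what the rules license us to conclude, not how a Prolog system actually works (Prolog answers queries by backward chaining); the fact father(jake, fred) is borrowed from the earlier resolution example to make grandparent nonempty.

```python
# Ground facts, modeled as sets of tuples.
mother = {("mary", "shelley"), ("mary", "jake")}
father = {("bill", "shelley"), ("bill", "jake"), ("jake", "fred")}

# parent(X, Y) :- mother(X, Y).
# parent(X, Y) :- father(X, Y).
# Two rules with the same head act as alternatives, i.e., a union.
parent = mother | father

# grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
# The shared variable Y forces the join condition y1 == y2.
grandparent = {(x, z)
               for (x, y1) in parent
               for (y2, z) in parent
               if y1 == y2}

print(sorted(grandparent))
```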

The = operator, which is an infix operator, succeeds if its two term operands are the
same. For example, X = Y. The not operator, which is a unary operator, reverses its operand, in
the sense that it succeeds if its operand fails. For example, not(X = Y) succeeds if X is not equal
to Y.

5.4 Goal Statements

The Prolog statements described so far state known facts and rules that describe logical
relationships among facts. These statements are the basis for the theorem-proving model. The
theorem is in the form of a proposition that we want the system to either prove or disprove. In
Prolog, these propositions are called goals, or queries. The syntactic form
of Prolog goal statements is identical to that of headless Horn clauses. For example, we could
have
man(fred).

to which the system will respond either yes or no. The answer yes means that the system has
proved the goal was true under the given database of facts and relationships. The answer no
means that either the goal was determined to be false or the system was simply unable to prove
it.

Conjunctive propositions and propositions with variables are also legal goals. When
variables are present, the system not only asserts the validity of the goal but also identifies the
instantiations of the variables that make the goal true. For example,
father(X, mike).

can be asked. The system will then attempt, through unification, to find an instantiation of X that
results in a true value for the goal.
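The effect of such a query over ground facts can be sketched in Python. This toy fragment only searches a set of tuples for matching instantiations (real Prolog uses full unification and backtracking); the fact father(jake, mike) is invented here so the query has an answer.

```python
# Hypothetical database of ground father/2 facts.
father = {("bill", "jake"), ("bill", "shelley"), ("jake", "mike")}

def query_father(x, y):
    """Answer the goal father(x, y), where either argument may be the
    variable 'X'; return the instantiations of X that make the goal true."""
    return sorted(a if x == "X" else b
                  for (a, b) in father
                  if (x == "X" or a == x) and (y == "X" or b == y))

print(query_father("X", "mike"))    # who is mike's father?
print(query_father("bill", "X"))    # who are bill's children?
```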

REFERENCES

1. Robert W. Sebesta, Concepts of Programming Languages, 10th Edition, Pearson.
