
Propositional Calculus

UNIT 1 PROPOSITIONAL CALCULUS


Structure
1.0 Introduction
1.1 Objectives
1.2 Propositions
1.3 Logical Connectives
1.3.1 Disjunction
1.3.2 Conjunction
1.3.3 Negation
1.3.4 Conditional Connectives
1.3.5 Precedence Rule
1.4 Logical Equivalence
1.5 Logical Quantifiers
1.6 Summary
1.7 Solutions/ Answers

1.0 INTRODUCTION
According to the theory of evolution, human beings have evolved from the lower
species over many millennia. The chief asset that made humans “superior” to their
ancestors was the ability to reason. How well this ability has been used for scientific
and technological development is common knowledge. But no systematic study of
logical reasoning seems to have been done for a long time. The first such study that
has been found is by Greek philosopher Aristotle (384-322 BC). In a modified form,
this type of logic seems to have been taught through the Middle Ages.

Then came a major development in the study of logic, its formalisation in terms of
mathematics. It was mainly Leibniz (1646-1716) and George Boole (1815-1864) who
seriously studied and developed this theory, called symbolic logic. It is the basics
of this theory that we aim to introduce you to in this unit and the next one.

In the introduction to the block you have read about what symbolic logic is. Using it
we can formalise our arguments and logical reasoning in a manner that can easily
show if the reasoning is valid, or is a fallacy. How we symbolise the reasoning is what
is presented in this unit.

More precisely, in Section 1.2 (i.e., Sec. 1.2, in brief) we talk about what kind of
sentences are acceptable in mathematical logic. We call such sentences statements or
propositions. You will also see that a statement can either be true or false.
Accordingly, as you will see, we will give the statement a truth value T or F.

In Sec. 1.3 we begin our study of the logical relationship between propositions. This
is called propositional calculus. In this we look at some ways of connecting simple
propositions to obtain more complex ones. To do so, we use logical connectives like
“and” and “or”. We also introduce you to other connectives like “not”, “implies” and
“implies and is implied by”. At the same time we construct tables that allow us to
find the truth values of the compound statement that we get.

In Sec. 1.4 we consider the conditions under which two statements are “the same”. In
such a situation we can safely replace one by the other.

And finally, in Sec 1.5, we talk about some common terminology and notation which
is useful for quantifying the objects we are dealing with in a statement.

It is important for you to study this unit carefully, because the other units in this block
are based on it. Please be sure to do the exercises as you come to them. Only then
will you be able to achieve the following objectives.
1.1 OBJECTIVES
After reading this unit, you should be able to:

• distinguish between propositions and non-propositions;


• construct the truth table of any compound proposition;
• identify and use logically equivalent statements;
• identify and use logical quantifiers.

Let us now begin our discussion on mathematical logic.

1.2 PROPOSITIONS
Consider the sentence ‘In 2003, the President of India was a woman’. When you read
this declarative sentence, you can immediately decide whether it is true or false. And
so can anyone else. Also, it wouldn’t happen that some people say that the statement
is true and some others say that it is false. Everybody would have the same answer.
So this sentence is either universally true or universally false.

Similarly, ‘An elephant weighs more than a human being.’ is a declarative sentence
which is either true or false, but not both. In mathematical logic we call such
sentences statements or propositions.

On the other hand, consider the declarative sentence ‘Women are more intelligent than
men’. Some people may think it is true while others may disagree. So, it is neither
universally true nor universally false. Such a sentence is not acceptable as a statement
or proposition in mathematical logic.

Note that a proposition should be either uniformly true or uniformly false. For
example, ‘An egg has protein in it.’, and ‘The Prime Minister of India has to be a
man.’ are both propositions, the first one true and the second one false.

Would you say that the following are propositions?


‘Watch the film.’
‘How wonderful!’
‘What did you say?’

Actually, none of them are declarative sentences. (The first one is an order, the
second an exclamation and the third is a question.) And therefore, none of them are
propositions.

Now for some mathematical propositions! You must have studied and created many
of them while doing mathematics. Some examples are

Two plus two equals four.


Two plus two equals five.
x + y > 0 for x > 0 and y > 0.
A set with n elements has 2ⁿ subsets.

Of these statements, three are true and one false (which one?).
Now consider the algebraic sentence ‘x + y > 0’. Is this a proposition? Are we in a
position to determine whether it is true or false? Not unless we know the values that x
and y can take. For example, it is false for
x = 1, y = -2 and true if x = 1, y = 0. Therefore,
‘x + y > 0’ is not a proposition, while
‘x + y > 0 for x > 0, y > 0’ is a proposition.

Why don’t you try this short exercise now?

E1) Which of the following sentences are statements? What are the reasons for your
answer?
i) The sun rises in the West.
ii) How far is Delhi from here?
iii) Smoking is injurious to health.
iv) There is no rain without clouds.
v) What is a beautiful day!
vi) She is an engineering graduate.
vii) 2n + n is an even number for infinitely many n.
viii) x + y = y + x for all x, y ∈ R.
ix) Mathematics is fun.
x) 2ⁿ = n².

Usually, when dealing with propositions, we shall denote them by lower case
letters like p, q, etc. So, for example, we may denote

‘Ice is always cold.’ by p, or


‘cos² θ + sin² θ = 1 for θ ∈ [0, 2π]’ by q.
We shall sometimes show this by saying
p: Ice is always cold., or
q: cos² θ + sin² θ = 1 for θ ∈ [0, 2π].

Now, given a proposition, we know that it is either true or false, but not both. If
it is true, we will allot it the truth value T. If it is false, its truth value will be
F. So, for example, the truth value of ‘Ice melts at 30° C.’ is F, while that of
‘x² ≥ 0 for x ∈ R’ is T. (Sometimes, as in the context of logic circuits (see Unit 3),
we will use 1 instead of T and 0 instead of F.)

Here are some exercises for you now.

E2) Give the truth values of the propositions in E1.

E3) Give two propositions each, the truth values of which are T and F, respectively.
Also give two examples of sentences that are not propositions.

Let us now look at ways of connecting simple propositions to obtain compound


statements.

1.3 LOGICAL CONNECTIVES


When you’re talking to someone, do you use very simple sentences only? Don’t you
use more complicated ones which are joined by words like ‘and’, ‘or’, etc? In the
same way, most statements in mathematical logic are combinations of simpler
statements joined by words and phrases like ‘and’, ‘or’, ‘if … then’, ‘if and only if’,
etc. These words and phrases are called logical connectives. There are 6 such
connectives, which we shall discuss one by one.

1.3.1 Disjunction
Consider the sentence ‘Alice or the mouse went to the market.’. This can be written as
‘Alice went to the market or the mouse went to the market.’ So, this statement is
actually made up of two simple statements connected by ‘or’. We have a term for such
a compound statement.

Definition: The disjunction of two propositions p and q is the compound statement


p or q, denoted by p ∨ q.

For example, ‘Zarina has written a book or Singh has written a book.’ is the
disjunction of p and q, where
p : Zarina has written a book, and
q : Singh has written a book.

Similarly, if p denotes ‘ 2 > 0’ and q denotes ‘2 < 5’, then p ∨ q denotes the statement
‘2 is greater than 0 or 2 is less than 5.’.

Let us now look at how the truth value of p ∨ q depends upon the truth values of p and
q. For doing so, let us look at the example of Zarina and Singh, given above. If even
one of them has written a book, then the compound statement p ∨ q is true. Also, if
both have written books, the compound statement p ∨ q is again true. Thus, if the
truth value of even one out of p and q is T, then that of ‘p ∨ q’ is T. Otherwise, the
truth value of p ∨ q is F. This holds for any pair of propositions p and q. To see the
relation between the truth values of p, q and p ∨ q easily, we put this in the form of a
table (Table 1), which we call a truth table.

Table 1: Truth table for disjunction

p q p∨q
T T T
T F T
F T T
F F F

How do we form this table? We consider the truth values that p can take – T or F.
Now, when p is true, q can be true or false. Similarly, when p is false q can be true or
false. In this way there are 4 possibilities for the compound proposition p ∨ q. Given
any of these possibilities, we can find the truth value of p ∨ q. For instance, consider
the third possibility, i.e., p is false and q is true. Then, by definition, p ∨ q is true. In
the same way, you can check that the other rows are consistent.
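If you would like to check a truth table mechanically, a few lines of code are enough. The following sketch (in Python, which is not used in this unit and is chosen here only for illustration) lists the four possible pairs of truth values and computes p ∨ q with the built-in ‘or’, reproducing Table 1:

from itertools import product

# Enumerate the four possible truth-value combinations of p and q
# and print the corresponding row of Table 1 (disjunction).
print("p  q  p v q")
for p, q in product([True, False], repeat=2):
    row = (p, q, p or q)                      # 'or' is inclusive, like v
    print("  ".join("T" if v else "F" for v in row))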

Let us consider an example.

Example 1: Obtain the truth value of the disjunction of ‘The earth is flat.’
and ‘3 + 5 = 2’.

Solution: Let p denote ‘The earth is flat,’ and q denote ‘3 + 5 = 2’. Then we know
that the truth values of both p and q are F. Therefore, the truth value of p ∨ q is F.

***
Try an exercise now.

E4) Write down the disjunction of the following propositions, and give its truth
value.
i) 2 + 3 = 7,
ii) Radha is an engineer.

We also use the term ‘inclusive or ‘ for the connective we have just discussed. This is
because p ∨ q is true even when both p and q are true. But, what happens when we
want to ensure that only one of them should be true? Then we have the following
connective.

Definition: The exclusive disjunction of two propositions p and q is the statement

‘Either p is true or q is true, but both are not true.’. We denote this by p ⊕ q.
So, for example, if p is ‘2 + 3 = 5’ and q the statement given in E4(ii), then p ⊕ q is
the statement ‘Either 2 + 3 = 5 or Radha is an engineer’. This will be true only if
Radha is not an engineer.

In general, how is the truth value of p ⊕ q related to the truth values of p and q? This
is what the following exercise is about.

E5) Write down the truth table for ⊕. Remember that p ⊕ q is not true if both p and
q are true.

Now let us look at the logical analogue of the coordinating conjunction ‘and’.

1.3.2 Conjunction
As in ordinary language, we use ‘and’ to combine simple propositions to make
compound ones. For instance, ‘ 1 + 4 ≠ 5 and Prof. Rao teaches Chemistry.’ is
formed by joining ‘1 + 4 ≠ 5’ and ‘Prof. Rao teaches Chemistry’ by ‘and’. Let us
define the formal terminology for such a compound statement.

Definition: We call the compound statement ‘p and q’ the conjunction of the


statements p and q. We denote this by p ∧ q.

For instance, ‘3 + 1 ≠ 7 ∧ 2 > 0’ is the conjunction of ‘3 + 1 ≠ 7’ and ‘2 > 0’.


Similarly, ‘2 + 1 = 3 ∧ 3 = 5’ is the conjunction of ‘2 + 1 = 3’ and ‘3 = 5’.

Now, when would p ∧ q be true? Do you agree that this could happen only when both
p and q are true, and not otherwise? For instance, ‘2 + 1 = 3 ∧ 3 = 5’ is not true
because ‘3 = 5’ is false.
So, the truth table for conjunction would be as in Table 2.
Table 2: Truth table for conjunction

P q p∧q
T T T
T F F
F T F
F F F

To see how we can use the truth table above, consider an example.

Example 2: Obtain the truth value of the conjunction of ‘2 ÷5 = 1’ and ‘Padma is in


Bangalore.’.

Solution: Let p : 2 ÷5 = 1, and


q: Padma is in Bangalore.

Then the truth value of p is F. Therefore, from Table 2 you will find that the truth
value of p ∧ q is F.

***
Why don’t you try an exercise now?

E6) Give the set of those real numbers x for which the truth value of p ∧ q is T,
where p : x > -2, and q:x+3≠7

If you look at Tables 1 and 2, do you see a relationship between the truth values in
their last columns? You would be able to formalize this relationship after studying the
next connective.
1.3.3 Negation
You must have come across young children who, when asked to do something, go
ahead and do exactly the opposite. Or, when asked if they would like to eat, say rice
and curry, will say ‘No’, the ‘negation’ of yes! Now, if p denotes the statement ‘I will
eat rice.’, how can we denote ‘I will not eat rice.’? Let us define the connective that
will help us do so.

Definition: The negation of a proposition p is ‘not p’, denoted by ~p.

For example, if p is ‘Dolly is at the study center.’, then ~ p is ‘Dolly is not at the study
center’. Similarly, if p is ‘No person can live without oxygen.’, ~ p is ‘At least one
person can live without oxygen.’.

Now, regarding the truth value of ~ p, you would agree that it would be T if that of p
is F, and vice versa. Keeping this in mind you can try the following exercises.

E7) Write down ~ p, where p is


i) 0–5≠5
ii) n > 2 for every n ∈ N.
iii) Most Indian children study till class 5.

E8) Write down the truth table of negation.

Let us now discuss the conditional connectives, representing ‘If …, then …’ and ‘if
and only if’.

1.3.4 Conditional Connectives


Consider the proposition ‘If Ayesha gets 75% or more in the examination, then she
will get an A grade for the course.’. We can write this statement as ‘If p, then q’,
where
p: Ayesha gets 75% or more in the examination, and
q: Ayesha will get an A grade for the course.

This compound statement is an example of the implication of q by p.

Definition: Given any two propositions p and q, we denote the statement ‘If p, then
q’ by p → q. We also read this as ‘p implies q’, or ‘p is sufficient for q’, or ‘p only if
q’. We also call p the hypothesis and q the conclusion. Further, a statement of the
form p → q is called a conditional statement or a conditional proposition.

So, for example, in the conditional proposition ‘If m is in Z, then m belongs to Q.’ the
hypothesis is ‘m ∈ Z’ and the conclusion is ‘m ∈ Q’.

Mathematically, we can write this statement as

m ∈ Z → m ∈ Q.

Let us analyse the statement p → q for its truth value. Do you agree with the truth
table we’ve given below (Table 3)? You may like to check it out while keeping an
example from your surroundings in mind.
Table 3: Truth table for implication

p q p→q
T T T
T F F
F T T
F F T
You may wonder about the third row in Table 3. But, consider the example ‘3 < 0 →
5 > 0’. Here the conclusion is true regardless of what the hypothesis is. And
therefore, the conditional statement remains true. In such a situation we say that the
conditional statement is vacuously true.

Why don’t you try this exercise now?

E9) Write down the proposition corresponding to p → q, and determine the values
of x for which it is false, where
p : x + y = xy where x, y ∈ R
q : x ⊀ 0 for every x ∈ Z.

Now, consider the implication ‘If Jahanara goes to Baroda, then she doesn’t
participate in the conference at Delhi.’. What would its converse be? To find it, the
following definition may be useful.

Definition: The converse of p → q is q → p. In this case we also say ‘p is


necessary for q’, or ‘p if q’.

So, in the example above, the converse of the statement would be ‘If Jahanara doesn’t
participate in the conference at Delhi, then she goes to Baroda.’. This means that
Jahanara’s non-participation in the conference at Delhi is necessary for her going to
Baroda.

Now, what happens when we combine an implication and its converse?

To show ‘p → q and q → p’, we introduce a shorter notation.

Definition: Let p and q be two propositions. The compound statement


(p → q) ∧ (q → p) is the biconditional of p and q. We denote it by p ↔ q, and read it
as ‘p if and only if q’. (The two connectives → and ↔ are called conditional
connectives.)

We usually shorten ‘if and only if’ to iff.

We also say that ‘p implies and is implied by q’. or ‘p is necessary and sufficient
for q’.

For example, ‘Sudha will gain weight if and only if she eats regularly.’ means that
‘Sudha will gain weight if she eats regularly and Sudha will eat regularly if she gains
weight.’

One point that may come to your mind here is whether there’s any difference in the
two statements p ↔ q and q ↔ p. When you study Sec. 1.4 you will realize why they
are inter-changeable.

Let us now consider the truth table of the biconditional, i.e., of the two-way implication.

To obtain its truth values, we need to use Tables 2 and 3, as you will see in Table 4.
This is because, to find the value of ( p → q ) ∧ ( q → p) we need to know the values
of each of the simpler statements involved.
Table 4: Truth table for two-way implication.

p q p→q q→p p↔q


T T T T T
T F F T F
F T T F F
F F T T T

As you can see from the last column of the table (and from your own experience),
p ↔ q is true only when both p and q are true or both p and q are false. In other
words, p ↔ q is true only when p and q have the same truth values. Thus, for
example,‘Parimala is in America iff 2 + 3 = 5’ is true only if ‘Parimala is in America,’
is true.

Here are some related exercises.

E10) For each of the following compound statements, first identify the simple
propositions p, q, r, etc., that are combined to make it. Then write it in symbols,
using the connectives, and give its truth value.
i) If triangle ABC is equilateral, then it is isosceles.
ii) a and b are integers if and only if ab is a rational number.
iii) If Raza has five glasses of water and Sudha has four cups of tea, then
Shyam will not pass the math examination.
iv) Mariam is in Class 1 or in Class 2.

E11) Write down two propositions p and q for which q → p is true but p ↔ q is
false.

Now, how would you determine the truth value of a proposition which has more than
one connective in it? For instance, does ~ p ∨ q mean ( ~ p) ∨ q or ~ ( p ∨ q)? We
discuss some rules for this below.

1.3.5 Precedence Rule


While dealing with operations on numbers, you would have realized the need for
applying the BODMAS rule. According to this rule, when calculating the value of an
arithmetic expression, we first calculate the value of the Bracketed portion, then apply
Of, Division, Multiplication, Addition and Subtraction, in this order. While
calculating the truth value of compound propositions involving more than one
connective, we have a similar convention which tells us which connective to apply
first.

Why do we need such a convention? Suppose we didn’t have an order of preference,


and want to find the truth of, say ~ p ∨ q. Some of us may consider the value of ( ~
p) ∨ q, and some may consider ~ ( p ∨ q). The truth values can be different in
these cases. For instance, if p and q are both true, then ( ~ p) ∨ q is true, but ~ (
p ∨ q) is false. So, for the purpose of unambiguity, we agree to such an order or
rule. Let us see what it is.

The rule of precedence: The order of preference in which the connectives are
applied in a formula of propositions that has no brackets is

i) ~
ii) ∧
iii) ∨ and ⊕
iv) → and ↔

Note that the ‘inclusive or’ and ‘exclusive or’ are both third in the order of preference.
However, if both these appear in a statement, we first apply the left most one. So, for
instance, in p ∨ q ⊕ ~ p, we first apply ∨ and then ⊕. The same applies to the
‘implication’ and the ‘biconditional’, which are both fourth in the order of preference.

To clearly understand how this rule works, let us consider an example.

Example 3: Write down the truth table of p → q ∧ ~ r ↔ r ⊕ q


Solution: We want to find the required truth value when we are given the truth values
of p, q and r. According to the rule of precedence given above, we need to first find
the truth value of ~ r, then that of ( q ∧ ~ r), then that of (r ⊕ q), and then that of p →
( q ∧ ~ r), and finally the truth value of [ p → ( q ∧ ~ r)] ↔ r ⊕ q.

So, for instance, suppose p and q are true, and r is false. Then ~ r will have value T, q
∧ ~ r will be T, r ⊕ q will be T, p → ( q ∧ ~ r) will be T, and hence, p → q ∧ ~ r ↔ r
⊕ q will be T.

You can check that the rest of the values are as given in Table 5. Note that we have 8
possibilities (=23) because there are 3 simple propositions involved here.
Table 5: Truth table for p → q ∧ ~ r ↔ r ⊕ q

p q r ~r q∧~r r⊕q p→q∧~r p→q∧~r↔r⊕q


T T T F F F F T
T T F T T T T T
T F T F F T F F
T F F T F F F T
F T T F F F T F
F T F T T T T T
F F T F F T T T
F F F T F F T F

***
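The rule of precedence is also easy to check by brute force. The sketch below (Python, again only an illustration and not part of the original text) evaluates the fully bracketed form [p → (q ∧ ~ r)] ↔ (r ⊕ q) for all eight combinations of truth values; its output agrees with Table 5:

from itertools import product

def implies(a, b):
    """Material implication a -> b."""
    return (not a) or b

print("p  q  r  result")
for p, q, r in product([True, False], repeat=3):
    # Bracketing follows the precedence rule: [p -> (q and not r)] <-> (r xor q)
    result = implies(p, q and (not r)) == (r != q)   # '!=' acts as exclusive or on booleans
    print("  ".join("T" if v else "F" for v in (p, q, r, result)))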

You may now like to try some exercises on the same lines.

E12) In Example 3, how will the truth values of the compound statement change if
you first apply ↔ and then → ?

E13) In Example 3, if we replace ⊕ by ∧, what is the new truth table?

E14) Form the truth tables of p ∧ q ∨ ~ r and (p ∧ q) ∨ (~ r), and see whether they
differ.

E15) How would you bracket the following formulae to correctly interpret them?
[For instance, p ∨ ~ q ∧ r would be bracketed as p ∨ ((~ q) ∧ r).]
i) ~ p ∨ q,
ii) ~ q → ~ p,
iii) p → q ↔ ~ p ∨ q,
iv) p ⊕ q ∧ r → ~ p ∨ q ↔ p ∧ r.

So far we have considered different ways of making new statements from old ones.
But, are all these new ones distinct? Or are some of them the same? And “same” in
what way? This is what we shall now consider.

1.4 LOGICAL EQUIVALENCE


‘Then you should say what you mean’, the March Hare went on. ‘I do,’ Alice hastily
replied, ‘at least … at least I mean what I say – that’s the same thing you know.’
‘Not the same thing a bit!’ said the Hatter. ‘Why you might just as well say that “I see
what I eat” is the same thing as “I eat what I see”!’
-from ‘Alice in Wonderland’
by Lewis Carroll
In Mathematics, as in ordinary language, there can be several ways of saying the same
thing. In this section we shall discuss what this means in the context of logical
statements.

Consider the statements ‘If Lala is rich, then he must own a car.’ and ‘If Lala doesn’t
own a car, then he is not rich.’. Do these statements mean the same thing? If we write
the first one as p → q, then the second one will be (~q) → (~ p). How do the truth
values of both these statements compare?

We find out in the following table.


Table 6

p q ~p ~q p→q ~ q → ~p
T T F F T T
T F F T F F
F T T F T T
F F T T T T

Consider the last two columns of Table 6. You will find that ‘p → q’ and ‘~ q → ~ p’
have the same truth value for every choice of truth values of p and q. When this
happens, we call them equivalent statements.

Definition: We call two propositions r and s logically equivalent provided they have
the same truth value for every choice of truth values of simple propositions involved
in them. We denote this fact by r ≡ s.

So, from Table 6 we find that ( p → q) ≡ (~ q → ~ p).

You can also check that ( p ↔ q) ≡ ( q ↔ p) for any pair of propositions p and q.

As another example, consider the following equivalence that is often used in


mathematics. You could also apply it to obtain statements equivalent to ‘Neither a
borrower, nor a lender be.’!

Example 4: For any two propositions p and q, show that ~ (p ∨ q ) ≡ ~ p ∧ ~ q.

Solution: Consider the following truth table.

Table 7

p q ~p ~q p∨q ~ ( p ∨ q) ~p∧~q
T T F F T F F
T F F T T F F
F T T F T F F
F F T T F T T

You can see that the last two columns of Table 7 are identical. Thus, the truth values
of ~ ( p ∨ q) and ~ p ∧ ~ q agree for every choice of truth values of p and q.
Therefore, ~ (p ∨ q) ≡ ~ p ∧ ~ q.
***
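A table such as Table 7 can also be generated by a small program. The sketch below (Python, only an illustration and not part of the original text) defines a generic check that two formulae agree on every assignment of truth values, and uses it to confirm De Morgan’s law ~ (p ∨ q) ≡ ~ p ∧ ~ q:

from itertools import product

def equivalent(f, g, n):
    """True if formulae f and g (functions of n Boolean arguments)
    agree on every one of the 2**n truth-value assignments."""
    return all(f(*vals) == g(*vals)
               for vals in product([True, False], repeat=n))

# De Morgan's law: ~(p or q) is equivalent to (~p) and (~q)
lhs = lambda p, q: not (p or q)
rhs = lambda p, q: (not p) and (not q)
print(equivalent(lhs, rhs, 2))   # prints True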

The equivalence you have just seen is one of De Morgan’s laws. You might have
already come across these laws in your previous studies of basic Mathematics.

The other law due to De Morgan is similar : ~ (p ∧ q) ≡ ~ p ∨ ~ q.


Fig. 1: Augustus De Morgan (1806-1871) was born in Madurai.

In fact, there are several such laws about equivalent propositions. Some of them are
the following, where, as usual, p, q and r denote propositions.
a) Double negation law : ~ ( ~ p) ≡ p
b) Idempotent laws: p ∧ p ≡ p,
p∨p≡p
c) Commutativity: p∨q≡q∨p
p∧q≡q∧p
d) Associativity: (p ∨ q) ∨ r ≡ p ∨ (q ∨ r)
(p ∧ q) ∧ r ≡ p ∧ ( q ∧ r)
e) Distributivity: p ∨ (q ∧ r) ≡ (p ∨ q) ∧ (p ∨ r)
p ∧ ( q ∨ r) ≡ (p ∧ q) ∨ ( p ∧ r)

We ask you to prove these laws now.

E16) Show that the laws given in (a)-(e) above hold true.
E17) Prove that the relation of ‘logical equivalence’ is an equivalence relation.
E18) Check whether ( ~ p ∨ q) and ( p → q) are logically equivalent.

The laws given above and the equivalence you have checked in E18 are commonly
used, and therefore, useful to remember. You will also be applying them in Unit 3 of
this Block in the context of switching circuits.

Let us now consider some propositional formulae which are always true or always
false. Take, for instance, the statement ‘If Bano is sleeping and Pappu likes ice-cream,
then Bano is sleeping’. You can draw up the truth table of this compound proposition
and see that it is always true. This leads us to the following definition.

Definition: A compound proposition that is true for all possible truth values of the
simple propositions involved in it is called a tautology. Similarly, a proposition that
is false for all possible truth values of the simple propositions that constitute it is
called a contradiction.

Let us look at some examples of such propositions.

Example 5: Verify that p ∧ q ∧ ~ p is a contradiction and p → q ↔ ~ p ∨ q is a


tautology.

Solution: Let us simultaneously draw up the truth tables of these two propositions
below.
Table 8

p q ~p p∧q p∧q∧~p p→q ~p∨q p→q↔~p∨q


T T F T F T T T
T F F F F F F T
F T T F F T T T
F F T F F T T T

Looking at the fifth column of the table, you can see that p ∧ q ∧ ~p is a contradiction.
This should not be surprising since p ∧ q ∧ ~ p ≡ ( p ∧ ~ p) ∧ q (check this by using
the various laws given above).

And what does the last column of the table show? Precisely that p → q ↔ ~ p ∨ q is
a tautology.
***
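The same brute-force idea detects tautologies and contradictions. The sketch below (Python, only an illustration and not part of the original text) checks the two formulae of Example 5 over all four assignments of truth values to p and q:

from itertools import product

def truth_values(f, n):
    """Truth values of formula f over all 2**n assignments."""
    return [f(*vals) for vals in product([True, False], repeat=n)]

def implies(a, b):
    return (not a) or b

contradiction = lambda p, q: p and q and (not p)                # p ∧ q ∧ ~p
tautology = lambda p, q: implies(p, q) == ((not p) or q)        # (p → q) ↔ (~p ∨ q)

print(not any(truth_values(contradiction, 2)))   # True: always false, a contradiction
print(all(truth_values(tautology, 2)))           # True: always true, a tautology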

Why don’t you try an exercise now?

E19) Let T denote a tautology ( i.e., a statement whose truth value is always T) and F
a contradiction. Then, for any statement p, show that
i) p ∨ T ≡ T
ii) p∧T ≡p
iii) p∨F≡p
iv) p∧F≡F

Another way of proving that a proposition is a tautology is to use the properties of


logical equivalence. Let us look at the following example.

Example 6: Show that [(p → q) ∧ ~ q] → ~ p is a tautology.

Solution: [(p → q) ∧ ~ q] → ~ p
≡ [(~ p ∨ q) ∧ ~ q] → ~ p, using E18, and symmetricity of ≡
≡ [(~ p ∧ ~ q) ∨ (q ∧ ~ q)] → ~ p, by distributivity
≡ [(~ p ∧ ~ q) ∨ F] → ~ p, since q ∧ ~ q is always false (the complementation law: q ∧ ~ q is a contradiction)
≡ (~ p ∧ ~ q) → ~ p, using E19,

which is a tautology.

And therefore the proposition we started with is a tautology.

***

The laws of logical equivalence can also be used to prove some other logical
equivalences, without using truth tables. Let us consider an example.

Example 7: Show that (p → ~ q) ∧ ( p → ~ r) ≡ ~ [ p ∧ ( q ∨ r)].

Solution: We shall start with the statement on the left hand side of the equivalence
that we have to prove. Then, we shall apply the laws we have listed above, or the
equivalence in E 18, to obtain logically equivalent statements. We shall continue this
process till we obtain the statement on the right hand side of the equivalence given
above. Now

(p → ~ q) ∧ (p → ~ r)
≡ (~ p ∨ ~ q) ∧ (~ p ∨ ~ r), by E18
≡ ~ p ∨ ( ~ q ∧ ~ r), by distributivity
≡ ~ p ∨ [ ~ (q ∨ r)], by De Morgan’s laws
≡ ~ [p ∧ (q ∨ r)], by De Morgan’s laws

So we have proved the equivalence that we wanted to.

***

You may now like to try the following exercises on the same lines.

E20) Use the laws given in this section to show that


~ (~ p ∧ q) ∧ ( p ∨ q) ≡ p.

E21) Write down the statement ‘If it is raining and if rain implies that no one can go
to see a film, then no one can go to see a film.’ as a compound proposition.
Show that this proposition is a tautology, by using the properties of logical
equivalence.

E22) Give an example, with justification, of a compound proposition that is neither a


tautology nor a contradiction.

Let us now consider propositional functions.


1.5 LOGICAL QUANTIFIERS
In Sec. 1.2, you read that a sentence like ‘She has gone to Patna.’ is not a proposition,
unless who ‘she’ is, is clearly specified.
Similarly, ‘x > 5’ is not a proposition unless we know the values of x that we are
considering. Such sentences are examples of ‘propositional functions’.

Definition: A propositional function, or a predicate, in a variable x is a sentence


p(x) involving x that becomes a proposition when we give x a definite value from the
set of values it can take. We usually denote such functions by p(x), q(x), etc. The set
of values x can take is called the universe of discourse.

So, if p(x) is ‘x > 5’, then p(x) is not a proposition. But when we give x particular
values, say x = 6 or x = 0, then we get propositions. Here, p(6) is a true proposition
and p(0) is a false proposition.

Similarly, if q(x) is ‘x has gone to Patna.’, then replacing x by ‘Taj Mahal’ gives us a
false proposition.

Note that a predicate is usually not a proposition. But, of course, every proposition is
a propositional function in the same way that every real number is a real-valued
function, namely, the constant function.

Now, can all sentences be written in symbolic form by using only the logical
connectives? What about sentences like ‘x is prime and x + 1 is prime for some x.’?
How would you symbolize the phrase ‘for some x’, which we can rephrase as ‘there
exists an x’? You must have come across this term often while studying mathematics.
We use the symbol ‘∃’ to denote this quantifier, ‘there exists’. (∃ is called the
existential quantifier.) The way we use it is, for instance, to rewrite ‘There is at
least one child in the class.’ as ‘(∃ x in U)p(x)’,
where p(x) is the sentence ‘x is in the class.’ and U is the set of all children.

Now suppose we take the negation of the proposition we have just stated. Wouldn’t it
be ‘There is no child in the class.’? We could symbolize this as ‘for all x in U, q(x)’
where x ranges over all children and q(x) denotes the sentence ‘x is not in the class.’,
i.e., q(x) ≡ ~ p(x).

We have a mathematical symbol for the quantifier ‘for all’, which is ‘∀’. (∀ is called
the universal quantifier.) So the proposition above can be written as
‘(∀ x ∈ U)q(x)’, or ‘q(x), ∀ x ∈ U’.

An example of the use of the existential quantifier is the true statement.

(∃ x ∈ R) (x + 1 > 0), which is read as ‘There exists an x in R for which x + 1 > 0.’.

Another example is the false statement


1 1
(∃ x ∈N) (x - = 0), which is read as ‘There exists an x in N for which x - = 0.’.
2 2
An example of the use of the universal quantifier is (∀ x ∉ N) (x2 > x), which is read
as ‘for every x not in N, x2 > x.’. Of course, this is a false statement, because there is
at least one x∉ N, x ∈ R, for which it is false.

We often use both quantifiers together, as in the statement called Bertrand’s


postulate:

(∀ n ∈ N\ {1}) ( ∃ x ∈ N) (x is a prime number and n < x < 2n).

In words, this is ‘for every integer n > 1 there is a prime number lying strictly between
n and 2n.’

As you have already read in the example of a child in the class,


( ∀ x ∈U)p(x) is logically equivalent to ~ ( ∃ x ∈ U) (~ p(x)). Therefore,
~(∀ x ∈ U)p(x) ≡ ~~ (∃ x ∈U) (~ p(x)) ≡ ( ∃ x ∈ U) ( ~ p(x)).

This is one of the rules for negation that relate ∀ and ∃. The two rules are
~ (∀ x ∈ U)p(x) ≡ (∃ x ∈ U) (~ p(x)), and
~ (∃ x ∈ U)p(x) ≡ (∀ x ∈ U) (~ p(x))

where U is the set of values that x can take.
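Over a finite universe of discourse, ‘∀’ and ‘∃’ behave like Python’s all() and any(), and the two rules for negation above can be checked directly. The sketch below is only an illustration (the universe U = {1, …, 10} and the predicate ‘x is even’ are arbitrary choices, not taken from the text):

U = range(1, 11)                 # a finite universe of discourse
p = lambda x: x % 2 == 0         # predicate p(x): "x is even"

forall_p = all(p(x) for x in U)          # (∀ x ∈ U) p(x)  -> False
exists_p = any(p(x) for x in U)          # (∃ x ∈ U) p(x)  -> True

# Rules for negation: ~(∀x)p(x) ≡ (∃x)(~p(x)) and ~(∃x)p(x) ≡ (∀x)(~p(x))
print((not forall_p) == any(not p(x) for x in U))   # True
print((not exists_p) == all(not p(x) for x in U))   # True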

Now, consider the proposition

‘There is a criminal who has committed every crime.’

We could write this in symbols as


(∃ c ∈A) ( ∀ x ∈ B) (c has committed x)
Where, of course, A is the set of criminals and B is the set of crimes (determined by
law).

What would its negation be? It would be


~ (∃ c ∈ A) (∀ x ∈ B) (c has committed x)
≡ (∀ c ∈ A) [~ (∀ x ∈ B) (c has committed x)]
≡ (∀ c ∈ A) (∃ x ∈ B) ( c has not committed x).

We can interpret this as ‘For every criminal, there is a crime that this person has not
committed.’.

These are only some examples in which the quantifiers occur singly, or together.
Sometimes you may come across situations (as in E23) where you would use ∃ or ∀
twice or more in a statement. (A predicate can be a function in two or more
variables.) It is in situations like this or worse [say, (∀ x1 ∈ U1) (∃ x2 ∈ U2)
(∃ x3 ∈ U3) (∀ x4 ∈ U4) … (∃ xn ∈ Un)p]
where our rule for negation comes in useful. In fact, applying it, in a trice we can say
that the negation of this seemingly complicated example is
(∃ x1 ∈U1) (∀ x2 ∈ U2 ) (∀ x3 ∈ U3)(∃ x4 ∈ U4) …(∀ xn ∈ Un ) (~ p).

Why don’t you try some exercise now?

E23) How would you present the following propositions and their negations using
logical quantifiers? Also interpret the negations in words.
i) The politician can fool all the people all the time.
ii) Every real number is the square of some real number.
iii) There is a lawyer who never tells lies.

E24) Write down suitable mathematical statements that can be represented by the
following symbolic propositions. Also write down their negations. What is the
truth value of your propositions?
i) (∀ x) (∃ y)p
ii) (∃ x) (∃ y) (∀ z)p.

And finally, let us look at a very useful quantifier, which is very closely linked to ∃.
You would need it for writing, for example, ‘There is one and only one key that fits
the desk’s lock.’ in symbols. The symbol is ∃! x, which stands for ‘there is one and
only one x’ (which is the same as ‘there is a unique x’ or ‘there is exactly one x’).

So, the statement above would be (∃! x ∈ A) (x fits the desk’s lock), where A is the
set of keys.

For other examples, try and recall the statements of uniqueness in the mathematics
that you’ve studied so far. What about ‘There is a unique circle that passes through
three non-collinear points in a plane.’? How would you represent this in symbols? If
x denotes a circle, and y denotes a set of 3 non-collinear points in a plane, then the
proposition is

(∀ y ∈ P) (∃! x ∈ C) (x passes through y).

Here C denotes the set of circles, and P the set of sets of 3 non-collinear points.
And now, some short exercises for you!

E25) Which of the following propositions are true (where x, y are in R)?
i) (x ≥ 0) → (∃ y) (y² = x)
ii) (∀ x) (∃! y) (y² = x³)
iii) (∃x) (∃! y) (xy = 0)

Before ending the unit, let us take a quick look at what we have covered in it.

1.6 SUMMARY
In this unit, we have considered the following points.
1. What a mathematically acceptable statement (or proposition) is.
2. The definition and use of logical connectives:
Give propositions p and q,
i) their disjunction is ‘p or q’, denoted by p ∨ q;
ii) their exclusive disjunction is ‘either p or q, but not both’, denoted by p ⊕ q;
iii) their conjunction is ‘p and q’, denoted by p ∧ q;
iv) the negation of p is ‘not p’, denoted by ~ p;
v) ‘if p, then q’ is denoted by p → q;
vi) ‘p if and only if q’ is denoted by p ↔ q;
3. The truth tables corresponding to the 6 logical connectives.
4. Rule of precedence : In any compound statement involving more than one
connective, we first apply ‘~’, then ‘∧’, then ‘∨’ and ‘⊕’, and last of all ‘→’ and
‘↔’.
5. The meaning and use of logical equivalence, denoted by ‘≡’.
6. The following laws about equivalent propositions:
i) De Morgan’s laws: ~ (p ∧ q) ≡ ~ p ∨ ~ q
~ (p ∨ q) ≡ ~ p ∧ ~ q
ii) Double negation law: ~ (~p) ≡ p
iii) Idempotent laws: p ∧ p ≡ p,
p ∨p≡p
iv) Commutativity: p∨q≡q∨p
p∧q≡q∧p
v) Associativity: (p ∨ q) ∨ r ≡ p ∨ ( q ∨ r)
(p ∧ q) ∧ r ≡ p ∧ ( q ∧ r)
vi) Distributivity: p ∨ ( q ∧ r) ≡ (p ∨ q) ∧ (p ∨ r)
p ∧ (q ∨ r) ≡ ( p ∧ q) ∨ (p ∧ r)
vii) (~ p ∨ q) ≡ p → q (ref. E18).
7. Logical quantifiers: ‘For every’ denoted by ‘∀’, ‘there exists’ denoted by ‘∃’, and
‘there is one and only one’ denoted by ‘∃!’.
8. The rule of negation related to the quantifiers:
~ ( ∀ x ∈U)p(x) ≡ (∃ x ∈ U) (~ p(x))
~ (∃ x ∈ U) p(x) ≡ (∀ x ∈ U) (~ p(x))

Now we have come to the end of this unit. You should have tried all the exercises as
you came to them. You may like to check your solutions with the ones we have given
below.

1.7 SOLUTIONS/ ANSWERS


E1) (i), (iii), (iv), (vii), (viii) are statements because each of them is universally true
or universally false.
(ii) is a question.
(v) is an exclamation.
The truth or falsity of (vi) depends upon who ‘she’ is.
(ix) is a subjective sentence.
(x) will only be a statement if the value(s) n takes is/are given.
Therefore, (ii), (v), (vi), (ix) and (x) are not statements.

E2) The truth value of (i) is F, and of all the others is T.

E3) For example, ‘Two plus two equals four.’ (truth value T) and ‘Two plus two
equals five.’ (truth value F) are propositions, while ‘Watch the film.’ and
‘How wonderful!’ are not propositions.

E4) The disjunction is
‘2 + 3 = 7 or Radha is an engineer.’.
Since ‘2 + 3 = 7’ is always false, the truth value of this disjunction depends on
the truth value of ‘Radha is an engineer.’. If this is T, then we use the third row
of Table 1 to get the required truth value as T. If Radha is not an engineer, then
we get the required truth value as F.

E5) Table 9: Truth table for ‘exclusive or’

p q p⊕q
T T F
T F T
F T T
F F F

E6) p ∧ q will be a true proposition for x ∈ ] –2, ∞ [ and
x ≠ 4, i.e., for x ∈ ] –2, 4 [ ∪ ] 4, ∞ [.

E7) i) 0 – 5 = 5
ii) ‘n is not greater than 2 for every n ∈ N.’, or ‘There is at least one n ∈ N
for which n ≤ 2.’
iii) There are some Indian children who do not study till Class 5.

E8) Table 10: Truth table for negation

p ~p
T F
F T

E9) p → q is the statement ‘If x + y = xy for x, y ∈ R, then x ⊀ 0 for every x ∈ Z’.

In this case, q is false. Therefore, the conditional statement will be true if p is
false also, and it will be false for those values of x and y that make p true.
So, p → q is false for all those real numbers x of the form y/(y − 1), where
y ∈ R \ {1}. This is because if x = y/(y − 1) for some y ∈ R \ {1}, then x + y = xy,
i.e., p will be true.

E10) i) p → q, where p : ∆ABC is equilateral, and q : ∆ABC is isosceles. If q is
true, then p → q is true. If q is false, then p → q is true only when p is false.
So, if ∆ABC is an isosceles triangle, the given statement is always true.
Also, if ∆ABC is not isosceles, then it can’t be equilateral either. So the
given statement is again true.

ii) p : a is an integer.
q : b is an integer.
r : ab is a rational number
The given statement is (p ∧ q ) ↔ r.
Now, if p is true and q is true, then r is still true.

So, (p ∧ q) ↔ r will be true if p ∧ q is true, or when p ∧ q is false and r is


false.
In all the other cases (p ∧ q) ↔ r will be false.

iii) p : Raza has 5 glasses of water.


q : Sudha has 4 cups of tea.
r : Shyam will pass the math exam.

The given statement is (p ∧ q) → ~ r.


This is true when ~ r is true, or when r is true and p ∧ q is false.
In all the other cases it is false.

iv) p : Mariam is in Class 1.


q : Mariam is in Class 2.

The given statement is p ⊕ q.


This is true only when exactly one of p and q is true.

E11) There are infinitely many such examples. You need to give one in which p is
true but q is false.

E12) Obtain the truth table. The last column will now have entries TTFTTTTT.

E13) According to the rule of precedence, given the truth values of p, q, r you should
first find those of ~ r, then of q ∧ ~ r, and r ∧ q, and p → q ∧ ~ r, and finally of
(p → q ∧ ~ r) ↔ r ∧ q.

Referring to Table 5, the values in the sixth and eighth columns will be replaced
by
r∧q p→q∧~r↔r∧q
T F
F F
F T
F T
T T
F F
F F
F F

E14) They should both be the same, viz.,

p q r ~r p∧q (p ∧ q) ∨ (~ r)
T T T F T T
T T F T T T
T F T F F F
T F F T F T
F T T F F F
F T F T F T
F F T F F F
F F F T F T

E15) i) (~ p) ∨ q
ii) (~ q) → (~ p)
iii) (p → q) ↔ [(~p) ∨ q]
iv) [(p ⊕ (q ∧ r)) → ((~ p) ∨ q)] ↔ (p ∧ r)

E16) a)
p ~p ~ (~ p)
T F T
F T F

The first and third columns prove the double negation law.

b) p q p∨q q∨p
T T T T
T F T T
F T T T
F F F F
The third and fourth columns prove the commutativity of ∨.

E17) For any three propositions p, q, r:


i) p ≡ p is trivially true.
ii) if p ≡ q, then q ≡ p (if p has the same truth value as q for all choices of
truth values of p and q, then clearly q has the same truth values as p in all the
cases).
iii) if p ≡ q and q ≡ r, then p ≡ r ( reason as in (ii) above).

Thus, ≡ is reflexive, symmetric and transitive.

E18)
p q ~p ~p∨q p→q
T T F T T
T F F F F
F T T T T
F F T T T

The last two columns show that [(~p) ∨ q] ≡ (p → q).

E19) i)
p Ƭ p∨Ƭ
T T T
F T T

The second and third columns of this table show that p ∨ Ƭ ≡ Ƭ.


iv)
p Ƒ p∧Ƒ
T F F
F F F
The second and third columns of this table show that p ∧ Ƒ ≡ Ƒ.
You can similarly check (ii) and (iii).

E20) ~ (~ p ∧ q) ∧ (p ∨ q)
≡ (~(~p) ∨ ~ q) ∧ (p ∨ q), by De Morgan’s laws.
≡ (p ∨ ~ q) ∧ (p ∨ q), by the double negation law.
≡ p ∨ (~ q ∧ q), by distributivity
≡ p ∨ Ƒ, where Ƒ denotes a contradiction
≡ p, using E19.

E21) p: It is raining.
q: Nobody can go to see a film.
Then the given proposition is
[p ∧ (p → q)] → q
≡ p ∧ (~ p ∨ q) → q, since (p → q) ≡ (~ p ∨ q)
≡ (p ∧ ~ p) ∨ (p ∧ q) → q, by distributivity
≡ Ƒ ∨ (p ∧ q) → q, since p ∧ ~ p is a contradiction
≡ (Ƒ ∨ p) ∧ (Ƒ ∨ q) → q, by distributivity
≡ p ∧ q → q, since Ƒ ∨ p ≡ p,
which is a tautology.

E22) There are infinitely many examples. One such is:

‘If Venkat is on leave, then Shabnam will work on the computer.’. This is of the
form p → q. Its truth values will be T or F, depending on those of p and q.

E23) i) (∀ t ∈ [0, ∞[) (∀ x ∈ H)p(x,t) is the given statement, where p(x, t) is the
predicate ‘The politician can fool x at time t.’, and H is the set of
human beings.
Its negation is (∃ t ∈ [0, ∞[) (∃ x ∈ H) (~ p(x, t)), i.e., there is somebody
who is not fooled by the politician at least for one moment.

ii) The given statement is


(∀ x ∈ R) (∃ y ∈R) (x = y2). Its negation is
(∃ x ∈R) (∀ y ∈ R) ( x ≠ y2), i.e.,
there is a real number which is not the square of any real number.
iii) The given statement is
(∃ x ∈ L) (∀ t ∈ [0, ∞[)p(x, t), where L is the set of lawyers and p(x, t) : x
does not lie at time t. The negation is
(∀ x ∈ L) (∃ t ∈ [0, ∞[) (~p), i.e., every lawyer tells a lie at some time.

E24) i) For example,
(∀ x ∈ N) (∃ y ∈ Z) (x/y ∈ Q) is a true statement. Its negation is
(∃ x ∈ N) (∀ y ∈ Z) (x/y ∉ Q).
You can try (ii) similarly.

E25) (i), (iii) are true.

(ii) is false (e.g., for x = -1 there is no y such that y² = x³).

(iv) is equivalent to (∀ x ∈ R) [~ (∃! y ∈ R) (x + y = 0)], i.e., for every x there
is no unique y such that x + y = 0. This is clearly false, because for
every x there is a unique y (= -x) such that x + y = 0.

UNIT 2 DYNAMIC PROGRAMMING

2.0 Introduction
2.1 Principle of Optimality
2.2 Matrix Multiplication
2.3 Matrix Chain Multiplication
2.4 Optimal Binary Search Tree
2.5 Binomial Coefficient Computation
2.6 All Pair shortest Path
2.6.1 Floyd Warshall Algorithm
2.6.2 Working Strategy for FWA
2.6.3 Pseudo-code for FWA
2.6.4 When FWA gives the best result
2.7 Summary
2.8 Solutions/Answers

2.0 INTRODUCTION

Dynamic programming is an optimization technique. Optimization problems are those
which require either a minimum or a maximum result, that is, finding the optimal
solution for a given problem. Optimal means either the maximum or the minimum
answer: for example, if we want to find the profit we look for the maximum profit,
and if we want to find the cost we look for the minimum cost. Although both the greedy
and the dynamic programming methods are used to solve optimization problems,
their approaches to finding the optimal solution are quite different. In the greedy
method we follow a fixed procedure to get the optimal result, like Kruskal’s algorithm for
finding a minimum cost spanning tree (always select the edge with minimum weight)
or Dijkstra’s algorithm for shortest paths (always select the shortest-path
vertex and continue relaxing the vertices, so that you get a shortest path). For any problem
there may be many feasible solutions; we try to find all the feasible
solutions and then pick the best one. The dynamic programming approach is
different and more time consuming compared to the greedy method, but dynamic programming
is guaranteed to find the optimal solution of a problem if a solution exists. Mostly,
dynamic programming problems are solved by using recursive formulas, though we
may not use recursion in the program itself; the formulas are still recursive. It works on the
concept of recursion, i.e., dividing a bigger problem into similar sub problems
and solving them recursively. The only difference is that DP saves the outputs of the
smaller sub problems in a table and avoids repeating the calculation of the
same sub problem. It first searches for the output of a sub problem in the table; only if
the output is not available does it calculate the sub problem. This reduces the
time complexity to a great extent. Due to its nature, DP is useful in solving
problems which can be divided into similar sub problems. If each sub problem is
different in nature, DP does not help in reducing the time complexity.
So, how does the dynamic programming approach work?

 Breaks down the complex problem into simpler sub problems.
 Finds an optimal solution to each of these sub problems.
 Stores the results of the sub problems.
 Re-uses those results, so that when the same sub problem comes up again you don’t
need to calculate it again.
 Finally, calculates the result of the complex problem.
 Storing the results of sub problems is called memoization.
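The following short Python sketch (Python and the Fibonacci numbers are not part of this unit; they are used here only as a minimal, self-contained illustration) stores every sub problem result in a dictionary, so that each value is computed only once:

def fib(n, memo={}):
    """n-th Fibonacci number, computed with memoization."""
    if n in memo:                 # re-use a stored sub-problem result
        return memo[n]
    if n < 2:                     # base cases of the recursion
        result = n
    else:                         # break the problem into smaller sub problems
        result = fib(n - 1, memo) + fib(n - 2, memo)
    memo[n] = result              # store the result for later re-use
    return result

print(fib(40))   # 102334155, computed in linear rather than exponential time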

2.1 PRINCIPLE OF OPTIMALITY

Dynamic programming follows the principle of optimality. If a problem has an
optimal substructure, then it definitely satisfies the principle of optimality. A problem has optimal
substructure if an optimal solution can be constructed efficiently from optimal
solutions of its sub-problems. The principle of optimality shows that a problem can be
solved by taking a sequence of decisions to solve the optimization problem. In dynamic
programming, at every stage we take a decision. The Principle of Optimality states
that components of a globally optimum solution must themselves be optimal.
A problem is said to satisfy the Principle of Optimality if the sub solutions of an
optimal solution of the problem are themselves optimal solutions for their sub
problems.
Examples:
 The shortest path problem satisfies the Principle of Optimality.
 This is because if 𝑎, 𝑥1 , 𝑥2 , … , 𝑥𝑛 , 𝑏 is a shortest path from node 𝑎 to node 𝑏 in a
graph, then the portion from 𝑥𝑖 to 𝑥𝑗 on that path is a shortest path from 𝑥𝑖 to 𝑥𝑗 .
 The longest path problem, on the other hand, does not satisfy the Principle of
Optimality. For example, take the undirected graph with nodes 𝑎, 𝑏, 𝑐, 𝑑 and 𝑒 and
edges (𝑎, 𝑏), (𝑏, 𝑐), (𝑐, 𝑑), (𝑑, 𝑒) and (𝑒, 𝑎); that is, G is a ring. The longest
(noncyclic) path from a to d is a, b, c, d. The sub-path from b to c on that path is
simply the edge (b, c). But that is not the longest path from b to c. Rather,
b, a, e, d, c is the longest path. Thus, a sub-path of a longest path is not
necessarily a longest path.

Steps of Dynamic Programming:

Dynamic programming involves four major steps:


1. Develop a mathematical notation that can express any solution and sub
solution for the problem at hand.
2. Prove that the Principle of Optimality holds.
3. Develop a recurrence relation that relates a solution to its sub solutions, using
the math notation of step 1. Indicate what the initial values are for that
recurrence relation, and which term signifies the final solution.
4. Write an algorithm to compute the recurrence relation.
 Steps 1 and 2 need not be in that order. Do what makes sense in each
problem.
 Step 3 is the heart of the design process. In high level algorithmic design
situations, one can stop at step 3.
 Without the Principle of Optimality, it won't be possible to derive a
sensible recurrence relation in step 3.
 When the Principle of Optimality holds, the 4 steps of DP are guaranteed
to yield an optimal solution. No proof of optimality is needed.

2.2 MATRIX MULTIPLICATION

 Matrix multiplication is a binary operation of multiplying two or more


matrices one by one that are conformable for multiplication. For example two
matrices A, B having the dimensions of 𝑝 × 𝑞 and 𝑠 × 𝑡 respectively; would
be conformable for 𝐴 × 𝐵multiplication only if q==s and for 𝐵 × 𝐴
multiplication only if t==p.

 Matrix multiplication is associative in the sense that if A, B, and C are three
matrices of order 𝑚 × 𝑛, 𝑛 × 𝑝 and 𝑝 × 𝑞, then the products (AB)C and
A(BC) are both defined, (AB)C = A(BC), and the product is an 𝑚 × 𝑞 matrix.
 Matrix multiplication is not commutative. For example, if two matrices A and B
have dimensions 𝑚 × 𝑛 and 𝑛 × 𝑝 with 𝑝 ≠ 𝑚, then AB is defined but BA is not
conformable for multiplication; and even when both AB and BA are defined, in
general AB ≠ BA.
 For 3 or more matrices, matrix multiplication is associative, yet the number of
scalar multiplications may vary significantly depending upon how we pair the
matrices and their product matrices to get the final product.

Example: Suppose there are three matrices: A is 100 × 10, B is 10 × 50 and C is 50 ×
5. The number of scalar multiplications required for (AB)C is 𝐴𝐵 = 100 × 10 × 50 =
50000 and (𝐴𝐵)𝐶 = 100 × 50 × 5 = 25000. Total multiplications for (AB)C = 75000.
Similarly, the number of multiplications required for A(BC) is BC = 10 ×
50 × 5 = 2500 and A(BC) = 100 × 10 × 5 = 5000. Total multiplications for A(BC) =
7500.

In short the product of matrices (AB)C takes 75000 multiplication when first the
product of A and B is computed and then product AB is multiplied with C. On the
other hand, if the product BC is calculated first and then product of A with matrix BC
is taken then 7500 multiplications are required. In case when large number of matrices
are to be multiplied for which the product is defined, proper parenthesizing through
pairing of matrices, may cause dramatic saving in number of scalar operations.
This raises the question of how to parenthesize the pairs of matrices within the
expression A1A2 … An, a product of n matrices which is defined; so as to optimize the
computation of the product A1A2 … An. This problem is known as Chained Matrix
Multiplication.
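The counts in the example above can be checked with a few lines of code. The sketch below (Python, used here only for illustration) computes the number of scalar multiplications for both orders:

# Dimensions: A is 100x10, B is 10x50, C is 50x5.
a_rows, a_cols, b_cols, c_cols = 100, 10, 50, 5

# (AB)C: first AB (100x10 times 10x50), then the 100x50 result times C.
cost_ab_c = a_rows * a_cols * b_cols + a_rows * b_cols * c_cols   # 50000 + 25000 = 75000

# A(BC): first BC (10x50 times 50x5), then A times the 10x5 result.
cost_a_bc = a_cols * b_cols * c_cols + a_rows * a_cols * c_cols   # 2500 + 5000 = 7500

print(cost_ab_c, cost_a_bc)   # 75000 7500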

2.3 MATRIX CHAIN MULTIPLICATION

Given a sequence of matrices that are conformable for multiplication in that sequence,
the problem of matrix-chain-multiplication is the process of selecting the optimal pair
of matrices in every step in such a way that the overall cost of multiplication would be
minimal. If there are n matrices in the sequence, then the total number of
different ways of fully parenthesizing the chain is the (n−1)-th Catalan number,
2(n−1)C(n−1)/n. We need to find out the optimal one. Equivalently, the total number of different ways
to perform the matrix chain multiplication is equal to the total number of
different binary trees that can be built with n leaves. The
problem is not actually to perform the multiplications, but merely to decide in which
order to perform the multiplications.Since matrix multiplication follows associativity
property, rearranging the parentheses also yields the same result of multiplication.
That means any pair in the sequence can be multiplied and that will not affect the end
result, but the total number of scalar multiplication operations will change accordingly. A
product of matrices is fully parenthesized if it is either a single matrix or the product of two
fully parenthesized matrix products, surrounded by parentheses. For example, if the
chain of matrices is 〈𝐴1 , 𝐴2 , 𝐴3 , 𝐴4 〉 then we can fully parenthesize the product
𝐴1 𝐴2 𝐴3 𝐴4 in five distinct ways:
(𝐴1 (𝐴2 (𝐴3 𝐴4 ))),
(𝐴1 ((𝐴2 𝐴3 )𝐴4 )),
((𝐴1 𝐴2 )(𝐴3 𝐴4 )),
((𝐴1 (𝐴2 𝐴3 ))𝐴4 ),
(((𝐴1 𝐴2 )𝐴3 )𝐴4 ).

How we parenthesize a chain of matrices can have a dramatic impact on the cost of
evaluating the product. We shall use the dynamic-programming method to determine
how to optimally parenthesize a matrix chain.
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution.
4. Construct an optimal solution from computed information.

Step 1: The structure of an optimal parenthesization:

 An optimal parenthesization of A1…An must break the product into two
expressions, each of which is parenthesized or is a single matrix.
 Assume the break occurs at position 𝑘.
 In the optimal solution, the solution to the product A1…Ak must be optimal.
o Otherwise, we could improve A1…An by improving A1…Ak.
o But the solution to A1…An is known to be optimal.
o This is a contradiction.
o Thus the solution to A1…Ak must itself be optimal.
 This problem exhibits the Principle of Optimality:
o The optimal solution to the product A1…An contains the optimal solutions
to the two subproducts.
 Thus we can use Dynamic Programming:
o Consider a recursive solution.
o Then improve its performance with memoization or by rewriting it
bottom up.

Step 2: A recursive solution

For the matrix-chain multiplication problem, we pick as our sub problems the
problems of determining the minimum cost of parenthesizing 𝐴𝑖 𝐴𝑖+1 … 𝐴𝑗 for 1 ≤ 𝑖 ≤
𝑗 ≤ 𝑛.
 Let 𝑚[𝑖, 𝑗] be the minimum number of scalar multiplication needed to
compute the matrix 𝐴𝑖…𝑗 ; for the full problem, the lowest cost way to compute
𝐴1…𝑛 would thus be 𝑚[1, 𝑛].
 𝑚[𝑖, 𝑖] = 0: when i = j, the problem is trivial. The chain consists of just one matrix
𝐴𝑖…𝑖 = 𝐴𝑖 , so that no scalar multiplications are necessary to compute the
product.
 The optimal solution of 𝐴𝑖 … 𝐴𝑗 must break at some point k, with 𝑖 ≤ 𝑘 < 𝑗 .
Each matrix 𝐴𝑖 is 𝑝𝑖−1 × 𝑝𝑖 , and computing the matrix product
𝐴𝑖…𝑘 𝐴𝑘+1…𝑗 takes 𝑝𝑖−1 𝑝𝑘 𝑝𝑗 additional scalar multiplications.

Thus, for 𝑖 < 𝑗,

𝑚[𝑖, 𝑗] = min { 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗 : 𝑖 ≤ 𝑘 < 𝑗 },  with 𝑚[𝑖, 𝑖] = 0.   (Equation 1)

Step 3: Computing the optimal costs

It is a simple matter to write a recursive algorithm based on above recurrence to


compute the minimum cost 𝑚[1, 𝑛] for multiplying 𝐴1 𝐴2 … 𝐴𝑛 .


MATRIX-CHAIN-ORDER (p)

1. 𝑛 ← 𝑙𝑒𝑛𝑔𝑡ℎ[𝑝] − 1
2. let 𝑚[1. . 𝑛, 1. . 𝑛] and 𝑠[1. . 𝑛 − 1, 2. . 𝑛] be new tables
3. for i ← 1 to n
4.     do 𝑚[𝑖, 𝑖] ← 0
5. for l ← 2 to n
6.     do for i ← 1 to n − l + 1
7.         do j ← i + l − 1
8.            𝑚[𝑖, 𝑗] ← ∞
9.            for k ← i to j − 1
10.               do 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11.                  if q < 𝑚[𝑖, 𝑗]
12.                      then 𝑚[𝑖, 𝑗] ← 𝑞
13.                           𝑠[𝑖, 𝑗] ← 𝑘
14. return m and s

The following pseudocode assumes that matrix 𝑨𝒊 has dimensions 𝒑𝒊−𝟏 × 𝒑𝒊 for
i=1,2,…,n. The input is a sequence𝒑 = 𝒑𝟎 , 𝒑𝟏 , … , 𝒑𝒏 , where length[p]=n+1. The
procedure uses an auxiliary table 𝒎[𝟏 … 𝒏, 𝟏 … 𝒏] for storing the 𝒎[𝒊, 𝒋] costs and the
auxiliary table 𝒔[𝟏 … 𝒏, 𝟏 … 𝒏] that records which index of 𝑘 achieved the optimal cost
in computing 𝑚[𝑖, 𝑗]. We will use the table s to construct an optimal solution.

 In line 3-4 algorithm first computes 𝑚[𝑖, 𝑖] = 0 for 𝑖 = 1, 2, … , 𝑛 (the


minimum costs for chains of length 1).
 During the first execution of the for loop in lines 5-13, it uses Equation (1) to
compute 𝑚[𝑖, 𝑖 + 1] for 𝑖 = 1, 2, … , 𝑛 − 1 (the minimum costs for chains of
length 2).
 The second time through the loop, it computes 𝑚[𝑖, 𝑖 + 2] for 𝑖 = 1, 2, … , 𝑛 −
2 (the minimum costs for chains of length 3), and so forth.
 At each step, the 𝑚[𝑖, 𝑗] cost computed in lines 10-13 depends only on table
entries 𝑚[𝑖, 𝑘] and 𝑚[𝑘 + 1, 𝑗] already computed.

4. Constructing an Optimal Solution:

The MATRIX-CHAIN-ORDER procedure determines the optimal number of scalar
multiplications needed to compute a matrix-chain product. To obtain the optimal
solution of the matrix multiplication, we call PRINT_OPTIMAL_PARENS(s, 1, n) to
print an optimal parenthesization of 𝑨𝟏 𝑨𝟐 … 𝑨𝒏 . Each entry 𝑠[𝑖, 𝑗] records a value of
k such that an optimal parenthesization of 𝑨𝒊 𝑨𝒊+𝟏 … 𝑨𝒋 splits the product between 𝐴𝑘
and 𝐴𝑘+1 .

PRINT_OPTIMAL_PARENS(s,i,j)

1. if i == j
2. then print “A” 𝑖
3. else print “(”
4. PRINT_OPTIMAL_PARENS(s, i, s[i, j])
5. PRINT_OPTIMAL_PARENS(s, s[i, j] + 1, j)
6. print “)”
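The following Python sketch (our own translation of MATRIX-CHAIN-ORDER and
PRINT_OPTIMAL_PARENS; the names and 1-based indexing mirror the pseudocode above) fills the
m and s tables bottom up and then reads the optimal parenthesization back out of s.

    def matrix_chain_order(p):
        n = len(p) - 1
        m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][j]: minimum scalar multiplications
        s = [[0] * (n + 1) for _ in range(n + 1)]   # s[i][j]: best split point k
        for l in range(2, n + 1):                   # l = chain length
            for i in range(1, n - l + 2):
                j = i + l - 1
                m[i][j] = float("inf")
                for k in range(i, j):
                    q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                    if q < m[i][j]:
                        m[i][j], s[i][j] = q, k
        return m, s

    def print_optimal_parens(s, i, j):
        if i == j:
            return "A" + str(i)
        k = s[i][j]
        return "(" + print_optimal_parens(s, i, k) + print_optimal_parens(s, k + 1, j) + ")"

    m, s = matrix_chain_order([30, 35, 15, 5, 10, 20])
    print(m[1][5])                          # 11875
    print(print_optimal_parens(s, 1, 5))    # ((A1(A2A3))(A4A5))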


Example: Find an optimal parenthesization of a matrix-chain product whose sequence
of dimensions is as follows:

Matrix Dimension
A1 30 × 35
A2 35 × 15
A3 15 × 5
A4 5 × 10
A5 10 × 20
Solution:

Table m (minimum scalar multiplications):

i \ j    1      2      3      4      5
1        0  15750   7875   9375  11875
2               0   2625   4375   7125
3                      0    750   2500
4                             0   1000
5                                    0

Table s (optimal split point k):

i \ j    2   3   4   5
1        1   1   3   3
2            2   3   3
3                3   3
4                    4

Step 1: l=2

m[1 2]= m[1 1]+m[2 2] + 𝑝0 𝑝1 𝑝2 = 0 + 0 + 30 ×35 × 15 = 15750 and k = 1 s[1 2]=1

m[2 3]= m[2 2] + m[3 3] + 𝑝1 𝑝2 𝑝3 = 0 + 0 + 35 ×15 × 5 = 2625 and k=2 s[2 3]= 2

m[3 4]= m[3 3] + m[4 4] + 𝑝2 𝑝3 𝑝4 = 0 + 0 + 15 ×5 × 10 = 750 and k=3 s[3 4]= 3

m[4 5]= m[4 4] + m[ 5 5]+ 𝑝3 𝑝4 𝑝5 = 0 + 0 + 5 ×10 × 20 = 1000 and k =4 s[4 5] = 4

Step 2: l=3

m[1 3] = min ( m[1 1] + m[2 3] + 𝑝0 𝑝1 𝑝3, m[1 2] + m[3 3] + 𝑝0 𝑝2 𝑝3)

= min (0 + 2625+ 30 ×35 × 5 , 15750+0+ 30 ×15 × 5)

= min(7875, 18000) = 7875 and k = 1 gives minimum s[1 3]= 1

m[2 4] = min ( m[2 2] + m[3 4] + 𝑝1 𝑝2 𝑝4, m[2 3] + m[4 4] + 𝑝1 𝑝3 𝑝4)

= min (0 + 750+ 35 ×15 × 10 , 2625+0+ 35 ×5 × 10)

= min(6000, 4375) = 4375 and k = 3 gives minimum s[2 4]= 3

m[3 5] = min ( m[3 3] + m[4 5] + 𝑝2 𝑝3 𝑝5, m[3 4] + m[5 5] + 𝑝2 𝑝4 𝑝5 )

= min (0 + 1000+ 15 ×5 × 20 , 750+0+ 15 ×10 × 20)

= min(2500, 3750) = 2500 and k = 3 gives minimum s[3 5]= 3

Step 3: l=4

m[1 4] = min ( m[1 1] + m[2 4] + 𝑝0 𝑝1 𝑝4, m[1 2] + m[3 4] + 𝑝0 𝑝2 𝑝4 , 𝑚[1 3] +


𝑚[4 4] + 𝑝0 𝑝3 𝑝4)

= min(0 + 4375 + 30 ×35 × 10, 15750 + 750 + 30 ×15 × 10, 7875+0+ 30 ×5 × 10)

= min(14875, 21000, 9375) = 9375 and k = 3 gives minimum s[1 4] = 3

m[2 5] = min ( m[2 2] + m[3 5] + 𝑝1 𝑝2 𝑝5, m[2 3] + m[4 5] + 𝑝1 𝑝3 𝑝5 , 𝑚[2 4] +


𝑚[5 5] + 𝑝1 𝑝4 𝑝5)

= min(0 + 2500 + 35 ×15 × 20, 2625 + 1000 + 35 ×5 × 20, 4375+0 + 35 ×10 ×


20)

= min (13000, 7125, 11375) = 7125 and k = 3 gives minimum s[2 5] = 3

Step 4: l=5

m[1 5] = min ( m[1 1] + m[2 5] + 𝑝0 𝑝1 𝑝5, m[1 2] + m[3 5] + 𝑝0 𝑝2 𝑝5 , 𝑚[1 3] +


𝑚[4 5] + 𝑝0 𝑝3 𝑝5, m[1 4] + m[5 5] + 𝑝0 𝑝4 𝑝5 )

= min(0 + 7125 + 30 ×35 × 20, 15750+2500 + 30 ×15 × 20, 7875+ 1000 + 30 ×5


× 20, 9375+ 0 +30 ×10 × 20)

= min (28125, 27250, 11875, 15375) = 11875 and k = 3 gives minimum s[1 5] =
3

Now, the print optimal parenthesis (POP) algorithm runs as follows (each call POP(s, i, j) with i < j splits at k = s[i, j]):

POP(s, 1, 5): s[1 5] = 3, so it splits into POP(s, 1, 3) and POP(s, 4, 5)

POP(s, 1, 3): s[1 3] = 1, so it splits into POP(s, 1, 1) = A1 and POP(s, 2, 3)

POP(s, 2, 3): s[2 3] = 2, so it splits into POP(s, 2, 2) = A2 and POP(s, 3, 3) = A3

POP(s, 4, 5): s[4 5] = 4, so it splits into POP(s, 4, 4) = A4 and POP(s, 5, 5) = A5

Hence, the final solution is ((𝐴1 (𝐴2 𝐴3 ))(𝐴4 𝐴5 )).

2.4 OPTIMAL BINARY SEARCH TREE
Binary Search Tree: A binary search tree is a binary tree in which all the keys smaller
than the root are in the left subtree and all the keys greater than the root are in the right subtree.
The figure shown below is a binary search tree in which 40 is the root node: all the nodes in the
left subtree of 40 are smaller and all the nodes in the right subtree of 40 are greater. Now
the question is: why are the keys arranged in order? Because it gives an advantage in
searching. Suppose we want to search for 30. We start searching from the root node.
Is the root 30? No, and 30 is smaller than 40, so 30 must be in the left
subtree of 40. Go into the left subtree; is its root 30? No, and 20 is smaller
than 30, so 30 must be on its right-hand side. Yes, now 30 is found. So how many comparisons were needed
to find 30? Out of 7 elements, searching for 30 takes 3 comparisons. The number of comparisons required to search
for any element in a binary search tree depends upon the number of levels in the binary search tree. When we search for a
key that is already present in the binary search tree, it is a successful search. Now
let us assume that the key is 9; this search is an unsuccessful search because 9 is not found
in the binary search tree. The total number of comparisons it takes is also 3. There are
two possibilities when searching for any element: a successful search and an unsuccessful search. In
both cases we need to know the number of comparisons we are doing. That number
of comparisons is nothing but the cost of searching in a binary search tree.

Let’s take an example. Keys: 10, 20, 30. How many binary search trees are possible? The number of
binary search trees on n keys is the Catalan number T(n) = C(2n, n) / (n + 1).
So for 3 keys the number of possible binary search trees is T(3) = C(6, 3)/4 = 5. The five
possible binary search trees are shown in figures (a)-(e) below.
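A one-line Python check of this count (our own; math.comb is available from Python 3.8):

    from math import comb

    def num_bsts(n):
        return comb(2 * n, n) // (n + 1)    # Catalan number C(2n, n) / (n + 1)

    print(num_bsts(3))   # 5 distinct binary search trees on 3 keys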

Now the cost of searching for an element in the above binary trees is the maximum
number of comparisons. Trees (a), (b), (d), and (e) take at most three comparisons,
whereas tree (c) takes only two. This means that, given a set of n
keys, among all possible binary search trees on those keys some tree may require fewer comparisons
than the others. In other words, if the height of a
binary search tree is smaller, the number of comparisons will be smaller. Height plays an important
role in searching for any key in a binary search tree. So we want a binary search tree that
requires the least number of comparisons for searching any key element. This is called an
optimal binary search tree.

The problem of the optimal binary search tree is: given keys and their probabilities, we
have to generate a binary search tree such that the searching cost is minimum.
Formally, we are given a sequence 𝐾 = 〈𝑘1 , 𝑘2 , … , 𝑘𝑛 〉 of n distinct keys in sorted
order, and we wish to build a binary search tree from these keys. For each key 𝑘𝑖 , we
have a probability 𝑝𝑖 that a search will be for 𝑘𝑖 . Some searches may be for values
that are not in 𝐾, so we also have n+1 dummy keys 𝑑0 , 𝑑1 , 𝑑2 , … , 𝑑𝑛 representing
values not in 𝐾. In particular, 𝑑0 represents all values less than 𝑘1 , 𝑑𝑛 represents all
values greater than 𝑘𝑛 , and for 𝑖 = 1, 2, 3, . . . , 𝑛 − 1, the dummy key 𝑑𝑖 represents all
values between 𝑘𝑖 and 𝑘𝑖+1 . For each dummy key 𝑑𝑖 , we have a probability 𝑞𝑖 that a
search corresponds to 𝑑𝑖 . Figure 2 below shows two binary search trees for a set of 5
keys. Each 𝑘𝑖 is an internal node and each 𝑑𝑖 is a leaf node.

Figure 2: (a) (b)


Two Binary search tree with dummy keys with the following probabilities:

i    0     1     2     3     4     5
pi         0.15  0.10  0.05  0.10  0.20
qi   0.05  0.10  0.05  0.05  0.05  0.10

(a) A binary search tree with expected search cost 2.80. (b) A binary
search tree with expected search cost 2.75. This tree is optimal

Every search is either successful or unsuccessful, and so we have

∑ 𝑝𝑖 (i = 1 to n) + ∑ 𝑞𝑖 (i = 0 to n) = 1

Because we have probabilities of searches for each key and each dummy
key, we can determine the expected cost of a search in a given binary search tree T.
Let us assume that the actual cost of a search equals the number of nodes examined, i.e., the
depth of the node found by the search in T, plus 1. Then the expected cost of a search
in T is

E(search cost in T) = ∑ (𝑑𝑒𝑝𝑡ℎ𝑇(𝑘𝑖) + 1)·𝑝𝑖 (i = 1 to n) + ∑ (𝑑𝑒𝑝𝑡ℎ𝑇(𝑑𝑖) + 1)·𝑞𝑖 (i = 0 to n)

= 1 + ∑ 𝑑𝑒𝑝𝑡ℎ𝑇(𝑘𝑖)·𝑝𝑖 (i = 1 to n) + ∑ 𝑑𝑒𝑝𝑡ℎ𝑇(𝑑𝑖)·𝑞𝑖 (i = 0 to n),

where 𝑑𝑒𝑝𝑡ℎ𝑇 denotes a node's depth in the tree T. The expected search cost of tree (a), node by node, is
given in the following table:

node   depth   probability   contribution
𝑘1       1        0.15          0.30
𝑘2       0        0.10          0.10
𝑘3       2        0.05          0.15
𝑘4       1        0.10          0.20
𝑘5       2        0.20          0.60
𝑑0       2        0.05          0.15
𝑑1       2        0.10          0.30
𝑑2       3        0.05          0.20
𝑑3       3        0.05          0.20
𝑑4       3        0.05          0.20
𝑑5       3        0.10          0.40
Total                           2.80
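The 2.80 total can be re-derived directly from the depths and probabilities in the table above;
the short Python check below (ours, with the values copied from the text) does exactly that.

    p = {1: 0.15, 2: 0.10, 3: 0.05, 4: 0.10, 5: 0.20}           # key probabilities
    q = {0: 0.05, 1: 0.10, 2: 0.05, 3: 0.05, 4: 0.05, 5: 0.10}  # dummy-key probabilities
    key_depth   = {1: 1, 2: 0, 3: 2, 4: 1, 5: 2}
    dummy_depth = {0: 2, 1: 2, 2: 3, 3: 3, 4: 3, 5: 3}

    cost = sum((key_depth[i] + 1) * p[i] for i in p) \
         + sum((dummy_depth[i] + 1) * q[i] for i in q)
    print(round(cost, 2))   # 2.8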

For a given set of probabilities, we wish to construct a binary search tree whose
expected search cost is smallest. We call such a tree an optimal binary search tree.
Figure 2(b) shows an optimal binary search tree for the probabilities given in the figure
caption; its expected cost is 2.75. This example shows that an optimal binary search
tree is not necessarily a tree whose overall height is smallest. Now, one way of finding the
minimum cost for n keys would be to draw all possible trees and pick the
tree having minimum cost, but this approach is not efficient.
Using dynamic programming we can find the optimal binary
search tree without drawing each tree and calculating the cost of each tree. Thus,
dynamic programming gives a better, easier and faster method for trying out all
possible binary search trees and picking the best one without drawing each subtree.

Dynamic Programming Approach:

 Optimal Substructure: if an optimal binary search tree T has a subtree T′
containing keys 𝑘𝑖 , . . . , 𝑘𝑗 , then this subtree T′ must be optimal as well for the
sub problem with keys 𝑘𝑖 , . . . , 𝑘𝑗 and dummy keys 𝑑𝑖−1 , . . . , 𝑑𝑗 .
 Algorithm for finding optimal tree for sorted, distinct keys 𝑘𝑖 , . . . , 𝑘𝑗 :
o For each possible root 𝑘𝑟 for 𝑖 ≤ 𝑟 ≤ 𝑗
o Make optimal subtree for𝑘𝑖 , . . . , 𝑘𝑟−1 .
o Make optimal subtree for 𝑘𝑟+1 , . . . , 𝑘𝑗 .
o Select root that gives best total tree
 Recursive solution: We pick our sub problem domain as finding an optimal
binary search tree containing the keys 𝑘𝑖 , . . . , 𝑘𝑗 , where 𝑖 ≥ 1, 𝑗 ≤ 𝑛, and 𝑗 ≥
𝑖 − 1. Let us define 𝑒[𝑖, 𝑗] as the expected cost of searching an optimal binary
search tree containing the keys 𝑘𝑖 , . . . , 𝑘𝑗 . Ultimately, we wish to compute
𝑒[1, 𝑛].

𝑒[𝑖, 𝑗] = 𝑞𝑖−1                                                            if 𝑗 = 𝑖 − 1
𝑒[𝑖, 𝑗] = min over 𝑖 ≤ 𝑟 ≤ 𝑗 of { 𝑒[𝑖, 𝑟 − 1] + 𝑒[𝑟 + 1, 𝑗] + 𝑤(𝑖, 𝑗) }     if 𝑖 ≤ 𝑗        (Equation 2)

where 𝑤(𝑖, 𝑗) = ∑ 𝑝𝑙 (l = i to j) + ∑ 𝑞𝑙 (l = i−1 to j) is the increase in cost if 𝑘𝑖 , . . . , 𝑘𝑗 becomes a subtree of a
node. When j = i − 1, there are no actual keys; we have just the dummy key 𝑑𝑖−1 .

Computing the Optimal Cost:

We store the 𝑒[𝑖, 𝑗] values in a table 𝑒[1. . 𝑛 + 1,0. . 𝑛]. The first index needs to run to
n+1 rather than n because in order to have a sub tree containing only the dummy key

𝑑𝑛 , we need to compute and store 𝑒[𝑛 + 1, 𝑛]. Second index needs to start from 0
because in order to have a sub tree containing only the dummy key 𝑑0 , we need to
compute and store 𝑒[1,0]. We use only the entries 𝑒[𝑖, 𝑗] for which 𝑗 ≥ 𝑖 − 1. We also
use a table 𝑟𝑜𝑜𝑡[𝑖, 𝑗], for recording the root of the sub tree containing keys 𝑘𝑖 , . . . , 𝑘𝑗 .
This table uses only entries for which 1 ≤ 𝑖 ≤ 𝑗 ≤ 𝑛. We will need one other table for
efficiency. Rather than compute the value of 𝑤(𝑖, 𝑗 ) from scratch every time we are
computing 𝑒[𝑖, 𝑗] (which would take 𝛩(𝑗 − 𝑖) additions), we store these values in a
table 𝑤[1. . 𝑛 + 1,0. . 𝑛]. For the base case, we compute 𝑤[𝑖, 𝑖 − 1] = 𝑞𝑖−1 for 1 ≤ 𝑖 ≤
𝑛 + 1. For 𝑗 ≥ 𝑖, we compute

𝑤[𝑖, 𝑗] = 𝑤[𝑖, 𝑗 − 1] + 𝑝𝑗 + 𝑞𝑗 Equation 3

The pseudocode that follows takes as inputs the probabilities 𝑝1 , . . . , 𝑝𝑛 and 𝑞0 , . . . , 𝑞𝑛


and the size n, and returns the table e and root.

OPTIMAL_BST(p, q, n)
1. let 𝑒[1. . 𝑛 + 1,0. . 𝑛], 𝑤[1. . 𝑛 + 1,0. . 𝑛] and root[1..n,1..n] be new tables.
2. for i=1 to n+1
3. 𝑒[𝑖, 𝑖 − 1] = 𝑞𝑖−1
4. 𝑤[𝑖, 𝑖 − 1] = 𝑞𝑖−1
5. for l=1 to n
6. for i=1 to n-l+1
7. j=i+l-1
8. 𝑒[𝑖, 𝑗] = ∞
9. 𝑤[𝑖, 𝑗] = 𝑤[𝑖, 𝑗 − 1] + 𝑝𝑗 + 𝑞𝑗
10. for r=i to j
11. t= e[i,r-1]+e[r+1,j]+w[i, j]
12. if 𝑡 < 𝑒[𝑖, 𝑗]
13. 𝑒[𝑖, 𝑗] = 𝑡
14. 𝑟𝑜𝑜𝑡[𝑖, 𝑗] = 𝑟
15. return e and root.

In the above algorithm:

 The for loop of lines 2-4 initializes the values of 𝑒[𝑖, 𝑖-1] and 𝑤[𝑖, 𝑖 − 1].
 The for loop of lines 5-14 then uses the recurrence in equation 2 to compute
𝑒[𝑖, 𝑗] and 𝑤[𝑖, 𝑗] for all 1 ≤ 𝑖 ≤ 𝑗 ≤ 𝑛.
 In the first iteration, when l=1, the loop computes 𝑒[𝑖, 𝑖] and 𝑤[𝑖, 𝑖] for 𝑖 =
1,2, … , 𝑛. The second iteration, when l=2, the loop computes 𝑒[𝑖, 𝑖 + 1] and
𝑤[𝑖, 𝑖 + 1] for 𝑖 = 1,2, … , 𝑛 − 1 and so forth.
 The innermost for loop, in lines 10-14, tries each candidate index 𝑟 to
determine which key 𝑘𝑟 to use as the root of an optimal binary search tree
containing keys 𝑘𝑖 , . . . 𝑘𝑗 . This for loop saves the current value of the index 𝑟
in 𝑟𝑜𝑜𝑡[𝑖, 𝑗] whenever it finds a better key to use as the root.
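The Python sketch below is our own translation of OPTIMAL_BST (p is indexed p[1..n], q is
indexed q[0..n], and the tables use the same 1..n+1 / 0..n ranges as the pseudocode). Run on the
probabilities of Figure 2, it reproduces the optimal expected cost 2.75.

    def optimal_bst(p, q, n):
        e = [[0.0] * (n + 1) for _ in range(n + 2)]      # e[i][j]: expected search cost
        w = [[0.0] * (n + 1) for _ in range(n + 2)]      # w[i][j]: total probability weight
        root = [[0] * (n + 1) for _ in range(n + 1)]     # root[i][j]: optimal root index
        for i in range(1, n + 2):
            e[i][i - 1] = q[i - 1]
            w[i][i - 1] = q[i - 1]
        for l in range(1, n + 1):
            for i in range(1, n - l + 2):
                j = i + l - 1
                e[i][j] = float("inf")
                w[i][j] = w[i][j - 1] + p[j] + q[j]
                for r in range(i, j + 1):                # try each key k_r as the root
                    t = e[i][r - 1] + e[r + 1][j] + w[i][j]
                    if t < e[i][j]:
                        e[i][j] = t
                        root[i][j] = r
        return e, root

    p = [0, 0.15, 0.10, 0.05, 0.10, 0.20]   # p[0] is a placeholder so that p[1..5] line up
    q = [0.05, 0.10, 0.05, 0.05, 0.05, 0.10]
    e, root = optimal_bst(p, q, 5)
    print(round(e[1][5], 2))                # 2.75, the cost of the optimal tree in Figure 2(b)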

Example of running the Algorithm:

Find the optimal binary search tree for N=4 keys, with the successful and unsuccessful search
frequencies given in the 𝑝𝑖 and 𝑞𝑖 rows below (frequencies are used here instead of probabilities).

Keys 10 20 30 40
pi 3 3 1 1
qi 2 3 1 1 1

Row 1

Let’s fill up the values of w.

w[0 0]=q0=2, w[1 1]=q1=3, w[2 2]=q2=1, w[3 3]=q3=1, w[4 4]=q4=1

Let’s fill the cost e

Initial cost e00, e11, e22, e33, e44 will be 0. Similarly root r will be 0.

Row 2

Using Equation 3, w can be filled as:

w[0 1]= w[0 0] + p1+ q1 = 2+3+3 = 8

w[1 2]= w[1 1] + p2 + q2 = 3+ 3 +1 = 7

w[2 3]= w[2 2] + p3 + q3 =1+ 1 + 1=3

w[3 4]= w[3 3] + p4 + q4 = 1 + 1 +1 =3

e[0 1] = min(e[0 0] + e[ 1 1]) + w[0 1] = min (0 + 0) + 8 = 8

e[1 2] = min(e[1 1] + e[ 2 2]) + w[1 2] = min ( 0 + 0) + 7 = 7

e[2 3] = min(e[2 2] + e[ 3 3]) + w[2 3] = min ( 0 + 0) + 3 = 3

e[3 4] = min(e[3 3] + e[4 4]) + w[3 4] = min ( 0 + 0) + 3 = 3

r[0 1] = 1, r[1 2] = 2, r[2 3]= 3, r[ 3 4]=4

j-i=0   w00=2   w11=3   w22=1   w33=1   w44=1
        e00=0   e11=0   e22=0   e33=0   e44=0
        r00=0   r11=0   r22=0   r33=0   r44=0

j-i=1   w01=8   w12=7   w23=3   w34=3
        e01=8   e12=7   e23=3   e34=3
        r01=1   r12=2   r23=3   r34=4

j-i=2   w02=12  w13=9   w24=5
        e02=19  e13=12  e24=8
        r02=1   r13=2   r24=3

j-i=3   w03=14  w14=11
        e03=25  e14=19
        r03=2   r14=2

j-i=4   w04=16
        e04=32
        r04=2

Table 1 shows the calculation of cost of the optimal binary search tree

Row 3

w[0 2]= w[0 1] + p2 + q2 = 8 + 3 +1 = 12

w[1 3]= w[1 2] + p3 + q3 = 7 + 1 +1 =9

w[2 4]= w[2 3] + p4 + q4 = 3 + 1 +1 =5

e[0 2] = k= 1, 2 = min(e[0 0] + e[1 2], e[0 1]+ e[2 2])+w[0 2] =


min(0+7,8+0)+12 = 7+12=19 here k=1 has given minimum value thus, r[0 2]= 1

e[1 3]= k = 2, 3 = min(e[1 1]+ e[2 3], e[1 2]+e[3 3]) + w[1 3] = min(0+3, 7+0)+9 =
3+9= 12, here k=2 has given minimum value thus, r[1 3]=2

e[2 4]= k = 3, 4 = min(e[2 2]+e[3 4], e[2 3]+e[4 4])+w[2 4]= min(0+3, 3+0)+5 = 8,
here k=3 has given minimum value thus, r[2 4]=3

Row 4

w[0 3]= w[0 2] + p3 + q3 = 12 + 1 + 1 = 14

w[1 4]= w[1 3] + p4 + q4 = 9 + 1 + 1 = 11

e[0 3] = k= 1, 2, 3 = min(e[0 0] + e[1 3], e[0 1]+ e[2 3], e[0 2]+ e[ 3 3])+w[0 3] =
min(0+12,8+3, 19+0)+14 = 11+14=25 here k=2 has given minimum value thus, r[0
3]= 2

e[ 1 4]= k = 2, 3, 4 = min(e[1 1]+e[2 4], e[1 2]+e[3 4], e[1 3]+e[4 4])+w[1 4]=
min(0+8, 7+3, 12+0)+11 = 19, here k=2 has given minimum value thus, r[1 4]= 2

Row 5

w[0 4]= w[0 3] + p4 + q4 = 14 + 1 + 1 = 16

e[0 4]= k = 1, 2, 3, 4 = min(e[0 0]+e[1 4], e[0 1]+ e[2 4], e[0 2]+e[3 4], e[0 3]+e[4 4]) + w[0 4]=

min(0+19, 8+8, 19+3, 25+0)+16= 16+16=32 here k =2 has given minimum value
thus, r[0 4]=2

Table 1 shows the calculation of the cost matrix, weight matrix and root matrix. While
calculating the matrices, we have tried all possible binary search trees and selected the
minimum cost each time; thus it gives the minimum cost of the optimal binary search
tree. Now we can draw the optimal binary search tree from Table 1 (with key 20 as the
root, since r04 = 2). Its total cost 32 is minimum; dividing by the total frequency 16
gives an expected cost of 32/16 = 2.

2.5 BINOMIAL COEFFICIENT COMPUTATION

Computing binomial coefficients is a non-optimization problem, but it can be solved using
dynamic programming. A binomial coefficient is a coefficient in the Binomial
Theorem, which is an arithmetic expansion. It is denoted as C(n, k) and is equal to
n! / (k! × (n−k)!), where ! denotes factorial. It follows a recurrence relation, using which we
will calculate the binomial coefficient in 𝑂(𝑛 × 𝑘) time using Dynamic
Programming.

What is Binomial Theorem?

The Binomial Theorem, also called the Binomial Expansion, describes the powers in
algebraic equations. The Binomial Theorem helps us to find the expanded polynomial
without multiplying out the binomials one at a time. The expanded polynomial will
always contain one term more than the power you are expanding.
The following general formula expands an algebraic equation
using the Binomial Theorem:

(x + a)^n = ∑ C(n, k) · x^k · a^(n−k), the sum taken over k = 0 to n

where n is a positive integer power of the algebraic equation and C(n, k) is read as “n
choose k”.
According to the theorem, the expansion goes as follows for any algebraic equation
containing any positive power:

(a + b)^n = C(n, 0) a^n b^0 + C(n, 1) a^(n−1) b^1 + C(n, 2) a^(n−2) b^2 + . . . + C(n, n−1) a^1 b^(n−1) + C(n, n) a^0 b^n

where the C(n, k) are the binomial coefficients; C(n, k) gives the number of ways to choose k
elements from an n-element set, and can be calculated as n! / (k! × (n−k)!).

We can compute C(n, k) for any n and k using the following recurrence relation:

C(n, k) = 1,                             if k = 0 or n = k
C(n, k) = C(n−1, k−1) + C(n−1, k),       for n > k > 0

Computing C(n, k): Pseudocode:

BINOMIAL(n, k)

Input: A pair of nonnegative integers 𝑛 ≥ 𝑘 ≥ 0

Output: The value of 𝑐(𝑛, 𝑘)

1. for i ← 0 to n do
2.   for j ← 0 to min(i, k) do
3.     if j = 0 or j = i
4.       c[i, j] ← 1
5.     else
6.       c[i, j] ← c[i-1, j-1] + c[i-1, j]
7. return c[n, k]
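A direct Python translation of BINOMIAL(n, k) is sketched below (our own code, mirroring the
pseudocode above); it fills the same table row by row.

    def binomial(n, k):
        c = [[0] * (k + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            for j in range(min(i, k) + 1):
                if j == 0 or j == i:
                    c[i][j] = 1                              # boundary of Pascal's triangle
                else:
                    c[i][j] = c[i - 1][j - 1] + c[i - 1][j]  # recurrence relation
        return c[n][k]

    print(binomial(5, 5))   # 1
    print(binomial(5, 2))   # 10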

Let’s consider an example for calculating binomial coefficient:

Compute binomial coefficient for n=5 and k=5

n/k 0 1 2 3 4 5
0 1
1 1 1
2 1 2 1
3 1 3 3 1
4 1 4 6 4 1
5 1 5 10 10 5 1

Table 3: gives the binomial coefficient

According to the formula, when k=0 the corresponding value in the table will be 1, so all the
values in the column k=0 are 1. Similarly, when n=k, the value on the diagonal
will be 1.

c[2, 1] = c[1, 0]+c[1,1] = 1+1= 2 c[3, 1]= c[2,0] + c[2,1] = 1+2 = 3

c[4, 1] = c[3,0]+ c[3,1]= 1+3 = 4 c[5, 1] = c[4, 0] + c[4, 1] = 1 + 4 = 5

c[3, 2] = c[2,1] + c[2, 2] = 2+1=3 c[4, 2] = c[3, 1] + c[3, 2] = 3 + 3 = 6

c[5, 2] = c[4, 1]+ c[4, 2] = 4 + 6 =10 c[4, 3] = c[3,2] + c[3, 3] = 3 + 1= 4

c[ 5, 3] = c[4, 2] + c[4, 3] = 6 + 4 = 10 c[5, 4] = c[4, 3] + c[4, 4]= 4 + 1= 5

Table 3 represent the binomial coefficient of each term.

2.6 ALL PAIR SHORTEST PATH

Till now we have seen single source shortest path algorithms to find the shortest path
from a given source S to all other vertices of graph G. But what if we want to know
the shortest distance among all pair of vertices? Suppose we require to design a
railway network where any two stations are connected with an optimum route. The
simple answer to this problem would be to apply single source shortest path
algorithms i.e. Dijkstra’s or Bellman Ford algorithm for each vertex of the graph. The
other approach may be to apply all pair shortest path algorithm which gives the
shortest path between any two vertices of a graph at one go. We would see which
approach will give the best result for different types of graph later on. But first we will
discuss one such All Pair Shortest Path Algorithm i.e. Floyd Warshall Algorithm.
2.6.1 Floyd Warshall Algorithm
Floyd Warshall Algorithm uses the Dynamic Programming (DP) methodology. Unlike
Greedy algorithms, which always look for local optimization, DP strives for global
optimization; that means DP does not rely on the immediate best result. It works on the
concept of recursion, i.e. dividing a bigger problem into similar types of sub problems
and solving them recursively. The only difference is that DP saves the outputs of the
smaller subproblems and avoids the repetition involved in calculating the same
subproblem. First it searches for the output of the sub problem in a table; only if the
output is not available does it calculate the sub problem. This reduces the time
complexity to a great extent. Due to its nature, DP is useful in solving problems
which can be divided into similar subproblems. If each subproblem is different in
nature, DP does not help in reducing the time complexity.
Now we have a basic understanding of how Dynamic Programming works,
and we have found that the main crux of using DP in any problem is to identify
the recursive function which breaks the problem into smaller subproblems. Let us see
how we can create the recursive function for the Floyd Warshall Algorithm (hereafter
referred to as FWA), which manages to find all pair shortest paths in a graph.
Note: the Floyd Warshall Algorithm works on directed weighted graphs including negative
edge weights, although it does not work on graphs having a negative edge weight cycle.
FWA uses the concept of an intermediate vertex on a shortest path. An intermediate vertex
of any simple path P = {𝒗𝟏, 𝒗𝟐 … 𝒗𝒍 } is any vertex in P except the source vertex 𝒗𝟏 and the
destination vertex 𝒗𝒍 . Let us take a graph G (V, E) having vertex set V= {1,2,3……n}.
Take a subset of V as 𝑽𝒔𝟏={1,2…k} for some value of k. For any pair of vertices 𝒊, 𝒋 ∈
𝑽, list all paths from i to j such that every intermediate vertex in those paths belongs to
𝑽𝒔𝟏, and find the shortest path 𝑷𝒔 among them. Now exactly one of the
two cases listed below holds-
1. The vertex k does not belong to the shortest path 𝑷𝒔 .
2. The vertex k belongs to the shortest path 𝑷𝒔 .
Case 1- 𝑷𝒔 will also be the shortest path from i to j for another subset of V
i.e.𝑽𝒔𝟐 ={1,2…k-1}

Case 2- We can divide path 𝑷𝒔 into two paths 𝑷𝟏 and 𝑷𝟐as below-

Where 𝑃1 is the shortest path between vertices iand k and 𝑷𝟐 is the shortest path
between vertices k and j. 𝑷𝟏 and 𝑷𝟐 both have the intermediate vertices from the set
𝑽𝒔𝟐= {1,2…k-1}.
Based on the above understanding the recursive function for FWA can be derived as
below-

d^(k)[i,j] = w(i, j)                                                if k = 0
d^(k)[i,j] = min { d^(k−1)[i,j] , d^(k−1)[i,k] + d^(k−1)[k,j] }     if k > 0

Where,
d^(k)[i,j] - Weight of the shortest path from i to j for which all intermediate vertices ∈ {1,
2…k}
w(i,j) - Edge weight from vertex i to j
The recursive function implies that if there is no intermediate vertex from i to j then
the shortest path shall be the weight of the direct edge. Whereas, if there are multiple
intermediate vertices involved, the shortest path shall be the minimum of the two
paths, one including a vertex k and another without including the vertex k.
2.6.2 Working Strategy for FWA

 Initial Distance matrix 𝑫𝟎of order 𝒏 × 𝒏 consisting direct edge weight is


taken as the base distance matrix.
(𝟏)
 Another distance matrix 𝑫𝟏of shortest path 𝒅𝒊,𝒋 including one (1) intermediate
vertex is calculated where i, j ∈ V.
 The process of calculating distance matrices 𝑫𝟐 , 𝑫𝟑 … 𝑫𝒏 by including other
intermediate vertices continues until all vertices of the graph is taken.
 The last distance matrix 𝑫𝒏 which includes all vertices of a matrix as
intermediate vertices gives the final result.
2.6.3 Pseudo-code for FWA

 n is the number of vertices in graph G (V, E).


(𝒌)
 𝑫𝒌 is the distance matrix of order 𝒏 × 𝒏 consisting matrix element 𝒅𝒊,𝒋 as the
weight of shortest path having intermediate vertices from 1,2…k ∈ V
 M is the initial matrix of direct edge weight. If there is no direct edge
between two vertices, it will be considered as ∞.

FWA(M)
1. 𝐷0 = M
2. For (k=1 to n)
3. For (i=1 to n)
4. For (j=1 to n)
(𝑘) (𝑘−1) (𝑘−1) (𝑘−1)
5. 𝑑𝑖,𝑗 = min {𝑑𝑖,𝑗 , 𝑑𝑖,𝑘 + 𝑑𝑘,𝑗 }
6. Return 𝐷𝑛


Since there are three nested for loops, inside which there is one statement
that takes constant time for comparison and addition, the time complexity
of FWA turns out to be O(𝒏𝟑).
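The Python sketch below is our own translation of FWA (vertices are numbered 0..n-1 here
instead of 1..n, and INF marks the absence of a direct edge); the small 3-vertex matrix used to
exercise it is an illustrative example of ours, not the graph discussed in the text.

    INF = float("inf")

    def floyd_warshall(M):
        n = len(M)
        d = [row[:] for row in M]          # D0 = M
        for k in range(n):                 # allow vertex k as an intermediate vertex
            for i in range(n):
                for j in range(n):
                    if d[i][k] + d[k][j] < d[i][j]:
                        d[i][j] = d[i][k] + d[k][j]
        return d                           # Dn: all-pairs shortest-path weights

    M = [[0, 4, 11],
         [6, 0, 2],
         [3, INF, 0]]
    print(floyd_warshall(M))   # [[0, 4, 6], [5, 0, 2], [3, 7, 0]]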

Example- Apply Floyd-Warshall Algorithm on the following graph-

We will create the initial matrix 𝑫𝟎 from the given edge weights of the
graph-

For creating distance matrices𝑫𝟏,𝑫𝟐 etc. We use two simple tricks for
avoiding the calculations involved in each step-

1. Weight of shortest path from one vertex to itself will always be zero
since we are dealing with no negative weight cycle. So we will put
diagonal elements as zero in every iteration.
2. For the intermediate vertex j, we will put row and column elements of
𝑫𝒋as it is for 𝑫𝒋−𝟏.

If we don’t use the above tricks, still the result will be same.

Now for creating 𝑫𝟏, we have to find distance between every two vertices
via vertex 1. We will consider 𝑫𝟎as base matrix as below-

𝒅𝟏[2,3] = min {𝒅𝟎[2,3], 𝒅𝟎[2,1] + 𝒅𝟎[1,3]} = min {∞, 6+(-4)} = min {∞,2} = 2
Note - We have used 𝒅𝒌[i,j] instead of d^(k)[i,j] for simplicity here.

𝒅𝟏[2,4] = min {𝒅𝟎[2,4], 𝒅𝟎[2,1] + 𝒅𝟎[1,4]} = min {𝟐, 6+∞} = min {𝟐,∞} = 2

𝒅𝟏[3,2] = min {𝒅𝟎[3,2], 𝒅𝟎[3,1] + 𝒅𝟎[1,2]} = min {𝟓, ∞+9} = min {𝟓,∞} = 5
Similarly, we will calculate for other vertices.


For creating 𝑫𝟐, we have to find the distance between every two vertices via vertex 2. We
will consider 𝑫𝟏 as the base matrix as below-

𝒅𝟐[1,3] = min {𝒅𝟏[1,3], 𝒅𝟏[1,2] + 𝒅𝟏[2,3]} = min {−𝟒, 9+2} = min {−𝟒,𝟏𝟏} = −𝟒

𝒅𝟐[1,4] = min {𝒅𝟏[1,4], 𝒅𝟏[1,2] + 𝒅𝟏[2,4]} = min {∞, 9+2} = min {∞,𝟏𝟏} = 𝟏𝟏

𝒅𝟐[3,1] = min {𝒅𝟏[3,1], 𝒅𝟏[3,2] + 𝒅𝟏[2,1]} = min {∞, 5+6} = min {∞,𝟏𝟏} = 𝟏𝟏
Similarly, we will calculate for other vertices.

Create distance matrix 𝑫𝟑 by taking vertex 3 as intermediate vertex and 𝑫𝟐 as base


matrix as below-

𝒅𝟑 [𝟏, 𝟐] = min{𝒅𝟐 [𝟏, 𝟐], 𝒅𝟐 [𝟏, 𝟑] + 𝒅𝟐 [𝟑, 𝟐]} = min{𝟗, −𝟒 + 𝟓} = min{9,1} = 1

𝒅𝟑 [𝟏, 𝟒] = min{𝒅𝟐 [𝟏, 𝟒], 𝒅𝟐 [𝟏, 𝟑] + 𝒅𝟐 [𝟑, 𝟒]} = min{𝟏𝟏, −𝟒 + 𝟕} = min{11,3} = 3


Similarly calculate for other pairs.

Create distance matrix 𝑫𝟒 by taking vertex 4 as intermediate vertex and 𝑫𝟑 as base


matrix as below-

𝒅𝟒 [𝟏, 𝟐] = min{𝒅𝟑 [𝟏, 𝟐], 𝒅𝟑 [𝟏, 𝟒] + 𝒅𝟑 [𝟒, 𝟐]} = min{𝟏, 𝟑 + 𝟔} = min{𝟏, 𝟗} = 𝟏


Similarly calculate for other pairs-

𝑫𝟒 gives the shortest path between any two vertices of given graph. e.g. weight of
shortest path from vertex 1 to vertex 2 is 1.
2.6.4 When FWA gives the best result
Do you remember the starting point of this topic where we stated that one can use
Floyd Warshall Algorithm for finding all pair shortest path at one go or one can also
use Dijkstra’s or Bellman Ford algorithm for each vertex? We have analyzed the time
complexity of FWA. Now we will find out the time complexities of running Dijkstra’s

and Bellman Ford algorithm for finding all pair shortest path to see which gives the
best result-
Using Dijkstra’s Algorithm - The time complexity of running Dijkstra’s Algorithm for
the single source shortest path problem on a graph G(V,E) is O(E + V log V) using a Fibonacci
heap. For G being a complete graph, running Dijkstra’s Algorithm on each vertex
results in a time complexity of O(𝑽𝟑 ). For sparse graphs it is roughly O(𝑽𝟐 log 𝑽),
less than FWA; but since Dijkstra’s Algorithm does not work with negative
edge weights for the single source shortest path problem, it will also not work for the all pair
shortest path problem when negative edge weights are given.
Using Bellman Ford Algorithm – The time complexity of running the Bellman Ford
algorithm for the single source shortest path problem on a graph G (V,E) is O(VE). If G is a
complete graph then this complexity turns out to be O(V*𝑽𝟐 ) i.e. O(𝑽𝟑 ). Therefore,
the time complexity of running the Bellman Ford Algorithm on each vertex of graph G
shall be O(𝑽𝟒 ).
From the above two points it can be concluded that FWA is the best choice for the all pair
shortest path problem when the graph is dense, whereas Dijkstra’s Algorithm is
suitable when the graph is sparse and no negative edge weights exist. For a graph having
a negative edge weight cycle, the only choice among the three is the Bellman Ford
Algorithm, which can at least detect such a cycle.

2.7 SUMMARY

 The Dynamic Programming technique solves optimization problems
using a bottom-up approach. The underlying idea of dynamic programming is to
avoid calculating the same thing twice, usually by keeping a table of known
results that fills up as subinstances of the problem under consideration are solved.
 In order that the Dynamic Programming technique is applicable in solving an
optimization problem, it is necessary that the principle of optimality is applicable
to the problem domain.
 The principle of optimality states that for an optimal sequence of decisions or
choices, each subsequence of decisions/choices must also be optimal.
 The Chain Matrix Multiplication Problem: the problem of how to
parenthesize the pairs of matrices within the expression A1A2 … An, a product of
n matrices which is defined, so as to minimize the number of scalar
multiplications in the computation of the product A1A2 … An. The product is
known as Chained Matrix Multiplication.

2.8 SOLUTIONS/ANSWERS

Check Your Progress:


Q1. Find an optimal parenthesization of a matrix-chain product whose sequence of
dimensions is <5, 10, 3, 12, 5, 50, 6>.

Q2. Show that a full parenthesization of an n-element has exactly n-1 pairs of
parenthesis.

Q3. Determine the cost and structure of an optimal binary search tree for a set of n=7
keys with the following properties:
i    0     1     2     3     4     5     6     7
pi         0.04  0.06  0.08  0.02  0.10  0.12  0.14
qi   0.06  0.06  0.06  0.06  0.05  0.05  0.05  0.05

Q4. Determine the cost and structure of an optimal binary search tree for a set of n=5
keys with the following properties:

i    0     1     2     3     4     5
pi         0.15  0.10  0.05  0.10  0.20
qi   0.05  0.10  0.05  0.05  0.05  0.10

Q5. Apply Floyd-Warshall algorithm on the following graph. Show the matrix 𝐷5 of
the graph.

Q6. When the graph is dense and has negative edge weights (but no negative-weight cycle), which of the
following is the best choice to find the shortest paths?
a) Bellman-ford algorithm
b) Dijakstra’s algorithm
c) Floyd-Warshall algorithm
d) Kruskal’s Algorithm

Q7. What procedure is being followed in Floyd Warshall Algorithm?


a) Top down
b) Bottom up
c) Big bang
d) Sandwich

Q8. What happens when the value of k is 0 in the Floyd Warshall Algorithm?
a) 1 intermediate vertex
b) 0 intermediate vertex
c) N intermediate vertices
d) N-1 intermediate vertices

Check Your Progress Answer:


Q1. Multiplication sequence is ((A1A2)((A3A4)(A5A6)))

Q5. The final distance matrix 𝐷5 is:

 0    3    8    3   -4
 3    0   11    1   -1
-3    0    0   -5   -7
 2    5   10    0   -2
 8   11   16    6    0

Q6.Floyd-warshall algorithm
Q7. Bottom up
Q8. When k=0, a path from vertex i to vertex j has no intermediate vertices at all.
Such a path has at most one edge and hence d^(0)[i,j] = 𝑤𝑖𝑗

UNIT 3 BOOLEAN ALGEBRA AND CIRCUITS

Structure
3.0 Introduction
3.1 Objectives
3.2 Boolean Algebras
3.3 Logic Circuits
3.4 Boolean Functions
3.5 Summary
3.6 Solutions/ Answers

3.0 INTRODUCTION
This unit is very closely linked with Unit 1. It was C.E.Shannon, the founder of
information theory, who observed an analogy between the functioning of switching
circuits and certain operations of logical connectives. In 1938 he gave a technique
based on this analogy to express and manipulate simple switching circuits
algebraically. Later, the discovery of some new solid state devices (called electronic
switches or logic gates) helped to modify these algebraic techniques and, thereby,
paved a way to solve numerous problems related to digital systems algebraically.
Fig. 1: Claude Shannon
In this unit, we shall discuss the symbolic logic techniques which are required for the
algebraic understanding of circuits and computer logic. In Sec. 3.2, we shall introduce
you to Boolean algebras with the help of certain examples based on objects you are
already familiar with. You will see that such algebras are apt for describing operations
of logical circuits used in computers.

In Sec. 3.3, we have discussed the linkages between Boolean expressions and logic
circuits.

In Sec. 3.4, you will read about how to express the overall functioning of a circuit
mathematically in terms of certain suitably defined functions called Boolean
functions. In this section we shall also consider a simple circuit design problem to
illustrate the applications of the relationship between Boolean functions and circuits.

Let us now consider the objectives of this unit.

3.1 OBJECTIVES
After reading this unit, you should be able to:

• define and give examples of Boolean algebras, expressions and functions;


• give algebraic representations of the functioning of logic gates;
• obtain and simplify the Boolean expression representing a circuit;
• construct a circuit for a Boolean expression;
• design and simplify some simple circuits using Boolean algebra techniques.

3.2 BOOLEAN ALGEBRAS


Let us start with some questions: Is it possible to design an electric/electronic circuit
without actually using switches(or logic gates) and wires? Can a circuit be redesigned,
to get a simpler circuit with the help of pen and paper only?

The answer to both these questions is `Yes'. What allows us to give this reply is the
concept of Boolean algebras. Before we start a formal discussion on these types of
algebras, let us take another look at the objects treated in Unit 1.
As before, let the letters p, q, r,. . . denote statements (or propositions). We write S for
the set of all propositions. As you may recall, a tautology T (or a contradiction F ) is
any proposition which is always true (or always false, respectively). By abuse of
notation, we shall let T denote the set of all tautologies and F denote the set of all
contradictions. Thus,
T ⊆ S, F ⊆ S.

You already know from Unit 1 that, given two propositions p and q, both p ∧ q and p ∨
q are again propositions. And so, by the definition of a binary operation, you can see
that both ∧ (conjunction) and ∨ (disjunction) are binary operations on the set S,
where we are writing ∧ (p, q) as p ∧ q and ∨ (p, q) as p ∨ q ∀ p, q ∈ S.

Again, since ~ p is also a proposition, the operation ~ (negation) defines a unary


function ~: S → S. Thus, the set of propositions S, with these operations, acquires an
algebraic structure.

As is clear from Sec.1.3, under these three operations, the elements of S satisfy
associative laws, commutative laws, distributive laws and complementation laws.

Also, by E19 of Unit 1, you know that p ∨ F = p and p ∧ T = p, for any proposition p.
These are called the identity laws. The set S with the three operations and properties
listed above is a particular case of an algebraic structure which we shall now define.
(In the NCERT textbook, ‘+’ and ‘.’ are used instead of ‘∨’ and ‘∧’, respectively.)
Definition: A Boolean algebra B is an algebraic structure which consists of a set X
(≠ Ø) having two binary operations (denoted by ∨ and ∧), one unary operation
(denoted by ' ) and two specially defined elements O and I (say), which satisfy the
following five laws for all x, y, z ∈ X.

B1. Associative Laws: x ∨ (y ∨ z) = (x ∨ y) ∨ z,


x ∧ (y ∧ z) = (x ∧ y) ∧ z

B2. Commutative Laws: x ∨ y = y ∨ x,


x∧y=y∧x

B3. Distributive Laws: x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z),


x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)

B4. Identity Laws: x ∨ O = x,


x∧I=x

B5. Complementation Laws: x ∧ x' = O,


x ∨ x'= I.

We write this algebraic structure as B = (X, ∨, ∧, ' , O, I), or simply B, if the context
makes the meaning of the other terms clear. The two operations ∨ and ∧ are called the
join operation and meet operation, respectively. The unary operation ' is called the
complementation.

From our discussion preceding the definition above, you would agree that the set S of
propositions is a Boolean algebra, where T and F will do the job of I and O,
respectively. Thus, (S, ∧, ∨, ~, F , T ) is an example of a Boolean algebra.

We give another example of a Boolean algebra below.

Example 1: Let X be a non-empty set, and P (X) denote its power set, i.e., P (X) is the
set consisting of all the subsets of the set X. Show that P (X) is a Boolean algebra.

Solution: We take the usual set-theoretic operations of intersection (∩), union (∪),
and complementation ( c ) in P (X) as the three required operations. Let ø and X play
the roles of O and I, respectively. Then you can verify that all the conditions for (P
(X), ∪, ∩, c , Φ, X) to be a Boolean algebra hold good.

For instance, the identity laws (B4) follow from two set-theoretic facts, namely, `the
intersection of any subset with the whole set is the set itself' and `the union of any set
with the empty set is the set itself'. On the other hand, the complementation laws (B5)
follow from another set of facts from set theory, namely, `the intersection of any
subset with its complement is the empty set' and `the union of any set with its
complement is the whole set'.
***
Yet another example of a Boolean algebra is based on switching circuits. For this, we
first need to elaborate on the functioning of ordinary switches in a mathematical way.
In fact, we will present the basic idea which helped the American, C.E.Shannon, to
detect the connection between the functioning of switches and Boole's symbolic logic.

You may be aware of the functioning of a simple on-off switch which is commonly
used as an essential component in the electric (or electronic) networking systems. A
switch is a device which allows the current to flow only when it is placed in the ON
position, i.e., when the gap is closed by a conducting rod. Thus, the ON position of a
switch is one state of a switch, called a closed state. The other state of a switch is the
open state, when it is placed in the OFF position. So, a switch has two stable states.
Fig. 2: OFF-ON position (x = 1 for ON/closed, x = 0 for OFF/open)
There is another way to talk about the functioning of a switch. We can denote a switch
by x, and use the values 0 and 1 to depict its two states, i.e., to convey that x is open
we write x = 0, and to convey that x is closed we write x = 1 (see Fig.2).

These values which denote the state of a switch x are called the state-values
(s.v., in short) of that switch.

We shall also write x′ for a switch which is always in a state opposite to x. So that,
x is open → x' is closed and x is closed → x' is open.

The switch x' is called the invert of the switch x. For example, the switch a' shown in
Fig.3 is an invert of the switch a.

Table 1: s.v. of x'
x    x'
0    1
1    0

Table 1 above gives the state value of x' for a given state value of the switch x.
These values are derived from the definition of x' and our preceding discussion.

Fig. 3: a' is the invert of a.

Note that the variable x that denotes a switch can only take on 2 values, 0 and 1. Such
a variable (which can only take on two values) is called a Boolean variable. Thus, if
x is a Boolean variable, so is x'. (Do you see a connection between Table 1 above and
Table 10, Unit 1?) Now, in order to design a circuit consisting of several
switches, there are two ways in which two switches can be connected: parallel
connections and series connections (see Fig.4).

Fig. 4: Two ways of connecting switches: (i) Parallel Connection, (ii) Series Connection


From Fig.4(i) above, you can see that in case of a parallel connection of switches a
and b (say), current will flow from the left to the right extreme if at least one of the
two switches is closed. Note that ‘parallel’ does not mean that both the switches are in
the same state.

On the other hand, current can flow in a series connection of switches only when both
the switches a and b are closed (see Fig.4 (ii) ).

Given two switches a and b, we write a par b and a ser b for these two types of
connections, respectively.

In view of these definitions and the preceding discussion, you can see that the state
values of the connections a par b and a ser b, for different pairs of state values of
switches a and b, are as given in the tables below.
Table 2: State values of a par b and a ser b.

s.v. of a   s.v. of b   s.v. of a par b   s.v. of a ser b
0           0           0                 0
0           1           1                 0
1           0           1                 0
1           1           1                 1

We have now developed a sufficient background to give you the example of a


Boolean algebra which is based on switching circuits.

Example 2: The set S = {0, 1} is a Boolean algebra.

Solution: Take ser and par in place of ∧ and ∨, respectively, and inversion ( ' )
instead of ~ as the three required operations in the definition of a Boolean algebra.
Also take 0 for the element O and 1 for the element I in this definition.
Now, using Tables 1 and 2, you can check that the five laws B1-B5 hold good. Thus,
(S, par, ser,', 0, 1) is a Boolean algebra.
***
A Boolean algebra whose underlying set has only two elements is very important in
the study of circuits. We call such an algebra a two-element Boolean algebra, and
denote it by B. From this Boolean algebra we can build many more, as in the
following example.

Example 3: Let Bn = B × B × · · · × B = {(e1 , e2 , . . . , e n ) | each ei = 0 or 1}, for n ≥1,


be the Cartesian product of n copies of B. For ik , jk ∈ {0, 1} (1 ≤ k ≤ n), define

(i1 , i2 , . . . , in ) ∧ (j1 , j2 , . . . , jn ) = (i1 ∧ j1 , i2 ∧ j2 , . . . , in ∧ jn ) ,


(i1 , i2 , . . . , in ) ∨ (j1 , j2 , . . . , jn ) = (i1 ∨ j1 , i 2 ∨ j2 , . . . , in ∨ jn ) , and
(i1 , i2 , . . . , in )' = (i'1 , i'2 , . . . , i' n ) .

Then Bn is a Boolean algebra, for all n ≥ 1.

Solution: Firstly, observe that the case n = 1 is the Boolean algebra B.


Now, let us write O = (0, 0, . . . , 0) and I = (1, 1, . . . , 1), for the two elements of Bn
consisting of n-tuples of 0's and 1's, respectively. Using the fact that B is a Boolean
algebra, you can check that Bn , with operations as defined above, is a Boolean algebra
for every n ≥1.
***

The Boolean algebras Bn , n ≥ 1, (called switching algebras) are very useful for the
study of the hardware and software of digital computers.

50
We shall now state, without proof, some other properties of Boolean algebras, which
can be deduced from the five laws (B1-B5).

Theorem 1: Let B = (S, ∨, ∧, ', O, I) be a Boolean algebra. Then the following laws


hold ∀x, y ∈ S.

a) Idempotent laws : x ∨ x = x, x ∧ x = x.
b) Absorption laws : x ∨ (x ∧ y) = x, x ∧ (x ∨ y) = x.
c) Involution law : (x' )' = x.
d) De Morgan’s laws : (x ∨ y)' = x' ∧ y' , (x ∧ y)' = x' ∨ y'.

In fact, you have already come across some of these properties for the Boolean
algebras of propositions in Unit 1. In the following exercise we ask you to verify
them.

E1) a) Verify the identity laws and absorption laws for the Boolean algebra
(S, ∧, ∨, ~, F, T) of propositions.
b) Verify the absorption laws for the Boolean algebra
(P(X), ∪, ∩, c, Φ, X).

In Theorem 1, you may have noticed that for each statement involving ∨ and ∧, there
is an analogous statement with ∧ (instead of ∨) and ∨ (instead of ∧). This is not a
coincidence, as the following definition and result shows.

Definition : If p is a proposition involving ~, ∧ and ∨, the dual of p, denoted by pd, is


the proposition obtained by replacing each occurrence of ∧ (and/or ∨) in p by ∨
(and/or ∧, respectively) in pd .
For example, x ∨ (x ∧ y) = x is the dual of x ∧(x ∨ y) = x.

Now, the following principle tells us that if a statement is proved true, then we have
simultaneously proved that its dual is true.

Theorem 2 (The principle of duality): If s is a theorem about a Boolean algebra,


then so is its dual sd .
It is because of this principle that the statements in Theorem 1 look so similar.

Let us now see how to apply Boolean algebra methods to circuit design.

While expressing circuits mathematically, we identify each circuit in terms of some


Boolean variables. Each of these variables represents either a simple switch or an
input to some electronic switch.

Definition: Let B = (S, ∨, ∧, ', O, I) be a Boolean algebra. A Boolean expression in


variables x1, x2 , . . . , xk (say), each taking their values in the set S is defined
recursively as follows:

i) Each of the variables x1 , x2 , . . . , xk , as well as the elements O and I of the


Boolean algebra B are Boolean expressions.

ii) If X1 and X2 are previously defined Boolean expressions, then X1 ∧ X2 , X1 ∨ X


2 and X'1 are also Boolean expressions.

For instance, x1 ∧ x'3 is a Boolean expression because so are x1 and x'3 , Similarly,
because x1 ∧ x2 is a Boolean expression, so is (x1 ∧ x2 ) ∧ (x1 ∧ x'3 ).

If X is a Boolean expression in n variables x1 , x2 , . . . , xn (say), we write this as


X = X(x1 , . . . , xn ) .
In the context of simplifying circuits, we need to reduce Boolean expressions to
simpler ones. `Simple' means that the expression has fewer connectives, and all the
literals involved are distinct. We illustrate this technique now.

Example 4: Reduce the following Boolean expressions to a simpler form.

(a) X(x1, x2 ) = (x1 ∧ x2 ) ∧ (x1 ∧ x'2 );

(b) X(x1 , x2 , x3 ) = ((x1 ∧ x2 ) ∨ (x1 ∧ x'2 ∧ x3 )) ∧ (x1 ∧ x3 ).

Solution: (a) Here we can write


(x1 ∧ x2 ) ∧ (x1 ∧ x'2 ) = ((x1 ∧ x2 ) ∧ x1 ) ∧ x'2 (Associative law)
= (x1 ∧ x2) ∧ x'2 (Absorption law)
= x1 ∧ (x2 ∧ x'2 ) (Associative law)
= x1 ∧ O (Complementation law)
= O. (Identity law)

Thus, in its simplified form, the expression given in (a) above is O, i.e., a null
expression.

(b) We can write

((x1 ∧ x2 ) ∨ (x1 ∧ x'2 ∧ x3 )) ∧ (x1 ∧ x3 )
= [x1 ∧ {x2 ∨ (x'2 ∧ x3 )}] ∧ (x1 ∧ x3 ) (Distributive law)
= [x1 ∧{(x2 ∨ x'2 ) ∧ (x2 ∨ x3 )}] ∧ (x1 ∧ x3 ) (Distributive law)
= [x1 ∧ {I ∧ (x2 ∨ x3 )}] ∧ (x1 ∧ x3 ) (Complementation law)
= [x1 ∧ (x2 ∨ x3 )] ∧ (x1 ∧ x3 ) (Identity law)
= [(x1 ∧ x2 ) ∨ (x1 ∧ x3 )] ∧ (x1 ∧ x3 ) (Distributive law)
= [(x1 ∧ x2 ) ∧ (x1 ∧ x3 )] ∨ [(x1 ∧ x3 ) ∧ (x1 ∧ x3 )] (Distributive law)
= (x1 ∧ x2 ∧ x3 ) ∨ (x1 ∧ x3 ) (Idemp.,& assoc. laws)
= x1 ∧ [(x2 ∧ x3 ) ∨ x3 ] (Distributive law)
= x1 ∧ x3 (Absorption law)
Thus, the simplified form of the expression given in (b) is (x1 ∧ x3 ).

***
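Since the variables only take the values 0 and 1, such a simplification can always be double-checked
by brute force. The short Python check below (ours, not part of the original text) confirms that the
expression in Example 4(b) and its simplified form x1 ∧ x3 agree on all eight inputs.

    from itertools import product

    for x1, x2, x3 in product([0, 1], repeat=3):
        original   = ((x1 & x2) | (x1 & (1 - x2) & x3)) & (x1 & x3)   # ((x1∧x2) ∨ (x1∧x'2∧x3)) ∧ (x1∧x3)
        simplified = x1 & x3
        assert original == simplified
    print("Example 4(b): both expressions agree on all 8 inputs")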

Now you should find it easy to solve the following exercise.

E2) Simplify the Boolean expression


X(x1 , x2 , x3 ) = (x1 ∧ x2 ) ∨ ((x1 ∧ x2 ) ∧ x3 ) ∨ (x2 ∧ x3 ).

With this we conclude this section. In the next section we shall give an important
application of the concepts discussed here.

3.3 LOGIC CIRCUITS


If you look around, you would notice many electric or electronic appliances of daily
use. Some of them need a simple switching circuit to control the auto-stop (such as in
a stereo system). Some would use an auto-power off system used in transformers to
control voltage fluctuations. Each circuit is usually a combination of on-off switches,
wired together in some specific configuration. Nowadays certain types of electronic
blocks (i.e., solid state devices such as transistors, resistors and capacitors) are more
in use. We call these electronic blocks logic gates, or simply, gates. In Fig. 5 we have
shown a box which consists of some electronic switches (or logic gates), wired
together in a specific manner. Each line which is entering the box from the left
represents an independent power source (called input), where all of them need not
supply voltage to the box at a given moment. A single line coming out of the box
gives the final output of the circuit box. The output depends on the type of input.

Fig. 5: A logic circuit (input power lines entering a circuit box, with a single output lead)

This sort of arrangement of input power lines, a circuit box and output lead is basic
to all electronic circuits. Throughout the unit, any such interconnected assemblage of
logic gates is referred to as a logic circuit.

As you may know, computer hardware is designed to handle only two levels of
voltage, both as inputs as well as outputs. These two levels, denoted by 0 and 1, are
called bits (an acronym for binary digits). When the bits are applied to the logic gates
by means of one or two wires (input leads), the output is again in the form of voltages
0 and 1. Roughly speaking, you may think of a gate to be on or off according to
whether the output voltage is at level 1 or 0, respectively.

Table 3: Outputs of AND-gate
x1   x2   x1 ∧ x2
0    0    0
0    1    0
1    0    0
1    1    1

Three basic types of logic gates are an AND-gate, an OR-gate and a NOT-gate. We
shall now define them one by one.

Definition : Let the Boolean variables x1 and x2 represent any two bits. An AND-gate
receives inputs x1 and x2 and produces the output, denoted by x1 ∧ x2 , as given in
Table 3 above.

The standard pictorial representation of an AND-gate is shown in Fig.6 below.


x1
x1 ∧ x2
x2

Fig. 6: Diagrammatic representation of an AND -gate

From the first three rows of Table 3, you can see that whenever the voltage in any one
of the input wires of the AND-gate is at level 0, then the output voltage of the gate is
also at level 0. You have already encountered such a situation in Unit 1. In the
following exercise we ask you to draw an analogy between the two situations.
Table 4: Output of an OR-gate.
x1   x2   x1 ∨ x2
0    0    0
0    1    1
1    0    1
1    1    1

E3) Compare Table 3 with Table 2 of Unit 1. How would you relate x1 ∧ x2 with p ∧
q, where p and q denote propositions?

Let us now consider another elementary logic gate.

Definition : An OR-gate receives inputs x1 and x2 and produces the output, denoted
by x1 ∨ x2, as given in Table 4. The standard pictorial representation used for the
OR-gate is as shown in Fig.7.

Fig. 7: Diagrammatic representation of an OR-gate

From Table 4 you can see that the situation is the other way around from that in Table
3, i.e., the output voltage of an OR-gate is at level 1 whenever the level of voltage in
even one of the input wires is 1. What is the analogous situation in the context of
propositions? The following exercise is about this.

E4) Compare Table 4 with Table 1 of Unit 1. How would you relate x1 ∨ x2 with p ∨
q, where p and q are propositions?

And now we will discuss an electronic realization of the invert of a simple switch
about which you read in Sec. 3.2.

Definition : A NOT-gate receives bit x as input, and produces an output denoted by
x', as given in Table 5. The standard pictorial representation of a NOT-gate is shown
in Fig. 8 below.

Table 5: Output of a NOT-gate
x    x'
0    1
1    0

Fig. 8: Diagrammatic representation of NOT-gate
If you have solved E3 and E4, you would have noticed that Tables 3 and 4 are the
same as the truth tables for the logic connectives ∧ (conjunction) and ∨ (disjunction).
Also Table 3 of Unit 1, after replacing T by 1 and F by 0, gives Table 5. This is why
the output tables for the three elementary gates are called logic tables. You may find
it useful to remember these logic tables because they are needed very often for
computing the logic tables of logic circuits.

Another important fact that these logic tables will help you prove is given in the
following exercise.

E5) Let B = {0, 1} consist of the bits 0 and 1. Show that B is a Boolean algebra, i.e.,
that the bits 0 and 1 form a two-element Boolean algebra.

As said before, a logic circuit can be designed using elementary gates, where the
output from an AND-gate, or an OR-gate, or a NOT-gate is used as an input to other
such gates in the circuitry. The different levels of voltage in these circuits, starting
from the input lines, move only in the direction of the arrows as shown in all the
figures given below. For instance, one combination of the three elementary gates is
shown in Fig.9.

Fig. 9: A logic circuit of elementary gates.

Now let us try to see the connection between logic circuits and Boolean expressions.
We first consider the elementary gates. For a given pair of inputs x1 and x2 , the output
in the case of each of these gates is an expression of the form x1 ∧ x2 or x1 ∨ x2 or x' .

Next, let us look at larger circuits. Is it possible to find an expression associated with a
logic circuit, using the symbols ∧, ∨ and ' ? Yes, it is. We will illustrate the technique
of finding a Boolean expression for a given logic circuit with the help of some
examples. But first, note that the output of a gate in a circuit may serve as an input to
some other gate in the circuit, as in Fig. 9. So, to get an expression for a logic circuit
the process always moves in the direction of the arrows in the circuitry. With this in
mind, let us consider some circuits.

Example 5: Find the Boolean expression for the logic circuit given in Fig.9 above.
Solution: In Fig.9, there are four input terminals. Let us call them x1 , x2 , x3 and x4 .
So, x1 and x2 are inputs to an OR-gate, which gives x1 v x2 as an output expression
(see Fig. 9(a) ).

Similarly, the other two inputs x3 and x4 , are inputs to an AND-gate. They will give x3
∧ x4 as an output expression. This is, in turn, an input for a NOT-gate in the circuit.
So, this yields (x3 ∧ x4 )' as the output expression. Now, both the expressions x1 ∨ x2
and (x3 ∧ x4 )' are inputs to the extreme right AND-gate in the circuit. So, they give (x1
∨ x2 ) ∧ (x3 ∧ x4 )' as the final output expression, which represents the logic circuit.
Fig. 9 (a): The circuit of Fig. 9 with the intermediate outputs x1 ∨ x2 , x3 ∧ x4 and (x3 ∧ x4)′, and the final output (x1 ∨ x2) ∧ (x3 ∧ x4)′, marked on it.

***
You have just seen how to find a Boolean expression for a logic circuit. For more
practice, let us find it for another logic circuit.

Example 6: Find the Boolean expression C for the logic circuit given in Fig. 10 (inputs x1, x2, x3; intermediate outputs A and B; final output C).

Fig. 10

Solution: Here the first output is from an OR-gate, i.e., A is x1 ∨ x2 . This, in turn,
serves as the input to a NOT-gate attached to it from the right. The resulting bit B is
(x1 ∨ x2 )' . This, and x3 , serve as inputs to the extreme right AND-gate in the circuit
given above. This yields an output expression (x1∨ x2 )' ∧ x3 , which is C, the required
expression for the circuit given in Fig.10.

***

Why don't you try to find the Boolean expressions for some more logic circuits now?

E6) Find the Boolean expression for the output of the logic circuits given below.

(a) [circuit with inputs x1 and x2]    (b) [circuit with inputs x1, x2 and x3]

So far, you have seen how to obtain a Boolean expression that represents a given
circuit. Can you do the converse? That is, can you construct a logic circuit
corresponding to a given Boolean expression? In fact, this is done when a circuit
55
Elementary Logic designing problem has to be solved. The procedure is quite simple. We illustrate it
with the help of some examples.

Example 7: Construct the logic circuit represented by the Boolean expression


(x'1 ∧ x2 ) ∨ (x1 ∨ x3 ), where xi (1 ≤ i ≤ 3) are assumed to be inputs to that circuitry.
Solution: Let us first see what the portion (x'1 ∧ x2 ) of the given expression
contributes to the complete circuit. In this expression the literals x′1 and x2 are
connected by the connective ∧ (AND). Thus the circuit corresponding to it is as
shown in Fig.11(a) below, by the definitions of NOT-gate and AND-gate.

Fig. 11: Logic circuits for the expressions (a) x'1 ∧ x2 and (b) x1 ∨ x3.

Similarly, the gate corresponding to the expression x1 v x3 is as shown in Fig.11(b)


above. Finally, note that the given expression has two parts, namely, x'1 ∧ x2 and
x1 ∨ x3 , which are connected by the connective ∨ (OR). So, the two logic circuits
given in Fig.11 above, when connected by an OR-gate, will give us the circuit shown
in Fig. 12 below.

Fig.12: Circuitry for the expression (x'1 ∧ x2 ) ∨ (x1 ∨ x3 )

This is the required logic circuit which is represented by the given expression.

***

Example 8: Given the expression (x'1 ∨ (x2 ∧ x'3 )) ∧ (x2 ∨ x'4 ), find the
corresponding circuit, where xi (1 ≤ i ≤4) are assumed to be inputs to the circuitry.

Solution: We first consider the circuits representing the expressions x2 ∧ x'3 and
x2 ∨ x'4 . They are as shown in Fig.13(a).

Fig. 13: Construction of a logic circuitry: (a) circuits for x2 ∧ x'3 and x2 ∨ x'4, (b) NOT-gates giving x'3 and x'4, (c) circuit for x'1 ∨ (x2 ∧ x'3).

Also you know that the literals x'3 and x'4 are outputs of the NOT-gate. So, these can
be represented by logic gates as shown in Fig.13(b). Then the circuit for the part x'1 ∨
(x2 ∧ x'3 ) of the given expression is as shown in Fig.13(c). You already know how to
construct a logic circuit for the expression x2 ∨ x'4 .
Finally, the two expressions (x'1 ∨ (x2 ∧ x'3 )) and (x2 ∨ x'4 ) being connected by the
connective ∧ (AND), give the required circuit for the given expression as shown in
Fig.14.

Fig. 14: Circuitry for the expression (x'1 ∨ (x2 ∧ x'3 )) ∧ (x2 ∨ x'4 ).

***

Why don't you try to solve some exercises now?

E7) Find the logic circuit corresponding to the expression x'1∧ (x2 ∨ x'3 ) .

E8) Construct the logic circuit and obtain the logic table for the expression
x1 ∨ (x'2 ∧ x3 ).

So far we have established a one-to-one correspondence between logic circuits and


Boolean expressions. You may wonder about the utility of this. The mathematical
view of a circuit can help us understand the overall functioning of the circuit. To
understand how, consider the circuit given in Fig.10 earlier.

You may think of the input bits x1 , x2 , and x3 as three variables, each one of which
is known to have two values only, namely, 0 or 1, depending upon the level of voltage
these inputs have at any moment of time. Then the idea is to evaluate the expression
(x1 ∨ x2 )' ∧ x3 , which corresponds to this circuit, for different values of the 3-tuple
(x1 , x2 , x3 ).

How does this evaluation help us to understand the functioning of the circuit? To see
this, consider a situation in which the settings of x1 , x2 and x3 at a certain stage of the
process are x1 = x3 = 0 and x2 = 1. Then we know that x1 ∨ x2 = 0 ∨ 1 = 1 (see the
second row of Table 4 given earlier). Further, using the logic table of a NOT-gate, we
get (x1 ∨ x2)' = 1' = 0. Finally, from Table 3, we get (x1 ∨ x2)' ∧ x3 = 0 ∧ 0 = 0.
Thus, the expression (x1 ∨ x2)' ∧ x3 has value 0 for the set of values (0, 1, 0) of the
input bits (x1, x2, x3). In other words, when x1 and x3 are at level 0 and x2 is at
level 1, the output of the circuit is at level 0.

Using similar arguments, you can very easily calculate the other values of the
expression (x1 ∨ x2 )' ∧ x3 in the set
{0,1}3 = {(x1 , x2 , x3 ) | xi = 0 or 1, 1 ≤ i ≤ 3}

of values of input bits. We have recorded them in Table 6.

Observe that the row entries in the first three columns of Table 6 represent the
different values which the input bits (x1 , x2 , x3 ) may take. Each entry in the last
column of the table gives the output of the circuit represented by the expression
(x1 ∨ x2)' ∧ x3 for the corresponding set of values of (x1 , x2 , x3 ). For example, if
(x1 , x2 , x3 ) is (0,1,0), then the level of voltage in the output lead is at a level 0
(see the third row of Table 6).
You should verify that the values in the other rows are correct.

Table 6: Logic table for the expression (x1 ∨ x2)' ∧ x3.
(Table 6 is the logic table for the circuit given in Fig. 10.)

x1  x2  x3  x1 ∨ x2  (x1 ∨ x2)'  (x1 ∨ x2)' ∧ x3
0   0   0   0        1           0
0   0   1   0        1           1
0   1   0   1        0           0
1   0   0   1        0           0
0   1   1   1        0           0
1   1   0   1        0           0
1   0   1   1        0           0
1   1   1   1        0           0
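Instead of filling in the rows one at a time, you could also let a short program run through all the triples in {0, 1}3 and print the same rows. The following is a sketch of ours (in Python), not part of the unit.

from itertools import product

# Reproduce Table 6: evaluate (x1 ∨ x2)' ∧ x3 at every input triple.
for x1, x2, x3 in product([0, 1], repeat=3):
    a = x1 | x2               # x1 ∨ x2
    out = (1 - a) & x3        # (x1 ∨ x2)' ∧ x3
    print(x1, x2, x3, a, 1 - a, out)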

Why don't you try an exercise now?

E9) Compute the logic table for the circuit given in E6(b) above.

You have seen how the logic table of an expression representing a circuit provides a
functional relationship between the state (or level) of voltage in the input terminals
and that in the output lead of that logic circuitry. This leads us to the concept of Boolean
functions, which we will now discuss.

3.4 BOOLEAN FUNCTIONS


In the last section you studied that an output expression is not merely a device for
representing an interconnection of gates. It also defines output values as a function of
input bits. This provides information about the overall functioning of the
corresponding logic circuit. So, this function gives us a relation between the inputs to
the circuit and its final output.

This is what helps us to understand, and control, the functioning of logic circuits from
a mathematical point of view. To explain what this means, let us reformulate the logic
tables in terms of functions of the input bits.

Let us first consider the Boolean expression


X(x1 , x2 ) = x1 ∧ x'2,

where x1 and x2 take values in B = {0, 1}. You know that all the values of this
expression, for different pairs of values of the variables x1 and x2 , can be calculated
by using properties of the Boolean algebra B. For example,

0 ∧ 1' = 0 ∧ 0 = 0 ⇒ X(0, 1) = 0.

Similarly, you can calculate the other values of X(x1, x2 ) = x1 ∧ x'2 over B.

In this way we have obtained a function f : B2 → B, defined as follows:


f(e1, e2) = X(e1, e2) = e1 ∧ e'2, where e1, e2 ∈ {0, 1}.

So f is obtained by replacing xi with ei in the expression X(x1 , x2 ). For example,


when e1 = 1, e2 = 0, we get f(1, 0) = 1 ∧ 0' = 1.

More generally, each Boolean expression X(x1, x2, . . . , xk) in k variables, where
each variable can take values from the two-element Boolean algebra B, defines a
function f : Bk → B : f(e1, . . . , ek) = X(e1, . . . , ek).

Any such function is called a Boolean function.


Thus, each Boolean expression over B = {0, 1} gives rise to a Boolean function.
In particular, corresponding to each circuit, we get a Boolean function.
Therefore, the logic table of a circuit is just another way of representing the Boolean
function corresponding to it.

For example, the logic table of an AND-gate can be obtained using the function ∧ :
B2 → B : ∧ (e1 , e2 ) = e1 ∧ e2 .

To make matters more clear, let us work out an example.

Example 9: Let f : B2 → B denote the function which is defined by the Boolean


expression X(x1 , x2 ) = x'1 ∧ x'2 . Write the values of f in tabular form.

Solution: f is defined by f(e1 , e2 ) = e'1 ∧ e'2 for e1 , e2 ∈ {0, 1}. Using Tables 3, 4 and
5, we have
f(0, 0) = 0' ∧ 0' = 1 ∧ 1 = 1, f(0, 1) = 0' ∧ 1' = 1 ∧ 0 = 0,
f(1, 0) = 1' ∧ 0' = 0 ∧1 = 0, f(1, 1) = 1' ∧ 1' = 0 ∧ 0 = 0.

We write this information in Table 7.

Table 7: Boolean function for the expression x'1 ∧ x'2.

e1  e2  e'1  e'2  f(e1, e2) = e'1 ∧ e'2
0   0   1    1    1 ∧ 1 = 1
0   1   1    0    1 ∧ 0 = 0
1   0   0    1    0 ∧ 1 = 0
1   1   0    0    0 ∧ 0 = 0

***
Why don't you try an exercise now?

E10) Find all the values of the Boolean function f : B2 → B defined by the Boolean
expression (x1 ∧ x2 ) ∨ (x1 ∧ x'3 ).

Let us now consider the Boolean function g : B2 → B, defined by the expression


X(x1 , x2 ) = (x1 ∨ x2 )'.
Then g(e1, e2) = (e1 ∨ e2)', for e1, e2 ∈ B.

So, the different values that g will take are


g(0, 0) = (0 ∨ 0)' = 0' = 1, g(0, 1) = (0 ∨ 1)'= 1' = 0,
g(1, 0) = (1 ∨ 0)' = 1' = 0, g(1, 1) = (1 ∨1)' = 1' = 0.

In tabular form, the values of g can be presented as in Table 8.


Table 8: Boolean function of the expression (x1 ∨ x2)'.

e1  e2  e1 ∨ e2  g(e1, e2) = (e1 ∨ e2)'
0   0   0        1
0   1   1        0
1   0   1        0
1   1   1        0
By comparing Tables 7 and 8, you can see that f(e1, e2) = g(e1, e2) for all
(e1, e2) ∈ B2. So f and g are the same function.

What you have just seen is that two (seemingly) different Boolean expressions can
have the same Boolean function specifying them. Note that if we replace the input
bits by propositions in the two expressions involved, then we get logically equivalent
statements. This may give you some idea of how the two Boolean expressions are
related. We give a formal definition below.

Definition : Let X = X(x1 , x2 , . . . , xk ) and Y = Y(x1 , x2 , . . . , xk ) be two Boolean


expressions in the k variables x1 , . . . , xk . We say X is equivalent to Y over the
Boolean algebra B, and write X ≡ Y, if both the expressions X and Y define the same
Boolean function over B, i.e.,

X(e1 , e2 , . . . , ek ) = Y(e1 , e2 , . . . , ek ), for all ei ∈ {0, 1}.

So, the expressions to which f and g (given by Tables 7 and 8) correspond are
equivalent.
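Since B is finite, checking such an equivalence is always a finite task: two expressions in k variables are equivalent precisely when the functions they define agree at all 2^k points of B^k. A brute-force check of this kind could look as follows (a sketch of ours in Python; here it confirms that the expressions of Tables 7 and 8 are equivalent).

from itertools import product

def equivalent(X, Y, k):
    # X and Y are Boolean functions of k variables; compare them on all of B^k.
    return all(X(*e) == Y(*e) for e in product([0, 1], repeat=k))

f = lambda e1, e2: (1 - e1) & (1 - e2)   # x'1 ∧ x'2   (Table 7)
g = lambda e1, e2: 1 - (e1 | e2)         # (x1 ∨ x2)'  (Table 8)
print(equivalent(f, g, 2))               # prints True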

Why don't you try an exercise now?

E11) Show that the Boolean expressions


X = (x1 ∧ x2) ∨ (x1 ∧ x'3) and Y = x1 ∧ (x2 ∨ x'3)
are equivalent over the two-element Boolean algebra B = {0, 1}.

So far you have seen that given a circuit, we can define a Boolean function
corresponding to it. You also know that given a Boolean expression over B, there is a
circuit corresponding to it. Now, you may ask:

Given a Boolean function f : Bn → B, is it always possible to get a Boolean expression


which will specify f over B? The answer is `yes', i.e., for every function f : Bn → B (n
≥ 2) there is a Boolean expression (in n variables) whose Boolean function is f itself.

To help you understand the underlying procedure, consider the following examples.

Example 10: Let f : B2 → B be a function which is defined by


f(0, 0) = 1, f(1, 0) = 0, f(0, 1) = 1, f(1, 1) = 1.

Find the Boolean expression specifying the function f.

Solution: f can be represented by the following table.

Input Output
x1 x2 f(x1, x2)
0 0 1
1 0 0
0 1 1
1 1 1

We find the Boolean expression according to the following algorithm. (In Boolean
algebra terminology, this is known as the `disjunctive normal form' (DNF) of the
expression.)

Step 1: Identify all rows of the table where the output is 1: these are the 1st,
3rd and 4th rows.
Step 2: Combine the variables in each of the rows identified in Step 1 with `and'.
Simultaneously, apply `not' to the variables with value zero in these rows. So,
for the
1st row: x'1 ∧ x'2,
3rd row: x'1 ∧ x2,
4th row: x1 ∧ x2.
Step 3: Combine the Boolean expressions obtained in Step 2 with `or' to get
the compound expression representing f:
So, f(x1 , x2 ) = (x'1 ∧ x'2 ) ∨ (x'1 ∧ x2 ) ∨ (x1 ∧ x2 ).

***
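The three steps above are completely mechanical, and so they are easy to turn into code. The sketch below (ours, in Python; the helper name dnf is not part of the unit) builds the DNF of any Boolean function given by its values.

from itertools import product

def dnf(f, k):
    # One conjunction (minterm) for every input tuple at which f takes the value 1.
    terms = []
    for e in product([0, 1], repeat=k):
        if f(*e) == 1:
            lits = [f"x{i+1}" if b == 1 else f"x'{i+1}" for i, b in enumerate(e)]
            terms.append("(" + " ∧ ".join(lits) + ")")
    return " ∨ ".join(terms)

# The function of Example 10:
table = {(0, 0): 1, (1, 0): 0, (0, 1): 1, (1, 1): 1}
print(dnf(lambda a, b: table[(a, b)], 2))
# prints (x'1 ∧ x'2) ∨ (x'1 ∧ x2) ∨ (x1 ∧ x2)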
You can complete Example 10, by doing the following exercise.

E12) In the previous example, show that X(e1 , e2 ) = f(e1 , e2 ) ∀e1 , e2 ∈ B.

E13) By Theorem 2, we could also have obtained the expression of f in Example


10 in `conjunctive normal form' (CNF). Please do so.

An important remark: To get a Boolean expression for a Boolean function h (say),


we should first see how many points vi there are at which h(vi ) = 0, and how many
points vi there are at which h(vi ) = 1. If the number of values for which the
function h is 0 is less than the number of values at which h is 1, then we shall
choose to obtain the expression in CNF, and not in DNF. This will give us a shorter
Boolean expression, and hence, a simpler circuit. For similar reasons, we will prefer
DNF if the number of values at which h is 0 is more.
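In code, the CNF is built dually: one disjunction (maxterm) for every row at which the function is 0, complementing the variables whose value is 1 in that row. The counting rule of the remark then decides which of the two forms to construct. This is a sketch of ours (in Python), assuming the dnf helper given after Example 10.

from itertools import product

def cnf(f, k):
    # One disjunction (maxterm) for every input tuple at which f takes the value 0.
    clauses = []
    for e in product([0, 1], repeat=k):
        if f(*e) == 0:
            lits = [f"x'{i+1}" if b == 1 else f"x{i+1}" for i, b in enumerate(e)]
            clauses.append("(" + " ∨ ".join(lits) + ")")
    return " ∧ ".join(clauses)

def shorter_form(f, k):
    # Prefer DNF when f takes the value 1 at no more points than it takes the value 0.
    ones = sum(f(*e) for e in product([0, 1], repeat=k))
    return dnf(f, k) if ones <= 2**k - ones else cnf(f, k)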

Why don't you apply this remark now?

E14) Find the Boolean expressions, in DNF or in CNF (keeping in mind the
remark made above), for the functions defined in tabular form below.

(a)
x1  x2  x3  f(x1, x2, x3)
1   1   1   1
1   1   0   0
1   0   1   0
1   0   0   1
0   1   1   0
0   1   0   0
0   0   1   0
0   0   0   1

(b)
x1  x2  x3  g(x1, x2, x3)
1   1   1   1
1   1   0   1
1   0   1   0
1   0   0   1
0   1   1   0
0   1   0   0
0   0   1   1
0   0   0   1

Boolean functions tell us about the functioning of the corresponding circuit.


Therefore, circuits represented by two equivalent expressions should essentially do the
same job. We use this fact while redesigning a circuit to create a simpler one. In fact,
in such a simplification process of a circuit, we write an expression for the circuit and
then evaluate the same (over two-element Boolean algebra B) to get the Boolean
function. Next, we proceed to get an equivalent, simpler expression. Finally, the
process terminates with the construction of the circuit for this simpler expression.
Note that, as the two expressions are equivalent, the circuit represented by the
simpler expression will do exactly the same job as the circuit represented by the
original expression.

Let us illustrate this process by an example in some detail.

Example 11: Design a logic circuit capable of operating a central light bulb in
a hall by three switches x1 , x2 , x3 (say) placed at the three entrances to that hall.

Solution: Let us consider the procedure stepwise.


Step 1: To obtain the function corresponding to the unspecified circuit.
To start with, we may assume that the bulb is off when all the switches are off.
Mathematically, this demands a situation where x1 = x2 = x3 = 0 implies f(0, 0, 0)= 0,
where f is the function which depicts the functional utility of the circuit to be
designed.
Let us now see how to obtain the other values of f. Note that every change in the state
of a switch should alternately put the light bulb on or off. Using this fact repeatedly,
we obtain the other values of the function f.

Now, if we assign the value (1,0,0) to (x1 , x2 , x3 ), it brings a single change in the
state of the switch x1 only. So, the light bulb must be on. This can be written
mathematically in the form f(1, 0, 0) = 1. Here the value 1 of f stands for the on state
of the light bulb.
Then, we must have f(1, 1, 0) = 0, because there is yet another change, now in the
state of switch x2 .

You can verify that the other values of f(x1 , x2 , x3 ) are given as in Table 9.

Table 9: Function of a circuitry for a three-point functional bulb.

x1  x2  x3  f(x1, x2, x3)
0   0   0   0
1   0   0   1
1   1   0   0
1   1   1   1
0   1   0   1
0   1   1   0
0   0   1   1
1   0   1   0

Step 2: To obtain a Boolean expression which will specify the function f. Firstly,
note that the number of 1's in the last column of Table 9 is not more than the number
of 0's (here the two counts are equal, so either form would serve). We shall obtain the
expression in DNF.

By following the stepwise procedure of Example 10 (one minterm for each of the
rows (1, 0, 0), (0, 1, 0), (0, 0, 1) and (1, 1, 1), at which f takes the value 1), you can
see that the required Boolean expression is given by
X(x1, x2, x3) = (x1 ∧ x'2 ∧ x'3) ∨ (x'1 ∧ x2 ∧ x'3) ∨ (x'1 ∧ x'2 ∧ x3) ∨ (x1 ∧ x2 ∧ x3)

At this stage we can directly jump into the construction of the circuit for this
expression (using methods discussed in Sec.3.3). But why not try to get a simpler
circuit?

Step 3: To simplify the expression X(x1, x2, x3) given above. Firstly, observe that
(x1 ∧ x'2 ∧ x'3) ∨ (x'1 ∧ x2 ∧ x'3) = [(x1 ∧ x'2) ∨ (x'1 ∧ x2)] ∧ x'3, and
(x'1 ∧ x'2 ∧ x3) ∨ (x1 ∧ x2 ∧ x3) = [(x'1 ∧ x'2) ∨ (x1 ∧ x2)] ∧ x3,
by the distributive law. Now write Y for (x1 ∧ x'2) ∨ (x'1 ∧ x2). Using the De Morgan
and distributive laws, you can check that (x'1 ∧ x'2) ∨ (x1 ∧ x2) = Y'.

We thus have obtained an equivalent expression, namely,

X(x1, x2, x3) = (Y ∧ x'3) ∨ (Y' ∧ x3), where Y = (x1 ∧ x'2) ∨ (x'1 ∧ x2),
whose Boolean function is the same as the function f. (Verify this!) The gain is that the
sub-circuit for Y needs to be built only once; a NOT-gate applied to its output then
gives Y'.
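If you would like to carry out this verification mechanically, the following sketch (ours, in Python, not part of the unit) compares the expression obtained in Step 3 with the values of f listed in Table 9.

from itertools import product

# Values of f from Table 9, keyed by (x1, x2, x3).
table9 = {(0, 0, 0): 0, (1, 0, 0): 1, (1, 1, 0): 0, (1, 1, 1): 1,
          (0, 1, 0): 1, (0, 1, 1): 0, (0, 0, 1): 1, (1, 0, 1): 0}

def simplified(x1, x2, x3):
    y = (x1 & (1 - x2)) | ((1 - x1) & x2)    # Y = (x1 ∧ x'2) ∨ (x'1 ∧ x2)
    return (y & (1 - x3)) | ((1 - y) & x3)   # (Y ∧ x'3) ∨ (Y' ∧ x3)

assert all(simplified(*e) == table9[e] for e in product([0, 1], repeat=3))
print("The expression agrees with Table 9 at all 8 points.")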

Step 4: To design a circuit for the expression obtained in Step 3.

Now, the logic circuit corresponding to the equivalent expression obtained in Step 3
is as shown in Fig.15.

[Diagram not reproduced.]

Fig. 15: A circuit for the expression (Y ∧ x'3) ∨ (Y' ∧ x3), where Y = (x1 ∧ x'2) ∨ (x'1 ∧ x2)

So, in 4 steps we have designed a 3-switch circuit for the hall.

***

We can't claim that the circuit designed in the example above is the simplest circuit.
How to get that is a different story and is beyond the scope of the present course.

Why don't you try an exercise now?

E15) Design a logic circuit to operate a light bulb by two switches, x1 and x2 (say).

We have now come to the end of our discussion on applications of logic. Let us
briefly recapitulate what we have discussed here.

3.5 SUMMARY
In this unit, we have considered the following points.

1. The definition and examples of a Boolean algebra. In particular, we have


discussed the two-element Boolean algebra B = {0, 1}, and the switching algebras
Bn , n ≥ 2.
2. The definition and examples of a Boolean expression.
3. The three elementary logic gates, namely, AND-gate, OR-gate and NOT-gate;
and the analogy between their functioning and operations of logical connectives.
4. The method of construction of a logic circuit corresponding to a given Boolean
expression, and vice-versa.
5. How to obtain the logic table of a Boolean expression, and its utility in the
understanding of the overall functioning of a circuit.
6. The method of simplifying a Boolean expression.
7. The method of construction of a Boolean function f : Bn → B, corresponding to a
Boolean expression, and the concept of equivalent Boolean expressions.
8. Examples of the use of Boolean algebra techniques for constructing a logic circuit
which can function in a specified manner.

3.6 SOLUTIONS/ ANSWERS


E1) a) In E19 of Unit 1, you have already verified the Identity laws. Let us proceed
to show that the propositions p ∨ (p ∧ q) and p are logically equivalent. It
suffices to show that the truth tables of both these propositions are the same.
This follows from the first and last columns of the following table.

p q p∧q p ∨ (p ∧q)
F F F F
F T F F
T F F T
T T T T
Similarly, you can see that the propositions p ∧ (p ∨ q) and
p are equivalent propositions. This establishes the absorption laws for the
Boolean algebra (S, ∧, ∨, ', T, F).
b) Let A and B be two subsets of the set X. Since A ∩ B ⊆ A, (A ∩ B) ∪ A =
A. Similarly, as A ⊆ A ∪ B, we have (A ∪ B) ∩ A = A. Thus, both the forms
of the absorption laws hold good for the Boolean algebra (P(X), ∪, ∩, c, X, ∅).

E2) We can write


X(x1 , x2 , x3 ) = ((x1 ∧ x2 ) ∨ ((x1 ∧ x2 ) ∧ x3 )) ∨ (x2 ∧ x3 )
= (x1 ∧ x2 ) ∨ (x2 ∧ x3 ) (by Absorption law)
= x2 ∧ (x1 ∨ x3 ) (by Distributive law)
This is the simplest form of the given expression.

E3) Take the propositions p and q in place of the bits x1 and x2 , respectively.
Then, when 1 and 0 are replaced by T and F in Table 3 here, we get the
truth table for the proposition p ∧ q (see Table 2 of Unit 1).
This establishes the analogy between the functioning of the AND-gate and the
conjunction operation on the set of propositions.

E4) Take the propositions p and q in place of the bits x1 and x2 , respectively.
Then, when 1 and 0 are replaced by T and F in Table 4 here, we get the truth
table for the proposition p∨q (see Table 1 of Unit 1).
This establishes the analogy between the functioning of the OR-gate and the
disjunction operation on the set of propositions.

E5) Firstly, observe that the information about the outputs of the three elementary
gates, for different values of inputs, can also be written as follows:

0 ∧ 0 = 0 ∧1 = 1 ∧ 0 = 0, 1 ∧1 = 1; (see Table 3)
0 ∨ 0 = 0, 0 ∨ 1 = 1 ∨ 0 = 1 ∨ 1 = 1; and (see Table 4)
0' = 1, 1' = 0. (see Table 5)
Clearly, then, both the operations ∧ and ∨ are binary operations on B, and ' :
B → B is a unary operation. Also, we may take 0 for O and 1 for I in the
definition of a Boolean algebra.
Now, by looking at the logic tables of the three elementary gates, you can
see that all the five laws B1-B5 are satisfied. Thus, B is a Boolean algebra.

E6) a) Here x1 and x2 are inputs to an OR-gate, and so, we take x1 ∨ x2 as input to
the NOT-gate next in the chain which, in turn, yields (x1 ∨ x2 )' as the
required output expression for the circuit given in (a).
b) Here x1 and x2 are the inputs to an AND-gate. So, the expression x1 ∧ x2
serves as an input to the NOT-gate, being next in the chain.
This gives the expression (x1 ∧ x2 )' which serves as one input to the extreme
right AND-gate. Also, since x'3 is another input to this AND-gate (coming out
of a NOT-gate), we get the expression (x1 ∧ x2 )' ∧ x'3 as the final output
expression which represents the circuit given in (b).

E7) You know that the circuits representing the expressions x'1 and x2 ∨ x'3 are as
shown in Fig.16 (a) and (b) below.
[Diagrams (a) and (b) not reproduced.]

Fig. 16

Thus, the expressions x'1 and (x2 ∨ x'3), being connected by the symbol ∧, give
the circuit corresponding to x'1 ∧ (x2 ∨ x'3), as shown in Fig.17 below.

[Diagram not reproduced.]

Fig. 17: A logic circuit for the expression x'1 ∧ (x2 ∨ x'3 )

E8) You can easily see, by following the arguments given in E7, that the circuit
represented by the expression x1 ∨ (x'2 ∧ x3) is as given in Fig.18.

[Diagram not reproduced.]
Fig. 18

The logic table of this expression is as given below.

x1  x2  x3  x'2  x'2 ∧ x3  x1 ∨ (x'2 ∧ x3)
0   0   0   1    0         0
0   0   1   1    1         1
0   1   0   0    0         0
1   0   0   1    0         1
0   1   1   0    0         0
1   1   0   0    0         1
1   0   1   1    1         1
1   1   1   0    0         1

E9) Since the output expression representing the circuit given in E6(b) is found to
be (x1 ∧ x2)' ∧ x'3, the logic table for this circuit is as given below.

x1  x2  x3  x1 ∧ x2  (x1 ∧ x2)'  x'3  (x1 ∧ x2)' ∧ x'3
0   0   0   0        1           1    1
0   0   1   0        1           0    0
0   1   0   0        1           1    1
1   0   0   0        1           1    1
0   1   1   0        1           0    0
1   1   0   1        0           1    0
1   0   1   0        1           0    0
1   1   1   1        0           0    0

E10) Because the expression (x1 ∧ x2 ) ∨ (x1 ∧ x'3 ) involves three variables, the
corresponding Boolean function, f (say), is a three-variable function, i.e. f :
B3 → B. It is defined by
f(e1, e2, e3 ) = (e1 ∧ e2 ) ∨ (e1 ∧ e'3 ), e1, e2 and e3 ∈ B.

Now, you can verify that the values of f in tabular form are as given in the
following table.

e1  e2  e3  e1 ∧ e2  e'3  e1 ∧ e'3  f(e1, e2, e3) = (e1 ∧ e2) ∨ (e1 ∧ e'3)
0   0   0   0        1    0         0
0   0   1   0        0    0         0
0   1   0   0        1    0         0
1   0   0   0        1    1         1
0   1   1   0        0    0         0
1   1   0   1        1    1         1
1   0   1   0        0    0         0
1   1   1   1        0    0         1
E11)
To show that the Boolean expressions X and Y are equivalent over the
two-element Boolean algebra B = {0, 1}, it suffices to show that the Boolean
functions f and g (say) corresponding to the expressions X and Y,
respectively, are the same. As you can see, the function f for the expression X
is calculated in E10 above.
Similarly, you can see that the Boolean function g for the expression Y in
tabular form is as given below.

x1  x2  x3  x'3  x2 ∨ x'3  g(x1, x2, x3) = x1 ∧ (x2 ∨ x'3)
0   0   0   1    1         0
0   0   1   0    0         0
0   1   0   1    1         0
1   0   0   1    1         1
0   1   1   0    1         0
1   1   0   1    1         1
1   0   1   0    0         0
1   1   1   0    1         1
Comparing the last columns of this table and the one given in E10 above,
you can see that f(e1, e2, e3 ) = g(e1, e2, e3 ) ∀ e1, e2, e3 ∈ B = {0, 1}. Thus,
X and Y are equivalent.

E12) Firstly, let us evaluate the given expression X(x1 , x2 ) over the two-element
Boolean algebra B = {0, 1} as follows:
X(0, 0) = (0' ∧ 0' ) ∨ (0' ∧0) ∨ (0 ∧ 0)
= (1 ∧1) ∨ (1 ∧ 0) ∨ (0 ∧ 0)
= 1 ∨ 0 ∨ 0 = 1 = f(0, 0);
X(1, 0) = (1' ∧0') ∨ (1' ∧ 0) ∨ (1 ∧ 0)
= (0 ∧ 1) ∨ (0 ∧ 0) ∨ (1 ∧ 0)
= 0 ∨ 0 ∨ 0 = 0 = f(1, 0);

X(0, 1) = (0' ∧1' ) ∨ (0' ∧1) ∨ (0 ∧1)


= (1 ∧ 0) ∨ (1 ∧1) ∨ (0 ∧ 1)
= 0 ∨ 1 ∨ 0 = 1 = f(0, 1);

X(1, 1) = (1' ∧1') ∨ (1' ∧1) ∨ (1 ∧1)


= (0 ∧ 0) ∨ (0 ∧ 1) ∨ (1 ∧1)
= 0 ∨ 0 ∨ 1 = 1 = f(1, 1).
It thus follows that X(e1, e2 ) = f(e1, e2 ) ∀ e1 , e2 ∈ B = {0, 1}.

E13) Step 1: Identify all rows of the table where the output is 0: This is the 2nd row.
Step 2: Combine x1 and x2 with `or' in these rows, simultaneously applying
`not' to xi if its value is 1 in the row: So, for the 2nd row (where x1 = 1 and
x2 = 0) the expression we have is x'1 ∨ x2.
Step 3: Combine all the expressions obtained in Step 2 with `and' to get the
CNF form representing f: In this case there is only 1 expression.
So f is represented by x'1 ∨ x2 in CNF.

E14) a) Observe from the given table that, among the two values 0 and 1 of the
function f(x1, x2, x3 ), the value 1 occurs the least number of times. Therefore,
by the remark made after E 13, we would prefer to obtain the Boolean
expression in DNF. To get this we will use the stepwise procedure adopted in
Example 10.
Accordingly, the required Boolean expression in DNF is given by
X(x1, x2, x3) = (x1 ∧ x2 ∧ x3) ∨ (x1 ∧ x'2 ∧ x'3) ∨ (x'1 ∧ x'2 ∧ x'3).
b) By the given table, among the two values 0 and 1 of the function the
points v i at which g(vi ) = 0 are fewer than the points vi at which g(vi) = 1. So
we would prefer to obtain the corresponding Boolean expression in CNF.
Applying the stepwise procedure in the solution to E13, the required Boolean
expression (in CNF) is given by
X(x1, x2, x3 ) = (x'1 ∨ x2 ∨ x'3 ) ∧ (x1 ∨ x'2 ∨ x'3 ) ∧ (x1 ∨ x'2 ∨ x3).

E15) Let g denote the function which depicts the functional utility of the circuit to
be designed. We may assume that the light bulb is off when both the switches
x1 and x2 are off, i.e., we write g(0, 0) = 0.
Now, by arguments used while calculating the entries of Table 9, you can
easily see that all the values of the function g are as given below:
g(0, 0) = 0, g(0, 1) = 1, g(1, 0) = 1, g(1, 1) = 0.
Thus, proceeding as in the previous exercise, it can be seen that the Boolean
expression (in DNF), which yields g as its Boolean function, is given by the
expression
X(x1, x2 ) = (x'1 ∧ x2 ) ∨ (x1 ∧ x'2),
because g(0, 1) = 1 and g(1, 0) = 1.
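As a quick check (a sketch of ours in Python, not part of the solution), the expression above can be compared with the four values of g listed here:

X = lambda x1, x2: ((1 - x1) & x2) | (x1 & (1 - x2))   # (x'1 ∧ x2) ∨ (x1 ∧ x'2)
g = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
assert all(X(*e) == v for e, v in g.items())           # no error: the values match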

Finally, the logic circuit corresponding to this Boolean expression is shown


in Fig. 19.
[Diagram not reproduced.]

Fig. 19
