Discrete Mathematics Code
Mathematics (CDOE)
Semester – III
Paper Code – 21MAT23DA1
DISCRETE MATHEMATICS
1.1. Introduction. This chapter contains results related to various types of recurrence relations, generating functions, and methods of finding their solutions.
1.1.1. Objective. The objective of the study of these results is to understand the basic concepts and have
an idea to apply them in further studies about recurrence relations.
1.2. Recurrence Relations.
A recurrence relation relates the nth term of a sequence to its predecessors. Such relations arise naturally from recursive algorithms. A recurrence relation for a sequence 𝑏0 , 𝑏1 , 𝑏2 , . . . is a formula/equation that
relates each term 𝑏𝑛 to certain of its predecessors 𝑏0 , 𝑏1 , 𝑏2 , . . ., 𝑏𝑛−1 . The initial conditions for such a
recurrence relation specify the values of the first few terms needed to start the recurrence.
For example, recursive formula for the sequence
3, 8, 13, 18, 23 . . .
is 𝑏1 = 3, 𝑏𝑛 = 𝑏𝑛−1 + 5, 2 ≤ n < ∞.
Here, 𝑏1 = 3 is the initial condition.
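The recurrence above can be iterated directly; here is a minimal Python sketch (the helper name `b` is illustrative):

```python
def b(n):
    """n-th term of the sequence given by b(1) = 3, b(n) = b(n-1) + 5."""
    term = 3                      # initial condition b(1) = 3
    for _ in range(2, n + 1):
        term += 5                 # apply the recurrence once per step
    return term

print([b(n) for n in range(1, 6)])   # first five terms: 3, 8, 13, 18, 23
```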
Exercise. Find the sequence represented by the recursive formula
𝑏1 = 5, 𝑏𝑛 = 2𝑏𝑛−1 , 2 ≤ n ≤ 6.
1.3 Explicit Formula for a Sequence.
Consider the sequence
1, 4, 9, 16, 25, 36, 49, . . .
which is a sequence of the squares of all positive integers.
This sequence is described by the formula
𝑏𝑛 = 𝑛2 , 1 ≤ 𝑛 < ∞.
Thus, the terms of the sequence have been described using only the position n of each term. This type of
formula is called an explicit formula.
1.3.1. Exercise. Find the explicit formula for the finite sequence
87, 82, 77, 72, 67
Can this sequence be described by a recurrence relation?
1.3.2. Exercise. Find a recursive formula for the factorial function.
1.3.3. Fibonacci sequence. The sequence
1, 1, 2, 3, 5, 8, 13, 21, 34, . . .
defined by the recurrence relation
𝑓0 = 1 , 𝑓1 = 1 , 𝑓𝑛 = 𝑓𝑛−1 + 𝑓𝑛−2 for n ≥ 2
is called Fibonacci sequence.
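A sketch of this recurrence in Python, using the book's indexing f(0) = f(1) = 1 (the function name is illustrative):

```python
def fib(n):
    """f(0) = f(1) = 1 and f(n) = f(n-1) + f(n-2) for n >= 2."""
    prev, curr = 1, 1             # f(0) and f(1)
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev                   # after n shifts, prev holds f(n)

print([fib(n) for n in range(9)])    # 1, 1, 2, 3, 5, 8, 13, 21, 34
```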
1.3.4. Example. Derive a recurrence relation for obtaining the amount A at the end of n years on an
investment of Rs 10,000 at 5% interest compounded annually.
Solution. Suppose 𝐴𝑛 = amount at the end of n years.
Then, 𝐴𝑛 = 𝐴𝑛−1 + interest earned during the nth year on 𝐴𝑛−1
= 𝐴𝑛−1 + (5/100) 𝐴𝑛−1
= 𝐴𝑛−1 (1 + 0.05)
= 1.05 𝐴𝑛−1
Thus, the recurrence relation for calculating the amount becomes
𝐴𝑛 = 1.05 𝐴𝑛−1 , 𝐴0 = Rs. 10,000
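Unrolling the interest recurrence gives A(n) = (1.05)ⁿ · 10,000; a quick Python check (the names and the use of floating point are illustrative):

```python
def amount(n, principal=10_000.0, rate=0.05):
    """A(0) = principal and A(n) = (1 + rate) * A(n-1)."""
    a = principal
    for _ in range(n):
        a *= 1 + rate                 # one year of compound interest
    return a

print(round(amount(3), 2))            # 11576.25
assert abs(amount(5) - 10_000 * 1.05**5) < 1e-6   # matches the closed form
```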
for the price in the economic model, where a, b, k are positive parameters and 𝑝0 is the initial price.
Solution. To obtain the solution, we use the technique of backtracking. Setting
c = −b/k,
we have
𝑝𝑛 = a + c 𝑝𝑛−1
= a + c (a + c 𝑝𝑛−2) = a + ac + c² 𝑝𝑛−2
In general, we have
𝑝𝑛 = a + ac + ac² + . . . + ac^(k−1) + c^k 𝑝𝑛−𝑘
If we set k = n, then
𝑝𝑛 = a + ac + ac² + . . . + ac^(n−1) + c^n 𝑝0
= a(1 + c + c² + . . . + c^(n−1)) + c^n 𝑝0
= a(1 − c^n)/(1 − c) + c^n 𝑝0
Substituting c = −b/k and using 1 − c = (k + b)/k, this becomes
𝑝𝑛 = (−b/k)^n (𝑝0 − ak/(k + b)) + ak/(k + b).
If b/k < 1, then (−b/k)^n becomes very small for large n and thus the price 𝑝𝑛 tends to stabilize at
approximately ak/(k + b). If b/k = 1, then 𝑝𝑛 oscillates between 𝑝0 and 𝑝1. If b/k > 1, then the
difference between successive prices increases.
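The closed form obtained by backtracking can be checked numerically against the recurrence. In this Python sketch the parameter values are hypothetical, chosen so that b/k < 1:

```python
a, b, k, p0 = 10.0, 1.0, 2.0, 4.0    # hypothetical parameters; b/k = 0.5 < 1
c = -b / k

def price_recursive(n):
    p = p0
    for _ in range(n):
        p = a + c * p                # p(n) = a + c * p(n-1)
    return p

def price_closed(n):
    limit = a * k / (k + b)          # the stable price ak/(k + b)
    return c**n * (p0 - limit) + limit

for n in range(10):
    assert abs(price_recursive(n) - price_closed(n)) < 1e-9
print(price_closed(100))             # very close to the equilibrium a*k/(k + b)
```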
1.5. Homogeneous Recurrence Relations with Constant Coefficients.
A linear homogeneous recurrence relation of order k with constant coefficients is a recurrence relation of the form
𝑎𝑛 = 𝑐1 𝑎𝑛−1 + 𝑐2 𝑎𝑛−2 + . . . + 𝑐𝑘 𝑎𝑛−𝑘 , 𝑐𝑘 ≠ 0.
For example, the recurrence relation 𝑎𝑛 = (-2) 𝑎𝑛−1 , is a linear homogeneous recurrence relation of
order 1. The recurrence relation 𝑎𝑛 = 𝑎𝑛−1 + 𝑎𝑛−2 , is a linear recurrence relation of order 2.
The equation
𝑥 𝑘 = 𝑟1 𝑥 𝑘−1 + 𝑟2 𝑥 𝑘−2 + ...........+ 𝑟𝑘
of degree k is called the characteristic equation of the linear homogeneous recurrence relation
𝑎𝑛 = 𝑟1 𝑎𝑛−1 + 𝑟2 𝑎𝑛−2+ ...........+ 𝑟𝑘 𝑎𝑛−𝑘
of order k.
1.5.1. Theorem. If the characteristic equation x²- 𝑟1x-𝑟2 = 0 of the homogeneous recurrence relation
𝑎𝑛 = 𝑟1 𝑎𝑛−1 + 𝑟2 𝑎𝑛−2, has two distinct roots 𝑠1 and 𝑠2 then
𝑎𝑛 = u 𝑠1𝑛 + v 𝑠2𝑛
where u and v depend on the initial conditions, is the explicit formula for the sequence.
Proof. Since 𝑠1 and 𝑠2 are root of the characteristic equation x²- 𝑟1x-𝑟2 = 0 , we have
𝑠12 - 𝑟1 𝑠1 - 𝑟2 = 0 (1)
𝑠22 - 𝑟1 𝑠2 - 𝑟2 = 0 (2)
Let
𝑎𝑛 = u 𝑠1𝑛 + v 𝑠2𝑛 for n ≥ 1 (3)
It is sufficient to show that (3) defines the same sequence as 𝑎𝑛 = 𝑟1 𝑎𝑛−1 + 𝑟2 𝑎𝑛−2 . We have
𝑎1 = u 𝑠1 + v 𝑠2
𝑎2 = u 𝑠12 + v 𝑠22
and the initial conditions are satisfied. Further,
𝑎𝑛 = u 𝑠1𝑛 + v 𝑠2𝑛
= u 𝑠1𝑛−2. 𝑠12 + v 𝑠2𝑛−2 . 𝑠22
= u 𝑠1𝑛−2. (𝑟1 𝑠1 + 𝑟2 ) + v 𝑠2𝑛−2.( 𝑟1 𝑠2 + 𝑟2 ) (using (1) and (2))
= 𝑟1( u 𝑠1𝑛−1 + v 𝑠2𝑛−1 ) + 𝑟2 (u 𝑠1𝑛−2 + v 𝑠2𝑛−2 )
= 𝑟1 𝑎𝑛−1 + 𝑟2 𝑎𝑛−2 (using the expressions for 𝑎𝑛−1 and 𝑎𝑛−2 from (3))
Hence (3) defines the same sequence as 𝑎𝑛 = 𝑟1 𝑎𝑛−1 + 𝑟2 𝑎𝑛−2 . Hence 𝑎𝑛 = u 𝑠1𝑛 + v 𝑠2𝑛 is the
solution to the given linear homogeneous recurrence relation.
1.5.2. Theorem. If the characteristic equation x²- 𝑟1x-𝑟2 = 0 of the linear homogeneous recurrence
relation 𝑎𝑛 = 𝑟1 𝑎𝑛−1 + 𝑟2 𝑎𝑛−2 has a single root s, then the explicit formula (solution) for the
recurrence relation is 𝑎𝑛 = u 𝑠 𝑛 + v n 𝑠 𝑛 , where u and v depend on the initial conditions.
1.5.3. Example. Find an explicit formula for the sequence defined by the recurrence relation
𝑎𝑛 = 𝑎𝑛−1 + 2 𝑎𝑛−2 , n ≥ 2
with the initial conditions
𝑎0 = 1 and 𝑎1 = 8
Solution. The recurrence relation
𝑎𝑛 = 𝑎𝑛−1 + 2 𝑎𝑛−2
is a linear homogeneous relation of order 2. Its characteristic equation is
𝑥2 - x – 2 = 0
which yields x = 2,-1.
Hence
𝑎𝑛 = u (2)ⁿ + v (−1)ⁿ (1)
and, we have
𝑎0 = u + v = 1 (given)
𝑎1 = 2u - v = 8 (given)
Solving for u and v, we have
u = 3, v = -2.
Hence
𝑎𝑛 = 3(2)ⁿ − 2(−1)ⁿ , n ≥ 0
is the explicit formula for the sequence.
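The explicit formula can be verified against the recurrence with a short Python check (names are illustrative):

```python
def a_recurrence(n):
    """a(0) = 1, a(1) = 8, a(n) = a(n-1) + 2 a(n-2)."""
    if n == 0:
        return 1
    prev, curr = 1, 8
    for _ in range(n - 1):
        prev, curr = curr, curr + 2 * prev
    return curr

def a_formula(n):
    return 3 * 2**n - 2 * (-1)**n

assert all(a_recurrence(n) == a_formula(n) for n in range(20))
print([a_formula(n) for n in range(5)])   # 1, 8, 10, 26, 46
```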
1.5.4. Exercise. Solve the recurrence relation
𝑑𝑛 = 2 𝑑𝑛−1 - 𝑑𝑛−2
with initial conditions 𝑑1 = 1.5 and 𝑑2 = 3.
1.5.5. Exercise. Find explicit formula for Fibonacci sequence.
1.6. Total Solution. The total solution of a linear difference equation with constant coefficients
𝑎𝑛 − 𝑟1 𝑎𝑛−1 − 𝑟2 𝑎𝑛−2 − . . . − 𝑟𝑘 𝑎𝑛−𝑘 = f(n),
where f(n) is a constant or a function of n, is the sum of two parts: the homogeneous solution, which
satisfies the difference equation when the right hand side of the equation is set to 0, and the particular
solution, which satisfies the difference equation with f(n) on the right hand side.
1.6.1. Particular Solution of a Difference Equation.
There is no general procedure to find particular solution of a given difference equation. So, the particular
solution is obtained by the method of inspection as discussed in the following cases:
= M(M(43))
= M(M(54)) since 43 ≤ 100
= M(M(65)) since 54 ≤ 100
⋮
= M(100) since 110 > 100
= M(M(111)) since 100 ≤ 100
= M(101) since 111 > 100
= 91 since 101 > 100
From this calculation, it is clear that
M(21) = M(99) = M(100) = M(101) = 91
Interestingly, the value of this function comes out to be 91 for all positive integers less than or equal to
101. Also M(n) is well defined for n > 101 because then it is equal to n - 10. Thus, McCarthy 91
function is well defined.
For example,
M(102) = 102 – 10 = 92
M(106) = 106 – 10 = 96
and so on.
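The McCarthy 91 function translates directly into Python; its nested recursion terminates for every positive n:

```python
def M(n):
    """McCarthy 91 function: M(n) = n - 10 for n > 100, else M(M(n + 11))."""
    if n > 100:
        return n - 10
    return M(M(n + 11))

assert all(M(n) == 91 for n in range(1, 102))   # 91 for every 1 <= n <= 101
print(M(102), M(106))                            # 92 96
```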
1.10. The Collatz Function. The function F : Z⁺ → Z defined by
F(n) = 1 if n = 1
F(n) = 1 + F(n/2) if n is even
F(n) = F(3n + 1) if n is odd and n > 1
is called the Collatz Function.
Collatz has conjectured that the function is well defined on the set of all positive integers. At present,
F(n) is computable for all integers n with 1 ≤ n < 10⁹.
For example
F(1) = 1
F(2) = 1 + F(1) = 1 + 1 = 2
F(3) = F(9 + 1) = F(10) = 1 + F(5) = 1 + F(16)
= 1 + (1 + F(8))
= 1 + (1 + (1 + F(4)))
= 1 + (1 + (1 + (1 + F(2))))
= 1 + (1 + (1 + (1 + 2)))
= 6
and so on .
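A direct Python transcription of F, following the piecewise definition literally:

```python
def F(n):
    """Collatz function: 1 if n == 1; 1 + F(n/2) if n even; F(3n+1) if n odd > 1."""
    if n == 1:
        return 1
    if n % 2 == 0:
        return 1 + F(n // 2)
    return F(3 * n + 1)

assert F(1) == 1 and F(2) == 2
assert F(16) == 5      # 16 -> 8 -> 4 -> 2 -> 1: four halving steps, plus F(1)
```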
1.12.1. Example. Let (a0, a1, . . ., an, . . .) be an arbitrary numeric function and let (1, 1, 1, 1, . . .) be a
second numeric function. Suppose c is the convolution of these two numeric functions. Find the generating function C(z).
Solution. We have
c=a*b
where
a = (a0, a1, ... an,...)
b = (b0, b1..., bn....) = (1,1, 1, 1, ...)
so that
cn = a0 bn + a1 bn−1 + a2 bn−2 + . . . + an−1 b1 + an b0
= a0 + a1 + . . . + an
since each bi = 1, and the generating function of c is
C(z) = A(z) B(z) = A(z) · 1/(1 − z)
In particular, if we take A(z) = 1/(1 − z), then
C(z) = 1/(1 − z)²
is the generating function of the numeric function (1, 2, 3, . . ., n + 1, . . .) because
c0 = a0 b0 = 1 · 1 = 1
c1 = a0 b1 + a1 b0 = 1 + 1 = 2
c2 = a0 b2 + a1 b1 + a2 b0 = 1 + 1 + 1 = 3
⋮
cn = 1 + 1 + . . . + 1 (n + 1 times) = n + 1
Thus, the generating function of the sequence an = n + 1 is 1/(1 − z)².
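Convolution of coefficient sequences is easy to check numerically; a Python sketch, truncating each numeric function to its first few coefficients:

```python
def convolve(a, b):
    """c[n] = a[0]b[n] + a[1]b[n-1] + ... + a[n]b[0], truncated to len(a) terms."""
    return [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(len(a))]

ones = [1] * 8                       # coefficients of 1/(1 - z)
print(convolve(ones, ones))          # [1, 2, 3, 4, 5, 6, 7, 8], i.e. 1/(1 - z)**2
```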
1.12.2. Exercise. Let c = a + b, where
an = 2ⁿ , bn = 4ⁿ , n ≥ 0.
Determine the generating function C(z).
1.12.3. Exercise. Show that 1/(1 − 4z)² is the generating function of the sequence
an = (n + 1) 4ⁿ.
1.12.4. Solution of Recurrence Relations by the Method of Generating Function.
In this method, the given recurrence relations are first converted in the form of a generating function
and then solved.
1.12.5. Example. Find an explicit formula for the sequence defined by the recurrence relation
an = 3 an−1 + 1 , n ≥ 2
with the initial conditions a0 = 0, a1 = 1.
Solution. We are given that
an = 3 an−1 + 1 , n ≥ 2 (1)
Multiplying both sides by zⁿ,
an zⁿ = 3 an−1 zⁿ + zⁿ , n ≥ 2 (2)
Summing (2) for all n ≥ 2, we obtain
∑_{n=2}^∞ an zⁿ = 3 ∑_{n=2}^∞ an−1 zⁿ + ∑_{n=2}^∞ zⁿ (3)
But
∑_{n=2}^∞ an zⁿ = a2 z² + a3 z³ + . . . = A(z) − a0 − a1 z = A(z) − z
∑_{n=2}^∞ an−1 zⁿ = z (a1 z + a2 z² + . . . + an zⁿ + . . .) = z (A(z) − a0) = z A(z)
∑_{n=2}^∞ zⁿ = z² (1 + z + z² + . . .) = z²/(1 − z)
Substituting these in (3),
A(z) − z = 3z A(z) + z²/(1 − z),
so that
(1 − 3z) A(z) = z + z²/(1 − z) = z/(1 − z)
and hence
A(z) = z/((1 − z)(1 − 3z)) = (1/2)/(1 − 3z) − (1/2)/(1 − z).
Hence
an = (1/2) 3ⁿ − 1/2 , n ≥ 0
= (3ⁿ − 1)/2 , n ≥ 0
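A quick Python check that the explicit formula agrees with the original recurrence:

```python
def a_from_recurrence(n):
    """a(0) = 0 and a(n) = 3 a(n-1) + 1."""
    a = 0
    for _ in range(n):
        a = 3 * a + 1
    return a

assert all(a_from_recurrence(n) == (3**n - 1) // 2 for n in range(15))
print([a_from_recurrence(n) for n in range(5)])   # 0, 1, 4, 13, 40
```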
1.13. Exercises.
1. Using the technique of backtracking, find the explicit formula for the recurrence relation
Sn = 2 Sn−1 , S0 = 1
2. Using the technique of backtracking, find the explicit formula for the recurrence relation
an = an−1 + n , a1 = 4
3. Solve the recurrence relation
an = 2 an−1 − an−2 , n ≥ 2
11. Using generating function methods, find the explicit formula for Fibonacci sequence.
Books Recommended:
1. Kenneth H. Rosen, Discrete Mathematics and Its Applications, Tata McGraw-Hill, Fourth Edition.
2. Seymour Lipschutz and Marc Lipson, Theory and Problems of Discrete Mathematics, Schaum
Outline Series, McGraw-Hill Book Co, New York.
3. John A. Dossey, Otto, Spence and Vanden K. Eynden, Discrete Mathematics, Pearson, Fifth
Edition.
4. J.P. Tremblay, R. Manohar, “Discrete mathematical structures with applications to computer
science”, Tata-McGraw Hill Education Pvt.Ltd.
5. J.E. Hopcroft and J.D. Ullman, Introduction to Automata Theory, Languages and Computation,
Narosa Publishing House.
6. M. K. Das, Discrete Mathematical Structures for Computer Scientists and Engineers, Narosa
Publishing House.
7. C. L. Liu and D.P.Mohapatra, Elements of Discrete Mathematics- A Computer Oriented Approach,
Tata McGraw-Hill, Fourth Edition.
2
Propositions and Lattices
Structure
2.1. Introduction.
2.2. Proposition.
2.3. Quantifiers.
2.4. Lattices.
2.1. Introduction. This chapter contains results related to propositions, truth tables, quantifiers and their
types, and lattices and their properties.
2.1.1. Objective. The objective of the study of these results is to understand the basic concepts and have
an idea of how to apply them in problem solving and in various situations in life involving logic.
2.2. Proposition. A proposition (or statement) is a declarative sentence which is true or false but not
both.
Examples. The following statements are all propositions:
(i) Paris is in France (ii) a < 6 (iii) It rained yesterday
However, the following statements are not propositions.
(i) What is your name? (ii) x2 = 9 (iii) Do your homework
The lower case letters such as p, q, r etc. are used to represent propositions.
For example, p : 2+2 = 4
q : India is in Asia.
2.2.1. Compound Propositions. Many propositions are composite, that is, composed of subpropositions
and various connective discussed subsequently. Such propositions are called compound propositions.
A proposition is said to be primitive if it cannot be broken down into simpler propositions, that is, if it
is not composite.
For example, “Roses are red and Violets are blue” is a compound proposition with subpropositions
“Roses are red” and “Violets are blue”.
On the other hand, the proposition “London is in Denmark” is primitive.
2.2.2. Basic Logical Operations. The three basic logical operations are
(i) Conjunction (ii) Disjunction (iii) Negation
which correspond, respectively, to “and”, “or” and “not”.
The conjunction of two propositions p and q is the proposition p and q, denoted by p ∧ q .
For example, Let p : He is rich
q : He is generous
Then, p∧q : He is rich and generous.
Thus, the conjunction of p and q, that is, p ∧ q, is true if he is both rich and generous. If even one of
the components is false, p ∧ q is false. Thus "the proposition p ∧ q is true if and only if the propositions p
and q are both true". The truth table of p ∧ q is given as
p q p∧q
T T T
T F F
F T F
F F F
The disjunction of two propositions p and q is the proposition p or q, denoted by p ∨ q.
The compound statement p ∨ q is true if at least one of p or q is true, and it is false when both p and q
are false.
The truth value of the compound proposition p ∨ q is given by
p q p∨q
T T T
T F T
F T T
F F F
For example, if p : 1 + 1 = 3
q : A decade is 10 years.
Then, p is false, q is true and so the disjunction p ∨ q is true.
Given any proposition p, another proposition, called the negation of p, can be formed by writing "It is
not the case that ......" or "It is false that ....." before p or, if possible, by inserting in p the word "not".
Symbolically,
~p
read “not p” denotes the negation of p.
If p is true then ~ p is false and if p is false, then ~ p is true.
2.2.3. Propositional Form. A “statement form” or “propositional form” is an expression made up of
statement variables (such as p, q and r) and logical connectives (such as ~, ∧ , ∨) that becomes a
statement when actual statements are substituted for the component statement variables.
The truth table for a given statement form displays the truth values that correspond to the different
combinations of truth value for the variables.
For example, Construct a truth table for the statement ( p ∨ q ) ∧ ∼ ( p ∧ q )
p q p∨q p∧q ~ (p ∧ q) (p ∨ q) ∧ ∼ (p ∧ q)
T T T T F F
T F T F T T
F T T F T T
F F F F T F
Similarly, the truth table for the statement form (p ∧ q) ∨ ∼ r is
p q r p∧q ~r (p ∧ q) ∨ ~ r
T T T T F T
T T F T T T
T F T F F F
T F F F T T
F T T F F F
F T F F T T
F F T F F F
F F F F T T
Remark. For 2 variables, 4 rows are necessary. For 3 variables, 8 rows are necessary. In general, for n
variables, 2ⁿ rows are necessary.
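The row generation described in the remark is mechanical, so truth tables can be built programmatically; a Python sketch using itertools.product (the function name is illustrative):

```python
from itertools import product

def truth_table(form, n_vars):
    """All 2**n_vars rows: each row holds the variable values plus the form's value."""
    return [values + (form(*values),)
            for values in product([True, False], repeat=n_vars)]

# The statement form (p ∨ q) ∧ ∼(p ∧ q) from the example above.
table = truth_table(lambda p, q: (p or q) and not (p and q), 2)
for row in table:
    print(" ".join("T" if v else "F" for v in row))
```

Running it reproduces the four-row table shown earlier, in the same T/F order.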
2.2.5. Logically Equivalent Propositions. Two different compound propositions (or statement forms or
propositional forms) are said to be logically equivalent if they have the same truth values no matter what
truth values their constituent propositions have.
OR
Two different compound propositions are said to be logically equivalent if they have identical truth
table. We use the symbol ‘≡’ for logical equivalence.
For example, Consider the statement form
(a) Dogs bark and cats mew. (b) Cats mew and Dogs bark
If we take, p : Dogs bark
q : Cats mew
Then (a) and (b) are logically expressed as
(a) p ∧ q (b) q ∧ p
If we construct the truth tables for p ∧ q and q ∧ p , we observe that p ∧ q and q ∧ p have the same truth
values. Thus, p ∧ q and q ∧ p are logically equivalent, that is, p ∧ q ≡ q ∧ p .
2.2.6. Exercise. Show that the negation of the negation of a statement is equivalent to the statement, that is, ~ (~p) ≡ p.
The logical equivalence ∼(∼ p) ≡ p is called Involution law.
2.2.7. Exercise. Show that the statement forms ∼(p ∧ q) and ∼p ∧ ∼ q are not logically equivalent.
2.2.8. Exercise. Show that ∼( p ∧ q ) and ∼ p ∨ ∼ q are logically equivalent and ∼( p ∨ q ) ≡ ∼ p ∧ ∼ q .
The above two logical equivalences are known as De Morgan's laws of logic.
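De Morgan's laws can be confirmed exhaustively, since there are only four truth assignments; a Python sketch:

```python
from itertools import product

# Check both laws over every assignment of truth values to p and q.
for p, q in product([True, False], repeat=2):
    assert (not (p and q)) == ((not p) or (not q))   # ∼(p ∧ q) ≡ ∼p ∨ ∼q
    assert (not (p or q)) == ((not p) and (not q))   # ∼(p ∨ q) ≡ ∼p ∧ ∼q
print("De Morgan's laws hold for every truth assignment")
```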
2.2.9. Tautology. A compound proposition which is always true regardless of truth values assigned to
its component propositions is called a Tautology.
2.2.10. Contradiction. A compound proposition which is always false regardless of truth values
assigned to its component propositions is called a contradiction.
2.2.11. Contingency. A compound proposition which is either true or false depending upon the truth
values of its component propositions is called a contingency.
2.2.12. Exercise. Show that p ∨ ∼ p is a Tautology.
2.2.13. Exercise. Show that p ∧ ∼ p is a contradiction.
Remark. If t and c denote tautology and contradiction, then we notice that
∼t≡c (1)
and ∼c≡t (2)
The logical equivalence (1), (2), (3) and (4) are known as complement laws.
2.2.14. Logical Equivalence involving tautologies and contradictions. If t is a tautology and c is a
contradiction, then p ∧ t ≡ p and p ∧ c ≡ c .
Similarly, p ∨ t ≡ t and p ∨ c ≡ p .
Example. Show that p ∧ ( q ∨ r ) ≡ ( p ∧ q ) ∨ ( p ∧ r ) (the distributive law).
Solution.
p q r p∧q p∧r q∨r p ∧ (q ∨ r ) ( p ∧ q) ∨ ( p ∧ r )
T T T T T T T T
T T F T F T T T
T F T F T T T T
T F F F F F F F
F T T F F T F F
F T F F F T F F
F F T F F T F F
F F F F F F F F
Hence p ∧ ( q ∨ r ) ≡ ( p ∧ q ) ∨ ( p ∧ r ) .
2.2.20. Conditional Proposition. If p and q are propositions, the compound proposition
If p then q or p implies q
is called a conditional proposition (or implication) and is denoted by p → q . The proposition p is called
the hypothesis or antecedent whereas the proposition q is called the conclusion or consequence.
The connective “If .....then” is denoted by the symbol ‘→’. It is false when p is true and q is false,
otherwise it is true. In particular, if p is false then p → q is true for any q.
A conditional statement that is true by virtue of the fact that its hypothesis is false is called true by
default or vacuously true. For example, consider the conditional statement "If 3 + 3 = 7 then I am the king of
Japan". The conditional statement is true simply because 3 + 3 = 7 is false.
Thus, truth values of the conditional proposition p → q is defined by the truth table
p q p→q
T T T
T F F
F T T
F F T
The last two rows (where p is false) are true by default or vacuously true.
The implication p → q can be read in any of the following ways:
p implies q
q if p
p only if q
p is a sufficient condition for q
q is a necessary condition for p
2.2.21. Exercise. Restate each proposition in the form of a conditional proposition.
(a) I will eat if I am hungry.
(b) 3 + 5 = 8 if it is snowing.
(c) When you sing, my ears hurt.
(d) Ram will be a good teacher if he teaches well.
(e) A necessary condition for England to win the world series is that they sign a right handed pitcher.
(f) A sufficient condition for Sohan to Visit Calcutta is that he goes to Disney land.
Thus the negation of “If p then q” is logically equivalent to “p and not q”.
2.2.25. Converse of a conditional statement. If p → q is an implication then the converse of p → q is
the implication q → p.
2.2.26. Contrapositive of a Conditional Statement. The contrapositive of a conditional statement “if p
then q” is “If ∼q then ∼p”.
In symbols, the contrapositive of p → q is ∼q → ∼p.
2.2.27. Lemma. A conditional statement is logically equivalent to its contrapositive.
Proof. The truth table of p → q and ∼ q → ∼ p are
p q p→q p q ∼p ∼q ∼q→∼p
T T T T T F F T
T F F T F F T F
F T T F T T F T
F F T F F T T T
q → p ≡ ∼ q ∨ p
Hence, p ↔ q ≡ ( p → q ) ∧ ( q → p ) ≡ (∼ p ∨ q ) ∧ (∼ q ∨ p )
2.2.32. Definition. Let p and q be statements. Then "p is a sufficient condition for q" means "if p then q",
and "p is a necessary condition for q" means "if not p then not q".
Remark. The order of operations of the connectives is ∼ , ∧ , ∨ , → , ↔ .
2.2.33. Argument. An argument is a sequence of statements. All statements except the final one are
called premises (or assumption or hypothesis).
The final statement is called the conclusion. The symbol ∴ is read "Therefore" and is generally placed
just before the conclusion. The logical form of an argument can be obtained from the contents of the given
argument.
For example, Consider the argument
If a man is a bachelor, he is unhappy. If a man is unhappy, he dies young. [Premises]
∴ Bachelors die young. [Conclusion]
The argument has abstracted form
If p then q,
If q then r
∴p→r,
where p : A man is a bachelor
q : He is unhappy
r : He dies young
2.2.34. Valid Argument. An argument is said to be valid if the conclusion is true whenever all its
premises are true.
2.2.35. Definition. An argument which is not valid is called a fallacy (invalid).
Method to test validity for an argument.
(i) Identify the premises and conclusion of argument.
(ii) Construct a truth table of all the premises and conclusion showing their truth values.
(iii) Find the rows (called critical rows) in which all the premises are true.
(iv) In each critical row determine whether the conclusion is also true.
(a) If in each critical row, the conclusion is also true then argument form is valid.
(b) If there is at least one critical row in which the conclusion is false, the argument form is a fallacy.
2.2.36. Example. Show that the argument
p,
p → q,
∴q
is valid.
Solution. The premises are p and p → q.
The conclusion is q. The truth table is
p q p p→q q
T T T T T
T F T F F
F T F T T
F F F T F
Premises Conclusion
In the first row all the premises are true and therefore it is a critical row. The conclusion in this critical
row is also true. Hence the argument is valid.
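The critical-row test can be automated. In this Python sketch (names illustrative), each premise and the conclusion is given as a Boolean function of the variables:

```python
from itertools import product

def is_valid(premises, conclusion, n_vars):
    """Valid iff the conclusion holds in every critical row
    (a row where all premises are true)."""
    for values in product([True, False], repeat=n_vars):
        if all(prem(*values) for prem in premises):      # critical row
            if not conclusion(*values):
                return False                             # fallacy
    return True

# Modus ponens: p, p -> q, therefore q   (p -> q written as (not p) or q)
assert is_valid([lambda p, q: p, lambda p, q: (not p) or q],
                lambda p, q: q, 2)
# Converse error: p -> q, q, therefore p  -- not valid
assert not is_valid([lambda p, q: (not p) or q, lambda p, q: q],
                    lambda p, q: p, 2)
```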
p q r p→q q→r p→r
T T T T T T ← Critical row
T T F T F F
T F T F T T
T F F F T F
F T T T T T ← Critical row
F T F T F T
F F T T T T ← Critical row
F F F T T T ← Critical row
The critical rows for the premises p → q, q → r are the 1st, 5th, 7th and 8th rows. The conclusion
p → r in these rows is always true. Hence the given argument is valid.
2.2.40. Example. Consider the argument
(i) If you invest in the stock market, then you will get rich.
(ii) If you get rich, then you will be happy.
Therefore, if you invest in stock market, then you will be happy.
Solution. By rule of inference, the argument is valid.
2.2.41. Exercise. Show that the following arguments are valid:
(a) p , ∴ p ∨ q (b) q , ∴ p ∨ q
These arguments are called disjunctive addition.
2.2.42. Exercise. Show that the following arguments are valid:
(a) p ∧ q , ∴ p (b) p ∧ q , ∴ q
These arguments are called conjunctive simplification. For example, (a) says that if both p and q are true
then, in particular, p is true, and (b) says that if p and q are true then, in particular, q is true. (This can be proved by a truth table.)
2.2.43. Exercise. The arguments
(a) p∨q (b) p∨q
∼q ∼p
∴ p ∴ q
are valid.
These arguments are called disjunctive syllogism.
2.2.44. Exercise. Prove that the following argument is valid:
p → ∼ q, r → q, r ∴ ∼ p .
2.2.45. Example. Where are my glasses? The premises are: If my glasses are on the kitchen table, then I saw them at breakfast. I was reading the newspaper in the living room or I was reading the newspaper in the kitchen. If I was reading the newspaper in the living room, then my glasses are on the coffee table. I did not see my glasses at breakfast. If I was reading my book in bed, then my glasses are on the bed table. If I was reading the newspaper in the kitchen, then my glasses are on the kitchen table.
Solution. Let
p : My glasses are on the kitchen table.
q : I saw my glasses at breakfast.
r : I was reading the newspaper in the living room.
s : I was reading the newspaper in the kitchen.
t : My glasses are on the coffee table.
u : I was reading my book in bed.
v : My glasses are on the bed table.
Then the given statements are
(a) p → q (b) r ∨ s (c) r → t (d) ∼ q (e) u → v (f) s → p
The following deductions can be made:
(i) p→q [by (a)] (ii) s→p [by (f)]
∼ q [by (d)] ∼p [by the conclusion of (i)]
∴ ∼p [by Modus Tollens] ∴ ∼s [by Modus Tollens]
(iii) r∨s [by (b)] (iv) r→t [by (c)]
∼s [by conclusion of (ii)] r [by conclusion of (iii)]
∴ r [by disjunctive syllogism] ∴ t [by Modus Ponens]
Hence t is true and the glasses are on the coffee table.
Remark. Note that (e) was not required to derive the conclusion. In mathematics, as in real life, we
frequently deduce a conclusion from just a part of the information available to us.
2.2.47. Exercise. Show that the following argument is invalid.
If taxes are lowered, then Income rises.
Income rises.
∴ Taxes are lowered.
2.2.48. Exercise. Test the validity of the following argument.
If two sides of a triangle are equal, then opposite angles are equal.
Two sides of triangle are not equal.
∴ the opposite angles are not equal.
2.2.49. Exercise. Consider the following argument for validity
(i) If I study then I will not fail in Mathematics.
2.3.2. Definition. A predicate is a sentence that contains a finite number of variables and becomes a
statement when specific values are substituted for the variables. The domain of a predicate variable is the
set of all values that may be substituted in place of the variable. Predicates are also known as
"propositional functions" or "open sentences".
2.3.3. Definition. Let P( x ) be a predicate and suppose x has domain D. Then the set { x ∈ D : P( x ) is true} is
called the truth set of P( x ) .
For example, Let P( x ) be “x is an integer less than 8” and suppose the domain of x is the set of all
positive integers. Then the truth set of P( x ) is {1, 2, 3, 4, 5, 6, 7} . Let P( x ) and Q( x ) be predicates with a
common domain D of x. The notation P( x ) ⇒ Q( x )
means that every element in the truth set of P( x ) is in the truth set of Q( x ) . Similarly, P( x ) ⇔ Q( x )
means that P( x ) and Q( x ) have identical truth sets.
For example, Let P( x ) be “x is a factor of 8”
Q( x ) be “x is a factor of 4”
R( x ) be “x < 5 and x ≠ 3”
and let the domain of x be the set of positive integers.
Then, Truth set of P( x ) = {1, 2, 4, 8}
Truth set of Q( x ) = {1, 2, 4}
Truth set of R( x ) = {1, 2, 4}
Since every element of the truth set of Q( x ) is in the truth set of P( x ) , we have Q( x ) ⇒ P( x ) .
The truth set of R( x ) is identical to the truth set of Q( x ) , so R( x ) ⇔ Q( x ) .
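Truth sets over a finite domain are just set comprehensions; a Python sketch of the factor example (the domain is truncated to 1..20 for illustration):

```python
domain = range(1, 21)                        # a finite slice of the positive integers

P = {x for x in domain if 8 % x == 0}        # "x is a factor of 8"
Q = {x for x in domain if 4 % x == 0}        # "x is a factor of 4"
R = {x for x in domain if x < 5 and x != 3}  # "x < 5 and x != 3"

assert P == {1, 2, 4, 8}
assert Q <= P        # Q(x) => P(x): truth-set inclusion
assert R == Q        # R(x) <=> Q(x): identical truth sets
```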
2.3.4. Definition. Words such as "all" or "some" that refer to quantities and tell for how many
elements a given predicate is true are called quantifiers.
By adding quantifiers, we can obtain statements from a predicate.
2.3.5. Definition. The symbol ‘∀’ denotes ‘for all’ and is called universal quantifier. Thus the sentence
“All human beings are mortal”.
can be written as
∀ x ∈ S, x is mortal.
where S denotes the set of all human beings.
2.3.6. Definition. Let P( x ) be a predicate and D be the domain of x. A statement of the form
"∀ x ∈ D, P( x )" is called a universal statement. A universal statement is true if and only if P( x ) is
true for every x in D, and false if and only if P( x ) is false for at least one value of x in D.
A value of x for which P( x ) is false is called a counterexample to the universal statement. For
example, let D = {1, 2, 3, 4} and consider the universal statement
∀ x ∈ D, x³ ≥ x.
This statement is true, since x³ ≥ x holds for each of 1, 2, 3 and 4. But the universal statement
∀ n ∈ N, n + 2 > 8 is not true, because if we take, say, n = 6, then
6 + 2 = 8 ≯ 8; thus n = 6 is a counterexample.
2.3.7. Definition. The symbol "∃" denotes "there exists" and is called the existential quantifier. For
example, the sentence
“There is a university in Kurukshetra”
can be expressed as
∃ a university u such that u is in Kurukshetra
or we can write
∃ u ∈U : u is in Kurukshetra, where U is the set of universities.
The words ‘such that’ are inserted just before the predicate.
2.3.8. Definition. Let P( x ) be a predicate and D a domain of x. A statement of the form "∃ x ∈ D
such that P( x )" is called an existential statement. It is defined to be true if and only if P( x ) is true for
at least one x in D. It is false if and only if P( x ) is false for all x in D.
For example, the existential statement
“ ∃ n ∈ N : n + 3 < 9 ” is true since the set { n : n + 3 < 9} = {1, 2, 3, 4, 5} ≠ φ
For example, Let A = {2, 3, 4, 5}. Then the existential statement “ ∃ n ∈ A : n 2 = n ” is false because there
is no element in A whose square is equal to itself.
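Over a finite domain, universal and existential statements correspond to Python's built-in `all` and `any`:

```python
D = {1, 2, 3, 4}
assert all(x**3 >= x for x in D)            # ∀ x ∈ D, x³ ≥ x is true

N_slice = range(1, 100)                     # finite slice of N, for illustration
assert any(n + 3 < 9 for n in N_slice)      # ∃ n : n + 3 < 9 is true

A = {2, 3, 4, 5}
assert not any(n**2 == n for n in A)        # ∃ n ∈ A : n² = n is false
print("quantifier checks passed")
```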
2.3.9. Definition. A statement of the form ∀ x, if P( x ) then Q( x ) is called universal conditional
statement. Consider the statement ∀ x ∈ R, if x > 2 then x 2 > 4 .
This can be written in either of the following forms: (i) If a real number is greater than 2, then its square is greater than
4.
(ii) Whenever a real number is greater than 2, its square is greater than 4.
On the other hand consider the statements
(i) All bytes have eight bits. (ii) No fire trucks are green.
These can be written as
Hence the negation of universal statement “(all are)” is logically equivalent to an existential statement
“some are not”. For example, The negation of
(i) "∀ positive integers n, we have n + 2 > 9" is "∃ a positive integer n such that n + 2 ≤ 9".
(ii) The negation of “all students are intelligent” is “Some students are not intelligent”.
2.3.12. Definition. The negation of a universal conditional statement is defined by
∼ (∀ x , P( x ) → Q( x )) ≡ ∃ x such that ∼ ( P( x ) → Q( x )) ≡ ∃ x such that P( x ) ∧ ∼ Q( x ) ......(1)
For example, the negation of "∀ people p, if p is blond then p has blue eyes" is "∃ a person p such
that p is blond and p does not have blue eyes".
For example, suppose there is a bowl and we have no ball in the bowl. Then the statement
“All the balls in the bowl are blue”
is true by default or vacuously true, because there is no ball in the bowl which is not blue. For example,
If P( x ) is a predicate and the domain of x is
D = { x1, x 2 ,...... x n } .
Then, the statement "∀ x ∈ D, P( x )" and P( x1 ) ∧ P( x2 ) ∧ ... ∧ P( xn ) are logically equivalent. For example,
let P( x ) be "x · x = x" and let D = {0, 1}. The statement "∀ x ∈ D, P( x )" then asserts
0 · 0 = 0 and 1 · 1 = 1
which can be written as P( 0 ) ∧ P(1) .
An argument of this form is called a syllogism. The first and second premises are called the major
premise and the minor premise, respectively.
For example, Consider the argument.
(i) If a number is even, then its square is even.
(ii) k is a particular number that is even. ∴ k² is even.
The major premise of this argument can be written as "∀ x, if x is even then x² is even".
Let P( x ) : x is even and Q( x ) : x² is even. If k is a particular even number, then the argument has the form
P(k), ∀ x, P( x ) → Q( x ), ∴ Q(k),
which is valid by (universal) Modus Ponens. Similarly, an argument of the form
∀ x, P( x ) → Q( x ), ∼ Q(z), ∴ ∼ P(z)
is valid by (universal) Modus Tollens.
2.3.16. Use of Diagrams for Validity. Consider (i) All human beings are mortal. (ii) Zeus is not mortal.
∴ Zeus is not a human being.
[Diagram: the "human beings" disk lies inside the "mortal" disk; the point representing Zeus lies outside the "mortal" disk.]
Since Zeus is outside the Mortal disk it is necessarily outside the human being disk. Hence the
conclusion that ‘Zeus is not human’ is true.
2.3.17. Use of Diagrams for Invalidity. (i) All human beings are mortal.
(ii) Sohan is mortal. ∴ Sohan is a human being.
[Diagrams: the "human beings" disk lies inside the "mortal" disk; the point representing Sohan lies inside the "mortal" disk, but it may be placed either inside or outside the "human beings" disk.]
The conclusion “Sohan is a human being” is true in the first case but not in the second. Hence the
argument is not valid.
2.4. Lattices
2.4.1. Definition. A relation R on a set X is said to be a partial order relation if it is reflexive,
antisymmetric and transitive. A set X with a partial order relation R is called a partially ordered set or poset and
is denoted by ( X , R) . Note that a relation R is said to be antisymmetric if aRb, bRa ⇒ a = b .
For example, (1) Let A be a collection of subsets of a set S; then the relation ⊆ of set inclusion is a
partial order relation on A :
(i) A ⊆ A (Reflexive) (ii) A ⊆ B and B ⊆ A ⇒ A = B (Antisymmetric) (iii) A ⊆ B and B ⊆ C ⇒ A ⊆ C (Transitive)
2. Let N be the set of natural numbers. The relation ≤ (less than or equal to) is a partial order relation on N:
(i) a ≤ a for all a ∈ N (ii) a ≤ b and b ≤ a then a = b for all a , b ∈ N (iii) a ≤ b and b ≤ c then a ≤ c for all a, b, c ∈ N
So, we get ( N, ≤) is a poset. But the relation < is not a partial order relation because this relation is not
reflexive.
3. Let N be the set of natural numbers. Then the relation of divisibility is a partial order relation on N:
(i) a / a ∀ a ∈ N (Reflexivity) (ii) a / b and b / a ⇒ a = b (Antisymmetry)
(iii) a / b, b / c ⇒ a / c (Transitivity)
2.4.2. Definition. Let ( A, R) be a poset, then elements ‘a’ and ‘b’ are said to be comparable if
aRb or bRa . For example, we know that the relation of divisibility is a partial order relation on the set
of natural numbers, but we see that 3 does not divide 7 and 7 does not divide 3.
Thus 3 and 7 are numbers in N which are not comparable (in such a case we write 3 ∥ 7).
2.4.3. Definition. If every pair of elements in a poset (A, R) is comparable, we say that A is linearly
ordered or totally ordered, or a chain. The partial order relation is then called a linear ordering or total
ordering relation. The number of elements in a chain is called the length of the chain.
For example, (1) The set N of natural numbers with relation ≤ will be linearly ordered or a chain.
(2) Let A be a set with two or more elements and let ‘⊆’ (set inclusion) be taken as relation on the subset
of A. If a and b are two distinct elements of A then {a} and {b} are subsets of A and are not comparable.
So, the power set of A, that is, P(A), is not a chain.
A subset of ‘A’ is called antichain, if no two distinct elements in the subset are related. If we consider
the subsets φ, {a}, A in the power set of A then the collection {φ, {a}, A} is a chain but {{a}, {b}} is an
antichain.
2.4.4. Definition. A relation R on a set A is called asymmetric if aRb and bRa do not both hold for any
a, b belonging to A.
2.4.5. Directed Graph representation of a relation from a finite set A to itself.
In this representation we draw a small circle for every element of the set A. These circles are called
vertices. We then draw arrow from vertex ai to vertex aj iff ( a i , a j ) ∈ R . These arrows are called edges.
The pictorial representation of R so obtained is called directed graph or digraph of R.
2.4.6. Example. Let A = {1, 2, 3} and R = {(1, 1), (1, 2), (1, 3), (3, 1) (2, 3), (2, 1)}. Draw the directed
graph of R.
Solution. (Figure: digraph of R with vertices 1, 2, 3 and an edge for each ordered pair in R, including a loop at 1.)
2.4.7. Example. A digraph on the vertex set A = {1, 2, 3, 4} is given in the figure; find the relation R it represents. (Figure: digraph with vertices 1, 2, 3, 4.)
Solution. Clearly the relation R is defined on the set A = {1, 2, 3, 4}. Also we know that (ai, aj) ∈ R iff
there is an edge from ai to aj. Thus R is obtained by listing the ordered pair (ai, aj) for every edge of the digraph.
2.4.8. Theorem. The digraph of a partial order has no cycle of length greater than one.
Proof. Suppose on the contrary that the digraph of the partial order ≤ on the set A contains a cycle of
length n (n ≥ 2). Then there are distinct elements a1, a 2 ,......, a n such that
a1 ≤ a 2 , a 2 ≤ a 3 ,......, a n −1 ≤ a n , a n ≤ a1 . By the transitivity of partial order used (n −1) times,
we have a1 ≤ an. Also an ≤ a1, so by antisymmetry a1 = an,
which is a contradiction to the supposition that a1, a2, ..., an are distinct. Hence the proof.
2.4.9. Hasse Diagram. Let A be a finite set. By the theorem proved above, the digraph of a partial order
on A has only cycles of length one. In fact, since a partial order is reflexive, every vertex in the digraph of
the partial order is contained in a cycle of length one. To simplify the matter we shall delete all such
cycles of the digraph. Thus the digraph shown in first figure can be represented by second figure.
Let V = { a , b, c}
E = { ( a , a ), ( b, b ) , ( c, c ), ( a , b ), ( b, c ), ( a , c )}
(Figures: (i) the digraph of the relation, with loops at a, b, c and edges a → b, b → c, a → c; (ii) the same digraph with the loops deleted.)
We also eliminate all edges that are implied by transitive property. In above we omit the edge from a to
c as a ≤ c follows from a ≤ b, b ≤ c.
(Figure: the digraph with the edge a → c also deleted, leaving a → b → c.)
We also draw the digraph of a partial order with all edges pointing upward, omit the arrows, and
replace the circles by dots. Thus the final form of the digraph becomes
(Figure: dots a, b, c drawn vertically, a at the bottom, joined by line segments.)
Thus “the diagram of a partial order obtained from its digraph by omitting cycles of length one, the
edges implied by transitivity and the arrows (after arranging all edges to point upward) is called the Hasse diagram
of the partial order of the poset.”
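The two reductions described above (delete loops, delete transitively implied edges) amount to computing the covering relation. A minimal sketch, using the poset {a, b, c} with a ≤ b ≤ c from the discussion (the function name is a choice made here):

```python
# Sketch: Hasse-diagram edges of a finite poset = pairs (a, b) with a < b
# and no element strictly between them (loops and transitive edges dropped).

def hasse_edges(elements, leq):
    edges = set()
    for a in elements:
        for b in elements:
            if a != b and leq(a, b):
                # keep a -> b only if no z lies strictly between a and b
                if not any(z != a and z != b and leq(a, z) and leq(z, b)
                           for z in elements):
                    edges.add((a, b))
    return edges

order = {('a','a'), ('b','b'), ('c','c'), ('a','b'), ('b','c'), ('a','c')}
edges = hasse_edges(['a', 'b', 'c'], lambda x, y: (x, y) in order)
print(edges)  # {('a', 'b'), ('b', 'c')}: the edge a -> c is implied by transitivity
```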
2.4.10. Definition. Let A be a partially ordered set w.r.t. the relation ‘≤’. An element a ∈ A is called a
maximal element of A iff for all b in A, either b ≤ a or b and a are non-comparable. An element a in A
is called greatest element of A iff for all b in A, b ≤ a.
An element a in A is called a minimal element of A iff for all b in A either a ≤ b or b and a are non-
comparable. An element a in A is called least element of A iff for all b in A, a ≤ b.
Remark. (1) A greatest element is certainly maximal but a maximal element need not be greatest
element. Similarly, a least element is minimal but a minimal element need not be least.
(2) A partially ordered set w.r.t. a relation can have at most one greatest element and at most one least
element, but it may have more than one maximal element and more than one minimal element.
For example, consider the poset A whose Hasse diagram is
(Figure: Hasse diagram with maximal elements a1, a2, a3 at the top and minimal elements b1, b2, b3 at the bottom.)
The elements a1, a2, a3 are maximal elements of A and the elements b1, b2 , b3 are minimal elements.
Observe that since there is no line between b2 and b3, we can conclude neither b3 ≤ b2 nor b2 ≤ b3,
showing that b2 and b3 are non-comparable.
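Maximal, minimal, greatest and least elements can all be found by direct search in a finite poset. A sketch on a hypothetical example (the set {2, 3, 4, 5, 6} under divisibility is chosen here for illustration, not taken from the text):

```python
# Sketch: maximal/minimal/greatest/least elements of {2,3,4,5,6}
# under divisibility.

def leq(a, b):        # divisibility order
    return b % a == 0

A = [2, 3, 4, 5, 6]

maximal  = [a for a in A if all(not leq(a, b) or b == a for b in A)]
minimal  = [a for a in A if all(not leq(b, a) or b == a for b in A)]
greatest = [a for a in A if all(leq(b, a) for b in A)]
least    = [a for a in A if all(leq(a, b) for b in A)]

print(maximal, minimal, greatest, least)
# [4, 5, 6] [2, 3, 5] [] [] : several maximal and minimal elements,
# but no greatest or least element
```

Note that 5 is both maximal and minimal, since it is comparable with no other element of the set.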
2.4.11 Lattice. A lattice is a partially ordered set (L, ≤) in which every subset {a, b} consisting of two
elements has a least upper bound and a greatest lower bound. We denote l.u.b. of {a, b} by
a ∨ b and call it join or sum of a and b. Similarly, we denote greatest lower bound of {a, b} by a ∧ b and
call it meet or product of a and b. Other symbols used are
l.u.b. : ⊕, +, ∪
g.l.b. : ∗, ·, ∩
Lattice is a mathematical structure with binary operation, join and meet. A totally ordered set
is obviously a lattice but not all partially ordered sets are lattices. For example, let A be any
set and ρ(A) be its power set. The partially ordered set (ρ(A), ⊆) is a lattice in which the meet
and join are the same as the operations ∩ (intersection) and ∪ (union) respectively. If A has a
single element, say ‘a’, then ρ(A) = {φ, A}, the least upper bound of ρ(A) is A = {a} and the
greatest lower bound of ρ(A) is φ. The Hasse diagram of (ρ(A), ⊆) is a chain containing the two elements φ and A. If A = {a, b}, the Hasse diagram of (ρ(A), ⊆) is
(Figure: {a, b} at the top, {a} and {b} in the middle, φ at the bottom.)
The l.u.b. and g.l.b. exist for every two subsets and hence ρ(A) is a lattice.
2.4.12. Example. Consider the poset (N, ≤) where ≤ is a relation of divisibility. Then N is a lattice in
which Join of a and b = a ∨ b = LCM(a, b)
Meet of a and b = a ∧ b = GCD ( a , b ) for a , b ∈ N .
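The join and meet of Example 2.4.12 can be computed directly; a small sketch:

```python
import math

# Sketch: join (lcm) and meet (gcd) in the divisibility lattice (N, |),
# as in Example 2.4.12.

def join(a, b):   # least upper bound = lcm(a, b)
    return a * b // math.gcd(a, b)

def meet(a, b):   # greatest lower bound = gcd(a, b)
    return math.gcd(a, b)

print(join(4, 6), meet(4, 6))  # 12 2
```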
2.4.13. Example. Let n be a positive integer and let Dn be the set of all positive divisors of n. Then Dn is a
lattice under the relation of divisibility. The Hasse diagrams of D20 and D30 are
(Figures: Hasse diagram of D20 with 20 at the top, then 4 and 10, then 2 and 5, and 1 at the bottom; Hasse diagram of D30 with 30 at the top, then 6, 10, 15, then 2, 3, 5, and 1 at the bottom.)
D20 = {1, 2, 4, 5, 10, 20} D30 = {1, 2, 3, 5, 6, 10, 15, 30}
2.4.14. Theorem. If (L1, ≤) and (L2 , ≤) are lattices, then (L, ≤) is lattice, where L = L1 × L 2 and the
partial order ≤ of L is product partial order.
Proof. We denote the join and meet in L1 by ∨1 and ∧1 and the join and meet in L2 by ∨2 and ∧2
respectively. We know that Cartesian product of two posets is a poset. Therefore L = L1 × L2 is a poset.
Thus all we need to show is that if (a1, b1) and (a2, b2) ∈ L then (a1, b1) ∨ (a2, b2) and (a1, b1) ∧ (a2, b2)
exist in L. Further we know that (a1, b1) ∨ (a2, b2) = (a1 ∨1 a2, b1 ∨2 b2) and
(a1, b1) ∧ (a2, b2) = (a1 ∧1 a2, b1 ∧2 b2),
and the components on the right exist because joins and meets of every pair of elements exist in the lattices (L1, ≤) and (L2, ≤). Hence (L, ≤) is a lattice.
2.4.15. Example. Let (A, R) and (B, R′) be posets. Then (A × B, R′′) is a poset with partial order R′′
defined by (a, b) R′′ (a′, b′) if aRa′ in A and bR′b′ in B.
For antisymmetry, suppose (a, b) R′′ (a′, b′) and (a′, b′) R′′ (a, b). Then
aRa′, a′Ra in A ......(1)
bR′b′, b′R′b in B ......(2)
Since (A, R) and (B, R′) are posets, (1) and (2) imply a = a′ and b = b′.
The partial order R′′ defined on the Cartesian product A × B as above is called the product partial order.
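The product partial order and the componentwise join and meet of Theorem 2.4.14 can be illustrated concretely. A sketch, taking both factors to be the divisibility lattice (this particular choice is made here for illustration):

```python
import math

# Sketch: product partial order and componentwise join/meet on L1 x L2,
# with both factors the divisibility lattice (N, |).

def leq(a, b):
    return b % a == 0

def prod_leq(p, q):      # (a, b) R'' (a', b')  iff  a | a' and b | b'
    return leq(p[0], q[0]) and leq(p[1], q[1])

def prod_join(p, q):     # componentwise lcm
    return (p[0] * q[0] // math.gcd(p[0], q[0]),
            p[1] * q[1] // math.gcd(p[1], q[1]))

def prod_meet(p, q):     # componentwise gcd
    return (math.gcd(p[0], q[0]), math.gcd(p[1], q[1]))

print(prod_leq((2, 3), (4, 9)))   # True
print(prod_join((2, 3), (4, 9)))  # (4, 9)
print(prod_meet((6, 4), (4, 6)))  # (2, 2)
```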
2.4.16. Example. Let L1 and L2 be lattices whose Hasse diagrams are shown in the figure. (Figures: Hasse diagrams of L1, with bottom 01 and top I1, and of L2, with bottom 02 and top I2, followed by the Hasse diagram of the product lattice L = L1 × L2, whose elements are the ordered pairs, with (01, 02) at the bottom and (I1, I2) at the top.)
2.4.17. Properties of lattices. Let (L, ≤) be a lattice and let a, b, c ∈ L. Then from the definition of ∨
(join) and ∧ (meet), we have
(i) a ≤ a ∨ b and b ≤ a ∨ b, where a ∨ b is the least upper bound of a and b.
(ii) a ∧ b ≤ a and a ∧ b ≤ b, where a ∧ b is the greatest lower bound of a and b.
Further, a ≤ b iff a ∧ b = a. For if a ∧ b = a then, since a ∧ b ≤ b, we get a ≤ b.
Conversely, if a ≤ b then, since a ≤ a, ‘a’ is a lower bound of a and b, and so by the definition of greatest
lower bound, we have a ≤ a ∧ b; also a ∧ b ≤ a always.
Hence a ∧ b = a.
L3. (Associativity) (a ∨ b) ∨ c = a ∨ (b ∨ c).
Since a ≤ a ∨ (b ∨ c) ......(1)
and b ∨ c ≤ a ∨ (b ∨ c) ......(2)
Also, b ≤ b ∨ c and c ≤ b ∨ c, so by (2),
b ≤ a ∨ (b ∨ c) ......(3)
and c ≤ a ∨ (b ∨ c) ......(4)
Now by (1) and (3), a ∨ (b ∨ c) is an upper bound of a and b and hence by definition of l.u.b. we have
a ∨ b ≤ a ∨ (b ∨ c) ......(5)
Therefore, by (4) and (5), (a ∨ b) ∨ c ≤ a ∨ (b ∨ c) ......(6)
Similarly a ∨ (b ∨ c) ≤ (a ∨ b) ∨ c ......(7)
By (6) and (7), (a ∨ b) ∨ c = a ∨ (b ∨ c).
L4. (Absorption) (i) Since a ∧ b ≤ a and a ≤ a, it follows that a is an upper bound of a ∧ b and a. Therefore by the
definition of l.u.b., a ∨ (a ∧ b) ≤ a ......(8)
Also a ≤ a ∨ (a ∧ b), whence a ∨ (a ∧ b) = a.
Thus, we can express the l.u.b. of {a1, a2, ..., an} as a1 ∨ a2 ∨ ...... ∨ an and the g.l.b. of {a1, a2, ..., an} as
a1 ∧ a2 ∧ ..... ∧ an.
2.4.20. Theorem. Let (L, ≤) be a lattice, then for any a, b, c ∈ L the following property holds
(1) If a ≤ b then (i) a ∨ c ≤ b ∨ c (ii) a ∧ c ≤ b ∧ c. This property is called “Isotonicity”.
(2) a ≤ c and b ≤ c iff a ∨ b ≤ c
(3) c ≤ a and c ≤ b iff c ≤ a ∧ b
(4) If a ≤ b and c ≤ d then (i) a ∨ c ≤ b ∨ d (ii) a ∧ c ≤ b ∧ d
Proof. (1) (i) We know that a ∨ b = b iff a ≤ b. Therefore to show that a ∨ c ≤ b ∨ c, we shall show that
( a ∨ c ) ∨ ( b ∨ c ) = b ∨ c . We know that
( a ∨ c ) ∨ ( b ∨ c ) = [( a ∨ c ) ∨ b ] ∨ c [Associativity]
= a ∨ c ∨ b ∨ c = a ∨ (b ∨ c) ∨ c [Commutativity]
= (a ∨ b ) ∨ (c ∨ c) [Associativity]
=b∨c [ a ∨ b = b, c ∨ c = c]
This proves (i). The part (ii) of (1) can be proved similarly.
(2) Suppose a ≤ c and b ≤ c. Then (1)(i) implies a ∨ b ≤ c ∨ b.
But b ≤ c ⇔ b ∨ c = c
⇔ c ∨ b = c [Commutativity]
Hence a ∨ b ≤ c. Conversely, if a ∨ b ≤ c then, since a ≤ a ∨ b and b ≤ a ∨ b, we get a ≤ c and b ≤ c.
(3) follows dually, using (1)(ii).
(4) (i) If a ≤ b then by (1)(i), a ∨ c ≤ b ∨ c; if c ≤ d then c ∨ b ≤ d ∨ b, that is, b ∨ c ≤ b ∨ d.
Hence by transitivity a ∨ c ≤ b ∨ d.
(ii) We note that (1)(ii) implies that
if a ≤ b then a ∧ c ≤ b ∧ c = c ∧ b
if c ≤ d then c ∧ b ≤ d ∧ b = b ∧ d
Hence by transitivity a ∧ c ≤ b ∧ d, which proves (ii).
2.4.22. Modular Inequality. Let (L, ≤) be a lattice. If a, b, c ∈ L then a ≤ c iff a ∨ (b ∧ c) ≤ (a ∨ b) ∧ c.
Proof. We know that a ≤ c iff a ∨ c = c ......(1)
Now, by the distributive inequality, a ∨ (b ∧ c) ≤ (a ∨ b) ∧ (a ∨ c) = (a ∨ b) ∧ c, using (1).
Conversely, if a ∨ (b ∧ c) ≤ (a ∨ b) ∧ c, then a ≤ a ∨ (b ∧ c) ≤ (a ∨ b) ∧ c ≤ c, so a ≤ c.
2.4.23. Distributive Inequalities. For any a, b, c in a lattice L,
(i) a ∨ (b ∧ c) ≤ (a ∨ b) ∧ (a ∨ c) (ii) (a ∧ b) ∨ (a ∧ c) ≤ a ∧ (b ∨ c)
2.4.24. Second definition of Lattice as an algebraic system.
We have already defined that a lattice is a partially ordered set in which every subset consisting of two
elements has a least upper bound and a greatest lower bound. We now present another definition of
lattice as an algebraic system.
2.4.25. Definition. Let L be a non-empty set with two binary operations, called join and meet, denoted
respectively by ∨ and ∧. Then L is called a lattice if the following axioms hold where a, b, c are
elements in L.
(1) Commutative law. (i) a ∨ b = b ∨ a and (ii) a ∧ b = b ∧ a
(2) Associative law. (i) (a ∨ b) ∨ c = a ∨ (b ∨ c) and (ii) (a ∧ b) ∧ c = a ∧ (b ∧ c)
(3) Absorption law. (i) a ∨ (a ∧ b) = a and (ii) a ∧ (a ∨ b) = a
Remarks (1). We sometimes denote the lattice by (L, ∨, ∧) when we want to show which operations are
involved.
(2) Idempotent law can be derived using absorption law as follows.
Consider a ∨ a = a ∨ (a ∧ (a ∨ b)) [Absorption 3(ii) law]
=a [By 3(i)]
Hence a∨a =a
2.4.27. Theorem. Define a relation ≤ on a lattice L by a ≤ b iff a ∨ b = b. Then ≤ is a partial order relation, that is,
every lattice is a partially ordered set.
Proof. (i) Reflexivity. By the idempotent law, we know that
a ∨ a = a ∀ a ∈ L
⇒ a ≤ a ∀ a ∈ L
(ii) Antisymmetry. Let a ≤ b and b ≤ a ⇒ a ∨ b = b and b ∨ a = a. By commutativity, a ∨ b = b ∨ a.
So, a = b
(iii) Transitivity. Let a ≤ b and b ≤ c
⇒ a ∨ b = b and b ∨ c = c ......(1)
Now a ∨ c = a ∨ (b ∨ c) [By (1)]
= (a ∨ b) ∨ c [Associativity]
= b ∨ c [By (1)]
= c [By (1)]
⇒ a ≤ c. Hence ≤ is a partial order relation.
2.4.28. Theorem. A lattice (L, ∨, ∧) defined as an algebraic system is a lattice as a poset under the relation a ≤ b iff a ∨ b = b.
Proof. We have already proved that this is a partial order relation. Now, all we require is that l.u.b. and g.l.b. of
every two elements of L exist. To do so, we shall prove that the l.u.b. of a and b is
a ∨ b and the g.l.b. of a and b is a ∧ b. By the absorption law, we have
b ∧ (a ∨ b) = b and a ∧ (a ∨ b) = a
⇒ a ≤ a ∨ b and b ≤ a ∨ b, so a ∨ b is an upper bound of a and b. Let c be any upper bound of a and b, that is, a ≤ c and b ≤ c
⇒ a ∨ c = c and b ∨ c = c ......(1)
Then (a ∨ b) ∨ c = a ∨ (b ∨ c) [Associativity]
= a ∨ c = c [By (1)]
⇒ a ∨ b ≤ c
Hence a ∨ b is the least upper bound of a and b. Similarly, we can show that a ∧ b is the greatest lower
bound of a and b.
2.4.29. Sublattice. Let L be a lattice. A non-empty subset S of L is said to be a sublattice of L iff S is
closed under the operations ∨ and ∧ of L, that is, a ∨ b ∈ S and a ∧ b ∈ S ∀ a, b ∈ S.
For example, consider the lattice N under divisibility, where a ∨ b = lcm(a, b) and a ∧ b = gcd(a, b). For any positive integer n, the set Dn of positive divisors of n is closed under lcm and gcd
⇒ Dn is a sublattice of N.
2.4.31. Lattice Isomorphism. Let ( L1, ∨1, ∧1 ) and ( L2 , ∨ 2 , ∧2 ) be two lattices. Then a mapping
f : L1 → L2 is called a lattice homomorphism if for any a , b ∈ L1 .
f ( a ∨1 b ) = f ( a ) ∨ 2 f ( b ) and f ( a ∧1 b ) = f ( a ) ∧2 f ( b ) ,
(i) A lattice homomorphism preserves the order relation: if a ≤1 b then a ∨1 b = b,
and so f(b) = f(a ∨1 b) = f(a) ∨2 f(b) ⇒ f(a) ≤2 f(b).
Thus a ≤1 b implies f(a) ≤2 f(b); for an isomorphism the converse also holds, so a ≤1 b iff f(a) ≤2 f(b).
(ii) If a lattice homomorphism is one-one and onto, then it is called lattice isomorphism. If there exists
an isomorphism between two lattices, then the lattices are called isomorphic.
(iii) Since lattice isomorphism preserves order relation, therefore isomorphic lattices can be represented
by the same diagram in which vertices are replaced by corresponding images.
2.4.32. Example. Let A = {a, b}, then the lattice (P(A), ⊆) is isomorphic to the lattice D6 under the
relation of divisibility.
Solution. P( A ) = {φ, { a } , { b} , { a , b} } and D6 = {1, 2, 3,6}
(Figures: Hasse diagram of P(A) with {a, b} at the top, {a} and {b} in the middle, φ at the bottom; Hasse diagram of D6 with 6 at the top, 2 and 3 in the middle, 1 at the bottom.)
We define mapping f : P( A ) → D6 by f (φ) = 1, f ({ a } ) = 2, f ({ b} ) = 3, f ({ a , b} ) = 6
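That this f is a lattice isomorphism can be verified exhaustively: under f, union must correspond to lcm and intersection to gcd. A sketch (the dictionary encoding of f is a choice made here):

```python
import math

# Sketch: check that f(phi)=1, f({a})=2, f({b})=3, f({a,b})=6 maps
# union to lcm and intersection to gcd (Example 2.4.32).

f = {frozenset(): 1, frozenset('a'): 2,
     frozenset('b'): 3, frozenset('ab'): 6}

def lcm(x, y):
    return x * y // math.gcd(x, y)

ok = all(f[s | t] == lcm(f[s], f[t]) and f[s & t] == math.gcd(f[s], f[t])
         for s in f for t in f)
print(ok)  # True
```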
2.4.33. The lattice Bn. Let Bn be the set of all sequences of n bits, with join and meet defined componentwise. For example, in B2 we have 10 ∨ 01 = 11 and 10 ∧ 01 = 00, so the Hasse diagram of B2 has 11 at the top, 10 and 01 in the middle, and 00 at the bottom.
Clearly, under these operations Bn becomes a lattice and also Bn contains 2^n elements.
For example, B3 = {000, 001, 010, 100, 011, 101, 110, 111}
Its Hasse diagram is given as
(Figure: Hasse diagram of B3 with 111 at the top; 110, 101, 011 below it; then 100, 010, 001; and 000 at the bottom.)
From the diagram, it is clear that the l.u.b. and g.l.b. of every two elements of B3 exist and hence B3 is a
lattice.
For example, lub{010, 001} = 011, glb{010, 001} = 000.
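The componentwise join and meet on bit sequences (and the bit-flipping complement used later, in 2.4.38) can be sketched directly:

```python
# Sketch: join, meet and complement in the lattice Bn of n-bit sequences,
# computed componentwise on strings of '0'/'1'.

def join(x, y):
    return ''.join('1' if a == '1' or b == '1' else '0' for a, b in zip(x, y))

def meet(x, y):
    return ''.join('1' if a == '1' and b == '1' else '0' for a, b in zip(x, y))

def complement(x):
    return ''.join('1' if a == '0' else '0' for a in x)

print(join('010', '001'))   # 011 = lub{010, 001}
print(meet('010', '001'))   # 000 = glb{010, 001}
print(complement('101'))    # 010
```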
2.4.34. Bounded, Complemented and Distributive Lattices. We recall that an element x of a lattice L
is called the greatest element if a ≤ x ∀ a ∈ L. Similarly, an element y of lattice L is called the least element if
y ≤ a ∀ a ∈ L.
Further, let L be a lattice and S = {a1, a2, ..., an} be a finite subset of L. Then we shall denote the l.u.b. and
g.l.b. of S as follows: l.u.b. of S = a1 ∨ a2 ∨ ...... ∨ an and g.l.b. of S = a1 ∧ a2 ∧ ...... ∧ an.
Also, we shall denote the greatest and least elements by I and 0 respectively.
2.4.35. Bounded Lattice. A lattice L is said to be bounded if L has both a greatest element and a least
element. If L = {a1, a2, ..., an} is finite, then a1 ∨ a2 ∨ ...... ∨ an = I and a1 ∧ a2 ∧ ...... ∧ an = 0.
2.4.36. Example. (1) The lattice of all positive integers under the partial order relation of divisibility is
not a bounded lattice, since it has a least element, namely 1, but no greatest element.
(2) Let A be a non-empty set then the lattice P(A) under the partial order relation of inclusion is a
bounded lattice since its greatest element is A and the least element is φ.
Remark. If (L, ≤) is a bounded lattice, then clearly 0 ≤ a ≤ I ∀ a ∈ L.
2.4.37. Complement. Let L be a bounded lattice with bounds 0 and I. An element b ∈ L is called a complement of an element a ∈ L if a ∨ b = I and a ∧ b = 0.
Note that I is the only complement of 0. For, let c ≠ I be a complement of 0; then c ∨ 0 = I.
But c ∨ 0 = c, so we have
c = I, a contradiction.
Similarly, 0 is the only complement of I.
2.4.38. Complemented Lattice. A lattice L is called complemented if it is bounded and if every element
of L has at least one complement. For example,
(1) The power set P( A ) of any set is a bounded lattice under inclusion relation where join and meet are
∪ and ∩ respectively. Its bounds are φ and A. The lattice ( P( A ) , ⊆ ) is complemented in which the
complement of any subset B of A is A − B.
(2) The lattice (B3, ≤) is a bounded lattice and its bounds are 000 and 111.
Further, the complement of an element of Bn can be obtained by interchanging
1 and 0 in the sequence.
(Figure: Hasse diagram of B3 with 111 at the top; 110, 101, 011; then 100, 010, 001; and 000 at the bottom.)
For example, the complement of 101 is 010, since
l.u.b.(101, 010) = 111 = 101 ∨ 010
and g.l.b.(101, 010) = 000 = 101 ∧ 010
Remark. It should be noted that in a bounded lattice complements need not exist and need not be unique
as well. For example, in the first lattice shown below, a and c are both
complements of b; and in the chain shown next to it, the elements a, b, c have no
complements.
So, these two lattices are bounded but one is complemented and the other is not.
(Figures: first, a bounded lattice with I at the top, 0 at the bottom, and elements a, b, c arranged so that both a and c are complements of b; second, the chain 0 ≤ a ≤ b ≤ c ≤ I.)
2.4.39. Distributive Lattice. A lattice L is called distributive if for any a, b, c ∈ L,
(i) a ∨ (b ∧ c) = (a ∨ b) ∧ (a ∨ c) (ii) a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c)
For example, the power set P(A) of any set A is a distributive lattice. We know that the join and meet
operations in P(A) are union and intersection respectively. Also, we know that union and intersection are
distributive over each other, that is,
R ∪ (S ∩ T) = (R ∪ S) ∩ (R ∪ T)
and R ∩ (S ∪ T) = (R ∩ S) ∪ (R ∩ T)
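For a small set these two distributive laws can be checked exhaustively over all subsets; a sketch:

```python
from itertools import combinations

# Sketch: exhaustive check that union distributes over intersection and
# vice versa in P({1, 2, 3}), so (P(A), subset) is a distributive lattice.

A = {1, 2, 3}
subsets = [frozenset(c) for r in range(len(A) + 1)
           for c in combinations(A, r)]

distributive = all(
    (r | (s & t)) == ((r | s) & (r | t)) and
    (r & (s | t)) == ((r & s) | (r & t))
    for r in subsets for s in subsets for t in subsets)
print(distributive)  # True
```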
2.4.40. Theorem. (Without proof). A lattice L is non-distributive if and only if it contains a sublattice
isomorphic to one of the two five-element lattices shown below.
(Figures: the first lattice has I at the top, three incomparable elements a, b, c in the middle and 0 at the bottom; the second has I at the top, 0 at the bottom, with a below c on one side and b alone on the other.)
Theorem. In a bounded distributive lattice, a complement, if it exists, is unique.
Proof. Let b and c both be complements of a, so that a ∨ b = a ∨ c = I and a ∧ b = a ∧ c = 0. Then
b = b ∨ 0 = b ∨ (a ∧ c) = (b ∨ a) ∧ (b ∨ c)
= (a ∨ b) ∧ (b ∨ c) = I ∧ (b ∨ c) = b ∨ c
Similarly, c = c ∨ 0 = c ∨ (a ∧ b) = (c ∨ a) ∧ (c ∨ b)
= (a ∨ c) ∧ (b ∨ c) = I ∧ (b ∨ c) = b ∨ c
Hence b = c.
2.4.43. Join Irreducible elements and atoms.
Definition. Let L be a lattice. An element a ∈ L is called join-irreducible if it cannot be expressed
as the join of two distinct elements of L other than a.
Remark. (i) Prime numbers under multiplication have this property , that is, if p = ab then p = a or p = b
where p is prime.
(ii) Clearly, 0 is join-irreducible.
(iii) If a has at least two immediate predecessors, say b1 and b2 as shown in the figure, then a = b1 ∨ b2,
so a is not join-irreducible. (Figure: a at the top with b1 and b2 immediately below it.)
(iv) On the other hand, if a has a unique immediate predecessor c, then a ≠ b1 ∨ b2 for any other
elements b1 and b2, because c would lie between b1, b2 and a. (Figure: a at the top, c immediately below it, and b1, b2 below c.)
(v) By above two remarks, it is clear that a ≠ 0 is join irreducible if and only if ‘a’ has a unique
immediate predecessor.
2.4.44. Definition. Those elements, which immediately succeed 0, are called atoms. For example,
Elements a, b, c are atoms in the adjoining figure (i) .
From the above discussion, it follows that atoms are join-irreducible but converse may not be true. For
example, c is a join-irreducible element in the adjoining lattice (ii) but c is not an atom.
(Figures: (i) a lattice whose atoms, the elements immediately above 0, are a, b, c; (ii) a lattice in which c is join-irreducible but is not an atom.)
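Atoms and join-irreducible elements of a small lattice can be computed by brute force. A sketch in D20 under divisibility (least element 1, join = lcm), which also exhibits the one-way implication noted above:

```python
import math

# Sketch: atoms vs. join-irreducible elements in D20 = {1,2,4,5,10,20}
# under divisibility (0-element is 1, join is lcm).

D20 = [1, 2, 4, 5, 10, 20]

def lcm(x, y):
    return x * y // math.gcd(x, y)

def join_irreducible(a):
    # a != 1 is join-irreducible if it is not the join (lcm) of two
    # distinct elements of D20 other than a
    return a != 1 and not any(lcm(b, c) == a
                              for b in D20 for c in D20
                              if b != a and c != a and b != c)

atoms = [a for a in D20
         if a != 1 and not any(1 < b < a and a % b == 0 for b in D20)]
ji = [a for a in D20 if join_irreducible(a)]
print(atoms)  # [2, 5]
print(ji)     # [2, 4, 5]: 4 is join-irreducible but not an atom
```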
Remark. If an element a in a finite lattice L is not join-irreducible, then we can write a = b1 ∨ b2. Then
we can write b1 and b2 as the join of other elements if they are not join-irreducible, and so on.
Since L is finite, we finally have
a = d1 ∨ d2 ∨ ...... ∨ dn
where the d’s are join-irreducible. If di ≤ dj then di ∨ dj = dj;
so we can delete di from the expression. In other words,
we can assume that the d’s are irredundant, that is, no d precedes
any other d. Hence a can be expressed as a join of irredundant join-irreducible elements. However, we
give an example to show that such an expression need not be unique. For example, consider the
lattice given in the figure (I at the top, atoms a, b, c, and 0 at the bottom); we see that
I = a∨b and I = b∨c
2.4.45. Theorem. Let L be a finite distributive lattice. Then every a in L can be written uniquely (except
for order) as the join of irredundant join-irreducible elements.
Proof. Since L is finite, we can write a as the join of irredundant join irreducible elements as discussed
above. Thus, we need to prove uniqueness. Suppose
a = b1 ∨ b2 ∨ ...... ∨ br = c1 ∨ c2 ∨ ...... ∨ cs
where b’s are irredundant and join-irreducible and c’s are also irredundant and join irreducible. For any
given i, we have
bi ≤ b1 ∨ b2 ∨ ...... ∨ br = c1 ∨ c2 ∨ ..... ∨ cs
Hence bi = bi ∧ ( c1 ∨ c2 ∨ ...... ∨ cs )
= ( bi ∧ c1 ) ∨ ( bi ∧ c2 ) ∨ ...... ∨ ( bi ∧ cs )
Since bi is join-irreducible, there exists a j such that bi = bi ∧ cj and so bi ≤ cj. By a similar argument, for
cj there exists a bk such that cj ≤ bk. Therefore bi ≤ cj ≤ bk
which gives bi = c j = bk since the b’s are irredundant. Accordingly the b’s and c’s may be paired off.
Thus the representation for a is unique except for order.
2.4.46. Theorem. Let L be a complemented lattice with unique complements. Then the join irreducible
elements of L, other than 0, are its atoms.
Proof. Suppose a is join-irreducible and is not an atom. Then a has a unique
immediate predecessor b ≠ 0. Let b′ be the complement of b. Since b ≠ 0 we have
b′ ≠ I. If a ≤ b′ then b ≤ a ≤ b′ and so b ∨ b′ = b′, which is impossible since
b ∨ b′ = I. Thus a does not precede b′, and so a ∧ b′ must strictly precede a. Since b
is the unique immediate predecessor of a, we also have that a ∧ b′ precedes b, as
shown in the figure. (Figure: a at the top, b below a, and a ∧ b′ below b, with b′ to the side.)
But a ∧ b′ also precedes b′. Hence a ∧ b′ ≤ glb{b, b′} = b ∧ b′ = 0. Thus a ∧ b′ = 0. Since a ∨ b = a, we also
have that a ∨ b′ = (a ∨ b) ∨ b′ = a ∨ (b ∨ b′) = a ∨ I = I.
Therefore b′ is a complement of a. Since complements are unique, a = b. This contradicts the assumption
that b is an immediate predecessor of a. Thus the only join-irreducible elements of L, other than 0, are its atoms.
Remark. Since every finite lattice is a bounded lattice, the theorem on pg. 47 can be given as “Let L be
a finite distributive lattice; if a complement of any element exists, it is unique”. Combining this result
with the above two theorems, we get
2.4.47. Theorem. Let L be a finite complemented distributive lattice. Then every element a in L is the
join of unique set of atoms.
Books Recommended:
1. Kenneth H. Rosen, Discrete Mathematics and Its Applications, Tata McGraw-Hill, Fourth Edition.
2. Seymour Lipschutz and Marc Lipson, Theory and Problems of Discrete Mathematics, Schaum
Outline Series, McGraw-Hill Book Co, New York.
3. John A. Dossey, Otto, Spence and Vanden K. Eynden, Discrete Mathematics, Pearson, Fifth
Edition.
4. J.P. Tremblay, R. Manohar, “Discrete mathematical structures with applications to computer
science”, Tata-McGraw Hill Education Pvt.Ltd.
5. J.E. Hopcroft and J.D. Ullman, Introduction to Automata Theory, Languages and Computation,
Narosa Publishing House.
6. M. K. Das, Discrete Mathematical Structures for Computer Scientists and Engineers, Narosa
Publishing House.
7. C. L. Liu and D.P.Mohapatra, Elements of Discrete Mathematics- A Computer Oriented Approach,
Tata McGraw-Hill, Fourth Edition.
3
Boolean Algebra
Structure
3.1. Introduction.
3.2. Boolean Algebra.
3.3. Logic Gates and Circuits.
3.4. Karnaugh Maps.
3.1. Introduction. This chapter contains results related to Boolean algebra, Switching theory and
Karnaugh maps.
3.1.1. Objective. The objective of the study of these results is to understand the concepts and relations
between the elements of Boolean algebra, AND, OR and NOT gates.
3.2. Boolean Algebra. Let B be a non-empty set with two binary operations ∨ and ∧, a unary operation ′
and two distinct elements 0 and I. Then B is called a Boolean algebra if the following axioms hold where
a, b, c are any elements in B.
[B1] Commutative laws. a ∨ b = b ∨ a and a ∧ b = b ∧ a
[B2] Distributive laws. a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c)
and a ∨ (b ∧ c) = (a ∨ b) ∧ (a ∨ c)
[B3] Identity laws. a ∨ 0 = a and a ∧ I = a
[B4] Complement laws. a ∨ a′ = I and a ∧ a′ = 0
We shall call 0 as zero element, I as unit element and a′ as the complement of a. We denote a Boolean
algebra by ( B, ∨, ∧, ', 0, I ) .
3.2.1. Example. Let A be a non-empty set and ρ(A) be its power set. Then the collection ρ(A) is a
Boolean algebra with the empty set φ as the zero element and the set A as the unit element under the set
operations of union, intersection and complement, i.e., (ρ(A), ∪, ∩, ′, φ, A) is a Boolean algebra. For instance, for any subsets L, M, N of A,
L ∪ (M ∩ N) = (L ∪ M) ∩ (L ∪ N) [Distributive law]
and L ∩ L′ = L ∩ (A − L) = φ [Complement law]
3.2.2. Example. Let B = {0, 1} be the set of bits (binary digits) with the binary operation ∨ and ∧ and
the unary operation ′ is defined by the following tables.
∨ | 1  0      ∧ | 1  0      a | a′
--+-----      --+-----      --+---
1 | 1  1      1 | 1  0      1 | 0
0 | 1  0      0 | 0  0      0 | 1
Here complement of 1 is zero and complement of zero is 1, and ( B, ∨, ∧, ', 0, 1) is a Boolean algebra.
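The tables above can be realised with ordinary bit operations, and the Boolean-algebra axioms then checked by enumeration; a sketch:

```python
# Sketch: the two-element Boolean algebra B = {0, 1} with join, meet
# and complement as in the tables above.

def bor(a, b):  return a | b     # join  (the ∨ table)
def band(a, b): return a & b     # meet  (the ∧ table)
def bnot(a):    return 1 - a     # complement (the ′ table)

B = (0, 1)
# complement laws: a ∨ a′ = 1 and a ∧ a′ = 0 for every a
complement_ok = all(bor(a, bnot(a)) == 1 and band(a, bnot(a)) == 0
                    for a in B)
# distributive law: a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c)
distributive_ok = all(band(a, bor(b, c)) == bor(band(a, b), band(a, c))
                      for a in B for b in B for c in B)
print(complement_ok, distributive_ok)  # True True
```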
3.2.3. Example. Let Bn be the set of n-tuples whose components are either 0 or 1, that is, Bn = B × B
× ... × B (n times).
Let a = ( a1, a 2 ,......, a n ) and b = ( b1, b2 ,......, bn ) be any two members of Bn. Then we define
a ∨1 b = ( a1 ∨ b1, a 2 ∨ b2 ,......, a n ∨ bn )
a ∧1 b = ( a1 ∧ b1, a 2 ∧ b2 ,......, a n ∧ bn )
where ∨ and ∧ are logical operations on {0, 1}, as defined above in example 2, and a′ is equal to
a ′ = ( a1′ , a 2′ ,......, a n′ ) where 0′ = 1 and 1′= 0
Then ( B n , ∨1, ∧1, ' , 0 n , I n ) is a Boolean algebra. This algebra is known as switching algebra and
represents a switching network with n inputs and 1 output.
3.2.4. Example. The poset D30 = {1, 2, 3, 5, 6, 10, 15, 30} has eight elements. Define ∨, ∧ and ′ on D30
by a ∨ b = lcm{a, b}, a ∧ b = gcd{a, b} and a′ = 30/a.
Then D30 is a Boolean algebra with 1 as the zero element and 30 as the unit element.
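In particular, a′ = 30/a really does behave as a complement, which can be checked for every divisor; a sketch:

```python
import math

# Sketch: complement law in the Boolean algebra D30 with a' = 30/a
# (Example 3.2.4): a ∨ a' = 30 and a ∧ a' = 1 for every divisor a of 30.

D30 = [1, 2, 3, 5, 6, 10, 15, 30]

def lcm(x, y):
    return x * y // math.gcd(x, y)

ok = all(lcm(a, 30 // a) == 30 and math.gcd(a, 30 // a) == 1 for a in D30)
print(ok)  # True
```

This works because 30 = 2 · 3 · 5 is square-free, so a and 30/a never share a prime factor.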
Remark. If a set A has n elements then ρ(A) has 2^n elements and the partial order relation on ρ(A) is the
set inclusion ‘⊆’. If A has 1 element, 2 elements or 3 elements, then the corresponding Boolean
algebras are shown by the following diagrams.
(Figures: for a singleton set, the two-element chain 0 = φ below I = {a}; for a two-element set, the diamond with {a, b} at the top, {a} and {b} in the middle, and φ at the bottom; for a three-element set, the cube with {a, b, c} at the top, φ at the bottom, and the singletons {a}, {b}, {c} and the two-element subsets in between.)
3.2.5. Example. Let S be the set of statement formulae involving n statement variables. The algebraic
system (S, ∧, ∨, ¬, F, T) is a Boolean algebra in which ∧, ∨, ¬ denote the operations of conjunction,
disjunction and negation respectively. The elements F and T denote the formulae which are a
contradiction and a tautology respectively. The partial ordering corresponding to conjunction and
disjunction is implication.
3.2.6. Definition. A second definition of a Boolean algebra is given as follows.
A finite lattice is called a Boolean algebra if it is isomorphic with Bn for some non-negative integer n.
For example, in Example 3.2.4, D30 is isomorphic to B3. In fact the mapping f : D30 → B3 defined by
f(1) = 000, f(2) = 100, f(3) = 010, f(5) = 001, f(6) = 110, f(10) = 101, f(15) = 011, f(30) = 111
is an isomorphism.
Proof. Let A = {p1 , p2,......, pk}, if B ⊆ A and aB is the product of the primes in B, then aB divides n. Also
any divisor of n must be of the form aB for some subset B of A, where we assume that a φ = 1 .
Also, a C ∩ B = a C ∧ a B = gcd ( a C , a B )
and a C ∪ B = a C ∨ a B = lcm ( a C , a B )
Proof. It is sufficient to prove (i) part of each law, since (ii) part follows from (i) by principle of duality.
(1) (i) We have a = a ∨ 0 [Identity law]
= a ∨ (a ∧ a′) [Complement law]
= (a ∨ a) ∧ (a ∨ a′) [Distributive law]
= (a ∨ a) ∧ I [Complement law]
= a ∨ a [Identity law]
so a ∨ a = a, which proves (i).
(2) (i) We have a ∨ I = (a ∨ I) ∧ I [Identity law]
= (a ∨ I) ∧ (a ∨ a′) [Complement law]
= a ∨ (I ∧ a′) [Distributive law]
= a ∨ a′ [Identity law]
= I [Complement law]
which proves (i).
(3) (i) We note that a ∨ ( a ∧ b ) = ( a ∧ I ) ∨ ( a ∧ b ) [Identity law]
= a ∧ (I ∨ b) [Distributive law]
= a ∧ (b ∨ I ) [Commutative law]
=a∧I [Identity law]
=a [Identity law]
which proves (i).
(4) (i) Let L = ( a ∨ b ) ∨ c, R = a ∨ (b ∨ c)
Let a ∧ L = a ∧ [( a ∨ b ) ∨ c ]
= [a ∧ (a ∨ b ) ] ∨ (a ∧ c) [Distributive law]
= a ∨ (a ∧ c) [Absorption law]
=a [Absorption law]
and a ∧ R = a ∧ [a ∨ (b ∨ c) ]
= ( a ∧ a ) ∨ [a ∧ (b ∨ c) ] [Distributive law]
= a ∨ [a ∧ (b ∨ c) ] [Idempotent law]
=a [Absorption law]
Thus, a∧L =a∧R (1)
Further, a ′ ∧ L = a ′ ∧ [ ( a ∨ b ) ∨ c]
= [a ′ ∧ (a ∨ b ) ] ∨ (a ′ ∧ c) [Distributive law]
= [( a ′ ∧ a ) ∨ ( a ′ ∧ b ) ] ∨ ( a ′ ∧ c ) [Distributive law]
= [0 ∨ ( a ′ ∧ b ) ] ∨ ( a ′ ∧ c) [Complement law]
= (a ′ ∧ b ) ∨ (a ′ ∧ c) [Identity law]
= a ′ ∧ (b ∨ c) [Distributive law]
Similarly, a ′ ∧ R = a ′ ∧ [a ∨ (b ∨ c) ]
= ( a ′ ∧ a ) ∨ [ a ′ ∧ ( b ∨ c )] [Distributive law]
= 0 ∨ [a ′ ∧ (b ∨ c) ] [Complement law]
= a ′ ∧ (b ∨ c) [Identity law]
Hence, a′ ∧ L = a′ ∧ R (2)
Therefore, L = 0∨ L
= ( a ∧ a ′) ∨ L [Complement law]
= ( a ∧ L) ∨ ( a′ ∧ L) [Distributive law]
= ( a ∧ a ′) ∨ R [Distributive law]
=R [Identity law]
Hence ( a ∨ b ) ∨ c = a ∨ ( b ∨ c ) , which proves (i).
(5) (Uniqueness of complement) Suppose that a ∨ x = I ......(1) and a ∧ x = 0 ......(2)
Then a′ = a′ ∨ 0 [Identity law]
= a′ ∨ (a ∧ x) [By (2)]
= (a′ ∨ a) ∧ (a′ ∨ x) [Distributive law]
= I ∧ (a′ ∨ x) [Complement law]
= a′ ∨ x [Identity law]
Also, x = x ∨ 0 [Identity law]
= x ∨ (a ∧ a′) [Complement law]
= (x ∨ a) ∧ (x ∨ a′) [Distributive law]
= I ∧ (x ∨ a′) [By (1)]
= x ∨ a′ [Identity law]
= a′ ∨ x [Commutative law]
Hence x = a′ ∨ x = a′, that is, the complement of a is unique.
(6) (De Morgan’s law) To prove (a ∨ b)′ = a′ ∧ b′, we show that a′ ∧ b′ satisfies both complement conditions for a ∨ b. First,
(a ∨ b) ∨ (a′ ∧ b′) = b ∨ [a ∨ (a′ ∧ b′)] [Associative and commutative laws]
= b ∨ [(a ∨ a′) ∧ (a ∨ b′)] [Distributivity]
= b ∨ [ I ∧ ( a ∨ b ′) ] [Complement law]
= b ∨ ( a ∨ b ′) [Identity law]
= ( b ∨ b ′) ∨ a [Associative law]
= I ∨a [Complement law]
=I [Boundedness law]
Also, ( a ∨ b ) ∧ ( a ′ ∧ b ′ ) = [ ( a ∨ b ) ∧ a ′] ∧ b ′ [Associative law]
= [( a ∧ a ′) ∨ ( b ∧ a ′) ] ∧ b ′ [Distributive law]
= [0 ∨ (b ∧ a′)] ∧ b′ [Complement law]
= (b ∧ a′) ∧ b′ [Identity law]
= ( b ∧ b ′) ∧ a ′ [Associative law]
= 0 ∧ a′ [Complement law]
=0 [Boundedness law]
So, by the uniqueness of complement, we have ( a ∨ b )′ = a ′ ∧ b ′
The other part follows by principle of duality.
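De Morgan's laws can also be checked exhaustively in a concrete Boolean algebra; a sketch in the power set algebra of {1, 2, 3}, with complement taken relative to A:

```python
from itertools import combinations

# Sketch: De Morgan's laws (a ∨ b)' = a' ∧ b' and (a ∧ b)' = a' ∨ b'
# verified over all pairs of subsets of {1, 2, 3}.

A = frozenset({1, 2, 3})
subsets = [frozenset(c) for r in range(4) for c in combinations(A, r)]

de_morgan = all((A - (s | t)) == (A - s) & (A - t) and
                (A - (s & t)) == (A - s) | (A - t)
                for s in subsets for t in subsets)
print(de_morgan)  # True
```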
3.2.13. Boolean Algebras as lattices. It follows from the above discussion that, every Boolean algebra
B satisfies the associative, commutative and absorption laws and hence is a lattice where ∨ and ∧ are the
join and meet operations respectively. With respect to this lattice, a ∨ I = I implies a ≤ I and a ∧ 0 = 0
implies 0 ≤ a for any element a ∈ B . Thus B is a bounded lattice. Furthermore axioms [B2 ] and [B4 ]
show that B is also distributive and complemented. Conversely, every bounded, distributive, and
complemented lattice satisfies the axioms [B1] through [B4]. Hence, we can give an alternate definition
of a Boolean algebra as follows:
3.2.14. Definition. A Boolean algebra B is a bounded, distributive and complemented lattice. Now since
a Boolean algebra is a lattice, so it must have a partial ordering. In the case of a lattice, we have defined
a ≤ b if a ∨ b = b or a ∧ b = a holds.
3.2.15. Theorem. If a, b are in Boolean algebra then the following are equivalent
(i) a ∨ b = b (ii) a ∧ b = a (iii) a′ ∨ b = I (iv) a ∧ b′ = 0
Proof. (i) ⇔ (ii) has been already proved.
Now (i) ⇒ (iii)
Suppose a∨b=b ......(1)
Then a ′ ∨ b = a ′ ∨ (a ∨ b ) [By (1)]
= (a ′ ∨ a ) ∨ b [Associativity]
= I ∨b [Complement law]
=I [Boundedness law]
Conversely, let a′ ∨ b = I ......(2)
Then a ∨ b = (a ∨ b) ∧ I [Identity law]
= (a ∨ b) ∧ (a′ ∨ b) [By (2)]
= (a ∧ a′) ∨ b [Distributivity]
= 0 ∨ b [Complement law]
= b [Identity law]
so (iii) ⇒ (i). The equivalence of (iv) follows similarly, using the principle of duality.
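The four conditions of Theorem 3.2.15 can be confirmed to agree on a concrete Boolean algebra by brute force; a sketch in the power set algebra of {1, 2, 3}:

```python
from itertools import combinations

# Sketch: in P({1,2,3}), the four conditions of Theorem 3.2.15 are
# simultaneously true or simultaneously false for every pair (a, b).

A = frozenset({1, 2, 3})
subsets = [frozenset(c) for r in range(4) for c in combinations(A, r)]

def conditions(a, b):
    return ((a | b) == b,            # (i)   a ∨ b = b
            (a & b) == a,            # (ii)  a ∧ b = a
            ((A - a) | b) == A,      # (iii) a′ ∨ b = I
            (a & (A - b)) == set())  # (iv)  a ∧ b′ = 0

equivalent = all(len(set(conditions(a, b))) == 1
                 for a in subsets for b in subsets)
print(equivalent)  # True
```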
But in a Boolean algebra, complement of an element is unique. Hence the given lattice is not a Boolean
algebra.
3.2.17. Definition. Let (B, ∨, ∧, ′, 0, I) be a Boolean algebra and S ⊆ B. If S contains the elements 0 and I
and is closed under the operations join (∨), meet (∧) and complement (′), then (S, ∨, ∧, ′, 0, I) is
called a sub-Boolean algebra.
In practice, it is sufficient to check closure w.r.t. the set of operations (∧, ′ ) or (∨ , ′ ).
The definition of sub-Boolean algebra implies that it is itself a Boolean algebra. On the other hand, a subset of a Boolean
algebra may be a Boolean algebra in its own right and yet fail to be a sub-Boolean algebra, because it need not be closed w.r.t.
the operations join and meet of B.
For any Boolean algebra (B, ∨, ∧, ′, 0, I) the subsets {0, I} and the set B are both sub-Boolean algebras.
In addition to these sub-Boolean algebras, consider any element a ∈ B s.t. a ≠ 0, a ≠ I; then
the set {a, a′, 0, I} is also a sub-Boolean algebra.
Direct product. The direct product of two Boolean algebras (B1, ∨1, ∧1, ′, 01, I1) and (B2, ∨2, ∧2, ′, 02, I2) is the Boolean algebra (B1 × B2, ∨3, ∧3, ′, 03, I3) whose operations are defined as follows:
(a1, b1) ∧3 (a2, b2) = (a1 ∧1 a2, b1 ∧2 b2)
(a1, b1) ∨3 (a2, b2) = (a1 ∨1 a2, b1 ∨2 b2)
(a1, b1)′ = (a1′, b1′)
03 = (01, 02)
I3 = (I1, I2)
Boolean homomorphism. Let (B, ∨, ∧, ′, 0, 1) and (P, ∪, ∩, ′, α, β) be Boolean algebras. A mapping f : B → P is called a Boolean homomorphism if it preserves all the operations and the bounds, that is, for all a, b ∈ B,
f(a ∧ b) = f(a) ∩ f(b)
f(a ∨ b) = f(a) ∪ f(b)
f(a′) = (f(a))′
f(0) = α
f(1) = β
The above definition of homomorphism can be simplified by asserting that f : B → P preserves either
the operations meet (∧) and ′ or the operation ∨ and ′
Now, we consider a mapping g : B → P in which the operations ∧ and ∨ are preserved. Thus g is a
lattice homomorphism. g preserves the order and hence it maps the bounds 0 and I into the least and
greatest elements respectively of the set g( B ) ⊆ P . It is however not necessary that
g( 0 ) = α and g(1) = β .
The complements, if defined in terms of g(0) and g(1) in g(B), are preserved and
(g(B), ∩, ∪, ′, g(0), g(1)) is a Boolean algebra.
Note that g : B → P is not a Boolean homomorphism. Although g : B → g( B ) is a Boolean
homomorphism. Thus for any mapping from a Boolean algebra which preserves the operations ∨ and ∧,
the image set is a Boolean algebra.
A Boolean homomorphism is called a Boolean isomorphism if it is bijective.
3.2.20. Representation Theorem. Let B be a finite Boolean algebra. We know that an element ‘a’ in B
is called an atom or minterm if ‘a’ immediately succeeds the least element 0. Let A be the
set of the atoms of B and let P(A) be the Boolean algebra of all subsets of the set A of atoms. Then (as
proved in chapter on lattices) each x ≠ 0 in B can be expressed uniquely (except for order) as the join of
atoms (i.e., elements of A).
i.e., x = a1 ∨ a2 ∨......∨ an.
Stone’s Representation Theorem. Any Boolean algebra is isomorphic to a power set algebra
(P(S), ∩, ∪, ′, φ, S) for some set S. Restricting our discussion to a finite algebra B, the representation
theorem is as follows.
Theorem. Let B be a finite Boolean algebra and let A be the set of atoms of B. If P(A) is the Boolean
algebra of all subsets of the set A of atoms, then there exists a mapping f : B → P(A) which is an
isomorphism.
Proof. Suppose B is a finite Boolean algebra and P(A) is the Boolean algebra of all subsets of the set A
of atoms of B. Consider the mapping f : B → P(A) defined by f(x) = {a1, a2, ..., ar}, where
x = a1 ∨ a2 ∨ ... ∨ ar
is the unique representation of x ∈ B as the join of atoms a1, a2, ..., ar ∈ A. If ai are
atoms, then we know that
ai ∧ ai = ai
but a i ∧ a j = 0 for a i ≠ a j
Let A = { a1, a2, ..., ar ; b1, b2, ..., bs ; c1, c2, ..., ct ; d1, d2, ..., dk } be the set of atoms of B, and
suppose
x = a1 ∨ a2 ∨ ...... ∨ ar ∨ b1 ∨ b2 ∨ ...... ∨ bs and y = b1 ∨ b2 ∨ ...... ∨ bs ∨ c1 ∨ c2 ∨ ...... ∨ ct .
Then x ∨ y = a1 ∨ ...... ∨ ar ∨ b1 ∨ ...... ∨ bs ∨ c1 ∨ ...... ∨ ct
and x ∧ y = b1 ∨ b2 ∨ ...... ∨ bs ,
so that f ( x ∨ y ) = { a1, ..., ar , b1, ..., bs , c1, ..., ct } = f ( x ) ∪ f ( y )
and f ( x ∧ y ) = f ( b1 ∨ b2 ∨ ...... ∨ bs ) = { b1, b2, ..., bs } = f ( x ) ∩ f ( y ).
Further, if y = c1 ∨ ...... ∨ ct ∨ d1 ∨ ...... ∨ dk , then x ∨ y = I and x ∧ y = 0, and so y = x′. In this case
f ( x′ ) = { c1, ..., ct , d1, ..., dk } = A − f ( x ) = ( f ( x ))′ .
Since the representation of any x in terms of atoms is unique, f is one-one and onto.
Hence f is a Boolean algebra isomorphism.
Thus every finite Boolean algebra is essentially the same as a Boolean algebra of sets. If a set A has n
elements then its power set P(A) has 2^n elements. Thus we have
3.2.21. Corollary. A finite Boolean algebra has 2^n elements for some positive integer n.
e.g. Consider the Boolean algebra D70 = {1, 2, 5, 7, 10, 14, 35, 70} . Then the set of atoms of D70 is
A = { 2, 5, 7 } and
10 = 2 ∨ 5 , 14 = 2 ∨ 7 , 35 = 5 ∨ 7 , 70 = 2 ∨ 5 ∨ 7.
(In the Hasse diagram of D70, the element 1 lies at the bottom, the atoms 2, 5, 7 immediately above it,
then 10, 14, 35, and finally 70 at the top.)
Now, the diagram of the Boolean algebra of the power set P(A) of the set A of atoms is the
corresponding cube with φ at the bottom, the singletons {2}, {5}, {7} above it, then the two-element
subsets, and {2, 5, 7} at the top.
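The isomorphism of D70 with P({2, 5, 7}) can be checked mechanically. The following sketch (ours, not from the text) takes join as lcm and meet as gcd in D70 and verifies that the atom map f preserves both operations:

```python
# Checking the representation theorem on D_70: join is lcm, meet is gcd,
# and the atoms of D_70 are the primes 2, 5, 7.
from math import gcd

n = 70
D = [d for d in range(1, n + 1) if n % d == 0]   # {1, 2, 5, 7, 10, 14, 35, 70}
atoms = [2, 5, 7]

def lcm(a, b):
    return a * b // gcd(a, b)

# f maps each x in D_70 to the set of atoms below it
f = {x: frozenset(p for p in atoms if x % p == 0) for x in D}

# f is a bijection onto P(A): eight elements, eight distinct images
assert len(set(f.values())) == len(D) == 8
# f preserves join and meet: f(x ∨ y) = f(x) ∪ f(y), f(x ∧ y) = f(x) ∩ f(y)
for x in D:
    for y in D:
        assert f[lcm(x, y)] == f[x] | f[y]
        assert f[gcd(x, y)] == f[x] & f[y]
print("D_70 is isomorphic to P({2, 5, 7})")
```

The check relies on 70 being square-free, so that every divisor is exactly the lcm of the atoms below it.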
Q( x , y, z ) = ( x ∧ y ′) ∨ ( y ∧ 0 )
R( x , y, z ) = ( x ∨ ( y ′ ∧ z )) ∨ ( x ∧ ( y ∧ I ))
are Boolean expressions. Note that a Boolean expression (or Boolean polynomial) in n variables may or
may not contain all the n variables. Obviously an infinite number of Boolean expressions may be
constructed in n variables.
3.2.23. Definition. A literal is a variable or complemented variable such as x, x ′, y, y ′ and so on. A
fundamental product is a literal or a product of two or more literals in which no two literals involve the
same variable.
Thus, x ∧ z′, x ∧ y′ ∧ z, x, y′, x′ ∧ y ∧ z are fundamental products, but x ∧ y ∧ x′ ∧ z and x ∧ y ∧ z ∧ y are
not. Note that any product of literals can be reduced to either 0 or a fundamental product, e.g.,
x ∧ y ∧ x′ ∧ z = 0 since x ∧ x′ = 0 (complement law), and x ∧ y ∧ z ∧ y = x ∧ y ∧ z since y ∧ y = y (idempotent law).
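This reduction is easy to automate. In the small helper below (our own notation, not from the text) a product is given as a list of literals such as ["x", "y", "x'", "z"]:

```python
# Reduce a product of literals to 0 or a fundamental product, using the
# complement law (x ∧ x' = 0) and the idempotent law (y ∧ y = y).
def reduce_product(literals):
    seen = {}                              # variable -> is it complemented?
    for lit in literals:
        var, comp = lit.rstrip("'"), lit.endswith("'")
        if var in seen and seen[var] != comp:
            return 0                       # x ∧ x' = 0   (complement law)
        seen[var] = comp                   # duplicates collapse (idempotent law)
    return [v + ("'" if c else "") for v, c in seen.items()]

print(reduce_product(["x", "y", "x'", "z"]))   # 0, since x ∧ x' = 0
print(reduce_product(["x", "y", "z", "y"]))    # ['x', 'y', 'z'], since y ∧ y = y
```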
3.2.24. Definition. A fundamental product P1 is said to be contained in (or included in) another
fundamental product P2 if the literals of P1 are also literals of P2, e.g., x′z (i.e., x′ ∧ z ) is contained in
x′yz but x′z is not contained in xy′z since x′ is not a literal of xy′z. Observe that if P1 is contained in P2,
say P2 = P1 ∧ Q, then by the absorption law
P1 ∨ P2 = P1 ∨ ( P1 ∧ Q ) = P1
Although the first expression is a sum of products, it is not a sum-of-products expression. Specifically
the product xz′ is contained in the product xyz′. However by the absorption law, E1 can be expressed as
E1 = xz ′ + y ′z + xy z ′ = xz ′ + xy z ′ + y ′z = xz ′ + y ′z
This yields a sum of products form for E1. The second expression E2 is already a sum-of-products
expression.
Now, we give an algorithm to transform any Boolean expression into equivalent sum-of-products
expression.
3.2.29. Algorithm for finding sum-of-products forms. The input is a Boolean expression E. The
Step II. Repeat Step I until every product P in E is a minterm, i.e., until every product P involves all the
variables.
3.2.34. Example. Express E( x, y, z ) = x( y′z )′ in its complete sum-of-products form.
Solution. First apply the algorithm for finding a sum-of-products form of E to obtain
E = x( y′z )′ = x( y + z′ ) = xy + xz′ .
Then xy = xy( z + z′ ) = xyz + xyz′ and xz′ = x( y + y′ )z′ = xyz′ + xy′z′ , and hence
E = xyz + xyz′ + xy′z′
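The complete sum-of-products form can also be found by brute force, as in this sketch (ours, not the book's algorithm): enumerate all assignments and collect the minterms on which E is 1:

```python
# Collect the minterms of E(x, y, z) = x(y'z)' by evaluating E on all
# 2^3 assignments of bits to x, y, z.
from itertools import product

def E(x, y, z):
    return x and not ((not y) and z)       # x(y'z)'

minterms = []
for x, y, z in product([1, 0], repeat=3):
    if E(x, y, z):
        # build the minterm string, priming each variable assigned 0
        term = "".join(v + ("" if b else "'") for v, b in zip("xyz", (x, y, z)))
        minterms.append(term)

print(" + ".join(minterms))   # xyz + xyz' + xy'z'
```

The result agrees with the algebraic expansion above.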
Then, EL = 3 + 3 + 4 + 4 = 14 and ES = 4, where EL denotes the number of literals in E (counted with
multiplicity) and ES the number of summands.
Suppose E and F are equivalent Boolean sum-of-products expressions. We say E is simpler than F if
(i) EL < FL and ES ≤ FS , or (ii) EL ≤ FL and ES < FS .
We say E is minimal if there is no equivalent sum-of-products expression which is simpler than E. There
can be more than one equivalent minimal sum-of-products expressions.
3.2.37. Prime Implicants. A fundamental product P is called a prime implicant of a Boolean expression
E if P + E = E but no other fundamental product contained in P has this property. For example, suppose
E = xy ′ + xyz ′ + x ′yz ′ .
Since the complete sum-of-products form is unique, A + E = E, where A ≠ 0 , if and only if the summands
in the complete sum-of-products form for A are among the summands in the complete sum-of-products
form for E.
Now, by (1) and (2), we see that summands of xz′ are among those of E, so we have
xz ′ + E = E which proves (i)
(ii) Express x in complete sum-of-products form
x = x ( y + y ′) ( z + z ′) = xyz + xyz ′ + xy ′z + xy ′z ′
Proof. Since the literals commute, we can assume without loss of generality that
P1 = a1a 2 ......a r t
P2 = b1 b2 ...... b st ′
Q = a1 a 2 ...... a r b1 b2 ...... b s .
Now, Q = Q( t + t′ ) = Qt + Qt′ . Since every literal of P1 is a literal of Qt and every literal of P2 is a
literal of Qt′ , the absorption law gives P1 + Qt = P1 and P2 + Qt′ = P2 . Hence, we have
P1 + P2 + Q = P1 + P2 + Qt + Qt′
= ( P1 + Qt ) + ( P2 + Qt′ )
= P1 + P2 .
= x′z′ + x′y′z + xy + x′y′ ( consensus of x′z′ and x′y′z )
= x′z′ + xy + x′y′ ( x′y′z contains x′y′, so it is absorbed )
Now, neither step in the consensus method will change E. Thus E is the sum of its prime implicants,
which appear in the last line i.e., x ′z ′, xy, x ′y ′ and yz ′ .
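The consensus method can be sketched in code. The implementation below is ours (products are sets of literals, with variables drawn from x, y, z, t); it alternates the two steps — delete absorbed products, adjoin new consensus terms — until neither applies:

```python
def consensus(p, q):
    # exactly one variable must appear uncomplemented in one product
    # and complemented in the other
    opp = [v for v in "xyzt" if (v in p and v + "'" in q) or (v + "'" in p and v in q)]
    if len(opp) != 1:
        return None
    c = frozenset((p | q) - {opp[0], opp[0] + "'"})
    # the consensus must itself be a fundamental product
    if any(v in c and v + "'" in c for v in "xyzt"):
        return None
    return c

def prime_implicants(products):
    terms = {frozenset(p) for p in products}
    changed = True
    while changed:
        changed = False
        # Step 1: delete any product absorbed by another (P + PQ = P)
        for q in list(terms):
            if any(p < q for p in terms):
                terms.discard(q)
                changed = True
        # Step 2: adjoin a consensus not absorbed by an existing product
        for p in list(terms):
            for q in list(terms):
                c = consensus(p, q)
                if c and not any(r <= c for r in terms):
                    terms.add(c)
                    changed = True
    return terms

E = [{"x", "y", "z"}, {"x'", "z'"}, {"x", "y", "z'"}, {"x'", "y'", "z"}, {"x'", "y", "z'"}]
print(sorted("".join(sorted(t)) for t in prime_implicants(E)))
# the prime implicants x'y', x'z', xy, yz'
```

Run on E = xyz + x′z′ + xyz′ + x′y′z + x′yz′ (the expression of the next example), it returns exactly the four prime implicants named in the text.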
3.2.43. Finding a minimal sum-of-products form. The input is a Boolean expression E = P1 + P2 + ... + Pm
where the P’s are all the prime implicants of E. The output expresses E as a minimal sum-of-products.
Step I. Express each prime implicant P as a complete sum-of-products.
Step II. Delete one by one those prime implicants whose summands appear among the summands of the
remaining prime implicants.
3.2.44. Example. Consider E = xyz + x′z′ + xyz′ + x′y′z + x′yz′ .
Solution. By the previous example, E is the sum of its prime implicants:
E = x′z′ + xy + x′y′ + yz′
Step I. Express each prime implicant as a complete sum-of-products:
x ′z ′ = x ′z ′( y + y ′) = x ′yz ′ + x ′y ′z ′
xy = xy ( z + z ′) = xyz + xyz ′
x ′y ′ = x ′y ′( z + z ′) = x ′y ′z + x ′y ′z ′
yz ′ = yz ′( x + x ′) = xyz ′ + x ′yz ′ .
Step II. The summands of x′z′ are x′yz′ and x′y′z′ , which appear among the other summands. Thus
delete x′z′ to obtain E = xy + x′y′ + yz′ .
The summands of no other prime implicant appear among the summands of the remaining prime
implicants, and hence this is a minimal sum-of-products form for E. In other words, none of the
remaining prime implicants is superfluous, i.e., none can be deleted without changing E.
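That the deleted prime implicant really is superfluous can be confirmed with a truth-table check (a sketch of ours, not part of the algorithm):

```python
# Verify that E = xyz + x'z' + xyz' + x'y'z + x'yz' and the minimal form
# xy + x'y' + yz' agree on every one of the eight assignments.
from itertools import product

def E(x, y, z):
    return ((x and y and z) or ((not x) and (not z)) or (x and y and (not z))
            or ((not x) and (not y) and z) or ((not x) and y and (not z)))

def E_min(x, y, z):
    return (x and y) or ((not x) and (not y)) or (y and (not z))

assert all(bool(E(*b)) == bool(E_min(*b)) for b in product([0, 1], repeat=3))
print("E = xy + x'y' + yz' on all 8 assignments")
```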
3.3. Logic Gates and Circuits. Logic circuits (also called logic networks) are structures which are built
up from certain elementary circuits called logic gates. Each logic circuit may be viewed as a machine L
which contains one or more input devices and exactly one output device. Each input device in L sends a
signal, specifically, a bit 0 or 1 to the circuit L, and L processes the set of bits to yield an output bit.
Accordingly an n bit sequence may be assigned to each input device, and L processes the input
sequences one bit at a time to produce an n-bit output sequence.
3.3.1. Logic Gates. There are three basic logic gates which are described below. We adopt the
convention that the lines entering the gate symbol from the left are input lines and the single line on the
right is the output line.
(1) OR gate. An OR Gate has inputs x and y and output z = x ∨ y or z = x + y , where addition or join is
defined by the truth table
x y x+y
1 1 1
1 0 1
0 1 1
0 0 0
Thus the output z = 0 only when inputs x = 0 and y = 0. Thus OR gate only yields 0 when both input bits
are 0.
The symbol for the OR gate is shown in the diagram below
(Figure: OR gate symbol with inputs x, y and output z = x + y.)
OR gate may have more than two inputs. Below figure shows an OR gate with four inputs A, B, C, D
and output Y = A + B + C + D .
(Figure: OR gate with four inputs A, B, C, D and output Y = A + B + C + D.)
The output Y = 0 if and only if all the inputs are 0. Suppose for example, the input data for the OR gate
in above figure are the following 8-bit sequences
A = 10000101 B = 10100001
C = 00100100 D = 10010101.
The OR gate only yields 0 when all input bits are 0. This occurs only in the 2nd , 5th and 7th positions.
Thus the output is the sequence Y = 10110101.
(2) AND Gate. In this gate the inputs are x and y and output is xy or x ∧ y, where multiplication is
defined by the truth table
x y z = xy
1 1 1
1 0 0
0 1 0
0 0 0
Thus the output z = 1 when inputs x = 1 and y = 1 otherwise z = 0. The symbol for the AND gate is
(Figure: AND gate symbol with inputs x, y and output z = xy.)
The AND gate may have more than two inputs. Below figure shows an AND gate with four inputs A, B,
C, D and output Y = A.B.C.D. The output Y = 1 if and only if all the inputs are 1.
(Figure: AND gate with four inputs A, B, C, D and output Y = A.B.C.D.)
Suppose, for example, the input data for the AND gate in above figure are the following 8-bit sequences.
A = 11100111 B = 01111011
C = 01110011 D = 11101110
The AND gate only yields 1 when all input bits are 1. This occurs only in the 2nd, 3rd and 7th positions.
Thus the output is the sequence Y = 01100010.
(3) NOT Gate. The NOT gate is also known as an inverter.
(Figure: NOT gate symbol with input x and output y = x′.)
The NOT gate has input x and output x′, where inversion, denoted by the prime, is defined by the truth
table given below:
x x′
1 0
0 1
We emphasize that a NOT gate can have only one input, whereas the OR and AND gates may have two
or more inputs.
Suppose for example, a NOT gate is asked to process the following sequences:
x = 110001 , y = 10110111 , z = 10101010
The NOT gate changes 0 to 1 and 1 to 0. Thus,
x ′ = 001110 , y ′ = 01001000 , z ′ = 01010101
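The three gates acting bitwise on bit strings can be written out directly; this sketch (function names are ours) reproduces the sequences worked out above:

```python
# OR, AND and NOT gates applied position by position to bit strings.
def OR(*inputs):
    return "".join("1" if any(b == "1" for b in bits) else "0" for bits in zip(*inputs))

def AND(*inputs):
    return "".join("1" if all(b == "1" for b in bits) else "0" for bits in zip(*inputs))

def NOT(x):
    return "".join("1" if b == "0" else "0" for b in x)

print(OR("10000101", "10100001", "00100100", "10010101"))   # 10110101
print(AND("11100111", "01111011", "01110011", "11101110"))  # 01100010
print(NOT("110001"))                                        # 001110
```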
(Figure: a logic circuit with inputs x, y, z; an AND gate with inputs x and y followed by a NOT gate, an
OR gate with inputs x′ and z followed by a NOT gate, and a final OR gate with output t.)
Working from left to right, we express t in terms of the inputs x, y, z as follows. The output of the AND
gate is x . y , which is then negated to yield (x . y)′. The output of the lower OR gate is x ′ + z which is
then negated to yield ( x ′ + z )′ . The output of the OR gate on the right, with inputs (xy)′ and ( x ′ + z )′ gives
us our desired representation, that is,
t = ( xy )′ + ( x ′ + z )′ .
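The behaviour of this circuit over all eight input assignments can be tabulated; the short sketch below (ours) evaluates t = (xy)′ + (x′ + z)′ directly:

```python
# Truth table of t = (xy)' + (x' + z)' for all bit assignments to x, y, z.
from itertools import product

def t(x, y, z):
    return int((not (x and y)) or (not ((not x) or z)))

print(" x y z | t")
for x, y, z in product([0, 1], repeat=3):
    print(f" {x} {y} {z} | {t(x, y, z)}")
```

The table shows that t = 0 only for x = y = z = 1.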
3.3.3. Logic Circuits as a Boolean Algebra. Observe that the truth tables for the OR, AND and NOT
gates are respectively identical to the truth tables for the propositions p ∨ q (disjunction), p ∧ q
(conjunction) and ∼p (negation). The only difference is that 1 and 0 are used instead of T and F. Thus
the logic circuits satisfy the same laws as do propositions and hence they form a Boolean algebra. So, all
terms used with Boolean algebras, such as, complements, literals, fundamental products, minterms, sum-
of-products and complete sum-of-products may also be used with our logic circuits.
3.3.4. AND-OR Circuits. The logic circuit L which corresponds to a Boolean sum-of-products
expression is called an AND-OR circuit. Such a circuit L has several inputs, where:
(1) Some of the inputs or their complements are fed into each AND gate.
(2) The outputs of all the AND gates are fed into a single OR gate.
(3) The output of the OR gate is the output for the circuit L.
e.g. The following figure represents an AND-OR circuit with three inputs x, y, z and output t. First we
find the output of each AND gate.
(Figure: an AND-OR circuit; the first AND gate has inputs x, y, z, the second has inputs x, y′, z, the
third has inputs x′, y, and the three AND outputs feed an OR gate with output t.)
(i) The inputs of the first AND gate are x, y and z and hence x.y.z is the output.
(ii) The inputs of the second AND gate are x, y′, z and hence xy′z is the output.
(iii) The inputs of the third AND gate are x′, y and hence x′y is the output.
Then the sum of outputs of the AND gates is the output of the OR gate, which is the output t of the
circuit. Thus t = xyz + xy′z + x′y
3.3.5. NAND and NOR gates. There are two additional gates which are equivalent to combinations of
the above basic gates.
(Figure: NAND gate symbol with inputs x, y and output z; NOR gate symbol with inputs x, y and
output z.)
The truth tables for these gates (using two inputs x and y) are given by
x y NAND NOR
1 1 0 0
1 0 1 0
0 1 1 0
0 0 1 1
The NAND and NOR gates can actually have two or more inputs just like corresponding AND and OR
gates. Furthermore, the output of a NAND gate is 0 if and only if all the inputs are 1, and the output of a
NOR gate is 1 if and only if all the inputs are 0.
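Since NAND = NOT ∘ AND and NOR = NOT ∘ OR, both gates are one-liners; this sketch (ours) reproduces the truth table above:

```python
# NAND and NOR built from the basic gates: negate the AND / OR of the inputs.
def NAND(*bits):
    return int(not all(bits))

def NOR(*bits):
    return int(not any(bits))

print(" x y | NAND NOR")
for x, y in [(1, 1), (1, 0), (0, 1), (0, 0)]:
    print(f" {x} {y} |  {NAND(x, y)}    {NOR(x, y)}")
```

Both functions accept any number of inputs, matching the remark that NAND and NOR gates may have two or more inputs.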
3.3.6. Example. Express the output t as a Boolean expression in the inputs, x, y, z for the logic circuit in
following figure:
(Figure: inputs x, y, z; y passes through a NOT gate; one AND gate has inputs x and y′, another has
inputs y′ and z; their outputs feed an OR gate with output t.)
Solution. The inputs to the first AND gate are x and y′ and to the second AND gate are y′ and z. Thus
t = xy′ + y′z.
3.3.7. Exercise.
1. Express the output t as a Boolean expression in the inputs x, y, z for the logic circuit below
(Figure: a logic circuit with inputs x, y, z built from two AND gates, two NOT gates and an OR gate
with output t.)
2. Express the output t as a Boolean expression in the inputs x, y, z for the logic circuit
(Figure: a logic circuit with inputs x, y, z built from three AND gates and an OR gate with output t.)
3. Express the output t as a Boolean expression in the inputs x, y, z for the logic circuit in following two
figures
(i) (Figure: a logic circuit with inputs x, y, z built from a NOR gate, an AND gate and an OR gate with
output t.)
(ii) (Figure: a logic circuit with inputs x, y, z built from a NAND gate, a NOR gate and an OR gate with
output t.)
4. Express the output z as a Boolean expression in the inputs x and y for the logic circuit given below
(Figure: a logic circuit with inputs x and y built from an AND gate, a NAND gate, a NOR gate and an
OR gate with output z.)
5. Draw the logic circuit L with inputs x, y, z and output t which corresponds to each Boolean expression
(i) t = xyz + x′z′ + y′z′ (ii) t = xy′z + xyz′ + xy′z′
3.3.8. Truth tables and Boolean functions. Consider a logic circuit L with n = 3 input devices x, y, z
and output t, say, t = xyz + xy ′z + x ′y
Each assignment of a set of three bits to the inputs x, y, z yields an output bit for t. All together there are
2^n = 2^3 = 8 possible ways to assign bits to the inputs. The truth table of the circuit is written as
T( x, y, z ) = t or T( L ) = [ x, y, z ; t ]
This form for the truth table for L is essentially the same as the truth table for a proposition discussed in
UNIT I. The only difference is that here the values for x, y, z and t are written horizontally whereas in
UNIT I they are written vertically.
Consider a logic circuit L with n input devices. There are many ways to form n input sequences
x1, x2, ......, xn so that they contain the 2^n different possible combinations of the input bits. The
assignment scheme is given below.
x1. Assign 2^(n−1) bits which are 0’s followed by 2^(n−1) bits which are 1’s.
x2. Assign 2^(n−2) bits which are 0’s followed by 2^(n−2) bits which are 1’s, and repeat this pattern.
x3. Assign 2^(n−3) bits which are 0’s followed by 2^(n−3) bits which are 1’s, and repeat this pattern.
and so on. The sequences obtained in this way will be called special sequences. Replacing 0 by 1 and 1
by 0 in the special sequences, we get the complements of the special sequences.
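The assignment scheme can be stated as a small generator (a sketch of ours): xi consists of alternating blocks of 2^(n−i) zeros and 2^(n−i) ones, repeated to length 2^n:

```python
# Generate the n special sequences, each of length 2^n.
def special_sequences(n):
    length = 2 ** n
    seqs = []
    for i in range(1, n + 1):
        block = 2 ** (n - i)                     # size of each run of 0's / 1's
        reps = length // (2 * block)             # how many 0-block/1-block pairs
        seqs.append("".join("0" * block + "1" * block for _ in range(reps)))
    return seqs

print(special_sequences(3))   # ['00001111', '00110011', '01010101']
```

For n = 3 these are exactly the sequences 00001111, 00110011, 01010101 used in the examples and exercises below.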
Remark. Assuming the inputs are the special sequences, we frequently do not need to distinguish
between the truth table
T( L ) = [ x1, x2, ......, xn ; t ]
and the output sequence t itself.
Step II. Find each product appearing in t. (Recall that a product a1.a2......an = 1 in a position if and only
if all of a1, a2, ......, an have 1 in that position.)
Step III. Find the sum t of the products. (Recall that a sum a1 + a2 + ...... + an = 0 in a position if and only
if all of a1, a2, ......, an have 0 in that position.)
(Figure: an AND-OR circuit with inputs x, y, z, three AND gates and an OR gate with output t.)
3.3.11. Boolean Functions. Let E be a Boolean expression with n variables x1, x2,...,xn. The entire
discussion above can also be applied to E where now the special sequences are assigned to the variables
x1, x 2 ,......, x n . The truth table T = T ( E ) of E is defined in the same way as the truth table T = T(L) for a
logic circuit L as given in the above example.
Remark. The truth table for a Boolean expression E = E( x1, x2, ......, xn ) with n variables may also be
viewed as a Boolean function from B^n into B. (The Boolean algebras B^n and B = {0, 1} are already
defined.) That is, each element in B^n is a list of n bits which when assigned to the list of variables in E
produces an element in B.
3.3.12. Example. Consider a Boolean expression E = E(x, y, z) with three variables. The eight minterms
(fundamental products involving all three variables) are as follows:
xyz , xyz′ , xy′z , x′yz , xy′z′ , x′yz′ , x′y′z , x′y′z′ .
The truth table for these minterms (using the special sequences for x, y, z) gives, for example,
x′y′z = 01000000 , x′y′z′ = 10000000 .
Note that each minterm assumes the value 1 in only one of the eight positions.
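The minterm sequences can be computed from the special sequences with the bitwise gates; this sketch (ours) confirms the two values quoted above:

```python
# Evaluate minterms on the special sequences x = 00001111, y = 00110011,
# z = 01010101; each minterm is 1 in exactly one of the eight positions.
x, y, z = "00001111", "00110011", "01010101"

def seq_and(*seqs):
    return "".join("1" if all(b == "1" for b in bits) else "0" for bits in zip(*seqs))

def seq_not(s):
    return "".join("1" if b == "0" else "0" for b in s)

print(seq_and(seq_not(x), seq_not(y), z))            # x'y'z  = 01000000
print(seq_and(seq_not(x), seq_not(y), seq_not(z)))   # x'y'z' = 10000000
```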
3.4. Karnaugh Maps. Karnaugh maps are pictorial devices for finding prime implicants and minimal
forms for Boolean expressions involving at most six variables. We will only treat the cases of two, three
and four variables. In the beginning of this unit, we have defined that a minterm is a fundamental
product which involves all the variables, and that a complete sum-of-products expression is a sum of
minterms.
Definition. Two fundamental products P1 and P2 are said to be adjacent if P1 and P2 have the same
variables and if they differ in exactly one literal. Thus there must be exactly one variable that appears
uncomplemented in one product and complemented in the other. In particular, the sum of two such adjacent products will be a
fundamental product with one less literal.
Remark. In Karnaugh maps, minterms involving the same variables are represented by squares, and we
will sometimes use the terms “square” and “minterm” interchangeably.
3.4.1. Example. (i) Let P1 = xyz′ and P2 = xy′z′ . Then P1 and P2 are adjacent products and
P1 + P2 = xyz′ + xy′z′ = xz′( y + y′ ) = xz′(1) = xz′ .
(ii) Similarly, if P1 = x′yzt and P2 = x′yz′t , then
P1 + P2 = x′yzt + x′yz′t = x′yt( z + z′ ) = x′yt(1) = x′yt .
(iii) On the other hand, two fundamental products with different variables are never adjacent. In
particular, such products will not appear as squares in the same Karnaugh map.
3.4.2. Case of Two Variables. The Karnaugh map (or K-map) corresponding to Boolean expression E =
E(x, y) with two variables x and y is given in below figure:
(Figure: the 2 × 2 Karnaugh map with rows labeled x, x′ and columns labeled y, y′.)
80 Discrete Mathematics
Here, x is represented by the points in the upper half of the map and y is represented by the points in the
left half of the map. And x′ is represented by the points in the lower half of the map and y′ is represented
by the points in the right half of the map. The four possible minterms with two literals, xy, xy′, x′y, x′y′,
are represented by the four squares of the map as follows:
         y       y′
x    |   xy   |  xy′
x′   |   x′y  |  x′y′
Note that two squares (minterms) are adjacent by the definition given above if and only if they are
geometrically adjacent. Any complete sum-of-products Boolean expression E( x, y ) is a sum of
minterms and hence can be represented in the K-map by placing checks in the appropriate squares. A
prime implicant of E(x, y) will be either a pair of adjacent squares in E or an isolated square i.e., a square
which is not adjacent to any other square of E(x, y). A minimal sum-of-products form for E(x, y) will
consist of a minimal number of prime implicants which cover all the squares of E(x, y) as illustrated in
the next example:
3.4.3. Example. Find the prime implicants and a minimal sum-of-products form for each of the
following complete sum-of-products Boolean expressions.
(i) E1 = xy + xy ′ (ii) E2 = xy + x ′y + x ′y ′ (iii) E 3 = xy + x ′y ′
Solution. (i) Check the squares corresponding to xy and xy′ as in the figure below:
         y       y′
x    |   √    |  √
x′   |        |
Note that E1 consists of only one pair of adjacent squares and this pair of adjacent squares represents the
variable x, so x is the (only) prime implicant of E1. Consequently, E1 = x is its minimal sum.
(ii) Check the squares corresponding to xy, x′y and x′y′ as in the figure:
         y       y′
x    |   √    |
x′   |   √    |  √
Note that E2 contains two pairs of adjacent squares (designated by the two loops) which include all the
squares (minterms) of E2. The vertical pair represents y and the horizontal pair represents x′; hence y
and x′ are the prime implicants of E2. Thus E2 = x′ + y is its minimal sum.
(iii) Check the squares corresponding to xy and x′y′ as shown in the figure:
         y       y′
x    |   √    |
x′   |        |  √
Note that E3 consists of two isolated squares which represent xy and x′y′. Hence xy and x′y′ are the
prime implicants of E3 and E3 = xy + x′y′ is its minimal sum.
3.4.4. Case of Three Variables. The K-map corresponding to a Boolean expression E = E( x, y, z ) with
three variables x, y, z has two rows, labeled x and x′, and four columns, labeled yz, yz′, y′z′, y′z.
Recall that there are exactly eight minterms with three variables:
xyz, xyz′, xy′z′, xy′z, x′yz, x′yz′, x′y′z′, x′y′z.
These minterms are listed so that they correspond to the eight squares in the Karnaugh map in the
obvious way.
Furthermore, in order that every pair of adjacent products in above figure are geometrically adjacent the
right and left edges of the map must be identified. This is equivalent to cutting out, bending and gluing
the map along the identified edges to obtain a cylinder in which adjacent products are represented by the
squares with one edge in common.
Viewing the K-map in above figure as a Venn diagram, the areas represented by the variables x, y and z
are shown in the below figure:
(Figure: three copies of the K-map, shading the areas represented by x, y and z respectively.)
Specifically, the variable x is still represented by the points in the upper half of the map, and the variable
y is still represented by the points in the left half of the map. The new variable z is represented by the
points in the left and right quarters of the map. Thus x′, y′ and z′ are represented, respectively, by points
in the lower half, right half and middle two quarters of the map.
By a basic rectangle in a K-map with three variables, we mean a square, two adjacent squares, or four
squares which form a one-by-four or two-by-two rectangle. These basic rectangles correspond to
fundamental products of three, two and one literal respectively. Moreover, the fundamental product
represented by a basic rectangle is the product of just those literals that appear in every square of the
rectangle.
Suppose a complete sum-of-products Boolean expression E = E ( x, y, z ) is represented in K-map by
placing checks in the appropriate squares. A prime implicant of E will be a maximal basic rectangle of
E, i.e., a basic rectangle contained in E which is not contained in any larger basic rectangle in E. A
minimal sum-of-products form for E will consist of a minimal cover of E, that is, a minimal number of
basic rectangles of E which together include all the squares of E.
3.4.5. Example. Find the prime implicants and a minimal sum-of-products form for each of the
following sum-of-products Boolean expressions.
(i) E1 = xyz + xyz′ + x′yz′ + x′y′z
(ii) E2 = xyz + xyz′ + xy′z + x′yz + x′y′z
(iii) E3 = xyz + xyz′ + x′yz′ + x′y′z′ + x′y′z
Solution. (i) Check the squares corresponding to the four summands as in the figure:
         yz     yz′     y′z′    y′z
x    |   √   |   √   |       |
x′   |       |   √   |       |  √
Observe that E1 has three prime implicants (maximal basic rectangles), which are circled; these are xy,
yz′ and x′y′z. All of these are needed to cover E1 and hence the minimal sum of E1 is
E1 = xy + yz′ + x′y′z
(ii) Check the squares corresponding to the five summands as in the figure:
         yz     yz′     y′z′    y′z
x    |   √   |   √   |       |  √
x′   |   √   |       |       |  √
Note that E2 has two prime implicants, which are circled. One is the two adjacent squares which
represent xy, and the other is the two-by-two square (spanning the identified edges) which represents z.
Both are needed to cover E2, so the minimal sum of E2 is E2 = xy + z.
(iii) Check the squares corresponding to the five summands as in the figure:
         yz     yz′     y′z′    y′z
x    |   √   |   √   |       |
x′   |       |   √   |   √   |  √
As indicated by the loops, E3 has four prime implicants, xy, yz′, x′z′ and x′y′. However, only one of the
two dashed ones, i.e., one of yz′ or x′z′, is needed in a minimal cover of E3. Thus E3 has two minimal
sums:
E3 = xy + yz′ + x′y′ = xy + x′z′ + x′y′
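Both covers can be verified against the complete sum-of-products form by brute force. In this sketch (ours), E3 = xyz + xyz′ + x′yz′ + x′y′z′ + x′y′z as above:

```python
# Check that the two covers found on the map agree with E3 everywhere.
from itertools import product

def E3(x, y, z):
    return ((x and y and z) or (x and y and (not z)) or ((not x) and y and (not z))
            or ((not x) and (not y) and (not z)) or ((not x) and (not y) and z))

def cover1(x, y, z):                         # xy + yz' + x'y'
    return (x and y) or (y and (not z)) or ((not x) and (not y))

def cover2(x, y, z):                         # xy + x'z' + x'y'
    return (x and y) or ((not x) and (not z)) or ((not x) and (not y))

for bits in product([0, 1], repeat=3):
    assert bool(E3(*bits)) == bool(cover1(*bits)) == bool(cover2(*bits))
print("E3 = xy + yz' + x'y' = xy + x'z' + x'y'")
```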
3.4.6. Exercise.
1. Design a three-input minimal AND-OR circuit L with the following truth table:
T = [A, B,C. L] = [00001111, 00110011, 01010101 ; 11001101]
2. Find the fundamental product P represented by each basic rectangle in the K-map in below figures:
(Figures: three K-maps, each with a single basic rectangle of checked squares.)
4. Find all possible minimal sums for each Boolean expression E given by the Karnaugh maps in the
below figure:
(Figures: three K-maps with checked squares.)
Case of Four Variables. The K-map corresponding to a Boolean expression E = E( x, y, z, t ) with four
variables has four rows, labeled xy, xy′, x′y′, x′y, and four columns, labeled zt, zt′, z′t′, z′t. Each square
corresponds to the minterm indicated by the labels of the row and column of the square. Observe that
the top line and the left side are labeled so that adjacent products differ in precisely one literal.
Again we must identify the left edge with the right edge (as we did with three variables) but we must
also identify the top edge with the bottom edge. These identifications give rise to a donut-shaped surface
called a torus, and we may view our map as really being a torus. A basic rectangle in a four-variable
Karnaugh map is a square, two adjacent squares, four squares which form a one-by-four or two-by-two
rectangle, or eight squares which form a two-by-four rectangle. These rectangles correspond to
fundamental products with four, three, two or one literal, respectively. Again maximal basic rectangles
are the prime implicants. The minimization technique for a Boolean expression E( x, y, z, t ) is the same
as before.
3.4.8. Example. Find the fundamental product P represented by the basic rectangle in the Karnaugh
maps shown in below figures.
(Figures: three four-variable K-maps, each with a basic rectangle of checked squares.)
f ( s0 , b ) = s2 ; f ( s1, b ) = s1 ; f ( s2 , b ) = s1
g( s0 , b ) = y , g( s1, b ) = z , g( s2 , b ) = y
Next, we study the ways to represent a finite state machine diagrammatically.
4.2.1. Transition (state) Table and transition (state) diagram.
There are two ways of representing a finite state machine M, as explained below:
(A) Transition (State) Table. In this method, the functions f and g are represented by a table. For the
example given above, the transition table is
              f                 g
S \ I      a      b          a      b
s0         s1     s2         x      y
s1         s2     s1         x      z
s2         s0     s1         z      y
(B) Transition (state) Diagram. A transition diagram of a finite state machine M is a labeled directed
graph in which there is a node for each state symbol in S and each node is labeled by a state symbol with
which it is associated. The initial state is indicated by an arrow. Further, if
f( si , aj ) = sk and g( si , aj ) = or , then there is an arrow (arc) from si to sk which is labelled with the pair
( aj , or ). Usually, we put the input symbol aj near the base of the arrow (near si) and the output symbol
or near the centre of the arrow. (Alternatively, we can put the pair aj , or together near the centre of the
arrow.) Using this method, the above example can be represented as
(Figure: transition diagram with states s0, s1, s2; each arc is labelled with its input/output pair, e.g. the
arc from s0 to s1 with (a, x).)
f ( s0 , b ) = s1 ; f ( s1, b ) = s1
g( s0 , b ) = 1 ; g( s1, b ) = 0
              f                 g
S \ I      a      b          a      b
s0         s0     s1         0      1
s1         s1     s1         1      0
(Figure: the corresponding transition diagram.)
Remark. We can regard the finite state machine M = M ( I , S, O, s0 , f , g ) as a simple calculator. We start
with state s0, input a string over I and produce a string of output.
4.2.3. Input and output string. Let M = M ( I , S, O, s0 , f , g ) be a finite state machine. An input string for
M is a string over I. The string y1 y2 ... yn is the output string for M corresponding to the input string
x1 x2 ... xn , where
si = f( si−1 , xi ) and yi = g( si−1 , xi ) for i ∈ {1, 2, ..., n}.
4.2.4. Example. Consider again the machine M with I = { a, b }, S = { s0 , s1 }, O = { 0, 1 } and
f( s0 , a ) = s0 ; f( s0 , b ) = s1 ; f( s1 , a ) = s1 ; f( s1 , b ) = s1
and g( s0 , a ) = 0 ; g( s0 , b ) = 1 ; g( s1 , a ) = 1 ; g( s1 , b ) = 0
We had shown that M is a finite state machine. Let us find the output string to the input string
aababba
for this machine. Initially, we are in state s0. The first input symbol is a; therefore the output is
g( s0 , a ) = 0 and the machine stays in s0. The next input symbol is again a, so again the output is
g( s0 , a ) = 0 and the machine stays in s0. The next input symbol is b, so the output is g( s0 , b ) = 1 and
there is a change of state to s1. The next input symbol is a, so the output is g( s1 , a ) = 1 and the state
remains s1. Next, b is the input, so the output is g( s1 , b ) = 0 and the state remains s1. Again b is the
input, so the output is g( s1 , b ) = 0 and the state remains s1. The final input is a in state s1, so
g( s1 , a ) = 1 is the last output symbol. Hence the output string is 0011001
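The step-by-step computation above can be sketched as a small simulator (ours, not from the text), with f and g stored as dictionaries keyed by (state, input symbol):

```python
# Run a finite state machine on an input string, collecting the outputs.
def run(f, g, s0, inp):
    state, out = s0, ""
    for sym in inp:
        out += g[(state, sym)]       # output depends on current state and input
        state = f[(state, sym)]      # then move to the next state
    return out

f = {("s0", "a"): "s0", ("s0", "b"): "s1", ("s1", "a"): "s1", ("s1", "b"): "s1"}
g = {("s0", "a"): "0",  ("s0", "b"): "1",  ("s1", "a"): "1",  ("s1", "b"): "0"}

print(run(f, g, "s0", "aababba"))   # 0011001
```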
4.2.5. Example. Consider the finite state machine of the example given above. Let the input string be
abaab. We begin by taking s0 as the initial state. Proceeding in the same way as in the last example, we
find
s0 --(a, x)--> s1 --(b, z)--> s1 --(a, x)--> s2 --(a, z)--> s0 --(b, y)--> s2
so the output string is x z x z y.
              f                 g
S \ I      0      1          0      1
s0         s1     s0         1      0
s1         s3     s0         1      1
s2         s1     s2         0      1
s3         s2     s1         0      0
(ii)
              f                 g
S \ I      0      1          0      1
s0         s1     s0         0      0
s1         s2     s0         1      1
s2         s0     s3         0      1
s3         s1     s2         1      0

(iii)
              f                 g
S \ I      0      1          0      1
s0         s0     s4         1      1
s1         s0     s3         0      1
s2         s0     s2         0      0
s3         s1     s1         1      1
s4         s1     s0         1      0
4.2.7. Exercise. Give the state tables for finite state machines with the following diagram:
(i) (Figure: transition diagram with states s0, s1, s2, s3 and arcs labelled with input/output pairs.)
(ii) (Figure: transition diagram with states s0, s1, s2, s3 and arcs labelled with input/output pairs.)
4.2.8. Alphabet and words. Consider a non-empty set A of symbols. A word or string w on the set A is
a finite sequence of its elements. For example, the sequences
u = ababb and v = accbaaa
are words on A = { a , b, c}
We call the set A the alphabet and its elements are called letters.
We can also denote above words u and v as
u = abab^2 , v = ac^2 ba^3
The empty sequence of letters, denoted by λ or ε or 1, is also considered to be a word on A, called the
empty word.
The set of all words on A is denoted by A*.
The length of a word u, written |u| or l(u), is the number of elements in its sequence of letters. For above
words u and v, we have l (u ) = 5 and l ( v ) = 7
4.2.9. Concatenation. Consider two words u and v on an alphabet A. The concatenation of u and v,
written uv, is the word obtained by writing down the letters of u followed by the letters of v, e.g., for the
above words u and v, we have
uv = ababbaccbaaa = abab^2 ac^2 ba^3
Powers of A are defined by
A^0 = { λ } , A^1 = A = { a, b, c } ,
A^2 = AA = { uv : u ∈ A, v ∈ A } ,
A^3 = A^2 A = { uv : u ∈ A^2 , v ∈ A } and so on.
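In code, words over A are simply strings, concatenation is string concatenation, and A^n can be enumerated; this sketch (ours) checks the facts about u, v and A^2 above:

```python
# Words over the alphabet A = {a, b, c}: length, concatenation and A^2.
from itertools import product

A = {"a", "b", "c"}
u, v = "ababb", "accbaaa"

assert len(u) == 5 and len(v) == 7       # l(u) = 5, l(v) = 7
assert u + v == "ababbaccbaaa"           # the concatenation uv

A2 = {"".join(p) for p in product(A, repeat=2)}
print(len(A2))                           # |A^2| = |A|^2 = 9
```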
4.2.11. Generalization of f and g in finite state machine. Consider a sequence x0 x1 x2 ... of input
symbols. Let the initial state be s0, the next state s1 for the input x0 is given by
s1 = f ( s0 , x 0 ) = f1 ( s0 , x 0 ) , say where f = f1 : S × I → S
Now, the next change in the state is due to the input symbol x1 and the next state is
s2 = f( s1 , x1 ) = f( f1( s0 , x0 ) , x1 )
= f2( s0 , x0 x1 ) , say, where f2 : S × I^2 → S .
Continuing in this way, sn = f( sn−1 , xn−1 ) = fn( s0 , x0 x1 x2 ..... xn−1 ) , where fn : S × I^n → S .
In the same way, we can define output symbols O0 , O1 , O2 , ... as given below:
O0 = g( s0 , x0 ) = g1( s0 , x0 ) , say
O1 = g( s1 , x1 ) = g( f1( s0 , x0 ) , x1 )
= g2( s0 , x0 x1 ) , say
O2 = g( s2 , x2 ) = g( f2( s0 , x0 x1 ) , x2 )
= g3( s0 , x0 x1 x2 ) , say
where g2 : S × I^2 → O and g3 : S × I^3 → O . In general,
On−1 = g( sn−1 , xn−1 ) = g( fn−1( s0 , x0 x1 .... xn−2 ) , xn−1 )
= gn( s0 , x0 x1 ..... xn−1 )
4.3.1. Equivalent States. Let M = M( I, S, O, s0 , f , g ) be a finite state machine. Then two states si and sj
are said to be equivalent if and only if g( si , x ) = g( sj , x ) for every word x ∈ I* , where I* is the set of all
words on I, and we write si ≡ sj .
Remark. (i) The relation ‘≡’ is an equivalence relation, that is, it is reflexive, symmetric and transitive.
(ii) Clearly, equivalence of states is a generalization of k-equivalence of states; that is, si ≡ sj implies
that si and sj are k-equivalent for every positive integer k, but not conversely.
(iii) Two states are said to be equivalent if and only if they produce the same output for any input
sequence.
4.3.2. Theorem. Let s be any state in a finite state machine and let x and y be any two words. Then, we
have
(i) f ( s, xy ) = f ( f ( s, x ) , y ) (ii) g( s, xy ) = g ( f ( s, x ), y )
Proof. (i) We shall give the proof by induction on |y|, the length of y. Let the length of y be 1 and let
y = a, where a ∈ I . Then, by the generalization of f, we know that
f( s, xa ) = f( f( s, x ) , a )
which shows that the result is true for length one. Assume that the result is true for any word y of length
n, i.e., f( s, xy ) = f( f( s, x ) , y ) , where |y| = n. Let ya be a word of length n + 1, where a ∈ I . Then
f( s, x( ya ) ) = f( s, ( xy )a ) = f( f( s, xy ) , a ) = f( f( f( s, x ) , y ) , a )   [By induction]
Taking s′ = f( s, x ) , we get
f( s, x( ya ) ) = f( f( s′ , y ) , a ) = f( s′ , ya ) = f( f( s, x ) , ya )
which proves (i) for words of length n + 1 and hence, by induction, for all y.
(ii) Again we use induction on |y|. For y = a ∈ I , the generalization of g gives g( s, xa ) = g( f( s, x ) , a ),
which shows that the result is true for length one. Assume that the result is true for any word y of length
n, i.e., g( s, xy ) = g( f( s, x ) , y ) , where |y| = n. Let ya be a word of length n + 1. Then
g( s, x( ya ) ) = g( f( s, xy ) , a ) = g( f( f( s, x ) , y ) , a )
Taking s′ = f( s, x ) , we get
g( s, x( ya ) ) = g( f( s′ , y ) , a ) = g( s′ , ya ) = g( f( s, x ) , ya )
which completes the induction.
4.3.3. Theorem. If two states in a finite state machine are equivalent then their next states must be
equivalent. OR
If the states si and sj are equivalent in a finite state machine M, then f( si , x ) ≡ f( sj , x ) for any input
sequence x.
Proof. Since si ≡ sj , we have g( si , xy ) = g( sj , xy ) for any input word xy. By Theorem 4.3.2 (ii), this
gives g( f( si , x ) , y ) = g( f( sj , x ) , y ) for every word y, and hence f( si , x ) ≡ f( sj , x ).
              f                 g
S \ I      0      1          0      1
s0         s5     s3         0      1
s1         s1     s4         0      0

and
              f                 g
S \ I      0      1          0      1
s′0        s′3    s′2        0      1
Let M = M( I, S, O, s0 , f , g ) and M′ = M′( I, S′, O, s′0 , f ′, g′ ) be two finite state machines with the
same input and output sets. A mapping φ : S → S′ is called a homomorphism of M into M′ if
φ( f( s, a ) ) = f ′( φ( s ), a ) and φ( g( s, a ) ) = g′( φ( s ), a ) for all a ∈ I .
If φ is also a one-one and onto function, then M and M′ are said to be isomorphic.
4.3.7. Finite State Automaton. This is a special kind of finite state machine, defined as follows:
A finite state machine M(I, S, O, s0, f, g) is said to be a finite state automaton if the finite set O of output symbols is {0, 1} and the current state determines the last output.
Those states for which the last output was 1, are called accepting states.
4.3.8. Example. We consider a finite state machine M = M ( I , S, O, s0 , f , g ) where
I = { a , b} , S = { s0 , s1, s2 } , O = { 0, 1} and s0 is the initial state and the function f and g are given as
f(s0, a) = s1 and g(s0, a) = 1
f(s0, b) = s0 and g(s0, b) = 0
f(s1, a) = s2 and g(s1, a) = 1
f(s1, b) = s0 and g(s1, b) = 0
f(s2, a) = s2 and g(s2, a) = 1
f(s2, b) = s0 and g(s2, b) = 0
In tabular form:

            f            g
        a       b      a    b
  s0    s1      s0     1    0
  s1    s2      s0     1    0
  s2    s2      s0     1    0
If we are in state s0, we reached it by f(s0, b) = s0 or f(s1, b) = s0 or f(s2, b) = s0, and then g(s0, b) = 0, g(s1, b) = 0, g(s2, b) = 0, that is, the last output is 0. If we are in state s1, we reached it by f(s0, a) = s1, and then g(s0, a) = 1, so the last output is 1. Similarly, if we are in state s2, we reached it by f(s1, a) = s2 or f(s2, a) = s2, and then g(s1, a) = 1, g(s2, a) = 1, i.e., the last output is 1. Thus M is a finite state automaton. Since the last output is 1 exactly when we are in states s1 and s2, the states s1 and s2 are accepting states.
In the transition diagram of a finite state automaton, the accepting states are represented by the
double circles. Keeping this in mind, the transition diagram can be described as follows: edges s0 --a/1--> s1 --a/1--> s2, a loop a/1 at s2, a loop b/0 at s0, and edges b/0 from s1 and from s2 back to s0; the accepting states s1 and s2 are drawn as double circles.
Finite State Machines, Languages and Grammars
Remark. (i) It is clear that in a finite state automaton, the output symbol for accepting states is 1 and
output symbol for non-accepting states is 0. So, sometimes, we can omit the output symbols from the
transition diagram. Hence output symbols may be omitted from above transition diagram.
(ii) From the above example, we observe that a finite state machine is a finite state automaton if O = {0, 1} and, for every state, all incoming edges to that state have the same output label. Further, incoming edges into an accepting state have output 1 and incoming edges into non-accepting states have output 0.
So, an alternate definition, without output, of a finite state automaton is
4.4. Finite State Automaton. A finite state automaton (FSA) consists of
(i) A finite set I of input symbols.
(ii) A finite set S of states.
(iii) A subset A of S of accepting (or yes) states.
(iv) An initial state s0 ∈ S.
(v) A next-state function f : S × I → S.
4.4.1. Example. Let I = {a, b}, S = {s0, s1, s2}, A = {s0, s1} (the yes states, or accepting states), and let s0 be the initial state. The next-state function f : S × I → S is given as
f(s0, a) = s0, f(s1, a) = s0, f(s2, a) = s2
f(s0, b) = s1, f(s1, b) = s2, f(s2, b) = s2
In tabular form:

        f
        a    b
  s0    s0   s1
  s1    s0   s2
  s2    s2   s2
The transition diagram has an a-loop at s0, edges s0 --b--> s1, s1 --a--> s0, s1 --b--> s2, and a loop labelled a, b at s2; the accepting states s0 and s1 are drawn as double circles.
4.4.2. Exercise. Let I = {a, b}, S = {s0, s1, s2}, A = {s2} and s0 is the initial state. The next state
function f : S × I → S is given by the table
        f
        a    b
  s0    s0   s1
  s1    s0   s2
  s2    s0   s2
Hence we can say that FSA M accepts x1 x2 ...xn iff the path P ends at an accepting state.
4.4.4. Example. Let the FSA has the following transition diagram.
(Transition diagram: among its edges are s0 --a--> s1, a b-loop at s1, s1 --a--> s2 and s2 --a--> s1; the accepting state s2 is drawn as a double circle.)
where I = {a, b}, S = {s0, s1, s2}, A = {s2} and s0 is the initial state. Does this FSA accept the strings given below?
We compute, using Theorem 4.3.2 (i):
f(s0, abbaa) = f(f(s0, abba), a)
= f(f(f(s0, abb), a), a)
= f(f(f(f(s0, ab), b), a), a)
= f(f(f(f(f(s0, a), b), b), a), a)
= f(f(f(f(s1, b), b), a), a) = f(f(f(s1, b), a), a)
= f(f(s1, a), a) = f(s2, a) = s1 ∉ A
The final state is not an accepting state, so the given string abbaa is not accepted by the above FSA. In short, the path determined by the word abbaa is
s0 --a--> s1 --b--> s1 --b--> s1 --a--> s2 --a--> s1 ∉ A
I = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}, S = {s0, s1, s2}, A = {s0},
where a, b, c denote the digit classes a = {0, 3, 6, 9}, b = {1, 4, 7}, c = {2, 5, 8}, and
f(s0, a) = s0, f(s0, b) = s1, f(s0, c) = s2
f(s1, a) = s1, f(s1, b) = s2, f(s1, c) = s0
f(s2, a) = s2, f(s2, b) = s0, f(s2, c) = s1
Draw the transition table and transition diagram for this FSA. Does this automaton accept
(i) 258 (ii) 104 (iii) 142 (iv) 317
(v) 1247 (vi) 1947 (vii) 2001 (viii) 12045
4.5. Construction of finite state automata.
For a finite state automaton M, L(M) denotes the subset of all words of I*, which are accepted by M.
L(M) is said to be a language which, we shall discuss later on.
4.5.1. Example. Let I = {a, b} . Construct a finite state automaton which will accept precisely those
words on I which end in two b’s.
OR
Let I = {a, b}. Design a FSA M such that L(M) contains those words which end in two b’s.
Solution. Let the initial state be s0. Since bb is accepted, but not λ or b, we need three states: let s1 be the state reached after reading a single b and s2 the state reached after reading two successive b's.
Thus the partial transition diagram (keeping in mind that s2 must be an accepting state and s0, s1 are non-accepting states) is
s0 --b--> s1 --b--> s2
Now, f(s0, a) cannot be equal to s1 or s2, since in that case ab and a would be accepted. So we must have f(s0, a) = s0. Again, we cannot take f(s1, a) = s1 or s2, since in these cases bab and ba would be accepted. So we must have f(s1, a) = s0.
Next, f(s2, b) cannot be equal to s0 or s1, since in that case bbb would not be accepted, but it must be. So we must have f(s2, b) = s2. Also, we cannot take f(s2, a) = s1 or s2, since in that case bbab and bba would be accepted. So we must have f(s2, a) = s0.
These additional conditions give the required automaton M = M(I, S, A, s0, f), whose transition diagram has an a-loop at s0, edges s0 --b--> s1 --b--> s2, a b-loop at s2, and a-edges from s1 and from s2 back to s0; s2 is drawn as a double circle.
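A short sketch of the finished automaton: the dictionary transcribes the transitions derived above, and the function name accepts is ours.

```python
# DFSA accepting exactly the words over {a, b} that end in two b's.
f = {('s0', 'a'): 's0', ('s0', 'b'): 's1',
     ('s1', 'a'): 's0', ('s1', 'b'): 's2',
     ('s2', 'a'): 's0', ('s2', 'b'): 's2'}
ACCEPT = {'s2'}

def accepts(word):
    state = 's0'
    for x in word:
        state = f[(state, x)]
    return state in ACCEPT

assert accepts('bb') and accepts('abb') and accepts('babb')
assert not accepts('') and not accepts('b') and not accepts('bba')
```

A brute-force comparison with the condition "ends in bb" over all short words confirms the construction.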
4.5.2. Exercise. (i) Let I = {a, b}. Construct a finite state automaton M which will accept precisely those words over I which have an even number of a's.
(ii) Let I = {a, b}. Construct a finite state automaton M which will accept those words over I which begin with an 'a' followed by zero or more b's.
(iii) Let I = {a, b}. Construct an automaton M such that L(M) will consist of those words where the number of b's is divisible by 3.
(iv) Let I = {0, 1}. Construct a finite state automaton M such that L(M) contains precisely those strings over I that contain no 1's.
(v) Let I = {a, b}. Design a finite state automaton M which accepts precisely those strings which contain exactly three b's.
(vi) Let I = {a, b}. Construct a finite state automaton M that accepts precisely those words which begin with a and end in b.
(vii) Let I = {a, b}. Construct an automaton which will accept the language L(M) = {a^r b^s : r > 0, s > 0}.
(viii) Construct a finite state automaton with I = {a, b} that accepts the set of all strings which start with ab.
4.6. Non-Deterministic Finite State Automaton. A non-deterministic finite state automaton (NDFSA) M = M(I, S, A, s0, f) consists of
(i) a finite set I of input symbols,
(ii) a finite set S of states,
(iii) a subset A of S of accepting states,
(iv) an initial state s0 ∈ S, and
(v) a next-state function f : S × I → P(S), where P(S) is the power set of S.
Remark. (i) From now on, we shall call a finite state automaton a deterministic finite state automaton (DFSA).
(ii) The difference between a NDFSA and DFSA is that in a NDFSA the next state function maps an
ordered pair of state and input letter to a subset of states (all possible next states) instead of to a single
state as in DFSA.
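The remark above can be sketched in code: the next-state function of an NDFSA returns a set of states, and it extends to sets of states and to words by taking unions. The function names and the dictionary encoding are illustrative, not from the text; the example table is the one from Exercise 4.6.1 (i) below.

```python
def step(f, states, letter):
    """All states reachable from any state in `states` on `letter`."""
    nxt = set()
    for s in states:
        nxt |= f.get((s, letter), set())   # missing entries act as the empty set
    return nxt

def run(f, start, word):
    """Set of possible final states after reading `word` from `start`."""
    states = {start}
    for letter in word:
        states = step(f, states, letter)
    return states

# The NDFSA of Exercise 4.6.1 (i):
f = {('s0', 'b'): {'s1', 's2'},
     ('s1', 'a'): {'s2'}, ('s1', 'b'): {'s0', 's1'},
     ('s2', 'a'): {'s0'}}

# Reading 'b' leads to {s1, s2}; then 'a' gives f(s1, a) ∪ f(s2, a).
assert run(f, 's0', 'ba') == {'s0', 's2'}
```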
4.6.1. Exercise. Draw the transition diagram for the NDFSA.
(i) Let the given NDFSA be M = M(I, S, A, s0, f), where I = {a, b}, S = {s0, s1, s2}, A = {s0}, and the next-state function f is given by the table

        f
        a         b
  s0    φ         {s1, s2}
  s1    {s2}      {s0, s1}
  s2    {s0}      φ
(ii) where I = {a, b}, S = {s0, s1, s2, s3}, A = {s2, s3} and the next-state function f is given by the following transition table:

        f
        a               b
  s0    {s0, s1}        {s3}
  s1    {s0}            {s1, s3}
  s2    φ               {s0, s2}
  s3    {s1, s2, s3}    {s1}
(iii) For the NDFSA given by a transition diagram (its states include s2, with edges labelled a and b), draw the transition table and give the next-state function.
4.6.2. Definition. Let M = M(I, S, A, s0, f) be a non-deterministic finite state automaton. We say that a string is accepted by M if, starting from the initial state s0, at least one of the final states to which the string can lead the automaton is an accepting state, i.e., the set of possible final states intersects the set A.
It should be noted that the null string λ is accepted by M if and only if s0 ∈ A i.e., initial state is an
accepting state. The set of all strings which are accepted by NDFSA, M is denoted by AC(M).
4.6.3. Equivalent non-deterministic finite state automata. Let M and M′ be two NDFSAs. They are said to be equivalent if
AC( M ) = AC( M ′)
that is, set of strings accepted by M is same as the set of strings accepted by M′.
4.6.4. Exercise. Let M = M ( I , S, A, s0 , f ) be a NDFSA with I = { a, b}, S = { s0 , s1 , s2 , s3 , s4 } ,
A = { s2 , s4 }, s0 is the initial state and the next state function is given by following table:
        f
        a           b
  s0    {s0, s3}    {s0, s1}
  s1    φ           {s2}
  s2    {s2}        {s2}
  s3    {s4}        φ
  s4    {s4}        {s4}
4.7.1. Theorem. For every non-deterministic finite state automaton M = M(I, S, A, s0, f) there is a deterministic finite state automaton M′ = M′(I, S′, A′, s′0, f′) which accepts exactly the same strings as M.
Proof. The states of M′ are all the subsets of the set of all states of M, i.e., S′ = P(S).
Clearly, if S has n states, then S′ has 2^n states. We define s′0 = {s0} as the initial state of M′, and A′ as the set of all states in S′ containing an accepting state of M, i.e.,
A′ = {s ∈ S′ : s ∩ A ≠ φ}.
The next-state function f′ is defined by
f′(s, a) = ∪ { f(σ, a) : σ ∈ s } for s ∈ S′, a ∈ I.
We shall prove that M′ accepts the same language as M. For this it is sufficient to prove that for any string x ∈ I*, we have
f′(s′0, x) = f(s0, x)   ......(1)
We use induction on the length of x. If x = λ, then
f′(s′0, λ) = s′0 = {s0} = f(s0, λ)   [By definition of f]
so (1) holds for strings of length 0. We assume that (1) is true for every string x of length k, and prove it for any string of length k + 1, i.e., we shall prove that
f′(s′0, xa) = f(s0, xa), where |x| = k and a ∈ I.
Now,
f′(s′0, xa) = f′(f′(s′0, x), a)
= f′(f(s0, x), a)   [By induction]
= ∪ { f(σ, a) : σ ∈ f(s0, x) }   [By definition of f′]
= f(s0, xa)   [By definition of f]
Finally, x is accepted by M′ iff f′(s′0, x) ∈ A′ iff f(s0, x) ∩ A ≠ φ iff x is accepted by M.
Thus x is accepted by M′ iff x is accepted by M.
This completes the proof of theorem.
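The construction in the proof can be sketched directly: frozensets stand in for the subset-states, only the reachable subsets are generated, and the function name is illustrative.

```python
def subset_construction(I, A, s0, f):
    """f maps (state, letter) -> set of states of an NDFSA; returns the
    (reachable part of the) equivalent DFSA as (S', A', s0', f')."""
    s0_new = frozenset({s0})
    f_new, states, todo = {}, {s0_new}, [s0_new]
    while todo:
        cur = todo.pop()
        for a in I:
            # f'(cur, a) is the union of f(q, a) over q in cur.
            nxt = frozenset().union(*(f.get((q, a), set()) for q in cur))
            f_new[(cur, a)] = nxt
            if nxt not in states:
                states.add(nxt)
                todo.append(nxt)
    A_new = {s for s in states if s & A}   # subsets meeting A are accepting
    return states, A_new, s0_new, f_new

# The NDFSA of Example 4.7.2: I = {a, b}, S = {s0, s1}, A = {s1}.
f = {('s0', 'a'): {'s0', 's1'}, ('s0', 'b'): {'s1'},
     ('s1', 'a'): set(),        ('s1', 'b'): {'s0', 's1'}}
S2, A2, s02, f2 = subset_construction(['a', 'b'], {'s1'}, 's0', f)
```

Running it reproduces the transition table computed by hand in the example below (four reachable subset-states, with {s1} and {s0, s1} accepting).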
4.7.2. Example. Let a NDFSA be defined as M = M ( I , S, A, s0 , f ) , where I = { a, b},
S = { s0 , s1 }, A = { s1 } , s0 is the initial state and next state function f : S × I → P( S) is given by the table
        f
        a           b
  s0    {s0, s1}    {s1}
  s1    φ           {s0, s1}
Solution. Here S′ = P(S) = {φ, {s0}, {s1}, {s0, s1}}, s′0 = {s0}, and
A′ = {s ∈ S′ : s ∩ A ≠ φ} = {{s1}, {s0, s1}}.
The next-state function f′ is given by
f′(s, x) = ∪ { f(σ, x) : σ ∈ s } for s ∈ P(S) = S′.
In particular:
f′({s0}, a) = f(s0, a) = {s0, s1}
f′({s0}, b) = f(s0, b) = {s1}
f ′({ s1 } , a ) = f ( s1, a ) = φ
f ′({ s1 } , b ) = f ( s1, b ) = { s0 , s1 }
f ′({ s0 , s1 } , a ) = f ( s0 , a ) ∪ f ( s1, a ) = { s0 , s1 } ∪ φ = { s0 , s1 }
f ′({ s0 , s1 } , b ) = f ( s0 , b ) ∪ f ( s1, b ) = { s1 } ∪ { s0 , s1 } = { s0 , s1 }
The transition table of the DFSA M′ is:

              f′
              a           b
  φ           φ           φ
  {s0}        {s0, s1}    {s1}
  {s1}        φ           {s0, s1}
  {s0, s1}    {s0, s1}    {s0, s1}
It may be noted here that a state which is never entered may be deleted from the transition diagram. So, the state {s0} can be deleted to obtain the diagram with states φ, {s1} and {s0, s1}: an edge labelled b from {s1} to {s0, s1}, an edge labelled a from {s1} to φ, a loop labelled a, b at {s0, s1}, and a loop labelled a, b at φ; {s1} and {s0, s1} are the accepting states.
Exercise. Consider an NDFSA with I = {0, 1}, S = {s0, s1, s2, s3}, where s0 is the initial state. The next-state function is given by the transition table

        f
        0       1
  s0    {s0}    {s0, s1}
  s1    {s2}    {s2}
  s2    {s3}    {s3}
  s3    φ       φ
Construct a DFSA equivalent to the given NDFSA and draw its transition diagram.
(ii) Do the same for the NDFSA with next-state function given by

        f
        a       b
  s0    φ       {s1, s2}
  s1    {s2}    {s0, s1}
  s2    {s0}    φ
4.8.2. Definition. Let L and M be two languages over an alphabet A. Then the concatenation of L and M is defined as
LM = {uv : u ∈ L, v ∈ M},
that is, LM denotes the set of all words obtained by concatenating a word from L with a word from M. e.g. Suppose L = {a, b^2}, M = {a^2, ab, b^3}; then
LM = {a^3, a^2 b, ab^3, b^2 a^2, b^2 ab, b^5}.
Clearly, the concatenation of languages is associative, since the concatenation of words is associative.
4.8.3. Definition. Powers of a language L are defined as
L^0 = {λ}, L^1 = L, L^2 = LL, L^(n+1) = L^n L for n ≥ 1.
The unary operation L* of a language L, called the Kleene closure of L, is defined as the infinite union
L* = L^0 ∪ L^1 ∪ L^2 ∪ ....... = ∪ { L^k : k ≥ 0 }.
Remark. (i) The definition of L* agrees with the notation A*, which contains all words over A.
(ii) L+ is defined as L+ = L^1 ∪ L^2 ∪ ....... = ∪ { L^k : k ≥ 1 },
i.e., L+ can be obtained from L* by deleting the empty word λ.
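Concatenation, powers and a length-bounded slice of L* can be computed directly from these definitions. The helper names below are ours; kleene_star enumerates only the words of L* up to a given length, since L* itself is infinite whenever L contains a nonempty word.

```python
def concat(L, M):
    """Concatenation LM = {uv : u in L, v in M}."""
    return {u + v for u in L for v in M}

def kleene_star(L, max_len):
    """All words of L* of length at most max_len."""
    result, power = {''}, {''}          # L^0 = {λ}
    while True:
        # Next power L^(k+1) = L^k L, discarding words that are too long.
        power = {w for w in concat(power, L) if len(w) <= max_len}
        if not power - result:          # no new words: stop
            break
        result |= power
    return result

assert concat({'a', 'bb'}, {'c'}) == {'ac', 'bbc'}
assert kleene_star({'a', 'bb'}, 3) == {'', 'a', 'aa', 'aaa', 'bb', 'abb', 'bba'}
```

Dropping '' from the result gives the corresponding slice of L+.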
4.8.4. Regular Expressions. Let A be a non-empty finite alphabet. We shall define a regular expression
r over A and a language L ( r ) over A associated with regular expression r. The expression r and its
corresponding language L ( r ) are defined inductively as follows.
4.8.5. Definition. Each of the following is a regular expression over an alphabet A.
(i) The empty string λ and the pair ( ) (empty expression) are regular expressions.
(ii) Each letter a in A is a regular expression.
(iii) If r is a regular expression, then (r*) is a regular expression.
(iv) If r1 and r2 are regular expressions, then ( r1 ∨ r2 ) is a regular expression.
(v) If r1 and r2 are regular expressions, then (r1 r2) is a regular expression.
All regular expressions are formed in this way.
Remark. (i) Observe that a regular expression r is a special kind of a word (string) which uses the letters
of A and the five symbols ( ), *, ∨, λ, •.
(ii) It should be noted that no other symbols are used for regular expressions.
4.8.6. Definition. The language L(r) over A defined by a regular expression r over A is given as follows.
(i) L(λ) = {λ} and L(( )) = φ, the empty set.
(ii) L(a) = {a}, where a is a letter in A.
(iii) L(r*) = (L(r))*, the Kleene closure of L(r).
(iv) L(r1 ∨ r2) = L(r1) ∪ L(r2).
(v) L(r1 r2) = L(r1) L(r2), the concatenation of the two languages.
(ii) Let r = aa*. Then
L(r) = L(a) (L(a))* = {a}{a}* = {a}{λ, a, a^2, .....}
= {a, a^2, a^3, .....}
(iii) Let r = a ∨ b*. Then L(r) = L(a ∨ b*) = L(a) ∪ L(b*)
= L(a) ∪ (L(b))*
= {a} ∪ {b}* = {a, λ, b, b^2, .....}
(iv) Let r = (a ∨ b ) *
Then L (r ) = L ( ( a ∨ b ) * ) = ( L ( a ∨ b )) *
= ( L ( a ) ∪ L ( b )) * = ({ a } ∪ { b} ) *
= ({ a , b} ) * = A *
Conversely, suppose L is the language over {a, b} consisting of those words beginning with one or more a's followed by one or more b's. Then we can set r = aa*bb*, since L(aa*bb*) is exactly this set.
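As a quick sanity check, Python's re module implements the same operators (| for ∨, * for the Kleene star, juxtaposition for concatenation), so the claim about r = aa*bb* can be tested directly:

```python
import re

# r = aa*bb*: one or more a's followed by one or more b's.
r = re.compile(r'aa*bb*')

assert r.fullmatch('ab') and r.fullmatch('aaabb')
assert not r.fullmatch('')      # λ is not in L(r)
assert not r.fullmatch('ba') and not r.fullmatch('aba')
```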
4.9. Language determined by an automaton. Each automaton M with input alphabet A defines a
language over A, denoted by L(M). The language L(M) of M is the collection of all words from A, which
are accepted by M.
Recall that an automaton M accepts the word w if the final state is an accepting state.
4.9.1. Example. Consider the finite state automaton M given by the following transition diagram: an a-loop at s0, edges s0 --b--> s1, s1 --a--> s0, s1 --b--> s2, and a loop labelled a, b at s2, where s0 and s1 are accepting states and s2 is not. Describe the language L(M).
Solution. From the diagram, we note that s2 is the only non-accepting state and that we cannot leave s2 once it is entered. Thus the strings which lead us into the state s2 are not accepted, and a string leads into s2 exactly when it contains two successive b's. Hence L(M) consists of all strings over {a, b} which do not have two successive b's.
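The claim can be brute-force checked by transcribing the diagram into a table (our reading of the diagram) and comparing the automaton's verdict with the condition "no bb substring" on all short words:

```python
from itertools import product

# Transitions read off the diagram of Example 4.9.1.
f = {('s0', 'a'): 's0', ('s0', 'b'): 's1',
     ('s1', 'a'): 's0', ('s1', 'b'): 's2',
     ('s2', 'a'): 's2', ('s2', 'b'): 's2'}
ACCEPT = {'s0', 's1'}

def accepts(word):
    state = 's0'
    for x in word:
        state = f[(state, x)]
    return state in ACCEPT

# Exhaustive check over all words of length at most 5.
for n in range(6):
    for w in map(''.join, product('ab', repeat=n)):
        assert accepts(w) == ('bb' not in w)
```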
4.9.2. Exercise. (i) Let M = M ( I , S, A, s0 , f ) be the automaton where I = { a , b} ,
S = { s0 , s1 , s2 } , A = { s1 } , s0 is the initial state and next state function f is given as
        f
        a    b
  s0    s0   s1
  s1    s1   s2
  s2    s2   s2
(ii) Describe the language accepted by the automaton over {a, b} given by a transition diagram with states s0, s1, s2, in which s2 has a loop labelled a, b.
4.9.3. Definition. Consider any word u = x1 x2 ... xn on an alphabet A. Any sequence v = xj xj+1 ... xk of consecutive letters of u is called a subword of u. In particular, the subword v = x1 x2 ... xk, beginning with the first letter of u, is called an initial segment of u.
Remark. In the above example, we can say that L(M) contains those strings which have ba as a subword, e.g. aabab, ababbb etc.
4.9.4. Exercise. Describe the language L(M) accepted by the automaton M given by following
transition diagram:
(Transition diagram with states s0, s1, s2, s3, s4 over the alphabet {a, b}.)
Note. The fundamental relationship between regular languages and automata is given by the following
theorem:
4.9.5. Kleene theorem. (without proof). A language L over an alphabet A is regular iff there is a finite
state automaton M such that L = L ( M ) .
4.9.6. Pumping Lemma. Suppose M is an automaton over an alphabet A such that
(i) M has k states, and (ii) M accepts a word w from A with |w| > k.
Then w can be written as w = xyz, where y is nonempty, such that for every positive integer m the word wm = x y^m z is accepted by M.
Proof. Suppose w = a1 a2 ... an is a word over A accepted by M such that |w| = n > k, where k is the number of states. Let P = (s0, s1, ..., sn) be the corresponding sequence of states determined by the word w.
Since n > k, at least two states in P must be equal, say si = sj where i < j.
Let x = a1 a2 ... ai, y = ai+1 ... aj, z = aj+1 ... an. Then, clearly, w = xyz.
We observe that x must end in state si and xy must end in sj = si, i.e., x and xy both end in si. Also xyz
ends in sn. So the relevant part of the transition diagram of M is: a path labelled x from s0 to si = sj, a loop labelled y at si, and a path labelled z from si to sn.
From the above diagram it is obvious that xy^2, xy^3, xy^4, ..., xy^m (for every positive integer m) also end in si. Thus for every m, wm = x y^m z ends in sn, which is an accepting state.
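The proof is constructive: running the automaton and recording the first repeated state yields the split w = xyz. A sketch (the automaton used for the demonstration is the "ends in bb" machine of Section 4.5.1, and the function name is ours):

```python
def pump_split(f, s0, word):
    """Return (x, y, z) with word = xyz and y a nonempty loop on some state."""
    seen = {s0: 0}                 # state -> position at which it was first seen
    state = s0
    for j, a in enumerate(word, start=1):
        state = f[(state, a)]
        if state in seen:          # states s_i = s_j with i < j found
            i = seen[state]
            return word[:i], word[i:j], word[j:]
        seen[state] = j
    raise ValueError('no repeated state; word not longer than the state count')

# The 3-state automaton accepting words ending in bb, and a word of length > 3:
f = {('s0', 'a'): 's0', ('s0', 'b'): 's1',
     ('s1', 'a'): 's0', ('s1', 'b'): 's2',
     ('s2', 'a'): 's0', ('s2', 'b'): 's2'}
x, y, z = pump_split(f, 's0', 'abbb')
assert y != '' and x + y + z == 'abbb'
```

Since y returns the run to the same state, x y^m z ends in the same state as w for every m, which is the content of the lemma.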
4.9.7. Example. Show that the language L = {a^n b^n : n > 0} is not regular.
Solution. We assume, on the contrary, that L is regular. Then, by Kleene's theorem, there exists a finite state automaton M which accepts L. Suppose M has k states. Let w = a^k b^k. Then |w| > k, so by the pumping lemma, w = xyz where y is not empty and wm = x y^m z, m > 0, is also accepted by M. In particular,
w2 = x y^2 z is accepted by M.
Now, if y contains only a’s or only b’s, then w2 will not have the same number of a’s and b’s. If y
contains both a’s and b’s then w2 will have a’s following b’s. In both cases, w2 does not belong to L,
which is a contradiction. Thus L is not regular.
4.10. Grammars. A phrase-structure grammar, or simply a grammar, G consists of four parts:
(i) A finite set N of non-terminal symbols.
(ii) A finite set T of terminal symbols, where N ∩ T = φ.
(iii) A finite subset P of ((N ∪ T)* − T*) × (N ∪ T)*, called the set of productions. A production is an ordered pair (α, β), usually written as α → β, where α ∈ (N ∪ T)* − T*, i.e., α must contain at least one non-terminal symbol, and β ∈ (N ∪ T)*, i.e., β can contain any combination of non-terminal and terminal symbols.
(iv) A starting symbol σ ∈ N .
We shall denote a grammar G defined above by
G = G( N, T , P, σ)
Remark. (i) Terminals will be denoted by lower case letters a, b, c,... and non-terminals will be denoted
by capital letters A, B, C,... with σ as starting symbol.
(ii) Sometimes, we define a grammar G by only giving its productions, assuming implicitly that σ is the
starting symbol and that the terminals and non-terminals of G are only those appearing in the
productions.
4.10.1. Example. Let N = { σ, A} , T = { a , b} , P = { σ → b, σ → bA, A → aA, A → b} where σ is the
starting symbol. Then G = G( N, T , P, σ) is a grammar.
Remark. Productions of above example can be given as σ → ( b, bA ) and A →( aA, b ) .
4.10.2. Definition. Let G = (N, T, P, σ). Let α → β be any production and let x, y be strings over terminals and non-terminals, i.e., x, y ∈ (N ∪ T)*. Then we say that xβy is directly derivable from xαy, and we write
xαy ⇒ xβy.
Further, if xi ∈ (N ∪ T)* for i = 1, 2, ......., n and xi+1 is directly derivable from xi, then we say that xn is derivable from x1 and we write x1 ⇒ xn.
We call x1 ⇒ x2 ⇒ ........ ⇒ xn a derivation of xn from x1.
4.10.3. Definition. Let G = G(N, T, P, σ) be a grammar. The language accepted (or generated) by the grammar G, denoted by L(G), contains all words over T that can be derived from the starting symbol, i.e., L(G) = {w ∈ T* : σ ⇒ ........... ⇒ w}.
4.10.4. Example. Consider the grammar with productions σ → bσ , σ → aA, A → bA, A → b . Find the
language L(G) accepted by this grammar.
Solution. We observe that the string bbab can be derived from σ by the derivation
σ ⇒ bσ ⇒ bbσ ⇒ bbaA ⇒ bbab
If we apply the production σ → bσ n times, then apply σ → aA, then apply A → bA m times, and finally apply A → b, we get the derivation
σ ⇒ bσ ⇒ bbσ ⇒ ........ ⇒ b^n σ ⇒ b^n aA ⇒ b^n abA
⇒ b^n abbA ⇒ ........ ⇒ b^n ab^m A ⇒ b^n ab^(m+1),
where n ≥ 0, m ≥ 0.
On the other hand, no sequence of productions can produce two or more a’s and also string will end
precisely in b.
Hence L(G) = { b n ab m +1 : n ≥ 0, m ≥ 0} .
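The answer can be confirmed by a small breadth-first search over sentential forms, enumerating L(G) up to a length bound; the encoding of σ as 'S' and the pruning bound are our own choices.

```python
from collections import deque

# Productions of Example 4.10.4: σ → bσ | aA, A → bA | b.
productions = {'S': ['bS', 'aA'], 'A': ['bA', 'b']}

def language(max_len):
    """All terminal words of length <= max_len derivable from 'S'."""
    words, seen, queue = set(), {'S'}, deque(['S'])
    while queue:
        s = queue.popleft()
        if len(s) > max_len + 1:        # sentential form already too long
            continue
        nts = [i for i, c in enumerate(s) if c in productions]
        if not nts:                     # no non-terminals: a terminal word
            if len(s) <= max_len:
                words.add(s)
            continue
        i = nts[0]                      # expand the leftmost non-terminal
        for rhs in productions[s[i]]:
            t = s[:i] + rhs + s[i + 1:]
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return words

expected = {'b' * n + 'a' + 'b' * (m + 1)
            for n in range(4) for m in range(4) if n + m + 2 <= 5}
assert language(5) == expected
```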
4.10.5. Example. Find the language L(G) generated by the grammar G, where
N = { σ, A, B} , T = { a , b} and productions P = { σ → aB, B → b, B → bA, A → aB} .
Solution. Here we observe that we can use the first production only once, since the starting symbol σ does not appear anywhere else. Also, we can only obtain a terminal word by finally using the second production. In between, we alternately add b's and a's using the third and fourth productions.
We can describe the above process by the following derivations:
σ ⇒ aB ⇒ ab
σ ⇒ aB ⇒ abA ⇒ abaB ⇒ abab
σ ⇒ aB ⇒ abA ⇒ abaB ⇒ ababA ⇒ ababaB ⇒ ababab
and so on. Hence, we get
L(G) = {(ab)^n : n ∈ N}
4.10.6. Exercise. (i) Find the language L(G) over {a, b, c} generated by the grammar G with
productions σ → a σ b, a σ → Aa , Aab → c .
4.11. Type of Grammars. Grammars are classified in terms of context sensitive, context free and
regular as follows:
4.11.1. Definition. Let G = (N, T, P, σ) be a grammar and let λ be the null string. Then the grammar G is said to be context-sensitive, or type-1, if every production is of the form
αAα′ → αβα′, where A ∈ N, α, α′ ∈ (N ∪ T)* and β ∈ (N ∪ T)* − {λ}.
The name context-sensitive comes from the fact that we can replace the variable (non-terminal) A by β in a word only when A lies between α and α′.
Further, it must be noted that for the production
αAα′ → αβα′
the length of the left side αAα′ is less than or equal to the length of the right side αβα′, since β ≠ λ.
So, |αAα′| ≤ |αβα′|.
Hence, a type-1 or context-sensitive grammar is one in which the length of the left side of every production is less than or equal to the length of the right side of the production.
4.11.2. Definition. A grammar G = G( N, T , P, σ) is said to be context-free or type-2 if every production
is of the form A → β where A ∈ N and β ∈( N ∪ T ) * , that is, left side of every production must
be a single non-terminal and right-side is any word in one or more symbols.
The name context free comes from the fact that we can now replace the variable A by β regardless of
where A appears.
4.11.3. Definition. A grammar G = G(N, T, P, σ) is said to be a regular or type-3 grammar if every production is of the form A → a, or A → aB, or A → λ; that is, the left-hand side is a single non-terminal and the right side is λ, a single terminal, or a terminal followed by a non-terminal.
Remarks. (i) Clearly a type-3 grammar is always a type-2 grammar and a type-2 grammar, if it does not
contain the productions of the form A → λ, is a type -1 grammar.
(ii) If a grammar is not of any type i.e., type-1, type-2 and type-3 then it is said to be type -0 grammar.
Thus a type-0 grammar has no restrictions on its productions and hence every grammar is a type-0
grammar.
4.11.4. Example. Determine the type of grammar G which contains the productions
(v) σ → aAB, AB → a , A → b, B → AB
Solution. (i) The production σ → aAB means that the grammar is not regular. But every production is of the form A → β, i.e., the left side is a single non-terminal. So G is type-2, i.e., context-free.
(ii) The production aA → b says that grammar is not of type-1, type-2 or type-3. So, G is type-0
grammar.
(iii) G is a regular or type-3 grammar since each production has the form A → a or A → aB.
(iv) Each production is of the form A → α i.e., a non-terminal on the left, hence G is a context-free or
type-2 grammar.
(v) The production AB → a means G is a type-0 grammar.
4.11.5. Exercise. (i) What is the type of a grammar G defined by T = {a, b, c}, N = {σ, A, B, C, D, E},
starting symbol σ and productions are
σ → aAB, σ → aB, A → aAc, A → ac, B → Dc
D → b, CD → CE , CE → DE , DE → DC, Cc → Dcc
4.11.6. Theorem (without proof) . A language L can be generated by a regular grammar if and only if L
is a regular language.
4.11.7. Example. Consider the language L = {a^n b^n : n > 0}. Find a context-free grammar G which generates the language L.
Solution. Here T = {a, b}
If we consider the productions σ → ab, σ → aσb, then we note that
σ ⇒ aσb ⇒ aabb = a^2 b^2
σ ⇒ aσb ⇒ aaσbb ⇒ aaabbb = a^3 b^3
Hence L ( G ) = { a n b n : n > 0} .
So, the grammar G with productions σ → ab and σ → a σ b generates the language and clearly G is
context-free.
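Under the two productions σ → ab and σ → aσb, each extra application of σ → aσb wraps one more a...b pair around the word, which a few lines of Python can mirror:

```python
def words(max_n):
    """{a^n b^n : 1 <= n <= max_n}, built by repeated use of σ → aσb."""
    w, out = 'ab', []                   # σ → ab gives the smallest word
    for _ in range(max_n):
        out.append(w)
        w = 'a' + w + 'b'               # one more application of σ → aσb
    return out

assert words(3) == ['ab', 'aabb', 'aaabbb']
```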
4.11.8. Exercise. Can we find a regular grammar G which generates the language L = {a^n b^n : n > 0}?
4.12. Derivation Trees. The derivation of a word in the language of a context-free grammar can be pictured by a derivation (or parse) tree: the root of the tree is the starting symbol σ, and each time a production A → α is applied, the symbols of α are added, from left to right, as the children of the node labelled A. (The figures shown here built such a tree step by step, applying the productions σ → aAB, A → Bba, B → bB and B → c in turn.)
The sequence of leaves read from left to right is the derived word w, i.e., w = acbabc. It should be noted that every leaf of the tree is a terminal symbol and every non-leaf is a non-terminal symbol. If A is any non-leaf whose immediate successors (children) form a word α, then A → α is a production of G. For instance, in figure (ii) above, the children of A form the word Bba, and so A → Bba is a production of G.
4.12.1. Exercise. (i) The below figure is the derivation tree of a word w in the language L of a context
free grammar G:
(Derivation tree figure with root σ, internal nodes labelled A and σ, and leaves labelled a and b.)
(a) Find w.
(b) What are the terminals, non-terminals and production of G.
(ii) For the derivation tree of a word shown in the figure (internal nodes A and B, leaves labelled a and b), find the word and the productions used.
A word may have more than one derivation and hence more than one derivation tree. For instance, we have both σ ⇒ aσ ⇒ aσa ⇒ aaa and σ ⇒ σa ⇒ aσa ⇒ aaa, giving two different derivation trees for the word aaa.
4.13. Types of Languages. A language L is said to be context-sensitive, context-free or regular if there exists a context-sensitive, context-free or regular grammar G, respectively, such that L = L(G).
4.13.1. Example. Let G be the grammar given by the productions
σ → aA, A → bbA, A → c.
Find the language L(G) generated by G.
Solution. We can apply the production σ → aA only once, since the starting symbol σ does not appear anywhere else. Then we apply the production A → bbA n times and finally apply the production A → c, to obtain
σ ⇒ aA ⇒ abbA ⇒ abbbbA ⇒ ...... ⇒ ab^(2n)A ⇒ ab^(2n)c, where n ≥ 0.
So, L(G) = {a b^(2n) c : n ≥ 0}.
Here, the grammar G is context-free grammar and so the language L(G) is also context-free.
4.13.2. Backus-Naur Form. There is another notation, called the Backus-Naur form, which is sometimes used for describing the productions of a context-free grammar. In this form
(i) ::= is used instead of →.
(ii) Every non-terminal is enclosed in brackets < >.
(iii) All productions with the same non-terminal left-hand side are combined into one statement, with all the right-hand sides listed on the right of ::=, separated by vertical bars. e.g. the productions A → aB, A → b, A → BC are combined into the one statement
<A> ::= a<B> | b | <B><C>
4.13.3. Example. Rewrite each grammar G given below in Backus-Naur form:
(i) σ → aA, A → aAB, B → b, A → a
Since α = a1 a2 ....... an is accepted by the automaton M, there is a path (σ, s1, s2, ....., sn) such that sn is an accepting state, and we write
σ --a1--> s1 --a2--> s2 --a3--> ........ --> sn−1 --an--> sn
Now, suppose α ∈ L(G) is any string. If α = λ, i.e., the null string, then α must have the derivation σ ⇒ λ, which implies that the production σ → λ is in the grammar G. So σ must be an accepting state in M and α ∈ AC(M).
If α = a1 a2 ....... an, then α has a derivation of the form
σ ⇒ a1 s1 ⇒ a1 a2 s2 ⇒ ........ ⇒ a1 a2 ....... an sn ⇒ a1 a2 ....... an,
using the productions σ → a1 s1, s1 → a2 s2, ......., sn−1 → an sn and sn → λ. This shows that there are edges from σ to s1, from s1 to s2, ......, from sn−1 to sn, labelled with a1, a2, ......., an respectively, in the finite state automaton M, and the production sn → λ shows that sn is an accepting state:
σ --a1--> s1 --a2--> s2 --> ........ --> sn−1 --an--> sn
Now, if in the transition diagram we start with the initial state σ and trace the path σ, s1, s2, ......., sn taking α = a1 a2 ....... an, we observe that the final state reached is sn, which is an accepting state. So α is accepted by M, i.e., α ∈ AC(M)
⇒ L(G) ⊆ AC(M)   ......(2)
Consider the automaton M with states σ and A, where A is the accepting state, given by the transition diagram with a b-loop at σ, a b-loop at A, an a-edge from σ to A and an a-edge from A back to σ.
Construct the regular grammar G for the automaton M and verify that AC(M) = L(G).
Solution. We know that the starting symbol is given by the initial state, so σ is the starting symbol. Also,
N = {σ, A}, the set of states, and T = {a, b}, the set of input symbols,
and the productions, read off from the edges of the diagram, are
P = {σ → bσ, σ → aA, A → bA, A → aσ, A → λ},
where A → λ corresponds to A being an accepting state.
We know, by Example 19 of the last unit, that AC(M) is the set of all strings that contain an odd number of a's. We now show that L(G) is also the set of all strings that contain an odd number of a's.
We use the production σ → bσ (n times) to get σ ⇒ bσ ⇒ .......... ⇒ b n σ , n ≥ 0
Now to eliminate σ, we must use the production σ → aA (which can be used only once at this stage) to
get the string b n aA, n ≥ 0 .....(1)
Now, if we use A → bA (m times) and finally use A → λ, we get the string b^n a b^m, n ≥ 0, m ≥ 0. This is a string containing only one 'a'.
Up to this stage, we have only one 'a'. But if at stage (1) we use A → aσ instead of the fourth and fifth productions, then the string takes the form
b^n aaσ, n ≥ 0.
Now, if we use the production σ → bσ (r times), the number of a's will remain two, and finally, to get rid of σ, we have to use the production σ → aA to get the string
b^n aa b^r aA, n ≥ 0, r ≥ 0.
Conversely, given a regular grammar G = G(N, T, P, σ), we can construct a non-deterministic finite state automaton as follows:
Let I = T, S = N ∪ {F}, where F ∉ N ∪ T, σ is the initial state,
A = {F} ∪ {s : s → λ ∈ P}, and f is defined by
f(s, x) = {s′ : s → xs′ ∈ P} ∪ {F : s → x ∈ P}.
But by above theorem, this regular grammar can be converted into a NDFSA which accepts L(G).
Further, we know that a NDFSA can be converted into a FSA which accepts the same strings as those of
NDFSA. Hence, we can get a finite state automaton M such that
L ( G ) = AC( M ) ......(2)
Books Recommended:
1. Kenneth H. Rosen, Discrete Mathematics and Its Applications, Tata McGraw-Hill, Fourth Edition.
2. Seymour Lipschutz and Marc Lipson, Theory and Problems of Discrete Mathematics, Schaum's Outline Series, McGraw-Hill Book Co., New York.
3. John A. Dossey, Albert D. Otto, Lawrence E. Spence and Charles Vanden Eynden, Discrete Mathematics, Pearson, Fifth Edition.