
M.Sc.

Mathematics (CDOE)
Semester – III
Paper Code – 21MAT23DA1

DISCRETE
MATHEMATICS

CENTRE FOR DISTANCE AND ONLINE EDUCATION


MAHARSHI DAYANAND UNIVERSITY, ROHTAK
(A State University established under Haryana Act No. XXV of 1975)
NAAC ‘A+’ Grade Accredited University
Updated by: Dr. Jagbir Singh
Assistant Professor,
Department of Mathematics,
Maharshi Dayanand University,
Rohtak, Haryana

Copyright © 2022, Maharshi Dayanand University, ROHTAK


All Rights Reserved. No part of this publication may be reproduced or stored in a retrieval system or transmitted
in any form or by any means; electronic, mechanical, photocopying, recording or otherwise, without the written
permission of the copyright holder.
Maharshi Dayanand University
ROHTAK – 124 001
Third Semester
Discrete Mathematics
Paper Code: 21MAT23DA1
Time: 03 Hours Max Marks : 80
Course Outcomes:
Students would be able to:
CO1 Be familiar with fundamental mathematical concepts and terminology of discrete mathematics and
discrete structures.
CO2 Express a logic sentence in terms of predicates, quantifiers and logical connectives.
CO3 Use finite-state machines to model computer operations.
CO4 Apply the rules of inference and contradiction for proofs of various results.
CO5 Evaluate Boolean functions and simplify expressions using the properties of Boolean algebra.
Section - I
Recurrence Relations and Generating Functions, Some number sequences, Linear homogeneous
recurrence relations, Non-homogeneous recurrence relations, Generating functions, Recurrences and
generating functions, Exponential generating functions.
Section – II
Statements, Symbolic Representation and Tautologies, Quantifiers, Predicates and Validity, Propositional
Logic. Lattices as partially ordered sets, their properties, Lattices as Algebraic systems, Sublattices,
Direct products and Homomorphism, Some special lattices, e.g. Complete, Complemented and
Distributive Lattices.
Section – III
Boolean Algebras as Lattices, Various Boolean Identities, The Switching Algebra, Examples,
Subalgebras, Direct Products and Homomorphism, Join-irreducible elements, Atoms and Minterms,
Boolean forms and their equivalence, Minterm Boolean forms, Sum of Products, Canonical forms,
Minimization of Boolean functions, Applications of Boolean Algebra to Switching Theory (using AND,
OR and NOT gates), The Karnaugh method.
Section – IV
Finite State Machines and their Transition Table Diagrams, Equivalence of Finite State Machines,
Reduced Machines, Homomorphism. Finite Automata, Acceptors, Nondeterministic Finite Automata
and equivalence of their power to that of Deterministic Finite Automata, Moore and Mealy Machines.
Grammars and Languages: Phrase-Structure Grammars, Rewriting rules, Derivations, Sentential forms,
Language generated by a Grammar, Regular, Context-Free and Context-Sensitive Grammars and
Languages, Regular sets, Regular Expressions and the Pumping Lemma.
Note : The question paper of each course will consist of five Sections. Each of the sections I to IV will
contain two questions and the students shall be asked to attempt one question from each. Section-V shall
be compulsory and will contain eight short answer type questions without any internal choice covering
the entire syllabus.
Books Recommended:
1. Kenneth H. Rosen, Discrete Mathematics and Its Applications, Tata McGraw-Hill, Fourth Edition.
2. Seymour Lipschutz and Marc Lipson, Theory and Problems of Discrete Mathematics, Schaum
Outline Series, McGraw-Hill Book Co, New York.
3. John A. Dossey, Otto, Spence and Vanden K. Eynden, Discrete Mathematics, Pearson, Fifth
Edition.
4. J.P. Tremblay, R. Manohar, “Discrete mathematical structures with applications to computer
science”, Tata-McGraw Hill Education Pvt.Ltd.
5. J.E. Hopcroft and J.D. Ullman, Introduction to Automata Theory, Languages and Computation,
Narosa Publishing House.
6. M. K. Das, Discrete Mathematical Structures for Computer Scientists and Engineers, Narosa
Publishing House.
7. C. L. Liu and D.P.Mohapatra, Elements of Discrete Mathematics- A Computer Oriented Approach,
Tata McGraw-Hill, Fourth Edition.
CONTENTS

CHAPTER   SECTION   TITLE OF CHAPTER                                 PAGE No.
1         1         Recurrence Relations                             1-14
2         2         Propositions and Lattices                        15-52
3         3         Boolean Algebra                                  53-85
4         4         Finite State Machines, Languages and Grammars    86-122
1
Recurrence Relations
Structure
1.1. Introduction.
1.2. Recurrence Relations.
1.3. Explicit Formula for a Sequence.
1.4. Solution of Recurrence Relations.
1.5. Homogeneous Recurrence Relations with Constant Coefficients.
1.6. Total Solution.
1.7. Recursive Functions.
1.8. The Ackermann Function.
1.9. McCarthy's 91 Function.
1.10. The Collatz Function.
1.11. Generating Functions.
1.12. Convolution of Numeric Functions.
1.13. Exercises.

1.1. Introduction. This chapter contains results related to various types of recurrence relations,
generating functions and finding the relative solutions.
1.1.1. Objective. The objective of the study of these results is to understand the basic concepts and have
an idea to apply them in further studies about recurrence relations.
1.2. Recurrence Relations.
A recurrence relation relates the nth term of a sequence to its predecessors. Such relations arise in the
analysis of recursive algorithms. A recurrence relation for a sequence b0, b1, b2, . . . is a
formula/equation that relates each term bn to certain of its predecessors b0, b1, . . ., bn-1. The initial
conditions for such a recurrence relation specify the values of the first few terms needed to start the
recurrence.
For example, recursive formula for the sequence

3, 8, 13, 18, 23 . . .
is 𝑏1 = 3, 𝑏𝑛 = 𝑏𝑛−1 + 5, 2 ≤ n < ∞.
Here, 𝑏1 = 3 is the initial condition.
Exercise. Find the sequence represented by the recursive formula
𝑏1 = 5, 𝑏𝑛 = 2𝑏𝑛−1 , 2 ≤ n ≤ 6.
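Exercises like this can be spot-checked with a short program. The following Python sketch (illustrative only; it is not part of the original course material) lists the terms generated by a first-order recurrence:

```python
def recurrence_terms(b1, step, count):
    """List the first `count` terms of a recurrence b_n = step(b_{n-1})."""
    terms = [b1]
    for _ in range(count - 1):
        terms.append(step(terms[-1]))
    return terms

# The exercise's recurrence: b_1 = 5, b_n = 2*b_{n-1} for 2 <= n <= 6
print(recurrence_terms(5, lambda b: 2 * b, 6))  # [5, 10, 20, 40, 80, 160]
```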
1.3 Explicit Formula for a Sequence.
Consider the sequence
1, 4, 9, 16, 25, 36, 49, . . .
which is a sequence of the squares of all positive integers.
This sequence is described by the formula
𝑏𝑛 = 𝑛2 , 1 ≤ 𝑛 < ∞.
Thus, each term of the sequence has been described using only its position n. This type of
formula is called an explicit formula.
1.3.1. Exercise. Find the explicit formula for the finite sequence
87, 82, 77, 72, 67
Can this sequence be described by a recursive relation ?
1.3.2. Exercise. Find a recursive formula for the factorial function.
1.3.3. Fibonacci sequence. The sequence
1, 1, 2, 3, 5, 8, 13, 21, 34, . . .
defined by the recurrence relation
𝑓0 = 1 , 𝑓1 = 1 , 𝑓𝑛 = 𝑓𝑛−1 + 𝑓𝑛−2
is called Fibonacci sequence.
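The Fibonacci recurrence is easily evaluated by iteration; a small Python sketch (illustrative only, not part of the original text):

```python
def fib(n):
    # f_0 = 1, f_1 = 1, f_n = f_{n-1} + f_{n-2}
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib(n) for n in range(9)])  # [1, 1, 2, 3, 5, 8, 13, 21, 34]
```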
1.3.4. Example. Derive a recurrence relation for obtaining the amount An at the end of n years on an
investment of Rs 10,000 at 5% interest compounded annually.
Solution. Suppose 𝐴𝑛 = amount at the end of n years.
Then, An = An-1 + interest during the nth year on An-1
= An-1 + (5/100) An-1
= An-1 (1 + 0.05)
= 1.05 An-1
Thus, the recurrence relation for calculating amount becomes
𝐴0 = Rs. 10,000

𝐴𝑛 = 1.05 𝐴𝑛−1 , n>0


Using this recurrence relation, we can compute the value of An for any n. For example,
A1 = 1.05(10000) = (1.05)^1 (10000) = 10500 rupees
A2 = (1.05)(1.05)(10000) = (1.05)^2 (10000)
and, in general,
An = (1.05)^n (10000)
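The agreement between the recurrence and the closed form An = (1.05)^n (10000) can be checked numerically; a small Python sketch (illustrative only, not part of the original text):

```python
def amount(n, principal=10000, rate=0.05):
    # Recurrence: A_n = (1 + rate) * A_{n-1}, A_0 = principal
    a = principal
    for _ in range(n):
        a *= 1 + rate
    return a

print(round(amount(1)))  # 10500
# The closed form principal * (1 + rate)**n agrees up to floating-point error
print(abs(amount(3) - 10000 * 1.05**3) < 1e-6)  # True
```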
1.4. Solution of Recurrence Relations.
A technique for finding an explicit formula for the sequence defined by a recurrence relation is called
backtracking. In this technique the value of an is backtracked, substituting the values of an-1,
an-2 and so on, till a pattern becomes clear.
1.4.1. Example. Find an explicit formula for the recurrence relation
𝑎0 = 1, 𝑎𝑛 = 𝑎𝑛−1 + 2
Solution. The recurrence relation
𝑎0 = 1 , 𝑎𝑛 = 𝑎𝑛−1 + 2
defines the sequence 1, 3, 5, 7, . . .
Backtracking the values, we have
𝑎𝑛 = 𝑎𝑛−1 + 2
𝑎𝑛 = 𝑎𝑛−2 + 2 + 2 = 𝑎𝑛−2 + 2 . 2
= 𝑎𝑛−3 + 2 + 2 + 2 = 𝑎𝑛−3 + 2 . 3
Thus, we have
𝑎𝑛 = 𝑎𝑛−𝑘 + 2k
If we set k = n, then
𝑎𝑛 = 𝑎𝑛−𝑛 + 2n
= 𝑎0 + 2 n = 1+2n,
which is the required explicit formula.
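The explicit formula obtained by backtracking can be verified against the recurrence itself; a quick Python check (illustrative only, not part of the original text):

```python
def a_recursive(n):
    # a_0 = 1, a_n = a_{n-1} + 2
    a = 1
    for _ in range(n):
        a += 2
    return a

# Backtracking gave the explicit formula a_n = 1 + 2n; spot-check it.
print(all(a_recursive(n) == 1 + 2 * n for n in range(20)))  # True
```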
1.4.1. Exercise. Backtrack to find explicit formula for the sequence defined by the recurrence relation
𝑎1 = 1 , 𝑎𝑛 = 3 𝑎𝑛−1 + 1 , n ≥ 2
A sequence 𝑎0 , 𝑎1 , 𝑎2 , . . . is called an arithmetic sequence if and only if there is a constant d such that
𝑎𝑛 = 𝑎𝑛−1 + d, for all integers n ≥ 1.
For example, the recurrence relation
𝑎𝑛 = 𝑎𝑛−1 + 3 , 𝑎1 = 2

defines an arithmetic sequence

2, 5, 8, 11, 14, …
with common difference 3.
1.4.2. Exercise. Let Kn be the figure obtained by drawing n dots (vertices) and joining each pair of
vertices by an edge. Develop a recurrence relation for the number of edges of Kn, and find an explicit
formula for it.
1.4.3. Example. Solve the recurrence relation
pn = a - (b/k) pn-1
for the price in the economic model, where a, b, k are positive parameters and p0 is the initial price.
Solution. To obtain the solution, we use the technique of backtracking, taking
c = - b/k
so that
pn = a + c pn-1
= a + c (a + c pn-2) = a + ac + c² pn-2
In general, we have
pn = a + ac + ac² + . . . + ac^(k-1) + c^k pn-k
If we set k = n, then
pn = a + ac + ac² + . . . + ac^(n-1) + c^n p0
= a (1 + c + c² + . . . + c^(n-1)) + c^n p0
= a (1 - c^n)/(1 - c) + c^n p0
= ak/(k + b) + (- b/k)^n (p0 - ak/(k + b)).
If b/k < 1, then (- b/k)^n becomes very small for large n, and thus the price pn tends to stabilize at
approximately ak/(k + b). If b/k = 1, then pn oscillates between p0 and p1. If b/k > 1, then the
difference between successive prices increases.
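Both forms of the price sequence can be compared numerically. The Python sketch below is illustrative only; the parameter values are hypothetical, chosen so that b/k < 1:

```python
def price_recurrence(n, a, b, k, p0):
    # Iterate p_n = a - (b/k) * p_{n-1} starting from p_0
    p = p0
    for _ in range(n):
        p = a - (b / k) * p
    return p

def price_closed(n, a, b, k, p0):
    # Closed form: p_n = ak/(k+b) + (-b/k)**n * (p0 - ak/(k+b))
    c = -b / k
    stable = (a * k) / (k + b)
    return stable + c**n * (p0 - stable)

a, b, k, p0 = 10.0, 1.0, 2.0, 3.0   # hypothetical parameters, b/k = 0.5 < 1
print(all(abs(price_recurrence(n, a, b, k, p0) - price_closed(n, a, b, k, p0)) < 1e-9
          for n in range(20)))  # True
```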
1.5. Homogeneous Recurrence Relations with Constant Coefficients.
A linear homogeneous recurrence relation of order k with constant coefficients is a recurrence relation of the form
𝑎𝑛 = 𝑐1 𝑎𝑛−1 + 𝑐2 𝑎𝑛−2 + . . . + 𝑐𝑘 𝑎𝑛−𝑘 , 𝑐𝑘 ≠ 0.
For example, the recurrence relation 𝑎𝑛 = (-2) 𝑎𝑛−1 , is a linear homogeneous recurrence relation of

order 1. The recurrence relation 𝑎𝑛 = 𝑎𝑛−1 + 𝑎𝑛−2 is a linear homogeneous recurrence relation of order 2.
The equation
𝑥 𝑘 = 𝑟1 𝑥 𝑘−1 + 𝑟2 𝑥 𝑘−2 + ...........+ 𝑟𝑘
of degree k is called the characteristic equation of the linear homogeneous recurrence relation
𝑎𝑛 = 𝑟1 𝑎𝑛−1 + 𝑟2 𝑎𝑛−2+ ...........+ 𝑟𝑘 𝑎𝑛−𝑘
of order k.
1.5.1. Theorem. If the characteristic equation x²- 𝑟1x-𝑟2 = 0 of the homogeneous recurrence relation
𝑎𝑛 = 𝑟1 𝑎𝑛−1 + 𝑟2 𝑎𝑛−2, has two distinct roots 𝑠1 and 𝑠2 then
𝑎𝑛 = u 𝑠1𝑛 + v 𝑠2𝑛
where u and v depend on the initial conditions, is the explicit formula for the sequence.
Proof. Since 𝑠1 and 𝑠2 are roots of the characteristic equation x²- 𝑟1x-𝑟2 = 0 , we have
𝑠12 - 𝑟1 𝑠1 - 𝑟2 = 0 (1)
𝑠22 - 𝑟1 𝑠2 - 𝑟2 = 0 (2)
Let
𝑎𝑛 = u 𝑠1𝑛 + v 𝑠2𝑛 for n ≥ 1 (3)
It is sufficient to show that (3) defines the same sequence as 𝑎𝑛 = 𝑟1 𝑎𝑛−1 + 𝑟2 𝑎𝑛−2 . We have
𝑎1 = u 𝑠1 + v 𝑠2
𝑎2 = u 𝑠12 + v 𝑠22
and the initial conditions are satisfied. Further,
𝑎𝑛 = u 𝑠1𝑛 + v 𝑠2𝑛
= u 𝑠1𝑛−2. 𝑠12 + v 𝑠2𝑛−2 . 𝑠22
= u 𝑠1𝑛−2. (𝑟1 𝑠1 + 𝑟2 ) + v 𝑠2𝑛−2.( 𝑟1 𝑠2 + 𝑟2 ) (using (1) and (2))
= 𝑟1( u 𝑠1𝑛−1 + v 𝑠2𝑛−1 ) + 𝑟2 (u 𝑠1𝑛−2 + v 𝑠2𝑛−2 )
= 𝑟1 𝑎𝑛−1 + 𝑟2 𝑎𝑛−2 (using the expressions of 𝑎𝑛−1 and 𝑎𝑛−2 from (3))
Hence (3) defines the same sequence as 𝑎𝑛 = 𝑟1 𝑎𝑛−1 + 𝑟2 𝑎𝑛−2 . Hence 𝑎𝑛 = u 𝑠1𝑛 + v 𝑠2𝑛 is the
solution to the given linear homogeneous recurrence relation.
1.5.2. Theorem. If the characteristic equation x²- 𝑟1x-𝑟2 = 0 of the linear homogeneous recurrence
relation 𝑎𝑛 = 𝑟1 𝑎𝑛−1 + 𝑟2 𝑎𝑛−2 has a single (repeated) root s, then the explicit formula (solution) for the
recurrence relation is 𝑎𝑛 = u s^n + v n s^n = (u + vn) s^n , where u and v depend on the initial conditions.
1.5.3. Example. Find an explicit formula for the sequence defined by the recurrence relation

𝑎𝑛 = 𝑎𝑛−1 + 2 𝑎𝑛−2 , n ≥ 2
with the initial conditions
𝑎0 = 1 and 𝑎1 = 8
Solution. The recurrence relation
𝑎𝑛 = 𝑎𝑛−1 + 2 𝑎𝑛−2
is a linear homogeneous relation of order 2. Its characteristic equation is
x² - x - 2 = 0
which yields x = 2,-1.
Hence
𝑎𝑛 = u (2)^n + v (-1)^n (1)
and, we have
𝑎0 = u + v = 1 (given)
𝑎1 = 2u - v = 8 (given)
Solving for u and v, we have
u = 3, v = -2.
Hence
𝑎𝑛 = 3(2)^n - 2(-1)^n , n ≥ 0
is the explicit formula for the sequence.
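The characteristic-root method can be verified by comparing the closed form with direct iteration of the recurrence; a short Python sketch (illustrative only, not part of the original text):

```python
def a_closed(n):
    # Roots of x**2 - x - 2 = 0 are 2 and -1; u = 3, v = -2 from a_0 = 1, a_1 = 8
    return 3 * 2**n - 2 * (-1)**n

def a_recurrence(n):
    a, b = 1, 8               # a_0, a_1
    for _ in range(n):
        a, b = b, b + 2 * a   # a_n = a_{n-1} + 2 a_{n-2}
    return a

print(all(a_closed(n) == a_recurrence(n) for n in range(15)))  # True
```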
1.5.4. Exercise. Solve the recurrence relation
𝑑𝑛 = 2 𝑑𝑛−1 - 𝑑𝑛−2
with initial conditions d1 = 1.5 and d2 = 3.
1.5.5. Exercise. Find explicit formula for Fibonacci sequence.
1.6. Total Solution. The total solution of a linear difference equation
an - r1 an-1 - r2 an-2 - . . . - rk an-k = f(n)
where f(n) is constant or a function of n, with constant coefficients, is the sum of two parts, the
homogeneous solution satisfying the difference equation when the right hand side of the equation is
set to be 0, and the particular solution, which satisfies the difference equation with f(n) on the right
hand side.
1.6.1. Particular Solution of a Difference Equation.
There is no general procedure to find particular solution of a given difference equation. So, the particular
solution is obtained by the method of inspection as discussed in the following cases:

Case 1: If f(n) is a polynomial in n of degree m, then we take


P1 n^m + P2 n^(m-1) + . . . + Pm+1
as the particular solution of the difference equation. Putting this solution in the given difference
equation, the values of P1, P2, . . ., Pm+1 are determined.
1.6.2. Example. Find the total solution of the difference equation
an - an-1 - 2 an-2 = 2n².
Solution. Suppose that the particular solution is of the form
P1 n² + P2 n + P3 (1)
where P1, P2 and P3 are constants to be determined. Substituting (1) in the given difference equation,
we obtain
(P1n² + P2n + P3) - [P1(n-1)² + P2(n-1) + P3] - 2[P1(n-2)² + P2(n-2) + P3] = 2n²
or -2P1n² + (10P1 - 2P2)n + (-9P1 + 5P2 - 2P3) = 2n²
Comparing coefficients of the powers of n, we have
-2P1 = 2 , 10P1 - 2P2 = 0 , 9P1 - 5P2 + 2P3 = 0
which yield
P1 = -1 , P2 = -5 , P3 = -8
Therefore, the particular solution is
-n²-5n-8
The homogeneous solution of this recurrence relation is
u(2)^n + v(-1)^n
where u and v are determined by the initial conditions. Hence the total solution is
u(2)^n + v(-1)^n - n² - 5n - 8
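That the candidate really is a particular solution can be confirmed by substituting it back into the difference equation; a quick Python check (illustrative only, not part of the original text):

```python
def p(n):
    # Candidate particular solution found above
    return -n**2 - 5 * n - 8

# It must satisfy a_n - a_{n-1} - 2 a_{n-2} = 2 n**2 for every n >= 2.
print(all(p(n) - p(n - 1) - 2 * p(n - 2) == 2 * n**2 for n in range(2, 30)))  # True
```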
1.6.3. Exercise. Find the particular solution of the difference equation
an + 5 an-1 + 6 an-2 = 3n² - 2n + 1
Case II: If f(n) is a constant, then the particular solution of the difference equation will also be a
constant P, provided that 1 is not a characteristic root of the difference equation.
1.6.4. Exercise. Find the particular solution of the difference equation
an - 4 an-1 + 5 an-2 = 2
Hence find the total solution of this recurrence relation.
Case III: If f(n) is of the form α^n, the corresponding particular solution is of the form P α^n,
provided that α is not a characteristic root of the difference equation.

1.6.5. Exercise. Find the particular solution of the difference equation


an - 5 an-1 + 4 an-2 = 56 · 3^n
Hence find the total solution of this difference equation.
Case IV: If α is not a characteristic root of the difference equation and f(n) is of the form
( c1 n^m + c2 n^(m-1) + . . . + cm+1 ) α^n
then the particular solution is of the form
( P1 n^m + P2 n^(m-1) + . . . + Pm+1 ) α^n
1.6.6. Exercise. Find the total solution of the difference equation
an - an-1 - 2 an-2 = 3n · 4^n
Case V: If α is a characteristic root of multiplicity m-1 and f(n) is of the form
( c1 n^p + c2 n^(p-1) + . . . + cp+1 ) α^n ,
the corresponding particular solution of the recurrence relation will be of the form
n^(m-1) ( P1 n^p + P2 n^(p-1) + . . . + Pp+1 ) α^n
1.6.7. Exercise. Find the particular solution of the difference equation
an - 4 an-1 = 6 · 4^n
1.6.8. Exercise. Find the total solution of the difference equation
an - 6 an-1 + 9 an-2 = n · 3^n
1.6.9. Exercise. Find the total solution of the difference equation
an - an-1 = 6
1.6.10. Exercise. Find the particular solution of the difference equation
an - 2 an-1 + an-2 = 3
1.6.11. Exercise. Find the particular solution of the difference equation
an - 5 an-1 + 6 an-2 = n + 3^n
1.7. Recursive Functions.
A function is said to be a recursive function if its rule of definition refers to itself. Such functions are
used in the theory of computation in computer science.
1.8. The Ackermann Function.
This function is a standard example in the study of what can and what cannot be computed on a
computer. The Ackermann function is defined on the set of all pairs of non-negative integers by the
recurrence relations
A(m, 0) = A(m-1, 1), m = 1, 2, . . . (1)

A (m, n) = A (m-1, A(m, n-1)), m, n = 1, 2, . . . (2)


and the initial conditions
A (0, n) = n + 1, n = 0, 1, 2, . . . (3)
The Ackermann function grows very rapidly. It appears in the time complexity analysis of certain
algorithms, such as the union/find algorithm.
We note that
(i) A(0, 0) = 0 + 1 = 1 by (3)
(ii) A(1, 1) = A(0, A(1, 0)) by (2)
= A(0, A(0, 1)) by (1)
= A(0, 2) by (3)
= 3 by (3)
(iii) A(1, 2) = A(0, A(1, 1)) by (2)
= A(0, 3) by (ii)
= 4 by (3)
(iv) A(1, 3) = A(0, A(1, 2)) by (2)
= A(0, 4) by (iii)
= 5 by (3)
Exercise. Find A(2, 2).
Remark. It can be shown by mathematical induction that A(1, n) = n + 2, A(2, n) = 2n + 3 and
A(3, n) = 8 · 2^n - 3 for all non-negative integers n.
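The defining equations (1)-(3) translate directly into a recursive program. A Python sketch (illustrative only; the deep recursion makes this practical only for very small m):

```python
def ackermann(m, n):
    # (3): A(0, n) = n + 1; (1): A(m, 0) = A(m-1, 1); (2): A(m, n) = A(m-1, A(m, n-1))
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 2))                                         # 7
print(all(ackermann(1, n) == n + 2 for n in range(8)))         # True
print(all(ackermann(3, n) == 8 * 2**n - 3 for n in range(5)))  # True
```

This also answers the exercise above: A(2, 2) = 2·2 + 3 = 7.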
1.9. McCarthy's 91 Function
The function M : Z⁺ → Z defined by
M(n) = n − 10 if n > 100
M(n) = M(M(n + 11)) if n ≤ 100
for all positive integers n is called McCarthy's 91 function.


We observe that
(1) M(100) = M(M(111)) since 100 ≤ 100
= M(101) since 111 > 100
= 91 since 101 > 100
(2) For 90 ≤ n ≤ 99 we have n + 11 > 100, so M(n) = M(M(n + 11)) = M(n + 1). Hence
M(90) = M(91) = . . . = M(100) = 91
(3) For n < 90, assuming inductively that M(n + 11) = 91, we get M(n) = M(M(n + 11)) = M(91) = 91.
From this argument, it is clear that
M(21) = M(99) = M(100) = M(101) = 91
Interestingly, the value of this function comes out to be 91 for all positive integers less than or equal to
101. Also, M(n) is well defined for n > 101 because then it equals n - 10. Thus, McCarthy's 91
function is well defined.
For example,
M(102) = 102 – 10 = 92
M(106) = 106 – 10 = 96
and so on.
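The definition translates directly into code, which also confirms the values computed above; a Python sketch (illustrative only, not part of the original text):

```python
def mccarthy(n):
    # M(n) = n - 10 if n > 100, else M(M(n + 11))
    if n > 100:
        return n - 10
    return mccarthy(mccarthy(n + 11))

print(all(mccarthy(n) == 91 for n in range(1, 102)))  # True
print(mccarthy(102), mccarthy(106))                   # 92 96
```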
1.10. The Collatz Function. The function F : Z⁺ → Z defined by
F(n) = 1 if n = 1
F(n) = 1 + F(n/2) if n is even
F(n) = F(3n + 1) if n is odd and n > 1
is called the Collatz function.
Collatz conjectured that the function is well defined on the set of all positive integers. At present,
F(n) is known to be well defined for all integers n with 1 ≤ n < 10⁹.
For example
F(1) = 1
F(2) = 1 + F(1) = 1 + 1 = 2
F(3) = F(9 + 1) = F(10) = 1 + F(5) = 1 + F(16)
= 1 + (1 + F(8))
= 1 + (1 + (1 + F(4)))
= 1 + (1 + (1 + (1 + F(2))))
= 1 + (1 + (1 + (1 + (1 + F(1)))))
= 1 + 1 + 1 + 1 + 1 + 1 = 6
and so on .
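A direct implementation confirms these values; a Python sketch (illustrative only, not part of the original text):

```python
def collatz_f(n):
    # F(1) = 1; F(n) = 1 + F(n/2) if n even; F(3n + 1) if n odd and n > 1
    if n == 1:
        return 1
    if n % 2 == 0:
        return 1 + collatz_f(n // 2)
    return collatz_f(3 * n + 1)

print(collatz_f(1), collatz_f(2), collatz_f(3))  # 1 2 6
```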

1.11. Generating Functions.


A function whose domain of definition is the set of natural numbers and whose range is the set of
real numbers is called a discrete numeric function or, simply, a numeric function.
Thus, a sequence is a numeric function. For a numeric function a we use a0, a1, a2, . . ., an, . . . to denote
the values of the function at 0, 1, 2, . . ., n, . . . . Instead of representing a numeric function by listing
its terms, we use a short representation for it: the infinite series
A(z) = a0 + a1 z + a2 z² + . . . + an z^n + . . .
is called the generating function of the numeric function a.
We note that the coefficient of z^n in the generating function is the value of the numeric function at n.
1.11.1. Example. Find the generating function of the numeric function
an = 5 · 2^n , n ≥ 0
Solution. The generating function for the numeric function (1, 2, 4, . . .) is 1/(1 − 2z).
Hence the generating function for an = 5 · 2^n is
A(z) = 5 · 1/(1 − 2z) = 5/(1 − 2z)

1.11.2. Exercise. Find the generating function of the numeric function
an = 1^n + 2^n + 3^n , n ≥ 0
1.11.3. Exercise. Find the generating function for the numeric function
an = 2^(n+3) , n ≥ 0.
1.11.4. Exercise. Find the numeric function corresponding to the generating function
A(z) = 3z / ((1 − z)(1 + 2z))

1.12. Convolution of Numeric Functions.


Let a = (a0, a1, . . ., an, . . .) and b = (b0, b1, . . ., bn, . . .) be numeric functions. Then,
A(z) = a0 + a1 z + . . . + an z^n + . . . and B(z) = b0 + b1 z + . . . + bn z^n + . . .
are their generating functions.
Let c = a * b be their convolution. Then,
cn = a0 bn + a1 bn-1 + a2 bn-2 + . . . + an-1 b1 + an b0 = Σi=0..n ai bn-i
which is the coefficient of z^n in the Cauchy product
(a0 + a1 z + . . . + an z^n + . . .)(b0 + b1 z + . . . + bn z^n + . . .)
Hence C(z) = A(z) B(z).

1.12.1. Example. Let a = (a0, a1, . . ., an, . . .) be an arbitrary numeric function and b = (1, 1, 1, 1, . . .).
Suppose c is the convolution of these two numeric functions. Find the generating function C(z).
Solution. We have
c = a * b
where
a = (a0, a1, . . ., an, . . .)
b = (b0, b1, . . ., bn, . . .) = (1, 1, 1, 1, . . .)
so that
cn = a0 bn + a1 bn-1 + a2 bn-2 + . . . + an-1 b1 + an b0
= a0 + a1 + . . . + an
since each bi = 1, and the generating function of c is
C(z) = A(z) B(z) = A(z) · 1/(1 − z)
In particular, if we take A(z) = 1/(1 − z), then
C(z) = 1/(1 − z)²
is the generating function of the numeric function (1, 2, 3, . . .) because
c0 = a0 b0 = 1 · 1 = 1
c1 = a0 b1 + a1 b0 = 1 + 1 = 2
c2 = a0 b2 + a1 b1 + a2 b0 = 1 + 1 + 1 = 3
. . .
cn = 1 + 1 + . . . + 1 (n + 1 times) = n + 1
Thus, the generating function of the sequence an = n + 1 is 1/(1 − z)².
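The convolution formula for cn is easy to implement on truncated sequences; a Python sketch (illustrative only, not part of the original text) reproduces the example above:

```python
def convolve(a, b):
    """c_k = sum_{i=0}^{k} a_i * b_{k-i}, for the first min(len(a), len(b)) terms."""
    n = min(len(a), len(b))
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]

ones = [1] * 10
# Convolving (1, 1, 1, ...) with itself gives (1, 2, 3, ...), matching 1/(1-z)**2
print(convolve(ones, ones))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```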
1.12.2. Exercise. Let c = a + b, where
an = 2^n , bn = 4^n , n ≥ 0
Determine the generating function C(z).
1.12.3. Exercise. Show that 1/(1 − 4z)² is the generating function of the numeric function
an = (n + 1) 4^n
1.12.4. Solution of Recurrence Relations by the Method of Generating Function.
In this method, the given recurrence relation is first converted into an equation involving the
generating function, which is then solved.
1.12.5. Example. Find an explicit formula for the recurrence relation
an = 3 an-1 + 1 , n ≥ 2
with the initial conditions a0 = 0, a1 = 1.
Solution. We are given that

an = 3 an-1 + 1 , n ≥ 2 (1)
Multiplying both sides by z^n,
an z^n = 3 an-1 z^n + z^n , n ≥ 2 (2)
Summing (2) over all n ≥ 2, we obtain
Σn≥2 an z^n = 3 Σn≥2 an-1 z^n + Σn≥2 z^n (3)
But
Σn≥2 an z^n = a2 z² + a3 z³ + . . . = A(z) - a1 z - a0 = A(z) - z , since a0 = 0 and a1 = 1,
Σn≥2 an-1 z^n = z Σn≥2 an-1 z^(n-1) = z (a1 z + a2 z² + . . .) = z (A(z) - a0) = z A(z),
Σn≥2 z^n = z² (1 + z + z² + . . .) = z²/(1 - z).
Thus (3) reduces to
A(z) - z = 3z A(z) + z²/(1 - z)
or (1 - 3z) A(z) = z + z²/(1 - z) = z/(1 - z)
so that
A(z) = z / ((1 - z)(1 - 3z)) = (1/2)/(1 - 3z) - (1/2)/(1 - z)
Hence
an = (1/2) 3^n - 1/2 = (3^n - 1)/2 , n ≥ 0
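The explicit formula obtained from the generating function can be checked against the original recurrence; a quick Python verification (illustrative only, not part of the original text):

```python
def a_formula(n):
    # Closed form from the generating-function method
    return (3**n - 1) // 2

def a_rec(n):
    # a_0 = 0, a_n = 3 a_{n-1} + 1
    a = 0
    for _ in range(n):
        a = 3 * a + 1
    return a

print(all(a_formula(n) == a_rec(n) for n in range(12)))  # True
```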

1.13. Exercises.
1. Using technique of backtracking, find the explicit formula for the recurrence relation
Sn = 2 Sn-1 , S0 = 1
2. Using technique of backtracking, find explicit formula for the recurrence relation
an = an-1 + n , a1 = 4
3. Solve the recurrence relation
an = 2 an-1 - an-2 , n ≥ 2

with the initial conditions: a0 = 1, a1 = 4.


4. Solve the recurrence relation
an = 4 an-1 - 4 an-2
with the initial conditions: a0 = 1, a1 = 1.
5. Find the total solution of the difference equation
an + 5 an-1 + 6 an-2 = 3n² - 2n + 1.
6. Find the total solution of the difference equation
an - 5 an-1 + 6 an-2 = 1
7. Find the particular solution of the difference equation
an + an-1 = 3n · 2^n
8. For McCarthy's 91 function, show that M(86) = 91.
9. Using definition of Ackermann function, find the value of A(2, 1) and A(2,3).
10. Find the numeric function corresponding to the generating function
A(z) = −4z / ((1 + z)(1 − 3z))

11. Using generating function methods, find the explicit formula for Fibonacci sequence.
Books Recommended:
1. Kenneth H. Rosen, Discrete Mathematics and Its Applications, Tata McGraw-Hill, Fourth Edition.
2. Seymour Lipschutz and Marc Lipson, Theory and Problems of Discrete Mathematics, Schaum
Outline Series, McGraw-Hill Book Co, New York.
3. John A. Dossey, Otto, Spence and Vanden K. Eynden, Discrete Mathematics, Pearson, Fifth
Edition.
4. J.P. Tremblay, R. Manohar, “Discrete mathematical structures with applications to computer
science”, Tata-McGraw Hill Education Pvt.Ltd.
5. J.E. Hopcroft and J.D. Ullman, Introduction to Automata Theory, Languages and Computation,
Narosa Publishing House.
6. M. K. Das, Discrete Mathematical Structures for Computer Scientists and Engineers, Narosa
Publishing House.
7. C. L. Liu and D.P.Mohapatra, Elements of Discrete Mathematics- A Computer Oriented Approach,
Tata McGraw-Hill, Fourth Edition.
2
Propositions and Lattices
Structure
2.1. Introduction.
2.2. Proposition.
2.3. Quantifiers.
2.4. Lattices.
2.1. Introduction. This chapter contains results related to propositions, truth tables, quantifiers and
their types, and lattices and their properties.
2.1.1. Objective. The objective of the study of these results is to understand the basic concepts and
to apply them in problem solving and in everyday situations involving logic.
2.2. Proposition. A proposition (or statement) is a declarative sentence which is true or false but not
both.
Examples. The following statements are all propositions:
(i) Paris is in France (ii) 4 < 6 (iii) It rained yesterday
However, the following statements are not propositions.
(i) What is your name? (ii) x2 = 9 (iii) Do your homework
The lower case letters such as p, q, r etc. are used to represent propositions.
For example, p : 2+2 = 4

q : India is in Asia.

2.2.1. Compound Propositions. Many propositions are composite, that is, composed of subpropositions
and the various connectives discussed subsequently. Such propositions are called compound propositions.
A proposition is said to be primitive if it cannot be broken down into simpler propositions, that is, if it
is not composite.
For example, “Roses are red and Violets are blue” is a compound proposition with subpropositions
“Roses are red” and “Violets are blue”.
On the other hand, the proposition “London is in Denmark” is primitive.

2.2.2. Basic Logical Operations. The three basic logical operations are
(i) Conjunction (ii) Disjunction (iii) Negation
which correspond, respectively, to “and”, “or” and “not”.
The conjunction of two propositions p and q is the proposition "p and q", denoted by p ∧ q .
For example, Let p : He is rich
q : He is generous
Then, p∧q : He is rich and generous.
Thus, the conjunction p ∧ q is true if he is both rich and generous. Even if one of the
components is false, p ∧ q is false. Thus, "the proposition p ∧ q is true if and only if the propositions p
and q are both true". The truth table of p ∧ q is given as

p q p∧q

T T T

T F F

F T F

F F F
The disjunction of two propositions p and q is the proposition "p or q", denoted by p ∨ q.
The compound statement p ∨ q is true if at least one of p or q is true, and it is false when both p and q
are false.
The truth value of the compound proposition p ∨ q is given by

p q p∨q

T T T

T F T

F T T

F F F
For example, if p : 1 + 1 = 3
q : A decade is 10 years.
Then, p is false, q is true and so the disjunction p ∨ q is true.
Given any proposition p, another proposition, called the negation of p, can be formed by writing "It is

not the case that ......” or “It is false that .....” before p or if possible, by inserting in p the word “not”.
Symbolically,
~p
read “not p” denotes the negation of p.
If p is true then ~ p is false and if p is false, then ~ p is true.
2.2.3. Propositional Form. A “statement form” or “propositional form” is an expression made up of
statement variables (such as p, q and r) and logical connectives (such as ~, ∧ , ∨) that becomes a
statement when actual statements are substituted for the component statement variables.
The truth table for a given statement form displays the truth values that correspond to the different
combinations of truth value for the variables.
For example, construct a truth table for the statement ( p ∨ q ) ∧ ∼ ( p ∧ q ).

Sometimes this compound proposition is written as p ⊕ q (the exclusive or of p and q).

p q p∨q p∧q ~ (p ∧ q) (p ∨ q) ∧ ∼ (p ∧ q)
T T T T F F
T F T F T T
F T T F T T
F F F F T F

2.2.4. Example. Construct a truth table for the statement ( p ∧ q ) ∨ ∼ r

p q r p∧q ~r (p ∧ q) ∨ ~ r
T T T T F T
T T F T T T
T F T F F F
T F F F T T
F T T F F F
F T F F T T
F F T F F F
F F F F T T
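Truth tables such as the ones above can be generated mechanically by enumerating all 2^n assignments of truth values; a Python sketch (illustrative only, not part of the original text):

```python
from itertools import product

def truth_table(variables, formula):
    """Return one (assignment, value) pair per combination of truth values."""
    rows = []
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        rows.append((values, formula(**assignment)))
    return rows

# The table of Example 2.2.4, (p AND q) OR (NOT r): 2**3 = 8 rows.
table = truth_table(["p", "q", "r"], lambda p, q, r: (p and q) or not r)
for values, result in table:
    print(values, result)
```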

Remark. For 2 variables, 4 rows are necessary. For 3 variables, 8 rows are necessary. In general, for n
variables, 2^n rows are necessary.
2.2.5. Logically Equivalent Propositions. Two different compound propositions (or statement forms or
propositional forms) are said to be logically equivalent if they have the same truth values no matter what
truth values their constituent propositions have.
OR
Two different compound propositions are said to be logically equivalent if they have identical truth
table. We use the symbol ‘≡’ for logical equivalence.
For example, Consider the statement form
(a) Dogs bark and cats mew. (b) Cats mew and Dogs bark
If we take p : Dogs bark
q : Cats mew
Then (a) and (b) are logically expressed as
(a) p ∧ q (b) q ∧ p

If we construct the truth tables for p ∧ q and q ∧ p , we observe that p ∧ q and q ∧ p have the same
truth values. Thus, p ∧ q and q ∧ p are logically equivalent, that is, p ∧ q ≡ q ∧ p .

2.2.6. Exercise. Show that the negation of the negation of a statement is equivalent to the statement,
that is, ∼(∼p) ≡ p. This logical equivalence is called the Involution law.
2.2.7. Exercise. Show that the statement forms ∼(p ∧ q) and ∼p ∧ ∼ q are not logically equivalent.
2.2.8. Exercise. Show that ∼( p ∧ q ) and ∼ p ∨ ∼ q are logically equivalent and ∼( p ∨ q ) ≡ ∼ p ∧ ∼ q .

The above two logical equivalences are known as De Morgan’s laws of logic.
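Logical equivalences (and non-equivalences) can be verified by exhausting all truth-value combinations; a Python sketch (illustrative only, not part of the original text):

```python
from itertools import product

def equivalent(f, g, nvars=2):
    # Two statement forms are logically equivalent iff they agree on every assignment
    return all(f(*vals) == g(*vals) for vals in product([True, False], repeat=nvars))

# De Morgan's laws:
print(equivalent(lambda p, q: not (p and q), lambda p, q: (not p) or (not q)))   # True
print(equivalent(lambda p, q: not (p or q), lambda p, q: (not p) and (not q)))   # True
# Exercise 2.2.7: not(p and q) and (not p) and (not q) are NOT equivalent:
print(equivalent(lambda p, q: not (p and q), lambda p, q: (not p) and (not q)))  # False
```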
2.2.9. Tautology. A compound proposition which is always true regardless of truth values assigned to
its component propositions is called a Tautology.
2.2.10. Contradiction. A compound proposition which is always false regardless of truth values
assigned to its component propositions is called a contradiction.
2.2.11. Contingency. A compound proposition which can be either true or false, depending upon the
truth values of its component propositions, is called a contingency.
2.2.12. Exercise. Show that p ∨ ∼ p is a Tautology.
2.2.13. Exercise. Show that p ∧ ∼ p is a contradiction.
Remark. If t and c denote a tautology and a contradiction, then we notice that
∼t ≡ c (1)
and ∼c ≡ t (2)
Now, from this and the above two exercises,
p ∨ ∼p ≡ t (3)
and p ∧ ∼p ≡ c (4)
The logical equivalences (1), (2), (3) and (4) are known as complement laws.
2.2.14. Logical Equivalences involving Tautologies and Contradictions. If t is a tautology and c is a
contradiction, then p ∧ t ≡ p and p ∧ c ≡ c. Similarly, p ∨ t ≡ t and p ∨ c ≡ p.

Thus, we have the following logical equivalences

p ∧ t ≡ p    p ∧ c ≡ c    p ∨ t ≡ t    p ∨ c ≡ p

These four logical equivalences are known as Identity laws.


2.2.15. Idempotent laws. Show that p ∧ p ≡ p and p ∨ p ≡ p .

2.2.16. Commutative laws. Show that p ∧ q ≡ q ∧ p and p ∨ q ≡ q ∨ p .

2.2.17. Absorption laws. Show that p ≡ p ∧ ( p ∨ q ) and p ≡ p ∨ ( p ∧ q ) .

2.2.18. Associative laws. Show that ( p ∧ q ) ∧ r ≡ p ∧ ( q ∧ r ) and ( p ∨ q ) ∨ r ≡ p ∨ ( q ∨ r ) .

2.2.19. Distributive laws. (i) p ∨ ( q ∧ r ) ≡ ( p ∨ q ) ∧ ( p ∨ r ) (ii) p ∧ ( q ∨ r ) ≡ ( p ∧ q ) ∨ ( p ∧ r )

Solution.
p q r p∧q p∧r q∨r p ∧ (q ∨ r ) ( p ∧ q) ∨ ( p ∧ r )

T T T T T T T T

T T F T F T T T

T F T F T T T T

T F F F F F F F

F T T F F T F F

F T F F F T F F

F F T F F T F F

F F F F F F F F

Hence p ∧ ( q ∨ r ) ≡ ( p ∧ q ) ∨ ( p ∧ r ) .

Similarly, it can be proved that p ∨ ( q ∧ r ) ≡ ( p ∨ q ) ∧ ( p ∨ r ) .

These two logical equivalences are called distributive laws.



2.2.20. Conditional Proposition. If p and q are propositions, the compound proposition
If p then q or p implies q
is called a conditional proposition (or implication) and is denoted by p → q . The proposition p is called
the hypothesis or antecedent whereas the proposition q is called the conclusion or consequence.
The connective “If .....then” is denoted by the symbol ‘→’. It is false when p is true and q is false,
otherwise it is true. In particular, if p is false then p → q is true for any q.
A conditional statement that is true by virtue of the fact that its hypothesis is false is called true by
default or vacuously true. For example, the conditional statement “If 3 + 3 = 7 then I am being of
Japan”. Then conditional statement is true simply because 3 + 3 = 7 is false.
Thus, truth values of the conditional proposition p → q are defined by the truth table

p  q  p→q
T  T   T
T  F   F
F  T   T   ← true by default (vacuously true)
F  F   T   ← true by default (vacuously true)
Each of the following expressions is an equivalent form of the conditional statement p → q:

p implies q
q if p
p only if q
p is a sufficient condition for q
q is a necessary condition for p
2.2.21. Exercise. Restate each proposition in the form of a conditional proposition.
(a) I will eat if I am hungry.
(b) 3 + 5 = 8 if it is snowing.
(c) When you sing, my ears hurt.
(d) Ram will be a good teacher if he teaches well.
(e) A necessary condition for England to win the world series is that they sign a right-handed pitcher.
(f) A sufficient condition for Sohan to visit Calcutta is that he goes to Disney Land.

Solution. (a) If I am hungry then I will eat.


2.2.22. Lemma. For the proposition p and q, we have p → q ≡ ∼ p ∨ q .
2.2.23. Example. Rewrite the statement in “If ..... then” form: (a) Either you get to work on time or you are fired.
Solution. Let ∼p : you get to work on time,
q : you are fired.
Then the given statement is ∼p ∨ q, where p : you do not get to work on time.
Hence, by the above lemma, the “If ..... then” version of the given statement is:
If you do not get to work on time, then you are fired. [∵ ∼p ∨ q ≡ p → q]
2.2.24. Negation of a conditional statement. We know p → q is false if and only if p is true and its
conclusion q is false. Also we have shown above that
p →q ≡ ∼ p ∨ q

Taking negation of both sides


∼( p → q ) ≡ ∼ (∼ p ∨ q ) ≡ ∼(∼ p ) ∧ ∼ q [De Morgan law]
≡ p∧∼q

Thus the negation of “If p then q” is logically equivalent to “p and not q”.
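Both the lemma p → q ≡ ∼p ∨ q and the negation rule above can be confirmed by checking all four truth assignments; a small Python sketch:

```python
from itertools import product

def implies(p, q):
    # p → q is false exactly when p is true and q is false
    return not (p and not q)

rows = list(product([True, False], repeat=2))
# Lemma 2.2.22:  p → q ≡ ∼p ∨ q
lemma = all(implies(p, q) == ((not p) or q) for p, q in rows)
# Negation:  ∼(p → q) ≡ p ∧ ∼q
negation = all((not implies(p, q)) == (p and not q) for p, q in rows)
print(lemma, negation)  # -> True True
```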
2.2.25. Converse of a conditional statement. If p → q is an implication then the converse of p → q is
the implication q → p.
2.2.26. Contrapositive of a Conditional Statement. The contrapositive of a conditional statement “if p
then q” is “If ∼q then ∼p”.
In symbols, the contrapositive of p → q is ∼q → ∼p.
2.2.27. Lemma. A conditional statement is logically equivalent to its contrapositive.
Proof. The truth tables of p → q and ∼q → ∼p are:

p  q  p→q        p  q  ∼p  ∼q  ∼q→∼p
T  T   T         T  T   F   F    T
T  F   F         T  F   F   T    F
F  T   T         F  T   T   F    T
F  F   T         F  F   T   T    T

Since the columns for p → q and ∼q → ∼p have the same truth values,

p → q ≡ ∼q → ∼p
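A quick computational check of this lemma also shows what it does not say: a conditional always agrees with its contrapositive, but not, in general, with its converse. An illustrative Python sketch:

```python
from itertools import product

implies = lambda p, q: (not p) or q
rows = list(product([True, False], repeat=2))

# p → q always agrees with its contrapositive ∼q → ∼p ...
contrapositive = all(implies(p, q) == implies(not q, not p) for p, q in rows)
# ... but not with its converse q → p (take p = True, q = False)
converse = all(implies(p, q) == implies(q, p) for p, q in rows)
print(contrapositive, converse)  # -> True False
```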

2.2.28. Exercise. Give the converse and contrapositive of implication.


(a) If it is raining then I will use my umbrella.
(b) If today is Monday then tomorrow is Tuesday.
2.2.29. Inverse of the Conditional Statement. The inverse of the conditional statement p → q is ∼p → ∼q.
For example, the inverse of “If today is Easter then tomorrow is Monday” is “If today is not Easter, then tomorrow is not Monday.”
Remark. If a conditional statement is true, its converse and inverse may or may not be true. For example, on any Sunday other than Easter, the conditional statement above is true, yet its inverse is false (today is not Easter, but tomorrow is still Monday).
2.2.30. Biconditional Statement. If p and q are statements, then the compound statement “p if and only if q” is called a biconditional statement or an equivalence. It is denoted by p ↔ q.
Observe that p ↔ q is true when p and q have the same truth values, and is false when p and q have opposite truth values.
2.2.31. Lemma. Show that p ↔ q ≡ ( p → q ) ∧ ( q → p ) .

Remark. We have proved above that p ↔ q ≡ (p → q) ∧ (q → p). Also we know that

p → q ≡ ∼p ∨ q

q → p ≡ ∼q ∨ p

Hence, p ↔ q ≡ (p → q) ∧ (q → p) ≡ (∼p ∨ q) ∧ (∼q ∨ p).

2.2.32. Definition. Let p and q be statements then p is a sufficient condition for q means “if p then q”
and p is a necessary condition for q means “If not p then not q”.
Remark. The order of precedence of the connectives is ∼, ∧, ∨, →, ↔.

2.2.33. Argument. An argument is a sequence of statements. All statements except the final one are
called premises (or assumption or hypothesis).
The final statement is called the conclusion. The symbol ∴ is read “therefore” and is generally placed just before the conclusion. The logical form of an argument can be abstracted from the content of the given argument.
For example, Consider the argument
If a man is bachelor, he is unhappy. If a man is unhappy, he dies young. [Premises]
∴ Bachelors die young. [Conclusion]
The argument has abstracted form
If p then q,

If q then r
∴p→r,
where p = Man is Bachelor
q = He is unhappy
r = He dies young
2.2.34. Valid Argument. An argument is said to be valid if the conclusion is true whenever all its
premises are true.
2.2.35. Definition. An argument which is not valid is called a fallacy (invalid).
Method to test validity for an argument.
(i) Identify the premises and conclusion of argument.
(ii) Construct a truth table of all the premises and conclusion showing their truth values.
(iii) Find the rows (called critical rows) in which all the premises are true.
(iv) In each critical row, determine whether the conclusion is also true.
(a) If in each critical row the conclusion is also true, then the argument form is valid.
(b) If there is at least one critical row in which the conclusion is false, the argument form is a fallacy.
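The four-step test above can be mechanized: enumerate every truth assignment, keep the critical rows, and check the conclusion in each. A small Python sketch of such a checker (the argument forms shown are examples, not part of the text):

```python
from itertools import product

def is_valid(premises, conclusion, n):
    """An argument form is valid iff the conclusion is true in every
    critical row, i.e. every assignment making all premises true."""
    for row in product([True, False], repeat=n):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return False          # a critical row with a false conclusion: fallacy
    return True

implies = lambda p, q: (not p) or q
# Modus Ponens:  p, p → q, ∴ q   — valid
mp = is_valid([lambda p, q: p, lambda p, q: implies(p, q)], lambda p, q: q, 2)
# Affirming the consequent:  q, p → q, ∴ p   — a fallacy
ce = is_valid([lambda p, q: q, lambda p, q: implies(p, q)], lambda p, q: p, 2)
print(mp, ce)  # -> True False
```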
2.2.36. Example. Show that the argument
p,
p → q,
∴q
is valid.
Solution. The premises are p and p → q. The conclusion is q. The truth table is:

p  q  |  p  p→q  |  q
T  T  |  T   T   |  T   ← Critical row
T  F  |  T   F   |  F
F  T  |  F   T   |  T
F  F  |  F   T   |  F

(the middle columns are the premises; the last column is the conclusion)

In the first row all the premises are true, and therefore it is a critical row. The conclusion in this critical row is also true. Hence the argument is valid.

The argument discussed above is known as law of detachment.


2.2.37. Exercise. The argument
p → q,
p,
∴q
is valid.
The fact that this argument form is valid is called Modus ponens.
2.2.38. Exercise. The argument form
p → q,
∼ q,
∴ ∼p
is valid. The fact that this argument form is valid is called Modus Tollens, which means “method of denying”, since the conclusion is a denial.
2.2.39. Theorem. (Rule of inference, Law of syllogism or Hypothetical Syllogism)
The argument
p→q
q→r
∴ p→r
is universally valid and so is a rule of inference.
In other words, [( p → q ) ∧ ( q → r )] → ( p → r ) is a tautology.

Proof. The truth table for the premises and conclusion


Premises Conclusion
p q r p→q q→r p→r

T T T T T T ← Critical row
T T F T F F

T F T F T T

T F F F T F

F T T T T T ← Critical row
F T F T F T

F F T T T T ← Critical row
F F F T T T ← Critical row

The critical rows for the premises p → q and q → r are the 1st, 5th, 7th and 8th rows. The conclusion p → r is true in each of these rows. Hence the given argument is valid.
2.2.40. Example. Consider the argument
(i) If you invest in the stock market, then you will get rich.
(ii) If you get rich, then you will be happy.
Therefore, if you invest in stock market, then you will be happy.
Solution. By rule of inference, the argument is valid.
2.2.41. Exercise. The following arguments are valid
(a) p , ∴ p ∨ q (b) q , ∴ p ∨ q
These arguments are called disjunctive addition.
2.2.42. Exercise. The following arguments are valid
(a) p ∧ q , ∴ p (b) p ∧ q , ∴ q
These arguments are called conjunctive simplification. For example, (a) says if both p and q are true
then in particular p and (b) says if p and q are true then in particular q. (It can be proved by truth table).
2.2.43. Exercise. The arguments
(a) p ∨ q, ∼q, ∴ p    (b) p ∨ q, ∼p, ∴ q
are valid.
These arguments are called disjunctive syllogism.
2.2.44. Exercise. Prove that the following argument is valid:
p → ∼ q, r → q, r ∴ ∼ p .

2.2.45. Exercise. The argument p ∨ q , p → r , q → r ∴ r is valid known as Dilemma.

2.2.46. Example. Using rules of valid inference, solve the problem


(a) If my glasses are on the kitchen table, then I saw them at breakfast.
(b) I was reading the newspaper in the living room or I was reading in the kitchen.
(c) If I was reading the newspaper in the living room, then my glasses are on the coffee table.
(d) I did not see my glasses at breakfast.
(e) If I was reading my book in the bed then my glasses are on the bed table.
(f) If I was reading the newspaper in the kitchen, then my glasses are on the kitchen table Where are the
glasses?

Solution. Let
p : My glasses are on the kitchen table.
q : I saw my glasses at breakfast.
r : I was reading the newspaper in the living room.
s : I was reading the newspaper in the kitchen.
t : My glasses are on the coffee table.
u : I was reading my book in bed.
v : My glasses are on the bed table.
Then the given statements are
(a) p → q (b) r ∨ s (c) r → t (d) ∼ q (e) u → v (f) s → p
The following deductions can be made:
(i) p → q [by (a)] and ∼q [by (d)], ∴ ∼p [by Modus Tollens]
(ii) s → p [by (f)] and ∼p [by the conclusion of (i)], ∴ ∼s [by Modus Tollens]
(iii) r ∨ s [by (b)] and ∼s [by the conclusion of (ii)], ∴ r [by disjunctive syllogism]
(iv) r → t [by (c)] and r [by the conclusion of (iii)], ∴ t [by Modus Ponens]
Hence t is true and the glasses are on the coffee table.
Remark. Note that (e) was not required to derive the conclusion. In mathematics, as in real life, we
frequently deduce a conclusion from just a part of the information available to us.
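This puzzle can also be settled by brute force: list every truth assignment to p, q, r, s, t, u, v, keep those satisfying all six premises, and see what they agree on. A Python sketch of that search:

```python
from itertools import product

implies = lambda a, b: (not a) or b

# One boolean per statement p, q, r, s, t, u, v as defined in the solution
models = [vals for vals in product([True, False], repeat=7)
          if all([implies(vals[0], vals[1]),        # (a) p → q
                  vals[2] or vals[3],               # (b) r ∨ s
                  implies(vals[2], vals[4]),        # (c) r → t
                  not vals[1],                      # (d) ∼q
                  implies(vals[5], vals[6]),        # (e) u → v
                  implies(vals[3], vals[0])])]      # (f) s → p
# t (index 4) holds in every model: the glasses must be on the coffee table
on_coffee_table = all(m[4] for m in models)
print(len(models), on_coffee_table)  # -> 3 True
```

The three surviving models differ only in u and v, which matches the remark: premise (e) contributes nothing to the conclusion.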
2.2.47. Exercise. Show that the following argument is invalid.
If taxes are lowered, then Income rises.
Income rises.
∴ Taxes are lowered.
2.2.48. Exercise. Test the validity of the following argument.
If two sides of a triangle are equal, then opposite angles are equal.
Two sides of triangle are not equal.
∴ the opposite angles are not equal.
2.2.49. Exercise. Consider the following argument for validity
(i) If I study then I will not fail in Mathematics.

(ii) If I don’t play basketball then I will study.


(iii) But I failed in mathematics. ∴ I must have played basketball.
2.2.50. Contradiction Rule. If the supposition that the statement p is false leads logically to a contradiction, then we can conclude that p is true. In symbols: ∼p → c, ∴ p.
The truth table is
Premises Conclusion
p ∼p c ∼p→c p

T F F T T ← Critical row

F T F F F

Hence the argument form is valid.


2.2.51. Exercise. An island contains two types of people: Knights, who always tell the truth, and Knaves, who always lie. A visitor to the island approached two natives, who spoke as follows:
A says : B is a knight.
B says : A and I are of opposite types.
What are A and B?
2.3. Quantifiers. So far we have studied compound statements made of simple statements joined by the connectives ∧, ∨, ∼, →, ↔. That study alone cannot be used to determine validity in the majority of everyday and mathematical situations.
To check the validity of such arguments, it is necessary to separate each statement into two parts: subjects and predicates.
We must also analyse and understand the special role played by words denoting quantities, such as “all” or “some”.
2.3.1. Definition. The symbolic analysis of predicates and quantified statements is called the predicate calculus, whereas the symbolic analysis of ordinary compound statements is called the statement calculus (propositional calculus). In English grammar, the predicate is the part of a sentence that gives information about the subject. For example: Ram is a resident of Karnal.
The word ‘Ram’ is a subject and the phrase “is a resident of Karnal” is a predicate. This predicate is the
part of a sentence from which subject has been removed.
In logic, predicates can be obtained by removing only nouns from a statement. For example, If P stands
for “is a resident of Karnal” and Q stands for “is a resident of”. Then both P and Q are predicate
symbols. The sentence “x is a resident of Karnal” and “x is a resident of y” are denoted by
P( x ) and Q( x , y ) respectively where x and y are predicate variables.

2.3.2. Definition. A predicate is a sentence that contains a finite number of variables and becomes a statement when specific values are substituted for the variables. The domain of a predicate variable is the set of all values that may be substituted in place of the variable. Predicates are also known as “propositional functions” or “open sentences”.
2.3.3. Definition. Let P( x ) be a predicate and let x have domain D. Then the set { x ∈ D : P( x ) is true} is called the truth set of P( x ) .
For example, Let P( x ) be “x is an integer less than 8” and suppose the domain of x is the set of all
positive integers. Then the truth set of P( x ) is {1, 2, 3, 4, 5, 6, 7} . Let P( x ) and Q( x ) be predicate with
common domain D of x. The notation P( x ) ⇒ Q( x )

means that every element in the truth set of P( x ) is in the truth set of Q( x ) . Similarly P( x ) ⇔ Q( x )
means that P( x ) and Q( x ) have identically truth sets.
For example, Let P( x ) be “x is a factor of 8”

Q( x ) be “x is a factor of 4”

R( x ) be “x < 5 and x ≠ 3”
and let the domain of x be the set of positive integers.
Then, Truth set of P( x ) = {1, 2, 4, 8}
Truth set of Q( x ) = {1, 2, 4}
Truth set of R( x ) = {1, 2, 4}
Since every element of truth set of Q( x ) is in the truth set of P( x ) , Q( x ) ⇒ P( x )
Truth set of R( x ) is identical to the truth set of Q( x ) so R( x ) ⇔ Q( x )
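Truth sets and the relations ⇒ and ⇔ translate directly into set comprehensions and set comparisons; a Python sketch of the example just given (the domain bound 20 is an arbitrary choice, large enough to hold every factor of 8):

```python
domain = range(1, 21)   # enough positive integers to contain every factor of 8

P = {x for x in domain if 8 % x == 0}          # P(x): x is a factor of 8
Q = {x for x in domain if 4 % x == 0}          # Q(x): x is a factor of 4
R = {x for x in domain if x < 5 and x != 3}    # R(x): x < 5 and x ≠ 3

print(sorted(P))   # -> [1, 2, 4, 8]
print(Q <= P)      # Q(x) ⇒ P(x): Q's truth set is contained in P's -> True
print(R == Q)      # R(x) ⇔ Q(x): the truth sets are identical -> True
```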
2.3.4. Definition. The words that refer to quantities such as “all” or “some” and tell for how many
element a given predicate is true, is called quantifiers.
By adding quantifiers, we can obtain statements from a predicate.
2.3.5. Definition. The symbol ‘∀’ denotes ‘for all’ and is called universal quantifier. Thus the sentence
“All human beings are mortal”.
can be written as
∀ x ∈ S, x is mortal.
where S denotes the set of all human beings.
2.3.6. Definition. Let P( x ) be a predicate and D be the domain of x. A statement of the form “
∀ x ∈ D, P( x ) ” is called a universal statement. A universal statement P( x ) is true if and only if P( x ) is
true for every x in D. A universal statement P( x ) is false if and only if P( x ) is false for atleast one value
of x in D.

A value of x for which P( x ) is false is called a counterexample to the universal statement. For example, let D = {1, 2, 3, 4} and consider the universal statement
P( x ) : ∀ x ∈ D, x³ ≥ x.
This is true for all values of x ∈ D, since 1³ ≥ 1, 2³ ≥ 2, 3³ ≥ 3, 4³ ≥ 4.
But the universal statement Q( n ) : ∀ n ∈ N, n + 2 > 8 is not true, because if we take, say, n = 6, then 6 + 2 = 8, which is not greater than 8. Thus n = 6 is a counterexample.

2.3.7. Definition. The symbol “∃” denotes “there exist” and is called the existential quantifier. For
example, The sentence
“There is a university in Kurukshetra”
can be expressed as
∃ a university u such that u is in Kurukshetra
or we can write
∃ u ∈U : u is in Kurukshetra, where U is the set of universities.

The words ‘such that’ are inserted just before the predicate.
2.3.8. Definition. Let P( x ) be a predicate and D is a domain of x. A statement of the form “ ∃ x ∈ D
such that P( x ) ” is called an existential statement. It is defined to be true if and only if P( x ) is true for
atleast one x in D. It is false if and only if P( x ) is false for all x in D.
For example, the existential statement
“ ∃ n ∈ N : n + 3 < 9 ” is true since the set { n : n + 3 < 9} = {1, 2, 3, 4, 5} ≠ φ

For example, let A = {2, 3, 4, 5}. Then the existential statement “ ∃ n ∈ A : n² = n ” is false because there is no element in A whose square is equal to itself.
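Universal and existential statements over finite (or bounded) domains correspond directly to the built-in `all` and `any` functions; a Python sketch of the four examples above:

```python
D = [1, 2, 3, 4]
univ = all(x ** 3 >= x for x in D)        # ∀ x ∈ D, x³ ≥ x

N = range(1, 100)
false_univ = all(n + 2 > 8 for n in N)    # ∀ n ∈ N, n + 2 > 8 (n = 6 is a counterexample)
exist = any(n + 3 < 9 for n in N)         # ∃ n ∈ N : n + 3 < 9 (truth set {1,...,5} ≠ φ)

A = [2, 3, 4, 5]
false_exist = any(n * n == n for n in A)  # ∃ n ∈ A : n² = n (no such element)

print(univ, false_univ, exist, false_exist)  # -> True False True False
```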
2.3.9. Definition. A statement of the form “∀ x, if P( x ) then Q( x )” is called a universal conditional statement. Consider the statement ∀ x ∈ R, if x > 2 then x² > 4.

This can be written in either of the forms:
(i) If a real number is greater than 2, then its square is greater than 4.
(ii) Whenever a real number is greater than 2, its square is greater than 4.
On the other hand, consider the statements
(i) All bytes have eight bits. (ii) No fire trucks are green.
These can be written as

(i) ∀ x, if x is a byte, then x has eight bits.

(ii) ∀ x, if x is a fire truck, then x is not green.

2.3.10. Example. Consider the statements


(i) ∀ polygons p, if p is a square then p is a rectangle.
This is equivalent to the universal statement that “∀ squares p, p is a rectangle”.
(ii) ∃ a number n such that n is prime and n is even.
This is equivalent to the statement “∃ a prime number n such that n is even”.
2.3.11. Definition. The negation of the universal statement “ ∀ x in D, P( x ) holds” is logically equivalent to a statement of the form “∃ x in D such that ∼ P( x ) ”.

Thus, ∼ (∀ x ∈ D, P( x ) holds ) ≡ ∃ x in D s.t. ∼ P( x ) .

Hence the negation of universal statement “(all are)” is logically equivalent to an existential statement
“some are not”. For example, The negation of
(i) “∀ positive integers n, we have n + 2 > 9” is “∃ a positive integer n such that n + 2 ≯ 9”.

(ii) The negation of “all students are intelligent” is “Some students are not intelligent”.
2.3.12. Definition. The negation of a universal conditional statement is defined by
∼ (∀ x , P( x ) → Q( x )) ≡ ∃ x such that ∼ ( P( x ) → Q( x )) ......(1)

whereas negation of ‘if ......then’ statement is


∼ ( P( x ) → Q( x )) ≡ P( x ) ∧ (∼ Q( x )) ......(2)

Hence by (1) and (2), negation of


(∀x , P( x ) → Q( x )) ≡ ∃ x s.t. P( x ) ∧ (∼ Q( x )) ,

that is, ∼(∀x , P( x ) → Q( x )) ≡ ∃ x s.t. P( x ) and ∼ Q( x )

For example, the negation of “∀ people p, if p is blond then p has blue eyes” is “∃ a person p such that p is blond and p does not have blue eyes”.
For example, suppose there is a bowl and we have no ball in the bowl. Then the statement
“All the balls in the bowl are blue”
is true by default or vacuously true, because there is no ball in the bowl which is not blue. For example,
If P( x ) is a predicate and the domain of x is

D = { x1, x 2 ,...... x n } .

Then, the statement “ ∀ x ∈ D, P( x ) ” and P( x1 ) ∧ P( x2 ) ∧ ... ∧ P( xn ) are logically equivalent. For example,
let P( x ) be “ x . x = x ” and let D = {0, 1}.

Then, “ ∀ x ∈ D, P( x ) ” can be written as “∀ binary digits x, x · x = x”. This is equivalent to

0 · 0 = 0 and 1 · 1 = 1,
which can be written as P( 0 ) ∧ P( 1 ) .

Similarly, P( x ) is predicate and D = {x1 , x2 ,..., xn } .

Then, the statement “ ∃ x ∈ D, P( x ) ” and P( x1 ) ∨ P( x2 ) ∨ ... ∨ P( xn ) are logically equivalent.

2.3.13. Definition. Let “ ∀ x ∈ D if P( x ) then Q( x ) ” be a statement. Then

(i) Contrapositive of this statement is “ ∀ x ∈ D if ∼ Q( x ) then ∼ P( x ) ”.

(ii) Converse of this statement is “ ∀ x ∈ D if Q( x ) then P( x ) ”.

(iii) Inverse of this statement is “ ∀ x ∈ D if ∼ P( x ) then ∼ Q( x ) ”.

2.3.14. Universal Modus Ponens.


(i) Formal version. “∀ x, if P( x ) then Q( x ) ”; P( a ) for a particular a; ∴ Q( a ) .

(ii) Informal version. If x makes P( x ) true, then x makes Q( x ) true; in particular, ‘a’ makes P( x ) true; ∴ Q(a).

An argument of this form is called a syllogism. The first and second premises are called its major premise and minor premise respectively.
For example, consider the argument:
(i) If a number is even, then its square is even.
(ii) k is a particular number that is even. ∴ k² is even.
The major premise of this argument can be written as “∀ x, if x is even then x² is even”.
Let P( x ) : x is even, Q( x ) : x² is even, and let k be an even number. Then the argument is

“∀ x, if P( x ) then Q( x ) ”; P( k ) for the particular k; ∴ Q( k ) .

This form of argument is valid by Modus Ponens.


2.3.15. Universal Modus Tollens.
Formal version: “ ∀ x, if P( x ) then Q( x ) ”; ∼ Q( a ) for a particular a; ∴ ∼ P(a).
Informal version: If x makes P( x ) true, then x makes Q( x ) true; ‘a’ does not make Q( x ) true; ∴ ‘a’ does not make P( x ) true.

For example: All human beings are mortal.

Zeus is not mortal.
∴ Zeus is not human.
The major premise can be written as “∀ x, if x is human then x is mortal”. Let P(x) : x is human, Q(x) : x is mortal,
and let z = Zeus. Then we have: ∀ x , if P( x ) then Q( x ) ,

∼ Q(z)
∴ ∼ P(z)
This form of argument is valid by Modus Tollens.
2.3.16. Use of Diagrams for Validity. Consider: (i) All human beings are mortal. (ii) Zeus is not mortal.

[Diagram: premise (i) is drawn as a disk labelled “Human” lying inside a disk labelled “Mortal”; premise (ii) places the point “Zeus” outside the “Mortal” disk.]

The two diagrams fit together in only one way:

[Diagram: the “Human” disk inside the “Mortal” disk, with the point “Zeus” outside both.]

Since Zeus is outside the Mortal disk, it is necessarily outside the Human disk. Hence the conclusion that “Zeus is not human” is true.
2.3.17. Use of Diagrams for Invalidity. (i) All human beings are mortal.
(ii) Sohan is mortal. ∴ Sohan is a human being.

[Diagram: the major premise is the “Human” disk inside the “Mortal” disk; the minor premise places the point “Sohan” somewhere inside the “Mortal” disk.]

These two premises fit into a single diagram in two ways:

[Diagram: in the first, “Sohan” lies inside the “Human” disk; in the second, “Sohan” lies inside “Mortal” but outside “Human”.]

The conclusion “Sohan is a human being” is true in the first case but not in the second. Hence the argument is not valid.
2.4. Lattices
2.4.1. Definition. A relation R on a set X is said to be a partial order relation if it is reflexive, antisymmetric and transitive. A set X with a partial order relation R is called a partially ordered set or poset and is denoted by ( X , R) . Note that a relation R is said to be antisymmetric if aRb, bRa ⇒ a = b .

For example, (1) Let A be a collection of subsets of a set S. Then the relation ⊆ of set inclusion is a partial order relation on A:
(i) A ⊆ A (Reflexive)

(ii) A ⊆ B and B ⊆ A ⇒ A=B [Antisymmetric]

(iii) A ⊆ B and B ⊆ C ⇒ A⊆C [Transitive]

2. Let N be set of natural numbers. Relation R is ≤ (less than or equal to) is a partial order relation on N.
(i) a ≤ a for all a ∈ N (ii) a ≤ b and b ≤ a then a = b for all a , b ∈ N

(iii) a ≤ b and b ≤ c then a ≤ c

So, we get ( N, ≤) is a poset. But the relation < is not a partial order relation because this relation is not
reflexive.
3. Let N be the set of natural numbers. Then relation of divisibility is a partial order relation of N.
(i) a / a ∀ a ∈ N (Reflexivity)

(ii) a / b and b / a ⇒ a = b (anti symmetric)

(iii) a / b, b / c ⇒ a/c

Therefore, divisibility is a partial order relation on N.

4. The relation of divisibility is not a partial order on the set of integers.

For example, 3 | −3 and −3 | 3 but 3 ≠ −3, so antisymmetry fails.
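The three poset axioms for divisibility can be verified exhaustively over a finite range, and the failure of antisymmetry over the integers can be exhibited the same way; a Python sketch (the ranges are arbitrary finite samples):

```python
A = range(1, 31)
divides = lambda a, b: b % a == 0

reflexive = all(divides(a, a) for a in A)
antisymmetric = all(a == b for a in A for b in A
                    if divides(a, b) and divides(b, a))
transitive = all(divides(a, c) for a in A for b in A for c in A
                 if divides(a, b) and divides(b, c))
print(reflexive, antisymmetric, transitive)   # -> True True True

# Over the integers antisymmetry fails: 3 | -3 and -3 | 3 but 3 != -3
Z = range(-5, 6)
div_z = lambda a, b: a != 0 and b % a == 0
anti_on_Z = all(a == b for a in Z for b in Z if div_z(a, b) and div_z(b, a))
print(anti_on_Z)  # -> False
```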



2.4.2. Definition. Let ( A, R) be a poset. Then elements a and b are said to be comparable if aRb or bRa . For example, we know that the relation of divisibility is a partial order relation on the set of natural numbers, but 3 does not divide 7 and 7 does not divide 3.

Thus 3 and 7 are elements of N which are not comparable (in such a case we write 3 ∥ 7).

2.4.3. Definition. If every pair of elements in a poset (A, R) is comparable, we say that A is linearly ordered or totally ordered, or a chain. The partial order relation is then called a linear ordering or total ordering relation. The number of elements in a chain is called the length of the chain.
For example, (1) The set N of natural numbers with relation ≤ will be linearly ordered or a chain.
(2) Let A be a set with two or more elements and let ‘⊆’ (set inclusion) be taken as a relation on the subsets of A. If a and b are two distinct elements of A, then {a} and {b} are subsets of A and are not comparable. So the power set of A, that is, P(A), is not a chain.
A subset of A is called an antichain if no two distinct elements in the subset are related. If we consider the subsets φ, {a}, A in the power set of A, then the collection {φ, {a}, A} is a chain but {{a}, {b}} is an antichain.
2.4.4. Definition. A relation R on a set A is called asymmetric if aRb and bRa do not both hold for any
a, b belonging to A.
2.4.5. Directed Graph representation of a relation from a finite set A to itself.
In this representation we draw a small circle for every element of the set A. These circles are called
vertices. We then draw arrow from vertex ai to vertex aj iff ( a i , a j ) ∈ R . These arrows are called edges.
The pictorial representation of R so obtained is called directed graph or digraph of R.
2.4.6. Example. Let A = {1, 2, 3} and R = {(1, 1), (1, 2), (1, 3), (3, 1) (2, 3), (2, 1)}. Draw the directed
graph of R.
Solution.
[Digraph on vertices 1, 2, 3: a loop at vertex 1 and edges 1 → 2, 1 → 3, 3 → 1, 2 → 3, 2 → 1.]

2.4.7. Example. Find the relation R whose digraph is shown below

[Digraph on vertices 1, 2, 3, 4 with a loop at vertex 2 and edges 1 → 2, 1 → 3, 3 → 2, 3 → 4, 4 → 2, 4 → 3.]

Solution. Clearly the relation R is defined on the set A = {1, 2, 3, 4}. Also we know that ( a i , a j ) ∈ R iff
there is an edge from a i to a j . Thus

R = { (1, 2 ), (1, 3 ), (2, 2 ), (3, 2 ), (3, 4 ), ( 4, 2 ), ( 4, 3 )} .

2.4.8. Theorem. The digraph of a partial order has no cycle of length greater than one.

Proof. Suppose on the contrary that the digraph of the partial order ≤ on the set A contains a cycle of length n (n ≥ 2). Then there are distinct elements a1, a2, ..., an such that
a1 ≤ a2, a2 ≤ a3, ..., an−1 ≤ an, an ≤ a1. By the transitivity of the partial order, used (n − 1) times, we have a1 ≤ an. Also an ≤ a1.

∴ Antisymmetry implies a1 = an,

which contradicts the supposition that a1, a2, ..., an are distinct. Hence the proof.

2.4.9. Hasse Diagram. Let A be a finite set. By the theorem proved above, the digraph of a partial order
on A has only cycles of length one. Infact since a partial order is reflexive, every vertex in the digraph of
the partial order is contained in a cycle of length one. To simplify the matter we shall delete all such
cycles of the digraph. Thus the digraph shown in first figure can be represented by second figure.
Let V = { a , b, c},

E = { ( a , a ), ( b, b ) , ( c, c ), ( a , b ), ( b, c ), ( a , c )}.

[Figure (i): the digraph with a loop at each of a, b, c and edges a → b, b → c, a → c. Figure (ii): the same digraph with all loops deleted.]

We also eliminate all edges that are implied by the transitive property. In the figure above we omit the edge from a to c, as a ≤ c follows from a ≤ b, b ≤ c.

[Figure: the chain a → b → c.]

We also draw the digraph of a partial order with all edges pointing upward, omit the arrows and replace the circles by dots. Thus the final form of the digraph becomes:

[Figure: dots a, b, c drawn vertically, a at the bottom, b in the middle, c at the top, joined by line segments.]
Thus, the diagram of a partial order obtained from its digraph by omitting the cycles of length one and the edges implied by transitivity, and by arranging all remaining edges to point upward and omitting the arrows, is called the Hasse diagram of the partial order (or of the poset).
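The edges that survive this construction are exactly the “covering” pairs: a < b with no element strictly between them. A Python sketch that computes them for the divisibility order (the example set D12 is an illustrative choice):

```python
def hasse_edges(elements, leq):
    """Covering pairs (a, b): a strictly below b with nothing strictly between.
    These are exactly the edges left after deleting loops and transitively
    implied edges from the digraph of the partial order."""
    edges = []
    for a in elements:
        for b in elements:
            if a != b and leq(a, b):
                between = any(c != a and c != b and leq(a, c) and leq(c, b)
                              for c in elements)
                if not between:
                    edges.append((a, b))
    return edges

D12 = [1, 2, 3, 4, 6, 12]
edges = sorted(hasse_edges(D12, lambda a, b: b % a == 0))
print(edges)  # -> [(1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)]
```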
2.4.10. Definition. Let A be a partially ordered set w.r.t. the relation ‘≤’. An element a ∈ A is called a
maximal element of A iff for all b in A, either b ≤ a or b and a are non-comparable. An element a in A
is called greatest element of A iff for all b in A, b ≤ a.
An element a in A is called a minimal element of A iff for all b in A either a ≤ b or b and a are non-
comparable. An element a in A is called least element of A iff for all b in A, a ≤ b.
Remark. (1) A greatest element is certainly maximal but a maximal element need not be greatest
element. Similarly, a least element is minimal but a minimal element need not be least.
(2) A partially ordered set w.r.t. a relation can have atmost one greatest element and atmost one least
element but it may have more than one maximal and minimal elements.
For example, consider the poset A whose Hasse diagram is

[Hasse diagram: three maximal elements a1, a2, a3 at the top and three minimal elements b1, b2, b3 at the bottom.]

The elements a1, a2, a3 are maximal elements of A and the elements b1, b2, b3 are minimal elements. Observe that since there is no line between b2 and b3, we can conclude neither b3 ≤ b2 nor b2 ≤ b3, showing that b2 and b3 are non-comparable.
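Maximal, minimal, greatest and least elements of a finite poset can be found by direct search; a Python sketch on the divisibility order over a small illustrative set (not the poset of the figure):

```python
A = [2, 3, 4, 5, 6]
leq = lambda a, b: b % a == 0          # divisibility order

maximal = [a for a in A if not any(b != a and leq(a, b) for b in A)]
minimal = [a for a in A if not any(b != a and leq(b, a) for b in A)]
greatest = [a for a in A if all(leq(b, a) for b in A)]
least    = [a for a in A if all(leq(a, b) for b in A)]
print(maximal, minimal, greatest, least)   # -> [4, 5, 6] [2, 3, 5] [] []
```

Note that 5 is both maximal and minimal, and that the poset has several maximal and minimal elements but no greatest or least element, exactly as Remark (2) allows.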
2.4.11 Lattice. A lattice is a partially ordered set (L, ≤) in which every subset {a, b} consisting of two
elements has a least upper bound and a greatest lower bound. We denote l.u.b. of {a, b} by
a ∨ b and call it join or sum of a and b. Similarly, we denote greatest lower bound of {a, b} by a ∧ b and
call it meet or product of a and b. Other symbols used are
l.u.b. : ⊕, +, ∪
g.l.b. : ∗, ·, ∩
A lattice is a mathematical structure with two binary operations, join and meet. A totally ordered set is obviously a lattice, but not all partially ordered sets are lattices. For example, let A be any set and ρ(A) be its power set. The partially ordered set (ρ(A), ⊆) is a lattice in which the meet and join are the same as the operations ∩ (intersection) and ∪ (union) respectively. If A has a single element, say a, then ρ(A) = {φ, A}, the least upper bound of ρ(A) is A = {a} and the greatest lower bound of ρ(A) is φ. The Hasse diagram of (ρ(A), ⊆) is a chain containing the two elements φ and {a}.


If A has two elements, say a and b, then ρ(A) = {φ, {a}, {b}, {a, b}}. The Hasse diagram is

[Hasse diagram: {a, b} at the top, {a} and {b} in the middle, and φ at the bottom.]
The l.u.b. and g.l.b. exists for every two subsets and hence ρ(A) is a lattice.
2.4.12. Example. Consider the poset (N, ≤) where ≤ is a relation of divisibility. Then N is a lattice in
which Join of a and b = a ∨ b = LCM(a, b)
Meet of a and b = a ∧ b = GCD ( a , b ) for a , b ∈ N .
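In this lattice the join and meet are just the familiar LCM and GCD, so the lattice laws can be spot-checked computationally; a Python sketch (the sampled range is an arbitrary finite check, not a proof):

```python
from math import gcd

def join(a, b):   # a ∨ b = l.u.b. under divisibility = LCM(a, b)
    return a * b // gcd(a, b)

def meet(a, b):   # a ∧ b = g.l.b. under divisibility = GCD(a, b)
    return gcd(a, b)

print(join(4, 6), meet(4, 6))   # -> 12 2
# The absorption laws a ∨ (a ∧ b) = a and a ∧ (a ∨ b) = a hold:
absorption = all(join(a, meet(a, b)) == a and meet(a, join(a, b)) == a
                 for a in range(1, 50) for b in range(1, 50))
print(absorption)  # -> True
```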

2.4.13. Example. Let n be a positive integer and let Dn be the set of all positive divisors of n. Then Dn is a lattice under the relation of divisibility. The Hasse diagrams of D20 and D30 are

[Hasse diagram of D20 = {1, 2, 4, 5, 10, 20}: 20 at the top, covering 4 and 10; 4 covers 2; 10 covers 2 and 5; 2 and 5 cover 1.]

[Hasse diagram of D30 = {1, 2, 3, 5, 6, 10, 15, 30}: 30 at the top, covering 6, 10 and 15; 6 covers 2 and 3; 10 covers 2 and 5; 15 covers 3 and 5; 2, 3 and 5 cover 1.]

2.4.14. Theorem. If (L1, ≤) and (L2 , ≤) are lattices, then (L, ≤) is lattice, where L = L1 × L 2 and the
partial order ≤ of L is product partial order.
Proof. We denote the join and meet in L1 by ∨1 and ∧1 and the join and meet in L2 by ∨2 and ∧2
respectively. We know that Cartesian product of two posets is a poset. Therefore L = L1 × L2 is a poset.
Thus all we need to show is that if ( a1, b1 ) and ( a 2 , b2 ) ∈ L, then ( a1, b1 ) ∨ ( a 2 , b2 ) and ( a1, b1 ) ∧ ( a 2 , b2 ) exist in L. Further, we know that ( a1, b1 ) ∨ ( a 2 , b2 ) = ( a1 ∨1 a 2 , b1 ∨ 2 b2 ) and
( a1, b1 ) ∧ ( a 2 , b2 ) = ( a1 ∧1 a 2 , b1 ∧2 b2 ) .

Since L1 and L2 are lattices, a1 ∨1 a2, a1 ∧1 a2, b1 ∨2 b2 and b1 ∧2 b2 exist. Hence ( a1, b1 ) ∨ ( a 2 , b2 ) and ( a1, b1 ) ∧ ( a 2 , b2 ) both exist, and therefore ( L, ≤ ) is a lattice, called the direct product of ( L1, ≤ ) and ( L2 , ≤ ) .

2.4.15. Example. Let ( A, R ) and ( B, R′ ) be posets. Then ( A × B, R′′ ) is a poset with partial order R′′
defined by ( a , b ) R′′ ( a ′, b′ ) if aRa ′ in A and bR′ b′ in B.

Solution. We note that


(i) Since R is a partial order relation on A, aRa by reflexivity. Similarly, bR′b by reflexivity. Let ( a, b ) ∈ A × B . Then ( a , b ) R′′ ( a , b ), since aRa and b R′ b.

Thus R′′ is reflexive.


(ii) Let ( a , b ) R′′ ( a ′, b ′) and ( a ′, b ′) R′′( a , b ) .

Then by definition aRa ′, a ′ Ra in A ......(1)

b R′ b ′, b ′ R′ b in B ......(2)

Since ( A, R) and ( B, R′) are posets, (1) and (2) implies a = a ′ and b = b ′ .

Thus ( a , b ) R′′ ( a ′, b ′) and ( a ′, b ′) R′′ ( a , b ) implies


( a , b ) = ( a ′, b ′)

Hence R′′ is anti-symmetric.


(iii) Let ( a , b ) R′′ ( a ′, b ′) and ( a ′, b ′) R′′ ( a ′′ , b ′′) where a , a ′, a ′′ ∈ A and b, b ′, b ′′ ∈ B

Then aRa ′ and a ′ R a ′′ ......(3)

b R′ b ′ and b ′R′ b ′′ ......(4)

By transitivity of R and R′, (3) and (4) gives


a R a ′′ and b R′ b ′′

Hence ( a , b ) R′′ ( a ′′, b ′′) so R′′ is transitive.

So, ( A × B, R′′) is a poset.

The partial order R′′ defined on the Cartesian product A × B as above is called the product partial order.
2.4.16. Example. Let L1 and L2 be lattices whose Hasse diagrams are

[Hasse diagram of L1: I1 at the top, two incomparable elements a and b in the middle, and 01 at the bottom. Hasse diagram of L2: a two-element chain with I2 above 02.]

Then L = L1 × L2 is the lattice shown in the diagram

[Hasse diagram of L = L1 × L2: (I1, I2) at the top; (I1, a), (01, I2) and (I1, b) on the next level; (01, a), (I1, 02) and (01, b) below them; and (01, 02) at the bottom.]

2.4.17. Properties of lattices. Let (L, ≤) be a lattice and let a , b, c ∈ L . Then, from the definition of ∨ (join) and ∧ (meet), we have:
(i) a ≤ a ∨ b and b ≤ a ∨ b, where a ∨ b is the least upper bound of a and b.

(ii) If a ≤ c and b ≤ c then a ∨ b ≤ c, where a ∨ b is the least upper bound of a and b.

(iii) a ∧ b ≤ a and a ∧ b ≤ b, where a ∧ b is the greatest lower bound of a and b.

(iv) If c ≤ a and c ≤ b then c ≤ a ∧ b, where a ∧ b is the greatest lower bound of a and b.
2.4.18. Theorem. Let L be a lattice, then for every a and b in L
(i) a ∨ b = b iff a ≤ b (ii) a ∧ b = a iff a ≤ b (iii) a ∧ b = a iff a ∨ b = b
Proof. (i) Let a ∨ b = b. Since a ≤ a ∨ b, we have a ≤ b.
Conversely, if a ≤ b, then since b ≤ b, it follows that b is an upper bound of a and b.
Therefore, by definition of least upper bound a ∨ b ≤ b

Also, a ∨ b being an upper bound, b ≤ a ∨ b.


Hence a ∨ b = b.
(ii) Let a ∧ b = a. We have proved in (i) that a ∨ b = b iff a ≤ b.
Therefore b ∨ (a ∧ b ) = b ∨ a = a ∨ b [commutativity]

But b ∨ (a ∧ b ) = b [Absorption law]

Hence a ∨ b = b and so by (i) a ≤ b .

Conversely, if a ≤ b and since a ≤ a , ‘a’ is lower bound of a and b, and so by the definition of greatest
lower bound, we have a ≤ a ∧ b .

Since a ∧ b is a lower bound a ∧ b ≤ a

Hence a ∧ b = a

(iii) From part (ii), a ∧ b = a iff a ≤ b ......(1)


From part (i), a ∨ b = b iff a ≤ b ......(2)
Combining (1) and (2), we get a ∧ b = a iff a ∨ b = b
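Theorem 2.4.18 can be sanity-checked by brute force in a concrete lattice. The sketch below uses the divisor lattice of 36 (an arbitrary illustrative choice), where the join is lcm, the meet is gcd and a ≤ b means a divides b:

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# All divisors of 36, ordered by divisibility
D36 = [d for d in range(1, 37) if 36 % d == 0]
for a in D36:
    for b in D36:
        a_leq_b = (b % a == 0)
        assert (lcm(a, b) == b) == a_leq_b            # (i)  a ∨ b = b iff a ≤ b
        assert (gcd(a, b) == a) == a_leq_b            # (ii) a ∧ b = a iff a ≤ b
        assert (gcd(a, b) == a) == (lcm(a, b) == b)   # (iii)
print("Theorem 2.4.18 holds in D36")
```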

2.4.19. Theorem. Let (L, ≤) be a lattice and let a, b, c ∈ L. Then we have


L1 : Idempotent property
(i) a ∨ a = a (ii) a ∧ a = a
L2 : Commutative property
(i) a ∨ b = b ∨ a (ii) a ∧ b = b ∧ a
L3 : Associative property
(i) a ∨ (b ∨ c) = (a ∨ b) ∨ c (ii) a ∧ (b ∧ c) = (a ∧ b) ∧ c
L4 : Absorption property
(i) a ∨ (a ∧ b) = a (ii) a ∧ (a ∨ b) = a
Proof. L1. The Idempotent property follows from the definition of least upper bound and greatest lower
bound.
L2. The commutativity property follows from the symmetry of a and b in the definition of l.u.b. and
g.l.b.
L3. (i) From the definition of l.u.b., we have a ≤ a ∨ ( b ∨ c ) ......(1)

b ∨ c ≤ a ∨ (b ∨ c) ......(2)

Also, b ≤ b ∨ c and c ≤ b ∨ c

So, by transitivity, b ≤ a ∨ (b ∨ c) ......(3)

and c ≤ a ∨ (b ∨ c) ......(4)

Now by (1) and (3), a ∨ ( b ∨ c ) is an upper bound of a and b and hence by definition of l.u.b. we have
a ∨ b ≤ a ∨ (b ∨ c) ......(5)

Now by (4) and (5), a ∨ (b ∨ c) is an upper bound of c and a ∨ b .

Therefore ( a ∨ b ) ∨ c ≤ a ∨ (b ∨ c) ......(6)

Similarly a ∨ (b ∨ c) ≤ ( a ∨ b ) ∨ c ......(7)

Hence by anti-symmetry, (6) and (7) gives


( a ∨ b ) ∨ c = a ∨ (b ∨ c)

(ii) The proof of (ii) of L3 is analogous to (i).



L4. (i) Since a ∧ b ≤ a and a ≤ a , it follows that a is an upper bound of a ∧ b and a. Therefore by the
definition of l.u.b. a ∨ (a ∧ b ) ≤ a ......(8)

On the other hand, by def. of l.u.b. a ≤ a ∨ (a ∧ b ) ......(9)

By (8) and (9), we get a ∨ (a ∧ b ) = a

(ii) Since a ≤ a ∨ b and a ≤ a , it follows that a is a lower bound of a ∨ b and a .

Therefore by the definition of g.l.b. a ≤ a ∧ (a ∨ b ) .....(10)

Also by definition of g.l.b., we have a ∧ ( a ∨ b ) ≤ a .....(11)

Then (10) and (11) yields a = a ∧ (a ∨ b )

Remark. In view of L3, we can write a ∨ (b ∨ c) and (a ∨ b) ∨ c as a ∨ b ∨ c.
Thus, we can express the l.u.b. of {a1, a2, ..., an} as a1 ∨ a2 ∨ ... ∨ an and the g.l.b. of {a1, a2, ..., an} as a1 ∧ a2 ∧ ... ∧ an.
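The laws L1–L4 can likewise be confirmed exhaustively in a small lattice. A sketch in the power-set lattice (P({1, 2, 3}), ⊆), where the join is union and the meet is intersection:

```python
from itertools import chain, combinations

# All 8 subsets of {1, 2, 3}
P = [frozenset(s) for s in chain.from_iterable(
        combinations([1, 2, 3], r) for r in range(4))]
for a in P:
    assert a | a == a and a & a == a                    # L1 idempotent
    for b in P:
        assert a | b == b | a and a & b == b & a        # L2 commutative
        assert a | (a & b) == a and a & (a | b) == a    # L4 absorption
        for c in P:
            assert a | (b | c) == (a | b) | c           # L3 associative
            assert a & (b & c) == (a & b) & c
print("L1-L4 hold in P({1,2,3})")
```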

2.4.20. Theorem. Let (L, ≤) be a lattice, then for any a, b, c ∈ L the following property holds
(1) If a ≤ b then (i) a ∨ c ≤ b ∨ c (ii) a ∧ c ≤ b ∧ c. This property is called “Isotonicity”.
(2) a ≤ c and b ≤ c iff a ∨ b ≤ c
(3) c ≤ a and c ≤ b iff c ≤ a ∧ b
(4) If a ≤ b and c ≤ d then (i) a ∨ c ≤ b ∨ d (ii) a ∧ c ≤ b ∧ d
Proof. (1) (i) We know that a ∨ b = b iff a ≤ b. Therefore to show that a ∨ c ≤ b ∨ c, we shall show that
( a ∨ c ) ∨ ( b ∨ c ) = b ∨ c . We know that

( a ∨ c ) ∨ ( b ∨ c ) = [( a ∨ c ) ∨ b ] ∨ c [Associativity]

= a ∨ c ∨ b ∨ c = a ∨ (b ∨ c) ∨ c [Commutativity]

= (a ∨ b ) ∨ (c ∨ c) [Associativity]

=b∨c [ a ∨ b = b, c ∨ c = c]
This proves (i). The part (ii) of (1) can be proved similarly.
(2) If a ≤ c then (1) (i) implies a ∨ b ≤ c ∨ b.
But b ≤ c ⇔ b ∨ c = c ⇔ c ∨ b = c [Commutativity]
so a ∨ b ≤ c. Conversely, if a ∨ b ≤ c then a ≤ a ∨ b ≤ c and b ≤ a ∨ b ≤ c.
Hence a ≤ c and b ≤ c iff a ∨ b ≤ c.


(3) If c ≤ a then (1) (ii) implies c ∧ b ≤ a ∧ b.
But c ≤ b ⇔ c ∧ b = c
so c ≤ a ∧ b. Conversely, if c ≤ a ∧ b then c ≤ a ∧ b ≤ a and c ≤ a ∧ b ≤ b.
Hence c ≤ a and c ≤ b iff c ≤ a ∧ b.

(4) (i) We note that (1) (i) implies


if a ≤ b then a ∨ c ≤ b ∨ c = c ∨ b [Commutativity]
if c ≤ d then c ∨ b ≤ d ∨ b = b ∨ d [Commutativity]

Hence by transitivity a ∨ c ≤ b ∨ d
(ii) We note that (1) (ii) implies that
if a ≤ b then a ∧ c ≤ b ∧ c = c ∧ b
if c ≤ d then c ∧ b ≤ d ∧ b = b ∧ d

Therefore, transitivity implies that a ∧ c ≤ b ∧ d .

2.4.21. Theorem. Let (L, ≤ ) be a lattice. If a, b, c ∈ L then


(i) a ∨ (b ∧ c) ≤ (a ∨ b) ∧ (a ∨ c) (ii) a ∧ ( b ∨ c ) ≥ ( a ∧ b ) ∨ ( a ∧ c )

These inequalities are called distributive inequalities.


Proof. We have a≤a∨b and a≤a∨c ......(1)
Also, we know that if x ≤ y and x ≤ z in a lattice, then x ≤ y ∧ z [By (3) of above theorem]

Therefore (1) yields a ≤ (a ∨ b ) ∧ (a ∨ c) ......(2)

Also, b ∧ c ≤ b ≤ a ∨ b and b ∧ c ≤ c ≤ a ∨ c,
that is, b ∧ c ≤ a ∨ b and b ∧ c ≤ a ∨ c

and so by the above arguments b ∧ c ≤ ( a ∨ b ) ∧ ( a ∨ c ) ......(3)

Also, we know that if x ≤ z and y ≤ z, then x ∨ y ≤ z [By (2) of above theorem]

Hence (2) and (3) yields a ∨ (b ∧ c) ≤ ( a ∨ b ) ∧ ( a ∨ c)

which proves (i). The (ii) inequality follows by using the principle of duality.
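Once the l.u.b. and g.l.b. are computed from the order relation, the distributive inequalities can be tested in any finite lattice. The sketch below (the pentagon lattice and all names are illustrative choices, not from the text) also shows that the inequality can be strict:

```python
# The pentagon lattice N5: 0 < a < c < I and 0 < b < I
N5 = ['0', 'a', 'b', 'c', 'I']
below = {'0': set('0abcI'), 'a': set('acI'), 'b': set('bI'),
         'c': set('cI'), 'I': set('I')}   # x -> {y : x <= y}

def leq(x, y):
    return y in below[x]

def join(x, y):   # least upper bound, computed from the order relation
    ub = [z for z in N5 if leq(x, z) and leq(y, z)]
    return next(z for z in ub if all(leq(z, w) for w in ub))

def meet(x, y):   # greatest lower bound
    lb = [z for z in N5 if leq(z, x) and leq(z, y)]
    return next(z for z in lb if all(leq(w, z) for w in lb))

for x in N5:
    for y in N5:
        for z in N5:
            # distributive inequality (i): a ∨ (b ∧ c) ≤ (a ∨ b) ∧ (a ∨ c)
            assert leq(join(x, meet(y, z)), meet(join(x, y), join(x, z)))
# ... and in N5 the inequality is strict for a, b, c:
assert join('a', meet('b', 'c')) != meet(join('a', 'b'), join('a', 'c'))
print("distributive inequality holds in N5; equality fails")
```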
2.4.22. Modular Inequality. Let (L, ≤) be a lattice. If a, b, c ∈ L then a ≤ c iff a ∨ (b ∧ c) ≤ (a ∨ b) ∧ c.
Proof. We know that a ≤ c iff a ∨ c = c ......(1)
Also, by the distributive inequality, a ∨ (b ∧ c) ≤ (a ∨ b) ∧ (a ∨ c).
So, if a ≤ c, then using (1), a ∨ (b ∧ c) ≤ (a ∨ b) ∧ c.
Conversely, if a ∨ (b ∧ c) ≤ (a ∨ b) ∧ c, then a ≤ a ∨ (b ∧ c) ≤ (a ∨ b) ∧ c ≤ c, so a ≤ c, which proves the result.

2.4.23. Exercise. Let (L, ≤) be a lattice. If a, b, c ∈ L and if a ≤ b ≤ c then



(i) a ∨ b = b ∧ c (ii) (a ∧ b) ∨ (b ∧ c) = (a ∨ b) ∧ (a ∨ c)
2.4.24. Second definition of Lattice as an algebraic system.
We have already defined a lattice as a partially ordered set in which every subset consisting of two elements has a least upper bound and a greatest lower bound. We now present another definition of a lattice as an algebraic system.
2.4.25. Definition. Let L be a non-empty set with two binary operations, called join and meet, denoted
respectively by ∨ and ∧. Then L is called a lattice if the following axioms hold where a, b, c are
elements in L.
(1) Commutative law. (i) a ∨ b = b ∨ a and (ii) a ∧ b = b ∧ a
(2) Associative law. (i) ( a ∨ b ) ∨ c = a ∨ ( b ∨ c ) and (ii) ( a ∧ b ) ∧ c = a ∧ ( b ∧ c )

(3) Absorption law. (i) a ∨ ( a ∧ b ) = a and (ii) a ∧ ( a ∨ b ) = a

Remarks (1). We sometimes denote the lattice by (L, ∨, ∧) when we want to show which operations are involved.
(2) Idempotent law can be derived using absorption law as follows.
Consider a ∨ a = a ∨ (a ∧ (a ∨ b)) [Absorption 3(ii) law]
=a [By 3(i)]
Hence a∨a =a

Similarly, we can prove a ∧ a = a.


(3) In the above definition of lattice, we do not require a partial order relation. Now we shall construct a
partial order relation on a lattice and shall finally prove that two definitions of lattice are equivalent.
2.4.26. Partial order Relation on a lattice. Given a lattice L, we can define a partial order relation ≤ on
L as a ≤ b iff a ∨ b = b

or we can define as a ≤ b iff a ∧ b = a

2.4.27. Theorem. Prove that the relation defined above on a lattice L is a partial order relation, that is, every lattice is a partially ordered set.
Proof. (i) Reflexivity. By the idempotent law, we know that
a∨a =a ∀ a∈ L

⇒ a ≤ a ∀ a∈L

(ii) Anti-Symmetry. Let a ≤ b and b ≤ a


⇒ a ∨ b = b and b ∨ a = a

But a∨b = b∨a [Commutative law]



So, a=b
(iii) Transitivity. Let a ≤ b and b ≤ c
⇒ a ∨ b = b and b ∨ c = c ......(1)

Consider a ∨ c = a ∨ (b ∨ c) [By (1)]

= (a ∨ b ) ∨ c [Associativity]

=b∨c [By (1)]


=c ⇒ a≤c
Hence, every lattice is a partially ordered set.
2.4.28. Theorem. Prove that two definitions of lattice are equivalent to one-another.
Proof. In our first definition, we have defined a lattice as a partially ordered set in which the l.u.b. and g.l.b. of every two elements exist. We denote l.u.b. {a, b} = a ∨ b
and g.l.b. {a, b} = a ∧ b
So, l.u.b. and g.l.b., denoted by ∨ and ∧, work as the two binary operations required in the second definition of a lattice. Further, we have already proved that a lattice (considered as a poset) satisfies the commutative, associative and absorption laws. Hence the first definition implies the second.
Conversely. Let (L, ∨, ∧) be a lattice according to our second definition, that is, ∨ and ∧ denote the join and meet respectively. We shall prove that L is a lattice (considered as a poset) according to the first definition.
We define the relation ≤ as follows
a ≤ b iff a ∨ b = b or a ≤ b iff a ∧ b = a

We have already proved that this is a partial order relation. Now, all we require is that l.u.b. and g.l.b. of
every two elements of L exist. To do so, we shall prove that l.u.b. of a and b is
a ∨ b and g.l.b. of a and b is a ∧ b. By absorption law, we have
b ∧ ( a ∨ b ) = b and a ∧ ( a ∨ b ) = a

⇒ a ≤ a ∨ b and b ≤ a ∨ b [By def. of ≤]

⇒ a ∨ b is an upper bound of a and b.


Suppose c is any upper bound of a and b , that is, a ≤ c and b ≤ c

⇒ a ∨ c = c and b ∨ c = c ......(1)

Then, ( a ∨ b ) ∨ c = a ∨ (b ∨ c) [Associative law]

=a∨c [By (1)]



=c [By (1)]
⇒ a∨b ≤c

Hence a ∨ b is the least upper bound of a and b. Similarly, we can show that a ∧ b is the greatest lower
bound of a and b.
2.4.29. Sublattice. Let L be a lattice. A non-empty subset S of L is said to be a sub-lattice of L iff S is
closed under the operations ∨ and ∧ of L , that is, a ∨ b ∈ S and a ∧ b ∈ S ∀ a , b ∈ S

From the definition itself, it is clear that a sublattice is itself a lattice.


2.4.30. Example. The set Dn of divisors of n is a sublattice of the natural numbers N under the relation
of divisibility.
Proof. For any fixed n, let a , b ∈ Dn .

In Dn, we know that a ∨ b = lcm { a , b} and a ∧ b = gcd { a , b}

But lcm and gcd of a and b are again divisors of n, so we have


a ∨ b ∈ Dn and a ∧ b ∈ Dn ∀ a , b ∈ Dn

⇒ Dn is a sublattice of N.

2.4.31. Lattice Homomorphism and Isomorphism. Let (L1, ∨1, ∧1) and (L2, ∨2, ∧2) be two lattices. Then a mapping f : L1 → L2 is called a lattice homomorphism if for any a, b ∈ L1,

f ( a ∨1 b ) = f ( a ) ∨ 2 f ( b ) and f ( a ∧1 b ) = f ( a ) ∧2 f ( b ) ,

that is, f is a homomorphism which preserves both the binary operations.


Remark. (i) A lattice homomorphism f : L1 → L2 always preserves the order relations. For this, let ≤1
and ≤2 be partial order relations on L1 and L2 respectively, then
a ≤1 b if a ∨1 b = b

and so f ( b ) = f ( a ∨1 b ) = f ( a ) ∨ 2 f ( b ) ⇒ f ( a ) ≤2 f ( b )

Thus a ≤1 b iff f ( a ) ≤2 f ( b )

(ii) If a lattice homomorphism is one-one and onto, then it is called lattice isomorphism. If there exists
an isomorphism between two lattices, then the lattices are called isomorphic.
(iii) Since lattice isomorphism preserves order relation, therefore isomorphic lattices can be represented
by the same diagram in which vertices are replaced by corresponding images.
2.4.32. Example. Let A = {a, b}, then the lattice (P(A), ⊆) is isomorphic to the lattice D6 under the
relation of divisibility.
Solution. P( A ) = {φ, { a } , { b} , { a , b} } and D6 = {1, 2, 3,6}

The Hasse diagrams of these lattices are represented as follows.
[Hasse diagrams of the lattices P(A) and D6]
We define mapping f : P( A ) → D6 by f (φ) = 1, f ({ a } ) = 2, f ({ b} ) = 3, f ({ a , b} ) = 6

Then, clearly f is one-one and onto and f preserves order relation as


φ ⊆ {a} ⇔ 1 | 2, i.e., f(φ) | f({a}), etc.

Hence f is an isomorphism and so P(A) and D6 are isomorphic.
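The isomorphism f of this example can be verified exhaustively; the sketch below checks that f carries union to lcm, intersection to gcd, and preserves the order:

```python
from math import gcd

def lcm(x, y):
    return x * y // gcd(x, y)

# The map f of Example 2.4.32 from subsets of {a, b} to divisors of 6
f = {frozenset(): 1, frozenset({'a'}): 2,
     frozenset({'b'}): 3, frozenset({'a', 'b'}): 6}
for S in f:
    for T in f:
        assert f[S | T] == lcm(f[S], f[T])        # f(S ∨ T) = f(S) ∨ f(T)
        assert f[S & T] == gcd(f[S], f[T])        # f(S ∧ T) = f(S) ∧ f(T)
        assert (S <= T) == (f[T] % f[S] == 0)     # f preserves the order
print("f is a lattice isomorphism")
```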


2.4.33. Definition. Let Bn denote the set of sequences of 0's and 1's of length n. We define a partial order relation on Bn as follows: if x = a1 a2 ... an and y = b1 b2 ... bn are any two elements of Bn, then x ≤ y iff ak ≤ bk for k = 1, 2, ..., n, where each ak and bk is 0 or 1.

For example, B2 = {00, 01, 10, 11} and its Hasse diagram is
[Hasse diagram of B2: 00 at the bottom, 10 and 01 in the middle, 11 at the top]

Further, we define l.u.b. and g.l.b. in Bn as


x ∨ y = lub {x, y} = d1 d2 ... dn where dk = max {ak, bk}
and x ∧ y = glb {x, y} = c1 c2 ... cn where ck = min {ak, bk}

Clearly, under these operations Bn becomes a lattice and also Bn contains 2^n elements.
For example, B3 = {000, 001, 010, 100, 011, 101, 110, 111}
Its Hasse diagram is given as
[Hasse diagram of B3: 000 at the bottom, atoms 100, 010, 001, then 110, 101, 011, and 111 at the top]

From the diagram, it is clear that the l.u.b. and g.l.b. of every two elements of B3 exist and hence B3 is a lattice.
For example, lub { 010, 001} = 011 , glb { 010, 001} = 000
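The componentwise definitions of join and meet translate directly into code. A sketch treating elements of Bn as Python strings of '0' and '1' (character comparison agrees with 0 < 1):

```python
# Join and meet in B_n as componentwise max/min of bit strings
def join(x, y):
    return ''.join(max(a, b) for a, b in zip(x, y))

def meet(x, y):
    return ''.join(min(a, b) for a, b in zip(x, y))

assert join('010', '001') == '011'    # lub {010, 001} from the text
assert meet('010', '001') == '000'    # glb {010, 001} from the text
```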

2.4.34. Bounded, Complemented and Distributive Lattices. We recall that an element x of a lattice L is called the greatest element if a ≤ x ∀ a ∈ L. Similarly, an element y of lattice L is called the least element if y ≤ a ∀ a ∈ L.

Further, let L be a lattice and S = {a1, a2,......,an} be a finite subset of L, then we shall denote l.u.b. and
g.l.b. of S as follows. l.u.b. of S = a1 ∨ a 2 ∨ ...... ∨ a n

and g.l.b. of S = a1 ∧ a 2 ∧ ...... ∧ a n

Also, we shall denote the greatest and least elements by I and 0 respectively.
2.4.35. Bounded Lattice. A lattice L is said to be bounded if L has both a greatest element and a least element. If L = {a1, a2, ..., an}, then a1 ∨ a2 ∨ ... ∨ an = I and a1 ∧ a2 ∧ ... ∧ an = 0.

So, every finite lattice is a bounded lattice.

2.4.36. Example. (1) The lattice ℤ+ of all positive integers under the partial order relation of divisibility is not a bounded lattice since it has a least element, namely 1, but no greatest element.
(2) Let A be a non-empty set then the lattice P(A) under the partial order relation of inclusion is a
bounded lattice since its greatest element is A and the least element is φ.
Remark. If (L, ≤) is a bounded lattice, then clearly 0 ≤ a ≤ I ∀ a∈L

Also, a ∨ 0 = a, a ∧ 0 = 0, a ∨ I = I, a ∧ I = a. Thus 0 acts as the identity of the operation ∨ and I acts as the identity of the operation ∧.
2.4.37. Definition. Let L be a bounded lattice with 0 and I as least and greatest element. Then an
element b ∈ L is called a complement of a if a ∨ b = I and a ∧ b = 0 .

Clearly 0 and I are complement of each other.


Remark. I is the only complement of 0. Let, if possible, c ≠ I be a complement of 0.
Then, c ∨ 0 = I and c ∧ 0 = 0

But c ∨ 0 = c , so we have

c = I , a contradiction.
Similarly, 0 is the only complement of I.
2.4.38. Complemented Lattice. A lattice L is called complemented if it is bounded and if every element
of L has at least one complement. For example,
(1) The power set P( A ) of any set is a bounded lattice under inclusion relation where join and meet are

∪ and ∩ respectively. Its bounds are φ and A. The lattice ( P( A ) , ⊆ ) is complemented in which the
complement of any subset B of A is A − B.
(2) The lattice (B3, ≤) is a bounded lattice and its bounds are 000 and 111. Further, the complement of an element of Bn can be obtained by interchanging 1 and 0 in the sequence.
For example, the complement of 101 is 010, since
l.u.b. (101, 010) = 111 = 101 ∨ 010
and g.l.b. (101, 010) = 000 = 101 ∧ 010
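The rule "interchange 1 and 0" can be checked directly. A small sketch confirming that the result really behaves as a complement (join gives 111, meet gives 000):

```python
# Complement in B_n by flipping each bit of the string
def complement(x):
    return ''.join('1' if bit == '0' else '0' for bit in x)

x = '101'
xc = complement(x)
assert xc == '010'
assert ''.join(max(a, b) for a, b in zip(x, xc)) == '111'   # x ∨ x' = I
assert ''.join(min(a, b) for a, b in zip(x, xc)) == '000'   # x ∧ x' = 0
```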
Remark. It should be noted that in a bounded lattice complements need not exist and need not be unique
as well. For example, in the bounded lattice shown as in the figure below, we note that a and c are both
complements of b. Also, in the chain represented as in the figure below, the element a, b, c have no
complements.
So, these two lattices are bounded but one is complemented other is not.

I I
c
c
b b

a a

0
0

2.4.39. Distributive Lattice. A lattice L is called a distributive lattice if for any a, b, c in L


(i) a ∨ ( b ∧ c ) = ( a ∨ b ) ∧ ( a ∨ c ) (ii) a ∧ ( b ∨ c ) = ( a ∧ b ) ∨ ( a ∧ c ) ,

that is, operations ∧ and ∨ are distributive over each other.


We further note that, by the principle of duality, the condition (i) holds iff the conditions (ii) holds.
Therefore, it is sufficient to verify one of these conditions. If a lattice L is not distributive, we say that L
is non-distributive.

For example, the power set P(A) of any set A is a distributive lattice. We know that join and meet
operations in P(A) are union and intersection respectively. Also, we know that union and intersection are
distributive over each other, that is,

R ∪ ( S ∩ T ) = ( R ∪ S) ∩ ( R ∪ T )

and R ∩ ( S ∪ T ) = ( R ∩ S) ∪ ( R ∩ T )

2.4.40. Theorem. (Without proof). A lattice L is non-distributive if and only if it contains a sublattice

isomorphic to any one of the following two five-element lattices:
[Hasse diagrams of the two five-element non-distributive lattices, the pentagon and the diamond]

2.4.41. Example. Is the following lattice a distributive lattice?
[Hasse diagram of a lattice with elements 0, a, b, c, d, e, I]

Solution. The given lattice is not distributive since {0, a, d, e, I} is a sublattice of the given lattice which is isomorphic to the five-element lattice shown in the figure:
[Hasse diagram of the five-element sublattice {0, a, d, e, I}]
2.4.42. Theorem. Let L be a bounded distributive lattice. If a complement of any element exists, it is
unique.
Proof. Suppose b and c are complements of an element a ∈ L . Then
a∨b = I and a ∧b =0

a∨c= I and a ∧c=0

Using distributivity, we have


b = b ∨ 0 = b ∨ ( a ∧ c) = (b ∨ a ) ∧ (b ∨ c)

= ( a ∨ b ) ∧ (b ∨ c) = I ∧ (b ∨ c) = b ∨ c

Similarly, c = c ∨ 0 = c ∨ (a ∧ b ) = (c ∨ a ) ∧ (c ∨ b )

= ( a ∨ c) ∧ (b ∨ c) = I ∧ (b ∨ c) = b ∨ c

Hence b = c.
2.4.43. Join Irreducible elements and atoms.
Definition. Let L be a lattice, then an element a ∈ L is called join-irreducible if it can not be expressed
as the join of two distinct elements of L other than a.

In other words, an element a ∈ L is said to be join irreducible if a = x ∨ y implies a = x or a = y.

Remark. (i) Prime numbers under multiplication have this property , that is, if p = ab then p = a or p = b
where p is prime.
(ii) Clearly, 0 is join-irreducible.
(iii) If a has at least two immediate predecessors, say b1 and b2, as shown in the figure
[Hasse diagram: a covering both b1 and b2]
then a = b1 ∨ b2, and so a is not a join-irreducible element.

(iv) On the other hand, if a has a unique immediate predecessor c, then a ≠ b1 ∨ b2 for any other elements b1 and b2, because c would lie between b1, b2 and a.
[Hasse diagram: a covering c, with c above b1 and b2]
(v) By above two remarks, it is clear that a ≠ 0 is join irreducible if and only if ‘a’ has a unique
immediate predecessor.
2.4.44. Definition. Those elements which immediately succeed 0 are called atoms. For example, the elements a, b, c are atoms in figure (i) below.
From the above discussion, it follows that atoms are join-irreducible but the converse may not be true. For example, c is a join-irreducible element in lattice (ii) but c is not an atom.
[Hasse diagrams of lattices (i) and (ii)]
Remark. If an element a in a finite lattice L is not join irreducible, then we can write a = b1 ∨ b2. Then we can write b1 and b2 as the join of other elements if they are not join irreducible, and so on. Since L is finite, we finally have a = d1 ∨ d2 ∨ ... ∨ dn, where the d's are join irreducible. If di ≤ dj then di ∨ dj = dj; so, we can delete di from the expression. In other words, we can assume that the d's are irredundant, that is, no d precedes any other d. Hence a can be expressed as a join of irredundant join irreducible elements. However, we give an example to show that such an expression need not be unique. For an example, consider the lattice given in the figure, with atoms a, b, c below I; we see that
[Hasse diagram: 0 below the atoms a, b, c, which lie below I]
I = a ∨ b and I = b ∨ c

2.4.45. Theorem. Let L be a finite distributive lattice. Then every a in L can be written uniquely (except for order) as the join of irredundant join irreducible elements.
Proof. Since L is finite, we can write a as the join of irredundant join irreducible elements as discussed
above. Thus, we need to prove uniqueness. Suppose
a = b1 ∨ b2 ∨ ...... ∨ br = c1 ∨ c2 ∨ ...... ∨ cs

where b’s are irredundant and join-irreducible and c’s are also irredundant and join irreducible. For any
given i, we have
bi ≤ b1 ∨ b2 ∨ ...... ∨ br = c1 ∨ c2 ∨ ..... ∨ cs

Hence bi = bi ∧ ( c1 ∨ c2 ∨ ...... ∨ cs )

= ( bi ∧ c1 ) ∨ ( bi ∧ c2 ) ∨ ...... ∨ ( bi ∧ cs )

Since bi is join irreducible, there exists a j such that bi = bi ∧ cj and so bi ≤ cj. By a similar argument, for
cj there exists a bk such that c j ≤ bk . Therefore bi ≤ c j ≤ bk

which gives bi = c j = bk since the b’s are irredundant. Accordingly the b’s and c’s may be paired off.
Thus the representation for a is unique except for order.
2.4.46. Theorem. Let L be a complemented lattice with unique complements. Then the join irreducible
elements of L, other than 0, are its atoms.
Proof. Suppose a is join irreducible and is not an atom. Then a has a unique immediate predecessor b ≠ 0. Let b′ be the complement of b. Since b ≠ 0 we have b′ ≠ I. If a ≤ b′ then b ≤ a ≤ b′ and so b ∨ b′ = b′, which is impossible since b ∨ b′ = I. Thus a does not precede b′ and so a ∧ b′ must strictly precede a. Since b is the unique immediate predecessor of a, we also have that a ∧ b′ precedes b, as shown in the figure:
[Hasse diagram: a above b, and b above a ∧ b′, with b′ to the side]
But a ∧ b′ also precedes b′. Hence a ∧ b′ ≤ glb {b, b′} = b ∧ b′ = 0, so a ∧ b′ = 0. Since a ∨ b = a, we also have a ∨ b′ = (a ∨ b) ∨ b′ = a ∨ (b ∨ b′) = a ∨ I = I.
Therefore b′ is a complement of a. Since complements are unique, a = b. This contradicts the assumption that b is an immediate predecessor of a. Thus the only join irreducible elements of L, other than 0, are its atoms.
Remark. Since every finite lattice is a bounded lattice, Theorem 2.4.42 can be restated as "Let L be a finite distributive lattice; if a complement of any element exists, it is unique." Combining this result with the above two theorems, we get
2.4.47. Theorem. Let L be a finite complemented distributive lattice. Then every element a in L is the join of a unique set of atoms.
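Theorem 2.4.47 can be illustrated in the lattice P({a, b, c}), whose atoms are the singletons: every element is the union of exactly the singletons it contains. A sketch:

```python
from itertools import combinations

# Every subset S of {a, b, c} is the join (union) of its one-element
# subsets -- the unique set of atoms below S.
elems = 'abc'
for r in range(len(elems) + 1):
    for s in combinations(elems, r):
        S = frozenset(s)
        atoms = [frozenset({x}) for x in S]     # the atoms contained in S
        assert frozenset().union(*atoms) == S   # S is the join of its atoms
print("every element of P({a,b,c}) is a join of a unique set of atoms")
```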
Books Recommended:
1. Kenneth H. Rosen, Discrete Mathematics and Its Applications, Tata McGraw-Hill, Fourth Edition.
2. Seymour Lipschutz and Marc Lipson, Theory and Problems of Discrete Mathematics, Schaum
Outline Series, McGraw-Hill Book Co, New York.
3. John A. Dossey, Otto, Spence and Vanden K. Eynden, Discrete Mathematics, Pearson, Fifth
Edition.
4. J.P. Tremblay, R. Manohar, “Discrete mathematical structures with applications to computer
science”, Tata-McGraw Hill Education Pvt.Ltd.
5. J.E. Hopcraft and J.D.Ullman, Introduction to Automata Theory, Langauages and Computation,
Narosa Publishing House.
6. M. K. Das, Discrete Mathematical Structures for Computer Scientists and Engineers, Narosa
Publishing House.
7. C. L. Liu and D.P.Mohapatra, Elements of Discrete Mathematics- A Computer Oriented Approach,
Tata McGraw-Hill, Fourth Edition.
3
Boolean Algebra

Structure
3.1. Introduction.
3.2. Boolean Algebra.
3.3. Logic Gates and Circuits.
3.4. Karnaugh Maps.
3.1. Introduction. This chapter contains results related to Boolean algebra, Switching theory and
Karnaugh maps.
3.1.1. Objective. The objective of the study of these results is to understand the concepts and relations
between the elements of Boolean algebra, AND, OR and NOT gates.
3.2. Boolean Algebra. Let B be a non-empty set with two binary operations ∨ and ∧, a unary operation ′
and two distinct elements 0 and I. Then B is called a Boolean algebra if the following axioms hold where
a, b, c are any elements in B.
[B1] Commutative laws. a ∨ b = b ∨ a and a ∧ b = b ∧ a
[B2] Distributive laws. a ∧ (b ∨ c) = ( a ∧ b ) ∨ ( a ∧ c)

and a ∨ (b ∧ c) = ( a ∨ b ) ∧ ( a ∨ c)

[B3] Identity laws. a ∨ 0 = a and a ∧ I = a

[B4] Complement laws. a ∨ a′ = I and a ∧ a′ = 0

We shall call 0 as zero element, I as unit element and a′ as the complement of a. We denote a Boolean
algebra by ( B, ∨, ∧, ', 0, I ) .

3.2.1. Example. Let A be a non-empty set and ρ(A) be its power set. Then the collection ρ(A) is a
Boolean algebra with the empty set φ as the zero element and the set A as the unit element under the set
operations of union, intersection and complement i.e., (ρ( A) , ∪, ∩, ', φ, A ) is a Boolean algebra.

(i) Commutative laws. L ∪ M = M ∪ L and L ∩ M = M ∩ L

(ii) Distributive laws. L ∩ (M ∪ N) = (L ∩ M ) ∪ (L ∩ N)



L ∪ (M ∩ N) = (L ∪ M ) ∩ (L ∪ N)

(iii) Identity laws. L ∪ φ = L and L ∩ A = L

(iv) Complement laws. L ∪ L′ = L ∪ ( A − L ) = A

L ∩ L′ = L ∩ ( A − L ) = φ

3.2.2. Example. Let B = {0, 1} be the set of bits (binary digits) with the binary operation ∨ and ∧ and
the unary operation ′ is defined by the following tables.

∨ | 1 0        ∧ | 1 0        x | x′
1 | 1 1        1 | 1 0        1 | 0
0 | 1 0        0 | 0 0        0 | 1

Here the complement of 1 is 0 and the complement of 0 is 1, and (B, ∨, ∧, ′, 0, 1) is a Boolean algebra.
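The tables above determine ∨ as max, ∧ as min and x′ = 1 − x on {0, 1}, so the axioms [B1]–[B4] can be verified by exhausting all cases:

```python
# Brute-force check of [B1]-[B4] for B = {0, 1}
B = (0, 1)

def c(x):            # complement: 1' = 0 and 0' = 1
    return 1 - x

for a in B:
    assert max(a, 0) == a and min(a, 1) == a            # [B3] identity laws
    assert max(a, c(a)) == 1 and min(a, c(a)) == 0      # [B4] complement laws
    for b in B:
        assert max(a, b) == max(b, a)                   # [B1] commutativity
        assert min(a, b) == min(b, a)
        for d in B:
            assert min(a, max(b, d)) == max(min(a, b), min(a, d))  # [B2]
            assert max(a, min(b, d)) == min(max(a, b), max(a, d))
print("({0,1}, max, min, ') satisfies [B1]-[B4]")
```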

3.2.3. Example. Let Bn be the set of n-tuples whose members are either 0 or 1, that is, Bn = B × B × ... × B (n times).
Let a = ( a1, a 2 ,......, a n ) and b = ( b1, b2 ,......, bn ) be any two members of Bn. Then we define
a ∨1 b = ( a1 ∨ b1, a 2 ∨ b2 ,......, a n ∨ bn )

a ∧1 b = ( a1 ∧ b1, a 2 ∧ b2 ,......, a n ∧ bn )

where ∨ and ∧ are the logical operations on {0, 1} defined above in Example 3.2.2, and the complement is a′ = (a1′, a2′, ..., an′), where 0′ = 1 and 1′ = 0.

Let 0n = (0, 0, ..., 0) and In = (1, 1, ..., 1).

Then ( B n , ∨1, ∧1, ' , 0 n , I n ) is a Boolean algebra. This algebra is known as switching algebra and
represents a switching network with n inputs and 1 output.
3.2.4. Example. The poset D30 = {1, 2, 3, 5, 6, 10, 15, 30} has eight elements. Define ∨, ∧ and ′ on D30 by a ∨ b = lcm {a, b}, a ∧ b = gcd {a, b} and a′ = 30/a.

Then D30 is a Boolean algebra with 1 as the zero element and 30 as the unit element.
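The complement a′ = 30/a of this example can be checked for every divisor of 30; the verification works precisely because 30 = 2·3·5 is squarefree:

```python
from math import gcd

def lcm(x, y):
    return x * y // gcd(x, y)

D30 = [d for d in range(1, 31) if 30 % d == 0]
for a in D30:
    ac = 30 // a
    assert lcm(a, ac) == 30      # a ∨ a' = I (the unit element 30)
    assert gcd(a, ac) == 1       # a ∧ a' = 0 (the zero element 1)
print("every element a of D30 has complement 30/a")
```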
Remark. If a set A has n elements then ρ(A) has 2^n elements and the partial order relation on ρ(A) is the set inclusion '⊆'. If A has 1 element, 2 elements and 3 elements, then the corresponding Boolean algebras are shown by the following diagrams.

[Boolean algebra for a singleton set: Hasse diagram with 0 = φ and I = {a}]
[Boolean algebra formed by the set having two elements: Hasse diagram of P({a, b})]
[Boolean algebra for the set having three elements: Hasse diagram of P({a, b, c})]

3.2.5. Example. Let S be the set of statement formulae involving n statement variables. The algebraic system (S, ∧, ∨, ¬, F, T) is a Boolean algebra in which ∧, ∨, ¬ denote the operations of conjunction, disjunction and negation respectively. The elements F and T denote the formulae which are contradictions and tautologies respectively. The partial ordering corresponding to conjunction and disjunction is implication.
3.2.6. Definition. A second definition of a Boolean algebra is given as follows.
A finite lattice is called a Boolean algebra if it is isomorphic with Bn for some non-negative integer n.
For example, in Example 3.2.4, D30 is isomorphic to B3. In fact, the mapping f : D30 → B3 defined by
f(1) = 000, f(2) = 100, f(3) = 010, f(5) = 001, f(6) = 110, f(10) = 101, f(15) = 011, f(30) = 111
is an isomorphism. Hence D30 is a Boolean algebra.


But if we examine D20 = {1, 2, 4, 5, 10, 20}, we find that it has 6 elements and 6 ≠ 2^n for any integer n ≥ 0. Therefore, D20 is not a Boolean algebra.


Remark. If a finite lattice L does not contain 2^n elements for some non-negative integer n, then L cannot be a Boolean algebra. If |L| = 2^n, then L may or may not be a Boolean algebra. If L is isomorphic to Bn, then it is a Boolean algebra, otherwise it is not. For large values of n, we use the following theorem for determining whether Dn is a Boolean algebra or not.
3.2.7. Theorem. If n = p1 p2 ... pk, where the pi are distinct primes (these primes form the set of atoms of Dn), then Dn is a Boolean algebra.

Proof. Let A = {p1, p2, ..., pk}. If B ⊆ A and aB is the product of the primes in B, then aB divides n. Also, any divisor of n must be of the form aB for some subset B of A, where we assume that aφ = 1.
Further, if C and B are subsets of A, then C ⊆ B if and only if aC | aB.
Also, aC∩B = aC ∧ aB = gcd (aC, aB)
and aC∪B = aC ∨ aB = lcm (aC, aB)
Thus the function f : ρ(A) → Dn defined by f(B) = aB is an isomorphism. Since ρ(A) is a Boolean algebra, it follows that Dn is also a Boolean algebra.
For example, consider D20, D30, D210, D66, D646. We notice that
(i) 20 = 2^2 · 5 cannot be represented as a product of distinct primes and so D20 is not a Boolean algebra.
(ii) 30 = 2·3·5, where 2, 3, 5 are distinct primes, so D30 is a Boolean algebra. The others can be examined similarly.
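The criterion of Theorem 3.2.7 amounts to testing whether n is squarefree. A small helper (not from the text) doing this by trial division:

```python
# n is a product of distinct primes exactly when no prime square divides n
def squarefree(n):
    p, m = 2, n
    while p * p <= m:
        if m % (p * p) == 0:
            return False          # p^2 divides n, so n is not squarefree
        if m % p == 0:
            m //= p               # remove the single factor p
        else:
            p += 1
    return True

assert not squarefree(20)             # D20 is not a Boolean algebra
for n in (30, 66, 210, 646):          # products of distinct primes
    assert squarefree(n)              # so each D_n is a Boolean algebra
```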
3.2.8. Duality. The dual of any statement in a Boolean algebra B is obtained by exchanging ∧ and ∨ and
interchanging the zero element and unit element in the original statement.
e.g. the dual of a ∧ 0 = 0 is a ∨ I = I , i.e., ∧ ↔ ∨, 0 ↔ I
3.2.9. Principle of Duality. The dual of any theorem in a Boolean algebra is also a theorem. In other
words, if any statement is a consequence of the axioms of a Boolean algebra, then the dual is also a
consequence of those axioms since the dual statement can be proven by using the dual of each step of
the proof of the original statement.
3.2.10. Theorem. Let a, b and c be elements in a Boolean algebra (B, ∨, ∧, ′, 0, I). Then
(1) Idempotent laws. (i) a ∨ a = a (ii) a ∧ a = a
(2) Boundedness laws. (i) a ∨ I = I (ii) a ∧ 0 = 0
(3) Absorption laws. (i) a ∨ ( a ∧ b ) = a (ii) a ∧ ( a ∨ b ) = a

(4) Associative laws. (i) a ∨ ( b ∨ c ) = ( a ∨ b ) ∨ c (ii) a ∧ ( b ∧ c ) = ( a ∧ b ) ∧ c

Proof. It is sufficient to prove (i) part of each law, since (ii) part follows from (i) by principle of duality.
(1) (i) We have a =a∨0 [Identity law in Boolean algebra]

= a ∨ ( a ∧ a ′) [By complement law]

= ( a ∨ a ) ∧ ( a ∨ a ′) [Distributive law]

= (a ∨ a ) ∧ I [Complement law]

=a∨a [Identity law]


which proves (i).

(2) (i) We have a ∨ I = (a ∨ I ) ∧ I [Identity law]

= ( a ∨ I ) ∧ ( a ∨ a ′) [Complement law]

= a ∨ ( I ∧ a ′) [Distributive law]

= a ∨ a′ [Identity law]
=I [Complement law]
which proves (i).
(3) (i) We note that a ∨ ( a ∧ b ) = ( a ∧ I ) ∨ ( a ∧ b ) [Identity law]

= a ∧ (I ∨ b) [Distributive law]

= a ∧ (b ∨ I ) [Commutative law]
=a∧I [Identity law]
=a [Identity law]
which proves (i).
(4) (i) Let L = ( a ∨ b ) ∨ c, R = a ∨ (b ∨ c)

Let a ∧ L = a ∧ [( a ∨ b ) ∨ c ]

= [a ∧ (a ∨ b ) ] ∨ (a ∧ c) [Distributive law]

= a ∨ (a ∧ c) [Absorption law]

=a [Absorption law]
and a ∧ R = a ∧ [a ∨ (b ∨ c) ]

= ( a ∧ a ) ∨ [a ∧ (b ∨ c) ] [Distributive law]

= a ∨ [a ∧ (b ∨ c) ] [Idempotent law]

=a [Absorption law]
Thus, a∧L =a∧R (1)

Further, a ′ ∧ L = a ′ ∧ [ ( a ∨ b ) ∨ c]

= [a ′ ∧ (a ∨ b ) ] ∨ (a ′ ∧ c) [Distributive law]

= [( a ′ ∧ a ) ∨ ( a ′ ∧ b ) ] ∨ ( a ′ ∧ c ) [Distributive law]

= [0 ∨ ( a ′ ∧ b ) ] ∨ ( a ′ ∧ c) [Complement law]

= (a ′ ∧ b ) ∨ (a ′ ∧ c) [Identity law]

= a ′ ∧ (b ∨ c) [Distributive law]

Similarly, a ′ ∧ R = a ′ ∧ [a ∨ (b ∨ c) ]

= ( a ′ ∧ a ) ∨ [ a ′ ∧ ( b ∨ c )] [Distributive law]

= 0 ∨ [a ′ ∧ (b ∨ c) ] [Complement law]

= a ′ ∧ (b ∨ c) [Identity law]

Hence, a′ ∧ L = a′ ∧ R (2)

Therefore, L = 0∨ L

= ( a ∧ a ′) ∨ L [Complement law]

= ( a ∧ L) ∨ ( a′ ∧ L) [Distributive law]

= ( a ∧ R) ∨ ( a ′ ∧ R) [Using (1) and (2)]

= ( a ∧ a ′) ∨ R [Distributive law]

=0∨R [Complement law]

=R [Identity law]
Hence ( a ∨ b ) ∨ c = a ∨ ( b ∨ c ) , which proves (i).

Similarly (4) (ii) can be proved.


3.2.11. Theorem. Let ‘a’ be any element of a Boolean algebra B, then
(i) Complement of a is unique. (ii) Involution law. (a′)′ = a (iii) 0′ = I and I′ = 0
Proof. (i) Let a′ and x be two complements of a ∈ B , then
a ∨ a′ = I and a ∧ a′ = 0 (1)

and a∨x=I and a∧x =0 (2)

and, we have a′ = a′ ∨ 0 [Identity law]

= a ′ ∨ (a ∧ x ) [By (2)]

= (a ′ ∨ a ) ∧ (a ′ ∨ x ) [Distributive law]

= I ∧ (a ′ ∨ x ) [By (1)]

= a′ ∨ x [Identity law]

Also, x = x∨0 [Identity law]

= x ∨ ( a ∧ a ′) [By (1)]

= ( x ∨ a ) ∧ ( x ∨ a ′) [Distributive law]

= I ∧ ( x ∨ a ′) [By (2)]

= x ∨ a′ [Identity law]

= a′ ∨ x [Commutative law]

Hence a ′ = x , so complement of a is unique.

(ii) Let a ′ be complement of a. Then


a ∨ a′ = I and a ∧ a′ = 0

or a′ ∨ a = I and a′ ∧ a = 0 [Commutative law]

This implies that a is the complement of a′, i.e., a = ( a ′)′ .

(iii) By boundedness law, 0 ∨ I = I and by Identity law 0 ∧ I = 0


These two relations imply that I is the complement of 0, i.e., I = 0′.
By the principle of duality, we have 0 = I′,
which completes the proof.
3.2.12. De Morgan's laws. Let a and b be elements of a Boolean algebra then
( a ∧ b )′ = a ′ ∨ b ′ and ( a ∨ b )′ = a ′ ∧ b ′

Proof. We have ( a ∨ b ) ∨ ( a ′ ∧ b ′) = ( b ∨ a ) ∨ ( a ′ ∧ b ′) [Commutative law]

= b ∨ [ a ∨ ( a ′ ∧ b ′)] [Associative law]

= b ∨ [( a ∨ a ′) ∧ ( a ∨ b ′)] [Distributivity]

= b ∨ [ I ∧ ( a ∨ b ′) ] [Complement law]

= b ∨ ( a ∨ b ′) [Identity law]

= b ∨ (b′ ∨ a ) [Commutative law]

= ( b ∨ b ′) ∨ a [Associative law]

= I ∨a [Complement law]

=I [Boundedness law]
Also, ( a ∨ b ) ∧ ( a ′ ∧ b ′ ) = [ ( a ∨ b ) ∧ a ′] ∧ b ′ [Associative law]

= [( a ∧ a ′) ∨ ( b ∧ a ′) ] ∧ b ′ [Distributive law]

= [ 0 ∨ ( b ∧ a ′)] ∧ b ′ [Complement law]



= (b ∧ a ′) ∧ b ′ [Identity law]

= ( b ∧ b ′) ∧ a ′ [Associative law]

= 0 ∧ a′ [Complement law]

=0 [Boundedness law]
So, by the uniqueness of complement, we have ( a ∨ b )′ = a ′ ∧ b ′
The other part follows by principle of duality.
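These identities can be checked mechanically. The following Python sketch (an added illustration, not part of the proof) verifies both De Morgan laws over the two-element Boolean algebra B = {0, 1}, where join is max, meet is min and complement is 1 − a:

```python
# Sanity check of De Morgan's laws in the two-element Boolean algebra B = {0, 1}.
from itertools import product

def join(a, b): return max(a, b)   # a ∨ b
def meet(a, b): return min(a, b)   # a ∧ b
def comp(a): return 1 - a          # a'

for a, b in product([0, 1], repeat=2):
    assert comp(meet(a, b)) == join(comp(a), comp(b))   # (a ∧ b)' = a' ∨ b'
    assert comp(join(a, b)) == meet(comp(a), comp(b))   # (a ∨ b)' = a' ∧ b'
print("De Morgan's laws hold in B = {0, 1}")
```

Of course, the algebraic proof above is needed to establish the laws in an arbitrary Boolean algebra; the check only illustrates them in the smallest one.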
3.2.13. Boolean Algebras as lattices. It follows from the above discussion that, every Boolean algebra
B satisfies the associative, commutative and absorption laws and hence is a lattice where ∨ and ∧ are the
join and meet operations respectively. With respect to this lattice, a ∨ I = I implies a ≤ I and a ∧ 0 = 0
implies 0 ≤ a for any element a ∈ B . Thus B is a bounded lattice. Furthermore axioms [B2 ] and [B4 ]
show that B is also distributive and complemented. Conversely, every bounded, distributive, and
complemented lattice satisfies the axioms [B1] through [B4]. Hence, we can give an alternate definition
of a Boolean algebra as follows:
3.2.14. Definition. A Boolean algebra B is a bounded, distributive and complemented lattice. Now, since
a Boolean algebra is a lattice, it must have a partial ordering. In the case of a lattice, we have defined
a ≤ b if a ∨ b = b or, equivalently, a ∧ b = a holds.

3.2.15. Theorem. If a, b are in Boolean algebra then the following are equivalent
(i) a ∨ b = b (ii) a ∧ b = a (iii) a′ ∨ b = I (iv) a ∧ b′ = 0
Proof. (i) ⇔ (ii) has been already proved.
Now (i) ⇒ (iii)
Suppose a∨b=b ......(1)
Then a ′ ∨ b = a ′ ∨ (a ∨ b ) [By (1)]

= (a ′ ∨ a ) ∨ b [Associativity]

= I ∨b [Complement law]

=I [Boundedness law]
Conversely, let a′ ∨ b = I ......(2)

then a ∨ b = I ∧ (a ∨ b ) = (a ′ ∨ b ) ∧ (a ∨ b ) [By (2)]

= (a ′ ∧ a ) ∨ b [Distributivity]

= 0 ∨b [Complement law]

=b

Thus (i) ⇔ (iii)


We now prove that (iii) ⇔ (iv)
Suppose first that (iii) holds, then using De-Morgan’s law and involution law
0 = I ′ = ( a ′ ∨ b )′ = a ′′ ∧ b ′ = a ∧ b ′

Conversely, let (iv) holds


I = 0 ′ = ( a ∧ b ′)′ = a ′ ∨ b ′′ = a ′ ∨ b

Hence (iii) ⇔ (iv). So all four are equivalent.


3.2.16. Example. Show that the lattice whose Hasse diagram is given in the adjoining figure is not a Boolean algebra.
[Figure: Hasse diagram of a lattice with least element 0, greatest element I and intermediate elements a, b, c, d, e, f]
Solution. The elements a and e are both complements of c, since
c ∨ a = I and c ∧ a = 0 and c ∨ e = I , c ∧ e = 0
But in a Boolean algebra, the complement of an element is unique. Hence the given lattice is not a Boolean
algebra.
3.2.17. Definition. Let ( B, ∨, ∧, ′, 0, I ) be a Boolean algebra and S ⊆ B. If S contains the elements 0 and I
and is closed under the operations join (∨), meet (∧) and complement ( ′ ), then ( S, ∨, ∧, ′, 0, I ) is
called a sub-Boolean algebra.
In practice, it is sufficient to check closure w.r.t. the set of operations (∧, ′ ) or (∨ , ′ ).
The definition of sub-Boolean algebra implies that it is a Boolean algebra. But a subset of a Boolean
algebra can be a Boolean algebra in its own right without being a sub-Boolean algebra, if it is not closed
w.r.t. the join and meet operations of B.
For any Boolean algebra (B, ∨, ∧, ′, 0, I) the subsets {0, I} and the set B are both sub-Boolean algebras.
In addition to these sub-Boolean algebras consider now any element a ∈ B s.t. a ≠ 0, a ≠ I and consider
the set { a , a ′, 0, I } .

Obviously, this set is a sub-Boolean algebra of the given Boolean algebra.


For example, D70 = {1, 2, 5, 7, 10, 14, 35, 70} is a Boolean algebra and set consisting of {1, 2, 35, 70} is
a sub-Boolean algebra of D70.
Every element of a Boolean algebra generates a sub-Boolean algebra.
More generally, any subset of B generates a sub-Boolean algebra.
3.2.18. Definition. Let ( B1, ∧1, ∨1, ′, 01, I1 ) and ( B2 , ∧2 , ∨ 2 , ′′, 0 2 , I 2 ) be two Boolean algebra’s. The direct
product of the two Boolean algebra’s is defined to be a Boolean algebra, denoted by
( B1 × B2 , ∧3 , ∨ 3 , ′′′, 0 3 , I 3 ) in which the operations are defined for any ( a1, b1 ) and ( a 2 , b2 ) ∈ B1 × B2 as
follows.
( a1, b1 ) ∧3 ( a 2 , b2 ) = ( a1 ∧1 a 2 , b1 ∧2 b2 )

( a1, b1 ) ∨ 3 ( a 2 , b2 ) = ( a1 ∨1 a 2 , b1 ∨ 2 b2 )

( a1, b1 )′′′ = ( a1′ , b1′′ )

0 3 = ( 01, 0 2 )

I 3 = ( I1 , I 2 )

Thus from a Boolean algebra B , we can generate


B^2 = B × B and B^3 = B × B × B etc.

3.2.19. Definition. Let ( B, ∧, ∨, ′, 0, I ) and ( P, ∩, ∪, ‾, α, β ) be two Boolean algebras. A mapping


f :B → P is called a Boolean homomorphism if all the operations of the Boolean algebra are preserved
i.e., for any a, b ∈ B.
f ( a ∧ b ) = f ( a ) ∩ f (b )

f ( a ∨ b ) = f ( a ) ∪ f (b )

f ( a ′) = ( f ( a ) )‾

f (0 ) = α

f ( I ) = β

The above definition of homomorphism can be simplified by asserting that f : B → P preserves either
the operations ∧ and ′ or the operations ∨ and ′.
Now, we consider a mapping g : B → P in which the operations ∧ and ∨ are preserved. Thus g is a
lattice homomorphism. g preserves the order and hence maps the bounds 0 and I onto the least and
greatest elements, respectively, of the set g( B ) ⊆ P . It is, however, not necessary that
g( 0 ) = α and g( I ) = β .

The complements, if defined in terms of g( 0 ) and g( I ) in g( B ), are preserved and
( g( B ), ∩, ∪, ‾, g( 0 ), g( I ) ) is a Boolean algebra.
Note that g : B → P is not a Boolean homomorphism. Although g : B → g( B ) is a Boolean
homomorphism. Thus for any mapping from a Boolean algebra which preserves the operations ∨ and ∧,
the image set is a Boolean algebra.
A Boolean homomorphism is called a Boolean isomorphism if it is bijective.
3.2.20. Representation Theorem. Let B be a finite Boolean algebra. We know that an element 'a' in B
is called an atom or minterm if 'a' immediately succeeds the least element zero, i.e., a covers 0. Let A be the
set of the atoms of B and let P(A) be the Boolean algebra of all subsets of the set A of atoms. Then (as
proved in the chapter on lattices) each x ≠ 0 in B can be expressed uniquely (except for order) as the join of
atoms (i.e., elements of A).
i.e., x = a1 ∨ a2 ∨......∨ an.
Stone's Representation Theorem. Any Boolean algebra is isomorphic to a power set algebra
( P( S ), ∩, ∪, ‾, φ, S ) for some set S. Restricting our discussion to a finite algebra B, the representation
theorem is:
Theorem. Let B be a finite Boolean algebra and let A be the set of atoms of B. If P(A) is the Boolean
algebra of all subsets of the set A of atoms, then there exists a mapping f : B → P(A) which is an
isomorphism.
Proof. Suppose B is a finite Boolean algebra and P(A) is the Boolean algebra of all subsets of the set A
of atoms of B. Consider the mapping f : B → P( A ) defined by f ( x ) = { a1 , a2 ,..., ar } , where
x = a1 ∨ a2 ∨ ... ∨ ar is the unique representation of x ∈ B as the join of atoms a1 , a2 ,..., ar ∈ A . If the ai
are atoms, then we know that
ai ∧ ai = ai
but ai ∧ aj = 0 for ai ≠ aj

Let x and y be in the Boolean algebra B and suppose


x = a1 ∨ a 2 ∨ ...... ∨ a r ∨ b1 ∨ b2 ∨ ...... ∨ b s

y = b1 ∨ b2 ∨ ...... ∨ b s ∨ c1 ∨ c2 ∨ ...... ∨ ct

where A = {a1, a 2 ,......, a r ; b1, b2 ,....., b s ; c1, c2 ,......, ct ; d1, d2 ,......, d k } be the set of atoms of B.

Then x ∨ y = a1 ∨ a2 ∨ ...... ∨ ar ∨ b1 ∨ b2 ∨ ...... ∨ bs ∨ c1 ∨ c2 ∨ ...... ∨ ct

and x ∧ y = b1 ∨ b2 ∨ ...... ∨ b s

Hence, f ( x ∨ y ) = { a1, a 2 ,......, a r , b1, b2 ,......, b s , c1, c2 ,......, ct }


= { a1, a 2 ,......, a r , b1, b2 ,....., b s } ∪ { b1, b2 ,......, b s , c1, c2 ,......ct }

= f (x ) ∪ f (y)

and f ( x ∧ y ) = f ( b1 ∨ b2 ∨ ...... ∨ b s )

= { b1, b2 ,......, b s }

= { a1, a 2 ,......, a r , b1, b2 ,......, b s } ∩ { b1, b2 ,......, b s , c1, c2 ,......, ct }

= f (x ) ∩ f (y)

Now take y = c1 ∨ c2 ∨ ...... ∨ ct ∨ d1 ∨ d2 ∨ ...... ∨ dk ; then x ∨ y = I and x ∧ y = 0

and so y = x′

Thus, f ( x ′) = f ( y ) = { c1, c2 ,....., ct , d1, d 2 ,......, d k }

= { a1, a 2 ,......, a r , b1, b2 ,....., b s } ′

= ( f ( x ))′

Since the representation of any x is unique in terms of atoms, so f is one-one and onto.
Hence f is a Boolean algebra isomorphism.
Thus every finite Boolean algebra is structurally the same as a Boolean algebra of sets. If a set A has n
elements then its power set P(A) has 2^n elements. Thus we have
3.2.21. Corollary. A finite Boolean algebra has 2^n elements for some positive integer n.
e.g. Consider the Boolean algebra D70 = {1, 2, 5, 7, 10, 14, 35, 70} .
Then the set of atoms of D70 is A = { 2, 5, 7 } .
The unique representation of each non-atom by atoms is
10 = 2 ∨ 5
14 = 2 ∨ 7
35 = 5 ∨ 7
70 = 2 ∨ 5 ∨ 7
[Figure: Hasse diagram of D70, with 1 at the bottom, the atoms 2, 5, 7 above it, then 10, 14, 35 and 70 at the top]

Now, we consider the power set of A
P( A ) = { φ, { 2 }, { 5 }, { 7 }, { 2, 5 }, { 5, 7 }, { 2, 7 }, { 2, 5, 7 } }
Now, the diagram of the Boolean algebra of the power set P(A) of the set A of atoms is
[Figure: Hasse diagram of P(A), with φ at the bottom, the singletons { 2 }, { 5 }, { 7 } above it, then { 2, 5 }, { 2, 7 }, { 5, 7 } and { 2, 5, 7 } at the top]
We see that the diagrams of D70 and P(A) are the same in structure.
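This correspondence can be checked by direct computation. The following Python sketch is an added illustration (the helper names join and f are our own): it verifies that mapping each element of D70 to its set of atomic divisors is a Boolean isomorphism onto P({2, 5, 7}), with lcm as join, gcd as meet, and n ↦ 70/n as complement.

```python
# D70 under lcm (join), gcd (meet) and complement n -> 70 // n is
# isomorphic to the power set of its atoms {2, 5, 7}.
from math import gcd

D70 = [1, 2, 5, 7, 10, 14, 35, 70]
atoms = [2, 5, 7]

def join(a, b):
    return a * b // gcd(a, b)          # lcm, the join in D70

def f(x):
    """Map x to the set of atoms dividing it."""
    return frozenset(a for a in atoms if x % a == 0)

for x in D70:
    for y in D70:
        assert f(join(x, y)) == f(x) | f(y)    # join goes to union
        assert f(gcd(x, y)) == f(x) & f(y)     # meet goes to intersection
    assert f(70 // x) == f(70) - f(x)          # complement goes to set complement
assert len({f(x) for x in D70}) == 8           # f is a bijection onto P(A)
print("D70 is isomorphic to P({2, 5, 7})")
```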


3.2.22. Definition. Consider a set of variables (or letters or symbols), say x1, x 2 ,......, x n . A boolean
expression P in these variables, written as P ( x1, x 2 ,......, x n ) is any variable or any expression built up
from the variables using the Boolean operations ∨, ∧, ′.
e.g. P( x , y, z ) = ( x ∨ y ) ∧ z
Q( x , y, z ) = ( x ∧ y ′) ∨ ( y ∧ 0 )

R( x , y, z ) = ( x ∨ ( y ′ ∧ z )) ∨ ( x ∧ ( y ∧ I ))

are Boolean expressions. Note that a Boolean expression (or Boolean polynomial) in n variables may or
may not contain all the n variables. Obviously an infinite number of Boolean expressions may be
constructed in n variables.
3.2.23. Definition. A literal is a variable or complemented variable such as x, x ′, y, y ′ and so on. A
fundamental product is a literal or a product of two or more literals in which no two literals involve the
same variable.
Thus, x ∧ z ′, x ∧ y ′ ∧ z, x, y ′, x ′ ∧ y ∧ z are fundamental products, but x ∧ y ∧ x ′ ∧ z and x ∧ y ∧ z ∧ y are
not. Note that any product of literals can be reduced to either 0 or a fundamental product e.g.
x ∧ y ∧ x ′ ∧ z = 0 since x ∧ x ′ = 0 (complement law), and xyzy = xyz since y ∧ y = y (idempotent law)

3.2.24. Definition. A fundamental product P1 is said to be contained in (or included in) another
fundamental product P2 if the literals of P1 are also literals of P2. e.g., x′z (i.e., x′ ∧ z ) is contained in
x′yz but x′z is not contained in xy′z since x′ is not a literal of xy′z. Observe that if P1 is contained in P2,
say P2 = P1 ∧ Q, then by the absorption law
P1 ∨ P2 = P1 ∨ ( P1 ∧ Q ) = P1

Thus, for example, x ′z + x ′yz = x ′z

3.2.25. Definition. A Boolean expression E is called a sum-of-products expression if E is a fundamental


product or sum of two or more fundamental products none of which is contained in another.
3.2.26. Definition. Two Boolean expressions P( x1, x 2 ,......, x n ) and Q( x1, x 2 ,......, x n ) are called equivalent
if one can be obtained from the other by a finite number of applications of the identities of a Boolean
algebra.
3.2.27. Definition. Let E be any Boolean expression. A sum-of-products form of E is an equivalent
Boolean sum-of-products expression.
3.2.28. Example. Consider the expressions E1 = xz ′ + y ′z + xyz ′ and E2 = xz ′ + x ′yz ′ + xy ′z .

Although the first expression is a sum of products, it is not a sum-of-products expression. Specifically
the product xz′ is contained in the product xyz′. However by the absorption law, E1 can be expressed as
E1 = xz′ + y′z + xyz′ = xz′ + xyz′ + y′z = xz′ + y′z

This yields a sum of products form for E1. The second expression E2 is already a sum-of-products
expression.
Now, we give an algorithm to transform any Boolean expression into equivalent sum-of-products
expression.
3.2.29. Algorithm for finding sum-of-products forms. The input is a Boolean expression E. The
output is a sum-of-products expression equivalent to E.


Step I. Use De Morgan's laws and involution to move the complement operation inside any parentheses
until finally the complement operation applies only to variables. Then E will consist only of sums and
products of literals.
Step II. Use the distributive operation to next transform E into a sum of products.
Step III. Use the commutative, idempotent, and complement laws to transform each product in E into 0
or a fundamental product.
Step IV. Use the absorption and identity laws to finally transform E into a sum-of-products expression.
3.2.30. Example. Consider the Boolean expression E = (( xy )′ z )′ (( x ′ + z ) ( y ′ + z ′ ))′ .

Solution. Step I. Using De-Morgan’s laws and involution, we obtain


E = ( ( xy )′′ + z′ ) ( ( x′ + z )′ + ( y′ + z′ )′ ) = ( xy + z′ ) ( xz′ + yz )

E now consists only of sums and product of literals.


Step II. Using the distributive laws, we obtain
E = xyxz′ + xyyz + xz′z′ + yzz′

E now is a sum of products.


Step III. Using the commutative, idempotent and complement laws, we obtain
E = xyz ′ + xyz + xz ′ + 0

Each term in E is a fundamental product or 0.


Step IV. The product xz′ is contained in xyz′, hence by the absorption law xz ′ + ( xz ′y ) = xz ′ . Hence
E = xz ′ + xyz + 0

Now, using the identity law, E = xz′ + xyz , which is the required sum-of-products expression.
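The algorithm's result can be confirmed by exhaustive evaluation. The following Python sketch is an added check (the function names are our own): it compares the original expression of this example with xz′ + xyz on all eight bit assignments.

```python
# Exhaustive check of Example 3.2.30 over all assignments in {0, 1}^3.
from itertools import product

def E(x, y, z):
    """Original expression: ((xy)'z)' ((x'+z)(y'+z'))'."""
    n = lambda a: 1 - a
    return n(n(x & y) & z) & n((n(x) | z) & (n(y) | n(z)))

def sop(x, y, z):
    """Its sum-of-products form: xz' + xyz."""
    return (x & (1 - z)) | (x & y & z)

for bits in product([0, 1], repeat=3):
    assert E(*bits) == sop(*bits)
print("E and xz' + xyz agree on all inputs")
```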

3.2.31. Definition. A Boolean expression E = E ( x1, x 2 ,......, x n ) is said to be a complete sum-of-products


expression if E is a sum-of-products expression where each product P involves all the n variables. Such
a fundamental product P which involves all the variables is called a minterm, and there is a maximum of
2^n such products for n variables.
3.2.32. Theorem. (Without proof). Every non-zero Boolean expression E = E ( x1, x 2 ,......, x n ) is
equivalent to a complete sum-of-products expression and such a representation is unique.
3.2.33. Algorithm for obtaining complete sum-of-products expression. The input is a Boolean sum-
of-products expression E = E ( x1 , x2 ,..., xn ) . The output is a complete sum-of-products expression
equivalent to E.
Step I. Find a product P in E which does not involve the variable xi, and then multiply P by xi + xi′,
deleting any repeated products. (This is possible since xi + xi′ = 1 and P + P = P.)

Step II. Repeat step I, until every product P in E is a minterm i.e., every product P involves all the
variables.
3.2.34. Example. Express E ( x , y , z ) = x ( y ′z )′ in its complete sum-of-products form.

Solution. First apply the algorithm for finding sum of products form on E to obtain
E = x ( y ′z )′ = x ( y + z ′) = xy + xz ′ .

Now, E is represented by a sum-of-products expression.


Now, apply the above algorithm to obtain
E = xy ( z + z ′) + xz ′( y + y ′)

= xyz + xyz ′ + xyz ′ + xy ′z ′

= xyz + xyz ′ + xy ′z ′

Now, E is represented by its complete sum-of-products form.
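The following Python sketch is an added check of this expansion: it lists the input triples on which E is 1, and these are exactly the minterms xyz, xyz′ and xy′z′.

```python
# The complete sum-of-products form of E(x, y, z) = x(y'z)' consists of the
# minterms on which E takes the value 1.
from itertools import product

def E(x, y, z):
    """E(x, y, z) = x(y'z)'."""
    return x & (1 - ((1 - y) & z))

minterms = {bits for bits in product([0, 1], repeat=3) if E(*bits)}
# (1,1,1) <-> xyz, (1,1,0) <-> xyz', (1,0,0) <-> xy'z'
assert minterms == {(1, 1, 1), (1, 1, 0), (1, 0, 0)}
print("complete sum-of-products form: xyz + xyz' + xy'z'")
```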


Remark. The sum-of-products form for a Boolean expression E is also called the disjunctive normal
form or DNF of E. The complete sum of products form for E is also called the full disjunctive normal
form, or the disjunctive canonical form or the minterm canonical form of E.
3.2.35. Exercise.
1. Express E ( x , y , z ) as a sum-of-products and then in its complete sum-of-products form where
E ( x, y , z ) = z( x ′ + y ) + y ′ .

2. Express E ( x , y , z ) = ( x ′ + y )′ + x ′y in its complete sum-of-products form.

3.2.36. Minimal sum-of-products. Consider a Boolean sum-of-products expression E. Let EL denote


the number of literals in E (counted according to multiplicity), and let ES denote the number of
summands in E. For example, suppose
E = xyz ′ + x ′y ′t + x ′y ′z ′t + x ′yzt .

Then, E L = 3 + 3 + 4 + 4 = 14 and ES = 4 .

Suppose E and F are equivalent Boolean sum-of-products expressions. We say E is simpler than F if
(i) EL < FL and ES ≤ FS

or (ii) EL ≤ FL and ES < FS .

We say E is minimal if there is no equivalent sum-of-products expression which is simpler than E. There
can be more than one equivalent minimal sum-of-products expressions.
3.2.37. Prime Implicants. A fundamental product P is called a prime implicant of a Boolean expression
E if P + E = E but no other fundamental product contained in P has this property. For example, suppose
E = xy ′ + xyz ′ + x ′yz ′ .
We shall prove that (i) xz ′ + E = E (ii) x + E ≠ E (iii) z ′ + E ≠ E .

First we find the complete sum-of-products form for E


E = xy ′( z + z ′) + xyz ′ + x ′yz ′ = xy ′z + xy ′z ′ + xyz ′ + x ′yz ′ ......(1)

Now, we express xz′ in complete sum-of-products form


xz ′ = xz ′( y + y ′) = xyz ′ + xy ′z ′ ......(2)

Since the complete sum-of-products form is unique, A + E = E, where A ≠ 0, if and only if the summands
in the complete sum-of-products form for A are among the summands in the complete sum-of-products
form for E.
Now, by (1) and (2), we see that summands of xz′ are among those of E, so we have
xz ′ + E = E which proves (i)
(ii) Express x in complete sum-of-products form
x = x ( y + y ′) ( z + z ′) = xyz + xyz ′ + xy ′z + xy ′z ′

The summand xyz of x is not a summand of E hence x + E ≠ E .


(iii) Express z′ in complete sum-of-products form
z ′ = z ′( x + x ′) ( y + y ′) = xyz ′ + xy ′z ′ + x ′yz ′ + x ′y ′z ′

The summand x ′y ′z ′ of z ′ is not a summand of E ; hence z ′ + E ≠ E .


Thus, xz ′ is a prime implicant of E.
3.2.38. Theorem (without proof). A minimal sum-of-products form for a Boolean expression E is a
sum of prime implicants of E.
3.2.39. Consensus of Fundamental Products. Let P1 and P2 be fundamental products such that exactly
one variable, say xk, appears uncomplemented in one of P1 and P2 and complemented in the other. Then
the consensus of P1 and P2 is the product (without repetitions) of the literals of P1 and the literals of P2
after xk and xk′ are deleted. (We do not define the consensus of P1 = x and P2 = x′.)

3.2.40. Lemma. Suppose Q is the consensus of P1 and P2. Then P1 + P2 + Q = P1 + P2 .

Proof. Since the literals commute, we can assume without loss of generality that
P1 = a1a 2 ......a r t

P2 = b1 b2 ...... b st ′

Q = a1 a 2 ...... a r b1 b2 ...... b s .

Now, Q = Q (t + t ′) = Qt + Qt ′ .

Now, Qt contains P1, so P1 + Qt = P1 and Qt′ contains P2, so P2 + Qt′ = P2.



Hence, we have P1 + P2 + Q = P1 + P2 + Qt + Qt ′

= ( P1 + Qt ) + ( P2 + Qt ′)

= P1 + P2 .

3.2.41. Consensus method for finding prime implicants.


The input is a Boolean expression E = P1 + P2 + ...... + Pm where the P’s are fundamental products. The
output expresses E as a sum of its prime implicants.
Step I. Delete any fundamental product Pi which includes any other fundamental product Pj.
(Permissible by the absorption law).
Step II. Add the consensus Q of any Pi and Pj, provided Q does not include any of the P's (permissible by
the above lemma).
Step III. Repeat step I and /or step II until neither can be applied.
3.2.42. Example. Let E = xyz + x ′z ′ + xyz ′ + x ′y ′z + x ′yz ′ .

Solution. Then, we have E = xyz + x ′z ′ + xyz ′ + x ′y ′z ( x ′yz ′ includes x ′z ′)

= xyz + x ′z ′ + xyz ′ + x ′y ′z + xy (consensus of xyz and xyz′)


= x ′z ′ + x ′y ′z + xy ( xyz and xyz ′ include xy )

= x ′z ′ + x ′y ′z + xy + x ′y ′ ( consensus of x ′z ′ and x ′y ′z )

= x ′z ′ + xy + x ′y ′ ( x ′y ′z includes x ′y ′)

= x ′z ′ + xy + x ′y ′ + yz ′ (consensus of x′z′ and xy).

Now, neither step in the consensus method will change E. Thus E is the sum of its prime implicants,
which appear in the last line i.e., x ′z ′, xy, x ′y ′ and yz ′ .
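The consensus method is mechanical enough to program. The following Python sketch is our own small implementation (representing a product as a set of (variable, sign) literals is an assumption of this sketch, not notation from the text); applied to the products of this example it returns exactly the four prime implicants found above.

```python
# Consensus method: a product is a frozenset of (variable, sign) literals,
# sign 1 for the variable and sign 0 for its complement.
def consensus(p, q):
    # Defined only when exactly one variable appears complemented in one
    # product and uncomplemented in the other.
    clash = [v for v, s in p if (v, 1 - s) in q]
    if len(clash) != 1:
        return None
    return frozenset(l for l in p | q if l[0] != clash[0])

def prime_implicants(products):
    ps = set(products)
    changed = True
    while changed:
        changed = False
        # Step I: delete any product which includes another product.
        for p in list(ps):
            if p in ps and any(q < p for q in ps):
                ps.discard(p)
                changed = True
        # Step II: add a consensus which does not include any existing product.
        for p in list(ps):
            for q in list(ps):
                c = consensus(p, q)
                if c is not None and not any(r <= c for r in ps):
                    ps.add(c)
                    changed = True
    return ps

P = lambda *literals: frozenset(literals)
# E = xyz + x'z' + xyz' + x'y'z + x'yz' from Example 3.2.42
E = [P(('x', 1), ('y', 1), ('z', 1)), P(('x', 0), ('z', 0)),
     P(('x', 1), ('y', 1), ('z', 0)), P(('x', 0), ('y', 0), ('z', 1)),
     P(('x', 0), ('y', 1), ('z', 0))]
expected = {P(('x', 0), ('z', 0)), P(('x', 1), ('y', 1)),
            P(('x', 0), ('y', 0)), P(('y', 1), ('z', 0))}
assert prime_implicants(E) == expected   # x'z', xy, x'y', yz'
```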

3.2.43. Finding a minimal sum-of-products form. The input is a Boolean expression E = P1 + P2 + ... + Pm
where the P’s are all the prime implicants of E. The output expresses E as a minimal sum-of-products.
Step I. Express each prime implicant P as a complete sum-of-products.
Step II. Delete one by one those prime implicants whose summands appear among the summands of the
remaining prime implicants.
3.2.44. Example. Consider E = xyz + x ′z ′ + xyz ′ + x ′y ′z + x ′yz ′ .
Solution. Reproduce above example here to obtain
E = x ′z ′ + xy + x ′y ′ + yz ′

E is now expressed as the sum of all its prime implicants.


Step I. Express each prime implicant of E as a complete sum-of-products to obtain

x ′z ′ = x ′z ′( y + y ′) = x ′yz ′ + x ′y ′z ′

xy = xy ( z + z ′) = xyz + xyz ′

x ′y ′ = x ′y ′( z + z ′) = x ′y ′z + x ′y ′z ′

yz ′ = yz ′( x + x ′) = xyz ′ + x ′yz ′ .

Step II. The summands of x ′z ′ ar e x ′yz ′ and x ′y ′z ′ which appear among the other summands. Thus
delete x′z′ to obtain E = xy + x ′y ′ + yz ′ .

The summands of no other prime implicant appear among the summands of the remaining prime
implicants, and hence this is a minimal sum-of-products form for E. In other words, none of the
remaining prime implicants is superfluous, i.e., none can be deleted without changing E.
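As an added check, the following Python sketch verifies exhaustively that the original expression of this example and the minimal form xy + x′y′ + yz′ agree on all eight input combinations.

```python
# Exhaustive check of Example 3.2.44 over all assignments in {0, 1}^3.
from itertools import product

def n(a):
    return 1 - a

def original(x, y, z):
    """E = xyz + x'z' + xyz' + x'y'z + x'yz'."""
    return ((x & y & z) | (n(x) & n(z)) | (x & y & n(z))
            | (n(x) & n(y) & z) | (n(x) & y & n(z)))

def minimal(x, y, z):
    """Minimal sum-of-products form: xy + x'y' + yz'."""
    return (x & y) | (n(x) & n(y)) | (y & n(z))

for bits in product([0, 1], repeat=3):
    assert original(*bits) == minimal(*bits)
print("E = xy + x'y' + yz' verified")
```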
3.3. Logic Gates and Circuits. Logic circuits (also called logic networks) are structures which are built
up from certain elementary circuits called logic gates. Each logic circuit may be viewed as a machine L
which contains one or more input devices and exactly one output device. Each input device in L sends a
signal, specifically, a bit 0 or 1 to the circuit L, and L processes the set of bits to yield an output bit.
Accordingly an n bit sequence may be assigned to each input device, and L processes the input
sequences one bit at a time to produce an n-bit output sequence.
3.3.1. Logic Gates. There are three basic logic gates which are described below. We adopt the
convention that the lines entering the gate symbol from the left are input lines and the single line on the
right is the output line.
(1) OR gate. An OR Gate has inputs x and y and output z = x ∨ y or z = x + y , where addition or join is
defined by the truth table

x y x+y
1 1 1

1 0 1
0 1 1
0 0 0

Thus the output z = 0 only when inputs x = 0 and y = 0. Thus OR gate only yields 0 when both input bits
are 0.
The symbol for the OR gate is shown in the diagram below

[Figure: OR gate symbol with input lines x, y and output line z = x + y]

An OR gate may have more than two inputs. The figure below shows an OR gate with four inputs A, B,
C, D and output Y = A + B + C + D .
[Figure: OR gate with four inputs A, B, C, D and output Y = A + B + C + D]

The output Y = 0 if and only if all the inputs are 0. Suppose for example, the input data for the OR gate
in above figure are the following 8-bit sequences
A = 10000101 B = 10100001
C = 00100100 D = 10010101.
The OR gate only yields 0 when all input bits are 0. This occurs only in the 2nd , 5th and 7th positions.
Thus the output is the sequence Y = 10110101.
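This bitwise computation is easy to reproduce. The following Python sketch is an added illustration (bit_or is our own helper name) recovering Y = 10110101 from the four input sequences:

```python
# Bitwise OR of several bit sequences: a position of the output is 0
# if and only if every input has 0 in that position.
def bit_or(*seqs):
    return ''.join('0' if all(b == '0' for b in bits) else '1'
                   for bits in zip(*seqs))

A, B, C, D = '10000101', '10100001', '00100100', '10010101'
Y = bit_or(A, B, C, D)
assert Y == '10110101'
print(Y)
```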
(2) AND Gate. In this gate the inputs are x and y and output is xy or x ∧ y, where multiplication is
defined by the truth table

x y z = xy
1 1 1
1 0 0
0 1 0
0 0 0

Thus the output z = 1 when inputs x = 1 and y = 1 otherwise z = 0. The symbol for the AND gate is

[Figure: AND gate symbol with input lines x, y and output line z = xy]

The AND gate may have more than two inputs. The figure below shows an AND gate with four inputs A,
B, C, D and output Y = A.B.C.D. The output Y = 1 if and only if all the inputs are 1.
[Figure: AND gate with four inputs A, B, C, D and output Y = A.B.C.D]

Suppose, for example, the input data for the AND gate in above figure are the following 8-bit sequences.

A = 11100111 B = 01111011

C = 01110011 D = 11101110
The AND gate only yields 1 when all input bits are 1. This occurs only in the 2nd, 3rd and 7th positions.
Thus the output is the sequence Y = 01100010.
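Similarly, the following Python sketch (an added illustration with our own helper name bit_and) recovers Y = 01100010:

```python
# Bitwise AND of several bit sequences: a position of the output is 1
# if and only if every input has 1 in that position.
def bit_and(*seqs):
    return ''.join('1' if all(b == '1' for b in bits) else '0'
                   for bits in zip(*seqs))

A, B, C, D = '11100111', '01111011', '01110011', '11101110'
Y = bit_and(A, B, C, D)
assert Y == '01100010'
print(Y)
```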
(3) NOT Gate. The NOT gate is also known as an inverter.
[Figure: NOT gate symbol with input line x and output line y = x′]
A NOT gate has input x and output x′, where inversion, denoted by the prime, is defined by the truth table
given below:

x x′
1 0
0 1

We emphasize that a NOT gate can have only one input, whereas the OR and AND gates may have two
or more inputs.
Suppose for example, a NOT gate is asked to process the following sequences:
x = 110001 , y = 10110111 , z = 10101010
The NOT gate changes 0 to 1 and 1 to 0. Thus,
x ′ = 001110 , y ′ = 01001000 , z ′ = 01010101

are the three corresponding outputs.


3.3.2. Logic Circuits. A logic circuit L is a well-formed structure where elementary components are the
above OR, AND and NOT gates. Below figure is an example of a logic circuit with inputs x, y, z and
output t. A dot indicates a place where the input line splits so that its bit signal is sent in more than one
direction:
[Figure: logic circuit with inputs x, y, z and output t: x and y feed an AND gate followed by a NOT gate; x is also negated and fed with z into an OR gate followed by a NOT gate; the two negated outputs feed a final OR gate whose output is t]

Working from left to right, we express t in terms of the inputs x, y, z as follows. The output of the AND
gate is x . y , which is then negated to yield (x . y)′. The output of the lower OR gate is x ′ + z which is
then negated to yield ( x ′ + z )′ . The output of the OR gate on the right, with inputs (xy)′ and ( x ′ + z )′ gives
us our desired representation, that is,
t = ( xy )′ + ( x ′ + z )′ .
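As an added check, the following Python sketch evaluates t = (xy)′ + (x′ + z)′ bit by bit on the special input sequences introduced later in Section 3.3.8; the output is 0 exactly when x = y = z = 1.

```python
# Evaluating t = (xy)' + (x'+z)' on the 8-bit special input sequences.
x, y, z = '00001111', '00110011', '01010101'

def t_bit(a, b, c):
    return (1 - (a & b)) | (1 - ((1 - a) | c))

t = ''.join(str(t_bit(int(a), int(b), int(c))) for a, b, c in zip(x, y, z))
assert t == '11111110'    # t = 0 only in the last position, where x = y = z = 1
print(t)
```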

3.3.3. Logic Circuits as a Boolean Algebra. Observe that the truth tables for the OR, AND and NOT
gates are respectively identical to the truth tables for the propositions p ∨ q (disjunction), p ∧ q
(conjunction) and ∼p (negation). The only difference is that 1 and 0 are used instead of T and F. Thus
the logic circuits satisfy the same laws as do propositions and hence they form a Boolean algebra. So, all
terms used with Boolean algebras, such as, complements, literals, fundamental products, minterms, sum-
of-products and complete sum-of-products may also be used with our logic circuits.
3.3.4. AND-OR Circuits. The logic circuit L which corresponds to a Boolean sum-of-products
expression is called an AND-OR circuit. Such a circuit L has several inputs, where.
(1) Some of the inputs or their complements are fed into each AND gate.
(2) The outputs of all the AND gates are fed into a single OR gate.
(3) The output of the OR gate is the output for the circuit L.
e.g. The following figure represents an AND-OR circuit with three inputs x, y, z and output t.
[Figure: AND-OR circuit: three AND gates feed a single OR gate with output t; the first AND gate has inputs x, y, z, the second has inputs x, y′, z, and the third has inputs x′, y]
First we find the output of each AND gate.

(i) The inputs of the first AND gate are x, y and z and hence x.y.z is the output.
(ii) The inputs of the second AND gate are x, y′, z and hence xy′z is the output.
(iii) The inputs of the third AND gate are x′, y and hence x′y is the output.
Then the sum of outputs of the AND gates is the output of the OR gate, which is the output t of the
circuit. Thus t = xyz + xy′z + x′y
3.3.5. NAND and NOR gates. There are two additional gates which are equivalent to combinations of
the above basic gates.

(a) A NAND gate is equivalent to an AND gate followed by a NOT gate.

[Figure: NAND gate symbol with input lines x, y and output line z]

(b) A NOR gate is equivalent to an OR gate followed by a NOT gate.

[Figure: NOR gate symbol with input lines x, y and output line z]

The truth tables for these gates (using two inputs x and y) are given by

x y NAND NOR
1 1 0 0
1 0 1 0
0 1 1 0
0 0 1 1
The NAND and NOR gates can actually have two or more inputs just like corresponding AND and OR
gates. Furthermore, the output of a NAND gate is 0 if and only if all the inputs are 1, and the output of a
NOR gate is 1 if and only if all the inputs are 0.
3.3.6. Example. Express the output t as a Boolean expression in the inputs, x, y, z for the logic circuit in
following figure:

[Figure: logic circuit with inputs x, y, z and output t: x and y′ feed the first AND gate, y′ and z feed the second AND gate, and the two AND outputs feed an OR gate whose output is t]

Solution. The inputs to the first AND gate are x and y′ and to the second AND gate are y′ and z. Thus t =
xy′ + y′z

3.3.7. Exercise.
1. Express the output t as a Boolean expression in the inputs x, y, z for the logic circuit below
[Figure: logic circuit with inputs x, y, z and output t, using AND, NOT and OR gates]

2. Express the output t as a Boolean expression in the inputs x, y, z for the logic circuit

[Figure: AND-OR circuit with inputs x, y, z and output t: three AND gates feed a single OR gate]

3. Express the output t as a Boolean expression in the inputs x, y, z for the logic circuit in following two
figures

(i) [Figure: logic circuit with inputs x, y, z and output t, using NOR, AND and OR gates]


(ii) [Figure: logic circuit with inputs x, y, z and output t, using NAND, NOR and OR gates]

4. Express the output z as a Boolean expression in the inputs x and y for the logic circuit given below

[Figure: logic circuit with inputs x and y and output z, using AND, NOR, NAND and OR gates]

5. Draw the logic circuit L with inputs x, y, z and output t which corresponds to each Boolean expression
(i) t = xyz + x′z′ + y′z′ (ii) t = xy′z + xyz′ + xy′z′
3.3.8. Truth tables and Boolean functions. Consider a logic circuit L with n = 3 input devices x, y, z
and output t, say, t = xyz + xy ′z + x ′y

Each assignment of a set of three bits to the inputs x, y, z yields an output bit for t. Altogether there are
2^n = 2^3 = 8 possible ways to assign bits to the inputs as follows

000, 001, 010, 011, 100, 101, 110, 111


The assumption is that the sequence of first bits is assigned to x, the sequence of second bits to y, and the
sequence of third bits to z.
Thus the above set of inputs may be rewritten in the form
x = 00001111, y = 00110011 , z = 01010101
Here it must be noted that these three 2^3 = 8-bit sequences contain the eight possible combinations of the
input bits.
The truth table T = T(L) of the above circuit L consists of the output sequence t that corresponds
to the input sequences x, y, z. This truth table T may be expressed using fractional or relational notation,
i.e., T may be written in the form

T ( x , y, z ) = t or T ( L ) = [ x , y, z : t ]

This form for the truth table for L is essentially the same as the truth table for a proposition discussed in
UNIT I. The only difference is that here the values for x, y, z and t are written horizontally whereas in
UNIT I they are written vertically.
Consider a logic circuit L with n input devices. There are many ways to form n input sequences
x1, x2, ......, xn , so that they contain the 2^n different possible combinations of the input bits. The
assignment scheme is given below.
x1. Assign 2^(n−1) bits which are 0's followed by 2^(n−1) bits which are 1's.
x2. Assign 2^(n−2) bits which are 0's followed by 2^(n−2) bits which are 1's, and repeat.
x3. Assign 2^(n−3) bits which are 0's followed by 2^(n−3) bits which are 1's, and repeat.
and so on. The sequences obtained in this way will be called special sequences. Replacing 0 by 1 and 1
by 0 in the special sequences, we get the complements of the special sequences.
Remark. Assuming the inputs are the special sequences, we frequently do not need to distinguish
between the truth table
T ( L ) = [ x1, x 2 ,......, x n ; t ]

and the output t itself.


For example, suppose a logic circuit L has n = 4 input devices x, y, z, u. Then the 2^n = 2^4 = 16-bit special
sequences for x, y, z, u are given as
x = 0000000011111111 y = 0000111100001111
z = 0011001100110011 u = 0101010101010101.
Similarly, suppose a logic circuit L has n = 3 input devices x, y, z. Then the 2^n = 2^3 = 8-bit special
sequences for x, y, z and their complements x′, y′, z′ are as follows.
x = 00001111 , y = 00110011 , z = 01010101

x′ = 11110000 , y′ = 11001100 , z′ = 10101010.
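The assignment scheme above is easy to program. The following Python sketch (an added illustration; special_sequences is our own name) generates the special sequences for any n:

```python
# x_i consists of alternating blocks of 2^(n-i) zeros and 2^(n-i) ones,
# giving n sequences of total length 2^n.
def special_sequences(n):
    seqs = []
    for i in range(1, n + 1):
        block = 2 ** (n - i)
        seqs.append(('0' * block + '1' * block) * 2 ** (i - 1))
    return seqs

assert special_sequences(3) == ['00001111', '00110011', '01010101']
assert special_sequences(4)[0] == '0000000011111111'
print(special_sequences(3))
```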


3.3.9. Algorithm for finding the truth table for a logic circuit L where the output t is given by a
Boolean sum-of-products expression in the inputs.
Step I. Write down the special sequences for the inputs x1, x 2 ,......, x n and their complements.

Step II. Find each product appearing in t. (Recall that a product a1. a 2 ......a n = 1 in a position if and only
if all the a1, a 2 ,......, a n have 1 in the position).

Step III. Find the sum t of the products (Recall that a sum a1 + a 2 + ...... + a n = 0 in a position if and only
if all the a1, a 2 ,......, a n have 0 in the position).

3.3.10. Example. Consider the logic circuit L given below:

[Logic circuit: three AND gates, with inputs taken from x, y, z and their complements, feeding a single OR gate whose output is t.]

Solution. Here, the output t as a sum-of-products expression is

t = xyz + xy′z + x′y

(I) The special sequences and their complements are


x = 00001111 , y = 00110011 , z = 01010101
x′ = 11110000 , y′ = 11001100 , z′ = 10101010
(II) The products are given as
xyz = 00000001 , xy′z = 00000100 , x′y = 00110000

(III) The sum t is t = 00110101


Accordingly, T ( 00001111, 00110011, 01010101) = 00110101

or simply T ( L ) = 00110101 where we assume that input consists of special sequences.
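Steps I to III can be carried out mechanically on the bit strings. A minimal Python sketch reproducing this example (the helper names `bit_and`, `bit_or` and `comp` are my own):

```python
def bit_and(*seqs):
    """Product: a position is 1 iff all sequences have 1 in that position."""
    return "".join("1" if all(s[i] == "1" for s in seqs) else "0"
                   for i in range(len(seqs[0])))

def bit_or(*seqs):
    """Sum: a position is 0 iff all sequences have 0 in that position."""
    return "".join("0" if all(s[i] == "0" for s in seqs) else "1"
                   for i in range(len(seqs[0])))

def comp(seq):
    """Complement: interchange 0's and 1's."""
    return "".join("1" if b == "0" else "0" for b in seq)

# special sequences for the three inputs x, y, z
x, y, z = "00001111", "00110011", "01010101"

# t = xyz + xy'z + x'y
t = bit_or(bit_and(x, y, z), bit_and(x, comp(y), z), bit_and(comp(x), y))
# t = 00110101, agreeing with the truth table computed above
```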

3.3.11. Boolean Functions. Let E be a Boolean expression with n variables x1, x2,...,xn. The entire
discussion above can also be applied to E where now the special sequences are assigned to the variables
x1, x 2 ,......, x n . The truth table T = T ( E ) of E is defined in the same way as the truth table T = T(L) for a
logic circuit L as given in the above example.
Remark. The truth table for a Boolean expression E = E ( x1, x 2 ,......, x n ) with n variables may also be
viewed as a Boolean function from Bn into B. (The Boolean algebra Bn and B = {0, 1} are already
defined). That is, each element in Bn is a list of n bits which when assigned to the list of variables in E
produces an element in B.
3.3.12. Example. Consider a Boolean expression E = E(x, y, z) with three variables. The eight minterms
(fundamental products involving all three variables) are as follows:
xyz, xyz′, xy′z, x′yz, xy′z′, x′yz′, x′y′z, x′y′z′.

The truth table for these minterms (using the special sequences for x, y, z) follows
Boolean Algebra 79

xyz = 00000001, xyz ′ = 00000010, xy ′z = 00000100

xy ′z ′ = 00001000 , x ′ yz = 00010000 , x ′yz ′ = 00100000

x ′y ′z = 01000000 , x ′y ′z ′ = 10000000 .

Note that each minterm assumes the value 1 in only one of the eight positions.
3.4. Karnaugh Maps. Karnaugh maps are pictorial devices for finding prime implicants and minimal
forms for Boolean expressions involving at most six variables. We will only treat the cases of two, three
and four variables. In the beginning of this unit, we have defined that a minterm is a fundamental
product which involves all the variables, and that a complete sum-of-products expression is a sum of
minterms.
Definition. Two fundamental products P1 and P2 are said to be adjacent if P1 and P2 involve the same
variables and differ in exactly one literal. Thus some variable must appear uncomplemented in one
product and complemented in the other. In particular, the sum of two such adjacent products will be a
fundamental product with one fewer literal.
Remark. In Karnaugh maps, minterms involving the same variables are represented by squares, and we
will sometimes use the terms “square” and “minterm” interchangeably.
3.4.1. Example. (i) Let P1 = xyz ′ and P2 = xy ′z ′ then P1 and P2 are adjacent products and

P1 + P2 = xyz ′ + xy ′z ′ = xz ′( y + y ′) = xz ′(1) = xz ′ .

(ii) Let P1 = x ′yzt and P2 = x ′yz ′t .

Here, P1 and P2 are adjacent products and

P1 + P2 = x ′y zt + x ′y z ′t = x ′y t ( z + z ′) = x ′y t (1) = x ′y t .

(iii) Let P1 = x ′yzt and P2 = xyz ′t .


Here, P1 and P2 are not adjacent since they differ in two literals. In particular,
P1 + P2 = x′yzt + xyz′t = yt(x′z + xz′),
which is not a fundamental product.

(iv) Let P1 = xyz ′ and P2 = xyzt .

Here P1 and P2 are not adjacent since they have different variables. Thus, in particular, they will not
appear as squares in the same Karnaugh map.
3.4.2. Case of Two Variables. The Karnaugh map (or K-map) corresponding to Boolean expression E =
E(x, y) with two variables x and y is given in below figure:

y y′
x
x′

Here, x is represented by the points in the upper half of the map and y is represented by the points in the
left half of the map. And x′ is represented by the points in the lower half of the map and y′ is represented
by the points in the right half of the map. The four possible minterms with two literals, xy, xy′, x′y, x′y′, are represented by the four squares in the map as follows:

y y′
x xy xy′
x′ x′y x′y′

Note that two squares (minterms) are adjacent by the definition given above if and only if they are
geometrically adjacent. Any complete sum-of-products Boolean expression E( x, y ) is a sum of
minterms and hence can be represented in the K-map by placing checks in the appropriate squares. A
prime implicant of E(x, y) will be either a pair of adjacent squares in E or an isolated square i.e., a square
which is not adjacent to any other square of E(x, y). A minimal sum-of-products form for E(x, y) will
consist of a minimal number of prime implicants which cover all the squares of E(x, y) as illustrated in
the next example:
3.4.3. Example. Find the prime implicants and a minimal sum-of-products form for each of the
following complete sum-of-products Boolean expressions.
(i) E1 = xy + xy ′ (ii) E2 = xy + x ′y + x ′y ′ (iii) E 3 = xy + x ′y ′

Solution. (i) Check the squares corresponding to xy and xy′ as in the figure below

y y′
x √ √
x′

Note that E1 consists of only one pair of adjacent squares and this pair of adjacent squares represents the
variable x, so x is the (only) prime implicant of E1. Consequently, E1 = x is its minimal sum.
(ii) Check the squares corresponding to xy, x′y and x′y′ as in the figure:

      y    y′
 x    √
 x′   √    √

Note that E2 contains two pairs of adjacent squares (designated by the two loops) which include all the squares (minterms) of E2. The vertical pair represents y and the horizontal pair represents x′; hence y and x′ are the prime implicants of E2. Thus E2 = x′ + y is its minimal sum.

(iii) Check the squares corresponding to xy and x′y′ as shown in the figure:

      y    y′
 x    √
 x′        √

Note that E3 consists of two isolated squares which represent xy and x′y′; hence xy and x′y′ are the prime implicants of E3 and E3 = xy + x′y′ is its minimal sum.

3.4.4. Case of Three Variables. The K-map corresponding to a Boolean expression E = E(x, y, z) with three variables x, y, z is shown in the adjoining figure:

      yz   yz′   y′z′   y′z
 x
 x′

Recall that there are exactly eight minterms with three variables:

xyz, xyz′, xy′z′, xy′z, x′yz, x′yz′, x′y′z′, x′y′z

These minterms are listed so that they correspond to the eight squares in the Karnaugh map in the obvious way.
Furthermore, in order that every pair of adjacent products in above figure are geometrically adjacent the
right and left edges of the map must be identified. This is equivalent to cutting out, bending and gluing
the map along the identified edges to obtain a cylinder in which adjacent products are represented by the
squares with one edge in common.
Viewing the K-map in above figure as a Venn diagram, the areas represented by the variables x, y and z
are shown in the below figure:
[Three copies of the K-map grid (columns yz, yz′, y′z′, y′z; rows x, x′), shaded to show the areas representing x, y and z, respectively.]

Specifically, the variable x is still represented by the points in the upper half of the map, and the variable
y is still represented by the points in the left half of the map. The new variable z is represented by the
points in the left and right quarters of the map. Thus x′, y′ and z′ are represented, respectively, by points
in the lower half, right half and middle two quarters of the map.
By a basic rectangle in a K-map with three variables, we mean a square, two adjacent squares or four squares which form a one-by-four or two-by-two rectangle. These basic rectangles correspond to fundamental products of three, two and one literal, respectively. Moreover, the fundamental product
represented by a basic rectangle is the product of just those literals that appear in every square of the
rectangle.
Suppose a complete sum-of-products Boolean expression E = E ( x, y, z ) is represented in K-map by
placing checks in the appropriate squares. A prime implicant of E will be a maximal basic rectangle of
E, i.e., a basic rectangle contained in E which is not contained in any larger basic rectangle in E. A
minimal sum-of-products form for E will consist of a minimal cover of E, that is, a minimal number of
basic rectangles of E which together include all the squares of E.
3.4.5. Example. Find the prime implicants and a minimal sum-of-products form for each of the
following sum-of-products Boolean expressions.
(i) E1 = xyz + xyz ′ + x ′yz ′ + x ′y ′z

(ii) E2 = xyz + xyz ′ + xy ′z + x ′yz + x ′y ′z

(iii) E 3 = xyz + xyz ′ + x ′yz ′ + x ′y ′z ′ + x ′y ′z



Solution. (i) Check the squares corresponding to the four summands as in the figure:

      yz   yz′   y′z′   y′z
 x    √    √
 x′        √           √

Observe that E1 has three prime implicants (maximal basic rectangles), which are circled; these are xy, yz′ and x′y′z. All of these are needed to cover E1, and hence the minimal sum of E1 is

E1 = xy + yz′ + x′y′z
(ii) Check the squares corresponding to the five summands as in the figure:

      yz   yz′   y′z′   y′z
 x    √    √           √
 x′   √                √

Note that E2 has two prime implicants, which are circled. One is the pair of adjacent squares which represents xy, and the other is the two-by-two square (spanning the identified edges) which represents z. Both are needed to cover E2, so the minimal sum of E2 is E2 = xy + z.
(iii) Check the squares corresponding to the five summands as in the figure:

      yz   yz′   y′z′   y′z
 x    √    √
 x′        √     √     √

As indicated by the loops, E3 has four prime implicants, xy, yz′, x′z′ and x′y′. However, only one of the two dashed ones, i.e., one of yz′ or x′z′, is needed in a minimal cover of E3. Thus E3 has two minimal sums:

E3 = xy + yz′ + x′y′ = xy + x′z′ + x′y′
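A minimal sum read off a K-map can be double-checked by comparing truth tables over all eight input assignments. A minimal Python sketch for parts (ii) and (iii) of this example (the function names are my own):

```python
from itertools import product

def equal_on_all_inputs(e1, e2):
    """Check that two 3-variable Boolean functions agree on all 8 inputs."""
    return all(e1(x, y, z) == e2(x, y, z)
               for x, y, z in product([0, 1], repeat=3))

def E2(x, y, z):
    # xyz + xyz' + xy'z + x'yz + x'y'z
    return ((x & y & z) | (x & y & (1 - z)) | (x & (1 - y) & z)
            | ((1 - x) & y & z) | ((1 - x) & (1 - y) & z))

def E3(x, y, z):
    # xyz + xyz' + x'yz' + x'y'z' + x'y'z
    return ((x & y & z) | (x & y & (1 - z)) | ((1 - x) & y & (1 - z))
            | ((1 - x) & (1 - y) & (1 - z)) | ((1 - x) & (1 - y) & z))

# the minimal sums found from the K-maps
assert equal_on_all_inputs(E2, lambda x, y, z: (x & y) | z)
assert equal_on_all_inputs(
    E3, lambda x, y, z: (x & y) | (y & (1 - z)) | ((1 - x) & (1 - y)))
assert equal_on_all_inputs(
    E3, lambda x, y, z: (x & y) | ((1 - x) & (1 - z)) | ((1 - x) & (1 - y)))
```

Such a check confirms a simplification is correct; it does not by itself prove the sum is minimal.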

3.4.6. Exercise.
1. Design a three-input minimal AND-OR circuit L with the following truth table:
T = [A, B, C; L] = [00001111, 00110011, 01010101; 11001101]
2. Find the fundamental product P represented by each basic rectangle in the K-map in below figures:
yz yz′ y′z′ y′z yz yz′ y′z′ y′z yz yz′ y′z′ y′z

x x √ √ x √ √

x′ √ √ x′ x′ √ √

(i) (ii) (iii)


3. Find a minimal sum-of-products form for the Boolean expression E with the following truth tables:
(i) T(00001111, 00110011, 01010101) = 10100110
(ii) T(00001111, 00110011, 01010101) = 00101111

4. Find all possible minimal sums for each Boolean expression E given by the Karnaugh maps in the
below figure:
yz yz′ y′z′ y′z yz yz′ y′z′ y′z yz yz′ y′z′ y′z

x √ √ √ x √ √ √ x √ √

x′ √ √ x′ √ √ √ x′ √ √ √ √

(i) (ii) (iii)


3.4.7. Case of Four Variables. The Karnaugh map corresponding to a Boolean expression E = E(x, y, z, t)
with four variables x, y, z, t is shown below:
zt zt′ z′t′ z′t

xy

xy′

x′y′

x′y

Each of the 16 squares corresponds to one of the 16 minterms with four variables


xyzt , xyzt ′, xyz ′t ′, xyz ′t , ......, x ′yz ′t

as indicated by the labels of the row and column of the square. Observe that the top line and the left side
are labeled so that the adjacent products differ in precisely one literal.
Again we must identify the left edge with the right edge (as we did with three variables), but we must
also identify the top edge with the bottom edge. These identifications give rise to a donut-shaped surface
called a torus, and we may view our map as really being a torus. A basic rectangle in a four-variable
Karnaugh map is a square, two adjacent squares, four squares which form a one-by-four or two-by-two
rectangle, or eight squares which form a two-by-four rectangle. These rectangles correspond to
fundamental products with four, three, two or one literal, respectively. Again, the maximal basic rectangles
are the prime implicants. The minimization technique for a Boolean expression E(x, y, z, t) is the same
as before.

3.4.8. Example. Find the fundamental product P represented by the basic rectangle in the Karnaugh
maps shown in below figures.
zt zt′ z′t′ z′t zt zt′ z′t′ z′t zt zt′ z′t′ z′t

xy xy √ √ xy √ √

xy′ √ √ xy′ xy′ √ √

x′y′ x′y′ x′y′ √ √

x′y x′y √ √ x′y √ √

(i) (ii) (iii)


Solution. In each case, find the literals which appear in all the squares of the basic rectangle and P is the
product of such literals.
(i) x, y′ and z′ appear in both squares; hence P = xy′z′.

(ii) Only y and z appear in all four squares, so P = yz.


(iii) Only t appears in all eight squares, so P = t.
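The rule used in this solution, taking the product of the literals common to every square, can be expressed directly. A minimal Python sketch, representing each square as a bit tuple for (x, y, z, t) (the encoding and the name `rectangle_product` are my own):

```python
def rectangle_product(squares, names=("x", "y", "z", "t")):
    """Given the minterms (bit tuples) of a basic rectangle, return the
    fundamental product of the literals appearing in every square."""
    literals = []
    for pos, name in enumerate(names):
        vals = {sq[pos] for sq in squares}
        if vals == {1}:        # variable uncomplemented in all squares
            literals.append(name)
        elif vals == {0}:      # variable complemented in all squares
            literals.append(name + "'")
        # if both 0 and 1 occur, the variable drops out of the product
    return "".join(literals)

# part (ii): the four squares xyzt, xyzt', x'yzt, x'yzt'
squares = [(1, 1, 1, 1), (1, 1, 1, 0), (0, 1, 1, 1), (0, 1, 1, 0)]
# only y and z appear in all four squares, so P = yz
```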
3.4.9. Exercise.
1. Find the fundamental product P represented by each basic rectangle R in the Karnaugh map in below
figures.
zt zt′ z′t′ z′t zt zt′ z′t′ z′t zt zt′ z′t′ z′t

xy xy xy √ √ √ √

xy′ xy′ √ √ xy′

x′y′ √ √ x′y′ √ √ x′y′

x′y x′y x′y √ √ √ √

(i) (ii) (iii)


2. Use a Karnaugh map to find a minimal sum-of-products form for
E = xy ′ + xyz + x ′y ′z ′ + x ′yzt ′ .

3. Use a K-map to find a minimal sum for


(i) E1 = x ′yz + x ′yz ′t + y ′zt ′ + xyzt ′ + xy ′z ′t ′

(ii) E2 = y ′t ′ + y ′z ′t + x ′y ′zt + yzt ′ .

4. Find a minimal sum for each Boolean expression.



(i) E1 = y ′z + y ′z ′t ′ + z ′t (ii) E2 = y ′zt + xzt ′ + xy ′z ′

5. Let E be the Boolean expression given in the following K-map


zt zt′ z′t′ z′t

xy √ √

xy′ √ √

x′y′ √ √

x′y √

(i) Write E in its complete sum of products form.


(ii) Find a minimal form for E.
Books Recommended:
1. Kenneth H. Rosen, Discrete Mathematics and Its Applications, Tata McGraw-Hill, Fourth Edition.
2. Seymour Lipschutz and Marc Lipson, Theory and Problems of Discrete Mathematics, Schaum
Outline Series, McGraw-Hill Book Co, New York.
3. John A. Dossey, Otto, Spence and Vanden K. Eynden, Discrete Mathematics, Pearson, Fifth
Edition.
4. J.P. Tremblay, R. Manohar, “Discrete mathematical structures with applications to computer
science”, Tata-McGraw Hill Education Pvt.Ltd.
5. J.E. Hopcroft and J.D. Ullman, Introduction to Automata Theory, Languages and Computation,
Narosa Publishing House.
6. M. K. Das, Discrete Mathematical Structures for Computer Scientists and Engineers, Narosa
Publishing House.
7. C. L. Liu and D.P.Mohapatra, Elements of Discrete Mathematics- A Computer Oriented Approach,
Tata McGraw-Hill, Fourth Edition.
4
Finite State Machines, Languages and
Grammars
Structure
4.1. Introduction.
4.2. Finite State Machine.
4.3. Equivalence of finite state machines.
4.4. Finite State Automaton.
4.5. Construction of finite state automata.
4.6. Non-Deterministic Finite state Automaton.
4.7. Equivalence of DFSA and NDFSA.
4.8. Languages.
4.9. Language determined by an automaton.
4.10. Grammars.
4.11. Type of Grammars.
4.12. Derivation Trees of context-free Grammars.
4.13. Type of Languages.
4.1. Introduction. This chapter contains results related to various machines, grammars and languages.
4.1.1. Objective. The objective of the study of these results is to understand the basic concepts and
apply them in various problem solving situations.
4.2. Finite State Machine. A finite state machine (or complete sequential machine) is an abstract
model of a machine with a primitive internal memory. A finite state machine M consists of
(i) A finite set I of “Input symbols”.
(ii) A finite set S of “Internal states”.
(iii) A finite set O of “Output Symbols”.
(iv) An initial state s0 in S.

(v) A next state function f : S × I → S


(vi) An output function g : S × I → O
Using all these notations a finite state machine M is denoted as
M = M ( I , S, O, s0 , f , g )

For example, let us consider


I = {a, b}, S = {s0, s1, s2} and O = {x, y, z}.

Initial state is s0. Next state function f : S × I → S is defined by


f ( s0 , a ) = s1 ; f ( s1, a ) = s2 ; f ( s2 , a ) = s0

f ( s0 , b ) = s2 ; f ( s1, b ) = s1 ; f ( s2 , b ) = s1

Output function g : S × I → O is defined by


g( s0 , a ) = x , g( s1, a ) = x , g( s2 , a ) = z

g( s0 , b ) = y , g( s1, b ) = z , g( s2 , b ) = y

Then, we have that M = M ( I , S, O, s0 , f , g ) is a finite state machine.

Next, we lead to study the ways to represent a finite state machine diagramatically.
4.2.1. Transition (state) Table and transition (state) diagram.
There are two ways of representing a finite state machine M, as explained below:
(A) Transition (State) Table. In this method, the functions f and g are represented by a table. For the
example given above, the transition table is

f g
I
a b a b
S
s0 s1 s2 x y
s1 s2 s1 x z
s2 s0 s1 z y

(B) Transition (state) Diagram. A transition diagram of a finite state machine M is a labeled directed
graph in which there is a node for each state symbol in S and each node is labeled by a state symbol with
which it is associated. The initial state is indicated by an arrow. Further, if
f(si, aj) = sk and g(si, aj) = or, then there is an arrow (arc) from si to sk which is labelled with the pair
(aj, or). Usually, we put the input symbol aj near the base of the arrow (near si) and the output symbol
or near the centre of the arrow. (Alternatively, we can represent it by aj/or near the centre of the
arrow.) Using this method, the above example can be represented as
[Transition diagram: nodes s0, s1, s2; each arc from si to sk is labelled with its input/output pair as in the transition table above.]

4.2.2. Example. Let I = { a , b} , O = { 0, 1} , S = { s0 , s1 } , s0 is the initial state. Next state function f :


S× I → S is defined by
f ( s0 , a ) = s0 ; f ( s1, a ) = s1

f ( s0 , b ) = s1 ; f ( s1, b ) = s1

and the output function, g : S × I → O is defined by


g( s0 , a ) = 0 ; g( s1, a ) = 1

g( s0 , b ) = 1 ; g( s1, b ) = 0

Then, M = M ( I , O, S, s0 , f , g ) is a finite state machine. Its transition table is

f g
I
a b a b
S
s0 s0 s1 0 1
s1 s1 s1 1 0

Its transition diagram representation is given as


[Transition diagram: s0 has a loop labelled a/0 and an edge labelled b/1 to s1; s1 has loops labelled a/1 and b/0.]
Remark. We can regard the finite state machine M = M ( I , S, O, s0 , f , g ) as a simple calculator. We start
with state s0, input a string over I and produce a string of output.
4.2.3. Input and output string. Let M = M ( I , S, O, s0 , f , g ) be a finite state machine. An input string for
M is a string over I. The string y1 , y2 ,..., yn is the output string for M, corresponding to the input string

x1 , x2 ,..., xn if there exist states s0 , s1 ,..., sn ∈ S such that

si = f(si−1, xi) and yi = g(si−1, xi) for i ∈ {1, 2, ..., n}.

4.2.4. Example. In Example 4.2.2, we have taken I = {a, b}, O = {0, 1}, S = {s0, s1} with


f ( s0 , a ) = s0 ; f ( s0 , b ) = s1

f ( s1, a ) = s1 ; f ( s1, b ) = s1

and g( s0 , a ) = 0 ; g( s0 , b ) = 1

g( s1, a ) = 1 ; g( s1, b ) = 0

We had shown that M is a finite state machine. Let us find the output string to the input string
aababba
for this machine. Initially, we are in state s0. The first input symbol is a. Therefore the output
g( s0 , a ) = 0 . The edge points out to s0. Next symbol input is again a, so again we have g ( s0 , a ) = 0 as the
output and the edge points out to s0. Next input symbol is b and so g( s0 , b ) = 1 as the output and there is
a state of change s1. Next input symbol is a, so output is g( s1, a ) = 1 and the state is s1. Next, b is the
input and so g(s1, b) = 0 as the output and the state remains s1. Again, b is the input, so we have g(s1, b) = 0 as
the output and the state is s1. The final input is a and the state is s1, so g(s1, a) = 1 is the output symbol. Hence the
output string is 0011001.
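The step-by-step trace above can be mechanized. A minimal Python sketch simulating this machine (the dictionary encoding of f and g is my own):

```python
def run_machine(f, g, s0, inputs):
    """Return the output string produced by the machine on the input string."""
    state, out = s0, []
    for symbol in inputs:
        out.append(g[state, symbol])   # output depends on the current state
        state = f[state, symbol]       # then move to the next state
    return "".join(out)

# the machine of Example 4.2.2
f = {("s0", "a"): "s0", ("s0", "b"): "s1",
     ("s1", "a"): "s1", ("s1", "b"): "s1"}
g = {("s0", "a"): "0", ("s0", "b"): "1",
     ("s1", "a"): "1", ("s1", "b"): "0"}

# the input string aababba gives the output string 0011001, as traced above
```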
4.2.5. Example. Consider the finite state machine of the first example in Section 4.2. Let the input string be abaab.
We begin by taking s0 as the initial state. Proceeding in the same way as in the last example, we can find
the output string as
s0 --a,x--> s1 --b,z--> s1 --a,x--> s2 --a,z--> s0 --b,y--> s2

Hence the output string is xzxzy.


Examples on conversion of transition table into transition diagram and vice-versa.

4.2.6. Exercise. Convert the following state tables to state diagram:


(i)

f g

I
0 1 0 1
S

s0 s1 s0 1 0

s1 s3 s0 1 1

s2 s1 s2 0 1

s3 s2 s1 0 0

f g f g
(ii) (iii)
I I
0 1 0 1 0 1 0 1
S S

s0 s1 s0 0 0 s0 s0 s4 1 1

s1 s2 s0 1 1 s1 s0 s3 0 1

s2 s0 s3 0 1 s2 s0 s2 0 0

s3 s1 s2 1 0 s3 s1 s1 1 1

s4 s1 s0 1 0

4.2.7. Exercise. Give the state tables for finite state machines with the following diagram:
(i) [State diagram with four states s0, s1, s2, s3 and edges labelled with input/output pairs over the input and output alphabet {0, 1}.]

(ii) [State diagram with four states s0, s1, s2, s3 and edges labelled with input/output pairs over the input and output alphabet {0, 1}.]

4.2.8. Alphabet and words. Consider a non-empty set A of symbols. A word or string w on the set A is
a finite sequence of its elements. For example, the sequences
u = ababb and v = accbaaa

are words on A = { a , b, c}

We call the set A the alphabet and its elements are called letters.
We can also denote above words u and v as
u = abab², v = ac²ba³

The empty sequence of letters, denoted by λ or ε or 1, is also considered to be a word on A, called the
empty word.
The set of all words on A is denoted by A*.
The length of a word u, written |u| or l(u), is the number of elements in its sequence of letters. For above
words u and v, we have l (u ) = 5 and l ( v ) = 7

Also, l(λ) = 0 where λ is the empty word.

4.2.9. Concatenation. Consider two words u and v on an alphabet A. The concatenation of u and v, written
uv, is the word obtained by writing down the letters of u followed by the letters of v. For example, for the above
words u and v, we have
uv = ababbaccbaaa = abab²ac²ba³

As with letters, we define


u² = uu, u³ = uuu and so on.

4.2.10. Powers of Alphabet ‘A’. Powers of alphabet A are defined as follows


A⁰ = {λ}, A¹ = A

A² = AA = {uv : u ∈ A, v ∈ A}

A³ = A²A = {uv : u ∈ A², v ∈ A} and so on.

e.g. let A = { a , b, c} , then

A⁰ = {λ}, A¹ = A = {a, b, c}

A² = {aa, ab, ac, ba, bb, bc, ca, cb, cc} and so on.
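Words, concatenation and powers of an alphabet can be modelled directly with strings. A minimal Python sketch, assuming A = {a, b, c} as above (the function name `power` is my own):

```python
from itertools import product

A = ["a", "b", "c"]

def power(alphabet, n):
    """A**n: the set of all words of length n over the alphabet.

    For n = 0 this yields the set containing only the empty word."""
    return {"".join(w) for w in product(alphabet, repeat=n)}

# the words u and v from above, and their concatenation
u, v = "ababb", "accbaaa"
uv = u + v        # concatenation gives ababbaccbaaa
# lengths: l(u) = 5 and l(v) = 7; A**2 has 3**2 = 9 words
```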

4.2.11. Generalization of f and g in finite state machine. Consider a sequence x0 x1 x2 ... of input
symbols. Let the initial state be s0, the next state s1 for the input x0 is given by
s1 = f ( s0 , x 0 ) = f1 ( s0 , x 0 ) , say where f = f1 : S × I → S

Now, next change in the state due to the input symbol x1 and the next state is
s2 = f ( s1, x1 ) = f ( f1 ( s0 , x 0 ) , x1 )
= f 2 ( s0 , x 0 x1 ) , say, where f 2 : S × I 2 → S .

Here it should be noted that


I² = II = {xy : x ∈ I, y ∈ I}

The next state due to third input symbol x2 is given by


s3 = f ( s2 , x 2 ) = f ( f 2 ( s0 , x 0 x1 ) , x 2 )
= f 3 ( s0 , x 0 x1 x 2 ) , say, where f 3 : S × I 3 → S

Continuing like this, we can define the function f n : S × I n → S such that


sn = f ( f n −1 ( s0 , x 0 x1 x 2 ...... x n − 2 ) , x n −1 )

= f n ( s0 , x 0 x1 x 2 ..... x n −1 )

In the same way, we can define output symbols O0 , O1, O2 , as given below

O0 = g( s0 , x 0 ) = g1 ( s0 , x 0 ) , say
O1 = g( s1, x1 ) = g( f1 ( s0 , x 0 ) , x1 )

= g2 ( s0 , x 0 x1 ) , say

O2 = g( s2 , x 2 ) = g( f 2 ( s0 , x 0 x1 ), x 2 )

= g3 ( s0 , x 0 x1 x 2 ), say

where g2 : S × I 2 → O and g3 : S × I 3 → O

Continuing like this, we can define the function gn : S × I n → O such that


On −1 = g( sn −1 , x n −1 )

= g( f n −1 ( s0 , x 0 x1.... x n − 2 ) , x n −1 )

= gn ( s0 , x 0 x1..... x n −1 )

4.3. Equivalence of finite state machines.


Let M = M(I, S, O, s0, f, g) be a finite state machine. Then two states si and sj are said to be k-equivalent, for some positive integer k, if and only if g(si, x) = g(sj, x) for all words x with |x| ≤ k, and we then write si ≡k sj.

4.3.1. Equivalent States. Let M = M(I, S, O, s0, f, g) be a finite state machine. Then two states si and sj
are said to be equivalent if and only if g(si, x) = g(sj, x) for every word x ∈ I*, where I* is the set of all
words on I, and we then write si ≡ sj.

Remark. (i) The relation ‘≡’ is an equivalence relation, that is, it is reflexive, symmetric and transitive.
(ii) Clearly, equivalence of states is a generalization of k-equivalence of states, that is, si ≡ sj ⇒ si ≡k sj
for every positive integer k, but not conversely.
(iii) Two states are said to be equivalent if and only if they produce the same output for any input
sequence.
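k-equivalence can be tested by brute force: compare the output strings of two states on every input word of length at most k. A minimal Python sketch, using the two-state machine of Example 4.2.2 (the function names are my own):

```python
from itertools import product

def outputs(f, g, state, word):
    """Output string produced when the word is fed in starting from state."""
    out = []
    for symbol in word:
        out.append(g[state, symbol])
        state = f[state, symbol]
    return "".join(out)

def k_equivalent(f, g, si, sj, k, alphabet):
    """True iff si and sj give the same outputs on all words of length <= k."""
    return all(outputs(f, g, si, w) == outputs(f, g, sj, w)
               for n in range(1, k + 1)
               for w in product(alphabet, repeat=n))

# the machine of Example 4.2.2
f = {("s0", "a"): "s0", ("s0", "b"): "s1",
     ("s1", "a"): "s1", ("s1", "b"): "s1"}
g = {("s0", "a"): "0", ("s0", "b"): "1",
     ("s1", "a"): "1", ("s1", "b"): "0"}
# s0 and s1 already disagree on the single input a (outputs 0 and 1),
# so they are not even 1-equivalent
```

Brute force only verifies k-equivalence for a fixed k; full equivalence is usually established by partition-refinement arguments instead.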
4.3.2. Theorem. Let s be any state in a finite state machine and let x and y be any two words. Then, we
have
(i) f ( s, xy ) = f ( f ( s, x ) , y ) (ii) g( s, xy ) = g ( f ( s, x ), y )

Proof. (i) We shall give the proof by induction on |y| i.e., length of y. Let length of y = 1 and let y = a,
where a ∈ I . Then, by generalization of f, we know that
f ( s, xa ) = f ( f ( s, x ) , a )

which shows that result is true for length one. We assume that the result is true for any word y of length
n i.e. f ( s, xy ) = f ( f ( s, x ) , y ) , where |y| = n

We shall prove that result is true for y having length n + 1.


From the generalization of f, we know
f ( s, xya ) = f ( f ( s, xy ) , a )

= f ( f ( f ( s, x ), y ), a ) [By induction]

Taking s′ = f ( s, x ), we get

f ( s, xya ) = f ( f ( s, xy ) , a ) = f ( f ( s′, y ) , a ) = f ( s′, ya ) = f ( f ( s, x ) , ya )

So, the result is true for length n + 1.


Hence, by mathematical induction, result (i) holds.

(ii) We shall again give the proof by induction on length of y.


Let y=a where a ∈ I

Then, by generalization of g, we know that


g( s, xa ) = g ( f ( s, x ) , a )

which shows that the result is true for length one. We assume that the result is true for any word y of length
n, i.e., g(s, xy) = g(f(s, x), y), where |y| = n

We shall prove that result is true for y of length n + 1.


From the generalization of g, we know that
g( s, xya ) = g ( f ( s, xy ) , a )

= g ( f ( f ( s, x ), y ) , a ) [By part (i)]

Taking s′ = f ( s, x ) , we get

g( s, xya ) = g ( f ( s, xy ) , a ) = g ( f ( s′, y ) , a ) = g( s′, ya ) [By induction]

= g ( f ( s, x ), ya )
4.3.3. Theorem. If two states in a finite state machine are equivalent then their next states must be
equivalent. OR
If the states si and sj are equivalent in a finite state machine M, then f ( si , x ) = f ( sj , x ) for any input
sequence x.
Proof. Since si ≡ sj, we have g(si, xy) = g(sj, xy) for any input word xy.

By above theorem, this reduces to


g ( f ( si , x ), y ) = g ( f ( s j , x ) , y ) for any y.

which implies by definition of equivalence of states that


f ( si , x ) ≡ f ( s j , x )

i.e., the next states are equivalent.


4.3.4. Equivalent Machines. Let M = M(I, S, O, s0, f, g) and M′ = M′(I, S′, O, s′0, f′, g′) be two finite
state machines with the same input and output sets. Then the machine M is said to be equivalent to
the machine M′ if and only if for every si ∈ S there exists an s′j ∈ S′ such that si ≡ s′j, and for every
s′j ∈ S′ there exists an si ∈ S such that si ≡ s′j; we then write M ≡ M′.

The relation ≡ is an equivalence relation.


For example, we consider two finite state machines given by the following tables:

f g

I f g
0 1 0 1
S I
0 1 0 1
s0 s5 s3 0 1 S
and
s1 s1 s4 0 0 s′0 s′3 s′2 0 1

s2 s1 s3 0 0 s′1 s′1 s′0 0 0

s3 s1 s2 0 0 s′2 s′1 s′2 0 0

s4 s5 s2 0 1 s′3 s′0 s′1 0 1


s5 s4 s1 0 1

M = M ( I , S, O, s0 , f , g ) M ′ = M ′ ( I , S′, O, s0′ , f , g )

We note that s0′ in M′ is equivalent to s0, s4 and s5 in M, s′1 in M′ is equivalent to s1, s2 , s3 in M,


s2′ in M ′ is equivalent to s1, s2 , s3 in M and s3′ in M ′ is equivalent to s0 , s4 and s5 in M. Also note that
the functions g and g′ are same for the indicated correspondence.
4.3.5. Reduced Finite State Machine. A finite state machine M = M ( I , S, O, s0 , f , g ) is said to be
reduced finite state machine if each state is equivalent to itself and no other i.e., two distinct states are
not equivalent.
OR
si ≡ s j ⇒ si = s j for all si , s j ∈ S

4.3.6. Finite State Homomorphism. Let M = M(I, S, O, s0, f, g) and M′ = M′(I, S′, O, s′0, f′, g′) be two
finite state machines in which the input and output sets are the same. A mapping φ : S → S′ is said to be a finite
state homomorphism if
φ(f(s, a)) = f′(φ(s), a) and

φ ( g( s, a ) ) = g′ ( φ( s ), a ) for all a ∈ I

If φ is also a one-one and onto function, then M and M′ are said to be isomorphic.
4.3.7. Finite state Automaton. This is a special kind of finite state machine and we define it as –
A finite state machine M(I, S, O, s0, f, g) is said to be a finite state automaton if the finite set O of output
symbols is {0, 1} and the current state determines the last output.
Those states for which the last output was 1 are called accepting states.
4.3.8. Example. We consider a finite state machine M = M ( I , S, O, s0 , f , g ) where
I = { a , b} , S = { s0 , s1, s2 } , O = { 0, 1} and s0 is the initial state and the function f and g are given as

f ( s0 , a ) = s1 and g( s0 , a ) = 1

f ( s0 , b ) = s0 and g( s0 , b ) = 0

f ( s1, a ) = s2 and g( s1, a ) = 1

f ( s1, b ) = s0 and g( s1, b ) = 0

f ( s2 , a ) = s2 and g( s2 , a ) = 1

f ( s2 , b ) = s0 and g( s2 , b ) = 0

The transition table for this machine M is given as

f g
I
a b a b
S
s0 s1 s0 1 0
s1 s2 s0 1 0
s2 s2 s0 1 0

Here, we observe that, if we are in state s0 by

f ( s0 , b ) = s0 or f ( s1, b ) = s0 or f ( s2 , b ) = s0

then g( s0 , b ) = 0, g( s1, b ) = 0, g( s2 , b ) = 0

that is, the last output is 0. If we are in state s1 by f ( s0 , a ) = s1 then g( s0 , a ) = 1 and so the last output is 1
and also if we are in state s2 by f ( s1 , a) = s2 or f ( s2 , a) = s2 , then we see that g( s1, a ) = 1, g( s2 , a ) = 1 i.e.,
the last output is 1. Thus M is a finite state automaton. We note that the last output was 1 when we are in
states s1 and s2, thus the states s1 and s2 are accepting states.
In the transition diagram of a finite state automaton, the accepting states are represented by
double circles. Keeping this in mind, the transition diagram can be drawn as

[Transition diagram: s0 --a/1--> s1 --a/1--> s2, with a loop a/1 at s2; a loop b/0 at s0 and edges b/0 from s1 and s2 back to s0; s1 and s2 are drawn with double circles as accepting states.]

Remark. (i) It is clear that in a finite state automaton, the output symbol for accepting states is 1 and
output symbol for non-accepting states is 0. So, sometimes, we can omit the output symbols from the
transition diagram. Hence output symbols may be omitted from above transition diagram.
(ii) By the above example, we observe that a finite state machine is a finite state automaton if O = { 0, 1}
and if in all states, the incoming edges to any state s have the same output label. Further, incoming edges
in an accepting state have output 1 and incoming edges in non-accepting states have output 0.
So, an alternate definition, without output, of a finite state automaton is
4.4. Finite State Automaton. A finite state automaton (FSA) consists of
(i) A finite set I of input symbols.
(ii) A finite set S of states.

(iii) A subset A of S of accepting states.

(iv) An initial state s0 in S.

(v) A next state function f : S × I → S.
It is denoted by M = M ( I , S, A, s0 , f )
Remark. (i) A finite state automaton is also simply called an automaton or finite state acceptor.
(ii) In the above definition of a finite state automaton, we do not have an output alphabet; instead we
have a set A of accepting states.
(iii) The plural of automaton is automata.
4.4.1. Example. Draw the transition table and transition diagram for the FSA which is defined as
I = { a, b}, S = { s0 , s1 , s2 }

A = { s0 , s1 } , yes states (or accepting states), s0 is the initial state and the next state
function f : S × I → S is given as
f ( s0 , a) = s0 , f ( s1 , a) = s0 , f ( s2 , a) = s2

f ( s0 , b) = s1 , f ( s1 , b) = s2 , f ( s2 , b) = s2

Solution. The transition table for finite state automaton M = M ( I , S, A, s0 , f ) is given as

f
I
a b
S
s0 s0 s1
s1 s0 s2
s2 s2 s2

and the transition diagram is given as

[Transition diagram: s0 and s1 are drawn with double circles (accepting states); s0 has a loop labelled a and an edge labelled b to s1; s1 has an edge labelled a back to s0 and an edge labelled b to s2; s2 has loops labelled a and b.]
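Acceptance can be checked by iterating the next state function and testing whether the final state lies in A. A minimal Python sketch for the automaton of this example (the dictionary encoding is my own):

```python
def accepts(f, s0, A, word):
    """True iff the path determined by the word ends in an accepting state."""
    state = s0
    for symbol in word:
        state = f[state, symbol]
    return state in A

# the automaton of Example 4.4.1, with A = {s0, s1}
f = {("s0", "a"): "s0", ("s0", "b"): "s1",
     ("s1", "a"): "s0", ("s1", "b"): "s2",
     ("s2", "a"): "s2", ("s2", "b"): "s2"}
A = {"s0", "s1"}
# once two consecutive b's occur the machine is trapped in the
# non-accepting state s2, so exactly the words without "bb" are accepted
```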

4.4.2. Exercise. Let I = {a, b}, S = {s0, s1, s2}, A = {s2} and s0 is the initial state. The next state
function f : S × I → S is given by the table

f
I
a b
S
s0 s0 s1
s1 s0 s2
s2 s0 s2

Draw the transition diagram for this FSA.


4.4.3. Accepted String. Let M = M ( I , S, A, s0 , f ) be a finite state automaton. Let x1 , x2 ,..., xn be an input
string. We say that the string x1 x2 ... xn is accepted by FSA M if there exist states s0 , s1 ,..., sn such that
f(si−1, xi) = si for i = 1, 2, ..., n, and sn ∈ A, i.e., the final state is accepting. We call the directed
path P(s0, s1, ..., sn) the path which represents x1x2...xn in M.

Hence we can say that FSA M accepts x1 x2 ...xn iff the path P ends at an accepting state.
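The acceptance test just described is easy to simulate. The following sketch (the names and encoding are ours, not from the text) runs the automaton of Example 4.4.1, where A = {s0, s1}, on an input string:

```python
# Next state function of Example 4.4.1, stored as a dictionary.
f = {
    ("s0", "a"): "s0", ("s0", "b"): "s1",
    ("s1", "a"): "s0", ("s1", "b"): "s2",
    ("s2", "a"): "s2", ("s2", "b"): "s2",
}
accepting = {"s0", "s1"}   # A = {s0, s1}

def accepts(word, start="s0"):
    """Follow the next state function letter by letter; the word is
    accepted iff the final state is an accepting state."""
    state = start
    for letter in word:
        state = f[(state, letter)]
    return state in accepting

print(accepts("ab"))    # s0 -a-> s0 -b-> s1, an accepting state
print(accepts("abb"))   # ends in s2, not an accepting state
```

Note that the null string λ (here the empty Python string) is accepted iff the initial state itself is accepting.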

4.4.4. Example. Let an FSA have the following transition diagram.

[Transition diagram: states s0, s1, s2 over I = {a, b}; in particular s0 --a--> s1, s1 --b--> s1 (loop), s1 --a--> s2 and s2 --a--> s1, with the remaining b-edges as in the original figure.]

where I = {a, b}, S = {s0, s1, s2}, A = {s2} and s0 is the initial state. Does this FSA accept the strings given below?

(i) abbaa (ii) abab (iii) aaabbb


Solution. (i) We have

f(s0, abbaa) = f(f(s0, abba), a)
             = f(f(f(s0, abb), a), a)
             = f(f(f(f(s0, ab), b), a), a)
             = f(f(f(f(f(s0, a), b), b), a), a)
             = f(f(f(f(s1, b), b), a), a)
             = f(f(f(s1, b), a), a)
             = f(f(s1, a), a) = f(s2, a) = s1 ∉ A

The final state is not an accepting state, so the given string abbaa is not accepted by the above FSA. In short, the path determined by the word abbaa is

s0 --a--> s1 --b--> s1 --b--> s1 --a--> s2 --a--> s1 ∉ A

The remaining parts are left as an exercise for students.


4.4.5. Exercise. Let M = M(I, S, A, s0, f) be a finite state automaton, where

I = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}, S = {s0, s1, s2}, A = {s0}

and write a for any digit in {0, 3, 6, 9}, b for any digit in {1, 4, 7}, and c for any digit in {2, 5, 8}.

The next state function f : S × I → S is defined as

f(s0, a) = s0, f(s0, b) = s1, f(s0, c) = s2
f(s1, a) = s1, f(s1, b) = s2, f(s1, c) = s0
f(s2, a) = s2, f(s2, b) = s0, f(s2, c) = s1

Draw the transition table and transition diagram for this FSA. Does this automaton accept
(i) 258 (ii) 104 (iii) 142 (iv) 317
(v) 1247 (vi) 1947 (vii) 2001 (viii) 12045
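As a hint towards this exercise, observe that the given next state function makes the state s_i record the running digit sum modulo 3, so the automaton accepts exactly those decimal strings whose value is divisible by 3. A small sketch (the integer encoding of the states is ours) checks this:

```python
# States s0, s1, s2 encoded as 0, 1, 2: the state is the digit sum mod 3.
# A digit in {0,3,6,9} (class a) adds 0, in {1,4,7} (class b) adds 1,
# in {2,5,8} (class c) adds 2 -- exactly the given next state function.
def next_state(state, digit):
    return (state + int(digit)) % 3

def accepts(digits):
    state = 0                    # initial state s0
    for d in digits:
        state = next_state(state, d)
    return state == 0            # A = {s0}

# Compare with ordinary divisibility by 3:
for w in ["258", "104", "2001"]:
    print(w, accepts(w), int(w) % 3 == 0)
```

Since a number is divisible by 3 iff its digit sum is, the two columns printed always agree.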
4.5. Construction of finite state automata.
For a finite state automaton M, L(M) denotes the set of all words of I* which are accepted by M. L(M) is called a language; languages will be discussed in detail later on.
4.5.1. Example. Let I = {a, b} . Construct a finite state automaton which will accept precisely those
words on I which end in two b’s.
OR
Let I = {a, b}. Design a FSA M such that L(M) contains those words which end in two b’s.
Solution. Let the initial state be s0. Since bb is accepted, but not λ or b, we need three states, let s1 and

s2 be two other states. We define f(s0, b) = s1 and f(s1, b) = s2.

Thus the partial transition diagram (keeping in mind that s2 must be an accepting state and s0, s1 are non-accepting states) is

    s0 --b--> s1 --b--> s2

Now, f(s0, a) cannot be equal to s1 or s2, since in that case ab and a would be accepted. So we must have f(s0, a) = s0. Again, we cannot take f(s1, a) = s1 or s2, since in these cases bab and ba would be accepted. So we must have f(s1, a) = s0.

Now, f(s2, b) cannot be equal to s0 or s1, since in that case bbb would not be accepted, but it must be accepted. So we must have f(s2, b) = s2. Also, we cannot take f(s2, a) = s1 or s2, since in that case bbab and bba would be accepted. So we must have f(s2, a) = s0.
These additional conditions give the required automaton M = M(I, S, A, s0, f), whose transition diagram has the edges

    s0 --a--> s0,  s0 --b--> s1,  s1 --a--> s0,  s1 --b--> s2,  s2 --a--> s0,  s2 --b--> s2

with s2 the only accepting state.
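The construction can be checked mechanically; the sketch below (the encoding is ours) simulates the automaton just built and compares it against the verbal specification "ends in two b's":

```python
# The automaton constructed above: the state records how many trailing
# b's (0, 1, or "2 or more") the input read so far ends with.
f = {
    ("s0", "a"): "s0", ("s0", "b"): "s1",
    ("s1", "a"): "s0", ("s1", "b"): "s2",
    ("s2", "a"): "s0", ("s2", "b"): "s2",
}

def accepts(word):
    state = "s0"
    for letter in word:
        state = f[(state, letter)]
    return state == "s2"          # A = {s2}

# Agreement with "the word ends in two b's":
for w in ["bb", "abb", "babb", "b", "bba", "bab"]:
    print(w, accepts(w), w.endswith("bb"))
```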

4.5.2. Exercise. (i) Let I = {a, b}. Construct a finite state automaton M which will precisely accept those words from I which have an even number of a's.
(ii) Let I = {a, b}. Construct a finite state automaton M which will accept those words from I which begin with an 'a' followed by zero or more b's.
(iii) Let I = {a, b}. Construct an automaton M such that L(M) will consist of those words where the
number of b’s is divisible by 3.
(iv) Let I = {0, 1}. Construct a finite state automaton M such that L(M) contains precisely those strings
over I that contain no 1’s.
(v) Let I = {a, b}. Design a finite state automaton M which accepts precisely those strings which
contains exactly three b’s .
(vi) Let I = {a, b}. Construct a finite state automaton M that precisely accepts those words which begin with a and end in b.
(vii) Let I = {a, b}. Construct an automaton which will accept the language L(M) = {a^r b^s : r > 0, s > 0}.
(viii) Construct a finite state automaton with I = {a, b} that accepts a set of all strings which start with
ab.
4.6. Non-Deterministic Finite state Automaton. A non-deterministic finite state automaton

(NDFSA) M = M(I, S, A, s0, f) consists of
(i) A finite set I of input symbols.
(ii) A finite set S of states.
(iii) A subset A of S of accepting states.
(iv) An initial state s0 ∈ S.
(v) A next state function f : S × I → P(S), where P(S) is the power set of S.

Remark. (i) From now on, we shall call a finite state automaton a deterministic finite state automaton (DFSA).
(ii) The difference between a NDFSA and DFSA is that in a NDFSA the next state function maps an
ordered pair of state and input letter to a subset of states (all possible next states) instead of to a single
state as in DFSA.
4.6.1. Exercise. Draw the transition diagram for the NDFSA.
(i) Let the given NDFSA be M = M(I, S, A, s0, f), where I = {a, b}, S = {s0, s1, s2}, A = {s0}, and the next state function f is given by the table

    f      a        b
    s0     φ        {s1, s2}
    s1     {s2}     {s0, s1}
    s2     {s0}     φ

(ii) Consider the NDFSA defined as M = M ( I , S, A, s0 , f )

where I = { a , b} , S = { s0 , s1 , s2 , s3 } , A = {s2, s3} and the next state function f is given by following
transition table:

    f      a              b
    s0     {s0, s1}       {s3}
    s1     {s0}           {s1, s3}
    s2     φ              {s0, s2}
    s3     {s1, s2, s3}   {s1}

Exercise. The transition diagram of a NDFSA is given as

[Transition diagram: states s0, s1, s2 with edges labelled a and b, as in the original figure.]
Draw the transition table for this NDFSA and also give the next state function.
4.6.2. Definition. Let M = M(I, S, A, s0, f) be a non-deterministic finite state automaton. We say that a string is accepted by M if, starting from the initial state s0, at least one of the final states to which the string can lead the automaton is an accepting state, i.e., the set of possible final states intersects the set A.
It should be noted that the null string λ is accepted by M if and only if s0 ∈ A i.e., initial state is an
accepting state. The set of all strings which are accepted by NDFSA, M is denoted by AC(M).
4.6.3. Equivalent non-deterministic finite state automata. Let M and M′ be two NDFSAs. They are said to be equivalent if

AC(M) = AC(M′)

that is, the set of strings accepted by M is the same as the set of strings accepted by M′.
4.6.4. Exercise. Let M = M ( I , S, A, s0 , f ) be a NDFSA with I = { a, b}, S = { s0 , s1 , s2 , s3 , s4 } ,
A = { s2 , s4 }, s0 is the initial state and the next state function is given by following table:

    f      a           b
    s0     {s0, s3}    {s0, s1}
    s1     φ           {s2}
    s2     {s2}        {s2}
    s3     {s4}        φ
    s4     {s4}        {s4}

Determine whether M accepts the strings


(i) aba (ii) aabb (iii) abaab
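Non-deterministic acceptance can be simulated by tracking the whole set of currently possible states; the following sketch (the encoding is ours) does this for the machine of Exercise 4.6.4:

```python
# Next state function of Exercise 4.6.4, mapping (state, letter) to a
# SET of states; A = {s2, s4}.
f = {
    ("s0", "a"): {"s0", "s3"}, ("s0", "b"): {"s0", "s1"},
    ("s1", "a"): set(),        ("s1", "b"): {"s2"},
    ("s2", "a"): {"s2"},       ("s2", "b"): {"s2"},
    ("s3", "a"): {"s4"},       ("s3", "b"): set(),
    ("s4", "a"): {"s4"},       ("s4", "b"): {"s4"},
}
accepting = {"s2", "s4"}

def nd_accepts(word):
    """Run on the set of possible states; accept iff the final set of
    states intersects the set A of accepting states."""
    states = {"s0"}
    for letter in word:
        states = set().union(*(f[(s, letter)] for s in states))
    return bool(states & accepting)

print(nd_accepts("aba"))
print(nd_accepts("aabb"))
print(nd_accepts("abaab"))
```

Tracing by hand: on aba the possible-state sets are {s0} → {s0, s3} → {s0, s1} → {s0, s3}, which misses A, while aabb and abaab each finish in a set containing a state of A.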

4.7. Equivalence of DFSA and NDFSA.


For a given NDFSA we can always construct an equivalent DFSA. For this we have the following
theorem:
4.7.1. Theorem. Let L be a set accepted by a NDFSA , M = M(I, S, A, s0, f). Then there exists a DFSA
M′ = M′(I, S′, A′, s0′, f ′) that accepts L.
Proof. The given NDFSA is M = M(I, S, A, s0, f). We know that f is defined from S × I to P(S), where P(S) is the power set of S.

We define a DFSA , M ′ = M ′ ( I , S′, A′, s0′ , f ′) as follows:

The states of M′ are all the subsets of the set S of all states of M, i.e., S′ = P(S).

Clearly, if S has n states, then S′ has 2n states. We define s′0 = {s0} as the initial state of M′, and A′ is the
set of all states in S′ containing an accepting state of M, i.e.,
A′ = { s ∈ S′ : s ∩ A ≠ φ} .

We define the next state function f′ : S′ × I → S′ by

f′(s, a) = ∪σ∈s f(σ, a)   for s ∈ S′, a ∈ I.

We shall prove that M′ accepts the same language as M. For this it is sufficient to prove that for any string x ∈ I*, we must have f′(s0′, x) = f(s0, x) ......(1)

We shall prove (1) by using induction on length of string x.


Let |x| = 0, i.e., the length of x is 0, i.e., x = λ, the empty string.

Then f′(s0′, λ) = s0′ = {s0}   [by definition of s0′]
              = f(s0, λ)       [by definition of f]

Thus (1) holds for |x| = 0, i.e., for x = λ.


Let us assume as our induction hypothesis that (1) is true for any string x such that |x| = k, i.e., f′(s0′, x) = f(s0, x), where |x| = k.

We shall prove that (1) is true for any string of length k + 1. For this we shall prove that f′(s0′, xa) = f(s0, xa), where a ∈ I.

To show it, we have


f′(s0′, xa) = f′(f′(s0′, x), a) = f′(f(s0, x), a)   [by induction hypothesis]

= ∪σ∈f(s0, x) f(σ, a)   [by definition of f′]
= f(s0, xa)             [by definition of f]

Hence by induction (1) holds for all strings.


Now, we know that a string x is accepted by M′ iff f ′( s0′ , x ) ∈ A′

iff f ( s0 , x ) ∈ A′

iff f ( s0 , x ) ∩ A ≠ φ [By definition of A′]

iff x is accepted by M.
Thus x is accepted by M′ iff x is accepted by M.
This completes the proof of theorem.
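The proof is constructive and turns directly into a program; the sketch below (the function name and frozenset representation are ours) builds only the subsets of S reachable from {s0}, which is usually far fewer than all 2^n of them:

```python
from collections import deque

def subset_construction(nd_f, alphabet, start):
    """Build the DFSA next state function of the proof: states of M'
    are frozensets of states of M, explored breadth-first from {s0}."""
    start_set = frozenset({start})
    d_f, seen, queue = {}, {start_set}, deque([start_set])
    while queue:
        s = queue.popleft()
        for a in alphabet:
            # f'(s, a) is the union of f(sigma, a) over sigma in s.
            t = frozenset().union(*(nd_f[(q, a)] for q in s))
            d_f[(s, a)] = t
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return d_f, seen

# The NDFSA of Example 4.7.2 below, with A = {s1}:
nd_f = {
    ("s0", "a"): {"s0", "s1"}, ("s0", "b"): {"s1"},
    ("s1", "a"): set(),        ("s1", "b"): {"s0", "s1"},
}
d_f, states = subset_construction(nd_f, "ab", "s0")
print(len(states))   # the reachable subsets: {s0}, {s0,s1}, {s1}, {}
```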
4.7.2. Example. Let an NDFSA be defined as M = M(I, S, A, s0, f), where I = {a, b}, S = {s0, s1}, A = {s1}, s0 is the initial state and the next state function f : S × I → P(S) is given by the table

    f      a           b
    s0     {s0, s1}    {s1}
    s1     φ           {s0, s1}

Construct a DFSA equivalent to the given NDFSA.


Solution. Let M′ = M′(I, S′, A′, s0′, f′) be the required DFSA, where we define

S′ = {φ, {s0}, {s1}, {s0, s1}}
A′ = {s ∈ S′ : s ∩ A ≠ φ} = {{s1}, {s0, s1}}

and s0′ = {s0}, I = {a, b}.

We define the next state function f′ : P(S) × I → P(S) by

f′(s, x) = ∪σ∈s f(σ, x)   for s ∈ P(S) = S′.

Now, f′(φ, a) = φ, f′(φ, b) = φ
f′({s0}, a) = f(s0, a) = {s0, s1}
f′({s0}, b) = f(s0, b) = {s1}

f′({s1}, a) = f(s1, a) = φ
f′({s1}, b) = f(s1, b) = {s0, s1}
f′({s0, s1}, a) = f(s0, a) ∪ f(s1, a) = {s0, s1} ∪ φ = {s0, s1}
f′({s0, s1}, b) = f(s0, b) ∪ f(s1, b) = {s1} ∪ {s0, s1} = {s0, s1}

The table for the next state function f′ is now

    f′           a           b
    φ            φ           φ
    {s0}         {s0, s1}    {s1}
    {s1}         φ           {s0, s1}
    {s0, s1}     {s0, s1}    {s0, s1}

The transition diagram for this DFSA has the edges

    {s0} --a--> {s0, s1},  {s0} --b--> {s1},  {s1} --a--> φ,  {s1} --b--> {s0, s1},
    {s0, s1} --a,b--> {s0, s1},  φ --a,b--> φ

It may be noted that a state which can never be reached from the initial state may be deleted from the transition diagram. Here every state is reachable from the initial state {s0}, so no state can be deleted, although no edge re-enters {s0} once it has been left.

4.7.3. Exercise. (i) Consider the NDFSA

M = M({0, 1}, {s0, s1, s2, s3}, {s3}, s0, f)

Here I = {0, 1}, S = { s0 , s1 , s2 , s3 } , A = { s3 }

and s0 is the initial state. The next state function is given by the transition table

    f      0        1
    s0     {s0}     {s0, s1}
    s1     {s2}     {s2}
    s2     {s3}     {s3}
    s3     φ        φ

Construct a DFSA equivalent to the given NDFSA and also draw its transition diagram.

(ii) Consider the NDFSA, M = M(I, S, A, s0, f) where I = {a, b}, S = { s0 , s1 , s2 } , A = { s0 } ; s0 is


initial state and next state function f is given by

    f      a        b
    s0     φ        {s1, s2}
    s1     {s2}     {s0, s1}
    s2     {s0}     φ

(a) Draw the transition diagram for this NDFSA.

(b) Find the equivalent DFSA and draw its transition table and transition diagram.
4.8. Languages. Let A be a non-empty finite set of symbols, we know that A* denotes the set of all
words on A. Here A is called alphabet and symbols of A are called letters.
4.8.1. Definition. A language L over an alphabet A is simply a subset of A*.
For example, let A = {a, b}. Then L1 = {a, ab, ab^2, ...} and L2 = {a^m b^n : m > 0, n > 0} are languages over the alphabet A. We may describe these languages verbally as follows:
L1 is set of all words beginning with a and followed by zero or more b’s.
L2 is set of all words beginning with one or more a’s followed by one or more b’s.
4.8.2. Definition. Suppose L and M are languages over an alphabet A. Then the concatenation of L and M, denoted by LM, is the language defined as follows:

LM = {uv : u ∈ L, v ∈ M}

that is, LM denotes the set of all words which come from the concatenation of a word from L with a word from M. e.g. Suppose L = {a, b^2}, M = {a^2, ab, b^3}; then

LM = {a^3, a^2 b, ab^3, b^2 a^2, b^2 ab, b^5}

Clearly, the concatenation of languages is associative, since the concatenation of words is associative.
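The concatenation LM is a one-line set comprehension; the sketch below reproduces the example above, with words written as Python strings:

```python
# L = {a, b^2} and M = {a^2, ab, b^3}, words written out as strings.
L = {"a", "bb"}
M = {"aa", "ab", "bbb"}

# LM = {uv : u in L, v in M}: concatenate every pair.
LM = {u + v for u in L for v in M}
print(sorted(LM))   # the six words a^3, a^2 b, ab^3, b^2 a^2, b^2 ab, b^5
```

Note that |LM| can be smaller than |L|·|M| in general, since different pairs may concatenate to the same word; here all six products are distinct.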
4.8.3. Definition. Powers of a language L are defined as

L^0 = {λ}, L^1 = L, L^2 = LL, L^(n+1) = L^n L for n ≥ 1.

The unary operation L* of a language L, called the Kleene closure of L, is defined as the infinite union

L* = L^0 ∪ L^1 ∪ L^2 ∪ ... = ∪(k≥0) L^k

Remark. (i) The definition of L* agrees with the notation A*, which contains all words over A.

(ii) L+ is defined as L+ = L^1 ∪ L^2 ∪ ... = ∪(k≥1) L^k, i.e., L+ can be obtained from L* by deleting the empty word λ.
4.8.4. Regular Expressions. Let A be a non-empty finite alphabet. We shall define a regular expression
r over A and a language L ( r ) over A associated with regular expression r. The expression r and its
corresponding language L ( r ) are defined inductively as follows.
4.8.5. Definition. Each of the following is a regular expression over an alphabet A.
(i) The empty string λ and the pair ( ) (empty expression) are regular expressions.
(ii) Each letter a in A is a regular expression.
(iii) If r is a regular expression, then (r*) is a regular expression.
(iv) If r1 and r2 are regular expressions, then ( r1 ∨ r2 ) is a regular expression.
(v) If r1 and r2 are regular expressions, then (r1 r2) is a regular expression.
All regular expressions are formed in this way.
Remark. (i) Observe that a regular expression r is a special kind of a word (string) which uses the letters
of A and the five symbols ( ), *, ∨, λ, •.
(ii) It should be noted that no other symbols are used for regular expressions.
4.8.6. Definition. The language L(r) over A defined by a regular expression r over A as follows.
(i) L (λ) = { λ} and L ( ( )) = φ , the empty set.
(ii) L ( a ) = { a } , where a is a letter in A.
(iii) L ( r * ) = ( L ( r )) * , the Kleene closure of L ( r ) .

(iv) L ( r1 ∨ r2 ) = L ( r1 ) ∪ L( r2 ) , union of languages.


(v) L ( r1 r2 ) = L ( r1 ) L ( r2 ) , cocatenation of languages.
4.8.7. Example. Let A = {a, b}
(i) Consider the regular expression r = a*.
Then L(r) = L(a*) = (L(a))* = {a}*
         = set containing all powers of a, including λ.

(ii) Consider the regular expression r = aa*.
Then L(r) = L(aa*) = L(a)L(a*) = L(a)(L(a))*
         = {a}{a}* = {a}{λ, a, a^2, ...}
         = {a, a^2, a^3, ...}
         = set containing all powers of a, excluding λ.

(iii) Let r = a ∨ b*.
Then L(r) = L(a ∨ b*) = L(a) ∪ L(b*)
         = L(a) ∪ (L(b))*
         = {a} ∪ {b}* = {a, λ, b, b^2, ...}

(iv) Let r = (a ∨ b)*.
Then L(r) = L((a ∨ b)*) = (L(a ∨ b))*
         = (L(a) ∪ L(b))* = ({a} ∪ {b})*
         = ({a, b})* = A*
         = set of all words over A.


(v) Let r = a ∧ b * . Then L ( r ) does not exist since r is not a regular expression (Note ∧ is not one of the
symbols used for regular expression).
4.8.8. Definition. Let L be a language over A. Then L is called a regular language over A if there exists a
regular expression r over A such that L = L ( r ) .

e.g. Consider the language L = {a^m b^n : m > 0, n > 0} over the alphabet A = {a, b}.

We want to find a regular expression r over A such that L = L(r).

Now, L contains those words beginning with one or more a's followed by one or more b's. Hence, we can set r = aa*bb*.

Note that r is not unique, e.g., r = a*abb* is another solution.
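The regular expressions of this section correspond closely to the pattern notation of programming languages, where * has the same Kleene-star meaning and ∨ is written |. A sketch using Python's re module tests membership in L = {a^m b^n : m > 0, n > 0} via the expression r = aa*bb*:

```python
import re

def in_L(word):
    """A word is in L iff the WHOLE word matches aa*bb*,
    i.e. one or more a's followed by one or more b's."""
    return re.fullmatch(r"aa*bb*", word) is not None

print(in_L("ab"), in_L("aaabb"))   # both belong to L
print(in_L("a"), in_L("ba"))       # neither belongs to L
```

The equivalent expression a*abb* of the text gives the same language; in re notation both can also be written a+b+.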



4.9. Language determined by an automaton. Each automaton M with input alphabet A defines a
language over A, denoted by L(M). The language L(M) of M is the collection of all words from A, which
are accepted by M.
Recall that an automaton M accepts the word w if the final state is an accepting state.
4.9.1. Example. Consider the finite state automaton M whose transition diagram has the edges

    s0 --a--> s0,  s0 --b--> s1,  s1 --a--> s0,  s1 --b--> s2,  s2 --a,b--> s2

with accepting states s0 and s1. Find L(M).

Solution. From the diagram, we note that s2 is the only non-accepting state and also that we cannot leave s2 once it has been entered. Thus the strings which lead us into the state s2 are not accepted; that is, a string containing two successive b's is not accepted by M. Hence L(M) contains all strings over {a, b} which do not have two successive b's.
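A sketch of this automaton (transitions as read off the diagram, in our encoding) confirms the description of L(M):

```python
# s1 means "the last letter was b"; a second b in a row falls into the
# trap state s2, which is the only non-accepting state.
f = {
    ("s0", "a"): "s0", ("s0", "b"): "s1",
    ("s1", "a"): "s0", ("s1", "b"): "s2",
    ("s2", "a"): "s2", ("s2", "b"): "s2",
}

def accepts(word):
    state = "s0"
    for letter in word:
        state = f[(state, letter)]
    return state != "s2"          # accepting states are s0 and s1

# Agreement with "no two successive b's":
for w in ["abab", "ba", "abba", "bb"]:
    print(w, accepts(w), "bb" not in w)
```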
4.9.2. Exercise. (i) Let M = M ( I , S, A, s0 , f ) be the automaton where I = { a , b} ,
S = { s0 , s1 , s2 } , A = { s1 } , s0 is the initial state and next state function f is given as

    f      a      b
    s0     s0     s1
    s1     s1     s2
    s2     s2     s2

Describe the language L(M) accepted by M.


(ii) Describe the words w in the language L accepted by the automaton M given below

[Transition diagram: states s0, s1, s2 with edges labelled a and b, as in the original figure.]

4.9.3. Definition. Consider any word u = x1 x2 ... xn on an alphabet A. Any sequence v = xj x(j+1) ... xk is called a subword of u. In particular, the subword v = x1 x2 ... xk, beginning with the first letter of u, is called an initial segment of u.
Remark. In the above example, we can say that L(M) contains those strings which have ba as a subword, e.g., aabab, ababbb, etc.
4.9.4. Exercise. Describe the language L(M) accepted by the automaton M given by the following transition diagram:

[Transition diagram: states s0, s1, s2, s3, s4 with edges labelled a and b, as in the original figure.]

Note. The fundamental relationship between regular languages and automata is given by the following
theorem:
4.9.5. Kleene theorem. (without proof). A language L over an alphabet A is regular iff there is a finite
state automaton M such that L = L ( M ) .
4.9.6. Pumping Lemma. Suppose M is an automaton over an alphabet A such that
(i) M has k states, and (ii) M accepts a word w from A with |w| > k.
Then w can be written as w = xyz with y non-empty, and for every positive integer m, the word w_m = x y^m z is accepted by M.

Proof. Suppose w = a1 a2 ... an is a word over A accepted by M such that |w| = n > k, where k is the number of states. Let P(s0, s1, ..., sn) be the corresponding sequence of states determined by the word w. Since n > k, at least two states in P must be equal, say si = sj, where i < j.

Let us set x = a1 a2 ... ai, y = a(i+1) ... aj, z = a(j+1) ... an. Then clearly w = xyz, and y is non-empty since i < j.

We observe that x must end in state si and xy must end in sj = si, i.e., x and xy both end in si. Also xyz ends in sn. So the relevant part of the transition diagram of M is

    s0 --x--> (si = sj) --z--> sn, with a loop labelled y at si

From the above diagram it is obvious that xy^2, xy^3, ..., xy^m (for every positive integer m) also end in si. Thus for every m, w_m = x y^m z ends in sn, which is an accepting state.
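The pigeonhole argument in the proof can be carried out concretely. The sketch below uses a machine that is not from the text (a two-state DFA over {a} accepting words with an even number of a's, an assumption made purely for illustration): it finds the repeated state along an accepted long word, splits w = xyz, and checks that every pumped word x y^m z is still accepted:

```python
# Illustrative DFA (not from the text): k = 2 states, accepts an even
# number of a's; state "e" = even so far, "o" = odd so far.
f = {("e", "a"): "o", ("o", "a"): "e"}
start, accepting = "e", {"e"}

def run(word):
    state = start
    for c in word:
        state = f[(state, c)]
    return state

w = "aaaa"                        # accepted, and |w| = 4 > k = 2

# States visited while reading w: one more state than letters.
trace = [start]
for c in w:
    trace.append(f[(trace[-1], c)])

# Pigeonhole: locate the first repeated state s_i = s_j with i < j.
seen = {}
for j, s in enumerate(trace):
    if s in seen:
        i = seen[s]
        break
    seen[s] = j

x, y, z = w[:i], w[i:j], w[j:]
print(x, y, z)                    # y is non-empty since i < j
for m in range(5):                # pump y: all results stay accepted
    print(run(x + y * m + z) in accepting)
```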

4.9.7. Example. Show that the language L = {a^m b^m : m > 0} is not regular.

Solution. Assume, on the contrary, that L is regular. Then, by Kleene's theorem, there exists a finite state automaton M which accepts L. Suppose M has k states. Let w = a^k b^k. Then |w| > k, so by the pumping lemma, w = xyz where y is not empty and w_m = x y^m z, m > 0, is also accepted by M. In particular, w_2 = x y^2 z is accepted by M.

Now, if y contains only a's or only b's, then w_2 will not have the same number of a's and b's. If y contains both a's and b's, then w_2 will have a's following b's. In both cases w_2 does not belong to L, which is a contradiction. Thus L is not regular.
4.10. Grammars. A phrase-structure grammar, or simply a grammar, G consists of four parts:
(i) A finite set N of non-terminal symbols.
(ii) A finite set T of terminal symbols, where N ∩ T = φ.
(iii) A finite subset P of ((N ∪ T)* − T*) × (N ∪ T)*, called the set of productions. A production is an ordered pair (α, β), usually written as α → β, where α ∈ (N ∪ T)* − T*, i.e., α must contain at least one non-terminal symbol, and β ∈ (N ∪ T)*, i.e., β can contain any combination of non-terminal and terminal symbols.
(iv) A starting symbol σ ∈ N.
We shall denote the grammar G defined above by

G = G(N, T, P, σ)

Remark. (i) Terminals will be denoted by lower case letters a, b, c,... and non-terminals will be denoted
by capital letters A, B, C,... with σ as starting symbol.
(ii) Sometimes, we define a grammar G by only giving its productions, assuming implicitly that σ is the
starting symbol and that the terminals and non-terminals of G are only those appearing in the
productions.
4.10.1. Example. Let N = { σ, A} , T = { a , b} , P = { σ → b, σ → bA, A → aA, A → b} where σ is the
starting symbol. Then G = G( N, T , P, σ) is a grammar.
Remark. Productions of above example can be given as σ → ( b, bA ) and A →( aA, b ) .
4.10.2. Definition. Let G = G(N, T, P, σ). Let α → β be any production and let x, y be strings over terminals and non-terminals, i.e., x, y ∈ (N ∪ T)*. Then we say that xβy is directly derivable from xαy and we write

xαy ⇒ xβy

i.e., xβy can be obtained from xαy by using the production α → β.



Further, if xi ∈ (N ∪ T)* for i = 1, 2, ..., n and x(i+1) is directly derivable from xi, then we say that xn is derivable from x1 and we write x1 ⇒ xn. We call

x1 ⇒ x2 ⇒ ... ⇒ xn

the derivation of xn from x1.


Remark. By convention it is assumed that every string of ( N ∪ T ) * is derivable from itself.

4.10.3. Definition. Let G = G(N, T, P, σ) be a grammar. The language accepted (or generated) by the grammar G, denoted by L(G), contains all words over T that can be derived from the starting symbol, i.e., L(G) = {w ∈ T* : σ ⇒ ... ⇒ w}.

4.10.4. Example. Consider the grammar with productions σ → bσ , σ → aA, A → bA, A → b . Find the
language L(G) accepted by this grammar.
Solution. We observe that the string bbab can be derived from σ by the derivation

σ ⇒ bσ ⇒ bbσ ⇒ bbaA ⇒ bbab

If we apply the production σ → bσ, n times, then apply σ → aA, then apply A → bA, m times, and finally apply A → b, we get the derivation

σ ⇒ bσ ⇒ bbσ ⇒ ... ⇒ b^n σ ⇒ b^n aA ⇒ b^n abA ⇒ b^n abbA ⇒ ... ⇒ b^n ab^m A ⇒ b^n ab^(m+1)

where n ≥ 0, m ≥ 0. On the other hand, no sequence of productions can produce two or more a's, and every derived string ends precisely in b.

Hence L(G) = {b^n ab^(m+1) : n ≥ 0, m ≥ 0}.
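The claim can be checked by brute force: derive all short terminal words (here S stands for σ, an encoding of ours, and the length bound 5 is arbitrary) and test each against the pattern b^n a b^(m+1), written as the regex b*ab+:

```python
import re
from collections import deque

# Productions sigma -> b sigma | aA and A -> bA | b, with S for sigma.
productions = {"S": ["bS", "aA"], "A": ["bA", "b"]}

# Breadth-first search over sentential forms, expanding the leftmost
# non-terminal, collecting terminal words of length <= 5.
words, queue, seen = set(), deque(["S"]), {"S"}
while queue:
    form = queue.popleft()
    if len(form) > 5:
        continue
    nts = [i for i, c in enumerate(form) if c in productions]
    if not nts:
        words.add(form)          # no non-terminal left: a terminal word
        continue
    i = nts[0]
    for rhs in productions[form[i]]:
        new = form[:i] + rhs + form[i + 1:]
        if new not in seen:
            seen.add(new)
            queue.append(new)

print(sorted(words))
print(all(re.fullmatch(r"b*ab+", w) for w in words))  # True
```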

4.10.5. Example. Find the language L(G) generated by the grammar G, where
N = { σ, A, B} , T = { a , b} and productions P = { σ → aB, B → b, B → bA, A → aB} .

Solution. Here, we observe that we can only use the first production once since the starting symbol σ
does not appear anywhere else. Also, we can only obtain a terminal word by finally using the second
production. Otherwise, we alternatively add a’s and b’s using the third and fourth productions.
We can describe the above process by following derivations.
σ ⇒ aB ⇒ ab

or σ ⇒ aB ⇒ abA ⇒ abaB ⇒ abab


or σ ⇒ aB ⇒ abA ⇒ abaB ⇒ ababA

⇒ ababaB ⇒ ababab
and so on. Hence, we get

L(G) = {(ab)^n : n ≥ 1}

4.10.6. Exercise. (i) Find the language L(G) over {a, b, c} generated by the grammar G with
productions σ → a σ b, a σ → Aa , Aab → c .

(ii) Let G = G( N, T , P, σ) be a grammar where N = { σ} , T = { a , b} and productions P are


P = { σ → a , σ → σa , σ → b, σ → bσ} . Find the language generated by this grammar.

4.11. Type of Grammars. Grammars are classified in terms of context sensitive, context free and
regular as follows:
4.11.1. Definition. Let G = G(N, T, P, σ) be a grammar and let λ be the null string. Then the grammar G is said to be context-sensitive, or type-1, if every production is of the form

αAα′ → αβα′, where α, α′ ∈ (N ∪ T)*, A ∈ N, β ∈ (N ∪ T)* − {λ}

The name context-sensitive comes from the fact that we can replace the variable (non-terminal) A by β in a word only when A lies between α and α′.

Further, it must be noted that for the production αAα′ → αβα′, the length of the left side αAα′ is less than or equal to the length of the right side αβα′, since β ≠ λ. So

|αAα′| ≤ |αβα′|

Hence, in a type-1 or context-sensitive grammar the length of the left side of every production is less than or equal to the length of the right side of the production.
4.11.2. Definition. A grammar G = G(N, T, P, σ) is said to be context-free, or type-2, if every production is of the form A → β, where A ∈ N and β ∈ (N ∪ T)*, that is, the left side of every production is a single non-terminal and the right side is any word in one or more symbols.
The name context free comes from the fact that we can now replace the variable A by β regardless of
where A appears.
4.11.3. Definition. A grammar G = G(N, T, P, σ) is said to be a regular or type-3 grammar if every production is of the form A → a, A → aB, or A → λ, that is, the left-hand side is a single non-terminal and the right side is λ, a single terminal, or a terminal followed by a single non-terminal.
Remarks. (i) Clearly a type-3 grammar is always a type-2 grammar and a type-2 grammar, if it does not
contain the productions of the form A → λ, is a type -1 grammar.
(ii) If a grammar is not of any type i.e., type-1, type-2 and type-3 then it is said to be type -0 grammar.
Thus a type-0 grammar has no restrictions on its productions and hence every grammar is a type-0
grammar.
4.11.4. Example. Determine the type of grammar G which contains the productions

(i) σ → aAB, σ → AB, A → a , B → b (ii) σ → aB, B → AB, aA → b, A → a , B → b

(iii) σ → aB, B → bB, B → bA, A → a , B → b (iv) σ → aA, A → aAB, B → b, A → a

(v) σ → aAB, AB → a , A → b, B → AB

Solution. (i) The production σ → aAB means that grammar is not regular. Also every production is of
the form A → β i.e., left side is a non-terminal. So G is type -2 i.e., context free.
(ii) The production aA → b says that grammar is not of type-1, type-2 or type-3. So, G is type-0
grammar.
(iii) G is a regular or type-3 grammar since each production has the form A → a or A → aB.
(iv) Each production is of the form A → α i.e., a non-terminal on the left, hence G is a context-free or
type-2 grammar.
(v) The production AB → a means G is a type-0 grammar.
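These case distinctions can be expressed as a simplified classifier. In the sketch below (an encoding of ours: uppercase letters stand for non-terminals, lowercase for terminals, the empty string for λ, S for σ, and the length criterion noted above is used as the type-1 test):

```python
def is_regular(lhs, rhs):
    """A -> a, A -> aB, or A -> lambda (written as the empty string)."""
    return len(lhs) == 1 and lhs.isupper() and (
        rhs == "" or (rhs[0].islower()
                      and (len(rhs) == 1
                           or (len(rhs) == 2 and rhs[1].isupper()))))

def grammar_type(prods):
    """Return the most restrictive type (3, 2, 1, or 0) of the grammar
    given as a list of (lhs, rhs) productions."""
    if all(is_regular(l, r) for l, r in prods):
        return 3
    if all(len(l) == 1 and l.isupper() for l, r in prods):
        return 2
    if all(len(l) <= len(r) for l, r in prods):
        return 1
    return 0

# Productions of parts (i), (ii) and (iii) of Example 4.11.4:
print(grammar_type([("S", "aAB"), ("S", "AB"), ("A", "a"), ("B", "b")]))
print(grammar_type([("S", "aB"), ("B", "AB"), ("aA", "b"), ("A", "a"), ("B", "b")]))
print(grammar_type([("S", "aB"), ("B", "bB"), ("B", "bA"), ("A", "a"), ("B", "b")]))
```

On these three grammars it prints 2, 0 and 3, matching the solution above.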
4.11.5. Exercise. (i) What is the type of a grammar G defined by T = {a, b, c}, N = {σ, A, B, C, D, E},
starting symbol σ and productions are
σ → aAB, σ → aB, A → aAc, A → ac, B → Dc

D → b, CD → CE , CE → DE , DE → DC, Cc → Dcc

(ii) What is the type of grammar whose productions are


σ → aAB, AB → bB, B → b, A → aB .

4.11.6. Theorem (without proof) . A language L can be generated by a regular grammar if and only if L
is a regular language.
4.11.7. Example. Consider the language L = {a^n b^n : n > 0}. Find a context-free grammar G which generates the language L.
Solution. Here T = {a, b}
If we consider the productions σ → ab, σ → aσb, then we note that

σ ⇒ aσb ⇒ aabb = a^2 b^2
σ ⇒ aσb ⇒ aaσbb ⇒ aaabbb = a^3 b^3

Continuing like this, we obtain

σ ⇒ aσb ⇒ ... ⇒ a^n b^n, n > 0

Hence L(G) = {a^n b^n : n > 0}.

So, the grammar G with productions σ → ab and σ → a σ b generates the language and clearly G is
context-free.

4.11.8. Exercise. Can we find a regular grammar G which generates the language L = {a^n b^n : n > 0}?

4.12. Derivation Trees of context-free Grammars. Consider a context-free grammar G. Any


derivation of a word w in L(G) can be represented graphically by means of an ordered, rooted tree T,
called a derivation tree or parse tree.
For example, let G be a context free grammar with the following productions
σ → aAB , A → Bba, B → bB, B→c
The word w = acbabc can be derived as
σ ⇒ aAB ⇒ a BbaB ⇒ acbaB ⇒ acbabB ⇒ acbabc
To draw the derivation tree, we begin with σ as the root and then add branches to the tree according to
the production used in the derivation of w.
[Derivation tree figures (i)-(v): starting from the root σ, the tree is grown one production at a time: (i) σ → aAB gives the root children a, A, B; (ii) A → Bba expands A; (iii) B → c expands the B below A; (iv) B → bB expands the rightmost B; (v) B → c completes the tree.]

The sequence of leaves from left to right is the derived word w. i.e., w = acbabc. It should be noted that

every leaf of the tree is a terminal symbol and every non-leaf is a non-terminal symbol. If A is any non-
leaf and let its immediate successors (children) form a word α, then A → α is a production of G. e.g.
In (ii) of above figure, children of A forms the word Bba, and so A → Bba is a production of G.
4.12.1. Exercise. (i) The below figure is the derivation tree of a word w in the language L of a context
free grammar G:
[Derivation tree figure: rooted at σ, with internal nodes σ and A and leaves from {a, b}, as in the original figure.]

(a) Find w.
(b) What are the terminals, non-terminals and production of G.
(ii) For the derivation tree of a word
[Derivation tree figure: internal nodes A and B with leaves from {a, b}, as in the original figure.]
(a) Find w (b) Find N, T and P


(iii) Consider the regular grammar G with productions
σ → aA , A → aB, B → bB, B → a

(i) Draw the derivation tree of the word aaba.


(ii) Find the language L(G) generated by G.
4.12.2. Definition. A context-free grammar G is said to be ambiguous grammar if there is at least one
word in L(G) which has more than one derivation trees.
4.12.3. Exercise. Show that the grammar G with productions σ → a σ, σ → σa , σ → a is ambiguous.

Solution. We consider the word w = aaa. Derivations of w are


σ ⇒ aσ ⇒ a σa ⇒ aaa

and σ ⇒ σa ⇒ a σa ⇒ aaa

Derivation trees for these two derivations are different: in the first tree the root σ has children a, σ and that σ has children σ, a; in the second tree the root σ has children σ, a and that σ has children a, σ. In both trees the remaining σ expands by σ → a.
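In a context-free grammar, distinct leftmost derivations of a word correspond one-to-one to distinct derivation trees, so ambiguity can be confirmed by counting them. A sketch (with S standing for σ, an encoding of ours):

```python
def leftmost_derivations(form, target):
    """Count the leftmost derivations of `target` from the sentential
    form `form`, for the grammar S -> aS | Sa | a."""
    if "S" not in form:
        return 1 if form == target else 0
    # Every S eventually yields at least one letter, so prune forms
    # whose minimum possible length already exceeds the target.
    if len(form.replace("S", "a")) > len(target):
        return 0
    i = form.index("S")           # expand the leftmost non-terminal
    return sum(leftmost_derivations(form[:i] + rhs + form[i + 1:], target)
               for rhs in ["aS", "Sa", "a"])

n = leftmost_derivations("S", "aaa")
print(n)   # 4: two binary choices (aS or Sa) at each of two levels
```

Since the count exceeds one, the word aaa has more than one derivation tree and the grammar is ambiguous.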
4.13. Type of Languages. A language L is said to be context-sensitive, context-free or regular if there exists a context-sensitive, context-free or regular grammar G, respectively, such that L = L(G). e.g. Let G be the grammar given by the productions

σ → bσ, σ → aA, A → bA, A → b

We have already proved for this grammar that


L ( G ) = { b m ab m +1 : n ≥ 0, m ≥ 0}

Now, this language is a regular language since it is generated by a regular grammar.


Note. In all the examples done before, type of the language L(G) is same as that of grammar G.
4.13.1. Example. Find the language L(G), where G has the productions σ → aA, A → bbA, A → c. What is the type of the language L(G)?

Solution. We can apply the production σ → aA only once since starting symbol σ does not appear
anywhere else. Then we apply the production A → bbA, n times and finally apply the production A → c,
to obtain
σ ⇒ aA ⇒ abbA ⇒ abbbbA ⇒ ... ⇒ ab^(2n) A ⇒ ab^(2n) c, where n ≥ 0

So, L(G) = {ab^(2n) c : n ≥ 0}.

Here, the grammar G is context-free grammar and so the language L(G) is also context-free.
4.13.2. Backus-Naur Form. There is another notation, called the Backus Naur form, which is
sometimes used for describing the productions of a context-free grammar. In this form
(i) ::= is used instead of →.
(ii) Every non-terminal is enclosed in brackets < >.
(iii) All productions with the same non-terminal left-hand side are combined into one statement, with all the right-hand sides listed to the right of ::= separated by vertical bars. e.g., the productions A → aB, A → b, A → BC are combined into one statement as given below:

<A> ::= a<B> | b | <B><C>

4.13.3. Example. Rewrite each grammar G given below in Backus-Naur form.
(i) σ → aA , A → aAB, B → b, A→a

(ii) σ → aB, B → bA , B → b, B → a, A → aB, A→a


(iii) σ → aAB, σ → AB , A → a, B→b
(iv) σ → aB, B → bB, B → bA, A → a, B→b
Solution. Using the rules given above, we have the following Backus-Naur forms for the above grammars, all of which are context-free.
(i) <σ> ::= a<A>, <A> ::= a<A><B> | a, <B> ::= b
(ii) <σ> ::= a<B>, <B> ::= b<A> | b | a, <A> ::= a<B> | a
(iii) <σ> ::= a<A><B> | <A><B>, <A> ::= a, <B> ::= b
(iv) <σ> ::= a<B>, <B> ::= b<B> | b<A> | b, <A> ::= a
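The three rules translate directly into a small formatter; the sketch below (the function name and encoding are ours, with S standing for σ and uppercase letters as non-terminals) produces the Backus-Naur form of grammar (i):

```python
def to_bnf(prods):
    """Group productions by left-hand side and format each group as
    <A> ::= alt1 | alt2 | ..., wrapping non-terminals in < >."""
    grouped = {}
    for lhs, rhs in prods:
        grouped.setdefault(lhs, []).append(
            "".join(f"<{c}>" if c.isupper() else c for c in rhs))
    return [f"<{lhs}> ::= " + " | ".join(alts)
            for lhs, alts in grouped.items()]

# Grammar (i) above: sigma -> aA, A -> aAB, B -> b, A -> a.
for line in to_bnf([("S", "aA"), ("A", "aAB"), ("B", "b"), ("A", "a")]):
    print(line)
# <S> ::= a<A>
# <A> ::= a<A><B> | a
# <B> ::= b
```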
4.13.4. Regular Grammar and Finite state Automaton. For a given finite state automaton M, we can
always find a regular grammar G which accepts the same language as accepted by finite state automaton
M, that is, AC( M ) = L ( G )
In this regard, we have the following theorem:
4.13.5. Theorem. Let M be a finite-state automaton given as a transition diagram. Define a grammar G ,
G = (N, T, P, σ) as follows:
Let N be the set of states of M, with the initial state of M taken as the starting symbol σ of G. Let T be
the set of input symbols of M. Let P be the set of productions s → xs′, where there is an edge labeled x
from the state s to the state s′, together with s → λ for every accepting state s. Then G = G(N, T, P, σ) is
a regular grammar, and the set of strings accepted by M equals the language generated by G, i.e.,
AC(M) = L(G).
Proof. By the definition of the productions, every production has a non-terminal on the left side and, on
the right side, either a terminal followed by a non-terminal or the empty string λ. So the grammar G is a regular grammar.
To show that AC(M) = L(G), we first show that AC(M) ⊆ L(G).
Let α ∈ AC(M) be any string. If α = λ, i.e., the null string, then σ must be an accepting state. So G
contains the production σ → λ, and the derivation σ ⇒ λ implies that α ∈ L(G).

Now, if α is not a null string, then let


α = a1a2...an, where each ai ∈ T.

Since α is accepted by the automaton M, there is a path (σ, s1, s2, ..., sn) such that sn is an accepting state,
and we write
σ --a1--> s1 --a2--> s2 --a3--> s3 --> ... --> sn−1 --an--> sn
Finite State Machines, Languages and Grammars 119

It follows that G contains the productions


σ → a1s1, s1 → a2s2, ..., sn−1 → ansn and sn → λ

Now, the derivation


σ ⇒ a1s1 ⇒ a1a2s2 ⇒ a1a2a3s3 ⇒ ... ⇒ a1a2...ansn ⇒ a1a2...an

shows that α = a1a2...an ∈ L(G).


Hence, AC( M ) ⊆ L ( G ) ......(1)

Now, suppose α ∈ L(G) is any string. If α = λ, i.e., the null string, then α must have the derivation σ ⇒ λ,
which implies that the production σ → λ is in the grammar G. So σ must be an accepting state in M and
α ∈ AC(M).

Now, suppose α is not the null string and let


α = a1a2...an, ai ∈ T

So, α must have a derivation of the form


σ ⇒ a1s1 ⇒ a1a2s2 ⇒ ... ⇒ a1a2...an−1sn−1
⇒ a1a2...ansn ⇒ a1a2...an = α


This derivation shows that G must contain the productions
σ → a1s1, s1 → a2s2, ..., sn−1 → ansn, sn → λ

which shows that there are edges from σ to s1, from s1 to s2, ..., from sn−1 to sn, labeled with
a1, a2, ..., an respectively, in the finite state automaton M, and the production sn → λ shows that sn is an
accepting state.
σ --a1--> s1 --a2--> s2 --> ... --> sn−1 --an--> sn

Now, if in the transition diagram we start with the initial state σ and trace the path σ, s1, s2, ..., sn on
α = a1a2...an, we observe that the final state reached is sn, which is an accepting state. So α is accepted
by M, i.e., α ∈ AC(M)

⇒ L ( G ) ⊆ AC( M ) ......(2)

By (1) and (2), we get


L ( G ) = AC( M )

This completes the proof.
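The construction in this proof can be carried out mechanically. The sketch below uses a hypothetical encoding (not from the text): the transition diagram is a dict mapping (state, symbol) to the next state, states are named by strings with 'S' standing for σ, and λ is written as the empty string.

```python
def fsa_to_grammar(transitions, accepting):
    """Productions of Theorem 4.13.5: s -> x s' for every edge labeled x
    from state s to state s', plus s -> lambda (here '') for every
    accepting state s."""
    productions = [(s, x + t) for (s, x), t in transitions.items()]
    productions += [(s, "") for s in sorted(accepting)]
    return productions

# Automaton of Example 4.13.6 below, with S standing for σ:
# S --a--> A, S --b--> S, A --a--> S, A --b--> A; A is accepting.
trans = {("S", "a"): "A", ("S", "b"): "S",
         ("A", "a"): "S", ("A", "b"): "A"}
print(fsa_to_grammar(trans, {"A"}))
```

This yields exactly the productions σ → aA, σ → bσ, A → aσ, A → bA, A → λ listed in the example.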



4.13.6. Example. Consider the finite state automaton M given as

[Transition diagram: states σ (initial) and A (accepting); edges labeled a from σ to A and from A to σ; loops labeled b at σ and at A.]
Construct the regular grammar G for the automaton M and verify that AC(M) = L(G).
Solution. We know that the starting symbol is given by initial state, so σ is starting symbol.
N = { σ, A} , set of states.

T = {a, b} , set of input symbols.


and the productions are given by
σ → aA, σ → bσ, A → a σ, A → bA, A → λ

We know from Example 19 of the last unit that AC(M) is the set of all strings that contain an odd number of a's.
We now show that L(G) is also the set of all strings that contain an odd number of a's.
We use the production σ → bσ (n times) to get σ ⇒ bσ ⇒ ... ⇒ b^n σ, n ≥ 0

Now, to eliminate σ, we must use the production σ → aA (which can be used only once at this stage) to
get the string b^n aA, n ≥ 0 .....(1)

Now, if we use A → bA (m times) and finally use A → λ, we get the string b^n a b^m, n ≥ 0, m ≥ 0. This is a
string containing only one 'a'.
Up to this stage, we have only one 'a'. But if at stage (1) we use A → aσ instead of the 4th and 5th
productions, then the string takes the form
b^n aaσ, n ≥ 0

Now, if we use the production σ → bσ (r times), the number of a's remains two, and finally, to get rid of σ,
we have to use the production σ → aA to get the string
b^n aab^r aA, n ≥ 0, r ≥ 0

This string has three a’s.


We are again at the same stage. If we use the 4th and 5th productions, we get a string having three a's.
If we use A → aσ, then repeating the above process gives strings having five a's, and so on. So we
conclude that the strings generated contain an odd number of a's.
Hence L(G) is the set of all strings containing an odd number of a's.
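Both halves of this claim can be checked mechanically on short strings: simulate M over every string of a's and b's up to a given length, and separately enumerate the terminal strings G derives up to that length. The state and production encoding below is an assumption for illustration, with 'S' standing for σ.

```python
from itertools import product

trans = {("S", "a"): "A", ("S", "b"): "S",
         ("A", "a"): "S", ("A", "b"): "A"}
prods = [("S", "aA"), ("S", "bS"), ("A", "aS"), ("A", "bA"), ("A", "")]

def accepted(w):
    """Run the automaton M on w; A is the only accepting state."""
    state = "S"
    for ch in w:
        state = trans[(state, ch)]
    return state == "A"

def derived(max_len):
    """Terminal strings of L(G) of length <= max_len, by breadth-first
    expansion; every sentential form has at most one non-terminal."""
    found, frontier = set(), {"S"}
    while frontier:
        nxt = set()
        for form in frontier:
            nt = next((c for c in form if c.isupper()), None)
            if nt is None:
                found.add(form)
            elif len(form) - 1 <= max_len:      # prune overlong forms
                for lhs, rhs in prods:
                    if lhs == nt:
                        nxt.add(form.replace(nt, rhs, 1))
        frontier = nxt
    return {w for w in found if len(w) <= max_len}

odd_a = {w for n in range(6) for w in map("".join, product("ab", repeat=n))
         if w.count("a") % 2 == 1}
assert all(accepted(w) == (w in odd_a)
           for n in range(6) for w in map("".join, product("ab", repeat=n)))
assert derived(5) == odd_a
print("AC(M) and L(G) agree on all strings of length <= 5")
```

Such an exhaustive check is not a proof, of course, but it confirms the hand argument above on every string of length at most 5.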
4.13.7. Theorem. Let G = G(N, T, P, σ) be a regular grammar. Define a non-deterministic finite state
automaton M as follows:
Let I = T, S = N ∪ {F}, where F ∉ N ∪ T, σ is the initial state,
A = {F} ∪ {s : s → λ ∈ P}, and f is defined by
f(s, x) = {s′ : s → xs′ ∈ P} ∪ {F : s → x ∈ P}

Then the non-deterministic finite state automaton M = M(I, S, A, σ, f) accepts exactly the language L(G).

Proof. The proof is similar to that of the preceding theorem for finite state automata.


Note. In a finite state automaton, a string w = a1a2...an is accepted if there is a path (σ, s1, s2, ..., sn) such
that sn (the final state reached) is an accepting state. However, in the case of a non-deterministic finite
state automaton, there may be more than one path, and the string w = a1a2...an is accepted if there exists
at least one path (σ, s1, s2, ..., sn) such that sn is an accepting state. That is why the same proof works
for both of the above theorems.
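The "at least one path" condition is easy to check mechanically: instead of following single paths, track the set of all states reachable after each input symbol. A minimal sketch, assuming a hypothetical encoding in which the transition function maps (state, symbol) to a set of states:

```python
def nfa_accepts(start, accepting, delta, w):
    """w is accepted iff at least one path on w ends in an accepting
    state, i.e. iff the set of reachable states meets the accepting set."""
    current = {start}
    for ch in w:
        # union of all possible next states; missing entries mean no move
        current = set().union(*(delta.get((s, ch), set()) for s in current))
    return bool(current & accepting)

# Tiny NDFSA: on 'a' from q0 we may stay in q0 or move to accepting q1.
delta = {("q0", "a"): {"q0", "q1"}, ("q0", "b"): {"q0"}}
print(nfa_accepts("q0", {"q1"}, delta, "ba"))   # True: one path ends in q1
print(nfa_accepts("q0", {"q1"}, delta, "ab"))   # False: 'b' has no move from q1
```

Tracking reachable-state sets is exactly what makes the non-deterministic acceptance condition effective to test.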
4.13.8. Kleene’s theorem. A language L is regular if and only if there exists a finite-state automaton M
such that AC(M) = L.
Proof. Let us suppose that language L is regular. Then there exists a regular grammar G such that
L = L(G) ......(1)

But by the above theorem, this regular grammar can be converted into an NDFSA which accepts L(G).
Further, we know that an NDFSA can be converted into an FSA which accepts the same strings as the
NDFSA. Hence, we can get a finite state automaton M such that
L ( G ) = AC( M ) ......(2)

By (1) and (2), we have AC ( M ) = L

Thus, we have constructed a finite state automaton M which accepts L.


Conversely, let there exist a finite state automaton M which accepts the language L, i.e., AC(M) = L.
Then, by the first of the above two theorems, there exists a regular grammar G such that
AC( M ) = L ( G )

Hence, L = L ( G ) where G is a regular grammar. So, the language L is regular.
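The step "an NDFSA can be converted into an FSA" used in this proof is the standard subset construction: each deterministic state is the set of non-deterministic states that could currently be occupied. A sketch, again assuming a dict encoding in which delta maps (state, symbol) to a set of states (hypothetical names, not from the text):

```python
def subset_construction(start, accepting, delta, alphabet):
    """Determinize an NDFSA: deterministic states are frozensets of
    NDFSA states; a set-state is accepting iff it contains an accepting
    NDFSA state, so both machines accept the same strings."""
    start_set = frozenset({start})
    states, todo, dtrans = {start_set}, [start_set], {}
    while todo:
        S = todo.pop()
        for ch in alphabet:
            # all NDFSA states reachable from some state of S on ch
            T = frozenset().union(*(delta.get((s, ch), set()) for s in S))
            dtrans[(S, ch)] = T
            if T not in states:
                states.add(T)
                todo.append(T)
    daccept = {S for S in states if S & accepting}
    return states, dtrans, daccept

# Determinizing a two-state NDFSA (q1 accepting):
delta = {("q0", "a"): {"q0", "q1"}, ("q0", "b"): {"q0"}}
states, dtrans, daccept = subset_construction("q0", {"q1"}, delta, "ab")
print(len(states), len(daccept))
```

Here the deterministic machine has two states, {q0} and {q0, q1}, with only the latter accepting.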

Books Recommended:
1. Kenneth H. Rosen, Discrete Mathematics and Its Applications, Tata McGraw-Hill, Fourth Edition.
2. Seymour Lipschutz and Marc Lipson, Theory and Problems of Discrete Mathematics, Schaum
Outline Series, McGraw-Hill Book Co, New York.
3. John A. Dossey, Otto, Spence and Vanden K. Eynden, Discrete Mathematics, Pearson, Fifth
Edition.

4. J.P. Tremblay and R. Manohar, Discrete Mathematical Structures with Applications to Computer
Science, Tata McGraw-Hill Education Pvt. Ltd.
5. J.E. Hopcroft and J.D. Ullman, Introduction to Automata Theory, Languages and Computation,
Narosa Publishing House.
6. M. K. Das, Discrete Mathematical Structures for Computer Scientists and Engineers, Narosa
Publishing House.
7. C. L. Liu and D.P.Mohapatra, Elements of Discrete Mathematics- A Computer Oriented Approach,
Tata McGraw-Hill, Fourth Edition.
