Minimizing Boolean Sum of Products Functions

David Eigen
Table of Contents

Boolean Algebra
Karnaugh Maps
Quine-McCluskey Algorithms
    First Algorithm
    Second Algorithm
Simplify
    Cofactor
    Unate Functions
    Unate Simplify
    Merge
    Binate Select
    Simplify
Espresso-II
Appendix A: First Algorithm
Appendix A: Second Algorithm
Glossary
Bibliography
Digital design is the design of hardware for computers. At the lowest level,
computers are made up of switches and wires. Switches normally have two states and can
only be in one state at a time. They may be manually controlled or may be controlled by
the outputs of other parts of the computer. Wires also have two states, each corresponding
to the level of voltage in the wire. Although wires may conduct any level of voltage, in
digital design we restrict the number of states to two: low voltage and high voltage.
In combinational design, one is given a truth table and must realize it into
hardware. Depending on what the application of the hardware is, it may be optimal to
have the hardware return the answer as quickly as possible (minimizing delay) or to use
as little hardware as possible (minimizing cost). There are many techniques for doing
this. Among them are Karnaugh Maps, the Quine-McCluskey algorithms, Simplify, and
Espresso. In this paper, I will explore a few of these techniques, and compare and
contrast them.
Boolean Algebra

All of the techniques are derived from boolean algebra. Boolean algebra is a way
of manipulating boolean variables. A boolean variable has exactly two states, just as the
switches and wires at the lowest level of a computer have two states. Although the states
of wires in a computer are always called HI and LO, the states of a boolean variable may
be called true and false, on and off, or anything else. Usually, we use the states “1” and
“0.” A boolean function takes boolean variables as inputs and returns a boolean output.
Primitive functions take one or two inputs. A two-input truth
table has four lines: one line for each combination of 1s and 0s that can be assigned to the
two inputs. There are four because for each 1 or 0 for the first input, the second input can
be either 1 or 0. Therefore, there are 2*2 = 4 combinations. For an N-input truth table, the
first input can be either 1 or 0; the second input can be either 1 or 0 for each 1 or 0 in the
first input; the third input can be either 1 or 0 for any of the 2*2 different combinations of
the first two inputs, doubling the number of combinations to 2*2*2; the fourth input can
be either 1 or 0 for each of the different combinations of the first three inputs, doubling
the number of combinations to 2*2*2*2, and so on. By the time we get to the Nth input,
the number of combinations has been doubled N times, so an N-input truth table has 2^N lines.
There are sixteen different 2-input functions. This is because a 2-input truth table
has 4 lines. The first line can correspond to an output of 1 or 0. For each of these
possibilities, the second line can either be a 1 or a 0, making 2*2 combinations. The third
line can correspond to an output of either 1 or 0 for each of the 2*2 combinations of the
first two lines, making 2*2*2 possibilities. The fourth line can correspond to an output of
either 1 or 0 for each of the 2*2*2 combinations of the first three lines, making
2*2*2*2 = 16 possibilities. Extending this to an L-line truth table, there are 2^L different
possibilities. And since an N-input truth table has 2^N lines, L = 2^N. Therefore,
an N-input truth table can have 2^(2^N) different functions.
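To make the count concrete, here is a small Java illustration (my own, not one of the
paper's programs) that prints the number of truth-table lines and functions for a few
values of N:

    public class CountFunctions {
        public static void main(String[] args) {
            for (int n = 1; n <= 4; n++) {
                long lines = 1L << n;          // 2^N truth-table lines
                long functions = 1L << lines;  // 2^(2^N) different functions
                System.out.println(n + " inputs: " + lines + " lines, "
                        + functions + " functions");
            }
        }
    }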
Of the sixteen functions for a 2-input truth table, only seven are commonly used. I
describe these functions below.
The invert (NOT) function takes one input, and its output is the opposite state of
the input. Thus, NOT 1 = 0, and NOT 0 = 1. The truth table of the invert function is the
following:
A NOT A
0 1
1 0
The AND function’s output is 1 when both of its two inputs are 1. Thus,
A AND B is 1 only when A and B are both 1. The truth table of the AND function is
the following:
A B A AND B
0 0 0
0 1 0
1 0 0
1 1 1
Since anything times 0 equals 0, and 1 times 1 equals 1, A AND B looks like a
multiplication table for A and B. Thus, A AND B is denoted as the product of A and B,
A⋅B, or AB.
The OR function’s output is 1 when at least one of its two inputs is 1. Thus, A OR B is 1
when A is 1, when B is 1, or when both are 1. The truth table of the OR function is the
following:
A B A OR B
0 0 0
0 1 1
1 0 1
1 1 1
Since 1 + 0 = 1 and 0 + 0 = 0, the truth table for A OR B looks somewhat like an addition
table for A and B. Thus, A OR B is denoted as the sum of A and B, written A + B.
The exclusive OR (XOR) function is 1 whenever either, but not both, of its two
inputs is 1. Thus, A XOR B is 1 when only one of either A or B is 1. The truth table of
the XOR function is the following:
A B A XOR B
0 0 0
0 1 1
1 0 1
1 1 0
The other three primitive functions, NAND, NOR, and XNOR, are the complements of
AND, OR, and XOR, respectively: each one’s output is the inverse of the corresponding
function’s output.
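As a quick sketch (my own illustration, not code from the paper), all seven primitive
functions can be written directly on Java booleans:

    public class Primitives {
        static boolean not (boolean a)            { return !a; }
        static boolean and (boolean a, boolean b) { return a && b; }
        static boolean or  (boolean a, boolean b) { return a || b; }
        static boolean xor (boolean a, boolean b) { return a ^ b; }
        static boolean nand(boolean a, boolean b) { return not(and(a, b)); }
        static boolean nor (boolean a, boolean b) { return not(or(a, b)); }
        static boolean xnor(boolean a, boolean b) { return not(xor(a, b)); }

        public static void main(String[] args) {
            // Print the AND truth table by trying every combination of inputs.
            for (boolean a : new boolean[] {false, true})
                for (boolean b : new boolean[] {false, true})
                    System.out.println((a ? 1 : 0) + " " + (b ? 1 : 0)
                            + " -> " + (and(a, b) ? 1 : 0));
        }
    }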
There are four properties for boolean algebra. Often, they are called axioms.
However, given the definitions of AND, OR, and NOT, they can be proved by making a
truth table for each of them (this technique is called perfect induction). They are (where
A′ denotes NOT A):

• Associativity: (A + B) + C = A + (B + C) and (A ⋅ B) ⋅ C = A ⋅ (B ⋅ C)
• Commutativity: A + B = B + A and A ⋅ B = B ⋅ A
• Distributivity: A + (B ⋅ C) = (A + B) ⋅ (A + C) and A ⋅ (B + C) = (A ⋅ B) + (A ⋅ C)
• Complements: A + A′ = 1 and A ⋅ A′ = 0

In addition, the following identities hold:

• A + 1 = 1 and A ⋅ 0 = 0
• A + 0 = A and A ⋅ 1 = A
• A + A = A and A ⋅ A = A
Note that each of the axioms and identities has two forms: each form can be created by
switching the ORs and ANDs, and the 1s and 0s, of the other form. Each form is called
the other’s “dual.”
These identities can also be proved by perfect induction. For example, the truth table
proving the Complements axiom, A + A′ = 1 and A ⋅ A′ = 0, is:

A   A′   A + A′   A ⋅ A′
0   1      1        0
1   0      1        0
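Perfect induction is mechanical enough to automate. The following sketch (my
illustration; the paper does not include this program) verifies the distributive law by
checking every row of the 3-input truth table:

    public class PerfectInduction {
        public static void main(String[] args) {
            // Verify A + (B*C) = (A+B)*(A+C) for all 8 combinations of A, B, C.
            for (int row = 0; row < 8; row++) {
                boolean a = (row & 4) != 0, b = (row & 2) != 0, c = (row & 1) != 0;
                boolean left  = a || (b && c);
                boolean right = (a || b) && (a || c);
                if (left != right)
                    throw new AssertionError("counterexample at row " + row);
            }
            System.out.println("A + (B*C) = (A+B)*(A+C) holds on all 8 rows");
        }
    }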
With these axioms and identities, we can prove the following theorems:

• Absorption: X + XY = X and X(X + Y) = X
• Adsorption: X + X′Y = X + Y and X(X′ + Y) = XY
• Adjacency: XY + XY′ = X and (X + Y)(X + Y′) = X

Proof of Absorption:
By distributivity, X + XY = X(1 + Y). 1 + Y = 1. Therefore, X(1 + Y) = X ⋅ 1 = X. The
dual form, X(X + Y) = X, follows by taking the dual of each step.

Proof of Adsorption:
By distributivity, X + X′Y = (X + X′)(X + Y). Because X + X′ = 1,
(X + X′)(X + Y) = 1 ⋅ (X + Y) = X + Y. For the dual form, X(X′ + Y) = XX′ + XY = 0 + XY,
and since 0 + A = A, 0 + XY = XY. Therefore, by transitivity, X(X′ + Y) = XY. [i]

Proof of Adjacency:
By distributivity, XY + XY′ = X(Y + Y′) = X ⋅ 1 = X. The dual form follows from the dual
laws.
To form the dual of a function, all ANDs switch to ORs, all ORs switch to ANDs, and all
values become complemented: 1s to 0s and 0s to 1s. Of course, we can just rename all
variables with their complements and have no loss of generality, since each form of the
function (it and its dual) is independent. Similarly, if a theorem is true, then so is its dual.
When realizing a truth table, one must create a hardware circuit that takes in the
appropriate inputs and produces the correct outputs. It is important to have as small a
circuit as possible. Although any function can be reduced by manipulating it directly
using boolean algebra, that approach is not suitable for use with a function that has many inputs.
However, there are other techniques that may not always arrive at the most optimal
solution, but are easier to use when dealing with large truth tables.
A sum of products (SOP) function consists of many inputs being ANDed together, and
then the ANDed terms being ORed together. For example, an SOP function may look
like AB + CD + DEF. It consists of three terms (AB, CD, and DEF) all being ORed
together. Each instance of an input is called a literal; in this expression, there are seven
literals. A term is an ANDed group of literals.
Any truth table can be realized with an SOP solution. For example, take the following
truth table:
A B C OUT
0 0 0 1
0 0 1 1
0 1 0 0
0 1 1 0
1 0 0 1
1 0 1 0
1 1 0 1
1 1 1 0
We want OUT to be 1 only when a “1” is listed under OUT in the truth table. One of these
cases is the first line, where A = 0, B = 0, and C = 0; a term that is 1 exactly when this is
the case is A′ ⋅ B′ ⋅ C′. Another case is the second line, where A = 0, B = 0, and C = 1; a
term that is 1 exactly when this is 1 is A′ ⋅ B′ ⋅ C. So an expression that is 1 when either
(or both) of these cases is 1 is:

A′ ⋅ B′ ⋅ C′ + A′ ⋅ B′ ⋅ C
Similarly, we can generate terms for the other two times OUT is 1, ultimately creating a
complete SOP cover:

OUT = A′ ⋅ B′ ⋅ C′ + A′ ⋅ B′ ⋅ C + A ⋅ B′ ⋅ C′ + A ⋅ B ⋅ C′
This can be generalized into an algorithm: First, look for lines in the truth table
where OUT is 1. For each of these rows, AND together one literal for each input: an
uninverted, normal literal if the input is 1, and an inverted literal if the input is 0. Then,
OR together all of these terms. An SOP solution like this is not at all reduced, and is
called a canonical SOP solution. Each of the ORed terms in a canonical solution is called
a minterm.
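This algorithm translates directly into code. The following Java sketch (my illustration;
the array-of-outputs input format is an assumption) builds the canonical SOP expression
from a truth table, writing A' for NOT A:

    public class CanonicalSop {
        static String canonicalSop(boolean[] out, int numInputs) {
            StringBuilder sop = new StringBuilder();
            for (int row = 0; row < out.length; row++) {
                if (!out[row]) continue;                  // only rows where OUT is 1
                if (sop.length() > 0) sop.append(" + ");
                for (int i = 0; i < numInputs; i++) {
                    int bit = (row >> (numInputs - 1 - i)) & 1;
                    sop.append((char) ('A' + i));
                    if (bit == 0) sop.append('\'');       // invert literals for 0 inputs
                }
            }
            return sop.toString();
        }

        public static void main(String[] args) {
            // The truth table from the text: OUT is 1 on rows 000, 001, 100, 110.
            boolean[] out = {true, true, false, false, true, false, true, false};
            System.out.println(canonicalSop(out, 3));     // A'B'C' + A'B'C + AB'C' + ABC'
        }
    }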
Karnaugh Maps

A Karnaugh map (K-map) is a pictorial representation of an SOP expression. It is a grid
with one box for each minterm. The boxes are arranged such that each box differs from
an adjacent box by exactly one literal. For example, the K-map for the previous truth
table is:
A\BC   00   01   11   10
  0     1    1    0    0
  1     1    0    0    1
In this K-map, B and C label the columns and A labels the rows, as marked with
“A\BC” in the top left-hand corner. For the column labels, B is the literal on the left, and
C is the literal on the right. Notice that the columns are labeled 00, 01, 11, 10. In this
sequence, each number is different from its two surrounding numbers by exactly one bit.
Also, the numbers at the beginning and end of the sequence “wrap around:” they also
differ by exactly one bit. Each entry has exactly 3 adjacent entries: one for each input
variable.
Because of this, the minterms associated with adjacent boxes that contain 1s
(horizontally and vertically only; not diagonally) differ by one literal. Thus, we can apply
the adjacency theorem to two adjacent boxes and condense them to one smaller term. For
example, both the 000 and 001 boxes contain 1s. Therefore, by adjacency, we can say:

A′B′C′ + A′B′C = A′B′

Notice that A′B′ corresponds to 00 for A and B, which are the only two literals that the boxes have in
common in the K-map. We group together the boxes in the K-map to denote that we are
applying adjacency to them. The map with all the groups circled is:
A\BC   00   01   11   10
  0     1    1    0    0
  1     1    0    0    1

[The three circled groups are the box pairs {000, 001}, {000, 100}, and {100, 110}.]
Each of these groups represents a term in the SOP cover. To find the term, we can
just look to see what literals all the boxes in each group have in common. The product of
these literals is a term in the SOP cover. Thus, the SOP cover represented here is:

A′B′ + AC′ + B′C′
However, the B′C′ term is redundant. Graphically, that group overlaps the other two
groups, and groups two boxes that are already circled. In boolean algebra, the theorem
that states that this redundant term can be eliminated is the Law of Consensus:
A′B′ + AC′ + B′C′ = A′B′ + AC′
Shannon’s Expansion Theorem states that any boolean function f can be split about one
of its variables:

f(x1, x2, …, xn) = x1 ⋅ f(1, x2, …, xn) + x1′ ⋅ f(0, x2, …, xn)

The proof of Shannon’s Expansion Theorem is broken up into two cases: one case for
each value of x1.

Case 1: x1 = 1. The right side becomes 1 ⋅ f(1, x2, …, xn) + 0 ⋅ f(0, x2, …, xn) = f(1, x2, …, xn),
which is exactly the left side when x1 = 1.

Case 2: x1 = 0. The right side becomes 0 ⋅ f(1, x2, …, xn) + 1 ⋅ f(0, x2, …, xn) = f(0, x2, …, xn),
which is exactly the left side when x1 = 0.

Now, with Shannon’s Expansion Theorem, the Law of Consensus can be proved.
Expanding A′B′ + AC′ + B′C′ about A gives A ⋅ (C′ + B′C′) + A′ ⋅ (B′ + B′C′). By absorption,
C′ + B′C′ = C′ and B′ + B′C′ = B′, so the expression equals AC′ + A′B′. Expanding
A′B′ + AC′ about A gives exactly the same cofactors. Therefore,

A′B′ + AC′ + B′C′ = A′B′ + AC′.
Using this on the K-map example, the cover from the three circles can be
simplified:
A′B′ + AC′ + B′C′ = A′B′ + AC′

This new cover has only two terms and four literals, much less than the original function
with four terms and twelve literals. Thus, there is a 50% reduction in the number of terms
and a 66 2/3 % reduction in the number of literals. The new cover is graphically represented
by two circles, namely the ones that are not completely overlapped:
A\BC   00   01   11   10
  0     1    1    0    0
  1     1    0    0    1

[The two remaining circles are {000, 001} and {100, 110}.]
In general, whenever every box in a circle is also contained in some other circle (each box overlaps
another circle), the Law of Consensus can be applied. Thus, these circles do not even
need to be drawn.
Adjacency can be applied to grouping not only single boxes, but also entire
groups of boxes whose height and width are both powers of two. For example, take the
following K-map:
AB\CD   00   01   11   10
  00     1    1    0    0
  01     1    1    0    0
  11     0    1    1    1
  10     1    1    1    1
Circling every group of two boxes (and omitting those that can be eliminated by the Law
of Consensus), we get:
AB\CD   00   01   11   10
  00     1    1    0    0
  01     1    1    0    0
  11     0    1    1    1
  10     1    1    1    1
However, Adjacency can be applied to many of these terms (such as A′B′C′ + A′BC′ = A′C′).
Applying adjacency to all possible terms and repeating, the cover becomes:

A′C′ + AD + AC + AB′
AB\CD   00   01   11   10
  00     1    1    0    0
  01     1    1    0    0
  11     0    1    1    1
  10     1    1    1    1

[The four circled groups are the 2×2 block A′C′ (top left), the bottom row AB′, and the
2×2 blocks AD and AC.]
This cover of the function has only four terms and eight literals, reduced from eleven
terms and forty-four literals. This is approximately a 64% reduction in terms and an 82%
reduction in literals.
To summarize, the K-map technique is:

• Circle areas of the K-map where each box in the area contains a 1. Each area’s height and
width (measured in number of boxes) must be a power of 2. Make each area as large as
possible, and do not circle areas whose boxes are all already covered; prefer
areas that are not yet covered (since larger areas produce smaller terms).

• Each circle represents a term in the cover. Write down each term by ANDing the literals
that are the same in all boxes contained by the circle. Do not complement literals that
appear as a 1 in the column or row label, and complement those that appear as a 0. Then, OR
together all of the terms.
K-maps are an intuitive and easy way to reduce covers of functions with four or
fewer inputs, but they do not work well for functions with over four inputs. It is possible to
use a K-map with more than two dimensions to represent these functions, but such maps are
hard to deal with in three dimensions, and almost impossible to use in more than three.
For a function with more than four inputs, other methods must be used. One such method
is the pair of Quine-McCluskey (Q-M) algorithms.
Quine-McCluskey Algorithms

The Q-M algorithms are two algorithms that are used to minimize a cover of a
boolean function. The first algorithm generates a cover of prime implicants (implicants
(terms) that are fully reduced by adjacency). The second algorithm takes the prime
implicants from the first algorithm and eliminates those that are not needed.
In the first Q-M algorithm, the minterms of the function are listed by using a 1
when a literal is not complemented and a 0 when a literal is complemented. For example,
take the following 5-input function:
A B C D E OUT
0 0 0 0 0 0
0 0 0 0 1 0
0 0 0 1 0 0
0 0 0 1 1 0
0 0 1 0 0 1
0 0 1 0 1 1
0 0 1 1 0 1
0 0 1 1 1 1
0 1 0 0 0 0
0 1 0 0 1 0
0 1 0 1 0 0
0 1 0 1 1 0
0 1 1 0 0 1
0 1 1 0 1 0
0 1 1 1 0 0
0 1 1 1 1 0
1 0 0 0 0 0
1 0 0 0 1 0
1 0 0 1 0 0
1 0 0 1 1 0
1 0 1 0 0 0
1 0 1 0 1 0
1 0 1 1 0 1
1 0 1 1 1 0
1 1 0 0 0 0
1 1 0 0 1 0
1 1 0 1 0 0
1 1 0 1 1 0
1 1 1 0 0 1
1 1 1 0 1 0
1 1 1 1 0 1
1 1 1 1 1 0
The first step of the first Q-M algorithm yields the following list:
ABCDE
00100
00101
00110
00111
01100
10110
11100
11110
In the next part of the Q-M algorithm, the theorem of Adjacency is applied. To
apply it, look at the implicants and find implicants that differ by exactly one literal.
Combine the two implicants by putting an X (for don’t-care) in place of the literal they
differ by, and write the new implicant in the next column. Throw out any duplicate
implicants that may have been generated. Draw a box around any implicants that could
not be combined with other implicants. Repeat this process with the next column until no
more combinations are possible. The boxed implicants are the ones that could not be
combined: they are the prime implicants. In this example, the first algorithm produces:

Level 1     Level 2           Level 3
00100       0010X             001XX  (prime)
00101       001X0
00110       001X1
00111       0011X
01100       0X100  (prime)
10110       X0110  (prime)
11100       X1100  (prime)
11110       1X110  (prime)
            111X0  (prime)

Therefore, the cover of the function using these prime implicants is:

OUT = A′CD′E′ + B′CDE′ + BCD′E′ + ACDE′ + ABCE′ + A′B′C
The second Q-M algorithm can further minimize a cover generated by the first
algorithm. The first part of the second algorithm is to make a table whose rows are
labeled by the prime implicants and whose columns are labeled by the minterms of the
function. When labeling the columns with the minterms, it is usually easier to write down
the base 10 number represented by the binary (base 2) number that the minterm
represents. In this case, E would be the value in the 1’s place, and A would be the value
in the 16’s place. Then, put a check mark in each box where the minterm of that column
is covered by the implicant of that row. The table for this function is:
         4    5    6    7    12   22   28   30
0X100    √                   √
X0110              √              √
X1100                        √         √
1X110                             √         √
111X0                                  √    √
001XX    √    √    √    √
The next part of the algorithm is to find all columns that contain exactly one
check (all columns contain at least one check). Draw a line across the row that this check
is in, and circle the minterms of all the checks that this line intersects to signify that that
minterm has been covered. Do not do this process for any circled columns. After doing
this for the example, the table becomes:
         (4)  (5)  (6)  (7)  12   22   28   30
0X100    √                   √
X0110              √              √
X1100                        √         √
1X110                             √         √
111X0                                  √    √
001XX    √    √    √    √    (line drawn through this row)

Columns 5 and 7 each contain exactly one check, so the 001XX row is taken, and
minterms 4, 5, 6, and 7 (shown circled above) are now covered.
At this point, 12, 22, 28, and 30 are uncovered. Since they each have more than
one check in their column, we must choose which checks to take first. Our choice will
affect the number of implicants in our cover. There is always a way to find the minimum
cover with the second algorithm. [ii] However, there are many intricacies involved in
finding an algorithm that always makes the correct choices.
finding an algorithm to always find the correct choices. For example, take the following
diagram with the dots representing minterms—which may be spread out in a random,
implicants:
1 2 3 4 5
The essential prime implicants are the dark circles. The lighter circles are nonessential
and can be thrown away. A perfect Q-M algorithm would pick the dark circles and not
the light circles. Also, this picture is only in two dimensions. In a normal situation, these
minterms may be spread out over many more dimensions, increasing the complexity of
the problem.
I was not able to find the perfect second Q-M algorithm that always finds the
minimum cover.* My Q-M algorithm may choose circle 1 and then, because the minterms
in this diagram may not be in the same order as they are in the algorithm’s list of minterms,
choose circle 4 instead of circle 3. The best second Q-M algorithm I found is the following:

* MacEspresso 1.0, a port of the Espresso algorithm by Mikhail Fridberg, was able to find a more minimal
cover when running with the “-Dexact” parameter.
1. Pick the column with the least number of checks in it. If there is a tie, pick the
first one.
2. In this column, pick the check whose row will get us the greatest number of
uncovered minterms.
Applying these two rules covers the remaining minterms:

         (4)  (5)  (6)  (7)  (12) (22) (28) (30)
0X100    √                   √
X0110              √              √
X1100                        √         √         (taken: covers 12 and 28)
1X110                             √         √    (taken: covers 22 and 30)
111X0                                  √    √
001XX    √    √    √    √                        (taken earlier)
Now, all the minterms are circled, so they are all covered. The algorithm stops, and the
resulting cover is:

OUT = A′B′C + BCD′E′ + ACDE′

This is a 62.5% reduction in the number of terms and a 72.5% reduction in the number of
literals.
This process may be tedious to go through by hand, especially if there are many
inputs. However, because of the algorithmic, recursive nature of these algorithms, they
are fit to be programmed into a computer. I programmed them in Java (see Appendix A
for source code). I implemented the first Q-M algorithm using a 3-dimensional
vector/array combination. In this case, a “vector” is a computer vector, not a math vector:
it is an array without a preset, limited size. The first vector holds the different “levels”
(iterations) of the first Q-M algorithm. Each element in the first vector is a vector of
arrays. These arrays in the third and last level of the structure are the terms. Their size is
the number of inputs, and they hold the literals. For the literals, I store a 0 for 0, a 1 for 1,
and a 2 for don’t-care (“X”). The following picture is a graphical representation of this
data structure:
[Diagram: a vector of “levels” of the algorithm; each level is a vector of terms; each
term is an array of literals, stored as 0, 1, or 2.]
Note that no 2s are in any of the implicants in the first level, as no terms have yet
been combined by adjacency. Each implicant in the second level contains exactly one 2,
since exactly one iteration of the algorithm has been done, yielding one don’t-care per
new implicant. Each implicant in the third level contains two 2s, and so on. The first level
(the canonical cover) is read in from a file. Each level after the first is created by
combining terms of the level before it.
The implementation of the first Q-M algorithm consists of four nested for loops.
The first (outer) loop goes through each level. The second loop goes through each term.
The third loop goes through all the terms that have not already been compared with that
term (all the ones below it), and the fourth loop compares each literal of the two terms to
see if there is exactly one difference in literals. If there is exactly one difference, the two
terms are combined by adjacency: a new term with a 2 in the different literal’s place and
all other literals the same as those in the two compared terms is put into the next level.
Also, the two compared terms’ indices are put into a set of non-primes. After each term
has been compared, the set of non-primes is used to find the terms that are prime. The
prime terms are added to the set of primes, which is the final result of the algorithm.
The loop structure is sketched below (NUMINPUTS is the number of inputs); the
algorithm ends by returning the set primes.
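The sketch below is my Java reconstruction of that structure, not the exact source
(which is in Appendix A); the names levels, primes, and NUMINPUTS follow the text:

    import java.util.*;

    public class QuineMcCluskey1 {
        static final int NUMINPUTS = 5;

        // Terms are int arrays of length NUMINPUTS holding 0, 1, or 2 (don't-care).
        static List<int[]> firstAlgorithm(List<int[]> minterms) {
            List<List<int[]>> levels = new ArrayList<>();
            levels.add(minterms);
            List<int[]> primes = new ArrayList<>();
            for (int lev = 0; lev < levels.size(); lev++) {            // level loop
                List<int[]> level = levels.get(lev);
                List<int[]> next = new ArrayList<>();
                Set<Integer> nonPrimes = new HashSet<>();
                for (int term = 0; term < level.size(); term++) {      // term loop
                    for (int comp = term + 1; comp < level.size(); comp++) { // compterm loop
                        int diffAt = -1, diffs = 0;
                        for (int i = 0; i < NUMINPUTS; i++)            // literal loop
                            if (level.get(term)[i] != level.get(comp)[i]) { diffAt = i; diffs++; }
                        // Combine by adjacency only if exactly one literal differs
                        // and neither term has a don't-care in that position.
                        if (diffs == 1 && level.get(term)[diffAt] != 2
                                       && level.get(comp)[diffAt] != 2) {
                            int[] merged = level.get(term).clone();
                            merged[diffAt] = 2;
                            if (!contains(next, merged)) next.add(merged); // drop duplicates
                            nonPrimes.add(term);
                            nonPrimes.add(comp);
                        }
                    }
                }
                for (int term = 0; term < level.size(); term++)        // "boxed" implicants
                    if (!nonPrimes.contains(term)) primes.add(level.get(term));
                if (!next.isEmpty()) levels.add(next);
            }
            return primes;
        }

        static boolean contains(List<int[]> terms, int[] term) {
            for (int[] t : terms) if (Arrays.equals(t, term)) return true;
            return false;
        }
    }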
Because of its four nested loops, the first Q-M algorithm takes a long time to
execute. For just one iteration of the outer loop (for only one level), there are
approximately L iterations of the term loop, and each of those has L – term iterations of
the compterm loop, where L is the size of the level. Thus, for each level, the total number
of times the literal loop is called is approximately:
L + (L − 1) + (L − 2) + … + (L − L) = Σ_{k=0}^{L} (L − k)

Simplifying this,

Σ_{k=0}^{L} (L − k) = Σ_{k=1}^{L+1} (L − k + 1) = Σ_{k=1}^{L+1} L − Σ_{k=1}^{L+1} k + Σ_{k=1}^{L+1} 1

= L^2 + L − (L + 1)(L + 2)/2 + L + 1 = L^2 + 2L + 1 − (L^2 + 3L + 2)/2

= (2L^2 + 4L + 2 − L^2 − 3L − 2)/2 = (L^2 + L)/2
Then, for each of these iterations, there is an iteration of the literal loop. The literal loop
always has NUMINPUTS iterations. So, if N = NUMINPUTS, each level takes about
O(((L^2 + L)/2) ⋅ N) = O(L^2 N) time. This is a very long time considering that the number of
inputs (N) and the number of terms in a level (L) can both commonly be in the hundreds.
To implement the second Q-M algorithm, I made a 2-dimensional array for the
table of checks and a 1-dimensional array to store which minterms are “circled.” I then
wrote several different versions of the algorithm, and put the best one in my final version.
• Pick the column with the least number of checks in it. If there is a tie, pick the first one.
• In this column, pick the check whose row will get us the greatest number of uncovered
minterms.
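A Java sketch of this selection heuristic (my reconstruction; the names checkTable,
minChecksCol, and maxChecksRow follow the text), where checkTable[i][m] is true when
prime implicant i covers minterm m:

    // Assumes at least one minterm is still uncovered when called.
    static int pickImplicant(boolean[][] checkTable, boolean[] covered) {
        int numImplicants = checkTable.length, numMinterms = checkTable[0].length;
        // 1. Pick the uncovered column with the fewest checks (first one on a tie).
        int minChecksCol = -1, minChecks = Integer.MAX_VALUE;
        for (int m = 0; m < numMinterms; m++) {
            if (covered[m]) continue;
            int checks = 0;
            for (int i = 0; i < numImplicants; i++)
                if (checkTable[i][m]) checks++;
            if (checks < minChecks) { minChecks = checks; minChecksCol = m; }
        }
        // 2. In that column, pick the row covering the most uncovered minterms.
        int maxChecksRow = -1, maxNew = -1;
        for (int i = 0; i < numImplicants; i++) {
            if (!checkTable[i][minChecksCol]) continue;
            int covers = 0;
            for (int m = 0; m < numMinterms; m++)
                if (checkTable[i][m] && !covered[m]) covers++;
            if (covers > maxNew) { maxNew = covers; maxChecksRow = i; }
        }
        for (int m = 0; m < numMinterms; m++)      // "circle" the newly covered minterms
            if (checkTable[maxChecksRow][m]) covered[m] = true;
        return maxChecksRow;
    }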
In the worst case, the second Q-M algorithm can take longer to run than the first
algorithm, but normally it takes much less time. In the worst case, making checkTable takes
about O(I * M * N) time, where I is the number of prime implicants, M is the number of
minterms, and N is the number of inputs. In the best case, it takes about O(I * M) time.
Finding minChecksCol takes about O(M * I) time at first, but then, as more minterms are
covered, it decreases. Finding maxChecksRow takes about the same time. In the worst
case scenario, where the prime implicants are the minterms, making checkTable,
minChecksCol, and maxChecksRow all take the greatest amount of time. Worse still, the
amount of time to find minChecksCol and maxChecksRow in each iteration of the while
loop only decreases by 1 each time, since only 1 minterm gets covered by the chosen
implicant. This makes the total amount of time to find all minChecksCol’s values in all
iterations about Σ_{k=0}^{M} (M ⋅ I − k). Using the fact that I = M (because it’s the worst
case) and simplifying,

Σ_{k=0}^{M} (M ⋅ I − k) = Σ_{k=1}^{M+1} (M^2 − k + 1) = Σ_{k=1}^{M+1} M^2 − Σ_{k=1}^{M+1} k + Σ_{k=1}^{M+1} 1

= M^3 + M^2 − (M + 1)(M + 2)/2 + M + 1 = M^3 + (2M^2 − M^2 − 3M − 2 + 2M + 2)/2

= M^3 + (M^2 − M)/2
This is also about the amount of time it takes to find all maxChecksRow’s values.
Therefore, in the worst case, the second Q-M algorithm takes about
2(M^3 + (M^2 − M)/2) = O(M^3) time (in addition to the time needed to make
checkTable). However, in a normal scenario, there are much fewer implicants than there
are minterms. Also, there are not normally M + 1 iterations of the while loop, since most
of the implicants cover more than one minterm. This makes the second algorithm
normally run in just more than a tenth of the time taken by the first algorithm.
Simplify

The third technique, Simplify, is an algorithm that I also programmed in
Java (see Appendix B for source code). It does not reduce a given function as much as the
Q-M algorithms, but it runs in much less time. Simplify uses Shannon’s Expansion
Theorem to break up the given function into functions that can be easily reduced, and
then merges the results. Shannon’s Expansion Theorem (stated earlier) says:

f(x1, x2, …, xn) = x1 ⋅ f(1, x2, …, xn) + x1′ ⋅ f(0, x2, …, xn)
where f is a boolean function with n inputs. This can be efficiently turned into computer
code by representing each implicant as a row of 0s, 1s, and 2s. The nth place in the row is
the state of the nth literal of the implicant: 0 for complemented, 1 for uncomplemented,
and 2 for don’t-care (the variable does not appear in the implicant). A function is then a
matrix whose rows are the representations of the implicants that collectively cover the
function. For example:
f = | 1 1 2 0 |
    | 2 1 0 2 |
    | 1 1 1 1 |
Cofactor

The cofactor of f with respect to an implicant x, written f_x, is defined
such that for every column index i between 1 and the number of inputs, inclusive, and
every row index j:

(f_x)_i^j = ∅       if f_i^j ≠ 2, x_i ≠ 2, and f_i^j ≠ x_i   (Case 1)
(f_x)_i^j = 2       if x_i ≠ 2, otherwise                    (Case 2)
(f_x)_i^j = f_i^j   if x_i = 2                               (Case 3)

Here, f_i^j is the number in the ith column and jth row of f: it is the ith literal of the jth
implicant of f. Likewise, x_i is the number in the ith column (place) of x: it is the ith
literal of x. The cofactor operation forms a function f_x, which has the same number of
columns as f and at most as many rows. If Case 1 ever happens ((f_x)_i^j is ∅), then
(f_x)^j — the jth row of f_x — is deleted, and the (j+1)th row becomes the jth row, the
(j+2)th row the (j+1)th row, and so on. For example, the cofactor of the following
function f with respect to the following implicant x is the following:
cofactor of the following function f with respect to the following implicant x is the
1 1 2 0
1 2 2 0
f = 2 1 0 2 , x = [2 1 1 2] , f x =
1 2 2 1
1 1 1 1
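A Java sketch of the cofactor operation (my illustration, using the same 0/1/2
representation; not the Appendix B source):

    import java.util.ArrayList;
    import java.util.List;

    class CofactorSketch {
        static List<int[]> cofactor(List<int[]> f, int[] x) {
            List<int[]> fx = new ArrayList<>();
            for (int[] implicant : f) {
                int[] row = new int[x.length];
                boolean deleted = false;
                for (int i = 0; i < x.length; i++) {
                    if (x[i] == 2) {
                        row[i] = implicant[i];                    // Case 3: copy the literal
                    } else if (implicant[i] == 2 || implicant[i] == x[i]) {
                        row[i] = 2;                               // Case 2: literal drops out
                    } else {
                        deleted = true;                           // Case 1: conflicting literal,
                        break;                                    // the whole row is deleted
                    }
                }
                if (!deleted) fx.add(row);
            }
            return fx;
        }
    }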
The cofactor of f with respect to a variable x_i (or its complement) is the
cofactor of f with respect to an “implicant” whose literals are all 2s except for the ith
literal, whose value is x_i. Effectively, it produces a function f_{x_i} whose implicants all have
a 2 in the ith literal, built from all the implicants in f that have a value of 2 or x_i in
their ith literal (the other literals are unchanged). For example, using the function f from
the previous example:

f = | 1 1 2 0 |     x3′ = [2 2 0 2]     f_{x3′} = | 1 1 2 0 |
    | 2 1 0 2 |                                   | 2 1 2 2 |
    | 1 1 1 1 |
With the cofactor operation, Shannon’s Expansion Theorem can be written

f = x_i ⋅ f_{x_i} + x_i′ ⋅ f_{x_i′}

for any i between 1 and the number of inputs of f, inclusive. This works because each of
the cofactors of f does not contain the implicants that would evaluate to 0 if an algebraic
application of Shannon’s Expansion Theorem were used. The cofactor operation removes
those implicants with its Case 1. In addition, the cofactor operation turns the ith literal of
each remaining implicant into a 2, thus removing it from each term. Because x_i is the
variable “splitting” f, x_i is called the splitting variable.
Unate Functions

Simplify recursively splits the given function f until it comes across a unate function. A
unate function is a function in which every input is monotone: for each input, either
changing that input from 0 to 1 can only change the output of the function to 1 (although
the output of the function need not be 0 beforehand), or changing that input from 0 to 1
can only change the output of the function to 0 (although the output of the function need
not be 1 beforehand). Note that this must hold for every input of the function.
A cover of a function is unate if, for every i, the ith literal of every implicant of the
cover is drawn from only 1s and 2s or only 0s and 2s — that is, if every column of the matrix
representation of the cover of the function contains only 1s and 2s or only 0s and 2s. For
example, the cover of the following function U, defined by its matrix representation, is
unate, and the cover of the function B is binate:
U = | 1 2 0 1 |     B = | 0 2 1 1 |
    | 1 0 2 2 |         | 0 2 0 1 |
    | 1 0 0 1 |         | 2 0 1 2 |
    | 1 2 2 2 |         | 0 1 1 2 |
    | 1 0 2 1 |         | 2 0 0 1 |
The cover of B is binate because its 2nd and 3rd columns contain both 1s and 0s. If a cover is
unate, then the function must be unate. However, if a cover is not unate, it is still possible that
the function is unate. For example, the following function F is unate, but its cover is not:
F = | 1 1 0 |   [iv]
    | 2 0 2 |
In Simplify, however, we only deal with covers of functions, and it would not be
beneficial to the algorithm to check whether the function is unate if the cover is not.
Therefore, from this point forward in the paper, I will not differentiate between unate
functions and unate covers.

Unate Simplify

Unate Simplify removes the implicants of a unate function that are covered by other
implicants of the function. Since each column of the unate function can only contain
either 1s and 2s or 0s and 2s, it is more likely that this simplification can happen than if
the function were binate. For example, the following function F is reduced to F′:
F = | 1 0 2 2 |     is reduced to     F′ = | 1 0 2 2 |
    | 2 0 1 1 |                            | 2 2 1 2 |
    | 1 0 2 2 |
    | 2 2 1 2 |
    | 1 2 1 1 |
    | 2 0 1 2 |
Note that an implicant a covers another implicant b if and only if each literal i of a is a 2
or equals the ith literal of b. That is, the ith literal of b cannot be a 1 when the ith literal of a
is a 0, a 0 when the ith literal of a is a 1, or a 2 when the ith literal of a is not 2. If it were,
then b would not be covered by a. The following is pseudocode for Unate Simplify:
put f into f ′
for every implicant implicant of f from 1 to the number of implicants in f – 1 do
for every implicant compImplicant of f from the number of implicants in f down
to the index of implicant + 1 do
if implicant contains compImplicant then
remove compImplicant from f ′
end if
end for of compImplicant
end for of implicant
return f ′
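A runnable Java version of this pseudocode (my sketch; it also checks the reverse
direction, so either member of a covering pair is removed, and only the first copy of
exact duplicates is kept):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    class UnateSimplifySketch {
        // a covers b when each literal of a is a 2 or equals b's literal.
        static boolean covers(int[] a, int[] b) {
            for (int i = 0; i < a.length; i++)
                if (a[i] != 2 && a[i] != b[i]) return false;
            return true;
        }

        static List<int[]> unateSimplify(List<int[]> f) {
            List<int[]> fPrime = new ArrayList<>();
            for (int i = 0; i < f.size(); i++) {
                boolean isCovered = false;
                for (int j = 0; j < f.size() && !isCovered; j++) {
                    if (i == j) continue;
                    boolean duplicate = Arrays.equals(f.get(i), f.get(j));
                    // Keep the earlier of two identical implicants.
                    if (covers(f.get(j), f.get(i)) && !(duplicate && i < j))
                        isCovered = true;
                }
                if (!isCovered) fPrime.add(f.get(i));
            }
            return fPrime;
        }
    }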
Merge

After Simplify simplifies the unate functions, it merges the functions back
together. The Merge algorithm puts together the two functions formed by applying the
cofactor operation to the original function. Merge not only splices the two functions into one function, but
also performs the AND operation needed on each of the functions to complete Shannon’s
Expansion Theorem. However, if Merge just performed the AND operation and put the
two resulting functions together (effectively ORing them), many redundancies would
result. Thus, before Merge performs the AND and OR operations, it checks for
redundancies. Merge takes the two simplified cofactors, h1 and h0, and x_i (the splitting
variable). It returns a function h created by merging h1 and h0. To check for
redundancies, Merge first checks to see if any implicants of h1 and h0 are identical. Those
that are identical are put into a set of implicants h2 and removed from h1 and h0. Then,
Merge checks to see if any implicants in h1 cover any of those in h0, and vice versa.
Implicants in h0 that are covered by h1 are removed from h0 and put into h2. Implicants in
h1 that are covered by h0 are removed from h1 and put into h2. Because the implicants in
h2 are those that were covered by both h1 and h0, they should make h = 1 no matter what x_i is.
Thus, they are not ANDed with the splitting variable. Since the implicants of h1 should
make h = 1 only when the splitting variable is 1, they should be ANDed with x_i, and
because the implicants of h0 should make h = 1 only when the splitting variable is 0, they
should be ANDed with x_i′. The function h returned is the OR of h2, x_i ⋅ h1, and x_i′ ⋅ h0.
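In Java, Merge might look like the following sketch (mine; covers is the helper from the
Unate Simplify sketch above):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Iterator;
    import java.util.List;

    class MergeSketch {
        static List<int[]> merge(List<int[]> h1, List<int[]> h0, int splittingVar) {
            List<int[]> r1 = new ArrayList<>(h1), r0 = new ArrayList<>(h0);
            List<int[]> h2 = new ArrayList<>();
            // 1. Identical implicants move into h2 and leave both halves.
            for (Iterator<int[]> it = r1.iterator(); it.hasNext(); ) {
                int[] a = it.next();
                int twin = indexOfEqual(r0, a);
                if (twin >= 0) { h2.add(a); r0.remove(twin); it.remove(); }
            }
            // 2. Implicants covered by the other half also move into h2.
            for (Iterator<int[]> it = r0.iterator(); it.hasNext(); ) {
                int[] b = it.next();
                if (anyCovers(r1, b)) { h2.add(b); it.remove(); }
            }
            for (Iterator<int[]> it = r1.iterator(); it.hasNext(); ) {
                int[] a = it.next();
                if (anyCovers(r0, a)) { h2.add(a); it.remove(); }
            }
            // 3. h = h2 + xi * (rest of h1) + xi' * (rest of h0).
            List<int[]> h = new ArrayList<>(h2);
            for (int[] a : r1) { int[] t = a.clone(); t[splittingVar] = 1; h.add(t); }
            for (int[] b : r0) { int[] t = b.clone(); t[splittingVar] = 0; h.add(t); }
            return h;
        }

        static int indexOfEqual(List<int[]> list, int[] term) {
            for (int i = 0; i < list.size(); i++)
                if (Arrays.equals(list.get(i), term)) return i;
            return -1;
        }

        static boolean anyCovers(List<int[]> list, int[] term) {
            for (int[] t : list)
                if (UnateSimplifySketch.covers(t, term)) return true;
            return false;
        }
    }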
Binate Select

Because Simplify breaks a function down into unate functions, we want to make
each of the two functions resulting from the application of Shannon’s Expansion
Theorem as “unate” as possible. Thus, we choose the splitting variable to be the “most”
binate variable. Binate Select chooses the index of the splitting variable, splittingVar,
such that the splittingVar column of the matrix contains both 1s and 0s and the greatest
number of 1s and 0s (the greatest sum of the number of 1s and the number of 0s).
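A Java sketch of Binate Select (my reconstruction, not the Appendix B source):

    import java.util.List;

    class BinateSelectSketch {
        // Among columns containing both 1s and 0s, pick the one with the
        // greatest total of 1s and 0s; -1 means every column is unate.
        static int binateSelect(List<int[]> f, int numInputs) {
            int splittingVar = -1, best = -1;
            for (int i = 0; i < numInputs; i++) {
                int ones = 0, zeros = 0;
                for (int[] implicant : f) {
                    if (implicant[i] == 1) ones++;
                    else if (implicant[i] == 0) zeros++;
                }
                if (ones > 0 && zeros > 0 && ones + zeros > best) {
                    best = ones + zeros;
                    splittingVar = i;
                }
            }
            return splittingVar;
        }
    }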
Simplify breaks down its given function f using the following line of pseudocode:

merge( simplify( f_{x_splittingVar} ), simplify( f_{x_splittingVar′} ), splittingVar )

It does this recursively until it encounters a unate function. Then, it calls unate_simplify
on the unate function. Note that this is, in effect, simply applying Shannon’s Expansion
Theorem. That is why the function generated by Simplify is equivalent to the given
function.
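Put together in Java, the recursion might look like this sketch (mine), reusing the
cofactor, unateSimplify, merge, and binateSelect sketches above:

    import java.util.Arrays;
    import java.util.List;

    class SimplifySketch {
        static List<int[]> simplify(List<int[]> f, int numInputs) {
            int splittingVar = BinateSelectSketch.binateSelect(f, numInputs);
            if (splittingVar < 0)                            // unate: base case
                return UnateSimplifySketch.unateSimplify(f);
            int[] x = new int[numInputs], xBar = new int[numInputs];
            Arrays.fill(x, 2);     x[splittingVar] = 1;      // x  = [2 ... 1 ... 2]
            Arrays.fill(xBar, 2);  xBar[splittingVar] = 0;   // x' = [2 ... 0 ... 2]
            return MergeSketch.merge(
                    simplify(CofactorSketch.cofactor(f, x), numInputs),
                    simplify(CofactorSketch.cofactor(f, xBar), numInputs),
                    splittingVar);
        }
    }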
For example, take the following function f:

f = | 1 0 1 |
    | 0 0 1 |
    | 1 1 1 |
    | 1 1 2 |
Binate Select chooses x1 as the splitting
variable. Simplify then cofactors f with respect to the splitting variable and to its
complement:

x1′ = [0 2 2]:                  x1 = [1 2 2]:

f_{x1′} = [2 0 1]               f_{x1} = | 2 0 1 |
                                         | 2 1 1 |
                                         | 2 1 2 |
Simplify is then recursively called on each of these “subfunctions.” Because the function
on the left is unate, unate_simplify is called on it. Since this subfunction only has one
implicant, unate_simplify cannot simplify it further. For the right subfunction, column 2
is binate, so the subfunction is split again, this time on x2:
f_{x1} = | 2 0 1 |
         | 2 1 1 |
         | 2 1 2 |

x2′ = [2 0 2]:                  x2 = [2 1 2]:

f_{x1 x2′} = [2 2 1]            f_{x1 x2} = | 2 2 1 |
                                            | 2 2 2 |
Because both the left and right subfunctions are unate, unate_simplify is called on each of
them. Since there is only one implicant in the left subfunction, unate_simplify cannot
simplify it. In the right subfunction, [2 2 2] covers [2 2 1], so unate_simplify
removes [2 2 1].
Then, merge is called to merge together the left branch and the right branch,
with x2 as the splitting variable. The left branch’s [2 2 1] is covered by the right branch’s
[2 2 2], so it is put into h2 and removed from h0, and is not ANDed with the splitting
variable. The remaining [2 2 2] is ANDed with x2. Thus, merge yields:

| 2 2 1 |
| 2 1 2 |
This is then merged with the subfunction obtained by cofactoring f with respect to
x1′, namely [2 0 1], with x1 as the splitting variable. [2 0 1] is covered by [2 2 1], so it is
put into h2; the other implicants are ANDed with x1. The merge yields:

| 2 0 1 |
| 1 2 1 |
| 1 1 2 |
This is the final result of Simplify. It has three terms and six literals: a 25% reduction in
terms and roughly a 45% reduction in literals. Because Simplify simply works down a
binary recursion tree, it is very efficient in terms of time. However, it is not guaranteed to
even come up with a set of prime implicants, much less find a minimum cover. To
compare Q-M and Simplify, I created a program that generated random functions by
making lists of random minterms. I then ran each on a variety of functions under
controlled conditions*. The results are shown in the following charts (on the next four
pages). The reduction results are measured in terms instead of literals because these
algorithms are used for large functions, and the hardware used to realize large functions,
programmable logic arrays (PLAs), is structured in such a way that it does not matter
how many literals are in each term, but it does matter how many terms there are.
* PowerTower 180 with 64MB RAM running Mac OS 8.5.1 and MRJ 2.0. 5MB of RAM was allocated
each to Simplify and Q-M as applications generated by JBindery 2.0. Each ran with extensions off and no
background applications (besides the Finder) running. Three consecutive trials were taken for each
combination of minterms. Times are averages; other results were constant.
Results for the first set of random functions:

Minterms   Terms (Simplify)   Terms (Q-M)   Reduction (Simplify)   Reduction (Q-M)   Time, Simplify (s)   Time, Q-M (s)
28         16                 14            42.86%                 50.00%            0.043                0.157
59         33                 28            44.07%                 52.54%            0.099                0.552
101        39                 26            61.39%                 74.26%            0.165                4.799
Results for the second set:

Minterms   Terms (Simplify)   Terms (Q-M)   Reduction (Simplify)   Reduction (Q-M)   Time, Simplify (s)   Time, Q-M (s)
50         31                 30            38.00%                 40.00%            0.093                0.291
155        67                 48            56.77%                 69.03%            0.328                7.315
210        66                 41            68.57%                 80.48%            0.382                38.778
Results for the third set:

Minterms   Terms (Simplify)   Terms (Q-M)   Reduction (Simplify)   Reduction (Q-M)   Time, Simplify (s)   Time, Q-M (s)
114        76                 64            33.33%                 43.86%            0.331                1.312
265        144                96            45.66%                 63.77%            0.927                24.717
413        131                70            68.28%                 83.05%            0.989                88.4
Results for the fourth set:

Minterms   Terms (Simplify)   Terms (Q-M)   Reduction (Simplify)   Reduction (Q-M)   Time, Simplify (s)   Time, Q-M (s)
119        93                 84            21.85%                 29.41%            0.431                1.243
500        257                171           48.60%                 65.80%            2.163                79.618
823        262                135           68.17%                 83.60%            2.601                1,412.73
Espresso-II

The last technique is Espresso-II, which minimizes a cover through the following main
steps:

1. Complement: Compute the complement of the function (its OFF-set), which later
steps use to know how far implicants may grow.
2. Expand: Expand each implicant into a prime implicant, and remove any implicants
that the expanded ones cover.
3. Essential Primes: Find the essential primes and make a list of them.
4. Irredundant Cover: Remove redundant implicants, leaving an irredundant cover.
5. Reduce: Put 1s and 0s back into the 2s of the expanded implicants. The reduced
implicants may later expand into different, better primes.
6. Expand: Expand the reduced implicants into primes again.
7. Irredundant Cover: Remove redundant implicants again. If there was
improvement, repeat from step 5.
8. Makesparse: After the algorithm has completed, different parts of the function
may be in different places. Put together the function and simplify it as much
as possible. [v]
Unfortunately, this is a very intricate and technical algorithm, and I did not have enough
time to program it.
In the amount of time I had to complete this paper, I was not able to accomplish
several things. I was not able to explore the second Q-M algorithm enough to find the
algorithm that guarantees a minimum cover. Also, I was not able to find out enough
about the Espresso algorithm in order to program it. If I did have enough time to program
the Espresso algorithm, I would compare it with the Q-M and the Simplify algorithms.
Since I was able to obtain a program of Espresso written in native Macintosh code (this
makes it faster than programs in Java), MacEspresso 1.0, I am able to tell that Espresso
normally produces results with more literals than the results of the Q-M algorithms, and
fewer literals than the results of the Simplify algorithm. However, I cannot tell how it
compares in terms of time. I believe that it takes significantly less time than the Q-M
algorithms.
Notes
i. Proofs of Absorption and Adsorption from Jerry D. Daniels, Digital Design from Zero to
One (New York: John Wiley & Sons, 1996), p. 103.
ii. Digital Design from Zero to One, p. 178.
iii. Simplify adapted from Robert K. Brayton, Gary D. Hachtel, Curtis T. McMullen, and
Alberto L. Sangiovanni-Vincentelli, Logic Minimization Algorithms for VLSI Synthesis
(Boston: Kluwer Academic Publishers, 1984).