
Minimizing Boolean Sum of Products Functions

David Eigen
Table of Contents

Introduction and Primitive Functions 1

Boolean Algebra 4

Karnaugh Maps 8

Quine-McCluskey Algorithms 14

First Algorithm 14

Second Algorithm 16

Implementation to Computer Code and Time Analysis 19

Simplify 26

Cofactor 26

Unate Functions 28

Unate Simplify 29

Merge 30

Binate Select 32

Simplify 34

Comparison between Simplify and the Quine-McCluskey Algorithms 36

Espresso-II 42

Appendix A: Source Code of the Quine-McCluskey Algorithms A1

First Algorithm A1

Second Algorithm A4

Appendix B: Source Code of Simplify A8

Glossary G1

Bibliography B

Digital design is the design of hardware for computers. At the lowest level,

computers are made up of switches and wires. Switches normally have two states and can

only be in one state at a time. They may be manually controlled or may be controlled by

the outputs of other parts of the computer. Wires also have two states, each corresponding

to the level of voltage in the wire. Although wires may conduct any level of voltage, in

digital design we restrict the number of states to two: low voltage and high voltage.

In combinational design, one is given a truth table and must realize it into

hardware. Depending on what the application of the hardware is, it may be optimal to

have the hardware return the answer as quickly as possible (minimizing delay) or to use

as little hardware as possible (minimizing cost). There are many techniques for doing

this. Among them are Karnaugh Maps, the Quine-McCluskey algorithms, Simplify, and

Espresso. In this paper, I will explore a few of these techniques, and compare and

contrast them.

All of the techniques are derived from boolean algebra. Boolean algebra is a way

of manipulating boolean variables. A boolean variable has exactly two states, just as the

switches and wires at the lowest level of a computer have two states. Although the states

of wires in a computer are always called HI and LO, the states of a boolean variable may

be called true and false, on and off, or anything else. Usually, we use the states “1” and

“0” because they are the easiest to work with.

A boolean function is a function that takes boolean parameters (inputs) and

returns a boolean output. Primitive functions take one or two inputs. A two-input truth

table has four lines: one line for each combination of 1s and 0s that can be assigned to the

two inputs. There are four because for each 1 or 0 for the first input, the second input can

be either 1 or 0. Therefore, there are 2*2 = 4 combinations. For an N-input truth table, the

first input can be either 1 or 0; the second input can be either 1 or 0 for each 1 or 0 in the

first input; the third input can be either 1 or 0 for any of the 2*2 different combinations of

the first two inputs, doubling the number of combinations to 2*2*2; the fourth input can

be either 1 or 0 for each of the different combinations of the first three inputs, doubling

the number of combinations to 2*2*2*2, and so on. By the time we get to the Nth input,

there are 2^N combinations. Thus, an N-input truth table has 2^N lines.

There are sixteen different 2-input functions. This is because a 2-input truth table

has 4 lines. The first line can correspond to an output of 1 or 0. For each of these

possibilities, the second line can either be a 1 or a 0, making 2*2 combinations. The third

line can correspond to an output of either 1 or 0 for each of the 2*2 combinations of the

first two lines, making 2*2*2 possibilities. The fourth line can correspond to an output of

either 1 or 0 for each of the 2*2*2 combinations of the first three lines, making

2*2*2*2 = 16 possibilities. Extending this to an L-line truth table, there are 2^L different

possibilities. And since for an N-input truth table there are 2^N lines, L = 2^N. Therefore,

an N-input truth table can have 2^(2^N) different functions.

Of the sixteen functions for a 2-input truth table, only seven are commonly used. I

will now examine these seven crucial functions.

The invert (NOT) function takes one input, and its output is the opposite state of

the input. Thus, NOT 1 = 0, and NOT 0 = 1. The truth table of the invert function is the

following:

A NOT A
0 1
1 0
NOT A is also called the complement of A, which is denoted Ā.

The AND function’s output is 1 when both of its two inputs are one. Thus,

A AND B is 1 only when A and B are both one. The truth table of the AND function is

the following:

A B A AND B
0 0 0
0 1 0
1 0 0
1 1 1

Since anything times 0 equals 0, and 1 times 1 equals 1, A AND B looks like a

multiplication table for A and B. Thus, A AND B is denoted as the product of A and B,

A⋅B, or AB.

The OR function’s output is 1 when any of its two inputs are 1. Thus, A OR B is 1

whenever A is 1 or B is 1. The truth table of the OR function is the following:

A B A OR B
0 0 0
0 1 1
1 0 1
1 1 1

Since 1 + 0 = 1 and 0 + 0 = 0, the truth table for A OR B looks somewhat like an addition

table for A and B. Thus, A OR B is denoted as the sum of A and B, A+B.

The exclusive OR (XOR) function is 1 whenever either, but not both, of its two

inputs is 1. Thus, A XOR B is 1 when only one of either A or B is 1. The truth table of

the XOR function is the following:

A B A XOR B
0 0 0
0 1 1
1 0 1
1 1 0
A XOR B is denoted as A ⊕ B. Note that A ⊕ B = (A + B) ⋅ NOT(A ⋅ B).

The other three primitive functions, NAND, NOR, and XNOR, are the

complements of AND, OR, and XOR, respectively. A NAND B = NOT(A ⋅ B),

A NOR B = NOT(A + B), and A XNOR B = NOT(A ⊕ B).

There are four properties for boolean algebra. Often, they are called axioms.

However, given the definitions of AND, OR, and NOT, they can be proved by making a

truth table for each of them (this technique is called perfect induction). They are:

• Closure: A + B ∈{0,1} and A ⋅ B ∈{0,1}

• Associativity: ( A + B ) + C = A+ ( B + C) and ( A ⋅ B ) ⋅ C = A ⋅ (B ⋅ C)

• Commutativity: A + B = B + A and A ⋅ B = B ⋅ A

• Distributivity: A + (B ⋅ C) = ( A + B) ⋅ ( A + C) and A ⋅ ( B + C) = ( A ⋅ B) + ( A ⋅ C)

There are also four basic identities. They are:

• A + Ā = 1 and A ⋅ Ā = 0

• A + 1 = 1 and A ⋅ 0 = 0

• A + 0 = A and A ⋅1 = A

• A + A = A and A ⋅ A = A

Note that each of the axioms and identities has two forms: each form can be created by

switching the ORs and ANDs, and the 1s and 0s of the other form, called its “dual.”

These identities can also be proved by perfect induction. For example, the truth tables and

proof for the first identities are:

A   Ā   A+Ā   A⋅Ā
0   1    1     0
1   0    1     0

With these axioms and identities, we can prove the following theorems:

Absorption: X + XY = X and its dual, X(X + Y) = X

Adsorption: X + X̄Y = X + Y and its dual, X(X̄ + Y) = XY, and

Adjacency: XY + X̄Y = Y and its dual, (X + Y)(X̄ + Y) = Y

Proof of Absorption:

By distribution, X + XY = X (1 + Y) . 1 + Y = 1. Therefore, X (1 + Y ) = X ⋅1 = X

(because X⋅1 = X). Therefore, by transitivity, X + XY = X .

By the distributive property, X ( X + Y ) = XX + XY = X + XY (because X X = X).

By the form of absorption just proved, X + XY = X . Therefore, by transitivity,

X( X + Y ) = X .

Proof of Adsorption:

By distribution, X + X̄Y = (X + X̄)(X + Y). Because X + X̄ = 1,

(X + X̄)(X + Y) = 1 ⋅ (X + Y) = X + Y (because 1⋅X = X). Therefore, by transitivity,

X + X̄Y = X + Y.

By distribution, X(X̄ + Y) = XX̄ + XY = 0 + XY (because XX̄ = 0). Because

0 + A = A, 0 + XY = XY. Therefore, by transitivity, X(X̄ + Y) = XY [i].

Proof of Adjacency:

By the distributive property, XY + X̄Y = (X + X̄)Y = 1⋅Y (because X + X̄ = 1).

Because 1⋅A = A, 1⋅Y = Y. Therefore, by transitivity, XY + X̄Y = Y.

By the distributive property, (X + Y)(X̄ + Y) = (XX̄) + Y = 0 + Y (because

XX̄ = 0). Because 0 + A = A, 0 + Y = Y. Therefore, by transitivity, (X + Y)(X̄ + Y) = Y.



De Morgan’s Laws state:

NOT(A + B + C + ...) = Ā ⋅ B̄ ⋅ C̄ ⋅ ... and NOT(A ⋅ B ⋅ C ⋅ ...) = Ā + B̄ + C̄ + ...

Proof of De Morgan’s Laws:

Let G = X + F and H = X̄ ⋅ F̄, where F is some boolean function. Then

G ⋅ H = (X + F) ⋅ (X̄ ⋅ F̄). By distribution and commutativity, this equals XX̄F̄ + FF̄X̄.

And because A ⋅ Ā = 0, this equals 0 ⋅ F̄ + 0 ⋅ X̄ = 0. Also, G + H = (X + F) + (X̄ ⋅ F̄). By

distribution and commutativity, this equals (X + X̄ + F) ⋅ (X + F + F̄). And because

A + Ā = 1, this equals (1 + F) ⋅ (X + 1) = 1. Assume Ḡ ≠ H. Then there exists some

ordered n-tuple of the variables such that G ⋅ H = 1 or G + H = 0. But, by transitivity,

G ⋅ H = 0 and G + H = 1. Therefore, Ḡ = H and, because of this, H̄ = G, proving both

laws.

Duality is a direct implication of De Morgan’s Laws. Given a function f = g + h,

by De Morgan’s Laws, f̄ = ḡ ⋅ h̄. By recursively performing De Morgan’s Laws on g and

h, all ORs switch to ANDs, and all values become complemented: 1s to 0s and 0s to 1s.

Of course, we can just rename all variables with their complements and have no loss of

generality, since each form of the function (it and its dual) is independent. Similarly, if

f = g ⋅ h, then f̄ = ḡ + h̄, so all ANDs switch to ORs, all 1s to 0s, and all 0s to 1s.

When realizing a truth table, one must create a hardware circuit that takes in the

appropriate inputs and produces the correct outputs. It is important to have as small a

function as possible. Although it is always possible to arrive at an optimal solution solely

using boolean algebra, it is not suitable for use with a function that has many inputs.

However, there are other techniques that may not always arrive at an optimal

solution, but are easier to use when dealing with large truth tables.

These techniques create a Sum of Products (SOP) solution. An SOP function

consists of many inputs being ANDed together, and then the ANDed terms being ORed

together. For example, an SOP function may look like: AB + CD + DEF. It consists of

three terms (AB, CD, and DEF) all being ORed together. Each instance of an input is

called a literal. In this expression, there are seven literals. A term is an ANDed group of

literals. In this expression, there are three terms.

Any truth table can be realized with an SOP solution. For example, take the

following truth table:

A B C OUT
0 0 0 1
0 0 1 1
0 1 0 0
0 1 1 0
1 0 0 1
1 0 1 0
1 1 0 1
1 1 1 0

We want OUT to be 1 only when a “1” is listed under OUT in the truth table. One of the

times this happens is when A is 0, B is 0, and C is 0. Therefore, we need OUT to be 1

whenever A is 0 AND B is 0 AND C is 0. An expression that is 1 only when this is 1 is

Ā ⋅ B̄ ⋅ C̄. When A, B, and C are all 0, this expression is 1. In addition, we want OUT to

be 1 whenever A is 0, B is 0, and C is 1 (the second line). An expression that is 1 only

when this is 1 is Ā ⋅ B̄ ⋅ C. So an expression that is 1 when either (or both) of these cases

is 1 is:

Ā ⋅ B̄ ⋅ C̄ + Ā ⋅ B̄ ⋅ C

Similarly, we can generate terms for the other two times OUT is 1, ultimately creating a

SOP expression that is 1 if and only if OUT is 1:

OUT = Ā ⋅ B̄ ⋅ C̄ + Ā ⋅ B̄ ⋅ C + A ⋅ B̄ ⋅ C̄ + A ⋅ B ⋅ C̄

This can be generalized into an algorithm: First, look for lines in the truth table

where OUT is 1. For each of these rows, AND together one literal for each input: an

uninverted, normal literal if the input is 1, and an inverted literal if the input is 0. Then,

OR together all of these terms. An SOP solution like this is not at all reduced, and is

called a canonical SOP solution. Each of the ORed terms in a canonical solution is called

a minterm.
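As a small illustration of this procedure (this is my own sketch, not the program in Appendix A), the following Java method builds the canonical cover from a truth table given as an array of output values. Each term is an int array with one entry per input, 0 for a complemented literal and 1 for an uncomplemented one, the same encoding the paper uses later for implicants.

    import java.util.ArrayList;
    import java.util.List;

    public class CanonicalSop {
        // Build the canonical SOP cover: one minterm per truth-table row whose output is 1.
        // outputs[i] is the output for the input combination whose binary value is i
        // (the first input is the most significant bit). Each returned term is an int
        // array of 0s and 1s: 0 means the literal is complemented, 1 means it is not.
        static List<int[]> canonicalCover(boolean[] outputs, int numInputs) {
            List<int[]> cover = new ArrayList<>();
            for (int row = 0; row < outputs.length; row++) {
                if (!outputs[row]) continue;            // only rows where OUT is 1
                int[] term = new int[numInputs];
                for (int i = 0; i < numInputs; i++) {
                    // extract bit i of the row number, most significant input first
                    term[i] = (row >> (numInputs - 1 - i)) & 1;
                }
                cover.add(term);
            }
            return cover;
        }

        public static void main(String[] args) {
            // The 3-input truth table from the text: OUT = 1 on rows 000, 001, 100, 110.
            boolean[] outputs = {true, true, false, false, true, false, true, false};
            for (int[] term : canonicalCover(outputs, 3)) {
                System.out.println(java.util.Arrays.toString(term));
            }
        }
    }

Run on the truth table above, this prints the four minterms [0,0,0], [0,0,1], [1,0,0], and [1,1,0], which correspond to the canonical cover just derived.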

A technique to reduce an SOP solution is to use Karnaugh maps (K-maps). A

Karnaugh map is a pictorial representation of an SOP expression. It is a grid with one box

for each minterm. The boxes are arranged such that each box differs with an adjacent box

by exactly one literal. For example, the K-map for the previous truth table is:

A\BC   00   01   11   10
 0      1    1    0    0
 1      1    0    0    1

In this K-map, B and C label the columns and A labels the rows, as marked with a

“A\BC” in the top left hand corner. For the column labels, B is the literal on the left, and

C is the literal on the right. Notice that the columns are labeled 00, 01, 11, 10. In this

sequence, each number is different from its two surrounding numbers by exactly one bit.

Also, the numbers at the beginning and end of the sequence “wrap around:” they also

differ by exactly one bit. Each entry has exactly 3 adjacent entries: one for each input

variable.

Because of this, the minterms associated with adjacent boxes that contain 1s

(horizontally and vertically only; not diagonally) differ by one literal. Thus, we can apply

the adjacency theorem to two adjacent boxes and condense them to one smaller term. For

example, both the 000 and 001 boxes contain 1s. Therefore, by adjacency, we can say:

ĀB̄C̄ + ĀB̄C = ĀB̄
(the two terms on the left are the minterms being combined)

Notice that ĀB̄ corresponds to 00, which are the only two literals that the boxes have in

common in the K-map. We group together the boxes in the K-map to denote that we are

applying adjacency to them. The map with all the groups circled is:

A\BC   00   01   11   10
 0      1    1    0    0
 1      1    0    0    1

(The circled groups are ĀB̄, the pair in row 0 under columns 00 and 01; AC̄, the pair in row 1 under columns 00 and 10; and B̄C̄, the pair in column 00.)

Each of these groups represents a term in the SOP cover. To find the term, we can

just look to see what literals all the boxes in each group have in common. The product of

these literals is a term in the SOP cover. Thus, the SOP cover represented here is:

ĀB̄ + AC̄ + B̄C̄

However, the B̄C̄ term is redundant. Graphically, that group overlaps the other two

groups, and groups two boxes that are already circled. In boolean algebra, the theorem

that states that this redundant term can be eliminated is the Law of Consensus.

The Law of Consensus states:

AB + ĀC + BC = AB + ĀC

It can be proved using Shannon’s Expansion Theorem.

Shannon’s Expansion Theorem states:

f(x1, x2, ..., xn) = x1 ⋅ f(1, x2, ..., xn) + x̄1 ⋅ f(0, x2, ..., xn)

where f(x1,x2,…,xn) is a boolean function with n parameters and xi is a boolean variable.

Proof of Shannon’s Expansion Theorem:

The proof of Shannon’s Expansion Theorem is broken up into two cases: one case for

when x1 = 1 and one for when x1 = 0.

Case 1: x1 = 1

Since x1 = 1, f(x1, x2, ..., xn) = f(1, x2, ..., xn). Because X = 1⋅X, 0 = 0⋅X, and

X + 0 = X, f(1, x2, ..., xn) = f(1, x2, ..., xn) ⋅ 1 + f(0, x2, ..., xn) ⋅ 0. Since x1 = 1,

f(1, x2, ..., xn) ⋅ 1 + f(0, x2, ..., xn) ⋅ 0 = f(1, x2, ..., xn) ⋅ x1 + f(0, x2, ..., xn) ⋅ x̄1. Therefore, by

transitivity, f(x1, x2, ..., xn) = x1 ⋅ f(1, x2, ..., xn) + x̄1 ⋅ f(0, x2, ..., xn).

Case 2: x1 = 0

Since x1 = 0, f(x1, x2, ..., xn) = f(0, x2, ..., xn). Because X = 1⋅X, 0 = 0⋅X, and

0 + X = X, f(0, x2, ..., xn) = f(1, x2, ..., xn) ⋅ 0 + f(0, x2, ..., xn) ⋅ 1. Since x1 = 0,

f(1, x2, ..., xn) ⋅ 0 + f(0, x2, ..., xn) ⋅ 1 = f(1, x2, ..., xn) ⋅ x1 + f(0, x2, ..., xn) ⋅ x̄1. Therefore, by

transitivity, f(x1, x2, ..., xn) = x1 ⋅ f(1, x2, ..., xn) + x̄1 ⋅ f(0, x2, ..., xn).

Proof of the Law of Consensus:

Now, with Shannon’s Expansion Theorem, the Law of Consensus can be proved.

Using Shannon’s Expansion Theorem on f(A, B, C) = AB + ĀC + BC, we get:

AB + ĀC + BC = A(B + BC) + Ā(C + CB). By Absorption, we can combine both B + BC

and C + CB to get: A(B + BC) + Ā(C + CB) = A ⋅ B + Ā ⋅ C. Therefore, by transitivity,

AB + ĀC + BC = AB + ĀC.

Using this on the K-map example, the cover from the three circles can be

simplified:

ĀB̄ + AC̄ + B̄C̄ = ĀB̄ + AC̄

This new cover has only two terms and four literals, much less than the original function

with four terms and twelve literals. Thus, there is a 50% reduction in the number of terms

and a 66⅔% reduction in the number of literals. The new cover is graphically represented

by two circles, namely the ones that are not completely overlapped:

A\BC   00   01   11   10
 0      1    1    0    0
 1      1    0    0    1

(The remaining circles are the group ĀB̄ in row 0 and the group AC̄ in row 1.)

Whenever a circle is completely overlapped (every box it contains is also contained in

another circle), the Law of Consensus can be applied. Thus, these circles do not even

have to be circled to generate a cover of the function.

Adjacency can be applied to grouping not only single boxes, but also entire

groups of boxes whose height and width are both powers of two. For example, take the

following K-map:
AB\CD   00   01   11   10
 00      1    1    0    0
 01      1    1    0    0
 11      0    1    1    1
 10      1    1    1    1

Circling every group of two boxes (and omitting those that can be eliminated by the Law

of Consensus), we get:

AB\CD   00   01   11   10
 00      1    1    0    0
 01      1    1    0    0
 11      0    1    1    1
 10      1    1    1    1

This produces a cover:

ĀB̄C̄ + ĀBC̄ + ABD + AB̄C̄ + ABC + AB̄C

However, Adjacency can be applied to many of these terms (such as ABC + AB̄C).

Applying adjacency to all possible terms and repeating, the cover becomes:

ĀC̄ + AD + AC + AB̄

Graphically, this is represented by combining adjacent circles:

AB\CD   00   01   11   10
 00      1    1    0    0
 01      1    1    0    0
 11      0    1    1    1
 10      1    1    1    1

This cover of the function has only four terms and eight literals, reduced from eleven

terms and forty-four literals. This is approximately a 64% reduction in terms and an 82%

reduction in literals.

Thus, the algorithm for using a K-map is:

• Circle areas of the K-map where each box in the area contains a 1. Each area’s height and

width (measured in number of boxes) must be a power of 2. Areas that are already covered

by circles do not need to be circled, although it may be beneficial to do so when grouping

areas that are not yet covered (since larger areas produce smaller terms).

• Each circle represents a term in the cover. Write down each term by ANDing the literals

that are the same in all boxes contained by the circle. Do not complement literals that

appear as a 1 in the column or row label and complement those that appear as a 0. Then, OR

together all the terms.

K-maps are an intuitive and easy way to reduce covers of functions with four or

less inputs, but they do not work well for functions with over four inputs. It is possible to

use a K-map with more than two dimensions to represent these functions, but they are

hard to deal with in three dimensions, and almost impossible to use in more than three.

For a function with more than four inputs, other methods must be used. One such method

is the Quine-McCluskey (Q-M) algorithms.

The Q-M algorithms are two algorithms that are used to minimize a cover of a

boolean function. The first algorithm generates a cover of prime implicants (implicants

(terms) that are fully reduced by adjacency). The second algorithm takes the prime

implicants from the first algorithm and eliminates those that are not needed.

In the first Q-M algorithm, the minterms of the function are listed by using a 1

when a literal is not complemented and a 0 when a literal is complemented. For example,

take the following 5-input function:

A B C D E OUT
0 0 0 0 0 0
0 0 0 0 1 0
0 0 0 1 0 0
0 0 0 1 1 0
0 0 1 0 0 1
0 0 1 0 1 1
0 0 1 1 0 1
0 0 1 1 1 1
0 1 0 0 0 0
0 1 0 0 1 0
0 1 0 1 0 0
0 1 0 1 1 0
0 1 1 0 0 1
0 1 1 0 1 0
0 1 1 1 0 0
0 1 1 1 1 0
1 0 0 0 0 0
1 0 0 0 1 0
1 0 0 1 0 0
1 0 0 1 1 0
1 0 1 0 0 0
1 0 1 0 1 0
1 0 1 1 0 1
1 0 1 1 1 0
1 1 0 0 0 0
1 1 0 0 1 0
1 1 0 1 0 0
1 1 0 1 1 0
1 1 1 0 0 1
1 1 1 0 1 0
1 1 1 1 0 1
1 1 1 1 1 0

The first step of the first Q-M algorithm yields the following list:

ABCDE
00100
00101
00110
00111
01100
10110
11100
11110

In the next part of the Q-M algorithm, the theorem of Adjacency is applied. To

apply it, look at the implicants and find implicants that differ by exactly one literal.

Combine the two implicants by putting an X (for don’t-care) in place of the literal they

differ by, and write the new implicant in the next column. Throw out any duplicate

implicants that may have been generated. Draw a box around any implicants that could

not be combined with other implicants. Repeat this process with the next column until no

more combinations are possible. The boxed implicants are the ones that could not be

combined: they are the prime implicants. In this example, the first algorithm produces:

ABCDE (minterms):  00100, 00101, 00110, 00111, 01100, 10110, 11100, 11110

ABCDE (after one combination):  0010X, 001X0, 0X100, 001X1, 0011X, X0110, X1100, 1X110, 111X0

ABCDE (after two combinations):  001XX, 001XX (duplicate)

Therefore, the cover of the function using these prime implicants is:

ĀCD̄Ē + B̄CDĒ + BCD̄Ē + ACDĒ + ABCĒ + ĀB̄C

The second Q-M algorithm can further minimize a cover generated by the first

algorithm. The first part of the second algorithm is to make a table whose rows are

labeled by the prime implicants and whose columns are labeled by the minterms of the

function. When labeling the columns with the minterms, it is usually easier to write down

the base 10 number represented by the binary (base 2) number that the minterm

represents. In this case, E would be the value in the 1’s place, and A would be the value

in the 16’s place. Then, put a check mark in each box where the minterm of that column

is covered by the implicant of that row. The table for this function is:

Prime implicant    Minterms with a check (of 4, 5, 6, 7, 12, 22, 28, 30)
0X100              4, 12
X0110              6, 22
X1100              12, 28
1X110              22, 30
111X0              28, 30
001XX              4, 5, 6, 7

The next part of the algorithm is to find all columns that contain exactly one

check (all columns contain at least one check). Draw a line across the row that this check

is in, and circle the minterms of all the checks that this line intersects to signify that that

minterm has been covered. Do not do this process for any circled columns. After doing

this, the table is:

Prime implicant    Minterms with a check (of 4, 5, 6, 7, 12, 22, 28, 30)
0X100              4, 12
X0110              6, 22
X1100              12, 28
1X110              22, 30
111X0              28, 30
001XX              4, 5, 6, 7

(Minterms 5 and 7 each have exactly one check, in the 001XX row, so a line is drawn through that row and the minterms it covers, 4, 5, 6, and 7, are circled.)

At this point, 12, 22, 28, and 30 are uncovered. Since they each have more than

one check in their column, we must choose what checks to take first. Our choice will

affect the number of implicants in our cover. There is always a way to find the minimum

cover with the second algorithm [ii]. However, there are many intricacies involved in

finding an algorithm to always find the correct choices. For example, take the following

diagram with the dots representing minterms—which may be spread out in a random,

nonconsecutive order in the list of minterms—and the circles representing prime

implicants:

[Diagram: a row of dots (minterms) covered by five overlapping prime-implicant circles, numbered 1 through 5.]

The essential prime implicants are the dark circles. The lighter circles are nonessential

and can be thrown away. A perfect Q-M algorithm would pick the dark circles and not

the light circles. Also, this picture is only in two dimensions. In a normal situation, these

minterms may be spread out over many more dimensions, increasing the complexity of

the problem.

I was not able to find the perfect second Q-M algorithm that always finds the

minimum cover*. My Q-M algorithm may choose circle 1 and then, because the minterms

in this diagram may not be in the same order as they are in the algorithm’s list of minterms,

choose circle 4 instead of circle 3. The best second Q-M algorithm I found is the

following:

* MacEspresso 1.0, a port of the Espresso algorithm by Mikhail Fridberg, was able to find a more minimal cover when running with the “–Dexact” parameter.

1. Pick the column with the least number of checks in it. If there is a tie, pick the

first one.

2. In this column, pick the check whose row will get us the greatest number of

uncovered minterms.

Using this algorithm on the above table yields:

Prime implicant    Minterms with a check (of 4, 5, 6, 7, 12, 22, 28, 30)
0X100              4, 12
X0110              6, 22
X1100              12, 28
1X110              22, 30
111X0              28, 30
001XX              4, 5, 6, 7

(Following the two rules, column 12 is picked first and row X1100 is chosen, then column 22 and row 1X110. Together with 001XX, the chosen rows cover and circle every minterm.)

Now, all the minterms are circled, so they are all covered. The algorithm stops,

and we are left with three implicants in our cover:

BCD̄Ē + ACDĒ + ĀB̄C

This is a 62.5% reduction in the number of terms and a 72.5% reduction in the number of

literals.

This process may be tedious to go through by hand, especially if there are many

inputs. However, because of the algorithmic, recursive nature of these algorithms, they

are well suited to being programmed into a computer. I programmed them in Java (see Appendix A

for source code). I implemented the first Q-M algorithm using a 3-dimensional

vector/array combination. In this case, a “vector” is a computer vector, not a math vector:

it is an array without a preset, limited size. The first vector holds the different “levels”

(iterations) of the first Q-M algorithm. Each element in the first vector is a vector of

arrays. These arrays in the third and last level of the structure are the terms. Their size is

the number of inputs, and they hold the literals. For the literals, I store a 0 for 0, a 1 for 1,

and a 2 for don’t-care (“X”). The following picture is a graphical representation of this

data structure:

[Diagram: the vector of “levels” of the algorithm; each level is a vector of terms, and each term is an array of literals holding 0, 1, or 2.]

Note that no 2s are in any of the implicants in the first level, as no terms have yet

been combined by adjacency. Each implicant in the second level contains exactly one 2,

since exactly one iteration of the algorithm has been done, yielding one don’t-care per

new implicant. Each implicant in the third level contains two 2s, and so on. The first level

(the canonical cover) is read in from a file. Each level after the first is created by

performing the first Q-M algorithm on the previous level.

The implementation of the first Q-M algorithm consists of four nested for loops.

The first (outer) loop goes through each level. The second loop goes through each term.

The third loop goes through all the terms that have not already been compared with that

term (all the ones below it), and the fourth loop compares each literal of the two terms to

see if there is exactly one difference in literals. If there is exactly one difference, the two

terms are combined by adjacency: a new term with a 2 in the different literal’s place and

all other literals the same as those in the two compared terms is put into the next level.

Also, the two compared terms’ indices are put into a set of non-primes. After each term

has been compared, the set of non-primes is used to find the terms that are prime. The

prime terms are added to the set of primes, which is the final result of the algorithm.

Pseudocode for this follows (NUMINPUTS is the number of inputs).

Function QM1( levels ) returns primes
    put { } (empty set) into primes
    for each level level do
        irredundant level (remove duplicates from level)
        put { } (empty set) into nonprimes
        for every term term in level from 1 to the size of level – 1 do
            for every term compterm in level from term + 1 to the size of level do
                put –1 into differentLiteral as a flag
                for every literal literal from 1 to NUMINPUTS do
                    if literal literal of term ≠ literal literal of compterm then
                        if differentLiteral = –1 then (there was no previous difference)
                            put literal into differentLiteral
                        else (there was a previous difference)
                            put –1 into differentLiteral as a flag
                            break (get out of this comparison loop)
                        end if
                    end if
                end for of literal
                if differentLiteral ≠ –1 then (there was exactly one difference)
                    add term to nonprimes
                    add compterm to nonprimes
                    add term with 2 in literal differentLiteral to level level + 1
                end if
            end for of compterm
        end for of term
        add all terms in level that are not in nonprimes to primes
        if the size of nonprimes = 0 then
            break (get out of the loop over levels)
        end if
    end for of level
    return primes
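The inner part of this pseudocode maps fairly directly onto Java. The sketch below (a simplified illustration, not the source in Appendix A) performs a single pass from one level to the next: it compares every pair of terms, combines any pair that differs in exactly one literal by writing a 2 into that position, and records which terms took part in a combination. Duplicate removal and the outer loop over levels are omitted.

    import java.util.*;

    public class QmCombinePass {
        // Combine every pair of terms in `level` that differ in exactly one literal.
        // Combined terms (with a 2 in the differing position) go into the next level;
        // the indices of terms that took part in some combination go into `nonPrimes`.
        static List<int[]> combine(List<int[]> level, Set<Integer> nonPrimes) {
            List<int[]> nextLevel = new ArrayList<>();
            for (int a = 0; a < level.size() - 1; a++) {
                for (int b = a + 1; b < level.size(); b++) {
                    int[] ta = level.get(a), tb = level.get(b);
                    int diff = -1;                       // index of the single differing literal
                    for (int i = 0; i < ta.length; i++) {
                        if (ta[i] != tb[i]) {
                            if (diff == -1) diff = i;    // first difference found
                            else { diff = -1; break; }   // second difference: give up on this pair
                        }
                    }
                    if (diff != -1) {
                        nonPrimes.add(a);
                        nonPrimes.add(b);
                        int[] merged = ta.clone();
                        merged[diff] = 2;                // 2 marks the new don't-care position
                        nextLevel.add(merged);
                    }
                }
            }
            return nextLevel;
        }
    }

Terms that never end up in nonPrimes after a pass are the prime-implicant candidates for that level, exactly as in the pseudocode above.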

Because of its four nested loops, the first Q-M algorithm takes a long time to
execute. For just one iteration of the outer loop (for only one level), there are
approximately L iterations of the term loop, and each of those has L – term iterations of
the compterm loop, where L is the size of the level. Thus, for each level, the total number
of times the literal loop is called is approximately:
L + (L − 1) + (L − 2) + ... + (L − L) = Σ(k=0 to L) (L − k)

Simplifying this,

Σ(k=0 to L) (L − k) = Σ(k=1 to L+1) (L − k + 1) = Σ(k=1 to L+1) L − Σ(k=1 to L+1) k + Σ(k=1 to L+1) 1

= L² + L − (L + 1)(L + 2)/2 + L + 1 = L² + 2L + 1 − (L² + 3L + 2)/2 = (2L² + 4L + 2 − L² − 3L − 2)/2 = (L² + L)/2

Then, for each of these iterations, there is an iteration of the literal loop. The literal loop

always has NUMINPUTS iterations. So, if N = NUMINPUTS, each level takes about

O(((L² + L)/2) ⋅ N) = O(L²N) time. This is a very long time considering that the number of

inputs (N) and the number of terms in a level (L) can both commonly be in the hundreds.

To implement the second Q-M algorithm, I made a 2-dimensional array for the

table of checks and a 1-dimensional array to store which minterms are “circled.” I then

wrote several different versions of the algorithm, and put the best one in my final version.

The best one is:

• Pick the column with the least number of checks in it. If there is a tie, pick the first one.

• In this column, pick the check whose row will get us the greatest number of uncovered

minterms.

Pseudocode for this follows.

Function QM2( minterms, implicants ) returns essentialPrimes
    put a 2-dimensional array into checkTable (a value of false indicates there is no check;
        the first dimension is the minterms (columns) and the second dimension is the
        implicants (rows))
    for every implicant implicant of implicants do
        for every minterm minterm of minterms do
            if implicant implies (covers) minterm then
                put true into checkTable[minterm][implicant]
            else
                put false into checkTable[minterm][implicant]
            end if
        end for of minterm
    end for of implicant
    put { } (empty set) into essentialPrimes
    put a 1-dimensional array of all false into mintermsDone (to keep track of circles)
    while not every index in mintermsDone contains true do
        put the column of checkTable (ignoring columns already marked true in mintermsDone)
            with the least number of checks into minChecksCol
        put the row with a check in minChecksCol that contains the greatest number of
            checks in columns not in mintermsDone into maxChecksRow
        put true into every index of mintermsDone that corresponds to a column with a
            check in maxChecksRow
        add implicant maxChecksRow to essentialPrimes
    end while
    return essentialPrimes
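A compact Java sketch of this selection loop (my own illustration of the heuristic, not the appendix code) might look like the following. It assumes the check table has already been built, with one row of the array per minterm and one entry per prime implicant.

    import java.util.*;

    public class QmSelect {
        // Greedy cover selection over a precomputed check table:
        // checkTable[m][r] is true when prime implicant r covers minterm m.
        // Returns the indices of the chosen implicants.
        static List<Integer> selectCover(boolean[][] checkTable) {
            int numMinterms = checkTable.length;
            int numImplicants = numMinterms > 0 ? checkTable[0].length : 0;
            boolean[] done = new boolean[numMinterms];   // the "circled" minterms
            List<Integer> chosen = new ArrayList<>();
            while (true) {
                // 1. pick the uncovered column (minterm) with the fewest checks
                int bestCol = -1, fewest = Integer.MAX_VALUE;
                for (int m = 0; m < numMinterms; m++) {
                    if (done[m]) continue;
                    int checks = 0;
                    for (int r = 0; r < numImplicants; r++) if (checkTable[m][r]) checks++;
                    if (checks < fewest) { fewest = checks; bestCol = m; }
                }
                if (bestCol == -1) break;                // every minterm is covered
                // 2. among rows checked in that column, pick the one that covers the most
                //    still-uncovered minterms
                int bestRow = -1, mostNew = -1;
                for (int r = 0; r < numImplicants; r++) {
                    if (!checkTable[bestCol][r]) continue;
                    int gain = 0;
                    for (int m = 0; m < numMinterms; m++)
                        if (!done[m] && checkTable[m][r]) gain++;
                    if (gain > mostNew) { mostNew = gain; bestRow = r; }
                }
                chosen.add(bestRow);
                for (int m = 0; m < numMinterms; m++)
                    if (checkTable[m][bestRow]) done[m] = true;   // circle the covered minterms
            }
            return chosen;
        }
    }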

In the worst case, the second Q-M algorithm can take longer to run than the first

algorithm, but normally it takes much less time. In the worst case, making checkTable takes

about O(I * M * N) time, where I is the number of prime implicants, M is the number of

minterms, and N is the number of inputs. In the best case, it takes about O(I * M) time.

Finding minChecksCol takes about O(M * I) time at first, but then, as more inputs are

covered, it decreases. Finding maxChecksRow takes about the same time. In the worst

case scenario, where the prime implicants are the minterms, making checkTable,

minChecksCol, and maxChecksRow all take the greatest amount of time. Worse still, the

amount of time to find minChecksCol and maxChecksRow in each iteration of the while

loop only decreases by 1 each time, since only 1 minterm gets covered by the chosen

implicant. This makes the total amount of time to find all minChecksCol’s values in all

the iterations of the while loop about:


(M · I) + (M · I − 1) + (M · I − 2) + (M · I − 3) + … + (M · I − M) = Σ(k=0 to M) (M · I − k).

Using the fact that I = M (because it’s the worst case) and simplifying,

Σ(k=0 to M) (M · I − k) = Σ(k=1 to M+1) (M² − k + 1) = Σ(k=1 to M+1) M² − Σ(k=1 to M+1) k + Σ(k=1 to M+1) 1

= M³ + M² − (M + 1)(M + 2)/2 + M + 1 = M³ + (2M² − M² − 3M − 2 + 2M + 2)/2

= M³ + (M² − M)/2

This is also about the amount of time it takes to find all maxChecksRow’s values.

Therefore, in the worst case, the second Q-M algorithm takes about

O(2M³ + M² − M + M²N) = O(M³ + M²N) time to run (the M²N is from making

checkTable). However, in a normal scenario, there are much fewer implicants than there

are minterms. Also, there are not normally M + 1 iterations of the while loop, since most

of the implicants cover more than one minterm. This makes the second algorithm

normally run in just more than a tenth of the time taken by the first algorithm.

Simplify is another algorithm for minimizing an SOP function [iii]. I programmed it in

Java (see Appendix B for source code). It does not reduce a given function as much as the

Q-M algorithms, but it runs in much less time. Simplify uses Shannon’s Expansion

Theorem to break up the given function into functions that can be easily reduced, and

then merges the results. Shannon’s Expansion Theorem (given earlier) states:

f(x1, x2, ..., xn) = x1 ⋅ f(1, x2, ..., xn) + x̄1 ⋅ f(0, x2, ..., xn)

where f is a boolean function with n inputs. This can be efficiently turned into computer

code by making use of the cofactor of a function with respect to a variable.

The cofactor operation makes use of matrix representations of implicants and

SOP functions. An implicant can be represented by a one-dimensional matrix of 0s, 1s,

and 2s. The nth place in the matrix is the state of the nth literal of the implicant: 0 for

complemented, 1 for uncomplemented, and 2 for don’t-care (the variable does not appear

in the implicant). For example, the implicant x1x2x̄4 is represented as [1 1 2 0]. A set of

implicants (a cover of a SOP function) can be represented by a two-dimensional matrix

whose rows are the representations of the implicants that collectively cover the function.

For example, the function f = x1x2x̄4 + x2x̄3 + x1x2x3x4 is represented as:

        [ 1 1 2 0 ]
    f = [ 2 1 0 2 ]
        [ 1 1 1 1 ]

The cofactor of an SOP function f with respect to an implicant x, written f_x, is defined

such that for every column index i between 1 and the number of inputs, inclusive, and

every row index j between 1 and the number of rows in f, inclusive:


(f_x)_i^j = ∅                 if (f_i^j = 0 and xi = 1) or (f_i^j = 1 and xi = 0)    (Case 1)

(f_x)_i^j = 2 (don’t-care)    if xi ≠ 2 and Case 1 is false                          (Case 2)

(f_x)_i^j = f_i^j             if xi = 2                                              (Case 3)

Here f_i^j is the number in the ith column and jth row of f: it is the ith literal of the jth

implicant of f. Likewise, xi is the number in the ith column (place) of x: it is the ith literal of x. The

cofactor operation forms a function f_x, which has the same number of columns as f and at most as many rows. If Case 1

ever happens ((f_x)_i^j is ∅), then (f_x)^j (the jth row of f_x) is deleted, and the (j+1)th row

becomes the jth row, the (j+2)th row becomes the (j+1)th row, and so on. For example, the

cofactor of the following function f with respect to the following implicant x is the

following function fx:

1 1 2 0 
  1 2 2 0
f = 2 1 0 2  , x = [2 1 1 2] , f x =  
  1 2 2 1
1 1 1 1 

The cofactor of a function f with respect to a variable xi of f is defined as the

cofactor of f with respect to an “implicant” whose literals are all 2s except for the ith

literal, whose value is xi. Effectively, it produces a function f_xi whose implicants all have

a 2 in the ith literal; they are the implicants of f whose ith literal had a value of 2 or xi,

with their other literals unchanged. For example, using the function f from the previous

example, f_x̄3 is found by taking the cofactor of f with respect to the implicant

representing the literal x̄3:


        [ 1 1 2 0 ]                                     [ 1 1 2 0 ]
    f = [ 2 1 0 2 ] ,   x̄3 = [2 2 0 2] ,   f_x̄3 =      [ 2 1 2 2 ]
        [ 1 1 1 1 ]

Shannon’s Expansion Theorem can be stated as:

f = xi ⋅ f_xi + x̄i ⋅ f_x̄i

for any i between 1 and the number of inputs of f, inclusive. This is because each of

the cofactors of f do not contain the implicants that would evaluate to 0 if an algebraic

application of Shannon’s Expansion Theorem were used. The cofactor operation removes

those implicants with its Case 1. In addition, the cofactor operation turns the ith literal of

each implicant into 2 (don’t-care). This corresponds to algebraically plugging in a 1 or 0

into xi, thus removing it from each term. Because xi is the variable “splitting” f, xi is called

the splitting variable.

Simplify makes use of Shannon’s Expansion Theorem to recursively break up the

given function f until it comes across a unate function. A unate function is a function that

is monotone increasing or monotone decreasing in each of its variables. A function is

monotone increasing in a variable xi if changing xi from 0 to 1 makes the output of the

function 1 (although the output of the function need not be 0 beforehand). A function is

monotone decreasing in a variable xi if changing xi from a 0 to a 1 makes the output of the

function 0 (although the output of the function need not be 1 beforehand). Note that this

says nothing about what happens if xi is changed from a 1 to a 0. Also, a non-unate

function is called binate.

A cover of a function is unate if the ith literal of every implicant of the cover of the

function is a 2 or a 1, or a 2 or a 0. That is, if each column in the matrix representation of

the cover of the function contains only 1s and 2s or only 0s and 2s. For example, the

cover of the following function U defined by its matrix representation is unate, and the

cover of the following function B is not unate:

1 2 0 1 0 2 1 1
   
1 0 2 2 0 2 0 1
   
U = 1 0 0 1 B = 2 0 1 2
1 2 2 2 0 1 1 2
 
 1 0 2 1  2 0 0 1

The cover of B is binate because its 2nd and 3rd columns contain both 1s and 0s. The cover

of U is unate because no column of U contains both 1s and 0s. If a cover of a function is

unate, then the function must be unate. However, if a cover is not unate, it is possible that

the function is unate. For example, the following function F is unate, but its cover is not:

1 1 0
F=  iv

2 0 2

In Simplify, however, we only deal with covers of functions, and it would not be

beneficial to the algorithm to check to see if the function is unate if the cover is not.

Therefore, from this point forward in the paper, I will not differentiate between unate

covers and unate functions.
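Checking whether a cover is unate is just a column scan. A minimal Java sketch of the test described above (my own illustration, using the same 0/1/2 row encoding) follows.

    public class UnateCheck {
        // A cover is unate when no column of its matrix representation contains
        // both a 0 and a 1 (each column holds only {1, 2} or only {0, 2}).
        static boolean isUnate(java.util.List<int[]> cover, int numInputs) {
            for (int col = 0; col < numInputs; col++) {
                boolean sawZero = false, sawOne = false;
                for (int[] row : cover) {
                    if (row[col] == 0) sawZero = true;
                    else if (row[col] == 1) sawOne = true;
                }
                if (sawZero && sawOne) return false;     // a binate column was found
            }
            return true;
        }
    }

Applied to the matrices above, this returns true for U and false for B, since B has columns containing both 0s and 1s.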

In Simplify, a unate function is simplified by removing the implicants of the

function that are covered by other implicants of the function. Since each column of the

unate function can only contain either 1s and 2s or 0s and 2s, it is more likely that this

simplification can happen than if the function were binate. For example, the following

function F gets reduced to the function F ′ :


        [ 1 0 2 2 ]
        [ 2 0 1 1 ]
    F = [ 1 0 2 2 ]   is reduced to   F′ = [ 1 0 2 2 ]
        [ 2 2 1 2 ]                        [ 2 2 1 2 ]
        [ 1 2 1 1 ]
        [ 2 0 1 2 ]

Note that an implicant a covers another implicant b if and only if each literal i of a is a 2

or equals the ith literal of b. That is, the ith literal of b cannot be a 1 when the ith literal of a

is a 0, a 0 when the ith literal of a is a 1, or a 2 when the ith literal of a is not 2. If it were,

then b would not be covered by a. The following is pseudocode for Unate Simplify:

Function unate_simplify( f ) returns f′ (f is a cover of a function)
    put f into f′
    for every implicant implicant of f from 1 to the number of implicants in f – 1 do
        for every implicant compImplicant of f from the number of implicants in f down
                to the index of implicant + 1 do
            if implicant contains compImplicant then
                remove compImplicant from f′
            else if compImplicant contains implicant then
                remove implicant from f′
            end if
        end for of compImplicant
    end for of implicant
    return f′
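In Java, the containment test and the unate simplification step might look like the sketch below (again an illustration under the 0/1/2 encoding, not the appendix code). Unlike the pseudocode, it explicitly keeps only the first of two identical implicants, so duplicates are removed as well.

    import java.util.ArrayList;
    import java.util.List;

    public class UnateSimplify {
        // Implicant a covers implicant b when every literal of a is either 2 or
        // equal to the corresponding literal of b.
        static boolean covers(int[] a, int[] b) {
            for (int i = 0; i < a.length; i++) {
                if (a[i] != 2 && a[i] != b[i]) return false;
            }
            return true;
        }

        // Unate simplification: drop every implicant that is covered by another
        // implicant of the cover.
        static List<int[]> unateSimplify(List<int[]> f) {
            List<int[]> result = new ArrayList<>();
            for (int i = 0; i < f.size(); i++) {
                boolean redundant = false;
                for (int j = 0; j < f.size(); j++) {
                    if (i == j || !covers(f.get(j), f.get(i))) continue;
                    // if the two implicants are identical, keep only the first one
                    if (covers(f.get(i), f.get(j)) && j > i) continue;
                    redundant = true;
                    break;
                }
                if (!redundant) result.add(f.get(i));
            }
            return result;
        }
    }

Run on the six-row cover F above, this keeps exactly the two rows of F′, since every other row is covered by [1 0 2 2] or [2 2 1 2].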

After Simplify simplifies the unate functions, it merges the functions back

together. The Merge algorithm puts together the two functions formed by performing

Shannon’s Expansion Theorem on a function to form a new function logically equivalent

to the original function. Merge not only splices the two functions into one function, but

also performs the AND operation needed on each of the functions to complete Shannon’s

Expansion Theorem. However, if Merge just performed the AND operation and put the

two resulting functions together (effectively ORing them), many redundancies would

result. Thus, before Merge performs the AND and OR operations, it checks to see if it

can take out any redundancies.

Merge takes three parameters: h1 = f_xi, h0 = f_x̄i, and splittingVar = i (the index of

the splitting variable). It returns a function h created by merging h1 and h0. To check for

redundancies, Merge first checks to see if any implicants of h1 and h0 are identical. Those

that are identical are put into a set of implicants h2 and removed from h1 and h0. Then,

Merge checks to see if any implicants in h1 cover any of those in h0, and vice versa.

Implicants in h0 that are covered by h1 are removed from h0 and put into h2. Implicants in

h1 that are covered by h0 are removed from h1 and put into h2. Because the implicants in

h2 are those that were covered by both h1 and h0, they should make h = 1 no matter what xi is.

Thus, they are not ANDed with the splitting variable. Since the implicants of h1 should

make h = 1 only when the splitting variable is 1, they should be ANDed with xi, and

because the implicants of h0 should make h = 1 only when the splitting variable is 0, they

should be ANDed with x̄i. Therefore, h = h2 + xi·h1 + x̄i·h0. Pseudocode for Merge follows.



Function merge( h0, h1, splittingVar ) returns h
    put { } (empty set) into h2
    for every implicant i of h0 do
        for every implicant l of h1 do
            if i = l then
                add i to h2
                remove i from h0
                remove l from h1
            end if
        end for of l
    end for of i
    for every implicant i of h0 do
        for every implicant l of h1 do
            if i covers l then
                add l to h2
                remove l from h1
            else if l covers i then
                add i to h2
                remove i from h0
            end if
        end for of l
    end for of i
    put h2 + x_splittingVar ⋅ h1 + x̄_splittingVar ⋅ h0 into h
    return h
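A Java rendering of Merge might look like the following sketch (an illustration consistent with the pseudocode, not the appendix source). It works on copies of h0 and h1, moves implicants that the other branch already covers into h2, and then writes the splitting variable (or its complement) back into the remaining terms.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class Merge {
        static boolean covers(int[] a, int[] b) {        // a covers b (every literal of a is 2 or matches b)
            for (int i = 0; i < a.length; i++)
                if (a[i] != 2 && a[i] != b[i]) return false;
            return true;
        }

        // h0 and h1 are the cofactors with respect to the complement of the splitting
        // variable and the splitting variable itself; splittingVar is its column index.
        static List<int[]> merge(List<int[]> h0, List<int[]> h1, int splittingVar) {
            List<int[]> h0w = new ArrayList<>(h0), h1w = new ArrayList<>(h1), h2 = new ArrayList<>();
            // 1. identical implicants go straight into h2
            for (int i = h0w.size() - 1; i >= 0; i--) {
                for (int l = h1w.size() - 1; l >= 0; l--) {
                    if (Arrays.equals(h0w.get(i), h1w.get(l))) {
                        h2.add(h0w.remove(i));
                        h1w.remove(l);
                        break;
                    }
                }
            }
            // 2. implicants covered by the other branch also go into h2
            for (int i = h0w.size() - 1; i >= 0; i--) {
                for (int l = h1w.size() - 1; l >= 0; l--) {
                    if (covers(h0w.get(i), h1w.get(l))) {
                        h2.add(h1w.remove(l));
                    } else if (covers(h1w.get(l), h0w.get(i))) {
                        h2.add(h0w.remove(i));
                        break;                           // this h0 implicant is gone; move on
                    }
                }
            }
            // 3. h = h2 + x*h1 + x'*h0: write the splitting variable back into the terms
            List<int[]> h = new ArrayList<>(h2);
            for (int[] t : h1w) { int[] c = t.clone(); c[splittingVar] = 1; h.add(c); }
            for (int[] t : h0w) { int[] c = t.clone(); c[splittingVar] = 0; h.add(c); }
            return h;
        }
    }

For example, merging h0 = {[2 2 1]} and h1 = {[2 2 2]} on column 1 moves [2 2 1] into h2 (it is covered by [2 2 2]) and yields {[2 2 1], [2 1 2]}, matching the worked example later in this section.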

Because Simplify breaks a function down into unate functions, we want to make

each of the two functions resulting from the application of Shannon’s Expansion

Theorem as “unate” as possible. Thus, we choose the splitting variable to be the “most”

binate variable. This is done by the algorithm Binate Select.



Binate Select chooses the index of the splitting variable, splittingVar, such that

the column in the matrix representation of f corresponding to the variable xsplittingVar

contains both 1s and 0s and the greatest number of 1s and 0s (the greatest sum of the

number of 1s and the number of 0s). This is done by the following pseudocode:

Function binate_select( f ) returns splittingVar
    put a 1-dimensional array into numZeros (numZeros[k] holds the number of zeros in the
        kth column of the matrix representation of f)
    put a 1-dimensional array into numOnes (numOnes[k] holds the number of ones in the
        kth column of the matrix representation of f)
    for each column col of the matrix representation of f do
        put the number of 0s in col into numZeros[col]
        put the number of 1s in col into numOnes[col]
    end for of col
    put { } (empty set) into binateColumns
    for each column col of the matrix representation of f do
        if numZeros[col] > 0 and numOnes[col] > 0 then
            (there are both 1s and 0s in col, so col is binate)
            add col to binateColumns
        end if
    end for of col
    put –1 into splittingVar as an initial value and flag saying f is unate
    put 0 into maxVal as an initial value of the sum of 0s and 1s in a column
    for each column col of the matrix representation of f do
        if numZeros[col] + numOnes[col] > maxVal and col ∈ binateColumns then
            put numZeros[col] + numOnes[col] into maxVal
            put col into splittingVar (that is, the index of col)
        end if
    end for of col
    return splittingVar
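The same selection can be written more compactly in Java; the sketch below (an illustration, not the appendix code) folds the counting and the comparison into one pass over the columns.

    public class BinateSelect {
        // Choose the splitting variable: the binate column (one containing both 0s and 1s)
        // with the greatest total count of 0s and 1s. Returns -1 when the cover is unate.
        static int binateSelect(java.util.List<int[]> f, int numInputs) {
            int splittingVar = -1, maxVal = 0;
            for (int col = 0; col < numInputs; col++) {
                int zeros = 0, ones = 0;
                for (int[] row : f) {
                    if (row[col] == 0) zeros++;
                    else if (row[col] == 1) ones++;
                }
                if (zeros > 0 && ones > 0 && zeros + ones > maxVal) {
                    maxVal = zeros + ones;
                    splittingVar = col;
                }
            }
            return splittingVar;
        }
    }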

Simplify breaks down its given function f using the following line of pseudocode:
merge( simplify( f_x̄splittingVar ), simplify( f_xsplittingVar ), splittingVar )

It does this recursively until it encounters a unate function. Then, it calls unate_simplify

on the unate function. Note that this is, in effect, simply applying Shannon’s Expansion

Theorem. That is why the function generated by Simplify is equivalent to the given

function. Pseudocode for Simplify is the following:

Function simplify( f ) returns f′
    if f is unate then
        return unate_simplify( f )
    else
        put binate_select( f ) into splittingVar
        if splittingVar = –1 then
            return unate_simplify( f )
        else
            put merge( simplify( f_x̄splittingVar ), simplify( f_xsplittingVar ), splittingVar ) into f′
            if the number of implicants in f < the number of implicants in f′ then
                return f
            else
                return f′
            end if
        end if
    end if
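Tying the pieces together, the recursive driver is short. The sketch below assumes the hypothetical helper classes from the earlier sketches (UnateCheck, BinateSelect, Cofactor, Merge, UnateSimplify) are compiled alongside it; it illustrates the control flow of the pseudocode and is not the code in Appendix B.

    import java.util.List;

    public class Simplify {
        // Recursive driver mirroring the pseudocode above.
        static List<int[]> simplify(List<int[]> f, int numInputs) {
            if (UnateCheck.isUnate(f, numInputs)) {
                return UnateSimplify.unateSimplify(f);
            }
            int splittingVar = BinateSelect.binateSelect(f, numInputs);
            if (splittingVar == -1) {
                return UnateSimplify.unateSimplify(f);
            }
            List<int[]> h1 = simplify(Cofactor.cofactorVar(f, splittingVar, 1, numInputs), numInputs);
            List<int[]> h0 = simplify(Cofactor.cofactorVar(f, splittingVar, 0, numInputs), numInputs);
            List<int[]> merged = Merge.merge(h0, h1, splittingVar);
            // keep whichever cover has fewer implicants, as in the pseudocode
            return f.size() < merged.size() ? f : merged;
        }
    }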

For an example, let us call Simplify on the following function:

1 0 1
 
0 0 1
f = 
1 1 1
1 1 2
36

First, Simplify calls binate_select, which selects column 1 as the splitting

variable. Simplify then cofactors f with respect to the splitting variable and to the

complement of the splitting variable. This breaks up f:

1 0 1
 
0 0 1
 
1 1 1
1 1 2

x1 = [0 2 2] x1 = [1 2 2]

2 0 1
 
[2 0 1] 2 1 1
 
2 1 2

Simplify is then recursively called on each of these “subfunctions.” Because the function

on the left is unate, unate_simplify is called on it. Since this subfunction only has one

implicant, unate_simplify cannot simplify it further. For the right subfunction, column 2

is selected as the splitting variable, and the process repeats recursively:

2 0 1
 
2 1 1
 
2 1 2

x2 = [2 0 2] x2 = [2 1 2]

2 2 1
[2 2 1]  
2 2 2

Because both the left and right subfunctions are unate, unate_simplify is called on each of

them. Since there is only one implicant in the left subfunction, unate_simplify cannot

simplify it further. In the right subfunction, [2 2 2] covers [2 2 1], so unate_simplify

removes [2 2 1].

Then, merge is called to merge together the left branch and the right branch,

[2 2 1] and [2 2 2], respectively. Because [2 2 2] contains [2 2 1], [2 2 1] is put into h2

and removed from h0, and is not ANDed with the splitting variable. Thus, merge yields:

       [2 2 1]                 [2 2 2]
          x̄2                      x2
                [ 2 2 1 ]
                [ 2 1 2 ]

This is then merged with the subfunction obtained by cofactoring f with respect to

x̄1 ([2 0 1] is not ANDed with x̄1 because it is covered by [2 2 1]):

       [2 0 1]                 [ 2 2 1 ]
                               [ 2 1 2 ]
          x̄1                      x1
                [ 2 0 1 ]
                [ 1 2 1 ]
                [ 1 1 2 ]

This is the final result by Simplify. It has three terms and six literals: a 25% reduction in

terms and a 50% reduction in literals.

Because Simplify breaks up a function into easy-to-reduce unate functions with a

binary recursion tree, it is very efficient in terms of time. However, it is not guaranteed to

even come up with a set of prime implicants, much less find a minimum cover. To

compare Q-M and Simplify, I created a program that generated random functions by

making lists of random minterms. I then ran each on a variety of functions under

controlled conditions*. The results are shown in the following tables. The reduction

results are measured in terms instead of literals because these

algorithms are used for large functions, and the hardware used to realize large functions,

programmable logic arrays (PLAs), is structured in such a way that it does not matter

how many literals are in each term, but it does matter how many terms there are.

*
PowerTower 180 with 64MB RAM running Mac OS 8.5.1 and MRJ 2.0. 5MB of RAM was allocated
each to Simplify and Q-M as applications generated by JBindery 2.0. Each ran with extensions off and no
background applications (besides the Finder) running. Three consecutive trials were taken for each
combination of minterms. Times are averages; other results were constant.
Results for 7 inputs (terms in the reduced cover, percent reduction in terms, and running time in seconds):

Minterms    Simplify terms    Q-M terms    Simplify reduction    Q-M reduction    Simplify time    Q-M time
   101            39               26             61.39%              74.26%           0.165           4.799
    59            33               28             44.07%              52.54%           0.099           0.552
    28            16               14             42.86%              50.00%           0.043           0.157
Results for 8 inputs (times in seconds):

Minterms    Simplify terms    Q-M terms    Simplify reduction    Q-M reduction    Simplify time    Q-M time
   210            66               41             68.57%              80.48%           0.382          38.778
   155            67               48             56.77%              69.03%           0.328           7.315
    50            31               30             38.00%              40.00%           0.093           0.291
Results for 9 inputs (times in seconds):

Minterms    Simplify terms    Q-M terms    Simplify reduction    Q-M reduction    Simplify time    Q-M time
   413           131               70             68.28%              83.05%           0.989          88.400
   265           144               96             45.66%              63.77%           0.927          24.717
   114            76               64             33.33%              43.86%           0.331           1.312
Results for 10 inputs (times in seconds):

Minterms    Simplify terms    Q-M terms    Simplify reduction    Q-M reduction    Simplify time    Q-M time
   823           262              135             68.17%              83.60%           2.601        1412.730
   500           257              171             48.60%              65.80%           2.163          79.618
   119            93               84             21.85%              29.41%           0.431           1.243

The Espresso-II algorithm performs the following steps:

1. Complement: Compute the complement of the given function (it will be

needed in the rest of the algorithm).

2. Expand: Make each implicant of the function contain as many don’t-cares as

possible without covering the complement of the function.

3. Essential Primes: Find the essential primes and make a list of them.

4. Irredundant: Remove redundancies from the current cover

5. Reduce: Put 1s and 0s back into the 2s of the expanded implicants. The

benefit of this is that implicants with fewer 2s can be expanded in more

directions when the loop is iterated again.

6. Repeat steps 2-5 until there is no improvement.

7. Lastgasp: Try the same process in a slightly different way. If there is

improvement, repeat.

8. Makesparse: After the algorithm has completed, different parts of the function

may be in different places. Put together the function and simplify it as much

as possible [v].

Unfortunately, this is a very intricate and technical algorithm, and I did not have enough

time to research it past the Expand step.

In the amount of time I had to complete this paper, I was not able to accomplish

several things. I was not able to explore the second Q-M algorithm enough to find the

algorithm that guarantees a minimum cover. Also, I was not able to find out enough

about the Espresso algorithm in order to program it. If I did have enough time to program

the Espresso algorithm, I would compare it with the Q-M and the Simplify algorithms.

Since I was able to obtain a program of Espresso written in native Macintosh code (this

makes it faster than programs in Java), MacEspresso 1.0, I am able to tell that Espresso

normally produces results with more literals than the results of the Q-M algorithms, and

fewer literals than the results of the Simplify algorithm. However, I cannot tell how it

compares in terms of time. I believe that it takes significantly less time than the Q-M

algorithms and more time than Simplify.


Notes

i. Proofs of Absorption and Adsorption from Jerry D. Daniels, Digital Design from Zero to One, New York: John Wiley & Sons, Inc., 1996, p. 103.

ii. Digital Design from Zero to One, p. 178.

iii. Simplify adapted from Robert K. Brayton, Gary D. Hachtel, Curtis T. McMullen, Alberto L. Sangiovanni-Vincentelli, Logic Minimization Algorithms for VLSI Synthesis, Boston: Kluwer Academic Publishers, 1997.

iv. Logic Minimization Algorithms for VLSI Synthesis, p. 38.

v. Logic Minimization Algorithms for VLSI Synthesis, p. 13.
