Two Level Logic Minimization
Tsutomu Sasao
1.1. INTRODUCTION
Let f be a function from {0, 1}^n into {0, 1, ∗}. The care set of f is
defined as f^{-1}(1), and will be denoted f^1. The don’t-care set of f, f^{-1}(∗),
will be denoted f^∗; it is the set on which the Boolean function f is not
defined. We denote by f^{1∗} the union of the care set and the don’t-care set. If
the don’t-care set is empty, the Boolean function is completely specified, or
total; otherwise it is incompletely specified. We make no distinction between
a completely specified Boolean function and its on-set, since f is uniquely
represented by its on-set. Let g and f be Boolean functions. We say
that g covers f iff f^1 ⊆ g^1 ⊆ f^{1∗}.
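As a quick illustration (ours, not from the text), the cover relation can be
checked directly on explicit minterm sets; the functions and minterm values
below are only an example:

    # Sketch: g covers f iff f^1 ⊆ g^1 ⊆ f^1 ∪ f^*.
    def covers(g_on, f_on, f_dc):
        """True iff g covers f, with g total and f given by on-set/dc-set."""
        return f_on <= g_on <= (f_on | f_dc)

    f_on, f_dc = {0b101}, {0b111}                # f^1 and f^* as minterm sets
    print(covers({0b101}, f_on, f_dc))           # True: g equals the care set
    print(covers({0b101, 0b111}, f_on, f_dc))    # True: g absorbs the don't-care
    print(covers({0b101, 0b110}, f_on, f_dc))    # False: g^1 leaves f^{1*}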
A sum-of-products (SOP), or disjunctive normal form, represents
a total Boolean function. For instance, x1 x̄2 x3 + x1 x3 x4 + x̄2 x̄4 is a
SOP. Two-level logic minimization consists of finding a minimum SOP
that covers a given Boolean function f. Minimization of multi-output
Boolean functions (i.e., functions from {0, 1}^n into {0, 1, ∗}^m) can be re-
duced to single-output Boolean function minimization [70, 3, 24, 15].
Two-level logic minimization arises often in logic synthesis, where one
tries to represent Boolean functions with a two-level NOT, AND, and OR
netlist [35, 8, 67]. It has various applications in reliability analysis [33, 17]
and automated reasoning [28, 40, 41, 61, 62]. Section 1.2 presents some
aspects of exact minimization. Section 1.3 discusses heuristic minimiza-
tion.
(b) Consider the covering matrix ⟨f^1, PI(f^{1∗})⟩, with rows labeled by
the minterms of f and columns labeled by the PIs of f.
         y1  y2  y3  y4  y5
    x1    1   1   1   1   .
    x2    1   1   1   1   .
    x3    1   .   .   1   .
    x4    1   .   .   1   1
    x5    1   .   .   1   .
    x6    .   .   .   .   1

    → (column y5 is essential: it is the only column covering x6; it is
       selected, and the rows it covers are removed)

         y1  y2  y3  y4
    x1    1   1   1   1
    x2    1   1   1   1
    x3    1   .   .   1
    x5    1   .   .   1
[Figure: block-diagonal decomposition of the covering matrix into independent blocks B1, B2, . . . , Bn.]
         y1  y2  y3  y4  y5
    x1    1   1   1   1   .
    x2    1   1   1   1   .
    x3    1   .   .   1   .
    x4    1   .   .   1   1
    x5    1   .   .   1   .
    x6    .   .   .   .   1

    → (row dominance: any column covering x3 also covers x1, x2, x4, and
       x5, so the dominating rows are removed)

         y1  y2  y3  y4  y5
    x3    1   .   .   1   .
    x6    .   .   .   .   1
         y1  y2  y3  y4  y5
    x1    1   1   1   1   .
    x2    1   1   1   1   .
    x3    1   .   .   1   .
    x4    1   .   .   1   1
    x5    1   .   .   1   .
    x6    .   .   .   .   1

    → (column dominance: y2, y3, and y4 cover no row that y1 does not
       cover, so they are removed)

         y1  y5
    x1    1   .
    x2    1   .
    x3    1   .
    x4    1   1
    x5    1   .
    x6    .   1
                  y1            y2        y3          y4              y5
                  {1,2,3,4,5}   {1,2,3}   {1,2,3,7}   {1,2,3,4,5,7}   {4,6,7}
    x1 = {1}           1            1         1            1             .
    x2 = {2}           1            1         1            1             .
    x3 = {3,4}         1            .         .            1             .
    x4 = {4}           1            .         .            1             1
    x5 = {5}           1            .         .            1             .
    x6 = {6,7}         .            .         .            .             1
After the reductions, each remaining row is covered by a single column:

                     {1,2,3,4,5}   {4,6,7}
    {1,2,3,4,5}           1           .
    {4,6,7}               .           1
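The reductions illustrated above fit in a few lines. Here is a sketch (ours;
unit costs, with the matrix as a dictionary mapping each row to the set of
columns covering it) that applies essential columns, row dominance, and
column dominance to a fixed point; on the instance above it returns the
solution {y1, y5}:

    def reduce_covering_matrix(rows):
        """rows: {x: set of columns y covering x}. Returns the reduced
        matrix and the columns forced into every solution (unit costs)."""
        chosen = set()
        while True:
            # Essential columns: some x is covered by exactly one y.
            essentials = {next(iter(c)) for c in rows.values() if len(c) == 1}
            if essentials:
                chosen |= essentials
                rows = {x: c for x, c in rows.items() if not (c & essentials)}
                continue
            # Row dominance: drop x1 when covering some other row x2
            # necessarily covers x1 too (ties broken by the row's name).
            dominated = {x1 for x1, c1 in rows.items()
                         if any(x2 != x1 and c2 <= c1 and (c2 < c1 or x2 < x1)
                                for x2, c2 in rows.items())}
            if dominated:
                rows = {x: c for x, c in rows.items() if x not in dominated}
                continue
            # Column dominance: drop y1 when some y2 covers a superset of
            # the rows y1 covers (ties broken by the column's name).
            rows_of = {}
            for x, c in rows.items():
                for y in c:
                    rows_of.setdefault(y, set()).add(x)
            dom = {y1 for y1, r1 in rows_of.items()
                   if any(y2 != y1 and r1 <= r2 and (r1 < r2 or y2 < y1)
                          for y2, r2 in rows_of.items())}
            if dom:
                rows = {x: c - dom for x, c in rows.items()}
                continue
            return rows, chosen

    ys = {"y1": {1, 2, 3, 4, 5}, "y2": {1, 2, 3}, "y3": {1, 2, 3, 7},
          "y4": {1, 2, 3, 4, 5, 7}, "y5": {4, 6, 7}}
    xs = {"x1": {1}, "x2": {2}, "x3": {3, 4}, "x4": {4},
          "x5": {5}, "x6": {6, 7}}
    rows = {x: {y for y, s in ys.items() if e <= s} for x, e in xs.items()}
    print(reduce_covering_matrix(rows))   # ({}, {'y1', 'y5'})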
[Figure: a reduction splitting row x into rows x1,1, x1,2, x2,1, x2,2; the matrix over y1, . . . , y5 is reduced to one over y2, . . . , y5.]
This function increases with the number of elements that y covers, but
also with the “quality” of the elements it covers. The fewer y’s cover an
element x, the harder x is to cover, and the larger its contribution
to the function. Thus this function favors y’s that cover x’s that are
covered by few other y’s.
C.upper is the global upper bound, i.e., the cost of the best global solution
found so far.
Obviously C.path + C.lower < C.upper must be enforced. If it is not
satisfied, the search tree rooted at C is pruned. This is the usual pruning
procedure.
In case C.lower is the lower bound computed thanks to an indepen-
dent set (Section 1.2.3.2), a stronger result can be derived: if C.path +
Cl .lower ≥ C.upper, then both Cl and Cr can be pruned, and Cl .lower
is a strictly better lower bound for C [20, 22]. What is interesting is
that if the lower bound of Cl satisfies the given condition, we do not
even need to examine Cr .
Let Y′ be the set of y’s that do not cover any element of X′, and whose cost
added to C.path + C.lower exceeds the upper bound. Then C can be
reduced to ⟨X, Y − Y′⟩ [20].
When the limit lower bound is reached for some y’s, reducing ⟨X, Y⟩ to
⟨X, Y − Y′⟩ makes the recursion terminate almost immediately in practice:
the lower bound of the latter nearly always exceeds the upper bound,
or a better solution is found. To illustrate the gain the limit lower
bound produces, assume that Cost(y) = 1 for all y. Then instead of
terminating the recursion when the global lower bound (i.e., C.path +
C.lower) reaches C.upper, the search is nearly always pruned when the
global lower bound reaches C.upper − 1. This gain of 1 in the depth of
the search can produce an exponential reduction of the search space, and
in practice dramatically reduces the exploration time. An extension of
the idea of the limit lower bound, negative thinking, is presented in [31].
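The two pruning rules can be shown in a compact branch-and-bound sketch
(ours; unit costs, the same dictionary representation as above, and a greedy
independent set of rows as the lower bound). The stronger rule appears as
the extra test on the left child:

    import math

    def indep_lower_bound(rows):
        """Greedy maximal independent set of rows (rows that pairwise share
        no column): each needs its own column, so with unit costs the size
        of the set is a lower bound (Section 1.2.3.2)."""
        used, lb = set(), 0
        for cols in sorted(rows.values(), key=len):
            if not (cols & used):
                used |= cols
                lb += 1
        return lb

    def bnb(rows, path, partial, best):
        """best = [cost, solution], mutated in place. Unit costs."""
        if not rows:                                  # everything is covered
            if path < best[0]:
                best[0], best[1] = path, set(partial)
            return
        if path + indep_lower_bound(rows) >= best[0]:
            return                                    # usual pruning rule
        freq = {}                                     # branch on a frequent y
        for cols in rows.values():
            for y in cols:
                freq[y] = freq.get(y, 0) + 1
        y = max(freq, key=freq.get)
        taken = {x: c for x, c in rows.items() if y not in c}
        # Stronger rule [20, 22]: if C.path + Cl.lower >= C.upper, prune
        # both children -- Cr does not even have to be examined.
        if path + indep_lower_bound(taken) >= best[0]:
            return
        bnb(taken, path + 1, partial | {y}, best)     # left child: take y
        rejected = {x: c - {y} for x, c in rows.items()}
        if all(rejected.values()):                    # else y was essential
            bnb(rejected, path, partial, best)        # right child: reject y

    rows = {"x1": {"y1", "y2"}, "x2": {"y2", "y3"}, "x3": {"y1", "y3"}}
    best = [math.inf, None]
    bnb(rows, 0, set(), best)
    print(best)   # [2, ...]: a minimum cover of size 2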
    S ← ∅;
    while X ≠ ∅ do {
        y ← arg min_{y∈Y} Cost(y) / Γ(y ∩ X);
        X ← X − y;
        S ← S ∪ {y};
    }
    return S;
Figure 1.7. Greedy computation of a solution.
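To make Fig. 1.7 concrete, here is a runnable Python sketch (ours). The
particular Γ used, a sum of weights γ(x) that grow as fewer columns cover
x, is one choice consistent with the discussion above, not a prescription of
the text:

    def greedy_cover(rows, cost):
        """rows: {x: set of y's covering x}; cost: {y: positive cost}."""
        covers_of = {}                       # y -> set of x's it covers
        for x, ys in rows.items():
            for y in ys:
                covers_of.setdefault(y, set()).add(x)
        # gamma(x): elements covered by few y's weigh more.
        gamma = {x: 1.0 / len(ys) for x, ys in rows.items()}
        X, S = set(rows), set()
        while X:                             # the loop of Fig. 1.7
            def ratio(y):
                covered = covers_of[y] & X
                return (cost[y] / sum(gamma[x] for x in covered)
                        if covered else float("inf"))
            y = min(covers_of, key=ratio)    # arg min Cost(y) / Γ(y ∩ X)
            X -= covers_of[y]
            S.add(y)
        return S

    rows = {"x1": {"y1", "y2"}, "x2": {"y2", "y3"}, "x3": {"y3"}}
    print(greedy_cover(rows, {"y1": 1, "y2": 1, "y3": 1}))  # e.g. {'y3', 'y1'}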
This log inequality holds for any function γ. Here, since we are looking
for the best lower bound, we are interested in minimizing r and max-
imizing Cost(S). Since γ(x) captures the difficulty of covering x (the
greater, the more difficult), we can reverse the criterion that would yield
a good upper bound in order to obtain a good lower bound. For instance,
we can take γ(x) = |x|.
1.2.4. CONCLUSION
Exact two-level logic minimization is a very hard problem, involving
non-polynomial and NP-complete subproblems.
Significant improvements in solving the set covering problem have
been made to the branch-and-bound procedure, thanks to better lower
bound procedures [20, 23, 31, 44].
The generation of the cyclic core expressing the two-level logic min-
imization problem has been dramatically improved, thanks to a refor-
mulation of the problem in terms of transposing functions and to efficient
BDD/ZDD-based implicit algorithms. A full presentation can be found
in [20, 21].
[Figures: three-variable Karnaugh maps (panels (a)–(d)) illustrating the successive transformations of cubes c1 and c2.]
[Figure 1.11: Karnaugh maps (a)–(c) of F3, showing cubes c1, c2, c3, and c4.]
Example 1.3.3 Fig. 1.11(a) shows the SOP F3 = x2 x̄3 + x1 x̄2 + x̄1 x2 +
x̄1 x3 , which consists of prime implicants only. F3 can be rewritten as
shown in Fig. 1.11(b) and (c).
Note that the quality of the solutions depends on the order of the
DELETE operations. In Fig. 1.11(c), if we DELETE c2 first, then we
have an ISOP with four products. On the other hand, if we DELETE
c1 and then c3 , then we have a minimum SOP with only three products.
[Figure: Karnaugh maps (a) and (b) with cubes c1–c4.]
[Figure 1.13: Karnaugh maps (a) and (b) with cubes c3–c7.]
Given a variable x and a set S ⊆ {0, 1}, the literal x^S denotes the
function:

    x^S = x    when S = {1},
        = x̄    when S = {0},
        = 1    when S = {0, 1}, and
        = 0    when S = φ.
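In Python, this four-case definition reduces to a membership test; the
encoding below is ours:

    def literal(value, S):
        """x^S at x = value: x if S == {1}, x̄ if S == {0},
        constant 1 if S == {0, 1}, constant 0 if S == set()."""
        return 1 if value in S else 0

    def product_value(assignment, cube):
        """AND of the literals x_i^{S_i} over all variables."""
        return int(all(literal(v, S) for v, S in zip(assignment, cube)))

    cube = ({1}, {0}, {1})                  # x1 x̄2 x3
    print(product_value((1, 0, 1), cube))   # 1
    print(product_value((1, 1, 1), cube))   # 0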
We obtain the SOP shown in Fig. 1.13(b). Note that we can then reduce
the number of products by applying the EXPAND operation to c7 .
[Figure 1.14: four-variable Karnaugh map of x̄1 x̄3 x̄4 + x1 x2 + x3 x4, with cubes c1, c2, c3 and minterms v1, v2 marked.]
Example 1.3.5 Let us prove that the ISOP x̄1 x̄3 x̄4 +x1 x2 +x3 x4 shown
in Fig. 1.14 is minimum by showing that it consists only of EPIs. First,
consider c1 = x̄1 x̄3 x̄4 . It covers two minterms v1 and v2 . Note that c1 is
the only PI that covers v1 = x̄1 x̄2 x̄3 x̄4 . Thus, v1 is a distinguished
minterm. On the other hand, c1 and x2 x̄3 x̄4 cover v2 = x̄1 x2 x̄3 x̄4 .
Thus, v2 is not a distinguished minterm. Similarly, in c2 , x1 x2 x3 x̄4
and x1 x2 x̄3 x4 are distinguished minterms. In c3 , x̄1 x̄2 x3 x4 , x̄1 x2 x3 x4 ,
and x1 x̄2 x3 x4 are distinguished minterms. Since each of c1 , c2 , and
c3 covers a distinguished minterm, they are all essential. Thus the ISOP
shown in Fig. 1.14 is made of prime implicants that are all essential,
which means it is a minimum SOP.
We say two products are distance 1 apart if they share no minterms and
a minterm of one is adjacent to a minterm of the other. Let c be a
product and G be a set of products. Then,
    cons(c, G) = ∪_{ck ∈ G} cons(c, ck).
Given a product c in a set F of PIs, the following test decides whether
c is an EPI:
1) Let G = F − {c}.
2) Partition G into G1, the products that intersect c; G2, the products
at distance 1 from c; and G3, the remaining products.
3) H = cons(c, G2).
4) c is an EPI iff c ⊈ (H ∪ G1).
Example 1.3.6 Find the EPIs in Fig. 1.14. The set of PIs is F =
{c1 , c2 , c3 }, where c1 = x̄1 x̄3 x̄4 , c2 = x1 x2 , and c3 = x3 x4 .
First, let us decide whether c1 is essential or not.
1) Let c = c1 , G = {c2 , c3 }.
2) G1 = φ, G2 = {c2 }, G3 = {c3 }.
3) H = cons(c1 , c2 ) = (x̄1 + x1 )x2 x̄3 x̄4 = x2 x̄3 x̄4 .
4) Since c1 ⊈ (H ∪ G1 ), c1 is an EPI.
Next, let us look at c2 .
5) Let c = c2 , G = {c1 , c3 }.
6) G1 = {c3 }, G2 = {c1 }, G3 = φ.
7) H = cons(c2 , c1 ) = x2 x̄3 x̄4 .
8) Since c2 ⊈ (H ∪ G1 ), c2 is an EPI.
Finally, consider c3 .
9) Let c = c3 , G = {c1 , c2 }.
10) G1 = {c2 }, G2 = φ, G3 = {c1 }.
11) H = cons(c3 , φ) = 0.
12) Since c3 ⊈ (H ∪ G1 ), c3 is an EPI.
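The test can be prototyped with positional cube notation: a product is a
tuple of subsets of {0, 1}, one per variable, with {0, 1} meaning the variable
does not appear. The sketch below (ours, an illustration rather than the
chapter's code) re-checks Example 1.3.6:

    from itertools import product as cartesian

    def distance(c1, c2):
        """Number of variables on which the two products conflict."""
        return sum(1 for a, b in zip(c1, c2) if not (a & b))

    def cons(c1, c2):
        """Consensus of two products at distance 1."""
        assert distance(c1, c2) == 1
        return tuple((a | b) if not (a & b) else (a & b)
                     for a, b in zip(c1, c2))

    def minterms(cubes):
        m = set()
        for c in cubes:
            m |= set(cartesian(*c))
        return m

    def is_epi(c, F):
        """Steps 1)-4): c is an EPI iff a minterm of c escapes H ∪ G1."""
        G = [g for g in F if g != c]
        G1 = [g for g in G if distance(c, g) == 0]   # products meeting c
        G2 = [g for g in G if distance(c, g) == 1]
        H = [cons(c, g) for g in G2]
        return not minterms([c]) <= minterms(G1 + H)

    # Example 1.3.6: c1 = x̄1x̄3x̄4, c2 = x1x2, c3 = x3x4 over (x1, x2, x3, x4).
    DC = frozenset({0, 1})
    c1 = (frozenset({0}), DC, frozenset({0}), frozenset({0}))
    c2 = (frozenset({1}), frozenset({1}), DC, DC)
    c3 = (DC, DC, frozenset({1}), frozenset({1}))
    print([is_epi(c, [c1, c2, c3]) for c in (c1, c2, c3)])  # [True, True, True]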
[Figure 1.15: four-variable Karnaugh map with products c1, . . . , c7.]
4) Convert CV (f ) into an SOP, and find the product with the mini-
mum number of literals.
5) The set of PIs that corresponds to the above product, together with
the PIs in ER, forms the minimal cover.
Example 1.3.7 Consider the SOP shown in Fig. 1.15.
1) F = {c1 , c2 , c3 , c4 , c5 , c6 , c7 } is a given set of PIs. Partition F into
three sets: ER = {c1 , c7 }. RT = φ. RP = {c2 , c3 , c4 , c5 , c6 }.
2) H(c2 ) = [{c2 , c3 }]; H(c3 ) = [{c2 , c3 }, {c3 , c4 }]; H(c4 ) = [{c3 , c4 }, {c4 , c5 }];
H(c5 ) = [{c4 , c5 }, {c5 , c6 }]; H(c6 ) = [{c5 , c6 }].
3-4) CV (f ) = (g2 + g3 )(g2 + g3 )(g3 + g4 )(g3 + g4 )(g4 + g5 )(g4 + g5 )(g5 + g6 )(g5 + g6 )
             = (g2 + g3 )(g3 + g4 )(g4 + g5 )(g5 + g6 )
             = g2 g4 g5 + g2 g4 g6 + g3 g5 + g3 g4 g6 .
5) g3 g5 is the product with the fewest literals, and the corresponding
cover is {c3 , c5 }. Thus {c3 , c5 }∪ER = {c1 , c3 , c5 , c7 } is a minimum
cover.
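The expansion in steps 3-4) and the choice in step 5) can be prototyped
directly. Below is a sketch (ours) that multiplies out CV (f ) with absorption
and recovers the result of Example 1.3.7:

    def expand_pos(clauses):
        """Product-of-sums over the g_i -> SOP, with absorption."""
        terms = [frozenset()]
        for clause in clauses:
            new = {t | {g} for t in terms for g in clause}
            # absorption: drop terms strictly containing another term
            terms = [t for t in new if not any(u < t for u in new)]
        return terms

    clauses = [{"g2", "g3"}, {"g3", "g4"}, {"g4", "g5"}, {"g5", "g6"}]
    sop = expand_pos(clauses)
    print(sorted(sorted(t) for t in sop))
    # [['g2', 'g4', 'g5'], ['g2', 'g4', 'g6'], ['g3', 'g4', 'g6'], ['g3', 'g5']]
    print(sorted(min(sop, key=len)))  # ['g3', 'g5'] -> {c3, c5}, plus ER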
Algorithm 1.3.3 and Example 1.3.7 show the principle of the method.
The actual procedure used in ESPRESSO is more complicated than this
outline.
1.3.5. ESPRESSO
ESPRESSO is the most popular heuristic two-level logic minimizer.
First, it obtains the complement of the original cover, which will be used
in the EXPAND operation. Second, it applies the EXPAND and IRRE-
DUNDANT operations to obtain an ISOP. Third, it extracts the set of
EPIs, and iterates the REDUCE, EXPAND, and IRREDUNDANT oper-
ations until no product can be reduced any more. Fourth,
it attempts to REDUCE and EXPAND by using different heuristics.
Finally, ESPRESSO tries to reduce the number of connections in the
output part.
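The control flow of this iteration can be sketched as follows (a rough
Python illustration of the phases just described, not the actual ESPRESSO
code; the cube operations are passed in as functions, and the fourth-phase
heuristics and the output-part reduction are omitted):

    def espresso_skeleton(cover, complement, expand, irredundant,
                          reduce_, essentials):
        """Control-flow sketch of the ESPRESSO iteration (illustrative)."""
        off = complement(cover)                    # 1. complement, for EXPAND
        cover = irredundant(expand(cover, off))    # 2. first ISOP
        epis = essentials(cover)                   # 3. set the EPIs aside
        rest = [c for c in cover if c not in epis]
        best = len(rest)
        while True:                                # REDUCE/EXPAND/IRREDUNDANT
            rest = irredundant(expand(reduce_(rest), off))
            if len(rest) >= best:                  # stop when an iteration
                return epis + rest                 # gains nothing (simplified)
            best = len(rest)

    # Trivial stand-ins, only to show that the skeleton runs:
    identity = lambda c, off=None: c
    print(espresso_skeleton(["x1x2", "x2x3"], lambda c: [],
                            identity, lambda c: c, lambda c: c, lambda c: []))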
1.4. CONCLUSION
Two-level logic minimization, both exact and heuristic, has received
much attention, since it is critical in logic synthesis, as well as in
other real-life applications. It is a very mature field but, due to the diffi-
culty of the problem, still an active research domain. Besides improvements
of the existing methods, the most spectacular being
implicit minimization and new efficient lower bound techniques used during
branch-and-bound, there are attempts at identifying classes of functions
for which logic minimization can be done in polynomial time.
ESPRESSO-EXACT [69] and SCHERZO [18] are the best known,
respectively explicit and implicit/explicit, exact minimization proce-
dures. Textbooks that explain exact SOP minimization include [20, 34,
42, 48, 56].
References
[1] Z. Arevalo and J. G. Bredeson, “A method to simplify a Boolean function into a
near minimal sum-of-products for programmable logic arrays,” IEEE Trans. on
Comput.
[41] J. De Kleer and B.C. Williams, “Diagnosing multiple faults,” Artificial Intelli-
gence, Vol. 32, pp. 97–130, 1987.
[42] Z. Kohavi, Switching and Finite Automata Theory, McGraw-Hill Book Co.,
1970.
[43] Y. S. Kuo, “Generating essential primes for a Boolean function with multiple-
valued inputs,” IEEE Trans. on Comput., Vol. C-36, No. 3, March 1987.
[44] S. Liao and S. Devadas, “Solving covering problems using LPR-based lower
bounds,” Proc. 34th DAC Conference, Anaheim, CA, USA, pp. 117–120, June
1997.
[45] B. Lin, O. Coudert, and J.C. Madre, “Symbolic prime generation for multiple-
valued functions,” Proc. 29th DAC, pp. 40–44, June 1992.
[46] E. J. McCluskey, Jr., “Minimization of Boolean functions,” Bell System Technical
Journal, Vol. 35, pp. 1417–1444, Nov. 1956.
[47] E. J. McCluskey, Jr. and H. Schorr, “Essential multiple output prime implicants,”
Proc. Symp. on Math. Theory of Automata, Vol. 12, Polytech. Inst. of Brooklyn,
New York, NY, pp. 437–457, April 1962.
[48] E. J. McCluskey, Introduction to the Theory of Switching Circuits, McGraw-Hill,
New York, 1965.
[49] P.C. McGeer, J. Sanghavi, R.K. Brayton, and A.L. Sangiovanni-Vincentelli,
“ESPRESSO-SIGNATURE: A new exact minimizer for logic functions,” IEEE
Trans. on VLSI, Vol. 1, No. 4, pp. 432–440, Dec. 1993.
[50] C. McMullen and J. Shearer, “Prime implicants, minimum covers, and the com-
plexity of logic simplification,” IEEE Trans. on Comp., Vol. C-35, pp. 761–762,
Aug. 1986.
[51] G. De Micheli, Synthesis and Optimization of Digital Circuits, McGraw-Hill,
1994.
[52] F. Mileto and G. Putzolu, “Average values of quantities appearing in Boolean
function minimization,” IEEE TEC, Vol. EC-13, No. 4, pp. 87–92, April 1964.
[53] S. Minato, “Fast generation of prime-irredundant covers from binary decision
diagrams,” IEICE Trans. Fundamentals, Vol. E76-A, No. 6, pp. 967–973, June
1993.
[54] E. Morreale, “Recursive operators for prime implicant and irredundant normal
form determination,” IEEE Trans. on Comp., Vol. C-19, pp. 504–509, June
1970.
[55] T. H. Mott Jr., “Determination of irredundant normal forms of a truth function
by iterated consensus of the prime implicants,” IRE TEC, pp. 245–252, June
1960.
[56] S. Muroga, Logic Design and Switching Theory, Wiley-Interscience Publication,
1979.
[57] D. L. Ostapko and S. J. Hong, “Generating test examples for heuristic Boolean
minimization,” IBM J. Res. and Develop., Vol. 18, pp. 459–464, Sept. 1974.
[58] W.V.O. Quine, “The problem of simplifying truth functions,” American Math.
Monthly, Vol. 59, pp. 521–531, 1952.
[59] W.V.O. Quine, “A way to simplify truth functions,” American Math. Monthly,
Vol. 62, pp. 627–631, 1955.
[60] W.V.O. Quine, “On cores and prime implicants of truth functions,” American
Math. Monthly, Vol. 66, pp. 755–760, 1959.
[61] R. Reiter, “A theory of diagnosis from first principles,” Artificial Intelligence,
Vol. 32, pp. 57–95, 1987.
[62] R. Reiter and J. de Kleer, “Foundations for assumption-based truth maintenance
systems,” Proc. AAAI National Conference’87, Seattle, pp. 183–188, July 1987.
[63] V.T. Rhyne, P.S. Noe, M.H. McKinney, and U.W. Pooch, “A new technique for
the fast minimization of switching functions,” IEEE Trans. on Comp., Vol. C-26,
No. 8, pp. 757–764, 1977.
[64] J.A. Robinson, “A machine-oriented logic based on the resolution principle,”
Journal of ACM, Vol. 12, pp. 23–41, 1965.
[65] S. Robinson and R. House, “Gimpel’s reduction technique extended to the cover-
ing problem with costs,” IEEE Trans. on Elect. Comp., Vol. EC-16, pp. 509–514,
Aug. 1967.
[66] J.P. Roth, “Algebraic Topological Methods for the Synthesis of Switching Sys-
tems,” Trans. of American Math. Society, Vol. 88, No. 2, pp. 301–326, 1958.
[67] R.L. Rudell, Multiple-Valued Logic Minimization for PLA Synthesis, Research
Report, UCB M86/65, 1986.
[68] R.L. Rudell and A.L. Sangiovanni-Vincentelli, “Multiple valued minimization
for PLA optimization,” IEEE Trans. on CAD, Vol. 6, No. 5, pp. 727–750, Sept.
1987.
[69] R.L. Rudell, Logic Synthesis for VLSI Design, PhD Thesis, UCB/ERL M89/49,
1989.
[70] T. Sasao, “An application of multiple-valued logic to a design of programmable
logic arrays,” Proc. Int’l Symp. on Multiple-Valued Logic, pp. 65–72, May 1978.
[71] T. Sasao, “Input variable assignment and output phase optimization of PLA’s,”
IEEE TC, Vol. C-33, No. 10, pp. 879–894, Oct. 1984.
[72] T. Sasao, “HART: A hardware for logic minimization and verification,”
ICCD’85, New York, pp. 713–718, Oct. 7–10, 1985.
[73] T. Sasao, “Ternary decision diagrams and their applications,” Chap. 12 of Rep-
resentations of Discrete Functions, Kluwer Academic Publishers, 1996.
[74] T. Sasao and J. T. Butler, “On the minimization of SOPs for bi-decomposable
functions,” ASP-DAC’2001, Japan, Jan. 2001.
[75] T. Sasao and J. T. Butler, “Worst and best irredundant sum-of-products ex-
pressions,” IEEE Trans. on Comp., (accepted).
[76] J. R. Slagle, C. L. Chang, and R. C. T. Lee, “Completeness theorems for semantic
resolution in consequence finding,” Proc. Int. Joint Conference on Artificial
Intelligence, pp. 281–285, 1969.
[77] J. R. Slagle, C. L. Chang, and R. C. T. Lee, “A new algorithm for generating prime
implicants,” IEEE Trans. on Comp., Vol. C-19, No. 4, pp. 304–310, 1970.
[78] A. Svoboda and D. E. White, Advanced Logical Circuit Design Techniques, Gar-
land Press, New York, 1979.
[79] G.M. Swamy, P. McGeer, and R.K. Brayton, “A fully Quine–McCluskey proce-
dure using BDD’s,” Proc. IWLS’93, May 1993.