So here in lecture 7.2 we're going to continue our discussion of extracting common
divisors from multi-level logic represented in the algebraic model. In the previous
lecture we showed the simple case for how to extract a single cube divisor. What
we're going to show in this lecture is how we extract the more complicated case of
a multiple cube divisor and one of the nice things that we're going to find is that
it looks just about exactly the same in that we are again going to build a big
matrix and attempt to cover it. And so one of the nice things about these methods
is that they are nicely reminiscent of Karnaugh maps which we all know and love. So
let's go look at how we pull out multiple cube divisors from complicated multi
level logic. We'd like to now go look for the multiple cube divisors. Remarkably a
very similar matrix rectangle prime concept still works. We're going to make an
appropriate matrix, we're going to find a prime rectangle, we're going to do some
literal count bookkeeping with the numbers associated with the rows and columns and
it all just works really very nicely. It's a nice kind of unifying idea but I need a
new set of examples. I need a little more complicated set of examples in order to
find some interesting multiple cube divisors. So here my function P is af + bf + ag
+ cg + ade + bde + cde. Q is af + bf + ace + bce. R is ade + cde. So you should not
be surprised that the first step is I need to go find the co-kernels and kernels of
each of these functions. Why is this? We have this big powerful theorem, Brayton
McMullen, which says multiple cube factors are intersections of the product terms
in the kernels for each of these functions. So what you can think of is that we're
going to be putting this matrix structure together that lays out in one nice,
pretty simple way. All of this interesting deep kernel, co-kernel architecture of a
complicated Boolean logic network in a way such that a prime rectangle identifies
what we're looking for in terms of a nice divisor. So let's just go look at that.
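Before looking at that slide, here's a minimal Python sketch of the kerneling step, purely illustrative and with hypothetical names: each function is a list of cubes, each cube a set of single-letter literals, and it only tries co-kernels that show up as the common part of some pair of cubes, which is enough to reproduce the table on the slide.

```python
from itertools import combinations

def divide(cubes, co_kernel):
    """Algebraic division: keep the cubes containing co_kernel, strip its literals."""
    return [cube - co_kernel for cube in cubes if co_kernel <= cube]

def is_cube_free(cubes):
    """Cube-free means no single literal appears in every cube of the expression."""
    return bool(cubes) and not set.intersection(*map(set, cubes))

def kernels(cubes):
    """Brute force: try the common cube of every pair of cubes as a co-kernel."""
    found = {}
    for c1, c2 in combinations(cubes, 2):
        co = c1 & c2                      # candidate co-kernel
        if co and is_cube_free(quotient := divide(cubes, co)):
            found[co] = quotient          # keyed by co-kernel, duplicates collapse
    return found

# P = af + bf + ag + cg + ade + bde + cde, each cube as a frozenset of literals
P = [frozenset(c) for c in ("af", "bf", "ag", "cg", "ade", "bde", "cde")]
for co, kern in kernels(P).items():
    print("co-kernel", "".join(sorted(co)), "-> kernel",
          " + ".join("".join(sorted(q)) for q in kern))
```

Running this on P prints the six co-kernel/kernel pairs listed for P on the slide; the same call on Q and R gives theirs.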
So here's a very complicated slide that is simply all of the kernels and all of the co-kernels of our P Q R example, and this is pulled from Rick Rudell's Ph.D. thesis from way back when, from UC Berkeley in 1989. P = af + bf + ag + cg + ade + bde + cde; it's
got 1 2 3 4 5 6 real kernels. So for example co-kernel a has kernel de + f + g, co-
kernel b has kernel de + f, co-kernel de has kernel a + b + c, and so on you can
simply read the lines. I'll note that, you know, technically if you divide this
thing by co-kernel one you get a kernel that's the function back and since the
function is itself cube free that's a perfectly legitimate kernel but it's not an
interesting or useful one, so we're going to ignore it. Similarly Q which is af +
bf + ace + bce: if you divide it by a or you divide it by b you get kernel ce + f, and so I'll note here that, you know, you can have more than one co-kernel that under algebraic division gives you the same kernel; co-kernel f or co-kernel ce also makes a + b as its kernel, and so we have exactly the same concept. And again there's a co-kernel one; if you divide it by that you get the function Q back. Q is cube free so we'll ignore that one. And
then R is ade + cde; it's only got one interesting co-kernel, de, and one kernel, a + c, and R is not its own kernel. Why not? Because R is not cube free. If you go look at it, it's got a de in both terms. So those are all the co-kernels and all the
kernels. That's the sort of the raw material of this matrix that I want to go
build. So what's the matrix? The matrix is called the co-kernel cube matrix and
here's the way you put it together. So it has one column for each unique cube among
all the kernels in the problem. So it has columns that are labeled a b c ce de f
and g. Those are columns 1 2 3 4 5 6 and 7 respectively. And the key idea here is
that this is sort of the raw material for the multiple cube divisors that we're
looking for. And the other interesting and important idea here is that I only need
one sample, if you will, from every unique cube in the kernels. So for example de appears up here in one of the kernels for P, de + f + g; it appears in a different kernel for P, de + f; and it also appears in yet another kernel for P, de + g. But nevertheless, despite the fact that there are three places where it appears, this stuff only appears in one column. I only need to know that it's there; I don't need to know that there are three of them there on the columns. I will track this information actually in the rows. Now what are the rows?
There is one row for each unique function co-kernel pair. So for example I've got a
little highlight here that says P has a co-kernel c and so there is a row that says
P (c), and so there are in fact 11 rows: P (a) row one, P (b) row two, P (de) row three, P (f) row four, P (c) row five, and P (g) row six; Q (a) row seven, Q (b) row eight, Q (ce) row nine, Q (f) row 10, and R (de) row 11. And the one other thing that I will note is that if we look at just these two rows, Q with co-kernel f has a kernel a + b and Q with co-kernel ce has the same kernel a + b. I will in fact list those twice, right, because those are different: in this particular example those are different function co-kernel pairs.
So that's the structure of this matrix. A little more complicated than the single
cube extraction but, you know, not too bad. And again entirely mechanical. So let
us answer the question, what goes in the individual row column entries here in the
matrix. And the answer is shown with some examples here. So from the row you take
the co-kernel and then you go look at the associated kernel. So for example if we
take P and co-kernel a we get the kernel de + f + g. So that's my first example
here. Then you look at the cubes in the kernel and you put ones in columns that are
the cubes in this kernel, otherwise you put a dot. So for the row that says P which
has a co-kernel of a you go and you put a 1 in column de, column f, and column g.
Similarly if you take P and you divide it by co-kernel g, you get kernel a + c and so
what you do is you have a row of the matrix that has a P in (g), row 6 and you put
ones in just columns a and c and nothing else. And so if you simply read this
matrix, you know, the P (b) row 2 has ones for de and f because that's what the
kernel is it's de + f. The P (de) row 3 has a + b + c as the columns that have ones
in it. The P (f) row has a and b as the columns, the P (c) row has de and g. The P
(g) row has columns a and c. The Q (a) row has columns ce and f. The Q (b) row has
columns ce and f. The Q (ce) row has columns a and b. The Q (f) row has columns a
and b and the R (de) row has columns a and c. In effect each row of this co-kernel
cube matrix says here's a function P or Q or R, if I divide it by this co-kernel
cube, which is listed right next to the function name, that is the result I get. You
can read the result I get right off the row so, you know, we just go back, you
know, to our example here. On the top row P is a function. If I divide P by co-
kernel a what do I get? Answer de + f + g. I can just go read those things off.
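To make that construction concrete, here's a small continuation of the earlier sketch, same hypothetical representation, that assembles the co-kernel cube matrix for P, Q, and R; with the brute-force kernels() above it comes out with the same 11 rows and 7 columns just described.

```python
def cokernel_cube_matrix(functions):
    """One row per (function, co-kernel) pair, one column per unique kernel cube,
    a 1 wherever the column's cube appears in that row's kernel."""
    rows = []                                             # (name, co-kernel, kernel)
    for name, cubes in functions.items():
        for co, kern in kernels(cubes).items():
            rows.append((name, co, kern))
    cols = sorted({cube for _, _, kern in rows for cube in kern},
                  key=lambda cube: "".join(sorted(cube)))
    matrix = [[1 if col in set(kern) else 0 for col in cols] for _, _, kern in rows]
    return rows, cols, matrix

funcs = {
    "P": [frozenset(c) for c in ("af", "bf", "ag", "cg", "ade", "bde", "cde")],
    "Q": [frozenset(c) for c in ("af", "bf", "ace", "bce")],
    "R": [frozenset(c) for c in ("ade", "cde")],
}
rows, cols, M = cokernel_cube_matrix(funcs)               # 11 rows, 7 columns
```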
Okay? So the matrix is a little more complicated. But again something that one can
construct mechanically. So as I said previously the matrix exposes the full deep
kernel co-kernel architecture of a complicated network of Boolean logic nodes. Now
what? Well, a beautiful and wonderful thing happens and that is that a prime
rectangle defined in exactly the same way as in the single cube extraction case is
again a good divisor. That's just a wonderful thing. And now you get a multiple
cube. So here I am showing you, in this same example, from the same matrix, a prime
rectangle. And so the prime rectangle is columns a and b, columns one and two and
rows P (de) that's row three, row P (f) that's row four, row Q (ce) that's row nine
and row Q (f) that's row 10. And there are ones at each one of those row and column
intersections. And now how do you create the multiple cube divisor? And the answer
is that you OR the cubes together and those are the cubes in your multiple cube
divisor. And you can just see this in the example I'm showing you: the P (de) row of the matrix says, look, P is equal to co-kernel de ANDed with a + b, plus some other stuff that I don't care about, and oh by the way there's a remainder that I don't really care about at all. And the second row of the prime rectangle says that P is equal to co-kernel f ANDed with a + b, plus some other stuff potentially; actually in this case there isn't any other stuff, if we technically look at the matrix. It says that Q is equal to ce ANDed with a + b, plus potentially some other stuff; actually there isn't any other stuff if we look at the matrix. And it says Q is also equal to f multiplied by a + b, and possibly some other stuff, in this case not so much, plus some other remainder. However, the one commonality of all of these
things is that P and Q are all equal to a + b ANDed with something, ORed with a bunch of other stuff. So if you OR together the cubes that label the columns of the prime rectangle, that is your divisor.
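In code that last step is trivial; continuing the same sketch, with the rectangle located by its row and column labels rather than by the slide's row numbers:

```python
def divisor_from_rectangle(cols, matrix, rect_rows, rect_cols):
    """OR together the column cubes of a rectangle of 1s; that sum is the divisor."""
    assert all(matrix[r][c] == 1 for r in rect_rows for c in rect_cols)
    return [cols[c] for c in rect_cols]                   # a sum (OR) of cubes

# the prime rectangle from the slide: rows P (de), P (f), Q (ce), Q (f); columns a, b
wanted = {("P", "de"), ("P", "f"), ("Q", "ce"), ("Q", "f")}
rect_rows = [i for i, (name, co, _) in enumerate(rows)
             if (name, "".join(sorted(co))) in wanted]
rect_cols = [j for j, col in enumerate(cols) if "".join(sorted(col)) in {"a", "b"}]

X = divisor_from_rectangle(cols, M, rect_rows, rect_cols)  # [{'a'}, {'b'}], i.e. a + b
```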
And we can just go draw it. Here we are drawing it: the old network on the left, a very tiny picture of the prime rectangle covering the matrix in the middle, and the new network on the right. And so on the left we see bubbles, three bubbles, which say exactly what the functions were when we started: P is af + bf + ag + cg + ade + bde + cde. Q is af + bf + ace + bce. R is
ade + cde. Nothing new. What we have discovered from the prime rectangle is that a
+ b is a factor. So let's just call it capital X, and so on the right hand side I've got a new little yellow node that says X = a + b, and that new node feeds a new P and a new Q, but it does not feed into R, so R still says ade + cde. But P now says Xde + Xf + ag + cg + cde, that's that other stuff I was talking about, and Q is now Xce + Xf because the a + b stuff has been pulled out. So P and Q are
actually smaller. If you count the literals in the original network on the left, there are 33 of them. Go ahead, do it: look at the right hand side of each equals sign and count them off. If you count them in the new network, again everything on the right hand side of an equals, and by the way you've got a new node there, there's an X there now, you will find it's 25, and I've saved eight literals, which is, you know, a pretty nice example.
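As a sanity check on those counts, here's one more tiny continuation of the sketch; the rewritten node lists are simply the slide's result typed in by hand, not computed by the code.

```python
def literal_count(network):
    """Total literals on the right hand sides of every node in the network."""
    return sum(len(cube) for cubes in network.values() for cube in cubes)

print(literal_count(funcs))        # 33 literals before extraction

after = {                          # the network with X = a + b pulled out
    "X": [frozenset("a"), frozenset("b")],
    "P": [frozenset(c) for c in ("Xde", "Xf", "ag", "cg", "cde")],
    "Q": [frozenset(c) for c in ("Xce", "Xf")],
    "R": [frozenset(c) for c in ("ade", "cde")],
}
print(literal_count(after))        # 25 literals, so 8 saved
```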
And again there is a formula, just some bookkeeping you can use: if you know the rectangle, you can compute how many literals are saved, and the formula's just a little bit more complicated than it was before. For each column c in the rectangle, let the weight of that column be the number of literals labeling the column cube. For each row r in the rectangle, let the weight of the row be 1 plus the number of literals in the co-kernel labeling the row. Now there's one more thing: for each 1 covered at the row r, column c intersection, you AND the co-kernel that labels the row with the column cube. That's actually the thing in the original network that you just discovered and replaced. Okay? So AND the co-kernel row label and the column cube label, you're going to get a new product term, and the value of that row, column intersection is the number of literals in that new ANDed product. And then there's a formula: L, the number of literals saved, is what you get when you add up the values of the 1s covered in the rectangle, over all the rows and columns, and then you subtract the sum of the row weights and you subtract the sum of the column weights. L is the number of literals you save.
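Continuing the same sketch, that bookkeeping is only a few lines; applied to the rectangle we just used, it should reproduce the 8 literals saved.

```python
def literals_saved(rows, cols, rect_rows, rect_cols):
    """L = sum of covered-entry values - sum of row weights - sum of column weights."""
    col_w = sum(len(cols[c]) for c in rect_cols)              # literals in each column cube
    row_w = sum(1 + len(rows[r][1]) for r in rect_rows)       # 1 + literals in each co-kernel
    values = sum(len(rows[r][1] | cols[c])                    # literals in co-kernel ANDed with cube
                 for r in rect_rows for c in rect_cols)
    return values - row_w - col_w

print(literals_saved(rows, cols, rect_rows, rect_cols))        # 20 - 10 - 2 = 8
```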
Let's just go look at the example to illustrate it; it's not really so complicated. So here we have at the top the before picture of the P, Q, and R network
just as from the previous slide and on the bottom we have the after network which
has an X node, a modified P node, a modified Q node, and the same R node. We've
also got the matrix shown with the prime rectangle illustrated and now there's a
bunch of little boxes next to the rows and the columns and the values that I'm just
going to fill in. So what are the column weights? So remember the column weights
are just count the number of literals labeling the column. So the column weight of
column one is one because there's just an a and the column weight of column two is
just one because it's just a b. Now the row weights are one plus the number of
literals in the co-kernel. So row three which is P co-kernel de has a weight of
three because de has two things in it, one plus two is three. P divided by f, co-
kernel f is row four, that row weight is a two. And the reason that one is two is
because f just has one thing in it. Q co-kernel ce, row nine, that's similarly a three for the same reason: ce has two things in it, one plus two is three. Q co-kernel f, row 10: f has one literal in that co-kernel, one plus one is two.
Right, so those are the row weights: three and two, three and two. Now the values are,
you go look at the ones that are being covered so there's a cluster of four ones in
the top two rows and a cluster of four ones in the bottom two rows so I'm expecting
four numbers on the top and four numbers on the bottom and remember what you do
here is you AND the co-kernel label on the row with the column cube label and you
take the product that you get and you count the literals because that is really
what you identified in the starting network and replaced. So this is part of the
accounting. So for the top left 1 we have de in row three and a in column one: dea has three literals in it, and so I get a three. Same row, column b: deb also has three literals in it, and so I get a three. Next row, the co-kernel f has one literal in it: fa has two literals in it, so I get a two; fb has two literals in it, so I get a two. So I get three, three, two, two. And similarly, if I do the same analysis on the bottom, in the cluster of four ones associated with rows nine and 10, with ce and f respectively as the co-kernels, and columns one and two, a and b respectively, as the cubes, I will get three and three, two and two. Three, three
for the top row two, two for the bottom row. So if I take all of this information I
can now go compute what I'm interested in. What is the sum of the element values?
It's the sum of 3 and 3 and 2 and 2 and 3 and 3 and 2 and 2. That's a 20. What is
the sum of the row weights? Well that's three and two and three and two. That's 10.
What's the sum of the column weights? That's one and one, that's two, pretty simple. The value sum minus the row sum minus the column sum: 20 minus 10 minus 2 is 8.
Hey it works. That is in fact the number of literals saved. There were 33 literals
in this network when we started, there are 25 literals in this network when we have
extracted this nice common divisor a + b. 33 minus 25 is eight, and it all just
works. Now I need to give you one little bit of a detail that we're just not going
to talk about in lecture 7. You can extract a second divisor or a third divisor or a fourth divisor; you can do that for single cube divisors and you can do that for
multiple cube divisors. But there's a bunch more bookkeeping to do that. And the
big thing is that you actually have to update the matrix to reflect the fact that
your logic in your Boolean network changed when you did this extraction. The stuff
on the right hand side of the equal sign of some of your nodes changed and you have
a new node because you actually divided something out. You've got to actually
update the matrix, because that node's contents are different and there's a new
divisor node and it's sort of like adding don't cares to a Karnaugh map so it turns
out there are some things that are no longer essential when you put the prime
rectangles down. So it's like you get don't cares in the Karnaugh map that you can
optionally cover or not cover. And for the multiple cube case you actually have to
kernel the new divisor. The thing that you extracted, the multiple cube divisor,
you have to go kernel that and put its kernels and its co-kernels back in the matrix
so that you actually can find the right new divisors. It's all entirely mechanical.
It's a bit tedious to be very honest and there are some special rules that happen
for how the bookkeeping happens when accounting for the literals.
And I'm just going to skip it. It's just mechanical. I think for our purposes it is
entirely sufficient to know how to extract the good first divisor using the matrix
that we find either for the single cube case or the matrix that we find for the
multiple cube case. For us this is good enough. So we know now that the deep
structure of a multi level design expressed using the algebraic model and expressed
via appropriate extraction of all the kernels and the co-kernels can lead us to
good single cube divisors and good multiple cube divisors. That's really pretty
impressive. What we don't know yet is mechanically how you actually find the prime
rectangle. So let's go do that next.