Determinants and Alternating Sign Matrices: David Robbins and Howard Rumsey
1. INTRODUCTION
If we assign the value 1 to minors of size 0, then the identity is valid for
n ≥ 2. This identity was derived by Dodgson in [2] as a special case of an
identity due to Jacobi [4, p. 9; 8, p. 77] relating the minors of a matrix to
the minors of its adjugate (or cofactor) matrix. We will call it Dodgson's
identity. Dodgson pointed out in [2] that (1) could be used iteratively to
compute the determinant of a matrix. Given the values of all connected
minors of sizes k − 1 and k, one uses (1) to compute all the connected
minors of size k + 1. Since we know the minors of sizes 0 and 1, we can use
(1) repeatedly to obtain det M. The method is mentioned in some texts on
numerical analysis; for example, see [3].
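The iteration is easy to carry out in exact arithmetic. The sketch below is our illustration, not Dodgson's own notation; it assumes no interior connected minor vanishes, since each step divides by one. It builds the arrays of connected minors of sizes 1, 2, …, n:

```python
from fractions import Fraction

def condensation_det(M):
    """Determinant by Dodgson condensation: build the arrays of connected
    minors of sizes 1, 2, ..., n using identity (1), dividing at each step
    by the interior connected minor of the previous size."""
    n = len(M)
    # size-0 minors are all 1 by convention
    inner = [[Fraction(1)] * (n + 1) for _ in range(n + 1)]
    # size-1 minors are the entries themselves
    cur = [[Fraction(M[i][j]) for j in range(n)] for i in range(n)]
    for k in range(1, n):
        size = n - k  # the size-(k+1) minors form a (size x size) array
        nxt = [[(cur[i][j] * cur[i + 1][j + 1] - cur[i][j + 1] * cur[i + 1][j])
                / inner[i + 1][j + 1]
                for j in range(size)] for i in range(size)]
        inner, cur = cur, nxt
    return cur[0][0]
```

For instance, `condensation_det([[1, 2, 3], [4, 5, 6], [7, 8, 10]])` returns −3, in agreement with cofactor expansion.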
If we think of the connected minors of sizes k and k + 1 as known, then
Dodgson's method will yield a formula expressing the determinant of M as
a rational function of these minors. In Theorem 1 we give an explicit
formula for this rational function. It has several interesting properties. In
particular, it can be written as a sum of terms, each of which is plus or minus a
monomial in which each variable appears with exponent 1, −1, or 0. In
Theorem 2 we give a modification of this formula expressing det M in
terms of minors of three consecutive sizes, k − 1, k, and k + 1. This formula
has a similar form, but it has the additional property that the denominator
in each monomial involves only the minors of size k. In both formulas the
terms are indexed by sets of alternating sign matrices.
Let $R_k$ be the rational function which results when we apply the substitution $T$ to the rational function $y_{11}$, $k$ times in succession. For example, we have
$$R_2 = \frac{x_{11}x_{22} + \lambda x_{12}x_{21}}{y_{22}}.$$
There are, for example, seven 3 by 3 alternating sign matrices: the six 3 by 3 permutation matrices and the matrix
$$\begin{pmatrix} 0 & 1 & 0 \\ 1 & -1 & 1 \\ 0 & 1 & 0 \end{pmatrix}.$$
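The count of seven is easy to confirm by brute force, using the equivalent description of an alternating sign matrix by partial sums (every partial row and column sum is 0 or 1, and every full row and column sum is 1); the enumeration code is our own illustration:

```python
from itertools import product

def is_asm(rows):
    """Partial-sum test: along every row and column the running sums stay
    in {0, 1} and finish at 1, which is equivalent to the nonzero entries
    alternating in sign and summing to 1."""
    for line in list(rows) + [list(c) for c in zip(*rows)]:
        s = 0
        for v in line:
            s += v
            if s not in (0, 1):
                return False
        if s != 1:
            return False
    return True

asms = [t for t in product([-1, 0, 1], repeat=9)
        if is_asm([t[0:3], t[3:6], t[6:9]])]
print(len(asms))  # prints 7
```

Six of the seven contain no −1 (the permutation matrices); the seventh is the matrix displayed above.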
$$\tilde A_{ij} = \sum_{k \le i,\; l \le j} A_{kl},$$
where the sum ranges over all pairs of integers (k, l) with k ≤ i and l ≤ j,
and we regard $A_{kl}$ as 0 if k or l is out of the range 1, …, n. A can be
recovered from $\tilde A$ by taking mixed second differences:
$$A_{ij} = \tilde A_{ij} - \tilde A_{i-1,j} - \tilde A_{i,j-1} + \tilde A_{i-1,j-1}.$$
Observe that the differences $\tilde A_{ij} - \tilde A_{i-1,j}$ and $\tilde A_{ij} - \tilde A_{i,j-1}$ are partial sums of
the rows and columns of A. Using this observation one may prove the
following characterization of alternating sign matrices in terms of corner
sums. We omit the details.
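The corner-sum characterization is easy to experiment with. In the sketch below (our 0-based code; one natural way to state such a test), a matrix passes when the consecutive differences of its corner-sum matrix along rows and columns are all 0 or 1 and the last row and column of the corner-sum matrix are 1, 2, …, n:

```python
def corner_sums(A):
    """Atilde[i][j] = sum of A[k][l] for k <= i, l <= j (0-based here)."""
    n = len(A)
    T = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            T[i][j] = (A[i][j] + (T[i - 1][j] if i else 0)
                       + (T[i][j - 1] if j else 0)
                       - (T[i - 1][j - 1] if i and j else 0))
    return T

def is_asm_by_corner_sums(A):
    """Consecutive differences of Atilde are 0 or 1, and its last row
    and column are 1, 2, ..., n."""
    n = len(A)
    T = corner_sums(A)
    for i in range(n):
        for j in range(n):
            dr = T[i][j] - (T[i - 1][j] if i else 0)
            dc = T[i][j] - (T[i][j - 1] if j else 0)
            if dr not in (0, 1) or dc not in (0, 1):
                return False
    return (all(T[n - 1][j] == j + 1 for j in range(n))
            and all(T[i][n - 1] == i + 1 for i in range(n)))
```

On the 3 by 3 alternating sign matrix with a −1 this returns `True`, while a matrix with a repeated column, such as `[[1, 0], [1, 0]]`, fails the last-row condition.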
(2)
(iii) $\tilde A_{ij} \le \tilde B_{i+1,j}$,
(iv) $\tilde A_{ij} \le \tilde B_{i,j+1}$.
Here $|\tilde A|$ is the sum of the entries $\tilde A_{ij}$ for $1 \le i, j \le n$, and $I_n$ is the n by n
identity matrix. The number of flips is always a non-negative integer.
Let $u_{ij}$ be any array of rational functions defined for pairs of positive
integers (i, j), and let A be a k by k matrix with integer entries. It is
convenient to define
$$u^A = \prod_{i,j=1}^{k} u_{ij}^{A_{ij}}.$$
Given any array of rational functions $u_{ij}$, we define two new arrays s(u)
and d(u) by
$$s(u)_{ij} = u_{i+1,j+1}, \qquad d(u)_{ij} = u_{ij}\,u_{i+1,j+1} + \lambda\, u_{i+1,j}\,u_{i,j+1}.$$
THEOREM 1. For $k \ge 2$,
$$R_k = \sum_{(A,B)} \lambda^{F(A)+F(B)}\, x^A\, s(y)^{-B},$$
where the sum is over all compatible pairs (A, B) of alternating sign matrices
with B in $\mathscr{A}_{k-1}$ and A in $\mathscr{A}_{k}$.

Here $R_k$ is the rational function defined at the beginning of Section 2, $\mathscr{A}_k$
is the set of k by k alternating sign matrices, F is the function giving the
number of flips of an alternating sign matrix, and s is the shift operator. If
$\lambda = -1$ and, for some n by n matrix M, x and y are the arrays of connected
minors of M of sizes n − k + 1 and n − k, then $R_k$ will be its determinant.
Proof. We may check directly that the theorem is true when k = 2. We
proceed by induction. Suppose we already know the theorem for k. Then,
making the substitution T, we have
$$R_{k+1} = \sum_{B \in \mathscr{A}_k} \lambda^{F(B)}\, d(x)^B\, s(y)^{-B} \sum_{A \sim B} \lambda^{F(A)}\, s(x)^{-A}, \tag{3}$$
where the inner sum is over the alternating sign matrices A compatible with B.
LEMMA 2. For every (i, j) with $0 \le i, j \le k - 1$, the maximum on the left
of (6) is either equal to the minimum on the right side of (6) or is 1 less than
it. Moreover, the left side is less than the right side exactly when
$B_{i+1,j+1} = -1$.
Proof. Part (ii) of Lemma 1 implies that the max does not exceed the
min in (6). Also their difference cannot exceed the difference between $\tilde B_{ij}$
and $\tilde B_{i,j+1}$, which is at most 1. The difference is 1 if and only if
$B_{i+1,j+1} = -1$, which implies that $\tilde B_{i+1,j+1} = \tilde B_{i,j+1}$. Similarly, $\tilde B_{i+1,j+1} = \tilde B_{i+1,j}$. The first
equality in (7) now follows by taking differences. ∎
LEMMA 3. Any matrix A with integer entries for which $\tilde A$ satisfies (6) is
an alternating sign matrix.
Proof. We show that $\tilde A$ satisfies the conditions of Lemma 1. First we
have
$$\tilde A_{ij} - \tilde A_{i-1,j} \le \tilde B_{i,j+1} - (\tilde B_{i,j+1} - 1) = 1.$$
$$\tilde A^0_{ij} = \max(\tilde B_{ij},\; \tilde B_{i+1,j+1} - 1)$$
for all (i, j) with $0 \le i, j \le k - 1$. Then all other compatible A's can be
obtained from this one by adding 1's to a subset of those $\tilde A^0_{ij}$'s for which
$B_{i+1,j+1} = -1$. For these pairs (i, j) the effect of one of these additions on
$A^0$ is to add 1 to the entries of $A^0$ with indices (i, j) and (i + 1, j + 1) and to
subtract 1 from those entries with indices (i + 1, j) and (i, j + 1). The effect
on $s(x)^{-A^0}$ is to multiply it by $f_{i+1,j+1}$, where
$$f_{ij} = \frac{x_{i,j+1}\, x_{i+1,j}}{x_{ij}\, x_{i+1,j+1}}.$$
Since 1 has been added to a single entry of the corner sum matrix, the effect
on $\lambda^{F(A^0)}$ is to multiply by $\lambda$. Thus the sum on the left side of (5) can be
written as
$$\lambda^{F(A^0)}\, s(x)^{-A^0} \prod \bigl(1 + \lambda f_{i+1,j+1}\bigr), \tag{8}$$
where the product on the right side is over all pairs (i, j) with $B_{i+1,j+1} = -1$.
After replacing i and j by i − 1 and j − 1 in this product, we can rewrite (8)
as
$$\lambda^{F(A^0)}\, s(x)^{-A^0} \prod_{B_{ij} = -1} \bigl(1 + \lambda f_{ij}\bigr). \tag{9}$$
Now we make a similar calculation for the sum on the right of (5). If we
rewrite the conditions for compatibility in terms of the $\tilde C$'s, we see that they
imply that
$$\max(\tilde B_{i,j-1},\, \tilde B_{i-1,j}) \le \tilde C_{ij} \le \min(\tilde B_{ij},\, \tilde B_{i-1,j-1} + 1) \tag{10}$$
for all (i, j) with $1 \le i, j \le k$. Then we have the following two lemmas,
analogous to Lemmas 2 and 3. We omit their proofs.
LEMMA 4. For every (i, j) with $1 \le i, j \le k$, the maximum on the left of
(10) is either equal to the minimum on the right side of (10) or is 1 less than it.
Moreover, the left side is less than the right side exactly when $B_{ij} = 1$.
$$\tilde C^0_{ij} = \min(\tilde B_{ij},\; \tilde B_{i-1,j-1} + 1)$$
for all (i, j) with $1 \le i, j \le k$, and require that it satisfy condition (i) of
Lemma 1. Then all other compatible C's can be obtained from this one by
subtracting 1's from a subset of those $\tilde C^0_{ij}$'s for which $B_{ij} = 1$. The effect of
one of these subtractions on $C^0$, for these pairs (i, j), is to subtract 1 from
the entries of $C^0$ with indices (i, j) and (i + 1, j + 1) and to add 1 to those
entries with indices (i + 1, j) and (i, j + 1). The effect on $x^{C^0}$ is to multiply it
by $f_{ij}$. Since 1 has been subtracted from a single entry of the corner sum
matrix, the effect on $\lambda^{F(C^0)}$ is to multiply by $\lambda^{-1}$. Thus the sum on the right side
of (5) can be written as
$$\lambda^{F(C^0)}\, x^{C^0} \prod_{B_{ij} = 1} \bigl(1 + \lambda^{-1} f_{ij}\bigr), \tag{12}$$
where the product is over all pairs (i, j) with $B_{ij} = 1$.
Now, combining (5), (9), and (12), we see that we have reduced the
proof of Theorem 1 to showing (13), which in turn rests on an identity (15)
among the corner sums valid whenever $1 \le i, j \le k$. One may verify directly that (15) also holds if i or j is
0 and, from condition (i) of Lemma 1, that (15) also holds if i or j is
k + 1. Thus we may take mixed second differences and conclude, recalling
that terms with out-of-range indices are to be regarded as zero, that (16)
holds for all i and j. But the two sides of (16) are the exponents of $\lambda$ on the two
sides of (13). Thus we have proved (13).
It is easily verified that
$$|\tilde I_k| = \frac{k(k+1)(2k+1)}{6}.$$
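As a quick numerical check of this formula (our snippet): the corner sums of the identity matrix are $\tilde I_{ij} = \min(i, j)$, so summing min(i, j) over the square should give k(k + 1)(2k + 1)/6.

```python
# |Itilde_k| = sum of min(i, j) over 1 <= i, j <= k, compared with
# the closed form k(k+1)(2k+1)/6.
for k in range(1, 9):
    total = sum(min(i, j) for i in range(1, k + 1) for j in range(1, k + 1))
    assert total == k * (k + 1) * (2 * k + 1) // 6
print("checked k = 1..8")
```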
Now, using the definition of F, we can reduce the proof of Eq. (14) to
showing that
$$x\, s(z) = d(y).$$
$$R_k = \sum_{B \in \mathscr{A}_k} \lambda^{P(B)}\, \frac{x^B\, s(z)^{B^*}}{s(y)^{B'}}.$$
For example, if
$$B = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & -1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix},$$
then, after we insert the arrows, we have

    0 → 1 ← 0 ← 0
    ↓    ↑    ↓    ↓
    1 ← −1 → 0 → 1
    ↑    ↓    ↓    ↑
    0 → 0 → 1 ← 0
    ↑    ↓    ↑    ↑
    0 → 1 ← 0 ← 0
Fill in the boxes formed by the arrows according to the following rules:
(i) If two opposite sides of a box have parallel arrows, then fill the
box with a 0.
(ii) If the arrows circulate around the box in either direction, then
fill the box with a - 1.
(iii) Otherwise fill the box with a 1.
For example, if B is as above, then the resulting picture is
    0 → 1 ← 0 ← 0
    ↓  1  ↑  1  ↓  0  ↓
    1 ← −1 → 0 → 1
    ↑  1  ↓  0  ↓  1  ↑
    0 → 0 → 1 ← 0
    ↑  0  ↓  1  ↑  0  ↑
    0 → 1 ← 0 ← 0
To begin our proof of Theorem 2b we need to interpret the rules for
constructing $B^*$ algebraically. Define matrices of partial sums on the first and
second subscripts of B by
$$B^{(1)}_{ij} = \sum_{r \le i} B_{rj}$$
and
$$B^{(2)}_{ij} = \sum_{s \le j} B_{is}.$$
Then $B^{(1)}_{ij} = 1$ exactly when the arrow between $B_{ij}$ and $B_{i+1,j}$ points upward,
and $B^{(2)}_{ij} = 1$ exactly when the arrow between $B_{ij}$ and $B_{i,j+1}$ points to the
left. Then the rules (i), (ii), and (iii) above are equivalent to
$$B^*_{ij} = \bigl(B^{(1)}_{i,j+1} - B^{(1)}_{ij}\bigr)\bigl(B^{(2)}_{i+1,j} - B^{(2)}_{ij}\bigr). \tag{20}$$
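The partial-sum arrays, and the box entries they determine, are easy to compute. The following sketch (our 0-based code) reproduces the filled picture of the example above from the product of the differences across opposite sides of each box:

```python
def partial_sum_arrays(B):
    """B1[i][j] = sum of B[r][j] for r <= i (partial column sums);
    B2[i][j] = sum of B[i][s] for s <= j (partial row sums).
    For an alternating sign matrix every entry of B1 and B2 is 0 or 1:
    B1 marks the upward arrows and B2 the leftward arrows of the picture."""
    k = len(B)
    B1 = [[sum(B[r][j] for r in range(i + 1)) for j in range(k)]
          for i in range(k)]
    B2 = [[sum(B[i][s] for s in range(j + 1)) for j in range(k)]
          for i in range(k)]
    return B1, B2

B = [[0, 1, 0, 0], [1, -1, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]]
B1, B2 = partial_sum_arrays(B)
assert all(v in (0, 1) for A in (B1, B2) for row in A for v in row)

# box (i, j): product of the side differences
Bstar = [[(B1[i][j + 1] - B1[i][j]) * (B2[i + 1][j] - B2[i][j])
          for j in range(3)] for i in range(3)]
print(Bstar)  # [[1, 1, 0], [1, 0, 1], [0, 1, 0]]
```

Parallel opposite sides make one of the differences vanish (a 0 box), circulation makes the two differences have opposite signs (a −1 box), and the remaining case gives +1.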
$$B^*_{ij} = B^+_{ij} + B^-_{i+1,j+1} + \bigl(A^0_{ij} - B_{ij}\bigr), \tag{21}$$
where $B^+$ and $B^-$ denote the matrices of positive and negative parts of B.
We will find expressions for each of the three terms on the right of (21) in
terms of the $B^{(1)}$'s and the $B^{(2)}$'s.
Suppose that $1 \le i, j \le k - 1$. We have
$$B^+_{ij} = B_{ij}\, B^{(1)}_{ij} = \bigl(B^{(2)}_{ij} - B^{(2)}_{i,j-1}\bigr) B^{(1)}_{ij} \tag{22}$$
if $0 \le i, j \le k - 1$. The right side has the value 1 when $\tilde B_{i+1,j+1}$ exceeds $\tilde B_{ij}$
by 2 and is 0 otherwise. It follows that
$$A^0_{ij} - B_{ij} = \bigl(\tilde B_{i+1,j+1} - \tilde B_{i+1,j}\bigr)\bigl(\tilde B_{i+1,j} - \tilde B_{ij}\bigr) = B^{(1)}_{i+1,j+1}\, B^{(2)}_{i+1,j} = \bigl(\tilde B_{i+1,j+1} - \tilde B_{i,j+1}\bigr)\bigl(\tilde B_{i,j+1} - \tilde B_{ij}\bigr) = B^{(2)}_{i+1,j+1}\, B^{(1)}_{i,j+1}. \tag{24}$$
for $1 \le i, j \le k - 1$, where, in the third term on the right, we have used
the second form in (24). Now (20) and Theorem 2b follow by adding
Eqs. (22), (23), and (25).
Next we give a simplified description of the function P(B). For any k by
k alternating sign matrix B we define the number I(B) of inversions of B by
$$I(B) = \sum B_{ij}\, B_{rs},$$
where the sum is over all i, j, r, s with i < r and j > s.

THEOREM 2c. $P(B) = I(B) - N(B)$, where N(B) is the number of negative
entries in B.
$$P(B) = \sum_{i,j=1}^{k} B^{(1)}_{ij}\, B^{(2)}_{i,j-1}
= \sum_{i,j=1}^{k} \bigl(B^{(1)}_{i-1,j}\, B^{(2)}_{i,j-1} + B_{ij}\, B^{(2)}_{i,j-1}\bigr)
= I(B) - N(B).$$
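Both statistics are immediate to compute from their definitions; the helpers below are our own illustration. On a permutation matrix, I(B) is the usual inversion number of the permutation, and for the 3 by 3 alternating sign matrix with a −1 we get I(B) = 2 and N(B) = 1:

```python
def inversions(B):
    """I(B): sum of B[i][j] * B[r][s] over all i < r and j > s."""
    k = len(B)
    return sum(B[i][j] * B[r][s]
               for i in range(k) for r in range(i + 1, k)
               for j in range(k) for s in range(j))

def negatives(B):
    """N(B): the number of -1 entries of B."""
    return sum(1 for row in B for v in row if v == -1)

B = [[0, 1, 0], [1, -1, 1], [0, 1, 0]]
print(inversions(B), negatives(B))  # prints 2 1
```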
for square matrices M of size ≥ 2. Here the subscripted M's represent
λ-minors rather than ordinary minors as in Section 1.

Let M be an n by n matrix with all its λ-minors non-zero. From the
recursion defining $R_k$ it is straightforward to check that when we substitute
for the x's and y's the connected λ-minors of sizes n − k + 1 and n − k of the
matrix M, then $R_k$ becomes the λ-determinant of M.
We may use the case k = n of Theorem 2 to obtain a formula for the
λ-determinant of M. Then the y's are all 1's, so that the z's satisfy
$s(z)\,x = 1 + \lambda$. After substituting the entries of M for the x's, we obtain, in
the obvious notation,
$$\det{}_{\lambda} M = \sum_{B \in \mathscr{A}_n} \lambda^{I(B)}\, (1 + \lambda^{-1})^{N(B)} \prod_{i,j=1}^{n} M_{ij}^{B_{ij}},$$
a result which also appears in [7]. Taking λ = 1 and all $M_{ij} = 1$ gives
$\sum_{B \in \mathscr{A}_n} 2^{N(B)} = 2^{n(n-1)/2}$. Note that if we could somehow replace
the 2 by a 1 inside the summation sign in the preceding equation, we would
be able to enumerate the n by n alternating sign matrices.
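For n = 3 the expansion can be verified numerically (the helper names are ours). At λ = −1 the factor $(1 + \lambda^{-1})^{N(B)}$ annihilates every term coming from an alternating sign matrix with a negative entry, so the sum collapses to the ordinary determinant; at λ = 1 with all entries 1 it gives $2^{3 \cdot 2/2} = 8$:

```python
from fractions import Fraction
from itertools import product

def asms(n=3):
    """All n-by-n alternating sign matrices, by the partial-sum test."""
    def ok(rows):
        for line in list(rows) + [list(c) for c in zip(*rows)]:
            s = 0
            for v in line:
                s += v
                if s not in (0, 1):
                    return False
            if s != 1:
                return False
        return True
    return [[list(t[n * i:n * i + n]) for i in range(n)]
            for t in product([-1, 0, 1], repeat=n * n)
            if ok([t[n * i:n * i + n] for i in range(n)])]

def lambda_det_sum(M, lam):
    """Sum over ASMs of lam^I(B) * (1 + 1/lam)^N(B) * prod M[i][j]^B[i][j]."""
    n = len(M)
    total = Fraction(0)
    for B in asms(n):
        I = sum(B[i][j] * B[r][s]
                for i in range(n) for r in range(i + 1, n)
                for j in range(n) for s in range(j))
        N = sum(1 for row in B for v in row if v == -1)
        term = Fraction(lam) ** I * (1 + Fraction(1, lam)) ** N
        for i in range(n):
            for j in range(n):
                if B[i][j]:
                    term *= Fraction(M[i][j]) ** B[i][j]
        total += term
    return total
```

For example, `lambda_det_sum([[1, 2, 3], [4, 5, 6], [7, 8, 10]], -1)` returns −3, the ordinary determinant.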
We remark that the λ-determinant shares some properties with ordinary
determinants. For example, if M is the n by n Vandermonde matrix with
entries $M_{ij} = a_i^{j-1}$, i, j = 1, …, n, then
$$\det{}_{\lambda} M = \prod_{1 \le s < r \le n} \bigl(a_r + \lambda a_s\bigr),$$
where the product is over all pairs (r, s) with 1 ≤ s < r ≤ n.
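The Vandermonde product $\prod_{s<r}(a_r + \lambda a_s)$ can be checked numerically. In the sketch below (ours), `lam_det` implements the λ-analogue of Dodgson's identity, with the minus sign replaced by +λ; it assumes the interior λ-minors it divides by are nonzero:

```python
from fractions import Fraction

def lam_det(M, lam):
    """lambda-determinant via condensation:
    det_lam(M) * det_lam(interior) = det_lam(top-left) * det_lam(bottom-right)
                                     + lam * det_lam(top-right) * det_lam(bottom-left)."""
    n = len(M)
    if n == 0:
        return Fraction(1)
    if n == 1:
        return Fraction(M[0][0])
    tl = [row[:-1] for row in M[:-1]]
    br = [row[1:] for row in M[1:]]
    tr = [row[1:] for row in M[:-1]]
    bl = [row[:-1] for row in M[1:]]
    interior = [row[1:-1] for row in M[1:-1]]
    return (lam_det(tl, lam) * lam_det(br, lam)
            + lam * lam_det(tr, lam) * lam_det(bl, lam)) / lam_det(interior, lam)

a = [Fraction(2), Fraction(3), Fraction(5)]
lam = Fraction(2)
V = [[x ** j for j in range(3)] for x in a]  # Vandermonde, M[i][j] = a_i^j
lhs = lam_det(V, lam)
rhs = Fraction(1)
for r in range(3):
    for s in range(r):
        rhs *= a[r] + lam * a[s]
print(lhs == rhs, lhs)  # True 693
```

With λ = −1 the recursion reduces to ordinary condensation, so `lam_det` then returns the ordinary determinant.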
REFERENCES
1. G. E. ANDREWS, Plane partitions III: The weak Macdonald conjecture, Invent. Math. 53
(1979), 193–225.