
ADVANCES IN MATHEMATICS 62, 169-184 (1986)

Determinants and Alternating Sign Matrices


DAVID P. ROBBINS AND HOWARD RUMSEY, JR.
Institute for Defense Analyses, Thanet Road,
Princeton, New Jersey 08540

Let M be an n by n matrix. By a connected minor of M of size k we mean a minor formed
from k consecutive rows and k consecutive columns. We give formulas for det M in terms of
connected minors, one involving minors of two consecutive sizes and one involving minors of
three consecutive sizes. The formulas express det M as sums indexed by sets of alternating sign
matrices. These matrices are described here and by W. H. Mills, D. P. Robbins, and H. Rumsey, Jr.
(Invent. Math. 66 (1982), 73-87; J. Combin. Theory Ser. A 34 (1983), 340-359). The
former study has led to the solution of Macdonald's conjecture on cyclically symmetric plane
partitions (G. E. Andrews, Invent. Math. 53 (1979), 193-225; I. G. Macdonald, "Symmetric
Functions and Hall Polynomials," p. 53, Oxford Univ. Press (Clarendon), Oxford, 1979).
© 1986 Academic Press, Inc.

1. INTRODUCTION

By a connected minor of a matrix we mean a minor formed from consecutive rows and consecutive columns. We will give formulas which
express the determinant of a matrix in terms of certain connected minors.
These formulas are obtained by using a method of Dodgson [2]
(sometimes known as Lewis Carroll) for computing determinants. Both
formulas involve alternating sign matrices, which we shall describe below
and which are discussed at greater length in [6, 7]. Alternating sign
matrices appear to have extremely close connections with the descending
plane partitions introduced by Andrews in [1]. Their study led in [6] to a
proof of Macdonald's conjecture [1; 5, p. 53] on cyclically symmetric plane
partitions.
Let M be an n by n matrix. We denote its entries by $M_{ij}$, where i and j
vary from 1 to n. We will assume until the last section that all the connected minors of M are non-zero. Let $M_{NW}$, $M_{NE}$, $M_{SW}$, $M_{SE}$ be the n-1 by
n-1 connected minors in the northwest, northeast, southwest, and
southeast corners of M and let $M_C$ be the central n-2 by n-2 minor.
Then it is known that

$$M_C \det M = M_{NW} M_{SE} - M_{NE} M_{SW}. \qquad (1)$$



If we assign the value 1 to minors of size 0, then the identity is valid for
$n \ge 2$. This identity was derived by Dodgson in [2] as a special case of an
identity due to Jacobi [4, p. 9; 8, p. 77] relating the minors of a matrix to
the minors of its adjugate (or cofactor) matrix. We will call it Dodgson's
identity. Dodgson pointed out in [2] that (1) could be used iteratively to
compute the determinant of a matrix. Given the values of all connected
minors of sizes k-1 and k, one uses (1) to compute all the connected
minors of size k+1. Since we know the minors of sizes 0 and 1, we can use
(1) repeatedly to obtain det M. The method is mentioned in some texts on
numerical analysis. For example, see [3].
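As an illustration, here is a minimal sketch of this condensation scheme (in Python; all naming is ours, not the paper's). Following the paper's standing assumption, it presumes that every connected minor it divides by is non-zero:

    from fractions import Fraction

    def dodgson_det(M):
        """Determinant via repeated use of Dodgson's identity (1)."""
        n = len(M)
        prev = [[Fraction(1)] * (n + 1) for _ in range(n + 1)]   # size-0 minors are 1
        curr = [[Fraction(e) for e in row] for row in M]         # size-1 minors
        for k in range(1, n):
            # curr[i][j] is the size-k connected minor with upper left entry (i+1, j+1);
            # identity (1) yields the size-(k+1) minors from those of sizes k and k-1.
            nxt = [[(curr[i][j] * curr[i + 1][j + 1] - curr[i + 1][j] * curr[i][j + 1])
                    / prev[i + 1][j + 1]
                    for j in range(n - k)] for i in range(n - k)]
            prev, curr = curr, nxt
        return curr[0][0]

    print(dodgson_det([[2, 1, 0], [1, 3, 1], [0, 1, 2]]))        # 8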
If we think of the connected minors of sizes k and k+1 as known, then
Dodgson's method will yield a formula expressing the determinant of M as
a rational function of these minors. In Theorem 1 we give an explicit formula for this rational function. It has several interesting properties. In particular, it can be written as a sum of terms each of which is plus or minus a
monomial in which each variable appears with exponent 1, -1, or 0. In
Theorem 2 we give a modification of this formula expressing det M in
terms of minors of three consecutive sizes, k-1, k, and k+1. This formula
has a similar form but it has the additional property that the denominator
in each monomial involves only the minors of size k. In both formulas the
terms are indexed by sets of alternating sign matrices.

2. THE RATIONAL FUNCTIONS FOR MINORS OF TWO CONSECUTIVE SIZES

For each pair of positive integers (i, j) we introduce indeterminates $x_{ij}$
and $y_{ij}$. In addition we let $\lambda$ be another indeterminate. We will work in the
field of rational functions in the x's, y's, and $\lambda$ with coefficients in an
arbitrary field. For any rational function R in our indeterminates we let
T(R) be the result of simultaneously replacing each $y_{ij}$ by $x_{ij}$ and each $x_{ij}$ by

$$\frac{x_{ij}\, x_{i+1,j+1} + \lambda\, x_{i+1,j}\, x_{i,j+1}}{y_{i+1,j+1}}.$$

Let $R_k$ be the rational function which results when we apply the substitution T to the rational function $y_{11}$ k times in succession. For example, we
have

$$R_1 = x_{11}, \qquad R_2 = \frac{x_{11}\, x_{22} + \lambda\, x_{21}\, x_{12}}{y_{22}}.$$

Let $\lambda = -1$. In $R_k$ substitute the connected minors of size n-k+1 and
size n-k whose upper left entries are $M_{ij}$ for $x_{ij}$ and $y_{ij}$, respectively. Then
$R_k$ becomes the determinant of M. One may verify this directly for
k = 0, 1, 2. It follows for larger k by induction once we observe that the
substitution T corresponds to expressing the size n-k+1 connected
minors in terms of the size n-k and n-k-1 connected minors with the
help of Dodgson's identity. Thus a formula for the R's will yield a formula
for the determinant. We derive such a formula with $\lambda$ indeterminate since
the derivation is no more difficult and has some interesting applications.
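The substitution T is easy to experiment with symbolically. The following sketch (using sympy; all names are ours) builds $R_2$ and confirms that at $\lambda = -1$, substituting the connected minors of a 3 by 3 matrix turns $R_2$ into its determinant:

    import sympy as sp

    lam = sp.symbols('lam')
    x = {(i, j): sp.Symbol(f'x{i}{j}') for i in range(1, 4) for j in range(1, 4)}
    y = {(i, j): sp.Symbol(f'y{i}{j}') for i in range(1, 4) for j in range(1, 4)}

    def T(R):
        # simultaneously: y_ij -> x_ij, and
        # x_ij -> (x_ij x_{i+1,j+1} + lam x_{i+1,j} x_{i,j+1}) / y_{i+1,j+1}
        rules = {y[i, j]: x[i, j] for (i, j) in y}
        rules.update({x[i, j]: (x[i, j] * x[i + 1, j + 1]
                                + lam * x[i + 1, j] * x[i, j + 1]) / y[i + 1, j + 1]
                      for (i, j) in x if i < 3 and j < 3})
        return R.subs(rules, simultaneous=True)

    R2 = T(T(y[1, 1]))        # equals (x11*x22 + lam*x21*x12)/y22

    M = sp.Matrix([[2, 1, 0], [1, 3, 1], [0, 1, 2]])
    minor = lambda k, i, j: M[i - 1:i - 1 + k, j - 1:j - 1 + k].det()
    vals = {lam: -1, y[2, 2]: minor(1, 2, 2)}
    vals.update({x[i, j]: minor(2, i, j) for i in (1, 2) for j in (1, 2)})
    assert R2.subs(vals) == M.det()       # both equal 8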

3. THE FIRST FORMULA FOR THE R’s

We need to introduce some extra notation in order to state our formula
conveniently.

DEFINITION. A square matrix is an alternating sign matrix if it satisfies:

(i) all entries are 1, -1, or 0,
(ii) every row and column has sum 1,
(iii) in every row and column the non-zero entries alternate in sign.

Note that in every row and column of an alternating sign matrix the first
and last non-zero entries must be a 1, so that all the partial sums of every
row and column must be 0 or 1.
We denote by $\mathcal{A}_n$ the set of n by n alternating sign matrices. All permutation matrices are alternating sign matrices. These are the only alternating sign 1 by 1 and 2 by 2 matrices. There are exactly seven 3 by 3 alternating sign matrices, the six 3 by 3 permutation matrices and the matrix

$$\begin{pmatrix} 0 & 1 & 0 \\ 1 & -1 & 1 \\ 0 & 1 & 0 \end{pmatrix}.$$
Numerous conjectures concerning alternating sign matrices are described in
[7]. For example, there is overwhelming evidence that the number $|\mathcal{A}_n|$ of
n by n alternating sign matrices is given by

$$|\mathcal{A}_n| = \prod_{j=0}^{n-1} \frac{(3j+1)!}{(n+j)!},$$

but this has not yet been proved.
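The conjectured product is easy to evaluate exactly (a sketch; names are ours), and its first values 1, 2, 7, 42, 429 agree with brute-force counts such as the one above:

    from math import factorial

    def conjectured_count(n):
        """prod_{j=0}^{n-1} (3j+1)!/(n+j)!; the quotient is an exact integer."""
        num = den = 1
        for j in range(n):
            num *= factorial(3 * j + 1)
            den *= factorial(n + j)
        return num // den

    print([conjectured_count(n) for n in range(1, 6)])   # [1, 2, 7, 42, 429]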



Let A be an n by n matrix. We define the corner sum matrix $\bar A$ of A by

$$\bar A_{ij} = \sum_{k \le i,\; l \le j} A_{kl},$$

where the sum ranges over all pairs of integers (k, l) with $k \le i$ and $l \le j$
and we regard $A_{kl}$ as 0 if k or l is out of the range 1,..., n. A can be
recovered from $\bar A$ by taking mixed second differences:

$$A_{ij} = \bar A_{ij} - \bar A_{i-1,j} - \bar A_{i,j-1} + \bar A_{i-1,j-1}.$$

Observe that the differences $\bar A_{ij} - \bar A_{i-1,j}$ and $\bar A_{ij} - \bar A_{i,j-1}$ are partial sums of
the rows and columns of A. Using this observation one may prove the
following characterization of alternating sign matrices in terms of corner
sums. We omit the details.
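In code, the corner sum matrix and its inversion by mixed second differences look as follows (a sketch, names ours; row and column 0 of the corner sum array are kept as zeros so that out-of-range terms vanish):

    def corner_sum(A):
        n = len(A)
        Abar = [[0] * (n + 1) for _ in range(n + 1)]   # Abar[i][j] = sum_{k<=i, l<=j} A_kl
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                Abar[i][j] = (A[i - 1][j - 1] + Abar[i - 1][j]
                              + Abar[i][j - 1] - Abar[i - 1][j - 1])
        return Abar

    def second_diff(Abar):
        """Recover A from its corner sum matrix by mixed second differences."""
        n = len(Abar) - 1
        return [[Abar[i][j] - Abar[i - 1][j] - Abar[i][j - 1] + Abar[i - 1][j - 1]
                 for j in range(1, n + 1)] for i in range(1, n + 1)]

    B = [[0, 1, 0], [1, -1, 1], [0, 1, 0]]
    assert second_diff(corner_sum(B)) == B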

LEMMA 1. An n by n matrix A is an alternating sign matrix if and only if
$\bar A$ satisfies:

(i) $\bar A_{in} = \bar A_{ni} = i$ for i = 1,..., n,
(ii) $\bar A_{ij} - \bar A_{i-1,j}$ and $\bar A_{ij} - \bar A_{i,j-1}$ are always 0 or 1 for $1 \le i, j \le n$.

DEFINITION. Let A be a k-1 by k-1 alternating sign matrix and B a k
by k alternating sign matrix. We say that A and B are compatible and write
$A \sim B$ if we have all the inequalities

(i) $\bar A_{ij} \ge \bar B_{ij}$,
(ii) $\bar A_{ij} \ge \bar B_{i+1,j+1} - 1$,
(iii) $\bar A_{ij} \le \bar B_{i+1,j}$,
(iv) $\bar A_{ij} \le \bar B_{i,j+1}$,

(2)

for $0 \le i, j \le k-1$.


Remark. The preceding form of the definition of compatibility is the
only one that we shall need. There is, however, a more symmetrical version
in which we require only (i) and the analogous conditions on the other
three corners of A and B.

One may verify that the single 1 by 1 alternating sign matrix is compatible with both of the 2 by 2 alternating sign matrices and that identity
matrices of two consecutive sizes are always compatible.
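Restated in terms of corner sums (see inequality (6) below), compatibility becomes a one-line test. This sketch reuses corner_sum from the previous sketch (names ours) and confirms the two claims just made:

    def compatible(A, B):
        """A a (k-1) by (k-1) ASM, B a k by k ASM; the corner sum test of (2)."""
        k = len(B)
        Ab, Bb = corner_sum(A), corner_sum(B)    # Ab has indices 0..k-1, Bb has 0..k
        return all(max(Bb[i][j], Bb[i + 1][j + 1] - 1)
                   <= Ab[i][j] <=
                   min(Bb[i][j + 1], Bb[i + 1][j])
                   for i in range(k) for j in range(k))

    assert compatible([[1]], [[1, 0], [0, 1]])   # the 1 by 1 ASM is compatible
    assert compatible([[1]], [[0, 1], [1, 0]])   # with both 2 by 2 ASMs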
We shall need two notions concerning alternating sign matrices which
are similar to inversions for ordinary permutation matrices. We define the
first now. We call these flips.

For an n by n alternating sign matrix A we define its number of flips,
F(A), by

$$F(A) = |\bar I_n| - |\bar A|.$$

Here $|\bar A|$ is the sum of the entries $\bar A_{ij}$ for $1 \le i, j \le n$ and $I_n$ is the n by n
identity matrix. The number of flips is always a non-negative integer.
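A quick computational transcription (reusing corner_sum from the earlier sketch; names ours; note that the corner sums of $I_n$ are $\min(i, j)$):

    def flips(A):
        """F(A): the corner sum total of the identity minus that of A."""
        n = len(A)
        Abar = corner_sum(A)
        ident = sum(min(i, j) for i in range(1, n + 1) for j in range(1, n + 1))
        return ident - sum(Abar[i][j] for i in range(1, n + 1) for j in range(1, n + 1))

    assert flips([[1, 0], [0, 1]]) == 0
    assert flips([[0, 1], [1, 0]]) == 1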
Let $u_{ij}$ be any array of rational functions defined for pairs of positive
integers (i, j) and let A be a k by k matrix with integer entries. It is convenient to define

$$u^A = \prod_{i,j=1}^{k} u_{ij}^{A_{ij}}.$$

Given any array of rational functions $u_{ij}$, we define two new arrays s(u)
and d(u) by

$$s(u)_{ij} = u_{i+1,j+1}$$

and

$$d(u)_{ij} = u_{ij}\, u_{i+1,j+1} + \lambda\, u_{i+1,j}\, u_{i,j+1}.$$

When we do arithmetic operations on arrays of rational functions, these
operations are done pointwise. For example, if u and v are two arrays, then
their product uv is defined by $(uv)_{ij} = u_{ij} v_{ij}$.

Now we can state the formula for $R_k$.

THEOREM 1. For $k \ge 2$,

$$R_k = \sum_{A \sim B} \lambda^{F(B) - F(A)}\, x^B\, s(y)^{-A},$$

where the sum is over all compatible pairs (A, B) of alternating sign matrices,
B in $\mathcal{A}_k$ and A in $\mathcal{A}_{k-1}$.

Here $R_k$ is the rational function defined at the beginning of Section 2, $\mathcal{A}_k$
is the set of k by k alternating sign matrices, F is the function giving the
number of flips of an alternating sign matrix, and s is a shift operator. If
$\lambda = -1$ and, for some n by n matrix M, x and y are the arrays of connected
minors of M of sizes n-k+1 and n-k, then $R_k$ will be its determinant.
Proof. We may check directly that the theorem is true when k = 2. We
proceed by induction. Suppose we already know the theorem for k. Then,
making the substitution T, we have

$$R_{k+1} = \sum_{A \sim B} \lambda^{F(B)-F(A)}\, d(x)^B\, s(y)^{-B}\, s(x)^{-A} = \sum_{B} \lambda^{F(B)}\, d(x)^B\, s(y)^{-B} \sum_{A:\, A \sim B} \lambda^{-F(A)}\, s(x)^{-A}. \qquad (3)$$
On the other hand we must show that

$$R_{k+1} = \sum_{B} \lambda^{-F(B)}\, s(y)^{-B} \sum_{C:\, C \sim B} \lambda^{F(C)}\, x^C. \qquad (4)$$

In these two equations A, B, and C are, respectively, alternating sign
matrices of sizes k-1, k, and k+1.

Thus we need to show that the expressions on the right sides of (3) and
(4) are equal.
Now fix a size k alternating sign matrix B. It will suffice to show that

$$\lambda^{2F(B)}\, d(x)^B \sum_{A:\, A \sim B} \lambda^{-F(A)}\, s(x)^{-A} = \sum_{C:\, C \sim B} \lambda^{F(C)}\, x^C. \qquad (5)$$

The rest of our proof depends on a description of the sets of size k-1
and size k+1 alternating sign matrices that are compatible with our fixed
size k alternating sign matrix B.

Let us begin with a description of the A's. Our definition of compatibility
(2) can be restated as requiring that the entries of $\bar A$ satisfy the inequality

$$\max(\bar B_{ij}, \bar B_{i+1,j+1} - 1) \le \bar A_{ij} \le \min(\bar B_{i,j+1}, \bar B_{i+1,j}) \qquad (6)$$

for all (i, j) with $0 \le i, j \le k-1$.

LEMMA 2. For every (i, j) with $0 \le i, j \le k-1$, the maximum on the left
of (6) is either equal to the minimum on the right side of (6) or is 1 less than
it. Moreover the left side is less than the right side exactly when
$B_{i+1,j+1} = -1$.

Proof. Part (ii) of Lemma 1 implies that the max does not exceed the
min in (6). Also their difference cannot exceed the difference between $\bar B_{ij}$
and $\bar B_{i,j+1}$, which is $\le 1$. The difference is 1 if and only if

$$\bar B_{ij} + 1 = \bar B_{i+1,j+1} = \bar B_{i,j+1} = \bar B_{i+1,j}. \qquad (7)$$

Taking mixed second differences, we have $B_{i+1,j+1} = -1$. Conversely, if
$B_{i+1,j+1} = -1$, then the partial row sum $\sum_{s \le j+1} B_{i+1,s} = 0$. This
implies that $\bar B_{i+1,j+1} = \bar B_{i,j+1}$. Similarly, $\bar B_{i+1,j+1} = \bar B_{i+1,j}$. The first
equality in (7) now follows by taking differences. ∎

LEMMA 3. Any matrix A with integer entries for which $\bar A$ satisfies (6) is
an alternating sign matrix.

Proof. We show that $\bar A$ satisfies the conditions of Lemma 1. First we
have

$$\bar A_{i,j+1} \ge \bar B_{i,j+1} \ge \bar A_{ij},$$

which shows that $\bar A$ is weakly increasing in the second coordinate. Also

$$\bar A_{i,j+1} - \bar A_{ij} \le \min(\bar B_{i,j+2}, \bar B_{i+1,j+1}) - \max(\bar B_{ij}, \bar B_{i+1,j+1} - 1) \le \bar B_{i+1,j+1} - (\bar B_{i+1,j+1} - 1) = 1,$$

which shows that the steps are always 0 or 1. If $1 \le i \le k-1$, then

$$i \le \max(\bar B_{i,k-1}, \bar B_{i+1,k} - 1) \le \bar A_{i,k-1} \le \min(\bar B_{ik}, \bar B_{i+1,k-1}) \le i,$$

which shows that $\bar A$ satisfies the condition (i) of Lemma 1. ∎


Let A be a matrix with integer entries. We define its "positive part" $A^+$
to be the result of replacing all the negative entries of A by 0 (and leaving
the remaining entries unchanged). By the "negative part" $A^-$ of A we mean
the result of replacing all the positive entries of A by 0 and then negating
the matrix. With this notation we have $A = A^+ - A^-$.
Now let $A^0$ be the k-1 by k-1 alternating sign matrix compatible with
B with minimum corner sum matrix; that is, from (6), the matrix $A^0$ with

$$\bar A^0_{ij} = \max(\bar B_{ij}, \bar B_{i+1,j+1} - 1)$$

for all (i, j) with $0 \le i, j \le k-1$. Then all other compatible A's can be
obtained from this one by adding 1's to a subset of those $\bar A^0_{ij}$'s for which
$B_{i+1,j+1} = -1$. For these pairs (i, j) the effect of one of these additions on
$A^0$ is to add 1 to the entries of $A^0$ with indices (i, j) and (i+1, j+1) and to
subtract 1 from those entries with indices (i+1, j) and (i, j+1). The effect
on $s(x)^{-A^0}$ is to multiply it by $f_{i+1,j+1}$, where

$$f_{ij} = \frac{x_{i,j+1}\, x_{i+1,j}}{x_{ij}\, x_{i+1,j+1}}.$$

Since 1 has been added to a single entry of the corner sum matrix, the effect
on $\lambda^{-F(A)}$ is to multiply by $\lambda$. Thus the sum on the left side of (5) can be
written as

$$\sum_{A:\, A \sim B} \lambda^{-F(A)}\, s(x)^{-A} = \lambda^{-F(A^0)}\, s(x)^{-A^0} \prod (1 + \lambda f_{i+1,j+1}). \qquad (8)$$

The product on the right side is over all pairs (i, j) with $B_{i+1,j+1} = -1$.
After replacing i and j by i-1 and j-1 in this product, we can rewrite (8)
as

$$\sum_{A:\, A \sim B} \lambda^{-F(A)}\, s(x)^{-A} = \lambda^{-F(A^0)}\, s(x)^{-A^0} \prod_{B_{ij} = -1} (1 + \lambda f_{ij}). \qquad (9)$$
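The extremal matrix $A^0$ can be constructed explicitly from B. The following sketch (reusing corner_sum and second_diff from the earlier sketches; names ours) builds it from the formula for its corner sums:

    def A0_from_B(B):
        """The compatible (k-1) by (k-1) ASM with minimum corner sum matrix."""
        k = len(B)
        Bb = corner_sum(B)
        A0bar = [[max(Bb[i][j], Bb[i + 1][j + 1] - 1) for j in range(k)]
                 for i in range(k)]           # indices 0..k-1; row and column 0 stay 0
        return second_diff(A0bar)             # recover A0 by mixed second differences

    # For the 3 by 3 ASM with a -1 in the middle, A0 is the 2 by 2 anti-diagonal matrix:
    assert A0_from_B([[0, 1, 0], [1, -1, 1], [0, 1, 0]]) == [[0, 1], [1, 0]]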

Now we make a similar calculation for the sum on the right of (5). If we
rewrite the conditions for compatibility in terms of the $\bar C$'s, we see that they
imply that

$$\max(\bar B_{i,j-1}, \bar B_{i-1,j}) \le \bar C_{ij} \le \min(\bar B_{ij}, \bar B_{i-1,j-1} + 1) \qquad (10)$$

for all (i, j) with $1 \le i, j \le k$. Then we have the following two lemmas
analogous to Lemmas 2 and 3. We omit their proofs.

LEMMA 4. For every (i, j) with $1 \le i, j \le k$, the maximum on the left of
(10) is either equal to the minimum on the right side of (10) or 1 less than it.
Moreover the left side is less than the right side exactly when $B_{ij} = 1$.

LEMMA 5. Any k+1 by k+1 matrix C with integer entries, such that $\bar C$
satisfies (10) and the condition (i) of Lemma 1, is an alternating sign matrix.
Now we let $C^0$ be the k+1 by k+1 alternating sign matrix compatible
with B that has maximum corner sum matrix. That is, we let

$$\bar C^0_{ij} = \min(\bar B_{ij}, \bar B_{i-1,j-1} + 1)$$

for all (i, j) with $1 \le i, j \le k$ and require that it satisfy the condition (i) of
Lemma 1. Then all other compatible C's can be obtained from this one by
subtracting 1's from a subset of those $\bar C^0_{ij}$'s for which $B_{ij} = 1$. The effect of
one of these subtractions on $C^0$, for these pairs (i, j), is to subtract 1 from
the entries of $C^0$ with indices (i, j) and (i+1, j+1) and to add 1 to those
entries with indices (i+1, j) and (i, j+1). The effect on $x^{C^0}$ is to multiply it
by $f_{ij}$. Since 1 has been subtracted from a single entry of the corner sum
matrix, the effect on $\lambda^{F(C)}$ is to multiply by $\lambda$. Thus the sum on the right side
of (5) can be written as

c 2 F(c)xc = A@‘x@ n (1 + AjU), (11)


C.BzC
ALTERNATING SIGN MATRICES 177

where the product on the right side of ( 11) is over all pairs (i, j) with
B,= 1. We can rewrite (11) as

(12)

Now, combining (5), (9), and (12), and observing that $d(x)_{ij} = x_{ij}\, x_{i+1,j+1}\,(1 + \lambda f_{ij})$, so that
$d(x)^B \prod_{B_{ij} = -1} (1 + \lambda f_{ij}) = (x\, s(x))^B \prod_{B_{ij} = 1} (1 + \lambda f_{ij})$, we see that we have reduced the
proof of Theorem 1 to showing that

$$\lambda^{2F(B) - F(A^0)}\, (x\, s(x))^B\, s(x)^{-A^0} = \lambda^{F(C^0)}\, x^{C^0}.$$

For this it suffices to show that

$$s(x)^{A^0}\, x^{C^0} = (x\, s(x))^B \qquad (13)$$

and that

$$F(A^0) + F(C^0) = 2F(B). \qquad (14)$$

From the definitions of $A^0$ and $C^0$ we have

$$\bar A^0_{i-1,j-1} + \bar C^0_{ij} = \max(\bar B_{i-1,j-1}, \bar B_{ij} - 1) + \min(\bar B_{ij}, \bar B_{i-1,j-1} + 1) = \bar B_{ij} + \bar B_{i-1,j-1} \qquad (15)$$

whenever $1 \le i, j \le k$. One may verify directly that (15) also holds if i or j is
0, and, from the condition (i) of Lemma 1, that (15) also holds if i or j is
k+1. Thus we may take mixed second differences and conclude, recalling
that terms with out of range indices are to be regarded as zero, that

$$A^0_{i-1,j-1} + C^0_{ij} = B_{ij} + B_{i-1,j-1} \qquad (16)$$

for all i and j. But the two sides of (16) are the exponents of $x_{ij}$ on the two
sides of (13). Thus we have proved (13).
It is easily verified that

$$|\bar I_k| = \frac{k(k+1)(2k+1)}{6}.$$

Now using the definition of F, we can reduce the proof of Eq. (14) to showing that

$$|\bar A^0| + |\bar C^0| = 2|\bar B| + 2k + 1. \qquad (17)$$

If we sum both sides of (15) for $1 \le i, j \le k$, then we obtain

$$|\bar A^0| + |\bar C^0| - [1 + 2 + \cdots + k + (k+1) + k + \cdots + 1] = |\bar B| + |\bar B| - [1 + 2 + \cdots + (k-1) + k + (k-1) + \cdots + 1],$$

which is equivalent to (17). This completes the proof of Theorem 1. ∎

4. FORMULA IN TERMS OF MINORS OF THREE CONSECUTIVE SIZES

Now we describe our alternate formula. We introduce a new array z of
rational functions which satisfy

$$x\, s(z) = d(y).$$

If $\lambda = -1$ and the $x_{ij}$ and $y_{ij}$ represent the size n-k+1 and size n-k
minors with upper left entry having indices (i, j), then, still assuming that
none of the connected minors are 0, $z_{ij}$ will be the size n-k-1 minor
whose upper left entry has indices (i, j). We will express $R_k$ as a sum of
monomials in the x's, y's, and z's with only y's in the denominator.
Using Theorem 1 and substituting y for x in (9), we have

$$R_k = \sum_{B \in \mathcal{A}_k} \lambda^{F(B) - F(A^0(B))}\, x^B\, s(y)^{-A^0(B)} \prod_{B_{ij} = -1} \left( 1 + \lambda\, \frac{y_{i,j+1}\, y_{i+1,j}}{y_{ij}\, y_{i+1,j+1}} \right).$$

Here we have used the notation $A^0(B)$ to indicate the dependence of $A^0$
on B. Using the definition of the z's and simplifying, we have

$$R_k = \sum_{B \in \mathcal{A}_k} \lambda^{F(B) - F(A^0(B))}\, \frac{x^{B^+}\, s(z)^{B^-}}{s(y)^{A^0(B)}\, (y\, s(y))^{B^-}}.$$

Introduce the abbreviations

$$P(B) = F(B) - F(A^0(B)) \qquad (18)$$

and

$$B^*_{ij} = A^0_{ij} + B^-_{i+1,j+1} + B^-_{ij}. \qquad (19)$$

In terms of these we have proved

THEOREM 2a. For $k \ge 2$ we have

$$R_k = \sum_{B \in \mathcal{A}_k} \lambda^{P(B)}\, \frac{x^{B^+}\, s(z)^{B^-}}{s(y)^{B^*}}.$$

Here $R_k$ is the rational function defined at the beginning of Section 2. $\mathcal{A}_k$
is the set of k by k alternating sign matrices. We shall see in Theorem 2c
that the function P generalizes the function giving the number of inversions
of a permutation matrix. $B^+$ and $B^-$ are the positive and negative parts of
B defined in Section 3. $B^*$ is a k-1 by k-1 matrix depending on B. Its
construction is defined earlier in this section. An interesting alternate
description is given in Theorem 2b below. The function s is the shift operator
defined in Section 3. If $\lambda = -1$ and, for some n by n matrix M, x, y, and z
are the arrays of connected minors of M of sizes n-k+1, n-k, and
n-k-1, then $R_k$ will be its determinant. Since only the y's appear in the
denominators, this is an expression for the determinant that has the form
promised earlier.
While the preceding theorem was easy to prove, we would prefer a simpler description of the two functions $B^*$ and P(B). We derive these descriptions in the remainder of this section.
We begin with a geometric description of $B^*$. This will yield the form in
which our results were originally discovered. Let B be any k by k alternating sign matrix, $k \ge 2$. Let $B_{ij}$ and $B_{i,j+1}$ be a pair of adjacent entries in
a row of B. If $\sum_{s \le j} B_{is} = 1$, we place an arrow between these two entries
which points to the left. If the sum is 0, we place an arrow pointing to the
right. We place vertical arrows similarly between every pair of adjacent
entries in the same column. For example, if

$$B = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & -1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix},$$
then, after we insert the arrows, we have

    0 → 1 ← 0 ← 0
    ↓   ↑   ↓   ↓
    1 ← -1 → 0 → 1
    ↑   ↓   ↓   ↑
    0 → 0 → 1 ← 0
    ↑   ↓   ↑   ↑
    0 → 1 ← 0 ← 0

Fill in the boxes formed by the arrows according to the following rules:

(i) If two opposite sides of a box have parallel arrows, then fill the box with a 0.
(ii) If the arrows circulate around the box in either direction, then fill the box with a -1.
(iii) Otherwise fill the box with a 1.
For example, if B is as above, then the resulting picture is

    0 → 1 ← 0 ← 0
    ↓ 1 ↑ 1 ↓ 0 ↓
    1 ← -1 → 0 → 1
    ↑ 1 ↓ 0 ↓ 1 ↑
    0 → 0 → 1 ← 0
    ↑ 0 ↓ 1 ↑ 0 ↑
    0 → 1 ← 0 ← 0

THEOREM 2b. If B is any n by n alternating sign matrix, then $B^*$ is the
n-1 by n-1 matrix formed inside the boxes of arrows according to the
preceding rules.

For example, when B is given as above, then

$$B^* = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}.$$
To begin our proof of Theorem 2b we need to interpret the rules for constructing $B^*$ algebraically. Define matrices of partial sums on the first and
second subscripts of B by

$$B^{(1)}_{ij} = \sum_{s \le i} B_{sj}$$

and

$$B^{(2)}_{ij} = \sum_{s \le j} B_{is}.$$

Then $B^{(1)}_{ij} = 1$ exactly when the arrow between $B_{ij}$ and $B_{i+1,j}$ points upward,
and $B^{(2)}_{ij} = 1$ exactly when the arrow between $B_{ij}$ and $B_{i,j+1}$ points to the
left. Then the rules (i), (ii), and (iii) above are equivalent to

$$B^*_{ij} = \left( B^{(2)}_{i+1,j} - B^{(2)}_{ij} \right) \left( B^{(1)}_{i,j+1} - B^{(1)}_{ij} \right) \qquad (20)$$

when $1 \le i, j \le k-1$, and $B^*_{ij} = 0$ for other values of i and j. Our proof
proceeds by restating the formula (19) which defines $B^*$ in terms of the
$B^{(1)}$'s and the $B^{(2)}$'s and then showing that it is equivalent to (20).

Since negative entries never appear on the boundary of an alternating
sign matrix, we may conclude from (19) that $B^*_{ij} = 0$ unless $1 \le i, j \le k-1$.
Rewrite (19) as

$$B^*_{ij} = B^+_{ij} + B^-_{i+1,j+1} + (A^0_{ij} - B_{ij}). \qquad (21)$$

We will find expressions for each of the three terms on the right of (21) in
terms of the $B^{(1)}$'s and the $B^{(2)}$'s.

Suppose that $1 \le i, j \le k-1$. We have

$$B^+_{ij} = B_{ij}\, B^{(1)}_{ij} = \left( B^{(2)}_{ij} - B^{(2)}_{i,j-1} \right) B^{(1)}_{ij} \qquad (22)$$

since $B^{(1)}_{ij} = 0$ when $B_{ij} = -1$. Similarly,

$$B^-_{i+1,j+1} = -\left( B^{(1)}_{i+1,j+1} - B^{(1)}_{i,j+1} \right) B^{(2)}_{i+1,j}. \qquad (23)$$

From the definition of $A^0$ we have

$$\bar A^0_{ij} - \bar B_{ij} = \max(0,\; \bar B_{i+1,j+1} - \bar B_{ij} - 1)$$

if $0 \le i, j \le k-1$. The right side has the value 1 when $\bar B_{i+1,j+1}$ exceeds $\bar B_{ij}$
by 2 and is 0 otherwise. It follows that

$$\bar A^0_{ij} - \bar B_{ij} = (\bar B_{i+1,j+1} - \bar B_{i+1,j})(\bar B_{i+1,j} - \bar B_{ij}) = B^{(1)}_{i+1,j+1}\, B^{(2)}_{i+1,j} = (\bar B_{i+1,j+1} - \bar B_{i,j+1})(\bar B_{i,j+1} - \bar B_{ij}) = B^{(2)}_{i+1,j+1}\, B^{(1)}_{i,j+1}. \qquad (24)$$

When we take mixed second differences of both sides of (24), we obtain

$$A^0_{ij} - B_{ij} = B^{(1)}_{i+1,j+1}\, B^{(2)}_{i+1,j} - B^{(1)}_{i,j+1}\, B^{(2)}_{ij} - B^{(1)}_{ij}\, B^{(2)}_{i+1,j} + B^{(1)}_{ij}\, B^{(2)}_{i,j-1} \qquad (25)$$

for $1 \le i, j \le k-1$, where, in the third term on the right, we have used
the second form in (24). Now (20) and Theorem 2b follow by adding
Eqs. (22), (23), and (25).
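In code, formula (20) produces $B^*$ directly from the two arrays of partial sums (a sketch, names ours); it reproduces the 4 by 4 example above:

    def partial_sums(B):
        k = len(B)
        B1 = [[sum(B[s][j] for s in range(i + 1)) for j in range(k)]
              for i in range(k)]              # B^(1): partial sums down each column
        B2 = [[sum(B[i][:j + 1]) for j in range(k)]
              for i in range(k)]              # B^(2): partial sums along each row
        return B1, B2

    def B_star(B):
        k = len(B)
        B1, B2 = partial_sums(B)
        return [[(B2[i + 1][j] - B2[i][j]) * (B1[i][j + 1] - B1[i][j])
                 for j in range(k - 1)] for i in range(k - 1)]

    B = [[0, 1, 0, 0], [1, -1, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]]
    assert B_star(B) == [[1, 1, 0], [1, 0, 1], [0, 1, 0]]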
Next we give a simplified description of the function P(B). For any k by
k alternating sign matrix B we define the number I(B) of inversions of B by

$$I(B) = \sum_{i,j=1}^{k} B^{(1)}_{i-1,j}\, B^{(2)}_{i,j-1}. \qquad (26)$$

This definition generalizes the usual notion of inversions for permutation
matrices. It is equivalent to the definition given in [7], which is

$$I(B) = \sum B_{ij}\, B_{rs},$$

where the sum is over all i, j, r, s with $i < r$ and $j > s$.

THEOREM 2c. $P(B) = I(B) - N(B)$, where N(B) is the number of negative
entries in B.

Remark. If $B_{ij} = -1$, then the corresponding term of (26) is a 1. We call
these inversions "negative inversions" and the remaining positive terms of
(26) "positive inversions." Thus Theorem 2c states that P(B) is the number
of positive inversions of B. In [7] evidence is given which suggests that the
positive inversions of an alternating sign matrix correspond to the non-special
parts of a descending plane partition.

Proof of Theorem 2c. Using the definition of F and (18), we have

$$P(B) = |\bar A^0| - |\bar B| + k^2.$$

Now sum equation (24) for $0 \le i, j \le k-1$. We obtain

$$P(B) = |\bar A^0| - \big( |\bar B| - (1 + 2 + \cdots + k + \cdots + 1) \big) = \sum_{i,j=1}^{k} B^{(1)}_{ij}\, B^{(2)}_{i,j-1} = \sum_{i,j=1}^{k} \left( B^{(1)}_{i-1,j}\, B^{(2)}_{i,j-1} + B_{ij}\, B^{(2)}_{i,j-1} \right) = I(B) - N(B). \qquad ∎$$
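Theorem 2c can be spot-checked mechanically. This sketch reuses partial_sums, flips, and A0_from_B from the earlier sketches (names ours):

    def inversions(B):
        """I(B) as in (26); terms with a zero index vanish."""
        k = len(B)
        B1, B2 = partial_sums(B)
        return sum(B1[i - 1][j] * B2[i][j - 1]
                   for i in range(1, k) for j in range(1, k))

    def P(B):
        return flips(B) - flips(A0_from_B(B))       # definition (18)

    B = [[0, 1, 0, 0], [1, -1, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]]
    N = sum(row.count(-1) for row in B)
    assert P(B) == inversions(B) - N                # 3 == 4 - 1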

5. EXAMPLES. THE λ-DETERMINANT

One may define determinants inductively by assigning the value 1 to size
0 determinants, the usual value to 1 by 1 determinants, and then obtaining
larger determinants using Dodgson's identity. We now consider a similar
function, which we call the λ-determinant. We denote the λ-determinant of
a matrix M by $\det_\lambda M$. We begin the induction the same way but we
require instead that it satisfy the following λ form of Dodgson's identity:

$$M_C \det_\lambda M = M_{NW} M_{SE} + \lambda\, M_{NE} M_{SW},$$

for square matrices M of size $\ge 2$. Here the subscripted M's represent
λ-minors rather than ordinary minors as in Section 1.

Let M be an n by n matrix with all its λ-minors non-zero. From the
recursion defining $R_k$ it is straightforward to check that when we substitute
for the x's and y's the connected λ-minors of sizes n-k+1 and n-k of the
matrix M, then $R_k$ becomes the λ-determinant of M.
We may use the case k = n of Theorem 2 to obtain a formula for the
λ-determinant of M. Then the y's are all 1's, so that the z's satisfy
$s(z)\, x = 1 + \lambda$. After substituting the entries of M for the x's, we obtain, in
the obvious notation,

$$\det_\lambda M = \sum_{B \in \mathcal{A}_n} \lambda^{P(B)}\, (1 + \lambda)^{N(B)}\, M^B. \qquad (27)$$
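Formula (27) can be verified directly for small n. This sketch (sympy, reusing asm3, flips, and A0_from_B from the earlier sketches; names ours) assembles the sum for n = 3 and checks it against the ordinary determinant at $\lambda = -1$:

    import sympy as sp

    lam = sp.symbols('lam')

    def det_lambda3(M):
        """Formula (27) for n = 3, summing over the seven 3 by 3 ASMs."""
        total = sp.Integer(0)
        for A in asm3:
            B = [list(r) for r in A]
            p = flips(B) - flips(A0_from_B(B))          # P(B), by (18)
            nneg = sum(r.count(-1) for r in B)          # N(B)
            mono = sp.prod(M[i, j] ** B[i][j] for i in range(3) for j in range(3))
            total += lam ** p * (1 + lam) ** nneg * mono
        return total

    M = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'm{i}{j}'))
    assert sp.simplify(det_lambda3(M).subs(lam, -1) - M.det()) == 0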

This formula makes sense whenever the entries of M are non-zero, so it
provides an extension of the definition of the λ-determinant to this wider
class of matrices. The λ-determinant, even when extended, will continue to
satisfy the λ form of Dodgson's identity.

If we take $\lambda = -1$, then the λ-determinant becomes the ordinary determinant and all the terms on the right side of (27) except those
corresponding to permutation matrices vanish. Thus (27) becomes the
usual expression for a determinant as a sum over all permutations.
If we let M be the all 1's matrix, then, using the λ form of Dodgson's
identity, we can evaluate the λ-determinant as $(1 + \lambda)^{n(n-1)/2}$. Combining
this observation with the case $\lambda = 1$ of (27), we obtain

$$\sum_{B \in \mathcal{A}_n} 2^{N(B)} = 2^{n(n-1)/2},$$

a result which also appears in [7]. Note that if we could somehow replace
the 2 by a 1 inside the summation sign in the preceding equation, we would
be able to enumerate the n by n alternating sign matrices.
We remark that the λ-determinant shares some properties with ordinary
determinants. For example, if M is the n by n Vandermonde matrix with
entries $M_{ij} = a_j^{i-1}$, i, j = 1,..., n, then

$$\det_\lambda M = \prod (a_r + \lambda a_s),$$

where the product is over all pairs (r, s) with $1 \le s < r \le n$.
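And indeed the λ-Vandermonde product can be confirmed with the n = 3 sketch above (names ours):

    a1, a2, a3 = sp.symbols('a1 a2 a3')
    V = sp.Matrix([[1, 1, 1], [a1, a2, a3], [a1**2, a2**2, a3**2]])   # M_ij = a_j^(i-1)
    rhs = (a2 + lam * a1) * (a3 + lam * a1) * (a3 + lam * a2)
    assert sp.simplify(det_lambda3(V) - rhs) == 0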

REFERENCES

1. G. E. Andrews, Plane partitions (III): The weak Macdonald conjecture, Invent. Math. 53 (1979), 193-225.
2. C. L. Dodgson, Condensation of determinants, Proc. Roy. Soc. London 15 (1866), 150-155.
3. P. S. Dwyer, "Linear Computations," Wiley, New York, 1951.
4. C. G. J. Jacobi, De binis quibuslibet functionibus homogeneis secundi ordinis per substitutiones lineares in alias binas transformandis, quae solis quadratis variabilium constant; una cum variis theorematis de transformatione et determinatione integralium multiplicium, J. Reine Angew. Math. 12 (1834), 1-69.
5. I. G. Macdonald, "Symmetric Functions and Hall Polynomials," Oxford Univ. Press (Clarendon), Oxford, 1979.
6. W. H. Mills, D. P. Robbins, and H. Rumsey, Jr., Proof of the Macdonald conjecture, Invent. Math. 66 (1982), 73-87.
7. W. H. Mills, D. P. Robbins, and H. Rumsey, Jr., Alternating sign matrices and descending plane partitions, J. Combin. Theory Ser. A 34 (1983), 340-359.
8. H. W. Turnbull, "The Theory of Determinants, Matrices, and Invariants," Dover, New York, 1960.
