Skew LT
Abstract. In this paper we introduce the concept of a skew symmetric lattice matrix and show that a
square lattice matrix can be expressed as a join of symmetric and skew symmetric lattice matrices. Some
properties of symmetric and skew symmetric lattice matrices are obtained. Further, solutions of the
lattice matrix inequality XA ≥ B (resp. AX ≥ B) are obtained.
1. Introduction
The notion of lattice matrices first appeared in the work Lattice Matrices [3] by Giveon in 1964. A
matrix is called a lattice matrix if its entries belong to a distributive lattice. All Boolean matrices and fuzzy
matrices are lattice matrices. In various special cases, lattice matrices become useful tools in domains such
as the theory of switching nets, automata theory, and the theory of finite graphs [3].
In the classical theory of matrices over a field, any square matrix can be written uniquely as a sum of a
symmetric and a skew symmetric matrix. As an analogue of this classical result, R. D. Luce [5] showed that
any square Boolean matrix can be uniquely decomposed as a disjoint sum of a symmetric and a skew
symmetric matrix.
In the present work, we extend the notions of symmetric and skew symmetric Boolean matrices to
lattice matrices and generalize the theorem of R. D. Luce [5] to lattice matrices. As an approach different
from that of Zhao Cui-Kui [8], in Section 4 we study the matrix inequality XA ≥ B along the lines of
R. D. Luce and exhibit a class of solutions to the inequality.
2. Preliminaries
Throughout this paper, N denotes the set of nonzero natural numbers. We denote vectors by u, v, w, x, y, z,
etc., and scalars by a, b, c, α, β, etc.; the zero vector is denoted by 0 and the vector (1, 1, . . . , 1) by 1.
We recall some basic definitions and results on lattice theory, lattice matrices, lattice vector spaces, pseudo
complements and their properties which will be used in the sequel. For details see [1], [2], [3] and [4].
A partially ordered set (L, ≤) is a lattice if, for all a, b ∈ L, the least upper bound of a, b and the
greatest lower bound of a, b exist in L. For any a, b ∈ L, the least upper bound and the greatest lower bound
are denoted by a ∨ b and a ∧ b (or ab), respectively. An element a ∈ L is called the greatest element of L if
α ≤ a, for all α ∈ L. An element b ∈ L is called the least element of L if b ≤ α, for all α ∈ L. We use 1 and 0
to denote the greatest element and the least element of L, respectively.
If a, b ∈ L, the largest x ∈ L satisfying the inequality a ∧ x ≤ b is called the relative pseudo complement of
a in b, and is denoted by a → b. If for any pair of elements a, b ∈ L, a → b exists, then L is said to be a
Brouwerian lattice. Dually, for a, b ∈ L, the least x ∈ L satisfying a ∨ x ≥ b is called the relative lower
pseudo complement of a in b, and is denoted by b − a. If for any pair of elements a, b ∈ L, b − a exists,
then L is said to be a dually Brouwerian lattice. We shall denote a → 0 by a∗, i.e., a∗ = max{x ∈ L : a ∧ x = 0}.
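To make these operations concrete, here is a minimal Python sketch (our own illustration, not part of the paper) that computes a → b, b − a and a∗ by brute force in a small finite distributive lattice, namely the divisors of 12 ordered by divisibility.

```python
# Illustrative sketch: the divisors of 12 under divisibility form a finite
# distributive lattice with meet = gcd, join = lcm, least element 1 and
# greatest element 12.  a -> b, b - a and a* are computed by brute force.
from math import gcd

L = [1, 2, 3, 4, 6, 12]
leq  = lambda x, y: y % x == 0           # x <= y  iff  x divides y
meet = lambda x, y: gcd(x, y)
join = lambda x, y: x * y // gcd(x, y)   # lcm
BOT = 1                                  # plays the role of 0 in this lattice

def maximum(S):                          # greatest element of S (assumed to exist)
    return next(x for x in S if all(leq(y, x) for y in S))

def minimum(S):                          # least element of S (assumed to exist)
    return next(x for x in S if all(leq(x, y) for y in S))

def arrow(a, b):                         # a -> b: largest x with a ∧ x <= b
    return maximum([x for x in L if leq(meet(a, x), b)])

def minus(b, a):                         # b - a: least x with a ∨ x >= b
    return minimum([x for x in L if leq(b, join(a, x))])

star = lambda a: arrow(a, BOT)           # a* = a -> 0

print(arrow(2, 3), minus(6, 2), star(4)) # prints: 3 3 3
```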
A lattice L is a completely distributive lattice if, for any x ∈ L and any family of elements {yi | i ∈ I},
I being an index set, x ∧ (∨i∈I yi) = ∨i∈I (x ∧ yi) and x ∨ (∧i∈I yi) = ∧i∈I (x ∨ yi).
A lattice L is said to be a complete lattice, if every subset of L has both supremum and infimum in L.
Throughout this paper, unless otherwise stated, we assume that L is a complete and completely distribu-
tive lattice with the greatest element 1 and the least element 0.
Let L be a complete and completely distributive lattice. Then the following properties, due to
Y. J. Tan [6], hold for any a, b, c ∈ L (properties (5) and (7) are checked computationally in the sketch after the list):
(1) a − b ≤ a
(2) b ≤ c ⇒ a − b ≥ a − c and b − a ≤ c − a
(3) a ≤ b ⇔ a − b = 0
(4) a − (b ∨ c) ≤ (a − b)(a − c)
(5) a − (bc) = (a − b) ∨ (a − c)
(6) (ab) − c ≤ (a − c)(b − c)
(7) (a ∨ b) − c = (a − c) ∨ (b − c)
(8) (a − b) ∨ (b − c) = (a ∨ b) − (bc)
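As a sanity check, the following sketch verifies properties (5) and (7) exhaustively in the same illustrative lattice of divisors of 12 (our own example; the paper itself gives no computations here).

```python
# Sketch: exhaustively check Tan's properties (5) a - (b ∧ c) = (a - b) ∨ (a - c)
# and (7) (a ∨ b) - c = (a - c) ∨ (b - c) over the divisors of 12.
from itertools import product
from math import gcd

L = [1, 2, 3, 4, 6, 12]
leq  = lambda x, y: y % x == 0
meet = lambda x, y: gcd(x, y)
join = lambda x, y: x * y // gcd(x, y)

def minus(b, a):                         # b - a: least x with a ∨ x >= b
    C = [x for x in L if leq(b, join(a, x))]
    return next(x for x in C if all(leq(x, y) for y in C))

for a, b, c in product(L, repeat=3):
    assert minus(a, meet(b, c)) == join(minus(a, b), minus(a, c))   # property (5)
    assert minus(join(a, b), c) == join(minus(a, c), minus(b, c))   # property (7)
print("properties (5) and (7) hold for all triples")
```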
For any two elements a, b of a pseudo-complemented distributive lattice, the following properties are
due to Grätzer [4] (properties (9) and (10) are checked in the sketch after the list):
(1) a∗ = max{x ∈ L | a ∧ x = 0}
(2) 0∗∗ = 0
(3) a ∧ a∗ = 0
(4) a ≤ b implies b∗ ≤ a∗
(5) a ≤ a∗∗
(6) ab = 0 if and only if a ≤ b∗ if and only if b ≤ a∗
(7) a ≤ b implies a ∧ b∗ =0
(8) a∗∗∗ = a∗
(9) (a ∨ b)∗ = a∗ ∧ b∗
(10) (a ∧ b)∗∗ = a∗∗ ∧ b∗∗
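Similarly, properties (9) and (10) can be checked exhaustively in the divisors-of-12 lattice (again our own illustration, not part of the paper).

```python
# Sketch: check Grätzer's properties (9) (a ∨ b)* = a* ∧ b* and
# (10) (a ∧ b)** = a** ∧ b** over the divisors of 12.
from itertools import product
from math import gcd

L = [1, 2, 3, 4, 6, 12]
leq  = lambda x, y: y % x == 0
meet = lambda x, y: gcd(x, y)
join = lambda x, y: x * y // gcd(x, y)

def star(a):                             # a* = largest x with a ∧ x = 0 (here gcd = 1)
    C = [x for x in L if meet(a, x) == 1]
    return next(x for x in C if all(leq(y, x) for y in C))

for a, b in product(L, repeat=2):
    assert star(join(a, b)) == meet(star(a), star(b))                    # (9)
    assert star(star(meet(a, b))) == meet(star(star(a)), star(star(b)))  # (10)
print("properties (9) and (10) hold for all pairs")
```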
Let Mn(L) be the set of n × n matrices over L. The elements of Mn(L) are denoted by capital letters, and
if A ∈ Mn(L), then the (i, j)th entry of A is denoted by aij. Giveon [3] calls such matrices lattice matrices.
The following definitions are due to Giveon, for lattice matrices A = [aij], B = [bij], C = [cij] ∈ Mn(L),
where aij, bij, cij ∈ L, 1 ≤ i, j ≤ n:
A ∨ B = C if and only if cij = aij ∨ bij;
A ∧ B = C if and only if cij = aij ∧ bij;
AB = C if and only if cij = ∨k (aik ∧ bkj);
AT = C if and only if cij = aji.
With these operations, A(BC) = (AB)C, AI = IA = A and AO = OA = O, where I denotes the identity
matrix (with 1 on the diagonal and 0 elsewhere) and O the zero matrix.
Mn(L) is a distributive lattice with respect to ∨ and ∧, with least element the zero matrix O and greatest
element the matrix E all of whose entries are 1.
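A minimal Python sketch of these operations (our own illustration; the 2 × 2 matrices and the divisors-of-12 lattice are arbitrary choices, not taken from the paper):

```python
# Sketch: Giveon's operations for 2 x 2 lattice matrices over the divisors of 12.
from math import gcd

meet = lambda x, y: gcd(x, y)
join = lambda x, y: x * y // gcd(x, y)
BOT, TOP = 1, 12                          # least / greatest element of the lattice

def mjoin(A, B):                          # (A ∨ B)ij = aij ∨ bij
    return [[join(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mmeet(A, B):                          # (A ∧ B)ij = aij ∧ bij
    return [[meet(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mprod(A, B):                          # (AB)ij = ∨_k (aik ∧ bkj)
    n = len(A)
    C = [[BOT] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] = join(C[i][j], meet(A[i][k], B[k][j]))
    return C

transpose = lambda A: [list(r) for r in zip(*A)]

A, B, C = [[2, 6], [4, 12]], [[3, 1], [6, 2]], [[12, 4], [1, 3]]
I = [[TOP, BOT], [BOT, TOP]]              # identity matrix over the lattice
print(mprod(A, mprod(B, C)) == mprod(mprod(A, B), C))                 # True: A(BC) = (AB)C
print(mprod(A, I) == A and mprod(I, A) == A)                          # True: AI = IA = A
print(mmeet(A, mjoin(B, C)) == mjoin(mmeet(A, B), mmeet(A, C)))       # True: distributivity
```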
A lattice vector space V over L (or simply a lattice vector space) is a system (V, L, +, ·), where V is a non-
empty set, L is a distributive lattice with 1 and 0, + is a binary operation on V called addition, and · is a map
from L × V to V called scalar multiplication, such that the following properties hold for every x, y, z ∈ V
and a, b ∈ L:
(1) x + y = y + x
(2) x + (y + z) = (x + y) + z
(3) there is an element 0 in V such that x + 0 = x, for every x in V
(4) x + y = 0 if and only if x = y = 0
(5) a · (x + y) = a · x + a · y
(6) (a ∨ b) · x = a · x + b · x
(7) (ab) · x = a · (b · x)
(8) 1 · x = x
(9) 0 · x = 0
A vector x = (x1, x2, . . . , xn) in Vn(L) is said to be an ortho vector if xi xj = 0, for all i ≠ j.
Let L be a complete and completely distributive lattice and let A = [aij], B = [bij] ∈ Mm×n(L). Then the
following definition is due to Y. J. Tan [6]:
A − B = C if and only if cij = aij − bij, for i = 1, 2, . . . , m and j = 1, 2, . . . , n.
Figure 1
1. Let A = \begin{pmatrix} c & b \\ b & d \end{pmatrix}. Clearly AT = A and A∗ = \begin{pmatrix} c∗ & b∗ \\ b∗ & d∗ \end{pmatrix}; then
A∗ ∧ AT = \begin{pmatrix} c∗ & b∗ \\ b∗ & d∗ \end{pmatrix} ∧ \begin{pmatrix} c & b \\ b & d \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.
Consequently A is a symmetric matrix over L.
2. Let B = \begin{pmatrix} 0 & a \\ b & 0 \end{pmatrix} and BT = \begin{pmatrix} 0 & b \\ a & 0 \end{pmatrix}; then
B ∧ BT = \begin{pmatrix} 0 & a \\ b & 0 \end{pmatrix} ∧ \begin{pmatrix} 0 & b \\ a & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix},
which implies B is a skew symmetric matrix over L.
Now we prove that any square matrix over a complete and completely distributive lattice L can be repre-
sented as a join of symmetric and skew symmetric lattice matrices. Later we discuss the uniqueness and
disjointness of such representations.
Theorem 3.4. Any square lattice matrix can be expressed as a join of symmetric and skew symmetric lattice
matrices.
Proof. Suppose A is a square lattice matrix, and let S = A ∧ AT and Q = A − AT.
Consider
S ∨ Q = (A ∧ AT) ∨ (A − AT)
= [A ∨ (A − AT)] ∧ [AT ∨ (A − AT)]
= A ∧ [AT ∨ A]
= A.
Further, S∗ ∧ ST = (A ∧ AT)∗ ∧ (A ∧ AT)T = (A ∧ AT)∗ ∧ (A ∧ AT) = O, so S is symmetric.
Example 3.5. The following example shows how to write a square lattice matrix as a sum of symmetric
and skew symmetric lattice matrices.
Figure 2
Consider the lattice L = {0, a, b, c, d, e, f, 1}, where the Hasse diagram of L is shown in Figure 2.
Let A = \begin{pmatrix} c & e & f \\ b & a & d \\ a & c & e \end{pmatrix}. Consider
S = A ∧ AT = \begin{pmatrix} c & b & 0 \\ b & a & b \\ 0 & b & e \end{pmatrix}, Q = A − AT = \begin{pmatrix} 0 & e & f \\ 0 & 0 & d \\ a & a & 0 \end{pmatrix}.
Clearly S ∨ Q = A, and
S∗ ∧ ST = \begin{pmatrix} c∗ & b∗ & 0∗ \\ b∗ & a∗ & b∗ \\ 0∗ & b∗ & e∗ \end{pmatrix} ∧ \begin{pmatrix} c & b & 0 \\ b & a & b \\ 0 & b & e \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},
Q ∧ QT = \begin{pmatrix} 0 & e & f \\ 0 & 0 & d \\ a & a & 0 \end{pmatrix} ∧ \begin{pmatrix} 0 & 0 & a \\ e & 0 & a \\ f & d & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},
which shows that S is symmetric and Q is skew symmetric; thus A = S ∨ Q is the required representation.
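Since the Hasse diagram of Figure 2 is not reproduced here, the following sketch illustrates Theorem 3.4 with a lattice and a matrix of our own choosing (the divisors of 12, not the data of Example 3.5): it computes S = A ∧ AT and Q = A − AT and checks that S ∨ Q = A and Q ∧ QT = O.

```python
# Sketch with our own lattice and matrix: the decomposition S = A ∧ A^T,
# Q = A - A^T of Theorem 3.4 over the divisors of 12.
from math import gcd

L = [1, 2, 3, 4, 6, 12]
leq  = lambda x, y: y % x == 0
meet = lambda x, y: gcd(x, y)
join = lambda x, y: x * y // gcd(x, y)

def minus(b, a):                          # b - a: least x with a ∨ x >= b
    C = [x for x in L if leq(b, join(a, x))]
    return next(x for x in C if all(leq(x, y) for y in C))

def emap(f, A, B):                        # apply a lattice operation entrywise
    return [[f(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

transpose = lambda A: [list(r) for r in zip(*A)]

A = [[2, 12, 3], [4, 6, 1], [6, 2, 4]]
S = emap(meet, A, transpose(A))           # symmetric part  S = A ∧ A^T
Q = emap(minus, A, transpose(A))          # skew part       Q = A - A^T
O = [[1] * 3 for _ in range(3)]           # zero matrix (the least element here is 1)
print(emap(join, S, Q) == A)              # True: S ∨ Q = A
print(emap(meet, Q, transpose(Q)) == O)   # True: Q ∧ Q^T = O, so Q is skew symmetric
```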
Remark: In the representation of the matrix A as a join of symmetric and skew symmetric lattice matrices
(A = S ∨ Q), the symmetric matrix S is unique, but Q may not be unique. The following example endorses
our claim:
Example 3.6. Consider the lattice L = {0, a, b, c, d, e, f, 1}, where the Hasse diagram of L is shown in Figure 2.
Let A = \begin{pmatrix} a & c & e \\ d & b & f \\ e & a & c \end{pmatrix}; then
S = A ∧ AT = \begin{pmatrix} a & b & e \\ b & b & 0 \\ e & 0 & c \end{pmatrix}, Q = A − AT = \begin{pmatrix} 0 & a & 0 \\ d & 0 & f \\ 0 & a & 0 \end{pmatrix}.
So A = S ∨ Q. Clearly S is unique, but we have other choices for Q, for example
T1 = \begin{pmatrix} 0 & a & a \\ d & 0 & f \\ b & a & 0 \end{pmatrix}, T2 = \begin{pmatrix} 0 & a & a \\ d & 0 & f \\ d & a & 0 \end{pmatrix}, T3 = \begin{pmatrix} 0 & a & b \\ d & 0 & f \\ a & a & 0 \end{pmatrix}, T4 = \begin{pmatrix} 0 & a & d \\ d & 0 & f \\ a & a & 0 \end{pmatrix}, . . . ,
where each Ti is skew symmetric and A = S ∨ T1 = S ∨ T2 = S ∨ T3 = S ∨ T4 = S ∨ Q = · · · .
Remark: Even though Q = A − AT may not be unique in the representation A = S ∨ Q, the lattice matrix
Q = A − AT is the greatest lower bound of the skew symmetric matrices T such that A = S ∨ T, as can be
seen from the following result:
Proposition 3.7. Suppose A is a square lattice matrix and A = R ∨ T, where R is a symmetric lattice
matrix and T is a skew symmetric lattice matrix. Then R = S and Q ≤ T.
Proof. Consider S = A ∧ AT = (R ∨ T) ∧ (R ∨ T)T = (R ∨ T) ∧ (R ∨ TT) = R ∨ (T ∧ TT) = R.
Further, Q = A − AT = (R ∨ T) − (R ∨ TT) = [R − (R ∨ TT)] ∨ [T − (R ∨ TT)] = T − (R ∨ TT) ≤ T.
In particular, if {Ti | i ∈ I} is the family of all skew symmetric matrices T with A = S ∨ T, then
(A − AT) ≤ Ti for all i ∈ I, which implies (A − AT) ≤ ∧i∈I Ti; on the other hand, Q = A − AT is itself one
of the Ti, so ∧i∈I Ti ≤ (A − AT). Thus (A − AT) = ∧i∈I Ti.
We now give the condition under which the sum in the representation of a square lattice matrix A is disjoint.
Theorem 3.8. The representation A = (A ∧ AT) ∨ (A − AT) of a square lattice matrix A is a disjoint sum
if and only if aji(aij − aji) = 0, for all i, j.
Proof. Consider
S ∧ Q = (A ∧ AT) ∧ (A − AT) = AT ∧ (A − AT),
since A − AT ≤ A. The (i, j)th entry of AT ∧ (A − AT) is aji(aij − aji); hence S ∧ Q = O if and only if
aji(aij − aji) = 0, for all i, j.
Example 3.9. Consider the lattice L = {0, a, b, c, d, e, f, 1}, where the Hasse diagram of L is shown in Figure 2.
Let A = \begin{pmatrix} d & e & c \\ a & f & e \\ b & a & a \end{pmatrix}; for this matrix A we can easily see that aji(aij − aji) = 0, for all i, j. Then
S = A ∧ AT = \begin{pmatrix} d & a & b \\ a & f & a \\ b & a & a \end{pmatrix}, Q = A − AT = \begin{pmatrix} 0 & d & a \\ 0 & 0 & d \\ 0 & 0 & 0 \end{pmatrix},
where S is symmetric, Q is skew symmetric and A = S ∨ Q. Consider
S ∧ Q = \begin{pmatrix} d & a & b \\ a & f & a \\ b & a & a \end{pmatrix} ∧ \begin{pmatrix} 0 & d & a \\ 0 & 0 & d \\ 0 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.
Therefore S and Q are disjoint.
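The equivalence in Theorem 3.8 can also be checked exhaustively for 2 × 2 matrices over a small lattice; the sketch below again uses our own divisors-of-12 lattice (whose least element 0 is the integer 1), not the lattice of Figure 2.

```python
# Sketch: for every 2 x 2 matrix A over the divisors of 12, S ∧ Q = O holds
# exactly when a_ji ∧ (a_ij - a_ji) equals the least element for all i, j.
from itertools import product
from math import gcd

L = [1, 2, 3, 4, 6, 12]
leq  = lambda x, y: y % x == 0
meet = lambda x, y: gcd(x, y)
join = lambda x, y: x * y // gcd(x, y)

def minus(b, a):                          # b - a
    C = [x for x in L if leq(b, join(a, x))]
    return next(x for x in C if all(leq(x, y) for y in C))

for a11, a12, a21, a22 in product(L, repeat=4):
    A = [[a11, a12], [a21, a22]]
    S = [[meet(A[i][j], A[j][i]) for j in range(2)] for i in range(2)]
    Q = [[minus(A[i][j], A[j][i]) for j in range(2)] for i in range(2)]
    disjoint  = all(meet(S[i][j], Q[i][j]) == 1 for i in range(2) for j in range(2))
    condition = all(meet(A[j][i], minus(A[i][j], A[j][i])) == 1
                    for i in range(2) for j in range(2))
    assert disjoint == condition
print("S ∧ Q = O  <=>  a_ji(a_ij - a_ji) = 0, for all 1296 test matrices")
```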
Remark 3.10. For any square lattice matrix A, if we write A = S ∨ Q, where S = A ∧ AT is symmetric and
Q = A − AT is skew symmetric, then AT = S ∨ K, where K = AT − A is skew symmetric.
We now extend the decomposition result for Boolean matrices of R. D. Luce [5, Theorem 2.1] to lattice
matrices under some restrictions.
Theorem 3.11. A square lattice matrix A can be uniquely expressed as a disjoint sum of symmetric and
skew symmetric lattice matrices if aij ∧ aji = 0, for i ≠ j.
Proof. Suppose A is a square lattice matrix with aij ∧ aji = 0, for i ≠ j, and let S = A ∧ AT and P = A ∧ (AT)∗.
Consider S ∨ P = (A ∧ AT) ∨ (A ∧ (AT)∗) = A ∧ (AT ∨ (AT)∗) = A, since (A ∧ (AT ∨ (AT)∗))ij =
aij ∧ (aji ∨ (aji)∗) = aij for all i, j (for i ≠ j this uses aij ∧ aji = 0, that is, aij ≤ (aji)∗).
For uniqueness, suppose A = M ∨ N, where M is symmetric and N is skew symmetric. Consider
A ∧ AT = (M ∨ N) ∧ (M ∨ N)T = M ∨ (N ∧ NT) = M, so M = S.
Example 3.12. Consider the lattice L = {0, a, b, c, d, e, f, 1}, where the Hasse diagram of L is shown in
Figure 2. Let A = \begin{pmatrix} d & a & b \\ f & a & a \\ a & f & e \end{pmatrix}; clearly aij ∧ aji = 0, for all i ≠ j. Then
S = A ∧ AT = \begin{pmatrix} d & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & a \end{pmatrix}, P = A − AT = \begin{pmatrix} 0 & a & b \\ f & 0 & a \\ a & f & 0 \end{pmatrix},
where S is symmetric and P is skew symmetric, so that A = S ∨ P and clearly S ∧ P = O.
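A brute-force check of the construction in Theorem 3.11 and Example 3.12, again over our own divisors-of-12 lattice: whenever a12 ∧ a21 = 0, the constructions A − AT and A ∧ (AT)∗ coincide and give a disjoint decomposition of A.

```python
# Sketch: for 2 x 2 matrices over the divisors of 12 satisfying the hypothesis
# a_12 ∧ a_21 = 0 of Theorem 3.11, the constructions P = A - A^T and
# P = A ∧ (A^T)* agree, A = S ∨ P, and S ∧ P = O.
from itertools import product
from math import gcd

L = [1, 2, 3, 4, 6, 12]
leq  = lambda x, y: y % x == 0
meet = lambda x, y: gcd(x, y)
join = lambda x, y: x * y // gcd(x, y)

def minus(b, a):
    C = [x for x in L if leq(b, join(a, x))]
    return next(x for x in C if all(leq(x, y) for y in C))

def star(a):
    C = [x for x in L if meet(a, x) == 1]
    return next(x for x in C if all(leq(y, x) for y in C))

for a11, a12, a21, a22 in product(L, repeat=4):
    if meet(a12, a21) != 1:               # hypothesis: a_12 ∧ a_21 = 0
        continue
    A  = [[a11, a12], [a21, a22]]
    S  = [[meet(A[i][j], A[j][i]) for j in range(2)] for i in range(2)]
    P1 = [[minus(A[i][j], A[j][i]) for j in range(2)] for i in range(2)]
    P2 = [[meet(A[i][j], star(A[j][i])) for j in range(2)] for i in range(2)]
    assert P1 == P2
    assert [[join(S[i][j], P1[i][j]) for j in range(2)] for i in range(2)] == A
    assert all(meet(S[i][j], P1[i][j]) == 1 for i in range(2) for j in range(2))
print("A = S ∨ P with S ∧ P = O for every qualifying 2 x 2 matrix")
```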
Theorem 3.13. If A is a skew symmetric lattice matrix, then AT and αA are skew symmetric matrices, for
every α ∈ L.
Theorem 3.14. If A and B are skew symmetric lattice matrices, then A ∧ B, A − B and B − A are also
skew symmetric.
Theorem 3.15. The join of two skew symmetric lattice matrices A and B is again a skew symmetric matrix
if and only if aij bji = 0, for i ≠ j.
Proof. Suppose aij bji = 0, for i ≠ j. Consider
(A ∨ B)ij ∧ ((A ∨ B)T)ij = [aij ∨ bij] ∧ [aji ∨ bji] = (aij ∧ aji) ∨ (aij ∧ bji) ∨ (bij ∧ aji) ∨ (bij ∧ bji)
= (aij ∧ bji) ∨ (aji ∧ bij) = 0, since A and B are skew symmetric.
Conversely, suppose A ∨ B is skew symmetric, that is, (A ∨ B)ij ∧ ((A ∨ B)T)ij = 0. This implies
[aij ∨ bij] ∧ [aji ∨ bji] = 0, so that (aij ∧ bji) = (aji ∧ bij) = 0. Therefore aij bji = 0, for i ≠ j.
In particular, if v = (v1, v2, . . . , vn) is an ortho vector and A is a skew symmetric lattice matrix, then
vAvT = \begin{pmatrix} v2 a21 ∨ v3 a31 ∨ · · · ∨ vn an1 & v1 a12 ∨ v3 a32 ∨ · · · ∨ vn an2 & · · · & v1 a1n ∨ v2 a2n ∨ · · · ∨ vn−1 a(n−1)n \end{pmatrix} \begin{pmatrix} v1 \\ v2 \\ \vdots \\ vn \end{pmatrix} = 0,
since vi vj = 0 for all i ≠ j.
Theorem 3.18. If ATA = A, then A is symmetric and A² = A.
Proof. Let A = [aij] ∈ Mn(L) be a skew symmetric lattice matrix and let P = [pij] ∈ Mn×m(L) be a lattice
matrix such that each row of P is an ortho vector.
Consider
Lemma 4.1. For A, B, C ∈ Mn(L), ABC ≤ I∗ if and only if Aij(BC)ji = 0, for all i, j.
Proof. Suppose ABC ≤ I∗. Then (ABC)ii = ∨k ∨l Aik Bkl Cli = 0 for all i, which implies Aik Bkl Cli = 0
for all i, k, l, and hence Aij(BC)ji = 0, for all i, j. Conversely, if Aij(BC)ji = 0 for all i, j, then
(ABC)ii = ∨j Aij(BC)ji = 0 for all i. Therefore ABC ≤ I∗.
Theorem 4.2. Suppose L is a complete and completely distributive lattice with 0 and 1. For any A, B, X ∈
Mn(L): if XA ≤ B, then X ≤ (B∗AT)∗, and if AX ≤ B, then X ≤ (ATB∗)∗.
Proof. Suppose XA ≤ B. Then ∨j Xij Ajk ≤ Bik, which implies Xij Ajk (B∗)Tki = 0, for all i, j, k. Taking
the join over j and k, we obtain ∨j Xij (∨k Ajk (B∗)Tki) = 0, which implies that XA(B∗)T ≤ I∗, or
equivalently XA(BT)∗ ≤ I∗. Then, by Lemma 4.1, Xij (A(B∗)T)ji = 0 for all i, j; since (A(B∗)T)ji = (B∗AT)ij,
this gives Xij ≤ ((B∗AT)ij)∗, that is, X ≤ (B∗AT)∗. The case AX ≤ B is similar.
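The bound of Theorem 4.2 can be tested by brute force; the sketch below (our own lattice, with A and B chosen arbitrarily) enumerates all 2 × 2 matrices X over the divisors of 12 and checks that every solution of XA ≤ B lies below (B∗AT)∗.

```python
# Sketch: every 2 x 2 solution X of XA <= B over the divisors of 12 satisfies
# X <= (B* A^T)*, the bound of Theorem 4.2.
from itertools import product
from math import gcd

L = [1, 2, 3, 4, 6, 12]
leq  = lambda x, y: y % x == 0
meet = lambda x, y: gcd(x, y)
join = lambda x, y: x * y // gcd(x, y)

def star(a):
    C = [x for x in L if meet(a, x) == 1]
    return next(x for x in C if all(leq(y, x) for y in C))

def mprod(A, B):                          # (AB)ij = ∨_k (aik ∧ bkj)
    return [[join(meet(A[i][0], B[0][j]), meet(A[i][1], B[1][j]))
             for j in range(2)] for i in range(2)]

mleq  = lambda A, B: all(leq(A[i][j], B[i][j]) for i in range(2) for j in range(2))
mstar = lambda A: [[star(A[i][j]) for j in range(2)] for i in range(2)]
mT    = lambda A: [list(r) for r in zip(*A)]

A, B = [[2, 6], [12, 3]], [[6, 12], [2, 4]]
bound = mstar(mprod(mstar(B), mT(A)))     # (B* A^T)*
for x in product(L, repeat=4):
    X = [[x[0], x[1]], [x[2], x[3]]]
    if mleq(mprod(X, A), B):
        assert mleq(X, bound)             # Theorem 4.2
print("all solutions of XA <= B lie below (B* A^T)*")
```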
Theorem 4.3. For any A ∈ Mn(L), a matrix X ∈ Mn(L) satisfies XA = O if and only if X ≤ ((AE)T)∗.
Proof. Since it is always true that O ≤ XA, it is sufficient to consider XA ≤ O, which, by Theorem 4.2, is
equivalent to X ≤ (O∗AT)∗ = (EAT)∗ = ((AE)T)∗.
Theorem 4.4. If A, B ∈ Mn(L) and there exists a matrix C ∈ Mn(L) such that C ≤ A and B ≤ EC,
then any matrix X ∈ Mn(L) with BCT ≤ X is a solution of B ≤ XA.
Proof. Suppose there exists C such that C ≤ A and B ≤ EC; this means Cij ≤ Aij and Bij ≤ ∨k Eik Ckj = ∨k Ckj,
for all i, j. Then
(BCTA)ij = ∨k ∨l Bik Clk Alj ≥ ∨l Bij Clj Alj (taking k = j) = ∨l Bij Clj (since Clj ≤ Alj) = Bij ∧ (∨l Clj) = Bij (since Bij ≤ ∨l Clj),
so B ≤ BCTA. Since BCT ≤ X, it follows that B ≤ BCTA ≤ XA. Therefore B ≤ XA.
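The sufficiency argument of Theorem 4.4 can likewise be tested: with A and C fixed by hand so that C ≤ A, every B ≤ EC satisfies B ≤ (BCT)A, and hence B ≤ XA for any X ≥ BCT (our own lattice and matrices, chosen only for illustration).

```python
# Sketch: with C <= A fixed, every B <= EC over the divisors of 12 satisfies
# B <= (B C^T) A, which is the key step in the proof of Theorem 4.4.
from itertools import product
from math import gcd

L = [1, 2, 3, 4, 6, 12]
leq  = lambda x, y: y % x == 0
meet = lambda x, y: gcd(x, y)
join = lambda x, y: x * y // gcd(x, y)

def mprod(A, B):
    return [[join(meet(A[i][0], B[0][j]), meet(A[i][1], B[1][j]))
             for j in range(2)] for i in range(2)]

mleq = lambda A, B: all(leq(A[i][j], B[i][j]) for i in range(2) for j in range(2))
mT   = lambda A: [list(r) for r in zip(*A)]

E = [[12, 12], [12, 12]]                  # the greatest matrix
A = [[4, 6], [12, 2]]
C = [[2, 6], [4, 1]]                      # chosen so that C <= A entrywise
assert mleq(C, A)
EC = mprod(E, C)
for b in product(L, repeat=4):
    B = [[b[0], b[1]], [b[2], b[3]]]
    if mleq(B, EC):
        X = mprod(B, mT(C))               # the smallest X allowed by the theorem
        assert mleq(B, mprod(X, A))       # B <= XA
print("B <= (B C^T) A for every B <= EC")
```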
The condition of the above theorem is by no means necessary: if A = B = E, then X = I is a solution.
Now suppose the C of the theorem satisfies ECT = BCT ≤ I. Then for every i, j with i ≠ j,
0 = (ECT)ij = ∨k Eik(CT)kj = ∨k Cjk, so that Cjk = 0 for every j and k; but C = O contradicts the
condition B = E ≤ EC.
In the theory of semigroups with a zero element, an element A is a right divisor of zero if A ≠ 0 and there
exists B ≠ 0 such that BA = 0. Note that lattice matrices form a semigroup under matrix multiplication.
Theorem 4.5. A lattice matrix A is a right (left) divisor of zero if and only if AE ≠ E (resp. EA ≠ E).
Proof. Suppose A is a right divisor of zero, so that BA = O for some B ≠ O, and suppose, on the contrary,
that AE = E. Then ∨q Akq = 1 for every k, and from ∨k Bik Akj = 0 we get Bik Akj = 0 for all i, k, j;
hence Bik = Bik ∧ (∨j Akj) = ∨j Bik Akj = 0 for all i, k, that is, B = O, a contradiction. Therefore AE ≠ E.
Conversely, if AE ≠ E, there exists some integer p such that ∨q Apq ≠ 1. Define Bpp = (∨q Apq)∗ and
Bij = 0 otherwise; hence B ≠ O. Consider BA: if i ≠ p, then ∨k Bik Akj = 0 since Bik = 0; if i = p, then
∨k Bpk Akj = Bpp Apj = (∨q Apq)∗ Apj ≤ (Apj)∗ Apj = 0. Thus BA = O with A, B ≠ O, so A is a right divisor
of zero.
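Finally, the construction in the proof of Theorem 4.5 can be replayed over our own divisors-of-12 lattice (an illustration, not the paper's data): a matrix A with AE ≠ E yields a nonzero B with BA = O.

```python
# Sketch: the right-divisor-of-zero construction of Theorem 4.5 over the
# divisors of 12 (least element 1, greatest element 12).
from math import gcd

L = [1, 2, 3, 4, 6, 12]
leq  = lambda x, y: y % x == 0
meet = lambda x, y: gcd(x, y)
join = lambda x, y: x * y // gcd(x, y)

def star(a):                              # pseudocomplement in the lattice
    C = [x for x in L if meet(a, x) == 1]
    return next(x for x in C if all(leq(y, x) for y in C))

def mprod(A, B):
    return [[join(meet(A[i][0], B[0][j]), meet(A[i][1], B[1][j]))
             for j in range(2)] for i in range(2)]

E, O = [[12, 12], [12, 12]], [[1, 1], [1, 1]]
A = [[2, 4], [6, 12]]
print(mprod(A, E) != E)                   # True: AE != E
p = 0                                     # a row whose join ∨_q A_pq is not 1 (here 4)
B = [[1, 1], [1, 1]]
B[p][p] = star(join(A[p][0], A[p][1]))    # B_pp = (∨_q A_pq)*, all other entries 0
print(B != O and mprod(B, A) == O)        # True: B is nonzero and BA = O
```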
Acknowledgments. The authors thank the management of FST, IFHE, Hyderabad for providing the
necessary facilities.
References
[1] G. Birkhoff, Lattice Theory, American Mathematical Society, Providence, RI, USA, 3rd edition, 1967.
[2] Geena Joy and K. V. Thomas, Lattice vector spaces and linear transformations, World Scientific, vol. 12, no. 1, 2018.
[3] Y. Giveon, Lattice matrices, Information and Control, vol. 7, no. 4, pp. 477-484, 1964.
[4] G. Grätzer, General Lattice Theory, Academic Press, New York, San Francisco, 1978.
[5] R. Duncan Luce, A note on Boolean matrix theory, Proceedings of the American Mathematical Society, vol. 3, no. 3 (1952), pp. 382-388.
[6] Yi-jia Tan, On compositions of lattice matrices, Fuzzy Sets and Systems 129 (2002), 19-28.
[7] Yi-jia Tan, On the transitive matrices over distributive lattices, Linear Algebra and its Applications 400 (2005), 169-191.
[8] Zhao Cui-Kui, On matrix equations in a class of complete and completely distributive lattices, Fuzzy Sets and Systems 22 (1987), 303-320.