Some Applications Involving Polytomous Data: (A) Matched Pairs: Nominal Response
Suppose the response is polytomous with $k$ categories. Denote the logarithmic response probabilities for the control member by $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_k)$ and those for the treated member by $\lambda + \delta = (\lambda_1 + \delta_1, \lambda_2 + \delta_2, \ldots, \lambda_k + \delta_k)$. The response probabilities are thus
$$
\pi_{1j} = \frac{\exp(\lambda_j)}{\sum_{r=1}^{k}\exp(\lambda_r)} \quad\text{(control)},
\qquad
\pi_{2j} = \frac{\exp(\lambda_j + \delta_j)}{\sum_{r=1}^{k}\exp(\lambda_r + \delta_r)} \quad\text{(treated)}.
$$
Record each member's response as an indicator vector in $\mathbb{R}^k$, $Z_1 = [0 \cdots 1 \cdots 0]$ for the control member and $Z_2 = [0 \cdots 1 \cdots 0]$ for the treated member, with a 1 in the position of the observed category. If one member of the pair responds in category $i$ and the other in category $j$, the vector sum is
$$
Z = Z_1 + Z_2 = [\,0 \;\cdots\; 0 \;\underbrace{1}_{i\text{th}}\; 0 \;\cdots\; 0 \;\underbrace{1}_{j\text{th}}\; 0 \;\cdots\; 0\,],
$$
with 1s in the $i$th and $j$th positions.
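A minimal numerical sketch of this setup, assuming illustrative values for $\lambda$ and $\delta$ with $k = 3$ (the arrays `lam` and `delta` are not from the notes):

```python
# Minimal sketch (illustrative values): category probabilities pi_1j, pi_2j
# implied by lambda and lambda + delta, and one simulated matched pair
# recorded as indicator vectors Z1, Z2 with sum Z.
import numpy as np

rng = np.random.default_rng(0)

lam = np.array([0.2, -0.1, 0.4])     # lambda_1, ..., lambda_k (assumed)
delta = np.array([0.0, 0.5, -0.3])   # delta_1, ..., delta_k (assumed)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

pi_control = softmax(lam)            # pi_1j = exp(lambda_j) / sum_r exp(lambda_r)
pi_treated = softmax(lam + delta)    # pi_2j = exp(lambda_j + delta_j) / sum_r exp(lambda_r + delta_r)

Z1 = rng.multinomial(1, pi_control)  # control member's response indicator
Z2 = rng.multinomial(1, pi_treated)  # treated member's response indicator
Z = Z1 + Z2                          # 1s in the two occupied positions (a 2 if the pair agrees)
print(pi_control, pi_treated, Z)
```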
Given the pair total $Z$, i.e. given that the pair occupies categories $i$ and $j$, the conditional probability that it is the treated member who responded in category $j$ is
$$
\frac{\dfrac{\exp(\lambda_i)}{\sum_{r=1}^{k}\exp(\lambda_r)}\,\dfrac{\exp(\lambda_j+\delta_j)}{\sum_{r=1}^{k}\exp(\lambda_r+\delta_r)}}
{\dfrac{\exp(\lambda_i)}{\sum_{r=1}^{k}\exp(\lambda_r)}\,\dfrac{\exp(\lambda_j+\delta_j)}{\sum_{r=1}^{k}\exp(\lambda_r+\delta_r)}
+\dfrac{\exp(\lambda_j)}{\sum_{r=1}^{k}\exp(\lambda_r)}\,\dfrac{\exp(\lambda_i+\delta_i)}{\sum_{r=1}^{k}\exp(\lambda_r+\delta_r)}}
= \frac{\exp(\delta_j)}{\exp(\delta_i)+\exp(\delta_j)},
$$
which is independent of $\lambda$ and depends only on the difference $\delta_j - \delta_i$.
Let $Y_{ij}$ denote the number of pairs in which the control member responds in category $i$ and the treated member in category $j$; then $Y_{ij} + Y_{ji} = m_{ij}$ is just the number of vector sums $Z$ that have 1s in positions $i$ and $j$. Thus, given $m_{ij}$, $Y_{ij} \sim B(m_{ij}, \pi_{ij})$ with $\operatorname{logit}(\pi_{ij}) = \delta_j - \delta_i$.
Note: The above model is formally identical to the Bradley-Terry (1952) model used for ranking individuals in paired competitions.
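A minimal sketch of fitting $\delta$ by conditional maximum likelihood from hypothetical pair counts; the counts in `y`, the constraint $\delta_1 = 0$, and the numpy/scipy optimization are illustrative choices, not from the notes:

```python
# Sketch: conditional ML for delta from off-diagonal pair counts.
# y[i, j] = number of pairs in which the control falls in category i and
# the treated member in category j (assumed counts, k = 3).
import numpy as np
from scipy.optimize import minimize

y = np.array([[0, 12,  7],
              [5,  0, 10],
              [3,  4,  0]], dtype=float)
k = y.shape[0]

def negloglik(d_free):
    d = np.concatenate(([0.0], d_free))          # identifiability: delta_1 = 0
    ll = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            m_ij = y[i, j] + y[j, i]             # pairs occupying categories {i, j}
            if m_ij == 0:
                continue
            p = 1.0 / (1.0 + np.exp(-(d[j] - d[i])))   # logit(pi_ij) = delta_j - delta_i
            ll += y[i, j] * np.log(p) + y[j, i] * np.log(1 - p)
    return -ll

fit = minimize(negloglik, np.zeros(k - 1), method="BFGS")
print("delta_hat (with delta_1 = 0):", np.concatenate(([0.0], fit.x)))
```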
Note: A curious and unusual feature of the above model $\operatorname{logit}(\pi_{ij}) = \delta_j - \delta_i$ is that the model matrix $X$ corresponding to the formula $\delta_j - \delta_i$ is singular. For example, if $k = 3$, the pairs $(1,2)$, $(1,3)$, $(2,3)$ give
$$
X = \begin{pmatrix} -1 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 1 \end{pmatrix},
\qquad \operatorname{rank}(X) = 2.
$$
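The rank deficiency can be checked directly; a minimal numpy sketch for $k = 3$:

```python
# Sketch: model matrix for logit(pi_ij) = delta_j - delta_i with k = 3,
# rows ordered as the pairs (1,2), (1,3), (2,3); its rank is 2, not 3.
import numpy as np

X = np.array([[-1,  1,  0],    # pair (1,2): delta_2 - delta_1
              [-1,  0,  1],    # pair (1,3): delta_3 - delta_1
              [ 0, -1,  1]])   # pair (2,3): delta_3 - delta_2
print(np.linalg.matrix_rank(X))   # 2: delta is identified only up to an additive constant
```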
(B) Ordinal Responses
Consider the proportional-odds model for the comparison of 2 multinomial responses in which the categories are ordered.
$$
\begin{array}{lccc}
 & \text{Sampled} & \text{Not-sampled} & \text{Total} \\
G_1 & Y_1 & s_1 - Y_1 = X_1 & s_1 \\
G_2 & Y_2 & s_2 - Y_2 = X_2 & s_2 \\
\vdots & \vdots & \vdots & \vdots \\
G_k & Y_k & s_k - Y_k = X_k & s_k \\
\text{Total} & m_1 & m_2 & m = s_{\cdot}
\end{array}
$$
Let $Y \sim M(m_1, \pi_1)$ and $X \sim M(m_2, \pi_2)$, in which the cumulative probabilities $\gamma_{1j}, \gamma_{2j}$ satisfy
$$
\operatorname{logit}(\gamma_{1j}) = \theta_j + \Delta, \qquad \operatorname{logit}(\gamma_{2j}) = \theta_j, \qquad j = 1, 2, \ldots, k-1.
$$
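As a minimal numerical sketch of the model, assuming illustrative values for the cut-points $\theta_j$ and the treatment effect $\Delta$ (none of these values come from the notes), the two groups' cumulative probabilities differ by a constant shift on the logit scale and the cell probabilities follow by differencing:

```python
# Sketch with assumed values (k = 4): cumulative and cell probabilities under
# logit(gamma_1j) = theta_j + Delta, logit(gamma_2j) = theta_j.
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

theta = np.array([-1.0, 0.0, 1.2])               # theta_1, ..., theta_{k-1} (assumed)
Delta = 0.8                                      # treatment effect (assumed)

gamma1 = np.append(expit(theta + Delta), 1.0)    # gamma_1j, j = 1..k (gamma_1k = 1)
gamma2 = np.append(expit(theta), 1.0)            # gamma_2j
pi1 = np.diff(np.concatenate(([0.0], gamma1)))   # pi_1j = gamma_1j - gamma_1,j-1
pi2 = np.diff(np.concatenate(([0.0], gamma2)))
print(pi1, pi2)
```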
Let $s_j = X_j + Y_j$ and define the cumulative counts
$$
Z_{1j} = Y_1 + Y_2 + \cdots + Y_j, \qquad Z_{2j} = X_1 + X_2 + \cdots + X_j, \qquad
s_{\cdot j} = s_1 + s_2 + \cdots + s_j = Z_{\cdot j} = Z_{1j} + Z_{2j}.
$$
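For instance, with hypothetical cell counts in the two groups (illustrative values only), the cumulative counts are simple partial sums:

```python
# Sketch (hypothetical counts, k = 4): cumulative counts Z_1j, Z_2j and
# cumulative margins s_.j = Z_1j + Z_2j for j = 1, ..., k-1.
import numpy as np

y = np.array([10, 14,  8, 4])   # Y_1, ..., Y_k (group 1, assumed counts)
x = np.array([ 5, 11, 12, 9])   # X_1, ..., X_k (group 2, assumed counts)

Z1 = np.cumsum(y)[:-1]          # Z_1j = Y_1 + ... + Y_j
Z2 = np.cumsum(x)[:-1]          # Z_2j = X_1 + ... + X_j
S  = Z1 + Z2                    # s_.j = s_1 + ... + s_j
print(Z1, Z2, S)
```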
To eliminate the nuisance parameters, one might consider the conditional distribution of $Z_{1j}$ given $s = (s_1, s_2, \ldots, s_k)$. Note, however, that this conditional distribution depends on both the treatment effect $\Delta$ and the nuisance parameters $\theta_1, \theta_2, \ldots, \theta_{k-1}$. Thus we cannot eliminate the nuisance parameters using conditional likelihood. However, we might use the following method, similar in spirit to quasi-likelihood. Let
$$
\mu_{1j}(\Delta) = \mu_{1j}(e^{\Delta}) = E(Z_{1j} \mid s_{\cdot j}) = \frac{P_1(\psi)}{P_0(\psi)}, \qquad \psi = e^{\Delta},
$$
the mean of the noncentral hypergeometric distribution of $Z_{1j}$ given the margins $(m_1, m_2, s_{\cdot j})$, which depends on the parameters only through the odds ratio $\psi = e^{\Delta}$.
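A minimal sketch of this conditional mean, computed directly from binomial coefficients; the helper name `nchg_mean` and the margins used are illustrative, not from the notes:

```python
# Sketch: mu_1j(psi) = E(Z_1j | s_.j) = P1(psi) / P0(psi), the mean of the
# noncentral hypergeometric distribution with margins (m1, m2, s) and odds
# ratio psi = exp(Delta).
import numpy as np
from scipy.special import comb

def nchg_mean(psi, m1, m2, s):
    lo, hi = max(0, s - m2), min(m1, s)          # support of Z_1j given the margins
    z = np.arange(lo, hi + 1)
    w = comb(m1, z) * comb(m2, s - z) * psi**z   # terms of P0(psi); z * w gives P1(psi)
    return np.sum(z * w) / np.sum(w)

print(nchg_mean(1.0, m1=36, m2=37, s=15))        # at Delta = 0: central mean m1*s/(m1+m2)
```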
Since the score function $U(\Delta)$ (the first derivative of the log-likelihood function) is a linear function of the data, we can use
$$
U(\Delta) = \sum_{j=1}^{k-1} w_j \,[Z_{1j} - \mu_{1j}(e^{\Delta})], \qquad w_j = 1,
$$
or
$$
U(\Delta) = \sum_{j=1}^{k-1} (\bar\pi_j + \bar\pi_{j+1})\,[Z_{1j} - \mu_{1j}(e^{\Delta})],
$$
where $\bar\pi_j = (\pi_{1j} + \pi_{2j})/2$, $\pi_{ij} = \gamma_{ij} - \gamma_{i,j-1}$ (with $\gamma_{i0} = 0$), and $\sum_{j=1}^{k} \bar\pi_j = 1$.
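With the hypothetical counts from the earlier sketch, the first estimating equation (weights $w_j = 1$) can be solved numerically for $\Delta$; the sketch below reuses the illustrative `nchg_mean` helper and finds the root with `scipy.optimize.brentq`, since $U(\Delta)$ is monotone decreasing in $\Delta$:

```python
# Sketch: solve U(Delta) = sum_j [Z_1j - mu_1j(e^Delta)] = 0 with w_j = 1
# for the hypothetical counts used earlier.
import numpy as np
from scipy.special import comb
from scipy.optimize import brentq

def nchg_mean(psi, m1, m2, s):
    lo, hi = max(0, s - m2), min(m1, s)
    z = np.arange(lo, hi + 1)
    w = comb(m1, z) * comb(m2, s - z) * psi**z
    return np.sum(z * w) / np.sum(w)

y = np.array([10, 14,  8, 4])      # Y_1, ..., Y_k (assumed)
x = np.array([ 5, 11, 12, 9])      # X_1, ..., X_k (assumed)
m1, m2 = int(y.sum()), int(x.sum())
Z1 = np.cumsum(y)[:-1]
S  = np.cumsum(x + y)[:-1]

def U(delta):
    psi = np.exp(delta)
    return sum(z - nchg_mean(psi, m1, m2, int(s)) for z, s in zip(Z1, S))

delta_hat = brentq(U, -5.0, 5.0)   # root of the estimating equation
print(delta_hat)
```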
The conditional variance is approximately
$$
\operatorname{Var}_c(U) \approx \frac{m_1 m_2 (m+1)}{12}\Bigl(1 - \sum_{j=1}^{k} \bar\pi_j^{\,3}\Bigr).
$$