JASP Tutorial
[Video] Reliability analysis with JASP: Cronbach's Alpha
Description
This resource may help you learn how to study the reliability of scores with the Cronbach's alpha coefficient in JASP.
[Video] Reliability analysis with JASP: Intraclass Correlation Coefficient (ICC)
Description
This resource may help you learn how to study the reliability of scores with the Intraclass Correlation Coefficient (ICC) in JASP.

Transcript
Introduction
Hey everybody, and welcome to another JASP tutorial. In this one we're going to continue our reliability analysis module, and we are going to talk about the intraclass correlation coefficient. Not too involved in this video, nothing too crazy; we're actually going to use the same inter-rater reliability data that I used in the previous video for the rater agreement function in the reliability module. So let's go ahead and open up that data. Before we do: I am using JASP 0.16.4, which is the current build, the Intel version of course, because that's what I'm using on this Mac. So let's go ahead and open that recent data: Recent Files, and here we have my irr CSV, and it's going to open now. If you didn't watch that previous video, because you're like "who cares about that," here we have a completely farcical, completely made-up, and just ridiculous set of judges measuring 10 contestants in some, I don't know, talent show or dancing competition. We have three judges, they've each rated on a scale from 0 to 10, and they're relatively consistent, but not really, as you may have seen in that previous video. But we're going to do the ICC, the intraclass correlation.
What is ICC
Now, I used ICC in a previous video, but that was for the inter-item correlation, so that's not to be confused with this intraclass correlation ICC. My apologies for using the same abbreviation.
Here is the intraclass correlation help information. I don't know if I can make it bigger; let me see... no, sorry, I cannot make it bigger, but just to tell you what it is. From Field (2012): the ICC is a correlation coefficient that assesses consistency between measures of the same class. So what are classes? Well, they are different values of the same dependent variable; different measurements of the same dependent variable. Shrout and Fleiss distinguish between six different ways of calculating the ICC based on different designs. They are identified by different values for a and b in ICC(a,b): a depends on how raters are selected and subjects are rated, and b on whether or not ratings are averaged in the end. "Rater" can refer to judges, tests, or other forms of rating, here and in the section below, so we can talk about that as we go through this. I also want to mention that the ICC is a common measure of inter-rater reliability and consistency.
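As a quick aside that is not in the video: for the fixed-raters design we'll end up choosing below, the standard Shrout and Fleiss (1979) definitions can be written in terms of two-way ANOVA mean squares, with $MS_R$ the between-subjects mean square, $MS_E$ the residual mean square, and $k$ the number of raters:

$$\mathrm{ICC}(3,1)=\frac{MS_R-MS_E}{MS_R+(k-1)\,MS_E},\qquad \mathrm{ICC}(3,k)=\frac{MS_R-MS_E}{MS_R}.$$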
Tutorial
All right, so let's go ahead and open this up. How do you get the reliability module in JASP? Well, when you download JASP, it comes with the ability to add these modules to your screen. They exist already; you just go ahead and click them if you want to see them along your top bar. So that is the reliability module, and it ends up with this target icon over here. We're going to click on that, and we're going to go to Intraclass Correlation. The Unidimensional Reliability analysis is your Cronbach's alpha, just in case you haven't seen that; there's a video on that on the channel already. So Intraclass Correlation brings this up. Again, nothing too crazy, and it'll bring back the help file; you can get the help file by clicking on the i button here. That's a good way to refresh yourself as you go through this.
All right, so Variables: we're going to put our three judges in. And I knew I was going to click on that and it was going to disappear; don't worry, I have the ability to bring it back. It'll already do things as I put these in, and I'm not going to change anything yet, so don't worry about that. I put the three judges in; ignore the output for a second, and let's talk about how we input stuff here. The variables in here are the judges. We don't want Contestant in there; that's not necessary. Our contestants are not actual data; they're just a numerical piece of information to distinguish different contestants. So Variables are the columns to compute the ICC for: we have three judges, and we want to compute our intraclass correlation value for them. Each variable corresponds to one rater, with different rows corresponding to different subjects being rated; that's our contestants.
"Each subject is rated by" is the next setting, and we have a choice here. It's a radio button, and you have to determine which option fits your design; they are all mutually exclusive, because that's how the point estimate is calculated. The first one is "a different rater (randomly selected)": each subject is rated by a different rater, and raters are selected randomly. So what does that mean? This would be if you have a pool of raters, and each subject gets, say, three random raters drawn from a pool of, let's say, 10 raters. This corresponds to ICC(1,b) in accordance with the Shrout and Fleiss (1979) paper; we'll get to the b in a second.
Now, the next one is "the same set of randomly selected raters or tests". What does this mean? A random sample of k raters rates all subjects. This corresponds to ICC(2,b) in Shrout and Fleiss. This would be if everyone got the same k raters, and those raters rated every single person, so it's slightly different from a different rater randomly selected.
And then the final one is "the same fixed set of raters or tests": a fixed set of raters rates all subjects, these are all the raters of interest, and there is no generalization to a wider population. This corresponds to ICC(3,b) in Shrout and Fleiss. Now, that point about no generalization to a wider population of raters is very important, and I wish the help file emphasized it. Of course we are trying to generalize to a greater population of people for our ICC, but this setting is about whether we generalize to a wider population of raters, or judges, or tests. So which one are we choosing for this data? Well, because I have 10 contestants and they were all rated by all three judges, and those judges are the only raters of interest, I'm going to choose the same fixed set of raters.
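If you ever want to sanity-check what JASP reports, here is a minimal sketch of the same six Shrout and Fleiss coefficients computed in Python with the pingouin library. The data frame below is a made-up stand-in for the judges data, not the actual irr.csv values:

```python
import pandas as pd
import pingouin as pg

# Hypothetical stand-in for the judges data (NOT the actual irr.csv values):
# 10 contestants, three judges, scores on a 0-10 scale.
wide = pd.DataFrame({
    "contestant": range(1, 11),
    "judge1": [6, 7, 5, 8, 4, 6, 7, 5, 6, 8],
    "judge2": [5, 8, 4, 7, 6, 5, 8, 4, 7, 6],
    "judge3": [7, 6, 6, 8, 5, 7, 6, 5, 6, 7],
})

# pingouin expects long format: one row per (subject, rater) pair.
long = wide.melt(id_vars="contestant", var_name="judge", value_name="score")

# Returns all six Shrout & Fleiss forms: ICC1/ICC2/ICC3 (single rating)
# and ICC1k/ICC2k/ICC3k (averaged ratings), each with a 95% CI.
icc = pg.intraclass_corr(data=long, targets="contestant",
                         raters="judge", ratings="score")
print(icc[["Type", "Description", "ICC", "CI95%"]])
```

For the fixed-raters, single-rating design chosen in the video, you would read off the ICC3 row.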
All right, the next one is "ratings are averaged". You check this box if the ratings by all raters are averaged in the end; this strongly impacts the ICC coefficient. If yes, it corresponds to ICC(a,k) in accordance with Shrout and Fleiss; if not, it corresponds to ICC(a,1). So what are we doing here? Well, are the ratings themselves averaged? The answer in this data set is no, they are not averaged; they are single-instance ratings. We don't average them across judges, or across a set of tests, or anything like that; there's no averaging in this data set. If we go to the data set, these are just whole numbers: judge three rated the performance of contestant one as a six out of ten. That's how that works. Oops, actually I need to bring back the module. There we go.
Finally: so here b would be 1, and you can see that we've got ICC(3,1), because the 3 and the 1 replace a and b. Isn't that cool? And then we can ask for the confidence interval, 95% by default, although you can change it, and that is for the ICC. When you report this, the confidence interval should be reported; the idea here is that you never just report your point estimate, you include the lower and upper confidence interval values as well.
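For example (my own formatting, not something the video specifies), a write-up using the output we're about to see might read: ICC(3,1) = .19, 95% CI [−.17, .64].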
Oh, and then: the Bland-Altman plot option is not part of this, so I'm not sure what is going on there; they may have removed it. The Bland-Altman plot is actually a different module, which will be covered in a different video coming out on the channel after this one, but it creates a table, so we'll come back to that. So let's talk about the output here.
Cicchetti, sorry, there we go: Cicchetti (1994) gives you a set of interpretation guidelines for the ICC, and that's part of the reason why I wanted to do this here. So that's from 1994; we also have a 2016 set from Koo and Li. The Bland-Altman plot we'll talk about again in another video.
So let's talk about these. Let's see our point estimate for ICC(3,1): ten subjects, three raters, Shrout and Fleiss 3,1, a point estimate of 0.185, with a lower bound of negative 0.166 and an upper bound of 0.635 for our confidence interval. So our confidence interval contains zero; that makes it suspect.
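As a side note that isn't in the video: under the consistency definitions sketched earlier, the averaged-ratings coefficient follows from the single-rating one by the Spearman-Brown step-up, so if we had checked "ratings are averaged" we would expect roughly

$$\mathrm{ICC}(3,k)=\frac{k\,\mathrm{ICC}(3,1)}{1+(k-1)\,\mathrm{ICC}(3,1)}=\frac{3\times 0.185}{1+2\times 0.185}\approx 0.41.$$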
All right, so here we have the issue. Cicchetti puts less than 0.4 as poor, 0.4 to 0.59 as fair, 0.6 to 0.74 as good (I don't know if you can hear my dog barking in the background, but I apologize for that), and then 0.75 to 1 as excellent, and 0.185 is definitely less than 0.4. Koo and Li made this a little bit more conservative: instead of 0.4, they made the cut-off 0.5. So for anything between 0.4 and 0.5, depending on whether or not you want to be a little more conservative in evaluating your estimate, you would go with Cicchetti or with Koo and Li; they're essentially the same here. Maybe not exactly the same, but close.
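To make the two guideline sets concrete, here is a small sketch (my own encoding of the published cut-offs, not anything JASP outputs):

```python
# Cicchetti (1994): <0.40 poor, 0.40-0.59 fair, 0.60-0.74 good, 0.75+ excellent.
# Koo & Li (2016):  <0.50 poor, 0.50-0.74 moderate, 0.75-0.89 good, 0.90+ excellent.

def interpret_icc(icc: float, guideline: str = "cicchetti") -> str:
    cutoffs = {
        "cicchetti": [(0.75, "excellent"), (0.60, "good"), (0.40, "fair")],
        "koo_li":    [(0.90, "excellent"), (0.75, "good"), (0.50, "moderate")],
    }
    for threshold, label in cutoffs[guideline]:
        if icc >= threshold:
            return label
    return "poor"  # also covers negative point estimates, which are possible

print(interpret_icc(0.185))            # -> poor
print(interpret_icc(0.185, "koo_li"))  # -> poor
```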
In any case, we've got poor, poor, poor agreement between my three judges, and if you saw the last video on the rater agreement module, yeah, that's absolutely true. So that's how you do the intraclass correlation in JASP: you can say that your raters, your judges, sucked, and maybe you need to get new judges. Thanks for watching this video! Please leave your comments, suggestions, questions, and feedback down below. Thanks for watching; see you in the next one. Bye!