JASP Tutorial

This document provides a tutorial on conducting reliability analysis using JASP, specifically focusing on Cronbach's Alpha and the Intraclass Correlation Coefficient (ICC). It explains how to set up the analysis, including selecting relevant questions and handling reverse coding for reliability measures. The tutorial also discusses the interpretation of results and the importance of understanding the data structure for accurate analysis.

Uploaded by

ccberdejo

[Video] Reliability analysis with JASP: Cronbach's Alpha
Description
This resource may be useful for learning how to study the reliability of scores with JASP using the Cronbach's Alpha coefficient.
Transcript

Hey everybody, and welcome to another JASP tutorial. In this tutorial I want to explore the reliability analysis in JASP, so we're going to do a quick tutorial focusing on Cronbach's Alpha. As always, I'm using the latest build available for macOS from the folks at JASP; this is JASP 0.14, the latest feature preview build as of the time of recording.

Okay, so let's open up some data. Here's a fictional data set that I've used in some other tutorials, as well as in my methods and stats classes: an SPSS anxiety questionnaire with multiple questions and 2,571 fictional responses, generated by Andy Field a few years ago. We're not going to focus on all of the items in this reliability analysis, because that would be a bit unwieldy. We are specifically going to talk about question 1, "Statistics make me cry"; question 3, "Standard deviations excite me," which is going to be a reverse-coded one; and question 4, "I dream that Pearson is attacking me with correlation coefficients." You can see how these are being phrased: as the numbers get bigger on this scale from 1 to 5, you can imagine that fear is increasing. "I don't understand statistics" is question 5; question 12 is "People try to tell you that SPSS makes statistics easier to understand, but it doesn't" (and you can replace SPSS here with JASP); question 16 is "I weep openly at the mention of central tendency"; question 20 is "I cannot sleep for thoughts of eigenvectors"; and question 21 is "I wake under my duvet thinking I am trapped under a normal distribution." As you can see, we have scale values from 1 to 5, where 1 is "strongly disagree" and 5 is "strongly agree," which is why we'll need to reverse code "Standard deviations excite me." They definitely excite me, but I don't know about you.

So we are going to use the reliability analysis module in SPSS... or, excuse me, in JASP; so much talk about SPSS, I forgot what I was doing here. We're going to use Reliability up here, and we're going to use the Single-Test Reliability Analysis under Classical. Here we have all of our single-test reliability options. First and foremost, we have all of our variables, our question items, and we are going to bring over the ones I said we'd focus on. It looks like we'll have to bring them over individually: 1, 3, 4, 5, 12, 16, 20, and 21. You can explore the other ones at another time if you'd like.

By default it brings up McDonald's omega. We are not going to talk about McDonald's omega in this tutorial; you can read up about it elsewhere. But I do want to note that before I changed this, we had this note: "The following item correlated negatively with the scale: Q3." Remember, I said we're going to have to reverse code that. Now, this module has a really good, quick way to deal with that, just in case you're doing your reliability analysis before you've gone and reverse coded these variables. So imagine this is the first thing you've done, before you've done any data cleaning, any average-taking, or anything like that. I will get to that; it's going to be our second step. But first let's get some statistics that will work for us. Again, McDonald's omega is on by default, so I'm going to uncheck that, and we are going to specifically talk about Cronbach's Alpha.

Cronbach's Alpha is a measurement of scale reliability, and it measures equivalence. What it's going to do is take these eight questions, the subset I've chosen here, and essentially do four versus four, what's called split-half reliability: it splits those items into two groups and sees whether or not the items in both groups give comparable results. Now, this is not a measure of whether the scale itself is unidimensional, that is, whether the scale reflects a single construct or multiple constructs; for that you'd have to do a factor analysis, either an exploratory factor analysis or a confirmatory factor analysis, or principal components. Cronbach's Alpha is only telling you about your split-half reliability: whether one half of the items behaves the same as the other half, for one person as well as everyone else.
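For readers who want the formula behind the point estimate JASP reports: Cronbach's alpha is k/(k-1) times (1 minus the sum of the item variances divided by the variance of the total score). A minimal Python sketch (the function name and toy data are mine, purely illustrative, not JASP's implementation):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 5 respondents answering 3 items on a 1-5 scale
scores = [[1, 2, 1], [2, 2, 3], [3, 4, 3], [4, 4, 5], [5, 5, 5]]
print(round(cronbach_alpha(scores), 3))  # 0.959 for this toy data
```

Items that all move together give a high alpha; perfectly redundant items give exactly 1.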
So that's what we see here. Now, with that "correlated negatively" note for question 3, because we haven't used the reverse-scale option yet, you can see that my point estimate for Cronbach's Alpha is below what would be considered acceptable, the .7-.8 range or higher. So there's something up here. It's kind of interesting, though, that question 3 isn't lowering this point estimate more than it does; but that's fine. Additional options you can get with this are Guttman's lambda-2 and lambda-6. I don't know what those are, so we're going to move beyond them; there are limits to my knowledge here. We can get inter-item correlations, which adds them to the table (again, some issues will be resolved once we reverse code question 3), and we'll get the mean and standard deviation for the scale itself; this is only going to give us a point estimate for the scale and the individual items. Now, because we have not selected McDonald's omega or Guttman's statistics, those are grayed out; if I check one, it becomes available to choose, but I'm not going to focus on that. So we could look at individual item reliability, what would happen if we were to drop any item, and we can get the item-rest correlation as well: each item's correlation with the rest of the whole group.
group now you can see that Q3 is quite
negatively correlated with the rest of
the group and this is the calculation
this is the calculation that is being uh
handled for this
note and then we can get individual uh
means and standard deviations for each
of our items which is somewhat use
uh looking down the road you would
probably get the vast majority of this
information from your descriptives
module but you can get it here if you
really really wanted to now here's where
the reverse scaled items module uh well
subm module I suppose option is
fantastic for reliability analysis so
all we have to do is put Q3 over here
and it will reverse scale and you see
now it's recalculating the uh Alpha as
well as the item rest correlation uh and
then of of course the mean and standard
deviation will change too because that
also gets flipped and it gives you the
note here the following item was reverse
scaled Q3 and now we have uh our Point
estimate for Chromebox Alpha is
82 which is you
know
2 higher than what it was when Q3 was
improperly coded so you can you can see
that we have a pretty good point
estimate for our reliability which is
nice the mean and standard deviation
changed somewhat because of q3's mean
and standard deviation changing somewhat
here so this is uh this is I think a
fantastic little tool now the thing to
note is it does not change Q3 in the
actual data set so be aware of that
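Reverse scaling a Likert item is just new = (min + max) minus old, so 6 minus the old value on a 1-to-5 scale. A sketch of creating that new variable yourself outside JASP (the names here are mine, purely illustrative):

```python
import numpy as np

def reverse_score(x, low=1, high=5):
    """Flip a Likert item so that `high` maps to `low` and vice versa."""
    return (low + high) - np.asarray(x)

q03 = np.array([1, 2, 3, 4, 5])
q03_rev = reverse_score(q03)
print(q03_rev.tolist())  # [5, 4, 3, 2, 1]
```

Applying the function twice returns the original values, which is a handy sanity check after recoding.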
Don't immediately run off to do your t-tests or your regressions or whatever with Q3 using the knowledge from this reverse-scaled Q3, because it's only being reverse scaled within this module; you will have to create a new variable to reverse scale it. Now, if you had question 3 reverse scaled prior to opening up the reliability analysis, then you would put the reverse-scaled version in this variable list, and not the non-reverse-coded variable. I would name it something like q03_rev if you wanted to just use the reverse-coded variable in the reliability analysis and not have to use this option; because, again, this reverse-scaled item exists only within this module, and it does not change the data in the data set.

Some advanced options exist as well: what you would do with missing values (pairwise or listwise deletion), and bootstrapping. This is on by default, doing 1,000 bootstraps, which is why you saw the progress bar up here. It is computing the unstandardized Cronbach's Alpha estimate, but you can get a standardized one if you'd like; if you use McDonald's omega, you can get estimation via a confirmatory factor analysis; and you can set your interval to an analytic interval or a bootstrapped interval.

So that is how you do a reliability analysis for Cronbach's Alpha in JASP. If you liked this video, please leave a like; if you like this content, please stick around and hit subscribe; and please leave your comments and feedback down below. Thanks for watching. Bye.
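A note on the 1,000-sample bootstrap mentioned above: a percentile bootstrap interval for alpha resamples respondents (rows) with replacement and takes the tails of the resulting alpha distribution. A sketch under those assumptions (illustrative only, not JASP's exact implementation):

```python
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

def bootstrap_alpha_ci(items, n_boot=1000, level=0.95, seed=0):
    """Percentile bootstrap CI for alpha, resampling rows (respondents)."""
    rng = np.random.default_rng(seed)
    items = np.asarray(items, dtype=float)
    n = items.shape[0]
    stats = []
    for _ in range(n_boot):
        a = cronbach_alpha(items[rng.integers(0, n, size=n)])
        if np.isfinite(a):  # guard against degenerate resamples
            stats.append(a)
    tail = (1 - level) / 2 * 100
    return tuple(np.percentile(stats, [tail, 100 - tail]))

scores = [[1, 2, 1], [2, 2, 3], [3, 4, 3], [4, 4, 5],
          [5, 5, 5], [2, 3, 2], [4, 5, 4], [3, 3, 4]]
lo, hi = bootstrap_alpha_ci(scores)
print(round(lo, 2), round(hi, 2))
```

As the video says, report the lower and upper bounds alongside the point estimate, never the point estimate alone.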

[Video] Reliability analysis with JASP: Intraclass Correlation Coefficient (ICC)
Description
This resource may be useful for learning how to study the reliability of scores with JASP using the Intraclass Correlation Coefficient (ICC).
Introduction
Hey everybody, and welcome to another JASP tutorial. In this one we're going to continue with the reliability analysis module, and we are going to talk about the intraclass correlation coefficient. Not too involved in this video, nothing too crazy. We're actually going to use the same inter-rater reliability data that I used in the previous video for the rater agreement function in the reliability module, so let's go ahead and open up that data. Before we do: I am using JASP 0.16.4, which is the current build (the Intel version, of course, because that's what I'm using on this Mac). Let's open that recent data: Recent Files, and here we have my irr CSV; it's going to open this now.

If you didn't watch that previous video because you're like, "who cares about that": here we have a completely farcical, completely ridiculous set of judges rating 10 contestants in some, I don't know, talent show or dancing competition. We have three judges, they've each rated on a scale from 0 to 10, and they're relatively consistent, but not really, as you may have seen in that previous video. But we're going to do the ICC, the intraclass correlation.

What is ICC

Now, I used "ICC" in a previous video; that was for the inter-item correlation coefficient, which is not to be confused with this intraclass correlation ICC, so my apologies for using the same abbreviation. Here is the intraclass correlation help information. I don't know if I can make it bigger; let me see... no, I cannot make it bigger, but just to tell you what it is. From Field (2012): the ICC is a correlation coefficient that assesses consistency between measures of the same class. So what are classes? Well, they are different values of the same dependent variable; different measurements of the same dependent variable. Shrout and Fleiss distinguish between six different ways of calculating the ICC based on different designs. They are identified by different values for A and B in ICC(A,B): A depends on how raters are selected and subjects are rated, and B on whether or not ratings are averaged in the end. "Rater" can refer to judges, tests, or other forms of rating here and in the section below, so we can talk about that as we go through this. I also want to mention that it is a common measure of inter-rater reliability and consistency. All right, so let's go ahead and open this.

Tutorial
How do you get the reliability module in JASP? Well, when you download JASP, it comes with the ability to add these modules to your screen. They exist already; you just go ahead and click them if you want to see them along your top bar. So that is the reliability module, and it ends up with this target icon over here. We're going to click on that and go to Intraclass Correlation. (The Unidimensional Reliability analysis is your Cronbach's Alpha, just in case you haven't seen that; there's a video on that on the channel already.) Intraclass Correlation brings this up. Again, nothing too crazy, and it'll bring back the help file; you can get the help file by clicking on the "i" button here, which is a good way to refresh yourself as you go through this.

All right, so, variables: we're going to put our three judges in. I knew I was going to click on that and it was going to disappear; don't worry, I have the ability to bring it back. It'll already do things as I put these in, so I'm not going to change anything yet; ignore the output for a second and let's talk about how we input stuff here. The variables in here are the judges. We don't want Contestant in there; that's not necessary. Our contestants are not actual data, they're just a numerical piece of information to distinguish different contestants. So, the variables are the columns to compute the ICC for: we have three judges, and we want to compute our intraclass correlation value for them. Each variable corresponds to one rater, with different rows corresponding to different subjects being rated; that's our contestants.

"Each subject is rated by..." is a choice here, a radio button. You have to determine which one fits, and they are mutually exclusive, because that's how the point estimate is calculated. The first one is "a different rater": each subject is rated by a different rater, and raters are selected randomly. What does that mean? This would be if you have a pool of raters and each subject is rated by, say, three random raters drawn from a pool of, let's say, 10 raters. This corresponds to ICC(1,b) in accordance with the Shrout and Fleiss (1979) paper; we'll get to that in a second. The next one is "the same set of randomly selected raters or tests": a random sample of k raters rates all subjects. This corresponds to ICC(2,b) in Shrout and Fleiss. It would be if, like, everyone got k raters, but those same raters rated every single person; so it's slightly different from a different rater being randomly selected each time. And then the final one is "the same fixed set of raters or tests": a fixed set of raters rates all subjects, these are all the raters of interest, and there is no generalization to a wider population of raters; this corresponds to ICC(3,b) in Shrout and Fleiss. It's very important (and I wish I had added that earlier) that there is no generalization to a wider population of raters. Of course we are trying to generalize to a greater population of people for our ICC, but this option is about a wider population of raters, or judges, or tests. So which one are we choosing for this data? Well, because I have 10 contestants and they were all rated by all three judges, I'm going to choose "the same fixed set of raters," because that's who they were.
Okay, all right, the next one is "ratings are averaged." You check this box if the ratings by all raters are averaged in the end; this strongly impacts the ICC coefficient. If yes, it corresponds to ICC(A,k) in accordance with Shrout and Fleiss; if not, it corresponds to ICC(A,1). So what are we doing here? Well, are the ratings themselves averaged? The answer in this data set is no, they are not averaged; they are single-instance ratings. We don't average them across ratings, or across judges, or across a set of tests, or anything like that; there's no averaging in this data set. If we go to the data set, these are just whole numbers: judge 3 rated the performance of contestant 1 as a 6 out of 10, and that's how that works. Finally... oops, actually I need to bring back the module. There we go.
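For readers curious what sits behind the ICC(3,1) point estimate: it comes from a two-way ANOVA decomposition, (MS_rows - MS_error) / (MS_rows + (k-1) * MS_error), where rows are subjects and k is the number of raters. A minimal sketch under those assumptions (the data and names are illustrative, not the video's judges):

```python
import numpy as np

def icc_3_1(ratings):
    """Shrout & Fleiss ICC(3,1): fixed raters, single (non-averaged) ratings.

    `ratings` is an n-subjects by k-raters matrix.
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_total = ((x - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols                 # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Three raters in near-perfect agreement across five subjects
ratings = [[7, 8, 7], [5, 5, 6], [9, 9, 9], [3, 4, 3], [6, 6, 6]]
print(round(icc_3_1(ratings), 3))  # 0.957
```

Consistent raters push the value toward 1; disagreeing raters (as in the video's judges) pull it toward 0 or below.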
Finally, since our ratings are not averaged, this would be a 1: you can see here that we've got ICC(3,1), because 1 replaces b. Isn't that cool? Then we can ask for the confidence interval, 95% by default, although you can change it; that is for the ICC. And when you report this, the idea is that you never just report your point estimate: you include the lower and upper confidence interval values.

Oh, and the Bland-Altman plot is not part of this, so I'm not sure what is going on here; they may have removed it. The Bland-Altman plot is actually a different module, which will be in a different video coming out on the channel after this one. But it creates a table, so we'll come back to that. So let's talk about the output here. Cicchetti (1994) gives you a set of interpretation guidelines for the ICC, which is part of the reason why I wanted to do this here; we also have Koo and Li (2016). (The Bland-Altman plot we'll talk about in another video.)

So let's see our point estimate for ICC(3,1): ten subjects, three raters, Shrout and Fleiss ICC(3,1) point estimate of 0.185, with the lower bound of our confidence interval at -0.166 and the upper bound at 0.635. Our confidence interval contains zero, and that makes it suspect. So here we have the issue. Cicchetti puts less than 0.4 as poor, 0.4 to 0.59 as fair, 0.6 to 0.74 as good (I don't know if you can hear my dog barking in the background, but I apologize for that), and 0.75 to 1 as excellent; and 0.185 is definitely less than 0.4. Koo and Li made this a little more conservative: instead of 0.4 they made it 0.5. So for anything between 0.4 and 0.5, depending on whether or not you want to be a little more conservative in evaluating your estimate, you would go with Cicchetti or with Koo and Li; but they're essentially the same here. In any case, we've got poor, poor, poor agreement between my three judges, and if you saw the last video on the rater agreement module: yeah, that's absolutely true.

So that's how you do the intraclass correlation in JASP. You can say that your raters, your judges, were terrible, and maybe you need to get new judges. Thanks for watching this video; please leave your comments, suggestions, questions, and feedback down below. See you in the next one. Bye.
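The Cicchetti (1994) cutoffs quoted above are easy to encode; a tiny sketch for labeling a point estimate (the helper name is mine, purely illustrative):

```python
def cicchetti_label(icc):
    """Qualitative label for an ICC under Cicchetti's (1994) guidelines."""
    if icc < 0.40:
        return "poor"
    if icc < 0.60:
        return "fair"
    if icc < 0.75:
        return "good"
    return "excellent"

print(cicchetti_label(0.185))  # poor, matching the video's ICC(3,1) estimate
```

Swapping the 0.40 threshold for 0.50 would give the more conservative Koo and Li (2016) lower band mentioned in the video.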
