Validating Academic Integrity Survey (AIS): An Application
DOI 10.1007/s10805-016-9253-y
Abstract This study concerned the validation of the Academic Integrity Survey (AIS), a measure
developed in 2010 to investigate academic integrity practices in a Malaysian university. It
also examined the usefulness of the measure across the gender and nationality of the
participants (undergraduates of Nigerian and Malaysian public universities). The sample
comprised 450 students selected via a quota sampling technique. The findings supported
the multidimensionality of academic dishonesty, and strong evidence of convergent
validity, discriminant validity, and construct reliability was generated for the revised AIS. The
testing of moderating effects yielded two outcomes: the gender invariance analysis
produced evidence that the three-dimensional model was not moderated by gender, while the
nationality effect was inconclusive, probably owing to a noticeable imbalance in respondent
distribution across the nationality groups. The significance of this study lies not only in the
rigorous statistical methods deployed to validate the dimensionality and psychometric
properties of the AIS, but also in establishing the gender invariance of the model. The
findings suggest that although male and female students may vary in their academic
misconduct, the underlying factors for this conduct are the same and can be addressed
effectively using a non-discriminating approach.
* Imran Adesile
adesileimraan@yahoo.com
1 Institute of Education, International Islamic University Malaysia, Jalan Gombak, 53100 Kuala Lumpur, Malaysia
2 Research Management Center, International Islamic University Malaysia, Jalan Gombak, 53100 Kuala Lumpur, Malaysia
3 Institute of Education, International Islamic University Malaysia, Jalan Gombak, 53100 Kuala Lumpur, Malaysia
Adesile I. et al.
Introduction
It is evident from the literature that several measures have been developed to
investigate academic dishonesty. However, as Imran and Sahari (2013) pointed out,
most of these measures lacked evidence of sound psychometric properties, and their
dimensionality was not painstakingly investigated. A few examples suffice to
buttress this point.
One of the measures frequently used to investigate academic dishonesty was the
scale developed by McCabe and Trevino (1993). The scale measured students' self-reported
academic dishonesty with 12 items, on a 5-point scale ranging from 1
(= never) to 5 (= many times). This scale has been employed in several studies,
including McCabe and Trevino (1997). Despite its wide application in the literature,
its psychometric properties have not been critically investigated. Apart from limited
information on internal consistency (Cronbach's alpha = 0.794), no additional
information on validation was provided (through either exploratory or confirmatory
factor analysis).
Notwithstanding its deficiency, Iyer and Eastman (2006) adapted the McCabe and
Trevino (1993) measure. The adapted items were said to be similar to those in
Brown's (1996, 2000) and Kidwell et al.'s (2003) studies. As in McCabe and Trevino,
a 5-point scale ranging from 1 (= never) to 5 (= many times) was used. Although it
was claimed that a Multitrait-Multimethod (MTMM) analysis was used to establish
the convergent and discriminant validity of the scale, no concrete evidence
regarding this analysis was provided.
Also, Lim and See (2001), in a study of attitudes and intentions toward reporting
academic cheating among students in Singapore, used a 21-item measure adapted from
Newstead et al. (1996). Data were collected on a 5-point scale ranging from 1 (= never)
to 5 (= frequently). The 21-item scale yielded a Cronbach's alpha of 0.86 for self-reported
cheating and 0.90 for perceived seriousness of cheating. However, as with the other
measures discussed, no further information was given concerning the validity and
factor structure of the scale's items.
Interestingly, even some recent studies use measures with less than sound psychometric
properties. For instance, in Bates et al.'s (2005) 'Students today, pharmacists
tomorrow: a survey of student ethics,' the validity, reliability, and
dimensionality of the 12 adapted scenarios were not mentioned. The same was the
case with Arhin (2009), who adapted the twelve items used in Bates et al. (2005). The
author only highlighted that the clarity and face validity of the items were checked
through focus groups and a pilot study, but provided no concrete evidence of the
reliability and factor structure of the items.
Similarly, the measure of thirteen unethical conducts used in Nazir et al. (2011) was no
different from the other measures discussed above: it lacked detailed psychometric
properties. The authors merely reported the internal consistency (0.85), with no further
information on the validity and factor structure of the items.
Besides validation and estimation of psychometric properties, the literature is also characterized
by inconsistent findings about the dimensionality of academic dishonesty. This issue is examined
in the next paragraphs.
For instance, Iyer and Eastman (2006) reported that academic dishonesty comprised four
dimensions, overlapping with forms of cheating identified in other studies. These dimensions
included cheating on classroom tests; copying from the internet; knowledge and awareness of
others' (peers') cheating; and lying to avoid detection.
Nonetheless, researchers like Roig and DeTommaso (1995), and Ferrari (2005),
opined that academic dishonesty is a two-dimensional construct, namely 'plagiarism'
(regarding written homework) and 'cheating' (regarding class tests and exams). For
Rawwas and Isakson (2000), academic dishonesty is a unidimensional construct
comprising four general items: aiding and abetting dishonest conduct;
obtaining an unfair advantage; fabricating information; and ignoring prevalent
practices.
The dimension reported by Pavela (1978) was in accord with Rawwas and Isakson: one-dimensional
with four items. The author identified those items as using unauthorized materials for
an academic activity (i.e., an assignment or class test); fabricating information, references, or
results; plagiarizing; and helping other students engage in academic dishonesty (i.e., facilitating
academic dishonesty).
The above notwithstanding, a case was made for measuring academic dishonesty
under specific forms or dimensions rather than using general items. According to
Swift and Nonis (1998), when students were asked about cheating in the general
sense, only 60 % of the students admitted to having cheated at least once, but when the
summated scores for all specific forms of cheating were totaled, about 87 % of the
students admitted to cheating at least once. Thus, identifying specific cheating behaviors
may reveal academic dishonesty better than general questions (Swift and Nonis
1998).
Therefore, the ultimate goal of this study is twofold: validating the psychometric
properties of the refined Academic Integrity Survey (AIS), and determining its
dimensionality. The study also aims to test the moderating effects of gender and nationality
on the revised measure. The following research hypotheses are proposed to guide the
thrust of this study.
Methods
Sample
This study drew participants from Nigerian and Malaysian public universities;
altogether, 450 undergraduates participated. A quota sampling technique
was applied in the selection of participants. Quota sampling is a technique
in which the researcher first identifies the general categories from which cases or people
will be selected, and then selects cases until a pre-determined number is reached in each
category (Neuman 2006). This technique is most suitable where the
researcher cannot access a comprehensive list for the sampling frame, as was the
case in this study. Given that the undergraduate population in Nigeria is almost twice
that of Malaysia, a greater number of respondents were chosen from the former:
280 students from Nigeria and 170 students from Malaysia. Table 1 shows the
distribution of respondents and the usable responses from the survey.
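As a rough illustration (not the authors' actual procedure), the quota-filling logic described above can be sketched as follows. The category labels and the quota figures of 280 and 170 come from the text; the population list, respondent identifiers, and helper function are hypothetical:

```python
import random

def quota_sample(population, quotas, seed=0):
    """Select respondents until each category's pre-set quota is filled.

    population: list of (respondent_id, category) pairs
    quotas: dict mapping category -> required number of cases
    """
    rng = random.Random(seed)
    pool = population[:]
    rng.shuffle(pool)  # order of encounter; quota sampling is still non-random overall
    filled = {cat: [] for cat in quotas}
    for respondent, cat in pool:
        if cat in filled and len(filled[cat]) < quotas[cat]:
            filled[cat].append(respondent)
        if all(len(filled[c]) == quotas[c] for c in quotas):
            break
    return filled

# Quotas used in this study: 280 Nigerian and 170 Malaysian undergraduates
quotas = {"Nigeria": 280, "Malaysia": 170}
population = [(i, "Nigeria") for i in range(600)] + \
             [(i + 600, "Malaysia") for i in range(400)]
sample = quota_sample(population, quotas)
print({cat: len(ids) for cat, ids in sample.items()})  # {'Nigeria': 280, 'Malaysia': 170}
```

The sketch shows why quota sampling needs no sampling frame: selection stops as soon as each category's count is met, regardless of who was left unselected.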
Instrument
This study adapted a measure of academic integrity developed by Imran (2010). The
measure, the 'Academic Integrity Survey' (AIS), was used to collect data on postgraduate
students' perceptions of academic integrity practices in a Malaysian
university. The AIS comprised 16 items derived from the measures of previous studies such as
McCabe and Trevino (1993, 1997), Lim and See (2001), Dawkins (2004), and Brown
and Weible (2006).
The survey used a 5-point Likert scale, from 1 (not dishonesty) to 5 (serious
dishonesty). A high score on the scale denotes a higher perception of academic integrity
practices among students; the reverse is the case for a low score. The internal consistency
for the whole set of items was high (.946). In addition, a confirmatory factor analytic
(CFA) procedure was used to establish construct validity, resulting in a four-factor
solution.
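For reference, the internal-consistency statistic cited here (Cronbach's alpha) can be computed as in this minimal sketch; the data below are purely illustrative, not the study's responses:

```python
def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).

    items: list of per-item score lists, all the same length
    (one entry per respondent).
    """
    k = len(items)
    n = len(items[0])

    def var(xs):
        # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three perfectly parallel items yield an alpha of 1.0
scores = [[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]
print(cronbach_alpha(scores))  # 1.0
```

A value near the reported .946 would indicate that responses to the 16 items co-vary strongly, which is what the alpha formula rewards.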
The AIS was adapted because (1) the instrument captured well the key object of the
present study (academically dishonest conduct among students of higher education),
and (2) the instrument was developed among students of diverse cultural and demographic
characteristics and consisted of items that have been widely used by
prominent researchers in the field. Nonetheless, since the wording of most items was
modified to suit the context of the present study, it was necessary
to revalidate the psychometric properties as well as the dimensionality of the scale's items.
Specifically, wording that referred mainly to postgraduates of the Malaysian
university was amended to reflect general use. In addition, the present study extended
the response scale from 5 options to 8 options (1 = 'very strongly disagree'; 8 = 'very
strongly agree'), to enable wider variation in the participants' responses.
Items and their component loadings (components 1–3):
Use unauthorized instruments (e.g. mobile phone, crib note, etc.) in a class test or exam. .884
Copy other students’ works without their consent in a class test/exam. .873
Tell a lie to receive an undeserved grade from the instructor. .851
Use notes during a closed book examination. .833
Submit other students’ works as your own to obtain grades. .832
Employ an expert to write your assignment(s). .817
Copy other students’ works with their permission during a class test/exam. .816
Fail to contribute a fair share to a group project or assignment. .794
Falsify list of references in a group or individual assignment. .638 .592
Ignore the established instructions for completing an assignment or a project. .829
Submit a previously graded assignment to another instructor for grade without changes. .825
Falsify lab results. .816
Fabricate research data. .413 .798
Fail to acknowledge previous studies used in the literature review. .610 .660
Fail to acknowledge team members in a group work. .601 .657
Following commonly recommended goodness-of-fit criteria (a relative chi-square (χ2/df)
value between 2.0 and 5.0 as acceptable fit; a Comparative Fit Index (CFI) value
greater than 0.90 but not reaching 1.0 as indication of a reasonably good fit; and a
Root Mean Square Error of Approximation (RMSEA) value between 0.05 and 0.08
suggesting a moderate fit), the results of the present analysis, especially the χ2/df
(3.02), the CFI (.959), and the RMSEA (.078), showed that the model was a modest
fit to the sample data of this study. This was further supported by the parameter
loadings, which were all quite adequate and reasonable, and the correlations among factors,
which were all moderate (ranging from .21 to .56). There were no offending estimates,
and the loadings were statistically significant. Figure 1 presents the output of the First-Order
AIS model.
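The fit-screening logic implied by these thresholds can be sketched roughly as follows; this is a minimal illustration, not the SEM software actually used, and the threshold values and reported statistics all come from the text:

```python
def assess_fit(rel_chi_sq, cfi, rmsea):
    """Screen goodness-of-fit statistics against the thresholds cited in
    the text: relative chi-square between 2.0 and 5.0, CFI above 0.90 but
    below 1.0, and RMSEA between 0.05 and 0.08 (moderate fit)."""
    return {
        "relative chi-square (2.0-5.0)": 2.0 <= rel_chi_sq <= 5.0,
        "CFI (> 0.90, < 1.0)": 0.90 < cfi < 1.0,
        "RMSEA (0.05-0.08)": 0.05 <= rmsea <= 0.08,
    }

# First-order AIS model as reported: chi2/df = 3.02, CFI = .959, RMSEA = .078
first_order = assess_fit(3.02, 0.959, 0.078)
print(all(first_order.values()))  # True
```

The same check applied to the second-order statistics reported below (2.53, .971, .068) also passes, which is consistent with the authors' conclusion that the second-order model fits at least as well.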
The model output further revealed that all the parameter loadings
were high and practically reasonable (0.8 and above), suggesting the possibility of a common
latent variable underlying the three factors. This warranted testing a second-order CFA model to
explore the underlying, central variable. Figure 2 displays the output of the tested second-order
measurement model.
The output in Fig. 2 showed that the model fit the sample data well. The key GoF
statistics (χ2/df, 2.53; CFI, .971; and RMSEA, .068) were more adequate than those
of the previous first-order model. The parameter loadings were
Fig. 1 First-Order AIS. CHT – Cheating; RMSC – Research Misconduct; PLG – Plagiarism; AD1 -
AD15 = Item 1–15 for Academic Dishonesty; e1 – e15 = Error terms associated with AD1 – AD15; CMINDF
– Relative Chi-Square; p – P-value; CFI – Comparative Fit Index; and RMSEA – Root Mean Square Error of
Approximation
Fig. 2 Second-Order AIS. CHT – Cheating; RMSC – Research Misconduct; PLG – Plagiarism; AD1 -
AD15 = Item 1–15 for Academic Dishonesty; e1 – e18 = Error terms associated with AD1 – AD15 and the
three 1st order latent factors; CMINDF – Relative Chi-Square; p – P-value; CFI – Comparative Fit Index; and
RMSEA – Root Mean Square Error of Approximation
adequate and reasonable (all loadings were above the 0.5 cut-off recommended by Hair et
al. 2010), and the effect sizes of the three factors were quite substantial (ranging from
34 % for RMSC to 89 % for PLG). In addition, the three components explained a
substantial, statistically significant percentage of the underlying total variability in students'
academic dishonesty: 11 %, 40 %, and 79 % for research misconduct,
cheating, and plagiarism, respectively. Table 6 presents the standardized regression
weights for all the variables in the model.
To establish evidence of the validity and reliability of the revised AIS measure, the
three statistical criteria recommended by Hair et al. (2010) were applied: adequate
standardized factor loadings (preferably 0.5 and above); AVE values greater than
0.5 (evidence of convergent validity), with shared variance (SV) among variables less
than the corresponding AVE values (evidence of discriminant validity); and CR values
greater than 0.7 (evidence of construct reliability). The outcomes, shown in Table 7,
indicate that the three-dimensional AIS model was high in internal consistency and
that the validity of the scale's dimensions was adequately supported. All AVE values
were greater than the recommended threshold (0.5), evidence of convergent
validity. All AVE values also exceeded the SVs, denoting discriminant validity
among the dimensions of the measure. Lastly, the CR values were greater than the
recommended cut-off (0.7), pointing to the scale's reliability.
S.E. – Standardized Estimates; C.R. – Critical Ratio; p – P-value; '***' = significant at the .0001 alpha level;
F3 – Factor 3 (Plagiarism); F2 – Factor 2 (Cheating); F1 – Factor 1 (Research Misconduct); AD1 – AD15 = Items
1–15 adapted for Academic Dishonesty
The findings for the first hypothesis (Table 4, Figs. 1 and 2) showed that the revised AIS
measure is a multi-dimensional construct with three dimensions (cheating, plagiarism, and
research misconduct), consisting of seven, three, and four items, respectively.
The results for the second hypothesis (Table 7) provide strong evidence that the revised AIS is
statistically valid and reliable: that is, evidence that different approaches to measuring a
conceptually distinct latent construct produce not only similar results (convergent validity),
Diagonals (in bold and red color) represent AVEs; the off-diagonal values below the diagonal represent
correlations among constructs, and those above represent the corresponding shared variance among
components. Composite reliability values are presented in the last column (in bold and purple color)
but each construct is unique and captures some phenomena which other constructs do not
(discriminant validity); and that the latent constructs exhibit some degree of internal consis-
tency in their measures (construct reliability) (Hair et al. 2010).
The outcomes of the configural invariance analyses (the third hypothesis) showed that the
revised AIS model is not gender biased. 'Configural' is a term used in the literature for
the model that incorporates the baseline models for two groups within the same
file (Byrne 2010); it is the model against which subsequent invariance models (e.g., the
constrained model) are compared (Hair et al. 2010). Figures 3 and 4 present the outputs for
the configural and constrained models. Only the gender invariance results are displayed for
discussion; the nationality group did not produce a significant outcome, probably due to a
noticeable difference in sample-size distribution. For instance, the usable data gathered from the
Fig. 4 Constrained Model (Male, Female). CHT – Cheating; RMSC – Research Misconduct; PLG – Plagiarism;
AD1 - AD15 = Item 1–15 for Academic Dishonesty; e1 – e18 = Error terms associated with AD1 – AD15 and
the three 1st order latent factors; CMIN – Chi-Square; DF – Degree of Freedom; p – P-value; CFI – Comparative
Fit Index; and RMSEA – Root Mean Square Error of Approximation
Nigerian respondents (n = 200) apparently exceeded those gathered from the Malaysian
participants (n = 128). Thus, the findings regarding the nationality effect were inconclusive.
Techniques proposed by prominent scholars in the field (the chi-square statistic and the
Comparative Fit Index, CFI) were employed to interpret the results of the invariance analysis.
In the chi-square approach, evidence of invariance is claimed if the difference in the chi-square
values between the configural and constrained models, evaluated against the difference in their
degrees of freedom, is insignificant when compared with the tabulated chi-square value at
a preferred, stringent alpha level (Byrne 2010; Hair et al. 2010). It should be noted that this
approach has been criticized, mostly by scholars in applied research, for representing an
excessively stringent test of invariance, one which might not sit well with the understanding
that SEM models are at best only approximations of reality (Cudeck and Browne 1983;
MacCallum et al. 1992; cited in Byrne 2010).
A more recent, alternative approach to interpreting invariance analysis was proposed by
Cheung and Rensvold (2002, cited in Byrne 2010). This approach asserts that
evidence of invariance should be based on the difference in the CFI values
(between the configural and constrained models) not exceeding 0.01.
Byrne (2010) noted that although the latter approach is the more recent and more
practical approach to testing for invariance, it has not been granted the official SEM
stamp of approval to date. Nevertheless, its use is frequently reported in the literature,
largely because it makes more practical sense (Byrne 2010).
Following the above criteria, the results summarized in Table 8 showed that the
chi-square statistic and the differences in the CFI and RMSEA values all argued for
the equivalence (invariance) of the three-dimensional AIS model across the gender of
the respondents.
Specifically, the invariance test for the male (n1 = 148) and female (n2 = 180)
groups resulted in a statistically insignificant change in the chi-square value,
Δχ2 (13) = 22.0, p > .05, and insignificant changes in the CFI and RMSEA values
(ΔCFI = .002; ΔRMSEA = .001). Simply put, constraining the parameters did not produce
a significantly poorer-fitting model than the configural model; the parameter estimates
do not vary significantly across the respondents' gender groups. Hence, gender does not
interact with the overall factors/dimensions underlying students' academic dishonesty;
that is, gender is not a moderating variable. Table 8 presents the summary of the
invariance analysis results.
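The decision rule applied here can be sketched as follows. This is a simplified illustration: the df = 13 critical values are standard chi-square table entries, the ΔCFI cut-off is Cheung and Rensvold's .01 criterion, and the Δχ2 and ΔCFI figures are the ones reported above:

```python
# Tabulated chi-square critical values for df = 13 at the alpha levels discussed
CHI2_CRIT_DF13 = {0.05: 22.362, 0.01: 27.688}

def invariance_supported(delta_chi2, delta_cfi, alpha=0.05):
    """Invariance holds if the chi-square difference between the configural
    and constrained models stays below the tabulated critical value, and
    (per Cheung & Rensvold) the change in CFI does not exceed .01."""
    chi2_ok = delta_chi2 < CHI2_CRIT_DF13[alpha]
    cfi_ok = delta_cfi <= 0.01
    return chi2_ok and cfi_ok

# Gender result reported above: delta chi2(13) = 22.0, delta CFI = .002
print(invariance_supported(22.0, 0.002))  # True
```

Note that 22.0 also falls below the stricter .01 critical value (27.688), so the invariance conclusion is unchanged at the more stringent alpha level used in Table 8.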
Fit Statistics Configural Model Constrained Model Difference Tab. χ2 (at p = 0.01) Decision
χ2 = Chi-square statistic; Tab. χ2 = tabulated χ2; DF = degrees of freedom; CFI = Comparative Fit Index;
RMSEA = Root Mean Square Error of Approximation; p = P-value; Not Sig. = not significant
Discussion
The ultimate goal of this research was to produce a psychometrically sound instrument
with clear dimensionality for academic dishonesty among higher education students. The
results of the EFA and CFA statistics, in part, argued for a multi-dimensional academic
dishonesty construct. They showed that the revised AIS is represented by three underlying
factors: cheating, plagiarism, and research misconduct. Among the factors, cheating had
the highest variance explained, followed by research misconduct, with plagiarism the least.
Cheating comprised seven items reflecting dishonest conduct usually perpetrated
by students during a class test and/or exam, such as copying another
student's work and using notes during a closed-book examination. Plagiarism
comprises three items capturing students' dishonest conduct in
group or individual take-home assignments, as well as abuse of online copyrighted
materials; examples include failing to acknowledge team members
in group work and failing to acknowledge previous studies used in a research report.
Similarly, research misconduct consists of four items addressing misconduct
that falls into neither the cheating nor the plagiarism category, such as fabrication of lab
results and fabrication of research data.
The three-dimensional construct found in this study conforms to previous reports that
academic dishonesty is a multi-dimensional construct (Roig and DeTommaso 1995; Ferrari
2005; Iyer and Eastman 2006; Imran 2010). For instance, Iyer and Eastman reported that
academic dishonesty comprised four factors, while Roig and DeTommaso argued for two
factors. Also, both cheating and plagiarism have appeared prominently in many studies where
dimensionality was examined (Ferrari 2005; Iyer and Eastman 2006; Imran and Sahari 2013).
However, research misconduct as a component has been less pronounced in
the literature (Imran 2010).
Regarding the psychometrics of the revised AIS, this study established strong
evidence of convergent and discriminant validity, as well as internal consistency of the
measures, using a series of statistical tests. However, unlike the original AIS, in which
academic dishonesty consisted of four dimensions, the present study yielded a three-dimensional
construct. Although the psychometric estimates reported for the
revised AIS were not very different from the former ones, the strength of the
refined AIS lies in the additional tests of convergent and discriminant validity. Taken
together, the outcomes of this study indicate that different approaches to measuring
academic dishonesty yield not only the same results (convergent validity), but that each
sub-construct is unique and captures some phenomena which others do not (discriminant
validity), and that the sub-constructs exhibit some degree of reliability in their measures
(construct reliability) (Hair et al. 2010).
Lastly, the three-dimensional AIS model exhibits evidence of gender invariance in
the assessment of undergraduates' academic misconduct; that is, evidence that the
variability in students' involvement in academic dishonesty is not moderated by
gender. Until now, the moderating effect of gender has been little investigated, especially
with the structural equation modeling approach. Of course, findings abound on gender
disparity in self-reported academic dishonesty. What appears relatively scarce in the
literature is evidence that one model of academic dishonesty is useful and
adequate for analyzing students' academic dishonesty across different independent
groups of the student populace. Such evidence is possible only by showing that
the specified model exhibits a zero moderating effect for the particular target
independent groups.
This concern is more compelling in view of the inconsistencies characterizing the level
of involvement of male and female students in academic misconduct. While most
researchers reported that more males than females were involved in this act (Jendrek
1992; McCabe and Trevino 1997; Whitley et al. 1999; Iyer and Eastman 2006; Nazir
et al. 2011); others reported the opposite, i.e. more females than males were involved
(Leming 1980; Lambert et al. 2003). Still, many other findings have been largely
inconclusive (Thoma 1986; Jordan 2001; Malone 2006; Wotring 2007; Olasehinde
2008).
Hence, the evidence of gender invariance in this study is a major contribution to the
literature in the field, with important implications for theory and practice in higher
education. For instance, although male and female students may vary in their level of
academic dishonesty, this does not imply variation in the underlying
factors characterizing academic dishonesty. As the present results show, these
underlying dimensions are the same across the gender groups of the
respondents. The upside of this result is that non-gender-discriminating approaches
may work well in measures geared to curb recurrent academic dishonesty and
foster academic integrity among students of higher education. Thus, this result is
unique and insightful.
Limitations
Some limitations of this research should be noted. First, caution needs to be applied
regarding the generalizability of the findings, because only undergraduates
from two federal-government-owned universities in Nigeria and one public university in
Malaysia were included in the study; many other undergraduates, especially from
state and private universities, were not captured. Secondly, it would
have been ideal to select more universities in each of the two countries, to allow
wider coverage of the study area, but time and resource constraints did not permit this.
Besides, the sampling approach also raises some concerns. Although the quota
sampling method ensures that some differences are represented in the sample
(Neuman 2006), the technique is non-random and cannot guarantee equal
representation of subjects in the population. Notwithstanding, the technique is most
suitable where the researcher is unable to access a comprehensive list for the study's
sampling frame, as was the case in this study. Lastly, although efforts were made to
reduce the potential response bias typical of survey instruments, given the sensitivity
of the subject matter some students may have chosen not to respond, or not to respond
truthfully, because of limited confidence in the anonymity of the results.
Consequently, future studies are urged to consider a bigger sample size using the
randomization principle, which will encourage wider generalization of findings.
Also, more studies are required to replicate this work and explore possible invariance
effects of the revised AIS model for additional groups, especially other
demographic variables such as nationality, age, and CGPA.
References
Arhin, A.O. (2009). A pilot study of nursing student’s perceptions of academic dishonesty: A generation Y
perspective. ABNF Journal, 20 (1), 17.
Bates, I. P., Davies, J. G., Murphy, C., & Bone, A. (2005). A multi-faculty exploration of academic dishonesty.
Pharmacy Education, 5(1), 69–76.
Brown, B. S. (1996). A comparison of the academic ethics of graduate business, education, and engineering
students. College Student Journal, 30, 294–301.
Brown, B. S. (2000). The academic ethics of graduate business students: 1993 to 1998. The Journal of Applied
Business Research, 16, 105–112.
Brown, B. S., & Weible, R. (2006). Changes in academic dishonesty among MIS majors between 1999 and
2004. Journal of Computing in Higher Education, 18(1), 116–134.
Browne, M. W., & Cudeck, R. (1993). Alternative ways of assessing model fit. In K. A. Bollen & J. S. Long
(Eds.), Testing structural equation models (pp. 136–162). Sage: Newbury Park.
Byrne, B. M. (2010). Structural equation modeling with AMOS: Basic concepts, applications, and programming
(2nd ed.). London: Routledge.
Cheung, G. W., & Rensvold, R. B. (2002). Evaluating goodness-of-fit indexes for testing measurement
invariance. In B. M. Byrne (2010), Structural equation modeling with AMOS: Basic concepts, applications,
and programming (2nd ed.). London: Routledge.
Colton, D., & Covert, R. (2007). Instrument construction, validity and reliability. In letter to the editor (2012),
significance of attending to instrument planning and validation stages in nursing discipline. Asian Nursing
Research, 6, 82–83.
Creswell, J. W. (2008). Educational research: Planning, conducting, and evaluating quantitative and qualitative
research (3rd ed., ). Upper Saddle Creek, NJ: Pearson Education.
Cudeck, R., & Browne, M. W. (1983). Cross-validation of covariance structures. Multivariate Behavioral
Research, 18, 147–167.
Dawkins, R. L. (2004). Attributes and statuses of college students associated with classroom cheating on a small-
sized campus. College Student Journal, 38(1), 116–129.
Ferrari, J. R. (2005). Imposter tendencies and academic dishonesty: Do they cheat their way to success? Social
Behavior and Personality, 33(1), 11–18.
Field, A. (2005). Discovering statistics using SPSS (2nd ed., ). New Delhi: Sage Publications.
Fields, D. L. (2002). Taking the measure of work: A guide to validated scales for organizational research and
diagnosis. USA: Sage Publications.
Hair, Jr, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis (7th ed.). Upper
Saddle River: Pearson Education International.
Imran, A. M. (2010). IIUM students’ perceptions of academic integrity practices. A dissertation submitted in
partial fulfilment of the requirements for the degree of Master of Education.
Imran A. M., & Sahari, M. (2013). Predicting the underlying factors of academic dishonesty among undergrad-
uates in public universities: a path analysis approach. Journal of Academic Ethics, 11(2), 103–120.
Iyer, R., & Eastman, J.K. (2006). Academic dishonesty: are business students different from other college
students? Journal of Education For Business, 101–110.
Jendrek, M. P. (1992). Students’ reactions to academic dishonesty. Journal of College Student Development, 33,
260–273.
Jordan, A. E. (2001). College student cheating: The role of motivation, perceived norms, attitudes and knowledge
of institutional policy. Ethics and Behavior, 11(3), 233–247.
Karlins, M., Michaels, C., & Podlogar, S. (1988). An empirical investigation of actual cheating in a large sample
of undergraduates. Research in Higher Education, 29, 359–364.
Kidwell, L. A., Wozniak, K., & Laurel, J. P. (2003). Student reports and faculty perceptions of academic
dishonesty. Teaching Business Ethics, 7, 205–214.
Kline, R. B. (2005). Principles and practice of structural equation modeling (2nd ed., ). New York: The Guilford
Press.
Lambert, E.G., Hogan, N.L., & Barton, S.M. (2003). Collegiate academic dishonesty revisited: what have they
done, how often have they done it, who does it, and why did they do it? Electronic Journal of Sociology,
1–28. ISSN: 1198 3655.
Leming, J. S. (1980). Cheating behavior, subject variables, and components of the internal- external scale under
high and low risk conditions. Journal of Educational Research, 74(2), 83–87.
Lim, V. K. G., & See, S. K. B. (2001). Attitudes toward, and intentions to report, academic cheating among
students in Singapore. Ethics & Behavior, 11(3), 261–274.
MacCallum, R. C., Roznowski, M., & Necowitz, L. B. (1992). Model modifications in covariance structure
analysis: the problem of capitalization on chance. Psychological Bulletin, 111, 490–504.
MacCallum, R. C., Browne, M. W., & Sugawara, H. M. (1996). Power analysis and determination of sample size
for covariance structure modeling. Psychological Methods, 1, 130–149.
Malone, F. L. (2006). The ethical attitude of accounting students. Journal of American Academy of Business
Cambridge, 8(1), 142–146.
McCabe, D. L., & Trevino, L. K. (1993). Academic dishonesty: honor codes and other contextual influences.
Journal of Higher Education, 64, 520–538.
McCabe, D.L., & Trevino, L.K. (1997). Individual and contextual influences on academic dishonesty: A
multicampus investigation. Research in Higher Education, 38(3), 379-397.
Nazir, M.S., Aslam, M.S., & Nawas, M.M. (2011). Can demography predict academic dishonest behaviors of
students? A case of Pakistan. International Journal of Education Studies, 4(2), 208–218. ISSN 1913-9020
E-ISSN 1913-9039 DOI:10.5539/ies.v4n2p208.
Nelson, T., & Shaefer, N. (1986). Cheating among college students estimated with the randomized-response
technique. College Student Journal, 20(Fall), 321–325.
Neuman, W. L. (2006). Social research methods qualitative and quantitative approaches (6th ed., ). Boston:
Allyn and Bacon.
Newstead, S.E., Franklyn-Stokes, A., & Armstead, P. (1996). Individual differences in student cheating. Journal
of Educational Psychology, 88, 229–241.
Olasehinde, W.O. (2008). Lecturer and student sensitivity to academic dishonesty intervention approaches in the
University of Ilorin, Nigeria. Education Research Review, 3(11), 324–333. Accessed 5 July, 2011: http://
www.academicjournals.org/ERR.
Pallant, J. (2007). SPSS survival manual: A step by step guide to data analysis using SPSS for Windows (3rd
edn.). England: McGraw Hill.
Pavela, G. (1978). Judicial review of academic decision-making after Horowitz. School Law Journal, 55(8), 55–
75.
Rawwas, M. Y. A., & Isakson, H. R. (2000). Ethics of tomorrow's business managers: the influence of personal
beliefs and values, individual characteristics and situational factors. Journal of Education for Business,
75(6), 321–330.
Roig, M., & DeTommaso, L. (1995). Are college cheating and plagiarism related to academic procrastination?
Psychological Reports, 77, 691–698.
Swift, C. O., & Nonis, S. (1998). When no one is watching: cheating behaviors on projects and assignments.
Marketing Education Review, 8, 27–36.
Tabachnick, B.G., & Fidell, L.S. (2007). Using multivariate statistics (5th ed.). Boston: Pearson Education.
Thoma, S. (1986). Estimating gender differences in the comprehension and preference of moral issues.
Development Review, 6(1), 165–180.
Whitley, B. E., Nelson, A. B., & Jones, C. J. (1999). Gender differences in cheating attitudes and classroom
cheating behavior: A Meta analysis. Sex Roles, 41(9/10), 657–680.
Wotring, E. K. (2007). Cheating in the community college: generational differences among students and
implications for faculty. Inquiry, 12(1), 5–13.