
Decision Sciences Journal of Innovative Education
Volume 00, Number 00, September 2017
© 2017 The Authors. Decision Sciences Journal of Innovative Education published by Wiley Periodicals, Inc. on behalf of Decision Sciences Institute.
Printed in the U.S.A.

EMPIRICAL RESEARCH

A Conceptual Framework for Detecting Cheating in Online and Take-Home Exams
Kelwyn A. D’Souza†
School of Business, Hampton University, East Queen Street, Hampton, VA 23668,
e-mail: [email protected]

Denise V. Siegfeldt
Florida Institute of Technology, Nathan M. Bisk College of Business, Department of Extended
Studies, Hampton Roads Site, Fort Eustis, VA 23604, e-mail: [email protected]

ABSTRACT
Selecting the right methodology to use for detecting cheating in online exams requires
considerable time and effort due to a wide variety of scholarly publications on academic
dishonesty in online education. This article offers a cheating detection framework that
can serve as a guideline for conducting cheating studies. The necessary theories and
related statistical models are arranged into three phases/sections within the framework
to allow cheating studies to be completed in a sufficiently quick and precise manner.
This cheating detection framework includes commonly used models in each phase and
addresses the collection and analysis of the needed data. The model’s level of com-
plexity ascends progressively from a graphical representation of data and descriptive
statistical models to more advanced inferential statistics, correlation analysis, regres-
sion analysis, and the optional comparison method and the Goldfeld-Quandt Test for
heteroscedasticity. An instructor receiving positive results on the possibility of cheating
in Phases 1 or 2 can avoid using more advanced models in Phase 3. Tests conducted
on sample courses showed that models in Phases 1 and 2 of the proposed framework
provided results effectively for over 70% of the test groups, saving users further time and
effort. High-tech systems and low-cost recommendations that can mitigate cheating are
discussed. This framework will be beneficial in guiding instructors who are converting
from the traditionally proctored in-class exam to a take-home or online exam without
authentication or proctoring. In addition, it can serve as a powerful deterrent that will
alleviate the concerns that an institution’s stakeholders might have about the reliability
of their programs.
Subject Areas: Cheating Detection Framework, Cheating Mitigation, De-
scriptive and Inferential Statistical Modeling, Goldfeld-Quandt Test, Online
Exam Cheating, Take-Home Exam Cheating.

† Corresponding Author.
This is an open access article under the terms of the Creative Commons Attribution-NonCommercial
License, which permits use, distribution and reproduction in any medium, provided the original work is
properly cited and is not used for commercial purposes.


INTRODUCTION AND BACKGROUND


Academic degrees conferred by institutions of higher education are generally based
on the student performance displayed for a set of required courses. Exams are com-
monly used to evaluate this performance, and relevant grades are assigned. Stellar
grades are a passport to upward mobility, so there can be considerable pressure
placed on students to achieve higher exam scores through cheating (McCabe,
Treviño, & Butterfield, 2001). University administrators and faculty are concerned
about academic dishonesty, as the practice undermines the academic integrity of
the degrees that schools offer their students (Barron & Crooks, 2005; Faurer, 2013;
Hemming, 2010; Lanier, 2006). It is crucial that faculty conduct their exams under
strictly controlled measures to ensure that all students actually earn the grade that
reflects their true level of performance (Cluskey, Ehlen, & Raiborn, 2011; McCabe
et al., 2001; Olt, 2002).
The University of Colorado at Denver and its College of Liberal Arts and
Science (CLAS) considers cheating to be academic dishonesty that “involves the
possession, communication, or use of information, materials, notes, study aids,
or other devices not authorized by the instructor in an academic exercise, or (in)
communication with another person during such an exercise” (CLAS, 2016, p. 1).
Although there appears to be a difference between the definition of cheating and
actual academic dishonesty, both terms appear regularly in the literature and are
used interchangeably in this article.

Cheating Detection in Proctored and Unproctored Exams


Cheating that occurs in proctored in-class exams as well as unproctored online
exams has been studied widely in the last decade and is well documented in the
literature (Harmon & Lambrinos, 2008; Hemming, 2010; Hollister & Berenson,
2009; McCabe et al., 2001; Olt, 2002; Shon, 2006). The general perception is
that unproctored online exams will demonstrate higher incidences of cheating
(Kennedy, Nowak, Raghuraman, Thomas, & Davis, 2000; King, Guyette, & Pi-
otrowski, 2009). Nearly 75% of accounting students that were surveyed at a uni-
versity in Florida perceive that it is easier to cheat in an online exam than in a
proctored in-class exam (King et al., 2009).
Studies on cheating have been carried out using surveys and/or interviews
with students (Bowers, 1964; McCabe et al., 2001). However, a survey faces
the challenges of coverage, measurement, nonresponse, or incorrect responses
(deLeeuw, Hox, & Dillman, 2008). Due to these issues, results of studies that have
used surveys to examine cheating behavior must be interpreted carefully (Fask,
Englander, & Wang, 2014). Indeed, few students will want to admit to cheat-
ing, even when completing anonymous self-reporting surveys. This assertion is
supported by results from an anonymous self-reporting survey conducted for this
article. Approximately 100 surveys were distributed to students, and 77 of them
were completed and returned. Among those returning the survey, 13% agreed and
4% strongly agreed that cheating took place in their online exams. In contrast,
a larger number of students disagreed (40%) and strongly disagreed (19%) that
cheating took place. Due to these documented problems and the higher cost in-
volved in carrying out surveys, more researchers are moving toward empirical
methods to study the nature and incidence of cheating (Fask, Englander, & Wang,
2015; Harmon & Lambrinos, 2008; Hollister & Berenson, 2009).
Cheating in online exams is usually detected by using self-reporting surveys,
empirical methods, and a combination of quantitative and qualitative methods.
However, results from earlier studies are mixed. Some studies have reported cheat-
ing (Fask et al., 2015; Harmon & Lambrinos, 2008; King et al., 2009) while others
registered no cheating (Greenberg et al., 2009; Hollister & Berenson, 2009; Peng,
2007; Werhner, 2010). Shen, Chung, Challis, and Cheung (2007) found no sig-
nificant performance difference between the traditional classes and online classes,
although the traditional class performed slightly better than the online class.
Academic dishonesty has been studied under different exam-course settings,
including: (1) unproctored online (OL) exams in online (OL) courses (Hollister
& Berenson, 2009; King et al., 2009); (2) proctored in-class (IC) exams in OL
courses (Hollister & Berenson, 2009; Werhner, 2010); (3) proctored IC exams in
traditional IC courses (Shon, 2006); (4) unproctored take-home (TH) exams in IC
courses (Andrada & Linden, 1993; Marsh, 1984); and (5) OL exams or quizzes in
IC courses (Fask et al., 2015; Peng, 2007). This study consisted of OL exams in
IC courses and TH exams in IC courses.

Cheating Detection Empirical Models


Previous empirical studies applied a variety of tools ranging from simple graphical
and descriptive statistical models to more advanced inferential statistics, analysis
of variance (ANOVA), correlation, regression analysis, the Goldfeld-Quandt Test
(GQT) for heteroskedasticity, and comparison of predicted with observed exam
scores (Harmon & Lambrinos, 2008).
Cheating detection begins with the application of a plain two-dimensional
graph and descriptive statistics. These are essential for establishing a visual under-
standing of the concept and identifying patterns without the stringent assumptions required
by statistical models. Graphs are used by researchers to organize raw
unorganized data into a form that can be presented or used as inputs into statistical
models. They are often used singly or in combination with the other statistical
models to compare scores on proctored in-class and online exams and to compute
the mean and standard deviation of the test parameters. Extensive use of summary
statistics to analyze students’ performance measures and self-admitted responses
to cheating have been used by many researchers, including McCabe et al. (2001);
Harmon and Lambrinos (2008); Hollister and Berenson (2009); and Dobkin, Gil,
and Marion (2010). In addition to simple graphs (e.g., frequency polygons, and
line graphs), this article introduces a graphical cheating indicator to check whether
the differences in the mean sample scores of online and in-class exams were due
to cheating or something else. This is discussed later in the article.
Researchers have extensively applied hypothesis testing to determine if cheat-
ing took place in online exams by testing whether there is a significant difference between
the scores of an unproctored OL exam and a proctored IC exam (Fask et al., 2015;
Harmon & Lambrinos, 2008; Hollister & Berenson, 2009; Peng, 2007; Werhner,
2010). If the exam scores between OL and IC or TH and IC are significantly
different, then the differences denote that chance causes can be ruled out and we
can conclude that cheating may have taken place. Due to relatively small sample
sizes, sometimes less than 30, these researchers have used the student t-test for the
hypothesis test.
More advanced correlational and regression analysis models have also been
applied to detect cheating. An approach presented by Harmon and Lambrinos
(2008) utilized correlation and regression analysis as well as variance analysis to
detect cheating in an online macroeconomics exam. A regression model was used
to predict their summer 2004 and summer 2005 final exam scores from student-
related explanatory variables. According to Harmon and Lambrinos (2008, p. 119),
“the courses, although offered a year apart, were almost identical in structure and
content.” Among several variables that were considered by Harmon and Lambrinos
(2008), GPA was the only significant variable that reflected a student’s ability.
Moreover, they reasoned that the independent “human capital variable such as
GPA” has a high relationship (coefficient of determination, R2 ) to the dependent
variable (exam scores) due to the student’s high performance in the proctored in-
class exam, rather than to cheating (Harmon & Lambrinos, 2008, p. 121). Cheating
would result in a low value for R2 . Similarly, Fask et al. (2015, p. 2) introduce
student GPA and class attendance as “mastery (independent) variables” that could
reasonably increase the performance of the student and correspondingly detect if
cheating has occurred.
The OPM 450 and OPM 350 courses offered in academic years 2014-2015
and 2015-2016 were a part of this study. Missed class days (excused or unexcused)
and percentage of assignments submitted up to the exam date are considered as in-
dependent variables that are responsible for the students’ high in-class exam scores
(dependent variable). Due to privacy concerns, student GPA was not included as
an independent variable, although it is highly related to student performance and
was considered by other researchers (e.g., Fask et al., 2015; Harmon & Lambrinos,
2008).
The GQT proposed by Goldfeld and Quandt (1965) for testing homoscedas-
ticity was adapted to test for heteroskedasticity (unequal variance). It is used by
researchers to detect online cheating by examining the variance (σ²) of the residual
errors (εi) from the regression models (Fask et al., 2015; Harmon & Lambrinos,
2008). The limitations of the GQT are discussed in more detail in the Conclusions
and Limitations section. Furthermore, comparison of the predicted mean values
with the observed values obtained from the regression model is an additional
method to detect cheating (Harmon & Lambrinos, 2008).
These statistical models can indicate the possibility of cheating or no cheating
in an objective exam. It is difficult to identify and prove that specific students have
cheated based on statistical models (Marx & Longer, 1986). Alerted instructors
can increase proctoring and can keep a closer observation of the students, change
the exam schedule, modify the exam format, and use other interventions provided
in the Interventions for Mitigating Cheating section of this article.

Proposed Study
A large number of researchers drawn to this field have generated a wide variety of
scholarly efforts on the academic dishonesty occurring in online exams. Therefore,
an instructor who is planning a cheating study will spend considerable time and
effort searching for the most relevant methodology. There is an urgent need for
a preplanned and well-organized procedure for instructors and researchers to use
in detecting cheating in online or on other unproctored exams (e.g., take-home
exams).
This article provides a conceptual framework for planning and organizing a
sustained cheating detection and mitigating program. The framework emphasizes
the standard means of cheating detection using related concepts from the auditing
profession. Cheating in academics is similar to financial fraud that is detected
during the audit of a company’s financial statements by trained auditors. The “fraud
triangle” discussed by Ramos (2003, p. 28) is the conceptual framework used as a
guide in detecting financial fraud: when fraud occurs, one or more of three conditions
(incentive/pressure, opportunity, and rationalization/attitude) is satisfied.
Subsequently, Becker, Connolly, Lentz, and Morrison (2011) matched the three
sides of the “fraud triangle” (Incentive to cheat, an opportunity to cheat, and
rationalization to cheat) with cheating in academics, producing the business fraud
triangle that they used as a framework to uncover academic dishonesty among
business students. The tested business fraud triangle (Becker et al., 2011) has been
adopted as the conceptual framework for the cheating detection framework (CDF).
The conceptualization of the framework occurred during the investigations
into alleged cheating complaints about online exams at a mid-size university in
the Commonwealth of Virginia and included an extensive literature review. The
necessary assumptions, theories, and related graphical and statistical models are
arranged in three phases/sections within the main body of the framework, allowing
cheating studies to be completed in a sufficiently quick and precise manner. The
framework offers instructors the choice of commonly used graphical charts and
statistical models to select in each phase and addresses the collection and analysis of
necessary data. Tests conducted during this study on sample business management
courses showed that the proposed CDF was able to detect cheating in Phases 1 and
2 for over 70% of the test groups, thus saving users further time and effort.

CHEATING DETECTION FRAMEWORK CONCEPTUALIZATION


Study Justification
The OPM 450: Operations Management and the OPM 350: Transportation and
Logistics Management undergraduate courses under study had been taught in
face-to-face classes during the past, with proctored in-class exams for the former
and take-home exams for the latter. The course codes and names published in this
study were altered to safeguard the department’s privacy policies. In fall 2015, a
decision was made to continue with IC courses but to administer all assignments
and three out of the four exams OL using the Blackboard Course Management
platform. It is estimated that this technological approach saves almost five classes
or more than ten percent of the class meetings in a regular semester. This savings in
“seat time” (Peng 2007, p. 10) allows the instructor to cover extra course material.
The OL exams provide flexibility for traveling students, especially athletes who
can take the online exams at any time or place. It also gives more time for faculty
Figure 1: Attendance plot, in-class Exam 1. [Exam 1 scores (EXAM I IC) plotted against missed class days, with a fitted linear trend line.]

to extend their office hours for advising students and to pursue their scholarly
activities.
The rationale for the conceptual framework offered in the present study
was the sudden rise in exam scores for the OPM 450 course when these exams
were transformed from proctored in-class exams to unproctored online exams.
According to Siegmann, Moore, and Aquino (2014), higher scores in online exams
could be due to more time being available for instructor-student interactions,
a less stress-filled exam environment, flexible timing, and students doing more
reading for unknown answers. These findings were supported by results from the
anonymous self-reporting survey addressed in the Introduction and Background
section, based on students who completed the OPM 450 and OPM 350 online
exams. Among those students that participated in the survey, 66% expressed a
preference for online exams because of the flexible timing to complete their exams
(77%), and their ability to take the exam when fully prepared (95%).
In this study, the instructor suspected that the higher online exam scores for
OPM 450 were due to cheating because students with poor attendance records were
getting abnormally high exam scores. This outcome contradicted results of Dobkin
et al. (2010), which reported that attendance at class meetings has a positive impact
on exam performance. The attendance plots graphed in Figures 1 and 2 for OPM
450 proctored in-class exams and unproctored online exams, respectively, visi-
bly demonstrate the impact of attendance on the scores for both exams. Figure 1
shows a general pattern in which proctored in-class exam scores realistically decrease,
closely following the standard attendance line (negative slope), as missed class days
increase, hence supporting the results of Dobkin et al. (2010).
Figure 2 shows the pattern of online exam scores in comparison to a standard
attendance line. The exam scores increase (positive slope) unrealistically as the
Figure 2: Attendance plot, online Exam 1. [Exam 1 scores (%EXAM 1) plotted against missed class days, with a fitted linear trend line.]

corresponding number of missed class days increase, suggesting the possibility of


cheating.

Study Design
The development and testing of the cheating detection framework (CDF) were
conducted in two parts. In Part 1, each phase of the CDF was assigned appropriate
graphical representation and statistical models in ascending order of complexity. In
Part 2, the functionality of the framework was tested on groups of sample business
courses. A group consisted of an in-class (IC) or take-home (TH) exam given in
the fall 2014 or spring 2015 semester of an academic year and an online (OL) exam
given during the same semester of the following academic year (i.e., fall 2015 and
spring 2016).
A total of seven groups were used in Parts 1 and 2 of this study. Group 1
was used in Part 1 to illustrate the composition and working of the framework, and
Groups 2-7 were similarly organized and were used to test the framework in Part 2.
A detailed composition of all seven groupings is provided in later sections (Table 3).
The equivalence requirements for assignments, quizzes, and exams suggested
by Fask et al. (2014) were strongly enforced within the time, costs, and legal re-
straints. The seven groups, each containing a set of two examinations, had a mix of
students from across campus. The OPM 450 course was a Management major re-
quirement, but it was popular in other departments across campus. Every semester,
the study population consisted of primarily senior level undergraduate students
with a management major or minor from departments including architecture, avi-
ation, journalism and mass media, psychology, and others. Similarly, OPM 350
was populated by juniors from management, aviation, and other students seeking
a major or minor in management. Random registration for these courses across
campus resulted in a class of students with approximately similar characteristics
every semester. The OPM 450 and 350 courses were taught by the same professor
using an identical syllabus, textbook and notes, PowerPoint presentations, exam
format (with different questions), and assignments in each course. The OL and
IC exams consisted of multiple-choice, true/false, and fill-in-the-blank questions,
which could be graded by Blackboard Course Management Systems. The difficulty
level of questions was maintained the same in both exams but the wording and
parameters were slightly modified to get different answers.
To maintain a consistent computer grading system, there is a need for
objective-type questions consisting of a mix of multiple-choice, true/false, and
fill-in-the-blank questions. The smaller descriptive quantitative and qualitative
questions can be included by breaking the question into subquestions whose an-
swers are sequentially entered into the course website in precise single or multiple
numerical value(s) or word(s). For the larger qualitative essay questions, it is highly
recommended that instructors use an anti-plagiarism tool to detect cheating.

Part 1: Development of the Cheating Detection Framework (CDF)


Group 1, consisting of Exam 1 scores for the fall 2014 OPM 450 proctored IC
exam and fall 2015 OPM 450 OL exam, was used to illustrate the development
and operation of the three-phase CDF. (Refer to Figure 3.)
Each phase of the CDF is discussed separately in the following subsections.

Phase 1: Preliminary tests


Exam 1 scores and attendance records were used as inputs for developing the
graphical charts and descriptive statistics models. An examination of the attendance
plots of both OPM exams in Figure 1 and 2 generally shows that OL students with
poor attendance were scoring higher on their exams, contradicting the results of
Dobkin et al. (2010). The height of the OL exam scores’ frequency polygon in
Figure 4 was substantially higher compared to the IC exam scores. This was
just a visual observation of the frequency polygon and line graphs. The Chi-
square goodness of fit test (Pett, 2016) could have been used to determine if
the differing size of the frequency distributions for the OL and IC or TH data
sets were significantly different or just due to chance. Instead, a simple graphical
cheating indicator displayed in Figure 5, together with descriptive statistics, was
used as an alternative. This cheating indicator displays the relationship between
the percentage of difference between the OL and IC mean scores (factor A) and the
p value (probability) from the test of hypothesis. The p value at which the decision
switches from “no cheating” to “cheating” was set at p = .06 (a little over the
.05 level), which corresponded to the base factor A value of 15%. In the case of
Group 1, which is being discussed here, the factor A value for the mean OL and
IC exam scores (86 and 72) was 19%, which falls in the cheating zone.
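As a concrete illustration of the Phase 1 arithmetic, the short sketch below computes factor A from two score arrays and compares it against the 15% base factor; the array contents and variable names are hypothetical placeholders rather than the study's data.

```python
# Minimal sketch of the Phase 1 cheating indicator (factor A).
# The score arrays are hypothetical placeholders, not the study's data.
import numpy as np

ic_scores = np.array([72, 65, 80, 71, 68, 77, 60, 83])  # proctored in-class Exam 1 scores
ol_scores = np.array([86, 90, 78, 88, 84, 92, 81, 89])  # unproctored online Exam 1 scores

mu_ic, mu_ol = ic_scores.mean(), ol_scores.mean()

# Factor A% = (|mu_IC - mu_OL| / mu_IC) * 100, compared against the 15% base factor.
factor_a = abs(mu_ic - mu_ol) / mu_ic * 100
zone = "cheating zone" if factor_a > 15 else "no-cheating zone"
print(f"mean IC = {mu_ic:.1f}, mean OL = {mu_ol:.1f}, factor A = {factor_a:.1f}% ({zone})")
```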
Therefore, based on the positive slope results from the attendance plots in
Figure 2, the height of the OL frequency polygons (Figure 4), and results from the
cheating indicator graph (Figure 5), Phase 1 of the CDF suggests that cheating is
strongly possible. It is recommended that the movement to higher phases of the
CDF be stopped and a cheating mitigating strategy be implemented, as discussed
later. Although it is not necessary to go to the next phase, an instructor may
decide to move to Phase 2 to confirm the results from Phase 1 and conduct further
analysis.
Figure 3: Schematic outline of the cheating detection framework (CDF).

[Flowchart summary. The instructor suspects cheating in online exams and provides the input data. Phase 1 (graphs and descriptive statistics): develop the OL and IC frequency polygons and attendance plots and compute Factor A from the descriptive statistics. If the OL frequency polygon is higher than the IC polygon, the attendance plot slope is positive, and Factor A > 15%, stop: cheating is likely; refer to the interventions to mitigate cheating. Otherwise, proceed to Phase 2 (test of hypothesis, α = .05): test H0: μOL = μIC against H1: μOL ≠ μIC. If p ≤ .05, reject H0 and stop: cheating is likely. Otherwise, proceed to Phase 3 (correlation and regression analysis): examine the number of significant student motivation and effort variables with high correlation. If the R² value for the OL exam scores is lower, stop: cheating is likely. Otherwise, apply the optional tests (the Goldfeld-Quandt Test for heteroskedasticity and/or the comparison test). If F > Fc and the comparison test fails, there is a significant chance that cheating has occurred; if not, cheating is very unlikely. End.]
Phase 2: Testing at the significance level


In Phase 2, the hypothesis test is recommended to investigate whether the mean
scores in OL and IC exams are the same or significantly different. The null and
alternate hypotheses are stated as
H0: μOL = μIC,
H1: μOL ≠ μIC.
The level of significance denoted by α must be kept as small as possible to
reduce the probability of rejecting the null when it is true, a Type I error, which is
equivalent to stating that cheating takes place when it really does not take place.
The values of α used in research are generally, α = .001, .01, and .05. The value of
α = .05 is commonly used in this type of study, and some software packages apply
Figure 4: Phase 1 graphical method for Group 1. [Frequency polygons of the fall 2015 OL (FREQ_F15-OL) and fall 2014 IC (FREQ_F14-IC) Exam 1 scores. Mean OL Exam 1 score = 86; mean IC Exam 1 score = 72; Factor A = 19% (cheating zone). Test of hypothesis: reject H0 (p = .002); strong chance of cheating.]

the .05 value by default. The t-test for two sample means with unequal variance is
run on the data and the computed t-value is subject to the following decision rule
(α = .05, two-tailed):
If |t| > |tc|, or if p < .05, the null is rejected, and the alternate
hypothesis is supported. (The tc value in the criterion corresponds to the critical
value associated with the degrees of freedom and the chosen α level.) In the Group 1
case, t = –3.37, t.05, 35 = –2.03, p = .0019. Therefore, the null hypothesis is rejected,
and we conclude that the mean scores in OL and IC exams were significantly
different. With the mean OL exam score being higher, we conclude that cheating

Figure 5: Cheating indicator. [p values plotted against Factor A% = [(|μIC − μOL|)/μIC] × 100, with a fitted linear trend line. Base factor = 15%: Factor A < 15% indicates no cheating; Factor A > 15% indicates cheating.]
took place in the OL exams. Based on the framework, it is recommended that the
study is stopped and the instructor plan strategies for mitigating cheating. Although
unnecessary, some instructors may opt to proceed to Phase 3 to reconfirm the results
from Phase 2.
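For instructors who prefer to script the Phase 2 calculation, the sketch below runs a two-sample t-test with unequal variances (Welch's test) using SciPy; the score arrays are hypothetical placeholders, and the decision rule mirrors the one stated above.

```python
# Minimal sketch of the Phase 2 hypothesis test: two-sample t-test for means
# with unequal variances (Welch's test). The score arrays are hypothetical.
import numpy as np
from scipy import stats

ol_scores = np.array([86, 90, 78, 88, 84, 92, 81, 89])  # unproctored online exam scores
ic_scores = np.array([72, 65, 80, 71, 68, 77, 60, 83])  # proctored in-class exam scores

t_stat, p_value = stats.ttest_ind(ol_scores, ic_scores, equal_var=False)

alpha = 0.05
if p_value < alpha:
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}: reject H0; cheating is possible")
else:
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}: do not reject H0; no evidence of cheating")
```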

Phase 3: Advanced statistical models


In Phase 3, cheating is detected by examining the multiple regression results for
each exam score within the group. The three independent variables included in this
study were attendance, assignments completed, and student gender. Class atten-
dance and submission of self-completed assignments were performance-associated
independent variables that were strongly related to the dependent variable (exam
scores). Gender was also included since the authors intended to test the finding
by researchers that cheating among women is on the rise (McCabe et al., 2001).
A cumulative GPA of each student and other performance-related independent
variables were not included in this study but can be added if readily available.
Two multiple regression models developed for Group 1’s exam scores are
presented in the following equations (1) and (2). The dependent variable Yij mea-
sures the total contribution of the three independent variables, where index “i” =
1, 2, 3, . . . , nth student. The index “j” = OL or IC exam.
 
Yi,OL = β0,OL + β1,OL(GENDERi,OL) + β2,OL(ABSi,OL) + β3,OL(ASSIGNi,OL),   (1)
Yi,IC = β′0,IC + β′1,IC(GENDERi,IC) + β′2,IC(ABSi,IC) + β′3,IC(ASSIGNi,IC).   (2)
where Yi,OL is score for student “i” in online Exam 1; GENDERi, OL is gender of
student “i” in OL Exam 1 (1 = male, 0 = female); ABSi, OL is number of days
student “i” absent (excused and unexcused) from the class up to Exam 1; ASSIGNi,
OL is average percentage of assignments completed by student “i” up to OL Exam
1; Yi, IC is score for student “i” in proctored IC Exam 1; GENDERi, IC is gender of
student “i” in proctored IC Exam 1 (1 = male, 0 = female); ABSi, IC is number
of days student “i” absent from class up to Exam 1 (excused and unexcused); and
ASSIGNi, IC is average percent (%) of assignments completed by student “i” up to
IC Exam 1.
To improve the linearity, equations (1) and (2) are transformed into natural
log-linear (LN) models (Murray, 2006) as follows:
   
LN(Yi,OL) = β0,OL + β1,OL(GENDERi,OL) + β2,OL(ABSi,OL) + β3,OL(ASSIGNi,OL),   (3)
LN(Yi,IC) = β′0,IC + β′1,IC(GENDERi,IC) + β′2,IC(ABSi,IC) + β′3,IC(ASSIGNi,IC).   (4)

The summarized results of the linear regression analysis are shown in Table 1.
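A minimal sketch of how a log-linear model such as equation (3) or (4) can be fitted is shown below, using the statsmodels formula interface; the DataFrame and its column names (SCORE, GENDER, ABS, ASSIGN) are assumed for illustration and simply mirror the variable definitions above.

```python
# Minimal sketch of fitting a Phase 3 log-linear regression such as equation (3)
# or (4). The DataFrame and its column names are assumptions for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "SCORE":  [86, 90, 78, 88, 84, 92, 81, 89, 72, 95],    # exam scores (hypothetical)
    "GENDER": [1, 0, 1, 0, 1, 1, 0, 0, 1, 0],               # 1 = male, 0 = female
    "ABS":    [0, 1, 2, 0, 3, 1, 0, 2, 4, 0],               # missed class days up to the exam
    "ASSIGN": [100, 90, 80, 95, 60, 85, 100, 70, 50, 100],  # % of assignments completed
})

# Regress LN(score) on the student motivation and effort variables.
model = smf.ols("np.log(SCORE) ~ GENDER + ABS + ASSIGN", data=df).fit()
print(model.summary())                    # coefficients, p values, R-squared
print("R^2 =", round(model.rsquared, 3))  # compare the OL and IC (or TH) R-squared values
```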
The fitted linear regression equation (5) for IC exams had one significant
Table 1: Results of the multiple regression Equations (3) and (4) (Group 1).

Multiple Regression   Intercept   Significant Variable(s)   R      R²     F Ratio   F Sig   N
Equation (3) (OL)     4.3776      None                      .134   .018   .139      .936    21
Equation (4) (IC)     4.6656      β′2,IC = 0.361**          .694   .481   5.251     .01     27

**p < .01

Table 2: Goldfeld-Quandt Test (GQT) for heteroskedasticity (Group 1).

Multiple Regression   Measure of Variance   df   SSR    SSR/df   F Ratio                Fc (α = .05)
Equation (3) (OL)     Residuals             23   .635   .028     F = .046/.028 = 1.64   F17,23 = 2.091; do not reject H0
Equation (4) (IC)     Residuals             17   .777   .046

independent variable while the linear regression for OL exams had no significant
variables.
   
LN(Yi,IC) = 4.6655 − 0.3611(ABSi,IC).   (5)
The value of R2 is low (1.8%) for the OL exam scores and is high (48.1%)
for the IC exam scores. Attendance (ABS) was the only significant variable for
the IC exam scores (equation (4)), accounting for approximately 48% of the
variation, while the other independent variables were not significant. Equation (3)
had no significant variables. Therefore, none of the independent variables in the
regression model for equation (3) contributed toward the approximate 2% of the
variation. This large difference in the value of R2 between OL and IC is because
the performance associated variable ABS contributed to this increase in R2 in
the proctored IC exam, while a low value in the OL exam showed the same
variable as having a low contribution (Fask et al., 2015; Harmon & Lambrinos,
2008). Hence, the possibility of cheating in the OL exams was significantly higher
than cheating in the proctored IC exams, which were under strict supervision.

Optional test
The CDF contains two optional tests for cheating detection. They are not a require-
ment but can be used by instructors desiring additional confirmation of the outputs
from Phases 1, 2, and 3, or who may have skipped some of the phases in favor of
the other options.
The first is the GQT for homoscedasticity, used to test for heteroskedasticity
or the presence of unequal variance. The homoscedasticity assumption for the
regression model requires that the disturbances or errors (εi) are randomly
distributed with constant variance (σ²). This concept has been
used to study disturbances in two independent samples, for example, the OL and
IC exam scores. The GQT tested the multiple regression equations (3) and (4)
for differences in variance, which is termed as the test of heteroskedasticity. The
question, however, is: Do the error terms εi for equations (3) and (4) have a different
variance for OL and IC exam scores?
The following steps in applying the GQT are suggested by Murray (2006):

i. Create two discrete sets. Set 1 consisted of nOL = 27 students in the OL


Exam 1, and Set 2 consisted of nIC = 21 students in the proctored IC
Exam 1.
ii. State the null and alternate hypothesis.

H0: σ²OL = σ²IC,
H1: σ²OL ≠ σ²IC.

iii. Regress the natural log-linear regression for equations (3) and (4).
iv. Compute the F ratio of the sum of squares of the residuals (SSR) for OL
and IC (Table 2).
v. Compare the F ratio with the critical value Fc at α = .05. If F ratio > Fc ,
the null hypothesis is rejected.

The computed values are F ratio = 1.640 and F17, 23 = 2.091. Since F ratio <
F17, 23 , there is not enough statistical evidence to reject the null hypothesis, and
we can conclude that there is no difference in the variance between the OL exams
and the proctored IC exams. This result satisfies the homoscedasticity assumption
for regression models and concludes that it is unlikely cheating is taking place.
However, this result from this optional test differs from the results from Phases 1,
2, and 3.
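The variance-ratio comparison in steps (iv) and (v) can be scripted in a few lines, as sketched below; the SSR and degrees-of-freedom values are those reported in Table 2 for Group 1, and in practice they would be computed from the residuals of the fitted regressions.

```python
# Minimal sketch of the GQT variance-ratio comparison (steps iv-v above),
# using the Group 1 values from Table 2 for illustration.
from scipy import stats

ssr_ic, df_ic = 0.777, 17   # sum of squared residuals and df, IC regression (equation (4))
ssr_ol, df_ol = 0.635, 23   # sum of squared residuals and df, OL regression (equation (3))

# F ratio of the mean squared residuals (larger over smaller).
f_ratio = (ssr_ic / df_ic) / (ssr_ol / df_ol)

# Critical value F(17, 23) at alpha = .05.
f_crit = stats.f.ppf(0.95, dfn=df_ic, dfd=df_ol)

if f_ratio > f_crit:
    print(f"F = {f_ratio:.2f} > Fc = {f_crit:.2f}: reject H0 (unequal variances)")
else:
    print(f"F = {f_ratio:.2f} <= Fc = {f_crit:.2f}: do not reject H0 (equal variances)")
```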
The second optional test is a comparison test that was conducted to corrob-
orate the Phase 1, 2, and 3 results. The observed performance associated variable,
ABSi,j values for OL and IC, were input into linear regression equation (5) to com-
pute the predicted exam scores, which were then compared with the observed exam
scores for OL and IC students. The difference between the predicted OL (70.5)
and observed (86.1) scores was highly significant (p = 1.57E-07). In contrast, the
difference between the predicted IC (75.1) and observed (71.2) scores was not
significant (p = .3897). Hence, results of a comparison test confirm the presence
of cheating in the OL exams.
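One plausible way to script the comparison test is sketched below: the fitted equation (5) is used to predict scores from the observed ABS values, and the predicted and observed scores are then compared. The data arrays are hypothetical, and because the article does not specify the exact test statistic it used, the paired t-test here is an assumption for illustration.

```python
# Hedged sketch of the optional comparison test: predict scores from the fitted
# IC equation (5), LN(Y) = 4.6655 - 0.3611*ABS, and compare predicted with
# observed scores. The arrays are hypothetical and the paired t-test is one
# plausible choice of comparison, not necessarily the one used in the article.
import numpy as np
from scipy import stats

abs_days = np.array([1, 2, 1, 3, 2, 1, 2, 1])           # missed class days (hypothetical)
observed = np.array([86, 90, 78, 88, 84, 92, 81, 89])   # observed OL scores (hypothetical)

predicted = np.exp(4.6655 - 0.3611 * abs_days)          # back-transform from the log scale

t_stat, p_value = stats.ttest_rel(observed, predicted)
print(f"mean predicted = {predicted.mean():.1f}, mean observed = {observed.mean():.1f}, "
      f"p = {p_value:.4g}")
# Observed scores far above the predicted scores suggest the possibility of cheating.
```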
Although cheating could have been detected in the earlier Phases 1 and 2, all
the Phases 1, 2, and 3 along with the optional models were applied to demonstrate
the different models that instructors can use, the data needed for testing, and
interpretation of the results.

Part 2: Testing the Framework


The cheating detection capabilities of the framework were tested on seven groups
(four being OPM 450 and three for OPM 350). The test results for all the groups
involved in the development and testing of the CDF are summarized in Table 3.
Due to space limitations, only the testing results for Group 3 (OPM 450) and Group
6 (OPM 350) are documented here, in detail.
Group 3 was subjected to testing by graphs and descriptive statistical models
in Phase 1. The frequency polygons indicate the highest point was the IC exam
conducted in spring of 2015. The computed mean scores for OL (75) and IC (71)
exams indicated a marginal difference of 6% (factor A) which, according to the
cheating indicator shown in Figure 5, falls in the “no cheating” zone. In the case
of Group 3, one concludes that there was no cheating in the OL exams. However,
the attendance plots for the OL exams shows a positive slope which can be due
to cheating. Therefore the instructor is advised to move to Phase 2 to confirm that
no cheating is taking place. The hypothesis test did not reject the null hypothesis
(p = .290). There is no significant difference in exam scores which leads one to
conclude that there is no strong evidence to prove that cheating is taking place. No
cheating was detected in Phase 2, so further testing in Phase 3 may be discontinued.
The CDF prepares a test report card for the instructor of Group 3 providing details
of tests conducted in Phases 1 and 2. A sample of this test report card is shown in
Figure 6 for Group 6, which is discussed in the following paragraphs.
In Group 6, the CDF’s results were inconsistent for the phases. In Phase 1,
the highest peak of the frequency polygon was for the OL exam (Figure 6). The
attendance plots for both exam scores had a positive slope. The attendance plot
for TH exam that is displayed in Figure 6 had a greater positive slope than the
OL exam. The mean exam scores of 78 in the OL exams and 85 in the TH exam
were close with factor A = 9%. For this value of Factor A, the cheating indicator
(Figure 5) classified Group 6 under the “no cheating” zone. In Phase 2, the test of
the hypothesis was modified to include the TH exam in place of the IC exam, as
follows:
H0: μOL = μTH,
H1: μOL ≠ μTH.
The computed p = .0502 was marginally greater than .0500. An instructor
will be unclear about these borderline outcomes in Phases 1 and 2 and needs to
proceed to Phase 3. In Phase 3, the multiple regressions equation (4) for IC exam
were replaced by the equation (6) for the TH exam as follows:
   
LN(Yi,TH) = β′0,TH + β′1,TH(GENDERi,TH) + β′2,TH(ABSi,TH) + β′3,TH(ASSIGNi,TH).   (6)

Equations (3) and (6) are applied to the OL and TH exams in Group 6. The
multiple regression models’ R2 value for the TH exam scores of approximately 31%
is higher than the R2 value for the OL exam scores of approximately 11%, but there
were no significant variables in either equation. Since none of the performance-
associated independent variables in the regression model for equations (3) and (6)
contributed toward the 31% and 11% of R2 , there were other non-explanatory
factors that influenced the dependent variable, one of which was cheating. With
TH having a higher R2 than OL, it can be concluded that cheating took place in
OL and TH exams, but cheating is higher in TH exams. Finally, the GQT provides
Table 3: Summarized test results for Groups (Gr) 1–7.

Gr 1. OPM 450, Fall 2015/Exam 1 (OL) vs. OPM 450, Fall 2014/Exam 1 (IC). Cheating: Y.
Phase 1: OL attend. plot positive slope; OL freq. poly. higher; dist. stat. (μOL = 86 > μIC = 72); Factor A = 19%. Stop; cheating detected. Phase 2: p = .0019; reject H0. Stop; cheating detected. Phase 3: R²IC > R²OL (.481, .018); GQT: F = 1.64 and F17,23 = 2.091, do not reject H0; comparison test: observed OL values significantly greater than predicted values (p = 1.57E-07). Cheating detected.

Gr 2. OPM 450, Fall 2015/Exam 3 (OL) vs. OPM 450, Fall 2014/Exam 3 (IC). Cheating: Y.
Phase 1: OL attend. plot slightly positive slope; OL freq. poly. higher; dist. stat. (μOL = 92 > μIC = 79); Factor A = 17%. Stop; cheating detected. Phase 2: p = .03; reject H0. Stop; cheating detected. Phase 3: N/A.

Gr 3. OPM 450, Spring 2016/Exam 1 (OL) vs. OPM 450, Spring 2015/Exam 1 (IC). Cheating: N.
Phase 1: OL attend. plot positive slope; OL freq. poly. higher; dist. stat. (μOL = 75 > μIC = 71); Factor A = 6%. Stop; cheating not detected. Phase 2: p = .290; do not reject H0. Stop; no cheating detected. Phase 3: N/A.

Gr 4. OPM 450, Spring 2015/Exam 3 (OL) vs. OPM 450, Spring 2016/Exam 3 (IC). Cheating: N.
Phase 1: OL attend. plot positive slope; OL freq. poly. higher; dist. stat. (μOL = 82.1 ≈ μIC = 82.3); Factor A = 0%. Stop; cheating not detected. Phase 2: p = .9565; do not reject H0. Stop; no cheating detected. Phase 3: N/A.

Gr 5. OPM 350, Fall 2014/Exam 2 (OL) vs. OPM 350, Fall 2015/Exam 2 (TH). Cheating: Y.
Phase 1: TH attend. plot positive slope; OL freq. poly. higher; dist. stat. (μTH = 99 > μOL = 75); Factor A = 32%. Stop; cheating detected. Phase 2: p = .001; reject H0. Stop; cheating detected. Phase 3: N/A.

Gr 6. OPM 350, Spring 2016/Exam 1 (OL) vs. OPM 350, Spring 2015/Exam 1 (TH). Cheating: U*.
Phase 1: OL and TH attend. plots both positive slopes; OL freq. poly. higher; dist. stat. (μTH = 85 > μOL = 78); Factor A = 9%. Undecided. Phase 2: p = .0502. Undecided. Phase 3: R²TH > R²OL (.31, .11); GQT: F = 1.03 and F7,11 = 3.01, do not reject H0. No cheating detected.

Gr 7. OPM 350, Spring 2016/Exam 3 (OL) vs. OPM 350, Spring 2015/Exam 3 (TH). Cheating: Y.
Phase 1: TH attend. plot more positive; TH freq. poly. higher; dist. stat. (μTH = 96 > μOL = 66); Factor A = 46%. Stop; cheating detected. Phase 2: p = 3.25E-08; reject H0. Stop; cheating detected. Phase 3: N/A.

*Undecided. The instructor will make the final decision.

an F ratio = 1.03 and F7, 11 = 3.01. The null hypothesis was not rejected, meaning
there was no difference in the variations and hence, no cheating. The provision of
inconsistent results indicates that the CDF has a limitation in such narrow or close
cases and the instructor should make the final decision after a review of all the test
results documented in the test report card (Figure 6).
The CDF could provide sufficient test results to assist an instructor in de-
termining if cheating possibly took place or not for over 85% of the test groups.
Cheating was detected in 67% of these test groups, while no cheating was detected
in 33% of these test groups. Tests conducted on the groups showed that models
in Phases 1 and 2 of the proposed framework provided results effectively for over
70% of the test groups, saving users further time and effort.

INTERVENTIONS FOR MITIGATING CHEATING


After detecting cheating in the unproctored OL and TH exams, it is the instruc-
tor’s responsibility to take appropriate steps to avert or mitigate cheating. Further
discussions in this section will concentrate on OL exams, although cheating was
found to be higher in TH exams.
One major technical issue with OL exams is authentication regarding who is
actually taking the OL exam. The other issue pertains to how OL exams that are
held at testing centers are being proctored. Currently, Berkey and Halfond (2015)
have identified vendors marketing advanced equipment and software for authen-
tication and proctoring purposes. For example, ProctorU, Examity, and Software
Secure are vendors specialized in supplying and implementing authentication and


proctoring services. There are fully automated systems such as ProctorTrack, mar-
keted by Verificient Technologies; ProctorFree, in partnership with Blackboard
and CANVAS; Proctorio; and Biometric Security Systems and Facial Recognition
(BIOMIDS), which are capable of performing authentication and proctoring in the
absence of any human proctors. The market for automated cheating alleviation sys-
tems is expected to grow as more institutions crack down on academic dishonesty
(Berkey & Halfond, 2015).
The above discussion has focused on high-tech applications to manage cheat-
ing in OL exams. These advanced systems can be expensive to implement and
operate. There are pragmatic approaches that are easy to implement at low or no
cost, especially if an institution uses Blackboard, CANVAS, or other Web-based
course management systems.

Figure 6: Test report card for Group 6.

Course: OPM 350_E, Exam 1, semester/year: spring 2016 and spring 2015. Date prepared: ____.

Phase 1: Frequency polygons and attendance plot for take-home Exam 1. Mean TH = 85; mean OL = 78; Factor A = 9%.

Phase 2: Test of hypothesis. Decision rule (α = .05, two-tailed): if |t| > |tc|, the null is rejected and the alternate holds true. t = 2.019, tc = 2.093, p = .0502. Borderline case.

Phase 3: Results of the regression models for Equations (3) and (6).
Equation (3) (OL): intercept 4.165; no significant variables; R = .345; R² = .105; F ratio = .432.
Equation (6) (TH): intercept 3.841; no significant variables; R = .560; R² = .313; F ratio = 1.064; F sig = .424.

Phase 3 (optional): Goldfeld-Quandt Test for heteroskedasticity for Equations (3) and (6).
Equation (3) (OL), spring 2016/Exam 1: residuals, df = 11, SSR = .152, SSR/df = .0139.
Equation (6) (TH), spring 2015/Exam 1: residuals, df = 7, SSR = .100, SSR/df = .0143.
F ratio = .0143/.0139 = 1.03; Fc (α = .05) = F7,11 = 3.01. Do not reject H0.

Phase 3 (optional): Comparison test for predicted and observed Exam 1 scores: not required.
GQT test: Yes. Results: GQT test of heteroskedasticity, equal variances.
Prepared by: ____. Approved by: ____. Date: ____. Action taken: ____.
The following is a list of recommendations gathered by the authors, some of


which match those of other researchers (Barron & Crooks, 2005; Cluskey et al.,
2011; King et al., 2009; McCabe et al., 2001; Olt, 2002; Rogers, 2006):

i. Students must be apprised of the University’s code of conduct regarding


academic dishonesty and the detecting software (if available) that will be
used.
ii. The OL exams must have a stringent warning about cheating printed on
Page 1 of the exam.
iii. The penalty for cheating must be set high. Just issuing a warning or a
threat of failing the exam does not hinder future cheating if faculty adopts
a “look the other way” response which actually encourages more students
to cheat and to do so more frequently (McCabe et al., 2001, p. 226). The
student caught cheating not only needs to fail the exam but must also be
required to drop the course and retake it the following semester.
iv. Avoid giving the same multiple choice or other style test questions in the
following exam every semester. Students do have access to past exams.
In the authors’ opinion, an instructor must change at least 75% of her/his
previous exam questions.
v. To the extent possible, avoid using a test bank from the publisher. These,
or equivalent test banks, are available for purchase over the Internet.
Preparing one’s own question bank or modifying the publisher’s test bank
may be time-consuming, but it lessens the chance of cheating.
vi. Makeup exams for students who miss the regular exams must have dif-
ferent and more challenging questions than the regular exams so as to
discourage future requests for makeup exams.
vii. Use the following anticheating options in different Web-based educational
sites that do make it more difficult to cheat (a minimal screening sketch of
the timing checks follows this list):
• Present questions one at a time and/or in a random fashion with no backtracking.
• Scramble multiple-choice answers so that every student gets a different answer sequence.
• Provide just enough time for a typical student to complete the full exam.
• Provide multiple exam versions when possible, without the knowledge of students. All versions must have the same format, with slight changes in wording and parameters. Make a rough sketch of the seating arrangement.
• Post answers to questions only after the exam due date.
• Check exams with the same score to see if there is any distinctive similarity between the answers to questions.
• Compare each student's exam times with the average for the class, especially for students getting a high score in spite of having a poor attendance record.
• A student finishing an exam in an abnormally short time may be a cheating suspect.
• Check the clock time at which suspected students started and finished the exam, and compare this with the time span of other students to determine if they worked in groups.
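The timing and attendance checks in the last three options can be automated from the course management system's exam logs; a minimal sketch is shown below, in which the data frame, its column names, and the flagging thresholds are hypothetical illustration values rather than prescribed settings.

```python
# Minimal sketch of the screening checks suggested above: flag abnormally fast
# finishers and students with high scores despite poor attendance. The data and
# thresholds are hypothetical illustration values.
import pandas as pd

exams = pd.DataFrame({
    "student":       ["A", "B", "C", "D", "E"],
    "score":         [95, 62, 88, 91, 70],
    "minutes_taken": [18, 55, 52, 20, 58],   # time spent on the online exam
    "missed_days":   [4, 1, 0, 3, 2],        # missed class days up to the exam
})

avg_time = exams["minutes_taken"].mean()

# Flag students who finished in well under the class-average time ...
fast_finishers = exams[exams["minutes_taken"] < 0.6 * avg_time]
# ... or who scored highly despite poor attendance.
high_score_poor_attendance = exams[(exams["score"] >= 85) & (exams["missed_days"] >= 3)]

print("Abnormally fast finishers:\n", fast_finishers[["student", "minutes_taken", "score"]])
print("High score despite poor attendance:\n",
      high_score_poor_attendance[["student", "score", "missed_days"]])
```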

CONCLUSIONS AND LIMITATIONS


Academic departments include instructors from varied backgrounds that may or
may not be familiar with empirical models that can be used to test if cheating takes
place in their online (OL) or take-home (TH) exams. This article has presented
a cheating detection framework (CDF) in a very simple and clearcut fashion,
making it easier for all instructors across college and university campuses to plan
and organize a cheating detection study. The CDF was developed to detect cheating
in OL exams but it can also be used for any unproctored exam. For example, the
OPM 350 courses had TH exams that were changed to OL exams. Both exams were
unproctored but cheating was found to be higher in the TH exams. The reasons are
under study but time appears to be an influencing factor. OL exam times can be set
for a fixed duration in Blackboard. In contrast, students can take a longer time to
complete TH exams, which provide increased opportunity for cheating.
Some advantages can be ascribed to the CDF. First, the framework will
alleviate the necessity of doing a literature review to find the right empirical model
to apply for the test. Second, it organizes the appropriate model at each phase in
an ascending complexity level, giving an instructor the flexibility of choosing a
suitable model(s) from Phases 1, 2, and 3. Finally, the graphical and statistical
models included in the framework have been successfully applied by researchers
and proven to produce results with a high level of significance.
The optional GQT failed to uncover unequal variances for OL and IC exams
in Group 1, and OL and TH exams in Group 6 (refer to Table 3). Consequently,
the GQT finds no cheating in both the groups of unproctored exams, while the
other graphs and statistical models in Phases 1, 2, and 3 concluded that cheating
has taken place in Group 1 and was undecided for Group 6 (Table 3). The GQT
test of heteroskedasticity also failed to detect cheating behaviour in unproctored
exams for an introductory statistics course (Fask et al., 2015).
When the GQT was applied within each regression equation for Groups 1 and 6, it
indicated homoscedasticity (equal variance). This satisfies the Gauss-Markov Theorem:
under heteroskedasticity, ordinary least squares (OLS) would still be unbiased but would
not be the best linear unbiased estimator (BLUE) of the coefficients (Murray, 2006), and
unequal variance within a regression equation produces erroneous results for the test of
hypothesis and regression analysis. In this study, the two independent discrete data sets,
the OL and IC exams, failed the test for unequal variance, which is why OLS remains the
BLUE of the coefficients.
A future study planned by the authors includes comparing the impact of
cheating after implementing the interventions that are recommended in the Inter-
ventions for Mitigating Cheating section. This study would be an extension of the
earlier studies on cheating in objective types of exams (Bellezza & Bellezza, 1989;
Holland, 1996).
The framework experienced problems during the test for Group 6 due to
very narrow differences in the mean exam scores. In such borderline cases, the
CDF does not provide decisive results, thereby shifting the decision making back to
the instructor. Even with all of the limitations discussed earlier, the CDF is still a
powerful deterrent that can mitigate the concerns that an institution’s stakeholders
might have about the reliability of their programs.

REFERENCES
Andrada, G. N., & Linden, K. W. (1993) Effects of two testing conditions on
classroom achievement: Traditional in-class versus experimental take-home
conditions. Report ED 360 329, Purdue University. Presented at the Annual
Meeting of the American Educational Research Association, Atlanta, GA,
April 2–16.
Barron, J., & Crooks, S. M. (2005) Academic integrity in web-based distance
education. Tech Trends Linking Research and Practice to Improve Learning,
49(2), 40–45.
Becker, D., Connolly, J., Lentz, P., & Morrison, J. (2011) Using the business fraud
triangle to predict academic dishonesty among business students. Academy
of Educational Leadership Journal, 10(1), 37–54.
Bellezza, F. S., & Bellezza, S. F. (1989) Detection of cheating on multiple-choice
tests by using error similarity analysis. Teaching of Psychology, 16(3), 151–
155.
Berkey, D., & Halfond, J. (2015) Cheating, student authentication and proctoring
in online programs. New England Board of Higher Education, July 20,
2015 http://www.nebhe.org/thejournal/cheating-student-authentication-and-
proctoring-in-online-programs/
Bowers, W. J. (1964) Student dishonesty and its control in college. New York
Bureau of Applied Social Research. New York, NY: Columbia University.
Cluskey Jr., G. R., Ehlen, C. R., & Raiborn, M. H. (2011) Thwarting online exam
cheating without proctor supervision. Journal of Academic and Business
Ethics, 4, 1–7.
College of Liberal Arts and Science (CLAS). (2016) Definition of aca-
demic dishonesty. University of Colorado Denver, Denver, 1–2 (www.
ucdenver.edu/academics/colleges/CLAS/faculty-).
deLeeuw, E. D., Hox, J. J., & Dillman, D. A. (2008) The cornerstones of survey
research. International handbook of survey methodology. New York: Psy-
chology Press, Taylor & Francis Group. E. D. deLeeuw, J. J. Hox, & D. A.
Dillman (Eds.) 1–17.
Dobkin, C., Gil, R., & Marion, J. (2010) Skipping class in college and exam per-
formance: evidence from a regression discontinuity classroom experiment.
Economics of Education Review, 29, 566–575.
Fask, A., Englander, E., & Wang, W. (2014) Online testing and its implications in
business education. Proceedings of the Northeast Business Economic Associ-
ation 41st Annual Meeting, Monmouth University, NJ 07764 (www.nbea.us).
Fask, A., Englander, E., & Wang, Z. (2015) On the integrity of online testing for
introductory statistics courses: A latent variable approach. Practical Assess-
ment, Research, and Evaluation, 20(10), 1–12.
Faurer, J. C. (2013) Grade validity of online quantitative courses. Contemporary
Issues in Education Research, 6(1), 93–96.
Goldfeld, S. M., & Quandt, R. E. (1965) Some tests for homoscedasticity. Journal
of the American Statistical Association, 60(310), 539–547.
Greenberg, K., Lester, J. N., Evans, K., Williams, M., Hacker, C., & Halic,
O. (2009) Student learning with performance–based, in-class and learner-
centered, online exams. International Journal of Teaching and Learning in
Higher Education, 20(3), 383–393.
Harmon, O. R., & Lambrinos, J. (2008) Are online exams an invitation to cheat?
Journal of Economic Education, 39(2), 116–125.
Hemming, A. (2010) Online tests and exams: Lower standards or improved learn-
ing? The Law Teacher, 44 (3), 283–308.
Holland, P. W. (1996) Assessing unusual agreement between the incorrect answers
of two examinees using the K-Index: statistical theory and empirical support.
Program Statistics Research Technical Report No. 96-4, Educational Testing
Services (ETS), Princeton, NJ, 08541.
Hollister, K. K., & Berenson, M. L. (2009) Proctored versus un-proctored online
exams: Studying the impact of exam environment on student performance.
Decision Sciences Journal of Innovative Education, 7(1), 271–294.
Kennedy, K., Nowak, S., Raghuraman, R., Thomas, J., & Davis, S. F. (2000) Aca-
demic dishonesty and distance learning: Student and faculty views. College
Student Journal, 34(2), 309–314.
King, C. G., Guyette, R. W. Jr., & Piotrowski, C. (2009) Online exams and
cheating: An empirical analysis of business students’ views. The Journal of
Educators Online, 6(1), 1–11.
Lanier, M. M. (2006) Academic integrity and distance learning. Journal of Crim-
inal Justice Education, 17(2), 244–261.
Marsh, R. (1984) A comparison of take-home versus in-class exams. The Journal
of Educational Research, 78(2), 111–113.
Marx, D. B. & Longer, D. E. (1986) Cheating on multiple choice exams is difficult
to assess quantitatively. North American Colleges and Teachers of Agricul-
ture Journal, March 1986, 23–26.
McCabe, D. L., Treviño, L. K., & Butterfield, K. D. (2001) Cheating in aca-
demic institutions: A decade of research. Ethics and Behavior, 11(3), 219–
232.
Murray, M. P. (2006) Econometrics: A modern introduction. Boston, MA 02116:
Pearson Education, Inc.
Olt, M. R. (2002) Ethics and distance education: Strategies for minimizing aca-
demic dishonesty in online assessment. Online Journal of Distance Learning
Administration, V(III), 1–9.
Peng, Z. (2007) Giving online quizzes in corporate finance and investments for a
better use of seat time. The Journal of Educators Online, 4(2), 1–18.
Pett, M. A. (2016) Nonparametric statistics for health care research: Statistics
for small samples and unusual distributions (2nd ed.). Los Angeles, CA:
Sage.
Ramos, M. (2003) Auditors’ responsibility for fraud detection. Journal of Ac-
countancy, 195(1), 28–36.
Rogers, C. F. (2006) Faculty perceptions about e-cheating during online testing.
Journal of Computing Sciences in Colleges, 22(2), 206–212.
Shen, Q., Chung, J. K. H, Challis, D. I., & Cheung, R. C. T. (2007) A comparative
study of student performance in traditional mode and online mode of learning.
Computer Applications in Engineering Education, 15(1), 30–40.
Shon, P. C. H. (2006) How students cheat on in-class examinations: Creativity,
strain, and techniques of innovation. Ann Arbor, MI: MPublishing, Univer-
sity of Michigan Library.
Siegmann, S. M., Moore, D., & Aquino, C. P. (2014) Exam performance in a
hybrid course: A model for assessing online and in class delivery modes. The
Online Journal of Distance Education and e-Learning, 2(4), 70–80.
Werhner, M. J. (2010) A Comparison of the performance of online versus tra-
ditional on-campus Earth Science students on identical exams. Journal of
Geoscience Education, 58(5), 310–312.

Kelwyn A. D’Souza is a Professor of management science in the School of


Business at Hampton University. He received his PhD in industrial engineering
from the University of South Florida in 1991 and has 10 years of prior industrial
experience. His research interests include transportation safety, distracted driving,
and academic dishonesty. Dr. D’Souza is the founding director of the University
Transportation Center (UTC) at Hampton University. He has been awarded 13
funded grants totaling over $4.0 million from the federal government and other
agencies for research and educational programs. He has an extensive publications
record and is the University’s contact person for transportation.

Denise V. Siegfeldt is an Associate Professor of management and the Hampton


Roads site director for Florida Institute of Technology, Department of Extended
Studies. She has held her position for the past seven years and works at the Herb
Bateman Army Education Center at Fort Eustis in Virginia. Dr. Siegfeldt received
her PhD in urban services with a concentration in management and an emphasis
in organizational development from Old Dominion University in 1991. She has
served as a peer reviewer/panel member for formal proposals submitted to NASA
HQ and the National Science Foundation. She has a record of publications and
presentations at national and international conferences.
