


Department of Economics Working Paper Series

Are Online Exams an Invitation to Cheat?


Oskar R. Harmon
University of Connecticut

James Lambrinos
Union University

Working Paper 2006-08R

March 2006, revised February 2007

341 Mansfield Road, Unit 1063


Storrs, CT 06269–1063
Phone: (860) 486–3022
Fax: (860) 486–4463
http://www.econ.uconn.edu/

This working paper is indexed on RePEc, http://repec.org/


Abstract
This study uses data from two online courses in principles of economics to
estimate a model that predicts exam scores from independent variables of student
characteristics. In one course the final exam was proctored, in the other course
the final exam was not proctored, and in both courses the first three exams were
unproctored. If no cheating took place we expect the prediction model to have
the same explanatory power for all exams, and conversely, if cheating occurred
in the unproctored exam the explanatory power would be lower. We find that both across-class and within-class variation in the R-squared statistic suggests that cheating was taking place when the exams were not proctored.
Journal of Economic Literature Classification: A2, A22
Keywords: online, cheating, assessment, undergraduate economics, face-to-face

Forthcoming in Journal of Economic Education


Are Online Exams an Invitation to Cheat?

Online offerings of economics classes have experienced a recent growth surge. Sosin

(1997) surveyed 986 economics departments at post-secondary institutions in the fall of 1997

and received 325 completed surveys for a response rate of 33 percent. Of the respondents, only

24 institutions offered a total of 40 online courses. Coates and Humphreys (2001) conducted a similar

survey just three years later of approximately 750 higher education institutions and received

approximately 260 completed surveys for a response rate of 35 percent. Of the respondents, 120

institutions offered 189 economics courses online. A comparison of the two surveys shows that

in the three year interval the number of institutions offering online economics courses increased

by 400 percent and the number of these courses increased by 373 percent.

Among college educators there is a widespread belief that the extent of academic

misconduct is on the rise (Hard, Conway, and Moran 2006). The issue is central to online

instruction because in the absence of the ID confirmation afforded by a proctored exam, it is

impossible to know whether the registered student or a substitute has taken the assessment, or if

students worked collaboratively on the exam. We report the findings of a natural experiment

wherein an identical exam was administered in a proctored and unproctored setting, holding

constant factors such as instructor, text and delivery method. Our purpose is to contribute useful

information to instructors as they decide whether to administer proctored or unproctored

assessments in their online courses.

LITERATURE REVIEW

There is an emerging literature on the appropriate design for assessment in online

instruction. One view is that a proctored test is the best practice for online assessment (Edling

2000, Rovai 2001, Deal 2002). These authors take the position “Proctored testing is particularly

relevant when testing is for high-stakes, summative purposes” because of the ease of cheating in

an unproctored environment (Rovai 2001). Even for students geographically distant from the

offering campus, proctored tests are feasible because there are numerous commercial testing

centers and alternative non-profit testing collaborations (Liefert 2000, Young 2001, Taylor

2002). The alternative view is that with appropriate adjustments in format (e.g., randomized

questions from a large pool of test questions, open book testing with time constraints so students

do not have time to look up answers, etc.) the probability of cheating in the proctored and

unproctored format can be brought to equivalent levels (Vachris 1999, Shuey 2002, Serwatka

2003).

The literature on the extent and determinants of cheating on college campuses is quite

extensive (Passow et al. 2006). 1 These studies examine cheating behaviors in general; they do

not examine whether cheating behaviors are different in online instruction compared to face-to-face instruction. Two studies in this journal focused on the determinants of cheating in face-to-face principles of economics classes. One study (Kerkvliet and Sigmund 1999) used random

response survey data to measure the effectiveness of measures to reduce cheating. A random

response survey asks the respondent to anonymously self-report cheating behavior. The study

findings were that the most effective deterrent is using tenure-track faculty instead of graduate

teaching assistants as proctors (32 percent reduction in the probability of cheating), followed by

using an additional test version (25 percent reduction), and simple use of verbal announcements

(12 percent reduction). Another study (Nowell and Laufer 1997) used direct evidence of cheating

to examine student characteristics as predictors of cheating. In their experiment students were

administered a quiz; the quiz was collected, photocopied, and returned to the student to self-grade. The self-graded score was compared to the score calculated from the photocopy, and discrepancies were direct evidence of cheating. The authors reported that the likelihood of cheating is positively associated with the student characteristics of poor performance in class and increased hours of employment.

There are only a few empirical studies of cheating in online classes (Charlesworth,

Charlesworth, and Vlcia 2006). Two studies examined student perceptions of cheating in online courses: one (Kennedy et al. 2000) reported findings consistent with the view that cheating is more likely to occur in online classes than in traditional face-to-face classes, and the other (Charlesworth, Charlesworth, and Vlcia 2006) reported that cheating is no more likely to occur in the online class than in the face-to-face class. A third study (Grijalva, Nowell, and Kerkvliet

2006), using an anonymous survey of self-reported cheating by students in online courses, reported that the incidence of cheating was similar to that found in comparable studies of face-to-face courses.

There are, to our knowledge, no published studies of cheating in online courses in

economics. Understanding of the potential scale of the problem of online cheating is further

limited because there are no studies, to our knowledge, of the extent to which unproctored

assessments are used in online principles of economics courses. In the expanding literature that

compares the effectiveness of online instruction and face-to-face instruction in principles of

economics we reviewed four studies that report using the unproctored format (Vachris 1999,

Navarro 2000, Coates et al. 2004, Anstine and Mark 2005) for assessments in their online

classes. Given the relatively higher cost of the proctored format (Young 2001), these four studies

may represent the tip of the iceberg regarding the common practice for assessment in online

principles of economics classes.



The purpose of our study is to begin to fill this gap in the empirical literature

on the extent and determinants of online cheating in principles of economics classes. This study

uses data from two online courses in principles of economics to estimate a model that predicts

exam scores from independent variables of student characteristics. In one course the final exam

was proctored, in the other course the final exam was not proctored, and in both courses the first

three exams were unproctored. If no cheating occurs we expect the prediction model to have the

same explanatory power in both classes, and conversely, if cheating occurs we would expect a

lower explanatory power in the class with the unproctored exam. If cheating occurs we expect

the R-squared statistic to be relatively low because a large portion of the variation would be

explained by cheating, which is an omitted variable in the model. To our knowledge this is the

first empirical study of cheating on unproctored assessments in online economics classes, and it

is the first study to use the R-squared statistic to detect whether cheating has occurred. 2

DATA

Our study uses data from two courses, an online class in principles of macroeconomics

taught in summer 2004 and the same class taught in summer 2005, both for the Online Division

of the School of Continuing Studies at the University of Connecticut. In summer 2004 the

enrollment was 25 students and we have information for 24 of these students. In summer 2005

the enrollment was 40 students and we have information for 38 of these students. The courses, though offered a year apart, were almost identical in structure and content. The required readings

consisted of chapters in a standard principles of macroeconomics textbook. The online

instructional materials included PowerPoint presentations augmented with audio sound files,

online practice problems in Excel spreadsheets, and readings from the online edition of the Wall Street Journal as background for participation in twice-weekly instructor-moderated online

discussions.

Each course was offered entirely online using the course management software WebCT. 3

Each course had three one-hour exams, each weighted 18 percent of the course grade (a total of

54 percent), required participation in a discussion bulletin board for each chapter weighted 18

percent, and a cumulative 90-minute final exam weighted 28 percent. The corresponding exams

for each course were identical. Each exam had 20 multiple-choice questions (the final exam had

30), drawn at random from a pool of approximately 100 multiple-choice questions, with randomly ordered response choices. 4 No student taking the course in 2004 took it

again in 2005.

The sole significant difference between the two courses was that in summer 2004 the

final exam was unproctored and in summer 2005 the final exam was proctored. Students did not

know prior to enrollment whether the exams would be proctored, so self-selection bias is

unlikely. 5 In the summer 2004 course, all four exams were unproctored. Students had a 3-day

period, usually encompassing a weekend, in which to take the exam. After login, the student had

60 minutes (90 for the final exam) to complete the exam. The exams could be taken anywhere

the student had access to the internet. In the summer 2005 class the three 60-minute

exams were administered as in the summer 2004 class. The final exam, however, was required to

be taken at one of five University campus locations, or at a pre-approved testing site.

The procedures for proctoring the summer 2005 final exam followed the guidelines

recommended by Kerkvliet and Sigmund (1999): proctors were not teaching assistants, multiple exam versions were used, and verbal warnings were given. At the University campus locations

the test was proctored by a faculty member or an administrator in the Division of Continuing Studies. Five students took the test off-campus and were proctored by the testing center staff,

clergy, or faculty at other universities and colleges. The proctors were given identical

guidelines. Students were required to present a valid photo ID to sit for the exam. The exam

was administered for a 90-minute period beginning at exactly the same time at all testing

locations. Notes, books, scratch paper, computer files and calculators were allowed. Printing or

copying the exam or parts of the exam was not permitted. Cell phone usage and other forms of

communication such as instant messaging were not allowed. Proctors gave a verbal warning

about academic dishonesty.

The data for our study consist of scores on four exams in the course, and, from

University records, the student’s cumulative grade point average at the beginning of the

semester, age, academic major, and college grade level. Descriptive statistics for the students’

characteristics are shown in Table 1. A test of the difference between the means of the variables

for each course is reported in the final column of Table 1. The average exam score for the summer

2004 course is generally below that for the summer 2005 course. These differences were

statistically significant only for the third hourly exam. The average GPA is slightly lower in the

summer 2004 course compared to the summer 2005 course (2.86 compared to 3.00) but the

difference is not statistically significant. The distribution by class standing is similar between

the courses. The slight differences in the means of the indicator variables SOPHOMORE,

JUNIOR, and SENIOR are not statistically significant. The percentage of economics majors is

larger in the summer 2004 class (29 percent) than in the summer 2005 class (21 percent) but the

difference is not statistically significant. On balance, the two sections have approximately the

same average level of human capital endowments.
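As an illustration, the equality-of-means tests reported in Table 1 can be reproduced with a standard two-sample t test. The sketch below is ours, not the authors' code; the data file and column names (students.csv, course, exam1, gpa, and so on) are hypothetical placeholders for whatever layout the actual records take.

# Sketch of the Table 1 mean-comparison tests (hypothetical data layout).
import pandas as pd
from scipy import stats

# Assumed layout: one row per student, a 'course' column ('2004' or '2005'),
# and one column per student characteristic.
df = pd.read_csv("students.csv")  # hypothetical file

for var in ["exam1", "exam2", "exam3", "final", "gpa", "age"]:
    x = df.loc[df["course"] == "2004", var].dropna()
    y = df.loc[df["course"] == "2005", var].dropna()
    t, p = stats.ttest_ind(x, y)  # pooled two-sample t test of equal means
    print(f"{var}: t = {t:.2f}, p = {p:.3f}")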



METHODOLOGY AND RESULTS

The model for prediction of exam score was determined by past research studies

(Anderson, Benjamin and Fuss 1994, Brown and Liedholm 2002, Coates et al. 2004, Dickie

2006, Marburger 2006, Stanca 2006) and data availability. It is:

EXAM(i) = b0 + b1 GPA + b2 SOPHOMORE + b3 JUNIOR + b4 SENIOR + b5 ECON_MAJOR + b6 AGE + U(i)

The variables used in the study and their definitions are shown in Table 2. The

dependent variable EXAM(i) is the test score on exam i, where i = 1, ..., 4. GPA is the

student’s grade point average at the beginning of the semester; it is used as a measure of student

ability 6 and its expected effect is positive (Anderson, Benjamin, and Fuss 1994; Dickie 2006;

Stanca 2006). SOPHOMORE, JUNIOR, and SENIOR are indicator variables equal to one if the

student has the same class rank as the variable name, and zero otherwise. These indicator

variables are taken as a measure of student maturity and experience with academics and are

expected to have positive signs. ECON_MAJOR is an indicator variable equal to one if the

student is an economics or business major, zero otherwise. It is expected to have a positive sign

as majors in the discipline of the course are expected to have greater motivation to perform well.

The sign for AGE is not hypothesized. A small portion of the students are returning adult

learners enrolled in the Division of Continuing Studies and have distinctly different

circumstances than the majority of the students. On the one hand, these older students tend to exercise greater responsibility toward academic achievement, implying a positive sign; on the other hand, they face greater opportunity costs arising from family and job responsibilities, implying a negative sign.



To detect cheating we compare the R-squared statistic of the summer 2004 results to the

summer 2005 results. The rationale for using the R-squared statistic to detect cheating is as

follows: we assume that the more the human capital variables explain test scores, the more likely it is that the scores reflect the student's own ability. If human capital variables such as GPA and whether the student was an economics major explain a high percentage of the variation in test scores, the scores most likely reflect the student's own effort and ability. Cheating should weaken this correlation, resulting in a low R-squared statistic. Conversely, if the proctored final exam is associated with an unusually high R-squared statistic, this is most plausibly attributed to an absence of cheating on that exam.
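As a concrete sketch of this procedure (our illustration, not the authors' code), the eight regressions and their R-squared statistics could be computed as follows; the data file and column names are assumed placeholders.

# Sketch: fit the reported specification (exam score on GPA) separately for
# each exam in each course and collect the R-squared statistics that the
# cheating test compares. Data layout and column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("students.csv")  # hypothetical file: one row per student

r_squared = {}
for course in ["2004", "2005"]:
    sub = df[df["course"] == course]
    for exam in ["exam1", "exam2", "exam3", "final"]:
        fit = smf.ols(f"{exam} ~ gpa", data=sub).fit()
        r_squared[(course, exam)] = fit.rsquared

# Under no cheating, R-squared should be of similar magnitude across exams;
# a collapse on an unproctored exam is the flag described in the text.
for (course, exam), r2 in sorted(r_squared.items()):
    print(course, exam, f"R^2 = {r2:.4f}")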

The results of the eight OLS regressions (one for each of the four exams in each of the two courses) are

reported in Table 3. Because GPA was the only substantive explanatory variable and an F test

indicated that the other explanatory variables (SOPHOMORE, JUNIOR, SENIOR,

ECON_MAJOR, and AGE) were statistically insignificant as a group, we report in Table 3 the

results for the simplest specification. For the summer 2005 course the R-squared for the

proctored final is 49.7 percent, much higher than the R-squared for the first three unproctored

exams, which average 15 percent. For the summer 2004 course the R-squared for the

unproctored final is only 0.08 percent, 49.6 percentage points below that for the

proctored final in the summer 2005 course.

The Goldfeld-Quandt test, commonly used to test for heteroskedasticity, can be used as a

test for equality of error variance across the two classes. The calculated F-ratio statistic for testing the equality of the error variance between the unproctored and proctored final exam models is 3.41, and the p value is less than 0.01. This suggests that the error variances are

significantly different between the two classes.
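The variance-equality test can be reproduced as a simple F ratio of the residual variances from the two final-exam regressions. The function below is a hedged sketch under our assumptions (two estimated parameters per model, residual arrays taken from separately fitted OLS models), not the authors' implementation.

# Sketch: F test for equality of error variance across the two final-exam models.
import numpy as np
from scipy import stats

def variance_ratio_test(resid_a, resid_b, k=2):
    # k = number of estimated parameters per model (intercept + GPA here).
    n_a, n_b = len(resid_a), len(resid_b)
    s_a = np.sum(np.asarray(resid_a) ** 2) / (n_a - k)
    s_b = np.sum(np.asarray(resid_b) ** 2) / (n_b - k)
    # Put the larger variance in the numerator so the ratio is >= 1.
    if s_a >= s_b:
        f, df_num, df_den = s_a / s_b, n_a - k, n_b - k
    else:
        f, df_num, df_den = s_b / s_a, n_b - k, n_a - k
    p = 1 - stats.f.cdf(f, df_num, df_den)
    return f, p

# Usage (hypothetical fit objects): f, p = variance_ratio_test(fit_2004.resid, fit_2005.resid)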

Another approach to detecting whether cheating took place is to use the equation for the proctored final exam to predict the final exam score for the unproctored class. If many students' actual scores are far from their predicted scores, that is taken as an indication that cheating may have taken place. The standard error of the prediction interval is roughly 8 points, so two standard errors would be roughly 16 points. Adjusting for the

difference in final exam scores, we find 8 students whose actual score was more than 16 points

from the predicted score. Of these, 3 (13 percent) had scores that were worse than expected and

5 (21 percent) had scores that were better than expected. 7
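A sketch of this outlier check, again ours rather than the authors' code; the fitted model object, data frame, and the 8-point prediction standard error are taken from the text or assumed for illustration.

# Sketch: predict unproctored final-exam scores from the proctored-class
# equation and flag students more than two prediction standard errors away.
def flag_prediction_outliers(fit_proctored, df_unproctored, se_pred=8.0, shift=0.0):
    # fit_proctored: statsmodels OLS fit from the proctored final exam.
    # shift: adjustment for the mean difference in final-exam scores (see text).
    predicted = fit_proctored.predict(df_unproctored) + shift
    resid = df_unproctored["final"] - predicted
    better = (resid > 2 * se_pred).sum()   # scored better than expected
    worse = (resid < -2 * se_pred).sum()   # scored worse than expected
    return better, worse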

CONCLUSIONS

This study addresses the question: Does assessment format (proctored or

unproctored exams) affect test scores in online principles of economics classes? The data for the

study are from two courses of principles of macroeconomics, one taught in summer 2004, the

other in summer 2005. The courses are identical in every respect, except the final exam in the

summer 2004 course was not proctored, and the final exam in the summer 2005 course was

proctored. To detect cheating we estimated a model for each class that predicts exam scores

from independent variables of student characteristics and compared the R-squared statistic for

each exam. If no cheating took place we expected the prediction model to have the same

explanatory power for all exams, and conversely, if cheating occurred in the exams that were

unproctored, the explanatory power would be lower. We conclude that cheating took place

because the comparison of the R-squared statistics reveals that the human capital variables do not

explain nearly as much of the variation in test scores in the unproctored format as they do in the

proctored format. The potential for a higher incidence of academic dishonesty in online courses

than in face-to-face courses has been much discussed, and many authors have commented on the

dearth of empirical evidence. Although our data are limited to two undergraduate classes in

principles of economics at a single institution, our results suggest that online exams administered

in a proctored environment might equalize the incidence of academic dishonesty between online

and face-to-face courses.



REFERENCES

Anderson, G., D. Benjamin, and M.A. Fuss. 1994. The determinants of success in university

introductory economics courses. Journal of Economic Education 25 (2): 99-119.

Anstine, J., and S. Mark. 2005. A small sample study of traditional and online courses with

sample selection adjustment. Journal of Economic Education 36 (2): 107-127.

Brown, B. W., and C. E. Liedholm. 2002. Can web courses replace the classroom in principles

of microeconomics? American Economic Review 92 (2): 444-49.

Charlesworth, P., D. D. Charlesworth, and C. Vlcia. 2006. Students' perspectives of the

influence of web-enhanced coursework on incidences of cheating. Journal of Chemical Education

83 (9): 1368-75.

Coates, D., and B. R. Humphreys. 2001. Evaluation of computer-assisted instruction in

principles of economics. Educational Technology & Society 4 (2): 444-49.

Coates, D., B. R. Humphreys, J. Kane, and M. A. Vachris. 2004. 'No significant distance'

between face-to-face and online instruction: evidence from principles of economics.

Economics of Education Review 23 (6): 533-546.

Deal, W. F., III. 2002. Distance learning: teaching technology online (Resources in

Technology). The Technology Teacher 61 (8): 21-27.

Dickie, M. 2006. Experimenting: does it increase learning in introductory microeconomics?

Journal of Economic Education 37 (3): 267-288.

Edling, R. J. 2000. Information technology in the classroom: experiences and recommendations.

Campus - Wide Information Systems 17 (1): 10-15.



Grijalva, T. C., C. Nowell, and J. Kerkvliet. 2006. Academic honesty and online courses.

College Student Journal 40 (1): 180-6.

Grove, W. A., T. Wasserman, and A. Grodner. 2006. Choosing a proxy for academic aptitude.

Journal of Economic Education 37 (2): 131-48.

Hard, S. F., J. M. Conway, and A.C. Moran. 2006. Faculty and college student beliefs about

the frequency of student academic misconduct. Journal of Higher Education 77 (6):

1058-80.

Kennedy, K., S. Nowak, R. Raghuraman, J. Thomas, and S. F. Davis. 2000. Academic

dishonesty and distance learning: student and faculty views. College Student Journal 34

(2): 309-14.

Kerkvliet, J., and C. L. Sigmund. 1999. Can we control cheating in the classroom? Journal of

Economic Education 30 (4): 331-43.

Liefert, J. 2000. Measurement and testing in a distance learning course. Journal of Instruction

Delivery Systems 14 (2): 13-16.

Marburger, D. R. 2006. Does mandatory attendance improve student performance? Journal of

Economic Education 37 (2): 148-55.

Navarro, P. 2000. Economics in the cyberclassroom. Journal of Economic Perspectives 14 (2):

119-32.

Nowell, C., and D. Laufer. 1997. Undergraduate student cheating in the fields of business and

economics. Journal of Economic Education 28 (1): 3-12.

Passow, H. J., M. J. Mayhew, C. J. Finelli, T. S. Harding, and D. D. Carpenter. 2006. Factors

influencing engineering students' decisions to cheat by type of assessment. Research in

Higher Education 47 (6): 643-84.



Rovai, A. P. 2001. Online and traditional assessments: what is the difference? The Internet

and Higher Education 3 (3): 141-51.

Serwatka, J. A. 2003. Assessment in on-line CIS courses. Journal of Computer Information

Systems 44 (1): 16-20.

Shuey, S. 2002. Assessing online learning in higher education. Journal of Instruction Delivery

Systems 16 (2): 13-18.

Sosin, K. 1997. Impact of the web on economics pedagogy. Presented at the Allied Social

Sciences Association Meeting, January 5, 1997.

Stanca, L. 2006. The effects of attendance on academic performance: panel data evidence for

introductory microeconomics. Journal of Economic Education 37 (3): 251-66.

Taylor, S. S. 2002. Education online: off course or on track? Community College Week 14 (20):

10-12.

Vachris, M. A. 1999. Teaching principles of economics without 'chalk and talk': the experience

of CNU online. Journal of Economic Education 30 (3): 292-303.

Young, J. R. 2001. Texas colleges collaborate to offer online students convenient proctored

tests. Chronicle of Higher Education 47 (26): A43.



TABLES

Table 1: Descriptive Statistics

                        Summer 2004            Summer 2005         t test of difference
Variable                Mean         N         Mean         N      between 2004 and 2005 means

Exam 1                  65.41        24        70.40        38     -1.16
                        (17.99)                (15.62)
Exam 2                  84.79        24        84.57        37      0.07
                        (11.75)                (13.15)
Exam 3                  68.75        24        78.09        38     -2.89**
                        (12.18)                (12.55)
Final Exam              73.23        24        77.15        38     -1.32
                        (13.01)                (10.33)
GPA                      2.86        24         3.00        38     -0.99
                        (0.55)                 (0.54)
Sophomore=1              0.58        24         0.71        38     -1.05
                        (0.50)                 (0.46)
Junior=1                 0.21        24         0.12        38      1.33
                        (0.41)                 (0.33)
Senior=1                 0.04        24         0.18        38     -1.63
                        (0.20)                 (0.39)
Econ_Major=1             0.29        24         0.21        37      0.71
                        (0.46)                 (0.41)
Age                     20.70        22        20.50        37      0.31
                        (4.29)                 (3.03)

Standard deviations are in parentheses below the means.
* significant at the .10 Type 1 error level
** significant at the .05 Type 1 error level

Table 2: Definitions of Variables

Variable        Definition
AGE             Age in years
SOPHOMORE       1 if Sophomore, 0 otherwise
JUNIOR          1 if Junior, 0 otherwise
SENIOR          1 if Senior, 0 otherwise
ECON_MAJOR      1 if an Economics or Business major, 0 otherwise
EXAM1           Score on Exam 1
EXAM2           Score on Exam 2
EXAM3           Score on Exam 3
EXAM4           Score on Final Exam
GPA             Cumulative GPA at beginning of semester
PROCTOR         1 if exam proctored, 0 otherwise

Table 3: Determinants of Exam Scores (parameter estimates)

                             Summer 2004                               Summer 2005
Variable      Exam 1     Exam 2     Exam 3     Final Exam    Exam 1     Exam 2     Exam 3      Final Exam
Intercept     40.37**    63.90***   65.98***   71.28***      43.82***   66.32***   46.79***    41.08***
              (19.41)    (12.35)    (13.63)    (14.58)       (12.49)    (11.16)    (9.05)      (6.14)
GPA           8.76       7.38*      0.97       0.68          9.28**     5.92       10.42***    12.09***
              (6.67)     (4.25)     (4.69)     (5.01)        (4.13)     (3.66)     (2.99)      (2.03)
R-squared     0.0306     0.0786     0.0019     0.0008        0.1232     0.0695     0.2524      0.4972
F-ratio       1.73       2.96*      0.04       0.02          5.06**     2.61       12.15***    35.60***
N             24         24         24         24            38         38         38          38

Standard errors are in parentheses below the parameter estimates.
* significant at the .10 Type 1 error level
** significant at the .05 Type 1 error level
*** significant at the .01 Type 1 error level

NOTES

1
The studies consistently report that the highest incidence of cheating occurs among vocational majors such as business and engineering (Passow et al. 2006).
2
Passow et al. (2006) use a similar methodology. They estimate parallel models for exam cheating and homework cheating and conclude that the dramatic difference in the R-squared statistics is evidence that the factors determining the frequency of exam cheating and homework cheating are different.
3
Except for the proctored final exam in the summer 2005 class, there were no face-to-face meetings for either
class.
4
The exams are structured so that each student has a different exam but the exams are of equivalent difficulty. For example, on a question asking students to calculate the marginal propensity to consume, there are 5 alternative versions of the question, each differing in the specific numbers given. All 20 questions, including nonnumerical question types, are designed in this manner. When a student logs on, the WebCT software creates the exam by randomly selecting 20 questions, each from a pool of 5, and randomly ordering the responses for each question.
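A minimal sketch of the exam-assembly logic this note describes, assuming a simple in-memory representation of the question pools; WebCT's actual implementation is not documented here.

# Sketch of the randomized exam assembly described above (illustrative only).
import random

def build_exam(question_pools):
    # question_pools: 20 pools, each a list of 5 equivalent question versions;
    # each version is a dict with a 'stem' and a list of 'choices'.
    exam = []
    for pool in question_pools:
        version = random.choice(pool)      # draw one of the 5 versions
        choices = list(version["choices"])
        random.shuffle(choices)            # randomize the response order
        exam.append({"stem": version["stem"], "choices": choices})
    return exam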
5
The course was first offered online in summer 2004. The course description released at the time of enrollment in
February 2004 for summer school did not contain information as to whether or not exams would be proctored. The
course description for summer 2005 was the same as in the previous year. The decision to administer the final exam in
proctored format was made in late April, and announced to students during the first week of class in mid-May 2005.
This resulted in considerable inconvenience for some students, and in the following year the information was
incorporated in the course description released during the enrollment period. Because students did not know
beforehand whether or not the final exam in summer 2005 would be proctored, we believe that self-selection by
reason of assessment format is not an issue with our data.
6
A recent study of explanatory variables in research on student learning concluded that collegiate GPA is the best proxy for individual student aptitude for academic learning (Grove, Wasserman, and Grodner 2006).
7
We undertook one other approach to identifying outcomes that possibly reflect cheating behavior. Because the R-squared is much lower for the unproctored students on the third and fourth exams than on the first and second exams, one can speculate that students in trouble after the first two exams decided to cheat on the remaining two. We calculated a class rank based on the average of the first two exams and compared it to the rank based on the average of the last two exams. We identified 3 students (13 percent) whose increase in class rank after the second exam was so large that they were outliers relative to the other students in the sample. Of this group, one had a low GPA (2.09) and also had a large positive difference between the actual and predicted final exam scores.
