Hoshower & Chen (2005) Persuading Students of Their Responsibilities in The Learning Process


Journal of College Teaching & Learning – January 2005 Volume 2, Number 1

Persuading Students Of Their Responsibilities In The Learning Process

Leon Hoshower, (Email: [email protected]), Ohio University, Athens
Yining Chen, (Email: [email protected]), Ohio University, Athens

ABSTRACT

Education is a two-sided coin, with teaching technique and curriculum on one side and student
effort and motivation on the other. Much educational research is directed predominantly at
the teaching side, while slighting the student's side. This study reports an experiment that
emphasizes the role of the student's effort in learning. The students in the experimental group
were asked to compare their individual effort and test score to the mean reported effort and test
score of the class. They were then asked to consider adjusting their efforts with the
hope of improving their performance. As a result, the students in the experimental group
increased their study hours and significantly increased their exam scores relative to the
control group students. The results of this study indicate that actively reminding students of their
effort and performance in course work has a positive effect on students' study effort, which can
ultimately translate into improved academic performance.

INTRODUCTION

Teaching and learning are two sides of the same coin. Just as a one-sided coin cannot exist, teaching
cannot exist without learning. Conversely, learning cannot exist without teaching, although a student
can be his/her own teacher. Learning outcomes are, therefore, the joint results of a university's input
through teaching, curriculum, and advising and the student's effort, motivation, and talent (Marchese, 1996). There
has been much discussion of curriculum and delivery systems by the AACSB, the Accounting Education Change
Commission, and a host of education literature. The suggestions for change include both curriculum issues and
delivery systems. Curriculum issues include coverage of international topics, ethics, and diversity, as well as
competencies in computer usage, problem solving, and written and oral communication. Suggested changes in
delivery systems include collaborative learning, service-based learning, problem-based learning (ill-structured
problems), and curriculum integration. Although each of these suggestions for change has merit, they all dwell on
the teaching side of the coin while almost ignoring the student's responsibility in the learning process.

Without the student learning side of the coin, all the recent uproar over curriculum and delivery systems can
be described as rearranging the deck chairs on the Titanic. The seating arrangement can be interesting and it keeps
the passengers occupied, but it does not improve the outcome. Likewise, the best curriculum and teaching
techniques do little to improve learning outcomes if the students devote little time or effort to learning. One survey
of full-time resident students at a dozen campuses revealed that the average student spends ten hours out of class per
week working on academic activities. In other words, ten hours in total are spent on reading, writing papers, doing
projects, preparing presentations, and studying (Marchese, 1996).

Given the limited effort of students on their side of the learning process, great gains in learning could be
achieved by making students aware of their role in the learning process and by increasing their effort. This paper
describes an experiment to make students more aware of the importance of their effort in the learning process. It
documents the resulting change in self-reported student effort and performance on exams.


LITERATURE REVIEW

The literature clearly shows a positive relationship between class attendance and overall course
performance (Attinasi, 1992; Brocato, 1989; Buckalew, Daly, and Coffield, 1986; Launius, 1997; Polachek,
Kniesner, and Harwood, 1978; Vidler, 1980). Other studies take a broader view of student behavior and find that
attending to academic requirements, such as attending class, reading the text, doing homework, studying, and
completing assignments, is an important factor in academic success (Graham, 1988; Lenning, 1982; Mock and
Yonge, 1969). In other words, the academically successful university student attends class consistently and takes
responsibility for completing class assignments (Lenning, 1982).

Prior studies have observed a positive correlation between self-regulation and academic performance
(Bouffard, Boisvert, Vezeau, and Larouche, 1995; Paris, Wasik, and Turner, 1991; Nolen and Haladyna, 1990;
Pintrich and De Groot, 1990; Ames and Archer, 1988). Specifically, students who had high concern for both
learning and performance reported more self-regulatory strategies and achieved higher academic performance. Self-
regulating students select strategies for their learning process and control and evaluate the effectiveness of these
strategies. In fact, many researchers have noted that in order to engage in this kind of strategic behavior, a person
must be motivated and predisposed to invest the required effort (Adler et al., 2000; Bouffard et al., 1995; Garcia
and Pintrich, 1996; Green and Foster, 1986; Maehr, 1976; Nolen, 1988; Paris, Wasik, and Turner, 1991; Pintrich
and De Groot, 1990). Some studies have tried to identify and explain the effect of personal characteristics on a
learner's self-regulation (Bandura, 1986; Brown, Armbruster, and Baker, 1986; Corno, 1986; Cullen, 1985; Eide,
Geiger, and Schwartz, 2001; Paris and Cross, 1983; Ryan, Connell, and Deci, 1985).

This study examines whether reminding students of their effort and performance in course work has a
positive effect on their attention to academic requirements and ultimately translates into higher academic
performance. Specifically, we investigate one approach that an instructor can use to promote students' self-
regulation, which can ultimately result in higher academic performance. Our approach was to ask each student
in the experimental group to compare his/her hours devoted to the course, class attendance, dedication to homework,
and exam scores with the class means of these measures. Our aim was to increase students' academic
performance by having them increase their own efforts. Furthermore, we wanted this increased student effort to flow
from the students' internal drive, rather than from an extrinsic reward system, such as awarding points for attendance
and homework. Based on the literature, we believed that a student's increased effort would yield increased
performance. This, in turn, would reinforce the student's effort and perhaps lead to more effort and better
performance in the future.

RESEARCH DESIGN

The experiment was performed at a mid-sized, mid-western university in the U.S.A. and was
designed around two overriding principles. First, the experiment was to be as non-intrusive to the students and the
classroom as possible, so that the students would be unaware that an experiment was being conducted. This avoids
possible confounding due to the Hawthorne effect. Consequently, the researchers sometimes sacrificed the
opportunity to gather potentially useful data in deference to this principle of non-intrusiveness. The second
principle was that the faculty administering the treatment would not have any personal motivation for the success of
the experiment and would be unaware of the overall experimental design.

The format of the 395-student “mega” section of Introductory Financial Accounting was conducive to these
design goals. The course consisted of a two-hour mass lecture in an auditorium on Tuesdays and Thursdays and
multiple sections of a small two-hour class on Fridays, which covered homework and students’ questions. There were
12 Friday sections staffed by six graduate assistants, each graduate assistant staffing two sections.¹

¹ Among the six graduate assistants, five were female and two were international students.


We assigned one section of each graduate assistant to the experimental group and one section to the control
group. The sections were assigned so that, as closely as possible, the Friday class meeting times in the control group
and the experimental group would be the same (see table 1). All of the treatment for the experimental group was
applied during these Friday sessions. Everything that occurred during the mass lectures was common to both the
experimental and control groups. A profile of the participating class sections is presented in table 1.

TABLE 1
Profile of Participating Class Sections

Section         Class Size   Friday Class Time
Control 1       42           12-2
Experiment 1    31           10-12
Control 2       42           12-2
Experiment 2    29           10-12
Control 3       36           10-12
Experiment 3    35           12-2
Control 4       32           10-12
Experiment 4    35           8-10
Control 5       28           10-12
Experiment 5    28           8-10
Control 6       28           8-10
Experiment 6    29           12-2

At each of the three exams, which occurred during Friday sessions, we asked each student to complete the self-
assessment questionnaire shown in table 2. Among the four questions asked in the questionnaire, three were used to
measure students’ learning effort (i.e., study hours, classes missed, and homework missed). The self-reported data
for class attendance may not be as accurate as actual data, but it would be nearly impossible to take attendance in a
395-person class without a seating chart. Even with a seating chart, students would be unlikely to sit in their
assigned seats unless they were rewarded with points for attendance. Since the researchers’ intent was to foster
intrinsic, self-motivated behavior, an extrinsic reward system of points for attendance would have been
contradictory.

In order to be non-intrusive and to encourage students to be more honest in their responses, we did not have
students place their names on the self-assessment questionnaire. The trade-off for this non-intrusive design is that
data were gathered at the section level, rather than the individual level, for all measures except test scores. The
questionnaire was collected in a pile separate from the exam. Since the exam took place in the Friday small sections,
the researchers could easily keep the responses separate by Friday class time and by graduate assistant. The fourth
question in the questionnaire serves as a validity check; details of the test and its results are presented in the
validity-test subsection below.

Validity Test of Data

To test the validity of the self-reported measurement of learning effort, we compared the actual attendance
of Friday sessions² to the self-reported Friday attendance. The average difference between the actual Friday
absences and self-reported Friday absences is .39 and .44 for the control and experimental groups, respectively. The
magnitude of these differences was not significantly different between the control and the experimental groups; the
t-test for the reported difference between the two groups yielded a p-value of .3158. We also calculated the
Pearson correlation between the actual and self-reported Friday absences and found no significant difference
between the two groups, with a p-value of .3799. From these results, we inferred that if any under-reporting of
Friday session absences existed, it was similar in both groups. Applying this inference to the other self-reported
data in this study gives us some limited assurance that the experimental group did not, whether due to the influence
of the experiment or to some other confounding reason, misreport its study hours, its regular classes not attended,
or its homework not attempted more than the control group did. Thus, we have some assurance that the comparisons
we made between the two groups were not biased by the self-reported data.

² The graduate assistants counted the number of students present in their Friday sessions, so we have actual Friday
attendance data on a section-by-section basis. To avoid intruding on the experiment, the grading-procedures section
of the course syllabus assured students that the Friday attendance records would not affect their grades.
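
To make the mechanics of this validity check concrete, the following sketch shows how such a comparison could be
run. This is our illustration, not the authors' code: the numbers are hypothetical placeholders, and we assume only
SciPy's standard two-sample t-test and Pearson correlation.

    import numpy as np
    from scipy import stats

    # Hypothetical under-reporting of Friday absences (actual minus self-reported);
    # the study's real values are not published at this granularity.
    control_diff = np.array([0.5, 0.3, 0.4, 0.2, 0.6, 0.4])
    experimental_diff = np.array([0.6, 0.4, 0.5, 0.3, 0.5, 0.4])

    # Equality test of mean under-reporting between groups (the paper reports p = .3158).
    t_stat, p_val = stats.ttest_ind(control_diff, experimental_diff)
    print(f"t = {t_stat:.3f}, p = {p_val:.4f}")

    # Correlation between actual and self-reported absences; the paper compares
    # these correlations across groups and reports p = .3799.
    actual = np.array([1, 2, 0, 3, 1, 2])
    reported = np.array([1, 1, 0, 2, 1, 2])
    r, p = stats.pearsonr(actual, reported)
    print(f"Pearson r = {r:.3f} (p = {p:.4f})")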


TABLE 2
Self Assessment Questionnaire

Please answer the following questions honestly. These answers will not affect your grade in any way. Do not put your name on
this sheet. Hand it in separately from your exam.

1. Since last midterm exam*, on average, I have worked about ____ hours per week (other than class time)
   on Accounting 201.
2. Since last midterm exam, the number of Tuesday or Thursday classes of Accounting 201 which I missed (other than for
   interviews, illness, and family emergencies) is ____. (There were 6 classes in total since last midterm.)
3. Since last midterm exam, the number of times (class periods) which I did not make an honest attempt to complete all
   the assigned Accounting 201 homework is ____. (There were 6 classes in total since last midterm.)
4. Since last midterm exam, the number of Friday classes of Accounting 201 which I missed (other than for interviews,
   illness, and family emergencies) is ____. (There were 3 Friday classes in total since last midterm.)

* For the questionnaire given at the first test, each question is phrased without "since last midterm exam."

The Treatment

The experimental treatment began when the first exam was returned to the students at the Friday session
following the exam. Each graduate assistant told his/her experimental section the entire class’s average test score,
average study hours per week, average number of mass lectures not attended, and average number of times a student
did not attempt all the homework problems. The graduate assistant then asked the students to compare these
averages to their own effort and performance and to consider whether they would like to change their effort. We
encouraged the graduate assistants to repeat this process of displaying the class’s aggregate data each Friday. The
graduate assistants told their control section only the test mean. This process of data collection and reporting to
students was repeated at the second exam. A third set of data was collected at the final exam. In keeping with our
unobtrusive design, we did not observe any of the Friday sessions. The graduate assistants reported to us that they
followed our instructions.
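
For concreteness, the class averages a graduate assistant would announce can be computed from the questionnaire
responses in a few lines. Below is a minimal pandas sketch; the column names and responses are invented for
illustration and are not the authors' data.

    import pandas as pd

    # Hypothetical questionnaire responses for one exam period (no names collected).
    responses = pd.DataFrame({
        "study_hours":     [5.0, 6.5, 4.0, 7.0],
        "lectures_missed": [1, 0, 2, 0],
        "homework_missed": [0, 1, 3, 1],
        "exam_score":      [78, 85, 64, 90],
    })

    # Class averages displayed to the experimental sections each Friday.
    averages = responses.mean()
    print(averages.round(2))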

RESULTS

Table 3 reports the changes in the learning effort measurements over the experimental periods and the
equality tests of those changes between the two groups. The changes are calculated by taking the differences in the
learning effort measurements (i.e., study hours, classes missed, homework missed, and exam scores) between the
first and second exams, between the second and third exams, and between the first and third exams. Since we are
interested in improvement in learning effort and results, our hypotheses are directional. Consequently, we used
one-tailed tests in analyzing all results.
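
As an illustration of this change-score comparison, the sketch below computes exam-1-to-exam-3 changes and runs a
one-tailed two-sample test. The per-student scores are hypothetical, and we assume SciPy >= 1.6 (for the
alternative argument) and Welch's unequal-variance test; this is not the authors' code.

    import numpy as np
    from scipy import stats

    # Hypothetical exam scores; the study's raw data are available from the authors.
    exam1_ctrl, exam3_ctrl = np.array([78., 80, 75]), np.array([70., 73, 68])
    exam1_exp,  exam3_exp  = np.array([76., 74, 77]), np.array([70., 69, 72])

    # Change scores from exam 1 to exam 3 for each group.
    change_ctrl = exam3_ctrl - exam1_ctrl
    change_exp  = exam3_exp  - exam1_exp

    # One-tailed test of H1: the experimental group declines less (improves more).
    t, p = stats.ttest_ind(change_exp, change_ctrl, equal_var=False,
                           alternative="greater")
    print(f"t = {t:.2f}, one-tailed p = {p:.3f}")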

The most important result revealed by table 3 is that over the intervention period (from exam 1 to 3) the
changes in exam scores were significantly different (at the .03 level) between the experimental and the control
groups. Though both groups' exam scores declined, the magnitude of the decline was significantly smaller for the
experimental group than for the control group. This result indicates that although the experimental group did not
achieve an exam score significantly higher than the control group at the end of the experiment (as shown in table 4),
the changes in exam scores between the two groups turned out to be significantly different, as the experimental
group gradually improved more than the control group across the three exams.³ Other results in table 3 show that
over the experiment period (from exam 1 to 3), the experimental group increased its study hours more than the
control group at a level that approached significance (.09). The changes in the percentage of classes missed and the
percentage of homework missed, however, were not significantly different between the two groups.

³ Due to possible uneven drop rates between the two groups, we reran the equality tests for exam scores, excluding
the scores of those who dropped the class. The conclusion remains unchanged: the changes in exam scores between
the two groups were significantly different, as the experimental group gradually improved more than the control
group across the three exams. The new results are not presented but are available from the authors.

TABLE 3
Changes of Learning Effort Measurements and
Equality Tests (t Statistics / p-Values)* between Control and Experimental Groups

                      Study Hours       Class Missed %     Homework Missed %   Exam Score
Time Periods          Control  Exp.     Control  Exp.      Control  Exp.       Control  Exp.
From Exam 1 to 2      -0.29    -0.02    +9.33%   +9.1%     +13%     +14%       -7.72    -7.29
                      (0.76 / >0.10)    (0.72 / >0.10)     (-0.25 / >0.10)     (1.49 / 0.10)
From Exam 2 to 3      -0.35    +0.31    +0.8%    -1.2%     -8.7%    -1.9%      +0.17    +0.95
                      (0.75 / >0.10)    (0.24 / >0.10)     (-0.82 / >0.10)     (0.04 / >0.10)
From Exam 1 to 3      -0.64    +0.29    +10.13%  +7.93%    +4.3%    +12.1%     -7.55    -6.34
                      (1.56 / 0.09)     (0.96 / >0.10)     (-0.82 / >0.10)     (2.34 / 0.03)

* The results of one-tailed t tests and p-values are reported in parentheses throughout this table. The signs of the
t statistics were made consistent for interpretation purposes: all positive t statistics represent a larger improvement
(or a smaller decline) for the experimental group than for the control group; all negative t statistics denote the
opposite.

Supporting table 3, table 4 presents the cross-sectional means of the learning effort measurements and the
equality test results between the control and experimental groups. From the cross-sectional averages, we do not find
evidence that at the end of the experiment (exam 3) the experimental group performed significantly better than the
control group in terms of study hours, class attendance, homework completion, or exam score. However, some
encouraging developments can be observed over the course of the experiment. For example, the average study hours
reported by the experimental group (5.29) are lower (though not significantly) than those of the control group (5.45)
at the first exam. At the second exam, the experimental group starts to report more study hours (5.27) than the
control group (5.16), though the difference is not significant. At the third exam, the experimental group continues to
report more study hours (5.58 versus 4.81), and the magnitude of the difference has increased to a level that is close
to significant (p = .06). When we look at the individual section comparisons, this trend of improvement in study
hours can be observed in almost all six pairs of sections. This result is consistent with the findings in table 3.

Another encouraging development over time is the exam score. The experimental group begins with an
average score significantly lower than the control group's at the first exam, which serves as a benchmark. As the
experiment proceeds, the difference diminishes, and the p-value increases from a significant level of .05 at the first
exam to non-significant levels of .12 and .21 at the second and third exams, respectively. Again, this gradual
improvement in exam scores for the experimental group over the control group is not unique to, or dominated by,
one section; it can be observed in most of the sectional pair comparisons. This is, again, consistent with the findings
in table 3. Other results reported in table 4 include the percentage of classes missed and the percentage of homework
missed. The control group and the experimental group did not differ significantly on these two measurements at the
end of the experiment.

Statistical Controls

To improve the precision of the statistical tests, we apply analysis of covariance to the cross-sectional
averages reported in table 4. Analysis of covariance adjusts the observed response variable for the effects of an
uncontrollable nuisance (concomitant) variable (Montgomery, 1984). In our case, the intention is to remove (adjust
for) the effect of initial inequality when testing for differences in subsequent performance measures between the
two groups. Thus, the initial performance measured at the first exam is used as the benchmark (covariate).
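
The covariate adjustment can be illustrated as a one-covariate ANCOVA fit with an ordinary least-squares model,
with the exam-1 score entering as the covariate. The sketch below uses statsmodels with invented column names and
hypothetical numbers; it shows the standard technique, not the authors' exact computation.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical section-level data: group (0 = control, 1 = experimental),
    # exam-1 benchmark score, and exam-3 score.
    df = pd.DataFrame({
        "group": [0, 1, 0, 1, 0, 1],
        "exam1": [78.5, 76.2, 79.1, 73.4, 80.5, 77.7],
        "exam3": [70.9, 69.9, 71.2, 66.0, 68.1, 69.6],
    })

    # ANCOVA: exam-3 score regressed on group, adjusting for the exam-1 covariate.
    model = smf.ols("exam3 ~ C(group) + exam1", data=df).fit()
    print(model.summary())

    # Covariate-adjusted group means: predict each group at the overall mean of exam1.
    grid = pd.DataFrame({"group": [0, 1], "exam1": df["exam1"].mean()})
    print(model.predict(grid))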

TABLE 4
Cross Sectional Means of Learning Effort Measurements and
Equality Tests (p-Values)* between Control and Experimental Groups

Panel A: Measurement at Exam 1

Section           Sample Size   Study Hours   Class Missed %   Homework Missed %   Exam Score
Control 1         42            5.82          6.17             43.8                77.36
                                (0.23)        (0.29)           (0.11)              (0.45)
Experimental 1    31            6.58          4.50             33.4                76.94
Control 2         43            6.03          2.83             31.3                78.58
                                (0.15)        (0.27)           (0.35)              (0.46)
Experimental 2    29            5.16          5.00             34.5                78.24
Control 3         36            5.97          4.67             18.0                79.06
                                (0.00)        (0.24)           (0.00)              (0.04)
Experimental 3    31            3.91          6.83             47.7                73.39
Control 4         30            4.69          3.00             36.8                76.33
                                (0.13)        (0.16)           (0.04)              (0.17)
Experimental 4    35            5.60          5.17             23.8                72.49
Control 5         27            5.02          4.00             32.5                80.48
                                (0.34)        (0.33)           (0.16)              (0.20)
Experimental 5    31            5.38          3.17             24.7                77.74
Control 6         27            4.56          4.00             34.0                79.67
                                (0.21)        (0.22)           (0.24)              (0.26)
Experimental 6    30            5.33          6.33             38.7                77.45
Control Average   205           5.45          4.17             33.0                78.48
                                (0.33)        (0.19)           (0.35)              (0.05)
Experiment Ave.   187           5.29          5.17             31.7                76.20

* The results of one-tailed t tests (i.e., p-values) are reported in parentheses throughout this table.

Panel B: Measurement at Exam 2

Section           Sample Size   Study Hours   Class Missed %   Homework Missed %   Exam Score
Control 1         41            5.22          16.0             47.7                69.07
                                (0.32)        (0.23)           (0.28)              (0.34)
Experimental 1    30            5.76          11.7             42.7                67.47
Control 2         43            6.08          13.2             34.2                71.02
                                (0.01)        (0.30)           (0.01)              (0.28)
Experimental 2    26            3.88          16.7             55.0                73.00
Control 3         36            5.59          8.0              39.7                70.44
                                (0.47)        (0.07)           (0.17)              (0.04)
Experimental 3    32            5.52          15.7             50.0                63.97
Control 4         31            4.61          13.5             56.7                71.29
                                (0.35)        (0.37)           (0.17)              (0.18)
Experimental 4    33            5.14          15.0             47.2                67.61
Control 5         26            3.60          19.5             55.5                71.23
                                (0.06)        (0.08)           (0.03)              (0.38)
Experimental 5    28            5.25          8.3              32.3                72.43
Control 6         27            4.42          15.7             19.7                70.30
                                (0.09)        (0.49)           (0.13)              (0.30)
Experimental 6    29            6.14          15.5             42.5                72.28
Control Average   204           5.16          13.5             46.0                70.76
                                (0.41)        (0.39)           (0.48)              (0.12)
Experiment Ave.   178           5.27          14.3             45.7                68.91

Covariate Adjusted Average - Adjusted by the Measurement at Exam 1
Control Average   204           4.91          15.0             48.0                69.43
                                (0.22)        (0.22)           (0.28)              (0.21)
Experiment Ave.   178           5.29          13.2             45.2                70.60

Panel C: Measurement at Exam 3

Section           Sample Size   Study Hours   Class Missed %   Homework Missed %   Exam Score
Control 1         39            5.60          17.7             42.1                70.58
                                (0.27)        (0.41)           (0.16)              (0.48)
Experimental 1    28            6.58          19.1             53.6                70.71
Control 2         41            5.63          12.1             25.4                70.49
                                (0.20)        (0.39)           (0.05)              (0.32)
Experimental 2    26            7.10          14.0             40.9                72.00
Control 3         34            4.85          9.7              37.0                71.18
                                (0.23)        (0.47)           (0.32)              (0.07)
Experimental 3    28            5.56          9.6              41.9                66.00
Control 4         29            3.54          19.4             45.9                70.76
                                (0.32)        (0.46)           (0.24)              (0.11)
Experimental 4    32            4.00          18.9             53.4                66.84
Control 5         26            3.61          17.6             40.9                68.13
                                (0.03)        (0.02)           (0.16)              (0.30)
Experimental 5    28            5.24          5.1              30.6                69.55
Control 6         26            5.23          10.1             33.3                72.54
                                (0.46)        (0.26)           (0.12)              (0.46)
Experimental 6    28            5.35          14.0             45.6                72.14
Control Average   195           4.81          14.3             37.3                70.93
                                (0.06)        (0.30)           (0.06)              (0.21)
Experiment Ave.   170           5.58          13.1             43.8                69.86

Covariate Adjusted Average - Adjusted by the Measurement at Exam 1
Control Average   195           4.78          14.7             37.4                69.80
                                (0.06)        (0.32)           (0.09)              (0.33)
Experiment Ave.   170           5.60          13.1             44.3                70.34

The average cross-sectional means and equality test results after the covariate adjustment are presented
in the last rows of panels B and C. The data in panel A were gathered before any treatment was applied;
consequently, no adjustment to panel A is necessary. Comparing the cross-sectional averages with the
covariate-adjusted averages in table 4, we observe only minor changes in most cases. The new results, however,
strengthen the conclusions drawn earlier. For instance, after adjusting for the initial exam score as a covariate, the
experimental group actually reports higher (adjusted) scores on both the second and third exams than the control
group, although the differences remain insignificant.

CONCLUSIONS AND RECOMMENDATIONS

In contrast to most educational research, which concentrates on curriculum issues and delivery systems of
teaching, this study emphasized the role of the student's effort in learning. After each midterm test, the members of
the experimental group were asked to compare their individual effort and test score to the mean reported effort and
test score of the class. They were then asked to consider adjusting their efforts with the hope of
improving their performance. Being informed and reminded of the class's average learning effort and test score,
the experimental group was expected to become more conscious of the impact of their effort on their
performance.

In fact, the results suggest that the experimental group increased their study hours relative to the control
group throughout the experimental period. This might have contributed to the significant improvement in the
experimental group's exam scores relative to the control group's. Table 3 shows that the improvements in exam
scores and study hours occurred during the period from exam one to exam two and again from exam two to the
final exam. This continued improvement in exam scores and increase in study hours of the experimental group
relative to the control group was not dominated by one section; it can be observed in most of the sectional pair
comparisons in both periods. For the two other measures of effort (i.e., regular class attendance and homework),
the experimental group did not differ significantly from the control group.

To strengthen internal validity, this experiment was designed to eliminate, as much as practical, the
factors that could bias the results toward success. It was also conducted so as to minimize intrusion upon the
students and the classroom. Although this non-intrusive approach was employed to increase internal validity, it is
also a strength for practical implementation: the technique requires little effort by the instructor and uses little
precious classroom time.

Although the design emphasized internal validity, the experiment's external validity is also good. The
experiment used six different teaching assistants to apply the treatment, which mitigated the possible impact of any
particular teaching style or instructor personality on the results. The majority of students were non-business
majors from a number of different colleges of the university, and class ranks were about evenly distributed among
sophomores, juniors, and seniors.

This treatment could be enriched or varied to increase its effect. For instance, the importance of students'
responsibility in the learning process could be stressed; the correlation between students' effort and course
performance could be emphasized with supporting evidence from prior research; and encouragement to increase
effort could be given regularly. The instructor might also ask each student to set goals (e.g., target grade, study
hours, attendance, homework completion) at the beginning of the semester and provide opportunities for feedback
and subsequent self-examination.

The results of this study indicate that the instructor may be able to play an active role in promoting students'
self-regulation, which can ultimately result in higher academic performance. Actively reminding students of their
effort and performance in course work has a positive effect on students' effort (study hours), which ultimately
translates into improved academic performance.

Note: Readers may contact the authors for data.

REFERENCES

1. Adler, R. W., M. J. Milne, and C. P. Stringer. 2000. Identifying and overcoming obstacles to learner-
centered approaches in tertiary accounting education: A field study and survey of accounting educators’
perceptions. Accounting Education 9(2): 113-134.
2. Ames, C., and J. Archer. 1988. Achievement goals in the classroom: Students' learning strategies and
motivation processes. Journal of Educational Psychology 80: 260-267.
3. Attinasi, L. C., Jr. 1992. Rethinking the study of the outcomes of college attendance. Journal of College
Student Development 33: 61-70.
4. Bandura, A. 1986. Social Foundations of Thought and Action: A Social Cognitive Theory. Englewood
Cliffs, NJ: Prentice Hall.
5. Bouffard, T., J. Boisvert, C. Vezeau, and C. Larouche. 1995. The impact of goal orientation on self-
regulation and performance among college students. British Journal of Educational Psychology 65: 317-
329.
6. Brocato, J. 1989. How much does coming to class matter? Some evidence of class attendance and grade
performance. Educational Research Quarterly 13: 2-6.
7. Brown, A. L., B. B. Armbruster, and L. Baker. 1986. The role of metacognition in reading and studying. In
J. Orasanu (Ed.), Reading Comprehension: From Research to Practice: 49-76. Hillsdale, NJ: Erlbaum.
8. Buckalew, L. W., J. D. Daly, and K. E. Coffield. 1986. Relationship of initial class attendance and seating
location to academic performance in psychology classes. Bulletin of the Psychonomic Society 19: 357-360.
9. Corno, L. 1986. The metacognitive control components of self-regulated learning. Contemporary
Educational Psychology 11: 333-346.
10. Cullen, J. L. 1985. Children's ability to cope with failure: Implications of a metacognitive approach for the
classroom. In D. L. Forrest-Pressley, G. E. Mackinnon, and T. G. Waller (Eds.), Metacognition, Cognition
and Human Performance 2: 267-300. New York: Academic Press.
11. Eide, B. J., M. A. Geiger, and B. N. Schwartz. 2001. The Canfield Learning Styles Inventory: An assessment
of its usefulness in accounting education research. Issues in Accounting Education 16 (3): 341-365.
12. Garcia, T. and P. R. Pintrich. 1996. The effect of autonomy on motivation and performance in the college
classroom. Contemporary Educational Psychology 21: 477-486.
13. Graham, S. 1988. Indicators of motivation in college students. In C. Adelman (Ed.), Performance and
Judgment: Essay in Principle College Study Learning: 163-186. Washington, DC: Office of Research,
Government Printing Office.
14. Green, L. and D. Foster. 1986. Classroom intrinsic motivation: Effects of scholastic level, teacher
orientation, and gender. Journal of Educational Research 80: 34-39.
15. Launius, M. H. 1997. College student attendance: Attitudes and academic performance. College Student
Journal 31: 86-92.
16. Lenning, O. T. 1982. Variable-selection and measurement concerns. In E. T. Pascarella (Ed.), Studying
Student Attrition: 35-53. San Francisco, CA: Jossey-Bass.
17. Maehr, M. L. 1976. Continuing motivation: An analysis of a seldom considered educational outcome.
Review of Educational Research 46: 443-462.
18. Marchese, T. J. 1996. Resetting expectations. Change 28: 4.
19. Mock, K. and G. Yonge. 1969. Students' intellectual attitudes, aptitudes, and persistence at the University
of California. Center for Research and Development in Higher Education, Berkeley, CA: University of
California Press.
20. Montgomery, D. C. 1984. Design and Analysis of Experiments. John Wiley & Sons, New York.
21. Nolen, S. B. 1988. Reasons for studying: motivational orientations and study strategies. Cognition and
Instruction 5: 269-287.


22. Nolen, S. B. and T. M. Haladyna. 1990. Personal and environmental influences on students' beliefs about
effective study strategies. Contemporary Educational Psychology 15: 116-130.
23. Paris, S. G. and D. R. Cross. 1983. Ordinary learning: Pragmatic connections among children's beliefs,
motives, and actions. In J. Bisanz, G. L. Bisanz, and R. Kail (Eds.), Learning in Children: Progress in
Cognitive Development Research: 137-169. New York: Springer-Verlag.
24. Paris, S. G., B. A. Wasik, and J. C. Turner. 1991. The development of strategic readers. In P. D. Pearson
(Ed.), Handbook of Reading Research: 137-169. New York: Springer-Verlag.
25. Pintrich, P. R. and E. V. De Groot. 1990. Motivational and self-regulated learning components of classroom
academic performance. Journal of Educational Psychology 82: 33-40.
26. Polachek, S. W., T. J. Kniesner, and H. J. Harwood. 1978. Educational production functions. Journal of
Educational Statistics 3: 209-231.
27. Ryan, R. M., J. P. Connell, and E. L. Deci. 1985. A motivational analysis of self-determination and self-
regulation in education. In C. Ames and R. Ames (Eds.), Research on Motivation in Education, Vol. 2: 13-
51. New York: Academic Press.
28. Vidler, D. C. 1980. Curiosity, academic performance, and class attendance. Psychological Reports 47: 589-
590.

