The Conscientious Consumer: Reconsidering the role of assessment feedback in student learning

RICHARD HIGGINS, PETER HARTLEY & ALAN SKELTON
Sheffield Hallam University and University of Sheffield, UK

Studies in Higher Education, 27(1), 53–64. DOI: 10.1080/03075070120099368
ABSTRACT This article reports the initial findings of a 3-year research project investigating the
meaning and impact of assessment feedback for students in higher education. Adopting aspects of a
constructivist theory of learning, it is argued that formative assessment feedback is essential to encourage
the kind of ‘deep’ learning desired by tutors. There are a number of barriers to the utility of feedback
outside the sphere of control of individual students, including those relating to the quality, quantity
and language of comments. But the students in the study seemed to read and value their tutors’
comments. Their perceptions of feedback do not indicate that they are simply instrumental ‘consumers’
of education, driven solely by the extrinsic motivation of the mark and, as such, desiring feedback
which simply provides them with ‘correct answers’. Rather, the situation is more complex. While
recognising the importance of grades, many of the students in the study adopt a more ‘conscientious’
approach. They are motivated intrinsically and seek feedback which will help them to engage with
their subject in a ‘deep’ way. Implications of the findings for theory and practice are discussed.
Introduction
The Importance of Formative Assessment
Black & Wiliam’s (2000) developing theoretical framework of formative assessment emphasises
the interactions between teachers, pupils and subjects within ‘communities of practice’.
They adopt aspects of a constructivist approach to learning (Vygotsky, 1962; Bruner, 1986,
1990) by implying that students are not simply receptacles for transmitted information, but
active makers and mediators of meaning within particular learning contexts.
This is a view reflected in the work of Biggs (1999). He argues that meaning is constructed
through learning activities and, therefore, teaching and learning must be about conceptual
change. Furthermore, he asserts that the ways students are assessed influence the quality of
their learning (see also Sadler, 1983; Brown, 1999; Gibbs, 1999; Hyland, 2000). He
therefore argues that curricula, assessment procedures and teaching methods should be
aligned so that curriculum objectives relate to higher order cognitive thinking. Formative
assessment is an essential part of this alignment since it provides feedback to both tutor and
student (Biggs, 1999). It provides tutors with a way of checking on students’ constructions
(Biggs, 1999), and students with a means by which they can learn through information on
their progress (Brown & Knight, 1994; Ding, 1998). Feedback from formative assessment
‘has the capacity to turn each item of assessed work into an instrument for the further
development of each student’s learning’ (Hyland, 2000, p. 234). There is plenty of evidence
of the benefits of formative assessment. For example, Black & Wiliam’s (1998) meta-analysis
of 250 research studies relevant to the subject of classroom formative assessment concluded
that formative assessment does make a positive difference to student learning. So, by
understanding teaching, assessment and learning as social practices, which involve the active
construction of meaning, we can see that formative assessment is vital for the kind of learning
valued so highly in higher education.
Feedback from formative assessment may take different forms (Hyland, 2000). However,
this article focuses on written tutor comments on written assignments. MacKenzie
(1974) commented on the process of tutoring by written correspondence at the Open
University, and suggested that, in this context, written feedback comments were often the
only source of feedback for students. This is becoming the case in all institutions as the
landscape of higher education continues to be transformed. The workload of tutors is
growing alongside an expansion in the number of students. At the same time, the use of
distance learning and new technologies is becoming more extensive. As a result, face-to-face
student–tutor contact time is diminishing, leading to a greater reliance on written
correspondence (whether paper-based or electronic). For example, in Hyland’s (2000) study of
university history students, 40% of those questioned claimed to have never had a face-to-face
tutorial on their assessment work.
There is growing research interest in the use of formative assessment feedback (Ecclestone,
1998). Yet, despite the significant position that written feedback comments occupy in
students’ experiences, and that, today, an important purpose of assessment is considered to
be the improvement of student learning (Gipps, 1994), this area, surprisingly, remains
relatively underresearched—particularly from students’ perspectives.
Nineteen students from two different subject units (level 1 Business and level 1
Humanities) across two institutions (a pre- and post-92 university in the North of England)
took part in the interviews. The interviews were conducted towards the end of semester two,
when the students already had some initial (albeit limited) experience of feedback. The
students in our study are diverse in terms of age, gender and background, in addition to
studying different units at different institutions.
The questionnaire allowed us to generate quantifiable data (Bryman, 1988) and to
identify general trends in light of the themes emerging from the interviews. The value of using
both qualitative and quantitative methods has been recognised by many social researchers
(for example, see Bryman, 1988; Layder, 1998), as have the particular advantages of
methodological triangulation in educational research (for example, see Parlett & Hamilton,
1972; Cohen et al., 2000; Hartley & Chesworth, 2000). The questionnaires were handed out
to students during lectures (again towards the end of semester two). We collected completed
questionnaires before the end of each lecture in order to maximise the response rate. We were
able to gain 94 responses (a 77% response rate).
I think they should be more personal really ’cause quite a lot of the comments are
similar to what other people got, you know, just reproduce them. So in a way, if they
were more personal and direct then it would be more helpful.
These comments suggest that students in our study perceive feedback negatively if it does not
provide enough information to be helpful, if it is too impersonal, and if it is too general and
vague to be of any formative use. Handwriting also seems to be a common problem. For
example, 40% of our questionnaire respondents often find feedback comments difficult to
read.
There may be numerous reasons for inconsistency and ‘poor quality’. The ways tutors
perceive both the role of feedback and their students are likely to influence what they provide.
For example (and while recognising that this is an oversimplification of the situation), some
tutors may wish to supply advice, while others will simply provide evaluative information as
a way of justifying the grade. Furthermore, some tutors may not see the point in attending
to the quality of their feedback comments if they are sceptical and cynical about whether
feedback is read at all (Ding, 1997). This latter perception may be compounded by tutors on
short units lacking the opportunity to see students’ future work, and to ascertain whether the
feedback they provided had any impact. But it may also stem from a belief that when, for
example, students do not take the opportunity given to them (by way of tutors’ office hours)
to seek further feedback, help and support, it is due to a lack of motivation or commitment.
In addition, tutors may not feel a need to produce detailed formative feedback for students
whose grades are satisfactory or of a high standard.
A further barrier to the use of formative feedback may be that some students increasingly
fail to understand the taken-for-granted academic discourses which underpin assessment
criteria and the language of feedback (Hounsell, 1987). According to Entwistle (1984, p. 1),
‘effective communication depends on shared assumptions, definitions, and understanding’.
But a study at Lancaster University found that 50% of the third-year students in one
academic department were unclear what the assessment criteria were (Baldwin, 1993, cited
in Brown & Knight, 1994). As one of our students noted: ‘I haven’t got a clue what I’m assessed
on’.
This is perhaps not surprising if tutors’ assessments of work require qualitative judgements
in a learning environment where there are rarely either correct or incorrect answers
(Sadler, 1989). For Sadler, qualitative judgements usually involve multiple criteria, and at
least some of these criteria will be ‘fuzzy’. In other words, they will be abstract constructs
which have no absolute meaning independent of particular contexts. Consequently, teachers
may recognise a good performance, yet struggle to articulate exactly what they are looking for
because conceptions of quality usually take the form of tacit knowledge. So, the very language
of assessment criteria, and consequently of feedback comments, can be difficult for students
to grasp (Creme & Lea, 1997). The results of studies by Hounsell (1987), Orsmond et al.
(1996, 1997, 2000), Lillis (1997), Street & Lea (1997), Ivanič (1998), Chanock (2000) and
Hartley & Chesworth (2000) echo the view that students often experience problems
interpreting the academic language underpinning assessment.
Our own research supports this suggestion. A concern for many of the students
interviewed was that comments are frequently either vague or too general. Often, feedback
comments employ the academic language used to express assessment criteria, but only 33%
of our respondents claimed to understand these criteria. An inability to fully comprehend the
meaning of assessment feedback may not necessarily prevent students from paying attention
to tutors’ comments, since they may unknowingly interpret them incorrectly yet still attempt
to utilise them. Nevertheless, this will almost certainly present an obstacle for many.

Time spent reading feedback comments (% of respondents):
5 minutes or less: 39
10–15 minutes: 42
15–30 minutes: 13
More than 30 minutes: 3
I do not read the feedback: 0
No response: 3
In light of the potential barriers to the efficacy of formative feedback—including the
impact of modularisation, the inconsistency and sometimes poor quality of feedback, and the
often tacit nature of the language underpinning comments—we might expect students to
disregard tutors’ written advice. But is this the case?
Normally I get the grade and then look through the self-assessment and the tutor’s
assessment, read the comments and … see what comments he’s made on the essay.
This finding is reinforced by Hyland’s (2000) study. He noted that the majority of the
students involved (from a range of institutions) seemed to try (even if only occasionally) to
use comments for future assignments.
The responses of many of the students in our study indicate a tendency to ‘bear comments in
mind’ for future work:
Well, I just try to take in what they’ve said as best you can, like, um, that’s obviously
a pointer for doing things in the future properly.
I probably would have read it [the feedback] so it would be in the back of my mind,
but I wouldn’t refer to it really closely or exactly or anything. I would probably be
aware of what I had to do, but not really, it wouldn’t be, like, in the forefront of my
mind or anything.
However, the situation may be complex. Although the two students here do not seem to use
feedback in the sense that they have it in front of them from a previous assignment when
constructing a new piece of work, reading it closely and attending to every comment, their
statements may imply a less ‘rigorous’ yet more ‘intuitive’ use of feedback. A more reflective
approach may have considerable benefits if desirable learning involves the development of
reflective skills. Clearly though, this area requires further research.
But what is it that is motivating them to seek improvement? Moreover, does the type of
motivation matter? We argue that it does. As already stated, there may be different ways of
reading and using feedback, and we anticipate that students’ motives for paying attention to
tutors’ comments will mediate the kinds of feedback comments they desire, and how and
under what circumstances they are likely to make use of them.
Features of written work rated as important by respondents (%):
Demonstration of knowledge: 97
Well structured: 89
Critical analysis: 89
Good style of writing: 79
Note: figures are based on responses to a 5-point Likert scale (1–5, with 1
representing ‘very important’ and 5 representing ‘not at all important’). Responses
of 1 and 2 were judged to represent ‘important’.
Students were asked to rate different types of feedback comment (see Table V). Comments
rated as important by over 75% of respondents include those that indicate the grade, correct
mistakes and advise how the student can improve. However, comments that explain mistakes
and that focus on the level of argument and of critical analysis are also rated as important.
The importance of feedback focusing on argument is reflected in many of our interview
responses:
I would like them [feedback comments] to be more general about the entirety of the
essay—how it’s laid out and how the argument has been formed and how to make
it more clear, things like that.
So it seems that, while the students in our study want feedback to provide them with a grade,
they also desire feedback which focuses on generic, ‘deep’ skills. It is possible that this is
because they perceive skills such as ‘critical analysis’ and ‘argument’ to be valued by their
tutors and rewarded with high marks. But here we offer an alternative explanation. If students
are concerned simply with obtaining the grades they desire with minimum effort, then we
would expect them to adopt a ‘surface’ approach to learning (as outlined by Entwistle, 1987).
This is because a surface approach is most strongly correlated with ‘extrinsic motivation and
narrowly vocational concerns’ (Entwistle, 1987, p. 19), while intrinsic motivation (such as
interest in a subject area) is most strongly (and positively) correlated with a deep approach.
Our data suggest that the majority of students in our study are, at least to some extent,
intrinsically motivated and, as such, value feedback comments which focus on skills relating
to a deep approach to learning.
Nevertheless, the good news may be that, despite barriers to its use, the potential for
formative feedback to improve student learning remains. But, to make the most of students’
enthusiasm for feedback and allow formative assessment to work, tutors need to take account
of the following. Firstly, while recognising institutional constraints and difficult workloads,
timely feedback is vital; comments should be returned to students as soon as possible after
the assignment is submitted. Interim feedback on a first draft or an essay plan might also be
productive. Secondly, it is not usually sufécient simply to tell a student where they have gone
wrong—misconceptions need to be explained and improvements for future work suggested.
Nor should comments focus solely on spelling and grammar. Fostering ‘higher order’ critical
skills may have more long-term educational value. Moreover, students may not view
comments on ‘surface’ aspects of their work as particularly relevant or useful. In addition,
providers of feedback cannot assume that the language they use is inherently meaningful to
students. As one of us has suggested elsewhere, frequently ‘tutors base their feedback on
implicit values and vocabulary that often mean nothing to the student’ (Higgins, cited in
Utley, 2000). Perhaps the introduction of some element of peer assessment may help
students to become more familiar with the meanings of the criteria upon which their work is
evaluated (although much care must be taken when designing peer assessment strategies if
their potential is to be realised (see Reynolds & Trehan, 2000)). Discussion between tutors
and students about tutors’ expectations may also help as might more open dialogue between
tutors themselves to prevent students receiving conflicting advice based on different meanings
across disciplines (Higgins et al., 2001).
Our findings should be treated tentatively. While this article provides a useful starting
point for identifying and analysing the issues involved in the provision and utility of tutors’
feedback comments, the meaning and impact of assessment feedback for students is an area
that still remains relatively underresearched, particularly from the students’ perspective. As
MacKenzie (1976, p. 58) stated 26 years ago, ‘much remains to be known, in any detail,
about the average student’s use of his [sic] tutor’s comments’. This apparently remains the
case today, yet, as we have demonstrated, there is clearly room for improvement.
We need to develop a clearer picture of how exactly students use feedback. We must also
investigate further students’ abilities to understand the academic discourses upon which the
language of feedback is often based. We need to develop a better understanding of the
student–feedback and student–tutor relationships, whilst recognising that there are complex
tensions between students’ motivations, their approaches to assessment, the variable feedback
they are presented with, and their attempts to utilise comments. Furthermore, we need to
understand how tensions between being grade-sensitive, and being motivated by a desire to
engage with higher education at a ‘deep’ level are played out in students’ lives—or in other
words, to understand what it means to be a conscientious consumer.
Correspondence: Richard Higgins, Learning and Teaching Institute, Sheffield Hallam University, Adsetts Centre, City Campus, Howard Street, Sheffield S1 1WB, UK; e-mail: [email protected]
REFERENCES
BECKER, H., GEER, B. & HUGHES, E. (1968) Making the Grade: the academic side of college life (London, John
Wiley).
BIGGS, J. (1999) Teaching for Quality Learning at University (Buckingham, Open University Press).
BLACK, P. & WILIAM, D. (1998) Assessment and classroom learning, Assessment in Education, 5, pp. 7–75.
BLACK, P. & WILIAM, D. (2000) A theoretical model for formative assessment? paper presented to British
Educational Research Association Annual Conference, Cardiff, 7–10 September.
BROWN, S. (1999) Institutional strategies for assessment, in: S. BROWN & A. GLASNER (Eds) Assessment Matters
in Higher Education: choosing and using diverse approaches (Buckingham, Open University Press).
BROWN, S. & KNIGHT, P. (1994) Assessing Learners in Higher Education (London, Kogan Page).
BRUNER, J. (1986) Actual Minds, Possible Worlds (Cambridge, MA, Harvard University Press).
BRUNER, J. (1990) Acts of Meaning (Cambridge, MA, Harvard University Press).
BRYMAN, A. (1988) Quantity and Quality in Social Research (London, Routledge).
CHANOCK, K. (2000) Comments on essays: do students understand what tutors write? Teaching in Higher
Education, 5, pp. 95–105.
COHEN, L., MANION, L. & MORRISON, K. (2000) Research Methods in Education, 5th edn (London, Routledge
Falmer).
CONNORS, R.J. & LUNSFORD, A.A. (1993) Teachers’ rhetorical comments on student papers, College
Composition and Communication, 44, pp. 200–223.
CREME, P. & LEA, M.R. (1997) Writing at University (Buckingham, Open University Press).
DING, L. (1997) Improving assessment feedback in higher education: voices from students and teachers, paper
presented to British Educational Research Association Annual Conference, University of York, 11–14
September.
DING, L. (1998) Revisiting assessment and learning: implications of students’ perspectives on assessment
feedback, paper presented to Scottish Educational Research Association Annual Conference, University of
Dundee, 25–26 September.
ECCLESTONE, K. (1998) ‘Just tell me what to do’: barriers to assessment-in-learning in higher education, paper
presented to Scottish Educational Research Association Annual Conference, University of Dundee, 25–26
September.
ENTWISTLE, N. (1984) Contrasting perspectives on learning, in: F. MARTON, D. HOUNSELL & N. ENTWISTLE
(Eds) The Experience of Learning (Edinburgh, Scottish Academic Press).
ENTWISTLE, N. (1987) A model of the teaching–learning process, in: J. T. E. RICHARDSON, M. W. EYSENCK &
D. W. PIPER (Eds) Student Learning: research in education and cognitive psychology (Milton Keynes, Open
University Press).
GIBBS, G. (1999) Using assessment strategically to change the way students learn, in: S. BROWN & A. GLASNER
(Eds) Assessment Matters in Higher Education: choosing and using diverse approaches (Buckingham, Open
University Press).
GIPPS, C. (1994) Beyond Testing: towards a theory of educational assessment (London, Falmer Press).
HARTLEY, J. & CHESWORTH, K. (2000) Qualitative and quantitative methods in research on essay writing: no
one way, Journal of Further and Higher Education, 24, pp. 15–24.
HIGGINS, R.A., SKELTON, A. & HARTLEY, P. (1999) Student approaches to learning and assessment: the
context of assessment feedback, in: J. HILL, S. ARMSTRONG, M. GRAFF, S. RAYNER & E. SADLER-SMITH
(Eds) Proceedings of the 4th Annual Conference of the European Learning Styles Information Network, 28 and
29 June 1999 (Lancashire, University of Central Lancashire).
HIGGINS, R.A., HARTLEY, P. & SKELTON, A. (2000) What do students really learn from tutors’ comments, in:
M. GRAAL & R. CLARK (Eds) Writing Development in Higher Education: partnerships across the curriculum,
Proceedings of the 6th Annual Writing Development in Higher Education Conference, April 1999 (Leicester,
Teaching and Learning Unit, University of Leicester).
HIGGINS, R.A., HARTLEY, P. & SKELTON, A. (2001) Getting the message across: the problem of
communicating assessment feedback, Teaching in Higher Education, 6(2), pp. 269–274.
HOUNSELL, D. (1984) Students’ conceptions of essay writing, unpublished PhD thesis, University of
Lancaster.
HOUNSELL, D. (1987) Essay writing and the quality of feedback, in: J. T. E. RICHARDSON, M. W. EYSENCK &
D.W. PIPER (Eds) Student Learning: research in education and cognitive psychology (Milton Keynes, Open
University Press).
HYLAND, P. (2000) Learning from feedback on assessment, in: P. HYLAND & A. BOOTH (Eds) The Practice of
University History Teaching (Manchester, Manchester University Press).
IVANIČ, R. (1998) Writing and Identity: the discoursal construction of identity in academic writing (Amsterdam,
John Benjamins).
IVANIČ, R., CLARK, R. & RIMMERSHAW, R. (2000) What am I supposed to make of this? The messages
conveyed to students by tutors’ written comments, in: M. R. LEA & B. STIERER (Eds) Student Writing in
Higher Education: new contexts (Buckingham, Open University Press).
KVALE, S. (1996) InterViews: an introduction to qualitative research interviewing (London, Sage).
LAYDER, D. (1998) Sociological Practice: linking theory and social research (London, Sage).
LILLIS, T. (1997) New voices in academia? The regulative nature of academic writing conventions, Language
and Education, 11, pp. 182–199.
MACKENZIE, K. (1974) Some thoughts on tutoring by written correspondence in the Open University,
Teaching at a Distance, 1, pp. 45–51.
MACKENZIE, K. (1976) Student reactions to tutor comments on the tutor-marked assignment (the TMA),
Teaching at a Distance, 5, pp. 53–58.
ORSMOND, P., MERRY, S. & REILING, K. (1996) The importance of marking criteria in peer assessment,
Assessment and Evaluation in Higher Education, 21, pp. 239–249.
ORSMOND, P., MERRY, S. & REILING, K. (1997) A study in self-assessment: tutor and students’ perceptions of
performance criteria, Assessment and Evaluation in Higher Education, 22, pp. 357–369.
ORSMOND, P., MERRY, S. & REILING, K. (2000) The use of student derived marking criteria in peer and
self-assessment, Assessment and Evaluation in Higher Education, 25, pp. 23–38.
PARLETT, M. & HAMILTON, D. (1972) Evaluation as illumination: a new approach to the study of innovatory
programmes, Occasional Paper No. 9 (Edinburgh, Centre for Research in the Educational Sciences).
PATTON, M.Q. (1990) Qualitative Evaluation and Research Methods, 2nd edn (London, Sage).
REYNOLDS, M. & TREHAN, K. (2000) Assessment: a critical perspective, Studies in Higher Education, 25, pp.
267–278.
SADLER, D.R. (1983) Evaluation and the improvement of academic learning, Journal of Higher Education, 54,
pp. 60–79.
SADLER, D.R. (1989) Formative assessment and the design of instructional systems, Instructional Science, 18,
pp. 119–144.
STREET, B. & LEA, M. (1997) Perspectives on Academic Literacies: an institutional approach, ESRC End of Award
Report, No. R000221557 (Swindon, Economic and Social Research Council).
SWANN, J. & ARTHURS, J. (1998) Empowering lecturers: a problem-based approach to improving assessment
practice, Higher Education Review, 31(2), pp. 50–74.
UTLEY, A. (2000) Students misread tutor’s comments, Times Higher Education Supplement, 8 September, p. 6.
VYGOTSKY, L.S. (1962) Thought and Language (Cambridge, MA, MIT Press).