Improving The Validity of Objective Assessment in Higher Education - Steps For Building A Best-In-Class Competency-Based Assessment Program
DOI: 10.1002/cbe2.1058
CASE STUDY
1 | INTRODUCTION

All tests are not created equal! Arguably, what one considers "best-in-class" in one industry may differ markedly from best practice in another. Consider the differences between traditional instructor-led and competency-based education (CBE) programs in higher education. In the traditional instructor-led approach, students are awarded credit hours per "seat time" of instruction, and knowledge is transmitted from teacher to student through some type of lecture or discourse (Johnston, 2011). As participants in this system, all students are taught the same materials at the same point in time, resulting in inefficient use of students' and teachers' time. Students who do not learn quickly enough fail, rather than being allowed to succeed at their own pace, leading some to argue that student learning in higher education is one of the least sophisticated aspects of the teaching and learning process (James, 2003).

In contrast, CBE programs strive to balance a variety of learning approaches, requiring students to master key concepts before graduating. Students learn at their own pace and earn their degree by demonstrating knowledge and skill in required subject areas through a series of carefully designed competency-based assessments. As the popularity of CBE programs continues to rise, those programs will be scrutinized by students and employers alike, and their credibility depends largely on the quality of the assessments they use (McClarty & Gaertner, 2015).

Despite these differences, many of the principles of assessment development in the competency-based education arena can be applied to the broader higher education assessment community. Several of these principles, discussed more thoroughly in the remainder of this paper, include the creation of a value chain of development, the use of subject matter experts in the workplace to guide the nature of assessments, the flexible and timely use of objective assessments to measure more basic levels of cognitive reasoning, and the need to include real-world relevance and practical applications throughout the assessment experience.
The value chain represents the activities and products necessary to execute a full assessment solution, from the creation of assessment content to the delivery of remediation services and, ultimately, the graduation of competent students. These activities include assessment design, measurement activities, assessment libraries, score analysis, and prescriptive remediation. This article focuses on the first of these activities, assessment design, and articulates 12 steps for building a best-in-class competency-based assessment program. The process is supervised by professional subject matter experts (SMEs), content developers, and psychometricians, and adheres to professional and technical standards ensuring assessment reliability, validity, and fairness, notably the guidelines of the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education (Standards, 2014).

In the remaining sections of this paper, we briefly explore the history of objective assessment in higher education, studying its historical antecedents, and define next steps toward building a standards-based community of practice. We operationally define competency-based education, paying particular attention to the term "competence" and what it means in the context of maintaining standards for assessment validity. We then offer 12 steps for building a best-in-class objective assessment program. We conclude with case study metrics from Western Governors University's assessment portfolio, describing four key assessment performance indicators and documenting a commitment to continuous quality improvement through quarterly assessment reviews.

2 | OVERVIEW OF OBJECTIVE ASSESSMENT IN HIGHER EDUCATION

Scant literature exists concerning sound assessment development principles in higher education, especially as they relate to ensuring student competence in critical academic and professional domains. One of the first efforts in this area was Robert F. Mager's 1962 book "Preparing Instructional Objectives," which highlights the three components needed for learning objectives: a performance that will be measured in some way, conditions that identify what is to be allowed in the performance (as well as what is not), and criteria that explain how the performance will be measured. Mager's work served as a unique bridge between academic and professional demonstrations of learner ability.

In another early report, "A Nation at Risk," published by the U.S. Department of Education (1983), governors and legislators recognized that postsecondary education was a powerful engine for economic and workforce readiness. Institutions of higher learning would be expected to raise standards and expectations by, among other things, advancing standardized achievement tests at major transition points from one level of education to another, particularly from high school to college and work.

Then, in the fall of 1985, the First National Conference on Assessment in Higher Education was held in Columbia, South Carolina, where three major recommendations were identified: (a) that high expectations be established for students; (b) that students be involved in active learning environments; and (c) that students be provided with prompt and useful feedback (Banta, 2002). Of particular note, conference organizers recognized the need to redefine the term "assessment" in higher education, as it meant different things to different people.

The most established definition had its roots in learning mastery (Bloom, 1968), where assessment referred to the process of determining an individual's mastery of complex abilities. Mastery learning maintains that students must achieve a level of competence (e.g., 70%) in required knowledge before moving forward to learn subsequent information. If a student does not achieve mastery, they are given additional instructional support and then tested again. This cycle continues until the learner accomplishes mastery, after which they may move on to the next stage. Yet, despite its popularity among researchers and assessment designers of the time, few, if any, within higher education put mastery learning into practice.

A far different meaning emerged from K-12 practice, where the term assessment described large-scale testing programs like the federally funded National Assessment of Educational Progress (NAEP). The primary objective of NAEP was not to examine individual mastery, but rather to benchmark school and district performance in the name of accountability. Finally, a third tradition of use defined assessment as a special kind of program evaluation, whose purpose was to gather evidence to improve curricula and pedagogy. Again, the focus was on aggregate, not individual, performance mastery.

This begs the question: why have institutions of higher learning been so hesitant to adopt the mastery-learning approach? Perhaps, as Boud (2000) explains, assessment in higher education is confronted with multiple and sometimes contradictory responsibilities: assessment is meant to inform student learning even as it sorts students into those who pass and those who fail; assessment measures learning outcomes but also compares students with one another; assessment should be objective and individually accountable but must evaluate the attainment of dispositions such as creativity, leadership, and imagination. These contradictory views have led some to criticize the current state of assessment in higher education, arguing that it has little effect on educational quality and that accrediting agencies require institutions to invest time and resources in collecting evidence on student learning even though doing so does not improve academic quality (Gilbert, 2015).

Finally, faculty attitudes toward, and expertise in, what constitutes valid assessment design have produced sizable discord in higher education. Faculty have been shown to hold mixed opinions on the purposes of assessment, depending on their attitudes about teaching and learning (McLellan, 2004; Murray & MacDonald, 1997). According to Fletcher, Meyer, Anderson, Johnston, and Rees (2012), those who view teaching and learning as the transmission of knowledge from teacher to student are likely to view assessment as a method to test students' ability to reproduce information.
In contrast, those who see teaching and learning as facilitating critical thinking and knowledge transfer view assessment as an integral part of the learning process. As Hattie (2009) argues, in higher education "we implicitly trust our academics to know what they value in their subjects, to set examinations and assignments, to mark reliably and validly, and then to record these marks…and the students move on, upwards, and/or out" (p. 1). As a result, assessment activities within traditional higher education continue to be implemented as additions to the curriculum, designed for purposes of program evaluation rather than being integral to student mastery. "Good assessments are not once-and-done affairs," says education consultant Linda Suskie. "They are part of an ongoing, organized, and systematized effort to understand and improve teaching and learning" (Suskie, 2004, p. 50).

While more than three decades have passed since the First National Conference on Assessment in Higher Education, assessment as a movement is still striving for the cultural shift its original supporters had hoped for. The most basic debate that arises as faculty face assessment decisions is the extent to which educational outcomes can be specified and measured at all (Ewell, 2002). "For most institutions, assessment remains an add-on, done principally at the behest of the administration and sustained as a superstructure outside the traditional array of academic activities and rewards… most campuses still do assessment because somebody tells them to" (p. 23). Consequently, Carless (2009a,b) has called for work toward building trust in the integrity of assessment processes, while other higher education scholars highlight the need to develop a scholarship of assessment (Banta, 2002), which closely resembles an established community of practice.

In order for assessment to move into mainstream higher education, two fundamental changes need to occur (Ewell, 2002). The first is at the level of teaching and learning, and requires shifting assessment's conceptual paradigm from an evaluative stance to an emphasis on assuming responsibility for furthering student achievement. Forces that might aid this conceptual transformation include the growing acceptance of competency-based credentials, which are fast becoming a way of life in many occupations and professions. Second, higher education attendance patterns are fueling demands to reposition articulation and transfer from course-based "seat time" to performance-based attainment, forcing faculty to think far more concretely and collectively about learning outcomes and how to certify them.

3 | A BETTER APPROACH

In competency-based programs, students move through their degree programs by demonstrating their ability across a variety of content and professional domains. As students work within an individual domain of knowledge and skill, they encounter one or more topics, each consisting of a series of competencies with associated test objectives and performance tasks. Some of the test objectives found under a given competency are tested using objective exams; other objectives are tested with performance tasks or performance assessments.

While there is no universally accepted definition of CBE, for this paper we have operationalized CBE as "an outcome-based approach to education that incorporates modes of instructional delivery and assessment efforts designed to evaluate mastery of learning by students through their demonstration of the knowledge, attitudes, values, skills, and behaviors required for the degree sought" (Gervais, 2016, p. 99). We chose this definition because of its emphasis on student mastery through tangible demonstrations of the knowledge, skills, and abilities (KSAs) required by our degree programs. Demonstrating those KSAs within a competency-based framework is the first step toward ensuring the validity of educational programs and the utility of student outcomes.

3.1 | Standards of validity

Samuel Messick defined validity as "an integrated evaluative judgment of the degree to which empirical evidence and theoretical rationale support the adequacy and appropriateness of inferences and actions based on test scores or other modes of assessments" (Messick, 1989). From a validity perspective, the goal of assessment is to ensure that a comprehensive package of evidence-based learning exists to document not only knowledge competency, but also the patterns of reasoning, performance skills, and behaviors of students seeking a particular degree. Establishing validity with regard to competency-based assessment is an ideological, semantic, and experimental undertaking: competency must be adequately defined and logically defended before it can be experimentally explored.

3.1.1 | Construct validity

Establishing construct validity in relation to competency and student mastery has typically been a task highly prone to dispute. When assessments are designed to measure competence, the concept may be defined in a number of divergent ways. No clear consensus regarding the meaning of "competence" exists, as the term has been vulnerable to ambiguity, divergent definitions, and discord. It is nevertheless reasonable for an institution to define what subject competence entails in a clear and useful manner, whether or not one fully accepts that such a definition tells the whole story of what it means to be competent.

Once student competence has been concretely defined in ways that hinge on quantifiable student achievement across curricula and programs, it can be operationalized and measured. Put somewhat differently, despite the current lack of consensus regarding the definition of competence, we believe it is both justifiable and productive to furnish an operational, albeit limited and contestable, definition of competence that can subsequently be validated through sound assessment design and empirical data. We also believe that, in doing so, we provide employers with a tool that increases the likelihood of hiring genuinely competent graduates, beyond what they could achieve without the validated assessment.
Our working definition of competence is "the combination of experiences, dispositions, knowledge, abilities, skills, and performances that genuinely demonstrate the program goals and objectives." As such, an assessment should foretell a student's effectiveness in prospective situations in these terms, in a manner explicated more fully below.

4 | BEST-TEST DESIGN PRINCIPLES

Early in the design process, it is important to determine the mode of testing that will be employed. Two primary types of assessments are pervasive in competency-based education: objective assessments and performance assessments. Each has strengths and weaknesses that should be carefully weighed against the needs of the students, the nature of the competencies being measured, and the best way to demonstrate the learning objectives with the highest degree of validity that can be achieved. A brief description of these two assessment types follows; the remainder of the design principles focuses on the objective assessment.

The objective assessment is an important option for a competency-based assessment program. This type of assessment largely comprises multiple-choice, matching, and short-answer items that can be quickly scored and reported to students, a clear advantage for students moving at their own pace. Another advantage of this type of assessment is the ability to protect test security through the use of parallel test forms (very similar, but not exact replicas); random shuffling of answer choices; online, proctored environments; and the use of test centers. However, objective assessments in collegiate work also have their disadvantages. Many educators rightly fear that higher levels of cognitive reasoning cannot be thoroughly assessed with multiple-choice items. Objective tests are better equipped to assess recall, understanding, and, to some extent, the evaluation of concepts and ideas. Objective assessments are not well suited to tailoring assessments to students' interests and vocations, allowing for demonstration of theoretical application to practice, or evaluating the strengths and weaknesses of competing options. For these reasons, performance assessments are also an important way to assess student competence in the CBE environment.

Performance assessments are structured assessments that allow students tremendous flexibility in demonstrating competence. These assessments come in a wide variety of formats and can include essays, video presentations, case studies, lesson plans, simulations, and many others. Technological innovations continue to expand the opportunities for performance assessments to be used to assess student competence. The greatest advantage of this mode of testing is the real-world relevance that can be incorporated into the assessment, allowing the work to closely mirror the actual performance indicators sought by employers in the workplace. Because these assessments are most often graded by human evaluators, turnaround time to students is much longer than for objective assessments, and human subjectivity in grading is introduced. While security for performance assessments can be supported by a number of companies providing originality checks, the ability to verify individual student work remains somewhat limited because there is no formal mechanism to ensure that a particular student truly authored the submission.

Program design informs the overall purpose and development of all assessments required to authentically measure student competencies in a degree program. As part of program design, and with input from employers and from accreditation and licensure requirements, key work products are identified for each program. This ensures that all critical work products for the demonstration of competency are strategically incorporated and assessed within the program and can be modularized appropriately. Program design also defines the high-level goals for the program to which all course-level competencies are aligned. This ensures that course-level competencies support the higher, program-level outcomes and are aligned to the critical knowledge, skills (including noncognitive skills), and abilities expected of an individual pursuing a career in the program area. This also helps to guarantee that each degree program has been thoughtfully mapped to identify the most appropriate, relevant, and authentic measurement at both the program and course level.

The following best-test design principles further elaborate the process:

4.1 | Step 1. Establish the test purpose

The first step in the development of a best-in-class assessment program is to establish the test purpose. How will assessment scores be used? Is the assessment criterion or norm referenced? Will the emphasis of the assessment be on minimum competency, mastery of content, or predicting success? The answers to these questions have implications for many aspects of the assessment, such as its overall length, the average difficulty of the items, the conditions under which the assessment will be administered, and the type of information to be provided on score reports. Taking time at the beginning to establish a clear purpose helps to ensure that goals and priorities are more effectively met.

4.2 | Step 2. Conduct the job/task analysis

A job/task analysis (JTA) is conducted to identify the knowledge, skills, abilities, attitudes, dispositions, and experiences that a professional in a particular field ought to have. A well-conducted JTA helps provide validity evidence for the assessment that is later developed: it contributes to assessment validity by ensuring that the critical aspects of the field become the domains of content that the assessment measures.

4.3 | Step 3. Create the test specifications

After the overall content of the assessment has been established through the job/task analysis, the next step is to create the detailed test specifications.
Test specifications usually include a test description component and a test blueprint component. The test description specifies aspects of the planned assessment such as its purpose, the target audience, and the overall length. The test blueprint, sometimes also called the table of specifications, lists the major content areas and cognitive levels intended to be included on each assessment form, along with the number of items each form should include within each of these content and cognitive areas.

4.4 | Step 4. Develop the initial pool of items

Once the test specifications are complete, the item-writing phase of the assessment development project can begin. Typically, a panel of subject matter experts (SMEs) is assembled to write a set of assessment items. The panel is assigned to write items according to the content areas and cognitive levels specified in the test blueprint, and items are written for each of the item types identified in the test specifications. By far the most commonly used item type in standardized assessments is the multiple-choice item, owing to its relative advantages, including its ability to serve as a measure of higher cognitive skill. Some assessment programs use item specifications to further guide item writers with detailed requirements for each included item type. The total number of items a particular assessment needs depends on specific aspects of the program. After the items have been written, they are stored electronically in an item-banking software application.

4.5 | Step 5. Review the items

After a set of items has been written, an important next step in the assessment development process is to review the items. The items are reviewed for several different types of potential problems; thus, it is often helpful to have different types of experts conduct specific reviews. Subject matter experts review the items to confirm that they are accurate, clearly stated, and correctly keyed. Professional editors then review the items for grammar, punctuation, and spelling. Measurement experts review the items to be sure that they are not technically flawed. The items are also reviewed for fairness, to ensure that they will not disadvantage any examinee subgroup, and for sensitivity and language, so that they are appropriate for a diverse student population. The items may also be reviewed to ensure that they match the test specifications and are written at an appropriate readability level. The review process is valuable for identifying problems, which should then be corrected before the items are field-tested.

4.6 | Step 6. Field test the items

After test development and content staff have reviewed a set of drafted items, the items are ready for field testing on examinees. When items are field tested, they are not scored and they are not used to measure examinee performance; instead, the items themselves are evaluated. An item field test is conducted to collect information about how well the items are likely to function operationally, before they are actually used to contribute to a student's score. For that information to be accurate, it is important that the field test include appropriate examinees. The field test examinees should be representative of the test population; that is, they should be as similar to future test-takers as possible. The examinees should also be motivated; that is, they should be attempting to do as well as possible when they respond to the items. Items may be evaluated at different development phases, under pilot test, field test, and pretest conditions. In all these instances, data are collected to ensure the high quality of the items. That information is then used to conduct an item analysis and review of the items before the items are allowed to appear on operational assessment forms.

4.7 | Step 7. Conduct the item analysis

The item analysis is an important phase in the development of an assessment program. In this phase, statistical methods are used to identify any items that are not functioning well. If an item is too easy, too difficult, failing to show a difference between skilled and unskilled examinees, or even scored incorrectly, an item analysis will reveal it. The two most common statistics reported in an item analysis are the item difficulty, which is the proportion of examinees who responded to the item correctly, and the item discrimination, which is a measure of how well the item distinguishes between examinees who are knowledgeable in the content area and those who are not.

An additional analysis that is often reported is the distractor analysis, which provides a measure of how well each of the incorrect options contributes to the quality of a multiple-choice item. Once the item analysis information is available, an item review is often conducted.

4.8 | Step 8. Assemble the test forms

After draft items have been reviewed and field-tested, and item analysis statistics have been obtained, operational assessment forms can be assembled. Each new assessment form for the exam program is assembled to satisfy all of the elements in the test specifications, and particularly in the test blueprint. Items are selected from the bank to satisfy the content and cognitive proportions specified in the test blueprint, as well as to meet certain statistical criteria. This process helps to ensure that each resulting assessment form satisfies the intended purposes of the assessment program. Most programs follow this process to develop not just a single assessment form but multiple forms. These multiple assessment forms should be assembled so that they are as similar to one another as possible, or parallel. The use of multiple, parallel assessment forms improves test security for the exam program and minimizes a test-retest advantage for students attempting the assessment more than once. After administration, statistical equating of the parallel forms is often needed.
held to discuss the status of the assessments, to discern trends and changes since the previous quarter or as the result of a specific improvement activity. Additional information can also be included in the meeting, such as help-desk ticket information, feedback comments from student surveys, and observations made by course faculty. Assessments that are performing more poorly, relative to others in the college, are assigned for further investigation. This cyclical review ensures that WGU assessments meet the quality standards of the institution.

5 | CONCLUSION

Barring any unexpected retreat of higher education away from postmodern society, it appears that the range, functionality, power, and utility of CBE will continue to expand into the foreseeable future. For the 21st-century learner, basic credentials by themselves are not enough to ensure success in the workplace. The criticality of CBE programs, therefore, has only become more evident now that employers are focused on identifying, recruiting, and retaining employees with transferable skills. Without an established set of assessment standards, there is little room for a consensus to emerge as to which method provides students with the necessary skills to succeed in a knowledge society. As demonstrated in this paper, construct validity is the paramount component of validity evidence for a best-in-class assessment program. This means that, in order to support the inferences drawn from assessment scores, best-test-design principles must be maintained throughout the entire development life cycle.

The current assessment boom exists within higher education because of the invalidity of existing measures of quality. Input measures such as levels of student achievement at entry, outputs such as graduation and retention rates, and performance indicators of operational efficiency are not strong indicators of student learning and development. As a result, there are clear signs that universities are moving toward adopting assessment policies that promote standards-based approaches and prohibit norm-referenced assessment. Without a standards-based framework, learners cannot know whether their achievements result from meeting an acceptable standard or simply from doing better than other students in the same cohort.

Regardless of model (traditional versus competency-based), assessments are an important part of the learning process. They serve multiple functions for varying purposes: they provide information to faculty about teaching effectiveness; they provide students feedback about how well they are doing and how they can improve their learning; and they determine graduation requirements for students desiring a degree (Fletcher et al., 2012). A necessary part of taking responsibility for one's own assessment is the ability to identify what standards should appropriately apply. Unless the expectation is set that raising questions about appropriate standards is a normal part of approaching any learning task, this ability is unlikely to be sufficiently developed.

CONFLICT OF INTEREST STATEMENT

No conflicts declared.

REFERENCES

Banta, T. W., & Associates (2002). Building a scholarship of assessment. San Francisco, CA: Jossey-Bass.
Bloom, B. S. (1968). Learning for mastery. Evaluation Comment, 1, 1–12.
Boud, D. (2000). Sustainable assessment: Rethinking assessment for the learning society. Studies in Continuing Education, 22, 151–167. https://doi.org/10.1080/713695728
Carless, D. (2009a). Trust, distrust and their impact on assessment reform. Assessment and Evaluation in Higher Education, 34, 79–98. https://doi.org/10.1080/02602930801895786
Carless, D. (2009b). Learning-oriented assessment: Principles, practice and a project. In L. H. Meyer, S. Davidson, H. Anderson, R. Fletcher, P. M. Johnston, & M. Rees (Eds.), Tertiary assessment and higher education student outcomes: Policy, practice and research (pp. 79–90). Wellington, NZ: Ako Aotearoa & Victoria University of Wellington.
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297–334. https://doi.org/10.1007/BF02310555
Ewell, P. T. (2002). An emerging scholarship: A brief history of assessment. In T. W. Banta (Ed.), Building a scholarship of assessment (pp. 3–25). San Francisco, CA: Jossey-Bass.
Fletcher, R. B., Meyer, L. H., Anderson, H., Johnston, P., & Rees, M. (2012). Faculty and students' conceptions of assessment in higher education. Higher Education, 64, 119–133. https://doi.org/10.1007/s10734-011-9484-1
Gervais, J. (2016). The operational definition of competency-based education. The Journal of Competency-Based Education, 1, 98–106. https://doi.org/10.1002/cbe2.1011
Gilbert, E. (2015). Does assessment make colleges better? If so, we haven't seen the evidence. The Chronicle of Higher Education, 62, 50–51.
Hattie, J. (2009). The black box of tertiary assessment: An impending revolution. In L. H. Meyer, S. Davidson, H. Anderson, R. Fletcher, P. M. Johnston, & M. Rees (Eds.), Tertiary assessment and higher education student outcomes: Policy, practice and research (pp. 259–275). Wellington, NZ: Ako Aotearoa & Victoria University of Wellington.
James, R. (2003). Academic standards and the assessment of student learning. Tertiary Education and Management, 9, 187–198. https://doi.org/10.1080/13583883.2003.9967103
Johnston, H. (2011). Proficiency-based education. Retrieved from www.educationpartnerships.org
Mager, R. (1962). Preparing instructional objectives. Palo Alto, CA: Fearon Publishers Inc.
McClarty, K., & Gaertner, M. (2015). Measuring mastery: Best practices for assessment in competency-based education. Retrieved from https://www.luminafoundation.org
McLellan, E. (2004). Authenticity in assessment tasks: A heuristic exploration of academics' perceptions. Higher Education Research & Development, 23, 19–33.
Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13–103). New York, NY: American Council on Education/Macmillan.
Murray, K., & MacDonald, R. (1997). The disjunction between lecturers' conceptions of teaching and their claimed educational practice. Higher Education, 33, 331–339. https://doi.org/10.1023/A:1002931104852
Standards (2014). Standards for educational and psychological testing. Washington, DC: American Psychological Association.
Suskie, L. (2004). Assessing student learning: A common sense guide. San Francisco, CA: Jossey-Bass.