Eric Soulsby Assessment Notes
Eric P. Soulsby
University of Connecticut
“If you don't know where you are going, you might wind up someplace else.” – Yogi Berra
1. Students come to the classroom with preconceptions about how the world works. If their initial
understanding is not engaged, they may fail to grasp the new concepts and information that are taught, or
they may learn them for purposes of a test but revert to their preconceptions outside the classroom.
2. To develop competence in an area of inquiry, students must have a deep foundation of factual
knowledge, understand facts and ideas in the context of a conceptual framework, and organize knowledge
in ways that facilitate retrieval and application.
3. A “metacognitive” approach to instruction can help students learn to take control of their own learning by
defining learning goals and monitoring their progress in achieving them.
Bransford et al. describe “transfer” – defined as the ability to extend what has been learned in one context to new
contexts – as being a key component of learning. All learning involves transfer from previous experiences.
Educators hope that students will transfer learning from one problem to another within a course, from one school
year to another, between school and home, and from school to the workplace. Transfer is affected by the degree
to which people learn with understanding rather than merely memorize sets of facts or follow a fixed set of
procedures.
Time spent learning for understanding has different consequences for transfer than time spent simply memorizing
facts or procedures from textbooks or lectures. In order for learners to gain insight into their learning and their
understanding, frequent feedback is critical: students need to monitor their learning and actively evaluate their
strategies and their current levels of understanding.
Bransford et al. indicate that assessment and feedback are crucial for helping people learn. Assessment should
mirror good instruction; happen continuously as part of instruction; and provide information about the levels of
understanding that students are reaching. Assessments must reflect the learning goals that define various
learning environments – if the goal is to enhance understanding and applicability of knowledge, it is not sufficient
to provide assessments that focus primarily on memory for facts and formulas.
In Knowing What Students Know, Pellegrino, Chudowsky, and Glaser (2001) state the following laws of skill acquisition:
Power law of practice – acquiring skill takes time, often requiring hundreds or thousands of instances of
practice in retrieving a piece of information or executing a procedure.
Knowledge of results – individuals acquire a skill much more rapidly if they receive feedback about the
correctness of what they have done.
A dilemma in education is that students often spend time practicing incorrect skills with little or no feedback
– the feedback they ultimately receive is often neither timely nor informative; i.e., unguided practice (e.g.,
homework in mathematics) can be practice in doing tasks incorrectly. One of the most important roles for
assessment is the provision of timely and informative feedback to students during instruction and learning
so that their practice of a skill and its subsequent acquisition will be effective and efficient.
Once again, assessment is highlighted as a key ingredient of effective teaching, a natural conclusion given
the influence it has on learning.
Summarizing his study at a 2004 NEEAN/NEASC meeting, Ken Bain presented the following:
“People tend to learn most effectively (in ways that make a sustained, substantial, and positive influence
on the way they think, act, or feel) when
1. they are trying to solve problems (intellectual, physical, artistic, practical, abstract, etc.) or create
something new that they find intriguing, beautiful, and/or important;
2. they are able to do so in a challenging yet supportive environment in which they can feel a sense
of control over their own education;
3. they can work collaboratively with other learners to grapple with the problems;
4. they believe that their work will be considered fairly and honestly; and
5. they can try, fail, and receive feedback from expert learners in advance of and separate from
any summative judgment of their efforts.”
In Learner-Centered Assessment on College Campuses, Huba and Freed (2000) discuss hallmarks of learner-
centered teaching which again show the connection between learning, effective teaching, and assessment:
Learners are actively involved and receive feedback
“Sending students out on the basketball court to try to shoot baskets or to explore the game doesn’t
ensure mastery. Students will undoubtedly have fun, and they will surely learn something. But
they’ll never master the many interrelated skills of the game unless they get feedback about how they
are doing. Providing that feedback is what coaching – teaching and assessing – is all about.”
Learners apply knowledge to enduring and emerging issues and problems
“In learner-centered teaching, students are asked to do important things worth doing. … They
complete assignments designed around real-world problems, and in this way, they experience the
compelling challenges typically faced by professionals in their disciplines. … Assessments in which
students address ill-defined problems – authentic assessments – are engaging to college students ….
Well-defined problems are helpful for developing skills that involve many steps. When students
complete them, they repeat the steps over and over so that they eventually become habits that can be
used when needed. However, just solving well-defined problems doesn’t help students know when
and how the habits and skills should be used – and knowing when and how to use knowledge is
critical to success in adult life.”
Learners integrate discipline-based knowledge and general skills
“Assessments designed around ill-defined problems typically take the form of projects, papers,
performances, portfolios, or exhibitions. Students completing them have to call upon and develop
their disciplinary knowledge, as well as their skills in the areas of inquiry, reasoning, problem
solving, communication, and perhaps teamwork. … Authentic assessments require that students
make connections between the abilities and skills they have developed in the general education
curriculum and the discipline-based knowledge and skills they have acquired in the major.”
Learners understand the characteristics of excellent work
“A key ingredient in learner-centered teaching is allowing students to make mistakes and learn from
them. … We must provide students with a clear vision of what excellent work is like and help them
use feedback to continually improve their own work and performance. … The opportunity to self-
correct and try again is essential to self-improvement and the development of lifelong learning skills.”
Learners become increasingly sophisticated learners and knowers
“In learner-centered teaching, students reflect upon what they learn and how they learn. Reflection is
a powerful activity for helping professors and students understand the present learning environment
and think of ways to improve it … Over time, students change not only in terms of what they know,
but also in terms of how they know. … In learner-centered environments then, we seek to
understand not only what students know, but also how they know it. … Learner-centered professors
use teaching techniques that help students develop into more sophisticated knowers.”
Professors coach and facilitate, intertwining teaching and assessing
“In a learner-centered environment … teaching and assessing are not separate, episodic events, but
rather, they are ongoing, interrelated activities focused on providing guidance for improvement. …
Students … need to practice what they are learning and receive continuous feedback they can use to
evaluate and regulate their performance.”
Professors reveal that they are learners, too
“... When we take a learner-centered approach, we design assessments to gather opinions from
students on a regular basis about how well they are learning and about how the course format helps
or hinders their efforts. …[Professors] need to know what students understand and don’t understand
in order to modify their performance as teachers ….”
Learning is interpersonal, and all learners – students and professors – are respected and valued
“Instead of emphasizing grades in assessment, the focus should be on descriptive feedback for
improvement. Feedback that focuses on self-assessment and self-improvement is a form of intrinsic
motivation.”
A comparison of the teacher-centered and learner-centered paradigms:

Teacher-centered: Knowledge is transmitted from professor to students.
Learner-centered: Students construct knowledge through gathering and synthesizing information and integrating it with the general skills of inquiry, communication, critical thinking, problem solving, and so on.

Teacher-centered: Emphasis is on acquisition of knowledge outside the context in which it will be used.
Learner-centered: Emphasis is on using and communicating knowledge effectively to address enduring and emerging issues and problems in real-life contexts.

Teacher-centered: Professor’s role is to be primary information giver and primary evaluator.
Learner-centered: Professor’s role is to coach and facilitate; professor and students evaluate learning together.

Teacher-centered: Teaching and assessing are separate.
Learner-centered: Teaching and assessing are intertwined.

Teacher-centered: Desired learning is assessed indirectly through the use of objectively scored tests.
Learner-centered: Desired learning is assessed directly through papers, projects, performances, portfolios, and the like.

Teacher-centered: Only students are viewed as learners.
Learner-centered: Professor and students learn together.

Teacher-centered view of effective teaching: Teach (present information) well and those who can will learn.
Learner-centered view of effective teaching: Engage students in their learning; help all students master learning objectives; use classroom assessment to improve courses; use program assessment to improve programs.
The point to be taken here is that learning occurs when effective teaching environments are learner-centered, and
assessment plays a critical role in such environments. As pointed out in Assessing Student Learning (Suskie
2004), in the teacher-centered model the major, if not the sole, purpose of assessment is to assign student grades.
In the learner-centered model, assessment also provides feedback to help faculty understand what is and is not
working and how to improve their curricular and teaching/learning strategies to bring about even greater
learning.
Assessment involves the use of empirical data on student learning to refine programs and improve student
learning. (Allen 2004)
Assessment is the process of gathering and discussing information from multiple and diverse sources in
order to develop a deep understanding of what students know, understand, and can do with their
knowledge as a result of their educational experiences; the process culminates when assessment results
are used to improve subsequent learning. An assessment is an activity, assigned by the professor, that
yields comprehensive information for analyzing, discussing, and judging a learner’s performance of
valued abilities and skills. (Huba and Freed 2000)
Assessment is the systematic collection of information about student learning, using the time, knowledge,
expertise, and resources available, in order to inform decisions about how to improve learning. (in
Assessment Clear and Simple Walvoord 2004)
Assessment is the systematic basis for making inferences about the learning and development of students.
It is the process of defining, selecting, designing, collecting, analyzing, interpreting, and using
information to increase students’ learning and development. (in Assessing Student Learning and
Development Erwin 1991)
Assessment is the systematic collection, review, and use of information about educational programs
undertaken for the purpose of improving student learning and development. (in Assessment Essentials
Palomba and Banta 1999)
Assessment is a process of reasoning from evidence. (Pellegrino, Chudowsky, and Glaser 2001)
Assessment may involve accountability as well as improvement in pedagogy as defined by Peter Ewell
(in Building a Scholarship of Assessment Banta and Associates 2002):
o assessment refers to the processes used to determine an individual’s mastery of complex abilities,
generally through observed performance
o assessment is large-scale testing programs whose primary objective is not to examine individual
learning but rather to benchmark school performance in the name of accountability
o assessment is a special kind of program evaluation whose purpose is to gather evidence to
improve curricula and pedagogy
The meaning of assessment is captured in key questions such as (Palomba and Banta 1999):
What should college graduates know, be able to do, and value?
Have the graduates of our institutions acquired this learning?
What, in fact, are the contributions of the institution and its programs to student growth?
How can student learning be improved?
[Figure: the iterative assessment cycle – formulate statements of intended learning outcomes; develop or select assessment measures; create experiences leading to outcomes; discuss and use assessment results to improve teaching and learning]
1. Formulating Statements of Intended Learning Outcomes – statements describing intentions about what
students should know, understand, and be able to do with their knowledge when they graduate.
2. Developing or Selecting Assessment Measures – designing or selecting data-gathering measures to assess
whether or not our intended learning outcomes have been achieved. These include:
Direct assessments – projects, products, papers/theses, exhibitions, performances, case studies,
clinical evaluations, portfolios, interviews, and oral exams – which ask students to demonstrate
what they know or can do with their knowledge.
Indirect assessments – self-report measures such as surveys – in which respondents share their
perceptions about what graduates know or can do with their knowledge.
3. Creating Experiences Leading to Outcomes – ensuring that students have experiences both in and outside
their courses that help them achieve the intended learning outcomes. The curriculum must be designed as
a set of interrelated courses and experiences that will help students achieve the intended learning
outcomes. Designing the curriculum by working backwards from learning outcomes helps make the
curriculum a coherent ‘story of learning’.
4. Discussing and Using Assessment Results to Improve Teaching and Learning – the focus is on using the
results to improve individual student performance.
Erwin (1991) indicates most college catalogues present institutional goals, purposes, or mission in the form of
broad concepts, such as character, citizenship, or cultural appreciation. Because these goals are global and often
vague, it is necessary also to state objectives. Objectives are typically expressed in a list or series of statements
indicating what the department, program, or office is trying to accomplish with the student. Outcomes are the
achieved results or actual consequences of what the students demonstrated or accomplished.
As discussed by Allen 2004, a program’s Mission = a holistic vision of the values and philosophy of the
department. Program goals = broad statements concerning knowledge, skills or values that faculty expect
graduating students to achieve. Learning objectives operationalize program goals – they describe observable
behaviors that allow faculty to know if students have mastered the goals.
An example illustrating the difference among the terms “mission”, “goal”, “objective”, and “outcome”:
University Mission: Broad exposure to the liberal arts … for students to develop their powers of written and
spoken expression …
Program Goal: The study of English enables students to improve their writing skills, their articulation …
English Composition Course Objective: Students will learn to acknowledge and adjust to a variety of writing
contexts.
Learning Outcomes: The student will demonstrate through discussion, planning, and writing an awareness that
audiences differ and that readers’ needs/expectations must be taken into account as one writes. The student will
write a draft and revise work with a sense of purpose and an awareness of audience.
Robert Diamond (in Designing and Assessing Courses & Curricula, 1998) indicates “as we teach our courses, we
tend to lose sight of the fact that each course is but one element in a learning sequence defined as a curriculum.”
In general, the goals of a curriculum evolve from the total of the instructional outcomes associated with basic core
competencies, discipline-specific competencies related to core requirements, and discipline-specific competencies
associated with major and minor concentrations.
Successful assessment requires articulating goals and objectives for learning (Palomba and Banta 1999):
Goals for learning – express intended results in general terms. Used to describe broad learning concepts;
e.g., clear communication, problem solving, and ethical awareness.
Objectives for learning – express intended results in precise terms. Used to describe specific behaviors
students should exhibit; e.g., “graduates in speech communication should be able to interpret non-verbal
behavior and to support arguments with credible evidence”.
Objectives may also be thought of as intended outcomes, and the assessment results as the actual outcomes. As
captured in the following diagram, assessment is an iterative feedback process for continual program
improvement with a focus on student learning. Assessment involves comparing the measured learning outcomes
with the intended learning objectives to enable changes to be made to improve student learning.
Goals and Objectives are similar in that they describe the intended purposes and expected results of teaching
activities and establish the foundation for assessment. Goals are statements about general aims or purposes of
education that are broad, long-range intended outcomes and concepts; e.g., “clear communication”, “problem-
solving skills”, etc. Objectives are brief, clear statements that describe the desired learning outcomes of
instruction; i.e., the specific skills, values, and attitudes students should exhibit that reflect the broader goals.
Consider a goal such as “Students will understand what causes the seasons.” How does one measure ‘understand’?
This goal can be made more measurable by identifying specific outcomes one would expect from a student who
“understands” the seasons.
Thus, once goals have been formalized, the next step is to translate the often abstract language of course goals
into a set of concrete measurable student outcomes.
Measurable student outcomes are specific, demonstrable characteristics – knowledge, skills, values, attitudes,
interests – that will allow us to evaluate the extent to which course goals have been met.
Example: translating a course goal (in the context of dental health) into measurable student outcomes
Carefully written objectives allow for easier assessment of whether students are achieving what you want them to
achieve. Below is an example showing a link between objectives and assessment.
Program Objective: After analyzing and interpreting information from public opinion polls, the graduating
Journalism major will communicate the results to at least three different groups in written, oral, and graphic
forms.

Each component of a well-written objective highlights a different part of this statement:
Verb: use active verbs that describe behavior
Object: identify the focus of learning – content, concepts, skills, attitudes
Target group: specify subgroups when the objective applies differentially
Conditions: describe the context in which students will demonstrate the behavior – how, when, where
Performance criteria: identify levels of acceptable performance
Performance stability: identify how often the behavior must be observed to be a stable indicator
Goal: Students will be familiar with the major theories of the discipline.
There are three types of learning objectives, which reflect different aspects of student learning:
Cognitive objectives: “What do you want your graduates to know?”
Affective objectives: “What do you want your graduates to think or care about?”
Behavioral Objectives: “What do you want your graduates to be able to do?”
Outcomes are clear learning results that we want students to demonstrate at the end of significant learning
experiences. (Spady, 1994) Learning outcomes are statements that describe significant and essential learning that
learners have achieved, and can reliably demonstrate at the end of a course or program; i.e., what the learner will
know and be able to do by the end of a course or program.
However, the two terms objectives and outcomes are often used interchangeably, resulting in confusion.
What are the differences between Goals and Objectives? Both goals and objectives use the language of outcomes
– the characteristic which distinguishes goals from objectives is the level of specificity. Goals express intended
outcomes in general terms and objectives express them in specific terms. Goals are written in broad, global, and
sometimes vague, language. Objectives are statements that describe the intended results of instruction in terms of
specific student behavior.
What are the differences between Objectives and Outcomes? Objectives are intended results or consequences of
instruction, curricula, programs, or activities. Outcomes are achieved results or consequences of what was
learned; i.e., evidence that learning took place. Objectives are focused on specific types of performances that
students are expected to demonstrate at the end of instruction.
Thus, a first step in assessment is the establishment of objectives. Learning objectives = cognitively oriented
objectives, including subject matter knowledge and skills; e.g., students can learn basic principles and theories of
a discipline, or they can learn skills such as writing or computing. Developmental objectives = typically include
cognitive and affective dimensions, such as critical thinking, ethics, identity, and physical well-being.
Institution- and program-level assessment examines the integration of the three domains of learning identified by
Bloom (Maki 2004):
1. The cognitive domain, involving the development of intellectual abilities: knowledge, comprehension,
application, analysis, synthesis, and evaluation
a. Example: a medical student’s knowledge of anatomy
b. Example: an undergraduate business student’s evaluation of multiple solutions to a problem
in a case study
2. The psychomotor domain, involving the development of physical movement, coordination, and sets of
skills
a. Example: intricately timed movements of a dancer
b. Example: precision of a neurosurgeon
3. The affective domain, involving the development of values, attitudes, commitments, and ways of
responding
a. Example: valuing others’ perspectives
b. Example: responding to situations that disadvantage a group of people
c. Example: demonstrating a passion for learning
In general, research over the last 40 years has confirmed the taxonomy as a hierarchy; although it is uncertain at
this time whether synthesis and evaluation should be reversed (i.e., evaluation is less difficult to accomplish than
synthesis) or whether synthesis and evaluation are at the same level of difficulty but use different cognitive
processes. In any case it is clear that students can “know” about a topic or subject at different levels. While most
teacher-made tests still test at the lower levels of the taxonomy, research has shown that students remember more
when they have learned to handle the topic at the higher levels of the taxonomy.
Bloom’s levels illustrated against the learning goal “Students will understand the major theoretical approaches
within the discipline”:

Knowledge – to know specific facts, terms, concepts, principles, or theories.
At this level: students can list the major theoretical approaches of the discipline.
Exam question at this level: Name the muscles of the rotator cuff.
Medical faculty questions at this level: What was the heart rate? Where is the primary lesion?

Comprehension – to understand, interpret, compare and contrast, explain; management of Knowledge.
At this level: students can describe the key theories, concepts, and issues for each of the major theoretical approaches.
Exam question at this level: How does the rotator cuff help you to raise your arm?
Medical faculty questions at this level: When would you use that type of hernia repair? Why is the fracture in the same place it was before?

Application – to apply knowledge to new situations, to solve problems; use of Comprehension or Understanding.
At this level: students can apply theoretical principles to solve real-world problems.
Exam question at this level: Why does throwing a curve ball cause rotator cuff injury?
Medical faculty questions at this level: You are watching the patient and she falls – what would you do? Here is a lady with no vibratory sensation – what problem does this pose?

Analysis – to identify the organizational structure of something; to identify parts, relationships, and organizing principles; disassembly of Application.
At this level: students can analyze the strengths and limitations of each of the major theoretical approaches for understanding specific phenomena.
Exam question at this level: How does the throwing motion stress each component, in turn, of the rotator cuff?
Medical faculty questions at this level: What are the most significant aspects of this patient’s story? That is a curious bit of information – how do you explain it?

Synthesis – to create something, to integrate ideas into a solution, to propose an action plan, to formulate a new classification scheme; assembly of Application.
At this level: students can combine theoretical approaches to explain complex phenomena.
Exam question at this level: Design a physical therapy program to strengthen each component of the rotator cuff.
Medical faculty questions at this level: How would you summarize this? What are your conclusions?

Evaluation – to judge the quality of something based on its adequacy, value, logic, or use; appraisal of one’s own or someone else’s Analysis or Synthesis.
At this level: students can select the theoretical approach that is most applicable to a phenomenon and explain why they have selected that perspective.
Exam question at this level: Evaluate another physical therapist’s program to strengthen the rotator cuff.
Medical faculty questions at this level: Why is that information pertinent? How valid is this patient’s story?
[Figure: Bloom’s taxonomy as a pyramid, from the foundation up]
Knowledge – the ability to recall what has been learnt
Comprehension – the ability to show a basic understanding
Application – the ability to apply learning to a new or novel task
Analysis – the ability to break up information
Synthesis – the ability to create something new
Evaluation – the ability to evaluate usefulness for a purpose
“[Bloom’s] Taxonomy is designed to be a classification of the student behaviors which represent the
intended outcomes of the educational process. It is assumed that essentially the same classes of behavior
may be observed in the usual range of subject-matter content of different levels of education (elementary,
high school, college), and in different schools. Thus a single set of classifications should be applicable in
all these circumstances.
What we are classifying is the intended behaviors of students – the ways in which individuals are to think,
act or feel, as a result of participating in some unit of instruction. (Only such of those intended behaviors
as are related to mental acts of thinking are included in the part of the Taxonomy developed in the
handbook for the cognitive domain.)
It is recognized that the actual behaviors of the students after they have completed the unit of instruction
may differ in degree as well as kind from the intended behavior specified by the objectives. That is, the
effects of instruction may be such that the students do not learn a given skill to any degree.
We initially limited ourselves to those objectives referred to as knowledge, intellectual abilities, and
intellectual skills. (This area, which we named the cognitive domain, may also be described as including
the behavior; remembering; reasoning, problem solving; concept formation, and to a limited extent
creative thinking.)”
Heywood (2000) elaborates on learning objectives by stating that “while much learning is informal, and while
students may already have attained the goals we wish them to obtain it is nevertheless the case that learning is
enhanced in situations where both the learner and teacher are clear about what they wish to achieve. Thus the
understanding of ‘learning’ which is the central goal of formal education must contribute to the selection of
‘objectives’ … [T]he process of curriculum, instructional design and assessment are the same. Moreover, it is a
complex activity. While it is convenient to begin with aims and objectives, any discussion of these must, at one
and the same time, consider the learning experiences (strategies) necessary to bring the students from where they
are (entering characteristics) to where they should be (objectives), as well as the most appropriate mode of
assessment …”
“An objective is an intent communicated by a statement describing a proposed change in the learner — a
statement of what the learner is like when he has successfully completed a learning experience ... When
clearly defined goals are lacking, it is impossible to evaluate a course or program efficiently, and there is
no sound basis for selecting appropriate materials, content, or instructional methods” (Mager 1962)
An instructional objective must (in Preparing Instructional Objectives Mager 1962, 1997)
1. Describe what the learner will be doing when demonstrating that he has reached the objective; i.e.,
What is the learner to do?
2. Describe the important conditions under which the learner will demonstrate his competence; i.e.,
Under what conditions will he do it?
3. Indicate how the learner will be evaluated, or what constitutes acceptable performance; i.e.,
What will you expect as satisfactory performance?
Example: Students can translate a Spanish newspaper into English with no more than 2 errors per
sentence
Behavior = create a translation
Condition = students are provided a Spanish newspaper
Criterion = no more than 2 errors per sentence
This level of detail is for course learning objectives rather than for program learning objectives.
Learning objectives are behavioral and can be described by verbs that delineate behaviors.
Examples of learning objectives given in Designing and Assessing Courses & Curricula (Robert Diamond, 1998):
Music: On hearing musical selections, you will be able to identify those that are examples of chamber
music and be able to identify the form, texture, and makeup of the ensemble.
Psychology: When given a case study, you will be able to identify whether it describes a case of
schizophrenia and, if it does, which of the following schizophrenic reactions are involved: hebephrenic,
catatonic, or paranoid.
Economics: Demonstrate graphically and explain how a change in expectations will affect the loanable
funds market. (Begin with an appropriately labeled graph that represents the initial equilibrium.)
Management: Identify (based on readings, case studies, and/or personal experiences) those activities that
are most likely to distinguish effective, well-managed technology development programs from ineffective
programs.
Government: When given a major decision made by a governmental leader, you will be able to identify
the major factors that the leader had to consider and discuss why the action was taken and what apparent
trade-offs were made.
Program learning objectives focus on the learner. Unlike the teacher-centered approach, a learner-centered
approach should be used to determine learning objectives. In other words, rather than list what a course/program
may cover, a learner-centered approach examines courses and curricula from the other direction: what is expected
of students upon completion of the course/program.
Plan for designing and delivering learning outcomes (Huba and Freed 2000):
In designing course outcomes
Start first with the broad outcomes expected of all students
Then work backward to design academic program outcomes
Finally design course outcomes that will lead to the achievement of both program and
institutional outcomes
When the program is delivered, students experience the system in reverse
Students first participate in experiences that address lesson outcomes
The learning that results from these experiences accumulates as students proceed through the
courses and other experiences in the program
The curriculum is designed so that it provides a coherent set of experiences leading to the
development of desired knowledge and skills – students show increasing levels of sophistication
and integration of skills as they progress through the program
There is an underlying coherence among the levels of learning outcome statements (Maki 2004):
At the Institution level, outcome statements are more general statements reflecting students’ entire educational
experiences. At the Program level outcome statements become more specific.
Curriculum mapping makes it possible to identify where within the curriculum learning objectives are addressed.
In other words, it provides a means to determine whether your objectives are aligned with the curriculum.
Alignment – the curricula must be systematically aligned with the program objectives (Allen 2004). Alignment
involves clarifying the relationship between what students do in their courses and what faculty expect them to
learn. Analyzing the alignment of the curricula with program objectives allows for the identification of gaps
which can then lead to curricular changes to improve student learning opportunities.
Approach to determining the alignment of courses with the program objectives – create a matrix with courses as
rows and program objectives as columns; for example:
Course 100: I
Course 101: P
Course 102: D, P
Course 103: I, D
Etc.
(The I, P, and D entries typically indicate where an objective is Introduced, Practiced, and Demonstrated.)
Aligning course objectives to program objectives may be accomplished by a curriculum alignment matrix which
maps each onto the other; a checkmark indicating coverage or an indication of the level of coverage can be used.
Similarly, a course alignment matrix may be used to indicate where course objectives support the overall
objectives of the program.
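As a minimal sketch of this approach, the Python below stores a hypothetical alignment matrix (course → objective → level, using the I/P/D codes from the matrix sketch above; all course and objective names are illustrative) and flags program objectives that no course covers or that are never demonstrated:

```python
# Hypothetical curriculum alignment matrix: course -> {objective: level},
# where I/P/D mark where an objective is Introduced, Practiced, Demonstrated.
matrix = {
    "Course 100": {"Obj 1": "I"},
    "Course 101": {"Obj 2": "P"},
    "Course 102": {"Obj 1": "D", "Obj 2": "P"},
    "Course 103": {"Obj 2": "I", "Obj 3": "D"},
}
program_objectives = ["Obj 1", "Obj 2", "Obj 3", "Obj 4"]

# Gap analysis 1: which objectives does no course address at all?
covered = {obj for levels in matrix.values() for obj in levels}
gaps = [obj for obj in program_objectives if obj not in covered]

# Gap analysis 2: which covered objectives are never demonstrated ("D")?
never_demonstrated = [
    obj for obj in sorted(covered)
    if not any(levels.get(obj) == "D" for levels in matrix.values())
]

print("Not covered by any course:", gaps)                      # ['Obj 4']
print("Covered but never demonstrated:", never_demonstrated)   # ['Obj 2']
```

A gap of either kind is the signal for the curricular changes the paragraph above describes.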
Course Alignment Matrix (Allen 2004)
An example, based on Pagano (2005), outlines the connections between program objectives and courses:
Program Objectives: All students with a major in Party Planning will be able to:
1. Develop and execute parties for a variety of situations and for diverse clientele.
2. Create complete menus for a variety of events.
3. Demonstrate an understanding of the biochemical properties of foods and liquids.
4. Plan, price, and budget a variety of parties.
5. Develop successful marketing strategies for a party planner.
6. Anticipate and respond to emergencies in parties they are running.
7. Train and manage staff.

Curriculum alignment matrix (rows are courses; I/P/D entries mark the program objectives #1–#7 each course
addresses):
PP 110 Introduction to Party Planning: I, I, I
PP 200 Party Budgeting and Purchasing: I, P
PP 201 Fundamentals of Catering: D, I
PP 240 Home Decorations: P, D
PP 260 Crisis Management: I, D, D
PP 290 Capstone Course: D, P, P, D, D
Course Alignment Matrix for PP 201 (rows are the course objectives for PP 201; entries mark which of the
program objectives #1–#7 each course objective supports):
PP 201 Objective #1: B, B, I
PP 201 Objective #2: B, A, A
PP 201 Objective #3: B, B, A
PP 201 Objective #4: I, B
PP 201 Objective #5: B, A
Business Administration curriculum map – competencies mapped against the courses Econ 207, Econ 208,
CS 214, Eng 200, Math 1165, Busi 201, Busi 203, Busi 211, Busi 231, Busi 241, Busi 251, Busi 252, Busi 281,
Busi 371, and Busi 411 (course titles include Micro-Economics, International Bus, Writing for Bus, Pre-Calc
(Bus), Bus Statistics, Mgl Finance, Prin Acctg I, Prin Acctg II, Intro to Bus, Bus Policy, Prin Mgmt, Bus Law I,
and Prin Mktg). Each competency below carries I, R, and E marks across these courses (typically Introduced,
Reinforced, Emphasized).

Writing Competencies
Identify a subject and formulate a thesis statement (I, R, E)
Organize ideas to support a position (I, R, R, R, E)
Write in a unified and coherent manner appropriate to the subject matter (I, R, R, R, E)
Use appropriate sentence structure and vocabulary (I, R, R, R, E)
Document references and citations according to an accepted style manual (I, R, R, E)

Critical Thinking Competencies
Identify business problems and apply creative solutions (I, R, R, R, R, R, E)
Identify and apply leadership techniques (I, R, E)
Translate concepts into current business environments (I, R, R, R, R, R, E)
Analyze complex problems by identifying and evaluating the components of the problem (I, R, R, R, E, E)

Quantitative Reasoning Competencies
Apply quantitative methods to solving real-world problems (I, R, R, R, E)
Perform necessary arithmetic computations to solve quantitative problems (I, R, R, R, E)
Evaluate information presented in tabular, numerical, and graphical form (I, R, R, R, E, E)
Recognize the reasonableness of numeric answers (I, R, R, R, E, E)

Oral Communications Competencies
Organize an oral argument in a logical sequence that will be understood by the audience (I, R, R, R, E)
Use visual aids effectively to support an oral presentation (I, R, R, R, E)
Demonstrate professional demeanor, speak clearly in a well-modulated tone, and engage the audience (I, R, R, R, E)
Exhibit good listening skills when others are speaking (I, R, R, R, E)

Technology and Information Literacy
Identify problem/topic (I, R, R)
Demonstrate familiarity with information resources and technologies (I, R, R)
Conduct search query (I, R, R)
Evaluate sources of information (I, R, R)

Computer Literacy
Demonstrate computer literacy in preparation of reports and presentations (I, R, E, E)
Demonstrate ability to use software applications to solve business problems (I, R, R, E)
Conduct search queries through the use of the Internet (I, R, R, E)

Values Awareness
Recognize ethical issues (I, R, R, R, E, E)
Identify ethical issues (I, R, R, R, E, E)
Identify theoretical frameworks that apply to corporate social responsibility (I, R, R, R, R, R, E)
Translate ethical concepts into responsible behavior in a business environment (I, R, R, R, R, E)
Develop values awareness (I, R, R, R, E)

CONTENT-SPECIFIC COMPETENCIES

Global Business Competencies
Demonstrate knowledge of contemporary social, economic, and political forces; their interrelationship; and their impact on the global business environment (I, I, I, R, R, RE, R, R)
Identify the integration of global markets from both financial and product/service perspectives (I, R, RE, R, R)
Incorporate diverse cultural perspectives into business decisions (I, R, R, RE, R)

Accounting Competencies
Examples of survey item formats:

Open-ended: Please describe the most important concepts you learned in the program.
Partially closed-ended: Please check the most important factor that led you to major in engineering:
___ Experience in a specific class
___ Experience with a specific instructor
___ Work experience in this or a related field
___ Advice from a career planning office or consultant
___ Advice from family or friends
___ Other: please explain
Quality: Please indicate the quality of instruction in the general education program.
Very Poor / Poor / Good / Very Good
Quantitative judgment: Compared to other interns I have supervised, this student’s knowledge of the theory and
principles of clinical practice is
1 2 3 4 5 6 7 8 9 10 (Below average – Average – Above Average)
Ranking: Please indicate your ranking of the importance of the following student learning objectives by
assigning ranks from “1” to “4”, where “1” is most important and “4” is least important:
___ Computing
___ Critical thinking
___ Speaking
___ Writing
Selection criterion matrix for determining which methods to use (Palomba and Banta 1999):
Preparation time
Value to students
Programmatic information
Ways of comparing the scores or ratings from any assessment method (Erwin 1991):
Norm-referenced – report students’ scores relative to those of other students
Example: comparing students’ scores with students’ scores from other institutions
Proprietary tests are norm-referenced, with percentile ranks ranging from 1 to 99 typically used.
Percentile rank = percentage of persons in a reference group who obtained lower scores (a worked sketch
of this computation follows this list)
Criterion-referenced – report scores according to an absolute standard of achievement
Example: comparing students’ scores with a designated level of competency or cutoff standard; above
which is passing, below which is failing
Alternative terms = domain-based or content-based
Self-referenced – compare different scores or ratings from the same student
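As promised above, a percentile rank can be computed directly from its definition; the reference-group scores and the student score below are hypothetical:

```python
# Percentile rank = percentage of persons in a reference group who
# obtained lower scores. Hypothetical reference-group scores:
reference_scores = [55, 62, 70, 70, 74, 81, 88, 90, 95, 99]

def percentile_rank(score: float, reference: list[float]) -> float:
    below = sum(1 for s in reference if s < score)  # count of lower scores
    return 100 * below / len(reference)

# A student scoring 81 outscores 5 of the 10 reference examinees:
print(percentile_rank(81, reference_scores))  # 50.0
```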
Reliable measures can be counted on to produce consistent responses over time (Palomba and Banta 1999):
Reliable data – variance in scores is attributable to actual differences in what is being measured, such as
knowledge, performance or attitudes
Unreliable data – score variance is due to measurement error; which can include such things as the
individuals responding to the instrument, the administration and scoring of the instrument, and the
instrument itself
Barriers to establishing reliability (Shermis and Daniels in Banta and Associates 2002) include rater bias – the
tendency to rate individuals or objects in an idiosyncratic way:
central tendency – error in which an individual rates people or objects by using the middle of the scale
leniency – error in which an individual rates people or objects by using the positive end of the scale
severity – error in which an individual rates people or objects by using the negative end of the scale
halo error – when a rater’s evaluation on one dimension of a scale (such as work quality) is influenced by
his or her perceptions from another dimension (such as punctuality)
Test-retest reliability: A reliability estimate based on assessing a group of people twice and correlating the two
scores. This coefficient measures score stability.
Parallel forms reliability (or alternate forms reliability): A reliability estimate based on correlating scores
collected using two versions of the procedure. This coefficient indicates score consistency across the alternative
versions.
Inter-rater reliability: How well two or more raters agree when decisions are based on subjective judgments.
Internal consistency reliability: A reliability estimate based on how highly parts of a test correlate with each
other.
Coefficient alpha: An internal consistency reliability estimate based on correlations among all items on a test.
Split-half reliability: An internal consistency reliability estimate based on correlating two scores, each calculated
on half of a test.
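As an illustration of how the internal consistency estimates above can be computed, here is a minimal sketch using a small, hypothetical matrix of item scores (students × items); the function names and data are illustrative:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Coefficient alpha for an (examinees x items) score matrix."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def split_half(scores: np.ndarray) -> float:
    """Split-half estimate: correlate odd- and even-item half scores, then
    apply the Spearman-Brown correction for full test length."""
    half1 = scores[:, 0::2].sum(axis=1)
    half2 = scores[:, 1::2].sum(axis=1)
    r = np.corrcoef(half1, half2)[0, 1]
    return 2 * r / (1 + r)

# Hypothetical data: 5 students x 4 items, each item scored 0-5
data = np.array([[4, 5, 3, 4],
                 [2, 3, 2, 3],
                 [5, 5, 4, 5],
                 [1, 2, 1, 2],
                 [3, 4, 3, 3]])
print(f"coefficient alpha: {cronbach_alpha(data):.2f}")
print(f"split-half (Spearman-Brown): {split_half(data):.2f}")
```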
Valid measures are ones in which the instrument measures what we want it to measure (Palomba and Banta 1999):
Construct-related validity – refers to the congruence between the meaning of the underlying construct and
the items on the test or survey; i.e., do results correlate with other instruments examining the same
construct?
Criterion-related validity – includes predictive validity: how dependable is the relationship between the
scores or answers on an instrument and a particular future outcome?
Content-related validity – refers to the match between the content of the instrument and the content of the
curriculum or other domain of interest
Validity must be judged according to the application of each use of the method. The validity of an assessment
method is never proved absolutely; it can only be supported by an accumulation of evidence from several
categories. For any assessment methods to be used in decision making, the following categories should be
considered (Erwin 1991):
Content relevance and representativeness
o The selected test should be a representative sample from those educational objectives which the test is
supposed to measure
o The test should cover what the program covered and should place emphasis in proportion to the
program’s emphases
o Tests may be reliable but not valid for a particular program
Internal test structure
o Typically demonstrated through intercorrelations among items covering the same content domain
External test structure
o Necessary when the educator wishes to compare test scores or ratings with other measures or related
variables
Process of probing responses
o Typically sought at two points during any test or scale construction: initially in the test construction to
determine whether the students’ interpretations are consistent with the intent of the test designer; and
at the point of probing the process to see if a pattern might be discovered on those students who
scored very high or very low
Test’s similarities and differences over time and across groups and settings
o In studying validity evidence over time, some outcome measures should increase over time
Value implications and social consequences
o If a test or rating scale discriminates against certain groups of people, that test or scale should be
considered suspect.
Construct validity: Construct validity is examined by testing predictions based on the theory (or construct)
underlying the procedure. For example, faculty might predict that scores on a test that assesses knowledge of
anthropological terms will increase as anthropology students progress in their major. We have more confidence
in the test’s construct validity if predictions are empirically supported.
Criterion-related validity: Criterion-related validity indicates how well results predict a phenomenon of interest,
and it is based on correlating assessment results with this criterion. For example, scores on an admissions test
can be correlated with college GPA to demonstrate criterion-related validity.
Face validity: Face validity is assessed by subjective evaluation of the measurement procedure. This evaluation
may be made by test takers or by experts in what is being assessed.
Formative validity: Formative validity is how well an assessment procedure provides information that is useful
for improving what is being assessed.
Sampling validity: Sampling validity is how well the procedure’s components, such as test items, reflect the full
range of what is being assessed. For example, a valid test of content mastery should assess information across
the entire content area, not just isolated segments.
What are the parts of a rubric? Rubrics are composed of four basic parts (Stevens and Levi 2005):
A task description (the assignment)
A scale of some sort (levels of achievement, possibly in the form of grades). Scales typically range
from 3 to 5 levels.
The dimensions of the assignment (a breakdown of the skills/knowledge involved in the assignment)
Descriptions of what constitutes each level of performance (specific feedback)
[Figure: basic rubric layout – the task description appears above a grid; the scale levels form the columns; the dimensions form the rows; descriptions of each level of performance fill the cells]
Scoring rubrics are explicit schemes for classifying products or behaviors into categories that are steps along a
continuum – these steps usually range from “unacceptable” to “exemplary”, and the number of intermediate
categories varies with the need to discriminate among other performance levels (Allen 2004).
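To make the four parts concrete, the sketch below represents a rubric as a small data structure; the class, names, and cell text are hypothetical (the scale labels echo the Suskie essay example later in this section):

```python
from dataclasses import dataclass

@dataclass
class Rubric:
    """A rubric with the four parts named above: task description, scale,
    dimensions, and a description for every (dimension, level) cell."""
    task: str
    scale: list[str]                           # achievement levels, highest first
    dimensions: list[str]                      # skills/knowledge being judged
    descriptions: dict[tuple[str, str], str]   # (dimension, level) -> cell text

essay_rubric = Rubric(
    task="Write a focused, well-organized persuasive essay.",
    scale=["Sophisticated", "Acceptable", "Developing competence", "Inadequate"],
    dimensions=["Focus and development", "Use of language"],
    descriptions={
        ("Focus and development", "Sophisticated"):
            "The essay is focused, clearly organized, and shows depth of development.",
        ("Use of language", "Inadequate"):
            "Problems with language seriously interfere with the reader's understanding.",
        # ...one entry for every (dimension, level) pair
    },
)

def feedback(rubric: Rubric, ratings: dict[str, str]) -> list[str]:
    """Return the cell descriptions matching the level awarded on each
    dimension -- the 'specific feedback' part of the rubric."""
    return [rubric.descriptions.get((dim, level), "(no description written)")
            for dim, level in ratings.items()]

print(feedback(essay_rubric, {"Focus and development": "Sophisticated"}))
```

Storing a description per cell is what distinguishes a descriptive rubric from a bare rating scale: the score and the explanation of the score travel together.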
Rating scales – a checklist with a rating scale added to show the degree to which the ‘things you are
looking for’ are present
A rating scale rubric for an information literacy assignment (Suskie 2004)
Please indicate the student’s skill in each of the following respects, as evidenced by this assignment, by checking the
appropriate box. If this assignment is not intended to elicit a particular skill, please check the N/A box.
Columns: Outstanding (A) | Very Good (B) | Acceptable (C) | Marginally acceptable (D) | Inadequate (F) | N/A
It should be noted that rating scales can be vague in nature, leading to problems (Suskie 2004):
o When several faculty are doing the rating, they may be inconsistent in how they rate performance
o Students don’t receive thorough feedback; i.e., a scored rubric may not explain why something
was less than superior
Holistic rating scales
o Do not have a list of the ‘things you’re looking for’
o Have short narrative descriptions of the characteristics of outstanding work, acceptable work,
unacceptable work, and so on
Inadequate: The essay has at least one serious weakness. It may be unfocused, underdeveloped, or rambling.
Problems with the use of language seriously interfere with the reader’s ability to understand what is being
communicated.
Developing competence: The essay may be somewhat unfocused, underdeveloped, or rambling, but it does have
some coherence. Problems with the use of language occasionally interfere with the reader’s ability to understand
what is being communicated.
Acceptable: The essay is generally focused and contains some development of ideas, but the discussion may be
simplistic or repetitive. The language lacks syntactic complexity and may contain occasional grammatical errors,
but the reader is able to understand what is being communicated.
Sophisticated: The essay is focused and clearly organized, and it shows depth of development. The language is
precise and shows syntactic variety, and ideas are clearly communicated to the reader.
Descriptive rubrics
o Replace the checkboxes of rating scale rubrics with brief descriptions of the performance that
merits each possible rating
o Descriptions of each performance level make faculty expectations explicit and student
performance convincingly documented. But, coming up with succinct but explicit descriptions of
every performance level for every ‘thing you are looking for’ can be time-consuming.
o Are a good choice when several faculty are collectively assessing student work, it is important to
give students detailed feedback, or outside audiences will be examining the rubric scores.
A descriptive rubric for a slide presentation on findings from research sources (Suskie 2004)
Introduction
Well done (5): Presents overall topic. Draws in audience with compelling questions or by relating to the
audience’s interests or goals.
Satisfactory (4–3): Clear, coherent, and related to topic.
Needs improvement (2–1): Some structure but does not create a sense of what follows. May be overly detailed
or incomplete. Somewhat appealing.
Incomplete (0): Does not orient audience to what will follow.
Etc.
(Additional rows – e.g., Learning objective 2 – follow the same pattern.)
How can Rubrics be used to assess program learning goals? (Suskie 2004)
Embedded course assignments – program assessments which are embedded into course assignments can
be scored using a rubric
Capstone experiences – theses, oral defenses, exhibitions, presentations, etc. – can be scored using a
rubric to provide evidence of the overall effectiveness of a program
Field experiences – internships, practicum, etc.—supervisor’s ratings of the student’s performance can be
evidence of the overall success of a program
Employer feedback – feedback from the employers of alumni can provide information on how well a
program is achieving its learning goals
Student self-assessments – indirect measures of student learning
Peer evaluations – while having the potential for being inaccurate and biased – they can motivate students
to participate fully
Portfolios – rubrics can be a useful way to evaluate portfolios
Rubric scores are subjective and thus prone to unintentional scoring errors and biases (Suskie 2004):
Leniency errors – when faculty judge student work better than most of their colleagues would judge it
Generosity errors – when faculty tend to use only the high end of the rating scale
Severity errors – when faculty tend to use only the low end of the rating scale
Central tendency errors – when faculty tend to use only the middle of the rating scale
Halo effect bias – when faculty let their general impression of a student influence their scores
Contamination effect bias – when faculty let irrelevant student characteristics (e.g., handwriting or ethnic
background) influence their scores
Similar-to-me effect bias – when faculty give higher scores to those students whom they see as similar to
themselves
First-impression effect bias – when faculty’s early opinions distort their overall judgment
Contrast effect bias – when faculty compare a student against other students instead of established
standards
Rater drift – when faculty unintentionally redefine scoring criteria over time
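Because several of these errors show up as systematic patterns in the scores themselves, one simple screen is to compare each rater's mean and spread against the pooled ratings; the sketch below is hypothetical (data, rater names, and the 0.5 thresholds are illustrative, not established cutoffs):

```python
from statistics import mean, stdev

# Hypothetical rubric scores (1-5 scale) from three faculty raters
ratings = {
    "Rater A": [5, 5, 4, 5, 5, 4],  # clusters at the top of the scale
    "Rater B": [3, 3, 3, 3, 4, 3],  # clusters in the middle of the scale
    "Rater C": [2, 3, 1, 2, 4, 3],
}

all_scores = [s for scores in ratings.values() for s in scores]
overall_mean, overall_sd = mean(all_scores), stdev(all_scores)

for rater, scores in ratings.items():
    m, sd = mean(scores), stdev(scores)
    flags = []
    if m > overall_mean + 0.5:
        flags.append("possible leniency/generosity error")   # flags Rater A
    if m < overall_mean - 0.5:
        flags.append("possible severity error")               # flags Rater C
    if sd < 0.5 * overall_sd:
        flags.append("possible central tendency error")       # flags Rater B
    print(f"{rater}: mean={m:.2f} sd={sd:.2f} {'; '.join(flags)}")
```

A screen like this cannot prove bias; it only identifies raters whose patterns warrant a norming conversation against the established standards.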
Example Rubric for Scientific Experiment in Biology Capstone Course by Virginia Johnson Anderson, Towson University
(From Walvoord and Anderson, Effective Grading: A Tool for Learning and Assessment, 1998, pp. 197-201)
Task Assignment: Semester-long assignment to design an original experiment, carry it out, and write it up in scientific report format.
Students are to determine which of two brands of a commercial product (e.g. two brands of popcorn) are "best." They must base their
judgment on at least four experimental factors (e.g. "% of kernels popped" is an experimental factor. Price is not, because it is written on
the package).
Title
5 – Is appropriate in tone and structure to a science journal; contains necessary descriptors and brand names, and allows the reader to anticipate the design.
4 – Is appropriate in tone and structure to a science journal; most descriptors present; identifies the function of the experimentation and suggests the design, but lacks brand names.
3 – Identifies function and brand name, but does not allow the reader to anticipate the design.
2 – Identifies function or brand name, but not both; lacks design information or is misleading.
1 – Is patterned after another discipline or missing.

Introduction
5 – Clearly identifies the purpose of the research; identifies interested audience(s); adopts an appropriate tone.
4 – Clearly identifies the purpose of the research; identifies interested audience(s).
3 – Clearly identifies the purpose of the research.
2 – Purpose present in the Introduction, but must be identified by the reader.
1 – Fails to identify the purpose of the research.

Scientific Format Demands
5 – All material placed in the correct sections; organized logically within each section; runs parallel among different sections.
4 – All material placed in the correct sections; organized logically within sections, but may lack parallelism among sections.
3 – Material placed in the right sections but not well organized within the sections; disregards parallelism.
2 – Some materials are placed in the wrong sections or are not adequately organized wherever they are placed.
1 – Material placed in wrong sections or not sectioned; poorly organized wherever placed.

Materials and Methods Section
5 – Contains effective, quantifiable, concisely organized information that allows the experiment to be replicated; is written so that all information inherent to the document can be related back to this section; identifies sources of all data to be collected; identifies sequential information in an appropriate chronology; does not contain unnecessary, wordy descriptions of procedures.
4 – As 5, but contains unnecessary information and/or wordy descriptions within the section.
3 – Presents an experiment that is definitely replicable; all information in the document may be related to this section; however, fails to identify some sources of data and/or presents sequential information in a disorganized, difficult pattern.
2 – Presents an experiment that is marginally replicable; parts of the basic design must be inferred by the reader; procedures not quantitatively described; some information in Results or Conclusions cannot be anticipated by reading the Methods and Materials section.
1 – Describes the experiment so poorly or in such a nonscientific way that it cannot be replicated.

Non-experimental Information
5 – Student researches and includes price and other non-experimental information that would be expected to be significant to the audience in determining the better product, or specifically states non-experimental factors excluded by design; interjects these at appropriate positions in the text and/or develops a weighted rating scale; integrates non-experimental information in the Conclusions.
4 – Student acts as above, but is somewhat less effective in developing the significance of the non-experimental information.
3 – Student introduces price and other non-experimental information, but does not integrate them into the Conclusions.
2 – Student researches and includes price effectively; does not include or specifically exclude other non-experimental information.
1 – Student considers price and/or other non-experimental variables as research variables; fails to identify the significance of these factors to the research.

Designing an Experiment
5 – Student selects experimental factors that are appropriate to the research purpose and audience; measures adequate aspects of these selected factors; establishes discrete subgroups for which data significance may vary; demonstrates an ability to eliminate bias from the design and bias-ridden statements from the research; selects appropriate sample size, equivalent groups, and statistics; designs a superior experiment.
4 – As 5, but student designs an adequate experiment.
3 – Student selects experimental factors that are appropriate to the research purpose and audience; measures adequate aspects of these selected factors; establishes discrete subgroups for which data significance may vary; research is weakened by bias OR by a sample size of less than 10.
2 – As 3, but research is weakened by bias AND an inappropriate sample size.
1 – Student designs a poor experiment.
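To see how such a rubric can support the kind of analysis described in the next paragraph, here is a minimal sketch, not from Walvoord and Anderson, in which all scores and the criterion names are invented for illustration; it tallies rubric scores across a set of reports to surface the weakest dimensions:

    # Each report is scored 1-5 on every rubric criterion (hypothetical data).
    reports = [
        {"Title": 4, "Introduction": 5, "Scientific Format": 3,
         "Materials and Methods": 2, "Designing an Experiment": 2},
        {"Title": 5, "Introduction": 4, "Scientific Format": 4,
         "Materials and Methods": 3, "Designing an Experiment": 3},
    ]

    def mean_by_criterion(reports):
        """Average score for each rubric criterion across all reports."""
        return {c: sum(r[c] for r in reports) / len(reports)
                for c in reports[0]}

    # Criteria with the lowest means flag where instruction needs attention.
    for criterion, mean in sorted(mean_by_criterion(reports).items(),
                                  key=lambda kv: kv[1]):
        print(f"{criterion}: {mean:.1f}")

Sorting by mean puts the weakest criteria first, which mirrors the reading of the scores described below.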
Applying this rubric to student capstone course work produced scores that showed a need for improvement in the
Design of Experiments and in Defining Operationally.
Student Scores for Science Reports Before and After Anderson Made Pedagogical Changes
(From Walvoord and Anderson, Effective Grading: A Tool for Learning and Assessment, 1998, p. 147)
After the course material was improved, the following year’s application of the rubric showed improvement.
Key to success: don’t skip one of these steps. Information related to Step #3 is presented in the material below.
The matrix has one row per learning objective (Objective #1, Objective #2, etc.) and a column for each of the
following questions:
How is this objective aligned with the curriculum? (Entries in this column identify courses and other aspects
of the curriculum that help students master each objective.)
How will this objective be assessed?
Who will be involved in the assessment?
The final column holds a summary of what was learned about each objective and the impact of these findings,
providing a written record of the assessment activities.
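To make the matrix concrete as a working record, a minimal sketch follows; the objective names, courses, and methods are hypothetical placeholders, and empty cells then make planning gaps easy to spot:

    # One row of the planning matrix per objective (all entries hypothetical).
    matrix = {
        "Objective #1": {
            "curriculum alignment": ["BIO 101", "BIO 210 lab"],
            "assessment method": "scoring rubric applied to lab reports",
            "who is involved": "all lab-section instructors",
            "findings and impact": "",  # written up after each cycle
        },
        "Objective #2": {
            "curriculum alignment": [],
            "assessment method": "",
            "who is involved": "",
            "findings and impact": "",
        },
    }

    # An objective with no curriculum alignment is taught nowhere; one with
    # no assessment method is never measured.
    for objective, row in matrix.items():
        gaps = [col for col, entry in row.items()
                if not entry and col != "findings and impact"]
        if gaps:
            print(objective, "has gaps in:", ", ".join(gaps))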
A good assessment program does the following (Palomba and Banta 1999):
Asks important questions
Reflects institutional mission
Reflects programmatic goals and objectives for learning
Contains a thoughtful approach to assessment planning
Is linked to decision making about curriculum
Is linked to processes such as planning and budgeting
Encourages involvement of individuals from on and off campus
Contains relevant assessment techniques
Includes direct evidence of learning
Reflects what is known about how students learn
Shares information with multiple audiences
Leads to reflection and action by faculty, staff, and students
Allows for continuity, flexibility, and improvement in assessment
(Rows of a similar assessment matrix, one per objective, appeared here.)
*Modified from Olds, Barbara & Miller, Ron (1998), “An Assessment Matrix for Evaluating Engineering
Programs”, Journal of Engineering Education, April, pp. 175-178.
Questions to consider when establishing or evaluating an assessment program (Huba and Freed 2000):
Does assessment lead to improvement so that the faculty can fulfill their responsibilities to students and
to the public? Two purposes for assessment: the need to assess for accountability and the need to assess
for improvement – they lead to two fundamentally different approaches to assessment.
Is assessment part of a larger set of conditions that promote change at the institution? Does it provide
feedback to students and the institution? Assessment should become integrated into existing processes
like planning and resource allocation, catalog revision, and program review.
Does assessment focus on using data to address questions that people in the program and at the
institution really care about? Focusing on questions such as
What do we want to know about our students’ learning?
What do we think we already know?
How can we verify what we think we know?
How will we use the information to make changes?
allows use of the data for improved learning in our programs.
Does assessment flow from the institution’s mission and reflect the faculty’s educational values? The
mission and educational values of the institution should drive the teaching function of the institution.
Does the educational program have clear, explicitly stated purposes that can guide assessment in the
program? The foundation for any assessment program is the faculty’s statement of student learning
outcomes describing what graduates are expected to know, understand, and be able to do at the end of the
academic program – When we are clear about what we intend students to learn, we know what we must
assess.
Is assessment based on a conceptual framework that explains relationships among teaching, curriculum,
learning, and assessment at the institution? The assessment process works best when faculty have a
shared sense of how learning takes place and when their view of learning reflects the learner-centered
perspective.
Do the faculty feel a sense of ownership and responsibility for assessment? Faculty must decide upon the
intended learning outcomes of the curriculum and the measures that are used to assess them – this
assessment data must then be used to make changes that are needed to strengthen and improve the
curriculum. Assessment may be viewed as the beginning of conversations about learning.
Do the faculty focus on experiences leading to outcomes as well as on the outcomes themselves? In the
learner-centered paradigm, the curriculum is viewed as the vehicle for helping students reach our intended
learning outcomes – assessment results at the program level provide information on whether or not the
curriculum has been effective.
Is assessment ongoing rather than episodic? Assessment must become part of standard practices and
procedures at the institution and in each program.
Is assessment cost-effective and based on data gathered from multiple measures? No one assessment
measure can provide a complete picture of what and how students are learning – both direct and indirect
measures should be used.
Does assessment support diversity efforts rather than restrict them? Assessment data help us understand
what students are learning, where they are having difficulty, and how we can modify instruction and the
curriculum to help them learn better – the process particularly benefits non-traditional student populations.
Is the assessment program itself regularly evaluated? Ongoing evaluation of assessment efforts helps
maximize the cost-effectiveness of assessment in that faculty and student efforts are used productively.
Does assessment have institution-wide support? Are representatives from across the educational
community involved? Administrators should play two key roles – that of providing administrative
leadership and that of providing educational leadership.
Matrix for Assessment Planning, Monitoring, or Reporting (Huba and Freed 2000)
(Matrix row labels: Student Learning/Development – Outcome 1, Etc.)
Below is an excerpt from a journalism program’s assessment report, illustrating how outcome data can be
summarized and acted upon:
On the other hand, the baseline data called for by the objectives specified for Outcome 1 should provide more useful
programmatic benchmark indicators. Items are indicated below with mean scores as called for.
Responses were uniformly positive, with fewer than 10% “disagree” and no “strongly disagree” on any one item.
Specifically, the items in brief and mean scores (“Strongly Agree” = 5, “Strongly Disagree” =1) reflecting Outcome One
objectives were:
1. I have adequate knowledge of the role of communication and information dissemination in society, including First
Amendment and related legal and ethical issues, and the rights and responsibilities of professional communicators. Mean =
4.1 (vs. 3.9 in Fall 2003)
2. I understand the applications of communication principles and theories to professional communication
skills and activities: Mean = 4.2 (vs. 4.1 in Fall 2003)
3. Ability to identify communication strategies for messages that inform, educate and/or persuade audiences as
appropriate: Mean = 4.5 (vs. 4.1 in Fall 2004).
We would like to say that the comparisons with Fall 2004 suggest at least positive consistency, with some slight but not
statistically significant improvements. However, an important caveat enters here: due to a printing error, an
inappropriate response scale was used for the questions asked in Fall 2004; i.e., the questions were posed under the
rubric “How good a job do you think the courses that you took in your major:”, but the response categories were identified
as being from “Strongly Agree” to “Strongly Disagree” on a five-point index. The gaffe was apparently not confusing
enough to keep students from responding, but on the next round the metric will obviously be changed to “Excellent,
very well...” etc., which will not allow direct comparisons with this semester’s data.
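As a check on the arithmetic behind figures like those reported above, here is a minimal sketch, not part of the program’s report, of computing an item mean and the “disagree” percentage on a 5-point Likert scale; the response counts are invented:

    # value -> number of students choosing it
    # (5 = Strongly Agree ... 1 = Strongly Disagree)
    responses = {5: 14, 4: 20, 3: 6, 2: 3, 1: 0}

    n = sum(responses.values())
    mean = sum(value * count for value, count in responses.items()) / n
    pct_disagree = 100 * (responses[2] + responses[1]) / n
    print(f"mean = {mean:.1f}; disagree or below = {pct_disagree:.0f}%")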
The baseline census survey began with a pre-test across all students in a required sophomore course (JT210 Newswriting).
That instrument is included under supplemental materials; it appears to have worked well based upon preliminary analyses
of results and inquiries made of students, and it will be repeated with the larger population in Fall 2004.
Program Improvements
The evaluation data are still such that after a year we are hesitant to pursue meaningful longitudinal interpretations of them.
However, the positive consistency is highly encouraging. The development of the items above, and open-ended responses
by students to the CLA and sophomore course questionnaires, has opened discussion of directions to emphasize in our
program, and possible shortcomings in curricular structure. Two immediate outcomes have been formal discussions initiated
by the chair among members of the faculty with public relations interests as to how to better manage a smoother flow
among those courses, with less duplication. Similar discussions were held among instructors of courses emphasizing media
technology over the same basic issues. Those will continue. How to more effectively integrate the concentrations without
losing the distinctive elements of each has been discussed as well. We obviously await further data beyond what are still
early efforts, however. In addition, the department this year is undergoing its six-year accreditation review by the
Accrediting Council on Education in Journalism and Mass Communication. These assessments are being included in that
review, and we await further comments from the accrediting body as to interpretation of them for accreditation purposes.
Supplemental Materials
JT440 Video Concentration Portfolio Evaluations
JT450 PR Concentration Portfolio Evaluations
JT450 PR Concentration p2 Student Overall Evaluation
JT465 Tech Concentration p1 Portfolio Evaluations
JT465 Tech Concentration p2 Student Overall Evaluation
JT465 Tech Concentration p3 Student Overall Evaluation
JTC Student Survey
References
Allen, Mary J., Assessing General Education Programs, Anker Publishing Company, Inc., 2006
Anderson, Lorin W. and Krathwohl, David R. (Eds.) with Airasian, Peter W., Cruikshank, Kathleen A., Mayer,
Richard E., Pintrich, Paul R., Raths, James, and Wittrock, Merlin C., A Taxonomy for Learning, Teaching, and
Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives, Addison Wesley Longman, Inc. 2001.
Bain, Ken, What the Best College Teachers Do, Harvard University Press, 2004
Banta, Trudy W., Lund, Jon P., Black, Karen E., and Oblander, Frances W., Assessment in Practice: putting
principles to work on college campuses, Jossey-Bass, 1996
Banta, Trudy W. and Associates (editors), Building a Scholarship of Assessment, Jossey-Bass, John Wiley &
Sons, 2002
Bloom, Benjamin S. (Ed.), Engelhart, Max D., Furst, Edward J., Hill, Walker H., and Krathwohl, David R.,
Taxonomy of Educational Objectives, The Classification of Educational Goals, Handbook I: Cognitive Domain,
David McKay Company, Inc. New York, 1954, 1956.
Bransford, John D., Brown, Ann L., and Cocking, Rodney R. (editors), How People Learn; National Research
Council Committee on Developments in the Science of Learning; National Academy Press, 1999
Bresciani, Marilee J., Zelna, Carrie L. and Anderson, James A., Assessing Student Learning and Development: A
Handbook for Practitioners, National Association of Student Personnel Administrators (NASPA), 2004
Brown, George, Bull, Joanna, and Pendlebury, Malcolm, Assessing Student Learning in Higher Education,
Routledge, New York, 1997
Diamond, Robert M., Designing and Assessing Courses & Curricula, Jossey-Bass Inc., 1998
Eder, Douglas J., “General Education Assessment Within the Disciplines”, The Journal of General Education,
Vol. 53, No. 2, pp. 135-157, 2004.
Erwin, T. Dary, Assessing Student Learning and Development: A Guide to the Principles, Goals, and Methods of
Determining College Outcomes, Jossey-Bass Inc., 1991
Fulks, Janet, “Assessing Student Learning in Community Colleges”, Bakersfield College, 2004, obtained at
http://online.bakersfieldcollege.edu/courseassessment/Default.htm
Harrow, Anita J., A taxonomy of the psychomotor domain: a guide for developing behavioral objectives, David
McKay Company, Inc., 1972
Hernon, Peter and Dugan, Robert E. (Editors), Outcomes Assessment in Higher Education: Views and
Perspectives, Libraries Unlimited, A Member of the Greenwood Publishing Group, Inc., 2004
Heywood, John, Assessment in Higher Education, Jessica Kingsley Publishers Ltd, London, 2000
Huba, Mary E. and Freed, Jann E., Learner-Centered Assessment on College Campuses: shifting the focus from
teaching to learning, Allyn & Bacon, 2000
Kirkpatrick, Donald L., Evaluating Training Programs: the four levels, 2nd edition, Berrett-Koehler Publishers,
Inc., 1998
Krathwohl, David R., Bloom, Benjamin S., and Masia, Bertram B., Taxonomy of Educational Objectives, The
Classification of Educational Goals, Handbook II: Affective Domain, Longman Inc., 1964
Mager, Robert F., Preparing Instructional Objectives: A critical tool in the development of effective instruction 3rd
edition, The Center for Effective Performance, Inc. 1997
Maki, Peggy L., Assessing for Learning: Building a sustainable commitment across the institution, Stylus
Publishing, LLC, American Association for Higher Education, 2004
Pagano, Neil, “Defining Outcomes for Programs and Courses”, June 2005 Higher Learning Commission
Workshop Making a Difference in Student Learning: Assessment as a Core Strategy, available at
http://www.ncahigherlearningcommission.org/download/Pagano_DefiningOutcomes.pdf
Palomba, Catherine A. and Banta, Trudy W., Assessment Essentials: planning, implementing, and improving
assessment in higher education, Jossey-Bass, John Wiley & Sons, Inc., 1999
Pellegrino, James W., Chudowsky, Naomi, and Glaser, Robert (editors), Knowing What Students Know: The
science and design of educational assessment, Committee on the Foundations of Assessment, Center for
Education, Division of Behavioral and Social Sciences and Education, National Research Council, National
Academy Press, 2001
Prus, Joseph and Johnson, Reid, “A Critical Review of Student Assessment Options”, in Assessment & Testing
Myths and Realities, edited by Trudy H. Bers and Mary L. Mittler, New Directions for Community Colleges,
Number 88, Winter 1994, pp. 69-83.
Spady, William G., Outcome-Based Education: Critical Issues and Answers, The American Association of School
Administrators, 1994.
Stevens, Dannelle D. and Levi, Antonia J., Introduction to Rubrics: An Assessment Tool to Save Grading Time,
Convey Effective Feedback, and Promote Student Learning, Stylus Publishing, 2005
Suskie, Linda, Assessing Student Learning: A common sense guide, Anker Publishing Company, 2004
Tagg, John, The Learning Paradigm College, Anker Publishing Company, Inc., 2003
Terenzini, Patrick T., “Assessment with open eyes: Pitfalls in studying student outcomes.” Journal of Higher
Education, Vol. 60, No. 6, pp. 644-664, November/December 1989.
Walvoord, Barbara E. and Anderson, Virginia J., Effective Grading: A Tool for Learning and Assessment, Jossey-
Bass, 1998
Walvoord, Barbara E., Assessment Clear and Simple, John Wiley & Sons, 2004
9 Principles of Good Practice for Assessing Student Learning (American Association for Higher Education)
Authors: Alexander W. Astin; Trudy W. Banta; K. Patricia Cross; Elaine El-Khawas; Peter T. Ewell; Pat Hutchings; Theodore J.
Marchese; Kay M. McClenney; Marcia Mentkowski; Margaret A. Miller; E. Thomas Moran; Barbara D. Wright
a. The assessment of student learning begins with educational values. Assessment is not an end in
itself but a vehicle for educational improvement. Its effective practice, then, begins with and enacts a vision
of the kinds of learning we most value for students and strive to help them achieve. Educational values
should drive not only what we choose to assess but also how we do so. Where questions about educational
mission and values are skipped over, assessment threatens to be an exercise in measuring what's easy,
rather than a process of improving what we really care about.
The college mission must be understood not just by the school’s faculty and staff but also by its students
and the community it serves. Assessment must be based on that which is truly important.
b. Assessment is most effective when it reflects an understanding of learning as multidimensional,
integrated, and revealed in performance over time.
Successful assessment techniques embody creativity, adaptability, reliability, and validity. Through the
use of multiple methods, triangulation, and the measurement of knowledge and performance over time,
effective assessment techniques can begin to capture and reflect the complex nature of learning.
c. Assessment works best when the programs it seeks to improve have clear, explicitly stated
purposes. Assessment is a goal-oriented process. It entails comparing educational performance with
educational purposes and expectations -- those derived from the institution's mission, from faculty intentions
in program and course design, and from knowledge of students' own goals. Where program purposes lack
specificity or agreement, assessment as a process pushes a campus toward clarity about where to aim and
what standards to apply; assessment also prompts attention to where and how program goals will be taught
and learned. Clear, shared, implementable goals are the cornerstone for assessment that is focused and
useful.
Assessment is most effective when it is based on clear and focused goals and objectives. It is from these
goals that educators fashion the coherent frameworks around which they can carry out inquiry. When
such frameworks are not constructed, assessment outcomes fall short of providing the direction
necessary to improve programs.
d. Assessment requires attention to outcomes but also and equally to the experiences that lead to
those outcomes. Information about outcomes is of high importance; where students "end up" matters
greatly. But to improve outcomes, we need to know about student experience along the way -- about the
curricula, teaching, and kind of student effort that lead to particular outcomes. Assessment can help us
understand which students learn best under what conditions; with such knowledge comes the capacity to
improve the whole of their learning.
Effective assessment strategies pay attention to process. Educational processes are essential to the
attainment of an outcome. Successful assessment practitioners understand that how students get there
matters.
e. Assessment works best when it is ongoing, not episodic.
Assessment strategies must be continually nurtured, evaluated, and refined in order to ensure success.
f. Assessment fosters wider improvement when representatives from across the educational
community are involved. Student learning is a campus-wide responsibility, and assessment is a way of
enacting that responsibility. Thus, while assessment efforts may start small, the aim over time is to involve
people from across the educational community. Faculty play an especially important role, but assessment's
questions can't be fully addressed without participation by student-affairs educators, librarians,
administrators, and students. Assessment may also involve individuals from beyond the campus (alumni/ae,
trustees, employers) whose experience can enrich the sense of appropriate aims and standards for learning.
Thus understood, assessment is not a task for small groups of experts but a collaborative activity; its aim is
wider, better-informed attention to student learning by all parties with a stake in its improvement.
Successful assessment is dependent upon the involvement of many individuals – each person contributes
his or her knowledge, expertise, and perspectives, thereby enhancing the overall assessment program.
Assessment therefore works best when it is conceptualized as a group effort.
g. Assessment makes a difference when it begins with issues of use and illuminates questions that
people really care about. Assessment recognizes the value of information in the process of improvement.
But to be useful, information must be connected to issues or questions that people really care about. This
implies assessment approaches that produce evidence that relevant parties will find credible, suggestive, and
applicable to decisions that need to be made. It means thinking in advance about how the information will be
used, and by whom. The point of assessment is not to gather data and return "results"; it is a process that
starts with the questions of decision-makers, that involves them in the gathering and interpreting of data,
and that informs and helps guide continuous improvement.
Successful assessment programs know how to use data. Assessment makes a difference when
meaningful data are collected, connected, and applied creatively to illuminate questions and provide a
basis for decision making. Only then can data guide continuous improvement.
h. Assessment is most likely to lead to improvement when it is part of a larger set of conditions
that promote change. Assessment alone changes little. Its greatest contribution comes on campuses
where the quality of teaching and learning is visibly valued and worked at. On such campuses, the push to
improve educational performance is a visible and primary goal of leadership; improving the quality of
undergraduate education is central to the institution's planning, budgeting, and personnel decisions. On such
campuses, information about learning outcomes is seen as an integral part of decision making, and avidly
sought.
Successful assessment is directed toward improvements. Those improvements may occur in teaching,
student learning, academic and support programs, or institutional effectiveness. The bottom line is that
assessment information must be applied systematically toward improvements if it is to have a lasting
impact on the institution.
i. Through assessment, educators meet responsibilities to students and to the public. There is a
compelling public stake in education. As educators, we have a responsibility to the publics that support or
depend on us to provide information about the ways in which our students meet goals and expectations. But
that responsibility goes beyond the reporting of such information; our deeper obligation -- to ourselves, our
students, and society -- is to improve. Those to whom educators are accountable have a corresponding
obligation to support such attempts at improvement.
Additional principle put forward by Banta, Lund, Black, and Oblander, 1996:
Without a supportive environment, most assessment efforts will fail to take root and grow.
The following questions are designed to help you and your faculty colleagues examine the processes by which you
are pursuing your goals for student learning in a program of study. Although most of these questions seem to call
for “Yes” or “No” answers, they are meant to prompt wider discussions.
If you answer “Yes” to a question, your self-study should briefly describe the “Who, What, When, Where, and
How” of that answer. If you answer “No,” the self-study should discuss whether you wish to improve in this regard
and how you plan to do so.
Learning Objectives
Have we explicitly defined what we want students who complete our program to know and be able to do?
(e.g., as employees, as graduate students, as citizens)
Do we work collaboratively to define program learning objectives, or is the task delegated to one or a few
individuals?
Do we consult sources beyond our own faculty when defining program learning objectives? (e.g.,
employers, students or graduates, comparable programs in other institutions, professional associations)
Do we communicate program learning objectives to students, employers or other stakeholders?
Do we periodically review program learning objectives to see how they might be improved?
(See also questions in the remaining focal areas on how we use program learning objectives.)
Quality Assurance
How do we assure ourselves that each course in the curriculum addresses agreed upon content, that
sound teaching practices are carried out appropriately and consistently, that assessments are conducted
as planned, and that agreed upon plans to improve courses or the program as a whole are implemented
by those responsible?
How do we assure ourselves that other faculty activities affecting students, such as academic
advisement, are being performed appropriately and consistently?
Do we provide meaningful, timely feedback and recognition to faculty regarding how they are performing
work related to the curriculum, teaching and learning, assessment, and other practices affecting
students?
Do we identify best practices in quality assurance and use this information to improve how we assure that
the work of the program is performed appropriately and consistently?
Do we periodically review our quality assurance practices to see how they might be improved?
The assessment literature is full of terminology such as “mission”, “goals”, “objectives”, “outcomes”, etc., but it
lacks a consensus on the precise meaning of each of these terms. Part of the difficulty stems from changes in
approaches to education – shifts from objective-based, to competency-based, to outcomes-based, etc. education
have taken place over the years with various champions of each espousing the benefits of using a different point
of view. As a result, some of the terminology associated with student learning outcomes may appear to an
“assessment newcomer” as confusing, and, at times, contradictory.
Regardless of which frame of reference is at the foundation of the approach to education involving student
learning outcome assessment, the notion of a ‘pyramid’ whereby more general statements about the mission/goals
of a program for student learning are supported by more detailed or specific statements of program/course student
learning objectives/outcomes is a good building block to use in trying to come to grips with assessment
terminology.
The Outcomes Pyramid shown below presents a pictorial clarification of the hierarchical relationships among
several different kinds of goals, objectives, and outcomes that appear in assessment literature.
The ‘pyramid’ image is chosen to convey the fact that increasing complexity and level of specificity are
encountered as one moves downward. The pyramid structure also reinforces the notion that learning flows from
the mission of the institution down to the units of instruction. As we will see, this pyramid is not intended as the
definitive description of these terms, as some organizations have defined terms to meet their specific needs. It
does, however, provide a general interpretation of common assessment terminology as will be elaborated upon
below.
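One way to see the hierarchy the pyramid describes is as a nested structure. The sketch below uses invented mission, goal, objective, and outcome text purely to show the containment relationships:

    # Mission -> Goals -> Objectives -> Outcomes (all content hypothetical).
    pyramid = {
        "mission": "Prepare graduates to contribute to their professions and communities.",
        "goals": [{
            "goal": "Students will communicate clearly.",
            "objectives": [{
                "objective": "Write a persuasive essay that defends a position.",
                "outcomes": [
                    "Student produces an essay supporting a position with cited evidence.",
                ],
            }],
        }],
    }

    print("Mission:", pyramid["mission"])
    for goal in pyramid["goals"]:
        print("  Goal:", goal["goal"])
        for obj in goal["objectives"]:
            print("    Objective:", obj["objective"])
            for outcome in obj["outcomes"]:
                print("      Outcome:", outcome)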
Outcomes Pyramid Definitions
A Mission Statement is a general, concise statement outlining the purpose guiding the practices of an institution or
school/college. Accrediting bodies expect that student learning outcomes flow from the mission statements of the
institution and school/college; i.e., the school/college mission should be in harmony with the mission statement of
the institution.
Goals are broad, general statements of what the program, course, or activity intends to accomplish. Goals
describe broad learning outcomes and concepts (what you want students to learn) expressed in general terms (e.g.,
clear communication, problem-solving skills, etc.)
Goals should provide a framework for determining the more specific educational objectives of a program, and
should be consistent with the mission of the program and the mission of the institution. A single goal may have
many specific subordinate learning objectives.
Note: A single Department within a School may offer several Programs. Hence, at times a Department may have
an overarching set of Goals which encompass all of the Program-specific goals. In dealing with student learning
outcomes associated with a program of study, it is perhaps best not to confuse the ‘organizational’ side of the
university (Department) with the ‘academic’ side (Program). Thus, in the Outcomes Pyramid the items below the
Mission statements are meant to pertain to Programs and Courses. The Program is assumed to be one which is
consistent with the mission of the organization within which it resides.
Objectives
Instructional Objectives describe in detail the behaviors that students will be able to perform at the conclusion of a
unit of instruction such as a class, and the conditions and criteria which determine the acceptable level of
performance.
Goals and Objectives are similar in that they describe the intended purposes and expected results of teaching
activities and establish the foundation for assessment. Goals are statements about general aims or purposes of
education that are broad, long-range intended outcomes and concepts; e.g., “clear communication”, “problem-
solving skills”, etc. Objectives are brief, clear statements that describe the desired learning outcomes of
instruction; i.e., the specific skills, values, and attitudes students should exhibit that reflect the broader goals.
There are three types of learning objectives, which reflect different aspects of student learning:
Cognitive objectives: “What do you want your graduates to know?”
Affective objectives: “What do you want your graduates to think or care about?”
Behavioral objectives: “What do you want your graduates to be able to do?”
What are the differences between Goals and Objectives? Both goals and objectives use the language of outcomes
– the characteristic which distinguishes goals from objectives is the level of specificity. Goals express intended
outcomes in general terms and objectives express them in specific terms. Goals are written in broad, global, and
sometimes vague, language. Objectives are statements that describe the intended results of instruction in terms of
specific student behavior. The two terms, objectives and outcomes, are often used interchangeably, however,
resulting in confusion.
Outcomes
Learning Outcomes are statements that describe significant and essential learning that learners have achieved, and
can reliably demonstrate at the end of a course or program. Learning Outcomes identify what the learner will
know and be able to do by the end of a course or program – the essential and enduring knowledge, abilities
(skills) and attitudes (values, dispositions) that constitute the integrated learning needed by a graduate of a course
or program. Learning outcomes normally include an indication of the evidence required to show that the learning
has been achieved and how that evidence is to be obtained.
The learning outcomes approach to education means basing program and curriculum design, content, delivery,
and assessment on an analysis of the integrated knowledge, skills and values needed by both students and society.
In this outcomes-based approach to education, the ability to demonstrate learning is the key point. This
demonstration of learning involves a performance of some kind in order to show significant learning or learning
that matters – knowledge of content must be manifested through a demonstration process of some kind.
This approach differs from more traditional academic approaches that emphasize coverage by its emphasis on:
basing curriculum on what students need to know and be able to do as determined by student and societal
needs not disciplinary tradition,
focusing on what students should be able to do rather than merely what knowledge they should possess as
a result of a course or program,
making explicit the development and assessment of generic abilities.
It differs from competency-based approaches in its emphasis on integration and the development of more general
abilities that are often overlooked in a competency approach. For example, competencies such as being able to
punctuate correctly or know appropriate vocabulary must be recognized as subordinate to the learning outcome of
writing and communicating effectively.
What are the differences between Objectives and Outcomes? Objectives are intended results or consequences of
instruction, curricula, programs, or activities. Outcomes are achieved results or consequences of what was
learned; i.e., evidence that learning took place. Objectives are focused on specific types of performances that
students are expected to demonstrate at the end of instruction. Objectives are often written more in terms of
teaching intentions and typically indicate the subject content that the teacher(s) intends to cover. Learning
outcomes, on the other hand, are more student-centered and describe what it is that the learner should learn.
Objectives statements can vary in form and nature – they can range from general ‘curriculum’ objectives, to more
specific ‘learning’ objectives, to even more specific ‘behavioral’ objectives. They may be expressed as intentions
on the part of the lecturer (e.g., ‘The objectives of this unit are to …’), or as desired outcomes (‘By the end of this
unit you should be able to….’). It is the latter form – the outcome statement – that has the most power in
informing teaching and learning, whether it be called a ‘learning outcome’, ‘learning objective’, or some other
name. An outcome statement clarifies intention. It is squarely focused on the learner and is performance-
oriented, beginning with an action verb (e.g., ‘demonstrate’, ‘apply’, etc.) and signaling the desired level of
performance. A learning outcome is thus an unambiguous statement of what the learner is expected to achieve
and how he/she is expected to demonstrate that achievement.
The most common way of expressing educational aims in academic courses is in terms of the “course objectives”.
“Course objectives” and “learning outcomes” are often contrasted. Because there is no fixed meaning to the
notion of course objectives, objectives commonly include statements about what the instructor intends to do
(“provide a basic introduction to…", “expose the student to…”) and statements about what both the instructor and
student will do (“there will be daily class discussions”) and often, outcome type statements about what the student
should know or be able to do at the end of the course. A mixture of “instructional intentions”, “inputs” and
“learning outcomes” often results.
Learning outcomes are an essential part of any unit outline. A learning outcome is a clear statement of what a
learner is expected to be able to do, know about and/or value at the completion of a unit of study, and how well
they should be expected to achieve those outcomes. It states both the substance of learning and how its
attainment is to be demonstrated.
Key to the learning outcomes approach to assessment is the use of “authentic assessment.” The idea of authentic
assessments is to create assignments and assessments that simulate as much as possible the situations in which
students would make integrated use of the knowledge, skills and values developed in a course. By focusing
assessment in this way, instructors emphasize their intention that students should be able to make use of their
learning outside of class. Instructors need to ask themselves what kind of student performance would give them
confidence that the student had understood and could apply the material learned.
An effective set of learning outcomes statements informs and guides both the instructor and the students:
For teaching staff: It informs:
the content of teaching
the teaching strategies you will use
the sorts of learning activities/tasks you set for your students
appropriate assessment tasks
course evaluation.
For students: The set of learning outcomes provides them with:
a solid framework to guide their studies and assist them to prepare for their assessment
a point of articulation with graduate attributes at course and/or university (i.e. generic) level.
Learning Outcome statements may be broken down into three main components:
an action word that identifies the performance to be demonstrated;
a learning statement that specifies what learning will be demonstrated in the performance;
a broad statement of the criterion or standard for acceptable performance.
For example:

(Geology)
Action word: To develop knowledge, understanding and skills related to the recognition and interpretation of
igneous and metamorphic rocks.
Learning statement: To explain the different magma geochemistries derived from partial melting of the mantle in
different tectonic regimes.
Criterion: Students should be able to demonstrate how magma geochemistry relates to partial melting of the
mantle by contrasting the outcomes of this process in different tectonic regimes through the critical analysis of
specific case studies.

(Biochemistry)
Action word: To explain the biochemical basis of drug design and development.
Learning statement: To demonstrate the application of molecular graphics to drug design.
Criterion: Students should be able to apply the principles underpinning the use of molecular graphics in the
design of drugs to illustrate general and specific cases through a computer-based presentation.

(English)
Action word: To introduce students to modes of satiric writing in the eighteenth century.
Learning statement: To familiarize students with a number of substantive eighteenth-century texts. Students will
be trained in the close reading of language and its relation to literary form.
Criterion: Students should be able to analyze the relationship between the language of satire and literary form by
the close examination of a selected number of eighteenth-century texts in a written essay.

(Engineering)
Action word: This course introduces senior engineering students to the design of concrete components of
structures and foundations and to their integration into overall design structures.
Learning statement: The student is able to function in teams.
Criterion: Functioning as a member of a team, the student will design and present a concrete structure which
complies with engineering standards.

(Geology)
Action word: Become acquainted with topographic maps and their usage.
Learning statement: Use topographic maps and employ these maps to interpret the physiography and history of
an area.
Criterion: Students should be able to
o Locate and identify features on topographic maps by latitude and longitude and township and range.
o Contour a topographic map and construct a topographic profile.
o Identify major landform features on topographic maps and relate them to basic geologic processes of
stream, groundwater, glacial or marine erosion and deposition.
o Interpret geologic maps and geologic cross-sections.

(Business)
Action word: Introduce students to business communication.
Learning statement: {Course level, stated by the instructor} The objective of this course is to expose students to
the complex nature of business communications, consolidations of financial statements, international accounting
issues, and accounting for partnerships.
Criterion: {Course level} At the end of this course, students will be able to
• Identify and describe the most common forms of business communication
• Consolidate financial statements as of the date of acquisition
• Consolidate financial statements subsequent to the date of acquisition
• Describe the formation and operations of partnerships
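The three-component structure lends itself to a simple template. The sketch below is an illustration rather than a prescribed format; it encodes the geology example as an action word, a learning statement, and a criterion, then assembles the outcome statement:

    from dataclasses import dataclass

    @dataclass
    class LearningOutcome:
        action_word: str          # the performance to be demonstrated
        learning_statement: str   # what learning the performance shows
        criterion: str            # the standard for acceptable performance

        def statement(self):
            return (f"Students should be able to {self.action_word} "
                    f"{self.learning_statement} {self.criterion}.")

    outcome = LearningOutcome(
        action_word="demonstrate",
        learning_statement="how magma geochemistry relates to partial melting of the mantle",
        criterion="through the critical analysis of specific case studies",
    )
    print(outcome.statement())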
As shown in the Outcomes Pyramid above, there is very often an interconnection between Objectives and
Outcomes at the program, course, and instructional unit levels. Teachers will modify objectives and outcomes
based on the success of the delivery of the subject matter.
Below is an example based on material from the Eastern Kentucky University Social Work program:
University Mission: Eastern Kentucky University is a student-centered comprehensive public university dedicated
to high-quality instruction, service, and scholarship.
Program Mission/Goals
Program Objectives 1. Apply critical thinking skills within the context of professional social work practice.
2. Practice within the values and ethics of the social work profession and with an understanding of
and respect for the positive value of diversity.
3. Demonstrate the professional use of self.
4. Understand the forms and mechanisms of oppression and discrimination and the strategies for
change that advance social and economic justice.
5. Understand the history of the social work profession and its current structures and issues.
6. Apply the knowledge and skills of generalist social work practice with systems of all sizes.
7. Apply knowledge of bio-psycho-social, cultural, and spiritual variables that affect individual
development and behavior, and use theoretical frameworks to understand the interactions among
individuals and between individuals and social systems (i.e., families, groups, organizations, and
communities).
8. Analyze the impact of social policies on client systems, workers, and agencies.
9. Evaluate research studies and apply findings to practice, and, under supervision, to evaluate their
own practice interventions and those of other relevant systems.
10. Use communication skills differentially with a variety of client populations, colleagues, and
members of the community.
11. Use supervision appropriate to generalist practice.
12. Function within the structure of organizations and service delivery systems, and under supervision,
seek necessary organizational change.
13. Analyze the impact of violence on the psychological, social, cultural, and spiritual functioning of
individuals, groups, organizations, communities, and society.
14. Apply understanding of the dynamics of violence when assessing and intervening with private
troubles and public issues.
15. Analyze the role of institutional and cultural violence in the creation and maintenance of social
oppression and economic injustice.
SWK 358 (Child Abuse and Neglect) Course Objectives
Students will learn the causes and effects of violence on the micro and macro levels. (Program Objectives 1, 4,
6, 7, 8, 13, 14, and 15)
Students will learn indicators and family dynamics of child neglect, physical abuse, sexual abuse, and emotional
maltreatment. (Program Objectives 1, 7, 13, 14, and 15)
Students will be able to identify and describe the interaction between individual developmental stages and
family developmental stages. (Program Objectives 1 and 7)
Students will utilize the principles of empowerment and the strengths perspective, as well as a systems
framework, to understand how individuals in families communicate and develop. (Program Objectives 2 and 7)
Students will learn the indicators and relationship dynamics of domestic violence as it relates to child abuse and
neglect. (Program Objectives 1, 6, 7, 13, 14, and 15)
Students will know reporting requirements for child abuse/neglect and spouse abuse/partner abuse and how to
make such abuse/neglect reports. (Program Objectives 1, 6, 13, 14, and 15)
Students will learn the roles of primary professionals involved in domestic violence cases and summarize the
effectiveness of the multidisciplinary approach. (Program Objectives 1, 6, 7, 13, 14, and 15)
Students will be able to diagram the present structure of the Public Child Welfare System and its relationship
with other community partners. (Program Objectives 5 and 8)
Students will gain knowledge of society’s response to child/spouse maltreatment, including current legislation.
(Program Objectives 1, 4, 8, 13, 14, and 15)
Students will learn systems issues contributing to violence and barriers impeding protection of victims.
(Program Objectives 1, 4, 5, 8, 13, 14, and 15)
Students will understand the social worker’s intervention roles and responsibilities in abuse/neglect situations.
(Program Objectives 1 and 5)
Students will be able to explain the most effective treatment modalities for intervening in CPS abuse and
neglect and domestic violence situations. (Program Objectives 1, 2, and 7)
Students will learn to identify the principles of advocacy for children and families. (Program Objectives 1, 2, 4,
6, 8, 10, and 12)
Students will be able to restate the roles and functions of the multiple partners needed in the collaborative
process necessary for the continuum of care provided to families. (Program Objectives 1, 5, 10, and 12)
Students will learn about the potential impact of cultural and ethnic background as it applies to family function
and system response. (Program Objectives 2, 4, 5, and 10)
SWK 358 (Child Abuse and Neglect) Course Outcomes
Students should be able to list at least five indicators of child abuse/neglect, and five indicators of domestic
violence.
Students should learn when and how to make a child or adult maltreatment report.
Students will know and be able to restate current legal responsibilities of the social worker in domestic violence
and child maltreatment cases.
Students should be able to describe at least five resources and community partners available to assist child and
adult victims in Kentucky.
Students will know and be able to relate at least three advocacy groups established to assist children and abused
women.
Students will be able to identify at least three deleterious effects of maltreatment of children and women.
Students will be able to identify at least five treatment modalities.
Students will be able to plan case and class advocacy strategies on behalf of maltreated children and women.
Students will be able to identify strengths and weaknesses of Kentucky’s child welfare system.
Students will be able to differentiate between at least three cultural practices and child maltreatment.
Students will be able to view family dynamics, strengths, and needs with cultural sensitivity.
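Because every course objective above is keyed to numbered program objectives, the alignment can be audited mechanically. A minimal sketch follows, using an abbreviated and paraphrased subset of the mappings above:

    # Course objective (abbreviated) -> program objectives it supports.
    course_objective_map = {
        "causes and effects of violence": {1, 4, 6, 7, 8, 13, 14, 15},
        "indicators and dynamics of maltreatment": {1, 7, 13, 14, 15},
        "reporting requirements": {1, 6, 13, 14, 15},
        "structure of the child welfare system": {5, 8},
    }

    program_objectives = set(range(1, 16))  # program objectives 1-15
    covered = set().union(*course_objective_map.values())
    print("Touched by this course:", sorted(covered))
    print("Left to other courses:", sorted(program_objectives - covered))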
ABET, Inc. accreditation criteria mandate that “Engineering programs must demonstrate that their students attain:
a. an ability to apply knowledge of mathematics, science, and engineering
b. an ability to design and conduct experiments, as well as to analyze and interpret data
c. an ability to design a system, component, or process to meet desired needs within realistic constraints
such as economic, environmental, social, political, ethical, health and safety, manufacturability, and
sustainability
d. an ability to function on multi-disciplinary teams
e. an ability to identify, formulate, and solve engineering problems
f. an understanding of professional and ethical responsibility
g. an ability to communicate effectively
h. the broad education necessary to understand the impact of engineering solutions in a global, economic,
environmental, and societal context
i. a recognition of the need for, and an ability to engage in life-long learning
j. a knowledge of contemporary issues
k. an ability to use the techniques, skills, and modern engineering tools necessary for engineering practice.”
These, therefore, make up Program Outcomes which may be augmented by any “additional outcomes articulated
by the program to foster achievement of its education objectives.” As an example, for Mechanical Engineering,
“the program must demonstrate that graduates have:
o knowledge of chemistry and calculus-based physics …
o the ability to apply advanced mathematics …
o familiarity with statistics and linear algebra
o the ability to work professionally in both thermal and mechanical systems areas including the design and
realization of such systems.”
The key here is that this is an outcomes-based approach whereby the outcomes are mandated rather than
developed from objectives. But, it is clear that some of these mandated attributes for a student graduating from an
Engineering program are worded in such a manner that a determination of “knowledge” or “familiarity” or … is
not at all clear. Thus, many programs developed Measurable Learning Outcomes based on these Program
Outcomes.
Hence, a variation on the Outcomes Pyramid more suitable for this Engineering scenario is as follows (as
originally given in the material by Yokomoto and Bostwick):
Program Educational Objectives are statements that describe what we expect graduates to be able to do a few
years after graduation. They describe the knowledge, skills, abilities, capacities, attitudes or dispositions you
expect students to acquire in your program. Program Educational Objectives are statements describing how a
program will satisfy constituency needs and fulfill its mission – the audience for objective statements are external
constituents such as prospective students, employers, student sponsors, etc.
Program Educational Objectives are more specific than the broad Goals of the program, and they are more general
than the Program Outcomes, which reside one level lower in the pyramid. Each of the Program Educational
Objectives should be linked to the Program Goals.
Program Outcomes
Program Outcomes describe the essential knowledge, skills and attitudes graduates are expected to have after
completing the program. They are statements that describe what the graduates of the curriculum will be able to
do; i.e., what students actually develop through their college experience. Each of your Program Outcomes should
be linked to one or more of your Program Objectives.
Assessment experts will tell you that these are often too broad to be assessed and should be broken down into
more measurable units. This can be done in several ways, one of which is through the development of
Measurable Learning Outcomes.
In an ideal assessment process, you should have a set of Measurable Learning Outcomes associated with each of
your Program Outcomes to help define what each Program Outcome means in terms of the terminology specific
to your program. They are more specific than your Program Outcomes, and they are more general than your
Course Outcomes, which reside at the next lower level in the pyramid. You may use them to articulate your
Program Outcomes or you may use them in your assessment of student learning, or both.
It may even be possible to let your Course Outcomes, which reside at the next lower level in the pyramid, serve
as your Measurable Learning Outcomes.
Course Outcomes
Course Outcomes are statements that describe the broad knowledge that students will obtain from a course. They
are detailed, specific, measurable or identifiable, and personally meaningful statements that are derived from the
course goals and articulate what the end result of the course is to achieve. They refer to the specific knowledge,
skills, or developmental attributes that a student actually develops through their course experience.
They should be written with active language that describes what students should be able to demonstrate to show
that they have accomplished the learning expected of them, and they should be reduced in number by combining
statements with common themes into a single statement. Active verbs such as “solve,” “compute,” “draw,”
“explain,” and “design,” etc., should be used, and passive terms such as “understand” and “know” should be
avoided.
The easiest way to write Course Outcomes is to start with your course outline, the table of contents of your
textbook, or the Course Instructional Objectives that reside at the next lower level in the pyramid and reduce
them to a set of broader outcomes. Course Outcomes should be put in your syllabus and in any publication that
communicates with your constituents.
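The advice about active verbs can even be screened for automatically. A minimal sketch follows; the verb list and the assumed phrasing (“... be able to <verb> ...”) are illustrative conventions, not a standard:

    # Verbs the text above says to avoid in outcome statements.
    PASSIVE_VERBS = {"understand", "know", "appreciate", "learn"}

    def first_verb(outcome):
        """Assumes outcomes are phrased '... be able to <verb> ...'."""
        tail = outcome.lower().split("be able to")[-1].strip()
        return tail.split()[0] if tail else ""

    drafts = [
        "Students will be able to understand thermodynamics.",
        "Students will be able to compute the efficiency of a heat engine.",
    ]
    for draft in drafts:
        flag = "REWRITE" if first_verb(draft) in PASSIVE_VERBS else "ok"
        print(f"[{flag}] {draft}")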
Unit Instructional Objectives describe in detail the behaviors that students will be able to perform at the
conclusion of a unit of instruction such as a class, and the conditions and criteria which determine the acceptable
level of performance. Unit Instructional Objectives have three components:
o A description of what the student will be able to do
o The conditions under which the student will perform the task
o The criteria for evaluating student performance
They are statements that define the circumstances by which it will be known if the desired change has occurred.
They are the intended student outcomes; i.e., the specific skills, values, and attitudes students should exhibit that
reflect the broader course objectives (e.g., for students in a freshman writing course, this might be “students are
able to develop a cogent argument to support a position”).
Experts in good practices in education tell us that student learning is enhanced when each student is provided with
a list of detailed Unit Instructional Objectives that tell them what they will be held responsible for within each
unit of instruction. These statements help students prepare for exams. Just as in the writing of Measurable
Learning Outcomes and Program Outcomes, instructional objectives should be written using active verbs.
An example from IUPUI Mechanical Engineering, which takes the step of defining Measurable Outcomes:
University Mission The mission of IUPUI is to provide for its constituents, excellence in:
Teaching and Learning
Research, Scholarship, and Creative Activity
Civic Engagement, Locally, Nationally, and Globally with each of these core activities characterized by:
o Collaboration within and across disciplines and with the community,
o A commitment to ensuring diversity, and
o Pursuit of best practices.
School Mission The mission of the IUPUI School of Engineering and Technology is to provide quality education, develop technical
leaders, and conduct basic and applied research. The School strives to enhance the local community through civic
responsibility and by promoting economic development.
Measurable Outcome a1 Ability to work with forces, moments, statics and dynamics of rigid bodies, electricity, material
chemistry, electrical circuits, basic digital electronics, basic fluid statics and dynamics, and basic heat
energy and thermodynamics.
Measurable Outcome a2 Ability to use multivariate calculus, differential equations, and linear algebra in solving problems in
fluid mechanics, heat and mass transfer, system modeling of dynamic systems, dynamic and control
systems.
Measurable Outcome a3 Ability to use statistics and probability in experiments and measurements. Use regression analysis to
determine relationships between measured dependent and independent variables.
Measurable Outcome a4 Ability to apply the knowledge of mathematics and science in solving problems in engineering sciences.
Measurable Outcome b Ability to conduct experiments methodically, analyze data and interpret results. Use regression
analysis to determine relationships between measured dependent and independent variables.
Measurable Outcome c1 Ability to design mechanical systems that meet desired needs, work in teams, communicate the
design process and results in the form of written reports, posters, and/or oral presentations.
Generate creative and multiple design ideas based on functional specifications and evaluate them
based on customer requirements.
Measurable Outcome c2 Ability to design thermal-fluid systems that meet desired needs, work in teams, communicate the
design process and results in the form of written reports, posters, and/or oral presentations.
Generate creative and multiple design ideas based on functional specifications and evaluate them
based on customer requirements.
Measurable Outcome d Ability to work in teams for solving multidisciplinary projects, such as in electromechanical, dynamic
systems and control system. Also, work on projects involving solid, thermal and fluid systems.
Measurable Outcome e Ability to identify an engineering problem, formulate it mathematically and find a solution for it.
Present the solution in the form of a software or hardware product, device or process that meets a
need in upper level design courses.
Measurable Outcome f Ability to: a) describe how an ethics course can help a practicing engineer, b) describe how codes of
ethics help an engineer work ethically, c) analyze a behavior using models of right and wrong, d)
analyze ethics codes using models of right and wrong, e) describe how group discussions can help
Comments: The terms “outcome,” “objective,” and “goal” have been commonly used in education circles, and
different people have different understandings of them. It would be wise to use phrases instead of single terms
when using these words, such as Program Outcomes instead of “outcomes” and Program Objectives or Unit
Instructional Objectives instead of simply using “objectives.”
Finally, below is a checklist to use when reviewing program-level learning outcome statements (Maki 2004):
The Program Mission Statement is a concise statement of the general values and principles which guide the
curriculum. It sets a tone and a philosophical position from which follow a program's goals and objectives. The
Program Mission Statement should define the broad purposes the program is aiming to achieve, describe the
community the program is designed to serve, and state the values and guiding principles which define its
standards.
Program Mission Statements must also be consistent with the principles of purpose set forth in the University's
mission and goals statements. Accrediting bodies expect that Program Mission Statements are in harmony with
mission statements of the institution, school/college, and/or department. Therefore, a good starting point for any
program mission statement is to consider how the program mission supports or complements the University,
school/college, and department missions and strategic goals.
“The mission of (name of your program or unit) is to (your primary purpose) by providing (your
primary functions or activities) to (your stakeholders).” (Additional clarifying statements)
(Note: the order of the pieces of the mission statement may vary from the above structure.)
This tells who the organization is, what it intends to do, for whom it intends to do it, and by what means (how) it
intends to do it.
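A toy illustration (not from the source) of how the template assembles who, what, how, and for whom into a single sentence; every value below is a hypothetical placeholder.

    # Hypothetical placeholders standing in for a real program's answers
    # to who / what / how / for whom.
    template = ("The mission of {unit} is to {purpose} "
                "by providing {functions} to {stakeholders}.")

    print(template.format(
        unit="the Department of Applied Widgetry",
        purpose="prepare graduates for professional practice",
        functions="rigorous coursework, advising, and research opportunities",
        stakeholders="undergraduate and graduate students",
    ))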
Program Goals are general statements of what the program intends to accomplish. Program Goals are broad
statements of the kinds of learning we hope students will achieve – they describe learning outcomes and concepts
(what you want students to learn) in general terms (e.g., clear communication, problem-solving skills, etc.)
Program Goals are statements of long range intended outcomes of the program and the curriculum. They describe
the knowledge, skills, and values expected of graduates and should be consistent with the mission of the program
and the mission of the institution.
Program Goals flow from the mission and provide the framework for determining the more specific educational
learning objectives and outcomes of a program. Goals describe overarching expectations such as "Students will
develop effective written communication skills." or "Students will understand the methods of science."
The main function of the Program Goals statement is to form a bridge between the lofty language of the Mission
Statement and the concrete-specific nuts and bolts of program objectives. The Program Goals statement becomes
a blueprint for implementing the mission by answering the following questions:
How do program goals relate to the program mission?
How does this program fit into a student's overall development?
What general categories of knowledge and abilities will distinguish your graduates?
For each principle of the mission, what are the key competency categories graduates of the program
should know or be able to do?
“Ideal graduate”:
Describe the “perfect student” in your program in terms of his/her knowledge, abilities, values, and
attitudes. Which of these characteristics can be directly attributed to the program experience?
Describe the “ideal student” at various phases in your program, focusing on the abilities, knowledge,
values, and attitudes that this student has either acquired or has had supported as a result of your program.
Then answer
o What does the student know? (cognitive)
o What can the student do? (performance/skills)
o What does the student care about? (affective)
Think what an ideal unit or program would look like and how its services and operations (refer to your
mission) would need to be conducted to reach that vision – think of how you would improve, minimize,
maximize, provide, etc. Then state these ideas as goals.
List the skills and achievements expected of graduates of the program. Describe the program alumni in
terms of their achievements, such as career accomplishments, lifestyles, and community involvement.
Use these to identify overarching goals.
Existing material review
Review current material which may shed light on program goals; e.g., catalog descriptions, program
review reports, mission and vision statements, accrediting agency documents, etc. List five to seven of
the most important goals identified in the sources listed above. Prioritize the list of important goals in
terms of their importance to your program and their contribution to a student’s knowledge, abilities,
attitudes, and values.
Course goals inventory
Review course syllabi, assignments, tests, and any additional materials and categorize the instructional
materials into (i) recall or recognition of factual information, (ii) application and comprehension, or (iii)
critical thinking and problem solving. From this inventory, determine the goals which are taught and use
them as a starting point for determining program goals.
Review other programs’ goals
Often broad overarching goal statements are quite similar from program to program and from institution
to institution. Looking at what is in use elsewhere can reaffirm or serve as a starting point for
brainstorming.
Note: a single goal may have many specific subordinate learning objectives.
Examples:
to graduate students who are prepared for industry
to adequately prepare students for graduate school
University Mission:
Broad exposure to the liberal arts ... for students to develop their powers of written and spoken
expression ...
Program Goal:
The study of English enables students to improve their writing skills, their articulation ...
English Composition Course Goal:
Students will learn to acknowledge and adjust to a variety of writing contexts.
Learning Outcome:
The student will demonstrate through discussion an awareness that audiences differ and that
readers’ needs/expectations must be taken into account as one writes
Objectives
Goals and Objectives are similar in that they describe the intended purposes and expected results of teaching
activities and establish the foundation for assessment. Goals are statements about general aims or purposes of
education that are broad, long-range intended outcomes and concepts; e.g., “clear communication”, “problem-
solving skills”, etc. Objectives are brief, clear statements that describe the desired learning outcomes of
instruction; i.e., the specific skills, values, and attitudes students should exhibit that reflect the broader goals.
There are three types of learning objectives, which reflect different aspects of student learning:
Cognitive objectives: “What do you want your graduates to know?”
Affective objectives: “What do you want your graduates to think or care about?”
Behavioral Objectives: “What do you want your graduates to be able to do?”
Instructional Objectives describe in detail the behaviors that students will be able to perform at the conclusion of a
unit of instruction such as a class, and the conditions and criteria which determine the acceptable level of
performance.
What are the differences between Goals and Objectives? Both goals and objectives use the language of outcomes
– the characteristic which distinguishes goals from objectives is the level of specificity. Goals express intended
outcomes in general terms and objectives express them in specific terms.
Outcomes
Learning Outcomes are statements that describe significant and essential learning that learners have achieved, and
can reliably demonstrate at the end of a course or program. Learning Outcomes identify what the learner will
know and be able to do by the end of a course or program – the essential and enduring knowledge, abilities
(skills) and attitudes (values, dispositions) that constitute the integrated learning needed by a graduate of a course
or program.
The learning outcomes approach to education means basing program and curriculum design, content, delivery,
and assessment on an analysis of the integrated knowledge, skills and values needed by both students and society.
In this outcomes-based approach to education, the ability to demonstrate learning is the key point.
What are the differences between Objectives and Outcomes? Objectives are intended results or consequences of
instruction, curricula, programs, or activities. Outcomes are achieved results or consequences of what was
learned; i.e., evidence that learning took place. Objectives are focused on specific types of performances that
students are expected to demonstrate at the end of instruction. Objectives are often written more in terms of
teaching intentions and typically indicate the subject content that the teacher(s) intends to cover. Learning
outcomes, on the other hand, are more student-centered and describe what it is that the learner should learn.
Learning outcomes are statements that specify what learners will know or be able to do as a result of a learning
activity; i.e., the outcomes that students must meet on the way to attaining a particular degree.
Example:
Poor: Students should know the historically important systems of psychology.
This is poor because it says neither what systems nor what information about each system
students should know. Are they supposed to know everything about them or just names?
Should students be able to recognize the names, recite the central ideas, or criticize the
assumptions?
Better: Students should know the psychoanalytic, Gestalt, behaviorist, humanistic, and cognitive
approaches to psychology.
This is better because it says what theories students should "know", but it still does not
detail exactly what they should "know" about each theory, or how deeply they should
understand whatever it is they should understand.
Best: Students should be able to recognize and articulate the foundational assumptions, central ideas,
and dominant criticisms of the psychoanalytic, Gestalt, behaviorist, humanistic, and cognitive
approaches to psychology.
This is the clearest and most specific statement of the three examples. It clarifies how
one is to demonstrate that he/she "knows". It provides even beginning students an
understandable and very specific target to aim for. It provides faculty with a reasonable
standard against which they can compare actual student performance.
Learning objectives specify both an observable behavior and the object of that behavior.
"Students will be able to write a research paper."
In addition, the criterion could also be specified:
"Students will be able to write a research paper in the appropriate scientific style."
Optionally, the condition under which the behavior occurs can be specified:
"At the end of their field research, students will be able to write a research paper in the appropriate
scientific style."
Note that the verb you choose will help you focus on what you assess. For example, consider the following
“Students will be able to do research.”
Here the verb do is vague and open to many interpretations; i.e., do you mean identify an appropriate research
question, review the literature, establish hypotheses, use research technology, collect data, analyze data, or interpret results?
Also beware of statements that bundle two different measures into a single outcome; for example:
Customers will be highly satisfied with the service and requests for service will increase
(Here you need to measure satisfaction separately from the number of requests for service.)
Student learning outcome statements should be aligned with mission statements (and goals if applicable).
Student learning outcome statements should clearly indicate the level and type of competence that is
required of graduates of a program. The following information should be included in a well-defined
learning outcome statement.
o Areas/fields that are the focus of the assessment.
o Knowledge, abilities, values and attitudes that a student in your program is expected to have
within that area/field.
o Depth of the knowledge, abilities, values and attitudes expected of a student in your program.
Student learning outcome statements should be distinctive and specific. Examples of generic and
distinctive outcomes are provided below:
Example of a generic outcome:
Students completing the Engineering program will be practiced in design skills.
Example of a distinctive outcome:
Engineering graduates will demonstrate knowledge of math, science, and engineering
fundamentals. Specifically, the student will have the ability to: demonstrate general
design principles; use fundamental engineering techniques, skills, and tools for
engineering practice; analyze and interpret data to produce meaningful conclusions and
recommendations.
Student learning outcome statements should be framed in terms of the program and not individual courses
or students.
Student learning outcome statements should be simple. Do not join elements in one objective statement
that cannot be assessed by a single assessment method.
Example of a “bundled” statement:
Engineering graduates will demonstrate knowledge of math, science, and engineering
fundamentals, and gain competency in basic skills as writing reports, communicating
research ideas and oral presentations.
Note: This would likely require two different methods of assessment. Oral presentations would
require a different approach than assessing knowledge of mathematics.
Student learning outcome statements should describe intended learning outcomes and not the actual
outcomes. Learning outcome statements should describe the abilities, knowledge, values and attitudes
expected of students after completion of the program and not the actual results.
Student learning outcome statements should be stated such that the outcome can be measured by more
than one assessment method. An outcome statement should not impose restrictions on the type or number
of assessment methods that may be used.
Many program brochures include learning outcomes which are unclear or represent elements of curriculum rather
than some action the participants will demonstrate. Consider the example
"Participants will develop an appreciation of cultural diversity in the workplace."
If you ask a simple question ("Can it be measured?"), you see readily that this learning outcome has
shortcomings. It is not measurable – one needs to know how a student will demonstrate that he/she “appreciates”.
If you modify this outcome statement by changing the action verb, a useful statement will result:
"Participants will summarize in writing their feelings about cultural diversity in the workplace."
Learners now have a much better idea of what is expected of them. What is the importance of action verbs?
Since the learner's performance should be observable and measurable, the verb chosen for each outcome
statement should be an action verb which results in overt behavior that can be observed and measured.
Examples
A. Fine Arts
Broad: Students will demonstrate knowledge of the history, literature and function of the theatre,
including works from various periods and cultures.
More specific: Students will be able to explain the theoretical bases of various dramatic genres
and illustrate them with examples from plays of different eras.
Even more specific, specifying the conditions: During the senior dramatic literature course, the
students will be able to explain the theoretical bases of various dramatic genres and illustrate
them with examples from plays of different eras.
B. Philosophy
Broad: The student will be able to discuss philosophical questions.
More specific: The student is able to develop relevant examples and to express the significance of
philosophical questions.
C. General Education
Broad: Students will be able to think in an interdisciplinary manner.
More specific: Asked to solve a problem in the student's field, the student will be able to draw
from theories, principles, and/or knowledge from other disciplines to help solve the problem.
D. Business
Broad: Students will understand how to use technology effectively.
More specific: Each student will be able to use word processing, spreadsheets, databases, and
presentation graphics in preparing their final research project and report.
To sum up, objectives/outcomes provide the necessary specificity which allows students to know what it is they
are to learn. To reach this level of specificity often requires several iterations.
Beginning in 1948, a group of educators undertook the task of classifying education goals and objectives. The
intention was to develop a classification system for three domains: the cognitive, the affective, and the psychomotor.
This taxonomy of learning behaviors can be thought of as the goals of training; i.e., after a training session, the
learner should have acquired new skills, knowledge, and/or attitudes. This has given rise to the obvious short-
hand variations on the theme which summarize the three domains; for example, Skills-Knowledge-Attitude, KAS,
Do-Think-Feel, etc.
The cognitive domain involves knowledge and the development of intellectual skills. This includes the recall or
recognition of specific facts, procedural patterns, and concepts that serve in the development of intellectual
abilities and skills. The affective domain includes the manner in which we deal with things emotionally, such as
feelings, values, appreciation, enthusiasms, motivations, and attitudes. The psychomotor domain includes
physical movement, coordination, and use of the motor-skill areas.
Work on the cognitive domain was completed in 1956 and is commonly referred to as Bloom's Taxonomy of the
Cognitive Domain, since the editor of the volume was Benjamin S. Bloom; the full title was Taxonomy of
Educational Objectives: The Classification of Educational Goals. Handbook I: Cognitive Domain (Longman,
1956), with four additional authors (Max D. Engelhart, Edward J. Furst, Walker H. Hill, and
David R. Krathwohl).
The major idea of the taxonomy is that what educators want students to know (and, therefore, statements of
educational objectives) can be arranged in a hierarchy from less to more complex. Bloom identified six levels
within the cognitive domain (knowledge, comprehension, application, analysis, synthesis, and evaluation), from
the simple recall or recognition of facts, as the lowest level, through increasingly more complex and abstract
mental levels, to the highest order, which is classified as evaluation.
Cognitive learning is demonstrated by knowledge recall and the intellectual skills: comprehending information, organizing ideas,
analyzing and synthesizing data, applying knowledge, choosing among alternatives in problem-solving, and evaluating ideas or actions
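As a reference aid, here is a minimal sketch in Python (not part of these notes) pairing the six cognitive levels, ordered simplest to most complex, with commonly cited action verbs of the kind recommended earlier for writing objectives; the verb lists are conventional examples rather than an authoritative mapping.

    # The six cognitive levels of the 1956 taxonomy, ordered simplest to most
    # complex, each paired with a few commonly cited action verbs.
    BLOOM_COGNITIVE_LEVELS = [
        ("Knowledge",     ["define", "list", "recall", "name"]),
        ("Comprehension", ["explain", "summarize", "paraphrase"]),
        ("Application",   ["apply", "demonstrate", "solve", "use"]),
        ("Analysis",      ["analyze", "compare", "differentiate"]),
        ("Synthesis",     ["design", "construct", "formulate"]),
        ("Evaluation",    ["evaluate", "judge", "critique", "justify"]),
    ]

    for rank, (level, verbs) in enumerate(BLOOM_COGNITIVE_LEVELS, start=1):
        print(f"{rank}. {level}: {', '.join(verbs)}")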
The second domain of Bloom's Taxonomy, the Affective Domain, was detailed by Bloom, Krathwohl, and Masia in 1964
(Taxonomy of Educational Objectives: Volume II, The Affective Domain). Bloom's theory advocates this structure
and sequence for developing attitude – also now commonly expressed in the modern field of personal
development as 'beliefs'. Again, as with the other domains, the Affective Domain detail provides a framework for
teaching, training, assessing and evaluating the effectiveness of training and lesson design and delivery, and also
the retention by and affect upon the learner or trainee.
Krathwohl's affective domain taxonomy is perhaps the best known of any of the affective taxonomies. The
taxonomy is ordered according to the principle of internalization. Internalization refers to the process whereby a
person's affect toward an object passes from a general awareness level to a point where the affect is 'internalized'
and consistently guides or controls the person's behavior.
Responding refers to active participation on the part of the student. At this level he or she not only attends to a
particular phenomenon but also reacts to it in some way. Learning outcomes in this area may emphasize
acquiescence in responding (reads assigned material), willingness to respond (voluntarily reads beyond
assignment), or satisfaction in responding (reads for pleasure or enjoyment). The higher levels of this category
include those instructional objectives that are commonly classified under “interest”; that is, those that stress the
seeking out and enjoyment of particular activities.
Illustrative verbs: answers, assists, complies, conforms, discusses, greets, helps, labels, performs, practices,
presents, reads, recites, reports, selects, tells, writes.
Examples: completing homework assignments; participating in team problem-solving activities; questioning new
ideals, concepts, models, etc. in order to fully understand them.
Valuing is concerned with the worth or value a student attaches to a particular object, phenomenon, or behavior.
This ranges in degree from the simpler acceptance of a value (desires to improve group skills) to the more
complex level of commitment (assumes responsibility for the effective functioning of the group). Valuing is based
on the internalization of a set of specified values, but clues to these values are expressed in the student's overt
behavior. Learning outcomes in this area are concerned with behavior that is consistent and stable enough to
make the value clearly identifiable. Instructional objectives that are commonly classified under “attitudes” and
“appreciation” would fall into this category.
Illustrative verbs: completes, describes, differentiates, explains, follows, forms, initiates, invites, joins, justifies,
proposes, reads, reports, selects, shares, studies, works.
Examples: accepting the idea that integrated curricula are a good way to learn; participating in a campus blood
drive; demonstrating belief in the democratic process; showing the ability to solve problems; informing
management on matters that one feels strongly about.
Organization is concerned with bringing together different values, resolving conflicts between them, and
beginning the building of an internally consistent value system. Thus the emphasis is on comparing, relating, and
synthesizing values. Learning outcomes may be concerned with the conceptualization of a value (recognizes the
responsibility of each individual for improving human relations) or with the organization of a value system
(develops a vocational plan that satisfies his or her need for both economic security and social service).
Instructional objectives relating to the development of a philosophy of life would fall into this category.
Illustrative verbs: adheres, alters, arranges, combines, compares, completes, defends, explains, generalizes,
identifies, integrates, modifies, orders, organizes, prepares, relates, synthesizes.
Examples: recognizing one's own abilities, limitations, and values and developing realistic aspirations; accepting
responsibility for one's behavior; explaining the role of systematic planning in solving problems; accepting
professional ethical standards; prioritizing time effectively to meet the needs of the organization, family, and self.
Characterization by a value or value set means the individual has a value system that has controlled his or her
behavior for a sufficiently long time for him or her to develop a characteristic “life-style.” Thus the behavior is
pervasive, consistent, and predictable. Learning outcomes at this level cover a broad range of activities, but the
major emphasis is on the fact that the behavior is typical or characteristic of the student. Instructional objectives
that are concerned with the student's general patterns of adjustment (personal, social, emotional) would be
appropriate here.
Illustrative verbs: acts, discriminates, displays, influences, listens, modifies, performs, practices, proposes,
qualifies, questions, revises, serves, solves, uses, verifies.
Examples: a person's lifestyle influencing reactions to many different kinds of situations; showing self-reliance
when working independently; using an objective approach in problem solving; displaying a professional
commitment to ethical practice on a daily basis; revising judgments and changing behavior in light of new
evidence.
Various people have since built on Bloom's work, notably in the third domain, the 'psychomotor' or skills domain,
which Bloom originally identified in a broad sense but never fully detailed. This was apparently because
Bloom and his colleagues felt that the academic environment held insufficient expertise to analyze and create a
suitably reliable structure for the physical-ability 'psychomotor' domain. As a result, several different
contributors have provided work in this third domain, such as Simpson and Harrow, whose taxonomies are described below.
The psychomotor domain includes physical movement, coordination, and use of the motor-skill areas.
Development of these skills requires practice and is measured in terms of speed, precision, distance, procedures,
or techniques in execution. The seven major categories of Simpson's taxonomy, listed from the simplest behavior
to the most complex, are perception, set, guided response, mechanism, complex overt response, adaptation, and
origination.
Psychomotor learning is demonstrated by physical skills: coordination, dexterity, manipulation, grace, strength, speed; actions which
demonstrate the fine motor skills such as use of precision instruments or tools, or actions which evidence gross motor skills such as the
use of the body in dance or athletic performance
Another taxonomy for the psychomotor domain, due to Harrow, is organized according to the degree of
coordination, including involuntary responses as well as learned capabilities. Simple reflexes begin at the lowest
level of the taxonomy, while complex neuromuscular coordination makes up the highest level.
Reflex movements are actions elicited without learning in response to some stimuli. Examples include: flexion,
extension, stretch, postural adjustments.
Basic fundamental movements are inherent movement patterns formed by combining reflex movements; they are
the basis for complex skilled movements. Examples are: walking, running, pushing, twisting,
gripping, grasping, manipulating.
Perceptual abilities refer to the interpretation of various stimuli (visual, auditory, kinesthetic, or tactile
discrimination) that enable one to make adjustments to the environment; this level suggests cognitive as well as
psychomotor behavior. Examples include: coordinated movements such as jumping rope, punting, or catching.
Physical activities require endurance, strength, vigor, and agility, which produce a sound, efficiently functioning
body. Examples are: all activities which require a) strenuous effort for long periods of time; b) muscular exertion;
c) a quick, wide range of motion at the hip joints; and d) quick, precise movements.
Skilled movements are the result of the acquisition of a degree of efficiency when performing a complex task.
Examples are: all skilled activities obvious in sports, recreation, and dance.
Non-discursive communication is communication through bodily movements, ranging from facial expressions
through sophisticated choreography. Examples include: body postures, gestures, and facial expressions
efficiently executed in skilled dance movement and choreography.
An objective
Is an intent communicated by a statement describing a proposed change in a learner
Is a statement of what the learner is to be like when he/she has successfully completed a learning
experience
An instructional objective describes an intended outcome rather than a description or summary of content. A
usefully stated objective is stated in behavioral, or performance, terms that describe what the learner will be doing
when demonstrating his/her achievement of the objective. The statement of objectives for an entire program of
instruction will consist of several specific statements.
Course objective:
What a successful learner is able to do at the end of the course
Is a description of a product, of what the learner is supposed to be like as a result of the process
The statement of objectives of a program must denote measurable attributes observable in the graduate of the
program; otherwise it is impossible to determine whether or not the program is meeting the objectives. Tests or
examinations are the milestones along the road of learning and are supposed to tell the teacher and the student the
degree to which both have been successful in their achievement of the course objectives. But unless goals are
clearly and firmly fixed in the minds of both parties, tests are at best misleading; at worst, they are irrelevant,
unfair, or useless. To be useful they must measure performance in terms of the goals.
An advantage of clearly defined objectives is that the student is provided the means to evaluate his/her own
progress at any place along the route of instruction; thus, the student knows which activities on his/her part are
relevant to his/her success. A meaningfully stated objective is one that succeeds in communicating to the reader
the writer's instructional intent and one that excludes the greatest number of possible alternatives to the goal.
Words open to many interpretations: to know, to understand, to enjoy, to appreciate, to grasp the
significance of, to comprehend, to believe.
Words open to fewer interpretations: to write, to recite, to identify, to differentiate, to solve, to
construct, to list, to compare, to contrast.
The idea is to describe what the learner will be doing when demonstrating that he/she “understands” or
“appreciates”.
A useful objective identifies the kind of performance that will be accepted as evidence that the learner has
achieved the objective. An objective always states what a learner is expected to be able to do and/or produce to
be considered competent. Two examples:
Be able to ride a unicycle. (The performance stated is ride.)
Be able to write a letter. (The performance stated is writing; the product is a letter.)
Performances may be visible, like writing, repairing, or painting; or invisible, like adding, solving, or identifying.
If a statement does not include a visible performance (or a visible indicator of a covert one), it isn't yet an objective.
To state an objective that will successfully communicate your educational intent, you will sometimes have to
define terminal behavior further by stating the conditions you will impose upon the learner when he/she is
demonstrating his/her mastery of the objective. As a simple example:
(a) “To be able to solve problems in algebra.”
vs. (b) “Given a linear-algebraic equation with one unknown, the learner must be able to solve
for the unknown without the aid of references, tables, or calculating devices.”
In (b) we see a much better-defined statement of the conditions under which solving an algebraic equation
will occur.
Your statement should be detailed enough that the target behavior would be recognized by another competent
person, and detailed enough that other possible behaviors would not be mistaken for the desired behavior. You
should describe enough conditions for the objective to imply clearly the kind of test items appropriate for
sampling the behavior you are interested in developing.
Examples:
“Given a list of 35 chemical elements, be able to recall and write the valences of at least 30.”
‘Given a list’ – Tells us something about the conditions under which the learner will be recalling
the valences of elements.
‘at least 30’ – Tells us something about what kind of behavior will be considered ‘passing’;
30 out of 35 is the minimum acceptable skill.
“Given a product and prospective customer, be able to describe the key features of the product.”
The performance is to occur in the presence of a product and a customer; these are the conditions
that will influence the nature of the performance, and so they are stated in the objective.
To avoid surprises when working with objectives, we state the main intent of the objective and describe the main
condition under which the performance is to occur. For example, “Be able to hammer a nail …” is different from
“Given a brick, be able to hammer a nail …”.
Miscommunications can be avoided by adding relevant conditions to the objective by simply describing the
conditions that have a significant impact on the performance – in other words, describe the givens and/or
limitations within which the performance is expected to occur. Some simple examples:
With only a screwdriver …
Without the aid of references …
Given a standard set of tools and the TS manual …
Guiding questions:
What will the learner be expected to use when performing (e.g., tools, forms, etc.)?
What will the learner not be allowed to use while performing (e.g., checklists or other aids)?
What will be the real-world conditions under which the performance will be expected to occur (e.g., on
top of a flagpole, under water, in front of a large audience, in a cockpit, etc.)?
Are there any skills that you are specifically not trying to develop? Does the objective exclude such
skills?
Scheme to fulfill step [2]:
Given an objective and a set of test items or situations, accept or reject each test item on the basis of
whether the objective defines (includes) the behavior asked for. If you must accept all kinds of test
items as appropriate, the objective needs to be more specific. If the objective allows you to accept
those items you intend to use and allows you to reject those items you do not consider relevant or
appropriate, the objective is stated clearly enough to be useful.
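The scheme is essentially a filter driven by human judgment; a toy sketch in Python (hypothetical, not from the source) makes that explicit by treating the reviewer's accept/reject judgment as a predicate.

    from typing import Callable, Iterable, List

    def screen_items(items: Iterable[str],
                     samples_objective: Callable[[str], bool]) -> List[str]:
        # Keep only the test items whose behavior the objective defines;
        # the accept/reject judgment itself comes from a human reviewer.
        return [item for item in items if samples_objective(item)]

    # Hypothetical use with the French-question objective in example (i) below:
    candidates = [
        "Translate the following French sentences.",
        "Reply, in French, to the following questions.",
    ]
    accepted = screen_items(candidates,
                            lambda item: item.startswith("Reply, in French"))
    print(accepted)  # only the item that asks for replying, not translating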
(i) Objective: “When asked a question in French, the student must be able to demonstrate his/her
understanding of the question by replying, in French, with an appropriate sentence.”
Inappropriate test situations:
“Translate the following French sentences.”
“Translate the following French questions.”
Appropriate test situation:
“Reply, in French, to the following questions.”
(ii) Objective: “To be able to solve a simple linear equation.”
Inappropriate test situation:
“If seven hammers cost seven dollars, how much does one hammer cost?”
Appropriate test situation:
“Solve for x in the following: 2 + 4x = 12”
Key point: If you expect the student to learn how to solve word problems, then teach him/her how
to solve word problems. Do not expect him/her to learn to solve word problems by teaching
him/her how to solve equations. The only appropriate way to test to see whether they have
learned to solve equations (as stated in the objective) is to ask them to solve equations.
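For concreteness, the appropriate test item above calls for a single line of algebra:
2 + 4x = 12, so 4x = 12 - 2 = 10 and x = 10/4 = 2.5.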
(iii) Objective: “Given a DC motor of ten horsepower or less that contains a single malfunction, and
given a standard kit of tools and references, the learner must be able to repair the motor
within a period of 45 minutes.”
Test question: “Given a motor with trouble in it, locate the trouble.”
Appropriate (Yes or No)?:
No! The objective asked for repairing behavior rather than locating behavior. ‘Repair the
motor’ means to make it work. Making it work is the desired behavior. The test item
sampled only a portion of the behavior called for by the objective.
You can increase the ability of an objective to communicate what it is you want the learner to be able to do by
telling the learner how well you want him/her to be able to do it. If you can specify at least the minimum
acceptable performance for each objective, you will have a performance standard against which to test your
instructional programs; you will have a means for determining whether your programs are successful in achieving
your instructional intent. Indicate in your statement of objectives what the acceptable performance will be, by
adding words that describe the criterion of success.
An objective describes the criteria of acceptable performance; that is, it says how well someone would have to
perform to be considered competent. For example,
“Given a computer with word-processing software, be able to write a letter”
could have a criterion of “all words are spelled correctly, there are no grammatical or punctuation errors, and the
addressee is not demeaned or insulted”. Thus, you complete your objective by adding information that describes
the criterion for success, keeping in mind that if it isn't measurable, it isn't an objective.
Summary
A statement of instructional objectives is a collection of words or symbols describing one of your
educational intents.
An objective will communicate your intent to the degree you have described what the learner will be
doing when demonstrating his/her achievement and how you will know when he/she is doing it.
To describe terminal behavior (what the learner will be doing)
o Identify and name the overall behavior act.
o Define the important conditions under which the behavior is to occur (givens or restrictions).
o Define the criterion of acceptable performance.
To prepare an objective
o Write a statement that describes the main intent or performance expected of the student.
o If the performance happens to be covert, add an indicator behavior through which the main intent
can be detected.
o Describe relevant or important conditions under which the performance is expected to occur.
Add as much description as is needed to communicate the intent to others.
Revise as needed to create a useful objective, i.e., continue to modify a draft until these questions are
answered:
o What do I want students to be able to do?
o What are the important conditions or constraints under which I want them to perform?
o How well must students perform for me to be satisfied?
Write a separate statement for each objective; the more statements you have, the better chance you have
of making clear your intent.
Sociology
As indicated in the American Sociological Association's publication Liberal Learning and the Sociology Major Updated: Meeting the
Challenge of Teaching Sociology in the Twenty-First Century, by K. McKinney, C. Howery, K. Strand, E. Kain, and C. Berheide
(A Report of the ASA Task Force on the Undergraduate Major, American Sociological Association, 2004)
The sociology major should study, review, and demonstrate* understanding of the following:
1. The discipline of sociology and its role in contributing to our understanding of social reality, such that the student will be
able to:
(a) describe how sociology differs from and is similar to other social sciences and to give examples of these
differences;
(b) describe how sociology contributes to a liberal arts understanding of social reality; and
(c) apply the sociological imagination, sociological principles, and concepts to her/his own life.
2. The role of theory in sociology, such that the student will be able to:
(a) define theory and describe its role in building sociological knowledge;
(b) compare and contrast basic theoretical orientations;
(c) show how theories reflect the historical context of the times and cultures in which they were developed; and
(d) describe and apply some basic theories or theoretical orientations in at least one area of social reality.
3. The role of evidence and qualitative and quantitative methods in sociology, such that the student will be able to:
(a) identify basic methodological approaches and describe the general role of methods in building sociological
knowledge;
(b) compare and contrast the basic methodological approaches for gathering data;
(c) design a research study in an area of choice and explain why various decisions were made; and
(d) critically assess a published research report and explain how the study could have been improved.
4. The technical skills involved in retrieving information and data from the Internet and using computers appropriately for
data analysis. The major should also be able to do (social) scientific technical writing that accurately conveys data findings
and to show an understanding and application of principles of ethical practice as a sociologist.
5. Basic concepts in sociology and their fundamental theoretical interrelations, such that the student will be able to define,
give examples, and demonstrate the relevance of culture; social change; socialization; stratification; social structure;
institutions; and differentiations by race/ethnicity, gender, age, and class.
6. How culture and social structure operate, such that the student will be able to:
(a) show how institutions interlink in their effects on each other and on individuals;
(b) demonstrate how social change factors such as population or urbanization affect social structures and
individuals;
(c) demonstrate how culture and social structure vary across time and place and the effects of such variations; and
(d) identify examples of specific policy implications using reasoning about social-structural effects.
7. Reciprocal relationships between individuals and society, such that the student will be able to:
(a) explain how the self develops sociologically;
(b) demonstrate how societal and structural factors influence individual behavior and the self’s development;
(c) demonstrate how social interaction and the self influences society and social structure; and
(d) distinguish sociological approaches to analyzing the self from psychological, economic, and other approaches.
8. The macro/micro distinction, such that the student will be able to:
(a) compare and contrast theories at one level with those at another;
(b) summarize some research documenting connections between the two; and
(c) develop a list of research or analytical issues that should be pursued to more fully understand the connections
between the two.
9. In depth at least two specialty areas within sociology, such that the student will be able to:
(a) summarize basic questions and issues in the areas;
(b) compare and contrast basic theoretical orientations and middle range theories in the areas;
(c) show how sociology helps understand the area;
(d) summarize current research in the areas; and
(e) develop specific policy implications of research and theories in the areas.
10. The internal diversity of American society and its place in the international context, such that the student will be able to
describe:
(a) the significance of variations by race, class, gender, and age; and
(b) how to appropriately generalize or resist generalizations across groups.
11. To think critically, such that the student will be able to:
(a) move easily from recall, analysis, and application to synthesis and evaluation;
(b) identify underlying assumptions in particular theoretical orientations or arguments;
(c) identify underlying assumptions in particular methodological approaches to an issue;
(d) show how patterns of thought and knowledge are directly influenced by political-economic social structures;
(e) present opposing viewpoints and alternative hypotheses on various issues; and
(f) engage in teamwork where many or different viewpoints are presented.
* “Demonstrate” means that the student will be able to show or document appropriate mastery of the material and/or skills,
and thus that this mastery can be assessed (with an exam, a presentation, by a portfolio, and so forth).
Psychology
This document represents the work of the Task Force on Undergraduate Psychology Major Competencies appointed by the American
Psychological Association’s Board of Educational Affairs. The document has been endorsed by the Board of Educational Affairs, March
2002, but does not represent policy of the APA.
Knowledge, Skills, and Values Consistent with the Science and Application of Psychology
Knowledge, Skills, and Values Consistent with Liberal Arts Education that are Further Developed in Psychology
Chemical Engineering
The educational objectives in the undergraduate program in the Department of Chemical Engineering are to:
educate students in chemical engineering fundamentals and practice;
train students in chemical process design and integration;
train students in critical thinking and in the identification, formulation, and solution of open-ended engineering
problems;
help students be aware of their responsibility to conduct ethical, safe, and environmentally conscious engineering;
train students to be good communicators and function effectively as individuals and in teams;
provide students with knowledge of contemporary issues and understanding of the impact of engineering practices in
global and societal contexts; and
teach students the necessity and tools for continued, life-long learning.
In addition, students completing the undergraduate program in chemical engineering acquire the ability and skills to:
apply knowledge of mathematics, science, and engineering;
design and conduct experiments and analyze and interpret data;
use modern engineering tools, skills, and methods for engineering practice;
design processes and systems to meet desired performance specifications;
identify, formulate, and solve engineering problems;
understand professional and ethical responsibilities;
communicate effectively in oral and written forms;
function effectively on multidisciplinary teams;
understand the impact of engineering solutions in global and societal contexts;
know contemporary issues; and
recognize the need for and have an ability to engage in life-long learning.
English
The undergraduate degree in English emphasizes knowledge and awareness of:
canonical and noncanonical works of English and American literature;
the general outlines of the history of British and American literature;
literary theories, including recent theoretical developments; and
the social and historical contexts in which the traditions developed.
In addition, students completing the degree in English are expected to acquire the ability and skills to:
analyze literary texts;
interpret texts on the basis of such analysis;
relate analyses and interpretations of different texts to one another; and
communicate such interpretations competently in written form.
The undergraduate degree in creative writing emphasizes knowledge and awareness of:
literary works, including the genres of fiction, poetry, playwriting, and screenwriting, and the major texts of
contemporary writers;
literary history, including the origins and development of genres, major writers of the past, and the role of the writer
in society; and
literary analysis, including theories of literary composition and critical theory.
In addition, students completing the degree in creative writing are expected to acquire the ability and skills to:
write in different poetic modes and styles;
write in various fictive styles; and
evaluate other students' written work.
History
The undergraduate degree in history emphasizes knowledge and awareness of:
the main topics in the political, social, cultural, and economic history of the United States, from its origins to the
present;
the main topics in the political, social, cultural, and economic history of western civilization, from its origins in
antiquity to the present;
the main topics in the political, social, cultural, and economic history of one or more geographic areas outside
Europe and America; and
methodology in historical studies.
In addition, students completing the degree in history are expected to acquire the ability and skills to:
research and conduct an investigation, consulting appropriate works for developing a bibliography;
distinguish between primary and secondary sources, analyze arguments and interpretations, and recognize
interpretative conflicts;
interpret evidence found in primary sources and develop an historical argument based on and sustained by the
evidence available; and
produce historical essays that are coherent, cogent, and grammatically correct.
Mathematics
The undergraduate degree in mathematics emphasizes knowledge and awareness of:
basic real analysis of one variable;
calculus of several variables and vector analysis;
basic linear algebra and theory of vector spaces;
the structure of mathematical proofs and definitions; and
at least one additional specialized area of mathematics.
In addition, students completing a degree in mathematics are expected to acquire the ability and skills to:
use techniques of differentiation and integration of one and several variables;
solve problems using differentiation and integration;
solve systems of linear equations;
give direct proofs, proofs by contradiction, and proofs by induction;
formulate definitions;
read mathematics without supervision; and
utilize mathematics.
Sociology
The undergraduate degree in sociology emphasizes knowledge and awareness of:
the basic data, concepts, theories, and modes of explanation appropriate to the understanding of human societies;
the structure of modern American society, its social stratification, its ethnic, racial, religious, and gender
differentiation, and its main social institutions - family, polity, economy, and religion;
the basic social processes that maintain and alter social structure, especially the processes of integration,
organization, and conflict; and
the diversity of human societies, including the differences between major historical types such as foraging,
agricultural, industrial, and post-industrial societies.
In addition, students completing the degree in sociology are expected to acquire the ability to:
locate and consult works relevant to a sociological investigation and write a sociological paper that is coherent,
cogent, and grammatically correct;
understand the basic procedures of sociological research and analyze sociological data;
understand and interpret the results of sociological research; and
integrate and evaluate sociological writings.
Each program must identify its general goals and its learning objectives in three main areas: declarative knowledge,
intellectual skills, and student attitudes.
English
General goals of the Undergraduate program:
The undergraduate majors in English and Rhetoric aim to develop students’
familiarity with literatures written in English and with the outlines of British and American literary
tradition;
understanding of texts in their cultural and historical contexts;
appreciation for the aesthetic qualities of literature and literary production;
awareness of critical and interpretive methods;
critical reading, thinking, and communication skills.
Desired Learning Outcomes:
Declarative Knowledge: The English and Rhetoric majors aim to increase students’ familiarity with:
literary terms, forms, and genres;
representative authors and cultural characteristics of major literary historical periods;
critical and interpretive methods;
principles of composition and bibliographic reference.
Intellectual Skills and Abilities: The English and Rhetoric majors aim to improve students’ ability
to comprehend texts from a variety of historical periods and cultures and to relate them to each other
formally, thematically, culturally, or historically;
to understand the process by which literature is produced in response to and in reaction against prior
literary texts and cultural settings;
to construct critical and interpretive arguments;
to reflect self-consciously on the cultural, psychological, and aesthetic bases of literary response;
to write clear, coherent, and persuasive essays;
to locate, evaluate, and use responsibly a variety of research materials from both the print and electronic
media;
to create original poetry, prose fiction, or drama;
to adapt expository writing to different audiences and purposes.
Attitudes: The English and Rhetoric majors aim to increase students’
appreciation for the aesthetic pleasures of literature and good writing;
openness to a variety of cultural or ethnic perspectives;
awareness of and reflection on personal values and openness to the possibility of self-transformation
through reading and creating literature;
commitment to intellectual honesty and integrity in the use of sources;
confidence in critical thinking and analytic skills.
General Goals: the English Graduate Program in Literature and Writing Studies seeks to develop:
the ability to conduct significant research in the fields of literary criticism and writing studies;
the ability to teach a range of courses in Composition and in English, American, and World Literatures in
English;
the ability to understand and contribute to issues and debates in the field.
Desired Learning Outcomes
Declarative knowledge:
broad knowledge of several of the historical fields in, literary genres of, and major critical approaches to
English, American, and World Literatures in English; or, broad knowledge of Writing Studies issues and
methodologies;
specialized competence in the primary and secondary literature of an appropriate specialized sub-field of
Literature or Writing Studies;
development of a range of teaching methods and strategies appropriate for particular courses.
Intellectual Skills and Abilities:
the ability to analyze literary and cultural texts with originality and rigor in the light of contemporary
theory and to contribute to the field;
the ability to write publishable critical essays and a book-length dissertation;
teaching excellence.
Attitudes:
respect for and understanding of the literatures and cultures of different historical periods, nationalities,
genders, and ethnicities;
respect for and appropriate use and acknowledgment of the scholarly work of others;
respect for and commitment to students’ intellectual growth.
History
General Goals: the undergraduate program in history seeks to develop:
Effective learning and reasoning skills;
Understanding of some of the various areas of history, including historiography and methodology.
Career-Transferable Skills: transferable, functional abilities that are required in many different problem-solving and
task-oriented situations.
o information management skills
o design and planning skills
o research and investigation skills
o communications skills
o human relations and interpersonal skills
o critical thinking skills
o management and administration skills
Learning Objectives.
Declarative Knowledge
The student should command:
An understanding of the central concepts and language of history; and
General competence in the historical areas the student has chosen to study.
Intellectual Skills:
Ability to formulate and solve research problems; and
Effective written and verbal communication skills.
o Focus: A well-focused piece of writing or presentation is one in which all of the elements work
together toward a common, coherent goal. Such a piece of writing might discuss many different
perspectives, but the goal of the discussion will be clear, and the different elements will each
contribute toward meeting that goal.
o Support: Supporting evidence plays a crucial role in any academic writing or presentation, because
academic writing is generally argumentative or persuasive. To convince or persuade in a logic-
driven genre, one needs evidence.
o Organization: A well-organized piece of writing makes the reader's job easier – it helps bring the
reader efficiently and comfortably to the thesis or objective and then through the argumentation
which supports that thesis. Organization is all about intentionality – when an academic writer is
writing well, the arrangement of her material is rarely accidental, but rather is carefully chosen so
that her argument is represented in the best possible way.
Attitudes. The student should:
Promote cross-cultural awareness and understanding
Subscribe to the ethical codes of the historical discipline based on the American Historical Association’s
Statement on Standards of Professional Conduct, 1998 Edition.
Business Administration
Desired Learning Outcomes
Students pursuing either the B.S. degree or the M.S. degree in Business Administration are expected to have:
knowledge and understanding of the basic functional areas of business management;
knowledge and understanding of one or more areas of concentration including the critical skills necessary
to solve business problems;
knowledge of written and verbal communication skills, and computer use;
knowledge of the legal and international environments in which businesses operate;
knowledge of mathematics and statistics sufficient to apply quantitative reasoning and analysis;
knowledge of the economics, political science, and behavioral science fields, to be able to manage human and material resources effectively.
In addition, students completing these two degrees are expected to demonstrate the ability to:
apply basic business principles to solve new and recurring decision problems;
conceptualize and analyze business problems;
communicate their conceptualization, analyses, and solutions effectively, both verbally and in writing.
Economics
Goal 1 Develop the ability to explain core economic terms, concepts and theories.
Objective 1.1 Explain supply, demand, and the function of markets and prices as allocative
mechanisms.
Objective 1.2 Apply the concept of equilibrium at the macro and micro economic levels.
Objective 1.3 Identify key macroeconomic indicators and measures of economic changes and
growth.
Objective 1.4 Identify and discuss the key concepts underlying international trade and
international financial flows.
Objective 1.5 Assess the role of both domestic and international institutions and laws in shaping different economic outcomes, especially in the context of market-based economies.
European Studies
Goal 1 Illustrate knowledge of the cultural history of Europe.
Objective 1.1 Compare the origins of a specific cultural manifestation in two or more European
countries.
Objective 1.2 Differentiate among the diverse cultures that form modern Europe.
Objective 1.3 Interpret differing perspectives on European unity.
Women’s Studies
Goal 1 Understand the intersectionality of different dimensions of social organization (gender, race, class, culture, etc.) as concepts and as lived experience.
Objective 1.1 Articulate a way of looking at the world from the standpoint of diverse women
nationally and internationally.
Objective 1.2 Discuss the way that gender is shaped by race, class, and culture.
Objective 1.3 Identify ways that people negotiate and represent multiple identities.
College of Education
Postsecondary Education Leadership Program
Goal 1 Describe and evaluate the major theories of adult learning and select a theory(ies) upon which to
build practice in a postsecondary environment.
Objective 1.1 Recognize the major adult developmental stages affecting learning.
Objective 1.2 Design a lesson, unit, or program taking into account adult developmental tasks
associated with one or more stages.
Objective 1.3 Construct a philosophy about adult learning and teaching adults utilizing adult
learning theories.
College of Sciences
Biology Department
Goal 1 Explain the interactions of organisms with their environments and with each other.
Objective 1.1 Describe ecosystems as consisting of populations of organisms plus physical characteristics, nutrient cycles, energy flow, and controls.
Objective 1.2 Explain how populations of the same and different species interact dynamically in
communities.
Psychology
Goal 1 Understand the developmental, cognitive, social, and biological bases of normal and
abnormal/maladaptive behavior.
Objective 1.1 Explain the roles of persons and situations as causes of behavior.
Objective 1.2 Explain the nature-nurture controversy, and cite supportive findings from different
areas of psychology for each side.
Goal 2 Understand the process of psychological inquiry, including the formulation of hypotheses and the
methods and designs used to test hypotheses.
Objective 2.1 Formulate scientific questions using operational definitions.
Objective 2.2 Demonstrate familiarity with the concepts and techniques of testing hypotheses.
The material below is a comprehensive approach to showing how individual course objectives support overall
program goals/objectives/outcomes and how courses in a curriculum align to provide intended learning outcomes.
For an article describing the system see http://www.engineer.ucla.edu/stories/2004/eeweb1.htm
In partnership with its constituencies, the mission of the Electrical Engineering Department at UCLA is:
¤ To produce highly qualified, well-rounded, and motivated students with fundamental knowledge in Electrical Engineering to
serve California, the Nation, and the World.
¤ To pursue creative research and new technologies in Electrical Engineering and across disciplines in order to serve the
needs of industry, government, society, and the scientific community by expanding the body of knowledge in the field.
¤ To develop partnerships with industrial and government agencies.
¤ To achieve visibility by active participation in conferences and technical and community activities.
¤ To publish enduring scientific articles and books.
In consultation with its constituents, the Electrical Engineering Department at UCLA has set its educational
objectives as follows:
1: Fundamental Knowledge: Graduates of the program will be skilled in the fundamental concepts of electrical engineering
necessary for success in industry or graduate school.
2: Specialization: Graduates of the program will be prepared to pursue career choices in electrical engineering,
computer engineering, biomedical engineering, or related interdisciplinary fields that benefit
from a strong background in applied sciences or engineering.
3: Design Skills: Graduates of the program will be prepared with problem solving skills, laboratory skills, and
design skills for technical careers.
4: Professional Skills: Graduates of the program will be prepared with communication and teamwork skills as well as
an appreciation for ethical behavior necessary to thrive in their careers.
5: Self Learning: Graduates of the program will be prepared to continue their professional development through
continuing education and personal development experiences based on their awareness of
library resources and professional societies, journals, and meetings.
Program Constituents:
The Program Educational Objectives are determined and evaluated through a regular consultation and
examination process that involves four core constituents: Students, Alumni, Industry, and Faculty.
¤ Student input is obtained through a standing departmental Student Advisory Committee consisting of representatives from
several student organizations, student representation in regular faculty meetings, annual departmental Town Hall meetings,
exit interviews with graduating students, student evaluation forms, and individual faculty-student advisee interaction.
¤ Alumni input is obtained through a standing departmental Alumni Advisory Board, surveys with department Alumni, and
exit surveys with graduating students.
¤ Industry input is obtained through surveys with industry participants at the annual departmental Research Symposium,
surveys with department Alumni, and surveys with participants in the department's Industry Affiliate Program.
¤ Faculty input is obtained through a standing ABET departmental committee, regular faculty meetings, annual departmental
retreats, and the departmental courses and curriculum committee. Input from other engineering faculty in the School of
Engineering and Applied Science is obtained through the Faculty Executive Committee.
In addition, in order to facilitate the participation of the constituencies in the formulation and evaluation of the
Program Educational Objectives, and in order to solicit further input and feedback, these objectives are publicized
on the Department's web page, in the Department's Annual Report, and in the School of Engineering and Applied
Science catalog of courses.
Students graduating from the Electrical Engineering Department at UCLA will be expected and prepared to
exercise the skills and abilities (a) through (n) listed in the table of Program Outcomes below. The table also
indicates how the Program Outcomes relate to the Program Educational Objectives.
Program Educational
Objectives
1 2 3 4 5
a. Ability to apply knowledge of mathematics, science, and engineering. X X X X
b. Ability to design and conduct experiments, as well as to analyze and interpret data. X X X X
c. Ability to design a system, component, or process to meet desired needs. X X X X
d. Ability to function on multi-disciplinary teams. X X X X X
e. Ability to identify, formulate, and solve engineering problems. X X X X
f. Understanding of professional and ethical responsibility. X
g. Ability to communicate effectively. X X
h. Broad education necessary to understand the impact of engineering solutions in a global and societal context. X X X
i. Recognition of the need for, and an ability to engage in life-long learning. X X X
j. Knowledge of contemporary issues. X X
k. Ability to use the techniques, skills, and modern engineering tools necessary for engineering practice. X X X X
l. Knowledge of probability and statistics, including applications to electrical engineering. X X X X
m. Knowledge of mathematics through differential and integral calculus, and basic and engineering sciences, necessary to analyze and design complex electrical and electronic devices, software, and systems containing hardware and software components, as appropriate to electrical engineering. X X X X
n. Knowledge of advanced mathematics, including differential equations, linear algebra, and complex variables. X X X X
Assessment Tools:
The assessment process of the Program Educational Objectives relies on several tools that seek feedback from
students, instructors, alumni, the Alumni Advisory Board, and the Student Advisory Committee. The input is analyzed by the department, its instructors, and its ABET committee.
Assessment Tool / Administered By / Examined By (these tools address the program outcomes specific to each course):
¤ End-of-course surveys (Quarterly): administered by Department & Instructors; examined by ABET Committee.
¤ Student comments (Quarterly): administered by Department & Instructors; examined by ABET Committee.
¤ Instructor evaluation reports (Quarterly): administered by Department; examined by ABET Committee.
¤ ABET problems (Quarterly): administered by Instructors and TAs; examined by ABET Committee.
¤ Classroom work (Quarterly): administered by Instructors and TAs; examined by Instructors and TAs.
¤ Course performance reports (Quarterly): administered by Department; examined by Instructors and TAs.
¤ Course performance history plots (Quarterly): administered by Department; examined by Instructors and TAs.
Implementation:
The assessment process is meant to ensure that the Program Outcomes that are important to the Mission of the Department
and its Program Educational Objectives are being monitored and measured. The results of the assessment process are regularly reviewed by the department and its ABET committee.
The constituents (Faculty, Students, Alumni, and Industry) of the department are engaged in the following manner in the
department assessment activities.
Faculty and Instructors. Prior to the start of an undergraduate course, every instructor is advised to review the:
1. Course Objectives and Outcomes Form of the course, in order to become familiar with the expected outcomes for the course and how these specific course outcomes relate to the overall Program Outcomes.
2. Past Course Performance Form of the course, in order to become familiar with the performance of the course in prior offerings and to identify any points of weakness that may require additional emphasis.
During the quarter and at its end, every instructor is then expected to:
1. Save samples of student works (homework and exam solutions, lab and design reports) on a regular basis.
2. Assess the contribution of the course to its Strong Program Outcomes through the selection of an ABET problem and
by evaluating student performance on the problem.
3. Upload the information pertaining to the ABET problem, its solution, and sample student responses into the course
archives.
4. Complete and file an Instructor Evaluation of Student Performance in order to comment on the overall course
performance towards meeting its objectives and specific outcomes.
5. Encourage students to participate in the End-of-Course Surveys.
The teaching assistants of undergraduate courses also participate in the above tasks.
Students. The department engages its undergraduate students and collects their feedback for accreditation purposes through
the online End-of-Course Student Surveys. The Student Surveys collect student input on course material, course
organization, and instruction. Besides asking students questions about the quality of a course and its instruction, the surveys
also assess, for each course, the main topics that students are expected to have been exposed to during the course. Students
are asked to rate, on a scale from Poor to Excellent, whether they feel they have had an opportunity to learn the Specific Course
Outcomes well. The student input is then summarized and tracked.
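One plausible way such ratings could be summarized for tracking (the five-point numeric mapping and the sample responses below are assumptions for illustration, not the department's actual procedure):

    # Summarizing End-of-Course Survey ratings for one Specific Course Outcome.
    # The Poor-to-Excellent labels come from the survey description above; the
    # numeric mapping and responses are hypothetical.
    scale = {"Poor": 1, "Fair": 2, "Good": 3, "Very Good": 4, "Excellent": 5}
    responses = ["Good", "Excellent", "Very Good", "Good", "Poor"]

    mean_rating = sum(scale[r] for r in responses) / len(responses)
    print(round(mean_rating, 2))  # (3+5+4+3+1)/5 = 3.2 on the 1-5 scale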
The department also collects student feedback through two additional mechanisms.
Alumni and Industry. The department engages its alumni in its assessment mechanism in two ways:
1. Alumni Advisory Board. The board consists of 10 alumni members from industry and academia. It meets twice
yearly (Fall and Spring) and examines issues related to alumni activities and to department performance in meeting its
Educational Objectives and Program Outcomes.
2. Alumni Survey administered to alumni from prior years.
Since several members of the Alumni Advisory Board are members of industry and hold management positions at leading
companies that hire a good number of our graduating seniors, their input is used by the department as the link between the
department and its industry partners. Likewise, the alumni survey helps to collect feedback from alumni in various industries.
ABET Problem. The ABET problem functionality engages the instructor rather directly in the assessment mechanism. It is the
main mechanism used to obtain instructor feedback on whether the students in the course achieved some of the desired course
outcomes. The ABET problem functionality is as follows.
Each undergraduate course in the department contributes to a list of Program Outcomes. Usually, a course contributes strongly
to some outcomes and less strongly to other outcomes. While a course may contribute to several ABET outcomes, usually only a
subset of its strong outcomes are used for ABET assessment under the ABET problem requirement.
The ABET problem is meant to measure how well the students in a course learned some of the most significant (strong) Program Outcomes that the course contributes to; it is chosen by the instructor from the regular assessed work of the course.
Saving Samples of Student Works. Each undergraduate course is required to save samples of student homework solutions,
laboratory reports, project or design reports, and exam solutions, typically spanning poor to good quality. At the end of each quarter, the teaching assistants of all undergraduate courses must compile a binder containing, in addition to the solutions, the corresponding homework questions, exam questions, lab description, and project description. Specifically, each course binder
needs to be organized as follows, for each course offering:
1. Page 1. A cover page listing the number of the course, the title of the course, the quarter and year, instructor’s name,
and teaching assistant(s)’ name(s).
2. Page 2. A copy of the course info handout. Preferably, the completed Class Info page from EEweb should be printed
and used.
3. Page 3. A table listing the grades of the students whose performance has been tracked for all assignments, exams,
and their overall course grade. This information can be obtained from the course gradebook. Do not identify the
students. Refer to the students instead as Students A, B, C, and so forth.
4. Page 4. A histogram of the course grade distribution. This information can be obtained from the course gradebook as
well. The histogram can be printed.
5. Pages 5-6. A printout of the ABET problem for the course, its solution, and the instructor’s evaluation of the student
performance on this problem. The histogram of the ABET problem grade distribution should be printed and included as
well.
6. Afterwards: Copies of sample student solutions of the ABET problem. Do not identify the students by name. Instead,
refer to them as Students A, B, C, and so forth.
7. Afterwards: Copies of the homework assignments and the exams. Remove student names and student ID numbers.
8. Afterwards:
Copies of work samples by Student A
Copies of work samples by Student B
Copies of work samples by Student C
Program Outcomes
Legend:
Course types: LEC - Lecture course; LAB - Laboratory course; DES - Design course; OTH - Other
Contribution levels: Strong contribution; Average contribution; Some contribution; No contribution
a CHEM20A, CHEM20B, CHEM20L, EE1, EE2, EE10, EEM16, EE100, EE101, EE102, EE103, EE110, EE113,
EE113D, EE114D, EE115A, EE115AL, EE115B, EE115BL, EE115C, EE115D, EE116B, EEM116C,
EEM116D, EEM117, EE118D, EE121B, EE122AL, EE123A, EE123B, EE124, EE129D, EE131A, EE131B,
EE132A, EE132B, EE141, EE142, EEM150, EE161, EE162A, EE163A, EE163C, EE164AL, EE164DL,
EEM171L, EE172, EE172L, EE173, EE173DL, EE174, EE180D, EEM185, EE194, EE199, MATH31A,
MATH31B, MATH32A, MATH32B, MATH33A, MATH33B, PHY1A, PHY1B
b EE102, EE103, EE110L, EE113, EE113D, EE114D, EE115AL, EE115BL, EE115D, EEM116D, EEM117,
EE122AL, EE131A, EE132A, EE141, EEM150L, EE161, EE163C, EE164AL, EE164DL, EEM171L, EE172L,
EE173DL, EE180D, EE194, EE199
c EE102, EE103, EE110L, EE113, EE113D, EE114D, EE115AL, EE115B, EE115BL, EE115C, EE115D,
EE116B, EEM116D, EEM117, EE118D, EE122AL, EE129D, EE131A, EE132A, EE141, EEM150, EE161,
EE163A, EE163C, EE164AL, EE164DL, EEM171L, EE172L, EE173, EE173DL, EE174, EE180D, EE194,
EE199
d EE110L, EE113D, EE115AL, EE115BL, EE115D, EE122AL, EEM150L, EE180D, EE194, EE199, ENGR183,
ENGR185
e EE10, EE110, EE110L, EE113D, EE114D, EE115AL, EE115BL, EE115C, EE115D, EE116B, EEM116D,
EEM117, EE118D, EE129D, EE164DL, EE180D, EE194, EE199
g EE110L, EE113D, EE115AL, EE115D, EE122AL, EE129D, EE173DL, EE194, EE199, ENGR183
k EE110L, EE113D, EE115AL, EE115BL, EE115D, EEM116L, EEM117, EEM150, EEM150L, EE164AL,
EE164DL, EE180D, EE194, EE199
m EE2, EE100, EE102, EE103, EE115D, EEM171L, MATH31A, MATH31B, MATH32A, MATH32B, PHY1A,
PHY1B
Course objectives and their relation to the Program Educational Objectives: This is a required course for electrical engineering, with computer and biomedical engineering options, as well as for computer science and engineering. EE10 introduces the principles of circuits and systems and their role in electrical engineering. EE10 then
introduces and demonstrates the power of the fundamental circuit laws, source equivalent
circuits, and analysis methods. This is followed by an introduction to the principle of
negative feedback and its impact on circuit performance and design. Operational amplifier
properties and operational amplifier circuits follow. Finally, the properties and applications of
reactive circuit elements are introduced along with first and second order circuits. Students
are prepared to analyze circuit properties with these tools and methods for each circuit type
using both manual methods and PSpice tools. This course contributes to the Educational
Objectives 1 (Fundamental Knowledge), 2 (Specialization), 3 (Design Skills), and 5 (Self-
Learning).
Contribution of the course to the Professional Component: Engineering Topics: 100%; General Education: 0%; Mathematics & Basic Sciences: 0%
Will this course involve computer assignments? YES Will this course have TA(s) when it is offered? YES
Contribution to Program Outcomes: (i) Average; (k) Some; (n) Average
:: Upon completion of this course, students will have had an opportunity to learn about the following ::
Specific Course Outcomes (the Program Outcomes each supports are shown in parentheses):
1. Analyze circuit systems using direct application of Kirchhoff’s Current and Voltage Laws along with Ohm’s Law (a, e, k, n; a worked example follows this list).
2. Interpret analytical circuit results to properly assign power, current, and voltage values to circuit graphical representations (a, e, k, n).
3. Apply node-voltage analysis techniques to analyze circuit behavior (a, e, k, n).
4. Apply mesh-current analysis techniques to analyze circuit behavior (a, e, k, n).
5. Construct parallel, series, delta, and Y resistor equivalent circuits (a, e, k, n).
6. Explain the role of negative feedback in establishing amplifier response (a, e, k, n).
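As a small worked illustration of outcome 1 (the source voltage and resistor values below are made up, not taken from any EE10 materials):

    # A two-resistor series circuit solved with Kirchhoff's Voltage Law (KVL)
    # and Ohm's Law. All values are hypothetical.
    V = 10.0                  # source voltage, volts
    R1, R2 = 2000.0, 3000.0   # resistances, ohms

    I = V / (R1 + R2)         # KVL around the single loop: V = I*R1 + I*R2
    V1, V2 = I * R1, I * R2   # Ohm's Law for each resistor

    print(I, V1, V2)          # 0.002 A, 4.0 V, 6.0 V; V1 + V2 = V, as KVL requires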
Program outcomes and how they are covered by the specific course outcomes
(a) ¤ Analyze circuit systems using direct application of Kirchhoff’s Current and Voltage Laws along with Ohm’s Law.
¤ Interpret analytical circuit results to properly assign power, current, and voltage values to circuit graphical
representations.
¤ Apply node-voltage analysis techniques to analyze circuit behavior.
¤ Apply mesh-current analysis techniques to analyze circuit behavior.
¤ Construct parallel, series, delta, and Y resistor equivalent circuits.
¤ Explain the role of negative feedback in establishing amplifier response.
¤ Explain the characteristics of ideal and non-ideal operational amplifiers.
¤ Analyze the characteristics of ideal and non-ideal operational amplifier circuits using node-voltage methods.
¤ Explain the characteristics of capacitor and inductor circuit elements.
¤ Compute initial conditions for current and voltage in first order R-L and R-C capacitor and inductor circuits.
¤ Compute time response of current and voltage in first order R-L and R-C capacitor and inductor circuits.
¤ Compute initial conditions for current and voltage in second order RLC circuits.
¤ Compute time response of current and voltage in second order RLC circuits.
¤ Use PSpice tools to create and analyze circuit models.
¤ Use PSpice tools to design and analyze resistive circuit systems.
¤ Use PSpice tools to design and analyze operational amplifier circuit systems.
¤ Several homework assignments delving into core concepts and reinforcing analytical skills learned in class.
¤ Opportunities to interact weekly with the instructor and the teaching assistant(s) during regular office hours and
discussion sections in order to further the students' learning experience and the students' interest in the material.
(c) ¤ Use PSpice tools to create and analyze circuit models.
¤ Use PSpice tools to design and analyze resistive circuit systems.
¤ Use PSpice tools to design and analyze operational amplifier circuit systems.
¤ Several homework assignments delving into core concepts and reinforcing analytical skills learned in class.
¤ Opportunities to interact weekly with the instructor and the teaching assistant(s) during regular office hours and
discussion sections in order to further the students' learning experience and the students' interest in the material.
(e) ¤ Analyze circuit systems using direct application of Kirchhoff’s Current and Voltage Laws along with Ohm’s Law.
¤ Interpret analytical circuit results to properly assign power, current, and voltage values to circuit graphical
representations.
¤ Apply node-voltage analysis techniques to analyze circuit behavior.
¤ Apply mesh-current analysis techniques to analyze circuit behavior.
¤ Construct parallel, series, delta, and Y resistor equivalent circuits.
¤ Explain the role of negative feedback in establishing amplifier response.
¤ Explain the characteristics of ideal and non-ideal operational amplifiers.
¤ Analyze the characteristics of ideal and non-ideal operational amplifier circuits using node-voltage methods.
¤ Explain the characteristics of capacitor and inductor circuit elements.
¤ Compute initial conditions for current and voltage in first order R-L and R-C capacitor and inductor circuits.
¤ Compute time response of current and voltage in first order R-L and R-C capacitor and inductor circuits.
¤ Compute initial conditions for current and voltage in second order RLC circuits.
¤ Compute time response of current and voltage in second order RLC circuits.
¤ Use PSpice tools to create and analyze circuit models.
¤ Use PSpice tools to design and analyze resistive circuit systems.
¤ Use PSpice tools to design and analyze operational amplifier circuit systems.
¤ Several homework assignments delving into core concepts and reinforcing analytical skills learned in class.
¤ Opportunities to interact weekly with the instructor and the teaching assistant(s) during regular office hours and
discussion sections in order to further the students' learning experience and the students' interest in the material.
(i) ¤ Use PSpice tools to design and analyze resistive circuit systems.
The following article provides an argument for direct or authentic assessment of student learning outcomes.
Mr. Wiggins, a researcher and consultant on school reform issues, is a widely-known advocate of authentic assessment in
education. This article is based on materials that he prepared for the California Assessment Program.
Assessment is authentic when we directly examine student performance on worthy intellectual tasks. Traditional assessment, by contrast, relies on indirect or proxy 'items'--efficient, simplistic substitutes from which we think valid inferences can be made about the student's performance at those valued challenges.
Do we want to evaluate student problem-posing and problem-solving in mathematics? experimental research in science?
speaking, listening, and facilitating a discussion? doing document-based historical inquiry? thoroughly revising a piece of
imaginative writing until it "works" for the reader? Then let our assessment be built out of such exemplary intellectual
challenges.
Further comparisons with traditional standardized tests will help to clarify what "authenticity" means when considering
assessment design and use:
Authentic assessments require students to be effective performers with acquired knowledge. Traditional tests tend
to reveal only whether the student can recognize, recall or "plug in" what was learned out of context. This may be
as problematic as inferring driving or teaching ability from written tests alone. (Note, therefore, that the debate is
not "either-or": there may well be virtue in an array of local and state assessment instruments as befits the purpose of
the measurement.)
Authentic assessments present the student with the full array of tasks that mirror the priorities and challenges found
in the best instructional activities: conducting research; writing, revising and discussing papers; providing an
engaging oral analysis of a recent political event; collaborating with others on a debate, etc. Conventional tests are usually limited to paper-and-pencil, one-answer questions.
Authentic assessments attend to whether the student can craft polished, thorough and justifiable answers, performances or products. Conventional tests typically only ask the student to select or write correct responses--irrespective of reasons. (There is rarely an adequate opportunity to plan, revise and substantiate responses on typical tests, even when there are open-ended questions.) As a result, authentic assessment achieves validity and reliability by emphasizing and standardizing the appropriate criteria for scoring such (varied) products; traditional testing standardizes objective "items" and, hence, the (one) right answer for each.
"Test validity" should depend in part upon whether the test simulates real-world "tests" of ability. Validity on most
multiple-choice tests is determined merely by matching items to the curriculum content (or through sophisticated
correlations with other test results).
Authentic tasks involve "ill-structured" challenges and roles that help students rehearse for the complex ambiguities
of the "game" of adult and professional life. Traditional tests are more like drills, assessing static and too-often
arbitrarily discrete or simplistic elements of those activities.
Beyond these technical considerations the move to reform assessment is based upon the premise that assessment should
primarily support the needs of learners. Thus, secretive tests composed of proxy items and scores that have no obvious
meaning or usefulness undermine teachers' ability to improve instruction and students' ability to improve their performance.
We rehearse for and teach to authentic tests--think of music and military training--without compromising validity.
While multiple-choice tests can be valid indicators or predictors of academic performance, too often our tests mislead
students and teachers about the kinds of work that should be mastered. Norms are not standards; items are not real problems;
right answers are not rationales.
What most defenders of traditional tests fail to see is that it is the form, not the content of the test that is harmful to learning;
demonstrations of the technical validity of standardized tests should not be the issue in the assessment reform debate.
Students come to believe that learning is cramming; teachers come to believe that tests are after-the-fact, imposed nuisances
composed of contrived questions--irrelevant to their intent and success. Both parties are led to believe that right answers
matter more than habits of mind and the justification of one's approach and results.
A move toward more authentic tasks and outcomes thus improves teaching and learning: students have greater clarity about
their obligations (and are asked to master more engaging tasks), and teachers can come to believe that assessment results are
both meaningful and useful for improving instruction.
If our aim is merely to monitor performance then conventional testing is probably adequate. If our aim is to improve
performance across the board then the tests must be composed of exemplary tasks, criteria and standards.
The costs are deceptive: while the scoring of judgment-based tasks seems expensive when compared to multiple-choice tests
(about $2 per student vs. 1 cent) the gains to teacher professional development, local assessing, and student learning are
many. As states like California and New York have found (with their writing and hands-on science tests) significant
improvements occur locally in the teaching and assessing of writing and science when teachers become involved and
invested in the scoring process.
If costs prove prohibitive, sampling may well be the appropriate response--the strategy employed in California, Vermont and
Connecticut in their new performance and portfolio assessment projects. Whether through a sampling of many writing
genres, where each student gets one prompt only; or through sampling a small number of all student papers and school-wide
portfolios; or through assessing only a small sample of students, valuable information is gained at a minimum cost.
And what have we gained by failing to adequately assess all the capacities and outcomes we profess to value simply because
it is time-consuming, expensive, or labor-intensive? Most other countries routinely ask students to respond orally and in
writing on their major tests--the same countries that outperform us on international comparisons. Money, time and training
are routinely set aside to ensure that assessment is of high quality. They also correctly assume that high standards depend on
the quality of day-to-day local assessment--further offsetting the apparent high cost of training teachers to score student work
in regional or national assessments.
WILL THE PUBLIC HAVE ANY FAITH IN THE OBJECTIVITY AND RELIABILITY OF
JUDGMENT-BASED SCORES?
We forget that numerous state and national testing programs with a high degree of credibility and integrity have for many
years operated using human judges:
the New York Regents exams, parts of which have included essay questions since their inception--and which are
scored locally (while audited by the state);
the Advanced Placement program which uses open-ended questions and tasks, including not only essays on most
tests but the performance-based tests in the Art Portfolio and Foreign Language exams;
state-wide writing assessments in two dozen states where model papers, training of readers, papers read "blind" and
procedures to prevent bias and drift gain adequate reliability;
the National Assessment of Educational Progress (NAEP), the Congressionally-mandated assessment, uses
numerous open-ended test questions and writing prompts (and successfully piloted a hands-on test of science
performance);
newly-mandated performance-based and portfolio-based state-wide testing in Arizona, California, Connecticut,
Kentucky, Maryland, and New York.
Though the scoring of standardized tests is not subject to significant error, the procedure by which items are chosen, and the manner in which norms or cut-scores are established, are often quite subjective--and typically immune from public scrutiny and oversight.
Genuine accountability does not avoid human judgment. We monitor and improve judgment through training sessions,
model performances used as exemplars, audit and oversight policies as well as through such basic procedures as having
disinterested judges review student work "blind" to the name or experience of the student--as occurs routinely throughout the
professional, athletic and artistic worlds in the judging of performance.
Authentic assessment also has the advantage of providing parents and community members with directly observable products
and understandable evidence concerning their students' performance; the quality of student work is more discernible to
laypersons than when we must rely on translations of talk about stanines and renorming.
Ultimately, as the researcher Lauren Resnick has put it, “What you assess is what you get; if you don't test it you won't get it.”
To improve student performance we must recognize that essential intellectual abilities are falling through the cracks of
conventional testing.
ADDITIONAL READING
Archbald, D. & Newmann, F. (1989) "The Functions of Assessment and the Nature of Authentic Academic Achievement," in
Berlak (ed.) Assessing Achievement: Toward the development of a New Science of Educational Testing. Buffalo, NY: SUNY
Press.
Frederiksen, J. & Collins, A. (1989) "A Systems Approach to Educational Testing," Educational Researcher, 18, 9
(December).
National Commission on Testing and Public Policy (1990) From Gatekeeper to Gateway: Transforming Testing in America.
Chestnut Hill, MA: NCTPP, Boston College.
Wiggins, G. (1989) "A True Test: Toward More Authentic and Equitable Assessment," Phi Delta Kappan, 70, 9 (May).
Wolf, D. (1989) "Portfolio Assessment: Sampling Student Work," Educational Leadership 46, 7, pp. 35-39 (April).
The parts of a rubric: Scale; Dimensions; Descriptions of Dimensions.
Part 2: Scale
Describes how well or poorly any given task has been performed
Positive terms which may be used: “Mastery”, “Partial Mastery”, “Progressing”, “Emerging”
Nonjudgmental or noncompetitive language: “High level”, “Middle level”, “Beginning level”
Commonly used labels:
o Sophisticated, competent, partly competent, not yet competent
o Exemplary, proficient, marginal, unacceptable
o Advanced, intermediate high, intermediate, novice
o Distinguished, proficient, intermediate, novice
o Accomplished, average, developing, beginning
3-5 levels are typically used
o the more levels there are, the more difficult it becomes to differentiate between them and to
articulate precisely why one student’s work falls into the scale level it does
o but more specific levels make the task clearer for the student and reduce the professor’s time needed to furnish detailed grading notes
Part 3: Dimensions
Lay out the parts of the task simply and completely
Should actually represent the type of component skills students must combine in a successful
scholarly work
Breaking up the assignment into its distinct dimensions leads to a kind of task analysis with the
components of the task clearly identified
Example Scoring Guide Rubric: (includes description of dimensions at the highest level of
performance) (Stevens and Levi 2005)
Task: Each student will make a 5-minute presentation on the changes in one community over
the past 30 years. The student may focus the presentation in any way he or she wishes, but
there needs to be a thesis of some sort, not just a chronological exposition. The presentation
should include appropriate photographs, maps, graphs, and other visual aids for the
audience.
How to construct a rubric: four stages in constructing a rubric (Stevens and Levi 2005)
1. Reflecting. In this stage, we take the time to reflect on what we want from the students, why we created this
assignment, what happened the last time we gave it, and what our expectations are.
a) Why did you create this assignment?
b) Have you given this assignment or a similar assignment before?
c) How does this assignment relate to the rest of what you are teaching?
d) What skills will students need to have or develop to successfully complete this assignment?
e) What exactly is the task assigned?
f) What evidence can students provide to show they have accomplished the assignment?
2. Listing. In this stage, we focus on the particular details of the assignment and what specific learning
objectives we hope to see in the completed assignment.
Answers to (d)-(e)-(f) above regarding skills required, the exact nature of the task, and the types of
evidence of learning are most often a good starting point to generate this list. Once the learning goals
have been listed, you add a description of the highest level of performance you expect for each learning
goal. These will later contribute to the “Descriptions of Dimensions” on a finished rubric.
3. Grouping and Labeling. In this stage, we organize the results of our reflections in Stages 1 and 2, grouping
similar expectations together in what will probably become the rubric dimensions. Start with the highest
performance expectations completed in Stage 2 and group together items which are related. Once the
performance descriptions are in groups of similar skills, read them and start to find out what is common
across the group and label it. These labels will ultimately become dimensions on the rubric – it is important
to keep them clear and neutral; e.g., “Organization”, “Analysis”, or “Citations”.
4. Application. In this stage, we apply the dimensions and descriptions from Stage 3 to the final form of the
rubric, utilizing the matrix/grid format.
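As a minimal sketch of Stage 4's matrix/grid format (the dimension names, level names, and the single filled-in cell below are illustrative, not taken from Stevens and Levi 2005):

    # A rubric laid out as a grid: dimensions are rows, scale levels are columns,
    # and each cell holds the description of that dimension at that level.
    levels = ["Exemplary", "Proficient", "Marginal", "Unacceptable"]
    dimensions = ["Focus", "Support", "Organization"]

    rubric = {dim: {lvl: "" for lvl in levels} for dim in dimensions}
    rubric["Organization"]["Exemplary"] = (
        "Material is deliberately arranged so the argument is presented "
        "in the best possible way."
    )

    # List the cells whose descriptions still need to be written.
    for dim in dimensions:
        todo = [lvl for lvl in levels if not rubric[dim][lvl]]
        print(dim, "-> descriptions still needed:", todo)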
Once you have identified what you are assessing; e.g., critical thinking, here are steps for creating holistic rubrics
(Allen 2004):
Identify the characteristics of what you are assessing; e.g., appropriate use of evidence, recognition of
logical fallacies
Describe the best work you could expect using these characteristics – this describes the top category
Describe the worst acceptable product using these characteristics – this describes the lowest
acceptable category
Describe an unacceptable product – this describes the lowest category
Develop descriptions of intermediate-level products and assign them to intermediate categories. You
might decide to develop a scale with five levels; e.g., unacceptable, marginal, acceptable, competent,
outstanding, or three levels; e.g., novice, competent, exemplary, or any other set that is meaningful.
Ask colleagues who were not involved in the rubric’s development to apply it to some products or
behaviors and revise as needed to eliminate ambiguities.
Example:
Inadequate: The essay has at least one serious weakness. It may be unfocused, underdeveloped, or rambling. Problems with the use of language seriously interfere with the reader’s ability to understand what is being communicated.
Developing competence: The essay may be somewhat unfocused, underdeveloped, or rambling, but it does have some coherence. Problems with the use of language occasionally interfere with the reader’s ability to understand what is being communicated.
Acceptable: The essay is generally focused and contains some development of ideas, but the discussion may be simplistic or repetitive. The language lacks syntactic complexity and may contain occasional grammatical errors, but the reader is able to understand what is being communicated.
Sophisticated: The essay is focused and clearly organized, and it shows depth of development. The language is precise and shows syntactic variety, and ideas are clearly communicated to the reader.
1. Question: What criteria or essential elements must be present in the student’s work to ensure that it is high in quality?
 These should be the criteria that distinguish good work from poor work
 Action: Include these as rows in your rubric
2. Question: How many levels of achievement do I wish to illustrate for students?
 The levels should generally describe a range of achievement varying from excellent to unacceptable
 o Example: exemplary, proficient, marginal, unacceptable
 o Example: sophisticated, competent, partly competent, not yet competent
 o Example: distinguished, proficient, intermediate, novice
 o Example: accomplished, average, developing, beginning
 Action: Include these as columns in your rubric and label them
3. Question: For each criterion or essential element of quality, what is a clear description of performance at each achievement level?
 Avoid undefined terms (e.g., “significant”, “trivial”, “shows considerable thought”)
 Avoid value-laden terms (e.g., “excellent”, “poor”)
 Use objective descriptions that help provide guidance to the students for getting better when needed
 Action: Include descriptions in the appropriate cells of the rubric
4. Question: What are the consequences of performing at each level of quality?
 Action: Add descriptions of consequences to the commentaries in the rubric
5. Question: What rating scheme will I use in the rubric? Some criteria may be weighted differently than others (see the sketch after this list)
 Action: Add this to the rubric in a way that fits in with your grading philosophy
6. Question: When I use the rubric, what aspects work well and what aspects need improvement?
 Does the rubric help you distinguish among the levels of quality in a student sample?
 Do the criteria seem to be appropriate?
 Are there too many or too few levels of achievement specified?
 Are there any descriptions that are incomplete or unclear?
 Action: Revise the rubric accordingly
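To illustrate Question 5, a minimal sketch of one possible weighted rating scheme; the criteria, weights, point values, and sample ratings below are all hypothetical:

    # Weighted rubric scoring: criteria carry different weights (Question 5).
    weights = {"Focus": 0.3, "Support": 0.4, "Organization": 0.3}  # sum to 1.0
    level_points = {"Exemplary": 4, "Proficient": 3, "Marginal": 2, "Unacceptable": 1}
    ratings = {"Focus": "Proficient", "Support": "Exemplary", "Organization": "Marginal"}

    total = sum(weights[c] * level_points[ratings[c]] for c in weights)
    print(round(total, 2))  # 0.3*3 + 0.4*4 + 0.3*2 = 3.1 on the 1-4 scale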
1. Question: What content must students master in order to complete the task well?
 Action: Develop criteria that reflect knowledge and/or use of content and add them to the rubric
2. Question: Are there any important aspects of the task that are specific to the context in which the assessment is set?
 Action: Identify skills and abilities that are necessary in this context and add related criteria to the rubric
3. Question: In the task, is the process of achieving the outcome as important as the outcome itself?
 Action: Include and describe criteria that reflect important aspects of the process
Contexts for Consideration:
1. Cultural/Social – Group, national, ethnic behavior/attitude
2. Scientific – Conceptual, basic science, scientific method
3. Educational – Schooling, formal training
4. Economic – Trade, business concerns, costs
5. Technological – Applied science, engineering
6. Ethical – Values
7. Political – Organizational or governmental
8. Personal Experience – Personal observation, informal character
1. Tests
a. Commercial, norm-referenced, standard examinations
b. Locally developed written examinations (objective or subjective designed by faculty);
c. Oral examinations (evaluation of student knowledge levels through a face-to-face interrogative
dialogue with program faculty).
2. Competency-Based Methods
a. Performance Appraisals - systematic measurement of overt demonstration of acquired skills
b. Simulations (primarily used to approximate the results of performance appraisal when direct demonstration of the student skill is impractical)
c. “Stone” courses (required courses that, in addition to their instructional objectives, serve as primary vehicles of student assessment; e.g., capstone courses).
3. Self-Report Methods (surveys and questionnaires asking individuals to share their perceptions of their own attitudes and/or behaviors, or those of others).
4. External Examiner (using an expert in the field from outside your program – usually from a similar program at another institution – to conduct, evaluate, or supplement the assessment of your students).
5. Behavioral Observations – including scoring rubrics and verbal protocol analysis (measuring the
frequency, duration and topology of student actions, usually in a natural setting with non-interactive
methods).
6. Archival Records (biographical, academic, or other file data available from college or other agencies and
institutions).
Definition: Group administered, mostly or entirely multiple-choice, “objective” tests in one or more curricular areas. Scores are based on comparison with a reference or norm group. Typically must be obtained (purchased) from a private vendor.
Target of Method: Used primarily with students in individual programs or courses, or with a particular student cohort.
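Because scores are norm-referenced, a student's result is reported as relative standing within the norm group rather than against fixed criteria. A minimal sketch, with entirely made-up scores, of how such standing might be computed:

    # Relative standing of one student's score within a norm group.
    # All values are illustrative.
    import statistics

    norm_group = [48, 55, 60, 62, 65, 70, 72, 75, 80, 83]
    student = 72

    z = (student - statistics.mean(norm_group)) / statistics.stdev(norm_group)
    percentile = 100 * sum(s <= student for s in norm_group) / len(norm_group)
    print(round(z, 2), percentile)  # about 0.45, and the 70th percentile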
Advantages:
Can be adopted and implemented quickly
Reduce/eliminate faculty time demands in instrument development and grading (i.e., relatively low “frontloading”
and “backloading” effort)
Objective scoring
Provide for externality of measurement (i.e., external validity is the degree to which the conclusions in your study
would hold for other persons in other places and at other times – ability to generalize the results beyond the original
test group.)
Provide norm reference group(s) comparison often required by mandates.
May be beneficial or required in instances where state or national standards exist for the discipline or profession.
Very valuable for benchmarking and cross-institutional comparison studies.
Disadvantages:
May limit what can be measured.
Eliminates the process of learning and clarification of goals and objectives typically associated with local
development of measurement instruments.
Unlikely to completely measure or assess the specific goals and objectives of a program, department, or institution.
“Relative standing” results tend to be less meaningful than criterion-referenced results for program/student
evaluation purposes.
Norm-referenced data is dependent on the institutions in comparison group(s) and methods of selecting students to
be tested. (Caution: unlike many norm-referenced tests such as those measuring intelligence, present norm-
referenced tests in higher education do not utilize, for the most part, randomly selected or well stratified national
samples.)
Group administered multiple-choice tests always include a potentially high degree of error, largely uncorrectable by “guessing correction” formulae, which lowers validity (see the sketch after this list).
Summative data only (no formative evaluation)
Results unlikely to have direct implications for program improvement or individual student progress
Results highly susceptible to misinterpretation/misuse both within and outside the institution
Someone must pay for obtaining these examinations; either the student or program.
If used repeatedly, there is a concern that faculty may teach to the exam as is done with certain AP high school
courses.
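For reference, the standard “correction for guessing” referred to above subtracts the number of wrong answers expected to be right by chance; this is a textbook convention rather than something defined in these notes, and the example numbers are made up:

    # Standard correction-for-guessing formula.
    # R = number right, W = number wrong (omitted items excluded),
    # C = answer choices per item.
    def corrected_score(R: int, W: int, C: int) -> float:
        return R - W / (C - 1)

    print(corrected_score(R=40, W=10, C=5))  # 40 - 10/4 = 37.5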
Bottom Line:
Relatively quick and easy, but useful mostly where group-level performance and external comparisons of results are required. Not as useful for individual student or program evaluation. May be not only ideal, but the only alternative, for benchmarking studies.
Bibliographic References:
1. Mazurek, D. F., “Consideration of FE Exam for Program Assessment.” Journal of Professional Issues in
Engineering Education, vol. 121, no. 4, 1995, 247-249.
Definition: Objective and/or subjective tests designed by faculty of the program or course sequence being evaluated.
Target of Method: Used primarily on students in individual classes, a specific program of interest, or for a particular cohort
of students
Advantages:
Content and style can be geared to specific goals, objectives, and student characteristics of the program, curriculum,
etc.
Specific criteria for performance can be established in relationship to curriculum
Process of development can lead to clarification/crystallization of what is important in the process/content of student
learning.
Local grading by faculty can provide relatively rapid feedback.
Greater faculty/institutional control over interpretation and use of results.
More direct implication of results for program improvements.
Disadvantages:
Require considerable leadership/coordination, especially during the various phases of development
Cannot be used for benchmarking, or cross-institutional comparisons.
Costly in terms of time and effort (more “frontloaded” effort for objective; more “backloaded” effort for subjective)
Demands expertise in measurement to assure validity/reliability/utility
May not provide for externality (degree of objectivity associated with review, comparisons, etc. external to the
program or institution).
Bottom Line:
Most useful for individual coursework or program evaluation, with careful adherence to measurement
principles. Must be supplemented for external validity.
Bibliographic References:
1. Banta, T.W., “Questions Faculty Ask about Assessment,” Paper presented at the Annual Meeting of the
American Association for Higher Education (Chicago, IL, April 1989).
2. Banta, T.W. and J.A. Schneider, “Using Locally Developed Comprehensive Exams for Majors to Assess and
Improve Academic Program Quality,” Paper presented at the Annual Meeting of the American Educational
Research Association (70th, San Francisco, CA, April 16-20, 1986).
3. Burton, E. and R.L. Linn, “Report on Linking Study--Comparability across Assessments: Lessons from the Use
of Moderation Procedures in England. Project 2.4: Quantitative Models to Monitor Status and Progress of
Learning and Performance”, National Center for Research on Evaluation, Standards, and Student Testing, Los
Angeles, CA, 1993
4. Lopez, C.L., “Assessment of Student Learning,” Liberal Education, 84(3), Summer 1998, 36-43.
5. Warren, J., “Cognitive Measures in Assessing Learning,” New Directions for Institutional Research, 15(3), Fall
1988, 29-39.
Definition: An evaluation of student knowledge levels through a face-to-face interrogative dialogue with program faculty.
Target of Method: Used primarily on students in individual classes or for a particular cohort of students
Advantages
Content and style can be geared to specific goals, objectives, and student characteristics of the institution, program,
curriculum, etc.
Specific criteria for performance can be established in relationship to curriculum
Process of development can lead to clarification/crystallization of what is important in the process/content of student
learning.
Local grading by faculty can provide immediate feedback related to material considered meaningful.
Greater faculty/institutional control over interpretation and use of results.
More direct implication of results for program improvements.
Allows measurement of student achievement in considerably greater depth and breadth through follow-up questions,
probes, encouragement of detailed clarifications, etc. (= increased internal validity and formative evaluation of
student abilities)
Non-verbal (paralinguistic and visual) cues aid interpretation of student responses.
Dialogue format decreases miscommunications and misunderstandings, in both questions and answers.
Rapport-gaining techniques can reduce “test anxiety,” and help focus and maintain maximum student attention and effort.
Dramatically increases “formative evaluation” of student learning; i.e., clues as to how and why they reached their
answers.
Identifies and decreases error variance due to guessing.
Provides process evaluation of student thinking and speaking skills, along with knowledge content.
Disadvantages
Requires considerable leadership/coordination, especially during the various phases of development
Costly in terms of time and effort (more “frontload” effort for objective; more “backload” effort for subjective)
Demands expertise in measurement to assure validity/reliability/utility
May not provide for externality (degree of objectivity associated with review, comparisons, etc. external to the
program or institution).
Requires considerably more faculty time, since oral exams must be conducted one-to-one, or with very small groups
of students at most.
Can be inhibiting on student responsiveness due to intimidation, face-to-face pressures, oral (versus written) mode,
etc. (May have similar effects on some faculty!)
Inconsistencies of administration and probing across students reduce standardization and generalizability of results (= potentially lower external validity).
Bottom Line:
Oral exams can provide excellent results, but usually only with significant – perhaps prohibitive – additional cost. Definitely
worth utilizing in programs with small numbers of students (“Low N”), and for the highest priority objectives in any
program.
Bibliographic References:
1. Bairan, A. and B.J. Farnsworth, “Oral Exams: An Alternative Evaluation Method,” Nurse Educator, 22,
Jul/Aug 1997, 6-7.
2. De Charruf, L.F., “Oral Testing,” Mextesol Journal, 8(2), Aug 1984, 63-79.
3. Dressel, J.H., “The Formal Oral Group Exam: Challenges and Possibilities-The Oral Exam and Critical
Thinking,” Paper presented at the Annual Meeting of the National Council of Teachers of English (81st, Seattle,
WA, November 22-27, 1991).
Performance Appraisals
Definition: A competency-based method whereby pre-operationalized abilities are measured in the most direct, real-world approach available: systematic measurement of overt demonstration of acquired skills.
Target of Method: Used primarily on students in individual classes or for a particular cohort of students
Advantages:
Provide a more direct measure of what has been learned (presumably in the program)
Go beyond paper-and-pencil tests and most other assessment methods in measuring skills
Preferable to most other methods in measuring the application and generalization of learning to specific settings,
situations, etc.
Particularly relevant to the goals and objectives of professional training programs and disciplines with well defined
skill development.
Disadvantages:
Ratings/grading typically more subjective than standardized tests
Requires considerable time and effort (especially front-loading), thus being costly
Sample of behavior observed or performance appraised may not be typical, especially because of the presence of
observers
Bottom Line:
Generally the most highly valued but costly form of student outcomes assessment – usually the most valid way to measure
skill development.
Bibliographic References:
1. Burke, Kay, ed. Authentic Assessment: A Collection. Illinois: Skylight Training and Publishing, Inc., 1992.
2. Hart, Diane. Authentic Assessment: A Handbook for Educators. New York: Addison-Wesley, 1994.
3. Ryan, Alan G. “Towards Authentic Assessment in Science via STS.” Bulletin of Science, Technology & Society.
1994, v 14, n 5/6, p 290.
4. Wiggins, Grant. “The Case for Authentic Assessment.” ERIC Digest. December 1990.
Simulations
Definition: A competency-based measure whereby pre-operationalized abilities are measured in the most direct, real-world approach available. Simulation is primarily utilized to approximate the results of performance appraisal when – due to the target competency involved, logistical problems, or cost – direct demonstration of the student skill is impractical.
Disadvantages
For difficult skills, the higher the quality of simulation the greater the likelihood of the problems of performance
appraisal; e.g., cost, subjectivity, etc. (see “Performance Appraisals”).
Usually requires considerable “frontloading” effort; i.e., planning and preparation.
More expensive than traditional testing options in the short run.
Bottom Line:
An excellent means of increasing the external and internal validity of skills assessment at minimal long-term costs.
Bibliographic References:
1. Darling-Hammond, Linda, Jacqueline Ancess, and Beverly Falk. Authentic Assessment in Action. New York: Teachers College Press, 1995.
2. Kerka, Sandra. “Techniques for Authentic Assessment.” ERIC Clearinghouse on Adult, Career, and Vocational
Education. Columbus, Ohio. 1995.
3. Paris, Scott G., and Linda R. Ayres. Becoming Reflective Students and Teachers with Portfolios and Authentic
Assessment. Washington, DC: American Psychological Association, 1994.
4. Ryan, Alan G. “Towards Authentic Assessment in Science via STS.” Bulletin of Science, Technology & Society.
1994, v 14, n 5/6, p 290.
“Stone” Courses¹
¹ Often not considered an assessment method in itself.
Definition: Courses, usually required for degree/program completion, which in addition to a full complement of instructional
objectives, also serve as primary vehicles of student assessment for program evaluation purposes; e.g., Capstone,
Cornerstone, and Keystone courses.
Advantages:
Provides for a synergistic combination of instructional and assessment objectives.
A perfect mechanism for course-embedded assessment of student learning and development (i.e., outcomes, pre-program competencies and/or characteristics, “critical indicators,” etc.)
Can add impetus for design of courses to improve program orientation/integration/updating information for students.
Disadvantages:
None specified
Bottom Line
“Stone” courses are perfect blends of assessment and instruction to serve program quality improvement and accountability goals (capstones for outcomes measures; cornerstones for pre-program measures), and should be considered by all academic programs.
Bibliographic References:
1. Brouse, P. S., “Senior Design Project: ABET 2000 Certification,” Proceedings of the 1999 Frontiers in Education
Conference, Session 11b2-1.
Surveys and Questionnaires
Definition: Asking individuals to share their perceptions of their own attitudes and/or behaviors or those of others. Formats
include direct or mailed administration and signed or anonymous responses.
Target of Method: Used primarily on students; could be used by third parties, such as student peers, faculty, employers,
parents, etc.
Advantages:
Typically yield the perspective that students, alumni, the public, etc., have of the institution, which may lead to
changes especially beneficial to relationships with these groups.
Convey a sense of importance regarding the opinions of constituent groups
Can cover a broad range of content areas within a brief period of time
Results tend to be more easily understood by lay persons
Can cover areas of learning and development which might be difficult or costly to assess more directly.
Can provide accessibility to individuals who otherwise would be difficult to include in assessment efforts (e.g.,
alumni, parents, employers).
When ‘third-parties’ are making the reports there are additional advantages, as follows:
Can provide unique stakeholder input, valuable in its own right (especially employers and parents). How is our
college serving their purposes?
Offer different perspectives, presumably less biased than either student or assessor.
Enable recognition and contact with important, often under-valued constituents. Relations may improve by just
asking for their input.
Can increase both internal validity (through “convergent validity”/”triangulation” with other data) and external
validity (by adding more “natural” perspective).
Convey a sense of importance regarding the opinions of stakeholder groups.
Disadvantages
Results tend to be highly dependent on wording of items, salience of survey or questionnaire, and organization of
instrument. Thus, good surveys and questionnaires are more difficult to construct than they appear.
Frequently rely on volunteer samples which tend to be biased.
Mail surveys tend to yield low response rates.
Require careful organization in order to facilitate data analysis via computer for large samples (see the sketch following this list).
Commercially prepared surveys tend not to be entirely relevant to an individual institution and its students.
Forced response choices may not allow respondents to express their true opinions.
Results reflect perceptions which individuals are willing to report and thus tend to consist of indirect data.
Locally developed instruments may not provide for externality of results.
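The “careful organization” point above is easiest to see with a small illustration. The sketch below is not part of the original notes: it assumes a hypothetical file responses.csv laid out with one row per respondent and one column per Likert item (coded 1 to 5), and uses Python's pandas library to summarize each item for a large sample.

    # Minimal sketch (hypothetical file and column layout): summarize
    # Likert-scale survey items stored one row per respondent.
    import pandas as pd

    responses = pd.read_csv("responses.csv")  # assumed columns: q1, q2, ..., coded 1-5

    summary = pd.DataFrame({
        "n": responses.count(),                            # respondents answering each item
        "mean": responses.mean().round(2),                 # average rating per item
        "pct_favorable": (responses >= 4).mean().round(2), # share rating the item 4 or 5
    })
    print(summary)

Organizing responses this way from the outset – one respondent per row, one item per column, numeric codes – is what makes such short summaries possible for samples of any size.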
Bottom Line:
A relatively inexpensive way to collect data on important evaluative topics from a large number of respondents. Must always
be treated cautiously, however, since results only reflect what subjects are willing to report about their perception of their
attitudes and/or behaviors.
Bibliographic References:
1. Converse, Jean M. & Stanley Presser (1986). Survey Questions: Handcrafting the Standardized Questionnaire. Sage
University Paper series on Quantitative Applications in the Social Sciences, series No. 07-063. Newbury Park, CA:
Sage.
2. Dovidio, John & Russell Fazio (1991). “New Technologies for the Direct and Indirect Assessment of Attitudes.” In
J. Tanur (ed.), Questions About Questions: Inquiries into the Cognitive Bases of Surveys, pp. 204-237. New York:
Russell Sage Foundation.
3. Sudman, Seymour & Norman Bradburn (1982). Asking Questions: A Practical Guide to Questionnaire Design. San
Francisco: Jossey-Bass Publishers.
4. Labaw, Patricia (1981). Advanced Questionnaire Design, Abt Books, Incorporated.
5. Lees-Haley, Paul (1980) Questionnaire Design Handbook, Rubicon.
6. Fowler, Floyd J. (1993). Survey Research Methods, 2nd Ed. Newbury Park, CA: Sage.
7. Rossi, Peter H., James D. Wright, & Andy B. Anderson (1983). Handbook of Survey Research. London: Academic
Press.
8. Spector, P.E. (1992). Summated Rating Scale Construction: An Introduction. Sage University Paper series on
Quantitative Applications in the Social Sciences, series no. 07-082. Newbury Park, CA: Sage.
9. Suskie, Linda (1996). Questionnaire Survey Research: What Works? Association for Institutional Research,
Resources for Institutional Research, Number Six.
Interviews
Definition: Asking individuals to share their perceptions of their own attitudes and/or behaviors or those of others;
evaluating student reports of their attitudes and/or behaviors in a face-to-face interrogative dialogue.
Target of Method: Used primarily on students; could be used by third parties, such as student peers, employers, etc.
Advantages
Student interviews tend to have most of the attributes of surveys and questionnaires with the exception of requiring
direct contact, which may limit accessibility to certain populations. Exit interviews also provide the following
additional advantages:
Allow for more individualized questions and follow-up probes based on the responses of interviewees.
Provide immediate feedback
Include same observational and formative advantages as oral examinations.
Frequently yield benefits beyond data collection that comes from opportunities to interact with students and other
groups.
Can include a greater variety of items than is possible on surveys and questionnaires, including those that provide
more direct measures of learning and development.
When ‘third-parties’ are making the reports there are additional advantages, as follows:
Can provide unique stakeholder input, valuable in its own right (especially employers and parents). How is the
college/program/project/course serving the purposes of the stakeholder group?
Offer different perspectives, presumably less biased than either student or assessor.
Enable recognition and contact with important, often under-valued constituents. Relations may improve by just
asking for their input.
Can increase both internal validity (through “convergent validity”/”triangulation” with other data) and external
validity (by adding more “natural” perspective).
Disadvantages
Require direct contact, which may be difficult to arrange.
May be intimidating to interviewees, thus biasing results in the positive direction.
Results tend to be highly dependent on wording of items and the manner in which interviews are conducted.
Time consuming, especially if large numbers of persons are to be interviewed.
Bottom Line:
Interviews provide opportunities to cover a broad range of content and to interact with respondents. Opportunities to follow-
up responses can be very valuable. Direct contact may be difficult to arrange, costly, and potentially threatening to
respondents unless carefully planned.
Bibliographic References:
1. Dobson, Ann (1996), Conducting Effective Interviews: How to Find out What You Need to Know and Achieve the
Right Results, Trans-Atlantic Publications, Inc.
2. Bradburn, Norman and Seymour Sudman (?). Improving Interview Method and Questionnaire Design. Books on
Demand (ISBN: 0835749703).
Focus Groups
Definition: A group discussion of a particular topic related to a research or evaluation question, conducted under the
direction of a moderator. Typically conducted with 7-12 individuals who share certain characteristics that are related to the
topic of discussion. The group discussion is conducted (several times, if possible) with similar types of participants to
identify trends/patterns in perceptions. The moderator's purpose is to provide direction and set the tone for the group
discussion, encourage active participation from all group members, and manage time. The moderator must not allow his or
her own biases to enter, verbally or nonverbally. Careful and systematic analysis of the discussions provides information
about how a product, service, or opportunity is perceived.
Target of Method: Used primarily on students; could be used by third parties, such as employers, the department's visiting
board, etc.
Advantages
Useful to gather ideas, details, new insights, and to improve question design.
Inexpensive, quick information tool, helpful in the survey design phase.
Can aid the interpretation of results from mail or telephone surveys.
Can be used in conjunction with quantitative studies to confirm/broaden one’s understanding of an issue.
Allows the moderator to probe and explore unanticipated issues.
Disadvantages
Not suited for generalizations about population being studied.
Not a substitute for systematic evaluation procedures.
Moderators require training.
Differences in the responses between/among groups can be troublesome.
Groups are difficult to assemble.
Researcher has less control than in individual interviews.
Data are complex to analyze.
Bottom Line:
Focus groups are a quick and, if locally done, inexpensive method of gathering information. They are very useful for
triangulation to support other assessment methods, but they are not a substitute for systematic evaluation procedures. Focus
groups should meet the same standards of rigor as other assessment methods and should be developed and analyzed
according to sound qualitative practices.
Bibliographic References:
1. Morgan, D., et al. (1998) Focus Groups as Qualitative Research, University Paper series on Quantitative
Applications in the Social Sciences, Newbury Park, CA: Sage.
2. Morgan, D. (1998) Focus Groups as Qualitative Research, Thousand Oaks, CA: Sage.
3. Krueger, Richard (1998). Developing Questions for Focus Groups, Vol 3. University Paper series on Quantitative
Applications in the Social Sciences, Newbury Park, CA: Sage.
4. Stewart, D. and P. Shamdasani (1990). Focus Groups: Theory and Practice, University Paper series on Quantitative
Applications in the Social Sciences, Newbury Park, CA: Sage.
5. Krueger, Richard (1997). Moderating Focus Groups, Vol 4. University Paper series on Quantitative Applications in
the Social Sciences, Newbury Park, CA: Sage.
6. Morgan, D., and A. Scannell (1997). Planning Focus Groups, Vol 2. University Paper series on Quantitative
Applications in the Social Sciences, Newbury Park, CA: Sage.
External Examiners
Definition: Using an expert in the field from outside your program, usually from a similar program at another institution, to
conduct, evaluate, or supplement assessment of your students. Information can be obtained from external evaluators using
many methods, including surveys, interviews, etc.
Target of Method: Used primarily on students in individual classes or for a particular cohort of students; could be used by
third parties, such as employers or a visiting board, etc.
Advantages:
Increases impartiality, third party objectivity (=external validity)
Feedback useful for both student and program evaluation. With a knowledgeable and cooperative (or well-paid)
examiner, provides an opportunity for a valuable program consultation.
May serve to stimulate other collaborative efforts between departments/institutions
Incorporates external stakeholders and communities
Students may disclose to an outsider what they might not otherwise share
Outsiders can “see” attributes to which insiders have grown accustomed
Evaluators may have skills, knowledge, or resources not otherwise available
Useful in conducting goal-free evaluation (discovery-based evaluation without prior expectations)
Disadvantages:
Always some risk of a misfit between examiner’s expertise and/or expectations and program outcomes
For individualized evaluations and/or large programs, can be very costly and time consuming
Volunteers may become “donor weary”
Bottom Line:
Best used as a supplement to your own assessment methods to enhance external validity, but not as the primary assessment
option. Other benefits can be accrued from the cross-fertilization that often results from using external examiners.
Bibliographic References:
1. Bossert, James L., Quality Function Deployment, Milwaukee: ASQC Quality Press, 1991, especially pp. 52-64.
2. Fitzpatrick, Jody L. and Michael Morris, Eds., Current and Emerging Ethical Challenges in Evaluation, San
Francisco, CA: Jossey-Bass, 1999.
Behavioral Observations
Definition: Measuring the frequency, duration, topography, etc. of student actions, usually in a natural setting with non-
interactive methods; for example, formal or informal observations of a classroom. Observations are most often made by an
individual and can be augmented by audio- or videotape.
Advantages
Best way to evaluate degree to which attitudes, values, etc. are really put into action (= most internal validity).
Catching students being themselves is the most “natural” form of assessment (= best external validity).
Least intrusive assessment option, since purpose is to avoid any interference with typical student activities.
Disadvantages
Always some risk of confounded results due to “observer effect;” i.e., subjects may behave atypically if they know
they’re being observed.
Bottom Line:
This is the best way to know what students actually do, how they manifest their motives, attitudes and values. Special care
and planning are required for sensitive target behaviors, but it’s usually worth it for highly valid, useful results.
Bibliographic References:
1. Lincoln, Y. S. and E. G. Guba (1985). Naturalistic Inquiry. Newbury Park, CA: Sage Publications.
2. Miles, M. B. and A. M. Huberman (1984). Qualitative Data Analysis. Beverly Hills, CA: Sage Publications.
Archival Data
Definition: Biographical, academic, or other file data available from college or other agencies and institutions.
Target of Method: Primarily aggregated student information; can use comparable data from other institutions for
benchmarking.
Advantages:
Tend to be accessible, thus requiring less additional effort.
Build upon efforts that have already occurred.
Can be cost-efficient if the required data are readily retrievable in the desired format.
Constitute unobtrusive measurement, not requiring additional time or effort from students or other groups.
Very useful for longitudinal studies
Ideal way to establish a baseline for before and after comparisons
Disadvantages:
Especially in large institutions, may require considerable effort and coordination to determine exactly what data are
available campus-wide and to then get that information in desired format.
To be most helpful, datasets need to be combined. This requires an ability to download and combine specific
information from multiple sources, and it may require designing a separate database management system for the
downloaded information (a minimal sketch follows this list).
Typically the archived data are not exactly what is required, so that the evaluator must make compromises. In some
cases, it may be a stretch to use such data as surrogates for the desired measures.
If individual records are included, protection of rights and confidentiality must be assured; should obtain
Institutional Review Board approval if in doubt.
Availability may discourage the development of other, more responsive measures or data sources.
May encourage attempts to “find ways to use data” rather than measurement related to specific goals and objectives.
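To make the “combining datasets” disadvantage above concrete, here is a minimal sketch; the file names and column names are invented for illustration and are not from the original notes. It joins two hypothetical archival extracts on a shared student identifier using Python's pandas library, the kind of groundwork longitudinal and before/after studies require.

    # Minimal sketch (invented files and columns): combine two archival
    # extracts on a shared student identifier for longitudinal analysis.
    import pandas as pd

    admissions = pd.read_csv("admissions.csv")  # assumed: student_id, entry_year, sat_total
    degrees = pd.read_csv("degrees.csv")        # assumed: student_id, grad_year, major

    # A left join keeps every admitted student and attaches degree data
    # where it exists; students without a degree record show NaN.
    merged = admissions.merge(degrees, on="student_id", how="left")

    completion_rate = merged["grad_year"].notna().mean()
    print(f"Degree completion rate: {completion_rate:.1%}")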
Bibliographic References:
1. Astin, Alexander W. “Involvement in Learning Revisited: Lessons We Have Learned.” Journal of College Student
Development; v37 n2 p. 123-34, March 1996.
2. Astin, Alexander W.; et al., Degree Attainment Rates at American Colleges and Universities: Effects of Race,
Gender, and Institutional Type. Higher Education Research Inst., Inc., Los Angeles, CA, 1996.
Portfolios
Definition: Collections of multiple student work samples usually compiled over time. Rated by some type of rubric.
Target of Method: Used primarily on students in individual classes or for a particular cohort of students
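As a purely illustrative sketch of what “rated by some type of rubric” can look like in practice (the criteria, weights, and 1-4 scale below are invented, not taken from these notes), an analytic rubric can be encoded as weighted criteria so that every rater combines judgments the same way:

    # Minimal sketch (invented criteria, weights, and 1-4 scale): an
    # analytic rubric applied to one rater's scores for one portfolio.
    RUBRIC_WEIGHTS = {  # weights sum to 1.0
        "organization": 0.25,
        "evidence": 0.35,
        "writing": 0.25,
        "reflection": 0.15,
    }

    def weighted_score(ratings):
        """Combine per-criterion ratings into a single weighted score."""
        return sum(RUBRIC_WEIGHTS[c] * r for c, r in ratings.items())

    print(weighted_score({"organization": 3, "evidence": 4,
                          "writing": 3, "reflection": 2}))  # -> 3.2

Fixing the criteria and weights in advance is one way to work toward the reliable and valid grading criteria discussed under Disadvantages below.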
Advantages:
Can be used to view learning and development longitudinally (e.g., samples of student writing over time can be
collected), which is the most valid and useful perspective.
Multiple components of a curriculum can be measured (e.g., writing, critical thinking, research skills) at the same
time.
Samples in a portfolio are more likely than test results to reflect student ability when pre-planning, input from
others, and similar opportunities common to most work settings are available (which increases
generalizability/external validity of results).
The process of reviewing and grading portfolios provides an excellent opportunity for faculty exchange and
development, discussion of curriculum goals and objectives, review of grading criteria, and program feedback.
Economical in terms of student time and effort, since no separate “assessment administration” time is required.
Greater faculty control over interpretation and use of results.
Results are more likely to be meaningful at all levels (i.e., the individual student, program, or institution) and can be
used for diagnostic/prescriptive purposes as well.
Avoids or minimizes “test anxiety” and other “one shot” measurement problems.
Increases “power” of maximum performance measures over more artificial or restrictive “speed” measures on tests or
in-class samples.
Increases student participation (e.g., selection, revision, evaluation) in the assessment process.
Disadvantages
Costly in terms of evaluator time and effort.
Management of the collection and grading process, including the establishment of reliable and valid grading criteria,
is likely to be challenging.
May not provide for externality.
If samples to be included have been previously submitted for course grades, faculty may be concerned that a hidden
agenda of the process is to validate their grading.
Security concerns may arise as to whether submitted samples are the students’ own work, or adhere to other
measurement criteria.
Bibliographic References:
1. Barrett, H.C. (1994). Technology-supported assessment portfolios. "Computing Teacher," 21(6), 9-12. (EJ 479 843)
2. Hart, D. (1994). Authentic assessment: a handbook for educators. Menlo Park, CA: Addison-Wesley.
3. Hodges, D. (1998). Portfolio: A self-learning guide. Barrington, IL.
4. Jackson, L. and Caffarella, R.S. (1994). Experiential learning: A new approach. San Francisco, CA: Jossey-Bass.
5. Khattri, N., Kane, M., and Reeve, A. (1995). How performance assessments affect teaching and learning.
Educational Leadership, (11), 80-83.
6. Murphy, S.M. (1998). Reflection: In portfolios and beyond. Clearing House,(72), 7-10.
7. Paulson, L.F., Paulson, P.R., & Meyer, C. (1991) What makes a portfolio a portfolio? "Educational Leadership,"
48(5), 60-63. (EJ 421 352)
8. Porter, C. and Cleland, J. (1995). The portfolio as a learning strategy. Portsmouth, NH: Boynton/Cook Publishers.
9. Rogers, Gloria and Timothy Chow, “Electronic Portfolios and the Assessment of Student Learning.” Assessment
Update, Jossey-Bass Publisher, January-February 2000, Vol. 12, No. 1, pp. 4-6, 11.
Examples of various assessment tools are included in the table below. It should be noted that the categorizations
may vary depending upon your perspective and the way in which you construct the assessment.
Legend – Method: D = Direct, I = Indirect. Domain: C = Cognitive, P = Psychomotor, A = Affective. Usage: F = Formative,
S = Summative. Bloom's level: K = Knowledge, C = Comprehension, A = Application, ASE = Analysis, Synthesis, or
Evaluation.

Tool: Multiple Choice Exam
Method: D | Domain: C | Usage: F or S | Bloom's level: K, C (ASE if carefully constructed)
Pros: easy to grade; objective
Cons: reduces assessment to multiple choice answers

Tool: Licensing Exams
Method: D | Domain: C | Usage: S | Bloom's level: K, C, A
Pros: easy to score and compare
Cons: no authentic testing; may become outdated
In judging whether a particular assessment tool is suitable, the following questions may help:
1. Does the assessment adequately evaluate academic performance relevant to the desired outcome? (Validity)
2. Does this assessment tool enable students with different learning styles or abilities to show you what they have
learned and what they can do?
3. Does the content examined by the assessment align with the content from the course? (Content validity)
4. Does this assessment method adequately address the knowledge, skills, abilities, behavior, and values
associated with the intended outcome? (Domain validity)
5. Will the assessment provide information at a level appropriate to the outcome? (Bloom’s)
6. Will the data accurately represent what the student can do in an authentic or real life situation? (Authentic
assessment)
7. Is the grading scheme consistent; would a student receive the same grade for the same work on multiple
evaluations? (Reliability)
8. Can multiple people use the scoring mechanism and come up with the same general score? (Reliability; see the sketch after this list)
9. Does the assessment provide data that is specific enough for the desired outcomes? (alignment with outcome)
10. Is the assessment summative or formative - if formative does it generate diagnostic feedback to improve
learning?
11. Is the assessment summative or formative - if summative, is the final evaluation built upon multiple sources of
data? (AAHE Good practice)
12. If this is a summative assessment, have the students had ample opportunity for formative feedback and practice
displaying what they know and can do?
13. Is the assessment unbiased or value-neutral, minimizing an attempt to give desirable responses and reducing
any cultural misinterpretations?
14. Are the intended uses for the assessment clear? (Grading, program review, both)
18. Will the information derived from the assessment help to improve teaching and learning? (AAHE Good Practice)
19. Will you provide the students with a copy of the rubric or assignment grading criteria?
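As a minimal sketch of the inter-rater check raised in question 8 (the ratings below are invented for illustration), percent agreement and Cohen's kappa can be computed for two raters who scored the same set of student artifacts:

    # Minimal sketch (invented ratings): inter-rater reliability for two
    # raters scoring the same ten artifacts on a 1-4 rubric scale.
    from collections import Counter

    rater_a = [3, 4, 2, 3, 3, 4, 1, 2, 3, 4]
    rater_b = [3, 4, 2, 2, 3, 4, 1, 3, 3, 4]

    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Chance agreement: probability both raters independently pick each category.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in counts_a | counts_b)

    kappa = (observed - expected) / (1 - expected)
    print(f"Percent agreement: {observed:.0%}, Cohen's kappa: {kappa:.2f}")

Here the raters agree on 8 of 10 artifacts (kappa ≈ 0.71); kappa discounts the agreement expected by chance, so it is the more conservative of the two figures.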