
CAPSTONE RUBRIC

Rubric for Using Capstone Experiences to Assess Program Learning Outcomes


Each criterion is described at four levels of development: Initial, Emerging, Developed, and Highly Developed.

Criterion: Relevant Outcomes and Lines of Evidence Identified
  Initial: It is not clear which program outcomes will be assessed in the capstone course.
  Emerging: The relevant outcomes are identified, e.g., ability to integrate knowledge to solve complex problems; however, concrete plans for collecting evidence for each outcome have not been developed.
  Developed: Relevant outcomes are identified. Concrete plans for collecting evidence for each outcome are agreed upon and used routinely by faculty who teach the capstone course.
  Highly Developed: Relevant evidence is collected; faculty have agreed on explicit criteria statements, e.g., rubrics, and have identified examples of student performance at varying levels of mastery for each relevant outcome.

Criterion: Valid Results
  Initial: It is not clear that potentially valid evidence for each relevant outcome is collected, and/or individual faculty use idiosyncratic criteria to assess student work or performances.
  Emerging: Faculty have reached general agreement on the types of evidence to be collected for each outcome; they have discussed relevant criteria for assessing each outcome, but these are not yet fully defined.
  Developed: Faculty have agreed on concrete plans for collecting relevant evidence for each outcome. Explicit criteria, e.g., rubrics, have been developed to assess the level of student attainment of each outcome.
  Highly Developed: Assessment criteria, such as rubrics, have been pilot-tested and refined over time; they are usually shared with students. Feedback from external reviewers has led to refinements in the assessment process, and the department uses external benchmarking data.

Criterion: Reliable Results
  Initial: Those who review student work are not calibrated to apply assessment criteria in the same way; there are no checks for inter-rater reliability.
  Emerging: Reviewers are calibrated to apply assessment criteria in the same way, or faculty routinely check for inter-rater reliability.
  Developed: Reviewers are calibrated to apply assessment criteria in the same way, and faculty routinely check for inter-rater reliability.
  Highly Developed: Reviewers are calibrated, and faculty routinely find that assessment data have high inter-rater reliability.

Criterion: Results Are Used
  Initial: Results for each outcome may or may not be collected. They are not discussed among the faculty.
  Emerging: Results for each outcome are collected and may be discussed by faculty, but results have not been used to improve the program.
  Developed: Results for each outcome are collected, discussed by faculty, analyzed, and used to improve the program.
  Highly Developed: Faculty routinely discuss results, plan needed changes, secure necessary resources, and implement changes. They may collaborate with others, such as librarians or Student Affairs professionals, to improve results. Follow-up studies confirm that changes have improved learning.

Criterion: The Student Experience
  Initial: Students know little or nothing about the purpose of the capstone or outcomes to be assessed. It is just another course or requirement.
  Emerging: Students have some knowledge of the purpose and outcomes of the capstone. Communication is occasional, informal, and left to individual faculty or advisors.
  Developed: Students have a good grasp of the purpose and outcomes of the capstone and embrace it as a learning opportunity. Information is readily available in advising guides, etc.
  Highly Developed: Students are well-acquainted with the purpose and outcomes of the capstone and embrace it. They may participate in refining the experience, outcomes, and rubrics. Information is readily available.
Guidelines for Using the Capstone Rubric

A capstone is a culminating course or experience that requires review, synthesis and application of what has been learned. For the fullest picture of an
institution’s accomplishments, reviews of written materials should be augmented with interviews at the time of the visit.

Dimensions of the Rubric:


1. Relevant Outcomes and Evidence. It is likely that not all program learning outcomes can be assessed within a single capstone course or experience.
Questions: Have faculty explicitly determined which program outcomes will be assessed in the capstone? Have they agreed on concrete
plans for collecting evidence relevant to each targeted outcome? Have they agreed on explicit criteria, such as rubrics, for assessing the evidence?
Have they identified examples of student performance for each outcome at varying performance levels (e.g., below expectations, meeting
expectations, exceeding expectations for graduation)?
2. Valid Results. A valid assessment of a particular outcome leads to accurate conclusions concerning students' achievement of that outcome.
Sometimes faculty collect evidence that does not have the potential to provide valid conclusions. For example, a multiple-choice test will not provide
evidence of students' ability to deliver effective oral presentations. Assessment requires the collection of valid evidence and judgments about that
evidence that are based on well-established, agreed-upon criteria that specify how to identify low-, medium-, or high-quality work.
Questions: Are faculty collecting valid evidence for each targeted outcome? Are they using well-established, agreed-upon criteria, such as rubrics,
for assessing the evidence for each outcome? Have faculty pilot-tested and refined their process based on experience and feedback from external
reviewers? Are they sharing the criteria with their students? Are they using benchmarking (comparison) data?
3. Reliable Results. Well-qualified judges should reach the same conclusions about a student’s achievement of a learning outcome, demonstrating
inter-rater reliability. If two judges independently assess a set of materials, their ratings can be correlated and discrepancy between their scores can
be examined. Data are reliable if the correlation is high and/or if discrepancies are small. Raters generally are calibrated (“normed”) to increase
reliability. Calibration usually involves a training session in which raters apply rubrics to preselected examples of student work that vary in quality,
then reach consensus about the rating each example should receive. The purpose is to ensure that all raters apply the criteria in the same way so that
each student’s product would receive the same score, regardless of rater.
Questions: Are reviewers calibrated? Are checks for inter-rater reliability made? Is there evidence of high inter-rater reliability? (A brief computational sketch of such a check appears after this list.)
4. Results Are Used. Assessment is a process designed to monitor and improve learning, so assessment findings should have an impact. Faculty can
reflect on results for each outcome and decide if they are acceptable or disappointing. If results do not meet faculty standards, faculty can determine
which changes should be made, e.g., in pedagogy, curriculum, student support, or faculty support.
Questions: Do faculty collect assessment results, discuss them, and reach conclusions about student achievement? Do they develop explicit
plans to improve student learning? Do they implement those plans? Do they have a history of securing necessary resources to support this
implementation? Do they collaborate with other institution professionals to improve student learning? Do follow-up studies confirm that
changes have improved learning?
5. The Student Experience. Students should understand the purposes different educational experiences serve in promoting their learning and
development and know how to take advantage of them; ideally they can also participate in shaping those experiences.
Questions: Are purposes and outcomes communicated to students? Do they understand how capstones support learning? Do they
participate in reviews of the capstone experience, its outcomes, criteria, or related activities?
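Dimension 3 above describes examining the correlation and the discrepancies between two reviewers' independent ratings. The following is a minimal sketch, in Python, of how such a check might be run; the rater names, the ten scores, and the 0-4 rubric scale are hypothetical illustrations, not data from any program.

    # Minimal inter-rater reliability check for two calibrated reviewers.
    # Hypothetical data: each list holds one reviewer's rubric scores (0-4)
    # for the same ten capstone projects, rated independently.
    from statistics import correlation, mean  # requires Python 3.10+

    rater_a = [4, 3, 3, 2, 4, 1, 3, 2, 4, 3]
    rater_b = [4, 3, 2, 2, 4, 1, 3, 3, 4, 3]

    # Pearson correlation: high values mean the raters rank the projects similarly.
    r = correlation(rater_a, rater_b)

    # Mean absolute discrepancy: small values mean the raters assign similar levels.
    mean_gap = mean(abs(a - b) for a, b in zip(rater_a, rater_b))

    print(f"correlation = {r:.2f}, mean discrepancy = {mean_gap:.2f}")

As the guideline notes, data can be treated as reliable when the correlation is high and/or the discrepancies are small; otherwise another calibration (norming) session is usually warranted.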

8/21/13
GENERAL EDUCATION RUBRIC
Rubric for Evaluating General Education Assessment Process

Each criterion is described at four levels of development: Initial, Emerging, Developed, and Highly Developed.

Criterion: GE Outcomes
  Initial: GE learning outcomes have not yet been developed for the entire GE program; there may be one or two common ones, e.g., writing, critical thinking.
  Emerging: Learning outcomes have been developed for the entire GE program, but the list is problematic (e.g., too long, too short, unconnected to the mission, or not assessable).
  Developed: Outcomes are well organized, assessable, and focus on the most important knowledge, skills, and values of GE. Work to define levels of performance is beginning.
  Highly Developed: Outcomes are reasonable, appropriate, and assessable. Explicit criteria, such as rubrics, are available for assessing student learning. Exemplars of student performance are specified at varying levels for each outcome.

Criterion: Curriculum Alignment with Outcomes
  Initial: There is no clear relationship between the outcomes and the GE curriculum. Students may not have the opportunity to develop each outcome adequately.
  Emerging: Students appear to have opportunities to develop each outcome. The curriculum map shows opportunities to acquire outcomes. Sequencing and frequency of opportunities may be problematic.
  Developed: The curriculum is explicitly designed to provide opportunities for students to develop increasing sophistication with respect to each outcome. The curriculum map shows "beginning," "intermediate," and "advanced" treatment of outcomes.
  Highly Developed: Curriculum, pedagogy, grading, and advising are explicitly aligned with GE outcomes. The curriculum map and rubrics are well known and consistently used. Co-curricular activities are viewed as resources for GE learning and aligned with outcomes.

Criterion: Assessment Planning
  Initial: There is no formal plan for assessing each GE outcome. No coordinator or committee takes responsibility for the program or implementation of its assessment plan.
  Emerging: GE assessment relies on short-term planning: selecting which outcome(s) to assess in the current year. Interpretation and use of findings are implicit rather than planned or funded. No individual or committee is in charge.
  Developed: The campus has a reasonable, multi-year assessment plan that identifies when each outcome will be assessed. The plan addresses use of findings for improvement. A coordinator or committee is charged to oversee assessment.
  Highly Developed: The campus has a fully articulated, sustainable, multi-year assessment plan that describes when and how each outcome will be assessed. A coordinator or committee leads review and revision of the plan, as needed. The campus uses some form of comparative data (e.g., its own past record, aspirational goals, external benchmarking).

Criterion: Assessment Implementation
  Initial: It is not clear that potentially valid evidence for each GE outcome is collected, and/or individual reviewers use idiosyncratic criteria to assess student work.
  Emerging: Appropriate evidence is collected, with some discussion of relevant criteria for assessing each outcome. Reviewers of student work are calibrated to apply assessment criteria in the same way, and/or faculty check for inter-rater reliability.
  Developed: Appropriate evidence is collected; faculty use explicit criteria, such as rubrics, to assess student attainment of each outcome. Reviewers of student work are calibrated to apply assessment criteria in the same way; faculty routinely check for inter-rater reliability.
  Highly Developed: Assessment criteria, such as rubrics, have been pilot-tested and refined and are typically shared with students. Reviewers are calibrated, with high inter-rater reliability. Comparative data are used when interpreting results and deciding on changes for improvement.

Criterion: Use of Results
  Initial: Results for GE outcomes are collected but not discussed. There is little or no collective use of findings. Students are unaware of and/or uninvolved in the process.
  Emerging: Results are collected and discussed by relevant faculty; results are used occasionally to improve the GE program. Students are vaguely aware of outcomes and assessments to improve their learning.
  Developed: Results for each outcome are collected, discussed by relevant faculty, and regularly used to improve the program. Students are very aware of and engaged in improvement of their learning.
  Highly Developed: Relevant faculty routinely discuss results, plan improvements, secure necessary resources, and implement changes. They may collaborate with others to improve the program. Follow-up studies confirm that changes have improved learning.
Guidelines for Using the General Education Rubric
For the fullest picture of an institution’s accomplishments, reviews of written materials should be augmented with interviews at the time of the visit. Discussion
validates that the reality matches the written record.
Dimensions of the Rubric:
1. GE Outcomes. The GE learning outcomes consist of the most important knowledge, skills, and values students learn in the GE program. There is no strict
rule concerning the optimum number of outcomes, and quality is more important than quantity. Do not confuse learning processes (e.g., completing a science
lab) with learning outcomes (what is learned in the science lab, such as the ability to apply the scientific method). Outcome statements specify what students do to
demonstrate their learning. Criteria for assessing student work are usually specified in rubrics, and faculty identify examples of varying levels of student
performance, such as work that does not meet expectations, work that meets expectations, and work that exceeds expectations.
Questions: Is the list of outcomes reasonable and appropriate? Do the outcomes express how students can demonstrate learning? Have faculty agreed on
explicit criteria, such as rubrics, for assessing each outcome? Do they have exemplars of work representing different levels of mastery for each outcome?
2. Curriculum Alignment. Students cannot be held responsible for mastering learning outcomes without a GE program that is explicitly designed to develop
those outcomes. This design is often summarized as a curriculum map—a matrix that shows the relationship between courses and learning outcomes.
Pedagogy and grading aligned with outcomes help encourage student growth and provide students feedback on their development. Relevant academic
support and student services can also be designed to support development of the learning outcomes, since learning occurs outside of the classroom as well as
within it.
Questions: Is the GE curriculum explicitly aligned with program outcomes? Do faculty select effective pedagogies and use grading to promote
learning? Are support services explicitly aligned to promote student development of GE learning outcomes? (A minimal curriculum-map sketch appears after this list.)
3. Assessment Planning. Explicit, sustainable plans for assessing each GE outcome need to be developed. Each outcome does not need to be assessed every year,
but the plan should cycle through the outcomes over a reasonable period of time, such as the period for program review cycles. Experience and feedback from
external reviewers can guide plan revision.
Questions: Does the campus have a GE assessment plan? Does the plan clarify when, how, and how often each outcome will be assessed? Will all
outcomes be assessed over a reasonable period of time? Is the plan sustainable? Supported by appropriate resources? Are plans revised, as needed, based
on experience and feedback from external reviewers? Does the plan include collection of comparative data?
4. Assessment Implementation. Assessment requires the collection of valid evidence that is based on agreed-upon criteria that identify work that meets or
exceeds expectations. These criteria are usually specified in rubrics. Well-qualified judges should reach the same conclusions about a student’s achievement of
a learning outcome, demonstrating inter-rater reliability. If two judges independently assess a set of materials, their ratings can be correlated and discrepancy
between their scores can be examined. Data are reliable if the correlation is high and/or if discrepancies are small. Raters generally are calibrated (“normed”)
to increase reliability. Calibration usually involves a training session in which raters apply rubrics to preselected examples of student work that vary in
quality, then reach consensus about the rating each example should receive. The purpose is to ensure that all raters apply the criteria in the same way so that
each student’s product would receive the same score, regardless of rater.
Questions: Do GE assessment studies systematically collect valid evidence for each targeted outcome? Does faculty use agreed-upon criteria such as
rubrics for assessing the evidence for each outcome? Do they share the criteria with their students? Are those who assess student work calibrated in the
use of assessment criteria? Does the campus routinely document high inter-rater reliability? Do faculty pilot-test and refine their assessment processes?
Do they take external benchmarking (comparison) data into account when interpreting results?
5. Use of Results. Assessment is a process designed to monitor and improve learning. Faculty can reflect on results for each outcome and decide if they are
acceptable or disappointing. If results do not meet faculty standards, faculty (and others, such as student affairs personnel, librarians, and tutors) can
determine what changes should be made, e.g., in pedagogy, curriculum, student support, or faculty support.
Questions: Do faculty collect assessment results, discuss them, and reach conclusions about student achievement? Do they develop explicit plans to
improve student learning? Do they implement those plans? Do they have a history of securing necessary resources to support this implementation? Do
they collaborate with other campus professionals to improve student learning? Do follow-up studies confirm that changes have improved
learning?
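Dimension 2 above describes a curriculum map as a matrix relating courses to GE learning outcomes, with "beginning," "intermediate," and "advanced" treatment. Below is a minimal sketch of how such a map can be represented and queried; the course numbers, outcome names, and level codes are hypothetical examples, not a recommended design.

    # Hypothetical curriculum map: rows are courses, columns are GE outcomes.
    # "B" = beginning, "I" = intermediate, "A" = advanced, "" = not addressed.
    curriculum_map = {
        "ENGL 101": {"Written Communication": "B", "Critical Thinking": "B", "Information Literacy": ""},
        "HIST 210": {"Written Communication": "I", "Critical Thinking": "I", "Information Literacy": "B"},
        "PHIL 320": {"Written Communication": "A", "Critical Thinking": "A", "Information Literacy": "I"},
    }

    # Simple coverage check: which courses address each outcome, and at what level?
    outcomes = sorted({o for levels in curriculum_map.values() for o in levels})
    for outcome in outcomes:
        coverage = {course: levels[outcome] for course, levels in curriculum_map.items() if levels[outcome]}
        print(f"{outcome}: {coverage or 'not addressed by any course'}")

A map like this makes gaps visible, e.g., an outcome with no "advanced" treatment, which is the kind of sequencing and frequency problem the Emerging level of the rubric describes.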

Rev 8/2013
PORTFOLIOS RUBRIC
Rubric for Using Portfolios to Assess Program Learning Outcomes

Each criterion is described at four levels of development: Initial, Emerging, Developed, and Highly Developed.

Criterion: Clarification of Students' Tasks
  Initial: Instructions to students for portfolio development provide insufficient detail for them to know what faculty expect. Instructions may not identify outcomes to be addressed in the portfolio.
  Emerging: Students receive instructions for their portfolios, but they still have problems determining what is required of them and/or why they are compiling a portfolio.
  Developed: Students receive instructions that describe faculty expectations in detail and include the purpose of the portfolio, types of evidence to include, role of the reflective essay (if required), and format of the finished product.
  Highly Developed: Students in the program understand the portfolio requirement and the rationale for it, and they view the portfolio as helping them develop self-assessment skills. Faculty may monitor the developing portfolio to provide formative feedback and/or advise individual students.

Criterion: Valid Results
  Initial: It is not clear that valid evidence for each relevant outcome is collected, and/or individual reviewers use idiosyncratic criteria to assess student work.
  Emerging: Appropriate evidence is collected for each outcome, and faculty have discussed relevant criteria for assessing each outcome.
  Developed: Appropriate evidence is collected for each outcome; faculty use explicit criteria, such as agreed-upon rubrics, to assess student attainment of each outcome. Rubrics are usually shared with students.
  Highly Developed: Assessment criteria, e.g., in the form of rubrics, have been pilot-tested and refined over time; they are shared with students, and students may have helped develop them. Feedback from external reviewers has led to refinements in the assessment process. The department also uses external benchmarking data.

Criterion: Reliable Results
  Initial: Those who review student work are not calibrated with each other to apply assessment criteria in the same way, and there are no checks for inter-rater reliability.
  Emerging: Reviewers are calibrated to apply assessment criteria in the same way, or faculty routinely check for inter-rater reliability.
  Developed: Reviewers are calibrated to apply assessment criteria in the same way, and faculty routinely check for inter-rater reliability.
  Highly Developed: Reviewers are calibrated; faculty routinely find that assessment data have high inter-rater reliability.

Criterion: Results Are Used
  Initial: Results for each outcome are collected, but they are not discussed among the faculty.
  Emerging: Results for each outcome are collected and discussed by the faculty, but results have not been used to improve the program.
  Developed: Results for each outcome are collected, discussed by faculty, and used to improve the program.
  Highly Developed: Faculty routinely discuss results, plan needed changes, secure necessary resources, and implement changes. They may collaborate with others, such as librarians or Student Affairs professionals, to improve student learning. Students may also participate in discussions and/or receive feedback, either individually or in the aggregate. Follow-up studies confirm that changes have improved learning.

Criterion: Technical Support for e-Portfolios
  Initial: There is no technical support for students or faculty to learn the software or to deal with problems.
  Emerging: There is informal or minimal formal support for students and faculty.
  Developed: Formal technical support is readily available, and technicians proactively assist users in learning the software and solving problems.
  Highly Developed: Support is readily available, proactive, and effective. Programming changes are made when needed.
Guidelines for Using the Portfolio Rubric
Portfolios can serve multiple purposes: to build students' confidence by showing development over time; to display students' best work; to better advise
students; to provide examples of work students can show to employers; and to assess program learning outcomes. This rubric addresses the use of portfolios for
assessment. Two common types of portfolios for assessing student learning outcomes are:
• Showcase portfolios—collections of each student's best work
• Developmental portfolios—collections of work from early, middle, and late stages in the student's academic career that demonstrate growth
Faculty generally require students to include a reflective essay that describes how the evidence in the portfolio demonstrates their achievement of program
learning outcomes. Sometimes faculty monitor developing portfolios to provide formative feedback and/or advising to students, and sometimes they
collect portfolios only as students near graduation. Portfolio assignments should clarify the purpose of the portfolio, the kinds of evidence to be included,
and the format (e.g., paper vs. e-portfolios); and students should view the portfolio as contributing to their personal development.

Dimensions of the Rubric:


1. Clarification of Students’ Task. Most students have never created a portfolio, and they need explicit guidance.
Questions: Does the portfolio assignment provide sufficient detail so students understand the purpose, the types of evidence to include, the
learning outcomes to address, the role of the reflective essay (if any), and the required format? Do students view the portfolio as contributing to
their ability to self-assess? Does faculty use the developing portfolios to assist individual students?
2. Valid Results. Sometimes portfolios lack valid evidence for assessing particular outcomes. For example, portfolios may not allow faculty to assess
how well students can deliver oral presentations. Judgments about that evidence need to be based on well-established, agreed-upon criteria that
specify (usually in rubrics) how to identify work that meets or exceeds expectations.
Questions: Do the portfolios systematically include valid evidence for each targeted outcome? Is faculty using well-established, agreed-upon
criteria, such as rubrics, to assess the evidence for each outcome? Have faculty pilot-tested and refined their process? Are criteria shared with
students? Are they collaborating with colleagues at other institutions to secure benchmarking (comparison) data?
3. Reliable Results. Well-qualified judges should reach the same conclusions about a student’s achievement of a learning outcome, demonstrating inter-
rater reliability. If two judges independently assess a set of materials, their ratings can be correlated and discrepancy between their scores can be
examined. Data are reliable if the correlation is high and/or if discrepancies are small. Raters generally are calibrated (“normed”) to increase
reliability. Calibration usually involves a training session in which raters apply rubrics to preselected examples of student work that vary in quality,
then reach consensus about the rating each example should receive. The purpose is to ensure that all raters apply the criteria in the same way so that
each student’s product would receive the same score, regardless of rater.
Questions: Are reviewers calibrated? Are checks for inter-rater reliability made? Is there evidence of high inter-rater reliability? (A brief sketch of a rater-agreement check appears after this list.)
4. Results Are Used. Assessment is a process designed to monitor and improve learning, so assessment findings should have an impact. Faculty can
reflect on results for each outcome and decide if they are acceptable or disappointing. If results do not meet their standards, faculty can determine
what changes should be made, e.g., in pedagogy, curriculum, student support, or faculty support.
Questions: Do faculty collect assessment results, discuss them, and reach conclusions about student achievement? Do they develop explicit
plans to improve student learning? Do they implement those plans? Do they have a history of securing necessary resources to support this
implementation? Do they collaborate with other institution professionals to improve student learning? Do follow-up studies confirm that
changes have improved learning?
5. Technical Support for e-Portfolios. Faculty and students alike require support, especially when a new software program is introduced. Lack of
support can lead to frustration and failure of the process. Support personnel may also have useful insights into how the portfolio assessment
process can be refined.
Questions: What is the quality and extent of technical support? What is the overall level of faculty and student satisfaction with the technology
and support services?
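Dimension 3 above repeats the inter-rater reliability guidance. As a complement to the correlation check sketched in the capstone section, the snippet below computes a simple exact-agreement rate and a chance-corrected agreement (Cohen's kappa) for two reviewers of the same portfolios; the scores and the 1-4 rubric scale are hypothetical.

    from collections import Counter

    # Hypothetical rubric scores (1-4) assigned independently by two reviewers
    # to the same eight portfolios.
    rater_a = [3, 4, 2, 3, 1, 4, 3, 2]
    rater_b = [3, 4, 2, 2, 1, 4, 3, 3]

    n = len(rater_a)

    # Observed exact agreement: share of portfolios given identical scores.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Expected chance agreement, from each rater's marginal score frequencies.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum((count_a[s] / n) * (count_b[s] / n) for s in set(rater_a) | set(rater_b))

    # Cohen's kappa corrects the observed agreement for chance agreement.
    kappa = (p_observed - p_expected) / (1 - p_expected)

    print(f"exact agreement = {p_observed:.0%}, kappa = {kappa:.2f}")

Values of kappa near 1 indicate strong agreement beyond chance; low values signal that reviewers should be re-calibrated before the data are used.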

Rev 8/2013
PROGRAM LEARNING OUTCOMES RUBRIC
Rubric for Assessing the Quality of Academic Program Learning Outcomes

Each criterion is described at four levels of development: Initial, Emerging, Developed, and Highly Developed.

Criterion: Comprehensive List
  Initial: The list of outcomes is problematic: e.g., very incomplete, overly detailed, inappropriate, or disorganized. It may include only discipline-specific learning, ignoring relevant institution-wide learning. The list may confuse learning processes (e.g., doing an internship) with learning outcomes (e.g., application of theory to real-world problems).
  Emerging: The list includes reasonable outcomes but does not specify expectations for the program as a whole. Relevant institution-wide learning outcomes and/or national disciplinary standards may be ignored. Distinctions between expectations for undergraduate and graduate programs may be unclear.
  Developed: The list is a well-organized set of reasonable outcomes that focus on the key knowledge, skills, and values students learn in the program. It includes relevant institution-wide outcomes (e.g., communication or critical thinking skills). Outcomes are appropriate for the level (undergraduate vs. graduate); national disciplinary standards have been considered.
  Highly Developed: The list is reasonable, appropriate, and comprehensive, with clear distinctions between undergraduate and graduate expectations, if applicable. National disciplinary standards have been considered. Faculty have agreed on explicit criteria for assessing students' level of mastery of each outcome.

Criterion: Assessable Outcomes
  Initial: Outcome statements do not identify what students can do to demonstrate learning. Statements such as "Students understand scientific method" do not specify how understanding can be demonstrated and assessed.
  Emerging: Most of the outcomes indicate how students can demonstrate their learning.
  Developed: Each outcome describes how students can demonstrate learning, e.g., "Graduates can write reports in APA style" or "Graduates can make original contributions to biological knowledge."
  Highly Developed: Outcomes describe how students can demonstrate their learning. Faculty have agreed on explicit criteria statements, such as rubrics, and have identified examples of student performance at varying levels for each outcome.

Criterion: Alignment
  Initial: There is no clear relationship between the outcomes and the curriculum that students experience.
  Emerging: Students appear to be given reasonable opportunities to develop the outcomes in the required curriculum.
  Developed: The curriculum is designed to provide opportunities for students to learn and to develop increasing sophistication with respect to each outcome. This design may be summarized in a curriculum map.
  Highly Developed: Pedagogy, grading, the curriculum, relevant student support services, and the co-curriculum are explicitly and intentionally aligned with each outcome. The curriculum map indicates increasing levels of proficiency.

Criterion: Assessment Planning
  Initial: There is no formal plan for assessing each outcome.
  Emerging: The program relies on short-term planning, such as selecting which outcome(s) to assess in the current year.
  Developed: The program has a reasonable, multi-year assessment plan that identifies when each outcome will be assessed. The plan may explicitly include analysis and implementation of improvements.
  Highly Developed: The program has a fully articulated, sustainable, multi-year assessment plan that describes when and how each outcome will be assessed and how improvements based on findings will be implemented. The plan is routinely examined and revised, as needed.

Criterion: The Student Experience
  Initial: Students know little or nothing about the overall outcomes of the program. Communication of outcomes to students, e.g., in syllabi or the catalog, is spotty or nonexistent.
  Emerging: Students have some knowledge of program outcomes. Communication is occasional and informal, left to individual faculty or advisors.
  Developed: Students have a good grasp of program outcomes. They may use them to guide their own learning. Outcomes are included in most syllabi and are readily available in the catalog, on the web page, and elsewhere.
  Highly Developed: Students are well-acquainted with program outcomes and may participate in the creation and use of rubrics. They are skilled at self-assessing in relation to the outcomes and levels of performance. Program policy calls for inclusion of outcomes in all course syllabi, and they are readily available in other program documents.
Guidelines on Using the Learning Outcomes Rubric
This rubric is intended to help teams assess the extent to which an institution has developed and assessed program learning outcomes and made improvements
based on assessment results. For the fullest picture of an institution’s accomplishments, reviews of written materials should be augmented with interviews at the
time of the visit.

Dimensions of the Rubric:


1. Comprehensive List. The set of program learning outcomes should be a short but comprehensive list of the most important knowledge, skills, and values
students learn in the program. Higher levels of sophistication are expected for graduate program outcomes than for undergraduate program outcomes.
There is no strict rule concerning the optimum number of outcomes, but quality is more important than quantity. Learning processes (e.g., completing an
internship) should not be confused with learning outcomes (what is learned in the internship, such as application of theory to real-world practice).
Questions. Is the list reasonable, appropriate and well organized? Are relevant institution-wide outcomes, such as information literacy, included?
Are distinctions between undergraduate and graduate outcomes clear? Have national disciplinary standards been considered when developing
and refining the outcomes? Are explicit criteria – as defined in a rubric, for example – available for each outcome?
2. Assessable Outcomes. Outcome statements specify what students can do to demonstrate their learning. For example, an outcome might state, “Graduates
of our program can collaborate effectively to reach a common goal” or “Graduates of our program can design research studies to test theories.” These
outcomes are assessable because the quality of collaboration in teams and the quality of student-created research designs can be observed. Criteria for
assessing student products or behaviors usually are specified in rubrics that indicate varying levels of student performance (i.e., work that does not meet
expectations, meets expectations, and exceeds expectations).
Questions. Do the outcomes clarify how students can demonstrate learning? Are there agreed-upon, explicit criteria, such as rubrics, for
assessing each outcome? Are there examples of student work representing different levels of mastery for each outcome? (A small illustrative rubric structure appears after this list.)
3. Alignment. Students cannot be held responsible for mastering learning outcomes without a curriculum that is designed to develop increasing sophistication
with respect to each outcome. This design is often summarized in a curriculum map—a matrix that shows the relationship between courses in the required
curriculum and the program’s learning outcomes. Pedagogy and grading aligned with outcomes help encourage student growth and provide students
feedback on their development.
Questions. Is the curriculum explicitly aligned with the program outcomes? Do faculty select effective pedagogy and use grading to promote
learning? Are student support services and the co-curriculum explicitly aligned to reinforce and promote the development of student learning
outcomes?
4. Assessment Planning. Programs need not assess every outcome every year, but faculty are expected to have a plan to cycle through the outcomes over a
reasonable period of time, such as the timeframe for program review.
Questions. Does the plan clarify when, how, and how often each outcome will be assessed? Will all outcomes be assessed over a reasonable
period of time? Is the plan sustainable, in terms of human, fiscal, and other resources? Are assessment plans revised, as needed?
5. The Student Experience. At a minimum, students need to be aware of the learning outcomes of the program(s) in which they are enrolled. Ideally, they
could be included as partners in defining and applying the outcomes and the criteria for varying levels of accomplishment.
Questions: Are the outcomes communicated to students consistently and meaningfully? Do students understand what the outcomes mean
and how they can further their own learning? Do students use the outcomes and criteria to self-assess?
Do they participate in reviews of outcomes, criteria, curriculum design, or related activities?
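Dimension 2 above notes that criteria for assessing student products are usually specified in rubrics with levels such as work that does not meet, meets, or exceeds expectations. The sketch below shows one plausible way to record such a rubric and score a piece of student work against it; the criteria, level descriptors, and scores are invented for illustration only, built around the document's example outcome "Graduates can write reports in APA style."

    # Hypothetical rubric for one assessable outcome:
    # "Graduates can write reports in APA style."
    rubric = {
        "Formatting and citations": {
            1: "Frequent APA errors in citations, references, or layout.",
            2: "Mostly correct APA formatting with occasional errors.",
            3: "Consistently correct APA formatting and citations.",
        },
        "Organization of the report": {
            1: "Sections are missing or out of order.",
            2: "All required sections present; transitions are uneven.",
            3: "Clear, complete structure that supports the argument.",
        },
    }

    # Levels one reviewer assigned to a single (hypothetical) student report.
    scores = {"Formatting and citations": 2, "Organization of the report": 3}

    # Report each criterion's level descriptor and an overall average level.
    for criterion, level in scores.items():
        print(f"{criterion}: level {level} - {rubric[criterion][level]}")
    average = sum(scores.values()) / len(scores)
    print(f"overall average level: {average:.1f}")

Making the descriptors explicit in this way is what allows reviewers to be calibrated and examples of work at each level to be identified, as the Highly Developed column of the rubric describes.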

Rev 8/2013
PROGRAM REVIEW RUBRIC
Rubric for Assessing the Integration of Student Learning Assessment into Program Reviews
Each criterion is described at four levels of development: Initial, Emerging, Developed, and Highly Developed.

Criterion: Required Elements of the Self-Study
  Initial: Program faculty may be required to provide a list of program-level student learning outcomes.
  Emerging: Faculty are required to provide the program's student learning outcomes and summarize annual assessment findings.
  Developed: Faculty are required to provide the program's student learning outcomes, annual assessment studies, findings, and resulting changes. They may be required to submit a plan for the next cycle of assessment studies.
  Highly Developed: Faculty are required to evaluate the program's student learning outcomes, annual assessment findings, benchmarking results, subsequent changes, and evidence concerning the impact of these changes. They present a plan for the next cycle of assessment studies.

Criterion: Process of Review
  Initial: Internal and external reviewers do not address evidence concerning the quality of student learning in the program other than grades.
  Emerging: Internal and external reviewers address indirect and possibly direct evidence of student learning in the program; they do so at the descriptive level, rather than providing an evaluation.
  Developed: Internal and external reviewers analyze direct and indirect evidence of student learning in the program and offer evaluative feedback and suggestions for improvement. They have sufficient expertise to evaluate program efforts. Departments use the feedback to improve their work.
  Highly Developed: Well-qualified internal and external reviewers evaluate the program's learning outcomes, assessment plan, evidence, benchmarking results, and assessment impact. They give evaluative feedback and suggestions for improvement. The department uses the feedback to improve student learning.

Criterion: Planning and Budgeting
  Initial: The campus has not integrated program reviews into planning and budgeting processes.
  Emerging: The campus has attempted to integrate program reviews into planning and budgeting processes, but with limited success.
  Developed: The campus generally integrates program reviews into planning and budgeting processes, but not through a formal process.
  Highly Developed: The campus systematically integrates program reviews into planning and budgeting processes, e.g., through negotiating formal action plans with mutually agreed-upon commitments.

Criterion: Annual Feedback on Assessment Efforts
  Initial: No individual or committee on campus provides feedback to departments on the quality of their outcomes, assessment plans, assessment studies, impact, etc.
  Emerging: An individual or committee occasionally provides feedback on the quality of outcomes, assessment plans, assessment studies, etc.
  Developed: A well-qualified individual or committee provides annual feedback on the quality of outcomes, assessment plans, assessment studies, etc. Departments use the feedback to improve their work.
  Highly Developed: A well-qualified individual or committee provides annual feedback on the quality of outcomes, assessment plans, assessment studies, benchmarking results, and assessment impact. Departments effectively use the feedback to improve student learning. Follow-up activities enjoy institutional support.

Criterion: The Student Experience
  Initial: Students are unaware of and uninvolved in program review.
  Emerging: Program review may include focus groups or conversations with students to follow up on results of surveys.
  Developed: The internal and external reviewers examine samples of student work, e.g., sample papers, portfolios, and capstone projects. Students may be invited to discuss what they learned and how they learned it.
  Highly Developed: Students are respected partners in the program review process. They may offer poster sessions on their work, demonstrate how they apply rubrics to self-assess, and/or provide their own evaluative feedback.
Guidelines for Using the Program Review Rubric
For the fullest picture of an institution’s accomplishments, reviews of written materials should be augmented with interviews at the time of the visit.

Dimensions of the Rubric:


1. Self-Study Requirements. The campus should have explicit requirements for the program’s self-study, including an analysis of the program’s learning
outcomes and a review of the annual assessment studies conducted since the last program review. Faculty preparing the self-study can reflect on the
accumulating results and their impact, and plan for the next cycle of assessment studies. As much as possible, programs can benchmark findings
against similar programs on other campuses.
Questions: Does the campus require self-studies that include an analysis of the program’s learning outcomes, assessment studies, assessment
results, benchmarking results, and assessment impact, including the impact of changes made in response to earlier studies? Does the campus
require an updated assessment plan for the subsequent years before the next program review?
2. Self-Study Review. Internal reviewers (on-campus individuals) and external reviewers (off-campus individuals, usually disciplinary experts) evaluate
the program’s learning outcomes, assessment plan, assessment evidence, benchmarking results, and assessment impact; and they provide evaluative
feedback and suggestions for improvement.
Questions: Who reviews the self-studies? Do they have the training or expertise to provide effective feedback? Do they routinely evaluate the
program’s learning outcomes, assessment plan, assessment evidence, benchmarking results, and assessment impact? Do they provide suggestions
for improvement? Do departments effectively use this feedback to improve student learning?
3. Planning and Budgeting. Program reviews should not be pro forma exercises; they should be tied to planning and budgeting processes, with expectations
that increased support will lead to increased effectiveness, such as improving student learning and retention rates.
Questions: Does the campus systematically integrate program reviews into planning and budgeting processes? Are expectations established for the
impact of planned changes?
4. Annual Feedback on Assessment Efforts. Institutions often find considerable variation in the quality of assessment efforts across programs. While
program reviews encourage departments to reflect on multi-year assessment results, some programs are likely to require more immediate feedback,
usually based on a required annual assessment report. This feedback might be provided by an assessment director or committee, relevant dean or
others; and whoever has this responsibility should have the expertise to provide quality feedback.
Questions: Does someone or a committee have the responsibility for providing annual feedback on the assessment process? Does this person or
team have the expertise to provide effective feedback? Does this person or team routinely provide feedback on the quality of outcomes,
assessment plans, assessment studies, benchmarking results, and assessment impact? Do departments effectively use this feedback to improve
student learning?
5. The Student Experience. Students have a unique perspective on a given program of study: they know better than anyone what it means to go through
it as a student. Program review can take advantage of that perspective and build it into the review.
Questions: Are students aware of the purpose and value of program review? Are they involved in preparations and the self-study? Do they have
an opportunity to interact with internal or external reviewers, demonstrate and interpret their learning, and provide evaluative feedback?

Rev 8/2013
