What Is Assessment

The document provides an overview of assessment in education. It defines assessment as collecting information about student learning and performance in order to improve education. There are two main types of assessment: formative assessment, which provides ongoing feedback to students and teachers, and summative assessment, which evaluates student learning at the end of a course. For assessments to be effective, they must be aligned with learning objectives and instructional strategies. The document provides examples of appropriate assessment types for different learning objectives. It also distinguishes between assessment, which aims to improve education, and grading, which evaluates individual students.


INSTITUT PENDIDIKAN GURU KAMPUS TUANKU BAINUN ENGLISH LANGUAGE TEACHING METHODOLOGY FOR YOUNG LEARNERS (ELE3104)

What is assessment?
Where do we want students to be at the end of a course or a program? And how will we know if they get there? Those two questions are at the heart of assessment. Although there is a lot of buzz about assessment these days, assessment itself is nothing new. If you've ever given an exam, led a discussion, or assigned a project and used what you discovered about student learning to refine your teaching, you've engaged in assessment. Assessment is simply the process of collecting information about student learning and performance to improve education. At Carnegie Mellon, we believe that for assessment to be meaningful (not bean-counting or teaching to the test!) it must be done thoughtfully and systematically. We also believe it should be driven by faculty so that the information gathered:
- Reflects the goals and values of particular disciplines
- Helps instructors refine their teaching practices and grow as educators
- Helps departments and programs refine their curriculum to prepare students for an evolving workplace

Assessment Basics
Assessment is a broad and rapidly growing field with a strong theoretical and empirical base. However, you don't have to be an assessment expert to employ sound practices to guide your teaching. Here we present the basic concepts you need to know to become more systematic in your assessment planning and implementation:
- Why should assessments, learning objectives, and instructional strategies be aligned?
- What is the difference between formative and summative assessment?
- What is the difference between assessment and grading?
- Glossary of terms

Why should assessments, learning objectives, and instructional strategies be aligned?


Assessments should reveal how well students have learned what we want them to learn while instruction ensures that they learn it. For this to occur, assessments, learning objectives, and instructional strategies need to be closely aligned so that they reinforce one another. To ensure that these three components of your course are aligned, ask yourself the following questions:
- Learning objectives: What do I want students to know how to do when they leave this course?
- Assessments: What kinds of tasks will reveal whether students have achieved the learning objectives I have identified?
- Instructional strategies: What kinds of activities, in and out of class, will reinforce my learning objectives and prepare students for assessments?

What if the components of a course are misaligned?


If assessments are misaligned with learning objectives or instructional strategies, it can undermine both student motivation and learning. Consider these two scenarios:
- Your objective is for students to learn to apply analytical skills, but your assessment measures only factual recall. Consequently, students hone their analytical skills and are frustrated that the exam does not measure what they learned.
- Your assessment measures students' ability to compare and critique the arguments of different authors, but your instructional strategies focus entirely on summarizing the arguments of different authors. Consequently, students do not learn or practice the skills of comparison and evaluation that will be assessed.

What do well-aligned assessments look like?


This table presents examples of the kinds of activities that can be used to assess different types of learning objectives (adapted from the revised Bloom's Taxonomy).

Type of learning objective: Recall / Recognize / Identify
Examples of appropriate assessments: Objective test items such as fill-in-the-blank, matching, labeling, or multiple-choice questions that require students to:
- recall or recognize terms, facts, and concepts

Type of learning objective: Interpret / Exemplify / Classify / Summarize / Infer / Compare / Explain
Examples of appropriate assessments: Activities such as papers, exams, problem sets, class discussions, or concept maps that require students to:
- summarize readings, films, or speeches
- compare and contrast two or more theories, events, or processes
- classify or categorize cases, elements, or events using established criteria
- paraphrase documents or speeches
- find or identify examples or illustrations of a concept or principle

Type of learning objective: Apply / Execute / Implement
Examples of appropriate assessments: Activities such as problem sets, performances, labs, prototyping, or simulations that require students to:
- use procedures to solve or complete familiar or unfamiliar tasks
- determine which procedure(s) are most appropriate for a given task

Type of learning objective: Analyze / Differentiate / Organize / Attribute
Examples of appropriate assessments: Activities such as case studies, critiques, labs, papers, projects, debates, or concept maps that require students to:
- discriminate between relevant and irrelevant parts
- determine how elements function together
- determine bias, values, or underlying intent in presented material

Type of learning objective: Evaluate / Check / Critique / Assess
Examples of appropriate assessments: Activities such as journals, diaries, critiques, problem sets, product reviews, or studies that require students to:
- test, monitor, judge, or critique readings, performances, or products against established criteria or standards

Type of learning objective: Create / Generate / Plan / Produce / Design
Examples of appropriate assessments: Activities such as research projects, musical compositions, performances, essays, business plans, website designs, or set designs that require students to:
- make, build, design, or generate something new

This table does not list all possible examples of appropriate assessments. You can develop and use other assessments; just make sure that they align with your learning objectives and instructional strategies!

What is the difference between formative and summative assessment?


Formative assessment
The goal of formative assessment is to monitor student learning to provide ongoing feedback that can be used by instructors to improve their teaching and by students to improve their learning. More specifically, formative assessments:
- help students identify their strengths and weaknesses and target areas that need work
- help faculty recognize where students are struggling and address problems immediately

Formative assessments are generally low stakes, which means that they have low or no point value. Examples of formative assessments include asking students to:
- draw a concept map in class to represent their understanding of a topic
- submit one or two sentences identifying the main point of a lecture
- turn in a research proposal for early feedback

Summative assessment
The goal of summative assessment is to evaluate student learning at the end of an instructional unit by comparing it against some standard or benchmark. Summative assessments are often high stakes, which means that they have a high point value. Examples of summative assessments include:
- a midterm exam
- a final project
- a paper
- a senior recital

Information from summative assessments can be used formatively when students or faculty use it to guide their efforts and activities in subsequent courses.

What is the difference between assessment and grading?


Assessment and grading are not the same. Generally, the goal of grading is to evaluate individual students' learning and performance. Although grades are sometimes treated as a proxy for student learning, they are not always a reliable measure. Moreover, they may incorporate criteria such as attendance, participation, and effort that are not direct measures of learning. The goal of assessment is to improve student learning. Although grading can play a role in assessment, assessment also involves many ungraded measures of student learning (such as concept maps and CATs). Moreover, assessment goes beyond grading by systematically examining patterns of student learning across courses and programs and using this information to improve educational practices.

Common Assessment Terms


Assessment for Accountability
The assessment of some unit, such as a department, program, or entire institution, which is used to satisfy some group of external stakeholders. Stakeholders might include accreditation agencies, state government, or trustees. Results are often compared across similar units, such as other similar programs, and are always summative. An example of assessment for accountability would be ABET accreditation in engineering schools, whereby ABET creates a set of standards that must be met in order for an engineering school to receive ABET accreditation status.

Assessment for Improvement


Assessment activities that are designed to feed the results directly (and ideally immediately) back into revising the course, program, or institution with the goal of improving student learning. Both formative and summative assessment data can be used to guide improvements.

Concept Maps
Concept maps are graphical representations that can be used to reveal how students organize their knowledge about a concept or process. They include concepts, usually represented in enclosed circles or boxes, and relationships between concepts, indicated by a line connecting two concepts.

Direct Assessment of Learning


Direct assessment is when measures of learning are based on student performance or demonstrations of the learning itself. Scoring performance on tests, term papers, or the execution of lab skills would all be examples of direct assessment of learning. Direct assessment of learning can occur within a course (e.g., performance on a series of tests) or could occur across courses or years (e.g., comparing writing scores from sophomore to senior year).

Embedded Assessment
A means of gathering information about student learning that is integrated into the teaching-learning process. Results can be used to assess individual student performance, or they can be aggregated to provide information about the course or program. Embedded assessment can be formative or summative, quantitative or qualitative. Example: as part of a course, expecting each senior to complete a research paper that is graded for content and style, but is also assessed for advanced ability to locate and evaluate Web-based information (as part of a college-wide outcome to demonstrate information literacy).

External Assessment
Use of criteria (rubric) or an instrument developed by an individual or organization external to the one being assessed. This kind of assessment is usually summative, quantitative, and often high-stakes, such as the SAT or GRE exams.

Formative Assessment
Formative assessment refers to the gathering of information or data about student learning during a course or program that is used to guide improvements in teaching and learning. Formative assessment activities are usually low-stakes or no-stakes; they do not contribute substantially to the final evaluation or grade of the student or may not even be assessed at the individual student level. For example, posing a question in class and asking for a show of hands in support of different response options would be a formative assessment at the class level. Observing how many students responded incorrectly would be used to guide further teaching.

High-stakes Assessment


The decision to use the results of assessment to set a hurdle that needs to be cleared for completing a program of study, receiving certification, or moving to the next level. Most often, the assessment so used is externally developed, based on set standards, carried out in a secure testing situation, and administered at a single point in time. Examples: at the secondary school level, statewide exams required for graduation; in postgraduate education, the bar exam.

Indirect Assessment of Learning


Indirect assessments use perceptions, reflections, or secondary evidence to make inferences about student learning. For example, surveys of employers, students' self-assessments, and admissions to graduate schools are all indirect evidence of learning.

Individual Assessment
Uses the individual student, and his/her learning, as the level of analysis. Can be quantitative or qualitative, formative or summative, standards-based or value added, and used for improvement. Most of the student assessment conducted in higher education is focused on the individual. Student test scores, improvement in writing during a course, or a student's improvement in presentation skills over their undergraduate career are all examples of individual assessment.

Institutional Assessment
Uses the institution as the level of analysis. The assessment can be quantitative or qualitative, formative or summative, standards-based or value added, and used for improvement or for accountability. Ideally, institution-wide goals and objectives would serve as a basis for the assessment. For example, to measure the institutional goal of developing collaboration skills, an instructor and peer assessment tool could be used to measure how well seniors across the institution work in multi-cultural teams.

Local Assessment
Means and methods that are developed by an institution's faculty based on their teaching approaches, students, and learning goals. An example would be an English Department's construction and use of a writing rubric to assess incoming freshmen's writing samples, which might then be used to assign students to appropriate writing courses, or might be compared to senior writing samples to get a measure of value added.

Program Assessment
Uses the department or program as the level of analysis. Can be quantitative or qualitative, formative or summative, standards-based or value added, and used for improvement or for accountability. Ideally, program goals and objectives would serve as a basis for the assessment. Example: How well can senior engineering students apply engineering concepts and skills to solve an engineering problem? This might be assessed through a capstone project, by combining performance data from multiple senior level courses, collecting ratings from internship employers, etc. If a goal is to assess value added, some comparison of the performance to newly declared majors would be included.

Qualitative Assessment
Collects data that does not lend itself to quantitative methods but rather to interpretive criteria (see the first example under "standards").

Quantitative Assessment
Collects data that can be analyzed using quantitative methods (see "assessment for accountability" for an example).

Rubric
A rubric is a scoring tool that explicitly represents the performance expectations for an assignment or piece of work. A rubric divides the assigned work into component parts and provides clear descriptions of the characteristics of the work associated with each component, at varying levels of mastery. Rubrics can be used for a wide array of assignments: papers, projects, oral presentations, artistic performances, group projects, etc. Rubrics can be used as scoring or grading guides, to provide formative feedback to support and guide ongoing learning efforts, or both.

Standards
Standards refer to an established level of accomplishment that all students are expected to meet or exceed. Standards do not imply standardization of a program or of testing. Performance or learning standards may be met through multiple pathways and demonstrated in various ways. For example, instruction designed to meet a standard for verbal foreign language competency may include classroom conversations, one-on-one interactions with a TA, or the use of computer software. Assessing competence may be done by carrying on a conversation about daily activities or a common scenario, such as eating in a restaurant, or by using a standardized test, with a rubric or grading key to score correct grammar and comprehensible pronunciation.

Summative Assessment
The gathering of information at the conclusion of a course, program, or undergraduate career to improve learning or to meet accountability demands. When used for improvement, impacts the next cohort of students taking the course or program. Examples: examining student final exams in a course to see if certain specific areas of the curriculum were understood less well than others; analyzing senior projects for the ability to integrate across disciplines.

Value Added
The increase in learning that occurs during a course, program, or undergraduate education. Can either focus on the individual student (how much better a student can write, for example, at the end than at the beginning) or on a cohort of students (whether senior papers demonstrate more sophisticated writing skills, in the aggregate, than freshmen papers). To measure value added, a baseline measurement is needed for comparison. The baseline measure can be from the same sample of students (longitudinal design) or from a different sample (cross-sectional).

Adapted from the Assessment Glossary compiled by American Public University System, 2005: http://www.apus.edu/Learning-OutcomesAssessment/Resources/Glossary/Assessment-Glossary.htm

How to Assess Students Prior Knowledge


In order to gauge how much students have learned, it is not enough to assess their knowledge and skills at the end of the course or program. We also need to find out what they know coming in so that we can identify more specifically the knowledge and skills they have gained during the course or program. You can choose from a variety of methods to assess your students' prior knowledge and skills. Some methods (e.g., portfolios, pre-tests, auditions) are direct measures of students' capabilities entering a course or program. Other methods (e.g., students' self-reports, inventories of prior courses or experiences) are indirect measures. Here are a few methods that instructors can employ to gauge students' prior knowledge:
- Performance-based prior knowledge assessments
- Prior knowledge self-assessments
- Classroom assessment techniques (CATs)
- Concept maps
- Concept tests

Performance-Based Prior Knowledge Assessments


The most reliable way to assess students' prior knowledge is to assign a task (e.g., quiz, paper) that gauges their relevant background knowledge. These assessments are for diagnostic purposes only, and they should not be graded. They can help you gain an overview of students' preparedness, identify areas of weakness, and adjust the pace of the course. To create a performance-based prior knowledge assessment, you should begin by identifying the background knowledge and skills that students will need to succeed in your class. Your assessment can include tasks or questions that test students' capabilities in these areas.

Prior Knowledge Self-Assessments


Prior knowledge self-assessments ask students to reflect and comment on their level of knowledge and skill across a range of items. Questions can focus on knowledge, skills, or experiences that:
- you assume students have acquired and are prerequisites to your course
- you believe are valuable but not essential to the course
- you plan to address in the course

The feedback from this assessment can help you calibrate your course appropriately or direct students to supplemental materials that can help them address weaknesses in their existing skills or knowledge. The advantage of a self-assessment is that it is relatively easy to construct and score. The potential disadvantage of this method is that students may not be able to accurately assess their abilities. However, accuracy improves when the response options clearly differentiate both types and levels of knowledge.

Writing Appropriate Questions for Self-Assessments


Writing appropriate questions for prior knowledge self-assessments can seem daunting at first. Identifying specific terms, concepts, or applications of skills to ask about will help you write effective questions.

Examples of questions with possible closed responses:

How familiar are you with "Karnaugh maps"?
1. I have never heard of them, or I have heard of them but don't know what they are.
2. I have some idea what they are, but don't know when or how to use them.
3. I have a clear idea what they are, but haven't used them.
4. I can explain what they are and what they do, and I have used them.

Have you designed or built a digital logic circuit?


1. I have neither designed nor built one.
2. I have designed one, but not built one.
3. I have built one, but not designed one.
4. I have both designed and built one.


How familiar are you with a "t-test"?


1. I have never heard of it.
2. I have heard of it, but don't know what it is.
3. I have some idea of what it is, but it's not very clear.
4. I know what it is and could explain what it's used for.
5. I know what it is and when to use it, and I could use it to analyze data.

How familiar are you with Photoshop?


1. I have never used it, or I tried using it but couldn't do anything with it.
2. I can do simple edits using preset options to manipulate single images (e.g., standard color, orientation, and size manipulations).
3. I can manipulate multiple images using preset editing features to create desired effects.
4. I can easily use precision editing tools to manipulate multiple images for professional-quality output.

For each of the following Shakespearean plays, place a check mark in the cell if it describes your experience.
Play      | Have read it | Have seen a live performance | Have seen a TV or movie production | Have written a college-level paper on it
Hamlet    |              |                              |                                    |
King Lear |              |                              |                                    |
Henry IV  |              |                              |                                    |
Othello   |              |                              |                                    |

Using Classroom Assessment Techniques


Classroom Assessment Techniques (CATs) are a set of specific activities that instructors can use to quickly gauge students' comprehension. They are generally used to assess students' understanding of material in the current course, but with minor modifications they can also be used to gauge students' knowledge coming into a course or program.

CATs are meant to provide immediate feedback about the entire class's level of understanding, not individual students'. The instructor can use this feedback to inform instruction, such as speeding up or slowing the pace of a lecture or explicitly addressing areas of confusion.

Asking Appropriate Questions in CATs


Examples of appropriate questions you can ask in the CAT format:
- How familiar are students with important names, events, and places in history that they will need to know as background in order to understand the lectures and readings (e.g., in anthropology, literature, political science)?
- How are students applying knowledge and skills learned in this class to their own lives (e.g., psychology, sociology)?
- To what extent are students aware of the steps they go through in solving problems, and how well can they explain their problem-solving steps (e.g., mathematics, physics, chemistry, engineering)?
- How, and how well, are students using a learning approach that is new to them (e.g., cooperative groups) to master the concepts and principles in this course?

Using Specific Types of CATs


Minute Paper
Pose one to two questions in which students identify the most significant things they have learned from a given lecture, discussion, or assignment. Give students one to two minutes to write a response on an index card or paper. Collect their responses and look them over quickly. Their answers can help you to determine if they are successfully identifying what you view as most important.

Muddiest Point
This is similar to the Minute Paper but focuses on areas of confusion. Ask your students, "What was the muddiest point in (today's lecture, the reading, the homework)?" Give them one to two minutes to write and collect their responses.

Problem Recognition Tasks


Identify a set of problems that can be solved most effectively by only one of a few methods that you are teaching in the class. Ask students to identify by name which methods best fit which problems without actually solving the problems. This task works best when only one method can be used for each problem.


Documented Problem Solutions


Choose one to three problems and ask students to write down all of the steps they would take in solving them with an explanation of each step. Consider using this method as an assessment of problem-solving skills at the beginning of the course or as a regular part of the assigned homework.

Directed Paraphrasing
Select an important theory, concept, or argument that students have studied in some depth and identify a real audience to whom your students should be able to explain this material in their own words (e.g., a grants review board, a city council member, a vice president making a related decision). Provide guidelines about the length and purpose of the paraphrased explanation.

Applications Cards
Identify a concept or principle your students are studying and ask students to come up with one to three applications of the principle from everyday experience, current news events, or their knowledge of particular organizations or systems discussed in the course.

Student-Generated Test Questions


A week or two prior to an exam, begin to write general guidelines about the kinds of questions you plan to ask on the exam. Share those guidelines with your students and ask them to write and answer one to two questions like those they expect to see on the exam.

Classroom Opinion Polls


When you believe that your students may have pre-existing opinions about course-related issues, construct a very short, two- to four-item questionnaire to help uncover students' opinions.

Creating and Implementing CATs


You can create your own CATs to meet the specific needs of your course and students. Below are some strategies that you can use to do this.
- Identify a specific assessable question where the students' responses will influence your teaching and provide feedback to aid their learning.
- Complete the assessment task yourself (or ask a colleague to do it) to be sure that it is doable in the time you will allot for it.

- Plan how you will analyze students' responses, such as grouping them into the categories "good understanding," "some misunderstanding," or "significant misunderstanding."
- After using a CAT, communicate the results to the students so that they know you learned from the assessment and so that they can identify specific difficulties of their own.

From Angelo, Thomas A., & Cross, K. Patricia. (1993). Classroom Assessment Techniques: A Handbook for College Teachers. San Francisco: Jossey-Bass.
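The analysis step described above (grouping responses into understanding categories and seeing how the class is distributed across them) can be sketched in a few lines of code. This is a minimal illustration only: the category labels come from the text, but the function name, the hand-coding step, and the sample data are hypothetical.

```python
from collections import Counter

def summarize_cat_responses(coded_responses):
    """Tally CAT responses that an instructor has already hand-coded into
    categories such as 'good understanding', 'some misunderstanding', or
    'significant misunderstanding', and report each category's share of
    the class (rounded to two decimal places)."""
    counts = Counter(coded_responses)
    total = len(coded_responses)
    return {category: round(n / total, 2) for category, n in counts.items()}

# Hypothetical example: ten minute-paper responses coded by the instructor
coded = (["good understanding"] * 5
         + ["some misunderstanding"] * 3
         + ["significant misunderstanding"] * 2)
summary = summarize_cat_responses(coded)
```

A summary like this is exactly what the last strategy suggests sharing with students, so they can see what the class as a whole found difficult.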

Using Concept Maps


Concept maps are a graphic representation of students' knowledge. Having students create concept maps can provide you with insights into how they organize and represent knowledge. This can be a useful strategy for assessing both the knowledge students have coming into a program or course and their developing knowledge of course material. Concept maps include concepts, usually enclosed in circles or boxes, and relationships between concepts, indicated by a connecting line. Words on the line are linking words and specify the relationship between concepts.

Designing a concept map exercise


To structure a concept map exercise for students, follow these three steps:
1. Create a focus question that clearly specifies the issue that the concept map should address, such as "What are the potential effects of cap-and-trade policies?" or "What is materials science?"
2. Tell students (individually or in groups) to begin by generating a list of relevant concepts and organizing them before constructing a preliminary map.
3. Give students the opportunity to revise. Concept maps evolve as they become more detailed and may require rethinking and reconfiguring.

Encourage students to create maps that:


- Employ a hierarchical structure that distinguishes concepts and facts at different levels of specificity
- Draw multiple connections, or cross-links, that illustrate how ideas in different domains are related
- Include specific examples of events and objects that clarify the meaning of a given concept


Using concept maps throughout the semester


Concept maps can be used at different points throughout the semester to gauge students knowledge. Here are some ideas:
- Ask students to create a concept map at the beginning of the semester to assess the knowledge they have coming into a course. This can give you a quick window into the knowledge, assumptions, and misconceptions they bring with them and can help you pitch the course appropriately.
- Assign the same concept map activity several times over the course of the semester. Seeing how the concept maps grow and develop greater nuance and complexity over time helps students (and the instructor) see what they are learning.
- Create a fill-in-the-blank concept map in which some circles are blank or some lines are unlabeled. Give the map to students to complete. You can see an example of this type of concept map exercise at: http://flag.wceruw.org/tools/conmap/solar.php.

Using Concept Tests


Concept tests (or ConcepTests) are short, informal, targeted tests that are administered during class to help instructors gauge whether students understand key concepts. They can be used both to assess students' prior knowledge (coming into a course or unit) and their understanding of content in the current course. Usually these tests consist of one to five multiple-choice questions. Students are asked to select the best answer and submit it by raising their hands, holding up a color card associated with a response option, or using a remote control device to key in their response. The primary purpose of concept tests is to get a snapshot of the current understanding of the class, not of an individual student. As a result, concept tests are usually ungraded or very low-stakes. They are most valuable in large classes where it is difficult to assess student understanding in real time.

Creating a concept test


Creating a good concept test can be time-consuming, so you might want to see if question repositories or fully developed concept tests already exist in your field. If you create your own, you need to begin with a clear understanding of the knowledge and skills that you want your students to acquire. The questions should probe a student's comprehension or application of a concept rather than factual recall.
Concept test questions often describe a problem, event, or situation. Examples of appropriate types of questions include:

- asking students to predict the outcome of an event (e.g., What would happen in this experiment? How would changing one variable affect others?)
- asking students to apply rules or principles to new situations (e.g., Which concept is relevant here? How would you apply it?)
- asking students to solve a problem using a known equation or select a procedure to complete a new task (e.g., What procedure would be appropriate to solve this problem?)

The following question stems are used frequently in concept test questions:
- Which of the following best describes...
- Which is the best method for...
- If the value of X was changed to...
- Which of the following is the best explanation for...
- Which of the following is another example of...
- What is the major problem with...
- What would happen if...

When possible, incorrect answers (distractors) should be designed to reveal common errors or misconceptions.

Example 1: Mechanics (pdf). This link contains sample items from the Mechanics Baseline Test (Hestenes & Wells, 1992).
Example 2: Statics (pdf). This link contains sample items from a Statics Inventory developed by Paul Steif, Carnegie Mellon.
Example 3: Chemistry. This links to the Journal of Chemical Education's site, which contains a library of conceptual questions in different scientific areas.

Implementing concept tests


Concept tests can be used in a number of different ways. Some instructors use them at the beginning of class to gauge students' understanding of readings or homework. Some use them intermittently in class to test students' comprehension. Based on how well students perform, the instructor may decide to move on in the lecture or pause to review a difficult concept. Another method is to give students the chance to respond to a question individually, then put them in pairs or small groups to compare and discuss their answers. After a short period of time, the students vote again for the answer they think is correct. This gives students the opportunity to articulate their reasoning for a particular answer.
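The in-class voting loop described above is easy to mechanize. The sketch below tallies responses to a single question and applies an illustrative rule of thumb for deciding what to do next; the cutoff values and class data are hypothetical, not prescriptive.

```python
from collections import Counter

def tally_votes(responses, correct):
    """Tally one concept-test question and suggest a next step.

    responses: submitted options, e.g. ["A", "B", "A", ...]
    correct:   the intended best answer.
    The 30%/70% cutoffs below are illustrative, not prescriptive.
    """
    counts = Counter(responses)
    share_correct = counts[correct] / len(responses)
    if share_correct < 0.30:
        action = "pause and review the concept"
    elif share_correct < 0.70:
        action = "have students discuss in pairs, then revote"
    else:
        action = "move on"
    return counts, share_correct, action

# A hypothetical class of ten voters:
counts, share, action = tally_votes(
    ["A", "B", "A", "C", "A", "B", "A", "A", "D", "A"], correct="A")
print(share, action)  # 0.6 have students discuss in pairs, then revote
```

With color cards the tally is done by eye; with clickers, the response system typically produces the counts for you, and only the decision rule remains.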
Implementing ConcepTests: this site contains strategies for implementing ConcepTests that are drawn from chemistry but applicable to other disciplines: http://jchemed.chem.wisc.edu/JCEDLib/QBank/collection/ConcepTests/CTinfo.html

How to Assess Students' Learning and Performance


Learning takes place in students' heads, where it is invisible to others. This means that learning must be assessed through performance: what students can do with their learning. Assessing students' performance can involve assessments that are formal or informal, high- or low-stakes, anonymous or public, individual or collective. Here we provide suggestions and strategies for assessing student learning and performance, as well as ways to clarify your expectations and performance criteria to students:
- Creating assignments
- Creating exams
- Using classroom assessment techniques
- Using concept maps
- Using concept tests
- Assessing group work
- Creating and using rubrics

Creating Assignments
Here are some general suggestions and questions to consider when creating assignments. There are also many other resources in print and on the web that provide examples of interesting, discipline-specific assignment ideas.

Consider your learning objectives.


What do you want students to learn in your course? What could they do that would show you that they have learned it? To determine assignments that truly serve your course objectives, it is useful to write out your objectives in this form: I want my students to be able to ____. Use active, measurable verbs as you complete that sentence (e.g., compare theories, discuss ramifications, recommend strategies), and your learning objectives will point you towards suitable assignments.

Design assignments that are interesting and challenging.


This is the fun side of assignment design. Consider how to focus students' thinking in ways that are creative, challenging, and motivating. Think beyond the conventional assignment type! For example, one American historian requires students to write diary entries for a hypothetical Nebraska farmwoman in the 1890s. By specifying that students' diary entries must demonstrate the breadth of their historical knowledge (e.g., gender, economics, technology, diet, family structure), the instructor gets students to exercise their imaginations while also accomplishing the learning objectives of the course (Walvoord & Anderson, 1989, p. 25).

Double-check alignment.
After creating your assignments, go back to your learning objectives and make sure there is still a good match between what you want students to learn and what you are asking them to do. If you find a mismatch, you will need to adjust either the assignments or the learning objectives. For instance, if your goal is for students to be able to analyze and evaluate texts, but your assignments only ask them to summarize texts, you would need to add an analytical and evaluative dimension to some assignments or rethink your learning objectives.

Name assignments accurately.


Students can be misled by assignments that are named inappropriately. For example, if you want students to analyze a product's strengths and weaknesses but you call the assignment a "product description," students may focus all their energies on the descriptive, not the critical, elements of the task. Thus, it is important to ensure that the titles of your assignments communicate their intention accurately to students.

Consider sequencing.
Think about how to order your assignments so that they build skills in a logical sequence. Ideally, assignments that require the most synthesis of skills and knowledge should come later in the semester, preceded by smaller assignments that build these skills incrementally. For example, if an instructor's final assignment is a research project that requires students to evaluate a technological solution to an environmental problem, earlier assignments should reinforce component skills, including the ability to identify and discuss key environmental issues, apply evaluative criteria, and find appropriate research sources.

Think about scheduling.


Consider your intended assignments in relation to the academic calendar and decide how they can be reasonably spaced throughout the semester, taking into account holidays and key campus events. Consider how long it will take students to complete all parts of the assignment (e.g., planning, library research, reading, coordinating groups, writing, integrating the contributions of team members, developing a presentation), and be sure to allow sufficient time between assignments.


Check feasibility.
Is the workload you have in mind reasonable for your students? Is the grading burden manageable for you? Sometimes there are ways to reduce workload (whether for you or for students) without compromising learning objectives. For example, if a primary objective in assigning a project is for students to identify an interesting engineering problem and do some preliminary research on it, it might be reasonable to require students to submit a project proposal and annotated bibliography rather than a fully developed report. If your learning objectives are clear, you will see where corners can be cut without sacrificing educational quality.

Articulate the task description clearly.


If an assignment is vague, students may interpret it any number of ways and not necessarily how you intended. Thus, it is critical to clearly and unambiguously identify the task students are to do (e.g., design a website to help high school students locate environmental resources, create an annotated bibliography of readings on apartheid). It can be helpful to differentiate the central task (what students are supposed to produce) from other advice and information you provide in your assignment description.

Establish clear performance criteria.


Different instructors apply different criteria when grading student work, so it's important that you clearly articulate to students what your criteria are. To do so, think about the best student work you have seen on similar tasks and try to identify the specific characteristics that made it excellent, such as clarity of thought, originality, logical organization, or use of a wide range of sources. Then identify the characteristics of the worst student work you have seen, such as shaky evidence, weak organizational structure, or lack of focus. Identifying these characteristics can help you consciously articulate the criteria you already apply. It is important to communicate these criteria to students, whether in your assignment description or as a separate rubric or scoring guide. Clearly articulated performance criteria can prevent unnecessary confusion about your expectations while also setting a high standard for students to meet.

Specify the intended audience.


Students make assumptions about the audience they are addressing in papers and presentations, which influences how they pitch their message. For example, students may assume that, since the instructor is their primary audience, they do not need to define discipline-specific terms or concepts. These assumptions may not match the instructor's expectations. Thus, it is important to specify the intended audience on assignments (e.g., undergraduates with no biology background, a potential funder who does not know engineering).

Specify the purpose of the assignment.


If students are unclear about the goals or purpose of the assignment, they may make unnecessary mistakes. For example, if students believe an assignment is focused on summarizing research as opposed to evaluating it, they may seriously miscalculate the task and put their energies in the wrong place. The same is true if they think the goal of an economics problem set is to find the correct answer, rather than to demonstrate a clear chain of economic reasoning. Consequently, it is important to make your objectives for the assignment clear to students.

Specify the parameters.


If you have specific parameters in mind for the assignment (e.g., length, size, formatting, citation conventions) you should be sure to specify them in your assignment description. Otherwise, students may misapply conventions and formats they learned in other courses that are not appropriate for yours.

A Checklist for Designing Assignments


Here is a set of questions you can ask yourself when creating an assignment. Have I...
- Provided a written description of the assignment (in the syllabus or in a separate document)?
- Specified the purpose of the assignment?
- Indicated the intended audience?
- Articulated the instructions in precise and unambiguous language?
- Provided information about the appropriate format and presentation (e.g., page length, typed, cover sheet, bibliography)?
- Indicated special instructions, such as a particular citation style or headings?
- Specified the due date and the consequences for missing it?
- Articulated performance criteria clearly?
- Indicated the assignment's point value or percentage of the course grade?
- Provided students (where appropriate) with models or samples?

Adapted from the WAC Clearinghouse at http://wac.colostate.edu/intro/pop10e.cfm.

Creating Exams
How can you design fair, yet challenging, exams that accurately gauge student learning? Here are some general guidelines. There are also many resources, in print and on the web, that offer strategies for designing particular kinds of exams, such as multiple-choice.

Choose appropriate item types for your objectives.


Should you assign essay questions on your exams? Problem sets? Multiple-choice questions? It depends on your learning objectives. For example, if you want students to articulate or justify an economic argument, then multiple-choice questions are a poor choice because they do not require students to articulate anything. However, multiple-choice questions (if well-constructed) might effectively assess students' ability to recognize a logical economic argument or to distinguish it from an illogical one. If your goal is for students to match technical terms to their definitions, essay questions may not be as efficient a means of assessment as a simple matching task. There is no single best type of exam question: the important thing is that the questions reflect your learning objectives.

Highlight how the exam aligns with course objectives.


Identify which course objectives the exam addresses (e.g., "This exam assesses your ability to use sociological terminology appropriately and to apply the principles we have learned in the course to date"). This helps students see how the components of the course align, reassures them about their ability to perform well (assuming they have done the required work), and activates relevant experiences and knowledge from earlier in the course.

Write instructions that are clear, explicit, and unambiguous.


Make sure that students know exactly what you want them to do. Be more explicit about your expectations than you may think is necessary. Otherwise, students may make assumptions that run them into trouble. For example, they may assume (perhaps based on experiences in another course) that an in-class exam is open book or that they can collaborate with classmates on a take-home exam, which you may not allow. Preferably, you should articulate these expectations to students before they take the exam as well as in the exam instructions. You also might want to explain in your instructions how fully you want students to answer questions (for example, specify whether you want answers written in paragraphs or bullet points, or whether students should show all steps in problem solving).

Write instructions that preview the exam.


Students' test-taking skills may not be very effective, leading them to use their time poorly during an exam. Instructions can prepare students for what they are about to be asked by previewing the format of the exam, including question types and point values (e.g., "There will be 10 multiple-choice questions, each worth two points, and two essay questions, each worth 15 points"). This helps students use their time more effectively during the exam.

Word questions clearly and simply.


Avoid complex and convoluted sentence constructions, double negatives, and idiomatic language that may be difficult for students, especially international students, to understand. Also, in multiple-choice questions, avoid using absolutes such as "never" or "always," which can lead to confusion.

Enlist a colleague or TA to read through your exam.


Sometimes instructions or questions that seem perfectly clear to you are not as clear as you believe. Thus, it can be a good idea to ask a colleague or TA to read through (or even take) your exam to make sure everything is clear and unambiguous.

Think about how long it will take students to complete the exam.
When students are under time pressure, they may make mistakes that have nothing to do with the extent of their learning. Thus, unless your goal is to assess how students perform under time pressure, it is important to design exams that can be reasonably completed in the time allotted. One way to gauge this is to take the exam yourself; then either allow students triple the time it took you, or reduce the length or difficulty of the exam.

Consider the point value of different question types.


The point value you ascribe to different questions should be in line with their difficulty, as well as the length of time they are likely to take and the importance of the skills they assess. It is not always easy when you are an expert in the field to determine how difficult a question will be for students, so ask yourself: How many subskills are involved? Have students answered questions like this before, or will this be new to them? Are there common traps or misconceptions that students may fall into when answering this question? Needless to say, difficult and complex question types should be assigned higher point values than easier, simpler question types. Similarly, questions that assess pivotal knowledge and skills should be given higher point values than questions that assess less critical knowledge.

Think ahead to how you will score students work.


When assigning point values, it is useful to think ahead to how you will score students answers. Will you give partial credit if a student gets some elements of an answer right? If so, you might want to break the desired answer into components and decide how many points you would give a student for correctly answering each. Thinking this through in advance can make it considerably easier to assign partial credit when you do the actual grading. For example, if a short answer question involves four discrete components, assigning a point value that is divisible by four makes grading easier.

Creating objective test questions


Creating objective test questions such as multiple-choice questions can be difficult, but here are some general rules to remember that complement the strategies in the previous section.
- Write objective test questions so that there is one and only one best answer.
- Word questions clearly and simply, avoiding double negatives, idiomatic language, and absolutes such as "never" or "always".
- Test only a single idea in each item.
- Make sure wrong answers (distractors) are plausible.
- Incorporate common student errors as distractors.
- Make sure the position of the correct answer (e.g., A, B, C, D) varies randomly from item to item.
- Include from three to five options for each item.
- Make sure the length of response items is roughly the same for each question.
- Keep the length of response items short.
- Make sure there are no grammatical clues to the correct answer (e.g., the use of "a" or "an" can tip the test-taker off to an answer beginning with a vowel or consonant).
- Format the exam so that response options are indented and in column form.
- In multiple-choice questions, use positive phrasing in the stem, avoiding words like "not" and "except". If this is unavoidable, highlight the negative words (e.g., "Which of the following is NOT an example of...?").
- Avoid overlapping alternatives.
- Avoid using "All of the above" and "None of the above" in responses. (In the case of "All of the above," students only need to know that two of the options are correct to answer the question. Conversely, students only need to eliminate one response to eliminate "All of the above" as an answer. Similarly, when "None of the above" is used as the correct answer choice, it tests students' ability to detect incorrect answers, but not whether they know the correct answer.)
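One of these rules, varying the position of the correct answer randomly from item to item, is easy to get wrong when assembling an exam by hand. A minimal sketch of doing it programmatically (the question content here is invented for illustration):

```python
import random

def shuffle_options(options, correct_index, rng=random):
    """Return the options in random order plus the new index of the
    correct one, so no positional pattern (e.g., "C is usually right")
    can be exploited by test-wise students."""
    order = list(range(len(options)))
    rng.shuffle(order)
    shuffled = [options[i] for i in order]
    return shuffled, order.index(correct_index)

options = ["mitosis", "meiosis", "binary fission", "budding"]
shuffled, answer_at = shuffle_options(options, correct_index=1)
assert shuffled[answer_at] == "meiosis"  # correct answer tracked through the shuffle
```

Tracking the correct answer's new index while shuffling also makes it straightforward to generate an answer key for each randomized exam version.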

Assessing Group Work


All of the basic principles of assessment that apply to individual students' work apply to group work as well. Assessing group work has additional aspects to consider, however. First, depending on the objectives of the assignment, both process- and product-related skills must be assessed. Second, group performance must be translated into individual grades, which raises issues of fairness and equity. Complicating both these issues is the fact that neither group processes nor individual contributions are necessarily apparent in the final product. Thus, instructors need to find ways of obtaining this information. The general principles described in the next few sections can be adapted to the context of specific courses.


Assess process, not just product.


If both product and process are important to you, both should be reflected in students grades although the weight you accord each will depend on your learning objectives for the course and for the assignment. Ideally, your grading criteria should be communicated to students in a rubric. This is especially important if you are emphasizing skills that students are not used to being evaluated on, such as the ability to cooperate or meet deadlines.

Ask students to assess their own contribution to the team.


Have students evaluate their own teamwork skills and their contributions to the group's process using a self-assessment of the process skills you are emphasizing. These process skills may include, among others: respectfully listening to and considering opposing views or a minority opinion; effectively managing conflict around differences in ideas or approaches; keeping the group on track both during and between meetings; promptness in meeting deadlines; and appropriate distribution of research, analysis, and writing.

Hold individuals accountable.


To motivate individual students and discourage the free-rider phenomenon, it is important to assess individual contributions and understanding as well as group products and processes. In addition to evaluating the work of the group as a whole, ask individual students to demonstrate their learning. This can be accomplished through independent write-ups, weekly journal entries, content quizzes, or other types of individual assignments.

Ask students to evaluate the group's dynamics and the contributions of their teammates.


Gauge what various group members have contributed to the group (e.g., effort, participation, cooperativeness, accessibility, communication skills) by asking team members to complete an evaluation form for group processes. This is not a foolproof strategy (students may feel social pressure to cover for one another). However, when combined with other factors promoting individual accountability, it can provide you with important information about the dynamics within groups and the contributions of individual members. If you are gathering feedback from external clients (for example, in the context of public reviews of students' performances or creations), this feedback can also be incorporated into your assessment of group work. Feedback from external clients can address product (e.g., Does it work? Is it an effective design?) or process (e.g., the group's ability to communicate effectively, respond appropriately, or meet deadlines) and can be incorporated formally or informally into the group grade.



Grading Methods for Group Work


Instructor Assessment of Group Product

Shared Group Grade: the group submits one product and all group members receive the same grade, regardless of individual contribution.
- Advantages: encourages group work (groups sink or swim together); decreases the likelihood of plagiarism (more likely with individual products from group work); relatively straightforward method.
- Disadvantages: individual contributions are not necessarily reflected in the marks; stronger students may be unfairly disadvantaged by weaker ones and vice versa; may be perceived as unfair by students.

Group Average Grade: individual submissions (allocated tasks or individual reports) are scored individually, and each group member receives the average of these individual scores.
- Advantages: may motivate students to focus on both individual and group work and thereby develop in both areas.
- Disadvantages: stronger students may be unfairly disadvantaged by weaker ones and vice versa.

Individual Grade (allocated task): each student completes an allocated task that contributes to the final group product and receives the marks for that task.
- Advantages: a relatively objective way of ensuring individual participation; may provide additional motivation to students; potential to reward outstanding performance.
- Disadvantages: difficult to find tasks that are exactly equal in size and complexity; does not encourage the group process or collaboration; dependencies between tasks may slow the progress of some students.

Individual Grade (individual report): each student writes and submits an individual report based on the group's work on the task or project.
- Advantages: ensures individual effort; perceived as fair by students.
- Disadvantages: the precise manner in which individual reports should differ is often unclear to students; increased likelihood of unintentional plagiarism; may diminish the importance of the group work.

Individual Grade (examination): exam questions specifically target the group project and can only be answered by students who have been thoroughly involved in it.
- Advantages: may increase motivation to learn from the group project, including learning from the other members of the group.
- Disadvantages: additional work for staff in designing exam questions; may not be effective, since students may be able to answer the questions by reading the group reports.


Student Assessment of Group Product

Student distribution of a pool of marks: the instructor awards a set number of marks and lets the group decide how to distribute them. Example: in a four-member group with a product grade of 80/100, there are 4 x 80 = 320 points to distribute, and no student can be given less than zero or more than 100. If the members decide they all contributed equally, each gets 80. If they decide that person A deserved much more, A might get 95 and the remaining members, if equal, 75 each.
- Advantages: easy to implement; may motivate students to contribute more; negotiation skills become part of the learning process; potential to reward outstanding performance; may be perceived as fairer than a shared or average group mark alone.
- Disadvantages: open to subjective evaluation by friends; may lead to conflict; may foster competition and therefore be counterproductive to teamwork; students may not have the skills necessary for the required negotiation.

Students allocate individual weightings: the instructor gives a shared group grade, and each individual's grade is adjusted according to a peer assessment factor. Example: the group grade is 80/100, and each student's peer factor ranges from 0.5 to 1.5, with 1 for a full, equal contribution. Individual grade = group grade x peer factor: a below-average contributor might receive 80 x 0.75 = 60, and an above-average contributor 80 x 1.2 = 96.
- Advantages and disadvantages: as above.

Peer evaluation (random marker, using criteria, moderated): assessment items are anonymously completed by students, who identify whether their peer has met the assessment criteria and award a grade. These grades are moderated by the instructor, and the rating sheets are returned to the students.
- Advantages: helps clarify the criteria for assessment; encourages a sense of involvement and responsibility; helps students develop skills in independent judgement; increases feedback to students; random allocation addresses potential friendship and other influences on assessment; provides experience for careers where peer judgement occurs.
- Disadvantages: time may have to be invested in teaching students to evaluate each other; instructor moderation is time-consuming.
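Both student-assessment schemes reduce to simple arithmetic. The sketch below checks a proposed split of pooled marks and applies the peer assessment factor, reproducing the worked numbers above; the 0.5-1.5 clamping range follows that example and is not a fixed rule.

```python
def peer_adjusted_grade(group_grade, peer_factor):
    """Individual grade = shared group grade x peer assessment factor,
    with the factor clamped to the 0.5-1.5 range used in the example."""
    return group_grade * max(0.5, min(1.5, peer_factor))

def valid_mark_distribution(group_grade, marks):
    """A student-proposed split of the pooled marks (n x group grade) is
    valid when the shares sum to the pool and each lies between 0 and 100."""
    pool = group_grade * len(marks)
    return sum(marks) == pool and all(0 <= m <= 100 for m in marks)

print(peer_adjusted_grade(80, 0.75))                  # below average: 60.0
print(peer_adjusted_grade(80, 1.2))                   # above average: 96.0
print(valid_mark_distribution(80, [95, 75, 75, 75]))  # True: 320 points in total
```

Automating the checks is most useful with the mark-pool scheme, where student-proposed splits can easily violate the sum or per-student bounds.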

From Winchester-Seeto, T. (2002, April). Assessment of collaborative work: collaboration versus assessment. Invited paper presented at the Annual UniServe Science Symposium, The University of Sydney.


Creating and Using Rubrics


A rubric is a scoring tool that explicitly describes the instructor's performance expectations for an assignment or piece of work. A rubric identifies:
- criteria: the aspects of performance (e.g., argument, evidence, clarity) that will be assessed
- descriptors: the characteristics associated with each criterion (e.g., argument is demonstrable and original; evidence is diverse and compelling)
- performance levels: a rating scale that identifies students' level of mastery within each criterion

Rubrics can be used to provide feedback to students on diverse types of assignments, from papers, projects, and oral presentations to artistic performances and group projects.
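A rubric's three parts (criteria, descriptors, performance levels) map naturally onto a small data structure, which also makes totals easy to check when grading many submissions. The criteria, descriptors, and point scale below are illustrative placeholders, not a recommended rubric:

```python
# Each criterion maps performance levels (points) to descriptors.
# All names and levels here are invented for illustration.
rubric = {
    "argument": {4: "demonstrable and original", 2: "present but derivative", 0: "absent"},
    "evidence": {4: "diverse and compelling", 2: "relevant but thin", 0: "missing"},
    "clarity":  {4: "precise and well organized", 2: "readable with lapses", 0: "hard to follow"},
}

def score(ratings, rubric):
    """Sum a student's per-criterion ratings after checking that each
    rating is a defined performance level for that criterion."""
    for criterion, level in ratings.items():
        assert level in rubric[criterion], f"undefined level for {criterion}"
    earned = sum(ratings.values())
    possible = sum(max(levels) for levels in rubric.values())
    return earned, possible

earned, possible = score({"argument": 4, "evidence": 2, "clarity": 4}, rubric)
print(f"{earned}/{possible}")  # 10/12
```

Storing the descriptors alongside the points also lets you print the matching descriptor next to each rating, so students see the reason for the score rather than just the number.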

Benefitting from Rubrics


A carefully designed rubric can offer a number of benefits to instructors. Rubrics help instructors to:
- reduce the time spent grading by allowing them to refer to a substantive description without writing long comments
- more clearly identify strengths and weaknesses across an entire class and adjust their instruction appropriately
- ensure consistency across time and across graders
- reduce the uncertainty that can accompany grading
- discourage complaints about grades

An effective rubric can also offer several important benefits to students. Rubrics help students to:
- understand instructors' expectations and standards
- use instructor feedback to improve their performance
- monitor and assess their progress as they work towards clearly indicated goals
- recognize their strengths and weaknesses and direct their efforts accordingly

Examples of Rubrics
Here we provide a sample set of rubrics designed by faculty at Carnegie Mellon and other institutions. Although your particular field of study or type of assessment may not be represented, viewing a rubric that is designed for a similar assessment may give you ideas for the kinds of criteria, descriptors, and performance levels to use on your own rubric.


Paper
Example 1: Philosophy Paper. This rubric was designed for student papers in a range of courses in philosophy (Carnegie Mellon).
Example 2: Psychology Assignment. A short, concept-application homework assignment in cognitive psychology (Carnegie Mellon).
Example 3: Anthropology Writing Assignments. This rubric was designed for a series of short writing assignments in anthropology (Carnegie Mellon).
Example 4: History Research Paper. This rubric was designed for essays and research papers in history (Carnegie Mellon).

Projects
Example 1: Capstone Project in Design. This rubric describes the components and standards of performance, from the research phase to the final presentation, for a senior capstone project in design (Carnegie Mellon).
Example 2: Engineering Design Project. This rubric describes performance standards for three aspects of a team project: research and design, communication, and teamwork.

Oral Presentations
Example 1: Oral Exam. This rubric describes a set of components and standards for assessing performance on an oral exam in an upper-division course in history (Carnegie Mellon).
Example 2: Oral Communication. This rubric is adapted from Huba and Freed (2000).
Example 3: Group Presentations. This rubric describes a set of components and standards for assessing group presentations in history (Carnegie Mellon).

Class Participation/Contributions
Example 1: Discussion Class. This rubric assesses the quality of student contributions to class discussions. It is appropriate for an undergraduate-level course (Carnegie Mellon).
Example 2: Advanced Seminar. This rubric is designed for assessing discussion performance in an advanced undergraduate or graduate seminar.

See also the "Examples and Tools" section of this site for more rubrics.

How to Assess Your Teaching


What are your strengths and weaknesses as an instructor? What can you do to improve your teaching effectiveness? There are a number of methods you can use to assess your teaching in addition to the end-of-semester evaluations conducted by the university. They include:
- Early course evaluations
- One-on-one teaching consultations
- Classroom observations
- Student focus groups
- Course representatives

Early Course Evaluations


Why wait until the end of the semester to ask students to evaluate your course? Conducting an early course evaluation allows you to use student feedback to improve a course in progress.

Scheduling an Early Course Evaluation


Ideally, you should conduct your evaluation early (for example, in the first three to six weeks of a semester-long course, or the first two to three weeks of a mini course). By this time, students will have a reasonable sense of how you teach and evaluate their learning and will be able to make substantive comments. Conducting the evaluation early also gives you time to make adjustments to your teaching and the course and see their impact. For the best-quality feedback, allow 10 to 15 minutes at the beginning of class for students to complete the form. If you distribute the forms at the end of class, many students will be too rushed and the quality of the feedback will be diminished.

Preparing Students for the Evaluation


Let students know that you would like their feedback so that you can create a better learning experience for them. Stress that you want candid and constructive responses that will help you meet this goal. Encourage students to write to you rather than about you. Finally, tell students that you will talk with them about the feedback you receive. This shows them that you are genuinely interested in their feedback and will respond to their comments.

Choosing an Evaluation Form


The Eberly Center recommends using an evaluation form that asks open-ended questions. This allows students to address the issues that they perceive to be the most important. You can always add one or two questions about specific issues or concerns you have.


Sample Forms
Here we provide examples of early course evaluations for instructors and TAs. In addition, the Eberly Center can provide you with assistance in developing your own form.

Form for Instructors: PDF format | MS Word format
Form for TAs: PDF format | MS Word format

Organizing Student Feedback


A pile of open-ended responses can seem daunting, but the data can often be easily organized so that you can identify major themes. The following process has been helpful to many faculty:

Starting with the first student's comments, write an abbreviated version of each main point or idea they touch on (e.g., "too fast," "homework doesn't relate to lecture").
When you see an identical or similar comment from another student, make a tally mark next to the original abbreviated version to indicate that the comment was repeated.
Sort the list into themes (e.g., pace, difficulty, presentation skills, tone).
Note the frequency of the different kinds of comments. There will probably be areas of consensus (high frequency) and divergence (low frequency). Areas of consensus will usually be your highest priority when determining what, if anything, to change.
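For instructors comfortable with a little scripting, the tally-and-theme process above can also be automated. The sketch below is illustrative only: the comments, the abbreviations, and the comment-to-theme mapping are hypothetical examples, and sorting comments into themes remains a judgment call you would make yourself.

```python
from collections import Counter

# Hypothetical responses, already abbreviated to their main point
# as the process above suggests.
comments = [
    "too fast", "too fast", "homework unrelated to lecture",
    "too fast", "great examples", "speaks too softly",
    "homework unrelated to lecture", "great examples",
]

# Tally repeated comments (the "tally mark" step).
tallies = Counter(comments)

# Sort abbreviated comments into themes (illustrative mapping only).
themes = {
    "too fast": "pace",
    "speaks too softly": "presentation skills",
    "homework unrelated to lecture": "difficulty",
    "great examples": "presentation skills",
}

# Note the frequency of each kind of comment per theme.
theme_counts = Counter()
for comment, count in tallies.items():
    theme_counts[themes[comment]] += count

# Areas of consensus (high frequency) surface first.
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

The highest-frequency themes that emerge are the areas of consensus the section above recommends prioritizing.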

Interpreting Student Feedback


One useful way to make sense of student feedback is to group students' comments into the following categories:

strengths: aspects that students felt you did well, or positive aspects of the course
ideas for change: aspects that students felt were weak or could be improved
issues beyond your control: aspects that you cannot change

When considering changes, focus on the ideas for change, but do not lose sight of the strengths students identify! Eberly Center consultants are happy to help you interpret early course evaluations and develop appropriate responses.

Discussing Student Feedback with Your Class


A critical part of the early course evaluation process is discussing the feedback with your students and thanking them for their input. This sets a positive tone for the class and shows a fundamental respect for students' role in making the class work. Here is a suggested process for discussing this feedback with your class:

Select three to five issues that you want to report to the class. Balance the issues so that you present both positive feedback and areas for improvement.
If you plan to make changes based on the feedback, explain the changes and the rationale behind them. If possible, enlist students' help in your efforts (e.g., if they reported that you talk too fast or too softly, ask them to indicate with a hand signal or some other sign when they cannot follow or hear).
If you decide not to make changes in an area students identified as problematic, explain why the changes are not possible or why it is important to do things the way you are currently doing them.
Maintain a positive tone throughout the discussion. It is important not to seem defensive, angry, or over-apologetic, because these reactions can undermine the perceived value of future evaluations.

One-on-One Teaching Consultations


The Eberly Center offers individual consultations to any instructor on campus who would like the opportunity to discuss teaching issues, solve teaching problems, or try innovations in the classroom. In these consultations we work collaboratively with you to help you identify your strengths as a teacher, collect information (e.g., classroom observations, student focus groups, examination of teaching materials) to reveal the source of problems, and develop productive solutions.

Who We Work With


We work with faculty, post-docs, and graduate students at all levels of teaching experience, including instructors who are:

new to Carnegie Mellon and want to calibrate to our students and the institution
experienced and successful teachers who want to try new techniques, approaches, or technologies
encountering difficulties in their courses and want help identifying and addressing problems
new to teaching and want help getting started (including graduate students who anticipate pursuing an academic career)

How Consultations Work


We hold ourselves to high standards when working one on one with instructors. All of our consultations are:

Strictly confidential: We do not disclose any information from our consultations. This includes the identities of those with whom we work, the information they share with us, and the data we gather on their behalf from classroom observations and interactions with TAs and students.

Documented for faculty and graduate student purposes alone: We provide written feedback to the instructors with whom we consult that summarizes and documents the consultation process. We do not write letters of support for reappointment, promotion, or tenure, but faculty can choose to use our documentation as they see fit.

Classroom Observations
Having a colleague observe your classroom can be a useful way to get immediate feedback about your strengths and weaknesses as an instructor, as well as concrete, contextualized suggestions for improvement. The Eberly Center provides this service to any faculty member or graduate student at Carnegie Mellon, regardless of experience level. The feedback we provide is strictly for you: we ensure strict confidentiality.

To ensure that observations are as productive as possible, we recommend that you meet with an Eberly consultant before the first classroom observation to:

discuss the goals of the course
discuss the goals of the particular class being observed
talk about any particular concerns or requirements you have regarding the observation
share relevant course materials
discuss specific aspects of your teaching you would like the observer to provide feedback on

You should also meet with the consultant again after the observation to go over feedback, ask questions, discuss applicable strategies, and (if the class was videotaped) review and discuss the videotape together. You can request follow-up observations as well.

Student Focus Groups


When conducted thoughtfully by a skilled interviewer, focus groups can be a useful and reliable method for collecting information about students' experiences in a course or program. Because they allow for clarification and follow-up, they are more effective than surveys for identifying areas of agreement and disagreement across groups of students and for eliciting students' suggestions for improvement.


Course Representatives
One way to monitor your effectiveness as a teacher is to ask a student to serve as a student representative, or ombudsperson. The representative's role is to communicate student concerns and feedback anonymously to you.

The student you ask to play this role should be someone you consider trustworthy and who is respected by classmates (one option is to ask the class to choose its own representative). Explain that the representative's responsibility will be strictly to synthesize and share students' concerns anonymously, but that he or she is free to decline the responsibility. The role should be strictly voluntary.

When you have chosen your course representative, let other students know that you welcome their feedback, which they can convey to you directly or channel to their representative, who will share it anonymously with you. You can then meet or correspond periodically with the course representative to collect student feedback. In larger classes, you might want to designate a team of student representatives who can synthesize the experiences of students in different recitation sections in their feedback to you.

Course-level Examples Listed by Type


Assignments and Exams
Performance Rubrics for 95-820 Production Management Assignment, Heinz
Performance Rubrics for 95-821 Production Management Assignment, Heinz
Assessing the Effectiveness of Using Multi-Media for Case-based Learning, H&SS
MORE...

Comprehension Checks
Using a Clicker System and Concept Questions to Assess Student Understanding During Class, H&SS
Using Quizzes and Clickers for Assessing Students' Understanding of Concepts in Real Time

Group Process Assessments
Weighted Peer Evaluation for Group Project, Heinz

Rubrics
Performance Criteria Rubric for Assessing Project Work, CFA
Forms for Evaluating Student Projects, CFA
Rubrics for Assessing Students' Writing, CFA
Rubrics for Assessing Student Participation, CFA
MORE...

Pre-/Post-Tests
Pre & Post Tests for Assessing the Effectiveness of an Argument Mapping Tool for Teaching, H&SS
Pre-/Post-Test for Technology for Global Development Course, H&SS

Prior Knowledge Assessments
Survey for Assessing Students' Motivation, Confidence, and Goals for Writing, H&SS
Quizzes and Item Analysis to Inform Teaching and Learning, MCS
Surveys of Student Learning Goals, Tepper

Reflective Assessments
Process Books for Assessing How Students Think About Design, CFA
Rubric for Developing Student Self-Assessment Skills, CFA
Journals to Monitor Student Thinking in Statistics, H&SS
Reading Reflection Exercise to Prepare for Class Discussion, H&SS

