Bloom's Taxonomy is a framework for classifying questions and assessments based on the level of cognitive complexity required. It divides cognitive skills into six levels: knowledge, comprehension, application, analysis, synthesis, and evaluation. Classroom assessments should include questions targeting both lower and higher order thinking skills to prepare students for standardized tests. Item analysis involves calculating difficulty and discrimination indices to evaluate the quality of individual test questions and identify areas for improvement.

Bloom's Taxonomy Assessment

A. Bloom's Taxonomy
Questions (items) on quizzes and exams can demand different levels of thinking skills. For example, some questions might require simple memorization of facts, while others might require the ability to synthesize information from several sources to select or construct a response. Benjamin Bloom created a hierarchy of cognitive skills (called Bloom's taxonomy) that is often used to categorize levels of cognitive involvement (thinking skills) in educational settings. The taxonomy provides a good structure to assist teachers in writing objectives and assessments. It can be divided into two levels: Level I (the lower level) contains knowledge and comprehension; Level II (the higher level) includes application, analysis, synthesis, and evaluation (see Figure 1 below).

Figure 1. Bloom's Taxonomy.

Table 1: Bloom's Taxonomy and the associated skills

Knowledge
  Skills demonstrated: observation and recall of information; knowledge of dates, events, places, and major ideas; mastery of subject matter.
  Question cues: list, define, tell, describe, identify, show, label, collect, examine, tabulate, quote, name, who, when, where, etc.

Comprehension
  Skills demonstrated: understanding information; grasping meaning; translating knowledge into a new context; interpreting facts, comparing, contrasting; ordering, grouping, inferring causes; predicting consequences.
  Question cues: summarize, describe, interpret, contrast, predict, associate, distinguish, estimate, differentiate, discuss, extend.

Application
  Skills demonstrated: using information, methods, concepts, and theories in new situations; solving problems using required skills or knowledge.
  Question cues: apply, demonstrate, calculate, complete, illustrate, show, solve, examine, modify, relate, change, classify, experiment, discover.

Analysis
  Skills demonstrated: seeing patterns; organization of parts; recognition of hidden meanings; identification of components.
  Question cues: analyze, separate, order, explain, connect, classify, arrange, divide, compare, select, infer.

Synthesis
  Skills demonstrated: using old ideas to create new ones; generalizing from given facts; relating knowledge from several areas; predicting and drawing conclusions.
  Question cues: combine, integrate, modify, rearrange, substitute, plan, create, design, invent, what if?, compose, formulate, prepare, generalize, rewrite.

Evaluation
  Skills demonstrated: comparing and discriminating between ideas; assessing the value of theories and presentations; making choices based on reasoned argument; verifying the value of evidence; recognizing subjectivity.
  Question cues: assess, decide, rank, grade, test, measure, recommend, convince, select, judge, explain, discriminate, support, conclude, compare, summarize.

Table 2: The Cognitive Process Dimension of Bloom's Taxonomy (revised)

The Knowledge Dimension    Remember          Understand   Apply        Analyze         Evaluate   Create
Factual Knowledge          list              summarize    classify     order           rank       combine
Conceptual Knowledge       describe          interpret    experiment   explain         assess     plan
Procedural Knowledge       tabulate          predict      calculate    differentiate   conclude   compose
Metacognitive Knowledge    appropriate use   execute      construct    achieve         action     actualize

(In the revised taxonomy, Remember, Understand, Apply, Analyze, Evaluate, and Create correspond to the original levels of knowledge, comprehension, application, analysis, evaluation, and synthesis.)

Bloom's taxonomy is also used to guide the development of standardized assessments. For example, in Florida, about 65% of the questions on the statewide reading test (FCAT) are designed to measure Level II thinking skills (application, analysis, synthesis, and evaluation). To prepare students for these standardized tests, classroom assessments must also demand both Level I and Level II thinking skills. Integrating higher-level skills into instruction and assessment increases the likelihood that students will succeed on tests and become better problem solvers. Objective tests (such as multiple choice) are sometimes criticized because the questions emphasize only lower-level thinking skills (such as knowledge and comprehension). However, it is possible to address higher-level thinking skills with objective assessments by including items that focus on genuine understanding -- "how" and "why" questions. Multiple choice items that involve scenarios, case studies, and analogies are also effective for requiring students to apply, analyze, synthesize, and evaluate information.

B. Writing Selected Response Assessment Items


Selected response (objective) assessment items are very efficient: once the items are created, you can assess and score a great deal of content rather quickly. Note that the term "objective" refers to the fact that each question has a right and a wrong answer and can be impartially scored. In fact, the scoring can be automated if you have access to an optical scanner (for paper tests) or a computer (for computerized tests). However, the construction of these objective items may well include subjective input from the teacher/creator.

Before you write the assessment items, you should create a blueprint that outlines the content areas and the cognitive skills you are targeting. One way to do this is to list your instructional objectives along with the corresponding cognitive level. For example, Table 3 below has four different objectives and the corresponding levels of assessment (relative to Bloom's taxonomy). For each objective, five assessment items will be written, some at Level I and some at Level II. This approach helps to ensure that all objectives are covered and that several higher-level thinking skills are included in the assessment.

Table 3: An example of objectives and the corresponding levels of assessment (Bloom's taxonomy)

Objective   Number of Items at Level I   Number of Items at Level II
1           2                            3
2           3                            2
3           1                            4
4           4                            1
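The same blueprint can also be kept as a small data structure so that the per-objective totals are easy to check before the items are written. The sketch below is a minimal Python illustration; the dictionary layout and names are hypothetical, and the item counts are copied from Table 3.

```python
# A test blueprint as a simple data structure -- illustrative names only.
# Each objective plans a number of items at Bloom's Level I (lower order)
# and Level II (higher order); counts are taken from Table 3.

blueprint = {
    1: {"level_I": 2, "level_II": 3},
    2: {"level_I": 3, "level_II": 2},
    3: {"level_I": 1, "level_II": 4},
    4: {"level_I": 4, "level_II": 1},
}

ITEMS_PER_OBJECTIVE = 5  # five assessment items planned per objective

for objective, counts in blueprint.items():
    total = counts["level_I"] + counts["level_II"]
    assert total == ITEMS_PER_OBJECTIVE, f"Objective {objective}: planned {total} items"
    print(f"Objective {objective}: {counts['level_I']} Level I item(s), "
          f"{counts['level_II']} Level II item(s)")
```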

After you have determined how many items you need for each level, you can begin writing the assessments. There are several forms of selected response assessments, including multiple choice, matching, and true/false. Regardless of the form you select, be sure the items are clearly worded at the appropriate reading level and do not include unintentional clues. The validity of your test will suffer tremendously if the students can't comprehend or read the questions! This section includes a few guidelines for constructing objective assessment items.

Multiple Choice
Multiple choice questions consist of a stem (a question or statement) with several answer choices, one of which is correct; the incorrect choices are called distractors.
- All answer choices should be plausible and homogeneous.
- Answer choices should be similar in length and grammatical form.
- List answer choices in logical (alphabetical or numerical) order.
- Avoid using "All of the Above" options.

True/False
True/false questions can appear to be easier to write; however, it is difficult to write effective true/false questions. Also, the reliability of T/F questions is generally not very high because of the high possibility of guessing. In most cases, T/F questions are not recommended.
- Statements should be completely true or completely false.
- Use simple, easy-to-follow statements.
- Avoid negatives -- especially double negatives.
- Avoid absolutes such as "always" and "never."

Matching
Matching items consist of two lists of words, phrases, or images (often referred to as stems and responses). Students review the list of stems and match each with a word, phrase, or image from the list of responses.
- Answer choices should be short, homogeneous, and arranged in logical order.
- Responses should be plausible and similar in length and grammatical form.
- Include more response options than stems.
- As a general rule, the stems should be longer and the responses should be shorter.

C. Item Analysis
After you create your objective assessment items and give your test, how can you be sure that the items are appropriate -- not too difficult and not too easy? How will you know if the test effectively differentiates between students who do well on the overall test and those who do not? An item analysis is a valuable, yet relatively easy, procedure that teachers can use to answer both of these questions.

To determine the difficulty level of test items, a measure called the Difficulty Index is used. This measure asks teachers to calculate the proportion of students who answered the test item correctly. By looking at each alternative (for multiple choice), we can also find out whether there are answer choices that should be replaced. For example, let's say you gave a multiple choice quiz to 30 students and there were four answer choices (A, B, C, and D). Table 4 below shows how many students selected each answer choice for Questions #1 and #2.

Table 4: An example of item analysis

Question   A     B    C     D
#1         0     3    24*   3
#2         12*   13   3     2

* Denotes the correct answer.
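Because the Difficulty Index is just a proportion, it can be computed directly from the answer-choice counts. The sketch below is a minimal Python illustration (the counts and the class size of 30 come from Table 4; the variable names are mine); it computes p for each question and flags distractors that nobody selected. The next paragraph walks through the same numbers by hand.

```python
# Difficulty Index (p) from answer-choice counts -- a minimal sketch.
# Counts are taken from Table 4; the class size (30) is the sum of each row.

choice_counts = {
    "#1": {"A": 0, "B": 3, "C": 24, "D": 3},
    "#2": {"A": 12, "B": 13, "C": 3, "D": 2},
}
correct_answer = {"#1": "C", "#2": "A"}

for question, counts in choice_counts.items():
    total = sum(counts.values())                       # 30 students in this example
    p = counts[correct_answer[question]] / total       # Difficulty Index
    unused = [c for c, n in counts.items() if n == 0]  # distractors nobody chose
    print(f"Question {question}: p = {p:.2f}, unused distractors: {unused or 'none'}")
```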

For Question #1, we can see that A was not a very good distractor -- no one selected that answer. We can also compute the difficulty of the item by dividing the number of students who chose the correct answer (24) by the total number of students (30). Using this formula, the difficulty of Question #1 (referred to as p) is equal to 24/30, or .80. A rough rule of thumb is that if the item difficulty is more than .75, it is an easy item; if the difficulty is below .25, it is a difficult item. Given these parameters, this item would be regarded as easy -- lots (80%) of students got it correct. In contrast, Question #2 is much more difficult (12/30 = .40). In fact, on Question #2, more students selected an incorrect answer (B) than selected the correct answer (A). This item should be examined carefully to ensure that B is an appropriate distractor.

Another measure, the Discrimination Index, refers to how well an assessment differentiates between high and low scorers. In other words, you should expect that the high-performing students would select the correct answer for each question more often than the low-performing students. If this is true, the assessment is said to have a positive discrimination index (between 0 and 1), indicating that students who received a high total score chose the correct answer for a specific item more often than students who had a lower overall score. If, however, you find that more of the low-performing students got a specific item correct, then the item has a negative discrimination index (between -1 and 0). Let's look at an example. Table 5 below displays the results for the first three questions of a ten-question quiz. Note that the students are arranged with the top overall scorers at the top of the table.

Table 5: An example of item analysis

Student   Total Score (%)   Q1   Q2   Q3
Asif      90                1    0    1
Sam       90                1    0    1
Jill      80                0    0    1
Charlie   80                1    0    1
Sonya     70                1    0    1
Ruben     60                1    0    0
Clay      60                1    0    1
Kelley    50                1    1    0
Justin    50                1    1    0
Tonya     40                0    1    0

"1" indicates the answer was correct; "0" indicates it was incorrect.

Follow these steps to determine the Difficulty Index and the Discrimination Index.

1. After the students are arranged with the highest overall scores at the top, count the number of students in the upper and lower groups who got each item correct. For Question #1, there were 4 students in the top half who got it correct and 4 students in the bottom half.
2. Determine the Difficulty Index by dividing the number who got the item correct by the total number of students. For Question #1, this would be 8/10, or p = .80.
3. Determine the Discrimination Index by subtracting the number of students in the lower group who got the item correct from the number of students in the upper group who got the item correct. Then divide by the number of students in each group (in this case, there are five in each group). For Question #1, that means you would subtract 4 from 4 and divide by 5, which results in a Discrimination Index of 0.
4. The answers for Questions 1-3 are provided in Table 6.

Table 6: An example of item analysis

Item         # Correct (Upper group)   # Correct (Lower group)   Difficulty (p)   Discrimination (D)
Question 1   4                         4                         .80              0
Question 2   0                         3                         .30              -0.6
Question 3   5                         1                         .60              0.8
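These steps are mechanical enough to automate. The following sketch (Python; the 0/1 results are copied from Table 5, with the student rows already ordered from highest to lowest total score) recomputes both indices and reproduces the values in Table 6.

```python
# Item analysis sketch: Difficulty Index (p) and Discrimination Index (D).
# Rows are students ordered from highest to lowest overall score (Table 5);
# columns are Questions 1-3; 1 = correct, 0 = incorrect.

scores = [
    [1, 0, 1],  # Asif
    [1, 0, 1],  # Sam
    [0, 0, 1],  # Jill
    [1, 0, 1],  # Charlie
    [1, 0, 1],  # Sonya
    [1, 0, 0],  # Ruben
    [1, 0, 1],  # Clay
    [1, 1, 0],  # Kelley
    [1, 1, 0],  # Justin
    [0, 1, 0],  # Tonya
]

half = len(scores) // 2
upper, lower = scores[:half], scores[half:]

for q in range(len(scores[0])):
    upper_correct = sum(row[q] for row in upper)
    lower_correct = sum(row[q] for row in lower)
    p = (upper_correct + lower_correct) / len(scores)  # Difficulty Index
    d = (upper_correct - lower_correct) / half         # Discrimination Index
    print(f"Question {q + 1}: p = {p:.2f}, D = {d:+.1f}")

# Expected output:
#   Question 1: p = 0.80, D = +0.0
#   Question 2: p = 0.30, D = -0.6
#   Question 3: p = 0.60, D = +0.8
```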

Now that we have the table filled in, what does it mean? We can see that Question #2 had a difficulty index of .30 (meaning it was quite difficult), and it also had a negative discrimination index of -0.6 (meaning that the low-performing students were more likely to get this item correct). This question should be carefully analyzed, and probably deleted or changed. Our "best" overall question is Question #3, which had a moderate difficulty level (.60) and discriminated extremely well (0.8).

Another consideration for an item analysis is the cognitive level that is being assessed. For example, you might categorize the questions based on Bloom's taxonomy (perhaps grouping questions that address Level I and those that address Level II). In this manner, you would be able to determine whether the difficulty and discrimination indices of those groups of questions are appropriate. For example, you might note that the majority of the questions that demand higher-level thinking skills are too difficult or do not discriminate well. You could then concentrate on improving those questions and focus your instructional strategies on higher-level skills.

Item Analysis Worksheet


Ten students have taken an objective assessment. The quiz contained 10 questions. In the table below, the students' scores have been listed from high to low (Joe, Dave, Sujie, Darrell, and Eliza are in the upper half). There are five students in the upper half and five students in the lower half. A 1 indicates a correct answer on a question; a 0 indicates an incorrect answer.

Student   Total Score (%)   Q1   Q2   Q3   Q4   Q5   Q6   Q7   Q8   Q9   Q10
Joe       100               1    1    1    1    1    1    1    1    1    1
Dave      90                1    1    1    1    1    1    1    1    0    1
Sujie     80                1    1    0    1    1    1    1    1    0    0
Darrell   70                0    1    1    1    1    1    0    1    0    1
Eliza     70                1    1    1    0    1    1    1    0    0    1
Zoe       60                1    1    1    0    1    1    0    1    0    0
Grace     60                0    1    1    0    1    1    0    1    0    1
Hannah    50                0    1    1    1    0    0    1    0    1    0
Ricky     40                1    1    1    0    1    0    0    0    0    1
Anita     30                0    1    0    0    0    1    0    0    1    0

Calculate the Difficulty Index (p) and the Discrimination Index (D) for each question.

Item          # Correct (Upper group)   # Correct (Lower group)   Difficulty (p)   Discrimination (D)
Question 1    4                         2                         0.6              0.4
Question 2    5                         5                         1.0              0
Question 3    4                         4                         0.8              0
Question 4    4                         1                         0.5              0.6
Question 5    5                         2                         0.8              0.6
Question 6    5                         3                         0.8              0.4
Question 7    4                         1                         0.5              0.6
Question 8    4                         2                         0.6              0.4
Question 9    1                         3                         0.3              -0.4
Question 10   4                         2                         0.6              0.4

1. Which question was the easiest? Question #2
2. Which question was the most difficult? Question #9
3. Which item has the poorest discrimination? Question #9

4. Which questions would you eliminate first (if any)? Question #9
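As a quick check, the four worksheet questions can be read directly off the computed indices: the easiest item has the highest p, the most difficult item has the lowest p, and the poorest-discriminating item has the lowest (most negative) D. A minimal Python sketch, with the (p, D) pairs copied from the answer table above:

```python
# Reading the worksheet answers from the computed indices -- a small sketch.
# The (p, D) pairs are copied from the answer table above.

item_stats = {
    1: (0.6, 0.4), 2: (1.0, 0.0), 3: (0.8, 0.0), 4: (0.5, 0.6), 5: (0.8, 0.6),
    6: (0.8, 0.4), 7: (0.5, 0.6), 8: (0.6, 0.4), 9: (0.3, -0.4), 10: (0.6, 0.4),
}

easiest = max(item_stats, key=lambda q: item_stats[q][0])  # highest p
hardest = min(item_stats, key=lambda q: item_stats[q][0])  # lowest p
poorest = min(item_stats, key=lambda q: item_stats[q][1])  # lowest D

print(f"Easiest item: Question #{easiest}")            # Question #2
print(f"Most difficult item: Question #{hardest}")     # Question #9
print(f"Poorest discrimination: Question #{poorest}")  # Question #9
```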
