Types of Validity
Validity
- Validity is the degree to which an assessment instrument measures what it intends to measure.
- It also refers to the usefulness of the instrument for a given purpose.
- It is the most important criterion of a good assessment instrument.
Construct Validity
- Construct validity is the extent to which a test measures a theoretical trait or construct. This involves tests such as those of understanding and interpretation of data.
Example: A teacher might investigate whether an educational program increases artistic ability among pre-school
children. Construct validity is a measure of whether the research actually measures artistic ability, a
somewhat abstract trait.
Face Validity
- Test questions are said to have face validity when they appear, on the surface, to be appropriate for the group being examined.
- This is judged by examining the test to find out whether it appears to be a good one; there is no common numerical method for establishing face validity.
Example: Calculating the area of a rectangle when its given length and width are 4 feet and 6 feet,
respectively.
Factors that Lower Validity
1. Unclear directions: directions that do not clearly indicate to the students how to respond to the task and how to record
responses tend to reduce validity.
2. Reading vocabulary and sentence structure too difficult: vocabulary and sentence structure that are too complicated
for the student turn the test into an assessment of reading comprehension, thus altering the meaning of the assessment results.
3. Ambiguity: ambiguous statements in assessment tasks contribute to misinterpretation and confusion. Ambiguity
sometimes confuses the better students more than it does the poorer students.
4. Inadequate time limits: time limits that do not provide students with enough time to consider the tasks and provide
thoughtful responses can reduce the validity of interpretations of results.
5. Overemphasis of easy-to-assess aspects of the domain at the expense of important but hard-to-assess aspects: it is easy to
develop test questions that assess factual recall and generally harder to develop ones that assess conceptual understanding or
higher-order thinking processes, such as the evaluation of competing positions or arguments. Hence, it is important to
guard against under-representation of tasks targeting the important but more difficult-to-assess aspects of achievement.
6. Test items inappropriate for the outcomes being measured: attempting to measure understanding, thinking skills, and
other complex types of achievement with test forms that are appropriate only for measuring factual knowledge will
invalidate the results.
7. Poorly constructed test items: test items that unintentionally provide clues to the answer tend to measure the student’s
alertness in detecting clues as well as mastery of skills or knowledge the test is intended to measure.
8. Test too short: if a test is too short to provide a representative sample of the performance we are interested in, its validity
will suffer accordingly.
9. Improper arrangement of items: test items are typically arranged in order of difficulty, with the easiest items first. Placing
difficult items first in the test may cause students to spend too much time on these and prevent them from reaching items they
could easily answer. Improper arrangement may also influence validity by having a detrimental effect on student
motivation.
10. Identifiable pattern of answers: placing correct answers in some systematic pattern enables students to guess the answers
to some items more easily, and this lowers validity.
PRACTICABILITY IN ASSESSMENT METHOD
Practicability
- Practicability means the test can be satisfactorily used by teachers and researchers without undue expenditure of time,
money, and effort. In other words, practicability means usability.
Ease of Administration
Facilitating ease of administration through complete and precise instructions.
Example: For a high school biology test, use clear instructions for a group experiment that can be conducted
simultaneously, saving time and effort.
Low Cost
Practicality is enhanced if the test is low-cost material-wise and can be reused by future teachers.
Example: Creating an online quiz platform with reusable questions can significantly reduce the cost of printing
materials and benefit multiple teachers.
Ease of administration, scoring, and interpretation, low cost, and proper mechanical make-up all contribute to the practicability of
assessment methods.
Assessment serves learning, motivation, and grading for the students you teach. Effective evaluation techniques yield
important insights into students' learning: they let us know what the students learned, how well they learned it, and what areas they
found difficult.
FAIRNESS IN ASSESSMENT
Introduction
Fairness in assessment is fundamental to ensuring that evaluation processes are equitable, just, and supportive of all
students, regardless of their background, abilities, or circumstances. It is a cornerstone of good practice, guaranteeing that
evaluation is not only impartial but also extends equity, justice, and support to every student.
Cultural Sensitivity:
• Recognition and respect for cultural diversity, avoiding biases that might disadvantage certain cultural or linguistic groups.
• Inclusion of content that reflects a variety of cultural perspectives to create a more inclusive assessment environment.
Resource Disparities:
• Addressing resource discrepancies among educational institutions and ensuring all students have access to the necessary
tools and materials for assessments.
What is reliability?
• It refers to the precision or accuracy of the measurement or score.
• Reliability refers to the stability of a test measure.
• Reliability is the degree to which a Practice, Procedure, or Test (PPT) produces stable and consistent results if
repeated/re-examined on same individuals/students on different occasions, or with different sets of equivalent items when all other
factors are held constant.
• It is one of the most important elements of test quality.
v. Interrater Reliability
All of the methods for estimating reliability discussed thus far are intended to be used for objective tests. When a test
includes performance tasks, or other items that need to be scored by human raters, then the reliability of those raters must be
estimated.
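One simple estimate of interrater reliability is percent agreement: the proportion of tasks on which two raters assign the same score. A minimal Python sketch, using hypothetical rubric scores from two raters:

```python
# Hypothetical scores (1-4 rubric) that two raters assigned
# to the same six student essays.
rater_a = [3, 4, 2, 3, 1, 4]
rater_b = [3, 4, 2, 2, 1, 4]

# Percent agreement: the fraction of essays on which the two
# raters agree exactly.
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a)
print(f"Percent agreement: {percent_agreement:.0%}")
```

Percent agreement is easy to compute but does not correct for agreement that would occur by chance; more refined indices such as Cohen's kappa are often reported for that reason.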
MORALITY IN ASSESSMENT
CONCLUSION
Morality in assessment is a critical aspect of education that directly impacts the lives of students and the integrity of
educational systems. Striking a balance between assessment goals and ethical principles is a challenging but necessary endeavor. By
prioritizing fairness, transparency, and positive learning outcomes, educators and institutions can work towards ensuring that
assessments are conducted with the utmost ethical considerations in mind. In doing so, we can uphold the moral values of
education and promote equitable access to opportunities for all students.
1. The assessment of student learning starts with the institution's mission and core values.
2. Assessment works best when the program has a clear statement of objectives aligned with the institutional mission and core values.
3. Outcomes-based assessment focuses on the student activities that will still be relevant after formal schooling concludes.
4. Assessment requires attention not only to outcomes but also and equally to the activities and experiences that lead to the
attainment of learning outcomes.
5. Assessment works best when it is ongoing, not episodic.
6. Begin by specifying clearly and exactly what you want to assess.
7. The intended learning outcome/lesson objectives NOT content is the basis of the assessment task.
8. Set your criterion of success or acceptable standard of success.
9. Make use of varied tools for assessment data-gathering and multiple sources of assessment data.
10. Learners must be given feedback about their performance.
11. Assessment should be on real-world application and not on out-of-context drills.
12. Emphasize the assessment of higher-order thinking.
13. Provide opportunities for self-assessment.
PHASE 9: REVIEW/RETEACH
Review is what most teachers do the day or week before a large test, while reteaching is targeted at a specific objective that
was a struggle for students.
3 MAIN COMPONENTS
Intended learning outcome
Teaching and learning activities
Assessment task
Intended Learning Outcomes (What do we want students to know?)
Example: By the end of this course, students will be able to analyze and evaluate ethical issues in business situations,
demonstrating critical thinking skills and the ability to make informed and ethically sound decisions.
Teaching and Learning Activities (How do we want them to learn?)
Example: Experiments, projects, lectures, tutorials, or group discussions; students engage in hands-on tasks to apply what they've
learned.
Assessment (How will we know the students have learned?)
Example: Essay, test, Oral Examination, Portfolios or Laboratory reports.