TYPES OF VALIDITY

Validity
-Validity is the degree to which the assessment instrument measures what it intends to measure.
- It also refers to the usefulness of the instrument for a given purpose.
- It is the most important criterion of a good assessment instrument.

WAYS OF ESTABLISHING VALIDITY


Content Validity
- It relates to how adequately the content of the test samples the domain about which inferences are to be made.
- It is established through logical analysis; adequate sampling of test items is usually enough to assure that a test has content validity.
Example: A teacher wishes to validate a test in Mathematics. He requests experts in Mathematics to judge whether the items or questions measure the knowledge, skills, and values they are supposed to measure.

Construct Validity
- It is the extent to which a test measures a theoretical trait or construct. This involves such tests as those of understanding and interpretation of data.
Example: A teacher might investigate whether an educational program increases artistic ability among pre-school children. Construct validity is a measure of whether the study actually measures artistic ability, a somewhat abstract construct.

Criterion-Related Validity (Concurrent Validity)


- It refers to the degree to which the test correlates with a criterion that is set up as an acceptable measure or standard other than the test itself. The criterion is always available at the time of testing.

Criterion-Related Validity (Predictive Validity)


- This refers to the degree of accuracy with which a test predicts one's performance on some subsequent outcome.
- It describes the future performance of an individual by correlating the sets of scores obtained from two measures given at a longer time interval.
Example: Mr. Celso wants to know the predictive validity of the test he administered the previous year by correlating the scores with the grades the same students obtained on a test given at a later date.
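Below is a minimal Python sketch of how such a predictive validity coefficient can be computed as the Pearson correlation between earlier test scores and later grades. The score and grade values are hypothetical, not Mr. Celso's actual data.

```python
# Illustrative sketch: predictive validity as the Pearson correlation
# between last year's test scores and later grades (hypothetical values).
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical data: previous year's test scores and this year's grades
test_scores = [35, 42, 28, 50, 38, 45]
later_grades = [80, 85, 78, 92, 83, 88]
print(f"Predictive validity coefficient: {pearson_r(test_scores, later_grades):.2f}")
```

A coefficient close to +1.0 suggests the earlier test is a strong predictor of later performance, while values near 0 suggest weak predictive validity. The same correlation coefficient also underlies the test-retest and parallel-forms reliability estimates discussed later in these notes.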

Face Validity
- Test questions are said to have face validity when they appear to be related to the group being examined.
- This is done by examining the test to find out if it is a good one; there is no common numerical method for establishing face validity.
Example: Calculation of the area of a rectangle when its given dimensions of length and width are 4 feet and 6 feet, respectively.

FACTORS INFLUENCING THE VALIDITY OF AN ASSESSMENT INSTRUMENT

1. Unclear directions: directions that do not clearly indicate to the students how to respond to the tasks and how to record responses tend to reduce validity.
2. Reading vocabulary and sentence structure too difficult: vocabulary and sentence structure that are too complicated for the students turn the assessment into a measure of reading comprehension, thus altering the meaning of the assessment results.
3. Ambiguity: ambiguous statements in assessment tasks contribute to misinterpretation and confusion. Ambiguity sometimes confuses the better students more than it does the poor students.
4. Inadequate time limits: time limits that do not provide students with enough time to consider the tasks and provide thoughtful responses can reduce the validity of interpretations of results.
5. Overemphasis of easy-to-assess aspects of the domain at the expense of important, but hard-to-assess, aspects: it is easy to develop test questions that assess factual recall and generally harder to develop ones that assess conceptual understanding or higher-order thinking processes such as the evaluation of competing positions or arguments. Hence, it is important to guard against under-representation of tasks targeting the important, but more difficult to assess, aspects of achievement.
6. Test items inappropriate for the outcomes being measured: attempting to measure understanding, thinking skills, and other complex types of achievement with test forms that are appropriate only for measuring factual knowledge will invalidate the results.
7. Poorly constructed test items: test items that unintentionally provide clues to the answer tend to measure the students' alertness in detecting clues as well as mastery of the skills or knowledge the test is intended to measure.
8. Test too short: if a test is too short to provide a representative sample of the performance we are interested in, its validity will suffer accordingly.
9. Improper arrangement of items: test items are typically arranged in order of difficulty, with the easiest items first. Placing difficult items first in the test may cause students to spend too much time on these and prevent them from reaching items they could easily answer. Improper arrangement may also influence validity by having a detrimental effect on student motivation.
10. Identifiable pattern of answers: placing correct answers in some systematic pattern enables students to guess the answers to some items more easily, and this lowers validity.
PRACTICABILITY IN ASSESSMENT METHOD

Practicability
-Practicability means the test can be satisfactorily used by teachers and researchers without undue expenditure of time,
money, and effort. In other words, practicability means usability.

Factors that Determine Practicability


 Ease of administration
 Ease of scoring
 Ease of interpretation and application
 Low cost
 Proper mechanical make-up.

Ease of Administration
Facilitating ease of administration through complete and precise instructions.
Example: For a high school biology test, use clear instructions for a group experiment that can be conducted
simultaneously, saving time and effort

Ease of Scoring depends on the following:


(1) construction of the test is objective
(2) answer keys are adequately prepared
(3) scoring directions are fully understood
Example: In a multiple-choice math test, having a well-constructed answer key and clear instructions for scoring can
simplify the process for teachers

Ease of Interpretation and Application


Results are easily interpreted and applied with the use of tables. Norms based on age, grade/year level, and different
learner backgrounds are essential.
Example: Displaying student performance data in a table, such as a percentage of correct answers, makes it easy
for teachers to quickly identify strengths and weaknesses.

Low Cost
Practicality is enhanced if the test is low-cost material-wise and can be reused by future teachers.
Example: Creating an online quiz platform with reusable questions can significantly reduce the cost of printing
materials and benefit multiple teachers.

Proper Mechanical Make-Up


Emphasize the importance of clear printing, appropriate font size, and attention to pictures and illustrations, especially for
lower grades.
Example: In an elementary school reading assessment, use large, clear fonts and colorful illustrations to engage
young learners and make the test more visually appealing

Ease of administration, scoring, interpretation, low cost, and proper mechanical make-up contribute to the practicability of assessment methods.
Assessment must include learning, motivation, and grading for the students you teach. Effective evaluation techniques yield important insights into students' learning. They let us know what the students learned, how well they learned it, and what areas they found difficult.

FAIRNESS IN ASSESSMENT

Introduction
Fairness in assessment is fundamental to ensuring that evaluation processes are equitable, just, and supportive of all students, regardless of their background, abilities, or circumstances. It stands as a cornerstone of evaluation, essential for guaranteeing that the process is not only impartial but also extends a sense of equity, justice, and support to every student.

Definition of Fairness in Assessment:


Fairness in assessment refers to the impartial and unbiased treatment of all individuals undergoing evaluation. It
encompasses various dimensions, including accessibility, cultural sensitivity, and the elimination of any potential sources of
advantage or disadvantage

Principles of Fair Assessment


1. Equity and Accessibility:
• Assessments should be designed to accommodate diverse learning styles, ensuring that all students have equal
opportunities to demonstrate their understanding.
• Provision of reasonable accommodations for students with special needs to ensure a level playing field

2. Cultural Sensitivity:
• Recognition and respect for cultural diversity, avoiding biases that might disadvantage certain cultural or linguistic groups.
• Inclusion of content that reflects a variety of cultural perspectives to create a more inclusive assessment environment.

3. Validity and Reliability:


• Development of assessments that accurately measure what they intend to assess (validity) and consistently produce
similar results under similar conditions (reliability).
• Regular review and validation of assessment tools to ensure ongoing effectiveness and relevance
4. Transparency:
• Clear communication of assessment criteria and expectations to all students, fostering a transparent and understandable
evaluation process.
• Openness in sharing the purpose and outcomes of assessments, promoting a sense of trust among students.

5. Feedback and Support:


• Providing timely and constructive feedback to help students understand their strengths and areas for improvement.
• Implementing support mechanisms for students who may require additional assistance to meet assessment expectations

Challenges in Achieving Fairness:


Stereotyping and Bias:
• Ongoing efforts needed to eliminate unconscious biases that may affect assessment outcomes.
• Training educators and assessors to recognize and address their biases in the evaluation process.

Resource Disparities:
• Addressing resource discrepancies among educational institutions and ensuring all students have access to necessary
tools and materials for assessments

RELIABILITY (TECHNIQUES IN TESTING RELIABILITY)

What is reliability?
• It refers to the precision or accuracy of the measurement of scores.
• Reliability refers to the stability of a test measure.
• Reliability is the degree to which a Practice, Procedure, or Test (PPT) produces stable and consistent results if
repeated/re-examined on same individuals/students on different occasions, or with different sets of equivalent items when all other
factors are held constant.
• It is one of the most important elements of test quality.

TYPES AND TECHNIQUES IN TESTING RELIABILITY


i. Test-Retest Reliability
To estimate test-retest reliability, you must administer a test form to a single group of examinees on two separate
occasions. Typically, the two separate administrations are only a few days or a few weeks apart; the time should be short enough so
that the examinees' skills in the area being assessed have not changed through additional learning.

ii. Parallel Forms Reliability


Many exam programs develop multiple, parallel forms of an exam to help provide test security. These parallel forms are all
constructed to match the test blueprint, and the parallel test forms are constructed to be similar in average item difficulty.

iii. Decision Consistency


In the descriptions of test-retest and parallel forms reliability given above, the consistency or dependability of the test
scores was emphasized. For many criterion-referenced tests (CRTs), a more useful way to think about reliability may be in terms of
examinees’ classifications.
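As an illustration (an assumption, not part of the original notes), decision consistency can be estimated as the proportion of examinees who receive the same pass/fail classification on two administrations or parallel forms. The cut score and score lists below are hypothetical.

```python
# Illustrative sketch: decision consistency for a criterion-referenced test,
# computed as the share of examinees classified the same way (pass/fail)
# on two administrations. All values are hypothetical.

def decision_consistency(scores_form_a, scores_form_b, cut_score):
    """Proportion of examinees receiving the same pass/fail decision on both forms."""
    same = sum(
        (a >= cut_score) == (b >= cut_score)
        for a, b in zip(scores_form_a, scores_form_b)
    )
    return same / len(scores_form_a)

form_a = [78, 55, 90, 62, 47, 83]
form_b = [81, 58, 88, 59, 52, 85]
print(f"Decision consistency: {decision_consistency(form_a, form_b, cut_score=60):.2f}")
```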

iv. Internal Consistency


The internal consistency measure of reliability is frequently used for norm-referenced tests (NRTs). This method has the
advantage of being able to be conducted using a single form given at a single administration.
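One widely used internal consistency estimate is Cronbach's alpha, computed from the item scores of a single administration. The sketch below uses a hypothetical item-score matrix (rows = examinees, columns = items scored 1 for correct and 0 for incorrect); it is offered as an illustration, not as part of the original notes.

```python
# Illustrative sketch: Cronbach's alpha as an internal-consistency estimate.
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of total scores)

def variance(values):
    """Sample variance (n - 1 denominator)."""
    n = len(values)
    mean = sum(values) / n
    return sum((v - mean) ** 2 for v in values) / (n - 1)

def cronbach_alpha(scores):
    """scores: list of examinee rows, each row a list of item scores."""
    k = len(scores[0])                                   # number of items
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])   # variance of total scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical item scores for five examinees on a five-item test
scores = [
    [1, 1, 1, 0, 1],
    [1, 0, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1],
]
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```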

v. Interrater Reliability
All of the methods for estimating reliability discussed thus far are intended to be used for objective tests. When a test
includes performance tasks, or other items that need to be scored by human raters, then the reliability of those raters must be
estimated.
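One common way to estimate interrater reliability for two raters assigning categorical scores is Cohen's kappa, which corrects the observed agreement for agreement expected by chance. The essay ratings below are hypothetical and serve only to illustrate the calculation.

```python
# Illustrative sketch: interrater reliability via Cohen's kappa for two raters.
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: agreement between two raters corrected for chance agreement."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    counts1, counts2 = Counter(rater1), Counter(rater2)
    categories = set(rater1) | set(rater2)
    expected = sum((counts1[c] / n) * (counts2[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical essay ratings from two teachers (4 = excellent ... 1 = poor)
teacher_a = [4, 3, 2, 4, 1, 3, 2, 4]
teacher_b = [4, 3, 3, 4, 1, 3, 2, 3]
print(f"Cohen's kappa: {cohens_kappa(teacher_a, teacher_b):.2f}")
```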

MORALITY IN ASSESSMENT

What is morality in assessment?


Assessment morality entails adhering to ethical principles in designing, implementing, and using assessments across
different fields, like education, psychology, and employment, to ensure fairness, justice, and the dignity of individuals being assessed.

1. The Importance of Assessment


Assessment serves several crucial functions in education:
- Measuring student learning and progress.
- Informing teaching strategies and curriculum development.
- Facilitating accountability and quality assurance in education.

2. Equity and Fairness in Assessment


- Fairness in Assessment: Fairness in assessment is about ensuring that all students are given an equal opportunity to
demonstrate their knowledge and skills. This includes considerations for cultural, linguistic, and socioeconomic diversity.
- Bias in Assessment: Inadvertent biases in assessment tools, including cultural, gender, and socio-economic biases, can lead
to unfair outcomes and undermine the moral principles of assessment.
3. Ethical Considerations in Assessment
- Honesty and Integrity: Assessments should promote honesty and integrity among both educators and students. Cheating,
plagiarism, and other dishonest practices should be discouraged.
- Privacy and Data Security: Protecting student data and privacy is an ethical obligation. Institutions must handle
assessment data with care and in accordance with relevant laws.
- Transparency and Accountability: Assessment processes and criteria should be transparent, ensuring students and
stakeholders understand the evaluation methods and can hold institutions accountable.
- Positive Learning Outcomes: Assessment should be designed to foster positive learning outcomes and support students in
their educational journeys.

4. Challenges in Morality of Assessment


- High-Stakes Testing
- Pressure on Educators

5. Strategies to Promote Morality in Assessment


- Diverse Assessment Methods: Implement a variety of assessment methods, such as formative assessments, projects, and
authentic assessments, to provide a holistic view of student performance.
- Professional Development: Provide educators with training and support to develop ethical assessment practices and
mitigate biases.
- Inclusive Assessment: Ensure assessment processes accommodate diverse student populations and offer reasonable
accommodations.
- Ethical Guidelines and Policies: Establish clear ethical guidelines and policies related to assessment at institutional and
national levels.

CONCLUSION
Morality in assessment is a critical aspect of education that directly impacts the lives of students and the integrity of
educational systems. Striking a balance between assessment goals and ethical principles is a challenging but necessary endeavor. By
prioritizing fairness, transparency, and positive learning outcomes, educators and institutions can work towards ensuring that
assessments are conducted with the utmost ethical considerations in mind. In doing so, we can uphold the moral values of
education and promote equitable access to opportunities for all students.

PRINCIPLES OF GOOD PRACTICE IN ASSESSING LEARNING OUTCOMES

1. The assessment of student learning starts with the institution's mission and core values.
2. Assessment works best when the program has a clear statement of objectives aligned with the institutional mission and core values.
3. Outcomes-based assessment focuses on the student activities that will still be relevant after formal schooling concludes.
4. Assessment requires attention not only to outcomes but also and equally to the activities and experiences that lead to the
attainment of learning outcomes.
5. Assessment works best when it is ongoing, not episodic.
6. Begin by specifying clearly and exactly what you want to assess.
7. The intended learning outcomes/lesson objectives, NOT the content, are the basis of the assessment task.
8. Set your criterion of success or acceptable standard of success.
9. Make use of varied tools for assessment data-gathering and multiple sources of assessment data.
10. Learners must be given feedback about their performance.
11. Assessment should be on real-world application and not on out-of-context drills.
12. Emphasize the assessment of higher-order thinking.
13. Provide opportunities for self-assessment.

PHASES OF OUTCOME ASSESSMENT IN THE INSTRUCTIONAL CYCLE


ASSESSMENT
In education, the term assessment refers to the wide variety of methods or tools that educators use to evaluate, measure,
and document the academic readiness, learning progress, skill acquisition, or educational needs of students

PHASE 1: INSTITUTIONAL VISION AND MISSION


Institutional mission statements provide various constituencies--students, faculty, legislators, etc.--with the institution's educational goals and guidance concerning the achievement of these goals.
EXAMPLE: CTU's Vision and Mission

PHASE 2: PROGRAM GOALS


Program goals describe what you want a program to do or accomplish. They are broad statements of the kinds of learning
we hope students will achieve.
EXAMPLE: Graduates will be competent in critical questioning and analysis
Graduates will have an appreciation of the necessity and difficulty of making ethical choices
Graduates will have the ability to design and conduct experiments as well as analyze and interpret data

PHASE 3: SUBJECT OBJECTIVES


Subject objectives are brief statements that describe what students will be expected to learn by the end of a school year, course, lesson, project, or class period.
EXAMPLE: The lesson is all about distinguishing and constructing various paper-and-pencil tests. By the end of the school year, students will be expected to:
• Construct a table of specifications
• Construct paper-and-pencil tests in accordance with the guidelines in test construction

PHASE 4: DESIRED LEARNING OUTCOMES


Learning outcomes are statements that describe significant and essential learning that learners have achieved and can
reliably demonstrate at the end of a course or a program
EXAMPLE Activity: A lecture on organization strategies
Learning outcome: Learners can demonstrate how they will use organization strategies with actionable steps
PHASE 5: DIAGNOSTIC ASSESSMENT
A diagnostic assessment is a form of pre-assessment through which teachers can evaluate students' strengths, weaknesses, knowledge, and skills before instruction.
EXAMPLE: • Pre-test • Journals
• Performance task • Word splash

PHASE 6: DECIDING ON LESSON FOCUS


In this phase, the teacher demonstrates, models, and shares his or her thinking with students. It is a brief discussion that lasts for about 5-15 minutes.

PHASE 7: SUPPORTING STUDENTS' ACTIVITIES


In this phase, the teacher strategically creates a supportive environment in which students can be actively involved in class.
EXAMPLE: Students apply principles of logical thinking and persuasive argument in writing. Ways to support students:
•Forming opinion about the topic
•Researching and writing about a variety of perspectives
•Adapting style to identified audience
•Employing clear argument in writing

PHASE 8: FORMATIVE ASSESSMENT


Formative assessment refers to a wide variety of methods that teachers use to conduct in-process evaluations of student comprehension, learning needs, and academic progress during a lesson or course.
Example: Metacognition Table: at the end of class each student answers the following questions presented to them on
index cards:
•What did we do in class?
•Why did we do it?
•What did I learn today?
•How can I apply it?
•What questions do I have about it?

PHASE 9: REVIEW/RETEACH
Review is what most teachers do the day or week before a large test, while reteaching targets a specific objective that was a struggle for students.

PHASE 10: MASTERY LEARNING


Mastery learning maintains that students must achieve a level of mastery of prerequisite knowledge before moving forward to learn subsequent information.

PHASE 11: SUMMATIVE ASSESSMENT OF OUTCOMES


Summative assessment or summative evaluation refers to the assessment of participants where the focus is on the
outcome of a program.
Example: •Mid-term exam •Final project
CONSTRUCTIVE ALIGNMENT

What is Constructive Alignment?


Constructive alignment is a design for teaching in which what students are intended to learn and how they should express their learning are clearly stated before teaching takes place.

3 MAIN COMPONENTS
 Intended learning outcome
 Teaching and learning activities
 Assessment task
 Intended Learning Outcomes (What do we want students to know?)
Example: By the end of this course, students will be able to analyze and evaluate ethical issues in business situations,
demonstrating critical thinking skills and the ability to make informed and ethically sound decisions.
 Teaching and Learning Activities (How do we want them to learn?)
Example: Experiments, projects, lectures, tutorials, or group discussions; students engage in hands-on tasks to apply what they've learned.
 Assessment (How will we know the students have learned?)
Example: Essays, tests, oral examinations, portfolios, or laboratory reports.
