Assessing Language

The document provides an overview of various assessment methods and testing types used in education, including informal and formal assessments, formative and summative assessments, and different testing approaches like norm-referenced and criterion-referenced tests. It discusses the importance of testing in understanding student progress, guiding teaching decisions, and evaluating learning outcomes. Additionally, it highlights concepts such as self-assessment, peer assessment, and dynamic assessment, emphasizing their roles in promoting student growth and understanding.


1. What is a test?

A test is a tool used to measure a person's knowledge, skills, or abilities in a specific area.

2. Assessment and Teaching

Assessment is the process of gathering information about a student’s learning to understand their progress. It helps guide teaching decisions.

3. Informal and Formal assessment

Informal assessment happens during everyday classroom activities (like discussions), while formal assessment uses structured methods (like quizzes or exams).

4. Formative and Summative assessment

Formative assessment occurs during the learning process to improve students’ skills, while summative assessment is done at the end to evaluate overall achievement.

5. Norm-Referenced and Criterion-Referenced Tests

Norm-referenced tests compare a student’s performance to others, while criterion-referenced tests measure how well a student meets specific learning goals.

6. Approaches to Language Testing: A Brief History

This refers to the evolution of language testing methods, from a focus on grammar to more interactive and communicative skills.

7. Discrete-Point and Integrative Testing

Discrete-point tests focus on individual language elements (like grammar), while integrative tests look at language skills in combination (like using grammar in speaking).

8. Communicative Language Testing

This testing evaluates how well a person can communicate in real-life situations,
rather than just knowing grammar or vocabulary.

9. Performance-Based Assessment

This type of assessment tests how well a student can perform tasks in real-world
scenarios (like giving a presentation or writing an essay).

10. Current Issues in Classroom Testing

These are the ongoing challenges and debates in testing, such as improving
accuracy, fairness, and how technology can influence testing.

11. New Views on Intelligence

New theories suggest that intelligence is not just about logic or language skills, but
also includes things like creativity, emotions, and how we relate to others.

12. Traditional and "Alternative" Assessment

Traditional assessments are standardized tests (like multiple-choice exams), while alternative assessments focus more on real-world tasks and personalized feedback.

13. Computer-Based Testing

This is testing done on a computer, which can offer quicker feedback and more
flexibility, but also comes with concerns like cheating or lack of human interaction.

1. Importance of Testing: Testing helps teachers and others understand how much
students have learned, improving education and solving problems in society.

2. Decision Making: Decisions in education, big or small, need accurate information to be effective, like placing students in the right class or choosing materials.

3. Test, Measurement, and Evaluation:

- Test: A set of questions to find out what someone knows.

- Measurement: Collecting numerical data about skills or abilities.

- Evaluation: Judging the value or success of something based on goals.

4. Language Testing: Testing in language learning motivates students, helps them prepare better, and reveals their strengths and weaknesses.

5. Why Test? Tests show if teaching was effective, if materials worked, and what
needs improvement. They benefit both students and teachers.

6. Teacher-Made Tests vs. Standardized Tests:

- Teacher-Made Tests: Created by teachers for their class, often less formal but
directly connected to what students learned.

- Standardized Tests: Made by experts, used widely, and follow strict rules for
fairness and comparison.

7. Language Teaching and Testing:

- Traditional Tests: Focus on grammar, vocabulary, and translation.

- Multiple-Choice Items: Easy to score, but sometimes lack context for real
communication.

- Testing Communication: Goes beyond grammar to assess how well students use
language in real-life situations.

1. Functions of Language Tests: Language tests help collect information to make decisions, like predicting future performance (prognostic tests) or evaluating what students have learned (attainment tests).

2. Prognostic Tests: Tests that predict how well someone might do in the future, like
deciding the best course of study or career path.

3. Selection Tests: Tests used to decide if someone can join a program or job by
meeting specific requirements (e.g., a driving test).

4. Placement Tests: Tests that help place students in the right class or level based
on their current knowledge, without pass or fail.

5. Aptitude Tests: Tests that predict how easily someone can learn a skill or
language, even if they haven’t studied it before.

6. Evaluation of Attainment Tests: Tests that measure what students have already
learned:

- Achievement Tests: Focus on knowledge from a specific course or material (e.g., classroom exams).

- Proficiency Tests: Measure overall language ability, regardless of how or where it was learned (e.g., TOEFL).

- Knowledge Tests: Check subject knowledge (e.g., physics) in a second language, not the language itself.

1. Forms of Language Tests: The "form" refers to how a test looks and is structured
(e.g., written, oral). It should match the skill being tested, like using written tests for
reading and oral tests for listening.

2. The Structure of an Item: Each test question has two parts:

- Stem: The main part that asks the question (e.g., "What is the past tense of 'go'?").

- Response: The answer provided by the student, which could be written, spoken,
or selected from options.
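
To make the stem/response structure concrete, here is a minimal sketch in Python. The class name, fields, and example item are illustrative assumptions, not from the source; the score function also previews objective scoring (item 4 below).

from dataclasses import dataclass

# A minimal representation of a multiple-choice test item (hypothetical).
@dataclass
class TestItem:
    stem: str           # the question, e.g. "What is the past tense of 'go'?"
    options: list[str]  # responses the student can choose from
    answer: str         # the keyed correct response

item = TestItem(
    stem="What is the past tense of 'go'?",
    options=["goed", "went", "gone", "goes"],
    answer="went",
)

def score(item: TestItem, response: str) -> bool:
    # Objective scoring: one fixed correct answer, no rater judgment needed.
    return response == item.answer

print(score(item, "went"))  # True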

3. Classification of Item Forms: Test items are categorized based on their format
and approach. Examples include multiple-choice, fill-in-the-blank, or essay
questions.

4. Subjective vs. Objective Items:

- Subjective Items: Scoring depends on the scorer's opinion (e.g., essays).

- Objective Items: Scoring is fixed and clear, with one correct answer (e.g.,
multiple-choice questions).

5. Essay-Type vs. Multiple-Choice Items:

- Essay-Type: Students write their own answers (e.g., explaining or discussing).

- Multiple-Choice: Students pick the correct answer from a list.

6. Suppletion vs. Recognition Items:

- Suppletion: Students supply or complete missing information (e.g., filling blanks).

- Recognition: Students identify the correct answer among given options.

7. Psycholinguistic Classification: A system that classifies test items based on:

- Psychological Process: Recognition (e.g., choosing), comprehension (e.g.,
understanding), or production (e.g., creating).

- Mode of Language: How the question is presented (oral, written, or visual). This
helps match the test format to the skill being measured.

1. Theories of Language Testing

These are ideas about how to create and understand tests that measure language
skills. They are influenced by psychology, linguistics, and teaching methods.

2. Discrete-Point Approach

This method tests one small part of language (like vocabulary or grammar) at a time.
For example, multiple-choice questions focus on specific skills separately.

3. Integrative Approach

This tests multiple skills together in real-life tasks, like writing an essay or having a
conversation, to see how well someone uses the language as a whole.

4. Functional Approach

This focuses on testing how people use language in real-life situations, such as
asking for help or giving directions. It measures practical communication skills.

5. Interpretation of Test Scores

This explains what test results mean. For example, scores can show how a person
compares to others or whether they meet a specific standard.

6. Norm-Referenced Interpretation

This compares a test taker’s score with others’ scores to show where they stand in the group, like ranking students in a class.
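
As a rough illustration of the two kinds of score interpretation, here is a short Python sketch; the scores and pass mark are invented for the example.

# Norm-referenced: where does one score stand relative to the group?
# Criterion-referenced: does the score meet a fixed standard?
scores = [42, 55, 61, 68, 74, 74, 80, 88]  # hypothetical class scores
student = 74
cutoff = 70                                # hypothetical pass mark

# Percentile rank: the share of the group scoring below this student.
percentile = 100 * sum(s < student for s in scores) / len(scores)
print(f"Norm-referenced: above {percentile:.0f}% of the group")

# Criterion-referenced: compare against the fixed standard only.
print("Criterion-referenced:", "meets standard" if student >= cutoff else "below standard")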

1. Principles of Language Assessment: Key rules to check if a test works well,
focusing on practicality, reliability, validity, authenticity, and washback.

2. Practicality: A test is easy, affordable, and not time-wasting to create, take, or grade.

3. Reliability: A test gives consistent results every time.

Rater Reliability: Scoring is fair and consistent between or by examiners (a worked sketch follows this list).

Student-Related Reliability: Results aren’t affected by a student’s bad day or poor health.

Test Administration Reliability: The test environment doesn’t disrupt results (e.g.,
noisy rooms).

Test Reliability: The test is clear, not too long, and avoids unfair or tricky questions.
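
A minimal sketch of rater reliability as simple percent agreement between two examiners; the ratings are hypothetical, and chance-corrected statistics such as Cohen's kappa would be more rigorous.

# Two examiners score the same ten essays on a 1-5 scale (invented data).
rater_a = [4, 3, 5, 2, 4, 3, 5, 4, 2, 3]
rater_b = [4, 3, 4, 2, 4, 3, 5, 4, 3, 3]

# Percent agreement: the fraction of essays given identical scores.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"Inter-rater agreement: {agreement:.0%}")  # 80%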

4. Validity: A test measures what it’s supposed to.

Face Validity: Test looks fair and useful to students.

Consequential Validity: Test’s results and preparation have positive effects on learning.

Content-Related Evidence: Test matches the course material.

Construct-Related Evidence: Test follows theories about the skill (e.g., speaking
needs oral tasks).

Criterion-Related Evidence: Test predicts performance in real-life or other tests.

1. Authenticity

Authenticity means a test feels real and reflects real-world tasks. It uses natural
language, meaningful topics, and tasks similar to real-life situations.

2. Washback

Washback is the effect of a test on teaching and learning. Positive washback helps
students improve through feedback, while negative washback may lead to "teaching
to the test" without real learning.

3. Applying Principles to Evaluation of Classroom Tests:

- Are the test procedures practical?

Practicality means the test is easy to organize and fits the time, cost, and resources
available.

- Is the test reliable?

Reliability means the test gives consistent results, with clear instructions, equal
conditions for all, and scoring without bias.

- Does the procedure demonstrate content validity?

Content validity means the test covers what was taught and matches the learning
objectives.

- Is the procedure face valid and "biased for best"?

Face validity means the test looks clear and logical to students. "Biased for best"
means it is designed to help students perform their best, not confuse or trick them.

1. What is self-assessment?

It's when students check and reflect on their own work to see how well they meet
learning goals.

2. Why self-assessment?

It helps students understand their progress, take responsibility for learning, and
improve by identifying strengths and weaknesses.

3. How to implement self-assessment?

- Teach students how to judge their work.

- Give clear rules and examples.

- Let students create or agree on standards.

- Make it safe for honest feedback.

4. What is peer assessment?

It's when students give feedback or grades to each other based on agreed standards.

5. Why use peer assessment?

It encourages teamwork, helps students learn from others, and improves understanding by exchanging ideas.

6. How to implement peer assessment?

- Set clear rules and explain the process.

- Build trust in the class.

- Use feedback to help learning, not just for grading.

- Let students practice giving comments.

This presentation explains Dynamic Assessment (DA), an assessment method that combines teaching and evaluation to understand a learner's potential for development, based on Vygotsky’s Zone of Proximal Development (ZPD).

Key points:

1. Dynamic Assessment focuses on how learners can improve with help, not just
what they know. It uses interaction to identify what the learner is capable of with
guidance.

2. ZPD is the gap between what a learner can do alone and what they can do with
assistance.

3. DA integrates teaching (mediation) and testing, adapting to the learner's needs to promote growth.

4. Types of DA:

- Interventionist DA: Uses a structured approach, testing before and after intervention.

- Interactionist DA: Relies on flexible, ongoing interactions to guide learning.

5. Approaches in DA:

- Graduated Prompting: Gradually increases help until the learner succeeds (see the sketch after this list).

- Curriculum-Based DA: Aligns assessment with curriculum to address specific learning challenges.
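
A toy sketch of the graduated-prompting idea in Python: prompts are ordered from least to most explicit, and the amount of help needed indicates how close the learner is to independent performance. The prompts, target answer, and mediate function are invented for illustration.

# Hints ordered from least to most explicit (hypothetical example).
prompts = [
    "Try again.",
    "Think about the verb tense.",
    "The verb is irregular.",
    "The past tense of 'go' is 'went'.",
]

def mediate(get_response) -> int:
    """Return how many prompts were needed (0 = unassisted success)."""
    if get_response() == "went":
        return 0
    for level, prompt in enumerate(prompts, start=1):
        print(prompt)  # mediation: offer the next, stronger hint
        if get_response() == "went":
            return level
    return len(prompts) + 1  # not reached even with the fullest prompt

# Simulated learner: succeeds after two prompts.
responses = iter(["goed", "goed", "went"])
print("Prompts needed:", mediate(lambda: next(responses)))  # 2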

6. Strengths and Challenges:

- Strength: Focuses on the learning process and skill transfer.

- Challenge: Requires detailed observation, interpretation, and validation of results.

It’s a powerful way to assess how students learn and grow, rather than just what they
know.
