Study Notes Psyc 469

The document provides an overview of psychological testing and assessment. It discusses the history and development of testing from early intelligence tests to current standards. Key concepts covered include the differences between psychological test users and assessors, components of assessments, and the testing process. Legal and ethical considerations like cultural factors, informed consent, and qualifications for test administrators are also reviewed.

Uploaded by Debbie Benoliel

Unit 1: Psychological Testing and Assessment

Learning Objectives

1. Describe the roots of contemporary psychological testing and assessment.

Early 20th century: Alfred Binet designed testing to help assign children to the appropriate class level. France was looking to improve its educational standards and live up to its motto of "liberté, égalité et fraternité" through education for all. Binet and Simon were commissioned by the state to formulate tests that could predict which students would require special education, so that they could receive intensified instruction from the start. These became known as intelligence (IQ) tests. The initiative then influenced other countries, including England and America. Psychological testing was later adapted for military use, to predict which individuals could properly deal with the stresses and realities of war (WWI and WWII). William Stern then wrote an ethical safeguard protocol for the use of assessments.

2. Explain the difference between a psychological test user and a psychological assessor.

A psychological tester is adept at administering a test and following all established protocols to ensure that the data collected are precise and accurate. A psychological assessor can administer testing but, beyond that, is trained to use the data to answer the referral question. Furthermore, the assessor will also use observation and background knowledge to synthesize, generalize, and draw conclusions about the client's general profile based on the data collected. The assessor also directs the assessment by prescribing the required tests to be administered, and may add more testing during the assessment period based on the data and his or her professional judgement.

3. List all components included in a psychological assessment.


4. Explain the difference between testing and assessment with regard to their objective, process, role of
the evaluator, skill of the evaluator, and outcome.
5. List and describe the various steps involved in the process of assessment.

A referral is made and the referral question(s) put forth. The assessor may meet with the assessee before the formal assessment, then selects the tools and tests to be used (at times, research is required to find the best tools possible). The tests are administered, a report is written to answer the referral question, and a feedback session is held with the assessee or an interested third party.

6. Explain the difference between traditional psychological evaluations and a therapeutic psychological
assessment.

In a traditional psychological assessment, the assessment is designed with a precise purpose: to answer and clarify the referral question. A therapeutic psychological assessment aims to support and help the client throughout the assessment process itself. Feedback and collaborative dialogue between the assessor and assessee are continuous; they co-develop an interpretation of the data and collaboratively decide on the treatment plan.

7. Explain the phases of a dynamic assessment.

1. Evaluation 2. Intervention 3. Evaluation

8. Outline what type of information is collected during an interview.

What is said, how it is said, and what is not said, as well as non-verbal information (body language, movements, facial expression, eye contact, willingness to cooperate, reaction to the interview setting) and physical appearance.

9. List the uses of the interview.

Interviews help make informed decisions about employment (hiring, firing, advancement, placement). Motivational interviewing is used in clinical assessment to gather information about problematic issues or behaviours while trying to address them simultaneously. Because the interview process is an interactive one, it is also an opportunity to place someone in multiple scenarios, depending on the interviewer's abilities.
10. Describe instances in which case history data may be useful.
11. List pros and cons of computer-assisted psychological assessment.
12. List some of the factors which may be affecting test-taker performance.
13. Describe the types of evaluations used in each of the settings mentioned in the textbook (i.e.,
educational, clinical, counselling, geriatric, business and military, governmental and organizational,
academic research settings).

Key Terms

o Psychological assessment
o Psychological testing
o Referral question
o Tool/instrument selection
o Cut score/Cutoff score
o Psychometrist
o Psychometrics
o Utility
o Test developer
o Test user
o Test taker
o Protocol
o Rapport
o Accommodation

Tools of Psychological Assessment

o Psychological test
o Interview
o Portfolio
o Case history data
o Behavioural observation
o Role play
o Computers

Unit 2: Historical, Cultural, and Legal/Ethical Considerations

Learning Objectives

1. Describe the significance of competition testing for civil service jobs in ancient China.

The Sui dynasty created the imperial examination system in order to screen candidates for government positions. The tests covered the ability to read, write, and calculate; proficiency in geography, agriculture, and military strategy; and moral and physical prowess.

2. Summarize Galton’s contributions to psychology.

He became proficient in measurement and assessment. He aspired to classify people based on their talents and individuality, as well as their deviation from the average. He studied heredity, first in sweet peas and then in humans, and he originated the concept behind the coefficient of correlation technique.

3. Compare and contrast Galton’s and Wundt’s perspectives on the assessment of individuals.
In contrast to Galton, Wundt focused on how people were similar, not different. He disliked individual differences because they would confound his experiments.

4. Describe Cattell’s contributions to mental testing in America.

He coined the term "mental test" and brought the idea of assessment to America. He was a founding member of the APA.

5. Describe Binet’s early research on intelligence.

Binet researched the measurement of memory and social ability as a prelude to measuring intelligence. In collaboration with his colleague Simon, he eventually designed a 30-item assessment tool to measure intelligence, the forerunner of the IQ test.

6. Summarize the use of testing during WWI and WWII.

Group intelligence tests (the Army Alpha and Army Beta, inspired by Binet's work) were used to screen recruits and assign positions to military personnel in the two world wars.

7. Describe the pros and cons of self-report in personality tests.

Pros: people are best positioned to speak about themselves. Cons: people can at times be unaware of, or choose not to divulge, information about themselves.

8. Understand how culture and language can affect individuals’ performance on psychological tests.

Culture and language influence what a society considers appropriate in behaviour and in thinking. Given this, members of a diverse society may respond differently to questions on a standardized assessment. Psychologists have consequently become more sensitive to the way they word test items.

9. Describe early attempts at developing culture-specific tests.

10. Explain the steps used by today’s test developers to ensure tests used are suitable for many
populations.
11. Explain how the nature-nurture debate applies to psychological assessment.
12. Describe the assessment-related issues of communication in a cultural context.
13. Describe how being from an individualist vs. a collectivist culture may affect test scores.
14. Distinguish between ethics, code of ethics, and standard of care.
15. Summarize the public’s opposition to standardized testing over the years.
16. Summarize the three levels of test-user qualifications.
17. Describe the challenges related to testing people with disabilities.
18. List and summarize the four rights of test-takers.

Key Terms

o Self-report
o Projective tests
o Culture
o Eugenics
o Culture-specific tests
o Individualist vs. collectivist culture
o Affirmative action
o Ethics
o Code of professional ethics
o Standard of care
o Hired guns
o Informed consent
o Confidentiality

Key People

o Sir Francis Galton


o James McKeen Cattell
o Wilhelm Wundt
o Alfred Binet
o David Wechsler

Unit 3: Norms and Reliability

Learning Objectives

1. Describe what test users typically mean by “psychometric soundness.”

They refer to the reliability and validity of the test.

2. Outline all the components involved in a standardized test.

A standardized test typically includes components like instructions, test items, answer sheets, scoring rubrics, and a
manual for administration and interpretation. It aims to ensure uniformity in testing conditions and evaluation
across all test takers.

Standardized testing will typically have norms by which to compare the results. These norms are developed based on
a sample of the population or the complete population itself.

3. Describe the difference between a purposive and a convenience sample.

A purposive sample is deliberately chosen based on specific characteristics or criteria relevant to the research
objective. In contrast, a convenience sample is selected based on ease of access, often resulting in a less deliberate or
systematic representation of the population.

4. Explain how a raw score is converted to a percentile and explain what a score at the 87th percentile
means.

The raw scores are put in order from smallest to largest and divided into 100 segments; each segment is a percentile. A score at the 87th percentile means the test taker scored at or above 87% of the people in the normative sample.
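As a minimal sketch (using an invented normative sample), a raw score can be converted to a percentile rank by counting the proportion of sample scores at or below it:

```python
# Hypothetical normative sample of raw scores (illustrative numbers only).
norm_sample = [12, 15, 18, 18, 20, 22, 25, 27, 30, 33]

def percentile_rank(raw_score, sample):
    """Percentage of scores in the normative sample at or below raw_score."""
    at_or_below = sum(1 for s in sample if s <= raw_score)
    return 100 * at_or_below / len(sample)

# A raw score of 27 is at or above 8 of the 10 sample scores.
print(percentile_rank(27, norm_sample))  # 80.0
```

Real norming programs use much larger, stratified samples; the counting logic is the same.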

5. Summarize the “Do’s” and “Don’ts” of culturally informed assessment.

Do: be sensitive to the fact that culture impacts how concepts and words are understood and how an individual responds to a test item. In cases where culture can impact testing, it may be preferable to use a test with fewer cultural biases. At times it is necessary to adjust the language to its equivalent in another culture (e.g., tortilla for bread). Likewise, scoring and interpreting the data in cultural context is important.

6. Explain the difference between a true score and a construct score.

A construct score is the measurement of the theoretical construct one is trying to measure (such as depression). A true score is tied to the measurement tool: it represents the score on a particular test devoid of measurement error.
7. Provide two examples of both random error and systematic error.

Random error can come from the weather, the buzzing of a light, or a smell in the room.

Systematic errors are more consistent, as they will impact scores in the same way every time. They can include errors in administering the test or in the manner of scoring.

8. Discuss how test administration and test scoring/interpretation can affect reliability.

Reliability refers to the consistency with which a tool measures a defined trait, state, or other attribute. The tool has been designed in a very specific manner and is meant to be used in that way only. If I treated a 1-inch mark as a 1-cm mark, the measurements would all be wrong and inconsistent with the norms against which they are compared.

9. Compare and contrast ways of assessing reliability: test-retest, internal consistency.

Test-retest is used to test the stability of a measure and requires two sessions; error variance is estimated with a Pearson r or Spearman rho between the two administrations. In contrast, internal consistency tests the extent to which items on a test relate to each other and are equivalent, and requires only one testing session. Error variance is estimated with a Pearson r between equivalent test halves (with the Spearman-Brown correction), Kuder-Richardson formulas for dichotomous items, or coefficient alpha for multipoint items.

10. Define reliability, including potential sources of reliability coefficient and standard error of
measurement.

Reliability in testing refers to the consistency and stability of measurement results. The reliability coefficient indicates
the degree to which a test yields consistent scores. Potential sources of reliability include internal consistency, test-
retest stability, and inter-rater agreement. The standard error of measurement reflects the extent to which an
individual's true score may vary from their observed score due to measurement error.

11. Explain the difference between the reliability coefficient and standard error of measurement.

The reliability coefficient measures the consistency and stability of scores from a test, indicating how well it reliably
measures the construct. In contrast, the standard error of measurement quantifies the expected variability or margin
of error in an individual's observed score due to measurement inaccuracies, providing an estimate of score precision.
While the reliability coefficient is a measure of consistency, the standard error of measurement gauges the potential
imprecision in individual scores.
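Under classical test theory, the standard error of measurement follows directly from the test's standard deviation and reliability coefficient: SEM = SD × √(1 − r). A sketch with hypothetical figures (SD = 15, r = .91):

```python
import math

# Hypothetical figures: a test with SD = 15 and reliability coefficient r = .91.
sd = 15.0
reliability = 0.91

sem = sd * math.sqrt(1 - reliability)  # standard error of measurement
print(round(sem, 2))  # 4.5

# A 95% confidence band around an observed score of 100:
low, high = 100 - 1.96 * sem, 100 + 1.96 * sem
print(round(low, 1), round(high, 1))  # 91.2 108.8
```

Note how a highly reliable test still leaves a non-trivial band of uncertainty around any individual observed score.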

12. Describe four ways in which the nature of the test can affect reliability.

1. Whether the test items are homogeneous or heterogeneous. 2. Whether what is being measured is static or dynamic. 3. Whether the range of scores is restricted or not. 4. Whether it is a power or a speed test. 5. Whether the test is criterion-referenced or not.

Key Terms

o Norms
o Normative sample
o Norming
o Standardization/test standardization
o Sample
o Sampling
o Stratified sample
o Purposive sample
o Convenience sample
o Percentile
o Criterion-referenced tests
o Reliability coefficient
o Measurement error
o Variance
o True variance
o Error variance
o Reliability
o Random error
o Systematic error
o Test-retest reliability
o Internal consistency
o Spearman-Brown formula
o Split-half reliability
o Inter-scorer reliability
o Classical test theory
o Standard error of measurement

Unit 4: Validity

Learning Objectives

1. Define validity in terms of test score interpretation and use.

The extent to which the test measures what it purports to measure. It is a judgement based on evidence about the
inferences taken from the results.

2. Compare and contrast three main types of validity evidence (content, criterion, and construct) and
identify examples of how each type is established.

Content validity involves ensuring that a test adequately covers the intended content. It is established through expert
reviews and subject matter analysis. Criterion validity assesses how well a test predicts a specific criterion, either
concurrently or predictively. For example, a hiring test's criterion validity might be established by correlating scores
with job performance. Construct validity evaluates whether a test measures the theoretical construct it claims to
measure. It's established through convergent and discriminant validity, where correlations with related and unrelated
measures are examined.

3. Explain why the phrase “valid test” is sometimes misleading.

It can be misleading because a test is valid only for the construct it measures, and only for a particular use and population.

4. Define face validity and its significance to test users.

Face validity means the intent of the assessment is explicit: the test appears, on its face, to measure what it claims to measure. Users tend to feel more reassured administering a test with good face validity because it appears to measure what they are looking to measure.

5. Define characteristics of a good criterion.

A good criterion is relevant, valid, and uncontaminated.


6. Explain the difference between concurrent and predictive validity.

Concurrent validity and predictive validity are both types of criterion-related validity. Concurrent validity assesses the
relationship between a test and a criterion that are measured at the same point in time. In contrast, predictive validity
assesses how well a test predicts future performance on a criterion. So, concurrent validity involves simultaneous
measurement, while predictive validity involves forecasting future outcomes based on the test results.

7. Define and provide an example of incremental validity.

Incremental validity refers to the extent to which a new test or measure adds valuable information beyond what
existing measures already provide. In other words, it assesses whether the new test contributes something unique to
predicting an outcome.

For example, consider a hiring process where interviews and reference checks are standard procedures. If a new
personality test for job candidates demonstrates incremental validity, it means it provides additional predictive power
for success on the job beyond what the interviews and reference checks already offer.

8. Describe the procedures which may be used to demonstrate evidence of construct validity.

-the test is homogeneous, measuring a single construct;

-test scores increase or decrease as a function of age, the passage of time, or an experimental manipulation as
theoretically predicted;

-test scores obtained after some event or the mere passage of time (or, posttest scores) differ from pretest scores as
theoretically predicted;

-test scores obtained by people from distinct groups vary as predicted by the theory;

-test scores correlate with scores on other tests in accordance with what would be predicted from a theory that covers
the manifestation of the construct in question.

9. Explain the procedure of the multitrait-multimethod matrix.

The multitrait-multimethod (MTMM) matrix is a research design used to assess the validity of a set of measurements.
It involves examining the relationships between multiple traits and multiple methods used to measure those traits.

The procedure includes measuring several traits using different methods and then correlating the scores. The matrix
typically has three types of correlations:

Convergent validity (within trait): correlations between different methods measuring the same trait; these should be high.

Discriminant validity (between traits): correlations between different traits, whether measured by the same or different methods; these should be low.

Method variance: the influence of the measurement method itself, visible when different traits measured by the same method correlate more highly than the same traits measured by different methods.

By analyzing these correlations, researchers can evaluate the consistency of results within traits, distinguish between
traits, and identify potential biases introduced by measurement methods. The MTMM matrix provides a
comprehensive perspective on the validity and reliability of measurements.
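As an illustration only (every correlation value below is invented), the evidence pattern an MTMM matrix looks for, namely convergent correlations exceeding discriminant ones, can be sketched as:

```python
# Hypothetical MTMM correlations for two traits (anxiety, depression)
# measured by two methods (self-report, clinician rating). Values invented.
corr = {
    ("anxiety_self", "anxiety_clin"): 0.70,  # convergent: same trait, different method
    ("depress_self", "depress_clin"): 0.65,  # convergent
    ("anxiety_self", "depress_self"): 0.30,  # heterotrait-monomethod
    ("anxiety_clin", "depress_clin"): 0.25,  # heterotrait-monomethod
    ("anxiety_self", "depress_clin"): 0.15,  # heterotrait-heteromethod
    ("depress_self", "anxiety_clin"): 0.18,  # heterotrait-heteromethod
}

# A measure's trait is the part of its name before the underscore.
convergent = [v for (a, b), v in corr.items() if a.split("_")[0] == b.split("_")[0]]
discriminant = [v for (a, b), v in corr.items() if a.split("_")[0] != b.split("_")[0]]

# Evidence pattern: every convergent correlation should exceed
# every discriminant correlation.
print(min(convergent) > max(discriminant))  # True
```

In a real MTMM analysis the matrix also includes the reliability diagonal, and the comparisons are made blockwise rather than with a single min/max check.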

10. Explain the advantages of using factor analysis for learning about convergent and discriminant
validity.

Factor analysis is useful because it can show which measures load on a common factor (convergent evidence) and which load on distinct factors (discriminant evidence). However, the full multitrait-multimethod approach is complex to use and is rarely applied in research because of that.

12. Provide examples of test bias.

In psychology, a bias is "a factor inherent in a test that systematically prevents accurate, impartial measurement." An example is a test that measures intelligence but uses academic content that some students have not been taught.

13. Distinguish between leniency error, severity error, central tendency error, and halo effect.

Leniency error: occurs when a rater consistently gives higher ratings or evaluations than warranted. This can result in inflated scores that may not accurately reflect the actual performance or attributes being assessed.

Severity error: in contrast to leniency error, severity error happens when a rater consistently gives lower ratings than deserved. It can lead to undervaluing performance or characteristics and may affect fairness in evaluations.

Central tendency error: involves a rater consistently assigning average or middle-of-the-scale ratings, avoiding extreme judgments. This error can mask individual differences and make it challenging to distinguish between high and low performers.

Halo effect: occurs when a rater's overall impression of an individual, either positive or negative, influences their evaluations of specific attributes or behaviors. It can lead to biased assessments where one characteristic unfairly influences judgments across the board.

These errors highlight the challenges in subjective evaluations and emphasize the importance of rater training and
awareness to minimize biases and enhance the accuracy of assessments.

14. Discuss the concept of test fairness.

Fairness, in the context of psychometric testing, refers to the use of tests in an impartial, equitable, and just way. At times, tests are used in unreasonable ways, such as governments "diagnosing" people as ill for deviating from the governmental vision.

Key Terms

o Inference
o Validity
o Validation
o Content validity
o Criterion-related validity
o Construct validity
o Ecological validity
o Face validity
o Criterion contamination
o Validity coefficient
o Incremental validity
o Convergent validity
o Discriminant validity
o Multitrait-multimethod matrix
o Factor analysis
o Rating error
o Leniency error
o Severity error
o Central tendency error
o Halo effect

Chapter 9

1. Discuss the various meanings of the term intelligence.


2. Understand the process undertaken in factor analysis.
3. Distinguish between confirmatory and exploratory factor analysis.
4. Describe Spearman’s two-factor theory of intelligence.
5. Describe the Cattell-Horn-Carroll theory and comment on its
utility.
6. Describe Gardner’s Theory of Multiple Intelligences and
comment on its validity.
7. Discuss tasks used to measure intelligence across the lifespan.
8. List and describe some contexts in which intelligence tests may
be administered to adults.
9. Outline seven factors taken into consideration by a test user in
choosing which intelligence test to administer.
10. Discuss the use of adaptive testing and its advantages.
11. Describe the common features of the Wechsler intelligence tests.
12. Discuss when a clinician would use the GAI and/or CPI on the WAIS-IV test.
13. Discuss how group-administered intelligence tests are used today.
14. Explain how culture can affect the measurement of intelligence.

Accommodation: Providing adjustments or changes to meet specific needs.


Adaptive Testing: Tailoring the difficulty of test items based on the test taker's performance.
AFQT: Armed Forces Qualification Test, assessing abilities for military service.
Alerting Response: Immediate reaction to a stimulus indicating increased attention.
Alternate Item: A different version of a test item, used to reduce cheating.
Army Alpha Test: An early group intelligence test administered during World War I.
Army Beta Test: Another version of the Army intelligence test, designed for illiterate individuals.
Assimilation: Incorporating new information into existing cognitive structures.
ASVAB: Armed Services Vocational Aptitude Battery, assessing skills for military jobs.
Basal Level: The starting point for scoring in a test, indicating basic competency.
Binet, Alfred: A psychologist who pioneered intelligence testing with the Binet-Simon Scale.
Ceiling Effect: When a test fails to measure high levels of ability due to insufficient difficulty.
Ceiling Level: The highest difficulty level a test item can have before it becomes too challenging.
CHC Model: Cattell-Horn-Carroll model, a framework for understanding intelligence.
Cognitive Style: Individual preferences in approaching and solving problems.
Convergent Thinking: Focusing on finding a single correct solution to a problem.
Cross-Battery Assessment: Evaluating an individual's cognitive abilities across different tests.
Crystallized Intelligence: Acquired knowledge and skills over time.
Culture-Fair Intelligence Test: A test designed to minimize cultural bias.
Culture-Free Intelligence Test: An attempt to create a test that is independent of cultural influences.
Culture Loading: The extent to which a test is influenced by cultural factors.
Deviation IQ: Expressing an individual's IQ relative to the average score for their age group.
Divergent Thinking: Generating multiple solutions to a problem.
Emotional Intelligence: Ability to perceive, understand, and manage emotions.
Extra-Test Behavior: Non-test-related actions observed during assessment.
Factor-Analytic Theories (of Intelligence): Theories identifying underlying factors contributing to intelligence.
Floor: The lowest difficulty level a test item can have, suitable for all test takers.
Fluid Intelligence: Capacity for solving novel problems independent of acquired knowledge.
Flynn Effect: The observed increase in IQ scores over generations.
G (Factor of Intelligence): General intelligence factor underlying specific cognitive abilities.
Gf and Gc: Fluid intelligence and crystallized intelligence, respectively.
Giftedness: Exceptional intellectual ability or talent.
Group Factors: Shared factors influencing performance on certain types of tests.
Hierarchical Model: A model organizing intelligence into different levels or strata.
Information-Processing Theories (of Intelligence): Theories examining mental operations involved in intelligence.
Intelligence: Mental ability to learn, reason, problem-solve, and adapt.
Interactionism: The idea that genetics and environment interact to influence intelligence.
Interpersonal Intelligence: Ability to understand and interact effectively with others.
Intrapersonal Intelligence: Self-awareness and understanding of one's emotions.
IQ (Intelligence Quotient): Numerical representation of an individual's intelligence relative to their peers.
Maintained Abilities: Abilities that remain relatively stable over time.
Mental Age: The age at which a person's cognitive abilities match their test performance.
Nominating Technique: Identifying gifted individuals based on recommendations.
Optional Subtest: Additional test components that may be administered based on individual needs.
Parallel Processing: Simultaneous handling of multiple cognitive tasks.
PASS Model: Planning, Attention, Simultaneous, Successive model of intelligence.
Point Scale: A scoring system based on the number of correct responses.
Predeterminism: The belief that intelligence is predetermined and unchangeable.
Preformationism: Historical idea that traits are inherited in a preformed state.
Psychoeducational Assessment: Evaluating cognitive, emotional, and educational factors.
RAT: Remote Associates Test, measuring creative thinking.
Ratio IQ: A ratio of mental age to chronological age, multiplied by 100.
Routing Test: A test used to guide the administration of subsequent subtests.
Schema: Mental framework organizing and interpreting information.
Schemata: Plural form of schema, cognitive frameworks.
Screening Tool: A brief assessment to identify potential issues or strengths.
Sequential Processing: Handling one cognitive task at a time.
S Factor (of Intelligence): Specific abilities contributing to intelligence.
Short Form: Abbreviated version of a test, preserving its reliability.
Simultaneous Processing: Handling multiple cognitive tasks simultaneously.
Stanford-Binet: A widely used intelligence test, revised from the Binet-Simon Scale.
Successful Intelligence: Intelligence applied to achieve one's goals in life.
Successive Processing: Handling cognitive tasks in a sequential manner.
Supplemental Subtest: Additional test components providing supplementary information.
Teaching Item: A test item designed to measure knowledge imparted through instruction.
Temperament: Innate behavioral and emotional traits.
Terman, Lewis: A psychologist known for his work on giftedness, Stanford-Binet revisions.
Termites: Participants in Terman's longitudinal study of gifted individuals.
Testing the Limits: Assessing an individual's potential by gradually increasing difficulty.
Three-Stratum Theory of Cognitive Abilities: A model categorizing cognitive abilities into three levels.
Two-Factor Theory of Intelligence: Spearman's theory proposing general and specific intelligence factors.
Verbal, Perceptual, and Image Rotation (VPR) Model: A model categorizing cognitive abilities.
Vulnerable Abilities: Cognitive functions susceptible to impairment.
WAIS: Wechsler Adult Intelligence Scale, a widely used intelligence test for adults.
WASI: Wechsler Abbreviated Scale of Intelligence, a short form of the WAIS.
Wechsler, David: A psychologist who developed prominent intelligence tests.
Wechsler-Bellevue: An early version of Wechsler's intelligence tests.
WISC: Wechsler Intelligence Scale for Children, assessing intelligence in children.
WPPSI: Wechsler Preschool and Primary Scale of Intelligence, for young children.

Chapter 10

1. Define specific learning disability and explain the role of intelligence and achievement tests in evaluation.
2. Describe the case for and against educational testing in schools.
3. Compare and contrast dynamic assessment procedures and traditional assessment procedures.
4. Explain how achievement tests are used in educational settings.
5. Describe how achievement and aptitude are measured for various age groups across the lifespan.

- Formative Assessment: Ongoing evaluation during the learning process to provide feedback and guide instruction.
- Summative Assessment: Evaluation conducted at the end of a learning period to measure overall performance and understanding.
- Specific Learning Disability: Impairment in certain academic skills that significantly hinders learning.
- Integrative Assessment: Comprehensive evaluation that combines various sources of information to gain a holistic understanding.
- Zone of Proximal Development: The range of tasks a learner can perform with assistance, optimizing learning potential.
- Achievement Tests: Assessments measuring a person's acquired knowledge or skills in a particular area.
- Aptitude Tests: Evaluate an individual's potential to develop certain skills or abilities.
- Checklist: A list of items to be checked or assessed for completion or presence.
- Rating Scale: A tool assigning a value or score to performance, often indicating the degree or quality.
- Informal Evaluation: Assessment conducted without standardized tools, often based on observation or informal measures.
- Diagnostic Test: A tool designed to identify specific areas of difficulty or impairment.
- Diagnostic Information: Data that pinpoints specific strengths and weaknesses in a person's performance or abilities.
- Evaluative Information: Information used for judgment or assessment purposes.
- Psychoeducational Test Batteries: Comprehensive assessments measuring cognitive, emotional, and educational factors.
- Performance Task: An activity or assignment that demonstrates a person's skills or abilities in a real-world context.
- Portfolio Assessment: Evaluation based on a collection of work samples, showcasing skills, growth, or achievements.
- Authentic Assessment: Evaluation of real-world tasks or skills in a meaningful context.

Chapter 11

1. Distinguish between personality traits, types, and states.
2. Discuss the pros and cons of self-report to measure personality.
3. Explain the concepts of response style and impression
management; discuss how these concepts may impact the
validity of the personality profile.
4. Discuss the pros and cons of using validity scales in personality
assessment.
5. Distinguish between the nomothetic approach and the
idiographic approach.
6. Explain how culture and language can impact personality
assessment.
7. Briefly explain the steps to developing an instrument to assess
personality.
8. Describe the essential features of the MMPI-2 and comment on
reliability and validity.

Acculturation: The process of adapting to and integrating with a different culture.
Acquiescent Response Style: Tendency to agree or acquiesce in survey responses.
Big Five: Five broad personality traits - openness, conscientiousness, extraversion, agreeableness, and neuroticism.
Control Group: A group in an experiment that does not receive the treatment, used for comparison.
Criterion: A standard or benchmark for evaluating performance or behavior.
Criterion Group: Participants chosen based on meeting specific criteria.
Empirical Criterion Keying: Developing a test by selecting items based on their correlation with external criteria.
Error of Central Tendency: The tendency to rate individuals around the average, avoiding extreme judgments.
Forced-Choice Format: Respondents must select from predetermined options, reducing response bias.
Frame of Reference: The context or perspective influencing perception or judgment.
Generosity Error: Rating individuals more positively than warranted.
Graphology: Analysis of handwriting to infer personality traits.
Halo Effect: Tendency for a positive impression in one area to influence judgments in other areas.
Identification: Associating oneself with another person or group.
Identity: The sense of self, including personal characteristics and values.
Idiographic Approach: Focusing on individual uniqueness in understanding personality.
Impression Management: Deliberate control of one's image or presentation.
Instrumental Values: Values representing the means to achieve desired outcomes.
IPIP: International Personality Item Pool, a collection of personality items.
Leniency Error: Rating individuals more leniently than warranted.
Locus of Control: Belief in one's ability to control life events (internal) or attributing control to external factors (external).
MMPI: Minnesota Multiphasic Personality Inventory, a widely used personality assessment.
MMPI-2: Revised version of the MMPI.
MMPI-2-RF: MMPI-2 Restructured Form, a refined version of the MMPI-2.
MMPI-3: The third edition of the MMPI.
MMPI-A-RF: MMPI Adolescent Restructured Form, designed for adolescents.
NEO PI-R: NEO Personality Inventory-Revised, assessing the Big Five personality traits.
Nomothetic Approach: Generalizing principles across individuals in understanding personality.
Personality: Unique and enduring patterns of thoughts, feelings, and behaviors.
Personality Assessment: The process of measuring and evaluating personality traits.
Personality Profile: Summary of an individual's personality characteristics.
Personality Trait: Enduring patterns of behavior, thought, and emotion.
Personality Type: A classification based on shared personality characteristics.
Profile: A summary representation of test scores or personality traits.
Profile Analysis: Examining patterns across multiple variables in a profile.
Profiler: A person skilled in creating personality profiles.
Q-Sort Technique: Sorting statements about oneself based on their relevance.
Response Style: A consistent way of responding to survey items.
Self-Concept: The individual's perception of oneself, including beliefs and feelings.
Self-Concept Differentiation: The extent to which distinct aspects of self are differentiated.
Self-Concept Measure: Tools assessing an individual's self-concept.
Self-Report: Gathering information directly from individuals about their thoughts, feelings, or behaviors.
Semantic Differential: A scale assessing attitudes by rating opposite adjectives.
Severity Error: Rating individuals more critically than warranted.
State: Temporary emotional or situational conditions influencing behavior.
Structured Interview: A standardized interview with predetermined questions.
Terminal Values: Enduring values representing desired end states.
Type A Personality: Personality characterized by competitiveness and time urgency.
Type B Personality: Personality characterized by a more relaxed and laid-back approach.
Validity Scale: A scale on personality tests checking for the accuracy and truthfulness of responses.
Values: Beliefs or principles that guide behavior and decision-making.
Welsh Code: A shorthand system for summarizing MMPI profile scale scores in order of elevation, developed by George Welsh.
Worldview: A person's fundamental beliefs and outlook on life.

Chapter 12

1. Discuss some concerns related to the objectivity of personality assessment.
2. Explain the Rorschach test and its procedure; list some of the
current issues that can reliably be assessed by the Rorschach.
3. Explain the concept of projective tests and list three types.
4. Describe the assumptions inherent in projective testing.
5. Explain how situational variables can affect the results of
projective tests.
6. Distinguish between traditional and behavioural approaches to
psychological assessment.
7. List a few ways in which behavioural observation may occur.
8. Explain how psychophysiological methods can be used in
psychological assessment.
9. Discuss issues around reliability in behavioural assessment.
10. Distinguish between the clinical judgment and actuarial
approaches to making predictions in psychological assessment.

Analogue Behavioral Observation: Systematic observation of behavior in a setting designed to resemble the natural
environment.
Analogue Study: A research study conducted in an environment that simulates real-life situations.
Apperceive: To comprehend or interpret sensory information and integrate it with existing knowledge.
Behavioral Assessment: Evaluation of behavior through direct observation, measurement, and analysis.
Behavioral Observation: Systematic recording and analysis of observable behaviors in their natural settings.
Biofeedback: The use of electronic monitoring to provide individuals with information about physiological processes for self-
regulation.
Composite Judgment: A comprehensive evaluation formed by combining multiple judgments or assessments.
Comprehensive System (Exner): A scoring system for the Rorschach inkblot test developed by John E. Exner.
Contrast Effect: The influence of a preceding stimulus on the perception of a subsequent one.
Ecological Momentary Assessment: The collection of real-time data on a subject's behaviors, thoughts, and emotions in their
natural environment.
Figure Drawing Test: A projective test where individuals draw a human figure, analyzed for psychological insights.
Free Association: A psychoanalytic technique where individuals express thoughts without censorship to reveal unconscious
processes.
Functional Analysis of Behavior: An assessment method analyzing antecedents, behaviors, and consequences to understand and
modify behavior.
Implicit Motive: Unconscious motivational forces that influence behavior.
Inquiry (on the Rorschach): The second phase of Rorschach administration, in which the examiner asks the examinee to explain which features of each inkblot prompted the response.
Leaderless Group Technique: A group assessment method where participants interact without a designated leader.
Need (Murray): In Murray's theory of personality, an internal force that organizes perception and directs behavior toward a goal.
Objective Methods of Personality Assessment: Assessment techniques with standardized questions and clear scoring criteria.
Penile Plethysmograph: A device measuring changes in penile circumference, often used in sexual arousal research.
Percept (on the Rorschach): The visual response or interpretation of an inkblot on the Rorschach test.
Phallometric Data: Measurements of physiological responses related to sexual arousal.
Plethysmograph: An instrument measuring changes in volume in an organ or part of the body.
Polygraph: A lie detector measuring physiological responses such as heart rate and skin conductivity during questioning.
Press (Murray): Environmental influences and demands that act on an individual in Murray's theory of personality.
Projective Hypothesis: The idea that ambiguous stimuli elicit projections of unconscious thoughts and feelings.
Projective Method: A psychological assessment using unstructured stimuli to reveal unconscious thoughts.
Psychophysiological (Assessment Methods): Techniques measuring physiological responses to psychological stimuli.
Reactivity: Changes in behavior or physiological responses due to being observed or assessed.
Role Play: Acting out situations to assess or train individuals' responses in a specific context.
Rorschach Test: A projective psychological test using inkblots to assess personality and emotional functioning.
Self-Monitoring: The process of observing and recording one's behavior and responses to assess and modify them.
Sentence Completion: A projective technique where individuals complete sentences to reveal underlying thoughts and
emotions.
Sentence Completion Stem: The beginning of a sentence in a sentence completion test that individuals complete.
Sentence Completion Test: A psychological test using incomplete sentences to elicit responses revealing thoughts and feelings.
Situational Performance Measure: An assessment of behavior in specific situations to predict future performance.
TAT (Thematic Apperception Test): A projective test using ambiguous pictures to elicit stories revealing personality.
Testing the Limits (on the Rorschach): A post-administration procedure in which the examiner departs from standard instructions to probe for responses or percepts the examinee did not offer spontaneously.
Thema (Murray): In Murray's theory, a unit of analysis combining a need with an environmental press to describe a recurrent pattern of person–environment interaction.
Timeline Followback (TLFB) Methodology: A method collecting retrospective data on behavior over a specific time period.
Unobtrusive Measure: An assessment method that does not interfere with the natural context or behavior being observed.
Word Association: A psychological test where individuals respond to a stimulus word with the first word that comes to mind.
Word Association Test: A projective test using word stimuli to elicit associations revealing unconscious thoughts and emotions.