Measurement, Reliability and Validity

This document discusses key concepts in developing effective selection systems, including measurement, reliability, validity, and bias. It defines reliability as the consistency of measurement and validity as the degree to which a test measures the intended constructs. Reliability is estimated through several types of reliability measures, while validity is established through criterion-related, content-related, and construct-related validation studies. An effective selection strategy uses scientific methods to match skills to job requirements and reduce turnover.

Uploaded by

Alluka Zoldyck
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PPTX, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
180 views21 pages

Measurement, Reliability and Validity

This document discusses key concepts related to developing effective selection systems, including measurement, reliability, validity, and bias. It defines reliability as the consistency of measurement and validity as measuring the intended constructs. Reliability is achieved through various types of testing, while validity is established through criterion-related, content-related, and construct-related validation studies. An effective selection strategy uses scientific methods to match skills to job requirements and reduce turnover.

Uploaded by

Alluka Zoldyck
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PPTX, PDF, TXT or read online on Scribd
You are on page 1/ 21

MEASUREMENT, RELIABILITY AND VALIDITY
LEARNING OBJECTIVES:

• List and describe the basic statistical concepts necessary for the development of a selection system.
• Define reliability and explain why it is a central concept in the development of a selection system.
• Define validity and explain why it is a central concept in the development of a selection system.
• Explain how bias affects a selection system, how it can be detected, and how it can be overcome.
MEASUREMENT

• Measurement is a systematic process by which objects or events are quantified and/or classified with respect to a particular dimension.
• This is usually achieved by the assignment of numerical values.
TURNOVER RISK
MITIGATION
• It is the number of candidates screened out post-interview as a result of poor peer feedback. It gauges the quality of your late-stage screening tools: how effective are your reference checks, and do they impact your hiring decisions?
NEW TALENT IMPACT
• New Talent Impact provides early recognition of a new hire’s fit and can be used as an identifier for
potential issues.
• It also helps to control the employment experience and reveals the effectiveness of your HR
processes.
• The new hire’s predicted success can be measured by:
• Hiring Manager rehire certainty
• Team member’s cultural fit
• New hire engagement process
NEW HIRE ENGAGEMENT

• It can help round out a picture of a new hire's performance based on their fit, focus, and feelings. This metric should be gauged at regular intervals during an employee's first few months on the job, covering:
• Onboarding process
• Training resources
• Overall culture

• New hire engagement also helps identify the candidate's perspective on the hiring process.
MANAGER SELECTION STRENGTH

• Create a rating of how well your managers select talent.
• Compare the overall pre-employment ratings of hired candidates versus candidates who were screened out during the hiring process, as in the sketch below.
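As a rough illustration, the sketch below compares mean pre-employment ratings of hired versus screened-out candidates per manager. The manager names, ratings, and record format are all hypothetical; the same comparison could just as easily live in a spreadsheet.

```python
# Sketch: Manager Selection Strength -- compare mean pre-employment ratings
# of hired vs. screened-out candidates per manager. All data is hypothetical.
from statistics import mean

# (manager, pre-employment rating, hired?) -- illustrative records only
records = [
    ("Ana", 4.2, True), ("Ana", 3.1, False), ("Ana", 4.5, True),
    ("Ben", 3.0, True), ("Ben", 3.8, False), ("Ben", 2.9, True),
]

for m in sorted({mgr for mgr, _, _ in records}):
    hired = [r for mgr, r, h in records if mgr == m and h]
    passed = [r for mgr, r, h in records if mgr == m and not h]
    # A positive gap suggests the manager tends to hire higher-rated candidates.
    gap = mean(hired) - mean(passed)
    print(f"{m}: hired avg {mean(hired):.2f}, "
          f"screened-out avg {mean(passed):.2f}, gap {gap:+.2f}")
```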
PASSIVE PIPELINE GROWTH

• It is a measurement of how many passive candidates you're adding to your candidate pipeline via non-job-specific recruiting methods.
• If your pipeline is stocked, this ultimately decreases your time to hire and improves the overall quality of your workforce.
RECOMMENDATIONS

• Start small – Focus on the small areas where you can start improving your talent selection metrics. For those who are new to this, we suggest sending a survey a reasonable time after your new hire starts and quantifying how that hire is working out.
• Create a process – Create a simple process to collect and evaluate the data. Make sure you are being strategic in measuring the key things that will provide you with important information to help you drive results.
THE IMPORTANCE OF DEVELOPING A SELECTION
STRATEGY WHICH IS BASED ON SCIENTIFIC METHODS
• Employee selection strategies are composed of research, testing and evaluation methods.
• The task of selection strategies is to match an organization’s predetermined requirements with the
correct skillset.
• Employment selection strategy theorizes that by matching your company's needs to the candidate best suited for the job, you can reduce employee turnover and increase employee productivity, saving time and money.
BASIC COMPONENTS OF EVERY SELECTION SYSTEM

• Examples of Tests
• Intelligence Test – used to measure intelligence.
• Aptitude Test – used to predict the potential an individual has to perform a job or specific tasks within a job.
• Personality Test – used to measure characteristics that indicate individual interests, values and behaviors that may be required for the job.
SORTING OUT APPLICATIONS

• All applicants are listed in a standard control sheet.
• Each component of the application criteria is awarded a point value.
• Applicants are weighted and ranked according to points scored (see the sketch after this list).
• Shortlist those qualified for an interview.
• Prepare an interview program.
• Invite interviewees using a standard letter, and inform those who did not qualify.
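The control-sheet steps above can be sketched in a few lines. The criteria names, weights, and point scores below are hypothetical placeholders; a real control sheet would use your own application criteria and scales.

```python
# Sketch: weighted control sheet for shortlisting. Criteria and weights
# are hypothetical; substitute your own application criteria.
CRITERIA_WEIGHTS = {"education": 0.3, "experience": 0.5, "skills": 0.2}

applicants = {
    "Applicant A": {"education": 4, "experience": 3, "skills": 5},
    "Applicant B": {"education": 5, "experience": 2, "skills": 3},
    "Applicant C": {"education": 3, "experience": 5, "skills": 4},
}

def weighted_score(scores):
    """Total points across criteria, each weighted by its importance."""
    return sum(CRITERIA_WEIGHTS[c] * pts for c, pts in scores.items())

# Rank applicants by total weighted points, highest first, then shortlist.
ranked = sorted(applicants, key=lambda a: weighted_score(applicants[a]),
                reverse=True)
for a in ranked:
    print(f"{a}: {weighted_score(applicants[a]):.2f}")
print("Shortlisted for interview:", ranked[:2])  # e.g., invite the top two
```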
INTERVIEWING
• Types of interviews
• Individual interviews – Involve face-to-face discussion and provide the best opportunity for the establishment of close contact between the interviewer and the candidate.
• Interviewing Panels – Two or more people gather to interview one candidate. Panel interviews help to develop a consensus about the candidate through discussion among panel members and hence reduce superficial biases.
• Selection Boards – These are more formal, and usually larger, interviewing panels convened by an official body because a number of parties are interested in the selection decisions.
PREPARATION FOR AN INTERVIEW

• One must know what he/she wants to achieve from the interview. One needs to set objectives that
can be reasonably achieved by the interview and that are directly related to the job description and
specification.
CONDUCTING AN INTERVIEW
• The questions used, and the way they are asked, make a difference to the effectiveness of an interview. The following question types may be useful:
• Open questions begin with words such as “why”, “how”, “what”, etc., or phrases like “tell me about”.
• Probing questions can be used to explore a particular topic in more detail.
• Leading questions indicate the answer the interviewer expects to hear.
• Loaded questions imply that the interviewer is judging or criticizing the interviewee.
• Double-headed questions are those where several questions are strung together.
• Self-assessment questions ask the interviewee to “sell” himself or herself to the interviewer.
• Hypothetical questions pose imaginary situations for the interviewee.
STAR MODEL IN CONDUCTING AN INTERVIEW

Situation – What was the situation you faced in the past?

Task – What was your task or job? What were you supposed or expected to do?

Action – What did you do?

Result – What was expected of you? What was the result?
MAKING A DECISION
• It is desirable to use a scoring scheme at both the shortlisting and the final selection stages.

ABILITY TESTS
• Ability tests should have the following characteristics:
• Sensitivity – the test should be sensitive enough to discriminate between candidates
• Standardization
• Reliability
• Validity
RELIABILITY
• It is the degree to which an assessment tool produces stable and consistent results.

TYPES OF RELIABILITY
Test–retest reliability – a measure of reliability obtained by administering the
same test twice, over a period of time, to the same group of individuals.

Parallel forms reliability – a measure of reliability obtained by administering
different versions of an assessment tool to the same group of individuals.
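Both estimates typically come down to correlating two sets of scores. A minimal sketch, assuming hypothetical test scores: the Pearson correlation between two administrations of the same test. For parallel forms reliability, the second list would instead hold scores from the alternate version of the test.

```python
# Sketch: test-retest reliability as the Pearson correlation between two
# administrations of the same test. All scores are hypothetical.
from math import sqrt

def pearson(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

time1 = [78, 85, 62, 90, 71, 84]  # first administration
time2 = [75, 88, 65, 87, 70, 86]  # same group, retested some weeks later

print(f"Test-retest reliability: r = {pearson(time1, time2):.2f}")
# Values closer to 1.0 indicate more stable, consistent measurement.
```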
TYPES OF RELIABILITY (CONTINUED)

Inter-rater reliability – a measure of reliability used to assess the degree to
which different judges or raters agree in their assessment decisions.
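A common way to quantify this is Cohen's kappa, which corrects the raw agreement rate for agreement expected by chance. A minimal sketch, assuming hypothetical hire/reject decisions from two raters on the same candidates:

```python
# Sketch: inter-rater reliability via Cohen's kappa for two raters.
# The hire/reject decisions below are hypothetical.
from collections import Counter

rater1 = ["hire", "reject", "hire", "hire", "reject", "hire"]
rater2 = ["hire", "reject", "reject", "hire", "reject", "hire"]

n = len(rater1)
observed = sum(a == b for a, b in zip(rater1, rater2)) / n

# Chance agreement, from each rater's marginal category frequencies.
c1, c2 = Counter(rater1), Counter(rater2)
expected = sum(c1[cat] * c2[cat] for cat in c1 | c2) / n ** 2

kappa = (observed - expected) / (1 - expected)
print(f"Observed agreement: {observed:.2f}, Cohen's kappa: {kappa:.2f}")
# kappa = 1 means perfect agreement; 0 means no better than chance.
```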

Internal consistency reliability – a measure of reliability used to evaluate the
degree to which different test items that probe the same construct produce
similar results.
• Average inter-item correlation – obtained by taking all of the items on a test that probe the
same construct, computing the correlation between each pair of them, and averaging those correlations.
• Split-half reliability – obtained by “splitting in half” all items of a test that are intended to
probe the same area of knowledge, then correlating the scores on the two halves.
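Internal consistency is often summarized with Cronbach's alpha, which grows as items probing the same construct correlate with one another. A minimal sketch, assuming hypothetical item scores on a 1–5 scale:

```python
# Sketch: Cronbach's alpha for internal consistency. Rows are respondents,
# columns are items probing the same construct; all scores are hypothetical.
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

data = [            # each row = one respondent's answers to k items
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 4, 5, 4],
]
k = len(data[0])
item_vars = [variance([row[i] for row in data]) for i in range(k)]
total_var = variance([sum(row) for row in data])

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")  # ~0.70+ is a common rule of thumb
```

Split-half reliability follows the same idea: correlate the scores on the two halves (for example, odd-numbered versus even-numbered items), and commonly step the result up with the Spearman-Brown formula, r_full = 2r / (1 + r), to estimate the reliability of the full-length test.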
VALIDITY
It is the most important issue in selecting a test.

It refers to what characteristics the test measures and how well it measures those characteristics.

Validity tells you whether the characteristic being measured by a test is related to the job qualifications and
requirements.

It gives meaning to test scores.

Validity also describes the degree to which you can make specific conclusions or predictions about people
based on their test scores.
APPROACHES FOR CONDUCTING VALIDATION
STUDIES

01 Criterion-related validation – requires demonstration of a correlation or other statistical
relationship between test performance and job performance.

02 Content-related validation – requires a demonstration that the content of the test
represents important job-related behaviors.

03 Construct-related validation – requires demonstration that the test measures the construct
or characteristic it claims to measure, and that this characteristic is important to successful
performance on the job.
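In practice, a criterion-related validation study reduces to estimating a statistical relationship between a predictor (the test) and a criterion (job performance). A minimal sketch with hypothetical data, using statistics.correlation (Python 3.10+) for the Pearson coefficient:

```python
# Sketch: criterion-related validity as the correlation between selection
# test scores and later job-performance ratings. Data is hypothetical.
from statistics import correlation  # Pearson correlation, Python 3.10+

test_scores = [55, 72, 64, 80, 47, 69, 75]         # predictor: selection test
job_ratings = [3.1, 4.0, 3.4, 4.5, 2.8, 3.9, 4.2]  # criterion: performance

r = correlation(test_scores, job_ratings)
print(f"Criterion-related validity coefficient: r = {r:.2f}")
# The higher the coefficient, the better test scores predict job performance.
```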
