Measurement and Data Collection


Measurement & Methods of Data Collection

By: Ephrem Mannekulih
(MSc. in Biostatistics & Health/Inf., Asst. Prof.)
Study Variables
A variable is a characteristic of a person, object, or phenomenon
that can take on different values across persons, objects, or
phenomena.

Two types of variables: dependent and independent variables.

A dependent variable is a variable used to describe or measure the
problem under study.

An independent variable is a variable used to describe or measure
the factors that are assumed to influence (or cause) the problem.
Types of variables…cont’d
For example, in a study of the relationship between smoking
behavior and arterial blood pressure status:
“Status of arterial blood pressure” would be the dependent variable

“Smoking behavior” would be the independent variable.


Types of variables…cont’d
Background variables
Variables that are usually related to a number of independent variables
and influence the problem indirectly.

Almost every study involving human subjects includes
background variables.

E.g., age, sex, educational status, monthly family income, marital status,
and religion.
Types of variables…cont’d
Confounding variable - A variable that is associated with the
problem and with a possible cause of the problem.
Confounders may either strengthen or weaken the apparent relationship
between the problem and a possible cause.

Composite variable - A variable constructed from two or
more other variables.
E.g., Body Mass Index (BMI) is a composite variable of weight and height.
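
A minimal sketch (with hypothetical values) of how a composite variable is derived from its component variables:

```python
# Minimal sketch: deriving a composite variable (BMI) from two measured
# variables, weight and height. The example values are hypothetical.
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index = weight (kg) / height (m) squared."""
    return weight_kg / height_m ** 2

print(round(bmi(70, 1.75), 1))  # 22.9
```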
Measurement in Research
Measurement is a process of mapping aspects of a domain
according to some rule of correspondence.

In measuring, we
devise some form of scale as the range (in set-theoretic terms) and
then transform or map the properties of objects from the domain onto
this scale.
Measurement Scales
The most widely used classification of measurement scales
is:
Nominal scale (unordered categories, e.g., sex or blood group);

Ordinal scale;

Interval scale; and

Ratio scale.
Measurement Scales…Cont’d
Ordinal scale:
Scale of measurement in which data can be assigned to categories
that are ranked in order.

Although the categories are non-numerical, they have a natural ordering.

Examples: patient status, cancer stages, social class, etc.

Measurement Scales…Cont’d
Example of ordinal scale:
Pain level:
1. None
2. Mild
3. Moderate
4. Severe
The numbers have LIMITED meaning: 4 > 3 > 2 > 1 is all we know,
apart from their utility as labels.
Precise differences between the ranks do not exist.
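
A minimal sketch (hypothetical data, using pandas) of how an ordinal variable can be handled so that order is respected without treating the codes as true numbers:

```python
import pandas as pd

# Minimal sketch with hypothetical data: an ordered categorical keeps
# the ranking (None < Mild < Moderate < Severe) without implying that
# the gaps between ranks are equal.
pain = pd.Categorical(
    ["Mild", "Severe", "None", "Moderate"],
    categories=["None", "Mild", "Moderate", "Severe"],
    ordered=True,
)
print(pain.max())      # Severe -- order comparisons are defined
print(pain > "Mild")   # [False  True False  True]
```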

Measurement Scales…Cont’d
Interval scale: Scale of measurement in which data are
measured in numerical form on a continuum and ranked in
terms of magnitude.

Differences between two numbers on the scale are of known
size.

Examples: temperature, IQ score

Example: temperature in °F on 4 consecutive days

Days:        A    B    C    D
Temp. (°F):  50   55   60   65

Measurement Scales…Cont’d
An interval scale has no true zero value.

This means “0” is arbitrary and does not indicate a total absence
of the quantity being measured.
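
A minimal sketch (hypothetical temperatures) of why the arbitrary zero matters: differences are meaningful, but ratios change with the unit chosen:

```python
# Minimal sketch: on an interval scale such as °F, differences are
# meaningful but ratios are not, because the zero point is arbitrary.
def f_to_c(f: float) -> float:
    return (f - 32) * 5 / 9

a, b = 50.0, 100.0
print(b - a)                    # 50.0 -- a meaningful difference
print(b / a)                    # 2.0, but NOT "twice as hot"
print(f_to_c(b) / f_to_c(a))    # ~3.78 -- the ratio depends on the unit
```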

Measurement Scales…Cont’d
Ratio scale: The highest level of measurement scale, in which
data are measured in numerical form on a continuum and
ranked in terms of magnitude.

Measurement begins at a true zero point and the scale has
equal intervals.

Examples: height, age, weight, BP, etc.

Measurement Scales…Cont’d
Characterized by equality of ratios as well as equality of
intervals: both can be determined.
Someone who weighs 80 kg is two times as heavy as someone else who
weighs 40 kg.

This is true even if weight had been measured in other units.
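
A minimal sketch showing that ratios on a true-zero scale are invariant under a change of units (the conversion factor below is the standard kg-to-lb one):

```python
# Minimal sketch: on a ratio scale (true zero), ratios survive a change
# of units -- 80 kg is twice 40 kg whether measured in kg or lb.
KG_TO_LB = 2.20462

w1_kg, w2_kg = 80.0, 40.0
print(w1_kg / w2_kg)                              # 2.0
print((w1_kg * KG_TO_LB) / (w2_kg * KG_TO_LB))    # 2.0
```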

Why measurement Validity & Reliability?
The quality of research outputs depends on the validity of the
instruments we use.

The purpose of establishing reliability and validity in research
is essentially to ensure that data are sound and replicable,
and that the results are accurate.

Measurement error not only affects the ability to find
significant results but can also lead to damaging
consequences from the conclusions drawn.
Sources of Error in Measurement
Respondent: Inability of the respondent to respond
accurately and fully.
Reasons for respondent-related error:

Reluctance to express strong negative feelings

Having very little knowledge but being unwilling to admit ignorance

Fatigue, boredom, anxiety, etc.


Sources of Error…Cont’d
Situation: Any condition which places a strain on the interview
can have serious effects on interviewer-respondent
rapport.
If the respondent feels that anonymity is not assured, he may be
reluctant to express certain feelings.
Measurer: The interviewer can distort responses by rewording
or reordering questions.
The interviewer's behavior, style, and looks may encourage or discourage
certain replies from respondents.
Careless mechanical processing may distort the findings.
Errors may also creep in because of incorrect coding, faulty tabulation,
and/or statistical miscalculations, particularly at the data-analysis stage.
Sources of Error…Cont’d
Instrument: Error may arise because of a defective
measuring instrument:
Use of complex words beyond the comprehension of the respondent,
Ambiguous meanings,
Poor printing,
Inadequate space for replies,
Response choice omissions, etc.
Another type of instrument deficiency is poor sampling of
the universe of items of concern.
Measurement and Information Bias
Information Bias: refers to any systematic error introduced
during measurement of information
It occurs when the individual measurements of disease or exposure are
inaccurate

That is, when they do not measure correctly what they are supposed to
measure
Information Bias…Cont’d
In analytical studies, usually one factor is known and another
is measured.

Examples:
In case-control studies, the ‘outcome’ is known and the ‘exposure’ is
measured.

In cohort studies, the exposure is known and the outcome is measured.
Types of Information Bias
Interviewer Bias:
An interviewer’s knowledge may influence the structure of questions
and the manner of presentation, which may influence responses

Recall Bias:
Those with a particular outcome or exposure may remember events
more clearly or amplify their recollections

Observer Bias:
Observers may have preconceived expectations of what they should
find in an examination
Information bias…
Hawthorne effect:
An effect first documented at the Hawthorne Works manufacturing plant:
people act differently if they know they are being watched

Surveillance bias:
The group with the known exposure or outcome may be followed more
closely or longer than the comparison group

Social desirability bias:
Occurs because subjects are systematically more likely to provide a
socially acceptable response
Information Bias…
Placebo effect:
In experimental studies which are not placebo-controlled, observed
changes may be ascribed to the positive effect of the subject's belief
that the intervention will be beneficial

Misclassification bias: errors made in classifying
either disease or exposure status.
It is a systematic bias introduced when the measurement
(ascertainment) of exposure, outcome, or both is not done well or not
done using the same procedure across groups.
Types of Misclassification Bias
Differential (non-random) misclassification:
Measurement (ascertainment) failure of the exposure or outcome
variable occurs on only one side of the comparison.

Measurement failure is different between the groups.

Non-differential (random) misclassification:
Measurement (ascertainment) failure occurs on both sides of the
comparison.

Measurement failure is similar across the groups.
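
A minimal worked sketch (hypothetical 2×2 counts; the sensitivity and specificity values are chosen purely for illustration) of how non-differential exposure misclassification pulls the odds ratio toward the null:

```python
# Minimal sketch: non-differential exposure misclassification (the same
# sensitivity/specificity in cases and controls) biases the odds ratio
# toward the null (OR = 1). All counts below are hypothetical.
def observed(exposed, unexposed, sens=0.8, spec=0.9):
    """Apply imperfect exposure ascertainment to true counts."""
    obs_exposed = exposed * sens + unexposed * (1 - spec)
    return obs_exposed, (exposed + unexposed) - obs_exposed

a, b = 60, 40   # cases: truly exposed, truly unexposed
c, d = 40, 60   # controls: truly exposed, truly unexposed
print((a * d) / (b * c))        # true OR = 2.25

ao, bo = observed(a, b)
co, do = observed(c, d)
print((ao * do) / (bo * co))    # ~1.77 -- attenuated toward 1
```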


Misclassification bias…cont’d
Why misclassification of disease status?
Incorrect diagnosis
Limited knowledge
Complex diagnostic process
Inadequate access to technology
Laboratory errors
Subclinical disease
Detection bias (more thorough exam if exposed)
Recall error
Unwillingness to be truthful
Misclassification bias…cont’d
Why misclassification of Exposure?
Imprecise measurement

Subject self report

Interviewer bias

Incorrect coding of exposure


Information bias: solutions
Maximise the accuracy of measurements

Minimise the ambiguity of measurements

‘Biological’ measures vs. questionnaire/interviewer ratings

Historical measures of exposure (e.g. old notes)

Blinding interviewers to case status

Blinding participants (and/or interviewers) to the study hypothesis


Controlling for Bias
Be purposeful in the study design to minimize the chance of
bias

Define, a priori, who is a case or what constitutes exposure

Define categories within groups clearly (age groups, aggregates of
person-years)

Set up strict guidelines for data collection

Train observers or interviewers to obtain data in the same fashion

Use more than one observer or interviewer, but not too many, since they
cannot all be trained in an identical manner
Cont’d…
When interpreting study results, ask yourself these questions:
Given the conditions of the study, could bias have occurred?

Is bias actually present?

Are the consequences of the bias large enough to distort the measure of
association in an important way?

In which direction is the distortion: towards the null or away from the
null?
Tests of Sound Measurement
Sound measurement must meet the tests of
Validity,

Reliability and

Practicality

In fact, these are the three major considerations one should
use in evaluating a measurement tool.
Validity vs. Reliability

Validity: how well a measurement agrees with an accepted value.
Reliability: how well a series of measurements agree with each other.
Test of Validity
Validity:
The degree to which an instrument measures what it is supposed to
measure.
The extent to which differences found with a measuring instrument
reflect true differences among those being measured

There are four types of validity:
Face validity
Content validity
Criterion-related validity and
Construct validity.
Test of Validity…Cont’d
Face validity, also known as logical validity:
It refers to whether, on the surface, an assessment appears to measure
what it is supposed to measure.

It is concerned with whether a measure seems relevant and
appropriate for what it’s assessing.

It’s a simple first step in measuring the overall validity of a test or
technique.

It’s a relatively intuitive, quick, and easy way to start checking whether
a new measure seems useful at first glance.
Test of Validity…Cont’d
Having face validity doesn’t guarantee that you have good
overall measurement validity or reliability.
It is considered a weak form of validity because it is assessed
subjectively, without any systematic testing, and is at risk of
bias.
But testing face validity is an important first step in reviewing
the validity of your test.
Once you’ve secured face validity, you can assess other,
more complex forms of validity.
Test of Validity…Cont’d
Content validity: the extent to which a measuring
instrument provides adequate coverage of the topic under
study.
High content validity means the test covers the topic extensively.

Low content validity means the test is missing important measurement
elements.


Test of Validity…Cont’d
Criterion-related validity: relates to our ability to predict some
outcome or estimate the existence of some current condition.
Criterion validity shows you how well a test correlates with an
established standard of comparison called a criterion.

A measurement instrument, like a questionnaire, has criterion validity if
its results converge with those of some other, accepted instrument,
commonly called a “gold standard.”
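
A minimal sketch (hypothetical scores) of how criterion validity is typically quantified: correlate the new instrument’s results with a gold-standard measure.

```python
from scipy.stats import pearsonr

# Minimal sketch: criterion validity as the correlation between a new
# instrument and an accepted gold standard. Scores are hypothetical.
new_tool      = [12, 15, 9, 20, 17, 11, 14, 18]
gold_standard = [11, 16, 8, 21, 15, 10, 15, 19]

r, p = pearsonr(new_tool, gold_standard)
print(f"criterion validity r = {r:.2f} (p = {p:.3f})")
```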
Test of Validity…Cont’d
The criterion concerned must possess the following qualities:
Relevance: a criterion is relevant if it is defined in terms we judge to be
the proper measure

Freedom from bias: the criterion gives each subject an equal
opportunity to score well

Reliability: stable or reproducible

Availability: the information specified by the criterion must be available

Test of Validity…Cont’d
Construct validity:
The degree to which a measure conforms to predicted correlations with
other theoretical propositions.
The degree to which scores on a test can be accounted for by the
explanatory constructs of a sound theory.
To determine construct validity, we associate a set of other
propositions with the results received from using our measurement
instrument.
If measurements on our devised scale correlate in a predicted way
with these other propositions, we can conclude that there is some
construct validity.
Test of Validity…Cont’d
There are two main types of construct validity.
Convergent validity: the extent to which measures of the same or
similar constructs actually correspond to each other.

Discriminant validity: conversely, the extent to which measures of
constructs that should be unrelated, very weakly related, or negatively
related actually are so in practice.
Test of Reliability
Reliability: the degree to which a measuring instrument provides
consistent results.
A reliable measuring instrument contributes to validity, but a reliable
instrument need not be a valid one.

Reliability is not as valuable as validity, but it is easier to assess
than validity.

The two aspects of reliability are stability and equivalence.

Test of Reliability…Cont’d
Stability: concerned with securing consistent results on
repeated measurements of the same person with the
same instrument.
We usually determine the degree of stability by comparing the results of
repeated measurements.
Equivalence: considers how much error may be introduced
by different investigators or different samples of the items
being studied.
A good way to test for the equivalence of measurements by two
investigators is to compare their observations of the same events.
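
A minimal sketch (hypothetical data) of how the two aspects might be checked in practice: a test-retest correlation for stability, and percent agreement between two raters for equivalence.

```python
import numpy as np
from scipy.stats import pearsonr

# Stability: test-retest correlation of the same instrument applied to
# the same people at two time points (hypothetical scores).
time1 = np.array([10, 14, 9, 18, 12, 16])
time2 = np.array([11, 13, 9, 17, 13, 15])
print("stability r =", round(pearsonr(time1, time2)[0], 2))

# Equivalence: agreement between two raters observing the same events,
# here as simple percent exact agreement (hypothetical ratings).
rater1 = ["yes", "no", "yes", "yes", "no"]
rater2 = ["yes", "no", "no", "yes", "no"]
print("equivalence =", np.mean([x == y for x, y in zip(rater1, rater2)]))
```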
Test of Reliability…Cont’d
Reliability can be improved in the following two ways:
By standardizing the conditions under which the measurement takes
place

This improves the stability aspect.

By carefully designing directions for measurement with no variation from
group to group,

by using trained and motivated persons to conduct the research, and

by broadening the sample of items used.

This improves the equivalence aspect.

Test of Practicality
Practicality can be judged in terms of economy,
convenience, and interpretability.
The economy consideration suggests that some trade-off is needed
between the ideal research project and what the budget can
afford.
The convenience test suggests that the measuring instrument should be
easy to administer.
For instance, a questionnaire with clear instructions is certainly more
effective and easier to complete.
The interpretability consideration is especially important when persons
other than the designers of the test are to interpret the results.
Technique of Developing Measurement Tools

The technique of developing measurement tools involves a
four-stage process, consisting of the following:
Concept development;

Specification of concept dimensions;

Selection of indicators; and

Formation of index.
Developing Tools…Cont’d
Concept development:
At this step the researcher should arrive at an understanding of the major
concepts pertaining to the study.

This step is more apparent in theoretical studies than in more
pragmatic research, where the fundamental concepts are often
already established.
Developing Tools…Cont’d
Specify the dimensions of the concepts
This task may be accomplished by deduction, i.e., by adopting a more or
less intuitive approach, or

by empirical correlation of the individual dimensions with the total
concept and/or the other concepts.
Developing Tools…Cont’d
Selection of indicators:
Indicators are specific questions, scales, or other devices by which a
respondent’s knowledge, opinions, expectations, etc., are measured.

As there is seldom a perfect measure of a concept, the researcher
should consider several alternatives for the purpose.

Using more than one indicator gives stability to the scores and
also improves their validity.
Developing Tools…Cont’d
Formation of an index: obtain an overall index for the various
concepts concerning the research study.
It is the task of combining several dimensions of a concept, or different
measurements of a dimension, into a single index.
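
A minimal sketch (hypothetical scores; equal weights are assumed purely for illustration) of combining two dimension measurements into a single index by standardizing and averaging:

```python
import numpy as np

# Minimal sketch: form one index from two dimensions of a concept by
# z-scoring each dimension and averaging. Data and equal weighting
# are illustrative assumptions.
dim1 = np.array([3.0, 4.0, 2.0, 5.0])    # e.g., a knowledge score
dim2 = np.array([40., 55., 35., 60.])    # e.g., an attitude score

def z(x):
    return (x - x.mean()) / x.std()

index = (z(dim1) + z(dim2)) / 2
print(index.round(2))
```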
Scaling
Researchers often face a problem of valid measurement
when:
The concepts to be measured are complex and abstract, and

There are no standardized measurement tools

While measuring attitudes and opinions

While measuring physical or institutional concepts

There have to be procedures which enable us to
measure abstract concepts more accurately.

This brings us to the study of scaling techniques.


Scaling
Scaling describes the procedures of assigning numbers to
various degrees of opinion, attitude, and other concepts.

It can be done in two ways:

Making a judgment about some characteristic of an individual and
then placing him directly on a scale that has been defined in terms of
that characteristic, and

Constructing questionnaires in such a way that the score of an individual’s
responses assigns him a place on a scale.
Important Scaling Techniques
Rating scales
The graphic rating scale
Scaling…Cont’d
Ranking scales: we make relative judgments against other
similar objects
Method of paired comparisons
Method of rank order
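
A minimal sketch (hypothetical judgments) of the method of paired comparisons named above: each object is judged against every other, and the rank order follows from how often each object is preferred.

```python
from itertools import combinations

# Minimal sketch: rank objects from pairwise preference judgments.
# The preference data below are hypothetical.
objects = ["A", "B", "C"]
preferences = {("A", "B"): "A", ("A", "C"): "C", ("B", "C"): "C"}

wins = {o: 0 for o in objects}
for pair in combinations(objects, 2):
    wins[preferences[pair]] += 1

print(sorted(wins.items(), key=lambda kv: -kv[1]))  # C > A > B
```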
Different Scales for Measuring Attitudes of People
Scaling…Cont’d
Summated Scales (or Likert-type Scales):
They are developed using the item-analysis approach

A particular item is evaluated on the basis of how well it discriminates
between those persons whose total score is high and those whose score
is low.

Those items or statements that best meet this sort of discrimination test
are included in the final instrument.
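
A minimal sketch (hypothetical responses) of the item-analysis approach: compare how high-scoring and low-scoring respondents answer each item, and keep the items that discriminate.

```python
import numpy as np

# Minimal sketch: item analysis for a summated (Likert-type) scale.
# Rows = respondents, columns = items; responses are hypothetical.
responses = np.array([
    [5, 4, 5, 2, 4],
    [4, 5, 4, 3, 5],
    [2, 1, 2, 3, 1],
    [1, 2, 1, 2, 2],
    [3, 3, 3, 3, 3],
])
totals = responses.sum(axis=1)
high = responses[totals >= np.median(totals)].mean(axis=0)
low  = responses[totals <  np.median(totals)].mean(axis=0)
print("discrimination per item:", (high - low).round(2))
# Items with a large high-low gap (here items 1-3 and 5) would be
# retained in the final instrument; item 4 discriminates poorly.
```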
Operationalizing variables
It is necessary to operationally define both the dependent
and independent variables.

Operationalizing variables helps to:

Easily determine the values of a variable

Make variables ‘measurable’ and ensure consistency in measurement

For some variables it is important to have one or more precise
INDICATORS to operationalize them.
Identifying Variables:
The concepts used in research should be operationalized in
measurable terms

so that the extent of variation in respondents’ understanding
is reduced, if not eliminated.

Two important points help reduce variability in the understanding of
variables:
Techniques for operationalizing concepts, and

Knowledge about variables

Concept Vs. Variable:
Concepts are mental images or perceptions about certain
phenomena.
Therefore their meaning varies markedly from individual to individual.

A concept cannot be measured.

A variable can be subjected to measurement, by
crude/refined or subjective/objective units of measurement.

It is therefore important for concepts to be converted into
variables.
Cont’d…

Concept: a subjective impression; cannot be measured; no uniformity in
its understanding among different people.
E.g., excellent, high achiever, rich.

Variable: measurable, with a degree of precision that varies from scale to
scale and variable to variable.
E.g., gender (male vs. female), age (in years, months), weight (in kg, g).
Concepts, indicators and variables
If you are using a concept in your study, you need to
operationalize it in measurable terms.
For this, you need to identify indicators.

Indicators are a set of criteria that are reflective of the
concept and can be converted into variables.

The choice of indicators for a concept might vary between
researchers,

but those selected must have a logical link with the concept.

Concepts → Indicators → Variables
Cont’d…

Concept: Rich
Indicators: 1. daily income; 2. assets (total value of homes, car, investments)
Variables: 1. income; 2. assets
Working definitions: 1. rich if income > 100,000; 2. rich if assets > 250,000

Concept: Effectiveness of a service
Indicators: no. of patients served; fulfilment of important equipment needs
Variable: no. of patients served per month
Working definition: difference before and after
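
A minimal sketch of turning the working definitions above into code (the threshold values are the illustrative ones from the table as reconstructed, and the helper name is hypothetical):

```python
# Minimal sketch: operationalizing the concept "rich" through its
# indicators, using the illustrative thresholds from the table above.
def is_rich(income: float, assets: float) -> bool:
    return income > 100_000 or assets > 250_000

print(is_rich(income=120_000, assets=50_000))   # True
print(is_rich(income=60_000, assets=80_000))    # False
```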
Data Collection Techniques
In the collection of data, we have to be systematic.

Data collection techniques allow us to systematically collect
data about study subjects and the settings in which they
occur.

If data are collected haphazardly, it will be difficult to answer
research questions in any conclusive way.

Depending on the types of variables and the objectives of the
study, different data collection methods can be employed.
What is Data?
Data: a collection of facts and evidence from which we can
extract information and draw conclusions.

Types of data
Primary data: data collected directly from individuals, subjects, or
respondents for the purposes of a particular study.

Secondary data: data originally collected by other
people or agencies, then statistically treated, with the information
contained in it used for another purpose.
Sources of data
Routinely kept records

Literature

Surveys

Experiments

Reports

Observation, etc.
Stages of data collection
Three Stages in the Data Collection Process
Stage 1: Permission to proceed

Stage 2: Data collection

Stage 3: Data handling

Stage 1: Permission to proceed

Ethical approval and consent must be obtained from the relevant
authorities.
Cont’d…
Stage 2: Data collection

When collecting our data, we have to consider:

Logistics: who will collect what, when, and with what resources

Quality control

Measures that help ensure good quality of data:

Prepare a fieldwork manual for the research team as a whole

Train research assistants (data collectors, supervisors) carefully

Pre-test research instruments

Cont’d.…
Stage 3: Data handling
A clear procedure should be developed for handling and storing the data.
Data Collection Methods

Data collection methods include interviews, self-administered
questionnaires, observation, document review, and others.
Interview Types
Face-to-face, telephone, or Skype
Ideally tape-record with the participant’s permission and take notes
Unstructured
Focus on a broad area for discussion
Participant talks about the topic in their own way
Semi-structured
Common set of topics or questions for each interview
Questions vary depending on the participant
Flexibility regarding the order of questions
Follow up on topics that emerge
Structured or focused interview
Identical set of questions for each interview
Questions asked in the same way, using the same words, for each interview
Open Questions
• Can you tell me about...?
• When did you notice...?
• Why do you think that happened...?
• What happened then...?
• Do you think...?
• How did you know...?
• Did that affect....?
• How did you feel...?
• What impact did that have on....?
• Who else was there...?
• What did you see as the main...?
• Where was that....?
• What did you think....?
Questionnaire
Guidelines for Constructing a Questionnaire/Schedule
The researcher must keep in view the problem to be studied and be clear
about the various aspects of the research problem
The design should depend on the nature of the information sought, the
sampled respondents, and the kind of analysis intended
A rough draft of the questionnaire/schedule should be prepared, giving due
thought to the appropriate sequence of questions
The researcher must invariably re-examine, and if needed revise,
the rough draft for a better one
A pilot study should be undertaken to pre-test the questionnaire. The
questionnaire may be edited in the light of the results of the pilot study.
The questionnaire must contain simple but straightforward directions for the
respondents so that they do not feel any difficulty in answering the
questions.
Interview Skills
Sensitivity to interviewer/interviewee interaction: researching ‘up’, ‘across’,
or ‘down’?
Establishing rapport: without affecting neutrality
Listening attentively without passing judgement: the purpose of the interview
is to hear the participant’s perspective, experiences, and views
Re-focusing and maintaining control: if going off topic, given limited time
Use of probes: to elicit further information and get examples
Learning the language: be sensitive to the cultural setting and the discourse
commonly used
Non-verbal messages: the participant’s body language may show discomfort
with a question
Encouraging responses: expressions of understanding and interest, echoing
their words, summarizing
Encouraging responses non-verbally: eye contact, head nodding, ‘um huh’,
so as not to interrupt
Flexibility: to adapt to what emerges during the interview


Focus Groups
Focus groups are an adaptation of the interview technique - a group
interview which is typically tape- or video-recorded with permission
Differs in that it seeks to generate discussion among the group with the
help of the focus group facilitator
Usually 5 to 13 people who have something in common connected
to the research topic (ideal = 6-8)
Typically 1-2 hours in length
Can include tasks for the group to complete, e.g., ranking/prioritising a list,
buzz groups, etc.
Emphasis on interactions within the group, the content of the discussions,
and how the topics are discussed
Types of Focus Groups
 Exploratory
 Pre-pilot stage of forming a research topic
 To discover what participants think is important about a topic
 To assist the formation of interview questions or surveys

 Observing and Recording


 Emphasis on the way the group discusses a topic
 Who leads, how language is used, how concepts are defined

 Consultation and Evaluation


 Participants asked to discuss a proposal
 Participants asked to reflect on a project or event
 Often includes a group task

 Checking Back
 To discuss emerging findings of interviews, surveys, focus groups etc with participants

 Involving and Empowering Participants


 Providing a sense of ownership in decision-making
Facilitating Focus Groups

Encourage members of the group to interact
Keep the discussion focused on the topics
Manage the discussion without leading or influencing it
Be aware of the group dynamics
Manage the time
Ensure all members participate
Manage disagreements
Facilitate tasks
Self-administered Questionnaire:
It is a data collection tool in which written questions are presented
to be answered by the respondents in written form.
A self-administered questionnaire can be administered in different
ways:
1. Mailing it to respondents
2. Gathering all or part of the respondents, giving oral or written instructions,
and letting them fill out the questionnaires
3. Hand-delivering questionnaires to respondents and collecting them
later

The questions can be either open-ended or closed (with pre-categorized
answers)
Self-administered Questionnaire:
Advantages:-
Can cover a large number of people or organizations

Relatively cheap

 No prior arrangements are needed

 No interviewer bias
Cont’d….
Disadvantages:-
Difficult to design; often requires many rewrites before an
acceptable questionnaire is produced

Questions have to be relatively simple

Time delay while waiting for responses

Assumes respondents are literate

Historically low response rates

Not possible to give assistance if required

Observation
Observation is the act of watching social phenomena in the
real world and recording events as they happen
Takes place in the real world/ real situation

Can provide detailed rounded picture of phenomena or situation

Data recorded in situ

Rich data collected

Observation Types:
Covert, Overt, Complete Observer or Complete Participant
Observation
Simple Observation
Researcher as objective outsider
Participant Observation
Researcher immersed in social situation
To achieve intimate knowledge of the setting or group
To understand people’s behaviours, cultural practices, power dynamics,
etc.
To understand why specific practices occur, how they originate, and how
they change over time
Observation Considerations
Ethical Issues
Covert vs. overt observation
Ethical issues with covert observation?
Gaining informed consent from the full group
Observer Effects (Hawthorne Effect)
Will people change their behaviour if they know they’re being
observed?
Losing objectivity if immersed in a group - ‘going native’
Recording Data
Difficult to decide what to record
Time-consuming
Documentary Review
Documents are often readily available potential sources of
data
Contain large amounts of information

Static ‘snapshot’ of a particular time

Documents are socially constructed and can therefore tell us more


than the information they contain

Useful when wanting to triangulate data


Documents
Primary and Secondary Sources:

Written records

Policy and guidelines

Numerical data (census of population or surveys)

Qualitative data (report & findings of other research)

Mass media: newspaper, TV, documentaries, films

Personal documents (letters, diaries)

Historical documents

Visual or audio material


Other Methods

Visual methods, participant diaries, participant photos, narratives,
secondary data, poetry
Data Quality Assurance Measures
Standardizing all the features and categories of data
Using consistent data formats and measurement standards
Rigorous data handling and analysis procedures:
Select data collection and storage tools that promote data
consistency
Training
Use of different sources of data
Combining Different Data Collection Techniques

Pre-testing
Supervision
