
Don Cherono Faith

THE CO-OPERATIVE
UNIVERSITY OF KENYA

DUCU 1207: RESEARCH METHODS


RESEARCH METHODS

Contact Hours: 36 Lecture Hours

Credits: 8 Credit Units and 2 Exam Hours

Course Description

Rationale:

Research may be defined as a disciplined inquiry or systematic investigation aimed at providing solutions to problems.

The main purpose of research is to advance or discover new knowledge and improve practice through the scientific process. To ensure quality and timely completion of a research project, clear guidelines and procedures are necessary.

The purpose of this module is to help the student acquire knowledge and skills of research methods and their application to problem solving in both academic research and the management decision-making process.

General Objective:

This course is intended to help the student prepare a research proposal, and conduct and submit a comprehensive research project report on a topic in microfinance, using knowledge gained in the course.

Specific Objectives:

By the end of this course, students will be able to:

· Demonstrate knowledge of the process of conducting research and how to use findings to inform managerial decision making
· Identify and define the research problem and understand the research process
· Have a basic understanding of data collection and data analysis
· Interpret research findings to inform managerial decision making
· Identify a research problem in a business field and write a research proposal and research project

Course Content

1: Introduction to Research

2: The Research Process (Steps in the research process)

· Selecting A Research Topic

· Formulating The Research Problem

· Defining Concepts and Developing Conceptual Framework

· Literature Review

· Selecting The Research Design

· Selecting The Data Collection Method

· Selecting The Survey Method

· Preparing Data Collection Instruments

· Selecting Data Analysis Tools

· Report Writing and Dissemination of Results


3: Chapter One: Introduction

4: Chapter Two: Literature Review

5: Chapter Three: Research Methodology


6: Chapter Four: Data Analysis and Presentation of Results

7: Chapter Five: Summary, Conclusions and Recommendations

8: Miscellaneous

Key Learning Points

· Introduction to research

· The research process

· Preparing the research proposal

· Preparing the final research project report

Training Methods

· Short Lectures

· Small group discussions followed by presentations and panel

discussions

· Individual and group homework/assignments.

Student Activity:

The major part of this section is to be covered as an independent study. The contact time will be used for consultations between students and supervisors. Each student will be required to produce a system and related documentation, and to participate in an oral presentation and defense of the research proposal.

Recommended Reading:

Mugenda, O.M. and Mugenda, A.G. (2003). Research Methods: Quantitative and Qualitative Approaches. Acts Press.

Saunders, M., Lewis, P. and Thornhill, A. (2005). Research Methods for Business Students, 3rd Edition. Dorling Kindersley Pvt. Ltd.

Turabian, Kate L. A Manual for Writers of Term Papers, Theses, and Dissertations (Chicago Guides to Writing, Editing, and Publishing). The University of Chicago Press.

Orodho, J.A. (2000). Techniques of Writing Research Proposals and Reports in Education and Social Sciences. Nairobi: Masola Publishers.


Other Support Materials and Resources

Borg, R.W. and M.D. Gall. 1989. Educational Research: An Introduction. New York: Longman, Inc.

Chandran, Emil. 2004. Research Methods: A Quantitative Approach. Nairobi: Daystar University.

Dornan, Edward A. and Charles W. Dawe. 1984. The Brief English Handbook. Little, Brown and Company.

Higham, Nicholas J. 1993. Handbook of Writing for the Mathematical Sciences. SIAM Press.

Orodho, J.A. 2004. Techniques of Writing Research Proposals and Reports in Education and Social Sciences. Nairobi: Masola Publishers.

Peil, Margaret. 1995. Social Science Research Methods: A Handbook for Africa. Nairobi: EAEP.

Strunk, William Jr. and E. B. White. 1972. The Elements of Style. New York: Macmillan.


RESEARCH METHODS

COURSE OUTLINE ASSESSMENT:

A. COURSEWORK AND PROJECT PROPOSAL

(1) Assignments and/or CAT ……………… 30%

(2) Final Exam ………………………………… 70%

TOTAL 100%


LESSON ONE: INTRODUCTION TO RESEARCH

1.1 What is research?

§ There are many ways of defining "research".
§ To research is to carry out a diligent inquiry or a critical examination of a given phenomenon.
§ Research also involves a critical analysis of existing conclusions or theories with regard to newly discovered facts.
§ Research is a systematic, controlled, empirical, and critical investigation of hypothetical propositions about the presumed relations among natural phenomena.
§ Research is the process of arriving at dependable solutions through the systematic collection, analysis and interpretation of data.
§ Research is the careful and systematic inquiry into or examination of a field of knowledge in order to establish facts and principles.

All definitions emphasize that research is a process, not an event. It must therefore be carefully planned, implemented, disseminated, and consumed.

1.2 Purposes of research

§ The main purpose of research is to discover new knowledge. This involves the discovery of new facts, their correct interpretation and practical application.
§ Secondly, research seeks to describe a phenomenon. Accurate identification of any event involves thorough description.
§ Thirdly, research enables prediction, that is, the ability to estimate a phenomenon. We sometimes use a set of variables to predict a given variable.
§ The fourth purpose of research is to enable control. In scientific research, control is concerned with the ability to regulate the phenomenon under study. Many scientific experiments are designed to achieve this objective.
§ The fifth purpose of research is to enable explanation of phenomena. Explanation involves accurate observation and measurement of a given phenomenon.
§ The sixth purpose of research is to enable theory development. Theory development involves formulating concepts, laws and generalizations about a given phenomenon.

1.3 Distinguishing characteristics of research

§ Research is systematic

§ Research is controlled

§ Research is empirical. It deals with data, which is tested scientifically.

§ Research is self-correcting. The results of research are open to public scrutiny.

1.4 Research and Knowledge

Suppose you want to know why many clients use loans outside the business. There are four

sources of knowledge, namely: -

§ Experience

§ Reasoning

§ Authority

§ Research

Research is the most important tool for advancing knowledge. It is also the most important tool for promoting progress, relating to our own environment, enhancing the accomplishment of our purposes and resolving conflicts within any sector such as microfinance.

Social research studies the problems of man in a social setup. Being a very human process, it is prone to error and bias. To minimize the influence of error and bias on their findings, researchers have developed various procedures. As such, the approach to inquiry in microfinance that involves the conduct of research is different from other approaches to learning about microfinance and improving it.

For that reason, research ranks above other approaches such as:


§ Folklore and mysticism (including magic)

§ Dogma and tradition

§ Casual observation

1.5 Areas of research in Microfinance

§ Opinions and attitudes

§ Needs of the people

§ Feasibility of proposed microfinance products and/or activities

§ Identifying relevant approaches and models

§ Evaluation of ongoing programs, current products, policies, procedures, approaches, etc.

§ How certain events occur and the relationship between events. This has got to do with

human behavior and how certain events affect human behavior.


1.6 Quality Requirements of Research Projects

Whether a student is seeking to complete a diploma or a degree at the undergraduate or Master's level, one key factor that must be borne in mind is quality. Quality is generally defined as conformance to requirements or fitness for purpose. The degree/diploma project demonstrates the student's readiness to join scholars and practitioners in advancing knowledge and practice in the real world of business. Consequently, students are expected to produce quality research projects that:

§ Make a contribution to knowledge in the discipline,
§ Address current problems of interest to practitioners,
§ Demonstrate a mastery of a specialization area within the degree/diploma program,
§ Reflect the integration of practice and scholarship, and
§ Are of publishable quality.

1.7 Research Project Prerequisite


The major prerequisite for the research project is Business Statistics and Research Methods. All students are required to complete Business Statistics and Research Methods before registering for the research project course. Each student taking Business Research Methods must develop a detailed research proposal for the intended research project. The research proposal should focus on the student's area of concentration within their diploma program, in this case, microfinance.
1.8 Role of the Supervisor in Research Project

The supervisor should be an expert or experienced in the intended area of study. The major role

of the project supervisor is to supervise the design and development of the research proposal, the

conduct of the research, and the preparation of the final research project document.

The supervisor should ensure that the research project is academically sound, is clearly and

correctly written, and provides an original contribution to the field.

1.9 The Research Proposal

The research proposal is a blueprint or plan for an intended study. Research proposal preparation is essential in the development and pursuit of a research endeavor. The quality of the final research project often depends on the quality of the research proposal. Consequently, each student must develop a comprehensive research proposal before registering for the research project.

The research proposal for the project should consist of three major chapters or sections: introduction, literature review and methodology. In addition to the three major chapters, the research proposal should also provide an abstract, references or bibliography, an implementation schedule and an implementation budget. The three major chapters or sections (introduction, literature review, and methodology) of the research proposal should correspond to the first three sections of the research project report in terms of quality and comprehensiveness. The only difference is that the introduction and methodology sections are written in the present or future tense in the research proposal and in the past tense in the research project report.

1.9.1 Introduction

The introduction section of the proposal should include:

• Background of the problem

• Statement of the problem

• Purpose of the study or general objective

• Research questions or specific objectives or hypothesis. The hypothesis should be stated if the

study involves experimental designs or statistical tests.

• Importance or justification or significance of the study

• Scope of the study

• Chapter Summary

1.9.2 Literature Review

The literature review section of the proposal should present a review of the literature related to

the problem and purpose. The literature review section should therefore be organized or

categorized according to the research questions or specific objectives in order to ensure

relevance to the research problem. It should be written using appropriate writing style such as the

American Psychological Association (APA) style.

1.9.3 Research Methodology

The research methodology section of the proposal should provide explanation and description of

the methods and procedures used in conducting the study. This section should include:

• Introduction

• Research design

• Population and sample


• Data collection methods (instrumentation)

• Research procedures

• Data analysis methods


• Chapter Summary

1.10 Submission of the Final Research Project

The supervisor must approve the final document before submission. The supervisor should

ensure that the final document is of high quality and complies with the appropriate writing style

such as the American Psychological Association (APA) style.

1.11 Research Project Format

Research project reports consist of two main sections, the preliminary section or front matter and

the text or body.

1.11.1 The Sequence of Front Matter

The front matter or preliminary pages in a research project should be presented in the following

sequence:

i. First title page

ii. Second title page

iii. Student’s declaration

iv. Copyright page

v. Abstract

vi. Acknowledgement (optional)

vii. Dedication (optional)

viii. Table of contents


ix. List of tables (if more than four tables are in the text)

x. List of figures (if more than four figures are in the text)

xi. Definition of terms

1.11.2 Front Matter Pagination


The front matter or preliminary pages of a research project should be paginated with small Roman numerals at the bottom center of the page. The pagination should be as follows:

i. Second title page is counted as i, but not paginated

ii. Student’s declaration is paginated as ii

iii. Copyright page is paginated as iii

iv. Abstract is paginated as iv - v

v. Acknowledgement is paginated depending on the abstract

vi. Dedication is paginated depending on the acknowledgement

vii. Table of contents is paginated depending on the dedication

viii. List of tables is paginated depending on the table of contents

ix. List of figures is paginated depending on the list of tables

1.11.3 The Abstract

The abstract is required with all research projects. The purpose of the abstract is to provide a

clear and concise summary of the:

• Purpose or problem

• Methodology used

• Major findings and conclusions


• Major recommendations or suggestions for improvement

The abstract should be approximately 300-400 words. It should be prepared after the five chapters or major sections of the project report have been written, but presented as front matter in terms of sequence.

1.11.4 The Body or Text

The majority of research projects in business, economics and social sciences follow a five-chapter model. The major sections in the five-chapter model are:

• Chapter 1: Introduction

• Chapter 2: Literature Review

• Chapter 3: Methodology

• Chapter 4: Data Analysis and Presentation of Results

• Chapter 5: Summary, Conclusions and Recommendations.

In addition to the five major sections, a research project should include an abstract, reference or

bibliography, and appendix for data collection instruments and other relevant materials used in

the study.

Definition of basic terms used in research

· Population: an entire group of individuals, events or objects having a common observable characteristic.
· Sample: a smaller group obtained from the accessible population.
· Sampling: the process of selecting a number of individuals for a study in such a way that the individuals selected represent the population.
· Variable: a measurable characteristic that assumes different values among the subjects. Variables can be dependent, independent, intervening, confounding or antecedent.
· Data: all the information a researcher gathers for his or her study. Data can be primary or secondary.
· Parameter: a characteristic that is measurable and can assume different values in the population.
· Statistics: the science of organizing, describing and analyzing data. Statistics can be descriptive or inferential.
· Objective: the specific aspect of the phenomenon under study that the researcher desires to bring out at the end of the research study.
· Literature review: involves locating, reading and evaluating reports of previous studies, observations and opinions related to the planned study.
· Hypothesis: a researcher's anticipated explanation or opinion regarding the result of the study.
· Theory: a set of concepts or constructs and the interrelations that are assumed to exist among them. It provides the basis for establishing the hypotheses to be tested in the study.
· Construct: an image or idea specifically invented for a given research and/or theory-building purpose.
· Concept: a bundle of meanings or characteristics associated with certain events, objects, conditions, situations, and behaviors. Concepts have been developed over time through shared usage.
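
To make the distinction between a population parameter and a sample statistic concrete, here is a minimal Python sketch (the loan figures, population size and sample size are hypothetical, chosen only for illustration):

import random

# Hypothetical population: loan sizes (in shillings) for all 1,000 clients of an MFI
random.seed(1)
population = [random.randint(5_000, 50_000) for _ in range(1_000)]

# Parameter: a measurable characteristic computed from the entire population
population_mean = sum(population) / len(population)

# Sampling: selecting 50 clients in a way intended to represent the population
sample = random.sample(population, 50)

# Statistic: the same measure computed from the sample only
sample_mean = sum(sample) / len(sample)

print(f"Population mean (parameter): {population_mean:.0f}")
print(f"Sample mean (statistic): {sample_mean:.0f}")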

1.12 REVIEW ASSIGNMENTS

1. State the meaning of research and the purposes of research.

2. What are the distinguishing characteristics of research?

3. What are the most probable areas of research in microfinance?


LESSON TWO:

THE RESEARCH PROCESS

Steps in the Research Process

Scientific research is a systematic inquiry. It must therefore be carefully planned and conducted. This entails going through a clear step-by-step process. This process consists of ten steps, as outlined below:

1. Selecting a research topic
2. Formulating the research problem
3. Defining concepts and developing a conceptual framework
4. Literature review
5. Selecting the research design
6. Selecting the data collection method
7. Selecting the survey method
8. Preparing the data collection instrument(s)
9. Selecting data analysis tools
10. Report writing and dissemination of results


Step 1: Selecting a Research Topic

It is important to choose a topic which can be studied within the various constraints facing the researcher. These include time, finances, and the ability of the researcher.

Whereas topics for research may be selected for the researcher, e.g. by those in authority, it is better for one to come up with his/her own topic. Academic research is usually left to individual scholars, whether students or members of staff. Each then chooses a topic from an area that s/he is interested in and comfortable with. The process is as follows:

1. Identify the Broad Area

The criteria for choosing a research area comprise the following three considerations:

(i) Need - ask yourself whether there is need for a study in the area. Who needs it and why? The selection of a topic is governed by the need to address some problems or questions or to understand some given situations.

(ii) Interest or concern - What is the interest of the concerned college department, industry, sector or institution? For instance, the micro-finance sector may be interested in determining the causes of the low level of reach by MFIs to enterprises that need financial services.

(iii) Feasibility - The research chosen must be feasible. Is it possible and practical to achieve the research easily and conveniently? The scope, time, financial and other resources available affect the feasibility of a research.

2. Word the Topic

Once the researcher is satisfied with the broad area of study, he or she words the topic appropriately. The topic is stated in words that indicate the focus, problem, or issue of the research. Chandran suggests the following guidelines for wording the research topic:

· The topic should capture fully the focus or the issue of the research.
· It should have clear reference to the specific population or group of people or the objects targeted for the research.
· It should include the key or main variables of the research.
· It should reveal the nature of the research (i.e. whether qualitative or quantitative).
· It may include reference to the time period of the issue or the research (e.g. in the case of historical research; the date is optional in a topic that is current).
· It does not necessarily have to be a statement - it could be a phrase or a question.
· The wording has to be precise.

Examples:

- Attitude of small-scale furniture makers in Nakuru towards management consultancy services.
- Factors affecting growth of transport businesses owned by women in urban centers in Central Province.

3. Build Preliminary Knowledge

The purpose of looking for preliminary knowledge is to enable the researcher to ascertain whether or not there is really a research problem in the area (topic chosen). It also helps the researcher to find out what is already known about the topic. Overall, it saves the researcher time and other resources, which would otherwise be used in pursuing a research that he/she is forced to discard midstream. Some of the sources of preliminary information are:

- Studies that have already been conducted
- Recommendations made by previous studies (especially on areas requiring further research)
- Journals
- Dissertations
- Reports and conference materials

Since research should contribute to knowledge, you should read any material critically. Are there any flaws in what is already known? Was the research done properly or badly? Could conditions have changed over time? Has the research been replicated? Are there obvious gaps in information? Has the theory been tested adequately?

Step 2: Formulating the Research Problem

The researcher must move on from the topic selection to problem formulation. The concepts or

characteristics that are included in the research frame must be sharpened and the scope narrowed.

By formulating the research problem carefully, the researcher is able to identify the type of data
that needs to be collected. A research problem must be researchable.

Students as well as novice researchers often find it extremely difficult to formulate a research problem. Sometimes it is a painful and laborious process even for experienced researchers.

Identifying a research problem

§ The first step in selecting a research problem is to identify the broad area that one is interested in. Such areas should be related to the professional interests and goals of the researcher.
§ The next step is to identify a specific problem within it that will form the basis of the research study. This means that the researcher should narrow down from the broad area to a specific problem.
§ In selecting a specific problem, the researcher should consider the key factors that help in identifying a researchable problem.
§ An important research problem is one that should:

(i) Challenge some commonly held truism
(ii) Review the inadequacies of existing laws, views and policies
(iii) Lead to findings that have widespread implications in a particular area.

Ways of identifying a research problem

§ Existing theories: an existing theory is a good source of research problems.
§ Existing literature: a systematic reading programme in the general area of interest is perhaps the best way of locating specific research problems.
§ Discussion with experts: such discussion usually involves experienced and well-informed researchers.
§ Previous research studies: a review of previous research studies provides the researcher with researchable projects that would, when carried out, add to knowledge.
§ Replication: this involves carrying out a research project that has been done previously. This is done to find out whether findings hold over time and across regions.
§ The media: issues which are frequently reported in the media can also form the basis of research problems.
§ Personal experiences: first-hand observations and reflection on intriguing experiences can be sources of research problems.


Stating the problem

A research study usually starts with a brief introductory section. In this section, the researcher briefly introduces the general area of study. The researcher then narrows down to the specific problem to be studied. In general, a good problem statement has the following characteristics:

(i) The problem is real. This means that it comes from real life situations rather than the researcher's imagination.
(ii) The concepts can be clearly defined. The concepts, i.e. the characteristics in the problem, are so clear that one can specify in words what the questions are. For example, in the problem "to determine factors affecting growth of transport businesses owned by women in urban centers in Central Kenya", the concepts "growth", "transport", "business", "urban centers", and "Central Kenya" should be clearly defined. Where concepts are clearly defined, it becomes easier to perceive clearly the questions in the problem.
(iii) The concepts must be measurable. This could be represented by some evidence, which can be obtained from direct observation or other activities.
(iv) The research activity is feasible. This refers to the ease and convenience of carrying out the research.

Problem formulation should be done very cautiously because it affects all subsequent steps in the research process. It affects the choice of the research design, the type of data, the data collection method and the data analysis methods.

A good problem statement has the following characteristics:

· The statement is clear. A clear statement makes it easier for the researcher to mentally conceptualize the problem and put it into research objectives or questions. It should also show how the concepts or variables are related to each other.
· The statement is specific. This is reflected in specific objectives or questions. For example, the statement "Factors affecting growth of businesses owned by women entrepreneurs" is vague. What type of businesses, and where?
· The statement is exhaustive. The statement fully covers all the aspects of the topic, including concepts and relationships. For example, the statement "Factors affecting growth of businesses owned by women entrepreneurs" is not exhaustive. It does not specify the type of businesses, the area where they are located or the time period.

Types of Problem Formulation

There are three ways of formulating a research problem:

· Objectives
· Questions
· Hypotheses

(a) Research Objectives

The objectives of the research should be stated clearly. They should also be testable, based on measurable variables of the study. The objectives are important in any research study because:

- They determine the kind of questions to be asked (for gathering data)
- They determine the nature or form of the study
- They determine the data collection and analysis procedures to be used.


Research objectives may be of two types - general (broad) and specific. A broad objective indicates the general focus and direction of the study.

Example - "To find out the annual growth rates of the businesses owned by women in Nairobi"

Specific objectives indicate specific aspects like issues, relationships or associations between concepts and their effects on each other.

Examples - "To find out the relationship between age and growth of women-owned transport businesses in Nairobi"

- "To find out the relationship between access to credit and growth of women-owned transport businesses in Nairobi"

Using a study like "Access and participation in secondary school education among pastoralist and urban-slum communities in Kenya", Orodho suggests the following objectives:

· To analyze the enrolment rate of pupils and students in pastoralist (Garissa) and urban-slum secondary schools by gender between 1990 and 2002.
· Find out the current status of the physical facilities and instructional materials in the study districts.
· Analyze the performance of students in KCPE and KCSE by gender in the study districts.
· Uncover the critical non-school-based factors causing regional inequalities in students' access to and participation in primary and secondary school education in Kenya.

(b) Research Questions

The study problem can also be stated as research questions. These are questions that the researcher would like answered by undertaking the proposed study. (The difference between research questions and objectives is that research questions are stated in question form while objectives are stated in statement form.) If the questions and objectives refer to the same phenomenon, then only one set should be included in the study.

Research questions can also be stated in broad (general) or specific terms. Whereas there are no set rules for selecting research questions, the following guiding questions can be raised:

· Is the question really important?
· Will the question make a difference?
· Will the question lead to interesting or relevant results?
· Will it lead to policy changes in the organization?

Examples - What is the relationship between age (of business) and growth of women-owned transport businesses in Nairobi?

- What has been the regional (pastoralist and urban-slum) student enrolment in primary and secondary school by gender between 1990 and 2002?

(c) Research Hypotheses

A hypothesis is simply an assumption or supposition to be proved or disproved. In research, it is a formal statement that originates from the research problem that the study seeks to solve. It is a statement which is subject to being tested empirically through scientific investigation, resulting in acceptance or rejection.

A hypothesis is also seen as a proposition or set of propositions advanced as an explanation for the occurrence of a particular event. It is an educated guess about possible differences, relationships or causes of a research problem. A research problem is stated as a hypothesis where it is possible to test it using scientific methods. This is done by relating an independent variable to some dependent variable. Because the hypothesis (tentative assumption) is made to draw out and test logical or empirical consequences, the hypothesis should be stated after an extensive literature review.

Research hypotheses should be very specific and limited to the research at hand because they have to be tested. A hypothesis helps the researcher to delimit the area of research, sharpens his focus, and indicates the type of data required and the data analysis methods to be used. For each hypothesis, the researcher should specify the method to be used for analysis.

The characteristics of good hypotheses

§ They must state clearly and briefly the expected relationship between variables.
§ They must be consistent with common sense or generally accepted truths.
§ They must be related to empirical phenomena.
§ They must be simple and as concise as the complexity of the concepts involved allows.
§ They must be testable within a reasonable time.
§ They must be based on a sound rationale derived from theory, previous research or professional experience.

NB: Not all studies test hypotheses, especially in the case of exploratory and case studies.

Hypotheses may be stated in two forms: directional and null.

Directional/alternative hypotheses state the expected relationship between the variables being studied. Examples:

H1 - Pre-loan training influences clients' loan repayment.
H2 - There is a positive and significant relationship between the experience of Credit Officers and clients' loan repayment.

Null hypotheses state that no relationship exists between the variables being studied. Examples:

H01 - Pre-loan training does not influence clients' loan repayment.
H02 - There is no positive relationship between the experience of credit officers and loan repayment by their clients.

The null hypothesis is stated so that it can be tested and ultimately accepted or rejected. It is not necessarily the researcher's expectation. Nevertheless, it is used because it is better fitted to scientific techniques, many of which are aimed at measuring the probability that a difference found is truly greater than zero. This means that any difference found in the sample is also present in the population.
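
As an illustration of how a null hypothesis such as H01 above might be tested, the following Python sketch runs an independent-samples t-test on hypothetical repayment scores for trained and untrained clients (the scores, group sizes and 5% significance level are assumptions made for this example, and the scipy library is assumed to be available):

from scipy import stats

# Hypothetical repayment scores (% of loan repaid on time) for two client groups
trained = [92, 88, 95, 90, 85, 93, 97, 89, 91, 94]     # received pre-loan training
untrained = [78, 85, 80, 72, 88, 75, 82, 79, 84, 77]   # no pre-loan training

# H0: pre-loan training does not influence loan repayment (no difference in means)
# H1: pre-loan training influences loan repayment
t_stat, p_value = stats.ttest_ind(trained, untrained)

alpha = 0.05  # chosen significance level (an assumption for this sketch)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0: the difference in mean repayment is statistically significant.")
else:
    print("Fail to reject H0: no significant difference detected.")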

Step 3: Defining Concepts and Developing Conceptual Framework

Defining Concepts

A concept is an abstract idea which can be used to describe situations, events, individuals or groups being studied. It is a term that refers to the characteristics of those situations, events, individuals or groups. Examples of concepts are: role, authority, capital, community, wealth, poverty, growth, small business, delinquency, default, influence, women-owned, performance, etc.

A concept may mean different things to different people based on the context and their experiences. Concepts derive their meaning from a cultural context and are culture- or tradition-bound. For example, the concept of "marriage" has raised interesting debate in the recent past. It is also important to define concepts for the sake of consistency in measurement. (The data collection step essentially measures concepts and represents them in quantities.)

NB: (i) A definition of a concept is not the same as an operational definition. The latter is a description of a concept in terms of measurable indicators, which will be quantified through empirical data for analysis. An operational definition gives a precise list of the characteristics to be included so that there can be no doubt about what falls into the category and what does not.

Example

Concept: INFLUENCE
Definition: Power or ability to affect someone's beliefs or actions
Operational definition: Effects, e.g. promptness in repaying loans, attitude towards loan repayment

Concept: SMALL BUSINESS
Definition: Informal business
Operational definition: Employing fewer than 50 people; owner-managed

Concept: WOMEN-OWNED
Definition: A woman is the registered owner or co-owner
Operational definition: Registered by a woman; operated by a woman; operational decisions made by the woman

(ii) It is important to take note of and remember the operational definitions as you read the available literature.

(iii) It is advisable to use the same operational definitions as used in previous works on the topic. This contributes a great deal to the comparability of results. (It is also easier to assess the flaws in a definition that has been tried in the field than to know what will happen with a definition which has just been created!)
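
As a small illustration of how such an operational definition can be applied consistently during data collection, the Python sketch below encodes the "small business" definition from the table above (the function and field names are hypothetical):

# Operational definition of "small business" from the table above:
# employing fewer than 50 people and owner-managed.
def is_small_business(employees: int, owner_managed: bool) -> bool:
    """Return True if a surveyed business meets the operational definition."""
    return employees < 50 and owner_managed

print(is_small_business(employees=12, owner_managed=True))   # True
print(is_small_business(employees=120, owner_managed=True))  # False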

Conceptual Framework

This is a schematic representation of a research problem that includes a network of concepts and exhibits the flow and direction of their relationships. It is a flow chart that shows which concepts are related to which others, and the direction in which the influence flows. Concepts or variables that influence others are called independent variables. Those which are influenced by one or more variables are called dependent variables.

A conceptual framework helps the reader to quickly see the proposed relationships. It is also a useful step towards an operational definition of concepts. Furthermore, a conceptual framework enables the researcher to consider the most appropriate steps towards collecting empirical evidence. Finally, a conceptual framework helps the researcher to separate the effect of the independent variables from that of the intervening variables.

An intervening variable is one that comes between the independent and dependent variables, modifying the effect of the independent variables.

Example:

Hypothesis: "Infant death rates will fall as national income rises."

A country may increase its income per capita without lowering the death rate if most of the income goes to a few families. In this case the distribution of income is an intervening variable. This is the challenge faced by most students, e.g. in determining the impact of a financial intervention.
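
A minimal sketch of this idea, using purely hypothetical numbers: in the toy model below, infant mortality responds to per-capita income (the independent variable) only to the extent that income reaches poorer households, so the distribution of income acts as the intervening variable.

# Toy illustration of an intervening variable (all figures are hypothetical).
# Independent variable: per-capita income; intervening variable: share of income
# reaching the poorest households; dependent variable: infant death rate.
def infant_death_rate(income_per_capita, share_to_poor):
    baseline = 80.0  # deaths per 1,000 live births (assumed starting level)
    # Income reduces mortality only in proportion to how widely it is shared.
    reduction = 0.01 * income_per_capita * share_to_poor
    return max(baseline - reduction, 5.0)

# The same rise in national income, under two different income distributions:
print(infant_death_rate(3000, 0.60))  # income widely shared -> rate falls to 62.0
print(infant_death_rate(3000, 0.05))  # income concentrated -> rate stays at 78.5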

ETHICS IN RESEARCH

Ethics are norms or standards of behaviour that guide moral choices about our behaviour and our relationships with others. Ethics differ from legal constraints, in which generally accepted standards have defined penalties that are universally enforced. The goal of ethics in research is to ensure that no one is harmed or suffers adverse consequences from research activities.

As the research is designed, several ethical considerations must be balanced, for example:

· Protect the rights of the participant or subject.
· Ensure the sponsor receives ethically conducted and reported research.
· Follow ethical standards when designing research.
· Protect the safety of the researcher and team.
· Ensure the research team follows the design.

1. Ethical treatment of participants

In general, the research must be designed in such a manner that the respondent does not suffer physical harm, discomfort, pain, embarrassment or loss of privacy. To safeguard against these, the researcher should follow these guidelines:

· Explain the study benefits
· Obtain informed consent
· Explain respondents' rights and protection

(a) Benefits

Whenever direct contact is made with a respondent, the researcher should discuss the

study benefits, being careful to neither overstate nor understate the benefits. An

interviewer should begin an introduction with his or her name, the name of the research

organisation and a brief description of the purpose and benefits of the research. This puts

the respondent at ease, lets them know to whom they are speaking and motivates them to

answer questions truthfully. Inducements to participate, financial or otherwise, should

not be disproportionate to the task or presented in a fashion that results in coercion.

Deception occurs when the respondents are told only part of the truth or when the truth is

fully compromised. The benefits to be gained by deception should be balanced against

the risks to the respondents. When possible, an experiment or interview should be

designed to reduce reliance on deception. In addition, the respondent’s rights and well

being must be adequately protected. In instances where deception in an experiment could

produce anxiety, a subject’s medical condition should be checked to ensure that no

adverse physical harm follows.

(b) Informed consent

Securing informed consent from respondents is a matter of fully disclosing the procedures of the proposed survey or other research design before requesting permission to proceed with the study. There are exceptions that argue for a signed consent form. When dealing with children, it is wise to have a parent or other person with legal standing sign a consent form. If the researchers offer only limited protection of confidentiality, a signed form detailing the types of limits should be obtained. For most business research, oral consent is sufficient.

In situations where respondents are intentionally or accidentally deceived, they should be debriefed once the research is complete. Debriefing involves several activities following the collection of data, e.g.:

· Explanation of any deception.
· Description of the hypothesis, goal or purpose of the study.
· Post-study sharing of results.
· Post-study follow-up medical or psychological attention.

According to Neuman and Wiegand (2000), a full-blown consent statement would contain the following:

· A brief description of the purpose and procedure of the research, including the expected duration.
· A statement of any risks, discomforts or inconveniences associated with participation.
· A guarantee of anonymity or at least confidentiality, and an explanation of both.
· The identification, affiliation and sponsorship of the research, as well as contact information.
· A statement that participation is completely voluntary and can be terminated at any time without penalty.
· A statement of any procedures that may be used.
· A statement of any benefits to the class of subjects involved.
· An offer to provide a free copy of a summary of the findings.

(c) Rights to privacy

All individuals have a right to privacy, and researchers must respect that right. The privacy guarantee is important not only to retain the validity of the research but also to protect respondents. Once a guarantee of confidentiality is given, protecting that confidentiality is essential. The researcher can protect respondents' confidentiality in several ways, which include:

· Obtaining signed nondisclosure documents.
· Restricting access to respondent identification.
· Revealing respondent information only with written consent.
· Restricting access to data instruments where the respondent is identified.
· Nondisclosure of data subsets.

Researchers should restrict access to information that reveals names, telephone numbers,

address or other identifying features. Only researchers who have signed nondisclosure,

confidentiality forms should be allowed access to the data. Links between the data or

database and the identifying information file should be weakened. Individual interview

response sheets should be inaccessible to everyone except the editors and data entry

personnel.

Occasionally, data collection instruments should be destroyed once the data are in a data

file. Data files that make it easy to reconstruct the profiles or identification of individual
respondents should be carefully controlled. For very small groups, data should not be

made available because it is often easy to pinpoint a person within the group. Employee

satisfaction survey feedback in small units can be easily used to identify an individual

through descriptive statistics.

Privacy is more than confidentiality. A right to privacy means one has the right to refuse

to be interviewed or to refuse to answer any question in an interview. Potential

participants have a right to privacy in their own homes, including not admitting

researchers and not answering telephones. They have the right to engage in private

behaviour in private places without fear of observation. To address these rights, ethical

researchers can do the following:-

· Inform respondents of their right to refuse to answer any questions or to participate in the study.
· Obtain permission to interview respondents.
· Schedule field and phone interviews.
· Limit the time required for participation.
· Restrict observation to public behaviour only.

2. Ethics and the sponsor

There are ethical considerations to keep in mind when dealing with the research client or

sponsor. Whether undertaking product, market, personnel, financial or other research, a

sponsor has the right to receive ethically conducted research.


(a) Confidentiality

Sponsors have a right to several types of confidentiality including sponsor nondisclosure,

purpose nondisclosure and findings nondisclosure.

· Sponsor nondisclosure: Companies have a right to dissociate themselves from the sponsorship of a research project. Due to the sensitive nature of the management dilemma or the research question, sponsors may hire an outside consulting or research firm to complete research projects. This is often done when a company is testing a new product idea, to prevent potential consumers from being influenced by the company's current image or industry standing. If a company is contemplating entering a new market, it may not wish to reveal its plans to competitors. In such cases, it is the responsibility of the researcher to respect this desire and devise a plan to safeguard the identity of the sponsor.
· Purpose nondisclosure: This involves protecting the purpose of the study or its details. A research sponsor may be testing a new idea that is not yet patented and may not want competitors to know its plans. It may be investigating employee complaints and may not want to spark union activity. The sponsor might also be contemplating a new public stock offering, where advance disclosure would spark the interest of authorities or cost the firm thousands of shillings.
· Findings nondisclosure: Even if a sponsor feels no need to hide its identity or the study's purpose, most sponsors want research data and findings to be kept confidential, at least until the management decision is made.

(b) Right to quality research

An important ethical consideration for the researcher and the sponsor is the sponsor's right to quality research. The right entails:

· Providing a research design appropriate for the research question.
· Maximizing the sponsor's value for the resources expended.
· Providing data handling and reporting techniques appropriate for the data collected.

From the proposal through the design to data analysis and the final report, the researcher guides the sponsor on the proper techniques and interpretations. Often sponsors will have heard about a sophisticated data handling technique and will want it used even when it is inappropriate for the problem at hand. The researcher should propose the design most suitable for the problem. The researcher should not propose activities designed to maximize researcher revenue or minimize researcher effort at the sponsor's expense. The ethical researcher should report findings in ways that minimize the drawing of false conclusions. He should also use charts, graphs and tables to show the data objectively, despite the sponsor's preferred outcomes.

(c) Sponsor’s Ethics

Occasionally, research specialists may be asked by sponsors to participate in unethical behaviour. Compliance by the researcher would be a breach of ethical standards. Some examples to be avoided are:

· Violating respondent confidentiality
· Changing data or creating false data to meet a desired objective
· Changing data presentations or interpretations
· Interpreting data from a biased perspective
· Omitting sections of data analysis and conclusions
· Making recommendations beyond the scope of the data collected

The ethical course often requires confronting the sponsor's demand and taking the following actions:

· Educating the sponsor on the purpose of research
· Explaining the researcher's role in fact finding versus the sponsor's role in decision making
· Explaining how distorting the truth or breaking faith with respondents leads to future problems
· Failing moral suasion, terminating the relationship with the sponsor

3. Researchers and team members

Researchers have an ethical responsibility for their team's safety as well as their own, and for protecting the anonymity of both the sponsor and the respondent.

(a) Safety

It is the researcher’s responsibility to design a project so the safety of all interviewers,

surveyors, experimenters, or observers is protected. Several factors may be important to

consider in ensuring a researcher’s right to safety e.g. some urban areas and undeveloped

rural areas may be unsafe for research assistants, therefore a team member can

accompany the researcher. It is unethical to require staff members to enter an

environment where they feel physically threatened. Researchers who are insensitive to

these concerns face both research and legal risks.

(b) Ethical behaviour of assistants

Researchers should require ethical compliance from team members just as sponsors expect ethical behaviour from the researcher. Assistants are expected to carry out the sampling plan, to interview or observe respondents without bias and to accurately record all necessary data. Unethical behaviour, such as filling in an interview sheet without having asked the respondent the questions, cannot be tolerated. The behaviour of the assistants is under the direct control of the responsible researcher or field supervisor. If an assistant behaves improperly in an interview or shares a respondent's interview sheet with an unauthorized person, it is the researcher's responsibility. All research assistants should be well trained and supervised.

(c) Protection of anonymity

Researchers and assistants protect the confidentiality of the sponsor’s information and the

anonymity of the respondents. Each researcher handling data should be required to sign a

confidentiality and nondisclosure statement.

MEASUREMENT

Introduction

While people measure things casually in daily life, research measurement is more precise and controlled. In measurement, one settles for measuring properties of the objects rather than the objects themselves. An event, for example, is measured in terms of its duration, what happened during it, who was involved, where it occurred, and so on. Measurement is the basis for all systematic inquiry because it provides us with the tools for recording differences in the outcome of variable change.

Definition of Measurement

Measurement is the procedure by which we assign numerals, numbers, or other

distinguishing values to variables according to rules. These rules help us determine the
kinds of values we will assign to certain observable phenomena or variables. They also

determine the quality of measurement. Precision and exactness in measurement are

vitally important. The measures are what are actually used to test the hypotheses. A

researcher needs good measures for both independent and dependent variables.

Measurement is a three-part process that includes:

i. Selecting observable empirical events
ii. Developing a set of mapping rules: a scheme for assigning numbers or symbols to represent aspects of the event being measured
iii. Applying the mapping rules to each observation of that event

Mapping rules have four characteristics:

1. Classification: Numbers are used to group or sort responses. No order exists.
2. Order: Numbers are ordered. One number is greater than, less than or equal to another number.
3. Distance: Differences between numbers are ordered. The difference between any pair of numbers is greater than, less than or equal to the difference between any other pair of numbers.
4. Origin: The number series has a unique origin indicated by the number zero. This is an absolute and meaningful zero point.
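
As a small illustration of a mapping rule, the Python snippet below assigns numerals to observed responses according to a fixed coding scheme; the variable, categories and codes are assumptions made for this sketch, not part of the notes:

# Mapping rule: assign a number to each observed response (classification only).
REPAYMENT_STATUS_CODES = {
    "on time": 1,
    "late": 2,
    "default": 3,
}

observations = ["on time", "late", "on time", "default", "on time"]

# Apply the mapping rule to each observation of the event.
coded = [REPAYMENT_STATUS_CODES[response] for response in observations]
print(coded)  # [1, 2, 1, 3, 1]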

Measurement consists of two basic processes, conceptualization and operationalization, followed by a more advanced process of determining the levels of measurement, and then by even more advanced methods of measuring reliability and validity.

Conceptualization is the process of taking a construct or concept and refining it by giving it a conceptual or theoretical definition. Ordinary dictionary definitions will not do. Instead, the researcher takes keywords in the research question or hypothesis and finds a clear and consistent definition that is agreed upon by others in the scientific community. Conceptualization is often guided by the theoretical framework, perspective, or approach the researcher is committed to.

Operationalization is the process of taking a conceptual definition and making it more precise by linking it to one or more specific, concrete indicators or operational definitions. These are usually things with numbers in them that reflect empirical or observable reality. For example, if the type of crime one has chosen to study is theft (as representative of crime in general), creating an operational definition for it means at least choosing between petty theft and grand theft (false taking of less or more than $150).
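
A minimal sketch of that operationalization, assuming the $150 threshold mentioned above (the function name, the use of a strict cut-off at exactly $150, and the example values are assumptions made for illustration):

# Operational definition of theft severity: classify an incident as petty or
# grand theft based on the dollar value falsely taken.
GRAND_THEFT_THRESHOLD = 150.0

def classify_theft(value_taken: float) -> str:
    """Return the operationalized category for one observed incident."""
    return "grand theft" if value_taken >= GRAND_THEFT_THRESHOLD else "petty theft"

print(classify_theft(45.00))    # petty theft
print(classify_theft(1200.00))  # grand theft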

LEVELS OF MEASUREMENT

A level of measurement is a scale by which a variable is measured. For 50 years, with few detractors, science has used the Stevens (1951) typology of measurement levels (scales). There are three things to remember about this typology:

· Anything that can be measured falls into one of the four types;
· The higher the level of measurement, the more precision in measurement; and
· Every level up contains all the properties of the previous level.


The four levels of measurement, from lowest to highest, are:

(a) Nominal level: observations are classified under a common characteristic, e.g. sex, race, marital status, employment status, language, religion, etc. This level helps in sampling.
(b) Ordinal level: items or subjects are not only grouped into categories, but are also ranked into some order, e.g. greater than, less than, superior, happier than, poorer, above, etc. This level helps in developing a Likert scale.
(c) Interval level: numerals are assigned to each measure and ranked. The intervals between numerals are equal. The numerals used represent meaningful quantities but the zero point is not meaningful, e.g. test scores, temperature.
(d) Ratio level: has all the characteristics of the other levels and, in addition, the zero point is meaningful. Mathematical operations can be applied to yield meaningful values, e.g. height, weight, distance, age, area, etc.
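
The short Python sketch below illustrates the four levels with hypothetical survey fields and the kind of comparison each level supports (the field names and values are assumptions for illustration only):

# Hypothetical respondent record illustrating the four levels of measurement.
client = {
    "religion": "Catholic",   # nominal: categories only, no order
    "satisfaction": 4,        # ordinal: 1 = very poor ... 5 = excellent (ranked)
    "test_score": 68,         # interval: equal intervals, zero point not meaningful
    "loan_amount": 25_000,    # ratio: meaningful zero, ratios make sense
}

# Nominal: we can only check equality or count categories.
print(client["religion"] == "Protestant")   # False

# Ordinal: we can compare order, but not assume equal distances between ranks.
print(client["satisfaction"] > 3)           # True (better than neutral)

# Interval: differences are meaningful (e.g. 10 marks above a cut-off of 58).
print(client["test_score"] - 58)            # 10

# Ratio: ratios are meaningful (a loan twice as large as 12,500).
print(client["loan_amount"] / 12_500)       # 2.0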

Sources of measurement differences

The ideal study should be designed and controlled for precise and unambiguous

measurement of the variables. Since 100% control is unattainable, error occurs. Much

potential error is systematic (results from a bias) while the remainder is random (occurs

erratically). Some of the major sources of error are:

(a) The respondent: opinion differences that affect measurement come from relatively

stable characteristics of the respondent e.g. employee status, ethnic group and

social class. Temporary factors like fatigue, boredom, anxiety and other distractions

also limit the ability to respond accurately and fully. Hunger, impatience or general

variations in mood will also have an impact.


(b) The situational factors: any condition that places a strain on the interview or

measurement session can have serious effects on the interviewer – respondent

rapport. If another person is present, that person can distort responses by joining in,

by distracting or by merely being present. If the respondents believe anonymity is

not ensured, they may be reluctant to express certain feelings.

(c) The measurer: the interviewer can distort responses by re-wording, paraphrasing,

or re-ordering questions. Stereotypes in appearance and action introduce bias.

Inflections of voice or unconscious prompting with smiles and nods may encourage

or discourage certain replies. Incorrect coding, careless tabulation and faulty

statistical calculation may introduce further errors in data analysis.

(d) The data collection instrument: a defective instrument can cause distortion in two

major ways:

¾ It can be too confusing and ambiguous e.g. the use of complex words,

leading questions, ambiguous meanings, multiple questions.

¾ It can lead to poor selection from the universe of content items. Seldom does

the instrument explore all the potentially important issues.

TYPES OF VARIABLES

A variable is a measurable characteristic that assumes different values among the

subjects. According to Mugenda and Mugenda (2003), variables can be classified into the

following categories: -

1. Independent variables / Predictor variables


It is a variable that a researcher manipulates in order to determine its effect or influence

on another variable. Independent variables predict the amount of variation that occurs in another variable.

Types of independent variables

i. Experimental variables: variables over which the researcher has manipulative control. They are commonly used in the biological and physical sciences, e.g. the influence of the amount of fertilizer on the yield of wheat, or the influence of alcohol on reaction time.

ii. Measurement types of independent variables: variables which have already occurred; their properties are fixed and cannot be manipulated or influenced by the researcher. Most of these variables are either environmental or personological, e.g. age, gender, marital status, race, colour, geographical location, nationality, soil type, altitude etc. (e.g. the influence of nationality on choice of food).

2. Dependent variables / criterion variables

It is the variable that is measured, predicted or monitored and is expected to be affected

by manipulation of an independent variable. They attempt to indicate the total influence

arising from the effects of the independent variable. It varies as a function of the

independent variable e.g. influence of hours studied on performance in a statistical test,

influence of distance from the supply center on cost of building materials.

3. Extraneous variables

They are those variables that affect the outcome of a research study either because the researcher is not aware of their existence or because, even though the researcher is aware of them, she or he has no control over them.

Extraneous variables are often classified into three types:

1. Subject variables, which are the characteristics of the individuals being studied

that might affect their actions. These variables include age, gender, health status,

mood, background, etc.

2. Experimenter variables are characteristics of the persons conducting the

experiment which might influence how a person behaves. Gender, the presence of

racial discrimination, language, or other factors may qualify as such variables.

3. Situational variables are features of the environment in which the study or

research was conducted, which have a bearing on the outcome of the experiment

in a negative way. Included are the air temperature, level of activity, lighting, and

the time of day.

4. Control variables / concomitant / covariate or blocking variables

They are extraneous variables that are built into the study. Extraneous variables are

variables, which influence the results of a study when they are not controlled.

Reasons for introducing control variables:

¾ It increases the validity of the data.

¾ It leads to more convincing generalizations.

Since absolute control of extraneous variables is not possible in any study, results are

interpreted on the basis of degrees of confidence rather than certainty.

Once the major extraneous variables are identified, the researcher can control them by:-

i. Building the extraneous variable into the study, i.e. including it as an independent variable. For example, in determining the effect of alcohol on reaction time, sex may influence reaction time; sex can therefore be introduced as an independent variable. Using regression, one can measure the effect of alcohol on reaction time while controlling for sex (see the sketch after this list).

ii. Including them in the study but only at one level, e.g. where reaction time is the dependent variable, alcohol level the independent variable and sex the extraneous variable, sex can be controlled by sampling only females or only males of a given age. The disadvantage of this method is that generalizations are limited to a smaller population.

iii. Removing the effects of the extraneous variables by statistical procedures, i.e. by partialling out their effects on the dependent variable. This can be done by:

¾ Analysis of co-variance

¾ Partial correlation.
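A minimal sketch in Python (invented data; numpy assumed available) of the first option, building the extraneous variable into the analysis: sex is entered as a second predictor so that the coefficient on alcohol reflects its effect on reaction time with sex held constant:

import numpy as np

# Illustrative data: alcohol dose, sex (0 = female, 1 = male), reaction time in milliseconds
alcohol = np.array([0, 1, 2, 3, 0, 1, 2, 3], dtype=float)
sex     = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
time_ms = np.array([210, 225, 240, 260, 220, 238, 255, 275], dtype=float)

# Design matrix: intercept, alcohol, and the controlled extraneous variable (sex)
X = np.column_stack([np.ones_like(alcohol), alcohol, sex])
coef, *_ = np.linalg.lstsq(X, time_ms, rcond=None)

print("effect of alcohol, controlling for sex:", round(coef[1], 2), "ms per unit")
print("effect of sex, controlling for alcohol:", round(coef[2], 2), "ms")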

5. Intervening variables

They are a special case of extraneous variables. The difference between the intervening

and extraneous variables is in the assumed relationship among the variables. An

intervening variable is a hypothetical internal state that is used to explain relationships

between observed variables, such as independent and dependent variables, in empirical


research. With an extraneous variable, there is no causal link between the independent

and dependent variable, but they are independently associated with a third variable – the

extraneous variable. An intervening variable is recognized as being caused by the

independent variable and as being a determinant of the dependent variable.

Independent variable → Intervening variable → Dependent variable

The total effect of an independent variable on a dependent variable can be subdivided

into direct and indirect effects.

¾ Indirect effects are those effects that are transmitted through an intervening variable.

¾ Direct effects are not transmitted through another variable.

The choice of the right intervening variables helps one not only to determine accurately the total effect of an independent variable on the dependent variable but also to partition that total effect into direct and indirect effects.

Examples of intervening variables include: motivation, intelligence, intention, and

expectation.
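A minimal sketch in Python (invented data; numpy assumed available) of partitioning a total effect into direct and indirect components: one regression estimates the path from the independent variable to the intervening variable, a second estimates the direct path and the path from the intervening variable to the dependent variable, and the indirect effect is the product of the two mediated paths:

import numpy as np

# Illustrative data: training hours (independent), motivation (intervening), output (dependent)
hours      = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
motivation = np.array([2, 3, 3, 5, 5, 6, 7, 8], dtype=float)
output     = np.array([10, 13, 14, 19, 20, 24, 27, 30], dtype=float)

def ols(y, *predictors):
    """Ordinary least squares; returns [intercept, coefficient(s)]."""
    X = np.column_stack([np.ones_like(y)] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(motivation, hours)[1]                      # hours -> motivation
direct, b = ols(output, hours, motivation)[1:3]    # hours -> output (direct), motivation -> output
total = ols(output, hours)[1]                      # hours -> output, ignoring the mediator

print("indirect effect:", round(a * b, 3))
print("direct effect:  ", round(direct, 3))
print("total effect:   ", round(total, 3))         # equals direct + indirect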

6. Antecedent variables

They do not interfere with the established relationship between an independent and

dependent variable but clarify the influence that precedes such a relationship.

Antecedent variable → Independent variable → Dependent variable

Conditions that must hold for a variable to be classified as an antecedent variable:-

¾ The variables including the antecedent variable must be related in some logical

sequence.

¾ When the antecedent variable is controlled for, the relationship between the
independent and the dependent variables should not disappear. Rather it should be

enhanced.

¾ When the independent variable is controlled for or its influence removed, there

should not be any relationship between the antecedent variable and the dependent

variable.

e.g. political stability – attracts investors – increased job opportunities – high standards of

living – reduction of poverty.

7. Suppressor variables

It is an extraneous variable which, when not controlled for, suppresses the relationship between two variables. When a suppressor variable is introduced in the study as a

control variable, a true relationship emerges.

8. Distorter variables

It is a variable that converts what was thought of as a positive relationship into a negative

relationship and vice-versa. Its effects lead a researcher into drawing erroneous

conclusions from the data. When the distorter variable is controlled, a true relationship is

obtained. Consideration of distorter variables in a study reduces the chances of making a

type I error (rejecting a true null hypothesis) or a type II error (accepting a false null

hypothesis).

9. Exogenous and endogenous variables

They are commonly used in testing hypothesized causal models. Path analysis ( a

procedure that tests causal links among several variables) is often used in testing the

validity of causal relationships in a theory or model. [Path diagram: four variables A, B, C and D, with arrows running from the exogenous variables A and B to C, and from A, B and C to D.]
C and D are called endogenous variables. Each endogenous variable is caused or

explained by the variable that precedes it. E.g. D is caused by A, B and C.

A and B are called exogenous variables. They lack hypothesized causes in the model.

Validity and Reliability in Research

The quality of a research study depends to a large extent on the accuracy of the data

collection procedures. Reliability and validity measure the relevance and correctness of

the data.

Reliability

Reliability is the extent to which an experiment, test, or any measuring procedure yields

the same result on repeated trials. Without the agreement of independent observers able

to replicate research procedures, or the ability to use research tools and procedures that

yield consistent measurements, researchers would be unable to satisfactorily draw

conclusions, formulate theories, or make claims about the generalizability of their

research. In addition to its important role in research, reliability is critical for many parts

of our lives, including manufacturing, medicine and sports. Reliability is such an

important concept that it has been defined in terms of its application to a wide range of

activities.

Reliability is influenced by random error. Random error is the deviation from a true

measurement due to factors that have not effectively been addressed by the researcher. As

random error increases, reliability decreases.

Causes of random error


¾ Inaccurate coding

¾ Ambiguous instruction to the subjects

¾ Interviewer’s fatigue

¾ Interviewee’s fatigue

¾ Interviewer’s bias

Research instruments yield data that have two components: the true value or score and an error component. The error component of the data reflects the limitations of the instrument. There are three types of errors that arise at the time of data collection:

¾ Error due to the inaccuracy of the instrument

¾ Error due to the inaccuracy of scoring by the researcher

¾ Unexplained error
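In classical test theory terms (a standard formulation, not stated explicitly above), an observed score X can be written as X = T + E, the sum of a true score T and an error component E. Reliability is then the proportion of observed-score variance that is true-score variance, Var(T)/Var(X) = 1 − Var(E)/Var(X), which is why reliability falls as the error component of the instrument grows.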

Ways of Assessing Reliability

¾ Test-Retest

¾ Equivalent form

¾ Internal consistency

¾ Interrater reliability

1. The Test-Retest technique

It involves administering the same instrument twice to the same group of subjects, but after some time has elapsed. Stability reliability (sometimes called test-retest reliability) is the

agreement of measuring instruments over time. To determine stability, a measure or test

is repeated on the same subjects at a future date. Results are compared and correlated

with the initial test to give a measure of stability.
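A minimal sketch in Python (invented scores; statistics.correlation requires Python 3.10 or later) of estimating test-retest reliability as the correlation between two administrations of the same instrument:

import statistics

# Scores of the same five subjects on the same test, administered two weeks apart (illustrative)
first_administration  = [12, 15, 9, 20, 17]
second_administration = [13, 14, 10, 19, 18]

r = statistics.correlation(first_administration, second_administration)
print(f"test-retest (stability) reliability: r = {r:.2f}")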


An example of stability reliability would be the method of maintaining weights used by

the Kenya Bureau of Standards. Platinum objects of fixed weight (one kilogram, half

kilogram, etc...) are kept locked away. Once a year they are taken out and weighed,

allowing scales to be reset so they are "weighing" accurately. Keeping track of how much

the scales are off from year to year establishes stability reliability for these instruments.

In this instance, the platinum weights themselves are assumed to have a perfectly fixed

stability reliability.

Disadvantages

¾ Subjects may be sensitized by the first testing hence will do better in the second test

¾ Difficulty in establishing a reasonable period between the two testing sessions.

2. Equivalent form

Equivalent reliability is the extent to which two items measure identical concepts at an

identical level of difficulty. Equivalency reliability is determined by relating two sets of

test scores to one another to highlight the degree of relationship or association. In

quantitative studies and particularly in experimental studies, a correlation coefficient,

statistically referred to as r, is used to show the strength of the correlation between a

dependent variable (the subject under study) and one or more independent variables,

which are manipulated to determine effects on the dependent variable. An important

consideration is that equivalency reliability is concerned with correlational, not causal,

relationships.

For example, a researcher studying university Bachelor of commerce students happened

to notice that when some students were studying for finals, their holiday shopping began.

Intrigued by this, the researcher attempted to observe how often, or to what degree, these
two behaviors co-occurred throughout the academic year. The researcher used the results

of the observations to assess the correlation between studying throughout the academic

year and shopping for gifts. The researcher concluded there was poor equivalency

reliability between the two actions. In other words, studying was not a reliable predictor

of shopping for gifts.

Two instruments are used. Specific items in each form are different but they are designed

to measure the same concept. They are the same in number, structure and level of

difficulty e.g. TOEFL, GRE

Advantages

¾ Estimates the stability of the data as well as the equivalence of the items in the two

forms

Disadvantages

¾ Difficulty in constructing two tests, which measure the same concept (time and

resources).

3. Internal consistency technique

Internal consistency is the extent to which tests or procedures assess the same

characteristic, skill or quality. It is a measure of the precision between the observers or of

the measuring instruments used in a study. This type of reliability often helps researchers

interpret data and predict the value of scores and the limits of the relationship among

variables.

For example, a researcher designs a questionnaire to find out about college students'

dissatisfaction with a particular textbook. Analyzing the internal consistency of the


survey items dealing with dissatisfaction will reveal the extent to which items on the

questionnaire focus on the notion of dissatisfaction.
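A minimal sketch in Python (invented responses) of one common internal-consistency index, Cronbach's alpha, computed from the item variances and the variance of the total scores; values close to 1 indicate that the items behave consistently:

import statistics

# Each row is one respondent's answers to four dissatisfaction items on a 1-5 scale (illustrative)
responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
]

k = len(responses[0])                                       # number of items
item_vars = [statistics.variance(col) for col in zip(*responses)]
total_var = statistics.variance([sum(row) for row in responses])

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")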

4. Interrater reliability

Interrater reliability is the extent to which two or more individuals (coders or raters)

agree. Interrater reliability addresses the consistency of the implementation of a rating

system.

A test of interrater reliability would be the following scenario: Two or more researchers

are observing a high school classroom. The class is discussing a movie that they have just

viewed as a group. The researchers have a sliding rating scale (1 being most positive, 5

being most negative) with which they are rating the student's oral responses. Interrater

reliability assesses the consistency of how the rating system is implemented. For

example, if one researcher gives a "1" to a student response, while another researcher

gives a "5," obviously the interrater reliability would be inconsistent. Interrater reliability

is dependent upon the ability of two or more individuals to be consistent. Training,

education and monitoring skills can enhance interrater reliability.
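A minimal sketch in Python (invented ratings) of a simple interrater reliability check: the proportion of student responses to which two raters assigned the same score on the 1-5 scale described above. (More refined indices, such as Cohen's kappa, additionally correct for chance agreement.)

# Ratings given by two observers to the same ten responses (1 = most positive, 5 = most negative)
rater_a = [1, 2, 2, 4, 3, 5, 1, 2, 3, 4]
rater_b = [1, 2, 3, 4, 3, 5, 2, 2, 3, 4]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = 100 * agreements / len(rater_a)
print(f"interrater agreement: {percent_agreement:.0f}%")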

Ways of improving reliability

¾ Minimize external sources of variation

¾ Standardize the conditions under which measurement occurs

¾ Improve investigator consistency by using only well trained, supervised and

motivated persons to conduct the research

¾ Broaden the sample of measurement questions by adding similar questions to the data

collection instrument or adding more observers or occasions to an observation study.

¾ Improve internal consistency of an instrument by excluding data from analysis drawn


from measurement questions eliciting extreme responses.

Validity

Validity refers to the degree to which a study accurately reflects or assesses the specific

concept that the researcher is attempting to measure. It is the degree to which results

obtained from the analysis of data actually represent the phenomenon under study. It is

the accuracy and meaningfulness of inferences, which are based on the research results. It

has to do with how accurately the data obtained in the study represents the variables of

the study. If such data is a true reflection of the variables, then inferences based on such

data will be accurate and meaningful. Validity is largely determined by the presence or

absence of systematic error in the data e.g. using a faulty scale to measure.

Types of validity

(a) Construct validity

Construct validity seeks agreement between a theoretical concept and a specific

measuring device or procedure. For example, a researcher inventing a new IQ test might

spend a great deal of time attempting to "define" intelligence in order to reach an

acceptable level of construct validity.

Construct validity can be broken down into two sub-categories: Convergent validity and

discriminate validity. Convergent validity is the actual general agreement among ratings,

gathered independently of one another, where measures should be theoretically related.

Discriminate validity is the lack of a relationship among measures which theoretically

should not be related.

To understand whether a piece of research has construct validity, three steps should be
followed. First, the theoretical relationships must be specified. Second, the empirical

relationships between the measures of the concepts must be examined. Third, the

empirical evidence must be interpreted in terms of how it clarifies the construct validity

of the particular measure being tested.

(b) Content validity

Content Validity is based on the extent to which a measurement reflects the specific

intended domain of content.

Content validity can be illustrated using the following examples: Researchers aim to

study mathematical learning and create a survey to test for mathematical skill. If these

researchers only tested for multiplication and then drew conclusions from that survey,

their study would not show content validity because it excludes other mathematical

functions. Although the establishment of content validity for placement-type exams

seems relatively straight-forward, the process becomes more complex as it moves into the

more abstract domain of socio-cultural studies. For example, a researcher needing to

measure an attitude like self-esteem must decide what constitutes a relevant domain of

content for that attitude. For socio-cultural studies, content validity forces the researchers

to define the very domains they are attempting to study.

The usual procedure in assessing the content validity of a measure is to use professional

or experts in the particular field. The instrument is given to two groups of experts, one

group is requested to assess what concept the instrument is trying to measure. The other

group is asked to determine whether the set of items or checklist accurately represents the

concept under study.

(c) Criterion related validity


Criterion related validity, also referred to as instrumental validity, is used to demonstrate

the accuracy of a measure or procedure by comparing it with another measure or

procedure which has been demonstrated to be valid. For example, imagine a hands-on

driving test has been shown to be an accurate test of driving skills. By comparing the

scores on the written driving test with the scores from the hands-on driving test, the

written test can be validated by using a criterion related strategy in which the hands-on

driving test is compared to the written test.

Types

¾ Predictive validity – refers to the degree to which obtained data predicts the future

behaviour of subjects e.g. B. Com graduates

¾ Concurrent validity- refers to the degree to which data are able to predict the

behaviour of subjects in the present and not in the future e.g. psychiatry

Internal and external validity

Researchers should be concerned with both external and internal validity.

¾ External validity refers to the extent to which the results of a study are

generalizable or transferable. External validity is the degree to which research

findings can be generalized to populations and environments outside the

experimental setting. It has to do with representativeness of the sample with regard

to the target population.

¾ Internal validity refers to (1) the rigor with which the study was conducted (e.g.,

the study's design, the care taken to conduct measurements, and decisions

concerning what was and wasn't measured) and (2) the extent to which the

designers of a study have taken into account alternative explanations for any
causal relationships they explore. In studies that do not explore causal

relationships, only the first of these definitions should be considered when

assessing internal validity. Internal validity depends on the degree to which

extraneous variables have been controlled for in the study

Internal and external validity are inversely related to each other.

Threats to internal validity

¾ History – refers to the occurrence of events that influence the experimental units during the course of the study

¾ Maturation – refers to the biological or psychological processes which occur among

the subjects in a relatively short time and which influence research findings

¾ Instrumentation – changes in the measuring instrument or in the observers over the course of the study

¾ Pre-testing – exposure to a pre-test can itself affect later responses; a solution is to use equivalent form tests

¾ Statistical regression

¾ Attrition- subjects dropping out of the study before completion- leads to error,

biasness in the sample

¾ Differential selection – occurs when subjects are systematically selected for a study -

volunteers and non-volunteers – biasness leads error

¾ Selection – maturation interaction

¾ Ambiguity - when correlation is taken for causation

¾ Apprehension - when people are scared to respond to your study

¾ Demoralization - when people get bored with your measurements

¾ Diffusion - when people figure out your test and start mimicking symptoms

Threats to external validity


¾ Accessible and target population

¾ Control of extraneous variables

¾ Pre-test treatment interaction

¾ Explicit description of the sample

¾ Multi-treatment interference

Step 4: Literature Review

Literature review involves the systematic identification, location and analysis of documents

containing information related to the research problem being investigated. Literature review

should be extensive and thorough because it is aimed at obtaining detailed knowledge.

Purpose of Literature Review

For any study to contribute to research knowledge, it should clearly build on the work of others

in the area of inquiry. Literature review is particularly useful to this end as it helps one to

develop a deeper understanding of the problem that he/she intends to investigate.

i. Delimiting the research problem

Selecting a limited problem and investigating it in depth is better than the superficial study of

a broad problem. The literature will also show how other researchers have formulated useful

lines of inquiry within a broad field.

ii. Seeking new lines of inquiry

During the review of literature, one is able to determine what research has already been done

in the area of interest. One should at the same time be on the look out for research

possibilities that might have been overlooked. Identification of new and unexplored areas is
good knowledge in itself.

iii. Avoiding fruitless approaches

Review of literature sometimes identifies several similar studies over a period in the past,

employing approximately the same research methodology and all of which failed to produce

a significant result. If several studies under the same circumstances have been done with the

results confirming the initial findings, it would not be prudent to do yet another similar study.

Such a study would serve no purpose and would only show that the researcher had not done

adequate literature review.

iv. Gaining methodological insights

A good review of literature helps the researcher in identifying possible practical research

methods that he or she could use. It is therefore necessary to read beyond the results reported.

The methodological insights gained might be useful to other researchers.

v. Identifying recommendations for further research

Every study usually concludes with a discussion of its findings and recommendations for

further research. A researcher should consider these recommendations carefully because they could provide a research problem as well as the justification for studying it.

vi. Seeking support for grounded theory

Many research studies are designed to test a theory that has already been developed.

According to Barney Glaser though, studies can also be designed in a way that data are

collected first, and then a theory is derived from those data. This results in a “grounded theory”, i.e. a theory “grounded” in a set of real-world data.


When literature review is conducted in this way, it might generate support for the theory. It
might also lead the researcher(s) to question their own theory or might make them refine their theory. Ultimately they might even develop ideas for further study.

Steps in Carrying out a Literature Review

§ Be very familiar with the library before beginning the literature review

§ Make a list of key words or phrases to guide your literature search

§ With the key words and phrases related to the study, go to the sources of literature

§ Summarize the references on cards for easy organization of the literature

§ Once collected, the literature should be analyzed, organized and reported in an orderly manner

§ Make an outline of the main topics or themes in order of presentation

§ Analyse each reference in terms of the outline made and establish where it will be most relevant

§ Studies contrary to your topic should not be ignored; such studies should be analyzed and possible explanations for the difference given

§ The literature should be organized in such a way that the more general is covered first before the researcher narrows down to that which is more specific to the research problem.

Major Steps in a Literature Review

Literature review is more likely to be fruitful if one has already developed a preliminary research problem. The steps in a literature review are highlighted in the framework below. Note that the steps are not necessarily done in this sequence; the results of one step might often lead to a review of the research problem and of any or all the other steps.


Step 1: Search preliminary sources
These are indexes to particular bodies of literature. Examples are journals (that publish articles on small enterprise or microfinance), books, articles and professional papers which are relevant to the research problem. Look for citations (also called bibliographic citations or references). A citation is a description of a document that identifies its author(s), title, year of publication and publisher.
NB: Seek assistance from the librarian on this. Make use of the internet as well.

Step 2: Use secondary sources
A secondary source is a document written by someone who did not actually do the research, develop the theories, or express the opinions that they have synthesized into a literature review. This step helps you to determine whether relevant secondary sources exist, e.g. a review of literature already done by other researchers in the area of interest.

Step 3: Read primary sources
A primary source is a document (e.g. a journal article or dissertation) written by the persons who actually conducted the research, or formulated the theory or opinions described in the document. Obtain and study the original reports of, at the least, the studies that are most central to your proposed study.

Step 4: Synthesize the literature
To synthesize is to put together, in a coherent manner, different ideas or theories that have been gathered from different sources. In this way the researcher looks for links in the literature reviewed and relates them to the current study problem. The review will have shown what is already known, what is not yet known, and the problems or questions that you plan to study. As you write your literature review (synthesis), show clearly how the proposed study relates to, and builds upon, the existing knowledge.

Source: Adapted from Gall, M. D., and Borg, W. R. (1996). Educational Research: An Introduction. New York: Longman, pp. 114–117.

When doing literature review, the following research aspects need to be considered:

§ Research topic

§ Research objectives/questions/hypothesis

§ Research concepts and their relationships to each other


§ Research design

§ Research methods – e.g. data collection and data analysis.

Some of the sources of literature available locally are:

§ Libraries e.g. KNLS, British Council, Information Library, Macmillan, etc

§ Universities and other learning institutions' libraries/resource centers e.g. SU, UoN, KU, JKUAT, Daystar, ANU, CUEA, Kenya Polytechnic, KIBT, EPC, Co-operative College, etc.

§ Development institutions/programs e.g. UNDP, WTO Centre, KNCC & I, KNFJKAs, KRep

(KDA/KAS), AFRICAP, Microsave

§ Microfinance Institutions – e.g. AMFI, KUSCCO, Individual MFIs, Banks etc.

§ Government Ministries/Departments/Agencies e.g. KIBT, Min of Labor, Min of

Planning, CBS, Central Bank, etc

§ Organizations sponsoring you.

Step 5: Selecting the Research Design

Meaning

Chandran sees a research design as, “an arrangement of conditions for collection and analysis of

data in a way that combines their relationship with the purpose of the research to the economy of the procedures ... it is a means to achieve the research objectives through empirical evidence that is acquired economically.”

According to Peil, “designing a research project involves organizing the collection and analysis

of data to provide the information which is sought.”

No single definition brings out the full range of important aspects. From the above though, we

can emphasize the research design as consisting of the plan, which is designed for: -

- Collecting data

- Analyzing the data

- Presenting the findings vis-à-vis the research objectives.

Obviously then, the design ultimately chosen will depend on the study objectives, types of data

required, sources of data and cost.

Classification of Research Designs

Over the years several research designs have been developed, tested and used in various fields.

Various types of research can be seen as an expression of differing research goals: descriptive,

exploratory, causal, experimental, and comparative research provide somewhat different types of

information. Many projects combine two or more of these. Unfortunately, no simple

classification system defines all the variations that must be considered.

Overall, any design can be said to be either quantitative or qualitative. This is based on the

nature of the data they aim at and end up collecting. If the data can be quantified, the design can

be said to be quantitative. If the data to be collected is not of a quantifiable nature, then the

design is said to be qualitative. For example, clients' attitudes towards a credit program's services are largely qualitative.


Some authorities have thus classified research designs as follows:

· Quantitative designs - Descriptive research

- Causal comparative

- Co-relational research

- Experimental research

· Qualitative designs - Case study

- Historical research

We can classify research design using at least seven other different perspectives.

i. The degree to which the research problem has been crystallized

– Exploratory study

– Formal study

The difference between the two lies in the degree of structure and the immediate objective of the

study. An exploratory study uses loose structures and the objective is to discover future research

tasks. The formal study begins with a hypothesis or question. It involves precise procedures and
specification of data sources. Its objective is to test the hypothesis or answer the research

questions. The distinction between the two is however not very precise.

ii. According to the method of data collection

- Observational study

- Survey study
In monitoring (which includes observational studies), the researcher observes the subjects, who are not asked any questions.

meeting.

In a survey study, the researcher asks the subjects questions and collects their responses.

iii. According to the ability of the researcher to manipulate or produce effects in the

variables under study

- Experimental study

- Ex-post facto study

In an experiment, the researcher is able to control and/or manipulate the variables, e.g. to change

them or hold them constant. It is the most powerful support possible for a hypothesis of

causation.

In an ex-post facto design, the researcher has no control over the variables. The researcher can

only report what is happening.

iv. According to the purpose of the study

- Descriptive study

- Causal study

A descriptive study seeks to find out who, what, where, when, or how much, e.g. a study on delinquency. A causal study seeks to explain relationships among variables, e.g. why delinquency rates are higher in one branch than in another.


v. According to the time dimension

- Cross-sectional study

- Longitudinal study

- Historical study

A cross-sectional study describes a sample at a particular point in time – a snapshot of the

phenomenon at the time. A longitudinal study describes a sample over a period of time for the purpose of tracking changes, either in the same sample (the same people over a period of time) or through cohort group studies (where different subjects are used for each subsequent measurement).

A historical study is a systematic or objective location evaluation and synthesis of evidence in

order to establish facts and draw conclusions about past events.

vi. According to the topical scope

- Case study

- Statistical study

A case study emphasizes a full contextual analysis of fewer events or conditions and their

interrelations. Hypotheses may be used, but the study relies on qualitative data, which makes testing of the hypotheses more difficult. The emphasis on detail helps to give the researcher

valuable insight for problem solving, evaluation and strategy. In addition, it relies on multiple sources of information.
A statistical study is designed for breadth rather than depth. A statistical study attempts to

capture the characteristics of a population by making inferences from a sample’s characteristics.

Here hypotheses are tested quantitatively. If the sample is large enough to represent the population, it

is possible to make generalizations.

vii. According to the research environment

- Field conditions research

- Laboratory conditions research

- Simulation research

Field conditions studies are those that occur under actual environmental conditions. Laboratory

conditions studies are carried out under laboratory-controlled conditions. Simulations arise out of

replicating the essence of a system or process. Examples of simulations: where characteristics of

various conditions and relationships in actual situations are often represented in mathematical

models; role-playing etc.

Some Research Designs for I T

a) Descriptive Design

This is an appropriate design where the study seeks to describe and portray characteristics of an

event, situation, and a group of people, community, or a population. It enables the researcher to

profile the sample or population by gathering complete, and possibly accurate information.
Data for the survey is collected using a questionnaire, although a combination of tools may be

used. A well-structured survey covers personal, special, and economic characteristics of the

subjects.

NB: A historical study can also be used as a descriptive design. This describes past events, e.g.

the development of microfinance in Kenya.

Advantages:

§ Portrayal of phenomenon or events fully

§ Appropriate for conducting baseline surveys

Disadvantages:

§ Lack of a scope to identify and assess relationships between concepts. Likewise, a hypothesis

on a causal relationship cannot be tested.

§ Occurrence of errors. Examples: -sampling errors (if sampling method fails to select a sample

which fully represents the population) and measurement error (when data is not measured

accurately).

b) Historical Research Design

This is a systematic or objective location, evaluation, and synthesis of evidence in order to

establish facts and draw conclusions about past events.


Stages

i. Identification and delineation of the problem

ii. Formulate hypotheses/ set of questions/research objectives.

iii. Select data sources, then collect, organize, verify, validate and analyze the data, selecting the relevant data from the larger mass of data.

iv. Process data

v. Test the hypotheses/answer the questions

vi. Write the report

Value of historical research

i. Allows answers to current problems to be looked for in the past

ii. It has the ability of employing the past to predict the future.

iii. Ability to use the present to explain the past

Limitations

i. Difficulty of obtaining adequate data and the determination of how much data is adequate.

ii. Modern history argues that there is too much data to choose from; that much of it is not

relevant. The question is what is relevant/authentic

iii. One does not make his own observations – but relies on other people’s observations and data.

These other people are not necessarily trained observers.

Sources of data

Primary sources – e.g. archives, museums, remains or relics of a given period (e.g. skeletons,

tools, buildings); objects/events that have a direct relation with the subject; documents written by
“direct” persons, original minutes etc

Secondary sources – these do not bear direct physical relationship to the event being studied. e.g.

replica of an art object such as copies of original documents

Evaluation

The information gathered should be carefully evaluated or attested. The data accepted will be

historical evidence.

Criticism

§ Authenticity of the source (external criticism)

§ Accuracy of the data (internal criticism)

c) Exploratory Research Design

This research design seeks to provide new insights and discovery of new ideas to the researcher.

Examples: Community financial need assessment studies.

Stages

i. Formulate research questions that are addressed through a scientific inquiry or investigation

such as a survey.

ii. Literature review, especially in historical surveys


iii. Analyzing data and stimulating cases for new insights

Advantages

§ Leads a researcher to formulate a research hypothesis for further or future research

§ Provides for possibilities of doing different types of research.


§ Stimulates interest and encourages the attitude of seeking to understand and gain new

insights rather than trying to test a certain research related statement

§ Promotes depth in seeking answers and explanations of events and situations as they

take place

§ Encourages drawing together various pieces of information and thus increases the

investigative power of the researcher.

d) Experimental Research Design

In experimental research the investigator deliberately controls and manipulates the conditions

which determine the events. The researcher makes a change in the value of one variable

(Independent variable) and then observes the effect of that change on another variable

(Dependent variable)

In microfinance and social research in general, it is not possible to carry out the experiment per

se. However, we may employ something close to it – thus coming up with a quasi-experimental

research.

In a quasi-experimental design, we introduce a number of control groups to ensure internal


validity. This is done to eliminate the possibility of the outcomes being affected by the

experimental treatment. We also seek to guard against threats to external validity such as aging.

Some of the common threats to validity are: -

§ History – when one is exposed to the treatment, there is a history of other events which may affect the outcome.

§ Maturation

§ Statistical regression – if the instruments used to measure the outcome are unreliable, which

leads to inaccuracies

§ Instrumentation – the reliability of the people testing and the testing itself.

§ Selection – bias in the selection of groups.

§ Experimental mortality – between time t1 and t2 the residual group may not be related to the

initial group.

Advantages

i. It’s the only design in which a hypothesis is truly formulated and tested

ii. It facilitates the assessment of causal relationships between variables and the degree of that

relationship.

Disadvantages

i. Time consuming and costly – it requires more complicated planning and a more demanding type of data

ii. It requires more sophisticated research skills from the researcher.


iii. Measurement error – due to response error and investigator’s bias

RESEARCH INSTRUMENTS

The research instruments that are widely used include

¾ Questionnaires

¾ Interviews

¾ Observations

QUESTIONNAIRES

Each item in the questionnaire is developed to address a specific objective, research

question or hypothesis of the study. The researcher must also know how information

obtained from each questionnaire item will be analysed.

Types of questions used in questionnaires

1 Structured or closed-ended questions

They are questions, which are accompanied by a list of possible alternatives from which

respondents select the answer that best describes their situation.

Advantages of Structured or closed-ended questions

¾ They are easier to analyse since they are in an immediate usable form

¾ They are easier to administer

¾ They are economical to use in terms of time and money

Disadvantages of Structured or closed-ended questions

¾ They are more difficult to construct

¾ Responses are limited and the respondent is compelled to answer questions according

to the researcher’s choices

2 Unstructured or open – ended questions


They refer to questions, which give the respondent complete freedom of response. The

amount of space provided is always an indicator of whether a brief or lengthy answer is

desired.

Advantages of Unstructured or open – ended questions

¾ They permit a greater depth of response

¾ They are simple to formulate

¾ The respondent’s responses may give an insight into his feelings, background, hidden

motives, interest and decisions.

Disadvantages of Unstructured or open – ended questions

¾ There is a tendency of the respondents providing information, which does not answer

the stipulated research questions or objectives.

¾ The responses given may be difficult to categorize and hence difficult to analyze

quantitatively

¾ Responding to open ended questions is time consuming, which may put some

respondents off.

3 Contingency questions

In particular cases, certain questions are applicable to certain groups of respondents. In

such cases, follow-up questions are needed to get further information from the relevant

sub-group only. These subsequent questions, which are asked after the initial questions,

are called ‘contingency questions’ or ‘ filter questions’. The purpose of these kinds of

questions is to probe for more information. They also simplify the respondent’s task, in

that they will not be required to answer questions that are not relevant to them.
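A minimal sketch in Python (hypothetical questions and field names) of the skip logic behind a contingency question: the follow-up item is recorded only for the sub-group to which it applies:

def record_responses(has_loan: bool, loan_purpose: str = "") -> dict:
    """Record the initial question, and the filter question only when it is relevant."""
    record = {"has_loan": has_loan}
    if has_loan:
        # Contingency (filter) question: only respondents who have taken a loan answer it
        record["loan_purpose"] = loan_purpose
    return record

print(record_responses(True, "stock for a retail kiosk"))
print(record_responses(False))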
4 Matrix questions

These are questions which share the same set of response categories. They are used whenever scales such as the Likert scale are being used.
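A minimal sketch in Python (invented items and answers) of how responses to a Likert-type matrix can be scored, including reverse-coding of a negatively worded statement so that higher totals consistently mean a more favourable attitude:

# One respondent's answers on a 5-point scale (1 = strongly disagree ... 5 = strongly agree)
answers = {
    "The loan application process is simple.": 4,
    "Repayment terms are clearly explained.": 5,
    "Branch staff are unhelpful.": 2,            # negatively worded item
}
reverse_coded = {"Branch staff are unhelpful."}

# Reverse-code negative items (on a 1-5 scale, 6 - value flips the direction)
score = sum((6 - v) if item in reverse_coded else v for item, v in answers.items())
print("total attitude score:", score)            # 4 + 5 + (6 - 2) = 13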

Advantages of matrix questions

¾ When questions or items are presented in matrix form, they are easier to complete and

hence the respondent is unlikely to be put off.

¾ Space is used efficiently

¾ It is easy to compare responses given to different items.

Disadvantages of matrix questions

¾ Some respondents, especially the ones that may not be too keen to give right

responses, might form a pattern of agreeing or disagreeing with statements.

¾ Some researchers use them when in fact the kind of information being sought could

better be obtained in another format.

Rules for constructing questionnaires and questionnaire items

1. List the objectives that you want the questionnaire to accomplish before

constructing the questionnaire.

2. Determine how information obtained from each questionnaire item will be

analyzed.

3. Ensure clarity and avoid ambiguity.

4. If a concept has several meanings and that concept must be used in a question, the

intended meaning must be defined.

5. Construct short questions.


6. Items should be stated as positively as possible.

7. Double-barreled items should be avoided.

8. Leading and biased questions should be avoided.

9. Very personal and sensitive questions should be avoided.

10. Simple words that are easily understandable should be used.

11. Questions that assume facts with no evidence should be avoided.

12. Avoid psychologically threatening questions.

13. Include enough information in each item so that it is meaningful to the

respondent.

Tips on how to organize or order items in a questionnaire

1. Begin with non-threatening, interesting items.

2. It is not advisable to put important questions at the end of a long questionnaire.

3. Have some logical order when putting items together.

4. Arrange the questions according to themes being studied.

5. If the questionnaire is arranged into content sub-sections, each section should be

introduced with a short statement concerning its content and purpose.

6. Socio-economic questions should be asked at the end because respondents may be

put off by personal questions at the beginning of the questionnaire.

Presentation of the questionnaire

1. Make the questionnaire attractive by using quality paper. It increases the response

rate.

2. Organize and lay out the questions so that the questionnaire is easy to complete.

3. All the pages and items in a questionnaire should be numbered.


4. Brief but clear instruction must be included.

5. Make your questionnaire short.

Pretesting the questionnaire

The questionnaire should be pretested on a selected sample which is similar to the actual sample that the researcher plans to study. This is important because:-

¾ Questions that are vague will be revealed in the sense that the respondents will

interpret them differently.

¾ Comments and suggestions made by respondents during pretesting should be

seriously considered and incorporated.

¾ Pretesting will reveal deficiencies in the questionnaire.

¾ It helps to test whether the methods of analysis are appropriate.

Ways of administering questionnaires

Questionnaires are mainly administered using three methods:

i. Self-administered questionnaires

Questionnaires are sent to the respondents through mail or hand-delivery, and the respondents complete them on their own.

ii. Researcher-administered questionnaires

The researcher can decide to use the questionnaire to interview the respondents. This is mostly done when the subjects may not have the ability to easily interpret the questions, probably because of their educational level.

iii. Use of the internet

The people sampled for the research receive and respond to the questionnaires through their web sites or e-mail addresses.

The letter of transmittal / Cover letter

The letter of transmittal / Cover letter should accompany every questionnaire.

Contents of a letter of transmittal

¾ It should explain the purpose of the study.

¾ It should explain the importance and significance of the study.

¾ A brief assurance of confidentiality should be included in the letter.

¾ If the study is affiliated to a certain institution or organisation, it is advisable to have

an endorsement from such an institution or organisation.

¾ In a sensitive research, it may be necessary to assure the anonymity of respondents.

¾ The letter should contain specific deadline dates by which the completed

questionnaire is to be returned.

Follow-up techniques

¾ Sending a polite follow-up letter asking the subjects to respond

¾ Sending another copy of the questionnaire together with a follow-up letter.

Response rate

It refers to the percentage of subjects who respond to questionnaires. Many authors

believe that a response rate of 50% is adequate for analysis and reporting. If the response

rate is low, the researcher must question the representativeness of the sample.
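A minimal sketch in Python (illustrative figures) of computing the response rate and checking it against the 50% rule of thumb mentioned above:

questionnaires_sent = 120
questionnaires_returned = 78

response_rate = 100 * questionnaires_returned / questionnaires_sent
print(f"response rate: {response_rate:.1f}%")
print("adequate for analysis (>= 50%):", response_rate >= 50)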
INTERVIEWS

An interview is an oral (face to face) administration of a questionnaire or an interview

schedule. To obtain accurate information through interviews, a researcher needs to obtain

the maximum co-operation from respondents. Interviews are particularly useful for

getting the story behind a participant's experiences. The interviewer can pursue in-depth

information around a topic. Interviews may be useful as follow-up to certain respondents

to questionnaires, e.g., to further investigate their responses. Usually open-ended

questions are asked during interviews.

Guidelines for preparation for Interview

1. Choose a setting with little distraction. Avoid bright lights or loud noises, ensure the

interviewee is comfortable (you might ask them if they are), etc. Often, they may feel

more comfortable at their own places of work or homes.

2. Explain the purpose of the interview.

3. Address terms of confidentiality. Note any terms of confidentiality. (Be careful here.

Rarely can you absolutely promise anything. Courts may get access to information, in

certain circumstances.) Explain who will get access to their answers and how their

answers will be analyzed. If their comments are to be used as quotes, get their written

permission to do so.

4. Explain the format of the interview. Explain the type of interview you are conducting

and its nature. If you want them to ask questions, specify if they're to do so as they

have them or wait until the end of the interview.

5. Indicate how long the interview usually takes.


6. Tell them how to get in touch with you later if they want to.

7. Ask them if they have any questions before you both get started with the interview.

8. Don't count on your memory to recall their answers. Ask for permission to record the

interview or bring along someone to take notes.

Types of Interviews approaches

(a) Informal, conversational interview - no predetermined questions are asked, in

order to remain as open and adaptable as possible to the interviewee's nature and

priorities; during the interview, the interviewer "goes with the flow".

(b) General interview guide approach - the guide approach is intended to ensure

that the same general areas of information are collected from each interviewee;

this provides more focus than the conversational approach, but still allows a

degree of freedom and adaptability in getting information from the interviewee.

(c) Standardized, open-ended interview - here, the same open-ended questions are

asked to all interviewees (an open-ended question is where respondents are free

to choose how to answer the question, i.e., they don't select "yes" or "no" or

provide a numeric rating, etc.); this approach facilitates faster interviews that can

be more easily analyzed and compared

(d) Closed, fixed-response interview - where all interviewees are asked the same

questions and asked to choose answers from among the same set of alternatives.

This format is useful for those not practiced in interviewing.

Sequence of Questions

1. Get the respondents involved in the interview as soon as possible.

2. Before asking about controversial matters (such as feelings and conclusions), first
ask about some facts. With this approach, respondents can more easily engage in

the interview before warming up to more personal matters.

3. Intersperse fact-based questions throughout the interview to avoid long lists of

fact-based questions, which tends to leave respondents disengaged.

4. Ask questions about the present before questions about the past or future. It's

usually easier for them to talk about the present and then work into the past or

future.

5. The last questions might be to allow respondents to provide any other information

they prefer to add and their

impressions of the interview.

Wording of Questions

¾ Wording should be open-ended. Respondents should be able to choose their own

terms when answering questions.

¾ Questions should be as neutral as possible. Avoid wording that might influence

answers, e.g., evocative, judgmental wording.

¾ Questions should be asked one at a time.

¾ Questions should be worded clearly. This includes knowing any terms particular to

the program or the respondents' culture.

¾ Be careful asking "why" questions. This type of question infers a cause-effect

relationship that may not truly exist. These questions may also cause respondents

to feel defensive, e.g., that they have to justify their response, which may inhibit

their responses to this and future questions.


While Carrying Out the Interview

¾ Occasionally verify the tape recorder (if used) is working.

¾ Ask one question at a time.

¾ Attempt to remain as neutral as possible. That is, don't show strong emotional

reactions to their responses. Patton suggests to act as if "you've heard it all before."

¾ Encourage responses with occasional nods of the head, "uh huh"s, etc.

¾ Be careful about the appearance when note taking. That is, if you jump to take a

note, it may appear as if you're surprised or very pleased about an answer, which

may influence answers to future questions.

¾ Provide transition between major topics, e.g., "we've been talking about (some

topic) and now I'd like to move on to (another topic)."

¾ Don't lose control of the interview. This can occur when respondents stray to

another topic, take so long to answer a question that time begins to run out, or

even begin asking questions to the interviewer.

Immediately After Interview

¾ Verify if the tape recorder, if used, worked throughout the interview.

¾ Make any notes on your written notes, e.g., to clarify any scratchings, ensure pages

are numbered, fill out any notes that don't make sense, etc.

¾ Write down any observations made during the interview. For example, where did

the interview occur and when, was the respondent particularly nervous at any

time? Were there any surprises during the interview? Did the tape recorder break?

Personal interviews

People selected to be part of the sample are interviewed in person by a trained


interviewer.

Requirements for success

Three broad conditions must be met in order to have a successful personal interview:

¾ The participant must possess the information being targeted by the investigative

questions

¾ The participant must understand his or her role in the interview as the provider of

accurate information

¾ The participant must perceive adequate motivation to cooperate

Increasing the participant’s receptiveness

The first goal in an interview is to establish a friendly relationship with the participant.

Three factors will help increase participant receptiveness. The participant must:

¾ Believe that the experience will be pleasant and satisfying

¾ Believe that answering the survey is an important and worthwhile use of his or her

time

¾ Dismiss any mental reservations that he or she might have about participation.

The technique of stimulating participants to answer more fully and relevantly is termed

probing. Since it presents a great potential for bias, a probe should be neutral and appear

as a natural part of the conversation. Appropriate probes should be specified by the

designer of the data collection instrument. There are several probing styles e.g.

¾ A brief assertion of understanding and interest e.g. comments such as “I see” “yes”.

¾ An expectant pause

¾ Repeating the question


¾ Repeating the participant’s reply

¾ A neutral question or comment

¾ Question clarification.

Problems likely to be encountered during personal interviews

In personal interviews, the researcher must deal with bias and cost.

Biased results arise from three types of errors:

(a) Sampling error

It’s the difference between a sample statistic and its corresponding population

parameter. The sampling distribution of the sample means is a probability distribution

of possible sample means of a given sample size.

(b) Non-response error

This occurs when the responses of participants differ in some systematic way from the

responses of non-participants. It occurs when the researcher:

¾ Cannot locate the person to be studied

¾ Is unsuccessful in encouraging that person to participate

Solutions to reduce errors of non-response are

¾ Establishing and implementing callback procedures

¾ Creating a non response sample and weighting results from this sample

¾ Substituting another individual for the missing non-participant.

(c) Response error
Occurs when the data reported differ from the actual data. It can occur during the

interview or during preparation of data analysis.

¾ Participant-initiated error occurs when the participant fails to answer fully and

accurately either by choice or because of inaccurate or incomplete knowledge. Can be

solved by using trained interviewers who are knowledgeable about such problems.

¾ Interviewer error can be caused by:-

- Failure to secure full participant cooperation

- Failure to consistently execute interview procedures

- Failure to establish appropriate interview environment

- Falsification of individual answers or whole interviews

- Inappropriate influencing behaviour

- Failure to record answers accurately and completely

- Physical presence bias.

Advantages of Personal interviews

¾ Good cooperation from the respondents

¾ Interviewer can answer questions about survey, probe for answers, use follow-up

questions and gather information by observation.

¾ Special visual aids and scoring devices can be used.

¾ Illiterate and functionally illiterate respondents can be reached

¾ Interviewer can prescreen respondent to ensure he / she fits the population profile.

¾ Responses can be entered directly into a portable microcomputer to reduce error

and cost when using computer assisted personal interviewing.


Disadvantages of Personal interviews

¾ High costs

¾ Need for highly trained interviewers

¾ Longer period needed in the field collecting data

¾ May be wide geographic dispersion

¾ Follow-up is labour intensive

¾ Not all respondents are available or accessible

¾ Some respondents are unwilling to talk to strangers in their homes

¾ Some neighbourhoods are difficult to visit

¾ Questions may be altered or respondent coached by interviewers.

Telephone interviews

People selected to be part of the sample are interviewed on the telephone by a trained

interviewer.

Advantages of Telephone interviews

¾ Lower costs than personal interviews

¾ Expanded geographic coverage without dramatic increase in costs

¾ Uses fewer, more highly skilled interviewers

¾ Reduced interview bias

¾ Faster completion time

¾ Better access to hard-to-reach respondents through repeated callbacks

¾ Can use computerized random digit dialing

¾ Responses can be entered directly into a computer file to reduce error and cost when

using computer assisted telephone interviewing.


Disadvantages of Telephone interviews

¾ Response rate is lower than for personal interview

¾ Higher costs if interviewing geographically dispersed sample

¾ Interview sample must be limited

¾ Many phone numbers are unlisted or not working, making directory listings

unreliable

¾ Some target groups are not available by phone

¾ Responses may be less complete

¾ Illustrations cannot be used.

¾ Respondents may not be honest with their responses since it is not a face to face

situation

Rules pertaining to interviews

The interviewer must

¾ Be pleasant

¾ Show genuine interest in getting to know respondents without appearing like spies.

¾ Be relaxed and friendly.

¾ Be very familiar with the questionnaire or the interview guide.

¾ Have a guide which indicates what questions are to be asked and in what order.

¾ Interact with the respondent as an equal.

¾ Pretest the interview guide before using it to check for vocabulary, language level

and how well the questions will be understood.

¾ Inform the respondent about the confidentiality of the information given.


¾ Not ask leading questions

¾ Remain neutral in an interview situation in order to be as objective as possible.

An interview schedule

It’s a set of questions that the interviewer asks when interviewing. It makes it possible

to obtain data required to meet specific objectives of the study.

Note taking during interviews

It refers to the method of recording in which the interviewer records the respondent’s

responses during the interview.

Advantages

¾ It facilitates data analysis since the information is readily accessible and already

classified into appropriate categories.

¾ If taken well, no information is left out.

Disadvantages of note taking

¾ It may interfere with the communication between the respondent and the

interviewer.

¾ It might upset the respondent if the answers are personal and sensitive.

¾ If it is delayed, important details may be forgotten.

¾ It makes the interview lengthy and boring.

Tape recording

The interviewer’s questions and the respondent’s answers are recorded either using a tape

recorder or a video tape.

Advantages

¾ It reduces the tendency for the interviewer to make unconscious selection of data in
the course of the recording.

¾ The tape can be played back and studied more thoroughly.

¾ A person other than the interviewer can evaluate and categorize responses.

¾ It speeds up the interview.

¾ Communication is not interrupted.

Disadvantages

¾ It changes the interview situation since respondents get nervous.

¾ Respondents may be reluctant to give sensitive information if they know they are

being taped.

¾ Transcribing the tapes before analysis is time consuming and tedious.

Advantages of interviews

¾ It provides in-depth data, which is not possible to get using a questionnaire.

¾ It makes it possible to obtain data required to meet specific objectives of the study.

¾ Are more flexible than questionnaires because the interviewer can adapt to the

situation and get as much information as possible.

¾ Very sensitive and personal information can be extracted from the respondent.

¾ The interviewer can clarify and elaborate the purpose of the research and effectively

convince respondents about the importance of the research.

¾ They yield higher response rates

Disadvantages of interviews

¾ They are expensive – traveling costs

¾ It requires a higher level of skill


¾ Interviewers need to be trained to avoid bias

¾ Not appropriate for large samples

¾ Responses may be influenced by the respondent’s reaction to the interviewer.

OBSERVATION

Observation is one of the few options available for studying records, mechanical

processes, small children and complex interactive processes. Data can be gathered as

the event occurs. Observation includes a variety of monitoring situations that cover non-behavioural and behavioural activities.

The observer-participant relationship

Interrogation presents a clear opportunity for interviewer bias. The problem is less

pronounced with observation but is still real. The relationship between observer and

participant may be viewed from three perspectives:

¾ Whether the observation is direct or indirect

¾ Whether the observer’s presence is known or unknown to the participant

¾ What role the observer plays

Guidelines for the qualification and selection of observers

¾ Concentration: Ability to function in a setting full of distractions

¾ Detail-oriented: Ability to remember details of an experience

¾ Unobtrusive: Ability to blend with the setting and not be distinctive

¾ Experience level: Ability to extract the most from an observation study

Advantages of observation

Enables one to:


¾ Secure information about people or activities that cannot be derived from experiments or surveys

¾ Reduce obtrusiveness

¾ Avoid participant filtering and forgetfulness

¾ Secure environmental context information

¾ Optimize the naturalness of the research setting

Limitations of observation

¾ Difficulty of waiting for long periods to capture the relevant phenomena

¾ The expense of observer costs and equipment

¾ Reliability of inferences from surface indicators

¾ The problem of quantification and disproportionately large records

Observation forms, schedules or checklists

The researcher must define the behaviours to be observed and then develop a detailed list

of behaviours. During data collection, the researcher checks off each as it occurs. This

permits the observer to spend time thinking about what is occurring rather than on how to

record it and this enhances the accuracy of the study.
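A behaviour checklist can be kept very simple: a fixed list of behaviour codes that the observer ticks each time the behaviour occurs. The sketch below (the behaviour categories and observations are invented) shows one way such a tally could be recorded and summarised.

from collections import Counter

# Behaviours defined in advance by the researcher (illustrative categories)
checklist = ["asks question", "interrupts", "takes notes", "off-task"]

# Codes ticked off during one observation session, in the order they occurred
observed = ["takes notes", "asks question", "takes notes", "off-task", "takes notes"]

tally = Counter(observed)
for behaviour in checklist:
    print(f"{behaviour:15s} {tally.get(behaviour, 0)}")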

Validity and Reliability in Research

The quality of a research study depends to a large extent on the accuracy of the data

collection procedures. Reliability and validity measures the relevance and correctness of

the data.

Reliability
Reliability is the extent to which an experiment, test, or any measuring procedure yields

the same result on repeated trials. Without the agreement of independent observers able

to replicate research procedures, or the ability to use research tools and procedures that

yield consistent measurements, researchers would be unable to satisfactorily draw

conclusions, formulate theories, or make claims about the generalizability of their

research. In addition to its important role in research, reliability is critical for many parts

of our lives, including manufacturing, medicine and sports. Reliability is such an

important concept that it has been defined in terms of its application to a wide range of

activities.

Reliability is influenced by random error. Random error is the deviation from a true

measurement due to factors that have not effectively been addressed by the researcher. As

random error increases, reliability decreases.
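Classical test theory expresses this relationship in a standard way (the notation is not used in these notes, but it is consistent with the true-score-plus-error idea described below): an observed score X is the sum of a true score T and a random error E, so reliability is the proportion of observed-score variance that is true-score variance.

X = T + E, \qquad \text{Reliability} = \frac{\sigma_T^2}{\sigma_X^2} = \frac{\sigma_T^2}{\sigma_T^2 + \sigma_E^2}

As the error variance \sigma_E^2 grows, the ratio falls, which is why increasing random error lowers reliability.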

Causes of random error

¾ Inaccurate coding

¾ Ambiguous instruction to the subjects

¾ Interviewer’s fatigue

¾ Interviewee’s fatigue

¾ Interviewer’s bias

Research instruments yield data that have two components; the true value or score and an

error component. The error component of the data reflects the limitations of the

instrument. There are three types of errors that arise at the time of data collection;

¾ Error due to the inaccuracy of the instrument

¾ Error due to the inaccuracy of scoring by the researcher


¾ Unexplained error

Ways of Assessing Reliability

¾ Test-Retest

¾ Equivalent form

¾ Internal consistency

¾ Interrater reliability

1. The Test-Retest technique

It involves administering the same instruments twice to the same group of subjects, but

after some time. Stability reliability (sometimes called test-retest reliability) is the

agreement of measuring instruments over time. To determine stability, a measure or test

is repeated on the same subjects at a future date. Results are compared and correlated

with the initial test to give a measure of stability.
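In practice the two sets of scores are usually compared with a correlation coefficient. The sketch below, with invented scores for ten subjects, estimates test-retest reliability using numpy.

import numpy as np

# Hypothetical scores for the same ten subjects, tested twice a month apart
test_1 = np.array([12, 15, 9, 20, 14, 17, 11, 16, 13, 18])
test_2 = np.array([13, 14, 10, 19, 15, 18, 10, 17, 12, 19])

# Pearson correlation between the two administrations estimates stability reliability
r = np.corrcoef(test_1, test_2)[0, 1]
print(f"Test-retest reliability estimate: r = {r:.2f}")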

An example of stability reliability would be the method of maintaining weights used by

the Kenya Bureau of Standards. Platinum objects of fixed weight (one kilogram, half

kilogram, etc...) are kept locked away. Once a year they are taken out and weighed,

allowing scales to be reset so they are "weighing" accurately. Keeping track of how much

the scales are off from year to year establishes stability reliability for these instruments.

In this instance, the platinum weights themselves are assumed to have a perfectly fixed

stability reliability.

Disadvantages

¾ Subjects may be sensitized by the first testing hence will do better in the second test

¾ Difficulty in establishing a reasonable period between the two testing sessions.


2. Equivalent form

Equivalent reliability is the extent to which two items measure identical concepts at an

identical level of difficulty. Equivalency reliability is determined by relating two sets of

test scores to one another to highlight the degree of relationship or association. In

quantitative studies and particularly in experimental studies, a correlation coefficient,

statistically referred to as r, is used to show the strength of the correlation between a

dependent variable (the subject under study) and one or more independent variables,

which are manipulated to determine effects on the dependent variable. An important

consideration is that equivalency reliability is concerned with correlational, not causal,

relationships.
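For reference, the coefficient r mentioned above is normally calculated with the standard Pearson formula, where x_i and y_i are the paired scores and \bar{x}, \bar{y} are their means:

r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2 \; \sum_i (y_i - \bar{y})^2}}

Values close to +1 or -1 indicate a strong relationship, while values near 0 indicate little or no linear relationship.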

For example, a researcher studying university Bachelor of commerce students happened

to notice that when some students were studying for finals, their holiday shopping began.

Intrigued by this, the researcher attempted to observe how often, or to what degree, these

two behaviors co-occurred throughout the academic year. The researcher used the results

of the observations to assess the correlation between studying throughout the academic

year and shopping for gifts. The researcher concluded there was poor equivalency

reliability between the two actions. In other words, studying was not a reliable predictor

of shopping for gifts.

Two instruments are used. Specific items in each form are different but they are designed

to measure the same concept. They are the same in number, structure and level of

difficulty, e.g. TOEFL or GRE.

Advantages
¾ Estimates the stability of the data as well as the equivalence of the items in the two

forms

Disadvantages

¾ Difficulty in constructing two tests, which measure the same concept (time and

resources).

3. Internal consistency technique

Internal consistency is the extent to which tests or procedures assess the same

characteristic, skill or quality. It is a measure of the precision between the observers or of

the measuring instruments used in a study. This type of reliability often helps researchers

interpret data and predict the value of scores and the limits of the relationship among

variables.

For example, a researcher designs a questionnaire to find out about college students'

dissatisfaction with a particular textbook. Analyzing the internal consistency of the

survey items dealing with dissatisfaction will reveal the extent to which items on the

questionnaire focus on the notion of dissatisfaction.
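Internal consistency is often summarised with a coefficient such as Cronbach's alpha, which is not named in these notes but is widely used for exactly this purpose. The sketch below computes it from made-up 1-5 ratings by eight students on four dissatisfaction items.

import numpy as np

# Hypothetical ratings (rows = students, columns = questionnaire items)
items = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
    [5, 5, 4, 4],
    [3, 2, 3, 3],
])

k = items.shape[1]                              # number of items
item_variances = items.var(axis=0, ddof=1)      # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")

Higher values (a common rule of thumb is 0.7 and above) suggest that the items are measuring the same underlying characteristic.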

4. Interrater reliability

Interrater reliability is the extent to which two or more individuals (coders or raters)

agree. Interrater reliability addresses the consistency of the implementation of a rating

system.

A test of interrater reliability would be the following scenario: Two or more researchers

are observing a high school classroom. The class is discussing a movie that they have just

viewed as a group. The researchers have a sliding rating scale (1 being most positive, 5

being most negative) with which they are rating the student's oral responses. Interrater
reliability assesses the consistency of how the rating system is implemented. For

example, if one researcher gives a "1" to a student response, while another researcher

gives a "5," obviously the interrater reliability would be inconsistent. Interrater reliability

is dependent upon the ability of two or more individuals to be consistent. Training,

education and monitoring skills can enhance interrater reliability.
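A quick check of interrater reliability is simply the percentage of ratings on which the raters agree; more refined indices such as Cohen's kappa follow the same idea while correcting for chance agreement. The sketch below uses invented ratings from two raters of the same ten student responses.

# Hypothetical 1-5 ratings given by two raters to the same ten responses
rater_a = [1, 2, 2, 3, 1, 4, 5, 2, 3, 1]
rater_b = [1, 2, 3, 3, 1, 4, 5, 2, 2, 1]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = 100 * agreements / len(rater_a)
print(f"Raters agree on {agreements} of {len(rater_a)} responses ({percent_agreement:.0f}%)")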

Ways of improving reliability

¾ Minimize external sources of variation

¾ Standardize conditions under which measurements occurs

¾ Improve investigator consistency by using only well trained, supervised and

motivated persons to conduct the research

¾ Broaden the sample of measurement questions by adding similar questions to the data

collection instrument or adding more observers or occasions to an observation study.

¾ Improve internal consistency of an instrument by excluding data from analysis drawn

from measurement questions eliciting extreme responses.

Validity

Validity refers to the degree to which a study accurately reflects or assesses the specific

concept that the researcher is attempting to measure. It is the degree to which results

obtained from the analysis of data actually represent the phenomenon under study. It is

the accuracy and meaningfulness of inferences, which are based on the research results. It

has to do with how accurately the data obtained in the study represents the variables of

the study. If such data is a true reflection of the variables, then inferences based on such

data will be accurate and meaningful. Validity is largely determined by the presence or
absence of systematic error in the data e.g. using a faulty scale to measure.

Types of validity

(a) Construct validity

Construct validity seeks agreement between a theoretical concept and a specific

measuring device or procedure. For example, a researcher inventing a new IQ test might

spend a great deal of time attempting to "define" intelligence in order to reach an

acceptable level of construct validity.

Construct validity can be broken down into two sub-categories: Convergent validity and

discriminant validity. Convergent validity is the actual general agreement among ratings,

gathered independently of one another, where measures should be theoretically related.

Discriminant validity is the lack of a relationship among measures which theoretically

should not be related.

To understand whether a piece of research has construct validity, three steps should be

followed. First, the theoretical relationships must be specified. Second, the empirical

relationships between the measures of the concepts must be examined. Third, the

empirical evidence must be interpreted in terms of how it clarifies the construct validity

of the particular measure being tested.

(b) Content validity

Content Validity is based on the extent to which a measurement reflects the specific

intended domain of content.

Content validity can be illustrated using the following examples: Researchers aim to

study mathematical learning and create a survey to test for mathematical skill. If these

researchers only tested for multiplication and then drew conclusions from that survey,
their study would not show content validity because it excludes other mathematical

functions. Although the establishment of content validity for placement-type exams

seems relatively straight-forward, the process becomes more complex as it moves into the

more abstract domain of socio-cultural studies. For example, a researcher needing to

measure an attitude like self-esteem must decide what constitutes a relevant domain of

content for that attitude. For socio-cultural studies, content validity forces the researchers

to define the very domains they are attempting to study.

The usual procedure in assessing the content validity of a measure is to use professional

or experts in the particular field. The instrument is given to two groups of experts, one

group is requested to assess what concept the instrument is trying to measure. The other

group is asked to determine whether the set of items or checklist accurately represents the

concept under study.
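Such expert judgements are often summarised as the proportion of experts who rate each item as relevant to the concept (a content validity index). The sketch below is a minimal, hypothetical tabulation of that kind of rating.

# Hypothetical relevance verdicts (1 = relevant, 0 = not relevant) from five experts
expert_ratings = {
    "item 1": [1, 1, 1, 1, 0],
    "item 2": [1, 1, 1, 1, 1],
    "item 3": [0, 1, 0, 1, 0],
}

for item, votes in expert_ratings.items():
    cvi = sum(votes) / len(votes)  # proportion of experts rating the item relevant
    print(f"{item}: content validity index = {cvi:.2f}")

Items with a low proportion would normally be revised or dropped before the instrument is used.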

(c) Criterion related validity

Criterion related validity, also referred to as instrumental validity, is used to demonstrate

the accuracy of a measure or procedure by comparing it with another measure or

procedure which has been demonstrated to be valid. For example, imagine a hands-on

driving test has been shown to be an accurate test of driving skills. By comparing the

scores on the written driving test with the scores from the hands-on driving test, the

written test can be validated by using a criterion related strategy in which the hands-on

driving test is compared to the written test.

Types

¾ Predictive validity – refers to the degree to which obtained data predicts the future
behaviour of subjects e.g. B. Com graduates

¾ Concurrent validity- refers to the degree to which data are able to predict the

behaviour of subjects in the present and not in the future e.g. psychiatry

Internal and external validity

Researchers should be concerned with both external and internal validity.

¾ External validity refers to the extent to which the results of a study are

generalizable or transferable. External validity is the degree to which research

findings can be generalized to populations and environments outside the

experimental setting. It has to do with representativeness of the sample with regard

to the target population.

¾ Internal validity refers to (1) the rigor with which the study was conducted (e.g.,

the study's design, the care taken to conduct measurements, and decisions

concerning what was and wasn't measured) and (2) the extent to which the

designers of a study have taken into account alternative explanations for any

causal relationships they explore. In studies that do not explore causal

relationships, only the first of these definitions should be considered when

assessing internal validity. Internal validity depends on the degree to which

extraneous variables have been controlled for in the study

Internal and external validity are inversely related to each other.

Threats to internal validity

¾ History – refers to occurrence of events that influence experimental units during the course of the study

¾ Maturation – refers to the biological or psychological processes which occur among


the subjects in a relatively short time and which influence research findings

¾ Instrumentation – changes in the measuring instruments or observers during the study

¾ Pre-testing – exposure to the pre-test may influence later responses; a solution is to use equivalent form tests

¾ Statistical regression

¾ Attrition – subjects dropping out of the study before completion leads to error and bias in the sample

¾ Differential selection – occurs when subjects are systematically selected for a study (e.g. volunteers versus non-volunteers); the resulting bias leads to error

¾ Selection – maturation interaction

¾ Ambiguity - when correlation is taken for causation

¾ Apprehension - when people are scared to respond to your study

¾ Demoralization - when people get bored with your measurements

¾ Diffusion - when people figure out your test and start mimicking symptoms

Threats to external validity

¾ Accessible and target population

¾ Control of extraneous variables

¾ Pre-test treatment interaction

¾ Explicit description of the sample

¾ Multi-treatment interference
