
RESEARCH METHODOLOGY

UNIT-2: FUNDAMENTAL CONCEPT ON RESEARCH


HYPOTHESIS
 Hypothesis = “Hypo” + “Thesis”; “hypo” means before, and “thesis” means theory.
 It literally means a “pre-theory”.
 It may be a guess or supposition that becomes the basis for our action or investigation in research.
 A hypothesis is a precise, testable statement of what the researcher predicts will be the
outcome of the study. It is stated at the start of the study.
 This usually involves proposing a possible relationship between two variables: the
independent variable (what the researcher changes) and the dependent variable (what
the researcher measures).
 A fundamental requirement of a hypothesis is that it can be tested against reality, and
can then be supported or rejected.
Examples:
“The greater number of coal plants in a region (independent variable) increases water
pollution (dependent variable)”
If you change the independent variable (building more coal factories), it will change the
dependent variable (amount of water pollution).
“What is the effect of diet or regular soda (independent variable) on blood sugar levels
(dependent variable)?”
If you change the independent variable (the type of soda you consume), it will change the
dependent variable (blood sugar levels).
TYPES OF HYPOTHESIS
1. Simple Hypothesis
• It shows a relationship between one dependent variable and a single independent
variable. For example – If you eat more vegetables, you will lose weight faster. Here,
eating more vegetables is an independent variable, while losing weight is the
dependent variable.
2. Complex Hypothesis
• It shows the relationship between two or more dependent variables and two or more
independent variables. For example, eating more vegetables and fruits leads to weight
loss, glowing skin, and a reduced risk of many diseases such as heart disease, high
blood pressure and some cancers.
3. Directional Hypothesis
• It predicts the direction of the expected relationship, reflecting the researcher's
commitment to a particular outcome. For example, children aged four who eat proper
food over a five-year period will have higher IQ levels than children who do not have
proper meals. This states both the effect and its direction.
4. Non-directional Hypothesis
• It is used when there is no theory involved. It is a statement that a relationship exists
between two variables, without predicting the exact nature (direction) of the
relationship.
5. Null Hypothesis
• It states the opposite of the research hypothesis: that there is no relationship
between the independent and dependent variables. It is a negative statement,
denoted by the symbol “H0”.
6. Alternative Hypothesis
• It states that there is a relationship between the two variables of the study and
that the results are significant to the research topic.
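A minimal sketch of how a null hypothesis is tested in practice, using the soda example from earlier; the blood-sugar readings and the 100 mg/dL baseline are hypothetical, and a t-test would be more appropriate for a sample this small:

```python
from statistics import NormalDist, mean, stdev
from math import sqrt

# hypothetical blood-sugar readings (mg/dL) after drinking regular soda
readings = [108, 112, 105, 115, 110, 109, 113, 107]
mu0 = 100  # H0: mean blood sugar equals the 100 mg/dL baseline (no effect)

# one-sample z-test sketch
z = (mean(readings) - mu0) / (stdev(readings) / sqrt(len(readings)))
p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
print(p < 0.05)  # a small p-value leads us to reject H0 in favour of H1
```

If the p-value falls below the chosen significance level (commonly 0.05), the null hypothesis is rejected in favour of the alternative hypothesis.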

Functions/Importance of Hypothesis
• A hypothesis makes observations and experiments possible.
• It becomes the starting point for the investigation.
• A hypothesis helps in verifying the observations.
• It helps in directing the inquiry in the right direction.
SAMPLING
 A sample is a small, manageable version of a larger group: a subset containing the
characteristics of the larger population. Samples are used in statistical testing when
population sizes are too large for the test to include all possible members. A sample
should represent the whole population and should not be biased toward a specific
attribute.
 Sampling is a process used in research in which a predetermined number of
observations are taken from a larger population.
 In research terms a sample is a group of people, objects, or items that are taken
from a larger population for measurement. The sample should be representative
of the population to ensure that we can generalise the findings from the research
sample to the population as a whole.
 Sampling is the process of selecting subsets from a population of interest so that
by studying the sample we may fairly generalize our results back to the population
from which they were chosen.
For example, if a drug manufacturer would like to research the adverse side effects of
a drug on the country’s population, it is almost impossible to conduct a study
that involves everyone. In this case, the researcher selects a sample of
people from each demographic and then studies them, which gives
indicative feedback on the drug’s behavior.
CHARACTERISTICS OF SAMPLING
1. Much cheaper
2. Saves time
3. More reliable
4. Very suitable for carrying out surveys
5. Scientific in nature
CHARACTERISTICS OF A GOOD SAMPLE

TYPES OF SAMPLING
1. Probability Sampling
 Probability sampling is a sampling technique in which researchers choose samples
from a larger population using a method based on the theory of probability.
 Probability sampling means that every member of the population has a chance of
being selected. It is mainly used in quantitative research. If you want to produce
results that are representative of the whole population, probability sampling
techniques are the most valid choice.
For example: selecting students from a class of 50 for a competition by having them
draw chits numbered from 1 to 50. Each student has an equal probability (1/50) of
being selected.

2. Non-Probability Sampling
 In non-probability sampling, the researcher chooses members for the research based
on convenience or judgment rather than at random. This sampling method has no
fixed or predefined selection process, which makes it difficult for all elements of a
population to have an equal opportunity of being included in the sample.
For example: selecting a sample of students for a competition based on their having
higher scores in Math and Science.
Types of Probability SAMPLING
i. Simple random sampling: One of the best probability sampling techniques for saving
time and resources is the simple random sampling method. It is a reliable method of
obtaining information in which every single member of a population is chosen
randomly, merely by chance. Each individual has the same probability of being
chosen to be a part of the sample.
For example, in an organization of 500 employees, if the HR team decides on
conducting team building activities, it is highly likely that they would prefer picking
chits out of a bowl. In this case, each of the 500 employees has an equal
opportunity of being selected.
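A minimal sketch of the chit-drawing idea using Python's standard library; the 500 employee IDs and the sample size of 25 are hypothetical:

```python
import random

employees = list(range(1, 501))        # hypothetical employee IDs 1..500
team = random.sample(employees, k=25)  # draw 25 without replacement
# every employee has the same 25/500 = 5% chance of selection
print(len(team), len(set(team)))       # 25 distinct employees
```

`random.sample` draws without replacement, so no employee can be picked twice, just as a chit cannot be drawn from the bowl twice.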

ii) Systematic sampling: Researchers use the systematic sampling method to choose
the sample members of a population at regular intervals. It requires selecting a
starting point and a fixed sampling interval that is then applied repeatedly.
Because the selection rule is predefined, this sampling technique is the least
time-consuming.
For example, a researcher intends to collect a systematic sample of 500 people in
a population of 5000. He/she numbers each element of the population from 1-
5000 and will choose every 10th individual to be a part of the sample (Total
population/ Sample Size = 5000/500 = 10).
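The 5000-person example above can be sketched as follows; choosing a random starting point within the first interval is an assumption (one common way to begin):

```python
import random

population = list(range(1, 5001))   # members numbered 1..5000
sample_size = 500
k = len(population) // sample_size  # interval: 5000 / 500 = 10
start = random.randrange(k)         # random start within the first interval
sample = population[start::k]       # every 10th member thereafter
print(len(sample))  # 500
```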
iii) Stratified sampling: Stratified sampling involves dividing the population into
subpopulations that may differ in important ways. It allows you to draw more precise
conclusions by ensuring that every subgroup is properly represented in the sample. To
use this sampling method, you divide the population into subgroups (called strata)
based on the relevant characteristic (e.g. gender, age range, income bracket, job role).
Based on the overall proportions of the population, you calculate how many people
should be sampled from each subgroup. Then you use random or systematic
sampling to select a sample from each subgroup.
Example: The company has 800 female employees and 200 male employees. You want to
ensure that the sample reflects the gender balance of the company, so you sort the
population into two strata based on gender. Then you use random sampling on each
group, selecting 80 women and 20 men, which gives you a representative sample of
100 people.
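The proportional allocation in the gender example can be sketched like this; the 800/200 split and the 100-person target mirror the text, while the ID lists themselves are hypothetical:

```python
import random

women = [f"W{i}" for i in range(800)]  # 800 female employees (hypothetical IDs)
men = [f"M{i}" for i in range(200)]    # 200 male employees
n = 100                                # desired sample size
total = len(women) + len(men)
# proportional allocation: each stratum contributes according to its share
n_women = n * len(women) // total      # 80
n_men = n * len(men) // total          # 20
sample = random.sample(women, n_women) + random.sample(men, n_men)
print(len(sample))  # 100, with the 80/20 gender balance preserved
```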

iv) Cluster sampling: Cluster sampling is a method in which the researchers divide the entire
population into sections, or clusters, that each represent the population. Clusters are
identified and included in a sample based on demographic parameters like age, sex,
location, etc. This makes it very simple for a survey creator to derive effective
inferences from the feedback.
Example: The company has offices in 10 cities across the country (all with roughly the same
number of employees in similar roles). You don’t have the capacity to travel to every
office to collect your data, so you use random sampling to select 3 offices – these are
your clusters.
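The 10-office example can be sketched as follows; the office size of 50 is an assumption, and in one-stage cluster sampling every member of each chosen cluster is included:

```python
import random

# hypothetical clusters: 10 offices with roughly equal, similar staff
offices = {f"city_{c}": [f"{c}-emp{i}" for i in range(50)] for c in range(10)}
chosen = random.sample(sorted(offices), 3)  # randomly pick 3 offices (clusters)
sample = [emp for office in chosen for emp in offices[office]]
print(len(chosen), len(sample))  # 3 clusters, 150 employees
```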
Types of Non-Probability SAMPLING
i) Convenience Sampling
• A convenience sample simply includes the individuals who happen to be most
accessible to the researcher.
• This is an easy and inexpensive way to gather initial data, but there is no way to tell
if the sample is representative of the population, so it can’t produce generalizable
results.
For example, startups and NGOs usually conduct convenience sampling at a mall to
distribute leaflets of upcoming events or promotion of a cause – they do that by
standing at the mall entrance and giving out pamphlets randomly.
ii) Purposive Sampling
• This type of sampling, also known as judgement sampling, involves the researcher
using their expertise to select a sample that is most useful to the purposes of the
research.
• Researchers purely consider the purpose of the study, along with the
understanding of the target audience.
For instance, researchers may want to understand the thought process of people
interested in studying for their master’s degree. The selection criterion will be: “Are
you interested in doing your masters in …?” and those who respond with a “No”
are excluded from the sample.
iii) Snowball Sampling
• If the population is hard to access, snowball sampling can be used to recruit
participants via other participants.
For example, you are researching experiences of homelessness in your city. Since
there is no list of all homeless people in the city, probability sampling isn’t
possible. You meet one person who agrees to participate in the research, and she
puts you in contact with other homeless people she knows in the area.
• Researchers also implement this sampling method in situations where the topic is
highly sensitive and not openly discussed, for example surveys to gather
information about HIV/AIDS.
iv) Quota sampling: In quota sampling, members are selected based on a pre-set
standard. Because the sample is formed based on specific attributes, it will have
the same qualities found in the total population. It is a rapid method of collecting
samples.
For example, choosing only male students for the competition.
ADVANTAGES OF SAMPLING
 Low cost of sampling
 Less time-consuming
 High scope and reliability: less analysis and processing required
 High accuracy of data: less error-prone
 Convenient for organizations: sampling can be carried out with limited resources
 Suitable when resources are limited
 When the universe (population) is very large, sampling is the only practical method
for collecting the data.
LIMITATIONS OF SAMPLING
 Chances of bias: all attributes may not be represented equally
 Difficulty in selecting a truly representative sample
 Need for subject-specific knowledge: untrained manpower cannot perform
sampling
 Inadequacy of the samples
 Impossibility of sampling: where all individuals are entirely different,
sampling is not possible
 Chances of committing errors in sampling
FIELD WORK
 Field research, field studies, or fieldwork is the collection of raw data outside
a laboratory, library, or workplace setting. The approaches and methods used in
field research vary across disciplines.
 For example, biologists who conduct field research may simply observe
animals interacting with their environments, whereas social scientists conducting
field research may interview or observe people in their natural environments to
learn their languages, folklore, and social structures.
 Fieldwork is essential for consolidating theory and practice. It creates an opportunity
to integrate classroom knowledge with experiential learning.
 Field work is the process of observing and collecting data about people, cultures,
and natural environments. Field work is conducted in the wild of our everyday
surroundings rather than in the semi-controlled environments of a lab or
classroom. This allows researchers to collect data about the dynamic places,
people, and species around them. Field work enables students and researchers to
examine the way scientific theories interact with real life.
METHODS OF FIELD WORK
There are 4 main methods of conducting field research, and they are as follows:
1. Ethnography
 A branch of anthropology (the study of human behaviour, culture, human biology, etc.)
 A descriptive study of a particular human society, or the process of making such a
study. Contemporary ethnography is based almost entirely on fieldwork and
requires the complete immersion of the anthropologist in the culture and
everyday life of the people who are the subject of the study.
 A classic example of ethnographic research would be an anthropologist traveling
to an island, living within the society on said island for years, and researching its
people and culture through a process of sustained observation and participation.

2. Qualitative Interviews
The goal of qualitative interviews is to provide a researcher with a breadth of
information that they can sift through in order to make inferences about their
sample group. It does so by directly asking participants questions in interviews.
There are three types of qualitative interviews: informal, conversational, and
open-ended.
3. Direct observation
This method of field research involves researchers gathering information on their
subject through close visual inspection in their natural setting. The researcher, and
in this case the observer, remains unobtrusive and detached in order to not
influence the behavior of their subject.

4. Participant Observation
In this method of field research, the researchers join people by participating in certain
group activities relating to their study in order to observe the participants in the
context of said activity.
STEPS OF FIELD WORK
The following are some key steps taken in conducting field research:
1. Identifying and obtaining a team of researchers who are specialized in the field of
research of the study.

2. Identifying the right method of field research for your research topic. The various
methods of field research are discussed above. A lot of factors will play a role in
deciding what method a researcher chooses, such as duration of the study,
financial limitations, and type of study.

3. Visiting the site/setting of the study in order to study the main subjects of the
study.

4. Analyzing the data collected through field research.

5. Constructively communicating the results of the field research, whether through a
research paper, a newspaper article, etc.
REASONS TO CONDUCT FIELD RESEARCH

• To understand the context of studies: field research allows researchers to identify
the setting of their subjects and draw correlations between how their surroundings
may be affecting certain behaviors.

• To acquire in-depth and high quality data: Field research provides in-depth
information as subjects are observed and analysed for a long period of time.

• When there is a lack of data on a certain subject: field research can be used to fill
gaps in data that may only be filled through in-depth primary research.
Advantages of field research
 Can yield detailed data as researchers get to observe their subjects in their own
setting.
 May uncover new social facts: Field research can be used to uncover social facts
that may not be easily discernible, and that the research participants may also be
unaware of.
 No tampering with variables, as methods of field research are conducted in natural
settings in the real world.
Disadvantages
 Expensive to collect: most methods of field research require the researcher to
immerse themselves in new settings for long periods of time in order to acquire
in-depth data. This can be expensive.
 Time consuming: Field research is time consuming to conduct.
 Information gathered may lack breadth: Field research involves in-depth studies
and will usually tend to have a small sample group as researchers may be unable
to collect in-depth data from large groups of people.
VALIDITY
 Reliability and Validity are concepts used to evaluate the quality of research. They
indicate how well a method, technique or test measures something. Reliability is
about the consistency of a measure, and validity is about the accuracy of a
measure.
 It’s important to consider reliability and validity when you are creating
your research design, planning your methods, and writing up your results,
especially in quantitative research.

1. Validity
• The conclusions you draw from your research (whether from analyzing surveys,
focus groups, experimental designs, or other research methods) are only useful if
they’re valid.
• How “true” are these results? How well do they represent the thing you’re actually
trying to study? Validity is used to determine whether research measures what it
intended to measure and to approximate the truthfulness of the results.
• Validity refers to how accurately a method measures what it is intended to
measure.
 Validity is how researchers talk about the extent that results represent reality.
Research methods, quantitative or qualitative, are methods of studying real
phenomenon – validity refers to how much of that phenomenon they measure vs.
how much “noise,” or unrelated information, is captured by the results.
 For example, if a weight measuring scale is wrong by 4kg (it deducts 4 kg of the
actual weight), it can be specified as reliable, because the scale displays the same
weight every time we measure a specific item. However, the scale is not valid
because it does not display the actual weight of the item.

Validity can be divided into six types:
1. Face Validity
• Face validity is how valid your results seem based on what they look like. This is
the least scientific method of validity, as it is not quantified using statistical
methods.
• Face validity is not validity in a technical sense of the term. It is concerned with
whether it seems like we measure what we claim.
• Here we look at how valid a measure appears on the surface and make subjective
judgments based on that.
For example:
Imagine you give a survey that appears valid to the respondents, with questions
selected because they look valid to the administrator. The administrator asks a group
of random people, untrained observers, if the questions appear valid to them.
• In research it’s never enough to rely on face judgments alone; more
quantifiable methods of validity are necessary in order to draw acceptable
conclusions.
2. Content Validity
Content validity is whether or not the measure used in the research covers all of the
content in the underlying construct (the thing you are trying to measure).
For example, a test that aims to measure a class of students’ level of Spanish contains
reading, writing and speaking components, but no listening component. Experts
agree that listening comprehension is an essential aspect of language ability, so
the test lacks content validity for measuring the overall level of ability in Spanish.
3. Construct Validity
Construct validity relates to assessing the suitability of a measurement tool for
measuring the phenomenon being studied. Applying construct validity can be
effectively facilitated by involving a panel of experts closely familiar with both the
measure and the phenomenon.
Example: using construct validity, the level of leadership competency in any given
organisation can be assessed by devising a questionnaire for operational-level
employees that asks about their motivation to perform their duties on a daily basis.

4. Criterion-Related Validity
The extent to which the result of a measure corresponds to other valid measures of
the same concept.
For example, a survey is conducted to measure the political opinions of voters in a
region. If the results accurately predict the later outcome of an election in that
region, this indicates that the survey has high criterion validity.
5. Internal Validity
 Internal validity refers to how well the research findings match reality.
 If the effect on the dependent variable is due only to the independent variable(s),
then internal validity is achieved: the observed effect can be attributed to the
variable the researcher manipulated, rather than to outside factors.
 Put another way, internal validity is how you can tell that your research “works” in
a research setting. Within a given study, does the variable you change affect the
variable you’re studying?
6. External Validity
 External validity refers to the extent to which the research findings can be
generalized to other environments.
 It is the extent to which the results of a study can be generalized beyond the
sample; that is, you can apply your findings to other people and settings.
 A laboratory setting (or other research setting) is a controlled environment with
fewer variables. External validity refers to how well the results hold, even in the
presence of all those other variables.
Reliability
 Reliability refers to how consistently a method measures something.
 If the same result can be consistently achieved by using the same methods under
the same circumstances, the measurement is considered reliable.
For Example,
You measure the temperature of a liquid sample several times under identical
conditions. The thermometer displays the same temperature every time, so the
results are reliable.
A doctor uses a symptom questionnaire to diagnose a patient with a long-term
medical condition. Several different doctors use the same questionnaire with the
same patient but give different diagnoses. This indicates that the questionnaire
has low reliability as a measure of the condition.
Types of Reliability
1. Test-retest reliability
Test-retest reliability measures the consistency of results when you repeat the same
test on the same sample at a different point in time. You use it when you are
measuring something that you expect to stay constant in your sample.
Why is it important?
• Many factors can influence your results at different points in time: for example,
respondents might experience different moods, or external conditions might affect
their ability to respond accurately.
• Test-retest reliability can be used to assess how well a method resists these factors
over time. The smaller the difference between the two sets of results, the higher
the test-retest reliability.
How to measure it?
• To measure test-retest reliability, you conduct the same test on the same group of
people at two different points in time. Then you calculate the correlation between
the two sets of results.
Test-retest reliability example
• You devise a questionnaire to measure the IQ of a group of participants (a
property that is unlikely to change significantly over time). You administer the test
two months apart to the same group of people, but the results are significantly
different, so the test-retest reliability of the IQ questionnaire is low.
2. Interrater reliability
 Interrater reliability (also called interobserver reliability) measures the degree of
agreement between different people observing or assessing the same thing. You
use it when data is collected by researchers assigning ratings, scores or categories
to one or more variables.
Why is it important?
• People are subjective, so different observers’ perceptions of situations and
phenomena naturally differ. Reliable research aims to minimize subjectivity as
much as possible so that a different researcher could replicate the same results.
• When designing the scale and criteria for data collection, it’s important to make
sure that different people will rate the same variable consistently with minimal
bias. This is especially important when there are multiple researchers involved in
data collection or analysis.
How to measure it?
• To measure interrater reliability, different researchers conduct the same
measurement or observation on the same sample. Then you calculate the
correlation between their different sets of results. If all the researchers give similar
ratings, the test has high interrater reliability.
Interrater reliability example
 Based on an assessment criteria checklist, five examiners submit substantially different
results for the same student project. This indicates that the assessment checklist has
low inter-rater reliability (for example, because the criteria are too subjective).
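One common way to quantify agreement between two raters is Cohen's kappa, sketched below; the pass/fail grades are hypothetical, and the formula corrects observed agreement for the agreement expected by chance:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters (Cohen's kappa)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # probability both raters pick the same category by chance
    expected = sum(ca[c] * cb[c] for c in ca) / n ** 2
    return (observed - expected) / (1 - expected)

# hypothetical pass/fail verdicts from two examiners on ten projects
a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
b = ["pass", "pass", "fail", "fail", "fail", "pass", "pass", "fail", "pass", "fail"]
print(round(cohens_kappa(a, b), 2))  # 0.6 → moderate agreement
```

A kappa of 1 means perfect agreement, 0 means agreement no better than chance.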

3. Parallel forms reliability
Parallel forms reliability measures the correlation between two equivalent versions of a
test. You use it when you have two different assessment tools or sets of questions
designed to measure the same thing.
Why is it important?
• If you want to use multiple different versions of a test (for example, to avoid
respondents repeating the same answers from memory), you first need to make sure
that all the sets of questions or measurements give reliable results.
How to measure it?
• The most common way to measure parallel forms reliability is to produce a large set of
questions to evaluate the same thing, then divide these randomly into two question
sets.
• The same group of respondents answers both sets, and you calculate the correlation
between the results. High correlation between the two indicates high parallel forms
reliability.
Parallel forms reliability example
If a respondent answers the questions in set A correctly, they should also answer the
equivalent questions in set B correctly for the two forms to be considered
parallel-forms reliable.

4. Internal consistency
• Internal consistency is a method of estimating whether the different parts of a
test are measuring the same thing.
• You can calculate internal consistency without repeating the test or involving other
researchers, so it’s a good way of assessing reliability when you only have one data
set.
Why is it important?
• When you devise a set of questions or ratings that will be combined into an overall
score, you have to make sure that all of the items really do reflect the same thing.
If responses to different items contradict one another, the test might be unreliable.
How to measure it?
There are two approaches to measure internal consistency:
i) Split half correlation
This involves splitting the items into two sets, such as the first and second halves of
the items or the even- and odd-numbered items. Then a score is computed for
each set of items, and the correlation between the two sets of scores is examined.
ii) Cronbach’s alpha method
Conceptually, α is the mean of all possible split-half correlations for a set of items. It
can be calculated using a mathematical formula, and its value lies between 0 and 1.
An alpha value greater than 0.9 indicates very high internal consistency.

Internal consistency example
You design a questionnaire to measure self-esteem. If you randomly split the results
into two halves, there should be a strong correlation between the two sets of
results. If the two results are very different, this indicates low internal consistency.
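The standard formula, α = k/(k−1) · (1 − Σ item variances / variance of totals), can be sketched as follows; the three-item self-esteem scores are hypothetical:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per test item, all over the same respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]       # per-respondent totals
    item_var = sum(pvariance(scores) for scores in items)  # sum of item variances
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# hypothetical 3-item self-esteem scale answered by five respondents
items = [
    [4, 3, 5, 2, 4],  # item 1
    [5, 3, 4, 2, 4],  # item 2
    [4, 2, 5, 1, 3],  # item 3
]
print(round(cronbach_alpha(items), 2))  # 0.94 → high internal consistency
```

Because the three items rise and fall together across respondents, the item variances are small relative to the variance of the totals, which is what pushes α toward 1.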
Reliability vs Validity
Reliability is about the consistency of a measure, while validity is about its accuracy. A
measure can be reliable without being valid (like the scale that is consistently 4 kg
off), but a measure cannot be valid unless it is also reliable.