
Research Methods Ass

The document discusses principles of data collection and evaluation ethics. It addresses the importance of avoiding harm to stakeholders, ensuring voluntary participation, and maintaining confidentiality of information. Planning the entire data collection process from start to finish is also key to ensure the reliability, validity and credibility of the data.

The principles of data collection include:

 keeping things simple;

 planning the whole process;

 ensuring reliability, credibility and validity; and

 addressing the ethics of data collection.1

Addressing ethics

Monitoring and evaluation require adherence to certain ethical principles that act as controls on
the data collection process. However, there is little or no widely recognised set of ethics for
organisations, programmes or projects carrying out internal monitoring or review processes.
Nonetheless, there are some fundamental ethical principles that should always be adhered to
whenever monitoring and evaluation are carried out. These include, but are not limited to, the
following.

Avoidance of harm - Avoidance of harm is a key principle whenever data is collected. People
should not be put in a position where they might suffer because of the information they
provide. For example, villagers supplying information about government services, women
supplying information about domestic violence, children supplying information about bullying,
or even staff providing information on leadership culture within a CSO could all be considered
potentially at risk of harm. Measures should always be taken to mitigate the possibility of harm.
If this is not possible then the data should not be collected.

Benefits and costs to stakeholders - these need to be considered. For example, there may seem
little harm in getting together a group of farmers to engage in a focus group discussion about
farming methods, but in some cases this might mean taking them away from their fields at
harvest time. Where possible, it is important to balance the costs and benefits of data collection
activities to the stakeholders themselves, as well as to the organisation, programme or project.1

Participation- this should always be voluntary and people should not be pressured into taking
part. In fact, any attempt to pressurise people into engaging with monitoring and evaluation
almost always backfires because people are usually unwilling to tell the truth in situations where
they feel forced to participate.

Confidentiality - this needs to be respected, as some people may only be willing to express
opinions provided they are not quoted, or provided the information is not used widely. In such
cases, this may need to be recorded clearly alongside any notes taken. The information should
not be disseminated or used without the consent of the person who supplied it, although it is
normally acceptable to use the information to shape judgements or help to form conclusions.

Keeping things simple

In any project, programme or organisation, basic monitoring needs to be carried out. At project level
there is often little difference between monitoring and project management. For example, project
monitoring may involve simple processes such as conducting regular meetings, reviewing documents or
records, or discussing issues informally with staff. In these cases there is no need to engage in complex
methodologies of data collection and analysis. Sometimes, more complex methodologies need to be
adopted. For example, an evaluation of a large programme may require a formal methodology, such as a
Randomised Controlled Trial or Qualitative Comparative Analysis, which requires specialist skills. But it
is important not to undertake any data collection or analysis methodology that is more complex or
expensive than necessary. The key, therefore, is to keep things as simple as possible.2

Planning the whole process

It is always important to know why information is needed before collecting it. A common mistake is to
collect information before working out how it will be analysed or used. Sometimes, this means that the
information cannot be properly analysed and used because it has not been collected in the right way, at
the right time or in the right place. Some basic questions to ask before collecting any information are as
follows.

• What information do you intend to gather?

• Where will you get this information, and how will it be collected?

• Why is the information needed, and what questions is the information going to answer?

• Who will use the information once collected?

• How will the information be analysed?

• How will any analyses be used?

If the answers to any of these questions are unknown or uncertain then it is important to find out the
answers before going any further. Huge amounts of time, money and energy are wasted every year
because information is collected that is never analysed or used. “If you collect information just because
you think it might be useful at some stage in the future then there is a very good chance it will never be
used.” The golden rule is ‘if data is not being used then stop collecting it’. The time otherwise spent
collecting the data can then be used for something more productive.

Ensuring reliability, credibility and validity

Data is considered reliable when there is confidence that similar results would be obtained if the data
collection exercise were repeated within the same period, using the same methods. If data is reliable, it
is not too heavily dependent on the skills and honesty of the person collecting it. Data is valid when it
measures or describes what it set out to measure or describe. Data is not valid if it is misused. For
example, information collected on attendance at a training session would be valid if used to show that
the training session was held and people turned up. But information on attendance would not be valid if
used to claim that participants had increased their awareness or understanding of an issue. Another
common mistake is to get information from just one or two stakeholders and then use this information
as if it represents the views of a much wider population. Data is considered credible when it is
believable and consistent with a ‘common sense’ view of the world. But just because data is not credible
does not mean it is inaccurate; it simply means that it needs further checking. For example, if a small
pilot project claimed to have data showing it had greatly increased the living standards of farmers in a
region, the data might not be considered credible at first. But further data collection and analysis might
confirm the findings and explain why such large changes had occurred. In that case the new data would
be considered credible.4

2. Differences between Validity and Reliability

Reliability and validity are closely related, but they mean different things. A measurement can be
reliable without being valid. However, if a measurement is valid, it is usually also reliable.

Reliability

Reliability refers to how consistently a method measures something. If the same result can be
consistently achieved by using the same methods under the same circumstances, the measurement is
considered reliable.

You measure the temperature of a liquid sample several times under identical conditions. The
thermometer displays the same temperature every time, so the results are reliable.

A doctor uses a symptom questionnaire to diagnose a patient with a long-term medical condition.
Several different doctors use the same questionnaire with the same patient but give different diagnoses.
This indicates that the questionnaire has low reliability as a measure of the condition.
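The idea of reliability as consistency across repeated measurements can be sketched numerically. The following is an illustrative Python sketch using made-up scores (not from the document), treating the Pearson correlation between two rounds of the same questionnaire as a simple test-retest reliability measure:

```python
# Illustrative sketch: test-retest reliability as the correlation between
# two rounds of the same measurement on the same respondents.
# The scores below are hypothetical example data.

def mean(xs):
    return sum(xs) / len(xs)

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Same questionnaire given twice to the same five respondents.
first_round = [12, 15, 11, 18, 14]
second_round = [13, 15, 10, 19, 14]

r = pearson(first_round, second_round)
# A coefficient close to 1 suggests the instrument is reliable;
# a low coefficient means repeated measurements disagree.
print(round(r, 2))  # -> 0.98
```

In practice the same calculation is available as `scipy.stats.pearsonr`; it is written out here only to make the idea of "consistent results under repetition" concrete.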

Validity

Validity refers to how accurately a method measures what it is intended to measure. If research has high
validity, that means it produces results that correspond to real properties, characteristics, and variations
in the physical or social world.

High reliability is one indicator that a measurement is valid. If a method is not reliable, it probably isn’t
valid.

If the thermometer shows different temperatures each time, even though you have carefully controlled
conditions to ensure the sample’s temperature stays the same, the thermometer is probably
malfunctioning, and therefore its measurements are not valid.

If a symptom questionnaire results in a reliable diagnosis when answered at different times and with
different doctors, this indicates that it has high validity as a measurement of the medical condition.
However, reliability on its own is not enough to ensure validity. Even if a test is reliable, it may not
accurately reflect the real situation. For example, the thermometer that you used to test the sample
gives reliable results, but it has not been calibrated properly, so its readings are 2 degrees lower than
the true value. Therefore, the measurement is not valid.

A group of participants take a test designed to measure working memory. The results are reliable, but
participants’ scores correlate strongly with their level of reading comprehension. This indicates that the
method might have low validity: the test may be measuring participants’ reading comprehension instead
of their working memory.

Validity is harder to assess than reliability, but it is even more important. To obtain useful results, the
methods you use to collect data must be valid: the research must be measuring what it claims to
measure. This ensures that your discussion of the data and the conclusions you draw are also valid.
The table below differentiates validity and reliability.
• Validity implies the extent to which the research instrument measures what it is intended to
measure. Reliability refers to the degree to which an assessment tool produces consistent results
when repeated measurements are made.

• Validity refers to the ability of the instrument/test to measure what it is supposed to measure.
Reliability refers to the reproducibility of the results when repeated measurements are done.

• Validity relates to the correct applicability of the instrument/test/procedure in a needed
situation. Reliability relates to the extent to which an experiment, test or any procedure gives the
same result on repeated trials.

• Validity answers, ‘Is it the right instrument/test for what I need to measure?’ Reliability
answers, ‘Can the results obtained be replicated if the test is repeated?’

• Validity looks at accuracy; reliability looks at repeatability/consistency.

• Validity mainly focuses on the outcome; reliability mainly focuses on maintaining a consistent
result.

• Influencing factors for validity are process, purpose, theory matters, logical implications, etc.
Influencing factors for reliability are test length, test score variability, heterogeneity, etc.

• Examples of different types of validity are face validity, construct validity and content validity.
Examples of different types of reliability are test-retest reliability, parallel forms reliability and
intra-rater reliability.
