Research method lecture notes

The document provides an overview of statistical procedures, focusing on descriptive statistics, sampling methods, and data collection techniques. It outlines key concepts such as measures of central tendency, dispersion, and distribution shape, as well as the importance of sampling design and potential errors. Additionally, it discusses the types of data, including primary and secondary data, and various methods for collecting and presenting data.

STATISTICAL PROCEDURES

Descriptive statistics

Descriptive statistics is a field of statistics that focuses on summarizing, organizing, and

presenting data in a meaningful way. Unlike inferential statistics, which involves making

predictions or generalizations about a population based on a sample, descriptive statistics simply

outline the characteristics of a dataset. It plays a vital role in research, business, economics,

healthcare, and various other fields where data interpretation is crucial.

By simplifying complex datasets, descriptive statistics provide numerical and graphical

representations that enhance comprehension. As a fundamental component of statistical analysis,

it lays the groundwork for further exploration and informed decision-making. Its applications

extend across multiple disciplines, including finance, psychology, social sciences, and

engineering.

Types of Descriptive Statistics

Descriptive statistics can be broadly categorized into three types:

1. Measures of Central Tendency

These measures summarize data by identifying a central point within a dataset. The three main

measures of central tendency are:

 Mean: Also known as the average, it is calculated by summing all values in a dataset and

dividing by the total number of observations. The mean provides a general representation

of the data but can be affected by extreme values (outliers). It is widely used in economic

data analysis, business reports, and social sciences.


 Median: The median is the middle value of an ordered dataset. If the dataset has an even

number of values, the median is the average of the two middle values. It is less sensitive

to outliers than the mean, making it a more reliable measure when dealing with skewed

distributions. For instance, median income is often used instead of mean income because

it is not influenced by extremely high or low values.

 Mode: The mode is the most frequently occurring value in a dataset. A dataset may have

no mode, one mode (unimodal), or multiple modes (bimodal or multimodal). The mode is

particularly useful in categorical data analysis, such as determining the most common

customer preference or product type in a survey.
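The three measures above can be computed directly with Python's standard library; the scores below are hypothetical illustration data:

```python
import statistics

# Hypothetical exam scores (illustrative only)
scores = [55, 60, 60, 72, 80, 95]

mean = statistics.mean(scores)      # sum of values / count
median = statistics.median(scores)  # even count: average of 60 and 72 = 66.0
mode = statistics.mode(scores)      # 60 occurs most often
```

Note how the single large score (95) pulls the mean above the median, a small instance of the outlier sensitivity described above.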

2. Measures of Dispersion (Variability)

These measures provide insight into the spread or variability of data points within a dataset. The

main measures of dispersion include:

 Range: The range is the difference between the maximum and minimum values in a

dataset. While easy to compute, it does not provide information about data distribution.

For instance, if two datasets have the same range but different distributions, the range

alone would not be sufficient to compare their variability.

 Variance: Variance is a statistical measure that quantifies how much the values in a

dataset deviate from the mean (average). It indicates the spread or dispersion of data

points. A higher variance means greater spread (data points are far from the mean),

while a lower variance means the data points are closer to the mean.

 Standard Deviation: The standard deviation is the square root of the variance and

represents the average distance of data points from the mean. A higher standard deviation
indicates greater dispersion. Standard deviation is commonly used in quality control,

finance, and experimental sciences to measure consistency and predictability.

 Interquartile Range (IQR): The IQR is the range between the first quartile (Q1) and the

third quartile (Q3), which eliminates the influence of extreme values and better represents

data spread. It is commonly used in box plots to visualize data distributions.
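As a sketch of how these dispersion measures relate to one another, again using Python's standard library (the data values are hypothetical):

```python
import statistics

data = [4, 8, 6, 5, 3, 7, 9, 5]  # hypothetical observations

data_range = max(data) - min(data)            # 9 - 3 = 6
variance = statistics.variance(data)          # sample variance (n - 1 denominator)
std_dev = statistics.stdev(data)              # square root of the variance
q1, q2, q3 = statistics.quantiles(data, n=4)  # quartile cut points
iqr = q3 - q1                                 # spread of the middle 50% of the data
```

`statistics.variance` uses the sample (n - 1) denominator; `statistics.pvariance` gives the population version.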

3. Measures of Distribution Shape

These measures describe the shape and symmetry of a dataset. The two key components are:

 Skewness: Skewness measures the asymmetry of a dataset. A dataset can be positively

skewed (right-skewed), negatively skewed (left-skewed), or symmetric (zero skewness).

Positive skewness indicates a long right tail, with most values concentrated on the left, while

negative skewness indicates a long left tail, with most values concentrated on the right.

 Kurtosis: Kurtosis measures the “tailedness” of a distribution. A high kurtosis indicates

heavy tails (outliers), while low kurtosis suggests light tails. It helps in identifying

whether a dataset has extreme values that may influence statistical conclusions.
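Both shape measures can be computed from central moments. The sketch below uses the population (moment-based) definitions, with a hypothetical dataset containing one large outlier:

```python
def shape_measures(data):
    """Moment-based skewness and kurtosis (population definitions)."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n  # variance
    m3 = sum((x - mean) ** 3 for x in data) / n  # third central moment
    m4 = sum((x - mean) ** 4 for x in data) / n  # fourth central moment
    skewness = m3 / m2 ** 1.5   # positive => long right tail
    kurtosis = m4 / m2 ** 2     # a normal distribution scores about 3
    return skewness, kurtosis

skew, kurt = shape_measures([1, 2, 2, 3, 3, 3, 4, 10])  # one large outlier
```

The outlier at 10 gives a positive skewness (right tail) and a kurtosis above 3 (heavier tail than a normal distribution), matching the interpretations above.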

Uses of Descriptive Statistics

Descriptive statistics are widely used in various fields, including business, healthcare, education,

and social sciences. The key applications include:

1. Data Summarization

Descriptive statistics help summarize large datasets in a meaningful way. Instead of analyzing

thousands of data points, measures like mean, median, and mode provide quick insights into the
data’s key characteristics. For example, a hospital can use descriptive statistics to summarize

patient wait times and treatment durations.

2. Comparison of Data

By using statistical measures such as mean, standard deviation, and quartiles, researchers can

compare different datasets effectively. For example, comparing the average income across

different regions helps in economic analysis. Businesses use descriptive statistics to evaluate

product performance across multiple markets.

3. Identifying Trends and Patterns

Descriptive statistics allow for the identification of trends in datasets. Businesses use statistics to

track sales performance over time, while epidemiologists monitor disease outbreaks using

statistical trends. Trend analysis is essential in stock market prediction, climate change studies,

and marketing research.

4. Decision Making

Data-driven decision-making is essential in industries like finance, healthcare, and policy-making.

Descriptive statistics provide a foundation for making informed decisions by presenting

relevant data insights. For example, insurance companies rely on statistical data to assess risks

and determine premium rates.

5. Data Visualization
Descriptive statistics facilitate data visualization through tables, graphs, and charts such as

histograms, pie charts, and box plots. These visual representations make it easier to interpret

complex datasets. Businesses use graphical summaries to present sales reports and market trends

to stakeholders.

Time Series

A time series is a sequence of data points recorded at successive time intervals. It represents

observations collected over time, typically at regular intervals such as daily, monthly, quarterly,

or annually. Time series data can be used to analyze trends, patterns, and seasonal variations,

making it essential in fields such as economics, finance, meteorology, and engineering.

Key components of time series analysis include:

1. Trend – The long-term movement or direction in the data.

2. Seasonality – Repeating patterns or cycles over a fixed period.

3. Cyclic Variations – Fluctuations that occur over longer periods without a fixed pattern.

4. Irregular Components – Random variations that cannot be attributed to trends or cycles.

Time series analysis helps in forecasting future values, identifying relationships between

variables, and making informed decisions based on historical data.
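A moving average is one simple way to estimate the trend component by smoothing out shorter-term fluctuations; the quarterly sales figures below are hypothetical:

```python
def moving_average(series, window):
    """Average each run of `window` consecutive values to reveal the trend."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

# Hypothetical quarterly sales with an upward trend and seasonal bumps
sales = [10, 14, 12, 16, 14, 18, 16, 20]
trend = moving_average(sales, 4)  # a 4-quarter window averages out the seasonal cycle
```

Here the smoothed values rise steadily, exposing the underlying trend that the seasonal swings obscure.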

Index Numbers

Index numbers are statistical measures used to express changes in economic data over time,

allowing for comparisons between different periods. They help track variations in prices,

production, income, or other economic indicators by converting complex data into a simplified

numerical form.

Types of Index Numbers:


1. Price Index – Measures changes in the price level of goods and services (e.g., Consumer

Price Index - CPI, Producer Price Index - PPI).

2. Quantity Index – Tracks changes in the volume or quantity of goods produced or

consumed over time.

3. Value Index – Reflects changes in both price and quantity to measure total revenue or

expenditure trends.

Importance of Index Numbers:

 Help in analyzing inflation, cost of living, and economic growth.

 Assist policymakers in making informed decisions.

 Used in business and finance for market trend analysis.

Index numbers are essential tools in economics and statistics for understanding relative changes

in various economic variables over time.
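As a minimal sketch of how a price index is computed (all prices and quantities are hypothetical), here is a simple price relative alongside a base-quantity-weighted (Laspeyres) price index:

```python
def price_relative(current_price, base_price):
    """Single-good price index: base period = 100."""
    return current_price / base_price * 100

def laspeyres_index(base_prices, current_prices, base_quantities):
    """Weighted aggregate price index using base-period quantities."""
    cost_now = sum(p * q for p, q in zip(current_prices, base_quantities))
    cost_base = sum(p * q for p, q in zip(base_prices, base_quantities))
    return cost_now / cost_base * 100

# Hypothetical two-good basket: prices rise from (2.0, 5.0) to (2.5, 6.0)
idx = laspeyres_index([2.0, 5.0], [2.5, 6.0], [10, 4])
```

A value above 100 indicates that the basket costs more in the current period than in the base period.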

SAMPLING

Sampling is the process of selecting a subset of individuals from a larger population for analysis,

making data collection more manageable. Randomness ensures that each member has a known

chance of selection, reducing bias and improving representativeness.

Sample design refers to the framework or strategy used to select a subset of individuals, items,

or observations from a larger population for study. It determines how the sample will be chosen,

ensuring that it accurately represents the entire population. A well-structured sample design

improves the reliability and validity of research findings while minimizing bias and errors.
Key Components of Sample Design:

1. Target Population – The group from which the sample is drawn.

2. Sampling Frame – A list or database containing elements of the population.

3. Sampling Method – The technique used to select the sample.

4. Sample Size – The number of observations selected for analysis.
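One common way to settle the sample-size component is Cochran's formula for estimating a proportion. The sketch below assumes a 95% confidence level, a 5% margin of error, and the conservative choice p = 0.5:

```python
import math

def cochran_sample_size(z=1.96, p=0.5, e=0.05):
    """Cochran's sample-size formula for estimating a proportion.

    z: z-score for the desired confidence level (1.96 for 95%)
    p: expected proportion (0.5 gives the most conservative, largest size)
    e: acceptable margin of error
    """
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

n = cochran_sample_size()  # about 385 respondents at 95% confidence, +/-5%
```

Tightening the margin of error to 3% (e=0.03) roughly triples the required sample, which illustrates why precision is costly.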

Sampling Errors

Sampling errors occur when a sample is not representative of the population. This can happen

due to chance or bias in the sampling process.

Types of Sampling Errors

1. Random Sampling Error: This occurs when a sample is randomly selected, but it still doesn't

accurately represent the population.

2. Biased Sampling Error: This occurs when the sampling process is systematically biased,

producing a sample that does not accurately represent the population.

Causes of Sampling Errors

1. Small Sample Size: A small sample size can lead to sampling errors.

2. Poor Sampling Method: Using a poor sampling method, such as convenience sampling, can

lead to sampling errors.


3. Non-Response: Non-response from participants can lead to sampling errors.

Non-Sampling Errors

Non-sampling errors occur when there are errors in the data collection process, data processing,

or data analysis.

Types of Non-Sampling Errors

1. Measurement Error: This occurs when data is collected incorrectly or inaccurately.

2. Data Entry Error: This occurs when data is entered incorrectly into a database or spreadsheet.

3. Analysis Error: This occurs when data is analyzed incorrectly or inaccurately.

Causes of Non-Sampling Errors

1. Poor Data Collection Methods: Using poor data collection methods, such as poorly designed

surveys, can lead to non-sampling errors.

2. Human Error: Human error, such as data entry mistakes, can lead to non-sampling errors.

3. Technical Issues: Technical issues, such as software glitches, can lead to non-sampling errors.

Types of sampling designs

Probability sampling is a sampling method in which every individual or unit in the population

has a known, nonzero chance of being selected. Because selection probabilities are known, results

from the sample can be generalized to the population; in simple random sampling, every member

has an equal chance of inclusion.

Key Characteristics of Probability Sampling


1. Random selection: Every individual or unit is selected randomly from the population.

2. Known probability: Every individual or unit has a known, nonzero chance of being selected.

3. Equal opportunity: In simple random sampling, every member of the population has an equal

chance of being included in the sample.

4. Representative sample: Because selection is governed by chance rather than the researcher's

judgment, the sample tends to be representative of the population.

Types of Probability Sampling

1. Simple Random Sampling: Every individual or unit is selected randomly from the population,

without replacement.

2. Systematic Random Sampling: Every nth individual or unit is selected from the population,

starting from a random point.

3. Stratified Random Sampling: The population is divided into strata, and a random sample is

selected from each stratum.

4. Cluster Random Sampling: The population is divided into clusters, and a random sample of

clusters is selected.
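The four probability designs above can be sketched with Python's `random` module; the population of 100 numbered units, the two strata, and the cluster boundaries are all hypothetical:

```python
import random

population = list(range(1, 101))  # hypothetical population of 100 unit IDs
random.seed(42)                   # fixed seed so the sketch is reproducible

# 1. Simple random sampling: 10 units, each with an equal chance
simple = random.sample(population, 10)

# 2. Systematic sampling: every k-th unit from a random starting point
k = len(population) // 10
start = random.randrange(k)
systematic = population[start::k]

# 3. Stratified sampling: draw equally within each stratum
strata = {"low": population[:50], "high": population[50:]}
stratified = [u for group in strata.values() for u in random.sample(group, 5)]

# 4. Cluster sampling: split into 10 clusters, randomly pick 2 whole clusters
clusters = [population[i:i + 10] for i in range(0, 100, 10)]
cluster_sample = [u for c in random.sample(clusters, 2) for u in c]
```

Note the trade-off visible even in this sketch: stratified sampling guarantees coverage of both strata, while cluster sampling surveys only the units inside the chosen clusters.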

Advantages of Probability Sampling

1. Representative sample: Probability sampling ensures that the sample is representative of the

population.

2. Accurate estimates: Probability sampling allows for accurate estimates of population

parameters.
3. Reliable results: Probability sampling provides reliable results, as the sample is selected

randomly and without bias.

4. Generalizability: Probability sampling allows for generalizability of results to the larger

population.

Disadvantages of Probability Sampling

1. Time-consuming: Probability sampling can be time-consuming, especially for large

populations.

2. Expensive: Probability sampling can be expensive, especially for large populations.

3. Difficulty in selecting a representative sample: It can be difficult to select a representative

sample, especially if the population is diverse or complex.

Non-probability sampling is a sampling method in which the selection of individuals or units is

based on non-random criteria, such as convenience, judgment, or quota. This method does not

ensure that every member of the population has an equal chance of being selected.

Types of Non-Probability Sampling

1. Convenience Sampling: Selecting individuals or units that are easily accessible or convenient

to sample.

2. Quota Sampling: Selecting individuals or units to meet specific quotas or criteria, such as age,

sex, or income.

3. Snowball Sampling: Selecting individuals or units through referrals or recommendations from

existing sample members.

4. Purposive (Judgment) Sampling: Selecting individuals or units based on specific characteristics

or criteria, such as expertise or experience.

Advantages of Non-Probability Sampling

1. Time-efficient: Non-probability sampling can be faster and more efficient than probability

sampling.

2. Cost-effective: Non-probability sampling can be less expensive than probability sampling.

3. Flexibility: Non-probability sampling allows for flexibility in selecting sample members.

4. Expertise: Non-probability sampling can be useful when selecting experts or individuals with

specific characteristics.

Disadvantages of Non-Probability Sampling

1. Lack of representativeness: Non-probability sampling may not provide a representative sample

of the population.

2. Bias: Non-probability sampling can introduce bias into the sample, as the selection criteria

may not be objective.

3. Limited generalizability: Non-probability sampling may limit the generalizability of the results

to the larger population.

4. Lack of precision: Non-probability sampling may not provide precise estimates of population

parameters.

Data collection and presentation


Data refers to the facts, figures, and statistics collected for analysis, interpretation, and

decision-making. It can be in various forms, such as numbers, text, images, audio, and video. Data is used

to describe, analyze, and visualize information, helping individuals and organizations make

informed decisions.

Types of data

(1) Primary Data

Primary data is original, raw data collected directly from the source, typically through

experiments, surveys, observations, or interviews. It is firsthand information gathered to address

a specific research question or hypothesis.

Characteristics of Primary Data:

i. Original: Collected directly from the source.

ii. Raw: Unprocessed and unanalyzed.

iii. Firsthand: Gathered by the researcher or organization.

iv. Specific: Collected to address a specific research question or hypothesis.

(2) Secondary Data

Secondary data is existing, pre-collected data that has been previously gathered, analyzed, and

published by others. It is secondhand information that can be used to answer research questions

or test hypotheses.

Characteristics of Secondary Data:

1. Existing: Already collected and published.


2. Pre-collected: Gathered by others, not the researcher.

3. Processed: Analyzed and interpreted by others.

4. General: Collected for a broader purpose, not specific to the researcher's question.

Types of Primary data

Quantitative Primary Data

1. Definition: Numerical data collected to answer research questions or test hypotheses.

2. Examples: Questionnaires, rating scales, checklists, surveys.

3. Characteristics: Quantifiable, measurable, and can be analyzed statistically.

4. Advantages: Allows for generalizability, precision, and comparability.

5. Disadvantages: May not capture nuanced or contextual information.

Qualitative Primary Data

1. Definition: Non-numerical data collected to gain insights, understanding, and meaning.

2. Examples: Open-ended questions, in-depth interviews, focus groups, observational studies.

3. Characteristics: Rich, detailed, and contextual.

4. Advantages: Provides in-depth understanding, captures nuanced information, and allows for

exploration.

5. Disadvantages: May not be generalizable, can be time-consuming and resource-intensive.

Cross-Sectional Primary Data


1. Definition: Data collected at a single point in time to examine relationships, attitudes, or

behaviors.

2. Examples: Surveys, questionnaires, interviews.

3. Characteristics: Provides a snapshot of the population at a specific moment.

4. Advantages: Allows for comparison, identification of patterns, and examination of

relationships.

5. Disadvantages: May not capture changes over time, can be influenced by external factors.

Longitudinal Primary Data

1. Definition: Data collected over an extended period to examine changes, developments, or

trends.

2. Examples: Panel studies, cohort studies, time-series analysis.

3. Characteristics: Provides insights into changes, developments, or trends over time.

4. Advantages: Allows for examination of causal relationships, identification of patterns, and

analysis of changes.

5. Disadvantages: Can be time-consuming, expensive, and challenging to maintain participant

engagement.

Spatial Primary Data

1. Definition: Data collected to examine geographic patterns, relationships, or phenomena.

2. Examples: Geographic information systems (GIS), remote sensing, spatial analysis.


3. Characteristics: Provides insights into geographic patterns, relationships, or phenomena.

4. Advantages: Allows for examination of spatial relationships, identification of patterns, and

analysis of geographic phenomena.

5. Disadvantages: Can be challenging to collect and analyze, requires specialized skills and

software.

Method of Collecting Primary Data

Questionnaire Method of Collecting Primary Data

A questionnaire is a structured set of questions used to collect data from respondents. It is a

popular method of collecting primary data, especially in social sciences, marketing, and

healthcare research.

Types of Questionnaires

1. Structured Questionnaires: Questions are pre-determined and respondents are asked to select

from a set of pre-defined answers.

2. Unstructured Questionnaires: Questions are open-ended and respondents are free to provide

detailed answers.

3. Semi-Structured Questionnaires: A combination of structured and unstructured questions.

Advantages

1. Cost-Effective: Questionnaires can be distributed online or offline, making it a cost-effective

method.
2. Time-Efficient: Respondents can complete questionnaires at their own pace, making it a

time-efficient method.

3. Large Sample Size: Questionnaires can be distributed to a large number of respondents,

making it possible to collect data from a large sample size.

4. Anonymity: Respondents can remain anonymous, which can increase the likelihood of honest

responses.

Disadvantages

1. Response Rate: The response rate may be low, especially if the questionnaire is lengthy or

complex.

2. Bias: Respondents may provide biased answers, especially if they have a vested interest in the

outcome.

3. Lack of Depth: Questionnaires may not provide in-depth answers, especially if the questions

are closed-ended.

4. Data Quality: The quality of data collected may be poor if the questionnaire is not

well-designed or if respondents do not understand the questions.

Designing an Effective Questionnaire

1. Clear Objectives: Clearly define the objectives of the questionnaire.

2. Simple Language: Use simple language that is easy to understand.

3. Concise Questions: Keep questions concise and to the point.

4. Pilot Testing: Pilot test the questionnaire to ensure it is effective.


5. Avoid Bias: Avoid biased language and questions.

Administration of Questionnaires

1. Online Surveys: Distribute questionnaires online through email or social media.

2. Offline Surveys: Distribute questionnaires offline through face-to-face interviews or mail.

3. Telephone Interviews: Conduct telephone interviews to collect data.

4. In-Person Interviews: Conduct in-person interviews to collect data.

Analysis of Questionnaire Data

1. Descriptive Statistics: Use descriptive statistics to summarize the data.

2. Inferential Statistics: Use inferential statistics to draw conclusions about the population.

3. Data Visualization: Use data visualization techniques to present the findings.

4. Thematic Analysis: Use thematic analysis to identify patterns and themes in the data.

Principles of Preparing a Questionnaire

I. Clear Objectives

1. Define the purpose: Clearly define the purpose of the questionnaire and what you want to

achieve.

2. Specific goals: Identify specific goals and objectives that the questionnaire should fulfill.

II. Simple and Clear Language

1. Avoid jargon: Avoid using technical terms or jargon that respondents may not understand.

2. Simple vocabulary: Use simple vocabulary and phrases that are easy to understand.
3. Avoid ambiguity: Avoid ambiguous or unclear language that may confuse respondents.

III. Relevant and Concise Questions

1. Relevant questions: Ask only relevant questions that are related to the purpose of the

questionnaire.

2. Concise questions: Keep questions concise and to the point, avoiding unnecessary words or

phrases.

3. Avoid leading questions: Avoid leading questions that may influence respondents' answers.

IV. Logical Question Sequence

1. Introduction: Start with an introduction that explains the purpose of the questionnaire.

2. Warm-up questions: Begin with warm-up questions that are easy to answer and help

respondents become comfortable.

3. Transition questions: Use transition questions to move from one topic to another.

4. Conclusion: End with a conclusion that thanks respondents for their time and participation.

V. Question Types

1. Open-ended questions: Use open-ended questions to gather detailed and qualitative

information.

2. Closed-ended questions: Use closed-ended questions to gather quantitative information and to

make it easier to analyze data.

3. Multiple-choice questions: Use multiple-choice questions to provide respondents with a range

of options.
4. Rating scales: Use rating scales to measure respondents' attitudes or opinions.

VI. Avoiding Bias

1. Avoid leading questions: Avoid leading questions that may influence respondents' answers.

2. Avoid loaded questions: Avoid loaded questions that may contain emotionally charged

language.

3. Avoid assumptions: Avoid assumptions about respondents' knowledge, attitudes, or opinions.

VII. Pilot Testing

1. Pilot test: Pilot test the questionnaire with a small group of respondents to identify any issues

or problems.

2. Revise and refine: Revise and refine the questionnaire based on feedback from pilot testers.

VIII. Anonymity and Confidentiality

1. Ensure anonymity: Ensure that respondents' identities are kept anonymous.

2. Ensure confidentiality: Ensure that respondents' answers are kept confidential.

IX. Instructions and Guidance

1. Clear instructions: Provide clear instructions on how to complete the questionnaire.

2. Guidance: Provide guidance on any technical terms or concepts used in the questionnaire.

X. Feedback Mechanism
1. Feedback mechanism: Provide a feedback mechanism for respondents to provide comments or

suggestions.

2. Respondent feedback: Use respondent feedback to improve the questionnaire and make it

more effective.

Interview Method of Collecting Primary Data

Definition

An interview is a structured or unstructured conversation between a researcher and a respondent

to gather information on a specific topic or issue.

Types of Interviews

1. Structured Interviews: A set of pre-determined questions are asked to gather specific

information.

2. Unstructured Interviews: An open-ended conversation is held to gather detailed and qualitative

information.

3. Semi-Structured Interviews: A combination of structured and unstructured questions are asked

to gather both specific and detailed information.

Advantages

1. In-Depth Information: Interviews can provide in-depth and detailed information on a specific

topic or issue.

2. Flexibility: Interviews can be structured or unstructured, allowing for flexibility in data

collection.
3. Personal Interaction: Interviews allow for personal interaction between the researcher and

respondent, which can provide valuable insights.

Disadvantages

1. Time-Consuming: Interviews can be time-consuming, especially if they are unstructured or

require travel to meet respondents.

2. Costly: Interviews can be costly, especially if they require travel or equipment.

3. Bias: Interviews can be biased if the researcher's questions or demeanor influence the

respondent's answers.

Observation Method of Collecting Primary Data

Observation is a method of collecting primary data by watching and recording behavior, events,

or phenomena in a natural or controlled setting.

Types of Observation

1. Participant Observation: The researcher participates in the activity or event being observed.

2. Non-Participant Observation: The researcher observes from a distance, without participating in

the activity or event.

3. Structured Observation: A set of pre-determined criteria are used to observe and record

behavior or events.
4. Unstructured Observation: An open-ended approach is used to observe and record behavior or

events.

Advantages

1. Rich Data: Observation can provide rich and detailed data on behavior, events, or phenomena.

2. Natural Setting: Observation can be conducted in a natural setting, which can provide a more

accurate representation of real-life behavior or events.

3. Flexibility: Observation can be structured or unstructured, allowing for flexibility in data

collection.

Disadvantages

1. Time-Consuming: Observation can be time-consuming, especially if it requires prolonged

periods of watching and recording.

2. Subjective: Observation can be subjective, as the researcher's biases or interpretations may

influence the data collected.

3. Reactivity: Observation can be reactive, as the presence of the researcher may influence the

behavior or events being observed.

Documentary Secondary Data

Definition

Documentary secondary data refers to existing data that is contained in documents, such as

reports, articles, books, and other written materials.

Sources of Documentary Secondary Data


1. Books and Textbooks: Books and textbooks can provide valuable information on various

topics, including historical events, research findings, and theoretical perspectives.

2. Academic Journals: Academic journals publish research articles, reviews, and other scholarly

works that can provide secondary data.

3. Government Reports: Government agencies publish reports on various topics, including

economic indicators, demographic data, and social statistics.

4. Newspaper and Magazine Articles: Newspaper and magazine articles can provide information

on current events, trends, and issues.

5. Archival Records: Archival records, such as letters, diaries, and other historical documents,

can provide valuable insights into past events and experiences.

Types of Documentary Secondary Data

1. Published Documents: Published documents, such as books, articles, and reports, that are

widely available.

2. Unpublished Documents: Unpublished documents, such as letters, diaries, and other archival

records, that are not widely available.

3. Official Documents: Official documents, such as government reports, policies, and

regulations, that provide information on official procedures and practices.

4. Personal Documents: Personal documents, such as letters, diaries, and autobiographies, that

provide insights into individual experiences and perspectives.

Advantages of Documentary Secondary Data


1. Convenience: Documentary secondary data is often easily accessible and convenient to use.

2. Cost-Effective: Documentary secondary data can be cost-effective, as it eliminates the need

for primary data collection.

3. Time-Saving: Documentary secondary data can save time, as it provides existing information

that can be used for research purposes.

4. Established Validity: Documentary secondary data has already been validated by the original

authors or researchers.

Disadvantages of Documentary Secondary Data

1. Limited Control: Researchers have limited control over the data collection process and

methodology.

2. Potential Bias: Documentary secondary data may be biased due to the original authors'

perspectives or methodologies.

3. Outdated Information: Documentary secondary data may be outdated, which can limit its

relevance and usefulness.

4. Limited Depth: Documentary secondary data may lack depth or detail, which can limit its

usefulness for specific research questions.

Advantages of Secondary Data

1. Time-Saving: Secondary data is already collected, saving time and effort.

2. Cost-Effective: Secondary data is often free or low-cost, reducing research expenses.


3. Wide Availability: Secondary data is widely available from various sources, including

government agencies, academic journals, and online databases.

4. Established Validity: Secondary data has already been validated by the original researchers or

authors.

5. Comprehensive: Secondary data can provide a comprehensive overview of a topic or issue.

6. Comparability: Secondary data can be compared across different studies, populations, or time

periods.

Disadvantages of Secondary Data

1. Limited Control: Researchers have limited control over the data collection process and

methodology.

2. Potential Bias: Secondary data may be biased due to the original researchers' perspectives or

methodologies.

3. Outdated Information: Secondary data may be outdated, which can limit its relevance and

usefulness.

4. Limited Depth: Secondary data may lack depth or detail, which can limit its usefulness for

specific research questions.

5. Inconsistent Quality: Secondary data can vary in quality, which can affect the accuracy and

reliability of the findings.

6. Difficulty in Verification: Secondary data can be difficult to verify, especially if the original

data sources are unclear or unavailable.


7. Lack of Originality: Secondary data may lack originality, as it is based on existing research

and data.

8. Dependence on Original Research: Secondary data is dependent on the quality and accuracy of

the original research.

Survey-based secondary data refers to existing data that was collected through surveys but is

now being reused by other researchers, often for a different purpose.

This type of data includes:

1. Questionnaire responses

2. Interview transcripts

3. Opinion polls

4. Attitude surveys

5. Customer feedback surveys

Survey-based secondary data can be obtained from:

1. Published research studies

2. Government reports

3. Market research reports

4. Online databases and repositories

5. Organizations that conduct regular surveys

Examples of survey-based secondary data include:


- A national opinion poll on political attitudes

- A customer satisfaction survey conducted by a company

- A health survey conducted by a government agency

- A market research report on consumer behavior

Survey-based secondary data can be quantitative (numerical) or qualitative (text-based), and can

provide valuable insights into attitudes, opinions, behaviors, and trends.


Overall, secondary data can be a valuable resource for research, but it's essential to carefully

evaluate its quality, relevance, and limitations before using it.

Data Presentation

Data presentation refers to the process of communicating data insights and findings in a clear,

concise, and meaningful way. It involves using various techniques and tools to present data in a

format that is easy to understand, analyze, and interpret.

Effective data presentation helps to:

1. Communicate complex data insights: Clearly convey complex data findings to both technical

and non-technical audiences.

2. Support decision-making: Provide actionable insights that inform business decisions, policy

changes, or other important choices.

3. Identify trends and patterns: Highlight important trends, patterns, and correlations within the

data.
4. Tell a story: Use data to tell a compelling story that resonates with the audience.

Common data presentation techniques include:

1. Tables and charts: Use tables, bar charts, line graphs, and other visualizations to display data.

2. Graphs and plots: Utilize scatter plots, histograms, and other graphical representations to show

relationships and distributions.

3. Infographics: Combine data visualizations, images, and text to create engaging and

informative graphics.

4. Reports and dashboards: Compile data insights into comprehensive reports or interactive

dashboards.

5. Presentations and storytelling: Use narrative techniques to present data insights and findings in

a clear and compelling way.
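As a minimal illustration of the first technique, the sketch below builds a small frequency table from raw responses; the response data is invented for illustration:

```python
from collections import Counter

# Hypothetical survey responses (invented for illustration)
responses = ["satisfied", "neutral", "satisfied", "dissatisfied",
             "satisfied", "neutral", "satisfied", "satisfied"]

# Count each category to form a simple frequency table
counts = Counter(responses)
total = len(responses)

# Display category, frequency, and share of the total
for category, freq in counts.most_common():
    print(f"{category:<14}{freq:>3}{freq / total:>8.1%}")
```

The same counts could feed a bar chart or infographic; the table is simply the most compact presentation of them.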

Tabulation is the process of organizing and presenting data in a systematic and structured format,

typically in a table or spreadsheet. It involves arranging data into rows and columns to facilitate

easy understanding, analysis, and comparison.

Tabulation helps to:

1. Simplify complex data: Break down large datasets into manageable and understandable parts.

2. Identify patterns and trends: Reveal relationships and correlations within the data.

3. Facilitate comparison: Enable comparisons between different groups, categories, or time

periods.
4. Support analysis and decision-making: Provide a clear and concise format for data analysis

and interpretation.

Types of tabulation:

1. Simple tabulation: Involves arranging data into a basic table format.

2. Cross-tabulation: Involves creating tables that show the relationship between two

variables.

3. Multiple tabulation: Involves creating tables that show relationships among three or more

variables.

Tabulation is commonly used in:

1. Research studies: To present and analyze data.

2. Business reporting: To summarize and analyze business data.

3. Government statistics: To present demographic and economic data.

4. Data analysis: To identify trends, patterns, and correlations.
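Cross-tabulation can be sketched in a few lines of Python by counting each combination of two variables; the gender/education records below are invented for illustration:

```python
from collections import Counter

# Hypothetical (gender, education) records -- invented for illustration
records = [
    ("male", "college"), ("female", "college"), ("male", "high school"),
    ("female", "college"), ("male", "high school"), ("female", "high school"),
]

# Counting each combination of the two variables gives a cross-tabulation:
# each (gender, education) pair is one cell of the table
crosstab = Counter(records)

print(crosstab[("male", "high school")])  # frequency in that cell: 2
```

A spreadsheet or statistics package would lay the same counts out as a grid, with one variable along the rows and the other along the columns.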

Here's a differentiation between two-way and three-way tables:

Two-Way Table

1. Two variables: A two-way table, also known as a contingency table, displays the relationship

between two variables.

2. Rows and columns: One variable is represented by the rows, and the other variable is

represented by the columns.


3. Cell contents: Each cell contains the frequency or value associated with the combination of the

two variables.

4. Example: A table showing the relationship between gender (male/female) and education level

(high school/college) would be a two-way table.

Three-Way Table

1. Three variables: A three-way table displays the relationship between three variables.

2. Multiple layers: A three-way table can be thought of as multiple two-way tables stacked on top

of each other.

3. Pages or layers: Each layer or page represents one level of the third variable.

4. Cell contents: Each cell contains the frequency or value associated with the combination of the

three variables.

5. Example: A table showing the relationship between gender (male/female), education level

(high school/college), and income level (low/high) would be a three-way table.

In summary, the main difference between two-way and three-way tables is the number of

variables being analyzed. Two-way tables examine the relationship between two variables, while

three-way tables examine the relationship between three variables.
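Extending the same counting idea, a three-way table tallies combinations of three variables, and fixing one value of the third variable recovers a single two-way layer; the records below are invented for illustration:

```python
from collections import Counter

# Hypothetical (gender, education, income) records -- invented for illustration
people = [
    ("male", "college", "high"), ("female", "college", "high"),
    ("male", "high school", "low"), ("female", "high school", "low"),
    ("female", "college", "low"),
]

# A three-way table counts combinations of all three variables at once
threeway = Counter(people)

# Fixing income = "low" recovers one two-way layer of the three-way table
low_income_layer = {(g, e): n
                    for (g, e, inc), n in threeway.items() if inc == "low"}
print(low_income_layer)
```

Each income level yields one such layer, which matches the "pages or layers" description above.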

Textual Form

1. Descriptive text: Data is presented in a narrative or descriptive text format.

2. Qualitative data: Often used for qualitative data, such as opinions, attitudes, or open-ended

responses.
3. Summary and analysis: Textual form provides a summary and analysis of the data,

highlighting key findings and trends.

4. Example: A report summarizing the results of a survey on customer satisfaction, written in a

descriptive text format.

Percentage Form

1. Numerical data: Data is presented in numerical form, with percentages used to show

proportions or rates.

2. Quantitative data: Often used for quantitative data, such as frequencies, proportions, or rates.

3. Comparison and trend analysis: Percentage form facilitates comparison and trend analysis,

making it easier to identify patterns and changes.

4. Example: A table showing the percentage of customers who rated a product as "satisfactory"

or "unsatisfactory".
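Converting raw frequencies into percentage form is a one-step calculation; the rating counts below are invented for illustration:

```python
# Hypothetical rating counts -- invented for illustration
ratings = {"satisfactory": 150, "unsatisfactory": 50}
total = sum(ratings.values())

# Express each frequency as a percentage of the total
percentages = {k: 100 * v / total for k, v in ratings.items()}
print(percentages)  # {'satisfactory': 75.0, 'unsatisfactory': 25.0}
```

Because both categories are now on a common 0-100 scale, they can be compared directly across groups or time periods of different sizes.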
