BUSINESS RESEARCH METHODS: SURVEY INSTRUMENTS
FELIX ELIAB OMOTTO
VERONICAH MULI
MASTER OF BUSINESS ADMINISTRATION
SURVEY INSTRUMENT DEFINITION
This is a tool used to collect data from respondents in a research study.
 It typically consists of a set of questions or statements designed to gather information on specific topics.
 It can be administered in various formats, such as questionnaires, interviews, scales, observation checklists and focus group guides.
Sources of Data

1. Primary Data
Primary data is data collected firsthand by a researcher or organization for a specific purpose. This type of data is tailored to meet the exact needs of the study, providing direct and relevant insights.
Characteristics
i. Originality - Primary data is original because it is collected by the researcher directly from the source. It
has not been processed or interpreted by anyone else.

ii. Specificity - The data is tailored to the specific needs of the research, meaning the questions,
measurements, and data collection methods are designed to gather precise information relevant to the
research objectives.

iii. Control - Researchers have full control over the data collection process, including the methods used, the
timing, and the conditions under which the data is collected.
Sources of Primary Data
1. Surveys

 Structured questionnaires or interviews designed to gather quantitative or qualitative data from a predefined group of
respondents.

 Surveys can be conducted in various forms, including online, face-to-face, telephone, or paper-based.

 They are useful for collecting data on opinions, behaviours, demographics, and other specific variables.

2. Experiments

 Controlled studies where variables are manipulated to observe the effect on other variables.

 Experiments are often used in fields like marketing, psychology, and medicine to establish cause-and-effect
relationships.

 For example, a company might conduct an experiment to test the effectiveness of different advertising strategies on
consumer purchase behaviour.

3. Observations

 The process of collecting data by watching and recording behaviours, events, or phenomena as they occur naturally.

 Observations can be structured (where the researcher follows a predefined framework) or unstructured (where
observations are more open-ended).
Sources of Primary Data
4. Interviews

 One-on-one or group conversations between the researcher and respondents to gather in-depth qualitative data.

 Interviews can be structured (with predetermined questions), semi-structured (with a flexible guide), or
unstructured (open-ended discussions).

 They are particularly useful for exploring complex issues, opinions, or experiences.

5. Focus Groups

 Group discussions moderated by a researcher to collect data on perceptions, opinions, or attitudes towards a
particular subject.

 Focus groups are often used in marketing research to explore consumer attitudes towards a product, service, or
advertisement.
Advantages & Disadvantages of Primary Data
Advantages of Primary Data

i. Relevance - The data is directly related to the specific research question, ensuring high relevance and
applicability.

ii. Accuracy - Researchers can ensure the accuracy and reliability of the data by carefully designing the data
collection process and controlling the conditions.

iii. Timeliness - Primary data is current, reflecting the latest information available, which is crucial for time-sensitive research.

Disadvantages of Primary Data

i. Costly - Collecting primary data can be expensive, requiring resources such as time, money, and personnel.

ii. Time-Consuming - The process of designing, collecting, and analyzing primary data is often time-consuming.

iii. Complexity - Primary data collection requires careful planning and execution, making it a more complex process than working with existing data.
Secondary Data
Secondary data refers to information that has been previously collected, processed, and published by
others. This data is typically used for purposes other than the current research but can be repurposed to
help answer new research questions.

Characteristics

i. Pre-Collected - Secondary data is already available, having been collected by other researchers,
organizations, or institutions for their purposes.

ii. Less Control - Researchers have little or no control over how the data was collected, the
methodology used, or the accuracy of the data.

iii. Varied Quality - The quality and relevance of secondary data can vary widely, depending on the
original source and purpose of data collection.
Sources of Secondary Data
1. Published Reports

 Government publications, industry reports, market research reports, and white papers that provide statistical
data, insights, and trends.

 For example, a company might use government economic reports to understand market conditions before
launching a new product.

2. Academic Journals

 Peer-reviewed articles and papers that present research findings in specific academic fields.

 Researchers often use academic journals to review existing literature, identify gaps in knowledge, and build on
previous research.

3. Databases

 Access to statistical databases, financial records, company reports, and other data repositories.

 Examples include census data, stock market data, and consumer behavior databases.
Sources of Secondary Data
4. Books and Online Resources

 Textbooks, online publications, websites, and blogs that provide background information,
theories, and historical data.

 These sources are often used for literature reviews and theoretical frameworks.

5. Media Sources

 Newspapers, magazines, news websites, and other media outlets that provide current information
on various topics.

 Media sources are often used to track trends, public opinion, and events that may impact
business decisions.
Advantages & Disadvantages of Secondary Data

Advantages of Secondary Data

i. Cost-Effective - Secondary data is usually less expensive to obtain, as it has already been collected
and processed by others.

ii. Time-Saving - Researchers can access secondary data quickly, allowing them to bypass the time-consuming process of data collection.

iii. Wide Availability - A vast amount of secondary data is available from various sources, providing a wealth of information on almost any topic.

Disadvantages of Secondary Data

i. Relevance Issues - Secondary data may not perfectly match the research needs, leading to potential gaps in information.

ii. Data Quality Concerns - The accuracy, reliability, and validity of secondary data can be questionable, depending on the original source and purpose of collection.

iii. Outdated Information - Some secondary data may be outdated, which can be problematic for research that requires current information.
Factors to be considered when choosing between primary and secondary data
i. Research Objectives - Primary data is ideal for specific, detailed research, while secondary data
suits broader, exploratory studies.
ii. Availability - Secondary data is preferred if relevant data is readily available. Primary data is
necessary when existing data is outdated or lacking.
iii. Time and Cost - Primary data collection is time-consuming and costly, making secondary data
more suitable when resources are limited.
iv. Accuracy and Control - Primary data allows greater control over quality, while secondary data
relies on the accuracy of other sources.
v. Relevance - If secondary data isn't directly relevant, primary data collection is needed.
NATURE OF DATA
1. Qualitative Data
 Qualitative data is non-numerical and focuses on the qualities, attributes, or characteristics of a subject.
It captures the "why" and "how" behind certain phenomena, offering rich, detailed insights that are not
easily reduced to numbers.
 This type of data is often used in exploratory research where the goal is to understand behaviours,
experiences, or social processes.
Examples include interview transcripts, open-ended survey responses, observation notes, and focus group discussions.
Uses
 Exploratory Research - Qualitative data is particularly useful in the early stages of research, where
the goal is to explore a new area, generate hypotheses, or understand underlying motivations.

 Understanding Complex Issues - It helps in gaining a deeper understanding of complex issues such
as consumer behaviour, organizational culture, or customer satisfaction.

 Developing Theories - Qualitative data is often used to develop new theories or refine existing ones
based on the insights gathered.
Methods of Analysis

i. Coding - The process of categorizing qualitative data into themes or patterns. For example, interview responses
might be coded to identify common themes such as customer satisfaction, product preferences, or service issues.
ii. Thematic Analysis - Identifying and analysing patterns or themes within qualitative data. This helps in
understanding the broader context or underlying messages within the data.
iii. Narrative Analysis - A method of interpreting stories or accounts provided by respondents, focusing on how
people make sense of their experiences.
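As a rough illustration of coding, the sketch below tags hypothetical open-ended responses with themes from a simple keyword code book. Real coding is interpretive and iterative, typically done by hand or with qualitative analysis software such as NVivo, so treat this only as a mechanical analogy; the themes, keywords, and responses are invented.

```python
from collections import Counter

# Hypothetical code book mapping themes to indicator keywords; a real code
# book is developed iteratively by the researcher, not fixed in advance.
CODE_BOOK = {
    "satisfaction": ["happy", "satisfied", "pleased"],
    "service_issues": ["slow", "rude", "waited"],
}

responses = [
    "I was happy with the product but the delivery was slow.",
    "Staff were rude and I waited too long to be served.",
]

theme_counts = Counter()
for response in responses:
    text = response.lower()
    for theme, keywords in CODE_BOOK.items():
        if any(word in text for word in keywords):
            theme_counts[theme] += 1  # tag this response with the theme

print(theme_counts)  # Counter({'service_issues': 2, 'satisfaction': 1})
```
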
Strengths

Depth and Detail - Provides deep, nuanced insights into a subject, capturing the complexity of human behaviour
and decision-making.

Flexibility - Allows for exploration of new areas and can adapt as new themes or issues emerge during the research
process.

Limitations

Subjectivity - Qualitative data is often subjective, relying on the researcher’s interpretation, which can introduce
bias.

Generalizability - Findings from qualitative research may not be easily generalizable to larger populations due to
the typically smaller, non-random sample sizes.
2. Quantitative Data

Quantitative data is numerical and focuses on quantifying elements of a subject. It provides measurable, objective information that can be analysed statistically.
This type of data is used to answer questions about "how many," "how much," or "how often" and is essential for testing hypotheses and making predictions.

Examples include:
i. Sales Figures - Numerical data showing the number of products sold over a specific period.
ii. Survey Results - Responses to closed-ended questions, often using Likert scales (e.g., rating
satisfaction from 1 to 5).
iii. Demographic Statistics - Data such as age, income, education level, or employment status
collected from a population.
iv. Test Scores - Results from standardized tests or assessments that provide numerical data on
performance.
Uses

i. Descriptive Research - Quantitative data is used to describe characteristics of a population or phenomenon, such as market size or average income.

ii. Correlational Research - It helps in identifying relationships between variables, such as the
correlation between advertising spend and sales revenue.

iii. Experimental Research - Quantitative data is crucial for experiments where variables are
manipulated to observe their effects, such as testing different pricing strategies on sales.
Methods of Analysis

 Descriptive Statistics - Calculating measures such as mean, median, mode, and standard deviation to
summarize data.

 Inferential Statistics - Using techniques like regression analysis, t-tests, or ANOVA to make inferences about a
population based on a sample.

 Hypothesis Testing - Quantitative data allows researchers to test hypotheses and determine the likelihood that
observed patterns are due to chance.
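A minimal Python sketch of these three methods, assuming SciPy is installed; the sales figures for two hypothetical store layouts and the 5% significance level are illustrative choices.

```python
import statistics
from scipy import stats  # assumes SciPy is installed

# Hypothetical weekly unit sales under two store layouts.
layout_a = [120, 135, 128, 140, 132, 125, 138]
layout_b = [110, 118, 122, 115, 120, 117, 119]

# Descriptive statistics: summarize each sample.
print("mean A:", statistics.mean(layout_a))
print("std dev A:", round(statistics.stdev(layout_a), 2))

# Inferential statistics: independent-samples t-test.
# H0: the two layouts produce the same mean sales.
t_stat, p_value = stats.ttest_ind(layout_a, layout_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Hypothesis testing decision at the 5% significance level.
if p_value < 0.05:
    print("Reject H0: the difference is statistically significant.")
else:
    print("Fail to reject H0: the difference could be due to chance.")
```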

Strengths

 Objectivity - Quantitative data is often seen as more objective and reliable, as it is less prone to researcher bias.

 Generalizability - With large, representative samples, findings from quantitative research can often be
generalized to a broader population.

 Precision - Provides precise, measurable information that can be used to make data-driven decisions.

Limitations

 Lack of Depth - While quantitative data provides breadth, it may lack the depth and context that qualitative data
offers. It may not fully capture the nuances of human behaviour.

 Rigidity - The structured nature of quantitative research may limit the ability to explore new areas or adapt to new themes as they emerge during the study.
TYPES OF DATA IN BUSINESS RESEARCH
Based on Measurement Scale

1. Nominal Data

Nominal data consists of categories or labels without any inherent order. It is used to identify and classify items.

Examples

Product Categories - Categories like Electronics, Clothing, and Furniture. Each category is distinct, but there is no ranking or
order among them.

Gender - Male, Female, Non-binary. These are categories used for classification without any order.

2. Ordinal Data

Ordinal data represents categories with a meaningful order or ranking but without equal intervals between the ranks. The position
matters, but the differences between ranks are not precisely measurable.

Examples:

Customer Satisfaction Levels - Ratings like Very Unsatisfied, Unsatisfied, Neutral, Satisfied, Very Satisfied. These ratings are
ordered, but the exact difference between each level isn't quantified.

Educational Attainment - High School, Bachelor’s Degree, Master’s Degree. These levels have a sequence, but the difference in education level is not uniform.
Cont’
3. Interval Data

Interval data is numerical and has equal intervals between values but lacks a true zero point. This means you can measure
the difference between values, but you cannot make meaningful statements about ratios.

Examples:

Temperature in Celsius: 10°C, 20°C, 30°C. The intervals between these temperatures are equal, but 0°C does not
represent a complete absence of temperature.

IQ Scores: 90, 100, 110. Differences between scores are consistent, but an IQ of 0 is not meaningful.

4. Ratio Data

Ratio data is numerical with equal intervals and a true zero point, allowing for meaningful comparisons of both differences and ratios. It supports statements about how many times larger one value is than another.

Examples:

Sales Revenue: $10,000, $20,000, $30,000. The zero point (no revenue) is meaningful, and comparisons like "twice as
much" are valid.

Height and Weight: 150 cm, 160 cm, 170 cm; 50 kg, 60 kg, 70 kg. Both have a true zero and allow for meaningful ratios
and comparisons.
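The scale of measurement determines which operations are meaningful, and analysis tools can encode it explicitly. The sketch below is a hypothetical pandas example: nominal data becomes unordered categories, ordinal data becomes ordered categories, and ratio data stays numeric so ratio statements remain valid.

```python
import pandas as pd

df = pd.DataFrame({
    "product_category": ["Electronics", "Clothing", "Furniture"],    # nominal
    "satisfaction":     ["Satisfied", "Neutral", "Very Satisfied"],  # ordinal
    "revenue_usd":      [10_000, 20_000, 30_000],                    # ratio
})

# Nominal: unordered categories, useful only for classification and counting.
df["product_category"] = df["product_category"].astype("category")

# Ordinal: ordered categories, so comparisons like < and > become meaningful.
levels = ["Very Unsatisfied", "Unsatisfied", "Neutral",
          "Satisfied", "Very Satisfied"]
df["satisfaction"] = pd.Categorical(df["satisfaction"],
                                    categories=levels, ordered=True)
print((df["satisfaction"] > "Neutral").tolist())  # [True, False, True]

# Ratio: a true zero point makes statements like "three times as much" valid.
print(df["revenue_usd"].iloc[2] / df["revenue_usd"].iloc[0])  # 3.0
```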
Based on Source
1. Primary Data

Primary data is collected firsthand by the researcher specifically for their study. It is original and directly relevant to
the research question.

Examples

Surveys - A company conducts a survey to gather feedback on a new product from its customers.

Experiments - Testing a new marketing strategy in a controlled environment to measure its effectiveness.

2. Secondary Data

Secondary data is collected by someone else for a different purpose but is used by the researcher for their own
analysis. It is pre-existing data that can provide context or background information.

Examples

Industry Reports - Market analysis reports published by research firms or industry associations.

Academic Research - Published studies or articles related to the research topic.


Based on Format
1. Structured Data

Structured data is organized in a predefined format, often in tables or databases, making it easy to search,
analyse, and manipulate using standard tools.

Example:

Spreadsheets: Excel files with columns for sales figures, dates, and product categories.

2. Unstructured Data

Unstructured data lacks a predefined format or structure, often consisting of text, multimedia, or free-form data.
It is more complex to analyse due to its varied formats.

Example:

Customer Reviews: Textual feedback posted online by customers, often varied in format and content.
Based on Time Dimension
1. Cross-Sectional Data

This is data collected at a single point in time, providing a snapshot of a phenomenon or population. It captures
information about a specific moment.

Examples:

One-Time Surveys - A survey conducted in January 2024 to assess customer satisfaction.

Census Data - Data collected in a specific year, providing demographic details at that point.

2. Longitudinal Data

This is data collected over multiple time periods, allowing researchers to track changes and trends over time. It
provides insights into how variables evolve.

Examples:

Annual Employee Surveys - Surveys conducted each year to monitor changes in employee satisfaction and
engagement.

Panel Studies - Tracking the same individuals or entities over several years to study long-term trends and changes.
Based on Ownership
1. Proprietary Data

This is data that is owned by an organization or individual and is not available to the public without permission. It is
often used for competitive advantage and internal decision-making.

Examples:

Internal Sales Reports - Detailed records of a company’s sales performance and customer data.

Exclusive Market Research - Customized research conducted for a specific client or business.

2. Public Data

Data that is freely accessible to anyone. It is published or made available by government agencies, organizations,
or researchers for public use.

Examples:

Government Publications - Reports and statistics on economic indicators, health data, or population
demographics.

Open Access Journals - Research articles and studies available online without subscription fees.
Data Collection Instruments
1. Surveys and Questionnaires

Description - Structured tools used to gather quantitative and sometimes qualitative data from many respondents.

Advantages - Can cover a wide population, standardized questions, easy to analyse.

Limitations - Can be subject to biases, limited by respondents’ willingness to participate.

2. Interviews

Description - One-on-one or group discussions used to gather in-depth qualitative data.

Advantages - Provides rich, detailed data, allows probing for deeper understanding.

Limitations - Time-consuming, can be subject to interviewer bias.

3. Observations

Description - Watching and recording behaviours or events in their natural setting.

Advantages - Captures real-time data, useful for studying behaviour.

Limitations - Can be subjective, observer bias may occur.


Cont’
4. Experiments

Description - Controlled studies where variables are manipulated to observe their effects.

Advantages - Allows for causal inferences, high control over variables.

Limitations - May lack ecological validity, can be expensive and time-consuming.

5. Focus Groups

Description - Guided discussions with a small group of participants to explore attitudes and perceptions.

Advantages - Provides insights into group dynamics, can generate new ideas.

Limitations - May not represent broader population, groupthink can influence outcomes.
DATA EDITING, CLEANING AND ENTRY
 DATA HANDLING
Data handling is a crucial aspect of research and data analysis, involving the processes of collecting, storing,
processing, analyzing, and managing data. Proper data handling ensures the integrity, security, and usability
of data, which is essential for drawing accurate conclusions and making informed decisions.
Key data handling techniques
 Data collection
This is the process of gathering information to answer research questions or test hypotheses. Effective data collection techniques ensure that the data is accurate, reliable, and relevant.
Surveys and Questionnaires: Designing and distributing surveys with well-structured questions to gather data
from respondents.
Interviews: Conducting structured or semi-structured interviews to collect qualitative data.
Observation: Recording behavioral data through direct or participant observation.
Experiments: Designing and conducting experiments to collect data under controlled conditions.
Cont’
 Data Cleaning and Preprocessing
 Data cleaning involves identifying and correcting errors or inconsistencies in the data to ensure
its quality. Preprocessing prepares the data for analysis by transforming it into a suitable format.
 Handling Missing Data: Techniques such as imputation (filling in missing values), deletion
(removing incomplete records), or using algorithms that can handle missing data.
 Removing Duplicates: Identifying and removing duplicate entries that can skew analysis results.
 Outlier Detection: Identifying and addressing outliers that may be due to errors or represent
significant anomalies.
 Data Transformation: Normalizing, standardizing, or scaling data to bring it into a uniform
format suitable for analysis.
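A minimal cleaning sketch in pandas covering the four techniques above; the survey records, the plausible age range, and the choice of median imputation and min-max scaling are all illustrative assumptions.

```python
import numpy as np
import pandas as pd

# Hypothetical raw survey data with typical quality problems.
df = pd.DataFrame({
    "respondent_id": [1, 2, 2, 3, 4],
    "age":           [34, np.nan, np.nan, 29, 410],  # missing values, entry error
    "income":        [52000, 61000, 61000, 48000, 55000],
})

# Removing duplicates: keep one record per respondent.
df = df.drop_duplicates(subset="respondent_id")

# Handling missing data: impute missing ages with the median.
df["age"] = df["age"].fillna(df["age"].median())

# Outlier detection: treat ages outside a plausible range as entry errors.
df = df[df["age"].between(18, 100)].copy()

# Data transformation: min-max scale income into the [0, 1] range.
df["income_scaled"] = (df["income"] - df["income"].min()) / (
    df["income"].max() - df["income"].min())
print(df)
```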
Cont’
 Data Storage and Management
 Data storage involves organizing and storing data securely to ensure its integrity and
accessibility. Effective data management practices help in maintaining data quality
and facilitating easy retrieval
 Database Management Systems (DBMS): Using relational (e.g., MySQL, PostgreSQL) or
non-relational databases (e.g., MongoDB) to store structured and unstructured data.
 Cloud Storage: Storing data on cloud platforms (e.g., AWS, Google Cloud) for scalability and easy access.
 Data Backup: Implementing regular backup procedures to prevent data loss.
 Data Security: Protecting data through encryption, access controls, and secure
authentication to prevent unauthorized access and data breaches.
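As a small illustration of structured storage with validation built in, the sketch below uses SQLite, which ships with Python; a production system would more likely use a server-based DBMS such as MySQL or PostgreSQL, and the table design is hypothetical.

```python
import sqlite3

conn = sqlite3.connect("survey.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS responses (
        respondent_id INTEGER PRIMARY KEY,
        satisfaction  INTEGER CHECK (satisfaction BETWEEN 1 AND 5),
        collected_on  TEXT NOT NULL
    )
""")
conn.execute("INSERT OR REPLACE INTO responses VALUES (?, ?, ?)",
             (1, 4, "2024-01-15"))
conn.commit()

for row in conn.execute("SELECT * FROM responses"):
    print(row)  # (1, 4, '2024-01-15')
conn.close()
```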
Data analysis techniques
 Data analysis involves applying statistical, computational, or qualitative techniques to interpret data and extract meaningful insights.
 Descriptive Statistics: Summarizing and describing data using measures such as mean,
median, mode, standard deviation, and frequency distributions.
 Inferential Statistics: Making inferences about a population based on a sample using techniques like hypothesis testing, regression analysis, and confidence intervals.
 Data Visualization: Creating charts, graphs, and dashboards to visually represent data, making it easier to identify patterns and trends.
 Qualitative Analysis: Analyzing text or multimedia data through coding, thematic analysis, or
content analysis to identify patterns and themes.
 Machine Learning: Using algorithms and models (e.g., classification, clustering, neural
networks) to analyze large datasets and make predictions.
Data integration
 Data integration involves combining data from multiple sources to create a
unified dataset.
 Aggregation involves summarizing data to provide an overview or to facilitate
analysis.
 ETL (Extract, Transform, Load): Extracting data from different sources, transforming it into a suitable format, and loading it into a data warehouse or database.
 Data Merging: Combining datasets by aligning related fields, such as merging
customer data from different departments.
 Data Aggregation: Summarizing data at various levels (e.g., monthly, quarterly)
to identify trends or patterns over time.
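A short pandas sketch of merging and aggregation; the two department datasets and their field names are hypothetical.

```python
import pandas as pd

# Hypothetical customer data held by two different departments.
sales = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "month":       ["2024-01", "2024-01", "2024-02"],
    "amount":      [250.0, 180.0, 320.0],
})
crm = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "region":      ["Nairobi", "Mombasa", "Nairobi"],
})

# Data merging: align the two sources on a shared key.
merged = sales.merge(crm, on="customer_id", how="left")

# Data aggregation: summarize revenue by region and month.
summary = merged.groupby(["region", "month"])["amount"].sum().reset_index()
print(summary)
```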
Data quality assurance
Ensuring data quality is critical to the accuracy and reliability of
research findings.
Techniques for data quality assurance include
Data Validation: Checking for accuracy, completeness, and
consistency of data during and after collection.
Data Auditing: Periodically reviewing data processes and datasets to
identify and rectify errors or inconsistencies.
Data Provenance: Tracking the origin and history of data to ensure
its authenticity and traceability.
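A minimal validation sketch; the specific rules (unique respondent IDs, a plausible age range, a 1-5 satisfaction scale) are hypothetical examples of the checks a study might define.

```python
import pandas as pd

df = pd.DataFrame({
    "respondent_id": [1, 2, 3],
    "age":           [34, -5, 29],  # -5 is an obvious entry error
    "satisfaction":  [4, 6, 3],     # valid scale is 1-5
})

# Data validation: collect every rule violation rather than stopping early.
problems = []
if df["respondent_id"].duplicated().any():
    problems.append("duplicate respondent IDs")
if not df["age"].between(0, 120).all():
    problems.append("age outside plausible range")
if not df["satisfaction"].between(1, 5).all():
    problems.append("satisfaction outside the 1-5 scale")

print(problems or "all checks passed")
```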
Data anonymization and de-identification
 Data anonymization and de-identification involve removing personally identifiable
information (PII) from datasets to protect the privacy of individuals.
 Anonymization Techniques: Techniques such as data masking, pseudonymization,
and generalization to ensure that individuals cannot be re-identified from the
data.
 Compliance: Ensuring that data handling practices comply with regulations such
as GDPR (General Data Protection Regulation) or HIPAA (Health Insurance
Portability and Accountability Act) that mandate data privacy and protection.
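A minimal pseudonymization sketch using a keyed hash (HMAC) from Python's standard library; the secret key and the email address are placeholders. Note that pseudonymized data is not fully anonymous, since individuals may still be re-identifiable from other fields.

```python
import hashlib
import hmac

# The secret key must be stored securely; if it leaks, pseudonyms can be
# re-linked to identities by brute force.
SECRET_KEY = b"replace-with-a-long-random-secret"

def pseudonymize(identifier: str) -> str:
    """Replace an identifier with a keyed hash so records can still be
    linked across datasets without exposing the original PII."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

print(pseudonymize("jane.doe@example.com"))  # same input -> same pseudonym
```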
Data documentation
 Proper documentation of data handling processes is essential for reproducibility
and transparency in research.
 This includes Metadata Documentation: Recording information about the data (e.g., source, format, collection date) to provide context and facilitate future use.
 Data Dictionaries: Creating a data dictionary that defines each variable, its meaning, and its data type to ensure consistency in data interpretation.
 Version Control: Implementing version control systems (e.g., Git) to track changes in datasets and ensure that the latest version is always accessible.
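A data dictionary can be kept machine-readable next to the dataset. The sketch below writes a small hypothetical dictionary to JSON; the variable names, ranges, and codes are invented.

```python
import json

# A small, hypothetical machine-readable data dictionary for a survey dataset.
data_dictionary = {
    "age": {
        "type": "integer",
        "description": "Respondent age in completed years",
        "valid_range": [18, 100],
    },
    "satisfaction": {
        "type": "integer",
        "description": "Overall satisfaction on a 5-point Likert scale",
        "codes": {"1": "Very Unsatisfied", "5": "Very Satisfied"},
    },
}

# Store the dictionary alongside the dataset so every analyst interprets
# the variables the same way.
with open("data_dictionary.json", "w") as f:
    json.dump(data_dictionary, f, indent=2)
```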
Data sharing and collaboration
Sharing data and collaborating with other researchers can enhance research
outcomes but requires careful handling to ensure data integrity and security
Data Sharing Agreements: Establishing formal agreements that define the
terms of data sharing, including access, usage rights, and confidentiality.
Collaborative Platforms: Using platforms like GitHub, Google Drive, or
institutional repositories to share data and collaborate in real-time.
Data Citation: Properly citing datasets in research publications to
acknowledge the original source and enable others to access the data.
Ethical consideration in data
handling
Ethical considerations are paramount in data handling, particularly when
dealing with sensitive or personal data.
Informed Consent: Ensuring that participants are fully informed about
how their data will be used and have given consent for its collection and
analysis.
Confidentiality: Implementing measures to protect the confidentiality of
participants and their data.
Bias Mitigation: Being aware of and addressing potential biases in data
collection, analysis, and interpretation to ensure fair and objective
results
Data entry
 Data entry is the process of inputting data into a computer system or database from various
sources, such as paper documents, forms, surveys, or digital files. It's a foundational task in
data management, ensuring that information is accurately recorded and easily accessible for
analysis and decision-making. Proper data entry practices are crucial to maintaining data
integrity and avoiding errors that could compromise the quality of research or business
operations.
Key Aspects of Data Entry
1. Accuracy
Accuracy is the most critical aspect of data entry. Errors in data entry can lead to incorrect analysis and decision-making. Ensuring that data is entered correctly and without mistakes is essential.
Double-Checking Entries: Review each entry for accuracy before finalizing it.
Validation Rules: Implement validation rules in data entry forms to automatically check for errors
(e.g., ensuring a date field only accepts valid date formats).
Verification Processes: Use verification processes, such as cross-checking with the original source
or having a second person review the data.
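A minimal sketch of such validation rules applied before a record is accepted; the field names and the specific rules are hypothetical.

```python
from datetime import datetime

def validate_entry(record: dict) -> list:
    """Return a list of rule violations; an empty list means the record
    can be accepted."""
    errors = []
    try:
        datetime.strptime(record.get("collected_on", ""), "%Y-%m-%d")
    except ValueError:
        errors.append("collected_on must be a valid YYYY-MM-DD date")
    if record.get("gender") not in {"M", "F"}:
        errors.append("gender must be coded M or F")
    if not 1 <= record.get("satisfaction", 0) <= 5:
        errors.append("satisfaction must be between 1 and 5")
    return errors

print(validate_entry({"collected_on": "2024-13-40",
                      "gender": "M", "satisfaction": 4}))
# ['collected_on must be a valid YYYY-MM-DD date']
```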
Cont’
2. Speed and Efficiency
While accuracy is paramount, speed is also important in data entry,
especially when dealing with large volumes of data. Efficient data entry
practices help save time and resources.
Keyboard Shortcuts: Utilize keyboard shortcuts to speed up the entry process.
Batch Processing: Enter data in batches rather than one record at a time to improve efficiency.
Automation Tools: Implement automation tools where possible, such as
optical character recognition (OCR) for scanning and converting printed
text into digital format.
Data entry tools and software
Using the right tools and software can significantly enhance the efficiency
and accuracy of data entry.
Spreadsheets: Programs like Microsoft Excel or Google Sheets are
commonly used for manual data entry, especially for numerical data.
Data Entry Forms: Custom forms can be designed using software like
Microsoft Access or Google Forms, which can guide users through the data
entry process and reduce errors.
Database Management Systems (DBMS): For more complex data entry
tasks, using a DBMS like MySQL, PostgreSQL, or Microsoft SQL Server
allows for better data organization, validation, and retrieval.
Data formatting
Proper data formatting ensures that data is consistent, easy to analyze, and compatible with other systems.
Standardized Formats: Use standardized formats for dates,
numbers, and text entries (e.g., YYYY-MM-DD for dates).
Consistent Coding: Apply consistent coding schemes for categorical
data (e.g., M/F for gender, Yes/No for binary responses).
Template Use: Create and use templates for common data entry
tasks to ensure consistency across records.
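A short sketch of standardizing dates and coding schemes at entry time; the accepted input date formats and the Yes/No mapping are assumptions about how the source data arrives.

```python
from datetime import datetime

# Accepted input formats are an assumption about how the source data arrives.
INPUT_FORMATS = ["%d/%m/%Y", "%m-%d-%Y", "%Y-%m-%d"]

def standardize_date(raw: str) -> str:
    """Convert a date in any accepted format to the standard YYYY-MM-DD."""
    for fmt in INPUT_FORMATS:
        try:
            return datetime.strptime(raw, fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    raise ValueError(f"unrecognized date format: {raw!r}")

# Consistent coding: map free-form answers onto a fixed Yes/No scheme.
YES_NO = {"yes": "Yes", "y": "Yes", "no": "No", "n": "No"}

print(standardize_date("15/01/2024"))  # 2024-01-15
print(YES_NO["Y".lower()])             # Yes
```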
Error detection and correction
Even with careful data entry, errors can still occur. Detecting and correcting these errors is essential to maintaining data quality.
Error Logs: Keep a log of detected errors and their corrections to identify
patterns and improve the data entry process.
Regular Audits: Perform regular audits of entered data to catch and correct
any inaccuracies.
Automated Checks: Implement automated checks to flag inconsistencies,
such as duplicate entries or outliers that fall outside expected ranges.
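A minimal sketch of automated checks that append to an error log rather than silently dropping rows; the order data and the expected quantity range are hypothetical.

```python
import pandas as pd

df = pd.DataFrame({
    "order_id": [101, 102, 102, 103],
    "quantity": [2, 1, 1, 500],  # 500 falls outside the expected range
})

# Automated checks: record problems in an error log instead of silently
# dropping rows, so patterns in entry mistakes can be reviewed later.
error_log = []
for idx in df.index[df["order_id"].duplicated()]:
    error_log.append((idx, "duplicate order_id"))
for idx in df.index[~df["quantity"].between(1, 100)]:
    error_log.append((idx, "quantity outside expected range 1-100"))

for row, message in error_log:
    print(f"row {row}: {message}")
# row 2: duplicate order_id
# row 3: quantity outside expected range 1-100
```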
Data security and confidentiality
Handling sensitive or confidential data requires strict security measures to
protect it from unauthorized access or breaches.
Access Controls: Limit access to data entry systems to authorized
personnel only.
Encryption: Use encryption for data entry and storage, especially for
sensitive information.
Secure Work Environment: Ensure that data entry workstations are secure,
with physical security measures and secure networks.
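As a small illustration of encryption at rest, the sketch below uses the Fernet interface from the third-party cryptography package (pip install cryptography); key management is deliberately simplified and the record is hypothetical.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, keep this in a secure key store
cipher = Fernet(key)

record = b"respondent 42: sensitive health response"  # hypothetical field
token = cipher.encrypt(record)   # ciphertext is safe to store or transmit
print(cipher.decrypt(token))     # only holders of the key can recover it
```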
Training and best practices
 Proper training for data entry personnel is crucial to ensure that they understand the
importance of accuracy, speed, and security.
 Training Programs: Provide comprehensive training on the tools, software, and methods
used for data entry.
 Best Practices Documentation: Create and distribute documentation of best practices,
including guidelines for error detection, data formatting, and security protocols.
 Continuous Improvement: Encourage feedback from data entry personnel to continuously
improve processes and address any challenges they face.
Cont’

Challenges in Data Entry

Human Error: Mistakes like typos, omissions, or misinterpretation of
source data can occur, especially with large volumes of data.
Fatigue: Repetitive data entry tasks can lead to fatigue, increasing the
likelihood of errors.
Complex Data: Entering complex or unstructured data (e.g., handwritten
notes, qualitative responses) can be more challenging and error-prone.
Time Constraints: Tight deadlines may pressure data entry personnel to
prioritize speed over accuracy.
THANK YOU
