
Chapter: 4

Data:
Data refers to raw, unprocessed facts, figures, or observations collected from the real world. Data is often presented
in the form of numbers, symbols, text, or signals. Data by itself does not have meaning or context until it is
interpreted or analyzed.
Key Characteristics of Data:
❖ Raw: Data is unorganized and unprocessed. It may consist of numbers, text, symbols, images, sounds, or other formats.
❖ Unstructured or Structured:
✓ Structured data: Data that is organized and stored in a specific format, like in databases or spreadsheets (e.g.,
names, addresses, financial figures).
✓ Unstructured data: Data that doesn’t follow a specific format or organization, like emails, social media posts,
or videos.
❖ Without Context: Data alone does not have inherent meaning. It needs to be processed or interpreted to become
meaningful.
❖ Collected through Observation or Measurement: Data comes from observations, experiments, surveys,
sensors, or other sources.

Information: Information is data that has been processed, organized, and structured in a way that adds meaning, context,
and usefulness. It is the result of analyzing and interpreting raw data, transforming it into a form that can help with
decision-making, problem-solving, or understanding a situation.

Key Characteristics of Information:


➢ Processed Data: Information is derived from raw data through sorting, organizing, summarizing, or analyzing.
It has context and relevance.
➢ Meaningful: Information provides insights or knowledge that can be acted upon. It conveys significance and
helps in making informed decisions.
➢ Actionable: Information can be used to make decisions, solve problems, or understand patterns and trends.
➢ Organized and Structured: Unlike raw data, information is presented in a structured format, such as reports,
graphs, or summaries, making it easier to interpret.

Data vs. Information:


Subject | Data | Information
Definition | Raw, unprocessed facts or figures | Processed, meaningful data
State | Unstructured or structured, but lacks meaning | Organized and contextualized for understanding
Purpose | Collected for analysis | Provides insights and supports decision-making
Example | 100, 200, 300 | "The total sales last quarter were $600."
Meaning | None without processing or context | Has specific meaning and can lead to action
Actionable | No | Yes, can be used to solve problems or make decisions
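As a minimal illustration of the example row above, the short sketch below (Python; the variable names are ours, purely illustrative) processes the raw figures 100, 200, 300 into the information statement about total sales:

```python
# Minimal sketch: turning raw data into information by processing it.
quarterly_sales = [100, 200, 300]          # raw data: figures without context
total = sum(quarterly_sales)               # processing: aggregation
information = f"The total sales last quarter were ${total}."
print(information)                         # "The total sales last quarter were $600."
```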

Types of Data
Primary Data: Primary data is data that is collected directly by the researcher for the specific purpose of the study.
It is first-hand and original. Examples include survey responses, observations from an experiment, and interviews
conducted by the researcher.
Key Characteristics of Primary Data:
➢ Original Source: Collected directly by the researcher from first-hand sources.
➢ Tailored for Specific Research: The data is collected specifically to address the research question or
hypothesis.
➢ Up-to-Date: Since it is freshly gathered, primary data is current and relevant to the researcher's needs.
➢ Control over Data Collection: The researcher has full control over the process of gathering data, including
the design of the data collection methods.
➢ Resource Intensive: Collecting primary data often requires significant time, effort, and financial resources.

Methods/Sources of Collecting Primary Data:


➢ Surveys/Questionnaires: A popular method where researchers ask participants a series of questions to gather
information on a particular topic. For example: A company conducts a customer satisfaction survey to assess
the quality of their service.
➢ Interviews: Involves asking open-ended questions to participants in one-on-one or group settings to gain deep
insights into their opinions, behaviors, or experiences. For example: A researcher interviews a group of
employees to understand workplace culture.
➢ Experiments: Controlled tests where researchers manipulate one or more variables to observe the effects on a
dependent variable. For example: A pharmaceutical company conducts a clinical trial to test the effectiveness
of a new drug.
➢ Observations: Researchers observe and record behaviors, actions, or events in a natural or controlled
environment without directly interacting with participants. For example: A researcher observes customer
behavior in a supermarket to understand purchasing patterns.
➢ Focus Groups: A small group of people is asked about their opinions, experiences, or perceptions on a specific
topic, often facilitated by a moderator. Example: A focus group discussion is conducted to gather feedback on
a new product concept.
➢ Field Studies: Data is gathered by directly studying subjects in their natural environment, often used in
ethnographic or sociological research. For example: A researcher spends time in a rural community to
understand local agricultural practices.

Secondary Data: Secondary data refers to data that has already been collected, processed, and published by
someone else for purposes other than the current research project. This type of data is often used to support research
or validate findings.

Key Characteristics of Secondary Data:


➢ Pre-existing: Secondary data is not collected directly by the researcher but is derived from existing sources.
➢ Cost-effective: It is generally less expensive to obtain than primary data since it does not require new data
collection.
➢ Widely Available: Secondary data can be accessed from various sources, including libraries, online databases,
government publications, and previous research studies.
➢ Less Control: Researchers do not have control over how the data was collected, which can affect its quality
and relevance.
➢ Variety of Sources: Secondary data can come from numerous sources, providing a wide range of information
on different topics.

Sources of Secondary Data:


➢ Published Research Studies: Academic journal articles, conference papers, and theses where researchers
present findings based on their original research. For example: Using findings from a published study on
consumer behavior to inform a new marketing strategy.
➢ Government Reports and Statistics: Data collected and published by government agencies, such as census
data, economic indicators, and public health statistics. Example: Analyzing census data to study demographic
trends in a particular region.
➢ Industry Reports: Reports from market research firms or industry associations that summarize trends, market
analysis, and economic forecasts. Example: Using a report from a market research firm to understand the
competitive landscape in the technology sector.
➢ Organizational Databases: Data from internal organizational sources, such as sales records, customer
databases, and performance metrics. Example: A business analyzing historical sales data to identify trends in
customer purchasing behavior.
➢ Books and Textbooks: Scholarly books that compile research findings, theories, and historical data. Example:
Referencing a textbook that summarizes key economic theories and includes relevant statistical data.
➢ Surveys or Polls Conducted by Others: Data collected from surveys or polls conducted by other organizations,
such as Gallup or Pew Research Center. Example: Using public opinion data from a Gallup poll to analyze voter
sentiment on a particular issue.
➢ Online Databases: Academic databases and repositories like JSTOR, ProQuest, and Google Scholar, which
provide access to a vast array of published research and data. Example: Searching a database for existing
literature on climate change impacts on agriculture.
➢ Media Reports: Articles and reports published in newspapers, magazines, or online media that provide insights
on various topics. Example: Analyzing articles from reputable news sources to gather information on recent
economic developments.

Qualitative data: Qualitative data is descriptive and non-numerical. This type of data is often gathered through
interviews, observations, or open-ended surveys and is typically used in social sciences, humanities, and market
research to understand experiences, opinions, or behaviors.

Key Characteristics of Qualitative Data:


➢ Descriptive: Qualitative data describes qualities or characteristics, often using words rather than numbers.
➢ Non-Numerical: It cannot be measured or counted in the traditional sense.
➢ Subjective: Often based on personal experiences, perceptions, or interpretations.
➢ Rich and Detailed: Provides depth and complexity to understanding phenomena, capturing subtleties that
numbers might miss.
➢ Contextual: Offers insights within the context of the situation being studied (e.g., cultural, social, or personal
context).

Quantitative data: Quantitative data refers to numerical data that can be measured, counted, and expressed in
numbers. Quantitative data allows researchers to identify patterns, make predictions, and generalize results across
populations because of its precise, measurable nature.

Key Characteristics of Quantitative Data:


➢ Numerical: Quantitative data is expressed in numbers, allowing for mathematical computations.
➢ Measurable: It represents measurable quantities, such as height, weight, time, and temperature.
➢ Objective: The data is objective, meaning it is less influenced by individual perceptions or opinions.
➢ Statistical Analysis: Can be analyzed using statistical methods to identify trends, relationships, or differences.
➢ Structured: The data is often collected in a structured format, such as surveys with predefined answer choices
or experiments with controlled variables.

Cross-sectional data: Cross-sectional data refers to data collected at a single point in time across multiple subjects,
such as individuals, organizations, or countries.
Key Characteristics of Cross-Sectional Data:
➢ Single Time Frame: Data is collected at one specific point in time, rather than over an extended period.
➢ Multiple Entities: Cross-sectional data typically involves multiple subjects, allowing for comparisons.
➢ Descriptive Analysis: It is often used for descriptive analysis, providing insights into the characteristics of a
population at a given moment.

Time series data: Time series data refers to a sequence of observations collected at successive points in time,
typically at equal intervals. This type of data is used to analyze trends, patterns, and fluctuations over time, making
it valuable for forecasting and understanding temporal dynamics in various fields, including economics, finance,
environmental science, and social sciences.

Key Characteristics of Time Series Data:


➢ Sequential Observations: Time series data consists of observations recorded sequentially over time, allowing
for analysis of changes or trends.
➢ Time Dependency: The values in a time series are often dependent on previous values, making the order of
data collection crucial for analysis.
➢ Regular or Irregular Intervals: Data can be collected at regular intervals (e.g., daily, monthly, yearly) or
irregular intervals (e.g., based on events).
➢ Trend, Seasonality, and Cyclic Patterns: Time series data can exhibit various patterns, including long-term
trends, seasonal variations, and cyclical fluctuations.
➢ Single Entity: Time series data is typically collected from a single entity over multiple time periods.

Difference between cross-sectional and time series data


Aspect | Cross-Sectional Data | Time Series Data
Definition | Data collected at one point in time | Data collected over multiple time points
Structure | Snapshot of multiple subjects | Sequence of observations over time
Focus | Comparison among subjects | Changes in the same subject over time
Temporal Aspect | No time dimension | Strong time dimension
Examples | Survey results from a single day | Daily stock prices over a year
Analysis | Descriptive statistics, correlations | Forecasting, trend analysis

Panel data: Panel data (also known as longitudinal data) refers to a dataset that combines both cross-sectional and
time series dimensions. It consists of observations on multiple subjects (such as individuals, firms, or countries)
collected at multiple points in time. For example: data on GDP, unemployment rates, and inflation for various countries
over several years. A brief sketch contrasting these three data structures follows the list of characteristics below.

Key Characteristics of Panel Data:


➢ Multiple Dimensions: Panel data has two dimensions: cross-sectional (across different subjects) and temporal
(over time). This structure allows for more complex analysis compared to simple cross-sectional or time series
data.
➢ Repeated Observations: The same subjects are observed multiple times, which helps in analyzing changes and
dynamics over time.
➢ Rich Information: Panel data can provide more detailed insights as it combines information about individual
characteristics and changes over time.
➢ Variation: It allows researchers to observe both individual-level variations (differences between subjects) and
temporal variations (changes over time).
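The sketch below (assuming Python with pandas; the countries and figures are invented) contrasts how cross-sectional, time series, and panel data might be laid out:

```python
import pandas as pd

# Cross-sectional data: many subjects, one point in time (e.g., GDP in 2023).
cross_section = pd.DataFrame({
    "country": ["A", "B", "C"],
    "gdp_2023": [2.1, 3.4, 1.8],
})

# Time series data: one subject, many points in time (e.g., country A's GDP by year).
time_series = pd.DataFrame({
    "year": [2021, 2022, 2023],
    "gdp_country_A": [1.9, 2.0, 2.1],
})

# Panel data: many subjects observed at many points in time.
panel = pd.DataFrame({
    "country": ["A", "A", "B", "B"],
    "year":    [2022, 2023, 2022, 2023],
    "gdp":     [2.0, 2.1, 3.2, 3.4],
    "unemployment": [5.1, 4.9, 6.3, 6.0],
}).set_index(["country", "year"])          # two dimensions: subject and time

print(panel)
```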
Structured Data: Structured data is highly organized and easily searchable in a fixed format, often stored in
databases or spreadsheets. Examples: Employee data in an Excel sheet (e.g., names, employee IDs, salaries).

Unstructured Data: Unstructured data lacks a predefined structure and is more difficult to organize, often involving
text, images, videos, or other formats. Examples: Customer reviews written in free text form.

Big Data: Big Data refers to extremely large and complex datasets that traditional data processing software cannot
efficiently handle. Big Data has gained prominence due to the exponential increase in data generation, driven by
advancements in technology and the rise of the internet.

Key Characteristics of Big Data


➢ Volume: Refers to the sheer amount of data generated. The data can come from various sources, including social
media, sensors, transactions, and devices.
➢ Velocity: Refers to the speed at which data is generated, processed, and analyzed. In today’s fast-paced
environment, data flows in at an unprecedented rate, necessitating real-time processing to derive actionable
insights.
➢ Variety: Refers to the different types of data collected. Big Data can include structured data (e.g., databases),
semi-structured data (e.g., XML, JSON), and unstructured data (e.g., text, images, videos). This diversity makes
it challenging to analyze and manage.
➢ Veracity: Refers to the quality and accuracy of the data. With vast amounts of data, ensuring its reliability and
authenticity becomes crucial. Poor data quality can lead to misleading insights and decisions.
➢ Value: Refers to the potential insights and benefits that can be derived from analyzing Big Data. The goal is to
extract meaningful information that can drive better decision-making, improve efficiency, and create new
business opportunities.

Sources of Big Data


➢ Social Media: Platforms like Facebook, Twitter, and Instagram generate vast amounts of user-generated
content, interactions, and behaviors.
➢ IoT Devices: Smart devices, sensors, and machines collect data in real time, contributing to the growing volume
of data.
➢ Transactional Data: E-commerce transactions, financial records, and other business operations generate large
datasets.
➢ Healthcare Data: Medical records, research data, and patient monitoring systems contribute to Big Data in the
healthcare sector.
➢ Web and Mobile Analytics: Data from website visits, clicks, and mobile app usage patterns provide insights
into user behavior.

Applications of Big Data


➢ Business Analytics: Companies analyze consumer behavior, sales patterns, and market trends to make
informed business decisions and improve strategies.
➢ Healthcare: Big Data is used for patient care optimization, drug discovery, personalized medicine, and
predictive analytics in public health.
➢ Finance: Financial institutions leverage Big Data for risk assessment, fraud detection, algorithmic trading, and
customer segmentation.
➢ Retail: Businesses use Big Data to enhance customer experiences, manage supply chains, and optimize
inventory management.
➢ Manufacturing: Big Data helps in predictive maintenance, process optimization, and quality control in
manufacturing processes.
➢ Transportation: Companies use data from GPS, traffic patterns, and logistics to improve routing, reduce
delays, and enhance overall efficiency.

Observation: Observation is a fundamental research method used to collect data and gather insights about
behaviors, events, or phenomena in a systematic manner. Observational methods are commonly used in various
fields, including social sciences, psychology, education, and natural sciences.

Data Collection Techniques in Observation


➢ Field Notes: Researchers take detailed notes during the observation, capturing behaviors, interactions, and
contextual information.
➢ Video Recording: Using video cameras to record interactions allows for later analysis and a more accurate
account of events.
➢ Audio Recording: Capturing conversations or sounds can provide valuable context for understanding
behaviors and interactions.
➢ Checklists and Coding Sheets: Researchers may use structured tools to systematically record specific
behaviors or events during the observation.

Interview: Interviews are a qualitative research method used to gather in-depth information from individuals
through direct interaction. This method involves asking questions to obtain insights, opinions, experiences, and
perspectives on a specific topic.

Types of Interviews
➢ Structured Interviews: In structured interviews, researchers ask a predefined set of questions in a specific
order. This approach ensures consistency across interviews, making it easier to compare responses.
➢ Semi-Structured Interviews: Semi-structured interviews combine predetermined questions with the flexibility
to explore topics in more depth as they arise during the conversation.
➢ Unstructured Interviews: Unstructured interviews are informal and do not follow a specific format or order.
This approach can lead to unexpected insights.
➢ Focus Group Interviews: Focus groups involve guided discussions with a small group of participants. A
moderator facilitates the conversation, encouraging participants to share their thoughts and interact with each
other.
Advantages of Interviews:
➢ In-Depth Information: Interviews allow researchers to gather rich, detailed information and insights that may
not be captured through surveys or other methods.
➢ Flexibility: The interviewer can adapt questions and explore topics in more depth based on participant responses,
allowing for a more comprehensive understanding.
➢ Non-Verbal Cues: Interviews provide an opportunity to observe non-verbal communication, such as body
language and facial expressions, which can enhance understanding.
➢ Building Rapport: Interviews can create a comfortable environment for participants, encouraging them to share
more openly and honestly.
➢ Clarification: Interviewers can clarify questions or concepts as needed, ensuring that participants fully
understand what is being asked.

Disadvantages of Interviews
➢ Time-Consuming: Conducting interviews can be labor-intensive and time-consuming, both for researchers and
participants.
➢ Subjectivity: Interviewer bias can influence the way questions are asked or interpreted, potentially affecting the
responses.
➢ Limited Generalizability: Findings from interviews may not be generalizable to larger populations due to the
typically small sample size.
➢ Data Analysis: Analyzing qualitative data from interviews can be complex and requires careful coding and
interpretation.
➢ Interviewer Influence: The presence and demeanor of the interviewer can impact participant responses, leading
to potential biases.

Data Collection Techniques in Interviews


➢ In Person: Conducting the interview face-to-face allows the interviewer to vary the approach and helps in
understanding the tone and emotion of respondents.
➢ Audio Recording: Recording interviews allows for accurate transcription and analysis of the conversation.
➢ Video Recording: Capturing video can provide additional context through non-verbal cues and body language.
➢ Field Notes: Researchers may take notes during the interview to capture immediate impressions or observations.
➢ Transcription: Converting audio or video recordings into written text for detailed analysis.

Questionnaire method:
The Questionnaire Method is a popular data collection technique where a set of written questions is used to gather
information from respondents.

Key Features of the Questionnaire Method


➢ Standardized Questions: The same set of questions is presented to all respondents, ensuring consistency and
comparability of responses.
➢ Written Format: Questions are usually presented in written form, either on paper or electronically. Respondents
fill out the answers themselves.
➢ Anonymity: In many cases, questionnaires can be completed anonymously, which can encourage more honest
and unbiased responses.
➢ Efficiency: Questionnaires allow researchers to collect data from a large number of respondents quickly and at
relatively low cost.

Types of Questionnaires
➢ Structured Questionnaires: Consist of predefined questions with fixed response options (e.g., multiple-choice,
Likert scales, yes/no questions). These are easy to administer and analyze. Example: A survey asking
respondents to rate their satisfaction with a product on a scale of 1 to 5.
➢ Unstructured Questionnaires: Open-ended questions that allow respondents to answer in their own words.
These provide more in-depth qualitative data but are harder to analyze systematically. Example: A survey asking
respondents to describe their experience using a product.
➢ Semi-Structured Questionnaires: A combination of structured and unstructured questions. Some questions
have fixed responses, while others are open-ended, allowing for a balance of quantitative and qualitative data.
Example: A questionnaire with multiple-choice questions about demographics, followed by open-ended
questions about opinions.

Types of Questions in Questionnaires


➢ Closed-Ended Questions: Offer predefined answer choices. These are easier to analyze and are often used for
quantitative data collection. Example: Do you own a smartphone? (Yes / No).
➢ Open-Ended Questions: Allow respondents to answer in their own words, providing more detailed and nuanced
responses. These are typically used for qualitative research. Example: "What features do you look for in a
smartphone?"
➢ Multiple-Choice Questions: Provide several possible answers, from which the respondent can select one or
more options. Example: "Which of the following social media platforms do you use? (Facebook, Twitter,
Instagram, LinkedIn)"
➢ Likert Scale Questions: Ask respondents to rate their level of agreement or satisfaction on a scale, typically
ranging from "strongly agree" to "strongly disagree." Example: "How satisfied are you with our service? (1 =
Very dissatisfied, 5 = Very satisfied)"
➢ Ranking Questions: Ask respondents to rank items in order of preference or importance. Example: "Rank the
following factors in order of importance when purchasing a smartphone (Price, Features, Brand, and Design)."

Advantages of the Questionnaire Method


➢ Cost-Effective: Questionnaires are relatively inexpensive to design, distribute, and analyze, especially when
conducted online.
➢ Reach a Large Audience: Questionnaires can be distributed to a large group of people across different
geographical locations, making it possible to collect diverse data.
➢ Standardization: Since every respondent receives the same set of questions, data collected is uniform,
facilitating easier comparison and analysis.
➢ Anonymity and Privacy: Respondents may feel more comfortable answering sensitive or personal questions if
the questionnaire ensures anonymity.
➢ Quantitative and Qualitative Data: Questionnaires can collect both types of data, depending on how the
questions are structured.

Disadvantages of the Questionnaire Method


➢ Limited Depth: Structured questionnaires, with fixed responses, may not capture the full depth or complexity
of respondents’ opinions or experiences.
➢ Non-Response: Low response rates can be a significant problem, especially in online or mail surveys.
Respondents may choose not to participate or fail to complete the questionnaire.
➢ Misinterpretation: Respondents may misinterpret questions, especially if there is no interviewer present to
clarify. This can lead to inaccurate data.
➢ Bias: Questionnaires may be subject to response bias, where respondents provide answers they think are socially
acceptable rather than truthful.
➢ Time-Consuming: Though they can be efficient in terms of reach, designing a well-crafted questionnaire and
analyzing open-ended responses can be time-consuming.

Designing an Effective Questionnaire


➢ Clear Objectives: Define the goals of the questionnaire. What specific information are you trying to gather?
This helps in formulating relevant questions.
➢ Simple Language: Use clear, straightforward language that respondents can easily understand. Avoid technical
jargon or ambiguous terms.
➢ Question Order: Start with general or easier questions to build rapport and gradually move towards more
specific or sensitive questions.
➢ Avoid Leading Questions: Ensure that questions are neutral and do not suggest a particular response. Leading
questions can bias the results.
➢ Pilot Testing: Before distributing the questionnaire widely, conduct a pilot test with a small group to identify
any issues with question clarity, flow, or timing.

Steps for making a questionnaire from a literature review


Here are the key steps for creating a questionnaire from a literature review:
1. Define Research Objectives
➢ Clarify Research Goals: Before designing the questionnaire, clearly define what you want to investigate.
➢ Identify Key Constructs: Based on your research objectives, identify the key constructs or variables you want
to measure.
Example: If your research objective is to measure the impact of leadership style on employee motivation, the
key constructs would include "leadership style" and "employee motivation."
2. Conduct a Comprehensive Literature Review
➢ Identify Theories and Models: Review the literature to understand the theoretical frameworks and models
related to your research topic.
➢ Review Previous Questionnaires: Look for questionnaires used in similar studies and examine the types of
questions, response scales, and measurement techniques employed.
➢ Extract Measurement Scales: Using established scales adds rigor to your questionnaire.
Example: If you are researching job satisfaction, you might find that the "Minnesota Satisfaction Questionnaire" is
commonly used and includes relevant dimensions like pay, supervision, and work environment.
3. Operationalize Constructs from the Literature
➢ Define the Constructs in Measurable Terms: Operationalization involves turning abstract concepts from the
literature into measurable items. Each construct should have a clear, concrete definition and be linked to
specific questions.
➢ Break down Constructs into Sub-Dimensions: If a construct has multiple dimensions, create questions to
measure each dimension.
Example: Based on the literature, "employee motivation" may include sub-dimensions like intrinsic motivation
(e.g., enjoyment of work) and extrinsic motivation (e.g., salary and benefits).
4. Develop Questions Based on the Literature
➢ Use Existing Questions Where Possible: If the literature provides well-validated questions, it's often best to
use them.
➢ Create New Questions Where Needed: Make sure new questions are clear, unambiguous, and aligned with the
definitions and constructs from the literature.
➢ Balance Closed and Open-Ended Questions: Include open-ended questions if the literature suggests gaps or
areas requiring further exploration.
Example: For leadership style, the literature might suggest using questions that rate different styles (e.g.,
transformational, transactional) on a Likert scale.
5. Choose the Right Response Format
➢ Use Appropriate Scales: Based on the literature, select appropriate response scales for your questions:
➢ Likert Scales: For measuring attitudes, opinions, or perceptions.
➢ Semantic Differential Scales: To measure people's reactions on a scale between two opposing adjectives.
➢ Maintain Consistency with the Literature: Ensure the response formats you use are consistent with those
found in the literature to allow comparison and validation.
Example: If previous studies used a 5-point Likert scale to measure job satisfaction, you should do the same to
ensure consistency.
6. Ensure Content Validity
➢ Cover All Relevant Constructs: This ensures the questionnaire is comprehensive and aligned with the
theoretical framework.
➢ Expert Review: Consult experts or peers familiar with the literature to review the questionnaire and ensure
that it adequately measures the key variables and constructs from the literature.
Example: If "job performance" is a key construct, make sure your questionnaire covers all aspects of
performance identified in the literature, such as productivity, quality of work, and teamwork.
7. Pretest and Pilot the Questionnaire
➢ Conduct a Pilot Test: The goal is to identify any issues with clarity, question order, or response options.
➢ Refine Based on Feedback: Use the feedback from the pilot test to make necessary adjustments. This might
include rephrasing questions, adjusting response options, or reordering questions for better flow.
Example: After a pilot test, you might find that respondents struggle with interpreting certain terms from the
literature, requiring you to simplify the wording.
8. Address Potential Biases
➢ Avoid Leading Questions: Ensure that your questions are neutral and not leading.
➢ Randomize or Group Questions: Based on the literature, consider whether certain questions should be
grouped by construct, or if randomizing them will help reduce bias.
Example: Instead of asking, "Do you agree that transformational leadership is the best style?" ask, "How would
you rate the effectiveness of transformational leadership in your organization?"
9. Finalize and Distribute the Questionnaire
➢ Final Review: After refining the questionnaire, conduct a final review to ensure all questions are aligned with
your research objectives and grounded in the literature.
➢ Select Distribution Method: Choose the most appropriate method for distributing your questionnaire
depending on your target audience. Example: If your respondents are geographically dispersed, an online
survey platform may be the best way to distribute the questionnaire efficiently.
10. Data Collection and Analysis
➢ Collect Data: Distribute the finalized questionnaire to your target population and collect the responses.
➢ Coding and editing data: Coding involves converting qualitative data into a numerical or categorized format
that can be analyzed statistically. Editing involves checking the data for accuracy, consistency, and
completeness before analysis.
➢ Analyze Data in Line with Literature: When analyzing your data, compare your findings with the literature.
Example: If the literature uses regression analysis to explore relationships between leadership style and
employee motivation, apply the same method to your data.

Processing and Analysis of Data

Processing Operations of Research Data.


Editing: Editing of data is the process of examining the collected raw data (especially in surveys) to detect errors and
omissions and to correct these when possible. With regard to the stage at which it is done, editing may be field editing
or central editing.
Coding: Coding refers to the process of assigning numerals or other symbols to answers so that responses can be
put into a limited number of categories or classes. Such classes should be appropriate to the research problem under
consideration.
Classification: Classification is the process of arranging data in groups or classes on the basis of common
characteristics. Classification can be of the exclusive type or the inclusive type.
Tabulation: When a mass of data has been assembled, it becomes necessary for the researcher to arrange it in some
kind of concise and logical order. This procedure is referred to as tabulation.
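As a rough illustration of classification and tabulation, the sketch below (assuming Python with pandas; the ages and class limits are made up) groups values into classes and tabulates the class frequencies:

```python
import pandas as pd

# Hypothetical raw ages collected in a survey.
ages = pd.Series([18, 22, 25, 31, 34, 40, 41, 47, 52, 58])

# Classification: arranging data into classes on the basis of a common characteristic.
# right=False gives exclusive-type classes: [18, 28), [28, 38), [38, 48), [48, 58), [58, 68).
classes = pd.cut(ages, bins=[18, 28, 38, 48, 58, 68], right=False)

# Tabulation: arranging the classified data in a concise, logical order (a frequency table).
frequency_table = classes.value_counts().sort_index()
print(frequency_table)
```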

STATISTICAL MEASURES USED IN RESEARCH


Statistical measures are tools used in research to summarize data, identify patterns, and make inferences. They are
broadly categorized into descriptive measures (summarizing data) and inferential measures (drawing conclusions
about populations).
➢ Descriptive Statistics: Descriptive statistics are used to summarize, organize, and present data in a way that is
easy to understand. They provide a simple overview of the data's main characteristics without making predictions.
➢ Measures of Central Tendency: Measures of central tendency are statistical metrics used to describe the center
or typical value of a data set. They summarize the data with a single value that represents the "average" or most
common point. The main measures of central tendency are: Mean, Median, and Mode.
➢ Measures of Dispersion (Variability): Measures of dispersion, also known as measures of variability, describe the
spread or distribution of data points in a dataset. They quantify how much the data deviates from the central tendency,
providing insights into the data's consistency and diversity. The main measures of dispersion are: Range, Variance,
Standard Deviation, Skewness, and Kurtosis.
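As a brief computational illustration (assuming Python with scipy available), the sketch below computes the central tendency and dispersion measures listed above on an arbitrary set of values:

```python
import statistics
from scipy import stats

data = [4, 8, 6, 5, 3, 8, 9, 7, 8, 6]      # arbitrary sample values

# Measures of central tendency
mean = statistics.mean(data)
median = statistics.median(data)
mode = statistics.mode(data)

# Measures of dispersion
data_range = max(data) - min(data)
variance = statistics.variance(data)       # sample variance
std_dev = statistics.stdev(data)           # sample standard deviation
skewness = stats.skew(data)
kurt = stats.kurtosis(data)                # excess kurtosis

print(mean, median, mode, data_range, variance, std_dev, skewness, kurt)
```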

Inferential statistics: Inferential statistics involves using data from a sample to draw conclusions or make
inferences about a larger population. It provides tools to analyze data, test hypotheses, and make predictions. Unlike
descriptive statistics, which summarize data, inferential statistics uses probability theory to make generalizations.
The main techniques in inferential statistics are: Point Estimation, Hypothesis Testing, Linear Regression Analysis, and
Multiple Regression Analysis.
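As a brief illustration (assuming Python with scipy installed), the sketch below shows two of the techniques named above, a two-sample hypothesis test and simple linear regression, on invented numbers:

```python
from scipy import stats

# Hypothesis testing: do two groups differ in mean satisfaction score?
group_a = [3.8, 4.1, 4.5, 3.9, 4.2, 4.4]
group_b = [3.2, 3.6, 3.1, 3.8, 3.4, 3.5]
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # a small p-value suggests a real difference

# Simple linear regression: does advertising spend predict sales?
ad_spend = [10, 15, 20, 25, 30, 35]
sales = [110, 135, 160, 178, 205, 230]
result = stats.linregress(ad_spend, sales)
print(f"slope = {result.slope:.2f}, intercept = {result.intercept:.2f}, r = {result.rvalue:.2f}")
```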

Problem #01
Let us suppose that we want a sample of size n = 30 to be drawn from a population of size N = 8000 which is divided
into three strata of size N1= 4000, N2= 2400 and N3= 1600. Adopting proportional allocation, we shall get the
sample sizes as under for the different strata:

Solution: Under proportional allocation, the sample size for stratum i is ni = (n × Ni) / N.
n1 = (30 × 4000) / 8000 = 15
n2 = (30 × 2400) / 8000 = 9
n3 = (30 × 1600) / 8000 = 6
Check: 15 + 9 + 6 = 30 = n
From strata N1, N2, and N3 we have to select samples of 15, 9, and 6 respectively.
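As a quick check of the arithmetic, here is a minimal Python sketch of proportional allocation (the function name is ours, purely illustrative):

```python
def proportional_allocation(n, strata_sizes):
    """Allocate a total sample of size n across strata in proportion to stratum size."""
    N = sum(strata_sizes)
    # Simple rounding may need manual adjustment if the allocations do not sum exactly to n.
    return [round(n * Ni / N) for Ni in strata_sizes]

# Problem #01: n = 30, strata of 4000, 2400 and 1600 out of N = 8000.
print(proportional_allocation(30, [4000, 2400, 1600]))   # [15, 9, 6]
```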
Problem #02
Suppose we want a sample of size n = 380 from Islamic University, drawn from a population of size N = 20000, which is
divided into three strata: N1 (total number of boys students) = 10000, N2 (total number of girls students) = 8000 and
N3 (total number of academic and administrative staff) = 2000. Adopting proportional allocation, find the sample sizes
for the different strata.
Solution: Using proportional allocation, ni = (n × Ni) / N:
n1 = (380 × 10000) / 20000 = 190
n2 = (380 × 8000) / 20000 = 152
n3 = (380 × 2000) / 20000 = 38
Check: 190 + 152 + 38 = 380 = n
From strata N1, N2, and N3 we have to select samples of 190, 152, and 38 respectively.

Problem #03
A population is divided into three strata so that N1 = 5000, N2 = 2000 and N3 = 3000. Respective standard deviations
are: σ1 =15, σ2 =18 and σ3=5. How should a sample of size n = 84 be allocated to the three strata, if we want
optimum allocation using disproportionate sampling design?
Solution: Using the disproportionate sampling design for optimum allocation, the sample sizes for different strata
will be determined as under:
ni = (n × Ni × σi) / (N1σ1 + N2σ2 + N3σ3)
Sample size for stratum with N1 = 5000:
n1 = (84 × 5000 × 15) / [(5000 × 15) + (2000 × 18) + (3000 × 5)] = 50
Sample size for stratum with N2 = 2000:
n2 = (84 × 2000 × 18) / [(5000 × 15) + (2000 × 18) + (3000 × 5)] = 24
Sample size for stratum with N3 = 3000:
n3 = (84 × 3000 × 5) / [(5000 × 15) + (2000 × 18) + (3000 × 5)] = 10
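Similarly, a minimal sketch of the optimum (Neyman-type) allocation formula used above, with an illustrative helper name of our own:

```python
def optimum_allocation(n, strata_sizes, std_devs):
    """Optimum (disproportionate) allocation: n_i proportional to N_i * sigma_i."""
    weights = [Ni * si for Ni, si in zip(strata_sizes, std_devs)]
    total = sum(weights)
    return [round(n * w / total) for w in weights]

# Problem #03: n = 84, N = (5000, 2000, 3000), sigma = (15, 18, 5).
print(optimum_allocation(84, [5000, 2000, 3000], [15, 18, 5]))   # [50, 24, 10]
```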

Problem #04
The following are the number of departmental stores in 15 cities: 35, 17, 10, 32, 70, 28, 26, 19, 26, 66, 37, 44, 33,
29 and 28. If we want to select a sample of 10 stores, using cities as clusters and selecting within clusters proportional
to size, how many stores from each city should be chosen? (Use a starting point of 10).
Solution:
Since in the given problem we have 500 departmental stores in total, from which we have to select a sample of 10 stores,
the appropriate sampling interval is 500/10 = 50. As we are to use a starting point of 10, we add successive increments
of 50 until 10 numbers have been selected. The numbers thus obtained are 10, 60, 110, 160, 210, 260, 310, 360, 410 and
460, shown in the last column of the table against the corresponding cumulative totals. From this we can say that two
stores should be selected randomly from city number 5, and one each from city numbers 1, 3, 7, 9, 10, 11, 12, and 14.
This sample of 10 stores is a sample selected with probability proportional to size.

City Number | No. of Departmental Stores | Cumulative Total | Sample
1 | 35 | 35 | 10
2 | 17 | 52 |
3 | 10 | 62 | 60
4 | 32 | 94 |
5 | 70 | 164 | 110, 160
6 | 28 | 192 |
7 | 26 | 218 | 210
8 | 19 | 237 |
9 | 26 | 263 | 260
10 | 66 | 329 | 310
11 | 37 | 366 | 360
12 | 44 | 410 | 410
13 | 33 | 443 |
14 | 29 | 472 | 460
15 | 28 | 500 |
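This selection can be reproduced with a short Python sketch; the logic below is our own illustration of systematic selection with probability proportional to size, not a standard library routine:

```python
from bisect import bisect_left
from itertools import accumulate

stores = [35, 17, 10, 32, 70, 28, 26, 19, 26, 66, 37, 44, 33, 29, 28]
n, start = 10, 10

cumulative = list(accumulate(stores))                 # 35, 52, 62, ..., 500
interval = sum(stores) // n                           # 500 / 10 = 50
points = [start + i * interval for i in range(n)]     # 10, 60, ..., 460

# Each selection point falls in the city whose cumulative total first reaches it.
selected_cities = [bisect_left(cumulative, p) + 1 for p in points]
print(selected_cities)   # [1, 3, 5, 5, 7, 9, 10, 11, 12, 14]
```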

State the meaning of Universe, Population, and Sample with an appropriate example.
Universe: The universe (also known as the target population or theoretical population) is the broadest group or
collection of elements that a researcher wants to study. It includes all potential subjects or elements that fit the
criteria of interest. For example: In a study about the reading habits of students, the universe could be all students
worldwide.
Population: The population is a specific group of individuals or items within the universe that meet the criteria of
interest for the research. It is more narrowly defined than the universe and often focuses on a specific geographical,
temporal, or contextual subset. For example: Continuing from the earlier example, the population might be all
students in higher education institutions in Bangladesh.
Sample: The sample is a smaller subset of the population that is actually studied or surveyed. Researchers collect
data from the sample because it is usually impractical or impossible to study the entire population. The sample
should be representative of the population to allow for generalization of results. For example: In the same study, the
sample could be 500 higher education students from 20 higher education institutions across different regions of
Bangladesh.

Case study analysis


A case study analysis is a detailed examination of a specific instance or situation, often used to explore complex
issues in real-world contexts. The goal of case study analysis is to understand the intricacies of a situation, evaluate
key factors and stakeholders involved, and derive insights that can inform decision-making or theory development.
It is widely used in fields such as business, education, law, healthcare, and social sciences.

Steps in Case Study Analysis


❖ Read and Understand the Case
➢ Initial Reading: Begin by reading the case thoroughly to get a general understanding of the situation. Note the
key players, the timeline of events, and the main issues.
➢ Identify the Problem: Define the central issue or challenge in the case. What problem needs to be addressed or
resolved? Is it a managerial decision, a policy dilemma, or a strategic question?
➢ Contextual Understanding: Understand the broader context in which the case is situated, such as organizational
structure, external market conditions, or relevant industry trends.
❖ Identify Key Facts and Data
➢ Extract Relevant Information: Highlight important facts, figures, and events from the case. What data has
been provided to support decision-making? This might include financial statements, market research, operational
reports, or employee feedback.
➢ List Stakeholders: Identify all key stakeholders involved, such as company management, employees,
customers, or external entities like regulators or competitors. Understanding the interests and motivations of
each stakeholder is crucial for effective analysis.
❖ Define the Core Issues
➢ Identify Root Causes: Go beyond surface-level issues to determine the underlying causes of the problem. This
often requires an in-depth look at internal and external factors that contribute to the situation, such as leadership
style, organizational culture, or market dynamics.
➢ Recognize Conflicting Interests: In many cases, there are conflicting interests among stakeholders. Identify
these conflicts and assess how they impact the problem at hand.
❖ Apply Relevant Theories and Frameworks
➢ Choose Analytical Tools: Use theoretical frameworks and models to structure your analysis. For example, in
business cases, tools like SWOT analysis, Porter's Five Forces, or the PESTLE framework can help analyze
internal and external environments. In educational or healthcare cases, models like Bloom’s Taxonomy or the
Health Belief Model may be more appropriate.
➢ Link to Academic Theories: Connect the case to relevant academic theories or concepts. This may involve
drawing on leadership theories, organizational behavior, economics, or other disciplinary knowledge.
❖ Evaluate Alternative Solutions
➢ Generate Options: Brainstorm and list potential solutions to the core issue. Think about different strategies that
the organization or stakeholders could pursue.
➢ Assess Feasibility: Evaluate each potential solution in terms of feasibility, cost, time, and potential impact. What
are the pros and cons of each option? Are there any resource constraints or risks?
➢ Use Evidence: Use evidence from the case data, as well as your theoretical analysis, to support or refute each
potential solution.

❖ Recommend the Best Course of Action


➢ Select the Optimal Solution: After evaluating all alternatives, recommend the most suitable solution. Be
specific about what steps should be taken and by whom. Your recommendation should address the root cause of
the problem and balance the interests of key stakeholders.
➢ Justify Your Recommendation: Provide clear reasoning for your choice, drawing on the facts from the case,
theoretical models, and any external research. Explain why this solution is preferable to other options.
❖ Create an Implementation Plan
➢ Outline Key Actions: Detail a step-by-step plan to implement your recommended solution. Include timelines,
responsible parties, and any required resources.
➢ Address Potential Challenges: Identify any obstacles to implementation, such as resistance from stakeholders
or financial limitations, and propose ways to overcome these challenges.
➢ Measure Success: Suggest metrics or benchmarks to assess whether the solution has been successful. These
might include financial performance indicators, customer satisfaction, or employee engagement scores.
❖ Draw Broader Insights
➢ Relate to Larger Themes: Finally, consider what broader lessons can be learned from the case. How does this
case contribute to understanding similar issues in other organizations or industries? What can it tell us about
management practices, strategic decisions, or social dynamics?
➢ Reflect on Limitations: Acknowledge any limitations in the case or your analysis. Were there missing data, or
were certain aspects of the situation not fully explored?

Pilot Survey
A pilot survey is a small-scale preliminary study conducted before the main survey to test the feasibility, design,
methodology, and effectiveness of the research instruments, such as questionnaires or interviews. The purpose of a
pilot survey is to identify potential issues and make necessary adjustments before conducting the full-scale research.
Key Features of a Pilot Survey
➢ Small Sample Size: Pilot surveys are conducted with a limited number of participants, often representing the
target population. The small sample allows researchers to identify issues without expending too many resources.
➢ Testing Research Instruments: The primary goal is to test the survey questionnaire, interview guide, or other
data collection tools for clarity, relevance, and effectiveness. Researchers check if the questions are clear, if they
yield the required information, and if the respondents understand them properly.
➢ Identifying Flaws: Researchers look for issues such as ambiguous or confusing questions, response bias, and
technical difficulties (e.g., problems with online survey platforms). Pilot surveys also identify logistical
problems, like the time required to complete the survey or how to access the target population.
➢ Preliminary Data Analysis: The data from a pilot survey are usually analyzed to identify potential challenges
in data collection and processing. Although the sample is small, it can give researchers insights into patterns or
trends. This helps researchers refine or modify their survey design.
➢ Testing Operational Procedures: It checks the operational side of the survey, including participant recruitment,
distribution methods, and response rates. Researchers can test how well they are able to administer the survey
under real-world conditions.
Why do researchers conduct a pilot survey?
➢ Validate Survey Design: To ensure that the survey questions are appropriately worded and relevant to the
research objectives.
➢ Improve Clarity: To identify ambiguous questions or questions that could be misinterpreted by respondents.
➢ Check Timing: To ensure that the survey doesn’t take too long to complete, which could result in participant
fatigue or low response rates.
➢ Refine Sampling Methods: To test whether the proposed sampling method is effective in reaching the intended
audience.
➢ Assess Data Collection Tools: To test whether the survey tools (online platforms, paper forms, etc.) are
functioning properly and collecting data as intended.
➢ Minimize Errors: To catch potential sources of error early, such as technical problems with online surveys,
incorrect question formats, or incomplete instructions.
➢ Adjust Survey Flow: To determine whether the order of questions is logical and easy to follow for participants.

Steps to Conduct a Pilot Survey


➢ Design the Survey Instrument: Create a draft version of the survey instrument (questionnaire, interview guide,
etc.).
➢ Select a Pilot Sample: Choose a small, representative group of participants from your target population. Ensure
the sample is diverse enough to reflect the range of participants in the full survey.
➢ Conduct the Pilot Survey: Administer the survey to the selected participants, following the same procedures
planned for the full survey.
➢ Collect Feedback: After participants complete the survey, ask for feedback on the clarity, length, and difficulty
of the survey. Find out if there were any unclear questions or technical problems.
➢ Analyze the Results: Analyze the data to check for consistency, accuracy, and patterns in responses. Identify
any trends that may indicate problems with the survey questions or methodology.
➢ Revise the Survey: Based on the feedback and data analysis, revise the survey to correct any identified issues.
This might involve rewording questions, changing the survey structure, or addressing any logistical challenges.
➢ Finalize the Instrument: After revisions, prepare the final version of the survey for the full-scale study.

Data Editing
Editing of data is the process of reviewing and adjusting raw data to ensure that it is accurate, complete, consistent,
and ready for analysis. The editing process ensures that the data is clean, reliable, and suitable for drawing
meaningful conclusions.
Objectives of Data Editing
➢ Ensure Accuracy: Correcting data entry errors, such as typos, misreported figures, or coding mistakes.
➢ Improve Completeness: Filling in missing data where possible or addressing incomplete responses.
➢ Enhance Consistency: Ensuring that the data follows the same format throughout, such as having consistent
units (e.g., all weights in kilograms, dates in a standard format).
➢ Eliminate Irrelevant Data: Removing data that is not relevant to the research objectives or analysis (e.g.,
outliers or unintentional duplications).
➢ Facilitate Analysis: Making the data suitable for statistical analysis or other methods of examination, ensuring
it meets the required standards for the chosen techniques.

Steps in Data Editing


➢ Review Data Collection Methods: Ensure that the data collection method (e.g., surveys, questionnaires,
observations) has not introduced systematic errors. Identify any data collection issues that could lead to
inaccuracies.
➢ Check for Missing Data: Identify and address any missing responses or incomplete records. Impute missing
values using statistical techniques. Exclude incomplete records, depending on the level of missing data.
➢ Correct Errors: Check for errors such as misreported figures, typing mistakes, or illogical responses (e.g., a
person’s age listed as 150 years). These errors should be corrected based on context or supplementary data.
➢ Standardize Data: Ensure that all data follows the same format (e.g., using consistent units, date formats, etc.).
This ensures comparability across the dataset.
➢ Check for Consistency: Identify any inconsistencies within the data. For example, in a demographic survey, if
a respondent indicates that they are unemployed but later mentions they work 40 hours per week, this
inconsistency needs to be resolved.
➢ Remove Duplicate Records: Ensure there are no unintentional duplicate entries in the data.
➢ Handle Outliers: Identify outliers (extreme values that deviate significantly from other observations) and decide
whether to retain, correct, or remove them. This decision depends on whether the outliers are due to errors or
represent valid but extreme cases.
➢ Document Changes: Keep a record of all changes made during the editing process. This helps maintain
transparency and allows other researchers or data users to understand what adjustments were made.

Common Data Editing Techniques


➢ Imputation: Replacing missing or incomplete data with estimated values, typically based on patterns in the
existing data (e.g., using the mean or median to fill in missing numeric data).
➢ Outlier Detection: Using statistical methods to identify and manage outliers that may skew the results. Outliers
are either corrected if they are errors or retained if they provide meaningful insights.
➢ Cross-Validation: Comparing the edited data against other related datasets to ensure consistency and accuracy.
For example, comparing sales data across different time periods to ensure there are no discrepancies.
➢ Range Checks: Ensuring that all values fall within a predefined range. For example, ages should be between 0
and 120, and sales figures should not be negative.
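As a rough sketch of a few of these editing operations (a range check, median imputation, and duplicate removal), assuming Python with pandas; the column names and limits are hypothetical:

```python
import pandas as pd

df = pd.DataFrame({
    "age":   [25, 34, 150, None, 41, 41],   # 150 is an impossible value, None is missing
    "hours": [40, 38, 42, 36, None, None],
})

# Range check: flag ages outside a plausible 0-120 range and treat them as missing.
df.loc[~df["age"].between(0, 120), "age"] = None

# Imputation: fill missing numeric values with the column median.
df["age"] = df["age"].fillna(df["age"].median())
df["hours"] = df["hours"].fillna(df["hours"].median())

# Remove duplicate records.
df = df.drop_duplicates()
print(df)
```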

Coding of Data
Coding of data refers to the process of converting categorical or non-numerical data (such as responses from surveys,
interviews, or observations) into numerical values to facilitate statistical analysis. This is especially useful for
variables that are not inherently numerical, like gender, occupation, or educational level, so that the data can be
analyzed using statistical techniques like regression, correlation, or factor analysis.

Steps in Coding Quantitative Data


➢ Identify Categorical Variables: Review your dataset to identify which variables are categorical and need to be
coded into numerical form (e.g., gender, education level, occupation).
➢ Assign Numerical Codes: For each categorical variable, assign a distinct numerical code to each category or
label. Ensure there are no overlaps or ambiguities in the codes.
Example: In coding education level, you might assign:
High school = 1
Undergraduate degree = 2
Master’s degree = 3
PhD = 4
➢ Create a Codebook: Maintain a document or codebook that defines the meaning of each code. This ensures that
you or anyone else analyzing the data knows how to interpret the codes.
Example: A section of your codebook might look like:
Gender: Male = 1, Female = 2, Non-binary = 3
Marital Status: Single = 1, Married = 2, Divorced = 3, Widowed = 4
➢ Input Codes into Data Analysis Software: Input the coded values into statistical analysis software like SPSS,
R, Stata, or Excel for analysis. Make sure to double-check that the codes are entered correctly.
➢ Test and Clean Data: Once the data is coded, test it for inconsistencies or errors, such as incorrect coding or
missing values, and clean the dataset before proceeding with analysis.
➢ Run Descriptive Statistics: Run simple descriptive statistics (e.g., frequency counts) to ensure the coding is
correct and that all categories are represented as expected.
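Putting the steps above together, a minimal sketch (assuming Python with pandas; the codebook values mirror the examples above) of assigning numerical codes and checking them with frequency counts:

```python
import pandas as pd

responses = pd.DataFrame({
    "gender":    ["Male", "Female", "Female", "Non-binary", "Male"],
    "education": ["High school", "Master's degree", "Undergraduate degree",
                  "PhD", "Undergraduate degree"],
})

# Codebook: the meaning of each numerical code (kept alongside the data).
codebook = {
    "gender":    {"Male": 1, "Female": 2, "Non-binary": 3},
    "education": {"High school": 1, "Undergraduate degree": 2,
                  "Master's degree": 3, "PhD": 4},
}

# Assign a numerical code to each category of each categorical variable.
for column, codes in codebook.items():
    responses[column + "_code"] = responses[column].map(codes)

# Descriptive check: frequency counts confirm every category was coded as expected.
print(responses["education_code"].value_counts().sort_index())
```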
