pr2 Notes

Lesson 12 covers quantitative data-collection techniques, defining quantitative data as numerical representations that can be categorized, counted, or measured. It discusses various methods for collecting this data, including observation, surveys, experiments, and content analysis, highlighting their applications and types. The lesson emphasizes the importance of structured data collection to derive reliable insights and statistical analysis.


Lesson 12: Quantitative Data-Collection Techniques

1.​ Definition of Quantitative Data


Quantitative data are data represented numerically, including anything that can be counted, measured, or given a numerical value. Quantitative data can be classified in different ways, including categorical data that contain categories or groups (like countries), discrete data that can be counted in whole numbers (like the number of students in a class), and continuous data that take a value in a range (like height or temperature). Quantitative data are typically analyzed with statistics.

Examples:

Examples of quantitative data are a spreadsheet of numbers or data collected from a survey question where an answer must be selected from a predetermined set of values.

- Height
- Temperature
- Income
- Sales figures
- Population size
- Test scores
- Weights

2.​ Techniques in Collecting Quantitative Data


2.1 Observation

Qualitative research often uses observational studies, which tend to involve observing people in certain situations or scenarios. However, observation can be a valuable quantitative data-collection method as well. The difference is that observation for quantitative data-collection purposes is wholly focused on numbers. Generally, quantitative observation studies involve an observer who watches people in a certain situation. However, observations may also be automated, especially online.
Examples:

- Observation may involve seeing how many products a shopper considers before making a selection. Or, a website analysis tool may measure how many minutes people spend evaluating furniture on a website before making a purchase.

-​ You are interested in the effects of caffeine consumption on heart rate. You ask
your participants to consume a caffeinated beverage of their choice, and you measure
their heart rate using a heart rate monitor before and after consumption. The heart rate
data is recorded and analyzed to determine if there is a significant increase in heart
rate after consuming the caffeinated beverage, or if there is a difference between
beverages, such as coffee, tea, and energy drinks.

-​ You are interested in the relationship between exercise and stress levels. You
ask your participants to rate their stress level on a scale of 1-10 (with 10 being the
highest) before and after engaging in a 30-minute high-intensity interval training
(HIIT) class, with the type of exercise and intensity level standardized across
participants. You record the stress level data and compare it using statistical analysis
to determine if there is a significant correlation between exercise and stress levels.

Observation data collection is valuable because it provides specific, highly reliable insight into certain human behaviors. The collected data may be more reliable when observed in real life. For example, people may not always know precisely how many products they considered before making a purchase. However, an observational study would show a researcher exactly how many products a person considered.

Types of quantitative observations

There are several types of quantitative observation. Here are some of the most common:
● Systematic Observation
- In its most structured form, systematic observation is a method of quantitative data collection that involves one or more observers observing events or behaviors as they occur and reliably recording their observations in terms of previously structured codes or numerical categories. In some observational studies the events or behaviors to be observed are first preserved on video or audio recordings. Observations may take place through one-way mirrors as well. Whatever the means of access to the observational data, systematic observation is sufficiently structured to allow testing of research hypotheses or, in the case of practice, to allow for assessment or evaluation.

Example:
Counting the number of times children laugh in a classroom.
● Case Study
- You may see and hear the subjects not through your own eyes and ears but by means of technological and electronic gadgets that recorded earlier events, images, or sounds.
- An in-depth research design that primarily uses a qualitative methodology but sometimes includes quantitative methodology.
- Used to examine an identifiable problem confirmed through research.
- Used to investigate an individual, group of people, organization, or event.
- Used mostly to answer "how" and "why" questions.

Types of Case Study

● Descriptive - used to describe a program, situation or phenomenon, and provide a clear picture of what is happening and who is involved. Sometimes referred to as an illustrative case study, it helps make the unfamiliar familiar, provide surrogate experience, avoid over-simplification of reality, and give the target audience for study findings a clear and common understanding of the “case” being studied. Arguably the most common type of case study used in criminal justice research, descriptive case studies are particularly valuable for documenting similarities and differences across multiple implementations of a program model or type.

● Explanatory - typically used to answer “how” and “why” questions about a particular phenomenon. As the name implies, the focus is on explanation rather than mere description, such as how and why a program’s expected outcomes were or were not attained. Explanatory case studies are particularly useful for discovering the reasons for a program’s success or failure. Yin (1990) has suggested that in an explanatory case study, competing explanations for the dynamics or events of interest should be posed and tested for best fit (p. 16). While case studies have been viewed by some as lacking generalizability and the level of rigor needed to produce trustworthy conclusions about how and why particular events occurred, a properly designed and executed explanatory case study can indeed produce highly credible and generalizable conclusions (Yin, 1990, p. 21).

● Exploratory - used to develop an initial understanding of the program or phenomenon of interest. The focus is on discovery for the purpose of obtaining an empirically based introduction to the structure, dynamics and context of the subject of interest. Exploratory case studies are particularly useful for developing hypotheses to be tested, research questions to be answered, and/or design options to be used in a more focused and in-depth subsequent study.

● Multiple Case Studies or Collective Case Study
- Learn about the complexity of an issue.
- Use purposeful sampling of cases tailored to the specific study.
- Provide a variety of contexts from different studies to determine a theme or finding from the evidence provided.

●​ Intrinsic
-​ Understand a specific case through a primarily descriptive exploration.
-​ Not have set expectations, but rather focus on gaining a better understanding of
the case.

●​ Instrumental
-​ Provide information for further understanding of a phenomenon, issue, or
problem.
-​ Aim to focus on the outcome or results instead of the research topic.

●​ Archival Research
-​ Archival Research is the investigation of hard data from files that organizations
or companies have. US Census data are archival data. Telephone bills that are in the
computers of the telephone company are another example. Fire departments keep
records of fires, chemical spills, accidents and so forth, all of which constitute archived
data. Most IQPs use archived data. Indeed, many IQPs require the use of several
kinds of archived data, some of which get transformed into different forms so that they
can be used together to help the authors defend an argument.

Example:
Analyzing US Census data or telephone records

2.2 Survey
Surveys are the most common method for quantitative data collection. These
basic questionnaires are a simple, effective method for collecting quantitative
data and generally have a high rate of completion. Additionally, surveys can be
deployed both online and offline to reach a broad spectrum of participants.
Types of Surveys

An additional consideration is how you want to administer your survey with respect to time, and whether you want to survey your population at a single point or over an extended period.

●​ Cross-sectional survey
A cross-sectional survey is given at just one point in time. Such a survey will tell you how things
were for the respondents at the particular time, such as who they would support
in the presidential election in the third week of August. The respondents'
answers might change before and after the survey, so you’re essentially getting
a snapshot of views and feelings at that moment.

For example, phone companies rely on advanced and innovative features to drive sales. Research by a phone manufacturer throughout the target demographic market validates the expected adoption rate and potential phone sales. In cross-sectional studies, researchers enroll men and women across regions and age ranges for research. If the results show that Asian women would not buy the phone because it is bulky, the mobile phone company can tweak the design to make it less bulky. They can also develop and market a smaller phone to appeal to a more inclusive group of women.

●​ Longitudinal Survey
The longitudinal survey can be defined as a research process that involves
repeated observations of the same groups of people, whether that’s employees,
a specific group of customers, or another audience group to see if their opinions
have changed since you first collected their data.

Unlike a cross-sectional survey, a longitudinal survey lets you make observations over an extended period of time. There are several types of longitudinal surveys you can conduct. Three of the main ones are trend, panel, and cohort surveys.

●​ Trend Survey
The main focus of a trend survey is, perhaps not surprisingly, trends.
Researchers conducting trend surveys are interested in how people’s
inclinations change over time.

For example, if you want to know how Americans’ views on healthcare have
changed over the past 10 years, you would ask the same questions to people at
different points in time over a 10-year period. You wouldn’t have to survey the
same people each time because as a researcher you’re more interested in the
generalized trend over time than who is being sampled each time. What is
critical is asking the same question worded the same way, to capture changes in
people’s views.
● Panel Survey
Panel surveys collect data from the same sample of respondents at multiple points in time. They involve a variety of question types, including open-ended questions, multiple-choice questions, single-choice or yes/no questions, grids and scales. Surveys are highly effective for collecting quantitative data because they are efficient, often inexpensive to conduct, versatile, flexible, and offer reliable, measurable data from large groups.

For example, a panel survey may be sent to a targeted group or general population of online panelists. These surveys generally have a high rate of response because panelists have opted into consumer panels for the sole purpose of taking part in research. And these surveys shouldn't take much time. Offline surveys are often more time-intensive but can be valuable for collecting data from targeted individuals or customers. Offline surveys may be in the form of a mailed questionnaire or a survey conducted over the phone.

●​ Cohort Survey
In a cohort survey, you identify a category of people of interest, then randomly select individuals from within that category to survey over time. It is important to note that you don’t have to pick the same people each year; however, the people you do pick must fall into the same categories that you have previously selected. For instance, in 1951 the British Doctors Study began by studying people who were exposed to smoking to understand whether it had an impact on the likelihood of lung cancer. They matched people who did smoke to non-smokers, and planned to continue tracking those two groups until 2001. However, it only took until 1956 for them to find convincing evidence that smoking increased cancer rates.

An example of a cohort study is comparing the test scores of one group of people who underwent extensive tutoring and a special curriculum and those who did not receive any extra help. The group could be studied for years to assess whether their scores improve over time and at what rate.

●​ Questionnaire
A questionnaire is a paper containing a series of questions formulated for individual and independent answering by several respondents to obtain statistical information. Each question offers a number of probable answers from which the respondents, on the basis of their own judgment, will choose the best answer. Making up a questionnaire are factual and opinionated questions. Questions that elicit factual answers are formulated in a multiple-choice type, while those that ask about the respondents' views, attitudes, preferences, and other opinions are provided with sufficient space where the respondents can write their sentential answers.
Responses yielded by this instrument are given numerical forms (numbers, fractions, percentages) and categories that are subjected to statistical analysis. Questionnaires are good for collecting data from a large number of respondents situated in different places, because all you have to do is either hand the paper to the respondents or send it to them through postal or electronic mail. However, you are susceptible to wasting money, time, and effort, for you do not have any assurance that all, or even a large number, of the fully accomplished questionnaires will be returned.

●​ Interview
Survey as a data-gathering technique likewise uses the interview as a data-gathering instrument. Similar to a questionnaire, the interview has you ask a set of questions, only that, this time, you do it orally. Some, however, say that with the advent of modern technology, the oral interview is already a traditional way of interviewing, and the modern ways happen through the use of electronic devices such as mobile phones, telephones, smartphones, and other wireless devices.

● Order of Interview Questions

In asking interview questions, see to it that you do this sequentially; that is, let your questions follow a certain order, such as the following (Sarantakos 2013; Fraenkel 2012):

First set of questions - opening questions to establish friendly relationships, like questions about the place, the time, the physical appearance of the participant, or other non-verbal things not for audio recording.

Second set of questions - generative questions to encourage open-ended answers, like those that ask about the respondents’ inferences, views, or opinions about the interview topic.

Third set of questions - directive questions or close-ended questions to elicit specific answers, like those that are answerable with yes or no, with one type of object, or with a definite period of time.

Fourth set of questions - Ending questions that give the respondents the
chance to air their satisfaction, wants, likes, dislikes, reactions, or comments
about the interview. Included here are also closing statements to give the
respondents some ideas or clues on your next move or activity about the
interview.
Guidelines in Formulating Interview Questions
These tips on interview-question formulation, drawn from various books on research, will help you construct effective questions that elicit the desired data for your research study:

A. Use clear and simple language.
B. Avoid using acronyms, abbreviations, jargon, and highfalutin terms.
C. Let one question elicit only one answer; no double-barreled questions.
D. Express your point in exact, specific, bias-free, and gender-free language.
E. Give way to how your respondents want themselves to be identified.
F. Establish continuity or free flow of the respondents' thoughts by using appropriate follow-up questions (e.g., Could you give an example of it? Would you mind narrating what happened next?).
G. Ask questions in a sequential manner; determine which should be your opening, middle, or closing questions.

2.3 Experiment
An experiment is a scientific method of collecting data whereby you give the subjects some treatment or condition, then evaluate the results to find out the manner in which the treatment affected the subjects and to discover the reasons behind the effects of such treatment on the subjects. This quantitative data-gathering technique aims at manipulating or controlling conditions to show which condition or treatment has effects on the subjects and to determine how the condition or treatment operates or functions to yield a certain outcome.

The process of collecting data through experimentation involves selecting the subjects or participants, pre-testing the subjects prior to the application of any treatment or condition, and giving the subjects a post-test to determine the effects of the treatment on them. These components of the experiment operate in various ways. Consider the following combinations of the components that some research studies adopt:

A. Treatment → Evaluation
B. Pre-test → Treatment → Post-test
C. Pre-test → Multiple Treatments → Post-test
D. Pre-test → Treatment → Immediate Post-test → 6-month Post-test → 1-year Post-test

The three words treatment, intervention, and condition mean the same thing in relation to experimentation. They refer to the things given or applied to the subjects to yield certain effects or changes in the said subjects. For instance, subjects may be placed under varied learning conditions where they perform different communicative activities, such as table-topic conversation and the like. Dealing with or treating their communicative ability in two or more modes of communication is giving them multiple treatments. The basic elements of experiments, which are subjects, pre-test, treatment, and post-test, do not operate only for examining causal relationships but also for discovering, verifying, and illustrating theories, hypotheses, or facts (Edmonds 2013; Morgan 2014; Picardie 2014).
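
To make these designs concrete, here is a minimal Python sketch of design B (Pre-test → Treatment → Post-test): hypothetical pre- and post-test scores for the same ten subjects are compared with a paired t-test. The scores and variable names are illustrative assumptions, not data from the lesson.

```python
from scipy import stats

# Hypothetical pre-test and post-test scores for the same ten subjects
# (design B: Pre-test -> Treatment -> Post-test). Values are made up.
pre_test = [62, 70, 55, 68, 74, 60, 66, 71, 58, 64]
post_test = [70, 75, 61, 72, 80, 66, 73, 78, 63, 70]

# A paired t-test checks whether the mean change from pre-test to
# post-test is statistically significant.
t_stat, p_value = stats.ttest_rel(post_test, pre_test)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```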

2.4 Content Analysis

Content analysis is a research tool used to determine the presence of certain words, themes, or concepts within some given qualitative data (i.e., text). Using content analysis, researchers can quantify and analyze the presence, meanings, and relationships of such words, themes, or concepts. As an example, researchers can evaluate the language used within a news article to search for bias or partiality. Researchers can then make inferences about the messages within the texts, the writer(s), the audience, and even the culture and time surrounding the text.
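
As a minimal sketch of how content analysis turns text into counts, the snippet below tallies chosen target words in a short passage; the passage and the word list are invented for illustration.

```python
import re
from collections import Counter

# Hypothetical news-article excerpt and target words (assumptions only).
text = ("The new policy was criticized by residents. Critics argue the "
        "policy ignores local needs, while supporters praise the policy.")
targets = {"policy", "critics", "criticized", "supporters"}

# Tokenize into lowercase words, then count only the target terms.
words = re.findall(r"[a-z']+", text.lower())
counts = Counter(w for w in words if w in targets)
print(counts)  # Counter({'policy': 3, 'criticized': 1, 'critics': 1, 'supporters': 1})
```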

3.​ Measurement Scales for Quantitative Data


Measurement can be defined as the assignment of numerals to objects or events according to rules, and assigning such numbers results in different types of scales (S. S. Stevens, 1946). Thus, scaling is the development of systematic rules and meaningful units of measurement for quantifying empirical observations. The scales or levels of measurement describe the nature of the values assigned to the variables in a data set (Formplus Blog, 2020). This helps to define and group variables into different categories. The type of data being collected determines the kind of measurement scale to be used, and the scale determines the type of statistical techniques to be used for analysis. There are four scales of measurement, and a variable can be defined as being on one of the four scales. The four types of scales are:

3.1 Nominal Scale

A nominal scale is a measurement scale in which numbers serve only as “tags” or “labels” to identify or classify an object. This measurement normally deals only with non-numeric (qualitative) variables or with numbers that have no quantitative value.

3.2 Ordinal Scale

The ordinal scale is the second level of measurement; it reports the ranking and ordering of the data without actually establishing the degree of variation between them. “Ordinal” indicates “order”. Ordinal data are quantitative data which have naturally occurring orders but for which the differences between values are unknown. Such data can be named, grouped, and also ranked.
3.3 Interval Scale

The interval scale is a quantitative measurement scale where there is order, the differences between values are meaningful and equal, and the zero point is arbitrary. It measures variables that exist along a common scale at equal intervals. The measures used to calculate the distance between the variables are highly reliable.

The interval scale is the third level of measurement, after the nominal scale and the ordinal scale. Understanding the first two levels will help you differentiate interval measurements. A nominal scale is used when variables do not have a natural order or ranking. You can include numbered or unnumbered variables, but common survey examples include gender, location, political party, pets, and so on.

3.4 Ratio Scale

A ratio scale is a type of variable measurement scale which is quantitative in nature. It allows any researcher to compare the intervals or differences. The ratio scale is the fourth level of measurement and possesses a true zero point or character of origin; this is the unique feature of this scale. By contrast, a temperature of 0 degrees Celsius does not mean there is no temperature; that zero is just another value on the scale, which is why Celsius temperature is interval rather than ratio data.
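
As a quick illustrative sketch (the example variables are common textbook choices, not prescriptions), the four scales might be summarized like this:

```python
# Illustrative examples of the four measurement scales (assumed values).
scales = {
    "nominal":  ["male", "female"],         # labels only; no order
    "ordinal":  ["low", "medium", "high"],  # ordered; gaps between ranks unknown
    "interval": [10.0, 20.0, 30.0],         # equal gaps; zero is arbitrary (e.g., Celsius)
    "ratio":    [0.0, 35.5, 71.0],          # equal gaps plus a true zero (e.g., weight in kg)
}

for scale, example in scales.items():
    print(f"{scale:8s} -> {example}")
```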

Lesson 13: Quantitative Data Analysis

Definition of Quantitative Data Analysis

Quantitative data analysis is the process of making sense of numerical data through mathematical calculations and statistical tests. It helps you identify patterns, relationships, and trends to make better decisions.

Examples:
Quantitative data is anything that can be counted in definite units and numbers. So,
among many, many other things, some examples of quantitative data include:

• Revenue in dollars
• Weight in kilograms or pounds
• Age in months or years
• Distance in miles or kilometers

1.​ Back Concept


To understand the numbers standing for the information, you need to analyze them; that is, you have to examine or study them by taking them part by part or element by element to see the relationships between or among the parts, to discover the orderly or sequential existence of these parts, to search for meaningful patterns among the components, and to know the reasons behind the formation of such patterns.

2.​ Steps in Quantitative Data Analysis


Having identified the measurement scale or level of your data means you are now ready to analyze the data in this manner (Badke 2012; Letherby 2013; McBride 2013):

2.1 Preparing the Data


Keep in mind that no data organization means no sound data analysis. Hence, prepare the data for analysis by first carrying out preparatory sub-steps such as the following:

1. Coding System
To analyze data means to quantify or change the verbally expressed data into numerical information. Once the words, images, or pictures are converted into numbers, they become fit for any analytical procedures requiring knowledge of arithmetic and mathematical computations. It is not possible for you to do the mathematical operations of division, multiplication, or subtraction at the word level unless you code the verbal responses and observation categories.
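
For instance, a minimal coding-system sketch might map Likert-type verbal responses to assumed numeric codes so they can be averaged; the labels and codes here are illustrative assumptions.

```python
# Hypothetical coding scheme for Likert-type verbal responses.
codes = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
         "agree": 4, "strongly agree": 5}

responses = ["agree", "strongly agree", "neutral", "agree", "disagree"]

# Convert the verbal data into numerical information, then average it.
coded = [codes[r] for r in responses]
print(coded)                    # [4, 5, 3, 4, 2]
print(sum(coded) / len(coded))  # 3.6
```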

2.2 Analyzing the Data


Data analysis is the process of systematically applying statistical and/or logical techniques to describe and illustrate, condense and recap, and evaluate data. According to Shamoo and Resnik (2003), various analytic procedures "provide a way of drawing inductive inferences from data and distinguishing the signal (the phenomenon of interest) from the noise (statistical fluctuations) present in the data".

Examples:
The researcher will compile data such as age, gender, grade level, and mathematics grades. This raw data is then interpreted through specific statistical programs to show relationships between the different variables.

2.2.1 Descriptive Statistical Technique


Descriptive statistics refers to a branch of statistics that involves
summarizing, organizing, and presenting data meaningfully and
concisely. It focuses on describing and analyzing a dataset's main
features and characteristics without making any generalizations or
inferences to a larger population.

Examples:
Exam scores. Suppose you have the following scores of 20 students on an exam:
85, 90, 75, 92, 88, 79, 83, 95, 87, 91, 78, 86, 89, 94, 82, 80, 84, 93, 88, 81
1.​ Frequency Distribution

A frequency distribution is a representation, either in a graphical or tabular format, that displays the number of observations within a given interval. The frequency is how often a value occurs in an interval, while the distribution is the pattern of frequency of the variable.

Types of Frequency Distribution

● Ungrouped frequency distributions: The number of observations of each value of a variable.
● Grouped frequency distributions: The number of observations of each class interval of a variable's values.
● Relative frequency distributions: The proportion of observations of each value or class interval of a variable.
● Cumulative frequency distributions: The sum of the frequencies less than or equal to each value or class interval of a variable.
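
Using the 20 exam scores given earlier, a short sketch can build these distributions; the class width of 5 is an assumption chosen for illustration.

```python
from collections import Counter

scores = [85, 90, 75, 92, 88, 79, 83, 95, 87, 91,
          78, 86, 89, 94, 82, 80, 84, 93, 88, 81]

# Ungrouped: frequency of each individual value.
ungrouped = Counter(scores)
print(ungrouped[88])  # 2 -> the score 88 occurs twice

# Grouped: frequency of each class interval (assumed width of 5).
grouped = Counter((s // 5) * 5 for s in scores)  # intervals 75-79, 80-84, ...

# Relative and cumulative frequencies per class interval.
n = len(scores)
cumulative = 0
for lower in sorted(grouped):
    freq = grouped[lower]
    cumulative += freq
    print(f"{lower}-{lower + 4}: f={freq}, relative={freq / n:.2f}, cumulative={cumulative}")
```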

2.​ Measure of Central Tendency (Mean, Median, Mode)

The central tendency is a descriptive summary of a data set. Through a single value from the dataset, it reflects the center of the data distribution. It does not provide information regarding individual data points; rather, it gives a summary of the dataset. Generally, the central tendency of a dataset can be defined using some of the measures in statistics.

Measures of Central Tendency

The central tendency of a dataset can be found using three important measures, namely the mean, median, and mode.

Mean
The mean represents the average value of the dataset. It can be calculated as the sum of all the values in the dataset divided by the number of values. In general, it is considered to be the arithmetic mean. Some other measures of mean used to find the central tendency are as follows:

• Geometric Mean

• Harmonic Mean

• Weighted Mean
Median
The median is the middle value of the dataset when the values are arranged in ascending or descending order. When the dataset contains an even number of values, the median of the dataset can be found by taking the mean of the middle two values.

Examples:
Consider the given dataset with an odd number of observations arranged in descending order: 23, 21, 18, 16, 15, 13, 12, 10, 9, 7, 6, 5, 2. Here 12 is the middle or median number, with 6 values above it and 6 values below it.

Mode
The mode represents the most frequently occurring value in the dataset. Sometimes a dataset contains multiple modes, and in some cases it does not contain any mode at all.

Examples:
Consider the given dataset 5, 4, 2, 3, 2, 1, 5, 4, 5.
Since the mode represents the most common value, the most frequently repeated value in the given dataset is 5.
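
Python's standard library reproduces all three measures; below is a minimal sketch using the two datasets given above.

```python
import statistics

# Median example dataset from above (13 values in descending order).
data_median = [23, 21, 18, 16, 15, 13, 12, 10, 9, 7, 6, 5, 2]
print(statistics.median(data_median))  # 12, the middle value

# Mode example dataset from above.
data_mode = [5, 4, 2, 3, 2, 1, 5, 4, 5]
print(statistics.mode(data_mode))      # 5, the most frequent value

# Mean of the same dataset: sum of the values divided by their count.
print(statistics.mean(data_mode))      # 31 / 9 = 3.44...
```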

The measures of central tendency are selected based on the properties of the data.

● If you have a symmetrical distribution of continuous data, all three measures of central tendency hold good. But most of the time, the analyst uses the mean because it involves all the values in the distribution or dataset.
● If you have a skewed distribution, the best measure for finding the central tendency is the median.
● If you have categorical data, the mode is the best choice for finding the central tendency.

3. Standard Deviation
The standard deviation is a measure of the amount of variation of the values of a variable about its mean. A low standard deviation indicates that the values tend to be close to the mean (also called the expected value) of the set, while a high standard deviation indicates that the values are spread out over a wider range. The standard deviation is commonly used in determining what constitutes an outlier and what does not.
Examples:
Suppose that the entire population of interest is eight students in a particular class. For a finite set of numbers, the population standard deviation is found by taking the square root of the average of the squared deviations of the values from their mean.
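
A minimal sketch of that calculation, using an assumed set of eight scores (hypothetical values, not from the lesson):

```python
import statistics

# Hypothetical scores for the entire population of eight students.
scores = [2, 4, 4, 4, 5, 5, 7, 9]

# Population standard deviation: the square root of the average of the
# squared deviations of the values from their mean.
print(statistics.mean(scores))    # 5.0
print(statistics.pstdev(scores))  # 2.0
```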

2.2.2 Advanced Quantitative Analytical Methods


Quantitative analysis is the process of collecting and evaluating measurable and verifiable data, such as revenues, market share, and wages, in order to understand the behavior and performance of a business.

In the past, owners and company directors relied heavily on their experience and instinct when making decisions. With data technology, quantitative analysis is now considered a better approach to making informed decisions.

1.​ Correlations
In statistics, correlation or dependence is any statistical relationship, whether causal or not, between two random variables or bivariate data. Although in the broadest sense "correlation" may indicate any type of association, in statistics it usually refers to the degree to which a pair of variables are linearly related. Familiar examples of dependent phenomena include the correlation between the height of parents and their offspring, and the correlation between the price of goods and the quantity consumers are willing to purchase, as depicted in the so-called demand curve.

Examples:
An electrical utility may produce less power on a mild day based on the correlation
between electricity demand and weather. In this example, there is a causal
relationship, because extreme weather causes people to use more electricity for
heating or cooling.
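
A sketch of how such a correlation could be measured; the daily temperature and electricity-demand figures are invented for illustration.

```python
from scipy import stats

# Hypothetical daily mean temperature (deg C) and electricity demand (MWh).
temperature = [30, 33, 35, 28, 25, 38, 31, 27]
demand = [410, 440, 470, 390, 360, 500, 420, 380]

# Pearson's r measures the strength and direction of the linear relation.
r, p_value = stats.pearsonr(temperature, demand)
print(f"r = {r:.3f}, p = {p_value:.4f}")
```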

2. Analysis of Variance (ANOVA)

ANOVA is a statistical formula used to compare variances across the means (or averages) of different groups. A range of scenarios use it to determine if there is any difference between the means of different groups.

Examples:
To study the effectiveness of different diabetes medications, scientists design an experiment to explore the relationship between the type of medicine and the resulting blood sugar level. The sample population is a set of people.
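
A one-way ANOVA along these lines might be sketched as follows; the blood-sugar readings and group names are assumptions.

```python
from scipy import stats

# Hypothetical post-treatment blood sugar levels (mg/dL) per medication.
med_a = [120, 115, 130, 125, 118]
med_b = [140, 135, 128, 150, 145]
med_c = [122, 119, 125, 121, 127]

# One-way ANOVA: does at least one group mean differ from the others?
f_stat, p_value = stats.f_oneway(med_a, med_b, med_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```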
3. Regression
Regression is a statistical method used in finance, investing, and other disciplines that attempts to determine the strength and character of the relationship between a dependent variable and one or more independent variables.

Example of How Regression Analysis Is Used in Finance:

Regression is often used to determine how specific factors, such as the price of a commodity, interest rates, or particular industries or sectors, influence the price movement of an asset. The capital asset pricing model (CAPM) is based on regression, and it is utilized to project the expected returns for stocks and to generate costs of capital. A stock's return is regressed against the return of a broader index, such as the S&P 500, to generate a beta for the particular stock.
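
As a rough sketch of the beta regression described above (all return figures are invented), a simple linear regression recovers the slope, which plays the role of beta in this simplified setup:

```python
from scipy import stats

# Hypothetical monthly returns (%) for a stock and a broad market index.
index_returns = [1.2, -0.5, 2.1, 0.8, -1.3, 1.7, 0.3, -0.9]
stock_returns = [1.8, -0.9, 2.9, 1.1, -2.0, 2.4, 0.2, -1.5]

# Simple linear regression: stock return = alpha + beta * index return.
result = stats.linregress(index_returns, stock_returns)
print(f"beta = {result.slope:.2f}, alpha = {result.intercept:.2f}, r = {result.rvalue:.2f}")
```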

Lesson 14: Statistical Methods

Statistical Methods - mathematical formulas, models, and techniques that are used in the statistical analysis of raw research data. The application of statistical methods extracts information from research data and provides different ways to assess the robustness of research outputs.

1. Back Concept
What is statistics? Statistics is a term that pertains to your acts of collecting and analyzing numerical data. Doing statistics then means performing some arithmetic procedures like addition, division, subtraction, multiplication, and other mathematical calculations. Statistics demands much of your time and effort, for it is not merely a matter of collecting and examining data; it involves analyzing, planning, interpreting, and organizing data in relation to the design of the experimental method you chose. Statistical methods, then, are ways of gathering, analyzing, and interpreting variable or fluctuating numerical data.

2. Statistical Methodologies

2.1 Descriptive Statistics

Descriptive statistics refers to a branch of statistics that involves summarizing, organizing, and
presenting data meaningfully and concisely. It focuses on describing and
analyzing a dataset's main features and characteristics without making any
generalizations or inferences to a larger population. The primary goal of
descriptive statistics is to provide a clear and concise summary of the data,
enabling researchers or analysts to gain insights and understand patterns,
trends, and distributions within the dataset. This summary typically includes
measures such as central tendency (e.g., mean, median, mode), dispersion
(e.g., range, variance, standard deviation), and shape of the distribution (e.g.,
skewness, kurtosis).

Example:
A descriptive statistic could include the proportion of males and females within a sample or the percentages of different age groups within a population. Another common descriptive statistic is the humble average (which in statistics-talk is called the mean).

2.2 Inferential Statistics

Inferential statistics is a branch of statistics that makes use of various analytical tools to draw inferences about the population data from sample data. Apart from inferential statistics, descriptive statistics forms another branch of statistics. Inferential statistics helps to draw conclusions about the population, while descriptive statistics summarizes the features of the data set.

● There are two main types of inferential statistics - hypothesis testing and regression analysis. The samples chosen in inferential statistics need to be representative of the entire population.

Examples:
An example of inferential statistics is the calculation of a confidence interval. For instance, after sampling test scores from a group of students, a confidence interval might be used to estimate the range within which the average test score of all students in the population likely falls.
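
A sketch of that confidence-interval calculation, with a hypothetical sample of test scores:

```python
import statistics
from scipy import stats

# Hypothetical sample of test scores drawn from a larger student population.
scores = [72, 85, 78, 90, 66, 81, 88, 74, 79, 83]

mean = statistics.mean(scores)
sem = stats.sem(scores)  # standard error of the mean

# 95% confidence interval for the population mean, using the t-distribution
# with n - 1 degrees of freedom.
low, high = stats.t.interval(0.95, len(scores) - 1, loc=mean, scale=sem)
print(f"mean = {mean:.1f}, 95% CI = ({low:.1f}, {high:.1f})")
```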

3. Types of Statistical Data Analysis

Statistical data analysis is the process of collecting and analyzing large volumes of data in order to identify trends and develop valuable insights. In the professional world, statistical analysts take raw data and find correlations between variables to reveal patterns and trends to relevant stakeholders. Working in a wide range of different fields, statistical analysts are responsible for new scientific discoveries, improving the health of our communities, and guiding business decisions.

●​ Descriptive statistics
●​ Statistical inference
●​ Predictive analysis
●​ Prescriptive analysis
●​ Exploratory data analysis
●​ Causal analysis
●​ Mechanistic analysis

4. Statistical Methods of Bivariate Analysis

Bivariate analysis happens by means of the following methods (Argyrous 2011; Babbie 2013; Punch 2014):

4.1 Correlation or Covariation
Correlation describes the relationship between two variables and also tests the strength or significance of their linear relation. This is a relationship in which both variables get similarly high scores, or one gets a higher score while the other gets a lower one.

● Both covariance and correlation measure the relationship and the dependency between two variables.
● Covariance indicates the direction of the linear relationship between variables. Correlation measures both the strength and direction of the linear relationship between two variables.
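
The contrast between the two measures can be seen in a short numpy sketch; the paired observations are assumptions:

```python
import numpy as np

# Hypothetical paired observations of two variables.
x = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([1.5, 3.9, 6.2, 7.8, 10.1])

# Covariance: its sign gives the direction of the linear relationship,
# but its magnitude depends on the units of x and y.
print(np.cov(x, y)[0, 1])

# Correlation: unit-free and bounded in [-1, 1], so it conveys both
# strength and direction.
print(np.corrcoef(x, y)[0, 1])
```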
4.2 Cross Tabulation
Cross tabulation is a useful analysis tool commonly used to compare the results for one or more variables with the results of another variable. It is used with data on a nominal scale, where variables are named or labeled with no specific order. Crosstabs are basically data tables that present the results from a full group of survey respondents as well as subgroups. They allow you to examine relationships within the data that might not be obvious when simply looking at total survey responses.

Example:
Analyze the relation between two categorical variables, like age and purchase of electronic gadgets.
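
A crosstab of hypothetical age groups against gadget purchases, sketched with pandas (the categories and responses are invented):

```python
import pandas as pd

# Hypothetical survey responses (assumed categories and values).
df = pd.DataFrame({
    "age_group": ["18-25", "18-25", "26-40", "26-40",
                  "41-60", "41-60", "18-25", "26-40"],
    "bought_gadget": ["yes", "no", "yes", "yes", "no", "no", "yes", "no"],
})

# Cross tabulation: counts for each combination of the two variables.
print(pd.crosstab(df["age_group"], df["bought_gadget"]))
```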

5. Measure of Correlation
Correlation is a statistical measure that expresses the extent to which two variables are linearly related (meaning they change together at a constant rate). It’s a common tool for describing simple relationships without making a statement about cause and effect.
5.1 Correlation Coefficient

The correlation coefficient is a statistical measure of the strength of a linear relationship between two variables. Its values can range from -1 to 1. A correlation coefficient of -1 describes a perfect negative, or inverse, correlation, with values in one series rising as those in the other decline, and vice versa. A coefficient of 1 shows a perfect positive correlation, or a direct relationship. A correlation coefficient of 0 means there is no linear relationship.

Examples:
As high oil prices are favorable for crude producers, one might assume that the
correlation between oil prices and forward returns on oil stocks is strongly
positive. Calculating the correlation coefficient for these variables based on
market data reveals a moderate and inconsistent correlation over lengthy
periods.

5.2 Regression
Regression analysis is a reliable method of identifying which variables have an impact on a topic of interest. The process of performing a regression allows you to confidently determine which factors matter most, which factors can be ignored, and how these factors influence each other.

Example:
We can say that age and height can be described using a linear regression
model.
