Business Stats

6.

The Karl Pearson Coefficient of Correlation, also known as Pearson’s correlation coefficient (r), is a
statistical measure that quantifies the linear relationship between two variables. It indicates both the
strength and direction of this relationship, ranging from -1 to 1.
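As a minimal sketch, the coefficient can be computed directly from its definition (covariance divided by the product of the standard deviations); the function name and sample data below are illustrative:

```python
import math

def pearson_r(x, y):
    """Karl Pearson coefficient: covariance over the product of std. deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linear data gives the maximum value r = 1
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
```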

7.

Spearman’s Rank Correlation is a non-parametric measure of the strength and direction of association between two ranked variables. Unlike Pearson’s correlation, which measures the linear relationship between variables, Spearman’s Rank Correlation assesses how well the relationship between two variables can be described using a monotonic function, without assuming the data is normally distributed or linear.
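A small pure-Python sketch: rank each variable, then apply the familiar formula ρ = 1 − 6Σd² / (n(n² − 1)), which is exact when the ranks contain no ties (names and data below are illustrative):

```python
def ranks(values):
    """Average (1-based) ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))  # exact only when there are no ties

# A monotonic but non-linear relationship still gives rho = 1
print(spearman_rho([1, 2, 3, 4, 5], [1, 4, 9, 16, 25]))  # 1.0
```

Note how the squared relationship, which would weaken Pearson’s r, is a perfect monotonic association for Spearman.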

8.

Simple Linear Regression is a statistical method used to model and analyse the relationship between
two variables: one independent variable (X) and one dependent variable (Y). The goal is to establish
a linear equation that predicts the dependent variable based on the independent variable.
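The fitted line Y = a + bX can be obtained from the least-squares formulas; a minimal sketch with made-up data:

```python
def fit_line(x, y):
    """Least-squares slope b and intercept a for Y = a + bX."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    a0 = my - b * mx
    return a0, b

# Data generated from y = 2x + 1, so the fit recovers it exactly
a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(a, b)  # 1.0 2.0
```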

9.

Multiple linear regression is a statistical technique used to model the relationship between one
dependent variable and two or more independent variables. The goal is to understand how changes
in the independent variables affect the dependent variable.

By analysing how each independent variable impacts the dependent variable, researchers can make
informed decisions and predictions.
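One standard way to fit the model is to solve the normal equations (XᵀX)b = Xᵀy. The sketch below does this with a small hand-rolled Gaussian-elimination solver rather than a library; all names and data are illustrative:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_multiple(X, y):
    """Solve the normal equations; a leading column of ones gives the intercept."""
    rows = [[1.0] + list(r) for r in X]
    k = len(rows[0])
    XtX = [[sum(row[a] * row[b] for row in rows) for b in range(k)] for a in range(k)]
    Xty = [sum(rows[i][a] * y[i] for i in range(len(rows))) for a in range(k)]
    return solve(XtX, Xty)

# Noise-free data from y = 1 + 2*x1 + 3*x2, so the coefficients are recovered
X = [(1, 1), (2, 1), (3, 2), (4, 3), (5, 5)]
y = [1 + 2 * a + 3 * b for a, b in X]
print([round(c, 6) for c in fit_multiple(X, y)])  # [1.0, 2.0, 3.0]
```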

10.

Multiple linear correlation refers to the statistical relationship between one dependent variable and
two or more independent variables. It quantifies how well the independent variables collectively
explain the variability of the dependent variable.

Overall, multiple linear correlation provides insights into how multiple factors interact and contribute
to the behaviour of a dependent variable.
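For two independent variables, the multiple correlation coefficient R can be computed from the three pairwise simple correlations via R² = (r²y1 + r²y2 − 2·ry1·ry2·r12) / (1 − r²12); a sketch with illustrative data:

```python
import math

def r(x, y):
    """Simple Pearson correlation between two variables."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

def multiple_R(y, x1, x2):
    """Multiple correlation R_{y.12} from the pairwise simple correlations."""
    ry1, ry2, r12 = r(y, x1), r(y, x2), r(x1, x2)
    R2 = (ry1 ** 2 + ry2 ** 2 - 2 * ry1 * ry2 * r12) / (1 - r12 ** 2)
    return math.sqrt(R2)

x1 = [1, 2, 3, 4, 5]
x2 = [2, 1, 4, 3, 5]
y = [a + b for a, b in zip(x1, x2)]  # y is fully determined by x1 and x2
print(round(multiple_R(y, x1, x2), 6))  # 1.0
```

Because y here is an exact linear combination of the predictors, they collectively explain all of its variability and R = 1.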

12.

Normal distribution, often referred to as a Gaussian distribution, is a continuous probability distribution that is symmetric about the mean. It is one of the most important distributions in statistics due to its key properties and the central limit theorem.

Formula:

The probability density function (PDF) of a normal distribution is given by:

f(x) = (1 / (σ√(2π))) · e^(−(x − µ)² / (2σ²))

where:

 µ is the mean of the distribution,

 σ is the standard deviation,

 e is the base of the natural logarithm,

 π is a constant (approximately 3.14159).

Overall, normal distribution plays a critical role in statistical analysis and modelling because of its
unique properties and the prevalence of normality in real-world data.
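The PDF above translates directly into code; a minimal sketch (function name is illustrative):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Probability density of N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

# The peak of the standard normal sits at the mean, with height 1/sqrt(2*pi)
print(round(normal_pdf(0), 5))  # 0.39894
```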

15.

Testing the Null Hypothesis - Procedure

1. State the hypotheses.

2. Choose significance level (α).

3. Select appropriate test.

4. Collect data.

5. Calculate test statistic.

6. Determine p-value.

7. Make a decision (reject or fail to reject H0).
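The steps above can be sketched as a two-tailed one-sample z-test, assuming the population standard deviation is known (the sample data, µ₀, and σ below are made up):

```python
import math

def z_test(sample, mu0, sigma):
    """Two-tailed one-sample z-test; returns (z statistic, p-value)."""
    n = len(sample)
    xbar = sum(sample) / n
    z = (xbar - mu0) / (sigma / math.sqrt(n))          # step 5: test statistic
    cdf = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF
    p = 2 * (1 - cdf)                                  # step 6: two-tailed p-value
    return z, p

z, p = z_test([5.1, 4.9, 5.2, 5.0, 4.8, 5.3], mu0=5.0, sigma=0.2)
alpha = 0.05
print("reject H0" if p < alpha else "fail to reject H0")  # step 7: decision
```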

17.

1. Null Hypothesis (H0):

- The null hypothesis is a statement that indicates there is no effect, no difference, or no relationship between variables. It serves as the default assumption that any observed differences are due to sampling variability or random chance.

2. Alternative Hypothesis (H1 or Ha):

- The alternative hypothesis is a statement that contradicts the null hypothesis. It suggests that
there is an effect, a difference, or a relationship between variables. The goal of hypothesis testing is
to provide evidence to support the alternative hypothesis.

Notations

Null Hypothesis (H0): Assumes no effect or no difference.

Alternative Hypothesis (H1): Assumes an effect or a difference.

18.

Level of Significance
Definition: The level of significance, often denoted by α (alpha), is a threshold set by the
researcher to determine whether to reject the null hypothesis in a statistical hypothesis test. It
represents the probability of making a Type I error, which occurs when the null hypothesis is true but
is incorrectly rejected.

Types of Tests

- Left-tailed

- Right-tailed

- Two-tailed

- Illustrated with diagrams indicating rejection regions.
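The rejection regions correspond to critical values of the test statistic. As a sketch, the standard normal cutoffs for each tail type can be found numerically by bisecting the CDF (illustration only; tables or a statistics library would normally be used):

```python
import math

def critical_z(alpha, tail):
    """Standard normal critical value for a left-, right-, or two-tailed test."""
    def cdf(z):
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))
    target = {"left": alpha, "right": 1 - alpha, "two": 1 - alpha / 2}[tail]
    lo, hi = -10.0, 10.0
    for _ in range(100):  # bisection on the monotone CDF
        mid = (lo + hi) / 2
        if cdf(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# The familiar cutoffs at alpha = 0.05
print(round(critical_z(0.05, "right"), 2))  # 1.64
print(round(critical_z(0.05, "two"), 2))    # 1.96
```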

19.

ANOVA (Analysis of Variance) is a statistical method used to test differences between two or more
group means. It helps determine whether any of those differences are statistically significant.

Steps in Conducting ANOVA:

1. State the Hypotheses.

2. Choose a Significance Level (α) (commonly 0.05).

3. Calculate the Group Means and Overall Mean.

4. Calculate the F-statistic:

o Compute the Between-Group Variance and Within-Group Variance.

5. Find the p-value associated with the F-statistic.

6. Make a Decision: Compare the p-value to α to decide whether to reject or fail to
reject the null hypothesis.
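Steps 3 and 4 above can be sketched as a one-way ANOVA F-statistic in pure Python (the three groups of data are made up):

```python
def one_way_anova(groups):
    """F-statistic: between-group variance over within-group variance."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n  # overall mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)  # df between = k - 1
    ms_within = ss_within / (n - k)    # df within = n - k
    return ms_between / ms_within

# Clearly separated group means give a large F, suggesting a real difference
groups = [[4, 5, 6], [7, 8, 9], [10, 11, 12]]
print(one_way_anova(groups))  # 27.0
```

A large F means the variation between group means dwarfs the variation within groups; the p-value would then be read from the F distribution with (k − 1, n − k) degrees of freedom.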

Applications:

ANOVA is widely used in various fields, such as agriculture (to compare crop yields), medicine (to
compare treatment effects), and social sciences (to compare responses across groups).

Summary

ANOVA is a powerful tool for comparing multiple group means, helping researchers identify
significant differences in their data.
20.

Statistics plays a crucial role in managerial applications and the decision-making process by providing
tools and techniques to analyse data, make predictions, and improve outcomes. Here are some key
ways statistics is utilised in management:

I. Data analysis
II. Forecasting
III. Quality control
IV. Market research
V. Financial analysis
VI. Strategic planning

Overall, statistics provides managers with a framework for making evidence-based decisions,
enhancing operational efficiency, and driving organizational success. By leveraging statistical
tools, businesses can gain insights from data, leading to better strategies and improved
performance.

21.

Skewness is a statistical measure that describes the symmetry of a probability distribution. It indicates whether the data are skewed to the left (negative skew) or to the right (positive skew) relative to the mean.
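A common measure is the Fisher–Pearson coefficient, the third standardized moment; a minimal sketch using the population form (the data are illustrative):

```python
import math

def skewness(data):
    """Fisher-Pearson coefficient: third standardized moment (population form)."""
    n = len(data)
    m = sum(data) / n
    s = math.sqrt(sum((x - m) ** 2 for x in data) / n)
    return sum(((x - m) / s) ** 3 for x in data) / n

# The longer tail is on the left, so the coefficient is negative
print(round(skewness([1, 2, 2, 3, 3, 3, 4, 4, 4, 4]), 3))  # -0.6
```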

22.

Dispersion

Dispersion is the spread, or scatter, of a set of data values around a central value, often the mean or median. It gives insight into how variable the data are within a data set: measures of dispersion let statisticians see how tightly or loosely the data points cluster around the measure of central tendency.
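Three common measures of dispersion (range, variance, standard deviation) in one sketch, using the population variance and illustrative data:

```python
import math

def dispersion(data):
    """Range, population variance, and standard deviation of a data set."""
    n = len(data)
    m = sum(data) / n
    var = sum((x - m) ** 2 for x in data) / n  # population variance
    return {"range": max(data) - min(data), "variance": var, "std_dev": math.sqrt(var)}

print(dispersion([2, 4, 4, 4, 5, 5, 7, 9]))
# {'range': 7, 'variance': 4.0, 'std_dev': 2.0}
```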
24.

Correlation is a statistical measure that describes the strength and direction of a relationship
between two variables. It quantifies how closely two variables move in relation to each other.
Correlation is often expressed using a correlation coefficient, which ranges from -1 to 1.

AND

Regression analysis examines the relationship between two or more variables in order to model and
predict one variable based on the values of the others.
