Psych Stats - 2nd Sem
Descriptive statistics
- Gives the reader an overview of what the data is all about

Population
- The respondents you want to gather data from

Sample
- The portion of the population that will represent the whole population, used in data gathering and analysis

Inferential Statistics
- Used to draw conclusions from the gathered data and survey

Methods of Sampling
● Random Selection
  - The gold standard; often impossible due to needing an abundance of resources, time, and connections
● Non-Random Selection
  + Purposive Sampling
    - Creating criteria and finding respondents who will fit those particular criteria
  + Convenience Sampling
    - Gathering a sample from whoever is available and still fits the criteria created
● Haphazard Selection
  - Selecting respondents without a system, plan, order, or organization
  - “Careless selection”

Population parameters
● mu (μ) - population mean
● σ² - population variance

Sample Statistics
● M - sample mean
● SD² - sample variance
● SD - sample standard deviation

Probability
- Expected relative frequency of an outcome
- The proportion of successful outcomes to all outcomes

Outcome
- The term used in discussing probability for the result of an experiment (or almost any event, such as a coin coming up heads)

Expected relative frequency
- The number of successful outcomes divided by the number of total outcomes you would expect to get if you repeated the experiment many times

Steps for finding a probability
1) Determine the number of possible successful outcomes
2) Determine the number of all possible outcomes
3) Divide the number of possible successful outcomes by the number of all possible outcomes

Probability expressed as symbols
● Probability is symbolized by p
● p = .5 or ½ or 50%

Z scores and the Normal Curve
Z score
- An ordinary score (raw score) transformed so that it better describes the score’s location in a distribution
- The number of standard deviations that a score is above (or below, if it is negative) the mean of its distribution

Formula to change a raw score into a Z score
1) Figure the deviation score: subtract the mean from the raw score
2) Figure the Z score: divide the deviation score by the standard deviation
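The two steps above can be sketched in Python; the raw score, mean, and SD below are made-up example values, not from the notes:

```python
# Made-up example: a raw score of 85 from a distribution
# with M = 70 and SD = 10 (illustrative values, not from the notes).
raw_score = 85
mean = 70
sd = 10

# Step 1: figure the deviation score (raw score minus the mean).
deviation = raw_score - mean

# Step 2: figure the Z score (deviation score divided by the SD).
z = deviation / sd

print(z)  # 1.5 -> the score sits 1.5 standard deviations above the mean
```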
Symbols:
X = raw score
M = mean
SD = standard deviation of X

Z score into raw score
1) Figure the deviation score: multiply the Z score by the standard deviation
2) Figure the raw score: add the mean to the deviation score

Note:
● The mean of any distribution of Z scores is always 0
● The standard deviation of any distribution of Z scores is always 1
● A Z score is sometimes called a standard score
  ○ Z scores have standard values for the mean and the standard deviation
  ○ Z scores provide a kind of standard scale of measurement for any variable

Normal Curve
- A frequency distribution that follows a normal curve
- A specific, mathematically defined, bell-shaped frequency distribution that is symmetrical and unimodal; distributions observed in nature and in research commonly approximate it

Normal Curve table and Z scores
- The normal curve table shows percentages of scores associated with the normal curve
- It usually includes percentages of scores between the mean and various numbers of standard deviations above the mean, and percentages of scores more positive than various numbers of standard deviations above the mean

Confidence Interval
- The range of scores (that is, the scores between the upper and lower value) that is likely to include the true population mean; the range of possible population means from which it is not highly unlikely that you could have obtained your sample mean

Confidence Limit
- The upper or lower value of a confidence interval

Decision Errors
- Incorrect conclusions in hypothesis testing in relation to the real (unknown) situation, such as deciding the null hypothesis is false when it is really true
- It is not about making mistakes in calculations or even about using the wrong procedures
- Situations in which the right procedures lead to wrong decisions

● Type I error (α, “alpha”)
  - Rejecting the null hypothesis when in fact it is true
  - Getting statistically significant results when in fact the research hypothesis is not true
  - The lower the alpha, the smaller the chance of making a Type I error
  - Researchers who do not want to take a lot of risk set alpha lower than .05, such as p < .001
● Type II error (β, “beta”)
  - The probability of making this error is called “beta”
  - Failing to reject the null hypothesis when in fact it is false
  - Failing to get a statistically significant result when in fact the research hypothesis is true

The relationship between Type I and Type II errors
- The insurance policy against Type I error (setting a strict significance level, such as .001) has the cost of increasing the chance of making a Type II error
- The trade-off between these two conflicting concerns is usually worked out by compromise; thus the standard 5% and 1% significance levels

MidTerm

T test
- A hypothesis-testing procedure in which the population variance is unknown; it compares t scores from a sample to a comparison distribution called a t distribution

T test for a single sample
- A hypothesis-testing procedure in which a sample mean is compared to a known population mean and the population variance is unknown

● Biased estimate
  - An estimate of a population parameter that is likely systematically to overestimate or underestimate the true value of the population parameter. For example, SD² would be a biased estimate of the population variance (it would systematically underestimate it)
● Unbiased estimate of the population variance (S²)
  - Corrects the biased estimate of the population variance so that it is equally likely to overestimate or underestimate the true population variance; the correction is to divide the sum of squared deviations by the sample size minus 1, instead of directly by the sample size
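The difference between the biased estimate (SD², dividing by N) and the unbiased estimate (S², dividing by N − 1) can be sketched in Python; the scores are made-up example data, not from the notes:

```python
# Made-up sample of five scores (illustrative, not from the notes).
scores = [2, 4, 4, 6, 9]

n = len(scores)
mean = sum(scores) / n
sum_squared_deviations = sum((x - mean) ** 2 for x in scores)

# Biased estimate (SD²): divide by N; this systematically
# underestimates the population variance.
sd_squared = sum_squared_deviations / n

# Unbiased estimate (S²): divide by N - 1 instead.
s_squared = sum_squared_deviations / (n - 1)

print(sd_squared, s_squared)  # 5.6 7.0
```

Note that S² always comes out a bit larger than SD², since the same sum of squared deviations is divided by a smaller number.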
Degrees of freedom
- The number of scores free to vary when estimating a population parameter
- Usually part of the formula for making that estimate
- df = N - 1
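As a closing sketch, the pieces from the MidTerm section (S² and df = N − 1) can be combined into a single-sample t computation. The notes do not spell out the t formula itself; the standard formula t = (M − μ) / √(S² / N) is assumed here, and the scores and population mean are made-up example values:

```python
import math

# Made-up sample and known population mean (illustrative values).
scores = [12, 15, 11, 14, 13]
pop_mean = 10  # mu, the known population mean

n = len(scores)
df = n - 1                                           # degrees of freedom: N - 1
m = sum(scores) / n                                  # sample mean (M)
s_squared = sum((x - m) ** 2 for x in scores) / df   # unbiased estimate S²

# Standard single-sample t formula (assumed, not spelled out in the notes):
# t = (M - mu) / sqrt(S² / N)
t = (m - pop_mean) / math.sqrt(s_squared / n)

print(df, round(t, 2))  # 4 4.24
```

The resulting t score would then be compared against a t distribution with df = N − 1 to decide whether to reject the null hypothesis.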