Chapter 7: Statistical Applications in Traffic Engineering

Chapter objectives: By the end of this chapter the student will be able to do the following. (We spend 3 lecture periods on this chapter. We skip simple descriptive statistics because they were covered in CE361.)

Lecture 3 (Chap 7a file). After this lecture you will be able to:
- Apply the basic principles of statistics in section 7.1 to traffic data analyses
- Explain the characteristics of the normal distribution, read the normal distribution table correctly (section 7.2), and get the necessary values from Excel
- Explain the meaning of confidence bounds and determine the confidence interval of the mean (section 7.3)
- Determine sample sizes for traffic data collection (section 7.4)
- Explain how random variables are added (section 7.5)
- Explain the implication of the central limit theorem (section 7.5.1)
- Explain the characteristics of various probability distributions useful for traffic engineering studies and choose the correct distribution for a study (section 7.6)

Lecture 4a (Chap 7b file). After this lecture you will be able to:
- Explain the special characteristics of the Poisson distribution and its usefulness to traffic engineering studies (section 7.7)
- Conduct a hypothesis test correctly (two-sided, one-sided, paired test, F-test) (section 7.8)

Lecture 4b (Chap 7 file). After this lecture you will be able to:
- Conduct a chi-square test of hypotheses about an underlying distribution f(x) (section 7.8)
Introduction

Traffic engineering studies infer the characteristics of a population (typically infinite) by observing the characteristics of a finite sample.

Statistical analysis is used to address the following questions:
- How many samples are required?
- What confidence should I have in this estimate?
- What statistical distribution best describes the observed data mathematically?
- Has a traffic engineering design resulted in a change in the characteristics of the population (hypothesis tests)?
7.1 An Overview of Probability Functions and Statistics

Most of the topics in this section are reviews of what we have learned
in CEEn 361. (Review 7.1.1, 7.1.3 and 7.1.4 by yourself.)

7.1.2 Randomness and distributions describing randomness

The discussion of turning vehicles (p. 132, right column) is very instructive: model the system as simply (or as precisely) as possible (or necessary) for all practical purposes.

One new topic in 7.1.4 is a method to estimate the standard deviation from percentiles. It is based on the normal distribution: the probability of falling within one standard deviation of the mean is 68.3%. In the two-way analysis, 85% − 15% = 70%, which is close enough, so

s_est = (P85 − P15) / 2
Connection between the typical computational formulas and the probability-based formulas for mean and variance:

(Population) Mean: μ = Σ x·P(x). Variance: σ² = Σ (x − μ)²·P(x).

(Sample) Mean: x̄ = (1/N) Σ x_i. Variance: s² = Σ (x_i − x̄)² / (N − 1).

Data (population of N = 6 values, each with P(x) = 1/6 ≈ 0.17):

x      P(x)   x·P(x)   (x − μ)²·P(x)
3.50   0.17   0.58     0.01
4.25   0.17   0.71     0.05
2.70   0.17   0.45     0.17
2.70   0.17   0.45     0.17
3.65   0.17   0.61     0.00
5.50   0.17   0.92     0.53

Mean = 3.72; Sum of x·P(x) = 3.72; Sum of (x − μ)²·P(x) = 0.93 = Variance.
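As a check on the arithmetic, here is a minimal Python sketch of the table's computation. The six values and the equal weights come from the table above; everything else is illustrative.

```python
xs = [3.50, 4.25, 2.70, 2.70, 3.65, 5.50]   # the six population values
p = 1 / len(xs)                             # equal probability P(x) = 1/6 (~0.17)

mean = sum(x * p for x in xs)                        # mu = sum of x * P(x)
var = sum((x - mean)**2 * p for x in xs)             # sigma^2 = sum of (x - mu)^2 * P(x)
print(f"mean = {mean:.2f}, variance = {var:.2f}")    # mean = 3.72, variance = 0.93
```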
7.2 The normal distribution and its applications

Example: Mean = 55 mph, STD = 7 mph. What is the probability that the next value will be 65 mph or less?

z = (x − μ)/σ = (65 − 55)/7 = 1.43

(Discuss the 3 procedures at the top of the left column of p. 137.)

Standardizing converts the sample normal distribution to the standard normal distribution; then use the standard normal distribution table, Table 7-3. For z = 1.43, Table 7-3 gives 0.9236.

The most popular value is the 95% range, within ±1.96σ. (Excel functions: NORMSDIST and NORMSINV.)
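For readers without Table 7-3 or Excel handy, a small sketch of the same lookup, assuming scipy is available (norm.cdf plays the role of NORMSDIST, norm.ppf of NORMSINV):

```python
from scipy.stats import norm

mu, sigma = 55, 7                 # speed mean and SD in mph (from the slide)
z = round((65 - mu) / sigma, 2)   # z = (x - mu)/sigma, rounded as for the table: 1.43
print(norm.cdf(z))                # ~0.9236, P(X <= 65 mph), matching Table 7-3
print(norm.ppf(0.975))            # ~1.96, the 95% two-sided value (NORMSINV analog)
```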
7.3 Confidence bounds (of the mean)

Point estimate: A point estimate is a single-valued estimate of a population parameter made from a sample.

Interval estimate: An interval estimate is a probability statement that a population parameter is between two computed values (bounds).

[Figure: the true population mean μ, a point estimate x̄ from a sample, and the two-sided interval estimate x̄ − t_α·s/√n ≤ μ ≤ x̄ + t_α·s/√n]
7.3 (cont)

When n gets larger (n ≥ 30), t can be replaced by z. The probability of any random variable being within 1.96 standard deviations of the mean is 0.95, written as:

P[(μ − 1.96σ) ≤ y ≤ (μ + 1.96σ)] = 0.95

Obviously we do not know μ and σ. Hence we restate this in terms of the distribution of sample means:

P[(x̄ − 1.96E) ≤ μ ≤ (x̄ + 1.96E)] = 0.95

where E = s/√n is the standard error of the mean. When E is meant to mean tolerance, we use the symbol e.
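A minimal sketch of this large-sample interval in Python, assuming n ≥ 30 so that z = 1.96 applies; the numeric inputs (mean 55 mph, s = 7 mph, n = 50) are made-up illustration values:

```python
import math

def ci_95(x_bar, s, n):
    """Two-sided 95% interval for the mean, large-sample (z = 1.96) case."""
    E = s / math.sqrt(n)              # standard error of the mean
    return x_bar - 1.96 * E, x_bar + 1.96 * E

print(ci_95(55, 7, 50))               # ~(53.06, 56.94) for the assumed inputs
```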
7.4 Sample size computations

For cases in which the distribution of means can be considered normal, the confidence range for 95% confidence is ±1.96·s/√n. If this value is called the tolerance (or precision) and given the symbol e, then the following equation can be solved for n, the desired sample size:

e = 1.96·s/√n,  and therefore  n = 3.84·s²/e²

By replacing 1.96 with z and 3.84 with z², we can use this for any level of confidence.
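A short sketch of the sample-size formula as a function; the inputs in the usage line (s = 8 mph, tolerance e = 2 mph) are assumed values for illustration:

```python
import math

def sample_size(s, e, z=1.96):
    """Samples needed for tolerance e at the confidence level implied by z."""
    return math.ceil(z**2 * s**2 / e**2)

print(sample_size(8, 2))   # z^2 * s^2 / e^2 = 3.84 * 64 / 4 -> 62 samples
```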
7.5 Addition of random variables

Summation of random variables: Y = Σ a_i·X_i

Expected value (or mean) of the random variable Y: μ_Y = Σ a_i·μ_xi

Variance of the random variable Y: σ²_Y = Σ a_i²·σ²_xi

These concepts are useful for statistical work. See the sample problems on page 140.
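A tiny sketch of these two rules for the special case Y = X1 − X2, which reappears in the hypothesis tests below. The weights, means, and variances are assumed values, and independence of the X_i is also assumed:

```python
a = [1.0, -1.0]                  # Y = X1 - X2
mu = [60.0, 55.0]                # assumed means of X1, X2
var = [8.0**2, 8.0**2]           # assumed variances of X1, X2

mu_Y = sum(ai * mi for ai, mi in zip(a, mu))        # sum of a_i * mu_i -> 5.0
var_Y = sum(ai**2 * vi for ai, vi in zip(a, var))   # sum of a_i^2 * sigma_i^2 -> 128.0
print(mu_Y, var_Y)
```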
7.5.1 The central limit theorem

Definition: The population may have any unknown distribution with a mean μ and a finite variance σ². Take samples of size n from the population. As the size of n increases, the distribution of sample means will approach a normal distribution with mean μ and variance σ²/n.

[Figure: the distribution of X, X ~ any (μ, σ²), approaches the distribution of x̄, x̄ ~ N(μ, σ²/n)]
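A quick simulation sketch of the theorem, assuming numpy is available; the exponential population and the sample size n = 36 are arbitrary choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 36, 10_000                 # sample size and number of samples drawn

# exponential(1) population: decidedly non-normal, with mu = 1 and sigma^2 = 1
means = rng.exponential(1.0, size=(reps, n)).mean(axis=1)

print(means.mean())                  # ~ mu = 1.0
print(means.var())                   # ~ sigma^2 / n = 1/36 ~ 0.028
```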
7.6 The Binomial Distribution Related to the Bernoulli and Normal Distributions

7.6.1 Bernoulli and the binomial distribution (discrete probability functions)

The Bernoulli distribution is a discrete distribution with only two possible outcomes (heads-tails, one-zero, yes-no):

P(X = 1) = p
P(X = 0) = 1 − p

Assumptions: there is a single trial with only two possible outcomes, and the probability of an outcome is constant for each trial.

[Figure: probability mass function, with a bar of height p at X = 1 and a bar of height 1 − p at X = 0]
Explanation of the binomial distribution

Assumptions:
- n independent Bernoulli trials
- Only 2 possible outcomes on each trial
- Constant probability for each outcome on each trial
- The quantity of interest is the total number X of positive outcomes, a value between 0 and N

Example: 3 trials of flipping a coin (see equation 7-14 and the sketch below):

No. of tails   Possible outcomes   Prob. of outcome
0              HHH                 (1/2)^0 (1/2)^3
1              HHT HTH THH         3 (1/2)^1 (1/2)^2
2              TTH THT HTT         3 (1/2)^2 (1/2)^1
3              TTT                 (1/2)^3 (1/2)^0

Mean: Np. Variance: Npq.

Read 7.6.2 for a sample application of the binomial distribution. Discuss 7.6.2.
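A sketch reproducing the coin-flip table with scipy's binomial functions (scipy assumed available):

```python
from scipy.stats import binom

n, p = 3, 0.5                         # 3 trials, P(tail) = 1/2 per trial
for k in range(n + 1):
    print(k, binom.pmf(k, n, p))      # 0.125, 0.375, 0.375, 0.125, as in the table

print(binom.mean(n, p), binom.var(n, p))   # Np = 1.5, Npq = 0.75
```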


7.7 The Poisson distribution (counting distribution, or random-arrival discrete probability function)

P(X = x) = m^x e^(−m) / x!,  with mean μ = m and variance σ² = m.

If this characteristic (mean equal to variance) is not met, the Poisson theoretically does not apply.

The binomial distribution tends to approach the Poisson distribution with parameter m = np. Also, the binomial distribution approaches the normal distribution when np(1 − p) ≥ 9.

When time headways are exponentially distributed with mean 1/λ, the number of arrivals in an interval T is Poisson distributed with mean m = λT. Note that the unit of λ is veh/unit time (arrival rate).

(Read the sample problem on page 144, Table 7.5.)
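A sketch of the headway-to-count connection, assuming scipy; the arrival rate λ = 0.1 veh/s and interval T = 30 s are made-up values chosen to give m = 3:

```python
from scipy.stats import poisson

lam = 0.1                  # assumed arrival rate, veh/s (i.e., 360 veh/h)
T = 30                     # assumed counting interval, s
m = lam * T                # Poisson mean m = lambda * T = 3 vehicles

print(poisson.pmf(3, m))   # P(exactly 3 arrivals in 30 s) ~ 0.224
print(poisson.cdf(5, m))   # P(5 or fewer arrivals) ~ 0.916
```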
7.8 Hypothesis testing

Two distinct choices:
- Null hypothesis, H0
- Alternative hypothesis, H1

E.g., inspect 100,000 vehicles, of which 10,000 vehicles are unsafe. This is the fact given to us.

H0: The vehicle being tested is safe.
H1: The vehicle being tested is unsafe.

In this inspection, 15% of the unsafe vehicles are determined to be safe (a Type II error, the bad error) and 5% of the safe vehicles are determined to be unsafe (a Type I error, economically bad but safety-wise better than a Type II error).
Types of errors

Reality       Reject H0       Accept H0
H0 is true    Type I error    Correct
H1 is true    Correct         Type II error

Type I error: reject a correct null hypothesis. P(Type I error) = α (level of significance).
Type II error: fail to reject a false null hypothesis. P(Type II error) = β.
We want to minimize especially the Type II error. (See the binary case in pp. 145-146 to get a feel for the Type II error.)

Steps of hypothesis testing:
- State the hypothesis
- Select the significance level
- Compute sample statistics and estimate parameters
- Compute the test statistic
- Determine the acceptance and critical regions of the test statistic
- Reject or do not reject H0
Dependence between α, β, and sample size n

There is a distinct relationship between the two probability values α and β and the sample size n for any hypothesis: the value of any one is found by using the test statistic and set values of the other two.

- Given α and n, determine β. Usually the α and n values are the most crucial, so they are established and the β value is not controlled.
- Given α and β, determine n. Set up the test statistic for α and β with the H0 value and an H1 value of the parameter and two different n values.

The t (or z) statistic is:

t or z = (x̄ − μ) / (s/√n)

Here we are comparing means; hence we divide by √n.
7.8.1 Before-and-after tests with two distinct choices
7.8.2 Before-and-after tests with a generalized alternative hypothesis

The significance of the hypothesis test is indicated by α, the Type I error probability. α = 0.05 is most common: at the 5% level of significance, on average a Type I error (rejecting a true H0) will occur 5 in 100 times that H0 and H1 are tested. In addition, there is a 95% confidence level that the result is correct.

If H1 involves a not-equal relation, no direction is given, so the significance area is equally divided between the two tails of the testing distribution (two-sided test, 0.025 in each tail).

If it is known that the parameter can go in only one direction, a one-sided test is performed, so the significance area (0.05) is in one tail of the distribution (one-sided upper test).
Two-sided or one-sided test

These tests are done to compare the effectiveness of an improvement to a highway or street by using mean speeds.

If you want to prove that a difference exists between the two data samples, you conduct a two-sided test:
Null hypothesis H0: μ1 = μ2 (there is no change)
Alternative H1: μ1 ≠ μ2

If you are sure that the parameter can change in only one direction (say, there was no decrease), you conduct a one-sided test:
Null hypothesis H0: μ1 = μ2 (there is no increase)
Alternative H1: μ1 > μ2
Example

               Existing   After improvement
Sample size    55         55
Mean           60 min     55 min
Standard       8 min      8 min
deviation

σ_Y = √(σ1²/n1 + σ2²/n2) = √(8²/55 + 8²/55) = 1.53

|μ1 − μ2| = |60 − 55| = 5 > z_c

The decision point (typically z_c):
For two-sided: 1.96 × 1.53 = 2.998
For one-sided: 1.65 × 1.53 = 2.525

At significance level α = 0.05 (see Table 7-3): z_(α/2) = 1.96, z_α = 1.65. By either test, H0 is rejected.
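A sketch of this example's computation in Python; the numbers are exactly those in the table above:

```python
import math

n1 = n2 = 55
mean1, mean2 = 60.0, 55.0         # travel times before/after improvement (min)
s1 = s2 = 8.0                     # standard deviations (min)

sigma_Y = math.sqrt(s1**2 / n1 + s2**2 / n2)   # ~1.53
diff = abs(mean1 - mean2)                      # 5.0

print(diff > 1.96 * sigma_Y)      # two-sided decision point ~2.998 -> True, reject H0
print(diff > 1.65 * sigma_Y)      # one-sided decision point ~2.525 -> True, reject H0
```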
7.8.3 Other useful statistical tests

The t-test (for small samples, n ≤ 30), Table 7.6:

t = (x̄1 − x̄2) / (s_p·√(1/n1 + 1/n2)),  where  s_p = √[((n1 − 1)s1² + (n2 − 1)s2²) / (n1 + n2 − 2)]

The F-test (for small samples), Table 7.7: In using the t-test we assume that the standard deviations of the two samples are the same. To test this hypothesis we can use the F-test:

F = s1²/s2²  (by definition the larger s is always on top)

(See the sample problems on pages 149 and 151.)
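A sketch of the two formulas above as Python functions; the sample means, SDs, and sizes in the usage lines are assumed values, not from the text (scipy's ttest_ind with equal_var=True implements the same pooled test on raw data):

```python
import math

def pooled_t(x1bar, x2bar, s1, s2, n1, n2):
    """t statistic with pooled standard deviation s_p (equal variances assumed)."""
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (x1bar - x2bar) / (sp * math.sqrt(1 / n1 + 1 / n2))

def f_stat(s1, s2):
    """F statistic; by definition the larger sample variance goes on top."""
    return max(s1, s2)**2 / min(s1, s2)**2

# hypothetical small samples: means 60 and 55, SDs 8 and 7, n = 12 each
print(pooled_t(60, 55, 8, 7, 12, 12))   # ~1.63; compare with t-table at n1+n2-2 df
print(f_stat(8, 7))                     # ~1.31; compare with F-table (n1-1, n2-1)
```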


7.8.3 Other useful statistical tests (cont)

The F-test to test whether s1 = s2: When the t-test and other similar means tests are conducted, there is an implicit assumption made that s1 = s2. The F-test can test this hypothesis.

F = s1²/s2²  (the numerator variance > the denominator variance when you compute an F-value)

If F_computed ≥ F_table(n1 − 1, n2 − 1, α), then s1 ≠ s2 at the α significance level.
If F_computed < F_table(n1 − 1, n2 − 1, α), then s1 = s2 at the α significance level.

Discuss the problem on p. 151.


Paired difference test

You perform a paired difference test only when you have control over the sequence of data collection. E.g., in simulation you control the parameters: you have two different signal timing schemes, and only the timing parameters are changed. Use the same random number seeds; then you can pair. If you cannot control the random number seeds in the simulation, you are not able to do a paired test.

Table 7-8 shows an example of the benefits of paired testing. The only thing changed is the method used to collect speed data: the same vehicles' speeds were measured by the two methods.
Paired or not-paired example (Table 7.8)

                 Method 1   Method 2   Difference
Estimated mean   56.9       61.2       4.3
Estimated SD     7.74       7.26       1.5

H0: No increase in measured speeds (a one-sided, one-tailed test, n = 15).

Not paired: σ_Y = √(7.74²/15 + 7.26²/15) = 2.74
|56.9 − 61.2| = 4.3 < 4.54 (= 1.65 × 2.74)
Hence, H0 is NOT rejected.

Paired: E = 1.50/√15 = 0.388
4.3 increase > 0.642 (= 1.65 × 0.388)
Hence, H0 is clearly rejected.
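A sketch of both computations from the summary statistics above (n = 15 pairs and the one-sided z = 1.65, as in the slide):

```python
import math

n = 15
mean1, mean2 = 56.9, 61.2         # Method 1 and Method 2 means
s1, s2 = 7.74, 7.26               # sample SDs of the two methods
sd_diff = 1.50                    # SD of the 15 per-vehicle differences

# Not paired: standard error of the difference of two independent means
se_unpaired = math.sqrt(s1**2 / n + s2**2 / n)    # ~2.74
print(abs(mean1 - mean2) > 1.65 * se_unpaired)    # False -> H0 not rejected

# Paired: differencing removes vehicle-to-vehicle variation
se_paired = sd_diff / math.sqrt(n)                # ~0.388
print(abs(mean1 - mean2) > 1.65 * se_paired)      # True -> H0 clearly rejected
```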
Chi-square (χ²) test (the so-called goodness-of-fit test)

Example: Distribution of height data in Table 7-9.
H0: The underlying distribution is uniform.
H1: The underlying distribution is NOT uniform.

[Figure: histogram of observed vs. theoretical frequencies for height bins 5.0-5.2 through 6.8-7.0]

The authors intentionally used the uniform distribution to make the computation simple. We will test a normal distribution in class using Excel.
Steps of the chi-square (χ²) test

- Define categories or ranges (bins), assign data to the categories, and find n_i = the number of observations in each category i. (Use at least 5 bins, each with at least 5 observations.)
- Compute the expected number of samples for each category (the theoretical frequency), using the assumed distribution. Define f_i = the expected number of samples in each category i.
- Compute the quantity:

χ² = Σ (i = 1 to N) (n_i − f_i)² / f_i
Steps of Chi-square (2-) test (cont)
2 is chi-square distributed (see Table 5-8). If this
value is low if our hypothesis is correct. Usually
we use = 0.05 (5% significance level or 95%
confidence level). When you look up the table,
the degree of freedom is f = N 1 g where g is
the number of parameters we use in the
assumed distribution. For normal distribution g =
2 because we use and to describe the shape
of normal distribution.
If the computed 2 value is smaller than the
critical c2 value, we accept H0.
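A sketch of the whole procedure, assuming scipy; the observed counts are hypothetical stand-ins for the Table 7-9 data, and the uniform H0 needs no fitted parameters (g = 0, so df = N − 1):

```python
from scipy.stats import chi2, chisquare

observed = [14, 21, 18, 19, 15, 17, 16, 22, 20, 18]     # hypothetical bin counts n_i
expected = [sum(observed) / len(observed)] * len(observed)  # uniform: f_i = total / N

stat, p = chisquare(observed, expected)       # sum of (n_i - f_i)^2 / f_i
crit = chi2.ppf(0.95, df=len(observed) - 1)   # critical value at alpha = 0.05

print(stat, crit, stat < crit)                # stat < crit -> do not reject H0
```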
What is the chi-square (χ²) test testing?

From the assumed distribution you build an expected distribution (or histogram); the chi-square test then compares this expected histogram with the actual histogram. You need to know how to pull values from the assumed distribution to create the expected histogram.
