
PowerPoint to accompany

Introduction to MATLAB
for Engineers, Third Edition
William J. Palm III

Chapter 7
Statistics, Probability,
and Interpolation

Copyright © 2010. The McGraw-Hill Companies, Inc. This work is only for
non-profit use by instructors in courses for which this textbook has been
adopted. Any other use without publisher’s consent is unlawful.
Breaking Strength of Thread

% Thread breaking strength data for 20 tests.
y = [92,94,93,96,93,94,95,96,91,93,...
     95,95,95,92,93,94,91,94,92,93];
% The six possible outcomes are 91,92,93,94,95,96.
x = [91:96];
hist(y,x),axis([90 97 0 6]),...
   ylabel('Absolute Frequency'),...
   xlabel('Thread Strength (N)'),...
   title('Absolute Frequency Histogram for 20 Tests')

This creates the next figure.

7-2
Histograms for 20 tests of thread strength. Figure 7.1–1, page
297

7-3
Absolute frequency histogram for 100 thread tests.
Figure 7.1–2. This was created by the program on page 298.

7-4
Use of the bar function for relative frequency histograms
(page 299).

% Relative frequency histogram using the bar function.
tests = 100;
y = [13,15,22,19,17,14]/tests;
x = [91:96];
bar(x,y),ylabel('Relative Frequency'),...
   xlabel('Thread Strength (N)'),...
   title('Relative Frequency Histogram for 100 Tests')

This creates the next figure.

7-5
Relative frequency histogram for 100 thread tests.
Figure 7.1–3

7-6
Use of the hist function for relative frequency
histograms.

tests = 100;
y = [91*ones(1,13),92*ones(1,15),93*ones(1,22),...
     94*ones(1,19),95*ones(1,17),96*ones(1,14)];
x = [91:96];
[z,x] = hist(y,x); bar(x,z/tests),...
   ylabel('Relative Frequency'),xlabel('Thread Strength (N)'),...
   title('Relative Frequency Histogram for 100 Tests')

This also creates the previous figure.

7-7
Histogram functions Table 7.1–1
Command              Description

bar(x,y)             Creates a bar chart of y versus x.

hist(y)              Aggregates the data in the vector y into 10 bins evenly
                     spaced between the minimum and maximum values in y.

hist(y,n)            Aggregates the data in the vector y into n bins evenly
                     spaced between the minimum and maximum values in y.

hist(y,x)            Aggregates the data in the vector y into bins whose center
                     locations are specified by the vector x. The bin widths are
                     the distances between the centers.

[z,x] = hist(y)      Same as hist(y) but returns two vectors z and x that
                     contain the frequency count and the bin locations.

[z,x] = hist(y,n)    Same as hist(y,n) but returns two vectors z and x that
                     contain the frequency count and the bin locations.

[z,x] = hist(y,x)    Same as hist(y,x) but returns two vectors z and x that
                     contain the frequency count and the bin locations. The
                     returned vector x is the same as the user-supplied vector x.
7-8
The Data Statistics tool. Figure 7.1–4 on page 301

7-9

More? See page 300.


Scaled Frequency Histogram (pages 301-303)

% Absolute frequency data.
y_abs = [1,0,0,0,2,4,5,4,8,11,12,10,9,8,7,5,4,4,3,1,1,0,1];
binwidth = 0.5;
% Compute scaled frequency data.
area = binwidth*sum(y_abs);
y_scaled = y_abs/area;
% Define the bins.
bins = [64:binwidth:75];
% Plot the scaled histogram.
bar(bins,y_scaled),...
   ylabel('Scaled Frequency'),xlabel('Height (in.)')

This creates the next figure.

7-10
Scaled histogram of height data. Figure 7.2–1, page 302

7-11
Scaled histogram of height data for very many
measurements.

7-12
The basic shape of the normal distribution curve.
Figure 7.2–2, page 304

More? See pages 303-304.


7-13
The effect on the normal distribution curve of increasing σ.
For this case μ = 10, and the three curves correspond to
σ = 1, σ = 2, and σ = 3.

7-14
Probability interpretation of the μ ± σ limits.

7-15
Probability interpretation of the μ ± 2σ limits.

7-16

More? See pages 431-432.


The probability that the random variable x is no less
than a and no greater than b is written as P(a ≤ x ≤ b). It
can be computed as follows:

P(a ≤ x ≤ b) = (1/2)[erf((b − μ)/(σ√2)) − erf((a − μ)/(σ√2))]    (7.2–3)
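
As a check, Equation (7.2–3) can be evaluated directly with MATLAB's built-in erf function. A minimal sketch, using illustrative values μ = 10, σ = 2, a = 8, b = 12 (the μ ± σ case):

% Probability that a normal random variable with mean mu and
% standard deviation sigma lies between a and b (Eq. 7.2-3).
mu = 10; sigma = 2;      % illustrative values, not from the text
a = 8; b = 12;           % interval limits (here mu - sigma and mu + sigma)
P = 0.5*(erf((b - mu)/(sigma*sqrt(2))) - erf((a - mu)/(sigma*sqrt(2))))
% P is approximately 0.6827, the mu +/- sigma probability.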

See pages 305-306.


7-17
Sums and Differences of Random Variables (page 307)

It can be proved that the mean of the sum (or difference) of
two independent normally distributed random variables
equals the sum (or difference) of their means, but the
variance is always the sum of the two variances. That is, if x
and y are normally distributed with means μx and μy, and
variances σ²x and σ²y, and if u = x + y and υ = x − y, then

μu = μx + μy    (7.2–4)

μυ = μx − μy    (7.2–5)

σ²u = σ²υ = σ²x + σ²y    (7.2–6)
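
These results can be checked numerically with randn. A minimal sketch, using illustrative means and standard deviations (not values from the text):

% Numerical check of Eqs. (7.2-4)-(7.2-6) for two independent
% normally distributed variables (illustrative parameters).
N = 1e5;
x = 3 + 2*randn(1,N);          % mu_x = 3, sigma_x = 2
y = 5 + 4*randn(1,N);          % mu_y = 5, sigma_y = 4
u = x + y; v = x - y;
[mean(u), mean(v)]             % approximately 8 and -2
[var(u), var(v), 2^2 + 4^2]    % both sample variances approximately 20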

7-18
Random number functions Table 7.3–1

Command                         Description

rand                            Generates a single uniformly distributed random
                                number between 0 and 1.

rand(n)                         Generates an n × n matrix containing uniformly
                                distributed random numbers between 0 and 1.

rand(m,n)                       Generates an m × n matrix containing uniformly
                                distributed random numbers between 0 and 1.

s = rand('twister')             Returns a 35-element vector s containing the
                                current state of the uniformly distributed
                                generator.

rand('twister',s)               Sets the state of the uniformly distributed
                                generator to s.

rand('twister',0)               Resets the uniformly distributed generator to
                                its initial state.

rand('twister',j)               Resets the uniformly distributed generator to
                                state j, for integer j.

rand('twister',sum(100*clock))  Resets the uniformly distributed generator to a
                                different state each time it is executed.

7-19
Table 7.3–1 (continued)

randn                           Generates a single normally distributed random
                                number having a mean of 0 and a standard
                                deviation of 1.

randn(n)                        Generates an n × n matrix containing normally
                                distributed random numbers having a mean of 0
                                and a standard deviation of 1.

randn(m,n)                      Generates an m × n matrix containing normally
                                distributed random numbers having a mean of 0
                                and a standard deviation of 1.

s = randn('state')              Like rand('twister') but for the normally
                                distributed generator.

randn('state',s)                Like rand('twister',s) but for the normally
                                distributed generator.

randn('state',0)                Like rand('twister',0) but for the normally
                                distributed generator.

randn('state',j)                Like rand('twister',j) but for the normally
                                distributed generator.

randn('state',sum(100*clock))   Like rand('twister',sum(100*clock)) but for the
                                normally distributed generator.

randperm(n)                     Generates a random permutation of the integers
                                from 1 to n.

7-20
The following session shows how to obtain the same
sequence every time rand is called.

>>rand('twister',0)
>>rand
ans =
0.5488
>>rand
ans =
0.7152
>>rand('twister',0)
>>rand
ans =
0.5488
>>rand
ans =
0.7152

7-21
You need not start with the initial state in order to
generate the same sequence. To show this, continue
the above session as follows.

>>s = rand('twister');
>>rand('twister',s)
>>rand
ans =
0.6028
>>rand('twister',s)
>>rand
ans =
0.6028

7-22
The general formula for generating a uniformly distributed
random number y in the interval [a, b] is

y = (b − a) x + a (7.3–1)

where x is a random number uniformly distributed in the
interval [0, 1]. For example, to generate a vector y
containing 1000 uniformly distributed random numbers in
the interval [2, 10], you type y = 8*rand(1,1000) + 2.
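
A minimal check of Equation (7.3–1), reproducing this example and confirming the sample statistics:

% Uniformly distributed numbers in [2, 10] via y = (b - a)*x + a.
a = 2; b = 10;
y = (b - a)*rand(1,1000) + a;    % same as y = 8*rand(1,1000) + 2
[min(y), max(y), mean(y)]        % min near 2, max near 10, mean near 6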

7-23
If x is a random number with a mean of 0 and a
standard deviation of 1, use the following equation to
generate a new random number y having a standard
deviation of σ and a mean of μ.

y=σx+μ (7.3–2)

For example, to generate a vector y containing 2000
random numbers normally distributed with a mean of 5
and a standard deviation of 3, you type

y = 3*randn(1,2000) + 5.
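
A minimal check of Equation (7.3–2), using the same example values:

% Normally distributed numbers with mean 5 and standard deviation 3.
y = 3*randn(1,2000) + 5;
[mean(y), std(y)]    % should be close to 5 and 3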

7-24
If y and x are linearly related, as

y = bx + c (7.3–3)

and if x is normally distributed with a mean μx and
standard deviation σx, it can be shown that the mean and
standard deviation of y are given by

μy = bμx + c    (7.3–4)

σy = |b|σx    (7.3–5)
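
A quick numerical check of Equations (7.3–4) and (7.3–5); the values of b, c, μx, and σx below are illustrative, not from the text:

% Check of Eqs. (7.3-4) and (7.3-5) for y = b*x + c with normal x.
b = -2; c = 7;                % illustrative constants
x = 4 + 1.5*randn(1,1e5);     % mu_x = 4, sigma_x = 1.5
y = b*x + c;
[mean(y), b*4 + c]            % both approximately -1
[std(y), abs(b)*1.5]          % both approximately 3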

More? See pages 310-311.


7-25
Statistical analysis and manufacturing tolerances: Example
7.3-1. Dimensions of a triangular cut. Figure 7.3–1 , page
312

7-26
Scaled histogram of the angle θ. Figure 7.3–2, page 313

7-27
Applications of interpolation: A plot of temperature data
versus time. Figure 7.4–1, page 314

7-28
Temperature measurements at four locations. Figure 7.4–2,
page 316

More? See pages 313-317.
7-29
Linear interpolation functions. Table 7.4–1, page 317
Command Description

Y_int = interp1(x,y,x_int)      Used to linearly interpolate a function of one
                                variable: y = f(x). Returns a linearly
                                interpolated vector Y_int at the specified
                                values x_int, using data stored in x and y.
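
A minimal interp1 sketch, using the time-temperature data that appear later in the cubic-spline session; the query time of 10 hr is illustrative:

% Linear interpolation of temperature at an intermediate time.
x = [7,9,11,12];               % time (hr)
y = [49,57,71,75];             % temperature (deg F)
T_est = interp1(x,y,10)        % linearly interpolated value at t = 10 hr (64)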

7-30
Table 7.4–1 Continued

Z_int = interp2(x,y,z,x_int,y_int)

Used to linearly interpolate a function of two variables:
z = f(x, y). Returns a linearly interpolated vector Z_int at
the specified values x_int and y_int, using data stored in
x, y, and z.
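
A minimal interp2 sketch; the 2 × 2 grid of locations and temperatures below is illustrative, not the text's four-location data:

% Bilinear interpolation of temperature over a rectangular grid.
x = [0 1];                       % east-west coordinate
y = [0 1];                       % north-south coordinate
z = [49 54; 53 57];              % temperatures at the four corners
T_est = interp2(x,y,z,0.6,0.5)   % interpolated temperature at (0.6, 0.5)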

7-31
Cubic-spline interpolation: The following session produces
and plots a cubic-spline fit, using an increment of 0.01 in
the x values (pages 317-319).

>>x = [7,9,11,12];
>>y = [49,57,71,75];
>>x_int = 7:0.01:12;
>>y_int = spline(x,y,x_int);
>>plot(x,y,'o',x,y,'--',x_int,y_int),...
xlabel('Time (hr)'),ylabel('Temperature (deg F)'),...
title('Temperature Measurements at a Single Location'),...
axis([7 12 45 80])

This produces the next figure.


7-32
Linear and cubic-spline interpolation of temperature data.
Figure 7.4–3, page 319

7-33
Polynomial interpolation functions. Table 7.4–2, page 320

y_est = interp1(x,y,x_est,'spline')

Returns a column vector y_est that contains the estimated
values of y that correspond to the x values specified in the
vector x_est, using cubic-spline interpolation.

7-34
Table 7.4–2 Continued

y_int = spline(x,y,x_int)

Computes a cubic-spline interpolation where x and y are
vectors containing the data and x_int is a vector containing
the values of the independent variable x at which we wish to
estimate the dependent variable y. The result y_int is a
vector the same size as x_int containing the interpolated
values of y that correspond to x_int.

7-35
Table 7.4–2 Continued

y_int = pchip(x,y,x_int)

Similar to spline but uses piecewise cubic Hermite
polynomials for interpolation to preserve shape and
respect monotonicity.
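
A minimal sketch comparing spline and pchip on step-like data chosen to show pchip's shape preservation; the data are illustrative, not from the text:

% spline versus pchip on step-like data (pchip avoids overshoot).
x = 1:6;
y = [0 0 0 1 1 1];
x_int = 1:0.05:6;
plot(x,y,'o',x_int,spline(x,y,x_int),'--',x_int,pchip(x,y,x_int),'-'),...
   legend('data','spline','pchip'),xlabel('x'),ylabel('y')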

7-36
Table 7.4–2 Continued

[breaks, coeffs, m, n] = unmkpp(spline(x,y))

Computes the coefficients of the cubic-spline polynomials for
the data in x and y. The vector breaks contains the x
values, and the matrix coeffs is an m × n matrix containing
the polynomial coefficients. The scalars m and n give the
dimensions of the matrix coeffs; m is the number of
polynomials, and n is the number of coefficients for each
polynomial.
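
A minimal sketch of unmkpp applied to the temperature data used in the earlier spline session:

% Extract the piecewise-polynomial form of the cubic spline.
x = [7,9,11,12];
y = [49,57,71,75];
[breaks, coeffs, m, n] = unmkpp(spline(x,y));
breaks          % the x values: 7 9 11 12
size(coeffs)    % m = 3 polynomials, n = 4 coefficients each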

7-37
The next slide illustrates interpolation using a cubic polynomial
and an eighth order polynomial (top graph). The cubic is not
satisfactory in this case, and the eighth order polynomial is not
suitable for interpolation over the interval 0 < x < 0.5.

The cubic spline does a better job in this case (bottom graph).
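
A minimal sketch of this kind of comparison, fitting a single high-order interpolating polynomial and a cubic spline to the same points; the data are illustrative, not the text's, and a sixth-order polynomial stands in for the eighth-order one in the figure:

% High-order interpolating polynomial versus cubic spline.
x = 0:0.5:3;                           % 7 illustrative data points
y = [0 1 0.5 2 1.5 3 2.5];
x_int = 0:0.01:3;
p = polyfit(x,y,length(x)-1);          % degree-6 polynomial through all points
                                       % (polyfit may warn about conditioning)
plot(x,y,'o',x_int,polyval(p,x_int),'--',x_int,spline(x,y,x_int),'-'),...
   legend('data','interpolating polynomial','cubic spline')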

7-38
Figure 7.4-4 Top: Cubic and eighth order
polynomial interpolation. Bottom: Cubic spline
(page 321).

7-39
The next slide illustrates interpolation using a fifth order
polynomial and a cubic spline (top graph). The cubic spline is
better because the fifth order polynomial displays wide variations
between the data points.

The pchip polynomial does a better job than the cubic spline in
this case (bottom graph).

7-40
Figure 7.4-5 Top: Fifth order polynomial and cubic
spline interpolation. Bottom: pchip and cubic spline
interpolation. (page 323)

7-41
