
Lab 1: Elementary Integration and the Fundamental Theorem of Calculus

Jason Cantarella∗
(Dated: January 5, 2013)

1. LEIBNIZ’S RULE

Our usual trick when confronted with a function defined by an integral is to do the
integral already, and then think about why we wanted it in the first place. Sometimes that
isn't possible, or even desirable. Let's take the example of functions defined like
$$ f(x) = \int_{u(x)}^{v(x)} g(t)\, dt. $$

This is the area under a chunk of g(t), where the chunk we're taking varies according
to the endpoint functions u(x) and v(x). Suppose we want to take the derivative f'(x).
Can we do it without actually doing the indefinite integral for g? We know that such an
antiderivative G(x) exists. Let's write
$$ f'(x) = \frac{d}{dx}\int_{u(x)}^{v(x)} g(t)\, dt
        = \frac{d}{dx}\Bigl[G(v(x)) - G(u(x))\Bigr]
        = G'(v(x))\,v'(x) - G'(u(x))\,u'(x). $$
But G(x) was an antiderivative for g(x), so this really means that
$$ f'(x) = g(v(x))\,v'(x) - g(u(x))\,u'(x), $$
which is known as Leibniz's rule.
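As a quick sanity check (a sketch, not part of the original lab), we can compare Leibniz's rule against a direct numerical derivative of the integral. The particular g(t) = cos t, u(x) = x², v(x) = x³, and the point x = 1.3 are my own illustrative choices:

```python
import math

def simpson(g, a, b, n=2000):
    # Composite Simpson's rule approximation of the integral of g over [a, b].
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# Sample choices, purely for illustration: g(t) = cos t, u(x) = x^2, v(x) = x^3.
g = math.cos
u, du = (lambda x: x ** 2), (lambda x: 2 * x)
v, dv = (lambda x: x ** 3), (lambda x: 3 * x ** 2)

def f(x):
    return simpson(g, u(x), v(x))

x = 1.3
# Leibniz's rule: f'(x) = g(v(x)) v'(x) - g(u(x)) u'(x).
leibniz = g(v(x)) * dv(x) - g(u(x)) * du(x)
# Central-difference derivative of the integral itself, for comparison.
h = 1e-5
numeric = (f(x + h) - f(x - h)) / (2 * h)
```

The two values should agree to several decimal places; this example is also checkable by hand, since here f(x) = sin(x³) − sin(x²).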
We can use this to compute some funky-looking derivatives. For instance, suppose we
want the derivative of
$$ f(x) = \int_{\sqrt{x}}^{2\sqrt{x}} \sin(t^2)\, dt. $$


University of Georgia, Mathematics Department, Athens GA

Using Leibniz's rule, and the fact that $\frac{d}{dx}\sqrt{x} = \frac{1}{2\sqrt{x}}$, we see that this is
$$ f'(x) = \sin\bigl((2\sqrt{x})^2\bigr)\frac{1}{\sqrt{x}} - \sin\bigl((\sqrt{x})^2\bigr)\frac{1}{2\sqrt{x}}
         = \frac{2\sin 4x - \sin x}{2\sqrt{x}}. $$
We can use this to find the values of x where this derivative is zero: they satisfy
$$ 2\sin 4x = \sin x, $$
and we see that (among other solutions) f'(π) = 0. Actually, f'(0) = 0 too, but you need to
take a limit to see it. This seems neat, but what could it possibly be good for? To answer
that question, we'll introduce an endlessly fruitful application of calculus: probability.
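We can check this computation numerically (again a sketch, not part of the lab): approximate the integral with Simpson's rule, differentiate it with a central difference, and compare with the closed form from Leibniz's rule.

```python
import math

def simpson(g, a, b, n=2000):
    # Composite Simpson's rule approximation of the integral of g over [a, b].
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def f(x):
    # f(x) = integral of sin(t^2) dt from sqrt(x) to 2 sqrt(x).
    return simpson(lambda t: math.sin(t * t), math.sqrt(x), 2 * math.sqrt(x))

def fprime(x):
    # Closed form from Leibniz's rule: (2 sin 4x - sin x) / (2 sqrt(x)).
    return (2 * math.sin(4 * x) - math.sin(x)) / (2 * math.sqrt(x))

h = 1e-5
numeric = (f(math.pi + h) - f(math.pi - h)) / (2 * h)
# Both the numerical derivative and the closed form vanish at x = pi.
```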

2. PROBABILITY THEORY AND THE NORMAL DISTRIBUTION

We start with a few definitions from probability theory. The basic idea in probability
is that we have random variates which obey probability distributions. A random variate
has a certain probability of taking on values within an interval, and this probability can be
computed using a function which describes its probability distribution called a probability
density function.
Definition 1. If T is a random variate selected according to a probability distribution with
probability density function f (x), then the chance that T is between a and b is given by
$$ P(a \le T \le b) = \int_a^b f(x)\, dx. $$

We usually call f(x) a pdf, which is short for "probability density function".
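To make this definition concrete, here is a small numerical sketch. The pdf f(x) = 2x on [0, 1] is my own choice for illustration, not from the lab:

```python
def pdf(x):
    # Illustrative pdf: f(x) = 2x on [0, 1] (and zero elsewhere).
    return 2 * x

def prob(a, b, n=1000):
    # P(a <= T <= b): midpoint-rule approximation of the integral of the pdf.
    h = (b - a) / n
    return sum(pdf(a + (i + 0.5) * h) for i in range(n)) * h

p = prob(0.25, 0.75)
# By hand, an antiderivative is x^2, so the exact answer is
# 0.75^2 - 0.25^2 = 0.5.
```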

We can define the mean (or expected value) of a probability distribution by


Definition 2. The mean µ of a probability distribution with pdf f (x) is
$$ \mu = \int_{-\infty}^{\infty} x f(x)\, dx. $$

The mean is a "weighted average" of all the values that the random variate can take on,
where the weighting is by the probability density f(x) at each value x.

If the pdf f(x) is zero outside a finite interval, we can compute the mean by integrating
over that interval only. Figure 1 shows some examples of pdfs and their means.
FIG. 1: Some examples of pdfs and their means
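Computing a mean this way takes only a few lines. The pdf below (f(x) = 2x on [0, 1]) is an illustrative choice of my own, not from the lab:

```python
def pdf(x):
    # Illustrative pdf: f(x) = 2x on [0, 1] (and zero elsewhere).
    return 2 * x

def mean(n=10000):
    # mu = integral of x * f(x) dx over the support [0, 1], by the midpoint rule.
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += x * pdf(x) * h
    return total

mu = mean()
# Exact value: the integral of 2 x^2 from 0 to 1, which is 2/3.
```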

The standard deviation of a pdf measures how “spread out” the probability distribution
is from the mean µ. It is defined to be the square root of the expected value of (T − µ)2 ,
or by
Definition 3. The standard deviation σ of a pdf f (x) with mean µ is given by
$$ \sigma = \sqrt{\int_{-\infty}^{\infty} (x - \mu)^2 f(x)\, dx}. $$

Figure 2 shows some examples of distributions with the same mean, but different stan-
dard deviations.

FIG. 2: Some examples of pdfs with mean 0, but different standard deviations
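The standard deviation can be computed the same way as the mean. Continuing with an illustrative pdf of my own choosing, f(x) = 2x on [0, 1], whose mean is 2/3:

```python
import math

def pdf(x):
    # Illustrative pdf: f(x) = 2x on [0, 1] (and zero elsewhere); its mean is 2/3.
    return 2 * x

def stddev(mu, n=10000):
    # sigma = sqrt of the integral of (x - mu)^2 f(x) dx over the support [0, 1].
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += (x - mu) ** 2 * pdf(x) * h
    return math.sqrt(total)

sigma = stddev(2 / 3)
# Exact value: sqrt(1/2 - (2/3)^2) = sqrt(1/18), about 0.2357.
```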

The most common distribution in probability is


Definition 4. The normal distribution with mean µ and standard deviation σ is the distri-
bution given by the pdf
$$ f(x; \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}. $$
This function is called a “Gaussian function”.

Here is a really important fact:

There is no elementary antiderivative for the Gaussian function f (x; µ, σ).
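Because there is no elementary antiderivative, probabilities for the normal distribution have to be computed numerically. In Python, the standard library's math.erf encodes exactly this non-elementary integral, so a two-line helper suffices (a sketch, assuming the standard relationship between the normal cdf and erf):

```python
import math

def normal_prob(a, b, mu, sigma):
    # P(a <= T <= b) for a normal(mu, sigma) variate, via the error function:
    # the cdf is Phi(x) = (1 + erf((x - mu) / (sigma * sqrt(2)))) / 2.
    def z(x):
        return (x - mu) / (sigma * math.sqrt(2))
    return (math.erf(z(b)) - math.erf(z(a))) / 2

p = normal_prob(-1.0, 1.0, 0.0, 1.0)
# This is the familiar "about 68% of the mass lies within one sigma of the mean".
```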



3. OPTIMIZING YOUR CHANCES

Suppose we know that we are manufacturing bolts, and the diameters of the resulting bolts
are distributed normally with mean µ cm and standard deviation σ. We can accept and sell
any collection of bolts whose diameter varies by no more than 0.01 cm from the nominal
diameter d. Any bolts which differ by more than 0.01 cm from d have to be discarded.
Question 5. How should we choose d to maximize the probability that a given bolt will
be accepted?

Actually, it's not too hard to guess that the answer ought to be d = µ. But why?
Understanding this will allow us to develop our intuition about the fundamental theorem
of calculus a bit more than we have already.
We start by writing down a function (of d) which gives us the chance that a given bolt
will be accepted. If T is the diameter of the given bolt, we know that
$$ P(d) = P(d - 0.01 < T < d + 0.01) = \int_{d-0.01}^{d+0.01} f(x; \mu, \sigma)\, dx. $$

We remember that the way to maximize a function is to take the derivative and set it equal
to zero. Now it's more than a little disconcerting that we don't actually have a formula
for P(d) here (just an integral!). But don't let that throw you: Leibniz's rule will let us
compute the derivative anyway. Here the endpoint functions are d + 0.01 and d − 0.01, whose
derivatives are both 1, so

$$ P'(d) = f(d + 0.01; \mu, \sigma) - f(d - 0.01; \mu, \sigma). $$

That's interesting! We know that P'(d) = 0 exactly when

$$ f(d + 0.01; \mu, \sigma) = f(d - 0.01; \mu, \sigma). $$

So when is that? Well, it's helpful to look at some pictures of the pdf for the normal
distribution (Figure 3). We can see from the pictures that the points where the function
takes equal values seem to be centered on the mean µ of the normal distribution. Actually, we can
prove this directly from our formula for the pdf. Suppose that we want to compare
$$ f(\mu + \Delta; \mu, \sigma) \overset{?}{=} f(\mu - \Delta; \mu, \sigma). $$

FIG. 3: Places where the pdf f (x; µ, σ) takes equal values.

Plugging in, we see that
$$ \begin{aligned}
f(\mu + \Delta; \mu, \sigma)
  &= \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{(\mu+\Delta)-\mu}{\sigma}\right)^2} \\
  &= \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{\Delta}{\sigma}\right)^2} \\
  &= \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{-\Delta}{\sigma}\right)^2} \\
  &= \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{(\mu-\Delta)-\mu}{\sigma}\right)^2}
   = f(\mu - \Delta; \mu, \sigma).
\end{aligned} $$
Now for the normal distribution, this is not such a great trick, since you could have guessed
the answer already. But not everything is normally distributed!
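We can also see the normal-distribution conclusion numerically. This sketch uses made-up parameters of my own (µ = 2 cm, σ = 0.5 cm) and scans nominal diameters d, confirming the acceptance probability peaks at d = µ:

```python
import math

def acceptance(d, mu=2.0, sigma=0.5, tol=0.01):
    # P(d - tol < T < d + tol) for a normal(mu, sigma) diameter, via erf.
    def z(x):
        return (x - mu) / (sigma * math.sqrt(2))
    return (math.erf(z(d + tol)) - math.erf(z(d - tol))) / 2

# Scan a grid of candidate nominal diameters around the mean.
candidates = [1.9 + 0.001 * i for i in range(201)]
best = max(candidates, key=acceptance)
# The maximizing d is the grid point at (essentially) mu = 2.0.
```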
Example 1. Beta distributions. Suppose that instead, the distribution for the diameters of
our bolts was given by a "Beta(2,3) distribution", which turns out to have the pdf
$$ f(x) = 12(1 - x)^2\, x. $$
This has a mean of 2/5 (exercise: prove it!). Suppose we now want to find the nominal
diameter d which will allow us to accept the most bolts. Will it be 2/5? We compute as
before that we want
that we want
$$ f(d + \Delta) = f(d - \Delta), $$
or (as it turns out)
$$ 24\Delta\,\bigl(3d^2 - 4d + (1 + \Delta^2)\bigr) = 0. $$
Plugging in ∆ = 0.01, we can solve the quadratic and get d = 0.333383. (Question: What
does the other root of the quadratic tell us?) Plotting everything in Figure 4 we can see
what’s going on. Note that in a real factory, we’d have to have the distribution of bolts
have a lot smaller standard deviation than this one in order to make any money!
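We can verify this root in a few lines of Python (a sketch; the quadratic below uses the constant term 1 + ∆², which is what expanding the symmetry condition actually produces, and which reproduces d = 0.333383):

```python
import math

def pdf(x):
    # Beta(2, 3) pdf from the example: f(x) = 12 (1 - x)^2 x.
    return 12 * (1 - x) ** 2 * x

delta = 0.01
# f(d + delta) = f(d - delta) reduces to 3 d^2 - 4 d + (1 + delta^2) = 0;
# take the root lying inside [0, 1].
disc = math.sqrt(16 - 12 * (1 + delta ** 2))
d = (4 - disc) / 6
# d comes out to about 0.333383, and the pdf really does take equal values
# at the two endpoints of the acceptance interval.
```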

FIG. 4: All three figures show the pdf of the Beta(2,3) distribution, together with its mean at 0.4.
The middle figure shows the interval around a nominal diameter of 0.333383 which we computed
to be the best. The right figure shows what's really going on here: the interval is centered
(approximately) on the maximum value of the pdf at x = 1/3.
