Buffon’s Needle: An Expedition into Geometric Probability

Arushee Jha

Probability Density Functions


#1 What is the probability that a needle randomly dropped on a surface etched with equally
spaced parallel lines will intersect one of the lines?

Titled “Buffon’s Needle,” this is one of the earliest problems posed in the field of geometric
probability. Problems in this field rest on comparing measures such as lengths, areas and angles,
rather than on counting discrete, equally probable events. Problems of this kind also gave birth to
integral geometry, an incredibly useful tool in geometric probability theory. Before proceeding, I
strongly advise you to grab a pencil and paper.

Buffon’s Needle may seem dauntingly abstract. However, readers can make some simple
observations. Visualize yourself throwing a needle on a tiled floor, or you can even try the
experiment yourself! Does your needle intersect any lines? If so, does it intersect two lines or one?
This simple visualization brings us to our first observation: the length l of the needle can be
shorter than, equal to, or longer than the distance d between two consecutive parallel lines.
A needle must be longer than d to be able to intersect two lines. We shall discuss the first
scenario, informally referred to as the “short needle” problem, where l < d. Such a needle either
intersects one line or none. It cannot intersect two lines.

What variables can we define? Consider the midpoint of the needle, which splits it into two halves
of length l/2. Drop a perpendicular from this point to the parallel line closest to it, and let the
length of this perpendicular be x. Since the perpendicular is drawn to the nearest line, x cannot
exceed d/2. Notice that the needle makes two angles with the parallel lines: one acute and one
obtuse. We define θ to be the acute angle the needle makes with the parallel lines. These two
variables, with the conditions below, fully describe a randomly thrown needle.

0 ≤ x ≤ d/2
0 ≤ θ ≤ π/2

How does one define sample spaces for such “abstract” problems? Think about it. Our conditions
on x and θ are quite straightforward. But what if we were dealing with points on the complex
plane? Or on uneven solids? Such problems call for the heavier machinery of integral geometry, but
the core idea remains the same: define variables and assign conditions to them. In this example,
the sample space is simply the set of all ordered pairs (x, θ) that satisfy the two inequalities above.

On to constructing a probabilistic model. For this we utilize probability density functions. To
illustrate, consider a football match. The typical football match is 90 minutes long. What is the
probability that a player will score exactly on the 37th minute? Many players score approximately
37 minutes into a match, but the chance that a player scores at exactly t = 37.0000.... minutes is
1/infinity, or zero. However, we can quantify the probability she scores between 37 and 37.5
minutes. Assume this probability is 0.05. Then the chance of scoring between 37 and 37.005
minutes is 0.0005, since this interval is one-hundredth the original. Between 37 and 37.00005
minutes it is 0.000005. And so on. Notice how the ratio

(probability of scoring in an interval) / (duration of the interval)

stays constant at about 0.1 per minute. This quantity, 0.1 min⁻¹, is the probability density of
scoring at around 37 minutes. It follows that the probability of scoring within an infinitesimal time
period of duration dt around 37 minutes is (0.1 min⁻¹) dt. Therefore, there exists a probability
density function f with f(37 minutes) = 0.1 min⁻¹. Integrating this function over any window of
time yields the probability that the player scores in that period. Naturally, the integral over the
entire duration of the match is then 1.

Formally,

P(a ≤ k ≤ b) = ∫_a^b f(k) dk
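To see this definition in action numerically, here is a minimal sketch in Python with SciPy; the constant density of 0.1 min⁻¹ near the 37-minute mark is simply the assumption from the toy football example above, not a real match statistic:

from scipy.integrate import quad

# Assumed local scoring density near the 37th minute, in 1/minutes,
# taken from the toy football example above.
f = lambda t: 0.1

p, _ = quad(f, 37.0, 37.5)   # P(37 <= t <= 37.5) = integral of f over [37, 37.5]
print(p)                     # ~0.05, matching the probability assumed in the text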

x and θ take values in predefined ranges, with every value equally likely. They are examples of
independent, uniformly distributed continuous random variables. Let k be such a uniform random
variable with range [a, b]. Geometrically, its density can be represented as a rectangle whose width
b − a is the range and whose height is 1/(b − a). The area of the rectangle, which represents the
total probability of k falling in the range [a, b], is then 1 square unit, representing 100 percent,
confirming our intuition. Therefore the individual probability density function of k, termed its
marginal probability density function, is:

f(k) = 1/(b − a)   if a ≤ k ≤ b
f(k) = 0           otherwise

Computing marginal probability density functions for x and θ over their predefined ranges:

f_x(x) = 2/d

f_θ(θ) = 2/π

We may now construct a joint probability density function over the sample space. Without delving
too deep into probabilistic theory: since x and θ are independent variables, their joint probability
density function is simply the product of their marginal probability density functions. This makes
intuitive sense.

Therefore,

f(x, θ) = f_x(x) · f_θ(θ) = 4/(πd)
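As a quick sanity check, the sketch below (Python with SciPy; the value d = 2 is an arbitrary assumption, since any positive spacing works) confirms that this joint density integrates to 1 over the whole sample space:

import numpy as np
from scipy.integrate import dblquad

d = 2.0  # assumed spacing between the parallel lines (any positive value works)

# Joint density f(x, theta) = 4 / (pi * d) on 0 <= x <= d/2, 0 <= theta <= pi/2.
joint = lambda x, theta: 4.0 / (np.pi * d)

total, _ = dblquad(joint,
                   0.0, np.pi / 2.0,        # outer variable theta over [0, pi/2]
                   lambda theta: 0.0,       # inner variable x from 0 ...
                   lambda theta: d / 2.0)   # ... to d/2
print(total)                                # ~1.0: the density is properly normalised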

When does the needle intersect a line? The half of the needle closest to the nearest line spans a
perpendicular ("vertical") distance of (l/2) sinθ from the midpoint toward that line. The needle
reaches the line exactly when this vertical extent exceeds the distance x. Therefore, for a needle
to intersect a line: x < (l/2) sinθ.

Integrating over x up to (l/2) sinθ and over θ up to π/2 to find the probability of intersection:

P(a needle crosses a line) = ∫_0^{π/2} ∫_0^{(l/2) sinθ} f(x, θ) dx dθ

= ∫_0^{π/2} ∫_0^{(l/2) sinθ} (4/(πd)) dx dθ

= ∫_0^{π/2} (4/(πd)) · (l/2) sinθ dθ

= (2l/(πd)) [−cosθ]_0^{π/2}

P(a needle crosses a line) = 2l/(πd)
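The same double integral can be checked numerically. The sketch below (Python with SciPy, assuming l = 1 and d = 2 purely for illustration) agrees with the closed form 2l/(πd):

import numpy as np
from scipy.integrate import dblquad

l, d = 1.0, 2.0                      # assumed "short needle" values, l < d

# Integrate the joint density 4/(pi*d) over the crossing region x < (l/2) sin(theta).
p_numeric, _ = dblquad(lambda x, theta: 4.0 / (np.pi * d),
                       0.0, np.pi / 2.0,                        # theta over [0, pi/2]
                       lambda theta: 0.0,                       # x from 0 ...
                       lambda theta: (l / 2.0) * np.sin(theta)) # ... to (l/2) sin(theta)

print(p_numeric)              # ~0.3183
print(2 * l / (np.pi * d))    # closed form 2l/(pi d), the same value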

And voilà, we have computed the “short” needle’s likelihood of intersecting a line. Notice how in
doing so we have also procured a fascinating method to compute π!

π = 2l / (P(a needle crosses a line) · d)

Here is a graph of experimental values of π plotted against its actual value after dropping the
needle 277 times:

Larger sample sizes, obtained by intensive repeated random sampling (termed Monte Carlo
simulation), will yield approximations closer to π. Although developing a formula for π was not an
intended consequence of this problem, scientists and engineers do frequently use Monte Carlo
methods to approximate important scientific constants. Mathematical labour rarely goes to waste.
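As an illustration of the Monte Carlo idea, here is a minimal simulation sketch in Python with NumPy; the choices l = 1, d = 2 and one million drops are arbitrary assumptions made for the example:

import numpy as np

rng = np.random.default_rng(0)            # fixed seed so the run is reproducible
l, d = 1.0, 2.0                           # short needle: l < d
n = 1_000_000                             # number of simulated needle drops

x = rng.uniform(0.0, d / 2.0, n)          # distance from midpoint to nearest line
theta = rng.uniform(0.0, np.pi / 2.0, n)  # acute angle with the parallel lines

p_hat = np.count_nonzero(x < (l / 2.0) * np.sin(theta)) / n

print(p_hat)                  # estimate of 2l/(pi d) ~ 0.3183
print(2 * l / (d * p_hat))    # corresponding Monte Carlo estimate of pi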

Now that we have a general outline to solve problems in geometric probability, here is a slightly
more complex problem:

#2 Solve Buffon’s Needle for the “long needle” scenario, where l > d. Try to construct suitable
probability density functions!

Total Expectation Theorem


# Consider a log of length l. Suppose a lumberjack chops this log at a random point X and keeps
the left piece. This piece is then chopped again at a random point Y, and once more the left piece
is kept. What is the expected length of this final piece of the log?

In probability theory, the expected value refers to the average of the potential outcomes, weighted
by their probabilities.

The length of the log remaining after the first chop, x, is uniformly distributed on [0, l]. The
length remaining after the second chop, y, is then uniformly distributed on [0, x].

0 ≤ y ≤ x ≤ l.
Unlike the previous example, x and y are not independent variables: the range of possible values
of y is conditional on the value of x. We can therefore define a marginal probability density
function for x and a conditional probability density function for y:

f_x(x) = 1/l

f_{y|x}(y|x) = 1/x

Utilizing the multiplication rule to compute a joint probability density function:

f_xy(x, y) = f_x(x) · f_{y|x}(y|x) = (1/l) · (1/x)

f_xy(x, y) = 1/(lx)

We can visualize this function on its support, the triangular region 0 ≤ y ≤ x ≤ l in the xy-plane.
The upper boundary of this region is the line y = x, which has slope 1, and since the maximum
value of both x and y is l, the topmost vertex of the triangle is (l, l).

We can now find the marginal probability density function for y by integrating the joint
probability density function over the range of values of x. What is this range? For a given value of
y, the joint probability density function is defined only for values of x between x = y and x = l.

f_y(y) = ∫_y^l f_xy(x, y) dx = ∫_y^l 1/(lx) dx

f_y(y) = (1/l) log(l/y)
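A short symbolic check of this marginal density (a sketch using SymPy, which the article itself does not use but which makes the computation transparent):

import sympy as sp

y, x, l = sp.symbols('y x l', positive=True)

# Marginal density of y: integrate the joint density 1/(l*x) over x from y to l.
f_y = sp.integrate(1 / (l * x), (x, y, l))
print(sp.simplify(f_y))                           # equals log(l/y)/l, possibly printed as (log(l) - log(y))/l

# The marginal density integrates to 1 over [0, l], as any density must.
print(sp.simplify(sp.integrate(f_y, (y, 0, l))))  # 1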

The expected value of y can be found by integrating y times its probability density over the range
of all possible values of y, [0, l]:

E[y] = ∫_0^l y · (1/l) log(l/y) dy

A complex integral indeed! Should we take the algorithmic route and proceed to integrate by
parts? Or maybe there is a simpler alternative? Mathematics begets elegance... Of course we
should search for a simpler approach!

The Total Expectation Theorem comes to our rescue. The expected value of a random variable can
be calculated by a weighted method: partition the sample space into events K_1, ..., K_n and add
the products of each event’s probability and the conditional expected value given that event.

E(A) = P(K_1) E(A|K_1) + P(K_2) E(A|K_2) + ... + P(K_n) E(A|K_n)

x can take a continuum of values between 0 and l. Instead of summing discrete weights, we must
integrate the product of the probability density of x (which is 1/l) and the expected value of y
given x, E(y|x), over this range:

E(y) = ∫_0^l (1/l) E(y|x) dx

Since y is uniformly distributed on [0, x],

E(y|x) = x/2

E(y) = ∫_0^l (1/l) · (x/2) dx = (1/2) E(x)

E(x) = l/2

E(y) = (1/2) · (l/2) = l/4
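Both routes can be verified with a short sketch (Python with SymPy and NumPy; the log length L = 1 in the simulation is an arbitrary assumption):

import numpy as np
import sympy as sp

# Symbolic check: the direct integral E[y] = integral of y (1/l) log(l/y) dy over [0, l] also gives l/4.
y, l = sp.symbols('y l', positive=True)
print(sp.simplify(sp.integrate(y * sp.log(l / y) / l, (y, 0, l))))   # l/4

# Monte Carlo check: chop at X ~ Uniform(0, L), then at Y ~ Uniform(0, X).
rng = np.random.default_rng(0)
L = 1.0                                    # assumed log length
x_samples = rng.uniform(0.0, L, 1_000_000)
y_samples = rng.uniform(0.0, x_samples)    # each Y is uniform on [0, X]
print(y_samples.mean())                    # ~ L/4 = 0.25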

This is an intuitive answer. After the first chop we can expect the remaining length to be l/2;
similarly, after the second chop we obtain an expected value of l/4. “I could have guessed that!”
Now you have the probabilistic logic to prove it! This is not to say that computational thinking
in any way subdues intuition. Intuitive thinking is an immensely powerful tool in a mathematical
environment, and many complex problems can be greatly simplified by it. After all, algorithms can
be encoded; cognition sets us apart. A good example of a problem requiring probabilistic intuition,
especially because a rigorous mathematical proof is demanding, is problem A6 from the 1992
Putnam Competition:

#3 Four points are chosen uniformly at random on the surface of a sphere. What is the probability
that the center of the sphere lies inside the tetrahedron whose vertices are at the four points?

Problems
Unchallenged? Here are some practice problems! For an even greater challenge, solve them using
probability density functions.

#4 Three points are randomly chosen on a circle. What is the probability that the triangle with
the vertices at the three points has the center of the circle in its interior?

#5 Assume a stick is broken at random into three pieces. What is the probability that the pieces
can form a triangle?

Bibliography
All related pages from (https://www.cut-the-knot.org) The image of the Buffon’s Noodle
simulation is from (http://www.di.fc.ul.pt/ jpn/r/animation/buffon.needle.html). All other
images are my own.
