
Physics 139A Homework 1

Eric Reichwein
Department of Physics
University of California, Santa Cruz
April 14, 2013

Problem 1.1
For the distribution of ages in Section 1.3.1:


(a) Compute $\langle j^2 \rangle$ and $\langle j \rangle^2$.
(b) Determine $\Delta j$ for each $j$, and use Equation 1.11 to compute the standard deviation.
(c) Use your results from (a) and (b) to check Equation 1.12.

1.1 Solution to 1.1(a)


To calculate $\langle j^2 \rangle$ and $\langle j \rangle^2$ I used Excel and an Excel add-in called Excel2latex to produce the table below. We make use of Equations 1.7 and 1.8 from Griffiths, which are

$$\langle j^2 \rangle = \sum_{i=1}^{N} j_i^2\,P(j_i) \qquad\text{and}\qquad \langle j \rangle^2 = \left(\sum_{i=1}^{N} j_i\,P(j_i)\right)^{\!2},$$

with $P(j_i) = \dfrac{N(j_i)}{N}$, where $N$ is the total number of samples.

Number $N(j_i)$   Age $j_i$   $j_i^2\,P(j_i)$   $j_i\,P(j_i)$
1                 14           14.00000          1.000000
1                 15           16.07143          1.071429
3                 16           54.85714          3.428571
2                 22           69.14286          3.142857
2                 24           82.28571          3.428571
5                 25          223.2143           8.928571

$\langle j^2 \rangle = 459.5714$, $\langle j \rangle^2 = 441$
Variance: $\sigma^2 = \langle j^2 \rangle - \langle j \rangle^2 = 18.57143$
Standard deviation: $\sigma = 4.309458$

Table 1: A table of the age distribution of a population. The population consists of $N = 14$ samples with ages between 14 and 25. The variance and standard deviation are calculated from the moments of Equations 1.7 and 1.8.
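
As a quick cross-check of the Excel numbers, here is a minimal Python sketch (assuming NumPy is available) that reproduces the moments in Table 1:

```python
import numpy as np

ages = np.array([14, 15, 16, 22, 24, 25])
counts = np.array([1, 1, 3, 2, 2, 5])

N = counts.sum()                 # total number of samples, N = 14
P = counts / N                   # P(j_i) = N(j_i)/N

j_mean = np.sum(ages * P)        # <j>   -> 21.0
j2_mean = np.sum(ages**2 * P)    # <j^2> -> 459.5714...
variance = j2_mean - j_mean**2   # Equation 1.12 -> 18.5714...
print(j_mean, j2_mean, variance, np.sqrt(variance))
```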

1.2 Solution to 1.1(b)

By Equation 1.10 the spread from the average value is

$$\Delta j = j - \langle j \rangle.$$

To compute $\Delta j$ we must first compute $\langle j \rangle$. However, we computed its square, $\langle j \rangle^2$, in Table 1, so all we need is the square root of that value:

$$\langle j \rangle = \sqrt{\langle j \rangle^2} = \sqrt{441} = 21.$$

Number $N(j_i)$   Age $j_i$   $\langle j \rangle$   $\Delta j$   $(\Delta j)^2$
1                 14          21                    -7            49
1                 15          21                    -6            36
3                 16          21                    -5            25
2                 22          21                     1             1
2                 24          21                     3             9
5                 25          21                     4            16

Table 2: A table computing the spread of each age from the average value. The average age is $\langle j \rangle = 21$.

Now using Equation 1.11 we can obtain the standard deviation as

$$\sigma^2 = \langle (\Delta j)^2 \rangle = \frac{49 + 36 + (3)25 + (2)1 + (2)9 + (5)16}{14} = 18.57143.$$
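
The same value falls out of a one-line weighted average of the $(\Delta j)^2$ column; here is a minimal, self-contained sketch:

```python
import numpy as np

ages = np.array([14, 15, 16, 22, 24, 25])
counts = np.array([1, 1, 3, 2, 2, 5])

dj = ages - 21                                  # Delta j from Table 2
sigma2 = np.sum(counts * dj**2) / counts.sum()  # Equation 1.11 -> 18.5714...
print(sigma2, np.sqrt(sigma2))                  # 18.5714..., 4.3095...
```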

1.3 Solution to 1.1(c)

Equation 1.12 is $\sigma = \sqrt{\langle j^2 \rangle - \langle j \rangle^2}$, and we have calculated both $\langle j^2 \rangle$ and $\langle j \rangle^2$ in part (a); therefore the standard deviation is just

$$\sigma = \sqrt{\langle j^2 \rangle - \langle j \rangle^2} = \sqrt{459.5714 - 441} = \sqrt{18.5714} \approx 4.309,$$

whereas in part (b) the square root of the average of the squared spread, $\langle (\Delta j)^2 \rangle$, is just

$$\sigma = \sqrt{\langle (\Delta j)^2 \rangle} = \sqrt{18.57143} \approx 4.309.$$

The two methods agree, confirming Equation 1.12.

Problem 1.3
Consider the Gaussian distribution

$$\rho(x) = A e^{-\lambda (x-a)^2},$$

where $A$, $a$, and $\lambda$ are positive real constants.

(a) Use Equation 1.16 to determine $A$.
(b) Find $\langle x \rangle$, $\langle x^2 \rangle$, and $\sigma$.
(c) Sketch the graph of $\rho(x)$.

2.1 Solution to 1.3(a)

The constant $A$ is determined by Equation 1.16, which states that the probability density integrated over all of $\mathbb{R}$ must equal 1. Hence we integrate the given probability density; the value of $A$ is whatever makes the integral equal to 1. Note that we will use the standard Gaussian integral

$$\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}$$

(see the appendix for a brief derivation of this integral). Since the probability density is almost of the form of our standard Gaussian, we just need a u-substitution to get it into the correct form. Starting with $\rho(x)$,

$$1 = \int_{-\infty}^{\infty} A e^{-\lambda (x-a)^2}\,dx.$$

We use $u = \sqrt{\lambda}\,(x-a)$, with $du = \sqrt{\lambda}\,dx$, to get

$$1 = A \int_{-\infty}^{\infty} e^{-u^2}\,\frac{du}{\sqrt{\lambda}} = \frac{A}{\sqrt{\lambda}} \int_{-\infty}^{\infty} e^{-u^2}\,du = \frac{A}{\sqrt{\lambda}}\,\sqrt{\pi},$$

where the remaining integral is of standard Gaussian form. Therefore

$$A = \sqrt{\frac{\lambda}{\pi}}.$$
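
A quick numerical sanity check of the normalization, as a sketch with arbitrary test constants ($\lambda = 2$, $a = 1$):

```python
import numpy as np
from scipy.integrate import quad

lam, a = 2.0, 1.0                  # arbitrary test constants
A = np.sqrt(lam / np.pi)           # the normalization found above

total, _ = quad(lambda x: A * np.exp(-lam * (x - a)**2), -np.inf, np.inf)
print(total)                       # ~1.0
```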

2.2 Solution to 1.3(b)

2.2.1 Computing $\langle x \rangle$


We will use our definition of the average value of a function (Equation 1.17 or 1.18), which is

$$\langle x \rangle = \int_{-\infty}^{\infty} x\,\rho(x)\,dx, \qquad \langle x^2 \rangle = \int_{-\infty}^{\infty} x^2\,\rho(x)\,dx,$$

with our probability density, where we have already solved for the constant $A$ in part (a). We start by integrating $x$ multiplied by $\rho(x)$:

$$\langle x \rangle = \sqrt{\frac{\lambda}{\pi}} \int_{-\infty}^{\infty} x\,e^{-\lambda (x-a)^2}\,dx.$$

We use the same substitution as in part (a), $u = \sqrt{\lambda}\,(x-a)$ with $du = \sqrt{\lambda}\,dx$. Note that with this substitution $x = u/\sqrt{\lambda} + a$. Plugging this in we obtain

$$\langle x \rangle = \frac{1}{\sqrt{\pi}} \int_{-\infty}^{\infty} \left(\frac{u}{\sqrt{\lambda}} + a\right) e^{-u^2}\,du = \frac{1}{\sqrt{\pi\lambda}} \int_{-\infty}^{\infty} u\,e^{-u^2}\,du + \frac{a}{\sqrt{\pi}} \int_{-\infty}^{\infty} e^{-u^2}\,du.$$

We computed the second integral in part (a): it equals $\sqrt{\pi}$, so that term contributes just $a$, since $a$ is a constant. The first integral can be solved using properties of even and odd functions: since $u$ is odd and $e^{-u^2}$ is even, the resultant function is odd, and a symmetric integral of an odd function about the origin is zero. We can also see this by our standard substitution method. Making the substitution $w = u^2$, the Jacobian $dw = 2u\,du$ cancels the $u$ being multiplied by the exponential; however, our limits become $w(-\infty) = (-\infty)^2 = \infty$ and $w(\infty) = \infty$, the same lower and upper limit. Consequently, the integral is zero. Therefore

$$\langle x \rangle = 0 + \frac{a}{\sqrt{\pi}}\,\sqrt{\pi} = a.$$

This makes sense physically because the probability density function has its maximum, and is symmetric, about $x = a$.


2.2.2 Computing $\langle x^2 \rangle$

Using Equation 1.18 (with $f(x) = x^2$) we can set up the integral needed to compute the average value of $x^2$:

$$\langle x^2 \rangle = \sqrt{\frac{\lambda}{\pi}} \int_{-\infty}^{\infty} x^2\,e^{-\lambda (x-a)^2}\,dx.$$

First we shall use substitution to get it into the standard Gaussian form. So $u = x - a$ and $x^2 = (u + a)^2 = u^2 + 2ua + a^2$. We have the same limits due to the one-to-one nature of the substitution, and our Jacobian is $du = dx$. Therefore

$$\langle x^2 \rangle = \sqrt{\frac{\lambda}{\pi}} \int_{-\infty}^{\infty} (u + a)^2\,e^{-\lambda u^2}\,du = \sqrt{\frac{\lambda}{\pi}} \int_{-\infty}^{\infty} \left(u^2 + 2ua + a^2\right) e^{-\lambda u^2}\,du$$

$$\langle x^2 \rangle = \sqrt{\frac{\lambda}{\pi}} \left[\int_{-\infty}^{\infty} u^2 e^{-\lambda u^2}\,du + 2a \int_{-\infty}^{\infty} u\,e^{-\lambda u^2}\,du + a^2 \int_{-\infty}^{\infty} e^{-\lambda u^2}\,du\right].$$

Note that the last two integrals are the same as when we calculated the average value of $x$ (except for a factor of $a$). Hence

$$\langle x^2 \rangle = \sqrt{\frac{\lambda}{\pi}} \left[\int_{-\infty}^{\infty} u^2 e^{-\lambda u^2}\,du + 2a \cdot 0 + a^2 \sqrt{\frac{\pi}{\lambda}}\,\right].$$

Now we must evaluate the first integral. There is a nifty trick that Matthew Whitman, a graduate student at UCSC, taught me a while ago. Apparently it is one of Feynman's favorite tricks when tackling integrals.² The trick is called differentiation under the integral sign. We know that

$$\int_{-\infty}^{\infty} e^{-\lambda x^2}\,dx = \sqrt{\frac{\pi}{\lambda}}, \qquad (*)$$

and we want to calculate the integral of $x^2 e^{-\lambda x^2}$, which happens to be the negative of the derivative of the Gaussian with respect to $\lambda$:

$$\frac{d}{d\lambda}\,e^{-\lambda x^2} = -x^2 e^{-\lambda x^2}.$$

Since the integral depends on $\lambda$ only through the integrand, we may differentiate both sides of (*) under the integral sign:

$$\frac{d}{d\lambda} \int_{-\infty}^{\infty} e^{-\lambda u^2}\,du = \frac{d}{d\lambda} \sqrt{\frac{\pi}{\lambda}}$$

$$\int_{-\infty}^{\infty} \frac{d}{d\lambda}\,e^{-\lambda u^2}\,du = -\frac{\sqrt{\pi}}{2\lambda^{3/2}}$$

$$-\int_{-\infty}^{\infty} u^2 e^{-\lambda u^2}\,du = -\frac{\sqrt{\pi}}{2\lambda^{3/2}}$$

$$\int_{-\infty}^{\infty} u^2 e^{-\lambda u^2}\,du = \frac{\sqrt{\pi}}{2\lambda^{3/2}}.$$

This is the expression for the integral we were trying to solve. Therefore,

$$\langle x^2 \rangle = \sqrt{\frac{\lambda}{\pi}} \left[\frac{\sqrt{\pi}}{2\lambda^{3/2}} + 0 + a^2 \sqrt{\frac{\pi}{\lambda}}\,\right] = \frac{1}{2\lambda} + a^2.$$

2.2.3 Computing Standard Deviation


The standard deviation is just

$$\sigma = \sqrt{\langle x^2 \rangle - \langle x \rangle^2} = \sqrt{\frac{1}{2\lambda} + a^2 - a^2} = \sqrt{\frac{1}{2\lambda}} = \frac{1}{\sqrt{2\lambda}}.$$
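
Continuing the numerical sketch from part (a), the moments and standard deviation can be checked with the same test constants:

```python
import numpy as np
from scipy.integrate import quad

lam, a = 2.0, 1.0
A = np.sqrt(lam / np.pi)
rho = lambda x: A * np.exp(-lam * (x - a)**2)

mean_x, _ = quad(lambda x: x * rho(x), -np.inf, np.inf)      # ~a = 1.0
x2_mean, _ = quad(lambda x: x**2 * rho(x), -np.inf, np.inf)  # ~1/(2 lam) + a^2 = 1.25
sigma = np.sqrt(x2_mean - mean_x**2)                         # ~1/sqrt(2 lam) = 0.5
print(mean_x, x2_mean, sigma)
```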

2.3 Solution to 1.3(c)

To graph the function I will use points such as $x = 0, a, 2a$, etc., and I will look at both limits as $x$ gets very positively large and very negatively large (i.e. $x \to \pm\infty$).

[Hand-drawn sketch: a bell curve peaked at $x = a$, falling to zero as $x \to \pm\infty$.]

I will also use Wolfram to confirm my hand-drawn graph above. Due to the limits of Wolfram I will set $a = 10$.
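
For those without Wolfram handy, a short matplotlib sketch (with the same illustrative choice $a = 10$ and an assumed $\lambda = 1$) draws the same bell shape:

```python
import numpy as np
import matplotlib.pyplot as plt

lam, a = 1.0, 10.0                 # lambda assumed for illustration
A = np.sqrt(lam / np.pi)

x = np.linspace(a - 4, a + 4, 400)
plt.plot(x, A * np.exp(-lam * (x - a)**2))
plt.xlabel("x")
plt.ylabel("rho(x)")
plt.title("Gaussian probability density centered at x = a")
plt.show()
```
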
² See Appendix B for a nice anecdote about Feynman's point of view of this technique.

Problem 3

Consider a two-state system. The most general matrices one can write are linear combinations of the unit matrix and the three Pauli matrices $\vec{\sigma}$. Adding the unit matrix to a Hamiltonian only changes the overall zero of energy, so without loss of generality we may consider a Hamiltonian

$$H = \sum_{a=1}^{3} B_a \sigma_a,$$

where the $B_a$ are 3 real numbers. Prove the following identity, starting from the algebra

$$\sigma_a \sigma_b = \delta_{ab} + i\epsilon_{abc}\,\sigma_c,$$

and show that

$$e^{iHt}\,\sigma_a\,e^{-iHt} = R_{ab}(t)\,\sigma_b,$$

where $R_{ab}$ is the matrix of a three-dimensional rotation around the $\vec{B}$ axis. What is the angle of rotation?

3.1 Solution to Problem 3, Part A

The Pauli matrices are

$$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$

Let us first work out every possible multiplication combination, to sniff out a pattern and for later reference:

$$\sigma_1 \sigma_2 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} = \begin{pmatrix} 0 \cdot 0 + 1 \cdot i & 0 \cdot (-i) + 1 \cdot 0 \\ 1 \cdot 0 + 0 \cdot i & 1 \cdot (-i) + 0 \cdot 0 \end{pmatrix} = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix} = i\sigma_3 = -\sigma_2\sigma_1.$$

We will leave some elementary matrix-algebra steps as an exercise for the reader for the following combinations:

$$\sigma_2 \sigma_3 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix} = i\sigma_1 = -\sigma_3\sigma_2$$

$$\sigma_3 \sigma_1 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} = i\sigma_2 = -\sigma_1\sigma_3$$

$$\sigma_1^2 = \sigma_2^2 = \sigma_3^2 = \mathbb{1}.$$
From the previous algebra we see the pattern (for $a \neq b$)

$$\sigma_a \sigma_b = -\sigma_b \sigma_a.$$

Now we shall look at the commutation and anti-commutation of the Pauli matrices. First consider the commutator,

$$[\sigma_a, \sigma_b] = \sigma_a \sigma_b - \sigma_b \sigma_a.$$

But as we noted above, $\sigma_b \sigma_a = -\sigma_a \sigma_b$ for $a \neq b$; hence

$$[\sigma_a, \sigma_b] = \sigma_a \sigma_b - (-\sigma_a \sigma_b) = 2\sigma_a \sigma_b = 2i\epsilon_{abc}\,\sigma_c,$$

while every matrix trivially commutes with itself, so $[\sigma_a, \sigma_a] = 0$. We can combine these two cases into one line using the Levi-Civita symbol $\epsilon_{abc}$, where $\epsilon_{abc} = \epsilon_{cab} = \epsilon_{bca} = -\epsilon_{cba} = -\epsilon_{bac} = -\epsilon_{acb}$, with $\epsilon_{123} = 1$, and $\epsilon_{abc} = 0$ otherwise (whenever any two indices coincide). From the multiplication table above we see that swapping the order of multiplication flips the sign, exactly matching the Levi-Civita index rules, so the commutators of the Pauli matrices are written compactly as

$$[\sigma_a, \sigma_b] = 2i\epsilon_{abc}\,\sigma_c.$$

For the anti-commutation, when $a \neq b$ we get the relation

$$\{\sigma_a, \sigma_b\} = \sigma_a \sigma_b + \sigma_b \sigma_a = \sigma_a \sigma_b + (-\sigma_a \sigma_b) = 0.$$

However, if $a = b$ then we have

$$\{\sigma_a, \sigma_a\} = \sigma_a \sigma_a + \sigma_a \sigma_a = \sigma_a^2 + \sigma_a^2 = \mathbb{1} + \mathbb{1} = 2\,\mathbb{1}.$$

We can combine these two results into a compact form by making use of the Kronecker delta $\delta_{ab}$, which is 1 if $a = b$ and zero otherwise. Hence

$$\{\sigma_a, \sigma_b\} = \sigma_a \sigma_b + \sigma_b \sigma_a = 2\delta_{ab}.$$

Finally, adding the commutator of the Pauli matrices to the anti-commutator, we find a general form for the multiplication of two Pauli matrices, as follows:

$$\{\sigma_a, \sigma_b\} + [\sigma_a, \sigma_b] = 2\delta_{ab} + 2i\epsilon_{abc}\,\sigma_c$$
$$(\sigma_a \sigma_b + \sigma_b \sigma_a) + (\sigma_a \sigma_b - \sigma_b \sigma_a) = 2\delta_{ab} + 2i\epsilon_{abc}\,\sigma_c$$
$$2\sigma_a \sigma_b = 2\delta_{ab} + 2i\epsilon_{abc}\,\sigma_c$$
$$\sigma_a \sigma_b = \delta_{ab} + i\epsilon_{abc}\,\sigma_c.$$
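
As a sanity check, here is a short NumPy sketch that verifies the product identity $\sigma_a\sigma_b = \delta_{ab} + i\epsilon_{abc}\sigma_c$ for every index pair:

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2)

def eps(a, b, c):
    # Levi-Civita symbol for indices 0, 1, 2
    return (a - b) * (b - c) * (c - a) / 2

for a in range(3):
    for b in range(3):
        rhs = (a == b) * I2 + 1j * sum(eps(a, b, c) * sigma[c] for c in range(3))
        assert np.allclose(sigma[a] @ sigma[b], rhs)
print("sigma_a sigma_b = delta_ab + i eps_abc sigma_c holds for all a, b")
```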

3.2 Solution to Problem 3, Part B

From class notes we have proved that

$$e^{iHt} = \cos(|B|t) + i\hat{B}_a \sigma_a \sin(|B|t), \qquad \text{where} \quad \hat{B}_a = \frac{B_a}{|B|}, \quad |B| = \sqrt{B_1^2 + B_2^2 + B_3^2}.$$

Now we will just expand the exponential parts and carry out the algebra. Each exponential expansion carries its own dummy index, since there is an implied summation occurring; this also gives us the most general form:

$$e^{iHt}\,\sigma_a\,e^{-iHt} = \left[\cos(|B|t) + i\hat{B}_b \sigma_b \sin(|B|t)\right] \sigma_a \left[\cos(|B|t) - i\hat{B}_c \sigma_c \sin(|B|t)\right].$$

$$e^{iHt}\,\sigma_a\,e^{-iHt} = \sigma_a \cos^2(|B|t) + i\hat{B}_b\,\sigma_b\sigma_a \cos(|B|t)\sin(|B|t) - i\hat{B}_c\,\sigma_a\sigma_c \cos(|B|t)\sin(|B|t) + \hat{B}_b\hat{B}_c\,\sigma_b\sigma_a\sigma_c \sin^2(|B|t).$$

Here we see that we have four terms due to basic FOILing. Now let us group terms so we can make use of our commutator relations and the Kronecker delta and Levi-Civita symbols. Also, please note that $\hat{B}_b \sigma_b = \hat{B}_c \sigma_c$, since $b$ and $c$ are dummy summation indices (please see the appendix for a clear proof of this property). Relabeling $b \to c$ in the second term, the two cross terms combine into a commutator:

$$i\hat{B}_c \cos(|B|t)\sin(|B|t)\,\big(\sigma_c \sigma_a - \sigma_a \sigma_c\big) = i\hat{B}_c \cos(|B|t)\sin(|B|t)\,[\sigma_c, \sigma_a].$$

We can simplify further by using the trigonometric identity $2\sin(\theta)\cos(\theta) = \sin(2\theta)$ and invoking the commutation relation from Part A, $[\sigma_c, \sigma_a] = 2i\epsilon_{cab}\,\sigma_b = 2i\epsilon_{abc}\,\sigma_b$ (the Levi-Civita symbol is cyclic), so the cross terms become

$$i\hat{B}_c\,\frac{\sin(2|B|t)}{2}\;2i\epsilon_{abc}\,\sigma_b = -\hat{B}_c\,\epsilon_{abc}\,\sigma_b \sin(2|B|t).$$

For the $\sin^2(|B|t)$ term we use the product identity from Part A twice:

$$\sigma_b\sigma_a\sigma_c = \left(\delta_{ab} + i\epsilon_{bad}\,\sigma_d\right)\sigma_c = \delta_{ab}\sigma_c + i\epsilon_{bad}\left(\delta_{dc} + i\epsilon_{dce}\,\sigma_e\right) = \delta_{ab}\sigma_c + i\epsilon_{bac} - \epsilon_{bad}\,\epsilon_{dce}\,\sigma_e.$$

Contracting with $\hat{B}_b\hat{B}_c$, the middle term vanishes (a symmetric product $\hat{B}_b\hat{B}_c$ contracted with an antisymmetric Levi-Civita symbol is zero), and the contraction identity $\epsilon_{dba}\,\epsilon_{dce} = \delta_{bc}\delta_{ae} - \delta_{be}\delta_{ac}$ gives

$$\hat{B}_b\hat{B}_c\,\sigma_b\sigma_a\sigma_c = \hat{B}_a\hat{B}_c\sigma_c - \big(\hat{B}_b\hat{B}_b\big)\sigma_a + \hat{B}_a\hat{B}_b\sigma_b = 2\hat{B}_a\big(\hat{B}\cdot\vec{\sigma}\big) - \sigma_a,$$

using $\hat{B}_b\hat{B}_b = 1$ for the unit vector $\hat{B}$. Putting the three pieces together and using the double-angle identities $\cos^2\theta - \sin^2\theta = \cos(2\theta)$ and $2\sin^2\theta = 1 - \cos(2\theta)$,

$$e^{iHt}\,\sigma_a\,e^{-iHt} = \sigma_a\cos(2|B|t) + \hat{B}_a\hat{B}_b\,\sigma_b\,\big(1 - \cos(2|B|t)\big) - \epsilon_{abc}\hat{B}_c\,\sigma_b\sin(2|B|t)$$

$$e^{iHt}\,\sigma_a\,e^{-iHt} = \underbrace{\Big[\cos(2|B|t)\,\delta_{ab} + \big(1 - \cos(2|B|t)\big)\hat{B}_a\hat{B}_b - \sin(2|B|t)\,\epsilon_{abc}\hat{B}_c\Big]}_{R_{ab}(t)}\,\sigma_b.$$

Here $R_{ab}(t)$ is the rotation matrix (Rodrigues' formula for a rotation about the axis $\hat{B}$), and the angle of rotation is just $2|B|t$. Choosing axes so that $\hat{B}$ points along the $z$-axis and expanding out, we see that the rotation matrix is

$$R_{ab}(t) = \begin{pmatrix} \cos(2|B|t) & -\sin(2|B|t) & 0 \\ \sin(2|B|t) & \cos(2|B|t) & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

This corresponds to a rotation around one axis, in our case the $z$-axis.
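
A numerical sketch (using SciPy's `expm` and an arbitrary field $\vec{B}$) can verify both the identity and the rotation angle:

```python
import numpy as np
from scipy.linalg import expm

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

B = np.array([0.3, -0.5, 0.8])   # arbitrary real field
t = 0.7
H = sum(B[a] * sigma[a] for a in range(3))
U = expm(1j * H * t)

# Rodrigues rotation by angle 2|B|t about the unit axis B/|B|
Bn = np.linalg.norm(B); n = B / Bn; th = 2 * Bn * t
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
R = (np.cos(th) * np.eye(3) + (1 - np.cos(th)) * np.outer(n, n)
     - np.sin(th) * np.einsum('abc,c->ab', eps, n))

for a in range(3):
    lhs = U @ sigma[a] @ U.conj().T           # e^{iHt} sigma_a e^{-iHt}
    rhs = sum(R[a, b] * sigma[b] for b in range(3))
    assert np.allclose(lhs, rhs)
print("rotation identity verified; angle = 2|B|t")
```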

Problem 1.5
Consider the following wavefunction:

$$\Psi(x, t) = A e^{-\lambda |x|}\,e^{-i\omega t},$$

where $A$, $\lambda$, and $\omega$ are positive real constants.

(a) Normalize $\Psi(x, t)$.
(b) Determine the expectation values of $x$ and $x^2$.

4.1 Solution to 1.5(a)

To normalize the given wavefunction we must use Equations 1.20 and 1.27, which state that the wavefunction integrated over all space gives a probability of 1 (there is a 100% chance of finding the particle somewhere), and that the normalization is time-independent. Hence

$$1 = \int_{-\infty}^{\infty} |\Psi(x,t)|^2\,dx = \int_{-\infty}^{\infty} \Psi^*(x,t)\,\Psi(x,t)\,dx$$

$$1 = \int_{-\infty}^{\infty} A e^{-\lambda|x|} e^{+i\omega t}\,A e^{-\lambda|x|} e^{-i\omega t}\,dx = \int_{-\infty}^{\infty} A^2 e^{-2\lambda|x|} e^{i\omega t - i\omega t}\,dx = \int_{-\infty}^{\infty} A^2 e^{-2\lambda|x|}\,dx.$$

Since $|x|$ is an even function, the integrand is even, so we can write the integral over the half-line $[0, \infty)$ and double it. Now let us use the substitution $u = 2\lambda x$, where as always our limits stay the same and our Jacobian is $du = 2\lambda\,dx$:³

$$1 = 2\int_0^{\infty} A^2 e^{-2\lambda x}\,dx = 2A^2 \int_0^{\infty} e^{-u}\,\frac{du}{2\lambda} = \frac{A^2}{\lambda} \int_0^{\infty} e^{-u}\,du = \frac{A^2}{\lambda}\,\Gamma(1),$$

which is just the gamma function, so our integral is greatly simplified. Since $\Gamma(1) = 0! = 1$,

$$A = \sqrt{\lambda}.$$

Note that we have also satisfied the requirement of Equation 1.27: the normalization is indeed time-independent.

³ There are some missing steps here, but I will leave them as an exercise for the grader :).
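
A quick numeric check of the normalization, as a sketch with an arbitrary test value $\lambda = 1.5$ (the phase $e^{-i\omega t}$ drops out of $|\Psi|^2$):

```python
import numpy as np
from scipy.integrate import quad

lam = 1.5                          # arbitrary test constant
A = np.sqrt(lam)

norm, _ = quad(lambda x: (A * np.exp(-lam * abs(x)))**2, -np.inf, np.inf)
print(norm)                        # ~1.0
```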

4.2 Solution to 1.5(b)

4.2.1 Computing $\langle x \rangle$

To calculate the average position we must take the wavefunction multiplied by the $x$ operator and by its complex conjugate, then integrate over all space:

$$\langle x \rangle = \int_{-\infty}^{\infty} \Psi^*(x,t)\,x\,\Psi(x,t)\,dx$$

$$= \int_{-\infty}^{\infty} \sqrt{\lambda}\,e^{-\lambda|x|} e^{+i\omega t}\;x\;\sqrt{\lambda}\,e^{-\lambda|x|} e^{-i\omega t}\,dx = \lambda \int_{-\infty}^{\infty} x\,e^{-2\lambda|x|}\,dx.$$

However, we see that $x$ is an odd function and $e^{-2\lambda|x|}$ is an even function, so their product is another odd function. Since we are integrating an odd function over a symmetric domain, the integral is just zero. Hence

$$\langle x \rangle = 0.$$
4.2.2 Computing $\langle x^2 \rangle$

To calculate the average of the position squared we must take the wavefunction multiplied by the $x^2$ operator and by its complex conjugate, then integrate over all space:

$$\langle x^2 \rangle = \int_{-\infty}^{\infty} \Psi^*(x,t)\,x^2\,\Psi(x,t)\,dx = \lambda \int_{-\infty}^{\infty} x^2 e^{-2\lambda|x|}\,dx = 2\lambda \int_0^{\infty} x^2 e^{-2\lambda x}\,dx.$$

We see that the integrand is an even function, so we can just integrate from $0$ to $\infty$ and multiply that integral by 2. We will also use our standard substitution $u = 2\lambda x$, where the limits stay the same and the Jacobian is $du = 2\lambda\,dx$:

$$\langle x^2 \rangle = 2\lambda \int_0^{\infty} \frac{u^2}{4\lambda^2}\,e^{-u}\,\frac{du}{2\lambda} = \frac{1}{4\lambda^2} \int_0^{\infty} u^2 e^{-u}\,du = \frac{1}{4\lambda^2}\,\Gamma(3) = \frac{2}{4\lambda^2} = \frac{1}{2\lambda^2}.$$

Since $\langle x \rangle = 0$, the standard deviation is just the square root of $\langle x^2 \rangle$:

$$\sigma = \sqrt{\langle x^2 \rangle - \langle x \rangle^2} = \sqrt{\langle x^2 \rangle} = \frac{1}{\sqrt{2}\,\lambda}.$$
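
Both moments check out numerically (same sketch constants as above):

```python
import numpy as np
from scipy.integrate import quad

lam = 1.5
x2_mean, _ = quad(lambda x: x**2 * lam * np.exp(-2 * lam * abs(x)), -np.inf, np.inf)
print(x2_mean, 1 / (2 * lam**2))   # both ~0.2222...
print(np.sqrt(x2_mean))            # sigma ~ 1/(sqrt(2)*lam) = 0.4714...
```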

Appendix A: Derivation of the Gaussian Integral

First define the same integral with two different dummy variables:

$$I = \int_{-\infty}^{\infty} e^{-x^2}\,dx, \qquad I = \int_{-\infty}^{\infty} e^{-y^2}\,dy.$$

Now just multiply both integrals together (or, equivalently, square one of them) to get

$$I^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-x^2} e^{-y^2}\,dx\,dy = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-x^2 - y^2}\,dx\,dy.$$

Then we just change coordinates from rectangular to polar. Since we are integrating over all space in the rectangular coordinates, we will integrate over all space in the polar coordinates as well. Remember that the Jacobian for polar coordinates is just $r\,dr\,d\theta$. Lastly, note that $r^2 = x^2 + y^2$. Hence the integral is now

$$I^2 = \int_0^{2\pi}\int_0^{\infty} e^{-r^2}\,r\,dr\,d\theta = 2\pi \int_0^{\infty} r\,e^{-r^2}\,dr.$$

Using a simple u-sub with $u = r^2$ and its Jacobian being $du = 2r\,dr$ (the limits do not change, since they are originally $0$ and $\infty$ and we are looking at essentially the square roots of those), we get

$$I^2 = 2\pi \int_0^{\infty} e^{-u}\,\frac{du}{2} = \pi\left[-e^{-u}\right]_0^{\infty} = \pi\left(0 - (-e^{0})\right) = \pi.$$

Now we have obtained the value of the square of our integral. To obtain the original integral we just take the square root of our previous result:

$$I = \int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}.$$
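
The value is easy to confirm numerically (a sketch):

```python
import numpy as np
from scipy.integrate import quad

I, _ = quad(lambda x: np.exp(-x**2), -np.inf, np.inf)
print(I, np.sqrt(np.pi))   # both ~1.7724538509
```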

Appendix B: Differentiation Under an Integral Sign: A Feynman Anecdote

The following quote is from R. P. Feynman's book "Surely You're Joking, Mr. Feynman!" Citation: Feynman, R. P., Surely You're Joking, Mr. Feynman!, pp. 71-72, Bantam Books, 1986.

"One thing I never did learn was contour integration. I had learned to do integrals by various methods shown in a book that my high school physics teacher Mr. Bader had given me.

The book also showed how to differentiate parameters under the integral sign -- it's a certain operation. It turns out that's not taught very much in the universities; they don't emphasize it. But I caught on how to use that method, and I used that one damn tool again and again. So because I was self-taught using that book, I had peculiar methods of doing integrals.

The result was that, when guys at MIT or Princeton had trouble doing a certain integral, it was because they couldn't do it with the standard methods they had learned in school. If it was contour integration, they would have found it; if it was a simple series expansion, they would have found it. Then I come along and try differentiating under the integral sign, and often it worked. So I got a great reputation for doing integrals, only because my box of tools was different from everybody else's, and they had tried all their tools on it before giving the problem to me."

One of the most fascinating people ever to walk the earth, showing how such a simple trick simplified his life and gave him a great reputation. Now for a simple proof to motivate the validity of this technique. We wish to evaluate

$$\frac{d}{db} \int_a^b f(x)\,dx.$$

We will use the fundamental theorem of calculus and the definition of a derivative:

$$F(b) - F(a) = \int_a^b f(x)\,dx \qquad \text{(FTC)}$$

$$\frac{dF(x)}{dx} = \lim_{\Delta x \to 0} \frac{F(x + \Delta x) - F(x)}{\Delta x}.$$

Then

$$\frac{d}{db} \int_a^b f(x)\,dx = \lim_{\Delta b \to 0} \frac{1}{\Delta b} \left[\int_a^{b + \Delta b} f(x)\,dx - \int_a^b f(x)\,dx\right]$$

$$= \lim_{\Delta b \to 0} \frac{1}{\Delta b} \left[F(b + \Delta b) - F(a) - \big(F(b) - F(a)\big)\right]$$

$$= \lim_{\Delta b \to 0} \frac{1}{\Delta b} \left[F(b + \Delta b) - F(b)\right].$$

This is the definition of a derivative with respect to $b$:

$$\frac{d}{db} \int_a^b f(x)\,dx = \frac{dF(b)}{db} = f(b).$$

This is not a rigorous proof (nor a proof of differentiation under the integral sign with respect to a parameter), but it does provide significant confidence that this technique is legitimate and mathematically sound.
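
As a concrete illustration (a sketch using SymPy), differentiating the Gaussian integral with respect to $\lambda$ reproduces the result used in Problem 1.3(b):

```python
import sympy as sp

x, lam = sp.symbols('x lam', positive=True)

gauss = sp.integrate(sp.exp(-lam * x**2), (x, -sp.oo, sp.oo))   # sqrt(pi/lam)
trick = -sp.diff(gauss, lam)               # differentiate the closed form in lam
direct = sp.integrate(x**2 * sp.exp(-lam * x**2), (x, -sp.oo, sp.oo))

print(sp.simplify(trick - direct))         # 0: both equal sqrt(pi)/(2*lam**(3/2))
```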
