(b) Show that the eigenvalues of a unitary transformation have modulus 1.

(c) Show that the eigenvectors of a unitary transformation belonging to distinct eigenvalues are orthogonal.

3.2 FUNCTION SPACES

We are ready now to apply the machinery of linear algebra to the interesting and important case of function spaces, in which the "vectors" are (complex) functions of x, inner products are integrals, and derivatives appear as linear transformations.

3.2.1 Functions as Vectors

Do functions really behave as vectors? Well, is the sum of two functions a function? Sure. Is addition of functions commutative and associative? Indeed. Is there a "null" function? Yes: f(x) = 0. If you multiply a function by a complex number, do you get another function? Of course. Now, the set of all functions is a bit unwieldy—we'll be concerned with special classes of functions, such as the set of all polynomials of degree < N (Problem 3.2), or the set of all odd functions that go to zero at x = 1, or the set of all periodic functions with period 2. Of course, when you start imposing conditions like this, you've got to make sure that you still meet the requirements for a vector space. For example, the set of all functions whose maximum value is 3 would not constitute a vector space (multiplication by 2 would give you functions with maximum value 6, which are outside the space).

The inner product of two functions, f(x) and g(x), is defined by the integral

\langle f | g \rangle = \int f(x)^* g(x)\, dx   [3.87]

(the limits will depend on the domain of the functions in question). You can check for yourself that it satisfies the three conditions (Equations 3.19, 3.20, and 3.21) for an inner product. Of course, this integral may not converge, so if we want a function space with an inner product, we must restrict the class of functions so as to ensure that \langle f | g \rangle is always well defined. It is clearly necessary that every admissible function be square integrable:

\int |f(x)|^2\, dx < \infty   [3.88]

(otherwise the inner product of f with itself wouldn't even exist). As it turns out, this restriction is also sufficient—if f and g are both square integrable, then the integral in Equation 3.87 is necessarily finite.¹⁵

¹⁵There is a quick phoney "proof" of this, based on the Schwarz inequality (Equation 3.27). The trouble is, we assumed the existence of the inner product in proving the Schwarz inequality (Problem 3.5), so the logic is circular. For a legitimate proof, see F. Riesz and B. Sz.-Nagy, Functional Analysis (New York: Ungar, 1955), Section 21.

For example, consider the set P(N) of all polynomials of degree < N,

p(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_{N-1} x^{N-1},   [3.89]

on the interval −1 ≤ x ≤ 1. They are certainly square integrable, so this is a bona fide inner product space. An obvious basis is the set of powers of x:

|e_1\rangle = 1, \quad |e_2\rangle = x, \quad |e_3\rangle = x^2, \quad \ldots, \quad |e_N\rangle = x^{N-1};   [3.90]

evidently it's an N-dimensional vector space. This is not, however, an orthonormal basis, for

\langle e_1 | e_1 \rangle = \int_{-1}^{1} 1\, dx = 2, \qquad \langle e_1 | e_3 \rangle = \int_{-1}^{1} x^2\, dx = 2/3,

and so on. If you apply the Gram-Schmidt procedure to orthonormalize this basis (Problem 3.25), you get the famous Legendre polynomials, P_n(x) (except that Legendre, who had other things on his mind, didn't normalize them properly):

|e'_n\rangle = \sqrt{n - \tfrac{1}{2}}\, P_{n-1}(x), \quad (n = 1, 2, \ldots, N).   [3.91]

In Table 3.1 I have listed the first few Legendre polynomials.

Table 3.1: The first few Legendre polynomials, P_n(x).

P_0 = 1
P_1 = x
P_2 = (1/2)(3x^2 - 1)
P_3 = (1/2)(5x^3 - 3x)
P_4 = (1/8)(35x^4 - 30x^2 + 3)
P_5 = (1/8)(63x^5 - 70x^3 + 15x)

*Problem 3.25 Orthonormalize the powers of x, on the interval −1 ≤ x ≤ 1, to obtain the first four Legendre polynomials (Equation 3.91).

*Problem 3.26 Let T(N) be the set of all trigonometric functions of the form

f(x) = \sum_{n=0}^{N-1} \left[ a_n \sin(n\pi x) + b_n \cos(n\pi x) \right],   [3.92]

on the interval −1 ≤ x ≤ 1. Show that

|e_n\rangle = \frac{1}{\sqrt{2}}\, e^{i n \pi x}, \quad (n = 0, \pm 1, \ldots, \pm(N-1))   [3.93]

constitutes an orthonormal basis. What is the dimension of this space?
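Equation 3.91 is easy to verify by machine. Below is a minimal sketch (ours, not the text's) using Python's sympy library: it runs the Gram-Schmidt procedure of Problem 3.25 on the monomial basis, with the inner product of Equation 3.87 specialized to real functions on −1 ≤ x ≤ 1, and checks the output against √(n − 1/2) P_{n−1}(x).

```python
# Gram-Schmidt on 1, x, x^2, x^3 over [-1, 1], checked against the
# orthonormalized Legendre polynomials of Equation 3.91.
# A sketch for Problem 3.25; not taken from the text.
import sympy as sp

x = sp.symbols('x')

def inner(f, g):
    # inner product of Equation 3.87, for real functions on [-1, 1]
    return sp.integrate(f * g, (x, -1, 1))

def gram_schmidt(funcs):
    ortho = []
    for f in funcs:
        for e in ortho:
            f = f - inner(e, f) * e          # subtract projections onto earlier vectors
        ortho.append(sp.expand(f / sp.sqrt(inner(f, f))))   # normalize
    return ortho

for n, e in enumerate(gram_schmidt([x**k for k in range(4)]), start=1):
    predicted = sp.sqrt(n - sp.Rational(1, 2)) * sp.legendre(n - 1, x)
    print(n, e, sp.simplify(e - predicted) == 0)   # prints True for each n
```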
Problem 3.27 Consider the set of all functions of the form p(x)e^{−x²/2}, where p(x) is again a polynomial of degree < N in x, on the interval −∞ < x < ∞.

3.2.2 Operators as Linear Transformations

Consider the operator "multiplication by x," x̂, acting on P(∞) (the space of all polynomials); it is Hermitian, since x is real. Does it have eigenfunctions in P(∞)? The eigenvalue equation x̂|p⟩ = λ|p⟩, written out for p(x) = a_0 + a_1 x + a_2 x^2 + ⋯ and matched power by power (it must hold for all x), means

0 = \lambda a_0, \quad a_0 = \lambda a_1, \quad a_1 = \lambda a_2,

and so on. If λ = 0, then all the components are zero, and that's not a legal eigenvector; but if λ ≠ 0, the first equation says a_0 = 0, so the second gives a_1 = 0, and the third says a_2 = 0, and so on, and we're back in the same bind. This Hermitian operator doesn't have a complete set of eigenfunctions—in fact, it doesn't have any at all! Not, at any rate, in P(∞). What would an eigenfunction of x̂ look like? If

x\, g(x) = \lambda\, g(x),   [3.99]

where λ, remember, is a constant, then everywhere except at the one point x = λ we must have g(x) = 0. Evidently the eigenfunctions of x̂ are Dirac delta functions,

g_\lambda(x) = B\, \delta(x - \lambda),   [3.100]

and since delta functions are certainly not polynomials, it is no wonder that the operator x̂ has no eigenfunctions in P(∞).

The moral of the story is that whereas the first two theorems in Section 3.1.5 are completely general (the eigenvalues of a Hermitian operator are real, and the eigenvectors belonging to distinct eigenvalues are orthogonal), the third one (completeness of the eigenvectors) is valid (in general) only for finite-dimensional spaces. In infinite-dimensional spaces, some Hermitian operators have complete sets of eigenvectors (see Problem 3.32d for an example), some have incomplete sets, and some (as we just saw) have no eigenvectors (in the space) at all.¹⁷ Unfortunately, the completeness property is absolutely essential in quantum mechanical applications. In Section 3.3 I'll show you how we manage this problem.

¹⁷In an n-dimensional vector space, every linear transformation can be represented (with respect to a particular basis) by an n × n matrix, and as long as n is finite, the characteristic Equation 3.71 is guaranteed to deliver at least one eigenvalue. But if n is infinite, we can't take the determinant, there is no characteristic equation, and hence there is no assurance that even a single eigenvector exists.

Problem 3.28 Show that exp(−x²/2) is an eigenfunction of the operator (d²/dx²) − x², and find its eigenvalue.

*Problem 3.29
(a) Construct the matrix D representing the derivative operator D̂ = d/dx with respect to the (nonorthonormal) basis (Equation 3.90) in P(N).
(b) Construct the matrix representing D̂ with respect to the (orthonormal) basis (Equation 3.93) in the space T(N) of Problem 3.26.
(c) Construct the matrix X representing the operator x̂ = x with respect to the basis (Equation 3.90) in P(∞). If this is a Hermitian operator (and it is), how come the matrix is not equal to its transpose conjugate?

**Problem 3.30 Construct the matrices D and X in the (orthonormal) basis (Equation 3.91) for P(∞). You will need to use two recursion formulas for Legendre polynomials:

x\, P_n(x) = \frac{n+1}{2n+1}\, P_{n+1}(x) + \frac{n}{2n+1}\, P_{n-1}(x);   [3.101]

\frac{dP_n}{dx} = \sum_{k} (2n - 4k - 1)\, P_{n-2k-1}(x),   [3.102]

where the sum cuts off at the first term with a negative index. Confirm that X is Hermitian but iD is not.
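Problem 3.29(a) lends itself to a quick numerical check. The sketch below is ours, not the book's solution, and uses the usual convention that the j-th column of a matrix holds the components of the transformed j-th basis vector: since D̂ x^j = j x^{j−1}, the matrix of D̂ in the monomial basis is strictly upper triangular, and it is nilpotent, which is another way of seeing that 0 is its only eigenvalue in P(N).

```python
# Matrix of the derivative operator D = d/dx in the monomial basis
# |e_1> = 1, |e_2> = x, ..., |e_N> = x^(N-1) of P(N)  (Problem 3.29a).
# Since D x^j = j x^(j-1), each column has a single entry just above
# the diagonal. A sketch, not the book's solution.
import numpy as np

N = 5                                # P(N): polynomials of degree < N
D = np.zeros((N, N))
for j in range(1, N):
    D[j - 1, j] = j                  # D|e_{j+1}> = j |e_j>

p = np.ones(N)                       # p(x) = 1 + x + x^2 + x^3 + x^4
print(D @ p)                         # [1. 2. 3. 4. 0.], i.e. p'(x) = 1 + 2x + 3x^2 + 4x^3

# D is nilpotent: N derivatives kill every polynomial of degree < N,
# so 0 is its only eigenvalue (eigenvectors: the constant polynomials).
print(np.linalg.matrix_power(D, N))  # the zero matrix
```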
Problem 3.31 Consider the operator D̂² = d²/dx². Under what conditions (on the admissible functions) is it a Hermitian operator? Construct the matrix representing D̂² in P(N) (with respect to the basis Equation 3.90), and confirm that it is the square of the matrix representing D̂ (Problem 3.29a).

Problem 3.32
(a) Show that iD̂ is Hermitian in the space T(N) of Problem 3.26.
(b) What are its eigenvalues and (normalized) eigenfunctions, in T(N)?
(c) Check that your results in (b) satisfy the three theorems in Section 3.1.5.
(d) Confirm that iD̂ has a complete set of eigenfunctions in T(∞) (quote the pertinent theorem from Fourier analysis).

3.2.3 Hilbert Space

To construct the real number system, mathematicians typically begin with the integers and use them to define the rationals (ratios of integers). They proceed to show that the rational numbers are "dense," in the sense that between any two of them (no matter how close together they are) you can always find another one (in fact, infinitely many of them). And yet, the set of all rational numbers has "gaps" in it, for you can easily think of infinite sequences of rational numbers whose limit is not a rational number. For example,

x_N = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots + (-1)^{N+1}\,\frac{1}{N}   [3.103]

is a rational number for any finite integer N, but its limit (as N → ∞) is ln 2, which is not a rational number. So the final step in constructing the real numbers is to "fill in the gaps," or "complete" the set, by including the limits of all convergent sequences of rational numbers. (Of course, some sequences don't have limits, and those we do not include. For example, if you change the minus signs in Equation 3.103 to plus signs, the sequence does not converge, and it doesn't correspond to any real number.)

The same thing happens with function spaces. For example, the set of all polynomials, P(∞), includes functions of the form

f_N(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots + \frac{x^N}{N!}   [3.104]

(for finite N), but it does not include the limit as N → ∞,

1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots = e^x,   [3.105]

for e^x is not itself a polynomial, although it is the limit of a sequence of polynomials. To complete the space, we would like to include all such functions. Of course, some sequences of polynomials don't have limits, or have them only for a restricted range of x. For example, the series

1 + x + x^2 + x^3 + \cdots = \frac{1}{1-x}

converges only for |x| < 1. And even if the sequence does have a limit, the limit function may not be square integrable, so we can't include it in an inner product space. To complete the space, then, we throw in all square-integrable convergent sequences of functions in the space. Notice that completing a space does not involve the introduction of any new basis vectors; it is just that we now allow linear combinations involving an infinite number of terms,

|f\rangle = \sum_{j=1}^{\infty} a_j\, |e_j\rangle,   [3.106]

provided \langle f | f \rangle is finite—which is to say (if the basis is orthonormal), provided

\sum_{j=1}^{\infty} |a_j|^2 < \infty.   [3.107]

A complete¹⁸ inner product space is called a Hilbert space.¹⁹ The completion of P(∞) is easy to characterize: it is nothing less than the set of all square-integrable functions on the interval −1 ≤ x ≤ +1, which we call L₂(−1, +1). We shall be concerned mainly with the space of square-integrable functions on the whole line, L₂(−∞, +∞) (or L₂, for short), because this is where quantum mechanical wave functions live. Indeed, to physicists L₂ is practically synonymous with "Hilbert space."

¹⁸Note the two entirely different uses of the word "complete": a set of vectors is complete if it spans the space; an inner product space is complete if it has no "holes" in it (i.e., it includes all its limits).

¹⁹Every finite-dimensional inner product space is trivially complete, so they're all technically Hilbert spaces, but the term is usually reserved for infinite-dimensional spaces.

The eigenfunctions of the Hermitian operators iD̂ = i\,d/dx and x̂ = x are of particular importance. As we have already found (Equations 3.95 and 3.100), they take the form

f_\lambda(x) = A\, e^{-i\lambda x} \quad \text{and} \quad g_\lambda(x) = B\, \delta(x - \lambda),

respectively.
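Both kinds of "gap" are easy to watch numerically. The following sketch is illustrative only, not from the text: it shows the rational partial sums of Equation 3.103 closing in on the irrational number ln 2, and the polynomial partial sums of Equation 3.104 closing in on e^x.

```python
# Rational partial sums converging to ln 2 (Equation 3.103), and
# polynomial partial sums converging to e^x (Equation 3.104).
# Illustrative sketch only.
import math

def x_N(N):
    # 1 - 1/2 + 1/3 - ... : a rational number for every finite N
    return sum((-1) ** (n + 1) / n for n in range(1, N + 1))

for N in (10, 1000, 100000):
    print(N, x_N(N), abs(x_N(N) - math.log(2)))   # error -> 0

def f_N(t, N):
    # partial sum of the exponential series: a polynomial, hence in P(infinity)
    return sum(t ** n / math.factorial(n) for n in range(N + 1))

print(f_N(1.0, 20), math.e)   # agree to roughly machine precision
```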
Note that there is no restriction on the eigenvalues—every real number is an eigenvalue of iD̂, and every real number is an eigenvalue of x̂. The set of all eigenvalues of a given operator is called its spectrum; iD̂ and x̂ are operators with continuous spectra, in contrast to the discrete spectra we have encountered heretofore. Unfortunately, these eigenfunctions do not lie in Hilbert space, and hence, in the strictest sense, do not count as vectors at all. For neither of them is square integrable:

\int_{-\infty}^{\infty} f_\lambda(x)^* f_\lambda(x)\, dx = |A|^2 \int_{-\infty}^{\infty} e^{i\lambda x} e^{-i\lambda x}\, dx = |A|^2 \int_{-\infty}^{\infty} 1\, dx \to \infty,

and

\int_{-\infty}^{\infty} g_\lambda(x)^* g_\lambda(x)\, dx = |B|^2 \int_{-\infty}^{\infty} \delta(x - \lambda)\, \delta(x - \lambda)\, dx = |B|^2\, \delta(0) \to \infty.

Nevertheless, they do satisfy a kind of orthogonality condition:

\int_{-\infty}^{\infty} f_\lambda(x)^* f_\mu(x)\, dx = A_\lambda^* A_\mu \int_{-\infty}^{\infty} e^{i(\lambda - \mu)x}\, dx = |A|^2\, 2\pi\, \delta(\lambda - \mu)

(see Equation 2.126), and

\int_{-\infty}^{\infty} g_\lambda(x)^* g_\mu(x)\, dx = B_\lambda^* B_\mu \int_{-\infty}^{\infty} \delta(x - \lambda)\, \delta(x - \mu)\, dx = |B|^2\, \delta(\lambda - \mu).

It is customary to "normalize" these (unnormalizable) functions by picking the constant so as to leave an unadorned Dirac delta function on the right side (replacing the Kronecker delta in the usual orthonormality condition, Equation 3.23).²⁰ Thus

f_\lambda(x) = \frac{1}{\sqrt{2\pi}}\, e^{-i\lambda x}, \quad \text{with} \quad \langle f_\lambda | f_\mu \rangle = \delta(\lambda - \mu),   [3.108]

are the "normalized" eigenfunctions of iD̂, and

g_\lambda(x) = \delta(x - \lambda), \quad \text{with} \quad \langle g_\lambda | g_\mu \rangle = \delta(\lambda - \mu),   [3.109]

are the "normalized" eigenfunctions of x̂.²¹

²⁰I call this "normalization" (in quotes) so you won't confuse it with the real thing.

²¹We are engaged here in a dangerous stretching of the rules, pioneered by Dirac (who had a kind of inspired confidence that he could get away with it) and disparaged by von Neumann (who was more sensitive to mathematical niceties), in their rival classics (P. A. M. Dirac, The Principles of Quantum Mechanics, first published in 1930, 4th ed., Oxford: Clarendon Press, 1958, and J. von Neumann, The Mathematical Foundations of Quantum Mechanics, first published in 1932, revised by Princeton Univ. Press, 1955). Dirac notation invites us to apply the language and methods of linear algebra to functions that lie in the "almost normalizable" suburbs of Hilbert space. It turns out to be powerful and effective beyond any reasonable expectation.

What if we use the "normalized" eigenfunctions of iD̂ and x̂ as bases for L₂?²² Because the spectrum is continuous, the linear combination becomes an integral:

|f\rangle = \int_{-\infty}^{\infty} a_\lambda\, |f_\lambda\rangle\, d\lambda; \qquad |f\rangle = \int_{-\infty}^{\infty} b_\lambda\, |g_\lambda\rangle\, d\lambda.   [3.110]

²²That's right: we're going to use, as bases, sets of functions none of which is actually in the space! They may not be normalizable, but they are complete, and that's all we need.

Taking the inner product with |f_\mu\rangle, and exploiting the "orthonormality" of the basis (Equation 3.108), we obtain the "components" a_\lambda:

\langle f_\mu | f \rangle = \int_{-\infty}^{\infty} a_\lambda\, \langle f_\mu | f_\lambda \rangle\, d\lambda = \int_{-\infty}^{\infty} a_\lambda\, \delta(\lambda - \mu)\, d\lambda = a_\mu,

so

a_\lambda = \langle f_\lambda | f \rangle = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{i\lambda x} f(x)\, dx = F(-\lambda);   [3.111]

evidently the −λ "component" of the vector |f⟩, in the basis of eigenfunctions of iD̂, is the Fourier transform (Equation 2.85) of the function f(x). Likewise,

b_\lambda = \langle g_\lambda | f \rangle = \int_{-\infty}^{\infty} \delta(x - \lambda)\, f(x)\, dx = f(\lambda),   [3.112]

so the λ "component" of the vector |f⟩ in the position basis is f(λ) itself.
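To see Equation 3.111 in action, here is a numerical sketch; the test function f(x) = e^{−x²} is our own choice, not the book's. Its "components" a_λ = ⟨f_λ|f⟩ can be computed by quadrature and compared with the exact Gaussian transform, e^{−λ²/4}/√2.

```python
# Components a_lambda = <f_lambda|f> (Equation 3.111) of f(x) = exp(-x^2)
# in the "normalized" basis of Equation 3.108, checked against the exact
# Gaussian result e^{-lambda^2/4}/sqrt(2). The test function is our choice.
import numpy as np
from scipy.integrate import quad

def a(lam):
    # <f_lambda|f> = (1/sqrt(2 pi)) * integral of e^{i lam x} e^{-x^2} dx;
    # the sine part vanishes by symmetry, so only the cosine part survives
    val, _ = quad(lambda t: np.cos(lam * t) * np.exp(-t ** 2), -np.inf, np.inf)
    return val / np.sqrt(2 * np.pi)

for lam in (0.0, 1.0, 2.5):
    exact = np.exp(-lam ** 2 / 4) / np.sqrt(2)
    print(lam, a(lam), exact)   # the two columns agree
```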
[If that sounds like double-talk, remember that |f⟩ is an abstract vector, which can be expressed with respect to any basis you like; in this sense the function f(x) is merely the collection of its "components" in the particular basis consisting of eigenvectors of the position operator.] Meanwhile, we can no longer represent operators by matrices, because the basis vectors are labeled by a nondenumerable index. Nevertheless, we are still interested in quantities of the form

\langle f_\lambda | \hat{T} | f_\mu \rangle,

which, by force of habit, we shall call the λμ matrix element of the operator T̂.

*Problem 3.33
(a) Show that any linear combination of two functions in L₂(a, b) is still in L₂(a, b). If this weren't true, of course, L₂(a, b) wouldn't be a vector space at all.
(b) For what range of (real) ν is the function f(x) = |x|^ν in L₂(−1, +1)?
(c) For what range of a is the function f(x) = 1 − x + x² − x³ + ⋯ in L₂(−a, +a)?
(d) Show that the function f(x) = e^{−|x|} is in L₂, and find its "components" in the basis (Equation 3.108).
(e) Find the matrix elements of the operator D̂² with respect to the basis (Equation 3.108) of L₂.

Problem 3.34 L₂(−1, +1) includes discontinuous functions (such as the step function, θ(x), Equation 2.125), which are not differentiable. But functions expressible as Taylor series (f(x) = a₀ + a₁x + a₂x² + ⋯) must be infinitely differentiable. How, then, can θ(x) be the limit of a sequence of polynomials? Note: This is not a difficult problem, once you see the light, but it is very subtle, so don't waste a lot of time on it if you're not getting anywhere.
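Problem 3.34 turns on the distinction between pointwise convergence and convergence in the L₂ norm ("in the mean"). The sketch below is ours, not the book's answer, so treat it as a hint: it expands θ(x) in Legendre polynomials; every partial sum is infinitely differentiable, yet the L₂ distance to θ(x) steadily shrinks.

```python
# Legendre partial sums of the step function theta(x) on [-1, 1]: each
# partial sum is a polynomial (infinitely differentiable), yet its L2
# distance to theta goes to zero -- convergence in the mean, not pointwise.
# A numerical hint for Problem 3.34, not the book's answer.
import numpy as np
from numpy.polynomial import legendre
from scipy.integrate import quad

def coeffs(N):
    # c_n = (n + 1/2) * integral_{-1}^{1} theta(x) P_n(x) dx
    #     = (n + 1/2) * integral_{0}^{1} P_n(x) dx,  since theta = 1 on [0, 1]
    return [(n + 0.5) * quad(legendre.Legendre.basis(n), 0.0, 1.0)[0]
            for n in range(N)]

for N in (5, 20, 80):
    c = coeffs(N)
    def S(t):
        return legendre.legval(t, c)                           # N-term partial sum
    err2 = (quad(lambda t: S(t) ** 2, -1.0, 0.0)[0]            # theta = 0 here
            + quad(lambda t: (1.0 - S(t)) ** 2, 0.0, 1.0)[0])  # theta = 1 here
    print(N, np.sqrt(err2))                                    # L2 error decreases with N
```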
