Talk:Self-adjoint operator

Untitled

Hey there! What about having the distinction between Hermitian operators and self-adjoint operators added to this otherwise very good presentation... see for example some common lectures like http://www.hep.caltech.edu/~fcp/math/differentialEquations/linDiffEq.pdf Thanks — Preceding unsigned comment added by 89.242.21.31 (talk) 07:30, 29 April 2012 (UTC)[reply]

It is already written: "Bounded symmetric operators are also called Hermitian." Boris Tsirelson (talk) 08:15, 29 April 2012 (UTC)[reply]

Notation Unclear

This is probably obvious to most experts, but I think that the notation < | > should be defined before it is used in the first section. I added a clarification tag to this. —Preceding unsigned comment added by BradLipovsky (talkcontribs) 02:12, 11 September 2010 (UTC)[reply]

Proof Needed?

A symmetric operator on a Hilbert space can have no eigenvalues? Isn't 0 an eigenvalue?

Not in general. If it's invertible, for instance, 0 is not an eigenvalue.--CSTAR 07:54, 30 December 2006 (UTC)[reply]

More general statement: Should there be a proof policy on Wikipedia math articles? Or a proof tab or something for each claim? That would be nice. Maybe people should start adding proof pages or something. Discuss or copy and paste this where it's supposed to go (I don't want to look that up right now).

To my knowledge there isn't a proof policy. This is supposed to be an encyclopedia, not a textbook or reference.--CSTAR 07:54, 30 December 2006 (UTC)[reply]
There are a couple of math wiki projects, though still in their infancy. It would be worth linking to them once they are up and running: Swim, VDASH. Drevicko (talk) 06:04, 23 March 2010 (UTC)[reply]

Quantum version

The latest anonymous "edit" introduces a so-called "quantum mechanical version" of the spectral theorem, which is the same version and unnecessarily separates out the pure point from the continuous spectrum. I have no objection to a comment that the spectral theorem is formulated in bra-ket notation in some form, but to call it a "quantum mechanical version" is misleading.--CSTAR 20:13, 10 July 2005 (UTC)[reply]

Thank you for editing my "quantum mechanical version" of the spectral theorem. You are right when you say that the title was misleading. You have replaced the term analytic function by Borel function. The problem is that I don't understand what a Borel function is. In physics textbooks one finds analytic functions, and most physicists know what they are. The physicists define f(H) using the Taylor expansion of f. Is the Borel function somewhat equivalent to this? I also in principle agree with the fact that you suppress the summation over the discrete spectrum, but I think this is not very pedagogical for people coming from the quantum physics or quantum chemistry community. I am teaching chemists; I know what I am speaking about. I think this is an encyclopedia, and one should take care that, when possible, it should be readable by people who don't want to cope with the whole of functional analysis but nevertheless want to understand some key concepts they are using in their own field on a daily basis. Moreover, I am persuaded you are right when you say both versions of the theorem are the same, but I am very sorry, I don't really understand why and how. Do you think you could add some note explaining why both versions are equivalent? Thank you very much.

As was pointed out, the power series formulation of f(H) only works in general when one has a bounded H and an analytic f. In quantum mechanics, Hamiltonians are never bounded unless the state space is finite dimensional. So the two versions are really NOT the same. The Borel functional calculus is honest mathematics, while the other formulation, although useful for physicists, is fundamentally incorrect. But, loosely speaking, we can see the formal resemblance between the two approaches. The projection-valued measure (PVM) in the Borel functional calculus sort of corresponds to Dirac's "rank 1 projection", |x><x|. Just as we integrate against PVMs to get a self-adjoint operator, Dirac's approach "integrates" the |x><x| over the "spectrum" to obtain the operator. While the convergence of integrals against PVMs can be defined in various topologies (e.g. the weak topology), it's never pointed out what exactly is meant by Dirac's "integral" over a sometimes nonexistent "spectrum".
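
For concreteness, the formal correspondence being described can be written out as follows (a sketch in standard notation, not a quotation from the article): the Borel functional calculus integrates against a projection-valued measure E, while the Dirac-style expression splits the spectrum by hand into a discrete sum and a continuous integral,

f(H) = \int_{\sigma(H)} f(\lambda)\, dE(\lambda) \qquad\text{versus}\qquad f(H) = \sum_n f(\lambda_n)\, |\lambda_n\rangle\langle\lambda_n| \;+\; \int f(\lambda)\, |\lambda\rangle\langle\lambda|\, d\lambda .

The first integral converges (for bounded Borel f, say in the strong operator topology) with no case distinction; the second is only formal whenever the "eigenkets" |\lambda\rangle are not elements of the Hilbert space, which is the point made above.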

Be my guest and chuck it. A while back (a year ago?), I tried to, but kept getting reverted.--CSTAR 18:04, 29 March 2006 (UTC)[reply]
I need to clarify about the validity of the power series formula. There is actually an analytic functional calculus for unbounded operators, using an operatorial version of the Cauchy integral formula.
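
For reference, the bounded-operator prototype of that construction is the Riesz-Dunford integral (a sketch; the unbounded versions adapt the contour and the admissible class of functions f):

f(A) = \frac{1}{2\pi i} \oint_{\Gamma} f(z)\, (zI - A)^{-1}\, dz ,

where f is holomorphic on a neighbourhood of \sigma(A) and \Gamma is a contour in that neighbourhood winding once around the spectrum.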

sigma-finiteness necessary?

As far as I know, sigma-finiteness is not needed for a multiplication operator to be densely defined. Enlighten me or I will delete the remark.

Hmmm. I think you're right.--CSTAR 21:32, 26 February 2006 (UTC)[reply]
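
A sketch of why the remark seemed unnecessary (standard argument; assume M_f is multiplication by a measurable, a.e. finite function f on L^2(X,\mu)): truncation already gives density, with no sigma-finiteness of \mu used anywhere,

D(M_f) = \{\psi \in L^2(X,\mu) : f\psi \in L^2(X,\mu)\}, \qquad \psi_n := \mathbf{1}_{\{|f| \le n\}}\, \psi \in D(M_f), \qquad \psi_n \to \psi \text{ in } L^2 ,

the last convergence holding by dominated convergence, since |\psi - \psi_n|^2 \le |\psi|^2 and |f| < \infty a.e.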

geometrical interpretation

The article contains a section called "geometrical interpretation". It states that

I understand this to mean the orthogonal subspace

(Wrong!)

Is this correct? If so, then I'd like to know why J was chosen to be symplectic -- i.e. why was J set up so that ... it seems to me that the definition would have worked equally well for a map , right? I don't see the role played by the minus sign. I mean, that J is symplectic is very suggestive of something; I'd like to know why. linas 21:46, 18 March 2006 (UTC)[reply]

I think it does make a difference, since if (x,y) belongs to a linear space V, (x,−y) doesn't have to belong to V (consider a 45-degree line through the origin).--CSTAR 21:55, 18 March 2006 (UTC)[reply]
Right. My definition is wildly flawed. Dohh. It should have read
Dohhh. Thanks. I'm going to copy this into the article.linas 00:06, 19 March 2006 (UTC)[reply]

Alternate characterization

The article states:

The following is an alternate characterization of a symmetric operator: a densely defined operator A is symmetric iff A ⊆ A*.

What the heck does this subset notation mean?? linas 21:46, 18 March 2006 (UTC)[reply]

This is standard terminology for operators. One is an extension of the other.--CSTAR 21:47, 18 March 2006 (UTC)[reply]
Do we have an article somewhere that defines this? I'm thinking of this in terms of topology, that the one is defined in a topology that is finer than the other. But rather than guessing, I'd like to send the gentle reader to an actual definition. linas 23:09, 18 March 2006 (UTC)[reply]
It means the graph is a subset. -lethe talk + 03:29, 19 March 2006 (UTC)[reply]
So should I edit the article to say "... This is often written as A ⊆ A*, with the use of subset notation implicitly implying that the relation holds for the graph of the operator." I'll make this edit now; if it's wrong, back it out please. linas 05:19, 19 March 2006 (UTC)[reply]
That language sounds funny to me. It's not "often written as". It's the definition of extension of operators. -lethe talk + 05:23, 19 March 2006 (UTC)[reply]
So where is this defined? Read my first post, wherein I ask: "what the heck does this mean"? We need a definition of this notation. Anyone who does not already know the answer won't be able to glean the meaning from the current wording. Note redlinks: Graph of an operator, operator extension. linas 20:48, 19 March 2006 (UTC)[reply]
In the usual set-theoretic formalization of a function, a function is identical to its graph.--CSTAR 20:55, 19 March 2006 (UTC)[reply]
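
Spelled out, the convention under discussion (standard operator-theory usage, stated here for reference):

A \subseteq B \;\Longleftrightarrow\; \Gamma(A) \subseteq \Gamma(B) \;\Longleftrightarrow\; \operatorname{Dom}(A) \subseteq \operatorname{Dom}(B) \text{ and } Bx = Ax \text{ for all } x \in \operatorname{Dom}(A), \qquad \Gamma(A) := \{(x, Ax) : x \in \operatorname{Dom}(A)\} \subseteq H \oplus H ,

i.e. "A \subseteq B" says exactly that B is an extension of A.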

Argh. I'm finding this conversation very irritating. The subset notation is unknown and utterly opaque to anyone coming from an engineering or physics background. I'm not convinced it's all that common in the world of math that's not dealing with operator theory. I attempted to add words to define it such that an undergrad *could* understand. Lethe promptly reverted, with no explanation other than a trite complaint about the wording. Can I fix the article, already? linas 05:01, 20 March 2006 (UTC)[reply]

It looks OK now, mathematically, although I don't particularly like the way the inline tex looks, aesthetically.--CSTAR 05:14, 20 March 2006 (UTC)[reply]
I use a font size about the same size as the image fonts, and also enable MathML in my WP preferences. I have hopes for BlahTeX. linas 15:11, 20 March 2006 (UTC)[reply]
BTW further down, I had used \mathrm{perp} instead of \perp (maybe when I added that \perp wasn't working.) --CSTAR 05:16, 20 March 2006 (UTC)[reply]
Will fix shortly. linas 15:11, 20 March 2006 (UTC)[reply]

Extension of symmetric operators

I don't think this article is the right place to define an extension of an operator. It should go in unbounded operator. -lethe talk + 08:51, 20 March 2006 (UTC)[reply]

I'd suggest its own unique article, e.g. extension of symmetric operators. That section and everything beyond it certainly does take the discussion of operators to a whole new niveau. linas 15:38, 20 March 2006 (UTC)[reply]
Well, I might agree with you on that. I think there might be enough to say about extensions of symmetric operators to merit its own article, but I'm not sure. In any event, the definition of the closability of operators, and also of extension of operators, makes sense for general unbounded operators, not just symmetric operators; therefore that definition should go there. Then we don't have to clutter this article with definitions. As for your complaint that it's too "operator theoretic", well, the distinction between self-adjointness and symmetry of operators is itself a bit operator theoretic. Anyone who wants to read this article will have to get at least a little of the terminology of unbounded operators. I'm going to take a look at that article now. -lethe talk + 15:52, 20 March 2006 (UTC)[reply]
I'm looking at closed operator now, and I don't really like it. It's not really set up to talk about general unbounded or non-everywhere-defined operators, which makes it difficult to have the definitions there that I want. I don't think I have the energy to do a rewrite at the moment. I'm starting to think that I'd like to see an article about linear operators on general TVSes. It could talk about the different kinds of boundedness, and when boundedness implies continuity, and operators not everywhere defined, graphs, extensions, and closures. All of these make sense in general TVSes. What do you think? This might be nice because, e.g., Oleg doesn't like to have stuff about spaces that are not normable in a lot of places, but those spaces still need to have a home. -lethe talk + 16:08, 20 March 2006 (UTC)[reply]
No reason not to. They occur in nature.--CSTAR 20:47, 20 March 2006 (UTC)[reply]
PS. I don't like the idea of breaking up the article. I rather like the idea of having a lot of stuff on s.a. operators in one place.--CSTAR 20:50, 20 March 2006 (UTC)[reply]
I'm afraid I can't add anything. My understanding of these topics is incomplete and spotty; I don't have the vision to suggest an overall structure. I know little/nothing about operator extensions beyond what this article says, and a brief skim of some papers on rigged Hilbert space and related topics. On the other hand, I do have a tried-n-true method of writing: write a few paragraphs, put them where they seem to fit best at this point in time. Write a few more. After a while, one realizes that the first few paragraphs would be better if moved somewhere else .. or expanded or deleted ... rinse, and repeat. As Charles Matthews put it, the result is "organic". linas 20:25, 20 March 2006 (UTC)[reply]

Crazy question

These days, whenever I see the words "Cayley transform" I think "Riemann surface/Fuchsian group/j-function/etc.". Can anything interesting be said along these lines? Or is this a crazy non-question? linas 15:38, 20 March 2006 (UTC)[reply]

Not much more than is already on the page to which it is linked (somewhere in the article). The Cayley transform is a fractional linear transformation; the name used for operators is mostly used by analogy, although it does have a symplectic structure, but not too much should be made of this fact, at least not in this article.--CSTAR 20:45, 20 March 2006 (UTC)[reply]
OK thanks then. I've come up with a remarkable little problem, then; I dare say no more. I haven't had this much fun playing at solving things in quite a while. linas 00:11, 21 March 2006 (UTC)[reply]
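
For reference, the scalar model behind the analogy (standard facts, not specific to this article): the Cayley transform is the fractional linear map

c(z) = \frac{z - i}{z + i} ,

which sends the real line onto the unit circle minus the point 1 and the upper half-plane onto the open unit disk; the operator version (A - i)(A + i)^{-1} mimics this, turning self-adjointness (real spectrum) into unitarity.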

Multiplication operators

This article states, in the section on the spectral theorem, that

Theorem. Any multiplication operator is a (densely defined) self-adjoint operator. Any self-adjoint operator is unitarily equivalent to a multiplication operator.

By contrast, the article spectral theorem qualifies this further, stating that any bounded self-adjoint operator is unitarily equivalent to a multiplication operator. So, does an operator need to be bounded for this theorem to hold, or not? linas 23:05, 18 March 2006 (UTC)[reply]

No. It's true for arbitrary self-adjoint operators.--CSTAR 23:41, 18 March 2006 (UTC)[reply]
Can I get you to review the article on spectral theorem, and remove the "bounded" qualifier from it? linas 00:05, 19 March 2006 (UTC)[reply]
There is already a short section on the unbounded case. Isn't that good enough? --CSTAR 00:49, 19 March 2006 (UTC)[reply]
OK, right. I guess I skipped reading the last paragraph, which circles right back to this article. I've been feeling confused today. linas 05:07, 19 March 2006 (UTC)[reply]
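
As an unbounded illustration of the theorem quoted above (a standard example, not taken from the article): the momentum operator P = -i\, d/dx on L^2(\mathbb{R}) is unitarily equivalent, via the Fourier transform F, to multiplication by the independent variable,

(F P F^{-1} \hat\psi)(\xi) = \xi\, \hat\psi(\xi), \qquad \operatorname{Dom}(M_\xi) = \{\hat\psi \in L^2(\mathbb{R}) : \xi \hat\psi \in L^2(\mathbb{R})\} ,

so the multiplication form of the spectral theorem really does cover densely defined unbounded self-adjoint operators, with no boundedness assumption.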

Deleted graf

I deleted a paragraph which I believe used "support" incorrectly. The support of a measure, including a projection-valued measure, is the smallest closed set outside of which the measure vanishes. What the contributor meant, I am sure, was not support, but rather to allude to the fact that a measure has a canonical decomposition into a sum of atomic and continuous parts which are disjoint (a la Hahn decomposition); but that should be made clearer, and should not use the word "support", which has a very precise meaning in analysis.--CSTAR 17:41, 2 April 2006 (UTC)[reply]

CSTAR Thanks. I got lazy and didn't wanna elaborate. —This unsigned comment was added by 24.155.72.152 (talkcontribs) .
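
For reference, the decomposition alluded to above (standard measure theory, sketched): every finite Borel measure on \mathbb{R} splits uniquely into an atomic part and a continuous (atomless) part,

\mu = \mu_{\mathrm{at}} + \mu_{\mathrm{c}}, \qquad \mu_{\mathrm{at}} = \sum_{x \,:\, \mu(\{x\}) > 0} \mu(\{x\})\, \delta_x ,

and applying this to the spectral measure of a self-adjoint operator separates the eigenvalues from the rest. The two parts are mutually singular, but their supports need not be disjoint closed sets, which is why "support" was the wrong word.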

Functional calculus

A recent anonymous edit added the sentences:

For the bounded case, an alternative way of obtaining the Borel functional calculus is the following: First pass from polynomial to continuous functional calculus using the Stone-Weierstrass theorem. Then use the Riesz-Markov theorem to pass from integration on continuous functions to spectral measures.

This is very misleading. To extend the polynomial functional calculus to the continuous functional calculus requires showing continuity of the mapping

p ↦ p(T)

from polynomials to operators, as a function of the polynomial p in the supremum topology. This is not entirely trivial, and in fact is one stumbling block in the proof of the spectral theorem by elementary means. The proof hinges on the following fact: If T is a bounded self-adjoint operator with spectrum in the closed interval [a,b] and p is a polynomial which is non-negative on [a, b], then p(T) is a non-negative operator.

See for example S. Lang's treatment of the spectral theorem. I'm not sure of the historical facts here, but this may have been the way the spectral theorem for non-compact operators was first shown, before the Gelfand theory of commutative C*-algebras. --CSTAR 19:43, 2 April 2006 (UTC)[reply]
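
A sketch of the estimate being referred to (using only the positivity fact just quoted, for a real polynomial p, with \|p\|_\infty the sup of |p| over [a,b]):

p \ge 0 \text{ on } [a,b] \supseteq \sigma(T) \;\Rightarrow\; p(T) \ge 0, \qquad\text{hence}\qquad \|p\|_\infty \pm p \ge 0 \text{ on } [a,b] \;\Rightarrow\; -\|p\|_\infty I \le p(T) \le \|p\|_\infty I \;\Rightarrow\; \|p(T)\| \le \|p\|_\infty ,

which is exactly the continuity needed to extend p \mapsto p(T) from polynomials to C[a,b] via Stone-Weierstrass.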

CSTAR, the statement I made was correct. From the polynomial functional calculus one can extend to an isometric continuous functional calculus by Stone-Weierstrass rather easily. The isometric property even holds for the Borel case, provided one is careful. See, say, Reed and Simon. This is perhaps a less deep approach than using Banach algebras, and stuff like Stone-Cech and Wiener's result are not corollaries. Also, it was sloppy of me to have said what I said about discrete spectral measures. Thanks for pointing it out. Perhaps someone can elaborate in an article about decomposition into spectral subspaces. —This unsigned comment was added by 24.155.72.152 (talkcontribs) .
I didn't say the polynomial functional calculus couldn't be extended in this way. What I said was that this extension is not entirely trivial, because you need to establish first that the mapping is continuous (or equivalently that it is positive). Your edit omits a reference to this fact, which is why I said it was misleading (notice that I didn't say it was wrong).--CSTAR 21:17, 2 April 2006 (UTC)[reply]
Also could you please sign your posts on talk pages? Thanks.--CSTAR 21:19, 2 April 2006 (UTC)[reply]
Do so by appending ~~~~ to your comments, for your information. -lethe talk + 21:36, 2 April 2006 (UTC)[reply]
Previous two comments modified for additional clarity. --CSTAR 21:35, 2 April 2006 (UTC)[reply]

Ok, sorry about not signing posts. So I am 24.155.72.152 on a few of the math pages Mct mht 22:35, 2 April 2006 (UTC)[reply]

CSTAR, I still don't quite get what's nontrivial about it. The fact you cite as essential:

If T is a bounded self-adjoint operator with spectrum in the closed interval [a,b] and p is a polynomial which is non-negative on [a, b], then p(T) is a non-negative operator.

is actually not needed to show the functional calculus is an isometry. When you say Lang's treatment, do you mean his "real and functional analysis"? Will look it up when I can. Mct mht 23:11, 23 April 2006 (UTC)[reply]

If you assume the continuous functional calculus, then of course it's trivial. Otherwise, you need to show that any non-negative polynomial on an interval [a,b] is a sum of polynomials which, when applied to T, are obviously non-negative. --CSTAR 23:14, 23 April 2006 (UTC)[reply]
Hm, if you're saying your "you need to show..." is required to show the existence of an isometric functional calculus (which is what I thought you meant in the initial comment), I think it can be bypassed in that proof; rather, it follows as a consequence. Mct mht 23:23, 23 April 2006 (UTC)[reply]
Hmm. I don't believe it.--CSTAR 00:10, 24 April 2006 (UTC)[reply]

CSTAR, OK, from what I can see, our disagreement is the following: let P ↦ P(T) be the usual polynomial functional calculus; you're saying the spectral mapping theorem, i.e. σ(P(T)) = P(σ(T)), is a major stumbling block en route to the continuous functional calculus. I am saying it's rather straightforward. The following argument is from Reed and Simon:

Say λ ∈ σ(T). Given P, we factor P(x) − P(λ) = (x − λ)Q(x). So P(T) − P(λ) = (T − λ)Q(T). If T − λ has no bounded inverse, neither does P(T) − P(λ). This shows P(σ(T)) ⊆ σ(P(T)).
For the other inclusion, let μ ∈ σ(P(T)). Again, we factor P(x) − μ = a(x − λ_1)···(x − λ_n). Now P(T) − μ has no bounded inverse; that means T − λ_i has no bounded inverse for some i. So we have μ = P(λ_i) with λ_i ∈ σ(T). This proves the claim.

From the above, the isometric property follows. It is perhaps good to compare with Lang's treatment you mentioned. I will look it up, or perhaps you will tell me. Mct mht 02:58, 24 April 2006 (UTC)[reply]

OK, then how do you prove that IF A is a non-negative operator such that P(σ(A)) is contained in the interval [0,1] THEN P(A) has norm at most 1? At some point you are going to have to address the issue of the relation between spectral radius and norm. You haven't yet, by your argument. The crux of the standard Gelfand theorem as found in most books on C*-algebras is basically dealing with that technical point. You can avoid the Gelfand theorem by using the polynomial argument I mentioned. I believe that constituted von Neumann's proof of the spectral theorem for bounded self-adjoint operators.--CSTAR 03:10, 24 April 2006 (UTC)[reply]
Heh, OK, interesting. The isometry is shown as follows: for self-adjoint A we have that its operator norm and spectral radius coincide. Also, in general, σ(P(A)) = P(σ(A)), so ||P(A)|| = sup{ |λ| : λ ∈ σ(P(A)) } = sup{ |P(λ)| : λ ∈ σ(A) }.
Mct mht 03:28, 24 April 2006 (UTC)[reply]
Nope. If you look carefully, you are using the spectral radius formula to equate norm with sup of the spectral values. That is true for C*-algebras, but is equivalent to the Gelfand isomorphism theorem.--CSTAR 03:33, 24 April 2006 (UTC)[reply]
OK, I said that before laying out the argument: that for self-adjoint operators, the spectral radius is the operator norm. And the spectral radius formula can be proven by invoking the Laurent series for resolvents, without using the C*-algebra formulation. Mct mht 03:44, 24 April 2006 (UTC)[reply]
Fine but that's the crux of the standard C*-theoretic proof of the Gelfand isomorphism!--CSTAR 03:46, 24 April 2006 (UTC)[reply]
It's a very informative discussion. Thanks. Mct mht 03:53, 24 April 2006 (UTC)[reply]
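
For the record, here is the computation the thread is circling around (a sketch that grants the two disputed ingredients, namely the polynomial spectral mapping theorem and the equality of norm and spectral radius for self-adjoint operators):

\|P(A)\|^2 = \|P(A)^* P(A)\| = \|(\overline{P}P)(A)\| = r\big((\overline{P}P)(A)\big) = \sup_{\lambda \in \sigma(A)} |P(\lambda)|^2 ,

where \overline{P} has the conjugated coefficients, so that (\overline{P}P)(A) is self-adjoint, and the first equality is the C*-identity \|T^*T\| = \|T\|^2 for Hilbert-space operators. The disputed point is precisely how much work the equality of norm and spectral radius in the middle costs.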

partial isometry

"partially defined isometry" is perculiar terminology. is it used at all in literature? as pointed out later in the same paragraph (extension of symmetric operators), one customarily extend by continuity and then by zero to get something defined everywhere. "unique partially defined linear operator", also rather uncommon language, should be taken to mean the continuous extension is unique anyhow. i didn't change it since i didn't see anything incorrect, just somewhat unusual and perhaps unnecessary. Mct mht

Why is this a problem? Partially defined function is a perfectly good concept and standard terminology.--CSTAR 23:00, 23 April 2006 (UTC)[reply]
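
For reference, the object the terminology presumably refers to (standard construction, sketched): for a symmetric operator A, the Cayley transform

W(A) = (A - i)(A + i)^{-1} : \operatorname{ran}(A + i) \longrightarrow \operatorname{ran}(A - i)

is an isometry (since \|(A \pm i)x\|^2 = \|Ax\|^2 + \|x\|^2) defined only on the subspace \operatorname{ran}(A + i), hence "partially defined"; A is self-adjoint exactly when both ranges are all of H, i.e. when W(A) is unitary.
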
Do you mean if they are unitarily equivalent or if not at least similar? The answer is no to both questions, because spectrum (or more strongly spectral measure) does not deal with spectral multiplicity.--CSTAR 16:24, 12 January 2007 (UTC)[reply]

operator integral

CSTAR, actually, you can. It's not trivial, but it's been done. I just thought it should be mentioned. Mct mht 23:26, 23 April 2006 (UTC)[reply]

The above comment is re the possibility of convergence of the operator integral in the strong operator topology. Mct mht 23:28, 23 April 2006 (UTC)[reply]
OK I believe you, but it's news to me.--CSTAR 23:30, 23 April 2006 (UTC)[reply]
Could you provide a reference for how this is done? I'm still pretty skeptical.--CSTAR 13:17, 24 April 2006 (UTC)[reply]
CSTAR, OK, it is rather obscure. I haven't found an explicit reference to a paper. It's done using vector-valued measures, according to Reed and Simon. Convergence in the weak operator topology looks to be the most convenient and common approach taken. Since all the well-known texts seem to take that approach, I am going to remove the SOT note. Mct mht 01:19, 27 April 2006 (UTC)[reply]
This is an ancient post, but I would like to mention that I am taking a graduate course where we are currently discussing operator integrals in the strong topology. Maybe screwy things can happen in the cases we haven't covered yet though.--91.11.110.253 (talk) 22:05, 13 January 2012 (UTC)[reply]

Bounded adjoint

No, an unbounded densely defined operator can have a bounded adjoint. It can even have a trivial adjoint, that is, A^* = {(0,0)}. Scineram (talk) 16:58, 2 November 2008 (UTC)[reply]

The statement only assumes a given operator A has a bounded adjoint B, in the sense that <Ax,y> = <x, By> for all x, y. This would in fact imply what you're claiming above. Mct mht (talk) 20:29, 2 November 2008 (UTC)[reply]
This is pretty laughable. The statement is an elementary one: if an operator, with no a priori assumptions on boundedness, has a bounded adjoint, then it is bounded. In particular, unbounded operators cannot have a bounded adjoint. Someone else wanna step in here? Mct mht (talk) 15:12, 4 November 2008 (UTC)[reply]
Mct mht, what exactly is this "theorem of Hellinger-Toeplitz type" that you're using?
After looking through Reed & Simon, Section VIII.8, I believe the situation is as follows (I don't know that much functional analysis though, so I may well be wrong). There is no theorem that says that if a densely-defined operator A has a bounded adjoint, then A itself is bounded. Indeed, it's possible for a densely-defined operator A to have an adjoint A* whose domain D(A*) consists of only the zero element in the Hilbert space. However, there is a theorem that if an everywhere-defined operator A has a bounded adjoint, then A itself is bounded. From that theorem it follows that if an everywhere-defined operator A is unbounded, its adjoint is unbounded. Thus, it looks to me that the paragraph that you are arguing about needs to make the additional assumption that the operator is defined on the whole space. -- Jitse Niesen (talk) 16:00, 4 November 2008 (UTC)[reply]
Yes, of course one needs to assume in the statement that the operator in question is defined everywhere. This is analogous to the fact that a symmetric (everywhere defined) operator must be bounded (this is essentially the Hellinger-Toeplitz theorem). Mct mht (talk) 22:37, 4 November 2008 (UTC)[reply]
That is still false. An everywhere defined unbounded operator can have a bounded adjoint too. Scineram (talk) 05:52, 5 November 2008 (UTC)[reply]
I don't understand that. Could you point out where this is explained, or give an example?
My reasoning is as follows. Suppose T is a linear operator on a Hilbert space, defined on the whole space. Then T bounded implies T* bounded. Thus, T* bounded implies T** = T bounded. Thus, T unbounded implies T* unbounded. So what's wrong here? -- Jitse Niesen (talk) 13:24, 5 November 2008 (UTC)[reply]
Nothing is wrong with it. The statement that "T bounded implies T* bounded" is due to the closed graph theorem, just as in the Hellinger-Toeplitz theorem. He's been wasting people's time all along. Mct mht (talk) 22:59, 5 November 2008 (UTC)[reply]
The problem is there is no second adjoint. If the adjoint is not densely defined, as is the case with {(0,0)} of course, then you cannot use the closed graph theorem. Scineram (talk) 20:34, 6 November 2008 (UTC)[reply]

Pretty obvious that Scineram is talking nonsense. The statement will be put back. Mct mht (talk) 06:13, 8 November 2008 (UTC)[reply]
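
For what it's worth, here is the everywhere-defined statement written out in the formulation given above (a sketch, using nothing beyond elementary Hilbert-space facts): suppose A is defined on all of H and there is a bounded B with <Ax, y> = <x, By> for all x, y. Then

\langle Ax, y\rangle = \langle x, By\rangle = \langle B^* x, y\rangle \text{ for all } y \;\Rightarrow\; Ax = B^* x \text{ for all } x \;\Rightarrow\; A = B^* \text{ is bounded.}

The pathologies Scineram mentions (an adjoint whose graph is {(0,0)}, hence no usable second adjoint) only arise for operators that are merely densely defined, which is why the everywhere-defined hypothesis Jitse points out is needed.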

Definition of a symmetric operator on a topological vector space

Drevicko (talk) 06:51, 23 March 2010 (UTC)[reply] In Section 1, "Symmetric operators" starts by defining a symmetric operator A on a Hilbert space and then on a topological vector space. The definition

makes sense in a Hilbert space, as the inner product gives a natural mapping to the space's dual, but not in a topological vector space. Here we have no natural map to the dual space, so it makes no sense: Ay is in the dual space E*, but a "ket" resides in E, not its dual. If we want to use the bra-ket notation, we should write

which adds another element of confusion, as we would expect to conjugate.

Perhaps we should instead write something like:

or

It would also be good to add a reference to Bra-ket_notation


I came here to make a similar point. The notation

doesn't actually seem to make sense, since the stuff inside the in Dirac notation is just a label - so to me just looks like a bra called . I think what's really meant is

That would define a symmetric operator (which is what that part of the article seems to be talking about), but of course, a self-adjoint one would be defined by

where is the complex conjugate. The article seems to be confusing symmetric operators with self-adjoint ones - but I'm not really sure what it's getting at with the whole "partially defined" thing, and I may just be mis-interpreting the notation, so I'm hesitant to change it. Nathaniel Virgo (talk) 12:13, 25 June 2010 (UTC)[reply]

(later:) Oh, I get it - the author isn't intending this to be interpreted as Dirac notation at all - (s)he's just using <x|y> to represent an inner product between the vectors x and y. Since this confused both me and Drevicko, I suggest it be changed to the more usual <x, y>. If no one objects I'll go ahead and do this myself at a later date. Nathaniel Virgo (talk) 22:48, 26 June 2010 (UTC)[reply]
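
To make the suggested change concrete (a sketch; this is the standard symmetric-operator condition, which appears to be what the section intends): the defining condition in the two notations reads

\langle Ax \mid y\rangle = \langle x \mid Ay\rangle \qquad\text{versus}\qquad \langle Ax, y\rangle = \langle x, Ay\rangle \quad \text{for all } x, y \in \operatorname{Dom}(A) ,

the first of which invites the Dirac-notation misreading discussed above, while the second does not.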

Symmetric != Hermitian

Is it just me or is this article confusing symmetric operators with Hermitian ones? It even goes so far as to say that they are synonymous, and fails to mention complex conjugates. This seems odd to me since for matrices, "adjoint" means "take the transpose, and also take the complex conjugate of each entry", hence a "self-adjoint" matrix would be a Hermitian rather than a symmetric one.

Am I missing some subtle point, or should the article be changed? Nathaniel Virgo (talk) 22:53, 26 June 2010 (UTC)[reply]

The article is fine on this point the way it is, but I am not sure how best to explain. A common definition for symmetric operators in a complex Hilbert space is that an operator A is called symmetric if <Ax,x> is real for all x in the Hilbert space. This is merely a convention, but it is called symmetric because it implies that you can move an operator from the right side of an inner product to the other without changing its value. It is slightly unfortunate, however, that this doesn't coincide with the common definition/notation for operators on finite-dimensional spaces. One thing to note is that for real Hilbert spaces, the same definition doesn't suffice. And actually the article doesn't cover this, but as soon as one allows operators to be defined only on a subset of the Hilbert space, self-adjoint and symmetric cease to be the same thing. In the case of operators defined on a dense subset of the entire space, for example, there are many examples of symmetric operators which are not self-adjoint.--91.11.110.253 (talk) 21:57, 13 January 2012 (UTC)[reply]
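
A sketch of why that convention works on a complex Hilbert space (standard polarization argument, with the inner product taken linear in its first slot; not from the article): if <Ax,x> is real for every x, then the sesquilinear form B(x,y) = <Ax,y> − <x,Ay> vanishes on the diagonal, and hence vanishes identically, because

4\, B(x,y) = B(x+y,\, x+y) - B(x-y,\, x-y) + i\, B(x+iy,\, x+iy) - i\, B(x-iy,\, x-iy) = 0 ,

so <Ax,y> = <x,Ay> for all x, y in the domain (a subspace, so all the combinations above stay in it). On a real Hilbert space the i-terms are unavailable, which is one way to see why the same definition doesn't suffice there.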

Edit war, why?

About the ongoing edit war [1], [2], [3], [4], [5]: I fail to see what the fuss is. All I see in the article (rather than its wikitext) is a few tiny distinctions in vertical spacing. Could the combatants explain here what the essence of the conflict is? Boris Tsirelson (talk) 07:15, 21 April 2018 (UTC)[reply]

Hermitian operators

The section on Hermitian operators restricts the definition to "bounded operators". But, especially in quantum mechanics and quantum field theory, where this terminology is used, most operators under consideration are not bounded operators. Physicists often don't bother with boundedness and use the terminology Hermitian anyway. The preceding part of the article discusses unbounded operators, so is there a reason for the qualifier here? I feel it might confuse an uninitiated reader. Aaroneisenberg (talk) 09:30, 2 December 2021 (UTC)[reply]

Agreed. But what section is this in? I can't find the "section on Hermitian operators" in this article. StrokeOfMidnight (talk) 00:37, 3 December 2021 (UTC)[reply]

Puzzling sentence

In the section Definitions this sentence appears:

"Since is dense in , symmetric operators are always closable (i.e. the closure of is the graph of an operator)."

But (as a beginner in this subject), I suspect that here the phrase "graph of an operator" means the graph of an everywhere-defined bounded operator?

Because, unfortunately, the notation Γ(A) is defined as "the graph of an (arbitrary) operator A".

Is that suspicion right? Whether it is right or wrong, it certainly appears that the word "operator" is used in two different ways in the same sentence.

I hope that someone knowledgeable about the subject will clarify that the second instance of "operator" in the quoted sentence is specifically a certain type of operator, and not an "(arbitrary)" operator.
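
For what it's worth, a sketch of what the quoted sentence asserts (standard definitions, under which the suspicion about boundedness is not needed): "the closure of Γ(A) is the graph of an operator" just means the closed set \overline{\Gamma(A)} contains no two points with the same first coordinate, i.e.

x_n \in \operatorname{Dom}(A), \quad x_n \to 0, \quad A x_n \to y \;\Longrightarrow\; y = 0 ,

and for symmetric A this follows by pairing against the dense domain: <y, z> = lim <Ax_n, z> = lim <x_n, Az> = 0 for every z in Dom(A), hence y = 0. The operator defined by the closed graph (the closure of A) is in general still unbounded and not everywhere defined, so "operator" is used in the same arbitrary sense in both places.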