
Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 87.102.83.204 (talk) at 10:05, 14 March 2008 (Happy Pi Day). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Welcome to the mathematics section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.

March 8

Index of matrix

So my professor posted a question: Suppose A and B are n × n non-degenerate symmetric matrices and there is an n × n matrix C such that A = C^T B C. Prove that the index of A equals that of B. My question is: what in the world is the index of a matrix? 199.74.71.147 (talk) 01:38, 8 March 2008 (UTC)[reply]

You might see if Sylvester's law of inertia sounds right. Otherwise, the only index I know of is the one described in Atiyah–Singer index theorem, which is probably not what you want since the index of a symmetric matrix should always be 0. JackSchmidt (talk) 02:45, 8 March 2008 (UTC)[reply]
I think it had something to do with the number of negative numbers in some matrix which described extrema or something? —Preceding unsigned comment added by 199.74.71.147 (talk) 04:30, 8 March 2008 (UTC)[reply]
I've never heard of the index of a matrix - I suggest you ask your professor. --Tango (talk) 12:54, 8 March 2008 (UTC)[reply]
If A is a real symmetric matrix then the number of positive eigenvalues of A is called the index of A.--Shahab (talk) 08:39, 9 March 2008 (UTC)[reply]
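Shahab's definition and Sylvester's law of inertia can be checked concretely: a congruence A → C^T A C with invertible C preserves the number of positive (and negative) eigenvalues. A small sketch in Python; the 2×2 matrices are my own illustrative example, small enough that the eigenvalues come from the quadratic formula:

```python
# Sketch: the index (number of positive eigenvalues) of a symmetric
# matrix is invariant under congruence A -> C^T A C (Sylvester's law).
import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def eigenvalues_2x2_symmetric(M):
    # Roots of lambda^2 - tr*lambda + det = 0 (real for symmetric M).
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    d = math.sqrt(tr * tr - 4 * det)
    return [(tr - d) / 2, (tr + d) / 2]

def index(M):
    return sum(1 for lam in eigenvalues_2x2_symmetric(M) if lam > 0)

A = [[2, 0], [0, -3]]                   # eigenvalues 2 and -3: index 1
C = [[1, 2], [0, 1]]                    # invertible (det = 1)
B = matmul(transpose(C), matmul(A, C))  # B = C^T A C = [[2,4],[4,5]]

print(index(A), index(B))               # both are 1
```

Here B has eigenvalues (7 ± √73)/2, one positive and one negative, so its index agrees with A's even though the eigenvalues themselves changed.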

Pseudoinverse of a Matrix

Okay, the first question is that in the Pseudoinverse article here at Wikipedia, there are a couple of identity transformations which, if multiplied by some expression, can be used to expand or reduce expressions. My question is: does it matter how or where the expression is multiplied? Is it right multiplication or left multiplication? Do they work for ALL expressions? Does it matter if I multiply by a vector or a matrix?

The second question is that I already know that P = AA+ is a projector (easy to show by definition) but onto which space does it project? A is any n×m (with n ≥ m) matrix with real or complex entries and A* is its complex conjugate transpose. P would then be an n×n square matrix. I think that P maps a vector (x1, x2, ..., xn) to (x1, x2, ..., x_(m-1), x_m, 0, ..., 0) where the last n−m entries are zero. Is this true? If it is, then how do I prove it, because I can't seem to analytically prove my conjecture.

The third question is, in the same article, we have that (A+)+ = A. A is the same as above (an n×m matrix with n ≥ m) and this is what I have so far.



and from here I don't know what to do. I cannot distribute the inverse sign because I don't know if A will be invertible or not (A is rectangular). So any hint as to how to proceed will be appreciated. Thanks. A Real Kaiser (talk) 03:44, 8 March 2008 (UTC)[reply]

In response to question one, the identities are true identities, so you can replace the left-hand side with the right-hand side (or vice versa) wherever you want. The clause "the right hand side equals the left hand side multiplied with some expression" is somewhat obscure. To take one example, the identity
A+ = A* A+* A+
says that you can replace A+ with A* A+* A+ wherever the former occurs in some expression. It might be useful to picture this as multiplying on the left by A* A+*. These identities come in left-right pairs, so you can expand expressions in either direction.
For question two, the pseudoinverse article indicates (in the applications section) that AA+ projects onto the image of A. Is this good enough for your purposes?
As for question three, the fact that (A+)+ = A follows from the symmetry in the definition of pseudoinverse. I'll retranscribe the four defining properties here, using B instead of A+ to make the symmetry easier to see:
ABA = A;
BAB = B;
(AB)* = AB; and
(BA)* = BA.
Notice that the properties as a whole remain the same if you swap A and B (and, to be pedantic, also swap m and n). This symmetry immediately implies that B is a pseudoinverse of A if and only if A is a pseudoinverse for B. Michael Slone (talk) 06:54, 8 March 2008 (UTC)[reply]

For the second part, how can I show that AA+ projects onto range(A)? And, for the third part, I only have the definition A+ = (A*A)^(-1)A*, so I am trying to prove that (A+)+ = A using only this definition. That is why I posted those steps. Is there any way that it can be done? A Real Kaiser (talk) 08:40, 8 March 2008 (UTC)[reply]

For the second question, you can use the identity A = AA+A to show that the image of A is the same as the image of AA+. However, the way in which AA+ projects onto the image of A is not quite as straightforward as either padding or replacing entries with zeros. For example, if
, then ,
which means that
.
For the third question, your current approach will not give a proof that (A+)+ = A for an arbitrary matrix A, since you are treating a special case. The identity you are using, A+ = (A*A)^(-1)A*, only holds if A*A is invertible, which need not be the case. The actual definition of pseudoinverse consists of four identities that a matrix and its pseudoinverse must satisfy, and one can verify that the identities remain unchanged if the names of the matrix and its pseudoinverse are swapped. Assuming that you already know that pseudoinverses exist, that is the proof that (A+)+ = A. Michael Slone (talk) 03:45, 9 March 2008 (UTC)[reply]
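The four defining properties, the symmetry giving (A+)+ = A, and the projection onto the image can all be checked on a toy example. A pure-Python sketch; the 2×1 matrix is my own choice, picked so that A+ = (A^T A)^(-1) A^T can be written down by hand:

```python
# Sketch: verify the four Moore-Penrose identities for A = [[1],[1]]
# (real entries, so * is just transpose). A has full column rank,
# so A+ = (A^T A)^(-1) A^T = [[0.5, 0.5]].

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [[X[i][j] for i in range(len(X))] for j in range(len(X[0]))]

A  = [[1.0], [1.0]]        # 2x1
Ap = [[0.5, 0.5]]          # 1x2, its pseudoinverse

# The four defining properties, with B = A+:
assert matmul(matmul(A, Ap), A) == A        # A B A = A
assert matmul(matmul(Ap, A), Ap) == Ap      # B A B = B
AB = matmul(A, Ap)                          # AA+, 2x2
BA = matmul(Ap, A)                          # A+A, 1x1
assert transpose(AB) == AB                  # (AB)* = AB
assert transpose(BA) == BA                  # (BA)* = BA

# AA+ is the orthogonal projector onto the image of A = span{(1,1)}:
print(AB)                  # [[0.5, 0.5], [0.5, 0.5]]
assert matmul(AB, AB) == AB                 # a projector squares to itself
```

Note that AA+ here averages the two coordinates rather than zeroing out trailing entries, which matches Michael Slone's point that the projection is not simply "padding with zeros".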

Uniqueness of best fit curves

Hi, I was wondering if somebody here could give me some pointers on how to prove the uniqueness of best fit curves. Basically I have all the formulae derived for polynomial curves up to degree n, but I'm not sure how to show that having data points with distinct x values implies uniqueness. Thanks, 199.74.71.147 (talk) 04:37, 8 March 2008 (UTC)[reply]

Hi again, I've made some progress on the degree 2 polynomial (which is the main thing I need to prove), such that I have a system of equations.
Sorry, I don't know how to do the Wikipedia syntax for math, so here the sums are just from i=1 to i=3 for the three (x,y) coordinates, so the first sum is just x1^4 + x2^4 + x3^4 where x1, x2 and x3 are distinct:
sum xi^4 = lambda1*sum xi^3 + lambda2*sum xi^2
sum xi^3 = lambda1*sum xi^2 + lambda2*sum xi
sum xi^2 = lambda1*sum xi + 3*lambda2
So what I need to do is just prove that no lambda1 and lambda2 exist that hold for all three equations, and I've been looking at it for a while but can't figure it out. 199.74.71.147 (talk) 07:00, 8 March 2008 (UTC)[reply]
I think the solution would involve showing that the matrix form of the equations that give you your best fit curve involves an invertible matrix, and hence has a unique solution. Confusing Manifestation(Say hi!) 21:57, 10 March 2008 (UTC)[reply]
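That suggestion can be made concrete: for a degree-2 fit through three points, the relevant matrix is the Vandermonde matrix with rows (1, x, x^2), whose determinant is the product of the differences (xj − xi) for i < j, nonzero exactly when the x values are distinct. A sketch in Python; the sample x values are my own:

```python
# Sketch: for distinct x values the 3x3 Vandermonde matrix with rows
# [1, x, x^2] is invertible -- its determinant equals the product of
# (xj - xi) over i < j -- so a quadratic through 3 such points is unique.
from itertools import combinations

def det3(M):
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

xs = [0.0, 1.0, 2.0]                    # distinct x values (my example)
V = [[1.0, x, x*x] for x in xs]

d = det3(V)
expected = 1.0
for (xi, xj) in combinations(xs, 2):
    expected *= (xj - xi)

print(d, expected)                      # 2.0 2.0 -- nonzero, so unique
```

The same determinant formula works for any degree, which is why distinct x values give uniqueness in general.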

natural logarithm

what is a natural logarithm? —Preceding unsigned comment added by 68.198.96.12 (talk) 06:02, 8 March 2008 (UTC)[reply]

See Natural logarithm. --~~MusicalConnoisseur~~ Got Classical? 06:59, 8 March 2008 (UTC)[reply]

Euler-Mascheroni constant: betting on its irrationality

Famously, the Euler-Mascheroni constant (γ) is not known to be irrational, let alone transcendental. But I think most mathematicians would be very surprised if it turned out not to be transcendental. Whether or not I am right to think this, what grounds might they have for strongly suspecting that γ is irrational? And transcendental? What sorts of odds would you give, on those questions?

A bit of mathematical epistemology – or mathematical doxastics, in fact.

¡ɐɔıʇǝoNoetica!T09:08, 8 March 2008 (UTC)[reply]

The measure of the algebraic numbers in the reals is 0, so in comparison precious few numbers are algebraic. For some numbers that are defined in such a way that you can easily determine they are very close to 0 or 1, there is a difficult proof that they are not just very close, but in fact equal. Apart from such cases, I don't know of any case of a number arising naturally that is known to be algebraic but for which the proof of algebraicity is difficult. There is nothing in the definition of γ that makes you suspect it might be rational. Combining these things makes the conjecture plausible.
There is no lack of numbers that have not been proved transcendental but that everyone would believe to be so: cos 1 + ln 3, sin cos 1, e+π, ad nauseam. I don't know if anything has been written about this.  --Lambiam 10:53, 8 March 2008 (UTC)[reply]
Thanks Lambiam. Yes, I vaguely understood about the sort of measure you speak of; I know more about the terminology after consulting your useful link. Please answer this, if it is a well-formed question: What is the Lebesgue measure of the rationals in the algebraics?
With my theme of mathematical epistemology in mind, I am intrigued by your third sentence. Among the "naturally arising" numbers known to be algebraic, those for which the proof is easy must outnumber those for which the proof is hard, right? By a considerable ratio, wouldn't you say?
A lot of this is highly technical in mathematics, and therefore beyond me; but I can still pose questions in "operational" form. Indulge me, please:

Should a prudent and rationally self-interested mathematician-gambler stake her entire fortune $F for a return of 1.000001*$F if γ is transcendental, and of $0 if γ is algebraic? If not, what is the lowest number you would intuitively put in place of 1.000001 to make the bet acceptable?

¡ɐɔıʇǝoNoetica!T23:44, 8 March 2008 (UTC)[reply]
I don't think that's a well-formed question. As I understand it, the Lebesgue measure is only defined for subsets of R^n. The rational and algebraic numbers have the same cardinality, but that doesn't really tell us much. As for your bet - I don't know for sure about this particular constant, but for a randomly chosen real number (with uniform distribution), there is no return that would make it worthwhile. A randomly chosen real number is almost surely transcendental. --Tango (talk) 23:53, 8 March 2008 (UTC)[reply]
After thinking about it, the question we want to ask is: What is the index of the rationals as a subgroup of the algebraics? I would imagine it's (countably) infinite, but can't immediately prove it. --Tango (talk) 23:58, 8 March 2008 (UTC)[reply]
Thanks, Tango. I'll explore those technical matters further, beyond what I already know about cardinalities.
As for your point about the bet, of course I understand your general point about "randomly chosen" real numbers, but haven't you applied it wrongly to the betting scenario? If we are utterly certain that a real is "randomly" chosen, wouldn't a bet with a factor of 1.000001 on its being transcendental clearly be good value? A further concern: isn't it rather difficult to formalise, or operationalise, the random selection of a real number? A concern beyond that: no matter how we do that formalising or operationalising, why should we think that this has any bearing on the nature of γ? To answer that, we must surely know something rather privileged about γ! Why should we think we have such privileged knowledge?
Interesting? You can perhaps see why I am moved to ask my original questions.
¡ɐɔıʇǝoNoetica!T00:21, 9 March 2008 (UTC)[reply]
I, of course, had the bet thing backwards - *any* bet would be worth taking for the reasons given. For some reason I was thinking we were betting on it being algebraic, when you clearly wrote it the other way around. I'm not sure what the problem is with choosing a random real number, but it might be easier to choose one at random from the interval [0,1], then the probability density isn't infinitesimal. If we don't have any privileged knowledge of this constant, then the probability is exactly that for a random number. However, we do have quite a lot of knowledge about the number, so the probability probably is different. I would expect our knowledge of the number (for example, the fact that it can be expressed as a pretty simple formula) increases the chance of it being algebraic quite significantly. --Tango (talk) 01:03, 9 March 2008 (UTC)[reply]
[Corrected your "better" to "betting". :) ] Ah, I did not make clear what I meant by "privileged knowledge". It is hard for me to formulate without begging one or two questions. Anyway, you quite reasonably took me to mean the ensemble of all knowledge that we have about γ (like that presented in the article, except much more comprehensive).
Now suppose that we had asked exactly the same questions, but about π or e, before it was known that either of these was transcendental. Would the simple relations that either of these are involved in have increased our confidence that they were algebraic? Wouldn't we have been mistaken, to reason like that? (Perhaps not!) And surely those two constants are for present purposes comparable to γ – that is, in the history of mathematical discovery and in their derivations and roles in current mathematics.
I suppose the earliest mathematicians, reasoning in something like the same way, were justifiably amazed to find that √2 was irrational!
¡ɐɔıʇǝoNoetica!T01:47, 9 March 2008 (UTC)[reply]
I would say that the chance of pi being transcendental given only the information we had before it was proven was less than the chance of a randomly chosen real number being transcendental. The chance of a random real number being transcendental is, effectively, 1. There wouldn't have been so many mathematicians interested in the problem if it was just a matter of proving something we already knew, so I'd say the chance was less than 1. Just because you lose a bet doesn't mean you were wrong to place the bet - you can only base your decision on information you have at the time, so future information doesn't change what was the correct course of action. --Tango (talk) 13:44, 9 March 2008 (UTC)[reply]
Tango: Q has infinite index in Q(√2), let alone in the algebraics. A better measure of the 'size' of the rationals in the algebraics might be the degree of the field extension (aka the dimension of the algebraics as a rational vector space). This is also countably infinite. Algebraist 01:57, 9 March 2008 (UTC)[reply]
Thanks Algebraist. I'll look that up, too. (But I may not understand it.)
I should say why the notion of randomly selecting a real number seems problematic. Try it! Any finitely specifiable procedure for uniquely identifying a real number, or naming it by any standard means, must, it seems, require some biased limiting operation. For a start, any positive number you offer me that is less than 10^10^10^10^10^42 is clearly biased: it is so suspiciously low, in the range of the real numbers!
¡ɐɔıʇǝoNoetica!T02:23, 9 March 2008 (UTC)[reply]
Yes, there is no uniform probability distribution on the reals. That's why Tango changed his/her mind to a random number between 0 and 1. Btw, when you're talking about arbitrary reals, it's not very useful to talk about 'uniquely specifying' or 'naming' them: almost all reals are undefinable! (pinning down what 'definable' means in a non-paradoxical way is a bit tricky, but this'll be true however you do it) Algebraist 04:53, 9 March 2008 (UTC)[reply]
That's fine, Algebraist. That's the sort of difficulty I had in mind, but could not formulate rigorously. Two asides: First, there would be no uniform probability distribution on the natural numbers either, would there? Second, why is talk of randomly choosing a real number in the interval 0–1 any more straightforward? Are all those reals "definable" (any way you pin that down)? (Ignore these asides if answering them must assume mathematical apparatus that I clearly do not have at my command!)
Anyway, my question in bold, above, still stands. Perhaps there is no reasonable way to answer it  – or it is not for mathematicians qua mathematicians to answer. Meta-mathematicians? More likely. But still, a very slippery question for anyone, I suspect. It is a matter of epistemology, as I have said; and it is out of philosophical interest that I posed it. I wonder how one would pursue it further?
Thanks, all!
¡ɐɔıʇǝoNoetica!T07:41, 9 March 2008 (UTC)[reply]
So first, no, a real number that you pick at random between 0 and 1 will almost surely be undefinable, for a notion of definability that you fix in advance. Once you've picked it, it will take infinite time for you to tell us which one it is. But that doesn't particularly bother mathematicians. Take your time. Let's assume an interest/inflation rate of zero so that the value of the bet doesn't evaporate before it can be exchanged.
It's an interesting question, more a question for foundations of probability than anything else, I'd say. One thing I'd throw into the mix is, who is it that's offering to make the bet? Maybe that person knows something about γ that you don't, and that can affect the Bayesian probability by a lot. (This is one of the slippery points in analyzing the Monty Hall problem.) --Trovatore (talk) 08:02, 9 March 2008 (UTC)[reply]
Or to put it another way, son, do not bet him, for as sure as you do you are going to get an ear full of cider. --Trovatore (talk) 19:21, 9 March 2008 (UTC)[reply]

modular question

How can I solve this problem step by step? I will make a computer program, so I've got to learn its solution method...

ax ≡ c (mod m)

(example: 5x ≡ 7 (mod 37))

Could someone explain it with ax ≡ c (mod m)? Thank you and best regards... Altan B. —Preceding unsigned comment added by 81.215.233.51 (talk) 12:26, 8 March 2008 (UTC)[reply]

Assuming a is coprime to m (as in your example), the obvious thing to do is to find the multiplicative inverse of a mod m and multiply it by c to get the answer. The obvious way to do this is the extended Euclidean algorithm. Algebraist 12:58, 8 March 2008 (UTC)[reply]
In the case that a is not coprime to m, there are two possibilities for each factor they share: either c is divisible by it as well, or it isn't. If it is, divide a, c, and m, all through by that common factor and keep going. If it isn't, there's no solution. Black Carrot (talk) —Preceding comment was added at 06:39, 9 March 2008 (UTC)[reply]
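The two answers above combine into a complete procedure; here is a sketch in Python (the function names are mine). The extended Euclidean algorithm supplies the inverse once gcd(a, m) = 1, and Black Carrot's reduction handles common factors first:

```python
# Sketch: solve a*x ≡ c (mod m) using the extended Euclidean algorithm.

def ext_gcd(a, b):
    # Returns (g, s, t) with g = gcd(a, b) and s*a + t*b = g.
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

def solve_congruence(a, c, m):
    # Returns (x, modulus) with a*x ≡ c (mod m), or None if unsolvable.
    g, _, _ = ext_gcd(a, m)
    if c % g != 0:
        return None                      # common factor doesn't divide c
    a, c, m = a // g, c // g, m // g     # divide everything through by g
    _, inv, _ = ext_gcd(a, m)            # now gcd(a, m) = 1
    return (inv * c) % m, m

print(solve_congruence(5, 7, 37))   # (31, 37): 5*31 = 155 ≡ 7 (mod 37)
print(solve_congruence(4, 2, 6))    # (2, 3):   4*2  =   8 ≡ 2 (mod 6)
print(solve_congruence(4, 3, 6))    # None: gcd(4,6) = 2 doesn't divide 3
```

For the worked example, the inverse of 5 mod 37 is 15 (since 5·15 = 75 ≡ 1), so x ≡ 15·7 ≡ 31 (mod 37).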

Free numerical ODE libraries

If you were writing a computer program and wanted to solve an ordinary differential equation from inside it, how would you do it? Would you implement a solution method (e.g. Runge–Kutta) yourself, or use a library? What free, open source libraries are available?

Accuracy guarantees aren't a big concern; I'm not going to use this to design jet engines. =) —Keenan Pepper 17:18, 8 March 2008 (UTC)[reply]

I'd be inclined just to write it myself as it's more fun that way and you'll probably learn more. Otherwise there are a good few open source libraries out there; if you're using Java, try Apache Commons Math. --Salix alba (talk) 18:36, 8 March 2008 (UTC)[reply]
While you could write it yourself, libraries are likely to have better methods than you would implement. For example, it might have a method to optimally determine the step length in your Runge-Kutta integration. I tend to use Matlab for these kinds of things, but it's not free (though it appears to be to me, since my institution has a license). GNU Octave is free and probably has an ODE library. If you're tied to a specific language, what language? I'm sure others can help you with various libraries in FORTRAN, C, C++, etc. moink (talk) 17:08, 9 March 2008 (UTC)[reply]
That's what I was thinking: why re-implement what a library can already do better? I already use Octave, and I had no idea it had ODE solvers, but there they are, so thanks for suggesting that. I also found the ODE solvers in the GNU Scientific Library, so I think that's everything I could need. —Keenan Pepper 19:21, 9 March 2008 (UTC)[reply]
Speaking of Octave, does anyone know of an IDE for it that doesn't suck? I googled around a bit and found a couple of mostly abandoned SourceForge projects that no one had touched much for a couple years. That doesn't prove they suck, of course, but it makes me reluctant to invest the time to find out. I know there's an Eclipse plug-in for it, but frankly I wasn't very impressed with Eclipse the last time I seriously tried it (which was more than a year ago). --Trovatore (talk) 07:15, 10 March 2008 (UTC)[reply]
You could try TeXmacs. Morana (talk) 11:26, 10 March 2008 (UTC)[reply]
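For reference, the do-it-yourself route mentioned above really is only a few lines. A minimal fixed-step classical Runge-Kutta (RK4) sketch in Python, tested on y′ = y where the exact answer is e; note it has no adaptive step-size control, which is one of the things a real library buys you:

```python
# Sketch: a fixed-step classical Runge-Kutta (RK4) integrator.
import math

def rk4(f, t0, y0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 in n steps; return y(t1)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h/2, y + h/2 * k1)
        k3 = f(t + h/2, y + h/2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2*k2 + 2*k3 + k4)
        t += h
    return y

# y' = y with y(0) = 1, so y(1) = e; RK4's global error is O(h^4).
approx = rk4(lambda t, y: y, 0.0, 1.0, 1.0, 1000)
print(approx, math.e)   # the two agree to well beyond 9 decimal places
```

The same function handles any scalar first-order ODE; systems just need y and the k's to become vectors.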

Follow up on question of "straight lines"

I had posted the question about "straight lines" above. See "Lines" on March 7 posting. I wanted to follow-up with the following question. As I was driving the other day, I noticed those white (and yellow) lines that are painted on the road ... the markings that serve as a guide to delineate the lane change divisions, the center of the road, the edge of the road, etc. As roads are not perfectly straight (rather they twist and turn), those center yellow lines (for example) are not "straight lines". Rather, they curve along with the twists and turns of the road that they are marking. What is the mathematical / geometry term for that? Is it simply a curved line? Is it really a "line" at all (mathematically speaking)? Or is there a better term? Thanks. (Joseph A. Spadaro (talk) 20:47, 8 March 2008 (UTC))[reply]

It is called a median line. An important non-trivial application is to determine the median line of two countries' coastlines where the territorial waters or exclusive economic zones etc overlap. There should be a mathematical name for this process of finding the median line between arbitrary curves but I don't know what it is. SpinningSpark 21:28, 8 March 2008 (UTC)[reply]
I'd call it a median curve. Considering the edges of the road are equidistant (not parallel, strictly speaking, since they aren't straight), it's a much simpler problem than defining borders between countries. --Tango (talk) 22:36, 8 March 2008 (UTC)[reply]
It's still called a median line by those that produce them, whatever you call it. SpinningSpark 19:56, 9 March 2008 (UTC)[reply]
See medial axis. --Salix alba (talk) 23:01, 8 March 2008 (UTC)[reply]
You may be interested in Voronoi diagrams and related topics. -- 128.104.112.85 (talk) 16:35, 13 March 2008 (UTC)[reply]

Solution to a Second Degree DiffEQ

Hello. I was wondering what the general solution to this differential equation is, if it exists:

d^2y/dt^2 = -K/y^2

...where K is a constant. It seems that the solution to this equation would give me a general free-fall equation, when you let K be the gravitational constant times the mass of the planet or whatever.

Thanks in advance, Phillip (talk) 22:06, 8 March 2008 (UTC)[reply]

Just multiply both sides by y2 and integrate (twice). --Tango (talk) 22:37, 8 March 2008 (UTC)[reply]
Huh? I don't understand this suggestion. How can you evaluate the first integral?
The method I recently learned for solving this kind of ODE (in which the independent variable, say t, does not appear explicitly) involves introducing a new variable v = dy/dt and then expressing d^2y/dt^2 as v(dv/dy).
This gives you a first-order, separable ODE for v(y), and after you solve that you get another first-order, separable ODE for y(t). —Keenan Pepper 01:13, 9 March 2008 (UTC)[reply]
Let me elucidate Tango's response. We have d^2y/dt^2 = -K/y^2. We rearrange this to be y^2 (d^2y/dt^2) = -K. We then integrate both sides twice. -mattbuck (Talk) 01:40, 9 March 2008 (UTC)[reply]
That much I understood. What I don't understand is how to evaluate the integral and get something useful. Using the substitution I said, you can put it in the form where , but that doesn't help. —Keenan Pepper 02:27, 9 March 2008 (UTC)[reply]
Oh, I got it. "by parts" was all you needed to say. Turns out to be perfectly equivalent to the method I described, so think about it whichever way you want. —Keenan Pepper 02:38, 9 March 2008 (UTC)[reply]
I don't think you actually need to do it by parts, I think you can just say:
I'll find some pen and paper in a minute and check that... --Tango (talk) 13:49, 9 March 2008 (UTC)[reply]
Ok, when I try and do it by parts I get a horrible mess... I think I'll just blindly trust that the "multiply by dt and add an integral sign and ignore the fact that it's complete nonsense" method works when used repeatedly. --Tango (talk) 13:57, 9 March 2008 (UTC)[reply]
No no no... There's a reason it's written as instead of (which I don't know how to interpret) or (which would be ). The reason is that treating it that way leads to wrong answers.
Consider the equation you wrote.
If you actually do the integrations, you get
But this is not a solution to the original differential equation (try it!), so you must have done something wrong. What you did wrong was misinterpret d^2y/dt^2 as something other than the second derivative of y.
Now, I thought at one point I tried rearranging the original equation as
integrating symbolically by parts, and getting the same answer I got with the method I proposed, but now I can't duplicate that, so I must have made an error before. I crossed out what I said.
In summary, just do it the way I proposed (which my textbook also recommends, and I've written up at Autonomous system (mathematics)#Solution techniques), because it eventually gives you the correct answer. —Keenan Pepper 16:56, 9 March 2008 (UTC)[reply]
Your point is probably right, but I think your integration is wrong - there should be a term linear in y from the 2nd integral acting on the constant from the first. That won't fix the problem, though. (I'll claim I was trusting Mattbuck, who'll probably claim he was trusting me, and then no-one has to take responsibility for anything!) --Tango (talk) 17:48, 9 March 2008 (UTC)[reply]
That's true. —Keenan Pepper 18:24, 9 March 2008 (UTC)[reply]
It's ok Tango, I don't blame you, I blame the Flying Spaghetti Monster - his noodly appendages messed it up. -mattbuck (Talk) 18:34, 9 March 2008 (UTC)[reply]

First step is to get rid of the constant. Use

as a new independent variable. Now the equation changed from

into

Second step is to substitute a power series

in the differential equation and get

Third step is to compute the terms of this series. Choose and . Compute for i=0,1,2,... . Bo Jacoby (talk) 19:26, 9 March 2008 (UTC).[reply]

So anyway, I continued with the correct method and got an integral that wasn't particularly nice, but not unreasonable, and I got a solution of the form t = f(y). The problem is that the function f is gnarly, and as far as I know, you can't invert it in general. (Mathematica can't solve it.) For one special case, though, you can invert it, and you get an "escape velocity" solution

y(t) = (9K/2)^(1/3) t^(2/3)
The time-reversal of that is also a solution, but other than those two I can't get any in closed form. —Keenan Pepper 18:51, 9 March 2008 (UTC)[reply]

Using the method Keenan Pepper just described, I got just as far. My equation is:
and are arbitrary constants. Perhaps it's somehow solvable by the Lambert W function or something analogous?  Pt (T) 23:53, 9 March 2008 (UTC)[reply]

Note that if d^2y/dt^2 = -K/y^2, then the 'energy', E = v^2/2 - K/y (where v = dy/dt), is a constant of motion, because dE/dt = v(dv/dt) + (K/y^2)(dy/dt) = v(d^2y/dt^2 + K/y^2) = 0. This simplifies matters, because substituting gives a differential equation of the first order in the velocity v, namely v^2/2 - K/y = E or, separating variables, dt = dy/sqrt(2E + 2K/y), so you are left with the integral

t = ∫ dy/sqrt(2E + 2K/y) + C.

Bo Jacoby (talk) 12:38, 11 March 2008 (UTC).[reply]
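A quick numerical cross-check of the "escape velocity" discussion above (my own sketch, not from the thread): the power law y = (9K/2)^(1/3) t^(2/3) has E = 0, and integrating y″ = -K/y² as a first-order system should reproduce it:

```python
# Sketch: check that y(t) = (9K/2)^(1/3) * t^(2/3) solves
# y'' = -K/y^2, by integrating the system (y, v)' = (v, -K/y^2)
# with fixed-step RK4 and comparing against the closed form.
K = 2.0
A = (9 * K / 2) ** (1.0 / 3)           # so y = A * t^(2/3)

def deriv(y, v):
    return v, -K / (y * y)

def rk4_step(y, v, h):
    k1y, k1v = deriv(y, v)
    k2y, k2v = deriv(y + h/2 * k1y, v + h/2 * k1v)
    k3y, k3v = deriv(y + h/2 * k2y, v + h/2 * k2v)
    k4y, k4v = deriv(y + h * k3y, v + h * k3v)
    return (y + h/6 * (k1y + 2*k2y + 2*k3y + k4y),
            v + h/6 * (k1v + 2*k2v + 2*k3v + k4v))

# Start on the exact solution at t = 1 (avoiding the t = 0 singularity):
t, y, v = 1.0, A, 2.0 / 3 * A          # y'(t) = (2/3) A t^(-1/3)
h, steps = 1e-3, 1000                  # integrate out to t = 2
for _ in range(steps):
    y, v = rk4_step(y, v, h)
    t += h

exact = A * t ** (2.0 / 3)
print(y, exact)                        # should agree to many digits
```

One can also check along the way that v²/2 − K/y stays at 0, which is Bo Jacoby's conserved energy for this trajectory.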

Is there a specific name for a town celebrating its 190th anniversary?

TXKay (talk) 22:43, 8 March 2008 (UTC)[reply]

Not a real one. Someone that speaks better Latin than me could make one up. See Anniversary#Latin-derived_numerical_names. --Tango (talk) 22:56, 8 March 2008 (UTC)[reply]
The best I can come up with is this made-up word: centumnonagintennial. The word nonagintennial is in actual but rare use for a 90th anniversary,[1][2] and I've turned that into 190th by prefixing it with centum meaning 100. Nonagintennial comes from Latin nonaginta meaning 90, and 190 in Latin is centum nonaginta. Squashing a numerical prefix into a word is not entirely un-Latin: decemviri, centumviri.  --Lambiam 00:07, 9 March 2008 (UTC)[reply]
Is there a reason they're celebrating that anniversary, instead of waiting a few years for the bicentennial? Black Carrot (talk) 06:34, 9 March 2008 (UTC)[reply]
Since when has anyone needed a reason to have a party? --Tango (talk) 13:49, 9 March 2008 (UTC)[reply]
Since when has anyone sought a reason to postpone a party?Zain Ebrahim (talk) 14:19, 10 March 2008 (UTC)[reply]


March 9

Effects of small sample size in ANOVA

I'm using a repeated measures ANOVA to establish whether there are differences in the density of neurons in particular columns of a brain structure, using three animals, with seven sections per animal and three columns, and to do this I'm using a repeated measures ANOVA with the 21 sections as the subjects and the columns as repeated measures.

As far as I can see, this is a small sample size, but I've found significant differences in density between different depths and different columns (though not for the interaction). As far as I understand, a small sample size increases the chance of a type II error and accepting the null hypothesis when it should be rejected, but I can't find many references to any other effects it has, so my question is: given that I've rejected the null hypothesis that neuron density is the same across sections and columns, and so have avoided type II errors, what other issues is the small sample size likely to cause? I'm having difficulty finding a clear answer anywhere.

Thanks for any help, sorry for the longwindedness Jasonisme (talk) 19:41, 9 March 2008 (UTC)[reply]

The major thing I can think of is that it becomes more difficult to do diagnostics, check for constant variance, autocorrelation, etc. OTOH, those techniques are often misused anyway, especially a priori to inform what kind of analysis to do, which then messes up the type I and II error rates. If you wanted to do some really rigorous diagnostics (cross validation, etc.), a small sample size makes the ecdf quite unsmooth. This is less of a "bad" thing and rather just "unattractive", however. Baccyak4H (Yak!) 14:09, 10 March 2008 (UTC)[reply]

Entropy maximizing distribution

It is well known that the normal distribution maximizes the entropy for a given mean and variance. If I'm not mistaken, it is easy to generalize this to the claim that, given the first 2n moments of a distribution, the one that maximizes the entropy has a density of the form f(x) = e^(P(x)), where P is a polynomial of degree 2n. But what if the number of given moments is odd (say, we constrain the mean, variance and skewness)? Is there a distribution which maximizes the entropy, and does it have a closed form? -- Meni Rosenfeld (talk) 21:16, 9 March 2008 (UTC)[reply]

Just to make sure I'm on the same page: Is the distribution maximizing entropy for a given mean the uniform distribution centered at the mean, or the point mass at the mean? JackSchmidt (talk) 21:24, 9 March 2008 (UTC)[reply]
Uniform. Remember, high entropy = uncertainty = spread. In this context I am interested only in distributions with a proper pdf, and since the entropy can be arbitrarily high when given only the mean, I am excluding this case. -- Meni Rosenfeld (talk) 22:02, 9 March 2008 (UTC)[reply]
Cool. One reason I asked, is because (assuming you meant entropy the way you do, and not its negative which is common in some areas) it seemed like there was no solution for the first possible case, n=1, without some other hypothesis. When I look at the differential entropy page, and compare your suggested method, something doesn't seem quite right. Analysis is not my strong suit, and statistics even less, so I'll assume I am just confused. If you sketch out the even moment case, I can see if I can make sense of it. JackSchmidt (talk) 22:18, 9 March 2008 (UTC)[reply]
this gives the proof for the normal distribution; my proof is a simple extension. Assuming that there is some g = e^(P(x)) with the correct moments, let f be the pdf of any distribution with the same moments. Then
-int(f log f) <= -int(f log g) = -int(g log g).
Thus g has higher entropy. What is it that didn't seem right? -- Meni Rosenfeld (talk) 23:58, 9 March 2008 (UTC)[reply]
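As a numeric sanity check of the two-moment case (a side illustration, not the proof): among the normal, uniform and Laplace densities with equal variance, the normal's differential entropy is the largest. All three entropies have closed forms.

```python
import math

sigma = 2.0  # common standard deviation; the mean does not affect entropy

# Differential entropies (closed forms) of three densities with variance sigma^2:
h_normal = 0.5 * math.log(2 * math.pi * math.e * sigma**2)
h_uniform = math.log(sigma * math.sqrt(12.0))            # uniform of width sigma*sqrt(12)
h_laplace = 1.0 + math.log(2.0 * sigma / math.sqrt(2.0)) # Laplace with scale b = sigma/sqrt(2)

print(h_normal, h_uniform, h_laplace)  # the normal entropy is the largest
```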
Why is int(f*log(f)) >= int(f*log(g))? JackSchmidt (talk) 00:17, 10 March 2008 (UTC)[reply]
This is true for any two distributions, and the proof is on page 3 of the linked paper. -- Meni Rosenfeld (talk) 00:20, 10 March 2008 (UTC)[reply]
It is false for most pairs of distributions. The statement on page 2 has hypotheses. Why should your distributions satisfy the hypotheses? Where have you made use of the even-ness of the number of moments specified? In other words, you haven't really explained anything. I am trying to help, but I do require a little bit of detail. JackSchmidt (talk) 00:25, 10 March 2008 (UTC)[reply]
Hm? The hypotheses are satisfied by any distribution, except for which I'll throw in (that is, I'll maximize over all distributions satisfying it, which I'm pretty sure alters nothing). The evenness is used in the existence of g, which I have not proved, but for an odd number of moments e^(P(x)) (with P of odd degree) is obviously not a distribution. My argument is not symmetric in f and g, as the log of f is not a polynomial. I have provided only a sketch because I thought the details were clear, but you are of course welcome to ask about any which isn't. -- Meni Rosenfeld (talk) 00:37, 10 March 2008 (UTC)[reply]
←"For any two distributions f,g, int(f*log(f)) >= int(f*log(g))" is complete nonsense. "The hypothesis int(f-g)>=0 is satisfied for any pair of distributions" is also complete nonsense. JackSchmidt (talk) 00:47, 10 March 2008 (UTC)[reply]
Ah, I think I see the problem. We might be using distribution in two different ways. If g is a point mass, and f is its derivative, then int(f-g)=-1 is clearly not greater than 0, but f and g are very well behaved distributions. However, I think you want int(f)=int(g)=1, so that they are probability distributions. Maximizing over a set of functions is often not possible, since many nice function spaces are not complete, so I figured you wanted to include distributions as well. Not all distributions have all moments (not even all probability distributions), but I believe there is a class of distributions, either called Schwartz class functions or Schwartz distributions, that have all their moments and, I believe, are determined by them. I think I see what you meant in your responses.
Why does int(fP) = int(gP)? This is clearly false for general distributions (take f and g to be distinct point masses), but perhaps it is true here? Is this because int(fP) is a linear combination of moments of f, moments that were require to be equal to the moments of g? This might actually be basically the proof I was talking about below for moment matching. JackSchmidt (talk) 02:36, 10 March 2008 (UTC)[reply]
Yes, I did mean probability distributions - sorry for not making this clear, it has escaped me that "distribution" can be interpreted more generally. I did mention I am interested in proper pdfs, excluding point masses and the like (unless, of course, the maximum cannot be attained this way for odd m).
Indeed, int(fP) = int(gP) because P is a polynomial of degree m, thus these are linear combinations of the first m moments, which are assumed to exist and be equal for the two functions. -- Meni Rosenfeld (talk) 14:16, 10 March 2008 (UTC)[reply]
There is some technique called moment matching: there is some simple formula which, given a sequence of moments corresponding to the moments of a nice (Schwartz distribution maybe?) function, gives back the function. It's like a Fourier transform or a Laplace transform or something. Does that ring a bell at all? I had some book that discussed this in very clear language, but I don't recall which book. It was some sort of formal statistics book, so really an analysis book that focussed on finite measure spaces. JackSchmidt (talk) 22:56, 9 March 2008 (UTC)[reply]
It seems like we need all moments for that, and I don't see how we would find the entropy-maximizing moments based on the first few. -- Meni Rosenfeld (talk) 23:58, 9 March 2008 (UTC)[reply]
My recollection is that the formula is so nice that you can show that your choice of all moments (subject to choosing the first few) maximizes entropy amongst all all distributions which both have all their moments and are determined by them (which I think includes Schwartz distributions, so should be general enough). I think it was something along the lines of giving the log of the distribution as a rapidly converging power series where estimates would be easy to make. JackSchmidt (talk) 00:17, 10 March 2008 (UTC)[reply]

There is an example on page 4 (of the paper Meni provided), on exponential distributions. Given the mean, the exponential distribution would maximize the entropy. For odd n, e^(P(x)) with P of odd degree can be a distribution (on a half-line) in the same way the exponential distribution is. (Igny (talk) 00:53, 10 March 2008 (UTC))[reply]

But the exponential only maximizes entropy given the mean and that it is only supported on the positives. I do not desire that restriction. -- Meni Rosenfeld (talk) 10:33, 10 March 2008 (UTC)[reply]
But for unbounded distributions with fixed mean there is no maximum of the entropy. (Igny (talk) 13:09, 10 March 2008 (UTC))[reply]
Yes, as I have said before, I have excluded the case of only the mean being given for this reason. -- Meni Rosenfeld (talk) 14:02, 10 March 2008 (UTC)[reply]


March 10

I integrate over

On one hand,

On the other hand, the same integral is equal to

Have fun.(Igny (talk) 00:05, 10 March 2008 (UTC))[reply]

Since , and thus rather than . is correct. -- Meni Rosenfeld (talk) 00:17, 10 March 2008 (UTC)[reply]
Oh well, that was fast. I didn't hide it well enough. (Igny (talk) 00:22, 10 March 2008 (UTC))[reply]

In general, the value of a double integral may depend on the order in which you do the integration (though not in this case). 163.1.148.158 (talk) 10:27, 10 March 2008 (UTC)[reply]

Can you give an example? Our Multiple integral article doesn't say a lot about this, but I recall that the equality holds under fairly mild conditions. -- Meni Rosenfeld (talk) 10:38, 10 March 2008 (UTC)[reply]
From Hilary Priestley's book: the function f(x,y) = (x^2 − y^2)/(x^2 + y^2)^2 on the unit square will do it. It's pretty clear what's going to happen, I think. Or on the same region, . Even on may do it. These are examples where Fubini's theorem tells you the function is not in L^1. One thing they tell you is that a function can look fairly innocuous, and still fail to be integrable over a compact region. Edit: a sufficient condition, from Fubini/Tonelli, for the double integrals of f to be equal is that one of the repeated integrals of |f| exists. 163.1.148.158 (talk) 11:01, 10 March 2008 (UTC)[reply]
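Taking the unit-square example to be f(x,y) = (x^2 − y^2)/(x^2 + y^2)^2, the textbook standard for this phenomenon (an assumption here), the inner integrals have elementary antiderivatives — y/(x^2+y^2) in y and −x/(x^2+y^2) in x — so the two iterated integrals come out as ±π/4. A quick check, doing the inner step in closed form and the outer step numerically:

```python
import math

def midpoint(g, a, b, n=20000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

# f(x,y) = (x^2 - y^2)/(x^2 + y^2)^2 on the unit square.
# Antiderivative in y is y/(x^2 + y^2), so the inner dy-integral is 1/(1 + x^2);
# antiderivative in x is -x/(x^2 + y^2), so the inner dx-integral is -1/(1 + y^2).
I_dy_dx = midpoint(lambda x: 1.0 / (1.0 + x * x), 0.0, 1.0)
I_dx_dy = midpoint(lambda y: -1.0 / (1.0 + y * y), 0.0, 1.0)

print(I_dy_dx, I_dx_dy, math.pi / 4)  # ~0.785 and ~-0.785: the two orders disagree
```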
Another example, from Reed and Simon, is f defined on [0,∞) × [0,∞) as
f(x,y) = 1 if x ≤ y < x+1; f(x,y) = −1 if x+1 ≤ y < x+2; f(x,y) = 0 otherwise.
This function is pretty easy to visualize. It is only non-zero in the first quadrant, between the lines y = x and y = x + 2. It is 1 between y = x and y = x + 1, and −1 between y = x + 1 and y = x + 2. It's not hard to check that f is not integrable, and that the double integrals are different. 134.173.93.127 (talk) 06:07, 11 March 2008 (UTC)[reply]
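Taking the step function to be f = 1 for x ≤ y < x+1 and −1 for x+1 ≤ y < x+2 (zero elsewhere) — a standard example of this type, assumed here — the two iterated integrals can be estimated numerically; they come out near 0 and 1:

```python
def f(x, y):
    if x <= y < x + 1:
        return 1.0
    if x + 1 <= y < x + 2:
        return -1.0
    return 0.0

def midpoint(g, a, b, n):
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

# f(x, .) vanishes outside [x, x+2), and f(., y) vanishes outside (y-2, y],
# so these finite inner ranges cover everything:
def inner_dy(x):       # integrate over y first: the +1 and -1 strips cancel
    return midpoint(lambda y: f(x, y), 0.0, x + 2.5, 4000)

def inner_dx(y):       # integrate over x first: min(y,1) - min(max(y-1,0),1)
    return midpoint(lambda x: f(x, y), 0.0, y + 0.5, 4000)

# Outer integrals; both integrands vanish beyond the ranges used here.
I1 = midpoint(inner_dy, 0.0, 3.0, 600)   # = 0
I2 = midpoint(inner_dx, 0.0, 3.0, 600)   # = int_0^1 y dy + int_1^2 (2-y) dy = 1
print(I1, I2)
```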

Question

What is the correct pronunciation of "kilometre": "ki-loh-mee-tre" or "ki-lo-ma-ta"? 58.168.209.250 (talk) 01:20, 10 March 2008 (UTC)[reply]

You are likely to receive more helpful responses at Wikipedia:Reference_desk/Language. Michael Slone (talk) 01:29, 10 March 2008 (UTC)[reply]
I'd go for kill-om-e-tur. -mattbuck (Talk) 09:32, 10 March 2008 (UTC)[reply]
Some think it should be keel-o-meet-ur or keel-om-eat-ur.87.102.94.48 (talk) 15:56, 10 March 2008 (UTC)[reply]
And some pronounce it "stupid" :-) --Carnildo (talk) 19:49, 10 March 2008 (UTC)[reply]
Kill-'em-eat-her? —Keenan Pepper 20:00, 10 March 2008 (UTC)[reply]
Kill-'im, eat her - the cannibals wedding...87.102.94.48 (talk) 22:31, 10 March 2008 (UTC)[reply]
As he said, you're more likely to receive a helpful response at the Language desk. :) Black Carrot (talk) 00:20, 11 March 2008 (UTC)[reply]
Try this and this link (available at Merriam-Webster online dictionary through two red loudspeaker icons at this page). --CiaPan (talk) 15:12, 11 March 2008 (UTC)[reply]

Rings

I was wondering why mathematical rings are called "rings"? I can't think of any way in which rings are more "ringlike" than other algebraic systems. What's the history behind the name? Thanks. --Bmk (talk) 04:55, 10 March 2008 (UTC)[reply]

According to Ring theory#History, "The term ring (Zahlring) was coined by David Hilbert in the article Die Theorie der algebraischen Zahlkörper, Jahresbericht der Deutschen Mathematiker Vereinigung, Vol. 4, 1897." —Bkell (talk) 06:17, 10 March 2008 (UTC)[reply]
Ah, thanks - I was looking in the Ring (mathematics) article, and I didn't notice the article on Ring theory. Anyone know why Hilbert called them rings? I don't think I have access to that article. --Bmk (talk) 06:54, 10 March 2008 (UTC)[reply]
A review of the English edition of Hilbert's article[3] contains the phrase: "even though Hilbert uses the word "(Zahl)ring" for orders on algebraic number fields, this must not be taken as evidence that Hilbert employs here parts of our current algebraic terminology the way we would do it; rather than referring to a general algebraic structure, the word "ring" is used for sets of algebraic integers which form a ring in our modern sense of the word." I'm not quite sure what to make of this; it sounds a bit like the statement that the works of Shakespeare were actually not written by Shakespeare but by another person of the same name. It also does not clarify why Hilbert chose to use the word "Zahlring" for these sets of algebraic integers, but it may be a piece in the puzzle. My first speculation on reading the question was that it might have something to do with the cyclic structure of the rings Z/nZ for n > 1, but that is less likely in view of the quotation.  --Lambiam 08:50, 10 March 2008 (UTC)[reply]
Dictionary.com doesn't have an etymology on it, but I'm impressed they even have the definition. Black Carrot (talk) 09:25, 10 March 2008 (UTC)[reply]
Searching for -ring group etymology- on Google, however, does turn up this, where he says, "Short for Zahlring (German for number ring). Think of Z[2^(1/3)]. Here the generating element loops around like a ring." Black Carrot (talk) 09:26, 10 March 2008 (UTC)[reply]
Sounds like that's probably the explanation for the name. Thanks folks! --Bmk (talk) 17:35, 10 March 2008 (UTC)[reply]

f(x)=1^x&g(x)=(-1)^x

I have two questions here: is the function f(x)=1^x a fixed-point function? And what is the value of g(x) if x is an irrational number like 2^(1/2)? Thank you. Husseinshimaljasimdini (talk) 13:36, 10 March 2008 (UTC)[reply]

Exponentiation over the complex numbers is inherently a multivalued function. In some cases we can choose a nice branch and it will be single-valued; in other cases we cannot. For 1^x the obvious choice of branch gives 1^x = 1, which is a constant function. For (−1)^x there is no such obvious choice. Its values are e^(iπ(2k+1)x) for integer k. -- Meni Rosenfeld (talk) 14:00, 10 March 2008 (UTC)[reply]
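One way to see the multivaluedness concretely: writing (−1)^x = e^(x log(−1)) with log(−1) = iπ(2k+1), the candidate values at x = √2 all lie on the unit circle and are pairwise distinct. A short sketch:

```python
import cmath
import math

x = math.sqrt(2)  # an irrational exponent

# log(-1) = i*pi*(2k+1) for integer k, so (-1)^x = exp(i*pi*(2k+1)*x):
values = [cmath.exp(1j * math.pi * (2 * k + 1) * x) for k in range(-3, 4)]

for v in values:
    print(v, abs(v))  # every value has modulus 1

# For irrational x the angles pi*(2k+1)*x are pairwise distinct mod 2*pi,
# so (-1)^sqrt(2) takes infinitely many values on the unit circle.
```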

Learning Calculus fast

Anyone have a strategy of getting a decent understanding of Calculus within about a months time? 131.91.80.75 (talk) 15:46, 10 March 2008 (UTC)[reply]

Get a decent tutor.(Igny (talk) 16:20, 10 March 2008 (UTC))[reply]
I grabbed my Additional Mathematics textbook and did every single problem in it but I don't think it's for everyone. x42bn6 Talk Mess 16:27, 10 March 2008 (UTC)[reply]
If you’re just looking for the concept as to what it is, and not the computational ability for doing stuff with it, I’d recommend the book “Calculus for cats.” It’s short, and uses a bit of humor, and I found it to be exceptional at getting across the concepts of calculus. Again, though, it is not a textbook: it’s more for somebody that’s curious about calculus, but that doesn’t actually plan to use it, or a supplement to a textbook to explain concepts that textbooks don’t do well. GromXXVII (talk) 23:18, 10 March 2008 (UTC)[reply]

Special functions

Consider:

f'(x) = f(x)

The function f(x) = e^x is its own derivative.

But what about:

f''(x)/(1 + f'(x)^2)^(3/2) = f(x)

The function is its own curvature????? --Goon Noot (talk) 21:43, 10 March 2008 (UTC)[reply]

Well yes. But you don't really have that function as yet.. only a differential equation - I wonder what solution(s) would look like......87.102.94.48 (talk) 22:29, 10 March 2008 (UTC)[reply]
Yes, I think that's the question that's being asked. We all know a function which is equal to its derivative, but is there a function which is equal to its curvature? The answer: I have no idea! --Tango (talk) 22:40, 10 March 2008 (UTC)[reply]
Of course there is a function (infinitely many, actually). There's the trivial zero function; another starts with . Whether it has a closed form is another matter entirely. Does anyone know of a version of Plouffe's inverter which gives a function based on its Taylor coefficients? -- Meni Rosenfeld (talk) 23:05, 10 March 2008 (UTC)[reply]
The zero function would be a dot at (0,0)? There's also a straight line at y=infinity. Is that right?87.102.14.194 (talk) 10:47, 11 March 2008 (UTC)[reply]
Well, yes, I was ignoring the 0 function. Any idea what the radius of convergence is for that power series? A function that's defined over the whole real line/complex plane would be nice. --Tango (talk) 00:34, 11 March 2008 (UTC)[reply]
I have no idea about this question other than if it has a full Taylor series (i.e., a positive radius of convergence), I would try to express the problem of finding it as one of recurrence relations. But otherwise no idea. Neat question though. Baccyak4H (Yak!) 01:56, 11 March 2008 (UTC)[reply]
I tried entering a power series expansion, and amazingly enough it turned ugly within the first couple of terms. If a_0 and a_1 are the constant and linear coefficients respectively (and are used to take care of the two degrees of freedom the DE allows) the expansion starts with (assuming I got the algebra right):

Don't ask me what the radius of convergence is like, though. I'm going to go out on a limb and suggest that the function doesn't have a nice closed form. However, like Baccyak4H says, you could always get the recurrence relation going so you at least know something about the coefficients. Confusing Manifestation(Say hi!) 03:53, 11 March 2008 (UTC)[reply]
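For the graph form y = f(x), "curvature equals the function" reads y'' = y(1 + y'^2)^(3/2). A quick RK4 integration (a sketch; the initial conditions y(0) = 1, y'(0) = 0 are an arbitrary choice) shows the convex, faster-than-cosh growth:

```python
import math

def deriv(y, v):
    """State derivative for y'' = y * (1 + y'^2)^(3/2), written as a system."""
    return v, y * (1.0 + v * v) ** 1.5

def rk4(y, v, h, steps):
    """Classical fourth-order Runge-Kutta; returns the sampled y values."""
    ys = [y]
    for _ in range(steps):
        k1 = deriv(y, v)
        k2 = deriv(y + 0.5 * h * k1[0], v + 0.5 * h * k1[1])
        k3 = deriv(y + 0.5 * h * k2[0], v + 0.5 * h * k2[1])
        k4 = deriv(y + h * k3[0], v + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        ys.append(y)
    return ys

h = 1e-3
ys = rk4(1.0, 0.0, h, 500)     # y(0) = 1, y'(0) = 0, integrated out to x = 0.5
print(ys[-1], math.cosh(0.5))  # grows a bit faster than cosh, as expected
```

The extra factor (1 + y'^2)^(3/2) ≥ 1 is why the solution dominates cosh, which solves the plain y'' = y with the same initial data.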
It occurred to me that it might be more useful to work with the parametric form together with some extra constraint to fix the parameterization. With x'=1 you get the f(x) form above. With y'=1 you get , which has a closed-form solution for x' (not x): . Numerically integrated and plotted sideways (x(y),y) it looks like this:
for the case y0 = 1, x(1) = 0. Other values of y0 and x(y0) just scale and translate it. I assume the general solution can be assembled from pieces of this. -- BenRG (talk) 16:10, 11 March 2008 (UTC)[reply]
Oops, I think I got this backwards—f was supposed to be the reciprocal of its radius of curvature. In that case the parametric form is and I get . This does seem to have an antiderivative in terms of elliptic integrals, but it's fairly nasty and I'm not sure if it works for all C. I think Meni Rosenfeld's series is the case C = −3. For C > 0 you get a sinusoidal curve that looks like it could repeat across all of . -- BenRG (talk) 18:09, 11 March 2008 (UTC)[reply]
I'm guessing that the sinusoidal curve is very roughly like the cycloid curve, except more curvy at y=zero .. anyway how about using f(x)=a0+a1 sin x + a2 sin 2x +etc and attempting to solve in a similar fashion to the methods above that gave the first few terms easily.. 87.102.74.53 (talk) 19:07, 11 March 2008 (UTC)[reply]
Might just be me but I'm getting x'^3(x'y'' − y'x'') = y^2(x'^2 + y'^2)^3 (where x' = dx/dt) - can someone point out where and if I'm going wrong.. kindly please. 87.102.74.53 (talk) 18:03, 11 March 2008 (UTC)[reply]
I don't think this can be right because it's not symmetric in x' and y'. -- BenRG (talk) 18:09, 11 March 2008 (UTC)[reply]
I just used the equation for curvature and put in the parametric derivatives.. maybe I made an obvious mistake - could you give a first step of what you did so I know I'm not barking up the wrong tree?87.102.74.53 (talk) 18:13, 11 March 2008 (UTC)[reply]
OOPS sorry I get x'^3(x'y'' − y'x'')^2 = y^2(x'^2 + y'^2)^3.. hang on a minute —Preceding unsigned comment added by 87.102.74.53 (talk) 18:15, 11 March 2008 (UTC)
Ignore (bollocks), I get the same; must be drunk or getting old, ignore previous. 87.102.74.53 (talk) 18:18, 11 March 2008 (UTC)[reply]

Mathematica churns for a few minutes, spits out several warnings exhorting us (humans) to check the answer by hand and various other terrible diagnostics, then says:


       {{y[x] ->    InverseFunction[(\[ImaginaryI] Sqrt[
        2] ((1 + 
             C[1]) EllipticE[\[ImaginaryI] ArcSinh[
              Sqrt[1/(2 - 2 C[1])] #1], (-1 + C[1])/(1 + C[1])] - 
          EllipticF[\[ImaginaryI] ArcSinh[
             Sqrt[1/(2 - 2 C[1])] #1], (-1 + C[1])/(
           1 + C[1])]) Sqrt[(2 + 2 C[1] - #1^2)/(1 + C[1])] Sqrt[(
        2 - 2 C[1] + #1^2)/(1 - C[1])])/(Sqrt[1/(1 - C[1])] Sqrt[
        2 - 2 C[1] + #1^2]
         Sqrt[-2 (1 + C[1]) + #1^2]) &][-\[ImaginaryI] x + 
    C[2]]}, {y[x] -> 
  InverseFunction[(\[ImaginaryI] Sqrt[
        2] ((1 + 
             C[1]) EllipticE[\[ImaginaryI] ArcSinh[
              Sqrt[1/(2 - 2 C[1])] #1], (-1 + C[1])/(1 + C[1])] - 
          EllipticF[\[ImaginaryI] ArcSinh[
             Sqrt[1/(2 - 2 C[1])] #1], (-1 + C[1])/(
           1 + C[1])]) Sqrt[(2 + 2 C[1] - #1^2)/(1 + C[1])] Sqrt[(
        2 - 2 C[1] + #1^2)/(1 - C[1])])/(Sqrt[1/(1 - C[1])] Sqrt[
        2 - 2 C[1] + #1^2]
         Sqrt[-2 (1 + C[1]) + #1^2]) &][\[ImaginaryI] x + C[2]]}}


Enjoy, Robinh (talk) 08:48, 11 March 2008 (UTC)[reply]

Could someone convert that back into maths? Or maybe not - it really looks like mathematica(TM) has 'gone insane' over this question - specifically 'square root insanity' would be my diagnosis... Poor old computer.87.102.14.194 (talk) 08:57, 11 March 2008 (UTC)[reply]

What does the graph look like????--Goon Noot (talk) 10:06, 11 March 2008 (UTC)[reply]

If you mean Meni Rosenfeld's answer.. it looks like a 'steep' parabola, or a cosh function.. that sort of shape (assuming the curvature is always considered positive, i.e. magnitude). 87.102.14.194 (talk) 10:35, 11 March 2008 (UTC)[reply]

Someone asked about "a function that's defined over the whole real line/complex plane". Technically Meni Rosenfeld's function (as I'm now calling it), or in general functions that satisfy the equations given by Confusing Manifestation, will work for imaginary numbers. But what if the complex function is used to describe a scalar value, i.e. f(x) = a + ib, g(x) = sqrt(a^2 + b^2), and the curvature of that scalar at x is equal to the scalar at x? Is that impossible to analyse, or am I missing some more maths to learn? 87.102.14.194 (talk) 11:02, 11 March 2008 (UTC)[reply]

What does complex curvature mean??--Goon Noot (talk) 15:48, 11 March 2008 (UTC)[reply]

Difficult to give a 'real world' equivalent - as curvature is, as you know, the radius of a circle corresponding to the rate of change of slope at a point, a complex curvature means the circle has a complex radius. If the slope is changing complexly (i.e. the complex part is changing) then the radius of curvature will have a complex coefficient. Obviously this won't happen if x and y are always real. Did that explain at all, or help? 87.102.74.53 (talk) 18:09, 11 March 2008 (UTC)[reply]
Is it just me, or is Jakob Bernoulli's "Spira Mirabilis" its own evolute? So the equation of this curve is what he is asking about, right? A Real Kaiser (talk) 05:02, 12 March 2008 (UTC)[reply]
Mmh, I assumed they meant radius of curvature = y (rectangular), but if radius of curvature = r (polar) you are probably right (or very close). Luckily Spira_mirabilis#Properties saves me the bother of having to work it out; you are right, it is indeed its own evolute. 87.102.17.32 (talk) 13:40, 12 March 2008 (UTC)[reply]
As far as I can tell, I'm certain that the radius of curvature of (sin(x)e^x, cos(x)e^x) is not e^x, and I can't find an exact solution. 87.102.17.32 (talk) 17:21, 12 March 2008 (UTC)[reply]
The functions e^z, e^(kz) (e.g. the function used as radius when z is the angle in polar coords) have radius of curvature sqrt(2)e^z, sqrt(1+k^2)e^(kz) respectively, i.e. the radius of curvature is always proportional to the function, but not the same.. 87.102.32.239 (talk) 23:15, 12 March 2008 (UTC)[reply]
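That proportionality is easy to verify numerically: for the logarithmic spiral r = e^(kθ), a finite-difference evaluation of the parametric curvature formula reproduces the radius of curvature sqrt(1 + k^2)e^(kθ). A sketch (k = 0.3 and θ = 1 are arbitrary choices):

```python
import math

def point(theta, k):
    """Cartesian point on the polar curve r = e^(k*theta)."""
    r = math.exp(k * theta)
    return r * math.cos(theta), r * math.sin(theta)

def radius_of_curvature(theta, k, h=1e-4):
    """Radius of curvature (x'^2+y'^2)^(3/2) / |x'y'' - y'x''| via central differences."""
    (x0, y0), (x1, y1), (x2, y2) = point(theta - h, k), point(theta, k), point(theta + h, k)
    xp, yp = (x2 - x0) / (2 * h), (y2 - y0) / (2 * h)
    xpp, ypp = (x2 - 2 * x1 + x0) / h**2, (y2 - 2 * y1 + y0) / h**2
    return (xp * xp + yp * yp) ** 1.5 / abs(xp * ypp - yp * xpp)

k, theta = 0.3, 1.0
rho = radius_of_curvature(theta, k)
print(rho, math.sqrt(1 + k * k) * math.exp(k * theta))  # the two values agree
```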

To GOON NOOT - I realised the answers I've been giving were based on radius of curvature, not curvature - i.e. the reciprocal; apologies for any confusion. 87.102.17.32 (talk) 16:15, 12 March 2008 (UTC)[reply]

One way of looking at the problem is to consider a parametric representation of the curve, in which the point (x(s), y(s)) is a function of the arclength s. If we also introduce φ(s), giving the direction of the curve expressed as the angle of the tangent with the x-axis, then, writing x = x(s) etc. as usual,
dx/ds = cos φ,  dy/ds = sin φ,  dφ/ds = κ,
where κ is the curvature, which may itself be a function of s, x, y, and φ. Here it is given that |κ| = y, which is a bit indeterminate: whenever y reaches 0, in general two different continuations are possible.
Assuming κ = y, we have
dy/dφ = (dy/ds)/(dφ/ds) = sin φ / y,
which is separable and can be solved to give
½ y^2 + cos φ = C.   (*)
This is the generic form of the integral curves of the vector field assigning to the point (φ,y) the directional vector (y,sin φ). Each value of C > −1 in (*) gives one curve. Assume we start in an area where −1 < C < 1. By inspection of the vector field diagram, we see anti-clockwise cycles around (π,0), which correspond to a sinus-like curve in the x-y-plane wiggling around the x-axis in the negative direction. As we move away from φ = π and pass φ = π/2 (or φ = 3π/2), the wiggling gets more pronounced and becomes like meandering.
If we start in a position with C > 1, y cannot vanish, so the sign of y is invariant. According to the sign, φ will monotonically increase or decrease, and in the x-y-plane we get to see cycles. I haven't analyzed the critical case C = 1.
Another approach I have not further looked into is to put v = dy/ds, so that on the one hand
dv/ds = cos φ · dφ/ds = y cos φ,
while also
dv/ds = (dv/dy)(dy/ds) = v dv/dy,
which combine to give another separable equation:
v dv / √(1 − v^2) = y dy.
I hope I did not make many mistakes, since this was scribbled on a (too small) napkin. --Lambiam 18:12, 13 March 2008 (UTC)[reply]
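The cycles around (π, 0) can be checked numerically. For the vector field assigning to the point (φ, y) the directional vector (y, sin φ), the quantity C = ½y² + cos φ has zero derivative along trajectories (y·sin φ − sin φ·y = 0), so an RK4 integration should conserve it. A sketch (the starting point is an arbitrary choice with −1 < C < 1):

```python
import math

def rk4_step(phi, y, h):
    """One RK4 step for the field (phi', y') = (y, sin(phi))."""
    def f(p, q):
        return q, math.sin(p)
    k1 = f(phi, y)
    k2 = f(phi + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
    k3 = f(phi + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
    k4 = f(phi + h * k3[0], y + h * k3[1])
    return (phi + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y   + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def C(phi, y):
    # Conserved along trajectories: d/ds (y^2/2 + cos phi) = y sin(phi) - sin(phi) y = 0
    return 0.5 * y * y + math.cos(phi)

phi, y = math.pi - 1.0, 0.5   # a starting point with -1 < C < 1
c0 = C(phi, y)
for _ in range(20000):
    phi, y = rk4_step(phi, y, 1e-3)
print(c0, C(phi, y))  # conserved to high accuracy: closed cycles about (pi, 0)
```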

division by zero

We all learn in elementary school that any number divided by itself is 1. Later, we learn that division by zero is undefined. I wondered why it is undefined, because if any number divided by itself is one, shouldn't 0/0 be 1? I thought about this...

Consider r/x, where r is any positive real number.
As x → 0+, r/x → +∞.
Also, as x → 0−, r/x → −∞.

So does r/0 equal both +∞ and −∞?
J.delanoygabsadds 23:28, 10 March 2008 (UTC)[reply]
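The two disagreeing one-sided limits are easy to observe numerically (a Python sketch; the language itself simply refuses to evaluate r/0):

```python
r = 1.0

# Approach 0 from the right: r/x blows up towards +infinity...
for k in (2, 4, 6, 8):
    print(10.0 ** -k, r / 10.0 ** -k)

# ...but from the left it blows up towards -infinity:
print(-1e-8, r / -1e-8)

# Since the one-sided limits disagree, no single value of r/0 is
# consistent with both; Python raises an error instead:
try:
    r / 0
except ZeroDivisionError as e:
    print("undefined:", e)
```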

We have an article on Division_by_zero. Black Carrot (talk) 23:29, 10 March 2008 (UTC)[reply]
Sorry, I didn't realize that... J.delanoygabsadds 23:31, 10 March 2008 (UTC)[reply]
No problem at all. Feel free to expand on your question if the article isn't detailed enough. Black Carrot (talk) 00:06, 11 March 2008 (UTC)[reply]

March 11

Topology

I was curious about the definition of a topology. Going by the article, it says after the definition that "All sets in T are called open; note that not all subsets of X are in T".

Considering this:

1) Isn't T equivalent to the powerset of X? Or is this just the case for simple examples like this?

2) Isn't "not all subsets of X are in T" incorrect in the above example? Damien Karras (talk) 09:24, 11 March 2008 (UTC)[reply]

It means 'not all subsets of X need be in T'. If they all are, we call T the discrete topology on X. Algebraist 10:31, 11 March 2008 (UTC)[reply]
Ok, I think I get you. Apologies for the informality; I'm trying to establish a "visual" idea of what a topology is. Start with a set of points (ordered pairs (m,n)) on a plane. If we have a set of points T within X that are bounded within, say, a circle, we can have a smaller subset of points not in T - i.e. a 2-dimensional projection of a torus (or a topologically equivalent shape)? Damien Karras (talk) 12:28, 11 March 2008 (UTC)[reply]
is a valid topology on the plane. Does that answer your question? Any collection of subsets satisfying the axioms of a topology is a topology. --Tango (talk) 12:52, 11 March 2008 (UTC)[reply]
Also note that that is only one way of defining it; some texts take T as a family of neighborhoods and derive the open sets from there. GromXXVII (talk) 11:35, 11 March 2008 (UTC)[reply]

Well, a topology just describes what open sets look like in your space (call it X). In general, a topology will not include all possible subsets of your space X. The power set of X is one of the possible topologies on X but it is not the only one. So in the power set, all subsets are open but in a general topology, this may not be true. For example, here is another topology on X, {the empty set, X} with only two elements. You can verify the axioms of a topology and prove that this is a perfectly valid topology (called the trivial topology). In fact this is the smallest topology and the power set topology is the largest topology in the sense that any other topology will be contained in the power set topology and contains the trivial topology. Another not obvious example of a topology would be that a set in R is open if it contains the number zero. Obviously, not all sets are open under this topology. Hope this helps!A Real Kaiser (talk) 04:55, 12 March 2008 (UTC)[reply]
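For finite sets the axioms can be checked by brute force, which makes the examples above concrete (the specific sets here are illustrative choices, including a finite analogue of the "contains zero" topology):

```python
from itertools import combinations

def powerset(X):
    xs = list(X)
    return {frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)}

def is_topology(X, T):
    """Axioms on a finite set: contains {} and X, and is closed under
    (pairwise, hence all finite) unions and intersections."""
    X = frozenset(X)
    T = {frozenset(t) for t in T}
    if frozenset() not in T or X not in T:
        return False
    return all(a | b in T and a & b in T for a in T for b in T)

X = {0, 1, 2}
trivial = {frozenset(), frozenset(X)}
discrete = powerset(X)
# "open iff empty or contains 0" - a finite analogue of the example above:
contains_zero = {s for s in powerset(X) if 0 in s} | {frozenset()}
bad = {frozenset(), frozenset(X), frozenset({1}), frozenset({2})}  # union {1,2} missing

print(is_topology(X, trivial), is_topology(X, discrete),
      is_topology(X, contains_zero), is_topology(X, bad))
# -> True True True False
```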

It says in the article "all sets in T are open", yet what about the closed unit disc? Is that not a valid topology? I cannot include a point that exceeds a distance of one, even by the tiniest amount, so it's closed? Damien Karras (talk) 08:28, 12 March 2008 (UTC)[reply]
I believe you’re assuming a metric on the real numbers, which induces a different topology from the one you want to look at. GromXXVII (talk) 11:07, 12 March 2008 (UTC)[reply]
Don't confuse open sets in topology with open sets in real analysis. In real analysis you start with a distance function (a metric) and define an open set to be one in which every point in the set has a neighborhood in the set. Then you can prove various things about open sets: the empty set is open, the whole space is open, a union of open sets is open, a finite intersection of open sets is open, a function is continuous iff the preimage of any open set under the function is open. In topology you turn that on its head: you define the open sets as any set of subsets that contains the empty set and the whole space and is closed under arbitrary union and finite intersection, you define continuity of a function by the open-preimage condition, and so on. The reason is that in topology you're studying properties that are invariant under homeomorphisms, and (as a nontrivial theorem in analysis and a trivial theorem in topology) the open sets are exactly the structure on the space that's preserved by homeomorphisms. Of course, there turn out to be topologies that don't arise from metric spaces. -- BenRG (talk) 11:19, 12 March 2008 (UTC)[reply]
Put simply, all sets in T are open because that's what "open" means. What you generally think of as open is "open with respect to the topology induced by the Euclidean metric", but in topology "open" is just the name we give to members of the set T. --Tango (talk) 14:13, 12 March 2008 (UTC)[reply]
This may be nitpicking, but in the "open if it contains the number zero" example, doesn't the empty set have to be open (as well as closed) in any topology ? Should that be "open if it contains the number zero or is the empty set" ? Gandalf61 (talk) 07:08, 12 March 2008 (UTC)[reply]
Yes, the empty set is an open set, so that would need to be included as well. In the example, to be even more nitpicky, he didn’t say if and only if, and I bet you could derive that the whole space was closed, and so the empty set would be open anyway. I don’t think there are any other open sets though. GromXXVII (talk) 11:07, 12 March 2008 (UTC)[reply]
I did wonder whether specifically saying the empty set was open would be redundant in the "open if it contains the number zero" example, but I don't see how that can be. If you are not told that the empty set is open or (equivalently) that R is closed then I don't see how you can derive either of these propositions because:
  1. You can't construct R from the intersection of closed sets because one of these sets must be R itself, which you don't know is closed.
  2. You can't construct R from the union of closed sets because one of these sets must contain 0, and so will be open.
  3. You can't construct the empty set from the intersection of open sets because all the open sets that are given contain 0.
  4. You can't construct the empty set from the union of open sets because the empty set is only the union of other copies of itself, and you don't know that it is open.
Am I missing something here ? Gandalf61 (talk) 11:46, 12 March 2008 (UTC)[reply]
The best thing to do would be to specify that it is open. I still think the construction is possible, although not as easy as I thought it was. The empty set and the whole set are always open and closed for a reason – it’s not arbitrary. So something should break down by not having the empty set open: and the construction could come from that. GromXXVII (talk) 12:45, 12 March 2008 (UTC)[reply]
They are always open because the first axiom of a topology says so. If they could be proven to be open, you wouldn't need to take that as an axiom. What would break down is that you wouldn't have a topology so none of the results we have about topological spaces would apply. There is one way of "proving" the first axiom: The empty set is the empty union of open sets and the whole space is the empty intersection of open sets. That's just a matter of defining the empty union and intersection appropriately rather than defining a topology appropriately - it's still a matter of definition. --Tango (talk) 14:09, 12 March 2008 (UTC)[reply]
Well, you guys are right that I didn't specify the empty set being open. But I didn't need to say if and only if because I am defining my open sets to be all subsets of R which contain the number zero. In definitions, we don't "have to" say if and only if, because a definition already includes that. A set is open therefore it contains zero, by definition. A set contains zero therefore it is open by definition. So, in a definition, I don't need to specify if and only if. So here is my formal definition of a topology on R. A set in R is said to be open if it is either empty or if it contains the number zero.A Real Kaiser (talk) 20:02, 12 March 2008 (UTC)[reply]

math

How can a polynomial divided by a binomial be used in real-life situations? —Preceding unsigned comment added by Lighteyes22003 (talkcontribs) 16:42, 11 March 2008 (UTC)[reply]

Rational functions#Applications has a few uses. I can't immediately think of any direct real world applications, but rational functions (of which a polynomial divided by a binomial is a special case) are very useful in various areas of maths which do have real world applications. --Tango (talk) 16:50, 11 March 2008 (UTC)[reply]
You can use this to help you find the roots of a polynomial. If you have, say, a fifth-degree polynomial, there is no general algebraic way to find its roots. But if you can discover one root somehow (maybe by guessing), say $x = r$, you can divide the fifth-degree polynomial by $x - r$ to get a fourth-degree polynomial, which you can solve algebraically (though it's messy; see quartic equation). —Bkell (talk) 18:31, 11 March 2008 (UTC)[reply]
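As a sketch of the deflation step Bkell describes (the function name and coefficients are mine, chosen for illustration), here is how dividing out a known root works with Horner's scheme:

```python
def deflate(coeffs, r):
    """Divide the polynomial with the given coefficients (highest degree
    first) by (x - r) using Horner's scheme.  Returns (quotient, remainder);
    the remainder is zero exactly when r is a root."""
    quotient = []
    acc = 0
    for c in coeffs:
        acc = acc * r + c
        quotient.append(acc)
    return quotient[:-1], quotient[-1]

# x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3); divide out the known root x = 1
q, rem = deflate([1, -6, 11, -6], 1)
print(q, rem)  # [1, -5, 6] and 0, i.e. quotient x^2 - 5x + 6, remainder 0
```

The quotient is one degree lower, so repeating this (root, deflate, root, deflate) reduces a high-degree polynomial to ones that are solvable directly.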
Yes, that's the most obvious use of polynomial division. I was thinking of the case where the binomial doesn't divide the polynomial, otherwise you actually just have a polynomial written oddly and not really a polynomial divided by a binomial. (The question says "divided by" not "dividing by", but I may be reading too much into the wording.) --Tango (talk) 18:35, 11 March 2008 (UTC)[reply]
Mmmh, bit of a long shot - but for inverse square relationships sometimes there are equations such as
$e^{-kx}/x^2$ or $e^{-k(x-a)}/(x-a)^2$
which could be of theoretical interest to physicists.. does that count?87.102.74.53 (talk) 19:04, 11 March 2008 (UTC)[reply]
It could be of use in astrophysics where astronomers need to accurately calculate parabolic paths of bodies. I suppose this could also be applied to ballistics of any sort PiTalk - Contribs 20:01, 11 March 2008 (UTC)[reply]
Wouldn't a parabolic path be the solution to a quadratic?87.102.74.53 (talk) 21:11, 11 March 2008 (UTC)[reply]
That would count if an exponential were a polynomial... --Tango (talk) 21:53, 11 March 2008 (UTC)[reply]
It is, sort of - an infinite one..87.102.17.32 (talk) 13:43, 12 March 2008 (UTC)[reply]
It can be expressed as a power series, sure. A polynomial has a finite number of terms, otherwise it isn't a polynomial - polynomials and power series have some very different properties. --Tango (talk) 18:54, 12 March 2008 (UTC)[reply]
You've never heard the term 'infinite polynomial' then, or Euler's claim that "what holds for a finite polynomial holds for an infinite polynomial"?
Also try searching for "finite polynomial" - it's a common phrase. So just stop posting wrong stuff please.87.102.17.32 (talk) 19:41, 12 March 2008 (UTC)[reply]
"A polynomial has a finite number of terms" is not wrong. All due respect to Euler, but I reckon the vocabulary was quite limited in his time and he thus resorted to using this odd language (and of course, his statement is dead wrong). "Finite polynomial" is definitely not common, based both on my experience and a quick google test:
  • "Polynomial" - 5000000 ghits.
  • "Finite polynomial" - 5000 ghits, most seem to be taken out of context.
  • "Infinite polynomial" - 2000 ghits.
When people speak of polynomials they mean something with certain properties, most of which are unsatisfied by a general power series. -- Meni Rosenfeld (talk) 20:17, 12 March 2008 (UTC)[reply]
I don't see much difference in properties between an infinite and a finite polynomial (or, to turn the semantics on its head, "infinite and finite power series"). Are there really any major differences I should be aware of, excluding the number of 'nomials' in each? 87.102.17.32 (talk) 20:31, 12 March 2008 (UTC)[reply]
For a start, with power series you have to worry about convergence, which you obviously don't for polynomials. --Tango (talk) 21:00, 12 March 2008 (UTC)[reply]
Right, but as far as you're concerned the exponential function is not a polynomial.87.102.17.32 (talk) 21:02, 12 March 2008 (UTC)[reply]
By the standard definition of polynomial (which is pretty much universal), an exponential is not a polynomial. --Tango (talk) 21:04, 12 March 2008 (UTC)[reply]
"A polynomial of infinite degree is not a polynomial"?? Where is this standard definition that excludes the infinite case?87.102.17.32 (talk) 21:26, 12 March 2008 (UTC)[reply]
[outdent]See polynomial. One trademark characteristic of polynomials is that if you differentiate one enough times, you end up with zero. Another is that a polynomial can only grow so fast asymptotically (at a rate which is called, unsurprisingly, "polynomial"). Another is that it (for a positive degree) always has a complex root, in fact, as many as the degree of the polynomial if you count multiplicities. Polynomials exist over any ring and do not depend on a topology (being composed of just additions and multiplications, not limits). The list goes on, while your "infinite polynomial", a term which is virtually never used, is basically just any analytic function. -- Meni Rosenfeld (talk) 22:16, 12 March 2008 (UTC)[reply]
I get that these are properties of finite polynomials - again, if you include polynomials of infinite order you lose that 'trademark characteristic', or at least you would have to differentiate infinitely many times (I know that is meaningless).
Don't quite know what you meant by 'for positive degree always has a complex root' - I assume that was a reference to the Fundamental theorem of algebra, though it sounded like you were saying polynomials always have complex solutions of polynomial = 0?
Look at Polynomial#Elementary_properties_of_polynomials - this extends to polynomials of infinite order. I'm just trying to suggest not making a semantic distinction between polynomials and 'power series' - as far as I see it, infinite series are a subset of polynomials, and the exponential function, sin etc. are in that subset, and as such inherit the properties of finite polynomials - being careful to note, of course, that when an operation on the function is dependent on the degree, that operation will never reach a final state in the case of degree = infinity.
I'm convinced that it's productive to treat both finite and infinite power series as examples of the same set. And yes all members of that set are analytical functions.
87.102.32.239 (talk) 23:43, 12 March 2008 (UTC)[reply]
Yes, no doubt finite power series are a special case of power series. In fact, this case is so special that it was even given a special name - "polynomial". But that's really backwards. Polynomials come first as a composition of the basic operations that exist in any ring - addition and multiplication. You can then discuss what happens when you take limits - a nontrivial feature of reals and complexes, not shared with most rings - of polynomials, and end up with power series. But almost everything which makes polynomials what they are is lost in the limiting process. -- Meni Rosenfeld (talk) 13:22, 13 March 2008 (UTC)[reply]
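To make Meni's "differentiate one enough times, you end up with zero" property concrete, here is a small illustrative sketch (mine, not from the thread), representing a polynomial as its coefficient list:

```python
def derivative(coeffs):
    """Differentiate a polynomial given as [a0, a1, a2, ...], meaning
    a0 + a1*x + a2*x^2 + ...; the result has one fewer coefficient."""
    return [k * a for k, a in enumerate(coeffs)][1:]

p = [1, 0, -3, 0, 5]        # 1 - 3x^2 + 5x^4, degree 4
for _ in range(5):          # degree + 1 derivatives
    p = derivative(p)
print(p)                    # [] -- the zero polynomial
```

The power series of e^x, by contrast, loses one term per derivative but never runs out, which is one way to see that no finite coefficient list can represent it.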
It could be used to approximate a more complicated function with a simple pole (i.e. something that behaves like it's been divided by (x+a)), though for what real life purpose I'm not sure.

It's not really an answer or anything, but this came up on my watchlist and made me laugh:

Wikipedia:Reference desk/Mathematics‎; 22:17 . . (+192) . . ConMan (Talk | contribs) (→math: possible answer)

No clue about the function. -mattbuck (Talk) 00:10, 12 March 2008 (UTC)[reply]

March 12

Least Squares Question

Let $v_1$ and $v_2$ be vectors in $\mathbb{R}^4$ and let $b \in \mathbb{R}^4$. Let $V = \operatorname{span}\{v_1, v_2\}$. I am trying to find the vector in V that is closest to the vector b. Now, my question is that I am trying to solve the system Ax=b and I know that b is my b above and is a four dimensional vector, but what would my matrix A be? Would it be just $[v_1\ v_2]$? But then I can't multiply both sides by transpose of A and then multiply by the inverse of (Transpose(A)*A)? Basically I want to have $x = (A^T A)^{-1} A^T b$ but I can't figure out what my A will be. Any help would be appreciated! Thanks! A Real Kaiser (talk) 05:27, 12 March 2008 (UTC)[reply]

Because you're searching for the best approximation in V, you want the range of A to be V. The range of a matrix is the span of its columns, and so A is the 4 x 2 matrix with $v_1$ and $v_2$ as columns. Your x is actually a length 2 vector, representing the coefficients of $v_1$ and $v_2$. In this case, $A^T A$ will be square, and invertible. Check out Moore-Penrose inverse. 134.173.93.127 (talk) 07:46, 12 March 2008 (UTC)[reply]
Ironically, I can see by inspection that b is orthogonal to the vs. So the closest vector is just the zero vector. Baccyak4H (Yak!) 18:41, 12 March 2008 (UTC)[reply]

Actually, that was my intuition also, that zero would be the answer (because that is the one and only point both spaces share, so obviously it is the closest one). I just couldn't understand how to set up my matrix A. And when I tried to do it (the way you basically said), I was wondering why my matrix x was only two dimensional. Now it makes sense that the entries of matrix x represent the coefficients for the linear combination of v1 and v2. So this means that x1v1+x2v2 would be the vector that I am looking for, right? In span{v1,v2}? A Real Kaiser (talk) 04:50, 13 March 2008 (UTC)[reply]

That's right. 134.173.93.127 (talk) 07:41, 13 March 2008 (UTC)[reply]
Which two spaces? There's the space V, and the vector b which is not in it. You may have meant span{b}, which indeed intersects V only at 0. But this is not enough for the closest vector to be zero - for this you need orthogonality. Consider v1 = {1, 0, 0}, v2 = {0, 1, 0} and b = {1, 1, 1}. Only {0, 0, 0} is common to span {v1, v2} and span {b}, but the closest vector in V to b is {1, 1, 0}. -- Meni Rosenfeld (talk) 14:36, 13 March 2008 (UTC)[reply]
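Meni's counter-example can be checked numerically. The following NumPy sketch (variable names are mine) solves the normal equations $A^T A x = A^T b$ for the matrix A with v1 and v2 as columns, as described above:

```python
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
b  = np.array([1.0, 1.0, 1.0])

A = np.column_stack([v1, v2])          # 3 x 2; its range is span{v1, v2}
x = np.linalg.solve(A.T @ A, A.T @ b)  # normal equations (A'A is invertible here)
closest = A @ x                        # x1*v1 + x2*v2, the point of V nearest b

print(x, closest)                      # coefficients [1, 1]; closest vector [1, 1, 0]
```

So the closest vector is {1, 1, 0}, not the zero vector, exactly as argued: sharing only the origin is not enough, orthogonality is what makes the projection zero.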

Probabilities and this year's Champions League quarter finals

OK, I wasn't too shabby at Maths when I last studied it, but I always struggled with Probabilities. I thought of a nice bunch of Probabilities questions based on the forthcoming Champs Lg draw. Can you help me understand how to solve them, and thereby make up for either a duff teacher all those years ago, or, more likely, my inattention in class at crucial moments:

  • There are 8 remaining teams in the competition
  • 4 are English
  • The draw will put them into 4 pairings that will decide the semi-final line up
  • Assume all teams are of equal ability and AGF that UEFA don't rig their draws!

Q1 What's the probability of exactly 1 all-English quarter-final?
Q2 What's the probability of exactly 2 all-English quarter-finals?
Q3 What's the probability of at least 1 all-English quarter-final?
Q4 What's the probability of exactly 1 all-English semi-final?
Q5 What's the probability of exactly 2 all-English semi-finals?
Q6 What's the probability of at least 1 all-English semi-final?
Q7 What's the probability of an all-English final?

Oh, and leaving my probability of inattention in class aside (which, frankly is nearly 100%), if my maths teacher had used problems like this, instead of boys choosing socks in the dark (when they could just switch a ruddy light on) I might be able to do this without your help!

Cheers, --Dweller (talk) 12:23, 12 March 2008 (UTC)[reply]

Well, I will try to provide some of the answers, but I don't guarantee that they are right :)
First, let's count how many combinations of quarter-final ties there are (only accounting that Team1 will be drawn against Team2, without accounting who is drawn to host the first leg). It should be $\binom{8}{2} = \frac{8!}{2!\,6!} = 28$.
A1. Out of these there are $\binom{4}{2} = 6$ possibilities that exactly 2 English clubs are drawn together, thus making a probability of $\frac{6}{28} \approx 0.21$, i.e. 21%.
A2. I suppose, therefore, that the probability of 2 all-English quarter-finals should be $\left(\frac{6}{28}\right)^2 \approx 0.05$, i.e. 5%.
A3. And, therefore, the probability of at least 1 all-English quarter-final should be the sum of the above two probabilities, i.e. $\frac{6}{28} + \left(\frac{6}{28}\right)^2 \approx 0.26$, i.e. 26%.
It's hard to go on from here because we have to account for different cases.
A5. Of course, there can't be exactly 2 all-English semi-finals if there's at least 1 all-English quarter-final. The probability that no English teams will be drawn together in the quarter-finals should be, according to the above information, $1 - 0.26 = 0.74$, i.e. 74%. Assuming that all clubs are of equal ability to qualify for the semi-finals, the probability that all English teams will win their ties in this case is $\left(\frac{1}{2}\right)^4 = 0.0625$, i.e. 6.25%. Multiplying these two would give us the answer: $0.74 \times 0.0625 \approx 0.05$, i.e. 5%.
Well, the other questions are more complicated than these, maybe I will come back to them later... maybe not :). Again, I'm not sure if what I did above is right, but it sure makes sense. To me at least! Hope that helps!  ARTYOM  13:18, 12 March 2008 (UTC)[reply]
Urk. Sorry to be picky after you've done all that work, but you'll need to slow down. I think I remember what the ! is (but can't remember what it's called - is it "factorial" or something?) but haven't a clue what the big C thingy is, nor how/why you populate the first equation that leads to 28. Remember, I want to understand how to do this, not just learn the answers! --Dweller (talk) 13:47, 12 March 2008 (UTC)[reply]
For the benefit of people similarly mathematically illiterate to me who come here (I advertised this thread at WT:FOOTY) the ! is indeed Factorial --Dweller (talk) 13:52, 12 March 2008 (UTC)[reply]
Yes, it is, and the C stands for choose. $\binom{8}{2}$ (also written $^8C_2$ or $C(8,2)$) is the number of ways of choosing 2 members from a set of 8 elements. --Tango (talk) 14:43, 12 March 2008 (UTC)[reply]
Thanks for that. So remembering about ! and learning that C notation means I've already learned something. --Dweller (talk) 15:10, 12 March 2008 (UTC)[reply]

I don't think that's right - only a 26% chance of an all-England matchup doesn't sound right at all. Your $\frac{6}{28}$ is the probability that any given tie, taken independently of the others, will be an all-England clash. But you can't just square that to get the probability of two, because the ties are not independent.

By my reasoning the probability that there are no all-English quarter-finals is given by placing the English teams into the draw first; so the first team can be placed in any of the 8 spots, the second in 6 of the remaining 7 spots (i.e. not against the first), the third in 4 of 6, and the last in 2 of 5, giving $\frac{6}{7} \times \frac{4}{6} \times \frac{2}{5} = \frac{8}{35}$. So the probability of at least one all-English tie is the remainder, $1 - \frac{8}{35} = \frac{27}{35} \approx 77\%$. I can't think how to do the rest right now but I'll get back to you! — sjorford++ 14:53, 12 March 2008 (UTC)[reply]

In these things, it is often helpful to do the extreme cases first:
  • Question 3 is easiest to answer, since we just want to find the probability of NOT having an english QF. As above, this is $\frac{8}{35}$, and thus P(at least 1 english QF) = $1 - \frac{8}{35} = \frac{27}{35}$.
  • Question 2. EXACTLY TWO english QF. Let us pick the english teams first. We can choose any position for the first team, so that is irrelevant. Let the 2nd team be a match (1/7). Then 6 choices for the 3rd, and 1 (of 5) for the 4th. Thus we have $\frac{1}{7} \times \frac{1}{5} = \frac{1}{35}$ chance of this. Now, suppose the 2nd team does not get a match (6/7), then the 3rd team has 2 options (of 6), and the 4th team just 1 (of 5). Thus chances of 2 matches is $\frac{1}{35} + \frac{6}{7} \times \frac{2}{6} \times \frac{1}{5} = \frac{1}{35} + \frac{2}{35} = \frac{3}{35}$
  • Question 1. Now, there were 3 possible cases with 4 teams - 1 match, 2 matches, or no matches. So, now $P(\text{1 match}) = 1 - \frac{8}{35} - \frac{3}{35} = \frac{24}{35}$
  • Question 5. For 2 english semi-finals, we require that the english teams all have different matches (8/35), and that they all win (1/2 each). Thus, P(2 E SF) = $\frac{8}{35} \times \left(\frac{1}{2}\right)^4 = \frac{1}{70}$.
  • Question 4. Here it gets difficult, as we need to consider many cases.
    • The easiest is if there are 2 english QFs (3/35) - then we can guarantee exactly two english teams in the SFs. The chance of them facing each other is $\frac{1}{3}$, which when combined with the probability of 2 english QFs is $\frac{3}{35} \times \frac{1}{3} = \frac{1}{35}$.
    • Now suppose that there are no english QFs (8/35). Then to get 1 english SF we require at least 2 teams to win.
      • Suppose exactly 2 do win - there are $\binom{4}{2} = 6$ ways to do this, each of which has a probability of $\left(\frac{1}{2}\right)^4 = \frac{1}{16}$ (2 teams must win, 2 lose). Now, we require that the 2nd team face the 1st team. This has a chance of 1/3 as above. Thus here we have $\frac{8}{35} \times 6 \times \frac{1}{16} \times \frac{1}{3} = \frac{1}{35}$
      • Suppose 3 English teams win - $\binom{4}{3} = 4$ ways, each with P = 1/16. In any draw we are bound to have 2 english teams facing off, thus we have $\frac{8}{35} \times 4 \times \frac{1}{16} = \frac{2}{35}$
    • Now, suppose there is one english QF (24/35). We are bound to have 1 english team in the SF.
      • Suppose both other english teams win (P = (1/2)^2 = 1/4). Then we have 3 teams and 4 spots - there's bound to be an english SF, with $P = \frac{24}{35} \times \frac{1}{4} = \frac{6}{35}$
      • Now, suppose that only one of the other english teams wins. There are 2 ways to do this, each with probability 1/4. Now the chances of them facing each other in the SF is 1/3. Thus $\frac{24}{35} \times 2 \times \frac{1}{4} \times \frac{1}{3} = \frac{4}{35}$
    • Thus, we now sum these probabilities: $\frac{1}{35} + \frac{1}{35} + \frac{2}{35} + \frac{6}{35} + \frac{4}{35} = \frac{14}{35} = \frac{2}{5}$
  • Question 6 - this is now easy, we just sum the probabilities of 1 and 2 english SFs: $\frac{2}{5} + \frac{1}{70} = \frac{29}{70}$.
  • Question 7 - Almost there, but again several cases to consider.
    • The most obvious place to start is with 2 all-english SFs. We know this had probability 1/70. We must have an all-english final here.
    • Next consider there are exactly 3 english teams in the SFs. This happens if either there is one english QF and both other teams win (P = 6/35), or no english QFs and 3 teams win (P=2/35). Thus we have a total P=8/35. We're bound to have 1 english team in the final, and the other has a 0.5 chance. Thus we have a 4/35 chance from this case.
    • We could take forever finding the probability of 2 english teams in the semis. However, note that there are an equal number of english and foreign teams, and that they all have equal chances of winning. Thus, the chances of two no-english SFs is 1/70, and the chance of only one english team in the SFs is 8/35. Thus, we have used up 17/35 of the options, and the remaining 18/35 must be exactly 2 english teams in the semis, of which 1/3 are going to have the english teams facing each other and thus can be discarded. So we have a 12/35 chance of having exactly 2 english teams in different matches. Now, we require that both win. This has P = 1/4. Thus the chance of an english final here is $\frac{12}{35} \times \frac{1}{4} = \frac{3}{35}$.
    • Now, sum these: $\frac{1}{70} + \frac{4}{35} + \frac{3}{35} = \frac{1}{70} + \frac{8}{70} + \frac{6}{70} = \frac{15}{70} = \frac{3}{14}$
Damn that was a lot of work. We'd better fkn win. -mattbuck (Talk) 16:16, 12 March 2008 (UTC)[reply]
Simpler approach for Q7: Forget the route to the final - there are (8x7)/2=28 possible pairs of teams in the final, of which (4x3)/2=6 consist of a pair of English teams. So probability of an all-English final is 6/28, which is 3/14. Gandalf61 (talk) 16:32, 12 March 2008 (UTC)[reply]
Damn my looking for the complicated route. I thought there should be an easy way to do it. -mattbuck (Talk) 16:46, 12 March 2008 (UTC)[reply]
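The draw probabilities above can be double-checked by brute force. A sketch (not part of the original thread): enumerate every way of pairing 8 teams, with teams 0–3 standing in for the English sides, and count with exact fractions.

```python
from fractions import Fraction

def matchings(teams):
    """Yield all perfect matchings of an even-sized list of teams."""
    if not teams:
        yield []
        return
    first, rest = teams[0], teams[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for m in matchings(remaining):
            yield [(first, partner)] + m

english = set(range(4))                  # teams 0-3 English, 4-7 foreign
draws = list(matchings(list(range(8))))  # 7*5*3*1 = 105 equally likely draws

def count(k):
    """How many draws have exactly k all-English ties?"""
    return sum(1 for d in draws
               if sum(a in english and b in english for a, b in d) == k)

p0 = Fraction(count(0), len(draws))      # no all-English quarter-final
p2 = Fraction(count(2), len(draws))      # two all-English quarter-finals
print(p0, p2)                            # 8/35 and 3/35, agreeing with the derivations
```

This confirms sjorford's and mattbuck's figures (8/35 and 3/35) rather than the 74%/5% first attempt, because it treats the draw as one joint event instead of independent ties.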

Math: real-life examples of polynomial division?

what are some examples from real life in which I might use polynomial division? —Preceding unsigned comment added by Lighteyes22003 (talkcontribs) 13:38, 12 March 2008 (UTC)[reply]

Finding roots of polynomials is the obvious one. It comes up all the time in various disciplines, since lots of things are described (at least approximately) by polynomials and it's often useful to know when they are zero. --Tango (talk) 13:45, 12 March 2008 (UTC)[reply]
See Linear response function for one example. It is used in radio and television design. Bo Jacoby (talk) 13:56, 12 March 2008 (UTC).[reply]
I saw your previous question and felt I should answer it, but it's hard to come up with an answer at what I think is your level of understanding. As others have mentioned, it's used to find roots of polynomials (e.g. to find more roots after the first is known). I doubt you are familiar with eigenvalues or root locus plots, but those require polynomial roots. Let's say I'm designing a control system for an aircraft. I've taught this using the roll control system of an F-18 as one example. First, you write out the equations of motion of the aircraft. The result is a system of differential equations. You then apply a Laplace transform to that system, so you get a system of polynomial equations in 's' (the Laplace variable). If you can find one root, by numerical or other means, you can use polynomial division to simplify the equation, getting the entire set of real and complex roots. Depending on those roots (are they real or complex? are their real parts negative or positive? how close to the origin are they?) you can tell a lot about the behaviour of the aircraft. For instance, if the real parts are negative, the aircraft is stable, otherwise it is unstable. Adding a control system to the loop modifies the set of differential equations and allows us to set the roots where we want them, ensuring the behaviour of the aircraft is how we want it, and that the aircraft is stable, maneuverable, and other desirable properties. moink (talk) 14:37, 12 March 2008 (UTC)[reply]
And the reference desk serves its function of helping us improve articles: Root locus needs some serious work. moink (talk) 14:44, 12 March 2008 (UTC)[reply]
One thing is for sure - polynomial division is a much less important mathematical skill for an average individual than others which are unfortunately not studied enough (or at all) in school, such as logic and probabilistic thinking. -- Meni Rosenfeld (talk) 15:08, 12 March 2008 (UTC)[reply]
That's for sure -- I agree 100%. Logic especially needs to be taught more. It is completely foreign to most students, unfortunately. (Joseph A. Spadaro (talk) 08:18, 13 March 2008 (UTC))[reply]

Chi-square test

When calculating a chi-square test what are the steps between calculating the differences between the expected and observed values and obtaining the chi-square value? Then, knowing the degrees of freedom, how is the p-value of the chi-square obtained (other than by looking it up in a table)? Thanks. The chi-square test article does not explain these points. Itsmejudith (talk) 15:47, 12 March 2008 (UTC)[reply]

If you are talking about a contingency table, then the expected value for each entry is
(column total / table total) × (row total / table total) × table total.
I know this expression can be simplified, but in this form it is easier to see conceptually what is being estimated. Subtract this from the observed value, square the difference, and divide by this expected value again. The sum over all the table is the value of the χ2 statistic. Call that value X. Then the p-value is just the probability that a χ2 distribution with the appropriate degrees of freedom is at least X (that is, one minus the CDF of the χ2 distribution, at X). Baccyak4H (Yak!) 18:33, 12 March 2008 (UTC)[reply]
Can this detail be added to the worked example in the article? Thanks. Itsmejudith (talk) 08:50, 13 March 2008 (UTC)[reply]
OK, done. Although I can see the point someone might make that it is too much "How to...". Baccyak4H (Yak!)

March 13

Generating random numbers

How would I generate random numbers with a distribution matching Zipf's law? --Carnildo (talk) 04:18, 13 March 2008 (UTC)[reply]

Could you start with a probability distribution that is uniform (i.e. each value occurs equally often) and scale that to the Zipf distribution? Would you need help scaling the function?
Can it be assumed that you already have a method of generating random numbers in general?87.102.8.240 (talk) 09:10, 13 March 2008 (UTC)[reply]
According to the classic version of Zipf's law, the number of occurrences of the word that has rank k in a large corpus of words is proportional to 1/k, which means it is of the form c/k for some large constant c. But actually the value of c depends on the size of the corpus, where c is the number of different words in the corpus, and the size n of the corpus is roughly $c \ln c$. Therefore it makes more sense to generate a sequence of numbers for which each initial segment obeys Zipf's law.
In pseudocode:
Let z1, z2, ... be the sequence to be generated.
Set c to 1
For n from 1 to whatever:
    Set ν to 1/((log c)+1)
    Select one alternative from:
    (A) with probability ν:
        Set zn to c
        Set c to c+1
    (B) with probability 1 − ν:
        Pick a random number r uniformly from the range {1 ... (n−1)}
        Set zn to zr
Obviously the beginning will have few different numbers, so you may want to discard a large initial part. It is then not actually necessary to store the segment to be discarded itself, as long as the number of occurrences of each value is kept track of. --Lambiam 09:58, 13 March 2008 (UTC)[reply]
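A direct Python transcription of the pseudocode above (the function and variable names are mine, and I assume the natural logarithm is intended):

```python
import math
import random

def zipf_sequence(length, rng=random):
    """Generate a sequence whose value frequencies approximately follow
    Zipf's law, by the grow-the-corpus scheme described above."""
    z = []
    c = 1                                   # next unused value
    for n in range(1, length + 1):
        nu = 1 / (math.log(c) + 1)
        if rng.random() < nu:               # (A) introduce a new value
            z.append(c)
            c += 1
        else:                               # (B) repeat an earlier value
            r = rng.randrange(1, n)         # uniform on {1, ..., n-1}
            z.append(z[r - 1])
    return z

random.seed(0)
sample = zipf_sequence(10000)
```

Note that on the first step c = 1, so ν = 1 and branch (A) always fires; branch (B) is only reachable once the sequence is non-empty, so the indexing is safe.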
The above is surely right - but looked a little mysterious to me - here's another version of 'pseudo-code'. In this case I don't explicitly normalise the probabilities but instead generate a sum "Total" of 1/k and find the position of a random number between 0 and "Total", i.e. the first 'word' is between 0 and 1/1, the second between 1/1 and 1/1+1/2, etc.
For different functions simply change all instances of 1/k to the new function, e.g. $1/k^s$ - I've marked these positions with a *


n = number of words
Total = sum over k = 1..n of 1/k *   ! i.e. 1/1 + 1/2 + 1/3 etc. !

LOOP 1
R = Total x RND()   ! generate a random number R between 0 and Total; RND() is a random function between 0 and 1 !
Length = 0
m = 1               ! a loop counter !

LOOP 2
Length = Length + 1/m *
if R <= Length then print/output "m"   ! m is the mth word ! : GOTO LOOP 1   ! start again !
increase m by 1 : GOTO LOOP 2


Just added in case others found the first example a little confusing.. apologies to any offended by the GOTOs.. the !'s delimit comments, not factorials!87.102.8.240 (talk) 10:57, 13 March 2008 (UTC)[reply]
A counting variable can be added so that the program would stop after a certain number of random values had been produced. I would suggest adding this inside LOOP 1, e.g. Count=Count+1 : If Count=10000 then END/STOP.87.102.8.240 (talk) 11:03, 13 March 2008 (UTC)[reply]
Note - what I've done here shares some similarities with Arithmetic coding or Entropy encoding - at least in small part.87.102.8.240 (talk) 11:15, 13 March 2008 (UTC)[reply]
(It could be extended to non-Zipf distributions that have finite states from a single event; also the above could be sped up somewhat computationally if that was necessary - ask if you want further details on these.)83.100.138.116 (talk) 16:28, 13 March 2008 (UTC)[reply]
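The same inverse-cumulative-sum idea in Python, without the GOTOs (a sketch; the rank count n and the exponent s are parameters you would tune, and precomputing the cumulative sums plus a binary search replaces LOOP 2):

```python
import bisect
import random

def zipf_sampler(n, s=1.0, rng=random):
    """Return a function that draws ranks 1..n with P(k) proportional to
    1/k**s, by inverting the cumulative sum as in the pseudocode above."""
    cumulative = []
    total = 0.0
    for k in range(1, n + 1):
        total += 1.0 / k ** s
        cumulative.append(total)

    def draw():
        r = rng.random() * total
        return bisect.bisect_left(cumulative, r) + 1   # first k with cum >= r
    return draw

random.seed(1)
draw = zipf_sampler(1000)
ranks = [draw() for _ in range(10000)]
```

Over 10000 draws, rank 1 should turn up roughly 1/H(1000) ≈ 13% of the time, with frequencies falling off like 1/k.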

Financial calculator question

I have a financial calculator question. I have the APY for a Certificate of Deposit; I am looking for a calculation that will give me its dividend rate if the dividends are paid Quarterly or if they are paid monthly.

For example, for a 12 month Certificate the APY is 3.25% what is the calculation to find out what the dividends are? —Preceding unsigned comment added by 207.109.247.177 (talk) 17:58, 13 March 2008 (UTC)[reply]

Comment: The poster is referring to the annual percentage yield. Pallida  Mors 19:16, 13 March 2008 (UTC)[reply]
Different institutions may use different definitions of the Annual Percentage Yield, which may be different from the effective rate, the Annual percentage rate, because certain costs have not been accounted for yet. Assuming the formula in our article holds for the definition actually used, and assuming dividend rate may be equated with the nominal interest rate, then:
  • for monthly dividend rate (12 periods) you get over a year: $12\left((1 + 0.0325)^{1/12} - 1\right) \approx 3.203\%$
  • for quarterly dividend rate (4 periods) you get over a year: $4\left((1 + 0.0325)^{1/4} - 1\right) \approx 3.211\%$
Note the caveats. Your mileage may vary.  --Lambiam 19:18, 13 March 2008 (UTC)[reply]
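Under those assumptions the arithmetic looks like this (a quick sketch with invented names, not financial advice):

```python
def nominal_rate(apy, periods):
    """Nominal annual rate that compounds to the given APY over the
    stated number of compounding periods per year."""
    return periods * ((1 + apy) ** (1 / periods) - 1)

apy = 0.0325                     # the 3.25% APY from the question
print(nominal_rate(apy, 12))     # monthly:   about 0.03203 (3.203% nominal)
print(nominal_rate(apy, 4))      # quarterly: about 0.03211 (3.211% nominal)
```

The per-period dividend rate is then just the nominal rate divided by the number of periods (about 0.267% per month, or 0.803% per quarter).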

PDE -> ODE

Are there a set of "standard" methods one could try to use to transform a PDE in several variables to a set of coupled ODEs? —Preceding unsigned comment added by 12.196.44.226 (talk) 20:22, 13 March 2008 (UTC)[reply]

maybe separation of variables#Partial differential equations? --Spoon! (talk) 03:49, 14 March 2008 (UTC)[reply]
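As a standard textbook illustration (not from the thread) of how separation of variables turns a PDE into ODEs, take the one-dimensional heat equation and assume a product solution:

```latex
u_t = k\,u_{xx}, \qquad u(x,t) = X(x)\,T(t)
\;\Longrightarrow\;
\frac{T'(t)}{k\,T(t)} = \frac{X''(x)}{X(x)} = -\lambda
\;\Longrightarrow\;
\begin{cases}
T'(t) + \lambda k\,T(t) = 0,\\
X''(x) + \lambda X(x) = 0.
\end{cases}
```

Each side depends on a different variable, so both must equal a constant −λ, giving one ODE per variable. When the equation is not separable, a common alternative is to discretise all but one variable (the method of lines), which produces a genuinely coupled system of ODEs.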

I need help with pi & circumference

If you calculated the circumference of a circle the size of the known universe, requiring that the answer be accurate to within the radius of one proton, how many decimal places of pi would you need to use?

 A. Two million
 B. 39
 C. 48,000
 D. Six billion

Thanks in advance.207.69.140.24 (talk) 21:48, 13 March 2008 (UTC)[reply]

I seem to recall the answer is 39, though I don't remember why. Strad (talk) 21:56, 13 March 2008 (UTC)[reply]
Assuming you're calculating it from the radius of the known universe, I think you could find an upper bound by dividing the smallest distance in question (the radius of a proton) by twice the largest distance in question (the radius of the universe), and using pi to enough decimal places that its truncation error is smaller than this ratio: because then pi would be accurate to within the radius of a proton when multiplied by twice the radius of the known universe. GromXXVII (talk) 22:11, 13 March 2008 (UTC)[reply]
[ec] According to Observable universe, its diameter is about $9.3 \times 10^{10}$ ly, so its circumference is about $2.9 \times 10^{11}$ ly, or $2.8 \times 10^{27}$ m. The radius of a proton is roughly $10^{-15}$ m. The ratio is roughly $3 \times 10^{42}$, thus roughly 42 digits of π are required - of the choices given, 39 (answer B) is the closest. But don't let anyone fool you into thinking that this means that more digits are not required for science and applications. -- Meni Rosenfeld (talk) 22:18, 13 March 2008 (UTC)[reply]

A question similar to this has been asked before. It was to find the circumference of the known universe to the accuracy of a planck length.

And the answer is that 62 digits of Pi were required, preferably 64 digits. I'll see if I can dig up a link.

http://en.wikipedia.org/wiki/Wikipedia:Reference_desk_archive/Science/2006_September_27#How_many_digits_of_pi_for_the_known_universe.3F

202.168.50.40 (talk) 00:08, 14 March 2008 (UTC)[reply]
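The digit counts in these answers come straight from a base-10 logarithm of circumference over allowed error. A quick sketch (the physical constants are rough round figures, not authoritative, and the exact count shifts with whichever radii you plug in):

```python
import math

circumference = 2.8e27   # metres, rough circumference of the observable universe
proton = 1.0e-15         # metres, rough proton radius
planck = 1.6e-35         # metres, the Planck length

def digits_needed(target_error):
    """Decimal digits of pi needed so that the absolute error in the
    computed circumference stays below target_error."""
    return math.ceil(math.log10(circumference / target_error))

print(digits_needed(proton))   # about 43 for proton-radius accuracy
print(digits_needed(planck))   # about 63 for Planck-length accuracy
```

So proton-radius accuracy lands in the low forties and Planck-length accuracy in the low sixties, consistent with the linked 2006 discussion.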

We have no data to base the calculation on, so knowing the decimal expansion of π to many places is not going to help. Even if we knew the radius of the observable universe to within the radius of a proton, and we could halt the universe so that it stopped expanding while we are doing the calculation, we don't even know if the universe is flat.  --Lambiam 08:40, 14 March 2008 (UTC)[reply]

March 14

Happy Pi Day

Just wishing all of you a happy Pi Day! Please celebrate responsibly. --Kinu t/c 04:12, 14 March 2008 (UTC)[reply]

What should I do at 1:59:26? I've only got an hour! HYENASTE 04:22, 14 March 2008 (UTC)[reply]
Um... run around in a circle? —Keenan Pepper 05:54, 14 March 2008 (UTC)[reply]
Just returned from my circle! Happy Pi Day!!! HYENASTE 05:59, 14 March 2008 (UTC)[reply]
getting dizzy! this is so exciting 87.102.83.204 (talk) 10:05, 14 March 2008 (UTC)[reply]

Percentages

Here I am at a very advanced age totally ashamed to say that I cannot work out percentages! Please advise me how to work out the percentage of one sum against the other; for example what percentage has my pension increased from last year's income to this year's. How to I do that ? Thanks in anticipation.--88.109.224.4 (talk) 08:47, 14 March 2008 (UTC)[reply]

Simple: a percentage is one number divided by another, times 100.
So if income was 110 this year, and 95 last year, this gives (110/95)*100 = 115.8%
So the increase is 15.8% (since 100% equals a 1:1 ratio, or no change, I subtract 100%)
Or if my wage (77) increases by 2% my new wage is 77 + ( 2/100 x 77 ) = 77+1.54 = 78.54 —Preceding unsigned comment added by 87.102.83.204 (talk) 10:05, 14 March 2008 (UTC)[reply]
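The same recipe as a tiny function (a sketch; the function name is mine):

```python
def percent_change(old, new):
    """Percentage change from old to new: positive means an increase."""
    return (new - old) / old * 100

print(percent_change(95, 110))   # about 15.8, the pension example above
print(77 * (1 + 2 / 100))        # a 2% rise on a wage of 77 gives 78.54
```

Note the direction matters: the change is always measured relative to the *old* value, which is why we divide by 95 and not 110.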