Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia


Welcome to the mathematics section
of the Wikipedia reference desk.

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.

April 19

Proof of Exponential Growth Formula

Hello. How was the exponential growth formula proven in the applications of population dynamics (A = A_0 e^(kt), where A is the final population, A_0 is the initial population, k is a constant, and t is time) and of finance (A = A_0 e^(rt), where A is today's value, A_0 was the initial value, r is the interest rate, and t is time)? Thanks in advance. --Mayfare (talk) 02:29, 19 April 2010 (UTC)[reply]

  • I'm not sure if this qualifies as a "proof" and what you are seeking, per se, but the easiest way to derive such an equation is to find the solution to the differential equation dA/dt = kA. In other words, we are stating that the rate of change of A with respect to time is dependent on the amount of A that is present in the system, which is essentially the case in population dynamics and continuous compound interest. --Kinu t/c 05:05, 19 April 2010 (UTC)[reply]
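
A minimal sketch of how that solution is obtained, by separating variables (with A_0 denoting the value of A at t = 0):

\frac{dA}{dt} = kA \;\Longrightarrow\; \int \frac{dA}{A} = \int k\,dt \;\Longrightarrow\; \ln A = kt + C \;\Longrightarrow\; A(t) = e^{C}e^{kt} = A_0 e^{kt}.

The finance formula is the same calculation with the constant k replaced by the interest rate r.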

Expected Values for Chi-Square Table

In my statistics class we are taught that for a chi-square test (I am not sure which type, as only when looking up the article did I find there were multiple types, but I assume it is the Pearson test), the expected values are equal to the row total multiplied by the column total, divided by the table total. I.e., for a 2×2 table whose rows read (A, B) and (C, D), the expected value for the square containing A would be (A+B)*(A+C)/(A+B+C+D). Why is this the case? From what I understand, each value should be represented in equal proportion, but I can't see how that relates to these expected values. Thanks! 66.133.196.152 (talk) 05:16, 19 April 2010 (UTC)[reply]

There are A+B+C+D items, of which A+B are in row 1, so the probability to be in row 1 is (A+B)/(A+B+C+D). Likewise, the probability to be in column 1 is (A+C)/(A+B+C+D). If the events are independent (the null hypothesis), then the probability to be in row 1, column 1 is the product of these, and out of a total of A+B+C+D items, the expected number in this square is (A+B)(A+C)/(A+B+C+D). -- Meni Rosenfeld (talk) 08:17, 19 April 2010 (UTC)[reply]


Thanks, I get it now.66.133.196.152 (talk) 22:56, 19 April 2010 (UTC)[reply]
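
To make the row-total × column-total / grand-total formula above concrete, here is a small Python sketch; the 2×2 counts are made-up illustrative numbers, not data from the question.

# Expected counts for a 2x2 contingency table under independence.
# Layout: row 1 = (A, B), row 2 = (C, D); the values are illustrative.
A, B, C, D = 30, 10, 20, 40
total = A + B + C + D

row_totals = [A + B, C + D]
col_totals = [A + C, B + D]

# Expected count for each cell = (row total) * (column total) / (grand total)
expected = [[r * c / total for c in col_totals] for r in row_totals]

for row in expected:
    print(row)
# For the cell containing A this gives (A+B)*(A+C)/(A+B+C+D) = 40*50/100 = 20.0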

real numbers

Hi guys, I want to know what the correct definition of "real numbers" is. To find whether a is a real number or not, we say that if a^2 is greater than 0, then a ∈ R. When we take √-2, we get (√-2)^2 = -2, so √-2 ∉ R. But if we do it like this: (√-2)^2 = (√-2 × √-2) = √(-2×-2) = √4 = 2, so √-2 ∈ R?? I simply know that it can't be, so I want to know where I was wrong. Thanks. —Preceding unsigned comment added by Coolerking.dany (talkcontribs) 17:51, 19 April 2010 (UTC)[reply]

(√-2 × √-2)= √(-2×-2) is incorrect. (√a × √b)= √(a×b) is not true in general when a and b aren't positive. --COVIZAPIBETEFOKY (talk) 18:02, 19 April 2010 (UTC)[reply]
I haven't seen that definition of real numbers before. It is not very meaningful, because it presupposes knowledge of complex numbers, which itself presupposes knowledge of real numbers. In effect, it contains elements of a circular definition. I would suggest you have a look at the articles Real number and Complex number, and see if they clarify matters for you.82.120.187.159 (talk) 18:18, 19 April 2010 (UTC)[reply]
Well, the squares of complex numbers are not ordered so it doesn't make sense to say reals are those that are greater than 0. So, it's even more circular as really you'd have to say a complex number is real if its square is real and its square is greater than 0. And, this also does not include 0, so really greater than or equal to 0. StatisticsMan (talk) 00:56, 20 April 2010 (UTC)[reply]
Indeed. The OP only gives examples of real and imaginary numbers. Strictly complex numbers (a sum of a real and imaginary part) are more complicated. --Tango (talk) 01:09, 20 April 2010 (UTC)[reply]
Real numbers are defined as the completion of rational numbers. Trying to define them as complex numbers with a particular property isn't going to be useful, since complex numbers are defined in terms of real numbers. You have to go with the very technical definition of them as a completion, even though that is rather difficult to understand without a university mathematics education (although you can get an intuitive sense of what is going on easily enough - it is the rigour that is hard to get). --Tango (talk) 01:09, 20 April 2010 (UTC)[reply]
The article Real number is not too bad. Generally we say that the real numbers are described by the axioms of an "Archimedean complete ordered field".[1] There are explicit constructions such as Dedekind cuts but those are made up after the fact, so to speak. 66.127.54.238 (talk) 02:46, 20 April 2010 (UTC)[reply]

Thanks guys, I think I know where I was wrong. —Preceding unsigned comment added by Coolerking.dany (talkcontribs) 02:51, 20 April 2010 (UTC)[reply]
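
For what it's worth, Python's complex arithmetic illustrates COVIZAPIBETEFOKY's point numerically (a sketch, not a definition of the reals): the principal square root of -2 is imaginary, its square is -2, and it is not equal to √(-2 × -2).

import cmath

z = cmath.sqrt(-2)          # principal square root of -2, a purely imaginary number
print(z)                    # approximately 1.4142135623730951j
print(z * z)                # essentially (-2+0j): (sqrt(-2))^2 = -2, not 2
print(cmath.sqrt(-2 * -2))  # sqrt(4) = (2+0j); note this is NOT equal to z * z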

Curl

Does ∇×(cF) = c(∇×F) for a constant scalar c? 131.111.248.99 (talk) 22:55, 19 April 2010 (UTC)[reply]

Yes. Rckrone (talk) 23:22, 19 April 2010 (UTC)[reply]
Thanks. 131.111.248.99 (talk) 23:58, 19 April 2010 (UTC)[reply]
The formula doesn't hold if the scalar factor is also a function of position. See vector calculus identities, penultimate section. HTH, Robinh (talk) 07:06, 21 April 2010 (UTC)[reply]


April 20

Distribution of size of largest prime factor

Resolved

I'm sure this is available somewhere in this encyclopedia, but I'm not sure exactly where. The question is what the probability is, for a randomly chosen large positive integer, that its largest prime factor is in some range. However it's stated, wherever it is, is fine with me. I'm trying to determine the expected number of bases, B, less than some large N s.t. the largest prime factor of 1223334444...(B-1)(B-1)...(B-1) is an emirp (B=10 is the only solution for B<17). Julzes (talk) 04:25, 20 April 2010 (UTC)[reply]

[2] gets a bunch of hits. 66.127.54.238 (talk) 07:30, 20 April 2010 (UTC)[reply]

Thanks. I'm just marking this as resolved. It seemed like a silly question when I asked it, it seems more so now, and after I have breakfast tomorrow I'll probably find it to the degree desired in my Hardy and Wright without even checking the Google results.Julzes (talk) 07:43, 20 April 2010 (UTC)[reply]
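
For anyone else curious about the distribution in question: a rough empirical Python sketch (using sympy, with an arbitrarily chosen sample range) of the probability that the largest prime factor of n is at most √n; the Dickman-function prediction for large n is 1 - ln 2 ≈ 0.307.

from sympy import factorint
from math import isqrt

# Count how often the largest prime factor of n is <= sqrt(n),
# over an arbitrary sample of large-ish integers.
lo, hi = 10**6, 10**6 + 2000
count = 0
for n in range(lo, hi):
    largest = max(factorint(n))        # largest prime factor of n
    if largest <= isqrt(n):
        count += 1
print(count / (hi - lo))               # roughly 0.3; compare 1 - ln 2 ≈ 0.307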

Math contests

Is performance on math contests (AMC, AIME, USAMO, etc.) a good indication of mathematical ability in general? --71.144.122.18 (talk) 23:56, 20 April 2010 (UTC)[reply]

Mathematical ability in general is sort of broad. The skills involved in contest math can definitely be called a subset of it. If you mean more specifically is it indicative of ability in like proof-based higher math, then I'd say some of the skills probably are and some aren't. Contest math puts a premium on speed, knowing certain formulas by heart, doing arithmetic efficiently and accurately, and making good guesses, which are skills that are not so important for doing proofs. But the insight needed to find the "trick" in contest problems and being able to carry principles across different branches of math I think is similar to the insight needed in proving results. And more generally having a good head for understanding math concepts is important for both. Just anecdotally, most of the people I know who were good at contest math in high school probably could have been successful in pursuing math further if they'd wanted to, but I could be misjudging that. Rckrone (talk) 01:15, 21 April 2010 (UTC)[reply]
As Rckrone says, mathematics (and, in particular, mathematical ability) is very broad. No mathematics contest, especially one at the school level, can capture every technique and intuition in formal mathematics. In fact, math contests in general tend to focus on the basic aspects of certain areas of mathematics, such as number theory, combinatorics, and Euclidean geometry (at least in my experience). There is usually little or no emphasis put on "infinite mathematics", branches of mathematics such as modern algebra, analysis, topology etc., and these are really diverse areas of mathematics (of course, this is mostly due to the fact that the contests are aimed at high school students). That is not to say that number theory and combinatorics are not diverse; they certainly are prominent areas in the mathematical world. However, within these math contests, even the breadth of the number theory and combinatorics tested is quite narrow (as I said, because these are high school contests, a very limited number of concepts can be tested). In number theory, for instance, there are many important areas such as algebraic number theory and analytic number theory, whose techniques and intuitions do not lie within math contests in general.
All that said, it is definitely the case that someone who can do well in math competitions is intelligent. However, there is so much more to mathematical intelligence than what lies in math competitions. In that regard, I disagree that students who do well in math competitions would definitely become good mathematicians (another important quality which comes to mind, that competitions do not test, is patience; the greatest mathematicians of all are also the most patient ones, in my opinion). Conversely, someone who cannot do well on math competitions may well go on to become a great mathematician. Succinctly, doing well in math competitions, and doing formal mathematics are almost completely independent of each other; it is extremely difficult to decide how good a mathematician a student will become based on his performances in math competitions (and by "good", I mean the student's potential; he may not become a mathematician later on, but even if that is not the case, it is still difficult to decide his potential). PST 02:08, 21 April 2010 (UTC)[reply]
I think that in line with the notion that one needs patience to be a top-flight mathematician, as opposed to a good competitive mathematics student, is a discernment for the difficulty of problems that have no known solutions yet. An expectation can be raised in competitions that whatever problem you work on will fall before you if you consider it well enough. In actuality, a first rate mathematician is going to have to be reasonably selective in what he or she spends time on and with the hardest problems is apt to just give up--perhaps returning later in life--when the ideas aren't forthcoming. However, good contest mathematics differs little in kind from research mathematics. Aside from the fact that the competitions generally end at the undergraduate level, leaving only such things as posed problems in journals as similar, the main difference between the one kind of mathematics and the other is generally a real divide between the certainty of the existence of or the amount of time required for a solution.
As far as poor contest participants being good mathematicians in the end, I think there are certain factors at play like the age at which one decides finally to focus on mathematics and choices in favor of depth rather than solidity (studying advanced topics when skills are still middling for the more elementary ones). I don't think there is a solid mathematician who couldn't train him- or herself to be good at the contests; and I suspect that for the vast majority who have become recognizably good mathematicians very little time would be required for it, but there wouldn't be any point for their research, with only possible benefits as regards their pedagogical tasks.Julzes (talk) 03:12, 21 April 2010 (UTC)[reply]
I should add/note that library skills for the profession are important for getting anywhere in mathematics, like other academic fields. A professional mathematician has to sort out what is already known, not only fully but partially. Unless one is working on highly idiosyncratic problems, working with the literature is something one must do that is nearly totally unnecessary not only for contest mathematics but mathematics pre-doctorate.Julzes (talk) 03:25, 21 April 2010 (UTC)[reply]


April 21

Teabags

The mass of tea in Supacuppa teabags has a normal distribution with mean 4.1g and standard deviation 0.12 g. The mass of tea in Bumpacuppa teabags has a normal distribution with mean 5.2g and standard deviation 0.15g.

i) Find the probability that a randomly chosen Supacuppa teabag contains more than 4.0 g of tea [SOLVED: normalcdf(4.0,E99,4.1,0.12) = 0.798]

ii) Find the probability that out of two randomly chosen Supacuppa teabags, one contains more than 4.0g of tea and one contains less than 4.0g of tea. [SOLVED: normalcdf(4.0,E99,4.1,0.12) = 0.798, normalcdf(-E99,4.0,4.1,0.12) = 0.202, 0.798*0.202*2=0.323]

iii) Find the probability that five randomly chosen Supacuppa teabags contain a total of 20.8g of tea.

I tried normalcdf(20.8,E99,4.1,0.12), normalcdf(20.8,E99,20.5,0.12) and normalcdf(4.16,E99,4.1,0.12) but none of the three give the correct answer. I just need a hint, what method to use.

iv) Find the probability that the total mass of tea in five randomly chosen Supacuppa teabags is more than the total mass of tea in four randomly chosen Bumpacuppa teabags.

I think once I figure out the method for iii) I can solve this but need your help for iii). If you ask me to do my own homework, note I already solved i) and ii) and my teacher sucks at explaining all these concepts. —Preceding unsigned comment added by 166.121.36.232 (talk) 09:53, 21 April 2010 (UTC)[reply]

Do you know what the expectation of a sum of random variables is, and what the variance of a sum of independent random variables is? -- Meni Rosenfeld (talk) 13:30, 21 April 2010 (UTC)[reply]
It will likely help to actually define some random variables, and then translate your problems into the probabilities of various events. For instance, say that X is the mass of a randomly chosen Supacuppa teabag, and Y is the mass of a randomly selected Bumpacuppa teabag; then the distributions of X and Y are as stated in the statement of the problem. In part (i), you are being asked to calculate P(X > 4.0); I would generally recommend going through the motions of actually standardizing and expressing this probability in terms of the tail probability of a standard normal r.v. Z, though that's my own cup of tea (sorry for the terrible pun). In part (ii), you are asked to calculate P(X_1 > 4.0, X_2 < 4.0) + P(X_1 < 4.0, X_2 > 4.0), where X_1 and X_2 are two independent (i.e. randomly chosen) r.v.'s sharing the same distribution as X. The independence is a crucial assumption here, as this allows you to assert that P(X_1 > 4.0, X_2 < 4.0) = P(X_1 > 4.0)P(X_2 < 4.0), which is as calculated in the OP.
For part (iii), let X_1, ..., X_5 be five independent r.v.'s sharing the same distribution as X. Then the "total mass" of five randomly selected Supacuppa teabags has the same distribution as S = X_1 + ... + X_5, and the requested probability is P(S > 20.8) (or perhaps P(S ≥ 20.8), or even P(S = 20.8); the wording on that part is a bit vague). However, the point is, you need to be able to determine the distribution of S; this information should have been given to you in your class, though you might review some of the important properties of the normal distribution (particularly, the first property there, and note that it may be generalized to any finite sum of independent normal r.v.'s). Once you have the distribution of S, the calculation of the probability is, in principle, extremely routine. Part (iv) can be done in a similar fashion. Nm420 (talk) 00:40, 23 April 2010 (UTC)[reply]
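
For the record, here is a hedged scipy sketch of the sum-of-normals approach outlined above (the key property being that a sum of independent normals is normal, with means and variances adding); it is worth working through the reasoning by hand before running it.

from math import sqrt
from scipy.stats import norm

# Supacuppa: N(4.1, 0.12^2); Bumpacuppa: N(5.2, 0.15^2)

# (i) P(X > 4.0) for one Supacuppa bag -- matches the 0.798 found in the question
print(norm.sf(4.0, loc=4.1, scale=0.12))

# (iii) S = X1 + ... + X5 is normal with mean 5*4.1 and variance 5*0.12^2
print(norm.sf(20.8, loc=5 * 4.1, scale=sqrt(5) * 0.12))

# (iv) D = (X1+...+X5) - (Y1+...+Y4) is normal with
#      mean 5*4.1 - 4*5.2 and variance 5*0.12^2 + 4*0.15^2; we want P(D > 0)
mu = 5 * 4.1 - 4 * 5.2
sigma = sqrt(5 * 0.12**2 + 4 * 0.15**2)
print(norm.sf(0, loc=mu, scale=sigma))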

Wronskian and Linear Independance

The article on the Wronskian gives an example of two functions that are linearly independent but have a Wronskian of zero. The second function used is defined as the negative of the first function for negative x, and as the first function for positive x. I was wondering if there is an example of two linearly independent and infinitely differentiable functions that have a Wronskian of zero. I would imagine that the second function would still have to be defined piecewise, with f_2(x) = 0 for x <= 0 and f_2(x) = f_1(x) for x > 0, but I can't seem to make this second function infinitely differentiable. 173.179.59.66 (talk) 15:14, 21 April 2010 (UTC)[reply]

For instance, two non-vanishing, infinitely differentiable functions with disjoint support have, of course, a vanishing Wronskian, though they are linearly independent. (Consider e.g. the smooth function f(x) defined here and the function f(2-x).) "Analytic" instead of "C^∞" works (precisely, if W(f,g) = 0 for two analytic functions defined on a connected domain, it follows that f and g are linearly dependent: indeed, either g vanishes identically, or f/g is a constant). --pma 15:25, 21 April 2010 (UTC)[reply]
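
A concrete instance of the disjoint-support construction pma describes (a sketch; any smooth bump function would do):

f(x) = \begin{cases} e^{-1/x}, & x > 0, \\ 0, & x \le 0, \end{cases} \qquad g(x) = f(-x).

Both are infinitely differentiable; at every x at least one of the pairs (f, f') or (g, g') vanishes, so W(f,g) = f g' - f' g is identically zero, yet a f + b g = 0 forces a = 0 (evaluate at x = 1) and b = 0 (evaluate at x = -1), so f and g are linearly independent.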

What came first? (trig + calc. related stuff)

So I've been looking into precalc/calc math a bit to see how things build upon each other and I've kinda gotten a bit stuck. I've always been told the sine and cosine angle sum formulas without any reason as to why they are true. Progressing through calculus, I have seen their use in the derivations for the derivatives of sine and cosine through the limit definition of the derivative. Knowing those, one can derive the Maclaurin series for the sine and cosine. Rearrangement of the Maclaurin series for e^(ix) gives Euler's formula. Plugging x + y into Euler's formula can prove the sine and cosine sum formulas. Clearly, we're just going in a circle. Which of these steps came first? How did we know that the derivative of sine is cosine without knowledge of the sine and cosine sum formulas? Or, alternatively, how did we know of the sine and cosine double angle formulas without the knowledge of the derivatives of sine and cosine? All the proofs that I've seen for these two things somehow have either gone back to calculus or gone back to the sum formulas, and I can't seem to find something to prove these concepts with the other known properties of the trig functions. How did these ideas come to be? — Trevor K. — 21:40, 21 April 2010 (UTC) —Preceding unsigned comment added by Yakeyglee (talkcontribs)

The sum of angles trig identities were first. See the image: x and y are angles, a through h are lengths. One length is assumed to be a unit; this will save a bit of writing. Now, by a simple cascade of sines and cosines through the right triangles, one finds expressions for the lengths a through h, and finally:
sin(x+y) = sin(x)cos(y) + cos(x)sin(y) and cos(x+y) = cos(x)cos(y) - sin(x)sin(y).
HTH :) --CiaPan (talk) 22:14, 21 April 2010 (UTC)[reply]
The answer, I think, is "it depends". For a lot of professional mathematicians, the sine and cosine are defined as power series. The fact that they have a beautiful interpretation in terms of triangles is something that you have to prove from the definition. That's not too hard once you know about complex exponentials, and if you have complex exponentials, you can also prove the sum formulas.
Classically, there was no such thing as a complex exponential. The sine and cosine were defined in terms of triangles. Under this approach, you need to prove the sum formulas using triangles as CiaPan just did for us above. Once you have that, then you can take derivatives using the definition. Both these approaches are logically correct. You have a good question, though: You noticed that we can't mix the two, or else we get a circular argument! Ozob (talk) 04:31, 22 April 2010 (UTC)[reply]
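
For reference, the complex-exponential route Ozob mentions takes only a couple of lines (a sketch, taking Euler's formula e^{i\theta} = \cos\theta + i\sin\theta as given):

\cos(x+y) + i\sin(x+y) = e^{i(x+y)} = e^{ix}e^{iy} = (\cos x + i\sin x)(\cos y + i\sin y) = (\cos x\cos y - \sin x\sin y) + i(\sin x\cos y + \cos x\sin y),

and comparing real and imaginary parts gives both sum formulas at once.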

I was fortunate in that in 8th and 9th grades I had a math teacher who was honest, as opposed to one who says "This is important material for you to learn; you'll understand why later" when the instructor saying that does not in fact understand. In 9th grade we went through careful geometric proofs of these identities. I would think those must be much older than calculus; I suspect that Regiomontanus knew these identities. Michael Hardy (talk) 22:00, 23 April 2010 (UTC)[reply]

The trigonometric sum formulas follow from Ptolemy's theorem. It is very old knowledge. Bo Jacoby (talk) 13:05, 27 April 2010 (UTC).[reply]

April 22

April 23

Rejection Method for simulating random values from a density function and Conditioning Approach for reducing the variance in a parameter estimate

I checked the wiki page but I'm still confused. Please give an intuitive explanation and also give a theoretical proof of why it works. Any illustrations that help me better understand it will be appreciated.

Also, how do you use a conditioning approach to reduce the variance in an estimate of a population parameter? —Preceding unsigned comment added by 70.68.120.162 (talk) 00:11, 23 April 2010 (UTC)[reply]

Please name the exact titles of the articles in question. Bo Jacoby (talk) 09:42, 23 April 2010 (UTC).[reply]
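
While waiting for the exact article titles: a minimal Python sketch of the rejection (acceptance-rejection) idea, with an arbitrary example density f(x) = 3x^2 on [0, 1] and a uniform proposal. A proposal x is accepted with probability f(x)/(c·g(x)), which is what makes the accepted points follow f.

import random

# Target density on [0, 1]: f(x) = 3x^2 (an arbitrary example).
def f(x):
    return 3 * x * x

# Proposal: uniform on [0, 1], density g(x) = 1.  Need c with f(x) <= c*g(x): c = 3 works.
c = 3.0

def rejection_sample():
    while True:
        x = random.random()            # draw a proposal from g
        u = random.random()            # uniform for the accept/reject step
        if u <= f(x) / (c * 1.0):      # accept with probability f(x) / (c*g(x))
            return x

samples = [rejection_sample() for _ in range(10000)]
print(sum(samples) / len(samples))     # sample mean, close to E[X] = 3/4 for this f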

Cofree object

A free group is a free group. A free space on a set X is the discrete space on X. A cofree space on X is the trivial (indiscrete) space on X. What's a cofree group? Money is tight (talk) 13:15, 23 April 2010 (UTC)[reply]

There aren't any. Well, the trivial group is cofree on a one-element set, but that's it. Algebraist 15:55, 23 April 2010 (UTC)[reply]
Well that's funny... Is it reflected in the fact that the forgetful functor for Grp doesn't even preserve coproducts? It's obvious that the coproduct of groups is not the disjoint union of their underlying sets. Whereas this is true for Top. Still find this stuff quite confusing Money is tight (talk) 02:39, 24 April 2010 (UTC)[reply]
Yes, that's related. A functor with a right adjoint is cocontinuous, so if cofree groups always existed, then the forgetful functor from Grp to Set would have to preserve colimits, which it does not. Algebraist 02:43, 24 April 2010 (UTC)[reply]
Thanks for your help :D Money is tight (talk) 04:11, 24 April 2010 (UTC)[reply]

prime numbers

Is there an algorithm where I can determine the number of prime numbers found between 1 and X? I am trying to find out if the number of primes between 1 and 1,000,000 is itself a prime number. Googlemeister (talk) 19:13, 23 April 2010 (UTC)[reply]

Prime-counting function. 76.230.7.121 (talk) 19:16, 23 April 2010 (UTC)[reply]
You can just use http://primes.utm.edu/nthprime/ which says: "There are 78,498 primes less than or equal to 1,000,000." PrimeHunter (talk) 19:20, 23 April 2010 (UTC)[reply]
Cool, so 1 to 1 million does not, but 1 to 10 million does have a prime number of prime numbers. Googlemeister (talk) 19:46, 23 April 2010 (UTC)[reply]
Yes. Primality tests of the π(x) column in Prime-counting function#Table of π(x), x / ln x, and li(x) (copied from oeis:A006880) show that the only n values from 1 to 23 for which π(10^n) is prime are 4 and 7. PrimeHunter (talk) 20:47, 23 April 2010 (UTC)[reply]
The largest π(x) which can be found at http://primes.utm.edu/nthprime/ is π(3×10^13) = 1000121668853, which is prime. The π(x) tables at http://www.ieeta.pt/~tos/primes.html include all x with a single non-zero digit up to 10^23. Testing shows the largest of these where π(x) is prime is π(3×10^13)! PrimeHunter (talk) 21:16, 23 April 2010 (UTC)[reply]
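
The small cases are easy to reproduce with sympy (a sketch; primepi becomes slow for large arguments, which is why the precomputed tables above are useful):

from sympy import primepi, isprime

for n in range(1, 8):
    count = primepi(10**n)             # number of primes <= 10^n
    print(n, count, isprime(count))
# n = 6 gives 78498, which is not prime; n = 4 (1229) and n = 7 (664579) are prime.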

Constructing prime ideals

If I is a proper ideal in a commutative, unital ring R, define J to be . I've shown that when J is proper, it's a prime ideal (I think) - what extra conditions do I need to impose to ensure J is always proper? (Is it ever?) For non-prime ideals I in Z, J is always equal to Z, but, as far as I can see, for in , , because . Thanks, Icthyos (talk) 22:06, 23 April 2010 (UTC)[reply]

Woops, disregard that second example, 1+x does lie in J. Icthyos (talk) 22:48, 23 April 2010 (UTC)[reply]
You appear to have some typos. If r in J implies r is a non-unit, then J is always proper since it does not contain 1, though J is not an ideal in most cases. Also note that s=0 is not a unit, so that every r in R (that is a non-unit) is in J, since rs=0 is in I. JackSchmidt (talk) 00:07, 24 April 2010 (UTC)[reply]
Yes...I don't really know what I was trying to say. I came back hoping to correct before anyone had responded, but you beat me to it! Thanks for trying to talk some sense into me, Icthyos (talk) 00:13, 24 April 2010 (UTC)[reply]
No problem. Colon ideals have a definition similar to yours and are often useful. JackSchmidt (talk) 00:38, 24 April 2010 (UTC)[reply]


April 24

Linear / ... / logarithmic scale

On a linear scale, equal distances between points demarcate equal values, i.e., the difference between 8 and 9 is exactly equal to the difference between 0 and 1, and (20-10)=(30-20)=10.

On a logarithmic scale, such as decibels, incrementing by ten represents an augmentation by an order of magnitude. (9-8)!=(1-0), (20-10)!=(30-20); (20-10)=100, (30-20)=1000.

What do I call a scale on which incrementing by ten represents an augmentation by two-thirds (or some other fraction) of an order of magnitude, and how would I write down and calculate actual values? --92.116.6.112 (talk) 14:09, 24 April 2010 (UTC)[reply]

That's still logarithmic. The decibel scale is the case where the fraction is 1. Logarithmic scale has a lot of detail on this.--RDBury (talk) 14:29, 24 April 2010 (UTC)[reply]
Thanks. "where there fraction is 1", sorry I don't understand. Logarithmic scale, Thanks but that article is a bit long... maybe I can read it one section at a time... and hope that I won't have forgotten the beginning by the time I get to the end (smile).--92.116.6.112 (talk) 14:57, 24 April 2010 (UTC)[reply]
The defining feature of a logarithmic scale is that incrementing the scale by some constant amount always corresponds to multiplying the actual quantity we're measuring by the same factor. If the quantity we're measuring is x, then a log scale measures log_b(x) for some base b. b is equal to the factor x has to increase by in order for log_b(x) to increase by 1. We can choose whatever b we want. So for example using b = 10, when log_10(x) increases by 1 then x increases by a factor of 10. With decibels, incrementing log_b(x) by 10 has x increasing by a factor of 10, so incrementing by 1 corresponds to a factor of 10^(1/10), or in other words the base being used is b = 10^(1/10). The example you asked about would be incrementing log_b(x) by 10 increasing x by a factor of 10^(2/3), so the base is b = 10^(2/30). Rckrone (talk) 20:26, 24 April 2010 (UTC)[reply]
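
A small Python sketch of that conversion for the scale asked about (+10 units = two-thirds of an order of magnitude); the function names and reference value are just for illustration:

from math import log10

# Scale: +10 units on the scale = factor of 10**(2/3) in the quantity,
# so +1 unit = factor of 10**(2/30), i.e. base b = 10**(2/30).
b = 10 ** (2 / 30)

def to_scale(x, reference=1.0):
    """Scale reading for quantity x, with the reference quantity mapped to 0."""
    return log10(x / reference) / log10(b)

def from_scale(s, reference=1.0):
    """Quantity corresponding to a scale reading s."""
    return reference * b ** s

print(to_scale(10 ** (2 / 3)))   # 10.0 (up to rounding): two-thirds of an order of magnitude
print(from_scale(30))            # 100.0 (up to rounding): +30 units = two full orders of magnitude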

Schrodinger Eigenvalues

Hi all :)

I've just finished a self-taught course on Quantum Mechanics and in the appendix a few of the exercises are about numerically solving Schrodinger's equation in cases where we can't analytically solve for eigenfunctions. I'm trying to solve a (simplified) version of the 1-Dimensional Quantum Harmonic Oscillator equation, -ψ''(x) + x^2 ψ(x) = E ψ(x), first: it seemed more sensible to try and solve an equation for which the bound states are known already: I calculated that E takes odd positive integer values and as far as I can tell the internet confirms this. However, solving the differential equation numerically (using a Runge-Kutta method) I find that when plotting my solution, it becomes unbounded as X tends to infinity: my book points out that this happens even if you put an exact eigenvalue for E into the differential equation solver, and asks what the solution behaviour indicates to us about the eigenvalues. Problem is, there doesn't really seem to be much mention in the book of classifying eigenvalues in any particular way, particularly with regards to numerical rather than analytic methods such as using a differential equation solver, and I'm presuming that this fact tells us the eigenvalue is of some particular class... Why does this even happen, when our solutions are meant to be bounded? Are there such things as 'unstable eigenvalues' or something like that? I've looked through the book and the Wikipedia article and neither gives me much of a clue of what we can deduce about the eigenvalues from the fact that numerical solutions are unbounded even for exact eigenvalues... Am I right in thinking it's something about instability? Or would that be more relevant to the case 'even when you get arbitrarily close to the eigenvalue, our numerical solution diverges', rather than talking about inputting the exact value then solving the differential equation? Is there something deeper going on here that I'm missing, which this behavior indicates? I'd really appreciate any help or suggestions you could give, even if it's just a book to refer to or another website link.

Many thanks, Simba31415 (talk) 14:48, 24 April 2010 (UTC)[reply]

It seems to me that the problem is simply that you need to impose the boundary condition at infinity. At infinity the wavefunction should be zero. But the second order diff. equation has two linearly independent solutions and using numerical methods like Runge Kutta all you can do is choose the function and its derivative at some point, say x = 0. You will then be approximating a solution that is some linear combination of the correct one that tends to zero, and one that blows up exponentially at infinity. Count Iblis (talk) 22:04, 24 April 2010 (UTC)[reply]
That makes sense - but then does that tell us anything about the eigenvalues? I was under the impression that you could solve the differential equation by a Frobenius series solution, which terminated if and only if E took on an odd integer value, and in that case we had a bound-state solution; but I guess we must have a second unbounded solution too! So can we actually say anything about the eigenvalues? Thanks very much for the help, unfortunately I don't have anyone to teach me this! :) Simba31415 (talk) 02:24, 25 April 2010 (UTC)[reply]
If you use the series method, then you would first look at the asymptotic behavior of the function, which will be x^p exp(±x^2/2). You then choose the minus sign in the exponent and write the full solution as
x^p exp(-x^2/2) times a series expansion. So, the other solution that blows up at infinity has been eliminated right from the start in this approach. Then the solution you find also blows up at infinity, unless the energy satisfies a condition which will terminate the series.
I don't see a relation with the nature of the eigenvalue spectrum here... Count Iblis (talk) 02:54, 25 April 2010 (UTC)[reply]
Again, that makes perfect sense, thankyou :) That's odd then, I wonder what they were referring to with the 'what does the solution behavior tell you about the eigenvalues'? The previous parts of the question seem to all refer to the behavior at infinity, and the fact that the solutions always tend to infinity no matter what value we put in for E - but I can't seem to figure out what, if anything, it does tell us. Thankyou so much for the help anyway, it's made a lot of things much clearer! :) Simba31415 (talk) 03:20, 25 April 2010 (UTC)[reply]
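
A minimal numerical illustration of the blow-up Count Iblis describes, using the dimensionless equation from the question, ψ'' = (x^2 - E)ψ, at the exact ground-state eigenvalue E = 1 (a sketch with scipy; the integration range and step are arbitrary, and the exact size of the blow-up depends on solver tolerances):

import numpy as np
from scipy.integrate import solve_ivp

E = 1.0                                # exact ground-state eigenvalue of -psi'' + x^2 psi = E psi

def rhs(x, y):
    psi, dpsi = y
    return [dpsi, (x * x - E) * psi]   # psi'' = (x^2 - E) psi

# Initial conditions matching the true bounded solution exp(-x^2/2) at x = 0.
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], max_step=0.01)

print(sol.y[0, -1])                    # large: rounding error excites the exp(+x^2/2) solution
print(np.exp(-sol.t[-1] ** 2 / 2))     # the true bounded solution is ~2e-22 by x = 10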

April 25

Square of wavefunction

Why is it that:

when:

and k1 is imaginary? How do you calculate the absolute value of t anyhow? --99.237.234.104 (talk) 00:50, 25 April 2010 (UTC)[reply]

So you're going to have to give us a bit more information. Your expression for |t|^2 doesn't contain k_0 or k_1, so without additional facts you can't just manipulate one into the other. However, in general |z|^2 = z z* for a complex number z, where z* is the complex conjugate. That'll be the gist of your calculation. Martlet1215 (talk) 10:58, 25 April 2010 (UTC)[reply]
Right, I didn't realize the two expressions use different variables. The two equations came from the rectangular potential barrier article, and k0 and k1 are defined in terms of E and V0 there. --99.237.234.104 (talk) 15:45, 25 April 2010 (UTC)[reply]

Polynomials

"In projective n-space over a field, a homogeneous multivariable polynomial of degree d-1 cannot have d roots on a line (projective line) without vanishing identically on the line." According to my textbook, this is not "too hard to see". I've been stuggling to see this. Do I need to do some heavy calculations, or am I supposed to visualise it? I can visualise why this is true somewhat, but I just can't see why a degree 2 multivariable polynomial for example cannot have 3 roots on a line algebraically, especially in projective n-space which I can't visualize for dimension greater than 3. How do I show that a d-1 degree multivariable homegeneous polynomial cannot have d roots on a given line in projective space without vanishing on the entire line? This is not homework (I'm just trying to understand the text). Thank you in advance. --Annonymous

Say the polynomial is P(X_0,...,X_n). After a linear change of coordinates, you can assume that the line consists of all points with homogeneous coordinates of the form (λ,μ,0,...,0). (The change of coordinates will change your polynomial, but it will still be homogeneous of the same degree.) Setting the variables X_2,..., X_n equal to zero, you end up with a polynomial
a_0 X_0^(d-1) + a_1 X_0^(d-2) X_1 + ... + a_(d-1) X_1^(d-1).
Now you need only evaluate it at points with homogeneous (X_0,X_1) coordinates (λ,1) and (1,0). Depending on whether (1,0) is a zero of the polynomial or not, you get a polynomial of degree d-1 or (at most) d-2 in terms of λ when you evaluate at (λ,1). The number of zeros of a nonzero polynomial is at most the degree of the polynomial. 86.205.30.114 (talk) 04:25, 25 April 2010 (UTC)[reply]
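
A tiny worked case of the argument above, for d - 1 = 2 (a sketch): restricting a homogeneous quadratic P to the line gives

Q(\lambda,\mu) = P(\lambda,\mu,0,\dots,0) = a_0\lambda^2 + a_1\lambda\mu + a_2\mu^2 .

If a_0 ≠ 0, then (1,0) is not a zero and Q(λ,1) has degree 2, so there are at most 2 zeros on the line; if a_0 = 0, then (1,0) is a zero but Q(λ,1) has degree at most 1, again giving at most 2 zeros. Either way, 3 distinct zeros force Q ≡ 0, i.e. P vanishes on the whole line.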

Reverse of an n-gram model

Given a table that yields the probability distribution for a term in a series given the n terms before it, is there an algorithm to invert the table so that it instead yields the distribution of a term given the n terms after it (i.e. if the table is empirically derived, to give the table we would have come up with had we reversed the order of the sample data)? NeonMerlin 04:09, 25 April 2010 (UTC)[reply]

Shouldn't there be an obvious brute force method? Are you asking if there is something better? 69.228.170.24 (talk) 06:59, 25 April 2010 (UTC)[reply]
I think there is missing data. Let's take a simple case where n = 1. Given P(x_2 | x_1), you have P(x_1 | x_2) = P(x_2 | x_1) P(x_1) / P(x_2) by Bayes' theorem. But you need to know the prior P(x_1). -- Meni Rosenfeld (talk) 08:16, 25 April 2010 (UTC)[reply]
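
A minimal Python sketch of that inversion for the n = 1 (bigram) case; the forward table and prior are made-up toy numbers, and the prior is exactly the extra information Meni Rosenfeld notes is needed:

# forward[a][b] = P(next term = b | current term = a)   (toy numbers)
forward = {
    "x": {"x": 0.7, "y": 0.3},
    "y": {"x": 0.4, "y": 0.6},
}
# prior[a] = P(term = a), the extra information needed for the inversion
prior = {"x": 0.5, "y": 0.5}

# P(next = b) = sum_a P(a) * P(b | a)
marginal_next = {
    b: sum(prior[a] * forward[a][b] for a in prior) for b in {"x", "y"}
}

# Bayes: P(previous = a | next = b) = P(b | a) * P(a) / P(b)
backward = {
    b: {a: forward[a][b] * prior[a] / marginal_next[b] for a in prior}
    for b in marginal_next
}
print(backward)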

I was trying to solve the QHO and I found a power series expansion for each energy eigenstate (in the position basis). This was ψ(x) = f(x) e^(-x^2/2), where f(x) = Σ A_N x^N with A_(N+2) = A_N (2N + 1 - E)/((N+1)(N+2)), and E measured in units of ħω/2. What prevents me from using an arbitrary value of E rather than only odd integers? 74.14.111.225 (talk) 05:40, 25 April 2010 (UTC)[reply]

You will probably find a more receptive audience if you transfer this question to WP:Reference desk/Mathematics. (ie delete it from the Science Reference Desk and post it at the Mathematics Reference Desk.) Dolphin (t) 06:20, 25 April 2010 (UTC)[reply]
Okay. 74.14.111.225 (talk) 07:01, 25 April 2010 (UTC)[reply]
Mathematically, E can be arbitrary, and the series will converge nicely on the entire real line. Whatever reason there is for E to be an odd integer, it has to do with the physics of it. Perhaps the function needs to have zeros at specific places or something of the sort. -- Meni Rosenfeld (talk) 08:36, 25 April 2010 (UTC)[reply]
I wonder why there are so many QM questions lately? -- Meni Rosenfeld (talk) 08:36, 25 April 2010 (UTC)[reply]
It's not that the series fails to converge for other E, but that the Hamiltonian in the QHO has discrete eigenvalues which are described by that equation. 69.228.170.24 (talk) 08:45, 25 April 2010 (UTC)[reply]
I haven't checked your relation or anything but the usual reason to impose specific eigenvalues on the QHO is the additional requirement that a physical wavefunction must be square-integrable (therefore normalisable). Martlet1215 (talk) 11:19, 25 April 2010 (UTC)[reply]
As far as I'm aware it's just a big coincidence there are 2 Quantum Harmonic Oscillator questions in the last 2 days, I certainly could have asked mine any time in the last week, but perhaps there's something far more sinister going on... :_: With regards to E taking only odd integer eigenvalues, I found that when you solve for the series solution in the differential equation - this is the differential equation , which you get from a minor substitution for E and x - it's best to write your solution and then solve for f(x): you get a recurrence relation for the coefficients which is something like (or something similar) where '...' is something irrelevant - a quadratic in N if I recall correctly - on the denominator, and so you find your series only terminates when 2n-E+1=0 for some n, so E=2n+1, and if your series doesn't terminate, then the large-x behaviour of f(x) is the same as that of , so overall and this is obviously not a bounded wavefunction; that's why you want to take E=2n+1 for some n so your series for f(x) terminates, and then reverting back from your original substitution, you get that the actual energy , which is where your eigenvalues come from as in the wikipedia article on the QHO. Still, this is entirely self-taught and from memory, so someone please correct me if I'm wrong! :) Simba31415 (talk) 11:21, 25 April 2010 (UTC)[reply]