Wikipedia:Reference desk/Mathematics: Difference between revisions

From Wikipedia, the free encyclopedia
== Multiple regression v. neural nets ==


Were neural nets just a fad? I would like to try predicting or forecasting the real-estate market using several time series of economic data. The dependent variable would be a time series of real-estate average prices. The independent variables would be time series of things like unemployment, inflation, stock market index, and so on, plus a lagged real-estate average price time series. Which would be the better methodology to use? And does anyone have an opinion on which of the various specific types of the two methods would be best for what I want to do? Would using independent variables that were correlated make any difference to the choice? I would also like to try using lagged real-estate average price time series from other geographic areas. Thanks. [[Special:Contributions/89.240.217.9|89.240.217.9]] ([[User talk:89.240.217.9|talk]]) 21:35, 23 July 2009 (UTC)

Revision as of 22:47, 23 July 2009

Welcome to the mathematics section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.

July 17

Asking people for their names

OK, this is going to be a really odd question.

Say you've got a person who you don't know the name of, but you've got a list of names it might be, and you're allowed to ask yes-or-no questions. What is the minimum number of questions you've got to ask to be guaranteed of knowing their name?

For example, say their name might be Mario, Luigi, Peach or Toad. You can ask them whether they're each name in turn, and be sure of their name after the third question ("Are you Mario?" "No." "Are you Luigi?" "No." "Are you Peach?" "No." So they must be Toad). But there is a quicker way:

"Are you Mario or Luigi?"

If YES, you can simply ask them if they're Mario, and if NO you can ask them if they're Peach. Either way you're guaranteed to know their name in only two questions.

If you have seven names, Mario, Luigi, Peach, Toad, Bowser, Wario and Daisy, you can ask them:

"Are you Mario, Luigi, or Peach?"

If they say YES, you can ask them each name in turn and so are guaranteed to know their name in three questions (Are you Mario, Luigi, or Peach? Are you Mario? Are you Luigi?) If they say NO, you simply do what you did when there were only four names, and so you're also guaranteed to know their name in 3 questions. So the minimum number of questions you've got to ask to be guaranteed of knowing their name is 3. As far as I've figured it out, the sequence for the number of questions for each number of names is 0 for 1 (if there's only one name you don't have to ask any questions!), 1 for 2, 2 for 3, 2 for 4, 3 for 5, 3 for 6, 3 for 7, 3 for 8, and I think 4 for 9 and 4 for 10 but my brain is totally melting so I can't be sure. What I want to know is, is there an easier way to figure this out and does this fit with any mathematical sequences already discovered? Vitriol (talk) 11:30, 17 July 2009 (UTC)[reply]

Put it this way: if you are allowed to ask n questions, how many different n-tuples of answers can you get? (from Y,Y,..,Y to N,N,..,N). It's 2^n, right? So there is no way to detect the name if there are more than 2^n names. But if the names are 2^n, divide them into two equal groups and ask which one has the name, as you were saying, and in n steps you have it (and a fortiori if there were fewer than 2^n). So the answer is log2(n), rounded. It's the basis of information theory...--pma (talk) 11:47, 17 July 2009 (UTC)[reply]
Rounded up. BTW, for reference, the algorithm that you are implementing here is known in computer science as a binary search, and it is the most efficient algorithm possible for this scenario, as pma demonstrated. --COVIZAPIBETEFOKY (talk) 11:55, 17 July 2009 (UTC)[reply]
I think "binary search" refers specifically to the case of searching a sorted list. What we have here is the more general dichotomic search. -- Meni Rosenfeld (talk) 15:14, 17 July 2009 (UTC)[reply]
Aw, nobody told me we'd have to learn Greek! —Tamfang (talk) 17:04, 20 July 2009 (UTC)[reply]
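The ceiling-of-log-two rule described above is easy to check against Vitriol's hand-computed sequence; a quick sketch (the function name is mine, just for illustration):

```python
import math

def questions_needed(n):
    # Minimum yes/no questions guaranteeing identification of one of n names:
    # each question at best halves the candidate set, so ceil(log2(n)).
    return math.ceil(math.log2(n)) if n > 1 else 0

print([questions_needed(n) for n in range(1, 11)])
# → [0, 1, 2, 2, 3, 3, 3, 3, 4, 4]
```

The output matches the sequence conjectured in the question, including 4 for 9 and 10 names.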

Residue of exp(1/sin(z)) at z=0.

Is there a simpler way to find the residue of exp(1/sin(z)) at z=0 than plugging 1/sin(z) into the power series for e^w and then doing long division on the first term, as no other term should include a 1/z term? It's not too hard or anything but I am just wondering. Thanks StatisticsMan (talk) 17:43, 17 July 2009 (UTC)[reply]

There's the residue formula: Res(f, z0) = (1/(2πi)) ∮_C f(z) dz, where C is a closed curve around z0 and f is holomorphic everywhere else inside the curve. Or if you know the order of the pole is n you can use Res(f, z0) = (1/((n−1)!)) lim_{z→z0} d^{n−1}/dz^{n−1} [(z−z0)^n f(z)], which is on the residue page.
However, it looks like the singularity you're asking about is an essential singularity, so no residue. Rckrone (talk) 18:15, 17 July 2009 (UTC)[reply]
What? Essential singularites have residues, just like any other isolated singularity. Algebraist 18:19, 17 July 2009 (UTC)[reply]
Oh sorry. I guess I was confused about that. I've only ever seen residues discussed in the context of poles. Rckrone (talk) 18:22, 17 July 2009 (UTC)[reply]
I will answer my own question. Consider the power series expansion of e^w at w = 0, applied with w = 1/sin(z). Since sin(z)'s expansion has a z term but no constant term, only for n = 1 will the power series expansion of (1/sin(z))^n have a 1/z term. So, the coefficient in front of 1/z in the expansion for exp(1/sin(z)) is just the coefficient for 1/z in the expansion for 1/sin(z). And, now we're just dealing with a simple pole so you multiply by z and take the limit to get 1. This, I consider to be simpler than actually doing the long division on the term, though you don't have to go far. StatisticsMan (talk) 19:26, 17 July 2009 (UTC)[reply]


It's not that simple. Say that your argument is OK with res0(1/sin z) = 1 (it's equivalent to lim sin(z)/z = 1 as z→0). But I fear that after that, you are confusing 1/sin(z) with sin(1/z). Indeed, res0(1/sin(z)^n) is 0 for even n, but it is not for odd n. You can easily compute it by means of the Lagrange inversion theorem, using the power series expansion of arcsin(z). You should find
.
From this you get, expanding the exponential in a series of powers of 1/sin(z),  :.
There are other ways of course; maybe somebody has a more elegant solution.--pma (talk) 20:57, 17 July 2009 (UTC)[reply]
Note. I tried googling the first digits of the decimal expansion 086521097, and as a matter of fact it corresponds to a telephone number. Actually they say they just moved there and do not have much information about that exp(1/sin(z)), so please do not bother them further. --pma (talk) 08:39, 18 July 2009 (UTC)[reply]
If you wanted to find a simpler representation of this number, surely Plouffe's Inverter would be more effective than Google (though I got no hit in this case). -- Meni Rosenfeld (talk) 20:04, 18 July 2009 (UTC)[reply]
Thanks, I'll bookmark it immediately :-) pma (talk) 21:30, 19 July 2009 (UTC)[reply]
Good work with the series. This is a hypergeometric series, . Using an identity, this in turn can be rewritten using the modified Bessel function and the modified Struve function as
which can be checked to agree numerically with the integral. The hypergeometric form is the simplest form yet of the solution, though, I think. Fredrik Johansson 08:58, 18 July 2009 (UTC)[reply]
Agree. I see now Maple also returns the hypergeometric form to the series. Talking about a residue of the form , the above computation admits the following generalization (as I write it, it's not exactly a generalization, but I liked the symmetric form...).
Let and be entire functions, with and . So and have local inverses at 0, respectively and . Then we have:
.
Indeed,
because the first series converges uniformly on any small circle around 0. Using Lagrange's
,
we get the above symmetric expression in and .
Since it seems quite a standard computation we could maybe mention it somewhere in residue (complex analysis)#Series methods. BTW, is there a direct way for seeing that is the derivative of some function meromorphic at 0? --pma (talk) 09:50, 18 July 2009 (UTC)[reply]
I just presented my solution today and was told it was wrong by another student. I had not looked back here since I thought I had it figured out. We decided today that the professors probably meant it to be the way I did it but also overlooked the fact that it doesn't work. Otherwise, from your solutions, it looks harder than anything they would actually expect us to do. For example, I've never heard of the Lagrange inversion theorem. So, either that, or they expect us to figure out it's too hard and then not do the problem because of that and that is how they are using it to test us. Is there some other way to do this (the original problem is the integral of this function around the unit circle) that they could possibly expect us to know? StatisticsMan (talk) 20:26, 22 July 2009 (UTC)[reply]
I do not see a simpler way to get the result. You can express the result with the series, or with the Bessel functions as shown by FJ, or also as . This seems to show that any method you follow to compute that residue could not be too elementary, because it will use more or less explicitly the modified Bessel functions. By the way, the Lagrange inversion theorem is a two lines formal computation; unfortunately our articles on the topic are not very satisfactory. --pma (talk) 15:23, 24 July 2009 (UTC)[reply]
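As a numerical aside, residues at 0 are straightforward to check by contour integration: the residue is the average of f(z)·z over a circle enclosing no other singularity, and the trapezoid rule is extremely accurate for such periodic integrands. A sketch (the helper name is mine, not from the thread):

```python
import cmath

def residue_at_zero(f, radius=1.0, n=4096):
    # (1/(2*pi*i)) * contour integral of f over |z| = radius, which is
    # just the average of f(z)*z over equally spaced points on the circle.
    total = 0j
    for k in range(n):
        z = radius * cmath.exp(2j * cmath.pi * k / n)
        total += f(z) * z
    return total / n

print(residue_at_zero(lambda z: 1 / cmath.sin(z)))       # close to 1
print(residue_at_zero(lambda z: 1 / cmath.sin(z) ** 2))  # close to 0
print(residue_at_zero(lambda z: 1 / cmath.sin(z) ** 3))  # close to 0.5
```

This agrees with pma's remark that res0(1/sin(z)^n) vanishes for even n but not for odd n (the radius 1 circle stays inside the other poles of 1/sin at ±π).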


July 18

Polynomial whose roots are powers of another polynomial

Hello again. This is part of a theorem I am reading on a single Hensel lift of a polynomial. Suppose I am given a polynomial h_m which divides x^k − 1 mod p^m. By Hensel's lemma I can construct a polynomial h(x) which is equivalent to it mod p^m and which divides x^k − 1 too, but mod p^(m+1). If we let x be a root of h_m and y = x + p^m·i a root of h, then we have x^k = 1 + p^m·e, as h_m divides x^k − 1 mod p^m. Also, mod p^(m+1), y^p = x^p and y^(kp) = 1. All this is fine. My book now claims (and so does the link) that if I consider another polynomial h_(m+1) whose roots are the pth powers of the roots of h, then these roots coincide mod p^m with those of h_m. I can't justify this statement. I'll be grateful for any help. Thanks--Shahab (talk) 10:17, 18 July 2009 (UTC)[reply]

Irreducibility of multivariate polynomials

(redirect from sci desk) I cannot find anything about that. I can think of two reasons for this: either it's utterly complicated and not well understood or it can be so easily reduced to the question of irreducibility of polynomials of a single variable that no one cares to tell. What is it? 93.132.138.254 (talk) 09:35, 18 July 2009 (UTC) —Preceding unsigned comment added by 83.100.250.79 (talk) [reply]

Start maybe from here : Algebraic geometry.... --pma (talk) 11:47, 18 July 2009 (UTC)[reply]
Ah, you're right! I had already guessed this had something to do with mathematics somehow! 93.132.138.254 (talk) 14:44, 18 July 2009 (UTC)[reply]
My understanding was that Algebraic geometry primarily studied polynomials over algebraically-closed fields. What about polynomials over general rings? Can you provide any particular insight there? Eric. 76.21.115.76 (talk) 21:30, 18 July 2009 (UTC)[reply]
Not sure what kind of insights you are after, but as a starting point, multivariate polynomials over any unique factorization domain form themselves a UFD because of Gauss' lemma. — Emil J. 12:44, 20 July 2009 (UTC)[reply]

How does this work?

How does this work? I recently had it emailed to me and it seems to work. Thanks. Computerjoe's talk 22:33, 18 July 2009 (UTC)[reply]

Warning: link plays unsolicited audio. Algebraist 22:34, 18 July 2009 (UTC)[reply]
Let your initial number have first digit x and second y, so the number is 10x+y. Subtracting the sum of digits from this gives 9x. So all the page has to do is label all multiples of 9 with the same option and it wins. Algebraist 22:37, 18 July 2009 (UTC)[reply]
Ah okay. Thanks. Computerjoe's talk 22:46, 18 July 2009 (UTC)[reply]
but in fact it's a bit more subtle. As soon as you get the arithmetic trick, you try to cheat with n ≠ 9x, and it still gets it right... The fact is that it knows which square you are pointing the mouse at when you choose it...--pma (talk) 08:19, 19 July 2009 (UTC)[reply]

Ummm... No...
I have not noticed that behavior. --COVIZAPIBETEFOKY (talk) 14:23, 19 July 2009 (UTC)[reply]
oh i really almost thought it read my mind... now i understand! it changes every time. --pma (talk) 16:37, 19 July 2009 (UTC)[reply]


July 19

Extrema Finding and Integral Transforms

Is there a transform which "simplifies" finding of minima or maxima? Or to put the question more generally, what are the "frequency domain" analogues for extrema finding for various transforms, such as the Fourier, Laplace, and Mellin transforms? (Or any others, if a straightforward analogue exists.) I'm looking at a class of non-trivial minimization problems, and am wondering if a transform might help with conceptualizing them, much like the Fourier transform helps with conceptualizing wave behavior with its time domain/frequency domain duality. -- 76.201.158.47 (talk) 00:37, 19 July 2009 (UTC)[reply]

Almost certainly not, without extra information. A local minimum or maximum is a local property of the function (that is to say, dependent on behavior within a small neighborhood of the point), and just about any integral transform (that is to say, the family of transforms involving integration against some kernel) "smears" that information out across the entire area. If you're looking for a "transform" that only takes into account local information, you're really talking about something like taking a derivative. RayTalk 02:23, 19 July 2009 (UTC)[reply]
Note that the transformations you are quoting are linear ones, that is, rather too rigid to really change the view of a nonlinear problem. In any case there is no transformation that works like a panacea for all minimization problems. Nevertheless, special classes of problems may possibly be treated by a special transformation. The first important example of what you have in mind is maybe the use of the Legendre-Fenchel transformation in convex minimization; for instance, it is the way one passes from the Lagrangian to the Hamiltonian formalism in mechanics.--pma (talk) 06:56, 19 July 2009 (UTC)[reply]

Π-like notation for tupling

So I find myself wanting to write some ordered n-tuples of the form , except that in place of I have a longer expression, long enough that I don't want to write it twice, and the context in which this appears is such that I can't easily write a separate "where" clause as I usually would (). If I had a similar problem on the right hand side I could solve it easily by writing , but I don't think I've ever seen an analogous notation for the elements of a Cartesian product, even though one ought to exist. just looks silly. Has any prominent source ever defined a notation for this, or, failing that, can anyone suggest something that looks good? I have the full resources of LaTeX at my disposal. -- BenRG (talk) 11:35, 19 July 2009 (UTC)[reply]

How about ?. -- Meni Rosenfeld (talk) 12:35, 19 July 2009 (UTC)[reply]

I once used (X i : i ∈ B) in a published paper.

The idea was that B was some subset of the index set {1, ..., n}. E.g. if B = {2, 4, 9} then

That may not be exactly what you need, since in the "tuples" I was using the order didn't actually matter and since B was simply a set, I couldn't have written

in this notation. But I didn't need that. Maybe you can play with variations on this theme and find one that suits your purpose. I didn't explain the notation; it seemed self-explanatory in the context in which I used it. Michael Hardy (talk) 14:38, 19 July 2009 (UTC)[reply]

Thanks. I think I'll go with where , which is compact enough for my purposes. But it still bugs me that the notation on the left doesn't parallel the notation on the right as it should. -- BenRG (talk) 20:04, 19 July 2009 (UTC)[reply]
Sometimes it's convenient to adopt the function notation for n-tuples, that is, just x : {1, ..., n} → X, with x(i) instead of x_i. Precisely, as a section of the natural projection. --pma (talk) 21:14, 19 July 2009 (UTC)[reply]
The function notation is not just notation, it's what elements of the Cartesian product are according to its definition. — Emil J. 12:34, 20 July 2009 (UTC)[reply]

I don't understand the question. Could you be looking for the projection map ? 67.117.147.249 (talk) 23:28, 19 July 2009 (UTC)[reply]

I doubt that's what was meant. Michael Hardy (talk) 03:46, 20 July 2009 (UTC)[reply]

July 20

Magnitude of a vector in different units

Is there a definition, in the general case, for the magnitude of a vector whose components are in incommensurable units? NeonMerlin 00:12, 20 July 2009 (UTC)[reply]

I don't think so. What do you want one for? Algebraist 00:15, 20 July 2009 (UTC)[reply]
Dimensional analysis? Is Nondimensionalization of any help? 67.117.147.249 (talk) 03:37, 20 July 2009 (UTC)[reply]
NeonMerlin - what do you mean by "incommensurable units" ? Do you mean that the basis elements have different dimensions e.g. ex is in metres and ey is in seconds, so a typical member of the space might be "10 metres + 5 seconds" ? I don't think this makes sense as a physical vector space, because the dimension (in the physics sense) of a "vector" in this space is undefined. Gandalf61 (talk) 10:46, 20 July 2009 (UTC)[reply]

Certainly! See metric tensor. The usual formula for the length s of the vector (x,y),

s^2 = x^2 + y^2,

generalizes to

s^2 = g_xx x^2 + g_yy y^2

when the components are in different units. The coefficients g_xx and g_yy are the squares of the magnitudes of the base vectors:

g_xx = e_x · e_x,   g_yy = e_y · e_y.

If the unit vectors of the coordinate system are not even orthogonal to one another, the formula reads

s^2 = g_xx x^2 + 2 g_xy xy + g_yy y^2

where

g_xy = e_x · e_y.

The formulas are simplified by renaming variables: writing x^1, x^2 for x, y and g_ij for the dot products of the base vectors, so that

s^2 = g_11 x^1 x^1 + 2 g_12 x^1 x^2 + g_22 x^2 x^2 = g_ij x^i x^j.

Here the superscript no longer means exponentiation, but rather indexing. The formula is further simplified by the Einstein convention of implying summation when the same index occurs once downstairs and once upstairs.

This is readily generalized to the three dimensional case of stereometry, and to the four dimensional case of relativity theory.

Bo Jacoby (talk) 08:26, 20 July 2009 (UTC).[reply]
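Bo Jacoby's metric-tensor length formula is easy to try numerically. A small sketch with made-up base vectors (the magnitudes and angle below are purely illustrative):

```python
import numpy as np

# Illustrative basis: e_x has magnitude 3, e_y has magnitude 2,
# with an angle of 60 degrees between them.
ex_len, ey_len, angle = 3.0, 2.0, np.pi / 3
gxy = ex_len * ey_len * np.cos(angle)      # e_x . e_y
G = np.array([[ex_len**2, gxy],
              [gxy,       ey_len**2]])     # metric tensor g_ij

v = np.array([1.0, 2.0])                   # components (x, y) in this basis
s = np.sqrt(v @ G @ v)                     # s^2 = g_ij x^i x^j
print(s**2)                                # ≈ 37 (= 9 + 2*6 + 16)
```

The same value follows from expanding |e_x + 2 e_y|^2 directly, which is a quick way to convince yourself the quadratic form is the right generalization of Pythagoras.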

July 21

sizes of round objects

My local pizza shop has, for the same price, four 9 inch pizzas, three 12 inch pizzas or two 15 inch pizzas. Which is the most pizza and how do you work it out? We also hate crusts - which has the most crust and how do you work it out? —Preceding unsigned comment added by Payneham (talkcontribs) 05:00, 21 July 2009 (UTC)[reply]

The formula to find the area of a circle is πr². Assuming 9-inch, 12-inch, and 15-inch refer to the pizzas' diameters, then r (the radius of a pizza) would be half of these values, so 4.5 inches, 6 inches, and 7.5 inches, respectively. So the area of each (rounding to the nearest square inch) would be:
π(4.5)² ≈ 64 in²
π(6)² ≈ 113 in²
π(7.5)² ≈ 177 in²
Now multiply by the quantity of each pizza to find the total area of each pizza "combo": 4 × 64 = 256 in², 3 × 113 = 339 in², 2 × 177 = 354 in². Therefore the two 15-inch pizzas have the most pizza for the money. However, how much crust there is depends almost entirely, I suppose, on the kind of pizza it is. Ginogrz (talk) 05:18, 21 July 2009 (UTC)[reply]
The multiplication by π is not needed in order to answer the question, and omitting it makes the answers exact:
The ratio of areas is 324 : 432 : 450.
Never round until the last step. Michael Hardy (talk) 12:43, 21 July 2009 (UTC)[reply]
Assuming the crust has the same thickness on all the pizza sizes and is pretty thin compared to the overall size of the pizzas, the amount of crust on each pizza is going to be pretty much proportional to the length around the outside of the pizza, which is the circumference. Circumference equals π times the diameter, so the first combo has 4 x 9in x π = 36π in of crust, the second combo has 3 x 12in x π = 36π in, and the third combo has 2 x 15in x π = 30π in. So the two 15in pizzas also have the least crust. Rckrone (talk) 06:24, 21 July 2009 (UTC)[reply]

To minimize the arithmetic to what can be done without computer: Note that the diameters, 9 12 15 inches, are simply 3 4 5 times some length unit, 3 inches. So the areas per pizza are 9 16 25 times some common area unit. The area of pizza per price is then (4×9, 3×16, 2×25) = 36 48 50, and the third option is obviously the one with most area of pizza. The amount of crust goes as the circumference, which is as (4×3, 3×4, 2×5) = 12 12 10. The third option is the one with least crust. Bo Jacoby (talk) 08:03, 21 July 2009 (UTC).[reply]

I want to point out that Jacoby's solution makes use of proportions, which certainly simplifies the arithmetic a lot, as well as removing the need to use the mathematical constant π to calculate the areas, which is ugly and can only be done approximately. To write out what he did more explicitly, we have the formula:
A = πr²
Which translates to a proportionality:
A ∝ r²
The conversion from radius to diameter above shows that:
A ∝ d²
And similarly, the formula for a circumference:
C = πd
Directly leads to:
C ∝ d
Since they preserve order, the proportional quantities can be compared exactly, without the worry of rounding errors. Of course, you may still want to know the areas and circumferences of the pizzas, in which case you have to approximate using π. --COVIZAPIBETEFOKY (talk) 11:53, 21 July 2009 (UTC)[reply]
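The whole pizza comparison fits in a few lines of code; a sketch of the arithmetic above (deal labels are mine):

```python
import math

# (quantity, diameter in inches) for each same-price deal
deals = {"four 9-inch": (4, 9), "three 12-inch": (3, 12), "two 15-inch": (2, 15)}

area  = {name: n * math.pi * (d / 2) ** 2 for name, (n, d) in deals.items()}
crust = {name: n * math.pi * d for name, (n, d) in deals.items()}

print(max(area, key=area.get))    # most pizza: two 15-inch
print(min(crust, key=crust.get))  # least crust: two 15-inch
```

As noted above, the winner on both counts is the pair of 15-inch pizzas: most area, least circumference.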

Changing someone else's homework

Say someone has done their mathematical homework.

If I, being a naughty person, went and changed all the positive numbers into negative numbers by changing the sign, and vice versa, will the homework still be correct? 122.107.207.98 (talk) 13:01, 21 July 2009 (UTC)[reply]

No. If he wrote 2 × 3 = 6, changing it to (−2) × (−3) = −6 will no longer be correct.
To put it differently, x ↦ −x is not an automorphism of the field of real numbers. Compare this to complex conjugation z ↦ z̄, which is an automorphism of the field of complex numbers. -- Meni Rosenfeld (talk) 13:20, 21 July 2009 (UTC)[reply]
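The automorphism point is easy to see with concrete numbers: negation respects addition but not multiplication, while complex conjugation respects both.

```python
# Negation x -> -x preserves sums...
assert -(2 + 3) == (-2) + (-3)
# ...but not products, so sign-flipped homework goes wrong:
assert (-2) * (-3) == 6 != -(2 * 3)

# Complex conjugation z -> conj(z) preserves both operations,
# which is why it IS a field automorphism:
z, w = 2 + 1j, 3 - 4j
assert (z + w).conjugate() == z.conjugate() + w.conjugate()
assert (z * w).conjugate() == z.conjugate() * w.conjugate()
print("all checks pass")
```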

Estimate of Perron-Frobenius Theorem

In Perron–Frobenius theorem, we read that an estimate is min_i Σ_j a_ij ≤ r ≤ max_i Σ_j a_ij, i.e. the Perron root r lies between the minimal and maximal row sums of the matrix. While this is not too hard to understand, I need a citeable source for this statement in the non-negative case and so far have failed to find one. Does anyone know any on the web (especially in Google Books, which I have been browsing through without success yet)? --KnightMove (talk) 17:43, 21 July 2009 (UTC)[reply]
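Not a citation, but the row-sum bounds are at least easy to sanity-check numerically (a sketch using numpy's generic eigenvalue routine, nothing Perron-specific):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((6, 6))                 # a non-negative matrix
r = max(abs(np.linalg.eigvals(A)))     # spectral radius = Perron root
row_sums = A.sum(axis=1)

# The Perron root lies between the smallest and largest row sums:
print(row_sums.min() <= r <= row_sums.max())   # True
```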

July 22

Showing inverse of a bijective holomorphic function on open set is holomorphic

I am pretty sure I proved this in detail, following a proof in my book. But, I am trying to get a quick proof in case I need to prove it on a qualifying exam just to use the result. By quick, I mean I want to explain it a bit but not show much of the detail, basically name a couple theorems. Is the following enough?

First, f'(z) is never 0 since f is one-to-one. By the inverse function theorem, (f^{-1})'(w) = 1/f'(f^{-1}(w)) at each point. Since f(z) is holomorphic, f'(z) is also, so 1/f'(z) is as well, since the quotient of holomorphic functions is holomorphic as long as the denominator is not 0.

Thanks. StatisticsMan (talk) 01:04, 22 July 2009 (UTC)[reply]

Which inverse function theorem are you using, exactly? And how are you showing that injective holomorphic functions have nonzero derivative? Algebraist 01:08, 22 July 2009 (UTC)[reply]
First, I had the derivative formula incorrect and have changed it. As to which inverse function theorem I'm using, I do not know, I guess. Are you asking because the formula was wrong? The detailed proof I got from the book mostly starts by showing the inverse is continuous at each point. Then, it shows the derivative is equal to 1/f'(f^{-1}(z)) at each point. This is the result I want really, though I don't see the exact thing in the article on Inverse function theorem here. If I consider the total derivative, that is the complex derivative, then it says that the derivative of the inverse exists and is continuously differentiable. But, it does not give a formula for it. Is it always the same formula? As far as showing that an injective holomorphic function has nonzero derivative, this is something that is tricky to me. My book gives a theorem that says that if a nonconstant holomorphic function on a connected open set has f(P) = Q with order k, then there exists a small disk around Q such that every point in the disk (other than Q) has exactly k distinct preimages in a small disk around P. But, I don't see how this implies, as they say, that a nonzero derivative makes the function locally one-to-one. StatisticsMan (talk) 01:45, 22 July 2009 (UTC)[reply]
I asked about your inverse function theorem because there is a complex inverse function theorem which contains the result you're trying to prove, and presumably would not be accepted as part of a proof of it in an exam. Algebraist 01:48, 22 July 2009 (UTC)[reply]
Well, I would like to know how to prove this directly and in a nice way. But, for now I'm working on a problem asking if there is a bijective holomorphic function which maps the unit disk onto the complex plane. The easy solution is, no, because its inverse is entire and bounded and thus constant. If I take some time to explain that, including using Liouville's theorem and the inverse function theorem and such, then perhaps it would not be trivial and would be good enough to get credit. If I have time later, I can come back to try to prove more. But, the point is, I want a quick proof of the fact to use in cases like this. StatisticsMan (talk) 01:56, 22 July 2009 (UTC)[reply]
Also, is the complex version just the same basic thing? I looked up "inverse" in my 3 main complex analysis books and found nothing basically. StatisticsMan (talk) 01:57, 22 July 2009 (UTC)[reply]
I would call the following result an inverse function theorem: let f be a holomorphic function from an open set U to the complex plane, let x be in U, and suppose f'(x)≠0. Then there is a neighbourhood V of x such that f is injective of V, the inverse map is holomorphic, and the derivative of the inverse is given by [whatever the formula is]. Algebraist 02:17, 22 July 2009 (UTC)[reply]
Here's how the book I have proves it:
First they prove that an injective holomorphic function f on an open set U has non-zero derivative. Suppose f'(z0) = 0. Then f(z) = f(z0) + a(z − z0)^k + G(z) for some a ≠ 0, k ≥ 2, and G(z) a function that vanishes to order k+1 at z0, so then we can choose a radius δ > 0 around z0 small enough so that |G(z)| < (|a|/2)δ^k = c for all z within δ of z0 and so that f'(z) ≠ 0 for z ≠ z0. Let F(z) = a(z − z0)^k − c, so that f(z) − f(z0) − c = F(z) + G(z). |F(z)| ≥ c > |G(z)| along the circle of radius δ, so by Rouché's theorem, f(z) − f(z0) − c has k (not necessarily distinct) zeros in the circle. z0 is not a root, so f' is non-zero at each root, so the roots are distinct, so f is not injective. So I guess the basic idea is that if a holomorphic function has a multiple root at some point, then by adding a small constant we get a function that has distinct roots near the point, which shows that neither function is injective in a neighborhood of the point.
Then they prove the inverse function part just using the definition of the derivative. Let g = f^{-1} on f(U), and for w = f(z) and w0 = f(z0) consider the difference quotient (g(w) − g(w0))/(w − w0) = (z − z0)/(f(z) − f(z0)), which converges to 1/f'(z0). You don't need to show that 1/f' is holomorphic, just that the complex derivative of g exists everywhere in f(U).
(I hope I didn't screw any of this up.) Rckrone (talk) 03:54, 22 July 2009 (UTC)[reply]
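The inverse-derivative formula can also be sanity-checked numerically. Below, f(z) = z + z²/4 (my choice, not from the thread) is injective near 0 and has a closed-form local inverse, so we can compare a finite-difference derivative of the inverse against 1/f'(g(w)):

```python
import cmath

f       = lambda z: z + z**2 / 4
f_prime = lambda z: 1 + z / 2
# Closed-form local inverse of w = z + z^2/4, taking the branch with g(0) = 0:
g = lambda w: 2 * (cmath.sqrt(1 + w) - 1)

w = 0.1 + 0.05j
h = 1e-6
g_prime_numeric = (g(w + h) - g(w)) / h        # finite-difference g'(w)

print(abs(f(g(w)) - w))                        # ~0: g really inverts f here
print(abs(g_prime_numeric - 1 / f_prime(g(w))))  # tiny: matches 1/f'(g(w))
```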

July 23

A variation of the knapsack problem

I will describe the problem that I need to solve:

Given a budget I need to choose parts that will be used to build a car. To build a car I need:

1. Engine

2. Body

3. Set of wheels

I need to choose one engine, one body, and one set of wheels from a set of 1000 engines, 1000 bodies, and 1000 sets of wheels. (I also know the price of each engine, each body, ... ).

In addition every part has a power ranking that is between 1-100.

I need to choose 3 parts so that the car that I get will cost less than (or equal to) my budget, and that its "power ranking" will be maximized. ("power ranking" := the sum of the power rankings of the engine, body, wheels.)

Can you please tell me if this can be reduced to the knapsack problem, and how I would solve this problem.

Thanks!

Quilby (talk) 03:11, 23 July 2009 (UTC)[reply]

We do have an article about the knapsack problem that describes some approaches. The problem is formally NP-hard, but instances that occur in practice appear to usually be tractable. 70.90.174.101 (talk) 05:04, 23 July 2009 (UTC)[reply]
Right, but my problem is different. Notice how in the regular knapsack problem, all weights are 'equal'. Here I have 3 different categories of weights and I need to choose 1 from each. If someone can give me some tips on how to solve this problem it would be helpful. 94.159.156.225 (talk) 05:23, 23 July 2009 (UTC)[reply]
The naive approach of trying all combinations is only n³ so it's clearly polynomial. An n² algorithm is not too hard. Sort each set of parts by cost and throw out the strictly worse choices, then for each engine, traverse the bodies list from most expensive to cheapest while traversing the wheels list in the opposite direction, looking for the most expensive wheels that still put everything under the budget. Hopefully someone can think up something more clever though. Rckrone (talk) 06:40, 23 July 2009 (UTC)[reply]
By the way you are looking for an exact solution, right? Rckrone (talk) 06:44, 23 July 2009 (UTC)[reply]
I don't know what you mean by exact, but as I said - the price of the final car needs to be smaller than or equal to my budget. So I think this is not an exact solution. Quilby (talk) 21:38, 23 July 2009 (UTC)[reply]

For each possible power ranking (1..100) select the cheapest engine, the cheapest body, and the cheapest set of wheels, and discard the rest. This reduces the size of the problem from 10^9 to 10^6 combinations to consider. Also discard any engine (body, wheels) that costs more than some higher ranked engine (body, wheels). Next, subtract the price of the cheapest engine (body, wheels) from the prices of all the engines (bodies, wheels), and from your budget. Now your budget reflects how much extra money you can spend to buy something more expensive than the very cheapest car. Then discard all parts that individually exceed your budget. The rest is brute force. First consider the set of pairs, (the cartesian product), of engines and bodies. Discard all pairs that cost more than your budget. Sort on ascending ranking. Retain only the cheapest pair of each rank. Also discard more expensive pairs of lower ranks. Finally combine the pairs with the wheels. Bo Jacoby (talk) 10:52, 23 July 2009 (UTC).

PS. ("power ranking" := the sum of the power rankings of the engine, body, wheels.) Is that really what you want? Is it a good idea to use fancy wheels together with a lousy engine? I suppose you would be happier maximizing the minimum of the power rankings of the engine, body, wheels. Bo Jacoby (talk) 13:18, 23 July 2009 (UTC).[reply]

That algorithm is still worst case n³. For a hard data set it doesn't reduce the number of combinations tried. Rckrone (talk) 17:05, 23 July 2009 (UTC)[reply]
Oh I see what you're saying. Since there are only 100 power ratings but 1000 parts, there are guaranteed to be some stupid ones (assuming only integer power ratings which is not specified). Still once you've weeded out the bad ones, you can do better than brute force. Rckrone (talk) 17:43, 23 July 2009 (UTC)[reply]
Let's try to solve a more general problem while combining all of the above. We have k types of parts; for each we have n models, each having an integer power rating between 1 and m (in the OP, k = 3, n = 1000, m = 100).
We construct an array of the identity and cost of the cheapest model of each type and power rating, by going over all models. We then proceed in steps; in the first step we construct an array of the identity and cost of the cheapest pair of type 1 and type 2 models for each combined power rating between 1 and 2m, by going over all pairs. In the ith step we combine the list of types 1..i combinations with the list of type i+1 objects into a single array by going over all pairs. In the end we have a list of the cheapest combination of parts for every possible power rating, in which we can binary-search whatever our budget is. All this takes O(kn + k²m²) operations.
Of course, in practice, if we know the budget in advance we can skip the last step and do only upwards\downwards traversal.
If the power rating need not be an integer, we might have to use a more sophisticated data structure to achieve better than O(n^k). -- Meni Rosenfeld (talk) 18:57, 23 July 2009 (UTC)[reply]
If power isn't an integer, and k is large, then this is the knapsack problem. If, as in the OP, n is large and k is small, then the problems seem unrelated. -- Meni Rosenfeld (talk) 19:23, 23 July 2009 (UTC)[reply]
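A minimal Python sketch of this table-building idea (hypothetical data; dicts stand in for the arrays, and part identities are omitted for brevity):

```python
# Fold in one part type at a time, remembering for each achievable combined
# power rating the cheapest total cost seen so far.

def cheapest_by_rating(parts_by_type):
    """parts_by_type: k lists of (cost, rating) pairs with integer ratings.
    Returns {combined rating: minimal total cost} over one part per type."""
    combined = {0: 0}  # empty selection: rating 0, cost 0
    for parts in parts_by_type:
        # cheapest model of this type at each individual rating
        cheapest = {}
        for cost, rating in parts:
            if rating not in cheapest or cost < cheapest[rating]:
                cheapest[rating] = cost
        # combine with everything built so far ("going over all pairs")
        nxt = {}
        for r1, c1 in combined.items():
            for r2, c2 in cheapest.items():
                r, c = r1 + r2, c1 + c2
                if r not in nxt or c < nxt[r]:
                    nxt[r] = c
        combined = nxt
    return combined

def best_rating(parts_by_type, budget):
    """Highest combined rating whose cheapest realization fits the budget."""
    table = cheapest_by_rating(parts_by_type)
    affordable = [r for r, c in table.items() if c <= budget]
    return max(affordable) if affordable else None

parts = [[(5, 10), (3, 6)],   # engines: (cost, rating) -- hypothetical numbers
         [(4, 8), (2, 5)],    # bodies
         [(3, 7), (1, 4)]]    # wheels
print(best_rating(parts, 10))  # -> 22
```

With a known budget one would instead scan the final table directly for the best affordable rating, as noted above.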
Yes you are correct. Quilby (talk) 21:38, 23 July 2009 (UTC)[reply]

I would like to thank everyone who answered my question. Quilby (talk) 21:38, 23 July 2009 (UTC)[reply]

Units

Consider the differential equation

In this differential equation, r is a density (of anything, I don't think it matters) and p is a probability. My question is how one can explain that the units in this differential equation are correct. I came across this while reading a paper in which the author derives this equation. On the LHS, it looks like the units should be 1/time, but the right-hand side appears unitless, so the equals sign doesn't make sense. Am I missing something? Even if non-dimensionalization was used, what was done? I mean, if all the terms on the RHS had p or something, one could say something about p being proportional to a constant (which would correct the units after division and redefining the variable t), but that isn't true anyway. Is it r? There are powers of r on the RHS. What could be the units of r? Can anyone shed some light on this? Thanks!-69.106.207.49 (talk) 07:19, 23 July 2009 (UTC)[reply]

I don't think there's anything here you're missing; maybe with some more context from the paper (or a link) we might be able to help better. Because the expressions "1 - r" and "1 - p" appear, r and p have to be unitless, making the right-hand side as a whole unitless; for the units to match, we have to make t unitless as well, which quite possibly is what is intended by the author (or possibly not). There also could be an implicit conversion factor in the right side, say, dividing all of it by one second or some such.
It looks like the right-hand side can be simplified? The last two terms can be combined. Eric. 76.21.115.76 (talk) 07:44, 23 July 2009 (UTC)[reply]
Yes, it can be simplified to , unless I've made a mistake. --COVIZAPIBETEFOKY (talk) 12:25, 23 July 2009 (UTC)[reply]
Haha, I shouldn't have jinxed it like that! . This next one is not really a simplification, but expanding the part in square brackets gives , which can't be factored nicely. --COVIZAPIBETEFOKY (talk) 12:31, 23 July 2009 (UTC)[reply]

Max mod principle problem

Hi, I'm working on a problem that has 2 cases. One I can do, and for the other I have no clue. This is a qual problem, so I have old solutions from two different people. But one person didn't do this case, and the other did something that makes no sense. So I guess I'm wondering if the problem is even correct. If it is, can you help me figure out how to do it?

Assume that f, g are holomorphic in the disk D = D(0, 1), continuous on the closure of D, and have no zeros in D. If |f(z)| = |g(z)| for |z| = 1, prove that f(z) = k g(z) in D, for some constant k of modulus 1.

Alright, so case 1 is that f(z) is never 0 on the boundary. The condition given implies g is never 0 there either. So, we can get |f(z)/g(z)| ≤ 1 and |g(z)/f(z)| ≤ 1 on D by the maximum modulus theorem, since |f| = |g| on the boundary. Putting those together gives |f(z)/g(z)| = 1 on D. But then the maximum occurs inside the disk, so f/g is constant on D. And clearly the constant has unit modulus.
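In symbols, the case-1 argument (my reconstruction, assuming the hypothesis is |f(z)| = |g(z)| on |z| = 1 and that f, g are nonvanishing on the closed disk):

```latex
% f/g and g/f are holomorphic on D and continuous on \overline{D},
% so the maximum modulus principle bounds them by their boundary values:
\left|\frac{f(z)}{g(z)}\right| \le \max_{|w|=1}\left|\frac{f(w)}{g(w)}\right| = 1
\qquad\text{and}\qquad
\left|\frac{g(z)}{f(z)}\right| \le \max_{|w|=1}\left|\frac{g(w)}{f(w)}\right| = 1
\quad\text{for } z \in D.
```

Together these force |f/g| ≡ 1 on D, so |f/g| attains its maximum at an interior point; by the maximum modulus principle f/g is then a constant k, and |k| = 1.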

Case 2: f(z) = 0 for at least one z in the boundary. I have no idea what to do here other than f(z) = 0 if and only if g(z) = 0. Any ideas? Thanks StatisticsMan (talk) 15:31, 23 July 2009 (UTC)[reply]

If f(z0)=0, consider f(z)/(z-z0) and g(z)/(z-z0). Bo Jacoby (talk) 18:22, 23 July 2009 (UTC).[reply]
Off the top of my head, I think you're saying to get rid of the zeros by dividing by (z - z_0), one for each zero of f (or g), and then use the previous case. Is this right? But, what if f is 0 an infinite number of times around the boundary? Or, if f and g are both zero but when getting rid of the zero by this method, maybe the new f and g satisfy f(1) = 100 and g(1) = 50... so they no longer have equal magnitude around the boundary and case 1 doesn't apply? This is part of my problem. Since f and g are not holomorphic on the boundary, do I know anything about the zeros? StatisticsMan (talk) 19:28, 23 July 2009 (UTC)[reply]
is not even necessarily continuous - for example, for an appropriate branch. -- Meni Rosenfeld (talk) 20:21, 23 July 2009 (UTC)[reply]

Multiple regression v. neural nets

Were neural nets just a fad? I would like to try predicting or forecasting the real-estate market using several time series of economic data. The dependent variable would be a time series of real-estate average prices. The independent variables would be time series of things like unemployment, inflation, a stock market index, and so on, plus a lagged real-estate average price time series. Which would be the better methodology to use? And does anyone have an opinion on which of the various specific types of the two methods would be best for what I want to do? Would using independent variables which were correlated make any difference to the choice? I would also like to try using lagged real-estate average price time series from other geographic areas. Thanks. 89.240.217.9 (talk) 21:35, 23 July 2009 (UTC)[reply]
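Not a verdict on regression vs. neural nets, but as a concrete starting point, here is a pure-Python sketch of the simplest version of the lagged-price idea: regressing the series on its own previous value by ordinary least squares (the data below are synthetic, not real prices, and a real model would of course add the other economic series as regressors):

```python
# Fit y[t] = a + b * y[t-1] by closed-form ordinary least squares.
# This is the one-lag, one-variable toy case of the setup described above.

def lagged_ols(series):
    """Closed-form OLS fit of a series on its own one-step lag."""
    x = series[:-1]   # lagged values (independent variable)
    y = series[1:]    # current values (dependent variable)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    b = cov / var     # slope
    a = my - b * mx   # intercept
    return a, b

# Synthetic "average price" series obeying y[t] = 10 + 0.9 * y[t-1] exactly,
# so the fit should recover intercept ~10 and slope ~0.9.
prices = [50.0]
for _ in range(50):
    prices.append(10 + 0.9 * prices[-1])

a, b = lagged_ols(prices)
print(round(a, 3), round(b, 3))
```

With multiple (possibly correlated) regressors one would solve the full normal equations instead; correlated independent variables don't bias OLS predictions, but they do inflate the variance of individual coefficient estimates (multicollinearity), which matters if you want to interpret them.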