
Wikipedia:Reference desk/Archives/Mathematics/2008 December 29

Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


December 29


Jet interpolation, or what?


Consider the following interpolation problem: find the minimal degree polynomial p having prescribed derivatives p^(j)(z_i) = c_{i,j} at r prescribed complex points z_1, ..., z_r, for 0 ≤ j ≤ k_i. In other words, one looks for the polynomial with prescribed k_i-jets at each point z_i. Thus it generalizes both the Taylor expansion and the Lagrange interpolation. What is the name of this interpolation problem and of the corresponding interpolation polynomial? I thought it was Hermite's, but it seems that the Hermite interpolation problem is limited to prescribing only derivatives up to the first order. By the way, the problem can be equivalently posed as a system of r simultaneous congruences p ≡ p_i mod (z − z_i)^(k_i+1), and in fact it has a unique solution with deg p < ∑_i (k_i + 1) by the Chinese remainder theorem in C[z], an application that could be mentioned in that article --I would do it myself, if only I knew what I'm talking about :) --PMajer (talk) 17:18, 28 December 2008 (UTC)[reply]

Are you sure you are not talking about the general Hermite interpolation? Nodal values are given for all derivatives up to order k, giving a polynomial of degree r(k+1)−1. --Shahab (talk) 18:06, 28 December 2008 (UTC)[reply]
Thank you Shahab! So I was missing the "general". We don't need to ask for the same number of derivatives at each point, of course. What is the basis of polynomials, analogous to Lagrange's ℓ_i? Do you have an on-line reference? (I'm isolated at the moment!)--PMajer (talk) 18:37, 28 December 2008 (UTC)[reply]
Surprisingly, I couldn't find an online reference. This is the best I got. Maybe it would be best if you looked it up in a good numerical analysis book. Note that it is possible to set up specialized Hermite interpolation polynomials which do not include all functional and/or derivative values at all nodes. There may be some missing functional or derivative values at certain nodes, which I think is your case. This lowers the degree of the interpolating polynomial. Cheers--Shahab (talk) 15:28, 29 December 2008 (UTC)[reply]

MY DOUBTS

1.

There are three brokers (X, Y, Z) who are involved in stock market trading. X follows the tactic of buying shares at the opening and selling them at the closing. Y follows the tactic of buying an equal number of shares every hour, and Z follows the tactic of dividing his total sum into equal amounts and buying shares for one amount every hour (EXAMPLE: at 11am he will divide his current amount and buy shares). Trading starts at 10am and closes at 3pm (Y and Z buy shares every hour). All the shares bought are sold at the close of the day, i.e. 3pm. NOTE: all three start the day with an equal amount of money.


My questions: a. On a day when the prices of the shares are increasing linearly, who will have the maximum return and who will have the minimum return (maximum loss)? Please give me the explanation.

b. On a day when the prices of the shares are fluctuating (going up and coming down), who is on the safer side, i.e. who has the chance of getting maximum returns, and why?

c. Out of the above-mentioned tactics, which involves minimum risk?


2.

In a problem, a function f(x) = ax^2 + bx + c is given. There is a relation given as f(2) = f(5), and f(0) = 5. It's mentioned that the constant a is not equal to 0 (a != 0).

a. Is it possible to find the values of all three constants (a, b, c) using the above details?

Please help me.

Thank you... —Preceding unsigned comment added by 220.227.68.5 (talk) 13:06, 29 December 2008 (UTC)[reply]

1) I didn't really follow your descriptions of methods Y and Z, but you may be talking about dollar cost averaging, which is a good strategy in the long run.
2) Based on f(0) = 0 we get that c = 0. From f(2) = f(5) we get a(2)^2 + b(2) + 0 = a(5)^2 + b(5) + 0. This reduces to 4a + 2b = 25a + 5b. This becomes -21a = 3b, which simplifies to -7a = b. So, it looks like any equation where the b value is -7 times the a value, and c = 0, will satisfy those conditions. StuRat (talk) 13:27, 29 December 2008 (UTC)[reply]
Correction: I used f(0) = 0, when you said f(0) = 5. That changes c from 0 to 5, but otherwise my answer is still correct. StuRat (talk) 21:17, 29 December 2008 (UTC)[reply]
Your questions look very much like homework - have you actually tried to solve these yourself? -mattbuck (Talk) 13:33, 29 December 2008 (UTC)[reply]
It didn't look like homework to me, especially the first part, because it would have been more clearly stated if it was. StuRat (talk) 13:39, 29 December 2008 (UTC)[reply]

If ƒ(2) = ƒ(5) then the axis of the parabola is half-way between 2 and 5, i.e. at 7/2 = 3.5. And since ƒ(0) = c, you need c = 5. So

ƒ(x) = a(x − 7/2)^2 + 5 − 49a/4.

The coefficient a can be any number except 0 and it will satisfy the conditions you gave. All pretty routine. Michael Hardy (talk) 18:24, 29 December 2008 (UTC)[reply]

Summing 1's and 2's


In how many ways can I sum 1's and 2's to a given number, while only using an amount of 1's and 2's divisible by two? I see that without the divisibility restriction I would have f(n) = f(n-1) + f(n-2) ways of doing it, i.e. some Fibonacci number, as the series begins with f(1) = 1 and f(2) = 2, but now I have no idea. --88.194.224.35 (talk) 13:35, 29 December 2008 (UTC)[reply]

A useful strategy for this sort of thing is to work out the first few values (up to 10, say) by hand and (either notice that the pattern is now obvious or) look it up on Sloane's. Algebraist 13:57, 29 December 2008 (UTC)[reply]

Nice link you have. Assuming my code is correct:

#include <stdio.h>

/* Target sum, read from the command line. */
int g;

/* Count the ways to reach sum g from partial sum s, having already
   used n terms; a completion counts only if the total number of
   terms used is even. */
int f(int n, int s)
{
    if (s > g) {
        return 0;
    } else if (s == g) {
        return (n % 2) ? 0 : 1;   /* accept only an even number of terms */
    } else {
        return f(n+1, s+1) + f(n+1, s+2);   /* append a 1 or a 2 */
    }
}

int main(int argc, char **argv)
{
    if (argc < 2 || sscanf(argv[1], "%d", &g) != 1)
        return 1;
    printf("%d\n", f(0, 0));
    return 0;
}

...the relevant entry seems to be A094686. Thanks! --88.194.224.35 (talk) 14:55, 29 December 2008 (UTC)[reply]

acromagaly chances


What is the chance of 2 ex-brothers-in-law developing acromegaly or the same type of pituitary tumor at the same time? These men live in different cities and were married to twin sisters, who are very close. The chance of developing acromegaly is 1 in 20,000+. Could foul play be involved? —Preceding unsigned comment added by CAElick (talkcontribs) 14:50, 29 December 2008 (UTC)[reply]

How many different ways do you have to spell the word in question? 3 so far, in this question and in the near-identical one you asked on 26 December. Did you trouble to read the answers given then? →86.132.165.199 (talk) 16:42, 29 December 2008 (UTC)[reply]

CAElik, you gave the answer: it's 1/20,000 for each of them. Do you think that having married twin sisters is relevant in some way? --PMajer (talk) 09:45, 31 December 2008 (UTC)[reply]

on analytical geometry


What's the difference (in both the formulae and the concept) between the perimeter of a rectangle in 2D geometry and the surface area of a rectangular box in 3D geometry? —Preceding unsigned comment added by 61.2.228.116 (talk) 14:55, 29 December 2008 (UTC)[reply]

The difference between 2(a+b) and 2(ab+bc+ca) is fairly obvious. As concepts, one gives a length and is linear, one gives an area and is quadratic.→86.132.165.199 (talk) 16:49, 29 December 2008 (UTC)[reply]

inverting pixels with convolution matrix


I'm a math dummy trying to use PHP's imageconvolution function to modify some images on the fly (it is much, much faster than trying to manipulate the pixels manually in PHP). I'd like to invert the image (if a pixel has a brightness of 255, then it should be 0; if it has 0, it should be 255, and so on)—what convolution matrix/offset/etc. should I apply? This seems like it should be fairly obvious but I'm struggling, and Google has really been of no help. I thought I'd ask here since this seems like a mathy question more than a computery question (it doesn't require knowledge of the particular language, or any language, to answer—it requires knowing which matrix to apply to get the desired results). --98.217.8.46 (talk) 21:49, 29 December 2008 (UTC)[reply]

Nevermind, figured it out on my own... do a matrix of all 0s with -1 in the center, have a divisor of 1 and an offset of 255... sigh... --98.217.8.46 (talk) 22:20, 29 December 2008 (UTC)[reply]

Composition of polynomials is associative


This seemed as if it should be a triviality, but it appears there's actually a little more going on than I thought at first. Taking the usual definition for polynomials in algebra, as sequences with finitely many nonzero terms and addition and multiplication defined as usual, one can form composite polynomials: if

p = a_0 + a_1 X + ... + a_m X^m

and

q = b_0 + b_1 X + ... + b_n X^n,

then

p ∘ q = a_0 + a_1 q + ... + a_m q^m

is well-defined. How then to show that for polynomials one has

(p ∘ q) ∘ r = p ∘ (q ∘ r)?

If one attempts to verify this relation directly in terms of coefficients one quickly runs into a combinatorial explosion. On the other hand, composition of functions in general is associative, so can't we just get away with saying

(f ∘ g) ∘ h = f ∘ (g ∘ h)

and that's that? The problem seems to be that our 'functions' are actually sequences and one can't compose sequences directly.

If, however, we choose a nice infinite field then we can conclude that two polynomials are equal iff their evaluation maps are equal, and these can be composed. Then we do get the result from the general associativity of functions.

Clearly my algebra is lacking, but can it really be the case that this is what's needed? It seems very strange!  — merge 22:47, 29 December 2008 (UTC)[reply]

If polynomials can be put into one-to-one correspondence with polynomial functions in such a way that composition of either corresponds to composition of the other, then the fact that composition of functions is associative does the job. That certainly works if the coefficients are real, or if they're complex. I never thought about this one if the coefficients are in rings in general. It seems to me that maybe failure of the "one-to-one"ness mentioned above might be the difficulty that prevents it from being that simple in general. Just thinking out loud—no actual answer yet........ Michael Hardy (talk) 00:52, 30 December 2008 (UTC)[reply]
Consider the polynomials as maps from R[X] into itself, and use the associativity of these maps at X. 24.8.62.102 (talk) 01:45, 30 December 2008 (UTC)[reply]
I'd begun to think along these lines. It seems almost too easy, but I guess this is the right way! If so this question has given me a new appreciation for the non-obvious relationships that can hide behind the seemingly simple property of associativity, and the power of the apparently trivial generic result for composition of functions. I also find it a bit odd that none of the algebra books I have to hand discuss this.  — merge 03:41, 30 December 2008 (UTC)[reply]

I agree with you, merge, I had the same feelings in very similar circumstances. There are operations and related properties that are clear and obvious for polynomials when you can see them as functions; but as soon as you get out of the realm of functions, they require quite a long work of technical and somewhat tedious verifications; and if you are good and do it, you still remain with the doubt that maybe the work was unnecessary for some reason: that's not fair... For instance, take the ring of finite order formal series; you have there a sequence of operations: sum, product, composition, inverse, reciprocal, derivative, residue; to define each of them and to prove the whole list of relations between them is a pain. Of course, one can reduce himself in some ways to polynomials, but this sounds somehow artificial, doesn't it? I think it is a useful exercise to write the formal proofs, for you are forced to invent a clean method of treating indices in sums and products. In your case, also, one thing is to use the linearity of the composition in the first argument, which reduces some of the polynomials to be treated to monomials. --PMajer (talk) 11:04, 30 December 2008 (UTC)[reply]

Interesting thoughts! I'm of two minds about this. On the one hand I can see the point of view that it's a good exercise to develop the properties of polynomials purely. On the other, the lazy bastard and the pragmatist in me says that if we can get the results more simply, why not do it? More subtly, if we can do this, doesn't this tell us something important about our mathematics? For instance, since polynomials-as-sequences are something we've abstracted from polynomials-as-functions to imitate them, isn't it a bit silly in some sense to force ourselves to prove functional properties using the sequence representation instead of falling back on the function representation whenever that's possible? And where does 24.8.62.102's method fit into this picture? It seems like a perfectly marvelous bit of trickery to me. Does it work for other things?
As it turns out I ran into this situation not through algebra but via complex analysis, in trying to work out the properties of formal power series just as you mention, which happen to be developed better here than in any algebra book I've seen. Lang's main point in that section is that operations on formal power series can be reduced to operations on polynomials, so he at least seems to view that particular reduction as the right way to go (and I have a healthy respect for his algebra cred). In the bit there where he treats composition of power series he reduces it to the polynomial case and then dismisses it, saying that it then follows from 'the ordinary theory of polynomials'—and thinking about that was what led to this question.  — merge 12:58, 30 December 2008 (UTC)[reply]

Plain computing:

What is the problem, gentlemen? Bo Jacoby (talk) 14:54, 30 December 2008 (UTC).[reply]

That was a lot of TeX just to write (f ∘ g) ∘ h = f ∘ (g ∘ h).  ;)  — merge 15:35, 30 December 2008 (UTC)[reply]
Yes, a lot of TeX, but no combinatorial explosion!  :) . Bo Jacoby (talk) 13:23, 31 December 2008 (UTC).[reply]

I used to expand everything, writing the formula for the coefficient of X^n, which is in any case important and has a combinatorial interpretation; then it is not that difficult to check the associativity; but this left me staring at the screen for a good while! --PMajer (talk) 15:52, 30 December 2008 (UTC) Anyway the question you raised, merge, is very meaningful. Examples where it works... In fact certain functional representations are incredibly powerful tools, making apparently difficult problems become ridiculous when translated. I'm thinking of Gelfand duality, for instance. Or of the representation of self-adjoint operators in Hilbert spaces by means of multiplication operators in L^2. Or the functional representation used in the theory of exchangeable random variables in probability... --PMajer (talk) 22:32, 30 December 2008 (UTC)[reply]

Wow, I confess it had not occurred to me to connect this situation to the Gelfand transform.  :) But I see your point. The situation is a bit backwards in this case since polynomials started as functions and then we created the difficulties ourselves by turning them into sequences. Interesting insight in any case—thank you!  — merge 12:54, 31 December 2008 (UTC)[reply]
Yes, my imagination ran off and I had to wait for it to come back :( But it is true that almost everything is represented by some set of functions! cheers --PMajer (talk) 14:40, 31 December 2008 (UTC)[reply]