
Wikipedia:Reference desk/Mathematics: Difference between revisions

From Wikipedia, the free encyclopedia

Welcome to the mathematics section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


September 28

Polynomials and symmetries

For each of the point groups in three dimensions, can I assume that there exists a polynomial in x,y,z with that symmetry (and none higher)? How about three independent ones? —Tamfang (talk) 02:12, 28 September 2010 (UTC)[reply]

On second thought, my purpose would be satisfied by two or three functions each of which has higher symmetry, but whose combination as a 'vector' does not. —Tamfang (talk) 02:16, 28 September 2010 (UTC)[reply]

There are many interesting algebraic surfaces which have the same symmetries as polyhedra; see, for example, [1]. A few more can be found at my own site [2].--Salix (talk): 09:37, 28 September 2010 (UTC)[reply]
Thank you, but I'm more interested in the chiral groups and S_n. —Tamfang (talk) 23:48, 4 October 2010 (UTC)[reply]

Generalization of cut and choose

What should be the rules of a game involving n people being allowed to cut and/or choose pieces of a cake, such that the optimal strategy for each person leads to the cake being divided equally? Count Iblis (talk) 03:51, 28 September 2010 (UTC)[reply]

Good question. I don't know the answer, but here is a thought-starter. All participants sit in a circle with the cake in the middle. In turn, each person cuts a segment out of the cake; then hands the segment to the person on the left; and then hands the knife to the person on the right. What are the pros and cons of that set of rules? Dolphin (t) 04:03, 28 September 2010 (UTC)[reply]
If one person does all the cutting and then gets to choose last, their optimal strategy is to cut the cake as evenly as possible regardless of the number of people. Rckrone (talk) 04:15, 28 September 2010 (UTC)[reply]
But my notion of equal shares might be wildly different from yours, if the cake is not homogeneous; so only the cutter and the first chooser are assured of fair shares. —Tamfang (talk) 06:38, 28 September 2010 (UTC)[reply]
The Divide and choose article doesn't mention it, but I seem to recall reading something over a decade(?) ago (in Scientific American, perhaps? - or maybe it was Discover Magazine) about a generalization of the process. All I recall is that with more than two people, the algorithm becomes unsuitable for practice, necessitating many cuts and recuts of the slices. - Also note that the Wikipedia article mentions "The divide and choose method does not guarantee each person gets exactly half the cake by their own valuations", simply that no one will get the "worst" piece by their own evaluation. (this is for the general case where all pieces of the cake are not identical - half chocolate/vanilla, frosting flowers, etc.) -- 174.31.192.131 (talk) 04:22, 28 September 2010 (UTC)[reply]
There are some methods described at Fair division#Many players. In particular, perhaps something described at Proportional (fair division)#Many players is what you're looking for. —Bkell (talk) 04:59, 28 September 2010 (UTC)[reply]
Possibly relevant: http://www.math.hmc.edu/~su/fairdivision/ —Tamfang (talk) 06:36, 28 September 2010 (UTC)[reply]
Cutting cake strategies is serious business, see e.g. [3]. Most of the papers seem to cite Evans and Paz [4] for the best upper bound.—Emil J. 11:54, 28 September 2010 (UTC)[reply]


Thanks everyone! I had heard about a complicated solution when there are more than 2 players a long time ago, but I was unable to find more about this (apparently because you need to search for "fair division"). Count Iblis (talk) 18:29, 29 September 2010 (UTC)[reply]

Evaluating the sum of the sums of geometric progression

?
?--Wikinv (talk) 05:08, 28 September 2010 (UTC)[reply]

Straightforward:
Bo Jacoby (talk) 06:55, 28 September 2010 (UTC).[reply]

Decomposing a standard normal CDF

Hi, a simple question (not homework): I have a standard normal CDF evaluated at some point: $\Phi(a+b+c)$. Is there any way to decompose this into $f(a,c)+g(b,c)$, presumably where the first part contains only some transformation of a and c and the second part only b and c? a, b and c are just numbers. (Maybe I should rather formulate this as "are there any functions f and g so that $\Phi(a+b+c)=f(a,c)+g(b,c)$?".) I feel this should be possible (but it probably isn't, couldn't figure it out by myself). Jørgen (talk) 07:43, 28 September 2010 (UTC)[reply]

This is impossible. Differentiate both sides of $\Phi(a+b+c)=f(a,c)+g(b,c)$ wrt a. You get $\varphi(a+b+c)=\frac{\partial f}{\partial a}(a,c)$. Thus $\varphi(a+b+c)$ does not depend on b, which is a contradiction. -- Meni Rosenfeld (talk) 10:00, 28 September 2010 (UTC)[reply]
Good point. Thanks! (What about multiplicative decomposition? Is that possible?) Jørgen (talk) 10:47, 28 September 2010 (UTC)[reply]
No, by the same argument, with first taking logs. -- Meni Rosenfeld (talk) 11:37, 28 September 2010 (UTC)[reply]

N throws of an unbalanced coin

I have an unbalanced coin that has a probability p of coming up heads when thrown. How do I calculate the probability of getting m heads when throwing it n times? Would it just be 1-(p^(n-m)) ? Not a homework question. Thanks. 92.28.249.130 (talk) 13:22, 28 September 2010 (UTC)[reply]

The answer is $\binom{n}{m}p^m(1-p)^{n-m}$. See Binomial distribution. -- Meni Rosenfeld (talk) 13:31, 28 September 2010 (UTC)[reply]
Simple check of your original idea: the probability of getting 1 head with 1 throw (i.e. m = n = 1) is, by definition, p. But 1 − (p^(1−1)) = 1 − 1 = 0. Gandalf61 (talk) 13:53, 28 September 2010 (UTC)[reply]

Thanks. Is there an online calculator for this anywhere please? 92.29.114.118 (talk) 20:33, 4 October 2010 (UTC)[reply]

I imagine you can use Wolfram Alpha if you know the relevant Mathematica syntax. (I don't.) —Tamfang (talk) 23:38, 4 October 2010 (UTC)[reply]
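
In lieu of a dedicated online calculator, the formula above is a one-liner in Python; this is just a sketch with made-up example numbers.

```python
# Probability of exactly m heads in n throws of a coin with P(heads) = p,
# i.e. the binomial probability C(n, m) * p^m * (1-p)^(n-m).
from math import comb

def prob_m_heads(n, m, p):
    return comb(n, m) * p**m * (1 - p)**(n - m)

print(prob_m_heads(10, 3, 0.4))  # example: 10 throws, 3 heads, p = 0.4 -> about 0.215
```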

Topology

Show that the collection A_Q = {R, Ø} ∪ {(q, ∞) : q is a rational number} does not form a topology on R. —Preceding unsigned comment added by 86.108.51.220 (talk) 16:49, 28 September 2010 (UTC)[reply]

Note that the Reference Desk will not do your homework for you. Check the axioms of topological spaces: does your collection contain the empty set and the full set? It obviously does. Is it closed under finite intersections? Is it closed under arbitrary unions?—Emil J. 17:08, 28 September 2010 (UTC)[reply]

An arbitrary union of open sets in a topological space is open. So look at the union of all intervals (Q, ∞) such that Q > √2. That's not a set of the form (Q, ∞) where Q is rational. Michael Hardy (talk) 03:26, 29 September 2010 (UTC)[reply]

Horizontal acceleration

Okay a child pulls an 11 kg wagon with a horizontal handle whose mass is 1.8 kg, giving the wagon and handle an acceleration of 2.3 m/s^2. What is the magnitude of the other horizontal forces acting on the child, besides the tension in the handle? Assume that the child moves along with the wagon.

I calculated the tension on both ends of the handle to have magnitudes of 29.4 N (the end at his hand) and 25.3 N (the end attached to the wagon). What other horizontal forces should I be considering? 209.6.54.248 (talk) 22:51, 28 September 2010 (UTC)[reply]

I agree that your figures of 25.3 N and 29.4 N are correct. To calculate the resultant force on the child you need to know the mass of the child. When you know the resultant force on the child you can determine the horizontal force between the child's shoes and the ground. Dolphin (t) 22:58, 28 September 2010 (UTC)[reply]
OP here, the mass of the child isn't given in the problem--I'm tempted to think that I can simply add 25.3 N and 29.4 N and divide by the acceleration 2.3 m/s^2 but that can't be right because there are additional forces which would be on the other side of the equation. Is there something that I'm missing?209.6.54.248 (talk) 00:49, 29 September 2010 (UTC)[reply]
Without knowing the mass of the child your only option is to give a qualitative answer to the question What is the other horizontal force acting on the child? There is a horizontal force between the ground and the child's shoes. Seeing the child is accelerating at 2.3 m.s-2, the resultant horizontal force F on the child is 2.3 times the mass of the child. So there is a horizontal force between the child and the ground of F plus 29.4 N. Dolphin (t) 01:31, 29 September 2010 (UTC)[reply]
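
For anyone checking the arithmetic, here is a short sketch of the numbers above; the child's mass is left as a parameter because the problem doesn't give it, and the 30 kg in the example is made up.

```python
a = 2.3           # acceleration of wagon, handle and child, m/s^2
m_wagon = 11.0    # kg
m_handle = 1.8    # kg

tension_at_wagon = m_wagon * a               # ~25.3 N, accelerates the wagon alone
tension_at_hand = (m_wagon + m_handle) * a   # ~29.4 N, accelerates wagon plus handle

def ground_force_on_child(m_child):
    # The ground must supply the child's own m*a plus the 29.4 N the handle pulls back with.
    return m_child * a + tension_at_hand

print(tension_at_wagon, tension_at_hand, ground_force_on_child(30.0))
```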


September 29

Proving converse of the Euclid's Lemma

The statement I'm trying to prove is that if p | ab and p | a or p | b, then p is prime. I know I'm supposed to show that if there exists a divisor which divides p, that divisor is either p or 1. I'm wondering how I should go about proving this? —Preceding unsigned comment added by 142.244.143.233 (talk) 00:04, 29 September 2010 (UTC)[reply]

Well typically that's the definition of prime, but I guess you must be starting from some other definition. Just as a clarification, note that it should be "if for any a and b, p | ab implies p | a or p | b, then p is prime." What you wrote doesn't quite mean that. Anyway, one trick you can use if you are stuck on how to prove something is to try proving the contrapositive. Try assuming that p is not prime, and then show that there exist a and b such that p | ab but p does not divide a or b. Rckrone (talk) 01:29, 29 September 2010 (UTC)[reply]
I still seem to be getting nowhere with this proof, is there some kind of contradiction I have to use here? 142.244.143.233 (talk) 02:01, 29 September 2010 (UTC)[reply]
Actually nvm, I think I got it. 142.244.143.233 (talk) 02:09, 29 September 2010 (UTC)[reply]
What definition of prime are you using? --COVIZAPIBETEFOKY (talk) 02:08, 29 September 2010 (UTC)[reply]
Using the definition learned in elementary school, an integer that is greater than 1 that is only divisible by 1 and itself. 142.244.143.233 (talk) 02:09, 29 September 2010 (UTC)[reply]
You say you got it above, but for the benefit of anyone else for whom this isn't obvious: take Rckrone's advice, and remember what it means for a number to not be a prime. Simply apply the definitions, and it works out naturally. --COVIZAPIBETEFOKY (talk) 02:55, 29 September 2010 (UTC)[reply]
Also note that the theorem is false for p=1. Where does the proof hit a snag in that case? --COVIZAPIBETEFOKY (talk) 02:58, 29 September 2010 (UTC)[reply]

Euclid's lemma says that if p is prime and p|ab then p|a or p|b. A "converse" would simply reverse "if" and "then", so it would say if p|a or p|b then p is prime and p|ab. But that is clearly false. Your proposed statement, that "if p | ab and p | a or p | b, then p is prime", is not a converse, and is also clearly false. For example, 6|4×12, and 6|4 or 6|12. But 6 is not prime. Michael Hardy (talk) 03:15, 29 September 2010 (UTC)[reply]

Rckrone addressed this issue above; the op left out a universal quantifier. Euclid's lemma can be restated equivalently as "If p is prime, then (for any a, b, if p|ab then p|a or p|b)". This is the statement that we are taking the converse of. --COVIZAPIBETEFOKY (talk) 03:34, 29 September 2010 (UTC)[reply]
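
To see the contrapositive concretely, a tiny search (a sketch, not part of any proof) produces, for a composite p, the witnesses a and b that Rckrone's hint asks for: complementary factors of p.

```python
def witness(p):
    """For composite p, return (a, b) with p | a*b but p dividing neither a nor b."""
    for r in range(2, p):
        if p % r == 0:
            a, b = r, p // r          # a*b = p, and both factors are strictly between 1 and p
            assert (a * b) % p == 0 and a % p != 0 and b % p != 0
            return a, b
    return None                        # no witness: p is prime (or p <= 1)

print(witness(6))   # (2, 3): 6 divides 2*3 but neither 2 nor 3
print(witness(7))   # None, since 7 is prime
```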

Linear Algebra

Question from my quantum physics class (I'm learning the math as I go, so I need some help!): An operator A is defined by A = i(I + U)(I - U)^-1, where U is unitary and I is the identity operator. Show that if U does not have the eigenvalue 1, then A is hermitian.

Well, I managed to show that the hermitian conjugate of A is i(I - U)^-1(I + U), so I guess all that's left is to show that U not having an eigenvalue of 1 implies that (I - U)^-1(I + U) = (I + U)(I - U)^-1, but I don't know how to do this. A push in the right direction would be appreciated! 74.15.136.172 (talk) 01:00, 29 September 2010 (UTC)[reply]

Also, if U doesn't have an eigenvalue 1, then doesn't that imply that det(U - I) ≠ 0? But if I - U is invertible, then don't we already know that det(U - I) ≠ 0, and hence that U doesn't have an eigenvalue 1? 74.15.136.172 (talk) 01:14, 29 September 2010 (UTC)[reply]

You wrote A = i(I + U)(I − U)^-1.
Are you sure it didn't say A = i(I + U)(I − U^-1)? Michael Hardy (talk) 03:18, 29 September 2010 (UTC)[reply]
PS: If it did say (I − U^-1) rather than (I − U)^-1, then it's easy. Michael Hardy (talk) 03:23, 29 September 2010 (UTC)[reply]

Yeah I'm sure, but it wouldn't be the first time my prof made a mistake. Is there any reason to think that the question isn't soluble if A = i(I + U)(I − U)^-1? 74.15.136.172 (talk) 03:25, 29 September 2010 (UTC)[reply]

PS: If my prof did make an error, then it seems that the solution doesn't require that U not have an eigenvalue equal to 1. Did you get the same thing? 74.15.136.172 (talk) 03:30, 29 September 2010 (UTC)[reply]

Okay I think I got it; that (I - U)^-1(I + U) = (I + U)(I - U)^-1 can be proved by simply distributing (I - U)^-1 into the other bracket and then combining things into a common inverse, for both sides of the equation. I'm too lazy to type the algebra, but does this seem right? 74.15.136.172 (talk) 03:47, 29 September 2010 (UTC)[reply]

I'm not sure what you mean by "distributing (I - U)^-1 into the other bracket", but what I'd do is to multiply from the left and from the right by (I - U). -- Meni Rosenfeld (talk) 06:47, 29 September 2010 (UTC)[reply]

Well I'm wondering how anyone misses the following: the expression

(IU)(IU)−1

is the product of a matrix and its inverse matrix, and clearly that is the identity matrix:

(IU)(IU)−1 = I.

Michael Hardy (talk) 17:32, 29 September 2010 (UTC)[reply]

Oh. One of them has a plus sign. Never mind..... Michael Hardy (talk) 17:33, 29 September 2010 (UTC)[reply]
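
As a numerical sanity check (a sketch, not a proof), one can build a random unitary U and confirm that A = i(I + U)(I − U)^-1 comes out hermitian; a random U almost surely has no eigenvalue exactly equal to 1, so I − U is invertible.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U, _ = np.linalg.qr(Z)                     # QR of a complex Gaussian matrix gives a unitary Q

I = np.eye(n)
A = 1j * (I + U) @ np.linalg.inv(I - U)    # defined because 1 is (almost surely) not an eigenvalue of U

print(np.allclose(A, A.conj().T))          # True: A equals its hermitian conjugate
```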

z=cos2x+isin2x

I'm having difficulty with this question. If $z = \cos 2x + i\sin 2x$, show that $\frac{1+z}{1-z} = i\cot x$.






I guess now I have to show that $\frac{\cos x(\cos x + i\sin x)}{\sin x(\sin x - i\cos x)} = i\cot x$, but I have so far not been able to do so. MrMahn (talk) 22:26, 29 September 2010 (UTC)[reply]

You might want to consider the formulae linking $\sin 2x$ and $\cos 2x$ to $\sin x$ and $\cos x$. --81.153.109.200 (talk) 22:33, 29 September 2010 (UTC)[reply]
The key is that $\cos 2x = \cos^2 x - \sin^2 x = 1 - 2\sin^2 x$. Use the first of those on the top, the second on the bottom, replace $\sin 2x$ with $2\sin x\cos x$ and simplify.--JohnBlackburnewordsdeeds 22:38, 29 September 2010 (UTC)[reply]


Hmm... what now? I can't really take a factor of i out of the expression on the numerator.MrMahn (talk) 23:00, 29 September 2010 (UTC)[reply]
You're almost there. i(sin x - i cos x) = (cos x + i sin x). Combining the last line above with your earlier expansion of the top gives
My earlier suggestion about the formulae for $\sin 2x$ and $\cos 2x$ was a bit misleading I think: you only need to use the double angle formulae on the bottom of the fraction.--JohnBlackburnewordsdeeds 23:23, 29 September 2010 (UTC)[reply]

Put z = exp(2 i x) and multiply numerator and denominator by exp(-i x) Count Iblis (talk) 23:03, 29 September 2010 (UTC)[reply]

Count Iblis (talk) 23:41, 29 September 2010 (UTC)[reply]

I wonder if the OP might benefit from remembering how to divide one complex number by another:

$\frac{a+bi}{c+di} = \frac{(a+bi)(c-di)}{(c+di)(c-di)} = \frac{(a+bi)(c-di)}{c^2+d^2},$
etc. Michael Hardy (talk) 23:53, 30 September 2010 (UTC)[reply]

Knowledge of Euler's formula or unwieldy trigonometric identities is not required. Let w = cis x so that w^2 = z (by de Moivre's theorem). Then

$\frac{1+z}{1-z} = \frac{1+w^2}{1-w^2} = \frac{w^{-1}+w}{w^{-1}-w} = \frac{2\cos x}{-2i\sin x} = i\cot x.$
This is in essence what Count Iblis said, but perhaps it's easier to follow if you don't know about eix. —Anonymous DissidentTalk 04:45, 1 October 2010 (UTC)[reply]
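
Assuming the identity in question is (1+z)/(1−z) = i cot x, as reconstructed above (the original formula images are missing), a computer algebra system can at least confirm it; this is only a verification sketch, not the derivation the exercise asks for.

```python
import sympy as sp

x = sp.symbols('x', real=True)
z = sp.cos(2*x) + sp.I*sp.sin(2*x)
expr = (1 + z) / (1 - z)

# Rewrite everything in exponentials and simplify; the difference should reduce to 0.
print(sp.simplify((expr - sp.I*sp.cot(x)).rewrite(sp.exp)))                    # expected: 0
print([sp.N(expr.subs(x, v) - sp.I*sp.cot(v), 10) for v in (0.3, 0.7, 1.1)])   # numeric spot checks, ~0
```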

Implicit differentiation and polynomial division revisited

Hey, me from a few days ago. I have two completely unrelated questions. 1) When you differentiate both sides of a relation implicitly, for example 4x^4+y^2=3, what do you do with the y, if you're only differentiating with respect to x? and 2) When you divide polynomial P(x) by f(x) there is a theorem that the remainder is P of the roots of f(x). But what if f(x) has more than 1 root, i.e., has a degree greater than 1? That would imply that the remainder is P of the product of the roots, but there would be a leftover x term that would not show up there. How can I find it? Thanks. 24.92.78.167 (talk) 23:39, 29 September 2010 (UTC)[reply]

1) Differentiating y with respect to x gives you dy/dx, as you might expect. So in your example differentiating the y^2 term with respect to x gives you 2y(dy/dx) using the chain rule.
2) A polynomial P(x) divided by a linear term will always give you a remainder that is just a number, but when you take the remainder by a higher degree polynomial, that won't generally be the case. If f(x) has degree n, then the remainder of P(x) will generally be a polynomial g(x) of degree n-1 (it could be less though). If r is a root of f(x) then g(r) will still be equal to P(r), but that definitely looks most exciting when g is just a constant.
It's easy to see why it works though. Saying g(x) is the remainder of P(x) divided by f(x) means that P(x) = q(x)f(x) + g(x) for some polynomial q(x). If we plug in r, then P(r) = q(r)f(r) + g(r), but f(r) = 0, so then P(r) = g(r). Rckrone (talk) 01:37, 30 September 2010 (UTC)[reply]
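
Both points can be checked with a computer algebra system; here is a small sympy sketch, using the questioner's relation for part 1 and an arbitrarily chosen P(x) and f(x) for part 2.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)

# Part 1: implicit differentiation of 4x^4 + y^2 = 3 with respect to x.
deriv = sp.diff(4*x**4 + y**2 - 3, x)                 # 16x^3 + 2y*y'
dydx = sp.solve(deriv, sp.Derivative(y, x))[0]
print(dydx)                                            # -8*x**3/y(x)

# Part 2: dividing P by a quadratic f leaves a remainder g of degree <= 1,
# and g agrees with P at each root of f.
P = x**5 + 3*x + 1                                     # arbitrary example
f = (x - 1)*(x - 2)
q, g = sp.div(P, f, x)
print(g, [g.subs(x, r) - P.subs(x, r) for r in (1, 2)])   # remainder and [0, 0]
```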


September 30

Poisson Process

I have a non-homogeneous Poisson process where the arrival rate is a function of time, $\lambda(t)$. I want to find the expected time of the first arrival.

I know that in a regular Poisson process arrival times have an exponential distribution with parameter $\lambda$. However with the non-homogeneous process I cannot take the expected value ($1/\lambda$) because this parameter is a function of time and is not constant. What can I do instead? —Preceding unsigned comment added by 130.102.158.15 (talk) 00:00, 30 September 2010 (UTC)[reply]

The solution T to the equation $\int_0^T \lambda(t)\,dt = 1$ is some kind of mean time to the first arrival. (Perhaps not the kind you asked for, but probably the kind you want.) Bo Jacoby (talk) 07:32, 30 September 2010 (UTC).[reply]
The expected time of first arrival is $\int_0^\infty \exp\!\left(-\int_0^t \lambda(s)\,ds\right)dt$. I think what Bo had in mind was that the median T is the solution to $\int_0^T \lambda(t)\,dt = \ln 2$. -- Meni Rosenfeld (talk) 08:29, 1 October 2010 (UTC)[reply]
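
As a worked illustration of Meni's formula, here is a numerical sketch; the rate function λ(t) below is made up for the example, so substitute your own.

```python
import numpy as np
from scipy import integrate

lam = lambda t: 2.0 + np.sin(t)            # example rate function (an assumption)

def Lambda(t):                             # cumulative rate: integral of lam from 0 to t
    value, _ = integrate.quad(lam, 0.0, t)
    return value

# E[T1] = integral_0^inf exp(-Lambda(t)) dt
expected_T1, _ = integrate.quad(lambda t: np.exp(-Lambda(t)), 0.0, np.inf)
print(expected_T1)
```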

Use de Moivre's theorem and the Pythagorean identity?

Use de Moivre's theorem and the Pythagorean identity to show that $\cos 3x = 4\cos^3 x - 3\cos x$.
The problem is trivially easy if you allow the identity but unfortunately they don't want it solved that way.MrMahn (talk) 00:14, 30 September 2010 (UTC)[reply]

First work out what happens when you treat x as the angular part of a complex number written in polar coordinates, and do the multiplications with de Moivre's theorem. 67.122.209.115 (talk) 00:18, 30 September 2010 (UTC)[reply]
Just think of $e^{i3x}$ in two ways. In one way, it's $\cos 3x + i\sin 3x$. On the other hand, it is $(e^{ix})^3 = (\cos x + i\sin x)^3$. You multiply out this right side and then compute real and imaginary parts. You should get a formula for $\sin 3x$ at no extra charge. StatisticsMan (talk) 02:05, 30 September 2010 (UTC)[reply]
StatisticsMan appears to have accidentally left out the "i" in $e^{i3x}$ and $(e^{ix})^3$ above -- not that Euler's formula is necessary here. As SM states, just take de Moivre's formula with n=3, multiply out the cube, equate the respective real and imaginary parts, and simplify with the Pythagorean identity. -- 111.84.196.147 (talk) 13:04, 3 October 2010 (UTC)[reply]
Oops, you are correct, thank you. I fixed it now I believe. StatisticsMan (talk) 03:04, 4 October 2010 (UTC)[reply]
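
For completeness, here is the expansion written out, assuming (per the reconstruction above) that the target identity is cos 3x = 4cos³x − 3cos x.

```latex
% De Moivre with n = 3:
\[
\cos 3x + i\sin 3x = (\cos x + i\sin x)^3
                   = \cos^3 x + 3i\cos^2 x\,\sin x - 3\cos x\,\sin^2 x - i\sin^3 x .
\]
% Equating real parts and using sin^2 x = 1 - cos^2 x (the Pythagorean identity):
\[
\cos 3x = \cos^3 x - 3\cos x\,(1 - \cos^2 x) = 4\cos^3 x - 3\cos x .
\]
% The imaginary parts give sin 3x = 3\cos^2 x\,\sin x - \sin^3 x at no extra charge.
```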

Projectile motion problem

"A particle is projected, at an angle α and speed V, from the edge of a cliff of height h. When it hits the ground, its path forms an angle A = arctan(2 tan α) with the horizontal. Find its horizontal range R and the value of V."

I've had trouble interpreting what the angle of impact tells us about either R or V. Where y and x represent the vertical and horizontal components of motion, and g is gravity, it is usual to derive the following equations:

$x = Vt\cos\alpha \quad (1)$
$y = Vt\sin\alpha - \tfrac{1}{2}gt^2 \quad (2)$
Then I reasoned that, viewing the terminal trajectory as a tangent to the particle's parabolic motion,

$\frac{dy}{dx} = \frac{V\sin\alpha - gt}{V\cos\alpha} = \tan A = 2\tan\alpha.$
However, this quickly leads to an absurdity. Solving for t yields a time of flight T of

$T = -\frac{V\sin\alpha}{g}.$
Where V is positive, g has been chosen as positive, and α is in the interval [0, π/2), this implies T < 0 (which is ridiculous). So I think my initial intuition is wrong. Thanks for any help. —Anonymous DissidentTalk 03:23, 30 September 2010 (UTC)[reply]

I agree with you up to . With a parabolic trajectory any expression for time will be a quadratic rather than linear function so I don't share your conclusion that:
When the projectile hits the ground its y co-ordinate is -h. You can find a quadratic equation that will look something like: $Vt\sin\alpha - \tfrac{1}{2}gt^2 = -h$.
You can use the quadratic formula to solve for t. You must expect to find two values of t. One will be positive and the other negative. Good luck. Dolphin (t) 03:58, 30 September 2010 (UTC)[reply]
On second thoughts, your conclusion that is probably correct. When the particle hits the ground the gradient of the trajectory is negative so:
Therefore (and ) must be negative. Your expression also begins with a negative sign so time T will be positive. Dolphin (t) 05:45, 30 September 2010 (UTC)[reply]
If you are correct, and the equation for T is not a contradiction, then the range can be found by putting T into the equation for x:
However, this value for R is at odds with the textbook's answer of 2h cot α. Either these expressions for R are equivalent, and we're left to prove this, or I've been wrong all along. —Anonymous DissidentTalk 07:01, 30 September 2010 (UTC)[reply]
The time is a red herring in the beginning of this problem. The trajectory is the parabola y=f(x) where f(x)=ax^2+bx+c satisfies f(0)=h, f(R)=0, f '(0)=tan(α)=S, f '(R)=−2S. Solve to get (a,b,c,R) as functions of the parameters (h,S). Then introduce the time t and the acceleration g. Bo Jacoby (talk) 07:54, 30 September 2010 (UTC).[reply]
If you say you end up with
That changes things a bit. I'm still working on it. Dolphin (t) 07:58, 30 September 2010 (UTC)[reply]
Bo Jacoby's approach yields R = 2h cot α; you don't even need to introduce t or g for this part. I suppose the key was to express the given information in a more useful way. —Anonymous DissidentTalk 08:29, 30 September 2010 (UTC)[reply]
(ec. I have corrected this, it was in error before. Sorry.) The trajectory is y=f(x)=−(3/4)(S^2/h)x^2+Sx+h, and the range is R=2h/S. Bo Jacoby (talk) 08:24, 30 September 2010 (UTC).[reply]
Yes, and putting f(R) = 0 brings out the solution for R quite easily. I'm still working on V. —Anonymous DissidentTalk 08:31, 30 September 2010 (UTC)[reply]
Then if you note x = Vt cos α, transform f from a function of x to a function of t, and equate f'(t) with the derivative of y in (2), you get V = √(2gh/3) csc α and we're done. Thanks to the both of you. My only point of concern is that this method differs significantly from the treatment of projectile motion given by my textbook, which stresses the need to derive equations (1) and (2) straight away. I wish I knew what the authors had in mind when they set the problem. —Anonymous DissidentTalk 08:52, 30 September 2010 (UTC)[reply]
If you solve equation (1) with respect to t and substitute this solution into equation (2), then t is eliminated and you get the parabolic trajectory equation y=f(x)=−gx2/(V2cos2(α))+x tan(α)+h, which must satisfy the requirements that f(R)=0 and f '(R)=−2 tan(α). Solving these two equations gives you R and V. Bo Jacoby (talk) 09:27, 30 September 2010 (UTC).[reply]

In mechanics there is a place for equations like your (1) and (2). My guess is that your textbook has covered that approach to problem solving, and then given this question as an example to be solved using this approach. Bo Jacoby's method is a simpler method but it doesn't exercise the student's skills at using equations like your (1) and (2). However, having sweated over your equations you have improved your skills in this area. Good luck with the remainder of the course! Dolphin (t) 12:33, 30 September 2010 (UTC)[reply]
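
A quick numerical cross-check of the two closed-form answers stated above (sample values of h and α are made up; g = 9.8 m/s² assumed):

```python
import numpy as np

g, h, alpha = 9.8, 20.0, np.radians(35.0)       # sample values (assumptions)

R = 2*h / np.tan(alpha)                          # textbook range, 2h*cot(alpha)
V = np.sqrt(2*g*h/3) / np.sin(alpha)             # speed found above, sqrt(2gh/3)*csc(alpha)

# Positive root of y(t) = V t sin(alpha) - g t^2 / 2 = -h, then range and impact slope.
t_hit = (V*np.sin(alpha) + np.sqrt((V*np.sin(alpha))**2 + 2*g*h)) / g
x_hit = V*np.cos(alpha)*t_hit
slope = (V*np.sin(alpha) - g*t_hit) / (V*np.cos(alpha))

print(np.isclose(x_hit, R), np.isclose(-slope, 2*np.tan(alpha)))   # True True
```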

Finding function definition from description

I was reading this part of the article about Srinivasa Ramanujan. One of the first problems he posed in a mathematical journal was finding the value of

$\sqrt{1 + 2\sqrt{1 + 3\sqrt{1 + 4\sqrt{1 + \cdots}}}}.$
here's my solution:

after some trial and error I found $f(n) = n+2$.

which means the value of the nested radical is 3. The problem is I guessed f(n). What is a better way to find a function from its description? Example: $f(n+1)^2 = n(f(n)^2 - 1)$. George (talk) 04:12, 30 September 2010 (UTC)[reply]

There's not a universal way to solve recurrence relations (the article discusses some forms that are solvable). Some techniques to try would be guessing the form you think might work and then plugging in a generic solution and solving for the coefficients. For example if you wanted to try a polynomial solution to f(n)^2 = 1 + (n+1)f(n+1), it's clear that any terms beyond the linear term would have to be zero, so plugging in f(n) = an + b you get a^2n^2 + 2abn + b^2 = 1 + an^2 + (2a+b)n + a+b. So a^2 = a, 2ab = 2a + b, b^2 = 1 + a + b, which does have a solution. I don't know how you would arrive at the idea of trying a polynomial except by using some intuition or trying a bunch of things.
To solve f(n+1)^2 = n(f(n)^2 - 1), note that letting f^2 = g lets us write g(n+1) = n(g(n)-1). Supposing we start with g(1) which is a positive integer, g(n) is then
which I doubt has a closed form. Then f(n) is the square root of that. Rckrone (talk) 05:06, 30 September 2010 (UTC)[reply]
Edit: Had to fix that formula a bunch of times... Rckrone (talk) 05:30, 30 September 2010 (UTC)[reply]
If g(0) = 1, then , based on [5]. Dragons flight (talk) 06:21, 30 September 2010 (UTC)[reply]
Of course your sum has a rather closed form! Based on formula number (2) from here,
where $\Gamma(a,x)$ denotes the incomplete gamma function. — Pt(T) 22:45, 30 September 2010 (UTC)[reply]

Thanks for the answers everyone.

I originally intended to write but wrote instead. anyway, now I have a few ideas about what to do. --George (talk) 05:05, 2 October 2010 (UTC)[reply]
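
Assuming the missing expression is Ramanujan's radical √(1 + 2√(1 + 3√(1 + ⋯))), as reconstructed above, a quick numerical experiment shows the truncations approaching 3, which matches the value obtained from the guessed f(n).

```python
import math

def truncated_radical(depth):
    """Evaluate sqrt(1 + 2*sqrt(1 + 3*sqrt(1 + ...))) cut off after `depth` nested levels."""
    value = 0.0
    for k in range(depth + 1, 1, -1):    # work outward from the innermost coefficient
        value = math.sqrt(1 + k * value)
    return value

for d in (2, 5, 10, 20, 30):
    print(d, truncated_radical(d))       # converges towards 3
```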


the basic details regarding the Fourier transform..... —Preceding unsigned comment added by Priyabujji27 (talkcontribs) 14:12, 30 September 2010 (UTC)[reply]

{0,1} support RV

Is any random variable with support {0,1} describable as Bernoulli? That is, can any random variable with support {0,1} be written as a Bernoulli RV for some p? Thanks, --TeaDrinker (talk) 23:04, 30 September 2010 (UTC)[reply]

Yes. There's a certain probability that it is equal to 1. Call that p. Then the probability that it is equal to 0 is 1 − p. Michael Hardy (talk) 23:49, 30 September 2010 (UTC)[reply]
Ah, perfectly clear. Many thanks! --TeaDrinker (talk) 04:48, 1 October 2010 (UTC)[reply]

October 1

Transforming 3 points to a new coordinate plane in 3D Cartesian space

Resolved

Hello,

I've inherited some C code that, among other things, takes three points specified in one 3D Cartesian coordinate system and (indirectly) converts their coordinates to a new coordinate system. For the first two points, the process by which this is done is quite clear. However, for the third point, I can't seem to figure out how the expressions used to determine the X and Y coordinates of the point were derived. Although I could just assume that the code is working fine and move on, I'd like to understand the logic behind the calculations.

I've included the expressions used to determine the coordinates of the three points in the new coordinate system below. In terms of notation, I refer to the points as P1, P2, and P3; the subscripts denote the coordinate (x, y, or z) and the coordinate system (O = old, N = new). For (hopefully) added clarity, I've made the coordinates specified in the old coordinate system red and those in the new coordinate system blue. The coordinates in the old coordinate system are known prior to these computations.


Point 1

P1xN = 0, P1yN = 0, P1zN = 0
This simply indicates that point 1 is the origin of the new coordinate system.


Point 2

P2xN = sqrt((P2xO − P1xO)^2 + (P2yO − P1yO)^2 + (P2zO − P1zO)^2), P2yN = 0, P2zN = 0
This is also straight-forward: the intent is clearly for point 2 to lie on the x axis of the new coordinate system and its position on the x axis to be the magnitude of the vector connecting point 1 and point 2 in the original coordinate system (i.e. the distance between them).


Point 3

P3xN = ((P3O − P1O) · (P2O − P1O)) / P2xN, P3yN = sqrt(|P3O − P1O|^2 − P3xN^2), P3zN = 0
This is where things get a bit confusing (for me, at least). Since the new Z coordinate for point 3 is set to zero, clearly points 1, 2, and 3 are being used to define the XY plane of the new coordinate system. Points 1 and 2 have already defined the X axis, so point 3 is then being used to define the orientation of the XY plane overall.

However, I can't figure out how the expressions for P3xN and P3yN were derived. Logically, I'd imagine that the expression for P3xN must represent some sort of projection of point 3 onto the line formed by points 1 and 2, but it's been a long time since I've done projections in 3D and unfortunately the form of this expression doesn't look familiar to me. I'm also unclear on what the expression for P3yN is getting at; it's very close to simply being the magnitude of the line connecting points 1 and 3, but then there's the presence of the additional term to be accounted for.

So, if anyone who has more familiarity with 3D coordinate transformations off the top of their head than I do has a possible explanation of how the expressions for P3xN and P3yN were derived, I'd be most appreciative!

Thanks,

Hiram J. Hackenbacker (talk) 00:56, 1 October 2010 (UTC)[reply]

P3xN is the length of the projection of the vector from P1O to P3O onto the vector from P1O to P2O. That is P3xN = (P3O - P1O).(P2O - P1O)/||P2O - P1O||, where that "." denotes the dot product (see vector projection). They take advantage of the fact that ||P2O - P1O|| has already been calculated, since it's P2xN. P3yN is then just chosen in such a way to ensure that ||P3N|| = ||P3O - P1O|| since those need to be equal. Rckrone (talk) 03:48, 1 October 2010 (UTC)[reply]
Thanks, Rckrone - that clears it up. Hiram J. Hackenbacker (talk) 11:56, 1 October 2010 (UTC)[reply]
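
Putting Rckrone's explanation together, the whole transformation fits in a short function; this is a Python sketch rather than the inherited C code, with names of my own choosing.

```python
import numpy as np

def to_new_frame(p1, p2, p3):
    """New coordinates of P1, P2, P3: P1 at the origin, P2 on +x, P3 in the z = 0 plane."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    v12 = p2 - p1
    v13 = p3 - p1
    p2x = np.linalg.norm(v12)                    # P2's new x is the distance |P2 - P1|
    p3x = np.dot(v13, v12) / p2x                 # projection of P1->P3 onto P1->P2
    p3y = np.sqrt(np.dot(v13, v13) - p3x**2)     # preserves the distance |P3 - P1|
    return (0.0, 0.0, 0.0), (p2x, 0.0, 0.0), (p3x, p3y, 0.0)

print(to_new_frame((1, 1, 0), (4, 5, 0), (1, 2, 2)))
```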

Poincaré-Hopf Theorem

Assume that M is a compact, orientable, smooth manifold. Does the Poincaré-Hopf theorem tell us that a smooth vector field on M will have at most χ(M) zeros? If not then can someone give a counter example, e.g. a smooth vector field on the torus with two zeros: one with index −1 and one with index +1. Fly by Night (talk) 09:29, 1 October 2010 (UTC)[reply]

No it doesn't mean that (that would be the converse). I can't think at the moment of an example exactly like you say, but here's the standard example of a vector field on the torus with four zeros: Stand the torus up vertically on its end, and dump syrup on the top of it. The syrup will flow down the torus, and use this flow to define a vector field. This will have four zeros: one at the top (source), one at the bottom (sink), and one at each of the top and bottom of the inner circle (saddles). So there are 4 zeros, of index +1, +1, -1, -1. (This example, with a good picture, is in Guillemin + Pollack's Differential Topology.) You can probably tweak this to make it have 2 zeros. Staecker (talk) 11:53, 1 October 2010 (UTC)[reply]

basic combinatorics

Resolved

I am trying to deduce a formula for the number of partitions of an integer, say f(n), where the order of the integers matters. For example 4 can be written in 8 ways: 4, 2+2, 1+3, 3+1, 1+1+1+1, 1+2+1, 2+1+1, 1+1+2, so f(4)=8. This is how I proceeded: First I already had a proof of this statement: If A is the set $\{(x_1,\ldots,x_k) : x_i \geq 1,\ x_1+\cdots+x_k = n\}$ then A has cardinality $\binom{n-1}{k-1}$. Since we basically need to count the number of elements in the set $\bigcup_{k=1}^{n} A_k$ (where $A_k$ is the set above for a given k), this number is $\sum_{k=1}^{n}\binom{n-1}{k-1}$. How can I simplify this to get $2^{n-1}$, which is supposed to be the answer? Or is there an easier way to find f(n)? Thanks. Also regarding the fact of which I already have a proof, namely that A has cardinality $\binom{n-1}{k-1}$, the proof I have relies on an argument of making bars in between blanks, and actually deals with making selections with repetitions. Is there a Wikipedia article about this technique? Thanks-Shahab (talk) 11:14, 1 October 2010 (UTC)[reply]

First see Bell number. Then see Multiset#Multiset_coefficients and stars and bars (probability). Bo Jacoby (talk) 11:20, 1 October 2010 (UTC).[reply]
I have read them, thanks. "Stars and bars" answers my second question completely. But I am still at a loss for showing $\sum_{k=1}^{n}\binom{n-1}{k-1} = 2^{n-1}$ to answer my original question. Also I am not sure why you linked Bell numbers.-Shahab (talk) 11:50, 1 October 2010 (UTC)[reply]
The arrangements that you are counting are called compositions. Since each of your x_i must be at least 1, the cardinality of A, the compositions of n with k parts, is in fact $\binom{n-1}{k-1}$.
It is true that $\sum_{k=1}^{n}\binom{n-1}{k-1} = 2^{n-1}$,
but a simpler way to count the total number of compositions of n is given in Composition (number theory)#Number of compositions. Gandalf61 (talk) 11:52, 1 October 2010 (UTC)[reply]
Thanks a ton!-Shahab (talk) 12:12, 1 October 2010 (UTC)[reply]
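
A brute-force check of the closed form for small n (an illustrative sketch; it simply enumerates all compositions):

```python
from itertools import product

def count_compositions(n):
    """Count ordered tuples of positive integers summing to n, over all numbers of parts k."""
    return sum(
        1
        for k in range(1, n + 1)
        for xs in product(range(1, n + 1), repeat=k)
        if sum(xs) == n
    )

for n in range(1, 8):
    print(n, count_compositions(n), 2**(n - 1))   # the last two columns agree
```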

using CAS to find the catenary parameter a that will give us a curve that will pass through a given point

I want to find the equation of a catenary curve that will pass through the origin and a given (x,y) and (-x,y).

I see that a catenary is described by y=a*cosh(x/a). Since I want my curve to pass through the origin I will just subtract a to move it down to zero at x=0 so my new equation becomes y=a*cosh(x/a)-a. I figure that I can just solve for a in terms of the other variables but when I try to use a CAS (like ti-89 and wolframalpha) I get strange results. If I substitute actual values for x and y I can get a correct approximate numerical result but I cannot get a general equation of a in terms of x and y.

I am expecting that there can only be one value of a to satisfy a given (x,y) but apparently I am wrong. I have tried restricting the domains of a, x and y to positive numbers less than infinity but I still can't work it out. Why can't I get a simple single result for a? Thanks so much. --- Diletante (talk) 20:41, 1 October 2010 (UTC)[reply]

I think a numerical solution is the best you can hope for since there isn't a way to solve for a with standard functions. The good news is that it's pretty easy reduce the problem to finding the inverse of a single function, namely (1/x)(cosh x -1).--RDBury (talk) 04:27, 2 October 2010 (UTC)[reply]
Set α=1/a. Then your equation y=a*cosh(x/a)−a is written cosh(xα)−yα−1=0. The left hand side is a nice entire function of α, and the equation is solved by a root-finding algorithm. Bo Jacoby (talk) 10:48, 2 October 2010 (UTC). The uninteresting solution α=0 is removed by division: $\frac{\cosh(x\alpha)-y\alpha-1}{\alpha}=0$, and the series expansion is $-y+\frac{x^2\alpha}{2}+\frac{x^4\alpha^3}{24}+\frac{x^6\alpha^5}{720}+\cdots=0$, so the first approximation is $\alpha\approx 2y/x^2$. The first approximation is improved by Newton's method. Bo Jacoby (talk) 21:54, 2 October 2010 (UTC).[reply]

Thanks for the good answers. I will be honest and say that all this is a little bit beyond my ability to understand, but I am going to try my best to remember what I learned about approximation in calc 2 and read the articles Bo Jacoby linked. Thanks a lot everyone! -- Diletante (talk) 02:08, 3 October 2010 (UTC)[reply]

Your premise is correct: Given x and y, there is a unique value a for which y=a*cosh(x/a)-a. There is thus a function $a = f(x,y)$ which gives for any values of x and y the corresponding value of a. However, just because the function exists doesn't mean it can be described in a nice "traditional" form, and indeed this function cannot. What you do next depends on your application. If you just want to be able to take numerical values of x, y and get a, I understand your calculator can already do this for you. You can also find a formula which gives an approximate answer. For example, if x is much larger than y, then $y \approx \frac{x^2}{2a}$, and you can solve that to find an approximate formula for a. -- Meni Rosenfeld (talk) 10:03, 3 October 2010 (UTC)[reply]
Ok I hope you all will indulge me for a little bit longer. I see that we can do a Taylor series to approximate my f(x)=y. I see that we can solve the 2nd order Taylor approximation to get a=2y/x^2. What I do not understand is how I can solve for a for greater orders of the Taylor approximation (like the last approximation Meni gave). I take it that this is why Bo Jacoby suggested using the first approximation as a starting point to use Newton's method, but I don't understand what that really means. Will Newton's method allow me to write a better approximation of a algebraically, or just help me find a numerical result? In short, how can I write an approximation that is a little better than a=2y/x^2? -- 21:32, 3 October 2010 (UTC)
Newton's method is just for numerical results.
To solve the approximation I gave, you can use the method to solve cubic equations. But there's a better way to approach this.
Following RDBury's suggestion, we let $g(t) = \frac{\cosh t - 1}{t}$. We can show that $a = f(x,y) = \frac{x}{g^{-1}(y/x)}$. Then we just need to find an approximation for $g^{-1}$. The series expansion of g is $g(t) = \frac{t}{2} + \frac{t^3}{24} + \frac{t^5}{720} + \cdots$, and with some algebraic manipulations we can find as many terms as we want of the expansion $g^{-1}(u) = 2u - \frac{2u^3}{3} + \cdots$. Plugging this in, we can find $f(x,y) \approx \frac{x^2}{2y} + \frac{y}{6}$.
Again, this is good for small y. You can also find an approximation good for large y. Also, for any given , you can numerically find the value and derivatives of g around that point, and from that find an expression for the values of f around it (so you can get, for example, a formula which is good when ). -- Meni Rosenfeld (talk) 08:56, 4 October 2010 (UTC)[reply]
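
Here is a sketch of the numerical route described above: start from the small-sag approximation and refine α = 1/a with Newton's method (the function name and sample inputs are made up).

```python
import math

def catenary_a(x, y, iterations=20):
    """Solve y = a*cosh(x/a) - a for a, given x, y > 0."""
    f = lambda alpha: math.cosh(x * alpha) - y * alpha - 1    # root sought in alpha = 1/a
    df = lambda alpha: x * math.sinh(x * alpha) - y
    alpha = 2 * y / x**2                                      # first approximation
    for _ in range(iterations):
        alpha -= f(alpha) / df(alpha)                         # Newton steps
    return 1.0 / alpha

a = catenary_a(2.0, 1.5)
print(a, a * math.cosh(2.0 / a) - a)    # the second number should reproduce y = 1.5
```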

October 2

more combinatorics

I am reading a book on combinatorics and am stuck on the following three problems:

  • Why is the identity $\binom{n-1}{k-1}\binom{n}{k+1}\binom{n+1}{k} = \binom{n-1}{k}\binom{n}{k-1}\binom{n+1}{k+1}$ called the hexagon identity?
  • Compute the following sum: . I can reduce this to and no further.
  • Compute the following sum: . I want to use Vandermonde's convolution here but does it imply that this sum is ? What do I do next? I dont want any summation in the final result.

Can anyone help. Thanks-Shahab (talk) 07:19, 2 October 2010 (UTC)[reply]

You might like the book A=B. I'm not sure but it could be helpful for the last two problems. 67.122.209.115 (talk) 09:01, 2 October 2010 (UTC)[reply]
The binomial coefficients in the hexagon identity are corners of a hexagon in Pascal's triangle. The infinite series are actually finite sums, as $\binom{n}{k} = 0$ for k<0 and for k>n≥0. Bo Jacoby (talk) 10:19, 2 October 2010 (UTC).[reply]


Another nice book for learning how to do these manipulations is Concrete Mathematics. In your case, using generating functions seems a reasonable way of treating the sums. In particular, if you can write your expression in the form $\sum_k a_k b_{n-k}$, you can see it as the coefficient of $x^n$ in the power series expansion of the product $\left(\sum_k a_k x^k\right)\left(\sum_k b_k x^k\right)$ (this is the Cauchy product of power series). Note that Vandermonde's identity is a special case of this. In your case, the task is not difficult (but ask again here if you meet any difficulty). You can do it in the first sum either in the original form (writing ; in this case you also need a closed expression for , which is related to the binomial series with exponent ) or in your reduction, which is simpler to treat (then you need the simpler ). Another possibility in order to proceed from your reduction is to make a substitution: so and distribute: this leaves you with two sums, and which is identity (6a) here. Your last sum is indeed close to Vandermonde's identity, but the result you wrote is not at all correct (you must have done something wrong in the middle). You may write ; put so as to get the form of Vandermonde's identity--pma 15:53, 3 October 2010 (UTC)[reply]
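
Assuming the hexagon identity is the standard one written above (the original formula is missing from the page), a brute-force check over a range of n and k:

```python
from math import comb

def hexagon_holds(n, k):
    lhs = comb(n - 1, k - 1) * comb(n, k + 1) * comb(n + 1, k)
    rhs = comb(n - 1, k) * comb(n, k - 1) * comb(n + 1, k + 1)
    return lhs == rhs

print(all(hexagon_holds(n, k) for n in range(2, 30) for k in range(1, n)))   # True
```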

Second moment of the binomial distribution

The moment generating function of the binomial distribution is $M(t) = (1-p+pe^t)^n$. When I take the second derivative I get $M''(t) = n(n-1)(1-p+pe^t)^{n-2}p^2e^{2t} + n(1-p+pe^t)^{n-1}pe^t$. Substituting 0 in for t gives me $n(n-1)p^2 + np$. Why is this not the same as the variance of the binomial distribution, $np(1-p)$?--220.253.253.56 (talk) 11:34, 2 October 2010 (UTC)[reply]

See cumulant. Bo Jacoby (talk) 11:51, 2 October 2010 (UTC).[reply]

The variance is not the same thing as the raw second moment. The variance is

$\operatorname{var}(X) = \operatorname{E}\!\left((X-\mu)^2\right),$

where μ is E(X). The second moment, on the other hand, is

$\operatorname{E}(X^2).$
Michael Hardy (talk) 22:30, 2 October 2010 (UTC)[reply]
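
A short symbolic check of the distinction (a sympy sketch):

```python
import sympy as sp

n, p, t = sp.symbols('n p t', positive=True)
M = (1 - p + p*sp.exp(t))**n                      # binomial moment generating function

first = sp.diff(M, t).subs(t, 0)                  # E[X] = n*p
second = sp.diff(M, t, 2).subs(t, 0)              # raw second moment E[X^2]

print(sp.simplify(second))                        # equals n*p*(n*p - p + 1), not n*p*(1-p)
print(sp.factor(sp.simplify(second - first**2)))  # the central second moment, equal to n*p*(1-p)
```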

Then why does the article moment (mathematics) say that the second moment is the variance?--220.253.253.56 (talk) 22:50, 2 October 2010 (UTC)[reply]
It doesn't. Algebraist 22:53, 2 October 2010 (UTC)[reply]
Moment_(mathematics)#Variance--220.253.253.56 (talk) 23:00, 2 October 2010 (UTC)[reply]
There are eighteen words in that section. You seem to have neglected to read the third. Algebraist 23:03, 2 October 2010 (UTC)[reply]
So what is a central moment and how are they calculated (can you use the moment generating function)?--220.253.253.56 (talk) 23:05, 2 October 2010 (UTC)[reply]
Central moment might help. 129.234.53.175 (talk) 15:52, 3 October 2010 (UTC)[reply]

In Moment_(mathematics)#Variance, I've now added a link to central moment. Michael Hardy (talk) 02:48, 4 October 2010 (UTC)[reply]

There is already a link to Central moment in Moment (mathematics). I think the guideline is not to link to the same article twice. -- Meni Rosenfeld (talk) 08:28, 4 October 2010 (UTC)[reply]
Where is that guideline? To me that seems unwise in long articles. Michael Hardy (talk) 19:44, 4 October 2010 (UTC)[reply]
Wikipedia:Manual of Style (linking)#Repeated links. It does mention as an exception the case where the distance is large, but here the instances are quite close in my opinion. -- Meni Rosenfeld (talk) 20:35, 4 October 2010 (UTC)[reply]

More Limits

Hello. How can I prove $\lim_{x\to\infty} x^a e^{-x} = 0$? I tried l'Hopital's rule but get $\frac{\infty}{\infty}$. Thanks very much in advance. --Mayfare (talk) 15:19, 2 October 2010 (UTC)[reply]

Forget l'Hopital's rule and try to visualise what is happening. If a < 0 then both x^a and e^-x tend to 0 as x grows, so the result is obvious. The case a=0 is also easily dealt with. If a > 0 then as x gets larger, x^a grows but e^x grows even more quickly. In fact, if x > 0, then
$e^x > \frac{x^m}{m!},$
where m is the next integer greater than a. So
$0 < x^a e^{-x} < \frac{m!\,x^a}{x^m} = \frac{m!}{x^{m-a}}.$
I'll let you take it from there. Gandalf61 (talk) 16:06, 2 October 2010 (UTC)[reply]
It might not be obvious to the questioner that exponentials grow faster than polynomials (or even what a statement like that means). Mayfare, if you want to use l'Hôpital's rule for this, imagine using it over and over until you don't get ∞/∞ any more. What is going to happen? The exponent in the numerator is going to decrease by 1 each time you use the rule, while the denominator stays the same. So what can you conclude? —Bkell (talk) 17:52, 2 October 2010 (UTC)[reply]
If the questioner does not understand that an exponential function grows faster than any polynomial, or why this is implied by
$e^x > \frac{x^m}{m!},$
then they cannot understand why $x^a e^{-x} \to 0$. At best they are reproducing a method (l'Hôpital's rule) learnt by rote, without understanding. Once they do understand the behaviour of exponential functions then the result is intuitively obvious and a formal proof is easily found. Gandalf61 (talk) 09:25, 3 October 2010 (UTC)[reply]


While you can take Bkell's suggestion and work that into a proof, I would suggest using the definition of a limit directly. That is, $\lim_{x\to\infty} x^a e^{-x}$ equals 0 if for every ε>0 there exists a δ such that if x>δ then |x^a e^-x|<|ε|. Note that x^a and e^-x are both eventually monotone functions. Can you solve |x^a e^-x|=|ε|? Taemyr (talk) 18:24, 2 October 2010 (UTC)[reply]

L'Hopital's rule will do it if you iterate it: a becomes a − 1, then after another step it's a − 2, and so on. After it gets down to 0 or less, the rest is trivial. However, there's another way to view it: every time x is incremented by 1, e^x gets multiplied by more than 2, whereas the numerator x^a is multiplied by less than 2 if x is big enough. Therefore it has to approach zero. Michael Hardy (talk) 22:27, 2 October 2010 (UTC)[reply]

Noticing that x^a = exp(a ln x) is useful. Then

$\lim_{x\to\infty} \frac{a\ln x}{x} = 0.$
Since exp(x) is an increasing function, the original product must also go to 0. —Anonymous DissidentTalk 01:53, 3 October 2010 (UTC)[reply]

This seems highly questionable to me. Showing that the ratio of the exponents goes to zero does not prove that the ratio of the original functions goes to zero. For example consider the constant functions f(x) = 1 and g(x) = e.
$\lim_{x\to\infty} \frac{\ln f(x)}{\ln g(x)} = \frac{0}{1} = 0,$
but clearly f(x)/g(x) does not go to zero. What you actually need is for the difference in the exponents to go to negative infinity, but that's not any easier to prove than the original problem. Rckrone (talk) 02:17, 3 October 2010 (UTC)[reply]
I think it does if both functions are increasing. Your counter-examples seem a little trivial, since they are constant functions (which do not change, let alone strictly increase). Perhaps you are correct in general, but in this case the result seems quite clear. —Anonymous DissidentTalk 02:24, 3 October 2010 (UTC)[reply]
I picked a trivial counter example because it's easy to consider. Here is a case with strictly increasing functions: f(x) = (x-1)/x, g(x) = e^{f(x)}. Anyway, I was wrong before about taking logs not helping. If you consider ln(x^a e^-x), which is a ln(x) - x, you can show it goes to negative infinity by arguing that the derivative a/x - 1 goes to -1. I guess that's not too bad. Rckrone (talk) 02:32, 3 October 2010 (UTC)[reply]
Okay, point well taken. —Anonymous DissidentTalk 03:02, 3 October 2010 (UTC)[reply]

If the OP was thrown off by $\frac{\infty}{\infty}$ but has already proven that $\lim_{x\to\infty} x^n e^{-x} = 0$ for integer n, then consider applying the squeeze theorem with g(x) = x^n e^-x and h(x) = x^m e^-x where n≤a≤m. See also floor and ceiling functions. -- 124.157.254.146 (talk) 02:52, 3 October 2010 (UTC)[reply]

And in case the OP didn't catch the remark by Rckrone above, x^a = e^{a ln(x)} so x^a e^-x = e^{a ln(x) - x} and it can be shown that a ln(x) - x → -∞ as x → +∞. -- 124.157.254.146 (talk) 05:50, 3 October 2010 (UTC)[reply]
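
For what it's worth, a CAS will confirm the limit for symbolic positive a (a sketch, not the kind of proof asked for):

```python
import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a', positive=True)

print(sp.limit(x**a * sp.exp(-x), x, sp.oo))   # 0
```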

October 3

k = x + 1/x

Why is this equation so ridiculously difficult to solve using standard high school algebra methods? I can only seem to solve it by running it through Mathematica. John Riemann Soong (talk) 00:58, 3 October 2010 (UTC)[reply]

You're solving for x, right? If you multiply through by x you get x^2 - kx + 1 = 0 and then you can hit it with the quadratic formula. Rckrone (talk) 01:27, 3 October 2010 (UTC)[reply]
I just realised this. I actually started from the lens equation (k = S/f - 2, where S and f are known constants) and did a lot of rearranging to get k = di/do + do/di. Must have been too fatigued to realise that having a non-constant term on the other side would be a good thing. John Riemann Soong (talk) 01:55, 3 October 2010 (UTC)[reply]
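
For the record, carrying Rckrone's step through to the end:

```latex
x + \frac{1}{x} = k \quad (x \neq 0)
\;\Longleftrightarrow\;
x^2 - kx + 1 = 0
\;\Longrightarrow\;
x = \frac{k \pm \sqrt{k^2 - 4}}{2},
\qquad \text{with real solutions only when } |k| \ge 2 .
```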

Writing a puzzle

Good evening. I'm trying to write a puzzle but I don't know how to state it right. It's going to start like "Find the greatest value of <some f(x)>" and the solution should require changing f(x) somehow to a polynomial function g(x) (I think g(x) might be x^<some large constant>, but I'm not sure on that) that can be evaluated by taking the nth derivative of g(x) such that it is a constant (that is also the maximum of f(x)) and the n+1th (and every subsequent) is zero. I saw this problem way back when I was studying Calculus in high school and I want to give it to my students. The problem is I can't quite think of how it goes, namely what f(x) to use and how it is turned to g(x). I do remember that there was some way to discourage people from using the derivative test for extrema, hence my guess that g(x)=x^<some large constant>. Can anyone think of a way to present this? Many thanks. PS: It's late and I'm not exactly at my best right now so this won't be the most coherent (or concise). If there are questions just leave them here and I'll get back to you in the morning. Thanks again. --Smith —Preceding unsigned comment added by 24.92.78.167 (talk) 03:29, 3 October 2010 (UTC)[reply]

I like "find the minimum of x + 1/x for x > 0". The problem above reminded me of that. 67.122.209.115 (talk) 04:02, 3 October 2010 (UTC)[reply]
(edit conflict) This doesn't exactly sound like what you're looking for, but the easiest way to find the maximum value of, say, is to factor the numerator and cancel the common factor of so you get the polynomial function , which is easy to maximize. [Of course, in this example except at .] If you try to use the derivative test on directly, you have to use the quotient rule and you'll get an ugly mess. Is this example anywhere close to being on the right track? —Bkell (talk) 04:03, 3 October 2010 (UTC)[reply]

Continued Fraction Factorisation Method - What is the point?

I've been learning ways of factorising numbers through the Congruence of Squares method, and at the moment I'm looking at the continued fraction method, whereby you use the convergents of a continued fraction expansion of Sqrt(N) in order to find a congruence of squares modulo N: I think www.math.dartmouth.edu/~carlp/PDF/implementation.pdf section 2 has one of the few good descriptions of the process I can find online.

However, what I'm struggling to actually see is what the actual advantage of using these convergents is - what do we gain from using the convergents, does it somehow make it easier to find the congruence of squares? Does it typically make it easier to find B-smooth numbers congruent to a square for some small set of primes B, for example? I can see the method is closely related to Dixon's method, but as I said I fail to see why the CFRAC method is actually advantageous - does it perhaps typically find that the $A_n^2$ (using the terminology of the linked CFRAC explanation) are congruent to a very small number, in which case they are more likely to factorise over a set of small primes? Or is it less likely to lead to a trivial factorisation, maybe?

I'd really appreciate any enlightenment - I can see why Dixon's method works, but not how this development helps. Ta very much in advance! Mathmos6 (talk) 17:14, 3 October 2010 (UTC)[reply]
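
To make the questioner's own hypothesis concrete: the convergent numerators A_n of the continued fraction of √N satisfy A_n^2 ≡ ±Q_{n+1} (mod N) with 0 < Q_{n+1} < 2√N, so the residues that need to be B-smooth are tiny compared with N (whereas random squares mod N can be of order N). The rough sketch below just illustrates the small residues; it is not an optimised implementation, and the indexing conventions are mine rather than the linked paper's.

```python
from math import isqrt

def cfrac_residues(N, steps=10):
    """Continued fraction of sqrt(N): yield (A_n mod N, centred residue of A_n^2 mod N)."""
    a0 = isqrt(N)
    m, d, a = 0, 1, a0
    A_prev, A = 1, a0 % N                 # convergent numerators A_{-1}, A_0 (mod N)
    out = []
    for _ in range(steps):
        r = A * A % N
        if r > N // 2:
            r -= N                        # centre the residue so small magnitudes show up
        out.append((A, r))
        m = d * a - m                     # standard recurrences for the CF of sqrt(N)
        d = (N - m * m) // d
        a = (a0 + m) // d
        A_prev, A = A, (a * A + A_prev) % N
    return out

N = 13290059                              # example composite; any non-square N works
for A, r in cfrac_residues(N):
    print(A, r, abs(r) < 2 * isqrt(N) + 1)   # the residues stay below roughly 2*sqrt(N)
```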

October 4

Name the object

I have a vector bundle π : E ↠ X with fibre F. At each point x ∈ X, we have a linear map Sx : F → F which varies smoothly with x. What would we call the whole object S : E → E? I think that each Sx is a type (1,1)-tensor, is that right? If that's true then is S a type (1,1)-tensor field? Any suggestions? Fly by Night (talk) 13:28, 4 October 2010 (UTC)

Generalized Metric Space

Has anyone thought of generalizing the definition of a metric space in the following direction?

Let X be a set of points. Instead of considering the distance function d to map pairs of elements of X to nonnegative real numbers, let d map into a set S, equipped with a total order ≤, a minimum element 0, and a binary operation + for which 0 is the identity and + respects the order in the sense that whenever a, b, c and d are elements of S with a ≤ c and b ≤ d, we have a+b ≤ c+d. Then copy the axioms of a metric space over, substituting in S, 0, ≤ and + appropriately.

I can quickly think of ways to add more structure to this; this is an extreme generalization. It would certainly be useful to require + to be associative, for example, or to require that it simply be addition in a totally ordered field. My question is has anyone generalized in this direction, and is there a name for such generalized spaces? --24.27.16.22 (talk) 23:22, 4 October 2010 (UTC)[reply]