Wikipedia:Reference desk/Mathematics

Algorithm to reduce polynary equations to minimum form

I asked this before and maybe the question was ignored due to the holidays...


Is there an algorithm (like the simplex method in linear programming) to reduce polynary equations to minimum form? 71.100.6.153 (talk) 02:06, 27 December 2009 (UTC)


Welcome to the mathematics section of the Wikipedia reference desk.

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.



December 19

Differentiability of a function from R^2 to R at the origin

Hi all,

could anyone tell me if the function with f(0,0)=0 is differentiable at (0,0)? I've shown it to be continuous at 0, and that directional derivatives exist in every direction at the origin, but I'm not sure whether or not it's differentiable at the origin (in the Fréchet derivative sense), and how to prove it if it is or prove it isn't if not.

Thanks for the help, Typeships17 (talk) 05:18, 19 December 2009 (UTC)[reply]

Doesn't look continuous to me: it is 1 everywhere except when x=0, where it is 0. Dmcq (talk) 07:21, 19 December 2009 (UTC)[reply]
Since for every , is indeed continuous (...but you have probably just made a silly mistake anyway ;)). --PST 10:46, 19 December 2009 (UTC)[reply]
Oops silly me above about f(x,0) being 1, I left the y out of the numerator! Zero most certainly doesn't mean not present. Sorry. Dmcq (talk) 12:43, 19 December 2009 (UTC)[reply]
Therefore, if f is differentiable at the origin, necessarily its differential there is 0; we shall appeal directly to the definition of a Fréchet derivative:
.
Since the final limit in the sequence is not 0, f is not differentiable at the origin. Hope this helps (...and I also hope that I have not muddled anything here; I am in a bit of a hurry... I did muddle something, but have now corrected it). --PST 07:32, 19 December 2009 (UTC)[reply]
Too much in a hurry? Indeed what you have proved is that f is not differentiable at the origin 0. If it were, the differential at 0 would be 0, because both partial derivatives vanish there. But in any other direction the directional derivative is not 0 (in your line −2 there should be r^3 in the denominator, I think).
Note also that this f is continuous at 0 (in fact everywhere), and homogeneous of degree 1, that is f(tv) = t f(v) for any t in R and v in R^2. Any homogeneous function of degree 1 has all directional derivatives at the origin. But it is F-differentiable iff it is linear: if it is differentiable you have f(v) = Df(0)v for all v (and of course this f is not linear). --pma (talk) 08:33, 19 December 2009 (UTC)[reply]
You are right (I have corrected my error above). Thanks for correcting me! --PST 09:25, 19 December 2009 (UTC)[reply]
...But is not the directional derivative of f (at the origin) equal to zero in every direction, per the following computation (v denotes an arbitrary vector):
You are of course right that f is not differentiable at the origin, but I think that its directional derivative (at the origin) along every direction is 0 (or maybe I have made another muddle...). ;) --PST 11:04, 19 December 2009 (UTC)[reply]
You're welcome! (not another muddle: the same ;-) ) To summarize, the relevant facts to recall are:
1. The directional derivative of a map f at x with respect to the direction v is by definition the derivative of the composed map t ↦ f(x + tv) at t = 0. The directional derivative at x in the direction v is usually denoted f'(x; v), that is, f'(x; v) = (d/dt) f(x + tv) |_{t=0}.
2. if f is F-differentiable at x, then f has all directional derivatives at x, and f'(x; v) = Df(x)v (this is a plain consequence of the differentiability of a composition);
3. if f is 1-homogeneous, then f has all directional derivatives at 0, and f'(0; v) = f(v) (this is immediate from just applying the definition, that is, deriving with respect to t);
4. Having all directional derivatives does not imply being F-differentiable. Due to the preceding remarks, a counterexample is any 1-homogeneous, non-linear function (actually the OP's one is possibly the simplest such example; note that it has nonvanishing directional derivatives at the origin in all directions that are not parallel to (0,1) or to (1,0)). --pma (talk) 11:23, 19 December 2009 (UTC)[reply]
Thanks, but I think there has been a misunderstanding. My point was that the directional derivative of f in every direction is zero, whereas you had said that it was never zero except when the direction corresponded to either of the partial derivatives of f, so I was wondering whether I had made a mistake. I saw my mistake of saying that f is differentiable at the origin, once you pointed it out, but I still cannot see why my assertion that the directional derivative of f (at the origin) is zero in every direction, is incorrect. Sorry for not making myself clear. ;) --PST 11:47, 19 December 2009 (UTC)[reply]
Don't you agree with my point 3? I think there's a factor t missing in the denominator at line −2, in your last post. No matter, I also make these muddles; some of them survived hidden in my notes after years! --pma (talk) 11:53, 19 December 2009 (UTC)[reply]
You are right again ;)!!! I cannot believe I made that mistake. Yes, I should have read your points a bit more carefully. Thanks! --PST 12:18, 19 December 2009 (UTC)[reply]
By the way I believe you should next try with f(0,0)=0 <evil cackle /> Dmcq (talk) 13:03, 19 December 2009 (UTC)[reply]
I prefer a simpler example: f(x,y) = 1 if y = x^2 and x ≠ 0, f(x,y) = 0 everywhere else. Algebraist 14:52, 19 December 2009 (UTC)[reply]
This response has been fantastic, thank you all very much! If you don't mind me asking, why is it that in PST's post taking the Fréchet derivative directly, the denominator is rather than , which is surely ? Thanks again to everyone for their help! Typeships17 (talk) 15:45, 19 December 2009 (UTC)[reply]
You're completely right. Maybe he worked hard the past night! It happens to me too, making wrong computations the day after. I took the liberty of re-editing his post to correct it; I sincerely apologize in advance if this is considered incorrect (either socially or mathematically) ;-). --pma (talk) 16:07, 19 December 2009 (UTC)[reply]
Yes, you are right actually; I have had late (really late!) nights for the past week, but that is another story. ;) Thanks for correcting my posts; I do not mind. --PST 06:42, 20 December 2009 (UTC)[reply]
That's great, thanks all! :) Typeships17 (talk) 17:09, 19 December 2009 (UTC)[reply]

Distance from a point to a line

Find the distance from the line whose equation is 5x − 12y + 6 = 0 to P1(2, 3). —Preceding unsigned comment added by 79.141.23.101 (talk) 15:07, 19 December 2009 (UTC)[reply]

(added heading JohnBlackburne (talk) 15:13, 19 December 2009 (UTC))[reply]

See Distance from a point to a line. -- Meni Rosenfeld (talk) 15:53, 19 December 2009 (UTC)[reply]
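For reference, the standard formula from that article gives, applied here (assuming the intended equation is 5x − 12y + 6 = 0):

d = |5·2 − 12·3 + 6| / sqrt(5^2 + (−12)^2) = |10 − 36 + 6| / 13 = 20/13.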
If you looked at that article earlier, note that I've since updated it; when I looked at it I thought of something to add, then didn't stop until I'd rewritten it, so you may want to look again. --JohnBlackburne (talk) 22:46, 19 December 2009 (UTC)[reply]

Plotting a fractal curve defined by binary digits

Consider the functions x(t) and y(t) defined via binary representation this way: for all t in [0,1] with binary expansion t = 0.t_1 t_2 t_3... (choose the one with finitely many 1's in case of double representation), the values x(t) and y(t) have binary expansions

where the binary sequences (x_k) and (y_k) are respectively:

I'd like to plot the graphs of x(t) and y(t), and the curve in [0,1]^2 with parametric cartesian representation (x(t), y(t)), possibly with some finite sum approximations. I'm trying with Maple but something goes wrong. Would anybody teach me how to do it? The reason I'm interested is that these pictures (if the computations I've just made are correct) should possibly give a nice addition to a certain wiki article... I'm not saying which one now, in the hope of making people curious enough to plot the graph for me. Thank you, --pma (talk) 20:41, 19 December 2009 (UTC)[reply]

Should "" be mod 2? –Henning Makholm (talk) 08:39, 20 December 2009 (UTC)[reply]
Yes, exactly - everything is mod 2, thanks. I've changed the sign in front of to avoid the ambiguity. --pma (talk) 09:30, 20 December 2009 (UTC)[reply]
In any case, (almost) every point in [0,1]×[0,1] will arise as a possible value of (x(t),y(t)), which makes your curve easy to plot – it's a square full of ink!
To see this, let some arbitrary x and y (and thus x_k, y_k) be fixed. In general, the first k bits of x and y are given by the first 2k bits of t. By induction on k, assume that we have chosen bits up to t_{2k−2} to give us the desired bits up to x_{k−1} and y_{k−1}. Now t_{2k} must be 0 or 1 according to whether x_k = y_k or not. And once we know t_{2k}, the value of the entire card{...} bracket is given, and you can solve for t_{2k−1}.
The only snag is that this procedure might produce a t_k sequence that is identically 1 from some point onwards and therefore is not actually hit by your mapping. But there are at most countably many such cases, so they'll hardly show on your graph. –Henning Makholm (talk) 09:00, 20 December 2009 (UTC)[reply]
You got it immediately, excellent. That is a binary representation of the Hilbert curve, and as you are saying, it gives a continuous bijection between non-dyadic points in [0,1] and pairs of non-dyadic points in [0,1]. I was curious to translate it into binary form, and the above expressions are what I got. I did it because I'd like to see separately the graphs of the two coordinate functions x(t) and y(t) and possibly add the pictures to the article (the curve (x(t), y(t)) is a square full of ink, as you are saying). But I'm not very fond of plotting programs. --pma (talk) 09:30, 20 December 2009 (UTC)[reply]
Oops, I missed the point about plotting the coordinate functions separately. Can't help you with that, I'm afraid. Not experienced with plotting programs either; I usually end up doing ad-hoc perl scripts that emit pbm's :-)
(And I didn't say that your function was continuous; in fact it was not clear to me in this formulation that it would be continuous at the dyadic rationals). –Henning Makholm (talk) 09:48, 20 December 2009 (UTC)[reply]
Thanks anyway. Note that (x(t), y(t)) represents the Hilbert curve exactly as shown in the linked picture, with the self-similar parametrization. Thus &c. In particular the above x and y do define continuous functions, although it is not apparent from the formulas (unless, of course, I made a mistake in deriving them). If I'm not wrong, your inversion argument also says that the above x_k and y_k define a bijection between the Cantor spaces of binary sequences, 2^{N+} → 2^{N+} × 2^{N+}, actually a homeomorphism, which makes sense. --pma (talk) 11:20, 20 December 2009 (UTC)[reply]

So, I'll try to plot these graphs in the holidays. For whoever is interested: the above x_k and y_k actually define a homeomorphism h: 2^{N+} → 2^{N+} × 2^{N+} such that for all binary sequences t and t' that are binary expansions of the same dyadic rational, the corresponding h(t) := (x, y) and h(t') := (x', y') give binary expansions of the same pair of dyadic rationals (there are only a small number of cases to check). Therefore this map passes to the quotient, as it has to, producing a continuous surjective map H: I → I×I, which is the (Hilbert variant of the) Peano map shown in the linked article, from which I deduced the above definition of x_k and y_k. --pma (talk) 13:31, 22 December 2009 (UTC)[reply]

X-coordinate of the Hilbert square-filling curve.

To the interested reader: I learnt how to make decent graphs of the above functions with Maple. Here they are.... Note that for time 0≤t≤1/2, x(t) varies from 0 to 1/2 while y(t) covers the whole interval [0,1]. --pma (talk) 23:56, 24 December 2009 (UTC)[reply]
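For anyone who wants to reproduce such plots without re-deriving the binary formulas (which did not survive above), here is a sketch in GP using the standard index-to-coordinates recursion for the Hilbert curve; this is my own illustration, not pma's digit formulas, and the orientation may differ from the linked picture:

  \\ map an index d in [0, n^2) on the order-n Hilbert curve (n a power of 2) to (x, y)
  d2xy(n, d) = {
    my(x = 0, y = 0, t = d, s = 1, tmp, rx, ry);
    while (s < n,
      rx = bitand(1, t \ 2);
      ry = bitand(1, bitxor(t, rx));
      if (ry == 0,
        if (rx == 1, x = s - 1 - x; y = s - 1 - y);
        tmp = x; x = y; y = tmp);            \\ reflect/rotate the subsquare
      x += s * rx; y += s * ry;
      t \= 4; s *= 2);
    [x, y];
  }
  \\ sample the coordinate function x(t) at t = (k-1)/n^2, ready for plotting
  n = 64; xs = vector(n^2, k, d2xy(n, k - 1)[1] / (n - 1));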


December 20

hyperbolic tangent derivative

How is this expressed in Excel? Most references show it as 1 − tanh^2(x). In Excel is this 1-tanh(x)^2? 71.100.0.206 (talk) 01:55, 20 December 2009 (UTC) [reply]

Why did you revert someone else's inquiry? Did you not know that this is prohibited? --PST 03:19, 20 December 2009 (UTC)[reply]
Presumably it was an accident... in any case, thanks for finding and fixing it. Eric. 131.215.159.171 (talk) 12:59, 20 December 2009 (UTC)[reply]
Yes. The notation f^2(...) for f(...)^2 is especially common when f is a (broadly) trigonometric function. This notation conflicts with the (also common) use of f^−1 for an inverse function, but that cannot be helped. –Henning Makholm (talk) 02:12, 20 December 2009 (UTC)[reply]
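(In Excel itself that would be, e.g., =1-TANH(A1)^2, assuming the input value sits in cell A1.)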
Indeed, just ensure you never combine the notations as sin^−2 x - you will cause people's heads to explode! --Tango (talk) 13:05, 20 December 2009 (UTC)[reply]
I agree, but if pressed, I would probably interpret that as . Likewise, in an unofficial context, I might use A^−T to mean A inverse transposed. -- Meni Rosenfeld (talk) 13:29, 20 December 2009 (UTC)[reply]
I've used that one in papers. 195.128.250.121 (talk) 00:12, 21 December 2009 (UTC)[reply]
But it could easily mean 1/sin^2 x. --Tango (talk) 17:39, 20 December 2009 (UTC)[reply]
Or arcsin(arcsin(x)). –Henning Makholm (talk) 22:18, 20 December 2009 (UTC)[reply]
Help! My head! No more! :) Dmcq (talk) 00:46, 21 December 2009 (UTC)[reply]
So soon? And we didn't even talk about how superscripts denote higher-order derivatives, with the possible extension of using negatives for antiderivatives... -- Meni Rosenfeld (talk) 16:00, 21 December 2009 (UTC)[reply]
I think all sane people use f^(n) rather than f^n for higher derivatives. There's always the insane authors to worry about, though. Algebraist 16:20, 21 December 2009 (UTC)

Local class field theory

Suppose is a tower of finite extensions of p-adic fields, pairwise Galois, with L/E abelian. Let . Let . One can easily see that every element of G fixes E and H (not pointwise). Therefore, we have an action of G on the quotient group , that is, a homomorphism

.

The Artin map gives us an isomorphism of with A, so in fact what we have is a homomorphism . One sees that A is in the kernel of this homomorphism.

Now, A is a subgroup of G. As such, conjugation gives us a homomorphism . As A is abelian, it is in the kernel of this homomorphism. I claim: (1) these two homomorphisms and are equal.

With some work I was able to show that if B is cyclic, then G is the direct product of A and B (that is, is trivial) if and only if is trivial. In fact I could show this holds whenever B is a product of cyclic groups of pairwise relatively prime orders. (This requires Hilbert's Theorem 90.) I claim: (2) if B is abelian, then G is the direct product of A and B if and only if is trivial. One of the directions -- G is a direct product implies is trivial -- is easy.

Finally, I am (3) looking for conditions under which splits, i.e., conditions under which G is a semi-direct product of A and B.

So: I have been unable to find any proof for claims (1) and (2). (I don't know if they are true or not, I conjectured them.) In particular, with (1), I don't have a sufficiently explicit form for the Artin map to even attempt a proof. As for (3), I don't really have any ideas. Would anybody be able to give any guidance with these 3 points? I'd like hints -- very small hints -- just to point me in the right direction. Thanks. Eric. 131.215.159.171 (talk) 02:40, 20 December 2009 (UTC)[reply]

New Identity?

Hi everyone, I found this identity (for which I cannot think of any practical use): . Anyway, I haven't checked it, so could anyone help me validate it? P.S. There is a possibility of a constant appearing which has to be added to one side. Thanks! The Successor of Physics 11:31, 20 December 2009 (UTC)[reply]

If it's an identity then it should be true for any valid values of x, B and C (i.e. as long as the term in the square root is not negative and the terms inside the logs are positive), yes? So let's try:
But 14.76... is not equal to 11.54... Have I made a mistake? Gandalf61 (talk) 12:27, 20 December 2009 (UTC)[reply]
If you try x = 0, then the left side goes to one but the right side still depends on B and C. Eric. 131.215.159.171 (talk) 12:52, 20 December 2009 (UTC)[reply]
Look at the PS(It's obvious this was derived from an antiderivative)!The Successor of Physics 12:56, 20 December 2009 (UTC)[reply]
Sorry, they're not equal up to an additive constant either. Try and the difference between the terms will be different than with .
Anyway, any identity looking like what you've written above can probably also be reached with basic algebra. -- Meni Rosenfeld (talk) 13:21, 20 December 2009 (UTC)[reply]

They can't be equal when x is negative. Michael Hardy (talk) 17:21, 20 December 2009 (UTC)[reply]

Do you have any reason to believe that this "identity" holds? I don't see any. It kinda seems like nonsense. For example, if you let x = C = 1, then you end up with . Taking the ln of both sides, this clearly doesn't hold, since √B grows much faster than ln B. Rckrone (talk) 19:21, 20 December 2009 (UTC)[reply]

Unless you mean that for a specific C and B it holds for all positive x, in which case you can find such B and C by solving C ln B = 1 and C = sqrt(2(B − ln C)/ln B). In terms of this additive constant you mentioned, I'm not sure where you mean it should be, but with enough arbitrary constants in the right places I'm sure you could make things work out for more general B and/or C. Rckrone (talk) 19:37, 20 December 2009 (UTC)[reply]

Hey, thanks, Rckrone (I'm an idiot)! The Successor of Physics 05:54, 21 December 2009 (UTC)[reply]

Rouché's theorem is so called, because ... ? Was there a mathematician named Rouché? --Andreas Rejbrand (talk) 15:30, 20 December 2009 (UTC)[reply]

Apparently so. Why it's not in the article... - Jarry1250 [Humorous? Discuss.] 15:46, 20 December 2009 (UTC)[reply]
Thank you. --Andreas Rejbrand (talk) 16:08, 20 December 2009 (UTC)[reply]
It is almost certainly named after Rouché, but that doesn't mean he had anything to do with coming up with it! Mathematical theorems are often named after someone that popularised the theorem rather than first proved it. Sometimes they are named after someone that conjectured it, rather than proved it, as well - eg. Wile's theorem. --Tango (talk) 16:13, 20 December 2009 (UTC)[reply]
You mean Wiles' theorem (not that that exists any way).- Jarry1250 [Humorous? Discuss.] 21:55, 20 December 2009 (UTC)[reply]
Obligatory link to Stigler's law of eponymy (due to Merton). Algebraist 20:38, 20 December 2009 (UTC)[reply]
Wow, in this case it was proved by the person it's named after, see Theory of complex functions By Reinhold Remmert p. 392.--RDBury (talk) 06:31, 21 December 2009 (UTC)[reply]
Resolved

Matrix subgroups

Let Mat_n(R) denote the space of n×n matrices with real entries, and let Sym_n(R) denote the space of n×n symmetric matrices with real entries. Given M ∈ Sym_n(R), let G_M denote the set of X ∈ Mat_n(R) such that X^T M X = M, where X^T denotes the transpose of X. I started thinking about the G_M by thinking about metrics and isometries. For example, the orthogonal transformations are the special case when M is the n×n identity matrix. The matrices X ∈ G_M are the linear transformations which preserve the two-form (u,v) ↦ u^T M v. It's easy to show that each G_M forms a group under matrix multiplication. Also, the G_M seem like they would quite like to be vector spaces: for example, if X, Y ∈ G_M and λ, μ ∈ R then λX + μY ∈ G_M. The only problem is that the zero matrix does not belong to G_M unless M is itself the zero matrix, and that is a very dull example. There is some linear structure here too. If X ∈ G_M and X ∈ G_N then X ∈ G_{λM+μN} for constants λ, μ ∈ R.

  • I was wondering what kind of algebraic structure the GM have. They seem like they would like to be rings but with multiplication and addition reversed; i.e. multiplication taking the role of addition (e.g. there is a multiplicative identity, but no additive identity) and vice versa. But that doesn't quite work either because they're not abelian with respect to multiplication.
  • I was wondering how the GM fit together in the whole space of matrices.

I'm more of a recreational algebraist; so go easy on me. Please try to Wikify as much as possible. ~~ Dr Dec (Talk) ~~ 22:14, 20 December 2009 (UTC)[reply]

These are the indefinite orthogonal groups, of which I know nothing, but the article may be of use. Note that your statement "if X, Y ∈ G_M and λ, μ ∈ R then λX + μY ∈ G_M" is not true, and these groups aren't much like linear spaces. They are manifolds, though, and hence Lie groups. Algebraist 23:00, 20 December 2009 (UTC)[reply]
Yeah, I wrote the wrong thing; thanks. What I meant to say was that if X ∈ G_M and X ∈ G_N then X ∈ G_{λM+μN}.
In that case you're just saying that the set of symmetric bilinear forms preserved by a matrix X forms a vector space, which is both true and unsurprising. Algebraist 16:17, 21 December 2009 (UTC)[reply]
If you want ring-like structures along these lines, you could look at the associated Lie algebras, which consist of matrices X such that X^T M = −MX. Algebraist 23:41, 20 December 2009 (UTC)[reply]
I would also point out that a structure is not even slightly ring-like unless its two rules of composition are connected by the distributive law in the right direction. If distributivity is lacking, then it matters not at all whether each composition viewed separately has the properties one would expect of a ring addition/multiplication. –Henning Makholm (talk) 23:28, 20 December 2009 (UTC)[reply]
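To make the connection with the indefinite orthogonal groups concrete, here is a quick numerical check in GP (my own illustration, not from the thread) that a hyperbolic rotation lies in G_M for the indefinite form M = diag(1, −1):

  M = [1, 0; 0, -1];
  t = 0.7;                                     \\ any rapidity will do
  X = [cosh(t), sinh(t); sinh(t), cosh(t)];    \\ a "Lorentz boost"
  X~ * M * X                                   \\ returns M, up to rounding in the last digits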


December 21

Calculus History

I am reading books on the history of Mathematics and its development, specifically Calculus and I am now further confused. I thought I had it right but I am sure that I don't so maybe some experts here can help clear up a few things. First about the Bernoullis, I know that they were Swiss but I always thought that their background was Italian. Is that true? The article here doesn't really say anything about this. Was it like an Italian family? Is the name Italian? Is the name German or something? Were they originally Italian who then relocated to Switzerland or something?

Second, the more significant question, is about the actual development of Calculus. As I understand, Newton (and Leibniz) are credited with the "invention" of calculus because they proved the Fundamental Theorem of Calculus. But then I learn that Riemann was the one who redefined the integral (using the definition that a function is said to be integrable if for every epsilon greater than zero, there exists a partition such that the upper sum and the lower sum over that partition are within epsilon of each other) which allowed Riemann to prove all the properties of the integral previously known (such as linearity) and he could now integrate functions with discontinuities (even with an infinite...with measure zero as we now know...number of discontinuities) and then Riemann also proved that integration and differentiation are inverse operations with his newly defined integral. So isn't Riemann the one who proved the fundamental theorem of calculus? Why isn't it credited to him? I mean the form we see it in today came from him. -Looking for Wisdom and Insight! (talk) 00:59, 21 December 2009 (UTC)[reply]

Wow, good questions! My information (s:1911 Encyclopædia Britannica/Bernoulli) is that the Bernoullis were fleeing the Spanish when they came to Switzerland about a hundred years before they became famous. It doesn't say whether they actually were Spanish or how they got the name, which doesn't sound Spanish any more than it sounds German. I will note though that 1) Italian is spoken in Switzerland, though generally not as far north as Basel, and 2) People were a bit more flexible about their names then than we are now, so the name they used might change depending on who they were talking to, or they might use a Latin version (which you had to speak to be considered literate at that time). I worked on the Bernoulli articles and I basically had to go by birth and death years to tell them apart; they all used two or three first names and most of the names were used by two or three relatives.
If you're interested in the history of calculus I recommend The Calculus Wars by Jason Socrates Bardi. The short version is at Leibniz and Newton calculus controversy. Anyway, Newton and Leibniz invented calculus using something called fluxions or infinitesimals (depending on which side of the English Channel you were on). By modern mathematical standards they were very non-rigorous and it wasn't until Riemann and Cauchy and their generation that it was all put on a firm footing, whence the Riemann integral etc. My understanding is that part of the motivation for doing this was a scathing criticism of infinitesimals by Bishop Berkeley. This is a case of methods being ahead of the proofs that they work, which happens a lot more than mathematicians would like to think. In this case the methods, known as the methods to calculate infinitesimals, or the infinitesimal calculus, or nowadays just calculus, while not rigorous, at least seemed plausible, so people used them because they were useful. In any case, the development of calculus took place over thousands of years, so deciding who gets credit for it is going to be arbitrary anyway, but that's the way the history of science goes much of the time. A lot of that is my personal viewpoint so take it with a grain of salt, but it does seem to be a more interesting subject than you would think. --RDBury (talk) 05:47, 21 December 2009 (UTC)[reply]

Fréchet Second Derivatives and Taylor Series of matrix functions

Hi all,

Another one from me! I've got a distressingly long list of Christmas work (how cruel!) I've got a big long list of Taylor series to calculate for matrix functions, using the Fréchet derivative - however, my lecturer has failed to give any examples (helpful) nor can I find any on the internet, so I'd greatly appreciate it if someone wouldn't mind showing me an example before I start beavering away at the list!

Say, for any n×n matrix A: f(A) = A^3, and so L(H) = A^2 H + AHA + HA^2 is the Fréchet derivative. Now how do I go about calculating the second (third etc.) Fréchet derivatives? (This is the first example on my list - I have the formula , right?)

Thanks very much for the help (again!), I think once I've got one example sorted I can get going on the rest!

Much appreciated! Typeships17 (talk) 03:32, 21 December 2009 (UTC)[reply]

Yes, that expansion sounds like the evil laugh of your lecturer. The second derivative of A ↦ A^3 is the symmetric bilinear map (U,V) ↦ 1/2(UAV+UVA+VAU+VUA+AUV+AVU). Actually this holds in any Banach algebra; if it is commutative, you find . A very efficient way to prove that a map is C^k, and to compute its differentials, is the converse of the Taylor theorem: a map from an open set of a Banach space X to a Banach space Y is of class C^k if and only if it has a polynomial expansion of order k at any point of the domain, with continuous coefficients, and with a remainder which is locally uniformly o(|h|^k). For k=1 this is the definition of C^1 of course. --pma (talk) 09:51, 21 December 2009 (UTC)[reply]
What is that 1/2 doing there? Algebraist 12:44, 21 December 2009 (UTC)[reply]
I wonder too... --pma (talk) 12:57, 22 December 2009 (UTC)[reply]
Hah, I wouldn't be all too surprised if he did laugh like that. That's great but how did you go about actually calculating it? I'm not sure I follow quite how to get from the first derivative to the second and so on, perhaps the concept of going from a linear to bilinear to trilinear etc map is bewildering me. What limit gave you your (1/2?)UAV+UVA+VAU+VUA+AUV+AVU? Many thanks again, Typeships17 (talk) 14:24, 21 December 2009 (UTC)[reply]
You just take the first derivative and perturb A again: . The Taylor series you end up with will of course just be what you get by multiplying out (A+H)^3. Algebraist 16:13, 21 December 2009 (UTC)[reply]
That's great, I've got the idea now, thanks ever so much - now onto A^−1, this one should prove a bit more challenging! (If anyone has any tricks for the general form of the nth derivative, please feel free to let me know; I managed to batter my way through the first but no further...) Anyway, many thanks again to both of you :) Typeships17 (talk) 17:58, 22 December 2009 (UTC)[reply]
Invertible matrices (more generally, invertible elements of a Banach algebra) form an open set, and the inversion map is analytic: if the element A is invertible and H is small enough, you have the expansion (a real evil laugh): (A+H)^-1 = A^-1 - A^-1 H A^-1 + A^-1 H A^-1 H A^-1 - ...
From this you can find all the differentials, symmetrizing. E.g. the first differential at A is H ↦ −A^-1 H A^-1, and the second is (U,V) ↦ A^-1 U A^-1 V A^-1 + A^-1 V A^-1 U A^-1. --pma (talk) 09:15, 24 December 2009 (UTC)[reply]
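A numeric sanity check of that expansion in GP (my own illustration; the particular matrices are arbitrary):

  A = [2, 1; 1, 3];  H = [1, -2; 4, 1] * 1e-3;
  (A + H)^-1
  A^-1 - A^-1*H*A^-1 + A^-1*H*A^-1*H*A^-1   \\ agrees with the line above to O(|H|^3)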

Vector cosine

I was looking at Amazon.com's "people who bought this item also bought..." algorithm and I noticed that they use vector cosine to group users. For example, if I bought items 1, 6, and 9 (the product ID for each item), my purchase vector would be {1,6,9}. If you bought {5,6,7}, the cosine of the two vectors would be 4.86 (if my math is correct). I know that when the vectors are identical, the cosine of the vectors is 1. What is the domain of vector cosine? Is there a limit that indicates "opposite", such as when comparing {1,2,3} to {3,2,1}? Is there a limit that indicates "nothing in common", such as when comparing {1,2,3} to {4,5,6}? I'm curious about how accurate it is to use vector cosine to identify how similar two vectors are. -- kainaw 05:30, 21 December 2009 (UTC)[reply]

You should check out the articles Collaborative filtering and Netflix prize. The vector cosine seems to be a term used by people who specialize in this area rather than most mathematicians, but my research indicates that it's simply the cosine of the angle between the two vectors. If two vectors are nearly the same direction then the angle between them is nearly 0 and the cosine is close to 1. If vectors aren't close to the same direction then the cosine is closer to 0 or even negative. It turns out that the cosine is easier to compute than the angle itself (see Angle#The dot product and generalisation), so it's useful for doing computation. --RDBury (talk) 06:11, 21 December 2009 (UTC)[reply]
Thank you. That is a good link. I guess I'm just doing cosine of vectors wrong since I get 4.86. I thought cosine was limited to the range -1 to 1. Perhaps I'm just adding or multiplying wrong. -- kainaw 07:20, 21 December 2009 (UTC)[reply]
I have no idea what Amazon does (a link to your source would be welcome), but the purchase vectors above should probably be (1,0,0,0,0,1,0,0,1) and (0,0,0,0,1,1,1,0,0). Their so-called cosine similarity (which is indeed between -1 and 1) is 1/3. It can only be negative when some of the entries are, which is impossible in this particular setting. Negatives in general indicate opposite directions, with -1 polar opposites. 0 indicates no common items here, or orthogonality in the general case. Also note that {1,2,3} and {3,2,1} are the same, not opposite. -- Meni Rosenfeld (talk) 16:12, 21 December 2009 (UTC)[reply]

I'm inclined to agree with Meni Rosenfeld, and the "cosine" reported to be 4.86 above must be a mistake: such a cosine cannot exceed 1. (There are complex numbers whose cosine is a real number greater than 1, but that doesn't apply here.) Michael Hardy (talk) 20:32, 21 December 2009 (UTC)[reply]

I did have some math mistake somewhere. The cosine is 0.91. The formula shown in all of the papers I've read is cosine(A, B) = (A·B)/(||A||*||B||). At first, I thought ||A|| was the length of A (how many items are in A). I then noticed that it was the square root of the sum of all the elements of A squared. The dot product is a bit of a problem - what if the vectors are different lengths? Just use zeros to pad the smaller one? I don't see how you can get 0 since all the vectors being used are positive integers greater than zero. -- kainaw 20:47, 21 December 2009 (UTC)[reply]
Again, I think you are confused about how vectors represent purchases. The simple way (which again, may or may not be what Amazon does) is to have a vector whose length is equal to the total number of items available for purchase, and which has 1 in indexes of purchased items and 0 elsewhere. With this encoding, the cosine similarity in the example you gave is 1/3, like I said. -- Meni Rosenfeld (talk) 05:05, 22 December 2009 (UTC)[reply]
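For concreteness, the 1/3 can be checked directly in GP (a sketch; the nine-item universe is just an assumption to keep the vectors short):

  u = [1, 0, 0, 0, 0, 1, 0, 0, 1];   \\ bought items 1, 6, 9
  w = [0, 0, 0, 0, 1, 1, 1, 0, 0];   \\ bought items 5, 6, 7
  (u * w~) / sqrt(norml2(u) * norml2(w))   \\ = 1/3; only item 6 is shared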
None of the examples that I've seen use 0/1 representation. They all use a vector of integer identifiers. Many refer to it as Pearson product-moment correlation coefficient. I'm now reading about the "centering" involved to see how that affects the cosine. -- kainaw 05:15, 22 December 2009 (UTC)[reply]
If those examples are online, please provide a link. My guess is that they present a list of IDs for compactness but do the calculations with a 0/1 representation.
In any case, it should be crystal clear that what you have done - multiply out the IDs - makes absolutely no sense whatsoever. For starters, IDs are on a nominal scale, while multiplication requires a ratio scale (Level of measurement) (with centering, you only need an interval scale). Second, it creates completely absurd situations. {1,2,3} has <1 similarity with {3,2,1} although they are the same purchases. {1,2,3} had >0 similarity with {4,5,6} although they have nothing in common. The similarity between {1,2} and {1,3} is different than between {5,10} and {6,10} although they have the same structure. The similarity between {1,3,5,7,92678} and {2,4,6,8,93412} is very high although they have nothing in common, while the similarity between {1,2,3,4,87154} and {1,2,3,75642,5} is close to 0 although they have a lot in common. -- Meni Rosenfeld (talk) 05:45, 22 December 2009 (UTC)[reply]
See "Geometric Interpretation" in Pearson product-moment correlation coefficient. It uses {1,2,3,4,5,8} and {.11,.12,.13,.14,.15,.18}. I'm going to do some tests with centered vectors to see if the results make sense. According to the article, 1/-1 is highly correlated and 0 is no correlation. -- kainaw 06:08, 22 December 2009 (UTC)[reply]
This has nothing to do with purchases. Here each index represent a country, the first vector gives the GNP for each country and the second vector gives the poverty for each country. Taking the dot product (after centering) works, because you are multiplying matching quantitative measurements (the GNP of a country with the poverty of the same country).
In the purchasing scenario, you tried to multiply IDs (which of course cannot be multiplied) by matching them based on their position in the purchase list. So if the 8th item customer A purchased is a children's book (ID 134675) and the 8th item customer B purchased is a shotgun (ID 134677) (made up numbers), you count it as evidence for similarity. And if the 8th item customer A purchased is a children's book, while the 9th item customer B purchased is that very same book, you don't count it as anything.
I don't mean to sound disrespectful, but it seems you are biting off a bit more than you can chew here. You shouldn't try understanding collaborative filtering algorithms if you've not yet mastered basic topics like Pearson's correlation coefficient. -- Meni Rosenfeld (talk) 06:33, 22 December 2009 (UTC)[reply]
I see my mistake now. In collaborative filtering, the term "similarity" is often used to mean "correlation". In actuality, those are two very different terms. I was trying to see how cosine produced a similarity when all it produces is a correlation. So, my initial assumption that cosine does not produce a valid similarity is correct if the definition of similarity is not rationalized to mean correlation. -- kainaw 12:01, 22 December 2009 (UTC)[reply]
That's not quite right. For sure, "correlation" is one thing and "similarity" is another. Indeed, the correlation between GNP and poverty has nothing to do with similarity. But nobody tries to imply that one means the other. Rather, it is claimed that the correlation between the features of two items can indicate similarity between the items. This may or may not be valid, depending on the features we choose.
In the case of Amazon, the "items" are customers. The features are the products they purchased, or more specifically, the ith feature is a 0/1 variable indicating if a customer purchased product i. It is claimed (or not. Where is the link to Amazon's algorithm?) that correlation between the features of customers indicates similarity between the customers. For example, customer 13 purchased products 1,2 but not 3, and customer 25 also purchased products 1,2 but not 3. This is used as evidence that customers 13 and 25 are similar (have the same shopping preferences, or whatever).
Of course, there are countless other ways to approach the problem of similarity, but representing items as feature vectors is very powerful. Even then, cosine similarity is just one of the ways to compute a correlation metric between feature vectors - and hence, by assumption, a similarity metric between items. -- Meni Rosenfeld (talk) 12:32, 22 December 2009 (UTC)[reply]
The length of A can be the number of non-zero coordinates in some contexts, e.g. coding theory, but in this case it means length in the Euclidean sense. The cosine formula includes the lengths of the vectors to allow for varying lengths of the vectors involved.--RDBury (talk) 05:08, 22 December 2009 (UTC)[reply]

Calculi(Non Newtonian)

Everybody, I don't seem to understand what's going on in the pages "Other Calculi" here [1]. Can anyone explain it to me? Thanks! The Successor of Physics 06:21, 21 December 2009 (UTC)[reply]

I gather the idea is a variation on the definition of derivative using multiplication rather than addition. The result is something like a logarithmic derivative. Did you have a specific question?--RDBury (talk) 06:52, 21 December 2009 (UTC)[reply]
RDBury, I know that. Maybe I should restate my question. I meant that the bijective function φ there should have two inputs, e.g. addition: in x + y, x and y are the two inputs. How come in those pages the function φ only has one input? The Successor of Physics 08:03, 21 December 2009 (UTC)[reply]
The function φ is not supposed to be addition in ordinary derivatives and multiplication in multiplicative derivatives. Rather, it is the transformation that is applied to transform ordinary derivatives to new derivatives - it is the identity for ordinary derivatives (no transformation), and the exponential function for multiplicative derivatives (exponentiation transforms addition to multiplication - e^(x+y) = e^x e^y). -- Meni Rosenfeld (talk) 10:27, 21 December 2009 (UTC)[reply]
Thanks, Meni!The Successor of Physics 14:09, 21 December 2009 (UTC)[reply]
To make sure my conception is correct: so what you mean is, if f is the function with two inputs, e.g. multiplication, then f(x, y) = φ(φ^−1(x) + φ^−1(y)). Am I correct? The Successor of Physics 14:18, 21 December 2009 (UTC)[reply]
Precisely. -- Meni Rosenfeld (talk) 15:51, 21 December 2009 (UTC)[reply]
Thanks! The Successor of Physics 04:03, 22 December 2009 (UTC)[reply]
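As a concrete instance of the φ = exp case above (my own example, not from the linked pages), the multiplicative derivative works out to

f*(x) = exp((ln f)'(x)) = e^(f'(x)/f(x)), so for f(x) = e^(x^2) one gets f*(x) = e^(2x).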
Resolved

Article introducing complex numbers

I would like to request a new article introducing the concept of complex numbers. The current article does not introduce them in a way that's accessible to someone who does not already have a great deal of mathematical knowledge. After looking up educational resources elsewhere on the web I found the concept fairly straightforward and logical but I'm not qualified to write it myself.

Just the introduction to 'Complex number' contains 27 links to other articles, of which at least half are similarly dense and inscrutable. As it stands there's no way for someone to develop an understanding of these concepts from reading the wikipedia because there's no starting point, you just wind up clicking between articles full of thick and unelaborated jargon.

FTA: Complex numbers form a closed field somehow with real numbers. OK, so what's a closed field? Don't know, go to the article. OK, it's some type of field, what's a field? Go to the article, and before I've left the introduction I'm wondering what 'quintic relations' are or an 'integral domain' and if I'd only read the wiki I still wouldn't know what a complex number is or what it has to do with anything. Now I'm not averse to learning all this, I'd love to understand it, but clicking from article to article isn't helping. It's frustrating in a way that other areas of the wikipedia aren't, I don't experience this in the physics or computer science sections for example, if understanding one area depends on understanding another one can usually just click through and read the prerequisite article without falling down the rabbit hole. —Preceding unsigned comment added by 196.209.232.87 (talk) 14:39, 21 December 2009 (UTC)[reply]

Hmm. I can't actually see a Reference Desk question anywhere in your complaint. You could add your request to Wikipedia:Requested articles/Mathematics, or you could take it to Wikipedia talk:WikiProject Mathematics. Gandalf61 (talk) 14:56, 21 December 2009 (UTC)[reply]
your complaint belongs in Talk:Complex_number. Ask short questions here and get short answers. Say, Question: "what is a complex number?" Answer: "a complex number is an expression of the form a+ib where a and b are real numbers and i·i=−1". Go on, ask your next question. Bo Jacoby (talk) 14:59, 21 December 2009 (UTC).[reply]
@Gandalf61 - that is what I needed to know, will do, thx —Preceding unsigned comment added by 196.209.232.87 (talk) 15:10, 21 December 2009 (UTC)[reply]

Unfortunately, we do not write multiple articles on a particular concept (in accord with the guideline that Wikipedia is an encyclopedia, and not an introductory comprehensive textbook). However, I do not mind explaining the terms you have mentioned.

Before I proceed further, I would recommend that you read the article on rings, for this provides a reasonably basic introduction to the theory of rings, integral domains and fields (it would be appropriate for you to read from this section onwards).

The set of complex numbers, together with its two operations (addition and multiplication), may be defined as follows:

C = {a + bi : a, b ∈ R}, where R is the set of real numbers, and i is the "imaginary unit"; it satisfies the relation i^2 = −1, or i·i = −1 (intuitively, it is a "root of −1").
If z_1 = a_1 + b_1 i and z_2 = a_2 + b_2 i, we define their sum as z_1 + z_2 = (a_1 + a_2) + (b_1 + b_2)i.
If z_1 = a_1 + b_1 i and z_2 = a_2 + b_2 i, we define their product as z_1 z_2 = (a_1 a_2 − b_1 b_2) + (a_1 b_2 + a_2 b_1)i.
Hope this helps, and be sure to read this article from this point onwards. --PST 15:10, 21 December 2009 (UTC)[reply]
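For instance, multiplying out with the product rule above (a quick worked example):

(2 + 3i)(1 + 4i) = (2·1 − 3·4) + (2·4 + 3·1)i = −10 + 11i.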
@above - thank you, clearer now. —Preceding unsigned comment added by 196.209.232.87 (talk) 15:50, 21 December 2009 (UTC)[reply]
I don't think people should need to read the Ring article to understand the Complex number article, so there is definitely an issue with the Complex number article. Math articles tend to be written by mathies for mathies, and unfortunately (and contrary to WP:MOSMATH) that sometimes includes articles that should be (at least partly) understandable to typical high school students. It's not a good idea to create new articles to solve this; some people have tried this with articles that have names like 'Introduction to X'. Not only do they amount to content forks but, judging from the amount of heat they generate in AfD discussions, they cause more problems than they solve. The correct solution is to have a non-technical, jargon-free introductory section in each article that non-mathies are likely to come across. For the moment it would be a good idea to go over the Complex numbers article with an eye to making the introductory section more accessible, but maybe a more general review is in order. --RDBury (talk) 05:53, 22 December 2009 (UTC)[reply]
This is a discussion for Talk:Complex number or WT:WPM, not here. Algebraist 13:23, 22 December 2009 (UTC)[reply]

Is Principles of Mathematics a standard reading in math degrees? Is it still worth reading?--ProteanEd (talk) 17:36, 21 December 2009 (UTC)[reply]

Definitely not standard reading. It's worth reading from a historical perspective, but not as a way of learning logic. Mathematical logic has come on a long way in the last 100 years, so modern books are a better choice. It's also a very difficult read - I only got about half way through! --Tango (talk) 17:43, 21 December 2009 (UTC)[reply]
(edit conflict) It certainly wasn't mentioned in my degree programme. I haven't read the work, but from glancing at it, it doesn't seem to be a work of mathematics per se, but rather the philosophy of mathematics, which is not normally taught to mathematics undergraduates in any serious way. Even within the philosophy of mathematics, I believe Russell's logicism is rather out of fashion nowadays, though there are certainly still people around who are logicists in some sense. Algebraist 17:47, 21 December 2009 (UTC)[reply]

Reducible Polynomials With All But Constant Coefficient Equalling 1

Resolved

Is there any established theory for determining for which C > 1 the polynomial x^n + x^(n−1) + ... + x + C, n even, is reducible? I was previously unaware of the facts that 1) for n=4 you get a reducible polynomial for C=12 and 2) for n=8 you get a reducible one for C=20. Empirically, it appears these are the only cases (I ran a PARI/GP program to C=1000 and n=240, but it doesn't generate a related result if there is a smaller odd n with reducibility in addition to some even n, so this list might be a little short--it seems unlikely it is). Julzes (talk) 18:52, 21 December 2009 (UTC)[reply]

Reducible over what? Z? Algebraist 19:15, 21 December 2009 (UTC)[reply]
Yes, over Z. Thanks for reminding me.Julzes (talk)

I decided to just mark this as resolved. If anybody reading this happens to have known about these two oddballs, let me know, but it just looks like a nice problem to prove their uniqueness, and I'm sure there is no theory to them.Julzes (talk) 19:20, 21 December 2009 (UTC)[reply]

googol

A friend of mine & I decided to look for the googol as a power of 2 (even moderately smart people get bored). We never thought to use the Google calculator (we didn't even know it existed). So, the TI-83. Is 2^332.1928094886 really EXACTLY one googol? Seems amazing... —Preceding unsigned comment added by 174.18.161.113 (talk) 21:28, 21 December 2009 (UTC)[reply]

No, it can't be, because log2(10) is a transcendental number. See the Gelfond-Schneider theorem. --Trovatore (talk) 21:31, 21 December 2009 (UTC)[reply]
I think that wins the prize for biggest hammer used to crack a nut. Algebraist 21:34, 21 December 2009 (UTC)[reply]
Mm, fair enough, especially given that when I looked it up and thought it through, I realized that to apply G-S, you first need to show that log2(10) is irrational, which already suffices to answer the question. At first glance I didn't see an easier way of showing log2(10) is irrational than using G-S, but actually it follows easily from the fundamental theorem of arithmetic. --Trovatore (talk) 21:47, 21 December 2009 (UTC)[reply]
To me, the question does not seem serious. The person asking the question is well aware of the fact that if there are more digits the calculator cannot show them. In fact, I imagine that in order to get 13 significant figures on a TI-83, one must subtract (or divide) out the whole part of the exponent, and this process seems too advanced for someone who did not know the answer to the question asked. Julzes (talk) 22:55, 21 December 2009 (UTC)

Here's a really simple way to see this: Suppose

2^(m/n) = 10^100,

where m, n are positive integers. Then

2^m = 10^(100n) = 2^(100n) · 5^(100n),

and so (noting m > 100n, since 10^(100n) > 2^(100n))

2^(m−100n) = 5^(100n).

But that is impossible because it says an even number equals an odd number. Any high-school student will understand that one—no Gelfond–Schneider theorem needed. Michael Hardy (talk) 00:12, 22 December 2009 (UTC)[reply]

Yes, that's what Trovatore and I were alluding to above. Algebraist 00:37, 22 December 2009 (UTC)[reply]
Well, it is a bit simpler than the argument I had in mind, as it doesn't need the full FTA. --Trovatore (talk) 10:12, 22 December 2009 (UTC)[reply]
Michael Hardy, you could use this simpler method to prove it is transcendental

which is impossibly transcendental.The Successor of Physics 04:19, 22 December 2009 (UTC)[reply]
I wasn't trying to prove it was transcendental. But as far as "simpler" goes, the fact is any high-school student can understand my argument, whereas yours would have to rely on more sophisticated results such as Gelfond–Schneider. What exactly did you have in mind as your grounds for inferring that that number is transcendental? Gelfond–Schneider? Or something else? In order to use Gelfond–Schneider, you'd need to know that ln 5/ln 2 is irrational, and the proof of that is just what I gave. I suspect your comments lack all merit. Michael Hardy (talk) 05:11, 23 December 2009 (UTC)[reply]
To answer what I think the OP intended to ask - yes, 2^x = 10^100 exactly, where x is approximately 332.19280948873623478703194294. -- Meni Rosenfeld (talk) 05:00, 22 December 2009 (UTC)[reply]
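For more digits than a TI-83 shows, the exponent can be computed at high precision in GP (a sketch; \p sets the working precision):

  \p 40
  100 * log(10) / log(2)   \\ log_2 of a googol: 332.192809488736234787031942948939017586...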

December 22

Desperately Seeking a Faster Algorithm

Pari/GP is remarkably slow at determining whether a polynomial is irreducible. Now, it may be that the problem is just generally hard, but I find it difficult to believe that the following program to generate the smallest coefficients to build a sequence of irreducible polynomials cannot be made faster. It involves small positive coefficients, and I would think that there is at least a better way than simply using PARI/GP's polisirreducible function for the sizes of coefficients involved. Here is the program as it now stands (and a good many of the terms it outputs are at oeis:A171810):

x=1;for(d=1,4000,c=1;x=x+v^d;while(polisirreducible(x)-1,c+=1;x=x+v^d;next());print1(c" "))

If anybody knows of or can think up an algorithm for determining irreducibility (over Z) for the special case of small positive coefficients, it will make the terms of the sequence given more open to study. As things stand, certain things about the coefficients are more mysterious than they might be with access to hundreds of thousands rather than merely thousands of terms. Much appreciation for any worthwhile answer.Julzes (talk) 03:16, 22 December 2009 (UTC)[reply]

I'm not an expert in the field, but you're already using a computer algebra program and the people who write them generally know what they are doing. Not that the one you're using is perfect, you might want to try some other ones to see if they work any better, but I think anything you're going to learn here will already be incorporated into most programs with a good reputation.--RDBury (talk) 06:32, 22 December 2009 (UTC)[reply]

Well, I do appreciate that point of view, and generally I'd also guess the polisirreducible function is about as good as possible, but I was wondering whether there might be something a little more tuned to small coefficients (mostly 1s, regular 2s, few 3s, one 4 and that's it). A function like polisirreducible is going to be set up for the most general case, and is unlikely to be close to optimal for polynomials that are so strongly biased toward small positive coefficients.Julzes (talk) 08:19, 22 December 2009 (UTC)[reply]
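One cheap pre-test that sometimes helps (a sketch, and only a sufficient condition): a polynomial whose leading coefficient is not divisible by p and which is irreducible mod p is irreducible over Q, hence over Z here since the content is 1, and the mod-p test is much faster than the full test over Z. Some irreducible polynomials (e.g. x^4 + 1) are reducible mod every prime, so fall back to the full test when the pre-test is inconclusive:

  isirr(q) = {
    if (pollead(q) % 2 && polisirreducible(q * Mod(1, 2)), return(1));  \\ irreducible mod 2
    if (pollead(q) % 3 && polisirreducible(q * Mod(1, 3)), return(1));  \\ irreducible mod 3
    polisirreducible(q);   \\ inconclusive: do the full test over Z
  }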

I just came back to share an intriguing result of a different, related problem. Starting with constant coefficient 1, the problem is to create a sequence of polynomials that are relatively prime to each other using the smallest positive coefficients. While acting in no particularly orderly way up to the 89th degree, from there up to at least the 1000th the coefficients are all 1s at degrees not congruent to 3 modulo 5, and are 2s at those degrees. I'm not looking for, and I don't expect, an explanation. Julzes (talk) 09:41, 25 December 2009 (UTC)[reply]

Summation of finite differences

I am trying to do a definite sum from a to b, viz.:
sum [ (n+j)! / n! ]
I had thought to sum it by parts, i.e.:
sum [v * delta(u) ] = [ u*v {with appropriate limits for a and b} ] - sum [ u * delta(v) ]
where v is [ (n+j) ! / n! ] and delta(u) is ( 1 ).
Using information from previous posts:
delta [ (n+j) ! / n! ] = [ j * (n+j)! / (n+1)! ] = [ j / (n+1) ] * [ (n+j)! / n! ]
Summing delta(u) leads to ( n ), or to [ (n+1) - 1 ].
[Being a definite sum, it seems there should be no constant of summation to make it ( n+1 ).]
The sum(delta(u)) combines with delta(v) to give
sum { [ j * (n+j) ! / n! ] - [ ( j / (n+1) ) * (n+j)! / n! ] }
The first part { sum [ j * (n+j)! / n! ] } fits in nicely with the original sum [ (n+j)! / n! ].
But what to do with the second part { sum [ ( -j / (n+1) ) * ( (n+j)! / n! ) ] };
there remains a sum whose summand is equal to the original summand times -j/(n+1).
Keep summing by parts and wind up with an answer involving an infinite series?
Or perhaps trying to do the original summation differently?ImJustAsking (talk) 14:11, 22 December 2009 (UTC)[reply]

Are you talking about summing [ (n+j)! / n! ] over n, or over j? Bo Jacoby (talk) 00:53, 23 December 2009 (UTC).[reply]

Sorry about the ambiguity: n goes from a to b, and j is a constant.ImJustAsking (talk) 19:21, 23 December 2009 (UTC)[reply]

So write it as j! times a sum of binomial coefficients, and the sum is just the difference of two binomial coefficients. --pma (talk) 22:03, 23 December 2009 (UTC)[reply]
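Spelling that hint out (my expansion of it, using the hockey-stick identity for binomial coefficients, in the thread's notation):

(n+j)! / n! = j! * C(n+j, j), and sum [ C(n+j, j) ] for n = a to b equals C(b+j+1, j+1) - C(a+j, j+1),

so the requested sum is j! * [ C(b+j+1, j+1) - C(a+j, j+1) ].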

Can I do the following:
Because delta[ (n+j)! / n! ] = j * (n+j)! / (n+1)!
therefore sum{ delta[ (n+j)! / n! ] } = sum { j * (n+j)! / (n+1)! }
where “sum” goes from a to b,
so that by exchanging sides sum{ j * (n+j)! / (n+1)! } = sum{ delta[ (n+j)! / n! ] }
and cancelling “sum” and “delta”, one gets sum{ j * (n+j)! / (n+1)! } = [ (n+j)! / n! ]
Question: may I assume that the quantity on the right side is to be evaluated at a and b+1?
Of course this does not answer my original question, but it would show a way for solving it.ImJustAsking (talk) 23:54, 23 December 2009 (UTC)[reply]

Stationary Points in Higher Dimensions

To identify a stationary point of a function of more than one variable, more specifically f(x,y), do you simply have to identify the points at which every one of its partial derivatives is zero? Also, how do you go about classifying the stationary points as maxima, minima and saddle points? Thanks 92.0.129.48 (talk) 18:58, 22 December 2009 (UTC)[reply]

Yes, a stationary point is one in which f is differentiable and all partial derivatives are 0.
The first step of classification uses the signs of the eigenvalues of the Hessian matrix. In the case of two variables, this reduces to denoting A = f_xx, B = f_xy, C = f_yy (evaluated at the stationary point), and looking at D = AC − B^2 and A. If either is 0, higher order derivatives are required. If D < 0, it is a saddle point. If D > 0, then it is a local minimum if A > 0 and a local maximum if A < 0. -- Meni Rosenfeld (talk) 20:28, 22 December 2009 (UTC)[reply]
In case of nondegenerate critical points the classification is quite simple even in several variables, and it is given by the Morse lemma. Check also Sylvester's law of inertia. --pma (talk) 00:00, 23 December 2009 (UTC)[reply]
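A worked instance of the two-variable test above (my own illustrative example): take f(x,y) = x^3 − 3x + y^2. Then f_x = 3x^2 − 3 and f_y = 2y, so the stationary points are (±1, 0). The second partials are f_xx = 6x, f_yy = 2, f_xy = 0, giving D = 12x. At (1, 0), D = 12 > 0 and A = 6 > 0, so it is a local minimum; at (−1, 0), D = −12 < 0, so it is a saddle point.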

Symbol for 'such that'

Hi all,

I was just wondering if there's any symbol (in terms of ∃, ∀, etc.) which is typically used to mean "such that" (there exists A such that B, for example) in mathematics? I'm aware of the vertical bar |, and occasionally the colon, but sometimes these can be unclear in the context: are there any others?

Many thanks, 86.26.6.36 (talk) 22:29, 22 December 2009 (UTC)[reply]

Table of mathematical symbols only lists the colon. -- kainaw 22:40, 22 December 2009 (UTC)[reply]
I usually just use "s.t.". When defining a set as {A such that B} you can use a vertical bar or colon, but I wouldn't use them in any other context - too confusing. --Tango (talk) 22:42, 22 December 2009 (UTC)[reply]
I've seen <math>\ni</math> used to mean "such that". Also a period is often used immediately after a <math>\exists</math> to mean "such that". So the existence of <math>x</math> with property <math>\varphi(x)</math> could be written <math>\exists x \ni \varphi(x)</math> or <math>\exists x .\, \varphi(x)</math>. Of course English words are usually preferable to any of those. Staecker (talk) 22:49, 22 December 2009 (UTC)[reply]
You don't need any symbol, after the existential quantifier, to mean such that. Usually you just put either the formula that follows the quantifier, or the quantifier (plus variable) itself, in round brackets.
The symbol <math>\ni</math> could be useful to translate other instances of such that, such as "given x such that φ(x) holds", in which you have no explicit quantifier symbol. In my experience, however, it sees fairly limited use. --Trovatore (talk) 22:58, 22 December 2009 (UTC)[reply]
Oh, it also occurs to me: The reason you don't use the symbol in existential statements is that such that doesn't actually mean anything in existential statements. "There exists x such that φ(x) holds" is just <math>\exists x\, \varphi(x)</math>.
It does mean something, on the other hand, in universal statements, like "For every x such that φ(x) holds, τ(x) also holds". That statement translates as <math>\forall x\,(\varphi(x) \rightarrow \tau(x))</math>, but could also be written <math>(\forall x \ni \varphi(x))\;\tau(x)</math>.
But again, it could be written that way; usually it isn't. --Trovatore (talk) 23:10, 22 December 2009 (UTC)[reply]
Great, thankyou :) 86.26.6.36 (talk) 23:27, 22 December 2009 (UTC)[reply]
Beware that you can't expect that a random reader (even assumed mathematically literate) will understand <math>\ni</math> used with this meaning if it's not explicitly explained in the surrounding text (and then what would be the point?). For example, I managed to earn a Ph.D. in a fairly logic-heavy area of computer science without ever seeing <math>\ni</math> used to mean anything but "contains as an element". –Henning Makholm (talk) 13:33, 23 December 2009 (UTC)[reply]
I've never seen that symbol used for "contains as an element". I've seen <math>\in</math> used for that purpose, though.--COVIZAPIBETEFOKY (talk) 13:48, 23 December 2009 (UTC)[reply]
But <math>\in</math> means "is contained as an element". -- Meni Rosenfeld (talk) 13:56, 23 December 2009 (UTC)[reply]
... thus Harry <math>\in</math> {Harry, Sally} and {Harry, Sally} <math>\ni</math> Harry. Gandalf61 (talk) 14:07, 23 December 2009 (UTC)[reply]
Shot myself in the foot, there, didn't I? Whoops... --COVIZAPIBETEFOKY (talk) 14:33, 23 December 2009 (UTC)[reply]
I think that usage of <math>\ni</math> is even more obscure than the "such that" meaning. The element almost exclusively goes on the left and the set of which it's an element on the right; there's almost never a reason to reverse them. --Trovatore (talk) 19:29, 23 December 2009 (UTC)[reply]
I wouldn't do it in a formal paper, but I often see good reason to use it this way. How about, "Let <math>x \in X</math> and consider an open set <math>U \ni x</math>" is sometimes nicer than "... and consider an open set <math>U</math> with <math>x \in U</math>". Staecker (talk) 20:21, 23 December 2009 (UTC)[reply]
That's true; good example. --Trovatore (talk) 21:30, 23 December 2009 (UTC)[reply]
I've seen <math>\ni</math> used once for "such that". I find it really ugly. --pma (talk) 15:35, 24 December 2009 (UTC)[reply]

Cartoon books about mathematics

Are there any cartoon or other fun books that teach mathematics? In particular at a level equivalent to what we would call GCE "A" level in England (and Wales)? 92.24.76.99 (talk) 22:59, 22 December 2009 (UTC)[reply]

Don't know bupkus about A levels. But sure, things like Prof. E McSquared's Calculus Primer: Expanded Intergalactic Version (ISBN 0971462402) are out there. Do a search for "cartoon calculus", for example. --jpgordon::==( o ) 23:57, 22 December 2009 (UTC)[reply]
I'd say Murderous Maths but that only runs to about GCSE level (despite what our article may say about age range), and in particular is missing subjects solely taught at A level, such as calculus. - Jarry1250 [Humorous? Discuss.] 17:30, 24 December 2009 (UTC)[reply]

December 23

Functional Analysis

Where can I get a very basic introduction to the current research directions in functional analysis? Also I am interested in knowing about applications of Ramsey theory to functional analysis.[2] Thanks-Shahab (talk) 04:44, 23 December 2009 (UTC)[reply]

Edit Conflict
A good (and reasonably basic) book on the subject would be "Functional Analysis" by Walter Rudin. Alternatively, if you wish to learn about the theory of C* algebras, you could read "An Invitation to C* algebras" (in the GTM series).
Prior to studying functional analysis, it would be good to have a strong background in point-set topology, the topology of metric spaces, linear algebra, and ring theory. Although I think that you already have such a background, it is especially important to have a ring-theoretic intuition (or an intuition of linear transformations); for instance, it would help to be acquainted with a result of the nature of the Jacobson density theorem (somewhat related to the Von Neumann bicommutant theorem in functional analysis). In fact, a strong background in noncommutative ring theory would help should you wish to delve deeper into the subject.
Perhaps, it would be advisable to read the articles operator algebra, operator topology, and Von Neumann algebra, for this may give you a sense of the sorts of basic notions encapsulated in functional analysis. All in all, the two books I suggested may be useful (though there are other excellent texts), but it is important to have a good feel for linear transformations. Might I also add that there are many sorts of branches of functional analysis; the one I have emphasized here does not really encapsulate mathematical physics and the geometry of Banach spaces (note also noncommutative geometry)? --PST 05:34, 23 December 2009 (UTC)[reply]
Sorry - I made the above post before you altered your inquiry to note Ramsey theory. --PST 05:34, 23 December 2009 (UTC)[reply]
Now that you have mentioned Ramsey theory, the book "Geometric Functional Analysis and Its Applications" by Richard B. Holmes, may be appropriate (it is a book in the GTM series). --PST 05:40, 23 December 2009 (UTC)[reply]
I'm a big fan of Kreyszig's Introductory Functional Analysis with Applications which is very clear and well written, though doesn't have any Ramsey theory 86.15.141.42 (talk) 12:45, 23 December 2009 (UTC)[reply]
Thank you both. I have obtained the recommended books and will start reading them.-Shahab (talk) 16:49, 23 December 2009 (UTC)[reply]

reduction algorithm

What algorithm will reduce to minimum form an equation consisting of polynary variables? 71.100.6.206 (talk) 04:45, 23 December 2009 (UTC) [reply]

I thought there would only be a network of 10 links between five people. But the man here says there are 120: http://www.ted.com/index.php/talks/bruce_bueno_de_mesquita_predicts_iran_s_future.html How does he calculate a figure of 120, not 10? 92.29.68.169 (talk) 16:03, 23 December 2009 (UTC)[reply]

Can you indicate where he said that? Anyway, so he may have talked about ways to arrange 5 people in a line or something. -- Meni Rosenfeld (talk) 16:13, 23 December 2009 (UTC)[reply]
Interesting... there are 10 lines on his diagram. Either he's simply wrong (which seems unlikely, since he did include the diagram and one would hope he can count to 10!) or he means something different by "link". He talks about one person knowing what others are saying to each other, so if we count things like "A thinks B has said X to C" (where A-E are people and X is an idea) as a link then there are far more than 10. There are 120 different ways to order the five people (you have 5 choices for the first, 4 for the second and so on), so there are 120 links of the type "A thinks that B thinks that C thinks that D thinks that E thinks X". It could be that he's talking about that. He doesn't explain it at all well, though. --Tango (talk) 16:34, 23 December 2009 (UTC)[reply]
PS My greater concern is about the 90% accuracy claim. That is a completely meaningless number. First of all, we need to know if the predictions were made before or after the events happened - it is far easier to come up with a method that "would have" predicted the outcome once you know what the outcome was. Secondly, we need to know how well other methods predicted those outcomes (eg. just surveying experts and seeing what most of them say is likely to happen). --Tango (talk) 16:37, 23 December 2009 (UTC)[reply]
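In symbols, the two counts being contrasted here are (our arithmetic, spelling out Tango's explanation above): <math>\tbinom{5}{2} = \tfrac{5!}{2!\,3!} = 10</math> unordered pairs of people, which are the lines in the diagram, versus <math>5! = 5 \cdot 4 \cdot 3 \cdot 2 \cdot 1 = 120</math> orderings of the form "A thinks that B thinks that ... E thinks X".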

Why does a*b give the area of a rectangle? / Why does arithmetic give meaningful geometric results?

All my life I have known that a*b gives the area of a rectangle with sides a and b. It's repeated so often that I surely don't doubt it. I've realized, though, that I don't feel like I have a solid understanding of why it's true. It seems like something that needs further explanation.

For rectangles with integer sides, there's an explanation that's at least mostly satisfying:

  • By definition, the area of a figure is the # of 1x1 unit squares that fit inside it
  • If a rectangle has integer sides a and b, then you can fit an axb array of 1x1 squares inside it. (This seems like it could use some kind of justification of its own, but it's at least pretty intuitive to visualize.)
  • We know that a*b is a good way to count an axb array of objects. (If you have any doubts there, they can be addressed in this case by thinking of multiplication as repeated addition.)
  • So a*b is the # of 1x1 squares inside an axb rectangle.
  • So, by definition, a*b is the area of the rectangle.

Moving beyond integers it seems more mysterious to me. One way to phrase the mystery is this: How does a*b "know" how many 1x1 squares are inside an axb rectangle? If we're talking about real numbers, we can't just count object arrays anymore, so the above justification won't work.

One possibility I've encountered is that maybe you shouldn't think of the area of a figure as the # of 1x1 squares in it but rather as the ratio between its area and that of a 1x1 unit square. (See http://www.math.ubc.ca/~cass/graphics/manual/pdf/ch2.pdf) But I haven't figured out whether looking at area in that different way could make the connection between multiplication and area seem less mysterious for real numbers.

For context, this may be part of a larger confusion of mine about how arithmetic relates to geometry: On one hand, it seems like real numbers are defined axiomatically (I know there are other approaches, but see http://en.wikipedia.org/wiki/Real_number#Axiomatic_approach), and if you derive an algorithm for multiplication, you do that from the axioms for fields and such, without consulting geometric facts in any way. And yet, having done so, you wind up with algorithms/formulas that can be used to find the area of rectangles. What is it about this abstractly defined operation of multiplication that makes it suitable for answering anything about geometry? And what makes it suitable for answering questions about area in particular?

Ryguasu (talk) 21:56, 23 December 2009 (UTC)[reply]

I think maybe the first step is for you to ask yourself just what you mean by "area", as distinct from the product of the sides of the rectangle, which is usually taken to be pretty much the definition. If it turns out that your meaning is motivated by physical reality, you might check out The Unreasonable Effectiveness of Mathematics in the Natural Sciences, which raises questions for which there are not yet any generally accepted satisfactory answers. --Trovatore (talk) 22:02, 23 December 2009 (UTC)[reply]
Let's assume we have defined the concept of 'shape', and that you are willing to accept the following axioms regarding area:
  • A 1x1 square has area 1.
  • The area of a shape is unchanged when you translate or rotate it (ie. move it)
  • Placing two shapes adjacent to each other so there is no overlap results in a new shape whose area is the sum of the areas of the original shapes.
  • If one shape can be translated/rotated to completely cover another, the area of the first is larger than the area of the second.
As you have already demonstrated, the area of an axb rectangle where a and b are integers can be established by adding together several 1x1 squares. This gives a*b as its area.
Similarly, if you have a (1/a)x(1/b) rectangle, you can show that its area must be (1/a)*(1/b) by adding the same rectangle to itself a*b times in such a way as to make a 1x1 square. If we call the area A, this means that A*a*b=1, or A=1/(a*b). It is then just as easy to show that any (a/b)x(c/d) rectangle has area (a/b)*(c/d)=(ac)/(bd).
To show that the area of a qxr rectangle is q*r, where q and r are any real numbers, you can bound the area of the qxr rectangle above and below by use of rational-sided rectangles and the fourth axiom, and get the bounds arbitrarily close to q*r. Then the only possible area is q*r.
HTH. --COVIZAPIBETEFOKY (talk) 22:41, 23 December 2009 (UTC)[reply]
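To make the last step concrete, here is the squeeze written out (a sketch under the stated axioms, with our notation for the rational bounds): choose rationals <math>p_k < q < P_k</math> and <math>t_k < r < T_k</math> with <math>p_k, P_k \to q</math> and <math>t_k, T_k \to r</math>. By the fourth axiom,

<math>p_k t_k = \operatorname{area}(p_k \times t_k) \le \operatorname{area}(q \times r) \le \operatorname{area}(P_k \times T_k) = P_k T_k,</math>

and since both outer bounds tend to <math>qr</math>, the only possible value is <math>\operatorname{area}(q \times r) = qr</math>.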

Area is additive. If you have two rectangles in a plane sharing a common side, so that their union is a rectangle, then the area of that larger rectangle is the sum of the two areas. That's why. Michael Hardy (talk) 07:52, 24 December 2009 (UTC)[reply]

... which determines area as a function of a and b up to a scalar constant, which is set by our choice of units. If we measure lengths in metres and areas in square metres then the constant is 1 and area = ab; if we measure lengths in picometres and areas in barns then area = 10,000ab. Gandalf61 (talk) 11:21, 24 December 2009 (UTC)[reply]
Imagine a small, unit square made of sticky paper. Think of the process of measuring area as covering the surface of an object (say, a cup) with such squares. When covering the object, try to minimize gaps and overlaps. The number of squares needed to cover the surface, is your best approximation of its area. If you repeat the process, with squares that are 1/4 of a unit square, you'll be able to do a better job of minimizing gaps and overlaps. (If that's not immediately intuitive, think of what will happen if you increase the size of the sticky paper squares). Now, your area is the number of squares, divided by four. Repeat with squares that are 1/16, 1/64, 1/256, 1/1024 ... unit squares. By doing so, you will get a closer and closer approximation of the real number that is the object's area. --NorwegianBlue talk 15:02, 24 December 2009 (UTC)[reply]
The additivity of area is, of course, necessary to move beyond rectangles and to consider such as triangles. I've always taken it as axiomatic, but am happy to justify it on the painting analogy of considering how much "cover" is required.→→86.155.184.27 (talk) 15:48, 24 December 2009 (UTC)[reply]
(Continuation of sticky paper post, after peeling a ton of potatoes):
Now imagine measuring the area of a rectangle of arbitrary dimensions by tiling it with unit squares. You won't have the problem of overlaps, but will have to decide whether you want to leave a small uncovered strip at (say) the right edge and bottom edge, or to cover these strips, thus covering a surface that is larger than the rectangle. Imagine doing both, getting a low estimate and a high estimate of the area of the rectangle. In both cases, your estimate of the area will be the product of the number of unit lengths that fit along each edge. When you repeat this process with squares that are tinier and tinier fractions of a unit square, you can make the difference between the estimates as small as you want. At each step, the area will be the product of the number of squares that fit along each edge, divided by 4, 16, 64, 256, 1024, ... or, equivalently, the product of the number of squares that fit along the top edge divided by 2, 4, 8, 16, 32, ... and the number of squares that fit along the left edge divided by 2, 4, etc. The number of squares that fit along an edge divided by 2, 4, 8, 16, 32 ... approaches the real number that is the length of the edge, and the product of the number of squares that fit along each edge, divided by 4, 16, 64, 256, 1024..., approaches the area. --NorwegianBlue talk

In answer to your second question, the best answer I can think of (and there may be a better one) is that arithmetic is, in some sense, defined with geometry in mind. The properties of addition and multiplication have geometric counterparts; for instance, the distributive property of multiplication over addition can be justified geometrically for positive real numbers by representing a(b+c) as a rectangle whose sides are a and b+c, and noticing that we can also represent the same rectangle as a juxtaposition of two rectangles, axb and axc, giving a*b+a*c.

Don't get me wrong; numbers and lengths and areas are distinct concepts. But the first application of numbers was probably to measure geometric constructs, and geometry has had a big impact on the development of numbers, so that's probably historically the best explanation. --COVIZAPIBETEFOKY (talk) 18:07, 24 December 2009 (UTC)[reply]

December 24

Green's second identity and Green's functions for the Laplacian

Hi all,

I'm trying to prove that, for the Green's function G(r;r0) for the Laplacian in any arbitrary 3D domain, symmetry holds between r and r0; i.e. G(r;r0)=G(r0;r). My friend suggested I should try using Green's second identity (I sometimes wonder if my life would be a more interesting place if Green had never been born!), but I can't seem to get anything out; perhaps I'm being slow this time of night.

Does anyone else have any luck using Green's 2nd identity? Thanks very much, Delaypoems101 (talk) 02:30, 24 December 2009 (UTC)[reply]

sequence space

Resolved

I'm trying to prove that the sequence space of all complex sequences is a metric space with the metric <math>d(x,y)=\sum_{j=1}^{\infty}\frac{1}{2^j}\,\frac{|x_j-y_j|}{1+|x_j-y_j|}</math>. My questions are: how can I show that this series is always convergent, and why does d(x,y)=0 imply x=y? Thanks-Shahab (talk) 06:36, 24 December 2009 (UTC)[reply]

Doesn't really matter, d fails to satisfy the triangle inequality.--RDBury (talk) 07:18, 24 December 2009 (UTC)[reply]
No it satisfies the triangle inequality. I can reproduce the proof for that given in my book.-Shahab (talk) 07:26, 24 December 2009 (UTC)[reply]
My apologies, I got mixed up when I was checking it.--RDBury (talk) 12:13, 24 December 2009 (UTC)[reply]
(ec) Second question is easy: all fractions are non-negative, so d() is a sum of non-negative terms, and thus can only be zero if all terms are zero, which implies all numerators are zero, so x=y. Now the first question gets easy: as all fractions are non-negative AND less than 1 (because for <math>t>0</math> we have <math>\frac{t}{1+t}=\frac{1}{1+1/t}</math>, which is a reciprocal of something greater than 1), the sum is dominated by the convergent geometric series <math>\textstyle\sum_{j=1}^\infty 2^{-j}=1</math>, and so converges. --CiaPan (talk) 07:25, 24 December 2009 (UTC)
Thank you, it's clear. Instead of saying d() is a sum of non-negative terms, and thus can only be zero if all terms are zero, isn't it more appropriate to say that d(x,y) is the limit of a monotonic increasing sequence of non-negative terms, which can be zero only if all terms are zero? I tend to think of series as sequences only.-Shahab (talk) 07:39, 24 December 2009 (UTC)[reply]
Both are valid arguments. CiaPan's argument is rooted in the assertion that if <math>\textstyle\sum_j a_j = 0</math> is a convergent sum, with each term in the sum non-negative, then <math>a_j = 0</math> for all j. The argument you have suggested is rooted in the assertion that if <math>s_k</math> is the kth partial sum of such a series, so that the <math>s_k</math> increase to 0, then <math>s_k = 0</math> for all k. Essentially, both arguments are correct (and similar in nature). However, you are correct to note that in a situation where basic intuition does not apply, it is often more appropriate to employ a formal argument. --PST 09:13, 24 December 2009 (UTC)[reply]
By the way, which book are you studying? --PST 09:14, 24 December 2009 (UTC)[reply]
Kreyszig's. I found an online copy.-Shahab (talk) 09:40, 24 December 2009 (UTC)[reply]
Note that you can use other functions in place of t/(1+t) in the definition of d(): precisely, any bounded continuous subadditive increasing function φ such that φ(0)=0 and φ(t)>0 if t>0 produces a distance on the space of sequences. These are topologically equivalent, and induce the product topology. For instance, <math>\varphi(t)=\min(t,1)</math> is often used. --pma (talk) 12:24, 24 December 2009 (UTC)[reply]
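To spell out why the triangle inequality goes through (a sketch of the standard argument, not from the thread): the function <math>\varphi(t)=\frac{t}{1+t}</math> is increasing and subadditive, because

<math>\frac{a+b}{1+a+b} = \frac{a}{1+a+b} + \frac{b}{1+a+b} \le \frac{a}{1+a} + \frac{b}{1+b}, \qquad a, b \ge 0.</math>

Taking <math>a = |x_j - y_j|</math> and <math>b = |y_j - z_j|</math>, and using <math>|x_j - z_j| \le a + b</math> together with monotonicity, each term of d(x,z) is at most the sum of the corresponding terms of d(x,y) and d(y,z); summing over j gives the triangle inequality.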
Thanks everyone and merry christmas-Shahab (talk) 04:34, 25 December 2009 (UTC)[reply]

Rolling sphere

An unconstrained sphere resting on the top of a fixed one is in unstable equilibrium. Suppose a minute disturbance (e.g. it's given an initial velocity of one millionth of the fixed sphere's radius per second) starts it rolling under gravity. Assuming no slipping, are there any circumstances which will make it leave the surface of the fixed one before the 90° point has been reached?→→86.155.184.27 (talk) 17:21, 24 December 2009 (UTC)[reply]

I suspect the answer might depend on whether "rolling" means it's not "slipping".
But then on another couple of seconds' thought (I haven't thought this one through) I would think it would have to leave the surface before reaching the 90° point, because its motion has a horizontal component and there's inertia. Reaching the 90° point would mean going straight down with no horizontal component to its motion. Michael Hardy (talk) 19:53, 24 December 2009 (UTC)[reply]
OK, now I see there's an explicit statement that it's not slipping. I don't know whether that actually matters. Michael Hardy (talk) 06:43, 25 December 2009 (UTC)[reply]
This problem is suited for Lagrangian mechanics. Bo Jacoby (talk) 23:09, 24 December 2009 (UTC).[reply]
I think in the general case of an object rolling off the fixed sphere (assuming the size of the rolling sphere is much smaller than that of the fixed one), it will depart at <math>\cos\theta = \frac{2}{3 + I/(mr^2)}</math> (where <math>I</math> is the moment of inertia of the sphere, <math>m</math> its mass, and <math>r</math> its radius). Note that this reduces to a constant in the special case of a particle sliding down the sphere (<math>I=0</math>), giving <math>\cos\theta = \tfrac{2}{3}</math>. Michael is completely right, since the object acquires some horizontal velocity, it has to leave before the 90° point. You can analyse this by working out the velocity as a function of angle (by conserving energy), then working out the angle at which the gravitational pull on the object (directed towards the sphere) is no longer enough to keep it in circular motion around it. — Zazou 00:40, 25 December 2009 (UTC)[reply]
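A sketch of the energy argument Zazou describes, writing <math>I = kmr^2</math> and taking the rolling sphere much smaller than the fixed sphere of radius R (our notation): conservation of energy with rolling without slipping (<math>v = r\omega</math>) gives

<math>mgR(1-\cos\theta) = \tfrac{1}{2}mv^2 + \tfrac{1}{2}I\omega^2 = \tfrac{1}{2}mv^2(1+k),</math>

and the object departs when the normal reaction vanishes, i.e. when gravity alone supplies the centripetal force:

<math>mg\cos\theta = \frac{mv^2}{R} \quad\Rightarrow\quad v^2 = gR\cos\theta.</math>

Substituting the second relation into the first yields <math>\cos\theta = \frac{2}{3+k}</math>: the sliding-particle case k=0 recovers <math>\cos\theta = \tfrac{2}{3}</math>, and a uniform solid sphere (k=2/5) gives <math>\cos\theta = \tfrac{10}{17}</math>, both well before the 90° point.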

December 25

Easy way of deciding if two lines cross?

I am writing a computer program where many lines are stored as pairs of x,y coordinates. I would like to be able to decide if two lines cross. What would be the easiest way to program this please? I can think of changing the lines into y=mx+c format, solving a simultaneous equation (I think) to find the point of intersection, and then checking that this intersection point is within each line segment. But is there any easier way please? (I am not fluent with matrices and the language I am using has no matrix commands). Maybe something regarding the angles between the four points - I'm guessing. A simple way to find the x,y coordinates of the point of intersection would also be useful. Thanks 92.24.44.4 (talk) 14:52, 25 December 2009 (UTC)[reply]

Just determine whether or not they are parallel. If they're parallel, then either they never intersect or they're the same line. If they're not parallel, then they must intersect at exactly one point. No need to determine the point of intersection. I'll leave it as an exercise to you to figure out how to determine if they're parallel, and to explain why this technique doesn't work in 3 dimensions. --COVIZAPIBETEFOKY (talk) 16:58, 25 December 2009 (UTC)[reply]

Sorry, I should have made clearer that the lines are not infinite. They can be non-parallel and still not cross. 78.146.194.118 (talk) 17:07, 25 December 2009 (UTC)[reply]

Your method sounds best to me. You should first check they aren't the same line (if they are you just need to check the order of the endpoints to see if they overlap) then that they aren't parallel (if they are and they aren't the same line, they won't intersect) and then you can find the point of intersection and see if it is in both lines. To get the intersection point from two equations, y=mx+c and y=nx+d, you can just do x=(d-c)/(m-n) (to derive that just put mx+c=nx+d and rearrange) and then plug that into y=mx+c to get y. --Tango (talk) 17:37, 25 December 2009 (UTC)[reply]
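Tango's formulas translate directly into code. A minimal sketch in Python (the function name is ours; note that the y=mx+c form breaks down for vertical lines, which need separate handling, and the parallel case m=n must be caught first):

<source lang="python">
def intersect_lines(m, c, n, d):
    """Intersection of y = m*x + c and y = n*x + d; assumes m != n."""
    x = (d - c) / (m - n)
    y = m * x + c
    return x, y

# Example: y = 2x + 1 meets y = -x + 4 at (1.0, 3.0).
# It then remains to check that x lies within both segments' x-ranges.
print(intersect_lines(2, 1, -1, 4))
</source>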

I'm wondering if the four end points of the two lines would always make a polygon with a concave part in it if they do not cross? 78.146.194.118 (talk) 17:49, 25 December 2009 (UTC)[reply]

One way you could do it is that if the segment from A to B and from C to D don't cross then either (B-A)×(C-A) and (B-A)×(D-A) will have the same sign or (D-C)×(A-C) and (D-C)×(B-C) will have the same sign. Here × is the cross product, <math>A \times B = x_A y_B - x_B y_A</math>. There might be some more efficient way to get to that though.
For the intersection point, I think it should be A + (((C-A)×(D-C))/((B-A)×(D-C)))(B-A) if I didn't screw anything up. You could also use that intersection point to decide if the segments cross, although I think this way is more computationally intensive unless you need the intersection point anyway. Rckrone (talk) 21:23, 25 December 2009 (UTC)[reply]
(Answering 78.*)... No, they do not make a concave polygon. This is a common homework or quiz question in algorithms programming. Nothing in the question assumes that the direction of the lines is from the Y axis towards infinity. One may be right-to-left. The other may be left-to-right. This creeps in again in processor/ALU design. Division is a very nasty time consumer. Comparison is not. So, using less-than/greater than, you can sort the points to form a concave polygon. Then, if you go around the four points in a clockwise manner, you can detect that each turn is a right turn by only using subtraction (which is actually a very cheap addition process inside the computer). -- kainaw 21:47, 25 December 2009 (UTC)
I forgot to mention that some student always comes up with the idea of just comparing the endpoints. It is a bit trivial to come up with an example that nullifies anything that depends solely on comparing endpoints. -- kainaw 21:50, 25 December 2009 (UTC)[reply]

See also Wikipedia:Reference desk/Archives/Mathematics/2009 October 4#Best way to calculate if a line crosses another line, or a polygon.. Is there an article to point to on this?--RDBury (talk) 21:59, 25 December 2009 (UTC)[reply]

To the OP: please do look at the link that RDBury gives, and the explanation that RDBury and BenRG give there. Intuitively, if we wish to check whether AB and CD cross, we check whether A and B are on opposite sides of the line CD, and whether C and D are on opposite sides of AB. To check which side of a line that a point is on, we compute the appropriate cross product (or equivalently, the signed area of the triangle the three points form). BenRG provides code in C++; while I haven't checked it myself, it looks correct.

Don't use methods that involve computing intersection points, because these are generally more difficult to code correctly (with special cases like infinite slope, etc.), not numerically stable, and slower (although speed is unlikely to be a concern either way). Although there is nothing mathematically wrong with this approach (and this is a mathematics reference desk, after all), from a programming perspective it is not preferred. Eric. 131.215.159.171 (talk) 23:54, 25 December 2009 (UTC)[reply]

The formula <math>(x_2-x_1)(y_3-y_1)-(y_2-y_1)(x_3-x_1)</math> is commonly referred to as "turn" in computer programming - mainly in graphics. Going from point 1, to point 2, to point 3, if the value is positive, you made a left turn. If it is negative, you made a right turn. If it is zero, the three points are on a line (note: it could be a 180 degree turn). Calculating turn comes in handy in a lot of graphics programming. -- kainaw 02:25, 26 December 2009 (UTC)[reply]
I didn't know that... in the context of computational geometry I've only heard it referred to as the "signed area". Eric. 131.215.159.171 (talk) 08:25, 26 December 2009 (UTC)[reply]
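Putting kainaw's turn formula together with the opposite-sides test described earlier gives a compact crossing check. A sketch in Python (function names are ours; the strict inequalities deliberately ignore the collinear/touching edge cases discussed above):

<source lang="python">
def turn(p1, p2, p3):
    # (x2-x1)(y3-y1) - (y2-y1)(x3-x1): positive = left turn,
    # negative = right turn, zero = the three points are collinear
    return (p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0])

def segments_cross(a, b, c, d):
    """True if segment AB properly crosses segment CD."""
    # C and D on opposite sides of line AB, and A and B on opposite sides of CD
    return (turn(a, b, c) * turn(a, b, d) < 0 and
            turn(c, d, a) * turn(c, d, b) < 0)

print(segments_cross((0, 0), (4, 4), (0, 4), (4, 0)))  # True  (an X shape)
print(segments_cross((0, 0), (1, 1), (2, 0), (3, 1)))  # False (disjoint)
</source>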

To answer the OP and my own question, we have an article, Line segment intersection in this topic but it's in dire need of expansion. I get the impression that mathematicians look at the problem and see the main issue as determining whether two line segments intersect; multiple line segments are just a matter of applying the solution multiple times. While to people in computer science, the problem of whether two line segments intersect is simple algebra and the real issue is to organize the problem so you don't have to test all possible pairs of segments. It seems to me that the article should cover both points of view and right now it just gives an outline of the second. I found lecture notes [3] which give a pretty good introduction to the subject except they are not self-contained. (For example pseudocode calls a function CCW whose implementation is not given.)--RDBury (talk) 12:16, 26 December 2009 (UTC)

Thanks. As the OP, is there any consensus on the easiest way to check whether two lines cross or not, given that I am only fluent in an old version of BASIC and that my maths education stopped when I was 16 years old? 89.240.110.255 (talk) 16:28, 26 December 2009 (UTC)[reply]

December 26

dual spaces of Sobolev spaces

The Rellich-Kondrachev theorem gives compact embeddings of <math>W^{1,p}(\Omega)</math> into <math>L^q(\Omega)</math>, but what can we say about, say, the duals of these spaces? I remember it was straightforward, but I'm having trouble finding it in the references, and am rather embarrassed that it's not working out easily. Many thanks. 96.235.177.218 (talk) 03:47, 26 December 2009 (UTC)[reply]

(To be precise, there is compactness only when q is strictly below the critical exponent: q<p*). I'm not sure of what's exactly your question though. One thing is that dualizing the RK embedding you still get a dense, injective, compact map of the dual of <math>L^q</math> into the dual of <math>W^{1,p}</math>. What you possibly had in mind is that if a bounded linear operator between Banach spaces is compact/injective/has dense range, then the transpose operator is respectively compact/w*-dense/injective. If 1<p<n the space <math>W^{1,p}</math> is reflexive, so that "w*-dense" above is the same as just "dense". Was this your question?--pma (talk) 16:55, 26 December 2009 (UTC)[reply]

Is the word "induce" used technically or non-technically?

Suppose A and B are groups, and N is a normal subgroup of A. Suppose we have an isomorphism <math>f : A \to B</math>; then we would say that f naturally induces an isomorphism <math>\bar{f} : A/N \to B/f(N)</math>.

I would like to know the limits of the word induce. I see two alternatives:

(1) Is the phrase "the map induced by f" rigorously defined to refer to that map which results from passing to the quotient spaces, as in my example? In this case, the phrase "induced map" would have a formal, unambiguous meaning, just as "the pullback of f" has a formal, unambiguous meaning.

(2) Is the phrase "the map induced by f" used informally to refer to any map that results from some kind of a canonical process? For example, would it be correct to say the restriction map is induced by f? Could we also say the pullback of f (by some other map) is "induced" by f? Maybe the lift of f is also "induced" by f? In this case, the phrase "induced map" would have a subjective meaning, depending on context to establish what particular process we mean.

Of course the actual usage of the word "induce" could differ from both of the two above descriptions, and can vary from one mathematician to another; but I am most interested in the distinction between a formal, technical meaning for "induce" and an informal, non-technical meaning. Thanks. Eric. 131.215.159.171 (talk) 08:43, 26 December 2009 (UTC)[reply]

Let A, B, C and D be objects in a category, and let <math>f</math> be an element of <math>\operatorname{Hom}(A,B)</math>. Formally, I would say that <math>f</math> induces an element <math>g</math> of <math>\operatorname{Hom}(C,D)</math>, if there exist morphisms <math>u \in \operatorname{Hom}(A,C)</math> and <math>v \in \operatorname{Hom}(B,D)</math> such that the following diagram commutes: [commutative square with <math>f : A \to B</math> on top, <math>g : C \to D</math> on the bottom, and <math>u</math>, <math>v</math> as the downward vertical arrows, so that <math>g \circ u = v \circ f</math>]
We shall now restrict our attention to abelian categories. The above diagram includes the two cases you mentioned, as is demonstrated by the following special commutative diagrams (let <math>\pi_A : A \to A/N</math> and <math>\pi_B : B \to B/f(N)</math> be the respective canonical homomorphisms): [commutative square with <math>f</math> on top, the induced map <math>A/N \to B/f(N)</math> on the bottom, and <math>\pi_A</math>, <math>\pi_B</math> as the vertical arrows]
The above diagram commutes, as you can check. Similarly, consider the following commutative diagram (in this case, let the vertical arrows be the respective inclusion maps): [commutative square for the restriction case, with the inclusions as the vertical arrows]
Note that the vertical arrows in the above commutative diagram point up, as opposed to those in the other commutative diagrams, which point down. I have merely given you a formal definition (in my view) of "induce" in the mentioned situations. I do not quite understand what you would call an "informal definition"; could you please clarify? Hope this helps (and try not to notice the ugly commutative diagrams...). --PST 11:52, 26 December 2009 (UTC)[reply]
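For concreteness, the quotient-case square can be typeset along these lines (our reconstruction, using the names in the surrounding text):

<math>\begin{array}{ccc} A & \xrightarrow{\ f\ } & B \\ \pi_A \big\downarrow & & \big\downarrow \pi_B \\ A/N & \xrightarrow{\ \bar{f}\ } & B/f(N) \end{array} \qquad \pi_B \circ f = \bar{f} \circ \pi_A.</math>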
Thank you for your reply; it was helpful. Perhaps I should be more clear. I am not so interested in what the definition, whether technical or non-technical, of "induce" is, per se, as whether mathematicians view the word "induce" as a technical term (like the terms "pullback", "lift", "inverse limit"), or as a non-technical term (like the terms "trivial", "characterization", "equivalent", "canonical", "natural"). I gave examples of what a technical definition for "induce" (the result when passing to the quotient space) and a non-technical definition for "induce" (the canonical result of some natural process) to clarify my meaning of a technical term vs. a non-technical term, but I am not necessarily convinced that either one of those is what most mathematicians use the word to mean. Eric. 131.215.159.171 (talk) 12:55, 26 December 2009 (UTC)[reply]
I think that in specific cases, many mathematicians view "induce" as a technical term; one example being "pullback" (or "pushforward") as you mentioned. However, in general, I do not think that all mathematicians have a specific view as to what induce should mean. If I was talking about the pushforward measure in measure theory, the tangent bundle in differential topology, or even quotient spaces in algebra, I would employ specific aspects of the term "induce"; I would not use it in its full generality. I think that this is the case in most of mathematics - often we would generalize a term if we feel that the generalization sheds new light on concrete (or even abstract) cases. For instance, the snake lemma in homological algebra, amidst all this "abstract nonsense" about generalized abelian categories, actually allows one to construct long exact sequences in homology (Zig-zag lemma); a particularly basic tool in homology. Although it seems unnaturally general at first to the beginning student, it actually does shed light on basic tools in singular homology theory (as an example). To summarize, I do not think that mathematicians have found a "specific purpose" of viewing "induce" as a formal term, as people have done in many other branches of mathematics such as point-set topology or abstract algebra. Rather they have formalized the term in specific situations such as the ones I have mentioned ("pushforward", "pullback" etc) and this has been particularly useful. Does this answer your question? --PST 13:30, 26 December 2009 (UTC)[reply]
I personally use "X induces Y (via Z)" for any object or situation Y whose existence and (essential) unicity is guaranteed by X (as a consequence of the theorem or the construction Z, to be specified unless it is clear). It seems to me that this generic use is the most common. In some cases "deduce" or "produce" may be valid alternatives (although probably nobody cares about the etymology). --pma (talk) 17:26, 26 December 2009 (UTC)[reply]

Does U(1) = SU(1)

Does the circle group equal the special unitary group of one dimension? -Craig Pemberton 08:44, 26 December 2009 (UTC)[reply]

No. SU(1) is the trivial group {1} - see special unitary group. Gandalf61 (talk) 09:12, 26 December 2009 (UTC)[reply]

An element of U(1) is of the form <math>(a)</math> where <math>a \in \mathbb{C}</math>, and <math>\overline{a}a = 1</math> (since <math>(\overline{a})</math> is the conjugate transpose of <math>(a)</math>). This fact allows one to conclude that <math>|a| = 1</math>, which, as you note, means U(1) is isomorphic to the circle group. Now, SU(1) is the set of all elements of U(1) having determinant 1. However, a matrix <math>(a)</math> has determinant 1 iff <math>a = 1</math>; equivalently, iff it is the identity matrix. Thus, SU(1) is the trivial group, whereas U(1) is the circle group (perhaps I have over-explained a little...). --PST 12:05, 26 December 2009 (UTC)[reply]

Linear combination of matrices

Is there a reference book or wiki article or something to direct me to research of the eigenvalue problem of matrices like <math>A + tB</math>, in particular, properties of the roots of the characteristic polynomial with the parameter <math>t</math>? (Igny (talk) 17:29, 26 December 2009 (UTC))[reply]

Have you tried this article? If not, I recommend it. If so, do you have a specific question?--Leon (talk) 17:50, 26 December 2009 (UTC)[reply]
Well I know the general theory of the eigenproblem, and I know implicit differentiation well enough to figure out, for example, <math>d\lambda/dt</math>. However I thought that there was some more obscure research of the roots from the point of view of the Galois theory, for example. (Igny (talk) 18:22, 26 December 2009 (UTC))[reply]
Tosio Kato, Perturbation theory for linear operators. --pma (talk) 20:05, 26 December 2009 (UTC)[reply]

Algebra over a ring that is a field?

Is it possible for an algebra over a ring (that is, a ring that is not also a field) to be a field? I'm aware that you can describe rational numbers as pairs of integers, but in as much as I understand the term "algebra over a ring", that does not qualify, as addition needs to be defined differently to that on a vector space over the ring of integers.--Leon (talk) 17:48, 26 December 2009 (UTC)[reply]

What about <math>\mathbb{Q}</math> as an algebra over <math>\mathbb{Z}</math>..?--pma (talk) 20:12, 26 December 2009 (UTC)[reply]
Didn't I just mention that, and further explain why I figured that it didn't count?--Leon (talk) 21:05, 26 December 2009 (UTC)
I don't see why it doesn't count. An algebra over a ring is a module with a suitably-behaved multiplication; that's the only definition I know. <math>\mathbb{Q}</math> can certainly be considered a module over <math>\mathbb{Z}</math>, and the usual multiplication is suitably-behaved. --Tango (talk) 21:50, 26 December 2009 (UTC)[reply]
Sorry: actually you did, but your explanation was (and I fear, will remain) rather obscure to me. I do not understand your doubts: the definition of algebra is very clear, unambiguous, and standard; and obviously any field is an algebra over any sub-ring of it. --pma (talk) 23:57, 26 December 2009 (UTC)[reply]

December 27

Algorithm to reduce polynary equations to minimum form

I asked this before and maybe the question was ignored due to the holidays...

Is there an algorithm (like for the simplex method in linear programming) to reduce polynary equations to minimum form? 71.100.6.153 (talk) 02:06, 27 December 2009 (UTC) [reply]