
Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 71.100.6.153 (talk) at 00:11, 28 December 2009 (Algorithm to reduce polynary equations to minimum form). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Welcome to the mathematics section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.



December 21

Calculus History

I am reading books on the history of Mathematics and its development, specifically Calculus and I am now further confused. I thought I had it right but I am sure that I don't so maybe some experts here can help clear up a few things. First about the Bernoullis, I know that they were Swiss but I always thought that their background was Italian. Is that true? The article here doesn't really say anything about this. Was it like an Italian family? Is the name Italian? Is the name German or something? Were they originally Italian who then relocated to Switzerland or something?

Second, the more significant question, is about the actual development of Calculus. As I understand, Newton (and Leibniz) are credited with the "invention" of calculus because they proved the Fundamental Theorem of Calculus. But then I learn that Riemann was the one who redefined the integral (using the definition that a function is said to be integrable if for a given epsilon greater than zero, there exists a partition such that the upper sum and the lower sum over that partition are within epsilon of each other) which allowed Riemann to prove all the properties of the integral previously known (such as linearity) and he could now integrate functions with discontinuities (even with an infinite...with measure zero as we now know...number of discontinuities) and then Riemann also proved that integration and derivatives are inverse operations with his newly defined integral. So isn't Riemann the one who proved the fundamental theorem of calculus? Why isn't it credited to him? I mean the form we see it in today came from him.-Looking for Wisdom and Insight! (talk) 00:59, 21 December 2009 (UTC)[reply]

Wow, good questions! My information (s:1911 Encyclopædia Britannica/Bernoulli) is that the Bernoullis were fleeing the Spanish when they came to Switzerland about a hundred years before they became famous. It doesn't say whether they actually were Spanish or how they got the name, which doesn't sound Spanish any more than it sounds German. I will note though that 1) Italian is spoken in Switzerland, though generally not as far north as Basel, and 2) People were a bit more flexible about their names then than we are now, so the name they used might change depending on who they were talking to or they might use a Latin version (which you had to speak to be considered literate at that time). I worked on the Bernoulli articles and I basically had to go by birth and death years to tell them apart; they all used two or three first names and most of the names were used by two or three relatives.
If you're interested in the history of calculus I recommend The Calculus Wars by Jason Socrates Bardi. The short version is at Leibniz and Newton calculus controversy. Anyway, Newton and Leibniz invented calculus using something called fluxions or infinitesimals (depending on which side of the English Channel you were on). By modern mathematical standards they were very non-rigorous and it wasn't until Riemann and Cauchy and their generation that it was all put on a firm footing, whence the Riemann integral etc. My understanding is that part of the motivation for doing this was a scathing criticism of infinitesimals by Bishop Berkeley. This is a case of methods being ahead of the proofs that they work, which happens a lot more than mathematicians would like to think. In this case the methods, known as the methods to calculate infinitesimals, or the infinitesimal calculus, or nowadays just calculus, while not rigorous, at least seemed plausible, so people used them because they were useful. In any case, the development of calculus took place over thousands of years, so deciding who gets credit for it is going to be arbitrary anyway, but that's the way the history of science goes much of the time. A lot of that is my personal viewpoint so take it with a grain of salt, but it does seem to be a more interesting subject than you would think.--RDBury (talk) 05:47, 21 December 2009 (UTC)[reply]

Fréchet Second Derivatives and Taylor Series of matrix functions

Hi all,

Another one from me! I've got a distressingly long list of Christmas work (how cruel!) I've got a big long list of Taylor series to calculate for matrix functions, using the Fréchet derivative - however, my lecturer has failed to give any examples (helpful) nor can I find any on the internet, so I'd greatly appreciate it if someone wouldn't mind showing me an example before I start beavering away at the list!

Say, for any n*n matrix A: f(A) = A^3, so f(A+H) = A^3 + (A^2 H + AHA + H A^2) + (A H^2 + HAH + H^2 A) + H^3, and so L(H) = A^2 H + AHA + H A^2 is the Frechet derivative. Now how do I go about calculating the second (third etc) Frechet derivatives? (This is the first example on my list - I have the formula f(A+H) = f(A) + L(H) + o(||H||), right?)

Thanks very much for the help (again!), I think once I've got one example sorted I can get going on the rest!

Much appreciated! Typeships17 (talk) 03:32, 21 December 2009 (UTC)[reply]

Yes, that expansion sounds like the evil laugh of your lecturer. The second derivative of A -> A^3 is the symmetric bilinear map (U,V) -> 1/2(UAV+UVA+VAU+VUA+AUV+AVU). Actually this holds in any Banach algebra; if it is commutative, you find 3AUV. A very efficient way to prove that a map is C^k, and to compute its differentials, is the converse of Taylor theorem: a map from an open set of a Banach space E to a Banach space F is of class C^k if and only if it has a polynomial expansion of order k at any point of the domain, with continuous coefficients, and with a remainder which is locally uniformly o(|h|^k). For k=1 this is the definition of C^1 of course.--pma (talk) 09:51, 21 December 2009 (UTC)[reply]
What is that 1/2 doing there? Algebraist 12:44, 21 December 2009 (UTC)[reply]
I wonder too... --pma (talk) 12:57, 22 December 2009 (UTC)[reply]
Hah, I wouldn't be all too surprised if he did laugh like that. That's great but how did you go about actually calculating it? I'm not sure I follow quite how to get from the first derivative to the second and so on, perhaps the concept of going from a linear to bilinear to trilinear etc map is bewildering me. What limit gave you your (1/2?)UAV+UVA+VAU+VUA+AUV+AVU? Many thanks again, Typeships17 (talk) 14:24, 21 December 2009 (UTC)[reply]
You just take the first derivative, DA^3(U) = A^2 U + AUA + U A^2, and perturb A again: replacing A with A+V and collecting the part linear in V gives UAV + UVA + VAU + VUA + AUV + AVU. The Taylor series you end up with will of course just be what you get by multiplying out (A+H)^3. Algebraist 16:13, 21 December 2009 (UTC)[reply]
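For anyone following along, the perturbation computation above is easy to sanity-check numerically. This is a minimal sketch in plain Python (the 2x2 matrices A and H are arbitrary example values, not from the thread), verifying that f(A+tH) - f(A) agrees with t times A^2 H + AHA + H A^2 up to O(t^2):

```python
def mul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(X, Y, s=1.0):
    # X + s*Y
    return [[X[i][j] + s * Y[i][j] for j in range(2)] for i in range(2)]

def cube(X):
    return mul(mul(X, X), X)

A = [[1.0, 2.0], [0.5, -1.0]]   # arbitrary example matrices
H = [[0.3, -0.2], [0.1, 0.4]]

# first Frechet derivative of A -> A^3 in direction H: A^2 H + A H A + H A^2
DH = add(add(mul(mul(A, A), H), mul(mul(A, H), A)), mul(H, mul(A, A)))

t = 1e-5
diff = add(cube(add(A, H, t)), cube(A), -1.0)   # f(A + tH) - f(A)
err = max(abs(diff[i][j] - t * DH[i][j]) for i in range(2) for j in range(2))
print(err < 1e-8)  # remainder is O(t^2), so far below the 1e-8 threshold
```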
That's great, I've got the idea now, thanks ever so much - now onto A-1, this one should prove a bit more challenging! (If anyone has any tricks for the general form of the nth derivative, please feel free to let me know, I managed to batter my way through the first but no further...) Anyway, many thanks again to both of you :) Typeships17 (talk) 17:58, 22 December 2009 (UTC)[reply]
Invertible matrices (more generally, invertible elements of a B-algebra) form an open set, and the inversion map is analytic: if the element a is invertible and ||h|| < 1/||a^-1||, you have the expansion (a real evil laugh):
(a+h)^-1 = a^-1 - a^-1 h a^-1 + a^-1 h a^-1 h a^-1 - ... = sum over k >= 0 of (-1)^k a^-1 (h a^-1)^k.
From this you can find all the differentials, symmetrizing. E.g. D(a^-1)(u) = -a^-1 u a^-1 and D^2(a^-1)(u,v) = a^-1 u a^-1 v a^-1 + a^-1 v a^-1 u a^-1. --pma (talk) 09:15, 24 December 2009 (UTC)[reply]
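The Neumann-series expansion for the inverse can be illustrated in the simplest Banach algebra of all, the real numbers. A quick sketch (the values of a and h are arbitrary, chosen so that |h| < 1/|a^-1|):

```python
a, h = 2.0, 0.3     # arbitrary reals with |h| < 1/|a^(-1)| = 2, so the series converges
ainv = 1.0 / a
# (a+h)^(-1) = a^(-1) - a^(-1) h a^(-1) + ... = sum_k (-1)^k a^(-1) (h a^(-1))^k
series = sum((-1) ** k * ainv * (h * ainv) ** k for k in range(60))
exact = 1.0 / (a + h)
print(abs(series - exact) < 1e-12)  # True
```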

Vector cosine

I was looking at Amazon.com's "people who bought this item also bought..." algorithm and I noticed that they use vector cosine to group users. For example, if I bought items 1, 6, and 9 (the product ID for each item), my purchase vector would be {1,6,9}. If you bought {5,6,7}, the cosine of the two vectors would be 4.86 (if my math is correct). I know that when the vectors are identical, the cosine of the vectors is 1. What is the domain of vector cosine? Is there a limit that indicates "opposite", such as when comparing {1,2,3} to {3,2,1}? Is there a limit that indicates "nothing in common", such as when comparing {1,2,3} to {4,5,6}? I'm curious about how accurate it is to use vector cosine to identify how similar two vectors are. -- kainaw 05:30, 21 December 2009 (UTC)[reply]

You should check out the articles Collaborative filtering and Netflix prize. The vector cosine seems to be a term used by people who specialize in this area rather than most mathematicians, but my research indicates that it's simply the cosine of the angle between the two vectors. If two vectors are nearly the same direction then the angle between them is nearly 0 and the cosine is close to 1. If vectors aren't close to the same direction then the cosine is closer to 0 or even negative. It turns out that the cosine is easier to compute than the angle itself (see Angle#The dot product and generalisation), so it's useful for doing computation.--RDBury (talk) 06:11, 21 December 2009 (UTC)[reply]
Thank you. That is a good link. I guess I'm just doing cosine of vectors wrong since I get 4.86. I thought cosine was limited to the range -1 to 1. Perhaps I'm just adding or multiplying wrong. -- kainaw 07:20, 21 December 2009 (UTC)[reply]
I have no idea what Amazon does (a link to your source would be welcome), but the purchase vectors above should probably be (1,0,0,0,0,1,0,0,1) and (0,0,0,0,1,1,1,0,0). Their so-called cosine similarity (which is indeed between -1 and 1) is 1/3. It can only be negative when some of the entries are, which is impossible in this particular setting. Negatives in general indicate opposite directions, with -1 polar opposites. 0 indicates no common items here, or orthogonality in the general case. Also note that {1,2,3} and {3,2,1} are the same, not opposite. -- Meni Rosenfeld (talk) 16:12, 21 December 2009 (UTC)[reply]
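To make the arithmetic above concrete, here is a small Python sketch of the indicator-vector encoding (the 9-product catalogue size is an arbitrary choice for the example):

```python
import math

def cosine(u, v):
    # cosine of the angle between u and v: (u.v) / (|u| |v|)
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v)))

# indicator vectors over a hypothetical 9-product catalogue:
# one customer bought items 1, 6, 9; the other bought items 5, 6, 7
u = [1, 0, 0, 0, 0, 1, 0, 0, 1]
v = [0, 0, 0, 0, 1, 1, 1, 0, 0]
print(cosine(u, v))  # 1 shared item / (sqrt(3)*sqrt(3)) = 1/3
```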

I'm inclined to agree with Meni Rosenfeld, and the "cosine" reported to be 4.86 above must be a mistake: such a cosine cannot exceed 1. (There are complex numbers whose cosine is a real number greater than 1, but that doesn't apply here.) Michael Hardy (talk) 20:32, 21 December 2009 (UTC)[reply]

I did have some math mistake somewhere. The cosine is 0.91. The formula shown in all of the papers I've read is cosine(A, B) = (A·B)/(||A||*||B||). At first, I thought ||A|| was the length of A (how many items are in A). I then noticed that it was the square root of the sum of all the elements of A squared. The dot product is a bit of a problem - what if the vectors are different lengths? Just use zeros to pad the smaller one? I don't see how you can get 0 since all the vectors being used are positive integers greater than zero. -- kainaw 20:47, 21 December 2009 (UTC)[reply]
Again, I think you are confused about how vectors represent purchases. The simple way (which again, may or may not be what Amazon does) is to have a vector whose length is equal to the total number of items available for purchase, and which has 1 in indexes of purchased items and 0 elsewhere. With this encoding, the cosine similarity in the example you gave is 1/3, like I said. -- Meni Rosenfeld (talk) 05:05, 22 December 2009 (UTC)[reply]
None of the examples that I've seen use 0/1 representation. They all use a vector of integer identifiers. Many refer to it as Pearson product-moment correlation coefficient. I'm now reading about the "centering" involved to see how that affects the cosine. -- kainaw 05:15, 22 December 2009 (UTC)[reply]
If those examples are online, please provide a link. My guess is that they present a list of IDs for compactness but do the calculations with a 0/1 representation.
In any case, it should be crystal clear that what you have done - multiply out the IDs - makes absolutely no sense whatsoever. For starters, IDs are on a nominal scale, while multiplication requires a ratio scale (Level of measurement) (with centering, you only need an interval scale). Second, it creates completely absurd situations. {1,2,3} has <1 similarity with {3,2,1} although they are the same purchases. {1,2,3} has >0 similarity with {4,5,6} although they have nothing in common. The similarity between {1,2} and {1,3} is different from that between {5,10} and {6,10} although they have the same structure. The similarity between {1,3,5,7,92678} and {2,4,6,8,93412} is very high although they have nothing in common, while the similarity between {1,2,3,4,87154} and {1,2,3,75642,5} is close to 0 although they have a lot in common. -- Meni Rosenfeld (talk) 05:45, 22 December 2009 (UTC)[reply]
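The pathologies listed above can be reproduced directly by (mis)applying the cosine formula to raw product IDs, as in the original attempt:

```python
import math

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v)))

# misusing the formula on raw product IDs, as in the original attempt:
same_purchases = cosine([1, 2, 3], [3, 2, 1])   # identical baskets, yet < 1
no_common_item = cosine([1, 2, 3], [4, 5, 6])   # disjoint baskets, yet close to 1
print(round(same_purchases, 4), round(no_common_item, 4))  # 0.7143 0.9746
```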
See "Geometric Interpretation" in Pearson product-moment correlation coefficient. It uses {1,2,3,4,5,8} and {.11,.12,.13,.14,.15,.18}. I'm going to do some tests with centered vectors to see if the results make sense. According to the article, 1/-1 is highly correlated and 0 is no correlation. -- kainaw 06:08, 22 December 2009 (UTC)[reply]
This has nothing to do with purchases. Here each index represents a country, the first vector gives the GNP for each country and the second vector gives the poverty for each country. Taking the dot product (after centering) works, because you are multiplying matching quantitative measurements (the GNP of a country with the poverty of the same country).
In the purchasing scenario, you tried to multiply IDs (which of course cannot be multiplied) by matching them based on their position in the purchase list. So if the 8th item customer A purchased is a children's book (ID 134675) and the 8th item customer B purchased is a shotgun (ID 134677) (made up numbers), you count it as evidence for similarity. And if the 8th item customer A purchased is a children's book, while the 9th item customer B purchased is that very same book, you don't count it as anything.
I don't mean to sound disrespectful, but it seems you are biting a bit more than you can chew here. You shouldn't try understanding collaborative filtering algorithms if you've not yet mastered basic topics like Pearson's correlation coefficient. -- Meni Rosenfeld (talk) 06:33, 22 December 2009 (UTC)[reply]
I see my mistake now. In collaborative filtering, the term "similarity" is often used to mean "correlation". In actuality, those are two very different terms. I was trying to see how cosine produced a similarity when all it produces is a correlation. So, my initial assumption that cosine does not produce a valid similarity is correct if the definition of similarity is not rationalized to mean correlation. -- kainaw 12:01, 22 December 2009 (UTC)[reply]
That's not quite right. For sure, "correlation" is one thing and "similarity" is another. Indeed, the correlation between GNP and poverty has nothing to do with similarity. But nobody tries to imply that one means the other. Rather, it is claimed that the correlation between the features of two items can indicate similarity between the items. This may or may not be valid, depending on the features we choose.
In the case of Amazon, the "items" are customers. The features are the products they purchased, or more specifically, the ith feature is a 0/1 variable indicating if a customer purchased product i. It is claimed (or not. Where is the link to Amazon's algorithm?) that correlation between the features of customers indicates similarity between the customers. For example, customer 13 purchased products 1,2 but not 3, and customer 25 also purchased products 1,2 but not 3. This is used as evidence that customers 13 and 25 are similar (have the same shopping preferences, or whatever).
Of course, there are countless other ways to approach the problem of similarity, but representing items as feature vectors is very powerful. Even then, cosine similarity is just one of the ways to compute a correlation metric between feature vectors - and hence, by assumption, a similarity metric between items. -- Meni Rosenfeld (talk) 12:32, 22 December 2009 (UTC)[reply]
The length of A can be the number of non-zero coordinates in some contexts, e.g. coding theory, but in this case it means length in the Euclidean sense. The cosine formula includes the lengths of the vectors to allow for varying lengths of the vectors involved.--RDBury (talk) 05:08, 22 December 2009 (UTC)[reply]

Calculi (Non-Newtonian)

Everybody, I don't seem to understand what's going on in the pages "Other Calculi" here[1]. Can anyone explain it to me? Thanks!The Successor of Physics 06:21, 21 December 2009 (UTC)[reply]

I gather the idea is a variation on the definition of derivative using multiplication rather than addition. The result is something like a logarithmic derivative. Did you have a specific question?--RDBury (talk) 06:52, 21 December 2009 (UTC)[reply]
RDBury, I know that. Maybe I should restate my question. I meant that the bijective function φ there should have two inputs e.g. addition, x + y, x and y are two inputs. How come in those pages, the function φ only has one input?The Successor of Physics 08:03, 21 December 2009 (UTC)[reply]
The function φ is not supposed to be addition in ordinary derivatives and multiplication in multiplicative derivatives. Rather, it is the transformation that is applied to transform ordinary derivatives to new derivatives - it is the identity for ordinary derivatives (no transformation), and the exponential function for multiplicative derivatives (exponentiation transforms addition to multiplication - e^(x+y) = e^x e^y). -- Meni Rosenfeld (talk) 10:27, 21 December 2009 (UTC)[reply]
Thanks, Meni!The Successor of Physics 14:09, 21 December 2009 (UTC)[reply]
To make sure my conception is correct, so what you mean is: if f is the function with two inputs, e.g. multiplication, then f(x, y) = φ(φ^-1(x) + φ^-1(y)). Am I correct?The Successor of Physics 14:18, 21 December 2009 (UTC)[reply]
Precisely. -- Meni Rosenfeld (talk) 15:51, 21 December 2009 (UTC)[reply]
Thanks!The Successor of Physics 04:03, 22 December 2009 (UTC)[reply]
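As a numerical aside: assuming the usual definition of the multiplicative derivative as the limit of (f(x+h)/f(x))^(1/h), the φ = exp picture predicts it equals exp(f'(x)/f(x)). A quick check with an arbitrary example function:

```python
import math

def mult_derivative(f, x, h=1e-6):
    # discrete multiplicative difference quotient: (f(x+h)/f(x))^(1/h)
    return (f(x + h) / f(x)) ** (1.0 / h)

f = lambda x: math.exp(x ** 2)      # arbitrary positive example function
x0 = 1.5
numeric = mult_derivative(f, x0)
analytic = math.exp(2 * x0)         # exp((ln f)'(x)) = exp(f'(x)/f(x)) = e^(2x)
print(abs(numeric - analytic) < 1e-3)  # True
```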
Resolved

Article introducing complex numbers

I would like to request a new article introducing the concept of complex numbers. The current article does not introduce them in a way that's accessible to someone who does not already have a great deal of mathematical knowledge. After looking up educational resources elsewhere on the web I found the concept fairly straightforward and logical but I'm not qualified to write it myself.

Just the introduction to 'Complex number' contains 27 links to other articles, of which at least half are similarly dense and inscrutable. As it stands there's no way for someone to develop an understanding of these concepts from reading the wikipedia because there's no starting point, you just wind up clicking between articles full of thick and unelaborated jargon.

FTA: Complex numbers form a closed field somehow with real numbers. OK, so what's a closed field? Don't know, go to the article. OK, it's some type of field, what's a field? Go to the article, and before I've left the introduction I'm wondering what 'quintic relations' are or an 'integral domain' and if I'd only read the wiki I still wouldn't know what a complex number is or what it has to do with anything. Now I'm not averse to learning all this, I'd love to understand it, but clicking from article to article isn't helping. It's frustrating in a way that other areas of the wikipedia aren't, I don't experience this in the physics or computer science sections for example, if understanding one area depends on understanding another one can usually just click through and read the prerequisite article without falling down the rabbit hole. —Preceding unsigned comment added by 196.209.232.87 (talk) 14:39, 21 December 2009 (UTC)[reply]

Hmm. I can't actually see a Reference Desk question anywhere in your complaint. You could add your request to Wikipedia:Requested articles/Mathematics, or you could take it to Wikipedia talk:WikiProject Mathematics. Gandalf61 (talk) 14:56, 21 December 2009 (UTC)[reply]
your complaint belongs in Talk:Complex_number. Ask short questions here and get short answers. Say, Question: "what is a complex number?" Answer: "a complex number is an expression of the form a+ib where a and b are real numbers and i·i=−1". Go on, ask your next question. Bo Jacoby (talk) 14:59, 21 December 2009 (UTC).[reply]
@Gandalf61 - that is what I needed to know, will do, thx —Preceding unsigned comment added by 196.209.232.87 (talk) 15:10, 21 December 2009 (UTC)[reply]

Unfortunately, we do not write multiple articles on a particular concept (in accord with the guideline that Wikipedia is an encyclopedia, and not an introductory comprehensive textbook). However, I do not mind explaining the terms you have mentioned.

Before I proceed further, I would recommend that you read the article on rings, for this provides a reasonably basic introduction to the theory of rings, integral domains and fields (it would be appropriate for you to read from this section onwards).

The set of complex numbers, together with its two operations (addition and multiplication) may be defined as follows:

C = {a + bi : a, b in R}, where R is the set of real numbers, and i is the "imaginary unit"; it satisfies the relation i^2 = -1 or i·i = -1 (intuitively, it is a "root of -1").
If z_1 = a + bi and z_2 = c + di, we define their sum as z_1 + z_2 = (a + c) + (b + d)i.
If z_1 = a + bi and z_2 = c + di, we define their product as z_1 · z_2 = (ac - bd) + (ad + bc)i.
Hope this helps, and be sure to read this article from this point onwards. --PST 15:10, 21 December 2009 (UTC)[reply]
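The definitions above translate directly into code. A minimal sketch representing a + bi as the pair (a, b):

```python
def c_add(z1, z2):
    # (a + bi) + (c + di) = (a + c) + (b + d)i
    a, b = z1
    c, d = z2
    return (a + c, b + d)

def c_mul(z1, z2):
    # (a + bi)(c + di) = (ac - bd) + (ad + bc)i, using i*i = -1
    a, b = z1
    c, d = z2
    return (a * c - b * d, a * d + b * c)

print(c_add((2, 3), (1, -4)))   # (3, -1)   i.e. (2+3i) + (1-4i) = 3 - i
print(c_mul((2, 3), (1, -4)))   # (14, -5)  i.e. (2+3i)(1-4i) = 14 - 5i
print(c_mul((0, 1), (0, 1)))    # (-1, 0)   i.e. i*i = -1
```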
@above - thank you, clearer now. —Preceding unsigned comment added by 196.209.232.87 (talk) 15:50, 21 December 2009 (UTC)[reply]
I don't think people should need to read the Ring article to understand the Complex number article so there is definitely an issue with the Complex number article. Math articles tend to be written by mathies for mathies and unfortunately (and contrary to WP:MOSMATH) that sometimes includes articles that should be (at least partly) understandable to typical high school students. It's not a good idea to create new articles to solve this; some people have tried this with articles that have names like 'Introduction to X'. Not only do they amount to content forks but, judging from the amount of heat they generate in AfD discussions, they cause more problems than they solve. The correct solution is to have a non-technical, jargon free introductory section in each article that non-mathies are likely to come across. For the moment it would be a good idea to go over the Complex numbers article with an eye to making the introductory section more accessible, but maybe a more general review is in order.--RDBury (talk) 05:53, 22 December 2009 (UTC)[reply]
This is a discussion for Talk:Complex number or WT:WPM, not here. Algebraist 13:23, 22 December 2009 (UTC)[reply]

Is Principles of Mathematics a standard reading in math degrees? Is it still worth reading?--ProteanEd (talk) 17:36, 21 December 2009 (UTC)[reply]

Definitely not standard reading. It's worth reading from a historical perspective, but not as a way of learning logic. Mathematical logic has come on a long way in the last 100 years, so modern books are a better choice. It's also a very difficult read - I only got about half way through! --Tango (talk) 17:43, 21 December 2009 (UTC)[reply]
(edit conflict) It certainly wasn't mentioned in my degree programme. I haven't read the work, but from glancing at it, it doesn't seem to be a work of mathematics per se, but rather the philosophy of mathematics, which is not normally taught to mathematics undergraduates in any serious way. Even within the philosophy of mathematics, I believe Russell's logicism is rather out of fashion nowadays, though there are certainly still people around who are logicists in some sense. Algebraist 17:47, 21 December 2009 (UTC)[reply]

Reducible Polynomials With All But Constant Coefficient Equalling 1

Resolved

Is there any established theory for determining for what C>1 the polynomial x^n + x^(n-1) + ... + x + C, n even, is reducible? I was previously unaware of the facts that 1) for n=4 you get a reducible polynomial for C=12 and 2) for n=8 you get a reducible polynomial for C=20. Empirically, it appears these are the only cases (I ran a PARI/GP program to C=1000 and n=240, but it doesn't generate a related result if there is a smaller odd n with reducibility in addition to some even n, so this list might be a little short--it seems unlikely it is).Julzes (talk) 18:52, 21 December 2009 (UTC)[reply]

Reducible over what? Z? Algebraist 19:15, 21 December 2009 (UTC)[reply]
Yes, over Z. Thanks for reminding me.Julzes (talk)

I decided to just mark this as resolved. If anybody reading this happens to have known about these two oddballs, let me know, but it just looks like a nice problem to prove their uniqueness, and I'm sure there is no theory to them.Julzes (talk) 19:20, 21 December 2009 (UTC)[reply]
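For the record, the n=4, C=12 case is easy to confirm. The quadratic factors below are my own computation (not from the thread), verified by multiplying the coefficient lists:

```python
def poly_mul(p, q):
    # multiply polynomials given as coefficient lists, lowest degree first
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

# claimed factorization over Z: x^4 + x^3 + x^2 + x + 12 = (x^2 - 2x + 3)(x^2 + 3x + 4)
f1 = [3, -2, 1]   # x^2 - 2x + 3
f2 = [4, 3, 1]    # x^2 + 3x + 4
print(poly_mul(f1, f2))  # [12, 1, 1, 1, 1], i.e. x^4 + x^3 + x^2 + x + 12
```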

googol

A friend of mine & I decided to look for the googol as a power of 2 (even moderately smart people get bored). We never thought to use the Google calculator (we didn't even know it existed). So, the TI-83. Is 2^332.1928094886 really EXACTLY one googol? Seems amazing... —Preceding unsigned comment added by 174.18.161.113 (talk) 21:28, 21 December 2009 (UTC)[reply]

No, it can't be, because log2(10) is a transcendental number. See the Gelfond-Schneider theorem. --Trovatore (talk) 21:31, 21 December 2009 (UTC)[reply]
I think that wins the prize for biggest hammer used to crack a nut. Algebraist 21:34, 21 December 2009 (UTC)[reply]
Mm, fair enough, especially given that when I looked it up and thought it through, I realized that to apply G-S, you first need to show that log2(10) is irrational, which already suffices to answer the question. At first glance I didn't see an easier way of showing log2(10) is irrational than using G-S, but actually it follows easily from the fundamental theorem of arithmetic. --Trovatore (talk) 21:47, 21 December 2009 (UTC)[reply]
To me, the question does not seem serious. The person asking the question is well aware of the fact that if there are more digits the calculator cannot show them. In fact, I imagine that in order to get 13 significant figures on a TI-83, one must subtract (or divide) out the whole part of the exponent, and this process seems too advanced for someone who did not know the answer to the question asked.Julzes (talk) 22:55, 21 December 2009 (UTC)[reply]

Here's a really simple way to see this: Suppose

log_2(10) = m/n

where m, n are positive integers. Then

2^m = 10^n

where

10^n = 2^n · 5^n

and so

2^(m-n) = 5^n.

But that is impossible because it says an even number equals an odd number. Any high-school student will understand that one—no Gelfond–Schneider theorem needed. Michael Hardy (talk) 00:12, 22 December 2009 (UTC)[reply]

Yes, that's what Trovatore and I were alluding to above. Algebraist 00:37, 22 December 2009 (UTC)[reply]
Well, it is a bit simpler than the argument I had in mind, as it doesn't need the full FTA. --Trovatore (talk) 10:12, 22 December 2009 (UTC)[reply]
Michael Hardy, you could use this simpler method to prove it is transcendental

which is impossibly transcendental.The Successor of Physics 04:19, 22 December 2009 (UTC)[reply]
I wasn't trying to prove it was transcendental. But as far as "simpler" goes, the fact is any high-school student can understand my argument, whereas yours would have to rely on more sophisticated results such as Gelfond–Schneider. What exactly did you have in mind as your grounds for inferring that that number is transcendental? Gelfond–Schneider? Or something else? In order to use Gelfond–Schneider, you'd need to know that ln 5/ln 2 is irrational, and the proof of that is just what I gave. I suspect your comments lack all merit. Michael Hardy (talk) 05:11, 23 December 2009 (UTC)[reply]
To answer what I think the OP intended to ask - yes, 10^100 = 2^x exactly, where x is approximately 332.19280948873623478703194294. -- Meni Rosenfeld (talk) 05:00, 22 December 2009 (UTC)[reply]
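The figure quoted above is easy to reproduce with double-precision floats, along with an exact integer check that the googol sits strictly between consecutive powers of 2:

```python
import math

x = 100 * math.log2(10)   # exponent with 2**x == 10**100; irrational, as shown above
print(x)                   # approximately 332.1928094887362

# exact integer check: the googol lies strictly between 2^332 and 2^333
print(2**332 < 10**100 < 2**333)  # True
```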


December 22

Desperately Seeking a Faster Algorithm

Pari/GP is remarkably slow at determining whether a polynomial is irreducible. Now, it may be that the problem is just generally hard, but I find it difficult to believe that the following program to generate the smallest coefficients to build a sequence of irreducible polynomials cannot be made faster. It involves small positive coefficients, and I would think that there is at least a better way than simply using PARI/GP's polisirreducible function for the sizes of coefficients involved. Here is the program as it now stands (and a good many of the terms it output are at oeis:A171810):

\\ increase the coefficient of v^d until the polynomial becomes irreducible
x = 1; for(d = 1, 4000, c = 1; x += v^d; while(!polisirreducible(x), c++; x += v^d); print1(c, " "))

If anybody knows of or can think up an algorithm for determining irreducibility (over Z) for the special case of small positive coefficients, it will make the terms of the sequence given more open to study. As things stand, certain things about the coefficients are more mysterious than they might be with access to hundreds of thousands rather than merely thousands of terms. Much appreciation for any worthwhile answer.Julzes (talk) 03:16, 22 December 2009 (UTC)[reply]

I'm not an expert in the field, but you're already using a computer algebra program and the people who write them generally know what they are doing. Not that the one you're using is perfect, you might want to try some other ones to see if they work any better, but I think anything you're going to learn here will already be incorporated into most programs with a good reputation.--RDBury (talk) 06:32, 22 December 2009 (UTC)[reply]

Well, I do appreciate that point of view, and generally I'd also guess the polisirreducible function is about as good as possible, but I was wondering whether there might be something a little more tuned to small coefficients (mostly 1s, regular 2s, few 3s, one 4 and that's it). A function like polisirreducible is going to be set up for the most general case, and is unlikely to be close to optimal for polynomials that are so strongly biased toward small positive coefficients.Julzes (talk) 08:19, 22 December 2009 (UTC)[reply]

I just came back to share an intriguing result of a different, related, problem. Starting with constant coefficient 1, the problem is to create a sequence of polynomials that are relatively prime to each other using the smallest positive coefficient. While acting in no particularly orderly way up to the 89th degree, beginning there up to at least the 1000th the coefficients are all 1s at degrees not congruent to 3 modulo 5, and are 2s there. I'm not looking for, and I don't expect, an explanation.Julzes (talk) 09:41, 25 December 2009 (UTC)[reply]

Summation of finite differences

I am trying to do a definite sum from a to b, viz.:
sum [ (n+j)! / n! ]
I had thought to sum it by parts, i.e.:
sum [v * delta(u) ] = [ u*v {with appropriate limits for a and b} ] - sum [ u * delta(v) ]
where v is [ (n+j) ! / n! ] and delta(u) is ( 1 ).
Using information from previous posts:
delta [ (n+j) ! / n! ] = [ j * (n+j)! / (n+1)! ] = [ j / (n+1) ] * [ (n+j)! / n! ]
Summing delta(u) leads to ( n ), or to [ (n+1) - 1 ].
[Being a definite sum, it seems there should be no constant of summation to make it ( n+1 ).]
The sum(delta(u)) combines with delta(v) to give
sum { [ j * (n+j) ! / n! ] - [ ( j / (n+1) ) * (n+j)! / n! ] }
The first part { sum [ j * (n+j)! / n! ] } fits in nicely with the original sum [ (n+j)! / n! ].
But what to do with the second part { sum [ ( -j / (n+1) ) * ( (n+j)! / n! ) ] };
there remains a sum whose summand is equal to the original summand times -j/(n+1).
Keep integrating by parts and wind up with an answer involving an infinite series?
Or perhaps trying to do the original summation differently?ImJustAsking (talk) 14:11, 22 December 2009 (UTC)[reply]

Are you talking about the sum over n from a to b (with j fixed), or the sum over j from a to b (with n fixed)? Bo Jacoby (talk) 00:53, 23 December 2009 (UTC).[reply]

Sorry about the ambiguity: n goes from a to b, and j is a constant.ImJustAsking (talk) 19:21, 23 December 2009 (UTC)[reply]

So write it as j! times a sum of binomial coefficients, and the sum is just the difference of two binomial coefficients. --pma (talk) 22:03, 23 December 2009 (UTC)[reply]
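My reading of pma's hint, spelled out: (n+j)!/n! = j!·C(n+j, j), and the hockey-stick identity collapses the sum of binomial coefficients, giving sum_{n=a}^{b} (n+j)!/n! = j!·[C(b+j+1, j+1) - C(a+j, j+1)]. A quick numerical check of that closed form:

```python
# Check: sum_{n=a}^{b} (n+j)!/n!  ==  j! * [C(b+j+1, j+1) - C(a+j, j+1)],
# i.e. j! times a difference of two binomial coefficients.
from math import comb, factorial

def direct_sum(a, b, j):
    return sum(factorial(n + j) // factorial(n) for n in range(a, b + 1))

def closed_form(a, b, j):
    return factorial(j) * (comb(b + j + 1, j + 1) - comb(a + j, j + 1))

for a, b, j in [(0, 5, 3), (2, 10, 4), (1, 7, 1)]:
    assert direct_sum(a, b, j) == closed_form(a, b, j)
print("closed form agrees with the direct sum")
```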

Can I do the following:
Because delta[ (n+j)! / n! ] = j * (n+j)! / (n+1)!
therefore sum{ delta[ (n+j)! / n! ] } = sum { j * (n+j)! / (n+1)! }
where “sum” goes from a to b,
so that by exchanging sides sum{ j * (n+j)! / (n+1)! } = sum{ delta[ (n+j)! / n! ] }
and cancelling “sum” and “delta”, one gets sum{ j * (n+j)! / (n+1)! } = [ (n+j)! / n! ]
Question: may I assume that the quantity on the right side is to be evaluated at a and b+1?
Of course this does not answer my original question, but it would show a way for solving it.ImJustAsking (talk) 23:54, 23 December 2009 (UTC)[reply]
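As to the question at the end: yes, summing the difference telescopes, so the right side is evaluated at b+1 and a. A numerical spot check, writing F(n) = (n+j)!/n! so that delta F(n) = F(n+1) - F(n) = j·(n+j)!/(n+1)!:

```python
# Check: sum_{n=a}^{b} j*(n+j)!/(n+1)!  ==  F(b+1) - F(a), where F(n) = (n+j)!/n!.
from fractions import Fraction
from math import factorial

def F(n, j):
    return Fraction(factorial(n + j), factorial(n))

def lhs(a, b, j):
    return sum(Fraction(j * factorial(n + j), factorial(n + 1))
               for n in range(a, b + 1))

for a, b, j in [(0, 6, 2), (3, 9, 5)]:
    assert lhs(a, b, j) == F(b + 1, j) - F(a, j)
print("telescoping sum: evaluate F at b+1 and at a")
```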

Stationary Points in Higher Dimensions

To identify a stationary point of a function of more than one variable, more specifically f(x,y), do you simply have to identify the points at which every one of its partial derivatives is zero? Also, how do you go about classifying the stationary points as maxima, minima and saddle points? Thanks 92.0.129.48 (talk) 18:58, 22 December 2009 (UTC)[reply]

Yes, a stationary point is one in which f is differentiable and all partial derivatives are 0.
The first step of classification uses the signs of the eigenvalues of the Hessian matrix. In the case of two variables, this reduces to denoting A = f_xx, B = f_xy, C = f_yy, and looking at D = AC − B² and A. If either is 0, higher order derivatives are required. If D < 0, it is a saddle point. If D > 0, then it is a local minimum if A > 0 and a local maximum if A < 0. -- Meni Rosenfeld (talk) 20:28, 22 December 2009 (UTC)[reply]
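The two-variable test is easy to run mechanically with a CAS; a sketch with SymPy (the example functions and the helper name are mine):

```python
# Second derivative test at a stationary point:
# D = f_xx*f_yy - f_xy^2; D < 0 saddle, D > 0 min/max according to sign of f_xx.
import sympy as sp

x, y = sp.symbols('x y')

def classify(f, point):
    fxx = sp.diff(f, x, 2).subs(point)
    fyy = sp.diff(f, y, 2).subs(point)
    fxy = sp.diff(f, x, y).subs(point)
    D = fxx * fyy - fxy**2
    if D < 0:
        return "saddle"
    if D > 0:
        return "minimum" if fxx > 0 else "maximum"
    return "inconclusive (need higher-order terms)"

p = {x: 0, y: 0}
print(classify(x**2 + y**2, p))   # minimum
print(classify(x**2 - y**2, p))   # saddle
print(classify(-x**2 - y**2, p))  # maximum
```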
In case of nondegenerate critical points the classification is quite simple even in several variables, and it is given by the Morse lemma. Check also Sylvester's law of inertia. --pma (talk) 00:00, 23 December 2009 (UTC)[reply]

Symbol for 'such that'

Hi all,

I was just wondering if there's any symbol (in terms of ∀, ∃, etc.) which is typically used to mean "Such that" (there exists A such that B, for example) in mathematics? I'm aware of the vertical bar |, and occasionally the colon, but sometimes these can be unclear in the context: are there any others?

Many thanks, 86.26.6.36 (talk) 22:29, 22 December 2009 (UTC)[reply]

Table of mathematical symbols only lists the colon. -- kainaw 22:40, 22 December 2009 (UTC)[reply]
I usually just use "s.t.". When defining a set as {A such that B} you can use a vertical bar or colon, but I wouldn't use them in any other context - too confusing. --Tango (talk) 22:42, 22 December 2009 (UTC)[reply]
I've seen ∋ used to mean "such that". Also a period is often used immediately after a ∃ to mean "such that". So the existence of x satisfying φ(x) could be written ∃x ∋ φ(x) or ∃x. φ(x). Of course english words are usually preferable to any of those. Staecker (talk) 22:49, 22 December 2009 (UTC)[reply]
You don't need any symbol, after the existential quantifier, to mean such that. Usually you just put either the formula that follows the quantifier, or the quantifier (plus variable) itself, in round brackets.
The ∋ symbol could be useful to translate other instances of such that, such as "given x such that φ(x) holds", in which you have no explicit quantifier symbol. In my experience, however, it sees fairly limited use. --Trovatore (talk) 22:58, 22 December 2009 (UTC)[reply]
Oh, it also occurs to me: The reason you don't use the ∋ symbol in existential statements is that such that doesn't actually mean anything in existential statements. "There exists x such that φ(x) holds" is just ∃x φ(x).
It does mean something, on the other hand, in universal statements, like "For every x such that φ(x) holds, τ(x) also holds". That statement translates as ∀x (φ(x) → τ(x)), but could also be written (∀x ∋ φ(x)) τ(x).
But again, it could be written that way; usually it isn't. --Trovatore (talk) 23:10, 22 December 2009 (UTC)[reply]
Great, thankyou :) 86.26.6.36 (talk) 23:27, 22 December 2009 (UTC)[reply]
Beware that you can't expect that a random reader (even assumed mathematically literate) will understand ∋ used with this meaning if it's not explicitly explained in the surrounding text (and then what would be the point?). For example, I managed to earn a Ph.D. in a fairly logic-heavy area of computer science without ever seeing ∋ used to mean anything but "contains as an element". –Henning Makholm (talk) 13:33, 23 December 2009 (UTC)[reply]
I've never seen that symbol used for "contains as an element". I've seen ∈ used for that purpose, though.--COVIZAPIBETEFOKY (talk) 13:48, 23 December 2009 (UTC)[reply]
But ∈ means "contained as an element". -- Meni Rosenfeld (talk) 13:56, 23 December 2009 (UTC)[reply]
... thus Harry ∈ {Harry, Sally} and {Harry, Sally} ∋ Harry. Gandalf61 (talk) 14:07, 23 December 2009 (UTC)[reply]
Shot myself in the foot, there, didn't I? Whoops... --COVIZAPIBETEFOKY (talk) 14:33, 23 December 2009 (UTC)[reply]
I think that usage of ∋ is even more obscure than the "such that" meaning. The element almost exclusively goes on the left and the set of which it's an element on the right; there's almost never a reason to reverse them. --Trovatore (talk) 19:29, 23 December 2009 (UTC)[reply]
I wouldn't do it in a formal paper, but I often see good reason to use it this way. How about, "Let x ∈ X and consider an open set U ∋ x" is sometimes nicer than "... and consider an open set U with x ∈ U". Staecker (talk) 20:21, 23 December 2009 (UTC)[reply]
That's true; good example. --Trovatore (talk) 21:30, 23 December 2009 (UTC)[reply]
I've seen ∋ used once for "such that". I find it really ugly. --pma (talk) 15:35, 24 December 2009 (UTC)[reply]

Cartoon books about mathematics

Are there any cartoon or other fun books that teach mathematics? In particular at a level equivalent to what we would call GCE "A" level in England (and Wales)? 92.24.76.99 (talk) 22:59, 22 December 2009 (UTC)[reply]

Don't know bupkus about A levels. But sure, things like Prof. E McSquared's Calculus Primer: Expanded Intergalactic Version (ISBN 0971462402) are out there. Do a search for "cartoon calculus", for example. --jpgordon::==( o ) 23:57, 22 December 2009 (UTC)[reply]
I'd say Murderous Maths but that only runs to about GCSE level (despite what our article may say about age range), and in particular is missing subjects solely taught at A level, such as calculus. - Jarry1250 [Humorous? Discuss.] 17:30, 24 December 2009 (UTC)[reply]


December 23

Functional Analysis

Where can I get a very basic introduction to the current research directions in functional analysis? Also I am interested in knowing about applications of Ramsey theory to functional analysis.[2] Thanks-Shahab (talk) 04:44, 23 December 2009 (UTC)[reply]

Edit Conflict
A good (and reasonably basic) book on the subject would be "Functional Analysis" by Walter Rudin. Alternatively, if you wish to learn about the theory of C* algebras, you could read "An Invitation to C* algebras" (in the GTM series).
Prior to studying functional analysis, it would be good to have a strong background in point-set topology, the topology of metric spaces, linear algebra, and ring theory. Although I think that you already have such a background, it is especially important to have a ring-theoretic intuition (or an intuition of linear transformations); for instance, it would help to be acquainted with a result of the nature of the Jacobson density theorem (somewhat related to the Von Neumann bicommutant theorem in functional analysis). In fact, a strong background in noncommutative ring theory would help should you wish to delve deeper into the subject.
Perhaps, it would be advisable to read the articles operator algebra, operator topology, and Von Neumann algebra, for this may give you a sense of the sorts of basic notions encapsulated in functional analysis. All in all, the two books I suggested may be useful (though there are other excellent texts), but it is important to have a good feel for linear transformations. Might I also add that there are many sorts of branches of functional analysis; the one I have emphasized here does not really encapsulate mathematical physics and the geometry of Banach spaces (note also noncommutative geometry)? --PST 05:34, 23 December 2009 (UTC)[reply]
Sorry - I made the above post before you altered your inquiry to note Ramsey theory. --PST 05:34, 23 December 2009 (UTC)[reply]
Now that you have mentioned Ramsey theory, the book "Geometric Functional Analysis and Its Applications" by Richard B. Holmes, may be appropriate (it is a book in the GTM series). --PST 05:40, 23 December 2009 (UTC)[reply]
I'm a big fan of Kreyszig's Introductory Functional Analysis with Applications which is very clear and well written, though doesn't have any Ramsey theory 86.15.141.42 (talk) 12:45, 23 December 2009 (UTC)[reply]
Thank you both. I have obtained the recommended books and will start reading them.-Shahab (talk) 16:49, 23 December 2009 (UTC)[reply]

reduction algorithm

What algorithm will reduce to minimum form an equation consisting of polynary variables? 71.100.6.206 (talk) 04:45, 23 December 2009 (UTC) [reply]

I thought there would only be a network of 10 links between five people. But the man here says there are 120: http://www.ted.com/index.php/talks/bruce_bueno_de_mesquita_predicts_iran_s_future.html How does he calculate a figure of 120, not 10? 92.29.68.169 (talk) 16:03, 23 December 2009 (UTC)[reply]

Can you indicate where he said that? Anyway, so he may have talked about ways to arrange 5 people in a line or something. -- Meni Rosenfeld (talk) 16:13, 23 December 2009 (UTC)[reply]
Interesting... there are 10 lines on his diagram. Either he's simply wrong (which seems unlikely, since he did include the diagram and one would hope he can count to 10!) or he means something different by "link". He talks about one person knowing what others are saying to each other, so if we count things like "A thinks B has said X to C" (where A-E are people and X is an idea) as a link then there are far more than 10. There are 120 different ways to order the five people (you have 5 choices for the first, 4 for the second and so on), so there are 120 links of the type "A thinks that B thinks that C thinks that D thinks that E thinks X". It could be that he's talking about that. He doesn't explain it at all well, though. --Tango (talk) 16:34, 23 December 2009 (UTC)[reply]
PS My greater concern is about the 90% accuracy claim. That is a completely meaningless number. First of all, we need to know if the predictions were made before or after the events happened - it is far easier to come up with a method that "would have" predicted the outcome once you know what the outcome was. Secondly, we need to know how well other methods predicted those outcomes (eg. just surveying experts and seeing what most of them say is likely to happen). --Tango (talk) 16:37, 23 December 2009 (UTC)[reply]

Why does a*b give the area of a rectangle? / Why does arithmetic give meaningful geometric results?

All my life I have known that a*b gives the area of a rectangle with sides a and b. It's repeated so often that I surely don't doubt it. I've realized, though, that I don't feel like I have a solid understanding of why it's true. It seems like something that needs further explanation.

For rectangles with integer sides, there's an explanation that's at least mostly satisfying:

  • By definition, the area of a figure is the # of 1x1 unit squares that fit inside it
  • If a rectangle has integer sides a and b, then you can fit an axb array of 1x1 squares inside it. (This seems like it could use some kind of justification of its own, but it's at least pretty intuitive to visualize.)
  • We know that a*b is a good way to count an axb array of objects. (If you have any doubts there, they can be addressed in this case by thinking of multiplication as repeated addition.)
  • So a*b is the # of 1x1 squares inside an axb rectangle.
  • So, by definition, a*b is the area of the rectangle.

Moving beyond integers it seems more mysterious to me. One way to phrase the mystery is this: How does a*b "know" how many 1x1 squares are inside an axb rectangle? If we're talking about real numbers, we can't just count object arrays anymore, so the above justification won't work.

One possibility I've encountered is that maybe you shouldn't think of the area of a figure as the # of 1x1 squares in it but rather as the ratio between its area and that of a 1x1 unit square. (See http://www.math.ubc.ca/~cass/graphics/manual/pdf/ch2.pdf) But I haven't figured out whether looking at area in that different way could make the connection between multiplication and area seem less mysterious for real numbers.

For context, this may be part of a larger confusion of mine about how arithmetic relates to geometry: On one hand, it seems like real numbers are defined axiomatically (I know there are other approaches, but see http://en.wikipedia.org/wiki/Real_number#Axiomatic_approach), and if you derive an algorithm for multiplication, you do that from the axioms for fields and such, without consulting geometric facts in any way. And yet, having done so, you wind up with algorithms/formulas that can be used to find the area of rectangles. What is it about this abstractly defined operation of multiplication that makes it suitable for answering anything about geometry? And what makes it suitable for answering questions about area in particular?

Ryguasu (talk) 21:56, 23 December 2009 (UTC)[reply]

I think maybe the first step is for you to ask yourself just what you mean by "area", as distinct from the product of the sides of the rectangle, which is usually taken to be pretty much the definition. If it turns out that your meaning is motivated by physical reality, you might check out The Unreasonable Effectiveness of Mathematics in the Natural Sciences, which raises questions for which there are not yet any generally accepted satisfactory answers. --Trovatore (talk) 22:02, 23 December 2009 (UTC)[reply]
Let's assume we have defined the concept of 'shape', and that you are willing to accept the following axioms regarding area:
  • A 1x1 square has area 1.
  • The area of a shape is unchanged when you translate or rotate it (ie. move it)
  • Placing two shapes adjacent to each other so there is no overlap results in a new shape whose area is the sum of the original shapes.
  • If one shape can be translated/rotated to completely cover another, the area of the first is larger than the area of the second.
As you have already demonstrated, the area of an axb rectangle where a and b are integers can be established by adding together several 1x1 squares. This gives a*b as their area.
Similarly, if you have a (1/a)x(1/b) rectangle, you can show that its area must be (1/a)*(1/b) by adding the same rectangle to itself a*b times in such a way as to make a 1x1 square. If we call the area A, this means that A*a*b=1, or A=1/(a*b). It is then just as easy to show that any (a/b)x(c/d) rectangle has area (a/b)*(c/d)=(ac)/(bd).
To show that the area of a qxr rectangle is q*r, where q and r are any real numbers, you can bound the area of the qxr rectangle above and below by use of rational-sided rectangles and the fourth axiom, and get the bounds arbitrarily close to q*r. Then the only possible area is q*r.
HTH. --COVIZAPIBETEFOKY (talk) 22:41, 23 December 2009 (UTC)[reply]
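The squeezing step above can be illustrated numerically: bound the qxr rectangle between rational-sided rectangles and watch the two bounds close in on q*r. A sketch in Python (the sample side lengths are arbitrary):

```python
# Bound the area of a q x r rectangle between rational under- and over-
# approximations whose sides are multiples of 1/denom; both bounds -> q*r.
from fractions import Fraction
from math import floor, ceil

def area_bounds(q, r, denom):
    """Lower and upper rational bounds on the area of a q x r rectangle."""
    low = Fraction(floor(q * denom), denom) * Fraction(floor(r * denom), denom)
    high = Fraction(ceil(q * denom), denom) * Fraction(ceil(r * denom), denom)
    return float(low), float(high)

q, r = 2 ** 0.5, 3 ** 0.5   # irrational sides
for denom in (10, 1000, 100000):
    low, high = area_bounds(q, r, denom)
    assert low <= q * r <= high
print("rational bounds squeeze down to q*r")
```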

Area is additive. If you have two rectangles in a plane sharing a common side, so that their union is a rectangle, then the area of that larger rectangle is the sum of the two areas. That's why. Michael Hardy (talk) 07:52, 24 December 2009 (UTC)[reply]

... which determines area as a function of a and b up to a scalar constant, which is set by our choice of units. If we measure lengths in metres and areas in square metres then the constant is 1 and area = ab; if we measure lengths in picometres and areas in barns then area = 10,000ab. Gandalf61 (talk) 11:21, 24 December 2009 (UTC)[reply]
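A quick arithmetic check of the units example above (1 pm² = 10^4 barn):

```python
# Sanity check: lengths in picometres, areas in barns -> constant is 10,000.
pm = 1e-12      # metres
barn = 1e-28    # square metres
print(pm * pm / barn)   # ~10000, so area = 10,000 * a * b in these units
```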
Imagine a small, unit square made of sticky paper. Think of the process of measuring area as covering the surface of an object (say, a cup) with such squares. When covering the object, try to minimize gaps and overlaps. The number of squares needed to cover the surface, is your best approximation of its area. If you repeat the process, with squares that are 1/4 of a unit square, you'll be able to do a better job of minimizing gaps and overlaps. (If that's not immediately intuitive, think of what will happen if you increase the size of the sticky paper squares). Now, your area is the number of squares, divided by four. Repeat with squares that are 1/16, 1/64, 1/256, 1/1024 ... unit squares. By doing so, you will get a closer and closer approximation of the real number that is the object's area. --NorwegianBlue talk 15:02, 24 December 2009 (UTC)[reply]
The additivity of area is, of course, necessary to move beyond rectangles and to consider shapes such as triangles. I've always taken it as axiomatic, but am happy to justify it on the painting analogy of considering how much "cover" is required.→→86.155.184.27 (talk) 15:48, 24 December 2009 (UTC)[reply]
(Continuation of sticky paper post, after peeling a ton of potatoes):
Now imagine measuring the area of a rectangle of arbitrary dimensions by tiling it with unit squares. You won't have the problem of overlaps, but will have to decide whether you want to leave a small uncovered strip at (say) the right edge and bottom edge, or to cover these strips, thus covering a surface that is larger than the rectangle. Imagine doing both, getting a low estimate and a high estimate of the area of the rectangle. In both cases, your estimate of the area will be the product of the number of unit lengths that fit along each edge. When you repeat this process with squares that are tinier and tinier fractions of a unit square, you can make the difference between the estimates as small as you want. At each step, the area will be the product of the number of squares that fit along each edge, divided by 4, 16, 64, 256, 1024, ... or, equivalently, the product of the number of squares that fit along the top edge divided by 2, 4, 8, 16, 32, ... and the number of squares that fit along the left edge divided by 2, 4, etc. The number of squares that fit along an edge divided by 2, 4, 8, 16, 32 ... approaches the real number that is the length of the edge, and the product of the number of squares that fit along each edge, divided by 4, 16, 64, 256, 1024..., approaches the area. --NorwegianBlue talk

In answer to your second question, the best answer I can think of (and there may be a better one) is that arithmetic is, in some sense, defined with geometry in mind. The properties of addition and multiplication have geometric counterparts; for instance, the distributive property of multiplication over addition can be justified geometrically for positive real numbers by representing a(b+c) as a rectangle whose sides are a and b+c, and noticing that we can also represent the same rectangle as a juxtaposition of two rectangles, axb and axc, giving a*b+a*c.

Don't get me wrong; numbers and lengths and areas are distinct concepts. But the first application of numbers was probably to measure geometric constructs, and geometry has had a big impact on the development of numbers, so that's probably historically the best explanation. --COVIZAPIBETEFOKY (talk) 18:07, 24 December 2009 (UTC)[reply]

December 24

Green's second identity and Green's functions for the Laplacian

Hi all,

I'm trying to prove that, for the Green's function for the Laplacian, G(r;r0), in any arbitrary 3D domain, symmetry holds between r and r0; i.e. G(r;r0) = G(r0;r). My friend suggested I should try using the Second Green's identity (I sometimes wonder if my life would be a more interesting place if Green were never born!), but I can't seem to get anything out, perhaps I'm being slow this time of night.

Does anyone else have any luck using Green's 2nd identity? Thanks very much, Delaypoems101 (talk) 02:30, 24 December 2009 (UTC)[reply]

sequence space

Resolved

I'm trying to prove that the sequence space of all complex sequences is a metric space with the metric d(x,y) = sum_{j=1}^∞ (1/2^j) |x_j - y_j| / (1 + |x_j - y_j|). My questions are how can I show that this series is always convergent and why does d(x,y)=0 imply x=y. Thanks-Shahab (talk) 06:36, 24 December 2009 (UTC)[reply]

Doesn't really matter, d fails to satisfy the triangle inequality.--RDBury (talk) 07:18, 24 December 2009 (UTC)[reply]
No it satisfies the triangle inequality. I can reproduce the proof for that given in my book.-Shahab (talk) 07:26, 24 December 2009 (UTC)[reply]
My apologies, I got mixed up when I was checking it.--RDBury (talk) 12:13, 24 December 2009 (UTC)[reply]
(ec) Second question is easy: all fractions are non-negative, so d() is a sum of non-negative terms, and thus can only be zero if all terms are zero, which implies all numerators are zero, so x=y. Now the first question gets easy: as all terms are non-negative AND each fraction |x_j - y_j| / (1 + |x_j - y_j|) is less than 1 (because 1 - t/(1+t) = 1/(1+t), which is a reciprocal of something greater than or equal to 1), the sum is dominated by the convergent geometric series sum_j 2^-j = 1. --CiaPan (talk) 07:25, 24 December 2009 (UTC)[reply]
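The bound is easy to see numerically. A sketch, assuming the standard Kreyszig metric d(x,y) = sum_j 2^-j |x_j - y_j| / (1 + |x_j - y_j|), truncated here to finitely many terms:

```python
# Truncated d(x,y): each term is < 2^-j, so the full series is dominated
# by sum_j 2^-j = 1, no matter how large the sequence entries are.
import random

def d(x, y, terms=50):
    return sum(
        2.0 ** -(j + 1) * abs(x[j] - y[j]) / (1 + abs(x[j] - y[j]))
        for j in range(terms)
    )

random.seed(1)
x = [random.uniform(-1e6, 1e6) for _ in range(50)]
y = [random.uniform(-1e6, 1e6) for _ in range(50)]
z = [random.uniform(-1e6, 1e6) for _ in range(50)]

assert d(x, y) < 1                              # bounded despite huge entries
assert d(x, z) <= d(x, y) + d(y, z) + 1e-12     # triangle inequality, spot check
assert d(x, x) == 0
print("metric spot checks pass")
```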
Thank you, it's clear. Instead of saying d() is a sum of non-negative terms, and thus can only be zero if all terms are zero isn't it more appropriate to say that d(x,y)=0 is a limit of a monotonic increasing sequence of non-negative terms which is only possible if all terms are zero. I tend to think of series as sequences only.-Shahab (talk) 07:39, 24 December 2009 (UTC)[reply]
Both are valid arguments. CiaPan's argument is rooted on the assertion that if sum_j a_j = 0 is a convergent sum, with each term a_j in the sum non-negative, then a_j = 0 for all j. The argument you have suggested is rooted on the assertion that if s_k is the kth partial sum of a convergent series of non-negative terms with limit 0, then s_k = 0 for all k. Essentially, both arguments are correct (and similar in nature). However, you are correct to note that in a situation where basic intuition does not apply, it is often more appropriate to employ a formal argument. --PST 09:13, 24 December 2009 (UTC)[reply]
By the way, which book are you studying? --PST 09:14, 24 December 2009 (UTC)[reply]
Kreyszig's. I found an online copy.-Shahab (talk) 09:40, 24 December 2009 (UTC)[reply]
Note that you can use other functions in place of t/(1+t) in the definition of d(): precisely, any bounded continuous subadditive increasing function φ such that φ(0)=0 and φ(t)>0 if t>0 produces a distance on the space of sequences. These are topologically equivalent, and induce the product topology. For instance, φ(t) = min{1, t} is often used. --pma (talk) 12:24, 24 December 2009 (UTC)[reply]
Thanks everyone and merry christmas-Shahab (talk) 04:34, 25 December 2009 (UTC)[reply]

Rolling sphere

An unconstrained sphere resting on the top of a fixed one is in unstable equilibrium. Suppose a minute disturbance (e.g. it's given an initial velocity of one millionth of the fixed sphere's radius per second) starts it rolling under gravity. Assuming no slipping, are there any circumstances which will make it leave the surface of the fixed one before the 90° point has been reached?→→86.155.184.27 (talk) 17:21, 24 December 2009 (UTC)[reply]

I suspect the answer might depend on whether "rolling" means it's not "slipping".
But then on another couple of seconds' thought (I haven't thought this one through) I would think it would have to leave the surface before reaching the 90° point, because its motion has a horizontal component and there's inertia. Reaching the 90° point would mean going straight down with no horizontal component to its motion. Michael Hardy (talk) 19:53, 24 December 2009 (UTC)[reply]
OK, now I see there's an explicit statement that it's not slipping. I don't know whether that actually matters. Michael Hardy (talk) 06:43, 25 December 2009 (UTC)[reply]
This problem is suited for Lagrangian mechanics. Bo Jacoby (talk) 23:09, 24 December 2009 (UTC).[reply]
I think in the general case of an object rolling off the fixed sphere (assuming the size of the rolling sphere is much smaller than that of the fixed one), it will depart at cos θ = 2/(3 + I/(mr²)) (where I is the moment of inertia of the sphere, m its mass, and r its radius). Note that this reduces to a constant in the special case of a particle sliding down the sphere (I = 0), giving cos θ = 2/3. Michael is completely right---since the object acquires some horizontal velocity, it has to leave before the 90° point. You can analyse this by working out the velocity as a function of angle (by conserving energy), then working out the angle at which the gravitational pull on the object (directed towards the sphere) is no longer enough to keep it in circular motion around it. — Zazou 00:40, 25 December 2009 (UTC)[reply]
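The standard analysis sketched above is easy to reproduce numerically: writing k = I/(mr²), energy conservation gives v² = 2gR(1-cos θ)/(1+k), and the contact force vanishes when g cos θ = v²/R, i.e. cos θ = 2/(3+k). A sketch:

```python
# Departure angle (from the top) for a body rolling without slipping off a
# fixed sphere: cos(theta) = 2 / (3 + k), with k = I/(m r^2).
from math import acos, degrees

def departure_angle_deg(k):
    return degrees(acos(2.0 / (3.0 + k)))

print(departure_angle_deg(0))       # sliding particle: arccos(2/3), about 48.2 deg
print(departure_angle_deg(2 / 5))   # uniform solid sphere: arccos(10/17), about 54.0 deg
```

Both angles are well short of 90°, consistent with the remarks above.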

December 25

Easy way of deciding if two lines cross?

I am writing a computer program where many lines are stored as a pair of x,y coordinates. I would like to be able to decide if two lines cross. What would be the easiest way to program this please? I can think of changing the lines into y=mx+c format, doing a simultaneous equation (I think) to find the point of intersection, and then checking that this intersection point is within each line segment. But is there any easier way please? (I am not fluent with matrices and the language I am using has no matrix commands). Maybe something regarding the angles between the four points - I'm guessing. A simple way to find the x,y coordinate of the point of intersection would also be useful. Thanks 92.24.44.4 (talk) 14:52, 25 December 2009 (UTC)[reply]

Just determine whether or not they are parallel. If they're parallel, then either they never intersect or they're the same line. If they're not parallel, then they must intersect at exactly one point. No need to determine the point of intersection. I'll leave it as an exercise to you to figure out how to determine if they're parallel, and to explain why this technique doesn't work in 3 dimensions. --COVIZAPIBETEFOKY (talk) 16:58, 25 December 2009 (UTC)[reply]

Sorry, I should have made clearer that the lines are not infinite. They can be non-parallel and still not cross. 78.146.194.118 (talk) 17:07, 25 December 2009 (UTC)[reply]

Your method sounds best to me. You should first check they aren't the same line (if they are you just need to check the order of the endpoints to see if they overlap) then that they aren't parallel (if they are and they aren't the same line, they won't intersect) and then you can find the point of intersection and see if it is in both lines. To get the intersection point from two equations, y=mx+c and y=nx+d, you can just do x=(d-c)/(m-n) (to derive that just put mx+c=nx+d and rearrange) and then plug that into y=mx+c to get y. --Tango (talk) 17:37, 25 December 2009 (UTC)[reply]
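The slope-and-intercept recipe above translates directly into code; a Python sketch (note that the y=mx+c form breaks down for vertical segments, which this ignores, and exactly collinear overlapping segments are not handled):

```python
# Do segments P1-P2 and P3-P4 cross? Compute both lines as y = m*x + c,
# solve m*x + c = n*x + d for x, then check x lies in both x-ranges.
def segments_cross(p1, p2, p3, p4, eps=1e-12):
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    m = (y2 - y1) / (x2 - x1)      # slope of first line (fails if vertical)
    n = (y4 - y3) / (x4 - x3)      # slope of second line
    if abs(m - n) < eps:
        return False               # parallel: no single crossing point
    c = y1 - m * x1
    d = y3 - n * x3
    x = (d - c) / (m - n)          # x-coordinate of the intersection
    return (min(x1, x2) - eps <= x <= max(x1, x2) + eps and
            min(x3, x4) - eps <= x <= max(x3, x4) + eps)

print(segments_cross((0, 0), (2, 2), (0, 2), (2, 0)))   # X-shaped pair: cross
print(segments_cross((0, 0), (1, 1), (2, 3), (3, 2)))   # lines meet off-segment
```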

I'm wondering if the four end points of the two lines would always make a polygon with a concave part in it if they do not cross? 78.146.194.118 (talk) 17:49, 25 December 2009 (UTC)[reply]

One way you could do it is that if the segment from A to B and from C to D don't cross then either (B-A)×(C-A) and (B-A)×(D-A) will have the same sign or (D-C)×(A-C) and (D-C)×(B-C) will have the same sign. Here × is the cross product, A×B = x_A·y_B - x_B·y_A. There might be some more efficient way to get to that though.
For the intersection point, I think it should be A + (((C-A)×(D-C))/((B-A)×(D-C)))(B-A) if I didn't screw anything up. You could also use that intersection point to decide if the segments cross, although I think this way is more computationally intensive unless you need the intersection point anyway. Rckrone (talk) 21:23, 25 December 2009 (UTC)[reply]
(Answering 78.*)... No, they do not necessarily make a concave polygon. This is a common homework or quiz question in algorithms programming. Nothing in the question assumes that the direction of the lines is from the Y axis towards infinity. One may be right-to-left. The other may be left-to-right. This creeps in again in processor/ALU design. Division is a very nasty time consumer. Comparison is not. So, using less-than/greater-than, you can sort the points to form a convex polygon. Then, if you go around the four points in a clockwise manner, you can detect that each turn is a right turn by only using subtraction (which is actually a very cheap addition process inside the computer). -- kainaw 21:47, 25 December 2009 (UTC)[reply]
I forgot to mention that some student always comes up with the idea of just comparing the endpoints. It is a bit trivial to come up with an example that nullifies anything that depends solely on comparing endpoints. -- kainaw 21:50, 25 December 2009 (UTC)[reply]

See also Wikipedia:Reference desk/Archives/Mathematics/2009 October 4#Best way to calculate if a line crosses another line, or a polygon.. Is there an article to point to on this?--RDBury (talk) 21:59, 25 December 2009 (UTC)[reply]

To the OP: please do look at the link that RDBury gives, and the explanation that RDBury and BenRG give there. Intuitively, if we wish to check whether AB and CD cross, we check whether A and B are on opposite sides of the line CD, and whether C and D are on opposite sides of AB. To check which side of a line that a point is on, we compute the appropriate cross product (or equivalently, the signed area of the triangle the three points form). BenRG provides code in C++; while I haven't checked it myself, it looks correct.

Don't use methods that involve computing intersection points, because these are generally more difficult to code correctly (with special cases like infinite slope, etc.), not numerically stable, and slower (although speed is unlikely to be a concern either way). Although there is nothing mathematically wrong with this approach (and this is a mathematics reference desk, after all), from a programming perspective it is not preferred. Eric. 131.215.159.171 (talk) 23:54, 25 December 2009 (UTC)[reply]

The formula (x2-x1)(y3-y1)-(y2-y1)(x3-x1) is commonly referred to as "turn" in computer programming - mainly in graphics. Going from point 1, to point 2, to point 3, if the value is positive, you made a left turn. If it is negative, you made a right turn. If it is zero, the three points are on a line (note: it could be a 180 degree turn). Calculating turn comes in handy in a lot of graphics programming. -- kainaw 02:25, 26 December 2009 (UTC)[reply]
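For reference, the turn formula in code, combined with the opposite-sides test described above (degenerate collinear cases are ignored in this sketch):

```python
# turn(p1, p2, p3) > 0: left turn; < 0: right turn; == 0: collinear.
def turn(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)

# Segments AB and CD cross iff A and B lie strictly on opposite sides of
# line CD *and* C and D lie strictly on opposite sides of line AB.
def cross(a, b, c, d):
    return (turn(c, d, a) * turn(c, d, b) < 0 and
            turn(a, b, c) * turn(a, b, d) < 0)

print(cross((0, 0), (2, 2), (0, 2), (2, 0)))   # crossing pair
print(cross((0, 0), (1, 1), (2, 2), (3, 3)))   # collinear, disjoint: no
```

No division is needed, which matches the efficiency point made above and sidesteps vertical-slope special cases.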
I didn't know that... in the context of computational geometry I've only heard it referred to as the "signed area". Eric. 131.215.159.171 (talk) 08:25, 26 December 2009 (UTC)[reply]

To answer the OP and my own question, we have an article, Line segment intersection in this topic but it's in dire need of expansion. I get the impression that mathematicians look at the problem and see the main issue as determining whether two line segments intersect; multiple line segments are just a matter of applying the solution multiple times. While to people in computer science, the problem of whether two line segments intersect is simple algebra and the real issue is to organize the problem so you don't have to test all possible pairs of segments. It seems to me that the article should cover both points of view and right now it just gives an outline of the second. I found lecture notes [3] which give a pretty good introduction to the subject except they are not self-contained. (For example pseudocode calls a function CCW whose implementation is not given.)--RDBury (talk) 12:16, 26 December 2009 (UTC)[reply]

Thanks. As the OP, is there any consensus on the easiest way to check whether two lines cross or not, given that I am only fluent in an old version of BASIC and that my maths education stopped when I was 16 years old? 89.240.110.255 (talk) 16:28, 26 December 2009 (UTC)[reply]
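Not an official answer from the thread, but for what it's worth: the standard approach built on the signed-area ("turn") test discussed above needs no division, so there are no infinite-slope special cases. A sketch in Python (the function names are my own; translating to BASIC is mostly mechanical):

```python
def turn(a, b, c):
    # Positive: c is left of the line a->b; negative: right; zero: collinear.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def on_segment(a, b, c):
    # Assumes turn(a, b, c) == 0; checks that c lies in the bounding box of ab.
    return (min(a[0], b[0]) <= c[0] <= max(a[0], b[0]) and
            min(a[1], b[1]) <= c[1] <= max(a[1], b[1]))

def segments_cross(a, b, c, d):
    """True if segment ab intersects segment cd (endpoints included)."""
    t1, t2 = turn(a, b, c), turn(a, b, d)
    t3, t4 = turn(c, d, a), turn(c, d, b)
    if t1 * t2 < 0 and t3 * t4 < 0:
        return True  # proper crossing: each segment straddles the other's line
    # Degenerate cases: an endpoint is collinear with the other segment.
    if t1 == 0 and on_segment(a, b, c): return True
    if t2 == 0 and on_segment(a, b, d): return True
    if t3 == 0 and on_segment(c, d, a): return True
    if t4 == 0 and on_segment(c, d, b): return True
    return False

print(segments_cross((0, 0), (2, 2), (0, 2), (2, 0)))  # True: cross at (1, 1)
print(segments_cross((0, 0), (1, 0), (0, 1), (1, 1)))  # False: parallel
```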

December 26

dual spaces of Sobolev spaces

The Rellich–Kondrachov theorem gives compact embeddings of W^{1,p}(Ω) into L^q(Ω), but what can we say about, say, the duals (L^q(Ω))* and (W^{1,p}(Ω))*? I remember it was straightforward, but I'm having trouble finding it in the references, and am rather embarrassed that it's not working out easily. Many thanks. 96.235.177.218 (talk) 03:47, 26 December 2009 (UTC)[reply]

(To be precise, there is compactness only when q is strictly below the critical exponent: q&lt;p*.) I'm not sure of what's exactly your question though. One thing is that dualizing the RK embedding you still get a dense, injective, compact map of the dual of L^q(Ω) into the dual of W^{1,p}(Ω). What you possibly had in mind is that if a bounded linear operator between Banach spaces is compact/injective/with dense range, then the transpose operator is respectively compact/w*-dense/injective. If 1&lt;p&lt;n the space W^{1,p}(Ω) is reflexive, so that "w*-dense" above is the same as just "dense". Was this your question?--pma (talk) 16:55, 26 December 2009 (UTC)[reply]

Is the word "induce" used technically or non-technically?

Suppose A and B are groups, and N is a normal subgroup of A. Suppose we have an isomorphism f : A → B; then we would say that f naturally induces an isomorphism A/N → B/f(N).

I would like to know the limits of the word induce. I see two alternatives:

(1) Is the phrase "the map induced by f" rigorously defined to refer to that map which results from passing to the quotient spaces, as in my example? In this case, the phrase "induced map" would have a formal, unambiguous meaning, just as "the pullback of f" has a formal, unambiguous meaning.

(2) Is the phrase "the map induced by f" used informally to refer to any map that results from some kind of a canonical process? For example, would it be correct to say the restriction map is induced by f? Could we also say the pullback of f (by some other map) is "induced" by f? Maybe the lift of f is also "induced" by f? In this case, the phrase "induced map" would have a subjective meaning, depending on context to establish what particular process we mean.

Of course the actual usage of the word "induce" could differ from both of the two above descriptions, and can vary from one mathematician to another; but I am most interested in the distinction between a formal, technical meaning for "induce" and an informal, non-technical meaning. Thanks. Eric. 131.215.159.171 (talk) 08:43, 26 December 2009 (UTC)[reply]

Let A, B, C and D be objects in a category, and let f be an element of Hom(A, B). Formally, I would say that f induces an element of Hom(C, D), if there exist morphisms A → C and B → D such that the following diagram commutes:
We shall now restrict our attention to abelian categories. The above diagram includes the two cases you mentioned, as is demonstrated by the following special commutative diagrams (let A → A/N and B → B/f(N) be the respective canonical homomorphisms):
The above diagram commutes, as you can check. Similarly, consider the following commutative diagram (in this case, let the vertical arrows be the respective inclusion maps):
Note that the vertical arrows in the above commutative diagram point up, as opposed to those in the other commutative diagrams, which point down. I have merely given you a formal definition (in my view) of "induce" in the mentioned situations. I do not quite understand what you would call an "informal definition"; could you please clarify? Hope this helps (and try not to notice the ugly commutative diagrams...). --PST 11:52, 26 December 2009 (UTC)[reply]
Thank you for your reply; it was helpful. Perhaps I should be more clear. I am not so interested in what the definition, whether technical or non-technical, of "induce" is, per se, as whether mathematicians view the word "induce" as a technical term (like the terms "pullback", "lift", "inverse limit"), or as a non-technical term (like the terms "trivial", "characterization", "equivalent", "canonical", "natural"). I gave examples of what a technical definition for "induce" (the result when passing to the quotient space) and a non-technical definition for "induce" (the canonical result of some natural process) might look like, to clarify my meaning of a technical term vs. a non-technical term, but I am not necessarily convinced that either one of those is what most mathematicians use the word to mean. Eric. 131.215.159.171 (talk) 12:55, 26 December 2009 (UTC)[reply]
I think that in specific cases, many mathematicians view "induce" as a technical term; one example being "pullback" (or "pushforward") as you mentioned. However, in general, I do not think that all mathematicians have a specific view as to what induce should mean. If I was talking about the pushforward measure in measure theory, the tangent bundle in differential topology, or even quotient spaces in algebra, I would employ specific aspects of the term "induce"; I would not use it in its full generality. I think that this is the case in most of mathematics - often we would generalize a term if we feel that the generalization sheds new light on concrete (or even abstract) cases. For instance, the snake lemma in homological algebra, amidst all this "abstract nonsense" about generalized abelian categories, actually allows one to construct long exact sequences in homology (Zig-zag lemma); a particularly basic tool in homology. Although it seems unnaturally general at first to the beginning student, it actually does shed light on basic tools in singular homology theory (as an example). To summarize, I do not think that mathematicians have found a "specific purpose" of viewing "induce" as a formal term, as people have done in many other branches of mathematics such as point-set topology or abstract algebra. Rather they have formalized the term in specific situations such as the ones I have mentioned ("pushforward", "pullback" etc) and this has been particularly useful. Does this answer your question? --PST 13:30, 26 December 2009 (UTC)[reply]
I personally use "X induces Y (via Z)" for any object or situation Y whose existence and (essential) unicity is guaranteed by X (as a consequence of the theorem or the construction Z, to be specified unless it is clear). It seems to me that this generic use is the most common. In some cases "deduce" or "produce" may be valid alternatives (although probably nobody cares about the etymology). --pma (talk) 17:26, 26 December 2009 (UTC)[reply]
Thanks, you have answered my question thoroughly. Eric. 131.215.159.171 (talk) 21:44, 27 December 2009 (UTC)[reply]

Does U(1) = SU(1)

Does the circle group equal the special unitary group of one dimension? -Craig Pemberton 08:44, 26 December 2009 (UTC)[reply]

No. SU(1) is the trivial group {1} - see special unitary group. Gandalf61 (talk) 09:12, 26 December 2009 (UTC)[reply]

An element of U(1) is of the form [e^{iθ}] where θ ∈ ℝ, and e^{iθ}e^{-iθ} = 1 (since [e^{-iθ}] is the conjugate transpose of [e^{iθ}]). This fact allows one to conclude that |e^{iθ}| = 1, which you note; U(1) is isomorphic to the circle group. Now, SU(1) is the set of all elements of U(1) having determinant 1. However, a matrix [e^{iθ}] has determinant 1 iff e^{iθ} = 1; equivalently, iff θ is an integer multiple of 2π. Thus, SU(1) is the trivial group, whereas U(1) is the circle group (perhaps I have over-explained a little...). --PST 12:05, 26 December 2009 (UTC)[reply]
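A small numerical illustration of the point above (a sketch in Python; the sampled angles are arbitrary):

```python
import cmath
import math

# Elements of U(1) are 1x1 matrices [z] with z * conj(z) = 1, i.e. z = e^{i*theta}.
sample = [cmath.exp(1j * k * math.pi / 6) for k in range(12)]
assert all(abs(z * z.conjugate() - 1) < 1e-12 for z in sample)

# det([z]) = z, so SU(1) consists of those z with z = 1: only the identity.
su1 = [z for z in sample if abs(z - 1) < 1e-12]
print(len(su1))  # 1
```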

Linear combination of matrices

Is there a reference book or wiki article or something to direct me to research of the eigenvalue problem of matrices like A + tB, in particular, properties of the roots of the characteristic polynomial with the parameter t? (Igny (talk) 17:29, 26 December 2009 (UTC))[reply]

Have you tried this article? If not, I recommend it. If so, do you have a specific question?--Leon (talk) 17:50, 26 December 2009 (UTC)[reply]
Well I know the general theory of the eigenproblem, and I know implicit differentiation well enough to figure out, for example, dλ/dt. However I thought that there was some more obscure research of the roots from the point of view of Galois theory, for example. (Igny (talk) 18:22, 26 December 2009 (UTC))[reply]
Tosio Kato, Perturbation theory for linear operators. --pma (talk) 20:05, 26 December 2009 (UTC)[reply]
Thank you, I think I read it quite a while ago, I will look it up again. (Igny (talk) 02:16, 27 December 2009 (UTC))[reply]

Algebra over a ring that is a field?

Is it possible for an algebra over a ring (that is, over a ring that is not also a field) to be a field? I'm aware that you can describe rational numbers as pairs of integers, but inasmuch as I understand the term "algebra over a ring", that does not qualify, as addition needs to be defined differently from that on a vector space over the ring of integers.--Leon (talk) 17:48, 26 December 2009 (UTC)[reply]

What about ℚ as an algebra over ℤ..?--pma (talk) 20:12, 26 December 2009 (UTC)[reply]
Didn't I just mention that, and further explain why I figured that it didn't count?--Leon (talk) 21:05, 26 December 2009 (UTC)[reply]
I don't see why it doesn't count. An algebra over a ring is a module with a suitably-behaved multiplication, that's the only definition I know. ℚ can certainly be considered a module over ℤ, and the usual multiplication is suitably-behaved. --Tango (talk) 21:50, 26 December 2009 (UTC)[reply]
Sorry: actually you did, but your explanation was (and I fear, will remain) rather obscure to me. I do not understand your doubts: the definition of algebra is very clear, unambiguous, and standard; and obviously any field is an algebra over any sub-ring of it. --pma (talk) 23:57, 26 December 2009 (UTC)[reply]

I suspect that the motivation for this question comes from the fact that a finite-dimensional algebra (over a field) which is also an integral domain, must necessarily be itself a field; the OP may wish to know whether this can be generalized to an arbitrary ring in place of the field over which the algebra is defined. In general, as pma suggested, the "field of fractions" of an integral domain is an algebra (and the integral domain need not be a field). If a ring has zero divisors, then no algebra over it can be a field. However, if a ring is not necessarily commutative, but has no zero divisors, we can replace the "field of fractions" idea by a "noncommutative division ring" idea via the Ore condition (that is, we can obtain a "noncommutative division ring of fractions" which is an algebra over a given noncommutative ring with no zero divisors satisfying the Ore condition). Hope this helps. --PST 03:47, 27 December 2009 (UTC)[reply]

If K is a field that is also an R-algebra for R some commutative ring with 1, AND such that K is free as an R-module, then I believe R is a field. R is a subring of K, K is a free, divisible module, so R is a divisible R-module, so R is a field. "Free" basically means that the elements of K are ordered pairs, triples, tuples of elements of R with addition defined coordinate-wise, like for vector spaces. If you require the field to be "free", then the coefficient ring R has to have been a field. However, if the elements of K need not have any specific form, but merely need to be able to be multiplied by elements of R, then of course every field is an R-algebra for some R that is not a field (and if you wish, also not the integers). JackSchmidt (talk) 07:53, 27 December 2009 (UTC)[reply]

December 27

Algorithm to reduce polynary equations to minimum form

I asked this before and maybe the question was ignored due to the holidays...

Is there an algorithm (like for the simplex method in linear programming) to reduce polynary equations to minimum form? 71.100.6.153 (talk) 02:06, 27 December 2009 (UTC) [reply]

I believe that the problem is your use of "polynary". That is not a well-defined word. The meaning depends on which group of people is using it. If you just made it up, please define what you mean by it. Otherwise, define what the group of people you got it from intend for it to mean. It literally means "pertaining to many" in the way that binary means "pertaining to two". So, it means "can have many values". Most equations can have many values. Most variables can have many values. Therefore, the usage is very important to make sense of the question. -- kainaw 03:04, 27 December 2009 (UTC)[reply]
My intended meaning of polynary is identical to the phrases "multiple state variable" or "many stated variable" or "poly-stated variable" in the same sense that binary pertains to two. My usage covers binary variables and any variable with discrete and finite number of states. My usage does not include fractions directly since probability values can be normalized to percentages and percentages are meaningless beyond a few decimal places which can be rounded or truncated to result in only integer values. 71.100.6.153 (talk) 15:44, 27 December 2009 (UTC) [reply]
A polynary equation is a logical equation whose variables may have any discrete number of states; binary is the specific term for an equation whose variables have two states, just as trinary is the specific term for an equation whose variables have three states. The algorithm I am looking for, however, should be applicable to all discrete and finite stated equations, including binary, trinary and beyond. 71.100.6.153 (talk) 00:11, 28 December 2009 (UTC) [reply]

Measure Theory & Countability

First of all, when I say that a set A is "bigger than" another set B, it means that B is entirely contained in A and that A and B are not equal. So B is a proper subset of A. Working on the interval [0,1] for example, I know that starting from a single point, we can work our way up to the Cantor set, which is uncountable and still has measure zero. My question is: what is the "biggest" subset of [0,1] that still has measure zero? All of [0,1] has measure 1, obviously.

I have the same question regarding countable sets (countable means finite or countably infinite) in [0,1] for example. Cantor showed (using his usual ingenious arguments) that starting from a single point, we can work our way up to the algebraic numbers which are countable. Is there any set "bigger" than the set of all algebraic numbers in [0,1] which is still countable? Thanks! -Looking for Wisdom and Insight! (talk) 02:56, 27 December 2009 (UTC)[reply]

Adding a single point (or any countable set) to a set which is countable/measure zero yields a new set which is again countable/measure zero. You'll need some better questions to get more interesting answers. Algebraist 03:07, 27 December 2009 (UTC)[reply]
I think that the OP is looking for the existence of a subset of [0,1] containing the algebraic numbers within [0,1], maximal with respect to the property that it is countable and has measure zero (you might as well omit the "measure zero" from the OP's question since any such countable set has measure zero). In this case, however, there is no such maximal set, for the reason given by Algebraist (if M is any such countable set, then for any x in [0,1] \ M the set M ∪ {x} contains M and is countable, so M cannot be maximal with respect to the property of being countable). --PST 04:08, 27 December 2009 (UTC)[reply]

You know what I meant and I was afraid of this answer. I thought about that too but that is not what I was looking for. So what is the largest subset of [0,1] which is countable/has measure zero?-Looking for Wisdom and Insight! (talk) 03:37, 27 December 2009 (UTC)[reply]

There is no such countable set for the reason given by Algebraist above. There is no such set having measure zero for exactly the same reason (if M is any such set having measure zero, then for any x in [0,1] \ M the set M ∪ {x} contains M and has measure zero, so M cannot be maximal with respect to the property of having measure zero). --PST 04:11, 27 December 2009 (UTC)[reply]
Basically, although many aspects of mathematics are founded on intuition, which formalism consolidates (and is thus often tacit), in this case formalism is important to obtain some interesting intuition. --PST 04:14, 27 December 2009 (UTC)[reply]
The question is analogous to asking for the existence of a "largest number"; unless you add extra assumptions to your definition of "number", you cannot obtain a meaningful answer. In this case, you will need to add additional assumptions to your definition of "set". --PST 04:17, 27 December 2009 (UTC)[reply]

Thanks for the explanation, everyone!-Looking for Wisdom and Insight! (talk) 04:29, 27 December 2009 (UTC)[reply]

Maybe more interesting answers can be found if we also consider description complexity. "Rational numbers" fails because a much bigger set (algebraics) can be obtained with an equivalently short description. "Algebraic numbers and π" fails because the added complexity of specifying π is not justified by the increase of one element. In other words, that extra element does not "belong" in this set. So, is there a countable set which is significantly larger than the algebraics and yet similarly simple? Is there some quantification of these notions, under which an optimal set can be found? -- Meni Rosenfeld (talk) 06:00, 27 December 2009 (UTC)[reply]

Perfect; something like requiring that all elements in the set share a common property. That is definitely a better wording of my question.-Looking for Wisdom and Insight! (talk) 07:02, 27 December 2009 (UTC)[reply]

The Computable numbers are countable and the only way you'll write a number that's not amongst them is by doing something like throwing a dice for each digit. Dmcq (talk) 22:55, 27 December 2009 (UTC)[reply]
Well, it depends on what you mean (by the way, will I be man enough to ignore the barbarism a dice??? Probably not. At least I can rant about it in small text.) by write such a number. Of course you can't really "write" it (because it would take too long) but you can certainly specify one, with no ambiguity whatsoever. For example, consider the number in binary representation that has a zero at position n if the Turing machine with Goedel number n halts, and a 1 otherwise. --Trovatore (talk) 23:01, 27 December 2009 (UTC)[reply]
The wikipedia article dice allows it for the singular form but yes die is better. A set larger than the algebraic numbers that includes practically everything before the 20th century can be got by using the hypergeometric series to generate extra numbers. Dmcq (talk) 23:12, 27 December 2009 (UTC)[reply]

Largest countable this that and the other

This isn't exactly responsive to what the original poster asked, but it seems similar enough to mention. There are results from effective descriptive set theory that certain lightface pointclasses have largest countable members. For example there is a largest countable Π⁰₁ set of reals. A set of reals is Π⁰₁ if it's the complement of a Σ⁰₁ set; a set is Σ⁰₁ if it's the union of a collection of (open) intervals with rational endpoints, where some computer program, given infinite time, can list all the pairs of rational endpoints involved.

(So a Π⁰₁ set is always closed, but being Π⁰₁ is a stricter notion than being closed. The sense in which it is closed has to be somehow "effective" or "computable". For example there are uncountably many closed sets, but there are only countably many Π⁰₁ sets, because there are only countably many computer programs.)

There's a classic paper by Donald A. Martin on this, in Proceedings of the Cabal Seminar, called Largest countable this that and the other.

I don't know how to characterize the largest countable Π⁰₁ set, but the largest countable Σ¹₂ set is simply the set of all reals that are in Goedel's constructible universe. Obviously this requires something beyond ZFC; you have to be able to show that that set is countable. But I think the very modest assumption of zero-sharp suffices. I don't know whether you can prove in ZFC alone that there exists a largest countable set in any of these pointclasses. --Trovatore (talk) 23:14, 27 December 2009 (UTC)[reply]

Math and calculations

I want to do math but I'm terrible with calculations. Is it possible to do math without calculating/solving complicated equations with formulas? If so, how? My friend's a math person and he says that when he does math he thinks and rarely does calculate. He says he researches algebra, but I thought algebra was about calculating??? He also says that math people don't do arithmetic with numbers. My world has collapsed! Help! Or maybe my friend's wrong. Can't calculators solve math? Why does he research it? What sorts of math are there (my friend says there's lots)? Please tell me how I can do math without my calculator! —Preceding unsigned comment added by 122.109.239.199 (talk) 09:54, 27 December 2009 (UTC)[reply]

See Mathematics. Some areas of mathematics (such as abstract algebra, as opposed to high-school algebra) don't involve arithmetic calculations at all. Some do, and application of these fields can benefit tremendously from the use of a calculator. Being comfortable around numbers and equations is an important skill for the mathematically literate. -- Meni Rosenfeld (talk) 11:49, 27 December 2009 (UTC)[reply]
So you should first learn how to do computations, and then you should learn how not to do computations. --pma (talk) 12:28, 27 December 2009 (UTC)[reply]
Good advice. --PST 12:44, 27 December 2009 (UTC)[reply]
"Your world has collapsed"? That should not be the case! Almost everyone is told that they are wrong about something, at some point in their lives; such is an important experience. With regards to computations, I have a couple of remarks. Firstly, I think that the mathematical brain, by default, has the ability to do computations; that is, if you are able to do the "mathematics without computations", you should have the ability to do the "mathematics with computations". Basic computations really do not require extreme intelligence to carry out, save silly human errors. On the other hand, much of mathematics is not really concerned with computations; rather, it is concerned with deeper problems which may require computations (at some stage) to solve. In abstract algebra, for instance, one uses substructures to encapsulate computations abstractly (ideals in ring theory, subgroups in group theory, etc...). In another branch of mathematics, topology, much of the goal is to attain a strong intuition of closeness; although seemingly simple, this is very deep. Differential topology sometimes employs differential geometry for this purpose. Hope this answers your question(s). However, if this is an attempt to mock mathematics, please do not; mathematics has developed tremendously over the past few hundred years, and not surprisingly, it is difficult for many to comprehend the extent of this development (I should add that it was even difficult for me to comprehend during my first exposure to formal mathematics). --PST 12:43, 27 December 2009 (UTC)[reply]
Why do you want to do maths if you don't have some feel for what it is about? Being very good at calculating isn't that important, though it is quite useful. I've known what maths is since I was a child stringing up a frame, when I realized I didn't get a circle and wondered how to arrange the pegs to get one. I can't see any basis for your desire to do maths. Dmcq (talk) 13:12, 27 December 2009 (UTC)[reply]
Skill with numbers is likely to develop with practice. The more you work with arithmetic and elementary algebra, the more proficient you will become. A solid basis in those areas is needed to progress to more advanced mathematics. Many of the branches of mathematics which do not work with computations are too complex to be meaningful without the foundations. —Anonymous DissidentTalk 13:29, 27 December 2009 (UTC)[reply]

I want to become a mathematician but I hate equations and formulas, and my friend says that's not necessary for mathematics. You tell me something, and probably that was what my friend said but I wasn't paying attention. Thanks! But why do people research mathematics? People research physics because it's important to know about the universe. People research medicine because it helps our survival. What's the use of mathematics without calculating??? I want to do math because my math teacher said to me when I was in high school that its purpose is to shape the world, and that mathematicians are society's core, because of their mind boggling human-calculator ability. And if math's not about numbers, what is it about? My high school teacher said that math is about understanding different properties about numbers, such as prime and composite, and he has a math degree. Many thanks for the responses. Please tell me why people think math though! It hurts my brain and isn't fun. But I need it to be part of society's finest. —Preceding unsigned comment added by 122.109.239.199 (talk) 13:48, 27 December 2009 (UTC)[reply]

You want to become a mathematician but you hate some aspects of it; it hurts your brain and isn't fun. Have you considered a career in masochism?→→86.160.104.185 (talk) 16:15, 27 December 2009 (UTC)[reply]
Please don't be disrespectful to the OP; he or she doesn't understand what mathematics is about and would like to better understand it, I'm sure you can provide some insight that would be useful to him or her. Eric. 131.215.159.171 (talk) 21:59, 27 December 2009 (UTC)[reply]
It's possible you will find Lockhart's Lament interesting; it's an article about the teaching of high school mathematics in the US. Although it does not spend much time explaining what mathematics is, it does address some misconceptions of mathematics and explains what mathematics is not. Eric. 131.215.159.171 (talk) 22:05, 27 December 2009 (UTC)[reply]

Limit of a sequence

Let {xn} be a sequence with lim_{n→∞} xn = g, with g&gt;0. How to prove the fact that lim_{n→∞} (x1x2···xn)^{1/n} = g? --84.62.197.235 (talk) 11:24, 27 December 2009 (UTC)[reply]

First do it in the case g = 1 (to this end you may write the inequality of arithmetic and geometric means with the n numbers x1, ..., xn). For the more general case observe that xn/g → 1; then use the former case and a sandwich argument. --pma (talk) 11:42, 27 December 2009 (UTC)[reply]
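Assuming (as the replies suggest) that the question is the classical fact that xn → g &gt; 0 implies (x1x2···xn)^{1/n} → g, here is a quick numerical sanity check (not a proof, and the choice xn = g + 1/n is arbitrary), computing the geometric mean through logarithms to avoid overflow:

```python
import math

g = 3.0
N = 100_000
log_sum = 0.0
for n in range(1, N + 1):
    log_sum += math.log(g + 1.0 / n)   # x_n = g + 1/n converges to g
geom_mean = math.exp(log_sum / N)      # (x_1 * ... * x_N)^(1/N)
print(abs(geom_mean - g) < 0.01)       # True: the geometric mean approaches g
```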
(Edit Conflict) Let and let for all . Do you accept that (Proof: )? Now apply this intuition to prove your claim. --PST 11:44, 27 December 2009 (UTC)[reply]
One of the problems with your approach is that (Igny (talk) 15:33, 27 December 2009 (UTC))[reply]

Let {xn} be a sequence with with g>0. How to prove the fact that ? --84.62.197.235 (talk) 14:22, 27 December 2009 (UTC)[reply]

Computationally expensive problems

Could you give examples of things that are expensive to solve but easy to check. I'd like to know the best solve/check ratio even if it's not well defined, but just any examples would be useful. --93.106.33.14 (talk) 14:32, 27 December 2009 (UTC)[reply]

Anyone with a piece of paper and a pencil can calculate the product of two integers to verify the solution to a factorization problem (given time and persistence, that can be done even for numbers with thousands of digits). The factorization problem for big integers, however, requires a lot of computational power, and modern supercomputers start choking at factorization of 100-digit numbers. (Igny (talk) 15:57, 27 December 2009 (UTC))[reply]
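The asymmetry described here can be seen even at toy scale; a sketch in Python (the two primes are just examples):

```python
def trial_division(n):
    """Naive factorization: up to ~sqrt(n) trial divisions, the expensive direction."""
    factors, p = [], 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)
    return factors

# 104729 and 1299709 are the 10,000th and 100,000th primes.
n = 104729 * 1299709
factors = trial_division(n)        # hundreds of thousands of divisions to solve...
product = factors[0] * factors[1]  # ...one multiplication to check
print(factors, product == n)       # [104729, 1299709] True
```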
All NP-complete problems (probably) have this property. Algebraist 16:55, 27 December 2009 (UTC)[reply]

The all-ones vector, and how to notate it

What's the most common way of writing the all-ones vector, that is, the vector that, when projected onto each standard basis vector of a given vector space, has length one? The zero vector is frequently written 0, so I'm partial to writing the all-ones vector as 1, but I don't know how popular this is, and I don't know if a reader might confuse it with the identity matrix. I'm writing for a graph theory audience, if that helps pick a notation. --Bkkbrad (talk) 20:36, 27 December 2009 (UTC)[reply]

(1, 1, 1, ..., 1)? I just had need of one in six dimensions and actually wrote (1, 1, 1, 1, 1, 1). They don't come up very often, since they depend on your having made a choice of basis, whereas most vector theory is designed to be independent of basis. I've not done any graph theory for a long time though. Checking Adjacency matrix#Properties, there's one like my first example, so maybe graph theory is not that different.--JohnBlackburne (talk) 20:53, 27 December 2009 (UTC)[reply]
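For the graph-theory use alluded to above: multiplying an adjacency matrix by the all-ones vector gives the degree of each vertex. A sketch (using numpy; the example graph is my own):

```python
import numpy as np

# Adjacency matrix of the path graph 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])

ones = np.ones(4)    # the all-ones vector
degrees = A @ ones   # row sums: the degree of each vertex
print(degrees)       # [1. 2. 2. 1.]
```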

December 28