# User talk:Marc van Leeuwen

Hello, Marc van Leeuwen, and welcome to Wikipedia! Thank you for your contributions. I hope you like the place and decide to stay. Our intro page provides helpful information for new users - please check it out! If you need help, visit Wikipedia:Questions, ask me on my talk page, or place {{helpme}} on this page and someone will show up shortly to answer your questions. Happy editing! Arcfrk (talk) 22:13, 29 February 2008 (UTC)

You may want to stop by Wiki Project Mathematics main page and the associated talk page and also to add yourself to the list of participants. By creating an article on Littlewood–Richardson rule you have filled a serious gap in Wikipedia coverage, I hope that you'll expand it further. Again, welcome! Arcfrk (talk) 22:13, 29 February 2008 (UTC)

## Your recent edits to Polynomial

I have some concerns about your recent edits to the polynomial article, so I thought I should raise them here to give you a chance to think about some corrections. My concerns are:

1. By replacing "monomial" with "term" and using a definition of "term" that allows terms with coefficients of 0, you introduce complexity and the possibility of confusion. The number of "terms" in a polynomial is no longer well defined - as is apparent in your discussion of the zero polynomial. And you have to qualify "term" when defining the degree of a polynomial - the degree of a polynomial now becomes the highest degree of its non-zero terms.
2. You have defined the degree of a variable in a polynomial, but you have lost the definition of the degree of a polynomial itself.
3. By replacing "equivalent" with "equal" you have obscured the fact that equivalence of polynomials as formal expressions is (and should be) independent of any field in which they are evaluated as polynomial functions. For example, the polynomials x^2+1 and x+1 are not equivalent, but in the field Z_2 they are equal as functions, because they take the same value at each of the points in that field. In your terminology, you would have to say that x^2+1 and x+1 are equal as functions over Z_2 but not equal when considered as formal polynomials, which I think is confusing.

A lot of thought has gone into the polynomial article over a long period of time, and there is a danger that significant changes such as yours could trigger an edit war. To avoid this, I find it is often best to propose major changes on an article's talk page first, to test whether I am about to step into a controversial area. Gandalf61 (talk) 11:04, 7 March 2008 (UTC)

Let me reply point by point.

1. I did intentionally allow zero terms, because I think it is not common use to forbid them. Think of such uses as "to add two polynomials, one adds the coefficients of similar terms (terms involving the same monomial)". Note that to be able to pronounce such a fairly simple sentence, one must allow introduction of terms with zero coefficient, just to have something at hand to add. Also the explanation of "similar terms" needs some notion of the term stripped of its coefficient; if one does not allow "monomial" to refer to that, life gets rather hard. (I do plead guilty to trying to educate the general public by pushing the terminology "mononomial" for an isolated term viewed as a polynomial.) The fact that the number of terms of a polynomial is no longer well defined does not seem so much of a problem to me, since operations like gathering similar terms or cancelling terms do change the number of terms in an expression. What is well defined (even if not very frequently used) is the number of nonzero terms in the standardized form of a polynomial. If you want to allow dropping "nonzero" from that phrase by forbidding zero terms altogether, I do not object, although I don't think it really makes life easier. In fact I was, somewhat against my habits, trying to not be pedantic here.
Now that I think about this again, I see you may have a point that the initial description gives the standardized form of the polynomial, which need not contain zero terms; those terms will then be allowed further on per equivalence by the usual rules. For the zero polynomial one would have an expression that is a sum of no terms at all; while there is no doubt that the value of an empty sum is 0, it might shock people to manipulate expressions that could be completely void (but after all this is a bit like the empty string). The real difficulty is striking a balance between precision and language that could scare people. It is not hard to be exact: a polynomial is a linear combination of monomials (defined to exclude coefficients, of course). But that is hardly informative to someone new to the subject.
2. I added the precision that what was being defined in the given place was in fact the degree in a variable (the corresponding paragraph failed to mention that it was supposing the single-variable case, which I added as well). Feel free to add a corresponding sentence about the total degree, or about unqualified degrees in the presence of only a single variable; I just did not want to change too much at once.
3. As the lead states, a polynomial is an expression, not a function. This means one does not use evaluation (which is only introduced much later, by the way) to decide equality (or equivalence) of polynomials, and I think most people agree about that (besides giving wrong results in specific cases, it would be a rather cumbersome test). The issue here has nothing to do with polynomial functions, but with whether one considers for instance (X+1)^2 to be equal or only equivalent to X^2+2X+1. I think if you ask, most mathematicians will vote for "equal". They are certainly equal in a polynomial ring, but one can maintain that they are only equivalent as polynomials. People using computer algebra would probably favor that point of view. (I see that the point you are advancing is actually that some polynomials could be considered equal without being equivalent; this to me would seem a very curious situation, and not just for polynomials.) But for the article it might be best to simply not raise the question, and use "equivalent". Go ahead and revert that part of my edit and remove the somewhat pedantic remark following it if you feel this is more appropriate. I was only being bold. Marc van Leeuwen (talk) 11:59, 7 March 2008 (UTC)
To the extent that you actually agree with the criticism, it is more appropriate that you make the requisite changes. I prefer the approach in which the coefficient of a term cannot be 0, as should be clear from the following edits I made earlier to clear up the confusion raised by the ambiguity of the issue: [1] & [2].  --Lambiam 12:17, 8 March 2008 (UTC)
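(To make the "similar terms" point above concrete: a minimal sketch, assuming a hypothetical dict-of-monomials representation that appears nowhere in the article itself.)

```python
def add_polynomials(p, q):
    """Add polynomials given as {monomial: coefficient} dicts, e.g.
    {'x^2': 3, 'x': 1, '1': -2}.  Coefficients of similar terms (terms
    with the same monomial) are added; a monomial missing from one
    summand contributes a zero coefficient, exactly the "term with a
    zero coefficient" one must have at hand in order to add."""
    result = dict(p)
    for mono, coeff in q.items():
        result[mono] = result.get(mono, 0) + coeff
    return result

def standardized(p):
    """The standardized form drops terms whose coefficient is 0, which
    is why the number of terms is only well defined after standardization."""
    return {m: c for m, c in p.items() if c != 0}
```

For example, adding x + 2 and −x produces a zero term for x, which the standardized form then drops, leaving just the constant term 2.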

## Undoing multiple edits

Tip. You can undo a sequence of consecutive edits in one go by the following steps.

• Go to the revision history of the page in question. You will see a radio button in front of each revision. These radio buttons are arranged in two columns, in each of which one is selected; initially the right column has one button only, for the latest revision.
• Select the radio button in the left column for the revision that is one older than the one resulting from the earliest edit you want to undo.
• Select the radio button in the right column for the revision resulting from the last (most recent) edit in the sequence to be undone.
• Click the button titled "Compare selected versions".
• You will get a diff page, in which the caption of the right column has an undo button. Click it.
• If you get a message that the edit could not be undone due to conflicting intermediate edits, you're out of luck.
• Otherwise, fill in the edit summary appropriately (usually I make sure I have the user name or IP address of the offending editor already in the copy buffer), and save the page.

If you want to do this often, there are "rollback" tools that will make this easier, but probably this will be good enough for now.  --Lambiam 12:37, 8 March 2008 (UTC)

• Another option is: go to the old version, click edit, copy all the text, go to the new version, click edit, select all and paste.  franklin  13:20, 7 January 2010 (UTC)

## Polynomial Undo

Marc, I redid the first two parts of the article because they were very disorganized, and the definition of a polynomial was very poor. It was not my intent to permanently leave out your edits, but I was a little upset that you did a complete undo and I did not have the time to re-include your edits. I will try to go back and re-include your edits. My apologies. 24.96.130.30 —Preceding unsigned comment added by 24.96.130.30 (talk) 20:44, 8 March 2008 (UTC)

## coming to terms

I, too, like the word "term" better than "monomial", but the literature uses "monomial", and we have to reflect actual usage, rather than personal preference. (Also, the word "term" can mean an entry in an infinite sequence, and so "monomial" is more specific.) I'll double check to make sure the article uses "term" fairly high up in the discussion.

As a general rule, it is best to discuss edits on the talk page, before spending large amounts of time on a rewrite. Rick Norwood (talk) 17:32, 10 March 2008 (UTC)

## Nice proof

I like your proof of the irrationality of the golden ratio. I think Dicklyon would like to see a specific reference to that proof somewhere in the literature, since presumably someone has thought of it before. (If not, they really should have!) Cheers, silly rabbit (talk) 13:38, 28 March 2008 (UTC)

## Proofs of the Cayley-Hamilton theorem

Marc, I've completely changed the section (originally mostly due to you, it looks like) in Cayley-Hamilton theorem concerning proofs. I've tried to retain the philosophical points you made, but some of what you said was simply not true, and it was overly opinionated. I'm afraid a lot of the pedagogy of comparing and correcting incorrect proofs has gone by the wayside as a result of the last one. Since you seem to feel strongly about the issue, I thought you would like to know so you could take a look. By the way: do you know of a published source for the proof you wrote (now tidied up a bit and included as the "First proof")? I gave a second proof from Atiyah-MacDonald, a third proof based on one which had appeared on that page some time ago, and a fourth based on some of the comments you made, but these latter two are likewise unreferenced (unreferenceable?). Ryan Reich (talk) 14:22, 4 July 2008 (UTC)

## total ordering by degree

In the article of polynomials it says "Univariate polynomials have many properties not shared by multivariate polynomials. For instance, the terms of a univariate polynomial are totally ordered by their degree, while a multivariate polynomial can have many terms of the same degree." This, as written, is false. Terms in multivariate polynomials, as well as univariate ones, can be totally ordered by degree (depends on the degree). The main difference is that that order is not natural.  franklin  16:19, 7 January 2010 (UTC)

Read the article. Overview: "The degree of the entire term is usually defined as the sum of the degrees of each variable in it.". "When a polynomial in one variable is arranged in the traditional order, the terms of higher degree come before the terms of lower degree." Classifications: "A polynomial is called homogeneous of degree n if all its terms have degree n". Also I checked that all uses of "degree" in the article, or in degree, designate a single (integer) number. All this corroborates that "degree" here must mean "total degree", and then the quoted sentence is true. So I don't know what you mean by "depends on the degree", but if you want to define a notion of degree for which terms in a multivariate polynomial cannot have the same degree, this certainly is not the usual sense of degree (if you mean a vector of all the exponents of individual variables, I don't believe this is commonly called simply degree; the terms multi-degree or exponent vector come to mind, but I'm not sure these are in common use either). What is usually used to totally order terms of a multivariate polynomial is a monomial ordering. So the difference is not whether the order is natural or not (whatever "natural" is taken to mean) but what one means by "degree", and again there is not much doubt about that. But I'll add "(total)" to make it clear beyond doubt. Marc van Leeuwen (talk) 09:55, 8 January 2010 (UTC)
• I agree the phrase "the terms of a univariate polynomial are totally ordered by their degree, while a multivariate polynomial can have many terms of the same (total) degree." is true now. Although I don't think it is optimal. I would rather hint at the possibility of resolving the difference between univariate and multivariate, or not establish the distinction at all (I prefer the first, although the second doesn't complicate things). Look, right now the phrase is giving (IMO) dogmatic knowledge. It says: in univariate you can trivially order the terms, while in multivariate, if I do the same, I can't. I would prefer to hint that this is only because we are trying to blindly use the same strategy. In fact, using a slightly different idea you can order terms in all cases.
Now, about the phrase "Univariate polynomials have many properties not shared by multivariate polynomials.": what is happening is that we have two different objects, A and B, and we are saying "A has many properties not shared by B". OK, that is kind of the definition of A being different from B. It is a vacuous statement. It also helps to sublimate the idea of the inevitability of not being able to order the terms in the multivariate case. Also, we don't need it, since the sentence following it can be rewritten as: "The terms of a univariate polynomial are totally ordered by their degree, while in a multivariate polynomial many terms can have the same (total) degree." The difference in removing the introductory sentence is that the fault for the lack of order is passed to the concept of degree and not to the polynomials being multivariate (you agreed with this at the end of your paragraph above). After all, what is important in that section are the concepts the degree is defining in the univariate case, namely leading term, monic polynomial, and so on, and not the possibility of ordering terms in polynomials. There is no need for a comparative statement in this sense between univariate and multivariate.  franklin  11:49, 8 January 2010 (UTC)
• Also, in your last edit you said you needed the introductory phrase to warn that the following is only for univariate polynomials, but in fact, for each of the subsequent concepts, the sentence defining it warns that it is for univariate polynomials.  franklin  11:55, 8 January 2010 (UTC)
Look, I don't see why you have so many problems with "Univariate polynomials have many properties not shared by multivariate polynomials." There are a huge number of important algebraic properties that hold for K[X] but not for K[X,Y], like being a principal ideal domain. It is a statement like "Abelian groups have many properties that groups in general don't have", which most people would agree with (and again this goes way beyond the property of being commutative; that would be a vacuous statement). To say "Bivariate polynomials have many properties not shared by trivariate polynomials" would be much harder to justify. In fact there has been talk of splitting the polynomial article into one about univariate polynomials (to which a lot of its contents are restricted) and one about polynomials in general. The section "Extensions of the concept of a polynomial" starts with "Polynomials can involve more than one variable, in which they are called multivariate", to give just an impression of the state of affairs. So many readers probably have univariate polynomials in mind, and might naively think things are similar for arbitrary polynomials; is it so bad to explicitly warn them that things are not so nice? Many constructions on univariate polynomials, like Euclidean division, are strongly rooted in ordering terms, so it seems fairly accurate to say that problems for multivariate polynomials begin with ordering terms. I'm not saying of course that terms cannot be ordered at all, but life does get a lot more complicated. I'm copying this discussion to the polynomial talk page where it belongs; please continue there. Marc van Leeuwen (talk) 14:17, 8 January 2010 (UTC)
• Oh, I don't have problems with the statement itself. And I do agree with what you said; let me try to explain, because I mean something different. You said: "So many readers probably have univariate polynomials in mind, and might naively think things are similar for arbitrary polynomials; is it so bad to explicitly warn them that things are not so nice?" And this is precisely what I want to be said. But notice that here you are using language different from that in the article. Here you are hinting that there is a solution to the ordering. That's why I was talking about blaming the degree and not the polynomials for the lack of order.  franklin  14:25, 8 January 2010 (UTC)
• My main reason to be so picky about the wording of that point is that tiny differences convey wrong (and very common) ideas: see on the talk page of polynomials where someone asks me to order the terms of a^2+A^2+alpha^2. People commonly think that since degree doesn't order the terms in multivariate polynomials, that's the end of the story. franklin  14:34, 8 January 2010 (UTC)
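(The "slightly different idea" can be illustrated with a monomial ordering such as graded lexicographic order; a sketch under assumed conventions, with monomials encoded as tuples of exponents.)

```python
def grlex_key(exponents):
    """Graded lexicographic key for a monomial written as a tuple of
    exponents: compare total degree first, breaking ties lexicographically.
    Any such monomial ordering totally orders the terms of a multivariate
    polynomial, even when several terms share the same total degree."""
    return (sum(exponents), exponents)

# x*y^2, x^3 and x^2*y all have total degree 3, yet are comparable:
monomials = [(1, 2), (3, 0), (2, 1)]   # exponent vectors in (x, y)
monomials.sort(key=grlex_key, reverse=True)
# descending grlex order: x^3, then x^2*y, then x*y^2
```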

Sorry the removed sentence was intended for another page.  franklin

## formula

I have to disagree that formulas are finite expressions by definition. In the article for formula nothing is said about that. Actually, the term formula refers to the fact that the formula should give the solution for arbitrary coefficients. Formulas involving an infinite number of additions and multiplications exist, as is mentioned just a paragraph below.  franklin  18:40, 9 January 2010 (UTC)

Sorry, two paragraphs.  franklin

## thank you

I got confused by the notation about the modules. -- m:drini 17:07, 19 January 2010 (UTC)

## Previewing footnotes and references

I saw in an edit summary at List of poker hands this familiar lament:

damn, why can't one preview footnotes in local edits!

To do this, add {{Reflist|group="note"}} (the same code that appears in the notes section) to the bottom of the section that you're editing. (Similarly, you can add {{Reflist}} to preview references, etc.)

It's not perfect, of course; the hard part is remembering to delete this when you're done with the preview!

Toby Bartels (talk) 06:54, 3 February 2010 (UTC)

## dot product

Actually, to be precise one would have to transpose one of the vectors, since dot product is defined for a pair of vectors of the same type. It may be clearer simply to point out the connection to scalar (dot) product. Tkuvho (talk) 15:08, 21 February 2010 (UTC)

Well, dot product says it works for sequences of numbers, and matrix multiplication says the coefficients of a matrix product are the dot products of a row on the left and a column on the right, so that would seem to justify the current language at Cauchy-Binet formula. Actually I don't like the latter statement (even if I was involved in editing its paragraph), since a dot product is almost exclusively used for real vectors, but not so for matrix multiplication. For the same and other reasons, I don't see too much point in mentioning the dot product at Cauchy-Binet, but I did not want to bluntly revert your edit... Marc van Leeuwen (talk) 15:33, 21 February 2010 (UTC)
If it is inappropriate it should be deleted, but my impression is it may be useful to include links to related topics that may be more familiar. Thus, I included the relation to the trig identity sin^2+cos^2=1 at the Binet-Cauchy formula page, which helps clarify the nature of the identity (it may be worth including this at the lagrange formula as well). Tkuvho (talk) 15:41, 21 February 2010 (UTC)
If you want to make sense of the notation Av where A is a matrix and v a vector, you have no choice but to interpret v as a column vector. Then you have to say that the dot product is defined between column vectors. Of course it is a sequence also, but that does not help to take a dot product between a column and a row. Tkuvho (talk) 15:49, 21 February 2010 (UTC)
(Please pardon if the following remark is not helpful.) The usual notation is that the dot product of two column vectors u, v equals the matrix product uTv. One can similarly choose to work with row vectors. (Note that I am deliberately ignoring the side issue of whether a 1x1 matrix "is" a scalar.) Dot product is usually not defined between vectors of different types. There are particular circumstances in which it might be useful to regard a dot product as between a row vector and a column vector; this is when one vector is considered to be in the dual space of the space containing the other vector. However, that is not usual, because a dot product is fundamentally between vectors in the same vector space. It would be misleading to define a dot product as a product of a row vector and a column vector. Zaslav (talk) 20:01, 21 March 2010 (UTC)
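(A minimal sketch of the convention Zaslav describes, with plain Python lists standing in for column vectors; the function name is mine.)

```python
def dot(u, v):
    """Dot product of two coordinate vectors of equal length.  Viewing
    both as n-by-1 column vectors, this is the sole entry of the 1-by-1
    matrix product u^T v; both vectors live in the same space, so no
    row-versus-column mismatch arises."""
    if len(u) != len(v):
        raise ValueError("vectors must have the same length")
    return sum(a * b for a, b in zip(u, v))
```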

I am disturbed by your edits to Combinadic, which don't make sense to me. Please see remarks at Talk:Combinadic#van_Leeuwen_edits and explain, so I can try to understand what's going on. Thank you. Zaslav (talk) 20:16, 21 March 2010 (UTC)

Thank you for moving this page to a better name. Zaslav (talk) 11:03, 11 April 2010 (UTC)

As to the link to the Java code you removed, I would like suggestions on how to improve the Java code (since it made your eyes bleed *funniest comment I've seen in months!*). It does work, though I admit that my variable names may be odd *smirk*. I am not a professional when it comes to combinatorics (is it that obvious?), but I am eager to learn. I am open to whatever is necessary to increase the readability of the code and thus be resubmitted as a link to the article. --Lasloo (talk) 22:13, 16 July 2010 (UTC)

I am confused by your edits to the Factoradic page. Though what the link said was a proposal, a little more searching said that it isn't just a proposal, it is, in fact, used to represent this number system. Moreover, even you wrote more or less the same thing after editing. Please share your views. And, I am working on the mathematical operations on this system. If you have an insight in this field, I will be highly obliged to know. One Harsh (talkcontribs)

It is not entirely clear to me what you are saying. I suppose you refer to the passage that said
Making this system more general to write fractions, we can generalize the formula as:
   a_n*n! + a_{n-1}*(n-1)! + ... + a_2*1! + a_1*0! + b_1/2! + b_2/3! + ... + b_{n-1}/n! + ...

where a_i denotes the digits before the decimal point and b_i the digits after it.
I've read what seem the relevant parts of that discussion, and it struck me that most of it seems due to ignorance of the properties of general mixed radix numeral systems. For any sequence of base values for each digit position there is a mixed radix number system; if only positions before the "decimal" point are used this will only represent integers, if finitely many positions after the "decimal" point are used certain fractions can also be represented, and with infinitely many positions after the "decimal" point one gets a representation of (non-negative) real numbers with similar properties to that of infinite decimals. In all cases digits are non-negative integers strictly less than the base value at that position. Representation of (non-negative) values is then unique in the case of integers or fractions with a bounded number of "decimals", while with "infinite decimals" there are numbers with two different representations (one of which has repeating 0's), similarly to what is described in the 0.999... article for true decimals. So that one can extend the factorial number system with base values 2,3,4,... after the decimal point is no news, and there is no need to "prove" this system (whatever that means); it is just an instance of mixed radix number systems in general. But so would be extending with base 10 everywhere after the decimal point, or base 2, or base 2,3,5,7,11,13,17..., or whatever sequence of base values comes to mind.
My point is that nothing of this has a place in the factorial number system article. Just because one can extend the system does not mean one has to, and even less that there is agreement about the best way to do so. Wikipedia must resist gratuitous generalizations that are not actually used in the literature, as per WP:Notability. Furthermore the only natural way to extend the sequence of base values 4,3,2,1 beyond the decimal point would be 0,−1,−2,..., but this is not allowed; what is more, the "corresponding place values" would be (−1)!,(−2)!,(−3)!,... but these values are undefined. Extending with base values 2,3,4,... maybe has some aesthetic charm, but no mathematical necessity. The factorial number system has particular properties with applications mainly to permutations; for instance it can be used to rapidly compute the Nth permutation of n elements in lexicographic order for all positive integers N up to n!; there does not seem to be any related property when extending beyond the decimal point in any particular way. To sum up, there is no reason for Wikipedia to mention (any or all) possible extensions of the factorial number system to deal with non-integer values. Marc van Leeuwen (talk) 12:25, 29 March 2010 (UTC)
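(The permutation application mentioned above can be sketched as follows; the function name and interface are hypothetical, but the digits extracted are exactly the factorial-base representation of N.)

```python
from math import factorial

def nth_permutation(seq, N):
    """N-th lexicographic permutation (0-indexed) of sorted(seq), obtained
    by writing N in the factorial number system: each factorial-base digit
    selects the next element among those remaining (the Lehmer code)."""
    items = sorted(seq)
    result = []
    for i in range(len(items), 0, -1):
        # digit at place value (i-1)! is N // (i-1)!
        d, N = divmod(N, factorial(i - 1))
        result.append(items.pop(d))
    return result
```

For example, for the 3 elements a, b, c there are 3! = 6 permutations, indexed 0 through 5 in lexicographic order.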

Thanks for the insight. One Harsh talk —Preceding undated comment added 16:38, 31 March 2010 (UTC).

Hear, hear! Zaslav (talk) 11:06, 11 April 2010 (UTC)

## Reflection

Please read my comments on the definition of a reflection. I believe you are mistaken in how you wrote the definition. I restrained myself from reverting your recent change because we can't have a fight; we need a discussion. We need to settle this before revising the article once more. (By the way, I think you've done some nice work in other articles.) Zaslav (talk) 11:11, 11 April 2010 (UTC)

## Inorder

Inorder is perfectly well defined on infinite binary trees. However, what it is well defined as is not a sequence of nodes (obviously) but a total ordering on them. The ordering between any two nodes is the same as it is in the finite subtree formed by the paths from the two nodes to the tree root. —David Eppstein (talk) 15:35, 13 April 2010 (UTC)

Of course this is true. However inorder currently redirects to tree traversal, which does not actually define inorder in any other way than as an order of traversal. So this seems less than helpful for understanding the statement in Stern–Brocot tree. In particular since it followed a mention of binary search tree, where by definition the ordering implied by the tree structure is taken to be inorder. So I thought leaving out the mention would do no harm. Marc van Leeuwen (talk) 17:42, 13 April 2010 (UTC)
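(The total ordering David Eppstein describes admits a compact sketch; the encoding of nodes as root paths, strings over 'L'/'R', is an assumption for illustration.)

```python
def inorder_key(path):
    """Total-order key for a binary-tree node given by its path from the
    root, a string over 'L'/'R'.  Encoding L as 0, end-of-path as 1 and
    R as 2 makes ordinary tuple comparison agree with inorder: a node's
    left subtree precedes it and its right subtree follows it.  This works
    on infinite trees too, since every node still has a finite path."""
    return tuple(0 if c == 'L' else 2 for c in path) + (1,)
```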

## Binomial coefficient

For the recursion

$\binom nk = \binom{n-1}{k-1} + \binom{n-1}{k},$

what do you think of initializing it with the Laurent series for the (1+X)0 case? That is, for any integer k, the initialization would be

$\binom 0k = \left\{ \begin{array}{ll}1 & \mbox{if} ~ k = 0 \\ 0 & \mbox{otherwise.} \end{array}\right.$

The use of the recursion would thus derive that the constant term for (1+X)n is 1 for any positive integer n. However, as the page currently is, we instead assume that the constant term is always 1, via the initial condition

$\binom n0 = 1.$

Thanks -- Quantling (talk) 15:23, 21 April 2010 (UTC)

[Image: Pascal's "Triangle Arithmétique"]
I can understand your concern for symmetry with respect to the cases $k<0$ and $k>n$. However, this symmetry is present only in the x-and-y form of the binomial theorem (i.e., it is equally sensible to consider $x^{n-k}y^k$ for $k<0$ as for $k>n$, and equally obvious that they don't occur), but in the combinatorial form this is not so (one can very well count subsets of $k>n$ elements of an n-set and find zero of them; talking of subsets with $k<0$ elements on the other hand does not make sense in the first place). Therefore at the point where this recursion is stated both n and k are assumed to be non-negative integers; questions of generalizing beyond that are considered later in the article. This also explains why the recurrence is tagged "integers $n,k>0$" in the article, whereas Knuth (The Art of Computer Programming and Concrete Mathematics) just labels it "integer k".
One could raise more philosophic points, such as that the above asymmetry is due to the "historic accident" of using parameters n and k for binomial coefficients, rather than the more symmetric k and n−k, as it seems was Pascal's choice, in view of the way he arranged his "Triangle Arithmétique" depicted on the left; this works well with, for instance, the combinatorial interpretation as the number of ways to intertwine k white dots and n−k black dots on a line. I think though the current choice has the somewhat after-the-fact advantages of blending well with the extension to negative n (which extension breaks the symmetry and is not dictated by the recurrence, but rather by the form of the formal power series expansion $(1+X)^n$ for negative n), and with other combinatorial quantities such as Stirling numbers that more naturally interpret n and k than n−k. Anyway, I've done what seemed reasonable in the article as it was. Marc van Leeuwen (talk) 06:28, 22 April 2010 (UTC)
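(Quantling's proposed initialization can be tried directly; a sketch assuming memoized integer recursion, not anything from the article. With $\binom 0k = [k=0]$ for every integer k, Pascal's recurrence alone derives $\binom n0 = 1$ for positive n, rather than assuming it.)

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def binom(n, k):
    """Binomial coefficient for n >= 0 and any integer k, using Pascal's
    recurrence with the proposed initialization at n = 0: the Laurent-series
    view of (1+X)^0 gives C(0, k) = 1 if k == 0 else 0."""
    if n == 0:
        return 1 if k == 0 else 0
    return binom(n - 1, k - 1) + binom(n - 1, k)
```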

## New proof of fundamental theorem of symmetric polynomials

I agree that there are some infelicities in my proof that should be cleared up, but the proof has the attractive property of being simple and self-contained. Many readers of the Wikipedia math contributions are not professionals looking for cutting edge stuff, but relative novices. There should be proofs for them too.

I should add that it's nice to give an example. This clears up lots of confusions.

## I have marked you as a reviewer

I have added the "reviewers" property to your user account. This property is related to the Pending changes system that is currently being tried. This system loosens page protection by allowing anonymous users to make "pending" changes which don't become "live" until they're "reviewed". However, logged-in users always see the very latest version of each page with no delay. A good explanation of the system is given in this image. The system is only being used for pages that would otherwise be protected from editing.

If there are "pending" (unreviewed) edits for a page, they will be apparent in a page's history screen; you do not have to go looking for them. There is, however, a list of all articles with changes awaiting review at Special:OldReviewedPages. Because there are so few pages in the trial so far, the latter list is almost always empty. The list of all pages in the pending review system is at Special:StablePages.

To use the system, you can simply edit the page as you normally would, but you should also mark the latest revision as "reviewed" if you have looked at it to ensure it isn't problematic. Edits should generally be accepted if you wouldn't undo them in normal editing: they don't have obvious vandalism, personal attacks, etc. If an edit is problematic, you can fix it by editing or undoing it, just like normal. You are permitted to mark your own changes as reviewed.

The "reviewers" property does not obligate you to do any additional work, and if you like you can simply ignore it. The expectation is that many users will have this property, so that they can review pending revisions in the course of normal editing. However, if you explicitly want to decline the "reviewer" property, you may ask any administrator to remove it for you at any time. — Carl (CBM · talk) 12:33, 18 June 2010 (UTC) — Carl (CBM · talk) 12:52, 18 June 2010 (UTC)

## Abel–Ruffini theorem

Hi, I made a question on the talk page of that article and I have the impression you might know the answer. Can you please help me with that? --Sandrobt (talk) 16:09, 6 September 2010 (UTC)

I added another question; I'd really appreciate it if you could help me again! I was fixing the Italian version of that page and I was confused by that sentence. Thanks a lot for your help!--Sandrobt (talk) 15:02, 7 September 2010 (UTC)

## Merger of symbolic computation with computer algebra system

You may be interested in Talk:Symbolic computation#Merger with computer algebra system. Yaris678 (talk) 17:21, 25 November 2010 (UTC)

## Defining a continued fraction with an "infinite expression"

I'd appreciate your input in this discussion of the current lead in of the continued fraction article. —Quantling (talk | contribs) 19:53, 30 November 2010 (UTC)

## Compact space

Nice job with the lead of compact space. I think that this is probably what was needed there. I'd like to solicit some input about how to approach the "Introduction" section. I have posted at Talk:Compact space. Sławomir Biały (talk) 13:30, 9 January 2011 (UTC)

## Permutation introduction

Hi Marc van Leeuwen, I'm the user whose changes to Permutation you've reverted. The current version is clearly better than what was there originally. Consensus on the talk page seemed to be that the historical notes and quotation are out of place in the introduction; perhaps they could be placed in a new "History" section? --18.87.1.234 (talk) 16:10, 20 January 2011 (UTC)

Yes, sorry about that, but as you see I did try to keep the spirit of your changes. I agree about the historical part, although it is somewhat limited for a full History section. I'd have moved it if I could decide where the History section should go. It often goes very early in an article, but I'm not sure. Marc van Leeuwen (talk) 16:29, 20 January 2011 (UTC)
Well, I'm sure you have more experience with this than I do; I also have a sense that history sections tend to occur early in articles, though perhaps one could argue for an exception in this case due to the relative lack of importance. The N. L. Biggs article cited for the quotation has at least another paragraph or two about the history of human knowledge of counting permutations, if you wanted to try to make a fuller section. (It would still be on the skimpy side, though.) I may try to poke around and see if anybody citing Biggs has anything interesting. --71.233.44.242 (talk) 02:37, 21 January 2011 (UTC)

## "Ridiculous"

In the recursion article, it's not particularly "ridiculous" to define the natural numbers as a subset of the reals. On the one hand, the reals can be defined as the unique complete ordered field, which does not make any mention of the natural numbers. On the other hand, without an "ambient" set of numbers, the notation "n+1" is meaningless, because the "+" operation isn't defined. — Carl (CBM · talk) 13:34, 14 February 2011 (UTC)

No. You mean the real numbers are the unique complete Archimedean ordered field, and the Archimedean property requires the natural numbers. Also, completeness is defined in terms of Cauchy sequences, which are indexed by natural numbers. But that just shows that even this characterization of the real numbers cannot be stated without defining the natural numbers first. The main point is that one cannot define real numbers without first defining the rationals, and therefore the integers and natural numbers (maybe, trying very hard, you can come up with a definition that skips a step, but nobody does that in practice, and avoiding the natural numbers would be a real challenge). I do agree, though, about the problem you point out with addition, and I think the recursive definition in question is not really a very informative example. Properly done, one should say that if S is a natural number then $S\cup\{S\}$ is also a natural number, i.e., the set-theoretic construction of the natural numbers (and the empty set should replace 1, which by the way should of course have been 0). Marc van Leeuwen (talk) 14:10, 14 February 2011 (UTC)
Completeness can be defined in terms of subsets: every nonempty subset with an upper bound has a least upper bound. Archimedean means that the prime subring (= intersection of all subrings containing 1) is unbounded; this does not mention the natural numbers. Moreover, every ordered field with the least upper bound property is Archimedean. Otherwise the prime subring of the field is bounded above, so it has some least upper bound r. Then r-1 is not an upper bound for the prime subring, so it is less than some element m of the prime subring, which means that r is less than m+1, which is a contradiction. So there's no need to assume the Archimedean property.
So we can axiomatically define the real numbers as a complete ordered field (that is, an ordered field with the least upper bound property), and this does not require the natural numbers to be defined first. — Carl (CBM · talk) 14:24, 14 February 2011 (UTC)

By the way, I noticed your clean-up work on several articles, and it is much appreciated; don't take my comments here as a criticism of your editing. I just felt you might have overstated the claim about this particular issue. — Carl (CBM · talk) 14:40, 14 February 2011 (UTC)

OK, thanks for the insight. I still maintain that (1) it is hardly possible to define real numbers without defining natural numbers first: characterizing them does not suffice, a model proving their existence should also be supplied, and (2) even if it can be done, nobody would actually develop mathematics like this. Marc van Leeuwen (talk) 10:06, 15 February 2011 (UTC)

## raising a number to the power of an underlined integer

I'm not sure what you mean. Specifically, raising a number to the power of an underlined integer.

$\frac{52^{\underline{5}}}{5!}$.

This is in regards to your last edit at combinations. AAS 16:37, 27 March 2011 (UTC) — Preceding unsigned comment added by Ann arbor street (talkcontribs)

That's the falling factorial power, unfortunately known under many other names and (less readable) notations. The idea is to multiply as many factors as the exponent indicates, subtracting one for each successive factor. An overline instead of an underline would add one for each successive factor; this is also often useful. The notation is in fact used earlier in the article, when giving the first multiplicative formula for binomial coefficients, but an explanation is missing; I think it ought to be given. I'm pretty sure it is mentioned somewhere in binomial coefficient but I haven't checked recently. This is stuff people keep fighting over and modifying, which is a bit sad. Marc van Leeuwen (talk) 19:16, 27 March 2011 (UTC)
Okay, thanks. Then under this notation
$\binom nk = \frac{n^{\underline{k}}}{k^{\underline{k}}}$.
I didn't change it back. While I personally like the notation now that you've explained it, I'm not sure how appropriate it is, given the citation of one book in the article. I did add a conversation about it to the talk page, and you're welcome of course to put it back in the main article without risk of me reverting it, despite my reservations. Ann arbor street (talk) 03:44, 28 March 2011 (UTC)
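(An illustrative aside, not part of the original exchange: the falling-factorial idea described above is easy to sketch in Python. The function names here are made up for illustration.)

```python
# Sketch of falling and rising factorial powers, and the binomial
# coefficient written as C(n, k) = n^(k underlined) / k!.
def falling(n, k):
    """n^(k underlined): n * (n-1) * ... * (n-k+1), i.e. k factors."""
    result = 1
    for i in range(k):
        result *= n - i
    return result

def rising(n, k):
    """n^(k overlined): n * (n+1) * ... * (n+k-1), i.e. k factors."""
    result = 1
    for i in range(k):
        result *= n + i
    return result

def binomial(n, k):
    # k! equals the falling factorial k^(k underlined)
    return falling(n, k) // falling(k, k)

print(binomial(52, 5))  # 2598960, the number of 5-card hands from a 52-card deck
```

This reproduces the $\frac{52^{\underline{5}}}{5!}$ expression quoted above.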

## Editing previous comments on talk page

Yes, this is a bad habit of mine. I'll try to avoid it in the future. There was an editing conflict. TR replied while I was still editing. I should have probably added my new text after TR's reply, but TR perfectly understood what I meant, even though my text was not complete yet, and his reply provided additional information, so his reply will still make sense to future readers. Paolo.dL (talk) 12:18, 29 April 2011 (UTC)

I don't understand the reason why you don't like my sentence describing a rotation matrix as a matrix representing a rotation "about the origin of a CS". This is perfectly correct. It would not be correct, however, to write that it represents a rotation "about an axis", as this is true only in 3-D (not in 2-D, not in 4-D, etc.), and only if you choose to interpret the rotation as a rotation about a single axis by a given angle (Axis angle), rather than a sequence of rotations about three axes, by three different Euler angles. Whatever option you choose, the rotation (or rotations) occurs about the origin, and this is the key point. The origin is the center of a circle (in circular motion) or a sphere (in 3-D) or n-sphere (in n-D). We are not giving a complete geometrical interpretation of the rotation matrix, and we don't need to. For instance, we both feel it's not necessary to add "by a given angle".

Expressions such as yours "in terms of a CS" or "w.r. to a CS" are not clear enough in my opinion.

Let me explain. If you say that the matrix rotates a vector, then you don't need to specify about what point it rotates, as by definition the tail of a vector is the origin of the CS. If you see a vector simply as an arrow with no fixed position for its tail (an oriented distance between two points in space), then the concept of rotation becomes even simpler, as you don't need to care about the origin of the CS. This arrow has an orientation in space, which does not change when you translate both its tail and tip. So, a rotation in this case is independent of the point about which it is performed. On the other hand, the concept of "rotation of a point" (see circular motion) is much more tricky, as a point has no orientation in space. Only the corresponding vector has an orientation. The point can only rotate about another point distinct from it, and the final (linear and angular) position of this point (as well as its initial position!) depends on the point about which you choose to perform the rotation.

Paolo.dL (talk) 15:30, 29 April 2011 (UTC)

A rotation is an isometry of a Euclidean space that fixes a codimension 2 subspace, while its restriction to a complementary orthogonal plane is a rotation of that plane (excuse the circularity). If you say that rotation is about something, then it would seem that the something is its set of fixed points, not just an arbitrary fixed point. The earth turns about its axis, not about the south pole; saying that would be confusing to most people. So if one says rotation about the origin, then this implicitly assumes the space is a plane. When I removed "about the origin" it was because people might very well think of a 3D rotation, and that part of the sentence is not helpful in that case. I did not propose "about an axis" either; it's just superfluous to mention, in fact even "rotation" is not really relevant, but people may consider this more concrete. In fact I don't see why the origin needs mention at all. Sure, since a matrix can only represent a linear transformation, it is bound to fix the origin; if that was the point of mentioning the origin, one could say "fixing the origin", but again, does this add something useful?
To me "in terms of a CS" or "with respect to a CS"; don't make much difference (however it is not the rotation but its representation to which these apply). Agreed "image of the point by the rotation" is a bit pedantic, and I would change it for something better if I could think of it; the reason for the formulation is that not the point, but the entire space that is rotated.
For the remainder of what you say, I have difficulty following. Since the article introduces the rotation as a "linear transformation", it must be acting on a vector space (not an affine space), and this seems to be a source of confusion. So I see the origin as the zero vector, unrelated to the coordinate system (whence I'd like to avoid "origin of a CS"); also vectors don't have tails or heads, they just are. Strictly speaking "point in space" is not correct (but saying a column vector describes a vector is confusing as well). But I don't care much, as long as the text is clear and reads naturally (which seems the case currently); people can have different perspectives (and WP articles aren't particularly clear about the affine/vector distinction). Marc van Leeuwen (talk) 18:59, 29 April 2011 (UTC)

I don't think it is necessary to say that the rotation is represented in a coordinate system. The problem is that "rotation of a point" does not make perfect sense, while "rotation of a vector" perfectly does. However, we can't write that the matrix rotates the vector, because we are using the word vector to indicate a 3x1 matrix. In this case, "rotates" might be interpreted as "transposes" (to 1x3). Isn't this the reason why you did not like the expression "rotated vector", which I used when I wrote that phrase?

I understand your point, but we are dealing with a single point here, not a rigid body. I would not say that the earth rotates about a point, but I can safely say that a point rotates about another, meaning that the point moves along a spherical surface. Isn't that simple enough? Moreover, the motion along that spherical surface may occur along an arc of a circle or (as in the representation with Euler angles) along three "orthogonal" arcs. You can think of a rotation of a point on the earth as a rotation about the "center of the earth". It's just a matter of representation.

The reason why I don't like "rotation of a point" is that a "rotation" is a "change in orientation", and a point cannot change its orientation, as it has no orientation in space. Only a set of points fixed with respect to each other, such as a line segment, a vector, a plane, or a rigid body can change its orientation. So, the expression "rotation of a point P" can be accepted (and is accepted, as in circular motion) only when you specify at least another point about which P rotates.

Two points are enough to define the concept of rotation, one point is not. I am just suggesting to give the minimum amount of information, because I cannot find a simple way to generalize the concept of "axis of rotation" to N-D. (Again, we are not saying that the rotation is "by a given angle").

You might not like it, and I can see why you don't like it, but the concept of rotation of a point about the origin, contrary to what you say, is perfectly correct in N-D. Think about the rotation of a spinning top "about its tip". Any given "particle" of the top moves along a spherical surface about the tip, while the top not only spins, but also "precesses" and "nutates". Also, a diver performing a twisting somersault can be described as a body rotating about its center of mass. A ship sailing on the ocean rotates (approximately) about the center of the earth. People can understand this easily enough.

Paolo.dL (talk) 10:30, 30 April 2011 (UTC)

This discussion is getting too confused, so I won't spend any more time on it. Just for the record, it said "rotated point" before this edit of mine, not "rotated vector", so I don't get your point at all. And yes, I didn't like the sound of "rotated point", since a point has no dimension (and therefore no orientation, as you put it), so this will not be clear to everybody. Also it said "the product Rv rotates the vector with respect to its basis" before this older edit, which suggests the basis is needed for the rotation to act, rather than just for its representation by a matrix. So I don't see that my edits have been confusing the matter, honestly. For the rest there is lots of stuff you just said that I don't understand. And also for the record, I did define (on this talk page) the "axis" of rotation (taken in the sense of a certain type of isometry of Euclidean space, which is more to the point of the matrix article than the kind of rotation involved in a spinning, precessing and nutating top) for arbitrary dimensions, see "codimension 2" above. But please, no more personal discussion; discuss the article on its own talk page. Marc van Leeuwen (talk) 11:53, 30 April 2011 (UTC)
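(An illustrative aside, not part of the discussion above: the "codimension 2" characterization of a 3-D rotation can be checked numerically; the matrix below fixes the z-axis and rotates the orthogonal xy-plane. Plain lists are used to keep the sketch dependency-free.)

```python
import math

def rotation_z(theta):
    """Rotation matrix about the z-axis by angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c,  -s,  0.0],
            [s,   c,  0.0],
            [0.0, 0.0, 1.0]]

def apply(matrix, v):
    """Matrix-vector product for 3x3 matrices."""
    return [sum(matrix[i][j] * v[j] for j in range(3)) for i in range(3)]

R = rotation_z(math.pi / 2)
print(apply(R, [0.0, 0.0, 1.0]))  # a vector on the axis is fixed
print(apply(R, [1.0, 0.0, 0.0]))  # a vector in the plane is rotated (to ~[0, 1, 0])
```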

## Elementary symmetric function

Hello Marc van Leeuwen,

I appreciate your diligence in this matter. When I requested a citation the equations appeared incorrect (in the range of the summation); however, that issue appears to have been corrected. Now that the examples are clearly in congruence with the provided definition, I agree that no citation is necessary. Keep up the good work and collaborative mindset. KlappCK (talk) 14:19, 13 May 2011 (UTC)

Okay, I take that back. I was looking at the elementary symmetric polynomials page, not the Newton's identities page. I think $e_2(x_1,\ldots,x_n) = \textstyle\sum_{i<j} x_i x_j$ should be $e_2 (X_1, X_2, \dots,X_n) = \textstyle\sum_{1 \leq j < k \leq n} X_j X_k$ unless some other information is provided to suggest why these definitions vary. I am going to go ahead and make the change, as the form given on the elementary symmetric polynomials page seems more congruent with the definition. KlappCK (talk) 14:28, 13 May 2011 (UTC)
I left a question for you on the discussion page for elementary symmetric polynomial. Since you seem to be a major contributor on the subject, perhaps you can answer it eloquently...and thanks for cleaning up my TeX. —Preceding unsigned comment added by Klappck (talkcontribs) 19:03, 13 May 2011 (UTC)
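(An illustrative aside, not part of the exchange above: the two ways of writing $e_2$ discussed here agree, as a quick brute-force check confirms. The helper name `e` is made up for illustration.)

```python
# Check that the closed-form double sum for e_2 matches the definition
# of e_k as a sum over k-element subsets of the variables.
from itertools import combinations

def e(k, xs):
    """Elementary symmetric polynomial e_k evaluated at the values xs."""
    total = 0
    for subset in combinations(xs, k):
        prod = 1
        for v in subset:
            prod *= v
        total += prod
    return total

xs = [2, 3, 5, 7]
# e_2 = sum over 1 <= j < k <= n of x_j * x_k
direct = sum(xs[j] * xs[k] for j in range(len(xs)) for k in range(j + 1, len(xs)))
print(e(2, xs), direct)  # both 101
```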

## Euler's proof: the $k$th differences of the sequence $1^k, 2^k, 3^k,\dots$ are all equal to $k!$

Hi Marc!

I've just read your reply to my question in the discussion of Proofs of Fermat's theorem on sums of two squares. Thank you for your attention, but unfortunately I didn't find any hint toward the answer to my question on the page you suggested. If you think you understood my question, could you reply there and be more specific as to where to find it? I questioned the claim in the title of this section there, because it was used as a tool in Euler's proof (in the fifth step) as though it were quite commonplace (but to me, who understood every other step in the proof, it was totally unknown).

Best regards, Wisapi (talk) 00:21, 15 May 2011 (UTC)

## Normative language

I'm trying to let you indeed have the last word at talk:Determinant, so I am replying here instead. I have to ask: why do you feel the need to use such normative language? "Adjoint" is not "wrong", it is an older term for the adjugate. Yes it would be nice to have a uniform vocabulary, but mathematics grows and changes -- and older terms are often preserved in applied fields. Anyone refusing to acknowledge older terminology will just end up being confused -- and not letting our students know this does them a disservice. (My take, obviously.) -- Elphion (talk) 21:24, 4 June 2011 (UTC)

Sorry, I was just citing (from memory) the adjugate matrix article (which I did not contribute to, I think). Literally its second paragraph says "The adjugate has sometimes been called the "adjoint", but that terminology is ambiguous. Today, "adjoint" of a matrix normally refers to its corresponding adjoint operator, which is its conjugate transpose." Indeed "wrong" was a bit simplified, my excuses. I personally have no particular sympathy for "adjugate", but it is true that I dislike ambiguous or misleading terminology/notation in general. Marc van Leeuwen (talk) 21:31, 4 June 2011 (UTC)

## Lehmer code

Hi Marc,

Thank you for helping me edit the Lehmer Code article, but you see, I am working on it right now at this very moment, so your last move proved more annoying than helpful, sorry. — Preceding unsigned comment added by Herix (talkcontribs) 13:24, 8 October 2011 (UTC)

It's difficult to edit simultaneously. But your edits seem more problematic than mine, as currently the page has reference errors. Please save a coherent version and I'll put back the tag. Marc van Leeuwen (talk) 13:28, 8 October 2011 (UTC)

## Cite needed for proof in Menelaus' theorem

I tried to find a cite for the proof you added to Menelaus' theorem. Could you add your source so I can remove the 'unreferenced' tag on the section?--RDBury (talk) 21:23, 1 November 2011 (UTC)

My source is actually an exercise that I found in some geometry course (in French), but I forgot where. Not very practical. I certainly did not invent this myself, as you can see here. After looking for a long time in the library I found a pointer to a solution of an exercise in a book by Michèle Audin. Which is better than nothing, so I'll put that in the article. Marc van Leeuwen (talk) 14:56, 2 November 2011 (UTC)
I'll add that a colleague just told me this is a well known application of homothecies in the French curriculum, and showed me a book preparing for the high-school exam (Terminale S) that mentions essentially this proof. Since the book is no longer in print and probably hard to find outside France, I'll not add the reference though. Marc van Leeuwen (talk) 16:26, 2 November 2011 (UTC)
I generally don't include material from exercises in WP articles but it's not something I'm going to nit pick about. Certainly it's better to have international points of view represented, even in a math article. Thanks for adding the cite.--RDBury (talk) 03:07, 3 November 2011 (UTC)
I would have loved a better reference, but this is apparently considered too trivial to state other than as an exercise. But it is so much cleaner than other proofs I've seen (including the one that was given before) that it seemed worthwhile to mention. Curiously the corresponding French WP article does not mention it yet, but at least it avoids perpendiculars. I cannot believe those are part of the "standard proof", since the theorem is one of affine geometry. Marc van Leeuwen (talk) 10:15, 3 November 2011 (UTC)

## Search tree definition

I don't agree with the following statement: a search tree is a binary tree data structure.

First, I think that a search tree can be any tree that supports search operations efficiently. For example, B-tree, Ternary search tree, and van Emde Boas tree are not binary search trees, but they are search trees.

Second, I see your reason for changing tree to binary tree. You argue that inorder only works for binary trees. However, the inorder article states that tree traversal (including inorder) may be generalized to other trees as well.

Therefore, the definition of search tree should not be limited to binary trees only.

(Sorry for my bad English.) Nullzero (talk) 14:35, 27 October 2012 (UTC)
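(An illustrative aside, not part of the discussion above: the generalization of inorder traversal beyond binary trees can be sketched for B-tree-like nodes, where keys are interleaved between children. The `Node` layout is a hypothetical minimal one, invented for illustration.)

```python
# Inorder traversal for a multiway search tree: visit child 0, key 0,
# child 1, key 1, ..., key k-1, child k. For a valid search tree this
# yields the keys in sorted order.
class Node:
    def __init__(self, keys, children=None):
        self.keys = keys                    # k keys, in increasing order
        self.children = children or []      # either 0 or k+1 children

def inorder(node):
    """Yield all keys of the subtree rooted at node, in sorted order."""
    if not node.children:                   # leaf node
        yield from node.keys
        return
    for i, key in enumerate(node.keys):
        yield from inorder(node.children[i])
        yield key
    yield from inorder(node.children[-1])

# A small 2-3-tree-style example:      [10, 20]
#                                    /     |      \
#                                  [5]   [15]   [25, 30]
root = Node([10, 20], [Node([5]), Node([15]), Node([25, 30])])
print(list(inorder(root)))  # [5, 10, 15, 20, 25, 30]
```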

## September 2013

Hello, I'm BracketBot. I have automatically detected that your edit to Spectrum of a matrix may have broken the syntax by modifying 1 "()"s. If you have, don't worry, just edit the page again to fix it. If I misunderstood what happened, or if you have any questions, you can leave a message on my operator's talk page.

List of unpaired brackets remaining on the page:
• ''λ'' in the spectrum equals the dimension of the [[generalized eigenspace]] of ''T'' for ''λ'' (also called the [[algebraic multiplicity]] of ''λ''.

Thanks, BracketBot (talk) 15:07, 5 September 2013 (UTC)

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Frobenius normal form, you added a link pointing to the disambiguation page Minimal polynomial (check to confirm | fix with Dab solver). Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ • Join us at the DPL WikiProject.

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 09:02, 31 December 2013 (UTC)

## Waring formula

Since in the article [1] it is said that the formulas give "ever longer expressions that do not seem to follow any simple pattern", I would like to draw your attention to the "Waring formula", which gives a fairly easy way of computing the coefficients in this expansion: namely, the coefficient of the monomial $M=\prod_{i=1}^l e_i^{m_i}$ in the expansion of $p_k$ (for $1\cdot m_1+2\cdot m_2+\cdots+l\cdot m_l=k$) is given by $(-1)^{m_2+m_4+\cdots}\,k\,\frac{(m_1+m_2+\cdots+m_l-1)!}{m_1!\,m_2!\cdots m_l!}$.

For instance, in the example in the text we have $l=4$, $m_1=5$, $m_2=0$, $m_3=1$, $m_4=3$, $k=1\cdot 5+3\cdot 1+4\cdot 3=20$, and the coefficient is $(-1)^{0+3}\cdot 20\cdot\frac{8!}{5!\,1!\,3!}=-20\cdot 8\cdot 7=-1120$.

Maybe you can extend the article on Newton identities, or maybe even write a separate article about the "Waring formula".

For proofs of this formula see the literature: [2] [3] [4] 213.47.239.29 (talk) 22:29, 16 February 2014 (UTC)
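(An illustrative aside, not part of the message above: the coefficient formula and the worked example can be checked mechanically. The helper name `waring_coefficient` is made up for illustration.)

```python
# Coefficient of prod_i e_i^{m_i} in the expansion of p_k by the Waring
# formula: (-1)^(m_2 + m_4 + ...) * k * (m_1 + ... + m_l - 1)! / (m_1! ... m_l!),
# subject to the constraint 1*m_1 + 2*m_2 + ... + l*m_l = k.
from math import factorial

def waring_coefficient(k, m):
    """m is a dict {i: m_i}; requires sum(i * m_i) == k."""
    assert sum(i * mi for i, mi in m.items()) == k
    sign = (-1) ** sum(mi for i, mi in m.items() if i % 2 == 0)
    total = sum(m.values())
    denom = 1
    for mi in m.values():
        denom *= factorial(mi)
    return sign * k * factorial(total - 1) // denom

# The example from the message: e_1^5 * e_3 * e_4^3 in p_20.
print(waring_coefficient(20, {1: 5, 3: 1, 4: 3}))  # -1120
```

As a sanity check, for $k=2$ the formula reproduces the familiar $p_2 = e_1^2 - 2e_2$.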

## Merge discussion for Polynomial expression

An article that you have been involved in editing, Polynomial expression, has been proposed for a merge with another article. If you are interested in the merge discussion, please participate by going here, and adding your comments on the discussion page. Thank you. Toby Bartels (talk) 16:18, 13 April 2014 (UTC)

You've clearly done a lot of great work on Wikipedia, so I hope that you agree that this is a better place to put what you wrote. —Toby