Talk:Eigenvalues and eigenvectors

From Wikipedia, the free encyclopedia
Eigenvalues and eigenvectors is a former featured article. Please see the links under Article milestones below for its original nomination page (for older articles, check the nomination archive) and why it was removed.
This article appeared on Wikipedia's Main Page as Today's featured article on November 1, 2005.
This article is of interest to the following WikiProjects:
WikiProject Mathematics (Rated A-class, High-importance; Field: Algebra). This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
One of the 500 most frequently viewed mathematics articles. A selected article on the Mathematics Portal.
This article has been reviewed by the Version 1.0 Editorial Team and selected for Version 0.7 and subsequent release versions of Wikipedia. It has been rated A-Class on the quality scale.


One of the main applications of eigenvalues/eigenvectors is in the solution of linear ODEs. The applications section has a section on vibrations, but there should be a more general section on solutions to ODEs and how/why they are related to the characteristic equations of linear ODEs.

Thanks. (talk) 15:01, 24 January 2013 (UTC)
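(Editorial aside: the connection being requested can be sketched numerically. For a linear system x' = Ax, each eigenpair (λ, v) of A yields a solution x(t) = e^{λt}·v, which is exactly why the characteristic equation governs the solutions. A minimal illustrative Python check, with the matrix and step size chosen arbitrarily, not taken from the article:)

```python
import math

# A = [[3, 1], [1, 3]] has eigenpairs (4, (1, 1)) and (2, (1, -1)).
A = [[3.0, 1.0], [1.0, 3.0]]

def mat_vec(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

def mode(lam, v, t):
    """One eigen-solution x(t) = exp(lam*t) * v of the ODE x' = A x."""
    return [math.exp(lam * t) * v[0], math.exp(lam * t) * v[1]]

# Check x'(t) ~= A x(t) for the eigenpair (4, (1, 1)) via a central difference.
lam, v, t, h = 4.0, [1.0, 1.0], 0.1, 1e-6
x_plus, x_minus = mode(lam, v, t + h), mode(lam, v, t - h)
deriv = [(x_plus[i] - x_minus[i]) / (2 * h) for i in range(2)]
rhs = mat_vec(A, mode(lam, v, t))
err = max(abs(deriv[i] - rhs[i]) for i in range(2))
print(err < 1e-4)  # True: the eigen-mode solves the ODE
```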

General math cleanup[edit]

Hi, I just did a massive edit with the following changes:

  • Converted all the pseudo-math formulas (with wiki-italics, Greek unicodes, "math" and "mvar" templates) to the <math>...</math> notation. In my browser (Chrome) these now display quite effectively and neatly, apart from a few line breaking and spacing bugs. If this is not the case for everybody, please report here and I will consider changing some of them back.
  • Removed some unnecessary boldface. (The use of boldface for vectors, traditional in some engineering fields, has its merits in books or specialized papers; but it is generally useless or worse for Wikipedia articles, considering that the target public can be assumed to have a high school or college background and is therefore likely to be unfamiliar with the convention.)
  • Elaborated some of the basic examples, such as characteristic polynomials.
  • Changed the notation for algebraic and geometric multiplicities to indicate that they are properties of the eigenvalue and of the operators. Moved the geometric multiplicities up, to the section where eigenspaces are discussed.
  • Added a rotation transform to the table of geometric transforms.
  • Removed almost all non-breaking spaces. (Cosmetic edits like preventing bad line breaks should be a very low priority goal, since readers come here for information, not beauty. Non-breaking spaces in particular make the wikisource much harder to edit, and this more than negates their positive value. Besides, most line breaking problems should be fixed invisibly -- in the browser, in the server, and/or in the javascript -- and not by the editors in the wikisource.)
  • Removed some duplication in the text.
  • Tried to clarify some parts, such as
    • The "bi-infinite shift" example for Spectral Theory section.
    • The diagonalization and eigendecomposition of a matrix.
    • That non-real complex eigenvalues of a real matrix come in conjugate pairs.
    • That left eigenvectors are right eigenvectors of the transpose.
    • That once an eigenvalue is known, the eigenvectors can be found by solving a linear system.

Hope I did not add too many errors. All the best, --Jorge Stolfi (talk) 02:16, 4 February 2013 (UTC)
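(Editorial aside: two of the clarified points, that non-real eigenvalues of a real matrix come in conjugate pairs, and that an eigenvector for a known λ solves (A − λI)v = 0, can be checked on a small example. The 2×2 matrix below is chosen purely for illustration:)

```python
import cmath

# A real 2x2 matrix with no real eigenvalues (rotation by 90 degrees).
a, b, c, d = 0.0, -1.0, 1.0, 0.0   # A = [[0, -1], [1, 0]]

# Eigenvalues are the roots of lambda^2 - (a+d)*lambda + (ad - bc) = 0.
tr, det = a + d, a*d - b*c
disc = cmath.sqrt(tr*tr - 4*det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2

# Non-real eigenvalues of a real matrix come in conjugate pairs.
pair_ok = abs(lam1 - lam2.conjugate()) < 1e-12

# Once an eigenvalue is known, an eigenvector solves (A - lam*I)v = 0.
# For a 2x2 matrix with b != 0, v = (b, lam - a) is such a solution.
lam, v = lam1, (b, lam1 - a)
Av = (a*v[0] + b*v[1], c*v[0] + d*v[1])
eig_ok = max(abs(Av[0] - lam*v[0]), abs(Av[1] - lam*v[1])) < 1e-12
print(pair_ok, eig_ok)  # True True
```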

I had to revert the changes as I encountered a lot of markup errors. The TeX renderer gave me dozens of errors and there were several
       preformatted text blocks like this
all over the place. I don't agree that expressions such as "3 x 3" or the name of a matrix such as simply "A" should be inside <math> tags, but if you do decide to use those, then be absolutely sure the contents of the tag are valid LaTeX code! They should also have consistent typesetting, such as the matrix name being a bold A.
For one, in my user preferences, I set it to always render math tags as PNG images. This is why I got to see all the errors and you probably did not. I advise you to temporarily change that setting to always render PNG just to be sure everything is working properly. And of course, double-check everything with the preview function before committing the changes.
I haven't had the time to look through all the other edits you've made to the rest of the page yet, but I just wanted to point out that urgent issue. — Kieff | Talk 02:24, 4 February 2013 (UTC)
  • Indeed there were a couple of things that MathJAX understands but the Wikipedia math-to-PNG renderer does not: "×" instead of \times, \phantom, "…", and some other UNICODE weirdos that I was unable to see but went away once I re-typed the offending formula. I have changed my Wikipedia profile to use the PNG renderer and now the page displays without errors.
    I agree that the PNG option makes the page rather too heavy; it also renders things in the wrong font size. Is it too unrealistic to expect people to switch to MathJAX yet? It seems to be the way of the future...
    As for using boldface for matrices and vectors, see the comment above. It is not a general convention (mathematicians and physicists do not seem to use it), it is not needed in a text of this size, and it makes editing more painful. I believe the page is now consistent, without bold for matrices and vectors.
    All the best, --Jorge Stolfi (talk) 03:45, 4 February 2013 (UTC)

I do not understand the maniacal addiction of certain users to <math>. With MathJax, Jorge Stolfi's version requires about 20 seconds to render in my browser. I think that with PNGs it can easily become unbearable. Why does he use this resource-consuming <math> to say “a”, “\mathbf v”, and “2\times 2” while the same is available at a much lower cost? Incnis Mrsi (talk) 06:52, 4 February 2013 (UTC)

  • See my reply below. Indeed I may have abused <math>, but I believe it is the way of the future. Efficiency problems will be solved. The way a looks on screen depends on the chosen "skin". Having two ways to enter formulas is very bad for editors (especially for novices) and even for looks. --Jorge Stolfi (talk) 15:59, 4 February 2013 (UTC)
    You can think it’s good, bad, or whatever you want, but three ways to enter formulas is a fact. I think the plurality is good. You think it’s bad, but there is no consensus in favour of the exclusive use of the current implementation of <math>. In any case, Wikipedia does not need a human to convert simple {{math}}/{{mvar}} to <math>; such a bot-like job could be performed by bots. You are human, so do fix formatting where it is really poor. Incnis Mrsi (talk) 17:20, 4 February 2013 (UTC)

Sections that should be moved elsewhere[edit]

The following sections should not be in this article and should be moved to (merged into) more specialized articles, leaving here only a short sentence mentioning their existence:

Also, the table of geometric transformations seems useful, but the following subsection (that describes the geometric effect of the transformations) is largely out of topic and should be moved to some article related to analytic geometry:

The following subsections of "Applications" are too confusing to help. They should be rewritten for general readers, or moved to specialized articles with only a brief mention here:

The last one is misplaced (should be in Applications) and fails to explain why eigenfunctions are relevant to the problem.
All the best, --Jorge Stolfi (talk) 02:29, 4 February 2013 (UTC)

Destruction of &nbsp;[edit]

With spaces | With &nbsp;
Both columns repeat the same sample text three times: “If we think of a vector x as a single-column matrix with n rows, then the linear operator defined by a matrix A with n rows and columns maps the vector x to the matrix product Ax.” The left column uses ordinary spaces; the right column uses non-breaking spaces.
Attention! The samples compare the merits of &nbsp; vs U+0020, not {{math}} vs <math>!

After Jorge Stolfi's edits, all instances of “matrix&nbsp;{{mvar|A}}” (and similar), which I put into the article, became “matrix <math>A</math>”. I have already spoken about the addiction to <math>, but what is the justification for the destruction of the &nbsp;s? Incnis Mrsi (talk) 07:12, 4 February 2013 (UTC)

  • Well, sorry for having erased the work you invested in adding the non-breaking spaces. I would gladly put them back if I could be convinced that they are a positive thing. Please read my arguments against their use.
    As for excessive use of <math>...</math>, you do have a point: by the same argument that contents and ease of editing are more important than looks, we should avoid it. And indeed I should not have used math for "3×3", and I will fix that. On the other hand, I believe that TeX/math should become the preferred way of entering formulas in any article with more than one or two formulas, and that the {{math}}/{{mvar}} templates should be avoided. Please read my arguments on this topic.
    Thus, I am sorry for all your work, but I would really object to adding back the non-breaking spaces and {{math}}/{{mvar}} templates. All the best, --Jorge Stolfi (talk) 15:50, 4 February 2013 (UTC)
    Are there, actually, arguments relevant to advanced editors? Say honestly: you just don't like all this clumsy formatting stuff. Let us compare readability of the text with &nbsp; and with ordinary spaces. Incnis Mrsi (talk) 17:20, 4 February 2013 (UTC)
    • On my browser (Chrome) and skin, I see absolutely no difference between the two columns. That creates a problem, right? How will an editor know whether to use NBSP or not? --Jorge Stolfi (talk) 18:38, 4 February 2013 (UTC)
      Try to alter style="width:…em" parameter in the table (simultaneously at both cells) and to push Preview. “Inconvenient” spaces can word wrap, but for an arbitrary text width there is a possibility that no “inconvenient” wrap occurred in a given text. Incnis Mrsi (talk) 19:00, 4 February 2013 (UTC)
      • I got to see some effect by squeezing the browser's window. An "inconvenient wrap" would be a break before a formula? If so, then editors would have to add an NBSP before every formula, since they cannot predict where the breaks would occur in other people's browsers. That is not acceptable. This is definitely a problem that should be fixed at the systems level, not by us editors. --Jorge Stolfi (talk) 19:13, 4 February 2013 (UTC)
        Before every formula? There is no "before" and I did not say anything about "every"; you can see it in the version I saved. I try to bind such non-verbal items as A and x to their heads (wherever it is possible; where it is not possible, I do not). Although it may cause a controversy and eventually be rejected, you are not an owner or manager of this project who could decide to "purge these nasty &nbsp;s now because they irritate me". I do not believe that my &nbsp;s actually hindered your edits of the wiki code to such an extent that you opted to delete all of them only to ease your subsequent editing. Incnis Mrsi (talk) 21:21, 4 February 2013 (UTC)
  • Sorry, I got it now. (The intent was not obvious, was it? I assumed they were attempts to control the spacing around formulas, which sometimes get mangled when using HTML math.)
    I have never seen technical writers attempting this sort of fine line-breaking control, not even for the most finicky technical journals. Perhaps because TeX already does a very good job at line breaking, and avoids breaks before formulas if it can. (That is an example of systematic problems being properly solved at the system's level rather than by user hacks.) TeX does have a "non-breaking space", written "~", but it is used only where TeX cannot do the job by itself -- mainly after abbreviation periods (where line breaks must be avoided), to distinguish them from sentence periods (where line breaks are preferred).
    While a break between "of" and "x" is not nice, it is not that terrible either. A break after "Dr." may be mistaken for end-of-sentence, but a break after "of" will not. Anyway, wikipedia editors and readers have much worse problems to worry about: bad grammar, jumbled order, confusing explanations and even incorrect statements. (One of my "contributions" to this article was a completely wrong value for the roots of \lambda^3-1. No one complained about that.)
    Editors should be working on those real problems, not on subtle formatting details; and anything that makes it harder for them to work on those problems is bad for Wikipedia. Won't you agree that the NBSPs make the affected sentences harder to read and edit, especially for novice editors (who are desperately needed to keep the project alive)?
    I did not delete the NBSPs because I did not "like" them, but because they were indeed standing in the way, and I could not see what good they were doing to the article. And I still don't. Since line breaks fall in different places for different readers, most of those NBSPs will have no effect; perhaps only one out of every 20-30 will actually prevent a slightly objectionable line break like
    thus the arrow is parallel to
    x, and therefore the vectors

    (but then also cause the previous line to be shorter, which is not nice either). Yet every one of those NBSPs will be visible in the source, standing in the way of editors.
    If there was some way to achieve the same effect without having so much impact on the readability of the source (say, like the "~" of TeX), I would offer to put back your edits, out of respect. But if the only way is to put back the "&nbsp;"s, then no, sorry: I still believe that wikipedia is better without them, and I will not work to make it worse.
    Please, please, consider investing your time into editing articles for contents and clarity, rather than looks. There are literally millions of articles that need such help, and that would really help millions of people out there. All the best, --Jorge Stolfi (talk) 17:31, 5 February 2013 (UTC)
Jorge may wonder, but the revision I saved 4 days ago does not contain a single cubic equation. Not a single mention of one, so it is unclear how it might suggest a “completely wrong value for the roots of \lambda^3-1”. The same goes for the revision where I first edited the article. Possibly Jorge fixed such crap in the past, but why speak about it now? Is he fishing for praise? We all fix crap (at least, me too), and I have never received praise for it, but sometimes receive insults and hatred. What about the &nbsp;s? I am just waiting for a third-party opinion. Jorge's arguments are not convincing: “TeX already does a very good job at line breaking” and “TeX does have a "non-breaking-space"” are off-topic, and if one feels uneasy with a complicated wiki syntax, then let him/her just avoid doing serious work there. Incnis Mrsi (talk) 19:28, 5 February 2013 (UTC)
  • I did not mean to imply that you or the NBSPs were in any way to blame for my mistake. I was lamenting to the birds that such a gross factual error went unnoticed until I fixed it myself. It shows how thin Wikipedia's editor base is these days: only five years ago, massive edits to such an important page would have attracted many critical eyes, complaints, and additional edits. And this dispute of ours would have turned into a virtual bar-fight. Sigh... --Jorge Stolfi (talk) 23:37, 5 February 2013 (UTC)
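(Editorial aside: the roots of λ³ − 1 discussed above are the cube roots of unity, 1 and (−1 ± i√3)/2, i.e. exp(2πik/3) for k = 0, 1, 2. A quick illustrative verification:)

```python
import cmath

# The three roots of lambda^3 - 1 = 0 are exp(2*pi*i*k/3) for k = 0, 1, 2:
# 1, (-1 + i*sqrt(3))/2, and (-1 - i*sqrt(3))/2.
roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
residuals = [abs(r**3 - 1) for r in roots]
print(max(residuals) < 1e-12)  # True: each root cubed gives 1
```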
I saw the note at the Village pump. My personal preference is to solve both problems at the same time by using the actual character rather than the HTML code. It's option-space on a Mac, or " " if you want to copy and paste. Then you'll have readability benefit of keeping the words connected and not have the intimidating mess in the edit window. WhatamIdoing (talk) 20:17, 6 February 2013 (UTC)

Opening sentence[edit]

Is something an eigenvector with respect to a matrix, or a transformation? Currently the article is written primarily with the perspective it is with respect to a matrix. But really it is a property with respect to a transformation (including those defined with matrices). From a pedagogic point of view I understand matrices might be easier than abstract linear transformations, but in this context I think the matrix point of view makes it more difficult to understand what an eigenvector is. So I think the opening sentence should use the term transformation instead of matrix. Mark M (talk) 03:28, 24 February 2013 (UTC)

Well, I do not think that the article is written "primarily with respect to a matrix". It does start that way, but it does give the abstract linear algebra definition too.
I am trying to imagine what sort of reader will look up this topic. As a computer guy, my view may be skewed; but I would guess that the reader will most likely be a technical person (programmer, scientist, engineer) who needs the concept but does not know what it is, or does not quite remember it anymore. I would guess that he understands matrices but not necessarily abstract linear algebra. If his need is related to a practical application, his "linear operator" will be a matrix anyway.
I agree that it is hard to see the importance of eigenvectors in a matrix context; but is the abstract definition really easier to understand? Abstractions are not easy to learn; one must learn at least two concrete examples before seeing the merit of an abstraction. I would think that the matrix product is the first concrete example of a non-trivial linear operator for most people.
The problem with mentioning linear operators in the first sentence is that one would have to rewrite it entirely in those terms, and then the matrix view would be lost. Maybe there is a way out but I do not see it... --Jorge Stolfi (talk) 21:49, 24 February 2013 (UTC)
Hm, perhaps we can rearrange things so that the generalizations are mentioned earlier, in the second paragraph. Let me try. --Jorge Stolfi (talk) 21:57, 24 February 2013 (UTC)
OK, I have condensed the first two paragraphs (at the cost of losing the distinction of right/left eigenvector, but that is a nit that few will miss) and swapped another two paragraphs, so that the general definition is now closer to the top of the article. Do you think it is good enough? --Jorge Stolfi (talk) 22:20, 24 February 2013 (UTC)
I just want the first sentence (and paragraph) to be understandable to the widest possible audience. I guess my point is that the word "transformation" has an English meaning that is close enough to the mathematical definition (unlike the word "matrix"), which could be used to define what an eigenvector is. In this sense, a "matrix" is more abstract than a "transformation". Also, it is conceptually more natural to think about eigenvectors with respect to a geometrical transformation (as the pictures in the article suggest), as opposed to thinking about them with respect to a matrix.
That's why I had hoped the first sentence would use the word "transformation"; then later in the first paragraph, to give a better understanding to those who are more comfortable with matrices, say something like "Equivalently, an eigenvector of a square matrix is..". (Of course the unavoidable piece of jargon is the word "vector"..) Mark M (talk) 09:57, 25 February 2013 (UTC)
I see. However, it cannot be any transformation; we would have to say a linear one; and "linear transformation" is not a widely known concept. As for the widest possible audience, I cannot imagine how we could usefully explain eigenvalues and eigenvectors to someone who does not even know matrix multiplication.
Maybe we could write
A square matrix A can be viewed as a linear transformation that maps any column vector v to the vector A v. An eigenvector of A is a non-zero vector v that, when transformed by A, yields the original vector multiplied by a single number \lambda; that is, A v = \lambda v. The number \lambda is called the eigenvalue of A corresponding to v.<ref name=WolframEigenvector>...</ref>
Would that do? All the best, --Jorge Stolfi (talk) 16:18, 25 February 2013 (UTC)
Why does it have to be linear? Mark M (talk) 17:19, 25 February 2013 (UTC)
And anyway, you are still unnecessarily introducing matrices into the first sentence. Mark M (talk) 17:22, 25 February 2013 (UTC)
  • The concept is totally useless if the operator is nonlinear. For example, if F is not linear, and F(v) = λv, it does not follow that F(2v) = λ(2v), or F(-v) = λ(-v). Thus eigenvectors cannot be normalized and do not form eigenspaces. Indeed none of the properties (and applications) of the eigenXX of linear operators that are described in the article would apply to those of a non-linear operator.
    As for matrices: I may have a biased view of the world, but I believe that, among all the people who need/use eigenvalues, perhaps 80% or more need/know them in a matrix context, and perhaps 50% would be at least confused by the term "linear operator". Perhaps for mathematicians "linear operator" is more basic/elegant/whatever, but I do not think it is the case for the rest of the world. Linear operator is one level of abstraction above matrices; most people learn things from concrete to abstract, not the other way around. --Jorge Stolfi (talk) 06:09, 26 February 2013 (UTC)
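(Editorial aside: the failure of scaling for nonlinear maps, noted above with F(v) = λv not implying F(2v) = λ(2v), is easy to see with a concrete example. The componentwise-squaring map below is chosen purely for illustration:)

```python
def F(v):
    """A nonlinear map: squares each component of a 2-vector."""
    return (v[0]**2, v[1]**2)

v = (1.0, 1.0)
# F(v) = 1 * v, so (1, v) looks like an "eigenpair" of F...
assert F(v) == (1.0 * v[0], 1.0 * v[1])

# ...but F(2v) = (4, 4) = 2 * (2v), not 1 * (2v): the scale factor changed,
# so eigenvectors of a nonlinear map cannot be normalized or form eigenspaces.
w = (2 * v[0], 2 * v[1])
print(F(w), (1.0 * w[0], 1.0 * w[1]))  # (4.0, 4.0) vs (2.0, 2.0)
```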
Consider Springer's encyclopedia of mathematics, which does not insist on linearity. It also doesn't mention matrices in the entire entry. "A non-zero vector which is mapped by [the transformation] to a vector proportional to it" is clear, concise, correct, and doesn't require matrices. Also, regarding who is seeing this article, perhaps consider the incoming links. I will also point out that the Spanish featured article doesn't even mention matrices in the lead. Mark M (talk) 09:35, 26 February 2013 (UTC)
Wow, you do hate matrices... 8-)
An entry on eigenvectors that does not mention matrices is not surprising in an encyclopedia of mathematics. (Are you familiar with the Bourbaki books? 8-) That is obviously a book that engineers should not buy...
I fully agree that a definition in terms of operators would be more elegant, and would be the best choice for a mathematics book; but this article does not exist for mathematicians, that is the point.
Now that you mentioned it: the so-called "encyclopedias of X" have appropriated the name for something that is not at all like an encyclopedia. They have their role and merits, but Wikipedia definitely must not try to be like them.
Alas, that indeed seems to be happening in many technical areas: a few well-meaning but misguided experts decide to "organize" a subject X by turning all the articles on that subject into an "encyclopedia of X". The results have been disastrous: overly long articles full of formulas and poor on intuition, that use jargon and advanced abstract concepts from line 1, and that only experts can understand -- but which by their size and interconnections are very hard to read and impossible to edit. And, since those editors invariably underestimate the effort needed to write a coherent book, they run out of steam and give up while their "encyclopedia of X" is still full of holes, inconsistencies and errors.
As for links: first, the vast majority of readers get to articles like this one via Google or the search box. Second, links from advanced math articles are naturally more numerous, but surely the links from elementary math and applied science articles (and there are many of those in the list) are followed much, much more often. (A reader of the "Hamiltonian lattice gauge theory" article is not likely to click on "eigenvalue", is he?) Moreover, this article is included in the navboxes "Areas of mathematics" and "Linear algebra"; so the "What Links Here" tool will stupidly list every article that includes either of those two navboxes, swamping the few articles that actually link to this article in the text.
Finally, another argument for starting with matrices: Eigenvalues are most used with symmetric operators, both because they are important and because they have all-real eigenvalues. Everybody understands what a symmetric matrix is; but in order to define a symmetric linear operator in the abstract one must introduce an inner product or some other non-trivial device.
All the best, --Jorge Stolfi (talk) 16:56, 26 February 2013 (UTC)
PS. The trend I described above is by no means limited to mathematicians: engineers are great offenders too -- and while mathematicians at least value elegance, engineers don't seem to worry much about it either... 8-(
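(Editorial aside: the claim above that symmetric matrices have all-real eigenvalues is visible directly in the 2×2 case, since the characteristic polynomial's discriminant (a − c)² + 4b² is never negative. A small illustrative check, with the example matrix chosen arbitrarily:)

```python
import math

def eigenvalues_sym_2x2(a, b, c):
    """Eigenvalues of the symmetric matrix [[a, b], [b, c]].

    The discriminant (a - c)**2 + 4*b**2 is >= 0, so both roots are real.
    """
    disc = (a - c)**2 + 4*b**2
    s = math.sqrt(disc)  # never raises: disc >= 0 for symmetric input
    return ((a + c + s) / 2, (a + c - s) / 2)

# Example: [[2, 5], [5, -1]] has real eigenvalues even with a large off-diagonal.
lam1, lam2 = eigenvalues_sym_2x2(2.0, 5.0, -1.0)
print(lam1, lam2)
# Sanity checks: the sum is the trace (1) and the product is the determinant (-27).
```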
You seem to have strong opinions on this matter that aren't likely to change, so I'm not going to pursue this. But I will correct something you've said: I like matrices quite a lot. :-) Mark M (talk) 17:17, 26 February 2013 (UTC)
Um, er, well, on second thoughts, perhaps I need to take another long vacation from Wikipedia. Sigh. Sorry for all the time I made you waste on this nit. All the best, --Jorge Stolfi (talk) 19:08, 26 February 2013 (UTC)


Surely this:

The matrix A is invertible if and only if all the eigenvalues \lambda_2 are nonzero.

Should read:

The matrix A is invertible if and only if all the eigenvalues are nonzero. (talk) 23:34, 14 April 2013 (UTC)
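(Editorial aside: the corrected statement is easy to sanity-check on 2×2 examples, since the determinant equals the product of the eigenvalues, so the determinant is nonzero exactly when no eigenvalue is zero. The matrices below are chosen purely for illustration:)

```python
import math

def eig_and_det_2x2(a, b, c, d):
    """Return (eigenvalues, determinant) of [[a, b], [c, d]]."""
    tr, det = a + d, a*d - b*c
    s = math.sqrt(tr*tr - 4*det)  # assumes real eigenvalues, true for these examples
    return ((tr + s) / 2, (tr - s) / 2), det

# [[2, 1], [1, 2]]: eigenvalues 3 and 1, both nonzero -> invertible (det = 3).
(l1, l2), det1 = eig_and_det_2x2(2, 1, 1, 2)
# [[1, 1], [1, 1]]: eigenvalues 2 and 0 -> singular (det = 0).
(m1, m2), det2 = eig_and_det_2x2(1, 1, 1, 1)

print((l1, l2, det1), (m1, m2, det2))
```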


where |\Psi_E\rangle is an eigenstate of H. It is a self adjoint operator, the infinite dimensional analog of Hermitian matrices (see Observable).

"It is" should be "H is"

Illustration in Eigenvalues and eigenvectors#An example[edit]

The removal of the illustration and the subsequent revert do highlight a slight problem, at least as I see it. While the illustration does correctly show how individual vectors transform, it introduces the additional and entirely superfluous features of an affine space that only serve to confuse, including a hint that a transformation of the vectors is associated with their position in the affine space, as well as a set of axes that incorrectly suggests that the affine space has an origin. IMO, it would be distinctly preferable to replace the illustration with vectors in a vector space, by having all the vectors radiating from the origin. —Quondum 06:30, 19 February 2014 (UTC)

The purpose of the illustration is written underneath it. The arrows are not supposed to be eigenvectors and it doesn't say they are. The point is that line segments parallel to eigenvectors don't rotate, but those which are not do rotate. It is an interesting and useful thing for the reader to learn, which would not be illustrated by vectors radiating from the origin. I strongly disagree with "incorrectly suggests that the affine space has an origin", since the image does not show an affine space. It shows the effect of transforming a vector space, whose origin is where the axes intersect as usual. McKay (talk) 07:15, 19 February 2014 (UTC)
Okay, so let's consider the illustration as depicting a vector space. Who said anything about eigenvectors? The caption refers to vectors, and so did I. And since we are dealing with a vector space, what are line segments? You seem to be imbuing a vector space with constructs that have not been defined. And if the vectors are shown radiating from the origin, the eigenvectors would maintain direction, and the vectors that are not eigenvectors would show exactly the rotation that they show now, only without the additional translation. —Quondum 07:42, 19 February 2014 (UTC)
I think the image would be enhanced by showing the eigenvectors as arrows from the origin, and removing the arrowheads from the line segments. Line segments are an elementary concept in vector spaces and will be meaningful even to people who can't write a definition for them, because they correspond to intuition. McKay (talk) 23:54, 19 February 2014 (UTC)
I'd be a lot happier with File:Eigenvectors-extended.gif. That clearly shows the eigen vectors. Also making it so it matched the example in the text \bigl[ \begin{smallmatrix} 3 & 1\\ 1 & 3 \end{smallmatrix} \bigr] would be nice. Having one of the eigen values 1 also over simplifies things and is a bit of a special case.--Salix alba (talk): 07:18, 20 February 2014 (UTC)
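(Editorial aside: the matrix [[3, 1], [1, 3]] from the text has eigenvalues 4 and 2 with eigenvectors (1, 1) and (1, −1), so an illustration matched to it would show both directions clearly. A quick illustrative check:)

```python
# The example matrix from the article text.
A = [[3, 1], [1, 3]]

def mat_vec(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

pairs = [(4, [1, 1]), (2, [1, -1])]  # (eigenvalue, eigenvector)
checks = [mat_vec(A, v) == [lam * v[0], lam * v[1]] for lam, v in pairs]
print(checks)  # [True, True]
```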
Line segments may be part of Euclidean geometry, and they are an elementary concept, but they are not definable just in terms of vector spaces. They require some form of betweenness geometry. It would be equally elementary to show a rectangle or any image being distorted by the linear map. The orientation of the line segments is unnecessary, and likely to mislead someone who has misunderstood the arrow metaphor to mean a vector is equivalent to an oriented line segment, which is exactly what the author of the image intended to suggest. ᛭ LokiClock (talk) 07:37, 2 March 2014 (UTC)
The elementary premathematical notion of a vector is something with a magnitude and direction. Thus vectors are arrows in Euclidean space, with arrows having the same magnitude and direction but different starting points identified. Of course, with this identification the space of vectors is indeed a vector space. So I don't really think it's a problem that the picture doesn't conform to our usual idea of a vector space as having all vectors emanating from the origin. I think a bigger problem is that which vectors are eigenvectors needs to be made more clear. Currently, the arrows that stand out the most are not the eigenvectors. Sławomir Biały (talk) 13:13, 8 March 2014 (UTC)
I don't think that this article is going to be of most interest to readers with an elementary premathematical level of mathematical sophistication. Nevertheless, let's accept the premise that we are representing displacement vectors from arbitrary points. The diagram then makes the implied association of movement of the starting point with the vector scaling, which is distracting and a burden on the reader to disentangle. This notion would suggest keeping the starting points fixed, or having them move randomly instead. Any structure that is suggested but not relevant can lead to confusion. A case in point: the massive, perennial arguments at Talk:Exponential function that are rooted in an invalid association between two fundamentally different definitions, and this is with people of some sophistication. Even presenting a context in which to interpret the association would help with interpretation, e.g. mentioning stretching of a rubber sheet affecting relative positional displacements. Nevertheless, it is simply an opinion/preference on presentation that I'm expressing; it is not a strong opinion. —Quondum 16:22, 8 March 2014 (UTC)
Quondum, this article definitely has a large premathematical audience - I probably first came to the idea through trying to find out what a quantum state is, and atomic orbital also links here in the elementary discussion, as do many articles on quantum physics topics that are elementary or else prominently referenced in popular science media. Even if it can't be processed well enough to provide new facility to such a person, it makes a lasting impression. The article may only be understood with the implicit understanding that vectors sit in a vector space, for which the meaning of transforming one space into another is to take a linear map. The perception of a valid transformation of a structure is the same as the perception of the structure. Instructing the reader to admit a generalization of one while maintaining a restriction on the other makes this structuralization of the restricted concept unintuitive. It damages the reader's intuition by confusing them as to the rationale of the restriction of fixed points of the map to lie on subspaces through a common origin. What happens when you try to go into the abstract algebra under the impression that vectors start from any location? The unique identity isn't recognizably true, and the concept that the linearity laws should constitute a function being a transformation of spaces is unjustifiable and obviously incomplete. Because the new picture doesn't fit with the old one and obviously match up to it, learning how to work with vector spaces through their axiomatic definition becomes a barrier that can only be resolved through the help of someone who knows the truth, often by accident, because what would the problem be, and how would a premathematical person articulate it? 
If they are supposed to be able to use Wikipedia math articles at all, and those articles are supposed to be free to depend on references to the treatment by Wikipedia of the concepts foundational to the article, then the concepts as presented in the article must be consistent with those references. Therefore the picture should conform to our usual notion of vector space, because that's the one Wikipedia is representing to the reader. ᛭ LokiClock (talk) 10:04, 9 March 2014 (UTC)
Yes, this echoes what I feel very well. The premathematical audience's "lasting impression" counts when they become no longer quite so premathematical (I guess this is where I meant that it would be "of interest" to them, even if they'd read it earlier), when they start needing to get to actually understand the use of the concept, and will be stuck with this image with its confusing mix. However, most of the editors do not seem to share this concern. —Quondum 16:06, 9 March 2014 (UTC)

To be clear (in case I have misunderstood): are we currently talking about this diff? Cf. a diff from June 2012. N.b. the related discussion. ←←νημινυλι (talk) 09:10, 18 March 2014 (UTC)

The latest one, although the other discussion is relevant for context, since you bring it up. ᛭ LokiClock (talk) 14:40, 19 March 2014 (UTC)

Math-free introduction?[edit]

Could we get one math-free sentence in the introduction? How about this:

If a piece of cloth is stretched, the eigenvector is the direction of stretching, and the eigenvalue is the amount of stretching. (talk) 17:53, 6 March 2014 (UTC)
Yes, and I took liberty to expand it. ᛭ LokiClock (talk) 10:29, 9 March 2014 (UTC)
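For anyone who wants to see the cloth-stretching sentence made concrete, here is a minimal pure-Python sketch (the matrix and vectors are illustrative choices, not from the article):

```python
# Stretch the plane by a factor of 3 along the x-direction:
# A = [[3, 0], [0, 1]].
A = [[3, 0], [0, 1]]

def apply(M, v):
    """Multiply a 2x2 matrix M by a 2-vector v."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

v = [1, 0]          # the direction of stretching
print(apply(A, v))  # [3, 0] == 3 * v: v is an eigenvector, eigenvalue 3
w = [0, 1]          # perpendicular direction, left alone
print(apply(A, w))  # [0, 1] == 1 * w: w is an eigenvector, eigenvalue 1
```

The "direction of stretching" is the eigenvector, and the "amount of stretching" (3 here) is its eigenvalue, exactly as the proposed sentence says.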

Mangled lede[edit]

The second (depending on how you count) paragraph of the lede starts:"If 2D space is visualized as a piece of cloth being stretched by the matrix, the eigenvectors would make up the line along the direction the cloth is stretched in and the line of cloth at the center of the stretching, whose direction isn't changed by the stretching either. The eigenvalues for the first line would give the scale to which the cloth is stretched, and for the second line the scale to which it's tightened. A reflection may be viewed as stretching a line to scale -1 while shrinking the axis of reflection to scale 1. For 3D rotations, the eigenvectors form the axis of rotation, and since the scale of the axis is unchanged by the rotation, their eigenvalues are all 1."

This is virtually useless. 2D space? Is the target audience supposed to know what "2D" means? Are they supposed to know that most 2D spaces (e.g. the surface of a sphere) can NOT be "visualized" as a piece of cloth? Are they supposed to know that an eigenvector is not JUST applicable to 2D? Where is it explained how the number of components of a vector relates to the dimensionality of some "space"? Why is a plane, which any kid in high school IS familiar with, called a space? How does a process of "stretching", which is obviously a change with time, have ANY bearing on the linear transformation of a vector? How can "vectors" ← note the plural! make up A LINE ← note the singular?? I'm not qualified to write on this subject, but it's my understanding that there must be as many eigenvectors as the dimensionality of the space. If this is correct, how do you get from a single direction (stretching) to two vectors?? Center of stretching?? What does that mean?? We are supposed to visualize a real object, a piece of cloth, being acted on by a matrix?? Please tell me you know the difference between a real object and a mathematical abstraction! An abstraction isn't going to have ANY effect on a piece of cloth. How do you "tighten" a piece of cloth that is being stretched? What is the meaning of "scale" and how is the cloth stretched to it? I have NO idea how to visualize a reflection being negative stretching; in point of fact, it's not, since negative stretching would be compression, wouldn't it? Finally, tossing in a claim about rotations (of what? cloth?) in 3D is less than useful, imho. BTW it is "axes" not "axis" for the plural.
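For what it's worth, the reflection and rotation claims in the quoted paragraph are numerically correct, even if badly explained. A small pure-Python check (the particular matrices are my own illustrative choices):

```python
import math

def apply(M, v):
    """Multiply a square matrix M by a vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Reflection across the x-axis: vectors along y are "stretched to scale -1",
# while the axis of reflection (the x-axis) is kept at scale 1.
R = [[1, 0], [0, -1]]
print(apply(R, [0, 1]))  # [0, -1]: eigenvalue -1
print(apply(R, [1, 0]))  # [1, 0]:  eigenvalue 1

# 3D rotation by 90 degrees about the z-axis: the axis of rotation is an
# eigenvector with eigenvalue 1, since the rotation leaves it unchanged.
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
Rz = [[c, -s, 0],
      [s,  c, 0],
      [0,  0, 1]]
axis = [0, 0, 1]
print(apply(Rz, axis))  # the axis comes back unchanged: eigenvalue 1
```

So "stretching to scale -1" is just flipping a vector to point the other way; whether the lede should say it that way is a separate question.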

This paragraph is so flawed that the only alternatives are a total rewrite or complete removal. It is apparently an attempt to give a concrete example of a linear transformation, but does a really, really bad job of it. (talk) 22:23, 14 June 2014 (UTC)

Minor feedback[edit]

I would have understood the statement "yields a constant multiple of v" from the first paragraph much sooner if it had been clear to me that "a constant multiple of v" implied multiplication by a scalar (or are there more complicated scenarios in which it is not a scalar?) — Preceding unsigned comment added by (talk) 04:15, 10 October 2014 (UTC)

I've reworded it slightly; hopefully it is better now. —Quondum 05:15, 10 October 2014 (UTC)