# Talk:Eigenvalues and eigenvectors

Eigenvalues and eigenvectors is a former featured article. Please see the links under Article milestones below for its original nomination page (for older articles, check the nomination archive) and why it was removed.
This article appeared on Wikipedia's Main Page as Today's featured article on November 1, 2005.
Article milestones

| Date | Process | Result |
| --- | --- | --- |
| October 5, 2005 | Peer review | Reviewed |
| October 14, 2005 | Featured article candidate | Promoted |
| September 19, 2006 | Featured article review | Demoted |

Current status: Former featured article

## Typo

where ${\displaystyle |\Psi _{E}\rangle }$ is an eigenstate of H. It is a self adjoint operator, the infinite dimensional analog of Hermitian matrices (see Observable).

"It is" should be "H is"

## Mangled lede

The second (depending on how you count) paragraph of the lede starts: "If 2D space is visualized as a piece of cloth being stretched by the matrix, the eigenvectors would make up the line along the direction the cloth is stretched in and the line of cloth at the center of the stretching, whose direction isn't changed by the stretching either. The eigenvalues for the first line would give the scale to which the cloth is stretched, and for the second line the scale to which it's tightened. A reflection may be viewed as stretching a line to scale -1 while shrinking the axis of reflection to scale 1. For 3D rotations, the eigenvectors form the axis of rotation, and since the scale of the axis is unchanged by the rotation, their eigenvalues are all 1."

This is virtually useless. 2D space? Is the target audience supposed to know what "2D" means? Are they supposed to know that most 2D spaces (e.g., the surface of a sphere) can NOT be "visualized" as a piece of cloth? Are they supposed to know that an eigenvector is not JUST applicable to 2D? Where is it explained how the number of components of a vector relates to the dimensionality of some "space"? Why is a plane, which any kid in high school IS familiar with, called a space? How does a process of "stretching", which is obviously a change with time, have ANY bearing on the linear transformation of a vector? How can "vectors" ← note the plural! make up A LINE ← note the singular ?? I'm not qualified to write on this subject, but it's my understanding that there must be as many eigenvectors as the dimensionality of the space. If this is correct, how do you get from a single direction (stretching) to two vectors?? Center of stretching?? What does that mean?? We are supposed to visualize a real object, a piece of cloth, being acted on by a matrix?? Please tell me you know the difference between a real object and a mathematical abstraction! An abstraction isn't going to have ANY effect on a piece of cloth. How do you "tighten" a piece of cloth that is being stretched? What is the meaning of "scale" and how is the cloth stretched to it? I have NO idea how to visualize a reflection being negative stretching; in point of fact, it's not, since negative stretching would be compression, wouldn't it? Finally, tossing in a claim about rotations (of what? cloth?) in 3D is less than useful, imho. BTW it is "axes" not "axis" for the plural.

This paragraph is so flawed that either a total rewrite or just complete removal are the only alternatives. It is apparently an attempt to give a concrete example of a linear transformation, but does a really, really bad job of it. 173.189.77.242 (talk) 22:23, 14 June 2014 (UTC)
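For concreteness, the idea the quoted paragraph is apparently reaching for can be checked numerically. The following is a minimal NumPy sketch; the stretch matrix is an illustrative choice and is not taken from the article:

```python
import numpy as np

# A diagonal matrix "stretches" the plane by 3 along the x-axis and
# leaves the y-axis unchanged. Those two axes are its eigenvector
# directions, and 3 and 1 are the corresponding eigenvalues.
A = np.array([[3.0, 0.0],
              [0.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
assert np.allclose(np.sort(eigenvalues), [1.0, 3.0])

# Each eigenvector's direction is unchanged by the map; only its
# length is scaled by the corresponding eigenvalue.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```

Note that "stretching" here is not a process in time: it is just how the single application of the matrix moves each vector.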

## Minor feedback

I would have understood the statement "yields a constant multiple of v" from the first paragraph much sooner if it was clear to me that "a constant multiple of v" implied multiplication by a scalar (or are there more complicated scenarios in which it is not a scalar?) — Preceding unsigned comment added by 71.49.135.68 (talk) 04:15, 10 October 2014 (UTC)

I've reworded it slightly; hopefully it is better now. —Quondum 05:15, 10 October 2014 (UTC)

## General definition

Re. zero vectors as eigenvectors: "If we would like the zero vector to be an eigenvector, then we must first define an eigenvalue of T as a scalar ${\displaystyle \lambda }$ in K such that there is a nonzero vector v in V with ${\displaystyle T(v)=\lambda v}$. We then define an eigenvector to be a vector v in V such that there is an eigenvalue ${\displaystyle \lambda }$ in K with ${\displaystyle T(v)=\lambda v}$."

This makes the zero vector an eigenvector whose eigenvalues are all the eigenvalues of all the other eigenvectors. This seems kluge-ish and poorly motivated. What is the motivation for wanting the zero vector to be an eigenvector? Philgoetz (talk) 16:02, 6 February 2015 (UTC)

Technically, what should be in the article is what is in the sources, and that would be the motivation for including the mention here. I have not read the one reference given, but the natural answer to your question seems to me to be that this allows the set of eigenvectors associated with a given eigenvalue to be an eigenspace, else they would differ by the zero vector. Having the set of eigenvectors associated with an eigenvalue being a vector space can simplify statements about it enough that the quirkiness of the definition may be justified. —Quondum 17:58, 6 February 2015 (UTC)
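The eigenspace point above can be illustrated with a small sketch (the matrix below is an illustrative choice, not taken from the cited source): eigenvectors sharing an eigenvalue are closed under addition and scaling only once the zero vector is admitted.

```python
import numpy as np

# For this A, every vector on the line y = x is an eigenvector with
# eigenvalue 3; with the zero vector included, that line is a
# subspace (the eigenspace for eigenvalue 3).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

u = np.array([1.0, 1.0])   # eigenvector for eigenvalue 3
w = np.array([2.0, 2.0])   # another eigenvector on the same line
assert np.allclose(A @ u, 3 * u)
assert np.allclose(A @ w, 3 * w)

# Closure under addition and scaling, including the degenerate case
# u - u = 0, which is why admitting the zero vector is convenient:
assert np.allclose(A @ (u + w), 3 * (u + w))
assert np.allclose(A @ (u - u), 3 * (u - u))   # 0 maps to 3*0 = 0
```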

## Combination of "stretches"

Section §Overview has been edited to read

If two-dimensional space is visualized as a rubber sheet, a linear map with two eigenvectors would be a stretching along two directions corresponding to the eigenvectors.

This sounds like a definition, but is too ill-defined. It could describe two stretches, applied one after the other, neither of which preserves any eigenvectors of the other. In particular, it gives no sense of the crucial defining property of an eigenvector: that its direction is preserved by the map. —Quondum 17:43, 19 March 2015 (UTC)

I gave it a shot, if you find it's not an improvement, simply revert. Purgy (talk) 16:55, 23 June 2015 (UTC)
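For reference, the defining property mentioned above (each eigenvector's direction is individually preserved, only scaled) can be demonstrated with a short NumPy sketch; the two directions and scale factors below are arbitrary illustrative choices:

```python
import numpy as np

# Build a map that stretches by 2 along p1 = (1, 0) and by 5 along
# p2 = (1, 1) (not orthogonal): A = P diag(2, 5) P^{-1}.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # columns are p1 and p2
D = np.diag([2.0, 5.0])
A = P @ D @ np.linalg.inv(P)        # works out to [[2, 3], [0, 5]]

# Each chosen direction is preserved (only scaled) by A:
assert np.allclose(A @ P[:, 0], 2 * P[:, 0])
assert np.allclose(A @ P[:, 1], 5 * P[:, 1])

# A generic direction is NOT preserved: A @ v is not parallel to v.
v = np.array([1.0, 2.0])
Av = A @ v
assert abs(v[0] * Av[1] - v[1] * Av[0]) > 1e-9   # 2x2 "cross" is nonzero
```

This is one map applied once, not two stretches applied in sequence, which is the ambiguity the rubber-sheet sentence left open.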

## Significance and meaning

We explain what they are (stretches, etc.)... but what is their actual significance? We don't say this at all; can someone add something on it? Thanks!

By this, I mean to include two kinds of "significance":

• why do they matter, and why do they come up in these fields?
• what sorts of things do they signify in those fields and uses where they can be said to "signify" something?

Can someone add a section on "significance and meaning" to cover this? Thanks! FT2 (Talk | email) 12:21, 23 June 2015 (UTC)

There are some hints at the end of the *Overview*; axes of rotations and inertia are mentioned, ... I think these are almost ubiquitous, and one section on "Significance and meaning" wouldn't do the trick. Purgy (talk) 16:59, 23 June 2015 (UTC)

## Using a hyphen in "Two-dimensional example" but not in "Three dimensional example"

I think we should be consistent, but I do not know which one is correct. — Preceding unsigned comment added by 155.4.131.254 (talk) 11:16, 24 April 2016 (UTC)

Good catch! I have added the hyphens, also for "infinite-dimensional". Favonian (talk) 11:22, 24 April 2016 (UTC)

## Assessment comment

The comment(s) below were originally left at Talk:Eigenvalues and eigenvectors/Comments, and are posted here for posterity. Following several discussions in past years, these subpages are now deprecated. The comments may be irrelevant or outdated; if so, please feel free to remove this section.

 See the FAR discussion. Also I wonder if a different lead image would be better (e.g. the physics one). Geometry guy 16:46, 9 June 2007 (UTC)

Substituted at 05:09, 13 May 2016 (UTC)

## General definition of eigenvector in the lead

In the opening three sentences of the article, there seems to be disagreement about two points:

1. Mapping vs. equation notation for the formula giving the general definition of an eigenvector
2. Whether to mention the vector space V and the scalar field F in the first or second sentence

1) Mapping vs. equation

First, unfortunately I don't have a relevant textbook handy, but referring to several course lecture notes available online we see the general definition using the notation:

${\displaystyle T:V\mapsto V,}$
${\displaystyle T(\mathbf {v} )=\lambda \mathbf {v} .}$

Second, the linear map article generally sticks to a convention where the mapping notation is used to denote the vector spaces that a transformation maps between, while parentheses and an equal sign are used to denote applying a transformation to a vector and getting a result. For example, in the section "Definition and first consequences", ${\displaystyle f:V\rightarrow W}$, ${\displaystyle f(\alpha x)=\alpha f(x)}$.

Third, assuming either notation is acceptable, I think it makes sense to choose the notation that more obviously parallels the matrix version of the same equation so the link between them is clearer in the lead and overview.

Fourth, whatever notation is used here should match the notation used in the Overview and in the General Definition later in the article so the discussion is easy to follow as you move through the article.

2) Mentioning the vector space and field in the first or second sentence

Earlier on this talk page there was a discussion about going light on the math-speak early in the article. Many of this article's thousands of daily visitors, especially visitors who are here to read about matrices specifically and visitors learning about eigenvectors for the first time, may not know what a vector space and a field are. Additionally, some search engines pick up the opening sentence as the snippet of the article to preview on the search results page, which shows a limited number of characters and reaches an even wider audience. With these in mind, my intention was to simplify and shorten the language in the first sentence to make it more broadly accessible, then introduce the more formal terms in the second sentence. If the Wikipedia community would rather see a mathematically precise definition starting in sentence one, no big deal to me.

Error9312 (talk) 04:57, 18 August 2016 (UTC)

As I see it, you disagree with the opening three sentences of the article in their previous version. The sentences themselves are imho perfectly consistent, and reflect a contemporary mathematical view.
Of course, one might want to discuss the amount of mathematical fine print mentioned in the lede, but I strongly oppose stepping back to adhering to the matrix view. Matrices are an extremely important means for any real-world calculations, but impose strong restrictions on mathematical generality; especially, they impede any independence of basis selection.
Please, may I point you to a subtlety (${\displaystyle \mapsto }$ vs. ${\displaystyle \rightarrow }$) in your citation of the mapping notation: while ${\displaystyle T:v\mapsto T(v)}$ and ${\displaystyle T:V\rightarrow V}$ are fine, ${\displaystyle T:V\mapsto V}$ in this here context is not. It would possibly describe the identity map on a set of vector spaces, whereas the first version allows for the equation ${\displaystyle T(v)=\lambda v}$, and the second version describes domain and co-domain of the map ${\displaystyle T}$.
I plead for reinstating the more modern mapping view, pointing to matrices as examples rather than placing them in the front row. Breaking up with accustomed views generally pays the rent in mathematics.
I am unsure to what degree vector spaces and their respective scalar fields are well-known notions to those looking up eigenvalues and eigenvectors, and thus whether they should be omitted in ledes. Purgy (talk) 07:23, 18 August 2016 (UTC)
Although I agree with Purgy regarding ${\displaystyle \mapsto }$ vs. ${\displaystyle \rightarrow }$, I prefer Error9312's version of the first few sentences. The first sentence in particular should be as jargon-free as possible without being flat-out wrong. McKay (talk) 07:56, 18 August 2016 (UTC)
I agree with McKay. However, I also agree with Purgy that I do not think focusing on matrices really simplifies anything. If there were a way of de-emphasizing the field and vector space, I would be fine with that. Can't we just say something like this: "Let T be a linear transformation. Then a non-zero vector v is an eigenvector if ${\displaystyle Tv=\lambda v}$ for some scalar λ." This leaves out some details, but that's generally fine in the lead. Sławomir Biały (talk) 11:25, 18 August 2016 (UTC)
Didn't catch the ${\displaystyle \mapsto }$ vs. ${\displaystyle \rightarrow }$ typo last night. Thanks for pointing it out. The typo doesn't invalidate my points above supporting use of the equation notation instead of using ${\displaystyle \mapsto }$. I'm fine with switching to more modern notation as long as it has sources to cite and the notation is consistent within the article and preferably with closely related articles, too.
Regarding matrices, just to be clear, I'm not advocating focusing only on eigenvalues and eigenvectors of matrices in the lead and the overview as it was before March 2016. Rather, I'm arguing that the parallels between the general case and the matrix case are clearer when the equation notation is used for both,
${\displaystyle T(\mathbf {v} )=\lambda \mathbf {v} ,}$
${\displaystyle Av=\lambda v,}$
rather than using different notation for each,
${\displaystyle T:\mathbf {v} \mapsto \lambda \mathbf {v} ,}$
${\displaystyle Av=\lambda v.}$
Sławomir, we could also use ${\displaystyle Tv=\lambda v}$, but that notation wouldn't be consistent with the linear map article or the sources I'm familiar with for this topic.
Whatever we settle on, it should be consistent across the lead, overview, and general definition sections. A compromise might be to use the equation notation in all three, then in the general definition section add a sentence stating that the ${\displaystyle \mapsto }$ notation is also valid and citing a source to back it up. Error9312 (talk) 18:17, 18 August 2016 (UTC)
I don't object to writing ${\displaystyle T(\mathbf {v} )}$. However, as far as I am aware writing ${\displaystyle T(v)}$ for linear transformations and ${\displaystyle Av}$ for matrices is not one that is generally supported by sources, and I don't think it is helpful to insist on that distinction here. Indeed, many authors write ${\displaystyle Tv}$ for the action of a linear transformation on a vector. Some even prefer, for their own mysterious reasons, to denote this ${\displaystyle vT}$. Anyway, generally speaking, consistency between different Wikipedia articles is too much to hope for, and we usually just ask that an article be internally consistent as far as is possible. Sławomir Biały (talk) 20:28, 18 August 2016 (UTC)
I do object to ...
• ... inconsistency in notation throughout one given Wikipedia article, favouring more modern notation (evolution!), which has reached general acceptance throughout the whole pertinent literature: the ${\displaystyle \rightarrow }$ specifying domain and co-domain, the ${\displaystyle \mapsto }$ defining properties or notation of the map.
• ... blurring the difference between
${\displaystyle T:\mathbf {v} \mapsto \lambda \mathbf {v} ,\quad }$ and
${\displaystyle Av=\lambda v,}$
as the second employs an additional (representational) object ${\displaystyle A}$ to express the desired relations, and refers to an additional operation (matrix multiplication). I am indecisive whether to write ${\displaystyle T(\mathbf {v} )}$ or ${\displaystyle T\mathbf {v} }$. A subtle discrimination between variables and distinct values, based on this notational difference, is in my experience only rarely found. Operating from the left or the right, respectively, are widespread diversifications in math notation. :)
• ... calling matrices "in parallel" to "linear maps". In this here context I prefer to see them as "specific representations" of linear maps, which impede the important concept of basis independence, are of extreme importance in numerical calculations only, and additionally are appropriate only for finite-dimensional spaces. They should be treated in the article for that importance and for historical or heuristic reasons, but should not clutter the foundations too much.
I see "eigenspaces" as properties of linear maps, living in arbitrary vector spaces (modules?), and they should be treated as such, without fencing them in by revering historical habits and education. Finally, I think that a good deal of the originally presented points are not affected by this discussion. Purgy (talk) 08:39, 19 August 2016 (UTC)
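The point that eigenvalues belong to the linear map rather than to any particular matrix representation can be checked numerically: representing the same map in two different bases gives two different but similar matrices with identical eigenvalues. A sketch, with an illustrative map and change of basis:

```python
import numpy as np

# The same linear map in two different bases: A in the standard
# basis, B = P^{-1} A P in the basis given by the columns of P.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # matrix in the standard basis
P = np.array([[1.0, 1.0],
              [1.0, 2.0]])          # invertible change-of-basis matrix
B = np.linalg.inv(P) @ A @ P        # matrix of the SAME map, new basis

# Different matrices, identical spectra: the eigenvalues {2, 3}
# are a property of the map, not of either representation.
ev_A = np.sort(np.linalg.eigvals(A))
ev_B = np.sort(np.linalg.eigvals(B))
assert not np.allclose(A, B)
assert np.allclose(ev_A, ev_B)
```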

## Suggestion for a new lede

How about the following (meanwhile withdrawn):

______________________________________________

In linear algebra, an eigenvector or characteristic vector of a linear transformation ${\displaystyle T}$ is a non-zero vector that does not change its direction under application of this linear transformation. In other words, if ${\displaystyle \mathbf {v} }$ is a non-zero vector, then it is an eigenvector of a linear transformation ${\displaystyle T}$ exactly if ${\displaystyle T(\mathbf {v} )}$ is a scalar multiple of ${\displaystyle \mathbf {v} }$. This condition can be written as the mapping

${\displaystyle T:\mathbf {v} \mapsto \lambda \mathbf {v} ,}$

where ${\displaystyle \lambda }$ is a scaling factor, known as the eigenvalue, characteristic value , or characteristic root associated with the eigenvector ${\displaystyle \mathbf {v} .}$

There is a one-to-one correspondence between n by n square matrices and linear transformations from an n-dimensional vector space to itself. So for finite-dimensional vector spaces, it is equivalent to define eigenvalues and eigenvectors using either the language of linear transformations or the language of matrices. Such a linear transformation ${\displaystyle T}$ can be uniquely represented as an n by n square matrix ${\displaystyle A}$, and the vector ${\displaystyle \mathbf {v} }$ by a column vector, which is an n by 1 matrix. The above mapping is then rendered as a matrix multiplication on the left-hand side and as a scaling of the column vector on the right-hand side in the defining equation

${\displaystyle A\mathbf {v} =\lambda \mathbf {v} ,}$

which holds for eigenvectors ${\displaystyle \mathbf {v} }$ and corresponding eigenvalues ${\displaystyle \lambda ,}$ belonging to the linear transformation ${\displaystyle T}$ represented by the matrix ${\displaystyle A.}$ Therefore these are usually called the eigenvectors and eigenvalues of the matrix.

Geometrically, an eigenvector corresponding to a real, nonzero eigenvalue points in a direction that is stretched by the transformation and the eigenvalue is the factor of this stretching. If the eigenvalue is negative, the direction is reversed.

__________________________________

Please, comment. Purgy (talk) 16:45, 21 August 2016 (UTC)
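As a sanity check on the final paragraph of the proposed lede (a negative eigenvalue reverses the direction), here is a minimal NumPy sketch using a reflection across the x-axis; the example is illustrative only:

```python
import numpy as np

# Reflection across the x-axis: eigenvalue +1 along the axis that is
# kept fixed, eigenvalue -1 along the direction that gets reversed.
A = np.array([[1.0,  0.0],
              [0.0, -1.0]])

kept = np.array([1.0, 0.0])        # lies on the axis of reflection
flipped = np.array([0.0, 1.0])     # perpendicular to the axis

assert np.allclose(A @ kept, 1 * kept)        # direction unchanged
assert np.allclose(A @ flipped, -1 * flipped) # direction reversed
```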

I'm not sure it is better than the current lead, and there are several things that are worse. Firstly, the notation ${\displaystyle \mapsto }$ is arguably not used correctly, or at least is misleading. When we write ${\displaystyle f:x\mapsto f(x)}$ we usually interpret this as a lambda expression, not something only true for a particular value of x. Secondly, I don't see how mentioning a one-to-one correspondence is helpful. What we actually mean (and the current lead says) is that the linear transformation is represented by the matrix. This is much stronger than "one-to-one correspondence". I'm not clear what else is different about the proposed lead. Sławomir Biały (talk) 18:10, 21 August 2016 (UTC)
While I can appreciate your reservations to the \mapsto, I lack understanding for "representing" being stronger than "one-to-one" in this here context, if one wants to avoid "isomorphisms" or the like. In my effort I tried to collect and compress the current content under the premise of minimized changes. I am not shy to confess that I am eager to repress the general use of language of matrices in math articles wherever they impose their, partly mentioned, native restrictions. Would not leaving out the one-to-one connection weaken the matrix position in even finite dimensional environments still more? In trying to contribute to improvement of this article, I certainly will never fight for some specific content. Purgy (talk) 05:43, 22 August 2016 (UTC)
What is meant is that the linear transformation is represented as a matrix, not that there is a one-to-one correspondence between the set of linear transformations and the set of matrices. Representation is the mathematically correct term here. Sławomir Biały (talk) 09:48, 22 August 2016 (UTC)