Talk:Vector space


Dimension

Jacob, are you seriously saying that all infinite dimensional vector spaces are isomorphic to each other? How about the Hilbert spaces? Is H0=H1 ? −Woodstone (talk) 22:57, 18 January 2009 (UTC)

I'm seriously saying that two vector spaces of the same dimension are isomorphic as vector spaces. There may be vector spaces that are both infinite-dimensional but whose bases have different cardinalities. Also, L^p is isomorphic to L^q as vector spaces, but not as topological vector spaces. Likewise with any other counterexample you may think of. Just see the relevant article section and the refs cited therein. Jakob.scholbach (talk) 23:00, 18 January 2009 (UTC)
That's only a half answer. Are you stating that H0 and H1 are isomorphic as vector spaces? I think not. −Woodstone (talk) 23:09, 18 January 2009 (UTC)
I don't know that notation. What does it mean? But anyway, you can answer it yourself: if the dimensions agree (as cardinal numbers) they are isomorphic as v.sp., otherwise they are not. Jakob.scholbach (talk) 23:32, 18 January 2009 (UTC)
Two vector spaces of the same dimension are isomorphic, even if that dimension is an infinite cardinal. Indeed, any vector space over a field F with a basis set X is isomorphic to \oplus_{x\in X}F, which is the space of all finitely supported functions X→F. By precomposition with a bijection to the cardinal |X| of X, this can be put into a one-to-one linear correspondence with the vector space of finitely supported functions |X|→F. For the other question, the example of H_0 and H_1 seems strange to me, because these typically denote Sobolev spaces, in which case H0 and H1 are both separable Hilbert spaces, and so are in fact isomorphic as Hilbert spaces as well. siℓℓy rabbit (talk) 23:43, 18 January 2009 (UTC)
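
(Aside, to spell out the isomorphism sketched above in my own notation; this is just for the discussion, not proposed article text.) If V has a basis (v_x)_{x\in X} over F, one can define

\[ \Phi\colon\ \bigoplus_{x\in X} F \;\longrightarrow\; V, \qquad (a_x)_{x\in X} \;\longmapsto\; \sum_{x\in X} a_x v_x , \]

where the sum is finite because all but finitely many a_x vanish. \Phi is linear by the vector space axioms, surjective because (v_x) spans V, and injective because (v_x) is linearly independent, so any two vector spaces whose bases have the same cardinality are isomorphic as vector spaces.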

I have added a content note to clarify this. I am generally opposed to footnotes in the lead. However, sometimes they are necessary. This seems to be such a case. Please change the wording around and provide references as appropriate. Originally, I thought that Halmos Finite dimensional vector spaces provided some discussion of this, but I was unable to find a suitable section there. Anyway, I think the clarification would be much more effective with a suitable reference. siℓℓy rabbit (talk) 01:02, 19 January 2009 (UTC)

I think that most people are familiar with the idea of a dimension. But can any layman who reads this discussion confirm what exactly they think it is? Tracing back to my earlier days, I used to think that higher dimensional spaces are "more complex". When writing the article, perhaps you may like to bear that in mind. --PST 08:47, 19 January 2009 (UTC)

Reference

I added this reference sometime ago: Cohen 1998, p. 31, The Topology of Fibre Bundles, but it seems to have disappeared. Is this a problem with the formatting of the reference? I think that this PDF file has a good lot of information on vector bundles so it should be there. --PST 08:59, 19 January 2009 (UTC)

Yes, that was me. I didn't remove it primarily because of the reference, but because of the statement that vector bundles form a monoid (which I thought leads a bit too far away). Remember this is an article about v.sp., not vector bundles, so we should not include any reference which is just somewhat nice on a side aspect of the article topic. Also, unless the thing is printed by a regular publisher (which it is not?), the file does not count as a reference, but as an external link, which makes it even less interesting to include here. Jakob.scholbach (talk) 16:21, 19 January 2009 (UTC)

Inner product notation

I've seen the following notations commonly used for inner products: (u,v); \langle u,v\rangle; \langle u | v\rangle (the last only as part of Dirac notation). This article uses yet another, viz: \langle {\mathbf u} |{\mathbf v}\rangle, which I have never seen (but I'm a physicist). I assume this is used by some mathematicians, but how common is it? At any rate shouldn't the other notations be mentioned? PaddyLeahy (talk) 11:45, 19 January 2009 (UTC)

How exactly are the last two different? Except that the latter uses both braket notation and bold to (doubly) indicate that the objects are vectors. Anyway, if different notations are to be mentioned, shouldn't at least the ordinary dot notation be mentioned? (TimothyRias (talk) 12:34, 19 January 2009 (UTC))
I don't have a preference for either of the variants. The boldface for vectors is just a general notation in this whole article (and elsewhere), that's unrelated to the inner product. The dot notation is mentioned (for the standard dot product, for which, I feel, it is preferably used). I think additional notations should not be discussed here, since this is an article about vector spaces. Jakob.scholbach (talk) 16:18, 19 January 2009 (UTC)
Still, using both boldface and braket notation is weird since it doubly denotes the objects as vectors. Notation-wise, \langle {\mathbf u} ,{\mathbf v}\rangle probably is nicer. (TimothyRias (talk) 22:49, 19 January 2009 (UTC))
Well, we use boldface all the time, so I don't see a reason to change it at that place. But whether we put a "|" or a comma in the middle, I don't care. Change it if you like... Jakob.scholbach (talk) 23:15, 19 January 2009 (UTC)

When to mention fields

It seems a bit early in the lead to bring up fields, since we cover them in the definition. I have tried leaving it in, but I am concerned it might get beyond interested high school students if we jump into it too quickly. Thenub314 (talk) 15:59, 20 January 2009 (UTC)

Well, I think they are too important to be omitted, but I like the way you trimmed it (except for the use of the second person, which I removed by making that sentence passive). But I think we should at least give a non-rigorous explanation of what a field is (indeed because high school students won't know what they are[1]), such as

... provided the scalars form a field (such as rational numbers or complex numbers), that is, that they can be added and multiplied together satisfying similar properties.

, or something like that.
[1] Incredibly, I've seen junior high school books with definitions of groups, rings, and fields, but I guess more than 99.99% of all teachers simply skip those parts. -- Army1987 – Deeds, not words. 16:22, 20 January 2009 (UTC)
I did another take of the first section of the lead. I also think we should not put too much emphasis on the base field. A person who does not know about this will never think: "Oh, and what if I consider a complex vector space over the reals?" Also, I somewhat disagree that we should explain what a field is. Again, people who don't know what a vsp is will hardly digest the brief definition "you have +, -, *, /". Indeed most of the article, and most of the applications both in mathematics and beyond, concern real and complex spaces, so conveying this particular case in the lead is fairly sufficient. Jakob.scholbach (talk) 18:56, 20 January 2009 (UTC)

Recent change to the lead sentence

I don't really like this edit. Apart from some errors (such as the inappropriate mention of mathematical physics) that could be cleared up with copyediting, I think it is actually more formal rather than more understandable for a non-mathematician. Do we have any "non-mathematicians" available for comment? Army, I believe, is an engineer or physicist. siℓℓy rabbit (talk) 02:46, 21 January 2009 (UTC)

I don't see why the mention of mathematical physics is inappropriate: that's where the motivation came from, and where virtually all the applications are. Re formality, there are basically two directions the article could go: it could start with a "physics" intuition of a vector as something with magnitude and direction, or it could start with an abstract description. If it starts with an abstract description, the previous version wasn't good enough. Saying "a vector is something that can be scaled, or multiplied by a number" is only understandable to somebody who already understands it. Looie496 (talk) 05:19, 21 January 2009 (UTC)
"Mention" of mathematical physics is appropriate, but in proportion to its prominence in the article. So far, not much of the article is dedicated to physics, and therefore the second sentence does not appear to be correct weight for this. Also, contrary to conventional dogma, the abstract notion of a vector space was not motivated directly by physics, mathematical or otherwise. (It is true that the notion of a physical vector emerged from such considerations.) siℓℓy rabbit (talk) 07:10, 21 January 2009 (UTC)
I tend to agree with siℓℓy rabbit. Though I am a mathematician, I am a pretty bad one, so hopefully my input carries some weight. I think the notion of scaling is pretty clear and intuitive for people who haven't seen vector spaces (we have all seen scale models, drawings, etc.) The previous lead perhaps could be criticized because it implied you could "add" some objects called "vectors". But I think the current picture next to the lead made that rather clear as well. Overall, my opinion is that we should go back to the lead we had a day (or two) ago. Thenub314 (talk) 07:19, 21 January 2009 (UTC)
I think reverting Looie's edit there was appropriate. Do you still think the physics aspect has too much weight now? (Currently just one motivating and hopefully understandable example from physics in the lead. I plan to brush over the motivation section in this direction too, but there also highlighting the mathematical background, i.e. triples etc. of numbers). I think one motivational example in the lead is good, since then we can say that the axioms are modelled on that. Jakob.scholbach (talk) 07:30, 21 January 2009 (UTC)
I don't like the new lead either. And Looie496: "that's where the motivation came from, and where virtually all the applications are" is false. Some motivation does come from physics but there are heaps of applications of vector spaces in mathematics; perhaps as many as in physics. Something about physics should be mentioned, but I strongly disagree that physics is the only reason why vector spaces were invented. --PST 07:54, 21 January 2009 (UTC)
By the way, Army is a high school math teacher as he/she noted already on the "comments" page. --PST 08:01, 21 January 2009 (UTC)
I'm not. I'm an undergraduate physics student, as noted on my user page. Probably you've been confused by the comment by Vb immediately above mine, which I suppose was signed with a ~ too many, displaying only the time in the signature. BTW I fixed that. -- Army1987 – Deeds, not words. 15:00, 21 January 2009 (UTC)
As for the lead, I don't think that factual accuracy and clarity are incompatible goals. While it's true that very few people know what the word field means, commutativity and associativity of addition and multiplication etc. are taught in grade schools. It shouldn't be impossible to write a lead which doesn't contain factual inaccuracies and yet can be understood by anyone in the last year of high school and also by sufficiently bright younger people. -- Army1987 – Deeds, not words. 15:16, 21 January 2009 (UTC)
Well, accuracy and clarity are not incompatible goals overall, but over the span of 3-4 sentences they can be. Most high school students I have taught are much more comfortable with the concept of a vector as a pair of numbers than with the terms commutativity and associativity. While these are often taught in grade school, and again in middle school, and again in high school algebra, it doesn't exactly prepare students for the concept of "numbers" that are more general than the complex numbers. I think the goal for the lead should be to get the idea across, and later in the article (say the definition section) we can discuss its more general formulations. Thenub314 (talk) 15:45, 21 January 2009 (UTC)
I agree with Thenub here. For what it's worth, as an undergrad math student I made a living tutoring people, and as a grad student I was a TA and taught courses in calculus, algebra, and discrete math, among other things, so I too have had some opportunities to learn what sorts of explanations actually work for people. Looie496 (talk) 17:55, 21 January 2009 (UTC)

Concrete proposals

OK, so now everybody has given his/her opinion. In order to get it back on a more concrete track, may I propose the following procedure: everybody interested writes a lead section (1st paragraph only) and puts it here. Then we can see and discuss advantages of the drafts.

Here is my take (which is the current version) Jakob.scholbach (talk) 16:06, 21 January 2009 (UTC)


A vector space is a mathematical structure formed by a collection of objects, called vectors, that may be added and scaled, or multiplied by numbers. For example, physical forces acting on some body are vectors: any two forces can be added to yield a third, and the multiplication of a force vector by a real factor—also called scalar—is another force vector. General vector spaces are modelled on this and other examples such as geometrical vectors in that the operations of addition and scaling or scalar multiplication have to satisfy certain requirements that embody essential features of these examples. In addition to scaling by real numbers, vectors and vector spaces with multiplication by complex or rational numbers, or scalars in even more general mathematical fields are used.

Jakob.scholbach (talk) 16:06, 21 January 2009 (UTC)


A vector space is a mathematical structure formed by a collection of objects called vectors, along with two operations called vector addition and scalar multiplication. Vector spaces are a primary topic of the branch of mathematics called linear algebra, and they have many applications in mathematics, especially in mathematical physics. The most basic example of a vector space is the set of "displacements" in N-dimensional Euclidean space. Intuitively, a Euclidean displacement vector is often thought of as an arrow with a given direction and length. Addition of displacement vectors is done by placing them end-to-end, with the vector sum being a vector that points from the beginning of the first vector to the end of the second vector. Scalar multiplication is done by altering the length of a vector while keeping its direction the same. Many of the properties of N-dimensional Euclidean vector spaces generalize to vector spaces based on other number systems, or to infinite dimensional vector spaces whose elements are functions.

Looie496 (talk) 17:48, 21 January 2009 (UTC)


Lead suggestions

Places where confusion arises in the lead:

  • The sentence beginning "for example" - readers unfamiliar with mathematics will be confused by this accumulation of terms
  • Perhaps a simple definition could be added before the precise mathematical definition, one that would be more accessible to non-mathematicians.
  • The "history" paragraph seems to interrupt the discussion of vectors
  • The relationship between "collection of objects" and "physical forces", both described as vectors, is unclear.

These suggestions come from my writing class, which consists of college sophomores, juniors, and seniors from across the disciplines. Hope they help! Awadewit (talk) 17:44, 21 January 2009 (UTC)

Thanks! As you see, we are in the middle of the discussion. We'll use your hints. Jakob.scholbach (talk) 22:09, 21 January 2009 (UTC)
1) OK, 2) Hm. Currently (as previously) a pretty vague "definition" is given. Do you think "objects that may be added together and multiplied ("scaled") by numbers" is still too difficult to grasp? 3) OK. 4) OK, that was a mis-wording (the objects in the collection are vectors, the collection of vectors is the vector space). Is this clearer now? Jakob.scholbach (talk) 23:22, 22 January 2009 (UTC)
My class thought the new version was a dramatic improvement, in particular the phrase you have highlighted above - "objects that may be added....". Awadewit (talk) 20:31, 26 January 2009 (UTC)

"A vector space is a set"[edit]

Out of curiosity, haven't vector spaces as proper classes been considered in the literature? GeometryGirl (talk) 13:44, 22 January 2009 (UTC)

I haven't come across it but I would be surprised if it didn't exist. If you ever run across a good reference it might be nice to include it in the generalizations section. Thenub314 (talk) 14:49, 22 January 2009 (UTC)

Further comments on the lead

To support the ongoing FAC process, a few comments on the lead:

  • Could shorten first paragraph by taking out the in-line explanation of what a field is: it is already wikilinked and a reader not familiar with the concept is not likely to learn yet another new definition while reading the lead;
  • Yes, I think the same. I have shrunk that. Jakob.scholbach (talk) 23:17, 22 January 2009 (UTC)
  • Euclidean vectors vs geometrical vectors: what is the intended difference? Current text appears to equate Euclidean vector with uses in physics, which is not really right. Should probably combine the terms geometric and Euclidean together (using only one of them) and then say that one very important use of these is representing forces in physics.
  • To be honest, I don't know what to do about Euclidean vectors. Physicists seem to insist on them, mathematically they have little or no importance (at least they are hardly ever called that). I tried to reword it to make clear that, in essence, the same concept is used both in physics and geometry. Jakob.scholbach (talk) 23:17, 22 January 2009 (UTC)
We had a long argument about them on Talk:Euclidean vector. If one accepts that a "Euclidean vector" is the same thing as a "contravariant vector", then we had a long discussion starting about here, and my conclusion was that contravariant vectors are tangent vectors. There was a moment when I was convinced they were something else, but I changed my mind later (see my last comment under here). Ozob (talk) 02:00, 23 January 2009 (UTC)
We physicists don't usually call them Euclidean vectors either. I don't think that n-dimensional Euclidean vector space means anything more specific than any n-dimensional vector space over R with a positive-definite inner product. If one wants to specify vectors acting on the particular Euclidean affine space used in non-relativistic physics to model physical space, one would just say "space vectors", "spatial vectors", "three-vectors" or stuff like that. But people on Talk:Euclidean vector seem to think otherwise, and I got tired of arguing. -- Army1987 – Deeds, not words. 18:10, 23 January 2009 (UTC)
  • The end of the first paragraph: "in that the axiom of vector addition and scalar multiplication embody essential features of these examples" is not very helpful and repeats the point. Could be cut to make the text mass lighter.
  • Linear flavor: what does this mean? Some could call this a circular reference...
  • Second paragraph should be split: it is not coherent as it begins with history (which should find its way in the lead) but ends with discussion of dimension.
  • Could it be more comprehensible to try to define dimension (finite, at least) vaguely in terms of "independent directions" existing in the space (technically this would be the maximal number of linearly independent vectors) rather than with "number of scalars needed to specify a vector" (technically size of minimal set of generators)? The reader at this stage does not know how a vector can be specified using a list of scalars (once a base is given) but the intuition about independent directions could be provided with the list of one direction in a line, two in plane and three in space.
  • "Convergence questions" is not very clear unless you know what's meant already. Could something like "Analytical problems call for the ability to decide if a sequence of vectors converges to a given vector" provide more flavour without adding much text?
  • One now gets the impression that among topological vector spaces, Banach and Hilbert spaces are particularly complicated, a viewpoint I would not accept. The intention is presumably to claim that these are particularly important types of TVS, which is surely right.
  • OK. Reworded (and moved the footnote there). Jakob.scholbach (talk) 23:17, 22 January 2009 (UTC)
  • Applications section is strangely skewed. It is true that given the almost ubiquitous applications of vector spaces both within mathematics and in other disciplines, it is hard to write a balanced paragraph. But singling out Fourier analysis looks unwarranted. Differential equations make sense, in that they were instrumental for the development of topological vector spaces. Local linearisation of manifolds may be a tad too technical as the other example. Systems of linear equations? High school background should make these something to relate to.
  • Do you mean that the article is skewed or that the lead is skewed? If the article is OK, then the lead has but to sum up the article, so a word about Fourier and friends seems logical? Systems of linear equations are now mentioned.

Stca74 (talk) 15:03, 22 January 2009 (UTC)

Using bullets for scalar multiplication

Don't you think that writing "The product of a vector v and scalar a is denoted av", but then denoting it a · v in the rest of the article, can be confusing? Is the reader going to understand they refer to the same thing? Also, in "equivalently 2 · v is the sum w + w", why should the same vector be referred to as v on the LHS and as w on the RHS, and why shouldn't there be a {{nowrap begin}}/{{nowrap end}} around the w + w, as there is one around similar such expressions in the same paragraph? And why should the word ordered be hidden by a link such as pair, in flagrant violation of WP:EGG? -- Army1987 – Deeds, not words. 14:26, 24 January 2009 (UTC)

I guess you refer to my reverting your edit. Sorry, I had not noticed these changes, only the removal of the dots. (I did look at the diff, but somehow missed them.) I have reinstated your points (thanks for catching the 2*v = w+w, in particular) and put a notice that the product may also be denoted with a dot. I think points like rv = (rx, ry) could be confusing to some readers. Jakob.scholbach (talk) 15:43, 24 January 2009 (UTC)

The lead

In view of some of the problems people are having with the lead over at the FAC page, I thought I'd put something down here to see if this is more along the lines of what they want. I'm thinking that what is desired is that at least the first paragraph be some layman's terms way of describing what vector spaces over the reals are. Delving into other fields and such could then be left to the later paragraphs of the lead. So my question is, is the following the type of content that the opposition at the FAC page is looking for (clearly, the prose itself is quite lacking)?

A vector space is a mathematical structure that, in its simplest form, can be used to track one or more quantities. In this way, they generalize the real numbers, which can be used to track one quantity as in "I am 5.2 km down the road from your house" or "I am missing 1.3 cups of sugar for this cake recipe" (-1.3 cups of sugar). An element of a vector space (called a vector) could represent a position using three distances such as "I am 2.3 km east, 1.1 km north from you and 100 m below ground" (which can be represented as a triple of real numbers (2.3, 1.1, -100) ), or it could represent how much sugar and flour one requires as in "These cupcakes need 1 cup of sugar and 3.25 cups of flour" (which can be represented as a pair of real numbers (1, 3.25) ). Like real numbers, vectors in a vector space can be scaled and added together. In other words, one can speak of multiplying a vector by a real number, as in "I want to make 2.5 times as many cupcakes, so I will need 2.5 cups of sugar and 8.125 cups of flour" (written as 2.5 · (1, 3.25) = (2.5, 8.125) ), and one can speak of adding two vectors together, as in "This cake asks for 1.5 cups of sugar and 2.75 cups of flour, so in total I will need 2.5 cups of sugar and 6 cups flour" (written as (1, 3.25) + (1.5, 2.75) = (2.5, 6) ). From a mathematical point of view, the specific quantities a vector represents are immaterial so that only the number of quantities matters. For this reason, the mathematical structure of a vector space is determined by the number of quantities it tracks (called the dimension of the vector space).

One could then go on to say that "Mathematicians have generalized certain properties of the real numbers to invent the concept of a "field" ..." etc.

Now, I realize this is a rather poorly written paragraph, but in particular it seems hard to clearly describe what is going on without all the examples. Though perhaps they could be relegated to the "motivation" section. Also, for a mathematician, this is probably a non-ideal beginning since a mathematician prefers to say what something is before describing what it can do. However, it seems like a compromise is necessary. I hope what I've written can lead to some progress on the issue. Cheers. RobHar (talk) 16:13, 24 January 2009 (UTC)
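
(Aside: to make the arithmetic in RobHar's draft above concrete, here is a minimal Python sketch of the two operations being described; the function names are purely illustrative and this is not proposed article content.)

# Tuples of real numbers as vectors: componentwise addition and scalar
# multiplication, the two operations of a real vector space.

def add(u, v):
    """Componentwise sum of two equal-length tuples."""
    assert len(u) == len(v)
    return tuple(a + b for a, b in zip(u, v))

def scale(c, v):
    """Multiply every component of the tuple v by the scalar c."""
    return tuple(c * a for a in v)

# The recipe examples from the draft:
print(scale(2.5, (1, 3.25)))        # -> (2.5, 8.125)
print(add((1, 3.25), (1.5, 2.75)))  # -> (2.5, 6.0)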

Thanks for your suggestion. Frankly, I would be somewhat unhappy to have such a paragraph in the lead section. It contradicts the credo (or guideline, if you want) that working out detailed examples should be avoided. Also, we are not writing a textbook (or cookbook :), sorry I couldn't resist). I like "[v.sp.] can be used to track one or more quantities." and we could perhaps integrate that to the lead. I'm repeating myself, but we can not explain the whole concept from scratch in the lead section of the article. This would be totally unbalanced (also contradicting some guideline). If anywhere, we can explain it with this level of detail in the body of the text. But, I think even there it is inappropriate to do it this way.
I'd like to put here, for comparison, the relevant lead section paragraph of group (mathematics), which is a featured article whose accessibility has been validated by lay readers. It reads
In mathematics, a group is an algebraic structure consisting of a set together with an operation that combines any two of its elements to form a third element. To qualify as a group, the set and operation must satisfy a few conditions called group axioms, namely associativity, identity and invertibility. While these are familiar from many mathematical structures, such as number systems—for example, the integers endowed with the addition operation form a group—the formulation of the axioms is detached from the concrete nature of the group and its operation.
I should say I'm probably biased, because I contributed to that, but I think it has the right spirit of succinctly picking a simple key example and alluding to the concrete definition with its motivation/background/... which comes in the body. There will be many readers who will only fully understand the "integers and addition" thing. So what? We can not, for example (Notice that there are differences: e.g., group axioms are fewer. We should not mention the axioms of vsp in the lead).
Another example that comes to my mind: if you want to write the lead section for plane, say, you would not be able to explain the basics of aerodynamics in detail, but perhaps just write that "Aircraft often rely on curved wings, creating a difference in air pressure above and below the wings". You would and could not explain what pressure means, why moving air creates pressure differences, etc. I think we have to face the reality that certain concepts can not be explained from scratch in one paragraph. Doing our best to educate the reader with the text body is our duty, and we should excel in it. However, putting everything into the lead is simply not going to work. Jakob.scholbach (talk) 16:59, 24 January 2009 (UTC)
RobHar's suggestion is well-meaning but totally contrary to WP:LEAD ("The lead serves both as an introduction to the article below and as a short, independent summary of the important aspects of the article's topic.") and the principle that Wikipedia is not a textbook. Geometry guy 20:39, 24 January 2009 (UTC)
Using the Group (mathematics) article's lead as a basis (pun not intended):
In mathematics, a vector space is an algebraic structure consisting of a set of vectors together with two operations, addition and scaling. These vectors track one or more quantities, such as displacement (the distance travelled and in which direction), or force. To qualify as a vector space, the set and operations must satisfy a few conditions called axioms. These axioms generalize the properties of Euclidean space (e.g. the plane, an idealized flat surface), but their formulation is detached from the concrete nature of any given vector space. Concepts like length and angle may be undefined or counter-intuitive in certain vector spaces.
I think this is a bit too abstract. Also Euclidean space is too technical. Is there anything else that vs's generalize that is less technical? Alksentrs (talk) 17:55, 24 January 2009 (UTC)
Let me point out that the Euclidean vector article is quite nice, with minimal assumptions of background on the part of the reader. It might be helpful to direct readers with a weak background there—a reader who has read that article should be in a much better position to understand this one. Looie496 (talk) 18:15, 24 January 2009 (UTC)

Three elementary consequences of the definition need to be given explicitly

I think that three important consequences of the definition need to be stated. Namely that for all scalars a, and vectors u, the following hold:

  1. a0 = 0
  2. 0u = 0
  3. (−1)u = −u

Note that 2 is expressed in words, but I think it would be good to express it as a formula.

Paul August 17:41, 24 January 2009 (UTC)
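
(For reference, each of the three identities follows from the axioms in a line or two; a sketch, in case a footnote or citation gets added:)

a0 = a(0 + 0) = a0 + a0, and adding the additive inverse of a0 to both sides gives a0 = 0;
0u = (0 + 0)u = 0u + 0u, so likewise 0u = 0;
u + (−1)u = 1u + (−1)u = (1 + (−1))u = 0u = 0, so (−1)u is the additive inverse of u, i.e. (−1)u = −u.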

I'm not sure. You are right that it is somewhat important to know these facts, but we can not (for space reasons) put everything that is important. Also, we have to maintain balance of elementary and advanced material. That said, I would propose adding a precise citation to the relevant paragraph, pointing to a book or so that has these (and further) elementary consequences. Jakob.scholbach (talk) 15:55, 25 January 2009 (UTC)

2nd paragraph of lead

I think the first paragraph has improved substantially, although it still needs work. I would like to make a couple of comments about the 2nd paragraph, which I feel misses opportunities. What I would like it to say is that the basic vector space properties are too weak to give any very interesting consequences, and that the additional structure needed to make them interesting is a norm—a concept of length. With a norm, you get a concept of distance, and therefore a topology. One particularly important way to get a norm is by means of an inner product. Thus you get Banach spaces, inner product spaces, and Hilbert spaces, with increasing levels of mathematical structure. Looie496 (talk) 18:11, 24 January 2009 (UTC)

This is not true. Plain old vector spaces are plenty interesting. For example, you don't need norms or inner products for some topics in differential geometry, such as the theory of differential forms. Nor do you need them for some convex geometry. It's very useful to look at homology and cohomology groups with coefficients in a field, and these are vector spaces. And as Stca74 pointed out above, over finite fields there are no non-trivial norms, so the idea of a Banach or Hilbert space is uninteresting. (The p-adic functional analysis I've looked at is very weird.) Ozob (talk) 23:50, 25 January 2009 (UTC)
This is exactly what I would reply, too. Jakob.scholbach (talk) 08:44, 26 January 2009 (UTC)

Mentioning nationalities of mathematicians in "History"

I've seen that some names are preceded by a nationality ("French mathematicians René Descartes and Pierre de Fermat"), while others aren't ("Bolzano introduced"). This should be made consistent. Were this a more popular topic, we'd already have attracted the anger of nationalist fellow citizens of Möbius, Cayley, etc. But I'm reluctant to fix it myself because there would be the burden of deciding whether to mention all or no nationalities. What do you think? There is a similar issue for given names, but a good compromise for this would be using initials (e.g. R. Descartes and B. Bolzano). -- Army1987 – Deeds, not words. 13:18, 25 January 2009 (UTC)

I guess we should remove all nationalities. Unless we want to make it a point that, say, the (fictional) Icelandic school of algebraists was instrumental in pushing forward v.sp., the nationalities play no role. I would also remove all given names, for brevity's sake. Jakob.scholbach (talk) 13:52, 25 January 2009 (UTC)

Comments from the recently withdrawn (or failed) FAC

I'm copying some comments from the recently withdrawn FAC, in order to discuss them here (and/or address them). Jakob.scholbach (talk) 21:16, 26 January 2009 (UTC)

Comments from Geometry guy

The following comment was copied from Wikipedia:Featured_article_candidates/Vector_space. Jakob.scholbach (talk) 21:16, 26 January 2009 (UTC)


  • Comments on the lead. After being asked to comment on the lead, I read the article quite closely this afternoon. WP:LEAD states: "The lead serves both as an introduction to the article below and as a short, independent summary of the important aspects of the article's topic." In articles on advanced mathematics (and even though vector spaces are extremely standard mathematics, the abstraction involved is advanced), achieving both of these goals in four paragraphs, while remaining as accessible as possible is a very difficult task. However, Wikipedia is not a textbook: it is not the purpose of the lead to teach readers what a vector space is, but to whet their appetite to learn more.
At the moment the lead does not adequately summarize the article: it focuses unnecessarily on forces, and on applications in analysis, without covering adequately the fundamental role of vector spaces in linear algebra. I realise that my view may contradict the view of some other editors, who are concentrating on making the lead easier to understand for the lay reader. That is a painful aspect of FAC: it is impossible to please everyone.
However, as a positive suggestion, should this FAC fail, one solution to the difficulty of both summarizing the article and providing an accessible introduction is to spin out the lead (summary style) to an "Introduction to vector spaces" article (or possibly an "Introduction to linear algebra"). This would make it easier to make the lead an encyclopedic summary of the topic, while providing an entry point for motivated high school students and similar readers. Geometry guy 21:56, 24 January 2009 (UTC)
  • Comments on the prose. In the spirit that FAC is a painful experience, the main thing that struck me on my read-through is that the prose is woeful. Sorry, I should say there is lots of good stuff, but there are patches that are painful to read, even for someone who is extremely familiar with vector spaces and knows what the prose is trying to say. I found myself banging my head in a Quasimodo-like experience of "The prose! The prose!". This may be as painful for article editors to read as it is for me to say. So, let me add that tremendous work has gone into this article and it is well on its way to being featured, following the path of Group (mathematics), which was featured thanks to the drive of the same main editor. Bravo! I wish I had the time and energy to do as much to help. I would be happy to copyedit the article, but realistically, I can't do that for at least two weeks, so let me highlight some issues. In any case I'd rather copyedit the article in a minor way rather than make big changes.
    • Encyclopedic language is not flowery. I can see that considerable effort has been made to use encyclopedic rather than textbook prose. However, encyclopedic language is neither flowery nor convoluted. Don't say "keystone" when you mean "central", "in the guise of" when you mean "as", "employed" when you mean "used". "Achieved" is shorter than "accomplished" and "provide" is more widely understood than "furnish". Other flowery usages include "envisaged", "encompasses", "conception", ...
    • Use bland adverbs sparingly. "Historically", "today", "actually", "notably", "usually", "particularly", "especially", "essentially", "completely", "roughly", "simply". These can often be omitted, or replaced by prose which centres the point.
    • Don't split infinitives needlessly. Sometimes they need to be split, but in most cases they don't. For instance, I would replace "To simultaneously encompass", by "To cover" or "To include".
    • Avoid editorial opinion. Whenever you use an adjective of opinion, or comment on the importance of something, it is helpful to ask the question "according to whom?". Then you can decide whether to provide a source, or to rephrase. Adjectives used here include "important", "crucial", "frequent", "fundamental", "suitable", "useful", ...
    • Avoid long noun phrases. They tend to lead to bad prose: "Resolving a periodic function into a sum of trigonometric functions forms a Fourier series, a technique much used in physics and engineering." is an example.
    • Omit needless words. "certain" is usually not needed and "call for the ability to decide" is wordy.
    • Use a consistent English variant: I see both "analog" and "honor" (American) and "idealised" (British).
Finally, resist the temptation to tell the reader how to look at the subject. This happens at the beginning of quite a few sections:
  • "Vector spaces have manifold applications as they occur in many circumstances, namely wherever functions with values in some field are involved."
  • "'Measuring' vectors is a frequent need"
  • "Bases reveal the structure of vector spaces in a concise way"
  • "The counterpart to subspaces are quotient vector spaces." (Only makes sense to those who already know what it means.)
Geometry guy 23:15, 24 January 2009 (UTC)
  • Further comments.
    • Why does the history stop at c. 1920? The relations to set theory could be discussed here, as could modern developments in homological algebra, Hilbert spaces (quantum mechanics), Banach spaces (Gowers).
    • The category of vector spaces is not boring, nor well understood, even if the isomorphism classes of objects are. See Quiver (mathematics).
    • The determinant of a linear map is not defined, but used.
    • The motivation section is poorly written: if "force" is a motivation, why not explain that addition of vectors corresponds to combining forces?
    • Hamel and Schauder bases are not clearly delineated. Too much effort is expended making the definition of Hamel basis apply in the infinite-dimensional case. Further, referring to the existence of Hamel bases (and hence the axiom of choice) as "fact" is point of view.
Despite all my complaints, however, I must say that Wikipedia readers will be extremely fortunate to find such a comprehensive article on such an important concept. Geometry guy 23:37, 24 January 2009 (UTC)

  • Ad 2: Actually, I don't quite concur with "resist the temptation to tell the reader how to look at the subject". Obviously, we have to tell it rightly. But assuming we are able to do it rightly, I can't see why we should not do it. I mean, certainly wording here and there can be changed, but what is wrong, e.g., with "Bases reveal the structure of vector spaces in a concise way"? (The 4th point is not great, though, I agree). Giving a prose-style description of some mathematical fact often conveys more of an intuition than mere definitions and sober statements of facts. This intuition is then fleshed out by the latter.
  • Ad 3: History:
  • What exactly do you mean by relation to set theory? Things like existence of bases vs. Zorn's lemma? If it's only that, I think that is less of a historical fact (in the sense of a longer development), but just a single mathematical fact (or a few).
  • Developments in homological algebra: in what sense were vector spaces (historically) crucial to h.a.?
  • Hilbert and Banach spaces: for brevity's sake and also for general considerations, I think historical development of H. and B. spaces should be treated in History of functional analysis, or the corresponding sections in Hilbert space and Banach space.
  • As far as I see, the article does not call the category of vector spaces boring or easy. Anyhow, the "degree of boredom" of it should be discussed in the subpage.
  • "Too much effort is expended making the definition of Hamel basis apply in the infinite dimensional case": Hm. Do you talk about taking the index set I and denoting the basis vectors by vij? I agree that the double index is a bit cumbersome, but I would really not say: "for simplicity, we treat only the case of finite bases". I don't see how to give the (necessary) information that there are infinite-dimensional spaces without introducing infinite (Hamel) bases.
  • "fact" is point of view: Hm. It could easily be reworded to "Every v.sp has a basis. This follows from ...", but in what sense would this not be POV? The only really clean way would be "Assuming the axiom of choice or, equivalently, L. of Z., every vsp has a basis", but this seems a bit exaggerated, right? Jakob.scholbach (talk) 16:08, 27 January 2009 (UTC)
A review is only a review, and reviewers differ, so there's no reason for you to concur. I will respond with clarifications where I can. Concerning bases, if we want to tell the reader how to think of them, we could equally say "Bases obscure the structure of vector spaces in an ugly way". Indeed, the equivalence of categories with matrices has led one reader to extend that equivalence, erroneously, to the category of finite sets. I agree it is nice to flesh out the bare facts with some intuition, but the latter needs to be backed up by a citation.
On history, set theory, and functional analysis, I can see your point about the latter, but the existence of bases, and the Hahn-Banach theorem play an important role in our intuition about whether the axiom of choice is "true" (and this is disputed). This is an issue which is rather difficult to handle: I dimly recall struggling with it myself at dual space — without choice one might not even be able to prove that the dual space is nontrivial (the article wrongly says "nonempty", but that was probably my gaffe, unfixed for nearly 2 years). Yet we cannot inflict all of this on the hapless reader. The history section may be a better place to discuss it than elsewhere.
You are probably right about homological algebra: the integer coefficient case was probably more dominant historically than the case of a field. These days, however, algebras and modules over algebras are very dominant in homological algebra, as are quivers in representation theory. Stopping the history so early gives a false impression of a sterile subject.
The "boredom" remark was a response to a comment at the FAC, not the article. Gabriel's theorem (1972) shows that almost all linear algebra problems are very hard, including the classification of two endomorphisms, or five subspaces.
As far as I am aware, Hamel bases are essentially useless in the infinite dimensional setting (this is related to the choice needed to show they exist). I'm not sure either how the article should respond to this, but obscuring the meaning of a Hamel basis for the sake of generality may not be the best way. Geometry guy 21:53, 27 January 2009 (UTC)
Perhaps this is fishy, but in a polynomial ring the basis given by the monomials is quite handy. I faintly remember Euler was a sceptic of Fourier transforms--he wondered how an uncountably-dimensional space could be "generated" by a countable basis. So, a clear-cut notion of bases also in the infinite-dimensional case seems noteworthy. Jakob.scholbach (talk) 21:58, 28 January 2009 (UTC)
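
(To spell the polynomial example out, assuming a ground field F:) F[x] = \oplus_{n \ge 0} F x^n, and every polynomial p = a_0 + a_1 x + ... + a_d x^d is a finite linear combination of the monomials 1, x, x^2, ..., which therefore form a Hamel basis; so F[x] has countably infinite dimension over F, with no topology needed.
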
You are right, polynomial algebras are a good motivation for "the" vector space with a countably infinite basis. Do you know a good example in the uncountable case? Geometry guy 22:06, 28 January 2009 (UTC)
(<--)How about the group algebra C[C] or the like? (It's not terribly all over the place, though). Jakob.scholbach (talk) 22:10, 28 January 2009 (UTC)
That's somewhat artificial unless you take into account the topology on C. But then it's probably countably generated as a topological vector space (by all monomials x^α, α ∈ Q + iQ). Ozob (talk) 01:40, 29 January 2009 (UTC)
Right. Jakob.scholbach (talk) 09:33, 29 January 2009 (UTC)
Perhaps it's not too wrong to say: any object of somehow uncountable nature is obtained via limiting processes (i.e., topology)? Jakob.scholbach (talk) 09:35, 29 January 2009 (UTC)
If you are after a real-world example of a nonseparable Hilbert space where the use of a basis is useful, you just have to look at Quantum field theory. The basis states of QFT are of the form \left|k_1,...,k_n\right\rangle, where the k_i are momenta. The corresponding Hilbert space is (the closure of) the span of these states. Since all of these states are orthogonal and the momenta live in a continuous space (as long as the QFT lives on a non-compact manifold), the Hilbert space is clearly nonseparable. Yet, the only sensible way of talking about it is in terms of its basis. (TimothyRias (talk) 10:40, 29 January 2009 (UTC))
I guess you are talking about a base in the topological sense? We try to find an example of a useful, not totally artificial (as opposed to \oplus_{i \in I} \mathbb R for some set I) vector space of uncountable (Hamel) dimension, a base of which can be explicitly given. Jakob.scholbach (talk) 16:19, 29 January 2009 (UTC)
Even deleting "the closure" parenthesis, the physical meaning is unclear, given our lack of understanding of physics at the Planck scale. Geometry guy 19:55, 29 January 2009 (UTC)

This is philosophy, but I think Jakob is right. We can only comprehend uncountable sets by viewing many of their elements as being very close together. That intuition inevitably has a topological flavour. Geometry guy 19:55, 29 January 2009 (UTC)

If I follow the points here correctly, then such an example is considered in Loop Quantum Gravity. There the Hilbert space of functionals over (generalized) connections on a 3-manifold is defined using an inverse limit. This is non-separable. It decomposes into subspaces labeled by all embedded networks. —Preceding unsigned comment added by 212.76.37.234 (talk) 23:06, 16 May 2010 (UTC)

Comments by TakuyaMurata

The following comment was copied from Wikipedia:Featured_article_candidates/Vector_space. Jakob.scholbach (talk) 21:16, 26 January 2009 (UTC)


Some feedback

I'm putting my response here for ease of editing (for me and others). By "categorical point of view", I was thinking of, for example, the fact that the category of finite-dimensional vector spaces is equivalent to the category of matrices (see Equivalence_(category_theory)). This is very important because it explains, for example, why in linear algebra one essentially doesn't have to study vector spaces so much as matrices. (I also think the cat of finite-dimensional vector spaces is equivalent to that of finite sets.) Also, one may start with a quotient map (i.e., a surjective linear map) instead of quotient spaces and use the universality to show this definition is essentially equivalent to the more usual one. Viewpoints such as the above are abstract but are indispensable if one wants to study vector spaces seriously. On the other hand, I don't think mentioning, as the article currently does, that the category of vector spaces is additive is important, for it is very trivial. It is important to mention the applications of the isomorphism theorems rather than how to prove them.

Next, about annihilators. (This is an important concept and the article has to discuss it.) What I think I was getting at is the possibility of defining a bilinear form (or a sesquilinear one) on a vector space. When studying vector spaces or related stuff in applications, bilinear forms defined on them are often useful and indispensable. An inner product is one example, of course, but it doesn't scale well to infinite-dimensional vector spaces (which may not have a topology, like infinite-dimensional Lie algebras). So, one also uses the natural pairing of V x V^*. (Though this isn't quite a bilinear form.) Anyway, my point is that we need a discussion of bilinear forms (probably a whole section on it). A basis can be chosen according to such a form, and actually that's often what one does; e.g., an orthonormal basis. (I just noticed the article doesn't even mention the dual basis, which is an important concept.)

Finally, on the balance. Yes, the article is fairly lengthy already, but I think we can make a significant cut by eliminating stuff on trivial facts or some linear algebra materials such as determinants. Doing that would likely diminish accessibility (and thus usefulness) of the article for a first-time learner of vector spaces. But that's something we can afford since the focus of the article should be on important topics not trivial ones. —Preceding unsigned comment added by TakuyaMurata (talkcontribs) 14:09, 25 January 2009 (UTC)


I feel the need to repeat (although I'm sure it is well understood) that the category of finite dimensional vector spaces is not equivalent to the category of finite sets. Geometry guy 21:43, 26 January 2009 (UTC)
Certainly not if the morphisms between finite sets are the usual ones: i.e., just functions from the first set to the second. However, one can make sense of Taku's comment: given a field k and a finite set S, one can consider the free k-module k[S] on S, and then if we define Hom(S,T) to be Hom_k(k[S],k[T]), then that category is indeed equivalent to the usual category of k-fdvs. I have heard category theorists describe things this way before, so it is probably what Taku had in mind. Plclark (talk) 07:15, 28 January 2009 (UTC)
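
(In symbols, the construction Plclark describes; the notation is mine, but the construction is standard.) Take as objects the finite sets, and set Hom(S,T) := Hom_k(k[S], k[T]), where k[S] is the vector space with basis S; such a hom-set can be identified with the |T| × |S| matrices over k. Every object is then isomorphic in this category to any set of the same size, so the category is equivalent to the category of matrices over k, and hence to the category of finite-dimensional k-vector spaces, but not to the category of finite sets with ordinary maps.
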
It is unlikely that he meant that. If he did, he should have been more explicit and actually described each category given. But I agree with what you say. --PST 17:20, 28 January 2009 (UTC)
  • I'm puzzled by the following: "An inner product is one example, of course, but it doesn't scale well to infinite-dimensional vector spaces". Surely Taku has heard of Hilbert spaces or, what is more directly relevant, "pre-Hilbert spaces"; the latter is precisely an infinite dimensional space with an inner product. Of course, the inner product then defines the topology, which may or may not be complete. siℓℓy rabbit (talk) 22:33, 26 January 2009 (UTC)
  • Taku, could you please not cut stuff from the article? As a good article, vector space does stay focussed and has an appropriate amount of content. In fact, it is missing content and accessibility (it seems). So rather, you should try adding some stuff to the article rather than deleting. And by the way, determinants are relevant to vector spaces. The determinant of a linear map between vector spaces is 0 iff the map is not an isomorphism. And it is fairly clear that the category of finite sets is not equivalent to the category of finite dimensional vector spaces. Surely, you know what a category and a functor are (if you know both definitions, I can't see how this is non-trivial)? --PST 08:58, 27 January 2009 (UTC)
    Taku: another comment. I don't want to sound rude, but it seems that you are not familiar with the FA guidelines. We must make technical articles accessible (please see the link) and we can't randomly delete content. Jakob is very familiar with the FA policies and I think that it is a safe bet that he knows what he is doing (when adding the section on determinants). Wikipedia is also not solely for reference work. For example, although I see many articles that I have never heard about every day, I also see some important facts in the lower level articles which I never knew about. It is a question about what we want to do with Wikipedia. In my view, Wikipedia should be aimed at everyone. For instance, suppose you did not know something in computer science and you wanted to learn about it? What if you went to the article and got piled with technical info which you didn't understand? Would that do you any good? We all want to learn, and by using technical terms in subjects we don't know about, we wipe out this wonderful opportunity. I hope you feel the same way (for another example, I'm also interested in things apart from mathematics: it does not help me if the articles are written using terms beyond my knowledge. It is also an important skill to be able to write technical articles in an accessible manner. Whenever I see such articles, I am always impressed.). --PST 09:49, 27 January 2009 (UTC)
(<-)I also think we must not remove "trivial" (for you! But this is one of the 500 most visited articles; I'm sure it's not only researchers who want to cheer up their minds with a cozy basic WP article) material. On the contrary, we must give an accessible account (which is challenging). If possible, (and this is the 2nd challenge) we can interweave more advanced material in a way such that the reader hardly ever thinks "eh, this is hard stuff".
Equivalence of categories of vsp and matrices: I have to say I disagree a bit with the point of view that linear algebra is all about matrices. The article does state this point, but does not call it an equivalence of categories. The most space I'd give to this point (in this article) is a footnote.
Universal properties of quotients etc. This is something that is not very specific to vsp. Also it will repel 95% of the readers. I would not write about that.
Bilinear forms: there is a brief mention at the tensor product section, but perhaps we can highlight their importance a bit more. I'm not convinced, though, that we have to mention annihilators here. That should go to bilinear form or, at most, to the see also section. Jakob.scholbach (talk) 16:18, 27 January 2009 (UTC)

Comments by Point-set topologist

General comments

  • The current intro to the article is weighted too much towards physics. While most people learn about physics before vector spaces, the current discussion is centered towards properties of Euclidean vectors; not vector spaces. I think it would be important to note that vectors have a magnitude and a direction (so in some sense, they "induce" a co-ordinate system). Surprisingly, this does not seem to be mentioned.
  • I think it would be good to have a section titled "Vectors in physics" rather than include it in the motivation section. In fact, I think it is unfair to mathematicians to say that physics was the reason for the invention of vector spaces. Vector spaces have so many important applications in mathematics.
  • What about noting something about linear functionals and stating that the integral operator is a linear functional from the vector space of continuous real-valued functions on a compact interval to the vector space R? (A small numerical illustration of this follows below.)

More comments later... --PST 12:10, 27 January 2009 (UTC)
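
(Aside: a minimal numerical illustration, in Python, of the linearity claim in the last bullet above; the Riemann-sum helper is purely illustrative.)

# The integral over [0, 1] as a linear functional on (continuous) functions:
# integral(a*f + b*g) agrees with a*integral(f) + b*integral(g).

def integral(f, n=100000):
    """Riemann-sum approximation of the integral of f over [0, 1]."""
    h = 1.0 / n
    return sum(f(i * h) for i in range(n)) * h

f = lambda x: x * x
g = lambda x: 3 * x + 1
a, b = 2.0, -4.0

lhs = integral(lambda x: a * f(x) + b * g(x))
rhs = a * integral(f) + b * integral(g)
print(abs(lhs - rhs) < 1e-9)  # True, up to floating-point rounding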

I have rewritten the motivation section. As G-guy points out, it's best to work on the lead when everything else is finished. "Vectors in physics": hm, I don't quite know what to write there. I think the current motivation section, just briefly mentioning force and velocity as examples for Euclidean vectors might be sufficient. Integral as a linear functional is already covered (in the distributions section). Jakob.scholbach (talk) 21:44, 28 January 2009 (UTC)


Point-set topologist, I am the author of the edit you recently reverted, which you explained by writing "...although it was done in good faith, the content of the edit did not conform to the article's layout." I do not feel that a bald revert was appropriate, because what I changed was simply not up to snuff. I ask that you restore what I wrote, after which you are welcome to edit my edit so that it reads as you would prefer.

This section of the entry should introduce the reader to how vector spaces relate to other basic algebraic structures. Specifically, a vector space marries an additive abelian group to a field, adding the bridging axioms that characterise modules. It should also mention that a vector space is a variety in the sense of universal algebra, which is perhaps surprising because a field is not. The entry should be consistent with:

http://math.chapman.edu/structuresold/files/Vector_spaces.pdf

Vector spaces are not esoterica; they are arguably the most ubiquitous of algebraic structures having more than one underlying set, and this entry should be written accordingly. I discovered vector spaces via mathematical statistics and economic theory, but freely grant their power in engineering and physics. 132.181.160.42 (talk) 00:03, 14 May 2009 (UTC)
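For reference, the "bridging axioms" alluded to above are presumably the usual compatibility laws linking the abelian group (V, +) to the field F (my paraphrase, not a quotation from the linked notes):

\[
  a(u + v) = au + av, \qquad (a + b)v = av + bv, \qquad (ab)v = a(bv), \qquad 1v = v.
\]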

To begin, please post at the bottom of the page, as this is the convention of Wikipedia. Please see [1]. The link does not serve to argue that my revert was correct; rather it explains that a similar edit has been made in the past. Modules are explained later in the article - the beginning was carefully constructed through hard work, to ensure that it is accessible. To conform with WP:MTAA, it is only appropriate to introduce modules once simple concepts about vector spaces, such as subspace, basis etc., have been introduced. The concept of a module is abstract to most beginners, and modules are an analogue of "vector space over a field" with "field" replaced by "ring". Up to the point where you have edited, the concept of a field has not been formalized, and therefore it is not appropriate to start talking about rings (otherwise, fields should have been formally defined earlier). I fully understand that you edited with good intentions, but Wikipedia is a complex place. Articles such as this one are GAs and usually are of fine quality. Furthermore, your claim that this article should be consistent with the treatment of vector spaces at some university is not justified. It is important to understand that Wikipedia is an encyclopedia, and that it should not resemble lecture notes, textbook content etc. I have contacted User:Jakob.scholbach and am leaving it to him and other Wikipedians to decide whether to keep your edit or not. These are experienced Wikipedians whose decision, I do not doubt, will be justified clearly. On the other hand, as a Wikipedian with little experience, my view is that your edit, although done in good faith, makes the article somewhat redundant. This is not to say that my view is correct. --PST 08:02, 14 May 2009 (UTC)
I fully agree with PST. There are many ways to approach vector spaces, but the main aim of WP is to make the subject accessible to people who don't know it yet. Talking of modules is helpful only to readers who already know modules (and most of them will, I bet, also know vector spaces), therefore the connection is mentioned further down, in the appropriate section. Jakob.scholbach (talk) 08:20, 14 May 2009 (UTC)

Comments from Awickert[edit]

The following text is copied from my talk page Jakob.scholbach (talk) 21:39, 16 February 2009 (UTC)


I saw that you were looking for someone to read it. I don't qualify for the lack of knowledge, but I don't think in mathspeak, so maybe I'll have a crack at the lead and leave you a message here. Awickert (talk) 05:14, 9 February 2009 (UTC)

OK - just from giving it a skim, it seems to be on the dense and rigorous side to me. When I think of a vector space, I think of a set of vectors that define an n-dimensional space that consists of everything those vectors can reach. It's a much more tangible thing to me than a mathematical construct. So maybe something like this (which is rough, and I might be committing mathematical terminology atrocities):
"A vector space is the set of all points in space that is accessible by combining multiples of the vectors that describe it."
And then maybe: "Its dimension is defined by the number of vectors in independent directions that constitute it; for example, three mutually perpendicular vectors define a three-dimensional rectangular coordinate system that is often used to describe the three observed spatial dimensions."
I don't know if the suggested sentences are any good, but I would suggest to ground it to the real world right away in some way and tell a reader what it can mean outside of just the formal mathematical definition.
Awickert (talk) 05:23, 9 February 2009 (UTC)
One other dubiously useful comment that could show you how a non-mathematician may use vector spaces as a construct for thinking: when solving problems with large numbers of unknowns and not enough constraints, I always think of the problem in terms of narrowing down an n-dimensional vector space and trying to get it down to a single point (i.e., use basic physics to constrain this, use this empirical relationship for that, can't constrain this axis but can set boundary conditions, etc. etc.). Awickert (talk) 05:27, 9 February 2009 (UTC)

Could we first focus on the motivation and definition section? I guess the lead will have some overhaul anyway, and it seems best to touch it only at the end... I'd appreciate it if you would comment on the definition section. Thanks, Jakob.scholbach (talk) 21:41, 16 February 2009 (UTC)

OK - I'll get to it when I have a chance. I'm going to be pretty busy for the next week. Awickert (talk) 21:45, 16 February 2009 (UTC)

Differential geometry[edit]

The second half of the application subsection on differential geometry may need a bit of a rethink. Space-time is not (modelled by) a Riemannian manifold, the Einstein curvature tensor is only part of its curvature, differential forms are (usually) sections of the exterior algebra bundle of the cotangent bundle, not elements of a fibre, and they do not so much "generalize" the dx in calculus, as provide a way to interpret it (for instance dy/dx is a ratio of 1-forms on a 1-manifold).

There is also a question of focus. Linear algebra is used extensively in differential geometry (indeed, modulo the inverse function theorem, which is rapidly brushed under the carpet of intuition, the toolkit of differential geometry consists of little more than linear algebra and the product rule), so what should we select? We need to stay on topic here, and the topic is applications of vector spaces. The Lie algebra tangent to a Lie group seems like a good example to me, but curvature and integration may be a bit of a stretch. Any thoughts? Geometry guy 18:30, 1 March 2009 (UTC)

I may have been misguided by false beliefs/priorities in writing the stuff, but my intention was to somehow give an impression of how universal vector spaces are. I wanted to avoid arid statements, instead offering a bouquet of guideposts leading in different directions. (The target audience I wanted to talk to with these bits of text is (advanced) undergrads who may have become bored by pure linear algebra.) It is probably right that the road from vsp to space-time is a bit (too) long, and Riemannian geometry may also be a stretch. However, if the "guideposts" idea is taken seriously, a reader willing to make the connection will have to read the referred topics anyway, so we may, I think, come up with overview-type statements. Also, if "the toolkit of differential geometry consists of little more than linear algebra and the product rule" is right (and I think it is sound), this also means that diff geo is fairly tightly linked to lin alg and vsp, right?
But, as always, if you see a better way, go ahead. In particular, diff. forms should depend on the base point, that's right. I'm not sure how we can convey in one line the thinking of dy/dx as ratio, though. Jakob.scholbach (talk) 20:25, 2 March 2009 (UTC)
I have moved differential forms to the bundle section (and corrected the flaw). Jakob.scholbach (talk) 20:38, 2 March 2009 (UTC)
My intention was to raise questions and contribute to solutions rather than raise criticisms and propose fixes. The widespread usefulness of vector spaces is certainly something to emphasise and I think your priorities are spot on when you say to give specific pointers rather than bland general statements.
Diff. geom. is certainly tightly linked to linear algebra, but linearity and linear maps feature more strongly than vector spaces. One difficulty with the differential geometry applications is that the most interesting vector spaces are spaces of sections of vector bundles (vector fields, differential forms). Would it be feasible/reasonable to reverse the order of the Applications-Generalizations sections? Then more could be said in the differential geometry direction, although it would be tough going even for your advanced undergraduate reader. Geometry guy 22:18, 2 March 2009 (UTC)
Hm, I'm not sure we should reverse the order of the two sections. We could, though, put a word about vector spaces of sections of bundles in the generalization section and refer back to any applications in diff geo. This quickly brings us to sheaves, too. What are the most prominent examples of spaces of sections? (We already have vector fields). Jakob.scholbach (talk) 14:38, 3 March 2009 (UTC)

RFC: Length of 99,022 bytes?[edit]

Length of 99,022 bytes? Is this an article in an encyclopedia, or a single-article encyclopedia of almost anything that is somehow related to the title of the article? prohlep (talk) 17:46, 5 June 2009 (UTC)

Yet another issue: the definition of vector spaces is simply erroneous. As soon as you try to feed it into an automated reasoning system, you will understand why.

This error, which I have used as a teaching point for my students in Hungary since 1978, is a good test of whether you correctly understand what the evaluation of a first-order formula means.

I leave the correction of this error here in Wikipedia as instructive homework for those who think that they have a firm natural-scientific basis for discovering nature.

prohlep (talk) 17:46, 5 June 2009 (UTC)

The length of the article is perfectly OK. Many GA and FA of such a wide topic are this long. What error are you talking about? Jakob.scholbach (talk) 18:26, 5 June 2009 (UTC)

Well, it would not be instructive if I simply told you the error.

Too many natural scientists and mathematicians are ready to jump over gaps as if they were not there. The result is that some of them are ready to identify nature with one of its possible natural-scientific models. This leads to blind belief in scientific models, which has already caused much suffering.

I have just double-checked, and the error is still there. It is hard to detect, since a similar error can be found in every second course book on vector spaces. When other books avoid this error, it is usually by accident: if the TYPE of the vector space is declared in a different way, then this fundamental error simply CANNOT occur.

I have just given the key hint: the error is connected with the type of the universal algebra we want to axiomatize as a vector space.

prohlep (talk) 20:48, 5 June 2009 (UTC)

Are you referring to the inconsistency between treating a vector space as a one-sorted structure containing only the vectors and calling scalar multiplication a binary operation? I don't think that's so bad. It may be possible to improve this, but I am not sure it can be done without compromising on the accessibility of the article. We should most definitely not talk about the usual signature of vector spaces, with one unary homothety for each scalar, or at least not early in the article. It's not even so wrong if you consider that there are many-sorted logical frameworks in which one sort is kept fixed. (Actually the only example I know has a fixed sort containing the natural numbers.) --Hans Adler (talk) 21:51, 5 June 2009 (UTC)
  • There is no problem here, and no need for an RFC. – Quadell (talk) 19:05, 16 June 2009 (UTC)

added reference in "basis and dimension"[edit]

Sorry, I don't know how to reference something in the article, but someone could easily add it.

A reference that {1, x, x^2, x^3, ...} is a basis for the vector space of polynomials can be found in Abstract Algebra with Applications: Volume 1: Vector Spaces and Groups by Spindler, Karlheinz; p. 55, Example 3.14.d.

Setitup (talk) 02:13, 9 November 2009 (UTC)

Isn't it necessary to add that in F(t), the field F has to have an infinite number of elements? Otherwise, I'm not sure that {1, x, x^2, x^3, ...} is a basis for the vector space of polynomials. —Preceding unsigned comment added by Arayamuswiki (talkcontribs) 00:44, 26 January 2010 (UTC)

The article uses the polynomial ring, not the function space of polynomials, so the field does not have to be infinite. JumpDiscont (talk) 07:33, 28 February 2010 (UTC)
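A small illustration of the distinction (my own example, not from the article): over the two-element field the polynomial x^2 + x vanishes at every point, so as a function it coincides with 0, yet as an element of the polynomial ring it is nonzero:

\[
  x^2 + x \equiv 0 \ \text{as a function } \mathbb{F}_2 \to \mathbb{F}_2, \qquad x^2 + x \neq 0 \ \text{in } \mathbb{F}_2[x].
\]

So in F[x] the monomials 1, x, x^2, ... stay linearly independent over any field, while the corresponding polynomial functions need not.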

last axiom unnecessary?[edit]

The "identity element of scalar multiplication" given by 1v = v seems to follow from the other axioms: 1v-1v=0 1(1)v-1v = 0 1(1v-v) = 0 v=1v 220.239.204.210 (talk) 22:46, 15 November 2009 (UTC)

Without that axiom, nothing stops 1v = 0. That invalidates your last step. Ozob (talk) 02:49, 17 November 2009 (UTC)
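A minimal sketch of the standard counterexample (my own illustration): keep the additive structure of any nonzero vector space V, but make scalar multiplication trivial,

\[
  a \cdot v := 0 \quad \text{for all } a \in F, \ v \in V.
\]

Both distributive laws and (ab)v = a(bv) then hold trivially, yet 1v = 0 ≠ v whenever v ≠ 0, so the identity axiom is genuinely independent and the step from 1(1v - v) = 0 to v = 1v is unjustified.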

Is the Second Axiom Necessary?[edit]

The "Commutativity of addition" axiom v + w = w + v seems as though it can be derived from the other vector addition axioms:

Given from the inverse axiom u + (-u) = 0,

Proposition: -u + u = 0

Proof:

-u + (-(-u)) = 0 <-- equation 1; inverse axiom; -u plus the inverse of -u equals the zero vector

LHS:

-u + u

= (-u + u) + 0 identity axiom

= (-u + u) + [-u + (-(-u))] applying equation 1/additive inverse axiom

= [(-u + u) + (-u)] + (-(-u)) associativity axiom

= [-u + (u + (-u))] + (-(-u)) associativity axiom

= [-u + 0] + (-(-u)) inverse axiom

= -u + (-(-u)) identity axiom

= 0 inverse axiom

RHS:

0

Therefore, LHS = RHS.

Q.E.D.

--LordofPens (talk) 08:05, 25 January 2010 (UTC)

Commutativity of addition does indeed follow from the other axioms. The article used to mention this. --Zundark (talk) 08:55, 25 January 2010 (UTC)
I was mistaken. That proof does not prove commutativity in general; it only proves commutativity of a vector with its additive inverse. However, commutativity in general, u + v = v + u, follows from the proof above plus the proof that 0 + u = u and the use of most of the axioms. --LordofPens (talk) 04:35, 26 January 2010 (UTC)
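For reference, a sketch of one standard argument for the general case (my own writeup; it uses the identity axiom 1v = v, both distributive laws, associativity and additive inverses): expand (1+1)(u+v) in two ways,

\begin{align*}
  (1+1)(u+v) &= 1(u+v) + 1(u+v) = u + v + u + v,\\
  (1+1)(u+v) &= (1+1)u + (1+1)v = u + u + v + v,
\end{align*}

so u + v + u + v = u + u + v + v; adding -u on the left and -v on the right of both sides gives v + u = u + v.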

Convex analysis and universal algebra[edit]

Maybe I'm displaying my ignorance of universal algebra here, but I don't see how one recently added paragraph can be right. It reads:

In the language of universal algebra, a vector space is an algebra over the universal vector space K^\infty of finite sequences of coefficients, corresponding to finite sums of vectors, while an affine space is an algebra over the universal affine hyperplane in here (of finite sequences summing to 1), a cone is an algebra over the universal orthant, and a convex set is an algebra over the universal simplex. This geometrizes the axioms in terms of "sums with (possible) restrictions on the coordinates".

In order to specify a vector space, you need to specify an underlying field. But you can't specify a field in the language of universal algebra because you can't express the existence of multiplicative inverses as an operation. So you have no way of specifying your coefficients. Furthermore, if you could specify your coefficients, then I don't see why one needs to say that vector spaces are an algebra over K.

I have the feeling that you're doing something different, but I'm not sure what that is. What do you mean by, "an algebra over"? Do you mean that a vector space is an algebra of the same type and that it has a homomorphism from K? If not, then that's what I'm confused about. Ozob (talk) 13:18, 26 February 2010 (UTC)

I also don't understand it. Also I think this article should devote one or two short phrases at most to universal algebra. I guess at least half of the recently added material should be moved to universal algebra or convex analysis. Jakob.scholbach (talk) 22:38, 26 February 2010 (UTC)
I'm starting to get some idea what's going on. Grätzer's and Cohn's books don't list vector spaces in the table of contents or in the index, but searching in them for "vector space" in Google Books gets one a little somewhere. I think it's like this: One has a binary operation, vector addition, and if K is the field, then, for each x in K, one has a unary operation m_x corresponding to multiplication by x. One imposes the group conditions on addition, imposes m_{x+y}(v) = m_x(v) + m_y(v) for all x, y, and v (i.e., addition of operations is defined via distributivity and addition in the field), imposes associativity and commutativity of addition, imposes the relation m_{xy}(v) = m_x(m_y(v)) (i.e., multiplication of operations is defined by associativity and the field multiplication), and imposes commutativity on scalar multiplication. (I may be forgetting some relations, and I may have screwed some of these up.) Once this is done, an object with this type will be a vector space over K. It feels terribly messy, but in a way it's not: Often times when discussing vector spaces, we leave the field in the background. In this setup, the field really is in the background, since it's part of the type, not the algebra.
I still don't see where K enters the picture, though. Ozob (talk) 03:44, 27 February 2010 (UTC)
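For what it's worth, here is a sketch of the signature being described (my reconstruction; Grätzer and Cohn may set things up differently). A vector space over K is a one-sorted algebra with a binary operation +, a constant 0, a unary operation -, and one unary operation m_\lambda for each \lambda in K, subject to the abelian-group identities for + together with

\begin{align*}
  m_\lambda(u + v) &= m_\lambda(u) + m_\lambda(v),\\
  m_{\lambda+\mu}(v) &= m_\lambda(v) + m_\mu(v),\\
  m_{\lambda\mu}(v) &= m_\lambda(m_\mu(v)),\\
  m_1(v) &= v.
\end{align*}

On this reading K enters only through the indexing of the unary operations and through the identities, i.e. through the type, rather than as a second sort.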

Metric Spaces Explicit Reference[edit]

I was looking for definitions of vector and metric spaces, and found that vector spaces are a class of metric spaces. If this is correct, shouldn't this be mentioned in the first lines of the article, with the corresponding reference, for non-specialists?

Thanks —Preceding unsigned comment added by Ercolino (talkcontribs) 15:26, 29 May 2010 (UTC)

It's not correct. Maybe what you were reading about were normed vector spaces. --Zundark (talk) 15:55, 29 May 2010 (UTC)

Small bug?[edit]

Hello: (I'm a novice Wikipedian.) I believe I've found a small bug in the matrices section:

x = (x_1, x_2, ..., x_n) \mapsto \left( \sum_{j=1}^{n} a_{1j} x_j, \ldots, \sum_{j=1}^{n} a_{mj} x_j \right)

Shouldn't the xn term be xm?

x = (x_1, x_2, ..., x_m)...

as there are m rows in the matrix.

Thanks,

Myrikhan (talk) 15:30, 11 September 2010 (UTC)Myrikhan

The article looks right:
Any m-by-n matrix A gives rise to a linear map from F^n to F^m,
so it's a map from F^n, hence the coordinates of the source should be (x_1, x_2, ..., x_n). Also note that in x \mapsto Ax the matrix is on the left-hand side, so the number of columns agrees with the dimension of x.--Salix (talk): 16:36, 11 September 2010 (UTC)
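A minimal worked instance (my own illustration, not taken from the article): with m = 2 and n = 3, a matrix A over F gives the map

\[
  (x_1, x_2, x_3) \mapsto
  \begin{pmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23} \end{pmatrix}
  \begin{pmatrix} x_1\\ x_2\\ x_3 \end{pmatrix}
  =
  \begin{pmatrix} a_{11}x_1 + a_{12}x_2 + a_{13}x_3\\ a_{21}x_1 + a_{22}x_2 + a_{23}x_3 \end{pmatrix},
\]

so the source vector has n = 3 coordinates (one per column of A) and the image has m = 2 (one per row).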

The topic of this article is "Vector space". Yet, in the first paragraph alone, there are three separate subjects, only one of which is a direct reference to the subject of the article. The article gets back on topic at the beginning of the second paragraph, only to go off-topic again.

Does everyone realize that it is not those "who already know" the subject who come to these articles to become better educated? Would those who contribute to articles such as this one PLEASE write with those people in mind, rather than trying to include all the related but secondary information. Please stop leaving out the obvious. It is information that is not obvious to those of us who read these articles for new information.

These articles are not for you to show us how much you know. They are to provide accurate and understandable information to the uninformed who are seeking to be informed.

Richard (talk) 02:16, 26 December 2010 (UTC)

First vector wikilink (in lead)[edit]

I find it questionable that this GA defines vector in its lead by linking to a dab. After all, this article is the page where vector is formally defined... If that link to the dab is supposed to give some more intuitive notion, it surely fails to do so. Tijfo098 (talk) 11:01, 19 March 2011 (UTC)

Mixed Up[edit]

The definition of a vector space contains the text: "In the list below, let u, v, w be arbitrary vectors in V, and a, b be scalars in F." However, in the list of axioms the vectors u and w and the scalars a and b do not appear at all. s and n appear to represent scalars; however, in the "Respect of scalar multiplication over field's multiplication" and "Identity element of scalar multiplication" axioms, s is a vector. — Preceding unsigned comment added by 69.30.62.114 (talk) 14:34, 17 October 2011 (UTC)

Direct explicit contradiction[edit]

First sentence in the lead: "A vector space is a mathematical structure formed by a collection of vectors: objects that may be added together and multiplied ("scaled") by numbers, called scalars in this context." First sentence in the Definition: "A vector space over a field F is a set V together with two binary operators that satisfy the eight axioms listed below." BUT in the section Algebras over fields, the first sentence reads: "General vector spaces do not possess a multiplication operation." If this isn't a direct contradiction, I don't know what is. Assuming these are not actually contradictory, there is some context that needs to be added, some qualification made, for this to be intelligible to us mere mortals. Could it be that the multiplication operation that they do not possess is vector-by-vector multiplication? I suspect so. Could someone modify this so it makes sense? I'm unqualified. Thanks! 71.31.147.72 (talk) 17:43, 23 November 2011 (UTC)

There is no contradiction here. The two binary operations the definition talks about are vector addition and multiplication of a vector by a scalar. This is explained in some detail immediately after the definition. The statement in the algebra section refers to the non-existence of a multiplication of two vectors in a general vector space. Jakob.scholbach (talk) 07:45, 24 November 2011 (UTC)

Excellent article[edit]

This article is an excellent article and is a benchmark for all other articles to follow suit. PsiEpsilon (talk) 12:13, 1 January 2012 (UTC)

Picture in lead[edit]

I think the picture in the lead showing addition and scaling of vectors is nice but grossly out of place. It should be moved lower, or replaced/removed. Rschwieb (talk) 19:23, 9 April 2012 (UTC)

It would be nice if you could propose an alternative image. My impression is that the current one is the least worst, based on earlier revisions. Sławomir Biały (talk) 12:40, 14 April 2012 (UTC)
It's an excellent diagram for visualizing vector operations, but for someone who is learning what a vector is, springing vector operations on them at the start is not helpful. A single diagram with two or three labeled vectors captioned "two/three vectors" would be a good replacement. We'd like to depict that vectors are like arrows with a variety of lengths and directions. Two of the vectors could have different sources, two could have different lengths, and two could point in different directions. Rschwieb (talk) 13:05, 14 April 2012 (UTC)
That seems reasonable overall. (Although perhaps not having different sources?) With a suitable caption, I could see that as being more informative to a casual reader than the current image. This was in fact a big issue at the FAC. Sławomir Biały (talk) 12:19, 18 April 2012 (UTC)
I just didn't want to fool readers into thinking all vectors begin at the origin, so that is what I meant by the source comment. It is not a huge deal though, if the new picture has this minor problem. I have no skill to enact any of the changes to the diagram... any good recommendations? Rschwieb (talk) 13:20, 18 April 2012 (UTC)

Complex numbers example[edit]

I believe that this section recently added to the article is misleading. Instead of shedding light on the definition of a vector space, it actually obscures that definition. In general, it makes no sense to multiply two vectors. However, this supposed example not only introduces the notion of "multiplication" of two vectors if those vectors happen to be complex numbers, but it alludes to multiplication being well-defined in a general vector space (in general only the operations of addition and homothety are available in a vector space though). The actual (real) vector space structure of the complex numbers is identical to that in the second example, so this adds nothing helpful to elucidate the definition, and only obscures it by introducing structure that is not part of that definition. Sławomir Biały (talk) 13:53, 22 April 2012 (UTC)

(edit conflict)I fully agree with Slawomir. The IP's repeated argument seems to be that the example of the vector space of complex numbers sheds light on the characteristics of a vector space because it includes a different notion of vector multiplication(!) than what is traditionally defined as a scalar product. However, the IP seems to miss the point that a scalar product is not the same as scalar multiplication; the former is a multiplication between vectors, while the latter is a multiplication between a scalar (from a field F) and a vector (from the vector space over F). Nageh (talk) 14:01, 22 April 2012 (UTC)

I am the guy with IP 177.41.12.4. I am an occasional contributor. I think editors should be tolerant of what people from other backgrounds think and see when they take the time to contribute. As an occasional contributor, I don't like it when people just remove content from others on the basis of their personal vision/opinion. They should at least create a talk section so that the anonymous guy who made the change can argue afterwards, if he comes back at some point. Please remember that the idea of Wikipedia, as far as I understand, is to have as many people with good technical backgrounds contributing as possible. So the editor should not have the attitude of seeing an anonymous contribution and, just because he doesn't know the person, concluding that the person knows less than him. When I see that text, I see that the guy has a technical background. To the point: yes, you can multiply vectors in a more abstract sense than you understood, Slabwerk, and you can also divide them. See the articles on quaternions. Hamilton's idea when he invented them was precisely to include an operation to _divide_ vectors. And the complex numbers are the door to this kind of thinking, which is different from yours. So if I had to remove an example I would remove the second one, which for me, as a physicist, is very abstract and does not convey any intuition. Maybe a mathematician would disagree, so I would keep it as well. You see the kind of thinking that makes Wikipedia better? Respect for the contributions of others. An editor cannot be so picky. Adieu. — Preceding unsigned comment added by 177.41.12.4 (talk) 14:07, 22 April 2012 (UTC)

No you can't multiply elements of a vector space. That requires additional structure which is not the subject of this article. Sławomir Biały (talk) 14:15, 22 April 2012 (UTC)

To try to explain the issue for the IPs in another way: the complex numbers are extremely important as a model of rotations, etc. They are also a nice example of a vector space. However, the idea of the complex number multiplication as a model of rotation is a fundamentally different issue than the model of the complex numbers as a vector space; in particular, the sentence "Instead of making the product between two vectors as we know it for a common spatial vector, we create the rule ..." is incredibly misleading -- it distracts from the vector space properties of complex numbers to discuss some other (very interesting) properties of complex numbers that come from their being something other than just a vector space. The complex number product is a fascinating, important object, but not as an introductory example of a vector space. Replacing complex multiplication with the multiplication of a complex by a real number would be more appropriate, for example. --Joel B. Lewis (talk) 14:23, 22 April 2012 (UTC)

Except that complex numbers under real homotheties is essentially identical to the ordered pairs example. Sławomir Biały (talk) 14:33, 22 April 2012 (UTC)
I don't see the problem: it's an extremely common problem for students learning about vector spaces for the first time that they don't realize different objects (R^2; complex numbers; degree <=1 polynomials over R with a single variable) are really the same as vector spaces. It seems to me that having the same example twice in different guises could be very helpful for clarifying this issue. --Joel B. Lewis (talk) 14:44, 22 April 2012 (UTC)
Remember, this is in the definition section, not the examples section. There is ample space elsewhere in the article to mention the example of complex numbers (e.g., under field extensions). The examples preceding the definition should be absolutely the minimal set of examples needed for a reader to understand the definition. Too many examples (especially ones that are the same example) will be counterproductive. Sławomir Biały (talk) 14:50, 22 April 2012 (UTC)
Ok, yes, I agree. It would fit much better in the subsequent "Other Examples" section (which should probably also be made much more gentle.) --Joel B. Lewis (talk) 16:20, 22 April 2012 (UTC)

To the IP user: yes, you may define a multiplication between two vectors of a vector space. But as soon as you do that, your vector space becomes a richer structure which is more than a vector space. Depending on the properties of this multiplication, this richer structure may be called a normed vector space or Euclidean vector space if the multiplication is the inner product, or an algebra over a field for the example under discussion. There are more exotic multiplications on vector spaces, like that of a Lie algebra, widely used in physics. It may be relevant to mention all this in this article, but only after the definition of a vector space, and not as a pure example of a vector space. As the example has been introduced, it is not acceptable for an encyclopedia, because it induces, for the non-expert reader, a confusion between vector space and algebra over a field. D.Lazard (talk) 15:52, 22 April 2012 (UTC)
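To spell out the distinction (a summary sketch, not article text): as a real vector space, C carries only addition and scaling by real numbers; the complex product is extra structure that makes C an algebra over R:

\[
  \lambda(a + bi) = \lambda a + (\lambda b)i \quad (\lambda \in \mathbb{R}), \qquad (a + bi)(c + di) = (ac - bd) + (ad + bc)i.
\]

Only the first operation is part of the vector-space structure; the second is not required by the vector-space axioms.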

I agree with Sławomir Biały regarding the IP's edit, but Joel B. Lewis has a point nevertheless. This article (and even the external main article) seems to feature only "abstract" examples of vector spaces, and I don't quite see why we can't provide at least some more concrete ones like R^2, R^3 or the complex numbers.--Kmhkmh (talk) 10:48, 23 April 2012 (UTC)

I think the examples already given should be expanded and made a little more concrete. A paragraph could be added to the coordinate spaces section at the beginning mentioning R^3, and one should be added to the beginning of the field extension section mentioning the complex numbers. Sławomir Biały (talk) 11:44, 23 April 2012 (UTC)

From the edit summaries, the editor at 177.41.12.4 seems to think Slawomir is working alone. Let me express here that I agree with the reasons given by Slawomir for not using this example. The example itself is not bad, it's just misplaced here. Rschwieb (talk) 13:47, 23 April 2012 (UTC)

I completely agree with Slawomir. --Txebixev (talk) 11:45, 27 April 2012 (UTC)

Errors in the definition of basis of a vector space[edit]

1) The section Bases and dimension begins as follows:

"Bases reveal the structure of vector spaces in a concise way. A basis is defined as a (finite or infinite) set B = {vi}i ∈ I of vectors vi indexed by some index set I that spans the whole space, and is minimal with this property."

But a basis has nothing to do with an index set -- and the index set is not used in this definition, either, beyond the vacuous statement that the basis vectors are indexed by it.

This should be deleted.

2) Soon after, this section reads:

" Minimality, on the other hand, is made formal by requiring B to be linearly independent."

But linear independence has nothing at all to do with making formal the statement that a basis is a set of vectors minimal with respect to the property of spanning the vector space!

3) On the other hand, an equivalent definition of basis that deserves equal standing with the minimality condition is that a basis is a set of vectors that is linearly independent and is maximal with respect to this property.

4) A basis should be defined as a set of vectors satisfying either the minimality condition or the maximality condition, which are logically equivalent.Daqu (talk) 04:57, 27 August 2012 (UTC)

Normally, a basis is defined using neither minimality nor maximality (which are vague and would require further definition) but simply by the two requirements that a) the basis spans the whole vector space, and b) the basis elements are linearly independent. In a way, linear independence can indeed be seen as a way to formally implement the minimality requirement.TR 07:41, 27 August 2012 (UTC)
I think there is a good point here in that the definition of a basis does not need the index set. It's understandable that it appears, because in linear algebra we are almost always using an ordered basis which does require indexing. Maybe that distinction could be sorted out, if it isn't already somewhere else in the text. Rschwieb (talk) 13:11, 27 August 2012 (UTC)
Can we just define a basis to be "a subset of a vector space that is linearly independent and spans the whole space". Note a basis is the empty set if and only if the space is zero. -- Taku (talk) 13:29, 27 August 2012 (UTC)
We could, but that would involve getting rid of all the helpful motivating text in that paragraph or two. The index set is there not because it's necessary for the definition (it's neither required nor is its inclusion an "error") but because it introduces the notation used in the following equations. This section has a link to the main article basis which is where things need to be in full detail. --JBL (talk) 13:37, 27 August 2012 (UTC)
I don't think it involves that at all. In fact I just did it. Its inclusion was an error because we are talking about two things as if they were one thing, but that is no longer the case. I retained the existing ordered basis for convenience throughout the rest of the section. Rschwieb (talk) 14:00, 27 August 2012 (UTC)
I prefer TR's version -- the issue of basis ordering is at best a minor technical question, which you've now placed so as to make it look like it's of the utmost importance. (Indeed, there are now two long sentences separating the use of the word "span" from the explanation of what it means!) If you insist on making a comment about it (which still seems totally unnecessary to me), then a footnote seems like a good way to go about it. Actually, if there were a way to write it without mentioning the words "index set" but still allowing the use of the notation, that would be good too. --JBL (talk) 19:21, 27 August 2012 (UTC)
Being seasoned with the concepts, you would naturally find it minor. However, this is exactly the sort of misassumption teachers have and exactly the sort of snag I see consternating undergraduates from time to time. It is hardly "far away": one sentence (15-odd words) is about "unordered bases", and the other "long sentence" you are referring to says "we'll just use ordered basis notation here". Your last comment about "trying to do it without an index set but still allowing the use of the notation" is fantastic and is exactly the idea I had in mind with those edits. Rschwieb (talk) 20:43, 27 August 2012 (UTC)
Really the indexed/ordered versus unordered thing is a side issue I think. In practice, bases almost always have an index set. This is so that expressions like \sum_{k=1}^n a_{i_k}v_{i_k} are meaningful. In summary style, we simply don't have the luxury to worry about such niceties as whether all bases have index sets/orders. Readers wanting a more leisurely definition can visit the main article (which is also wanting attention, by the way). Sławomir Biały (talk) 21:28, 27 August 2012 (UTC)

It is absolutely essential to not introduce side issues when making a definition, or else the definition will be wrong. (There is nothing preventing anyone from referring to some basis elements with subscripts later as needed. But it is important to make it clear that a basis is just a set of vectors, not a set of indexed vectors, and not an ordered set of vectors; those are additional structures that can be added later, but they are irrelevant to the definition of a basis.)

Someone's suggestion above of defining a basis as simply a linearly independent set of vectors that spans the vector space is excellent. That is the most direct and relevant way of formulating the definition. That is definitely a better suggestion than my item 4) at the top of this section.

I also believe that when there are extremely common logically equivalent definitions in mathematics, it is very important to mention them right away; in this case, a basis is a maximal set of vectors that is linearly independent, and a basis is a minimal set of vectors that spans the vector space, are the most natural two definitions that should accompany the original definition, as a theorem claiming that all three definitions are equivalent. Ideally in a form something like: "Let B be a subset of a vector space V. Then the following are equivalent: a) B is a basis; b) B is a maximal spanning set; c) B is a minimal linearly independent set."

Someone commented that "maximal" and "minimal" need clarification since they are vague. Well, maybe they ought to be clarified, but they are in no way vague; they are very basic technical terms in mathematics. A maximal set with property P is one that is not a proper subset of any set with property P. A minimal set with property Q is one that does not contain as a proper subset any set with property Q. That's all there is to it.Daqu (talk) 18:46, 28 August 2012 (UTC)

Now that the most important two-thirds of the ideas of my controversial edit (hamhandedly reverted en masse) are back, and having had a few days to think about it, I think I'm willing to compromise on the "ordered basis" issue with a footnote. It's not ideal, but students are going to encounter it this way in texts most of the time, and we can probably trust that they have more resources than WP to keep the distinction straight. I have come to believe that not saying too much is better, in this case. (As a sidenote, Daqu, you transposed maximal and minimal in your second to last paragraph, and you might want to switch 'em. I saw you wrote it correctly everywhere else, though!) Rschwieb (talk) 16:32, 30 August 2012 (UTC)
If we were writing a textbook, then I would agree that the "best" definition of a basis is as a spanning linearly independent set. However, in a textbook, there is ample time to dwell on things: giving examples, refining notation, etc. In an encyclopedia article, we just want to summarize the main points of various subsidiary articles (such as basis (linear algebra)). What appears in this article should be just the barest sketch of what appears there, including only an accessible overview, as well as any notation that is used in the rest of the article. We cannot dwell on irrelevant things like whether or not bases have index sets, are ordered, etc. Our readers do not expect a textbook treatment of such minutiae here, as they have nothing to do with the subject of this article. The subject of this article is "vector space", not "basis". If you have complaints about basis (linear algebra), then the correct forum for such a discussion is Talk:Basis (linear algebra), not here. Sławomir Biały (talk) 02:24, 1 September 2012 (UTC)
I agree that this is not the place to "dwell on irrelevant things". But it is essential not to even mention irrelevant things, particularly in introductory paragraphs.
There is no problem whatsoever with giving the basis vectors an index set -- best to call it something like J in keeping with tradition. (I was too hasty to say above that the basis vectors should not have subscripts.) But it is essential to say nothing that constrains the index set, since it could be any set, of any cardinality, and it could have nothing whatsoever to do with an "ordered" basis.
A footnote about ordered bases does not belong in introductory paragraphs. Since ordered bases almost always occur when the dimension is finite, it is fine to mention them in a section on finite-dimensional vector spaces -- that comes later. Regardless of the fact that students will encounter ordered bases, order plays no role in the definition of a basis, and it is essential not to give the misimpression that it does. Or the misimpression that a basis can necessarily be indexed by some or all integers. (The usual Hilbert space -- a very important vector space -- having a countably infinite "Hilbert basis" is actually a vector space of uncountable dimension.) Daqu (talk) 14:36, 3 October 2012 (UTC)
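A standard illustration of that last parenthetical remark: in \ell^2 the orthonormal sequence (e_n) is a Hilbert basis, since its closed linear span is the whole space, but it is not a Hamel basis, because for example

\[
  \left(1, \tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{8}, \ldots\right) \in \ell^2
\]

is not a finite linear combination of the e_n; in fact any Hamel basis of \ell^2 is uncountable.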
Since this discussion happened, I've seen that Basis_(linear_algebra) explains everything the way we would like to. I've been persuaded by the argument that vector space is not the correct article to be absolutely correct about bases, and that we should leave it to basis (linear algebra) is the right place. Rschwieb (talk) 16:00, 3 October 2012 (UTC)
If the concept of a basis were a complicated one, I might agree with you. Instead, it is exceedingly simple and central to any understanding of vector spaces. So there is no reason in the world that the mention of bases in this article should not be absolutely correct.Daqu (talk) 11:44, 4 October 2012 (UTC)
The reason given, that you are willing out of existence, is that it is a crucial fact about bases but not a crucial fact about vector spaces. There is no need for this article to duplicate the basis article in such detail. It would be best to say as little as possible here, and direct the reader to the basis article, which is much better suited to clarify the situation. Rschwieb (talk) 12:56, 4 October 2012 (UTC)

Concrete discussion of a few changes[edit]

1) "Spanning the whole space means" is clearer than "The former means". Arguments?

2) "Linear independence means that blah blah" is clearer than "Linear independence means that blah, (irrelevant digression into existence) blah". (Are we worried that when we say "This linear combination is unique" someone is going to interrupt us and ask "does that linear combination even exist?!".)

3) Part of the change I made before (I haven't reintroduced it) which didn't deal with index sets was to explain what we meant by minimality. Right now it says "linear independence = minimality = unique expression". I think it's pretty clear that we have failed to show what we really mean by minimality: dependent vectors can be removed from the generating set without affecting the span. When you can't remove any more things without damaging the span, it's "minimal". I'll be waiting to hear everybody's comments for the next few days on these matters. Rschwieb (talk) 21:00, 27 August 2012 (UTC)

(1) agree. (2) agree. (3) agree to a point. I think we should state plainly, and in as few words as possible, what "minimality" actually means, and then get to the point about linear independence. I've gone ahead and tried to do this. Sławomir Biały (talk) 21:22, 27 August 2012 (UTC)
OK, glad to have the feedback. I'm happy with these changes. Rschwieb (talk) 16:27, 30 August 2012 (UTC)
OK, I also buy the "this is the wrong article to make the distinction" argument. Thanks for discussing it. Rschwieb (talk) 13:36, 4 September 2012 (UTC)

"Coordinatized viewpoint"[edit]

Sentence in section Bases and dimension
Since each vector in a vector space can be expressed uniquely by (1) as a linear combination of the basis vectors, and since the corresponding scalars a_k can be viewed as generalizations of Cartesian coordinates, this point of view is referred to as the coordinatized viewpoint of vector spaces.

I believe that this sentence is irrelevant in a section that should be only a summary of a separate article (Basis (linear algebra), mentioned at the beginning of the section as one of the main articles). Moreover, a search on Google shows that the expression "coordinatized viewpoint" is never used elsewhere on the web, not even in the main Wikipedia article. The only websites which use this expression are plain copies of this article.

I propose to delete this sentence. Alternatively, it might be moved into the main article, provided that a reference is provided.
Paolo.dL (talk) 13:04, 31 August 2012 (UTC)

Surely the "whole point" of introducing bases in the first place is so there is some notion of coordinates. So I think your search methodology might be flawed. In any event, I have (essentially) rewritten the section. It was just too heavy-handed for a main article. It needs to present things in summary style. For precise definitions, the reader should consult the main articles on the respective topics (basis (linear algebra) and dimension of a vector space). If editors find these articles wanting, then surely they should improve them rather than whine that the top-level article vector space is incomplete! Sławomir Biały (talk) 00:14, 1 September 2012 (UTC)
Who is "whining that the top-level article is incomplete"?
This discussion is mainly about the expression "coordinatized viewpoint". As for the "notion of coordinates", my recent edit explains that the scalars a_k are called coordinates of v with respect to B. It might also be useful to mention that the a_k can be viewed as a generalization of the Cartesian coordinates (e.g., x, y, z in R^3). The fact that this is important enough to be mentioned in this summary seems to be just an opinion of yours, but I won't mind if this is mentioned.
On the contrary, the fact that this viewpoint is allegedly called the "coordinatized viewpoint" is irrelevant and not supported by references. Paolo.dL (talk) 12:46, 3 September 2012 (UTC)
" Who is "whining that the top-level article is incomplete"?" The previous thread, which was about this very same section. "The fact that this is important enough to be mentioned in this summary seems to be just an opinion of yours": Well, it's basically a centerpiece of any first course in linear algebra. See any linear algebra textbook. Sławomir Biały (talk) 13:25, 3 September 2012 (UTC)
OK, thanks for deleting the above mentioned sentence. See if you like my edit. Paolo.dL (talk) 11:27, 4 September 2012 (UTC)

Sense of a vector[edit]

Hello, I see that multiplying a vector by -1 yields a vector pointing in "the opposite direction". Shouldn't it be the "opposite sense"? In fact, shouldn't we have a sense (vector) article redirecting there please? We need it at eigenvector. Thanks. 219.78.115.252 (talk) 14:13, 14 October 2012 (UTC)

While I understand that some people use "sense" this way, "direction" seems to be much more common, and I have typically only heard people use "direction" in practice. I don't see the need for an entire article for a synonym of direction (geometry). If you've got good ideas for additions, it seems like they'd be most useful at Direction (geometry). Rschwieb (talk) 12:58, 15 October 2012 (UTC)

Convex analysis section[edit]

Although this should be mentioned, there are articles real coordinate space and affine space which ought to consider this in such detail. As a whole section, it is off-topic here because convexity requires an ordered field (or, with some adjustment, a valued field). “Vector space” ≠ “vector space over ℝ”. Incnis Mrsi (talk) 11:52, 23 April 2013 (UTC)

I'll remove this section, which is unsourced and jeopardizes the GA status. Kiefer.Wolfowitz 16:17, 1 May 2013 (UTC)
It was not a very good section. Should convex analysis be added to the See Also links? --JBL (talk) 19:22, 1 May 2013 (UTC)

Diagrams[edit]

Here are two images which show the main idea of basis vectors - namely taking linear combinations of them to obtain new vectors and that a vector can be represented in more than one basis.

A linear combination of one basis set of vectors (purple) obtains new vectors (yellow). If they are linearly independent, these form a new basis set. The linear combinations relating the first set to the other extend to a linear transformation, called the change of basis.
A vector (red arrow) can be represented in two different bases (purple and yellow arrows).


Feel free to take/leave, edit the captions, request labeling (which I left out to reduce clutter and to see what it would be like without), move them to another article, complain, etc. Best, M∧Ŝc2ħεИτlk 17:20, 25 April 2013 (UTC)

I prefer the traditional parallelepiped design. Maschen’s drawings suggest a Riemannian curvature-style non-commutativity of translations, something that defeats a proper cognition of the concept of basis. Incnis Mrsi (talk) 10:13, 26 April 2013 (UTC)
Parallelepipeds are understandable but I thought at first the arrows would be simple and enough. I'll fix later. M∧Ŝc2ħεИτlk 16:41, 26 April 2013 (UTC)
Done. Better? M∧Ŝc2ħεИτlk 22:49, 26 April 2013 (UTC)
No: currently, parallelepipeds are ugly. I would recommend to draw all edges (not only those 3 which have to bear arrows), and to fill sides with transparency. The latter is quite cheap, but moderately effective approach, which is IMHO underestimated by the majority of polyhedron drawers. Incnis Mrsi (talk) 05:30, 27 April 2013 (UTC)
Orientation of a 3-volume on its boundary.
Right; draw all edges and make the sides transparent, something like this (minus all the circulations) --->
I'll get to that soon. M∧Ŝc2ħεИτlk 06:52, 27 April 2013 (UTC)
Done. Better? M∧Ŝc2ħεИτlk 09:27, 27 April 2013 (UTC)
Nice, but I’d suggest a more reasonable use of colours. Vectors and adjacent sides in exactly the same colour is definitely not a good design. Incnis Mrsi (talk) 09:35, 27 April 2013 (UTC)
BTW, what do the orange crooked arrows in File:3d_basis_addition.svg mean? I do not understand the idea behind this series of four pictures. Incnis Mrsi (talk) 09:35, 27 April 2013 (UTC)
I recoloured. The point of having green arrows in a green space for one basis, and similarly blue for another basis, was to show that the bases span the space, with the colour correspondence between the bases and the space. Also, the cyclic yellow/orange arrows (now red) are supposed to show that new bases can be found from old. The yellow basis can obtain the purple one, and vice versa. M∧Ŝc2ħεИτlk 12:44, 27 April 2013 (UTC)
All right, except the name of File:3d_basis_addition.svg. Does it depict two bases and their mutual presentations? Then the name should be changed, and IMHO the corresponding mutually inverse matrices should be presented at the image description page explicitly. A good job, anyway. Incnis Mrsi (talk) 12:54, 27 April 2013 (UTC)
Sorry, I didn't realize your new reply... How about moving File:3d_basis_addition.svg to File:3d basis linear combinations.svg, File:3d basis transformation.svg, or File:3d basis change.svg? I can't think of anything better, that's the whole point of the diagram... Feel free to move it. Thanks for the continuous feedback. M∧Ŝc2ħεИτlk 13:09, 27 April 2013 (UTC)
It is apparently solved by now. Incnis Mrsi (talk) 08:29, 4 May 2013 (UTC)

I'd drop the intensity of the colours of the sides a bit so the vectors are more dominant.--Salix (talk): 13:25, 27 April 2013 (UTC)

OK. Since a lot more feedback is continuously appearing than I expected, here and elsewhere, I'll wait another day for more opinions in case there are any, then do all the proposed changes at once. Thanks in advance. M∧Ŝc2ħεИτlk 13:40, 27 April 2013 (UTC)
3d basis transformation

Since File:3d_basis_addition.svg has changed plenty of times and the name could be better, once and for all I have uploaded a replacement File:3d basis transformation.svg: which should be clear and correct satisfying all concerns. M∧Ŝc2ħεИτlk 08:16, 4 May 2013 (UTC)

Yet one more quibble: it is fairly consistent that the parallelepipeds of the purple basis are purple. Thereby the parallelepipeds of the red basis have to be red, nothing else. Incnis Mrsi (talk) 08:29, 4 May 2013 (UTC)
The parallelepipeds are a light blue, the bases are red and purple (deliberately dark colours so they stand out). M∧Ŝc2ħεИτlk 08:43, 4 May 2013 (UTC)


A linear combination of one basis set of vectors (purple) obtains new vectors (red). If they are linearly independent, these form a new basis set. The linear combinations relating the first set to the other extend to a linear transformation, called the change of basis.
A vector can be represented in two different bases (purple and red arrows).
Here are the purple/red recoloured versions. (The "2" in the names are because I didn't want to overwrite the other versions - if these newest ones are preferred then the others can just be deleted since they will not be of any use). M∧Ŝc2ħεИτlk 09:10, 4 May 2013 (UTC)

Adjusting Hermann Grassmanns time frame.[edit]

In this article Grassmann is given a side swipe, so to speak. This is in fact how it went historically, but in actuality the Grassmanns made an influential contribution dating from around 1827 to 1844. Peano's work is wholly influenced and inspired by Hermann Grassmann's Ausdehnungslehre of 1844, which in turn drew heavily on Justus Grassmann's insights of 1827. The 1862 version of the Ausdehnungslehre was a redaction overseen by Robert Grassmann in unwilling cooperation with Hermann, after which Hermann's earlier work became accessible to a wider local audience! Peano seemed to get it in the 1840s! This is the section I am referring to:

History

Vector spaces stem from affine geometry, via the introduction of coordinates in the plane or three-dimensional space. Around 1636, Descartes and Fermat founded analytic geometry by equating solutions to an equation of two variables with points on a plane curve.[1] To achieve geometric solutions without using coordinates, Bolzano introduced, in 1804, certain operations on points, lines and planes, which are predecessors of vectors.[2] This work was made use of in the conception of barycentric coordinates by Möbius in 1827.[3] The foundation of the definition of vectors was Bellavitis' notion of the bipoint, an oriented segment one of whose ends is the origin and the other one a target. Vectors were reconsidered with the presentation of complex numbers by Argand and Hamilton and the inception of quaternions and biquaternions by the latter.[4] They are elements in R2, R4, and R8; treating them using linear combinations goes back to Laguerre in 1867, who also defined systems of linear equations.

In 1857, Cayley introduced the matrix notation which allows for a harmonization and simplification of linear maps. Around the same time, Grassmann studied the barycentric calculus initiated by Möbius. He envisaged sets of abstract objects endowed with operations.[5] In his work, the concepts of linear independence and dimension, as well as scalar products are present. Actually Grassmann's 1844 work exceeds the framework of vector spaces, since his considering multiplication, too, led him to what are today called algebras. Peano was the first to give the modern definition of vector spaces and linear maps in 1888.[6]

An important development of vector spaces is due to the construction of function spaces by Lebesgue. This was later formalized by Banach and Hilbert, around 1920.[7] At that time, algebra and the new field of functional analysis began to interact, notably with key concepts such as spaces of p-integrable functions and Hilbert spaces.[8] Vector spaces, including infinite-dimensional ones, then became a firmly established notion, and many mathematical branches started making use of this concept.

Jehovajah (talk) 08:41, 14 June 2013 (UTC)

The story here is very different from that in Euclidean vector#History. A good reference I've found is
Michael J. Crowe, A History of Vector Analysis; see also his lecture notes on the subject.
--Salix (talk): 09:14, 14 June 2013 (UTC)
References
  1. ^ Bourbaki 1969, ch. "Algèbre linéaire et algèbre multilinéaire", pp. 78–91
  2. ^ Bolzano 1804
  3. ^ Möbius 1827
  4. ^ Hamilton 1853
  5. ^ Grassmann 2000
  6. ^ Peano 1888, ch. IX
  7. ^ Banach 1922
  8. ^ Dorier 1995, Moore 1995

Definition[edit]

Is it worth mentioning that more general definitions exist? From Abstract Algebra by Pierre Antoine Grillet (GTM 242):

A vector space is a unital module over a division ring

For one thing, this allows for vector spaces over the non-commutative field of quaternions, which I have seen being referred to as true vector spaces elsewhere. YohanN7 (talk) 10:32, 25 March 2014 (UTC)

During my ongoing work on noncommutative geometry I frequently discuss right and left modules. But I have never heard of right and left vector spaces; I know only (ambilateral) vector spaces. It is hardly possible to include every minority opinion in an article. Incnis Mrsi (talk) 12:33, 25 March 2014 (UTC)
It's not a matter of opinion. It is a matter of what can be found (as definitions) in reliable (and as in this case, reputable) sources. While you mention it, left and right vector spaces aren't unheard of either, just like left or right modules. YohanN7 (talk) 12:59, 25 March 2014 (UTC)
And Hungerford's Algebra (GTM 73) gives the same definition. That exhausts my supply of pure math algebra texts. The definition is used in Lie Groups – An Introduction Through Linear Groups by Wulf Rossmann. It can't be all that uncommon or all that fringe. YohanN7 (talk) 14:22, 25 March 2014 (UTC)
You can add to the list Algebra by M. Isaacs (see p. 185), First course in noncommutative rings by T.Y. Lam (see p. 4), both of Jacobson's abstract algebra books on multiple pages, as well as two less-well-known works by respected authors: Geometric algebra by E. Artin and Linear algebra and geometry by Kaplansky. It is definitely a well-established usage in abstract algebra. That said, there are mountains of linear algebra books which only use fields. This is justified since talking about bilinear forms does not go well with division rings, and such books are bound to cover bilinear forms.
I'm not entirely convinced that the main definition of "vector space" should use division ring. I definitely think this expanded usage deserves mention, but staying with "field" would better reflect the bulk of the literature, and would better satisfy the needs of readers who aren't very far along in mathematics. Rschwieb (talk) 13:05, 26 March 2014 (UTC)
IMO, this terminology question has to be related to the use of "division ring" vs. "skew field" or "non-commutative field". I guess (I have not verified in the cited books) that most of the authors who talk of "vector spaces" in the non-commutative case use "skew field" or "non-commutative field". In other words, the phrase "vector space over a division ring" seems much less common than "vector space over a skew (or non-commutative) field". Again, this would deserve to be checked. This being said, as we do not use "field" in the non-commutative case, I agree with the conclusion of Rschwieb. D.Lazard (talk) 14:53, 26 March 2014 (UTC)
This collective conclusion would be very nice if included in the article. Or perhaps in an article devoted to the topic (it seems like a particularly notable type of module), or failing that, in a section of Module (mathematics). —Quondum 15:14, 26 March 2014 (UTC)

Is it okay then if I add a sentence (at most two) mentioning this generalization? I'll mention it in terms of non-commutative fields and make division rings a parenthetical remark. Quaternions will be mentioned as an example. Quondum's remark came while I was previewing. The remarks would go into every affected article (e.g. Field (mathematics) should mention the terminology of non-commutative fields). YohanN7 (talk) 15:59, 26 March 2014 (UTC)

Perhaps a short section titled "Vector spaces over skew fields" (or non-commutative fields, or division rings) lower down in the article? --JBL (talk) 16:29, 26 March 2014 (UTC)
An entire section seems like overkill. Why not just a note with refs at the end of the definition section? Rschwieb (talk) 16:32, 26 March 2014 (UTC)
We must not forget that a large part of the article generalizes verbatim to the non-commutative case, namely everything about bases, dimension, linear maps, matrices (in fact, sections 4, 5.1, first half of 5.2, 6.1 and 6.2), and also Gaussian elimination and row echelon form. The properties that do not generalize are the bilinear and multi-linear ones (determinant, tensor product, ...). The reader must also be informed of that. D.Lazard (talk) 17:05, 26 March 2014 (UTC)
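To make the point about bilinear forms concrete, here is the standard obstruction, sketched in my own notation: suppose B : V × V → D is bilinear over a division ring D, in the sense that B(ax, y) = aB(x, y) and B(x, by) = bB(x, y). Then for any a, b ∈ D,
    B(ax, by) = aB(x, by) = ab B(x, y)   and also   B(ax, by) = bB(ax, y) = ba B(x, y),
so ab B(x, y) = ba B(x, y). If B(x, y) ≠ 0 it is invertible in D, which forces ab = ba; hence a nonzero form of this kind exists only when D is commutative. This is why the bilinear and multilinear material does not carry over, and why over skew fields one works with sesquilinear forms instead.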
Good point. This is a GA article, and shouldn't be messed around with too much. I vote for a couple of sentences at the bottom of the "Definition" section. Some authors... YohanN7 (talk) 17:20, 26 March 2014 (UTC)
And now I really see your point, which should mean a sub-section! YohanN7 (talk) 17:27, 26 March 2014 (UTC)
I would like to point out that in projective geometry it is almost mandatory to define vector spaces over skewfields (and yes, we do talk about right and left vector spaces). Desarguesian planes are those that are defined in the usual way from vector spaces over skewfields, while Pappian planes come from vector spaces over fields. Skewfield does seem to be the preferred term (over division ring and non-commutative field), but I have seen the other terms used by geometers from time to time. Bill Cherowitzo (talk) 04:43, 27 March 2014 (UTC)
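For readers following along, a short sketch of the left/right distinction mentioned above (my own notation): in a right vector space the scalars act on the right, with (x·a)·b = x·(ab). If one tries to turn such a space into a left space by defining a*x := x·a, then
    a*(b*x) = (x·b)·a = x·(ba),
whereas a left action would require x·(ab); so what one obtains is a left vector space over the opposite skewfield D^op rather than over D. Over a (commutative) field the two notions coincide, which is why the distinction only surfaces in the noncommutative setting, e.g. for Desarguesian but non-Pappian planes.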
Agreed on most points, especially the role of division rings in non-Desarguesian geometry (which I've had the great pleasure of learning about this past year). However, in my experience "skew field" is not preferred over division ring, and ngrams seems to corroborate that. Rschwieb (talk) 17:03, 27 March 2014 (UTC)
@D.Lazard: I really like the approach you described, and I'm not categorically opposed to a section. I just didn't want a section devoted to saying "some places commutativity is dropped" and nothing else :) Rschwieb (talk) 17:37, 27 March 2014 (UTC)
While I don't think it is particularly crucial, I should have said that skewfield was preferred by geometers. One thing that the n-gram viewer doesn't provide is "who is saying what". I would hazard a guess that the spike in that graph in the late 1940s and early 1950s is due almost entirely to geometers (it was a very active time period for this type of geometry). Of course the books/articles that were written then are now "classics" and the terminology lives on in them, but the field is fairly well mined and you don't see as much new work coming from those who use this language. Bill Cherowitzo (talk) 18:01, 27 March 2014 (UTC)