# Talk:Basis (linear algebra)

WikiProject Mathematics (Rated C-class, Mid-importance)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Field: Algebra

## Proving that a finite set is a basis

This section confuses me... It says:

>To prove that a finite set B is a basis for a finite-dimensional vector space V, it is necessary to show that the number of elements in B equals the dimension of V, and both of the following:
>
> * B is linearly independent,
> * span(B) = V.
>
>It should be noted that this technique does not work for infinite-dimensional vector spaces.

This last sentence is not clear to me. The two conditions above seem to be ENOUGH to prove that ANY set B (be it finite or infinite) is a basis of V. Being linearly independent just means that if we take any finite subset B_0 of B we must have only trivial linear combinations leading to zero. span(B)= V just means that any vector v belonging to V must be a linear combination of a (finite) number of elements of B. These two definitions are just the two conditions mentioned above and they must work for infinite-dimensional vector spaces as well! Someone please tell me if I'm missing something here. Arestes (talk) 10:31, 7 December 2010 (UTC)
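For the finite-dimensional case discussed here, the two conditions are easy to check numerically; a minimal sketch (illustrative only, using numpy's rank computation, not anything from the article):

```python
import numpy as np

def is_basis(vectors, dim):
    """Check whether a finite list of vectors is a basis of R^dim.

    The list is a basis iff it has exactly dim vectors of length dim
    and the matrix formed from them has full rank; for dim vectors in
    R^dim, linear independence and spanning are then equivalent.
    """
    A = np.array(vectors, dtype=float)
    if A.shape != (dim, dim):  # wrong count or wrong vector length
        return False
    return np.linalg.matrix_rank(A) == dim

# The standard basis of R^2 passes; a linearly dependent pair fails.
print(is_basis([(1, 0), (0, 1)], 2))   # True
print(is_basis([(1, 0), (2, 0)], 2))   # False
```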

Ok, since nobody seems to reply to this, and I'm convinced that this section is just WRONG (not only confusing), I'm deleting it.

## Hamel bases

The page Hamel basis redirects here, and I'm not sure that is appropriate. If it is appropriate, important information about Hamel bases is missing from this article. Hamel bases are discussed most frequently as bases for the real numbers considered as a vector space over the rationals, and the article here omits that important idea. I'm going to remove the redirection and add "Hamel basis" to the list of requested mathematics article, and possibly supply a new "Hamel basis" article myself when I can. -- Dominus 18:46, 20 May 2004 (UTC)

You have a point. But in the scheme of things I'd say the point made at length about orthonormal bases not being vector space bases is a more central topic.

A dedicated article about Hamel bases for R over Q would be fine, IMO. There is enough to say.

By the way, the introduction of the idea of basis by the four conditions that can be proved equivalent is a bad old Bourbakiste trick, unsuitable for WP. Basis of a vector space is something very fundamental in algebra. It needs a more gentle and readable introduction. Charles Matthews 12:40, 6 Mar 2005 (UTC)

I'm curious about the following statement: Every Hamel basis of this space is much bigger than this merely countably infinite set of functions. Hamel bases of spaces of this kind are of little if any interest. I have the following problem in mind: let V be the space of infinite sequences of reals with finitely many nonzero components. The standard Hamel basis of this space is countably infinite. To prove that the dual of V, V*, is not isomorphic to V, one might show that any Hamel basis of V* is uncountable, no? 66.235.51.96 03:51, 12 November 2005 (UTC)

## Ordered Bases

More often than not, bases are ordered. None of the examples uses set notation for the basis; all of them present an ordered basis. If nobody vetoes, I will change the definition to "family of vectors". As any set X is naturally a family (x)_{x∈X}, the current definition is included in the new one. Markus Schmaus 21:24, 14 Jun 2005 (UTC)

Of course, elements in a set may not be ordered, and since a basis is a set, the basis may not be ordered. So I am not sure why we have to say a basis is a family of vectors rather than a set. Probably putting some examples of non-ordered bases in the article would be helpful for both us and readers. -- Taku 22:11, Jun 14, 2005 (UTC)
Is ( (1,0), (1,0), (0,1) ) a basis of R2? Why? Markus Schmaus 15:09, 15 Jun 2005 (UTC)
No, a basis is a set and ( (1,0), (1,0), (0,1) ) is, at least in the notation that I use, not a set. I don't see where in the examples in the article the ordering is used. Do you have any references which define the basis as a family? All references I checked define it as a set. -- Jitse Niesen 16:14, 15 Jun 2005 (UTC)
I am not sure what you mean by ( (1,0), (1,0), (0,1) ). Is it like, let a be the pair (1, 0) and b the pair (0, 1); then { (a, a, b) } forms a basis of R2? Isn't it rather that a and b form such a basis? -- Taku 21:12, Jun 15, 2005 (UTC)
By the way, strictly speaking $F^n$ does not have a frame, only a basis (since it is the n-fold product of $F$ with no order at all). So, strictly speaking, giving a frame on a vector space $V$ still requires giving an order on the canonical basis of $F^n$ so as to have an isomorphism defined :) --79.150.25.26 (talk) 16:32, 15 April 2011 (UTC)
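The family-versus-set distinction debated above can be mirrored in code, for what it's worth: a Python tuple behaves like a family (order and repetitions matter), while a set collapses repeats. A small illustrative sketch (not from the article):

```python
import numpy as np

# As a family (ordered, repeats allowed), these three vectors are
# linearly dependent, since (1,0) appears twice.
family = ((1, 0), (1, 0), (0, 1))
rank = np.linalg.matrix_rank(np.array(family, dtype=float))
print(rank < len(family))  # True: the family is dependent

# As a set, the duplicate collapses and only two vectors remain,
# and those two do form a basis of R^2.
as_set = set(family)
print(len(as_set))  # 2
```

So ( (1,0), (1,0), (0,1) ) fails to be a basis as a family, while the underlying set { (1,0), (0,1) } is one, which is exactly why the choice of definition matters.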

## Request for technical explanation

This article is crying out for a picture and an example using R3 or R2 with analogy to a simple XYZ or XY coordinate system. --Beland 16:17, 18 December 2005 (UTC)

## Too Technical

I am trying to understand Dirac notation by reading Wikipedia, and I am finding that all the articles are very technical. Now, "basis" is an idea I can wrap my head around, so I'll add an informal thingo. If you find it inaccurate, tweak it by all means, but don't just remove it purely on the basis that "this is so basic that everyone already knows it".

Hi; I'm trying desperately to understand many of these advanced principles of mathematics, such as basis, but no matter how many times I review the material, it doesn't sink in. Could someone please provide examples, problems to solve (with their solutions) and/or ways to visualize this? beno 26 Jan 2006

Hmmm, in my experience learning university-level mathematics was a fairly involved project that would have been difficult to do off the web :). I highly recommend finding a university and at least sitting in on their classes (or better, enrolling), if you possibly can. -- pde 23:19, 8 March 2006 (UTC)
Also, MIT has video lectures in linear algebra that you might find helpful. They can be downloaded here: http://ocw.mit.edu/OcwWeb/Mathematics/18-06Spring-2005/VideoLectures/index.htm. I'm still trying to learn this stuff myself! —The preceding unsigned comment was added by Fastfilm (talkcontribs) 16:44, 26 April 2007 (UTC).

## Co-ordinate vs. Coordinate

Assuming there's a difference, can someone explain why there would be inconsistent references in this article? It would only make sense logically to use one and use the same throughout (e.g. don't call apples oranges, and vice versa--just call it what you normally call it). -therearenospoons 16:30, 12 April 2006 (UTC)

Both are acceptable but 'coordinate' seems to be more standard in current usage. The change will be made.--RDBury (talk) 12:29, 13 July 2008 (UTC)

## Definition

Recently, the definition was changed from:

"Let B be a subset of a vector space V. A linear combination is a finite sum of the form
$a_1 v_1 + \cdots + a_n v_n, \,$
where the vk are different vectors from B and the ak are scalars. The vectors in B are linearly independent if the only linear combinations adding up to the zero vector have a1 = ... = an = 0. The set B is a generating set if every vector in V is a linear combination of vectors in B. Finally, B is a basis if it is a generating set of linearly independent vectors."

to:

"A basis B of a vector space V is a linearly independant subset of V that spans (or generates) V.
If B is endowed with a specific order (i.e. B has a finite sequence of linearly independant vectors that generate V), then B is called an ordered basis."

The editor said that this simplifies the definition. While the second definition is definitely shorter, it does not explain what "linearly independent" and "spans" means. For that reason, I prefer the first definition. I expect that most people who know what "linearly independent" and "spans" means, also know what "basis" means and do not need to read the definition. -- Jitse Niesen (talk) 04:57, 28 April 2006 (UTC)

I added some more detail in the definition, however, the definition of a basis is strictly what is in the first line (reference: Linear Algebra, 4th edition, Friedberg, Insel, Spence). The extra info is just an explanation of what the definition means. --Spindled 05:41, 28 April 2006 (UTC)
I like that the new section starts with the most important fact, what a basis is. However, I did make some edits to it (covered the infinite case, there is no A in independent in English, only variables should be italics but no parentheses, numbers, or other symbols); I hope you don't mind. By the way, have you already found Wikipedia:WikiProject Mathematics? -- Jitse Niesen (talk) 12:18, 28 April 2006 (UTC)
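As an aside, the notion both definitions lead to, namely that every vector has unique coordinates with respect to a basis, can be made concrete by solving for the coefficients of the linear combination. A minimal sketch using numpy (the basis {(1,1), (1,-1)} is my own illustrative choice, not from the article):

```python
import numpy as np

# Coordinates of v in the basis B = {(1, 1), (1, -1)} of R^2:
# solve a1*(1, 1) + a2*(1, -1) = v for the scalars a1, a2.
B = np.array([[1, 1], [1, -1]], dtype=float).T  # basis vectors as columns
v = np.array([3, 1], dtype=float)
coeffs = np.linalg.solve(B, v)
print(coeffs)  # [2. 1.]  since 2*(1,1) + 1*(1,-1) = (3,1)
```

The solve succeeds precisely because B is linearly independent (the matrix is invertible), and a solution exists for every v precisely because B spans R^2, which is the content of the first definition.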

## Is "By brute force" a colloquialism?

The example of alternative proofs is terrific. However, the phrase "By brute force" sounds like a colloquialism, so I have changed it to "By algebra". I considered "By algebraic manipulation", which sounds like something a con artist would do, and "By calculation", which sounds like something a calculator could do. :-)

I have removed the characterization of alternative proof methods because it implied that anyone who uses the first method is unsophisticated. That is POV. Perhaps someone can rewrite it. --Jtir 18:58, 26 September 2006 (UTC)

Personally, I don't think 'brute force' is too colloquial, but on the other hand I on't think it is the right phrase to use in this case. The proof of independence is simply going back to the definition of independence, so maybe ... 'from the definition' might be appropriate? Madmath789 19:33, 26 September 2006 (UTC)
That's much better! I would never have thought of that because I was hung up on maintaining the "By ..." pattern. I have made the edit, which includes a smoother intro. --Jtir 20:16, 26 September 2006 (UTC)

## Unclear wording about Hamel basis

The section titled Hamel Basis currently begins with the sentence:

The phrase Hamel basis is sometimes used to refer to a basis as defined above, in which the fact that all linear combinations are finite is crucial.

But what lies above is the definition for the ordered basis. Is this sentence implying that the Hamel basis is a kind of ordered basis? If so, then this should be stated explicitly, instead of the vague "defined above" referent. The second part of the sentence, "... is crucial", is also confusing: the sections above don't really discuss finiteness, and so insisting that finiteness is now crucial is even more confusing. Can someone please fix this up? linas 23:20, 14 October 2006 (UTC)

Is it better now or just more confusing? -- Jitse Niesen (talk) 03:26, 15 October 2006 (UTC)

In a Wiki piece about the very fundamental algebraic notion of a "basis", I would avoid emphasizing weird-looking technicalities like R as a vector space over Q. Hilbert basis and the other variants, okay. LDH (talk) 17:57, 19 November 2008 (UTC)

## Schauder dimension

I'm not sure, but it seems to me that Schauder bases need not all have the same cardinality. Consequently, Schauder dimension should be defined more clearly. —The preceding unsigned comment was added by 195.220.60.4 (talk) 12:08, 15 December 2006 (UTC).

## overly math-y wording

The definition currently contains the following sentence: "A vector space that admits a finite basis is called finite-dimensional." I found the word "admits" in this context to be unfamiliar and it took me a while to figure out exactly what was being said. I'm sure people who know the topic well or who are more math-oriented do not have a problem with it. However, why not "A vector space with a finite basis is called finite-dimensional." OR "A vector space that can be generated by a finite basis is called finite-dimensional."

Any objections? Am I misunderstanding the meaning of "admits"? -pg

You're probably right that the use of "admits" is a bit of jargon, though I never realized this. I changed it to "A vector space that has a finite basis is called finite-dimensional." Is that better?
Your first suggestion might be taken to require that the basis is specified. It doesn't need to be specified, it only needs to exist. Your second suggestion is okay, but I like "that has a finite basis" better because it's shorter. -- Jitse Niesen (talk) 03:32, 27 April 2007 (UTC)
I see what you mean, and your new phrasing sounds great to me. Thanks! -pg —The preceding unsigned comment was added by 71.247.186.233 (talk) 06:08, 2 May 2007 (UTC).

The picture alongside "Definition" shown does not show up clearly in my browser window. Can something be done about this ?

Thanks

## Banach Section

In the subsection called "Banach Spaces" there is a statement that any Banach space with cardinality "strictly greater than the continuum ($\mathfrak c = 2^{\aleph_0}$)" has an uncountable Hamel basis. Isn't the statement $\mathfrak c = 2^{\aleph_0}$ the continuum hypothesis? So, if that is necessary to state shouldn't we just say $\mathfrak c = |\mathbb{R}|$ or say "assuming CH"? Gandalf (talk) 16:00, 13 February 2008 (UTC)

No, $\mathfrak c = 2^{\aleph_0}$ is provable without CH. There's a proof in the Cardinality of the continuum article. (The Continuum Hypothesis says that there is no cardinal $\kappa$ such that $\aleph_0<\kappa<\mathfrak{c}$. This is equivalent (in ZFC) to $\aleph_1=\mathfrak{c}$, or, if you prefer, $\aleph_1=2^{\aleph_0}$.) But that statement from the Banach space section is just silly, even though it's correct. --Zundark (talk) 16:54, 13 February 2008 (UTC)
I knew that. I'm so silly. Sorry. Gandalf (talk) 15:47, 19 February 2008 (UTC)

## Related notions, Hamel basis, etc.

I reorganized (partially) the Related notions section since there was too much talking. Orthogonal basis and Schauder basis have their own pages, for one. Further - the statements concerning cardinality of the space and cardinality of the Hamel basis were false (indeed, the reals have cardinality continuum yet their Hamel basis is far from being uncountable). The distinction between Schauder dimension and Hamel dimension should be referenced and better explained, I took the liberty of deleting it for now. —Preceding unsigned comment added by Protony (talkcontribs) 22:40, 14 August 2008 (UTC)

I am looking for an exemplified difference between standard basis, Schauder basis, ordered basis and Hamel basis. Any link or any hint. Please help me. Ahsanulhaqpk (talk) 14:07, 15 March 2009 (UTC)

## Pronunciation

Is the plural "bases" pronounced "base-eez" or "base-iz"? —Preceding unsigned comment added by 24.2.48.202 (talk) 22:23, 14 November 2008 (UTC)

I say base-eez. I don't care if it's legitimate. I just want to be understood. :) LDH (talk) 17:48, 19 November 2008 (UTC)

## hamel basis

In the section on "Analysis", it is implied that a Hamel basis is simply some basis for an infinite-dimensional vector space, which involves "taking infinite linear combinations of the basic vectors in order to generate the space".

This is totally wrong AFAIK; the whole point of a Hamel basis is that you only need finite linear combinations. The most famous example of a Hamel basis (see e.g. Hewitt and Stromberg, Real and Abstract Analysis) is one for the real numbers over the rationals. There, the Hamel basis is a set of real numbers such that any real number can be written as a finite linear combination of these real numbers with rational coefficients.

See H&S for the more general definition; the other key point is that the existence of such a Hamel basis for uncountable sets requires the axiom of choice. This makes them rather odd as practical constructs, as they are only useful for non-constructive proofs.

— Steven G. Johnson (talk) 15:22, 19 March 2010 (UTC)

The section says that the Hamel basis is the notion discussed earlier in the article, and that the other notions allow "infinite linear combinations". Perhaps the phrasing is ambiguous (rather than the content being dubious). RobHar (talk) 15:59, 19 March 2010 (UTC)
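The finiteness that Steven emphasizes can be illustrated in miniature with the subfield Q(√2) viewed as a Q-vector space, where {1, √2} is a genuine two-element Hamel basis. A hedged sketch using exact rational arithmetic (my own toy representation, not from the article):

```python
from fractions import Fraction

# Elements of Q(sqrt(2)) represented as pairs (a, b) meaning a + b*sqrt(2).
# Over Q, the set {1, sqrt(2)} is a Hamel basis of this field viewed as a
# Q-vector space: every element is a *finite* rational combination of the two.
one = (Fraction(1), Fraction(0))    # the basis vector 1
root2 = (Fraction(0), Fraction(1))  # the basis vector sqrt(2)

def combine(coeffs, vectors):
    """Finite Q-linear combination of 'vectors' with rational 'coeffs'."""
    a = sum(c * v[0] for c, v in zip(coeffs, vectors))
    b = sum(c * v[1] for c, v in zip(coeffs, vectors))
    return (a, b)

# 3/2 + 5*sqrt(2) written as a finite combination of the basis elements:
x = combine([Fraction(3, 2), Fraction(5)], [one, root2])
print(x)
```

Of course, the interesting (and choice-dependent) case is a Hamel basis of all of R over Q, which is uncountable and cannot be exhibited explicitly; the toy example only shows what "finite linear combination with rational coefficients" means.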

## Dimension theorem needs only the ultrafilter lemma?

The article mentions that it can be proven from the ultrafilter lemma alone (without full choice) that any two bases are equipotent (the dimension theorem), and this claim is repeated on the page Dimension theorem for vector spaces. The proof on that page, however, isn't at all clear and seems to require the full axiom of choice (see [1]). Can anyone shed some light on this? --81.206.99.122 (talk) 11:20, 14 May 2010 (UTC)

Indeed, the dimension theorem (also known as Löwig's theorem) follows from the ultrafilter lemma. I have checked the primary source on this kind of issues: "Consequences of the axiom of choice" by Howard & Rubin. You can also check it online at http://consequences.emich.edu/file-source/htdocs/conseq.htm . Godelian (talk) 05:04, 11 October 2011 (UTC)

## Unexplained introduction of a new symbol

The article says "Let S be a subset of a vector space V. To extend S to a basis means to find a basis B that contains S as a subset. This can be done if and only if S is linearly independent. Almost always, there is more than one such B, except in rather special circumstances (i.e. L is already a basis, or L is empty and V has two elements)."

What is L? Is it just supposed to be S? Chesemonkyloma (talk) 20:32, 25 July 2012 (UTC)

Yes, it clearly is meant to be S, and I've fixed this. However, I think this section still need reviewing. — Quondum 09:30, 28 July 2012 (UTC)

## Basis and Bases

The article contains two spellings (Basis and Bases) for what I believe is the same thing. Please standardize the article by using the spelling, "basis". If indeed "basis" and "bases" ARE meant to represent different concepts, then explicit mention of the similar spellings is in order.

Since I am not 100% sure about this I will leave the edits to experts. Also please see the article on matroids which has a similar issue.

Churchill17 (talk) 21:39, 21 April 2013 (UTC)

"bases" is plural. I edited the entry to clarify this. AmirOnWiki (talk) 12:21, 7 November 2014 (UTC)

## Diagrams

Here are two images which show the main idea of basis vectors - namely taking linear combinations of them to obtain new vectors and that a vector can be represented in more than one basis.

A linear combination of one basis set of vectors yields new vectors. If they are non-coincident and nonzero, these form a new basis set. Each set is related to the other by a linear transformation.
A vector (purple arrow) can be represented in two different bases (green and blue arrows).
If basis vectors are each scalar-multiplied by a number and then vector-added, i.e. a linear combination of vectors, the result is a vector in the space spanned by those basis vectors. Here the vector space is just 3D Euclidean space; shown are two possible bases (not orthonormal).

Feel free to take/leave, edit the captions, request labeling (which I left out to reduce clutter and see what it would be like without), move them to another article, complain, etc. Best, M∧Ŝc2ħεИτlk 17:20, 25 April 2013 (UTC)

NB: These have been redrawn twice now. If you have comments please state them here instead so feedback is all in one place. Thanks, M∧Ŝc2ħεИτlk 09:29, 27 April 2013 (UTC)

## Proof that every vector space has a basis

I was making my way through this otherwise elegant proof and came across the following in the body of the proof:

"we say that L1 ≤ L2 when L1 ⊂ L2"

Elsewhere in the Wikipedia Mathematics collection (namely "Subset", section "The symbols ⊂ and ⊃") it is conspicuously noted that:

" if A ⊆ B, then A may or may not be equal to B, but if A ⊂ B, then A is definitely not equal to B."

Am I correct to note this as an error in symbology? — Preceding unsigned comment added by Hubbligart (talkcontribs) 23:53, 27 December 2014 (UTC)
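For what it's worth, Python's built-in set comparisons mirror the two conventions exactly, which makes the distinction easy to check; an illustrative aside (not from the article):

```python
# In Python's set API, <= is the subset relation (allows equality, like ⊆),
# while < is the proper-subset relation (like ⊂ in the strict convention
# quoted above from the "Subset" article).
A = {1, 2}
B = {1, 2}
C = {1, 2, 3}
print(A <= B)  # True:  A ⊆ B holds even though A == B
print(A < B)   # False: A is not a proper subset of B
print(A < C)   # True:  A ⊂ C in the strict sense
```

Since a partial order for a Zorn's-lemma argument must be reflexive (every chain element satisfies L ≤ L), the ordering in the proof presumably intends the inclusive relation ⊆.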