
Talk:Gaussian elimination


" it is never correct to compare floating point numbers zero" should be " it is never correct to compare floating point numbers to zero"

You should feel free to make any edits you think will improve the article (simply click Edit this page) if you wish to do so. Dori 13:27, Oct 21, 2003 (UTC)

Who or what is or was Jordan?

The page Gauss-Jordan elimination currently redirects here, with many backlinks, but this article makes no mention of this name except to use it with no explanation about half way down. Does anyone know anything about this term - is it a synonym, an extension, a misnomer? And, as my subject heading says, where does the Jordan bit come from? - IMSoP 13:35, 19 Apr 2004 (UTC)

A Google search reveals that it refers to the geodesist Wilhelm Jordan (1842 - 1899), "who applied the method to finding squared errors to work on surveying". Not to be confused with either Marie Ennemond Camille Jordan or Pascual Jordan. -- Dissident 13:58, 19 Apr 2004 (UTC)
Gauss-Jordan elimination computes a diagonal matrix, rather than the triangular matrices usually found in Gaussian elimination. G-J elimination should probably be a separate page, with a mention/comparison here. —The preceding unsigned comment was added by 24.236.179.177 (talk) 17:31, 8 April 2007 (UTC).[reply]

Row echelon form

This article currently mentions reduced row echelon form, but not row echelon form, which it should, because the two are not the same thing and because row echelon form redirects to this article. Lowellian (talk) 21:36, Oct 8, 2004 (UTC)

Instability

As far as I know (but I'm not an expert on numerical analysis), one major reason why Gaussian elimination is never used for solving large systems in floating-point arithmetic is that it is quite unstable. Any error you commit at any point gets propagated. Iterative methods, working on attractive fixed points, are much better in that respect. David.Monniaux 20:37, 15 Dec 2004 (UTC)

Gaussian elimination can indeed be unstable, even when using pivoting. Theoretical studies show that it can be very unstable, but it is generally not a problem in practice. This discrepancy is not well understood. Gaussian elimination is not used for large systems because of its n^3 price tag; the instability may also be a reason, but it is not often mentioned. A lot can (and should) be written in the current article about stability problems. Unfortunately, numerical linear algebra is not quite my speciality and I have no time at the moment, so somebody else has to do this. -- Jitse Niesen 21:32, 15 Dec 2004 (UTC)

fuzziness as to what the T matrix is

In the section titled "The general algorithm to compute ranks and bases", it is stated that "This echelon matrix T contains a ..."

The implication is that the example matrix is the T matrix, but this is never explicitly stated. Wouldn't it be clearer if the matrix were preceded by "T = ..."?

two suggestions to simplify the three rules

First, this isn't really a bug and most other books/articles publish the three rules for equivalent transformations nearly the same way:

1) Multiply or divide a row by a non-zero value.
2) Switch two rows.
3) Add or subtract a (not necessarily integer) multiple of one row to another row.

1st suggestion for the 1st rule: remove "or divide" and add "real" before "value" (reason: dividing by value 'x' is equivalent to multiplying by value '1/x')

2nd suggestion for the 3rd rule: replace the whole rule by

3) Add one row to another row.

(reason: old rule 3 is a concatenation of rule 1 and new rule 3, or in other words: the old rule 3 contains rule 1)
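
As a rough illustration of what the three rules do, here is a small Python sketch (the function names are invented for this example, not taken from the article):

  # The three elementary row operations, on a matrix stored as a list of rows.
  def scale_row(m, i, c):
      # Rule 1: multiply row i by a non-zero value c.
      assert c != 0
      m[i] = [c * x for x in m[i]]

  def swap_rows(m, i, j):
      # Rule 2: switch rows i and j.
      m[i], m[j] = m[j], m[i]

  def add_multiple(m, i, j, c):
      # Rule 3: add c times row j to row i (c need not be an integer).
      # With c = 1 this is the proposed simplified rule 3.
      m[i] = [x + c * y for x, y in zip(m[i], m[j])]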

[In case you want to contact me, send a mail to ibbis at gmx dot de.]

Formatting

The matrices should really be in []s, not ()s. 129.44.216.105 02:24, 9 March 2006 (UTC)[reply]


REF and RREF requirements

I've edited the rules that determine whether a matrix is in REF/RREF, as they were wrong. Specifically, the article (before my edit) stated that every leading coefficient must be 1 for a matrix to be in REF; this is actually a requirement for RREF. I've also cleaned up the formatting in that area to make the rules clearer. My source for the rules is:

Lay, David C. "Linear Algebra And its Applications". Third Edition - Low Price Edition, Page 30.

If anyone finds a source to the contrary, I would be interested - I don't know where the original information on the page came from.

Braveorca 23:30, 24 May 2006 (UTC)[reply]

According to Talk:Row echelon form, Larson and Hostetler, Precalculus, 7th edition, states that the leading coefficients have to be one for row echelon form. -- Jitse Niesen (talk) 13:51, 2 December 2006 (UTC)[reply]

Rewritten

I've been working on a huge rewrite of this (and the split-offs it spawned, heh) for a couple of days, and it's now done. I think it's all consistent and correct, but I haven't checked the source code examples or proofread very thoroughly (because my head hurts from too much of this). Any changes to fix stuff would be greatly appreciated. I think the rewrite fixes some major problems of the old one (no offence :)), making it more modular and, most importantly, noting that Gaussian elimination is not the same as Gauss-Jordan (though there are other "fixes" as well). -- Braveorca 02:01, 27 July 2006 (UTC)[reply]

Floating-point numbers to zero

I'm not a numerical specialist, but I do know that, although in general you shouldn't compare two floating-point numbers for equality, zero is a special case; it has an exact representation. Is there an algorithmic reason a small epsilon should be used instead of zero?

The problem is with the way in which numbers are stored and manipulated in a computer. Numbers are represented internally as binary numbers, and all mathematical operations are done in binary arithmetic, usually floating point. Internal numbers are limited in size, so rounding errors occur. Numbers are then displayed in decimal form. As an illustration, I just did the following calculation, using perl: (11/7)*(7/11)-(13/5)*(5/13). The result clearly should be zero, but in fact is 5.42101086242752217e-20. That is less than 0.00000000000000000006, obviously very small, but not zero, so comparing against zero would show inequality.
I usually compare against 1e-18 instead of zero when using floating point numbers; however, that is machine- and platform-dependent.
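
The same experiment can be repeated in Python, for example (a sketch; the residual is platform-dependent and need not match the perl value above):

  r = (11/7)*(7/11) - (13/5)*(5/13)
  print(r)             # tiny, but not necessarily exactly 0.0

  eps = 1e-12          # tolerance chosen purely for illustration
  print(abs(r) < eps)  # the robust test: a small band around zero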

I think that some of the above statements and the remark in the article are the result of some misunderstanding or fuzzy thinking regarding how floating point works. There's some belief that floating point is a little random or uncertain or imprecise, but this isn't really any more true for floating point than for fixed point, such as its special case, the integers. It is true that rounding errors and the lack of an exact representation for most real numbers often require using a comparison band. However, it isn't true that floating point numbers will always be slightly off, or that they are incapable of exact representation of a particular number -- and it isn't true that exact equality should never be used with floating point, which seems to be what is implied here.

I believe the remark in the article is one of these mistakes, and should be removed. It is essential that the leading coefficient in each row be 1, not just something that is close to 1. And yes, in any sane non-broken fp implementation, dividing a number by itself will yield exactly 1. And yes, any sane non-broken fp implementation has an exact representation of 1. FP implementations don't just produce random imprecisions for no good reason, contrary to popular belief. Also, what specifically does testing for a narrow band around 1 instead of 1 itself add, even if this were a valid test? Dividing the row by a number very close to 1 is unlikely to have any significant negative effect on the result, unless the matrix is ill-conditioned, in which case other sources of error will probably be more significant anyway. Is this supposed to be some sort of ill-conceived performance optimisation suggestion, perhaps?

In fact, I think I'm going to remove the fp eps comment from the article. If you feel it needs to be re-added, please discuss it here. AaronWL 05:47, 10 November 2006 (UTC)[reply]

I'm sorry, but I've no idea what you're talking about. We're not talking about comparing to 1, but about comparing to 0. Surely, it can happen that a matrix which in exact arithmetic would give a zero pivot, gives a small but nonzero pivot in floating point. If not, why not? -- Jitse Niesen (talk) 06:38, 10 November 2006 (UTC)[reply]
You seem to have misunderstood the statement that floating point should not be thought of as random. Floating-point arithmetic is certainly not random, but it is not generally exact either. Floating-point arithmetic is exact only when the result of an operation can be exactly represented as a floating-point number, for example when both inputs are integer-valued and the result is integer-valued (for small integers), but this is not generally the case. It is definitely not the case for Gaussian elimination, even if the input matrix has small integer entries, since Gaussian elimination relies heavily on division. Also, comparing against 1e-18 often doesn't make sense, since it is smaller than the machine epsilon in double precision arithmetic. It may work with 80-bit extended doubles, but they are not supported everywhere. Also be careful to distinguish between absolute and relative differences when comparing. Fredrik Johansson 11:24, 11 November 2006 (UTC)[reply]
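
To make that last distinction concrete, here is a sketch of absolute versus relative comparison in Python (the tolerances are illustrative):

  import sys

  def close_abs(a, b, tol=1e-12):
      # Absolute test: only meaningful when a and b are of order one.
      return abs(a - b) < tol

  def close_rel(a, b, rtol=8 * sys.float_info.epsilon):
      # Relative test: the tolerance scales with the magnitudes compared.
      return abs(a - b) <= rtol * max(abs(a), abs(b))

  print(close_abs(1e20, 1e20 + 1e5))  # False, despite ~15 matching digits
  print(close_rel(1e20, 1e20 + 1e5))  # True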

Contradiction

Gauss–Jordan elimination article states:

In other words, Gauss-Jordan elimination brings a matrix to reduced row
echelon form, whereas Gaussian elimination takes it only as far as row
echelon form. It is considerably less efficient than the two-stage
Gaussian elimination algorithm.

Gaussian elimination article states:

Equivalently, the algorithm takes an arbitrary matrix and reduces it to
reduced row echelon form (also known as row canonical form).
Stated equivalently for matrices, the first part reduces a matrix to
row echelon form using elementary operations while the second
reduces it to reduced row echelon form, or row canonical form.

To me it sounds as though Gaussian elimination is one-stage and Gauss-Jordan is two-stage, yet both articles contradict this. --ANONYMOUS COWARD0xC0DE 05:36, 19 January 2007 (UTC)[reply]

I think the source of your confusion is that the two operations referred to in each article are different. In the Gauss-Jordan article, when it says "two-stage Gaussian elimination algorithm" it means: you first use Gaussian elimination, then find the solutions by back-substitution. It is this back-substitution that is stage two. In Gauss-Jordan elimination, you have to work harder to get the reduced row echelon form, but then no back-substitution is necessary.

A previous edit mentioned Lay's Linear Algebra and Its Applications as a source; nothing wrong with that but for a deeper treatment with more references, a very readable text is Carl Meyer's Matrix Analysis and Applied Linear Algebra. For more on the stability of Gaussian elimination, I would recommend Trefethen and Bau's Numerical Linear Algebra.

So they both give the same results, just a different way of doing it. Therefore Gauss–Jordan elimination article should be changed like this:
 In other words, Gauss-Jordan elimination brings a matrix to reduced row
 echelon form.<s>, whereas Gaussian elimination takes it only as far as row
 echelon form. It is considerably less efficient than the two-stage
 Gaussian elimination algorithm.</s>
? --ANONYMOUS COWARD0xC0DE 23:10, 23 January 2007 (UTC)[reply]
Why? They both give the same results in that they both solve Ax = b, but Gaussian elimination does not reduce the matrix to reduced row echelon form. Putting it otherwise:
  • Gauss-Jordan elimination brings a matrix to reduced row echelon form, after which solving the system is trivial
  • Gaussian elimination brings a matrix to row echelon form and uses backsubstitution to solve the system
So I don't agree with your proposed change. -- Jitse Niesen (talk) 01:36, 24 January 2007 (UTC)[reply]
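
For concreteness, here is the two-stage version in a minimal Python sketch (no pivoting, and it assumes a square, non-singular system):

  def solve(a, b):
      n = len(a)
      # Stage 1: Gaussian elimination to row echelon form.
      for k in range(n):
          for i in range(k + 1, n):
              f = a[i][k] / a[k][k]
              for j in range(k, n):
                  a[i][j] -= f * a[k][j]
              b[i] -= f * b[k]
      # Stage 2: back-substitution, the step Gauss-Jordan makes unnecessary.
      x = [0.0] * n
      for i in range(n - 1, -1, -1):
          s = sum(a[i][j] * x[j] for j in range(i + 1, n))
          x[i] = (b[i] - s) / a[i][i]
      return x
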
I'll restate my problem as I now see it: either Gaussian elimination reduces to reduced row echelon form or just row echelon form. You can't say both!
Therefore if you deem my first proposed change to be unnecessary then the change would have to be
Gaussian elimination article states:
  Equivalently, the algorithm takes an arbitrary matrix and reduces it to
  <s>reduced row echelon form</s> row echelon form (also known as row canonical form).
  Stated equivalently for matrices, the first part reduces a matrix to
  row echelon form using elementary operations while the second
  reduces it to reduced row echelon form, or row canonical form.
--ANONYMOUS COWARD0xC0DE 05:39, 5 February 2007 (UTC)[reply]
Or are you saying that back-substitution should not be a part of the Gaussian elimination algorithm? It seems like an important aspect, therefore I doubt this interpretation. I hypothesise this from the "first use Gaussian elimination, then find the solutions by back-substitution" statement, which seems to reinforce a separation of back-substitution from the Gaussian elimination algorithm. Or is there a difference between Gaussian elimination and the Gaussian elimination algorithm? --ANONYMOUS COWARD0xC0DE 05:39, 5 February 2007 (UTC)[reply]

"The Gauss-Jordan (GJ) method is a variant of Gaussian elimination (GE). It differs in eliminating the unknown equations above the main diagonal as well as below the main diagonal. The Gauss-Jordan method is equivalent to the use of reduced row echelon form of linear algebra texts. GJ requires 50% more multiplication and division operation than regular elimination. However, it can be used to produce a matrix inversion program that uses a minimum of storage. Solving using GJ gives ." — Atkinson, Kendall E., An Introduction to Numerical Analysis, 2e., pp. 522-523. Another point to note is that the pseudocode given in the Gaussian elimination article implements Gauss-Jordan elimination, not Gaussian elimination. Bekant 23:52, 5 February 2007 (UTC)[reply]

Thanks, but the problem is whether or not Gaussian elimination reduces to reduced row echelon form, not Gauss-Jordan. The problem with the Gauss–Jordan elimination article is that it says Gaussian elimination does not bring a matrix to reduced row echelon form, but the Gaussian elimination article says it does. Jitse Niesen says that Gaussian elimination does not bring a matrix to reduced row echelon form, but did not comment further on the contradiction present in the article. --ANONYMOUS COWARD0xC0DE 02:40, 18 February 2007 (UTC)[reply]

Let me restate this once again: if back-substitution is part of the Gaussian Elimination algorithm then Gaussian Elimination does bring a matrix to reduced row echelon form and the Gauss-Jordan article must be changed. Otherwise if back-substitution is not part of the Gaussian Elimination algorithm then Gaussian Elimination brings a matrix to JUST row echelon form and the Gaussian Elimination article needs to make explicit that back-substitution is NOT part of the algorithm. --ANONYMOUS COWARD0xC0DE 02:40, 18 February 2007 (UTC)[reply]


As of 2007-05-17

Gaussian Elimination article:

Equivalently, the algorithm takes an arbitrary matrix and reduces it to
row echelon form.
A related but less-efficient algorithm, Gauss–Jordan elimination,
brings a matrix to reduced row echelon form, whereas Gaussian elimination takes
it only as far as row echelon form.
Stated equivalently for matrices, the first part reduces a matrix to
row echelon form using elementary operations while the second reduces
it to reduced row echelon form, or row canonical form.
At the end of the algorithm, we are left with


That is, it is in reduced row echelon form, or row canonical form.

Gauss-Jordan elimination article:

In other words, Gauss-Jordan elimination brings a matrix to
reduced row echelon form, whereas Gaussian elimination takes it only as far
as row echelon form.

Sorry, guys, you haven't convinced me that these contradictions are non-existent. I am getting tired of people ignoring blatant contradictions. Please open your eyes and fix it, or I will (someday). --ANONYMOUS COWARD0xC0DE 02:10, 18 May 2007 (UTC)[reply]

Too many END_IF statements in the pseudocode! One more than there are IF statements...

See headline —The preceding unsigned comment was added by 63.243.91.254 (talk) 18:36, 9 February 2007 (UTC).[reply]

Thanks. -- Jitse Niesen (talk) 08:37, 11 February 2007 (UTC)[reply]

Citecheck tag

Why is the citecheck tag on this article? Which citations need support? What citations would you like added? Would you like to see a reference to a layman (undergrad) introduction to Gaussian elimination or to some more advanced material in numerical linear algebra? I could not find any information about it in the talk page. Maybe I could help. Fph 13:31, 19 March 2007 (UTC)[reply]

It's about the difference between reduced row echelon form and row echelon form, see the Section "contradiction" above. Actually, I think the article needs a complete overhaul (addressing the pivoting issue you mention below), so I don't feel like doing all sorts of small corrections. -- Jitse Niesen (talk) 00:23, 20 March 2007 (UTC)[reply]

Pivoting

I think more information about partial pivoting should be added. As the page is now, pivoting is only explained through a pseudo-code algorithm, and is never properly introduced. I suggest adding an appropriate section. Fph 13:36, 19 March 2007 (UTC)[reply]
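
For anyone drafting that section, the idea fits in a few lines of Python (a sketch, not the article's pseudocode):

  def forward_eliminate(a, b):
      # Forward elimination with partial pivoting: before clearing column k,
      # swap in the row whose entry in column k has the largest magnitude.
      n = len(a)
      for k in range(n):
          p = max(range(k, n), key=lambda i: abs(a[i][k]))
          a[k], a[p] = a[p], a[k]
          b[k], b[p] = b[p], b[k]
          for i in range(k + 1, n):
              f = a[i][k] / a[k][k]
              for j in range(k, n):
                  a[i][j] -= f * a[k][j]
              b[i] -= f * b[k]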

Intro cleanup

Reading over the intro paragraph to this article, it struck me as taking a long time to get around to revealing exactly what Gaussian elimination is, so I fixed it. Now, I'm an English person, not a math person, so please check over it and fix any factual errors I may have inserted. I just felt the textual style could be cleaned up a bit. Thanks, Applejuicefool 14:24, 10 May 2007 (UTC)[reply]

Is this article confusing?

I would like to post comments on this article. In general, I have found Wiki to be exceptional and outstanding in its content. However, this particular article has left me quite perplexed. Please take my comments with a grain of salt, I do not know everything about matrix algebra. The first source of confusion is that in textbooks, online course notes (from professors), and hundreds of online references, most people today think of Gaussian elimination as an algorithm that, essentially, computes PA = LU. While this is mentioned in the article, the article seems to actively support the notion that the correct way of thinking about Gaussian elimination is as a factorization into ST. This is perplexing given that the three textbooks I have describe GE as a factorization into P, L, and U. As I mentioned, searching online provides the same discussion, be it class notes, theses, etc. Further, the general GE algorithm most practitioners are familiar with is only one "part", not two parts as the article claims. This one part computes the familiar LU decomposition that we would all find in textbooks, online pseudo-code, etc. It struck me that perhaps the author found the one textbook that truthfully proved that Gauss actually thought of both parts. All the textbooks I have describe only the first part. Considering the algorithm as a two-part entity thus seems to me to be highly likely to cause confusion for many other individuals besides myself.

Then the article claims that Gauss Elimination converts a matrix to row echelon form. The article also points out that a less efficient algorithm, Gauss-Jordan Elimination, converts a matrix to reduced row echelon form. I agree with these statements completely, as my understanding is the same. This distinction is made in the introductory section, before the table of contents for the article. Unfortunately, in the Example section, the article clearly states that at the completion of the Gaussian Elimination algorithm (after the second part), the resulting matrix is in reduced row echelon form. At the very least, this is confusing. It seems like the author(s) were, in some cases, familiar with the well-known GE algorithm, which always leaves matrices in row echelon form, regardless of whether they are augmented or not. If the discussed two-part algorithm always leaves matrices in reduced row echelon form, then at least the last sentence of the introduction must be changed. But I am suspicious of the authorship; it seems like half of the article was written with the commonplace understanding of the GE algorithm, while the last section of the example was written with the unexplained ST factorization. You will note that in the Example section of the article we move from row echelon form to reduced row echelon form with no explanation at all. Naturally, we may all be sufficiently familiar with matrix algebra to perform this missing step without concern, at least on paper. This missing transformation method has caused me quite a ruckus, as I am a numerical matrix library fellow. The article assumes I can correctly derive the algorithm needed to perform the last step myself (as if, to someone who is implementing a matrix library, this would be obvious in the general case). I beg to disagree.

Another problem is that the article states that the rank of a matrix can be trivially computed by counting the non-zero rows after the GE algorithm. Again, this would be true if GE computed the reduced row echelon form. However, this is almost never true if GE, as most textbooks describe it, only produces row echelon form. Given the confusion stated above, this is even more confusing. This might sound like a trivial objection. In fact, this fact is the only reason I am writing this post. I implemented the commonplace GE some time ago, and was baffled as to how it could be stated that rank only needs to count the non-zero rows. In practice, most matrices that are factored into LUP using the commonly-thought-of GE algorithm never have zero rows.
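
For what it's worth, the rank count the article presumably intends would look something like this (a sketch; the tolerance is illustrative and should really be scaled to the matrix):

  def rank_from_echelon(r, tol=1e-12):
      # Count the rows of a row echelon form with at least one entry
      # whose magnitude exceeds a small tolerance.
      return sum(1 for row in r if any(abs(x) > tol for x in row))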

Lastly, the pseudo-code listing really hits me as unnecessarily obfuscated. I cannot even divine where the so-called "first part" is, let alone the "second part". Also, I believe pseudo-code should be so simple we can almost immediately implement it and, to at least some degree, understand what is going on. The provided pseudo-code is opaque to me. The usual GE pseudo-code listed in textbooks is 8 lines, each of which can be trivially understood. The provided pseudo-code is 23 lines, almost triple the size. I cannot understand the intent of it at all. Therefore, as pseudo-code, can it be said, without a doubt, that it is an absolutely useful contribution to the article?
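
The kind of short textbook formulation alluded to above might read as follows in Python (an illustration, not a quotation from any particular book):

  def eliminate(a):
      # Roughly the eight-line textbook version: forward elimination on an
      # n-by-(n+1) augmented matrix, no pivoting.
      n = len(a)
      for k in range(n):
          for i in range(k + 1, n):
              f = a[i][k] / a[k][k]
              for j in range(k, n + 1):
                  a[i][j] -= f * a[k][j]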

In light of these observations, I urge that the article be modified to provide the well-known and commonly accepted GE algorithm, along with its pseudo-code. If the purpose of the article was to demonstrate that the well-known GE algorithm can be expanded to include a post-processing algorithm to produce reduced row echelon form, this should be clearly stated and explored as a sub-topic of the article, and, in particular, it should be very clearly discussed how the post-processing algorithm of the commonplace GE algorithm is more efficient than the mentioned (but less efficient) alternative, Gauss-Jordan elimination. 67.164.52.124 07:59, 12 May 2007 (UTC)[reply]

possible redirect

I was looking for this page for a class I'm taking and had trouble finding it because I was typing "Gausian" instead of "Gaussian". This seems like a fairly common misspelling; shouldn't there be a redirect for this spelling? 207.233.124.3 21:19, 15 May 2007 (UTC)[reply]

Yes, there probably should. I added a redirect. Thanks. -- Jitse Niesen (talk) 01:08, 16 May 2007 (UTC)[reply]

I made that same booboo I made last time, and noticed the redirect. Thank you, Mr. Niesen. 71.137.6.98 23:34, 17 May 2007 (UTC)[reply]