Talk:Tridiagonal matrix algorithm

WikiProject Mathematics (Rated C-class, Low-priority)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Mathematics rating:
 C Class
 Low Priority
Field:  Applied mathematics

Tridiagonal matrix algorithm: the example of heat transport in a bar

Edited the C example to clarify parts of the algorithm. Specifically, the C code computed 1/m and then multiplied, instead of simply dividing by m. This is dangerous in floating-point arithmetic and should be avoided. The code now also more closely matches the algorithm in the example above. David Souther (talk) 16:17, 30 October 2008 (UTC)
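To illustrate the point above: multiplying by a precomputed reciprocal is not the same as dividing, because 1/m is itself rounded before the multiplication. A minimal Python sketch (the function name `mismatches` is made up for this note, not part of any article code):

```python
def mismatches(limit=1000):
    """Count (a, m) pairs where a/m differs from a*(1/m) in double precision."""
    count = 0
    for m in range(1, limit):
        inv = 1.0 / m          # the reciprocal is rounded here...
        for a in range(1, 50):
            if a / m != a * inv:   # ...so the product can differ from direct division
                count += 1
    return count
```

For example, `3 / 10` gives 0.3 while `3 * (1.0 / 10)` gives 0.30000000000000004, so the two approaches are not interchangeable at the last bit.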

Also, in the C# code, the variable 'f' is not declared anywhere, so this code will not compile. I believe the 'x' array is supposed to have the same length as 'd' (or 'b', for that matter), and since 'N' is used to determine its limit within the program, changing f.Length to d.Length makes sense. 206.162.205.234 (talk) 09:42, 29 November 2008 (UTC)

Alright, I am making some more changes. First off, the back substitution actually differs between the two languages. Also, the beginning of the C# version doesn't make much sense: all of the arrays need to have the same length, or else the code doesn't match the math above and won't work. I rewrote it to be more like the C sample code (which actually works) and duplicated the comments; I think this will make it all easier to read. 206.162.205.234 (talk) 11:14, 30 November 2008 (UTC)

Wrong indices?

Please correct me if I am wrong, but shouldn't the index for the primed d's run all the way up to n? Otherwise, the last primed d will never be calculated... 134.219.198.154 (talk) 12:52, 1 July 2016 (UTC)

C# code example superfluous?

What is the point of having the C# example there when it is virtually line for line identical to the C one right above it? I will remove it as it seems to add no value to the discussion. SiegeLord (talk) 02:12, 11 May 2009 (UTC)

The C# example had changed quite a bit by the time you saw it and removed it. It had been changed to match the C one quite well. It would be nice for a C# example to demonstrate a style much more characteristic of typical C# programming. --Limited Atonement (talk) 23:31, 21 April 2011 (UTC)

Hu-Gallash?

Google says that the only reference on the entire internet to the "Hu-Gallash" algorithm is this page and its translations. Can someone find a reference, or else delete this? —Preceding unsigned comment added by 69.200.245.55 (talk) 11:55, 28 July 2009 (UTC)

I've removed Hu-Gallash. Probably a hoax. 13:04, 28 July 2009 (UTC)

correct me if I'm wrong...

but isn't the method and "derivation" described for solving a tridiagonal system just Gaussian elimination applied to it? The only reason tridiagonal matrices take O(n) time is that the bandwidth is fixed, independent of the matrix size, and we don't do the computations for entries that we know to be zero. For instance, pentadiagonal matrices (nonzero main diagonal and two upper and two lower diagonals) would follow the same idea and also take O(n) time. It might help to mention this to clarify the situation for some readers. Charibdis (talk) 15:26, 28 June 2010 (UTC)
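The observation above can be made concrete: the Thomas algorithm is exactly Gaussian elimination restricted to the nonzero band, so the forward sweep touches one sub-diagonal entry per row. A minimal Python sketch (function name `thomas` and the list-based calling convention are this note's own, not from the article):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system by Gaussian elimination on the band.

    a: sub-diagonal (length n-1), b: main diagonal (length n),
    c: super-diagonal (length n-1), d: right-hand side (length n).
    Each forward step eliminates the single sub-diagonal entry of row i,
    which is why the whole solve is O(n)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i - 1] * cp[i - 1]        # the modified pivot b'_i
        cp[i] = c[i] / m if i < n - 1 else 0.0  # c'_n is never needed
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For example, the system with diagonals a = [1, 1], b = [2, 2, 2], c = [1, 1] and right-hand side d = [3, 4, 3] has the exact solution x = [1, 1, 1], which this sketch recovers.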

Error in Matlab/Octave algorithm

There is an error in the Matlab/Octave algorithm. A corrected version would be:

Implementation in Matlab

Note that the index ${\displaystyle i}$ here is one-based, in other words ${\displaystyle i=1,2,\dots ,n}$ where ${\displaystyle n}$ is the number of unknowns.

function x = TDMAsolver(a,b,c,d)
% a, b, c are the sub-, main- and super-diagonals; d is the right-hand side.
n = length(b);

% Forward sweep: modify the coefficients.
c(1) = c(1) / b(1);    % Division by zero risk.
d(1) = d(1) / b(1);    % Division by zero would imply a singular matrix.

for i = 2:n-1
    id = 1 / (b(i) - c(i-1) * a(i-1));    % Division by zero risk.
    c(i) = c(i) * id;                     % c(n) is never needed, so stop at n-1.
end

for i = 2:n
    id = 1 / (b(i) - c(i-1) * a(i-1));    % Division by zero risk.
    d(i) = (d(i) - d(i-1) * a(i-1)) * id;
end

% Now back substitute.
x(n) = d(n);
for i = n-1:-1:1
    x(i) = d(i) - c(i) * x(i + 1);
end


Maybe someone could fix it, or correct the first code so it makes one iteration. [unsigned]

I'm not confident enough to make the change, but I ported it (the Matlab example) to my language, and it has a few errors, which seem to be corrected in your version.

--Limited Atonement (talk) 22:52, 21 April 2011 (UTC)

Crazy Examples

The algorithm demonstrated by the Fortran and C examples doesn't match very closely with the algorithm presented in the 'Method' section. The Matlab one apparently attempts to, but it has many errors. I wrote an implementation (in C#) and ported what I found here; the Fortran and C examples agree, but my implementation doesn't agree with them, and the Matlab example isn't even close. Perhaps I'm completely lost, but I can't follow the C or Fortran implementation from the method described in 'Method'.

--Limited Atonement (talk) 23:28, 21 April 2011 (UTC)

There is obviously something wrong with the C implementation: for my given tridiagonal matrix, the algorithm produces results that are wrong in the first decimal place according to both WolframAlpha and the Python implementation listed below, so I suppose it is an implementation mistake in C rather than a problematic set of data. — Preceding unsigned comment added by 77.254.139.197 (talk) 00:16, 29 November 2012 (UTC)
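A cheap way to settle disputes like this one, independent of any reference solver: multiply the candidate solution back through the tridiagonal matrix and look at the residual. A small Python sketch (the function name `residual` and its list-based signature are invented for this note):

```python
def residual(a, b, c, d, x):
    """Return max |(A x - d)_i| for the tridiagonal matrix A with
    sub-diagonal a (length n-1), main diagonal b (length n) and
    super-diagonal c (length n-1); d is the right-hand side.
    A tiny residual means x really solves the system, whatever
    implementation produced it."""
    n = len(b)
    worst = 0.0
    for i in range(n):
        s = b[i] * x[i]
        if i > 0:
            s += a[i - 1] * x[i - 1]   # contribution from the sub-diagonal
        if i < n - 1:
            s += c[i] * x[i + 1]       # contribution from the super-diagonal
        worst = max(worst, abs(s - d[i]))
    return worst
```

Running any of the article's implementations through a check like this would show immediately whether the C version (or the port) is at fault.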

Complicated derivation

Is it only me, or are there others out there too who find the presentation of the derivation unnecessarily complicated? Anyone who knows standard Gaussian elimination can straight away derive the necessary recursive equations. Provided that we are trying to upper-triangularize the matrix, I think it would be more helpful to present the row operations using notation such as ${\displaystyle R_{i}\leftarrow R_{i}-(a_{i}/b'_{i-1})\times R_{i-1}}$, where ${\displaystyle b'_{i}=b_{i}-{\frac {a_{i}c_{i-1}}{b'_{i-1}}}}$. Perhaps someone can clean it up. (Manoguru (talk) 16:18, 27 January 2012 (UTC))

Agreed. But actually it's not exactly the same algorithm, though the mathematical result is equivalent. In the usual one, only b_i and d_i are updated in the forward step; here, only c_i and d_i are updated. I also have a ref (Rotella & Borne, see next section) stating this variant is due to Cholesky, but I shall double-check this. Arbautjc (talk) 22:06, 23 August 2013 (UTC)

It's not just you - it is RIDICULOUSLY complicated as written out. All you need are N-1 row operations to reduce the matrix to upper triangular form, with the same operations applied to the d-vector, and you're all set for back substitution. Here's the VBA snippet that I use virtually every day!

Sub TDMA(N%, A#(), B#(), C#(), D#(), X#())
    Dim i%, W#, BP#(), DP#()
    ReDim BP#(N), DP#(N)
    BP(1) = B(1): DP(1) = D(1)
    For i = 2 To N                 ' Forward elimination -> reduction to upper triangular form
        W = A(i) / BP(i - 1)
        BP(i) = B(i) - W * C(i - 1)
        DP(i) = D(i) - W * DP(i - 1)
    Next i
    X(N) = DP(N) / BP(N)
    For i = N - 1 To 1 Step -1     ' Back substitution to compute X
        X(i) = (DP(i) - C(i) * X(i + 1)) / BP(i)
    Next i
End Sub

179.87.13.132 (talk) 22:40, 1 October 2016 (UTC)

An important limitation

I think it should be emphasized that TDMA works only for diagonally dominant matrices: http://www.cfd-online.com/Wiki/Tridiagonal_matrix_algorithm_-_TDMA_%28Thomas_algorithm%29#Assessments M4urice (talk) 14:20, 19 March 2013 (UTC)

I forgot to add a link to the definition (the one on the CFD Online Wiki is inaccurate): Diagonally_dominant_matrix M4urice (talk) 17:21, 19 March 2013 (UTC)

No. It is proved to work for diagonally dominant matrices, which does not mean it is proved not to work on other matrices. And actually, it can also be proved that even if there is a null "pivot", one can still solve the system (given it is nonsingular, of course) by solving two smaller tridiagonal systems. There is a kind of "restarting" procedure. See for example (in French, sorry): Rotella & Borne, "Théorie et pratique du calcul matriciel", ed. Technip. I'll look for an English ref. Arbautjc (talk) 22:06, 23 August 2013 (UTC)
I think the point is that the algorithm as currently described can fail if matrices are not diagonally dominant. Simple example that the algorithm fails to solve (division by zero):
${\displaystyle \left[{\begin{matrix}{1}&{1}\\{0}&{0}\\\end{matrix}}\right]\left[{\begin{matrix}{x_{1}}\\{x_{2}}\\\end{matrix}}\right]=\left[{\begin{matrix}{1}\\{0}\\\end{matrix}}\right]}$
So either the condition should be mentioned, or the algorithm should be extended to handle all tridiagonal matrices. --Feklee (talk) 16:06, 22 November 2014 (UTC)
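The breakdown being discussed is a zero pivot in the forward sweep, and it can happen even for nonsingular matrices that are not diagonally dominant. A Python sketch (the helper `thomas_pivots` is invented for this note) that exposes the pivots b'_i of the unpivoted forward sweep:

```python
def thomas_pivots(a, b, c):
    """Return the pivots b'_i produced by the (unpivoted) Thomas forward sweep.

    a: sub-diagonal (length n-1), b: main diagonal (length n),
    c: super-diagonal (length n-1). A zero pivot along the way makes
    the plain algorithm divide by zero, even if the matrix is nonsingular."""
    n = len(b)
    piv = [float(b[0])]
    for i in range(1, n):
        cp = c[i - 1] / piv[-1]           # raises ZeroDivisionError on a zero pivot
        piv.append(b[i] - a[i - 1] * cp)
    return piv
```

For instance, the nonsingular (but not diagonally dominant) 2x2 system with diagonals a = [1], b = [0, 0], c = [1] makes the very first division fail, while a diagonally dominant system such as a = [1, 1], b = [2, 2, 2], c = [1, 1] yields nonzero pivots throughout, which supports mentioning the condition (or a restarting procedure) in the article.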

The number of code examples is superfluous

We only need C++ (for speed) and Matlab or Python (for being intuitive), and surely only one version of each. As for other languages, we could link to them from the article. --Dixtosa (talk) 14:04, 26 June 2013 (UTC)

Why C++ for being fast? Why not C or Fortran? Is there a policy on WP for the choice of one language among others? (On the French WP, code is usually restricted to pseudo-code, so that no language is favored.) Arbautjc (talk) 16:35, 8 November 2013 (UTC)