# Talk:Circulant matrix

The page on Circulant Matrix is subject to do/undo/do/undo wars between those taking a Matrix Algebra view and those taking a Linear Systems Theory view. The latter are electrical engineers by training, while the former are computer programmers and numerical analysts. I belong with the programmers. Matrices love computers and computers love matrices. The linear systems approach of the electrical engineers is computer-unfriendly, obtuse, meritless, and dowdy. It dates from 1920s radio engineering, before the invention of computers. Since the electrical engineers are inured to it, they will continue to use it out of inertia. But an article about a circulant MATRIX should be done in MATRIX algebra, with matrix notation. Sean111111 21:08, 11 July 2007 (UTC)

No such edit war ensued until you started it!
The page before the "edit war" was adequate, and proved (up to a point) why the DFT was involved. Whilst it may not have been perfect, it was certainly better than the update, which offered no such "proof", was badly-formatted, and contained unencyclopedic phrases such as "This decomposition of C has got loads of practical applications" and "The greek letter ρ is called rho so it's good for denoting rhotation operations".
If you want to see the article move towards a matrix-style notation, please do it properly; any edits should be an improvement, not making the article worse.
P.S. Computers don't love matrices any more than they love convolution. Claiming that "the linear systems approach of the electrical engineers is computer-unfriendly, obtuse, meritless, and dowdy" is completely unfounded. I hope that this won't cloud any further edits to the article that you may choose to make.
Oli Filth 21:25, 11 July 2007 (UTC)

## Mistake in Diagonalizing Circulant Matrix?

I'm not an expert, but I think the article is wrong in its description of diagonalizing a circulant matrix. The article says that

${\displaystyle C=F_{n}\operatorname {diag} (F_{n}c)F_{n}^{-1},}$

where

${\displaystyle F_{n}=(f_{jk}){\text{ with }}f_{jk}=\mathrm {e} ^{-2jk\pi i/n}}$.

While this gives the correct matrix of eigenvalues ${\displaystyle \operatorname {diag} (F_{n}c)}$, I think the eigenvector matrix ${\displaystyle F_{n}}$ should carry a factor of ${\displaystyle 1/{\sqrt {n}}}$. See, for example, section 3.1 of http://ee.stanford.edu/~gray/toeplitz.pdf on the eigenvalues and eigenvectors of circulant matrices. Also, the page on discrete Fourier transform matrix says that the factor of ${\displaystyle 1/{\sqrt {n}}}$ is needed for unitarity.

72.130.185.73 (talk) 02:40, 8 April 2010 (UTC)

Agreed. I will fix this. Oli Filth(talk|contribs) 07:07, 8 April 2010 (UTC)
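The normalization being discussed can be checked numerically. The following is a sketch (assuming NumPy, and the convention that c is the first column of the circulant matrix): with the unnormalized DFT matrix F, the eigenvalue matrix is diag(Fc), while the diagonalizing matrix only becomes unitary after dividing by sqrt(n).

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
c = rng.standard_normal(n)  # first column of the circulant matrix

# Build C with C[j, k] = c[(j - k) mod n], so the first column is c
C = np.array([[c[(j - k) % n] for k in range(n)] for j in range(n)])

# Unnormalized DFT matrix F[j, k] = exp(-2*pi*i*j*k/n)
J, K = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(-2j * np.pi * J * K / n)

U = F / np.sqrt(n)   # unitary DFT matrix (the 1/sqrt(n) factor at issue)
D = np.diag(F @ c)   # eigenvalues = unnormalized DFT of the first column

# U is unitary, and C = U^H D U
assert np.allclose(U.conj().T @ U, np.eye(n))
assert np.allclose(C, U.conj().T @ D @ U)
```

Note the asymmetry: the eigenvalues are still the unnormalized DFT of c; only the similarity transform needs the 1/sqrt(n) factor to be unitary.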

I'm not sure, but I do not agree with the solution of the linear system. According to my calculations (which rely on "A Fast Algorithm for Deblurring Models with Neumann Boundary Conditions" by Ng, Chan, and Tang) it should be ${\displaystyle {\mathcal {F}}\left({\frac {{\mathcal {F}}^{*}(b)}{{\mathcal {F}}(c)}}\right).}$ My code works, but my studies are sixteen months old and I do not remember the steps.
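For comparison, the standard FFT solve for a circulant system Cx = b (a sketch in NumPy, assuming c is the first column of C) uses the inverse transform on the ratio of spectra; whether a conjugate appears instead depends on whether c is taken as the first row or first column.

```python
import numpy as np

n = 8
rng = np.random.default_rng(1)
c = rng.standard_normal(n)  # first column of the circulant C
b = rng.standard_normal(n)  # right-hand side

C = np.array([[c[(j - k) % n] for k in range(n)] for j in range(n)])

# Solve C x = b in O(n log n): the DFT diagonalizes C, so divide spectra
x = np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)).real

assert np.allclose(C @ x, b)
```

(This assumes fft(c) has no zero entries, i.e. C is invertible.)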

## Eigenvalue properties

I'm having a hard time understanding why the following is in this article

 ${\displaystyle \ C=W\Lambda W^{-1}}$ by eigendecomposition, so
 ${\displaystyle \operatorname {det} (C)=\operatorname {det} \left(W\Lambda W^{-1}\right)=\operatorname {det} \left(W\right)\operatorname {det} \left(\Lambda \right)\operatorname {det} \left(W^{-1}\right)=\operatorname {det} \left(\Lambda \right)=\prod _{k=1}^{n}\lambda _{k},}$
 where ${\displaystyle C}$ is an ${\displaystyle n\times n}$ circulant matrix, ${\displaystyle W}$ is the unitary discrete Fourier transform matrix, and ${\displaystyle \Lambda }$ is a diagonal matrix with eigenvalues ${\displaystyle \lambda _{k}}$. The last term, ${\displaystyle \prod _{k=1}^{n}\lambda _{k}}$, is the product of the spectral values.

So, we have shown that the determinant is the product of the eigenvalues? How is this a special property for Circulant matrices? The determinant is always the product of the eigenvalues! I am removing this unless someone has valid reason why it should stay. Thanks! Plastikspork (talk) 17:14, 3 December 2008 (UTC)
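The circulant-specific fact the thread wants emphasized (the eigenvalues are the DFT of the first column, so the determinant is the product of the DFT values) is easy to verify numerically; a sketch, assuming NumPy's FFT sign convention and c as the first column:

```python
import numpy as np

n = 5
rng = np.random.default_rng(2)
c = rng.standard_normal(n)

# Circulant matrix with first column c
C = np.array([[c[(j - k) % n] for k in range(n)] for j in range(n)])

# Eigenvalues of a circulant matrix are the DFT of its first column,
# so det(C) is the product of the DFT values
eigvals = np.fft.fft(c)
assert np.allclose(np.linalg.det(C), np.prod(eigvals).real)
```

The special property is not "det = product of eigenvalues" (true for any matrix) but that the eigenvalues come for free from one FFT.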

I was thinking the same thing at nearly the same time. What IS important is that a certain DFT matrix can diagonalize a circulant matrix. Unfortunately, I don't have Horn and Johnson in front of me to fill in the details. Please, someone fix this page. -- anonymous

That was already mentioned implicitly before ("The eigenvectors of a circulant matrix of given size are the columns of the discrete Fourier transform matrix of the same size"). However, I put some more details in, and I removed the fragment quoted by Plastikspork. I couldn't find that much in Horn & Johnson so I used Golub & van Loan; hope you don't mind. Cheers, Jitse Niesen (talk) 12:47, 4 December 2008 (UTC)
Exactly. The emphasis should be on special properties of circulant matrices. Golub and van Loan is the best first source for anything on matrices (in my opinion). Thanks for cleaning this up. Plastikspork (talk)

I'm looking to determine the eigenvectors/eigenvalues of the matrix M = DC, where C is a circulant matrix and D is a diagonal matrix whose i-th entry is z^{2i}, where z is an n-th root of unity... Does anybody know how to do this?

thanks —Preceding unsigned comment added by Asympt (talkcontribs) 02:32, 28 November 2009 (UTC)

## Determinant vs. Eigenvalues

Is there a particular reason that the determinant of a circulant matrix is interesting (more so than its eigenvalues)? I ask because listing the eigenvalues (which are the individual factors in the determinant product provided) would provide both pieces of information (since the determinant is always the product of the eigenvalues). However, only the determinant is provided now, which says nothing about the individual eigenvalues. JokeySmurf (talk) 13:43, 28 July 2010 (UTC)

The comparison of determinant vs. eigenvalues is irrelevant. Information about both quantities is valuable and should be present in the article. If you have something to add about eigenvalues to the article, please do. Maxal (talk) 14:13, 28 July 2010 (UTC)

## Definition clarification

This is with respect to a conflict I see between the eigenvalues and the definition stated here. From my understanding, the eigenvalues are obtained as ${\displaystyle \lambda =c_{0}+c_{1}j+c_{2}j^{2}+\cdots }$, where ${\displaystyle c_{0},c_{1},c_{2},\ldots }$ are the elements of the first row and ${\displaystyle j}$ is a root of unity. But this page gives the eigenvalues in terms of the first column, which gives us the eigenvalues but in an incorrect order. Please verify, or if I am making a mistake somewhere, please clarify. [1] The above link is for determinants, but as can be seen, the eigenvalues are w.r.t. the first row. Thank you. —Preceding unsigned comment added by 69.143.35.92 (talk) 04:05, 19 November 2010 (UTC)

The order of eigenvalues is arbitrary, so long as it matches the ordering of the eigenvectors. Plastikspork ―Œ(talk) 05:12, 19 November 2010 (UTC)

The eigenvectors defined here and the eigenvalues do not correspond to each other. Consider the multiplication of the matrix C with an eigenvector vj: we run across the row and not the column. And since the first entry of each eigenvector is 1, the first element of the product Cvj is the eigenvalue corresponding to that eigenvector. —Preceding unsigned comment added by 69.143.35.92 (talk) 14:54, 19 November 2010 (UTC)

Thanks, fixed. Asympt (talk) 01:38, 21 November 2010 (UTC)
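The commenter's observation can be checked directly: since each eigenvector has first entry 1, the corresponding eigenvalue is the first row of C applied to that eigenvector. A sketch in NumPy (assuming, as above, that c is the first column of C):

```python
import numpy as np

n = 4
rng = np.random.default_rng(3)
c = rng.standard_normal(n)  # first column of C
C = np.array([[c[(j - k) % n] for k in range(n)] for j in range(n)])

w = np.exp(2j * np.pi / n)  # primitive n-th root of unity
for m in range(n):
    v = w ** (m * np.arange(n))  # eigenvector (1, w^m, w^{2m}, ...)
    lam = C[0] @ v               # first row of C against v, since v[0] = 1
    assert np.allclose(C @ v, lam * v)
```

So the eigenvalue paired with each eigenvector is the first-row sum, which is what fixes the ordering.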

## Why is there a link to Weisstein page in external links

I mean, Weisstein does not appear anywhere in the body of the article... so how come there is an external link to him? Where does it come from? It's not that I mind linking to him, but surely this shouldn't be done automatically? Asympt (talk) 02:06, 21 November 2010 (UTC)

## Mistake in Eigenvector formulation

Is that formula supposed to say something like "the real part of $\omega_v^k$"? Otherwise the eigenvectors will be complex, which doesn't make sense in the case of a symmetric circulant matrix. As an example, the matrix [[0,1,2,1],[1,0,1,2],[2,1,0,1],[1,2,1,0]] has eigenvector [1,0,-1,0]. The formula seems to predict in this case an eigenvector of (1, i, -1, -i). —Preceding unsigned comment added by 135.207.103.61 (talk) 19:31, 23 February 2011 (UTC)

No, there is no mistake. In your example, another eigenvector is (1,-i,-1,i), which has the same eigenvalue of -2 as the eigenvector (1,i,-1,-i). Therefore any linear combination of these two is again an eigenvector, and if you add them together you get (2,0,-2,0), which is twice your real eigenvector. Asympt (talk) 19:54, 16 March 2011 (UTC)
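The degenerate-eigenvalue point is easy to verify numerically with the matrix from the thread; a sketch:

```python
import numpy as np

# The symmetric circulant matrix from the question above
C = np.array([[0, 1, 2, 1],
              [1, 0, 1, 2],
              [2, 1, 0, 1],
              [1, 2, 1, 0]], dtype=float)

v1 = np.array([1, 1j, -1, -1j])      # complex eigenvector for k = 1
v3 = np.array([1, -1j, -1, 1j])      # its conjugate, for k = 3
assert np.allclose(C @ v1, -2 * v1)  # both share the eigenvalue -2
assert np.allclose(C @ v3, -2 * v3)

# Their sum is real: 2 * (1, 0, -1, 0), the real eigenvector reported
assert np.allclose(v1 + v3, 2 * np.array([1, 0, -1, 0]))
```

Because the eigenvalue -2 is repeated, any linear combination of v1 and v3 is again an eigenvector, including the real one.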

I tried the case of c1 = 1, and found that a conjugate was needed instead of a transpose. This might make the difference here too.