Wikipedia:Reference desk/Archives/Mathematics/2012 April 27



April 27

Invariant theory: matrices and invariants under upper triangular matrices

Hi all, I'm learning some invariant theory for rings and I'm getting myself a bit confused with this question - I feel like I might have made a mistake, and would appreciate some feedback from someone more experienced than me.

If $G$ acts on $V$, we write $\mathbb{C}[V]^G$ for the invariant ring. In particular, if an element $g$ acts on some finite dimensional vector space $W$ over $\mathbb{C}$, then for an element $f$ of the coordinate ring $\mathbb{C}[W]$, we have an action on $f$ by $(g \cdot f)(w) = f(g^{-1}w)$.

We let $V = M_2(\mathbb{C})$ denote the 2x2 matrices over $\mathbb{C}$, $U$ denote the upper triangular unipotent matrices (that is, matrices with '1's on the diagonal, a zero below and anything above), and $T$ denote the matrices of the form $\begin{pmatrix} t & 0 \\ 0 & t^{-1} \end{pmatrix}$. These both act on $V$ by (left) matrix multiplication. I wish to find $\mathbb{C}[V]^U$ and $\mathbb{C}[V]^T$, which I think means the polynomials $f$ (i.e. polynomials in the coordinate ring $\mathbb{C}[V] = \mathbb{C}[a,b,c,d]$, writing $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ for a generic element of $V$) which are fixed under the map $f \mapsto (v \mapsto f(M^{-1}v))$ for any matrix $M$ in $U$, $T$ respectively.
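The actions just described are easy to sanity-check symbolically. A small sympy sketch (my addition, not part of the question; the entry names a, b, c, d follow the question, and taking the diagonal element to be diag(t, 1/t) is my assumption from the rest of the post):

```python
# Sketch: left-multiply a generic 2x2 matrix X by an element of U and of the
# diagonal group, and see how the entries transform (sympy assumed available).
import sympy as sp

a, b, c, d, t = sp.symbols('a b c d t')
X = sp.Matrix([[a, b], [c, d]])

u = sp.Matrix([[1, t], [0, 1]])        # unipotent upper triangular element of U
T_elt = sp.Matrix([[t, 0], [0, 1/t]])  # diagonal element diag(t, 1/t) -- my assumption

print(sp.simplify(u * X))      # rows become (a + t*c, b + t*d) and (c, d)
print(sp.simplify(T_elt * X))  # rows become (t*a, t*b) and (c/t, d/t)
```

So the bottom row of $X$ is untouched by $U$, which is what makes $c$ and $d$ natural candidates for invariants.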

So a matrix in U looks like $\begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}$; this takes $\begin{pmatrix} a & b \\ c & d \end{pmatrix} \mapsto \begin{pmatrix} a + tc & b + td \\ c & d \end{pmatrix}$. An f which is invariant under this transformation can be considered (I think) as $f(a+tc,\, b+td,\, c,\, d)$ rather than $f(a,b,c,d)$, so that $f(a+tc,\, b+td,\, c,\, d) = f(a,b,c,d)$: so the question is essentially asking us, if I understand correctly, to find the polynomials f which are invariant under that transformation. What can we say about such polynomials? I tried expanding it as

$$\sum_{i,j,k,l} \lambda_{ijkl}\,(a+tc)^i (b+td)^j c^k d^l \;=\; \sum_{i,j,k,l} \lambda_{ijkl}\, a^i b^j c^k d^l \quad \text{for all } a, b, c, d, t,$$ where $f = \sum_{i,j,k,l} \lambda_{ijkl}\, a^i b^j c^k d^l$.
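One concrete way to experiment with this condition (my sketch, not in the original post): substitute $a \mapsto a + tc$, $b \mapsto b + td$ into a candidate $f$ and ask whether the result equals $f$ identically in $t$. Here `is_U_invariant` is a helper name I've made up:

```python
# Sketch: test the condition f(a + t*c, b + t*d, c, d) == f(a, b, c, d)
# on a few sample polynomials (sympy assumed available).
import sympy as sp

a, b, c, d, t = sp.symbols('a b c d t')

def is_U_invariant(f):
    """True iff f is fixed by a -> a + t*c, b -> b + t*d, identically in t."""
    shifted = f.subs({a: a + t*c, b: b + t*d}, simultaneous=True)
    return sp.expand(shifted - f) == 0

print(is_U_invariant(c**2 * d))    # True: involves only c and d
print(is_U_invariant(a))           # False: a -> a + t*c changes it
print(is_U_invariant(a*d - b*c))   # True: the determinant
```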

I then tried looking at this as either a polynomial in $t$ or a polynomial in $a$ and $b$; we know that all coefficients of $t^n$ are always zero for $n \geq 1$, and these coefficients are polynomials in the $a, b, c, d$ and the $\lambda_{ijkl}$; what I really wanted to do is show that these coefficients are necessarily nonzero polynomials in the $\lambda_{ijkl}$, unless we assume all $\lambda_{ijkl}$ involved are zero; i.e. our polynomial can only be a polynomial in the latter two variables $c, d$, otherwise it is not fixed for every $t$.

However, when I tried to determine the coefficient of $t$ as a polynomial in the $\lambda_{ijkl}$, I find that multiple terms in $f$ can contribute to the same term in the coefficient of $t$, and in fact the function $ad - bc$ satisfies the requirements but is obviously a function of all 4 variables (note that this is effectively the determinant, though I don't know if that has any significance). Indeed, functions of only the latter 2 variables are *included* in our class of possible functions, but they don't make up the whole thing. I'm not sure what more we can say about the class; by choice of $t$ (say $t = -a/c$) I think we can deduce $f(a,b,c,d) = h\!\left(\tfrac{bc - ad}{c},\, c,\, d\right)$ for some $h$, but I'm not sure where to go from here.
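The cancellation described above can be made explicit, again as a sympy sketch of my own (`t_coefficient` is a made-up helper name): the $t$-coefficient contributed by $ad$ is $cd$, the one contributed by $-bc$ is $-cd$, and in the determinant the two contributions cancel.

```python
# Sketch: expand f(a + t*c, b + t*d, c, d) - f(a, b, c, d) as a polynomial in t
# and extract the coefficient of t (sympy assumed available).
import sympy as sp

a, b, c, d, t = sp.symbols('a b c d t')

def t_coefficient(f):
    """Coefficient of t^1 in f(a + t*c, b + t*d, c, d) - f(a, b, c, d)."""
    diff = f.subs({a: a + t*c, b: b + t*d}, simultaneous=True) - f
    return sp.expand(diff).coeff(t, 1)

print(t_coefficient(a*d))        # c*d   -- so a*d alone is not invariant
print(t_coefficient(-b*c))       # -c*d  -- nor is -b*c
print(t_coefficient(a*d - b*c))  # 0     -- the contributions cancel
```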

What is the invariant ring exactly? Is it just $\mathbb{C}[c, d, e]$ (where $e$ happens to equal $ad - bc$)? And likewise with the diagonal matrices, I think we get that all terms in the polynomial must be of the form $a^i b^j c^k d^l$ with $i + j = k + l$ to be invariant, so would we deduce the invariant ring is $\mathbb{C}[ac, ad, bc, bd]$ or something like that? Or maybe just any old 3-variable polynomial ring $\mathbb{C}[x, y, z]$, since we effectively have 3 variables? Sorry for the long question, I've just started learning invariant theory and I'm still finding it a bit confusing. Thank you for your help :) 86.26.13.2 08:14, 27 April 2012 (UTC)
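One more sanity check of my own, taking the diagonal group to be the matrices $\mathrm{diag}(s, s^{-1})$ (my assumption): the four products $ac, ad, bc, bd$ are each invariant under the diagonal action, but they satisfy the relation $(ac)(bd) = (ad)(bc)$, so they are not algebraically independent, which is relevant to the "4 generators versus 3 effective variables" question.

```python
# Sketch (sympy assumed): under left multiplication by diag(s, 1/s) the entries
# transform as a -> s*a, b -> s*b, c -> c/s, d -> d/s.  Check that ac, ad, bc, bd
# are invariant and that they satisfy one algebraic relation.
import sympy as sp

a, b, c, d, s = sp.symbols('a b c d s')
diag_action = {a: s*a, b: s*b, c: c/s, d: d/s}

def is_T_invariant(f):
    """True iff f is fixed by the diagonal substitution, identically in s."""
    return sp.simplify(f.subs(diag_action, simultaneous=True) - f) == 0

generators = [a*c, a*d, b*c, b*d]
print([is_T_invariant(g) for g in generators])   # all four are invariant
print(sp.expand((a*c)*(b*d) - (a*d)*(b*c)))      # 0: the relation (ac)(bd) = (ad)(bc)
```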