
Talk:Ricci calculus




Proposal?...

Just a proposal: would it help to add the following table and explanation to Ricci calculus (Raised and lowered indices) to illustrate how sub-/super-scripts and summation fit together in a way that relates "co-/contra-variance" to invariance?

Proposed table/text

This table summarizes how the manipulation of covariant and contravariant indices fits in with invariance under a passive transformation between bases; the expression of each basis in terms of the other is reflected in the first column. The barred indices refer to the final coordinate system after the transformation.

Columns: Basis transformation; Component transformation; Invariance
Rows: Covector (covariant vector, dual vector, 1-form); Vector (contravariant vector)
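
As a sketch of the sort of entries such a table would carry (the standard transformation laws, written with L for the passive transformation and barred indices for the new system; this may differ in detail from what was actually proposed):

Covector: basis $\omega^{\bar\alpha} = L^{\bar\alpha}{}_{\beta}\,\omega^{\beta}$, components $a_{\bar\alpha} = L^{\beta}{}_{\bar\alpha}\,a_{\beta}$, invariance $\mathbf{a} = a_{\bar\alpha}\,\omega^{\bar\alpha} = a_{\beta}\,\omega^{\beta}$.
Vector: basis $\mathbf{e}_{\bar\alpha} = L^{\beta}{}_{\bar\alpha}\,\mathbf{e}_{\beta}$, components $u^{\bar\alpha} = L^{\bar\alpha}{}_{\beta}\,u^{\beta}$, invariance $\mathbf{u} = u^{\bar\alpha}\,\mathbf{e}_{\bar\alpha} = u^{\beta}\,\mathbf{e}_{\beta}$.

The two forms of L are mutually inverse, $L^{\bar\alpha}{}_{\gamma}\,L^{\gamma}{}_{\bar\beta} = \delta^{\bar\alpha}{}_{\bar\beta}$, which is where the Kronecker delta mentioned below enters.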

This is far clearer and briefer (to me at least...) than the main article Covariance and contravariance of vectors, and it fits in with the summary style of this article. It’s also another example of the manipulation of indices, including the Kronecker delta.

What do others think? Just a suggestion - the article is excellent and I don't want to touch it! As always I'm not forcing this in - take it or leave it. Thanks once again to the editors here (although maybe/maybe not F= after all...). Maschen (talk) 16:28, 15 August 2012 (UTC)[reply]

I also find that this gives a very intuitive and direct explanation for someone comfortable with symbolic algebra, and think it would be a sensible addition. I've taken the liberty of making a minor tweak to facilitate easy following (by avoiding the need for renaming dummy indices upon substitution to avoid duplication). I don't think it fits with the section Raised and lowered indices unless this is renamed, and should go in a separate (sub?)section Change of basis. I would prefer that the passive transformation not be introduced as a matrix with a non-zero determinant (especially since the determinant of a matrix and the determinant of the abstract linear transformation it represents are entirely different things and have different values). This objection is superficial and can be ignored for now: a rewording in terms of linear combinations avoiding the terminology of matrices should be straightforward. — Quondum 02:13, 16 August 2012 (UTC)[reply]
That's very reasonable, by all means feel free to make changes.
About the determinant: I'm sure L could represent a linear transformation which is represented by an invertible matrix, so how could they have a different determinant? Active and passive transformation is a good link to add though.
About sources: The only source that supports what I'm saying is Mathematical Methods for Physics and Engineering (Riley, Hobson, Bence, 2010), but this seems to be restricted to Cartesian tensors for most of the tensor chapter; no others to hand right now (will look for some soon...).
On the minus side for a balanced view: there is the concern it makes the article longer for not that much gain (according to the last archive, there were repeated additions and trims to make the article as short as possible with no inessential details, which this proposal may be...). Maschen (talk) 06:14, 16 August 2012 (UTC)[reply]
The expression of a general Lorentz transformation in the notation of this article does not seem excessive to me, but this proposal does unavoidably introduce explicit use of the abstract basis vectors, thus potentially expanding the scope. On this score I'll be interested in input from others.
On the determinant of an abstract quantity (tensor), being a linear transformation V → V, or a linear transformation V∗ → V∗ (i.e. any type (1,1) tensor), a basis-independent definition of the determinant would be the scalar change in n-volume it introduces, or equivalently the product of its eigenvalues. This cannot be defined for a type (2,0) or type (0,2) tensor (that is, without the use of a metric tensor). When a type (1,1) tensor is expressed in terms of a given basis (and its dual), this corresponds with the determinant of the associated matrix of components, and is a true invariant. A passive transformation, on the other hand, relates to distinct bases, and the matrix determinant is not invariant.
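To make that contrast concrete, a standard computation (not from the article, using the barred-index notation of the proposal above): for a type (1,1) tensor the two transformation factors are mutually inverse matrices, so the determinant of the component matrix is unchanged, while for a type (0,2) tensor such as the metric both factors are the same and the determinant picks up a square:

$A^{\bar\alpha}{}_{\bar\beta} = L^{\bar\alpha}{}_{\gamma}\,A^{\gamma}{}_{\delta}\,L^{\delta}{}_{\bar\beta} \;\Rightarrow\; \det\big(A^{\bar\alpha}{}_{\bar\beta}\big) = \det\big(A^{\gamma}{}_{\delta}\big), \qquad g_{\bar\alpha\bar\beta} = L^{\gamma}{}_{\bar\alpha}\,L^{\delta}{}_{\bar\beta}\,g_{\gamma\delta} \;\Rightarrow\; \det\big(g_{\bar\alpha\bar\beta}\big) = \big(\det L^{\gamma}{}_{\bar\alpha}\big)^{2}\det\big(g_{\gamma\delta}\big).$
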
In this proposal, the concept of determinant is quite unnecessary; non-singularity of the basis-mapping is all that is necessary, and this is guaranteed by the fact that they are both bases. All that is needed is the two bases and their components when expressed in terms of the other basis, and the rest follows. No determinants, no inverses, no non-singularity requirement, and no mention of matrices; only of components. — Quondum 13:20, 16 August 2012 (UTC)[reply]
I made the suggested modifications. Maschen (talk) 13:32, 16 August 2012 (UTC)[reply]
I've tweaked it slightly again. No other comments seem to be forthcoming yet... — Quondum 02:25, 17 August 2012 (UTC)[reply]
Added to lead of covariance and contravariance of vectors, see talk. Maschen (talk) 08:06, 3 September 2012 (UTC)[reply]

I can understand the removal from covariance and contravariance of vectors; I was very hasty at the time the image was added... However, it's been 3 months, with no objections and one in favour. I will take the liberty of adding it as planned long ago, better here than anywhere else (in a slightly extended form)... feel free to revert. Maschen (talk) 10:15, 24 November 2012 (UTC)[reply]

While the proposed table is pretty compact and not controversial, I'm not too happy with the extended form that has been inserted. I could list a few objections:
  • It introduces a pedagogical rather than explanatory perspective, and starts to deviate from the compact style of the article
  • It assumes a holonomic basis, which the Ricci calculus does not require
  • The term "normal" (orthogonal) has no meaning in the absence of a metric tensor; Ricci calculus does not need one
  • The term "inner product" similarly does not apply (it is not the same thing as a contraction, which corresponds to the action of a covector on a vector)
  • We have kept the article to scalar components in keeping with the way it is often presented; it's not a great idea to introduce symbols for abstract objects now without explanation (the closest we've come is to use the word "basis").
Perhaps you'd like to put in the originally proposed table instead; I'd certainly be happy with that. BTW, my courses used capital gamma as standard for the components of a Lorentz transformation. I think we should use whatever the dominant convention is. — Quondum 12:23, 24 November 2012 (UTC)[reply]
Ok - I anticipated this would be suggested; the simpler table replaces the extended one. I've always seen L or Λ for the transformation, several books (and in my tensor course) use L, so let’s keep L. Maschen (talk) 13:02, 24 November 2012 (UTC)[reply]

Query on revert

This revert carries an edit summary that could be construed as a personal attack. The reverted edits introduced explicit notation giving the abstract tensors rather than only their components. We have stayed away from the abstract presentation in this article thus far, but only as a matter of article style. While I do not object to the revert because of this, I do not in any way agree with the edit summary, and in particular with its implication about the editor. — Quondum 10:23, 2 January 2013 (UTC)[reply]

My main objection was that he was confusing contravariant and covariant. See the details of his first edit. JRSpriggs (talk) 10:29, 2 January 2013 (UTC)[reply]
I agree that there was a minor confusion in this respect on the description of the basis and cobasis elements, which would be a reason only to correct it. (However, now that you draw attention to it, the normally confusing terminology as applied to the vectors and tensors rather than to their components is a further good reason to confine this article to components.) With summaries, please take care to stay within WP policy. — Quondum 15:16, 2 January 2013 (UTC)[reply]

Another standard notation for the derivatives?

In the section on differentiation, for the partial derivative should we not have

$\nabla_\alpha \equiv \partial_\alpha \equiv \frac{\partial}{\partial x^\alpha}$ ?

My impression is that using the nabla is preferred by some authors, is intuitive and fits into the notation.

Similarly, for the covariant derivative, $\nabla_\alpha$ seems to be notable. I notice that Penrose (in The Road to Reality) uses the nabla for the covariant derivative. Should we deal with these notations in the article? I have too little experience of what is notable here. — Quondum 12:03, 29 June 2013 (UTC)[reply]
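
For concreteness, the conventions presumably being compared here are along these lines (the standard comma/semicolon and operator forms; using the nabla for the plain partial derivative is the "some authors" usage mentioned above):

$A^{\beta}{}_{,\alpha} \equiv \partial_{\alpha} A^{\beta} \equiv \frac{\partial A^{\beta}}{\partial x^{\alpha}}, \qquad A^{\beta}{}_{;\alpha} \equiv \nabla_{\alpha} A^{\beta} = \partial_{\alpha} A^{\beta} + \Gamma^{\beta}{}_{\alpha\gamma}\, A^{\gamma}.$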

The nabla symbol with index subscripts is definitely used (it certainly was in the bygone 3rd year SR and continuum mechanics courses), although I don't have any books using this convention to hand right now. As for covariant derivatives, Penrose may use it but not sure about this in general. Given more sources we should indicate this in the article. M∧Ŝc2ħεИτlk 12:22, 29 June 2013 (UTC)[reply]
Browsing Google books throws up so many variants of notation in both cases that I am left not knowing what is notable. On a side note, the covariant and related derivatives sit slightly uncomfortably with the notation inasmuch as they use the whole set of components of a tensor, not only the explicit component regarded as a function on the manifold. I'm not sure whether this is worth making mention of. — Quondum 15:51, 29 June 2013 (UTC)[reply]

The recently added notation seems to me to be too incomplete to be encyclopaedic. In particular, it omits crucial information from the notation that makes it pretty meaningless without explanatory text defining the family of curves that apply. It strikes me as made-up notation that an author (even MTW) might use by way of explaining something, not a notation that might see any use in other contexts. Does it really belong here? — Quondum 10:49, 10 August 2013 (UTC)[reply]

I also thought it was rather obscure in a way, and included it because it may be used in other GR literature, but let's remove it. M∧Ŝc2ħεИτlk 15:32, 10 August 2013 (UTC)[reply]

Sequential Summation?

The operation referred to here as "sequential summation" doesn't make sense -- at least not as it's currently written.

Please explain how and why it's used, and why it's considered a tensor operation.

198.228.228.176 (talk) 21:37, 6 February 2014 (UTC) Collin237[reply]

I presume its use is restricted to cases when one of the two tensors involved is either symmetric or anti-symmetric. JRSpriggs (talk) 07:10, 7 February 2014 (UTC)[reply]
The section does mention the "either symmetric or antisymmetric" use, though it does not make sense to me in the symmetric case. The exclusion of summed terms is presumably merely a labour-saving contrivance and equivalent to a constant multiplier for the expression. Mentioning this use, the equivalence and an expression giving the constant factor in place of the rather vague "This is useful to prevent over-counting in some summations" would be sensible and would enhance this section's reference value somewhat. Any volunteers from those with access to the reference? —Quondum 17:27, 7 February 2014 (UTC)[reply]
Not sure what is difficult to understand in that section, nevertheless I tried to make it clearer. Yes, it does seem to be restricted to symmetric and antisymmetric tensors. M∧Ŝc2ħεИτlk 17:04, 26 March 2014 (UTC)[reply]
The definition given is clear enough, but AFAICT the restriction should be to only tensors that are fully antisymmetric in each set of indices that are sequentially summed over; otherwise I expect that it will not be Lorentz-invariant. In this context, where we are introducing the notation as a reference, the restrictions should be given correctly. References that only use it (i.e. they do not bother to define it other than to explain what it means in the particular case) might not provide the correct criteria, because they would have preselected the tensors. Any use that is not inherently restricted to exclusively fully antisymmetric cases (or perhaps a special basis choice?) would surprise me. If I had access to the actual references that this comes from, I could probably figure out what is appropriate, but "sequential summation" on Google books seems to draw a complete blank. In effect, I'm saying that I expect the equation
$A_{|\alpha_1 \cdots \alpha_p|}\, B^{\alpha_1 \cdots \alpha_p} = k\, A_{\alpha_1 \cdots \alpha_p}\, B^{\alpha_1 \cdots \alpha_p}$
to be satisfied for some constant k in all allowable cases. —Quondum 19:39, 27 March 2014 (UTC)[reply]
On second thought, I think that Quondum is correct. Symmetric is not good enough because having two indices (of the same kind (contravariant or covariant) in the same tensor) equal cannot be represented in an invariant way. JRSpriggs (talk) 06:24, 28 March 2014 (UTC)[reply]
I don't follow your argument, probably because I'm misunderstanding your choice of words; symmetry in two indices of the same type is an invariant property, and it sounds almost as though you are saying the opposite. My argument runs along the following lines: consider the sequential summation of the product of two symmetric order-2 tensors, the metric tensor and its inverse, $g_{|\alpha\beta|}\, g^{\alpha\beta}$, in say 2 dimensions. This is the componentwise product summed, but only on one side of the diagonal, so with an orthogonal basis the result is zero. Change to a basis that is not orthogonal, so that the off-diagonal components become nonzero, and the result of the sequential summation would become nonzero, and hence not invariant. If half the sum of the products of the diagonal elements was included, it would have stayed invariant. One can go through all the symmetric/antisymmetric combinations, and only the case where both tensors are antisymmetric seems to remain invariant (it is easy to show that the sequential sum is half the full sum, by symmetry and the zero diagonal, and we know that the full sum is invariant). I assume that this generalizes to more indices as a full antisymmetry requirement.
The question is essentially: does any source use this sequential summation when the indices involved are not fully antisymmetric? I have no way of finding or accessing such sources without links. Without this, I would incline towards simply asserting the full antisymmetric requirement, but really we should prove its correctness. —Quondum 00:40, 29 March 2014 (UTC)[reply]
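A minimal numerical sketch of the two-dimensional argument above, in Python with NumPy (the basis change L is an arbitrary hypothetical choice, and the helper names seq_sum and full_sum are made up for illustration):

import numpy as np

# Hypothetical passive change of basis in 2 dimensions (any invertible L will do)
L = np.array([[2.0, 1.0],
              [0.0, 1.0]])
Linv = np.linalg.inv(L)

def full_sum(A, B):
    # full contraction: sum of A[a,b]*B[a,b] over all a, b
    return float(np.sum(A * B))

def seq_sum(A, B):
    # "sequential" sum: A[a,b]*B[a,b] over a < b only
    n = A.shape[0]
    return sum(A[a, b] * B[a, b] for a in range(n) for b in range(a + 1, n))

# Symmetric case: metric g_{ab} in an orthonormal basis, and its inverse g^{ab}
g = np.eye(2)
g_bar = L.T @ g @ L                # covariant components pick up two factors of L
g_bar_inv = np.linalg.inv(g_bar)   # contravariant components of the inverse metric

print(full_sum(g, np.linalg.inv(g)), full_sum(g_bar, g_bar_inv))  # 2.0 2.0   -> invariant
print(seq_sum(g, np.linalg.inv(g)), seq_sum(g_bar, g_bar_inv))    # 0.0 -1.0  -> not invariant

# Antisymmetric case: both factors antisymmetric
A = np.array([[0.0, 3.0], [-3.0, 0.0]])   # covariant components
B = np.array([[0.0, 5.0], [-5.0, 0.0]])   # contravariant components
A_bar = L.T @ A @ L
B_bar = Linv @ B @ Linv.T
print(seq_sum(A, B), seq_sum(A_bar, B_bar))  # 15.0 15.0 -> invariant, half the full sum

With only two dimensions the antisymmetric check is nearly trivial, but the same comparison goes through in higher dimensions.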

As far as I can tell, the earliest MTW use it is in chapter 4 (Electromagnetism and differential forms), box 4.1 (p. 91). It only seems to be used in the context of p-forms (which are ... antisymmetric tensors). The authors only say "the sum is over $i_1 < i_2 < i_3 < \cdots < i_n$". So Quondum is correct so far. I don't know of any other sources using this notation for this purpose, and it doesn't appear in Schouten's original work either (cited and linked in the article). But this summation seems to appear in a different notation, which Quondum quotes above, in another reference by T. Frankel (which I don't have and haven't seen at the library).

Clearly, this convention of "sequential summation" exists so we shouldn't really remove it from the article. For now, let's just restrict to antisymmetric tensors. M∧Ŝc2ħεИτlk 08:27, 29 March 2014 (UTC)[reply]

Agreed, we should keep it (with the correct qualifications). But my reasoning says that we should change the wording from "when one of the tensors is antisymmetric" to "when both of the tensors are antisymmetric". —Quondum 16:04, 29 March 2014 (UTC)[reply]
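For the record, if both tensors are totally antisymmetric in the p sequentially summed indices, a simple counting argument fixes the constant k discussed above: terms with a repeated index vanish, and each ordered p-tuple stands in for p! equal terms of the full sum, so (using the vertical-bar notation as above)

$A_{|\alpha_1 \cdots \alpha_p|}\, B^{\alpha_1 \cdots \alpha_p} = \frac{1}{p!}\, A_{\alpha_1 \cdots \alpha_p}\, B^{\alpha_1 \cdots \alpha_p},$

which reduces to the factor of one half noted for the two-index case.
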
Thanks for your edits. M∧Ŝc2ħεИτlk 08:26, 30 March 2014 (UTC)[reply]

Further index notations

Further notations appear to be introduced in this reference, specifically pp. 30–31. I don't understand German, but it appears to allow nesting of [], () and || on indices. My supposition is that the intention is that each of the inner nested index expressions is excluded from the higher-level symmetrization/antisymmetrization. Since this article covers a subset of exactly this type of notation, and this appears to be explicitly documented in this reference (and such exclusions make perfect sense), could someone with knowledge of German please verify my supposition so that we can include this? —Quondum 17:43, 29 March 2014 (UTC)[reply]
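
For orientation, the single-level use of vertical bars (excluding the enclosed indices from the (anti)symmetrization) works like this, e.g.

$A_{[\alpha|\beta|\gamma]} = \tfrac{1}{2}\big(A_{\alpha\beta\gamma} - A_{\gamma\beta\alpha}\big);$

the question here is what the nested or overlapping combinations on pp. 30–31 add to this.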

I never noticed that before, if we can find out the meaning it should be in the article. There is an English translation of the book by Schouten and Courant at the library (if I recall correctly), I'll check next time I go. M∧Ŝc2ħεИτlk 08:26, 30 March 2014 (UTC)[reply]
It looks fascinating. It appears to be a detailed explanation. My interpretation of nesting is evidently incorrect. The various types of brackets evidently overlap rather than nest. The explanation seems to be saying that the indices are allocated to each (anti)symmetrization in turn, skipping anything between bars ||. Thus if $A_{\alpha\beta\gamma\delta\varepsilon\zeta\eta} = B_{\alpha\gamma\zeta\varepsilon\beta\delta\eta}$, then $A_{[\alpha(\beta\gamma\delta|\varepsilon|\zeta]\eta)} = B_{[\alpha\gamma\zeta]\varepsilon(\beta\delta\eta)}$. Rather convoluted. This appears to give a simple notation for the Kulkarni–Nomizu product, for example. The English version would be helpful. —Quondum 17:54, 30 March 2014 (UTC)[reply]

Braiding on an expression

Does anyone know of conventions on the braiding of the free indices of an expression in Ricci calculus? If so, this would be a useful addition to the article. The most obvious convention that might apply would be lexicographic ordering as in Abstract index notation#Braiding, but I do not know whether this extends to this context. —Quondum 00:09, 31 March 2014 (UTC)[reply]

"in the denominator"

This edit (with edit note "I am referring to an expression where the x^{\mu} is in the denominator or x_{\mu} is in the denominator. I tried to clarify however I'm not the best at explaining. But I do think it is important enough to have.") appears to refer to a partial derivative. This is not a fraction, and has no numerator or denominator. In general the statement is also false, as the partial derivative only transforms covariantly (contravariantly) when the expression being differentiated is a scalar. This is handled under Ricci calculus#Differentiation, where I've added a mention of this special case. —Quondum 06:20, 26 August 2014 (UTC)[reply]

What about in the covariant derivative? Take the covariant derivative of a (1,0) tensor as an example:
$\nabla_\mu A^\nu = \frac{\partial A^\nu}{\partial x^\mu} + \Gamma^\nu{}_{\mu\lambda} A^\lambda$
The $\mu$ in $\partial A^\nu / \partial x^\mu$ is treated as a lower index, but in the fraction $x^\mu$ 'appears' with an upper index.
That is what I mean. — Preceding unsigned comment added by Theoretical wormhole (talk | contribs) 2014-08-26T16:13:30‎
The covariant derivative is a bit more complicated to describe properly due to the extra term; it is probably best to let readers simply understand the behaviour from the expression, which is already there.
We could draw more attention to the apparent moving of the variance of the index in a partial derivative, but keep in mind that this is of a mnemonic nature. You probably would not have noticed this "variance switch" if it were not for the suggestive nature of the partial derivative used. Perhaps we could add to the section on the partial derivative the following:
Coordinates are typically denoted by $x^\mu$, but do not in general form the components of a vector. In flat spacetime and linear coordinatization, differences in coordinates, $\Delta x^\mu$, can be treated as a contravariant vector. With the same constraints on the space and on the choice of coordinate system, the partial derivatives with respect to the coordinates yield a result that is effectively covariant. This is reflected by the lower index on the left of the notational equivalence $\partial_\mu \equiv \frac{\partial}{\partial x^\mu}$.
Would this do what you want? —Quondum 17:04, 26 August 2014 (UTC)[reply]
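
For reference, the chain-rule computation behind the "effectively covariant" behaviour, valid when the differentiated object $\phi$ is a scalar:

$\frac{\partial \phi}{\partial x^{\bar\mu}} = \frac{\partial x^{\nu}}{\partial x^{\bar\mu}}\, \frac{\partial \phi}{\partial x^{\nu}},$

i.e. the $\mu$ in $\partial/\partial x^\mu$ picks up the transformation factor on its lower position, despite the superscript appearing in the "denominator".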

Yea that sounds good to me. Would you like to add it in or should I? Theoretical wormhole (talk) 21:21, 26 August 2014 (UTC)[reply]