
Talk:Covariance and contravariance of vectors: Difference between revisions

From Wikipedia, the free encyclopedia

WikiProject Mathematics (rated B-class, High-priority)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as B-class on Wikipedia's content assessment scale.
This article has been rated as High-priority on the project's priority scale.

Notation burdened with crap.

Definition section would benefit hugely from being written with some sort of coherent standard of notation. I think we are probably allowed to assume that the reader knows what a linear transformation, basis, and coordinate vector with respect to a basis are, or certainly is capable of clicking links. There's no need to be summing over indices in both superscripts and subscripts and what the hell is this needless reference to the basis in brackets doing anyway? — Preceding unsigned comment added by 50.141.31.25 (talk) 21:47, 19 November 2015 (UTC)[reply]

1. I don't see any place where the notation is incoherent. The components of a vector depend on the choice of basis, and the notation clearly shows the basis. This dependence is appropriately emphasized throughout the section ("coherently"). In fact, I would say that the whole definition section is scrupulously coherent. A transformation of the basis is also shown clearly in the notation, and it is easy to show how this changes the vector and covector. Another way to indicate this dependence is using parentheses as opposed to square brackets (see, for example, Raymond Wells, Differential analysis on complex manifolds), with subscripts (e.g., Kobayashi and Nomizu, Foundations of differential geometry), or just with some ad hoc method (like "Let v^i be the components of v in the basis f, and v'^i the components of v in the basis f'"). It seems better here to use a notation where the dependence on the basis is explicitly emphasized, because that's what the article is really about.
2. "I think we are probably allowed to assume that the reader knows what a linear transformation, basis, and coordinate vector with respect to a basis are, or certainly is capable of clicking links." I agree with this, which is why the article doesn't define these things, except to write them down and fix the notation as it's used in the article, which is just a best practice for writing clear mathematics and not intended to be a substitute for knowledge that can be gleaned from clicking links. Also, linear transformations are not even mentioned until the very last section of the article. And the article doesn't define those either.
3. As for why the Einstein summation convention is not used in the definition section, it seems likely that some readers of the article will not have seen it before. It does no harm to include the summation sign. It's not as if we are going to run out of ink, or expressions are going to become tangled messes of indices there. Covariance and contravariance should arguably be understood before the Einstein summation convention anyway, so introducing that additional layer of abstraction doesn't seem to offer any clear benefit to readers. Sławomir Biały 22:41, 19 November 2015 (UTC)[reply]
It might be beneficial to change the notation such that vectors are bold-face lower-case letters and transformations are bold-face capital letters, while standard italics are used for scalars only. This convention is widespread in continuum mechanics and, I believe, used in most elementary undergraduate courses. 130.209.157.54 (talk) 13:22, 14 January 2019 (UTC)[reply]

confused about notation

Hi. Sorry if this is discussed above, but after a quick read of this article I'm confused by what (to me) seems like a switch in terms between the 2D example under Euclidean plane and the very next section, Three-dimensional Euclidean space. In the former the vector v has (contravariant) components 3/2, 2, so I assume it is therefore a 'contravariant vector', and it uses e_1, e_2 as its basis (correct me if I'm wrong there). But then in the 3D section the 'contravariant (dual) basis' vectors are indexed with e^1, e^2, e^3, i.e. upper index as opposed to lower index...?! Are the components and the basis actually different naming conventions, one contravariant and one covariant, for the same 'covariant vector'? If the notation *is* correct here then could someone please add some explanation so as to clear this up, as it's very confusing to the non-initiated. Thanks. S — Preceding unsigned comment added by 203.110.235.6 (talk) 23:31, 22 August 2016 (UTC)[reply]

Not sure what the best way to clarify this in the article is. If the components of a vector transform contravariantly, then the basis itself transforms covariantly to compensate. Thus the vector is invariant: the components transform contravariantly, and the basis vectors transform covariantly. To extract the (contravariant) components v^i, we must dot with the dual basis e^i. This basis transforms contravariantly, and so is often called a contravariant basis. Sławomir Biały 12:48, 23 August 2016 (UTC)[reply]

Velocity is contravariant wrt distance and covariant wrt time

Velocity = distance per time = d/t

Velocity is contravariant with respect to distance (as stated in the article). But velocity is covariant with respect to time.

Just granpa (talk) 19:27, 28 May 2017 (UTC)[reply]

acc = d/t^2
3600 meter/minute^2 = 1 meter/sec^2
Just granpa (talk) 03:24, 29 May 2017 (UTC)[reply]
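Just granpa's unit argument can be sketched numerically. The snippet below is my own illustration, not part of the thread, and the rescale helper is a hypothetical name: when a unit is made k times larger, a quantity's numeric value changes by 1/k per power of that unit in the numerator ("contravariant") and by k per power in the denominator ("covariant" in the loose sense used above).

```python
# A minimal numeric sketch of the unit argument above: make a unit k times
# larger and the numeric value scales by 1/k per numerator power of that
# unit, and by k per denominator power.

def rescale(value, length_power, time_power, k_length, k_time):
    """Numeric value after making the length unit k_length times larger
    and the time unit k_time times larger."""
    return value * k_length ** (-length_power) * k_time ** (-time_power)

# acceleration d/t^2: switch seconds -> minutes (time unit becomes 60x larger)
acc_m_per_min2 = rescale(1.0, length_power=1, time_power=-2, k_length=1, k_time=60)
print(acc_m_per_min2)  # 3600.0, i.e. 1 m/s^2 = 3600 m/min^2

# velocity d/t: the value grows with the time unit (5 m/s = 300 m/min) ...
print(rescale(5.0, 1, -1, 1, 60))    # 300.0
# ... but shrinks as the length unit grows (5 m/s = 0.005 km/s)
print(rescale(5.0, 1, -1, 1000, 1))  # ~0.005, up to float rounding
```

This reproduces the 3600 m/min² = 1 m/s² figure above and shows the same quantity behaving "contravariantly" in distance but "covariantly" in time.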

Contravariant

I agree with others that the notation is distracting. As an offering for debate I submit the following for the explanation of contravariant.

The Einstein summation convention is used; for emphasis, a single letter is used consistently as the dummy summation index.

An arbitrary point in R^n has coordinates x^i. Introduce any basis vectors e_i, with i = 1, ..., n, which generate coordinate vectors x^i e_i, such that x = x^i e_i and hence the x^i are the components of x. In matrix notation x = E X, with e_i the i-th column of the matrix E and X = (x^1, ..., x^n)^T, with x and X treated as column vectors.

Alternative basis vectors e'_i, with i = 1, ..., n, generate coordinate vectors x'^i e'_i such that e'_i = A^j_i e_j, x = x'^i e'_i, x = E' X'.

Write E' = E A; then x = E X = E' X' = E A X', with A^j_i the (j, i) element of the matrix A.

From E X = E A X', X' is given by X' = A^(-1) X and hence the components follow a contravariant transformation.

This is simplistic, requires only an elementary understanding of a vector, but still leaves the elementary manipulations to the reader. It ties in with the idea of a matrix as a transformation and leads into the more difficult concepts of a dual space, covariant transformations and tensors. The notation is consistent with elementary tensor notation. — Preceding unsigned comment added by PatrickFoucault (talkcontribs) 09:52, 14 September 2017 (UTC)[reply]
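PatrickFoucault's demonstration lends itself to a quick numeric check. The sketch below is my own (the names E, A, etc. are illustrative, not the article's notation): with basis vectors as the columns of a matrix, a change of basis to E·A forces the component column to transform with the inverse of A, which is exactly the contravariant rule.

```python
# Numeric check of the contravariant rule: the new basis vectors are the
# columns of Ep = E * A, so the components of a fixed vector must transform
# with the inverse of A for the vector itself to stay the same.  Pure
# Python, 2x2 for brevity.

def matmul(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

E  = [[1.0, 1.0], [0.0, 1.0]]   # old basis vectors as columns
A  = [[2.0, 0.0], [0.0, 4.0]]   # change-of-basis matrix (stretch each vector)
Ep = matmul(E, A)               # new basis vectors as columns

x  = [[3.0], [2.0]]             # components in the old basis
xp = matmul(inv2(A), x)         # contravariant transformation of components

assert xp == [[1.5], [0.5]]            # components shrink as the basis grows
assert matmul(E, x) == matmul(Ep, xp)  # the vector itself is unchanged
```

The second assertion is the whole point: the same geometric vector is recovered from either basis because the components transform oppositely to the basis vectors.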

Organisation

I advocate the article be organised along the following lines.

  • A very brief heuristic introduction about components transforming contra or co to the basis
  • Brief history and etymology sections
  • The above demonstration that a position vector always transforms contra. This also introduces the agreed notation.
  • A similar demonstration regarding a vector that we would like to transform co
  • Statement and link regarding the dual space and linear functionals
  • Demonstration of a co transformation using a specific linear functional which should be the dot product.
  • Discussion and links regarding tangents and normals and their use as bases.
  • Mention of how this develops into tensors with links.

The demonstrations should be clear about the exact transformations. The current article is good, but it seems from the talk that further clarity is required. I think some of the introduction could be trimmed in favour of detailed worked examples of contra- and covariant transformations. The current description of a covariant transformation could be clearer, and would benefit from being couched in terms of a specific functional such as the dot product, possibly along the lines of the above contravariant demonstration. This would allow the concepts of row space and column space to emerge naturally, and hence links to those articles. The inherent difference between contravariant and covariant is then exposed without relying on heuristic explanations using tangents and normals. Foucault (talk) 03:52, 15 September 2017 (UTC)[reply]

Your addition was entirely wp:unsourced wp:original research. It does not comply with MOS:LEAD either. I have reverted it, for obvious reasons. - DVdm (talk) 08:34, 16 September 2017 (UTC)[reply]
@Foucault: I have not read your attempt in detail, but you cannot ignore the map that sends a contravariant vector to a covariant vector and vice versa. In Lorentz transformation#Covariant vectors you can find a similar overview (equally unsourced, but sourcable) in the special case that the map in question is the Lorentz metric. YohanN7 (talk) 10:37, 16 September 2017 (UTC)[reply]

It is not my intention to ignore that map, nor to write a definitive study. From the talk above, it seems some people feel the article could be improved. My attempt was to write an introduction that would draw readers into reading further and also spark editors into making further constructive edits. If my attempt sparks activity, then good. The alternative is that the article remains moribund. Foucault (talk) 02:48, 18 September 2017 (UTC)[reply]

I don't share your rigid verdict of this article being moribund, but of course it is improvable. My two cents on possible improvement: with all the usual considerations of "invariance under co- and contravariant transformations" between two frames/vectors, there is a third frame, most often camouflaged in the backdrop. I think, e.g., all pictures of such frames (Euclidean or curvilinear) necessarily rely on this "observer's frame", and active and passive transformations might thus be judged from another, possibly alien space. Purgy (talk) 06:13, 18 September 2017 (UTC)[reply]

OK. Not moribund. It has a B rating, is high priority, and is in the top 500 articles viewed. So it seems the subject is worthwhile and a lot of good work has already gone into the article, but contributors to this talk page are still debating what the words contravariant and covariant mean. Foucault (talk) 10:04, 18 September 2017 (UTC)[reply]

Math under Three-dimensional Euclidean Space

Can someone look into this?

While reading the following part,

Then the covariant coordinates of any vector v can be obtained by the dot product of v with the contravariant basis vectors:
Likewise, the contravariant components of v can be obtained from the dot product of v with the covariant basis vectors, viz.:

I thought the math should have been this instead.

Then the covariant coordinates of any vector v can be obtained by the dot product of v with the contravariant basis vectors:
Likewise, the contravariant components of v can be obtained from the dot product of v with the covariant basis vectors, viz.:

But then, the following makes even less sense. It would couple the covariant (contravariant) coordinates to one another through the covariant (contravariant) bases.

and we can convert from contravariant to covariant basis with 
and 

D4nn0v (talk) 08:38, 29 November 2017 (UTC)[reply]
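For what it's worth, the relations under discussion can be checked numerically. The sketch below is my own (2D for brevity; the names e1, E1, etc. are illustrative): the dual (contravariant) basis satisfies e^i · e_j = δ^i_j, dotting v with the dual basis yields the contravariant components, and dotting with the original basis yields the covariant components.

```python
# 2D sketch: build the dual basis for a non-orthogonal basis and verify that
# dot products with it recover the contravariant components of a vector.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

e1, e2 = (2.0, 0.0), (1.0, 1.0)        # covariant (coordinate) basis
det = e1[0] * e2[1] - e1[1] * e2[0]
E1 = (e2[1] / det, -e2[0] / det)       # dual basis: E_i . e_j = delta_ij
E2 = (-e1[1] / det, e1[0] / det)
assert dot(E1, e1) == 1.0 and dot(E1, e2) == 0.0

v = (4.0, 3.0)                          # a vector in Cartesian coordinates
v_contra = (dot(v, E1), dot(v, E2))     # contravariant components v^i
v_cov    = (dot(v, e1), dot(v, e2))     # covariant components v_i

# v is reconstructed from its contravariant components and the covariant basis
rec = tuple(v_contra[0] * a + v_contra[1] * b for a, b in zip(e1, e2))
assert rec == v
```

Note that only the mixed pairing reconstructs v: contravariant components go with the covariant basis (and vice versa), which is the coupling D4nn0v was pointing at.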


I tried to fix the problem. Let me know, if you see anything I missed or disagree with what I did. JRSpriggs (talk) 02:18, 30 November 2017 (UTC)[reply]


Looks good! D4nn0v (talk) 02:16, 4 December 2017 (UTC)[reply]

Confusion between vector components, vector coordinate and the reference axes.

In the second paragraph under Introduction there are too many undefined terms, which is very confusing. While the components of the vector are <v1,v2,v3> and the reference axis is given to mean the coordinate basis vector, it's not clear what the word "coordinate" refers to. From the example involving the change of scale of the reference axes, it's quite easy to understand how the components of the vector vary inversely with a change of scale of the basis vectors. But the second paragraph under Introduction states that components of contravariant vectors vary inversely to the basis vectors (which is clearly understood) but "transform as the coordinates do". What do coordinates mean here? The coordinates of the vector, or the coordinate axes? The word coordinate has been used multiple times without an explanation of what it means, while it is not at all required, as the concept is clear from stating how the components change with the change of basis (i.e. the reference axes).

granzer92 Granzer92 (talk) 14:09, 10 June 2018 (UTC)[reply]

I removed some misinformation from the lead and the introduction. I hope that helps.
The coordinate system on a manifold generally determines the choice of basis vectors for the tangent space at each point on the manifold. That is why it is relevant. JRSpriggs (talk) 02:33, 11 June 2018 (UTC)[reply]

Standard definition apparent inconsistency

Could someone please explain the fallacy in the following?

Suppose that in R^3 we change from a basis (e_1, e_2, e_3) in units of meters to a basis (e'_1, e'_2, e'_3) in units of centimeters. That is, e'_i = (1/100) e_i.

Then the coordinate transformation is

x'^i = 100 x^i    (1)

and the coordinate transformation in matrix form is

x' = A x    (2)

A position vector

(3)

with magnitude is transformed to

(4)

with magnitude

Suppose that a scalar function f is defined in a region containing r. Then,

(5)

the gradient vector is

(6)

and the gradient vector at r,

(7)

with magnitude , is transformed to

(8)

with magnitude

While the position vector r and the gradient vector are invariant, the components of the position vector in equations (3) and (4) transform inversely as the basis vectors, and the components of the gradient vector in equations (7) and (8) transform directly as the basis vectors.

Now, most textbooks agree[1] that a vector u is a contravariant vector if its components u^i and u'^i relative to the respective coordinate systems x^i and x'^i transform according to

u'^i = (∂x'^i/∂x^j) u^j    (9)

and that a vector w is a covariant vector if its components w_i and w'_i relative to the respective coordinate systems x^i and x'^i transform according to

w'_i = (∂x^j/∂x'^i) w_j    (10)

but equations (4) and (8) seem to indicate the exact opposite of equations (9) and (10). Why? Anita5192 (talk) 17:44, 24 November 2018 (UTC)[reply]
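As a sanity check on the setup in this thread, the meters-to-centimeters example can be worked numerically. This is my own sketch (the function f and the numbers are illustrative): position components pick up the factor 100, inversely to the basis vectors, while gradient components pick up 1/100, directly with the basis vectors.

```python
# Meters -> centimeters: coordinates scale by k = 100 while each basis
# vector shrinks by 1/100.  Position components go with the coordinates
# (contravariant); gradient components go with the basis (covariant).

k = 100.0                      # 1 m = 100 cm

# position: 3 m along x becomes 300 cm (components multiply by k)
x_m = 3.0
x_cm = k * x_m
assert x_cm == 300.0

# gradient of the (illustrative) scalar f(x) = x**2 at x = 3 m:
grad_m = 2 * x_m               # df/dx = 6 per meter
# expressed per centimeter, the same physical slope is 100x smaller
grad_cm = grad_m / k
assert grad_cm == 0.06

# the change of f over a fixed physical displacement is invariant:
# (6 / m) * 1 m  ==  (0.06 / cm) * 100 cm
assert grad_m * 1.0 == grad_cm * k
```

The last assertion is the invariance that makes the opposite transformation behaviors of equations (9) and (10) consistent with each other.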

There is an error in your comment right at the beginning. So I stopped reading it at that point. If x^i is the position in meters and x'^i is the position in centimeters, then the conversion is just x'^i = 100 x^i. OK? JRSpriggs (talk) 22:46, 24 November 2018 (UTC)[reply]
The e_i, with i = 1, 2, 3, are the old basis vectors; the e'_i, with i = 1, 2, 3, are the new basis vectors. Anita5192 (talk) 03:17, 25 November 2018 (UTC)[reply]
Maybe this helps:
Let's have two ordered bases of a 3-D vector space over consisting of three orthogonal vectors each
and with
and for (To be honest, I am unsure how to formally conflate a multiplicative group of units with the norm of the vector space)
Further, let (for simplicity, and satisfying the above)
again for
This yields the matrix from above. Now let's have a vector, and its decompositions wrt the two bases (adopting Gaussian summation convention)
For the last equalities to hold, the scalar coefficients must satisfy
for
This yields the relation for coefficients, given by JRSpriggs above. Maybe it's all about the vectors, formed from the coefficients suitably arranged in tuples (and freely identified with abstract vectors) that vary contravariantly? (I'm struggling myself.)
OK? Purgy (talk) 13:14, 25 November 2018 (UTC)[reply]
This makes perfect sense to me, and I believe this is what I wrote in equations (1,2,3,4), but I do not see how equations (9,10) agree with this. For example, equation (1) seems to imply that for the basis vectors, and ,
(11)
(the other partial derivatives are zero), but then equation (9) for the components and of and would be
(12)
in contradiction to equation (4).
I have faith that equations (9,10) in textbooks are somehow correct, but because they look incorrect to me, I evidently have a misconception somewhere. I am trying to determine where and what that misconception is. —Anita5192 (talk) 20:27, 25 November 2018 (UTC)[reply]
I do not know what you mean by equation (2) because you do not explain how you are using A. Equation (3) is obvious bull-shit. It should be
Again I stopped reading at that point until you clean up your act. You must explain what all the variables in your equations mean or you will never get your head straight. JRSpriggs (talk) 01:58, 26 November 2018 (UTC)[reply]
I regret having mistaken the question. I thought to perceive a mix up of scalar coefficients and vectors, wherefore I dismissed the s, seemingly used for both kinds, and introduced the s for vectors and the s for scalars, and wrote that boring triviality. Maybe, I tripped over the possibility of having abstract indices to denote vectors, but again, I have no formal conception for these. Purgy (talk) 07:08, 26 November 2018 (UTC)[reply]
I think I see where some of my misconceptions were. Correct me if I am wrong. (By chance, I discovered something in one of my books that clued me in, although it was not obvious.) Most textbooks that I have seen for linear algebra and tensors do not make clear the distinction between basis vectors, coordinate vectors, coordinates, and components. I tend to agree with Granzer92 (talk) in the previous section, that the Introduction did not make this clear either, and still does not. I have edited my original post above. Now equations (4) and (8) seem to agree with equations (9) and (10). Please let me know if I have this correct now.–Anita5192 (talk) 22:56, 27 November 2018 (UTC)[reply]
I agree with the reservations raised by Granzer92, and admit to not being à jour with JRSpriggs's improvements. To my impression, much of this potential for confusion is caused by the widespread, convenient notation of a basis transformation, involving, say, "v-vectors" with basis vectors as coordinates, and matrices of scalars as linear maps, without (sufficiently) emphasizing that the "v-vectors" live in "another" space than the basis vectors, and that these spaces just share their scalars. The didactic migration from coordinate-bound to coordinate-free has not fully matured yet, imho. Purgy (talk) 08:49, 28 November 2018 (UTC)[reply]

References

  1. ^ Kay, David C. (1988), Schaum's Outline of Theory and Problems of Tensor Calculus, New York: McGraw-Hill, pp. 26–27, ISBN 0-07-033484-6

Co-/contra- variance of 'vectors' (per se) or of their components

It is my understanding that a vector, per se (i.e. not its representation), is neither covariant nor contravariant. Instead, covariance or contravariance refers to a given representation of the vector, depending on the basis being used. For example, one can write the same vector in either manner, as v = v^i e_i = v_i e^i. I think this point (in addition to the variance topic as a whole) can be both subtle and confusing to people first learning these topics, and thus the terminology should be used very carefully, consistently, and rigorously. I tried to change some of the terminology in the article to say "vectors with covariant components" instead of "covariant vectors" (for example), but this has been reverted as inaccurate. So I wanted to open a discussion in case I am mistaken or others disagree. @JRSpriggs. Zhermes (talk) 16:45, 2 January 2020 (UTC)[reply]

I have several textbooks that describe covariance and contravariance of tensors. Most of them refer to the tensors themselves as having these properties; however, some of them indicate that these properties apply only to the components and that the tensors are invariant. I am inclined to agree with the latter, that is, that only the components change—not the tensors. I think we should point out in the article two things: 1. what we think is the correct parlance, and 2. the fact that some textbooks say otherwise. I can supply citations.—Anita5192 (talk) 17:54, 2 January 2020 (UTC)[reply]
The key point to understand is that the distinction between "covariant" and "contravariant" only makes sense for vector fields in the context of a preferred tangent bundle over an underlying manifold.
Otherwise, all the structures could just be called simply "vectors" and the choice of one basis or another would be completely arbitrary.
So, restricting ourselves to structures built from the tangent bundle, the vectors in the tangent bundle itself are called "contravariant", and the vectors in the dual or cotangent bundle are called "covariant". Tensor products of tangent and cotangent bundles may have both covariant and contravariant properties (depending on the index considered). JRSpriggs (talk) 02:13, 3 January 2020 (UTC)[reply]
I'm afraid that in a strict sense there are two distinct meanings in play here, and unless we distinguish them, confusion will ensue. One is a description of the behaviour of components, as mentioned above; the meaning used by JRSpriggs essentially means "is an element of the (co)tangent bundle". The former makes sense in the context of a vector space and its dual space, independently of whether these are associated with a manifold. It is unfortunate that texts often conflate a tensor with its components with respect to a basis. I expect that the interpretation of a "contravariant vector" as "an element of the tangent bundle" arises from exactly this conflation, and should be avoided (there is a better term: a "vector field") in favour of restricting its use to describing the behaviour of components with respect to changes of basis. If in doubt about the confusion implicit in the mixed use of the terms, consider this conundrum: the set of components of a vector is contravariant (they transform as the inverse of the transformation of the basis), whereas the basis (a tuple of vectors) is covariant (it transforms as the transformation of the basis, by definition, in the first sense), yet we would describe the elements of the basis as being contravariant (in the sense of belonging to the tangent space), making them simultaneously "covariant" and "contravariant". Let's not build such confusion into the article; let's restrict its meaning to (the no doubt original) first sense. —Quondum 18:06, 31 March 2020 (UTC)[reply]

Introduction for the layman

I would add a paragraph to the introduction, which is easier to understand. Maybe something like this:

contravariant vectors

Let's say we have three basis vectors a, b and c. Each one has length 1 meter. d = (3,2,4) is a vector (≙ (3m,2m,4m)). If we now double the length of every basis vector, so that |a| = |b| = |c| = 2m, then d must be (1.5, 1, 2) in the new a, b, c basis (but d would still be (3m,2m,4m)).

covariant vectors

Let's say f is a scalar function and the basis of the coordinate system is a, b and c. And suppose |a| = |b| = |c| = 1 meter. Then suppose that ∇f = (2,5,9), so that the slope in the x-direction is 2/meter (2 per meter). If we now double the length of each basis vector, so that |a| = |b| = |c| = 2m, ∇f becomes (4,10,18). The slope in the x-direction is the same: 4/2m = 2/m.

CaffeineWitcher (talk) 12:59, 23 May 2020 (UTC)[reply]
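CaffeineWitcher's two numeric examples can be verified with a few lines. This is my own sketch of the arithmetic above: doubling every basis vector halves the components of d but doubles the components of ∇f, so both geometric objects are unchanged.

```python
# Doubling the basis: contravariant components divide by the scale factor,
# covariant (gradient) components multiply by it.

scale = 2.0                                    # each basis vector doubles

d_old = (3.0, 2.0, 4.0)                        # d in the 1 m basis
d_new = tuple(c / scale for c in d_old)        # contravariant: divide
assert d_new == (1.5, 1.0, 2.0)

grad_old = (2.0, 5.0, 9.0)                     # gradient of f in the 1 m basis
grad_new = tuple(c * scale for c in grad_old)  # covariant: multiply
assert grad_new == (4.0, 10.0, 18.0)

# the physical slope in the x-direction is unchanged: 4 per 2 m == 2 per m
assert grad_new[0] / scale == grad_old[0]
```

The two tuples transform with reciprocal factors, which is the whole contrast the proposed paragraph is meant to convey.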

Covariance of gradient

I am a bit confused : this article takes the gradient to be a prime example of a "covariant vector", but the Gradient article claims that it is a contravariant vector. Which is correct? (Sorry if this is the wrong place to ask) --93.25.93.82 (talk) 19:56, 7 July 2020 (UTC)[reply]