Talk:Quaternion/Archive 2


Comparison between Quaternions and spatial rotation and Rotation operator (vector space)

The orthogonal matrix corresponding to a rotation by the unit quaternion z = a + bi + cj + dk (with |z| = 1) is said in Quaternions and spatial rotation to be


\begin{pmatrix}
A_{11}         &A_{12}         &A_{13}        \\
A_{21}         &A_{22}         &A_{23}        \\
A_{31}         &A_{32}         &A_{33}
\end{pmatrix}=
\begin{pmatrix}
a^2+b^2-c^2-d^2&2bc-2ad        &2ac+2bd        \\
2ad+2bc        &a^2-b^2+c^2-d^2&2cd-2ab        \\
2bd-2ac        &2ab+2cd        &a^2-b^2-c^2+d^2
\end{pmatrix}
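The matrix above drops straight into code; a minimal Python sketch (the function name is mine, not from either article):

```python
import math

def quat_to_matrix(a, b, c, d):
    # Rotation matrix for the unit quaternion a + bi + cj + dk,
    # laid out exactly as in the formula above.
    return [
        [a*a + b*b - c*c - d*d, 2*b*c - 2*a*d,         2*a*c + 2*b*d],
        [2*a*d + 2*b*c,         a*a - b*b + c*c - d*d, 2*c*d - 2*a*b],
        [2*b*d - 2*a*c,         2*a*b + 2*c*d,         a*a - b*b - c*c + d*d],
    ]

# 90-degree rotation about the z axis: a = cos(45 deg), d = sin(45 deg).
s = math.sqrt(0.5)
M = quat_to_matrix(s, 0.0, 0.0, s)
# The first column of M is (0, 1, 0): the x axis is carried to the y axis.
```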

The following sentence is somewhat unclear:

"There are two conventions for rotation matrices: one assumes row vectors on the left; the other assumes column vectors on the right; the two conventions generate matrices that are the transpose of each other. The above matrix assumes row vectors on the left. In general, a matrix for vertex transpose is ambiguous unless the vector convention is also mentioned."

Assuming that this means that the usual convention applies, i.e. that the rotation is

 (1,0,0) \longrightarrow (A_{11},A_{21},A_{31})
 (0,1,0) \longrightarrow (A_{12},A_{22},A_{32})
 (0,0,1) \longrightarrow (A_{13},A_{23},A_{33})

and applying the relations of Rotation operator (vector space), this rotation matrix corresponds to the quaternion

q_1=\frac{A_{32}-A_{23}}{2} = 2ab
q_2=\frac{A_{13}-A_{31}}{2} = 2ac
q_3=\frac{A_{21}-A_{12}}{2} = 2ad
q_4=\frac{A_{11}+A_{22}+A_{33}-1}{2}= 2a^2-1
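The extraction translates directly into code (a Python sketch with 0-based indices; the names are mine):

```python
import math

def quat_from_matrix(A):
    # Full-angle quaternion (q1, q2, q3, q4) read off a rotation matrix A
    # via the four formulas above (indices shifted to 0-based).
    q1 = (A[2][1] - A[1][2]) / 2
    q2 = (A[0][2] - A[2][0]) / 2
    q3 = (A[1][0] - A[0][1]) / 2
    q4 = (A[0][0] + A[1][1] + A[2][2] - 1) / 2
    return q1, q2, q3, q4

# Rotation by 60 degrees about the z axis:
t = math.pi / 3
A = [[math.cos(t), -math.sin(t), 0.0],
     [math.sin(t),  math.cos(t), 0.0],
     [0.0,          0.0,         1.0]]
q1, q2, q3, q4 = quat_from_matrix(A)
# Expect q1 = q2 = 0, q3 = sin(60 deg), q4 = cos(60 deg).
```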

In Rotation operator (vector space) it is further shown that for the quaternion defined as above one has that

q_1=\sin \theta \ E_1
q_2=\sin \theta \ E_2
q_3=\sin \theta \ E_3
q_4=\cos \theta

where \theta is the rotation angle (0 \le \theta \le \pi) and \hat E is the unit vector of the rotation axis.

For the quaternion as defined by Quaternions and spatial rotation one consequently has that

a=\cos \frac{\theta}{2}
b=\sin \frac{\theta}{2} \ E_1
c=\sin \frac{\theta}{2} \ E_2
d=\sin \frac{\theta}{2} \ E_3
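In code, the half-angle conversion is one line per component (a Python sketch; the function name is mine):

```python
import math

def axis_angle_to_quat(theta, E):
    # Half-angle convention: a = cos(theta/2), (b, c, d) = sin(theta/2) * E
    # for a rotation by theta about the unit axis E = (E1, E2, E3).
    h = theta / 2
    return (math.cos(h),
            math.sin(h) * E[0],
            math.sin(h) * E[1],
            math.sin(h) * E[2])

a, b, c, d = axis_angle_to_quat(math.pi / 2, (0.0, 0.0, 1.0))
# The result is automatically a unit quaternion: a^2 + b^2 + c^2 + d^2 = 1.
```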

Why use half the rotation angle?

0 \le \frac{\theta}{2} \le \frac{\pi}{2}

At least the autonomous starmappers I have known give quaternions according to the Rotation operator (vector space) convention!

Stamcose (talk) 14:11, 13 August 2008 (UTC)

To start with, I think the paragraph about rows versus columns is meant to mean something like this: "There are two conventions for rotation matrices. One assumes that the input vector is a row vector and that the output vector is produced by multiplying the input vector on the right by the rotation matrix. The other assumes the opposite. This article assumes the former." That would be the reverse of what I would do and what I think you did above, but it seems to be very common in computer graphics.
Second, we ought to be able to compute the correct form of a rotation matrix from a quaternion, right? Start with q = a + bi + cj + dk and a vector v = xi + yj + zk. Then qv, by the formula for quaternion multiplication, is (-bx-cy-dz) + (ax + cz - dy)i + (ay - bz + dx)j + (az + by - cx)k. q−1 is a - bi - cj - dk since q is a unit quaternion, so qvq−1 has the unappealing formula
(-abx-acy-adz-abx-bcz+bdy-acy+bcz-cdx-adz-bdy+bdx) + (bbx+bcy+bdz+aax+acz-ady-ady+bdz-ddx+acz+bcy-ccx)i + (bcx+ccy+cdz+adx+cdz-ddy+aay-abz+adx-abz-bby+bcx)j + (bdx+cdy+ddz-acx-ccz+cdy+aby-bbz+bdx+aaz+aby-acx)k
from which we can see the corresponding matrix:
\begin{pmatrix}
a^2+b^2-c^2-d^2 & 2bc-2ad & 2bd+2ac \\
2bc+2ad & a^2-b^2+c^2-d^2 & 2cd-2ab \\
2bd-2ac & 2cd+2ab & a^2-b^2-c^2+d^2
\end{pmatrix}
(Wow, I got that right on the first try!) This agrees with the rotation matrix you quoted above, but I used the convention that vectors are column vectors and multiplication by a rotation matrix is on the left. Hmm, something's messed up: Either I'm wrong, or the article is wrong, because one of our matrices should be the transpose of the other (since we used opposite conventions).
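Ozob's hand expansion can also be checked numerically; a Python sketch (conventions and helper names are mine):

```python
import math

def qmul(p, q):
    # Hamilton product; quaternions stored as (scalar, i, j, k).
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def rotate(q, v):
    # q v q^-1 for a unit quaternion q and a vector v = (x, y, z).
    a, b, c, d = q
    conj = (a, -b, -c, -d)
    return qmul(qmul(q, (0.0,) + tuple(v)), conj)[1:]

# 90 degrees about z: the vector i should be carried to j.
s = math.sqrt(0.5)
x, y, z = rotate((s, 0.0, 0.0, s), (1.0, 0.0, 0.0))
```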
As far as the problem with the domain of θ: It seems that the quaternions and spatial rotation article always works with an angle it calls α/2. Maybe this explains the problem? Ozob (talk) 22:01, 13 August 2008 (UTC)


I hadn't actually looked at Rotation operator (vector space) before. The quaternion representation defined there doesn't seem workable at all, since (0,0,0,−1) is ambiguous (rotations by π around different axes are distinct). I have no experience with autonomous starmappers, to the point of not really knowing what they are, but are you sure they use that representation? It's hard for me to believe. The reason for the half angle is explained to some extent in Quaternions and spatial rotation#Visualizing the space of rotations. -- BenRG (talk) 01:35, 14 August 2008 (UTC)


BenRG

If you read Rotation operator (vector space) more thoroughly you will see that I stress that a 180 deg rotation cannot be represented by a quaternion because:

From the text:

"For the rotation operator corresponding to a rotation of \pi around some axis the quaternion is

(0,0,0,−1)

whatever the axis is."


and later in the text


"This means that for all rotation operators corresponding to a rotation less then \pi there is one and only one quaternion that represents the rotation. But any rotation operator corresponding to a rotation of \pi corresponds to the quaternion (0,0,0,−1), i.e it cannot be found back from its quaternion."

I do not want to be rude but I really think that Quaternions and spatial rotation should be re-worked (or deleted with pointer to Rotation operator (vector space)).

Please note:

Any rotation is between 0 and 180 deg. With the strange convention of Quaternions and spatial rotation it follows that the quaternion component "a" is always positive! It simply does not make sense!


Stamcose (talk) 09:22, 14 August 2008 (UTC)


PS:

This is the "gimbal lock" for quaternions. The "global non-singular" representation of a rotation is a 3x3 orthogonal matrix! But for practical applications the rotation angle is never precisely 180 deg!

Stamcose (talk) 11:39, 14 August 2008 (UTC)


Your understanding of quaternion rotation is simply mistaken. The rotation quaternions actually used in computer gaming are as described in the quaternions and spatial rotation article. So, to the best of my knowledge, are the rotation quaternions used everywhere else. There are good mathematical reasons for the representation; it wasn't chosen cavalierly.
  • There is no gimbal lock with quaternions—that's why people use them in the first place. Furthermore, the problem you describe is not gimbal lock. Euler angles can represent every rotation, but some of those representations are degenerate, i.e. many triples of Euler angles map to the same physical rotation. The problem you're describing is the opposite of that: many physical rotations map to the same quaternion, meaning that those rotations can't be represented by a quaternion at all in your scheme.
  • The standard quaternion representation of rotations is metrically uniform: quaternions a given distance apart on the unit hypersphere always represent rotations that differ by a given amount, regardless of where they are on the sphere. Your representation lacks that property: rotations with angles near 180° are packed more and more densely into the region near the (0,0,0,−1) singularity.
  • With the standard representation, rotation quaternions act on vectors by conjugation, and a composition of rotations is just a quaternion multiplication. I don't see a simple way to express a composition of rotations or the action of rotations on vectors in your representation.
I won't exclude the possibility that this representation is used somewhere, but I'd like to see a reference. Here are some references for the usual representation, turned up with a quick Google book search: [1] [2] [3] [4]. -- BenRG (talk) 13:03, 14 August 2008 (UTC)
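The composition point is easy to see numerically; a small Python sketch under the standard half-angle convention (helper names are mine):

```python
import math

def qmul(p, q):
    # Hamilton product; quaternions stored as (scalar, i, j, k).
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def half_angle_quat(theta, E):
    # Unit quaternion for a rotation by theta about the unit axis E.
    h = theta / 2
    return (math.cos(h), math.sin(h)*E[0], math.sin(h)*E[1], math.sin(h)*E[2])

# Two successive 60-degree rotations about z equal one 120-degree rotation,
# and the quaternion product reflects that directly:
q60 = half_angle_quat(math.pi / 3, (0.0, 0.0, 1.0))
q120 = half_angle_quat(2 * math.pi / 3, (0.0, 0.0, 1.0))
composed = qmul(q60, q60)
```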
I agree with BenRG. Here is a counterexample to your claim that it is impossible to represent a 180 degree rotation. We all agree that a rotation of angle θ around the z axis has matrix
\begin{pmatrix} \cos\theta &-\sin\theta&0\\\sin\theta&\cos\theta&0\\0&0&1\end{pmatrix}.
So for θ = π, this is
\begin{pmatrix} -1&0&0\\0&-1&0\\0&0&1\end{pmatrix}.
So, using the formulas you quoted at the beginning, we ought to have
q_1 = 0,
q_2 = 0,
q_3 = 0,
q_4 = -1.
Let's try this out. Start with the column vector \begin{pmatrix}x&y&z\end{pmatrix}^T. Write it as xi + yj + zk. We conjugate it by the quaternion −k:
(-k)(xi + yj + zk)(k) = -xkik - ykjk - zkkk = -xjk + yik + zk = -xi - yj + zk.
This is clearly the same as what we would get by multiplying on the left by the rotation matrix. Therefore the quaternion −k represents a rotation of π around the z axis. Ozob (talk) 17:18, 14 August 2008 (UTC)
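The same conjugation can be verified numerically; a Python sketch (the multiplication helper is mine):

```python
def qmul(p, q):
    # Hamilton product; quaternions stored as (scalar, i, j, k).
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

v = (0.0, 2.0, 3.0, 5.0)    # the pure quaternion 2i + 3j + 5k
mk = (0.0, 0.0, 0.0, -1.0)  # the quaternion -k
k = (0.0, 0.0, 0.0, 1.0)
out = qmul(qmul(mk, v), k)
# x and y are negated, z is fixed: a 180-degree rotation about the z axis.
```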

Hello

Just as a reference, see

http://www.spacecenter.dk/research/aerospace-instrumentation-1/advanced-stellar-compass


Here you can read:

"These coordinates are then transformed to a user defined spacecraft coordinate system, and are output in the form of quaternions."

I have personally written operational software for spacecraft PROBA using these quaternions as input. PROBA has been operated for several years doing successful operations with my interpretation of the quaternions! Sure, this is just an interface! The quaternions are immediately transformed to an orthogonal matrix that is then used for Flight Dynamics computations! Multiplication of quaternions is not done!

Stamcose (talk) 19:07, 14 August 2008 (UTC)


Well, that page doesn't say how the orientation is encoded in the quaternion, and this document specifically says that the encoding used by the ASC is the usual half-angle one (page 4). That's a claim I find easy to believe, even though it seems to be inconsistent with your experience. I'd still like to see a reference that mentions the full-angle encoding. -- BenRG (talk) 23:21, 14 August 2008 (UTC)

With your definition the quaternion component "a" would be \cos \left (\frac{\theta}{2}\right ) and always positive, i.e. there would be a mapping between the set of all rotations and a half-sphere of a^2+b^2+c^2+d^2. But

  d c b a = q_1\ ,\ q_2\ ,\ q_3\ ,\ q_4 =-0.40541446  0.41820359  0.79435016 -0.17248970

are genuine data from this star mapper (PROBA). You can see that a=q_4 is negative!

Stamcose (talk) 09:53, 15 August 2008 (UTC)

Sorry!

  b c d a = q_1\  \ q_2\  \ q_3\  \ q_4 =-0.40541446  0.41820359  0.79435016 -0.17248970

Stamcose (talk) 09:57, 15 August 2008 (UTC)

Sorry again!

A half-sphere of a^2+b^2+c^2+d^2=1.

Stamcose (talk) 11:14, 15 August 2008 (UTC)


Each rotation can be represented by two different unit quaternions which are negatives of each other. This one would represent a rotation by 2 cos−1 −0.17248970 ≈ 200° around the (−0.40541446, 0.41820359, 0.79435016) axis, while its negative (0.40541446, −0.41820359, −0.79435016, 0.17248970) would represent a rotation by 2 cos−1 0.17248970 ≈ 160° around the (0.40541446, −0.41820359, −0.79435016) axis, which is the same rotation. Internal calculations in the ASC might produce either form of the final orientation and there's probably no need to choose a canonical form for output. So this is all consistent with the ASC using the usual quaternion representation. I understand that you've been using this other representation without problems, but where did you first hear about it? There must have been an operating manual or a textbook or something. I still want a reference, and the Wikipedia article needs one in any case. -- BenRG (talk) 02:03, 16 August 2008 (UTC)
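BenRG's arithmetic on the posted sample can be reproduced directly (a Python sketch, reading the components in the half-angle convention):

```python
import math

q = (-0.40541446, 0.41820359, 0.79435016, -0.17248970)  # (q1, q2, q3, q4)

def angle_deg(q):
    # Rotation angle 2*acos(q4) implied by the half-angle convention.
    return math.degrees(2 * math.acos(q[3]))

a_pos = angle_deg(q)                     # about 200 degrees
a_neg = angle_deg(tuple(-x for x in q))  # about 160 degrees
# The two readings describe the same physical rotation: the axes are
# opposite and the two angles sum to 360 degrees.
```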

I will check next week to what extent DTU gives the definition of what a "quaternion" is in some User's Guide. But I suspect that DTU considers "quaternion" a general (well-known!) concept whose definition is outside the scope of an equipment-specific manual. Anyway, the instrument can be set up to provide output in many formats.

"http://www.ifa.hawaii.edu/users/pickles/AJP/spie3351.07.pdf"

says for example:

The user may choose to have the attitude measurement output in either right ascension, declination and roll, or in the form of a quaternion. The latter being the preferred format for Attitude and Orbit Control Systems (AOCS).

But if the user wants to find the definition of quaternion as used by DTU in Wikipedia they must select

Quaternion (spacecraft attitude), which I put up as a pointer to my article. The formula that should be used is:


\frac{1-q_4}{{q_1}^2+{q_2}^2+{q_3}^2}
\begin{bmatrix}
q_1 q_1 & q_1 q_2 & q_1 q_3 \\
q_2 q_1 & q_2 q_2 & q_2 q_3 \\
q_3 q_1 & q_3 q_2 & q_3 q_3 
\end{bmatrix}
+
\begin{bmatrix}
q_4 & -q_3 &  q_2 \\
 q_3 &  q_4 & -q_1 \\
-q_2 &  q_1 &  q_4 
\end{bmatrix}
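Coded up, the formula reads as follows (a sketch; note the expression is undefined when q_1 = q_2 = q_3 = 0, i.e. for a zero rotation):

```python
def full_angle_matrix(q1, q2, q3, q4):
    # Rotation matrix from a full-angle quaternion: q4 = cos(theta),
    # (q1, q2, q3) = sin(theta) * axis.  Singular when q1 = q2 = q3 = 0.
    s = q1*q1 + q2*q2 + q3*q3
    f = (1 - q4) / s
    return [
        [f*q1*q1 + q4, f*q1*q2 - q3, f*q1*q3 + q2],
        [f*q2*q1 + q3, f*q2*q2 + q4, f*q2*q3 - q1],
        [f*q3*q1 - q2, f*q3*q2 + q1, f*q3*q3 + q4],
    ]

# 90 degrees about z in this convention: (q1, q2, q3, q4) = (0, 0, 1, 0).
M = full_angle_matrix(0.0, 0.0, 1.0, 0.0)
```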


With the formula


\begin{pmatrix}
a^2+b^2-c^2-d^2&2bc-2ad        &2ac+2bd        \\
2ad+2bc        &a^2-b^2+c^2-d^2&2cd-2ab        \\
2bd-2ac        &2ab+2cd        &a^2-b^2-c^2+d^2\\
\end{pmatrix}


one gets an incorrect result even if the reader realizes that the first component "a" above corresponds (most closely!) to the 4th component of the output of "ASC" while b, c, d correspond (most closely!) to components 1, 2, 3.


Stamcose (talk) 08:55, 17 August 2008 (UTC)


I'm happy to wait a few days for references, though I'd like to point out again that this reference specifically says that the very device you're talking about, the ASC, produces output in the half-angle representation. I agree that your formula gives the correct rotation matrix for a quaternion in the full-angle representation; I only dispute that that representation is used in real-world devices. Here are some further references for the half-angle representation from a search restricted to DTU web pages:
-- BenRG (talk) 13:40, 17 August 2008 (UTC)

Today I looked into the software I wrote a few years ago! What I had in memory was not correct and you are right! The formulas used for the ASC output were indeed

\cos \theta = {q_4}^2-( {q_1}^2+{q_2}^2+{q_3}^2)
\sin \theta = 2 q_4 \sqrt{ {q_1}^2+{q_2}^2+{q_3}^2}
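These are just the double-angle identities; a quick numerical check (a Python sketch with arbitrary values):

```python
import math

theta = 1.1          # an arbitrary rotation angle in (0, pi)
E = (0.0, 0.6, 0.8)  # an arbitrary unit axis
h = theta / 2
q1 = math.sin(h) * E[0]
q2 = math.sin(h) * E[1]
q3 = math.sin(h) * E[2]
q4 = math.cos(h)

# The two formulas recover cos(theta) and sin(theta) from the
# half-angle components, by cos(2h) and sin(2h) double-angle identities:
cos_theta = q4*q4 - (q1*q1 + q2*q2 + q3*q3)
sin_theta = 2 * q4 * math.sqrt(q1*q1 + q2*q2 + q3*q3)
```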

This then means that the quaternions

q_1\ ,\ q_2\ ,\ q_3\ ,\ q_4

and

-q_1\ ,\ -q_2\ ,\ -q_3\ ,\ -q_4

correspond to the same rotation, i.e. each rotation is associated with two quaternions! Maybe ASC "throws a dice" to decide which one to output!

Anyway, I will correct my Wikipedia article and please excuse me for causing unfounded worries/excitement!

Stamcose (talk) 11:30, 18 August 2008 (UTC)


Rotation operator (vector space) has now been updated to use the term "Quaternion" properly, i.e. to use the angle of rotation divided by 2.

Stamcose (talk) 12:31, 19 August 2008 (UTC)

A more pragmatic point of view

I don't have any training in abstract algebra. I have spent some time writing three dimensional graphics software, and am currently studying mechanical engineering. Engineers like things with exact definitions.

Maybe I am looking at this from an engineering point of view rather than from a math theory point of view. As you well know, a lot of engineering problems are still solved using the old Newtonian mechanics, and the vectors they work with are really the old Cartesian coordinates that Newton used, but with some (though by no means all) of the functionality of Hamilton's calculus, notably the cross product, grafted in, along with a few other extensions.

The other idea for vectors comes from the row vector and column vector of Cayley's matrix algebra. Perhaps the ordered pairs and ordered triplets of numbers used there can be viewed as the notion from which all ideas of matrix and tensor algebra have grown.

To my very specific, non-general way of thinking, geometrically real vectors, the kind that give a negative scalar when you square them, the kind Hamilton used, are a distinctly different notion from the other idea, in other words different from the row and column vectors of matrix algebra.

Keep in mind that I am not talking about any sort of abstract generalized mathematical idea of vectors here but specialized narrowly defined vectors that have an exact meaning, the kind that engineers use to solve problems.

Hamilton and Tait and everyone from their school of thought liked the term geometrically real number. They called the numbers they used to represent the three spatial dimensions geometrically real.

In their calculus

x^2 = -1

had infinitely many geometrically real solutions, corresponding to a ball in geometrically real space. When you move from one place to another, traversing a quantity of geometrically real space, there is nothing imaginary about it. Hence they strongly objected to the term 'imaginary number'. This can be very well documented.
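In modern terms the claim is that every pure unit quaternion is a square root of −1, which a few lines of Python can illustrate (the multiplication helper is mine):

```python
def qmul(p, q):
    # Hamilton product; quaternions stored as (scalar, i, j, k).
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

# Any vector bi + cj + dk with b^2 + c^2 + d^2 = 1 squares to -1:
v = (0.0, 0.48, 0.64, 0.6)
sq = qmul(v, v)
# sq is (-1, 0, 0, 0) up to rounding, and the same holds for every
# point of the unit sphere of pure quaternions.
```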

In Elements of Quaternions Hamilton talks about bi-vectors, but the coefficients of these quantities are not the geometrically real numbers of Hamilton's calculus. To make a bi-vector the coefficients consist of what Hamilton calls the 'imaginaries of ordinary algebra'.

If you think of some equation that represents the intersection of a sphere and a line, there are the geometrically real solutions to this equation, but then on top of those, Hamilton speculates that there might be some sort of 'imaginary' intersections. In other words, additional solutions beyond the geometrically real numbers. This would correspond to the idea of starting inside of a basketball and getting to the outside without ever passing through the surface. This route from the inside of the ball to the outside that did not pass through the surface has no physical meaning, hence it would be an 'imaginary' number in the sense of ordinary algebra. A biquaternion was then the quotient of a bivector and a vector. Sadly there is a footnote saying that Clifford used the word, like just about every other word in early books on quaternions, differently than its original intent.

The great thing about the term Right Quaternion is that it has not been expropriated by anybody and redefined, so it still has a precise meaning. That might be an advantage.

I think that the quaternion that you need to develop a computer game, or to navigate a spaceship, might be just the plain old classical quaternion, and while I would hate to create a bunch of unemployed mathematicians by suggesting that endlessly extending an idea ever farther from its original precise definition has no value, I would assert that sometimes a generalized idea is of less value to an engineer than a very specialized, exactly defined one.

Sometimes, Ozob, a picture is worth 1,000 words. You could draw a picture of a right quaternion, or better two pictures, to really help your readers understand.

A right quaternion plotted along two axes, where the first axis was the real time axis while the vertical axis was geometrically real, would look like an arrow pointing straight up.

Then you could have another picture which would look exactly like the picture of a Newtonian vector from any math text book, yet amaze your less sophisticated readers by explaining to them that this was a picture of the vector part of a quaternion. I think that this would be a great aid to them.

I spent a lot of time struggling with special relativity and with the idea of an 'imaginary' fourth dimension. Quaternions don't have one of those; they have three geometrically real spatial dimensions and one real time dimension. The two kinds of quantities are both real yet very different from each other.

Here is a true statement with no ambiguity.

The vector part of a quaternion contains all the geometrical information of the row and column 3-vectors from linear algebra. In fact Hamilton proved that 'every vector function is also the function of a right quaternion'. However the converse is not always true, and is never true outside of the Newtonian region. While R^3 is still useful in the solutions of linear equations, and its row and column vectors can sometimes be used to approximate right quaternions, its name is misleading because so-called real space is not the geometrically real space represented by the true vectors of quaternion calculus. (Reference already given in the footnotes.)

—Preceding unsigned comment added by Hobojaks (talkcontribs) 22:45, 25 August 2008 (UTC)

I agree that what Hamilton's school called a "geometrically real vector" is a different notion from the modern concept of a vector. As the article explains in the section "Square roots of −1", these vectors form a unit 2-sphere; the corresponding modern terminology is a unit vector in R3. I further agree that Hamilton's school objected to the designation "imaginary". But they lost, and now, as far as I am aware, the designation "imaginary" is universal. If you have a modern reference that claims that "geometrically real" and "right quaternion" are still used today, then I would be happy to have it added to the article. But my own knowledge of the literature indicates that those terms fell out of use about a century ago.
I'm not an expert at producing drawings on the computer, so I'm afraid I can't help with illustrating the article.
Also, you seem to believe that certain aspects of the article are ambiguous, ill-defined, or inexact. I think the article is correct, but if you've found an error, please WP:SOFIXIT.
Finally, you mentioned that you're studying mechanical engineering. Maybe it would be a distraction to your studies, but I'm of the opinion that everyone in engineering and the natural sciences needs to study linear algebra. I don't know if that's included in your program or not, or even if you've already taken it, but I think you might find it valuable. Ozob (talk) 14:38, 26 August 2008 (UTC)

Ozob

Just to give a little bit more information to help you consider what type of people might be reading your outstanding article, I did take a standard first class in linear algebra at one time. Back when that course was taught it was not computer based.

I am a little older and going back to school after spending some time in the world of work. But I was looking at my text book the other day, and noticed that it was limited to the idea of the so-called real matrix. Engineering students get a lot of matrix algebra fed to us, but these days the rage is to offer two semesters of something called 'matlab', which is an interactive programming language that with all its extensions seems to be endless. At its most basic level it is a matrix algebra programming environment. It is supposed to have some 'quaternion module' and I have not seen it, but I suspect it may well have been written by someone who did not stay very true to Hamilton's notion of a quaternion.

I think that adding matlab syntax which seems to be the new 'standard' to your article would just clutter it up, but maybe some improvement would be in order for matlab users.

Notice that the idea of adding two arrays that are of different sizes produces a logic error in matlab, at least at the beginning level. In other words:

1 + [2 3 4] = Error!!

I believe it is a logic error because in matlab the rules say that the sizes of the arrays have to match for them to be added. Cayley, I believe, who wrote a section in Tait's treatise on quaternions and who contributed much to the study of matrix theory, had good reason for not defining addition of matrices of different sizes.

But the classical vector part of a quaternion is a sum, whereas vectors in R^3 seem to work more like arrays. That is what my old (Howard Anton) book said at least: that a vector is an ordered n-tuple of numbers. So starting with the single quantity,

Q = 1 + 2i + 3j + 4k

One could construct an array representing this quaternion quantity in a large number of ways. It might help your article if you put in the intermediate step.

This step would express the notion of extracting the coefficients of Q and placing them inside an array. For example:

R(1,1) = 1

R(1,2) = 2

R(1,3) = 3

R(1,4) = 4

for short

R = [1 2 3 4]

R is certainly called a four dimensional real row 'vector' in matlab.

C(1,1) = 1

C(2,1) = 2

C(3,1) = 3

C(4,1) = 4

or for short

[1; 2; 3; 4]

C would be called a four dimensional column vector in matlab.

The other alternative is to make an array of quaternions. Since a scalar is actually a scalar quaternion, or a quaternion with a zero vector part, an array of quaternions constructed by taking each of the individual elements in the sum and placing them in an array could be written as a four dimensional column vector with geometrically real components.

O = [1 ; 2i; 3j; 4k]

I use the O here for Ozob, in honor of the fact that I believe that it is what you are getting at with your argument about conjugate multiplication being identical to the dot product. However, in matlab I think that the conjugate transpose would work better than the conjugate, because, as is well known, two row vectors or two column vectors can't be multiplied under the rules of matrix algebra, but if one of them is transposed matrix multiplication is possible.

More and more I am beginning to think that the term modern is a biased term. This is just my opinion, but I think most of the best books on quaternions were written before 1901, with the possible exception of the theory of relativity written in the 1920's.

I have yet to find a decent book length treatment of just quaternions, but rather have found them as an afterthought added in someplace after a full development of Gibbs Wilson vectors and matrix algebra have already been introduced.

By the way, dusting off my old book on linear algebra, I see that the jumping-off point for generalized vector spaces is the notion of an inner product.

My book lists four conditions on the 'inner product', and as I was reading these it really struck me that the geometric or Hamiltonian product of two classical vectors is very different from this notion of an inner product.

The first and most striking one is that the inner product of a vector with itself has to be positive (called the positivity axiom). The Hamiltonian product of a vector with itself is decidedly negative.

The second thing I noticed is that the inner product idea takes two vectors and returns a real number. The Hamilton product of two vectors is a quaternion, not a real number.
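Both differences can be made concrete in a few lines of Python (the helper and the sample vector are mine):

```python
def qmul(p, q):
    # Hamilton product; quaternions stored as (scalar, i, j, k).
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

v = (0.0, 1.0, 2.0, 2.0)  # the vector i + 2j + 2k as a pure quaternion

inner = sum(comp * comp for comp in v[1:])  # inner product of v with itself: +9
vv = qmul(v, v)                             # Hamilton product of v with itself:
                                            # the scalar quaternion -9, not a
                                            # positive real number
```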

I don't know if I can come up with a recent source on this stuff, but Hamilton named his book Elements of Quaternions because he and his followers held the opinion that Euclid had it all wrong. They thought that geometry needed to be rethought because Euclid had made a serious mistake. It became obvious to Hamilton when he tried to develop a complete algebra and calculus of vectors. Frobenius proved it after Hamilton had been dead 12 years, but Hamilton saw that 'anything else would be an absurdity'.

So what I saw happening as I studied the development of thinking over the last 170 years on the subject is that Hamilton made a major breakthrough, and people stuck with it for about 70 years or so, but then they went backward from Hamilton's ideas, back to the old geometry of Euclid, and began trying to extend that musty 2500 year old idea with some but not all of Hamilton's thinking.

'Modern vector analysis', first published in book length in 1901, should have died of crib death in 1905.

This was when a new absurdity regarding R^3 was pointed out. Inertial frames of reference moving relative to each other don't transform correctly in R^3. Space and time are interconnected after all.

In fact the concept of absolute distance, the musty old 2500 year old Euclidean Norm, wasn't a real quantity in the space in which we live. It depended on reference frame.

More important was the Lorentz invariant, which did not change with a transform from one coordinate system to another moving relative to it. The Lorentz invariant is also the scalar part of the product of two quaternions. So while Hamilton's notion of a relation of time and distance could have some hope of carrying on after 1905, E. B. Wilson's vectors were not geometrically real.

Well so anyway, I don't think it is very fair to exclude Hamilton's thinking from a main article on quaternions. While in my opinion an article or two dedicated exclusively to classical thinking is in order, it seems to me that the 'main' article on quaternions should at least include a classical point of view, but would also need to include other points of view as well.

As far as typing, I have been working on documenting classical quaternion division.

At least the part about how vectors can be divided, which is an idea that I can't see how the world has been able to do without for the last 100 years.

If Google Books counts as a 'modern source' then all those old classic books came online just around a year ago. I am thinking about buying a brand new hard copy of Hardy's book on Amazon. Reprints seem to be selling like hot cakes. The Sac State library version of Elements of Quaternions, based on Hamilton's original notes, was printed in 1967 as the third edition. Would that count as a modern source?

It is sad that it has not been checked out in more than 10 years. But let me think about this problem a little more before I make any changes to your outstanding article.

Hobojaks (talk) 23:48, 31 August 2008 (UTC)

It's good to hear that you've taken a linear algebra class! What I had in mind is an advanced course in linear algebra, but I believe that more linear algebra is better than less, whatever the level. (Can you tell that I'm a member of the linear algebra fan club?)
I am not a particular fan of Matlab, and I agree that introducing Matlab notation into the article would clutter it. That notation also has the disadvantage of not being used outside of Matlab.
The representation of H as arrays of four numbers is in the "Definition" section.
I agree that inner products are meant to be very different from Hamilton products. Perhaps you've encountered Hilbert spaces before and you're aware that if f and g are square-integrable complex-valued functions, the pairing
\langle f, g \rangle = \int_{-\infty}^\infty f(x)\overline g(x)\, dx
defines an inner product on the space of all square-integrable complex-valued functions. (Of course, f and g can be real-valued, in which case the complex conjugation is unnecessary.) This is an extremely useful inner product (it turns up all the time in the mathematical formulation of quantum mechanics, for instance) which is totally unrelated to Hamilton products. Ozob (talk) 21:37, 1 September 2008 (UTC)

Modern caste system????

My point of view, and I wish I could find a reference for it, is that the current state of math education in America imposes somewhat of a caste system. I am headed back to school on Monday, to pay the government around $2,000 a semester on borrowed money to learn about vector dynamics and other topics in mechanical engineering, according to Beer and Johnston, and Hibbeler as standard texts.

At the higher levels, the kids that get to learn about general relativity and modern theoretical physics learn about tensor notation, which from what I gather at least some people consider to be closely related to classical quaternion notation, except that from what I have been able to understand, classical quaternion notation looks much, much easier to understand.

Quaternions look a lot more consistent (at least to my limited understanding, gleaned from independent reading) with special relativity.

So I, being of the lower technical cast, get trained in a version of math that is inconsistent with relativity, while the upper technical cast gets trained in another version of math, which is very possibly needlessly complicated, and which serves the function of maintaining the intellectual cast system.

To me quaternions seem relatively easy to understand in comparison to tensors. The course of education the government has planned for me will not include either of them.

So in other words, the dumbed-down Gibbs-Wilson vectors dominate only the modern lower cast of technical education, while the upper cast is increasingly switching back ever closer to Hamilton's original notions, just encrypted in a complex secret tensor language of elite technocrats.

In particular, the notion of a metric tensor with a trace of -2, [+ - - -], looks to me, from a computational standpoint, an awful lot like an overly complicated way to get at the scalar of the product of two quaternions that Hamilton invented back in the 1840s. The main difference, as far as I can discern, is that the arithmetic of Hamilton's geometric algebra (how to multiply, divide, add, subtract, raise to powers, and take the log of a quaternion) is fairly basic, straightforward, and easy to understand, whereas Einstein notation and tensor algebra seem like a much harder way to get at the same problem. The idea of an imaginary fourth dimension hurt my brain, in my younger days, when I read books with a metric tensor with a trace of +2 that explained that I should try to think of time as an imaginary fourth dimension. The idea of a space that consists of a time component that is a real number, plus a three-dimensional geometrically real component, seems a lot easier to understand.

In other words, if the third-order tensors used in general relativity are just a harder-to-understand way of writing quaternions, then we can no longer honestly claim that quaternions have been replaced by vectors. We can only say that the lower cast of technical education (people like me, in other words) is taught dumbed-down 1901 Gibbs-Wilson vectors, 2500-year-old ideas of three-dimensional Euclidean geometry, and 17th-century Newtonian mechanics, under the somewhat misleading term 'modern'.

But the elite are secretly taught about 'modern' quaternions.


Perhaps the history section should be changed to reflect not the triumph of modern vectors, but the triumph of a modern educational cast system.

Hobojaks (talk) 22:13, 1 September 2008 (UTC)

I think you mean caste, not cast.
Many educational programs will allow you to substitute more advanced courses for more elementary ones, but the workload will be correspondingly more difficult. Perhaps you should talk to a course counselor.
"Tensor" is often an abbreviation of "tensor field". These are not related to quaternions except in extremely special cases.
If you would like to make that change to the history section, you will need references. Ozob (talk) 17:46, 2 September 2008 (UTC)


You sound bitter. Maybe it helps to think about "courseware" as things that are generally accepted, on some kind of least-common-denominator basis across researchers? That could explain why it seems so boring at times. But if there were a 2nd (or higher) caste of sorts, which would be taught (or teaching) the interesting stuff, I'd be interested to contact them as well ... Seriously, there is a strong research group at Anadolu University in Eskişehir, Turkey, around Kudret Özdaş, Murat Tanışlı, and Süleyman Demir. They take quaternions in physics quite seriously. That's the strongest lead I would have for you right now, if you are looking for research professionals in the field.
My first contact with quaternions was in my physics studies, 2nd semester undergraduate. I was fascinated, and continued to study what was called "advanced algebra" on my own impulse (not an exam topic), while neglecting vector calculus (which was a required exam topic). If you're interested in good grades, then this is not the recommended approach. At some point, you might have to make a distinction between what is important to you, as compared to what is expected from you. Sorry if this sounds a bit lame, but I don't think you can expect a consensus-built curriculum to reflect the latest "hot" research. Once it comes to the unexplored, there's not much guidance you can expect from outside. The possibilities are infinite, naturally, so you'll just have to do some digging by yourself. You might also want to prepare for the possibility that your personal quest may last much longer than your short study time.
As for your observations regarding occurrences of quaternions in physics, you might want to add SU(2) to your list, the symmetry of the weak interaction in the standard model. The reason for quaternions not being in the typical courseware catalog is very likely that quaternions are not known to be required any further. Beyond the weak force, the strong force is governed by (non-quaternionic) SU(3) symmetry. Beyond special relativity, the formalism of general relativity is governed by (non-quaternionic) tensors of 2nd order. Quaternions simply have not been proven to be able to do more than rewrite known laws of physics. Non-quaternionic formalisms are required if these laws are extended further.
If I may add a personal, subjective, and entirely POV-loaded comment here, I find octonions and other non-associative algebras to be "the next big thing" in physics; but who is to say that thought is any good? Every unusual thought seems great somehow, and all research is new and interesting; eventually it comes down to what you do with your time. Hope this cheered you up a bit. Thanks, Jens Koeplinger (talk) 02:07, 3 September 2008 (UTC)

Ozob is right about the term 'imaginary'

I added a little footnote to your article that traces the term 'imaginary vector' back to its roots at the start of so-called modern vector analysis in Gibbs's 1901 book.

For a while I have been pondering the relationship between the vector part of a classical quaternion (and even some of the notions of a modern quaternion when written in quadrinomial form) and the three-dimensional vectors taught in lower-division college classes.

If one entity is called a vector, then the other entity is a bivector, because the two notions of a vector seem to differ by a factor of the square root of minus one; that is, an imaginary number in the sense used in ordinary algebra, and not the geometrically real type.

This is true at least as far as the value of a vector squared is concerned.

Multiply Hamilton's vector by an imaginary unit, and when you square it you get a positive number.

Multiply a Gibbs-Wilson vector by an imaginary unit and then when you square it you get a negative scalar.

However, technically, if the Gibbs-Wilson vector is simply called a vector, then the vector of Hamilton would be a 'perverted imaginary' vector: since the cross product would also change sign, it would have the effect of changing everything from a right-handed to a left-handed coordinate system, which in the so-called modern language of Gibbs-Wilson is called a perversion.

Anyway, for now I just put in a reference to the origins of the modern usage.

Also, someone was asking if I ever looked into the idea of a unitary matrix. It just so happens that I believe I first read about it in 1985 in Goldstein's Classical Mechanics, and I know this because I was reviewing this book just a short while ago.

At the time I underlined it and wrote notes in the margins to remind me to ask the instructor about this strange idea.

Hobojaks (talk) 21:29, 5 September 2008 (UTC)

The ABC of Relativity by Bertrand Russell

I found my old book that I had when I was 10 years old about special and general relativity. It was one of those popular books, originally written in 1925 but updated in the 60s.

I found a little drawing I had made as a child, based on some of the graphical constructions in that book. These constructions could very obviously be based on the math contained within quaternions. Looking back on my interpretation of things, it seems like I can follow a current of thought, from Hamilton to the people who saw an application for quaternions in relativity, that seems to have flickered out for a while.

I remember, thinking back on it now, that after reading this non-technical book I went to the library and picked up a book about tensors, but could not understand a word of it, and perhaps from that experience developed a lifelong theory of tensors.

I guess my main point is that the exact same sort of graphical constructions that can be found in Elements of Quaternions and other 19th-century quaternion books can also be found in the first book I read in my attempt to understand relativity. There is a great deal of talk about arcs of great circles and the curvature of spacetime, all of which seems rooted in the math of quaternions, at least in the geometrical approach that the old school used to take.

Well so thanks everyone for the encouragement!

As far as SU(3) and quaternions: I was thinking that in my elementary linear algebra class, the first thing they teach you to do when you encounter a three-by-three matrix (or at least one of the first) is to find its 'quaternions'. After all, is not an eigenvalue plus an eigenvector a quaternion? A three-by-three matrix would seem to have three of these, which reminds me of the 'dyadic' approach that Gibbs takes in his book on vector analysis.

Along with the dot and cross product, early modern vector analysis also had the dyad product, which was written by just putting the two vectors together and saying that this quantity could not be reduced any further. Then much was accomplished by taking the dot product and/or cross product of a dyad with a vector.

Three dyads made a dyadic, which worked for all the world like a three-by-three square matrix, and yet in Gibbs's treatment the notion of a complex matrix was not introduced.

I was thinking that a 'complex matrix' could be broken down into the sum of a real matrix and a purely imaginary matrix, and then a factor of i could be factored out, giving the information in a complex matrix as the same as that in two real matrices.

Hence a complex three-by-three matrix would in general contain 6 quaternions?

Also, if a Lorentz transform could be written as the triple product of a quaternion, the event being transformed, and the inverse of the quaternion, it would seem to me that if the notion of a 'biquaternion' were involved, a factor of i could be factored out of both the biquaternion and its inverse, and be written by adding a minus sign at the outside of the whole expression.

Is using regular old quaternions with real coefficients to do Lorentz transforms, in the view of modern science, something that does not work, or something that does work but just does not give us anything new? I am not quite seeing why the biquaternions are needed. This topic is of great interest to me, and not something that is currently explained in an intelligible manner anywhere I can find so far on Wikipedia.

Well off to school!

Hobojaks (talk) 21:01, 8 September 2008 (UTC)

The number of eigenvalues and eigenvectors of a matrix is not necessarily equal to its size. For example, consider the matrix
\begin{pmatrix}1&1&0\\0&1&1\\0&0&1\end{pmatrix}
It's 3 by 3 but it has only a one-dimensional space of eigenvectors. You might want to read about Jordan canonical form.
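A quick way to see this concretely (a throwaway sketch of my own, not from the discussion): subtracting the identity from the matrix above leaves a nilpotent matrix N, and the eigenvectors for the only eigenvalue 1 are exactly the solutions of Nv = 0, which form a one-dimensional space. The helper names `matmul` and `matvec` are mine.

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def matvec(A, v):
    return [sum(a, x := None) if False else sum(a * x for a, x in zip(row, v)) for row in A][:] if False else [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, 1, 0],
     [0, 1, 1],
     [0, 0, 1]]
N = [[A[i][j] - (1 if i == j else 0) for j in range(3)] for i in range(3)]  # N = A - I

N2 = matmul(N, N)
N3 = matmul(N2, N)
# N is nilpotent of index 3: N^2 is nonzero but N^3 = 0, so A is a single
# Jordan block and is not diagonalizable.
print(N2)  # [[0, 0, 1], [0, 0, 0], [0, 0, 0]]
print(N3)  # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]

# Eigenvectors (eigenvalue 1) must satisfy N v = 0, forcing v2 = v3 = 0:
print(matvec(N, [1, 0, 0]))  # [0, 0, 0] -> (1, 0, 0) is an eigenvector
print(matvec(N, [0, 1, 0]))  # [1, 0, 0] -> (0, 1, 0) is not
```

So the eigenspace is spanned by (1, 0, 0) alone, matching Ozob's point that a 3-by-3 matrix need not have three independent eigenvectors.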
Wikipedia has an article on dyadic tensors which you might like. If you already understand dyads, then you know about tensors. Ozob (talk) 22:30, 8 September 2008 (UTC)

I agree, of course; the way to tell, I believe, is to check whether the matrix has a nonzero determinant, and that will tell you if the basis vectors are independent. When I wrote that, I believe it was in the context of SU(3), which if I understand correctly has, among other things, a determinant of positive one. I have looked at the little article on dyads before; I personally love dyads and dyadics. It seems very sad that so little information appears about them on Wikipedia. Dyads really help me to understand more about the nature of the 3 x 3 matrix. I reread the chapters in Gibbs's vector analysis on linear transforms, written in dyadic notation, as well as my old book from college about real matrix algebra, because I thought I was going to take an upper-division course in Matlab this semester.

You might have noticed that in the so-called main article on the history of quaternions I mention dyads as being the third type of product in vector analysis. A little about the history of that article: basically it got started because I took all the point-of-view stuff and unreferenced material out of the article about classical notation. So basically the article got started as a dumping ground for stuff that I was cutting out of another article.

By the way, I was seriously thinking about suggesting that that particular section of the history article be merged into the main article. The reason I suggest this is because of the outstanding improvements you have made to the article, Ozob, making it much more in line with my thinking on the subject.

You will notice that at the time I wrote that, I believed it was necessary to use some sort of notation to distinguish the i, j, k of geometrically real vectors, the kind that Hamilton used, from the vectors that Einstein proved in 1916 not to be real, but which are still called that for sentimental reasons.

That is the old red-letter version that I wrote to explain the difference between vectors in the two systems, which your new articles do a great job of explaining.

The other content that I have felt should be merged into the main article would be section 1.3 of the classical notation article.

The reason I suggest this is that the main article seems to be somewhat of a moving target. At one point it said nothing at all about the classical view; now it has been improved a great deal by the parenthetical addition of some classical terms into the main article, as well as by your outstanding explanation of the difference between the ideas of vectors in the different systems.

My vision would be to explain to the readers about how classical quaternions differ from modern quaternions, and perhaps how modern notions of quaternions differ from each other.

A couple of other bits of interesting trivia I have discovered.

It was actually a fellow named FitzGerald who invented the so-called Lorentz transform. I went out and got a little book with the original articles by Einstein, Lorentz, and some others, so I can provide a good citation for this from Lorentz himself, as well as from Bertrand Russell, who talks about the FitzGerald transform.

The other thought, while we are on the subject of quaternions, is about the 'scalar of the triple product' of three geometrically real vectors, which works like the determinant and also gives the same result as the two products

a × b · c  and  a · b × c  in the Gibbs-Wilson notation.

A merger of articles might even include the subject of the relation between the Hamilton product and the dyad product of two vectors. Is a dyad in fact a quaternion, just written in another notation? I want to know badly enough that I might do some experiments in Matlab if this subject has never been discussed.

Oh yeah, one of the reasons for my original interest in reading those old books on vector analysis is that they offered the most detailed development of dyads that I have ever found; sadly, learning about them got me no closer to understanding general relativity.

And I can understand zeroth-, first-, and second-order tensors well; it is just those pesky third-order and up tensors that I had such great difficulty trying to learn on my own.

But hey, I need to sleep, so please have a look at the material I have suggested be merged into the main article. I need to do a project for one of my classes, and maybe my teacher can give me 20% of my grade for helping with the subject of quaternions.

My teacher works for NASA and has a great interest in spacecraft attitude control, so I am really lucky in the respect that I have a very strong suspicion that he will already know about quaternions.

Hobojaks (talk) 05:39, 12 September 2008 (UTC)

Hi - I'm noticing you're mentioning SU(3) and quaternions. Just want to make sure (sorry if I'm talking about something you already know): It is SU(2) (not SU(3)) that describes the symmetry of the non-real quaternion elements. SU(2) is certainly contained within SU(3), as you can see e.g. in the matrix representation of the generators of SU(3) shown in Special unitary group: The \{ \lambda_1 , \lambda_2 , \lambda_3 \} are obvious, but so is e.g. \{ \lambda_4 , \lambda_5 , \frac{ \sqrt{3} }{2} ( \lambda_3 + \lambda_8 ) \}. But it is not possible to describe all of SU(3) through elements of SU(2). Whatever part of SU(3) you choose to cover with SU(2), you will always end up with a remainder that can't be covered by SU(2). That means also that SU(3) can't simply be quaternionic. Hope this helps! Thanks, Jens Koeplinger (talk) 23:12, 12 October 2008 (UTC)
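For what it's worth, the SU(2)-quaternion connection is easy to check by hand in the usual 2-by-2 complex-matrix model of the quaternion units (this is my own illustrative sketch; sign conventions for the unit matrices vary between references, and the names `ONE`, `I_`, `J_`, `K_`, `quat` are mine):

```python
# One common 2x2 complex representation of the quaternion units 1, i, j, k:
ONE = ((1, 0), (0, 1))
I_ = ((1j, 0), (0, -1j))
J_ = ((0, 1), (-1, 0))
K_ = ((0, 1j), (1j, 0))

def mul(A, B):
    """2x2 complex matrix product."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def neg(A):
    return tuple(tuple(-x for x in row) for row in A)

# Hamilton's relations hold in this model:
assert mul(I_, I_) == neg(ONE)
assert mul(J_, J_) == neg(ONE)
assert mul(K_, K_) == neg(ONE)
assert mul(mul(I_, J_), K_) == neg(ONE)
assert mul(I_, J_) == K_ and mul(J_, I_) == neg(K_)   # ij = k = -ji

def quat(a, b, c, d):
    """a + b*i + c*j + d*k as a 2x2 complex matrix."""
    return tuple(tuple(a * ONE[r][s] + b * I_[r][s] + c * J_[r][s] + d * K_[r][s]
                       for s in range(2)) for r in range(2))

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# det(quat(a,b,c,d)) = a^2 + b^2 + c^2 + d^2, so unit quaternions land in SU(2):
print(det(quat(0.5, 0.5, 0.5, 0.5)))  # (1+0j)
```

The point of Jens's remark survives the sketch: this embedding gives you SU(2) exactly, and nothing in it generates the rest of SU(3).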
PS - I wanted to mention this earlier, but forgot (there's just too much going on). Your diligence and hard work, trying to understand certain questions regarding quaternions fully and "from the bottom up", is admirable. Surely, at first you'll be reinventing things that others have found many times before - but who is to say that somewhere along the way, you won't suddenly find something that was overlooked? Maybe that will be within the field of interest you're currently envisioning, or maybe it'll be something that just comes up unexpectedly? An unexpected branch of interest, some seemingly strange opportunity where everyone today who has an opinion about it knows "for sure" that it would be a dead end; yet you find a critical loophole? Or provide a notable contribution to a broader interest? Sometimes, answers to a question or line of thought may seem short, abrasive, and uninterested. Hopefully that is not discouraging ... Cheers, Jens Koeplinger (talk) 23:49, 12 September 2008 (UTC)

My experience, and I think that of a lot of the less well educated people reading these articles, has been limited to the subject of real matrices.

As far as I can tell, all the members of SO(3) are a subset of SU(3), since they have a determinant of one, and the transpose of an orthogonal matrix is equal to its conjugate transpose as well as to its inverse?

Since each of these matrices has a nonzero determinant, it would seem that, at least for the SO(3) group (which I used to use in computer graphics programming to rotate things, because I was never taught about quaternions), the SO(3) group, like all three-by-three matrices with a nonzero determinant, would contain three eigenvectors.

Pair the eigenvectors with the eigenvalues and it seems you get 'eigen quaternions'?

How many eigenvectors does a SU(3) matrix typically have?

My speculation was that there would probably be three biquaternions that could be constructed from the eigenvectors and eigenvalues, which would mean that one could devise a notation that would represent SU(3) as six quaternions?
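To put a concrete example next to this (my own sketch, not a claim from the thread): a rotation about the z-axis is real and orthogonal with determinant 1, so viewed as a complex matrix it is unitary, which is the sense in which SO(3) sits inside SU(3). Its rotation axis is its only real eigendirection; the other two eigenvalues form the complex pair e^{±iθ}. The helper names `rot_z`, `transpose`, `matmul`, `matvec` are mine.

```python
import math

def rot_z(theta):
    """Rotation by theta about the z-axis: an element of SO(3)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

R = rot_z(math.pi / 3)

# Orthogonal: R^T R = I. Being real, R^T is also the conjugate transpose,
# so R is unitary, and det R = 1 puts it in SU(3) as well.
RtR = matmul(transpose(R), R)

# The rotation axis is a real eigenvector with eigenvalue 1:
print(matvec(R, [0.0, 0.0, 1.0]))  # [0.0, 0.0, 1.0]

# trace R = 1 + 2*cos(theta), consistent with eigenvalues {1, e^{i t}, e^{-i t}}:
print(sum(R[i][i] for i in range(3)))  # ~ 2.0 for theta = pi/3
```

Note that only the axis direction is a real eigenvector; the other two eigenvectors are genuinely complex, so the "three real eigenvectors" picture does not quite hold for a generic rotation.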

By the way Koeplinger, I am not sure if this will work or if anybody has thought about it before, but I just read "The trouble with physics" by Lee Smolin.

He mentions octonions only in passing. Having wasted much of my youth drinking beer, chasing groupies, and playing my electric guitar way, way too loud, I do know something about 'string theory'. As my view gets stronger that Hamilton overthrew that old Greek guy Pythagoras, it might be remembered that the same old Greek guy that gave us the positive distance formula also contributed a lot to basic music theory. So Hamilton's revolution, and the choice of the word 'octave', which everybody that plays a guitar or any other stringed instrument understands to have a different meaning, in my mind started to suggest some sort of string theory.

I did not really understand 'Smolin', but it seems to me that if someone wanted to construct a higher dimensional space, it could go like this.

(Time, plus three dimensions of distance: quaternion number one), then density, another scalar, and the last vector to make the second quaternion: the electric field vector at the point. Hence we live in an octonion space.

At least that seems to me to be a lot like the space we live in.

Now the trick would be to show that this 'octonion string theory' is a solution to the Einstein equations. That Russian fellow, Tatarove, to whom I believe this group owes a collective apology, already gave you half the answer by showing that a quaternion can satisfy these equations, and Hamilton's pals in the 19th century showed that light moving through space was both a particle and a wave, which is sort of a particles-are-vibrating-strings-like idea.

If it does not work at first, please don't try adding any extra dimensions; I know everybody thinks that strings are supposed to have 10 or 11 dimensions.

The way an electric guitar works is by 'dividing a string', in other words putting your finger on the fretboard, or if you want to get fancy, by producing a harmonic on an open string.

I thought that octonions were the only thing bigger than a quaternion that could be divided?

I realize that at first this might sound like complete insane ranting, and that it can't possibly work, but it is at least worth a try. I also understand that I am assigning a different meaning to the word 'string' than what it means in older, musty 20th-century string theory, hence calling it modern string theory might be of help.

I was thinking that if you took mass density and every possible vector of the electric field in the entire universe, then looking at it from the point of view of someone living in our time-and-distance space, since some electric field vectors are a lot more common than others, some points in this mass-density-electric-field-vector space would have a lot more time associated with them.

An intelligent life form living in that space would probably have a body made out of time, in the same way our bodies are made up of mass. Likewise, it would experience the passage of mass density the same way that we experience time.

I was thinking that some sort of wave could propagate through this other quaternion, the mass-density, electric-field-vector space. That could be where all this dark energy-dark matter stuff lurks.

But it would be really hard for us to see. We would have to measure the electric field vector and the density of space at every point in time and physical distance.

A creature living in density-electric-field string space might think that what we think of as distance vectors were some sort of force field, since at every point in the electric field vector space there would be a set of all the space vectors that just happened to have that electric field strength.

There would even be an infinite number of these Koeplinger-hobojaks string theories, generated by every possible polynomial one could construct by adding, subtracting, multiplying, and raising octaves to powers, as long as it could be shown that these expressions satisfy the Einstein equations. Since these operations are both non-commutative and non-associative, it seems obvious that for every higher-dimensional string theory there could very possibly be more Koeplinger-hobojaks string theories than atoms in the universe.

Thanks for all your encouragement.

Hobojaks (talk) 12:39, 13 September 2008 (UTC)

Some old issues

There has been a great deal of discussion about this page for quite some time now. The first thing that I would like to do is to congratulate Ozob in particular, and a lot of other people, for the outstanding work they have done to improve this page in the time that I have been following its development.

Something that I found vexing when I first encountered this page, a problem that Ozob has now corrected, was that there were a vast number of products listed as types of quaternion products. Ozob did an outstanding job of correcting this problem by introducing the term Hamilton product, to distinguish the product of Hamilton, the one that he and all the other folks were using exclusively in pre-1901 math books on quaternions, and to restore the primary importance that it used to have in classical thought.

What I still find a little troubling is that I believe some possibly important math ideas have been inadvertently lost with the so-called modern treatment of the quaternion. Jheald is doing an outstanding job of documenting the rise of the notion of this modern quaternion, which has somehow been tamed and integrated into the broader scope of modern math. The trouble is that in doing this, some of the thinking of Hamilton and Tait and some of the other 19th-century thinkers has been lost.

If you look carefully at the definition of an inner product space, that space is required to have an inner product. However, the original idea of a quaternion with its original Hamilton product does not become an inner product space unless you define some new type of multiplication that is completely alien to Hamilton's thinking. Sadly, once this inner product is defined, then arguments like the one that the tensor of a quaternion is its Euclidean norm can be made. This leads students, somewhat incorrectly in my view, to the notion that quaternions are remotely Euclidean in nature.

I read an article once about a scientist who studied wolves. He studied them first in zoos, but then when he went into the wild, he discovered that every caged wolf was completely and totally insane, and that wild wolves were different. I suspect that in taming the wild quaternion of Hamilton, the effect has been to create a new, different, and decidedly more Euclidean entity. So in other words, just as modern math has redefined the idea of the vector, so also has the idea of a quaternion been redefined.

I think this may be very unfortunate and that something has been lost along the way.

But I have to get to class now; I got 3 out of 10 on my dynamics test because I was spending too much time thinking about quaternions and their relationship to relativity and not enough time on my homework.

I might have some more comments on this subject in the near future.

130.86.76.119 (talk) 15:55, 17 September 2008 (UTC)

Very interesting. But first, what I should say: Go study; good grades help. Here's what I want to say: In my view, you are too hard on the "Euclidean" nature of quaternions. The multiplicative 2-form that quaternions are equipped with is the Euclidean norm, after all. But then again, this might create the (false) impression that quaternions are just built from four orthogonal basis elements which are exchangeable. This is - of course - not true; only the 3 non-real elements of a quaternion are exchangeable freely, and the complete quaternion group is more intricate. Maybe this relates to the impression you have, of something being "lost" in the "modern treatment"? Quaternions are described in terms of other mathematical constructs due to the success of these other constructs; and the success of Clifford algebras in the description of nature places focus on the Euclidean 2-form of quaternions. But that focus is brought forward by the wider algebraic framework that quaternions are part of, which therefore looks at quaternions from an angle that may filter out other things that seem important. So, maybe your feeling of "loss" comes from quaternions being part of wider algebras or programs, each with its particular focus and application? -- A key question seems to remain: What requires quaternions? Why are they more than a nice-to-have? You mention quaternions and their relation to relativity. As long as you describe relativity using tensors of 2nd order (and you would need really, really good arguments for not doing so), you'll end up having to use multiplicative 2-forms to describe invariant properties between equivalent frames of reference. So maybe the question would be: What's the argument for not using tensors of 2nd order? In my opinion, quaternions alone can't do the trick, but I understand this is arguable.
As for the article, it has improved so much over the past years; I'm very happy to see this. I wish I had had this clarity available to me in 1994 when I first ran into them. Maybe we could contemplate a section that references the various algebras quaternions are part of? Clifford algebras, Lie algebras, special unitary groups, the Cayley-Dickson construction and split forms, hypercomplex numbers ... and the ones I forgot. Thanks, Jens Koeplinger (talk) 13:31, 19 September 2008 (UTC)

A mistake?

It seems to me that the square roots of -1 in H are not the elements of the form bi+cj+dk with b^2+c^2+d^2=1 (as the article claims); for example, if b^2+c^2=1 and a=d=0 then (bi+cj)^2=-1+2 b c k \neq -1. I believe there are not even infinitely many of them, because if we take a quaternion and ask its square to be -1 we obtain relations like these: a^2-b^2-c^2-d^2=-1,\ ab+cd=ac-bd=ad+bc=0, and I think that they lead merely to the six elements i,-i,j,-j,k,-k. --147.162.114.205 (talk) 14:41, 14 October 2008 (UTC)

Remember, ij = - ji \;
So (bi+cj)(bi+cj) = -b^2 + bc\,ij + bc\,ji - c^2 = -(b^2 + c^2) = -1
Jheald (talk) 14:50, 14 October 2008 (UTC)
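The cancellation Jheald points out is easy to check numerically (a throwaway sketch of my own, not from the thread): writing quaternions as 4-tuples (a, b, c, d) = a + bi + cj + dk, the Hamilton product confirms both ij = -ji and that every unit pure quaternion squares to -1, so the article's claim stands. The function name `qmul` is my own.

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions given as (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(qmul(i, j))  # (0, 0, 0, 1) = k
print(qmul(j, i))  # (0, 0, 0, -1) = -k, since ij = -ji

# A unit pure quaternion bi + cj + dk with b^2 + c^2 + d^2 = 1 squares to -1,
# e.g. b = c = 1/sqrt(2), d = 0, because the cross terms cancel:
b = c = 1.0 / math.sqrt(2.0)
q = (0.0, b, c, 0.0)
print(qmul(q, q))  # ~ (-1, 0, 0, 0)
```

Since any (b, c, d) on the unit sphere works, there really are infinitely many square roots of -1 in H.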

Yes, you're right, I'm sorry. --147.162.114.205 (talk) 14:57, 14 October 2008 (UTC)

Don't worry, dear IP, it happens ;-) Thanks anyway for having given your contribution! --Fioravante Patrone en (talk) 15:07, 14 October 2008 (UTC)

Math tag consistency

Hey everyone, I have been thinking that the font size of the math expressions would look nicer if it were more consistent.

I went and read the guidelines and found a more correct way to force simple formulas into PNGs. I am not sure that the guidelines forbid doing this to make the simple formulas and complex formulas appear consistent with each other.

It looks kind of weird to have expressions with different font sizes in the same article and particularly in the same section.

However, the guidelines say that this is an emotional issue. I made the first equation back into

i^2 = j^2 = k^2 = ijk = -1.\,

using the suggested math tag techniques. I did not want to upset anyone, but are changes like this to other parts of the article considered acceptable by everyone, to get some consistency?

I have a very, very strong objection to forcing PNGs: Accessibility. Anyone who needs a screen reader cannot see a PNG at all. Anyone who regularly uses a large font or a magnifying glass (like my mom) cannot see a PNG easily, because it cannot be scaled well. (And yes, my mom might read this article some day. She likes math, too.) I think I have seen exactly one good use of forcing PNGs on Wikipedia, ever. (It was to make a table line up; some of the table entries couldn't be expressed in HTML.)
I'm going to unforce the equation you quote above. You also changed the cross product, and I'm willing to compromise on that: I don't want it to be PNG, but for consistency with the other equations, it should say "p × q =" at the start, and that's output as a PNG.
I don't want to get into an edit war over this, so if you reforce the first equation then I'm not going to unforce it again. But please think hard about WP:ACCESS before so doing. Ozob (talk) 23:03, 23 November 2008 (UTC)

I agree that your mother should be able to read the Brougham Bridge law at the start of the article. Thank you, Ozob, for making me aware of some issues regarding math tags that I had been unaware of. I agree that accessibility is an important goal for this article.

i^2 = j^2 = k^2 = ijk = -1.

The math tag above, on my favorite browser (the current version of Firefox on typical college computer lab monitors), defaults the size of the expression to what I believe is the line height of the surrounding text. With superscripts, this makes this important formula actually smaller in text size than the text around it. I mean, it looks really, really tiny.

There must be some HTML tag that will set this expression to a larger font, so that we college student types can see it too? Before we change it again, I can put a copy of it in this article so that the proposed math tags can be tested in a couple of different browsers. We have a disability resource center here on campus, and I might be able to get a computer over there for testing math tags on more specialized browser configurations, but of course I would not have priority on these machines.

Gotta run to my next class, sorry for all the spelling mistakes in this message, but I thought it important to reply quickly rather than with perfect grammar.

Hobojaks (talk) 19:35, 25 November 2008 (UTC)

I agree that the font for a <math> tag rendered as HTML is too small. If you'd like to change it, your best bet is to ask for a change in Monobook. Manually changing the font size (or other aspects of the presentation) for a single equation is generally discouraged by the MoS; but if you could find settings that worked for all equations everywhere, you could ask for them to be put into Monobook. Ozob (talk) 22:55, 25 November 2008 (UTC)

What do the zero and the three stand for?

I want to compliment whoever wrote the outstanding section of this article on the relationship between Clifford algebras and quaternions.

However, one problem that I have with understanding it is that it is written from the perspective of someone who already understands Clifford algebra better than I do, specializing this branch of mathematics down to the particular subject of quaternions.

In the expression

Cℓ+3,0(R)

What does the three stand for? What about the zero?

Since quaternions historically provided some of the motivation for Clifford algebras, a brief explanation for dummies of what the notation means, specifically in the context of quaternions, seems in order to make the article appeal more to the unwashed masses of folk who do not have a PhD in math and who learn about math entirely, or to a large extent, from reading Wikipedia.

Hobojaks (talk) 22:10, 23 November 2008 (UTC)

This is the signature of the quadratic form. Signature (3,0) is the case of the standard inner product. You might like the Clifford algebra article. Ozob (talk) 23:06, 23 November 2008 (UTC)

I have been looking at that article about Clifford Algebra for quite some time, and have in general found it completely incomprehensible. Thanks for explaining this, I am sure that there are others interested in this question as well. If I have distorted the meaning of your reply in my linking of the term signature I apologize.

I would have thought that the signature of the quadratic form, if that means S.q^2, would be (+,-,-,-), or, summed up, -2.

In other words, multiply a quaternion by itself using the Hamilton product and then take the scalar part.

\mathbf{S}.q^2 =\,w^2 - x^2 - y^2 - z^2

but I guess it might be talking about the other possibility.

q\mathbf{K}q = \mathbf{N}q =\,(\mathbf{T}q)^2=w^2+x^2+y^2+z^2

Also, apparently there are two different Clifford algebra approaches to the quaternions, the other one being where the signature is (0,2), I believe. This is stated at the start of the article but never really explained on a level that people who have not had a course in Clifford algebra would understand. —Preceding unsigned comment added by Hobojaks (talkcontribs) 18:41, 24 November 2008 (UTC)
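The two quadratic forms discussed above can be checked numerically. Here is a minimal sketch in Python (the function names are mine, not Hamilton's notation): the Hamilton product, the scalar part of q^2 (signature +,-,-,-), and the norm Nq (signature +,+,+,+).

```python
# Sketch: quaternions as (w, x, y, z) tuples; helper names are mine.

def hamilton(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def scalar_of_square(q):
    """S.q^2: the scalar part of q times q (signature +,-,-,-)."""
    return hamilton(q, q)[0]

def norm(q):
    """Nq = q Kq = (Tq)^2 (signature +,+,+,+)."""
    w, x, y, z = q
    return w*w + x*x + y*y + z*z

q = (1.0, 2.0, 3.0, 4.0)
print(scalar_of_square(q))  # 1 - 4 - 9 - 16 = -28.0
print(norm(q))              # 1 + 4 + 9 + 16 = 30.0
```

For q = 1 + 2i + 3j + 4k the two forms give -28 and 30, showing the opposite signs on the imaginary part.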

I agree that the Clifford algebra article is not very friendly. If you skip the universal property section then you might find it more appealing. In particular, it might help to look at the section on bases. If we take x, y, and z as the standard basis vectors of R3, then Cl(3,0) has a basis consisting of the identity element 1 and the formal products x, y, z, xy, xz, yz, xyz. (Note that 1 is actually the empty product.) The even part of this Clifford algebra is generated by the basis vectors which are formal products of an even number of the standard basis vectors of R3, that is, by 1, xy, xz, and yz. In terms of quaternions, 1 corresponds to 1 and the other three basis vectors are supposed to correspond to i, j, and k.
I say "supposed to" because i, j, and k square to −1, and that's not automatic in a Clifford algebra. Instead it depends on the choice of quadratic form. It doesn't really matter whether you use signature (3,0) or (0,3) (after all, one is related to the other by a sign), but it does matter that you use a definite quadratic form, i.e. one where all the signs are the same. Otherwise you might get something like (xy)^2 = (xz)^2 = −1 but (yz)^2 = +1, and that Clifford algebra isn't the quaternions.
The analogous thing to do for a quaternion is to look at q*q for an imaginary quaternion q. You find that the map q ↦ q*q is a quadratic form with signature (3,0) (or, if you flip the sign and consider -q*q, (0,3)); in fact it's exactly the quadratic form we used in constructing Cl(3,0).
By the way, please don't edit my comments in the future. I realize that you were trying to be helpful by wikilinking, but others will identify those wikilinks as mine, and they're not. In the future, if you want to do something like that, you can use the Wikipedia:Sandbox. Ozob (talk) 23:56, 24 November 2008 (UTC)
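Ozob's basis computation can be verified mechanically. The sketch below uses an encoding of my own (a basis blade is a tuple of indices; the product sorts the indices, flipping sign per transposition, with e_i e_i = +1 for signature (3,0)) and checks that the even-part bivectors satisfy the quaternion relations. The identification i = e3e2, j = e1e3, k = e2e1 is one common choice, not something fixed by the discussion above.

```python
# Sketch of the geometric product on basis blades of Cl(3,0):
# e_i e_i = +1, e_i e_j = -e_j e_i for i != j.  Helper names are mine.

def blade_mul(a, b):
    """Multiply basis blades given as index tuples; return (sign, blade)."""
    seq = list(a) + list(b)
    sign = 1
    i = 0
    while i < len(seq) - 1:
        if seq[i] > seq[i + 1]:
            # transposition of distinct indices flips the sign
            seq[i], seq[i + 1] = seq[i + 1], seq[i]
            sign = -sign
            i = max(i - 1, 0)
        elif seq[i] == seq[i + 1]:
            # e_i e_i = +1 in signature (3,0): drop the pair
            del seq[i:i + 2]
            i = max(i - 1, 0)
        else:
            i += 1
    return sign, tuple(seq)

def mul(x, y):
    """Product of signed blades (sign, index-tuple)."""
    sx, bx = x
    sy, by = y
    s, b = blade_mul(bx, by)
    return (sx * sy * s, b)

# even-subalgebra bivectors, stored as signed canonical blades:
I = (-1, (2, 3))   # i = e3 e2
J = (1, (1, 3))    # j = e1 e3
K = (-1, (1, 2))   # k = e2 e1

print(mul(I, I))              # (-1, ()): i^2 = -1
print(mul(I, J) == K)         # True: ij = k
print(mul(mul(I, J), K))      # (-1, ()): ijk = -1
```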
Hobojaks: as in fact the article proposes, a better place to start might be the article Geometric algebra, and some of the websites and online papers that it links to at the end. Whereas the Clifford algebra article concentrates a lot on how different Clifford algebras form a rather abstract algebraic pattern, the Geometric algebra article is meant to be more concrete, discussing how each particular Clifford algebra can be understood in terms of some quite fundamental geometric properties, and used to do geometric calculations. Or at least that's what that article ought to survey - at the moment it's not very comprehensive, there's not enough overview in it, and what it does treat gets bogged down in too much detail. But I think it may well be a better starting point for what you're looking for, and some of the links from it even more so. If you have access to a good university library, you might check out David Hestenes' "New Foundations for Classical Mechanics", which gives quite a nice (though dense) physical presentation, showing how it can be used to simplify orbital mechanics calculations etc.
So, on to Cℓ0,2(R) and Cℓ+3,0(R).
Clifford algebras are closed algebras built up from simple basis elements e1, e2 ... en, under the multiplication constraint that for simple bases
 \mathbf{e}_i \mathbf{e}_j = - \mathbf{e}_j \mathbf{e}_i  \qquad \qquad (i \neq j)
 \mathbf{e}_i \mathbf{e}_j = \pm 1 \qquad\qquad (i = j)
The signature of the algebra tells you how many different simple bases e1, e2 ... en the algebra is built from, and how many of those square to +1, and how many to -1. The sign becomes particularly important for bivectors (products of two simple bases), which can be used to parametrise rotations. In an algebra where the bivector square has one sign (-1), the bivectors parametrise ordinary rotations (just like quaternions do); but in an algebra where the bivector square has the other sign (+1), the bivectors parametrise hyperbolic rotations (Lorentz boosts, just like quaternions don't). The mixed-signature algebra Cℓ1,3(R) used for special relativity has six independent unit bivectors, three of which correspond to ordinary rotations about different axes, and three of which to hyperbolic rotations (Lorentz boosts in different directions).
Cℓ0,2(R) means that the quaternions can be understood as an algebra based on two vectors i and j, with signature -1, and one (derived) bivector k = ij, used to represent rotations in that 2d geometrical space.
Cℓ+3,0(R) means that the quaternions can be understood as an algebra based on three bivectors i, j and k used to represent rotations in a 3d geometrical space, derived from three vectors e1, e2 and e3 each with the property that (ei)^2 = +1. The + sign in the algebra signature means that only half the full Clifford algebra is there -- the bivector part is there in the quaternions, but not (in the Clifford algebra view) the part of the Clifford algebra that would be used to represent vectors. Jheald (talk) 11:56, 25 November 2008 (UTC)
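The Cℓ0,2(R) description can be checked in the same mechanical way. Here is a sketch (encoding and helper names are mine), with the single change that e_i e_i = -1 for signature (0,2); then i = e1, j = e2 and the derived bivector k = e1e2 satisfy the quaternion relations:

```python
# Sketch of the geometric product on basis blades of Cl(0,2):
# e_i e_i = -1, e_i e_j = -e_j e_i for i != j.

def blade_mul_02(a, b):
    """Multiply basis blades (index tuples) in Cl(0,2); return (sign, blade)."""
    seq = list(a) + list(b)
    sign = 1
    i = 0
    while i < len(seq) - 1:
        if seq[i] > seq[i + 1]:
            seq[i], seq[i + 1] = seq[i + 1], seq[i]
            sign = -sign
            i = max(i - 1, 0)
        elif seq[i] == seq[i + 1]:
            sign = -sign          # e_i e_i = -1 in signature (0,2)
            del seq[i:i + 2]
            i = max(i - 1, 0)
        else:
            i += 1
    return sign, tuple(seq)

i, j = (1,), (2,)
print(blade_mul_02(i, j))            # (1, (1, 2)): ij = k = e1 e2
print(blade_mul_02(i, i))            # (-1, ()): i^2 = -1
print(blade_mul_02((1, 2), (1, 2)))  # (-1, ()): k^2 = -1
```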

Do all elements in Cℓ3,0(R) have an inverse?

The scary thing is that this is starting to make a little bit more sense to me. In this question I left the + sign off intentionally, because I wanted to indicate the whole algebra. What I found of particular interest was article 214 of Hamilton's Elements,[1][2] where he proves that in addition to i, j and k there has to be an imaginary scalar. This is the plain old imaginary of ordinary algebra. He later starts to call this quantity h.

This is somewhat tangential to the present subject. Remembering that Hamilton died before he ever finished his Elements, I find it interesting that at the start of each article about biquaternions he always spells out that in that article h is both commutative and associative, as if there were another alternative that he intended to explore. Could this alternative be one where h is anti-commutative and anti-associative? I read somewhere that Hamilton actually invented the word "associative" in his discussion with Graves about octonions, but he never published anything about them.

Anyway, getting back to the subject at hand: if hh = -1, and if h is commutative and associative, we have

hihi=(hh)(ii)=(-1)(-1)=1

This gives the most general form of a double quaternion as:

w + xi + yj + zk + w'h + x'hi + y'hj + z'hk

where w,x,y,z,w',x',y',z' are all real and h,i,j and k all square to minus one.

One thing I find perplexing about this approach, which does not seem to be covered in any of the articles I have looked at, and on which I have not yet located Hamilton's thoughts, is the following question.

In general, does the (h commutative, associative) version of this entity have an inverse?

If it did, then this would seem to contradict Hurwitz's theorem.

Some readers may be interested in knowing that Elements of Quaternions mentions Clifford by name.[3]

Hobojaks (talk) 06:17, 28 November 2008 (UTC)
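The inverse question above can be settled by direct computation. A sketch follows, with h taken to be Python's 1j, acting as Hamilton's commuting, associative imaginary scalar: (hi)^2 = +1, and 1 + hi and 1 - hi multiply to zero, so such elements have no inverse. This is why there is no conflict with Hurwitz's theorem: the biquaternions are not a division algebra.

```python
# Sketch: biquaternions as quaternions with complex coefficients,
# (w, x, y, z) tuples of Python complex numbers; h = 1j.

def hamilton(p, q):
    """Hamilton product; the complex scalar h = 1j commutes through."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

hi = (0, 1j, 0, 0)                       # the element h*i
print(hamilton(hi, hi) == (1, 0, 0, 0))  # True: (hi)^2 = (hh)(ii) = +1

u = (1, 1j, 0, 0)                        # 1 + hi
v = (1, -1j, 0, 0)                       # 1 - hi
print(hamilton(u, v) == (0, 0, 0, 0))    # True: a zero divisor
```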


Let's write your hi as σ1, which is how it is often written by Clifford algebra people, reflecting the fact that Cℓ3,0(R) is also the Pauli algebra.
The answer to your question is that almost all elements of Cℓ3,0(R) have a multiplicative inverse. One can use this to define, for example, a multivector differential calculus, which (I believe) can be quite usefully applied to a range of physical applications.
But Cℓ3,0(R) is not a division algebra, because there is a set of exceptions; namely, numbers like (1+σ3).
Numbers like this give rise to idempotents, ½(1+σ3).½(1+σ3)=½(1+σ3); and zero divisors, (1+σ3).(1-σ3)= 0 .
Equivalently, if we pre- and post-multiply by σ1, then (σ1 - iσ2).(σ1 - iσ2) = 0, where i = σ1σ2σ3.
It would be interesting to know how this algebraic property can best be related to geometric properties of R3. The property certainly directly determines the available spinors in R3. But that's as far as my current knowledge goes. Jheald (talk) 10:12, 28 November 2008 (UTC)
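The idempotent and zero-divisor identities above can be checked numerically in the Pauli-matrix representation, where σ3 = [[1,0],[0,-1]]. A minimal sketch with plain 2x2 matrices (the helper names are mine):

```python
# Sketch: verify (1/2)(1+sigma3) is idempotent and (1+sigma3)(1-sigma3) = 0
# using the 2x2 matrix representation sigma3 = [[1, 0], [0, -1]].

def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[r][k] * B[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def add(A, B, sb=1):
    """A + sb*B for 2x2 matrices (sb = -1 gives subtraction)."""
    return [[A[r][c] + sb * B[r][c] for c in range(2)] for r in range(2)]

I2 = [[1, 0], [0, 1]]
s3 = [[1, 0], [0, -1]]

P = [[0.5 * x for x in row] for row in add(I2, s3)]   # (1/2)(1 + sigma3)
print(matmul(P, P) == P)                              # True: idempotent
print(matmul(add(I2, s3), add(I2, s3, -1)))           # [[0, 0], [0, 0]]
```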