From Wikipedia, the free encyclopedia

Orthogonality existed long before computers!

Those trained in computer science think they invented everything known before computers existed: integrals, mathematical induction, orthogonality, etc. I've left the page a bit of a messy hodge-podge, but far better than what was here. Michael Hardy 02:27, 13 Jan 2004 (UTC)

If non-orthodox is "heterodox", is "heterogonal" non-orthogonal? (Google has one hit for that word, in an unmaths context.) 21:05, 5 Aug 2004 (UTC)
It is a needless complication of the definition of orthogonality to bring in the subscripts i and j when one is only trying to define what it means to say that two functions are orthogonal. Also, it is incorrect, unless one has first given the subscripts some meaning.
Michael Hardy 01:45, 6 Sep 2004 (UTC)


I'd like an example of two simple functions that are orthogonal. - Omegatron 16:22, Sep 29, 2004 (UTC)

Take two orthogonal vectors and then change basis to {1, t, t^2, ..., t^n}? Dysprosia 22:32, 29 Sep 2004 (UTC)
No, that won't work until you specify a measure (or "weight function") with respect to which those are orthogonal. See for example Chebyshev polynomials, Legendre polynomials, and Hermite polynomials (all exceptions to the rule that it is better to use singular words as Wikipedia article titles). Those are examples. Also, see Bessel functions. Michael Hardy 00:32, 30 Sep 2004 (UTC)
Well, it does depend on the inner product you use to determine orthogonality, though. But yes, if you use the inner product defined in the article, it won't work.
Dysprosia 01:48, 30 Sep 2004 (UTC)
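To make the point above concrete, here is a quick numerical sketch in Python (the midpoint-rule helper `inner` is illustrative, not anything from the article). It shows that the monomials 1 and t² are not orthogonal under the plain inner product on [−1, 1], while the first few Legendre polynomials are.

```python
import math

def inner(f, g, a=-1.0, b=1.0, n=100_000):
    """Approximate the L2 inner product <f, g> = integral of f(t)g(t) dt on [a, b] (midpoint rule)."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * g(a + (k + 0.5) * h) for k in range(n)) * h

# The monomials 1 and t^2 are NOT orthogonal on [-1, 1]: <1, t^2> = 2/3.
print(inner(lambda t: 1.0, lambda t: t * t))

# The Legendre polynomials P0 = 1, P1 = t, P2 = (3t^2 - 1)/2 ARE pairwise orthogonal there:
P0 = lambda t: 1.0
P1 = lambda t: t
P2 = lambda t: (3 * t * t - 1) / 2
print(abs(inner(P0, P1)) < 1e-8, abs(inner(P0, P2)) < 1e-8, abs(inner(P1, P2)) < 1e-8)  # True True True
```

Changing the measure (as in the weight-function discussion on this page) would change which polynomials come out orthogonal.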
Some of us don't know what that means... Aren't sin(x) and cos(x) orthogonal? Also, certain pulse trains? - Omegatron 22:53, Sep 29, 2004 (UTC)
If you use the inner product from the article, and take the integral from -a to a with weight one, sin(x) and cos(x) are indeed orthogonal functions (calculate the integral for yourself).
Dysprosia 01:48, 30 Sep 2004 (UTC)
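For anyone who wants to "calculate the integral for yourself" numerically, a small Python sketch (the midpoint-rule integrator is just for illustration):

```python
import math

def inner(f, g, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of f(x)g(x) over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * g(a + (k + 0.5) * h) for k in range(n)) * h

# sin and cos are orthogonal on any symmetric interval [-a, a] (their product is odd):
for a in (0.7, 1.0, math.pi, 5.3):
    print(abs(inner(math.sin, math.cos, -a, a)) < 1e-8)   # True each time

# whereas <sin, sin> on [-pi, pi] is pi, not zero:
print(inner(math.sin, math.sin, -math.pi, math.pi))       # ~3.14159
```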

Also, please explain why the integral is a to b instead of -∞ to +∞? - Omegatron 16:24, Sep 29, 2004 (UTC)

No reason, though you can define another inner product with those bounds and then consider orthogonality with respect to that inner product. Dysprosia 22:32, 29 Sep 2004 (UTC)
I see that the a and b are used in the article on inner product, too.
Omegatron 22:53, Sep 29, 2004 (UTC)
You need to understand that when we set the limits of an integral as [a, b], then a and b can be whatever we want them to be, including minus or plus infinity, as long as the limits are taken to be real and not complex. (talk) 00:22, 25 August 2012 (UTC)

It is important to realize that functions are orthogonal only on a predefined interval. In other words, sin(x) and cos(x) are not orthogonal, generally speaking. They are only orthogonal on the interval [a, b] if |b - a| = n*pi where n is a nonzero integer. This is also why inner products (for sinusoids) are defined on [a, b] and not -∞ to +∞. Severoon 22:41, 1 May 2006 (UTC)
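The interval-length condition can be checked numerically (Python sketch; midpoint-rule integrator for illustration; note that, as pointed out above, symmetric intervals [−a, a] also give zero because the product is odd):

```python
import math

def inner(f, g, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of f(x)g(x) over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * g(a + (k + 0.5) * h) for k in range(n)) * h

a = 0.3
for k in (1, 2, 3):   # intervals of length k*pi starting at an arbitrary point
    print(abs(inner(math.sin, math.cos, a, a + k * math.pi)) < 1e-6)   # True

# A generic interval does not work:
print(inner(math.sin, math.cos, 0.3, 1.5))   # ~0.45, clearly nonzero
```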

Missing bracket

There is a missing opening square bracket on the integration example image, I believe. --anon

Fixed now. I think that bracket was left out on purpose. But I agree with you that things look better with the bracket in. Oleg Alexandrov 18:25, 15 May 2005 (UTC)


for some positive integer a, and for 1 ≤ k ≤ a-1, these vectors are orthogonal, for example (1,0,0,1,0,0,1,0)T,(0,1,0,0,1,0,0,1)T ,(0,0,1,0,0,1,0,0)T are orthogonal.

interesting. So this is where discretely sampled signals like
come from? Also, these signals are orthogonal too, according to another site I saw. Can we extrapolate the signal processing version from the many dimensional vector version? Maybe graphs? - Omegatron 13:41, Sep 30, 2004 (UTC)
They appear to be. Calculate the dot product of these "signals", so to speak, across each triplet. If they sum to 0 for all the bit triplets over your time period they are orthogonal. I don't understand what you mean about "extrapolate the signal processing version from the many dimensional vector version".
Dysprosia 14:04, 30 Sep 2004 (UTC)
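The dot-product check described above, written out in Python for the three example vectors:

```python
v1 = (1, 0, 0, 1, 0, 0, 1, 0)
v2 = (0, 1, 0, 0, 1, 0, 0, 1)
v3 = (0, 0, 1, 0, 0, 1, 0, 0)

def dot(x, y):
    """Plain Euclidean dot product."""
    return sum(a * b for a, b in zip(x, y))

# The 1s never line up, so every pairwise dot product is 0:
print(dot(v1, v2), dot(v1, v3), dot(v2, v3))   # 0 0 0
```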
the difference being that this is a discrete function instead of a vector,



but I guess they can be seen as the same thing from different perspectives? Can you have infinite-dimensional vectors? The discrete-"time" function can be "converted" to a continuous-time function (think sampling), though, which can also be orthogonal to another similar function if they have the same "shape" relationship... - Omegatron 14:40, Sep 30, 2004 (UTC)

Heh. Lots of "quotes". I can explain better later. I will draw some pictures... - Omegatron 14:41, Sep 30, 2004 (UTC)
Yes, you can have vectors of infinite dimension. You know there is in fact nothing really special about any of these definitions of orthogonality - what is the important property is the inner product, which determines whether two vectors in a vector space are orthogonal or not, or determines a "length" or not. Change the inner product, and these definitions change also.
Dysprosia 14:49, 30 Sep 2004 (UTC)
Not sure that I understand what you're trying to say. So you could define your own "inner product" for which a cat is orthogonal to a dog? - Omegatron 19:55, Sep 30, 2004 (UTC)
Metaphorically, yes, as long as the inner product you define is in fact an inner product. There are some requirements on this, see inner product. Literally, you have to define what you mean by a cat and dog first before you can say they are orthogonal to each other... ;) Dysprosia 01:07, 1 Oct 2004 (UTC)
Can you have infinite-dimensional vectors?

Except that it's the space that is infinite-dimensional, rather than the vectors themselves. The two most well-known infinite-dimensional vector spaces are ℓ², which is the set of all sequences of scalars such that the sum of the squares of their norms is finite (for example (1, 1/2, 1/3, ...) is such a vector because 1² + (1/2)² + (1/3)² + ... is finite), and L², the set of all functions f such that

∫_{whatever space} |f(x)|² dx < ∞
("Whatever space" could be for example the interval from 0 to 2π, or could be the whole real line, or could be something else.) Michael Hardy 19:30, 30 Sep 2004 (UTC)
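A numerical illustration of the ℓ² example above (Python sketch; the cutoff of one million terms is arbitrary):

```python
import math

# The sequence (1, 1/2, 1/3, ...) lies in the sequence space described above:
# the sum of the squares of its entries converges (to pi^2/6, in fact).
partial = 0.0
for n in range(1, 1_000_001):
    partial += (1.0 / n) ** 2

print(partial)                                  # ~1.644933
print(abs(partial - math.pi ** 2 / 6) < 1e-5)   # True: the tail beyond N is about 1/N
```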

Yes. So what is the connection between the discrete function with an infinite number of points ..., f[-1], f[0], f[1], ... and a vector with an infinite number of dimensions (..., x_{-1}, x_0, x_1, ...)? Are these the same concept said in two different ways or are there subtle differences? For instance, in MATLAB or GNU Octave you use vectors or matrices for everything, and use them to represent strings of sampled data or two-dimensional arrays of data, both of which could also be thought of as functions of the vector or matrix coordinates.
Not that this is a site for teaching people math, but it could point out things that need to be included in various articles.:-)
Omegatron 19:55, Sep 30, 2004 (UTC)
Let x_i = f(i)? Dysprosia 01:07, 1 Oct 2004 (UTC)
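That correspondence x_i = f(i) is exactly how one moves between the two pictures. A small Python sketch (the sampling grid is chosen for illustration): sampling sin and cos uniformly over one period gives vectors that are orthogonal under the ordinary dot product, mirroring the integral.

```python
import math

N = 1000   # number of samples over one period
x = [math.sin(2 * math.pi * i / N) for i in range(N)]   # x_i = f(t_i)
y = [math.cos(2 * math.pi * i / N) for i in range(N)]   # y_i = g(t_i)

dot = sum(a * b for a, b in zip(x, y))
print(abs(dot) < 1e-9)   # True: the sample vectors are orthogonal, like the functions
```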

Orthogonal curves

This article does not mention orthogonal curves or explain what it means that two circles are orthogonal to each other. Hyperbolic geometry mentions orthogonal circles, but I had to look up the exact meaning elsewhere (more precisely, on MathWorld).

My question is, should orthogonal curves and circles be covered in this article, or do they qualify as a "related topic"? Fredrik | talk 03:16, 21 Oct 2004 (UTC)

The concept is not really that different, though Mathworld's geometric treatment may merit a separate page. One could perhaps say generally that two curves parametrized by functions f and g are orthogonal if, where they intersect, ∇f·∇g = 0, though I'm not sure that's a decent, established, or useful definition... Dysprosia 08:14, 21 Oct 2004 (UTC)
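The ∇f·∇g = 0 criterion above does match the standard notion of orthogonal circles. A Python sketch under that definition (the circle placement is chosen so that the classical orthogonality condition d² = r₁² + r₂² between the centers holds):

```python
import math

# f(x, y) = x^2 + y^2 - 1 = 0            (unit circle at the origin)
# g(x, y) = (x - c)^2 + y^2 - 1 = 0      (unit circle centered at (c, 0))
# With c = sqrt(2), the center distance satisfies c^2 = 1 + 1, the classical
# condition for two circles to intersect orthogonally.
c = math.sqrt(2)
x, y = 1 / math.sqrt(2), 1 / math.sqrt(2)   # one of the two intersection points

grad_f = (2 * x, 2 * y)
grad_g = (2 * (x - c), 2 * y)
dot = grad_f[0] * grad_g[0] + grad_f[1] * grad_g[1]
print(abs(dot) < 1e-12)   # True: the gradients (hence the tangents) are perpendicular
```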
A section on orthogonal curves must certainly be added to the article.--Shahab (talk) 08:43, 8 March 2008 (UTC)

Quantum mechanics

The article states that

In quantum mechanics, two wavefunctions ψ_m and ψ_n are orthogonal unless they are identical, i.e. m = n. This means, in Dirac notation, that ⟨ψ_m|ψ_n⟩ = 0 unless m = n, in which case ⟨ψ_m|ψ_n⟩ = 1. The fact that ⟨ψ_n|ψ_n⟩ = 1 is because wavefunctions are normalized.
This is wrong in the general case. The author probably supposed that ψ_m and ψ_n are eigenstates of the same observable relating to two different eigenvalues, in which case it is trivially true. The definition of orthogonality in quantum mechanics is the same as in the L² space in mathematics, so this precision can be removed without anything lacking. -- 20:11, 16 April 2006 (UTC)
It is "trivial"? Not for everyone! I'm not an expert in quantum mechanics--my specialisation is in complex systems--so I will not defend my original statement down to the last letter. However, I do feel strongly that the comments on quantum mechanics should be modified, not removed.
The reason that I added the paragraph in question is because when I was studying for my last quantum mechanics class, I found that Wikipedia did not answer the questions that I had about orthogonality. If you simply take out the stuff on quantum mechanics, then other people will likely come along with the same queries as me--and they'll be unsatisfied too. If you want to clarify that it's for the two eigenvalues of the same observable, that's fine. But just because it's not the most general case doesn't mean it's not an important one.
Ckerr 16:12, 19 April 2006 (UTC)
Since there has been no reply, I'm going to reinstate the part on QM. Please correct it if it needs correcting, but please don't just axe it! Ckerr 09:04, 25 April 2006 (UTC)
If I may give my opinion: I think this should be removed or moved. The definition of orthogonality already caters for the quantum mechanics explanation. The only reason the two wavefunctions are said to be orthogonal is because they ARE orthogonal in the mathematical sense, therefore it does not make sense for this entry to be under "Derived Meanings". I will give a chance for the author to reply to my suggestion, but if I don't hear from you in 2 or 3 weeks I'll move it to the "Examples" section.
Maszanchi 09:38, 16 June 2007 (UTC)
I have performed the move and rewritten the section according to the comment above. The part on normality was removed as it didn't seem relevant to orthogonality.
TomC phys 09:04, 5 September 2007 (UTC)
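For readers who land here with the original question about orthogonality in quantum mechanics, a numerical check of orthonormality for a standard example, the particle-in-a-box eigenstates ψ_n(x) = √2·sin(nπx) on [0, 1] (Python sketch; the midpoint integrator is illustrative):

```python
import math

def inner(m, n, steps=200_000):
    """<psi_m | psi_n> for psi_n(x) = sqrt(2) * sin(n * pi * x) on [0, 1]."""
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        total += 2.0 * math.sin(m * math.pi * x) * math.sin(n * math.pi * x)
    return total * h

print(abs(inner(1, 2)) < 1e-6, abs(inner(2, 3)) < 1e-6)           # True True: m != n -> orthogonal
print(abs(inner(1, 1) - 1) < 1e-5, abs(inner(3, 3) - 1) < 1e-5)   # True True: normalized
```

This is the "eigenstates of the same observable" case discussed above; for general states the inner product need not vanish.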

Weight Function?

Why is there mention of a weight function w(x) in the definition of the inner product? Its presence plays no role whatsoever in the definition of the inner product of f and g, so why not remove it? (I understand the role of a weight function in PDEs like the heat eqn, but isn't it unnecessary and extraneous in a page on orthogonality?) Severoon 22:45, 1 May 2006 (UTC)

Well, I suppose weight functions aren't truly essential to the notion being discussed, but they make it much more accessible. We could just say "Given an inner product ⟨·, ·⟩, f and g are orthogonal if ⟨f, g⟩ = 0". But the use of weight functions gives a good motivation for the construction of inner products, and for the notion that one can construct different inner products, and hence different notions of orthogonality, on the same underlying set of objects (e.g. polynomials).
On second thought, I see your point. The section isn't very clear. I'll fix it.
William Ackerman 15:46, 12 May 2006 (UTC)
I agree that the section is not clear. In fact, it's so unclear it seems to have led to confusion right in the examples section: "These functions are orthogonal with respect to a unit weight function on the interval from −1 to 1." (See the third example.) In fact, the functions in the example are not "orthogonal w.r.t. a unit weight function"...they're orthogonal to each other on the specified interval!
This definitely needs to be changed. The introduction of a weight function should be brought up in the context of a physical example, something like the heat equation on a 1D conductive rod of nonuniform density. Short of an explicit physical application, it just seems to be confusing things. Severoon 23:34, 12 May 2006 (UTC)
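Short of the physical example asked for above, here is a Python sketch of what the weight function buys, using the Chebyshev polynomials T0 = 1 and T2 = 2x² − 1, which are orthogonal on (−1, 1) with respect to w(x) = 1/√(1 − x²) but not with respect to the unit weight. The substitution x = cos θ removes the endpoint singularity:

```python
import math

T0 = lambda x: 1.0
T2 = lambda x: 2 * x * x - 1      # Chebyshev polynomial of degree 2

def chebyshev_inner(f, g, n=200_000):
    """Integral of f(x) g(x) / sqrt(1 - x^2) over (-1, 1), computed via x = cos(theta)."""
    h = math.pi / n
    return sum(f(math.cos((k + 0.5) * h)) * g(math.cos((k + 0.5) * h)) for k in range(n)) * h

def unit_inner(f, g, n=200_000):
    """Integral of f(x) g(x) over [-1, 1] with weight 1 (midpoint rule)."""
    h = 2.0 / n
    return sum(f(-1 + (k + 0.5) * h) * g(-1 + (k + 0.5) * h) for k in range(n)) * h

print(abs(chebyshev_inner(T0, T2)) < 1e-8)   # True: orthogonal under the Chebyshev weight
print(unit_inner(T0, T2))                    # ~ -2/3: NOT orthogonal under the unit weight
```

So the same pair of polynomials is orthogonal or not depending on the weight, which is exactly why the weight appears in the definition.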

Emergency fix

I have just put in an emergency fix for the question raised by, and left a note on his talk page. This was a proof that an orthogonal set is a linearly independent set.

It's not at all clear that putting in this proof is the right thing for the article as a whole -- I just needed a quick fix. (It's not even clear that this is the best proof. It was off the top of my head. And it is definitely not formatted well.) Maybe the linear independence is truly obvious, and saying anything about it is just inappropriate for the level of the discussion. Maybe the proof/discussion should be elsewhere.

If/when someone has the time to look over the whole article, and think about the context of the orthogonality/independence issue, and figure out the right way to deal with all this, it would be a big help.
William Ackerman 16:08, 21 July 2006 (UTC)

Thanks for the proof. But I tend to agree with your doubt that the proof was not the right thing for the article as a whole, especially that early in the article. Proofs are not really encyclopedic to start with (see also Wikipedia:WikiProject Mathematics/Proofs). I removed the proof for now.
Oleg Alexandrov (talk) 08:33, 22 July 2006 (UTC)

On radio communications 1

The radio communications subsection claims that TDMA and FDMA are non-orthogonal transmission methods. However, in the theoretically ideal situation, this is not the case. For FDMA, note the orthogonality of sinusoids of different frequencies. Thus, restricting users to a certain frequency range IS orthogonal so long as the frequency ranges are non-overlapping. This is similarly true for the TDMA case. Assume that each user is restricted to transmit in a specific, non-overlapping time, i.e.,

s_i(t) = 0 for t outside the slot T_i, with T_i ∩ T_j = ∅ for i ≠ j,

so that the inner product

∫ s_i(t) s_j(t) dt = 0 for i ≠ j.
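The time-slot argument can be demonstrated with a sampled toy model in Python (the burst shapes and slot boundaries are made up for illustration):

```python
import math

N = 2000                 # samples across the observation window [0, 2)
dt = 2.0 / N
t = [k * dt for k in range(N)]

# User 1 transmits only in slot [0, 1); user 2 only in slot [1, 2):
s1 = [math.sin(10 * x) if x < 1.0 else 0.0 for x in t]
s2 = [math.sin(10 * x) if x >= 1.0 else 0.0 for x in t]

inner = sum(a * b for a, b in zip(s1, s2)) * dt
print(inner == 0.0)      # True: with disjoint supports, every term of the sum is zero
```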
On radio communications 2

I agree with the comment already present in the comment page. The sentence "An example of an orthogonal scheme is Code Division Multiple Access, CDMA. Examples of non-orthogonal schemes are TDMA and FDMA," is wrong and should be deleted. All in all the section on Radio Communications is not satisfactory as it is. I would delete and replace with something such as the following text, or similar one:
"Ideally FDMA (Frequency Division Multiple Access) and TDMA (Time Division Multiple Access) are both orthogonal multiple access techniques, and they achieve orthogonality in the frequency domain and in the time domain, respectively. In practice all orthogonal techniques are subject to impairments, which however can be controlled to any desired level with appropriate design. In the case of FDMA the loss of orthogonality arises due to the imperfection of spectrum shaping, and it can be combated with appropriate guard bands. In the case of TDMA, the loss of orthogonality is the result of imperfect system synchronization.
The question can be asked whether there are other "domains" in which orthogonality can be imposed, and the answer is that a third domain is the so-called "code domain". This leads to CDMA (Code Division MA), a technique which impresses a codeword on top of the digital signal. If the set of codewords is chosen appropriately (e.g. Walsh-Hadamard codes), and some more conditions are assumed on the signal and on the channel, CDMA can be orthogonal. However, in many conditions, guaranteeing near-ideal orthogonality in CDMA implementations is more critical.
In packet communications, with uncoordinated terminals, other MA techniques are used. For example, the Aloha technique was originally invented for computer communications via satellite [FALSE. It was made for terrestrial radio communications in Hawaii.]

Since the terminals transmit as soon as they have a packet ready, in an uncoordinated manner, packets can collide at the receiver, producing interference. Therefore Aloha is one example of a nonorthogonal MA technique, even under ideal operational conditions." 09:55, 1 October 2006 (UTC)

The above totally disregards the concept of "slotted Aloha", which has been widely used. (talk) 00:22, 25 August 2012 (UTC)

Response to changes inserted regarding Orthogonality and Radio Communications:

The statement that TDMA and general FDMA are examples of orthogonal schemes, while CDMA is not, is incorrect. There are many in the wireless industry who erroneously believe that orthogonality is defined by whether or not two things interfere or produce "cross talk". However, that is NOT what defines orthogonality.

Orthogonality is a mathematical property with well-defined and SPECIFIC criteria:

   ∫ F_i(x) · F_j(x) dx = δ_ij   (i.e., nonzero if and only if i = j)

Note the definition does NOT contain a windowing function (or a weighting function). Two non-coincidental events that do not interfere (0 sum) are NOT necessarily orthogonal. A bus and a train passing over a railroad crossing 15 minutes apart do not interfere. This does NOT indicate that buses and trains are orthogonal. A TDMA message sent during one second, followed by a second one sent some time later, do not interfere because they are not simultaneous and therefore never have the opportunity to interfere. This does NOT indicate they are orthogonal.

IF TDMA signals were orthogonal, why then do signals sent from adjacent cells within the same network interfere with each other?

Arbitrarily injecting a windowing function into the definition would suggest that ANY two functions could be orthogonal, which absolutely is not true. If we transmit a message convolved with a polynomial in "x" and 2 seconds later transmit a message convolved with sin²(x), the two (non-simultaneous) messages will not interfere. This is not because x² and sin²(x) are orthogonal (they are NOT), but because they were sent at completely different times.

Orthogonal-FDM IS orthogonal (by design), but generic FDMA is NOT orthogonal. If FDMA were orthogonal, then why would we in the industry have to spend BILLIONS of dollars on filtering specifically to keep adjacent signals from interfering with each other?

Orthogonal-FDM meets the mathematical criteria: sin(nx) and sin(mx) are orthogonal functions only when "n" and "m" are distinct integers, but otherwise they are NOT.
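That claim is easy to check numerically over one common period, [0, 2π] (Python sketch; the midpoint integrator is for illustration):

```python
import math

def inner(m, n, steps=200_000):
    """Integral of sin(m x) * sin(n x) over [0, 2*pi] (midpoint rule)."""
    h = 2 * math.pi / steps
    return sum(math.sin(m * (k + 0.5) * h) * math.sin(n * (k + 0.5) * h)
               for k in range(steps)) * h

print(abs(inner(2, 3)) < 1e-6)            # True: distinct integers -> orthogonal
print(inner(2, 2))                        # ~pi: same frequency, not orthogonal
print(inner(1, 1.3))                      # ~1.38: non-integer ratio, not orthogonal
```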

CDMA IS orthogonal (again, by design) due to the orthogonality of the Walsh Codes employed (?) (provided all the Walsh Codes are synchronous - a mathematical requirement for all orthogonal functions). The suggestion that CDMA is NOT orthogonal since it requires an integrator and "basis codes" to reject unwanted signals, reveals a significant lack of understanding regarding CDMA and orthogonality in general, in that the use of orthogonal Walsh codes is at the very core of what CDMA is and how it operates. The use of an integrator in CDMA fulfills the role of the integration process, which is itself fundamental to the definition of orthogonality.
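The orthogonality of the Walsh codes mentioned here can be verified with the Sylvester/Hadamard construction (Python sketch):

```python
def hadamard(order):
    """Sylvester construction: H_(2n) = [[H, H], [H, -H]], starting from [[1]]."""
    H = [[1]]
    for _ in range(order):
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

H = hadamard(3)          # 8 Walsh-Hadamard codes of length 8
ok = all(sum(a * b for a, b in zip(H[i], H[j])) == 0
         for i in range(len(H)) for j in range(len(H)) if i != j)
print(ok)                # True: any two distinct codes have dot product exactly 0
```

Note that the dot products assume the codes are chip-aligned, which is exactly the synchronization requirement discussed in this thread.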

No, CDMA often operates by using long orthogonal pseudorandom binary sequences (PN sequences). Read up on these in the Tracking and Data Relay Satellite System, for example. (talk) 00:22, 25 August 2012 (UTC)

You cannot [USUALLY] simply multiply two discrete fragments of any two orthogonal functions and get 0. For example sin(x) and cos(x) are orthogonal over intervals of length a multiple of π, but sin(45°)·cos(45°) ≠ 0. Existence of orthogonality between two such functions requires full integration over an extended window (e.g. over one or more periods).

If orthogonality didn't exist in CDMA, how then do hundreds of CDMA calls transmitted SIMULTANEOUSLY over the SAME RF channel remain isolated from one another? BILLIONS of such calls have been processed over active CDMA channels this past decade with enormous success, which would NOT have been possible IF these CDMA signals were not orthogonal to each other. Stevex99, 5 July 07

Yes, by using orthogonal PN sequences. (talk) 00:22, 25 August 2012 (UTC)
A couple of points:
  • TDMA is orthogonal. Separating by time is one way of satisfying the orthogonality condition, as ∫ g(t) g(t − T) dt = 0 (assuming the signalling pulses are T or less in time).
  • In practice, CDMA is pseudo-orthogonal, not orthogonal. While the channelization codes from a single base-station are typically orthogonal, the scrambling codes are not (in WCDMA, Gold codes are used, for instance). And you've already noted another problem, which is the requirement for perfect synchronization. In practice, this is very rarely achieved.

Oli Filth 08:45, 6 July 2007 (UTC)

Thanks Oli for your comments, however I would have to disagree. Separating by time is effectively inserting a windowing function on the signals being transmitted, and it does not make the fundamental signals orthogonal to each other (which is why they tend to interfere with emissions from adjacent cells).

What do you consider to be the "fundamental" signals, and why? If you mean the underlying sinusoidal carriers, then yes of course they're not orthogonal when delayed, but that's why we apply the "windowing function" (normally, this is known as the "pulse-shaping function"), to make the transmitted signals orthogonal to one another. In a baseband model, all you have is the pulse-shaping function. Oli Filth 15:46, 6 July 2007 (UTC)

But that's kind of my whole point. If you have to take action specifically to prevent two signals from interfering (in this case gating them on and off), then they clearly don't possess the mathematical characteristic of orthogonality. And, if they were in fact orthogonal, then you wouldn't have to deal with the issue of intercell interference within the system. Orthogonal-FDM signals (for example) can exist simultaneously on the same channel specifically because they do meet the definition for orthogonality (by design).

And, not to be nit-picky, but the integral that you show really just shifts the two functions in time. To actually model the TDMA scheme, it would need a windowing function (which, again, isn't part of the definition for orthogonality). It was nice to debate this with you, but I'd better get some work done. All the best! Stevex99.

The transmitted signals can be described by mathematical functions which are orthogonal. Therefore the transmitted signals are orthogonal. I'm not sure what else there is to say! The mathematical functions which describe the signals that occur at an earlier point in the transmitter processing chain (e.g. the carrier) may or may not be orthogonal, but so what? They aren't the signals being transmitted.
Yes, in all cellular systems, we have to deal with intercell interference. This is no different for OFDM or CDMA. On a cell-by-cell basis, the signals used may or may not be mathematically orthogonal, this doesn't remove the need for intercell considerations.
And yes, that is exactly what my integral shows. There is no requirement for a windowing function. If the function g(t) is T or less in support, then the integral will be zero. Therefore, the orthogonality is satisfied. There are many ways of obtaining an orthogonal signal family. Separation in time is just one method.
Oli Filth 02:57, 7 July 2007 (UTC)

Discrete function orthogonality?

If someone thinks it's appropriate, could they add the definition of orthogonality for discrete functions, for example the kernel of the DFT. Thanks.


We say that two variables are orthogonal if they are independent. Uncorrelated seems much more plausible, since the expectation of the product of two variables is an inner product of the variables.
Septentrionalis PMAnderson 05:27, 19 September 2007 (UTC)

Should we give more prominence to the section on statistics in this article? My discipline is psychology, and if I were to refer to two variables as orthogonal, I would mean that they are not statistically significantly correlated. I would probably add a wikilink to orthogonal, so that curious readers could find out what this term means by going to this article, rather than defining the term in every article for which I use this word. However, at the moment, this article is rather heavy for a non-specialist. ACEOREVIVED 19:23, 13 October 2007 (UTC)
That's right: two random variables are called orthogonal if they're simply uncorrelated, not necessarily independent. I made the appropriate change. There's often confusion about this because for Gaussian random variables, uncorrelatedness implies independence. For general r.v.'s, though, this isn't true: independence is a stronger condition. Jgmakin 06:37, 25 October 2007 (UTC)

Intro Rewrite

Here is the current intro:

In mathematics, orthogonal, as a simple adjective not part of a longer phrase, is a generalization of perpendicular. It means "at right angles". The word comes from the Greek ὀρθός (orthos), meaning "straight", and γωνία (gonia), meaning "angle". Two streets that cross each other at a right angle are orthogonal to one another. In recent years, "perpendicular" has come to be used more in relation to right angles outside of a coordinate plane context, whereas "orthogonal" is used when discussing vectors or coordinate geometry.

I propose this replacement:

In mathematics, two vectors are orthogonal if they are perpendicular, i.e., they form a right angle. The word comes from the Greek ὀρθός (orthos), meaning "straight", and γωνία (gonia), meaning "angle". For example, a subway and the street above, although they do not physically intersect, are orthogonal if they meet at a right angle.

This version avoids the need to say that "orthogonal" is a generalization of "perpendicular" by saying that they are identical in the generalized context of vector mathematics. The example is improved by not using the coordinate plane. Lastly, the note about common usage is redundant because the intro sentence has already described the context of "orthogonal". --Beefyt (talk) 05:52, 4 August 2008 (UTC)

When I read this subway example I got very confused. How do a subway and a street above "meet" at a right angle? Once I read this talk page, I understood what you meant. I propose using the word "cross" instead... but I'm not sure if that's better. What do you guys think? Sunbeam44 (talk) 16:07, 23 October 2008 (UTC)

From beginning Euclidean geometry, two (straight) lines that do not intersect are "skew" [False. In plane geometry, two lines either intersect or they are parallel to one another. There are no other possibilities.] "Perpendicular" implies the lines intersect in a right angle. "Orthogonal" implies far more than mere perpendicularity, as evidenced in the article and its attendant commentaries.
—Preceding unsigned comment added by Lionum (talkcontribs) 06:46, 11 October 2008 (UTC)
But the lead says "two vectors are orthogonal if they are perpendicular", not "two lines are orthogonal if they are perpendicular". Would you be satisfied with "two lines are orthogonal if their vectors are perpendicular"? --beefyt (talk) 21:28, 30 January 2009 (UTC)
Why not "two lines are orthogonal if they meet at a right angle"? Michael Hardy (talk) 22:17, 30 January 2009 (UTC)
The direction of a line in a two-dimensional plane is defined by a two-dimensional vector. Also, the direction of a line in three-dimensional space is defined by a three-dimensional vector. Two lines are perpendicular if and only if their defining vectors are perpendicular to each other. This is taught in advanced high-school mathematics. (talk) 00:22, 25 August 2012 (UTC)

The introduction needed to be rewritten, so I decided to Be Bold and do it. I expect it to be revised. The citations are poor but at least they are there, and they back-up what is said in the subsections. When the revisions start, I would like to make these suggestions:

  • Go from the general to the specific
  • Be as nontechnical as possible
  • Stay open to the many different meanings of the word in different contexts.

KSnortum (talk) 20:19, 13 January 2012 (UTC)

T - shape?

Isn't orthogonal a T shape? Can it say so on the article as a better description than right angle? Or, if the orthogonal is the L shape, is it OK to say "An Orthogonal is an L shaped intersection." and provide a nice L shaped joint picture? I can't remember if it is T or L and I cannot read complex formulae. ~ R.T.G 15:21, 30 January 2009 (UTC)

Two lines are orthogonal if they meet at a right angle. Thus the strokes of L are orthogonal, as are those of T or +. —Tamfang (talk) 03:08, 23 October 2011 (UTC)


You might be interested that the book Applied Mathematics for Database Professionals refers readers to this Wikipedia article. - (talk) 14:17, 23 June 2010 (UTC)

Additional citations

Why, what, where, and how does this article need additional citations for verification? Hyacinth (talk) 05:26, 2 August 2010 (UTC)

Where it says "citation needed." --KSnortum (talk) 18:06, 13 January 2012 (UTC)

(a, g, and n)

Another scheme is orthogonal frequency-division multiplexing (OFDM), which refers to the use, by a single transmitter, of a set of frequency multiplexed signals with the exact minimum frequency spacing needed to make them orthogonal so that they do not interfere with each other. Well known examples include (a, g, and n) versions of 802.11 Wi-Fi; WiMAX; ITU-T, DVB-T, the terrestrial digital TV broadcast system used in most of the world outside North America; and DMT, the standard form of ADSL.

Is there a good reason for (a, g, and n) to be enclosed in ()? —Tamfang (talk) 03:07, 23 October 2011 (UTC)

Orthogonality vs. Independence in ....

Orthogonality vs. Independence in random variables and statistics.

Independence is a much stronger specification or assumption than orthogonality or uncorrelatedness.

Orthogonal means that E(XY) = 0.

If E(XY) = E(X)E(Y), then X and Y are uncorrelated.

The above must not be confused with the following.
If two random variables or statistics X and Y are jointly Gaussian;
And X and Y are both zero mean; [This is often forgotten about.]
And X and Y are orthogonal;
Then X and Y are independent.

Otherwise, the independence of X and Y has to be considered on a case-by-case basis.
For independence, if f(x,y) is the joint probability density of X and Y, then we must have f(x,y) = f(x)f(y). There is no other way.

If E(XY) = 0, and either E(X) = 0 or E(Y) = 0, then X and Y are uncorrelated because E(XY) = E(X)E(Y) = 0. — Preceding unsigned comment added by (talk) 00:50, 25 August 2012 (UTC)
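A quick simulation of the zero-mean case (Python sketch; the sample size and seed are arbitrary). Here Y = X² − 1 is a deterministic function of X, so the two are certainly not independent; yet E(XY) = E(X³) − E(X) = 0 for standard Gaussian X, so they are orthogonal and (being zero-mean) uncorrelated. They are not jointly Gaussian, which is why the Gaussian implication above does not apply:

```python
import random

random.seed(0)
n = 200_000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = [x * x - 1.0 for x in xs]            # zero-mean, but completely determined by X

e_xy = sum(x * y for x, y in zip(xs, ys)) / n
print(abs(e_xy) < 0.05)                    # True: sample E[XY] is near 0 (orthogonal)
# Independence would require f(x, y) = f(x) f(y), which clearly fails here.
```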

Ancient history

Attributing the concept of orthogonality (really perpendicularity) to the Babylonians or the Egyptians or whatever is probably not reasonable. All ancient mathematical civilizations (Babylonian, Egyptian, Indian, Chinese) had the concept of perpendicularity in two dimensions -- a discussion which belongs in perpendicularity. I am not sure that any of them had any of the generalizations that are called "orthogonality". --Macrakis (talk) 00:51, 25 January 2013 (UTC)

Alternate Definitions

The bottom of this page is lacking in succinctness, if you ask me. Many if not all of the smaller subsections starting at "Art and Architecture" should be grouped into a single "Definitions in other fields" section. The "Statistics, econometrics, and economics" section mentions nothing about economics and the only comments on econometrics can be generalized to optimization problems overall (and are only tangentially related to orthogonality, questioning the importance in an article devoted to explaining the concept of orthogonality). Furthermore, the general background knowledge assumed in the reader is very different for each heading. I propose the section be condensed with easy to comprehend definitions, more like the Taxonomy section (which defines its jargon somewhat) than the Neuroscience section (which leaves its jargon to the imagination without even linking another page or reference).

Also, rather than bring up individual tidbits that relate to or require orthogonality (I'm looking at econometrics again...), maybe a dedicated section called "Properties that follow from orthogonality" might be useful. Compiling this list would be difficult for one person, but if we could bring back all the people that made those ridiculous subheadings, they'd be able to fill a good amount of it in. — Preceding unsigned comment added by 2602:304:AB31:95F9:DD98:CE54:FF67:41C6 (talk) 17:36, 31 May 2013 (UTC)

Problem with etymology section

Under etymology it is stated that the Greek term ορθογώνιον (= orthogonal) was used to denote a rectangle, and then it came to mean "right triangle." This is incorrect.

A right triangle is denoted by "ορθογώνιον τρίγωνον" which literally translates to "right angle triangle." The term "ορθογώνιον" without being accompanied by the word "τρίγωνον" (= triangle) is still used to denote a rectangle, and was never used differently.

Note: Not sure whether the Latin term "orthogonalis" came to mean "right triangle" or not; I'm only certain about the Greek term. (talk) 15:04, 1 May 2015 (UTC)