Mean dependence

From Wikipedia, the free encyclopedia

In probability theory, a random variable Y is said to be mean independent of a random variable X if and only if its conditional mean E(Y | X = x) equals its unconditional mean E(Y) for all x such that the density or mass function of X, f_X(x), is not zero. Otherwise, Y is said to be mean dependent on X, i.e. E(Y | X = x) ≠ E(Y) for some x with f_X(x) ≠ 0.

According to Cameron and Trivedi (2009, p. 23) and Wooldridge (2010, pp. 54, 907), stochastic independence implies mean independence, but the converse is not necessarily true. Unlike stochastic independence, mean independence is not symmetric: Y can be mean independent of X without X being mean independent of Y.
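As an illustration (not drawn from the cited texts), a small discrete joint distribution can exhibit mean independence without stochastic independence: the conditional mean of Y is the same for every value of X, yet the conditional distribution of Y changes with X.

```python
from fractions import Fraction as F

# Hypothetical joint pmf over (x, y): given X=1, Y is -1 or 1 with
# probability 1/2 each; given X=2, Y is -2 or 2 with probability 1/2
# each.  P(X=1) = P(X=2) = 1/2.
pmf = {
    (1, -1): F(1, 4), (1, 1): F(1, 4),
    (2, -2): F(1, 4), (2, 2): F(1, 4),
}

def cond_mean_y(x):
    """E(Y | X = x) computed from the joint pmf."""
    num = sum(p * y for (xx, y), p in pmf.items() if xx == x)
    den = sum(p for (xx, _), p in pmf.items() if xx == x)
    return num / den

ey = sum(p * y for (_, y), p in pmf.items())  # unconditional E(Y)

# Mean independence: E(Y | X = x) = E(Y) for every x in the support.
print(cond_mean_y(1), cond_mean_y(2), ey)  # all equal 0

# But Y is NOT stochastically independent of X: e.g.
# P(Y = 1 | X = 1) = 1/2 while P(Y = 1 | X = 2) = 0.
print(pmf[(1, 1)] / F(1, 2), pmf.get((2, 1), F(0)))
```

The spread of Y depends on X even though its conditional mean does not, which is exactly the gap between the two notions.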

Moreover, if Y is mean independent of X, then X and Y are uncorrelated, while the converse is not necessarily true.
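A standard counterexample for the converse (again, an illustrative sketch rather than material from the references): take X uniform on {-1, 0, 1} and Y = X². Then Cov(X, Y) = 0, yet E(Y | X = x) = x² is not constant in x, so Y is not mean independent of X.

```python
from fractions import Fraction as F

# X uniform on {-1, 0, 1}; define Y = X**2.
support = [-1, 0, 1]
p = F(1, 3)

ex  = sum(p * x for x in support)         # E(X)  = 0
ey  = sum(p * x**2 for x in support)      # E(Y)  = 2/3
exy = sum(p * x * x**2 for x in support)  # E(XY) = E(X^3) = 0

cov = exy - ex * ey
print(cov)  # 0: X and Y are uncorrelated

# Yet E(Y | X = x) = x**2, which is 1 for x = +/-1 but 0 for x = 0,
# so it differs from the unconditional mean E(Y) = 2/3.
cond_means = {x: F(x**2) for x in support}
print(cond_means, ey)
```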

The concept of mean independence is often used in econometrics as a middle ground between the strong assumption of stochastically independent random variables (X1 ⊥ X2) and the weak assumption of uncorrelated random variables (Cov(X1, X2) = 0) for a pair of random variables X1 and X2.

If X and Y are random variables such that Y is mean independent of X, and Z = f(X) is a function of X alone, then Y is also mean independent of Z: by the law of iterated expectations, E(Y | Z) = E(E(Y | X) | Z) = E(E(Y) | Z) = E(Y).
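This property can be checked numerically on a hypothetical discrete example in which Y is mean independent of X (every conditional mean is 0) and Z = X² pools two values of X:

```python
from fractions import Fraction as F

# Y is mean independent of X: E(Y | X = x) = 0 for each x in {-1, 1, 2}.
# P(X = x) = 1/3, and given x, Y is symmetric about 0.
pmf = {
    (-1, -1): F(1, 6), (-1, 1): F(1, 6),
    ( 1, -2): F(1, 6), ( 1, 2): F(1, 6),
    ( 2, -3): F(1, 6), ( 2, 3): F(1, 6),
}

def f(x):
    return x * x  # Z = f(X) = X**2 pools x = -1 and x = 1 into z = 1

def cond_mean_y_given_z(z):
    """E(Y | Z = z), where Z = f(X)."""
    num = sum(p * y for (x, y), p in pmf.items() if f(x) == z)
    den = sum(p for (x, _), p in pmf.items() if f(x) == z)
    return num / den

ey = sum(p * y for (_, y), p in pmf.items())  # E(Y) = 0

# Y is mean independent of Z as well: E(Y | Z = z) = E(Y) for all z.
print(cond_mean_y_given_z(1), cond_mean_y_given_z(4), ey)  # all 0
```

Note that z = 1 averages over both x = -1 and x = 1; the conditional mean is still E(Y), as the iterated-expectations argument predicts.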

References

  • Cameron, A. Colin; Trivedi, Pravin K. (2009). Microeconometrics: Methods and Applications (8th ed.). New York: Cambridge University Press. ISBN 9780521848053. 
  • Wooldridge, Jeffrey M. (2010). Econometric Analysis of Cross Section and Panel Data (2nd ed.). London: The MIT Press. ISBN 9780262232586.