In probability theory, a random variable Y is said to be mean independent of a random variable X if and only if its conditional mean E(Y | X = x) equals its unconditional mean E(Y) for every x in the support of X, that is, for every x at which the density (or probability mass) of X is nonzero. Y is said to be mean dependent on X if E(Y | X = x) ≠ E(Y) for some such x.
Moreover, stochastic independence implies mean independence, and mean independence implies that the variables are uncorrelated; neither converse holds in general. Unlike stochastic independence and uncorrelatedness, mean independence is not symmetric: X can be mean independent of Y while Y is mean dependent on X.
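These relationships can be checked numerically on a small discrete distribution. The joint distribution below is an illustrative assumption, not from the article: X is uniform on {−1, 0, 1} and Y = X². Then X and Y are uncorrelated and X is mean independent of Y, but Y is mean dependent on X (and the two are not independent, since Y = 0 forces X = 0).

```python
# Illustrative joint pmf {(x, y): p} (an assumed example, not from the article):
# X uniform on {-1, 0, 1} and Y = X**2.
pmf = {(-1, 1): 1 / 3, (0, 0): 1 / 3, (1, 1): 1 / 3}

def cond_mean(target, given, value):
    """E of the `target` coordinate given that the `given` coordinate equals `value`."""
    num = sum(xy[target] * p for xy, p in pmf.items() if xy[given] == value)
    den = sum(p for xy, p in pmf.items() if xy[given] == value)
    return num / den

e_x = sum(x * p for (x, _), p in pmf.items())             # E(X) = 0
e_y = sum(y * p for (_, y), p in pmf.items())             # E(Y) = 2/3
cov = sum(x * y * p for (x, y), p in pmf.items()) - e_x * e_y

print(cov)                                       # 0.0: X and Y are uncorrelated
print([cond_mean(0, 1, y) for y in (0, 1)])      # [0.0, 0.0]: E(X|Y=y) = E(X),
                                                 # so X is mean independent of Y
print([cond_mean(1, 0, x) for x in (-1, 0, 1)])  # [1.0, 0.0, 1.0]: E(Y|X=x)
                                                 # varies, so Y is mean dependent on X
```

The same distribution also illustrates the asymmetry noted above: mean independence holds in one direction but fails in the other, even though covariance (a symmetric notion) is zero.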
The concept of mean independence is often used in econometrics as a middle ground between the strong assumption of independence and the weak assumption of uncorrelatedness of a pair of random variables X and Y.
If Y is mean independent of X and Z = f(X), meaning that Z is a function of X alone, then Y is mean independent of Z. This follows from the law of iterated expectations: since Z carries no more information than X, E(Y | Z) = E(E(Y | X) | Z) = E(E(Y) | Z) = E(Y).
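This closure property can also be verified on a small example. The joint distribution and the function f below are illustrative assumptions, not from the article: X is uniform on {1, 2, 3, 4}; given X = x, Y equals +x or −x with probability 1/2 each, so E(Y | X = x) = 0 = E(Y) and Y is mean independent of X (though clearly not independent of it); and Z = f(X) = X mod 2.

```python
# Illustrative joint pmf (an assumed example): X uniform on {1, 2, 3, 4},
# and given X = x, Y is +x or -x with probability 1/2 each, so E(Y | X) = 0.
pmf = {(x, s * x): 0.125 for x in (1, 2, 3, 4) for s in (1, -1)}

f = lambda x: x % 2  # Z = f(X), a function of X alone (assumed for illustration)

def cond_mean_Y_given_Z(z):
    """E(Y | Z = z), computed by summing the joint pmf over {x : f(x) = z}."""
    num = sum(y * p for (x, y), p in pmf.items() if f(x) == z)
    den = sum(p for (x, y), p in pmf.items() if f(x) == z)
    return num / den

e_y = sum(y * p for (_, y), p in pmf.items())    # E(Y) = 0
print(e_y)                                       # 0.0
print([cond_mean_Y_given_Z(z) for z in (0, 1)])  # [0.0, 0.0]: E(Y|Z=z) = E(Y),
                                                 # so Y is mean independent of Z
```

As the iterated-expectations argument predicts, conditioning on the coarser variable Z cannot move the conditional mean of Y away from E(Y) once conditioning on X itself leaves it unchanged.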