Product distribution

From Wikipedia, the free encyclopedia

A product distribution is a probability distribution constructed as the distribution of the product of random variables having two other known distributions. Given two statistically independent random variables X and Y, the distribution of the random variable Z that is formed as the product

Z = XY

is a product distribution.

Algebra of random variables[edit]

The product is one type of algebra for random variables: related to the product distribution are the ratio distribution, the sum distribution (see List of convolutions of probability distributions) and the difference distribution. More generally, one may speak of combinations of sums, differences, products and ratios.

Many of these distributions are described in Melvin D. Springer's book from 1979 The Algebra of Random Variables.[1]

Derivation for independent random variables[edit]

If X and Y are two independent, continuous random variables described by probability density functions f_X and f_Y, then the probability density function of Z = XY is[2]

f_Z(z) = \int^{\infty}_{-\infty} f_X \left( x \right)  f_Y \left( z/x \right)  \frac{1}{|x|}\, dx.

Proof[3][edit]

First note that the cumulative distribution function of Z can be written as an integral of the joint density over the region where the product is less than z:

\mathbb{P}(Z<z) = \mathbb{P}(XY<z) = \iint_{\{(x,y)\,:\,xy<z\}} f_X(x)\, f_Y(y)\, dy \, dx

Then define the variable transformation  T(x,y) = (xy,x)=(z,x), which has the inverse  T^{-1}(z,x) = (x,z/x) =(x,y). The Jacobian determinant is given by

\mathbf J (z,x)
= \det \begin{bmatrix}
    \dfrac{\partial x}{\partial z} &  \dfrac{\partial y}{\partial z}\\
    \dfrac{\partial x}{\partial x} &  \dfrac{\partial y}{\partial x} \end{bmatrix}
= \det \begin{bmatrix}
    0 & x^{-1}\\
    1 & -z x^{-2} \end{bmatrix}
= - x^{-1}

Applying the change-of-variables formula, with density factor |\mathbf J| = 1/|x|, the probability above becomes the double integral

 \mathbb{P}(Z<z) = \int^{\infty}_{-\infty} \int^{z}_{-\infty} f_X \left( x \right)  f_Y \left( w/x \right)  \frac{1}{|x|}\, dw \,dx

Changing the order of integration then gives

\mathbb{P}(Z<z) =  \int^{z}_{-\infty} \int^{\infty}_{-\infty}f_X \left( x \right)  f_Y \left( w/x \right)  \frac{1}{|x|}\, dx \,dw

so the inner integral, as a function of w, is exactly the desired probability density function f_Z.
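The density formula above can be checked numerically. A minimal sketch, assuming NumPy and taking X and Y uniform on (0,1) as an illustrative choice: both densities equal 1 on (0,1), so the integral reduces to ∫_z^1 dx/x = −ln z, which is the known product density in this case.

```python
import numpy as np

# X, Y ~ Uniform(0, 1): f_X(x) = 1 on (0, 1), and f_Y(z/x) = 1 only when
# z/x < 1, i.e. x > z, so the product-density integral reduces to
# ∫_z^1 (1/x) dx = -ln(z) for 0 < z < 1.
def f_Z(z, n=100_000):
    edges = np.linspace(z, 1.0, n + 1)
    mid = 0.5 * (edges[:-1] + edges[1:])        # midpoint rule for ∫_z^1 dx/x
    return np.sum(1.0 / mid) * (1.0 - z) / n

for z in (0.1, 0.5, 0.9):
    assert abs(f_Z(z) - (-np.log(z))) < 1e-6
```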

Expectation of product of random variables[edit]

When two random variables are statistically independent, the expectation of their product is the product of their expectations. This can be proved from the Law of total expectation:

\operatorname{E}(X Y) = \operatorname{E}_Y ( \operatorname{E}_{X Y \mid Y} (X Y \mid Y))

In the inner expression, Y is a constant. Hence:

\operatorname{E}_{X Y \mid Y} (X Y \mid Y) = Y\cdot \operatorname{E}_{X\mid Y}[X]
\operatorname{E}(X Y) = \operatorname{E}_Y ( Y\cdot \operatorname{E}_{X\mid Y}[X])

This is true even if X and Y are statistically dependent. However, in general \operatorname{E}_{X\mid Y}[X] is a function of Y. In the special case in which X and Y are statistically independent, it is a constant independent of Y. Hence:

\operatorname{E}(X Y) = \operatorname{E}_Y ( Y\cdot \operatorname{E}_{X}[X])
\operatorname{E}(X Y) = \operatorname{E}_X(X) \cdot \operatorname{E}_Y(Y)
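A quick Monte Carlo check of this identity, assuming NumPy; the exponential and uniform factors below are arbitrary illustrative choices with E[X] = 2 and E[Y] = 1/2:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.exponential(2.0, n)    # independent draws with E[X] = 2
y = rng.uniform(0.0, 1.0, n)   # independent draws with E[Y] = 0.5

# For independent X and Y the sample mean of the product should approach
# E[X] * E[Y] = 1.0, matching the product of the separate sample means.
assert abs((x * y).mean() - x.mean() * y.mean()) < 0.01
assert abs((x * y).mean() - 1.0) < 0.02
```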

Special cases[edit]

The distribution of the product of two random variables which have lognormal distributions is again lognormal. This is itself a special case of a more general set of results where the logarithm of the product can be written as the sum of the logarithms. Thus, in cases where a simple result can be found in the list of convolutions of probability distributions, where the distributions to be convolved are those of the logarithms of the components of the product, the result might be transformed to provide the distribution of the product. However, this approach is only useful where the logarithms of the components of the product are in some standard families of distributions.
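The lognormal case can be verified empirically. A sketch assuming NumPy, with illustrative parameters: log Z = log X + log Y is a sum of independent normals, so Z should be lognormal with log-mean mu1 + mu2 and log-standard-deviation sqrt(s1² + s2²).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
mu1, s1, mu2, s2 = 0.3, 0.5, -0.1, 0.7   # arbitrary lognormal parameters

# Product of two independent lognormal samples.
z = rng.lognormal(mu1, s1, n) * rng.lognormal(mu2, s2, n)

# log Z is a sum of independent normals, so its sample mean and standard
# deviation should match mu1 + mu2 and hypot(s1, s2).
log_z = np.log(z)
assert abs(log_z.mean() - (mu1 + mu2)) < 0.01
assert abs(log_z.std() - np.hypot(s1, s2)) < 0.01
```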

The distribution of the product of a random variable having a uniform distribution on (0,1) with a random variable having a gamma distribution with shape parameter equal to 2 is an exponential distribution.[4] A more general case of this concerns the distribution of the product of a random variable having a beta distribution with a random variable having a gamma distribution: for some cases where the parameters of the two component distributions are related in a certain way, the result is again a gamma distribution but with a changed shape parameter.[4]
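The uniform-times-gamma case can be checked by comparing the empirical CDF of simulated products against the exponential CDF. A sketch assuming NumPy, with the gamma scale set to 1 for illustration (the product is then exponential with scale 1):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
u = rng.uniform(0.0, 1.0, n)     # Uniform(0, 1)
g = rng.gamma(2.0, 1.0, n)       # Gamma with shape 2, scale 1

# Claimed result: U * G is exponentially distributed (here with scale 1),
# so the empirical CDF should track 1 - exp(-t).
z = u * g
for t in (0.5, 1.0, 2.0):
    assert abs((z <= t).mean() - (1.0 - np.exp(-t))) < 0.005
```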

The K-distribution is an example of a non-standard distribution that can be defined as a product distribution (where both components have a gamma distribution).
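A K-distributed sample can therefore be generated simply by multiplying two independent gamma samples. A minimal sketch assuming NumPy, with arbitrary shape parameters and unit scales; only the first moment is checked here, since a full check would require the Bessel-function form of the K-distribution density.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000
a, b = 2.0, 3.0                                  # illustrative gamma shapes

# Product of two independent unit-scale gamma variables: a K-distributed sample.
z = rng.gamma(a, 1.0, n) * rng.gamma(b, 1.0, n)

# Independence gives E[Z] = E[X] * E[Y] = a * b for unit-scale factors.
assert abs(z.mean() - a * b) < 0.05
```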

In theoretical computer science[edit]

In computational learning theory, a product distribution \mathcal{D} over \{0,1\}^n is specified by the parameters \mu_1, \mu_2, \dots, \mu_n. Each parameter \mu_i gives the marginal probability that the ith bit of x \in \{0,1\}^n sampled as x \sim \mathcal{D} is 1; i.e. \mu_i = \operatorname{Pr}_{\mathcal{D}}[x_i = 1]. In this setting, the uniform distribution is simply a product distribution with every \mu_i = 1/2.
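Sampling from such a product distribution amounts to drawing each bit independently with its own bias. A small sketch assuming NumPy, with illustrative marginals; each column's sample mean should estimate the corresponding \mu_i.

```python
import numpy as np

rng = np.random.default_rng(3)
mu = np.array([0.1, 0.5, 0.9])   # marginal Pr[x_i = 1] for each of n = 3 bits

# Each row is one sample x ~ D: bit i is set independently with probability mu[i].
x = (rng.random((200_000, mu.size)) < mu).astype(int)

# Column-wise sample means recover the marginals.
assert np.allclose(x.mean(axis=0), mu, atol=0.005)
```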

Product distributions are a key tool used for proving learnability results when the examples cannot be assumed to be uniformly sampled.[5] They give rise to an inner product \langle \cdot, \cdot \rangle on the space of real-valued functions on \{0,1\}^n as follows:

\langle f, g \rangle_\mathcal{D} = \sum_{x \in \{0,1\}^n} \mathcal{D}(x)f(x)g(x) = \mathbb{E}_{\mathcal{D}} [fg]

This inner product induces a corresponding norm:

\|f\|_{\mathcal{D}} = \sqrt{ \langle f, f \rangle_\mathcal{D} }
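For small n the inner product can be computed by direct enumeration of \{0,1\}^n. A sketch in plain Python with illustrative marginals: since \mathcal{D} sums to 1, the constant function 1 has norm 1, and pairing a coordinate function with 1 recovers its marginal.

```python
import itertools
import math

mu = [0.2, 0.7]                        # illustrative marginals over {0,1}^2

def D(x):
    # D(x) = prod_i mu_i^{x_i} * (1 - mu_i)^{1 - x_i}
    return math.prod(m if b else 1.0 - m for b, m in zip(x, mu))

def inner(f, g):
    # <f, g>_D = sum over all x in {0,1}^n of D(x) f(x) g(x)
    return sum(D(x) * f(x) * g(x)
               for x in itertools.product((0, 1), repeat=len(mu)))

norm = lambda f: math.sqrt(inner(f, f))

one = lambda x: 1.0                    # the constant function 1
x0 = lambda x: float(x[0])             # the first coordinate function

assert abs(inner(one, one) - 1.0) < 1e-12   # D sums to 1, so ||1||_D = 1
assert abs(inner(x0, one) - mu[0]) < 1e-12  # <x_0, 1>_D = Pr[x_0 = 1]
```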

References[edit]

  1. ^ Springer, Melvin Dale (1979). The Algebra of Random Variables. Wiley. ISBN 0-471-01406-0. Retrieved 24 September 2012. 
  2. ^ Rohatgi, V. K. (1976). An Introduction to Probability Theory and Mathematical Statistics. New York: Wiley. ISBN 0-19-853185-0. Retrieved 4 October 2015. 
  3. ^ Grimmett, G. R.; Stirzaker, D.R. (2001). Probability and Random Processes. Oxford: Oxford University Press. ISBN 978-0-19-857222-0. Retrieved 4 October 2015. 
  4. ^ a b Johnson, Norman L.; Kotz, Samuel; Balakrishnan, N. (1995). Continuous Univariate Distributions Volume 2, Second edition. Wiley. p. 306. ISBN 0-471-58494-0. Retrieved 24 September 2012. 
  5. ^ Servedio, Rocco A. (2004), "On learning monotone DNF under product distributions", Inf. Comput. (Academic Press, Inc.) 193 (1): 57–74, doi:10.1016/j.ic.2004.04.003 


  • Springer, Melvin Dale; Thompson, W. E. (1970). "The distribution of products of beta, gamma and Gaussian random variables". SIAM Journal on Applied Mathematics 18 (4): 721–737. doi:10.1137/0118065. JSTOR 2099424. 
  • Springer, Melvin Dale; Thompson, W. E. (1966). "The distribution of products of independent random variables". SIAM Journal on Applied Mathematics 14 (3): 511–526. doi:10.1137/0114046. JSTOR 2946226.