We find the desired probability density function by taking the derivative of both sides with respect to $z$. Since on the right-hand side $z$ appears only in the integration limits, the derivative is easily performed using the fundamental theorem of calculus and the chain rule. (Note the negative sign that is needed when the variable occurs in the lower limit of the integration.)

$$ f_Z(z) = \int_{-\infty}^{0} f_X(x)\, f_Y\!\left(\frac{z}{x}\right)\frac{1}{-x}\,dx + \int_{0}^{\infty} f_X(x)\, f_Y\!\left(\frac{z}{x}\right)\frac{1}{x}\,dx = \int_{-\infty}^{\infty} f_X(x)\, f_Y\!\left(\frac{z}{x}\right)\frac{1}{|x|}\,dx, $$
where the absolute value is used to conveniently combine the two terms.
A more intuitive description of the procedure is illustrated in the figure below. The joint pdf $f_{X,Y}(x,y)$ exists in the $x$–$y$ plane and an arc of constant $z$ value is shown as the shaded line. To find the marginal probability $f_Z(z)$ on this arc, integrate over increments of area $f_{X,Y}(x,y)\,dx\,dy$ on this contour.
Diagram to illustrate the product distribution of two variables.
Starting with $y = z/x$, we have $dy = -\frac{z}{x^2}\,dx = -\frac{y}{x}\,dx$. So the probability increment is $\delta p = f_{X,Y}(x,y)\,dx\,|dy| = f_X(x)\,f_Y(z/x)\,\frac{y}{|x|}\,dx\,dx$. Since $z = yx$ implies $dz = y\,dx$, we can relate the probability increment to the $z$-increment, namely $\delta p = f_X(x)\,f_Y(z/x)\,\frac{1}{|x|}\,dx\,dz$. Then integration over $x$ yields $f_Z(z) = \int_{-\infty}^{\infty} f_X(x)\, f_Y(z/x)\,\frac{1}{|x|}\,dx$.
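As a quick numerical sanity check of this formula (a minimal sketch, not part of the source derivation; it assumes NumPy and SciPy, and uses standard normal factors purely as an example), one can compare a quadrature of the integral against the empirical density of simulated products:

```python
import numpy as np
from scipy import stats, integrate

rng = np.random.default_rng(0)

def product_pdf(z, f_x, f_y):
    # f_Z(z) = int f_X(x) f_Y(z/x) / |x| dx, split at the singular point x = 0
    integrand = lambda x: f_x(x) * f_y(z / x) / abs(x)
    left, _ = integrate.quad(integrand, -np.inf, -1e-9)
    right, _ = integrate.quad(integrand, 1e-9, np.inf)
    return left + right

f = stats.norm.pdf  # both factors standard Normal(0,1)
z_samples = rng.normal(size=1_000_000) * rng.normal(size=1_000_000)

for z in (0.5, 1.0, 2.0):
    emp = np.mean(np.abs(z_samples - z) < 0.05) / 0.1   # histogram estimate
    print(f"z={z}: quadrature {product_pdf(z, f, f):.4f}, empirical {emp:.4f}")
```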
Let $X$ be a random sample drawn from probability distribution $f_X(x)$. Scaling $X$ by $\theta$ generates a sample from the scaled distribution $\frac{1}{|\theta|} f_X\!\left(\frac{x}{\theta}\right)$, which can be written as a conditional distribution $g_X(x \mid \theta) = \frac{1}{|\theta|} f_X\!\left(\frac{x}{\theta}\right)$.
Letting $\theta$ be a random variable with pdf $f_\theta(\theta)$, the distribution of the scaled sample becomes $g_X(x \mid \theta)\, f_\theta(\theta)$ and, integrating out $\theta$, we get $h_X(x) = \int_{-\infty}^{\infty} g_X(x \mid \theta)\, f_\theta(\theta)\, d\theta$, so $X$ is drawn from this distribution, $X \sim h_X(x)$. However, substituting the definition of $g$ we also have

$$ h_X(x) = \int_{-\infty}^{\infty} \frac{1}{|\theta|}\, f_X\!\left(\frac{x}{\theta}\right) f_\theta(\theta)\, d\theta, $$
which has the same form as the product distribution above. Thus the Bayesian posterior distribution $h_X(x)$ is the distribution of the product of the two independent random samples $\theta$ and $X$.
For the case of one variable being discrete, let $\theta$ have probability $P_i$ at levels $\theta_i$ with $\sum_i P_i = 1$. The conditional density is $f_X(x \mid \theta_i) = \frac{1}{|\theta_i|} f_X\!\left(\frac{x}{\theta_i}\right)$. Therefore $h_X(x) = \sum_i \frac{P_i}{|\theta_i|}\, f_X\!\left(\frac{x}{\theta_i}\right)$.
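A short simulation makes the discrete-mixture interpretation concrete (a sketch under illustrative assumptions: NumPy/SciPy, a Normal(0,1) base density, and an arbitrary three-level scale variable):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# theta takes value theta_i with probability P_i; X ~ N(0, 1)
thetas = np.array([0.5, 1.0, 2.0])
probs = np.array([0.2, 0.5, 0.3])

n = 500_000
z = rng.choice(thetas, size=n, p=probs) * rng.normal(size=n)

# Mixture density h(z) = sum_i P_i / |theta_i| * f_X(z / theta_i)
for point in (0.0, 1.0, 2.0):
    h = sum(p / abs(t) * stats.norm.pdf(point / t) for p, t in zip(probs, thetas))
    emp = np.mean(np.abs(z - point) < 0.05) / 0.1
    print(f"z={point}: mixture {h:.4f}, empirical {emp:.4f}")
```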
When two random variables are statistically independent, the expectation of their product is the product of their expectations. This can be proved from the law of total expectation:

$$ \operatorname{E}(XY) = \operatorname{E}\big(\operatorname{E}(XY \mid Y)\big). $$
In the inner expression, $Y$ is a constant. Hence:

$$ \operatorname{E}(XY \mid Y) = Y \operatorname{E}(X \mid Y), $$
$$ \operatorname{E}(XY) = \operatorname{E}\big(Y \operatorname{E}(X \mid Y)\big). $$
This is true even if $X$ and $Y$ are statistically dependent. However, in general $\operatorname{E}(X \mid Y)$ is a function of $Y$. In the special case in which $X$ and $Y$ are statistically independent, it is a constant independent of $Y$. Hence:

$$ \operatorname{E}(XY) = \operatorname{E}\big(Y \operatorname{E}(X)\big) = \operatorname{E}(X)\operatorname{E}(Y). $$
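The factorization, and its failure under dependence, is easy to check numerically (a minimal NumPy sketch with arbitrary illustrative distributions):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Independent case: E[XY] matches E[X] E[Y]
x = rng.exponential(2.0, n)          # E[X] = 2
y = rng.normal(3.0, 1.0, n)          # E[Y] = 3
print(np.mean(x * y), np.mean(x) * np.mean(y))   # both close to 6

# Dependent case: the factorization generally fails
y_dep = x + rng.normal(size=n)       # Y depends on X
print(np.mean(x * y_dep), np.mean(x) * np.mean(y_dep))  # ~8 vs ~4
```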
Variance of the product of independent random variables
Let $X, Y$ be uncorrelated random variables with means $\mu_X, \mu_Y$ and variances $\sigma_X^2, \sigma_Y^2$.
If, additionally, $X^2$ and $Y^2$ are uncorrelated (which holds in particular when $X$ and $Y$ are independent), the variance of the product $XY$ is

$$ \operatorname{Var}(XY) = \left(\sigma_X^2 + \mu_X^2\right)\left(\sigma_Y^2 + \mu_Y^2\right) - \mu_X^2 \mu_Y^2. $$
In the case of the product of more than two variables, if $X_1, \ldots, X_n$ are statistically independent then the variance of their product is

$$ \operatorname{Var}(X_1 X_2 \cdots X_n) = \prod_{i=1}^{n} \left(\sigma_i^2 + \mu_i^2\right) - \prod_{i=1}^{n} \mu_i^2. $$
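A brief simulation (illustrative means and variances, assuming NumPy) confirms the product-variance formula:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2_000_000

# Three independent factors with assorted means and variances
mus, sigmas = [1.0, -2.0, 0.5], [0.5, 1.0, 2.0]
xs = [rng.normal(m, s, n) for m, s in zip(mus, sigmas)]

prod = xs[0] * xs[1] * xs[2]
formula = np.prod([s**2 + m**2 for m, s in zip(mus, sigmas)]) - np.prod(mus) ** 2
print(prod.var(), formula)   # the two values should agree closely
```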
Characteristic function of product of random variables
Assume $X, Y$ are independent random variables. The characteristic function of $X$ is $\varphi_X(t)$, and the distribution of $Y$ is known. Then from the law of total expectation, we have

$$ \varphi_Z(t) = \operatorname{E}\!\left(e^{itXY}\right) = \operatorname{E}\!\left(\operatorname{E}\!\left(e^{itXY} \mid Y\right)\right) = \operatorname{E}\!\left(\varphi_X(tY)\right). $$
If the characteristic functions and distributions of both $X$ and $Y$ are known, then alternatively, $\varphi_Z(t) = \operatorname{E}\!\left(\varphi_Y(tX)\right)$ also holds.
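For standard normal $X, Y$ the inner expectation has the closed form $\operatorname{E}\!\left(\varphi_X(tY)\right) = \operatorname{E}\!\left(e^{-t^2 Y^2/2}\right) = (1+t^2)^{-1/2}$, which a short Monte Carlo sketch (assuming NumPy; not from the source) can check against the empirical characteristic function:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000
x, y = rng.normal(size=n), rng.normal(size=n)

for t in (0.5, 1.0, 2.0):
    # Empirical CF of Z = XY (real by symmetry, so take the cosine part)
    emp = np.mean(np.cos(t * x * y))
    # phi_Z(t) = E[phi_X(tY)] with phi_X(t) = exp(-t^2/2) for N(0,1)
    via_tower = np.mean(np.exp(-(t * y) ** 2 / 2))
    print(f"t={t}: empirical {emp:.4f}, E[phi_X(tY)] {via_tower:.4f}, "
          f"closed form {1/np.sqrt(1 + t**2):.4f}")
```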
Gamma distribution example. To illustrate how the product of moments yields a much simpler result than finding the moments of the distribution of the product, let $X, Y$ be sampled from two Gamma distributions, $f_{\text{Gamma}}(x;\theta,1) = \Gamma(\theta)^{-1}\, x^{\theta-1}\, e^{-x}$, with parameters $\theta = \alpha$ and $\theta = \beta$ respectively,
whose moments are

$$ \operatorname{E}(X^p) = \int_0^\infty x^p\, \Gamma(\theta)^{-1}\, x^{\theta-1}\, e^{-x}\, dx = \frac{\Gamma(\theta+p)}{\Gamma(\theta)}. $$
Multiplying the corresponding moments gives the Mellin transform result

$$ \operatorname{E}\big((XY)^p\big) = \operatorname{E}(X^p)\,\operatorname{E}(Y^p) = \frac{\Gamma(\alpha+p)}{\Gamma(\alpha)}\,\frac{\Gamma(\beta+p)}{\Gamma(\beta)}. $$
Independently, it is known that the product of two independent Gamma samples has the distribution

$$ f(z;\alpha,\beta) = 2\,\Gamma(\alpha)^{-1}\Gamma(\beta)^{-1}\, z^{\frac{\alpha+\beta}{2}-1}\, K_{\alpha-\beta}\!\left(2\sqrt{z}\right), \qquad z \ge 0. $$
To find the moments of this, make the change of variable $y = 2\sqrt{z}$, simplifying similar integrals to

$$ \int_0^\infty z^{p}\, z^{\frac{\alpha+\beta}{2}-1}\, K_{\alpha-\beta}\!\left(2\sqrt{z}\right) dz = 2^{1-\alpha-\beta-2p} \int_0^\infty y^{\alpha+\beta+2p-1}\, K_{\alpha-\beta}(y)\, dy. $$
The definite integral

$$ \int_0^\infty y^{\mu}\, K_{\nu}(y)\, dy = 2^{\mu-1}\, \Gamma\!\left(\frac{1+\mu+\nu}{2}\right) \Gamma\!\left(\frac{1+\mu-\nu}{2}\right) $$
is well documented and we have finally

$$ \operatorname{E}(Z^p) = \frac{2}{\Gamma(\alpha)\Gamma(\beta)}\; 2^{1-\alpha-\beta-2p}\; 2^{\alpha+\beta+2p-2}\; \Gamma(\alpha+p)\,\Gamma(\beta+p) = \frac{\Gamma(\alpha+p)\,\Gamma(\beta+p)}{\Gamma(\alpha)\,\Gamma(\beta)}, $$
which, after some difficulty, agrees with the moment product result above.
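The agreement is easy to confirm by simulation (a minimal sketch assuming NumPy and SciPy, with arbitrary illustrative shapes $\alpha = 2$, $\beta = 3.5$):

```python
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(5)
alpha, beta = 2.0, 3.5
x = rng.gamma(alpha, 1.0, 2_000_000)
y = rng.gamma(beta, 1.0, 2_000_000)

for p in (1, 2, 3):
    mc = np.mean((x * y) ** p)                       # Monte Carlo moment
    exact = gamma(alpha + p) * gamma(beta + p) / (gamma(alpha) * gamma(beta))
    print(f"p={p}: Monte Carlo {mc:.3f}, Gamma-product formula {exact:.3f}")
```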
If $X, Y$ are drawn independently from Gamma distributions with shape parameters $\alpha, \beta$ then

$$ \operatorname{E}(X^p Y^q) = \operatorname{E}(X^p)\,\operatorname{E}(Y^q) = \frac{\Gamma(\alpha+p)}{\Gamma(\alpha)}\,\frac{\Gamma(\beta+q)}{\Gamma(\beta)}. $$
This type of result is universally true, since for bivariate independent variables $f_{X,Y}(x,y) = f_X(x)\, f_Y(y)$, thus

$$ \operatorname{E}(X^p Y^q) = \int\!\!\int x^p y^q\, f_{X,Y}(x,y)\, dx\, dy = \int x^p f_X(x)\, dx \int y^q f_Y(y)\, dy = \operatorname{E}(X^p)\,\operatorname{E}(Y^q); $$
or equivalently, it is clear that $X^p$ and $Y^q$ are independent variables.
The distribution of the product of two random variables which have lognormal distributions is again lognormal. This is itself a special case of a more general set of results where the logarithm of the product can be written as the sum of the logarithms. Thus, in cases where a simple result can be found in the list of convolutions of probability distributions, where the distributions to be convolved are those of the logarithms of the components of the product, the result might be transformed to provide the distribution of the product. However this approach is only useful where the logarithms of the components of the product are in some standard families of distributions.
Uniformly distributed independent random variables
Let $Z = X_1 X_2$ be the product of two independent variables, each uniformly distributed on the interval [0,1], possibly the outcome of a copula transformation. As noted in "Lognormal Distributions" above, PDF convolution operations in the log domain correspond to the product of sample values in the original domain. Thus, making the transformation $u = \ln(x)$, such that $p_U(u)\,|du| = p_X(x)\,|dx|$, each variate is distributed independently on $u$ as

$$ p_U(u) = \frac{p_X(x)}{|du/dx|} = \frac{1}{1/x} = e^{u}, \qquad -\infty < u \le 0, $$
and the convolution of the two distributions is the autoconvolution

$$ c(y) = \int_{y}^{0} e^{u}\, e^{y-u}\, du = -y\, e^{y}, \qquad -\infty < y \le 0. $$
Next retransform the variable to $z = e^{y}$, yielding the distribution

$$ c_2(z) = \frac{c(y)}{|dz/dy|} = \frac{-\ln(z)\, e^{y}}{e^{y}} = -\ln(z) $$

on the interval [0,1].
For the product of multiple ($n > 2$) independent samples the characteristic function route is favorable. If we define $\tilde{u} = -u$ then $p_{\tilde{U}}(\tilde{u}) = e^{-\tilde{u}},\ 0 \le \tilde{u} < \infty$, above is a Gamma distribution of shape 1 and scale factor 1, and its known CF is $(1 - it)^{-1}$. Note that $|d\tilde{u}| = |du|$, so the Jacobian of the transformation is unity.
The convolution of $n$ independent samples from $\tilde{U}$ therefore has CF $(1 - it)^{-n}$, which is known to be the CF of a Gamma distribution of shape $n$:

$$ c_n(\tilde{y}) = \Gamma(n)^{-1}\, \tilde{y}^{\,n-1}\, e^{-\tilde{y}}, \qquad \tilde{y} = -\sum_{i=1}^{n} \ln x_i. $$
Making the inverse transformation $z = e^{-\tilde{y}}$ we get the PDF of the product of the $n$ samples:

$$ f_n(z) = \frac{c_n(-\ln z)}{|dz/d\tilde{y}|} = \frac{\Gamma(n)^{-1}\,(-\ln z)^{n-1}\, e^{\ln z}}{z} = \frac{(-\ln z)^{n-1}}{(n-1)!}, \qquad 0 < z \le 1. $$
The following, more conventional, derivation from Stack Exchange is consistent with this result.
First of all, letting $Z_2 = X_1 X_2$, its CDF is

$$ F_{Z_2}(z) = \Pr[Z_2 \le z] = \int_0^1 \Pr\!\left[X_2 \le \frac{z}{x}\right] f_{X_1}(x)\, dx = \int_0^z 1\, dx + \int_z^1 \frac{z}{x}\, dx = z - z\ln z, \qquad 0 < z \le 1. $$
The density of $Z_2$ is then $f_{Z_2}(z) = -\ln z$.
Multiplying by a third independent sample gives distribution function

$$ F_{Z_3}(z) = \int_0^z 1\, dx + \int_z^1 \frac{z}{x}\left(1 - \ln\frac{z}{x}\right) dx = z - z\ln z + \frac{z\,(\ln z)^2}{2}. $$
Taking the derivative yields $f_{Z_3}(z) = \frac{(\ln z)^2}{2}$.
The author of the note conjectures that, in general,

$$ f_{Z_n}(z) = \frac{(-\ln z)^{n-1}}{(n-1)!}, \qquad 0 < z \le 1. $$
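The conjectured density matches simulation (a minimal sketch assuming NumPy; $n = 4$ factors chosen for illustration):

```python
import numpy as np
from math import factorial, log

rng = np.random.default_rng(6)

n = 4                                   # number of uniform factors
z = rng.uniform(size=(1_000_000, n)).prod(axis=1)

for point in (0.01, 0.1, 0.5):
    emp = np.mean(np.abs(z - point) < 0.005) / 0.01     # histogram estimate
    exact = (-log(point)) ** (n - 1) / factorial(n - 1)
    print(f"z={point}: empirical {emp:.3f}, (-ln z)^(n-1)/(n-1)! = {exact:.3f}")
```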
The geometry of the product distribution of two random variables in the unit square.
The figure illustrates the nature of the integrals above. The shaded area within the unit square and below the curve $z = xy$ represents the CDF of $z$. This divides into two parts. The first is for $0 < x < z$, where the increment of area in the vertical slot is just equal to $dx$. The second part lies below the $xy = z$ curve, has $y$-height $z/x$, and incremental area $\frac{z}{x}\,dx$.
The product of two independent Normal samples has a density given by a modified Bessel function. Let $x, y$ be samples from a Normal(0,1) distribution and $z = xy$. Then

$$ f_Z(z) = \frac{K_0(|z|)}{\pi}, \qquad -\infty < z < \infty. $$
The variance of this distribution could be determined, in principle, by a definite integral from Gradshteyn and Ryzhik,

$$ \int_0^\infty x^{\mu}\, K_{\nu}(ax)\, dx = 2^{\mu-1}\, a^{-\mu-1}\, \Gamma\!\left(\frac{1+\mu+\nu}{2}\right) \Gamma\!\left(\frac{1+\mu-\nu}{2}\right), \qquad a > 0, $$

thus

$$ \operatorname{E}(Z^2) = \int_{-\infty}^{\infty} \frac{z^2}{\pi}\, K_0(|z|)\, dz = \frac{4}{\pi}\, \Gamma^2\!\left(\frac{3}{2}\right) = 1. $$
A much simpler result, stated in a section above, is that the variance of the product of zero-mean independent samples is equal to the product of their variances. Since the variance of each Normal sample is one, the variance of the product is also one.
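Both the $K_0$ density and the unit variance are easy to verify by simulation (a minimal NumPy/SciPy sketch, not part of the source):

```python
import numpy as np
from scipy.special import k0

rng = np.random.default_rng(7)
z = rng.normal(size=2_000_000) * rng.normal(size=2_000_000)

print("variance:", z.var())            # should be close to 1
for point in (0.25, 1.0, 2.0):
    emp = np.mean(np.abs(z - point) < 0.02) / 0.04      # histogram estimate
    print(f"z={point}: empirical {emp:.4f}, K0(|z|)/pi {k0(point)/np.pi:.4f}")
```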
The case of the product of correlated Normal samples was recently addressed by Nadarajah and Pogány.
Let $X, Y$ be zero mean, unit variance, normally distributed variates with correlation coefficient $\rho$, and let $Z = XY$. Then

$$ f_Z(z) = \frac{1}{\pi\sqrt{1-\rho^2}}\, \exp\!\left(\frac{\rho z}{1-\rho^2}\right) K_0\!\left(\frac{|z|}{1-\rho^2}\right). $$
Mean and variance: For the mean we have $\operatorname{E}(Z) = \rho$ from the definition of the correlation coefficient. The variance can be found by transforming from two unit variance, zero mean, uncorrelated variables $U, V$. Let

$$ X = U, \qquad Y = \rho U + \sqrt{1-\rho^2}\, V. $$
Then $X, Y$ are unit variance variables with correlation coefficient $\rho$ and

$$ (XY)^2 = U^2\left(\rho U + \sqrt{1-\rho^2}\, V\right)^2 = \rho^2 U^4 + 2\rho\sqrt{1-\rho^2}\, U^3 V + (1-\rho^2)\, U^2 V^2. $$
Removing odd-power terms, whose expectations are obviously zero, we get

$$ \operatorname{E}\big((XY)^2\big) = \rho^2 \operatorname{E}(U^4) + (1-\rho^2)\operatorname{E}(U^2)\operatorname{E}(V^2) = 3\rho^2 + (1-\rho^2) = 1 + 2\rho^2. $$
Since $\big(\operatorname{E}(XY)\big)^2 = \rho^2$ we have

$$ \operatorname{Var}(XY) = \operatorname{E}\big((XY)^2\big) - \big(\operatorname{E}(XY)\big)^2 = 1 + 2\rho^2 - \rho^2 = 1 + \rho^2. $$
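A short simulation (assuming NumPy; $\rho = 0.6$ chosen for illustration) reproduces both moments via the same $U, V$ construction:

```python
import numpy as np

rng = np.random.default_rng(8)
rho, n = 0.6, 2_000_000

# Build correlated unit normals from uncorrelated U, V as above
u, v = rng.normal(size=n), rng.normal(size=n)
x, y = u, rho * u + np.sqrt(1 - rho**2) * v

z = x * y
print("mean:", z.mean(), "expected:", rho)              # E[Z] = rho
print("variance:", z.var(), "expected:", 1 + rho**2)    # Var[Z] = 1 + rho^2
```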
High correlation asymptote
In the highly correlated case, $\rho \to 1$, the product converges on the square of one sample. In this case the $K_0$ asymptote $K_0(x) \to \sqrt{\pi/(2x)}\, e^{-x}$ for large $x$ applies, and the density approaches

$$ f_Z(z) \to \frac{1}{\sqrt{2\pi z}}\, e^{-z/2}, \qquad z > 0, $$
which is a Chi-squared distribution with one degree of freedom.
Multiple correlated samples. Nadarajah et al. further show that if $Z_1, Z_2, \ldots, Z_n$ are $n$ iid random variables sampled from $f_Z(z)$ above and $\bar{Z} = \tfrac{1}{n}\sum_{i=1}^{n} Z_i$ is their mean, then the density of $\bar{Z}$ can be written in closed form in terms of the Whittaker function $W$.
Using the identity $W_{0,\nu}(x) = \sqrt{x/\pi}\, K_{\nu}(x/2)$, see for example the DLMF compilation, eqn. (13.18.9), this expression can be somewhat simplified to an equivalent form in terms of the modified Bessel function $K_\nu$.
The pdf of $\bar{Z}$ gives the distribution of a sample covariance.
Multiple non-central correlated samples. The distribution of the product of correlated non-central normal samples was derived by Cui et al. and takes the form of an infinite series of modified Bessel functions of the first kind.
Moments of product of correlated central normal samples
From the results above, the low-order moments of $Z = XY$ are $\operatorname{E}(Z) = \rho$ and $\operatorname{E}(Z^2) = 1 + 2\rho^2$; higher moments follow in the same way by expanding $U^p\left(\rho U + \sqrt{1-\rho^2}\, V\right)^p$ and discarding the odd-power terms.
These product distributions are somewhat comparable to the Wishart distribution. The latter is the joint distribution of the four elements (actually only three independent elements) of a sample covariance matrix. If $x_t, y_t$ are samples from a bivariate time series then $W = \sum_{t=1}^{K} \begin{pmatrix} x_t \\ y_t \end{pmatrix} \begin{pmatrix} x_t & y_t \end{pmatrix}$ is a Wishart matrix with $K$ degrees of freedom. The product distributions above are the unconditional distribution of the aggregate of $K > 1$ samples of the off-diagonal element $W_{2,1} = \sum_{t=1}^{K} x_t y_t$.
Let $z_1 = u_1 + i v_1$ and $z_2 = u_2 + i v_2$ be independent complex samples whose real and imaginary parts $u_j, v_j$ are independent Normal(0,1) variates. The variable $|z_j|^2 = u_j^2 + v_j^2$ is clearly Chi-squared with two degrees of freedom and has PDF

$$ f_{|z_j|^2}(x) = \tfrac{1}{2}\, e^{-x/2}, \qquad x \ge 0. $$
Wells et al. show that the density function of $s \equiv |z_1 z_2|$ is

$$ f_s(s) = s\, K_0(s), \qquad s \ge 0, $$
and the cumulative distribution function of $s$ is

$$ F_s(s) = \Pr\big[|z_1 z_2| \le s\big] = 1 - s\, K_1(s). $$
Thus the polar representation of the product of two uncorrelated complex Gaussian samples is

$$ f_{s,\theta}(s,\theta) = f_s(s)\, p_\theta(\theta), \qquad \text{where } p_\theta(\theta) \text{ is uniform on } [0, 2\pi). $$
The first and second moments of this distribution can be found from the integral in Normal distributions above:

$$ m_1 = \int_0^\infty s^2\, K_0(s)\, ds = 2\,\Gamma^2\!\left(\tfrac{3}{2}\right) = \frac{\pi}{2}, $$
$$ m_2 = \int_0^\infty s^3\, K_0(s)\, ds = 2^2\,\Gamma^2(2) = 4. $$
Thus its variance is $\operatorname{Var}(s) = m_2 - m_1^2 = 4 - \dfrac{\pi^2}{4}$.
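These results are straightforward to check by simulation (a minimal NumPy/SciPy sketch; the complex samples are built from independent Normal(0,1) parts as above):

```python
import numpy as np
from scipy.special import k0

rng = np.random.default_rng(9)
n = 2_000_000

z1 = rng.normal(size=n) + 1j * rng.normal(size=n)
z2 = rng.normal(size=n) + 1j * rng.normal(size=n)
s = np.abs(z1 * z2)

print("mean:", s.mean(), "expected:", np.pi / 2)
print("variance:", s.var(), "expected:", 4 - np.pi**2 / 4)
for point in (0.5, 1.0, 2.0):
    emp = np.mean(np.abs(s - point) < 0.02) / 0.04      # histogram estimate
    print(f"s={point}: empirical {emp:.4f}, s*K0(s) {point * k0(point):.4f}")
```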
Further, the density of $z \equiv s^2 = |z_1 z_2|^2 = |z_1|^2 |z_2|^2$ corresponds to the product of two independent Chi-square samples, each with two DoF. Writing these as scaled Gamma distributions of shape 1 and scale 2, then, from the Gamma product result above, the density of the product is

$$ f_z(z) = \tfrac{1}{2}\, K_0\!\left(\sqrt{z}\right). $$
Independent complex-valued noncentral Normal Distributions
The product of non-central independent complex Gaussians is described by O’Donoughue and Moura and forms a double infinite series of modified Bessel functions of the first and second kinds.
The distribution of the product of a random variable having a uniform distribution on (0,1) and a random variable having a gamma distribution with shape parameter equal to 2 is an exponential distribution. A more general case concerns the distribution of the product of a random variable having a beta distribution with a random variable having a gamma distribution: for some cases where the parameters of the two component distributions are related in a certain way, the result is again a gamma distribution but with a changed shape parameter.
The K-distribution is an example of a non-standard distribution that can be defined as a product distribution (where both components have a gamma distribution).
In computational learning theory, a product distribution $\mathcal{D}$ over $\{0,1\}^n$ is specified by the parameters $\mu_1, \mu_2, \ldots, \mu_n$. Each parameter $\mu_i$ gives the marginal probability that the $i$th bit of $x \in \{0,1\}^n$ sampled as $x \sim \mathcal{D}$ is 1; i.e. $\mu_i = \Pr[x_i = 1]$. In this setting, the uniform distribution is simply a product distribution with every $\mu_i = 1/2$.
Product distributions are a key tool used for proving learnability results when the examples cannot be assumed to be uniformly sampled. They give rise to an inner product $\langle \cdot, \cdot \rangle$ on the space of real-valued functions on $\{0,1\}^n$ as follows:

$$ \langle f, g \rangle_{\mathcal{D}} = \operatorname{E}_{x \sim \mathcal{D}}\big[f(x)\, g(x)\big]. $$
This inner product gives rise to a corresponding norm as follows:

$$ \|f\|_{\mathcal{D}} = \sqrt{\langle f, f \rangle_{\mathcal{D}}}. $$
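For small $n$ the inner product and norm can be computed exactly by enumeration; the following is a minimal sketch (the helper names and example functions are illustrative, not from the source):

```python
import itertools
import math

mu = [0.3, 0.5, 0.9]                    # mu_i = Pr[x_i = 1]

def weight(x, mu):
    """Probability of bit string x under the product distribution D."""
    return math.prod(m if b else 1 - m for b, m in zip(x, mu))

def inner(f, g, mu):
    """<f, g>_D = E_{x~D}[f(x) g(x)], by enumerating {0,1}^n."""
    return sum(weight(x, mu) * f(x) * g(x)
               for x in itertools.product((0, 1), repeat=len(mu)))

f = lambda x: 1.0 if (x[0] and x[1]) else -1.0   # example +/-1 functions
g = lambda x: 1.0 if x[2] else -1.0

print("<f,g>_D =", inner(f, g, mu))
print("||f||_D =", math.sqrt(inner(f, f, mu)))   # equals 1 for +/-1 functions
```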
Gradshteyn, I. S.; Ryzhik, I. M. (1980). Tables of Integrals, Series and Products. Academic Press. Section 6.561.
Nadarajah, Saralees; Pogány, Tibor (2015). "On the distribution of the product of correlated normal random variables". Comptes Rendus de l'Académie des Sciences, Série I. 354 (2): 201–204. doi:10.1016/j.crma.2015.10.019.
Wells, R. T.; Anderson, R. L.; Cell, J. W. (1962). "The Distribution of the Product of Two Central or Non-Central Chi-Square Variates". The Annals of Mathematical Statistics. 33 (3): 1016–1020. doi:10.1214/aoms/1177704469.
Nadarajah, Saralees (June 2011). "Exact distribution of the product of n gamma and m Pareto random variables". Journal of Computational and Applied Mathematics. 235 (15): 4496–4512. doi:10.1016/j.cam.2011.04.018.
Servedio, Rocco A. (2004). "On learning monotone DNF under product distributions". Information and Computation. 193 (1): 57–74. doi:10.1016/j.ic.2004.04.003.
Springer, Melvin Dale; Thompson, W. E. (1970). "The distribution of products of beta, gamma and Gaussian random variables". SIAM Journal on Applied Mathematics. 18 (4): 721–737. doi:10.1137/0118065. JSTOR 2099424.
Springer, Melvin Dale; Thompson, W. E. (1966). "The distribution of products of independent random variables". SIAM Journal on Applied Mathematics. 14 (3): 511–526. doi:10.1137/0114046. JSTOR 2946226.