# Fisher transformation

A graph of the transformation (in orange). The untransformed sample correlation coefficient is plotted on the horizontal axis, and the transformed coefficient is plotted on the vertical axis. The identity function (gray) is also shown for comparison.

In statistics, hypotheses about the value of the population correlation coefficient ρ between variables X and Y can be tested using the Fisher transformation[1][2] (aka Fisher z-transformation) applied to the sample correlation coefficient.

## Definition

Given a set of N bivariate sample pairs ${\displaystyle (X_{i},Y_{i})}$, i = 1, ..., N, the sample correlation coefficient r is given by

${\displaystyle r={\frac {\operatorname {cov} (X,Y)}{\sigma _{X}\sigma _{Y}}}={\frac {\sum _{i=1}^{N}(X_{i}-{\bar {X}})(Y_{i}-{\bar {Y}})}{{\sqrt {\sum _{i=1}^{N}(X_{i}-{\bar {X}})^{2}}}{\sqrt {\sum _{i=1}^{N}(Y_{i}-{\bar {Y}})^{2}}}}}.}$

Here ${\displaystyle \operatorname {cov} (X,Y)}$ stands for the covariance between the variables ${\displaystyle X}$ and ${\displaystyle Y}$ and ${\displaystyle \sigma }$ stands for the standard deviation of the respective variable. Fisher's z-transformation of r is defined as

${\displaystyle z:={1 \over 2}\ln \left({1+r \over 1-r}\right)=\operatorname {arctanh} (r),}$

where "ln" is the natural logarithm function and "arctanh" is the inverse hyperbolic tangent function.
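As an illustrative sketch (using NumPy with a small made-up dataset), the sample correlation r and its Fisher transform z can be computed directly; `np.arctanh` agrees with the explicit logarithm formula:

```python
import numpy as np

# Small illustrative dataset (invented for this example)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.2, 1.9, 3.4, 3.8, 5.3, 5.9])

# Sample correlation coefficient r
r = np.corrcoef(x, y)[0, 1]

# Fisher z-transformation: z = (1/2) ln((1+r)/(1-r)) = arctanh(r)
z_log = 0.5 * np.log((1 + r) / (1 - r))
z_tanh = np.arctanh(r)

print(r, z_log, z_tanh)
```

Note that |z| > |r| for r ≠ 0: the transformation stretches the ends of the interval (−1, 1) out toward ±∞.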

If ${\displaystyle (X,Y)}$ has a bivariate normal distribution with correlation ρ and the pairs ${\displaystyle (X_{i},Y_{i})}$ are independent and identically distributed, then z is approximately normally distributed with mean

${\displaystyle {1 \over 2}\ln \left({{1+\rho } \over {1-\rho }}\right),}$

and standard error

${\displaystyle {1 \over {\sqrt {N-3}}},}$

where N is the sample size, and ρ is the true correlation coefficient.

This transformation, and its inverse

${\displaystyle r={\frac {\exp(2z)-1}{\exp(2z)+1}}=\operatorname {tanh} (z),}$

can be used to construct a large-sample confidence interval for ρ using standard normal theory.
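A minimal sketch of that construction (assuming NumPy and SciPy; the function name is ours): transform r to z, build a normal interval with standard error 1/√(N − 3), then map the endpoints back to the correlation scale with tanh:

```python
import numpy as np
from scipy.stats import norm

def fisher_ci(r, n, confidence=0.95):
    """Approximate confidence interval for the population correlation rho.

    Transform r to z = arctanh(r), apply the normal approximation with
    standard error 1/sqrt(n - 3), then map the interval back via tanh.
    """
    z = np.arctanh(r)
    se = 1.0 / np.sqrt(n - 3)
    half_width = norm.ppf(0.5 + confidence / 2) * se
    return np.tanh(z - half_width), np.tanh(z + half_width)

lo, hi = fisher_ci(r=0.6, n=30)
print(lo, hi)
```

The resulting interval is asymmetric about r on the correlation scale, reflecting the skewness of the sampling distribution of r that the transformation removes on the z scale.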

## Derivation

Fisher Transformation with ${\displaystyle \rho =0.9}$ and ${\displaystyle N=30}$. Illustrated is the exact probability density function of ${\displaystyle r}$ (in black), together with the probability density functions of the basic Fisher (blue) and enhanced Fisher (red) approximations. Note that the latter approximation is visually indistinguishable from the exact answer (its maximum error is ${\displaystyle 0.3\%}$, compared to ${\displaystyle 3.4\%}$ of basic Fisher).

To derive the Fisher transformation, one starts by considering an arbitrary increasing function of ${\displaystyle r}$, say ${\displaystyle G(r)}$. Finding the first term in the large-${\displaystyle N}$ expansion of the corresponding skewness results in

${\displaystyle {\frac {6\rho -3(1-\rho ^{2})G^{\prime \prime }(\rho )/G^{\prime }(\rho )}{\sqrt {N}}}+O(N^{-3/2}).}$

Setting this equal to zero and solving the corresponding differential equation for ${\displaystyle G}$ yields the ${\displaystyle \operatorname {arctanh} }$ function. Similarly expanding the mean and variance of ${\displaystyle \operatorname {arctanh} (r)}$, one gets
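The skewness-removal condition can be checked symbolically (a sketch using SymPy): substituting G = arctanh into the leading skewness term above should make it vanish identically:

```python
import sympy as sp

rho = sp.symbols('rho')
G = sp.atanh(rho)  # candidate solution of the skewness ODE

# Leading skewness term (up to the 1/sqrt(N) factor):
# 6*rho - 3*(1 - rho**2) * G''(rho) / G'(rho)
expr = 6*rho - 3*(1 - rho**2) * sp.diff(G, rho, 2) / sp.diff(G, rho)

print(sp.simplify(expr))  # simplifies to 0: arctanh kills the leading skewness
```

Concretely, G′(ρ) = 1/(1 − ρ²) and G″(ρ) = 2ρ/(1 − ρ²)², so 3(1 − ρ²)G″/G′ = 6ρ and the term cancels.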

${\displaystyle \operatorname {arctanh} (\rho )+{\frac {\rho }{2N}}+O(N^{-2})}$

and

${\displaystyle {\frac {1}{N}}+{\frac {6-\rho ^{2}}{2N^{2}}}+O(N^{-3})}$

respectively. Note that the extra terms are not part of the usual Fisher transformation, even though they represent a substantial improvement in accuracy at minimal cost. Also note that the near-constant variance of the transformation is an incidental by-product of removing its skewness – the actual improvement comes from the latter property, not the former. Now, it is

${\displaystyle {\frac {\operatorname {arctanh} (r)-\operatorname {arctanh} (\rho )-{\frac {\rho }{2N}}}{\sqrt {{\frac {1}{N}}+{\frac {6-\rho ^{2}}{2N^{2}}}}}},}$

which, to an excellent approximation, follows a standard normal distribution.[3]
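As a sketch, this enhanced approximation can be coded directly (NumPy assumed; the function name is ours):

```python
import numpy as np

def enhanced_fisher_pivot(r, rho, n):
    """Standardized statistic for arctanh(r) with the higher-order mean and
    variance corrections described above; approximately N(0, 1)."""
    mean = np.arctanh(rho) + rho / (2 * n)
    var = 1.0 / n + (6 - rho**2) / (2 * n**2)
    return (np.arctanh(r) - mean) / np.sqrt(var)

# Sanity check: at r equal to the back-transformed corrected mean,
# the pivot is exactly zero.
rho, n = 0.9, 30
r0 = np.tanh(np.arctanh(rho) + rho / (2 * n))
print(enhanced_fisher_pivot(r0, rho, n))
```

Dropping the two correction terms recovers the basic Fisher statistic (with variance 1/N rather than 1/(N − 3)).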

## Discussion

The Fisher transformation is an approximate variance-stabilizing transformation for r when X and Y follow a bivariate normal distribution. This means that the variance of z is approximately constant for all values of the population correlation coefficient ρ. Without the Fisher transformation, the variance of r grows smaller as |ρ| gets closer to 1. Since the Fisher transformation is approximately the identity function when |r| < 1/2, it is sometimes useful to remember that the variance of r is well approximated by 1/N as long as |ρ| is not too large and N is not too small. This is related to the fact that the asymptotic variance of r is 1 for bivariate normal data.
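The variance-stabilizing behavior can be illustrated by simulation (a sketch assuming NumPy; the sample size, replication count, and seed are arbitrary choices): across several values of ρ, the sample variance of r shrinks as |ρ| grows, while the variance of z stays near 1/(N − 3):

```python
import numpy as np

rng = np.random.default_rng(0)
N, reps = 30, 4000

var_r, var_z = {}, {}
for rho in (0.0, 0.5, 0.9):
    cov = [[1.0, rho], [rho, 1.0]]
    rs = np.empty(reps)
    for i in range(reps):
        xy = rng.multivariate_normal([0.0, 0.0], cov, size=N)
        rs[i] = np.corrcoef(xy[:, 0], xy[:, 1])[0, 1]
    var_r[rho] = rs.var()           # shrinks as |rho| -> 1
    var_z[rho] = np.arctanh(rs).var()  # roughly constant, near 1/(N - 3)

print(var_r)
print(var_z)
```

Here 1/(N − 3) = 1/27 ≈ 0.037, and the simulated variance of z stays close to that value for every ρ, while the variance of r collapses toward zero as ρ approaches 1.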

The behavior of this transform has been extensively studied since Fisher introduced it in 1915. Fisher himself found the exact distribution of z for data from a bivariate normal distribution in 1921; Gayen in 1951[4] determined the exact distribution of z for data from a bivariate Type A Edgeworth distribution. Hotelling in 1953 calculated the Taylor series expressions for the moments of z and several related statistics[5] and Hawkins in 1989 discovered the asymptotic distribution of z for data from a distribution with bounded fourth moments.[6]

## Other uses

While the Fisher transformation is mainly associated with the Pearson product-moment correlation coefficient for bivariate normal observations, it can also be applied to Spearman's rank correlation coefficient in more general cases.[7] A similar result holds for the asymptotic distribution, but with a minor adjustment factor; see the article on Spearman's rank correlation coefficient for details.