# Transfer entropy

Transfer entropy is a non-parametric statistic measuring the amount of directed (time-asymmetric) transfer of information between two random processes.[1][2][3] Transfer entropy from a process X to another process Y is the reduction in uncertainty about future values of Y obtained from knowing the past values of X, given the past values of Y. More specifically, if $X_t$ and $Y_t$ for $t\in \mathbb{N}$ denote two random processes and the amount of information is measured using Shannon's entropy, the transfer entropy can be written as:

$T_{X\rightarrow Y} = H\left( Y_t \mid Y_{t-1:t-L}\right) - H\left( Y_t \mid Y_{t-1:t-L}, X_{t-1:t-L}\right),$

where $H(X)$ is the Shannon entropy of $X$ and $Y_{t-1:t-L}$ denotes the $L$ past values of the process. The above definition of transfer entropy has been extended to other entropy measures, such as Rényi entropy.[3]
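For discrete-valued processes, the definition can be estimated directly from empirical frequencies. The following is a minimal plug-in sketch (not a method prescribed by the sources above) for history length $L = 1$, where the two conditional entropies are combined into a single sum over the joint distribution of $(Y_t, Y_{t-1}, X_{t-1})$; the function name and interface are illustrative choices:

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in estimate of T_{X->Y} in bits, with history length L = 1.

    Uses the identity
        T_{X->Y} = sum p(y_t, y_{t-1}, x_{t-1})
                   * log2[ p(y_t | y_{t-1}, x_{t-1}) / p(y_t | y_{t-1}) ],
    with all probabilities replaced by empirical frequencies.
    """
    n = len(y) - 1
    # Counts of the joint symbol (y_t, y_{t-1}, x_{t-1}) and its marginals.
    c_yyx = Counter((y[t], y[t - 1], x[t - 1]) for t in range(1, len(y)))
    c_yx = Counter((y[t - 1], x[t - 1]) for t in range(1, len(y)))
    c_yy = Counter((y[t], y[t - 1]) for t in range(1, len(y)))
    c_y = Counter(y[t - 1] for t in range(1, len(y)))
    te = 0.0
    for (yt, yp, xp), c in c_yyx.items():
        p_joint = c / n
        p_cond_full = c / c_yx[(yp, xp)]       # p(y_t | y_{t-1}, x_{t-1})
        p_cond_hist = c_yy[(yt, yp)] / c_y[yp]  # p(y_t | y_{t-1})
        te += p_joint * log2(p_cond_full / p_cond_hist)
    return te
```

As a sanity check, if $Y$ is a one-step-delayed copy of a fair binary process $X$, the estimate of $T_{X\rightarrow Y}$ approaches 1 bit, while $T_{Y\rightarrow X}$ stays near zero; plug-in estimates like this carry a positive bias at small sample sizes, which is one reason more sophisticated estimators are used in practice.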

Transfer entropy reduces to Granger causality for vector auto-regressive processes.[4] Hence, it is advantageous when the model assumptions of Granger causality do not hold, for example in the analysis of non-linear signals.[5][6] However, it usually requires more samples for accurate estimation.[7] While it was originally defined for bivariate analysis, transfer entropy has been extended to multivariate forms, either conditioning on other potential source variables[8] or considering transfer from a collection of sources,[9] although these forms again require more samples.
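The equivalence with Granger causality can be illustrated numerically. For linear-Gaussian processes, the conditional entropies in the definition are determined by residual variances, so the transfer entropy (in nats) equals half the log-ratio form of Granger causality. The sketch below, an illustration rather than a prescribed procedure, simulates a bivariate AR(1) system in which X drives Y and compares the restricted and full regression models; the coupling coefficients and helper function are assumed values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Bivariate AR(1) simulation: X drives Y, but not vice versa.
n = 20000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()

def residual_var(target, *predictors):
    """Variance of OLS residuals of target on predictors (plus intercept)."""
    A = np.column_stack(predictors + (np.ones(len(target)),))
    beta, *_ = np.linalg.lstsq(A, target, rcond=None)
    return (target - A @ beta).var()

# Restricted model: Y_t on its own past; full model adds X's past.
var_r = residual_var(y[1:], y[:-1])
var_f = residual_var(y[1:], y[:-1], x[:-1])

granger_xy = np.log(var_r / var_f)      # Granger causality, log-ratio form
te_xy = 0.5 * np.log(var_r / var_f)     # Gaussian transfer entropy, in nats
```

In the driving direction the log-ratio is clearly positive, while in the reverse direction (regressing X's future on Y's past) it is close to zero, matching the simulated coupling structure.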

Transfer entropy has been used for estimation of functional connectivity of neurons[9][10] and social influence in social networks.[5]