Krichevsky–Trofimov estimator

In information theory, given an unknown stationary source π with alphabet A and a sample w drawn from π, the Krichevsky–Trofimov (KT) estimator produces an estimate πi(w) of the probability of each symbol i ∈ A. This estimator is optimal in the sense that it asymptotically minimizes the worst-case regret.

For a binary alphabet, and a string w with m zeroes and n ones, the KT estimator can be defined recursively[1] as:


\begin{array}{lcl}
P(0, 0) & = & 1, \\[6pt]
P(m, n+1) & = & P(m,n)\dfrac{n + 1/2}{m + n + 1}, \\[12pt]
P(m+1, n) & = & P(m,n)\dfrac{m + 1/2}{m + n + 1}.
\end{array}
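Each recursive step multiplies by the estimated conditional probability of the next symbol, so after observing m zeroes and n ones the estimator assigns the next symbol probability (n + 1/2)/(m + n + 1) of being a one. A minimal Python sketch of the recursion (the function names here are illustrative, not a standard API), using exact rationals to avoid rounding:

```python
from fractions import Fraction

def kt_sequence_probability(w):
    """KT probability of a binary string w (an iterable of 0s and 1s),
    computed by the recursion P(m, n+1) = P(m, n) * (n + 1/2) / (m + n + 1)
    and its mirror image for zeroes."""
    p = Fraction(1)   # P(0, 0) = 1
    m = n = 0         # zeroes and ones seen so far
    for bit in w:
        if bit == 1:
            # multiply by (n + 1/2) / (m + n + 1), written with integer numerators
            p *= Fraction(2 * n + 1, 2 * (m + n + 1))
            n += 1
        else:
            # multiply by (m + 1/2) / (m + n + 1)
            p *= Fraction(2 * m + 1, 2 * (m + n + 1))
            m += 1
    return p

def kt_predict_one(m, n):
    """KT estimate of the probability that the next symbol is a one,
    given m zeroes and n ones observed so far."""
    return Fraction(2 * n + 1, 2 * (m + n + 1))
```

For example, `kt_sequence_probability([0, 1])` evaluates P(1, 1) = P(1, 0) · (1/2)/2 = (1/2) · (1/4) = 1/8, and `kt_predict_one(0, 0)` returns 1/2, the symmetric prior estimate before any symbols are seen.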

References

  1. Krichevsky, R. E.; Trofimov, V. K. (1981). "The Performance of Universal Encoding". IEEE Transactions on Information Theory. IT-27 (2): 199–207.