Talk:Shannon's source coding theorem


Reorganised/Merged

I suggest the 'Shannon noiseless source coding theorem' article be merged with this 'Source coding' article.

--Hpalaiya 22:54, 19 May 2006 (UTC)

This is a separate, large article; at most a summary of this article might be appropriate on the source coding page. Strong disagree.

—Preceding unsigned comment added by 81.104.79.141 (talkcontribs) 16:18, 21 December 2006

Merge carried out, having first:

Some more clean-up to do, to blend the presentation of the two theorems more closely. (Detailed editing not yet started).

-- Jheald 22:00, 6 March 2007 (UTC).

I don't think that the <math>\mathbb{E}</math> that keeps showing up is ever explained or defined in this article. A quick explanation of what it is would go a long way for someone who is not already familiar with the material.

Jeejaw (talk) 02:24, 11 September 2009 (UTC)

An alternative proof for the symbol code case

Applying Jensen's inequality to the expression <math>\sum_{i=1}^n p_i \log_2 (a^{-s_i}/p_i)</math>, we obtain a direct proof without using any <math>q_i</math>'s:

<math>\sum_{i=1}^n p_i \log_2 \frac{a^{-s_i}}{p_i} \leq \log_2 \sum_{i=1}^n a^{-s_i}</math>

Using Kraft's inequality <math>\sum_{i=1}^n a^{-s_i} \leq 1</math> on the right side:


<math>
\begin{align}
\sum_{i=1}^n p_i \log_2 a^{-s_i} + \sum_{i=1}^n p_i \log_2 \frac{1}{p_i} &\leq \log_2 1 \\
\sum_{i=1}^n p_i s_i \log_2 a - \sum_{i=1}^n p_i \log_2 \frac{1}{p_i} &\geq 0 \\
\mathbb{E}(S) \log_2 a - H(X) &\geq 0.
\end{align}
</math>

--Vamos (talk) 20:12, 18 November 2009 (UTC)
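For what it's worth, the conclusion of the derivation above can be checked numerically. This is only a sketch: the binary code lengths and probabilities below are invented for illustration and are not from the article.

```python
import math

# Hypothetical binary prefix code (a = 2) satisfying Kraft's inequality;
# the codeword lengths s_i and symbol probabilities p_i are made-up examples.
a = 2
p = [0.5, 0.25, 0.125, 0.125]
s = [1, 2, 3, 3]

assert sum(a ** -si for si in s) <= 1  # Kraft's inequality holds

expected_length = sum(pi * si for pi, si in zip(p, s))  # E(S)
entropy = -sum(pi * math.log2(pi) for pi in p)          # H(X)

# The theorem's conclusion: E(S) * log2(a) - H(X) >= 0
print(expected_length * math.log2(a) - entropy)  # prints 0.0 (bound is tight here)
```

Because this distribution is dyadic and the lengths are exactly the optimal ones, the bound is met with equality.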

Source coding theorem for symbol codes is wrong?

As far as I can see, the source coding theorem for symbol codes is wrong, or at least not stated accurately. Here is a counterexample: let <math>\Sigma_1=\Sigma_2=\{0,1\}</math>, <math>f(\epsilon)=0</math>, <math>f(0)=\epsilon</math>, and <math>f(x)=x</math> for all other <math>x</math> (<math>\epsilon</math> is the empty word), and let <math>X</math> be 0 or 1 with equal probability. Then f is clearly a decipherable code, but <math>\mathbb{E}S=0.5<H(X)=1</math>. There is also a counterexample that does not use the empty word. Could someone help to state this theorem more accurately? Alexei Kopylov (talk) 17:30, 5 May 2010 (UTC)


Well Alexei, using any zero-cost (empty) symbol, a message longer than a single symbol will not be uniquely decodable. That contradicts Kraft's inequality, which is a necessary condition for unique decodability. Vamos (talk) 17:19, 12 May 2010 (UTC)
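This point can also be checked numerically. A sketch, using only the codeword lengths from the counterexample above (empty word for 0, a single symbol for 1):

```python
import math

# Codeword lengths from the counterexample: f(0) is the empty word (length 0),
# f(1) has length 1; X is uniform on {0, 1}.
s = [0, 1]
p = [0.5, 0.5]

kraft_sum = sum(2 ** -si for si in s)
print(kraft_sum)  # 1.5 > 1: Kraft's inequality is violated

expected_length = sum(pi * si for pi, si in zip(p, s))
entropy = -sum(pi * math.log2(pi) for pi in p)
print(expected_length, entropy)  # 0.5 < 1.0, so the bound fails for this code
```

So the expected length does beat the entropy, but only because the code violates Kraft's inequality and hence cannot be uniquely decodable.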

Oh, thanks! The problem was that the statement was about a decipherable code, and there was no definition of what a decipherable code is. I changed it to uniquely decodable code, with the wiki link. See my change. Alexei Kopylov (talk) 01:06, 14 May 2010 (UTC)

Practical example

Why is no practical example included for the less mathematically inclined? I often find a practical example can greatly help to illustrate abstract concepts, and I think this would greatly enhance the usefulness of the article. — Preceding unsigned comment added by 145.18.212.47 (talk) 08:31, 27 June 2013 (UTC)
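One possible practical example along these lines: build a Huffman code for a small invented source and compare the average codeword length against the entropy. This is only a sketch (the symbols and probabilities are made up, and Huffman coding is just one code achieving the theorem's bound):

```python
import heapq
import math

# Invented source: four symbols with dyadic probabilities.
probs = {'a': 0.5, 'b': 0.25, 'c': 0.125, 'd': 0.125}

# Build a Huffman tree with a heap of (probability, tiebreak, node) entries;
# the integer tiebreak keeps tuple comparisons away from the node payloads.
heap = [(p, i, sym) for i, (sym, p) in enumerate(probs.items())]
heapq.heapify(heap)
counter = len(heap)
while len(heap) > 1:
    p1, _, left = heapq.heappop(heap)
    p2, _, right = heapq.heappop(heap)
    heapq.heappush(heap, (p1 + p2, counter, (left, right)))
    counter += 1

def codewords(node, prefix=''):
    """Walk the tree; left edges append '0', right edges append '1'."""
    if isinstance(node, str):
        return {node: prefix or '0'}
    left, right = node
    return {**codewords(left, prefix + '0'), **codewords(right, prefix + '1')}

code = codewords(heap[0][2])
avg_len = sum(probs[s] * len(c) for s, c in code.items())    # E(S)
entropy = -sum(p * math.log2(p) for p in probs.values())     # H(X)
print(code)
print(avg_len, entropy)  # average length meets the H(X) lower bound here
```

For this dyadic distribution the average length equals the entropy (1.75 bits/symbol), which illustrates the theorem's lower bound being achieved.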