Eb/N0

From Wikipedia, the free encyclopedia
Figure: bit-error rate (BER) versus Eb/N0 curves for several digital modulation methods over an AWGN channel, a common application of Eb/N0.

Eb/N0 (the energy per bit to noise power spectral density ratio) is an important parameter in digital communication or data transmission. It is a normalized signal-to-noise ratio (SNR) measure, also known as the "SNR per bit". It is especially useful when comparing the bit error rate (BER) performance of different digital modulation schemes without taking bandwidth into account.

Eb/N0 is equal to the SNR divided by the "gross" link spectral efficiency in (bit/s)/Hz, where the bits in this context are transmitted data bits, inclusive of error correction information and other protocol overhead. When forward error correction (FEC) is being discussed, Eb/N0 is routinely used to refer to the energy per information bit (i.e. the energy per bit net of FEC overhead bits); in this context, Es/N0 is generally used to relate actual transmitted power to noise.[1]
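
As a minimal numeric sketch of this division (the SNR and spectral-efficiency values below are illustrative assumptions, not figures from the article), in Python:

    import math

    snr_db = 10.0          # assumed signal-to-noise ratio in dB
    spectral_eff = 2.0     # assumed gross link spectral efficiency in (bit/s)/Hz

    # Eb/N0 = SNR / (gross spectral efficiency), computed here in linear units
    snr_linear = 10 ** (snr_db / 10)
    ebn0_db = 10 * math.log10(snr_linear / spectral_eff)
    print(f"Eb/N0 = {ebn0_db:.2f} dB")   # about 6.99 dB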

The noise spectral density N0, usually expressed in units of watts per hertz, can also be seen as having dimensions of energy, or units of joules, or joules per cycle. Eb/N0 is therefore a non-dimensional ratio.

Eb/N0 is commonly used with modulation and coding designed for noise-limited rather than interference-limited communication, since additive white noise (with constant noise density N0) is assumed.

Relation to carrier-to-noise ratio

Eb/N0 is closely related to the carrier-to-noise ratio (CNR or C/N), i.e. the signal-to-noise ratio (SNR) of the received signal, after the receiver filter but before detection:

\frac{C}{N} = \frac{E_b}{N_0} \cdot \frac{f_b}{B},

where

fb is the channel data rate (net bit rate), and
B is the channel bandwidth in hertz.

The equivalent expression in logarithmic form (dB):

\text{CNR}_{\text{dB}} = 10\log_{10}\left(\frac{E_b}{N_0}\right) + 10\log_{10}\left(\frac{f_b}{B}\right).

Caution: sometimes the noise power spectral density is denoted by N_0/2 when negative frequencies and complex-valued equivalent baseband signals are considered rather than passband signals, and in that case there will be a 3 dB difference.
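
As a minimal sketch of this conversion (the Eb/N0, bit rate, and bandwidth below are assumed example values, not taken from the article), in Python:

    import math

    ebn0_db = 8.0     # assumed Eb/N0 in dB
    f_b = 2.0e6       # assumed net bit rate in bit/s
    B = 1.0e6         # assumed channel bandwidth in Hz

    # CNR_dB = 10*log10(Eb/N0) + 10*log10(fb / B)
    cnr_db = ebn0_db + 10 * math.log10(f_b / B)
    print(f"C/N = {cnr_db:.2f} dB")   # 8 dB plus about 3.01 dB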

Relation to Es/N0

Eb/N0 can be seen as a normalized measure of the energy per symbol to noise power spectral density (Es/N0):

\frac{E_b}{N_0} =\frac{E_s}{\rho N_0},

where Es is the energy per symbol in joules and \rho is the nominal spectral efficiency in (bit/s)/Hz.[2] Es/N0 is also commonly used in the analysis of digital modulation schemes. The two quotients are related to each other according to the following:

\frac{E_s}{N_0} =\frac{E_b}{N_0}\log_2 M ,

where M is the number of alternative modulation symbols.

Note that Eb here denotes the energy per transmitted bit, not the energy per information bit.
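
For example, assuming an uncoded 16-QAM constellation (M = 16, chosen here purely for illustration), each symbol carries \log_2 16 = 4 bits, so

\frac{E_s}{N_0} = 4\,\frac{E_b}{N_0},

a difference of 10\log_{10}(4) \approx 6~\text{dB}.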

Es/N0 can further be expressed as:

\frac{E_s}{N_0} = \frac{C}{N}\frac{B}{f_s},

where

C/N is the carrier-to-noise ratio or signal-to-noise ratio.
B is the channel bandwidth in hertz.
fs is the symbol rate in baud or symbols per second.
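
A hedged sketch combining the two relations above, with all numerical values (C/N, bandwidth, symbol rate, and constellation size) assumed for illustration only:

    import math

    cn_db = 12.0      # assumed carrier-to-noise ratio in dB
    B = 1.0e6         # assumed channel bandwidth in Hz
    f_s = 0.5e6       # assumed symbol rate in symbols per second
    M = 16            # assumed number of modulation symbols (e.g. 16-QAM)

    # Es/N0 = (C/N) * (B / fs); in dB this adds 10*log10(B/fs)
    esn0_db = cn_db + 10 * math.log10(B / f_s)
    # Eb/N0 = (Es/N0) / log2(M); in dB this subtracts 10*log10(log2 M)
    ebn0_db = esn0_db - 10 * math.log10(math.log2(M))
    print(f"Es/N0 = {esn0_db:.2f} dB, Eb/N0 = {ebn0_db:.2f} dB")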

Shannon limit

The Shannon–Hartley theorem states that the maximum reliable information rate of a channel (the data rate exclusive of error-correction coding) depends on its bandwidth and signal-to-noise ratio according to:

 I < B \log_2 \left( 1+\frac{S}{N} \right)

where

I is the information rate in bits per second excluding error-correcting codes;
B is the bandwidth of the channel in hertz;
S is the total signal power (equivalent to the carrier power C); and
N is the total noise power in the bandwidth.

This equation can be used to establish a bound on Eb/N0 for any system that achieves reliable communication, by considering a gross bit rate R equal to the net bit rate I and therefore an average energy per bit of Eb = S/R, with noise spectral density of N0 = N/B. For this calculation, it is conventional to define a normalized rate Rl = R/2B, a bandwidth utilization parameter of bits per second per half hertz, or bits per dimension (a signal of bandwidth B can be encoded with 2B dimensions, according to the Nyquist–Shannon sampling theorem). Making appropriate substitutions, the Shannon limit is:

 \frac{R}{B} = 2 R_l < \log_2 \left( 1 + 2 R_l \frac{E_b}{N_0} \right)

This can be solved to give the Shannon-limit bound on Eb/N0:

\frac{E_b}{N_0} > \frac{2^{2R_l}-1}{2R_l}
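
A short Python sketch evaluating this bound for a few normalized rates Rl (the sample values are arbitrary illustrative choices):

    import math

    def ebn0_bound_db(r_l):
        """Shannon-limit lower bound on Eb/N0, in dB, at normalized rate Rl = R/(2B)."""
        return 10 * math.log10((2 ** (2 * r_l) - 1) / (2 * r_l))

    for r_l in (0.01, 0.5, 1.0, 2.0):
        print(f"Rl = {r_l:4}: Eb/N0 > {ebn0_bound_db(r_l):.2f} dB")
    # As Rl approaches zero the bound approaches ln(2), i.e. about -1.59 dB.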

When the data rate is small compared to the bandwidth, so that Rl is near zero, the bound, sometimes called the ultimate Shannon limit,[3] is:

\frac{E_b}{N_0} > \ln(2)

which corresponds to approximately -1.59 dB, because

\ln(2) \approx 0.693 \quad \text{and} \quad 10\log_{10}(0.693) \approx -1.59~\text{dB}.

Cutoff rate

For any given system of coding and decoding, there exists what is known as a cutoff rate R0, typically corresponding to an Eb/N0 about 2 dB above the Shannon capacity limit.[citation needed] The cutoff rate used to be thought of as the limit on practical error-correction codes without an unbounded increase in processing complexity, but it has been rendered largely obsolete by the more recent discovery of turbo codes and low-density parity-check (LDPC) codes.

References

  1. ^ Chris Heegard and Stephen B. Wicker (1999). Turbo coding. Kluwer. p. 3. ISBN 978-0-7923-8378-9. 
  2. ^ Forney, David. "MIT OpenCourseWare, 6.451 Principles of Digital Communication II, Lecture Notes section 4.2". Retrieved 21 September 2010. 
  3. ^ Nevio Benvenuto and Giovanni Cherubini (2002). Algorithms for Communications Systems and Their Applications. John Wiley & Sons. p. 508. ISBN 0-470-84389-6. 
