
Information Theory and Coding
CASE STUDY
TOPIC: USE OF FORWARD ERROR CORRECTION CODE FOR WHITE GAUSSIAN NOISE CHANNELS

SUBMITTED BY -
TANYA SURI - 03551202816
VAIBHAV BHAWANI - 03651202816
VAIBHAV MISRA - 03751202816
VARSHA SINGH - 03851202816
VARUN TANEJA - 03951202816
VISHAL SINGH - 04051202816
VISHAL OJHA - 04151202816
YASH KHANDELWAL - 04251202816
Use Forward Error Correction to Improve Data Communications
As bandwidth demands increase and the tolerance for errors and latency decreases, designers of data-
communication systems are looking for new ways to expand available bandwidth and improve the quality of
transmission. One solution is not actually new; it has been around for years, yet it remains highly useful. Called
forward error correction (FEC), this design technique has been used for years to enable efficient, high-quality
data communication over noisy channels, such as those found in satellite and digital cellular-communications
applications.
Recently, there have been significant advances in FEC technology that allow modern systems to approach the
Shannon limit: the theoretical maximum rate at which information can be transmitted reliably over a given
channel. These advances are being used successfully to reduce cost and increase performance in a variety of
communications systems, including satellite links, wireless LANs, and fiber-optic systems. In addition, high-
speed silicon ASICs for FEC applications have been developed, promising to further revolutionize
communication system design.
The big attraction of FEC technology is that it adds redundant information to a data stream, enabling the
receiver to identify and correct errors without the need for retransmission.

What is FEC?
Forward error correction (FEC) is a method of obtaining error control in data transmission in which the
source (transmitter) sends redundant data and the destination (receiver) recognizes only the portion of the
data that contains no apparent errors. Because FEC does not require handshaking between the source and
the destination, it can be used for broadcasting of data to many destinations simultaneously from a single
source.

Methods for implementing FEC


In the simplest form of FEC, each character is sent twice. The receiver checks both instances of each character
for adherence to the protocol being used. If conformity occurs in both instances, the character is accepted. If
conformity occurs in one instance and not in the other, the character that conforms to protocol is accepted. If
conformity does not occur in either instance, the character is rejected and a blank space or an underscore (_) is
displayed in its place.
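As a concrete sketch of this simplest scheme, the following Python snippet (hypothetical names; it assumes the "protocol" simply defines a set of allowed characters) sends each character twice and applies the accept/reject rules described above:

```python
ALLOWED = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 ")  # characters the protocol permits

def fec_encode(message):
    """Simplest FEC: transmit each character twice."""
    return [ch for ch in message for _ in range(2)]

def fec_decode(received):
    """Accept a character if at least one copy conforms to the protocol;
    otherwise substitute an underscore."""
    out = []
    for first, second in zip(received[0::2], received[1::2]):
        if first in ALLOWED:
            out.append(first)      # first copy conforms (also covers the both-conform case)
        elif second in ALLOWED:
            out.append(second)     # only the second copy conforms
        else:
            out.append("_")        # neither copy conforms: reject
    return "".join(out)

# Example: the channel corrupts the first copy of 'E'
sent = fec_encode("HELLO")
sent[2] = "\x07"                   # a non-conforming character
print(fec_decode(sent))            # -> HELLO
```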
Simple FEC is one of two modes used by radio amateurs in a self-correcting digital mode called AMTOR (an
abbreviation for amateur teleprinting over radio). It is sometimes called Mode B. The other AMTOR mode,
automatic repeat request (ARQ), involves handshaking and is also used with communications systems such as
Global System for Mobile Communications (GSM). In amateur radio, ARQ is sometimes called AMTOR Mode A.

Use Forward Error Correction for WGN Channels


As the capabilities of FEC increase, the number of errors that can be corrected also increases. The advantage is
obvious. Noisy channels create a relatively large number of errors. The ability to correct these errors means
that the noisy channel can be used reliably. This enhancement can be parlayed into several system
improvements, including bandwidth efficiency, extended range, higher data rate, and greater power efficiency,
as well as increased data reliability.
FEC requires that data first be encoded. The original user data to be transmitted over the channel are called
information bits, while the data after the addition of error-correction information are called coded bits. For a
code that turns k information bits into n coded bits, the ratio k/n is called the code rate.
The FEC decoding process doesn't need to generate n-bit estimates as an intermediate step. In a well-designed
decoder, quantized channel-measurement data is taken as the decoder input. This raw channel measurement
data consists of n metrics where each metric corresponds to the likelihood that a particular bit is a logical 1.
Furthermore, the likelihood that a given bit is a logical 0 is simply the complement of this number. These metrics are usually
represented by 3- or 4-bit integers called soft decision metrics. The decoder output is an estimate of the k
information bits. Typically, decoding with soft decision metrics is computationally intensive. Very often, it's
performed on a decoder ASIC that's specifically designed for the task.
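One common convention for forming such metrics (a sketch, not the specific decoder described above) is to quantize each received antipodal sample to a small signed integer, so that the sign carries the hard decision and the magnitude carries the confidence:

```python
import numpy as np

def soft_metrics(received, n_bits=3):
    """Quantize received antipodal samples (+1 ~ logical 1, -1 ~ logical 0)
    to n_bits-wide soft-decision metrics.

    Returns integers in [-(2**(n_bits-1)), 2**(n_bits-1) - 1]; large positive
    values mean 'confidently 1', large negative values 'confidently 0'.
    """
    levels = 2 ** (n_bits - 1)
    scaled = np.round(received * levels)
    return np.clip(scaled, -levels, levels - 1).astype(int)

# Example: three strong samples and one near the decision boundary
print(soft_metrics(np.array([0.9, -1.2, 1.4, 0.1])))  # -> [ 3 -4  3  0]
```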
A code's performance is strongly dependent on the data-transmission channel. To facilitate the comparison of
one code with another, a standard model is used in which noise is added to antipodal signals. In this model, the
noise is additive white Gaussian noise (AWGN): independent noise samples are added to the antipodal channel
symbols, and the variance of the noise is determined by the power spectral density of the noise (No). Antipodal
signaling is a mapping in which 1s and 0s are transmitted as +Z and -Z, respectively. For example, if Z
represents 1 V on a transmission wire, then 1s and 0s are transmitted as +1 V and -1 V. The received energy per
transmitted data bit (Eb) is proportional to Z^2. An important parameter in the system is the signal-to-noise
ratio, Eb/No. The AWGN model accurately represents many types of real channels, and channels exhibiting
other types of impairments often have AWGN-like impairment as well.
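A minimal simulation of this model (assuming uncoded antipodal signaling with Z = 1, so Eb = Z^2 = 1) generates noise whose variance follows directly from the chosen Eb/No:

```python
import numpy as np

rng = np.random.default_rng(0)

def awgn_channel(bits, ebno_db):
    """Map bits to antipodal symbols +Z/-Z (Z = 1) and add white Gaussian noise.

    With Eb = Z^2 = 1, the one-sided noise density is No = Eb / (Eb/No),
    and the per-sample noise variance is No/2.
    """
    symbols = 2.0 * bits - 1.0              # 0 -> -1, 1 -> +1
    ebno = 10 ** (ebno_db / 10)
    noise_var = 1.0 / (2.0 * ebno)          # sigma^2 = No/2 with Eb = 1
    return symbols + rng.normal(0.0, np.sqrt(noise_var), size=bits.shape)

bits = rng.integers(0, 2, size=100_000)
received = awgn_channel(bits, ebno_db=4.0)
hard = (received > 0).astype(int)           # hard decisions, no coding
print("uncoded BER:", np.mean(hard != bits))  # roughly 1.25e-2 at 4 dB
```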
FEC codes come in two primary types: convolutional and block. In a simple convolutional encoder, a sequence
of information bits passes through a shift register, and two output bits are generated per information bit; the
two output bits are then transmitted. Essentially, the decoder estimates the state of the encoder for each set of
two channel symbols it receives. If the decoder accurately knows the encoder's state sequence, then it also
knows the original information sequence.
Using Constraint Length
The encoder in Figure 2 has a constraint length of K = 3 and a memory of K - 1 = 2. For this code, the optimal
decoder, otherwise known as the Viterbi decoder, has 2^(K-1), or four, states. A number of values are computed
for each state. As K increases, so does the performance of the code, but at a diminishing rate. The complexity
of the decoder, however, increases exponentially with K. Popular convolutional codes in use today employ
K = 7 or K = 9.
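The following sketch implements a textbook rate-1/2, K = 3 convolutional encoder. It assumes the common generator polynomials 111 and 101 (7 and 5 in octal); Figure 2 is not reproduced here, so this is a representative choice rather than necessarily the same circuit:

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder with constraint length K = 3.

    Two shift-register taps (g1, g2) each produce one output bit per
    input bit, so every information bit yields two channel bits.
    """
    state = 0                                     # K - 1 = 2 bits of memory
    out = []
    for b in bits:
        reg = (b << 2) | state                    # current bit plus two previous bits
        out.append(bin(reg & g1).count("1") % 2)  # parity under generator g1
        out.append(bin(reg & g2).count("1") % 2)  # parity under generator g2
        state = reg >> 1                          # shift: drop the oldest bit
    return out

print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 0, 1]
```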
Block codes are a little more straightforward. A block code takes k information bits and generates one or more
"parity" bits. These parity bits are appended to the information bits, resulting in a group of n bits, where
n > k. The encoded pattern of n bits is referred to as a code word, and this code word is transmitted in its
entirety. As an example, consider a simple (n,k) = (8,4) block-code encoder: the receiver obtains n channel
metrics, and the decoder estimates the most likely of the 2^k possible information sequences from these
estimates. Block codes are usually rich in algebraic structure that can be used to facilitate decoding. Perhaps
the most popular block codes presently implemented are Reed-Solomon codes. Often, these are a shortened
version of an n = 2040-bit (255-byte) code. Until very recently, the most powerful codes were built from the
concatenation of a convolutional code and a Reed-Solomon code.
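As a concrete illustration of the parity-bit idea, here is a sketch using the extended (8,4) Hamming code, one well-known code with these parameters (the encoder the text alludes to may differ):

```python
import numpy as np

# Generator matrix for the extended (8,4) Hamming code: 4 information
# bits are followed by 4 parity bits, giving code words of length n = 8.
G = np.array([
    [1, 0, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 1, 0],
])

def block_encode(info_bits):
    """Encode k = 4 information bits into an n = 8 code word (parity appended)."""
    return np.array(info_bits) @ G % 2

print(block_encode([1, 0, 1, 1]))  # -> [1 0 1 1 0 1 0 0]
```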
The latest advance in FEC is a class of codes called Turbo Codes. These are very powerful codes built from
two or more smaller, simpler constituent codes. The constituent codes can be either systematic convolutional
or block codes. The idea behind Turbo Codes is to encode the data once via encoder 1, scramble the order of
the resulting bits in a way known to the receiver, and then encode these bits with a second encoder, encoder 2.
The twice-encoded bits are then transmitted.
At the receiver, decoding proceeds in the reverse order of the encoding process. The first decoding is as per
encoder 2 and the second decoding is as per encoder 1. Between the two decodings, the scrambling operation
is reversed.
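The encode-scramble-encode structure described above can be sketched as follows (hypothetical helper names; encoder1 and encoder2 stand in for the constituent encoders, and the shared seed plays the role of the permutation "known to the receiver"; real Turbo Codes add iterative soft-decision decoding, omitted here):

```python
import random

def interleave(bits, seed=2024):
    """Permute bit order; the receiver reverses this with the same seed."""
    order = list(range(len(bits)))
    random.Random(seed).shuffle(order)
    return [bits[i] for i in order]

def deinterleave(bits, seed=2024):
    """Invert the permutation applied by interleave()."""
    order = list(range(len(bits)))
    random.Random(seed).shuffle(order)
    out = [0] * len(bits)
    for dst, src in enumerate(order):
        out[src] = bits[dst]
    return out

def turbo_style_encode(info_bits, encoder1, encoder2):
    """Encode, scramble the encoder-1 output in a way the receiver knows,
    then encode again; the twice-encoded bits are transmitted."""
    return encoder2(interleave(encoder1(info_bits)))

# Stand-in constituent encoder (single parity bit) just to exercise the structure:
parity = lambda bits: bits + [sum(bits) % 2]
print(turbo_style_encode([1, 0, 1, 1], parity, parity))
```

At the receiver, the same structure runs backwards: decode as per encoder 2, apply deinterleave(), then decode as per encoder 1, exactly mirroring the order described above.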
