
Linear code

In coding theory, a linear code is an error-correcting code for which any linear combination of codewords is also a codeword. Linear codes are traditionally partitioned into block codes and convolutional codes, although turbo codes can be seen as a hybrid of these two types.[1] Linear codes allow for more efficient encoding and decoding algorithms than other codes (cf. syndrome decoding).

Linear codes are used in forward error correction and are applied in methods for transmitting symbols (e.g., bits) on a communications channel so that, if errors occur in the communication, some errors can be corrected or detected by the recipient of a message block. The codewords in a linear block code are blocks of symbols that are encoded using more symbols than the original value to be sent.[2] A linear code of length n transmits blocks containing n symbols. For example, the [7,4,3] Hamming code is a linear binary code which represents 4-bit messages using 7-bit codewords. Two distinct codewords differ in at least three bits. As a consequence, up to two errors per codeword can be detected while a single error can be corrected.[3] This code contains 2^4 = 16 codewords.

Definition and parameters


A linear code of length n and rank k is a linear subspace C with dimension k of the vector space F_q^n, where F_q is the finite field with q elements. Such a code is called a q-ary code. If q = 2 or q = 3, the code is described as a binary code or a ternary code, respectively. The vectors in C are called codewords. The size of a code is the number of codewords and equals q^k.

The weight of a codeword is the number of its elements that are nonzero, and the distance between two codewords is the Hamming distance between them, that is, the number of elements in which they differ. The distance d of a linear code is the minimum weight of its nonzero codewords, or equivalently, the minimum distance between distinct codewords. A linear code of length n, dimension k, and distance d is called an [n,k,d] code.
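As a concrete illustration of these parameters, the sketch below enumerates the codewords of a small binary [4,2] code (the generator matrix is a hypothetical example, not one from the article) and checks that the code has q^k codewords and is closed under linear combinations:

```python
# A minimal sketch of the definition over GF(2): the codewords of a
# binary [n, k] linear code are all 2^k linear combinations of the rows
# of a generator matrix G (hypothetical example matrix).
from itertools import product

G = [
    [1, 0, 1, 1],  # basis codeword 1
    [0, 1, 0, 1],  # basis codeword 2
]
n, k = len(G[0]), len(G)

def encode(msg):
    """Encode a k-bit message as msg * G over GF(2)."""
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2
                 for col in zip(*G))

codewords = {encode(msg) for msg in product([0, 1], repeat=k)}

# The code has exactly q^k = 2^2 = 4 codewords ...
assert len(codewords) == 2 ** k
# ... and is closed under addition: any sum of codewords is a codeword.
assert all(tuple((a + b) % 2 for a, b in zip(c1, c2)) in codewords
           for c1 in codewords for c2 in codewords)
print(sorted(codewords))
```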

We want to give F_q^n the standard basis because each coordinate represents a "bit" that is transmitted across a "noisy channel" with some small probability of transmission error (a binary symmetric channel). If some other basis is used, then this model cannot be used and the Hamming metric does not measure the number of errors in transmission, as we want it to.

Generator and check matrices

As a linear subspace of F_q^n, the entire code C (which may be very large) may be represented as the span of a minimal set of codewords (known as a basis in linear algebra). These basis codewords are often collated in the rows of a matrix G known as a generating matrix for the code C. When G has the block matrix form G = [I_k | P], where I_k denotes the k × k identity matrix and P is some k × (n − k) matrix, then we say G is in standard form.

A matrix H representing a linear function F_q^n → F_q^(n−k) whose kernel is C is called a check matrix of C (or sometimes a parity check matrix). Equivalently, H is a matrix whose null space is C. If C is a code with a generating matrix G in standard form, G = [I_k | P], then H = [−P^T | I_(n−k)] is a check matrix for C. The code generated by H is called the dual code of C. It can be verified that G is a k × n matrix, while H is an (n − k) × n matrix.
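The standard-form relationship can be sketched in code. The matrix P below is one conventional choice for the [7,4,3] Hamming code, assumed here for illustration:

```python
# Sketch of the standard-form relationship: if G = [I_k | P], then
# H = [-P^T | I_{n-k}] is a check matrix (over GF(2), -P^T = P^T).
import numpy as np

P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
k, r = P.shape                                    # r = n - k
G = np.hstack([np.eye(k, dtype=int), P])          # 4 x 7 generator matrix
H = np.hstack([P.T, np.eye(r, dtype=int)])        # 3 x 7 check matrix

# Every row of G lies in the null space of H: G @ H^T = 0 over GF(2).
assert not (G @ H.T % 2).any()
print(G.shape, H.shape)   # (4, 7) (3, 7)
```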

Linearity guarantees that the minimum Hamming distance d between a codeword c0 and any of the other codewords c ≠ c0 is independent of c0. This follows from the property that the difference c − c0 of two codewords in C is also a codeword (i.e., an element of the subspace C), and the property that d(c, c0) = d(c − c0, 0). These properties imply that

min over c in C, c ≠ c0 of d(c, c0) = min over c in C, c ≠ 0 of d(c, 0) = min over c in C, c ≠ 0 of w(c) = d.

In other words, in order to find out the minimum distance between the codewords of a linear code, one would only need to look at the non-zero codewords. The non-zero codeword with the smallest weight then has the minimum distance to the zero codeword, and hence determines the minimum distance of the code.
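This shortcut can be checked directly by brute force; the two basis rows below are hypothetical:

```python
# For a linear code, the minimum distance equals the minimum Hamming
# weight over nonzero codewords, so one pass over the codewords
# suffices instead of comparing all pairs.
from itertools import combinations, product

# Codewords of a small binary code (span of two hypothetical basis rows).
G = [(1, 0, 1, 1, 0), (0, 1, 0, 1, 1)]
codewords = [tuple(sum(m * g for m, g in zip(msg, col)) % 2
                   for col in zip(*G))
             for msg in product([0, 1], repeat=len(G))]

# Minimum weight over nonzero codewords ...
min_weight = min(sum(c) for c in codewords if any(c))
# ... equals the minimum pairwise Hamming distance.
min_distance = min(sum(a != b for a, b in zip(c1, c2))
                   for c1, c2 in combinations(codewords, 2))
assert min_weight == min_distance
print(min_weight)
```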

The distance d of a linear code C also equals the minimum number of linearly dependent columns of the check matrix H.

Proof: Because H · c^T = 0, which is equivalent to the sum over i of c_i · H_i being 0, where H_i is the i-th column of H. Remove those terms with c_i = 0; then those H_i with c_i ≠ 0 are linearly dependent. Therefore, d is at least the minimum number of linearly dependent columns. On the other hand, consider the minimum set of linearly dependent columns {H_j : j ∈ S}, where S is the column index set; then the sum over j in S of c_j · H_j is 0 for some nonzero coefficients c_j. Now consider the vector c such that c_j = 0 if j ∉ S. Note c ∈ C because H · c^T = 0. Therefore, we have d ≤ wt(c), which is the minimum number of linearly dependent columns in H. The claimed property is therefore proved.
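The column criterion can be verified by brute force for the [7,4,3] Hamming code, whose check-matrix columns are the seven nonzero vectors of GF(2)^3:

```python
# Illustrating the column criterion: for the [7,4,3] Hamming code,
# d = 3 should equal the minimum number of linearly dependent columns
# of its check matrix H.
from itertools import combinations

# Columns of H: all nonzero vectors of GF(2)^3.
cols = [(0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0),
        (1, 0, 1), (1, 1, 0), (1, 1, 1)]

def is_dependent(vectors):
    # Over GF(2), a set of vectors is linearly dependent iff some
    # nonempty subset of it sums to the zero vector.
    for r in range(1, len(vectors) + 1):
        for sub in combinations(vectors, r):
            if all(sum(v) % 2 == 0 for v in zip(*sub)):
                return True
    return False

min_dep = min(r for r in range(1, len(cols) + 1)
              if any(is_dependent(s) for s in combinations(cols, r)))
assert min_dep == 3   # matches d = 3 for the Hamming code
print(min_dep)
```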

Example: Hamming codes

As the first class of linear codes developed for error correction purposes, Hamming codes have been widely used in digital communication systems. For any positive integer r ≥ 2, there exists a [2^r − 1, 2^r − r − 1, 3] Hamming code. Since d = 3, this Hamming code can correct a 1-bit error.

Example: The linear block code with the following generator matrix and parity check matrix (one standard-form choice) is a [7,4,3] Hamming code:

G = [1 0 0 0 1 1 0]
    [0 1 0 0 1 0 1]
    [0 0 1 0 0 1 1]
    [0 0 0 1 1 1 1]

H = [1 1 0 1 1 0 0]
    [1 0 1 1 0 1 0]
    [0 1 1 1 0 0 1]
Example: Hadamard codes

Hadamard code is a [2^r, r, 2^(r−1)] linear code and is capable of correcting many errors. Hadamard code can be constructed column by column: the i-th column is the bits of the binary representation of the integer i, as shown in the following example. Hadamard code has minimum distance 2^(r−1) and therefore can correct 2^(r−2) − 1 errors.

Example: The linear block code with the following generator matrix is an [8,3,4] Hadamard code:

G = [0 0 0 0 1 1 1 1]
    [0 0 1 1 0 0 1 1]
    [0 1 0 1 0 1 0 1]

Hadamard code is a special case of Reed–Muller code. If we take the first column (the all-zero column) out of the Hadamard generator matrix, we get a [7,3,4] simplex code, which is the dual code of the [7,4,3] Hamming code.
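The column-by-column construction described above can be sketched and its minimum distance checked for r = 3:

```python
# Hadamard [2^r, r, 2^{r-1}] code: column i of the generator matrix is
# the r-bit binary representation of i.
from itertools import product

r = 3
G = [[(i >> (r - 1 - b)) & 1 for i in range(2 ** r)] for b in range(r)]
# Row b of G holds bit b of 0..7; the columns run 000, 001, ..., 111.

codewords = [tuple(sum(m * g for m, g in zip(msg, col)) % 2
                   for col in zip(*G))
             for msg in product([0, 1], repeat=r)]

# Every nonzero codeword has weight 2^{r-1}, so d = 4 for r = 3.
min_weight = min(sum(c) for c in codewords if any(c))
assert min_weight == 2 ** (r - 1)
print(min_weight)
```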

Nearest neighbor algorithm

The parameter d is closely related to the error correcting ability of the code. The following construction/algorithm illustrates this (called the nearest neighbor decoding algorithm):

Input: A "received vector" v in F_q^n.

Output: A codeword w in C closest to v, if any.

Starting with t = 0, repeat the following two steps.

Enumerate the elements of the ball of (Hamming) radius t around the received word v, denoted B_t(v). For each w in B_t(v), check if w is in C. If so, return w as the solution.

Increment t. Fail when t > (d − 1)/2, so that enumeration is complete and no solution has been found.

Note: "fail" is not returned unless t > (d − 1)/2. We say that a linear code C is t-error correcting if there is at most one codeword in B_t(v), for each v in F_q^n.
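The algorithm above can be sketched directly; the [3,1,3] repetition code used to exercise it is chosen here for illustration:

```python
# Nearest-neighbor decoding as described: grow the Hamming ball around
# v until it contains a codeword, failing once t exceeds (d - 1) / 2.
from itertools import combinations

def nearest_neighbor(v, codewords, d):
    n = len(v)
    for t in range((d - 1) // 2 + 1):
        # Sphere of radius t around v: words differing in exactly
        # t places (the ball B_t(v) is the union of spheres 0..t).
        for flips in combinations(range(n), t):
            w = list(v)
            for i in flips:
                w[i] ^= 1
            if tuple(w) in codewords:
                return tuple(w)
    return None   # fail: no codeword within radius (d - 1) / 2

# Codewords of the binary [3, 1, 3] repetition code.
C = {(0, 0, 0), (1, 1, 1)}
assert nearest_neighbor((0, 1, 0), C, 3) == (0, 0, 0)
assert nearest_neighbor((1, 1, 0), C, 3) == (1, 1, 1)
```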

Popular notation
Codes in general are often denoted by
the letter C, and a code of length n and
of rank k (i.e., having k code words in
its basis and k rows in its generating
matrix) is generally referred to as an
(n, k) code. Linear block codes are
frequently denoted as [n, k, d] codes,
where d refers to the code's minimum
Hamming distance between any two
code words.

(The [n, k, d] notation should not be confused with the (n, M, d) notation used to denote a non-linear code of length n, size M (i.e., having M code words), and minimum Hamming distance d.)

Singleton bound

Lemma (Singleton bound): Every linear [n,k,d] code C satisfies k + d ≤ n + 1.

A code C whose parameters satisfy k + d = n + 1 is called maximum distance separable or MDS. Such codes, when they exist, are in some sense best possible.
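The bound is easy to check on the codes mentioned in this article; among them, only the repetition code (parameters assumed here as an example) meets it with equality:

```python
# Checking the Singleton bound k + d <= n + 1 on small binary codes;
# the [n, 1, n] repetition code meets it with equality (MDS).
examples = [
    (7, 4, 3),   # Hamming code
    (5, 1, 5),   # binary repetition code of length 5
    (8, 3, 4),   # Hadamard code
]
for n, k, d in examples:
    assert k + d <= n + 1
    print((n, k, d), "MDS" if k + d == n + 1 else "not MDS")
```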

If C1 and C2 are two codes of length n and if there is a permutation p in the symmetric group Sn for which (c1,...,cn) in C1 if and only if (cp(1),...,cp(n)) in C2, then we say C1 and C2 are permutation equivalent. In more generality, if there is an n × n monomial matrix M : F_q^n → F_q^n which sends C1 isomorphically to C2, then we say C1 and C2 are equivalent.

Lemma: Any linear code is permutation equivalent to a code which is in standard form.

Examples
Some examples of linear codes
include:

Repetition codes
Parity codes
Cyclic codes
Hamming codes
Golay code, both the binary and
ternary versions
Polynomial codes, of which BCH
codes are an example
Reed–Solomon codes
Reed–Muller codes
Goppa codes
Low-density parity-check codes
Expander codes
Multidimensional parity-check codes
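Two of the simplest entries in this list can be sketched over GF(2): the length-3 repetition code (corrects one error) and a single-parity-check code (detects one error but corrects none):

```python
# Repetition and parity codes over GF(2), two of the simplest linear
# codes: repetition repeats one message bit n times; a parity code
# appends one check bit making the total number of ones even.
def repetition_encode(bit, n=3):
    return (bit,) * n

def parity_encode(bits):
    return bits + (sum(bits) % 2,)

assert repetition_encode(1) == (1, 1, 1)
assert parity_encode((1, 0)) == (1, 0, 1)

# A single flipped bit violates even parity and is therefore detected.
received = (1, 1, 1)          # parity_encode((1, 0)) with bit 1 flipped
assert sum(received) % 2 == 1
```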

Generalization
Hamming spaces over non-field alphabets have also been considered, especially over finite rings (most notably over Z4), giving rise to modules instead of vector spaces and ring-linear codes (identified with submodules) instead of linear codes. The typical metric used in this case is the Lee distance. There exists a Gray isometry between Z_2^{2m} (i.e. GF(2)^{2m}) with the Hamming distance and Z_4^m (also denoted as GR(4,m)) with the Lee distance; its main attraction is that it establishes a correspondence between some "good" codes that are not linear over Z_2 as images of ring-linear codes from Z_4.[4][5][6]
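The Gray map underlying this isometry sends each Z_4 symbol to two bits (0→00, 1→01, 2→11, 3→10) and carries the Lee distance to the Hamming distance; a sketch of the one-symbol map, checked exhaustively for m = 2:

```python
# The Gray map Z_4 -> Z_2^2 (0->00, 1->01, 2->11, 3->10), which carries
# the Lee distance on Z_4^m to the Hamming distance on Z_2^{2m}.
from itertools import product

GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def lee_distance(u, v):
    # Lee distance between symbols a, b in Z_4 is min((a-b)%4, (b-a)%4).
    return sum(min((a - b) % 4, (b - a) % 4) for a, b in zip(u, v))

def hamming_distance(u, v):
    return sum(a != b for a, b in zip(u, v))

def gray(word):
    return tuple(bit for x in word for bit in GRAY[x])

# The map is an isometry: check exhaustively on Z_4^2.
for u in product(range(4), repeat=2):
    for v in product(range(4), repeat=2):
        assert lee_distance(u, v) == hamming_distance(gray(u), gray(v))
print("Gray map is a Lee -> Hamming isometry on Z_4^2")
```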

More recently, some authors have referred to such codes over rings simply as linear codes as well.[7]

See also
Decoding methods

References
1. William E. Ryan and Shu Lin (2009). Channel Codes: Classical and Modern. Cambridge University Press. p. 4. ISBN 978-0-521-84868-8.
2. MacKay, David J.C. (2003). Information Theory, Inference, and Learning Algorithms (PDF). Cambridge University Press. p. 9. ISBN 9780521642989. "In a linear block code, the extra N − K bits are linear functions of the original K bits; these extra bits are called parity-check bits."
3. Thomas M. Cover and Joy A. Thomas (1991). Elements of Information Theory. John Wiley & Sons, Inc. pp. 210–211. ISBN 0-471-06259-6.
4. Marcus Greferath (2009). "An Introduction to Ring-Linear Coding Theory". In Massimiliano Sala, Teo Mora, Ludovic Perret, Shojiro Sakata, Carlo Traverso. Gröbner Bases, Coding, and Cryptography. Springer Science & Business Media. ISBN 978-3-540-93806-4.
5. http://www.encyclopediaofmath.org/index.php/Kerdock_and_Preparata_codes
6. J.H. van Lint (1999). Introduction to Coding Theory (3rd ed.). Springer. Chapter 8: Codes over Z4. ISBN 978-3-540-64133-9.
7. S.T. Dougherty, J.-L. Kim, P. Solé (2015). "Open Problems in Coding Theory". In Steven Dougherty, Alberto Facchini, André Gérard Leroy, Edmund Puczylowski, Patrick Solé. Noncommutative Rings and Their Applications. American Mathematical Soc. p. 80. ISBN 978-1-4704-1032-2.

J.F. Humphreys; M.Y. Prest (2004). Numbers, Groups and Codes (2nd ed.). Cambridge University Press. ISBN 978-0-511-19420-7. Chapter 5 contains a more gentle introduction (than this article) to the subject of linear codes.

External links
q-ary code generator program
Code Tables: Bounds on the parameters of various types of codes, IAKS, Fakultät für Informatik, Universität Karlsruhe (TH). Online, up-to-date table of the optimal binary codes; includes non-binary codes.
The database of Z4 codes. Online, up-to-date database of optimal Z4 codes.
Retrieved from
"https://en.wikipedia.org/w/index.php?
title=Linear_code&oldid=871316981"

Content is available under CC BY-SA 3.0 unless otherwise noted.
