
Network Coding, Algebraic Coding, and Network Error Correction

Raymond W. Yeung
Network Coding Research Centre, The Chinese University of Hong Kong, N.T., Hong Kong. Email: [email protected]

Ning Cai
Department of Information Engineering, The Chinese University of Hong Kong, N.T., Hong Kong. Email: [email protected]

Abstract: This paper discusses the relation between network coding, (classical) algebraic coding, and network error correction. In the first part, we clarify the relation between network coding and algebraic coding. By showing that the Singleton bound in algebraic coding theory is a special case of the Max-flow Min-cut bound in network coding theory, we formally establish that linear multicast and its stronger versions are network generalizations of a maximum distance separation (MDS) code. In the second part, we first give an overview of network error correction, a paradigm for error correction on networks which can be regarded as an extension of classical point-to-point error correction. Then, by means of an example, we show that an upper bound in terms of classical error-correcting codes is not tight even for a simple class of networks called regular networks. This illustrates the complexity involved in the construction of network error-correcting codes.

I. INTRODUCTION

The concept of network coding was introduced for satellite communication networks in [2] and fully developed in [3], in the latter of which the term network coding was coined and the advantage of network coding over routing was demonstrated. The main result in [3], namely a characterization of the maximum rate at which information generated at a single source node can be multicast, can be regarded as the Max-flow Min-cut theorem for network information flow. An algorithm for constructing linear network codes that achieve the Max-flow Min-cut bound was devised in [5]. Subsequently, a more transparent proof for the existence of such linear network codes was given in [6]. For further references on the subject, we refer the reader to the Network Coding Homepage [10] and the tutorial [7].

Inspired by network coding, network error correction has been introduced in [4] as a paradigm for error correction on networks which can be regarded as an extension of classical point-to-point error correction. Specifically, the results in [4], [8], [9] are network generalizations of the fundamental bounds in classical algebraic coding theory. In this paper, we discuss the relation between network coding, algebraic coding, and network error correction. The rest of the paper is organized as follows. In Section II, we first establish that a linear network code achieving the Max-flow Min-cut bound is a network generalization of a maximum distance separation (MDS) code in classical algebraic coding [1]. This clarifies the relation between network coding and classical algebraic coding. In Section III, upon giving an overview of network error correction, we illustrate the complexity involved in the construction of network error-correcting codes by means of an example. Concluding remarks are in Section IV.

II. THE SINGLETON BOUND AND MDS CODES

Consider the network in Fig. 1. In this network, there are three layers of nodes. The top layer consists of the source node s, the middle layer consists of n nodes each connected to node s, and the bottom layer consists of one node for each subset of r nodes on the middle layer, with each bottom-layer node connected to the r nodes in its subset. We call this network an (n, r) combination network, or simply an (n, r) network, where r ≤ n. Assume that a message consisting of k information symbols taken from a finite field F is generated at the source node s, and each channel can transmit one symbol in F in the specified direction. A linear network code on a given network is qualified as a linear multicast [7] if, for every non-source node t in the network, node t can decode the source message whenever

    maxflow(t) ≥ k.    (1)

Note that by the Max-flow Min-cut theorem, (1) is a necessary condition for any node t to be able to decode the source message. In [7], linear broadcast, linear dispersion, and generic linear network code are also defined as linear network codes possessing stronger properties than linear multicast. These stronger linear network codes are useful for various applications.

Consider a classical (n, k) linear block code with minimum distance d and regard it as a linear network code on the (n, n-d+1) network. Specifically, the code takes the source message as input and outputs n symbols, each being transmitted on one of the n outgoing channels of node s. For each node on the middle layer, since there is only one input channel, we assume without loss of generality that the symbol received is replicated and transmitted on each outgoing channel. Since the (n, k) code has minimum distance d, by accessing a subset of n-d+1 of the n nodes on the middle layer (corresponding to d-1 erasures), each node on the bottom layer can decode the source message. From the foregoing, by the Max-flow Min-cut theorem,

    k ≤ maxflow(t)    (2)

for every node t on the bottom layer. Since maxflow(t) = n-d+1 for such a node t, it follows that

    k ≤ n-d+1,    (3)

or d ≤ n-k+1, which is precisely the Singleton bound for classical linear block codes [1]. Thus the Singleton bound is a special case of the Max-flow Min-cut theorem. Moreover, by (2), the non-source nodes in the network with maximum flow at least equal to k are simply all the nodes on the bottom layer, and each of them can decode the source message. Hence, we conclude that an (n, k) classical linear block code with minimum distance d is a k-dimensional linear multicast on the (n, n-d+1) network. More generally, an (n, k) classical linear block code with minimum distance d is a k-dimensional linear multicast on the (n, r) network for all r ≥ n-d+1. The proof is straightforward (we have already shown it for r = n-d+1). On the other hand, it is readily seen that a k-dimensional linear multicast on the (n, r) network, where r ≤ n, is an (n, k) classical linear block code with minimum distance d satisfying d ≥ n-r+1. A classical linear block code achieving tightness in the Singleton bound is called a maximum distance separation (MDS) code [1]. From the foregoing, the Singleton bound is a special case of the Max-flow Min-cut theorem. Since a linear multicast, broadcast, or dispersion achieves tightness in the Max-flow Min-cut theorem to different extents, they can all be regarded as network generalizations of an MDS code. The existence of MDS codes corresponds, in the more general paradigm of network coding, to the existence of linear multicasts and their stronger versions. This has been discussed in great detail in [7].
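
The correspondence just described can be checked numerically. The following sketch is not from the paper: the prime field size p = 11, the parameters n = 7 and k = 3, and the use of a Reed-Solomon (polynomial-evaluation) code as the MDS code are assumptions made only for illustration. It encodes k source symbols into n symbols with minimum distance d = n - k + 1 and verifies that every bottom-layer node of the (n, k) combination network, which sees just k = n - d + 1 of the n symbols, still decodes the message, which is the tightness that the Singleton bound expresses.

# Illustrative sketch only: a Reed-Solomon code over GF(p), p prime, standing in
# for the MDS code of Section II on the (n, k) combination network.
from itertools import combinations

p = 11          # prime field size (assumption for this toy example)
n, k = 7, 3     # code length and dimension, with n <= p so evaluation points are distinct

def poly_mul(a, b):
    """Multiply two polynomials (coefficient lists, lowest degree first) over GF(p)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def rs_encode(msg):
    """Evaluate the message polynomial msg[0] + msg[1]*x + ... at the points 1, ..., n."""
    return [sum(c * pow(x, i, p) for i, c in enumerate(msg)) % p for x in range(1, n + 1)]

def rs_decode(points, values):
    """Recover the k message coefficients from any k (point, value) pairs
    by Lagrange interpolation over GF(p)."""
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(zip(points, values)):
        num, denom = [1], 1
        for j, xj in enumerate(points):
            if j != i:
                num = poly_mul(num, [(-xj) % p, 1])   # multiply by (x - xj)
                denom = (denom * (xi - xj)) % p
        scale = (yi * pow(denom, p - 2, p)) % p        # divide by denom via Fermat inverse
        for t in range(k):
            coeffs[t] = (coeffs[t] + scale * num[t]) % p
    return coeffs

msg = [4, 7, 2]                # the k source symbols generated at node s
codeword = rs_encode(msg)      # one symbol relayed through each middle-layer node
# Each bottom-layer node of the (n, k) combination network accesses a distinct
# k-subset of the middle layer; every such node recovers the message.
for subset in combinations(range(n), k):
    assert rs_decode([i + 1 for i in subset], [codeword[i] for i in subset]) == msg
print("every bottom-layer node decodes the", k, "source symbols from", k, "received symbols")

Removing one more symbol would make the interpolation under-determined, which is the operational content of inequality (3).
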
Fig. 1. An (n, r) combination network.

III. NETWORK ERROR CORRECTION

Inspired by network coding, network error-correcting codes have been introduced in [4] for multicasting a source message to a set of nodes on a network when the communication channels are not error-free. The usual approach in existing networks, namely link-by-link error correction, is a special case of network error correction.

form a classical t-error-correcting code with alphabet F, and consequently

    |X| ≤ A_q(δ, t);

ii) in the case that the code is linear,

    |X| ≤ A_q^L(δ, t),

Let us consider binary codes for this network, i.e., the encoding alphabet is F = {0, 1}.
Definition 1: A network code on G is t-error-correcting if it can correct all error patterns with at most t errors, i.e., if the total number of errors in the network is at most t, then the source message can be recovered by all the sink nodes in U.

Since G is acyclic, it naturally defines a partial order on the channel set E. Two channels are said to be incompatible if neither precedes the other under this partial order. A set of channels is called an antichain if its channels are pairwise incompatible.

Definition 2: For a partition {V1, V2} of the node set V, the set of channels from V1 to V2 is a regular cut if its members form an antichain, i.e., if e and e' are two channels across the cut, then there exists no path either from e to e' or from e' to e.

Definition 3: An acyclic network is regular if, for every sink node u, maxflow(s, u) is equal to the minimum volume of a regular cut between s and u.
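
As a concrete illustration of the antichain condition in Definition 2, the sketch below tests whether a given set of channels in a toy DAG is an antichain. The five-channel network, the channel names, and the convention that a channel e precedes a channel f when the head of e can reach the tail of f are assumptions of mine for this example, not taken from the paper.

# Illustrative sketch only: the DAG and channel names below are hypothetical.
channels = {                      # channel id -> (tail node, head node)
    'e1': ('s', 'a'), 'e2': ('s', 'b'),
    'e3': ('a', 'u'), 'e4': ('b', 'u'), 'e5': ('a', 'b'),
}

def reachable(frm, to):
    """True if there is a directed path (possibly of length 0) from frm to to."""
    stack, seen = [frm], set()
    while stack:
        v = stack.pop()
        if v == to:
            return True
        if v not in seen:
            seen.add(v)
            stack.extend(head for tail, head in channels.values() if tail == v)
    return False

def precedes(e, f):
    """Channel e precedes channel f if a path through e can continue through f."""
    return reachable(channels[e][1], channels[f][0])

def is_antichain(cut):
    """Pairwise incompatible channels, i.e. the property a regular cut must have."""
    return all(not precedes(e, f) and not precedes(f, e)
               for e in cut for f in cut if e != f)

print(is_antichain({'e3', 'e4'}))   # True: neither channel can feed the other
print(is_antichain({'e1', 'e4'}))   # False: e1 reaches e4 through node a and channel e5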

where δ denotes the volume of the cut, and A_q(δ, t) and A_q^L(δ, t) are the size of an optimal classical q-ary t-error-correcting code of length δ and the size of an optimal classical linear q-ary t-error-correcting code of length δ, respectively. The upper bound on |X| rendered in the above theorem is in terms of bounds defined for classical error-correcting codes. Since the errors occurring at the channels across any cut in a regular network do not interfere with each other (because the set of channels forms an antichain), one may conjecture that this upper bound on |X| is generally tight for regular networks. The following example, however, shows the contrary.
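
For concreteness, the two classical facts invoked in Example 1 below can be recovered by brute force. The sketch is my own illustration rather than anything from [8]: it computes the size of an optimal binary t-error-correcting code of length n by exhaustive search, confirming that a binary 1-error-correcting (3,1) code exists while no (2,1) code can correct a single error.

# Illustrative sketch only: exhaustive computation of A_2(n, t), the largest
# binary code of length n that corrects t errors (minimum distance >= 2t + 1).
from itertools import combinations, product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def A2(n, t):
    words = list(product((0, 1), repeat=n))
    best = 1
    for size in range(2, 2 ** n + 1):
        found = any(all(hamming(a, b) >= 2 * t + 1 for a, b in combinations(code, 2))
                    for code in combinations(words, size))
        if not found:
            break
        best = size
    return best

print(A2(3, 1))   # 2: the (3,1) repetition code {000, 111} is optimal
print(A2(2, 1))   # 1: no binary (2,1) code can correct a single error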

Example 1: Consider the network in Fig. 2, which has source node s and sink nodes u1 and u2; its node set and channel set are as shown in the figure.

Network generalizations of the Hamming bound, the Singleton bound, and the Gilbert-Varshamov bound in classical algebraic coding have been obtained. In particular, the tightness of the Singleton bound in the network setting is preserved, meaning that linear network codes are asymptotically optimal. We refer the reader to [8], [9] for the details. In this section, we discuss an upper bound obtained in [8] which is given in terms of bounds defined for classical error-correcting codes. By means of an example, we will show that this bound is not tight even for a simple class of networks called regular networks. This illustrates the complexity involved in the construction of network error-correcting codes.

Let us first describe the setup of network error correction. An acyclic communication network is represented by a directed acyclic graph G = (V, E), where V is the node set and E is the channel set, in which multiple channels between a pair of nodes are allowed. On each channel, one symbol from a certain code alphabet F can be transmitted in the specified direction. A message taken from a source alphabet X is generated at the source node s, which is to be multicast to a set U of sink nodes. A network code on G is defined in the usual way (see for example [8]); for a network code, each channel carries a symbol that is determined by the message and the encoding functions of the code.
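
The setup can be made operational with a toy evaluation routine. Everything specific below is an assumption for illustration only: the four-channel diamond network, the channel names, and the particular encoding functions. The sketch simply shows how a network code assigns a symbol to every channel in topological order and how an error pattern perturbs those symbols, i.e., the objects that Definition 1 quantifies over.

# Illustrative sketch only: hypothetical diamond network, not the paper's example.
channels = {                      # channel id -> (tail node, head node)
    'e1': ('s', 'a'), 'e2': ('s', 'b'),
    'e3': ('a', 'u'), 'e4': ('b', 'u'),
}
topo = ['e1', 'e2', 'e3', 'e4']   # any order consistent with the DAG

def transmit(message, errors, encode):
    """Return the symbol carried by every channel when `message` is sent and the
    binary `errors` pattern (channel -> 0/1) is added modulo 2 on those channels."""
    carried = {}
    for e in topo:
        tail = channels[e][0]
        # inputs to the encoding function: symbols on channels entering the tail node
        incoming = [carried[f] for f in topo if f in carried and channels[f][1] == tail]
        sym = encode[e](message, incoming)
        carried[e] = (sym + errors.get(e, 0)) % 2
    return carried

# a trivial code: the source repeats the 1-bit message, relays forward their input
encode = {
    'e1': lambda m, ins: m, 'e2': lambda m, ins: m,
    'e3': lambda m, ins: ins[0], 'e4': lambda m, ins: ins[0],
}
print(transmit(1, {}, encode))           # error-free transmission
print(transmit(1, {'e2': 1}, encode))    # a single error on channel e2; sink u sees e3, e4

Checking Definition 1 for a given code then amounts to enumerating all messages and all error patterns with at most t errors and asking whether every sink can still identify the message from the channels it observes.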

For a t-error-correcting network code on a given network G, we are naturally interested in the maximum possible value of |X|, the size of the source alphabet. The following theorem renders an upper bound on |X|.

Theorem 1: [8] Let C be a t-error-correcting code for an acyclic network G with source alphabet X and code alphabet F, and let q = |F|. i) If {V1, V2} is a regular cut between the source node s and a sink node u, then the set of all possible vectors transmitted across the cut

Fig. 2. The network for Example 1, with sink nodes u1 and u2.
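
The style of argument used in Example 1, ruling out every possible choice of encoding functions, can be mimicked by exhaustive search on a network small enough to enumerate. The diamond network below is a hypothetical stand-in and not the network of Fig. 2; the search confirms the fact quoted in the text that a minimum cut of 2 is already fatal: no binary network code on such a network can transmit 1 bit and correct 1 error, mirroring the nonexistence of a classical (2,1) single-error-correcting code.

# Illustrative brute-force sketch only (hypothetical diamond network s->a->u, s->b->u).
from itertools import product

CHANNELS = ['e1', 'e2', 'e3', 'e4']      # s->a, s->b, a->u, b->u
FUNCS = list(product((0, 1), repeat=2))  # every map {0,1} -> {0,1}, stored as (f(0), f(1))

def received(message, code, errors):
    """Pair of symbols seen by the sink u; `errors` lists the channels that are flipped."""
    f1, f2, f3, f4 = code
    x1 = f1[message] ^ ('e1' in errors)
    x2 = f2[message] ^ ('e2' in errors)
    x3 = f3[x1] ^ ('e3' in errors)
    x4 = f4[x2] ^ ('e4' in errors)
    return (x3, x4)

def corrects_one_error(code):
    """1-error-correcting iff the receive sets for the two messages are disjoint."""
    patterns = [frozenset()] + [frozenset([e]) for e in CHANNELS]
    sets = [{received(m, code, p) for p in patterns} for m in (0, 1)]
    return not (sets[0] & sets[1])

# prints False: no choice of the four encoding functions corrects a single error
print(any(corrects_one_error(code) for code in product(FUNCS, repeat=4)))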

Now again consider the cut in (6). With (5), (6), and (8), it can readily be verified that the vectors transmitted across this cut when the source message is 0 and when it is 1 differ in at most two components. (9)

since by symmetry one can exchange the roles of 0 and 1 componentwise. We observe that, for a particular network code, a channel can be removed if its encoding function can take only one value, because such a channel does not convey any information. For the network in Fig. 2, if the encoding function of any channel can take only one value, then by removing that channel from the network, we will find a sink node such that the minimum cut between the source node and this sink node is reduced to 2. This contradicts Theorem 1 because of the nonexistence of a (2,1) code that can correct 1 error. This means that the encoding functions of all the channels must take two values. In particular, the encoding function of a channel whose input node

It is easy to verify that, under the other choice, the outputs of the channels across the cut in (7) would be the same if the source message is 0 and an error occurs at one channel, or if the source message is 1 and an error occurs at another channel. Thus we must have (8)

otherwise the outputs of the channels across the cut in (6) would again coincide, so that the sink node cannot distinguish the source messages 0 and 1. We now consider the cut in (7).

The network code under consideration is written out in (4), where each component denotes the encoding function of the code for the corresponding channel, and the code multicasts a message from the binary source alphabet {0, 1}. Assume that the network code in (4) is 1-error-correcting. We will show that this leads to a contradiction. Without loss of generality, we let the encoding functions take the values specified in (5).

the outputs of the channels across this cut take a particular value if the source message is 1 and an error occurs at the channel considered above. Next, consider the case that the source message is 0 and an error occurs at another channel. Then we must have (6)

the encoding alphabet is given by F = {0, 1}. It is easy to verify for this network that the condition in Definition 3 holds for every sink node, so it is regular. In light of the existence of a classical binary 1-error-correcting (3,1) code, if the bounds in Theorem 1 are tight, then there would exist a binary 1-error-correcting network code

Let us consider the case that the source message is 1 and an error occurs at one of the channels. It is easy to see that the outputs of two particular channels are 0 and 1, respectively, so that, by (5), a third channel outputs a 0. Then across the cut

has in-degree one must be a bijection, so we may assume without loss of generality that it is the identity function. Let us consider the encoding function whose first and second arguments are the outputs of two particular channels. We will show that there is no way to choose this function such that the code is able to correct 1 error. First, without loss of generality, fix one of the two possible values of this function.

By Theorem 1, the set of all possible vectors transmitted across the cut
is a classical 1-error-correcting code, so that its minimum distance is at least 3, a contradiction to (9). Therefore, the assumption that the code in (4) is 1-error-correcting is incorrect, and we conclude that there exists no binary 1-error-correcting network code that can transmit 1 bit on this network. This in turn shows that the upper bound in Theorem 1 is not tight.

IV. CONCLUDING REMARKS

We have clarified the relation between network coding and algebraic coding. We have also given an overview of network error correction, a paradigm for error correction on networks and an extension of classical point-to-point error correction, and discussed the complexity involved in the construction of network error-correcting codes.

ACKNOWLEDGMENT

The work of Raymond W. Yeung was partially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (RGC Ref. No. CUHK4214/03E).

REFERENCES

[1] R. C. Singleton, Maximum distance q-nary codes, IEEE Trans. Inform. Theory, IT-10: 116-118, 1964.
[2] R. W. Yeung and Z. Zhang, Distributed source coding for satellite communications, IEEE Trans. Inform. Theory, IT-45: 1111-1120, 1999.
[3] R. Ahlswede, N. Cai, S.-Y. R. Li, and R. W. Yeung, Network information flow, IEEE Trans. Inform. Theory, IT-46: 1204-1216, 2000.
[4] N. Cai and R. W. Yeung, Network coding and error correction, IEEE Information Theory Workshop, Bangalore, India, Oct 20-25, 2002.
[5] S.-Y. R. Li, R. W. Yeung, and N. Cai, Linear network coding, IEEE Trans. Inform. Theory, IT-49: 371-381, 2003.
[6] R. Koetter and M. Médard, An algebraic approach to network coding, IEEE/ACM Transactions on Networking, vol. 11, 782-795, 2003.
[7] R. W. Yeung, S.-Y. R. Li, N. Cai, and Z. Zhang, Theory of network coding, to appear in Foundations and Trends in Communications and Information Theory.
[8] R. W. Yeung and N. Cai, Network error correction, Part I: Basic concepts and upper bounds, submitted to Communications in Information and Systems, http://www.ims.cuhk.edu.hk/cis
[9] N. Cai and R. W. Yeung, Network error correction, Part II: Lower bounds, submitted to Communications in Information and Systems, http://www.ims.cuhk.edu.hk/cis
[10] Network Coding Homepage, http://www.networkcoding.info
