CHAPTER 6 Binary BCH Codes

The Bose, Chaudhuri, and Hocquenghem (BCH) codes form a large class of powerful random error-correcting cyclic codes. This class of codes is a remarkable generalization of the Hamming codes for multiple-error correction. Binary BCH codes were discovered by Hocquenghem in 1959 [1] and independently by Bose and Chaudhuri in 1960 [2]. The cyclic structure of these codes was proved by Peterson in 1960 [3]. Binary BCH codes were generalized to codes in p^m symbols (where p is a prime) by Gorenstein and Zierler in 1961 [4]. Among the nonbinary BCH codes, the most important subclass is the class of Reed-Solomon (RS) codes. The RS codes were discovered by Reed and Solomon in 1960 [5] independently of the work by Hocquenghem, Bose, and Chaudhuri. The first decoding algorithm for binary BCH codes was devised by Peterson in 1960 [3]. Then, Peterson's algorithm was generalized and refined by Gorenstein and Zierler [4], Chien [6], Forney [7], Berlekamp [8, 9], Massey [10, 11], Burton [12], and others. Among all the decoding algorithms for BCH codes, Berlekamp's iterative algorithm and Chien's search algorithm are the most efficient ones. In this chapter we consider primarily a subclass of the binary BCH codes that is the most important subclass from the standpoint of both theory and implementation. Nonbinary BCH codes and Reed-Solomon codes will be discussed in Chapter 7. For a detailed description of the BCH codes, their algebraic properties, and their decoding algorithms, the reader is referred to [9] and [13-17].

6.1 BINARY PRIMITIVE BCH CODES

For any positive integers m (m ≥ 3) and t (t < 2^{m-1}), there exists a binary BCH code with the following parameters:

Block length: n = 2^m − 1,
Number of parity-check digits: n − k ≤ mt,
Minimum distance: d_min ≥ 2t + 1.

Clearly, this code is capable of correcting any combination of t or fewer errors in a block of n = 2^m − 1 digits. We call this code a t-error-correcting BCH code.
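As a quick sanity check on these parameters, the following sketch (illustrative Python, not part of the text) computes n, the lower bound n − mt on k, and the designed distance 2t + 1 for two codes that appear later in the chapter:

```python
# Parameters of a t-error-correcting binary primitive BCH code of length 2^m - 1.
# Returns n, the lower bound n - mt on k, and the designed distance 2t + 1;
# for small t the bound on k is met with equality (Section 6.1).
def bch_parameters(m, t):
    n = 2**m - 1
    return n, n - m * t, 2 * t + 1

print(bch_parameters(4, 2))   # (15, 7, 5)  -> the (15, 7) double-error-correcting code
print(bch_parameters(6, 3))   # (63, 45, 7) -> the (63, 45) triple-error-correcting code
```

Both codes turn out to meet the n − mt bound on k exactly, as noted above for small t.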
The generator polynomial of this code is specified in terms of its roots from the Galois field GF(2^m). Let α be a primitive element in GF(2^m). The generator polynomial g(X) of the t-error-correcting BCH code of length 2^m − 1 is the lowest-degree polynomial over GF(2) that has

α, α^2, α^3, …, α^{2t}    (6.1)

as its roots [i.e., g(α^i) = 0 for 1 ≤ i ≤ 2t]. It follows from Theorem 2.11 that g(X) has α, α^2, …, α^{2t} and their conjugates as all its roots. Let φ_i(X) be the minimal polynomial of α^i. Then, g(X) must be the least common multiple (LCM) of φ_1(X), φ_2(X), …, φ_{2t}(X); that is,

g(X) = LCM{φ_1(X), φ_2(X), …, φ_{2t}(X)}.    (6.2)

If i is an even integer, it can be expressed as a product of the following form: i = i′2^l, where i′ is an odd number and l ≥ 1. Then, α^i = (α^{i′})^{2^l} is a conjugate of α^{i′}, and therefore α^i and α^{i′} have the same minimal polynomial; that is, φ_i(X) = φ_{i′}(X). Hence, every even power of α in the sequence of (6.1) has the same minimal polynomial as some preceding odd power of α in the sequence. As a result, the generator polynomial g(X) of the binary t-error-correcting BCH code of length 2^m − 1 given by (6.2) can be reduced to

g(X) = LCM{φ_1(X), φ_3(X), …, φ_{2t−1}(X)}.    (6.3)

Because the degree of each minimal polynomial is m or less, the degree of g(X) is at most mt; that is, the number of parity-check digits, n − k, of the code is at most equal to mt. There is no simple formula for enumerating n − k, but if t is small, n − k is exactly equal to mt [9, 18]. The parameters for all binary BCH codes of length 2^m − 1 with m ≤ 10 are given in Table 6.1. The BCH codes just defined are usually called primitive (or narrow-sense) BCH codes.

TABLE 6.1: BCH codes generated by primitive elements of order less than 2^10.
n     k     t          n     k     t          n     k     t
7     4     1          63    36    5          127   78    7
15    11    1          63    30    6          127   71    9
15    7     2          63    24    7          127   64    10
15    5     3          63    18    10         127   57    11
31    26    1          63    16    11         127   50    13
31    21    2          63    10    13         127   43    14
31    16    3          63    7     15         127   36    15
31    11    5          127   120   1          127   29    21
31    6     7          127   113   2          127   22    23
63    57    1          127   106   3          127   15    27
63    51    2          127   99    4          127   8     31
63    45    3          127   92    5
63    39    4          127   85    6
(continued for n = 255, 511, and 1023)

From (6.3), we see that the single-error-correcting BCH code of length 2^m − 1 is generated by

g(X) = φ_1(X).

Because α is a primitive element of GF(2^m), φ_1(X) is a primitive polynomial of degree m.
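Because the odd-indexed minimal polynomials in (6.3) are distinct and irreducible, their LCM is simply their product, which can be checked with carry-less polynomial multiplication over GF(2). The sketch below (illustrative Python, not part of the text) multiplies φ_1(X) = 1 + X + X^4 and φ_3(X) = 1 + X + X^2 + X^3 + X^4 from Table 2.9, reproducing the generator of the (15, 7) double-error-correcting code derived in Example 6.1:

```python
def polymul_gf2(a, b):
    # Carry-less product of GF(2) polynomials encoded as integers (bit i = coeff of X^i).
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

phi1 = 0b10011          # 1 + X + X^4
phi3 = 0b11111          # 1 + X + X^2 + X^3 + X^4
g = polymul_gf2(phi1, phi3)
print(bin(g))           # 0b111010001 -> 1 + X^4 + X^6 + X^7 + X^8
```

The result has degree 8, so n − k = 8 and the code is the (15, 7) code, with the generator of weight 5 matching d_min = 5.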
Therefore, the single-error-correcting BCH code of length 2^m − 1 is a Hamming code.

EXAMPLE 6.1

Let α be a primitive element of the Galois field GF(2^4) given by Table 2.8 such that 1 + α + α^4 = 0. From Table 2.9 we find that the minimal polynomials of α, α^3, and α^5 are

φ_1(X) = 1 + X + X^4,
φ_3(X) = 1 + X + X^2 + X^3 + X^4, and
φ_5(X) = 1 + X + X^2,

respectively. It follows from (6.3) that the double-error-correcting BCH code of length n = 2^4 − 1 = 15 is generated by

g(X) = LCM{φ_1(X), φ_3(X)}.

Because φ_1(X) and φ_3(X) are two distinct irreducible polynomials,

g(X) = φ_1(X)φ_3(X)
     = (1 + X + X^4)(1 + X + X^2 + X^3 + X^4)
     = 1 + X^4 + X^6 + X^7 + X^8.

Thus, the code is a (15, 7) cyclic code with d_min ≥ 5. Since the generator polynomial is a code polynomial of weight 5, the minimum distance of this code is exactly 5.

The triple-error-correcting BCH code of length 15 is generated by

g(X) = LCM{φ_1(X), φ_3(X), φ_5(X)}
     = (1 + X + X^4)(1 + X + X^2 + X^3 + X^4)(1 + X + X^2)
     = 1 + X + X^2 + X^4 + X^5 + X^8 + X^10.

This triple-error-correcting BCH code is a (15, 5) cyclic code with d_min ≥ 7. Because the weight of the generator polynomial is 7, the minimum distance of this code is exactly 7.

Using the primitive polynomial p(X) = 1 + X + X^6, we may construct the Galois field GF(2^6), as shown in Table 6.2. The minimal polynomials of the elements

TABLE 6.2: Galois field GF(2^6) with p(α) = 1 + α + α^6 = 0.

0     = 0               (0 0 0 0 0 0)
1     = 1               (1 0 0 0 0 0)
α     = α               (0 1 0 0 0 0)
α^2   = α^2             (0 0 1 0 0 0)
α^3   = α^3             (0 0 0 1 0 0)
α^4   = α^4             (0 0 0 0 1 0)
α^5   = α^5             (0 0 0 0 0 1)
α^6   = 1 + α           (1 1 0 0 0 0)
α^7   = α + α^2         (0 1 1 0 0 0)
α^8   = α^2 + α^3       (0 0 1 1 0 0)
α^9   = α^3 + α^4       (0 0 0 1 1 0)
α^10  = α^4 + α^5       (0 0 0 0 1 1)
α^11  = 1 + α + α^5     (1 1 0 0 0 1)
α^12  = 1 + α^2         (1 0 1 0 0 0)
α^13  = α + α^3         (0 1 0 1 0 0)
α^14  = α^2 + α^4       (0 0 1 0 1 0)
α^15  = α^3 + α^5       (0 0 0 1 0 1)
α^16  = 1 + α + α^4     (1 1 0 0 1 0)
α^17  = α + α^2 + α^5   (0 1 1 0 0 1)
⋮
TABLE 6.2: (continued)

⋮
α^61  = 1 + α^4 + α^5   (1 0 0 0 1 1)
α^62  = 1 + α^5         (1 0 0 0 0 1)

in GF(2^6) are listed in Table 6.3. Using (6.3), we find the generator polynomials of all the BCH codes of length 63, as shown in Table 6.4. The generator polynomials of all binary primitive BCH codes of length 2^m − 1 with m ≤ 10 are given in Appendix C.

TABLE 6.3: Minimal polynomials of the elements in GF(2^6).

Conjugate roots                              Minimal polynomial
α, α^2, α^4, α^8, α^16, α^32                 1 + X + X^6
α^3, α^6, α^12, α^24, α^48, α^33             1 + X + X^2 + X^4 + X^6
α^5, α^10, α^20, α^40, α^17, α^34            1 + X + X^2 + X^5 + X^6
α^7, α^14, α^28, α^56, α^49, α^35            1 + X^3 + X^6
α^9, α^18, α^36                              1 + X^2 + X^3
α^11, α^22, α^44, α^25, α^50, α^37           1 + X^2 + X^3 + X^5 + X^6
α^13, α^26, α^52, α^41, α^19, α^38           1 + X + X^3 + X^4 + X^6
α^15, α^30, α^60, α^57, α^51, α^39           1 + X^2 + X^4 + X^5 + X^6
α^21, α^42                                   1 + X + X^2
α^23, α^46, α^29, α^58, α^53, α^43           1 + X + X^4 + X^5 + X^6
α^27, α^54, α^45                             1 + X + X^3
α^31, α^62, α^61, α^59, α^55, α^47           1 + X^5 + X^6

TABLE 6.4: Generator polynomials of all the BCH codes of length 63.

n    k    t     g(X)
63   57   1     g_1(X) = 1 + X + X^6
63   51   2     g_2(X) = g_1(X)(1 + X + X^2 + X^4 + X^6)
63   45   3     g_3(X) = (1 + X + X^2 + X^5 + X^6)g_2(X)
63   39   4     g_4(X) = (1 + X^3 + X^6)g_3(X)
63   36   5     g_5(X) = (1 + X^2 + X^3)g_4(X)
63   30   6     g_6(X) = (1 + X^2 + X^3 + X^5 + X^6)g_5(X)
63   24   7     g_7(X) = (1 + X + X^3 + X^4 + X^6)g_6(X)
63   18   10    g_10(X) = (1 + X^2 + X^4 + X^5 + X^6)g_7(X)
63   16   11    g_11(X) = (1 + X + X^2)g_10(X)
63   10   13    g_13(X) = (1 + X + X^4 + X^5 + X^6)g_11(X)
63   7    15    g_15(X) = (1 + X + X^3)g_13(X)
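The groupings in Table 6.3 are the cyclotomic cosets of 2 modulo 63: the exponents of the conjugates α^i, α^{2i}, α^{4i}, …, which all share one minimal polynomial. A small sketch (illustrative Python, not part of the text) reproduces these classes:

```python
# Cyclotomic cosets of 2 mod n group the powers of alpha that share a minimal
# polynomial (the conjugacy classes listed in Table 6.3 for n = 63).
def cosets(n=63):
    seen, out = set(), []
    for s in range(1, n):
        if s in seen:
            continue
        c, x = [], s
        while x not in c:            # follow i -> 2i mod n until the cycle closes
            c.append(x)
            seen.add(x)
            x = (2 * x) % n
        out.append(c)
    return out

for c in cosets():
    print(c)
# e.g. [1, 2, 4, 8, 16, 32], [3, 6, 12, 24, 48, 33], [5, 10, 20, 40, 17, 34], ...
```

Twelve cosets come out, one per row of Table 6.3; their sizes (6, 6, 6, 6, 3, 6, 6, 6, 2, 6, 3, 6) are the degrees of the corresponding minimal polynomials.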
It follows from the definition of a t-error-correcting BCH code of length n = 2^m − 1 that each code polynomial has α, α^2, …, α^{2t} and their conjugates as roots. Now, let v(X) = v_0 + v_1X + … + v_{n−1}X^{n−1} be a polynomial with coefficients from GF(2). If v(X) has α, α^2, …, α^{2t} as roots, it follows from Theorem 2.14 that v(X) is divisible by the minimal polynomials φ_1(X), φ_2(X), …, φ_{2t}(X) of α, α^2, …, α^{2t}. Obviously, v(X) is divisible by their least common multiple (the generator polynomial),

g(X) = LCM{φ_1(X), φ_2(X), …, φ_{2t}(X)}.

Hence, v(X) is a code polynomial. Consequently, we may define a t-error-correcting BCH code of length n = 2^m − 1 in the following manner: a binary n-tuple v = (v_0, v_1, v_2, …, v_{n−1}) is a codeword if and only if the polynomial v(X) = v_0 + v_1X + … + v_{n−1}X^{n−1} has α, α^2, …, α^{2t} as roots. This definition is useful in proving the minimum distance of the code.

Let v(X) = v_0 + v_1X + … + v_{n−1}X^{n−1} be a code polynomial in a t-error-correcting BCH code of length n = 2^m − 1. Because α^i is a root of v(X) for 1 ≤ i ≤ 2t, … the powers α^{j_1}, α^{j_2}, …, α^{j_ν} tell us the error locations in e(X), as in (6.16). In general, the equations of (6.17) have many possible solutions (2^k of them). Each solution yields a different error pattern. If the number of errors in the actual error pattern e(X) is t or fewer (i.e., ν ≤ t), the solution that yields an error pattern with the smallest number of errors is the right solution; that is, the error pattern corresponding to this solution is the most probable error pattern e(X) caused by the channel noise. For large t, solving the equations of (6.17) directly is difficult and ineffective. In the following, we describe an effective procedure for determining α^{j_l} for l = 1, 2, …, ν from the syndrome components S_i. For convenience, let

β_l = α^{j_l}    (6.18)

for 1 ≤ l ≤ ν. We call these elements the error-location numbers, since they tell us the locations of the errors.
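For a concrete feel for these quantities, the sketch below (illustrative Python, using the GF(2^4) field of Table 2.8, not the text's circuitry) takes the error pattern e(X) = X^3 + X^5 + X^12 that reappears in Example 6.5 and evaluates S_i = e(α^i) for i = 1, …, 6 directly as power sums of the error-location numbers α^3, α^5, α^12:

```python
from functools import reduce

# GF(2^4) built from p(X) = X^4 + X + 1 (Table 2.8); elements are 4-bit ints.
EXP = [0] * 30
x = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = x
    x <<= 1
    if x & 0x10:
        x ^= 0x13          # reduce modulo X^4 + X + 1

err_pos = (3, 5, 12)       # e(X) = X^3 + X^5 + X^12
# S_i = sum of (alpha^j)^i over the error positions j
S = [reduce(lambda a, b: a ^ b, (EXP[(i * j) % 15] for j in err_pos))
     for i in range(1, 7)]
print(S)   # [1, 1, 7, 1, 7, 6]; 7 = (1 1 1 0) = alpha^10, 6 = (0 1 1 0) = alpha^5
```

These are exactly the syndrome values S_1 = S_2 = S_4 = 1, S_3 = S_5 = α^10, S_6 = α^5 computed in Example 6.5.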
Now, we can express the equations of (6.17) in the following form:

S_1 = β_1 + β_2 + … + β_ν
S_2 = β_1^2 + β_2^2 + … + β_ν^2        (6.19)
⋮
S_{2t} = β_1^{2t} + β_2^{2t} + … + β_ν^{2t}

These 2t equations are symmetric functions in β_1, β_2, …, β_ν, which are known as power-sum symmetric functions. Now, we define the following polynomial:

σ(X) = (1 + β_1X)(1 + β_2X) … (1 + β_νX)
     = σ_0 + σ_1X + σ_2X^2 + … + σ_νX^ν.        (6.20)

The roots of σ(X) are β_1^{−1}, β_2^{−1}, …, β_ν^{−1}, which are the inverses of the error-location numbers. For this reason, σ(X) is called the error-location polynomial. Note that σ(X) is an unknown polynomial whose coefficients must be determined. The coefficients of σ(X) and the error-location numbers are related by the following equations:

σ_0 = 1
σ_1 = β_1 + β_2 + … + β_ν
σ_2 = β_1β_2 + β_1β_3 + … + β_{ν−1}β_ν        (6.21)
⋮
σ_ν = β_1β_2 … β_ν

The σ_i's are known as the elementary symmetric functions of the β_l's. From (6.19) and (6.21), we see that the σ_i's are related to the syndrome components S_i's. In fact, they are related to the syndrome components by the following Newton's identities:

S_1 + σ_1 = 0
S_2 + σ_1S_1 + 2σ_2 = 0
S_3 + σ_1S_2 + σ_2S_1 + 3σ_3 = 0        (6.22)
⋮
S_ν + σ_1S_{ν−1} + … + σ_{ν−1}S_1 + νσ_ν = 0
S_{ν+1} + σ_1S_ν + … + σ_{ν−1}S_2 + σ_νS_1 = 0
⋮

For the binary case, since 1 + 1 = 2 = 0, we have

iσ_i = σ_i for odd i,
iσ_i = 0 for even i.

If it is possible to determine the elementary symmetric functions σ_1, σ_2, …, σ_ν from the equations of (6.22), the error-location numbers β_1, β_2, …, β_ν can be found by determining the roots of the error-location polynomial σ(X). Again, the equations of (6.22) may have many solutions; however, we want to find the solution that yields a σ(X) of minimal degree. This σ(X) will produce an error pattern with a minimum number of errors. If ν ≤ t, this σ(X) will give the actual error pattern e(X).
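Equations (6.20) and (6.21) can be checked numerically. The sketch below (illustrative Python over the GF(2^4) field of Table 2.8, not part of the text) expands σ(X) = (1 + β_1X)(1 + β_2X)(1 + β_3X) for the error-location numbers β = α^3, α^5, α^12 that appear later in Example 6.5; the coefficients that come out are exactly the elementary symmetric functions:

```python
# GF(2^4) from p(X) = X^4 + X + 1 (Table 2.8); elements are 4-bit integers.
EXP, LOG = [0] * 30, {}
x = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0x13          # reduce modulo X^4 + X + 1

def mul(a, b):             # multiply two field elements
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def polymul(p, q):         # polynomials over GF(2^4) as coefficient lists
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= mul(a, b)
    return r

# sigma(X) = (1 + b1 X)(1 + b2 X)(1 + b3 X) with b = alpha^3, alpha^5, alpha^12
sigma = [1]
for b in (EXP[3], EXP[5], EXP[12]):
    sigma = polymul(sigma, [1, b])
print(sigma)   # [1, 1, 0, 6] -> sigma(X) = 1 + X + alpha^5 X^3  (6 = alpha^5)
```

Note that σ_1 = β_1 + β_2 + β_3 = 1 = S_1, as the first Newton's identity of (6.22) requires.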
Next, we describe a procedure to determine the polynomial σ(X) of minimum degree that satisfies the first 2t equations of (6.22) (since we know only S_1 through S_{2t}).

At this point, we outline the error-correcting procedure for BCH codes. The procedure consists of three major steps:

1. Compute the syndrome S = (S_1, S_2, …, S_{2t}) from the received polynomial r(X).
2. Determine the error-location polynomial σ(X) from the syndrome components S_1, S_2, …, S_{2t}.
3. Determine the error-location numbers β_1, β_2, …, β_ν by finding the roots of σ(X), and correct the errors in r(X).

The first decoding algorithm that carries out these three steps was devised by Peterson [3]. Steps 1 and 3 are quite simple; step 2 is the most complicated part of decoding a BCH code.

6.3 ITERATIVE ALGORITHM FOR FINDING THE ERROR-LOCATION POLYNOMIAL σ(X)

Here we present Berlekamp's iterative algorithm for finding the error-location polynomial. We describe only the algorithm, without giving any proof. The reader who is interested in details of this algorithm is referred to Berlekamp [9], Peterson and Weldon [13], and MacWilliams and Sloane [14].

The first step of iteration is to find a minimum-degree polynomial σ^{(1)}(X) whose coefficients satisfy the first Newton's identity of (6.22). The next step is to test whether the coefficients of σ^{(1)}(X) also satisfy the second Newton's identity of (6.22). If the coefficients of σ^{(1)}(X) do satisfy the second Newton's identity of (6.22), we set

σ^{(2)}(X) = σ^{(1)}(X).

If the coefficients of σ^{(1)}(X) do not satisfy the second Newton's identity of (6.22), we add a correction term to σ^{(1)}(X) to form σ^{(2)}(X) such that σ^{(2)}(X) has minimum degree and its coefficients satisfy the first two Newton's identities of (6.22). Therefore, at the end of the second step of iteration, we obtain a minimum-degree polynomial σ^{(2)}(X) whose coefficients satisfy the first two Newton's identities of (6.22).
The third step of iteration is to find a minimum-degree polynomial σ^{(3)}(X) from σ^{(2)}(X) such that the coefficients of σ^{(3)}(X) satisfy the first three Newton's identities of (6.22). Again, we test whether the coefficients of σ^{(2)}(X) satisfy the third Newton's identity of (6.22). If they do, we set σ^{(3)}(X) = σ^{(2)}(X). If they do not, we add a correction term to σ^{(2)}(X) to form σ^{(3)}(X). Iteration continues until we obtain σ^{(2t)}(X). Then, σ^{(2t)}(X) is taken to be the error-location polynomial σ(X); that is,

σ(X) = σ^{(2t)}(X).

This σ(X) will yield an error pattern e(X) of minimum weight that satisfies the equations of (6.17). If the number of errors in the received polynomial r(X) is t or less, then σ(X) produces the true error pattern. Let

σ^{(μ)}(X) = 1 + σ_1^{(μ)}X + σ_2^{(μ)}X^2 + … + σ_{l_μ}^{(μ)}X^{l_μ}    (6.23)

be the minimum-degree polynomial determined at the μth step of iteration whose coefficients satisfy the first μ Newton's identities of (6.22). To determine σ^{(μ+1)}(X), we compute the following quantity:

d_μ = S_{μ+1} + σ_1^{(μ)}S_μ + σ_2^{(μ)}S_{μ−1} + … + σ_{l_μ}^{(μ)}S_{μ+1−l_μ}.    (6.24)

This quantity d_μ is called the μth discrepancy. If d_μ = 0, the coefficients of σ^{(μ)}(X) satisfy the (μ+1)th Newton's identity. In this event, we set

σ^{(μ+1)}(X) = σ^{(μ)}(X).

If d_μ ≠ 0, the coefficients of σ^{(μ)}(X) do not satisfy the (μ+1)th Newton's identity, and we must add a correction term to σ^{(μ)}(X) to obtain σ^{(μ+1)}(X). To make this correction, we go back to the steps prior to the μth step and determine a polynomial σ^{(ρ)}(X) such that the ρth discrepancy d_ρ ≠ 0 and ρ − l_ρ [l_ρ is the degree of σ^{(ρ)}(X)] has the largest value. Then,

σ^{(μ+1)}(X) = σ^{(μ)}(X) + d_μd_ρ^{−1}X^{μ−ρ}σ^{(ρ)}(X),    (6.25)

which is the minimum-degree polynomial whose coefficients satisfy the first μ + 1 Newton's identities. The proof of this is quite complicated and is omitted from this introductory book.

To carry out the iteration of finding σ(X), we begin with Table 6.5 and proceed to fill out the table, where l_μ is the degree of σ^{(μ)}(X).
Assuming that we have filled out all rows up to and including the μth row, we fill out the (μ+1)th row as follows:

1. If d_μ = 0, then σ^{(μ+1)}(X) = σ^{(μ)}(X), and l_{μ+1} = l_μ.
2. If d_μ ≠ 0, we find another row ρ prior to the μth row such that d_ρ ≠ 0 and the number ρ − l_ρ in the last column of the table has the largest value. Then, σ^{(μ+1)}(X) is given by (6.25), and

l_{μ+1} = max(l_μ, l_ρ + μ − ρ).    (6.26)

In either case,

d_{μ+1} = S_{μ+2} + σ_1^{(μ+1)}S_{μ+1} + … + σ_{l_{μ+1}}^{(μ+1)}S_{μ+2−l_{μ+1}},    (6.27)

where the σ_i^{(μ+1)}'s are the coefficients of σ^{(μ+1)}(X). The polynomial σ^{(2t)}(X) in the last row should be the required σ(X). If its degree is greater than t, there are more than t errors in the received polynomial r(X), and generally it is not possible to locate them.

TABLE 6.5: Berlekamp's iterative procedure for finding the error-location polynomial of a BCH code.

μ     σ^{(μ)}(X)     d_μ     l_μ     μ − l_μ
−1    1              1       0       −1
0     1              S_1     0       0
1
2
⋮
2t

EXAMPLE 6.5

Consider the (15, 5) triple-error-correcting BCH code given in Example 6.1. Assume that the codeword of all zeros,

v = (0 0 0 0 0 0 0 0 0 0 0 0 0 0 0),

is transmitted, and the vector

r = (0 0 0 1 0 1 0 0 0 0 0 0 1 0 0)

is received. Then, r(X) = X^3 + X^5 + X^12. The minimal polynomials for α, α^2, and α^4 are identical, and

φ_1(X) = φ_2(X) = φ_4(X) = 1 + X + X^4.

The elements α^3 and α^6 have the same minimal polynomial,

φ_3(X) = φ_6(X) = 1 + X + X^2 + X^3 + X^4.

The minimal polynomial for α^5 is

φ_5(X) = 1 + X + X^2.

Dividing r(X) by φ_1(X), φ_3(X), and φ_5(X), respectively, we obtain the following remainders:

b_1(X) = 1,
b_3(X) = 1 + X^2 + X^3,    (6.28)
b_5(X) = X^2.

Using Table 2.8 and substituting α, α^2, and α^4 into b_1(X), we obtain the following syndrome components:

S_1 = S_2 = S_4 = 1.

Substituting α^3 and α^6 into b_3(X), we obtain

S_3 = 1 + α^6 + α^9 = α^10,
S_6 = 1 + α^12 + α^18 = α^5.    (6.29)

Substituting α^5 into b_5(X), we have

S_5 = α^10.

Using the iterative procedure described previously, we obtain Table 6.6. Thus, the error-location polynomial is

σ(X) = σ^{(6)}(X) = 1 + X + α^5X^3.
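The filling rules above translate directly into code. The following sketch (illustrative Python, not part of the text) runs the iteration of Table 6.5 over the GF(2^4) field of Table 2.8 on the syndromes of Example 6.5 and reproduces σ(X) = 1 + X + α^5X^3:

```python
# GF(2^4) exp/log tables from p(X) = X^4 + X + 1 (Table 2.8)
EXP, LOG = [0] * 30, {}
x = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0x13

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def inv(a):
    return EXP[(15 - LOG[a]) % 15]

def berlekamp(S, t):
    # Rows of Table 6.5: (mu, sigma coefficients, d_mu, l_mu); S holds S_1..S_2t.
    rows = [(-1, [1], 1, 0), (0, [1], S[0], 0)]
    for mu in range(2 * t):
        _, sig, d, l = rows[-1]
        if d == 0:
            new = list(sig)
        else:
            # Pick the earlier row rho with d_rho != 0 and rho - l_rho largest (6.25).
            rho, sig_r, d_r, l_r = max(
                (r for r in rows[:-1] if r[2] != 0), key=lambda r: r[0] - r[3])
            new = list(sig) + [0] * max(0, l_r + mu - rho - l)
            scale = mul(d, inv(d_r))
            for j, c in enumerate(sig_r):
                new[j + mu - rho] ^= mul(scale, c)
        l_new = len(new) - 1
        d_new = 0                         # discrepancy d_{mu+1} of (6.27)
        if mu + 1 < 2 * t:
            for j in range(l_new + 1):
                d_new ^= mul(new[j], S[mu + 1 - j])
        rows.append((mu + 1, new, d_new, l_new))
    return rows[-1][1]

S = [1, 1, EXP[10], 1, EXP[10], EXP[5]]   # S_1..S_6 of Example 6.5
print(berlekamp(S, 3))                    # [1, 1, 0, 6]: sigma = 1 + X + alpha^5 X^3
```

The intermediate rows this code produces match Table 6.6 step for step, including the corrections taken from rows ρ = −1, 0, and 2.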
TABLE 6.6: Steps for finding the error-location polynomial of the (15, 5) BCH code given in Example 6.5.

μ     σ^{(μ)}(X)           d_μ     l_μ     μ − l_μ
−1    1                    1       0       −1
0     1                    1       0       0
1     1 + X                0       1       0       (take ρ = −1)
2     1 + X                α^5     1       1
3     1 + X + α^5X^2       0       2       1       (take ρ = 0)
4     1 + X + α^5X^2       α^10    2       2
5     1 + X + α^5X^3       0       3       2       (take ρ = 2)
6     1 + X + α^5X^3       —       —       —

We can easily check that α^3, α^10, and α^12 are the roots of σ(X). Their inverses are α^12, α^5, and α^3, which are the error-location numbers. Therefore, the error pattern is

e(X) = X^3 + X^5 + X^12.

Adding e(X) to the received polynomial r(X), we obtain the all-zero vector.

If the number of errors in the received polynomial r(X) is less than the designed error-correcting capability t of the code, it is not necessary to carry out the 2t steps of iteration to find the error-location polynomial σ(X). Let σ^{(μ)}(X) and d_μ be the solution and discrepancy obtained at the μth step of iteration. Let l_μ be the degree of σ^{(μ)}(X). Chen [19] has shown that if d_μ and the discrepancies at the next t − l_μ − 1 steps are all zero, σ^{(μ)}(X) is the error-location polynomial. Therefore, if the number of errors in the received polynomial r(X) is ν (ν ≤ t), only t + ν steps of iteration are needed to determine the error-location polynomial σ(X). If ν is small (this is often the case), the reduction in the number of iteration steps results in an increase in decoding speed. The described iterative algorithm for finding σ(X) applies not only to binary BCH codes but also to nonbinary BCH codes.

6.4 SIMPLIFIED ITERATIVE ALGORITHM FOR FINDING THE ERROR-LOCATION POLYNOMIAL σ(X)

The described iterative algorithm for finding σ(X) applies to both binary and nonbinary BCH codes, including Reed-Solomon codes; however, for binary BCH codes, this algorithm can be simplified to t steps for computing σ(X). Recall that for a polynomial f(X) over GF(2), f^2(X) = f(X^2) [see (2.10)].
Because the received polynomial r(X) is a polynomial over GF(2), we have

r^2(X) = r(X^2).

Substituting α^i for X in the preceding equality, we obtain

[r(α^i)]^2 = r(α^{2i}).    (6.30)

Because S_i = r(α^i) and S_{2i} = r(α^{2i}) [see (6.13)], we obtain the following relationship between S_{2i} and S_i:

S_{2i} = S_i^2.    (6.31)

Suppose the first Newton's identity of (6.22) holds. Then,

S_1 + σ_1 = 0.

This result says that

σ_1 = S_1.    (6.32)

It follows from (6.31) and (6.32) that

S_2 + σ_1S_1 + 2σ_2 = S_1^2 + S_1·S_1 + 0 = 0.    (6.33)

The foregoing equality is simply Newton's second identity. This result says that if the first Newton's identity holds, then the second Newton's identity also holds. Now, suppose the first and the third Newton's identities of (6.22) hold; that is,

S_1 + σ_1 = 0,    (6.34)
S_3 + σ_1S_2 + σ_2S_1 + 3σ_3 = 0.    (6.35)

The equality of (6.34) implies that the second Newton's identity holds:

S_2 + σ_1S_1 + 2σ_2 = 0.    (6.36)

Then,

(S_2 + σ_1S_1 + 2σ_2)^2 = S_2^2 + σ_1^2S_1^2 = 0.    (6.37)

It follows from (6.31) that (6.37) becomes

S_4 + σ_1^2S_2 = 0.    (6.38)

Multiplying both sides of (6.35) by σ_1, we obtain

σ_1S_3 + σ_1^2S_2 + σ_1σ_2S_1 + 3σ_1σ_3 = 0.    (6.39)

Adding (6.38) and (6.39) and using the equalities of (6.31) and (6.32), we find that the fourth Newton's identity holds:

S_4 + σ_1S_3 + σ_2S_2 + σ_3S_1 + 4σ_4 = 0.

This result says that if the first and the third Newton's identities hold, the second and the fourth Newton's identities also hold. With some effort, it is possible to prove that if the first, third, …, (2t−1)th Newton's identities hold, then the second, fourth, …, (2t)th Newton's identities also hold. This implies that with the iterative algorithm for finding the error-location polynomial σ(X), the solution σ^{(2μ−1)}(X) at the (2μ−1)th step of iteration is also the solution σ^{(2μ)}(X) at the 2μth step of iteration; that is,

σ^{(2μ)}(X) = σ^{(2μ−1)}(X).    (6.40)

This suggests that the (2μ−1)th and the 2μth steps of iteration can be combined.
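Relation (6.31) is easy to confirm on the syndromes of Example 6.5 (illustrative Python, not part of the text; 7 and 6 below are the vector forms of α^10 and α^5 in the GF(2^4) field of Table 2.8):

```python
# Build GF(2^4) and check S_2i = S_i^2, eq. (6.31), on the Example 6.5 syndromes.
EXP, LOG = [0] * 30, {}
x = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0x13          # reduce modulo X^4 + X + 1

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

S = [1, 1, 7, 1, 7, 6]     # S_1..S_6; 7 = alpha^10, 6 = alpha^5
for i in (1, 2, 3):
    # S_2 = S_1^2, S_4 = S_2^2, S_6 = S_3^2
    assert S[2 * i - 1] == mul(S[i - 1], S[i - 1])
print("S_2i = S_i^2 holds for i = 1, 2, 3")
```

This is why the even-indexed syndromes carry no new information for a binary code, and why only the odd steps of the iteration need to be carried out.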
As a result, the foregoing iterative algorithm for finding σ(X) can be reduced to t steps; only the even steps are needed. The simplified algorithm can be carried out by filling out a table with only t rows, as illustrated in Table 6.7. Assuming that we have filled out all rows up to and including the μth row, we fill out the (μ+1)th row as follows:

1. If d_μ = 0, then σ^{(μ+1)}(X) = σ^{(μ)}(X).
2. If d_μ ≠ 0, we find another row preceding the μth row, say the ρth, such that the number 2ρ − l_ρ in the last column is as large as possible and d_ρ ≠ 0. Then,

σ^{(μ+1)}(X) = σ^{(μ)}(X) + d_μd_ρ^{−1}X^{2(μ−ρ)}σ^{(ρ)}(X).    (6.41)

In either case, l_{μ+1} is exactly the degree of σ^{(μ+1)}(X), and the discrepancy at the (μ+1)th step is

d_{μ+1} = S_{2μ+3} + σ_1^{(μ+1)}S_{2μ+2} + … + σ_{l_{μ+1}}^{(μ+1)}S_{2μ+3−l_{μ+1}}.    (6.42)

The polynomial σ^{(t)}(X) in the last row should be the required σ(X). If its degree is greater than t, there were more than t errors, and generally it is not possible to locate them.

The computation required in this simplified algorithm is half of the computation required in the general algorithm; however, we must remember that the simplified algorithm applies only to binary BCH codes.

TABLE 6.7: A simplified Berlekamp iterative procedure for finding the error-location polynomial of a binary BCH code.

μ      σ^{(μ)}(X)     d_μ     l_μ     2μ − l_μ
−1/2   1              1       0       −1
0      1              S_1     0       0
1
2
⋮
t

Again, if the number of errors in the received polynomial r(X) is less than t, it is not necessary to carry out the t steps of iteration to determine σ(X) for a t-error-correcting binary BCH code. Based on Chen's result [19], if for some μ, d_μ and the discrepancies at the next ⌈(t − l_μ − 1)/2⌉ steps of iteration are zero, σ^{(μ)}(X) is the error-location polynomial.

TABLE 6.8: Steps for finding the error-location polynomial of the (15, 5) binary BCH code given in Example 6.6.

μ      σ^{(μ)}(X)         d_μ                   l_μ     2μ − l_μ
−1/2   1                  1                     0       −1
0      1                  S_1 = 1               0       0
1      1 + S_1X = 1 + X   S_3 + S_2S_1 = α^5    1       1       (take ρ = −1/2)
2      1 + X + α^5X^2     α^10                  2       2       (take ρ = 0)
3      1 + X + α^5X^3     —                     3       3       (take ρ = 2)
If the number of errors in the received polynomial is ν (ν ≤ t), only ⌈(t + ν)/2⌉ steps of iteration are needed to determine the error-location polynomial σ(X).

EXAMPLE 6.6

The simplified table for finding σ(X) for the code considered in Example 6.5 is given in Table 6.8. Thus, σ(X) = σ^{(3)}(X) = 1 + X + α^5X^3, which is identical to the solution found in Example 6.5.

6.5 FINDING THE ERROR-LOCATION NUMBERS AND ERROR CORRECTION

The last step in decoding a BCH code is to find the error-location numbers, which are the reciprocals of the roots of σ(X). The roots of σ(X) can be found simply by substituting 1, α, α^2, …, α^{n−1} (n = 2^m − 1) into σ(X). Since α^n = 1, α^{−l} = α^{n−l}. Therefore, if α^l is a root of σ(X), α^{n−l} is an error-location number, and the received digit r_{n−l} is an erroneous digit.

Consider Example 6.6, where the error-location polynomial was found to be

σ(X) = 1 + X + α^5X^3.

By substituting 1, α, α^2, …, α^14 into σ(X), we find that α^3, α^10, and α^12 are roots of σ(X). Therefore, the error-location numbers are α^12, α^5, and α^3. The error pattern is

e(X) = X^3 + X^5 + X^12,

which is exactly the assumed error pattern. The decoding of the code is completed by adding (modulo-2) e(X) to the received vector r(X).

The described substitution method for finding the roots of the error-location polynomial was first used by Peterson in his algorithm for decoding BCH codes [3]. Later, Chien [6] formulated a procedure for carrying out the substitution and error correction. Chien's procedure for searching for error-location numbers is described next. The received vector

r(X) = r_0 + r_1X + r_2X^2 + … + r_{n−1}X^{n−1}

is decoded bit by bit. The high-order bits are decoded first. To decode r_{n−1}, the decoder tests whether α^{n−1} is an error-location number; this is equivalent to testing whether its inverse, α, is a root of σ(X). If α is a root, then

1 + σ_1α + σ_2α^2 + … + σ_να^ν = 0.

Therefore, to decode r_{n−1}, the decoder forms σ_1α, σ_2α^2, …, σ_να^ν.
If the sum

1 + σ_1α + σ_2α^2 + … + σ_να^ν = 0,

then α^{n−1} is an error-location number, and r_{n−1} is an erroneous digit; otherwise, r_{n−1} is a correct digit. To decode r_{n−2}, the decoder forms σ_1α^2, σ_2α^4, …, σ_να^{2ν} and tests the sum

1 + σ_1α^2 + σ_2α^4 + … + σ_να^{2ν}.

If this sum is 0, then α^2 is a root of σ(X), and r_{n−2} is an erroneous digit; otherwise, r_{n−2} is a correct digit. The described testing procedure for error locations can be implemented in a straightforward manner by a circuit such as that shown in Figure 6.1 [6]. The t σ-registers are initially stored with σ_1, σ_2, …, σ_t calculated in step 2 of the decoding (σ_{ν+1} = σ_{ν+2} = … = σ_t = 0 for ν < t). Immediately before r_{n−1} is read out of the buffer, the t multipliers ⊗ are pulsed once. The multiplications are performed, and σ_1α, σ_2α^2, …, σ_tα^t are stored in the σ-registers. The output of the logic circuit A is 1 if and only if the sum 1 + σ_1α + σ_2α^2 + … + σ_tα^t = 0; otherwise, the output of A is 0. The digit r_{n−1} is read out of the buffer and corrected by the output of A. Once r_{n−1} is decoded, the t multipliers are pulsed again. Now, σ_1α^2, σ_2α^4, …, σ_tα^{2t} are stored in the σ-registers. The sum

1 + σ_1α^2 + σ_2α^4 + … + σ_tα^{2t}

is tested for 0. The digit r_{n−2} is read out of the buffer and corrected in the same manner as r_{n−1} was corrected. This process continues until the whole received vector is read out of the buffer.

FIGURE 6.1: Cyclic error location search unit.

The described decoding algorithm also applies to nonprimitive BCH codes. The 2t syndrome components are given by

S_i = r(β^i) for 1 ≤ i ≤ 2t.

6.6 CORRECTION OF ERRORS AND ERASURES

If the channel is the binary symmetric erasure channel, as shown in Figure 1.6(b), the received vector may contain both errors and erasures. It was shown in Section 3.4 that a code with minimum distance d_min is capable of correcting all combinations of ν errors and e erasures provided that

2ν + e + 1 ≤ d_min.    (6.43)

Erasure and error correction with binary BCH codes are quite simple.
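The substitution test above amounts to evaluating σ(X) at each power of α. A direct sketch (illustrative Python, evaluating by Horner's rule rather than with Chien's multiplier registers) recovers the error positions of Example 6.6:

```python
# GF(2^4) from p(X) = X^4 + X + 1 (Table 2.8)
EXP, LOG = [0] * 30, {}
x = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0x13

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def poly_eval(p, x):            # Horner's rule over GF(2^4)
    v = 0
    for c in reversed(p):
        v = mul(v, x) ^ c
    return v

sigma = [1, 1, 0, 6]            # 1 + X + alpha^5 X^3 from Example 6.6
roots = [l for l in range(15) if poly_eval(sigma, EXP[l]) == 0]
err_pos = sorted((15 - l) % 15 for l in roots)
print(roots, err_pos)           # [3, 10, 12] [3, 5, 12]: errors at X^3, X^5, X^12
```

The roots α^3, α^10, α^12 invert to the error-location numbers α^12, α^5, α^3, exactly as in the worked example.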
Suppose a BCH code is designed to correct t errors, and the received polynomial r(X) contains ν (unknown) random errors and e (known) erasures. The decoding can be accomplished in two steps. First, the erased positions are replaced with 0's, and the resulting vector is decoded using the standard BCH decoding algorithm. Next, the e erased positions are replaced with 1's, and the resulting vector is decoded in the same manner. The decodings result in two codewords. The codeword with the smallest number of errors corrected outside the e erased positions is chosen as the decoded codeword. If the inequality of (6.43) holds, this decoding algorithm always results in correct decoding. To see this, we write (6.43) in terms of the error-correcting capability t of the code as

ν + e/2 ≤ t.    (6.44)

Assume that when the e erasures are replaced with 0's, e* ≤ e/2 errors are introduced in those e erased positions. As a result, the resulting vector contains a total of ν + e* ≤ ν + e/2 ≤ t errors, which are correctable. If instead e* > e/2, then only e − e* < e/2 errors are introduced when the e erasures are replaced with 1's. In this case the resultant vector contains ν + (e − e*) < t errors. Such errors are also correctable. Therefore, with the described decoding algorithm, at least one of the two decoded codewords is correct.

6.7 IMPLEMENTATION OF GALOIS FIELD ARITHMETIC

Decoding of BCH codes requires computations using Galois field arithmetic. Galois field arithmetic can be implemented more easily than ordinary arithmetic because there are no carries. In this section we discuss circuits that perform addition and multiplication over a Galois field. For simplicity, we consider the arithmetic over the Galois field GF(2^4) given by Table 2.8.

To add two field elements, we simply add their vector representations. The resultant vector is then the vector representation of the sum of the two field elements. For example, suppose we want to add α^7 and α^13 of GF(2^4). From Table 2.8 we find that their vector representations are (1 1 0 1) and (1 0 1 1), respectively.
Their vector sum is

(1 1 0 1) + (1 0 1 1) = (0 1 1 0),

which is the vector representation of α^5.

FIGURE 6.2: Galois field adder.

Two field elements can be added with the circuit shown in Figure 6.2. First, the vector representations of the two elements to be added are loaded into registers A and B. Their vector sum then appears at the inputs of register A. When register A is pulsed (or clocked), the sum is loaded into register A (register A serves as an accumulator).

For multiplication, we first consider multiplying a field element by a fixed element from the same field. Suppose that we want to multiply a field element β in GF(2^4) by the primitive element α, whose minimal polynomial is φ(X) = 1 + X + X^4. We can express the element β as a polynomial in α as follows:

β = b_0 + b_1α + b_2α^2 + b_3α^3.

Multiplying both sides of this equality by α and using the fact that α^4 = 1 + α, we obtain the following equality:

αβ = b_3 + (b_0 + b_3)α + b_1α^2 + b_2α^3.

This multiplication can be carried out by the feedback shift register shown in Figure 6.3. First, the vector representation (b_0, b_1, b_2, b_3) of β is loaded into the register; then the register is pulsed. The new contents in the register form the vector representation of αβ. For example, let β = α^7 = 1 + α + α^3. The vector representation of β is (1 1 0 1). We load this vector into the register of the circuit shown in Figure 6.3. After the register is pulsed, the new contents in the register will be (1 0 1 0), which represents α^8, the product of α^7 and α.

FIGURE 6.3: Circuit for multiplying an arbitrary element in GF(2^4) by α.

The circuit shown in Figure 6.3 can be used to generate (or count) all the nonzero elements of GF(2^4). First, we load (1 0 0 0) (the vector representation of α^0 = 1) into the register.
Successive shifts of the register will generate the vector representations of the successive powers of α, in exactly the same order as they appear in Table 2.8. At the end of the fifteenth shift, the register will contain (1 0 0 0) again.

As another example, suppose that we want to devise a circuit to multiply an arbitrary element β of GF(2^4) by the element α^3. Again, we express β in polynomial form:

    β = b0 + b1α + b2α^2 + b3α^3.

Multiplying both sides of the preceding equation by α^3, we have

    α^3·β = b0α^3 + b1α^4 + b2α^5 + b3α^6
          = b0α^3 + b1(1 + α) + b2(α + α^2) + b3(α^2 + α^3)
          = b1 + (b1 + b2)α + (b2 + b3)α^2 + (b0 + b3)α^3.

Based on the preceding expression, we obtain the circuit shown in Figure 6.4, which is capable of multiplying any element β of GF(2^4) by α^3. To multiply, we first load the vector representation (b0, b1, b2, b3) of β into the register; then we pulse the register. The new contents of the register will be the vector representation of α^3β.

FIGURE 6.4: Circuit for multiplying an arbitrary element of GF(2^4) by α^3.

Next, we consider multiplying two arbitrary field elements. Again, we use GF(2^4) for illustration. Let β and γ be two elements of GF(2^4). We express these two elements in polynomial form:

    β = b0 + b1α + b2α^2 + b3α^3,
    γ = c0 + c1α + c2α^2 + c3α^3.

Then, we can express the product βγ in the following form:

    βγ = (((c3β)α + c2β)α + c1β)α + c0β.    (6.45)

This product can be evaluated with the following steps:

1. Multiply c3β by α and add the product to c2β.
2. Multiply (c3β)α + c2β by α and add the product to c1β.
3. Multiply ((c3β)α + c2β)α + c1β by α and add the product to c0β.

Multiplication by α can be carried out by the circuit shown in Figure 6.3. This circuit can be modified to carry out the computation given by (6.45). The resultant circuit is shown in Figure 6.5.
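Steps 1–3 can be sketched in software. The following illustrative Python model is an assumption of this edition, not the book's circuit itself; bit k of an integer holds the coefficient of α^k.

```python
def mul_by_alpha(b):
    # Feedback shift register of Figure 6.3 over GF(2^4), p(X) = 1 + X + X^4.
    b <<= 1
    return b ^ 0b10011 if b & 0b10000 else b

def gf_mul(beta, gamma):
    # Evaluate (6.45): beta*gamma = (((c3 beta)a + c2 beta)a + c1 beta)a + c0 beta,
    # where gamma = c0 + c1 a + c2 a^2 + c3 a^3.
    acc = 0
    for k in (3, 2, 1, 0):
        acc = mul_by_alpha(acc)   # multiply the partial result by alpha
        if (gamma >> k) & 1:      # add the term c_k * beta
            acc ^= beta
    return acc
```

For example, with α^7 = 0b1011 and α^13 = 0b1101, gf_mul returns 0b0110 = α^5, consistent with α^7·α^13 = α^20 = α^5.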
In operation of this circuit, the feedback shift register A is initially empty, and (b0, b1, b2, b3) and (c0, c1, c2, c3), the vector representations of β and γ, are loaded into registers B and C, respectively. Then, registers A and C are shifted four times. At the end of the first shift, register A contains (c3b0, c3b1, c3b2, c3b3), the vector representation of c3β. At the end of the second shift, register A contains the vector representation of (c3β)α + c2β. At the end of the third shift, register A contains the vector representation of ((c3β)α + c2β)α + c1β. At the end of the fourth shift, register A contains the product βγ in vector form.

If we express the product βγ in the form

    βγ = c0β + c1(βα) + c2(βα^2) + c3(βα^3),

we obtain a different multiplication circuit, as shown in Figure 6.6. To perform the multiplication, β and γ are loaded into registers B and C, respectively, and register A is initially empty. Then, registers A, B, and C are shifted four times. At the end of the fourth shift, register A holds the product βγ.

FIGURE 6.5: Circuit for multiplying two elements of GF(2^4).

FIGURE 6.6: Another circuit for multiplying two elements of GF(2^4).

Both multiplication circuits shown in Figures 6.5 and 6.6 are of the same complexity and require the same amount of computation time.

Two elements from GF(2^m) can be multiplied with a combinational logic circuit with 2m inputs and m outputs. The advantage of this implementation is its speed; however, for large m, it becomes prohibitively complex and costly. Multiplication can also be programmed in a general-purpose computer, but it requires roughly 5m instruction executions.

Let r(X) be a polynomial over GF(2). We consider now how to compute r(α^i). This type of computation is required in the first step of decoding of a BCH code.
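The alternative expansion realized by Figure 6.6 accumulates the terms c_k(βα^k) while β itself is repeatedly shifted. A hypothetical Python sketch of this second ordering (same assumed integer representation as the earlier sketches, with bit k holding the coefficient of α^k):

```python
def mul_by_alpha(b):
    # One shift of the multiply-by-alpha register over GF(2^4), p(X) = 1 + X + X^4.
    b <<= 1
    return b ^ 0b10011 if b & 0b10000 else b

def gf_mul_fig66(beta, gamma):
    # beta*gamma = c0*beta + c1*(beta a) + c2*(beta a^2) + c3*(beta a^3):
    # register B holds beta*alpha^k, register A accumulates the selected terms.
    acc = 0
    for k in range(4):
        if (gamma >> k) & 1:
            acc ^= beta            # add the term c_k * (beta * alpha^k)
        beta = mul_by_alpha(beta)  # shift register B: beta <- beta * alpha
    return acc
```

Both orderings give the same product; for instance, gf_mul_fig66(0b1011, 0b1101) is again 0b0110 = α^5, matching the Figure 6.5 ordering of (6.45).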
It can be done with a circuit for multiplying a field element by α^i in GF(2^m). Again, we use computation over GF(2^4) for illustration. Suppose that we want to compute

    r(α) = r0 + r1α + r2α^2 + ··· + r14α^14,    (6.46)

where α is a primitive element of GF(2^4) given by Table 2.8. We can express the right-hand side of (6.46) in the form

    r(α) = (···((r14α + r13)α + r12)α + ···)α + r0.

Then, we can compute r(α) by adding an input to the circuit for multiplying by α shown in Figure 6.3. The resultant circuit for computing r(α) is shown in Figure 6.7. In operation of this circuit, the register is initially empty. The vector (r0, r1, ··· , r14) is shifted into the circuit one digit at a time, highest-order digit first. After the first shift, the register contains (r14, 0, 0, 0). At the end of the second shift, the register contains the vector representation of r14α + r13. At the completion of the third shift, the register contains the vector representation of (r14α + r13)α + r12. When the last digit r0 is shifted into the circuit, the register contains r(α) in vector form.

FIGURE 6.7: Circuit for computing r(α).

FIGURE 6.8: Circuit for computing r(α^3).

Similarly, we can compute r(α^3) by adding an input to the circuit for multiplying by α^3 of Figure 6.4. The resultant circuit for computing r(α^3) is shown in Figure 6.8.

There is another way of computing r(α^i). Let φi(X) be the minimal polynomial of α^i, and let b(X) be the remainder resulting from dividing r(X) by φi(X). Then, r(α^i) = b(α^i). Thus, computing r(α^i) is equivalent to computing b(α^i), and a circuit can be devised to compute b(α^i). For illustration, we again consider computation over GF(2^4). Suppose that we want to compute r(α^3). The minimal polynomial of α^3 is φ3(X) = 1 + X + X^2 + X^3 + X^4. The remainder resulting from dividing r(X) by φ3(X) has the form b(X) = b0 + b1X + b2X^2 + b3X^3. Then,

    b(α^3) = b0 + b1α^3 + b2α^6 + b3α^9
           = b0 + b1α^3 + b2(α^2 + α^3) + b3(α + α^3)
           = b0 + b3α + b2α^2 + (b1 + b2 + b3)α^3.
(6.47)

From the preceding expression we see that r(α^3) can be computed by using a circuit that divides r(X) by φ3(X) = 1 + X + X^2 + X^3 + X^4 and then combines the coefficients of the remainder b(X) as given by (6.47). Such a circuit is shown in Figure 6.9, where the feedback connections of the shift register are based on φ3(X) = 1 + X + X^2 + X^3 + X^4.

FIGURE 6.9: Another circuit for computing r(α^3) in GF(2^4).

Because α^6 is a conjugate of α^3, it has the same minimal polynomial as α^3, and therefore r(α^6) can be computed from the same remainder b(X) resulting from dividing r(X) by φ3(X). To form r(α^6), we combine the coefficients of b(X) in the following manner:

    r(α^6) = b(α^6) = b0 + b1α^6 + b2α^12 + b3α^18
           = b0 + b1(α^2 + α^3) + b2(1 + α + α^2 + α^3) + b3α^3
           = (b0 + b2) + b2α + (b1 + b2)α^2 + (b1 + b2 + b3)α^3.

The combined circuit for computing r(α^3) and r(α^6) is shown in Figure 6.10.

FIGURE 6.10: Circuit for computing r(α^3) and r(α^6) in GF(2^4).

The arithmetic operation of division over GF(2^m) can be performed by first forming the multiplicative inverse of the divisor β and then multiplying this inverse β^−1 by the dividend, thus forming the quotient. The multiplicative inverse of β can be found by using the fact that β^(2^m − 1) = 1. Thus,

    β^−1 = β^(2^m − 2).

6.8 IMPLEMENTATION OF ERROR CORRECTION

Each step in the decoding of a BCH code can be implemented either by digital hardware or by software. Each implementation has certain advantages. We consider these implementations next.

6.8.1 Syndrome Computations

The first step in decoding a t-error-correcting BCH code is to compute the 2t syndrome components S1, S2, ··· , S2t. These syndrome components may be obtained by substituting the field elements α, α^2, ··· , α^2t into the received polynomial r(X). For software implementation, α^i is best substituted into r(X) as follows:

    S_i = r(α^i) = r_{n−1}(α^i)^{n−1} + r_{n−2}(α^i)^{n−2} + ··· + r_1α^i + r_0
        = (···(r_{n−1}α^i + r_{n−2})α^i + ··· + r_1)α^i + r_0.
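These computations are easy to model in software. The hypothetical Python sketch below (same assumed bit-integer field model as earlier, with bit k holding the coefficient of α^k) evaluates r(α^i) by the nested form, checks it against the remainder method r(α^3) = b(α^3) with b(X) = r(X) mod φ3(X), and forms an inverse as β^(2^4 − 2) = β^14.

```python
def mul_by_alpha(b):
    b <<= 1
    return b ^ 0b10011 if b & 0b10000 else b   # reduce with p(X) = 1 + X + X^4

ALPHA = [1]
for _ in range(14):
    ALPHA.append(mul_by_alpha(ALPHA[-1]))      # ALPHA[i] represents alpha^i

def gf_mul(x, y):
    # Horner-style product of (6.45).
    acc = 0
    for k in range(3, -1, -1):
        acc = mul_by_alpha(acc)
        if (y >> k) & 1:
            acc ^= x
    return acc

def gf_pow(x, e):
    acc = 1
    for _ in range(e):
        acc = gf_mul(acc, x)
    return acc

def poly_eval(r, x):
    # r lists binary coefficients r0, r1, ..., r_{n-1}; evaluate r(x) by the
    # nested form (...(r_{n-1} x + r_{n-2}) x + ...) x + r0.
    acc = 0
    for coeff in reversed(r):
        acc = gf_mul(acc, x) ^ coeff
    return acc

def poly_mod(r, g):
    # Remainder of r(X) divided by g(X), both over GF(2), low degree first.
    r = r[:]
    for i in range(len(r) - 1, len(g) - 2, -1):
        if r[i]:
            for j, gj in enumerate(g):
                r[i - len(g) + 1 + j] ^= gj
    return r[:len(g) - 1]
```

For any r(X), poly_eval(r, ALPHA[3]) agrees with evaluating the remainder b(X) = r(X) mod φ3(X) at α^3 (and likewise at the conjugate α^6), and gf_mul(β, gf_pow(β, 14)) = 1 for every nonzero β.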
This computation takes n − 1 additions and n − 1 multiplications. For binary BCH codes, we have shown that S_{2i} = S_i^2. With this equality, the 2t syndrome components can be computed with (n − 1)t additions and nt multiplications.

For hardware implementation, the syndrome components may be computed with feedback shift registers as described in Section 6.7. We may use either the type of circuits shown in Figures 6.7 and 6.8 or the type of circuit shown in Figure 6.10. The second type of circuit is simpler. From the expression of (6.3), we see that the generator polynomial is a product of at most t minimal polynomials. Therefore, at most t feedback shift registers, each consisting of at most m stages, are needed to form the 2t syndrome components. The computation is performed as the received polynomial r(X) enters the decoder. As soon as the entire r(X) has entered the decoder, the 2t syndrome components are formed. It takes n clock cycles to complete the computation. A syndrome computation circuit for the double-error-correcting (15, 7) BCH code is shown in Figure 6.11, where two feedback shift registers, each with four stages, are employed. The advantage of hardware implementation of syndrome computation is speed; however, software implementation is less expensive.

FIGURE 6.11: Syndrome computation circuit for the double-error-correcting (15, 7) BCH code, with a 15-bit buffer register for r(X) and dividing circuits based on φ1(X) = 1 + X + X^4 and φ3(X) = 1 + X + X^2 + X^3 + X^4.

6.8.2 Finding the Error-Location Polynomial σ(X)

For this step the software computation requires somewhat fewer than t additions and t multiplications to compute each σ^(i+1)(X) and each d_i, and since there are t of each, the total is roughly 2t^2 additions and 2t^2 multiplications. A pure hardware implementation requires the same total, and the speed depends on how much is done in parallel.
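As an illustration of the syndrome step of Section 6.8.1, the hypothetical Python sketch below forms S1, ..., S4 for the double-error-correcting (15, 7) code, whose generator polynomial g(X) = 1 + X^4 + X^6 + X^7 + X^8 is the product φ1(X)φ3(X). The bit-integer field model and all function names are assumptions of this edition.

```python
def mul_by_alpha(b):
    b <<= 1
    return b ^ 0b10011 if b & 0b10000 else b   # GF(2^4), p(X) = 1 + X + X^4

ALPHA = [1]
for _ in range(14):
    ALPHA.append(mul_by_alpha(ALPHA[-1]))

def gf_mul(x, y):
    acc = 0
    for k in range(3, -1, -1):
        acc = mul_by_alpha(acc)
        if (y >> k) & 1:
            acc ^= x
    return acc

def syndromes(r):
    # S_i = r(alpha^i), i = 1, ..., 4, by the nested (Horner) evaluation.
    out = []
    for i in range(1, 5):
        acc = 0
        for coeff in reversed(r):
            acc = gf_mul(acc, ALPHA[i]) ^ coeff
        out.append(acc)
    return out

# A codeword of the (15, 7) code: any multiple of g(X); here X * g(X).
g = [1, 0, 0, 0, 1, 0, 1, 1, 1]       # 1 + X^4 + X^6 + X^7 + X^8, low degree first
code = [0] + g + [0] * 5              # X * g(X), padded to length 15
recv = code[:]
recv[2] ^= 1                          # inject a single error at position 2
```

A codeword gives all-zero syndromes; the corrupted vector gives S_i = α^{2i}, and the binary-code identities S2 = S1^2 and S4 = S2^2 hold.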
The type of circuit shown in Figure 6.2 may be used for addition, and the type of circuits shown in Figures 6.5 and 6.6 may be used for multiplication. A very fast hardware implementation of finding σ(X) would probably be very expensive, whereas a simple hardware implementation would probably be organized much like a general-purpose computer, except with a wired rather than a stored program.

6.8.3 Computation of Error-Location Numbers and Error Correction

In the worst case, this step requires substituting n field elements into an error-location polynomial σ(X) of degree t to determine its roots. In software this requires nt multiplications and nt additions. This step can also be performed in hardware using Chien's searching circuit, shown in Figure 6.1. Chien's searching circuit requires t multipliers for multiplying by α, α^2, ··· , α^t, respectively. These multipliers may be the type of circuits shown in Figures 6.3 and 6.4. Initially, σ1, σ2, ··· , σt found in step 2 are loaded into the registers of the t multipliers. Then, these multipliers are shifted n times. At the end of the lth shift, the t registers contain σ1α^l, σ2α^{2l}, ··· , σtα^{tl}. Then, the sum

    1 + σ1α^l + σ2α^{2l} + ··· + σtα^{tl}

is tested. If the sum is zero, α^{n−l} is an error-location number; otherwise, α^{n−l} is not an error-location number. This sum can be formed by using m multiple-input modulo-2 adders. An m-input OR gate is used to test whether the sum is zero. It takes n clock cycles to complete this step. If we want to correct only the message digits, only k clock cycles are needed. A Chien's searching circuit for the double-error-correcting (15, 7) BCH code is shown in Figure 6.12. For large t and m, the cost of building t wired multipliers for multiplying by α, α^2, ··· , α^t in one clock cycle becomes substantial.
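Chien's search can be modeled in software. In the hypothetical sketch below, σ(X) for a two-error pattern in the (15, 7) code is formed directly from assumed error positions (in a real decoder σ1 and σ2 come from step 2), and the search tests 1 + σ1α^l + σ2α^{2l} for l = 0, 1, ..., 14, flagging position n − l when the sum is zero. The bit-integer field model is the same assumption as in the earlier sketches.

```python
def mul_by_alpha(b):
    b <<= 1
    return b ^ 0b10011 if b & 0b10000 else b   # GF(2^4), p(X) = 1 + X + X^4

ALPHA = [1]
for _ in range(14):
    ALPHA.append(mul_by_alpha(ALPHA[-1]))

def gf_mul(x, y):
    acc = 0
    for k in range(3, -1, -1):
        acc = mul_by_alpha(acc)
        if (y >> k) & 1:
            acc ^= x
    return acc

# Assumed (hypothetical) error positions for illustration: j1 = 4, j2 = 9.
beta1, beta2 = ALPHA[4], ALPHA[9]    # error-location numbers alpha^j1, alpha^j2
sigma1 = beta1 ^ beta2               # sigma(X) = (1 + beta1 X)(1 + beta2 X)
sigma2 = gf_mul(beta1, beta2)        #          = 1 + sigma1 X + sigma2 X^2

def chien_search(s1, s2, n=15):
    # Test 1 + sigma1*alpha^l + sigma2*alpha^(2l) for l = 0, ..., n-1;
    # a zero sum means alpha^(n-l) is an error-location number.
    positions = set()
    for l in range(n):
        total = 1 ^ gf_mul(s1, ALPHA[l % 15]) ^ gf_mul(s2, ALPHA[(2 * l) % 15])
        if total == 0:
            positions.add((n - l) % n)
    return positions
```

Here chien_search(sigma1, sigma2) returns {4, 9}, the assumed error positions; flipping those two received digits completes the correction.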
[Figure 6.12 appears here: an output 15-bit buffer and two multiplying circuits, one multiplying by α and initially loaded with σ1, the other initially loaded with σ2.]

FIGURE 6.12: Chien's searching circuit for the double-error-correcting (15, 7) BCH code.

For more economical but
