BCH code

From Wikipedia, the free encyclopedia
Error correction code

In coding theory, the Bose–Chaudhuri–Hocquenghem codes (BCH codes) form a class of cyclic error-correcting codes that are constructed using polynomials over a finite field (also called a Galois field). BCH codes were invented in 1959 by French mathematician Alexis Hocquenghem, and independently in 1960 by Raj Chandra Bose and D. K. Ray-Chaudhuri.[1][2][3] The name Bose–Chaudhuri–Hocquenghem (and the acronym BCH) arises from the initials of the inventors' surnames (mistakenly, in the case of Ray-Chaudhuri).

One of the key features of BCH codes is that during code design, there is precise control over the number of symbol errors correctable by the code. In particular, it is possible to design binary BCH codes that can correct multiple bit errors. Another advantage of BCH codes is the ease with which they can be decoded, namely, via an algebraic method known as syndrome decoding. This simplifies the design of the decoder for these codes, using small low-power electronic hardware.

BCH codes are used in applications such as satellite communications,[4] compact disc players, DVDs, disk drives, USB flash drives, solid-state drives,[5] and two-dimensional bar codes.

Definition and illustration


Primitive narrow-sense BCH codes


Given a prime number q and prime power q^m with positive integers m and d such that d ≤ q^m − 1, a primitive narrow-sense BCH code over the finite field (or Galois field) GF(q) with code length n = q^m − 1 and distance at least d is constructed by the following method.

Let α be a primitive element of GF(q^m). For any positive integer i, let m_i(x) be the minimal polynomial with coefficients in GF(q) of α^i. The generator polynomial of the BCH code is defined as the least common multiple g(x) = lcm(m1(x), …, m_{d−1}(x)). It can be seen that g(x) is a polynomial with coefficients in GF(q) and divides x^n − 1. Therefore, the polynomial code defined by g(x) is a cyclic code.

Example


Let q = 2 and m = 4 (therefore n = 15). We will consider different values of d for GF(16) = GF(2^4) based on the reducing polynomial z^4 + z + 1, using the primitive element α(z) = z. There are fourteen minimal polynomials m_i(x) with coefficients in GF(2) satisfying

m_i(α^i) mod (z^4 + z + 1) = 0.

The minimal polynomials are

m1(x) = m2(x) = m4(x) = m8(x) = x^4 + x + 1
m3(x) = m6(x) = m9(x) = m12(x) = x^4 + x^3 + x^2 + x + 1
m5(x) = m10(x) = x^2 + x + 1
m7(x) = m11(x) = m13(x) = m14(x) = x^4 + x^3 + 1
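These minimal polynomials can be checked mechanically. The sketch below (Python, assuming nothing beyond the reducing polynomial z^4 + z + 1 above) represents GF(16) elements as 4-bit integers, builds each conjugacy class {i, 2i, 4i, 8i} mod 15, and multiplies out the corresponding factors (x − α^j):

```python
# GF(16) built from z^4 + z + 1; elements are 4-bit integers, alpha = z = 0b0010.
def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:          # reduce by z^4 + z + 1
            a ^= 0b10011
    return r

def apow(e):
    """alpha^e in GF(16)."""
    r = 1
    for _ in range(e % 15):
        r = gf_mul(r, 2)
    return r

def min_poly(i):
    """Minimal polynomial of alpha^i over GF(2); coefficients low degree first."""
    cls, j = set(), i % 15
    while j not in cls:          # conjugacy class {i, 2i, 4i, ...} mod 15
        cls.add(j)
        j = (2 * j) % 15
    poly = [1]                   # multiply out (x + alpha^j) over the class
    for j in cls:
        root, new = apow(j), [0] * (len(poly) + 1)
        for k, c in enumerate(poly):
            new[k] ^= gf_mul(root, c)   # the "+ root" part of the factor
            new[k + 1] ^= c             # the "x" part of the factor
        poly = new
    return poly

print(min_poly(1))   # x^4 + x + 1        -> [1, 1, 0, 0, 1]
print(min_poly(5))   # x^2 + x + 1        -> [1, 1, 1]
print(min_poly(7))   # x^4 + x^3 + 1      -> [1, 0, 0, 1, 1]
```

The conjugacy classes are exactly why m1 = m2 = m4 = m8 above: squaring permutes the roots of a polynomial over GF(2), so α^i and α^{2i} share a minimal polynomial.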

The BCH code with d = 2, 3 has the generator polynomial

g(x) = lcm(m1(x), m2(x)) = m1(x) = x^4 + x + 1.

It has minimal Hamming distance at least 3 and corrects up to one error. Since the generator polynomial is of degree 4, this code has 11 data bits and 4 checksum bits. It is also denoted as: (15, 11) BCH code.

The BCH code with d = 4, 5 has the generator polynomial

g(x) = lcm(m1(x), m2(x), m3(x), m4(x)) = m1(x) m3(x) = (x^4 + x + 1)(x^4 + x^3 + x^2 + x + 1) = x^8 + x^7 + x^6 + x^4 + 1.

It has minimal Hamming distance at least 5 and corrects up to two errors. Since the generator polynomial is of degree 8, this code has 7 data bits and 8 checksum bits. It is also denoted as: (15, 7) BCH code.

The BCH code with d = 6, 7 has the generator polynomial

g(x) = lcm(m1(x), m2(x), m3(x), m4(x), m5(x), m6(x)) = m1(x) m3(x) m5(x) = (x^4 + x + 1)(x^4 + x^3 + x^2 + x + 1)(x^2 + x + 1) = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1.

It has minimal Hamming distance at least 7 and corrects up to three errors. Since the generator polynomial is of degree 10, this code has 5 data bits and 10 checksum bits. It is also denoted as: (15, 5) BCH code. (This particular generator polynomial has a real-world application, in the "format information" of the QR code.)

The BCH code with d = 8 and higher has the generator polynomial

g(x) = lcm(m1(x), m2(x), …, m14(x)) = m1(x) m3(x) m5(x) m7(x) = (x^4 + x + 1)(x^4 + x^3 + x^2 + x + 1)(x^2 + x + 1)(x^4 + x^3 + 1) = x^14 + x^13 + x^12 + ⋯ + x^2 + x + 1.

This code has minimal Hamming distance 15 and corrects 7 errors. It has 1 data bit and 14 checksum bits. It is also denoted as: (15, 1) BCH code. In fact, this code has only two codewords: 000000000000000 and 111111111111111 (a trivial repetition code).
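The factorizations above are easy to verify with carry-less (GF(2)) polynomial arithmetic. In the sketch below (Python, representing a polynomial as an integer whose bit i is the coefficient of x^i), m1·m3 yields the (15, 7) generator, and the generators divide x^15 + 1, as the construction requires:

```python
def clmul(a, b):
    """Multiply polynomials over GF(2): XOR of shifted copies (carry-less)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def clmod(a, g):
    """Remainder of polynomial a modulo g over GF(2)."""
    dg = g.bit_length() - 1
    while a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)
    return a

m1, m3, m5 = 0b10011, 0b11111, 0b111       # x^4+x+1, x^4+x^3+x^2+x+1, x^2+x+1
g15_7 = clmul(m1, m3)
print(bin(g15_7))                # 0b111010001 = x^8 + x^7 + x^6 + x^4 + 1
x15_1 = (1 << 15) | 1            # x^15 + 1 (same as x^15 - 1 over GF(2))
print(clmod(x15_1, g15_7))       # 0: the generator divides x^15 + 1
```

Multiplying in m5 as well gives 0b10100110111, the degree-10 generator of the (15, 5) code quoted above.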

General BCH codes


General BCH codes differ from primitive narrow-sense BCH codes in two respects.

First, the requirement that α be a primitive element of GF(q^m) can be relaxed. By relaxing this requirement, the code length changes from q^m − 1 to ord(α), the order of the element α.

Second, the consecutive roots of the generator polynomial may run from α^c, …, α^{c+d−2} instead of α, …, α^{d−1}.

Definition. Fix a finite field GF(q), where q is a prime power. Choose positive integers m, n, d, c such that 2 ≤ d ≤ n, gcd(n, q) = 1, and m is the multiplicative order of q modulo n.

As before, let α be a primitive nth root of unity in GF(q^m), and let m_i(x) be the minimal polynomial over GF(q) of α^i for all i. The generator polynomial of the BCH code is defined as the least common multiple g(x) = lcm(m_c(x), …, m_{c+d−2}(x)).

Note: if n = q^m − 1 as in the simplified definition, then gcd(n, q) is 1, and the order of q modulo n is m. Therefore, the simplified definition is indeed a special case of the general one.

Special cases


The generator polynomial g(x) of a BCH code has coefficients from GF(q). In general, a cyclic code over GF(q^p) with g(x) as the generator polynomial is called a BCH code over GF(q^p). The BCH code over GF(q^m) with generator polynomial g(x) having successive powers of α as roots is one type of Reed–Solomon code, where the decoder (syndromes) alphabet is the same as the channel (data and generator polynomial) alphabet: all elements of GF(q^m).[6] The other type of Reed–Solomon code is an original-view Reed–Solomon code, which is not a BCH code.

Properties


The generator polynomial of a BCH code has degree at most (d − 1)m. Moreover, if q = 2 and c = 1, the generator polynomial has degree at most dm/2.

Proof

Each minimal polynomial m_i(x) has degree at most m. Therefore, the least common multiple of d − 1 of them has degree at most (d − 1)m. Moreover, if q = 2, then m_i(x) = m_{2i}(x) for all i. Therefore, g(x) is the least common multiple of at most d/2 minimal polynomials m_i(x) for odd indices i, each of degree at most m.

A BCH code has minimal Hamming distance at least d.

Proof

Suppose that p(x) is a code word with fewer than d non-zero terms. Then

p(x) = b1 x^{k1} + ⋯ + b_{d−1} x^{k_{d−1}}, where k1 < k2 < ⋯ < k_{d−1}.

Recall that α^c, …, α^{c+d−2} are roots of g(x), hence of p(x). This implies that b1, …, b_{d−1} satisfy the following equations, for each i ∈ {c, …, c + d − 2}:

p(α^i) = b1 α^{i k1} + b2 α^{i k2} + ⋯ + b_{d−1} α^{i k_{d−1}} = 0.

In matrix form, we have

\begin{bmatrix}\alpha^{ck_1}&\alpha^{ck_2}&\cdots&\alpha^{ck_{d-1}}\\\alpha^{(c+1)k_1}&\alpha^{(c+1)k_2}&\cdots&\alpha^{(c+1)k_{d-1}}\\\vdots&\vdots&&\vdots\\\alpha^{(c+d-2)k_1}&\alpha^{(c+d-2)k_2}&\cdots&\alpha^{(c+d-2)k_{d-1}}\end{bmatrix}\begin{bmatrix}b_1\\b_2\\\vdots\\b_{d-1}\end{bmatrix}=\begin{bmatrix}0\\0\\\vdots\\0\end{bmatrix}.

The determinant of this matrix equals

\left(\prod_{i=1}^{d-1}\alpha^{ck_i}\right)\det\begin{pmatrix}1&1&\cdots&1\\\alpha^{k_1}&\alpha^{k_2}&\cdots&\alpha^{k_{d-1}}\\\vdots&\vdots&&\vdots\\\alpha^{(d-2)k_1}&\alpha^{(d-2)k_2}&\cdots&\alpha^{(d-2)k_{d-1}}\end{pmatrix}=\left(\prod_{i=1}^{d-1}\alpha^{ck_i}\right)\det(V).

The matrix V is seen to be a Vandermonde matrix, and its determinant is

det(V) = ∏_{1 ≤ i < j ≤ d−1} (α^{k_j} − α^{k_i}),

which is non-zero. It therefore follows that b1 = ⋯ = b_{d−1} = 0, hence p(x) = 0.

A BCH code is cyclic.

Proof

A polynomial code of length n is cyclic if and only if its generator polynomial divides x^n − 1. Since g(x) is the minimal polynomial with roots α^c, …, α^{c+d−2}, it suffices to check that each of α^c, …, α^{c+d−2} is a root of x^n − 1. This follows immediately from the fact that α is, by definition, an nth root of unity.

Encoding


Because any polynomial that is a multiple of the generator polynomial is a valid BCH codeword, BCH encoding is merely the process of finding some polynomial that has the generator as a factor.

The BCH code itself is not prescriptive about the meaning of the coefficients of the polynomial; conceptually, a BCH decoding algorithm's sole concern is to find the valid codeword with the minimal Hamming distance to the received codeword. Therefore, the BCH code may be implemented either as a systematic code or not, depending on how the implementor chooses to embed the message in the encoded polynomial.

Non-systematic encoding: The message as a factor


The most straightforward way to find a polynomial that is a multiple of the generator is to compute the product of some arbitrary polynomial and the generator. In this case, the arbitrary polynomial can be chosen using the symbols of the message as coefficients.

s(x) = p(x) g(x)

As an example, consider the generator polynomial g(x) = x^10 + x^9 + x^8 + x^6 + x^5 + x^3 + 1, chosen for use in the (31, 21) binary BCH code used by POCSAG and others. To encode the 21-bit message {101101110111101111101}, we first represent it as a polynomial over GF(2):

p(x) = x^20 + x^18 + x^17 + x^15 + x^14 + x^13 + x^11 + x^10 + x^9 + x^8 + x^6 + x^5 + x^4 + x^3 + x^2 + 1

Then, compute (also over GF(2)):

s(x) = p(x) g(x) = (x^20 + x^18 + x^17 + x^15 + x^14 + x^13 + x^11 + x^10 + x^9 + x^8 + x^6 + x^5 + x^4 + x^3 + x^2 + 1)(x^10 + x^9 + x^8 + x^6 + x^5 + x^3 + 1) = x^30 + x^29 + x^26 + x^25 + x^24 + x^22 + x^19 + x^17 + x^16 + x^15 + x^14 + x^12 + x^10 + x^9 + x^8 + x^6 + x^5 + x^4 + x^2 + 1

Thus, the transmitted codeword is {1100111010010111101011101110101}.

The receiver can use these bits as coefficients in s(x) and, after error correction to ensure a valid codeword, can recompute p(x) = s(x)/g(x).
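This multiplication can be reproduced in a few lines of carry-less arithmetic. A sketch in Python, where a polynomial over GF(2) is held as an integer with bit i storing the coefficient of x^i:

```python
def clmul(a, b):
    """Polynomial multiplication over GF(2): XOR shifted copies of a."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

p = int("101101110111101111101", 2)   # message; MSB = coefficient of x^20
g = int("11101101001", 2)             # x^10 + x^9 + x^8 + x^6 + x^5 + x^3 + 1
s = clmul(p, g)
print(format(s, "b"))                 # 1100111010010111101011101110101
```

The printed string is exactly the transmitted codeword above.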

Systematic encoding: The message as a prefix


A systematic code is one in which the message appears verbatim somewhere within the codeword. Therefore, systematic BCH encoding involves first embedding the message polynomial within the codeword polynomial, and then adjusting the coefficients of the remaining (non-message) terms to ensure that s(x) is divisible by g(x).

This encoding method leverages the fact that subtracting the remainder from a dividend results in a multiple of the divisor. Hence, if we take our message polynomial p(x) as before and multiply it by x^{n−k} (to "shift" the message out of the way of the remainder), we can then use Euclidean division of polynomials to yield:

p(x) x^{n−k} = q(x) g(x) + r(x)

Here, we see that q(x) g(x) is a valid codeword. As r(x) always has degree less than n − k (which is the degree of g(x)), we can safely subtract it from p(x) x^{n−k} without altering any of the message coefficients; hence we have our s(x) as

s(x) = q(x) g(x) = p(x) x^{n−k} − r(x)

Over GF(2) (i.e. with binary BCH codes), this process is indistinguishable from appending a cyclic redundancy check, and if a systematic binary BCH code is used only for error-detection purposes, we see that BCH codes are just a generalization of the mathematics of cyclic redundancy checks.

The advantage of systematic coding is that the receiver can recover the original message by discarding everything after the first k coefficients, after performing error correction.
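Systematic encoding can be sketched under the same integer-as-polynomial convention (Python; the generator below is the (31, 21) POCSAG generator from the non-systematic example, an arbitrary choice for illustration): shift the message up by deg g, append the remainder, then check divisibility and message recovery.

```python
def clmod(a, g):
    """Remainder of polynomial a modulo g over GF(2) (polynomials as integers)."""
    dg = g.bit_length() - 1
    while a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)
    return a

g = int("11101101001", 2)             # degree n - k = 10 generator
p = int("101101110111101111101", 2)   # 21-bit message
shifted = p << 10                     # p(x) * x^(n-k)
r = clmod(shifted, g)                 # checksum: remainder mod g(x)
s = shifted ^ r                       # s(x) = p(x) x^(n-k) - r(x) over GF(2)
print(clmod(s, g))                    # 0: s(x) is divisible by g(x)
print(s >> 10 == p)                   # True: the message survives as the prefix
```

Discarding the low n − k bits (the checksum) recovers the message exactly, which is the property claimed above.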

Decoding


There are many algorithms for decoding BCH codes. The most common ones follow this general outline:

  1. Calculate the syndromes s_j for the received vector
  2. Determine the number of errors t and the error locator polynomial Λ(x) from the syndromes
  3. Calculate the roots of the error locator polynomial to find the error locations X_i
  4. Calculate the error values Y_i at those error locations
  5. Correct the errors

During some of these steps, the decoding algorithm may determine that the received vector has too many errors and cannot be corrected. For example, if an appropriate value oft is not found, then the correction would fail. In a truncated (not primitive) code, an error location may be out of range. If the received vector has more errors than the code can correct, the decoder may unknowingly produce an apparently valid message that is not the one that was sent.

Calculate the syndromes


The received vector R is the sum of the correct codeword C and an unknown error vector E. The syndrome values are formed by considering R as a polynomial and evaluating it at α^c, …, α^{c+d−2}. Thus the syndromes are[7]

s_j = R(α^j) = C(α^j) + E(α^j)

for j = c to c + d − 2.

Since the α^j are zeros of g(x), of which C(x) is a multiple, C(α^j) = 0. Examining the syndrome values thus isolates the error vector, so one can begin to solve for it.

If there is no error, s_j = 0 for all j. If the syndromes are all zero, then the decoding is done.
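As a concrete check, the sketch below (Python, GF(16) built on z^4 + z + 1 as in the earlier examples) evaluates the syndromes of the (15, 7), d = 5 codeword g(x) = x^8 + x^7 + x^6 + x^4 + 1 itself: they are all zero, while flipping one bit makes them nonzero. For a single error at position i, s_j = α^{ij}, as the next subsection uses.

```python
def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= 0b10011          # reduce by z^4 + z + 1

    return r

def apow(e):                      # alpha^e with alpha = z
    r = 1
    for _ in range(e % 15):
        r = gf_mul(r, 2)
    return r

def syndrome(word, j):
    """Evaluate the received word (bit list, word[i] = coeff of x^i) at alpha^j."""
    res, aj = 0, apow(j)
    for c in reversed(word):      # Horner's rule
        res = gf_mul(res, aj) ^ c
    return res

codeword = [1, 0, 0, 0, 1, 0, 1, 1, 1] + [0] * 6   # g(x) = x^8+x^7+x^6+x^4+1
print([syndrome(codeword, j) for j in range(1, 5)])  # [0, 0, 0, 0]

received = codeword[:]
received[6] ^= 1                  # inject a single bit error at x^6
print([syndrome(received, j) for j in range(1, 5)])  # [alpha^6, alpha^12, alpha^3, alpha^9]
```

The corrupted word's syndromes are the successive powers α^{6j}, pinpointing the error position 6.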

Calculate the error location polynomial


If there are nonzero syndromes, then there are errors. The decoder needs to figure out how many errors there are and where they are located.

If there is a single error, write this as E(x) = e x^i, where i is the location of the error and e is its magnitude. Then the first two syndromes are

s_c = e α^{c i}
s_{c+1} = e α^{(c+1) i} = α^i s_c

so together they allow us to calculate e and provide some information about i (completely determining it in the case of Reed–Solomon codes).

If there are two or more errors,

E(x) = e1 x^{i1} + e2 x^{i2} + ⋯

It is not immediately obvious how to begin solving the resulting syndromes for the unknowns e_k and i_k.

The first step is to find a locator polynomial compatible with the computed syndromes and with minimal possible t:

Λ(x) = ∏_{j=1}^{t} (x α^{i_j} − 1)

Three popular algorithms for this task are:

  1. Peterson–Gorenstein–Zierler algorithm
  2. Berlekamp–Massey algorithm
  3. Sugiyama Euclidean algorithm

Peterson–Gorenstein–Zierler algorithm


Peterson's algorithm is step 2 of the generalized BCH decoding procedure. It is used to calculate the error locator polynomial coefficients λ1, λ2, …, λ_v of a polynomial

Λ(x) = 1 + λ1 x + λ2 x^2 + ⋯ + λ_v x^v.

The Peterson–Gorenstein–Zierler algorithm proceeds as follows.[8] Suppose we have at least 2t syndromes s_c, …, s_{c+2t−1}. Let v = t.

  1. Start by generating the S_{v×v} matrix with elements that are syndrome values:
     S_{v×v} =
     [ s_c        s_{c+1}    …  s_{c+v−1}  ]
     [ s_{c+1}    s_{c+2}    …  s_{c+v}    ]
     [ ⋮          ⋮              ⋮          ]
     [ s_{c+v−1}  s_{c+v}    …  s_{c+2v−2} ]
  2. Generate a C_{v×1} vector with elements
     C_{v×1} = [ s_{c+v}  s_{c+v+1}  …  s_{c+2v−1} ]^T.
  3. Let Λ denote the unknown polynomial coefficients, which are given by
     Λ_{v×1} = [ λ_v  λ_{v−1}  …  λ1 ]^T.
  4. Form the matrix equation
     S_{v×v} Λ_{v×1} = −C_{v×1}.
  5. If the determinant of S_{v×v} is nonzero, then we can find the inverse of this matrix and solve for the values of the unknown Λ.
  6. If det(S_{v×v}) = 0, then:
     if v = 0, declare an empty error locator polynomial and stop the procedure;
     otherwise, set v ← v − 1 and continue from the beginning of Peterson's decoding with the smaller S_{v×v}.
  7. Once you have the values of Λ, you have the error locator polynomial.
  8. Stop the procedure.

Factor error locator polynomial


Now that you have the Λ(x) polynomial, its roots can be found in the form Λ(x) = (α^{i1} x − 1)(α^{i2} x − 1) ⋯ (α^{i_v} x − 1) by brute force, for example using the Chien search algorithm. The exponential powers of the primitive element α will yield the positions where errors occur in the received word; hence the name 'error locator' polynomial.

The zeros of Λ(x) are α^{−i1}, …, α^{−i_v}.
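The pieces so far (syndromes, the Peterson–Gorenstein–Zierler system for v = 2, and a Chien search) assemble into a toy two-error decoder. This sketch (Python) uses the binary (15, 7), d = 5 code with generator x^8 + x^7 + x^6 + x^4 + 1 from the examples above, GF(16) built on z^4 + z + 1, and inversion via a^14 = a^−1:

```python
def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= 0b10011                    # reduce by z^4 + z + 1
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

apow = lambda e: gf_pow(2, e % 15)          # alpha^e, alpha = z
gf_inv = lambda a: gf_pow(a, 14)            # a^14 = a^-1 in GF(16)

def syndrome(word, j):
    res, aj = 0, apow(j)
    for c in reversed(word):
        res = gf_mul(res, aj) ^ c           # Horner evaluation at alpha^j
    return res

codeword = [1, 0, 0, 0, 1, 0, 1, 1, 1] + [0] * 6   # g(x) itself, a valid word
received = codeword[:]
received[3] ^= 1                            # two bit errors
received[9] ^= 1

s1, s2, s3, s4 = (syndrome(received, j) for j in range(1, 5))

# PGZ for v = 2: [[s1, s2], [s2, s3]] [lam2, lam1]^T = [s3, s4]^T (char 2: minus = plus)
det = gf_mul(s1, s3) ^ gf_mul(s2, s2)
inv = gf_inv(det)
lam2 = gf_mul(gf_mul(s3, s3) ^ gf_mul(s2, s4), inv)
lam1 = gf_mul(gf_mul(s1, s4) ^ gf_mul(s2, s3), inv)

# Chien search: position i is in error iff Lambda(alpha^-i) = 0
errors = [i for i in range(15)
          if (1 ^ gf_mul(lam1, apow(-i)) ^ gf_mul(lam2, apow(-2 * i))) == 0]
print(errors)                               # [3, 9]

for i in errors:                            # binary code: just flip the bits
    received[i] ^= 1
print(received == codeword)                 # True
```

A production decoder would also shrink v when the syndrome matrix is singular, exactly as step 6 of the procedure above describes.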

Calculate error values


Once the error locations are known, the next step is to determine the error values at those locations. The error values are then used to correct the received values at those locations to recover the original codeword.

For the case of binary BCH (with all characters readable) this is trivial: just flip the bits for the received word at these positions, and we have the corrected code word. In the more general case, the error weights e_j can be determined by solving the linear system

s_c = e1 α^{c i1} + e2 α^{c i2} + ⋯
s_{c+1} = e1 α^{(c+1) i1} + e2 α^{(c+1) i2} + ⋯
⋮

Forney algorithm


However, there is a more efficient method known as theForney algorithm.

Let

S(x) = s_c + s_{c+1} x + s_{c+2} x^2 + ⋯ + s_{c+d−2} x^{d−2},
v ≤ d − 1,  λ0 ≠ 0,  Λ(x) = ∑_{i=0}^{v} λ_i x^i = λ0 ∏_{k=1}^{v} (α^{i_k} x − 1).

And the error evaluator polynomial[9]

Ω(x) ≡ S(x) Λ(x) mod x^{d−1}

Finally:

Λ′(x) = ∑_{i=1}^{v} i · λ_i x^{i−1},

where

i · x := ∑_{k=1}^{i} x.

Then, if the syndromes can be explained by an error word that is nonzero only on the positions i_k, the error values are

e_k = −(α^{i_k} Ω(α^{−i_k})) / (α^{c i_k} Λ′(α^{−i_k})).

For narrow-sense BCH codes, c = 1, so the expression simplifies to:

e_k = −Ω(α^{−i_k}) / Λ′(α^{−i_k}).
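To see the Forney formula produce the (necessarily trivial) error values of a binary code, the sketch below (Python, GF(16) as in the earlier examples) plants two errors at known positions 3 and 9 in the (15, 7), d = 5 code, builds Λ(x) = (α^3 x − 1)(α^9 x − 1) directly from those positions, computes Ω(x) = S(x)Λ(x) mod x^4, and evaluates e_k = Ω(α^{−i_k})/Λ′(α^{−i_k}) (c = 1; signs vanish in characteristic 2):

```python
def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= 0b10011                     # GF(16) via z^4 + z + 1
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

apow = lambda e: gf_pow(2, e % 15)
gf_inv = lambda a: gf_pow(a, 14)

def ev(poly, x):                             # Horner evaluation, coeffs low->high
    res = 0
    for c in reversed(poly):
        res = gf_mul(res, x) ^ c
    return res

positions = [3, 9]
received = [1, 0, 0, 0, 1, 0, 1, 1, 1] + [0] * 6   # the valid codeword g(x)
for i in positions:
    received[i] ^= 1                          # plant the errors

s = [ev(received, apow(j)) for j in range(1, 5)]   # syndromes = S(x) coefficients

lam = [1]                                     # Lambda(x) = prod (alpha^i x + 1)
for i in positions:
    a, new = apow(i), [0] * (len(lam) + 1)
    for k, c in enumerate(lam):
        new[k] ^= c                           # the "+1" part of the factor
        new[k + 1] ^= gf_mul(a, c)            # the "alpha^i x" part
    lam = new

omega = [0] * 4                               # Omega = S * Lambda mod x^(d-1)
for ai, sa in enumerate(s):
    for bi, lb in enumerate(lam):
        if ai + bi < 4:
            omega[ai + bi] ^= gf_mul(sa, lb)

# formal derivative: i * lam[i] is lam[i] XORed i times, so even i drop out
dlam = [lam[i] if i % 2 == 1 else 0 for i in range(1, len(lam))]

errvals = [gf_mul(ev(omega, apow(-i)), gf_inv(ev(dlam, apow(-i))))
           for i in positions]
print(errvals)                                # [1, 1]: each binary error has magnitude 1
```

For a Reed–Solomon code over a larger alphabet the same computation would return the actual nonzero error magnitudes rather than all ones.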

Explanation of Forney algorithm computation


It is based on Lagrange interpolation and techniques of generating functions.

Consider S(x)Λ(x), and for the sake of simplicity suppose λ_k = 0 for k > v, and s_k = 0 for k > c + d − 2. Then

S(x)Λ(x) = ∑_{j=0}^{∞} ∑_{i=0}^{j} s_{j−i+1} λ_i x^j.

\begin{aligned}S(x)\Lambda(x)&=S(x)\left\{\lambda_0\prod_{\ell=1}^{v}\left(\alpha^{i_\ell}x-1\right)\right\}\\&=\left\{\sum_{i=0}^{d-2}\sum_{j=1}^{v}e_j\alpha^{(c+i)\cdot i_j}x^i\right\}\left\{\lambda_0\prod_{\ell=1}^{v}\left(\alpha^{i_\ell}x-1\right)\right\}\\&=\left\{\sum_{j=1}^{v}e_j\alpha^{ci_j}\sum_{i=0}^{d-2}\left(\alpha^{i_j}\right)^{i}x^{i}\right\}\left\{\lambda_0\prod_{\ell=1}^{v}\left(\alpha^{i_\ell}x-1\right)\right\}\\&=\left\{\sum_{j=1}^{v}e_j\alpha^{ci_j}{\frac{\left(x\alpha^{i_j}\right)^{d-1}-1}{x\alpha^{i_j}-1}}\right\}\left\{\lambda_0\prod_{\ell=1}^{v}\left(\alpha^{i_\ell}x-1\right)\right\}\\&=\lambda_0\sum_{j=1}^{v}e_j\alpha^{ci_j}{\frac{\left(x\alpha^{i_j}\right)^{d-1}-1}{x\alpha^{i_j}-1}}\prod_{\ell=1}^{v}\left(\alpha^{i_\ell}x-1\right)\\&=\lambda_0\sum_{j=1}^{v}e_j\alpha^{ci_j}\left(\left(x\alpha^{i_j}\right)^{d-1}-1\right)\prod_{\ell\in\{1,\cdots,v\}\setminus\{j\}}\left(\alpha^{i_\ell}x-1\right)\end{aligned}

We want to compute the unknowns e_j, and we can simplify the context by removing the (x α^{i_j})^{d−1} terms. This leads to the error evaluator polynomial

Ω(x) ≡ S(x) Λ(x) mod x^{d−1}.

Thanks to v ≤ d − 1 we have

Ω(x) = −λ0 ∑_{j=1}^{v} e_j α^{c i_j} ∏_{ℓ ∈ {1,…,v} ∖ {j}} (α^{i_ℓ} x − 1).

Thanks to Λ (the Lagrange interpolation trick) the sum degenerates to only one summand for x = α^{−i_k}:

Ω(α^{−i_k}) = −λ0 e_k α^{c i_k} ∏_{ℓ ∈ {1,…,v} ∖ {k}} (α^{i_ℓ} α^{−i_k} − 1).

To get e_k, we just need to get rid of the product. We could compute the product directly from the already-computed roots α^{−i_j} of Λ, but we can use a simpler form.

As the formal derivative is

Λ′(x) = λ0 ∑_{j=1}^{v} α^{i_j} ∏_{ℓ ∈ {1,…,v} ∖ {j}} (α^{i_ℓ} x − 1),

we again get only one summand in

Λ′(α^{−i_k}) = λ0 α^{i_k} ∏_{ℓ ∈ {1,…,v} ∖ {k}} (α^{i_ℓ} α^{−i_k} − 1).

So finally

e_k = −(α^{i_k} Ω(α^{−i_k})) / (α^{c i_k} Λ′(α^{−i_k})).

This formula is advantageous when one computes the formal derivative of Λ from the form

Λ(x) = ∑_{i=0}^{v} λ_i x^i

yielding:

Λ′(x) = ∑_{i=1}^{v} i · λ_i x^{i−1},

where

i · x := ∑_{k=1}^{i} x.

Decoding based on extended Euclidean algorithm


An alternate process for finding both the error locator polynomial Λ and the error evaluator polynomial is based on Yasuo Sugiyama's adaptation of the extended Euclidean algorithm.[10] Correction of unreadable characters can be incorporated into the algorithm easily as well.

Let k1, …, k_k be the positions of unreadable characters. One creates the polynomial localising these positions, Γ(x) = ∏_{i=1}^{k} (x α^{k_i} − 1). Set the values on unreadable positions to 0 and compute the syndromes.

As we have already defined for the Forney formula, let S(x) = ∑_{i=0}^{d−2} s_{c+i} x^i.

Let us run the extended Euclidean algorithm on the polynomials S(x)Γ(x) and x^{d−1}. The goal is not to find their greatest common divisor, but a polynomial r(x) of degree at most ⌊(d + k − 3)/2⌋ and polynomials a(x), b(x) such that r(x) = a(x) S(x) Γ(x) + b(x) x^{d−1}. The low degree of r(x) guarantees that a(x) satisfies the extended (by Γ) defining conditions for Λ.

Defining Ξ(x) = a(x) Γ(x) and using Ξ in place of Λ(x) in the Forney formula will give us the error values.

The main advantage of the algorithm is that it meanwhile computes Ω(x) = S(x) Ξ(x) mod x^{d−1} = r(x), required in the Forney formula.
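With no erasures (Γ(x) = 1, k = 0), the iteration can be sketched as follows (Python, GF(16) as in the earlier examples; polynomials are coefficient lists, low degree first; the syndromes come from a two-error pattern at positions 3 and 9 in the (15, 7), d = 5 code). The loop divides x^{d−1} by S(x) repeatedly, tracking a(x), until deg r ≤ ⌊(d − 3)/2⌋:

```python
def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= 0b10011                      # GF(16) via z^4 + z + 1
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def norm(p):                                  # drop leading zero coefficients
    while p and p[-1] == 0:
        p = p[:-1]
    return p

def padd(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return norm([a ^ b for a, b in zip(p, q)])

def pmul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gf_mul(a, b)
    return norm(r)

def pdivmod(num, den):
    num = norm(num[:])
    q = [0] * max(len(num) - len(den) + 1, 1)
    inv_lead = gf_pow(den[-1], 14)            # inverse of the leading coefficient
    while len(num) >= len(den):
        shift = len(num) - len(den)
        coef = gf_mul(num[-1], inv_lead)
        q[shift] ^= coef
        num = padd(num, pmul([0] * shift + [coef], den))
    return norm(q), num

apow = lambda e: gf_pow(2, e % 15)
# syndromes of two errors at positions 3 and 9: s_j = alpha^(3j) + alpha^(9j)
S = [apow(3 * j) ^ apow(9 * j) for j in range(1, 5)]

d = 5
r_prev, r_cur = [0, 0, 0, 0, 1], S[:]         # x^(d-1) and S(x)
a_prev, a_cur = [0], [1]
while len(norm(r_cur)) - 1 > (d - 3) // 2:    # until deg r <= floor((d-3)/2)
    q, rem = pdivmod(r_prev, r_cur)
    r_prev, r_cur = r_cur, rem
    a_prev, a_cur = a_cur, padd(a_prev, pmul(q, a_cur))

print(a_cur)   # [1, 2, 15]: Lambda(x) = 1 + alpha x + alpha^12 x^2, errors at 3 and 9
print(r_cur)   # [2]: Omega(x), as the Forney formula requires
```

The resulting locator agrees (up to the scalar freedom noted below) with what Peterson–Gorenstein–Zierler would produce from the same syndromes.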

Explanation of the decoding process


The goal is to find a codeword which differs from the received word as little as possible on readable positions. When expressing the received word as a sum of the nearest codeword and an error word, we are trying to find an error word with the minimal number of non-zeros on readable positions. The syndrome s_i restricts the error word by the condition

s_i = ∑_{j=0}^{n−1} e_j α^{ij}.

We could write these conditions separately, or we could create the polynomial

S(x) = ∑_{i=0}^{d−2} s_{c+i} x^i

and compare coefficients near the powers 0 to d − 2:

S(x) = E(x) = ∑_{i=0}^{d−2} ∑_{j=0}^{n−1} e_j α^{ij} α^{cj} x^i   (equality of the coefficients at powers 0, …, d − 2).

Suppose there is an unreadable letter on position k1. Then we can replace the set of syndromes {s_c, …, s_{c+d−2}} by the set of syndromes {t_c, …, t_{c+d−3}} defined by the equation t_i = α^{k1} s_i − s_{i+1}. Suppose that for an error word all restrictions by the original set {s_c, …, s_{c+d−2}} of syndromes hold. Then

t_i = α^{k1} s_i − s_{i+1} = α^{k1} ∑_{j=0}^{n−1} e_j α^{ij} − ∑_{j=0}^{n−1} e_j α^j α^{ij} = ∑_{j=0}^{n−1} e_j (α^{k1} − α^j) α^{ij}.

The new set of syndromes restricts the error vector

f_j = e_j (α^{k1} − α^j)

in the same way the original set of syndromes restricted the error vector e_j, except for the coordinate k1, where we have f_{k1} = 0: an f_j is zero if e_j = 0. For the goal of locating error positions, we can change the set of syndromes in a similar way to reflect all unreadable characters. This shortens the set of syndromes by k.

In polynomial formulation, the replacement of the syndromes set {s_c, …, s_{c+d−2}} by the syndromes set {t_c, …, t_{c+d−3}} leads to

T(x) = ∑_{i=0}^{d−3} t_{c+i} x^i = α^{k1} ∑_{i=0}^{d−3} s_{c+i} x^i − ∑_{i=1}^{d−2} s_{c+i} x^{i−1}.

Therefore,

x T(x) = (x α^{k1} − 1) S(x)   (equality of the coefficients at powers 1, …, d − 2).

After the replacement of S(x) by S(x)Γ(x), one would require the equality to hold for the coefficients at powers k, …, d − 2.

One could consider looking for error positions from the point of view of eliminating influence of given positions similarly as for unreadable characters. If we foundv{\displaystyle v} positions such that eliminating their influence leads to obtaining set of syndromes consisting of all zeros, than there exists error vector with errors only on these coordinates.IfΛ(x){\displaystyle \Lambda (x)} denotes the polynomial eliminating the influence of these coordinates, we obtain

{\displaystyle S(x)\Gamma (x)\Lambda (x){\stackrel {\{k+v,\cdots ,d-2\}}{=}}0.}

In the Euclidean algorithm, we try to correct at most {\displaystyle {\tfrac {1}{2}}(d-1-k)} errors (on readable positions), because with a bigger error count there could be more codewords at the same distance from the received word. Therefore, for the {\displaystyle \Lambda (x)} we are looking for, the equation must hold for coefficients near powers starting from

{\displaystyle k+\left\lfloor {\frac {1}{2}}(d-1-k)\right\rfloor .}

In the Forney formula, {\displaystyle \Lambda (x)} could be multiplied by a scalar, giving the same result.

It could happen that the Euclidean algorithm finds a {\displaystyle \Lambda (x)} of degree higher than {\displaystyle {\tfrac {1}{2}}(d-1-k)} whose number of distinct roots equals its degree; the Forney formula would then be able to correct errors at all its roots, but correcting that many errors could be risky (especially with no other restrictions on the received word). Usually, after getting a {\displaystyle \Lambda (x)} of higher degree, we decide not to correct the errors. Correction could fail when {\displaystyle \Lambda (x)} has roots with higher multiplicity, or when its number of roots is smaller than its degree. Failure can also be detected by the Forney formula returning an error value outside the transmitted alphabet.

Correct the errors


Using the error values and error locations, correct the errors and form a corrected code vector by subtracting error values at error locations.
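In GF(2) subtraction is the same as addition (bitwise XOR), so once the error locations are known the correction step is just a bit flip at each location. A minimal sketch using the binary example worked below, where the word is listed highest power first so that index i holds the coefficient of x^(14−i); `correct_binary` is an illustrative name:

```python
def correct_binary(received, error_positions):
    """Flip the bit at each error location; in GF(2) the only
    nonzero error value is 1, so subtraction is a bit flip."""
    word = list(received)
    for pos in error_positions:
        word[pos] ^= 1
    return word

# Received word with errors at x^13 and x^5 (list indices 1 and 9):
received  = [1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0]
corrected = correct_binary(received, [1, 9])
```

For a non-binary BCH code, the XOR would be replaced by subtraction of the error value in GF(q).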

Decoding examples


Decoding of binary code without unreadable characters


Consider a BCH code in GF(2^4) with {\displaystyle d=7} and {\displaystyle g(x)=x^{10}+x^{8}+x^{5}+x^{4}+x^{2}+x+1}. (This is used in QR codes.) Let the message to be transmitted be [1 1 0 1 1], or in polynomial notation, {\displaystyle M(x)=x^{4}+x^{3}+x+1.} The "checksum" symbols are calculated by dividing {\displaystyle x^{10}M(x)} by {\displaystyle g(x)} and taking the remainder, resulting in {\displaystyle x^{9}+x^{4}+x^{2}} or [ 1 0 0 0 0 1 0 1 0 0 ]. These are appended to the message, so the transmitted codeword is [ 1 1 0 1 1 1 0 0 0 0 1 0 1 0 0 ].
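The checksum computation is ordinary polynomial division over GF(2), which can be done with shifts and XORs when polynomials are stored as bit masks (bit i holding the coefficient of x^i). A sketch; `bch_remainder` is an illustrative name:

```python
def bch_remainder(data, gen, shift):
    """Remainder of data(x) * x^shift modulo gen(x) over GF(2);
    polynomials are integer bit masks (bit i = coefficient of x^i)."""
    r = data << shift
    top = gen.bit_length() - 1                # degree of the generator
    for i in range(r.bit_length() - 1, top - 1, -1):
        if r & (1 << i):                      # cancel the leading term
            r ^= gen << (i - top)
    return r

M = 0b11011                                   # M(x) = x^4 + x^3 + x + 1
g = 0b10100110111                             # g(x) = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1
rem = bch_remainder(M, g, 10)                 # checksum: x^10 M(x) mod g(x)
```

The remainder 0b1000010100 is x^9 + x^4 + x^2, matching the checksum above, and the assembled codeword `(M << 10) ^ rem` divides evenly by g(x).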

Now, imagine that there are two bit-errors in the transmission, so the received codeword is [ 1 0 0 1 1 1 0 0 0 1 1 0 1 0 0 ]. In polynomial notation:

{\displaystyle R(x)=C(x)+x^{13}+x^{5}=x^{14}+x^{11}+x^{10}+x^{9}+x^{5}+x^{4}+x^{2}}

In order to correct the errors, first calculate the syndromes. Taking {\displaystyle \alpha =0010,} we have {\displaystyle s_{1}=R(\alpha ^{1})=1011,} {\displaystyle s_{2}=1001,} {\displaystyle s_{3}=1011,} {\displaystyle s_{4}=1101,} {\displaystyle s_{5}=0001,} and {\displaystyle s_{6}=1001.} Next, apply the Peterson procedure by row-reducing the following augmented matrix.

{\displaystyle \left[S_{3\times 3}|C_{3\times 1}\right]={\begin{bmatrix}s_{1}&s_{2}&s_{3}&s_{4}\\s_{2}&s_{3}&s_{4}&s_{5}\\s_{3}&s_{4}&s_{5}&s_{6}\end{bmatrix}}={\begin{bmatrix}1011&1001&1011&1101\\1001&1011&1101&0001\\1011&1101&0001&1001\end{bmatrix}}\Rightarrow {\begin{bmatrix}0001&0000&1000&0111\\0000&0001&1011&0001\\0000&0000&0000&0000\end{bmatrix}}}

Due to the zero row, {\displaystyle S_{3\times 3}} is singular, which is no surprise, since only two errors were introduced into the codeword. However, the upper-left corner of the matrix is identical to {\displaystyle [S_{2\times 2}|C_{2\times 1}]}, which gives rise to the solution {\displaystyle \lambda _{2}=1000,} {\displaystyle \lambda _{1}=1011.} The resulting error locator polynomial is {\displaystyle \Lambda (x)=1000x^{2}+1011x+0001,} which has zeros at {\displaystyle 0100=\alpha ^{-13}} and {\displaystyle 0111=\alpha ^{-5}.} The exponents of {\displaystyle \alpha } correspond to the error locations. There is no need to calculate the error values in this example, as the only possible value is 1.
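The syndromes and the roots of the error locator above can be reproduced with a few small GF(2^4) routines. A sketch, assuming the field representation with α = 0010 and α^4 = α + 1 (consistent with the power table given in the next example); the helper names are illustrative:

```python
# GF(2^4) antilog table for the representation with alpha = 0010, alpha^4 = alpha + 1
EXP, x = [], 1
for _ in range(15):
    EXP.append(x)
    x <<= 1
    if x & 0x10:
        x ^= 0b10011                 # reduce modulo x^4 + x + 1
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 15]

def syndrome(exponents, i):
    """s_i = R(alpha^i), where `exponents` lists the powers present in R(x)."""
    s = 0
    for j in exponents:
        s ^= EXP[(i * j) % 15]
    return s

def poly_eval(coeffs, x):
    """Horner evaluation; coefficients are listed lowest degree first."""
    acc = 0
    for c in reversed(coeffs):
        acc = gf_mul(acc, x) ^ c
    return acc

R = [14, 11, 10, 9, 5, 4, 2]         # R(x) = x^14 + x^11 + x^10 + x^9 + x^5 + x^4 + x^2
synd = [syndrome(R, i) for i in range(1, 7)]

lam = [0b0001, 0b1011, 0b1000]       # Lambda(x) = 1000 x^2 + 1011 x + 0001
roots = sorted(v for v in EXP if poly_eval(lam, v) == 0)
positions = sorted((-LOG[r]) % 15 for r in roots)   # root alpha^k marks an error at x^(-k)
```

The brute-force search over all fifteen nonzero field elements recovers the roots α^2 and α^10, i.e. the error positions 13 and 5.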

Decoding with unreadable characters


Suppose the same scenario, but the received word has two unreadable characters: [ 1 0 0 ? 1 1 ? 0 0 1 1 0 1 0 0 ]. We replace the unreadable characters by zeros while creating the polynomial reflecting their positions, {\displaystyle \Gamma (x)=\left(\alpha ^{8}x-1\right)\left(\alpha ^{11}x-1\right).} We compute the syndromes {\displaystyle s_{1}=\alpha ^{-7},s_{2}=\alpha ^{1},s_{3}=\alpha ^{4},s_{4}=\alpha ^{2},s_{5}=\alpha ^{5},} and {\displaystyle s_{6}=\alpha ^{-7}.} (We use log notation, which is independent of GF(2^4) isomorphisms. For checking the computation, we can use the same representation for addition as was used in the previous example. The hexadecimal descriptions of the powers of {\displaystyle \alpha } are consecutively 1,2,4,8,3,6,C,B,5,A,7,E,F,D,9, with addition based on bitwise xor.)

Let us make the syndrome polynomial

{\displaystyle S(x)=\alpha ^{-7}+\alpha ^{1}x+\alpha ^{4}x^{2}+\alpha ^{2}x^{3}+\alpha ^{5}x^{4}+\alpha ^{-7}x^{5},}

compute

{\displaystyle S(x)\Gamma (x)=\alpha ^{-7}+\alpha ^{4}x+\alpha ^{-1}x^{2}+\alpha ^{6}x^{3}+\alpha ^{-1}x^{4}+\alpha ^{5}x^{5}+\alpha ^{7}x^{6}+\alpha ^{-3}x^{7}.}

Run the extended Euclidean algorithm:

{\displaystyle {\begin{aligned}&{\begin{pmatrix}S(x)\Gamma (x)\\x^{6}\end{pmatrix}}\\[6pt]={}&{\begin{pmatrix}\alpha ^{-7}+\alpha ^{4}x+\alpha ^{-1}x^{2}+\alpha ^{6}x^{3}+\alpha ^{-1}x^{4}+\alpha ^{5}x^{5}+\alpha ^{7}x^{6}+\alpha ^{-3}x^{7}\\x^{6}\end{pmatrix}}\\[6pt]={}&{\begin{pmatrix}\alpha ^{7}+\alpha ^{-3}x&1\\1&0\end{pmatrix}}{\begin{pmatrix}x^{6}\\\alpha ^{-7}+\alpha ^{4}x+\alpha ^{-1}x^{2}+\alpha ^{6}x^{3}+\alpha ^{-1}x^{4}+\alpha ^{5}x^{5}+2\alpha ^{7}x^{6}+2\alpha ^{-3}x^{7}\end{pmatrix}}\\[6pt]={}&{\begin{pmatrix}\alpha ^{7}+\alpha ^{-3}x&1\\1&0\end{pmatrix}}{\begin{pmatrix}\alpha ^{4}+\alpha ^{-5}x&1\\1&0\end{pmatrix}}\\&\qquad {\begin{pmatrix}\alpha ^{-7}+\alpha ^{4}x+\alpha ^{-1}x^{2}+\alpha ^{6}x^{3}+\alpha ^{-1}x^{4}+\alpha ^{5}x^{5}\\\alpha ^{-3}+\left(\alpha ^{-7}+\alpha ^{3}\right)x+\left(\alpha ^{3}+\alpha ^{-1}\right)x^{2}+\left(\alpha ^{-5}+\alpha ^{-6}\right)x^{3}+\left(\alpha ^{3}+\alpha ^{1}\right)x^{4}+2\alpha ^{-6}x^{5}+2x^{6}\end{pmatrix}}\\[6pt]={}&{\begin{pmatrix}\left(1+\alpha ^{-4}\right)+\left(\alpha ^{1}+\alpha ^{2}\right)x+\alpha ^{7}x^{2}&\alpha ^{7}+\alpha ^{-3}x\\\alpha ^{4}+\alpha ^{-5}x&1\end{pmatrix}}{\begin{pmatrix}\alpha ^{-7}+\alpha ^{4}x+\alpha ^{-1}x^{2}+\alpha ^{6}x^{3}+\alpha ^{-1}x^{4}+\alpha ^{5}x^{5}\\\alpha ^{-3}+\alpha ^{-2}x+\alpha ^{0}x^{2}+\alpha ^{-2}x^{3}+\alpha ^{-6}x^{4}\end{pmatrix}}\\[6pt]={}&{\begin{pmatrix}\alpha ^{-3}+\alpha ^{5}x+\alpha ^{7}x^{2}&\alpha ^{7}+\alpha ^{-3}x\\\alpha ^{4}+\alpha ^{-5}x&1\end{pmatrix}}{\begin{pmatrix}\alpha ^{-5}+\alpha ^{-4}x&1\\1&0\end{pmatrix}}\\&\qquad {\begin{pmatrix}\alpha ^{-3}+\alpha ^{-2}x+\alpha ^{0}x^{2}+\alpha ^{-2}x^{3}+\alpha ^{-6}x^{4}\\\left(\alpha ^{7}+\alpha ^{-7}\right)+\left(2\alpha ^{-7}+\alpha ^{4}\right)x+\left(\alpha ^{-5}+\alpha ^{-6}+\alpha ^{-1}\right)x^{2}+\left(\alpha ^{-7}+\alpha ^{-4}+\alpha ^{6}\right)x^{3}+\left(\alpha ^{4}+\alpha ^{-6}+\alpha ^{-1}\right)x^{4}+2\alpha ^{5}x^{5}\end{pmatrix}}\\[6pt]={}&{\begin{pmatrix}\alpha ^{7}x+\alpha ^{5}x^{2}+\alpha ^{3}x^{3}&\alpha ^{-3}+\alpha ^{5}x+\alpha ^{7}x^{2}\\\alpha ^{3}+\alpha ^{-5}x+\alpha ^{6}x^{2}&\alpha ^{4}+\alpha ^{-5}x\end{pmatrix}}{\begin{pmatrix}\alpha ^{-3}+\alpha ^{-2}x+\alpha ^{0}x^{2}+\alpha ^{-2}x^{3}+\alpha ^{-6}x^{4}\\\alpha ^{-4}+\alpha ^{4}x+\alpha ^{2}x^{2}+\alpha ^{-5}x^{3}\end{pmatrix}}.\end{aligned}}}

We have reached a polynomial of degree at most 3, and as

{\displaystyle {\begin{pmatrix}-\left(\alpha ^{4}+\alpha ^{-5}x\right)&\alpha ^{-3}+\alpha ^{5}x+\alpha ^{7}x^{2}\\\alpha ^{3}+\alpha ^{-5}x+\alpha ^{6}x^{2}&-\left(\alpha ^{7}x+\alpha ^{5}x^{2}+\alpha ^{3}x^{3}\right)\end{pmatrix}}{\begin{pmatrix}\alpha ^{7}x+\alpha ^{5}x^{2}+\alpha ^{3}x^{3}&\alpha ^{-3}+\alpha ^{5}x+\alpha ^{7}x^{2}\\\alpha ^{3}+\alpha ^{-5}x+\alpha ^{6}x^{2}&\alpha ^{4}+\alpha ^{-5}x\end{pmatrix}}={\begin{pmatrix}1&0\\0&1\end{pmatrix}},}

we get

{\displaystyle {\begin{pmatrix}-\left(\alpha ^{4}+\alpha ^{-5}x\right)&\alpha ^{-3}+\alpha ^{5}x+\alpha ^{7}x^{2}\\\alpha ^{3}+\alpha ^{-5}x+\alpha ^{6}x^{2}&-\left(\alpha ^{7}x+\alpha ^{5}x^{2}+\alpha ^{3}x^{3}\right)\end{pmatrix}}{\begin{pmatrix}S(x)\Gamma (x)\\x^{6}\end{pmatrix}}={\begin{pmatrix}\alpha ^{-3}+\alpha ^{-2}x+\alpha ^{0}x^{2}+\alpha ^{-2}x^{3}+\alpha ^{-6}x^{4}\\\alpha ^{-4}+\alpha ^{4}x+\alpha ^{2}x^{2}+\alpha ^{-5}x^{3}\end{pmatrix}}.}

Therefore,

{\displaystyle S(x)\Gamma (x)\left(\alpha ^{3}+\alpha ^{-5}x+\alpha ^{6}x^{2}\right)-\left(\alpha ^{7}x+\alpha ^{5}x^{2}+\alpha ^{3}x^{3}\right)x^{6}=\alpha ^{-4}+\alpha ^{4}x+\alpha ^{2}x^{2}+\alpha ^{-5}x^{3}.}

Let {\displaystyle \Lambda (x)=\alpha ^{3}+\alpha ^{-5}x+\alpha ^{6}x^{2}.} Don't worry that {\displaystyle \lambda _{0}\neq 1.} Find by brute force a root of {\displaystyle \Lambda .} The roots are {\displaystyle \alpha ^{2}} and {\displaystyle \alpha ^{10}} (after finding, for example, {\displaystyle \alpha ^{2}}, we can divide {\displaystyle \Lambda } by the corresponding monomial {\displaystyle \left(x-\alpha ^{2}\right)}, and the root of the resulting monomial can be found easily).
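The chain of matrix identities above is the extended Euclidean algorithm run on the pair (S(x)Γ(x), x^6), keeping track of the multiplier of S(x)Γ(x), which becomes Λ(x). A minimal sketch, assuming the GF(2^4) representation with α = 0010 and α^4 = α + 1 used in these examples; helper names like `sugiyama` are illustrative:

```python
# GF(2^4) helpers (alpha = 0010, alpha^4 = alpha + 1); polynomials are lists
# of field elements, lowest degree first.
EXP, x = [], 1
for _ in range(15):
    EXP.append(x)
    x <<= 1
    if x & 0x10:
        x ^= 0b10011
LOG = {v: i for i, v in enumerate(EXP)}
A = lambda k: EXP[k % 15]                      # alpha^k, exponent taken mod 15

def gmul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 15]

def deg(p):
    return max((i for i, c in enumerate(p) if c), default=-1)

def padd(p, q):                                # addition = XOR in characteristic 2
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a ^ b for a, b in zip(p, q)]

def pmul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] ^= gmul(a, b)
    return out

def pdivmod(a, b):
    q, r = [0] * len(a), list(a)
    db, inv = deg(b), A(-LOG[b[deg(b)]])       # inverse of b's leading coefficient
    while deg(r) >= db:
        s = deg(r) - db
        c = gmul(r[deg(r)], inv)
        q[s] ^= c
        r = padd(r, pmul([0] * s + [c], b))
    return q, r

def sugiyama(sg, stop_deg):
    """Extended Euclid on (S*Gamma, x^6), tracking the multiplier of S*Gamma."""
    r0, r1 = sg, [0] * 6 + [1]                 # the pair (S*Gamma, x^6)
    u0, u1 = [1], [0]
    while deg(r1) > stop_deg:
        quo, rem = pdivmod(r0, r1)
        r0, r1 = r1, rem
        u0, u1 = u1, padd(u0, pmul(quo, u1))
    return u1, r1                              # (Lambda, Omega)

def peval(p, x):
    acc = 0
    for c in reversed(p):
        acc = gmul(acc, x) ^ c
    return acc

# S(x)Gamma(x) from above, one exponent of alpha per coefficient
sg = [A(-7), A(4), A(-1), A(6), A(-1), A(5), A(7), A(-3)]
lam, omega = sugiyama(sg, 3)                   # stop once the remainder has degree <= 3
```

Up to a scalar, `lam` matches the Λ(x) = α³ + α⁻⁵x + α⁶x² found above, and its roots α² and α¹⁰ locate the errors.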

Let

{\displaystyle {\begin{aligned}\Xi (x)&=\Gamma (x)\Lambda (x)=\alpha ^{3}+\alpha ^{4}x^{2}+\alpha ^{2}x^{3}+\alpha ^{-5}x^{4}\\\Omega (x)&=S(x)\Xi (x)\equiv \alpha ^{-4}+\alpha ^{4}x+\alpha ^{2}x^{2}+\alpha ^{-5}x^{3}{\bmod {x^{6}}}\end{aligned}}}

Let us look for error values using the formula

{\displaystyle e_{j}=-{\frac {\Omega \left(\alpha ^{-i_{j}}\right)}{\Xi '\left(\alpha ^{-i_{j}}\right)}},}

where {\displaystyle \alpha ^{-i_{j}}} are the roots of {\displaystyle \Xi (x).} Here {\displaystyle \Xi '(x)=\alpha ^{2}x^{2}.} We get

{\displaystyle {\begin{aligned}e_{1}&=-{\frac {\Omega (\alpha ^{4})}{\Xi '(\alpha ^{4})}}={\frac {\alpha ^{-4}+\alpha ^{-7}+\alpha ^{-5}+\alpha ^{7}}{\alpha ^{-5}}}={\frac {\alpha ^{-5}}{\alpha ^{-5}}}=1\\e_{2}&=-{\frac {\Omega (\alpha ^{7})}{\Xi '(\alpha ^{7})}}={\frac {\alpha ^{-4}+\alpha ^{-4}+\alpha ^{1}+\alpha ^{1}}{\alpha ^{1}}}=0\\e_{3}&=-{\frac {\Omega (\alpha ^{10})}{\Xi '(\alpha ^{10})}}={\frac {\alpha ^{-4}+\alpha ^{-1}+\alpha ^{7}+\alpha ^{-5}}{\alpha ^{7}}}={\frac {\alpha ^{7}}{\alpha ^{7}}}=1\\e_{4}&=-{\frac {\Omega (\alpha ^{2})}{\Xi '(\alpha ^{2})}}={\frac {\alpha ^{-4}+\alpha ^{6}+\alpha ^{6}+\alpha ^{1}}{\alpha ^{6}}}={\frac {\alpha ^{6}}{\alpha ^{6}}}=1\end{aligned}}}

The fact that {\displaystyle e_{3}=e_{4}=1} should not be surprising.

The corrected code is therefore [ 1 1 0 1 1 1 0 0 0 0 1 0 1 0 0 ].
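The four Forney evaluations above can be checked mechanically. A small sketch, assuming the same GF(2^4) representation (α = 0010, α^4 = α + 1); in characteristic 2 the minus sign in the Forney formula has no effect:

```python
# GF(2^4) tables (alpha = 0010, alpha^4 = alpha + 1)
EXP, x = [], 1
for _ in range(15):
    EXP.append(x)
    x <<= 1
    if x & 0x10:
        x ^= 0b10011
LOG = {v: i for i, v in enumerate(EXP)}
A = lambda k: EXP[k % 15]                     # alpha^k

def gmul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 15]

def gdiv(a, b):
    return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 15]

def peval(p, x):
    acc = 0
    for c in reversed(p):                     # Horner, lowest degree first
        acc = gmul(acc, x) ^ c
    return acc

omega  = [A(-4), A(4), A(2), A(-5)]           # Omega(x)
dxi    = [0, 0, A(2)]                         # Xi'(x) = alpha^2 x^2
points = [A(4), A(7), A(10), A(2)]            # the four roots of Xi(x)
errors = [gdiv(peval(omega, p), peval(dxi, p)) for p in points]
```

The computed error values 1, 0, 1, 1 match the hand calculation above.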

Decoding with unreadable characters with a small number of errors


Let us show the algorithm behaviour for a case with a small number of errors. Let the received word be [ 1 0 0 ? 1 1 ? 0 0 0 1 0 1 0 0 ].

Again, replace the unreadable characters by zeros while creating the polynomial reflecting their positions, {\displaystyle \Gamma (x)=\left(\alpha ^{8}x-1\right)\left(\alpha ^{11}x-1\right).} Compute the syndromes {\displaystyle s_{1}=\alpha ^{4},s_{2}=\alpha ^{-7},s_{3}=\alpha ^{1},s_{4}=\alpha ^{1},s_{5}=\alpha ^{0},} and {\displaystyle s_{6}=\alpha ^{2}.} Create the syndrome polynomial

{\displaystyle {\begin{aligned}S(x)&=\alpha ^{4}+\alpha ^{-7}x+\alpha ^{1}x^{2}+\alpha ^{1}x^{3}+\alpha ^{0}x^{4}+\alpha ^{2}x^{5},\\S(x)\Gamma (x)&=\alpha ^{4}+\alpha ^{7}x+\alpha ^{5}x^{2}+\alpha ^{3}x^{3}+\alpha ^{1}x^{4}+\alpha ^{-1}x^{5}+\alpha ^{-1}x^{6}+\alpha ^{6}x^{7}.\end{aligned}}}

Let us run the extended Euclidean algorithm:

{\displaystyle {\begin{aligned}{\begin{pmatrix}S(x)\Gamma (x)\\x^{6}\end{pmatrix}}&={\begin{pmatrix}\alpha ^{4}+\alpha ^{7}x+\alpha ^{5}x^{2}+\alpha ^{3}x^{3}+\alpha ^{1}x^{4}+\alpha ^{-1}x^{5}+\alpha ^{-1}x^{6}+\alpha ^{6}x^{7}\\x^{6}\end{pmatrix}}\\&={\begin{pmatrix}\alpha ^{-1}+\alpha ^{6}x&1\\1&0\end{pmatrix}}{\begin{pmatrix}x^{6}\\\alpha ^{4}+\alpha ^{7}x+\alpha ^{5}x^{2}+\alpha ^{3}x^{3}+\alpha ^{1}x^{4}+\alpha ^{-1}x^{5}+2\alpha ^{-1}x^{6}+2\alpha ^{6}x^{7}\end{pmatrix}}\\&={\begin{pmatrix}\alpha ^{-1}+\alpha ^{6}x&1\\1&0\end{pmatrix}}{\begin{pmatrix}\alpha ^{3}+\alpha ^{1}x&1\\1&0\end{pmatrix}}{\begin{pmatrix}\alpha ^{4}+\alpha ^{7}x+\alpha ^{5}x^{2}+\alpha ^{3}x^{3}+\alpha ^{1}x^{4}+\alpha ^{-1}x^{5}\\\alpha ^{7}+\left(\alpha ^{-5}+\alpha ^{5}\right)x+2\alpha ^{-7}x^{2}+2\alpha ^{6}x^{3}+2\alpha ^{4}x^{4}+2\alpha ^{2}x^{5}+2x^{6}\end{pmatrix}}\\&={\begin{pmatrix}\left(1+\alpha ^{2}\right)+\left(\alpha ^{0}+\alpha ^{-6}\right)x+\alpha ^{7}x^{2}&\alpha ^{-1}+\alpha ^{6}x\\\alpha ^{3}+\alpha ^{1}x&1\end{pmatrix}}{\begin{pmatrix}\alpha ^{4}+\alpha ^{7}x+\alpha ^{5}x^{2}+\alpha ^{3}x^{3}+\alpha ^{1}x^{4}+\alpha ^{-1}x^{5}\\\alpha ^{7}+\alpha ^{0}x\end{pmatrix}}\end{aligned}}}

We have reached a polynomial of degree at most 3, and as

{\displaystyle {\begin{pmatrix}-1&\alpha ^{-1}+\alpha ^{6}x\\\alpha ^{3}+\alpha ^{1}x&-\left(\alpha ^{-7}+\alpha ^{7}x+\alpha ^{7}x^{2}\right)\end{pmatrix}}{\begin{pmatrix}\alpha ^{-7}+\alpha ^{7}x+\alpha ^{7}x^{2}&\alpha ^{-1}+\alpha ^{6}x\\\alpha ^{3}+\alpha ^{1}x&1\end{pmatrix}}={\begin{pmatrix}1&0\\0&1\end{pmatrix}},}

we get

{\displaystyle {\begin{pmatrix}-1&\alpha ^{-1}+\alpha ^{6}x\\\alpha ^{3}+\alpha ^{1}x&-\left(\alpha ^{-7}+\alpha ^{7}x+\alpha ^{7}x^{2}\right)\end{pmatrix}}{\begin{pmatrix}S(x)\Gamma (x)\\x^{6}\end{pmatrix}}={\begin{pmatrix}\alpha ^{4}+\alpha ^{7}x+\alpha ^{5}x^{2}+\alpha ^{3}x^{3}+\alpha ^{1}x^{4}+\alpha ^{-1}x^{5}\\\alpha ^{7}+\alpha ^{0}x\end{pmatrix}}.}

Therefore,

{\displaystyle S(x)\Gamma (x)\left(\alpha ^{3}+\alpha ^{1}x\right)-\left(\alpha ^{-7}+\alpha ^{7}x+\alpha ^{7}x^{2}\right)x^{6}=\alpha ^{7}+\alpha ^{0}x.}

Let {\displaystyle \Lambda (x)=\alpha ^{3}+\alpha ^{1}x.} Don't worry that {\displaystyle \lambda _{0}\neq 1.} The root of {\displaystyle \Lambda (x)} is {\displaystyle \alpha ^{3-1}.}

Let

{\displaystyle {\begin{aligned}\Xi (x)&=\Gamma (x)\Lambda (x)=\alpha ^{3}+\alpha ^{-7}x+\alpha ^{-4}x^{2}+\alpha ^{5}x^{3},\\\Omega (x)&=S(x)\Xi (x)\equiv \alpha ^{7}+\alpha ^{0}x{\bmod {x^{6}}}\end{aligned}}}

Let us look for error values using the formula {\displaystyle e_{j}=-\Omega \left(\alpha ^{-i_{j}}\right)/\Xi '\left(\alpha ^{-i_{j}}\right),} where {\displaystyle \alpha ^{-i_{j}}} are the roots of the polynomial {\displaystyle \Xi (x).}

{\displaystyle \Xi '(x)=\alpha ^{-7}+\alpha ^{5}x^{2}.}

We get

{\displaystyle {\begin{aligned}e_{1}&=-{\frac {\Omega \left(\alpha ^{4}\right)}{\Xi '\left(\alpha ^{4}\right)}}={\frac {\alpha ^{7}+\alpha ^{4}}{\alpha ^{-7}+\alpha ^{-2}}}={\frac {\alpha ^{3}}{\alpha ^{3}}}=1\\e_{2}&=-{\frac {\Omega \left(\alpha ^{7}\right)}{\Xi '\left(\alpha ^{7}\right)}}={\frac {\alpha ^{7}+\alpha ^{7}}{\alpha ^{-7}+\alpha ^{4}}}=0\\e_{3}&=-{\frac {\Omega \left(\alpha ^{2}\right)}{\Xi '\left(\alpha ^{2}\right)}}={\frac {\alpha ^{7}+\alpha ^{2}}{\alpha ^{-7}+\alpha ^{-6}}}={\frac {\alpha ^{-3}}{\alpha ^{-3}}}=1\end{aligned}}}

The fact that {\displaystyle e_{3}=1} should not be surprising.

The corrected code is therefore [ 1 1 0 1 1 1 0 0 0 0 1 0 1 0 0 ].

Citations

  1. ^ Reed & Chen 1999, p. 189
  2. ^ Hocquenghem 1959
  3. ^ Bose & Ray-Chaudhuri 1960
  4. ^ "Phobos Lander Coding System: Software and Analysis" (PDF). Archived (PDF) from the original on 2022-10-09. Retrieved 25 February 2012.
  5. ^ Marelli, Alessia; Micheloni, Rino (2018). "BCH Codes for Solid-State-Drives". Inside Solid State Drives (SSDs). Springer Series in Advanced Microelectronics. Vol. 37. pp. 369–406. doi:10.1007/978-981-13-0599-3_11. ISBN 978-981-13-0598-6. Retrieved 23 September 2023.
  6. ^ Gill n.d., p. 3
  7. ^ Lidl & Pilz 1999, p. 229
  8. ^ Gorenstein, Peterson & Zierler 1960
  9. ^ Gill n.d., p. 47
  10. ^ Sugiyama, Yasuo; Kasahara, Masao; Hirasawa, Shigeichi; Namekawa, Toshihiko (1975). "A method for solving key equation for decoding Goppa codes". Information and Control. 27: 87–99.
