
The Berlekamp–Massey algorithm is an algorithm that will find the shortest linear-feedback shift register (LFSR) for a given binary output sequence. The algorithm will also find the minimal polynomial of a linearly recurrent sequence in an arbitrary field. The field requirement means that the Berlekamp–Massey algorithm requires all non-zero elements to have a multiplicative inverse.[1] Reeds and Sloane offer an extension to handle a ring.[2]
Elwyn Berlekamp invented an algorithm for decoding Bose–Chaudhuri–Hocquenghem (BCH) codes.[3][4] James Massey recognized its application to linear feedback shift registers and simplified the algorithm.[5][6] Massey termed the algorithm the LFSR Synthesis Algorithm (Berlekamp Iterative Algorithm),[7] but it is now known as the Berlekamp–Massey algorithm.
The Berlekamp–Massey algorithm is an alternative to the Reed–Solomon Peterson decoder for solving the set of linear equations. It can be summarized as finding the coefficients Λ_j of a polynomial Λ(x) so that for all positions i in an input stream S:

S_{i+ν} + Λ_1 S_{i+ν−1} + ⋯ + Λ_{ν−1} S_{i+1} + Λ_ν S_i = 0.
In the code examples below, C(x) is a potential instance of Λ(x). The error locator polynomial C(x) for L errors is defined as:

C(x) = c_L x^L + c_{L−1} x^{L−1} + ⋯ + c_2 x^2 + c_1 x + 1

or reversed:

C(x) = 1 + c_1 x + c_2 x^2 + ⋯ + c_{L−1} x^{L−1} + c_L x^L.

The goal of the algorithm is to determine the minimal degree L and C(x) which results in all syndromes

S_n + c_1 S_{n−1} + ⋯ + c_L S_{n−L}

being equal to 0:

S_n + c_1 S_{n−1} + ⋯ + c_L S_{n−L} = 0,   for L ≤ n ≤ N − 1.
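As a concrete illustration of this goal condition (a sketch not taken from the source; the helper name `annihilates` and the GF(2) test values are assumptions for illustration), the check can be written in a few lines of Python:

```python
def annihilates(s, c, L):
    """Check that S_n + c_1*S_{n-1} + ... + c_L*S_{n-L} == 0 over GF(2)
    for every n from L to N-1.  `s` is the syndrome/bit sequence and
    `c` holds the coefficients [c_1, ..., c_L]."""
    N = len(s)
    for n in range(L, N):
        d = s[n]
        for i in range(1, L + 1):
            d ^= c[i - 1] & s[n - i]   # addition in GF(2) is XOR
        if d != 0:
            return False
    return True

# The sequence 1,1,0,1,1,0,... obeys s_n = s_{n-1} + s_{n-2} (mod 2),
# so C(x) = 1 + x + x^2 (c = [1, 1], L = 2) annihilates it.
```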
Algorithm: C(x) is initialized to 1. L is the current number of assumed errors, initialized to zero. N is the total number of syndromes. n is used as the main iterator and to index the syndromes from 0 to N − 1. B(x) is a copy of the last C(x) since L was updated, initialized to 1. b is a copy of the last discrepancy d (explained below) since L was updated, initialized to 1. m is the number of iterations since L, B(x), and b were updated, initialized to 1.
Each iteration of the algorithm calculates a discrepancy d. At iteration k this would be:

d = S_k + c_1 S_{k−1} + ⋯ + c_L S_{k−L}.
If d is zero, the algorithm assumes that C(x) and L are correct for the moment, increments m, and continues.
If d is not zero, the algorithm adjusts C(x) so that a recalculation of d would be zero:

C(x) = C(x) − (d / b) x^m B(x).
The x^m term shifts B(x) so it follows the syndromes corresponding to b. If the previous update of L occurred on iteration j, then m = k − j, and a recalculated discrepancy would be:

d = S_k + c_1 S_{k−1} + ⋯ − (d / b) (S_j + b_1 S_{j−1} + ⋯).
This would change a recalculated discrepancy to:

d = d − (d / b) b = d − d = 0.
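Over GF(2) the adjustment is especially simple: d and b can only be 1 when an update happens, so C(x) ← C(x) − (d/b) x^m B(x) reduces to XOR-ing in a shifted copy of B(x). A minimal sketch (the bitmask representation, bit i = coefficient of x^i, is a choice made here for illustration):

```python
def adjust(C, B, m):
    """C(x) <- C(x) + x^m * B(x) over GF(2): XOR the coefficient mask
    of B(x), shifted up by m positions, into the mask of C(x)."""
    return C ^ (B << m)

# Example: C(x) = 1 + x^2 (0b101), B(x) = 1 (0b1), m = 2
# -> the x^2 terms cancel, leaving C(x) = 1.
```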
The algorithm also needs to increase L (number of errors) as needed. If L equals the actual number of errors, then during the iteration process, the discrepancies will become zero before n becomes greater than or equal to 2L. Otherwise L is updated and the algorithm will update B(x), b, increase L, and reset m = 1. The formula L = (n + 1 − L) limits L to the number of available syndromes used to calculate discrepancies, and also handles the case where L increases by more than 1.
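The steps above can be put together for the binary case. The following is a sketch over GF(2) (function name and list-of-bits representation are choices made here, not the article's reference code; b is omitted because every non-zero discrepancy in GF(2) is 1):

```python
def berlekamp_massey_gf2(s):
    """Return (L, C) where C = [c_0=1, c_1, ..., c_L] describes the
    shortest LFSR generating the bit sequence s."""
    N = len(s)
    C = [1] + [0] * N        # connection polynomial, c_0 = 1
    B = [1] + [0] * N        # copy of C from the last time L changed
    L, m = 0, 1
    for n in range(N):
        # discrepancy d = s_n + sum_{i=1..L} c_i * s_{n-i}  (mod 2)
        d = s[n]
        for i in range(1, L + 1):
            d ^= C[i] & s[n - i]
        if d == 0:
            m += 1
        elif 2 * L <= n:
            T = C[:]                     # save C before modifying it
            for j in range(N + 1 - m):
                C[j + m] ^= B[j]         # C(x) -= x^m B(x); d = b = 1 in GF(2)
            L = n + 1 - L
            B = T
            m = 1
        else:
            for j in range(N + 1 - m):
                C[j + m] ^= B[j]
            m += 1
    return L, C[:L + 1]
```

For the sequence 1,1,0,1,1,0,… generated by s_n = s_{n−1} + s_{n−2} (mod 2), this returns L = 2 with C(x) = 1 + x + x².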
The algorithm from Massey (1969, p. 124) for an arbitrary field:
polynomial(field K) s(x) = ...  /* coeffs are s_j; output sequence as N-1 degree polynomial */
/* connection polynomial */
polynomial(field K) C(x) = 1;   /* coeffs are c_j */
polynomial(field K) B(x) = 1;
int L = 0;
int m = 1;
field K b = 1;
int n;
/* steps 2. and 6. */
for (n = 0; n < N; n++) {
    /* step 2. calculate discrepancy */
    field K d = s_n + Σ_{i=1}^{L} c_i s_{n−i};
    if (d == 0) {
        /* step 3. discrepancy is zero; annihilation continues */
        m = m + 1;
    } else if (2 * L <= n) {
        /* step 5. */
        /* temporary copy of C(x) */
        polynomial(field K) T(x) = C(x);
        C(x) = C(x) − d b^(−1) x^m B(x);
        L = n + 1 − L;
        B(x) = T(x);
        b = d;
        m = 1;
    } else {
        /* step 4. */
        C(x) = C(x) − d b^(−1) x^m B(x);
        m = m + 1;
    }
}
return L;
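A direct transcription of Massey's pseudocode into Python might look like the following, sketched here for a prime field GF(p) (the function name, coefficient-list representation, and test values are assumptions for illustration; b⁻¹ is computed with Python's three-argument pow via Fermat's little theorem):

```python
def berlekamp_massey(s, p):
    """Shortest LFSR for the sequence s over GF(p), p prime.
    Returns (L, C) with C = [c_0=1, c_1, ..., c_L]."""
    N = len(s)
    C = [1] + [0] * N        # connection polynomial C(x)
    B = [1] + [0] * N        # copy of C from the last time L changed
    L, m, b = 0, 1, 1
    for n in range(N):
        # step 2: discrepancy d = s_n + sum_{i=1..L} c_i * s_{n-i}
        d = (s[n] + sum(C[i] * s[n - i] for i in range(1, L + 1))) % p
        if d == 0:
            m += 1                               # step 3
        elif 2 * L <= n:
            T = C[:]                             # step 5
            coef = d * pow(b, p - 2, p) % p      # d * b^{-1} mod p
            for j in range(N + 1 - m):
                C[j + m] = (C[j + m] - coef * B[j]) % p
            L = n + 1 - L
            B, b, m = T, d, 1
        else:
            coef = d * pow(b, p - 2, p) % p      # step 4
            for j in range(N + 1 - m):
                C[j + m] = (C[j + m] - coef * B[j]) % p
            m += 1
    return L, C[:L + 1]
```

For example, the GF(7) sequence 1, 1, 5, 3, 5, 0, 3, 2 generated by s_n = 3 s_{n−1} + 2 s_{n−2} (mod 7) yields L = 2 with C(x) = 1 + 4x + 5x², i.e. c_1 = −3 and c_2 = −2 modulo 7, as the annihilation condition requires.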
In the case of a binary GF(2) BCH code, the discrepancy d will be zero on all odd steps, so a check can be added to avoid calculating it.
/* ... */
for (n = 0; n < N; n++) {
    /* if odd step number, discrepancy == 0, no need to calculate it */
    if ((n & 1) != 0) {
        m = m + 1;
        continue;
    }
    /* ... */