Linear code

From Wikipedia, the free encyclopedia
Class of error-correcting code

In coding theory, a linear code is an error-correcting code for which any linear combination of codewords is also a codeword. Linear codes are traditionally partitioned into block codes and convolutional codes, although turbo codes can be seen as a hybrid of these two types.[1] Linear codes allow for more efficient encoding and decoding algorithms than other codes (cf. syndrome decoding).[citation needed]

Linear codes are used in forward error correction and are applied in methods for transmitting symbols (e.g., bits) on a communications channel so that, if errors occur in the communication, some errors can be corrected or detected by the recipient of a message block. The codewords in a linear block code are blocks of symbols that are encoded using more symbols than the original value to be sent.[2] A linear code of length n transmits blocks containing n symbols. For example, the [7,4,3] Hamming code is a linear binary code which represents 4-bit messages using 7-bit codewords. Two distinct codewords differ in at least three bits. As a consequence, up to two errors per codeword can be detected while a single error can be corrected.[3] This code contains 2^4 = 16 codewords.

Definition and parameters


A linear code of length n and dimension k is a linear subspace C with dimension k of the vector space F_q^n, where F_q is the finite field with q elements. Such a code is called a q-ary code. If q = 2 or q = 3, the code is described as a binary code or a ternary code, respectively. The vectors in C are called codewords. The size of a code is the number of codewords and equals q^k.

The weight of a codeword is the number of its elements that are nonzero, and the distance between two codewords is the Hamming distance between them, that is, the number of elements in which they differ. The distance d of the linear code is the minimum weight of its nonzero codewords, or equivalently, the minimum distance between distinct codewords. A linear code of length n, dimension k, and distance d is called an [n,k,d] code (or, more precisely, an [n,k,d]_q code).
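These definitions can be sketched in a few lines of Python; the vectors below are illustrative, and the sketch checks that the distance between two vectors equals the weight of their difference, as used throughout this article:

```python
def weight(c):
    """Hamming weight: number of nonzero coordinates."""
    return sum(1 for x in c if x != 0)

def distance(a, b):
    """Hamming distance: number of coordinates where a and b differ."""
    return sum(1 for x, y in zip(a, b) if x != y)

# Over F_q, d(a, b) equals the weight of the difference a - b (mod q);
# here q = 2 and the two vectors are illustrative.
q = 2
a = (1, 0, 1, 1, 0)
b = (1, 1, 1, 0, 0)
diff = tuple((x - y) % q for x, y in zip(a, b))
assert distance(a, b) == weight(diff) == 2
```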

We want to give F_q^n the standard basis because each coordinate represents a "bit" that is transmitted across a "noisy channel" with some small probability of transmission error (a binary symmetric channel). If some other basis is used, then this model cannot be used and the Hamming metric does not measure the number of errors in transmission, as we want it to.

Generator and check matrices


As a linear subspace of F_q^n, the entire code C (which may be very large) may be represented as the span of a set of k codewords (known as a basis in linear algebra). These basis codewords are often collated in the rows of a matrix G known as a generating matrix for the code C. When G has the block matrix form G = [I_k | P], where I_k denotes the k × k identity matrix and P is some k × (n − k) matrix, then we say G is in standard form.

A matrix H representing a linear function φ : F_q^n → F_q^(n−k) whose kernel is C is called a check matrix of C (or sometimes a parity check matrix). Equivalently, H is a matrix whose null space is C. If C is a code with a generating matrix G in standard form, G = [I_k | P], then H = [−P^T | I_{n−k}] is a check matrix for C. The code generated by H is called the dual code of C. It can be verified that G is a k × n matrix, while H is an (n − k) × n matrix.
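The standard-form relationship can be sketched directly; the matrix P below is an arbitrary illustrative choice (not from the article), and the sketch confirms that every codeword mG lies in the null space of the derived H:

```python
from itertools import product

q, k, n = 2, 3, 6
P = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]                 # illustrative k x (n-k) matrix over F_2

# G = [I_k | P] in standard form; H = [-P^T | I_{n-k}]
G = [[1 if j == i else 0 for j in range(k)] + P[i] for i in range(k)]
H = [[(-P[j][i]) % q for j in range(k)]
     + [1 if j == i else 0 for j in range(n - k)] for i in range(n - k)]

def mat_vec(M, v):
    """Matrix-vector product over F_q."""
    return [sum(r * x for r, x in zip(row, v)) % q for row in M]

# every codeword mG lies in the kernel (null space) of H: H (mG)^T = 0 mod q
for m in product(range(q), repeat=k):
    c = [sum(m[i] * G[i][j] for i in range(k)) % q for j in range(n)]
    assert all(x == 0 for x in mat_vec(H, c))
```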

Linearity guarantees that the minimum Hamming distance d between a codeword c_0 and any of the other codewords c ≠ c_0 is independent of c_0. This follows from the property that the difference c − c_0 of two codewords in C is also a codeword (i.e., an element of the subspace C), and the property that d(c, c_0) = d(c − c_0, 0). These properties imply that

min_{c ∈ C, c ≠ c_0} d(c, c_0) = min_{c ∈ C, c ≠ c_0} d(c − c_0, 0) = min_{c ∈ C, c ≠ 0} d(c, 0) = d.

In other words, in order to find the minimum distance between the codewords of a linear code, one only needs to look at the nonzero codewords. The nonzero codeword with the smallest weight then has the minimum distance to the zero codeword, and hence determines the minimum distance of the code.
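Concretely, the minimum distance can be read off by enumerating the nonzero codewords and taking the smallest weight; a sketch over GF(2) with an illustrative generator matrix:

```python
from itertools import product

q, k, n = 2, 2, 5
G = [(1, 0, 1, 1, 0),
     (0, 1, 0, 1, 1)]           # illustrative generator rows over F_2

def codewords(G):
    """All q^k codewords: every linear combination of the rows of G."""
    for m in product(range(q), repeat=k):
        yield tuple(sum(m[i] * G[i][j] for i in range(k)) % q
                    for j in range(n))

# minimum distance = minimum weight of a nonzero codeword
d = min(sum(c) for c in codewords(G) if any(c))
```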

The distance d of a linear code C also equals the minimum number of linearly dependent columns of the check matrix H.

Proof: The condition H · c^T = 0 is equivalent to Σ_{i=1}^n c_i H_i = 0, where H_i is the i-th column of H. Removing the terms with c_i = 0, the columns H_i with c_i ≠ 0 are linearly dependent. Therefore, d is at least the minimum number of linearly dependent columns. Conversely, consider a minimal set of linearly dependent columns {H_j | j ∈ S}, where S is the column index set; then there are coefficients c_j, not all zero, with Σ_{j ∈ S} c_j H_j = 0. Define the vector c′ by c′_j = c_j if j ∈ S and c′_j = 0 otherwise. Then c′ ∈ C because H · c′^T = 0. Therefore, d ≤ wt(c′) = |S|, the minimum number of linearly dependent columns in H. The claimed property is therefore proven.
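This characterization can be checked by brute force: the sketch below searches, over GF(2), for the smallest set of columns of the check matrix of the [7,4,3]_2 Hamming code (the example used in this article) that sums to the zero vector, i.e. is linearly dependent:

```python
from itertools import combinations

# check matrix of the [7,4,3]_2 Hamming code used as an example in this article
H = [(1, 0, 1, 1, 1, 0, 0),
     (1, 1, 1, 0, 0, 1, 0),
     (0, 1, 1, 1, 0, 0, 1)]
cols = list(zip(*H))            # the columns of H

def sums_to_zero(subset):
    """Over GF(2), a minimal dependent column set XORs to the zero vector."""
    return all(sum(col[i] for col in subset) % 2 == 0 for i in range(len(H)))

# smallest number of linearly dependent columns of H = minimum distance d
d = next(s for s in range(1, len(cols) + 1)
         if any(sums_to_zero(sub) for sub in combinations(cols, s)))
```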

Example: Hamming codes

Main article: Hamming code

As the first class of linear codes developed for error correction purposes, Hamming codes have been widely used in digital communication systems. For any positive integer r ≥ 2, there exists a [2^r − 1, 2^r − r − 1, 3]_2 Hamming code. Since d = 3, this Hamming code can correct a 1-bit error.

Example: The linear block code with the following generator matrix and parity check matrix is a [7,4,3]_2 Hamming code.

G =
[1 0 0 0 1 1 0]
[0 1 0 0 0 1 1]
[0 0 1 0 1 1 1]
[0 0 0 1 1 0 1]

H =
[1 0 1 1 1 0 0]
[1 1 1 0 0 1 0]
[0 1 1 1 0 0 1]
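As a sanity check of this example, the sketch below encodes all 2^4 messages with G and verifies the code's parameters (16 codewords, minimum distance 3, and H c^T = 0 for every codeword):

```python
from itertools import product

G = [(1, 0, 0, 0, 1, 1, 0),
     (0, 1, 0, 0, 0, 1, 1),
     (0, 0, 1, 0, 1, 1, 1),
     (0, 0, 0, 1, 1, 0, 1)]
H = [(1, 0, 1, 1, 1, 0, 0),
     (1, 1, 1, 0, 0, 1, 0),
     (0, 1, 1, 1, 0, 0, 1)]

# encode every 4-bit message m as the codeword mG (mod 2)
code = [tuple(sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7))
        for m in product((0, 1), repeat=4)]

assert len(set(code)) == 16                                # 2^4 codewords
assert min(sum(c) for c in code if any(c)) == 3            # distance d = 3
assert all(sum(r[j] * c[j] for j in range(7)) % 2 == 0     # H c^T = 0
           for c in code for r in H)
```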

Example: Hadamard codes

Main article: Hadamard code

The Hadamard code is a [2^r, r, 2^{r−1}]_2 linear code and is capable of correcting many errors. The Hadamard code can be constructed column by column: the i-th column consists of the bits of the binary representation of the integer i, as shown in the following example. The Hadamard code has minimum distance 2^{r−1} and therefore can correct 2^{r−2} − 1 errors.

Example: The linear block code with the following generator matrix is an [8,3,4]_2 Hadamard code:

G_Had =
[0 0 0 0 1 1 1 1]
[0 0 1 1 0 0 1 1]
[0 1 0 1 0 1 0 1]

The Hadamard code is a special case of a Reed–Muller code. If we take the first column (the all-zero column) out of G_Had, we get the [7,3,4]_2 simplex code, which is the dual code of the Hamming code.
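The column-by-column construction can be sketched directly; here r = 3, so the columns are the 3-bit binary representations of 0 through 7 (most significant bit first, matching G_Had above), and every nonzero codeword has weight 2^{r−1} = 4:

```python
from itertools import product

r = 3
n = 2 ** r
# column i of the generator matrix is the r-bit binary representation of i
G = [[(i >> (r - 1 - row)) & 1 for i in range(n)] for row in range(r)]

code = [tuple(sum(m[j] * G[j][i] for j in range(r)) % 2 for i in range(n))
        for m in product((0, 1), repeat=r)]

# every nonzero codeword of this [2^r, r, 2^(r-1)] code has weight 2^(r-1)
assert all(sum(c) == n // 2 for c in code if any(c))
```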

Nearest neighbor algorithm


The parameter d is closely related to the error-correcting ability of the code. The following construction/algorithm illustrates this (called the nearest neighbor decoding algorithm):

Input: A received vector v in F_q^n.

Output: A codeword w in C closest to v, if any.

Starting with t = 0, repeat the following two steps: enumerate the elements of the ball B_t(v) of Hamming radius t around the received word v, and if some w in B_t(v) lies in C, return w; otherwise increment t and continue.

We say that a linear code C is t-error correcting if there is at most one codeword in B_t(v) for each v in F_q^n.
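A brute-force sketch of nearest neighbor decoding, using the [7,4,3]_2 Hamming code from the example above (with d = 3 the code is 1-error correcting, so flipping any single bit of a codeword is always repaired):

```python
from itertools import product

G = [(1, 0, 0, 0, 1, 1, 0),
     (0, 1, 0, 0, 0, 1, 1),
     (0, 0, 1, 0, 1, 1, 1),
     (0, 0, 0, 1, 1, 0, 1)]
code = [tuple(sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7))
        for m in product((0, 1), repeat=4)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def nearest_neighbor(v):
    """Return the codeword closest (in Hamming distance) to the received v."""
    return min(code, key=lambda w: hamming(v, w))

# flip one bit of a codeword: the single error is corrected
sent = code[5]
received = list(sent)
received[2] ^= 1
assert nearest_neighbor(tuple(received)) == sent
```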

Popular notation

Main article: Block code § Popular notation

Codes in general are often denoted by the letter C, and a code of length n and of rank k (i.e., having k code words in its basis and k rows in its generating matrix) is generally referred to as an (n, k) code. Linear block codes are frequently denoted as [n, k, d] codes, where d refers to the code's minimum Hamming distance between any two code words.

(The [n, k, d] notation should not be confused with the (n, M, d) notation used to denote a non-linear code of length n, size M (i.e., having M code words), and minimum Hamming distance d.)

Singleton bound


Lemma (Singleton bound): Every linear [n, k, d] code C satisfies k + d ≤ n + 1.

A code C whose parameters satisfy k + d = n + 1 is called maximum distance separable or MDS. Such codes, when they exist, are in some sense best possible.
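A small numeric check of the bound; the repetition and single parity-check codes mentioned in the comments are standard MDS examples, not taken from this article:

```python
def singleton_ok(n, k, d):
    """Singleton bound: k + d <= n + 1 for every linear [n,k,d] code."""
    return k + d <= n + 1

# the [7,4,3] Hamming code satisfies the bound but is not MDS: 4 + 3 < 7 + 1
assert singleton_ok(7, 4, 3) and 4 + 3 < 7 + 1

# the [n,1,n] repetition code and [n,n-1,2] single parity-check code meet
# the bound with equality, so they are MDS (here n = 5)
n = 5
assert 1 + n == n + 1 and (n - 1) + 2 == n + 1
```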

If C_1 and C_2 are two codes of length n and if there is a permutation p in the symmetric group S_n for which (c_1, ..., c_n) is in C_1 if and only if (c_{p(1)}, ..., c_{p(n)}) is in C_2, then we say C_1 and C_2 are permutation equivalent. More generally, if there is an n × n monomial matrix M : F_q^n → F_q^n which sends C_1 isomorphically to C_2, then we say C_1 and C_2 are equivalent.

Lemma: Any linear code is permutation equivalent to a code which is in standard form.
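Since a coordinate permutation only reorders each codeword's entries, permutation-equivalent codes have identical weight distributions; a sketch with an illustrative [5,2] binary code and an arbitrary permutation:

```python
from itertools import product
from collections import Counter

G = [(1, 0, 1, 1, 0),
     (0, 1, 0, 1, 1)]               # illustrative generator rows over F_2
code = {tuple(sum(m[i] * G[i][j] for i in range(2)) % 2 for j in range(5))
        for m in product((0, 1), repeat=2)}

p = (4, 2, 0, 3, 1)                 # a permutation of the 5 coordinates
permuted = {tuple(c[p[j]] for j in range(5)) for c in code}

# permutation-equivalent codes share the same weight distribution
assert Counter(sum(c) for c in code) == Counter(sum(c) for c in permuted)
```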

Bonisoli's theorem


A code is defined to be equidistant if and only if there exists some constant d such that the distance between any two of the code's distinct codewords is equal to d.[4] In 1984 Arrigo Bonisoli determined the structure of linear one-weight codes over finite fields and proved that every equidistant linear code is a sequence of dual Hamming codes.[5]

Examples


Some examples of linear codes include:

  • Repetition codes
  • Parity check codes
  • Hamming codes
  • Hadamard codes
  • Reed–Muller codes
  • Cyclic codes
  • Reed–Solomon codes
  • BCH codes
  • Golay codes

Generalization


Hamming spaces over non-field alphabets have also been considered, especially over finite rings, most notably Galois rings over Z_4. This gives rise to modules instead of vector spaces and ring-linear codes (identified with submodules) instead of linear codes. The typical metric used in this case is the Lee distance. There exists a Gray isometry between Z_2^{2m} (i.e. GF(2^{2m})) with the Hamming distance and Z_4^m (also denoted as GR(4, m)) with the Lee distance; its main attraction is that it establishes a correspondence between some "good" codes that are not linear over Z_2^{2m} and images of ring-linear codes from Z_4^m.[6][7][8]
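The Gray isometry can be verified exhaustively for small m; the map Z_4 → Z_2^2 below (0 ↦ 00, 1 ↦ 01, 2 ↦ 11, 3 ↦ 10) is the standard Gray map, and the Lee weights on Z_4 are 0, 1, 2, 1:

```python
from itertools import product

gray = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}   # Gray map Z_4 -> Z_2^2
lee_wt = {0: 0, 1: 1, 2: 2, 3: 1}                     # Lee weights on Z_4

def lee_distance(x, y):
    return sum(lee_wt[(a - b) % 4] for a, b in zip(x, y))

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def gray_image(x):
    """Apply the Gray map symbol-wise: Z_4^m -> Z_2^(2m)."""
    return tuple(bit for a in x for bit in gray[a])

# the Gray map turns Lee distance on Z_4^m into Hamming distance on Z_2^(2m)
m = 2
for x in product(range(4), repeat=m):
    for y in product(range(4), repeat=m):
        assert lee_distance(x, y) == hamming(gray_image(x), gray_image(y))
```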

Some authors have referred to such codes over rings simply as linear codes as well.[9]

References

  1. ^ William E. Ryan and Shu Lin (2009). Channel Codes: Classical and Modern. Cambridge University Press. p. 4. ISBN 978-0-521-84868-8.
  2. ^ MacKay, David J.C. (2003). Information Theory, Inference, and Learning Algorithms (PDF). Cambridge University Press. p. 9. Bibcode:2003itil.book.....M. ISBN 9780521642989. "In a linear block code, the extra N − K bits are linear functions of the original K bits; these extra bits are called parity-check bits."
  3. ^ Thomas M. Cover and Joy A. Thomas (1991). Elements of Information Theory. John Wiley & Sons, Inc. pp. 210–211. ISBN 978-0-471-06259-2.
  4. ^ Etzion, Tuvi; Raviv, Netanel (2013). "Equidistant codes in the Grassmannian". arXiv:1308.6231 [math.CO].
  5. ^ Bonisoli, A. (1984). "Every equidistant linear code is a sequence of dual Hamming codes". Ars Combinatoria. 18: 181–186.
  6. ^ Marcus Greferath (2009). "An Introduction to Ring-Linear Coding Theory". In Massimiliano Sala; Teo Mora; Ludovic Perret; Shojiro Sakata; Carlo Traverso (eds.). Gröbner Bases, Coding, and Cryptography. Springer Science & Business Media. ISBN 978-3-540-93806-4.
  7. ^ "Encyclopedia of Mathematics". www.encyclopediaofmath.org.
  8. ^ J.H. van Lint (1999). Introduction to Coding Theory (3rd ed.). Springer. Chapter 8: Codes over ℤ4. ISBN 978-3-540-64133-9.
  9. ^ S.T. Dougherty; J.-L. Kim; P. Sole (2015). "Open Problems in Coding Theory". In Steven Dougherty; Alberto Facchini; Andre Gerard Leroy; Edmund Puczylowski; Patrick Sole (eds.). Noncommutative Rings and Their Applications. American Mathematical Soc. p. 80. ISBN 978-1-4704-1032-2.

Bibliography

  • J. F. Humphreys; M. Y. Prest (2004). Numbers, Groups and Codes (2nd ed.). Cambridge University Press. ISBN 978-0-511-19420-7. Chapter 5 contains a more gentle introduction (than this article) to the subject of linear codes.

Retrieved from "https://en.wikipedia.org/w/index.php?title=Linear_code&oldid=1259978043"