Information theory

From Wikipedia, the free encyclopedia
Scientific study of digital information
Not to be confused with Information science.

Information theory is the mathematical study of the quantification, storage, and communication of information. The field was established and formalized by Claude Shannon in the 1940s,[1] though early contributions were made in the 1920s through the works of Harry Nyquist and Ralph Hartley. It is at the intersection of electronic engineering, mathematics, statistics, computer science, neurobiology, physics, and electrical engineering.[2][3]

A key measure in information theory is entropy. Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip (which has two equally likely outcomes) provides less information (lower entropy, less uncertainty) than identifying the outcome from a roll of a die (which has six equally likely outcomes). Some other important measures in information theory are mutual information, channel capacity, error exponents, and relative entropy. Important sub-fields of information theory include source coding, algorithmic complexity theory, algorithmic information theory and information-theoretic security.

Applications of fundamental topics of information theory include source coding/data compression (e.g. for ZIP files), and channel coding/error detection and correction (e.g. for DSL). Its impact has been crucial to the success of the Voyager missions to deep space,[4] the invention of the compact disc, the feasibility of mobile phones and the development of the Internet and artificial intelligence.[5][6][3] The theory has also found applications in other areas, including statistical inference,[7] cryptography, neurobiology,[8] perception,[9] signal processing,[2] linguistics, the evolution[10] and function[11] of molecular codes (bioinformatics), thermal physics,[12] molecular dynamics,[13] black holes, quantum computing, information retrieval, intelligence gathering, plagiarism detection,[14] pattern recognition, anomaly detection,[15] the analysis of music,[16][17] art creation,[18] imaging system design,[19] the study of outer space,[20] the dimensionality of space,[21] and epistemology.[22]

Overview


Information theory studies the transmission, processing, extraction, and utilization of information. Abstractly, information can be thought of as the resolution of uncertainty. In the case of communication of information over a noisy channel, this abstract concept was formalized in 1948 by Claude Shannon in a paper entitled A Mathematical Theory of Communication, in which information is thought of as a set of possible messages, and the goal is to send these messages over a noisy channel and to have the receiver reconstruct the message with low probability of error, in spite of the channel noise. Shannon's main result, the noisy-channel coding theorem, showed that, in the limit of many channel uses, the rate of information that is asymptotically achievable is equal to the channel capacity, a quantity dependent merely on the statistics of the channel over which the messages are sent.[8]

Coding theory is concerned with finding explicit methods, called codes, for increasing the efficiency and reducing the error rate of data communication over noisy channels to near the channel capacity. These codes can be roughly subdivided into data compression (source coding) and error-correction (channel coding) techniques. In the latter case, it took many years to find the methods Shannon's work proved were possible.[citation needed]

A third class of information theory codes is cryptographic algorithms (both codes and ciphers). Concepts, methods and results from coding theory and information theory are widely used in cryptography and cryptanalysis, such as the unit ban.[citation needed]

Historical background

Main article: History of information theory

The landmark event establishing the discipline of information theory and bringing it to immediate worldwide attention was the publication of Claude E. Shannon's classic paper "A Mathematical Theory of Communication" in the Bell System Technical Journal in July and October 1948. Historian James Gleick rated the paper as the most important development of 1948, noting that the paper was "even more profound and more fundamental" than the transistor.[23] Shannon came to be known as the "father of information theory".[24][25][26] He outlined some of his initial ideas of information theory as early as 1939 in a letter to Vannevar Bush.[26]

Prior to this paper, limited information-theoretic ideas had been developed at Bell Labs, all implicitly assuming events of equal probability. Harry Nyquist's 1924 paper, Certain Factors Affecting Telegraph Speed, contains a theoretical section quantifying "intelligence" and the "line speed" at which it can be transmitted by a communication system, giving the relation W = K log m (recalling the Boltzmann constant), where W is the speed of transmission of intelligence, m is the number of different voltage levels to choose from at each time step, and K is a constant. Ralph Hartley's 1928 paper, Transmission of Information, uses the word information as a measurable quantity, reflecting the receiver's ability to distinguish one sequence of symbols from any other, thus quantifying information as H = log S^n = n log S, where S was the number of possible symbols, and n the number of symbols in a transmission. The unit of information was therefore the decimal digit, which has since sometimes been called the hartley in his honor as a unit or scale or measure of information. Alan Turing in 1940 used similar ideas as part of the statistical analysis of the breaking of the German Second World War Enigma ciphers.[citation needed]

Much of the mathematics behind information theory for events of different probabilities was developed for the field of thermodynamics by Ludwig Boltzmann and J. Willard Gibbs. Connections between information-theoretic entropy and thermodynamic entropy, including the important contributions by Rolf Landauer in the 1960s, are explored in Entropy in thermodynamics and information theory.[citation needed]

In Shannon's revolutionary and groundbreaking paper, the work for which had been substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion:

"The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point."

With it came the ideas of the information entropy and redundancy of a source (and its relevance through the source coding theorem), the mutual information and channel capacity of a noisy channel (including the promise of perfect loss-free communication given by the noisy-channel coding theorem), the practical result of the Shannon–Hartley law for the channel capacity of a Gaussian channel, and the bit as a new way of seeing the most fundamental unit of information.

Quantities of information

Main article: Quantities of information

Information theory is based on probability theory and statistics, where quantified information is usually described in terms of bits. Information theory often concerns itself with measures of information of the distributions associated with random variables. One of the most important measures is called entropy, which forms the building block of many other measures. Entropy allows quantification of the amount of information in a single random variable.[27] Another useful concept is mutual information, defined on two random variables, which describes the amount of information the two variables have in common and can be used to describe their correlation. The former quantity is a property of the probability distribution of a random variable and gives a limit on the rate at which data generated by independent samples with the given distribution can be reliably compressed. The latter is a property of the joint distribution of two random variables, and is the maximum rate of reliable communication across a noisy channel in the limit of long block lengths, when the channel statistics are determined by the joint distribution.

The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. A common unit of information is the bit or shannon, based on the binary logarithm. Other units include the nat, which is based on the natural logarithm, and the decimal digit, which is based on the common logarithm.

In what follows, an expression of the form p log p is considered by convention to be equal to zero whenever p = 0. This is justified because lim_{p→0+} p log p = 0 for any logarithmic base.

Entropy of an information source


Based on the probability mass function of each source symbol to be communicated, the Shannon entropy H, in units of bits (per symbol), is given by

H = -\sum_{i} p_i \log_2 (p_i)

where p_i is the probability of occurrence of the i-th possible value of the source symbol. This equation gives the entropy in units of "bits" (per symbol) because it uses a logarithm of base 2, and this base-2 measure of entropy has sometimes been called the shannon in his honor. Entropy is also commonly computed using the natural logarithm (base e, where e is Euler's number), which produces a measurement of entropy in nats per symbol and sometimes simplifies the analysis by avoiding the need to include extra constants in the formulas. Other bases are also possible, but less commonly used. For example, a logarithm of base 2^8 = 256 will produce a measurement in bytes per symbol, and a logarithm of base 10 will produce a measurement in decimal digits (or hartleys) per symbol.

Intuitively, the entropy H_X of a discrete random variable X is a measure of the amount of uncertainty associated with the value of X when only its distribution is known.

The entropy of a source that emits a sequence of N symbols that are independent and identically distributed (iid) is NH bits (per message of N symbols). If the source data symbols are identically distributed but not independent, the entropy of a message of length N will be less than NH.

Figure: The entropy of a Bernoulli trial as a function of success probability, often called the binary entropy function, Hb(p). The entropy is maximized at 1 bit per trial when the two possible outcomes are equally probable, as in an unbiased coin toss.

If one transmits 1000 bits (0s and 1s), and the value of each of these bits is known to the receiver (has a specific value with certainty) ahead of transmission, it is clear that no information is transmitted. If, however, each bit is independently equally likely to be 0 or 1, 1000 shannons of information (more often called bits) have been transmitted. Between these two extremes, information can be quantified as follows. If 𝕏 is the set of all messages {x_1, ..., x_n} that X could be, and p(x) is the probability of some x ∈ 𝕏, then the entropy, H, of X is defined:[28]

H(X) = \mathbb{E}_X[I(x)] = -\sum_{x \in \mathbb{X}} p(x) \log p(x).

(Here, I(x) is the self-information, which is the entropy contribution of an individual message, and E_X is the expected value.) A property of entropy is that it is maximized when all the messages in the message space are equiprobable, p(x) = 1/n; i.e., most unpredictable, in which case H(X) = log n.

The special case of information entropy for a random variable with two outcomes is the binary entropy function, usually taken to the logarithmic base 2, thus having the shannon (Sh) as unit:

H_b(p) = -p \log_2 p - (1 - p) \log_2 (1 - p).
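As a concrete illustration (not part of the original formulation), the following Python sketch evaluates these formulas for a few simple distributions; the function names and example probabilities are arbitrary. It reproduces the comparison from the lead: a fair coin carries 1 bit per outcome, a fair die carries log2 6 ≈ 2.585 bits, and the binary entropy function peaks at 1 bit when p = 0.5.

```python
import math

def entropy(probs, base=2):
    """Shannon entropy H = -sum p_i log(p_i), with 0 log 0 taken as 0."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

def binary_entropy(p):
    """H_b(p) = -p log2 p - (1 - p) log2 (1 - p)."""
    return entropy([p, 1 - p])

print(entropy([0.5, 0.5]))          # fair coin: 1.0 bit
print(entropy([1 / 6] * 6))         # fair die: log2(6) ≈ 2.585 bits
print(entropy([0.5, 0.5], math.e))  # the same coin in nats: ln(2) ≈ 0.693
print(binary_entropy(0.5))          # maximum of the binary entropy function: 1.0 bit
print(binary_entropy(0.11))         # a heavily biased coin carries only ≈ 0.5 bits
```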

Joint entropy


The joint entropy of two discrete random variables X and Y is merely the entropy of their pairing: (X, Y). This implies that if X and Y are independent, then their joint entropy is the sum of their individual entropies.

For example, if (X, Y) represents the position of a chess piece, with X the row and Y the column, then the joint entropy of the row of the piece and the column of the piece will be the entropy of the position of the piece.

H(X, Y) = \mathbb{E}_{X,Y}[-\log p(x, y)] = -\sum_{x,y} p(x, y) \log p(x, y)

Despite similar notation, joint entropy should not be confused with cross-entropy.
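A minimal sketch of the joint entropy computation, assuming an arbitrary illustrative joint pmf for two binary variables; it also shows that H(X) + H(Y) exceeds H(X, Y) unless X and Y are independent.

```python
import math
from collections import defaultdict

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A hypothetical joint pmf p(x, y) for two binary variables, keyed by (x, y).
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

# Marginals p(x) and p(y), obtained by summing out the other variable.
p_x, p_y = defaultdict(float), defaultdict(float)
for (x, y), p in p_xy.items():
    p_x[x] += p
    p_y[y] += p

print(entropy(p_xy.values()))                         # H(X, Y) ≈ 1.846 bits
print(entropy(p_x.values()) + entropy(p_y.values()))  # ≈ 1.971 bits; equal to H(X, Y)
                                                      # only if X and Y are independent
```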

Conditional entropy (equivocation)


The conditional entropy or conditional uncertainty of X given random variable Y (also called the equivocation of X about Y) is the average conditional entropy over Y:[29]

H(X|Y) = \mathbb{E}_Y[H(X|y)] = -\sum_{y \in Y} p(y) \sum_{x \in X} p(x|y) \log p(x|y) = -\sum_{x,y} p(x, y) \log p(x|y).

Because entropy can be conditioned on a random variable or on that random variable being a certain value, care should be taken not to confuse these two definitions of conditional entropy, the former of which is in more common use. A basic property of this form of conditional entropy is that:

H(X|Y) = H(X, Y) - H(Y).
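Continuing with the same illustrative joint pmf, the sketch below computes the conditional entropy both from its definition and via the identity H(X|Y) = H(X, Y) − H(Y); the distribution and helper names are assumptions for the example only.

```python
import math
from collections import defaultdict

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

p_y = defaultdict(float)
for (x, y), p in p_xy.items():
    p_y[y] += p

# Definition: H(X|Y) = -sum_{x,y} p(x, y) log p(x|y), with p(x|y) = p(x, y) / p(y).
H_X_given_Y = -sum(p * math.log2(p / p_y[y]) for (x, y), p in p_xy.items() if p > 0)

# Chain-rule identity: H(X|Y) = H(X, Y) - H(Y).
print(H_X_given_Y)                                     # ≈ 0.875 bits
print(entropy(p_xy.values()) - entropy(p_y.values()))  # same value
```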

Mutual information (transinformation)


Mutual information measures the amount of information that can be obtained about one random variable by observing another. It is important in communication where it can be used to maximize the amount of information shared between sent and received signals. The mutual information ofX relative toY is given by:

I(X; Y) = \mathbb{E}_{X,Y}[SI(x, y)] = \sum_{x,y} p(x, y) \log \frac{p(x, y)}{p(x)\, p(y)}

where SI (Specific mutual Information) is the pointwise mutual information.

A basic property of the mutual information is that

I(X; Y) = H(X) - H(X|Y).

That is, knowing Y, we can save an average of I(X; Y) bits in encoding X compared to not knowing Y.

Mutual information is symmetric:

I(X; Y) = I(Y; X) = H(X) + H(Y) - H(X, Y).

Mutual information can be expressed as the average Kullback–Leibler divergence (information gain) between the posterior probability distribution of X given the value of Y and the prior distribution on X:

I(X; Y) = \mathbb{E}_{p(y)}[D_{\mathrm{KL}}(p(X|Y = y) \| p(X))].

In other words, this is a measure of how much, on average, the probability distribution on X will change if we are given the value of Y. This is often recalculated as the divergence from the product of the marginal distributions to the actual joint distribution:

I(X; Y) = D_{\mathrm{KL}}(p(X, Y) \| p(X) p(Y)).

Mutual information is closely related to the log-likelihood ratio test in the context of contingency tables and the multinomial distribution and to Pearson's χ² test: mutual information can be considered a statistic for assessing independence between a pair of variables, and has a well-specified asymptotic distribution.
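The following sketch, reusing the illustrative joint pmf from the entropy examples, computes the mutual information both from the defining sum and from the entropy identity I(X;Y) = H(X) + H(Y) − H(X, Y); the two agree, as expected.

```python
import math
from collections import defaultdict

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

p_x, p_y = defaultdict(float), defaultdict(float)
for (x, y), p in p_xy.items():
    p_x[x] += p
    p_y[y] += p

# Definition: I(X;Y) = sum_{x,y} p(x, y) log [ p(x, y) / (p(x) p(y)) ].
I_def = sum(p * math.log2(p / (p_x[x] * p_y[y]))
            for (x, y), p in p_xy.items() if p > 0)

# Entropy identity: I(X;Y) = H(X) + H(Y) - H(X, Y).
I_ent = entropy(p_x.values()) + entropy(p_y.values()) - entropy(p_xy.values())

print(I_def, I_ent)   # both ≈ 0.125 bits shared between X and Y
```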

Kullback–Leibler divergence (information gain)


The Kullback–Leibler divergence (or information divergence, information gain, or relative entropy) is a way of comparing two distributions: a "true" probability distribution p(X), and an arbitrary probability distribution q(X). If we compress data in a manner that assumes q(X) is the distribution underlying some data, when, in reality, p(X) is the correct distribution, the Kullback–Leibler divergence is the average number of additional bits per datum necessary for compression. It is thus defined

D_{\mathrm{KL}}(p(X) \| q(X)) = \sum_{x \in X} -p(x) \log q(x) \, - \, \sum_{x \in X} -p(x) \log p(x) = \sum_{x \in X} p(x) \log \frac{p(x)}{q(x)}.

Although it is sometimes used as a 'distance metric', KL divergence is not a true metric since it is not symmetric and does not satisfy the triangle inequality (making it a semi-quasimetric).

Another interpretation of the KL divergence is the "unnecessary surprise" introduced by a prior from the truth: suppose a number X is about to be drawn randomly from a discrete set with probability distribution p(x). If Alice knows the true distribution p(x), while Bob believes (has a prior) that the distribution is q(x), then Bob will be more surprised than Alice, on average, upon seeing the value of X. The KL divergence is the (objective) expected value of Bob's (subjective) surprisal minus Alice's surprisal, measured in bits if the log is in base 2. In this way, the extent to which Bob's prior is "wrong" can be quantified in terms of how "unnecessarily surprised" it is expected to make him.
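A short illustrative sketch of the divergence in bits, assuming a biased coin as the true distribution p and a fair coin as the model q; it also exhibits the asymmetry noted above.

```python
import math

def kl_divergence(p, q):
    """D_KL(p || q) in bits, for pmfs given as aligned lists of probabilities."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.8, 0.2]   # hypothetical true distribution (a biased coin)
q = [0.5, 0.5]   # model/prior assumed by the code (a fair coin)

print(kl_divergence(p, q))  # ≈ 0.278 extra bits per symbol paid for assuming q
print(kl_divergence(q, p))  # ≈ 0.322 bits: D(p||q) and D(q||p) differ in general
```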

Directed information


Directed information, I(X^n → Y^n), is an information theory measure that quantifies the information flow from the random process X^n = {X_1, X_2, ..., X_n} to the random process Y^n = {Y_1, Y_2, ..., Y_n}. The term directed information was coined by James Massey and is defined as

I(X^n \to Y^n) \triangleq \sum_{i=1}^{n} I(X^i; Y_i | Y^{i-1}),

where I(X^i; Y_i | Y^{i-1}) is the conditional mutual information I(X_1, X_2, ..., X_i; Y_i | Y_1, Y_2, ..., Y_{i-1}).

In contrast to mutual information, directed information is not symmetric. I(X^n → Y^n) measures the information bits that are transmitted causally[clarification needed] from X^n to Y^n. Directed information has many applications in problems where causality plays an important role, such as the capacity of channels with feedback,[30][31] the capacity of discrete memoryless networks with feedback,[32] gambling with causal side information,[33] compression with causal side information,[34] real-time control communication settings,[35][36] and statistical physics.[37]
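A minimal sketch of how this definition can be evaluated for a toy joint distribution over short sequences; the helper functions and the noiseless-channel example are illustrative assumptions rather than constructions taken from the works cited here.

```python
import itertools
import math
from collections import defaultdict

def cond_mutual_info(p_abc):
    """I(A;B|C) in bits from a joint pmf given as {(a, b, c): probability}."""
    p_c, p_ac, p_bc = defaultdict(float), defaultdict(float), defaultdict(float)
    for (a, b, c), p in p_abc.items():
        p_c[c] += p
        p_ac[(a, c)] += p
        p_bc[(b, c)] += p
    return sum(p * math.log2(p * p_c[c] / (p_ac[(a, c)] * p_bc[(b, c)]))
               for (a, b, c), p in p_abc.items() if p > 0)

def directed_information(p_seq, n):
    """I(X^n -> Y^n) = sum_i I(X^i; Y_i | Y^{i-1}), where p_seq is a joint pmf
    over whole sequences, keyed by (x_tuple, y_tuple)."""
    total = 0.0
    for i in range(1, n + 1):
        p_abc = defaultdict(float)
        for (xs, ys), p in p_seq.items():
            # A = X^i (first i inputs), B = Y_i, C = Y^{i-1} (earlier outputs)
            p_abc[(xs[:i], ys[i - 1], ys[:i - 1])] += p
        total += cond_mutual_info(p_abc)
    return total

# Toy check: a noiseless channel Y_i = X_i driven by i.i.d. uniform binary inputs
# should give I(X^n -> Y^n) = n bits.
n = 2
p_seq = {(xs, xs): 1.0 / 2 ** n for xs in itertools.product((0, 1), repeat=n)}
print(directed_information(p_seq, n))  # -> 2.0
```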

Other quantities


Other important information theoretic quantities include the Rényi entropy and the Tsallis entropy (generalizations of the concept of entropy), differential entropy (a generalization of quantities of information to continuous distributions), and the conditional mutual information. Also, pragmatic information has been proposed as a measure of how much information has been used in making a decision.

Coding theory

Main article: Coding theory
Figure: A picture showing scratches on the readable surface of a CD-R. Music and data CDs are coded using error correcting codes and thus can still be read even if they have minor scratches, using error detection and correction.

Coding theory is one of the most important and direct applications of information theory. It can be subdivided into source coding theory and channel coding theory. Using a statistical description for data, information theory quantifies the number of bits needed to describe the data, which is the information entropy of the source.

  • Data compression (source coding): There are two formulations for the compression problem: lossless data compression, in which the data must be reconstructed exactly, and lossy data compression, which allocates the bits needed to reconstruct the data within a specified fidelity level measured by a distortion function.
  • Error-correcting codes (channel coding): While data compression removes as much redundancy as possible, an error-correcting code adds just the right kind of redundancy (i.e., error correction) needed to transmit the data efficiently and faithfully across a noisy channel.

This division of coding theory into compression and transmission is justified by the information transmission theorems, or source–channel separation theorems, that justify the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user. In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the broadcast channel) or intermediary "helpers" (the relay channel), or more general networks, compression followed by transmission may no longer be optimal.

Source theory


Any process that generates successive messages can be considered a source of information. A memoryless source is one in which each message is an independent identically distributed random variable, whereas the properties of ergodicity and stationarity impose less restrictive constraints. All such sources are stochastic. These terms are well studied in their own right outside information theory.

Rate


Information rate is the average entropy per symbol. For memoryless sources, this is merely the entropy of each symbol, while, in the case of a stationary stochastic process, it is:

r = \lim_{n \to \infty} H(X_n | X_{n-1}, X_{n-2}, X_{n-3}, \ldots);

that is, the conditional entropy of a symbol given all the previous symbols generated. For the more general case of a process that is not necessarily stationary, the average rate is:

r = \lim_{n \to \infty} \frac{1}{n} H(X_1, X_2, \dots, X_n);

that is, the limit of the joint entropy per symbol. For stationary sources, these two expressions give the same result.[38]
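For a concrete case, the entropy rate of a stationary Markov source can be computed directly from its transition matrix, since the conditional entropy of the next symbol depends only on the current one. The sketch below assumes NumPy and an arbitrary two-state chain.

```python
import numpy as np

# A hypothetical stationary two-state Markov source with transition matrix P,
# where P[i, j] = Pr(next symbol = j | current symbol = i).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Stationary distribution pi solves pi P = pi (left eigenvector for eigenvalue 1).
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

# Entropy rate r = -sum_i pi_i sum_j P[i, j] log2 P[i, j], i.e. the conditional
# entropy of the next symbol given the current one.
r = -sum(pi[i] * P[i, j] * np.log2(P[i, j])
         for i in range(2) for j in range(2) if P[i, j] > 0)

print(pi)   # stationary distribution, here [0.8, 0.2]
print(r)    # ≈ 0.57 bits per symbol, below 1 bit because symbols are dependent
```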

Theinformation rate is defined as:

r = \lim_{n \to \infty} \frac{1}{n} I(X_1, X_2, \dots, X_n; Y_1, Y_2, \dots, Y_n);

It is common in information theory to speak of the "rate" or "entropy" of a language. This is appropriate, for example, when the source of information is English prose. The rate of a source of information is related to its redundancy and how well it can be compressed, the subject of source coding.

Channel capacity

Main article: Channel capacity

Communication over a channel is the primary motivation of information theory. However, channels often fail to produce exact reconstruction of a signal; noise, periods of silence, and other forms of signal corruption often degrade quality.

Consider the communications process over a discrete channel. A simple model of the process is shown below:

W (Message) → [Encoder f_n] → X^n (Encoded sequence) → [Channel p(y|x)] → Y^n (Received sequence) → [Decoder g_n] → Ŵ (Estimated message)

Here X represents the space of messages transmitted, and Y the space of messages received during a unit time over our channel. Let p(y|x) be the conditional probability distribution function of Y given X. We will consider p(y|x) to be an inherent fixed property of our communications channel (representing the nature of the noise of our channel). Then the joint distribution of X and Y is completely determined by our channel and by our choice of f(x), the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximize the rate of information, or the signal, we can communicate over the channel. The appropriate measure for this is the mutual information, and this maximum mutual information is called the channel capacity and is given by:

C = \max_{f} I(X; Y).

This capacity has the following property related to communicating at information rate R (where R is usually bits per symbol). For any information rate R < C and coding error ε > 0, for large enough N, there exists a code of length N and rate ≥ R and a decoding algorithm, such that the maximal probability of block error is ≤ ε; that is, it is always possible to transmit with arbitrarily small block error. In addition, for any rate R > C, it is impossible to transmit with arbitrarily small block error.

Channel coding is concerned with finding such nearly optimal codes that can be used to transmit data over a noisy channel with a small coding error at a rate near the channel capacity.

Capacity of particular channel models

  • A continuous-time analog communications channel subject to Gaussian noise; see the Shannon–Hartley theorem.
  • A binary symmetric channel (BSC) with crossover probability p is a binary input, binary output channel that flips the input bit with probability p. The BSC has a capacity of 1 − Hb(p) bits per channel use, where Hb is the binary entropy function taken to the base-2 logarithm.
  • A binary erasure channel (BEC) with erasure probability p is a binary input, ternary output channel. The possible channel outputs are 0, 1, and a third symbol 'e' called an erasure. The erasure represents complete loss of information about an input bit. The capacity of the BEC is 1 − p bits per channel use; both the BSC and BEC capacities are illustrated in the sketch below.
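A short numerical sketch of these capacities, assuming Python and an arbitrary crossover/erasure probability: a coarse grid search over input distributions recovers the closed form C = 1 − Hb(p) for the BSC, and the BEC capacity is simply 1 − p.

```python
import math

def h_b(p):
    """Binary entropy function H_b(p) in bits."""
    if p == 0.0 or p == 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_mutual_information(q, p):
    """I(X;Y) for a BSC with crossover probability p and input Pr(X = 1) = q.
    Here Pr(Y = 1) = q(1 - p) + (1 - q)p and H(Y|X) = H_b(p)."""
    return h_b(q * (1 - p) + (1 - q) * p) - h_b(p)

p = 0.11  # illustrative crossover / erasure probability

# Capacity is the maximum of I(X;Y) over input distributions; a grid search
# over q recovers the closed form C = 1 - H_b(p), attained at q = 0.5.
C_numeric = max(bsc_mutual_information(q / 1000, p) for q in range(1001))
print(C_numeric, 1 - h_b(p))   # both ≈ 0.5 bits per channel use

# For the binary erasure channel the capacity is simply 1 - p.
print(1 - p)                   # 0.89 bits per channel use
```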

Channels with memory and directed information


In practice many channels have memory. Namely, at time i the channel is given by the conditional probability P(y_i | x_i, x_{i-1}, x_{i-2}, ..., x_1, y_{i-1}, y_{i-2}, ..., y_1). It is often more convenient to use the notation x^i = (x_i, x_{i-1}, x_{i-2}, ..., x_1), so that the channel becomes P(y_i | x^i, y^{i-1}). In such a case the capacity is given by the mutual information rate when there is no feedback available, and by the directed information rate whether or not there is feedback[30][39] (if there is no feedback the directed information equals the mutual information).

Fungible information


Fungible information is the information for which the means of encoding is not important.[40] Classical information theorists and computer scientists are mainly concerned with information of this sort. It is sometimes referred to as speakable information.[41]

Applications to other fields


Intelligence uses and secrecy applications


Information theoretic concepts apply to cryptography and cryptanalysis. Turing's information unit, the ban, was used in the Ultra project, breaking the German Enigma machine code and hastening the end of World War II in Europe. Shannon himself defined an important concept now called the unicity distance. Based on the redundancy of the plaintext, it attempts to give a minimum amount of ciphertext necessary to ensure unique decipherability.
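As a rough worked example, using common textbook assumptions about English redundancy and a simple substitution cipher rather than figures from this article, the unicity distance U ≈ H(K)/D can be estimated as follows.

```python
import math

# Key entropy H(K) of a simple substitution cipher: log2(26!) ≈ 88.4 bits.
H_K = math.log2(math.factorial(26))

# Assumed per-letter redundancy D of English: log2(26) minus roughly 1.5 bits
# of actual information per letter, i.e. about 3.2 bits per letter.
D = math.log2(26) - 1.5

print(H_K / D)   # ≈ 28 characters of ciphertext needed for a unique decipherment
```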

Information theory leads us to believe it is much more difficult to keep secrets than it might first appear. A brute force attack can break systems based on asymmetric key algorithms or on most commonly used methods of symmetric key algorithms (sometimes called secret key algorithms), such as block ciphers. The security of all such methods comes from the assumption that no known attack can break them in a practical amount of time.

Information theoretic security refers to methods such as the one-time pad that are not vulnerable to such brute force attacks. In such cases, the positive conditional mutual information between the plaintext and ciphertext (conditioned on the key) can ensure proper transmission, while the unconditional mutual information between the plaintext and ciphertext remains zero, resulting in absolutely secure communications. In other words, an eavesdropper would not be able to improve his or her guess of the plaintext by gaining knowledge of the ciphertext but not of the key. However, as in any other cryptographic system, care must be used to correctly apply even information-theoretically secure methods; the Venona project was able to crack the one-time pads of the Soviet Union due to their improper reuse of key material.

Pseudorandom number generation


Pseudorandom number generators are widely available in computer language libraries and application programs. They are, almost universally, unsuited to cryptographic use as they do not evade the deterministic nature of modern computer equipment and software. A class of improved random number generators is termed cryptographically secure pseudorandom number generators, but even they require random seeds external to the software to work as intended. These can be obtained via extractors, if done carefully. The measure of sufficient randomness in extractors is min-entropy, a value related to Shannon entropy through Rényi entropy; Rényi entropy is also used in evaluating randomness in cryptographic systems. Although related, the distinctions among these measures mean that a random variable with high Shannon entropy is not necessarily satisfactory for use in an extractor and so for cryptography uses.
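The gap between Shannon entropy and min-entropy can be made concrete with a small sketch; the skewed distribution below is an arbitrary illustration of why an extractor is judged by min-entropy rather than by average-case (Shannon) entropy.

```python
import math

def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def min_entropy(probs):
    """H_min = -log2(max_i p_i), the Rényi entropy of order infinity."""
    return -math.log2(max(probs))

# A hypothetical skewed distribution: one outcome has probability 1/2 and the
# remaining mass is spread uniformly over 2**20 outcomes.
probs = [0.5] + [0.5 / 2 ** 20] * 2 ** 20

print(shannon_entropy(probs))  # ≈ 11 bits: looks quite random on average
print(min_entropy(probs))      # = 1 bit: a guesser is right half the time
```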

Seismic exploration


One early commercial application of information theory was in the field of seismic oil exploration. Work in this field made it possible to strip off and separate the unwanted noise from the desired seismic signal. Information theory and digital signal processing offer a major improvement of resolution and image clarity over previous analog methods.[42]

Semiotics


Semioticians Doede Nauta and Winfried Nöth both considered Charles Sanders Peirce as having created a theory of information in his works on semiotics.[43]: 171 [44]: 137  Nauta defined semiotic information theory as the study of "the internal processes of coding, filtering, and information processing."[43]: 91

Concepts from information theory such as redundancy and code control have been used by semioticians such as Umberto Eco and Ferruccio Rossi-Landi to explain ideology as a form of message transmission whereby a dominant social class emits its message by using signs that exhibit a high degree of redundancy such that only one message is decoded among a selection of competing ones.[45]

Integrated process organization of neural information


Quantitative information-theoretic methods have been applied in cognitive science to analyze the integrated process organization of neural information in the context of the binding problem in cognitive neuroscience.[46] In this context, an information-theoretical measure is defined on the basis of a reentrant process organization, i.e. the synchronization of neurophysiological activity between groups of neuronal populations; examples are functional clusters (Gerald Edelman and Giulio Tononi's functional clustering model and dynamic core hypothesis (DCH)[47]) and effective information (Tononi's integrated information theory (IIT) of consciousness[48][49][50]). Alternatively, a measure of the minimization of free energy is defined on the basis of statistical methods (Karl J. Friston's free energy principle (FEP), an information-theoretical measure which states that every adaptive change in a self-organized system leads to a minimization of free energy, and the Bayesian brain hypothesis[51][52][53][54][55]).

Miscellaneous applications


Information theory also has applications in the search for extraterrestrial intelligence,[56] black holes,[57] bioinformatics,[58] and gambling.[59][60]

References

  1. ^Schneider, Thomas D. (2006)."Claude Shannon: Biologist".IEEE Engineering in Medicine and Biology Magazine: The Quarterly Magazine of the Engineering in Medicine & Biology Society.25 (1):30–33.doi:10.1109/memb.2006.1578661.ISSN 0739-5175.PMC 1538977.PMID 16485389.
  2. ^abCruces, Sergio; Martín-Clemente, Rubén; Samek, Wojciech (2019-07-03)."Information Theory Applications in Signal Processing".Entropy.21 (7): 653.Bibcode:2019Entrp..21..653C.doi:10.3390/e21070653.ISSN 1099-4300.PMC 7515149.PMID 33267367.
  3. ^abBaleanu, D.; Balas, Valentina Emilia; Agarwal, Praveen, eds. (2023).Fractional Order Systems and Applications in Engineering. Advanced Studies in Complex Systems. London, United Kingdom: Academic Press. p. 23.ISBN 978-0-323-90953-2.OCLC 1314337815.
  4. ^Horgan, John (2016-04-27)."Claude Shannon: Tinkerer, Prankster, and Father of Information Theory".IEEE. Retrieved2024-11-08.
  5. ^Shi, Zhongzhi (2011).Advanced Artificial Intelligence.World Scientific Publishing. p. 2.doi:10.1142/7547.ISBN 978-981-4291-34-7.
  6. ^Sinha, Sudhi; Al Huraimel, Khaled (2020-10-20).Reimagining Businesses with AI (1 ed.). Wiley. p. 4.doi:10.1002/9781119709183.ISBN 978-1-119-70915-2.
  7. ^Burnham, K. P.; Anderson, D. R. (2002).Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach (Second ed.). New York: Springer Science.ISBN 978-0-387-95364-9.
  8. ^abF. Rieke; D. Warland; R Ruyter van Steveninck; W Bialek (1997).Spikes: Exploring the Neural Code. The MIT press.ISBN 978-0262681087.
  9. ^Delgado-Bonal, Alfonso; Martín-Torres, Javier (2016-11-03)."Human vision is determined based on information theory".Scientific Reports.6 (1): 36038.Bibcode:2016NatSR...636038D.doi:10.1038/srep36038.ISSN 2045-2322.PMC 5093619.PMID 27808236.
  10. ^cf; Huelsenbeck, J. P.; Ronquist, F.; Nielsen, R.; Bollback, J. P. (2001). "Bayesian inference of phylogeny and its impact on evolutionary biology".Science.294 (5550):2310–2314.Bibcode:2001Sci...294.2310H.doi:10.1126/science.1065889.PMID 11743192.S2CID 2138288.
  11. ^Allikmets, Rando; Wasserman, Wyeth W.; Hutchinson, Amy; Smallwood, Philip; Nathans, Jeremy; Rogan, Peter K. (1998)."Thomas D. Schneider], Michael Dean (1998) Organization of the ABCR gene: analysis of promoter and splice junction sequences".Gene.215 (1):111–122.doi:10.1016/s0378-1119(98)00269-8.PMID 9666097.
  12. ^Jaynes, E. T. (1957)."Information Theory and Statistical Mechanics".Phys. Rev.106 (4): 620.Bibcode:1957PhRv..106..620J.doi:10.1103/physrev.106.620.S2CID 17870175.
  13. ^Talaat, Khaled; Cowen, Benjamin; Anderoglu, Osman (2020-10-05)."Method of information entropy for convergence assessment of molecular dynamics simulations".Journal of Applied Physics.128 (13): 135102.Bibcode:2020JAP...128m5102T.doi:10.1063/5.0019078.OSTI 1691442.S2CID 225010720.
  14. ^Bennett, Charles H.; Li, Ming; Ma, Bin (2003)."Chain Letters and Evolutionary Histories".Scientific American.288 (6):76–81.Bibcode:2003SciAm.288f..76B.doi:10.1038/scientificamerican0603-76.PMID 12764940. Archived fromthe original on 2007-10-07. Retrieved2008-03-11.
  15. ^David R. Anderson (November 1, 2003)."Some background on why people in the empirical sciences may want to better understand the information-theoretic methods"(PDF). Archived fromthe original(PDF) on July 23, 2011. Retrieved2010-06-23.
  16. ^Loy, D. Gareth (2017), Pareyon, Gabriel; Pina-Romero, Silvia; Agustín-Aquino, Octavio A.; Lluis-Puebla, Emilio (eds.),"Music, Expectation, and Information Theory",The Musical-Mathematical Mind: Patterns and Transformations, Computational Music Science, Cham: Springer International Publishing, pp. 161–169,doi:10.1007/978-3-319-47337-6_17,ISBN 978-3-319-47337-6, retrieved2024-09-19
  17. ^Rocamora, Martín; Cancela, Pablo; Biscainho, Luiz (2019-04-05)."Information Theory Concepts Applied to the Analysis of Rhythm in Recorded Music with Recurrent Rhythmic Patterns".Journal of the Audio Engineering Society.67 (4):160–173.doi:10.17743/jaes.2019.0003.
  18. ^Marsden, Alan (2020)."New Prospects for Information Theory in Arts Research".Leonardo.53 (3):274–280.doi:10.1162/leon_a_01860.ISSN 0024-094X.
  19. ^Pinkard, Henry; Kabuli, Leyla; Markley, Eric; Chien, Tiffany; Jiao, Jiantao; Waller, Laura (2024). "Universal evaluation and design of imaging systems using information estimation".arXiv:2405.20559 [physics.optics].
  20. ^Wing, Simon; Johnson, Jay R. (2019-02-01)."Applications of Information Theory in Solar and Space Physics".Entropy.21 (2): 140.Bibcode:2019Entrp..21..140W.doi:10.3390/e21020140.ISSN 1099-4300.PMC 7514618.PMID 33266856.
  21. ^Kak, Subhash (2020-11-26)."Information theory and dimensionality of space".Scientific Reports.10 (1): 20733.doi:10.1038/s41598-020-77855-9.ISSN 2045-2322.PMC 7693271.PMID 33244156.
  22. ^Harms, William F. (1998)."The Use of Information Theory in Epistemology".Philosophy of Science.65 (3):472–501.doi:10.1086/392657.ISSN 0031-8248.JSTOR 188281.
  23. ^Gleick 2011, pp. 3–4.
  24. ^Horgan, John (2016-04-27)."Claude Shannon: Tinkerer, Prankster, and Father of Information Theory".IEEE. Retrieved2023-09-30.
  25. ^Roberts, Siobhan (2016-04-30)."The Forgotten Father of the Information Age".The New Yorker.ISSN 0028-792X. Retrieved2023-09-30.
  26. ^abTse, David (2020-12-22)."How Claude Shannon Invented the Future".Quanta Magazine. Retrieved2023-09-30.
  27. ^Braverman, Mark (September 19, 2011)."Information Theory in Computer Science"(PDF).
  28. ^Reza 1994.
  29. ^Ash 1990.
  30. ^abMassey, James (1990), "Causality, Feedback And Directed Information",Proc. 1990 Intl. Symp. on Info. Th. and its Applications,CiteSeerX 10.1.1.36.5688
  31. ^Permuter, Haim Henry; Weissman, Tsachy; Goldsmith, Andrea J. (February 2009). "Finite State Channels With Time-Invariant Deterministic Feedback".IEEE Transactions on Information Theory.55 (2):644–662.arXiv:cs/0608070.doi:10.1109/TIT.2008.2009849.S2CID 13178.
  32. ^Kramer, G. (January 2003). "Capacity results for the discrete memoryless network".IEEE Transactions on Information Theory.49 (1):4–21.doi:10.1109/TIT.2002.806135.
  33. ^Permuter, Haim H.; Kim, Young-Han; Weissman, Tsachy (June 2011). "Interpretations of Directed Information in Portfolio Theory, Data Compression, and Hypothesis Testing".IEEE Transactions on Information Theory.57 (6):3248–3259.arXiv:0912.4872.doi:10.1109/TIT.2011.2136270.S2CID 11722596.
  34. ^Simeone, Osvaldo; Permuter, Haim Henri (June 2013). "Source Coding When the Side Information May Be Delayed".IEEE Transactions on Information Theory.59 (6):3607–3618.arXiv:1109.1293.doi:10.1109/TIT.2013.2248192.S2CID 3211485.
  35. ^Charalambous, Charalambos D.; Stavrou, Photios A. (August 2016). "Directed Information on Abstract Spaces: Properties and Variational Equalities".IEEE Transactions on Information Theory.62 (11):6019–6052.arXiv:1302.3971.doi:10.1109/TIT.2016.2604846.S2CID 8107565.
  36. ^Tanaka, Takashi; Esfahani, Peyman Mohajerin; Mitter, Sanjoy K. (January 2018)."LQG Control With Minimum Directed Information: Semidefinite Programming Approach".IEEE Transactions on Automatic Control.63 (1):37–52.arXiv:1510.04214.doi:10.1109/TAC.2017.2709618.S2CID 1401958.Archived from the original on Apr 12, 2024 – via TU Delft Repositories.
  37. ^Vinkler, Dror A; Permuter, Haim H; Merhav, Neri (20 April 2016). "Analogy between gambling and measurement-based work extraction".Journal of Statistical Mechanics: Theory and Experiment.2016 (4): 043403.arXiv:1404.6788.Bibcode:2016JSMTE..04.3403V.doi:10.1088/1742-5468/2016/04/043403.S2CID 124719237.
  38. ^Jerry D. Gibson (1998).Digital Compression for Multimedia: Principles and Standards. Morgan Kaufmann.ISBN 1-55860-369-7.
  39. ^Permuter, Haim Henry; Weissman, Tsachy; Goldsmith, Andrea J. (February 2009). "Finite State Channels With Time-Invariant Deterministic Feedback".IEEE Transactions on Information Theory.55 (2):644–662.arXiv:cs/0608070.doi:10.1109/TIT.2008.2009849.S2CID 13178.
  40. ^Bartlett, Stephen D.; Rudolph, Terry;Spekkens, Robert W. (April–June 2007). "Reference frames, superselection rules, and quantum information".Reviews of Modern Physics.79 (2):555–606.arXiv:quant-ph/0610030.Bibcode:2007RvMP...79..555B.doi:10.1103/RevModPhys.79.555.
  41. ^Peres, A.; P. F. Scudo (2002b). A. Khrennikov (ed.).Quantum Theory: Reconsideration of Foundations. Växjö University Press, Växjö, Sweden. p. 283.
  42. ^Haggerty, Patrick E. (1981). "The corporation and innovation".Strategic Management Journal.2 (2):97–118.doi:10.1002/smj.4250020202.
  43. ^abNauta, Doede (1972).The Meaning of Information. The Hague: Mouton.ISBN 9789027919960.
  44. ^Nöth, Winfried (January 2012)."Charles S. Peirce's theory of information: a theory of the growth of symbols and of knowledge".Cybernetics and Human Knowing.19 (1–2):137–161.
  45. ^Nöth, Winfried (1981). "Semiotics of ideology".Semiotica, Issue 148.
  46. ^Maurer, H. (2021). "Chapter 10: Systematic Class of Information Based Architecture Types".Cognitive Science: Integrative Synchronization Mechanisms in Cognitive Neuroarchitectures of the Modern Connectionism. Boca Raton/FL: CRC Press.doi:10.1201/9781351043526.ISBN 978-1-351-04352-6.
  47. ^Edelman, G.M.; Tononi, G. (2000).A Universe of Consciousness: How Matter Becomes Imagination. New York: Basic Books.ISBN 978-0465013777.
  48. ^Tononi, G.; Sporns, O. (2003)."Measuring information integration".BMC Neuroscience.4:1–20.doi:10.1186/1471-2202-4-31.PMC 331407.PMID 14641936.
  49. ^Tononi, G. (2004a)."An information integration theory of consciousness".BMC Neuroscience.5:1–22.doi:10.1186/1471-2202-5-42.PMC 543470.PMID 15522121.
  50. ^Tononi, G. (2004b)."Consciousness and the brain: theoretical aspects". In Adelman, G.; Smith, B. (eds.).Encyclopedia of Neuroscience (3rd ed.). Amsterdam, Oxford: Elsevier.ISBN 0-444-51432-5.Archived(PDF) from the original on 2023-12-02.
  51. ^Friston, K.; Stephan, K.E. (2007)."Free-energy and the brain".Synthese.159 (3):417–458.doi:10.1007/s11229-007-9237-y.PMC 2660582.PMID 19325932.
  52. ^Friston, K. (2010). "The free-energy principle: a unified brain theory".Nature Reviews Neuroscience.11 (2):127–138.doi:10.1038/nrn2787.PMID 20068583.
  53. ^Friston, K.; Breakstear, M.; Deco, G. (2012)."Perception and self-organized instability".Frontiers in Computational Neuroscience.6:1–19.doi:10.3389/fncom.2012.00044.PMC 3390798.PMID 22783185.
  54. ^Friston, K. (2013)."Life as we know it".Journal of the Royal Society Interface.10 (86): 20130475.doi:10.1098/rsif.2013.0475.PMC 3730701.PMID 23825119.
  55. ^Kirchhoff, M.; Parr, T.; Palacios, E.; Friston, K.; Kiverstein, J. (2018)."The Markov blankets of life: autonomy, active inference and the free energy principle".Journal of the Royal Society Interface.15 (138): 20170792.doi:10.1098/rsif.2017.0792.PMC 5805980.PMID 29343629.
  56. ^Doyle, Laurance R.;McCowan, Brenda; Johnston, Simon; Hanser, Sean F. (February 2011). "Information theory, animal communication, and the search for extraterrestrial intelligence".Acta Astronautica.68 (3–4):406–417.Bibcode:2011AcAau..68..406D.doi:10.1016/j.actaastro.2009.11.018.
  57. ^Bekenstein, Jacob D (2004)."Black holes and information theory".Contemporary Physics.45 (1):31–43.arXiv:quant-ph/0311049.Bibcode:2004ConPh..45...31B.doi:10.1080/00107510310001632523.ISSN 0010-7514.
  58. ^Vinga, Susana (2014-05-01)."Information theory applications for biological sequence analysis".Briefings in Bioinformatics.15 (3):376–389.doi:10.1093/bib/bbt068.ISSN 1467-5463.PMC 7109941.PMID 24058049.
  59. ^Thorp, Edward O. (2008-01-01), Zenios, S. A.; Ziemba, W. T. (eds.),"The kelly criterion in blackjack sports betting, and the stock market*",Handbook of Asset and Liability Management, San Diego: North-Holland, pp. 385–428,doi:10.1016/b978-044453248-0.50015-0,ISBN 978-0-444-53248-0, retrieved2025-01-20
  60. ^Haigh, John (2000)."The Kelly Criterion and Bet Comparisons in Spread Betting".Journal of the Royal Statistical Society, Series D (The Statistician).49 (4):531–539.doi:10.1111/1467-9884.00251.ISSN 1467-9884.

Further reading


Other journal articles

  • J. L. Kelly Jr., Princeton, "A New Interpretation of Information Rate", Bell System Technical Journal, Vol. 35, July 1956, pp. 917–26.
  • R. Landauer, IEEE.org, "Information is Physical", Proc. Workshop on Physics and Computation PhysComp'92 (IEEE Comp. Sci. Press, Los Alamitos, 1993) pp. 1–4.
  • Landauer, R. (1961)."Irreversibility and Heat Generation in the Computing Process"(PDF).IBM J. Res. Dev.5 (3):183–191.doi:10.1147/rd.53.0183.
  • Timme, Nicholas; Alford, Wesley; Flecker, Benjamin; Beggs, John M. (2012). "Multivariate information measures: an experimentalist's perspective".arXiv:1111.6857 [cs.IT].

