Channel capacity

From Wikipedia, the free encyclopedia
Information-theoretical limit on transmission rate in a communication channel

Channel capacity, in electrical engineering, computer science, and information theory, is the theoretical maximum rate at which information can be reliably transmitted over a communication channel.

By the noisy-channel coding theorem, the channel capacity of a given channel is the highest information rate (in units of information per unit time) that can be achieved with arbitrarily small error probability.[1][2]

Information theory, developed by Claude E. Shannon in 1948, defines the notion of channel capacity and provides a mathematical model by which it may be computed. The key result states that the capacity of the channel, as defined above, is given by the maximum of the mutual information between the input and output of the channel, where the maximization is with respect to the input distribution.[3]

The notion of channel capacity has been central to the development of modern wireline and wireless communication systems, with the advent of novel error-correction coding mechanisms that have resulted in performance very close to the limits promised by channel capacity.

Formal definition


The basic mathematical model for a communication system is the following:

$$\xrightarrow[\text{Message}]{W}\;\boxed{\begin{array}{c}\text{Encoder}\\ f_{n}\end{array}}\;\xrightarrow[\text{Encoded sequence}]{X^{n}}\;\boxed{\begin{array}{c}\text{Channel}\\ p(y|x)\end{array}}\;\xrightarrow[\text{Received sequence}]{Y^{n}}\;\boxed{\begin{array}{c}\text{Decoder}\\ g_{n}\end{array}}\;\xrightarrow[\text{Estimated message}]{\hat{W}}$$

where:

- $W$ is the message to be transmitted;
- $f_{n}$ is the encoding function for a block of length $n$;
- $X^{n}$ is the encoded (channel input) sequence;
- $p(y|x)$ is the noisy channel, modeled by a conditional probability distribution;
- $Y^{n}$ is the received (channel output) sequence;
- $g_{n}$ is the decoding function;
- $\hat{W}$ is the estimate of the transmitted message.

Let $X$ and $Y$ be modeled as random variables. Furthermore, let $p_{Y|X}(y|x)$ be the conditional probability distribution function of $Y$ given $X$, which is an inherent fixed property of the communication channel. Then the choice of the marginal distribution $p_{X}(x)$ completely determines the joint distribution $p_{X,Y}(x,y)$ due to the identity

$$p_{X,Y}(x,y)=p_{Y|X}(y|x)\,p_{X}(x),$$

which, in turn, induces a mutual information $I(X;Y)$. The channel capacity is defined as

$$C=\sup_{p_{X}(x)}I(X;Y),$$

where the supremum is taken over all possible choices of $p_{X}(x)$.
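
For intuition, the definition can be evaluated numerically on a small channel. The following is a minimal sketch (not from the cited sources; it assumes Python with NumPy, and the crossover probability is an illustrative choice) that approximates the capacity of a binary symmetric channel by sweeping over input distributions and evaluating $I(X;Y)$; the result should match the known closed form $1-H_b(\varepsilon)\approx 0.531$ bits per channel use for $\varepsilon=0.1$.

```python
import numpy as np

def mutual_information(p_x, channel):
    """I(X;Y) in bits for input pmf p_x and channel matrix channel[x, y] = p(y|x)."""
    p_xy = p_x[:, None] * channel                 # joint distribution p(x, y)
    p_y = p_xy.sum(axis=0)                        # output marginal p(y)
    mask = p_xy > 0
    return np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x[:, None] * p_y[None, :])[mask]))

eps = 0.1                                          # crossover probability, illustrative
bsc = np.array([[1 - eps, eps],
                [eps, 1 - eps]])                   # p(y|x) for the binary symmetric channel

# Sweep the input distribution p_X = (p, 1 - p) and keep the largest I(X;Y).
ps = np.linspace(1e-6, 1 - 1e-6, 1001)
capacity = max(mutual_information(np.array([p, 1 - p]), bsc) for p in ps)

closed_form = 1 + eps * np.log2(eps) + (1 - eps) * np.log2(1 - eps)   # 1 - H_b(eps)
print(f"grid search: {capacity:.4f} bits/use, closed form: {closed_form:.4f} bits/use")
```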

Additivity of channel capacity


Channel capacity is additive over independent channels.[4] This means that using two independent channels in a combined manner provides the same theoretical capacity as using them independently. More formally, let $p_{1}$ and $p_{2}$ be two independent channels modelled as above; $p_{1}$ has an input alphabet $\mathcal{X}_{1}$ and an output alphabet $\mathcal{Y}_{1}$, and likewise for $p_{2}$. We define the product channel $p_{1}\times p_{2}$ by

$$\forall (x_{1},x_{2})\in(\mathcal{X}_{1},\mathcal{X}_{2}),\;(y_{1},y_{2})\in(\mathcal{Y}_{1},\mathcal{Y}_{2}),\quad (p_{1}\times p_{2})\bigl((y_{1},y_{2})\mid(x_{1},x_{2})\bigr)=p_{1}(y_{1}\mid x_{1})\,p_{2}(y_{2}\mid x_{2}).$$

This theorem states:

$$C(p_{1}\times p_{2})=C(p_{1})+C(p_{2})$$

Proof

We first show that $C(p_{1}\times p_{2})\geq C(p_{1})+C(p_{2})$.

Let $X_{1}$ and $X_{2}$ be two independent random variables. Let $Y_{1}$ be a random variable corresponding to the output of $X_{1}$ through the channel $p_{1}$, and $Y_{2}$ for $X_{2}$ through $p_{2}$.

By definition, $C(p_{1}\times p_{2})=\sup_{p_{X_{1},X_{2}}}I(X_{1},X_{2};Y_{1},Y_{2})$.

Since $X_{1}$ and $X_{2}$ are independent, as are the channels $p_{1}$ and $p_{2}$, $(X_{1},Y_{1})$ is independent of $(X_{2},Y_{2})$. We can apply the following property of mutual information: $I(X_{1},X_{2};Y_{1},Y_{2})=I(X_{1};Y_{1})+I(X_{2};Y_{2})$.

For now we only need to find a distribution $p_{X_{1},X_{2}}$ such that $I(X_{1},X_{2};Y_{1},Y_{2})\geq I(X_{1};Y_{1})+I(X_{2};Y_{2})$. In fact, $\pi_{1}$ and $\pi_{2}$, two probability distributions for $X_{1}$ and $X_{2}$ achieving $C(p_{1})$ and $C(p_{2})$, suffice:

$$C(p_{1}\times p_{2})\geq I(X_{1},X_{2};Y_{1},Y_{2})=I(X_{1};Y_{1})+I(X_{2};Y_{2})=C(p_{1})+C(p_{2}),$$

i.e., $C(p_{1}\times p_{2})\geq C(p_{1})+C(p_{2})$.

Now let us show that $C(p_{1}\times p_{2})\leq C(p_{1})+C(p_{2})$.

Let $\pi_{12}$ be some distribution for the channel $p_{1}\times p_{2}$ defining $(X_{1},X_{2})$ and the corresponding output $(Y_{1},Y_{2})$. Let $\mathcal{X}_{1}$ be the alphabet of $X_{1}$, $\mathcal{Y}_{1}$ that of $Y_{1}$, and analogously $\mathcal{X}_{2}$ and $\mathcal{Y}_{2}$.

By definition of mutual information, we have

$$\begin{aligned}I(X_{1},X_{2};Y_{1},Y_{2})&=H(Y_{1},Y_{2})-H(Y_{1},Y_{2}\mid X_{1},X_{2})\\&\leq H(Y_{1})+H(Y_{2})-H(Y_{1},Y_{2}\mid X_{1},X_{2})\end{aligned}$$

Let us rewrite the last term, the conditional entropy.

$$H(Y_{1},Y_{2}\mid X_{1},X_{2})=\sum_{(x_{1},x_{2})\in\mathcal{X}_{1}\times\mathcal{X}_{2}}\mathbb{P}(X_{1},X_{2}=x_{1},x_{2})\,H(Y_{1},Y_{2}\mid X_{1},X_{2}=x_{1},x_{2})$$

By definition of the product channel, $\mathbb{P}(Y_{1},Y_{2}=y_{1},y_{2}\mid X_{1},X_{2}=x_{1},x_{2})=\mathbb{P}(Y_{1}=y_{1}\mid X_{1}=x_{1})\,\mathbb{P}(Y_{2}=y_{2}\mid X_{2}=x_{2})$. For a given pair $(x_{1},x_{2})$, we can rewrite $H(Y_{1},Y_{2}\mid X_{1},X_{2}=x_{1},x_{2})$ as:

$$\begin{aligned}H(Y_{1},Y_{2}\mid X_{1},X_{2}=x_{1},x_{2})&=-\sum_{(y_{1},y_{2})\in\mathcal{Y}_{1}\times\mathcal{Y}_{2}}\mathbb{P}(Y_{1},Y_{2}=y_{1},y_{2}\mid X_{1},X_{2}=x_{1},x_{2})\log\mathbb{P}(Y_{1},Y_{2}=y_{1},y_{2}\mid X_{1},X_{2}=x_{1},x_{2})\\&=-\sum_{(y_{1},y_{2})\in\mathcal{Y}_{1}\times\mathcal{Y}_{2}}\mathbb{P}(Y_{1},Y_{2}=y_{1},y_{2}\mid X_{1},X_{2}=x_{1},x_{2})\bigl[\log\mathbb{P}(Y_{1}=y_{1}\mid X_{1}=x_{1})+\log\mathbb{P}(Y_{2}=y_{2}\mid X_{2}=x_{2})\bigr]\\&=H(Y_{1}\mid X_{1}=x_{1})+H(Y_{2}\mid X_{2}=x_{2})\end{aligned}$$

Multiplying this equality by $\mathbb{P}(X_{1},X_{2}=x_{1},x_{2})$ and summing over all $(x_{1},x_{2})$, we obtain $H(Y_{1},Y_{2}\mid X_{1},X_{2})=H(Y_{1}\mid X_{1})+H(Y_{2}\mid X_{2})$.

We can now give an upper bound over mutual information:

$$\begin{aligned}I(X_{1},X_{2};Y_{1},Y_{2})&\leq H(Y_{1})+H(Y_{2})-H(Y_{1}\mid X_{1})-H(Y_{2}\mid X_{2})\\&=I(X_{1};Y_{1})+I(X_{2};Y_{2})\end{aligned}$$

This relation is preserved at the supremum. Therefore

$$C(p_{1}\times p_{2})\leq C(p_{1})+C(p_{2})$$

Combining the two inequalities we proved, we obtain the result of the theorem:

$$C(p_{1}\times p_{2})=C(p_{1})+C(p_{2})$$
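
As an illustrative numerical check (assuming two binary symmetric channels with crossover probabilities $\varepsilon_{1}=0.1$ and $\varepsilon_{2}=0.2$, whose capacities have the closed form $C=1-H_b(\varepsilon)$):

$$C(p_{1})=1-H_b(0.1)\approx 0.531,\qquad C(p_{2})=1-H_b(0.2)\approx 0.278\ \text{bits per channel use},$$

so the product channel, which uses both channels once per channel use, has capacity $C(p_{1}\times p_{2})\approx 0.809$ bits per joint channel use.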

Shannon capacity of a graph

Main article:Shannon capacity of a graph

If G is an undirected graph, it can be used to define a communications channel in which the symbols are the graph vertices, and two codewords may be confused with each other if their symbols in each position are equal or adjacent. The computational complexity of finding the Shannon capacity of such a channel remains open, but it can be upper bounded by another important graph invariant, the Lovász number.[5]
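
The Lovász number itself can be computed as a semidefinite program. Below is a minimal sketch (an illustration, not part of the cited source; it assumes Python with the cvxpy package and an installed SDP solver) of one standard formulation, evaluated on the 5-cycle C5, for which both the Lovász number and the Shannon capacity equal √5 ≈ 2.236.

```python
import cvxpy as cp
import numpy as np

# 5-cycle C5: vertices 0..4, edges between consecutive vertices (mod 5).
n = 5
edges = [(i, (i + 1) % n) for i in range(n)]

# One standard SDP formulation of the Lovász number theta(G):
#   maximize   sum_{i,j} B_ij
#   subject to B positive semidefinite, trace(B) = 1, B_ij = 0 for every edge {i, j}.
B = cp.Variable((n, n), PSD=True)
constraints = [cp.trace(B) == 1] + [B[i, j] == 0 for (i, j) in edges]
problem = cp.Problem(cp.Maximize(cp.sum(B)), constraints)
problem.solve()

print(f"Lovasz number of C5: {problem.value:.4f} (exact value sqrt(5) = {np.sqrt(5):.4f})")
```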

Noisy-channel coding theorem


The noisy-channel coding theorem states that for any error probability ε > 0 and for any transmission rate R less than the channel capacity C, there is an encoding and decoding scheme transmitting data at rate R whose error probability is less than ε, for a sufficiently large block length. Conversely, for any rate greater than the channel capacity, the probability of error at the receiver cannot be made arbitrarily small; it is bounded away from zero and, by the strong converse for discrete memoryless channels, tends to one as the block length goes to infinity.

Example application


An application of the channel capacity concept to an additive white Gaussian noise (AWGN) channel with B Hz bandwidth and signal-to-noise ratio S/N is the Shannon–Hartley theorem:

$$C=B\log_{2}\!\left(1+\frac{S}{N}\right)$$

C is measured in bits per second if the logarithm is taken in base 2, or nats per second if the natural logarithm is used, assuming B is in hertz; the signal and noise powers S and N are expressed in a linear power unit (like watts or volts²). Since S/N figures are often cited in dB, a conversion may be needed. For example, a signal-to-noise ratio of 30 dB corresponds to a linear power ratio of $10^{30/10}=10^{3}=1000$.
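
As a quick worked example (illustrative figures only, assuming Python's standard library): a channel with B = 2 MHz of bandwidth at 30 dB SNR.

```python
import math

bandwidth_hz = 2e6                       # illustrative bandwidth B = 2 MHz
snr_db = 30.0                            # signal-to-noise ratio in dB

snr_linear = 10 ** (snr_db / 10)         # 30 dB -> a power ratio of 1000
capacity_bps = bandwidth_hz * math.log2(1 + snr_linear)

print(f"S/N = {snr_linear:.0f}, C = {capacity_bps / 1e6:.2f} Mbit/s")   # about 19.93 Mbit/s
```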

Channel capacity estimation


To determine the channel capacity, it is necessary to find the capacity-achieving distribution $p_{X}(x)$ and evaluate the mutual information $I(X;Y)$. Research has mostly focused on studying additive noise channels under certain power constraints and noise distributions, since analytical methods are not feasible in most other scenarios. Hence, alternative approaches such as investigation of the input support,[6] relaxations[7] and capacity bounds[8] have been proposed in the literature.

The capacity of a discrete memoryless channel can be computed using the Blahut–Arimoto algorithm.
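
Below is a minimal sketch of the Blahut–Arimoto iteration (assuming Python with NumPy; the stopping tolerance and the binary-symmetric-channel test case are illustrative choices). It alternates between computing the output distribution induced by the current input distribution and re-weighting the input distribution by the exponentiated Kullback–Leibler divergences, which converges to the capacity-achieving input.

```python
import numpy as np

def blahut_arimoto(channel, tol=1e-12, max_iter=10_000):
    """Capacity (bits/channel use) of a DMC with transition matrix channel[x, y] = p(y|x)."""
    channel = np.asarray(channel, dtype=float)
    p_x = np.full(channel.shape[0], 1.0 / channel.shape[0])   # start from the uniform input
    capacity_nats = 0.0
    for _ in range(max_iter):
        q_y = p_x @ channel                                   # output distribution under p_x
        # d[x] = exp( D( p(.|x) || q ) ), with the convention 0 * log 0 = 0
        ratio = np.divide(channel, q_y[None, :], out=np.ones_like(channel), where=channel > 0)
        d = np.exp(np.sum(channel * np.log(ratio), axis=1))
        capacity_nats = np.log(p_x @ d)                       # lower bound; tends to C
        p_new = p_x * d / (p_x @ d)                           # Blahut-Arimoto re-weighting step
        if np.max(np.abs(p_new - p_x)) < tol:
            p_x = p_new
            break
        p_x = p_new
    return capacity_nats / np.log(2), p_x

eps = 0.1
bsc = np.array([[1 - eps, eps], [eps, 1 - eps]])              # binary symmetric channel
C, p_star = blahut_arimoto(bsc)
print(f"C = {C:.6f} bits/use (closed form: {1 + eps*np.log2(eps) + (1-eps)*np.log2(1-eps):.6f})")
print("capacity-achieving input:", np.round(p_star, 4))
```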

Deep learning can be used to estimate the channel capacity. In fact, the channel capacity and the capacity-achieving distribution of any discrete-time continuous memoryless vector channel can be obtained using CORTICAL,[9] a cooperative framework inspired by generative adversarial networks. CORTICAL consists of two cooperative networks: a generator that learns to sample from the capacity-achieving input distribution, and a discriminator that learns to distinguish between paired and unpaired channel input-output samples and estimates $I(X;Y)$.

Channel capacity in wireless communications


This section[10] focuses on the single-antenna, point-to-point scenario. For channel capacity in systems with multiple antennas, see the article on MIMO.

Bandlimited AWGN channel

Main article:Shannon–Hartley theorem
AWGN channel capacity with the power-limited regime and bandwidth-limited regime indicated; here $\frac{\bar{P}}{N_{0}}=1$, and $B$ and $C$ can be scaled proportionally for other values.

If the average received power is $\bar{P}$ [W], the total bandwidth is $W$ in hertz, and the noise power spectral density is $N_{0}$ [W/Hz], the AWGN channel capacity is

$$C_{\text{AWGN}}=W\log_{2}\!\left(1+\frac{\bar{P}}{N_{0}W}\right)\ \text{[bits/s]},$$

where $\frac{\bar{P}}{N_{0}W}$ is the received signal-to-noise ratio (SNR). This result is known as the Shannon–Hartley theorem.[11]

When the SNR is large (SNR ≫ 0 dB), the capacity $C\approx W\log_{2}\frac{\bar{P}}{N_{0}W}$ is logarithmic in power and approximately linear in bandwidth. This is called the bandwidth-limited regime.

When the SNR is small (SNR ≪ 0 dB), the capacity $C\approx\frac{\bar{P}}{N_{0}\ln 2}$ is linear in power but insensitive to bandwidth. This is called the power-limited regime.

The bandwidth-limited regime and power-limited regime are illustrated in the figure.
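
The two regimes can also be seen numerically. The sketch below (illustrative, assuming Python with NumPy and an arbitrary value of $\bar{P}/N_{0}$) sweeps the bandwidth and shows the capacity saturating at the wideband limit $\bar{P}/(N_{0}\ln 2)$.

```python
import numpy as np

p_over_n0 = 1e6                                    # illustrative value of P/N0, in hertz
bandwidths = np.array([1e4, 1e5, 1e6, 1e7, 1e8])   # bandwidth W in Hz

capacities = bandwidths * np.log2(1 + p_over_n0 / bandwidths)   # bits/s
wideband_limit = p_over_n0 / np.log(2)                          # P/(N0 ln 2), the W -> inf limit

for w, c in zip(bandwidths, capacities):
    print(f"W = {w:>9.0f} Hz  ->  C = {c / 1e6:7.3f} Mbit/s")
print(f"wideband limit P/(N0 ln 2) = {wideband_limit / 1e6:.3f} Mbit/s")
```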

Frequency-selective AWGN channel


The capacity of the frequency-selective channel is given by the so-called water-filling power allocation,

$$C_{N_{c}}=\sum_{n=0}^{N_{c}-1}\log_{2}\!\left(1+\frac{P_{n}^{*}|{\bar{h}}_{n}|^{2}}{N_{0}}\right),$$

where $P_{n}^{*}=\max\left\{\frac{1}{\lambda}-\frac{N_{0}}{|{\bar{h}}_{n}|^{2}},\,0\right\}$ and $|{\bar{h}}_{n}|^{2}$ is the gain of subchannel $n$, with $\lambda$ chosen to meet the power constraint.
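
A minimal water-filling sketch (illustrative, assuming Python with NumPy; the subchannel gains and power budget are made-up values): it bisects on the water level $1/\lambda$ until the total allocated power meets the constraint, then evaluates the capacity sum above.

```python
import numpy as np

def water_filling(gains_sq, total_power, n0=1.0, iters=100):
    """Allocate P_n* = max(1/lambda - N0/|h_n|^2, 0) so that sum(P_n*) = total_power."""
    noise_over_gain = n0 / gains_sq
    lo, hi = 0.0, total_power + noise_over_gain.max()      # bracket for the water level 1/lambda
    for _ in range(iters):                                 # bisection on the water level
        level = (lo + hi) / 2
        power = np.maximum(level - noise_over_gain, 0.0)
        if power.sum() > total_power:
            hi = level
        else:
            lo = level
    capacity = np.sum(np.log2(1 + power * gains_sq / n0))  # bits per multicarrier symbol
    return power, capacity

gains_sq = np.array([2.0, 1.0, 0.5, 0.1])    # illustrative subchannel gains |h_n|^2
power, capacity = water_filling(gains_sq, total_power=4.0)
print("allocated powers:", np.round(power, 3), " capacity:", round(capacity, 3), "bits/symbol")
```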

Slow-fading channel


In a slow-fading channel, where the coherence time is greater than the latency requirement, there is no definite capacity, as the maximum rate of reliable communication supported by the channel, $\log_{2}(1+|h|^{2}\,\mathrm{SNR})$, depends on the random channel gain $|h|^{2}$, which is unknown to the transmitter. If the transmitter encodes data at rate $R$ [bits/s/Hz], there is a non-zero probability that the decoding error probability cannot be made arbitrarily small,

$$p_{out}=\mathbb{P}\bigl(\log_{2}(1+|h|^{2}\,\mathrm{SNR})<R\bigr),$$

in which case the system is said to be in outage. With a non-zero probability that the channel is in deep fade, the capacity of the slow-fading channel in the strict sense is zero. However, it is possible to determine the largest value of $R$ such that the outage probability $p_{out}$ is less than $\epsilon$. This value is known as the $\epsilon$-outage capacity.
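
For example, under Rayleigh fading $|h|^{2}$ is exponentially distributed with unit mean, so the outage probability has a closed form and the $\epsilon$-outage capacity can be written as $R_{\epsilon}=\log_{2}\bigl(1-\mathrm{SNR}\,\ln(1-\epsilon)\bigr)$. The short sketch below (illustrative values, assuming Python with NumPy) checks this by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)
snr = 100.0                # 20 dB average SNR, illustrative
eps = 0.01                 # target outage probability

# Closed form under Rayleigh fading, |h|^2 ~ Exp(1):
r_eps = np.log2(1 - snr * np.log(1 - eps))          # eps-outage capacity, bits/s/Hz

# Monte Carlo check of the outage probability at rate r_eps
h_sq = rng.exponential(1.0, size=1_000_000)
p_out = np.mean(np.log2(1 + h_sq * snr) < r_eps)

print(f"eps-outage capacity = {r_eps:.3f} bits/s/Hz, simulated outage = {p_out:.4f} (target {eps})")
```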

Fast-fading channel


In a fast-fading channel, where the latency requirement is greater than the coherence time and the codeword length spans many coherence periods, one can average over many independent channel fades by coding over a large number of coherence time intervals. Thus, it is possible to achieve a reliable rate of communication of $\mathbb{E}\bigl(\log_{2}(1+|h|^{2}\,\mathrm{SNR})\bigr)$ [bits/s/Hz], and it is meaningful to speak of this value as the capacity of the fast-fading channel.
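
For instance, under Rayleigh fading ($|h|^{2}$ exponentially distributed with unit mean), the ergodic capacity can be estimated by Monte Carlo; the sketch below (illustrative values, assuming Python with NumPy) also compares it with the AWGN capacity at the same average SNR, which upper-bounds it by Jensen's inequality.

```python
import numpy as np

rng = np.random.default_rng(0)
snr = 100.0                                     # 20 dB average SNR, illustrative
h_sq = rng.exponential(1.0, size=1_000_000)     # Rayleigh fading: |h|^2 ~ Exp(1), unit mean

ergodic = np.mean(np.log2(1 + h_sq * snr))      # fast-fading (ergodic) capacity estimate
awgn = np.log2(1 + snr)                         # unfaded AWGN capacity at the same average SNR

print(f"ergodic capacity = {ergodic:.3f} bits/s/Hz, AWGN capacity = {awgn:.3f} bits/s/Hz")
```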

Feedback capacity


Feedback capacity is the greatest rate at which information can be reliably transmitted, per unit time, over a point-to-point communication channel in which the receiver feeds back the channel outputs to the transmitter. Information-theoretic analysis of communication systems that incorporate feedback is more complicated and challenging than that of systems without feedback. Possibly, this was the reason C. E. Shannon chose feedback as the subject of the first Shannon Lecture, delivered at the 1973 IEEE International Symposium on Information Theory in Ashkelon, Israel.

The feedback capacity is characterized by the maximum of the directed information between the channel inputs and the channel outputs, where the maximization is with respect to the causal conditioning of the input given the output. The term directed information was coined by James Massey[12] in 1990, who showed that it is an upper bound on the feedback capacity. For memoryless channels, Shannon showed[13] that feedback does not increase the capacity, and the feedback capacity coincides with the channel capacity characterized by the mutual information between the input and the output. The feedback capacity is known as a closed-form expression only for a few examples, such as the trapdoor channel[14] and the Ising channel.[15][16] For some other channels, such as the binary erasure channel with a no-consecutive-ones input constraint[17] and the NOST channel,[18] it is characterized through constant-size optimization problems.

The basic mathematical model for a communication system with feedback is the following:

(Figure: communication with feedback)

Here is the formal definition of each element (where the only difference with respect to the nonfeedback setting is the encoder definition):

- $W$ is the message to be transmitted, taken from a finite set $\mathcal{W}$;
- the encoder at time $i$ is a mapping $f_{i}:\mathcal{W}\times\mathcal{Y}^{i-1}\to\mathcal{X}$, producing the channel input $X_{i}=f_{i}(W,Y^{i-1})$;
- the decoder is a mapping $g_{n}:\mathcal{Y}^{n}\to\mathcal{W}$, producing the estimate $\hat{W}=g_{n}(Y^{n})$.

That is, at each time $i$ there is feedback of the previous output $Y_{i-1}$, so the encoder has access to all previous outputs $Y^{i-1}$. A $(2^{nR},n)$ code is a pair of encoding and decoding mappings with $\mathcal{W}=\{1,2,\dots,2^{nR}\}$, and $W$ is uniformly distributed. A rate $R$ is said to be achievable if there exists a sequence of $(2^{nR},n)$ codes such that the average probability of error $P_{e}^{(n)}\triangleq\Pr({\hat{W}}\neq W)$ tends to zero as $n\to\infty$.

The feedback capacity is denoted by $C_{\text{feedback}}$ and is defined as the supremum over all achievable rates.

Main results on feedback capacity


Let $X$ and $Y$ be modeled as random variables. The causal conditioning $P(y^{n}\|x^{n})\triangleq\prod_{i=1}^{n}P(y_{i}\mid y^{i-1},x^{i})$ describes the given channel. The choice of the causally conditional distribution $P(x^{n}\|y^{n-1})\triangleq\prod_{i=1}^{n}P(x_{i}\mid x^{i-1},y^{i-1})$ determines the joint distribution $p_{X^{n},Y^{n}}(x^{n},y^{n})$ due to the chain rule for causal conditioning[19]

$$P(y^{n},x^{n})=P(y^{n}\|x^{n})\,P(x^{n}\|y^{n-1}),$$

which, in turn, induces a directed information

$$I(X^{n}\to Y^{n})=\mathbf{E}\left[\log\frac{P(Y^{n}\|X^{n})}{P(Y^{n})}\right].$$
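
For example (a small illustrative case with $n=2$), the chain rule for causal conditioning factorizes the joint distribution as

$$P(y^{2},x^{2})=\underbrace{P(y_{1}\mid x_{1})\,P(y_{2}\mid y_{1},x_{1},x_{2})}_{P(y^{2}\|x^{2})}\;\underbrace{P(x_{1})\,P(x_{2}\mid x_{1},y_{1})}_{P(x^{2}\|y^{1})},$$

so the second channel input may depend on the fed-back output $y_{1}$ but not on $y_{2}$.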

Thefeedback capacity is given by

$$C_{\text{feedback}}=\lim_{n\to\infty}\frac{1}{n}\sup_{P_{X^{n}\|Y^{n-1}}}I(X^{n}\to Y^{n}),$$

where the supremum is taken over all possible choices of $P_{X^{n}\|Y^{n-1}}(x^{n}\|y^{n-1})$.

Gaussian feedback capacity


When the Gaussian noise is colored, the channel has memory. Consider, for instance, the simple case of an autoregressive noise process $z_{i}=z_{i-1}+w_{i}$, where $w_{i}\sim N(0,1)$ is an i.i.d. process.

Solution techniques


The feedback capacity is difficult to determine in the general case. There are some techniques, related to control theory and Markov decision processes, that apply when the channel is discrete.


References

  1. ^Saleem Bhatti."Channel capacity".Lecture notes for M.Sc. Data Communication Networks and Distributed Systems D51 -- Basic Communications and Networks. Archived fromthe original on 2007-08-21.
  2. ^Jim Lesurf."Signals look like noise!".Information and Measurement, 2nd ed.
  3. ^Thomas M. Cover, Joy A. Thomas (2006).Elements of Information Theory. John Wiley & Sons, New York.ISBN 9781118585771.
  4. ^Cover, Thomas M.; Thomas, Joy A. (2006). "Chapter 7: Channel Capacity".Elements of Information Theory (Second ed.). Wiley-Interscience. pp. 206–207.ISBN 978-0-471-24195-9.
  5. ^Lovász, László (1979), "On the Shannon Capacity of a Graph",IEEE Transactions on Information Theory, IT-25 (1):1–7,doi:10.1109/tit.1979.1055985.
  6. ^Smith, Joel G. (1971). "The information capacity of amplitude- and variance-constrained scalar gaussian channels". Information and Control. 18 (3): 203–219. doi:10.1016/S0019-9958(71)90346-9.
  7. ^Huang, J.; Meyn, S.P. (2005). "Characterization and Computation of Optimal Distributions for Channel Coding".IEEE Transactions on Information Theory.51 (7):2336–2351.doi:10.1109/TIT.2005.850108.ISSN 0018-9448.S2CID 2560689.
  8. ^McKellips, A.L. (2004). "Simple tight bounds on capacity for the peak-limited discrete-time channel". International Symposium on Information Theory, 2004. ISIT 2004. Proceedings. IEEE. p. 348. doi:10.1109/ISIT.2004.1365385. ISBN 978-0-7803-8280-0. S2CID 41462226.
  9. ^Letizia, Nunzio A.; Tonello, Andrea M.; Poor, H. Vincent (2023). "Cooperative Channel Capacity Learning".IEEE Communications Letters.27 (8):1984–1988.arXiv:2305.13493.doi:10.1109/LCOMM.2023.3282307.ISSN 1089-7798.
  10. ^David Tse, Pramod Viswanath (2005),Fundamentals of Wireless Communication, Cambridge University Press, UK,ISBN 9780521845274
  11. ^The Handbook of Electrical Engineering. Research & Education Association. 1996. p. D-149.ISBN 9780878919819.
  12. ^Massey, James (Nov 1990)."Causality, Feedback and Directed Information"(PDF).Proc. 1990 Int. Symp. On Information Theory and Its Applications (ISITA-90), Waikiki, HI.:303–305.
  13. ^Shannon, C. (September 1956). "The zero error capacity of a noisy channel".IEEE Transactions on Information Theory.2 (3):8–19.doi:10.1109/TIT.1956.1056798.
  14. ^Permuter, Haim; Cuff, Paul; Van Roy, Benjamin; Weissman, Tsachy (July 2008)."Capacity of the trapdoor channel with feedback"(PDF).IEEE Trans. Inf. Theory.54 (7):3150–3165.arXiv:cs/0610047.doi:10.1109/TIT.2008.924681.S2CID 1265.
  15. ^Elishco, Ohad; Permuter, Haim (September 2014). "Capacity and Coding for the Ising Channel With Feedback".IEEE Transactions on Information Theory.60 (9):5138–5149.arXiv:1205.4674.doi:10.1109/TIT.2014.2331951.S2CID 9761759.
  16. ^Aharoni, Ziv; Sabag, Oron; Permuter, Haim H. (September 2022). "Feedback Capacity of Ising Channels With Large Alphabet via Reinforcement Learning".IEEE Transactions on Information Theory.68 (9):5637–5656.doi:10.1109/TIT.2022.3168729.S2CID 248306743.
  17. ^Sabag, Oron; Permuter, Haim H.; Kashyap, Navin (2016). "The Feedback Capacity of the Binary Erasure Channel With a No-Consecutive-Ones Input Constraint".IEEE Transactions on Information Theory.62 (1):8–22.doi:10.1109/TIT.2015.2495239.
  18. ^Shemuel, Eli; Sabag, Oron; Permuter, Haim H. (2022). "The Feedback Capacity of Noisy Output Is the State (NOST) Channels".IEEE Transactions on Information Theory.68 (8):5044–5059.arXiv:2107.07164.doi:10.1109/TIT.2022.3165538.
  19. ^Permuter, Haim Henry; Weissman, Tsachy; Goldsmith, Andrea J. (February 2009). "Finite State Channels With Time-Invariant Deterministic Feedback".IEEE Transactions on Information Theory.55 (2):644–662.arXiv:cs/0608070.doi:10.1109/TIT.2008.2009849.S2CID 13178.