A low-complexity maximum likelihood decoder for tail-biting trellis

EURASIP Journal on Wireless Communications and Networking, volume 2013, Article number: 130 (2013)

Abstract

Tail-biting trellises are defined on a circular time axis and can have a smaller number of states than equivalent conventional trellises. Existing circular Viterbi algorithms (CVAs) on the tail-biting trellis are non-convergent and sub-optimal. In this study, we show that the net path metric of each tail-biting path is lower-bounded during the decoding process of the CVA. This property can be applied to remove unnecessary iterations of the CVA and results in a convergent maximum likelihood (ML) decoder. Simulation results show that the proposed algorithm exhibits higher decoding efficiency compared with other existing ML decoders.

1 Introduction

For linear block codes, conventional trellis and tail-biting trellis representations have gained a great deal of attention in the past decades [1-3]. Trellis representations not only reveal the code structure, but also lead to efficient trellis-based decoding algorithms [4-8]. For the same linear block code, the number of states in its tail-biting trellis can be as low as the square root of the number of states in its minimal conventional trellis [1, 8]; e.g., for the (24, 12) extended Golay code, the maximum number of states in its conventional trellis is 512 [8], while the maximum number of states in its time-varying tail-biting trellis is only 16 [1].

In addition, the tail-biting technique has been widely used in convolutional encoders to eliminate the rate loss caused by the known tail bits. For example, the Worldwide Interoperability for Microwave Access (WiMAX) [9] and Long Term Evolution [10] standards both adopted tail-biting convolutional codes in the control channel or broadcasting channel. Consequently, a maximum likelihood (ML) decoder with high decoding efficiency on tail-biting trellises is important and desirable for studying tail-biting codes.

Both the Viterbi algorithm and the bidirectional efficient algorithm for searching code trees (BEAST) can achieve ML decoding on conventional trellises [8]. In the case of a tail-biting trellis, due to the lack of a priori knowledge about the starting state, the Viterbi and BEAST decoders have to perform an exhaustive search over the tail-biting trellis to find the ML codeword. BEAST can be more efficient if applied to the conventional trellis obtained by reducing the tail-biting code generator matrix to the minimum span form [8].

Another kind of decoder, the two-phase ML decoder, has been proposed to reduce the decoding complexity for tail-biting trellises [5, 6]. This kind of algorithm performs Viterbi searches on the tail-biting trellis in the first phase and records the accumulated path metric of each path at every section for use in the second phase. In the second phase, heuristic searches are performed based on the result obtained from the first phase. Since the two-phase decoder combines two distinct algorithms, it is difficult to apply in practice.

The circular Viterbi algorithm (CVA)-based decoder greatly reduces the implementation complexity of a decoder for tail-biting trellises and provides near-optimal block error rate performance. However, the decoding process of the CVA is non-convergent and sub-optimal[4,7]. In this paper, we introduce a CVA-based ML decoder for tail-biting trellises. In this algorithm, the lower bound of the net path metric of each tail-biting path can be obtained to exclude impossible starting state candidates, which leads to convergence of the CVA. In addition, the net path metric of survivor paths can be used to terminate redundant searches without performing a full Viterbi iteration.

The following parts of this article are organized as follows: In Section 2, a detailed description of the algorithm is presented. The performance of the proposed algorithm is demonstrated by simulations in Section 3. Section 4 concludes this paper.

2 Algorithm description

2.1 Variable definition

An example of a tail-biting trellis is shown in Figure 1; it has eight sections with four states at each section. For a tail-biting trellis with $L$ sections, denote by $S_l$ the set of states at location $l$, where $0 \le l \le L$. From the definition of tail-biting trellises, we have $S_0 = S_L$. Any path that starts from and terminates at the same state forms a tail-biting path. All tail-biting paths that start from the same state constitute the sub-tail-biting trellis of that state. In Figure 1, the branches drawn as thick solid lines form a tail-biting path of state '01', and all solid lines compose the sub-tail-biting trellis of state '01'.

Figure 1

The tail-biting trellis of the (8, 4) convolutional code with octal generator polynomials of {7, 5}.

A conventional CVA-based decoder performs Viterbi iterations around the tail-biting trellis to find the optimal tail-biting path. This algorithm takes advantage of the circular property of tail-biting trellises and employs the path metrics accumulated in the ending states of the trellis to initialize the path metrics in the starting states for a new iteration until a predefined termination condition is fulfilled. In the following parts, we elaborate the decoding process of CVA on tail-biting convolutional codes. The decoding process for general tail-biting trellises can be similarly obtained.
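To make the wrap-around idea concrete, the following Python fragment is a minimal sketch (our own illustration, not the authors' implementation) of how one CVA iteration feeds the next; `viterbi_pass` and `converged` are hypothetical callables standing in for one full Viterbi trip around the trellis and for the chosen termination condition.

```python
# Minimal sketch of the circular (wrap-around) Viterbi iteration structure.
# `viterbi_pass` is a hypothetical routine that runs one full trip around the
# tail-biting trellis and returns the accumulated metric of the survivor
# entering each state at location L; `converged` encodes the stopping rule.
def circular_viterbi(viterbi_pass, num_states, max_iterations, converged):
    start_metrics = [0.0] * num_states      # first iteration: all states equal
    end_metrics = start_metrics
    for _ in range(max_iterations):
        end_metrics = viterbi_pass(start_metrics)
        if converged(start_metrics, end_metrics):
            break
        start_metrics = end_metrics          # wrap around for the next pass
    return end_metrics
```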

For tail-biting convolutional codes of rate $b/c$, the length of the information bits is $bL$ and the length of the corresponding codeword is $cL$. Binary code bits $v_l^{(j)} \in \{0, 1\}$ are mapped to $x_l^{(j)} = \left(1 - 2 v_l^{(j)}\right)\sqrt{E_s}$ with binary phase-shift keying (BPSK) modulation, where $0 \le j \le c-1$ and $0 \le l \le L-1$. Without loss of generality, the signal energy $E_s$ is normalized to 1. After passing through an additive white Gaussian noise (AWGN) channel with a double-sided noise power spectral density of $N_0/2$, the corresponding received symbols are $r_l^{(j)}$. The log-likelihood ratio of $r_l^{(j)}$ is given by $\Lambda\!\left(r_l^{(j)}\right) = 4 r_l^{(j)} / N_0$, where for $l \ge L$, $x_l^{(j)} = x_{(l)_L}^{(j)}$, $\Lambda\!\left(r_l^{(j)}\right) = \Lambda\!\left(r_{(l)_L}^{(j)}\right)$, and $(l)_L = l \bmod L$.
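As a small illustration of the BPSK mapping and the channel LLRs above (a sketch with our own variable names and an assumed noise level, not code from the paper):

```python
import numpy as np

def bpsk_modulate(code_bits):
    """Map code bits v in {0, 1} to BPSK symbols x = 1 - 2v (E_s normalized to 1)."""
    return 1.0 - 2.0 * np.asarray(code_bits, dtype=float)

def channel_llrs(received, n0):
    """Per-symbol log-likelihood ratios Lambda(r) = 4 r / N0 for an AWGN channel."""
    return 4.0 * np.asarray(received, dtype=float) / n0

# Example transmission; N0 = 2.0 is an assumed value, noise variance is N0/2.
rng = np.random.default_rng(0)
n0 = 2.0
x = bpsk_modulate([0, 1, 1, 0, 0, 1])
r = x + rng.normal(scale=np.sqrt(n0 / 2.0), size=x.shape)
llr = channel_llrs(r, n0)
```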

During the decoding process of the CVA, the accumulated path metric of the survivor path entering state $s$ at location $l$ in the $i$-th iteration is

$$M_l^i(s) = -\sum_{k=0}^{(i-1)L+l}\ \sum_{j=0}^{c-1} x_k^{(j)}\,\Lambda\!\left(r_k^{(j)}\right), \qquad i \ge 1.$$
(1)

The weighted Hamming distance between $\Lambda\!\left(r_l^{(j)}\right)$ and $x_l^{(j)}$ can be defined as in [8]:

$$D\!\left(\Lambda\!\left(r_l^{(j)}\right), x_l^{(j)}\right) =
\begin{cases}
\left|\Lambda\!\left(r_l^{(j)}\right)\right|, & \text{if } \operatorname{sgn}\!\left(\Lambda\!\left(r_l^{(j)}\right)\right) \neq x_l^{(j)},\\
0, & \text{otherwise,}
\end{cases}$$
(2)

where $\operatorname{sgn}\!\left(\Lambda\!\left(r_l^{(j)}\right)\right)$ denotes the sign of $\Lambda\!\left(r_l^{(j)}\right)$. Based on (1) and (2), ML decoding on the tail-biting trellis is equivalent to solving the following equation:

$$\begin{aligned}
\hat{\mathbf{x}} &= \arg\max_{\mathbf{x}} \sum_{k=(i-1)L}^{iL-1}\sum_{j=0}^{c-1} x_k^{(j)}\,\Lambda\!\left(r_k^{(j)}\right) \\
&= \arg\max_{\mathbf{x}} \sum_{k=(i-1)L}^{iL-1}\sum_{j=0}^{c-1}\left[\left|\Lambda\!\left(r_k^{(j)}\right)\right| - 2\,D\!\left(\Lambda\!\left(r_k^{(j)}\right), x_k^{(j)}\right)\right] \\
&= \arg\min_{\mathbf{x}} \sum_{k=(i-1)L}^{iL-1}\sum_{j=0}^{c-1} D\!\left(\Lambda\!\left(r_k^{(j)}\right), x_k^{(j)}\right).
\end{aligned}$$
(3)

The term $\left|\Lambda\!\left(r_k^{(j)}\right)\right|$ can be ignored in the third line of (3) since it is independent of the specific codeword $\mathbf{x}$ and consequently is a constant for all paths on the tail-biting trellis.
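The equivalence used in (3) is easy to check numerically. The sketch below (our own code) implements the weighted Hamming distance of (2) and verifies term by term that $x\,\Lambda(r) = |\Lambda(r)| - 2D(\Lambda(r), x)$ for BPSK symbols $x \in \{+1, -1\}$.

```python
import numpy as np

def weighted_hamming_distance(llr, x):
    """D(Lambda(r), x) = |Lambda(r)| if sgn(Lambda(r)) != x, and 0 otherwise."""
    llr = np.asarray(llr, dtype=float)
    x = np.asarray(x, dtype=float)
    mismatch = np.sign(llr) != np.sign(x)
    return np.where(mismatch, np.abs(llr), 0.0)

llr = np.array([1.2, -0.4, 0.3, -2.0])   # example LLRs (assumed values)
x = np.array([+1.0, +1.0, -1.0, -1.0])   # hypothesized BPSK symbols
d = weighted_hamming_distance(llr, x)
# Correlation form and distance form of the metric agree term by term, so
# maximizing the correlation is the same as minimizing the total distance.
assert np.allclose(x * llr, np.abs(llr) - 2.0 * d)
```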

Denote by $P^i\!\left(\beta^i(s), s\right)$ the survivor path that connects state $\beta^i(s)$ of $S_0$ with state $s$ of $S_l$ in the $i$-th iteration. The corresponding net path metric of $P^i\!\left(\beta^i(s), s\right)$ can be derived from (1) and (3):

$$M_{\mathrm{net}}^i\!\left(\beta^i(s), s\right) = M_l^i(s) - M_0^i\!\left(\beta^i(s)\right).$$
(4)

Since the initial path metrics $M_0^{i+1}(s)$ are different from $M_0^i(s)$ for each state $s \in S_0$, different survivor paths can be obtained in each iteration. Denote by $\bar{P}^i$ the ML path obtained in the $i$-th iteration, where the ML path obtained in the first iteration has the least net path metric among all possible survivor paths [4]. Similarly, the ML tail-biting path obtained in the $i$-th iteration is denoted by $\widetilde{P}^i$, which has the net path metric $\widetilde{M}^i$. Among the set of tail-biting paths $\left\{\left(\widetilde{P}^i(s,s), \widetilde{M}^i(s,s)\right) \mid s \in S_0,\ i \ge 1\right\}$, the optimal tail-biting path and its net path metric are denoted by $\widetilde{P}_O$ and $\widetilde{M}_O$, respectively.

2.2 Lower bounds of the net path metrics

A conventional CVA-based decoder is non-convergent; consequently, it cannot guarantee that the tail-biting path obtained when the decoding process is terminated is optimal [4, 7]. In order to design a convergent ML decoder on tail-biting trellises, further information needs to be extracted from the decoding process of the CVA. Based on the characteristics of the CVA, we can derive a lower bound on the net path metric of each tail-biting path. This observation is summarized in Lemma 1.

Lemma 1.

Let $\widetilde{P}(s,s)$ denote the ML tail-biting path on the sub-tail-biting trellis of state $s$, and let the corresponding net path metric be $\widetilde{M}(s,s)$, where $s \in S_0$. Define $B(s)$ as

$$B(s) = \max_{i \ge 1}\left\{ M_L^i(s) - M_0^i(s) \right\};$$
(5)

then we have $B(s) \le \widetilde{M}(s,s)$, i.e., $B(s)$ is a lower bound on the metrics of all paths on the sub-tail-biting trellis of state $s$.

Proof

The tail-biting trellis defined on a circular time axis can be split at section $l = 0$ and duplicated head-to-tail along the time axis. The conventional CVA then becomes a general Viterbi decoder composed of several length-$L$ decoding sections, in which the Viterbi algorithm searches the duplicated trellis by recording and repeating the received symbols. Combining (1) and (3), we find that the survivor path $P^i\!\left(\beta^i(s), s\right)$ has the minimum accumulated path metric $M_l^i(s)$ among all possible paths $P^i(s', s)$, where $s', \beta^i(s) \in S_0$, $s \in S_l$, and $0 \le l \le L-1$. Consequently, we have

$$M_l^i(s) = M_0^i\!\left(\beta^i(s)\right) + M_{\mathrm{net}}^i\!\left(\beta^i(s), s\right) \le M_0^i(s') + M_{\mathrm{net}}^i(s', s).$$
(6)

Since (6) holds for $0 \le l \le L-1$, it also holds for any $s \in S_L$. Then, taking $l = L$ and choosing the competing path in (6) to be the ML tail-biting path $\widetilde{P}(s,s)$ on the sub-tail-biting trellis of state $s$ (whose net path metric is $\widetilde{M}(s,s)$), we obtain

$$M_L^i(s) - M_0^i(s) \le \widetilde{M}(s,s), \qquad \forall\, i \ge 1.$$
(7)

Since $B(s) = \max_{i \ge 1}\left\{ M_L^i(s) - M_0^i(s) \right\}$, combining this with (7) gives

$$B(s) \le \widetilde{M}(s,s).$$
(8)

Since $\widetilde{P}(s,s)$ is the ML tail-biting path on the sub-tail-biting trellis of state $s$, we come to the conclusion from (3) and (8) that $B(s)$ is a lower bound on the metrics of all paths on the sub-tail-biting trellis of state $s$. □

The lower bound $B(s)$ defined in Lemma 1 is updated as the iterations continue, and a more precise estimation of $B(s)$ can be obtained if more iterations are performed on the tail-biting trellis. According to (8), the maximal value of $B(s)$ is $\widetilde{M}(s,s)$.
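In an implementation, the Lemma 1 bookkeeping amounts to one subtraction, one comparison, and possibly one assignment per starting state and per iteration; a minimal sketch (with our own data layout) is shown below.

```python
# Tighten the running lower bounds B(s) of Lemma 1 after one CVA iteration:
# B(s) = max_i { M_L^i(s) - M_0^i(s) }.
def update_lower_bounds(bounds, start_metrics, end_metrics):
    """All arguments are dicts keyed by the states of S0; bounds is updated in place."""
    for s in bounds:
        bounds[s] = max(bounds[s], end_metrics[s] - start_metrics[s])
    return bounds

# Typical initialization before the first iteration (an assumption of this sketch):
# bounds = {s: float("-inf") for s in states_S0}
```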

Based on Lemma 1, we can reduce the decoding complexity of the CVA on the tail-biting trellis by removing redundant computations and iterations during the decoding process, and we can control the convergence of the CVA. The improvements of the proposed decoder can be summarized in the following two aspects.

Firstly, during the decoding process, if the net path metric $M_{\mathrm{net}}^i\!\left(\beta^i(s), s\right)$ of the survivor path $P^i\!\left(\beta^i(s), s\right)$ is not less than $\widetilde{M}_O$, where $\widetilde{M}_O$ is the net path metric of the optimal tail-biting path obtained in the first $i-1$ iterations and $s \in S_l$, all searches that follow state $s$ can be terminated (refer to Figure 2). In this case, the net path metric of any survivor path that starts from state $\beta^i(s)$ and passes through state $s$ is not less than $\widetilde{M}_O$.

Figure 2

Decoding process of the B-CVA ML decoder. For the first iteration, the four survivor paths corresponding to the terminal states are shown on the tail-biting trellis in (a), where the tail-biting trellis has been split at location $l = 0$. For the second iteration, the decoding process is illustrated from (b) to (e). Searches following the circles with a left incident branch are terminated according to step 2.1. The decoding process is terminated at $l = 4$ since there are no survivor paths with net path metric less than $\widetilde{M}_O$.

Secondly, denote by $S_C^i$ the set of surviving starting-state candidates in the $i$-th iteration of the CVA, i.e.,

$$S_C^i = \left\{ s \mid B(s) < \widetilde{M}_O \right\},$$

where $S_C^1 = S_0$.

After the $i$-th iteration, $(\widetilde{P}_O, \widetilde{M}_O)$ and $B(s)$ will be updated with $(\widetilde{P}^i, \widetilde{M}^i)$ and $B^i(s)$, where $(\widetilde{P}^i, \widetilde{M}^i)$ is the best tail-biting path and its metric obtained in the $i$-th iteration, and $B^i(s)$ is the lower bound of the metrics of all paths on the sub-tail-biting trellis of state $s$ obtained in the $i$-th iteration. For $s \in S_C^i$, if $B(s) \ge \widetilde{M}_O$, then from Lemma 1 we have $\widetilde{M}(s,s) \ge \widetilde{M}_O$. This indicates that the ML tail-biting path on the sub-tail-biting trellis of state $s$ is not better than $\widetilde{P}_O$, so state $s$ can be excluded from the set $S_C^{i+1}$. As the iterations continue, the search space on the tail-biting trellis shrinks and the decoder eventually converges to the global optimal solution.

2.3 CVA-based ML decoder on tail-biting trellis

We summarize the above decoding process as follows: In the algorithm description, the operator '←' denotes value assignment from the right-hand side to the left-hand side, and the operator '=' denotes a logical comparison between two operands.

From the description above, we find that the decoding process can only be terminated when $S_C^{i+1} = \emptyset$ in step 2.4. The number of entries in $S_C^1$ is finite, and as the iterations continue, the size of $S_C^i$ will reduce to zero.

Firstly, a state $s$ with bound $B(s) \ge \widetilde{M}_O$ is deleted from $S_C^i$ in step 2.3, since the ML tail-biting path on the sub-tail-biting trellis of state $s$ is not better than $\widetilde{P}_O$.

Secondly, after the $(i+1)$-th iteration, if no state has been deleted from $S_C^{i+1}$, i.e., $\forall s \in S_C^{i+1}$, $B(s) < \widetilde{M}_O$, then $S_C^{i+1} = S_C^{i+2}$ holds and the Viterbi algorithm will be performed on the sub-tail-biting trellis of each state $s \in S_C^{i+1}$ in step 2.4. If $\widetilde{M}(s,s) < \widetilde{M}_O$, $\widetilde{P}_O$ will be updated with $\widetilde{P}(s,s)$; otherwise, if $\widetilde{M}(s,s) \ge \widetilde{M}_O$, this indicates that $\widetilde{P}(s,s)$ is not better than $\widetilde{P}_O$. In either case, state $s$ can then be deleted from $S_C^{i+1}$, since the ML tail-biting path on its sub-tail-biting trellis has been found.

Thirdly, after the $i$-th iteration, if $\widetilde{M}(s,s) < \widetilde{M}_O$, $\widetilde{P}_O$ will be updated with $\widetilde{P}(s,s)$, where $s \in S_C^i$. Since $B(s) = \max_{i \ge 1}\{M_L^i(s) - M_0^i(s)\} \le \widetilde{M}(s,s)$, $B(s)$ will be updated with $\widetilde{M}(s,s)$ in step 2.3, and state $s$ will then be deleted from $S_C^i$ since the equality in $B(s) \ge \widetilde{M}_O$ holds.

All surviving starting-state candidates in $S_C^1$ will be deleted eventually, and $S_C^i$ will be empty after a finite number of iterations. When $S_C^i$ is empty, the decoder has converged to the global optimal solution, which has been recorded in $\widetilde{P}_O$. In summary, the algorithm presented above is a convergent ML decoder on tail-biting trellises. We call it the bounded CVA (B-CVA) ML decoder.
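The overall control flow of the B-CVA can be sketched as follows. This is a hedged, high-level reconstruction from the description above and does not reproduce the authors' numbered step listing; `bounded_viterbi_pass` is a hypothetical routine that performs one trip around the trellis for the current candidate starting states, terminates searches whose net metric already reaches the current best (the first improvement above), and reports the ending metrics together with the best tail-biting path it encountered (metric of $+\infty$ if none was found).

```python
import math

def b_cva_decode(bounded_viterbi_pass, states_s0):
    best_path, best_metric = None, math.inf          # (P~_O, M~_O)
    bounds = {s: -math.inf for s in states_s0}       # B(s) of Lemma 1
    start_metrics = {s: 0.0 for s in states_s0}      # M_0^1(s)
    candidates = set(states_s0)                      # S_C^1 = S_0
    while candidates:
        end_metrics, tb_path, tb_metric = bounded_viterbi_pass(
            start_metrics, candidates, best_metric)
        # Keep the best tail-biting path found so far.
        if tb_metric < best_metric:
            best_path, best_metric = tb_path, tb_metric
        # Tighten the lower bounds B(s) and drop starting states that can no
        # longer beat the best tail-biting path found so far (second improvement).
        for s in list(candidates):
            bounds[s] = max(bounds[s], end_metrics[s] - start_metrics[s])
            if bounds[s] >= best_metric:
                candidates.discard(s)
        # Wrap around: ending metrics initialize the next iteration.
        start_metrics = {s: end_metrics[s] for s in candidates}
    return best_path, best_metric                    # converged ML result
```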

The following example is used to explain the decoding process of the B-CVA.

Example 1

The tail-biting convolutional code represented by the tail-biting trellis in Figure 1 is selected in this example. The information sequence is {01011100}, and the corresponding codeword is {(00), (11), (10), (00), (01), (10), (01), (11)}. After passing through the AWGN channel at a signal-to-noise ratio (SNR) of $E_b/N_0 = 0$ dB, the received sequence (for one realization) is {(1.144, 0.458), (−0.986, −1.234), (0.291, 1.364), (0.472, 0.350), (1.578, −1.594), (0.050, −0.399), (2.260, 0.359), (−1.501, 0.234)}. For convenience, the states {00, 01, 10, 11} are represented by {0, 1, 2, 3}. For the first iteration, set $S_C^1 = \{0, 1, 2, 3\}$; the four survivor paths are shown in Figure 2a. We find that there is a tail-biting path $P^1(0,0)$ with net path metric $M_{\mathrm{net}}^1(0,0) = 1.333$. Then $(\widetilde{P}_O, \widetilde{M}_O)$ is updated with $(P^1(0,0), M_{\mathrm{net}}^1(0,0))$. The bounds of the terminal states are updated to $B(0) = 1.333$, $B(1) = 0.291$, $B(2) = 1.868$, and $B(3) = 2.026$. Since only $B(1) < \widetilde{M}_O$, according to step 2.3 of the B-CVA, $S_C^2 = \{1\}$. The decoding process of the second iteration is illustrated in Figure 2b, c, d, e. With the proposed control of the CVA decoding process, the second iteration terminates at section $l = 4$; at each section, searches after states with $M_{\mathrm{net}}^2(1, s) \ge \widetilde{M}_O$ are terminated. These states correspond to the circles with a left incident branch on the trellis in Figure 2b, c, d, e. The decoding result is the codeword associated with the tail-biting path $\widetilde{P}_O$. In the first iteration, there are 64 real additions and 32 logical comparisons; in the second iteration, the numbers of real additions and logical comparisons corresponding to each step are {(4, 2), (4, 2), (8, 4), (6, 4)}. In summary, the decoding complexity of the B-CVA amounts to 86 real additions and 44 logical comparisons. As shown in Appendix 1 (decoding process of the two-phase ML decoder), the decoding complexity of the two-phase decoder amounts to 100 real additions and 101 logical comparisons.
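The reported net path metric of 1.333 can be reproduced from the data in Example 1. The short check below (our own script, not from the paper) accumulates $|r|$ over the positions where the sign of the received value disagrees with the BPSK symbol of the given codeword; here $r$ is used in place of $\Lambda(r)$, since the factor $4/N_0$ is a common positive constant that rescales every path metric equally and therefore does not affect which path is optimal.

```python
# Weighted Hamming distance of the transmitted codeword in Example 1,
# accumulating |r| wherever the sign of r disagrees with the BPSK symbol.
codeword = [(0, 0), (1, 1), (1, 0), (0, 0), (0, 1), (1, 0), (0, 1), (1, 1)]
received = [(1.144, 0.458), (-0.986, -1.234), (0.291, 1.364), (0.472, 0.350),
            (1.578, -1.594), (0.050, -0.399), (2.260, 0.359), (-1.501, 0.234)]

metric = 0.0
for bits, symbols in zip(codeword, received):
    for v, r in zip(bits, symbols):
        x = 1 - 2 * v                  # BPSK mapping
        if (r > 0) != (x > 0):         # sign mismatch contributes |r|
            metric += abs(r)

print(round(metric, 3))                # 1.333
```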

3 Simulation results

In order to show the decoding efficiency, we compare the proposed B-CVA ML decoder with other widely cited tail-biting ML decoders in three aspects: the number of real additions, the number of logical comparisons, and average memory space consumption during the decoding process. The codewords are modulated to BPSK symbols and then passed through an AWGN channel. The results shown in the tables and figures are obtained by observing at least 100 block error events.

The conventional BEAST ML decoder has to perform decoding on each sub-tail-biting trellis independently and consecutively to find the ML tail-biting path [8]. To improve its efficiency, we apply an upper-bounding technique to the thresholds used by BEAST. During the consecutive decoding process, the metric of the optimal tail-biting path obtained on the first $i$ sub-tail-biting trellises is used to upper-bound the thresholds used for decoding the remaining $|S_0| - i$ sub-tail-biting trellises, where $|S_0|$ denotes the size of the set $S_0$. With this upper-bounding technique, redundant searches can be terminated in a timely manner or avoided altogether during the decoding process. For convenience, we call the BEAST decoder with this upper-bounding technique the advanced BEAST ML decoder, which should be more efficient than the original BEAST decoder in [8]. The results presented in Appendix 2 demonstrate this point.
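A minimal sketch of this upper-bounding wrapper follows (our own illustration, not the BEAST internals); `decode_subtrellis` is a hypothetical stand-in for a BEAST search on one sub-tail-biting trellis that gives up once the supplied threshold cannot be beaten.

```python
import math

def advanced_beast(decode_subtrellis, states_s0):
    """Decode the sub-tail-biting trellises consecutively, letting the best
    metric found so far cap the search threshold of every remaining one."""
    best_path, best_metric = None, math.inf
    for s in states_s0:
        result = decode_subtrellis(s, threshold=best_metric)
        if result is not None and result[1] < best_metric:
            best_path, best_metric = result          # result = (path, metric)
    return best_path, best_metric
```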

In the first example, different decoders were applied to decode the (64, 32) tail-biting convolutional code with octal generator polynomials {345, 237}, which can be represented by a 128-state tail-biting trellis [7]. To assess the block error rate (BLER) performance of the B-CVA, we choose the most widely cited sub-optimal decoder, the wrap-around Viterbi algorithm (WAVA) proposed in [7], for comparison. Because the WAVA is non-convergent, the maximal allowed number of iterations is set to 20 during the decoding process. Table 1 shows the BLER performance of the different decoders. We find that the sub-optimal WAVA achieves performance close to the optimal results.

Table 1. BLER performance of WAVA and ML decoders for (64, 32) tail-biting convolutional codes over the AWGN channel

Figure 3 shows the average memory space consumption of the different ML decoders during the decoding process. We find that the advanced BEAST decoder and the B-CVA decoder require almost constant memory space from the low- to high-SNR regions. However, the WAVA decoder requires less memory space than the B-CVA, because the WAVA does not need space for storing $B(s)$ for every state in $S_0$. The memory space required by the two-phase decoder is about 212 times more than that required by the B-CVA or BEAST decoders, because the two-phase decoder has to store the path metrics of all survivor paths obtained in the first phase and has to maintain the open stack and close table during the second phase.

Figure 3

Average memory space required for decoding the (64, 32) tail-biting convolutional codes. Average memory space consumption by the WAVA and different ML decoders: advanced BEAST ML decoder, two-phase ML decoder and B-CVA ML decoder, for decoding the (64, 32) tail-biting convolutional codes over the AWGN channel.

Figure 4 shows that the two-phase ML decoder, which is based on depth-first searches, performs slightly fewer real additions than the B-CVA ML decoder. The BEAST ML decoder, which performs exhaustive searches on the tail-biting trellis, shows the highest decoding complexity. Figure 5 shows that the two-phase decoder performs a larger number of logical comparisons than the B-CVA decoder, because many logical comparisons are needed in the low-SNR region to sort the open stack and to search the close table; in fact, the length of the close table grows linearly as the searches continue. Figures 4 and 5 also show that the advanced BEAST decoder, which has to be run on each sub-tail-biting trellis, exhibits high decoding complexity from the low- to high-SNR regions. Meanwhile, the WAVA exhibits high decoding complexity from the low- to medium-SNR regions, which is caused by circular traps in the CVA decoding process [4].

Figure 4

Average number of real additions required for decoding the (64, 32) tail-biting convolutional codes. Average number of real additions that were performed during the decoding process of the WAVA and different ML decoders: advanced BEAST ML decoder, two-phase ML decoder and B-CVA ML decoder, for decoding the (64, 32) tail-biting convolutional codes over the AWGN channel.

Figure 5

Average number of logical comparisons required for decoding the (64, 32) tail-biting convolutional codes. Average number of logical comparisons that were performed during the decoding process of the WAVA and different ML decoders: advanced BEAST ML decoder, two-phase ML decoder, and B-CVA ML decoder, for decoding the (64, 32) tail-biting convolutional codes over the AWGN channel.

The second example adopts the tail-biting convolutional code used in WiMAX. The corresponding generator polynomials are {133, 171} in octal form [9]. The length of the information sequence is 40 bits, and the corresponding code is a long tail-biting convolutional code [7], denoted as the (80, 40) tail-biting convolutional code. The BLER of the WAVA and the ML decoders is presented in Table 2, where, as in Table 1, 'ML decoder' refers to any of the ML decoders mentioned in this paper (the advanced BEAST ML decoder, the two-phase ML decoder, or the B-CVA decoder), since all of them exhibit exactly the same block error rate performance. We find that, in the case of long tail-biting codes, the BLER performance of the WAVA is close to that of the ML decoders.

Table 2. BLER performance of WAVA and ML decoders for the (80, 40) tail-biting convolutional code over the AWGN channel

From Figures 6, 7, and 8, we come to almost the same conclusions as in the first example. In Figure 6, the memory space required by the advanced BEAST decoder decreases slowly as the system SNR grows. Figure 8 shows that, in the low-SNR region, many comparisons are performed to maintain the open stack and close table in the second phase of the two-phase ML decoder. In summary, the B-CVA decoder is an efficient ML decoder on tail-biting trellises, both in memory space savings and in decoding complexity reduction.

Figure 6

Average memory space required for decoding the (80, 40) tail-biting convolutional code. Average memory space consumption by the WAVA and different ML decoders: advanced BEAST ML decoder, two-phase ML decoder, and B-CVA ML decoder, for decoding the (80, 40) tail-biting convolutional codes over the AWGN channel.

Figure 7

Average number of real additions required for decoding the (80, 40) tail-biting convolutional code. Average number of real additions that were performed during the decoding process of the WAVA and different ML decoders: advanced BEAST ML decoder, two-phase ML decoder and B-CVA ML decoder, for decoding the (80, 40) tail-biting convolutional codes over the AWGN channel.

Figure 8

Average number of logical comparisons required for decoding the (80, 40) tail-biting convolutional code. Average number of logical comparisons that were performed during the decoding process of the WAVA and different ML decoders: advanced BEAST ML decoder, two-phase ML decoder and B-CVA ML decoder, for decoding the (80, 40) tail-biting convolutional codes over the AWGN channel.

4 Conclusion

We propose a convergent CVA-based ML decoder for tail-biting trellises. The proposed algorithm exploits the lower bound on the net path metric of each tail-biting path to control the decoding process of the CVA. Simulation results show that, on tail-biting trellises, the B-CVA decoder exhibits high decoding efficiency while requiring relatively little memory during the decoding process. These advantages make it attractive for practical applications.

Appendices

Appendix 1: decoding process of the two-phase ML decoder

In Example 1, the decoding process of the two-phase ML decoder can be described as follows [6]: (1) During the first phase, the decoder stores the path metric $M_l^1(s)$ of every state $s \in S_l$ for $0 \le l \le L$; the decoding complexity of the first phase is the same as that of the B-CVA, namely 64 real additions and 32 logical comparisons. (2) In the second phase, heuristic searches are performed on the sub-tail-biting trellis of state 01; in each step, the heuristic search is performed following the top path in the open stack. There are 3 real additions and 4 logical comparisons for updating the values of the $g$-function, $h$-function, and $f$-function. The two successors are saved in the open stack if the values of their $f$-functions are less than $\widetilde{M}^1$. Meanwhile, the starting state, terminal state, and current section $l$ of the top path are saved in the close table. The top path of the open stack is then compared with each entry of the close table in three aspects: starting state, terminal state, and current location. After six heuristic searching steps, the decoding process is stopped. The decoding complexity in summary amounts to 100 real additions and 105 logical comparisons. We have ignored the complexity of sorting the open stack in ascending order of the $f$-function values. The open stack and the corresponding close table obtained in each step are shown in Table 3, where Open stack = {(starting state, current state, $l$-section, $f$-function)}, Close table = {(starting state, current state, $l$-section)}, and Complexity = {(number of real additions, number of logical comparisons)}.

Table 3. Updates of the open stack and close table during the heuristic searches of the two-phase ML decoder
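For comparison, the second-phase bookkeeping described above can be sketched as a best-first search with an open stack ordered by the $f$-function and a close table of already expanded (starting state, current state, section) triples. This is our own schematic of the mechanism in [6], not the authors' code, and `expand` is a hypothetical successor generator.

```python
import heapq

def two_phase_second_stage(expand, initial_entries, metric_bound):
    """Entries are tuples (f_value, start_state, current_state, section)."""
    open_stack = list(initial_entries)
    heapq.heapify(open_stack)                 # ordered by ascending f-function
    close_table = set()
    while open_stack:
        f_value, start, state, section = heapq.heappop(open_stack)
        if f_value >= metric_bound:
            break                             # no remaining entry can do better
        key = (start, state, section)
        if key in close_table:
            continue                          # this triple was already expanded
        close_table.add(key)
        for successor in expand(start, state, section):
            if successor[0] < metric_bound:   # keep only promising successors
                heapq.heappush(open_stack, successor)
    return close_table
```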

Appendix 2: improvements of the BEAST ML decoder with upper-bounding technique

To show the decoding efficiency improvement of the advanced BEAST decoder, we use it and the original BEAST decoder [8] to decode the tail-biting convolutional code used in WiMAX. The length of the information sequence is 40 bits. The average numbers of real additions and logical comparisons are presented in Table 4. We find that the upper-bounding technique can cut the decoding complexity of the BEAST decoder by 1/2 to 2/3.

Table 4. Average numbers of real additions and logical comparisons in decoding the (80, 40) code used in WiMAX

References

  1. Calderbank A, Forney G, Vardy A: Minimal tail-biting trellises: the Golay code and more. IEEE Trans. Inf. Theory 1999, 45(5):1435-1455. doi:10.1109/18.771145

  2. Stahl P, Anderson J, Johannesson R: Optimal and near-optimal encoders for short and moderate-length tail-biting trellises. IEEE Trans. Inf. Theory 1999, 45(7):2562-2571. doi:10.1109/18.796408

  3. Gluesing-Luerssen H, Weaver E: Linear tail-biting trellises: characteristic generators and the BCJR-construction. IEEE Trans. Inf. Theory 2011, 57(2):738-751.

  4. Wang X, Qian H, Xu J, Yang Y, Wang F: An efficient CVA-based decoding algorithm for tail-biting codes. In IEEE Global Telecommunications Conference. Houston; 5-9 Dec 2011:1-5.

  5. Shankar P, Kumar P, Sasidharan K, Rajan B, Madhu A: Efficient convergent maximum likelihood decoding on tail-biting trellises (2007). http://arxiv.org/pdf/cs/0601023v1.pdf. Accessed 10 Aug 2007

  6. Pai H, Han Y, Wu T, Chen P, Shieh S: Low-complexity ML decoding for convolutional tail-biting codes. IEEE Commun. Lett. 2008, 12(12):883-885.

  7. Shao R, Lin S, Fossorier M: Two decoding algorithms for tailbiting codes. IEEE Trans. Commun. 2003, 51(10):1658-1665. doi:10.1109/TCOMM.2003.818084

  8. Bocharova IE, Johannesson R, Kudryashov BD, Loncar M: BEAST decoding for block codes. Eur. Trans. Telecommun. 2004, 15:297-305. doi:10.1002/ett.979

  9. IEEE: IEEE Std 802.16-2009, IEEE Standard for Local and Metropolitan Area Networks, Part 16: Air Interface for Broadband Wireless Access Systems. Piscataway: IEEE; 2009.

  10. 3rd Generation Partnership Project: 3GPP TS 36.212, Technical Specification Group Radio Access Network; Evolved Universal Terrestrial Radio Access (E-UTRA); Multiplexing and Channel Coding (Release 8). Sophia Antipolis: ETSI; 2007.


Acknowledgements

This work was supported in part by the Hundred Talents Program of the Chinese Academy of Sciences, the Shanghai Pujiang Talent Program No. 11PJ1408700, the National Science and Technology Major Project No. 2011ZX03003-003-04, the International Science and Technology Cooperation Program of Shanghai Municipality No. 11220705400, and the Science and Technology Commission Foundation of Shanghai No. 12511503400.

Author information

Authors and Affiliations

  1. Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai, 200050, China

    Xiaotao Wang, Hua Qian & Kai Kang

  2. Graduate University of Chinese Academy of Sciences, Beijing, 100049, China

    Xiaotao Wang

  3. Shanghai Research Center for Wireless Communications, Shanghai, 200335, China

    Hua Qian & Kai Kang

  4. Shanghai Internet of Things Co., Ltd., Shanghai, 201899, China

    Hua Qian & Kai Kang

  5. Electrical and Computer Engineering Department, University of Michigan-Dearborn, Dearborn, MI, 48128-2406, USA

    Weidong Xiang

Authors
  1. Xiaotao Wang
  2. Hua Qian
  3. Kai Kang
  4. Weidong Xiang

Corresponding author

Correspondence to Hua Qian.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

