BCJR algorithm

The Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm is an algorithm for maximum a posteriori decoding of error correcting codes defined on trellises (principally convolutional codes). The algorithm is named after its inventors: Bahl, Cocke, Jelinek and Raviv.[1] This algorithm is critical to modern iteratively-decoded error-correcting codes, including turbo codes and low-density parity-check codes.

Steps involved


Based on the trellis:

  1. Compute forward probabilities α by recursing from the start of the trellis.
  2. Compute backward probabilities β by recursing from the end of the trellis.
  3. Compute smoothed (a-posteriori) probabilities by combining α, β and the branch metrics, which incorporate the channel information (e.g. the noise variance for an AWGN channel, or the crossover probability for a binary symmetric channel).
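The steps above can be sketched as a forward–backward recursion over a generic trellis. This is an illustrative sketch, not a production decoder: it assumes the trellis starts and terminates in state 0, and that the branch metrics `gamma[t, s, s2]` (a hypothetical array layout) already combine the channel likelihood and the a-priori input probability.

```python
import numpy as np

def bcjr(gamma):
    """Forward-backward (BCJR) recursion on a trellis.

    gamma: array of shape (T, S, S); gamma[t, s, s2] is the unnormalized
    probability of the branch from state s at time t to state s2 at t+1.
    Returns the a-posteriori branch probabilities p[t, s, s2].
    """
    T, S, _ = gamma.shape
    alpha = np.zeros((T + 1, S))
    alpha[0, 0] = 1.0                      # trellis assumed to start in state 0
    beta = np.zeros((T + 1, S))
    beta[T, 0] = 1.0                       # ... and to be terminated in state 0
    for t in range(T):                     # forward recursion
        alpha[t + 1] = alpha[t] @ gamma[t]
        alpha[t + 1] /= alpha[t + 1].sum() # normalize to avoid underflow
    for t in range(T - 1, -1, -1):         # backward recursion
        beta[t] = gamma[t] @ beta[t + 1]
        beta[t] /= beta[t].sum()
    # a-posteriori probability of each branch at each time step
    p = alpha[:-1, :, None] * gamma * beta[1:, None, :]
    return p / p.sum(axis=(1, 2), keepdims=True)
```

In a real decoder the branch posteriors would then be marginalized over the branches corresponding to each input bit to form log-likelihood ratios.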

Variations


SBGT BCJR


A simplified variant of the BCJR algorithm due to Berrou, Glavieux and Thitimajshima.[2]

Log–MAP BCJR


The Log–MAP algorithm is a log-domain implementation of the BCJR (MAP) decoder. By working with log-likelihoods it avoids numerical underflow and turns probability multiplications into additions. In the log domain, the forward–backward recursions use the Jacobian logarithm ("max-star") identity max*(a, b) = ln(e^a + e^b) = max(a, b) + ln(1 + e^(−|a−b|)), which yields the same a-posteriori log-likelihood ratios (LLRs) as the original MAP/BCJR algorithm when applied to the branch metrics.[3]

In practice, the small correction term is implemented via a short look-up table or a piecewise-linear approximation; working in the log domain also simplifies normalization of the forward/backward state metrics. Log–MAP decoders that output extrinsic LLRs are widely used as the soft-input/soft-output components in iterative (turbo) decoders.[4][5]

A common lower-complexity variant is Max-Log-MAP, which approximates the Jacobian logarithm by dropping the correction term (i.e., using max*(a, b) ≈ max(a, b)). This reduces complexity at the cost of a small performance loss relative to Log-MAP/MAP; the gap can be narrowed with constant or linear corrections, or by scaling the extrinsic information ("normalized/scaled Max-Log-MAP"), typically recovering a few tenths of a dB depending on the code and SNR.[6][7]
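The two combine operations can be written side by side; this is a minimal sketch (function names are illustrative), with the exact Log-MAP version computed stably via the max-star identity:

```python
import math

def max_star(a: float, b: float) -> float:
    """Log-MAP combine: ln(e^a + e^b) via the Jacobian logarithm.

    Computed as max(a, b) plus a correction term; in hardware the
    correction is typically read from a small look-up table (or a
    piecewise-linear fit) indexed by |a - b|.
    """
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_star_maxlog(a: float, b: float) -> float:
    """Max-Log-MAP combine: drop the correction term entirely."""
    return max(a, b)
```

The dropped correction is largest (ln 2 ≈ 0.693) when the two metrics tie and decays quickly as they separate, which is why Max-Log-MAP loses only a fraction of a dB in practice.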

Windowed BCJR


A modified version that processes the trellis in segments to reduce computational complexity and memory requirements. This approach is particularly useful for very long sequences where full trellis storage becomes impractical. The windowed version maintains near-optimal performance while significantly lowering latency and hardware resource utilization in implementations.[8]
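The segmentation can be sketched as index arithmetic: each window is decoded using a backward recursion that starts a few "warm-up" (training) stages past the window's end, initialized to uniform metrics, so the backward state metrics have converged by the time they enter the window proper. The function name, window length, and warm-up depth below are illustrative implementation choices, not part of the original algorithm.

```python
def windows(n: int, w: int, warmup: int):
    """Split a length-n trellis into windows of length w.

    Yields (start, end, train_end) triples: the window covers
    stages [start, end) and its backward recursion is initialized
    at train_end, running over the warm-up region [end, train_end).
    """
    for start in range(0, n, w):
        end = min(start + w, n)
        train_end = min(end + warmup, n)
        yield start, end, train_end
```

Only the forward/backward metrics of one window (plus its warm-up region) need to be stored at a time, which is where the memory saving comes from.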

Implementations


See also


References

  1. ^ Bahl, L.; Cocke, J.; Jelinek, F.; Raviv, J. (March 1974). "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate". IEEE Transactions on Information Theory. 20 (2): 284–287. doi:10.1109/TIT.1974.1055186.
  2. ^ Wang, Sichun; Patenaude, François (2006). "A Systematic Approach to Modified BCJR MAP Algorithms for Convolutional Codes". EURASIP Journal on Applied Signal Processing. 2006: 95360. Bibcode:2006EJASP2006..242W. doi:10.1155/ASP/2006/95360.
  3. ^ Bahl, L. R.; Cocke, J.; Jelinek, F.; Raviv, J. (March 1974). "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate". IEEE Transactions on Information Theory. 20 (2): 284–287. doi:10.1109/TIT.1974.1055186.
  4. ^ Vogt, J.; Finger, A. (2000). "Improving the max-log-MAP turbo decoder". Electronics Letters. 36 (23): 1937–1939. Bibcode:2000ElL....36.1937V. doi:10.1049/el:20001357.
  5. ^ Li, Jian (2019). "Turbo decoder design based on an LUT-normalized Log-MAP algorithm". Electronics. 8 (9): 1037. doi:10.3390/electronics8091037. PMC 7515343. PMID 33267527.
  6. ^ Robertson, P.; Villebrun, E.; Hoeher, P. (June 1995). "A comparison of optimal and sub-optimal MAP decoding algorithms operating in the log domain" (PDF). Proc. IEEE ICC. pp. 1009–1013.
  7. ^ Chen, J. (2003). Reduced-complexity decoding algorithms for LDPC codes (PDF) (Thesis). University of Hawaiʻi at Mānoa.
  8. ^ Viterbi, A.J. (1998). "An intuitive justification and a simplified implementation of the MAP decoder for convolutional codes". IEEE Journal on Selected Areas in Communications. 16 (2): 260–264. Bibcode:1998IJSAC..16..260V. doi:10.1109/49.661114. ISSN 0733-8716.