IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 53, NO. 4, APRIL 2005

Transactions Letters

On Implementation of Min-Sum Algorithm and Its Modifications for Decoding Low-Density Parity-Check (LDPC) Codes

Jianguang Zhao, Farhad Zarkeshvari, Student Member, IEEE, and Amir H. Banihashemi, Member, IEEE

Abstract: The effects of clipping and quantization on the performance of the min-sum algorithm for the decoding of low-density parity-check (LDPC) codes at short and intermediate block lengths are studied. It is shown that in many cases, only four quantization bits suffice to obtain close to ideal performance over a wide range of signal-to-noise ratios. Moreover, we propose modifications to the min-sum algorithm that improve the performance by a few tenths of a decibel with just a small increase in decoding complexity. A quantized version of these modified algorithms is also studied. It is shown that, when optimized, modified quantized min-sum algorithms perform very close to, and in some cases even slightly outperform, the ideal belief-propagation algorithm at observed error rates.

Index Terms: Clipping, iterative decoding algorithms, low-density parity-check (LDPC) codes, max-product algorithm, max-sum algorithm, min-sum algorithm, modified min-sum algorithms, quantization.

Paper approved by M. Fossorier, the Editor for Coding and Communication Theory of the IEEE Communications Society. Manuscript received June 25, 2002; revised August 1, 2003; March 28, 2004; and May 25, 2004. This work was supported in part by Zarlink Semiconductor Corp. (formerly Mitel Semiconductor Corp.), and in part by the National Capital Institute of Telecommunications (NCIT). This paper was presented in part at IEEE Globecom, Taipei, Taiwan, November 17-21, 2002, and in part at the 21st Biennial Symposium on Communications, Queen's University, Kingston, ON, Canada, June 2-5, 2002. J. Zhao was with the Broadband Communications and Wireless Systems (BCWS) Centre and Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada. F. Zarkeshvari was with the BCWS Centre and Department of Systems and Computer Engineering, Carleton University; he is now with the Department of Electronics, Carleton University, Ottawa, ON K1S 5B6, Canada (e-mail: farhad@sce.carleton.ca). A. H. Banihashemi is with the BCWS Centre and Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada (e-mail: ahashemi@sce.carleton.ca). Digital Object Identifier 10.1109/TCOMM.2004.836563

I. INTRODUCTION

FOR THE PAST few years, there has been a great amount of research devoted to the analysis and design of iterative coding schemes, in general, and low-density parity-check (LDPC) codes [7], in particular. This has been mainly due to the excellent performance/complexity tradeoff that these schemes offer. Iterative decoding algorithms for LDPC codes can be naturally described by message passing on the Tanner graph (TG) [11] of the code. Depending on the decoding algorithm and the type of messages, certain computations are performed at the symbol nodes and the check nodes of the TG, and the algorithm is executed by exchanging messages between check nodes and symbol nodes through the edges of the graph, in both directions and iteratively. Decoding complexity per iteration is low, due to the sparseness of the TG.
The best performing, yet the most complex, algorithm for decoding LDPC codes is known to be belief propagation (BP), or sum-product [7], [11], [12]. In this letter, we focus on another well-known iterative decoding algorithm: min-sum (MS) [4], [11], [12]. Other common names for MS are max-product and max-sum, depending on the type of messages and the corresponding operations performed at the symbol and check nodes. For MS, the messages are log-likelihood ratios (LLRs) (for a description of the MS algorithm, see, e.g., [4]). Although, in general, the error performance of MS is a few tenths of a decibel inferior to that of BP, it is much simpler to implement, and no estimate of the noise power is needed for an additive white Gaussian noise (AWGN) channel. Moreover, in this letter, we show that the implementation of MS is more robust against quantization error, compared with a similar implementation of the BP algorithm.

In the rest of the letter, we discuss the effects of clipping and quantization on the performance of the MS algorithm. We observe that although quantization, in general, degrades the performance, clipping can provide improvement, and that in many cases the overall effect is such that only four quantization bits result in near, or even better than, ideal performance over a wide range of signal-to-noise ratios (SNRs). Note that the results of [10] for a code with block length 60 000 show that at least six bits are required for the implementation of the BP algorithm in the LLR domain. We also propose simple modifications (conditional and unconditional corrections) that can considerably improve the performance of MS with negligible cost in complexity. Interestingly, the quantized versions of our modified MS algorithms can perform very close to, and in some cases even slightly outperform, the ideal BP algorithm. (One should notice that this is theoretically possible on graphs with cycles, as BP is only optimal on cycle-free graphs.) A modification identical to our unconditional correction has been independently proposed by Chen and Fossorier in [4] and [5]. In [5], they have also studied quantized MS for regular LDPC codes. We discuss the results of Chen and Fossorier in more detail in the following sections.

II. CLIPPING AND QUANTIZATION

We consider the transmission of information over a binary-input AWGN channel using bipolar signaling. The received vector at the output of the channel is processed by an MS decoder, and the decoding stops if the estimated vector is a codeword, or if a maximum number of iterations is reached.
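For concreteness, a minimal floating-point MS decoder for this setup might look like the following sketch, with decoding stopped by a syndrome check or after a maximum number of iterations. The dense-matrix formulation and all function and variable names are illustrative assumptions, not the authors' implementation; note that for bipolar signaling the channel LLR is proportional to the received value, so no noise-power estimate is needed.

import numpy as np

def min_sum_decode(H, y, max_iters=200):
    """Sketch of min-sum decoding. H: (m, n) binary parity-check matrix
    (every row assumed to have degree >= 2); y: length-n received vector."""
    m, n = H.shape
    llr = y.astype(float)                 # channel LLRs, up to a positive scale
    mask = H.astype(bool)
    V = np.zeros((m, n))                  # variable-to-check messages
    C = np.zeros((m, n))                  # check-to-variable messages
    V[mask] = np.broadcast_to(llr, (m, n))[mask]
    for _ in range(max_iters):
        # Check-node update: sign-min rule on extrinsic inputs.
        for i in range(m):
            idx = np.flatnonzero(mask[i])
            msgs = V[i, idx]
            signs = np.sign(msgs) + (msgs == 0)          # treat 0 as +1
            prod_sign = np.prod(signs)
            mags = np.abs(msgs)
            order = np.argsort(mags)
            min1, min2 = mags[order[0]], mags[order[1]]
            for k, j in enumerate(idx):
                ext_min = min2 if j == idx[order[0]] else min1
                C[i, j] = prod_sign * signs[k] * ext_min
        # Variable-node update (addition) and tentative hard decision.
        total = llr + C.sum(axis=0)
        V[mask] = (np.broadcast_to(total, (m, n)) - C)[mask]
        x_hat = (total < 0).astype(int)
        if not np.any(H.dot(x_hat) % 2):                 # syndrome check
            break
    return x_hat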

To implement the MS algorithm, we consider the received values to be clipped symmetrically at a threshold c, and then uniformly quantized in the range [-c, c]. The quantization intervals are symmetric with respect to the origin, and each is represented by q quantization bits. Integer numbers are assigned to the intervals. Operations are performed on integers, and at the bit nodes, an outgoing message is clipped to the threshold value if it exceeds the threshold. As we will see in Section IV, clipping can improve the performance of the MS algorithm over a wide range of SNR, and with an optimized c, only four quantization bits provide near, or even slightly better than, ideal performance. Clipping, however, introduces an early error floor. The error floor, as well as the performance in the waterfall region, can be improved considerably by simple modifications to the MS algorithm, as presented in the next section.

In [5], the effect of quantization on MS was studied by density evolution, as well as by simulations on a (3, 6)-regular (8000, 4000) code. It was also observed in [5] that clipping (as part of quantization) can improve the threshold of the MS algorithm, and can also introduce an early error floor at finite block lengths. No effort, however, was made in [5] to optimize c or to study the variations of the optimal c with q, the SNR, and the choice of the code. In this letter, we show that although the optimal c is rather insensitive to q and SNR, it varies considerably with the choice of the code. In [5], due to the nonoptimized c, one can observe abnormalities in the reported results for quantized MS applied to the (8000, 4000) code, e.g., a BER curve for a given number of quantization bits outperforming the curve for a larger number of bits; the clipping thresholds used in those cases were not optimized. We have reported simulation results for the same code in Fig. 3(c). Our results indicate that the optimized c for this code, regardless of q, is about 1.25 over a wide range of SNR values. Our optimized results also show a graceful improvement in the performance of quantized MS [for both bit-error rate (BER) and word-error rate (WER)] as q increases, and no sign of an error floor in BER and WER down to the lowest error rates simulated, whereas the results reported in [5] show an error floor starting at comparatively high BER and WER values. Also, our optimized results for all simulated numbers of quantization bits outperform ideal MS (with no quantization) in the waterfall region. In [5], however, although the BER curve for one choice of q outperforms ideal MS, the curve for another performs almost the same as ideal MS. Moreover, for WER the trend changes: one curve still performs more or less the same as ideal MS, while for the other, quantized MS performs much worse than ideal MS at higher SNR values. Unlike these results, for our optimized results given in Fig. 3(c), the BER and WER curves follow the same trends as q changes.
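Assuming, purely for illustration, that the q-bit messages take the 2^q - 1 symmetric integer values -(2^(q-1) - 1), ..., +(2^(q-1) - 1) (the letter does not spell out the exact level assignment), the clip-and-quantize step might be sketched as follows; the function name and interface are hypothetical.

import numpy as np

def clip_and_quantize(x, c, q):
    """Clip x symmetrically to [-c, c] and map it to q-bit integer messages."""
    levels = 2 ** (q - 1) - 1             # largest integer message (assumption)
    step = c / levels                     # uniform quantization step over [0, c]
    x_clipped = np.clip(x, -c, c)
    return np.round(x_clipped / step).astype(int)

# Example: with c = 1.25 and q = 4 (the values used below for the (8000, 4000)
# code), messages take integer values in {-7, ..., +7}; outgoing bit-node
# messages would likewise be clipped back to +/-7 whenever they exceed the range.
# q4 = clip_and_quantize(received_llrs, c=1.25, q=4)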
III. MODIFICATIONS TO MS

For a given code, a given channel model, and a prescribed error rate, MS usually requires up to a few tenths of a decibel more transmitted power than BP. Inspection of the MS and BP algorithms in the LLR domain reveals that the two algorithms perform exactly the same operation at the bit nodes (addition), and that the difference lies only in the operations performed at the check nodes. Suppose that we have a check node c of degree 3, and let L_1 and L_2 be the input messages to c. Also, denote the output message by L. For BP, we then have

L = 2 tanh^{-1}( tanh(L_1/2) tanh(L_2/2) )
  = sgn(L_1) sgn(L_2) min(|L_1|, |L_2|) + log(1 + e^{-|L_1 + L_2|}) - log(1 + e^{-|L_1 - L_2|}),   (1)

while for MS, L = sgn(L_1) sgn(L_2) min(|L_1|, |L_2|). The two messages have the same sign, but a different magnitude [2].

Fig. 1. Correction factor f(d) for the MS algorithm.

Now, assuming |L_1 + L_2| >> 1 (the high-SNR regime), (1) can be simplified as

L ~ sgn(L_1) sgn(L_2) min(|L_1|, |L_2|) + f(d),   with d = |L_1 - L_2| and f(d) = -log(1 + e^{-d}).   (2)

The function f(d) is plotted in Fig. 1. It is always negative and never smaller than -log 2. Comparing (2) to the MS output, one can see that the term f(d), which is only a function of the magnitude of the difference between the incoming messages, is the correction factor that should be added to the magnitude of the output message of the check node in the MS algorithm to make it almost the same as that of BP. The precise implementation of f(d) is, however, complex. In this letter, we use a simple stepwise approximation of the correction factor: -y if d < x, and 0 otherwise, for a threshold x and an offset y. The correction factor is applied to the magnitude of the outgoing MS messages from the check nodes based on the following correction steps (signs remain the same as those in conventional MS).

1) Conditional Correction: For each edge e, let S denote the set of extrinsic incoming messages. Also, let M1 and m_e denote the message with the smallest magnitude in S and the outgoing message along e, respectively. Based on the MS algorithm, we have |m_e| = |M1|. The correction starts by removing M1 from S, and then performing the following step iteratively until the set S is empty: randomly select a message m from S, and reduce |m_e| by y if the magnitude of m exceeds |m_e| by less than x, for some predetermined positive and nonnegative real numbers x and y, respectively; otherwise, leave m_e unchanged. Remove m from S. It is easy to see that in the above correction process, if y >= x, then the order in which the incoming messages from S are processed does not affect the final value of m_e. In fact, in this case, the correction factor is applied at most once for each outgoing message. Parameters x and y are optimized through simulation.

A modification of the conditional correction in 1) is to apply the correction factor only once if there exists a message m in S whose magnitude exceeds |M1| by less than x. We distinguish the two conditional corrections by the labels A and B, where A refers to the one described in 1). While correction B is simpler to implement than A, our simulation results show that both corrections, when optimized, perform more or less the same. The following, yet simpler, correction is a special case of conditional correction B in which the condition is always satisfied (assuming that all check nodes have degree at least three).

2) Unconditional Correction: For each edge e, the magnitude of the outgoing message is reduced from |M1| by y, for some predetermined positive real number y.

Based on our simulations, for coarse quantization (four bits and less), conditional correction has a clear advantage over the unconditional one. For a larger number of quantization bits, however, the two algorithms, when optimized, perform very closely. This makes the unconditional correction a more attractive choice, due to its lower complexity.
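The exact correction term f(d) of (1) and (2), its stepwise approximation, and the unconditional (offset) correction at a degree-3 check node can be sketched as follows. The parameter names x and y follow the text above; the flooring of the corrected magnitude at zero and the function interfaces are assumptions for illustration, not the authors' code.

import numpy as np

def f_exact(d):
    """Exact correction term of (2): always negative, never below -log(2)."""
    return -np.log1p(np.exp(-np.abs(d)))

def f_step(d, x, y):
    """Stepwise approximation: constant offset -y when the two input
    magnitudes are within x of each other, zero otherwise."""
    return -y if np.abs(d) < x else 0.0

def offset_ms_check(l1, l2, y):
    """Unconditional (offset) correction at a degree-3 check node: the
    min-sum magnitude is reduced by y (floored at zero, an assumption)."""
    sign = np.sign(l1) * np.sign(l2)
    mag = max(min(abs(l1), abs(l2)) - y, 0.0)
    return sign * mag

# For comparison, the BP output of the same check node is
#   2 * atanh(tanh(l1 / 2) * tanh(l2 / 2)),
# which equals the min-sum output plus f_exact(|l1 - l2|) plus a term in
# |l1 + l2| that vanishes under the high-SNR assumption of (2).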
There has been some recent work to partially close the performance gap between MS and BP [1]-[6], [8]. In [1], [3], [6], and [8], the authors apply a corrective term to the output of the check nodes in the MS algorithm, which is an approximation of the correction term in (1). In [1] and [6], the approximation is a constant which is nonzero over a region specified by not only the difference, but also the sum, of the incoming messages, while in [3] and [8], lookup tables and piecewise linear approximations are also used. The approach of [2] is to normalize the output of the check nodes, while in [4] and [5], an algorithm identical to our unconditional correction, called offset BP-based, has been independently proposed (the BP-based algorithm is the same as MS; we use the two names interchangeably in the rest of the letter). In [4], the performance of offset MS for regular LDPC codes was analyzed using density evolution, and optimal correction parameters for some ensembles of rate-1/2 LDPC codes, including (3, 6)-regular codes, were obtained. The optimal correction parameter for (3, 6) codes was then used in the simulation of the (3, 6)-regular (8000, 4000) code, and results as close as about 0.05 dB to BP were reported. The application of density evolution to the quantized version of offset MS for regular LDPC codes was then presented in [5]. Numerical results for the threshold values of these algorithms for different quantization steps, different numbers of quantization bits, and different offset values (the offset parameter of [5] is the same as y in our unconditional correction) were also given in [5]. These results are for two rate-1/2 ensembles of (3, 6) and (4, 8) codes, and no explicit effort was made to optimize the clipping threshold. The values of the quantization step and the offset were, however, chosen in such a way that they generate offsets equal to the optimal offset value in the continuous case. Simulation results for quantized offset MS (with particular choices of the clipping threshold, quantization step, and offset) applied to the (3, 6)-regular (8000, 4000) code were also given in [5], and performance results about 0.1 dB away from BP were reported. This example, which has close-to-optimal parameters, is the only example provided in [5] on the application of quantized offset MS to finite-length codes.

In this letter, we not only study MS with unconditional correction in more detail, but also introduce MS with conditional correction. For the modified MS algorithms, and through extensive simulations on three short and intermediate-length LDPC codes, we obtain optimal clipping thresholds and correction parameters, and make the following observations:

1) MS with conditional correction outperforms MS with unconditional correction for coarse quantization (four bits or less);
2) the optimal clipping threshold for the modified MS algorithms, which is mainly a function of the code and is rather insensitive to q and SNR, is different from that for the standard MS algorithm (for example, for the (8000, 4000) code, the optimal c changes from 1.25 to 2.5);
3) the early error floor caused by clipping in standard MS can be much improved using the modified MS algorithms; and
4) for shorter codes, the modified MS algorithms can outperform the BP algorithm.

None of these observations was made in [5]. In particular, phenomena 3) and 4) are specific to finite-length codes and do not appear in the asymptotic density-evolution analysis used in [4] and [5].

IV. SIMULATIONS AND DISCUSSIONS

For the simulations, we consider three codes, one irregular and two regular. The irregular code, constructed in [9], has parameters (n, k) = (1268, 456), with n and k being the block length and the dimension of the code, respectively. The regular codes, taken from [13], have parameters (273, 191) and (8000, 4000). For future reference, these codes are labeled as I [the (1268, 456) code], II [the (273, 191) code], and III [the (8000, 4000) code]. For all simulation results in this paper, the maximum number of iterations is chosen to be 200. Also, for each SNR, enough codewords are simulated to produce 100 codeword errors. Moreover, when the error performances of different algorithms are compared, the corresponding decoders are set to operate in parallel on the same set of received vectors.
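The simulation protocol just described might be organized as in the following sketch, which runs all compared decoders on the same received vectors and accumulates word errors. Treating the 100-error target as applying to each decoder, and the encode and decoder callables, are assumptions for illustration only.

import numpy as np

def simulate_point(n, k, encode, decoders, ebn0_db, target_errors=100):
    """Estimate word-error rates of several decoders at one Eb/N0 point."""
    rate = k / n
    sigma = np.sqrt(1.0 / (2.0 * rate * 10 ** (ebn0_db / 10.0)))
    errors = np.zeros(len(decoders), dtype=int)
    words = 0
    while errors.min() < target_errors:          # assumption: 100 errors per decoder
        bits = np.random.randint(0, 2, k)
        tx = 1.0 - 2.0 * encode(bits)            # bipolar signaling: 0 -> +1, 1 -> -1
        rx = tx + sigma * np.random.randn(n)     # AWGN at the given Eb/N0
        for i, dec in enumerate(decoders):       # same received vector for all decoders
            x_hat = dec(rx)                      # e.g., min_sum_decode with max_iters=200
            errors[i] += np.any(x_hat != (tx < 0))
        words += 1
    return errors / words                        # word-error rate estimates

# Hypothetical usage:
# wer = simulate_point(8000, 4000, encode_fn, [bp_decode, ms_decode], ebn0_db=2.0)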

To study the effect of clipping, the BER of the three codes versus Eb/N0 under ideal (no clipping or quantization) and clipped (no quantization) MS is given in Fig. 2 (Eb and N0 are the average energy per information bit and the power spectral density of the AWGN, respectively). Results for the BP algorithm are also given as a reference. For each code, the optimum clipping threshold (resulting in the smallest BER) appears to be almost constant over a wide range of SNR; it is approximately equal to 2, 1.5, and 1.25 for codes I, II, and III, respectively. As can be seen in Fig. 2, clipping closes part of the performance gap between MS and BP. This is due to the upper bounding of overconfident reliabilities at the output of the check nodes, which is expected to improve the performance of MS. Clipping, however, results in an early error floor at higher values of SNR, as can be seen for codes I and II in Fig. 2. (It has been shown in [14] that for higher values of SNR, the optimal clipping threshold increases with SNR. Selecting a variable clipping threshold improves the error floor of the algorithm, compared with what is reported in this letter; the same applies to the quantized versions of MS and the modified MS algorithms.)

Fig. 2. BER curves for codes I, II, and III decoded by ideal MS, clipped MS, and ideal BP.

To investigate the effect of quantization, the BER and WER curves of quantized MS for the three codes with different numbers of quantization bits q are plotted in Fig. 3(a)-(c). The error-rate curves for ideal MS are also given as a reference. For each code, the clipping threshold is fixed and is chosen to be 2, 1.5, and 1.25 for codes I, II, and III, respectively. It is observed that the optimum clipping threshold is almost the same for different values of q and over a wide range of SNR values. For code I, the effect of c on the BER for different numbers of quantization bits at Eb/N0 = 2 dB is shown in Fig. 3(d). It can be seen that the optimal c is approximately equal to 2, regardless of q. Fig. 3(a)-(c) show that with four quantization bits, one can obtain near, or better than, ideal performance over a wide range of SNR values. In fact, since our simulation results for five and six bits are almost identical, one would expect that for numbers of bits larger than six, the results practically coincide with those for six bits in the observed range of SNRs. In [14], it is shown that, in general, the optimal c increases with SNR. This improves the error-floor performance under quantized MS. For example, the results of [14] indicate that for code I, by increasing the threshold to its optimal values at higher SNRs, the error floors reported in Fig. 3(a) are improved by about an order of magnitude.

Fig. 3. (a)-(c) BER and WER curves of ideal and quantized MS for codes I, II, and III, respectively. (d) Effect of the clipping threshold c on the error performance of quantized MS for code I at Eb/N0 = 2 dB.

The effects of conditional and unconditional corrections on ideal, clipped, and quantized MS for the three codes are studied through extensive simulation. For ideal MS, the optimal values of x and y for conditional correction, and the optimal value of y for unconditional correction, are rather insensitive to SNR and are mainly functions of the code. The modified algorithms considerably outperform MS, and perform very close to, or in some cases even slightly outperform, BP. An example of this can be seen in Fig. 4(a) for code I, where unconditional correction outperforms BP by about 0.2 dB at the lowest BERs shown. For clipped MS, our simulations show that the best clipping threshold for all modified MS algorithms is almost the same, and is rather constant over a wide range of SNR values; it is approximately equal to 3, 2, and 2.5 for codes I, II, and III, respectively. The optimal values of x and y are also rather insensitive to SNR. The results for modified clipped MS in the waterfall region are very close to those of modified ideal MS. The former, however, still suffers from an earlier error floor, compared with the latter. This error floor, though, is lower than the original error floor of clipped MS [14]. The reason for this is the increase in the optimal clipping threshold for the modified algorithms.

Fig. 4. BER curves of MS, BP, and modified MS algorithms for codes I (a), II (b), and III (c).

For the quantized versions of the modified MS algorithms, it appears that the optimal c for each code is almost constant over a wide range of SNR values and for different numbers of quantization bits, and is more or less the same as that of the modified clipped MS algorithms, i.e., 3, 2, and 2.5 for codes I, II, and III, respectively. For a given number of quantization bits, the optimal values of x and y do not depend much on SNR. Fig. 4 shows the BER curves of the modified quantized MS algorithms with the best clipping and correction parameters; the BER curves for conditional correction A and for unconditional correction are reported. Results for ideal MS and BP are also given as a reference. For code II, Fig. 4(b) is zoomed into the high-SNR region of the curves.

The results for four bits are not shown in this case, as they are more or less the same as those of standard MS in Fig. 3(b). Fig. 4(a) and (c) show that for four bits, conditional correction considerably outperforms unconditional correction. It can also be seen in Fig. 4(a)-(c) that the modified quantized MS algorithms perform close to ideal BP, and even slightly outperform BP at higher SNR values for codes I and II. This is a significant improvement compared with the quantized version of the conventional MS algorithm, particularly for codes I and III. As a final note, by comparing Fig. 3(a)-(c) with Fig. 4(a)-(c), one can see that although four-bit quantization may be sufficient for the MS algorithm, this may not be the case for the modified MS algorithms. In particular, Fig. 4(a) shows a considerable improvement in the error performance for code I when q is increased from 4 to 5. Similarly, for code III, the increase from four to five bits results in about a 0.2 dB gain in SNR.

V. CONCLUSION

The effects of clipping and quantization on the MS algorithm are investigated. It is shown, by simulating three LDPC codes, that with the optimum clipping threshold, a four-bit uniform quantizer and four-bit messages provide near (sometimes even better than) ideal performance over a wide range of SNR values. We also propose modifications to the MS algorithm that can improve the performance significantly with a minor increase in complexity. At the observed error rates, the modified MS algorithms, even in their quantized versions, perform very close to, or sometimes even outperform, BP with much less complexity. Our results indicate that at a given SNR, proper attention should be given to the choices of the clipping threshold and the required precision for obtaining the best performance/complexity tradeoff. These choices depend on the code and are related to the selected version of the MS algorithm.

ACKNOWLEDGMENT

The authors wish to thank the Editor and the anonymous reviewers for their helpful comments.

REFERENCES

[1] A. Anastasopoulos, "A comparison between the sum-product and the min-sum iterative detection algorithms based on density evolution," in Proc. IEEE Globecom, San Antonio, TX, Nov. 2001, pp. 1021-1025.
[2] J. Chen and M. Fossorier, "Near-optimum universal belief-propagation-based decoding of low-density parity-check codes," IEEE Trans. Commun., vol. 50, pp. 406-414, Mar. 2002.
[3] J. Chen, A. Dholakia, E. Eleftheriou, M. Fossorier, and X.-Y. Hu, "Near-optimal reduced-complexity decoding algorithms for LDPC codes," in Proc. IEEE Int. Symp. Information Theory, Lausanne, Switzerland, Jun. 30-Jul. 5, 2002, p. 455.
[4] J. Chen and M. Fossorier, "Density evolution for two improved BP-based decoding algorithms of LDPC codes," IEEE Commun. Lett., vol. 6, pp. 208-210, May 2002.
[5] J. Chen and M. Fossorier, "Density evolution for BP-based decoding algorithms of LDPC codes and their quantized versions," in Proc. IEEE Globecom, Nov. 2002, pp. 1378-1382.
[6] E. Eleftheriou, T. Mittelholzer, and A. Dholakia, "Reduced-complexity decoding algorithm for low-density parity-check codes," IEE Electron. Lett., vol. 37, no. 2, pp. 102-104, Jan. 2001.
[7] R. G. Gallager, "Low-density parity-check codes," IRE Trans. Inform. Theory, vol. IT-8, pp. 21-28, Jan. 1962.
[8] X.-Y. Hu, E. Eleftheriou, D.-M. Arnold, and A. Dholakia, "Efficient implementation of the sum-product algorithm for decoding LDPC codes," in Proc. IEEE Globecom, San Antonio, TX, Nov. 2001, pp. 1036-1036E.
[9] Y. Mao and A. H. Banihashemi, "A heuristic search for good low-density parity-check codes at short block lengths," in Proc. IEEE Int. Conf. Communications, vol. 1, 2001, pp. 41-44.
[10] L. Ping and W. K. Leung, "Decoding low-density parity-check codes with finite quantization bits," IEEE Commun. Lett., vol. 4, pp. 62-64, Feb. 2000.
[11] R. M. Tanner, "A recursive approach to low-complexity codes," IEEE Trans. Inform. Theory, vol. IT-27, pp. 533-547, Sep. 1981.
[12] N. Wiberg, "Codes and decoding on general graphs," Ph.D. dissertation, Dept. Elec. Eng., Linköping Univ., Linköping, Sweden, 1996.
[13] [Online]. Available: http://www.inference.phy.cam.ac.uk/mackay/codes/data.html#s14
[14] J. Zhao, "Effects of clipping and quantization on min-sum algorithm and its modifications for decoding low-density parity-check codes," M.Sc. thesis, Dept. Syst. Comput. Eng., Carleton Univ., Ottawa, ON, Canada, 2003.