SIMULATED PERFORMANCE OF LOW-DENSITY PARITY-CHECK CODES: A MATLAB IMPLEMENTATION
LAKEHEAD UNIVERSITY, FACULTY OF ENGINEERING, 2006
By: Dan Dechene, Kevin Peets
Supervised by: Dr. Julian Cheng

TABLE OF CONTENTS
1.0 Introduction
  1.1 Digital Communication
2.0 Channel Coding
  2.1 Shannon Theorem for Channel Coding
  2.2 Hamming Code
  2.3 Tanner Graph Representation
3.0 LDPC
  3.1 Introduction to LDPC
    3.1.1 Parity-Check Matrix
      3.1.1.1 Classifying the Matrix
      3.1.1.2 Methods of Generation
    3.1.2 Minimum Distance of LDPC Codes
    3.1.3 Cycle Length of LDPC Codes
    3.1.4 Linear Independence
  3.2 LDPC System Overview
  3.3 Generation for Simulation
  3.4 Encoding
    3.4.1 Linear Independence Problem
  3.5 Decoding
    3.5.1 Hard Decision vs. Soft Decision Decoding
    3.5.2 SPA Algorithm
      3.5.2.1 Computing Messages
      3.5.2.2 Initialization
      3.5.2.3 Soft Decision
      3.5.2.4 Simulation Computation
4.0 Results
5.0 Problems Encountered
6.0 Future Work
  6.1 Increase Efficiency of Simulation Algorithms
  6.2 Lower Memory Requirements of Parity-Check Matrix
  6.3 VLSI Implementation
7.0 Conclusion
References
Appendix A: Code
Appendix B: Simulink Model

TABLE OF FIGURES
Figure 1: Communication System Block Diagram
Figure 2: Graphical Representation of Hamming (7,4) Code
Figure 3: All Possible Codewords for Hamming (7,4) Code
Figure 4: Bipartite Tanner Graph
Figure 5: Length 4 Cycle
Figure 6: Length 6 Cycle
Figure 7: LDPC System Overview
Figure 8: Flowchart to create Parity-Check matrix, H
Figure 9: Likelihood functions for BPSK modulation over an AWGN channel
Figure 10: Representation of Nodes
Figure 11: Representation of Nodes
Figure 12: Flowchart for Decoding
Figure 13: MacKay's Results
Figure 14: Simulated Results
Figure 15: Performance of simulations vs. Hamming with Shannon's limit

1.0 INTRODUCTION

In the early nineties, turbo codes and their new iterative decoding technique were introduced. Employing this new coding scheme and its decoding algorithm, it was possible to achieve performance within a few tenths of a dB of the Shannon limit at a bit error rate of 10^-5 [1]. This discovery not only had a major impact on the telecommunications industry, but it also kicked off major research into capacity-approaching coding schemes that use iterative decoding, now that such performance was known to be achievable. In 1962, Robert Gallager had originally proposed Low-Density Parity-Check codes, or LDPC codes [2], as a class of channel coding, but implementation of these codes required a large amount of computing power due to the high complexity and memory requirements of the encoding/decoding operations, so they were forgotten. A few years after turbo codes made their appearance, David MacKay rediscovered LDPC codes [3], and he showed that LDPC codes were also capable of approaching the Shannon limit using iterative decoding techniques.

An LDPC code is a linear block code characterised by a very sparse parity-check matrix. This means that the parity-check matrix has a very low concentration of 1's in it, hence the name low-density parity-check code. The sparseness of LDPC codes is what has interested researchers, as it can lead to excellent performance in terms of bit error rates.

The purpose of this paper was to gain an understanding of LDPC codes and utilize that knowledge to construct and test a series of algorithms to simulate their performance. This paper will begin with a basic background of digital communications and channel coding theory and then carry the basic principles forward and apply them to LDPC.

1.1 DIGITAL COMMUNICATION

Digital communication is a fundamental requirement of the modern world. Many current analog transmission systems are converting to digital, such as cable TV. The advantages allow content to be dynamic as well as introduce new features that were impossible over an analog system.

Figure 1: Communication System Block Diagram

Figure 1 shows a model of a communication system. A digital message originates from the source (this could have been obtained from an analog signal via an analog-to-digital converter). These digital signals are then passed through a source encoder. The source encoder removes the redundancy of the signal, much the same way as computer file compression operates. Following source encoding, the signal is passed through the channel encoder, which adds controlled redundancy to the signal; the signal is then modulated and transmitted over the channel. The reverse process occurs in the receiver. This paper focuses on the channel encoder/decoder blocks: channel coding. The purpose of channel coding is to add controlled redundancy into the transmitted signal to increase the reliability of transmission and lower transmission power requirements.

2.0 CHANNEL CODING

Channel coding is a way of introducing controlled redundancy into a transmitted binary data stream in order to increase the reliability of transmission and lower power transmission requirements. Channel coding is carried out by introducing redundant parity bits into the transmitted information stream. The requirement for a channel coding scheme exists only because of the noise introduced in the channel. Simple channel coding schemes allow the receiver of the transmitted data signal to detect errors, while more advanced channel coding schemes provide the ability to recover a finite amount of corrupted data. This results in more reliable communication and, in many cases, eliminates the need for retransmission. Although channel coding provides many benefits, there is an increase in the number of bits being transmitted. This is important when selecting the best channel coding scheme to achieve the required bit error rate for a system.

2.1 SHANNON THEOREM FOR CHANNEL CODING

Communication over noisy channels can be improved by the use of a channel code C, as demonstrated by C. E. Shannon in 1948 with his famous channel coding theorem: "Let a discrete channel have the capacity C and a discrete source the entropy per second H. If H <= C there exists a coding system such that the output of the source can be transmitted over the channel with an arbitrarily small frequency of errors (or an arbitrarily small equivocation). If H > C it is possible to encode the source so that the equivocation is less than H - C + e, where e is arbitrarily small." [4]

This theorem states that below a maximum code rate R, which is equal to the capacity of the channel, it is possible to find error-correcting codes capable of achieving any given probability of error.

While Shannon proposed this theorem, he provided no insight into how to achieve this capacity. The evidence of the search for such a coding scheme can be seen in the rapid development of capacity-improving schemes. When Shannon announced his theory in the July and October issues of the Bell System Technical Journal in 1948, the largest communications cable in operation at that time was capable of carrying 1800 voice conversations. Twenty-five years later, the highest capacity cable was capable of carrying 230000 simultaneous conversations [5]. Researchers are continuously looking for ways to improve capacity. Currently, the only measure that can be used for code performance is its proximity to Shannon's limit.

Shannon's limit can be expressed in a number of different ways. Shannon's limit for a band-limited channel is:

C = B*log2(1 + P_S/P_N)

For a system with no bandwidth limit, this equation becomes:

C = (1/2)*log2(1 + 2*R*E_b/N_0)

The rate achievable by a coded system also depends on the code rate, R = (bits in original message)/(bits sent on channel), and on the tolerated bit error rate, and can be written as:

R*(1 + p*log2(p) + (1 - p)*log2(1 - p)) = C

where p is the BER for a given SNR and R is the code rate. The above equations can be combined into an expression that contains only p and the SNR; solving this equation requires numerical computation:

R*(1 + p*log2(p) + (1 - p)*log2(1 - p)) = (1/2)*log2(1 + 2*R*E_b/N_0)

From the above, Shannon's limit for a code rate of 1/2 can be shown to be 0.188 dB [6].
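As noted above, solving the combined expression requires numerical computation. The snippet below is a minimal MATLAB sketch of that numerical step, assuming the reconstructed form of the equation given above; the values of R and p are examples only, and the exact 0.188 dB figure quoted from [6] additionally reflects the binary (BPSK) input constraint used in the simulations.

    % Numerically solve the combined expression above for Eb/No at a given
    % code rate R and target BER p (example values; illustration only).
    R = 0.5;                                    % code rate
    p = 1e-5;                                   % target bit error rate
    lhs = R*(1 + p*log2(p) + (1-p)*log2(1-p));  % left-hand side of the equation
    f = @(x) 0.5*log2(1 + 2*R*x) - lhs;         % x is Eb/No on a linear scale
    EbNo = fzero(f, 1);                         % numerical root
    EbNo_dB = 10*log10(EbNo)                    % Shannon-limit estimate in dB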

where p s the BER for a gven SNR and R s the code rate. The above equatons can be solved to have an expresson that has only contans p and SNR. To solve ths equaton requres numercal computaton. R(1 + p log + 1 = 2 + 2 ( p) (1 p) log 2 (1 p)) log 2 1 2 E R N B 0 E R N B 0 From the above, the Shannon s lmt for a code rate of ½ can be shown to be 0.188dB [6]. 2.2 HAMMING CODE Hammng (7,4) code s an relatvely smple startng pont n order to understand channel codng. It s block code that generates 3 party bts for every 4 bts of data. Hammng code operates off even party. Hammng (7,4) s a very smplstc code to understand as t can be graphcally represented by a Venn dagram. Fgure 2 shows how a Hammng (7,4) s party bts are calculated. The data bts s 1, s 2, s 3, and s 4 are placed n the mddle of the Venn dagram as shown, then the party bts t 5, t 6, and t 7 are assgned n a manner such that each of the 3 crcles has an even number of ones (even party). Fgure 2: Graphcal Representaton of Hammng 7,4 Code [3] (5)

Figure 3: All Possible Codewords for Hamming (7,4) Code [3]

Figure 3 shows the codewords constructed by the rule given in Figure 2 above. Another interesting property of any channel coding scheme is its minimum distance. For Hamming (7,4), this minimum distance is 3. This means that, given an arbitrary codeword, it takes at least 3 bit flips to produce any other valid codeword. In terms of decoding, this means that it is possible to correct a finite number of errors. Hamming (7,4) code is able to detect single and dual bit errors, but is only able to correct single bit errors. It is important to note that if 3 or more bit errors occur, the decoder will be unable to correct the bit errors, and in fact may be unable to detect that a bit error occurred. The following equations represent the characteristics of Hamming code in terms of minimum distance, number of detectable errors and number of correctable errors:

MD = 2n + 1
p = n + 1

where MD is the minimum distance, n is the number of errors the code can correct, and p is the number of errors the code can detect. Again, it is important to note that if sufficient noise is present, the codeword may be corrupted in a manner that Hamming code is unable to detect or correct. This means that minimum distance plays an important role as a characteristic of a given code. Minimum distance in general is defined as the fewest number of bits that must flip in any given codeword for it to become another valid codeword. A large minimum distance makes for a good coding scheme, as it increases the noise immunity of the system. It is often very difficult to determine the minimum distance for a given code. This is the case because there exist 2^k possible codewords in any given coding scheme.

Computing the minimum distance therefore requires that 2^(k-1)*(2^k - 1) comparisons be performed. It is obvious that as the blocklength increases, measuring the minimum distance requires a large amount of computational power. Methods have been proposed to measure the minimum distance of these codes; however, they will not be discussed here. Although Hamming (7,4) code does not provide a large gain in terms of error rate performance versus an uncoded system, it provides an excellent first step in studying coding theory.

2.3 TANNER GRAPH REPRESENTATION

Tanner graphs are pictorial ways of representing the parity-check matrix of block codes. We are interested in these graphs as they can represent the H matrices of LDPC codes. The rows of a parity-check matrix are represented by check nodes, while the columns are represented by variable nodes. If there is a 1 at a given position (j, i), where j is the row index and i is the column index, an edge is used to show this connection in the Tanner graph. Figure 4 illustrates a Tanner graph of an implementation of Hamming (7,4) code, with check nodes f_0, f_1, f_2 (rows) and variable nodes c_0 through c_6 (columns):

H = [1 1 1 0 1 0 0
     0 1 1 1 0 1 0
     1 0 1 1 0 0 1]

Figure 4: Tanner Graph of a Hamming (7,4) Code
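As a quick check of the properties quoted in Section 2.2, the following sketch enumerates all 2^4 = 16 codewords of a Hamming (7,4) code built from the H matrix above (which is in systematic form H = [A I]) and confirms by brute force that the minimum distance is 3. The systematic generator used here is an assumption for illustration; it is not necessarily the bit layout of Figure 2.

    % Brute-force check of the Hamming (7,4) minimum distance (illustration).
    A = [1 1 1 0; 0 1 1 1; 1 0 1 1];              % parity portion of H = [A I]
    H = [A eye(3)];
    msgs = dec2bin(0:15) - '0';                   % all 16 four-bit messages
    codewords = [msgs, mod(msgs*A', 2)];          % codeword = [message, parity bits]
    assert(all(all(mod(codewords*H', 2) == 0)));  % every codeword satisfies Hc = 0
    dmin = 7;
    for a = 1:15                                  % compare every pair of codewords
        for b = a+1:16
            dmin = min(dmin, sum(codewords(a,:) ~= codewords(b,:)));
        end
    end
    dmin                                          % displays 3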

3.0 LDPC

3.1 INTRODUCTION TO LDPC

Robert G. Gallager originally discovered Low-Density Parity-Check codes (or LDPC codes) in 1962 [2]. They are a class of linear block codes that approach Shannon's channel capacity limit (see Section 2.1). LDPC codes are characterized by the sparseness of ones in the parity-check matrix. This low number of ones allows for a large minimum distance of the code, resulting in improved performance. Although proposed in the early 1960's, it is only recently that these codes have emerged as a promising area of research in achieving channel capacity. This is in part due to the large amount of processing power required to simulate the codes. For any coding scheme, larger-blocklength codes provide better performance but require more computing power.

Performance of a code is measured through its bit error rate (BER) versus signal-to-noise ratio (E_b/N_0) in dB. The curve of a good code will show a dramatic drop in BER as the SNR improves. The best codes have a cliff drop at an SNR slightly higher than Shannon's limit.

3.1.1 PARITY-CHECK MATRIX

3.1.1.1 Classifying the Matrix

LDPC codes are classified into two different classes of codes: regular and irregular codes. Regular codes are the set of codes in which there is a constant number w_C of 1's distributed throughout each column and a constant number w_R of 1's per row. For a given column weight w_C we can determine the row weight as w_R = N*w_C/(N - k), where N is the blocklength of the code and k is the message length. Irregular codes are those which do not belong to this set (they do not maintain a consistent row weight).
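A parity-check matrix can be classified as regular or irregular directly from its column and row sums. The snippet below is a minimal sketch of that check; it assumes a binary matrix H is already in the workspace.

    % Classify an existing parity-check matrix H as regular or irregular.
    wc = sum(H, 1);                     % column weights
    wr = sum(H, 2);                     % row weights
    if all(wc == wc(1)) && all(wr == wr(1))
        fprintf('Regular LDPC code: wC = %d, wR = %d\n', wc(1), wr(1));
    else
        disp('Irregular LDPC code (column or row weights are not constant)');
    end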

3.1.1.2 Methods of Generation

In the 1960's, Gallager published the existence of the class of LDPC codes, but provided no insight into how to generate the parity-check matrix (also known as the H matrix). Many methods of generation have since been proposed by various researchers [3][6][7]. Several methods include:

Random generation subject to constraints
Density evolution
Finite geometry

In terms of generation, there are several key concerns to examine when generating the parity-check matrix, such as minimum distance, cycle length and linear independence.

3.1.2 MINIMUM DISTANCE OF LDPC CODES

As discussed in Section 2.2, the minimum distance is a property of any coding scheme. Ideally this minimum distance should be as large as possible, but there is a practical limit on how large it can be. LDPC codes pose a significant problem when calculating this minimum distance efficiently, as an effective LDPC code requires a rather large blocklength. Using random generation it is very difficult to specify the minimum distance as a parameter; rather, the minimum distance becomes a property of the generated code.

3.1.3 CYCLE LENGTH OF LDPC CODES

Using a Tanner graph it is possible to see the definition of the minimum cycle length of a code. It is the minimum number of edges travelled from one check node to return to the same check node. Length 4 and length 6 cycles, with the corresponding parity-check matrix configurations, are shown in Figures 5 and 6 respectively.

Figure 5: Length 4 Cycle (check nodes and variable nodes), corresponding to the parity-check pattern

H = [1 1
     1 1]

Figure 6: Length 6 Cycle (check nodes and variable nodes), corresponding to the parity-check pattern

H = [1 1 0
     0 1 1
     1 0 1]

It has been shown that the existence of these cycles degrades performance during the iterative decoding process [7]. Therefore, when generating the parity-check matrix, the minimum cycle length permitted must be determined. It is possible to control the minimum cycle length when generating the matrix; however, computational complexity and time increase exponentially with each increase in minimum cycle length.

3.1.4 LINEAR INDEPENDENCE

The generator matrix G is defined such that:

c = G^T * m

where

c = [c_1, c_2, ..., c_N]^T    Codeword
m = [m_1, m_2, ..., m_K]^T    Message word
G    k by N generator matrix

In order to guarantee the existence of such a matrix G, the linear independence of all rows of the parity-check matrix must be assured. In practical random generation, this becomes very difficult. The method used to approach this problem will be studied in further depth in Section 3.3, Generation for Simulation.

3.2 LDPC SYSTEM OVERVIEW

Figure 7: LDPC System Overview (Message Source -> LDPC Encoder -> BPSK Modulator -> Channel (+ noise) -> LDPC Decoder (SPA) -> Retrieve Message from Codeword -> Message Destination)

where:
m    Message
c    Codeword
x    Modulated signal
n    AWGN noise
y    Received signal
ĉ    Estimated codeword
m̂    Estimated message

Note: all of the above signals are vectors in the simulation implementation.

Message Source: The message source is the end user transmitting the data. In terms of mobile communications, the message source would be the end user transmitting his/her voice information. The simulation utilized a random message generator. This generator creates a message vector with equal a priori probability: Pr[m_i = 1] = Pr[m_i = 0] = 0.5.

LDPC Encoder: The LDPC encoder is implemented at the end user transmitting the data. In terms of the simulation implementation, encoding is done via a generator matrix. This is covered in further detail in Section 3.4.

BPSK Modulator: The BPSK (Binary Phase Shift Keying) modulator maps the input binary signals to an analog signal for transmission. In simulation, the BPSK signal is represented by the mapping {0, 1} -> {+sqrt(E_b), -sqrt(E_b)}.

Channel: The channel is the medium over which information is transmitted from the transmitter to the receiver. In mobile communication this is a wireless channel, and for other applications this could be copper or fibre optics. The addition of noise normally occurs in the channel. In the simulations, the channel is modelled as an AWGN (Additive White Gaussian Noise) channel. The resulting noise added to the system follows a zero-mean normal distribution with variance N_0/2, where N_0 is the single-sided noise power spectral density.
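For reference, the following is a minimal sketch of the modulator and channel model just described, mirroring the BPSKoverAWGN and AWGN routines of Appendix A (it uses MATLAB's randn in place of the Box-Muller generator in AWGN.m); the values of E_b, N_0 and the example codeword are assumptions for illustration only.

    % Minimal sketch of the BPSK mapping and AWGN channel (see Appendix A.6/A.7).
    Eb = 1;                                   % energy per transmitted bit (example)
    No = 1;                                   % single-sided noise PSD (example)
    c  = [0 1 1 0 1 0 0];                     % example codeword bits
    x  = sqrt(Eb) * (-1).^c;                  % BPSK: 0 -> +sqrt(Eb), 1 -> -sqrt(Eb)
    y  = x + sqrt(No/2) * randn(size(x));     % add zero-mean noise of variance No/2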

LDPC Decoder: The decoder is implemented at the end user receiving the information. In terms of the simulation implementation, decoding is a process that loops, passing messages back and forth along the Tanner graph (see Section 2.3: Tanner Graph Representation), until certain conditions are satisfied or a maximum number of passes has occurred. This is discussed in much more detail in Section 3.5. It is obvious that in mobile communications the handset would require both the encoder and the decoder as a pair to allow for bi-directional communication.

Retrieve Message From Codeword: This simple process retrieves the estimated message from the estimated codeword. In the simulation this is done via a simple function call following estimation of the codeword.

Message Destination: The message destination is the end user receiving the data. In a mobile communications environment, this would be the user receiving the voice information of the other user. In the simulations there is no message destination; rather, the estimated message is compared to the transmitted message in order to detect whether a transmission error occurred.

3.3 GENERATION FOR SIMULATION

The method used for generating the H matrix in this paper was random generation with constraints. The generation routine allows for 4 input parameters:

N – block/codeword length
k – number of message bits
w_C – column weight (# of 1's per column)
reltol – tolerance variable used to control regularity

The row weight w_R is computed as w_C*N/(N - k). In order to guarantee that w_R is a whole number, the value is rounded up if it contains a decimal value, setting the maximum allowed number of 1's per row. In order to allow sufficiently fast computation of the H matrix, only cycles of length 4 are avoided in the algorithm. The algorithm for generation of the matrix is shown in Figure 8 below.

Figure 8: Flowchart to create the Parity-Check matrix, H
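As a concrete illustration, the routines listed in Appendix A can be chained as sketched below to generate H, derive the generator matrix and run a BER simulation. The parameter values (blocklength 96, 48 message bits, column weight 3, 13 decoder iterations) are those used in Section 4.0; the exact calling pattern is an assumption based on the function signatures in Appendix A.

    % Sketch of one end-to-end simulation run using the Appendix A routines.
    N      = 96;        % blocklength
    k      = 48;        % message bits (rate 1/2)
    wc     = 3;         % column weight
    reltol = 1;         % 1 -> regular LDPC generation
    Iter   = 13;        % maximum SPA iterations
    [H, Rj, Ci]  = MakeH(N, k, wc, reltol);          % random H, no length-4 cycles
    [G, NewCol]  = ParseH(H);                        % column swaps; returns [X^-1*Y ; I]
    [SNR, BER]   = Run(G, H, Rj, Ci, Iter, NewCol);  % BER vs. Eb/No sweep
    semilogy(SNR, BER); grid on;
    xlabel('E_b/N_0 (dB)'); ylabel('BER');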

3.4 ENCODING

Practical encoding of LDPC codes can be a difficult thing to implement. Due to the nature of communication systems, it normally requires real-time operation. Encoding, especially at higher blocklengths, can be quite difficult to implement in hardware; there are, however, several methods of generating H such that encoding can be done via shift registers [11]. These methods will not be discussed here. In terms of simulation, encoding can be done via matrix multiplication, as the memory allotment of most personal computers can handle these operations with rather large blocklengths.

In Section 3.1.4, it was determined that we can compute the codeword c using:

c = G^T * m

This paper will now examine how to generate this matrix G. In order to determine the relationship of the parity bits to the H matrix, we will use the following definition of the syndrome. The definition is similar to that of Hamming code (Section 2.2). We define a complete set of successful parity checks as:

Hc = 0

where:
c = [c_1, c_2, ..., c_N]^T
H    (N - k) by N parity-check matrix

The location of the parity bits in the codeword is arbitrary; therefore we will form our codeword such that:

c = [p : m]^T

where:
m = [m_1, m_2, ..., m_K]^T       Message word
p = [p_1, p_2, ..., p_(N-k)]^T   Parity bits

Therefore:

H * [p : m]^T = 0

H can be partitioned as:

H = [X : Y]

where:
X    (N - k) by (N - k) sub-matrix
Y    (N - k) by k sub-matrix

From this we can find:

Xp + Ym = 0

Using modulo-2 arithmetic we can solve for p as:

p = X^(-1) * Y * m

Then we solve for c as:

c = [(X^(-1)*Y)^T : I]^T * m

where I is the k by k identity matrix,

and we define G as:

G = [(X^(-1)*Y)^T : I]

3.4.1 LINEAR INDEPENDENCE PROBLEM

In the above expression for G it is evident that X must be invertible over GF(2). Random generation of H does not normally guarantee this linear independence of the sub-matrix X. This problem was solved by rearranging the columns of H to guarantee the sub-matrix X is invertible. This is done by performing Gauss-Jordan elimination on a dummy copy of H. When the current diagonal element of X is a zero element, that column is swapped with the next column containing a non-zero element in that row. The resulting column swaps are performed on the actual H matrix and recorded for the purpose of rearranging the codeword bits c following encoding, in order for the syndrome (Hc = 0) to be satisfied using the original H matrix. The information on the rearrangement is also required at the receiver to recover the original message. This method of rearranging columns was devised by Arun Avudainayagam, a Masters student at the University of Florida, while working on a simulation toolbox for coding [8]. An important note is that such a problem can easily be avoided by utilizing other methods of generating the parity-check matrix which result in a linearly independent sub-matrix X.

3.5 DECODING

3.5.1 HARD DECISION VS. SOFT DECISION DECODING

There are generally two classes of decoding techniques: hard and soft decision decoding. Hard decision decoding involves making a decision on the value of a bit at the point of reception, such as a MAP (Maximum A Posteriori Probability) decoder.

Such a decoder forms a decision based on a boundary that minimizes the probability of bit error. Figure 9 shows the likelihood functions for BPSK modulation over an AWGN (Additive White Gaussian Noise) channel.

Figure 9: Likelihood functions for BPSK modulation over an AWGN channel

The values of the likelihood functions in the above figure are given by:

f(y | s_1) = (1/sqrt(pi*N_0)) * e^(-(y + sqrt(E_b))^2 / N_0)   and   f(y | s_2) = (1/sqrt(pi*N_0)) * e^(-(y - sqrt(E_b))^2 / N_0)

The optimal choice for a MAP receiver to minimize the probability of error would be to choose a decision boundary α such that Pr[error] is minimized over α. The probability of error as a function of α can be found as:

Pr_ERROR(α) = Pr[S_2] * ∫ from -∞ to α of (1/sqrt(pi*N_0)) * e^(-(y - sqrt(E_b))^2 / N_0) dy + Pr[S_1] * ∫ from α to +∞ of (1/sqrt(pi*N_0)) * e^(-(y + sqrt(E_b))^2 / N_0) dy

The optimal value of α is the value that minimizes the above equation. This α becomes the decision threshold (boundary) for a MAP receiver. Note: the above expressions are only valid for BPSK modulation over an AWGN channel.

A decision for a soft-decision decoder is not so clear. Soft decoding requires processing of the received codeword vector prior to making a decision on the value of the bits. There is a large amount of research into various methods of soft-decision decoding. This paper examines the performance of a message-passing algorithm known as the Sum-Product Algorithm decoder (SPA decoder).

3.5.2 SPA ALGORITHM

As stated in the previous section, this paper examines the performance under simulation of the Sum-Product decoder. This decoding method is based on passing messages back and forth between the check and variable nodes on the Tanner graph (see Section 2.3: Tanner Graph Representation). Following each iteration, the algorithm determines a new soft-decision a posteriori probability and forms an estimate of the codeword. In order to understand the algorithm, we will define the following notation:

H : parity-check matrix ((n - k) by n)
c_i : i-th bit of the n-bit codeword
P_i(b) : Pr[c_i = b | y_i]

R_j : set of column locations where H(j, i) = 1, for the j-th row
R_j~i : set R_j less column i
C_i : set of row locations where H(j, i) = 1, for the i-th column
C_i~j : set C_i less row j
q_ij(b) : Pr[c_i = b | y_i and the messages r_ki(b), k ∈ C_i~j]
r_ji(b) : Pr[check node f_j satisfied | c_i = b and the messages q_kj(b), k ∈ R_j~i]

In the algorithm, messages are passed from check node to variable node and from variable node to check node as the messages r_ji(b) and q_ij(b) respectively. This message passing is shown in Figures 10 and 11 below using the Tanner graph representation.

Figure 10: Representation of nodes (check-to-variable message r_ji(b))
Figure 11: Representation of nodes (variable-to-check message q_ij(b))

3.5.2.1 Computing the Messages

By making use of Bayes' theorem, q_ij(b) can be solved as follows:

q_ij(b) = Pr[c_i = b | y_i and the check equations involving c_i (other than f_j) being satisfied]
        = Pr[check equations involving c_i (other than f_j) satisfied | c_i = b, y_i] * Pr[c_i = b | y_i] / Pr[check equations involving c_i (other than f_j) satisfied]

With the assumption of independence of the check equations being satisfied given c_i = b:

q_ij(b) = K_ij * P_i(b) * ∏_{k ∈ C_i~j} r_ki(b)    (1)

By using (1), we can solve for both cases, b = 0 and b = 1:

q_ij(0) = K_ij * P_i(0) * ∏_{k ∈ C_i~j} r_ki(0)    (2)
q_ij(1) = K_ij * P_i(1) * ∏_{k ∈ C_i~j} r_ki(1)    (3)

where the constants K_ij are chosen to guarantee unity total probability, q_ij(0) + q_ij(1) = 1, and P_i(0) and P_i(1) are found using equation (10).

Since all n - k check equations in H utilize even parity, r_ji(b) can be solved for using the following result:

f_j = Σ_{i ∈ R_j} c_i = 0 over mod-2 arithmetic    (4)

It can be shown that for M bits, the probability of the set containing an even number of ones is [3]:

Pr[even number of 1's] = 1/2 + (1/2) * ∏_{i=1}^{M} (1 - 2*Pr[bit_i = 1])    (5)

If there is an even number of bits equal to 1 attached to check node j (not including bit i), then bit i must be 0 to satisfy the even-parity constraint.

probablty that the check node j s satsfed gven c = 0 s gven as follows usng (4) and (5): 1 1 r (0) = + 2 2 (1 2Pr[ j c k k Rj~ = 1]) (6) Usng (6), we can substtute q kj (1) for Pr[c k = 1] as ths s the defnton of qj(1) and the product n (6) s over the set n whch q j (1) exsts. 1 1 r (0) = + 2 2 [1 2 j q kj k Rj~ (1)] (7) By usng the requred condton that r j (0)+r j (1)=1 and (7), r j (1) solves as: 1 1 r (1) = 2 2 [1 2 j q kj k Rj~ (1)] (8) Equatons (2), (3), (7) and (8) represent the set of messages passed between check and varable nodes as shown n Fgure 10 and 11. 3.5.2.2 Intalzaton It s apparent from equatons (2), (3) (7) and (8) that they are functons of the other. Ths s obvous from the fact that the message passng algorthm passes messages back and forth between check and varable nodes. Snce q b) = Pr[ c = b r ( b) C ~ j, y ] j ( j and P b) = Pr[ c = b y ], and snce ntally no messages have been passed before the ( teratons begn, these two equatons can be equated and message q j (b) at teraton 0 becomes: (22)

q_ij(b) = P_i(b)

where P_i(b) is found using equation (10) below. The final expression required for computation is the value of P_i(b). It can be found as follows:

P_i(b) = Pr[s_m = x | y_i],  with x = +sqrt(E_b) for b = 0 and x = -sqrt(E_b) for b = 1

By Bayes' theorem:

P_i(b) = Pr[y_i | s_m = x] * Pr[s_m = x] / Σ_{x' ∈ {±sqrt(E_b)}} Pr[y_i | s_m = x'] * Pr[s_m = x']    (9)

Making the assumption of equal a priori probabilities results in Pr[s_m = +sqrt(E_b)] = Pr[s_m = -sqrt(E_b)] = 0.5. Pr[y_i | s_m = x] is the value of the likelihood function evaluated at x. It is dependent on the channel parameters and modulation scheme. This simulation utilizes BPSK over an AWGN channel; therefore, the likelihood functions are given by f(y_i | s_m = x), x ∈ {±sqrt(E_b)}. By manipulating the above expression for P_i(b) with the substituted values evaluated using the likelihood functions as in Figure 9, P_i(b) can be solved as:

P_i(b) = 1 / (1 + e^(-4*x*y_i/N_0))    (10)

where x is the mean of the likelihood function corresponding to the bit value b (x ∈ {±sqrt(E_b)}, b being the bit mapped to x), y_i is the received signal value for the i-th bit, and N_0 is the single-sided noise power spectral density.

NOTE: The above equation has been derived for an AWGN channel with noise variance N_0/2 utilizing BPSK modulation, and is only valid for such a channel.

3.5.2.3 Soft Decision

Following each iteration, the series of parity checks is performed on the estimated codeword and the syndrome is computed (i.e. Hĉ = Syn), but thus far we have not established how this codeword is estimated. Pr[c_i = b | all check nodes satisfied, y_i] can be computed using the independence assumption and Bayes' theorem as below:

Q_i(b) = Pr[c_i = b | y_i, all check nodes satisfied]
       = Pr[all check nodes satisfied | c_i = b] * Pr[c_i = b | y_i] / Pr[all check nodes satisfied]

Q_i(b) = K_i * P_i(b) * ∏_{j ∈ C_i} r_ji(b)    (11)

where K_i is chosen such that Q_i(0) + Q_i(1) = 1 and P_i(b) is found from equation (10). The hard estimate of each bit is then:

ĉ_i = 1 if Q_i(1) > 0.5, and ĉ_i = 0 otherwise

Following computation of ĉ, the syndrome is calculated as:

Syn = Hĉ

If Syn = [0]_(n-k)x1, or the algorithm has reached its maximum number of iterations, the algorithm is exited; otherwise it proceeds to its next iteration. It is important to note that Syn = [0] does not guarantee that the estimated codeword is the correct codeword (that ĉ = c). It only indicates that the estimated codeword satisfies all parity checks (it is a legitimate codeword). A codeword that satisfies the syndrome (generates a zero vector) but is not the original codeword is an undetectable error. For large-blocklength codes with a good minimum distance, the probability of these undetectable errors is very low [3].

3.5.2.4 Simulation Computation

Thus far this paper has examined the computations required to calculate the variables used in the algorithm. The decoding algorithm itself is shown by the flowchart (Figure 12) below.

Figure 12: Flowchart for Decoding
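To make the update equations concrete, the following is a minimal, self-contained sketch of one probability-domain SPA pass as described above (equations (2), (3), (7), (8), (10) and (11)). It is a simplified illustration, not the optimized SPADecode routine of Appendix A.8; the Hamming (7,4) H from Section 2.3, the all-zero codeword and E_b = N_0 = 1 are assumptions chosen only to keep the example short.

    % Minimal probability-domain SPA sketch on a small code (illustration only).
    H  = [1 1 1 0 1 0 0; 0 1 1 1 0 1 0; 1 0 1 1 0 0 1];
    Eb = 1; No = 1;
    c  = zeros(1,7);                                  % all-zero codeword
    y  = sqrt(Eb)*(-1).^c + sqrt(No/2)*randn(1,7);    % BPSK over AWGN

    P1 = 1./(1 + exp(4*sqrt(Eb)*y/No));               % eq (10): Pr[c_i = 1 | y_i]
    q1 = H .* repmat(P1, size(H,1), 1);               % initialization: q_ij(1) = P_i(1)
    for iter = 1:10
        % check-to-variable messages, eqns (7)/(8)
        r0 = zeros(size(H)); r1 = zeros(size(H));
        for j = 1:size(H,1)
            idx = find(H(j,:));
            for i = idx
                others = idx(idx ~= i);
                r0(j,i) = 0.5 + 0.5*prod(1 - 2*q1(j, others));
                r1(j,i) = 1 - r0(j,i);
            end
        end
        % variable-to-check messages, eqns (2)/(3), and soft decision, eq (11)
        Q1 = zeros(1, size(H,2));
        for i = 1:size(H,2)
            idx = find(H(:,i))';
            for j = idx
                others = idx(idx ~= j);
                a0 = (1-P1(i)) * prod(r0(others, i));
                a1 = P1(i)     * prod(r1(others, i));
                q1(j,i) = a1/(a0 + a1);               % normalized q_ij(1)
            end
            b0 = (1-P1(i)) * prod(r0(idx, i));
            b1 = P1(i)     * prod(r1(idx, i));
            Q1(i) = b1/(b0 + b1);
        end
        chat = double(Q1 > 0.5);                      % hard estimate of the codeword
        if all(mod(H*chat', 2) == 0), break; end      % syndrome check: exit if Syn = 0
    end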

4.0 RESULTS

This paper has examined the LDPC error control coding scheme. In order to verify the proper simulation implementation of the coding scheme, results from David MacKay's Information Theory, Inference, and Learning Algorithms were used as a basis for comparison. Parameters used by MacKay [3, p.565] were:

Regular LDPC codes
Blocklengths of 96, 204, 408 and 816
Column weight (w_C) of 3
Rate 1/2 codes
13 iterations for the SPA decoder

Parameters utilized for this simulation were:

Regular LDPC codes
Blocklengths of 96, 204 and 408 (insufficient processing power to compute 816)
Column weight (w_C) of 3
Rate 1/2 codes
13 iterations for the SPA decoder

Figure 13: MacKay's Results [3, p.565]
Figure 14: Simulated Results

Figures 13 and 14 above respectively show MacKay's results and the results from this simulation. The simulation verifies MacKay's results within a reasonable tolerance. Section 2.2 discussed the Hamming (7,4) coding scheme. The figure below (Figure 15) compares the performance of the above simulation blocklengths versus Hamming (7,4).

Figure 15: Performance of simulations vs. Hamming (7,4) with Shannon's limit

The above figure shows a key benefit of utilizing LDPC coding. The sharp performance curves are apparent when comparing the performance of the LDPC codes versus that of Hamming (7,4), even for relatively low blocklengths. It can be seen that as the blocklength increases, so does the performance. In order to compare these two codes, the standard way to represent BER vs. SNR is E_b/N_0, where N_0 is the single-sided noise power spectral density and E_b is the average energy per message bit (not symbol bit). This scheme takes into account the added requirement of transmitting additional parity bits. The relationship can be computed as:

E_b/N_0 (dB) = 10*log10(E_s/(R*N_0))

or

E_b/N_0 (dB) = 10*log10(E_s/N_0) + 10*log10(1/R)

For a code rate of 1/2 (such as that used in simulation):

E_b/N_0 (dB) = 10*log10(E_s/N_0) + 10*log10(2)

or

E_b/N_0 (dB) = 10*log10(E_s/N_0) + 3.01 dB

This increase of 3.01 dB represents the increase in power required to transmit the additional parity bits and still maintain a constant ratio of message-bit energy to noise power spectral density.
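As a quick numerical check of the rate penalty described above (the value follows directly from the expression for a rate-1/2 code):

    % Rate penalty: Eb/No exceeds Es/No by 10*log10(1/R) dB.
    R = 0.5;
    penalty_dB = 10*log10(1/R)    % = 3.0103, the "3.01 dB" quoted above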

Re-examining Figure 15, it can be seen that at a target bit error rate (BER) of 10^-4 the blocklength of 408 results in approximately a 2 dB reduction in the E_b/N_0 requirement versus a blocklength of 96. Shannon's noisy coding theorem [1, p.133] has shown that as the blocklength tends towards infinity, the performance curve will approach Shannon's limit for a given code rate (see Section 2.1: Shannon Theorem for Channel Coding), which can be seen on the graph at 0.188 dB.

5.0 PROBLEMS ENCOUNTERED

Several problems were encountered in working on this project. This paper will briefly examine the problems encountered and the solutions developed.

Computational Time: The original implementation of the SPA decoder algorithm was an N^2 process. This process took an exceedingly large amount of processing time, even for low blocklengths of 96 bits. Without modifications, this paper would have been unable to accurately simulate results for the larger blocklengths, and would have been unable to compare simulations with MacKay's results. The solution used was to find both the row and column supports (R_j and C_i) while generating H in the MakeH routine. Using these in the SPA decoding algorithm reduced it to an order-N process. In addition, it also reduced the computation time of the generation routine (MakeH) from N^3 to N^2 while still performing the same function (generating H subject to column weight, regularity and cycle constraints).

Linear Independence: When generating the parity-check matrix randomly, it is difficult to guarantee linear independence of each row. The solution to this problem was one used by Arun Avudainayagam, a Master's student at the University of Florida [8].

It involved rearranging the columns of H to make the leftmost (n-k)x(n-k) sub-matrix of H linearly independent. Please refer to Section 3.4.1 for more detailed information.

Binary Matrix Inversion: When computing the generator matrix G, there is a requirement to invert the leftmost (n-k)x(n-k) sub-matrix of H. The matrix inversion routine integrated into Matlab was unable to perform a binary inversion over GF(2). The solution involved creating a routine to efficiently perform this inversion over GF(2). This was done utilizing Gauss-Jordan elimination. The exclusive-or function (XOR) in Matlab proved crucial to making this an efficient algorithm.

Regularity Control: The routine for generating H (as seen in Section 3.3) randomly places ones starting from column 1 to column N. Each row only allows w_R ones to be placed. In order to modify this regularity constraint, a tolerance variable is introduced. The value of the variable must be less than or equal to 1. When the value is 1, the routine generates a regular LDPC code. When the tolerance variable is less than 1, the routine allows up to one additional 1 per row when:

i > reltol * N

where i is the current column counter, reltol is the tolerance variable and N is the blocklength of the code being generated.

6.0 FUTURE WORK

Following completion of the project, several objectives for future work were proposed. These prospects for future study will be briefly examined.

6.1 INCREASE EFFICIENCY OF SIMULATION ALGORITHMS

The project successfully ran simulations of lower blocklengths of code (96, 204 and 408 bit codewords). Future research would branch into more efficient methods of simulating the performance of LDPC codes. This could be either devising a more efficient algorithm or developing an algorithm that utilizes distributed computing. The resulting product would provide an efficient way to simulate larger blocklengths and examine their performance.

6.2 LOWER MEMORY REQUIREMENTS OF THE PARITY-CHECK MATRIX

Utilize a different method to generate the parity-check matrix that leads to lower memory requirements and more practical implementation. Various researchers have proposed several methods. Examining such methods, or developing a new one, would be beneficial in proceeding to a practical implementation of the system.

6.3 VLSI IMPLEMENTATION

Following completion of lowering the memory requirements of the parity-check matrix, it would be natural to proceed to implementing LDPC in hardware. A practical system would allow other performance measurements to be determined, such as maximum transmission rate with various blocklengths, and the latency of the system.

7.0 CONCLUSION

Low-Density Parity-Check codes (LDPC) were discovered in the early 1960's by Robert Gallager. These codes had been largely forgotten after their invention until their rediscovery in the mid-nineties. This was due in part to the high complexity of decoding the messages and the lack of computing power when these codes were originally invented. LDPC codes are, along with turbo codes, currently the best performing channel coding schemes, as they can theoretically reach Shannon's limit and have been shown to come extremely close in simulations.

LDPC codes can be thought of as a generic term for a class of error-correcting codes distinguished from others by a very sparse parity-check matrix, H. LDPC performance improves as block length increases, so they can theoretically achieve Shannon's limit as block length goes to infinity. LDPC allows for the reliable transmission, or storage, of data in noisy environments. Even short codes provide a substantial coding gain over uncoded, or low-complexity coded, systems. These results allow for lower transmission power and transmission over noisier channels with the same, if not better, reliability.

This paper has provided a milestone in implementing such an error control coding scheme and has helped the project team to develop a good overall understanding of LDPC codes. The project provided an opportunity to use theoretical knowledge and write Matlab algorithms using both source code and Simulink modelling. Within an acceptable tolerance, the simulations succeeded in recreating results published in 2003 by MacKay [3] using identical parameters, meaning the implementation was successful.

Moving forward, LDPC codes will become a more viable coding scheme for practical implementation. The coding scheme has already been chosen as a standard for the DVB-S2 protocol used in digital video broadcasting [9] and is also used in some data storage applications. Overall, the decoding of LDPC codes has become a very hot topic of research over the past few years.

Real-time encoding will become the overall goal in terms of application integration. It is also worth noting that there are hardware implementations that are roughly 10^3 times faster than the current software implementations [1].

REFERENCES

[1] Guilloud, F., (2004). Generic Architecture for LDPC Codes Decoding. Paris: Telecom Paris.
[2] Gallager, R. G. Low-Density Parity-Check Codes. MIT Press, Cambridge, MA, 1963.
[3] MacKay, D. J. C., (2003). Information Theory, Inference, and Learning Algorithms. United Kingdom: Cambridge University Press.
[4] Shannon, C. E. A Mathematical Theory of Communication. Bell System Technical Journal, vol. 27, pp. 379-423, 623-656, July, October, 1948.
[5] Lucent Technologies. Information Theory: The Growth of System Capacity. http://www.lucent.com/minds/infotheory/what.html (viewed March 2006).
[6] Lin, S., & Costello, D. J. Jr. (2004). Error Control Coding (Second Edition). Upper Saddle River, New Jersey: Pearson Prentice Hall.
[7] Ryan, W. E., (2003). An Introduction to LDPC Codes. Arizona: University of Arizona.
[8] Arun Avudainayagam. Arun's Page (personal university project website). http://arun-10.tripod.com (viewed February 2006).
[9] Morello, A., Mignone, V., (2004). DVB-S2 Ready For Lift-Off. EBU Technical Review (October 2004).
[10] Ryan, W. E., (2001). An Introduction to LDPC Codes. Unpublished notes.
[11] Hu, X. Y., Eleftheriou, E., Arnold, D. M., Dholakia, A. (2001). Efficient Implementations of the Sum-Product Algorithm for Decoding LDPC Codes. IEEE.
[12] Huang, F., (1997). Evaluation of Soft Output Decoding for Turbo Codes. Virginia: Virginia Polytechnic Institute.
[13] Yeo, E., Nikolic, B., Anantharam, V., (2003). Iterative Decoder Architectures. IEEE Communications Magazine (Aug 2003, pp. 132-140).
[14] Andrews, K., Dolinar, S., Thorpe, J., (2005). Encoders for Block-Circulant LDPC Codes. California Institute of Technology. Submitted to ISIT, currently unpublished.

APPENDIX A: MATLAB CODE

A.1 MakeH.m

function [ParCheck,Rj,C]=MakeH(blocklength,nummessagebits,wc,reltol)
%Generates a Parity-Check Matrix H with given inputs:
%blocklength - N - length of codeword
%nummessagebits - k - length of message
%wc - column weight - # of 1's per column
%reltol - controls regularity -> 1 for regular LDPC generation, but may not ever finish
rows=blocklength-nummessagebits;
cols=blocklength;
wr=ceil(cols*wc/rows); %True all the time only if H is a regular LDPC code -> target for irregular LDPC
counter(1:rows,1:wr)=0;
rowcount(1:rows)=0;
%Generate H subject to constraints
for i=1:1:cols
    for k=1:1:wc
        common(1)=2;
        while(max(common)>=2)
            common(1:rows)=0;
            randnum=round(rand*(rows-1))+1;
            while(((rowcount(randnum)>=wr && i/cols<=reltol) || (rowcount(randnum)>wr && i/cols>reltol)) || length(find(counter(randnum,:)==i))>=1)
                randnum=round(rand*(rows-1))+1;
            end
            countertemp=counter(randnum,:);
            countertemp(rowcount(randnum)+1)=i;
            %Guaranteeing no length 4 cycles on the Tanner graph
            for j=1:1:rows
                if(j~=randnum)
                    for l=1:1:wr
                        if(length(find(counter(j,:)==countertemp(l)))>=1 && countertemp(l)~=0)
                            common(j)=common(j)+1;
                        end
                    end
                end
            end
        end
        %Valid bit location, write it!!!
        counter(randnum,rowcount(randnum)+1)=i;
        rowcount(randnum)=rowcount(randnum)+1;
        colcounter(i,k)=randnum;
        ParCheck(randnum,i)=1;
    end
    %Display current column
    disp(i);
end
Rj=counter;
C=colcounter;

A.2 ParseH.m

function [Gprime,newcol]=ParseH(mat1)
[rows,cols]=size(mat1);
%Column rearrangement to guarantee a non-singular matrix
temph=mat1;
for i=1:rows
    NewColPosition(i)=0;
end
%Performs Gauss-Jordan on a dummy variable to move columns of H to make
%sub-matrix X invertible
for i=1:1:rows
    if temph(i,i)==0
        for k=(i+1):1:cols
            if (temph(i,k)==1)
                spot=k;
                break;
            end
        end
        tempcol=temph(:,spot);
        temph(:,spot)=temph(:,i);
        temph(:,i)=tempcol;
        tempcol=mat1(:,spot);
        mat1(:,spot)=mat1(:,i);
        mat1(:,i)=tempcol;
        NewColPosition(i)=spot;
    end
    for j=1:1:rows
        if j~=i
            if temph(j,i)==1
                temph(j,:)=xor(temph(i,:),temph(j,:));
            end
        end
    end
end
%Reassign matrices to proper location
augmat(1:rows,1:rows)=mat1(1:rows,1:rows);
B(1:rows,1:(cols-rows))=mat1(1:rows,(rows+1):cols);
clear('mat1');
clear('temph');
newcol=NewColPosition;
%Augment identity matrix with square matrix
for i=1:1:rows
    for j=1:1:rows
        if(i==j)
            augmat(i,j+rows)=1;
        end
        if(i~=j)
            augmat(i,j+rows)=0;
        end
    end
end
%Begin GF2 inversion
for i=1:1:rows
    if(augmat(i,i)==0 && i~=rows)
        swflag=0;
        for k=i+1:1:rows
            if(augmat(k,i)==1)
                temp=augmat(i,:);
                augmat(i,:)=augmat(k,:);
                augmat(k,:)=temp;
                swflag=1;
                break;
            end
        end
        if(swflag==0 || (i==rows && augmat(rows,rows)==0))
            disp('Matrix was not invertible -> singular')
            done=0;
            break;
        end
    end
    for j=1:1:rows
        if(augmat(j,i)==1 && j~=i)
            augmat(j,:)=xor(augmat(i,:),augmat(j,:));
        end
    end
end
%Augment with identity matrix to create a full generation matrix
Ainv(1:rows,1:rows)=augmat(1:rows,(rows+1):2*rows);
Gprime=BinaryMultiply(Ainv,B);
for i=1:1:(cols-rows)
    for j=1:1:(cols-rows)
        if(i==j)
            Gprime(rows+i,j)=1;
        end
        if(i~=j)
            Gprime(rows+i,j)=0;
        end
    end
end
clear('augmat');

A.3 Run.m

function [SNR,BER]=Run(G,H,Rj,C,Iter,NewCol)
%G      Generator matrix
%H      Parity-check matrix
%Rj     Row support
%C      Column support
%Iter   Number of iterations for the decoder
%NewCol Column rearrangement for parity-check columns
SNR=[1 1.5 2 2.5 3]; %SNR vector -> change this when needed
No=1; %Noise power spectral density (single sided)
Rate=(size(H,2)-size(H,1))/size(H,2);
Amp=sqrt(No*Rate*10.^(SNR/10)); %Eb/No = 10log10(Amp^2/(Rate*No))
msgsize=48; %Message size
warning off MATLAB:divideByZero
done=0;
MaxErrors=msgsize*10;
var=No/2;
for i=1:1:length(SNR)
    BitErrors=0;
    numtime=0;
    while(BitErrors<MaxErrors) %Until a certain amount of bit errors
        message=round(rand(msgsize,1)); %Random message sequence
        Codeword=LDPCencode(G,NewCol,message); %Encode
        Tx=BPSKoverAWGN(Amp(i),Codeword,No); %Modulate
        [Rx,NumIt,Succ]=SPADecode(H,Rj,C,Tx,var,Iter,NewCol); %Decode
        BitErrors=BitErrors+sum(xor(Rx(:),message(:))); %Add bit errors to total
        if(mod(numtime,100)==0) %Display status of number of runs
            disp(numtime);
        end
        if (numtime>1000000) %Exit if running too long
            done=1;
            break;
        end
        numtime=numtime+1;
    end
    BER(i)=BitErrors/(numtime*msgsize); %Calculate BER
    if(done==1)
        break;
    end
end

A.4 LDPCencode.m

function [Codeword]=LDPCencode(G,NewColArrangement,Message)
%Encodes the given message
CodewordTemp=BinaryMultiply(G,Message); %Create codeword
rows=length(NewColArrangement);
%Perform position adjustments based on column rearrangement of H
for i=rows:-1:1
    if(NewColArrangement(i)~=0)
        TempBit=CodewordTemp(i);
        CodewordTemp(i)=CodewordTemp(NewColArrangement(i));
        CodewordTemp(NewColArrangement(i))=TempBit;
    end
end
Codeword=CodewordTemp;
clear('TempBit');
clear('CodewordTemp');

A.5 BinaryMultiply.m

function [result]=BinaryMultiply(mat1,mat2)
%Performs GF2 (binary) multiplication of 2 matrices
[row1,col1]=size(mat1);
[row2,col2]=size(mat2);
if(col1==row2)
    for i=1:1:row1
        for j=1:1:col2
            result(i,j)=mod(sum(and(mat1(i,:),transpose(mat2(:,j)))),2);
        end
    end
end
if(col1~=row2)
    disp('Error: matrices cannot be multiplied')
end

A.6 BPSKoverAWGN.m

function [ReceivedWord]=BPSKoverAWGN(Amp,Codeword,No)
%Modulates a codeword with BPSK and passes it over an AWGN channel
for i=1:1:length(Codeword)
    ReceivedWord(i)=Amp*(-1).^Codeword(i)+AWGN(No);
end

A.7 AWGN.m

function [Noise]=AWGN(No)
%Returns Gaussian noise of variance No/2 (Box-Muller method)
x1=rand;
x2=rand;
y1=sqrt(-2*log(x1))*cos(2*pi*x2);
Noise=sqrt(No/2)*y1;

A.8 SPADecode.m

function [RxMessage,NumIterationsPerformed,Success]=SPADecode(H,Rj,C,Codeword,Variance,MaxIterations,newcol)
%This function performs the sum-product algorithm for decoding the
%received vector. Assumes H is a sparse binary matrix (H in expanded form).
%Also assumes BPSK for channel modulation: 0 => +Amp, 1 => -Amp
sizeofh=size(H);
rows=sizeofh(1); %Number of rows of the parity check = number of parity bits
cols=sizeofh(2); %Number of cols of the parity check = length of codeword
var=Variance;
Success=0;
factor=1;
factor1=1;
%Initialization
for i=1:1:cols
    for k=1:1:size(C,2)
        P(C(i,k),i)=1/(1+exp(-2*Codeword(i)/var));
        P(C(i,k),i)=1-P(C(i,k),i); %Pr[c_i = 1 | y_i], equation (10)
        qij0(C(i,k),i)=1-P(C(i,k),i);
        qij1(C(i,k),i)=P(C(i,k),i);
    end
end
%SPA routine
for count=1:1:MaxIterations
    %Calculate messages passed from check to variable nodes (equations (7),(8))
    for j=1:1:rows
        for kp=1:1:size(Rj,2)
            if(Rj(j,kp)~=0)
                temp=0.5;
                for k=1:1:size(Rj,2)
                    if(Rj(j,k)~=0 && Rj(j,k)~=Rj(j,kp))
                        temp=temp*(1-2*qij1(j,Rj(j,k)));
                    end
                end
                rji0(j,Rj(j,kp))=0.5+temp;
                rji1(j,Rj(j,kp))=1-rji0(j,Rj(j,kp));
            end
        end
    end
    %Calculate messages passed from variable to check nodes (equations (2),(3))
    for i=1:1:cols
        for kp=1:1:size(C,2)
            temp0=1;
            temp1=1;
            for k=1:1:size(C,2)
                if(C(i,k)~=C(i,kp))
                    temp0=temp0*rji0(C(i,k),i);
                    temp1=temp1*rji1(C(i,k),i);
                end
            end
            temp0=temp0*(1-P(C(i,kp),i));
            temp1=temp1*P(C(i,kp),i);
            factor1(i)=temp0+temp1;
            temp0=temp0/factor1(i);
            temp1=temp1/factor1(i);
            qij0(C(i,kp),i)=temp0;
            qij1(C(i,kp),i)=temp1;
        end
    end
    %Make soft decision -> calculate estimated codeword (equation (11))
    for i=1:1:cols
        temp0=1;
        temp1=1;
        for k=1:1:size(C,2)
            temp0=temp0*rji0(C(i,k),i);
            temp1=temp1*rji1(C(i,k),i);
        end
        temp0=temp0*(1-P(C(i,1),i));
        temp1=temp1*(P(C(i,1),i));
        factor(i)=temp0+temp1;
        temp0=temp0/factor(i);
        temp1=temp1/factor(i);
        Q(i)=temp1;
        if Q(i)>0.5
            CodeEst(i)=1;
        else
            CodeEst(i)=0;
        end
    end
    %Check to see if all parity checks are satisfied
    val=sum(BinaryMultiply(CodeEst,transpose(H)));
    if(val==0 && sum(CodeEst)~=0)
        NumIterationsPerformed=count;
        Success=1;
        break;
    end
end
%If not successful
if(Success==0)
    NumIterationsPerformed=MaxIterations;
end
%Get estimated message from estimated codeword
RxMessage=GetMessage(CodeEst,newcol,cols-rows);

A.9 GetMessage.m

function [Message]=GetMessage(Codeword,NewColArrangement,MessageLength)
%Returns the message from the codeword
for i=1:1:MessageLength
    if(NewColArrangement(i)~=0)
        TempBit=Codeword(i);
        Codeword(i)=Codeword(NewColArrangement(i));
        Codeword(NewColArrangement(i))=TempBit;
    end
end
Message(1:MessageLength)=Codeword((length(Codeword)-MessageLength)+1:length(Codeword));

A.10 MAP.m (used to generate the likelihood functions in Section 3.5.1)

Eb=1;
No=1;
x1=-5:.01:2;
x2=-2:0.01:5;
y1=1/sqrt(2*pi*No/2).*exp(-(x1-(-sqrt(Eb))).^2/No);
y2=1/sqrt(2*pi*No/2).*exp(-(x2-(sqrt(Eb))).^2/No);
plot(x1,y1,x2,y2)

APPENDIX B: SIMULINK MODEL