
Circular Trellis based Low Density Parity Check Codes

A thesis presented to the faculty of the Russ College of Engineering and Technology of Ohio University

In partial fulfillment of the requirements for the degree Master of Science

Irina Aniţei

November 2008

© 2008 Irina Aniţei. All Rights Reserved.

This thesis titled Circular Trellis based Low Density Parity Check Codes by IRINA ANIŢEI has been approved for the School of Electrical Engineering and Computer Science and the Russ College of Engineering and Technology by

Jeffrey C. Dill
Professor of Electrical Engineering and Computer Science

Dennis Irwin
Dean, Russ College of Engineering and Technology

ABSTRACT

ANIŢEI, IRINA, M.S., November 2008, Electrical Engineering

Circular Trellis based Low Density Parity Check Codes (75 pp.)

Director of Thesis: Jeffrey C. Dill

Tail-biting circular trellis block codes (TBC)², used along with iterative Maximum A-Posteriori (MAP) decoders, achieve performance very close to the Shannon limit. A Low Density Parity Check (LDPC) code using a Sum Product Algorithm (SPA) decoder is also known to achieve comparable performance. In this work the performance of the (TBC)² encoder used with an SPA decoder is presented. The goal of this research is to compare the performance of the (TBC)² encoder with different iterative decoders. In order to use the SPA for decoding, a parity check (H) matrix representation of the (TBC)² code is developed. It is shown that for small block lengths this H matrix achieves comparable performance. For larger block sizes the H matrix representation of the (TBC)² encoder is found to be non-optimal for SPA decoding and the performance of the code is degraded.

Approved: Jeffrey C. Dill, Professor of Electrical Engineering and Computer Science

ACKNOWLEDGMENTS

It is my pleasure to thank the people who helped me through these years. Without their support and encouragement this work would not have been possible. Firstly, I thank my academic advisor, Dr. Jeff Dill, who shared with me a lot of his expertise and research insight. With his enthusiasm and his talent to explain things clearly and simply, he helped me gain the knowledge and courage to complete this work. I thank Dr. David Matolak, who set an example for me through his outstanding teaching style. His choice of course materials and homework has provided me with an in-depth understanding of wireless communications concepts. I would also like to thank Dr. Razvan Bunescu and Dr. Dinh van Huynh for their time, patience and positive attitude. I am very grateful to my colleague and friend Kamal Ganti for his constant help and good collaboration. I deeply appreciate his advice and availability for answering my questions at all times during the past three years. I am thankful to the Department of Electrical Engineering for providing me with financial assistance throughout my time as a graduate student. In recognition of all their encouragement and help, I thank my friends who always stood by my side: Cerasela and George Caia, Iulia Tomescu, Steven Huang, Sumit Bhattacharya, and Indranil Sen. I also thank Iulian Clapa for all his help. Most importantly, I want to express my endless gratitude to the people who helped me to do my best in all matters of life, my family: Teodora and Stefan Aniţei, Magdalena, Mara, Smaranda and Costel Oprea. I dedicate this thesis to them.

TABLE OF CONTENTS

Page
ABSTRACT ... 3
ACKNOWLEDGMENTS ... 4
LIST OF TABLES ... 7
LIST OF FIGURES ... 8
CHAPTER 1: INTRODUCTION ... 10
1.1 Motivation ... 10
1.2 Background Information ... 11
1.2.1 Digital Communication Systems and Error Correction Codes ... 12
1.2.2 LDPC Codes ... 14
1.2.3 Turbo Codes ... 15
1.3 Organization of the Thesis ... 16
CHAPTER 2: LDPC CODES AND TURBO CODES OVERVIEW ... 17
2.1 LDPC Codes ... 17
2.1.1 Representation of LDPC Codes ... 17
2.1.2 Properties of LDPC Codes ... 19
2.2 LDPC Encoder ... 20
2.2.1 Encoding using the G Matrix ... 20
2.2.2 Encoding Using the H Matrix ... 21
2.3 LDPC Decoder ... 22
2.4 Performance of LDPC Codes ... 23
2.5 Turbo Codes Encoder ... 25
2.5.1 Description of Parallel Concatenated Encoder ... 25
2.5.2 Interleavers ... 26
2.6 Turbo Codes Decoder ... 27
2.6.1 Decoding Process ... 27
2.6.2 Decoding Algorithm ... 29

2.7 Performance of Turbo Codes ... 30
2.8 Comparison of Performance of Turbo Codes and LDPC Codes ... 31
CHAPTER 3: CIRCULAR TRELLIS BASED CODES USING PARITY CHECK MATRIX ... 33
3.1 (TBC)² Encoder ... 33
3.1.1 Properties of (TBC)² ... 33
3.1.2 Advantages of (TBC)² ... 35
3.2 H Matrix Representation of (TBC)² ... 37
3.3 SP Algorithm ... 42
3.4 Parallel Concatenated Encoder - SPA Decoder ... 46
3.5 Turbo SPA Decoder ... 47
3.5.1 Parallel SPA Decoders with Interleaving ... 47
3.5.2 Parallel Gallager SPA Decoders ... 51
CHAPTER 4: RESULTS AND DISCUSSIONS ... 53
4.1 Results for Designing an H Matrix for (TBC)² ... 53
4.1.1 H Matrix for Systematic Codes ... 54
4.1.2 H Matrix for Nonsystematic Codes ... 63
4.1.3 H Matrix Representation of Two Parallel Concatenated (TBC)² ... 65
4.2 Turbo Encoder - Turbo SPA Decoder ... 67
4.2.1 Results for Parallel SPA Decoders with Interleaving ... 67
4.2.2 Results for Parallel Gallager SPA Decoders ... 69
REFERENCES ... 73

LIST OF TABLES

Page
Table 1 - Example of state table for a 16-state trellis [17] ... 34
Table 2 - Number of 4-length loops of the H matrix ... 41
Table 3 - Performance for the initial H matrix ... 54
Table 4 - Results for the improved H matrix ... 55
Table 5 - Results for the improved H matrix using a different systematic table ... 58
Table 6 - The results for our H matrix using a different systematic table ... 60
Table 7 - Results for our H matrix for a punctured code of rate 2/3 ... 61
Table 8 - Results for our H matrix for a punctured code of rate 4/5 ... 62
Table 9 - The results for the H matrix using a non-systematic table ... 64
Table 10 - Results for the H matrix representation of two PC-(TBC)² encoders ... 65
Table 11 - Results for the H matrix representation of two PC-(TBC)² using a symmetrical interleaver ... 66

LIST OF FIGURES

Page
Figure 1 - Digital communication system basic diagram ... 12
Figure 2 - a) H matrix; b) Tanner's Graph ... 18
Figure 3 - Examples of 4-length cycles ... 19
Figure 4 - Performance of LDPC codes [21] ... 24
Figure 5 - Diagram of Turbo Codes Encoder ... 26
Figure 6 - Turbo Decoder diagram ... 28
Figure 7 - Performance of Turbo Codes [21] ... 31
Figure 8 - The performance of LDPC and Turbo Codes [21] ... 32
Figure 9 - Butterfly structure [17] ... 35
Figure 10 - Performance of (TBC)² Encoder [17] ... 36
Figure 11 - The path through the trellis of a minimum weight codeword ... 38
Figure 12 - The matrix representation of 6-length loops ... 41
Figure 13 - The graphical representation of 6-length loops ... 41
Figure 14 - Information exchange between v-nodes and c-nodes ... 43
Figure 15 - Information exchange between c-nodes and v-nodes ... 43
Figure 16 - Flow chart of the Sum Product decoding algorithm ... 45
Figure 17 - General flow chart of our Matlab code ... 46
Figure 18 - Diagram of an LDPC-Turbo system ... 47
Figure 19 - Turbo-SPA decoder flow chart ... 50
Figure 20 - Parallel Gallager codes [13] ... 51
Figure 21 - Performance comparison of LDPC and PCGC [22] ... 52

Figure 22 - Performance of our code using the initial H matrix ... 54
Figure 23 - The performance of our code using the improved H matrix ... 56
Figure 24 - Comparative performance of the initial H matrix and the improved H matrix ... 58
Figure 25 - The performance of the H matrix using different systematic symbol table 2 ... 59
Figure 26 - The performance of our code using a different H matrix ... 60
Figure 27 - The performance of the H matrix for a punctured code of rate 2/3 ... 61
Figure 28 - The performance of the H matrix for a punctured code of rate 4/5 ... 62
Figure 29 - The performance of our code using a non-systematic table ... 64
Figure 30 - The performance of our code using the H matrix representation of two PC-(TBC)² encoders ... 65
Figure 31 - The comparative performance of different codes ... 68
Figure 32 - The comparative performance of PG and LDPC codes ... 69
Figure 33 - Comparative performances for our [16x32] H matrix ... 70
Figure 34 - Performance of our H matrix having zero 4-length cycles ... 71

CHAPTER 1: INTRODUCTION

One of the major challenges of contemporary times is to find a reliable way of communicating information. Shannon's work in the 1940s gave birth to the field of information theory and error control codes. An appreciable amount of work was done on error control codes over the next few decades, and by the late 1980s the research in this area had matured. However, the improvement in performance was limited by the ability of the decoder. With the discovery of Turbo codes and LDPC codes in the 1990s, a powerful and efficient decoding algorithm was found [1]. This work invigorated the coding community and fuelled the current wave of research in error control codes and iterative decoding algorithms. In the present chapter, we provide the motivation for this research in the context of past work on this topic. Also, a discussion of the basic concepts used in this research is included. This chapter ends with an overview of the thesis.

1.1 Motivation

LDPC codes and Turbo Codes both use iterative decoding [1]. There have been many independent studies on Turbo Codes and LDPC codes in the past. Combinations of the two are also known, as Parallel Concatenated Gallager codes [13], Turbo-like LDPC codes, or decoding Turbo Codes based on their parity check matrices [14].

The purpose of this research is to compare the performance of Turbo Codes and LDPC codes using the same encoder but different decoders. The performance of the Turbo Code which uses a tail-biting circular trellis block code (TBC)² encoder has been investigated in previous research [3]-[6], [17]. The research presented in this thesis focuses on two main ideas. Firstly, our aim is to study the performance of the (TBC)² encoder using an LDPC (Sum Product Algorithm - SPA) decoder instead of the Turbo MAP decoder. This innovative idea implies finding good H matrix representations of the (TBC)² encoder. Secondly, the goal is to verify that a Parallel Concatenated Encoder represented by an H matrix and a Turbo SPA decoder will give the same consistent performance as a Turbo Code, given the same code rate and block length.

1.2 Background Information

In the following section we introduce the reader to some of the basic concepts necessary to understand the present work. Initially, we introduce the digital communication system concept and the place of error correcting codes in this context. We continue our introduction by discussing error correction codes: their use and importance, the block diagram, the types of forward error correction (FEC) codes and their performance. Further, for completeness, the history of Turbo Codes and LDPC codes is presented.

1.2.1 Digital Communication Systems and Error Correction Codes

Digital communication systems are communication systems which transmit the encoded information in digital form [1]. The use of digital communication systems is justified by the data processing options and the resilience attained compared with analog transmission. The simplified block diagram of a communication system contains a transmitter side, a channel and a receiver side [2], as shown in Figure 1. The transmitter contains an Encoder, the place where the data is represented as a member of a finite code or message set [1]. The data is transmitted through a channel where noise is added before it is fed to the receiver. Here, the decoder has the role of decoding the encoded data so that the original transmitted information can be retrieved.

Figure 1 - Digital communication system basic diagram (transmitter, channel with added noise, receiver)

The primary function of an ideal digital communication system is to transmit the information to the receiver with as little degradation as possible. At the same time, the

digital communication system should be efficient in terms of transmitted energy and bandwidth. The metric considered to ascertain the quality of a digital communication system is the bit error rate (BER), or probability of bit error (P_b) [2]. Therefore, to achieve the characteristics described above, error correction codes are used to encode and decode the information. Error correction codes are used in a digital communication system in order to improve the performance of the system. This is realized by adding redundancy, which enables the transmitted signal to be resistant to different channel effects such as noise, interference or fading [2]. In the general diagram (Figure 1) of a communication system the error correcting codes are labeled channel coding. The types of error correction codes are:

1. Block codes
2. Convolutional & Trellis codes [1]

For Block codes the two important characteristics are the code rate and the block size. Each segment of data contains a fixed number of m bits, and each segment is encoded and, respectively, decoded one block at a time. The encoder adds the parity bits, which are the redundancy, so that the output segment is n bits, with n > m. The code rate is then calculated as the ratio of input bits to output bits, in our case R = m/n [1]. Depending on whether or not the original input sequence appears in the output of the encoder we have systematic codes or nonsystematic codes, respectively. In the case of systematic codes the input sequence is not altered in the output of the encoder [2]. Another important metric in defining the capability of an error correcting code is the

minimum distance d between two codewords [1]. This is defined as the smallest value of the set of Hamming distances for the specific code. The error correcting capability t is given by the formula:

t = ⌊(d − 1)/2⌋ (1.1)

As we can notice, minimum distance and error correcting capability are directly proportional to each other [1]. A coding scheme that has memory, such as a convolutional code, can be referred to as a trellis code. Such a trellis code, when combined with modulation to achieve error-correction performance without increasing the bandwidth, is referred to as trellis coded modulation [1]. LDPC codes, Hamming codes and Golay codes are some examples of Block codes [1].

1.2.2 LDPC Codes

Initially proposed by Robert Gallager [15], LDPC (low density parity check) codes were not studied much for more than 35 years. One reason is that Reed-Solomon (RS) codes were invented in the same period and were more suitable for the applications developed at that time. Another reason was the high computational complexity required for LDPC codes compared to RS codes. In 1998 Richardson and Urbanke resuscitated the interest in LDPC codes, and in 1999 MacKay published his work on LDPC codes [16]. LDPC codes are essentially a type of block code, which means that the data is encoded and decoded in a block-by-block manner [16]. The encoder has the role of adding parity bits

to the input sequence, and the decoder detects and corrects the possible errors. The decoding algorithm is in the same class of algorithms as the Turbo Codes decoder algorithm. It is named the SPA (sum product algorithm) or MPA (message passing algorithm) [16]. It has been shown that LDPC codes provide very high decoder throughput [16]. LDPC codes can be represented using Tanner's graph or using the parity-check matrix, named the H matrix [17]. Also, from the point of view of the weight of the rows or columns of the H matrix, LDPC codes can be regular or irregular. An example of the performance of LDPC codes is given in the following chapters.

1.2.3 Turbo Codes

Invented by Berrou et al. in 1993 [1], Turbo Codes were the first error correcting codes which achieved increased data rates without increasing the transmitted power. Used in satellite and wireless communications, Turbo Codes offer high performance in terms of error correction and protection of data. A Turbo Code encoder contains two parallel concatenated encoders separated by an interleaver. At the decoder side, the turbo structure is kept by having two parallel iterative decoders separated by an interleaver and deinterleaver. This structure has the advantage of being capable of decoding much longer codes with a moderate degree of decoding algorithm complexity. The basic block diagram is presented in Chapter 2, where we discuss Turbo Codes in more detail and an example of Turbo Codes performance is presented.
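As a quick check of the rate and distance notions from Section 1.2.1, the following sketch (a hypothetical toy example, not a code from this thesis) enumerates the codewords of the (7,4) Hamming code and evaluates R = m/n and t = ⌊(d − 1)/2⌋:

```python
import numpy as np
from itertools import product

# Hypothetical example: the (7,4) Hamming code built from a standard-form
# generator G = [I P]. The specific P below is one valid choice.
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])  # standard form [I P]

# Enumerate all 2^4 codewords c = xG (mod 2).
codewords = [tuple(np.mod(np.array(x) @ G, 2)) for x in product([0, 1], repeat=4)]

# For a linear code, d_min equals the smallest nonzero codeword weight.
d_min = min(sum(c) for c in codewords if any(c))
t = (d_min - 1) // 2   # error correcting capability, formula (1.1)
R = 4 / 7              # code rate R = m/n

print(d_min, t, round(R, 3))  # -> 3 1 0.571
```

So this code corrects any single bit error per block, consistent with the direct proportionality between d and t noted above.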

1.3 Organization of the Thesis

This thesis is divided into five chapters. The first chapter is the Introduction, in which the motivation of our research is presented, followed by background information and the outline of the thesis. In Chapter 2, an overview of Turbo Codes and LDPC codes is given. The encoding and decoding algorithms are presented for both types of FEC codes and the performance of each is illustrated. In Chapter 3 we describe the base notions of our research: the (TBC)² encoder, the SPA algorithm, the H matrix representation of (TBC)², and the parallel concatenated encoder with one SPA decoder and with turbo SPA decoders. Chapter 4 is the results and discussions chapter. It contains the analytical and simulation results for the H matrix representation of the (TBC)² encoder using different methods. The thesis ends with Chapter 5, which concludes this work and summarizes the results obtained. Some suggestions and new ideas for future work are also briefly discussed in that chapter.

CHAPTER 2: LDPC CODES AND TURBO CODES OVERVIEW

In Chapter 2 an overview of LDPC codes and Turbo Codes is presented. The concepts described here are the fundamentals of our research. Both Turbo Codes and LDPC codes achieve performance very close to the Shannon limit. The primary reason for this performance is their iterative decoding algorithms. In order to optimize the performance of these decoding algorithms, specialized encoders were developed.

2.1 LDPC Codes

Low density parity check codes are a class of forward error correcting codes with the property that at higher rates, when using high-order modulations, they seem to have a distinct advantage over other forward error correction codes [16].

2.1.1 Representation of LDPC Codes

LDPC codes can be represented in two different ways. The two representations are interrelated and equivalent. One way of representing LDPC codes is using the parity check matrix, which in the literature is named the H-matrix [11]. The elements of the H-matrix are 0s and 1s. The density of 1s in the H-matrix should be very low in order for the decoder to give a good

performance. In this case the matrix is named a sparse H-matrix. From this point of view, LDPC codes can be classified into two categories: regular LDPC codes and irregular LDPC codes [12]. If the number of 1s in each row and in each column is constant then the LDPC code is regular; otherwise the LDPC code is irregular. The other way of representing LDPC codes is using Tanner Graphs [16]. Tanner Graphs and the H matrix are equivalent, and one can be derived from the other [12]. Tanner Graphs are bipartite graphs, and the constituent nodes are named variable nodes (v-nodes) and check nodes (c-nodes) [16]. For a better understanding we can consider the example illustrated in Figure 2.

Figure 2 - a) H matrix (a small binary example with entries 1 0 0 1 0 0 1 0 0 0 1 0 1 1 0 1 1 0 1 0 1 1 0 1); b) Tanner's Graph

If an element of the H matrix is 1, then there will be a corresponding edge in Tanner's graph. In the above example, the first element of the H matrix is 1, so there is an edge between the first v-node and the first c-node. Considering the second element of H, which has the value 0, notice that there is no edge between the first v-node and the second c-node in Tanner's Graph.

2.1.2 Properties of LDPC Codes

The first metric of interest is the density of the H matrix, which is given by the total number of 1s in each row and column of the matrix. From the literature we know that the best performance for a regular LDPC code is given by a (3, 6) structure [8]. That signifies that all variable nodes have degree 3 and all check nodes have degree 6. The randomness of the H matrix is an important factor. In order to obtain good performance, the rows of the parity check matrix need to be linearly independent. Also, a large minimum distance between codewords is required [15]. That means that the number of 4-length loops should be minimized. A 4-length loop is a path in Tanner's Graph that contains four edges with the property that the initial and the final node coincide [16]. Four-length loop examples are illustrated in Figure 3.

Figure 3 - Examples of 4-length cycles

The sparseness of the H matrix and the number of 4-length loops in an LDPC code have a drastic influence on the performance of the code.
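To make the Tanner-graph view concrete, here is a small sketch (using an illustrative H matrix, not one of the thesis matrices) that lists the graph's edges and counts 4-length loops by finding row pairs that share 1s in two or more columns:

```python
import numpy as np
from itertools import combinations

# Illustrative parity-check matrix (hypothetical, not the (TBC)^2 H matrix).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

# Tanner graph: H[c][v] == 1 puts an edge between check node c and variable node v.
edges = [(c, v) for c in range(H.shape[0]) for v in range(H.shape[1]) if H[c, v]]

# A 4-length loop exists whenever two rows (check nodes) share 1s in two or
# more columns; each such pair of shared columns closes one length-4 cycle.
def count_4cycles(H):
    n4 = 0
    for r1, r2 in combinations(range(H.shape[0]), 2):
        shared = int(np.sum(H[r1] & H[r2]))
        n4 += shared * (shared - 1) // 2
    return n4

print(len(edges), count_4cycles(H))  # -> 12 0
```

Every row pair here shares at most one column, so this particular H is free of 4-length loops, which is exactly the design property discussed above.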

There are a few design techniques used for LDPC codes. The approach used in our research is constructing a low density parity check matrix using the encoder for Turbo Codes. Obviously the goal is, using an efficient encoder and decoder, to obtain near-capacity performance and low error rate floors. Different design approaches give different names to LDPC codes. A few examples are: Gallager codes, MacKay codes, Irregular LDPC codes, Array codes, and Combinatorial LDPC codes [16].

2.2 LDPC Encoder

The goal of this subchapter is to describe the encoder for LDPC codes. In order to do so, we present two ways of encoding. The first method is to encode using the generator matrix of the code (the G matrix). Another way of encoding is using the parity check matrix of the code (the H matrix).

2.2.1 Encoding using the G Matrix

Consider C an [n,k] code having a generator matrix G and an input sequence x; one can encode it by applying the formula xG = c. In this equation c is the obtained codeword and consists of k systematic bits and n-k parity or redundant bits [20]. This holds when the G matrix is in the standard form [I P], where I is the [k x k] identity matrix and P is a

[k x (n-k)] matrix named the parity matrix. Usually systematic codes have their G matrices in standard form. If, however, the G matrix is not in the standard form, an equivalent code with a G matrix in standard form can be obtained by row permutations and manipulations [2]. Having the G matrix in the standard form facilitates obtaining the parity check matrix by applying the formula H = [P^T I].

2.2.2 Encoding Using the H Matrix

Another way of encoding an input sequence or message x is using the H matrix of the given code C. As mentioned before, if we consider the first k bits of x as being the systematic bits and the remaining c = n-k bits as being the parity check bits, our codeword can be written as:

x = [k c] (2.1)

Also, denote:

H = [A B] (2.2)

where A is an m x m matrix and B is an m x (n-m) matrix. In this scenario x is a valid codeword if the following constraint is true:

x H^T = 0 (2.3)

From equations (2.1), (2.2) and (2.3) we can conclude that:

Ac + Bx = 0 (2.4)

c = A^(-1) B x (2.5)
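A numerical sketch of these encoding relations, using a hypothetical [6,3] code (any standard-form G works): encode with G = [I P], form H = [P^T I] from the same parity matrix, and verify the codeword constraint x H^T = 0 (mod 2):

```python
import numpy as np

# Hypothetical small code for illustration (not from the thesis).
k, n = 3, 6
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
G = np.hstack([np.eye(k, dtype=int), P])        # generator, standard form [I P]
H = np.hstack([P.T, np.eye(n - k, dtype=int)])  # parity-check H = [P^T I]

x = np.array([1, 0, 1])          # message bits
c = np.mod(x @ G, 2)             # codeword: systematic bits followed by parity
syndrome = np.mod(c @ H.T, 2)    # all-zero for a valid codeword

print(c, syndrome)  # -> [1 0 1 0 1 1] [0 0 0]
```

Because G H^T = P + P = 0 (mod 2), every codeword produced by G automatically satisfies the constraint, which is why either matrix can serve as the encoder.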

Equation (2.5) gives us the means to calculate the check bits, with the condition that matrix A is non-singular. It is worth mentioning that for a systematic code A is the identity matrix.

2.3 LDPC Decoder

The decoding algorithm used in our research is named the Message Passing Algorithm or Sum Product Algorithm. The Message Passing Algorithm is an algorithm that iteratively computes the distributions of variables in a graph-based model [16]. In our case, the Message Passing Algorithm is based on Tanner's graph. In the Message Passing Algorithm, the a-posteriori probability (APP) that a given bit in the transmitted codeword c equals 1 given the received codeword y, Pr(c_i = 1 | y), is computed. The formula for the APP ratio [16] is:

l(c_i) = Pr(c_i = 0 | y) / Pr(c_i = 1 | y) (2.6)

Also, the log likelihood ratio can be defined, as in the following formula [2]:

L(c_i) = log [ Pr(c_i = 0 | y) / Pr(c_i = 1 | y) ] (2.7)

Initially the APP is computed from the received data and sent to the v-nodes. The probabilities contained in the v-nodes are sent to the adjacent c-nodes. The c-nodes

contain the so-called parity check equations, and the obtained result is named extrinsic information. The messages (extrinsic information) sent in iteration l will be denoted m_cv^(l) and m_vc^(l), respectively. The formula for computing the extrinsic information [18] is:

m_cv^(l) = ln [ (1 + ∏_{v' ∈ V_c \ {v}} tanh(m_v'c^(l) / 2)) / (1 − ∏_{v' ∈ V_c \ {v}} tanh(m_v'c^(l) / 2)) ] (2.8)

In formula (2.8), m_cv^(l) is the extrinsic information from the c-nodes to the v-nodes and m_v'c^(l) is the extrinsic information from the v-nodes to the c-nodes known from the previous iteration. In the next half iteration, the information contained in the c-nodes is sent to the v-nodes. The formula [18] used is:

m_vc^(l) = m_v, for l = 0
m_vc^(l) = m_v + ∑_{c' ∈ C_v \ {c}} m_c'v^(l−1), for l ≥ 1 (2.9)

At the end of each iteration (at the v-nodes), hard decisions are taken. The last step is to check whether the obtained word is a valid codeword (x H^T = 0). If the codeword is valid then the iterations are stopped; if not, the process continues.

2.4 Performance of LDPC Codes

In Figure 4 the performance of the two types of LDPC codes is presented [21]. The block length of the code is 10^6 bits, the code rate is 1/2 and the belief-propagation algorithm is used for decoding. Note that the distance from the Shannon limit for the irregular LDPC code is about 0.1 dB at P_b = 10^-5. The irregular LDPC code performs better than the (3,6) regular LDPC code because of the randomness in the irregular code.

Figure 4 - Performance of LDPC codes [21] (P_b versus Eb/No in dB for block length 10^6 bits and code rate 1/2: Shannon limit, optimized irregular LDPC, and (3,6) regular LDPC)
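The message-passing updates (2.8)-(2.9) of Section 2.3 can be sketched in a few lines. The parity-check matrix and channel LLRs below are illustrative assumptions, not the thesis's Matlab code; the check-node step uses the tanh rule and the loop stops once the hard decision satisfies x H^T = 0:

```python
import numpy as np

# Illustrative H and channel LLRs (assumed values, LLR > 0 favors bit 0 per (2.7)).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])
m_rows, n_cols = H.shape
llr_ch = np.array([2.1, -0.8, 1.3, 1.9, -1.5, 0.6])

m_vc = np.tile(llr_ch, (m_rows, 1)) * H   # l = 0: v->c messages are channel LLRs
m_cv = np.zeros_like(m_vc)

for _ in range(10):
    # Check-node update, formula (2.8): product of tanh over the other edges.
    for c in range(m_rows):
        vs = np.flatnonzero(H[c])
        for v in vs:
            others = [u for u in vs if u != v]
            prod = np.prod(np.tanh(m_vc[c, others] / 2))
            m_cv[c, v] = 2 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
    # Variable-node update, formula (2.9): channel LLR plus the other checks.
    for v in range(n_cols):
        cs = np.flatnonzero(H[:, v])
        for c in cs:
            others = [d for d in cs if d != c]
            m_vc[c, v] = llr_ch[v] + np.sum(m_cv[others, v])
    # Hard decision and stopping rule: x H^T = 0 (mod 2).
    total = llr_ch + np.array([np.sum(m_cv[np.flatnonzero(H[:, v]), v])
                               for v in range(n_cols)])
    x_hat = (total < 0).astype(int)
    if not np.mod(H @ x_hat, 2).any():
        break

print(x_hat)
```

The clip on the tanh product simply guards arctanh against overflow when all incoming messages are strongly saturated.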

2.5 Turbo Codes Encoder

Turbo Codes contain two key innovations: the parallel concatenated encoder and the iterative decoder [19]. These innovations are responsible for the superior performance of Turbo Codes.

2.5.1 Description of the Parallel Concatenated Encoder

The encoder of Turbo Codes is a parallel concatenated (PC) encoder. It is formed of two convolutional encoders separated by an interleaver. Typically, recursive systematic convolutional (RSC) codes are used for the convolutional encoding [7]. It is known that there exists an equivalent non-recursive convolutional code for every RSC code. The reason RSC codes are used for Turbo Code encoding lies in how input sequences are mapped onto output codewords: for an RSC code, low weight input sequences are mapped to high weight codewords, and it is this property that makes them perform well in Turbo Codes. The original Turbo Code used a rate 1/3 RSC code with generator matrices G1 [7]. In order to get a 1/2 code rate, puncturing was used. Usually both encoders used in the parallel concatenated structure are the same, but this need not be the case. The following figure (Figure 5) illustrates the block diagram of a parallel concatenated encoder.

[Figure: the input (systematic) bits feed encoder ENC1 directly, producing parity stream p1(i), and feed encoder ENC2 through an interleaver π, producing parity stream p2(π(i)).]

Figure 5 - Diagram of the Turbo Codes encoder

In the above figure, Encoder 1 (ENC1) and Encoder 2 (ENC2) are both recursive systematic convolutional encoders.

2.5.2 Interleavers

The interleaver used in Turbo Codes is different from the channel interleaver. Channel interleaving is used to avoid burst errors. The interleaver used in Turbo Codes serves two main purposes:

- it ensures that the output of one decoder is uncorrelated with the output of the other;

- if an input sequence generates a low weight codeword from the first encoder, the interleaver is responsible for ensuring that the interleaved input sequence does not generate a low weight codeword from the second encoder.

The size of the interleaver determines the performance of the Turbo Code. As the length of the interleaver increases, the block length of the code increases and better performance can be achieved. Commonly, a semi-random interleaver is used. The random properties of the interleaver provide decorrelation between the decoders. Careful design of semi-random interleavers is necessary in order to ensure that low weight codewords are avoided [17]. Note that a low weight input sequence is a sequence of small Hamming weight (4 or 5) and a low weight codeword is a codeword of small Hamming weight. The design of the interleaver is accountable for the performance of the Turbo Code at high SNRs. Low weight codewords are responsible for the error floor seen in Turbo Codes. By using a well designed interleaver, the minimum weight of the Turbo Code can be increased, thus delaying the appearance of the error floor [1].

2.6 Turbo Codes Decoder

Iterative decoders are used in Turbo Codes, and they can efficiently decode complex codes. Before their invention, complex encoders existed but were not used because decoding was prohibitively difficult.

2.6.1 Decoding Process

In this section the decoding process applied in Turbo Codes is presented. Figure 6 illustrates the block diagram of the Turbo Code decoder.

[Figure: received sequences y1 and y2 feed decoders DEC1 and DEC2, which exchange extrinsic information through an interleaver and de-interleaver and produce outputs op1 and op2.]

Figure 6 - Turbo decoder diagram

The input to Decoder 1 is the sequence of systematic bits, the output of Encoder 1, and the extrinsic information from Decoder 2. Similarly, the input to Decoder 2 is the sequence of systematic bits, the output of Encoder 2, and the extrinsic information from Decoder 1. Note that the extrinsic information exchanged between the decoders has to be interleaved/de-interleaved [19]. Decoder 1 (DEC1) and Decoder 2 (DEC2) are soft-input, soft-output decoders: the inputs to the decoders are probabilities and so are the outputs. After the first iteration, the output of Decoder 1 consists of intrinsic and extrinsic information. The intrinsic information is the information known before decoding and the extrinsic information is the information gained through the decoding process. This extrinsic information is exchanged between the two decoders. The iteration, or exchange of information between the two decoders, is continued until both decoders converge to the same codeword or the maximum number of iterations is reached. Typically, a soft-input, soft-output maximum a-posteriori probability decoder is used for decoding [7].

2.6.2 Decoding Algorithm

The classical algorithm used in Turbo Code decoders is the maximum a-posteriori (MAP) algorithm, also called the BCJR algorithm [7]. Consider, as described in Section 2.3, the log-likelihood a-posteriori probability given by formula (2.10):

L(c_k) = log [ Pr(c_k = +1 | y) / Pr(c_k = -1 | y) ]   (2.10)

where the decoder decides c_k = +1 if Pr(c_k = +1 | y) > Pr(c_k = -1 | y) and c_k = -1 otherwise. For a trellis code, considering as starting state (or previous state) the state s_{k-1} = a and as end state (or current state) s_k = b, the above formula can be written as:

L(c_k) = log [ Σ_{(a,b): c_k = +1} Pr(s_{k-1} = a, s_k = b, y) / Pr(y) ] − log [ Σ_{(a,b): c_k = -1} Pr(s_{k-1} = a, s_k = b, y) / Pr(y) ]   (2.11)

That means that, given the probability of the output y, we can find the probability of the input bit being +1 or -1. Looking at the previous formula, notice that after cancelling Pr(y) the remaining part to calculate is [7]:

Pr(s_{k-1} = a, s_k = b, y)   (2.12)

which can be written as in equation (2.13) [7]:

Pr(s_{k-1} = a, s_k = b, y) = α_{k-1}(a) · γ_k(a, b) · β_k(b)   (2.13)

Keeping in mind the above formula, the BCJR algorithm involves four basic steps:

1 - Calculate the forward metric α_k(a) = Pr(s_k = a, y); this is computed as:

α_k(b) = Σ_a α_{k-1}(a) · γ_k(a, b)   (2.14)

with the conditions α_0(0) = 1 and α_0(b ≠ 0) = 0 [7].

2 - Calculate the intermediate (branch) metric from state a to state b:

γ_k(a, b) = Pr(s_k = b, y_k | s_{k-1} = a)   (2.15)

3 - Calculate the reverse metric β_k(b):

β_{k-1}(a) = Σ_b γ_k(a, b) · β_k(b)   (2.16)

with the conditions β_n(0) = 1 and β_n(b ≠ 0) = 0 [7].

4 - Calculate the final bit probabilities from α_{k-1}(a) · γ_k(a, b) · β_k(b).

As explained previously in Section 2.6.1, in a Turbo Code two iterative MAP-based decoders are used.

2.7 Performance of Turbo Codes

In Figure 7 we have illustrated the performance of a Turbo Code [21] for a block length of 10^6 bits and code rate 1/2. The Turbo Code performance is about 0.3 dB away from the Shannon limit at P_b = 10^-5 for the same block size as our LDPC example. The encoder used is the classical Turbo Code encoder and the decoding algorithm is a belief-propagation algorithm.
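The four BCJR steps of Section 2.6.2 can be sketched as a forward/backward pass over precomputed branch metrics. The following is a Python sketch under the boundary conditions stated above (trellis starting and ending in state 0); the array layout and the name bcjr_pass are our own, not from the thesis:

```python
import numpy as np

def bcjr_pass(gamma):
    """Forward/backward recursions of the BCJR algorithm.
    gamma: (n, S, S) array, gamma[k][a][b] = Pr(s_k=b, y_k | s_{k-1}=a), eq. (2.15).
    Returns sigma[k][a][b] = alpha_{k-1}(a) * gamma_k(a,b) * beta_k(b),
    proportional to Pr(s_{k-1}=a, s_k=b, y) as in eq. (2.13)."""
    n, S, _ = gamma.shape
    alpha = np.zeros((n + 1, S)); alpha[0, 0] = 1.0   # alpha_0(0) = 1
    beta = np.zeros((n + 1, S)); beta[n, 0] = 1.0     # beta_n(0) = 1
    for k in range(1, n + 1):                         # step 1, eq. (2.14)
        alpha[k] = alpha[k - 1] @ gamma[k - 1]
    for k in range(n - 1, -1, -1):                    # step 3, eq. (2.16)
        beta[k] = gamma[k] @ beta[k + 1]
    # step 4: joint transition metrics alpha_{k-1}(a) * gamma_k(a,b) * beta_k(b)
    return alpha[:-1, :, None] * gamma * beta[1:, None, :]
```

Summing sigma[k] over all (a, b) gives Pr(y) for every trellis stage k, which is a convenient sanity check on an implementation.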

[Figure: P_b versus E_b/N_o (dB) for block length 10^6 bits and code rate 1/2, showing the Shannon limit and the Turbo Code curve.]

Figure 7 - Performance of Turbo Codes [21]

2.8 Comparison of Performance of Turbo Codes and LDPC Codes

In Figure 8 the comparative performance of the (3,6) regular LDPC code, the irregular LDPC code and the Turbo Code is presented. The performance of the irregular LDPC code is about 0.2 dB better than the Turbo Code performance for the same block length and code rate. From the literature [21], it is known that the performance of the irregular LDPC code is better than the Turbo Code performance for large block lengths.

[Figure: P_b versus E_b/N_o (dB) for block length 10^6 bits and code rate 1/2, showing the Shannon limit, the optimized irregular LDPC code, the (3,6) regular LDPC code and the Turbo Code.]

Figure 8 - The performance of LDPC and Turbo Codes [21]

CHAPTER 3: CIRCULAR TRELLIS BASED CODES USING PARITY CHECK MATRIX

In Chapter 3 we present the basic concepts and the novel ideas of this research. The chapter starts with a description of the (TBC)² encoder. It continues with a description of the H matrix representation of the (TBC)² encoder. Further, the SP algorithm used in our simulations is presented and a Parallel Concatenated encoder SPA decoder scenario is described. In the last part of the chapter the Turbo-SPA idea is presented.

3.1 (TBC)² Encoder

Tail-biting circular trellis block codes (TBC)² are a novel error control scheme used in severe jamming environments. The block length and code rate for these codes can be chosen dynamically to provide flexibility and robustness to a communication system [5]. These trellis codes can be completely described using their state and symbol tables.

3.1.1 Properties of (TBC)²

The properties of the (TBC)² encoder are given by its State Table and Symbol (Transmission) Table. In subsequent sections we discuss in detail the State Table and Symbol Table for the (TBC)² encoder. A state table indicates all possible state transitions in a trellis given the current state and input symbol. The state table for the (TBC)² is an S×n matrix, where S is the number of

states and n is the input alphabet size. The state table for the (TBC)² has the following properties:

Tail-biting: For the (TBC)² encoder, the trellis path for any given input sequence will have the same starting and ending state. This property of the table is known as tail-biting and makes the trellis circular. This tail-biting property of the state table alleviates the problem of the decoder knowing the starting and ending state of a received codeword [3].

Butterfly structure: The state table used in this research has S = 16 states with an alphabet size n = 4, as shown in Table 1. For this table, the states can be grouped into sets of 4-flies, where an n-fly is a group of n initial states that transit to the same n next states, as in Figure 9. This butterfly structure can be used to increase the free distance of the code [6].

Current State | Next State (input 1) | (input 2) | (input 3) | (input 4)
 1 |  1 |  2 |  4 | 10
 2 |  3 |  6 |  7 | 12
 3 |  4 | 10 |  1 |  2
 4 |  5 | 16 |  8 | 15
 5 |  6 |  3 | 12 |  7
 6 |  7 | 12 |  3 |  6
 7 |  8 | 15 |  5 | 16
 8 |  9 | 11 | 14 | 13
 9 | 10 |  4 |  2 |  1
10 | 11 |  9 | 13 | 14
11 | 12 |  7 |  6 |  3
12 | 13 | 14 | 11 |  9
13 | 14 | 13 |  9 | 11
14 | 15 |  8 | 16 |  5
15 | 16 |  5 | 15 |  8
16 |  2 |  1 | 10 |  4

Table 1 - Example of State table for 16-state trellis [17]
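For illustration, Table 1 can be stored as a lookup array and used to trace the trellis path for a sequence of input symbols. This is a Python sketch; the names NEXT_STATE and trellis_path are our own illustrative helpers, not thesis code:

```python
# Next-state table from Table 1 (rows = current state 1..16, columns = input 1..4).
NEXT_STATE = [
    [1, 2, 4, 10],  [3, 6, 7, 12],  [4, 10, 1, 2],  [5, 16, 8, 15],
    [6, 3, 12, 7],  [7, 12, 3, 6],  [8, 15, 5, 16], [9, 11, 14, 13],
    [10, 4, 2, 1],  [11, 9, 13, 14],[12, 7, 6, 3],  [13, 14, 11, 9],
    [14, 13, 9, 11],[15, 8, 16, 5], [16, 5, 15, 8], [2, 1, 10, 4],
]

def trellis_path(start_state, inputs):
    """Trace the state sequence for a sequence of input symbols (each in 1..4)."""
    path = [start_state]
    for sym in inputs:
        path.append(NEXT_STATE[path[-1] - 1][sym - 1])
    return path

print(trellis_path(1, [1, 2, 3]))  # [1, 1, 2, 7]
```

Note that each row of the table contains four distinct next states, which is what allows the states to be grouped into the 4-fly butterflies described above.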

[Figure: four initial states fanning out to the same four next states, forming a 4-fly.]

Figure 9 - Butterfly structure [17]

For a given input sequence, the state table is used to generate a unique path through the trellis. The symbol table is then used to map this trellis path onto a unique set of channel symbols. For the (TBC)², simplex symbol assignment is used to generate a symbol table that achieves larger-than-orthogonal minimum distance between codewords.

3.1.2 Advantages of (TBC)²

The main advantages of (TBC)² [5][6][17] are:

a) Near-Shannon-limit performance: The (TBC)² encoder, when used along with an iterative decoder, achieves performance approximately 1 dB away from theoretical limits. The iterative decoder used had two soft-input soft-output (SISO) maximum a-posteriori probability (MAP) decoders separated by an interleaver/de-interleaver.

b) Low latency: This code uses blocks of length ranging from 32 to 1024 bits and has very low latency when compared to block lengths of the order of 10,000 bits.

c) High adaptivity: These codes can have a code rate within the range 1/12 to 4/5 for block lengths of 32 to 1024 bits.

d) Efficient decoding: The iterative decoder used for these codes can be made highly parallel, thus further reducing the latency [5].

Figure 10 shows the performance of the 16-state (TBC)² encoder with a turbo decoder for code rate 1/12 and different block lengths [17].

[Figure: probability of frame error versus E_b/N_o (dB) for block lengths B = 32, 64, 128, 256 and 1024.]

Figure 10 - Performance of (TBC)² Encoder [17]

3.2 H Matrix Representation of (TBC)²

The main objective of this research is to generate a parity check (H) matrix for the (TBC)² that can be used in the sum product algorithm. In this subchapter the main steps to obtain a good H matrix representation of the encoder are presented.

Step 1: Generating the H matrix

The first step is to generate a sparse H matrix representation of the (TBC)². This was accomplished by first forming a generator matrix G, in standard form, for the code. The H matrix is then generated from the G matrix. The generator matrix G was formed by sending unit input sequences of the form e = [0 0 0 1 0 0] (1×B, with a single 1) to the (TBC)² and collecting the corresponding output codeword C_i. Each C_i forms a row of the G matrix, so that it is generated as:

G = [C_1; C_2; ...; C_n]

This G matrix is transformed to the standard form [I : P] using row manipulations, where I is the identity matrix (representing the systematic symbols) and P is the set of parity symbols. H is then obtained as:

H = [P^T : I]

where P^T is the transpose of the matrix formed by the parity bits and I is the identity matrix.
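The construction in Step 1 can be illustrated on a small systematic example. Below is a Python sketch that derives H = [P^T : I] from a G in standard form; the (7,4) Hamming generator is used as a stand-in, since the thesis applies the same construction to the (TBC)² generator matrix:

```python
import numpy as np

def parity_check_from_systematic(G):
    """Given G = [I | P] over GF(2), return H = [P^T | I] so that G @ H.T = 0 (mod 2)."""
    k, n = G.shape
    P = G[:, k:]                                    # parity part of the standard form
    return np.hstack([P.T, np.eye(n - k, dtype=int)])

# Stand-in example: the systematic (7,4) Hamming generator matrix.
G = np.hstack([np.eye(4, dtype=int),
               np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])])
H = parity_check_from_systematic(G)
print((G @ H.T) % 2)  # all zeros: every row of G satisfies every parity check
```

The key property being used is that [I | P] · [P^T | I]^T = P + P = 0 over GF(2), so any codeword generated by G is orthogonal to every row of H.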

However, the H matrix obtained using the above method is not sparse. As a consequence, the number of 4-length cycles is large, and this translates to poor performance at the decoder.

Step 2: Making the H matrix sparse

For the SPA to be executed in reasonable time, the number of branches connecting the check nodes and variable nodes should be small. This means that the H matrix has to be sparse. In order to make our H matrix sparse we use minimum weight input sequences. An input sequence that results in a minimum weight codeword is known as a minimum weight input sequence. By weight we are referring to the Hamming weight of the sequence. So, a minimum weight codeword typically has an input sequence of weight less than 6 and an overall codeword weight less than 20. It is known that for a trellis encoder there exist input sequences such that the trellis transitions deviate from the all-0 state and remerge with it after only a few transitions, as shown in Figure 11.

[Figure: a trellis path that leaves the all-0 state and remerges with it after a few transitions.]

Figure 11 - The path through the trellis of a minimum weight codeword

Such input sequences result in minimum weight codewords. An example to illustrate the above statement is described below. Consider the following rows of the H matrix:

- First row: [1 1 0 1 1 1 0 0 0 1 0 0]

- Second row: [1 0 1 1 1 1 0 0 1 0 0 0]

in which the first six bits are the parity bits and the last six bits are the systematic bits. Note that the Hamming weight of both rows is 6. By adding these two rows (modulo-2) we obtain [0 1 1 0 0 0 0 0 1 1 0 0], a row of weight 4. This example shows that we can do row operations on the H matrix to make the matrix sparser.

The algorithm used to generate the G matrix has the following logic:

for row = 1:k
    e = zeros(1, k);
    e(row) = 1;                    % unit input sequence
    codeword = TBC_encode(e);      % encode with the (TBC)^2 encoder
    G(row, :) = codeword;          % each codeword forms a row of G
end

In order to find a good H, an algorithm which searches for the combination of rows that gives a minimum weight codeword was written. The logic of the code is as follows:

minweight = Inf;
for row1 = 1:k-1
    for row2 = row1+1:k
        combine = mod(G(row1, :) + G(row2, :), 2);   % modulo-2 row combination
        if weight(combine) < minweight
            minweight = weight(combine);
        end
    end
end

For a given H matrix we search through all possible weight-2, weight-3 and weight-4 input sequences. Once a minimum weight sequence was found, we performed row operations on the H matrix so that the systematic bits matched the bits of the minimum weight sequence. These row operations make the H matrix sparse by reducing the weight of the parity symbols.
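The pairwise search above can be written as a short runnable routine. This is a Python transcription of the MATLAB-style logic (the function name best_pair is ours), checked here on the two example rows:

```python
def weight(row):
    """Hamming weight of a binary row."""
    return sum(row)

def best_pair(rows):
    """Search all row pairs; return (i, j, w) for the modulo-2 combination
    of rows i and j with the minimum Hamming weight w."""
    best = None
    for i in range(len(rows) - 1):
        for j in range(i + 1, len(rows)):
            combined = [a ^ b for a, b in zip(rows[i], rows[j])]
            w = weight(combined)
            if best is None or w < best[2]:
                best = (i, j, w)
    return best

rows = [
    [1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0],  # first example row, weight 6
    [1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0],  # second example row, weight 6
]
print(best_pair(rows))  # (0, 1, 4): the combined row has weight 4
```

The search is O(k^2) in the number of rows, which is affordable for the block lengths (up to 1024) considered in this research.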

For calculating the number of 4-length loops, the algorithm used has the following logic:

num_4 = 0;
for row1 = 1:k-1
    for row2 = row1+1:k
        location = find(H(row2, :) == 1);
        new_row = mod(H(row1, :) + H(row2, :), 2);
        location_new = find(new_row(location) == 1);
        if length(location_new) < length(location)
            % every pair of 1s shared by the two rows closes a 4-length loop
            num_4 = num_4 + floor((length(location) - length(location_new)) / 2);
        end
    end
end

Another, more organized way of reducing the density of the H matrix is to use a greedy algorithm. In this method, row manipulations are performed and the combination of rows that results in the lowest weight is chosen. For a t×n H matrix, the i-th row is added to the remaining t-1 rows and the weight of each combination is calculated. The combination of rows that results in the lowest weight is used to replace the i-th row. This method, when applied to all t rows, results in a sparser H matrix. We apply this technique multiple times until the density of the matrix remains constant. The resulting parity check matrix is the sparsest matrix obtainable this way. In Table 2 we present the results that show the gain obtained by applying the above row combining methods.
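The 4-length loop counts of the kind reported in Table 2 can equivalently be obtained from row-pair overlaps: two rows of H that share t ≥ 2 columns with 1s close t(t-1)/2 4-length loops in the Tanner graph. A Python sketch of this formulation (our own, not the thesis code):

```python
import numpy as np

def count_4_cycles(H):
    """Count 4-length loops: each pair of rows sharing t >= 2 one-columns
    contributes t*(t-1)/2 loops (choose 2 of the shared columns)."""
    overlaps = H @ H.T      # overlaps[i, j] = number of columns where rows i, j are both 1
    total = 0
    for i in range(H.shape[0] - 1):
        for j in range(i + 1, H.shape[0]):
            t = int(overlaps[i, j])
            total += t * (t - 1) // 2
    return total

H = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 0, 1, 1]])
print(count_4_cycles(H))  # 1: only rows 0 and 1 share two columns
```

This overlap formulation counts every pair of shared columns, whereas the floor-based pseudocode above approximates the count from the weight change after row addition.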

Initial H matrix:

Block length | Code Rate | Number of 1s | Number of 4-length loops
 128 | 1/2 |   8704 |   139232
 256 | 1/2 |  35072 |  1166912
 512 | 1/2 | 140288 |  9476480
1024 | 1/2 | 560128 | 76012288

H matrix after row combining:

Block length | Code Rate | Number of 1s | Number of 4-length loops
 128 | 1/2 | 1024 |  640
 256 | 1/2 | 2048 | 1280
 512 | 1/2 | 4096 | 2560
1024 | 1/2 | 8192 | 5120

Table 2 - Number of 4-length loops of the H matrix

From the table, we note that row combining gives a significant reduction in 4-length loops but is unable to eliminate all of them. We also researched the influence of the 6-length loops on the performance of the H matrix and tried to eliminate them. The pattern of a 6-length loop can be represented by matrices or corresponding Tanner graphs [12], as shown in the following figures.

[Figure: the submatrix pattern whose 1s form a 6-length loop.]

Figure 12 - The matrix representation of 6-length loops

[Figure: the Tanner graph cycle through three v-nodes and three c-nodes corresponding to a 6-length loop.]

Figure 13 - The graphical representation of 6-length loops

We tried searching for and eliminating the 6-length loop patterns but were not able to eliminate all of them. From the literature [16], we found that even with some 6-length loops present in the matrices the performance is not badly degraded. So, in our case too, we do not expect a small number of 6-length loops to be detrimental to performance.

3.3 SP Algorithm

The decoding algorithm used in our research is the Sum Product Algorithm (SPA). As discussed in previous sections, the Sum Product Algorithm is used to iteratively compute the distributions of variables in a graph-based model [16]. In our case, the SPA is based on the Tanner graph. The algorithm used in our Matlab simulation is described below. In the SPA, the a-posteriori probability (APP) that a given bit in the transmitted codeword c equals 1 given the received word y, Pr(c_i = 1 | y), is computed. The formula for the APP ratio is:

l(c_i) = Pr(c_i = 0 | y) / Pr(c_i = 1 | y)   (3.1)

Also, the log-likelihood ratio can be defined as:

L(c_i) = log [ Pr(c_i = 0 | y) / Pr(c_i = 1 | y) ]   (3.2)

Initially the APPs are computed from the received data (for an AWGN channel) using the formula:

L(q_ij) = L(c_i) = 2 y_i / σ²   (3.3)

where σ² is the noise variance. The graphical representation of a half iteration in the SPA is illustrated in Figure 14.

[Figure: messages flowing from the v-nodes to the adjacent c-nodes.]

Figure 14 - Information exchange between v-nodes and c-nodes

As shown in the figure, the probabilities contained in the v-nodes are sent to the adjacent c-nodes. The c-nodes contain the so-called parity check equations, and the obtained result is the extrinsic information. The formula for computing the extrinsic information [7] is:

L(r_ji) = 2 tanh⁻¹ [ Π_{i' ∈ V_j \ {i}} tanh( L(q_i'j) / 2 ) ]   (3.4)

where the L(q_i'j) are the log-likelihood ratios of all the branches from the v-nodes to the c-nodes excluding the i-th branch. In the next half iteration, the information contained in the c-nodes is sent to the v-nodes [16], as shown in Figure 15.

[Figure: messages flowing from the c-nodes back to the v-nodes.]

Figure 15 - Information exchange between c-nodes and v-nodes

The formula is:

L(q_ij) = L(c_i) + Σ_{j' ∈ C_i \ {j}} L(r_j'i)   (3.5)

where the L(r_j'i) are the log-likelihood ratios of all the branches from the c-nodes to the v-nodes excluding the j-th branch. Finally, for the next iteration the LLRs are updated:

L(Q_i) = L(c_i) + Σ_{j ∈ C_i} L(r_ji)   (3.6)

At the end of each iteration (at the v-nodes), hard decisions are taken:

ĉ_i = 1 if L(Q_i) < 0, else ĉ_i = 0   (3.7)

The last step is to check whether the obtained word is a valid codeword, i.e. whether ĉ·H^T = 0. If the codeword is valid, the iteration is stopped; else the process continues until the maximum number of iterations is reached. The SPA flow diagram used in our research is given in Figure 16.
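The hard decision (3.7) and the stopping test can be sketched as follows (a Python sketch; the function and variable names are ours, not the thesis' Matlab code):

```python
import numpy as np

def hard_decision(L_Q):
    """Eq. (3.7): decide ci = 1 when L(Qi) < 0, else ci = 0."""
    return (np.asarray(L_Q) < 0).astype(int)

def is_valid_codeword(c_hat, H):
    """Stopping test: all parity checks satisfied, i.e. c * H^T = 0 (mod 2)."""
    return not ((H @ c_hat) % 2).any()
```

A decoder loop calls these once per iteration and exits early as soon as is_valid_codeword returns True, which is what makes the average iteration count of the SPA much lower than the configured maximum at moderate SNR.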

[Flow chart: START → find H_ij = 1 → initialize q_ij with channel data → while loop < max_iter: calculate the probabilities from the v-nodes to the c-nodes (q_ij), compute the probabilities from the c-nodes to the v-nodes (r_ji), calculate the new q_ij from the old r_ji, calculate Q_i → if ĉ·H^T = 0, STOP.]

Figure 16 - Flow chart of the Sum Product decoding algorithm
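The two half-iteration updates in the flow chart, equations (3.4) and (3.5), together with the channel initialization (3.3), can be sketched per edge. This is a Python sketch of the update rules only (function names are ours, not the thesis' Matlab code):

```python
import math

def init_llr(y, sigma2):
    """Eq. (3.3): channel LLRs for an AWGN channel, L(ci) = 2*yi / sigma^2."""
    return [2.0 * yi / sigma2 for yi in y]

def check_update(q_others):
    """Eq. (3.4): L(rji) = 2*atanh( prod of tanh(L(q)/2) over the OTHER v->c branches
    of check node j, i.e. excluding the branch being updated )."""
    p = 1.0
    for q in q_others:
        p *= math.tanh(q / 2.0)
    return 2.0 * math.atanh(p)

def var_update(channel_llr, r_others):
    """Eq. (3.5): L(qij) = L(ci) + sum of L(rj'i) over the OTHER c->v branches."""
    return channel_llr + sum(r_others)
```

The caller is responsible for excluding the target edge from q_others and r_others; a full decoder simply applies these two functions to every edge of the Tanner graph in each half iteration.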

3.4 Parallel Concatenated Encoder SPA Decoder

In our work, we keep our code design as close as possible to the Turbo Codes design. In our simulation, we use an H matrix generated from a parallel concatenated encoder, similar to the one used in Turbo Codes, and one SPA decoder. Each constituent encoder of the parallel structure is a (TBC)²-based encoder. The method used to generate the H matrix from the parallel concatenated encoder is similar to the one described previously in Section 3.2. The motivation for using a parallel concatenated encoder to generate the H matrix is the use of the interleaver: it provides more randomness in the H matrix, which might improve performance. A general flow diagram of our Matlab code is presented in Figure 17.

[Flow chart: START → parallel concatenated encoder → generate H → for each SNR point up to the maximum: generate codewords, transmit them, run the SPA decoder and count errors until the maximum block error count is reached, then calculate P_b(i) → plot P_b vs. SNR → STOP.]

Figure 17 - General flow chart of our Matlab code
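The outer simulation loop of Figure 17, transmitting blocks at each SNR point until enough block errors are observed, can be sketched as follows. This is a Python sketch in which the encode/transmit/decode chain is abstracted as a Bernoulli block-error event; the name run_snr_point and the error threshold are illustrative, not thesis values:

```python
import random

def run_snr_point(block_error_prob, max_block_err=30, max_blocks=1000, seed=0):
    """Transmit blocks until max_block_err block errors are counted (or max_blocks
    blocks are sent), then estimate the block error rate, as in the inner loop of
    Figure 17. The channel plus decoder is modeled here as a coin flip per block."""
    rng = random.Random(seed)
    errors = blocks = 0
    while errors < max_block_err and blocks < max_blocks:
        blocks += 1
        if rng.random() < block_error_prob:  # stand-in for encode/Tx/SPA-decode/compare
            errors += 1
    return errors / blocks

print(run_snr_point(0.5))
```

Stopping on a fixed error count rather than a fixed block count keeps the relative accuracy of the estimated error rate roughly constant across SNR points, which is why the thesis simulations count a minimum number of codeword errors per E_b/N_o value.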

3.5 Turbo SPA Decoder

In this section the two kinds of Turbo decoders used in this research are presented. At the beginning of this section, the parallel SPA turbo decoders are discussed. Then the parallel Gallager SPA decoders are presented. The performance curves for both cases are presented in Chapter 4.

3.5.1 Parallel SPA Decoders with Interleaving

In the following figure the diagram of an LDPC-Turbo system is presented.

[Figure: two SPA decoders, DEC1 and DEC2, connected through an interleaver/de-interleaver and exchanging extrinsic information.]

Figure 18 - Diagram of an LDPC-Turbo system

In the LDPC-Turbo decoder, each decoder (DEC1 and DEC2) uses the SPA (sum product algorithm). As explained before, the SPA used in DEC1 and DEC2 contains the following steps:

1 - Calculate the initial probabilities, in other words, the information that comes from the channel into each decoder.

2 - Calculate the extrinsic information from the v-nodes to the c-nodes; denote it q_ij.

3 - Calculate the exchanged information from the c-nodes to the v-nodes; denote it r_ji.

4 - Calculate the output probabilities of DEC1 and DEC2, denoted by Q¹ and Q².

5 - Make hard decisions on the outputs Q¹ and Q² and obtain codewords as the outputs of DEC1 and DEC2.

Compare the output codewords of DEC1 and DEC2; if they are the same, then the decoding process is done. If the output codewords do not match, then the Turbo process starts. The LDPC-Turbo decoder consists of the following steps:

I - Calculate the extrinsic information from each decoder. Denoting the extrinsic information from DEC1 as extr1 = e¹, the formulae for it are:

e¹(0) = Q¹(0) − P¹(0)   (3.8)

e¹(1) = Q¹(1) − P¹(1)   (3.9)

Similarly, for DEC2 we have extr2 = e²:

e²(0) = Q²(0) − P²(0)   (3.10)

e²(1) = Q²(1) − P²(1)   (3.11)

II - Exchange the extrinsic information between DEC1 and DEC2. Looking at the figure, one can notice that after exchanging the extrinsic information, at the input of DEC1 we have the de-interleaved extr2:

ẽ²(0) = π⁻¹(e²(0)) and ẽ²(1) = π⁻¹(e²(1))   (3.12)

where P^k(·) denotes the APPs of decoder k and π(·) denotes interleaving. Similarly, at the input of DEC2 we have the interleaved extr1. Mathematically, we can write this as:

ẽ¹(0) = π(e¹(0)) and ẽ¹(1) = π(e¹(1))   (3.13)

III - The second iteration starts and the input of each decoder is now modified. At DEC1 we will have the de-interleaved extrinsic information from DEC2 and the initial information from ENC1. That is:

Z = extr2 + ENC1   (3.14)

The intrinsic data is given by the P's, so we can write:

Z¹(0) = π⁻¹(e²(0)) + P¹(0)
Z¹(1) = π⁻¹(e²(1)) + P¹(1)   (3.15)

At DEC2 we will have the interleaved extrinsic information from DEC1 and the initial information from ENC2. The mathematical expressions for this are given by the following formulae:

Z²(0) = π(e¹(0)) + P²(0)
Z²(1) = π(e¹(1)) + P²(1)   (3.16)

IV - Compare the output codewords. At this step, make hard decisions after decoding using the SP algorithm explained at the beginning, and if the obtained codewords are not the same, begin the third iteration,

which is similar to repeating step III. Keep in mind that in this case the P's will be replaced by the Z's. For our simulation, the Turbo-SPA decoder flow chart is illustrated in Figure 19.

[Flow chart: START → get channel information → while loop < max_iter: run SPA1, separate its extrinsic and intrinsic output, use the channel information and the extrinsic information from SPA1 for decoding in SPA2, separate the extrinsic output of SPA2, make hard decisions on the outputs of SPA1 and SPA2 → if op1 == op2, STOP.]

Figure 19 - Turbo-SPA decoder flow chart
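In log-likelihood form, the extrinsic separation and exchange of steps I and II reduce to a subtraction and a fixed permutation. The following Python sketch illustrates the structure (the thesis works with APP pairs per symbol; the LLR formulation and the function names are ours):

```python
def extrinsic(output_llr, input_llr):
    """Extrinsic = decoder output minus what the decoder was fed (eqs. 3.8-3.11)."""
    return [q - p for q, p in zip(output_llr, input_llr)]

def interleave(seq, perm):
    """out[i] = seq[perm[i]]: reorder extr1 before feeding DEC2 (eq. 3.13)."""
    return [seq[p] for p in perm]

def deinterleave(seq, perm):
    """Inverse permutation: recover the DEC1 ordering of extr2 (eq. 3.12)."""
    out = [None] * len(seq)
    for i, p in enumerate(perm):
        out[p] = seq[i]
    return out
```

Subtracting the intrinsic part before the exchange is what prevents each decoder from being fed back its own information, which would otherwise cause the iterations to reinforce earlier decisions instead of refining them.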

3.5.2 Parallel Gallager SPA Decoders

The difference between parallel Gallager decoders and the iterative decoder presented in Section 3.5.1 is that two different H matrices are used for the two decoders and no interleaver/de-interleaver is used. The diagram of the Parallel Concatenated Gallager Codes (PCGC) [13] is presented in Figure 20.

[Figure: the input feeds LDPC encoders ENC1 and ENC2, producing parity1 and parity2; at the receiver, LDPC decoders DEC1 and DEC2 exchange extrinsic information extr1 and extr2.]

Figure 20 - Parallel Gallager codes [13]

The encoding for the parallel Gallager codes is slightly different from that of parallel concatenated encoding. The main difference is that there is no interleaver and each encoder uses a different H matrix. In this encoder type the interleaver is avoided by using an appropriate H matrix for the second encoder. At the decoder this likewise translates to no interleaver/de-interleaver, and the two SP algorithms use two different H matrices. But, similar to the Turbo SPA decoders, only the extrinsic information of the systematic bits is exchanged between the two decoders. In the

results presented in Figure 21, the choice of the H matrices is such that the average minimum column weight is 2.667 [13]. That is because low weight gives good performance at low SNRs and high weight gives good performance at high SNRs [22]. The performance curve for the parallel Gallager code compared with an LDPC code is shown in the figure below [13].

[Figure: P_b versus E_b/N_o (dB) for the PCGC [2.67, 1920] and an LDPC code [2.67, 1920].]

Figure 21 - Performance comparison of LDPC and PCGC [22]

Figure 21 shows the performance comparison of an LDPC code and a PCGC for a block length of 1920 bits and code rate 1/2. Also, for the LDPC code, the minimum column weight of the H matrix is 2.67, as it is for the PCGC [22]. Notice that the coding gain for the PCGC at P_b = 10^-2 is about 0.5 dB more than that of the LDPC code for the same block size and code rate.

CHAPTER 4: RESULTS AND DISCUSSIONS

This chapter presents the results of our research. These results were obtained from simulations conducted using MATLAB. In these simulations, at least 30 codeword errors were counted for every value of E_b/N_o.

4.1 Results for Designing an H Matrix for (TBC)²

The results we present in this chapter show the performance of the H matrix representations of a (TBC)² encoder. The method used to generate these H matrices was presented in previous chapters (Section 3.2). In the following results, different systematic tables were used in order to obtain more randomness in the H matrix. Once the H matrix was obtained, techniques to eliminate the 4- and 6-length loops were used to make the H matrix sparse and improve performance.