Chapter - 7 Adaptive Channel Equalization
7.1 Introduction

The transmission of digital information over a communication channel causes Inter Symbol Interference (ISI) [1] among the data symbols. In other words, the channel distorts the signal, and the resulting ISI causes errors when we attempt to recover the data. It is well known that the channel characteristics contribute considerably to this disturbance. It is therefore appropriate to assume that the channel, modeled here as a nonlinear system, is unknown to the receiver that must recover the digital information. In such a case, the problem is to design a correcting system which, when cascaded with the original system, produces an output that is almost free from the distortion caused by the channel, thereby yielding a replica of the desired transmitted signal. In digital communication systems, such a corrective system is called an adaptive equalizer. In the general context of linear system theory, the corrective system is called an inverse system, because it has a frequency response that is essentially the reciprocal of the frequency response of the system that caused the distortion. Furthermore, since the distorting system yields an output x(n) that is the convolution of the binary input s(n) with the nonlinear impulse response, plus the additive noise q(n), the inverse operation that takes x(n) and reproduces s(n) is called deconvolution. Adaptive equalizers find extensive use in all types of communication systems. They are used in telephone lines operating at speeds up to 16.8 kbits/s and in voice-band modem based communication [2,3]. In overseas channels, ISI is caused by multipath transmission, and equalizers are used there for multipath compensation [4]. In radio communication channels, arrays of equalizers are used [5,6] to compensate for the effects of multipath transmission.
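The inverse-system (deconvolution) idea above can be illustrated with a small sketch. For simplicity the sketch assumes a purely linear FIR channel; the channel taps h, the equalizer length L and the delay d below are illustrative choices, not values from this chapter. A least-squares inverse filter is computed so that the cascade of channel and equalizer approximates a pure delay.

```python
import numpy as np

# Illustrative linear FIR channel (an assumption for this sketch only;
# the chapter's channels are nonlinear).
h = np.array([1.0, 0.5, 0.25])
L, d = 11, 5                         # equalizer length and target delay (assumed)

# Convolution matrix H such that H @ g == np.convolve(h, g)
n = L + len(h) - 1
H = np.zeros((n, L))
for i in range(L):
    H[i:i + len(h), i] = h

# Desired combined response: a unit impulse delayed by d samples
target = np.zeros(n)
target[d] = 1.0

# Least-squares inverse system g
g, *_ = np.linalg.lstsq(H, target, rcond=None)
combined = np.convolve(h, g)
print(np.round(combined[d], 3))      # main tap of channel*equalizer, near 1
```

Cascading this g after the channel nearly restores a delayed copy of the input, which is exactly the deconvolution role the equalizer plays.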
In this chapter we have undertaken the problem of adaptive channel equalization specifically for nonlinear channels. The conventional equalizers reported in the literature assume the channel to be linear. Equalizers for a linear channel are relatively easy to implement with linear adaptive algorithms such as LMS, RLS, etc. [7,8]. If the channel happens to be nonlinear, a linear equalizer model cannot completely compensate the ISI generated by the channel. In recent years ANN technology has played an impressive role in the design of nonlinear systems using the BP algorithm [9-11]. The solution of nonlinear problems becomes easier with ANN technology because of the inherent nonlinearity present in each neuron. In a recent paper [12], a multilayer ANN has been used to develop an adaptive equalizer for nonlinear channels, and it has been reported that the performance of such an equalizer is superior to that of other adaptive equalizers. Based on this notion, we have extended the use of ANNs to design and develop digital channel equalizers for nonlinear channels using a single layer functional expansion based adaptive neural network. In recent years single layer functional expansion based neural networks have gained popularity in many practical situations [13] because of their reduced complexity and lower cost compared to the multilayer perceptron technique.

7.2 Adaptive Channel Equalizers

The schematic of an adaptive channel equalizer is depicted in Fig. 7.1. The input binary message s(n) is passed through the nonlinear channel, whose output is added to white additive noise q(n) to produce the received signal x(n). The communication channel has thus been modeled as a nonlinear system followed by additive white noise. The received signal x(n) is a distorted version of the transmitted signal s(n), affected by ISI and the additive noise. The purpose of the equalizer is to reconstruct the transmitted message faithfully at the receiver. The received signal is passed through a tap delay filter to produce the input vector values x(n), x(n-1), x(n-2), ..., x(n-M+1) for an M-tap delay filter, to be used as input to the equalizer.
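The tap delay filter feeding the equalizer can be sketched as below; the helper name and the zero initial condition for samples before the start of the record are our assumptions.

```python
import numpy as np

def equalizer_input(x, n, M):
    """Return the vector [x(n), x(n-1), ..., x(n-M+1)] for an M-tap delay line.

    Samples before the start of the record are taken as zero (an assumed
    convention; the text does not specify the initial condition).
    """
    return np.array([x[n - i] if n - i >= 0 else 0.0 for i in range(M)])

x = np.array([0.9, -1.1, 1.05, -0.95, 1.0])   # toy received samples
print(equalizer_input(x, 3, 3))               # [x(3), x(2), x(1)]
```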
The equalizer produces an output y(n), which is then compared with ya(n), a delayed version of the transmitted signal s(n). The resulting difference is known as the error e(n). The training signal is generated by delaying the message by a number of clock cycles equal to half the order of the equalizer. The knowledge of the error signal and of the received signal vector at the equalizer input is used to adjust the equalizer weights with the help of some training algorithm. The training of the equalizer weights continues with the application of successive received signal samples, and is stopped when the mean square error (MSE) falls below a prescribed value. The design of the equalizer is achieved when the MSE settles to a steady state, as the steady state weights then remain constant. The schematic of an ANN based equalizer is shown in Fig. 7.2.

Fig. 7.1 An adaptive equalizer
Fig. 7.2 An ANN based channel equalizer
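The training loop just described can be sketched as follows, using plain LMS as a stand-in for "some training algorithm" and a linear stand-in channel; the step size mu, delay d, window length and MSE threshold are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
M, d, mu = 8, 4, 0.05                    # taps, training delay, step size (assumed)
s = rng.choice([-1.0, 1.0], size=5000)   # binary transmitted message
h = np.array([1.0, 0.4])                 # stand-in linear channel for the sketch
x = np.convolve(s, h)[:len(s)] + 0.01 * rng.standard_normal(len(s))

w = np.zeros(M)                          # equalizer weights
mse, window = [], []
for n in range(M, len(s)):
    u = x[n - M + 1:n + 1][::-1]         # input vector [x(n), ..., x(n-M+1)]
    e = s[n - d] - w @ u                 # error against the delayed message
    w += mu * e * u                      # LMS weight update
    window.append(e * e)
    if len(window) == 100:               # running MSE over blocks of 100 samples
        mse.append(float(np.mean(window)))
        window = []
        if mse[-1] < 1e-2:               # stop once MSE is below the prescribed value
            break
```

Training halts as soon as the windowed MSE drops below the prescribed threshold, after which the weights are frozen at their steady-state values.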
This figure is identical to Fig. 7.1 except that the adaptive equalizer of Fig. 7.1 is replaced by an ANN structure, and the learning algorithm is either the BP algorithm or a variation of it. In [9], adaptive equalization using the ANN technique has been reported; the structure used there is a multilayer ANN.

7.3 The proposed efficient ANN equalizer structures

In this chapter we have proposed three different adaptive equalizers for nonlinear channels using the ANN technique and have compared their relative performances:
(i) Multilayer ANN based equalizer
(ii) Single layer single neuron based equalizer
(iii) Single layer functional expansion based equalizer

7.3.1 Multilayer ANN based equalizer

Fig. 7.3 depicts a multilayer ANN based equalizer. As a nonlinearity is attached to every node of a multilayer network, the entire network is capable of forming arbitrarily complex decision regions in the pattern space. However, increasing the number of layers and the number of neurons also increases the complexity of the overall network.
Fig. 7.3 A multilayer ANN based adaptive channel equalizer

Consider a multilayer network with two layers, having M nodes in the input layer, N nodes in the hidden layer and one node in the output layer. At time n the input vector to the ANN is given by

X(n) = [x(n), x(n-1), ..., x(n-M+1)]^T    ...(7.1)

The input layer consists of the feedforward as well as the feedback taps, as shown in Fig. 7.3. Corresponding to the N nodes of the hidden layer, the output vector of the hidden layer can be given as

H(n) = [h_1(n), h_2(n), ..., h_N(n)]^T    ...(7.2)

where

h_j(n) = f( \sum_{i=1}^{M} w_{ij} x(n-i+1) + w_{bj} ),  j = 1, 2, ..., N    ...(7.3)

Here w_{bj} is the bias weight with a unity input magnitude, and f(·) is the sigmoidal function. The final output of the network is given by

y(n) = f( \sum_{j=1}^{N} w_j h_j(n) + w_b )    ...(7.4)

This output is in non-binary form. To reconstruct the binary form of the transmitted message s(n), the output of the equalizer is passed through a hard limiter. The training sequence ya(n) is derived from the transmitted message s(n) by delaying it by d samples; thus ya(n) = s(n-d). The delay d is usually set to half the number of input taps. Here ya(n) serves as the desired output at instant n. The learning of the connecting weights of the different layers as well as of the bias weights is based on the sequential BP learning described in Section 2.6.2 of Chapter 2. The learning characteristics and the Bit Error Rate (BER) are two important features which describe the characteristics and performance of the equalizer. The training and BER evaluation are dealt with under the computer simulation studies.

7.3.2 Single layer single neuron network based channel equalizer

A single layer single neuron network based channel equalizer is shown in Fig. 7.4. In this type of equalizer only one neuron is used. Comparing it with a standard LMS based equalizer, we may observe that, except for the presence of a sigmoidal
nonlinearity, the two equalizers are the same. Compared with the multilayer ANN based equalizer, the single neuron based equalizer is structurally very simple [14] and much more economical. The received signal is passed through an M-tap feedforward filter. There is also one feedback tap at the input, along with the bias input. These inputs are weighted by the connecting weights and the bias weight and passed through a single neuron to produce the output of the equalizer. The learning of the weights is based on the adaptive learning algorithm for single layer networks outlined in Section 2.5.1 of Chapter 2.

Fig. 7.4 A single layer single neuron network based adaptive channel equalizer

7.3.3 Single layer with functional expansion based equalizer

A single layer functional expansion based equalizer is shown in Fig. 7.5. The input to this equalizer is identical to that of Fig. 7.4. In this case also there is one neuron, but all the inputs are functionally expanded by trigonometric expansions to increase the number of inputs from M+1 to 2(M+1) [15-17]. The output of the equalizer is compared with the training signal to generate the error signal
e(n). The ANN algorithm described in Section 2.5.4 is used to train the connecting and bias weights. After learning is complete, the transmitted signal is reconstructed by passing the output of the equalizer through the hard limiter, as shown in Fig. 7.5. The training of this equalizer and its performance are assessed in the next section.

Fig. 7.5 An ANN based functional expansion adaptive channel equalizer

7.4 Computer Simulation

A computer simulation study has been carried out to assess the performances of the three proposed structures described in the previous section. Three typical nonlinear channels, given in Eqs. (7.5), (7.6) and (7.7), are used in the simulation study.

x(n) = s(n) + [\tanh s(n)]^2 + [s(n)]^3 + q(n)    ...(7.5)

x(n) = 0.1 s(n) + 0.3 [s(n)]^2 + 0.1 [s(n)]^3 + q(n)    ...(7.6)

x(n) = 0.5 s(n) + [s(n)]^3 + 0.5 + q(n)    ...(7.7)
where s(n) is the transmitted binary signal at the n-th instant and q(n) is the additive noise.

(a) Generation of input data and additive noise

Zero mean binary random inputs are generated and used as serial inputs to the nonlinear channels. The output of each channel is computed from the channel characteristics given in Eqs. (7.5), (7.6) and (7.7). The input signal power to the channel is unity, since the channel input is binary in nature. The channel parameters are normalized in such a way that the signal power at the channel output is also maintained at unity. To this unit-power signal, zero mean white noise of different decibel strengths is added to generate the received signal. The additive noise used for channel 1 and channel 3 is -20 dB each, whereas for channel 2 it is -30 dB, for all three types of proposed equalizers.

(b) Multilayer ANN based equalizer

The structure of Fig. 7.3 was simulated with five feedforward taps and one feedback tap. The proposed structure is a 3-layer structure with six inputs (five feedforward and one feedback), one neuron each in the first and second hidden layers and one in the output layer. The learning characteristics, showing the relationship between the MSE in dB and the number of iterations for channels 1, 2 and 3, are plotted in Figs. 7.6(a), (b) and (c) respectively. From these curves it is noted that the MSE settles at -30 dB, -30 dB and -40 dB at around 200, 1200 and 200 iterations for channels 1, 2 and 3 respectively.

(c) Single layer single neuron based equalizer

The equalizer of Fig. 7.4 was used for simulation. This structure also contains five feedforward taps and one feedback tap at the input layer, and there is only one neuron. The learning characteristics obtained from the simulation of channels 1, 2 and 3 are shown in Figs. 7.7(a), (b) and (c) respectively. It is observed
that the MSE reaches the steady state at -28 dB, -25 dB and -40 dB at about 300, 800 and 300 iterations for channels 1, 2 and 3 respectively.

(d) Single layer functional expansion based equalizer

The scheme of Fig. 7.5 was used for simulation. As in the other equalizer structures, six input taps (five feedforward and one feedback) were considered. They were further expanded to twelve terms, namely {x1, sin πx1, x2, sin πx2, ..., x6, sin πx6}. The learning characteristics for the three channels are displayed in Figs. 7.8(a), (b) and (c) respectively. The MSE floor is reached at about -35 dB, -30 dB and -60 dB at around 200, 200 and 300 iterations for channels 1, 2 and 3 respectively.

Table 7.1 provides a comprehensive summary of the learning characteristics of the three equalizers for the three channels. In this table the MSE floor of each equalizer for each of the three nonlinear channels is given in dB; "No. of iterations" means the number of iterations taken to reach the MSE floor.

Table 7.1 A comparative study of learning characteristics for three equalizers for three channels

                                        Channel 1           Channel 2           Channel 3
Type of proposed equalizer          MSE floor  Iter.    MSE floor  Iter.    MSE floor  Iter.
Multilayer                             -30      200        -30     1200        -40      200
Single layer single neuron             -28      300        -25      800        -40      300
Single layer functional expansion      -35      200        -30      200        -60      300
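The trigonometric functional expansion used above can be sketched as below; the helper name trig_expand is ours, not from the text.

```python
import numpy as np

def trig_expand(u):
    """Expand [u1, ..., uK] to [u1, sin(pi*u1), ..., uK, sin(pi*uK)],
    doubling the number of equalizer inputs from K to 2K."""
    out = []
    for ui in u:
        out.extend([ui, np.sin(np.pi * ui)])
    return np.array(out)

u = np.array([0.5, -0.25, 1.0])   # toy input vector
z = trig_expand(u)
print(len(z))                     # 2*(M+1) terms: 6
```

The expanded vector feeds the single neuron, giving the network nonlinear discriminating power without adding layers.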
Fig. 7.6 Learning characteristics (MSE in dB vs. iterations) of three channels using the multilayer based equalizer: (a) channel 1, (b) channel 2, (c) channel 3

Fig. 7.7 Learning characteristics (MSE in dB vs. iterations) of three channels using the single layer single neuron based equalizer: (a) channel 1, (b) channel 2, (c) channel 3
Fig. 7.8 Learning characteristics (MSE in dB vs. iterations) of three channels using the single layer functional expansion based equalizer: (a) channel 1, (b) channel 2, (c) channel 3

(e) Performance study of the three equalizers

To compare the relative performances of the different equalizer structures, the Bit Error Rate (BER) characteristics are plotted. This study proceeds as follows. After the learning is complete, the connecting weights and bias weights are fixed at their final, steady state values, and the equalizer models are constructed. Binary test patterns are generated and passed sequentially through the channels as well as the equalizers. At least 10^5 random bits are passed through each channel. Corresponding to each transmitted bit, the output of the equalizer is obtained by passing the received signal through the constructed equalizer model. The transmitted bits and the reconstructed equalizer outputs are compared over all 10^5 transmitted bits, the number of mismatched bits is counted, and log10 of the probability of error is estimated for a given signal to noise ratio (SNR). The relation between the probability of error and the SNR in dB is plotted for the three equalizer structures and the different nonlinear channels. The BER graphs are shown in Figs. 7.9(a), (b) and (c) for channels 1, 2 and 3 respectively. The BER is an important characteristic of a channel equalizer which reveals its performance. On
comparing the BER graphs displayed in Fig. 7.9(a) for channel 1, it is observed that the functional expansion based equalizer offers superior performance. The same trend is observed for channel 2. However, for channel 3 the single neuron based equalizer offers superior BER performance.

Fig. 7.9(a) Bit Error Rates (BER) of the three equalizers for channel 1 (probability of error vs. SNR in dB)
Fig. 7.9(b) Bit Error Rates (BER) of the three equalizers for channel 2 (probability of error vs. SNR in dB)
Fig. 7.9(c) Bit Error Rates (BER) of the three equalizers for channel 3 (probability of error vs. SNR in dB)
Fig. 7.9 A comparison of the performances of the three equalizers for the three channels

7.5 Conclusion

From the performance comparison it is observed that, in general, ANN based equalizers offer encouraging performance for nonlinear channels, as all three structures achieve low BER under different SNR conditions. Comparing the individual equalizers, it is further observed that for two of the channels the single layer functional expansion based equalizer performs better, whereas for the third the single layer single neuron structure performs better. Comparing computational and structural complexity, the single layer single neuron structure is the simplest. Thus, weighing complexity, cost and performance together, the present study recommends the single neuron based equalizer for use with different nonlinear channels.
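The BER evaluation of Section 7.4(e) can be sketched as follows. Since the trained weights depend on a particular run, the sketch uses a trivial identity "equalized channel" with additive noise as a placeholder; only the bit-counting and log10 error-probability machinery is the point here, and the SNR value is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10**5                                  # number of random test bits
s = rng.choice([-1.0, 1.0], size=N)        # transmitted bits

snr_db = 10.0                              # example SNR (unit signal power assumed)
sigma = 10 ** (-snr_db / 20)               # noise standard deviation
y = s + sigma * rng.standard_normal(N)     # placeholder equalized output

s_hat = np.sign(y)                         # hard limiter decision
errors = int(np.count_nonzero(s_hat != s)) # mismatched bits
p_err = max(errors, 1) / N                 # guard against log10(0)
print("log10(BER) =", round(float(np.log10(p_err)), 2))
```

Repeating this count over a range of SNR values, with the actual trained equalizer models in place of the placeholder, yields the curves of Fig. 7.9.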
References

[1] G. Panda, J. K. Satpathy and S. K. Patra, "Development of new neural adaptive equalizers and their performance comparison with existing techniques", Journal of IETE, vol. 42, no. 4&5, pp. 237-254, July-October 1996.
[2] J. G. Proakis, Digital Communications, McGraw-Hill, New York, 1983.
[3] C. A. Siller, Jr., "Multipath propagation", IEEE Communications Magazine, vol. 22, pp. 6-15, February 1984.
[4] G. D. Forney, Jr., R. G. Gallager, G. R. Lang, F. M. Longstaff and S. U. Qureshi, "Efficient modulation for band-limited channels", IEEE Journal on Selected Areas in Communications, vol. SAC-2, pp. 632-647, September 1984.
[5] D. M. Brady, "An adaptive coherent diversity receiver for data transmission through dispersive media", Proc. 1970 IEEE Int. Conf. on Communications, pp. 21-35 to 21-39, June 1970.
[6] P. Monsen, "MMSE equalization for interference on fading diversity channels", IEEE Trans. Commun., vol. COM-32, pp. 5-12, January 1984.
[7] B. Widrow and M. E. Hoff, Jr., "Adaptive switching circuits", IRE WESCON Conv. Rec., pt. 4, pp. 96-104, August 1960.
[8] S. U. H. Qureshi, "Adaptive equalization", Proc. IEEE, vol. 73, no. 9, pp. 1349-1387, September 1985.
[9] D. R. Hush and B. G. Horne, "Progress in supervised neural networks", IEEE Signal Processing Magazine, pp. 8-39, January 1993.
[10] R. P. Lippmann, "An introduction to computing with neural networks", IEEE ASSP Magazine, pp. 4-25, April 1987.
[11] D. E. Rumelhart and J. L. McClelland, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1: Foundations, MIT Press, Cambridge, MA, 1986.
[12] G. J. Gibson, S. Siu and C. F. N. Cowan, "Multilayer perceptron structures applied to adaptive equalizers for data communications", Proc. IEEE ICASSP, Glasgow, pp. 1183-1186, May 1989.
[13] Y. H. Pao, Adaptive Pattern Recognition and Neural Networks, Addison-Wesley, Reading, MA, 1989, ch. 8, pp. 197-222.
[14] S. Siu, G. J. Gibson, C. F. N. Cowan and P. M. Grant, "Decision feedback equalisation using neural network structures and performance comparison with standard architecture", IEE Proceedings, pt. I, vol. 137, no. 4, pp. 221-225, August 1990.
[15] C. L. Giles and T. Maxwell, "Learning, invariance and generalization in higher order neural networks", Applied Optics, vol. 26, pp. 4972-4978, 1987.
[16] Jagdish C. Patra and Ranendra N. Pal, "A functional link artificial neural network for adaptive channel equalization", IEEE Trans. on Signal Processing, vol. 43, no. 2, pp. 181-195, May 1995.
[17] B. Widrow and M. A. Lehr, "30 years of adaptive neural networks: perceptron, madaline, and backpropagation", Proc. IEEE, vol. 78, no. 9, pp. 1415-1442, September 1990.