International Journal of Scientific Research Engineering & Technology (IJSRET), ISSN 2278-0882, Volume 4, Issue 6, June 2015

A New Least Mean Squares Adaptive Algorithm over Distributed Networks Based on Incremental Strategy

Karen Der Avanesian, Mohammad Shams Esfand Abadi
Shahid Rajaee Teacher Training University, Faculty of Electrical and Computer Engineering, Tehran, Iran

Abstract
This paper applies a new least mean squares (LMS) adaptive algorithm, the circulantly weighted LMS (CLMS), to distributed networks based on the incremental strategy. The distributed CLMS (dclms) algorithm is optimized with respect to approximate a priori knowledge of the autocorrelation of the input signals from all nodes in the network. In comparison with dlms, the dclms adaptive algorithm has a faster convergence speed, especially for highly colored input signals. We demonstrate the good performance of dclms through several simulation results in distributed networks.

Keywords: Distributed networks, incremental strategy, adaptive algorithm, circulantly weighted LMS.

I. Introduction
The least mean squares (LMS) algorithm is used in many adaptive filter applications. Unfortunately, for highly colored inputs, the convergence speed of this algorithm may become extremely slow. The normalized LMS (NLMS) only partially alleviates this problem. The affine projection algorithm (APA) and recursive least squares (RLS) have high convergence speed for highly colored input data, but in comparison with the LMS and NLMS algorithms, RLS and APA have high computational complexity. Therefore, the main problem with ordinary adaptive algorithms is their sensitivity to the type of input signal. We are looking for an algorithm that has the following features at the same time [1]:
- High convergence speed for different input signals.
- Low steady-state mean square error (MSE).
- Low computational complexity.
The circulantly weighted LMS (CLMS) adaptive algorithm presented in [1] has these three features. The CLMS algorithm is based on modifying the update term of the LMS algorithm with a circulant matrix optimized with respect to some approximate a priori knowledge of the autocorrelation properties of the input to the adaptive filter. The convergence speed of CLMS is faster than that of the LMS adaptive algorithm, especially for highly colored input signals, while the computational complexity of CLMS is only slightly larger than that of LMS.
Distributed processing deals with the extraction of information from data collected at nodes that are distributed over a geographic area. For example, each node in a network could collect noisy observations related to a certain parameter of interest. The nodes would then interact with each other in a certain manner, as dictated by the network topology, in order to arrive at an estimate of the parameter. The objective is to arrive at an estimate that is as accurate as the one that would be obtained if each node had access to the information across the entire network. Obviously, the effectiveness of any distributed implementation will depend on the modes of cooperation that are allowed among the nodes [3]. In an incremental mode of cooperation, information flows in a sequential manner from one node to the adjacent node. This mode of operation requires a cyclic pattern of collaboration among the nodes, and it tends to require the least amount of communications and power [4]. Consensus implementations employ two time scales and function as follows. Assume the network is interested in estimating a certain parameter. Each node collects observations over a period of time and reaches an individual decision about the parameter. During this time, there is limited interaction among the nodes; the nodes act more like individual agents.
Following this initial stage, the nodes then combine their estimates through several consensus iterations; under suitable conditions, the estimates generally converge asymptotically to the desired (global) estimate of the parameter. The distributed LMS (dlms) algorithm was the first adaptive distributed network, presented in [4]. When the input signals of the nodes are highly colored, the convergence of dlms is slow. Other adaptive distributed networks, such as distributed APA (dapa) and distributed RLS (drls), have been proposed [1], but again these algorithms have a high computational load. In this paper we apply the CLMS algorithm in distributed networks based on the incremental strategy to establish the distributed CLMS (dclms) algorithm. In dclms, the circulant matrix is designed based on the autocorrelation matrices of the input signals from the nodes. Therefore, dclms performs better than the dlms algorithm.
This paper is organized as follows. In Section II, the CLMS adaptive algorithm is presented. In the next section, dclms is established. Finally, before concluding the paper, we present several simulation results in distributed networks.

II. CLMS Algorithm
As is well known, the convergence speed of an adaptive filter algorithm depends on the eigenvalue spread of the autocorrelation matrix of its input signal; a large eigenvalue spread leads to slow convergence. If we know that the input signal has an autocorrelation matrix given by, or close to, R, it makes sense to select the matrix C as the inverse of R, or an approximation to it. This gives a low eigenvalue spread for CR and consequently rapid convergence. We restrict C to be circulant and symmetric in order to keep the computational complexity low [1]. Therefore, the weight update equation with M coefficients in CLMS is introduced as

h(n) = h(n−1) + μ C x(n) e(n)   (1)

In (1), x(n) is the input signal vector, which is defined as

x(n) = [x(n), x(n−1), …, x(n−M+1)]^T   (2)

Also, μ is the step size and e(n) is the error signal, which is obtained by

e(n) = d(n) − x^T(n) h(n−1)   (3)

where d(n) is the desired signal. Given an assumed set of autocorrelation matrices R_0, R_1, …, R_{Q−1} for the input signal, we seek the circulant matrix C that best approximates the inverse of these autocorrelation matrices. We can define the following optimization problem:

min_{C ∈ Ψ} Σ_{k=0}^{Q−1} ‖C R_k − I‖_F²   (4)

where ‖·‖_F denotes the Frobenius norm, I is the identity matrix, and Ψ denotes the set of circulant matrices [1]. Writing the circulant matrix C explicitly in terms of its rows c_0^T, c_1^T, …, c_{M−1}^T:

C = [ a_0      a_1      a_2   …  a_{M−1} ]
    [ a_{M−1}  a_0      a_1   …  a_{M−2} ]
    [   ⋮                               ]
    [ a_1      a_2      a_3   …  a_0    ]   (5)

where the rows of C are c_i = Π^{M−i} c for i = 0, 1, …, M−1, with c = [a_0, a_1, …, a_{M−1}]^T, and Π is the cyclic shift matrix defined by [1]

Π = [ 0  1  0  …  0 ]
    [ 0  0  1  …  0 ]
    [   ⋮            ]
    [ 0  0  0  …  1 ]
    [ 1  0  0  …  0 ]   (6)

Solving the optimization problem in (4) leads to the following normal equations:

( Σ_{i=0}^{M−1} Σ_{k=0}^{Q−1} Π^i R_k² Π^{M−i} ) c = ( Σ_{i=0}^{M−1} Σ_{k=0}^{Q−1} Π^i R_k Π^{M−i} ) e   (7)
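As a concrete illustration, the design problem (4)-(7) and the update (1) can be sketched in Python/NumPy as follows. This is a minimal sketch under the reconstruction of (5)-(7) given above; the function names are ours, not from [1].

```python
import numpy as np

def shift_matrix(M):
    """Cyclic shift matrix Pi of (6): ones on the superdiagonal and a
    single one in the bottom-left corner, so Pi @ v rotates v cyclically."""
    P = np.zeros((M, M))
    for i in range(M):
        P[i, (i + 1) % M] = 1.0
    return P

def design_circulant(Rs):
    """Solve the normal equations (7) for the first row c of the circulant
    preconditioner C, given a list Rs of autocorrelation matrices R_k."""
    M = Rs[0].shape[0]
    Pi = shift_matrix(M)
    A = np.zeros((M, M))
    b = np.zeros(M)
    e = np.zeros(M)
    e[0] = 1.0                            # e = [1, 0, ..., 0]^T
    for i in range(M):
        Pi_i = np.linalg.matrix_power(Pi, i)
        Pi_Mi = np.linalg.matrix_power(Pi, M - i)
        for R in Rs:
            A += Pi_i @ R @ R @ Pi_Mi     # left-hand side of (7)
            b += Pi_i @ R @ Pi_Mi @ e     # right-hand side of (7)
    c = np.linalg.solve(A, b)
    # Rows of C are the cyclic shifts c_i = Pi^(M-i) c, as in (5)
    return np.stack([np.linalg.matrix_power(Pi, M - i) @ c for i in range(M)])

def clms_update(h, C, x, d, mu):
    """One CLMS iteration, eq. (1): LMS with the update direction
    preconditioned by the circulant matrix C."""
    e = d - x @ h                         # error signal, eq. (3)
    return h + mu * (C @ x) * e
```

For white input (R = I) the design returns C ≈ I and CLMS reduces to LMS; for colored input, CR has a much smaller eigenvalue spread than R.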
where e = [1, 0, …, 0]^T. We propose to use a fixed matrix C that is a good approximate inverse to the set of input autocorrelation matrices [1].

III. dclms Algorithm
The learning procedure for distributed networks based on the incremental strategy is shown in Fig. 1. Based on the local data d_k(n), x_k(n) and the estimate h_{k−1}(n) received from the previous node in the cycle, the local update is performed at each node at every time instant n. Finally, the local estimate h_Q(n) at the last node is used as the global estimate of the filter coefficients, and this value also serves as the initial local estimate for the next time instant (n+1). Each node is effectively an adaptive filter with its own local data d_k(n), x_k(n), where k denotes the node index. We assume a linear data model for the desired signal of each node:

d_k(n) = x_k^T(n) h^o + v_k(n)   (8)

where v_k(n) is the additive white Gaussian noise of each node and h^o is the unknown system that we would like to estimate. The local update equation of the distributed CLMS algorithm at each time instant n in the incremental network is given by

h_k(n) = h_{k−1}(n) + μ_k C x_k(n) e_k(n)   (9)

where

e_k(n) = d_k(n) − x_k^T(n) h_{k−1}(n)   (10)

is the error signal at node k.

1. Designing the Circulant Matrix Based on True/Close Autocorrelation Matrices
The first strategy for designing the matrix C is based on the autocorrelation of the input signals of the nodes. We can select autocorrelation matrices close to the true autocorrelation matrices. Based on these matrices, C is designed from (5) [1].

2. Designing the Matrix Based on Estimates of the Autocorrelation Matrices
The true autocorrelation matrix of each node is

R_k = E[x_k(n) x_k^T(n)]   (11)

We can estimate these matrices from (12) and (13) as follows:
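One cycle of the incremental recursion (9)-(10) can be sketched as follows; this is an illustrative sketch (function and variable names are ours), in which the estimate visits the nodes in a fixed ring order at each time instant n.

```python
import numpy as np

def dclms_cycle(h, C, node_data, step_sizes):
    """One incremental cycle at time n: the estimate h travels through the
    nodes, and each node k refines it with its local pair (x_k, d_k).
    Returns h_Q(n), which also initializes the next cycle."""
    for (x, d), mu in zip(node_data, step_sizes):
        e = d - x @ h               # local error, eq. (10)
        h = h + mu * (C @ x) * e    # local CLMS update, eq. (9)
    return h
```

Running many cycles drives h toward the unknown system h^o of the data model (8); with C = I this reduces to the incremental dlms of [4].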
R̂_k = (1/T) Σ_{j=n−T+1}^{n} x_k(j) x_k^T(j)   (12)

R̂_k = (1/(n+1)) Σ_{j=0}^{n} x_k(j) x_k^T(j)   (13)

where T denotes the number of recent regressors at each node. The matrix C can then be designed from these estimates of the autocorrelation matrices.

Fig. 1. Distributed processing over an adaptive incremental network through the CLMS algorithm.

IV. Simulation Results
We simulated the performance of the dclms algorithm in a distributed network based on the incremental strategy with 20 nodes in a system identification setup. The impulse response of the random unknown system has M taps. The correlated input signal x_k(n) at each node is generated by passing a white Gaussian noise process with unit variance through a first-order autoregressive filter whose transfer function is

G_k(z) = √(1 − α_k²) / (1 − α_k z^{−1})   (14)

Figs. 5-8 present the simulated steady-state MSE and EMSE of dlms and dclms at each node; note that for this colored input, choosing the step size of dlms any larger than the value used here may prevent convergence. The preconditioned matrix C was designed both from the estimated autocorrelation matrices of each node and for AR(1) inputs with correlation indices close to those of the nodes; in all simulations the estimated correlation indices were chosen as α = {0.9, 0.95, 0.99}. The results show the simulated steady-state MSE of dclms for both designs of the preconditioned matrix C. Figs. 5 and 6 demonstrate that, over a long run, dlms and dclms reach the same steady state, but dclms converges much faster. Figs. 7 and 8 show a shorter run in which dclms has already reached steady state while dlms has not yet converged.
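The colored inputs of (14) and the sample autocorrelation estimates (12)-(13) can be sketched as follows (an illustrative sketch; passing `T=None` selects the growing-window estimate (13), a finite `T` the sliding window (12)):

```python
import numpy as np

def ar1_input(alpha, n_samples, rng):
    """Unit-variance white Gaussian noise passed through the AR(1) filter
    G(z) = sqrt(1 - alpha^2) / (1 - alpha z^{-1}) of eq. (14); the
    sqrt(1 - alpha^2) gain keeps the output variance at one."""
    w = np.sqrt(1.0 - alpha**2) * rng.standard_normal(n_samples)
    x = np.zeros(n_samples)
    for n in range(1, n_samples):
        x[n] = alpha * x[n - 1] + w[n]
    return x

def estimate_autocorrelation(x, M, T=None):
    """Average of regressor outer products: eq. (13) over all regressors
    seen so far when T is None, eq. (12) over the T most recent ones."""
    regressors = [x[j - M + 1:j + 1][::-1] for j in range(M - 1, len(x))]
    if T is not None:
        regressors = regressors[-T:]
    return sum(np.outer(r, r) for r in regressors) / len(regressors)
```

For α = 0.9 the estimate approaches the true autocorrelation, with unit diagonal and lag-l entries close to α^l; the large eigenvalue spread of such a matrix is what slows dlms down.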
The correlation indices satisfy α_k ∈ [0.9, 0.99], so the input data are highly colored. The noise sequence of each node is white Gaussian with variance σ²_{v,k} ∈ (0, 0.1]. Fig. 2 shows the correlation index and noise power profile of each node. All estimates in the simulation results are obtained by ensemble averaging over independent trials.

Figs. 3 and 4 compare the simulated MSE and MSD learning curves of dlms and dclms for different step sizes, with true and estimated autocorrelation matrices obtained from (12) and (13), for node 1. As can be seen, the dclms algorithm converges much faster than dlms.

Fig. 9 presents simulated MSE learning curves of the dclms algorithm in which the preconditioned matrix C was designed from the nodes' autocorrelations under three different estimates: first using the whole sample history, and then using two smaller windows of the most recent regressors. New preconditioned matrices were built from these estimated autocorrelations and the resulting curves are shown. Fig. 10 is analogous to Fig. 9, but instead of using the nodes' own correlation indices to design the preconditioned matrix, it uses only α = {0.9, 0.95, 0.99}, which are close to the nodes' correlation indices.

Fig. 2. Node profile: noise power and correlation index for the Gaussian data network.

V. Conclusion
In this paper we applied the circulantly weighted LMS algorithm to adaptive distributed networks based on the incremental strategy. Because the circulant matrix approximates the inverse of the input autocorrelation matrix, it reduces the eigenvalue spread of the preconditioned autocorrelation and thereby increases the convergence speed of the adaptive algorithm. Simulation results show the good performance of this algorithm in distributed networks.
Fig. 3. MSE learning curves of dlms and dclms for different step sizes and correlation indices.

Fig. 4. MSD learning curves of dlms and dclms for different step sizes and correlation indices.

Fig. 5. Steady-state MSE for dlms and dclms, using a C matrix designed from the true and estimated autocorrelation of each node.

Fig. 6. Steady-state EMSE for dlms and dclms, using a C matrix designed from the true and estimated autocorrelation of each node.

Fig. 7. Steady-state MSE for dlms and dclms, using a C matrix designed from the true and estimated autocorrelation of each node.

Fig. 8. Steady-state EMSE for dlms and dclms, using a C matrix designed from the true and estimated autocorrelation of each node.
Fig. 9. MSE learning curves of dclms for different estimates of the autocorrelation of each highly colored node.

Fig. 10. MSE learning curves of dclms for different estimates of the autocorrelation, for a set of correlation indices near those of the nodes.

References
[1] J. H. Husoy and M. S. Esfand Abadi, "A novel LMS-type adaptive filter optimized for operation in multiple signal environments."
[2] J. H. Husoy and M. S. Esfand Abadi, "A family of flexible NLMS-type adaptive filter algorithms," 2007.
[3] C. G. Lopes and A. H. Sayed, "Incremental adaptive strategies over distributed networks," IEEE Trans. Signal Process., vol. 55, no. 8, pp. 4064-4077, Aug. 2007.
[4] A. H. Sayed and C. G. Lopes, "Adaptive processing over distributed networks," IEICE Trans. Fundam. Electron. Commun. Comput. Sci., vol. E90-A, no. 8, pp. 1504-1510, 2007.
[5] S. Haykin, Adaptive Filter Theory, 2nd ed. Englewood Cliffs, NJ: Prentice-Hall, 1996.
[6] A. H. Sayed, Adaptive Filters. New York: Wiley, 2008.
[7] J. H. Husoy, "A preconditioned LMS adaptive filter," ECTI Trans. on Electrical Eng., Electronics, and Communications, vol. 5, pp. 3-9, Feb. 2007.