CHAPTER 6 BACK PROPAGATED ARTIFICIAL NEURAL NETWORK TRAINED ARHF


6.1 INTRODUCTION

An artificial neural network (ANN) is an information processing model inspired by the way biological nervous systems such as the brain process information, and it replicates the most basic functions of the brain. The key element of an ANN is the novel structure of its information processing system: the network is composed of a large number of highly interconnected processing elements (neurons) working in parallel to solve a specific problem. Through a learning process an ANN can be configured for a specific application, such as pattern recognition or data classification. In biological systems this learning involves adjustments to the synaptic connections that exist between the neurons, and the same process is applied in an ANN to obtain solutions to complex problems. Neural network research aims to perform various computational tasks at a faster rate than traditional systems. Neural networks learn by example; they are not programmed to perform a specific task. Artificial neural networks perform various tasks such as identification and control, pattern matching and classification,

sequence recognition and medical diagnosis.

6.2 NEURAL NETWORKS AND THEIR ARCHITECTURES

An ANN is composed of a large number of highly interconnected processing units, called nodes or neurons, configured in parallel. Connection links are provided between the neurons, and each link is associated with a weight that contains information about the input signal; this information is used by the neural net to solve a particular problem. ANNs are characterized by their ability to memorize and generalize training patterns or data, similar to a human brain. Since an ANN tries to replicate the working nature of the brain, it has the capability to model networks of biological neurons, and the ANN processing elements are therefore called neurons or artificial neurons. Each neuron has its own internal state, which is a function of the inputs received by the neuron. The activity level of a neuron is called its activation, and the activation signal can be transmitted from one neuron to others.

To depict the basic operation of a neural net, consider the neurons I_1, I_2 and O, where I_1 and I_2 are input neurons which transmit signals and O is the output neuron which receives signals. The input neurons I_1 and I_2 are connected to the output neuron O over weighted interconnection links (w_1 and w_2) as shown in Figure 6.1. The net input for this simple neuron net architecture is calculated as

$$y_{in} = a_1 w_1 + a_2 w_2 \quad (6.1)$$

where a_1 and a_2 are the activations of the input neurons I_1 and I_2, i.e., the outputs of the input signals. The output O is obtained by applying an activation function over the net input,

$$O = f(o_{in}) \quad (6.2)$$

that is, output = function (net input calculated). The function applied over the net input is called the activation function.

Figure 6.1 Architecture of a simple neuron net (inputs I_1 and I_2 connected through weights w_1 and w_2 to the output neuron O)

Figure 6.2 shows a mathematical representation of the artificial neuron. In this model, the net input is

$$o_{in} = a_1 w_1 + a_2 w_2 + \dots + a_n w_n = \sum_{i=1}^{n} a_i w_i \quad (6.3)$$

where i represents the i-th processing element. The activation function is applied over this net input to calculate the output.
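As a minimal illustration of Equations (6.1) to (6.3), the following Python sketch computes the output of a single neuron; the input values, weights and the choice of tanh as activation are arbitrary examples, not values taken from the text.

```python
import numpy as np

def neuron_output(a, w, f=np.tanh):
    """Single artificial neuron: net input as in Equation (6.3),
    followed by an activation function as in Equation (6.2)."""
    o_in = np.dot(a, w)   # net input: sum of a_i * w_i
    return f(o_in)        # output: activation applied over the net input

# Two-input example matching Figure 6.1
a = np.array([0.5, -0.3])   # activations a_1, a_2 of input neurons I_1, I_2
w = np.array([0.8, 0.4])    # weights w_1, w_2 of the interconnection links
print(neuron_output(a, w))
```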

Figure 6.2 Mathematical structure of an artificial neuron

The architecture of an ANN is specified by three basic entities, namely:

1. the interconnections,
2. the training or learning rules adopted for updating and adjusting the connection weights, and
3. the activation functions.

Connections between neurons

In an ANN the processing elements are arranged in layers such that the output of each processing element is connected through weights to the other processing elements. The interconnections of the neurons are essential for an ANN; the function of each processing element can be specified only after knowing the point of origination and termination of a neuron. The arrangement of neurons to form layers and the connection links formed between the layers is called the network architecture. There exist five basic types of neuron connection architectures:

1. single layer feed-forward network,
2. multilayer feed-forward network,

3. single layer node with its own feedback,
4. single layer recurrent network, and
5. multilayer recurrent network.

Neural nets are basically classified into single-layer and multilayer neural networks. A layer is formed by a group of processing elements connected to other processing elements through weights; a layer links the input phase with the output phase, and these interconnections lead to the formation of various network architectures. When a layer of processing units is formed, the inputs can be connected to these units with various weights, giving a series of outputs, one per node. A single-layer feed-forward network is thus formed, as depicted in Figure 6.3.

A multilayer feed-forward network (Figure 6.4) is formed by interconnecting several layers of processing units. The input layer receives the input and transmits it onward, and the output layer generates the desired output of the network. A further layer formed between the input and output layers is called the hidden layer; it has no direct contact with the external environment and is internal to the network. An ANN can have from zero to several hidden layers depending on the application. Using more hidden layers provides a more efficient output response but increases the complexity of the network; a network with several such layers is called a multilayer network. In some applications the layers are designed such that every output from one layer is connected to each and every node in the next layer.

Figure 6.3 Single layer feed-forward network (input neurons I_1 to I_n connected through weights w_11 to w_nm to output neurons O_1 to O_m)

Figure 6.4 Multilayer feed-forward network (input neurons I_1 to I_n, hidden layers H_1 to H_q and G_1 to G_k, output neurons O_1 to O_m)

When the outputs of the output layer are directed back as inputs to the same or preceding nodes, the network is a feedback network; in a feed-forward network no neuron in the output layer is an input to a node in the same layer or in a preceding layer. Multilayer feed-forward networks are widely used for solving complex problems, because these networks use several hidden layers and with proper training can reach the specified solution.

Learning methods for training a neural network

Learning, or training, is the process by which a neural network adapts itself to give the desired response; it makes the proper parameter adjustments that result in the target output. Learning in an ANN is broadly classified into two kinds:

1. Parameter learning: the weights in the neural network are updated.
2. Structure learning: the total network architecture, including the number of processing elements and their connection types, is changed.

These two types of learning can be performed simultaneously or separately. Apart from these categories, the general classification of learning in an ANN is:

1. Supervised learning
2. Unsupervised learning
3. Reinforcement learning

Though many learning methods are available, for some problems supervised learning reaches the target output better than the other methods, since unsupervised and reinforcement learning do not provide the same directed training of the network for processing the input vectors.

Supervised learning

In supervised learning, each input vector requires a corresponding target vector which represents the desired output; the input vector together with its target vector is called a training pair. The block diagram of Figure 6.5 depicts the working of a supervised learning network.

Figure 6.5 Block diagram of supervised learning (the actual output of the neural network is compared with the desired output, and the resulting error drives the adjustment of the weights)

The network is trained precisely on the type of output to be emitted. The input vector for the specified application is presented to the network, which produces an output. This obtained vector, called the actual output vector, is compared with the desired (target) output vector. If there is a difference between the two vectors, an error signal is generated by the network; this error signal is used for adjusting the weights until the actual output matches the target output. In this training, supervision is exercised by setting the target output, resulting in error minimization; hence this type of network training is called the supervised training methodology.
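The compare-and-adjust loop of Figure 6.5 can be sketched for a single linear neuron with the delta rule; this is a simplified stand-in for the full back propagation update described later, and the training pairs and learning rate below are illustrative assumptions.

```python
import numpy as np

# Illustrative training pairs: (input vector, desired output)
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
d = np.array([1.0, 1.0, 0.0])

w = np.zeros(2)     # weights, initialized to zero
alpha = 0.1         # learning rate

for epoch in range(100):
    for x, target in zip(X, d):
        actual = np.dot(w, x)      # actual output of the network
        error = target - actual    # compare with the desired output
        w += alpha * error * x     # adjust the weights from the error signal
```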

In supervised learning it is assumed that the correct target output values are known for each input pattern.

Purpose of activation functions

The output of a network is derived exactly with the help of an activation function, which is associated with the input of the processing neuron. The activation function assigns the required operation to the processing neuron in order to obtain the specified output; this integration function f serves to combine activation information from other processing neurons. The response of a neuron can be damped by using a nonlinear activation function, which shows that the activation stimuli are controllable. Using a nonlinear function in a multilayer network is more advantageous than in a single layer network, since the output obtained with a linear function in a multilayer network is the same as that of a single layer network; for this reason nonlinear activation functions are widely used in multilayer networks.

Sigmoidal functions: These functions are widely used in back propagation networks because of the simple relationship between the value of the function at a point and the value of its derivative at that point, which reduces the computational burden during training. The sigmoidal functions, depicted in Figure 6.6, are of two types.

Binary sigmoid function: This function is also called the logistic sigmoid or unipolar sigmoid function. Its range is from 0 to 1.

It is defined as

$$f(I) = \frac{1}{1 + e^{-I}} \quad (6.4)$$

and its derivative is

$$f'(I) = f(I)\,[1 - f(I)] \quad (6.5)$$

Bipolar sigmoid function: This function is defined as

$$f(I) = \frac{2}{1 + e^{-I}} - 1 = \frac{1 - e^{-I}}{1 + e^{-I}} \quad (6.6)$$

Its range is between -1 and +1, and its derivative is

$$f'(I) = \frac{1}{2}\,[1 + f(I)]\,[1 - f(I)] \quad (6.7)$$

Figure 6.6 (a) Binary sigmoidal function (b) Bipolar sigmoidal function
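A direct Python transcription of Equations (6.4) to (6.7); this is a minimal sketch and the function names are my own.

```python
import numpy as np

def binary_sigmoid(I):
    """Logistic (unipolar) sigmoid of Equation (6.4); range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-I))

def binary_sigmoid_deriv(I):
    """Equation (6.5): f'(I) = f(I) [1 - f(I)]."""
    f = binary_sigmoid(I)
    return f * (1.0 - f)

def bipolar_sigmoid(I):
    """Bipolar sigmoid of Equation (6.6); range (-1, +1)."""
    return 2.0 / (1.0 + np.exp(-I)) - 1.0

def bipolar_sigmoid_deriv(I):
    """Equation (6.7): f'(I) = 0.5 [1 + f(I)] [1 - f(I)]."""
    f = bipolar_sigmoid(I)
    return 0.5 * (1.0 + f) * (1.0 - f)
```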

The hyperbolic tangent function and the bipolar sigmoid function are closely related to each other:

$$h(I) = \frac{e^{I} - e^{-I}}{e^{I} + e^{-I}} \quad (6.8)$$

$$h(I) = \frac{1 - e^{-2I}}{1 + e^{-2I}} \quad (6.9)$$

The derivative of the hyperbolic tangent function is

$$h'(I) = [1 + h(I)]\,[1 - h(I)] \quad (6.10)$$

If binary data are used in the network, the bipolar sigmoid activation function may be used after converting the data into bipolar form.

Terms used for ANN

Weights: The direct communication links between the neurons in the architecture of an ANN are associated with weights. The information about the input signal is contained in the weights, and it is used by the network to give solutions to the problems. Since there are many such links in a network, the weights are represented as a weight matrix W:

$$W = \begin{bmatrix} w_1^T \\ \vdots \\ w_n^T \end{bmatrix} = \begin{bmatrix} w_{11} & \cdots & w_{1m} \\ \vdots & & \vdots \\ w_{n1} & \cdots & w_{nm} \end{bmatrix} \quad (6.11)$$

where $w_i = [w_{i1}, w_{i2}, \dots, w_{im}]^T$, i = 1, 2, ..., n, is the weight vector of the i-th processing element and w_ij is the weight from processing element i to processing element j. The weight matrix contains

the information of all the elements in an ANN, and the set of W matrices determines the set of all possible element configurations for that ANN.

Bias: The bias plays a major role in determining the output of the network, and a bias included in the network has its impact on the calculation of the net input. The bias is included by adding a component I_0 = 1 to the input vector I. The input vector thus becomes

$$I = (1, I_1, \dots, I_i, \dots, I_n) \quad (6.12)$$

The bias is treated like another weight, that is, $w_{0j} = b_j$. For the simple network shown in Figure 6.7, the net input to output neuron O_j is calculated as follows.

Figure 6.7 Inclusion of a bias term for a neuron net (inputs I_1 to I_n connected through weights w_1j to w_nj, together with the bias b_j, to the neuron O_j)

$$o_{inj} = \sum_{i=0}^{n} a_i w_{ij} = w_{0j} + a_1 w_{1j} + a_2 w_{2j} + \dots + a_n w_{nj} = b_j + \sum_{i=1}^{n} a_i w_{ij} \quad (6.13)$$

The bias can be of two types:

1. positive bias, and
2. negative bias.

A positive bias helps to increase the net input of the network and a negative bias helps to decrease it; thus by using a bias the output of the network can be changed.

Threshold: The threshold is a value limit set in order to decide the final output of the network, and it is used in the activation function: the calculated net input is compared with this set value to obtain the network output. Each application has its own threshold limit, and based on this value the activation function is defined for that network; according to that function the required output is calculated. An activation function using a threshold θ can be defined as

$$f(net) = \begin{cases} 1, & \text{if } net \geq \theta \\ -1, & \text{if } net < \theta \end{cases} \quad (6.14)$$
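A small Python sketch of Equations (6.13) and (6.14), treating the bias as the weight of a constant input exactly as the text describes; the input values, weights and threshold are illustrative.

```python
import numpy as np

def net_input_with_bias(a, w, b):
    """Equation (6.13): net input including the bias term b_j."""
    return b + np.dot(a, w)

def threshold_activation(net, theta=0.0):
    """Equation (6.14): bipolar step function with threshold theta."""
    return 1 if net >= theta else -1

a = np.array([0.2, 0.7, -0.4])   # activations a_1 to a_n
w = np.array([0.5, -0.1, 0.3])   # weights w_1j to w_nj
print(threshold_activation(net_input_with_bias(a, w, b=0.1)))
```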

Learning rate: The learning rate is used to control the amount of weight adjustment at each step of the training process. Its range is from 0 to 1, and it determines the rate of learning at each time step.

Momentum factor: The momentum factor is added to the weight updating process in order to obtain faster convergence. This factor allows larger weight adjustments and is therefore used in the back propagation network. It should be noted that to use the momentum factor, the weights from the previous training patterns must be saved.

Vigilance parameter: The vigilance parameter controls the degree of similarity required for patterns to be assigned to the same cluster unit.

6.3 BACK PROPAGATION ARTIFICIAL NEURAL NETWORK

Networks trained with the back propagation learning algorithm are called Back Propagation Artificial Neural Networks (BPANNs). Back propagation is a popular learning method for neural networks: it gave an alternate way of handling large learning problems and reawakened the scientific community to the modelling and processing of numerous quantitative phenomena. The algorithm is applied to multilayer feed-forward networks with continuously differentiable activation functions. For a given set of training patterns, the algorithm provides a procedure for changing the weights in a BPANN so as to classify the given input patterns correctly. The basic concept of this weight update algorithm is the gradient descent method, in which the error is propagated back to the hidden units through the feed-forward layers. The main objective is to train the network to achieve a balance between the net's

ability to respond to the inputs used in training (memorization) and its ability to give reasonable responses to inputs similar to those used in training (generalization).

The process by which the weights are calculated during the learning period of a network trained with the back propagation algorithm differs from that of other networks. It is very difficult to calculate the weights of the hidden layers in a multilayer perceptron with zero output error, and when more hidden layers are added the network training becomes more complex. For updating the weights, the error must be calculated: the error is the difference between the actual and the desired (target) output, and it can be measured easily at the output layer. At the hidden layers, however, there is no direct information about the error, so other techniques must be used to estimate a hidden layer error that minimizes the output error.

The training of the back propagated neural network is done in three stages:

1. the feed-forward of the input training data,
2. the back propagation of the error, and
3. the updating of the weights according to the error obtained.

The testing of the BPN involves the computation of the feed-forward phase only. There can be more than one hidden layer (which can be beneficial), but one hidden layer is sufficient. Even though the training is very slow, once the network is trained it can produce its outputs very rapidly.

Architecture of BPANN

A back propagation neural network is a multilayer feed-forward neural network consisting of an input layer, a hidden layer and an output layer. The neurons present in the hidden and output layers have biases whose

activation is always 1; these biases also act as weights. Figure 6.8 shows the architecture of the BPANN with the feed-forward and back propagation phases indicated. In the back propagation phase of learning, signals are sent in the reverse direction to the processing elements. The outputs obtained from the network are either binary (0, 1) or bipolar (-1, +1), so the activation function can be the corresponding sigmoid activation function.

Figure 6.8 Basic architecture of a back propagated network (input layer, weight matrix 1, hidden layer, weight matrix 2, output layer)

The notations used in the BPN are:

a_i - activation of unit I_i, the input signal
o_j - activation of unit O_j, o_j = f(o_inj)
w_ij - weight on the connection from unit I_i to unit O_j
b_j - bias acting on unit j; the bias has a constant activation of 1

W - weight matrix, W = {w_ij}
o_inj - net input to unit O_j
θ_j - threshold for the activation of neuron O_j
S - training input vector, S = (s_1, ..., s_i, ..., s_n)
T - training output vector, T = (t_1, ..., t_i, ..., t_n)
I - input vector, I = (I_1, ..., I_i, ..., I_n)
Δw_ij - change in weight,

$$\Delta w_{ij} = w_{ij}(new) - w_{ij}(old) \quad (6.15)$$

α - learning rate for controlling the weight adjustment at each step of training

6.4 FLOWCHART FOR TRAINING BPANN

The flow of the training process is depicted in the flowchart of Figure 6.9. The terms used in the flowchart and in the training algorithm are as follows:

I = input training vector (I_1, ..., I_i, ..., I_n)
d = target output vector (d_1, ..., d_i, ..., d_n)
α = learning rate parameter
I_i = input unit i (since the input layer uses the identity activation function, the input and output signals here are the same)
b_0j = bias on the j-th hidden unit
b_0k = bias on the k-th output unit
c_ij = weight on the connection from the i-th input unit to the j-th hidden node
H_j = hidden unit j. The net input to H_j is

$$H_{inj} = b_{0j} + \sum_{i} I_i c_{ij} \quad (6.16)$$

and its output is

$$H_j = f(H_{inj}) \quad (6.17)$$

O_k = output unit k. The net input to O_k is

$$O_{ink} = b_{0k} + \sum_{j} H_j w_{jk} \quad (6.18)$$

and its output is

$$O_k = f(O_{ink}) \quad (6.19)$$

δ_k = error correction weight adjustment for w_jk due to an error at output unit O_k, which is back propagated to the hidden units that feed into unit O_k

δ_j = error correction weight adjustment for c_ij due to the back propagation of the error to hidden unit H_j

The binary sigmoidal and bipolar sigmoidal activation functions are the commonly used activation functions because of the following characteristics:

1. continuity,
2. differentiability, and
3. nondecreasing monotony.

Training algorithm adopted for BPANN

The back propagation learning algorithm can be outlined in the following three phases:

1. feed-forward phase,
2. back propagation of the error, and
3. weight and bias updating.

Step 0: Initialize the weights and the learning rate to some random values.

Step 1: Perform Steps 2-9 while the stopping condition is false.

Step 2: Perform Steps 3-8 for each training pair.

Feed-forward phase of the network

Step 3: Each input unit receives the input signal x_i and sends it to the hidden units (i = 1 to n).

Step 4: The net input of each hidden unit H_j (j = 1 to p) is calculated as

$$H_{inj} = b_{0j} + \sum_{i} I_i c_{ij} \quad (6.20)$$

The output of the hidden unit is calculated by applying the activation function over H_inj,

$$H_j = f(H_{inj}) \quad (6.21)$$

and the obtained output signal is sent from the hidden units to the inputs of the output layer units.

Step 5: For each output unit O_k (k = 1 to m), calculate the net input

$$O_{ink} = b_{0k} + \sum_{j=1}^{p} H_j w_{jk} \quad (6.22)$$

and apply the activation function to compute the output signal

$$O_k = f(O_{ink}) \quad (6.23)$$

Calculation of the back propagation error in the network

Step 6: Each output unit O_k (k = 1 to m) receives its target pattern, and the corresponding error correction factor is computed:

$$\delta_k = (d_k - O_k)\, f'(O_{ink}) \quad (6.24)$$

According to the error correction factor, the changes in the weights and bias are computed as

$$\Delta w_{jk} = \alpha\, \delta_k H_j \quad (6.25)$$

$$\Delta b_{0k} = \alpha\, \delta_k \quad (6.26)$$

and δ_k is sent backwards to the hidden layer.

Step 7: Each hidden unit (H_j, j = 1 to p) sums its delta inputs from the output units,

$$\delta_{inj} = \sum_{k=1}^{m} \delta_k w_{jk} \quad (6.27)$$

and multiplies δ_inj by the derivative f'(H_inj) to calculate the error term

$$\delta_j = \delta_{inj}\, f'(H_{inj}) \quad (6.28)$$

On the basis of δ_j, the changes in the weights and bias are computed as

$$\Delta c_{ij} = \alpha\, \delta_j I_i \quad (6.29)$$

$$\Delta b_{0j} = \alpha\, \delta_j \quad (6.30)$$

Weight and bias updating in the network

Step 8: Update the bias and weights for each output unit O_k (k = 1 to m):

$$w_{jk}(new) = w_{jk}(old) + \Delta w_{jk} \quad (6.31)$$

$$b_{0k}(new) = b_{0k}(old) + \Delta b_{0k} \quad (6.32)$$

Update the bias and weights for each hidden unit (H_j, j = 1 to p):

$$c_{ij}(new) = c_{ij}(old) + \Delta c_{ij} \quad (6.33)$$

$$b_{0j}(new) = b_{0j}(old) + \Delta b_{0j} \quad (6.34)$$

Step 9: Check the stopping condition: training stops when the actual output equals the target output or when the required number of epochs has been reached.

In this algorithm the weights are changed immediately after a training pattern is presented. This is an incremental approach to the updating of weights, and it differs from batch mode training, where the weights are changed only after all the training patterns have been presented; a drawback of batch mode training is that it needs additional storage for the accumulated weight updates. Since the back propagation learning algorithm implements the gradient descent method, it converges to the nearest minimum of the error; this holds when the relation existing between the input and the output training patterns is deterministic, so that the error surface is also deterministic. When the error surface is noisy, the incremental updating helps the algorithm to get out of local minima. The algorithm updates the weights of the network within the required learning rate and minimizes the problems due to lack of proper convergence.
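The complete training step (Steps 3 to 8) translates into a short numpy routine. The sketch below follows Equations (6.20) to (6.34) for a single hidden layer with the binary sigmoid activation; the array shapes, layer sizes, variable names and random initialization are illustrative choices of mine, not values from the text.

```python
import numpy as np

def f(x):          # binary sigmoid, Equation (6.4)
    return 1.0 / (1.0 + np.exp(-x))

def f_deriv(y):    # derivative expressed via the output, Equation (6.5)
    return y * (1.0 - y)

rng = np.random.default_rng(0)
n, p, m = 4, 6, 2                       # input, hidden, output unit counts
C = rng.uniform(-0.5, 0.5, (n, p))      # input-to-hidden weights c_ij
b0j = np.zeros(p)                       # hidden biases
W = rng.uniform(-0.5, 0.5, (p, m))      # hidden-to-output weights w_jk
b0k = np.zeros(m)                       # output biases
alpha = 0.2                             # learning rate

def train_pair(I, d):
    """One incremental training step for the pair (I, d)."""
    global C, W, b0j, b0k
    # Feed-forward phase: Equations (6.20)-(6.23)
    H = f(b0j + I @ C)                        # hidden outputs H_j
    O = f(b0k + H @ W)                        # network outputs O_k
    # Back propagation of the error: Equations (6.24)-(6.30)
    delta_k = (d - O) * f_deriv(O)            # Equation (6.24)
    delta_j = (delta_k @ W.T) * f_deriv(H)    # Equations (6.27)-(6.28)
    # Weight and bias updating: Equations (6.31)-(6.34)
    W += alpha * np.outer(H, delta_k)
    b0k += alpha * delta_k
    C += alpha * np.outer(I, delta_j)
    b0j += alpha * delta_j
    return np.sum((d - O) ** 2)               # squared error for the stopping check
```

The weights are changed immediately after each pattern, matching the incremental update described above; accumulating the Δ terms over all patterns before applying them would give the batch mode variant.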

Figure 6.9 Adaptive learning algorithm of BPN (initialization and output calculation phases: the weights are initialized, each training pair (I, d) is presented, the input signal I_i is transmitted to the hidden layer, the hidden layer output H_j is calculated and sent to the output layer, and the output layer output O_k is calculated)

Figure 6.9 (Continued) (error derivation and weight and bias updating phases: the error correction factor between output and hidden layers is computed, the weight and bias correction terms are derived, the weights and biases of the output and hidden units are updated, and the loop repeats until the target epochs are reached or d_k = O_k)

Factors needed for convergence of BPANN

The training and convergence of the BPANN depend on various learning factors, such as the initialization of the weights, the type of learning rate, the weight updating rule, the size and nature of the training patterns, and the architecture of the network.

Initial weights: The initialization of the weights is an important factor in determining the convergence of the network. The weights are initialized to small random values in order to obtain faster convergence. Since sigmoidal activation functions are widely used in this network, the weights cannot be initialized to high values, because the activation functions would then saturate from the initial stage itself and the training could suffer from local minima. The range used for initializing a weight w_ij is

$$-\frac{3}{\sqrt{p_i}} \leq w_{ij} \leq \frac{3}{\sqrt{p_i}} \quad (6.35)$$

where p_i is the number of processing elements j that feed forward to processing element i. The weights connecting the input neurons to the hidden neurons are then obtained from

$$c_{ij}(new) = \frac{\beta\, c_{ij}(old)}{\left\| c_j(old) \right\|} \quad (6.36)$$

where c_j(old) is the vector of weights c_ij over all values of i and the scale factor is $\beta = 0.7\,P^{1/n}$ (n is the number of input neurons and P is the number of hidden neurons).
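A sketch of this initialization scheme; reading Equations (6.35) and (6.36) this way makes it the Nguyen-Widrow procedure, which is an assumption on my part, and the layer sizes are illustrative.

```python
import numpy as np

def init_hidden_weights(n, P, rng=np.random.default_rng(0)):
    """Initialize input-to-hidden weights per Equations (6.35)-(6.36):
    small random values, then each hidden unit's weight vector is
    rescaled by beta = 0.7 * P**(1/n) (assumed Nguyen-Widrow style)."""
    limit = 3.0 / np.sqrt(n)                 # range of Equation (6.35)
    C = rng.uniform(-limit, limit, (n, P))   # c_ij, small random values
    beta = 0.7 * P ** (1.0 / n)              # scale factor
    norms = np.linalg.norm(C, axis=0)        # ||c_j(old)|| per hidden unit j
    return beta * C / norms                  # Equation (6.36)
```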

Learning rate: The value of the learning rate affects the convergence of the BPANN. A larger learning rate may result in overshooting and oscillation, while a lower learning rate leads to slower learning of the network; the learning rate typically ranges from $10^{-3}$ to 10.

Momentum: The momentum factor is added to the normal gradient descent method in order to allow a larger learning rate without loss of convergence. This factor can be used with either pattern-by-pattern updating or batch mode updating. The weight updating formulas with the momentum factor μ are given below (a code sketch follows at the end of this subsection):

$$w_{jk}(t+1) = w_{jk}(t) + \alpha\, \delta_k H_j + \mu\,[\,w_{jk}(t) - w_{jk}(t-1)\,] \quad (6.37)$$

$$c_{ij}(t+1) = c_{ij}(t) + \alpha\, \delta_j I_i + \mu\,[\,c_{ij}(t) - c_{ij}(t-1)\,] \quad (6.38)$$

Generalization: When there are too many trainable parameters for the given amount of training data, the network overtrains; on the other hand, with too few trainable parameters the network fails to learn the training data and performs poorly. The generalization of the network can be improved by enlarging the input space of the patterns, that is, by introducing variations in the input space without altering the output components. The back propagation network generalizes well in this respect.

Number of training data and hidden layer nodes: The training data should cover the entire input space. A multilayer feed-forward network may have any number of hidden layers; the number of hidden units depends on the application and should be determined accordingly.
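The momentum update of Equations (6.37) and (6.38) needs the previous weights to be kept, as noted earlier. A minimal sketch for the hidden-to-output weights, with illustrative parameter values:

```python
import numpy as np

def momentum_update(W, W_prev, H, delta_k, alpha=0.2, mu=0.9):
    """Equation (6.37): gradient step plus a momentum term built from
    the previous weight change W - W_prev."""
    W_new = W + alpha * np.outer(H, delta_k) + mu * (W - W_prev)
    return W_new, W    # the returned W becomes W_prev for the next call
```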

6.5 IMPLEMENTATION OF BACK PROPAGATED NEURAL NETWORK BASED ACTIVE REGENERATIVE HARMONIC FILTER

To improve the performance and obtain better results in harmonic elimination, a neural based current controller (Maurizio Cirrincione et al. 2009) is implemented; the training of the network layers gives the required mitigation of harmonics. A back propagated neural network, which is based purely on supervised training, is used to obtain the required harmonic reduction. It uses a gradient descent training method with a variable learning rate instead of the standard steepest descent method, so that the adjustment toward the required output can be made continuously during the training process. The standard steepest descent method uses a constant learning rate throughout the training, but the total performance of that algorithm is very sensitive to the proper setting of the learning rate. It is not practical to determine the optimal setting of the learning rate before training, so the performance improves if the learning rate changes during the training process (Li Xia et al. 2011). For this reason the learning rate is allowed to change during the training process (Julio Viola et al. 2007); this adaptive learning rate attempts to keep the learning step size as large as possible while keeping the learning stable. The flow of the adaptive training algorithm for the Back Propagated Artificial Neural Network (BPANN) is shown in the flowchart of Figure 6.9. The proposed ARHF with the trained BPANN is depicted in Figure 6.10 (Pinheiro et al. 1996).
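The text does not spell out the adaptation rule for the learning rate, so the sketch below uses one common heuristic (increase the rate while the error keeps falling, cut it back when the error grows); the thresholds and factors are assumptions, not values from the thesis.

```python
def adapt_learning_rate(alpha, err, prev_err,
                        inc=1.05, dec=0.7, max_growth=1.04):
    """One common adaptive-learning-rate heuristic (an assumption here):
    grow alpha while the error decreases, shrink it when the error
    rises by more than a small tolerance."""
    if err > prev_err * max_growth:
        return alpha * dec    # error grew too much: reduce the step size
    if err < prev_err:
        return alpha * inc    # error fell: cautiously enlarge the step
    return alpha              # otherwise keep the learning rate unchanged
```

This keeps the step size as large as the error behaviour allows, which is the stated goal of the adaptive learning rate.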

Figure 6.10 Proposed ARHF with neural network trainer

Simulation and its outputs

A MATLAB Simulink model is used to simulate the two types of nonlinear loads, with the back propagated neural network included for controlling the active regenerative harmonic filter. Nonlinear Loads 1 and 2 are simulated with the ARHF as shown in Figures 6.11 and 6.12.

Figure 6.11 Simulation circuit of Nonlinear Load 1 with Neural Network

Figure 6.12 Simulation circuit of Nonlinear Load 2 with Neural Network

The multilayer feed-forward back propagated neural network is implemented in the simulation. After simulating both loads, the magnitudes of the harmonics are reduced. The individual network layers incorporated in the MATLAB Simulink model are shown in Figures 6.13(a) and (b), and the integration of the neural network layers is shown in Figure 6.13(c). The sigmoidal activation function is selected, and it works on the given input to give the reduction of the particular harmonic orders. The training is run for 84 epochs to obtain the best harmonic elimination (Figure 6.13(d)). The FFT windows shown in Figures 6.14 and 6.15 give the source current with reduced THD_i for both loads.

Figure 6.13 Implementation of BPANN in simulation: (a) neural network layer 1, (b) neural network layer 2, (c) integration of layers 1 and 2, (d) epochs carried out in training

Figure 6.14 Simulated Nonlinear Load 1 with BPANN (THD_i = 1.53%)

Figure 6.15 Simulated Nonlinear Load 2 with BPANN (THD_i = 3.84%)
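The THD_i values reported in Figures 6.14 and 6.15 come from FFT analysis of the simulated source current. A sketch of how such a value can be computed from a sampled current waveform; the signal parameters below are illustrative, not the simulated load currents.

```python
import numpy as np

def current_thd(i_s, fs, f1=50.0, n_harmonics=19):
    """THD_i of a sampled current: RMS of the harmonic components
    (orders 2..n_harmonics) relative to the fundamental, in percent."""
    N = len(i_s)
    spectrum = np.abs(np.fft.rfft(i_s))
    bin_of = lambda f: int(round(f * N / fs))   # FFT bin of frequency f
    fund = spectrum[bin_of(f1)]
    harm = [spectrum[bin_of(h * f1)] for h in range(2, n_harmonics + 1)]
    return 100.0 * np.sqrt(np.sum(np.square(harm))) / fund

# Two cycles of a distorted 50 Hz current with 3rd and 5th harmonics
fs = 10000.0
t = np.arange(0, 0.04, 1 / fs)
i_s = (np.sin(2 * np.pi * 50 * t)
       + 0.30 * np.sin(2 * np.pi * 150 * t)
       + 0.15 * np.sin(2 * np.pi * 250 * t))
print(f"THDi = {current_thd(i_s, fs):.2f} %")   # about 33.5 %
```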

The gradient descent training algorithm adopted for this network controls the switching of the regenerative converter (Mumcu et al. 2001), thereby controlling the generation of harmonics. FFT analysis shows the meticulous harmonic elimination obtained with the back propagation neural network. The derived harmonic profiles are tabulated and their performance is validated. By using the adaptive training algorithm for the Back Propagated Artificial Neural Network, the utility is protected from the issues created by nonlinear loads, and the damaging effects created by all types of nonlinear loads can be effectively reduced by the use of the ANN controller. The input current THDs of the nonlinear loads obtained before mitigation are THD_i = 89.54% (NL1) and THD_i = 190.55% (NL2). After the use of this adaptive Back Propagated Artificial Neural Network with the ARHF, the current THDs for NL1 and NL2 are reduced, and the FFT profile of the harmonic orders obtained after simulation is comparatively low, within the limits described in the IEEE standards. The reduced source current THDs achieved by the BPANN based active regenerative filtering are depicted in Figure 6.14 for NL1 and in Figure 6.15 for NL2:

THD_i (NL1) = 1.53%
THD_i (NL2) = 3.84%

6.6 VALIDATION AND DISCUSSION OF SIMULATED RESULTS

The MATLAB simulation of the nonlinear loads is carried out with FFT analysis of the source current. Two cycles of the utility current are observed and the harmonic profiles of the nonlinear loads (NL1 and NL2) are tabulated. From the simulation, the magnitude of the current THD for both types of nonlinear loads is recorded together with the even and odd harmonic orders. The harmonic orders obtained for the nonlinear loads before compensation are shown in Table 6.1. Figure 6.16 gives the bar chart of the harmonic orders plotted with their

magnitude in percentage (%). The harmonic orders up to the 19th order, including even and odd orders, are plotted in the chart, which gives a clear picture of the harmonic content in the utility.

The compensation of the source current starts with the instantaneous p-q theory based current controller. The additional use of a high pass filter with this controller mitigates the higher order harmonics. The graphs in Figures 6.17 and 6.18 depict the reduced magnitude of harmonics obtained after the use of the p-q theory based controller without and with the high pass filter, and the plotted charts also compare the performance of this controller with the uncompensated profiles of both loads. The harmonic orders H_3, H_5, H_7, etc., up to H_19 are reduced by this controller, and a further reduction is obtained by adding the HPF.

To improve the efficiency of the active regenerative filtering, artificial intelligence techniques are introduced. The fuzzy logic controller is first tested with the proposed filtering to reduce the harmonic pollution; the fuzzy tuned controller provides finer tuning for reducing the harmonics compared with the classical methods. The results obtained with the fuzzy tuned controller are shown in Figure 6.19, and the comparison chart for the p-q theory based controller and the fuzzy logic controller is shown in Figure 6.20; this validates the performance of the fuzzy logic controller in harmonic limitation.

To obtain the best power quality at the utility, the back propagated neural network is implemented and the results are taken. The comparison of the uncompensated source profile with the BPANN is given in Figure 6.21 for NL1 and NL2, and Figure 6.22 shows the results of the neural network based controller against the p-q theory based control. The two intelligent controllers (FLC and BPANN) are compared in Figure 6.23 for both types of nonlinear loads. The performances of all the controllers are compared in Figure 6.24, and the best performing controller for harmonic suppression in both loads is identified in the conclusion.

Figure 6.16 Uncompensated harmonic profiles for NL1 and NL2

Figure 6.17 Comparison chart for NL1 and NL2 with p-q theory based controller (without HPF)

Figure 6.18 Harmonic profiles for NL1 and NL2 with p-q theory based controller (with HPF)

Figure 6.19 FFT profiles for NL1 and NL2 with Fuzzy Logic Controller

Figure 6.20 Fuzzy Logic controller comparison with p-q theory based controller (NL1 and NL2)

Figure 6.21 FFT profiles for NL1 and NL2 with BPANN trained controller

Figure 6.22 BPANN trained controller compared with p-q theory based controller (NL1 and NL2)

Figure 6.23 Comparison of FLC and BPANN (NL1 and NL2)

Figure 6.24 Comparison of THD_i for NL1 and NL2: (a) all controllers compared for NL1, (b) all controllers compared for NL2

Table 6.1 gives the complete FFT profiles of both loads without compensation and with the compensation provided by the p-q theory based current controller, the fuzzy tuned controller and the back propagated neural network based current controller. The effective reduction of the odd order harmonics is clearly seen for all types of controllers, and the reduced harmonic orders show that the best harmonic limitation is provided by the artificial neural network

rather than by the other types of controllers. The harmonic orders in this table also validate the results provided by the active regenerative harmonic filter during the mitigation of the source current harmonics for both types of nonlinear loads.

Table 6.1 Harmonic values obtained from the simulation (magnitudes of the individual harmonic orders up to H_19, in %, for Nonlinear Loads 1 and 2: without controller, with the p-q controller without HPF, with the p-q controller with HPF, with the FLC and with the ANN controller)

Table 6.2 gives the total current harmonic distortion obtained by the use of the ARHF with all the controllers. The lowest THD_i is obtained with the BPANN applied to the regenerative converter, compared with the other conventional controllers, and the values of the source current THDs are also within the stringent limits of the standards.

Table 6.2 THD_i obtained from the simulation

Controller                                          Nonlinear Load 1    Nonlinear Load 2
Uncompensated                                       THD_i = 89.54%      THD_i = 190.55%
p-q theory based controller                         THD_i = 6.17%       THD_i = 20.98%
p-q theory based controller with HPF                THD_i = 5.81%       THD_i = 19.84%
Fuzzy Logic Controller                              THD_i = 6.00%       THD_i = 4.47%
Back Propagation Neural Network trained controller  THD_i = 1.53%       THD_i = 3.84%


More information

Analysis of Learning Paradigms and Prediction Accuracy using Artificial Neural Network Models

Analysis of Learning Paradigms and Prediction Accuracy using Artificial Neural Network Models Analysis of Learning Paradigms and Prediction Accuracy using Artificial Neural Network Models Poornashankar 1 and V.P. Pawar 2 Abstract: The proposed work is related to prediction of tumor growth through

More information

FOUR TOTAL TRANSFER CAPABILITY. 4.1 Total transfer capability CHAPTER

FOUR TOTAL TRANSFER CAPABILITY. 4.1 Total transfer capability CHAPTER CHAPTER FOUR TOTAL TRANSFER CAPABILITY R structuring of power system aims at involving the private power producers in the system to supply power. The restructured electric power industry is characterized

More information

CHAPTER 1 INTRODUCTION

CHAPTER 1 INTRODUCTION 1 CHAPTER 1 INTRODUCTION 1.1 BACKGROUND The increased use of non-linear loads and the occurrence of fault on the power system have resulted in deterioration in the quality of power supplied to the customers.

More information

Available online at ScienceDirect. Procedia Computer Science 85 (2016 )

Available online at   ScienceDirect. Procedia Computer Science 85 (2016 ) Available online at www.sciencedirect.com ScienceDirect Procedia Computer Science 85 (2016 ) 263 270 International Conference on Computational Modeling and Security (CMS 2016) Proposing Solution to XOR

More information

The Hamming Code Performance Analysis using RBF Neural Network

The Hamming Code Performance Analysis using RBF Neural Network , 22-24 October, 2014, San Francisco, USA The Hamming Code Performance Analysis using RBF Neural Network Omid Haddadi, Zahra Abbasi, and Hossein TooToonchy, Member, IAENG Abstract In this paper the Hamming

More information

Initialisation improvement in engineering feedforward ANN models.

Initialisation improvement in engineering feedforward ANN models. Initialisation improvement in engineering feedforward ANN models. A. Krimpenis and G.-C. Vosniakos National Technical University of Athens, School of Mechanical Engineering, Manufacturing Technology Division,

More information

Detection and classification of faults on 220 KV transmission line using wavelet transform and neural network

Detection and classification of faults on 220 KV transmission line using wavelet transform and neural network International Journal of Smart Grid and Clean Energy Detection and classification of faults on 220 KV transmission line using wavelet transform and neural network R P Hasabe *, A P Vaidya Electrical Engineering

More information

MATLAB/GUI Simulation Tool for Power System Fault Analysis with Neural Network Fault Classifier

MATLAB/GUI Simulation Tool for Power System Fault Analysis with Neural Network Fault Classifier MATLAB/GUI Simulation Tool for Power System Fault Analysis with Neural Network Fault Classifier Ph Chitaranjan Sharma, Ishaan Pandiya, Dipak Swargari, Kusum Dangi * Department of Electrical Engineering,

More information

Deep Neural Networks (2) Tanh & ReLU layers; Generalisation and Regularisation

Deep Neural Networks (2) Tanh & ReLU layers; Generalisation and Regularisation Deep Neural Networks (2) Tanh & ReLU layers; Generalisation and Regularisation Steve Renals Machine Learning Practical MLP Lecture 4 9 October 2018 MLP Lecture 4 / 9 October 2018 Deep Neural Networks (2)

More information

Chapter 2 Channel Equalization

Chapter 2 Channel Equalization Chapter 2 Channel Equalization 2.1 Introduction In wireless communication systems signal experiences distortion due to fading [17]. As signal propagates, it follows multiple paths between transmitter and

More information

Artificial Neural Network Approach to Mobile Location Estimation in GSM Network

Artificial Neural Network Approach to Mobile Location Estimation in GSM Network INTL JOURNAL OF ELECTRONICS AND TELECOMMUNICATIONS, 2017, VOL. 63, NO. 1,. 39-44 Manuscript received March 31, 2016; revised December, 2016. DOI: 10.1515/eletel-2017-0006 Artificial Neural Network Approach

More information

FACE RECOGNITION USING NEURAL NETWORKS

FACE RECOGNITION USING NEURAL NETWORKS Int. J. Elec&Electr.Eng&Telecoms. 2014 Vinoda Yaragatti and Bhaskar B, 2014 Research Paper ISSN 2319 2518 www.ijeetc.com Vol. 3, No. 3, July 2014 2014 IJEETC. All Rights Reserved FACE RECOGNITION USING

More information

TEMPORAL DIFFERENCE LEARNING IN CHINESE CHESS

TEMPORAL DIFFERENCE LEARNING IN CHINESE CHESS TEMPORAL DIFFERENCE LEARNING IN CHINESE CHESS Thong B. Trinh, Anwer S. Bashi, Nikhil Deshpande Department of Electrical Engineering University of New Orleans New Orleans, LA 70148 Tel: (504) 280-7383 Fax:

More information

Artificial Neural Networks approach to the voltage sag classification

Artificial Neural Networks approach to the voltage sag classification Artificial Neural Networks approach to the voltage sag classification F. Ortiz, A. Ortiz, M. Mañana, C. J. Renedo, F. Delgado, L. I. Eguíluz Department of Electrical and Energy Engineering E.T.S.I.I.,

More information

Computational Intelligence Introduction

Computational Intelligence Introduction Computational Intelligence Introduction Farzaneh Abdollahi Department of Electrical Engineering Amirkabir University of Technology Fall 2011 Farzaneh Abdollahi Neural Networks 1/21 Fuzzy Systems What are

More information

Detection and Classification of One Conductor Open Faults in Parallel Transmission Line using Artificial Neural Network

Detection and Classification of One Conductor Open Faults in Parallel Transmission Line using Artificial Neural Network Detection and Classification of One Conductor Open Faults in Parallel Transmission Line using Artificial Neural Network A.M. Abdel-Aziz B. M. Hasaneen A. A. Dawood Electrical Power and Machines Eng. Dept.

More information

The Basic Kak Neural Network with Complex Inputs

The Basic Kak Neural Network with Complex Inputs The Basic Kak Neural Network with Complex Inputs Pritam Rajagopal The Kak family of neural networks [3-6,2] is able to learn patterns quickly, and this speed of learning can be a decisive advantage over

More information

Fault Detection in Double Circuit Transmission Lines Using ANN

Fault Detection in Double Circuit Transmission Lines Using ANN International Journal of Research in Advent Technology, Vol.3, No.8, August 25 E-ISSN: 232-9637 Fault Detection in Double Circuit Transmission Lines Using ANN Chhavi Gupta, Chetan Bhardwaj 2 U.T.U Dehradun,

More information

Generating an appropriate sound for a video using WaveNet.

Generating an appropriate sound for a video using WaveNet. Australian National University College of Engineering and Computer Science Master of Computing Generating an appropriate sound for a video using WaveNet. COMP 8715 Individual Computing Project Taku Ueki

More information

Radio Deep Learning Efforts Showcase Presentation

Radio Deep Learning Efforts Showcase Presentation Radio Deep Learning Efforts Showcase Presentation November 2016 hume@vt.edu www.hume.vt.edu Tim O Shea Senior Research Associate Program Overview Program Objective: Rethink fundamental approaches to how

More information

[ENE02] Artificial neural network based arcing fault detection algorithm for underground distribution cable

[ENE02] Artificial neural network based arcing fault detection algorithm for underground distribution cable [ENE02] Artificial neural network based arcing fault detection algorithm for underground distribution cable Chan Wei Kian 1, Abdullah Asuhaimi Mohd. Zin 1, Md. Shah Majid 1, Hussein Ahmad 1, Zaniah Muda

More information

Impulse Noise Removal Based on Artificial Neural Network Classification with Weighted Median Filter

Impulse Noise Removal Based on Artificial Neural Network Classification with Weighted Median Filter Impulse Noise Removal Based on Artificial Neural Network Classification with Weighted Median Filter Deepalakshmi R 1, Sindhuja A 2 PG Scholar, Department of Computer Science, Stella Maris College, Chennai,

More information

A Multilayer Artificial Neural Network for Target Identification Using Radar Information

A Multilayer Artificial Neural Network for Target Identification Using Radar Information Available online at www.ijiems.com A Multilayer Artificial Neural Network for Target Identification Using Radar Information James Rodrigeres 1, Joy Fundil 1, International Hellenic University, School of

More information

Prediction of Cluster System Load Using Artificial Neural Networks

Prediction of Cluster System Load Using Artificial Neural Networks Prediction of Cluster System Load Using Artificial Neural Networks Y.S. Artamonov 1 1 Samara National Research University, 34 Moskovskoe Shosse, 443086, Samara, Russia Abstract Currently, a wide range

More information

Image Processing and Artificial Neural Network techniques in Identifying Defects of Textile Products

Image Processing and Artificial Neural Network techniques in Identifying Defects of Textile Products Image Processing and Artificial Neural Network techniques in Identifying Defects of Textile Products Mrs.P.Banumathi 1, Ms.T.S.Ushanandhini 2 1 Associate Professor, Department of Computer Science and Engineering,

More information

Constant False Alarm Rate Detection of Radar Signals with Artificial Neural Networks

Constant False Alarm Rate Detection of Radar Signals with Artificial Neural Networks Högskolan i Skövde Department of Computer Science Constant False Alarm Rate Detection of Radar Signals with Artificial Neural Networks Mirko Kück mirko@ida.his.se Final 6 October, 1996 Submitted by Mirko

More information

Characterization of LF and LMA signal of Wire Rope Tester

Characterization of LF and LMA signal of Wire Rope Tester Volume 8, No. 5, May June 2017 International Journal of Advanced Research in Computer Science RESEARCH PAPER Available Online at www.ijarcs.info ISSN No. 0976-5697 Characterization of LF and LMA signal

More information

CS 229 Final Project: Using Reinforcement Learning to Play Othello

CS 229 Final Project: Using Reinforcement Learning to Play Othello CS 229 Final Project: Using Reinforcement Learning to Play Othello Kevin Fry Frank Zheng Xianming Li ID: kfry ID: fzheng ID: xmli 16 December 2016 Abstract We built an AI that learned to play Othello.

More information

Eur Ing Dr. Lei Zhang Faculty of Engineering and Applied Science University of Regina Canada

Eur Ing Dr. Lei Zhang Faculty of Engineering and Applied Science University of Regina Canada Eur Ing Dr. Lei Zhang Faculty of Engineering and Applied Science University of Regina Canada The Second International Conference on Neuroscience and Cognitive Brain Information BRAININFO 2017, July 22,

More information

Course Objectives. This course gives a basic neural network architectures and learning rules.

Course Objectives. This course gives a basic neural network architectures and learning rules. Introduction Course Objectives This course gives a basic neural network architectures and learning rules. Emphasis is placed on the mathematical analysis of these networks, on methods of training them

More information

Prediction of Compaction Parameters of Soils using Artificial Neural Network

Prediction of Compaction Parameters of Soils using Artificial Neural Network Prediction of Compaction Parameters of Soils using Artificial Neural Network Jeeja Jayan, Dr.N.Sankar Mtech Scholar Kannur,Kerala,India jeejajyn@gmail.com Professor,NIT Calicut Calicut,India sankar@notc.ac.in

More information

ARTIFICIAL INTELLIGENCE IN POWER SYSTEMS

ARTIFICIAL INTELLIGENCE IN POWER SYSTEMS ARTIFICIAL INTELLIGENCE IN POWER SYSTEMS Prof.Somashekara Reddy 1, Kusuma S 2 1 Department of MCA, NHCE Bangalore, India 2 Kusuma S, Department of MCA, NHCE Bangalore, India Abstract: Artificial Intelligence

More information

A Comparison of Particle Swarm Optimization and Gradient Descent in Training Wavelet Neural Network to Predict DGPS Corrections

A Comparison of Particle Swarm Optimization and Gradient Descent in Training Wavelet Neural Network to Predict DGPS Corrections Proceedings of the World Congress on Engineering and Computer Science 00 Vol I WCECS 00, October 0-, 00, San Francisco, USA A Comparison of Particle Swarm Optimization and Gradient Descent in Training

More information

Statistical Tests: More Complicated Discriminants

Statistical Tests: More Complicated Discriminants 03/07/07 PHY310: Statistical Data Analysis 1 PHY310: Lecture 14 Statistical Tests: More Complicated Discriminants Road Map When the likelihood discriminant will fail The Multi Layer Perceptron discriminant

More information

Application Research on BP Neural Network PID Control of the Belt Conveyor

Application Research on BP Neural Network PID Control of the Belt Conveyor Application Research on BP Neural Network PID Control of the Belt Conveyor Pingyuan Xi 1, Yandong Song 2 1 School of Mechanical Engineering Huaihai Institute of Technology Lianyungang 222005, China 2 School

More information