Florida State University Libraries

Electronic Theses, Treatises and Dissertations
The Graduate School

2010

Predictive Harmonic Cancellation Using Neural Networks

Brian Malinconico

Follow this and additional works at the FSU Digital Library. For more information, please contact

THE FLORIDA STATE UNIVERSITY
COLLEGE OF ENGINEERING

PREDICTIVE HARMONIC CANCELLATION USING NEURAL NETWORKS

By
BRIAN MALINCONICO

A Thesis submitted to the Department of Electrical & Computer Engineering in partial fulfillment of the requirements for the degree of Master of Electrical Engineering

Degree Awarded: Fall Semester, 2010

The members of the committee approve the thesis of Brian Malinconico defended on Oct. 15, 2010.

Dr. Simon Foo, Professor Directing Thesis
Dr. Rodney Roberts, Committee Member
Dr. Anke Meyer-Baese, Committee Member

Approved: Dr. Simon Foo, Chair, Department of Electrical & Computer Engineering; Dr. Ching-Jen Chen, Dean, College of Engineering

The Graduate School has verified and approved the above-named committee members.

To my family and dedicated girlfriend, for their loving support.

ACKNOWLEDGMENTS

I would like to thank my advising professor and my thesis committee for their patient help. I would also like to thank the following people for the contribution of various libraries and content used in this paper:

Brandon Kuczenski, for the vline and hline libraries
Quasar Jarosz, for the original image of a neuron used on page 8, licensed under Creative Commons Attribution-Share Alike 3.0 Unported

TABLE OF CONTENTS

List of Tables
List of Figures
List of Symbols
Abstract

1 Background
   Fourier Series
   Harmonics
      Effects of Harmonics in Power Systems
      Analysis of Harmonics
   Filters
      Passive Filters
      Active Filters
   Artificial Neural Networks
      The Biological Neuron
      Circuit Representation
      Single-Layer Feedforward Networks
      Multi-Layer Feedforward Networks
      Training Algorithms
      Training Factors
   Matlab Neural Network Toolbox
      Training Algorithms
      Training Sets
      Data Division
      Error Analysis
      Training Goals

2 System Overview
   Common Systems
      Unit Delays
      Change Detector
      Memory Buffers
   Harmonic Detection and Estimation
      Memory Buffer
      Harmonic Amplitude Estimation
   2.3 Harmonic Prediction
      Network Configuration
      Harmonic Predictor Training
      Training Decision System

3 Results
   Simulations
      Simulation Limitations
      Basic Idealized
      Basic Idealized with Noise
      Basic Idealized with Noise and Out of Range Values
      Constantly Changing Input Values
   Harmonic Prediction
      Cancellation Delay
   Harmonic Estimation
      Training
      Necessary Number of Neurons
      Alternate Network Architectures
      Performance
      Generalization
      Network Instability

4 Concluding Remarks
   Future Research
      Phase Shift Compensation
      Sine Components
      Constantly Changing Systems
   Feasibility Concerns
      Required Sampling Speed
      Sampling Accuracy

A Artificial Neuron Circuitry
   A.1 Summing Amplifier
   A.2 Inverting Amplifier

Bibliography
Biographical Sketch

LIST OF TABLES

3.1 Input state changes for Basic Idealized Simulation
3.2 Input state and harmonic generation relationships for Basic Idealized Simulation
3.3 Input state and harmonic generation relationships for Basic Idealized Simulation
3.4 Input state changes
3.5 Input state changes
3.6 Input state and harmonic generation relationships
3.7 Harmonic estimation MATLAB training parameters

LIST OF FIGURES

1.1 Example passive filter circuits [8]
1.2 Example second order active filters [6]
1.3 The biological neuron [13]
1.4 Logical configuration of neuron structure
1.5 General circuit representation for artificial neuron
1.6 Single layer artificial neural network
1.7 Example input sample spaces
1.8 General feed forward neural network
Example training error using MATLAB Neural Network Toolbox
Harmonic prediction training system
Training decision subsystem
Prediction network output
Example training set with time shifts
Example training set with time shifts and noise
Network performance vs. number of hidden neurons
Harmonic estimation performance with RBF network
Estimation network output
Remaining harmonics (IFFT)
Performance of estimation network
Estimation variation over a single period
Estimation variation of 5th harmonic over two periods
3.8 Estimation network output with values out of training range
3.9 Prediction network output with values out of training range
3.10 Remaining harmonics (IFFT) with values out of training range
3.11 THD of signal with harmonic values out of training range
3.12 Prediction output in training range
3.13 THD of signal with harmonic values in training range
3.14 Harmonic values in a constantly changing system IFFT
3.15 Neural network harmonic estimation when trained without noise
3.16 Neural network harmonic estimation when trained with noise

LIST OF SYMBOLS

n ... Number of sample points
f_s ... Fundamental Frequency
a_1 ... Fundamental Amplitude
f_samp ... Sampling Frequency
d_window ... Ratio of window length to period length
N_T ... Number of training sets
V_F ... Fundamental Voltage
V_H ... Total Harmonic Voltage
v_n ... Amplitude of the n-th voltage harmonic
I_F ... Fundamental Current
I_H ... Total Harmonic Current
i_n ... Amplitude of the n-th harmonic current
R_n ... System resistance at the n-th harmonic frequency
P_L ... Transmission Loss
w ... Weight matrix
E_max ... Maximum acceptable error
τ ... Prediction Delay
J ... Jacobian

ABSTRACT

Filtering is an important aspect of the modern power system. By reducing the effects of harmonics, power transmission and utilization become more efficient. This research examines the use of neural networks for the estimation and prediction of harmonics. The use of neural networks for adaptive harmonic prediction allows harmonics to be cancelled before they are created. A large part of this research focuses on the estimation of Fourier coefficients. By identifying the strengths and weaknesses of neural networks for Fourier coefficient estimation, future directions for research were determined. The deficiencies of the developed networks prevent the application of this system in real-life situations. Despite the need for future research, the performance of the neural networks shows significant promise.

CHAPTER 1

BACKGROUND

1.1 Fourier Series

In 1822 Jean-Baptiste Joseph Fourier provided the scientific community with a tool capable of decomposing any periodic signal into a sum of sinusoids [9]. Ideally, an infinite sum of sines and cosines with appropriately selected coefficients can be used to reconstruct any arbitrary periodic signal. An infinite sum cannot be realized, so a smaller finite series must be used to recreate the signal with moderate accuracy. This signal decomposition allows an arbitrarily complex signal to be broken down into a sum of simple signals while still maintaining an accurate representation.

Equation 1.1 describes the standard Fourier series as a sum of sine and cosine terms. As can be seen by examination, the frequency of each signal component is an integer multiple of the frequency of the first term. The frequency of the first term, f_s, is known as the Fundamental Frequency, and its associated amplitude a_1 is referred to as the Fundamental Amplitude of the signal. The only other special term in Equation 1.1 is a_0, the DC offset, which is the average value of the function.

f(x) = a_0 + \sum_{n=1}^{\infty} \left( a_n \cos(n f_s x + \phi) + b_n \sin(n f_s x + \phi) \right)    (1.1)

For all values of a_n and b_n, direct calculation is possible using the formulas described in Equation 1.2, where P is equal to the period of the periodic function in question.

a_0 = \frac{1}{P} \int_{-P/2}^{P/2} f(x)\,dx

a_n = \frac{2}{P} \int_{-P/2}^{P/2} f(x)\cos(nx)\,dx    (1.2)

b_n = \frac{2}{P} \int_{-P/2}^{P/2} f(x)\sin(nx)\,dx

Equation 1.1 and the three equations presented in 1.2 are computationally complex and difficult to solve using standard digital tools. This high complexity leads to slow computation, making the Fourier series a poor choice for the solution of real-time problems.

1.2 Harmonics

Harmonics can be defined as signal components which are present beyond the fundamental frequency [10]. These components appear in Equation 1.1 as the Fourier terms where n > 1. For the analysis of harmonics in power systems, the fundamental frequency f_s is the power system's frequency (typically either 60 Hz or 50 Hz) and the fundamental amplitude a_1 is the amplitude of the supplied signal.

In power systems, harmonics are unwanted noise caused by non-linear end-user systems. Some common non-linear loads responsible for power system harmonics are:

Personal computers [1, 10]
Uninterruptible power supplies [1, 10]
Laser printers [10]
Microwave ovens [10]
Electronic lighting ballasts [1, 10]
Variable and adjustable speed drives [1]

Utility power supply grids provide a pure sinusoidal voltage [10], which is distorted by the connected non-linear loads. These loads typically create only odd-ordered harmonics under normal operating conditions [10, 16], with the 5th, 7th, 11th, and 13th being the most common and, in most cases, the 5th being the largest [11].
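The coefficient integrals of Equation 1.2 can be approximated numerically. The following is a minimal sketch (not from the thesis), assuming a 2π-periodic signal so that the cos(nx)/sin(nx) basis of Equation 1.2 applies directly; the square-wave test signal and sample count are illustrative choices.

```python
import math

def fourier_coefficients(f, n_max, samples=10000):
    """Midpoint-rule approximation of Equation 1.2 for a 2*pi-periodic f."""
    P = 2 * math.pi
    dx = P / samples
    xs = [-P / 2 + (k + 0.5) * dx for k in range(samples)]
    a0 = sum(f(x) for x in xs) * dx / P
    a = [2 / P * dx * sum(f(x) * math.cos(n * x) for x in xs) for n in range(1, n_max + 1)]
    b = [2 / P * dx * sum(f(x) * math.sin(n * x) for x in xs) for n in range(1, n_max + 1)]
    return a0, a, b

# Square wave: the odd sine coefficients should approach b_n = 4/(n*pi).
square = lambda x: 1.0 if math.sin(x) >= 0 else -1.0
a0, a, b = fourier_coefficients(square, 5)
```

Summing many samples like this illustrates why the text calls direct coefficient calculation slow for real-time use: every coefficient requires a pass over the full sample window.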

Harmonics have many negative effects on power systems (section 1.2.1 discusses the negative effects in more detail). In order to control the negative effects of harmonics on power systems, some countries and organizations have released harmonic emission standards. The European Union provides such standards in directives EN [10], EN , and EN to limit the harmonic current emissions of devices at various power levels. Similar harmonic specifications were recommended in IEEE-519, but are not mandatory limits in the United States.

1.2.1 Effects of Harmonics in Power Systems

The scope of the negative effects created by harmonics in power systems has led many countries to regulate the harmonic contribution of end-user devices (section 1.2). Harmonics affect both the end-user and utility-side environments in negative ways, decreasing efficiency and causing errors.

The most obvious potential problem for a system suffering from heavy harmonic distortion is the failure of connected electrical components [1, 10]. Failure can manifest over time in the form of degradation and eventual failure of capacitors, or the continual overheating of transformers [1, 10, 11]. The manifestation of error can also occur quickly, such as a failure due to the overheating of a neutral wire [1, 10], or the false tripping of circuit breakers and other protective devices [1, 10].

The presence of harmonics also negatively affects power generation and transmission [1, 10, 11]. Equation 1.3 is the formula for calculating the transmission loss due to harmonic currents. This increased loss is linearly related to the quantity and amplitude of the harmonics present [10].

P_L = \sum_{n=2,3,\ldots} i_n^2 R_n    (1.3)  [10]

Several authors [1, 10, 11] have delineated an exhaustive list of the effects of harmonics.
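Equation 1.3 can be evaluated directly. A small sketch with assumed (hypothetical) harmonic currents and line resistances, purely for illustration:

```python
# Hypothetical 5th and 7th harmonic currents (A) and line resistances (ohm);
# these values are assumptions, not measurements from the thesis.
harmonic_currents = {5: 12.0, 7: 8.0}   # i_n
resistance = {5: 0.20, 7: 0.22}         # R_n at each harmonic frequency

# Equation 1.3: P_L = sum over n >= 2 of i_n^2 * R_n
P_L = sum(i ** 2 * resistance[n] for n, i in harmonic_currents.items())
```

Because each term is quadratic in i_n, even modest harmonic currents contribute measurably to transmission loss.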

1.2.2 Analysis of Harmonics

IEEE Standard 519 was written to provide guidance on dealing with harmonic issues, such as the recommended limits on harmonic current and voltage injection discussed in section 1.2. In addition to recommended limits, the standard also defines some of the tools necessary to determine the harmonic contribution to a signal. The primary tool for harmonic analysis is Total Harmonic Distortion (THD). THD is provided in a form for voltage and for current, and is typically displayed as a percentage. The total harmonic distortion of the voltage waveform is calculated using Equation 1.4 and is the ratio of the root of the sum of the squared harmonic contributions (n > 1) to the RMS value of the fundamental voltage [5].

V_{THD}(\%) = \frac{\sqrt{\sum_{n=2,3,4,\ldots} V_n^2}}{V_1} \times 100\%    (1.4)

Similar analysis can be done on the current signal to determine the current THD, as described by Equation 1.5.

I_{THD}(\%) = \frac{\sqrt{\sum_{n=2,3,4,\ldots} I_n^2}}{I_1} \times 100\%    (1.5)

1.3 Filters

The most logical step toward the elimination of harmonic currents or harmonic voltages is to apply filters to attempt to eliminate the harmonics present in the system. Electrical filters can be broken down into two main categories: passive filters and active filters. Although each classification of filters is a broad, all-encompassing category, the attribute used to separate an active filter from a passive filter is that an active filter is able to add energy to a system, while a passive filter contains only passive elements and thus cannot add energy to a system [8].
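The THD calculation of Equations 1.4 and 1.5 reduces to one line of arithmetic. A minimal sketch with assumed harmonic amplitudes (the 120 V fundamental and the 6 V / 3 V harmonics below are hypothetical):

```python
import math

def thd_percent(fundamental, harmonics):
    """Equations 1.4/1.5: sqrt(sum of squared harmonic amplitudes) / fundamental * 100."""
    return math.sqrt(sum(v * v for v in harmonics)) / fundamental * 100.0

# Hypothetical 5th and 7th voltage harmonics on a 120 V fundamental
v_thd = thd_percent(120.0, [6.0, 3.0])
```

The same function serves for current THD by passing fundamental and harmonic current amplitudes instead.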

1.3.1 Passive Filters

Passive filter networks are typically designed to pass a target frequency (f_s) with little to no distortion while attenuating or rejecting frequencies above or below the target frequency. Low-pass filters are the most common type of filter [8] and are designed to pass lower frequencies while attenuating frequencies above their cutoff. High-pass filters work in the opposite way, rejecting low frequencies while passing high frequencies. A band-pass filter works by passing all frequencies within a specified range, attenuating all higher and lower frequencies. The final type of passive filter is the band-stop filter, which passes all frequency components other than the frequency the filter is tuned to block.

An ideal filter passes all signal components within the passband while blocking all components outside the passband. However, since passive circuit elements are not ideal, typical filter characteristics allow the target frequency to pass with minimal attenuation while heavily attenuating frequencies outside of the passband. This behavior is commonly referred to as the roll-off rate and is measured in dB/decade. The speed of the roll-off is directly related to the quality factor (Q) of a filter: a higher Q value indicates a higher roll-off rate and a filter with a sharper cut-off edge. Increasing a filter's order increases the rate at which frequencies outside the passband attenuate, thereby increasing the Q factor.

Figure 1.1 provides a configuration overview of some common passive filters, but is by no means a comprehensive list. Values of the elements in the various circuits are chosen based on the desired cut-off frequency and the quality of the filter. Many of these passive LC filter designs have been used effectively for over eighty years [2]. Despite the depth of research on the topic, passive filters have many deficiencies when used in practical systems.
One of the issues with the use of passive filters in systems with significant levels of harmonic distortion is heat generation. Since passive filters do not inject energy into a system, they remove harmonics by dissipating the harmonic energy, typically as heat. This heat generation stresses the filter components (see section 1.2.1) and is impractical for systems with a high THD. Heat generation can also negatively affect other components in the system, depending on the specific application.
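As a rough illustration of the passive low-pass behavior described above (not a circuit from the thesis), a first-order RC filter has cutoff f_c = 1/(2πRC) and magnitude response |H(f)| = 1/sqrt(1 + (f/f_c)^2); the component values below are assumptions chosen so the 60 Hz fundamental mostly passes while the 5th harmonic is attenuated:

```python
import math

R, C = 1_000.0, 1.33e-6          # assumed values giving f_c near 120 Hz
f_c = 1.0 / (2.0 * math.pi * R * C)

def gain(f):
    """Magnitude response of a first-order RC low-pass filter."""
    return 1.0 / math.sqrt(1.0 + (f / f_c) ** 2)

# The fundamental (60 Hz) passes with mild attenuation; the 5th harmonic
# (300 Hz) is cut much more strongly.
g_fund, g_5th = gain(60.0), gain(300.0)
```

A single first-order stage rolls off at only 20 dB/decade; the sharper cut-off edges discussed in the text require higher-order (higher-Q) designs.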

Figure 1.1: Example passive filter circuits [8]: (a) Low-Pass, (b) High-Pass, (c) Band-Pass, (d) Band-Stop

1.3.2 Active Filters

As can be seen from the example passive filters in Figure 1.1, all of the circuit elements are passive and therefore unable to add energy to the system. Since energy cannot be added, the gain of these filters is limited to less than unity. Another major concern is the procurement of inductive elements. High-quality inductors are difficult and expensive to produce, and due to the varied shapes of inductive elements they are not compatible with much of the current equipment of many printed circuit board manufacturers [7, 8]. These limitations can be overcome by the inclusion of active elements, creating active filters.

Figure 1.2 provides example second-order low-pass and high-pass active filters. These circuits use only resistive elements, capacitors, and the operational amplifier's internal configuration to obtain roll-off characteristics similar to their counterparts in Figure 1.1. Through the use of the operational amplifier, the active filter also has the ability to inject energy into the circuit up to an amplitude of ±V_s. This gives the active filter the ability to dampen harmonics without removing energy as heat.

Figure 1.2: Example second order active filters [6]: (a) Low-Pass, (b) High-Pass

Controlled Active Filters. The filters presented in Figures 1.1 and 1.2 work through the tuning of the resistive and capacitive values to filter desired frequencies. Another class of active filter is the controlled active filter. Controlled active filters receive some form of instruction from a controller indicating their desired output, allowing the filter to follow a behavior pattern provided by the controlling system.

1.4 Artificial Neural Networks

Artificial Neural Networks (ANNs) are the result of the blending of engineering, biology, and artificial intelligence. Drawing on biological inspiration, ANNs seek to reproduce the flexibility and power of the human brain. The resulting system is a highly parallel and fault-tolerant learning machine capable of external or self-directed learning.

Figure 1.3: The biological neuron [13]

1.4.1 The Biological Neuron

The neuron is the elementary computing element and the basic building block of neural networks in biological systems. These highly specialized cells are designed to react to physical and chemical stimuli [14]. Their behavior can generally be broken down into three stages: receiving stimuli, processing stimuli, and responding to stimuli. A general diagram of a single neuron is shown in Figure 1.3. The biological neuron consists of three major components: the axon, the cell body (also known as the soma), and the dendrites. Each component plays a role in controlling inter-neuron communication, creating complex activation patterns across the entire network. All of these components together provide the biological inspiration for the artificial neural network.

The human brain is a complex organ made up of approximately 10^11 neurons [17]. Each neuron has approximately 10^4 synapses [17] connecting it to neighboring neurons, creating a complex network of neuron interconnections. Due to the sheer volume of neurons and synapses, mapping the interconnections between neurons in the human brain presents a daunting task to researchers. It is unlikely a complete map of such a complex network will be developed in the near future, although there are complete maps of simpler organisms such as C. elegans.

Dendrites. Dendrites are neuron elements which receive signals [14] from neighboring neurons, forming what is known as a dendritic tree [15]. The dendrites of a neuron connect to neighboring axons through synapses.

Axon. The axon acts like an intra-neuron transmission line, transmitting impulse data from the cell body [14] of one neuron to the dendrites of a neighboring neuron. After transmitting an impulse from the cell body, the axon enters an absolute refractory period during which it will not transmit again regardless of excitation at the cell body [17]; this period lasts approximately 1/25000 of a second [14]. This behavior prevents closed loops in the neural network from creating recurrent firing patterns.

Synapse. The synapse is the small contact organ where the axon-dendrite connection is made for the transmission of impulse signals. Although a synapse creates a functional connection between two neighboring neurons, the synaptic cleft physically separates the cells with an empty gap.

Soma. The soma, or cell body, receives multiple input signals from the dendrites to form a complete excitation signal. The neuron collects inputs from all over the cell body during a short time interval referred to as the period of latent summation [17]. The received impulse signals can be either excitatory or inhibitory [17] and can be considered positive or negative numbers respectively. If threshold conditions are met during the period of latent summation, the neuron generates an impulse response and transmits it to neighboring neurons through its axon. The necessary conditions for transmission can be represented as a threshold, following the assumption that inputs are represented by positive and negative numbers. Figure 1.4 presents a logical diagram depicting the neuron; F(x) can be any arbitrary function representing the desired activation pattern.

Figure 1.4: Logical configuration of neuron structure

1.4.2 Circuit Representation

Hardware implementations of artificial neural networks vary widely and are heavily dependent on the network architecture. Figure 1.5 presents the basic circuitry for implementing the logical neuron presented in Figure 1.4 [17]. It consists of three major parts: a summing amplifier, an inverting amplifier, and the activation function. The summing and inverting amplifiers work in conjunction to provide the sign-correct weighted signal V_o to the activation function. The activation input voltage is defined by Equation 1.6; a detailed solution for the output V_o can be found in Appendix A.

V_o = \frac{R_{f2} R_f}{R} \sum_{n=1}^{N} \frac{V_n}{R_{input,n}}    (1.6)

The hardware implementation of complex activation functions can prove to be much more difficult than the construction of the rest of the neuron. Simple activation functions, such as a linear function, are more readily realizable than complex functions such as the sigmoid; more complex functions may require implementation in an integrated circuit.

1.4.3 Single-Layer Feedforward Networks

The single-layer, single-output feedforward artificial neural network described in Figure 1.6a provides the closest logical equivalent to the biological neuron configuration described in Figure 1.4. Dendrites are represented by the weighted connections from the inputs to the activation function. The desired activation function takes x, as described by

Figure 1.5: General circuit representation for artificial neuron

Equation 1.7, as input, and the return value is transmitted out of the network.

x = \begin{bmatrix} w_{1,1} & w_{1,2} & \cdots & w_{1,i-1} & w_{1,i} \end{bmatrix} \begin{bmatrix} I_1 \\ I_2 \\ \vdots \\ I_{i-1} \\ I_i \end{bmatrix}    (1.7)

The single-layer, single-output ANN can be further generalized to the multi-output system of Figure 1.6b. The described network can be considered as m independent single-output networks sharing common input data; a weight modification from any input to any neuron has no effect on the input to any other neuron. The mathematical description of the outputs, or the inputs to the activation functions, is slightly more complex in the multi-output case. Equation 1.8 defines an m-by-i matrix representing the weights of an i-input, m-output single-layer feedforward neural network.

w = \begin{bmatrix} w_{1,1} & w_{1,2} & \cdots & w_{1,i-1} & w_{1,i} \\ w_{2,1} & w_{2,2} & \cdots & w_{2,i-1} & w_{2,i} \\ \vdots & \vdots & & \vdots & \vdots \\ w_{m-1,1} & w_{m-1,2} & \cdots & w_{m-1,i-1} & w_{m-1,i} \\ w_{m,1} & w_{m,2} & \cdots & w_{m,i-1} & w_{m,i} \end{bmatrix}    (1.8)

The inputs to the activation functions can then be conveniently defined as x_m = \bar{w}_m \bar{I}, where \bar{w}_m is the m-th row of the w matrix. The output from the network is then the value of the activation function for each value of x.
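The recall described by Equations 1.7 and 1.8 is a matrix-vector product followed by an element-wise activation. A minimal sketch (the 3-input, 2-output weights and the tanh activation are illustrative assumptions):

```python
import math

def forward(weights, inputs, activation=math.tanh):
    """Single-layer feedforward recall: x_m = w_m . I, then o_m = F(x_m)."""
    return [activation(sum(w * i for w, i in zip(row, inputs))) for row in weights]

# Hypothetical 3-input, 2-output layer (each row is one neuron's weights)
W = [[0.5, -0.25, 0.1],
     [0.0, 1.0, -1.0]]
out = forward(W, [1.0, 2.0, 3.0])
```

Because each output row acts independently, the multi-output case really is m single-output networks sharing inputs, exactly as the text states.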

Figure 1.6: Single layer artificial neural network: (a) Single-Output Network, (b) Multi-Output Network

The single-layer network has the advantage of a simple connection structure, providing a small period of latent summation. When the complexity increases from the single-output configuration to the multi-output configuration, the entire network behavior is still defined through relatively simple mathematics.

During training, single-layer ANNs develop a linear separation between points in the sample space. Figure 1.7 provides examples of a linearly separable and a non-linearly separable problem. The logic functions AND, NAND, NOT, OR (Figure 1.7a), and NOR are all linearly separable problems, which a single-layer network can be trained to solve. XOR (Figure 1.7b), however, is a logic problem which is not linearly separable and therefore cannot be solved by a single-layer feedforward ANN.

1.4.4 Multi-Layer Feedforward Networks

XOR can be written in terms of the linearly separable functions NAND and OR, both logic problems that single-layer networks can solve. However, by utilizing the output of one

Figure 1.7: Example input sample spaces: (a) Linearly Separable (OR), (b) Non-Linearly Separable (XOR)

network as the input to a second network, the network topology now resembles a multi-layer artificial neural network. Multi-layer artificial neural networks are no longer constrained by the necessity for linear separation in the sample space. A multi-layer ANN with at least one hidden layer is able to form an arbitrarily complex decision boundary separating desired outputs from undesired ones [17].

Figure 1.8 describes a general, fully connected n-layer feedforward neural network. Each circle represents a signal processing unit commonly referred to as a neuron, while each square represents an input to the network. An individual ANN has many characteristics that make its function unique. These characteristics, such as network topology and activation functions, affect the way the network is able to generalize the input/output pairs.

1.4.5 Training Algorithms

Neural network training algorithms work to reduce the error during the recall of training sets. Although the ideal goal for any training algorithm is to reduce the overall error to zero, practical application requires training only to reach a maximum allowed error threshold (E_max). Many different algorithms have been proposed, and most are designed to work with

Figure 1.8: General feed forward neural network

specific network configurations, or are designed specifically to avoid the pitfalls of other algorithms. This section presents a basic overview of common algorithms as well as the algorithms which will be used in this simulation. A comprehensive list of training algorithms is beyond the scope of this section.

Error Back-propagation. Error back-propagation is a very popular supervised learning algorithm designed to reduce the overall error in the recall of the training set. The back-propagation training algorithm gets its name from the method by which the overall error is reduced: the error at the output layer is propagated back into the hidden layers, where it is used for the adjustment of network weights, reducing the overall system error.

Error back-propagation training begins with a feedforward recall of a training set. During the recall phase, the outputs from layers n = 1..N are stored in the matrix y_n according to Equation 1.9, where y_0 represents the input training set. As depicted in Figure 1.8, each layer can potentially have a different number of elements. Therefore the layer weight

matrix W_n is a g-by-h matrix, where g is the number of elements in the n-th layer and h is the number of elements in the previous layer. The layer input y_{n-1} changes dimensions based on the size of the previous layer, and can be shown to be an h-by-1 matrix.

y_n = \Gamma[W_n y_{n-1} + b_n]    (1.9)

The network-produced output y_N is then compared with the desired output (d) to compute the cycle error. The cumulative cycle error E (Equation 1.10) is calculated over all training sets and is used in the stopping conditions for comparison to E_max.

E = \frac{1}{2} \sum_{n=1,2,\ldots} \| d - y_N \|^2    (1.10)

The individual layer error is used to calculate the error signal for weight correction. The layer error is calculated according to Equation 1.11, where f'_n represents the derivative of the activation function for the layer whose error signal is being calculated. Note that the error signal is calculated for the output layer and then propagated backwards into the hidden layers.

\delta_N = d - y_N
\delta_n = W_{n+1}^T \delta_{n+1} f'_n    (1.11)

The layer error signal is then used to adjust the network weights according to Equation 1.12, with the goal of reducing the overall cycle error (Equation 1.10). η is a learning constant discussed in section 1.4.6.

\Delta W_n = \eta \delta_n y_{n-1}^T    (1.12)

After the weight changes are applied, the stopping conditions are evaluated and training ceases if the stopping conditions are met. If training is not complete, the process starts again with the feedforward recall phase.
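The recall and weight-update steps of Equations 1.9 through 1.12 can be sketched as a minimal pure-Python trainer. The 2-2-1 topology, sigmoid activation, learning constant η = 0.5, and the XOR training set below are illustrative assumptions, not the thesis's configuration:

```python
import math
import random

random.seed(1)
sig = lambda x: 1.0 / (1.0 + math.exp(-x))

# Hypothetical 2-2-1 network for the XOR problem; last weight in each row is a bias
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # hidden layer
W2 = [random.uniform(-1, 1) for _ in range(3)]                      # output layer
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
eta = 0.5

def recall(inp):
    """Feedforward recall (Equation 1.9)."""
    h = [sig(w[0] * inp[0] + w[1] * inp[1] + w[2]) for w in W1]
    o = sig(W2[0] * h[0] + W2[1] * h[1] + W2[2])
    return h, o

def cycle_error():
    """Cumulative cycle error over all training sets (Equation 1.10)."""
    return 0.5 * sum((d - recall(i)[1]) ** 2 for i, d in data)

E0 = cycle_error()
for _ in range(5000):
    for inp, d in data:
        h, o = recall(inp)
        delta_o = (d - o) * o * (1 - o)                      # output error signal
        delta_h = [delta_o * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):                                   # Equation 1.12 updates
            W2[j] += eta * delta_o * h[j]
            W1[j][0] += eta * delta_h[j] * inp[0]
            W1[j][1] += eta * delta_h[j] * inp[1]
            W1[j][2] += eta * delta_h[j]
        W2[2] += eta * delta_o
E1 = cycle_error()
```

Training drives the cycle error downward; in a real run the loop would instead stop once E falls below E_max, as described in the text.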

1.4.6 Training Factors

Various factors affect a neural network's ability to solve an arbitrary problem. Network topology limits the type of problems a neural network can solve (section 1.4.3), but the other factors discussed here affect the accuracy of the solutions provided.

Weight Initialization. The weight matrix of an artificial neural network is typically initialized to small random values. The initial values have a significant effect on the network's final solution and directly affect the generalization of that solution [17].

Momentum Term. The momentum term allows for more rapid convergence of back-propagation training by propagating weight modification vectors to later epochs. The inclusion of the momentum term supplements the weight adjustment vector (Equation 1.12) with a fraction of the weight adjustment from the previous epoch of training [17]. This produces an equation frequently of the form of Equation 1.13, where α is a positive value known as the momentum constant, typically less than one [17].

\Delta W_n(t) = \eta \delta_n y_{n-1}^T + \alpha \Delta W_n(t-1)    (1.13)

Learning Constant. The learning constant, typically represented as η, significantly affects the ability of a back-propagation algorithm to converge. Effective values of η have been reported in technical documents ranging from 10^-3 to 10 [17]. The best value is heavily dependent on the specific application of the back-propagation algorithm. A large learning constant provides significantly faster convergence on problems with a shallow gradient descent, but tends to cause overshooting and instability in problems with a steeper descent. A smaller learning constant helps avoid overshooting an absolute minimum and provides more stability at the cost of additional convergence time. The learning constant chosen for back-propagation training directly affects the algorithm's ability to converge below the desired total error within a length of time feasible for the application.
Since the learning constant's value is heavily dependent on the application [17], an appropriate learning constant should typically be determined experimentally.
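The effect of the momentum term of Equation 1.13 can be seen on a toy quadratic error surface; the function f(w) = (w - 3)^2 and the η and α values here are assumptions chosen purely for illustration:

```python
def descend(eta, alpha, steps=100):
    """Minimize f(w) = (w - 3)^2 by gradient descent with a momentum term (Eq. 1.13)."""
    w, dw_prev = 0.0, 0.0
    for _ in range(steps):
        grad = 2.0 * (w - 3.0)
        dw = -eta * grad + alpha * dw_prev   # momentum adds a fraction of the last update
        w, dw_prev = w + dw, dw
    return w

plain = descend(eta=0.05, alpha=0.0)      # no momentum
momentum = descend(eta=0.05, alpha=0.5)   # with momentum
```

With the same learning constant, the momentum run closes the remaining gap to the minimum faster, which is the accelerated-convergence behavior the text describes.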

1.5 Matlab Neural Network Toolbox

The MATLAB Neural Network Toolbox provides a comprehensive neural network development environment that allows for the creation and simulation of many common networks and many common network configurations.

1.5.1 Training Algorithms

MATLAB provides many training algorithms for use in training error back-propagation networks. Each algorithm uses different techniques to calculate the necessary weight adjustment vectors, and uses varying amounts of memory and CPU cycles. In order to decrease training time, the algorithm which best fits the training set size and network configuration must be chosen. Algorithms that approximate second-order training speeds, such as Levenberg-Marquardt, tend to find error-minimizing solutions in fewer epochs than other algorithms, but the calculation time required to approach second-order training can cause significant delays. Through experimentation, the most effective training algorithm for the presented problem can be determined based on training set size and available computational resources.

Levenberg-Marquardt back-propagation. Levenberg-Marquardt back-propagation is a popular training algorithm, in part because it approximates second-order training speeds without the need to calculate the computationally expensive Hessian matrix. Instead, the Hessian is approximated according to Equation 1.14, where J is the Jacobian composed of the first derivatives of the network errors.

H = J^T J    (1.14)

This Hessian approximation is then used for the computation of the weight modification vector according to Equation 1.15 [3, 4], where e is the vector of network errors.

\Delta w = (J^T J + \gamma I)^{-1} J^T e    (1.15)

γ represents a scalar training parameter used to control the speed and method of convergence. When γ is zero, Equation 1.15 approximates Newton's method by using the

approximate Hessian [4] (equation 1.14). When γ is large, the algorithm instead approximates the gradient descent method with a small step size [4]. The value of γ can be modified with each training iteration as the network approaches a solution. Since Newton's method is more accurate than the gradient descent method around the final error minimum [4], γ is typically decreased as training progresses.

The Levenberg-Marquardt algorithm requires the computation and storage of large matrices, particularly the Jacobian. Although the MATLAB implementation provides many performance increases and tweaks, the use of the Levenberg-Marquardt algorithm on problems with large training sets, or a large number of hidden nodes, may be infeasible for many systems.

BFGS quasi-Newton back-propagation. Quasi-Newton error back-propagation training is based on the Newton iterative method and provides a lower-memory alternative to the faster Levenberg-Marquardt training algorithm. Like Levenberg-Marquardt, the quasi-Newton family of algorithms was designed to approach second-order training speed while avoiding Newton's method's computationally intensive Hessian matrix; instead, an approximate Hessian is updated as a function of the gradient. BFGS quasi-Newton back-propagation is most useful in MATLAB when the training set size, or hidden node count, makes the computation of the Levenberg-Marquardt algorithm too taxing. BFGS quasi-Newton training is implemented in the MATLAB function trainbfg.

Resilient back-propagation. Resilient back-propagation provides a training algorithm that helps to negate the effects of activation functions with extremely low slopes. The sign of the derivative is the only factor used to determine the direction of the weight modification.
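This sign-only rule can be sketched as follows. This is an illustrative sketch of the Rprop idea, not MATLAB's trainrp implementation; the eta and step-limit constants are commonly published defaults, not values taken from this thesis.

```python
def rprop_step(weights, grads, steps, prev_grads,
               eta_plus=1.2, eta_minus=0.5, step_max=50.0, step_min=1e-6):
    """One resilient back-propagation (Rprop) update: only the sign of
    each partial derivative chooses the direction, while a per-weight
    step size (grown while the sign repeats, shrunk when it flips)
    sets the magnitude."""
    new_weights, new_steps = [], []
    for w, g, s, pg in zip(weights, grads, steps, prev_grads):
        if g * pg > 0:
            s = min(s * eta_plus, step_max)    # same sign: accelerate
        elif g * pg < 0:
            s = max(s * eta_minus, step_min)   # sign flip: back off
        if g > 0:
            w -= s                             # move against the gradient
        elif g < 0:
            w += s
        new_weights.append(w)
        new_steps.append(s)
    return new_weights, new_steps

w, s = rprop_step([0.0], [1.0], [0.1], [1.0])
# the gradient sign repeated, so the step grows to 0.12
# and the weight moves to -0.12
```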
The magnitude has no effect. This prevents the weights from becoming stuck in areas where the derivative of the activation function approaches

zero. This becomes especially important when nodes use the sigmoid activation function, since the slope approaches zero in both the positive and negative extremes of the function. Resilient back-propagation provides a training option that requires less memory than the BFGS quasi-Newton training algorithm. Resilient back-propagation training is provided in MATLAB through the trainrp function.

Figure 1.9: Example training error using MATLAB Neural Network Toolbox

1.5.2 Training Sets

When training a neural network using the MATLAB Neural Network Toolbox, MATLAB divides the provided training data into three separate subsets: training, validation, and test.

Training Set. The training set is used for the computation of the gradient and to update the weights and biases. This is the only set the network is actually trained to recognize.

Validation Set. The validation set is used to evaluate the network's ability to generalize. Since the network is not trained to specifically recognize these patterns, the error calculated on the validation set represents the error in the network's ability to generalize to new or unknown patterns. The validation set is used to trigger early stopping of training. As can be seen in Figure 1.9, the error on the validation set typically decreases with increased training. However, the validation error can also increase, which is typically a symptom of overfitting. This represents a decrease in generalization and is not usually desired. MATLAB provides the network.trainParam.max_fail training parameter, which defines the maximum number of epochs the validation error can fail to decrease below the previous best epoch before training is stopped. Should early stopping occur due to a failure to generalize, the network is restored to its best epoch. This ensures MATLAB creates a neural network that generalizes well rather than one that merely recognizes the provided training sets.

Test Set. The test set is not used during training. It is instead used to compare different network models and as a tool for the network designer. Should the test set reach a minimum significantly faster than the validation set, it could indicate poor data division [4].

1.5.3 Data Division

The training sets discussed in section 1.5.2 are created through one of four built-in data division algorithms. Each algorithm is appropriate to the method in which the training set is constructed. The data division function is set by changing network.divideFcn to the function name. In-depth usage and configuration of the data division functions can be found in the Neural Network Toolbox manual [4].

Indexed Data Division. Indexed data division provides a method for manually dividing the training sets according to their array index.
This allows for the assignment of arbitrarily sized training, validation, and test sets. Indexed data division can provide consistent and repeatable data division to the training function.

Interleaved Data Division. Interleaved data division assigns the provided data to the three training sets in a round-robin fashion. By default, 60% of the targets are assigned to the training set, and 20% are assigned to each of the validation and test sets. This data division function, like the indexed data division function, provides consistent and repeatable data division to the training function.

Random Data Division. Random data division is the default data division function used by the Neural Network Toolbox. The provided training set is randomly divided, placing 60% of the inputs in the training set and 20% of the inputs in each of the validation and test sets. Unlike the indexed and interleaved data division functions, this function will not provide consistent training sets between training runs.

Block Data Division. The block data division algorithm provides the simplest method of customizing the proportion of samples assigned to the training, validation, and test sets. The default behavior of the block data division function is to assign the first 60% of the provided training sets to the training set, and the following 20% each to the validation and then the test set.

1.5.4 Error Analysis

MATLAB provides seven functions for error calculation. The default performance function, and the one used in this research, is the mean squared error (mse). The mean squared error of a data set is defined by equation 1.16, where N is the total number of data points, d is the desired output, and y is the actual output. The use of mean squared error provides the benefit of weighting large errors more heavily and small errors less heavily.

E = (1/N) Σ_{n=1}^{N} (y_n − d_n)^2 (1.16)
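As a minimal sketch of equation 1.16 (illustrative Python, not the toolbox's mse function itself):

```python
def mse(outputs, targets):
    """Mean squared error, equation 1.16: E = (1/N) * sum_n (y_n - d_n)^2.
    Squaring weights large errors more heavily than small ones."""
    n = len(outputs)
    return sum((y - d) ** 2 for y, d in zip(outputs, targets)) / n

error = mse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])  # (0 + 0 + 4) / 3
```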

1.5.5 Training Goals

MATLAB provides multiple methods to trigger the end of network training. The early stopping method discussed in section 1.5.2 stops network training only when the network fails to generalize. The other stopping methods discussed here provide the functionality to stop training based on time and performance constraints.

Maximum Epochs. Setting the maximum epoch for training provides a time constraint for network training. A reasonable maximum epoch will depend heavily on the training algorithm used, as convergence can be accomplished in as few as 10 epochs or require more than 10,000. Selecting a reasonable epoch goal helps prevent a training session from continuing indefinitely when the error goal is never reached. Selecting a lower number of epochs is not, by itself, sufficient to guarantee a faster training time between training algorithms: as was discussed in section 1.5.1, the difference in memory usage between algorithms can cause more variation in total training time than the choice of maximum epoch, particularly as an algorithm nears the computer's memory limits.

Error Threshold (E_max). The error threshold defines the maximum acceptable error for the training algorithm. This error is calculated based on the selected error function. Training is subsequently stopped when the calculated error is less than E_max. MATLAB provides a default E_max of 0, indicating training will only stop when there is no detectable error.
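The two criteria above combine into a single stopping test. The following is a behavioral sketch only; the function name and the <=/< convention at the threshold are assumptions, not MATLAB's internal logic.

```python
def should_stop(epoch, error, max_epochs, e_max=0.0):
    """Stop training when the epoch budget is exhausted, or when the
    error computed by the selected performance function reaches the
    E_max threshold. The default e_max of 0 mirrors MATLAB's default:
    training then ends only at zero detectable error (or max_epochs)."""
    return epoch >= max_epochs or error <= e_max

should_stop(10000, 0.5, max_epochs=10000)   # True: epoch limit reached
should_stop(5, 0.0009, 10000, e_max=0.001)  # True: error below threshold
```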

CHAPTER 2

SYSTEM OVERVIEW

2.1 Common Systems

2.1.1 Unit Delays

The unit delay system is provided by the memory block and is used in this simulation primarily to prevent race conditions. The unit delay outputs the input from the previous time step and is a built-in MATLAB Simulink element.

2.1.2 Change Detector

As the name implies, the change detector monitors an input signal or bus for changes in value. The output signal is binary regardless of the size of the input, where a value of one indicates that the current input differs from the previous input. This custom code differs from the built-in MATLAB not-equals function in that it always returns a binary 1 or 0, regardless of the input dimensions.

2.1.3 Memory Buffers

Memory buffers are required for the operation of many of the later subsystems. The memory buffer works like a fixed-length stack, adding each new input to the top of the stack and popping the oldest input off to maintain a fixed length. While a scalar is taken as input, a vector of the entire stack is provided as output.

Initialization. The memory buffer initializes its stack to a null value appropriate to the system. As a true null value is unrealizable in hardware, and in MATLAB Simulink, a constant that is outside the possible range of inputs should be used.
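A behavioral sketch of such a buffer follows. The class name, the sentinel value, and the reset/validity details (which anticipate the following paragraphs) are assumptions for illustration, not the Simulink implementation.

```python
from collections import deque

NULL = -9999.0  # out-of-range constant standing in for an unrealizable null

class MemoryBuffer:
    def __init__(self, length):
        self.length = length
        self.reset()

    def reset(self):
        # Asynchronous reset: return to the first-initialization state.
        self.stack = deque([NULL] * self.length, maxlen=self.length)

    def push(self, value):
        # Newest sample in, oldest sample out (fixed length).
        self.stack.append(value)

    def output(self):
        # Scalar in, whole stack out.
        return list(self.stack)

    def valid(self):
        # True if and only if no null values remain in the stack.
        return NULL not in self.stack

buf = MemoryBuffer(4)
buf.push(1.0)
buf.push(2.0)   # buf.valid() is still False: two nulls remain
```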

Sampling Frequency. The sampling frequency determines the rate at which inputs are sampled and added to the stack. This value is used to drive a square-wave generator, while an edge detector provides the sampling triggers.

Asynchronous Reset. In addition to the values to be buffered, the memory buffer takes an asynchronous reset as an input parameter. The asynchronous reset provides a method to return the buffer to its state at first initialization. This input is used on memory buffers in the filter system when a change is detected that could cause the previously stored values to be unrepresentative of the current system state.

Buffer Validity. The second output from the memory buffer is a validity boolean. This value is set to true if and only if the buffer's stack contains no null values.

2.2 Harmonic Detection and Estimation

The harmonic detection and estimation system is responsible for detecting the presence of a harmonic by providing an estimated amplitude. The estimated harmonic amplitude values provided by this subsystem are used to determine when additional training of the harmonic predictor is necessary. The harmonic amplitude estimation is conducted through the analysis of an incomplete signal period. This windowed view allows the harmonic amplitude to be determined from sampled values obtained before an entire period is available. For the purposes of this paper, the period training window is defined as d_window and is the ratio of the window length to the period length; therefore d_window ≤ 1.

2.2.1 Memory Buffer

The memory buffer for the harmonic detection and estimation system buffers voltage or current measurements to be supplied to the estimation network discussed in section 2.2.2. The sampling frequency for the memory buffer is defined by equation 2.1 as a function of the desired number of neural network inputs n, the training window length d_window, and the fundamental frequency f_s.

The asynchronous reset is used in this system to reset the buffer when the incoming inputs may be representative of harmonic values different from what is present in the system. This reset is triggered when the inputs to the harmonic prediction subsystem change, or when the predicted amplitudes change. This prevents the network instability discussed in the results section 3.2.

f_samp = n f_s / d_window (2.1)

2.2.2 Harmonic Amplitude Estimation

The core system of the harmonic detection and estimation subsystem is a multi-layer feedforward neural network similar to the networks described in chapter 1. This neural network is trained outside of the simulation due to the complexity of the training sets and the length of time training typically takes. These issues, as well as optimal configurations, will be discussed in section 3.3.

The inputs to the network are sampled and stored in the memory buffer discussed in section 2.2.1. The values stored in the memory buffer are provided to the neural network, which returns an estimated harmonic amplitude. The estimation delay between a harmonic occurring and the neural network's ability to provide an accurate estimate is a function of the sampling frequency (equation 2.1) and the number of inputs, and can be expressed in seconds by equation 2.2. The primary cause of this delay is the memory buffer being reset: as the network is unable to estimate harmonics with null values present, it must wait for all buffer elements to be filled. While the memory buffer is in an invalid state, the amplitude estimation is returned as 0 for all harmonics.

τ = n / f_samp = d_window / f_s (2.2)

The network configuration and training set construction provide the network with the ability to generalize and provide accurate harmonic estimation. Optimal configurations for the network to provide accurate estimations will be investigated in section 3.3.
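Equations 2.1 and 2.2 can be checked numerically. In the sketch below, the example values (n = 20 inputs, a 60 Hz fundamental, d_window = 0.5) are illustrative assumptions, not values taken from the simulations.

```python
def sampling_frequency(n, f_s, d_window):
    # Equation 2.1: f_samp = n * f_s / d_window
    return n * f_s / d_window

def estimation_delay(n, f_s, d_window):
    # Equation 2.2: tau = n / f_samp = d_window / f_s (seconds)
    return n / sampling_frequency(n, f_s, d_window)

f_samp = sampling_frequency(20, 60.0, 0.5)  # 2400.0 samples per second
tau = estimation_delay(20, 60.0, 0.5)       # 1/120 s: half a 60 Hz period
```

Note that τ depends only on d_window and f_s: adding inputs raises the sampling rate proportionally, so the time to refill the buffer is unchanged.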

2.3 Harmonic Prediction

The harmonic prediction subsystem uses a neural network to predict harmonic amplitudes based on the state of the monitored harmonic-producing loads. As in the harmonic amplitude estimator, a standard multi-layer feedforward network provides the generalization ability necessary to produce amplitude estimations. The optimal network configuration for harmonic prediction requires engineering forethought and experimental determination. Input/output combinations that are not linearly separable require the multi-layer network proposed; should the network designer know that linear separation is possible, a single-layer network can be used to decrease processing time.

2.3.1 Network Configuration

For this research, a standard feedforward neural network was used to provide harmonic predictions. The exact configuration of this network was determined by the dimensions of the input vectors, but it always contained five hidden nodes and a single output. One network was set up and trained for each harmonic being monitored. The number of hidden nodes was selected based on the expected complexity of the input vectors; a greater number of neurons would be necessary for more complex input/output combinations. As the network was trained in real time, limits on training time were placed in an attempt to limit the network's effectiveness per perceived set. The chief limit on this training was a maximum epoch value of 10. The key to optimal performance of this prediction network is the selection of inputs that accurately represent the state of the harmonic-producing loads.

2.4 Harmonic Predictor Training

The harmonic predictor described in Section 2.3 is retrained by the harmonic predictor training system. When the estimated harmonic provided by the system described in Section 2.2

differs from the prediction by more than a desired error threshold, training is triggered. This allows the harmonic prediction system to adapt to the system it is integrated into, and to improve its prediction accuracy over time as the system is observed.

Figure 2.1: Harmonic prediction training system

As can be seen in Figure 2.1, the training subsystem contains two memory buffers (Section 2.1.3) responsible for tracking input/output pairs provided by the training decision subsystem. Additional post-processing of the acquired data sets is required to prevent duplicate entries with the same input vectors. This post-processing prevents a situation where an error minimum is not possible.

2.4.1 Training Decision System

The training decision subsystem is responsible for triggering the harmonic predictor training. The system determines training is necessary when the magnitude of the predicted harmonics is greater than a predefined constant value or when an input changes. The training decision subsystem provides the training signal and an input/output pair as output.
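Section 2.4's retraining condition (estimate and prediction differing by more than the error threshold) combined with the input-change trigger can be sketched as follows; the function name and signature are assumptions for illustration, not the Simulink subsystem.

```python
def needs_training(predicted, estimated, inputs, previous_inputs, threshold):
    """Retrain when the estimated amplitude differs from the prediction
    by more than the error threshold, or when a monitored input changes."""
    input_changed = inputs != previous_inputs
    error_too_large = abs(estimated - predicted) > threshold
    return input_changed or error_too_large
```

In the actual system this decision also gates the input/output pair passed to the training buffers, so only informative samples are recorded.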

Figure 2.2: Training decision subsystem

CHAPTER 3

RESULTS

3.1 Simulations

All simulations were completed in MATLAB Simulink with a time step of 10^-7 seconds. This high time resolution was used to ensure all memory buffers sampled at the correct time. When running simulations with a larger time step, the memory buffers would sample at a later time than was appropriate, and would contain samples representing a larger observation window than the neural network was trained to recognize. The observation window used in all simulations is d_window = 0.5 periods in length unless otherwise noted. This value was chosen because it provided fast and accurate results. For all simulations discussed here, a fixed null value was used, chosen because no combination of harmonics would result in experimental values near it.

3.1.1 Simulation Limitations

For reasons discussed in section 4.1, out-of-phase harmonics will not be used in simulations. This has the effect of removing the φ term from the Fourier series, reducing equation 1.1 to equation 3.1 and greatly simplifying the possible input space.

f(x) = Σ_{n=1}^{∞} a_n cos(n f_s x) (3.1)

By removing sine components and the phase offset, the possible input space is greatly simplified. This generalization allows for the training of the estimation network in a reasonable amount of time, an issue discussed in section 3.3.

3.1.2 Basic Idealized

This initial simulation provides a very simple foundation for the proof of concept of the predictive harmonic cancellation system. The binary input values represent the state of a switch which alternates between a short circuit and a voltage source. This simulation creates constant in-phase harmonics that are near the bounds of the harmonic estimation network's training set. Input values for this simulation are listed in table 3.1, while the corresponding harmonics generated are listed in table 3.2, where d represents a don't-care value.

Table 3.1: Input state changes for Basic Idealized Simulation

Time | Input Vector
0.00 | <0,0>
0.01 | <1,0>
0.06 | <1,1>
0.10 | <1,0>
0.15 | <0,1>
0.25 | <1,0>
0.30 | <1,1>
0.40 | <0,0>

Table 3.2: Input state and harmonic generation relationships for Basic Idealized Simulation

Inputs (a b) | Harmonics Present (3rd 5th)
d 20 0 d

3.1.3 Basic Idealized with Noise

The next step to test the ability of the artificial neural networks to accurately estimate the harmonic values was to introduce random noise into the simulation. Through the

inclusion of a voltage source controlled by a uniform random number generator, a random voltage in the range [-1, 1] is added to the signal at each Simulink time step. The input vectors and the corresponding harmonics for this simulation are the same as described in table 3.1.

3.1.4 Basic Idealized with Noise and Out-of-Range Values

As was discussed in chapter 1, harmonic amplitudes can, in some cases, exceed their fundamental amplitudes. These large values are sometimes unexpected, and could be created by any number of environmental circumstances. This simulation seeks to provide a harmonic with an amplitude larger than the network was trained to estimate, in addition to uniformly distributed random noise in the [-1, 1] range. This simulation utilizes the same input state changes as the previous simulations. The input values can be found in table 3.1. Table 3.3 provides the harmonic amplitude values associated with each input.

Table 3.3: Input state and harmonic generation relationships for Basic Idealized Simulation

Inputs (a b) | Harmonics Present (3rd 5th)
d 45 0 d

3.1.5 Constantly Changing Input Values

After the basic simulations were run, it became necessary to develop a simulation which produced harmonics that responded to signals other than a binary 1 or 0. This simulation set is actually two separate simulations, offering increased complexity and a different training opportunity to the prediction network. As can be seen in tables 3.4 and 3.5, for certain time ranges harmonic amplitudes change linearly and exponentially with respect to time. In addition to the added complexity of non-constant harmonic values, input values are scaled by a factor of 1.5, as is depicted in table 3.6.

Table 3.4: Input state changes

Time | Input Vector
<0>
<7.5>
<10>
[0.05,0.075) | <220t+21>
<7>

Table 3.5: Input state changes

Time | Input Vector
<0>
<3>
<5>
<7>
<10>
[0.1166, ) | <80t + 58/3>
<6>
[0.1916,0.2166) | <1.44e4t t+588>
<12>

Table 3.6: Input state and harmonic generation relationships

Inputs (a) | Harmonics Present (3rd)
α | 1.5α
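The in-phase signals used in these simulations follow equation 3.1 and can be synthesized directly. The sketch below treats f_s as an angular frequency, and the amplitude values are illustrative assumptions rather than simulation parameters.

```python
import math

def harmonic_signal(t, amplitudes, f_s):
    # Equation 3.1: f(t) = sum_{n>=1} a_n * cos(n * f_s * t),
    # with amplitudes[n-1] = a_n and no phase terms (in-phase only).
    return sum(a * math.cos(n * f_s * t)
               for n, a in enumerate(amplitudes, start=1))

# Fundamental of amplitude 120 plus a 3rd harmonic of amplitude 20.
amps = [120.0, 0.0, 20.0]
x0 = harmonic_signal(0.0, amps, f_s=2 * math.pi * 60)
# At t = 0 every cosine term equals 1, so x0 = 120 + 0 + 20 = 140.
```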

3.2 Harmonic Prediction

The harmonic prediction system is responsible for learning, in-system, how various input factors affect harmonic generation. In a system where the inputs to the neural network accurately represent the state of the harmonic-generating loads, the harmonic prediction network should be able to immediately cancel all generated harmonics. For the initial appearance of any harmonic due to the change in a monitored input, the output to the filter is delayed by the time defined by equation 2.2. Upon the second appearance of the harmonic value, due to a change in the same input, the harmonic is canceled at the next valid amplitude estimation.

Figure 3.1 provides a visual representation of the predicted harmonics in the simulation described in section 3.1.2. When compared to the change in input vectors shown in Table 3.1, the estimation delay is not visually noticeable except at the transition at t = 0.15, where the input vector changed from <1,0> to <0,1>. As this pattern had not been previously trained into the network, a best guess was provided. This predicted value is due to the generalization created from the past two training sets of <1,0> and <1,1>. As can be seen through inspection of Figure 3.1, the estimation provided was inaccurate. After a time delay of τ, the harmonic estimation network began providing valid estimations again, and the remaining harmonic was removed.

3.2.1 Cancellation Delay

Given the presentation of a new input vector, a trained harmonic prediction network can attempt to estimate the harmonics generated. Without sufficient prior training, this estimate is unlikely to have the correct amplitude. If the estimation is off by a difference large enough to trigger training, the correct amplitude will eventually be trained. This process takes τ seconds for the first presentation of the signal, and an additional τ seconds each time the estimation needs to be corrected (as will be discussed in Section 3.3.4).
Presentation of the same input vector a second time, however, should cancel the generated

Figure 3.1: Prediction network output

harmonics immediately, providing a filter control signal before the harmonic estimation circuitry could detect the harmonic. This is the strength of predictive harmonic cancellation.

3.3 Harmonic Estimation

Training Goals. Although the training goal of any artificial neural network should always be to reach zero error, a more reasonable error threshold should be set for achievement. The training goals for the harmonic estimation networks are defined in Table 3.7.

Table 3.7: Harmonic estimation MATLAB training parameters

Parameter | Value
epochs | 10000
goal | 0
max_fail | 100

Experimentally, most networks tended to reach their maximum epoch during training while reaching an acceptable error solution. The max_fail parameter was typically only reached when the network topology was insufficient for the amount of data being processed.

Training Set Construction. To aid in generalization, it is important to provide a representative sample of the possible inputs for training. This was accomplished by defining minimum and maximum harmonic amplitude values, and generating sets with randomly chosen values for each harmonic. Additional parameters, such as random noise, are implemented as the simulation requires.

N. Pecharanin et al. [12] provided the foundation necessary for the construction of a harmonic estimation neural network over a fixed window. Their network and training sets can be generalized by training over multiple observation windows. This time shifting can be seen in Figure 3.2, which contains six training sets based on two sets of harmonic amplitudes. The training set depicted in Figure 3.2 includes the fundamental frequency, in addition to the partial waveforms of a harmonic-distorted signal. The inclusion of the fundamental

Figure 3.2: Example training set with time shifts

frequency at the fundamental amplitude is an important training set in order to ensure network estimation accuracy in a harmonic-free environment. The network is then further generalized through the inclusion of random noise in the training set. Figure 3.3 provides an example data set where random noise has been added. This training set uses uniformly distributed noise in the range [-1, 1]. Through the inclusion of this noise, the network is able to generalize and accurately estimate harmonics on a noisy system. Other benefits related to the inclusion of noise are discussed in later sections.

Figure 3.3: Example training set with time shifts and noise

Speed Concerns. The generation of sample sets representative of a large range of harmonics, with appropriate time shifts, is a computationally intensive process. Listing 3.1 provides the pseudo code used for the generation of arbitrarily complex training sets. Through examination of the code, it can be shown that the complexity of execution is

Listing 3.1: Pseudo code for training set generation

for set_number = 1:number_of_coefficient_sets_to_create
    coefficients = GenerateCoefficients()
    starting_phase = GeneratePhases()

    for d_time = start_time:time_offset_step:end_time
        for d_phase = start_phase:phase_step:end_phase
            time_step = period*d_window/number_of_inputs;

            time_range = [0+d_time : time_step : period*d_window - time_step];

            phase = starting_phase + d_phase;

            FinalSet = EvaluateFourierSeries(time_range, coefficients, phase) + random_noise;
        end
    end
end

O(n) time, indicating that the time required to generate training sets increases linearly with the number of sets generated. Although the required time scales in a generally linear fashion, up to 180,000 data sets can be generated with relatively modest generation requirements. Equation 3.2 can be used to determine the number of input/output pairs that will be generated.

N_T = N_sets ((end_phase − start_phase)/phase_step) ((end_time − start_time)/time_offset_step) (3.2)

The size of the data sets provides a challenge to modern computers when tasked with training the neural network. Many of the networks presented in this paper took over eight hours to train with a maximum epoch of 10000, producing errors on the order of 10^-3 to 10^-5.

Necessary Number of Neurons

The number of neurons required for the adequate performance of a neural network for a given input/output set is the subject of much research. Figure 3.4 graphs the performance of the estimation network vs. the number of neurons in the hidden layer. A minimum performance goal of 10^-3 was used during training to initiate early stopping when an acceptable

Figure 3.4: Network performance vs. number of hidden neurons

error level was reached. The graphs clearly indicate a generally downward slope, indicating that the error of a network with more hidden nodes would be less than that of a smaller network. As is clearly indicated in Figure 3.4, networks trained to recognize lower-order harmonics require fewer neurons than higher-order harmonics. A 3rd-order harmonic required just over 10 hidden neurons to begin training to acceptable levels, while the 11th-order harmonic required over 35 hidden neurons.

Alternate Network Architectures

The MATLAB Neural Network Toolbox provides many network architectures for simulation. The use of a radial basis function (RBF) network was also investigated for the harmonic estimation network. Figure 3.4 depicts the training performance as the mean squared error vs. the number of neurons present in the hidden layer. Given that an acceptable error threshold for the standard feedforward back-propagation network was on the order of 10^-3 to 10^-5, the RBF network fails to reach a comparable training goal with a reasonable number of neurons. As can be seen in Figure 3.4, estimation networks for all four harmonics do not reach an acceptable error threshold with any practical number of hidden neurons. The experimental performance of this network was not evaluated, as the sheer number of necessary neurons would make any practical application slow, negating one of the primary benefits of using artificial neural networks for harmonic estimation.

Performance

When designing the system, the maximum estimation error produced by the network within the training region is a major factor in determining an appropriate retraining error threshold. A threshold selected below the maximum observed error can cause instability, as errors in estimation can trigger retraining. The harmonic estimator output for the in-range simulation is provided in Figure 3.5. Impulse values of the harmonic amplitude can clearly be seen τ seconds after the input changes.
These impulses are then followed by low harmonic estimations. These

(a) 3rd Harmonic (b) 5th Harmonic

(c) 7th Harmonic (d) 11th Harmonic

Figure 3.4: Harmonic estimation performance with RBF network

Figure 3.5: Estimation network output

non-zero average values indicate that the network recognizes the presence of a harmonic, but cannot accurately determine the harmonic amplitude.

Figure 3.6: Remaining harmonics (IFFT)

Figure 3.6 represents the magnitude of the IFFT, and clearly shows the harmonics remaining after application of the predicted harmonics. The values reported by the IFFT are slightly delayed to allow for the collection of enough data points to produce accurate results. These values can be compared to the values reported by the harmonic estimation network in Figure 3.5. A correlation can clearly be seen as the harmonic estimation network attempts to determine the value of the remaining harmonics.

Figure 3.5 provides a visual representation of the estimation network's performance with respect to harmonic amplitude. Additionally, the rate at which the estimation network provided an estimation of the correct sign is also included. Each figure plots a range above and below the training range equal in width to the width of the training range, and includes vertical red lines indicating the boundaries of the training range.

Figures 3.7a, 3.6c, and 3.5e clearly indicate that the lowest average MSE is centered within the training region, in this case around an amplitude of 0. Outside of the training region the MSE increases, and in some cases plateaus. The rate at which the network returned an estimation with the correct sign is displayed in figures 3.7b, 3.6d, and 3.5f, and indicates that the estimation network for the 3rd harmonic is less stable than the corresponding networks for the 5th and 7th harmonics. The necessity of producing an estimation with the correct sign but an erroneous amplitude will be shown in the Outside Training Range subsection.

Periodic Error. As would be expected, the harmonic estimation network does not provide exact amplitude estimations. As can be seen in the actual network output in figure 3.5, the network produces an estimate with an average value generally centered around the actual harmonic value. Figure 3.6 shows the estimation network's output for an entire period over the 3rd, 5th, and 7th harmonics. The range of the erroneous values depends greatly on the performance of the original training, and on where the actual values fall with respect to the original training range. Figure 3.7 also depicts the estimator's output variation over two periods. The periodic nature of the error can clearly be seen through observation.

Outside Training Range. As was discussed in section 1.2, the amplitude of generated harmonics can sometimes exceed the amplitude of the fundamental frequency. The simulation described in section 3.1.4 presents the harmonic estimation network with a harmonic of greater amplitude than the network was originally trained to handle. Figure 3.8 depicts the estimation network's output throughout the simulation. When the estimation network's output is compared to the actual harmonics present, an estimation error is clearly present.
The initial estimate of the 3rd harmonic by the harmonic estimation network was approximately -3 V, while, as can be seen from Table 3.3, the actual value is 45 V. This new value is then trained, and an approximately +3 V 3rd harmonic is injected by the filter, bringing the 3rd harmonic value to approximately 48 V.
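The correction sequence described here, in which a range-limited estimate is injected, the residual harmonic shrinks, and the predictor is retrained on the smaller residual, can be sketched as a simple loop. The 45 V starting amplitude matches the example above, but the estimator itself is an idealized stand-in (sign-correct and exact up to a clipping limit), not the actual network:

```python
def guess_and_check(actual, est_limit, rounds):
    """Residual harmonic amplitude after each estimate/inject/retrain cycle,
    assuming an idealized estimator that is exact up to a saturation limit."""
    residual = float(actual)
    history = []
    for _ in range(rounds):
        # sign-correct estimate, clipped to the range the network can report
        est = max(-est_limit, min(est_limit, residual))
        residual -= est  # the filter injects the negated estimate
        history.append(residual)
    return history
```

Because each estimate has the correct sign, every cycle moves the residual toward zero, which is why a sign-correct but amplitude-erroneous estimate is still enough for the system to converge.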

(a) MSE vs. Harmonic Value for 3rd Harmonic (b) Frequency of Sign-Correct Estimation for 3rd Harmonic

(c) MSE vs. Harmonic Value for 5th Harmonic (d) Frequency of Sign-Correct Estimation for 5th Harmonic

(e) MSE vs. Harmonic Value for 7th Harmonic (f) Frequency of Sign-Correct Estimation for 7th Harmonic

Figure 3.5: Performance of estimation network

(a) 3rd Harmonic (b) 5th Harmonic (c) 7th Harmonic

Figure 3.6: Estimation variation over a single period

Figure 3.7: Estimation variation of 5th harmonic over two periods

Once the memory buffers were refreshed and a new estimate could be provided, the estimation network corrected its mistake by reporting a 3rd harmonic value of approximately +25 V. This new value was again sent to the harmonic predictor for training, bringing the amplitude of the 3rd harmonic closer to the training range. After another round of training the network stabilized, and the prediction network began accurately reporting the 3rd harmonic voltage. This guess-and-check method of determining the harmonic amplitudes of signals with out-of-range components illustrates the importance of a sign-correct estimate: by incrementally providing new estimated amplitudes, the estimation network was able to instruct the prediction system and filter to gradually reduce the harmonic amplitude to an in-range value.

The IFFT output from the out-of-range simulation differs greatly from that of the in-range simulation because of the memory buffer reset discussed earlier. As the filter is frequently updated to provide increasingly accurate harmonic cancellation values, the memory buffers are reset. The memory buffer for the IFFT calculation requires significantly more data points for an accurate determination than its neural network counterpart. The IFFT therefore waits to provide output until a valid input is available, taking

63 Figure 3.8: Estimation network output with values out of training range 51

64 Figure 3.9: Prediction network output with values out of training range 52

Figure 3.10: Remaining harmonics (IFFT) with values out of training range

significantly more time to provide valid output. The THD vs. time for the out-of-range simulation is depicted in Figure 3.11. The spikes in the THD clearly occur at times when the inputs, and hence the harmonics, change. However, after the initial harmonics are created, the predictive cancellation effectively prevents them from producing meaningful distortion.

Inside Training Range. Figures 3.6 and 3.5 both provide example network outputs when the harmonic values are within the training range originally provided to the network. The resulting harmonic estimates are generally accurate to within 1 V of their actual value. Figure 3.12 depicts the outputs from the prediction network during the in-range simulation discussed earlier. The staircase effect visible in Figure 3.9 is not present, as the harmonic estimation provides accurate results immediately. The THD of out-of-training-range harmonics, depicted in Figure 3.11, takes longer to fall to 0 than the THD of in-range harmonic values, shown in Figure 3.13. This difference is

Figure 3.11: THD of signal with harmonic values out of training range

Figure 3.12: Prediction output in training range

Figure 3.13: THD of signal with harmonic values in training range
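The THD curves in Figures 3.11 and 3.13 measure the remaining distortion relative to the fundamental. A minimal sketch of that computation is given below, assuming a uniformly sampled signal whose fundamental falls exactly on an FFT bin; the sampling rate, 60 Hz fundamental, and harmonic count are illustrative, not taken from the thesis:

```python
import numpy as np

def thd(signal, fs, f0, max_order=13):
    """THD: RMS of harmonics 2..max_order divided by the fundamental amplitude."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal)) / n           # one-sided magnitude spectrum
    bin_of = lambda f: int(round(f * n / fs))        # frequency -> FFT bin index
    fund = spec[bin_of(f0)]
    harm = [spec[bin_of(k * f0)]
            for k in range(2, max_order + 1) if bin_of(k * f0) < len(spec)]
    return float(np.sqrt(sum(h * h for h in harm)) / fund)
```

For example, a 60 Hz fundamental carrying a 10% third harmonic yields a THD of 0.1, and the THD spikes seen at the input transitions correspond to sudden growth in the harmonic bins before the filter cancels them.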

due to the need to retrain the network multiple times to determine the correct harmonic amplitude.

3.3.5 Generalization

One of the greatest strengths of the artificial neural network is its ability to generalize and solve problems from an incomplete data set. This ability is crucial in the harmonic estimation network, as the training set is unlikely to include the exact combination of harmonics present in the system where the network operates. Additionally, the network's ability to generalize allows it to adapt to and estimate harmonic amplitudes outside the training range, as shown in Figure 3.8. As will be discussed in Section 3.3.6, varying the construction of the training set through the addition of a random noise component produced a more stable network; the added noise allowed the network to generalize better and produce a lower-error output.

Generalization also plays a role in the prediction network. The prediction network's generalization is well displayed in Figure 3.1 at the moment the network inputs changed from <1, 0> to <0, 1>, an input combination that had not previously been observed (see Table 3.1 for the previous input states). Because the network had previously seen the <1, 1> state, the prediction network produced an estimated harmonic amplitude of just under 2. Compared with Table 3.2, this value is clearly incorrect. As in the out-of-range simulation, this incorrect harmonic value was corrected once the harmonic estimation network could provide accurate estimates.

The ability to generalize can also be seen clearly in the predicted harmonic values of Figure 3.13, which shows the prediction network's performance in the simulation described earlier. This very basic simulation provides the neural network few training points before the harmonic values begin to change linearly and non-linearly.
The performance of this network could be improved further if the network were provided with more sample points before entering a state the system cannot train on.
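The noise-augmented training-set construction credited here with the improved generalization can be sketched as below. The sample count, the ±20 amplitude range, the 64-sample period, and the noise level are illustrative assumptions, not the thesis's actual training parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_training_set(n_examples, noise_std, orders=(3, 5, 7)):
    """Waveforms built from random 3rd/5th/7th harmonic mixes, with additive
    noise on the inputs so the estimator generalizes beyond exact training points."""
    amps = rng.uniform(-20.0, 20.0, size=(n_examples, len(orders)))  # targets
    t = np.arange(64) / 64.0                       # one fundamental period
    clean = np.array([
        sum(a * np.sin(2 * np.pi * k * t) for a, k in zip(row, orders))
        for row in amps
    ])
    noisy = clean + rng.normal(0.0, noise_std, clean.shape)  # noise injection
    return noisy, amps
```

Training on the noisy inputs while keeping the clean amplitudes as targets forces the network to learn the underlying harmonic structure rather than memorizing exact waveforms.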

(a) Predicted Harmonics (b) Actual Harmonic Values

(c) Predicted Harmonics (d) Actual Harmonic Values

Figure 3.13: Harmonic values in a constantly changing system

3.3.6 Network Instability

The asynchronous reset for the memory buffers is primarily used to ensure that all of a buffer's data is representative of a single input vector and harmonic set. This reset helps stabilize the network by allowing the system to ignore erroneous values. Figure 3.14 provides a baseline for stability comparison as a graph of the IFFT for the simulation described earlier. During the time range [0, 1/60), the buffer's data is largely a null value while initial values are collected, and the output spikes. Once the buffer contains valid data, the IFFT's output ramps up to the expected value of 20. The magnitude of the initialization error is due in part to the null value used during simulation. Slight instability occurs again when the 5th harmonic is activated: as the value of the 5th harmonic ramps up to its expected value of 7, the value of the 3rd harmonic oscillates. This behavior can be seen at all of the input state transitions listed in Table 3.1.

The output from the initial neural network design and training algorithm can be seen in Figure 3.15. Output spikes during initialization are on the order of ±1×10^4, and the network output swings in the range of ±200 during input vector transitions. The IFFT's performance is clearly superior to that of the artificial neural network trained without noise; the performance of this system is what indicated the need for the asynchronous reset and memory-state validation. The use of noise in the construction of training sets provides a level of generalization that adds stability and causes the network to perform in a manner similar to the IFFT. Figure 3.16 displays the output from the harmonic estimator during the same simulation used in the previous two examples.
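The Fourier-based measurement baselined in Figure 3.14 (the thesis labels the block "IFFT") amounts to reading harmonic magnitudes out of the transformed memory buffer. A minimal sketch, assuming the buffer holds exactly one uniformly sampled fundamental period so that harmonic k lands in FFT bin k:

```python
import numpy as np

def buffer_harmonics(buffer, orders):
    """Amplitude of each requested harmonic order from a memory buffer
    containing exactly one fundamental period of samples."""
    n = len(buffer)
    spec = np.fft.rfft(np.asarray(buffer, dtype=float)) / n
    # one-sided spectrum: double each bin magnitude to recover the amplitude
    return {k: 2.0 * abs(spec[k]) for k in orders}
```

With a 20 V 3rd harmonic and a 7 V 5th harmonic, the values used in the simulation above, the function recovers 20 and 7 and reports roughly zero for the absent 7th, which is why a full, valid buffer is needed before the IFFT block can report meaningful amplitudes.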
As with the IFFT and the neural network trained without noise, the network trained with noise oscillates in the range of ±80 until the memory buffer contains valid values. Beyond the initialization phase, the network produces erroneous results at the state transitions; in comparison to Figure 3.14, the reported errors are of comparable amplitude.

Figure 3.14: IFFT

Figure 3.15: Neural network harmonic estimation when trained without noise


More information

Design Neural Network Controller for Mechatronic System

Design Neural Network Controller for Mechatronic System Design Neural Network Controller for Mechatronic System Ismail Algelli Sassi Ehtiwesh, and Mohamed Ali Elhaj Abstract The main goal of the study is to analyze all relevant properties of the electro hydraulic

More information

HYBRID ACTIVE FILTER WITH VARIABLE CONDUCTANCE FOR HARMONIC RESONANCE SUPPRESSION USING ANN

HYBRID ACTIVE FILTER WITH VARIABLE CONDUCTANCE FOR HARMONIC RESONANCE SUPPRESSION USING ANN HYBRID ACTIVE FILTER WITH VARIABLE CONDUCTANCE FOR HARMONIC RESONANCE SUPPRESSION USING ANN 1 M.Shyamala, 2 P.Dileep Kumar 1 Pursuing M.Tech, PE Branch, Dept of EEE. 2 Assoc.Prof,EEE,Dept,Brilliant Institute

More information

A Comparison of Particle Swarm Optimization and Gradient Descent in Training Wavelet Neural Network to Predict DGPS Corrections

A Comparison of Particle Swarm Optimization and Gradient Descent in Training Wavelet Neural Network to Predict DGPS Corrections Proceedings of the World Congress on Engineering and Computer Science 00 Vol I WCECS 00, October 0-, 00, San Francisco, USA A Comparison of Particle Swarm Optimization and Gradient Descent in Training

More information

EE 791 EEG-5 Measures of EEG Dynamic Properties

EE 791 EEG-5 Measures of EEG Dynamic Properties EE 791 EEG-5 Measures of EEG Dynamic Properties Computer analysis of EEG EEG scientists must be especially wary of mathematics in search of applications after all the number of ways to transform data is

More information

EFFECTS OF PHASE AND AMPLITUDE ERRORS ON QAM SYSTEMS WITH ERROR- CONTROL CODING AND SOFT DECISION DECODING

EFFECTS OF PHASE AND AMPLITUDE ERRORS ON QAM SYSTEMS WITH ERROR- CONTROL CODING AND SOFT DECISION DECODING Clemson University TigerPrints All Theses Theses 8-2009 EFFECTS OF PHASE AND AMPLITUDE ERRORS ON QAM SYSTEMS WITH ERROR- CONTROL CODING AND SOFT DECISION DECODING Jason Ellis Clemson University, jellis@clemson.edu

More information

Distance Relay Response to Transformer Energization: Problems and Solutions

Distance Relay Response to Transformer Energization: Problems and Solutions 1 Distance Relay Response to Transformer Energization: Problems and Solutions Joe Mooney, P.E. and Satish Samineni, Schweitzer Engineering Laboratories Abstract Modern distance relays use various filtering

More information

A Comparison of MLP, RNN and ESN in Determining Harmonic Contributions from Nonlinear Loads

A Comparison of MLP, RNN and ESN in Determining Harmonic Contributions from Nonlinear Loads A Comparison of MLP, RNN and ESN in Determining Harmonic Contributions from Nonlinear Loads Jing Dai, Pinjia Zhang, Joy Mazumdar, Ronald G Harley and G K Venayagamoorthy 3 School of Electrical and Computer

More information

Application of Fourier Transform in Signal Processing

Application of Fourier Transform in Signal Processing 1 Application of Fourier Transform in Signal Processing Lina Sun,Derong You,Daoyun Qi Information Engineering College, Yantai University of Technology, Shandong, China Abstract: Fourier transform is a

More information

A 5 GHz LNA Design Using Neural Smith Chart

A 5 GHz LNA Design Using Neural Smith Chart Progress In Electromagnetics Research Symposium, Beijing, China, March 23 27, 2009 465 A 5 GHz LNA Design Using Neural Smith Chart M. Fatih Çaǧlar 1 and Filiz Güneş 2 1 Department of Electronics and Communication

More information

Understanding Input Harmonics and Techniques to Mitigate Them

Understanding Input Harmonics and Techniques to Mitigate Them Understanding Input Harmonics and Techniques to Mitigate Them Mahesh M. Swamy Yaskawa Electric America YASKAWA Page. 1 Organization Introduction Why FDs Generate Harmonics? Harmonic Limit Calculations

More information

ME 365 EXPERIMENT 7 SIGNAL CONDITIONING AND LOADING

ME 365 EXPERIMENT 7 SIGNAL CONDITIONING AND LOADING ME 365 EXPERIMENT 7 SIGNAL CONDITIONING AND LOADING Objectives: To familiarize the student with the concepts of signal conditioning. At the end of the lab, the student should be able to: Understand the

More information

DETECTION AND CLASSIFICATION OF POWER QUALITY DISTURBANCES

DETECTION AND CLASSIFICATION OF POWER QUALITY DISTURBANCES DETECTION AND CLASSIFICATION OF POWER QUALITY DISTURBANCES Ph.D. THESIS by UTKARSH SINGH INDIAN INSTITUTE OF TECHNOLOGY ROORKEE ROORKEE-247 667 (INDIA) OCTOBER, 2017 DETECTION AND CLASSIFICATION OF POWER

More information

CHAPTER 6 UNIT VECTOR GENERATION FOR DETECTING VOLTAGE ANGLE

CHAPTER 6 UNIT VECTOR GENERATION FOR DETECTING VOLTAGE ANGLE 98 CHAPTER 6 UNIT VECTOR GENERATION FOR DETECTING VOLTAGE ANGLE 6.1 INTRODUCTION Process industries use wide range of variable speed motor drives, air conditioning plants, uninterrupted power supply systems

More information

444 Index. F Fermi potential, 146 FGMOS transistor, 20 23, 57, 83, 84, 98, 205, 208, 213, 215, 216, 241, 242, 251, 280, 311, 318, 332, 354, 407

444 Index. F Fermi potential, 146 FGMOS transistor, 20 23, 57, 83, 84, 98, 205, 208, 213, 215, 216, 241, 242, 251, 280, 311, 318, 332, 354, 407 Index A Accuracy active resistor structures, 46, 323, 328, 329, 341, 344, 360 computational circuits, 171 differential amplifiers, 30, 31 exponential circuits, 285, 291, 292 multifunctional structures,

More information

Advances in Antenna Measurement Instrumentation and Systems

Advances in Antenna Measurement Instrumentation and Systems Advances in Antenna Measurement Instrumentation and Systems Steven R. Nichols, Roger Dygert, David Wayne MI Technologies Suwanee, Georgia, USA Abstract Since the early days of antenna pattern recorders,

More information

Chapter 2 Signal Conditioning, Propagation, and Conversion

Chapter 2 Signal Conditioning, Propagation, and Conversion 09/0 PHY 4330 Instrumentation I Chapter Signal Conditioning, Propagation, and Conversion. Amplification (Review of Op-amps) Reference: D. A. Bell, Operational Amplifiers Applications, Troubleshooting,

More information

Key-Words: - NARX Neural Network; Nonlinear Loads; Shunt Active Power Filter; Instantaneous Reactive Power Algorithm

Key-Words: - NARX Neural Network; Nonlinear Loads; Shunt Active Power Filter; Instantaneous Reactive Power Algorithm Parameter control scheme for active power filter based on NARX neural network A. Y. HATATA, M. ELADAWY, K. SHEBL Department of Electric Engineering Mansoura University Mansoura, EGYPT a_hatata@yahoo.com

More information

Evolving High-Dimensional, Adaptive Camera-Based Speed Sensors

Evolving High-Dimensional, Adaptive Camera-Based Speed Sensors In: M.H. Hamza (ed.), Proceedings of the 21st IASTED Conference on Applied Informatics, pp. 1278-128. Held February, 1-1, 2, Insbruck, Austria Evolving High-Dimensional, Adaptive Camera-Based Speed Sensors

More information

Performance Improvement of Contactless Distance Sensors using Neural Network

Performance Improvement of Contactless Distance Sensors using Neural Network Performance Improvement of Contactless Distance Sensors using Neural Network R. ABDUBRANI and S. S. N. ALHADY School of Electrical and Electronic Engineering Universiti Sains Malaysia Engineering Campus,

More information

Harmonic detection by using different artificial neural network topologies

Harmonic detection by using different artificial neural network topologies Harmonic detection by using different artificial neural network topologies J.L. Flores Garrido y P. Salmerón Revuelta Department of Electrical Engineering E. P. S., Huelva University Ctra de Palos de la

More information

ARTIFICIAL INTELLIGENCE IN POWER SYSTEMS

ARTIFICIAL INTELLIGENCE IN POWER SYSTEMS ARTIFICIAL INTELLIGENCE IN POWER SYSTEMS Prof.Somashekara Reddy 1, Kusuma S 2 1 Department of MCA, NHCE Bangalore, India 2 Kusuma S, Department of MCA, NHCE Bangalore, India Abstract: Artificial Intelligence

More information

SMARTPHONE SENSOR BASED GESTURE RECOGNITION LIBRARY

SMARTPHONE SENSOR BASED GESTURE RECOGNITION LIBRARY SMARTPHONE SENSOR BASED GESTURE RECOGNITION LIBRARY Sidhesh Badrinarayan 1, Saurabh Abhale 2 1,2 Department of Information Technology, Pune Institute of Computer Technology, Pune, India ABSTRACT: Gestures

More information

CHAPTER 3. Instrumentation Amplifier (IA) Background. 3.1 Introduction. 3.2 Instrumentation Amplifier Architecture and Configurations

CHAPTER 3. Instrumentation Amplifier (IA) Background. 3.1 Introduction. 3.2 Instrumentation Amplifier Architecture and Configurations CHAPTER 3 Instrumentation Amplifier (IA) Background 3.1 Introduction The IAs are key circuits in many sensor readout systems where, there is a need to amplify small differential signals in the presence

More information

INTERNATIONAL JOURNAL OF ELECTRICAL ENGINEERING & TECHNOLOGY (IJEET)

INTERNATIONAL JOURNAL OF ELECTRICAL ENGINEERING & TECHNOLOGY (IJEET) INTERNATIONAL JOURNAL OF ELECTRICAL ENGINEERING & TECHNOLOGY (IJEET) International Journal of Electrical Engineering and Technology (IJEET), ISSN 0976 6545(Print), ISSN 0976 6545(Print) ISSN 0976 6553(Online)

More information

Eur Ing Dr. Lei Zhang Faculty of Engineering and Applied Science University of Regina Canada

Eur Ing Dr. Lei Zhang Faculty of Engineering and Applied Science University of Regina Canada Eur Ing Dr. Lei Zhang Faculty of Engineering and Applied Science University of Regina Canada The Second International Conference on Neuroscience and Cognitive Brain Information BRAININFO 2017, July 22,

More information

Design of Shunt Active Power Filter by using An Advanced Current Control Strategy

Design of Shunt Active Power Filter by using An Advanced Current Control Strategy Design of Shunt Active Power Filter by using An Advanced Current Control Strategy K.Sailaja 1, M.Jyosthna Bai 2 1 PG Scholar, Department of EEE, JNTU Anantapur, Andhra Pradesh, India 2 PG Scholar, Department

More information

Three Phase PFC and Harmonic Mitigation Using Buck Boost Converter Topology

Three Phase PFC and Harmonic Mitigation Using Buck Boost Converter Topology Three Phase PFC and Harmonic Mitigation Using Buck Boost Converter Topology Riya Philip 1, Reshmi V 2 Department of Electrical and Electronics, Amal Jyothi College of Engineering, Koovapally, India 1,

More information

Active Filter Design Techniques

Active Filter Design Techniques Active Filter Design Techniques 16.1 Introduction What is a filter? A filter is a device that passes electric signals at certain frequencies or frequency ranges while preventing the passage of others.

More information

Voltage Sag and Swell Mitigation Using Dynamic Voltage Restore (DVR)

Voltage Sag and Swell Mitigation Using Dynamic Voltage Restore (DVR) Voltage Sag and Swell Mitigation Using Dynamic Voltage Restore (DVR) Mr. A. S. Patil Mr. S. K. Patil Department of Electrical Engg. Department of Electrical Engg. I. C. R. E. Gargoti I. C. R. E. Gargoti

More information

Signal Processing of Automobile Millimeter Wave Radar Base on BP Neural Network

Signal Processing of Automobile Millimeter Wave Radar Base on BP Neural Network AIML 06 International Conference, 3-5 June 006, Sharm El Sheikh, Egypt Signal Processing of Automobile Millimeter Wave Radar Base on BP Neural Network Xinglin Zheng ), Yang Liu ), Yingsheng Zeng 3) ))3)

More information

Artificial Neural Networks. Artificial Intelligence Santa Clara, 2016

Artificial Neural Networks. Artificial Intelligence Santa Clara, 2016 Artificial Neural Networks Artificial Intelligence Santa Clara, 2016 Simulate the functioning of the brain Can simulate actual neurons: Computational neuroscience Can introduce simplified neurons: Neural

More information

THE TREND toward implementing systems with low

THE TREND toward implementing systems with low 724 IEEE JOURNAL OF SOLID-STATE CIRCUITS, VOL. 30, NO. 7, JULY 1995 Design of a 100-MHz 10-mW 3-V Sample-and-Hold Amplifier in Digital Bipolar Technology Behzad Razavi, Member, IEEE Abstract This paper

More information