Constant False Alarm Rate Detection of Radar Signals with Artificial Neural Networks


Högskolan i Skövde
Department of Computer Science

Constant False Alarm Rate Detection of Radar Signals with Artificial Neural Networks

Mirko Kück
mirko@ida.his.se

Final 6 October, 1996

Submitted by Mirko Kück to the University of Skövde as a dissertation towards the degree of M.Sc. by examination and dissertation in the Department of Computer Science. October, 96.

I certify that all material in this dissertation which is not my own work has been identified and that no material is included for which a degree has already been conferred upon me.

Signed

Abstract

In conventional radar systems, radar signal detection is mostly based on statistical methods. These methods are computationally expensive and often only optimal for one type of clutter distribution. More recent approaches to radar signal detection also consider artificial neural networks as detectors. Up to now, these approaches have only treated environments with homogeneously distributed clutter. This project addresses mixed clutter environments. The experiments with different artificial neural network architectures showed that only a multilayer perceptron architecture could solve the task of radar signal detection. This architecture achieved a good performance in mixed clutter environments compared to a conventional detector.

Table of Contents

1. INTRODUCTION
   1.1 Radar Signal Detection with ANNs
   1.2 MSc Project
   1.3 Outline
2. RADAR SIGNAL DETECTION
   2.1 Conventional CFAR detection
3. ARTIFICIAL NEURAL NETWORKS
   3.1 Background
   3.2 Backpropagation Training Strategy
       Learning rate
       Momentum
       Generalisation Capabilities and Overtraining
       Hyperplanes
   3.3 Architectures
       Multilayer Perceptron
       Simple Recurrent Network
       Modular Approaches
4. PREVIOUS WORK
   4.1 ANN Approaches
5. ANN-CFAR DETECTION
   5.1 Two Layer ANN
       5.1.1 CA-CFAR detector
       5.1.2 ANN-CFAR detector
   5.2 Multilayer ANN
6. EXPERIMENTS
   Weibull PDF
   SCR, PFA and PD
   Threshold Calculation
   Performance Criteria (CFAR-loss)
   Training Set
       Background Clutter Distribution
       Target Signal
       Scenarios
       Training the MLP
   Approaches
       Multilayer Perceptron Approach
       Simple Recurrent Network Approach
       CN Approach
7. EVALUATION
   Specialised hidden units compared with direct training
   Limited learning abilities of SRNs
   Comparison between MLP and CN architecture
   Comparison between ANN-CFAR and CA-CFAR
8. CONCLUSION
   Approaches
   Achievements
   Future work
REFERENCES

Preface

All terms written in italic letters are defined in the appendix Definitions. The final version of this document will be available on the world wide web and will contain hyperlinks for references.

1. Introduction

The MSc project documented in this thesis was carried out in co-operation with the Connectionist Research Group at the University of Skövde and Ericsson Microwave Systems AB.

1.1 Radar Signal Detection with ANNs

Electronic signal filtering is a complex task, which requires complicated specialised hardware or computationally expensive signal processing. In radar systems the target signals have to be separated from unwanted background clutter. This kind of filtering can be considered as a pattern recognition problem, which fits very well to the possibilities an artificial neural network (ANN) offers. In the past, there have been several approaches to detect radar signals in unwanted background clutter. These approaches used extensive statistical computations, which were often only applicable to one kind of background clutter (homogeneous clutter); Farina et al. (1986), Barton (1988). In the last few years, researchers have also picked up the possibility of problem solving with artificial neural networks and used them for radar signal detection; Bucciarelli et al. (1993), Amoozegar et al. (1994), Ramamurti et al. (1993). These approaches delivered better and more robust results than conventional statistical approaches.

October, 96

1.2 MSc Project

The goal of this MSc project is to train ANNs to perform constant false alarm rate (CFAR) detection of radar signals in mixed clutter environments. The project will investigate if and how ANNs can be trained to cope with mixed clutter environments and which architectures are suitable for this kind of problem. The performance of ANN detectors will be compared with conventional approaches and, finally, suggestions for future research will be given.

1.3 Outline

Chapter 2 introduces the problem of signal detection in radar systems and a conventional method for detection. Chapter 3 explains the function and architectures of ANNs. Afterwards, Chapter 4 considers previous applications of ANNs to radar signal detection. Chapter 5 then explains the relationship between conventional methods and a simple ANN approach for radar signal detection. In Chapter 6, experiments are carried out to achieve radar signal detection in mixed clutter environments with ANNs. This chapter also describes the underlying distribution of the data set used for the experiments. The approaches taken in the experiments are evaluated in Chapter 7. In Chapter 8, the project is concluded by placing the results from previous work in the context of the achievements from this project. Further suggestions for future research are also given.

2. Radar signal detection

A stream of radar pulses is sent out toward a target. As they are reflected by the target, energy is returned from the target to the radar detector. The return signal is plotted as a function of time delay, which is proportional to the range (i.e. the target's distance from the radar). This continuous time delay function is transformed into discrete time steps, where each time step corresponds to a discrete range. The signal amplitude for each range is stored in separate range cells, which can be viewed in a range profile. An example is shown in Figure 1.

[Figure 1: example range profile (amplitude over range, with target signals marked)]

Unfortunately, the radar system receives not only echoes from targets, it also receives noise. This noise consists of thermal noise and background clutter. Background clutter arises from reflections from buildings, hills, mountains, forests, waves on water etc.

[Figure 2: non-homogeneous environment (mountains, water, buildings, target)]

Furthermore, this background clutter may not only vary over range but also over time, due to changes in the environment. Small targets at a great distance produce only weak echoes, so weak that they can be completely lost in the clutter. As the target's range decreases, the strength of the received target signal increases, until the target signal becomes strong enough to be detected above the clutter, Stimson (1983). There are different approaches to develop radar signal detectors that are optimum in some sense, such that false alarms (false detections) can be kept at a suitably constant low rate in an a priori unknown, time varying and spatially non-homogeneous environment as shown in Figure 2. These approaches are called constant false alarm rate (CFAR) detection. In these approaches the environment is modelled by distribution functions. The fact that the environment is time varying restricts modelling, because the environment transitions are a priori unknown. To keep the detectors' complexity manageable, detectors are normally designed for an approximation of one known environment. A simple way to separate target signal returns from background clutter is to apply a threshold.

[Figure 3: threshold separating target signals from clutter]

The distribution of the clutter can be approximated by certain probability distribution functions, where each medium follows a different probability distribution. For example, clutter returns from buildings follow a different probability distribution than clutter returns from mountains. Not only does the clutter distribution vary, but the clutter level (scale) also varies strongly. Detectors that cope with these variations have to have an adaptive threshold. The theory of threshold detectors that are optimal for one type of clutter distribution (homogeneous clutter) is well understood if thermal noise and clutter follow complex Gaussian distributions, Farina (1987). Unfortunately, this requires a priori information about the statistical distributions, which in practice is not completely known.

2.1 Conventional CFAR detection

If the target reflections get strong enough to be detected over the clutter, one can use a fixed threshold to separate them from the background clutter. Unfortunately, this leads to a huge number of false alarms if the clutter level changes and the clutter appears with a higher amplitude, as illustrated in Figure 4.

[Figure 4: fixed threshold — where the clutter level rises (e.g. at a water/ground transition), the fixed threshold produces false alarms alongside the true detection]

In order to achieve CFAR detection in such a scenario, one needs a variable threshold which adapts to different clutter levels. One kind of adaptive threshold detector is the cell averaging CFAR (CA-CFAR) detector, described by Barton (1988).

[Figure 5: CA-CFAR detector — a shift register of range cells, averaging of the reference cells, scaling by α, and a comparator producing detection 0/1]

The input is supplied to a shift register, where the cell under test (CUT) is surrounded by a set of neighbouring range cells. The CA-CFAR detector calculates the average of the surrounding range cells and multiplies it by a fixed value α to get the threshold. The threshold is then compared with the value of the CUT. If this value exceeds the threshold, a target detection is reported. Then the contents of the shift register are shifted by one position and the detection process is repeated for the next range cell. The shift register can be considered as a reference window which slides over the stream of range cells. A problem with the CA-CFAR is the possibility of another interfering target in the reference window. The interfering target, as depicted in Figure 6, gets included in the averaging and increases the threshold, so that the target in the CUT may not be detected. Another problem with the CA-CFAR is that a clutter edge as shown in Figure 7 may produce false alarms where the threshold cuts the clutter edge.
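The sliding-window procedure described above can be sketched as follows (an illustrative reconstruction, not code from the thesis; guard cells around the CUT are omitted for brevity):

```python
import numpy as np

def ca_cfar(cells, n_ref, alpha):
    """Cell averaging CFAR over a 1-D range profile.

    For each cell under test (CUT), the n_ref cells on either side form
    the reference window; their average, scaled by alpha, is the adaptive
    threshold. Returns a boolean detection mask per range cell.
    """
    cells = np.asarray(cells, dtype=float)
    detections = np.zeros(len(cells), dtype=bool)
    for cut in range(n_ref, len(cells) - n_ref):
        # reference window: neighbours of the CUT, the CUT itself excluded
        ref = np.r_[cells[cut - n_ref:cut], cells[cut + 1:cut + 1 + n_ref]]
        threshold = alpha * ref.mean()
        detections[cut] = cells[cut] > threshold
    return detections
```

For a flat clutter background with a single strong echo, only the echo exceeds the adaptive threshold; an interfering target inside the reference window would raise the threshold and could mask the CUT, exactly as described above.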

[Figure 6: target masking — an interfering target in the reference window raises the threshold above the target in the cell under test]

[Figure 7: clutter edge — the adaptive threshold cuts the rising clutter edge, producing false alarms]

3. Artificial Neural Networks

Most of the current generation of artificial neural networks are characterised by a multilayer architecture, with several layers of neurons. The backpropagation algorithm of Rumelhart et al. (1986) is a well-known and largely successful learning algorithm for supervised learning networks, where input and output domain are known a priori. Opposed to supervised learning, an unsupervised learning algorithm blindly or heuristically classifies the input domain and often has less computational complexity than supervised learning. Multilayer architectures have proven to be functionally capable of solving impressive problems in information storage, processing, and transformation. They can be used for density estimation, classification and regression problems, as shown by Jordan (1996). They may not, however, be the best neural techniques for a given problem. In spite of these increased capabilities, they remain restricted to certain classes of problems, since they are difficult to train and to scale.

3.1 Background

Biological neurons transmit electrochemical signals over neural pathways. Each neuron receives signals from other neurons through special junctions called synapses. Some inputs tend to excite the neuron, others tend to inhibit it. When the cumulative effect exceeds a threshold, the neuron fires (becomes active) and sends a signal to other neurons.

Artificial neural networks model this behaviour in a simple way. Each neuron has a set of inputs and one output. The incoming signals (i_1, …, i_n) are multiplied independently by a weight (w_1, …, w_n). The sum of all weighted input signals determines the activation of the neuron: net = Σ_{i=1}^{n} i_i w_i. To produce the output signal, the activation is further processed by an activation function (F), which can for example be a threshold, linear or sigmoid function.

[Figure 8: computational model of a neuron — inputs i_1 … i_n, weights w_1 … w_n, activation net, activation function F]

The whole network consists of sets (layers) of neurons, which are connected. A network consists of one input and one output layer; in addition the network can also contain one or several hidden layers. The input layer buffers the inputs and has no weights. All following layers (hidden and output layers) have weights on their inputs. Thus, talking about a network with a single layer of weights, this network has two layers: one input and one output layer. If the information flow through the network is only in one direction, without feedback loops, we call it a feedforward network. A neural network with feedback connections is called recurrent.
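The computational model above (weighted sum followed by an activation function) can be sketched as follows (our illustration; the function name is not from the thesis):

```python
import numpy as np

def neuron_output(inputs, weights, activation=None):
    """net = sum_i i_i * w_i, passed through the activation function F.

    F defaults to the sigmoid; a threshold or linear function can be
    supplied instead, as mentioned in the text.
    """
    net = float(np.dot(inputs, weights))
    if activation is None:
        activation = lambda z: 1.0 / (1.0 + np.exp(-z))  # sigmoid
    return activation(net)
```

With zero weights the sigmoid neuron outputs 0.5 regardless of the input; with a threshold activation it behaves like the firing biological neuron described in section 3.1.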

[Figure 9: feedforward and recurrent neural network — input, hidden and output layers; the recurrent network has a feedback connection]

Solving a problem with an ANN generally involves the following steps:

- Select a network topology which fits the nature of the problem
- Choose activation functions which are appropriate for the nature of the problem
- Train the neural network with a training procedure

3.2 Backpropagation Training Strategy

Neural networks using supervised learning techniques are based on mapping input patterns to output patterns. In order to recognise patterns, the network needs to be trained with these patterns. To train the network one has to use a training algorithm. One well known algorithm, widely used among current types of neural networks, is the backpropagation training algorithm, Rumelhart et al. (1986). The name backpropagation comes from the fact that the error gradients of hidden units are derived from propagating backward the errors associated with output units, since target values are given only for output units and not for hidden units. The errors are computed as some function of the distance between actual output and target output. The output of a neuron is calculated as follows:

o_j = F(Σ_i w_ij o_i)

with o_i = previous layer output (unit input) and o_j = current layer output,

where F is the activation function, for example a sigmoid function given by

F(x) = 1 / (1 + e^(-x))

Backpropagation learning works as follows: First, the output vector resulting from a given input vector is calculated. Then the output vector is compared with the target vector for the actual input. The error between target and output vector is then propagated backwards through the net and results in an adjustment Δw_ij for each weight. The adjusted weight is

w_ij(t+1) = w_ij(t) + Δw_ij

which should bring the output closer to the desired output (target vector). The adjustment is calculated by the generalised delta rule for multilayer networks, see Rumelhart et al. (1986). The weight adjustment is proportional to the error and leads to no adjustment if the input is zero. One limitation is that the backpropagation algorithm is prone to local minima, because it uses a gradient descent strategy in order to minimise the error criterion. Gradient descent corresponds to performing steepest descent on a surface in weight space whose height at any point is equal to the error measure. If a point is reached where the gradient becomes zero, a minimum is found and no further weight changes are done. This minimum can either be the global minimum over the whole weight space or a local minimum in one region of the weight space. If a local minimum is reached, the algorithm is stuck and has no ability to reach the global minimum or the error criterion, respectively.

3.2.1 Learning rate

A careful selection of the step size for weight adjustment (the learning rate η) is often necessary to ensure smooth convergence. Large step sizes can cause the network to

become paralysed. When network paralysis occurs, further training does little for convergence. On the other hand, if the step size is too small, convergence can be very slow.

3.2.2 Momentum

A large learning rate η corresponds to rapid learning but might result in oscillations. On the other hand, some researchers, Rumelhart et al. (1986), have shown that a small η may result in a failure to learn. Further, Rumelhart et al. (1986) proposed adding a momentum term to the delta rule in the following form:

Δw_ji(n+1) = η δ_j o_i + α Δw_ji(n)

where δ_j is the error signal (target vector - output vector) and α, called the momentum ratio, is a constant; the indices (n+1) and (n) indicate the (n+1)th and nth step. The momentum term specifies that the weight change at step (n+1) should be somewhat similar to the change at the nth step. This modified delta rule was used in the experiments described later.

3.2.3 Generalisation Capabilities and Overtraining

Generalisation is concerned with how well the network performs on the problem with respect to both seen and unseen data. During the training, the process of adjusting the weights based on the gradients is repeated until the error criterion is reached or continued training would not reduce the error any further. Overtraining the network will make it memorise the individual input-output training pairs (training set) rather than adapting to common features within the data. This issue arises because the network matches the training data too closely and loses its generalisation ability over unseen data (test data). In practice, one has to choose a stopping condition, typically an error goal.
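A minimal sketch of the momentum-augmented delta rule, applied to a single sigmoid unit (our illustration; the thesis applies the generalised rule to full multilayer networks):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def delta_w(eta, delta_j, o_i, alpha, prev_dw):
    # dw_ji(n+1) = eta * delta_j * o_i + alpha * dw_ji(n)
    return eta * delta_j * o_i + alpha * prev_dw

# Train one sigmoid unit towards target t = 1: the squared error
# shrinks over successive momentum-augmented updates.
x = np.array([1.0, 0.5])
w = np.array([0.1, -0.2])
t, eta, alpha = 1.0, 0.5, 0.9
dw = np.zeros_like(w)
errors = []
for _ in range(20):
    o = sigmoid(w @ x)
    errors.append((t - o) ** 2)
    delta = (t - o) * o * (1.0 - o)   # error signal of a sigmoid unit
    dw = delta_w(eta, delta, x, alpha, dw)
    w = w + dw
```

Because the target here is 1 and the sigmoid is bounded below 1, every update pushes the output upwards and the error decreases monotonically; with oscillating gradients the momentum term instead smooths the trajectory.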

3.2.4 Hyperplanes

A hyperplane separates the input space into two decision regions. Each neuron of the same layer can form one hyperplane. For example, with two neurons the input space (e.g. Colour, Shape) can be separated into maximally four regions (c, d, e, f). In this example the attributes of the input space belong to either one of two fruit types.

[Figure 10: hyperplanes of two neurons — two hyperplanes over the Colour/Shape input space separate the regions c, d, e, f belonging to fruit types I and II]

To group together the attributes of regions c and e that are of the same fruit type, a following layer of units can then separate the output result (c, d, e, f) of the previous layer further into decision regions (I, II), as shown in the next figure. These decision regions are in this case the correct classification of attributes into fruit types.

[Figure 11: separation of the previous layer's output result — a second-layer hyperplane maps the regions c, d, e, f onto the decision regions I and II]
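The region coding of Figure 10 can be sketched numerically (the weights below are hypothetical, chosen by us for illustration): each threshold unit reports which side of its hyperplane a point lies on, and the tuple of unit outputs names one of the four regions.

```python
import numpy as np

def region(x, W, b):
    """Each row of W with its bias entry defines one hyperplane
    W[k] @ x + b[k] = 0; the tuple of threshold-unit outputs encodes
    which of the (up to) 2^k regions the input x falls in."""
    return tuple(int(v) for v in (W @ np.asarray(x) + b > 0))

# Two units over a 2-D (Colour, Shape) input space: four regions.
W = np.array([[1.0, 0.0],   # hyperplane 1: colour = 0.5
              [0.0, 1.0]])  # hyperplane 2: shape  = 0.5
b = np.array([-0.5, -0.5])
```

A following layer would then map these region codes onto the two fruit types, exactly as Figure 11 sketches.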

3.3 Architectures

Three different types of architectures will be considered:

- Multilayer Perceptron (MLP)
- Simple Recurrent Network (SRN)
- Modular Approaches

3.3.1 Multilayer Perceptron

As mentioned earlier, artificial neural networks are based on mapping input patterns onto output patterns. Using an MLP architecture, as illustrated in Figure 12, there can be any non-linearity between the input and the output pattern, if the network has sufficient hidden units and hidden layers.

[Figure 12: MLP architecture (computational direction from input to output layer)]

To represent temporal dependencies within the network, the MLP can learn the transitional probabilities from one input to the next. On the other hand, dependencies between patterns that span more than two inputs cannot be learned. A different way to represent time in MLPs is to present it as an additional dimension of the input. If one wants to compute a temporal sequence with an MLP, this sequence must be presented as overlapping time slices to the network. A relative temporal position between two network input vectors (v_1, v_2), as considered in Elman's (1990) example,

will be treated by the network as quite dissimilar input vectors. MLPs can, of course, be trained to treat such patterns as similar, but the similarity is then a consequence of the external teacher and not of the relative temporal position, and will not lead to generalisation to novel patterns.

3.3.2 Simple Recurrent Network

The ability to process sequences is crucial for many tasks. It is needed whenever the order of events is relevant or the current state depends on previous states. Speech recognition and time series prediction are some examples. The objective is to enable ANNs to handle temporal information between more than one state transition.

[Figure 13: SRN architecture — the hidden layer feeds back into context units, which serve as extra inputs]

Simple recurrent networks (SRNs) represent time implicitly in their structure and not as an additional dimension of the input. To achieve this, the network must be given memory, which is done in Elman's (1990) simple recurrent network, see Figure 13, by feeding back the hidden representation as extra input units to the hidden layer itself. These extra input units are called context units, because they represent the context of the previous steps and provide the network with memory. SRNs are also applicable to problems other than temporal sequences. Other problems can be decomposed into sequences, where the network learns, for example, structures of

letters in sequences. Even in cases where a complete prediction of the output sequence is not possible, the network can learn to predict partial sub-regularities, as shown by Elman (1990). The advantage of feedback in a network is that the history of inputs affecting the output is not strictly limited, as it is in time window approaches. Another advantage is that SRNs are smaller than MLPs for time dependencies, because the temporal dimension is implicit in the structure and not, as in MLPs, represented in their spatial dimension.

3.3.3 Modular Approaches

Modularity is a technique largely used in computer science: a problem is decomposed into sub-problems. Each sub-problem is solved by a separate module that is expected to be simpler than the overall task. This can also be applied to ANNs, where each module is a single ANN specialised on one part of the whole problem. By combining or switching between the results of each module, a final result can be obtained. This combining or switching mechanism can be implemented in different ways.

Modular Neural Networks

[Figure 14: Modular Neural Network — modules 1-3 receive the input under individual supervision; module 4 combines their outputs into the final output]

The four modules shown in Figure 14 are trained simultaneously and supervision is provided to each module. Modules one to three can be supplied with all inputs or only those inputs that each specialised module needs for its task. Module four receives the outputs from the other modules and is trained to combine these outputs into a final result. One example could be to use the advantages of specialised ANNs trained to estimate single parameters of a probability distribution function (e.g. variance, mean, median). A fourth module could combine these parameters and give an estimation of the underlying probability distribution function.

Adaptive Mixture of Experts

[Figure 15: Adaptive Mixture of Experts — expert outputs s_1, s_2, s_3 are combined as s = g_1 s_1 + g_2 s_2 + g_3 s_3, where the gate outputs sum to one: g_1 + g_2 + g_3 = 1]

This modular architecture was introduced by Jacobs et al. (1991). It uses different networks to learn training patterns from different regions of the input space. The network consists of two types of modules, as illustrated in Figure 15: expert networks and a gating network. The expert networks compete to learn the training patterns and the gating network mediates the competition. The gating network has as many outputs as there are expert networks, and the activations of these output units must be non-negative

and add up to one. That means that the output of the gating network gives the portion each expert contributes to the final result (the output of the network).

Cascaded Networks

[Figure 16: Cascaded Network — a context network computes, from its own input, the weights of the function network, which maps its input to the output]

Breaking a problem into sub-problems leads to modules that would have to be implemented in separate ANNs. A cascaded network (CN), as introduced by Pollack (1987), offers a possibility to combine these modules within one network. The network consists of a function network and a context network. Training the function network separately for each sub-problem leads to multiple sets of weights, one for each sub-problem. The context network computes as its outputs the weights of the function network. If each sub-problem can be identified from some input features, the context network can then be trained to map these states or patterns onto the specific weight set for each sub-problem. This mechanism leads to a network which can implement multiple functions with weights that are modified dynamically during computation. Training all sub-problems first, separately, could make the mapping between input features and weight sets difficult due to edge constraints between the sets. Merging the sub-problems removes these constraints and forms the training as follows. All input patterns are passed through the function network, and after each pass of an input pattern the output error is propagated backward through the net to compute the weights of the function network. These weights are then used in the same step as target patterns for the context network to compute its weights by

backpropagation. As input to the context network, any feature can be chosen (i.e. the same input as to the function network, only some specific input units, the internal state of the function network, etc.). Pollack (1987) built, for example, a 4-1 multiplexer using a cascaded network. According to a two-bit address, one of four data lines should be switched to the output. Pollack used the two-bit address as input to the context network and the four data lines as input to the function network. Training the cascaded network converges in general much quicker than a feed-forward solution (6 input, 4 hidden and 1 output unit) where there is no distinction between the six inputs.
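The gated combination used by the adaptive mixture of experts above can be sketched as follows (our illustration; the softmax is one standard way to keep the gate outputs non-negative and summing to one):

```python
import numpy as np

def mixture_output(expert_outputs, gate_logits):
    """s = g_1 s_1 + g_2 s_2 + ... where the gates g_i are obtained from
    the gating network's raw outputs via a softmax, so that g_i >= 0
    and sum(g_i) = 1."""
    g = np.exp(gate_logits - np.max(gate_logits))  # numerically stabilised
    g = g / g.sum()
    return float(np.dot(g, expert_outputs)), g
```

With equal gate activations each expert contributes equally; during training, the gating network learns to shift the weight towards the expert responsible for the current region of the input space.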

4. Previous Work

Radar signal detection is a complex task, where many actions have to be performed on separate levels to achieve CFAR detection. A radar signal processing system in general looks as shown in Figure 17, Farina (1986) and Bogler (1990).

[Figure 17: Radar Signal Processing chain — Antenna, Receiver, Digital Signal Processor, Data Processor, Display]

The antenna receives reflected radar signals, which are then amplified and filtered in the receiver. The received input stream is then stored in range cells, and the digital signal processor (DSP) performs target detection and false alarm control on it. Modern radars possess very sophisticated feedback logic to optimise the detection performance. This is done by correlating the new radar measurements with previous target tracks in the data processor, to make sequential predictions about tracks. There are many research activities in radar signal processing. Most of these apply conventional statistical methods to detect signals within noisy environments. Over the last decade, these research activities have also picked up the new possibilities of problem solving with ANNs. ANNs have been successfully used in radar image processing, for example automatic target recognition, Grossberg (1995). This project focuses on the actions taking place in the DSP, detecting single targets, to achieve CFAR detection. Therefore, ANN approaches used in the receiver or data processor will only be mentioned but not further described in later chapters.

4.1 ANN Approaches

In most radar systems, a series of M short pulses is sent out in one direction. The received radar pulse has a complex nature (e.g. signal amplitude and phase), where the whole received signal consists of M complex pulses. Due to the complex nature of the received signal, it is possible to calculate for each range cell whether the received signal is a target reflection or a clutter reflection. To improve the detection rate, Bucciarelli et al. (1993) trained an ANN with complex inputs and activation functions and compared their results with the performance of optimal detectors for Gaussian clutter. They focused in their study mainly on comparisons at different signal to clutter ratios, with the result that the complex ANN receiver performs very closely to the optimum receiver for Gaussian clutter. Most CFAR detectors do not operate on the level of complex signals for single range cells. They consider a set of range cells after pre-processing the complex return signals, as described earlier in section 2.1. A problem with conventional detectors is that their performance degrades as the number of available reference cells decreases. This may be caused by several factors, primarily constraints on the radar system used (e.g. resolution, presence of interfering targets, dead cells, clutter discontinuities). The robustness of an ANN approach against a loss of reference cells is considered by Amoozegar et al. (1994). They focused on a multilayer feed-forward network with different reference window sizes and different signal to clutter ratios (SCR) and compared their results with conventional detectors. As input parameters they used the cells in the reference window, but also statistical input features such as the statistical mean over the reference window, the average power and the variance.
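Statistical input features of the kind Amoozegar et al. (1994) fed to their network can be computed directly from the reference window; a sketch (the function and key names are ours):

```python
import numpy as np

def detector_features(ref_window):
    """Statistical features over the reference window, of the kind used
    alongside the raw cell amplitudes as ANN detector inputs."""
    ref = np.asarray(ref_window, dtype=float)
    return {
        "mean": float(ref.mean()),                 # statistical mean
        "average_power": float(np.mean(ref ** 2)),  # mean squared amplitude
        "variance": float(ref.var()),
    }
```

Such summary features let the network estimate the local clutter level and spread even when individual reference cells are noisy or missing.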
They pointed out that ANN detectors consistently provide superior performance compared to conventional detectors when the size of the reference window is small. Further, they reported similarly robust performance under other conditions, viz. the presence of interfering targets, dead cells (cells with no or corrupted

value) and clutter discontinuities. Ramamurti et al. (1993) likewise showed that their approach, with a multilayer neural network and only the reference window as input vector, yields considerable performance improvements over locally optimum detectors in non-Gaussian noise. Considering the radar image as a whole is the last step in the radar signal process. This approach belongs in the category of image processing, where target tracking is used to filter out targets and where ANNs have been successfully used according to Grossberg (1995). In this literature review this area will not be considered any further, since it would go beyond the scope of this project. All former approaches point in the direction of robustness in non-Gaussian noise environments. Further research is necessary on radar detection problems in homogeneous clutter (e.g. different clutter levels, clutter edges and interfering targets) and also in mixed clutter environments (e.g. different clutter distributions). An investigation of the internal behaviour of ANN detectors (e.g. why and how the detection process works with ANNs) is also necessary. This project extends the research of Ramamurti et al. (1993) and Amoozegar and Sundareshan (1994) and investigates whether artificial neural networks (ANNs) can be trained on radar signal detection in mixed clutter environments and perform better than conventional radar detectors.

5. ANN-CFAR Detection

This chapter will first explain the relation between a single layer ANN and a CA-CFAR detector. It will then focus on MLPs and relate different architectures to the problems of CFAR detection.

5.1 Two Layer ANN

First of all it will be shown why an ANN with n input units and one output unit (a single layer of weights) is able to solve the same task of radar target signal detection as a cell averaging (CA)-CFAR detector does. If this simple ANN is able to do the task of radar signal detection, then a more complex multilayer architecture, as discussed in the next section, must at least be able to do the same task.

5.1.1 CA-CFAR detector

A CA-CFAR detector has as input the amplitudes of the cell under test (CUT) and a set of surrounding reference cells. The CA-CFAR calculates the average value of all reference cells and multiplies it by α to place the threshold just above the highest peaks of the background clutter distribution. This, of course, can only be optimal for one kind of distribution. The calculation of α will be described in Chapter 6.3 with respect to probability distribution functions.

Figure 18, CFAR-Detector (block diagram: the N leading and M trailing reference cells are averaged, the average is multiplied by α to form the threshold, and the CUT is compared against it; an activation greater than 0 means target signal (1), otherwise no target signal (0))

5.1.2 ANN-CFAR detector

A simple ANN-CFAR detector is shown in Figure 19. It has one input layer and a single output unit with a squashing function which converts values less than or equal to zero to an output of 0 (no target present) and values greater than zero to an output of 1 (target present). Let us assume the weights on the connections from the reference cells are manually initialised with -1/n. If the weight on the connection from the CUT is further initialised with 1, the ANN will produce an output greater than zero if the CUT contains a value greater than the average value of all surrounding cells. To detect as target signals only amplitudes that are higher than the clutter level, and therefore achieve a low rate of false alarms, the weight on the connection from the CUT has to have a smaller value than 1. October, 96 25

Figure 19, ANN-Detector (a single layer of weights connecting the N leading reference cells, the CUT and the M trailing reference cells to one output unit)

The activation state of the output unit (before applying the squashing function) for such an ANN with a single layer of weights is given by

a = w_{CUT} x_{CUT} + \sum_{i=1}^{N} w_i x_i + \sum_{i=N+1}^{N+M} w_i x_i

with w_1 = \dots = w_{N+M} = -\frac{\alpha}{N+M} and w_{CUT} = 1. If one splits this equation up, one gets for the weighted reference cells

-\sum_{i=1}^{N} \frac{\alpha}{N+M} x_i - \sum_{i=N+1}^{N+M} \frac{\alpha}{N+M} x_i = -\alpha \cdot \frac{1}{N+M} \left( \sum_{i=1}^{N} x_i + \sum_{i=N+1}^{N+M} x_i \right)

which, apart from the factor -α, is the same cell average as computed by the CA-CFAR detector (for N = M it equals \frac{1}{2} \left( \frac{1}{N} \sum_{i=1}^{N} x_i + \frac{1}{M} \sum_{i=N+1}^{N+M} x_i \right), the half-sum of the two window means). This shows that both detectors should theoretically have the same performance.

5.2 Multilayer ANN

An ANN-CFAR detector such as the one considered above, with one output unit and no hidden layer, has no advantage over any conventional detector, but it is useful to show the similarities. The question whether ANNs can gain advantages over conventional detectors by using more complex architectures will be discussed next.
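The equivalence shown in Section 5.1 can be checked numerically: with reference weights -α/(N+M) and CUT weight 1, the activation of the single-weight-layer ANN equals the CUT minus the CA-CFAR threshold (a small sketch under the weight assignment above; the variable names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
N = M = 15
alpha = 4.0
cells = rng.weibull(1.6, N + 1 + M)        # N leading cells, CUT, M trailing cells

w = np.full(N + 1 + M, -alpha / (N + M))   # weights on the reference cells
w[N] = 1.0                                 # weight on the CUT

activation = w @ cells                     # single-layer ANN activation
ref = np.concatenate([cells[:N], cells[N + 1:]])
margin = cells[N] - alpha * ref.mean()     # CUT minus the CA-CFAR threshold

print(np.isclose(activation, margin))      # -> True
```

Both quantities are positive (target declared) or negative (no target) together, so the two detectors make identical decisions.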

The fact that real background clutter does not follow any single distribution, and can only be approximated by distribution functions, leads to problems with conventional CFAR detectors. The ability of ANNs to model input-output relations without any prior knowledge about those relations is called model freedom. The ANN can be considered a black box with some specific features depending on the ANN architecture. Statistical approaches need sufficient knowledge about the data to build models, and these models often become very complex. ANNs can be trained by example, where the internal model is generated during training. During training the network adapts to features of the input set. It is not always clear what these features are, but, for example, training an ANN on averaging patterns leads to a minimisation of the output variance, because the sum squared error E_P of an output pattern is equivalent to the variance σ² of this pattern:

E_P = \frac{1}{N} \sum_j (t_{Pj} - o_{Pj})^2, with t_{Pj} = target pattern = average value and o_{Pj} = output pattern,

\sigma^2 = \frac{1}{N} \sum_i (x_i - \mu)^2, with \mu = average value and x_i = distribution value.

On the other hand, this means that the ANN must internally model the variance of the input distribution. Depending on the adaptation of statistical features, a multilayer ANN detector can deliver better results for input distributions that cannot be modelled, or can only be modelled by complex distribution functions. Conventional detectors are mostly optimal for only one kind of clutter distribution function. The next chapter describes experiments that consider the adaptation of ANNs to more than one clutter distribution, which would be a main advantage over conventional detectors.
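The identity between the two expressions is easy to verify: if the target pattern is the average value of the inputs and the output pattern reproduces the inputs, E_P is exactly σ² (a toy check with names of my own choosing):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.weibull(1.3, 100)        # one pattern of N distribution values

t = np.full_like(x, x.mean())    # target pattern: the average value
o = x                            # output pattern reproducing the inputs

E_P = np.mean((t - o) ** 2)      # sum squared error per pattern
var = np.mean((x - x.mean()) ** 2)
print(np.isclose(E_P, var))      # -> True
```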

6. Experiments

As shown by Ramamurti (1993), ANNs can be trained to detect radar signals in non-Gaussian clutter. In his study Ramamurti trained ANNs separately on non-Gaussian clutter and demonstrated that ANNs yield performance improvements over locally optimum detectors. The following experiments consider the performance of ANNs in mixed clutter environments. Mixed clutter means that one ANN is trained on different non-Gaussian distributions and should be able to adapt to all of them.

6.1 Weibull PDF

The Weibull probability density function (PDF) can be used as a good approximation of various clutters. Its two parameters (a, b) specify the shape (a) and scale (b). Varying the shape parameter leads to different non-Gaussian distributions. A special case with a = 2 is the Rayleigh distribution, which is a good approximation for target signals, Farina (1987). The Weibull PDF is defined by

f(x) = \frac{a}{b} \left( \frac{x}{b} \right)^{a-1} e^{-\left( \frac{x}{b} \right)^{a}}.

Figure 20, different Weibull PDFs (f(amplitude) over amplitude for b = 1 and several shape parameters, among them a = 2.0 and a = 3.0)

Figure 20 displays the PDF for the value b = 1 and different values of a. The distribution of v has a mean value defined by

E[v] = b \, \Gamma\left(1 + \frac{1}{a}\right)

and a mean square value defined by

E[v^2] = b^2 \, \Gamma\left(1 + \frac{2}{a}\right)   [1]

where

\Gamma(x) = \int_{t=0}^{\infty} t^{x-1} e^{-t} \, dt

is called the gamma function. The cumulative distribution function (CDF) returns the probability p(x) that a Weibull distributed value falls in the interval [0, x].

p(x) = \int_{x'=0}^{x} f(x') \, dx' = 1 - e^{-\left( \frac{x}{b} \right)^{a}}

Weibull distributed random variables can be generated from a random variable u, uniformly distributed over the interval [0, 1], by inverting the CDF:

x = b \left( -\ln(1 - p(x)) \right)^{1/a}, \quad x = b \left( -\ln(u) \right)^{1/a}.   [2]

6.2 SCR, PFA and PD

The ratio between signal and clutter level is called signal to clutter ratio (SCR) and can be calculated as

SCR = \frac{\text{average signal power}}{\text{average clutter power}} = \frac{E[s^2]}{E[n^2]}   [3]

where the average power is equal to the mean square value of the signal or clutter distribution respectively. The most important parameters of a CFAR detector are the false alarm rate P_FA and the detection performance P_D. The criterion for a good performance is to maximise the detection performance P_D for a low P_FA at a certain SCR. P_FA and P_D are defined as follows for a given threshold T, clutter distribution f_CLUT(x) and joint distribution f_CUT(y) of clutter and signal together in the cell under test (CUT):

P_{FA} = P(X > T) = \int_{x=T}^{\infty} f_{CLUT}(x) \, dx = e^{-\left( \frac{T}{b} \right)^{a}}   [4]

P_D = P(Y > T) = \int_{y=T}^{\infty} f_{CUT}(y) \, dy.
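Eqns. [1], [2] and [4] can be checked together in a short simulation: sample clutter through the inverse CDF and compare the empirical moments and exceedance rate with the closed forms (sample size, seed and threshold are arbitrary choices of mine):

```python
import math
import numpy as np

a, b, T = 1.3, 1.0, 2.0
rng = np.random.default_rng(2)
u = rng.random(500_000)
v = b * (-np.log(u)) ** (1 / a)               # eqn [2]: Weibull samples from uniform u

mean_theory = b * math.gamma(1 + 1 / a)       # E[v]
msq_theory = b**2 * math.gamma(1 + 2 / a)     # E[v^2], eqn [1]
pfa_theory = math.exp(-(T / b) ** a)          # eqn [4]

print(abs(v.mean() - mean_theory) < 0.01)         # -> True
print(abs(np.mean(v**2) - msq_theory) < 0.02)     # -> True
print(abs(np.mean(v > T) - pfa_theory) < 0.005)   # -> True
```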

Unfortunately the joint distribution function of signal+clutter in the CUT is unknown. Only the distribution functions for clutter alone and signal alone are available for calculations, therefore it is not possible to calculate P_D directly for a given threshold. A method to obtain P_D iteratively is shown in Ravid and Levanon (1992). For the case of a Rayleigh target in Weibull clutter the following relationship holds:

P_D = \exp\left( - \frac{T^2}{b^2 (1 + SCR) \, \Gamma\left(1 + \frac{2}{a}\right)} \right)

from which the triple relationship between SCR, P_FA and P_D for a fixed-threshold (non-CFAR) detector can be obtained:

SCR = \frac{(-\ln P_{FA})^{2/a}}{(-\ln P_D) \, \Gamma\left(1 + \frac{2}{a}\right)} - 1.

Because analytical methods are not available for evaluating the performance of ANN-CFAR detectors, P_FA and P_D have been estimated from simulation experiments. The positions of targets are known during the experiments, and a comparison between known target positions and detected target positions yields the number of false and correct detections. P_FA and P_D are then calculated as

P_{FA} = \frac{\text{false detections}}{\text{number of all tested clutter cells}}, \quad P_D = \frac{\text{correct detections}}{\text{number of all targets}}.
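These counting estimators can be sketched as follows; the threshold, signal scale and sample counts are arbitrary illustration values, not those of the experiments:

```python
import numpy as np

rng = np.random.default_rng(4)
a, b, T = 2.0, 1.0, 2.6                     # Rayleigh clutter, fixed threshold

clutter = b * rng.weibull(a, 200_000)       # cells known to contain clutter only
# cells known to contain a target: clutter plus a Rayleigh signal return
cut = b * rng.weibull(a, 5_000) + 3.0 * rng.weibull(2.0, 5_000)

p_fa = np.mean(clutter > T)   # false detections / tested clutter cells
p_d = np.mean(cut > T)        # correct detections / number of targets
print(p_fa < 0.01, p_d > 0.5) # -> True True
```

Counting over known target positions like this is exactly how P_FA and P_D are obtained in the simulations below.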

6.3 Threshold Calculation

Equation [4] for the false alarm rate can be solved for the threshold T:

T = b \left( -\ln(P_{FA}) \right)^{1/a}.

Using a given T, the fixed value α for the CA-CFAR detector can be obtained from

\alpha = \frac{T}{\bar{x}}, with \bar{x} = mean value.

Table 1 lists the threshold T, the mean \bar{x} and the fixed value α for a scale parameter b = 1 and the four shape parameters a = 1.0, 1.3, 1.6 and 2.0 of the Weibull distribution.

Table 1

In practice the number of samples available to calculate the mean value \bar{x} is limited by the reference window size. This results in a variance of the estimated mean \bar{x}, with the effect that α has to be chosen somewhat higher than calculated.

6.4 Performance Criteria (CFAR-loss)

A detector has to estimate in some way the underlying clutter distribution to be adaptive to it. In the case of Weibull distributed clutter this can be done in several ways, either by

considering the mean and variance of the distribution, or by estimating the scale and shape parameters. Due to the limited size of the reference window, the accuracy of this estimation is reduced, which leads to a variance in the estimated value and therefore to a loss in detector performance. The CFAR loss is the ratio between the SCR needed by a CFAR detector and the SCR needed by a fixed-threshold (non-CFAR) detector, whose parameters are known a priori and whose threshold is a constant that is optimal for one distribution, Ravid and Levanon (1992). For a required P_FA, the CFAR detector has at the same SCR a lower P_D than a non-CFAR detector, because of the variance in the estimated threshold. This can also be viewed the other way around: if both detectors are to reach the same P_D at a required P_FA, the CFAR detector needs a stronger signal amplitude, which means its required SCR' is higher than the SCR of the fixed-threshold detector. This leads to the expression for the CFAR loss:

CFAR\text{-}loss = \frac{SCR'(P_{FA}, P_D)}{SCR(P_{FA}, P_D)}   [5]

6.5 Training Set

The data set on which the performance tests and evaluation are based is defined in two independent parts: the background clutter distribution and the target signal scenarios. The range and number of clutter distributions is chosen such that a wide range of distributions is represented in the data set while training and testing remain possible in a reasonable time.
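The quantities of the two preceding sections can be computed in a few lines. The script below evaluates the Table 1 entries, assuming the design false alarm rate of 10^-3 used elsewhere in the experiments (this choice of P_FA is my assumption), and then eqn. [5] for an illustrative, made-up simulated SCR:

```python
import math

P_FA, b = 1e-3, 1.0
for a in (1.0, 1.3, 1.6, 2.0):
    T = b * (-math.log(P_FA)) ** (1 / a)   # threshold, Section 6.3
    mean = b * math.gamma(1 + 1 / a)       # mean of the Weibull clutter
    print(f"a={a}: T={T:.3f}  mean={mean:.3f}  alpha={T / mean:.3f}")

def scr_fixed(p_fa, p_d, a):
    """SCR a fixed-threshold detector needs to reach p_d at p_fa
    (Rayleigh target in Weibull clutter, Section 6.2)."""
    return (-math.log(p_fa)) ** (2 / a) / ((-math.log(p_d)) * math.gamma(1 + 2 / a)) - 1

scr_req = scr_fixed(P_FA, 0.5, 2.0)        # fixed-threshold requirement
scr_cfar = 2.0 * scr_req                   # made-up "simulated" CFAR requirement
cfar_loss_db = 10 * math.log10(scr_cfar / scr_req)   # eqn [5] in dB
print(round(cfar_loss_db, 2))              # -> 3.01
```

In practice `scr_cfar` would come from the simulation experiments; here it is only invented to show the ratio.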

6.5.1 Background Clutter Distribution

The training set is based on:

- 4 different Weibull distributions (shape parameter a = 1.0, 1.3, 1.6 and 2.0)
- 1 clutter level (scale parameter b = 1)

Each clutter distribution contains 130 data samples and is created 10 times from uniformly distributed random variables (see eqn. [2]). This leads to 1300 data samples for each of the four distributions.

6.5.2 Target Signal Scenarios

Background clutter is randomly distributed and independent of target signals, which means that one cannot infer target signals from patterns within the clutter, and further that it does not matter where within the clutter a target appears. Of more interest is how many targets appear, what distance they have to each other (interfering) and what amplitude they have. This project is based on single target scenarios and single cell targets, which means that not more than one target at a time is visible to the detector. As mentioned earlier in this section, a good distribution function to approximate target signal returns is the Rayleigh distribution, the special case of the Weibull distribution that is used in these experiments to generate target signal returns. The amplitude of target signal returns varies (e.g. depending on the surface and distance of a target). Therefore the ANN detector must be trained on different SCR levels to gain high detection performance; otherwise the detector could adapt to specific amplitudes instead of to signals in general. Clutter and signal are added, and the SCR can be calculated using eqn. [1] in eqn. [3].
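The clutter part of the training set described above can be generated with eqn. [2]; a sketch (function name, seeding and data layout are my choices):

```python
import numpy as np

def make_clutter_set(shapes=(1.0, 1.3, 1.6, 2.0), b=1.0,
                     n_samples=130, n_repeats=10, seed=0):
    """10 realisations of 130 samples per Weibull shape parameter,
    i.e. 1300 clutter samples for each of the four distributions."""
    rng = np.random.default_rng(seed)
    data = {}
    for a in shapes:
        u = rng.random((n_repeats, n_samples))
        data[a] = b * (-np.log(u)) ** (1 / a)   # eqn [2]
    return data

clutter = make_clutter_set()
print(clutter[1.3].size)   # -> 1300
```

Target scenarios would then be built by adding scaled Rayleigh returns (Weibull with a = 2) into chosen cells of these realisations.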

6.6 Training the MLP

Choosing the right training set was a time-consuming problem in this project. To achieve good generalisation capabilities in pattern recognition approaches, the training set of an ANN is usually kept small compared to the whole input domain. The training set for mixed clutter environments, however, had to be sufficiently large, otherwise the MLP started to memorise the input patterns instead of adapting to the specific distribution of input patterns. Memorising input patterns can be avoided by decreasing the number of hidden units; a smaller number of hidden units forces the net to generalise. On the other hand, if the number of hidden units gets too small, the ability to generalise becomes limited, which in the case of mixed clutter environments means a decrease in the ability to learn the adaptation to different clutter distributions. Initial experiments showed that the larger the training set, the better the detector performance becomes on an arbitrary test set containing patterns from the same distributions, because in the case of radar signal detection in mixed clutter environments the input set is of theoretically infinite size. During the final experiment the best training strategy was to train the MLP first on a training set with samples from only one or two Weibull distributions with a long tail (i.e. a small shape value a = 1.0). This initialised the weights with values for the worst case clutter situation (highest peaks). The second training phase was then based on the weight set of the first training, using a training set with patterns from all four Weibull distributions. Training the network on this second training set resulted in a higher false alarm rate for distributions with long tails than for distributions with short tails. Therefore the training set was changed to be asymmetric: it contained the most patterns for a shape parameter a = 1.0, 9000 patterns for a = 1.3, 6000 patterns for a = 1.6 and 3000 patterns for a = 2.0. Training the network on this large training set was very time consuming. The resulting false alarm rate was not the same for all distributions; it varied over an interval of 0.03 between the a = 1.0 and the a = 2.0 distributions.

6.7 Approaches

The aim of these experiments is to evaluate the performance of ANN-CFAR detectors in a mixed clutter environment. Two design factors influence the performance of conventional CFAR detectors: the threshold estimation algorithm and the reference window size. The threshold algorithm estimates from a set of input examples the threshold that separates target signal returns from background clutter. Most threshold algorithms are only optimal for one type of clutter distribution, and the threshold estimation performance depends strongly on the number of input examples presented to the algorithm. As mentioned before, this number is limited in radar applications. In initial experiments all ANN architectures showed problems in learning different clutter distributions. To be able to reason only about the adaptation to different clutter distributions, different clutter levels were not considered in this project. However, training an ANN on different clutter levels appeared not to be a problem, as already considered by Ramamurti (1993). It is necessary to distinguish between two main types of ANN-CFAR detectors:

- Trained on target signal detection: the ANN detector is trained to output a detection decision directly, either target detection (1) or non-target detection (0).
- Trained on threshold estimation: the ANN detector is trained to estimate an optimal threshold according to the current clutter distribution.

Three ANN approaches were chosen for the experiments. The first is a multilayer perceptron (MLP) architecture with a sliding reference window as input, similar to previous approaches in ANN radar signal detection, Ramamurti (1993) and Amoozegar and Sundareshan (1994). The second is a simple recurrent network (SRN) architecture with one input unit and a set of context units that should memorise previous inputs as a reference for the current input. The SRN was trained both on detection and on threshold estimation. This was not done for the MLP, although it would be worthwhile. The third architecture is a cascaded network (CN) with the ability to generate different sets of weights according to features of the input distribution. This section describes experiments with the three ANN-CFAR detectors and compares their performance. As a basis for comparison, a separate CA-CFAR detector for each considered background clutter distribution was chosen as a reference. In the next chapter the performance of the ANN-CFAR detectors will be compared with the conventional CA-CFAR detector.

6.7.1 Multilayer Perceptron Approach

MLPs, as mentioned before, are able to perform radar signal detection in homogeneous clutter environments. In these experiments, MLPs were trained on target signal detection in a mixed clutter environment. All experiments were based on the data set described earlier in this section. The MLP should adapt to four different Weibull distributions; an MLP can therefore either be trained directly on all four distributions, or specialised MLPs can be trained on one distribution each. These specialised networks/units can then be grouped together, with additional units providing the switching between them.

Three different types of MLPs were considered for the experiments:

- an MLP with no hidden units and one output unit, to support the assumption that already a simple MLP can be trained on target signal detection;
- an MLP with several specialised hidden units and one output unit, where each hidden unit is separately trained on one clutter distribution;
- an MLP with several hidden units and one output unit, directly trained on all four distributions.

Amoozegar and Sundareshan (1994) considered the robustness of an MLP when the size of the reference window is small. They observed that the detection performance starts to degrade very sharply when the reference window size gets smaller than 30, and that a CA-CFAR detector completely loses its detection capability for window sizes below 9, where the MLP detector still delivered adequate performance. To avoid a strong dependency on the reference window size, a window size of 31 was chosen in the experiments, so that only the performance dependency on different clutter distributions is considered. All networks were trained with a fast backpropagation algorithm with momentum and adaptive learning rate, Fu (1994).

Experiment No. 1

The first experiment should support the assumption stated in Chapter 5.1 that a neural network with a set of input units and only one output unit can be trained on radar target signal detection. Therefore four (31/1) architectures were trained, each on the training set of one background clutter distribution. The training turned out to be extremely dependent on the initial weight set. The number of learning epochs achieved in this experiment is given in Table 2 for an error goal of 0.1:

Table 2 lists the number of learning epochs for each of the four shape parameters a = 1.0, 1.3, 1.6 and 2.0.

Table 2

Initial training cycles tended to last longer for Weibull distributions with a longer tail (smaller shape parameter). A longer tail of the Weibull PDF means that the values on the right-hand side of the mean are spread over a larger interval than those on the left-hand side. This corresponds to a higher variance in the tail, which complicates the learning. Table 2 also shows this tendency, with one exception for one of the shape parameters, which might result from the random weight initialisation. For one shape parameter, no initial weight set generated by the random initialisation routine led to a successful training. The weights were instead initialised with the weight set of the unit trained on a = 1.0 and then retrained on that shape parameter; without this initialisation, training was not possible, and the total number of learning epochs therefore includes both training phases.

Figure 21, ANN performance for different Weibull clutter distributions (P_D over SCR [dB], one panel per shape parameter)

The performance of each network was compared, at a fixed false alarm rate, with a separate CA-CFAR detector, each optimal for one distribution. As shown in Figure 21, the ANN (dotted line) has almost the same performance as the CA-CFAR detector (solid line). This experiment shows clearly that an ANN with only one output unit and a single layer of weights is able to learn radar signal detection through training.
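The spirit of Experiment No. 1 can be reproduced with a tiny gradient-descent sketch: a single logistic unit over a 31-cell window, trained to flag windows whose CUT carries an added Rayleigh return. All data sizes, amplitudes and the plain log-loss training here are my simplifications, not the thesis' fast backpropagation setup:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 31                                 # window size, CUT at index 15

def windows(n, with_target):
    X = rng.weibull(2.0, (n, N))       # Rayleigh clutter (Weibull, a = 2)
    if with_target:
        X[:, 15] += 4.0 * rng.weibull(2.0, n)   # Rayleigh target in the CUT
    return X

X = np.vstack([windows(500, False), windows(500, True)])
y = np.concatenate([np.zeros(500), np.ones(500)])

w, bias, lr = np.zeros(N), 0.0, 0.05
for _ in range(2000):                  # full-batch gradient descent on log-loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + bias)))
    w -= lr * X.T @ (p - y) / len(y)
    bias -= lr * np.mean(p - y)

acc = np.mean(((X @ w + bias) > 0) == y)
print(round(acc, 3))
```

After training, the learned weights take the expected CA-CFAR-like shape: a large positive weight on the CUT and small negative weights on the reference cells.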

Experiment No. 2

This experiment considers a modular approach with five hidden units and one output unit. Four hidden units were specialised, each on one of the four background clutter distributions. One extra hidden unit supported the output unit with features of the network input. The aim of this experiment was to investigate whether one MLP with specialised hidden units can achieve the same performance on each of the four clutter distributions as each specialised hidden unit achieves on its specific distribution. The network was trained in two separate steps, inspired by cascade-correlation training as described in Fu (1994), p. 158. First, four different specialised MLPs with 31 input units and only one output unit were trained, each on one background clutter distribution. Then four of the five hidden units of the final MLP were each assigned the weights of one specialised output unit. These weights were fixed and the whole network was trained on all four distributions, so that the output weights and the weights of the fifth hidden unit could adapt to all four distributions. The training took 962 epochs for a false alarm rate of 0.001 and an error goal of 0.1. In Figure 22 the performance of the combined specialised hidden unit approach is compared with the performance of each specialised unit by itself.

Figure 22, Performance of the MLP with specialised hidden units (P_D over SCR [dB], one panel per shape parameter)

Figure 22 further shows that the detection rate starts to decrease again for higher SCR levels. Unfortunately the false alarm rate varied over a large interval on the test set, as shown in Table 3. This variation is a criterion for robustness in mixed clutter environments, which means that the detector with specialised hidden units had a poor performance.

Table 3 lists the false alarm rate P_FA of the specialised (31/5/1) approach and of the single unit (31/1) approach for the four shape parameters a = 1.0, 1.3, 1.6 and 2.0.

Table 3

During training the whole network was able to learn the training set, which shows that the network can learn and memorise a specific training set but lacks the ability to generalise from the training set to the test set.

Experiment No. 3

As opposed to the previous experiment, the MLP in this experiment was trained directly on all four clutter distributions, to compare the performance of specialised hidden units with non-specialised ones. The network was chosen to be of the same size (31/5/1) as in the previous experiment. During training the network turned out to be much more dependent on the initial weight set than in the previous experiments. In most of the training runs the network remained on the same SSE for more than 300 training epochs before it started to learn. Training this network was a time consuming task and only successful in a few cases. A better way to train the network was to start with a (31/11/1) architecture and prune it down to (31/5/1) in three steps, as described in the next experiment.

The performance in this experiment was better than that of the specialised approach in the previous experiment. In Figure 23 the MLP performance (dotted line) is compared with CA-CFAR detectors (solid line) that are optimal for the achieved false alarm rates, which are listed in the next experiment.

Figure 23, Performance of the MLP with 5 equally trained hidden units (P_D over SCR [dB], one panel per shape parameter)

An assumption about the reason for the better performance is given in Section 7.

Experiment No. 4

The previous experiment already showed that MLPs are able to perform CFAR detection in mixed clutter environments. This final MLP experiment was done to determine an optimal number of hidden units for the test set. An MLP with a (31/11/1) architecture was trained directly on all four clutter distributions and afterwards pruned. This was done by cutting away, in each step, the two hidden units with the highest number of weights close to zero. Hinton diagrams are a good way to view all weights of one layer; in this experiment some neurons always contained very small or zero weights. These neurons could be identified in the Hinton diagram, and the corresponding rows were then deleted from the weight matrix. Initial experiments showed that further training was most successful if not more than two neurons were pruned in one step. Figure 24 compares the performance of the (31/11/1) architecture (dotted line) with a fixed threshold detector (solid line) optimal for the achieved false alarm rates. It shows that the MLP performs with a monotonically increasing detection rate (for an SCR range of -20 dB to 20 dB).

Figure 24, Performance of a (31/11/1) MLP (P_D over SCR [dB], one panel per shape parameter)

To show the variation of the false alarm rate for the different distributions, Table 4 compares the ranges of the false alarm rate for 11, 9, 7 and 5 hidden units. The best performance was achieved with the (31/11/1) architecture, which had the smallest variation in the false alarm rate. To be able to compare the false alarm intervals with each other, the quotient between the upper and lower bound of each interval had to be used; the absolute interval range (difference between upper and lower bound) can only be used if the intervals do not overlap and either start or end with the same value.

Table 4 lists the false alarm rate P_FA for the four shape parameters a = 1.0, 1.3, 1.6 and 2.0 under the architectures (31/11/1), (31/9/1), (31/7/1) and (31/5/1), together with the interval scale P_FA(a = 1.0) / P_FA(a = 2.0).

Table 4

6.7.2 Simple Recurrent Network Approach

In these experiments the ability of simple recurrent networks (SRNs) to memorise previous states is compared with the sliding window approach of the former experiments. An Elman architecture with one input unit and one output unit was chosen. The fact that recurrent neural networks perform well in time series prediction inspired the choice of training on threshold estimation. The threshold depends on the statistical distribution of the input set, which is not a time series but a series of values distributed according to a distribution function, where each value depends on the set of previous values. This leads to the assumption that it should also be possible to predict the threshold from previous values.

SRN trained on target signal detection

The SRN was trained to detect whether the current input value is a target signal. The network was trained on all four clutter distributions from the training set, including different levels of target signals. After training, the performance was compared with the previous MLP approaches that were based on a sliding window input.

SRN trained on threshold prediction

The SRN was trained in the same way as in the previous experiment, with the difference that the target pattern was the threshold of an optimal detector for the specific clutter distribution and a false alarm rate of 10^-3. By predicting the next threshold value, the SRN might achieve a better performance than a CA-CFAR detector in a changing clutter environment, because predicted values might adapt earlier to changes in the clutter level. After training, the performance was compared with the previous approaches.

SRN Performance

For the performance tests the number of weights of the Elman network was, in both approaches, chosen to be the same as the number of weights in the previous MLP experiment; the number of hidden units follows from the total weight count of an Elman network with one input and one output unit.

Training the network was not successful in either approach. After many learning epochs the training remained on a high sum squared error of 19 (target signal detection) and 37 (threshold prediction), which led to a performance of both detectors far behind the performance of the MLPs and the conventional detectors. A detailed performance evaluation for this approach is therefore not given; instead, an assumption about the reasons for the failure is considered in Section 7.2.
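The weight-matching rule can be made explicit: for an Elman net with one input and one output unit, h hidden units (with biases and the h context-to-hidden connections) give h² + 3h + 1 weights, so h is chosen as the smallest value reaching the MLP's weight count. This reading of the sizing rule is my reconstruction, not a formula quoted from the thesis:

```python
import math

def elman_hidden_units(mlp_weights):
    """Smallest h with h*h + 3*h + 1 >= mlp_weights for an Elman net
    with one input unit and one output unit (biases included)."""
    return math.ceil((-3 + math.sqrt(9 + 4 * (mlp_weights - 1))) / 2)

mlp_weights = (31 + 1) * 11 + (11 + 1) * 1   # (31/11/1) MLP: 364 weights
h = elman_hidden_units(mlp_weights)
print(h)   # -> 18
```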

6.7.3 CN Approach

As seen in the previous experiments, a relatively small MLP is able to detect radar return signals in one kind of background clutter. To achieve an increased performance for different clutter distributions, the MLP approach was extended to a cascaded network (CN). The context network of the CN offers the MLP a possibility to generate different sets of weights, where each weight set could be optimal for one kind of clutter distribution. As function network, an architecture with 31 input units, 3 hidden units and 1 output unit was chosen. To enable the context network to carry out the switching, all 31 input units were also fed into the context network. The output values of the context network are the weights of the function network; this leads to 100 output units, calculated as follows:

OC = (IF + bias) · HF + (HF + bias) · OF = (31 + 1) · 3 + (3 + 1) · 1 = 100

with IF = function input units, HF = function hidden units, OF = function output units and OC = context output units.

A hidden layer introduced in the context network has two purposes in this experiment. One purpose is to reduce the number of weights in the context network: with 31 input units and 100 output units, the context network would have 3200 weights without any hidden layer, which influences the learning speed enormously. To keep the number of weights small, 3 context hidden units were chosen in this experiment; each additional hidden unit would mean more than 100 additional weights. The total number of context weights is

WC = (IC + bias) · HC + (HC + bias) · OC = (31 + 1) · 3 + (3 + 1) · 100 = 496

with IC = context input units, HC = context hidden units and WC = context weights.

For the whole cascaded network, the context weights have to be added to the number of virtual function weights calculated by the context network. This leads to a total number of 596 weights, which is more than twice as many as an MLP with a (31/8/3) architecture has.

The second purpose of the hidden layer is to give the network one more layer of weights, enabling it to model more complex input-output dependencies. The training of the CN was done in the same way as the training of the MLP approaches. After 130 learning cycles the network already achieved a much lower sum squared error (SSE) than the MLP mentioned above, but some learning cycles further on the CN got stuck at an SSE of 19. Various attempts at setting the learning rate up and down only helped to reduce the SSE to a final value of 9, which results in the same bad performance as the SRN.
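The weight bookkeeping of the CN above is easy to verify:

```python
IF, HF, OF = 31, 3, 1        # function network (31/3/1)
IC, HC = 31, 3               # context network: 31 inputs, 3 hidden units

OC = (IF + 1) * HF + (HF + 1) * OF   # context outputs = function weights
WC = (IC + 1) * HC + (HC + 1) * OC   # context-network weights
print(OC, WC, WC + OC)               # -> 100 496 596
```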

7. Evaluation

7.1 Specialised hidden units compared with direct training

It was shown that an MLP architecture with specialised hidden units achieved a lower performance than an MLP architecture of the same size trained directly on all distributions. The reason for this behaviour may be found in the ability of the network to distinguish between different background clutter distributions, which is necessary for an appropriate threshold calculation. The specialised MLP contained four specialised hidden units with fixed weights and a fifth, additional hidden unit. The final training on all four input distribution functions adjusted only the weights of the additional hidden unit and the output unit. The purpose of this training was to introduce the ability to calculate an appropriate detection result according to one of the four input distributions. After the final training, the performance on the test set resulted in a high false alarm rate compared to an MLP of the same size trained directly on all four distributions. The specialised MLP had only one hidden unit and one output unit available to distinguish between the input distributions, whereas the MLP of the same size trained directly on all distributions had the possibility to share units between both tasks: the threshold calculation and the distribution distinction.

7.2 Limited learning abilities of SRNs

Training an SRN on all four clutter distributions stalled at an SSE of 19, which resulted in a performance far behind the other detectors. Bengio et al. (1994) considered the ability of SRNs to memorise knowledge. They have shown that the memory capabilities of SRNs decrease exponentially, so that only recently passed states strongly influence the current state. Statistical estimation requires a sufficiently large sample set; otherwise the estimate has an increased variance, which lessens the detection performance. The context units of an SRN do not provide this sufficiently large set, and therefore the network has only a limited ability to learn an adaptation to different distribution functions.

7.3 Comparison between MLP and CN architecture

The CN approach showed that a CN did not achieve a good performance on the mixed clutter environment test set. The training was not successful and stalled at an SSE of 9. On the other hand, previous experiments had already shown that simple MLP architectures smaller than the function network perform well on single homogenous clutter distributions; therefore the ability of the context network to switch between different weight sets must be the reason for the failure. The context network consisted of 31 input units, 3 hidden units and 100 output units. One assumption is that the information about the input clutter distribution retained in the reduced context hidden layer is not sufficient to generate the appropriate weight for each of the 100 output units. More hidden information could be provided by adding more hidden units. Unfortunately, as mentioned before, this leads to a drastic increase in context weights and therefore slower learning. More weights can also make training more complex, which tends to paralyse the learning.
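The statistical point made in Section 7.2, that a small effective sample set yields a high-variance clutter-level estimate, can be illustrated with a short Monte Carlo sketch. Exponentially distributed clutter power is assumed here purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def threshold_estimate_std(n_cells, n_trials=5000):
    # Standard deviation of a sample-mean clutter-level estimate
    # computed from n_cells reference samples, over many trials.
    clutter = rng.exponential(scale=1.0, size=(n_trials, n_cells))
    return clutter.mean(axis=1).std()

# Few effective reference samples (a short memory) give a noisy
# estimate, hence a noisy threshold and a noisy false alarm rate.
print(threshold_estimate_std(4) > threshold_estimate_std(31))  # True
```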

Reducing the function network to a very small net offers no architectural advantage over a CA-CFAR detector. The context network in this case would do nothing other than predict the optimal weights according to the input distribution. This is similar to an approach that estimates the parameters of the input distribution, or estimates the fixed threshold parameter (alpha), as pointed out in future work. Such networks would probably have fewer weights than the context network, because they do not have as many outputs. Compared with the MLP approach, the CN approach shows that MLP architectures with fewer than half the weights are able to learn the whole task and adapt to all four distributions.

7.4 Comparison between ANN-CFAR and CA-CFAR

This project investigates the robustness of ANNs in mixed clutter environments. Most conventional detectors, as mentioned before, are only optimal for one clutter distribution. Changes in the clutter environment lead to changes in the false alarm rate and the detection performance. Robustness in this sense means that the false alarm rate should remain constant if the environment changes (CFAR). This is very hard to achieve in practice. As a measure of robustness, the interval of the false alarm rate over different distributions can be used. Table 5 shows that the scale of this interval is, in the case of the CA-CFAR, two times larger than the scale of the MLP interval.

[Table 5: False alarm rate P_FA versus Weibull shape parameter a for the MLP (31/11/1) architecture and the CA-CFAR detector, with the interval scale of P_FA over the tested shape parameters.]

Experiment No. 4 further shows that the detection performance of the MLP-CFAR detector is also very good in mixed clutter environments, which leads to a better overall performance of the MLP-CFAR than of the CA-CFAR detector.
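For reference, a minimal CA-CFAR sketch. The closed-form threshold factor below is the standard result for exponentially distributed clutter power; applying the same alpha to other distributions (e.g. Weibull with a different shape) no longer yields the design P_FA, which is exactly the robustness problem measured in Table 5:

```python
import numpy as np

def ca_cfar_alpha(n, p_fa):
    # For exponential clutter power and n reference cells,
    # P_FA = (1 + alpha/n) ** (-n); solve for the multiplier alpha.
    return n * (p_fa ** (-1.0 / n) - 1.0)

def ca_cfar_detect(cell_under_test, reference_cells, alpha):
    # Declare a target when the cell under test exceeds the
    # scaled average of the reference window.
    return cell_under_test > alpha * np.mean(reference_cells)

# Monte Carlo check of the false alarm rate on exponential clutter.
rng = np.random.default_rng(2)
n, p_fa = 16, 1e-2
alpha = ca_cfar_alpha(n, p_fa)
trials = rng.exponential(size=(200_000, n + 1))
measured = np.mean(trials[:, 0] > alpha * trials[:, 1:].mean(axis=1))
print(round(alpha, 2), measured)  # measured P_FA is close to 0.01
```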

8. Conclusion

The aim of this MSc project was to investigate whether artificial neural networks (ANNs) can be trained to perform constant false alarm rate (CFAR) detection of radar signals in mixed clutter environments. Conventional radar signal detectors are commonly designed for homogenous clutter environments. In practice, real-life clutter is difficult to model accurately, because it is non-homogenous and non-stationary, and distribution functions can at best be a good approximation. ANNs learn from sample data. If they are able to learn CFAR detection in the previously known mixed clutter environments considered in this thesis, then they should also be able to learn CFAR detection from real data measured by a radar in different clutter environments. The distributions of this clutter need not be known, because the ANN can build its own models from the measured data and be near-optimal for that environment.

8.1 Approaches

ANNs in their simplest form, with only one input layer and a single output unit, are applicable to CFAR detection in a homogenous clutter environment. On the other hand, such ANNs gain no performance advantage over other conventional detectors. This project considered more complex ANN architectures to achieve performance advantages with ANNs in mixed clutter environments.

Three architectures were investigated:

- An MLP architecture with a sliding reference window as input, similar to previous ANN approaches in radar signal detection.
- An SRN architecture with one input unit and a set of context units intended to memorise previous inputs as a reference for the current input.
- A CN architecture with the ability to switch between different weight sets according to features of the input distribution.

8.2 Achievements

The conventional CA-CFAR detector, optimal only for one type of clutter distribution, produces a high variation in the false alarm rate if it is used for clutter distributions other than the one it was designed for. The variation of the CA-CFAR detector was almost twice as large as that of the MLP-CFAR detector used in this project. This shows that MLPs achieve better performance characteristics in mixed non-Gaussian clutter environments than conventional detectors. Combining this project's results with the research of Ramamurti (1993) and Amoozegar and Sundareshan (1994) shows that MLPs trained on radar signal detection achieve better performance than conventional detectors for small reference window sizes (Amoozegar and Sundareshan), perform better than conventional detectors in non-Gaussian noise (Ramamurti), and have the ability to perform detection as a universal detector in mixed non-Gaussian clutter environments.

8.3 Future work

Several directions are possible for future work. Up to now it has not been shown that all the features achieved in independent research activities can be combined in one detector, either with a single ANN or with a hybrid of different approaches.

During the experiments, the main problem was how to train ANNs effectively on different distribution functions. Training a network not only on different clutter distributions but also on different clutter levels can become very time consuming. The main difficulty for the ANNs was to learn to distinguish between the different distributions. Therefore an investigation into parameter estimation of distribution functions, or estimation of the fixed threshold parameter α, could be worthwhile. The estimation capabilities of ANNs should be compared with statistical estimation results, for example with results from the maximum-likelihood (ML) algorithm (Ravid and Levanon, 1992). Parameters estimated by statistical algorithms have an estimation variance that depends on the number of sample values. It is likely that ANNs also deliver better results for this task, with a lower estimation variance: Amoozegar and Sundareshan (1994) already pointed out that MLPs perform better than conventional detectors for small reference windows, which is also a parameter estimation problem and therefore supports the assumption that ANNs deliver better performance in parameter estimation than statistical algorithms. Estimated parameters output by ANNs can then be used for radar signal detection as described in Ravid and Levanon (1992), who implemented an adaptive threshold detector based on estimated parameters in Weibull clutter.

Two further problems that occur in radar signal detection are the influence of interfering targets on the detection performance and the increase of the false alarm rate due to clutter edges, as discussed in Section 2. These problems were not considered in this project or in any of the previous work. Suggestions for approaches can be found in the appendix.
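As a baseline for such a comparison, the ML estimate of the Weibull shape parameter can be obtained with a simple fixed-point iteration on the likelihood equation. This is an illustrative sketch of a textbook estimator, not the procedure of Ravid and Levanon (1992):

```python
import numpy as np

def weibull_ml_shape(x, iters=100):
    # Fixed-point iteration for the ML shape estimate c, from the
    # likelihood equation 1/c = sum(x^c ln x)/sum(x^c) - mean(ln x).
    logx = np.log(x)
    c = 1.0
    for _ in range(iters):
        xc = x ** c
        c = 1.0 / ((xc * logx).sum() / xc.sum() - logx.mean())
    return c

def weibull_ml_scale(x, c):
    # ML scale estimate given the shape estimate c.
    return np.mean(x ** c) ** (1.0 / c)

rng = np.random.default_rng(3)
samples = 1.5 * rng.weibull(2.0, size=5000)   # true shape 2.0, scale 1.5
c_hat = weibull_ml_shape(samples)
b_hat = weibull_ml_scale(samples, c_hat)
```

The spread of `c_hat` and `b_hat` over repeated sample sets gives the estimation variance against which an ANN estimator would be judged.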

9. References

Amoozegar F., Sundareshan M.K. (1994), A Robust Neural Network Scheme for Constant False Alarm Rate Processing for Target Detection in Clutter Environment, American Control Conference, Baltimore, Maryland

Barton D.K. (1988), Modern Radar System Analysis, Artech House

Bengio Y., Simard P., Frasconi P. (1994), Learning Long-Term Dependencies with Gradient Descent is Difficult, IEEE Trans. on Neural Networks, vol. 5, no. 2

Bogler P.L. (1990), Radar Principles With Applications To Tracking Systems, Wiley & Sons, Inc.

Bucciarelli T., Fedele G., Parisi R. (1993), Neural Networks Based Signal Detection, NAECON 93, Daytona

Elman J.L. (1990), Finding Structure in Time, Cognitive Science 14, Ablex Publishing Corporation

Farina A., Studer F.A. (1986), A Review of CFAR Detection Techniques in Radar Systems, Microwave Journal

Farina A. (1987), Optimised Radar Processors, Peter Peregrinus Ltd. on behalf of the Institution of Electrical Engineers

Fu L. (1994), Neural Networks in Computer Intelligence, McGraw-Hill

Grossberg S. (1995), Introduction: 1995 Special Issue Automatic Target Recognition, Neural Networks (8/7-8), Pergamon

Jacobs R.A., Jordan M.I., Nowlan S.J., Hinton G.E. (1991), Adaptive Mixtures of Local Experts, Neural Computation

Jordan M. (1996), Neural Networks, CRC Handbook of Computer Science, CRC Press, Florida

Pollack J.B. (1987), Cascaded Back-Propagation on Dynamic Connectionist Networks, Proc. Ninth Conf. of the Cognitive Science Soc., Seattle

Ramamurti V., Rao S.S., Gandhi P.P. (1993), Neural Networks for Signal Detection in Non-Gaussian Noise, IEEE 1993 International Conference on Acoustics, Speech and Signal Processing (ICASSP 93), Minneapolis, MN

Ravid R., Levanon N. (1992), Maximum-likelihood CFAR for Weibull background, IEE Proceedings-F, Vol. 139, No. 3

Rumelhart D.E., McClelland J.L. and the PDP Research Group (1986), Parallel Distributed Processing, Volume 1, Cambridge, MIT Press

Stimson G.W. (1983), Introduction to Airborne Radar, Hughes Aircraft Company, El Segundo, California

Appendix

Definitions

The following definitions are taken from an electronic version of the NASA Thesaurus, August 1996.

THERMAL NOISE: The noise at radio frequency caused by thermal agitation in a dissipative body. Also called Johnson noise.

CLUTTER: Atmospheric noise, extraneous signals, etc., which tend to obscure the reception of a desired signal in a radio receiver, radarscope, etc.

FALSE ALARMS: In general, the unwanted detection of input noise. In radar, an indication of a detected target even though one does not exist, due to noise or interference levels exceeding the set threshold of detection.

SIGNAL TO NOISE RATIOS: Ratios which measure the comprehensibility of a data source or transmission link, usually expressed as the root mean square signal amplitude divided by the root mean square noise amplitude.

CUMULATIVE DISTRIBUTION FUNCTION: All random variables (discrete and continuous) have a cumulative distribution function. It is a function giving the probability that the random variable X is less than or equal to x, for every value x.

ESTIMATE: An estimate is an indication of the value of an unknown quantity based on observed data.

PROBABILITY DISTRIBUTION: The probability distribution of a discrete random variable is a list of probabilities associated with each of its possible values. It is also sometimes called the probability function or the probability mass function.


More information

Application of Artificial Neural Networks System for Synthesis of Phased Cylindrical Arc Antenna Arrays

Application of Artificial Neural Networks System for Synthesis of Phased Cylindrical Arc Antenna Arrays International Journal of Communication Engineering and Technology. ISSN 2277-3150 Volume 4, Number 1 (2014), pp. 7-15 Research India Publications http://www.ripublication.com Application of Artificial

More information

Target Classification in Forward Scattering Radar in Noisy Environment

Target Classification in Forward Scattering Radar in Noisy Environment Target Classification in Forward Scattering Radar in Noisy Environment Mohamed Khala Alla H.M, Mohamed Kanona and Ashraf Gasim Elsid School of telecommunication and space technology, Future university

More information

NNC for Power Electronics Converter Circuits: Design & Simulation

NNC for Power Electronics Converter Circuits: Design & Simulation NNC for Power Electronics Converter Circuits: Design & Simulation 1 Ms. Kashmira J. Rathi, 2 Dr. M. S. Ali Abstract: AI-based control techniques have been very popular since the beginning of the 90s. Usually,

More information

EENG473 Mobile Communications Module 3 : Week # (12) Mobile Radio Propagation: Small-Scale Path Loss

EENG473 Mobile Communications Module 3 : Week # (12) Mobile Radio Propagation: Small-Scale Path Loss EENG473 Mobile Communications Module 3 : Week # (12) Mobile Radio Propagation: Small-Scale Path Loss Introduction Small-scale fading is used to describe the rapid fluctuation of the amplitude of a radio

More information

IMPLEMENTATION OF NEURAL NETWORK IN ENERGY SAVING OF INDUCTION MOTOR DRIVES WITH INDIRECT VECTOR CONTROL

IMPLEMENTATION OF NEURAL NETWORK IN ENERGY SAVING OF INDUCTION MOTOR DRIVES WITH INDIRECT VECTOR CONTROL IMPLEMENTATION OF NEURAL NETWORK IN ENERGY SAVING OF INDUCTION MOTOR DRIVES WITH INDIRECT VECTOR CONTROL * A. K. Sharma, ** R. A. Gupta, and *** Laxmi Srivastava * Department of Electrical Engineering,

More information

Chapter 4 SPEECH ENHANCEMENT

Chapter 4 SPEECH ENHANCEMENT 44 Chapter 4 SPEECH ENHANCEMENT 4.1 INTRODUCTION: Enhancement is defined as improvement in the value or Quality of something. Speech enhancement is defined as the improvement in intelligibility and/or

More information

Surveillance and Calibration Verification Using Autoassociative Neural Networks

Surveillance and Calibration Verification Using Autoassociative Neural Networks Surveillance and Calibration Verification Using Autoassociative Neural Networks Darryl J. Wrest, J. Wesley Hines, and Robert E. Uhrig* Department of Nuclear Engineering, University of Tennessee, Knoxville,

More information

Automatic Transcription of Monophonic Audio to MIDI

Automatic Transcription of Monophonic Audio to MIDI Automatic Transcription of Monophonic Audio to MIDI Jiří Vass 1 and Hadas Ofir 2 1 Czech Technical University in Prague, Faculty of Electrical Engineering Department of Measurement vassj@fel.cvut.cz 2

More information

A Numerical Approach to Understanding Oscillator Neural Networks

A Numerical Approach to Understanding Oscillator Neural Networks A Numerical Approach to Understanding Oscillator Neural Networks Natalie Klein Mentored by Jon Wilkins Networks of coupled oscillators are a form of dynamical network originally inspired by various biological

More information

Learning New Articulator Trajectories for a Speech Production Model using Artificial Neural Networks

Learning New Articulator Trajectories for a Speech Production Model using Artificial Neural Networks Learning New Articulator Trajectories for a Speech Production Model using Artificial Neural Networks C. S. Blackburn and S. J. Young Cambridge University Engineering Department (CUED), England email: csb@eng.cam.ac.uk

More information

Chapter 2 Channel Equalization

Chapter 2 Channel Equalization Chapter 2 Channel Equalization 2.1 Introduction In wireless communication systems signal experiences distortion due to fading [17]. As signal propagates, it follows multiple paths between transmitter and

More information

Application of Feed-forward Artificial Neural Networks to the Identification of Defective Analog Integrated Circuits

Application of Feed-forward Artificial Neural Networks to the Identification of Defective Analog Integrated Circuits eural Comput & Applic (2002)11:71 79 Ownership and Copyright 2002 Springer-Verlag London Limited Application of Feed-forward Artificial eural etworks to the Identification of Defective Analog Integrated

More information

THE USE OF ARTIFICIAL NEURAL NETWORKS IN THE ESTIMATION OF THE PERCEPTION OF SOUND BY THE HUMAN AUDITORY SYSTEM

THE USE OF ARTIFICIAL NEURAL NETWORKS IN THE ESTIMATION OF THE PERCEPTION OF SOUND BY THE HUMAN AUDITORY SYSTEM INTERNATIONAL JOURNAL ON SMART SENSING AND INTELLIGENT SYSTEMS VOL. 8, NO. 3, SEPTEMBER 2015 THE USE OF ARTIFICIAL NEURAL NETWORKS IN THE ESTIMATION OF THE PERCEPTION OF SOUND BY THE HUMAN AUDITORY SYSTEM

More information

EUSIPCO

EUSIPCO EUSIPCO 23 56974827 COMPRESSIVE SENSING RADAR: SIMULATION AND EXPERIMENTS FOR TARGET DETECTION L. Anitori, W. van Rossum, M. Otten TNO, The Hague, The Netherlands A. Maleki Columbia University, New York

More information

A Comparison of Particle Swarm Optimization and Gradient Descent in Training Wavelet Neural Network to Predict DGPS Corrections

A Comparison of Particle Swarm Optimization and Gradient Descent in Training Wavelet Neural Network to Predict DGPS Corrections Proceedings of the World Congress on Engineering and Computer Science 00 Vol I WCECS 00, October 0-, 00, San Francisco, USA A Comparison of Particle Swarm Optimization and Gradient Descent in Training

More information

Mobile Radio Propagation: Small-Scale Fading and Multi-path

Mobile Radio Propagation: Small-Scale Fading and Multi-path Mobile Radio Propagation: Small-Scale Fading and Multi-path 1 EE/TE 4365, UT Dallas 2 Small-scale Fading Small-scale fading, or simply fading describes the rapid fluctuation of the amplitude of a radio

More information

J. C. Brégains (Student Member, IEEE), and F. Ares (Senior Member, IEEE).

J. C. Brégains (Student Member, IEEE), and F. Ares (Senior Member, IEEE). ANALYSIS, SYNTHESIS AND DIAGNOSTICS OF ANTENNA ARRAYS THROUGH COMPLEX-VALUED NEURAL NETWORKS. J. C. Brégains (Student Member, IEEE), and F. Ares (Senior Member, IEEE). Radiating Systems Group, Department

More information

Smart antenna technology

Smart antenna technology Smart antenna technology In mobile communication systems, capacity and performance are usually limited by two major impairments. They are multipath and co-channel interference [5]. Multipath is a condition

More information

Enhancement of Speech Signal Based on Improved Minima Controlled Recursive Averaging and Independent Component Analysis

Enhancement of Speech Signal Based on Improved Minima Controlled Recursive Averaging and Independent Component Analysis Enhancement of Speech Signal Based on Improved Minima Controlled Recursive Averaging and Independent Component Analysis Mohini Avatade & S.L. Sahare Electronics & Telecommunication Department, Cummins

More information

HIGH ORDER MODULATION SHAPED TO WORK WITH RADIO IMPERFECTIONS

HIGH ORDER MODULATION SHAPED TO WORK WITH RADIO IMPERFECTIONS HIGH ORDER MODULATION SHAPED TO WORK WITH RADIO IMPERFECTIONS Karl Martin Gjertsen 1 Nera Networks AS, P.O. Box 79 N-52 Bergen, Norway ABSTRACT A novel layout of constellations has been conceived, promising

More information

Neural Network Classifier and Filtering for EEG Detection in Brain-Computer Interface Device

Neural Network Classifier and Filtering for EEG Detection in Brain-Computer Interface Device Neural Network Classifier and Filtering for EEG Detection in Brain-Computer Interface Device Mr. CHOI NANG SO Email: cnso@excite.com Prof. J GODFREY LUCAS Email: jglucas@optusnet.com.au SCHOOL OF MECHATRONICS,

More information

Recurrent neural networks Modelling sequential data. MLP Lecture 9 Recurrent Networks 1

Recurrent neural networks Modelling sequential data. MLP Lecture 9 Recurrent Networks 1 Recurrent neural networks Modelling sequential data MLP Lecture 9 Recurrent Networks 1 Recurrent Networks Steve Renals Machine Learning Practical MLP Lecture 9 16 November 2016 MLP Lecture 9 Recurrent

More information

Simulating and Testing of Signal Processing Methods for Frequency Stepped Chirp Radar

Simulating and Testing of Signal Processing Methods for Frequency Stepped Chirp Radar Test & Measurement Simulating and Testing of Signal Processing Methods for Frequency Stepped Chirp Radar Modern radar systems serve a broad range of commercial, civil, scientific and military applications.

More information

AN ANN BASED FAULT DETECTION ON ALTERNATOR

AN ANN BASED FAULT DETECTION ON ALTERNATOR AN ANN BASED FAULT DETECTION ON ALTERNATOR Suraj J. Dhon 1, Sarang V. Bhonde 2 1 (Electrical engineering, Amravati University, India) 2 (Electrical engineering, Amravati University, India) ABSTRACT: Synchronous

More information

NEURALNETWORK BASED CLASSIFICATION OF LASER-DOPPLER FLOWMETRY SIGNALS

NEURALNETWORK BASED CLASSIFICATION OF LASER-DOPPLER FLOWMETRY SIGNALS NEURALNETWORK BASED CLASSIFICATION OF LASER-DOPPLER FLOWMETRY SIGNALS N. G. Panagiotidis, A. Delopoulos and S. D. Kollias National Technical University of Athens Department of Electrical and Computer Engineering

More information

UWB Small Scale Channel Modeling and System Performance

UWB Small Scale Channel Modeling and System Performance UWB Small Scale Channel Modeling and System Performance David R. McKinstry and R. Michael Buehrer Mobile and Portable Radio Research Group Virginia Tech Blacksburg, VA, USA {dmckinst, buehrer}@vt.edu Abstract

More information

ESE531 Spring University of Pennsylvania Department of Electrical and System Engineering Digital Signal Processing

ESE531 Spring University of Pennsylvania Department of Electrical and System Engineering Digital Signal Processing University of Pennsylvania Department of Electrical and System Engineering Digital Signal Processing ESE531, Spring 2017 Final Project: Audio Equalization Wednesday, Apr. 5 Due: Tuesday, April 25th, 11:59pm

More information

2 TD-MoM ANALYSIS OF SYMMETRIC WIRE DIPOLE

2 TD-MoM ANALYSIS OF SYMMETRIC WIRE DIPOLE Design of Microwave Antennas: Neural Network Approach to Time Domain Modeling of V-Dipole Z. Lukes Z. Raida Dept. of Radio Electronics, Brno University of Technology, Purkynova 118, 612 00 Brno, Czech

More information

Reduction of PAR and out-of-band egress. EIT 140, tom<at>eit.lth.se

Reduction of PAR and out-of-band egress. EIT 140, tom<at>eit.lth.se Reduction of PAR and out-of-band egress EIT 140, tomeit.lth.se Multicarrier specific issues The following issues are specific for multicarrier systems and deserve special attention: Peak-to-average

More information

S Simulation program SEAMCAT

S Simulation program SEAMCAT S-72.333 Post Graduate Course in Radiocommunications Spring 2001 Simulation program SEAMCAT (The Spectrum Engineering Advanced Monte Carlo Analysis Tool) Pekka Ollikainen pekka.ollikainen@thk.fi Page 1

More information

Hardware Implementation of Proposed CAMP algorithm for Pulsed Radar

Hardware Implementation of Proposed CAMP algorithm for Pulsed Radar 45, Issue 1 (2018) 26-36 Journal of Advanced Research in Applied Mechanics Journal homepage: www.akademiabaru.com/aram.html ISSN: 2289-7895 Hardware Implementation of Proposed CAMP algorithm for Pulsed

More information

An Efficient Color Image Segmentation using Edge Detection and Thresholding Methods

An Efficient Color Image Segmentation using Edge Detection and Thresholding Methods 19 An Efficient Color Image Segmentation using Edge Detection and Thresholding Methods T.Arunachalam* Post Graduate Student, P.G. Dept. of Computer Science, Govt Arts College, Melur - 625 106 Email-Arunac682@gmail.com

More information

Pulse Compression Techniques of Phase Coded Waveforms in Radar

Pulse Compression Techniques of Phase Coded Waveforms in Radar International Journal of Scientific & Engineering Research Volume 3, Issue 8, August-2012 1 Pulse Compression Techniques of Phase d Waveforms in Radar Mohammed Umar Shaik, V.Venkata Rao Abstract Matched

More information

Image Enhancement in Spatial Domain

Image Enhancement in Spatial Domain Image Enhancement in Spatial Domain 2 Image enhancement is a process, rather a preprocessing step, through which an original image is made suitable for a specific application. The application scenarios

More information

Audio Restoration Based on DSP Tools

Audio Restoration Based on DSP Tools Audio Restoration Based on DSP Tools EECS 451 Final Project Report Nan Wu School of Electrical Engineering and Computer Science University of Michigan Ann Arbor, MI, United States wunan@umich.edu Abstract

More information

COMPUTATION OF RADIATION EFFICIENCY FOR A RESONANT RECTANGULAR MICROSTRIP PATCH ANTENNA USING BACKPROPAGATION MULTILAYERED PERCEPTRONS

COMPUTATION OF RADIATION EFFICIENCY FOR A RESONANT RECTANGULAR MICROSTRIP PATCH ANTENNA USING BACKPROPAGATION MULTILAYERED PERCEPTRONS ISTANBUL UNIVERSITY- JOURNAL OF ELECTRICAL & ELECTRONICS ENGINEERING YEAR VOLUME NUMBER : 23 : 3 : (663-67) COMPUTATION OF RADIATION EFFICIENCY FOR A RESONANT RECTANGULAR MICROSTRIP PATCH ANTENNA USING

More information

Chapter 2 Transformation Invariant Image Recognition Using Multilayer Perceptron 2.1 Introduction

Chapter 2 Transformation Invariant Image Recognition Using Multilayer Perceptron 2.1 Introduction Chapter 2 Transformation Invariant Image Recognition Using Multilayer Perceptron 2.1 Introduction A multilayer perceptron (MLP) [52, 53] comprises an input layer, any number of hidden layers and an output

More information

Artificial Neural Networks

Artificial Neural Networks Artificial Neural Networks ABSTRACT Just as life attempts to understand itself better by modeling it, and in the process create something new, so Neural computing is an attempt at modeling the workings

More information

An Approximation Algorithm for Computing the Mean Square Error Between Two High Range Resolution RADAR Profiles

An Approximation Algorithm for Computing the Mean Square Error Between Two High Range Resolution RADAR Profiles IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS, VOL., NO., JULY 25 An Approximation Algorithm for Computing the Mean Square Error Between Two High Range Resolution RADAR Profiles John Weatherwax

More information

Systolic modular VLSI Architecture for Multi-Model Neural Network Implementation +

Systolic modular VLSI Architecture for Multi-Model Neural Network Implementation + Systolic modular VLSI Architecture for Multi-Model Neural Network Implementation + J.M. Moreno *, J. Madrenas, J. Cabestany Departament d'enginyeria Electrònica Universitat Politècnica de Catalunya Barcelona,

More information

Auditory modelling for speech processing in the perceptual domain

Auditory modelling for speech processing in the perceptual domain ANZIAM J. 45 (E) ppc964 C980, 2004 C964 Auditory modelling for speech processing in the perceptual domain L. Lin E. Ambikairajah W. H. Holmes (Received 8 August 2003; revised 28 January 2004) Abstract

More information

CHAPTER 6 SIGNAL PROCESSING TECHNIQUES TO IMPROVE PRECISION OF SPECTRAL FIT ALGORITHM

CHAPTER 6 SIGNAL PROCESSING TECHNIQUES TO IMPROVE PRECISION OF SPECTRAL FIT ALGORITHM CHAPTER 6 SIGNAL PROCESSING TECHNIQUES TO IMPROVE PRECISION OF SPECTRAL FIT ALGORITHM After developing the Spectral Fit algorithm, many different signal processing techniques were investigated with the

More information