Proceedings of International Joint Conference on Neural Networks, Dallas, Texas, USA, August 4-9, 2013

Design Study of Efficient Digital Order-Based STDP Neuron Implementations for Extracting Temporal Features

David Roclin, Olivier Bichler, Christian Gamrat, Simon J. Thorpe and Jacques-Olivier Klein

Abstract — Spiking neural networks are naturally asynchronous and use pulses to carry information. In this paper, we consider implementing such networks on a digital chip. Using an event-based simulator, we start from a previously established simulation that emulates an analog spiking neural network able to extract complex and overlapping, temporally correlated features. We modify this simulation to allow an easier integration in an embedded digital implementation. We first show that a four-bit synaptic weight resolution is enough to achieve the best performance, although the network remains functional down to a three-bit weight resolution. We then show that a linear leak can be implemented to simplify the calculation of the neurons' leakage. Finally, we demonstrate that an order-based STDP, in which only a small, fixed number of synapses are potentiated, is efficient for feature extraction. A simulation including these modifications, which lighten and increase the efficiency of a digital spiking neural network implementation, shows that the learning behavior is not affected, with a recognition rate of 98% in a car trajectory detection application.

I. INTRODUCTION

DURING the last ten years, hardware implementations of Spiking Neural Networks (SNN) have been proposed using both analog hardware (as for example in the BrainScaleS/FACETS [1]-[3] and Neurogrid [4] projects) and digital hardware (as in the SpiNNaker [5] and CAVIAR [6] projects). These hardware implementations differ from each other in their model complexity and purpose.
The BrainScaleS project aims to emulate the brain at a fine scale, with the purpose of helping neuroscientists simulate small parts of the human brain 10^3 to 10^5 times faster than biology. In contrast, SpiNNaker relies on simple models to simulate complex networks, with the aim of running in biological real time for integration with robotic systems [5]. As Spike-Timing-Dependent Plasticity (STDP) is believed to be one of the primary learning rules in SNNs [7], [8], its implementation is a major focus in these projects, and is challenging for both analog and digital hardware [9]-[12]. This learning mechanism has been proven useful in neuromorphic applications including feature extraction [13]-[15] and handwriting recognition [16].

This work was partially supported by the French National Agency for Research (ANR) Blanc program, within the project NEuroMorphic hardware for Smart vision Sensor (NEMESIS), under contract ANR-11-BS. D. Roclin, O. Bichler and C. Gamrat are with the CEA, LIST, Laboratory For Enhancing Reliability of Embedded Systems, Gif-sur-Yvette, France (corresponding authors: david.roclin@cea.fr, olivier.bichler@cea.fr, christian.gamrat@cea.fr). S. J. Thorpe is with the CNRS, Université Toulouse 3, Centre de Recherche Cerveau & Cognition, Toulouse, France (simon.thorpe@cerco.ups-tlse.fr). J.-O. Klein is with the IEF, Univ. Paris-Sud, Orsay, France (jacques-olivier.klein@u-psud.fr).

In a vision application, we showed by simulation that simple neurons arranged in a Winner-Take-All feedforward network can extract complex and overlapping, temporally correlated features in a video stream [15]. In order to reduce the complexity of an SNN for implementation, especially on an embedded digital architecture where integration is a major constraint, we propose to evaluate the impact on feature extraction efficiency of three simplifications:

1) Limited synaptic weight bit resolution.
2) Implementation of a linear leak to simplify the calculation of neuronal potentials.

3) Implementation of an order-based STDP, where only a fixed number of the most recently activated synapses get potentiated.

Fig. 1. The three modifications proposed for a SNN: 1) evaluation of the impact of the synaptic weight bit resolution; 2) implementation of a linear leak to simplify the calculation of neuronal potentials; 3) implementation of an order-based STDP, where a fixed number of the most recently activated synapses get potentiated.

We will show that these simplifications still allow good learning performance for temporal pattern extraction from an asynchronous video stream. The paper is organized as follows: a background review of the neuron model and the STDP implementation in the SNN is given in section II, and we describe the methodology and simulation environment in section III. Experiments and results are presented in section IV and, finally, we discuss our findings and conclusions in section V.

II. BACKGROUND

A. Neuron Model

The neurons are Leaky Integrate-and-Fire (LIF) and are described in [15]. They integrate pulses from input neurons, weighted by their synapses. When the membrane potential (or integration) of a neuron reaches a defined threshold, the neuron emits a spike to the output of the system and then enters a refractory period T_refrac. In addition, the other neurons are inhibited during a period T_inhibit. The general equation governing a LIF neuron is shown in equation (1), where τ_leak is the leak time constant, u(t) is the dimensionless membrane potential, I(t) represents the current flowing into the neuron and α is a constant.

τ_leak · du(t)/dt = −u(t) + α·I(t)   (1)

B. Spike-Timing-Dependent Plasticity

The STDP unsupervised learning paradigm extracts temporally correlated features from the input data of an SNN. In this work, we use a simplified STDP rule described in [15]. This rule uses the time difference on a synapse between the last pre-synaptic spike (T_pre) and the post-synaptic spike (T_post) to determine whether this synapse undergoes a Long Term Depression (LTD) or a Long Term Potentiation (LTP). If ΔT = T_post − T_pre falls within the LTP window T_LTP, then the synaptic weight is increased as shown in equation (2). Otherwise, the synaptic weight is systematically decreased as shown in equation (3). In both cases, the weight change magnitudes Δw+ and Δw− are constant and independent of ΔT.

LTP: 0 < ΔT < T_LTP : w = w + Δw+   (2)
LTD: otherwise : w = w − Δw−   (3)

III. METHODOLOGY

In this paper, we make step-by-step modifications to the functional analog and asynchronous neural network that we introduced in [15], in order to simplify the digital hardware needed to implement such a network.

A. Simulation Environment

Fig. 2. Architecture of the SNN, consisting of one layer of neurons fully connected to the TMPDIFF128 DVS AER camera (128 × 128 spiking pixels). Lateral inhibition between the output neurons is also implemented.
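To make the models above concrete, here is a minimal Python sketch of the event-driven LIF update (the integrated form of equation 1) and the simplified STDP rule (equations 2-3). All constants and function names are illustrative assumptions, not taken from the authors' Xnet implementation:

```python
import math

TAU_LEAK = 5.0                 # leak time constant (ms) -- illustrative value
T_LTP = 2.0                    # LTP window (ms) -- illustrative value
DW_PLUS, DW_MINUS = 1.0, 0.5   # weight inc/decrement -- illustrative values

def integrate(u, dt, w):
    """Event-driven LIF update: decay the membrane potential over the
    elapsed time dt, then integrate the incoming synaptic weight."""
    return u * math.exp(-dt / TAU_LEAK) + w

def stdp(weights, t_pre, t_post):
    """Simplified STDP rule: potentiate synapses whose last pre-synaptic
    spike fell inside the LTP window, depress all the others."""
    for syn, tp in t_pre.items():
        if 0 < t_post - tp < T_LTP:
            weights[syn] += DW_PLUS    # LTP (equation 2)
        else:
            weights[syn] -= DW_MINUS   # LTD (equation 3)
    return weights
```

Note that the rule only needs the time of the *last* pre-synaptic spike per synapse, which is what makes the later FIFO-based reformulation possible.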
In the original simulation, we demonstrated that a first layer could learn partial trajectories and that a second layer could combine them to reconstitute the complete trajectory. In this paper, only the first layer is kept, which performs equally well for the purpose of car detection. As shown in figure 2, the network has one layer of neurons fully connected to the output of a TMPDIFF128 Dynamic Vision Sensor (DVS) that uses the Address Event Representation (AER) [17] (recorded data from [18]). The camera was placed on a bridge over a freeway and filmed the cars passing underneath. In the sequence, cars pass over a period of roughly one minute on a six-lane freeway. The camera resolution is 128 × 128 and each pixel can send two types of events: ON when the light intensity increases, and OFF when the light intensity decreases. These events are transmitted to the SNN asynchronously using the AER protocol. Each neuron in the network has 32,768 synapses (128 × 128 × 2). After learning, the neurons become specialized to specific traffic lanes. This network is able to learn to detect cars passing in each traffic lane in a completely unsupervised way. In the following experiments, we label the best neuron for each lane: Lane 1, Lane 2, ..., Lane 6, respectively. Keeping only the best neuron for each lane, a recognition rate of 98% is achieved. The neuron parameters are identical. The original setup used analog synapses, which have quasi-continuous weight levels (ignoring variability). Each synaptic weight is initialized at a random value.

B. Xnet Simulator

We used the event-based simulator developed by our group, called Xnet. This simulator, developed to answer the need for a nanodevice-oriented simulator, allows a vast variety of spiking neural network configurations to be tested, with the freedom to modify the behavior of both neurons and synapses. Although we are working with recorded data, the simulation can be run online in real time. In the following simulations, seven major parameters can be adjusted.
They are presented in table I.

TABLE I
LIST OF NEURON AND SYNAPSE PARAMETERS IN XNET.

W_len : synaptic weight resolution
I_thres : neuron threshold
T_LTP : LTP time window
T_refrac : refractory period
T_inhibit : inhibition period
τ_leak : leak time constant
(Δw+, Δw−) : LTP and LTD weight increment/decrement values

The simulation tool offers the possibility of calculating the error for each lane by comparing each output neuron's spiking activity with a reference data file, as shown in figure 3 (top). These errors are needed to grade how well the network learns the different traffic lanes, in order to adjust and identify the best set of parameters.
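The per-lane error used for this grading — convolve each spike train with a Gaussian and integrate the absolute difference of the two continuous signals — can be sketched as follows. The Gaussian width, time step and helper names are illustrative assumptions, not taken from Xnet:

```python
import math

def gaussian(t, sigma=1.0):
    """Unnormalized Gaussian kernel used to smooth a spike train."""
    return math.exp(-t * t / (2.0 * sigma * sigma))

def activity(spikes, t, sigma=1.0):
    """Continuous activity signal: spike train convolved with the Gaussian."""
    return sum(gaussian(t - s, sigma) for s in spikes)

def lane_error(reference, neuron, t0, t1, dt=0.01, sigma=1.0):
    """Integral of |A_reference * phi - A_neuron * phi| over [t0, t1],
    approximated with a Riemann sum."""
    err, t = 0.0, t0
    while t < t1:
        err += abs(activity(reference, t, sigma)
                   - activity(neuron, t, sigma)) * dt
        t += dt
    return err
```

Identical spike trains yield an error of zero; each unmatched spike contributes roughly the area of one Gaussian kernel, so the error approximates the number of misses plus false positives.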

Taking the difference of two discrete spike trains directly is not meaningful, as shown in figure 3 (top). We therefore convolve each spike train with a Gaussian function to produce two continuous signals, as shown in figure 3 (middle). To generate an error for a specific lane, we compute the integral of the absolute value of the difference between these two continuous signals (figure 3, bottom), as shown in equation (4), where A represents the spike train activity and φ a Gaussian function.

Error = ∫ |(A_reference ∗ φ) − (A_neuron ∗ φ)| dt   (4)

An error of one represents either a miss (the neuron did not spike when a car was present in this lane) or a false positive (the neuron spiked but there was no car). In figure 3, the output neuron emits one false positive and misses two cars in the sequence. This lane has an error of 3.3, which closely matches the number of mistakes (3).

To find the best set of parameters for each network architecture and for each neuron/synapse model combination, our simulator offers the possibility to perform genetic evolutions. Each parameter starts at a given value with a standard deviation. In this paper, the genetic evolution ran 30 generations with 10 simulations per generation.

IV. EXPERIMENTS AND RESULTS

A. Synaptic Weight Resolution

The bit resolution used for synaptic weights is a question that has already been tackled by studies showing that SNNs are able to detect correlated activities with weight resolutions as low as 4 bits [19]. For a digital SNN, where area is a major constraint, the objective is to find the lowest bit resolution at which the network maintains the ability to learn and detect visual features. For an analog SNN, where a synapse could be implemented by a single memristor [9]-[11], this study will help determine the minimum number of resistance levels a memristor would require for such applications.

Results: to find the lowest acceptable resolution, we ran genetic evolutions for each bit resolution and compared the error results. We ran 7 evolutions, from 8-bit synapses down to 2-bit synapses. The results show an average (over 10 runs) of four errors for each lane for all bit resolutions except 2 bits, as shown in figure 4. This means that the network is functional down to 3 bits. The parameters resulting from the genetic evolutions are presented in table II.
TABLE II
VALUES OF THE NEURON PARAMETERS FOR DIFFERENT SYNAPTIC WEIGHT BIT RESOLUTIONS (2 TO 8 BITS), CORRESPONDING TO FIGURE 4. COLUMNS: W_len (bits), I_thres (/W_max), τ_leak (ms), T_LTP (ms), T_inhibit (ms), T_refrac (ms), (Δw+, Δw−); (Δw+, Δw−) = (,1) AT EVERY RESOLUTION.

Fig. 3. Example of error calculation: (top) spike train comparison; (middle) the two convolution products between each spike train and a Gaussian function; (bottom) absolute value of the difference of the two convolution products.

Fig. 4. Average error for each lane over 10 runs, from 2- to 8-bit synaptic weight resolution. Lanes marked with a red cross are not learned reliably over the 10 runs and generated a high error. Error bars represent ±1 standard deviation.

From these results, the neuron threshold I_thres is the only parameter which evolves significantly, since it depends on the maximum synaptic value. We asked whether it was possible to find a common set of parameters that works at all bit resolutions (2 bits to 8 bits) by normalizing I_thres with the maximum weight W_max = 2^W_len − 1 (for example, for a 3-bit synaptic weight, I_thres becomes I_thres/W_max = I_thres/7). We ran an extensive genetic evolution and the best set of parameters is shown in table III. The average errors over 10 runs are presented in figure 5. The architecture is the same as described earlier, with one layer of neurons.
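The threshold normalization described above, together with weight clamping to the representable range, can be sketched as follows; the helper names are hypothetical:

```python
def w_max(w_len):
    """Maximum synaptic weight at a w_len-bit resolution: W_max = 2^W_len - 1."""
    return (1 << w_len) - 1

def absolute_threshold(i_thres_norm, w_len):
    """Recover the absolute neuron threshold from its normalized value,
    so one normalized setting serves every bit resolution."""
    return i_thres_norm * w_max(w_len)

def clamp_weight(w, w_len):
    """Keep a synaptic weight inside its representable range [0, W_max]."""
    return max(0, min(int(round(w)), w_max(w_len)))
```

With this scheme, changing the weight resolution only rescales the threshold; the normalized parameter set itself stays fixed.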
These results show that it is possible to find a common set of parameters that works at all bit resolutions.

TABLE III
VALUES OF THE NEURON PARAMETERS FOR THE RESULTS PRESENTED IN FIGURE 5. COLUMNS: W_len (bits), I_thres (/W_max), τ_leak (ms), T_LTP (ms), T_inhibit (ms), T_refrac (ms), (Δw+, Δw−). W_len RANGES FROM 2 TO 8 BITS; (Δw+, Δw−) = (3,1).

Fig. 5. Simulation scores with a common set of parameters for every synaptic weight bit resolution.

This allows a digital architecture to vary the synaptic bit resolution without changing any of the neuron parameters. In the following experiments, we used 4-bit synaptic weights, which offer the best performance for a small number of bits. However, the network remains functional down to 3 bits.

B. Linear Leak

The neurons of the SNN are Leaky Integrate-and-Fire; hence the leak is a major component of their behavior. The equation governing a neuron's integration update at time t + Δt is shown in equation (5), where t is the previous update time, τ_leak is the leak time constant and w is the synaptic weight to integrate:

u_{t+Δt} = u_t · exp(−Δt/τ_leak) + w   (5)

An implementation of an exponential leak on a digital system is space-consuming and adds latency due to the calculation complexity. For example, it requires about 37 equivalent gates to implement an exponential function in a floating-point unit [20]. Another solution is to sample the exponential term of the equation beforehand and store it in memory (lookup table). This solution, although faster, is also space-consuming. We therefore propose to implement a linear leak to eliminate these costs. The equation governing a linear leak is shown below, where V_leak is the leak rate in s^−1 (assuming that the integration is dimensionless):

u_{t+Δt} = max(0, u_t − Δt · V_leak) + w   (6)

In a digital environment, the neuron's membrane potential (or integration) can be updated in an event-based fashion or synchronously.

a) Synchronous update: the membrane potential is updated at a fixed time step.
Therefore, Δt = T_CLOCK is a constant, which makes the exponential term a constant. The exponential leak update is then equivalent to a single multiplication; with a linear leak, the update is equivalent to a single subtraction.

Exponential: u_{t+Δt} = u_t · C_exp   (7)
Linear: u_{t+Δt} = max(0, u_t − C_lin)   (8)

b) Event-based update: in this case, the potential is updated only upon receiving an event (i.e. an incoming spike). Therefore, Δt is variable and must be evaluated by the system by implementing a digital counter for each output neuron. For the exponential leak, the equivalent of two multiplications and an exponential has to be performed (equation 5). For the linear leak, a multiplication followed by a subtraction is enough (equation 6). Whichever updating method the designer chooses, event-based or synchronous, the linear leak is always simpler to implement than an exponential one.

Results: our network was simulated with a linear leak. To simplify and accelerate the simulation, the number of neurons was reduced, which proved to be sufficient, with 4-bit synaptic weights. The genetic evolution generated the parameters shown in table IV. The average error over 10 runs is presented in figure 6. These results show a minimal increase of the errors, which indicates that the linear leak does not affect the learning capability.

TABLE IV
VALUES OF THE NEURON PARAMETERS FOR THE LINEAR LEAK RESULTS PRESENTED IN FIGURE 6. COLUMNS: W_len (bits), I_thres (/W_max), V_leak (ms^−1), T_LTP (ms), T_inhibit (ms), T_refrac (ms), (Δw+, Δw−) = (3,1).

Fig. 6. Errors for each lane after applying a linear leak to the neuron potential.
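The two event-based update rules (equations 5 and 6) can be compared side by side in a short sketch; the leak constants are illustrative values, not the ones found by the genetic evolution:

```python
import math

TAU_LEAK = 5.0   # exponential leak time constant (ms) -- illustrative
V_LEAK = 0.2     # linear leak rate (per ms) -- illustrative

def exp_leak_update(u, dt, w):
    """Event-based exponential leak (equation 5): two multiplications
    plus an exponential evaluation."""
    return u * math.exp(-dt / TAU_LEAK) + w

def lin_leak_update(u, dt, w):
    """Event-based linear leak (equation 6): one multiplication and one
    subtraction, with the potential clamped at zero."""
    return max(0.0, u - dt * V_LEAK) + w
```

The clamp at zero in the linear case matters: without it, a long silent period would drive the potential arbitrarily negative, which the exponential decay never does.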

C. Order-Based STDP

In a digital system, the STDP rule requires the implementation of a timestamp for each input synapse (in our case, 128 × 128 × 2, i.e. 32,768 input synapses) that stores the time of the last spike. When an output neuron fires, it compares each synaptic activation timestamp to the LTP time window and decides whether the synapse will undergo an LTP or an LTD. This implementation is either time-consuming, if the system performs 32,768 serial comparisons, or space-consuming, if it is performed in parallel, due to the large number of required comparators and counters.

Our hypothesis is that the order of arrival of the pre-synaptic spikes is as important as their precise arrival time. This idea has already been proposed in [21] and used in SpikeNet [22]. We propose to replace the time-based LTP window with a First In First Out (FIFO) buffer of fixed size, corresponding to a pre-defined number of events in the LTP window. The synapses in the LTP window then depend not on the relative time of the pre- and post-synaptic spikes, but on the last synapses activated before the output neuron's activation.

In the proposed implementation, each time an input neuron spikes, it adds its address to a fixed-size FIFO in memory. This FIFO is shared by all the neurons, which saves area compared to the timestamp-based implementation. When an output neuron fires, it performs an LTP on the synapses present in the FIFO and an LTD on all the others. In this new implementation, if an input neuron spikes twice shortly before a post-synaptic spike, its address is present twice in the FIFO and the weight of its synapse is therefore increased twice. There is no need to verify whether an event is already present in the FIFO, which would defeat the simplicity of the implementation.

Results: we implemented the order-based STDP using a FIFO in the Xnet simulator. The size of the FIFO replaced the T_LTP time window as the configurable parameter. The architecture is based on the previous one: the network has one layer of neurons with a linear leak and 4-bit synaptic weights.
The genetic evolution generated the parameters shown in table V, with the FIFO size set at 100 events.

TABLE V
VALUES OF THE NEURON PARAMETERS FOR THE RESULTS PRESENTED IN FIGURE 7, FOR A FIFO SIZE OF 100 EVENTS. COLUMNS: W_len (bits), I_thres (/W_max), V_leak (ms^−1), FIFO (events), T_inhibit (ms), T_refrac (ms), (Δw+, Δw−). THE (Δw+, Δw−) VALUES ACROSS FIFO SIZES ARE (15,1), (15,1), (7,1), (3,1), (1,1), (1,5), (1,5), (1,1) AND (1,15).

Fig. 7. Average error per lane for different FIFO sizes over 10 runs. The FIFO size is expressed as a percentage of the total number of synapses for each neuron.

To further explore the optimal FIFO size, we ran genetic evolutions with different FIFO sizes. The results are presented in figure 7, with the average error per lane over 10 runs. The network is functional for FIFO sizes up to roughly 11,500 events. Table VI shows the actual average number of synapses potentiated for each FIFO size. For example, with a FIFO size of 100, the number of synapses actually potentiated is lower, as some synapses can be activated multiple times within the latest 100 events. In that case, the resulting synaptic potentiation is proportional to the number of activations. In our example, an average of 73 synapses are potentiated one or multiple times each time an output neuron is activated. To help visualize the results, a reconstruction of the synaptic weights after learning for different FIFO sizes is shown in figure 8.

For the integration of such a system in a digital architecture, a minimal FIFO size is desirable to simplify the hardware implementation. This work shows that a small FIFO is already efficient, with an average recognition rate of 97%. This implies that implementing a small memory storing AER addresses (15-bit addresses in our case), shared by all output neurons, is a valid solution.

D. All-in-one Results Summary

Our SNN was simulated with all the modifications described earlier. We performed a last global optimization of all the parameters in our network, with 4-bit synaptic weights, linear leak and order-based STDP.
The architecture consists of one layer of neurons. The error for each lane over 10 runs is presented in figure 9, and the parameters of this simulation are given in table VII. Figure 9 shows that even when all the simplifications are implemented, the recognition rate still matches the one obtained with the original simulation, leading to an average global detection rate of 98%.
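The shared-FIFO mechanism of Section IV-C can be sketched as a small Python class. The FIFO size, increment values and class name are illustrative assumptions; the real design would be a hardware memory of AER addresses:

```python
from collections import Counter, deque

FIFO_SIZE = 8                  # events kept in the LTP window -- illustrative
DW_PLUS, DW_MINUS = 1.0, 0.5   # weight inc/decrement -- illustrative

class OrderBasedStdp:
    """Order-based STDP: one FIFO of recent pre-synaptic event addresses,
    shared by all output neurons, replaces per-synapse timestamps."""

    def __init__(self, n_synapses):
        self.fifo = deque(maxlen=FIFO_SIZE)  # oldest addresses fall out
        self.n_synapses = n_synapses

    def pre_spike(self, address):
        # Duplicates are kept on purpose: a synapse spiking twice in the
        # window is potentiated twice (no membership check needed).
        self.fifo.append(address)

    def post_spike(self, weights):
        # LTP once per occurrence in the FIFO, LTD for all other synapses.
        counts = Counter(self.fifo)
        for syn in range(self.n_synapses):
            if syn in counts:
                weights[syn] += DW_PLUS * counts[syn]
            else:
                weights[syn] -= DW_MINUS
        return weights
```

Because `deque(maxlen=...)` silently discards the oldest entry on overflow, the buffer always holds exactly the last FIFO_SIZE pre-synaptic events, mirroring the fixed-size hardware memory.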

TABLE VI
FIFO SIZES AS PRESENTED IN FIGURE 7, WITH THE AVERAGE NUMBER OF POTENTIATED SYNAPSES FOR EACH OUTPUT NEURON SPIKE, OUT OF 32,768 SYNAPSES. COLUMNS: FIFO SIZE (NB EVENTS), SYNAPSES POTENTIATED (NB SYNAPSES), SYNAPSES POTENTIATED (%).

TABLE VII
VALUES OF THE NEURON AND SYNAPSE PARAMETERS WITH ALL OF THE MODIFICATIONS APPLIED. COLUMNS: W_len (bits), I_thres (/W_max), V_leak (ms^−1), FIFO (events), T_inhibit (ms), T_refrac (ms), (Δw+, Δw−) = (1,1).

Fig. 8. Reconstruction of the synaptic weights after learning, for different FIFO sizes.

Fig. 9. Simulation results for neurons with 4-bit synaptic weights, linear leak, and order-based STDP.

V. DISCUSSION AND CONCLUSION

Even though we simulated our SNN with 4-bit synaptic weights, our network was still functional down to 3-bit weights (see figure 5). However, it is clear that this cannot be the optimal bit resolution for any SNN. In vision, a distinction has to be made between two main types of problems: detection and recognition (or classification). In our detection case, a 3-bit resolution may be enough to detect a car passing. Nevertheless, if we want to recognize a specific type of car, or even differentiate cars from trucks and motorcycles, then higher-resolution synapses are probably necessary. This point is further developed in [16], where it is shown that, for a handwritten digit recognition problem (MNIST database), the recognition rate of the system drops significantly when a synapse has fewer than a certain number of levels (with a semi-multiplicative increment rule for the synapse update instead of an additive one).

In this paper, we did not consider simulating down to 1-bit synaptic weights. In order to do so, some form of stochastic programming must be used. Without stochasticity, the synaptic weights of a neuron after learning would simply correspond to the LTP and LTD of its last firing.
The synapses in the last LTP window would be at 1 and the remaining synapses at 0. In a digital environment, stochasticity can be obtained with a pseudo-random number generator (PRNG), where only a small percentage of the synapses in the LTP window would be potentiated, and likewise only a small percentage of the synapses in the LTD window depressed. Alternatively, in an analog environment, stochasticity can be obtained by exploiting the intrinsic stochasticity of nanodevices, by sending weak programming pulses that may or may not modify the resistance value of a memristor [23].

From our results, it appears that the size of the learned object is proportional to the size of the FIFO. In the car learning application, each neuron learns a pattern that is roughly the size of a car at a given position. Our case might therefore be ideal for an order-based STDP with a pre-defined FIFO buffer. If the neurons have to learn patterns of different sizes, it might not work as well. One solution, currently under investigation, would be to have neurons with different FIFO sizes in order to learn patterns of multiple sizes.

Conclusion: in this paper, we considered developing a digital hardware implementation of a spiking neural network for a vision application, as originally described in [15]. To make the integration of such a system possible, we considered three different ways of simplifying the proposed SNN architecture and learning model. First, we demonstrated that this SNN is still functional with only 4-bit synaptic weights, which will save space in a digital architecture, or reduce the multi-level requirements on memristors in an analog architecture. Second, we showed that the leak, which is commonly exponential in an analog environment (due to the simplicity of a capacitor's discharge through a resistor), could be implemented linearly

in a digital architecture without affecting the behavior of the SNN. Finally, we proposed an altered STDP model in which the order of events is used instead of their precise arrival times, which considerably simplifies the implementation. We are currently investigating VHDL design implementations of the SNN described in this paper, to propose a customizable, lightweight digital neuron with an order-based STDP. Such an SNN could then be implemented to receive AER data from various biology-emulating sensors (vision, auditory, olfactory, ...).

ACKNOWLEDGMENT

The authors would like to thank Caaliph Andriamisaina, Marc Duranton, Michel Paindavoine and Olivier Temam for fruitful discussions.

REFERENCES

[1] J. Schemmel, D. Brüderle, A. Grübl, M. Hock, K. Meier, and S. Millner, "A wafer-scale neuromorphic hardware system for large-scale neural modeling," in Proc. IEEE Int. Symp. Circuits and Systems (ISCAS), 2010.
[2] S. Scholze, S. Schiefer, J. Partzsch, S. Hartmann, C. G. Mayr, S. Höppner, H. Eisenreich, S. Henker, B. Vogginger, and R. Schüffny, "VLSI implementation of a 2.8 Gevent/s packet-based AER interface with routing and event sorting functionality," Frontiers in Neuroscience, vol. 5, no. 117, 2011.
[3] S. Millner, A. Grübl, K. Meier, J. Schemmel, and M.-O. Schwartz, "A VLSI implementation of the adaptive exponential integrate-and-fire neuron model," in Advances in Neural Information Processing Systems 23, J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta, Eds., 2010.
[4] S. Choudhary, S. Sloan, S. Fok, A. Neckar, E. Trautmann, P. Gao, T. Stewart, C. Eliasmith, and K. Boahen, "Silicon neurons that compute," in Artificial Neural Networks and Machine Learning (ICANN 2012), ser. Lecture Notes in Computer Science, vol. 7552. Springer Berlin Heidelberg, 2012.
[5] S. Furber, D. Lester, L. Plana, J. Garside, E. Painkras, S. Temple, and A. Brown, "Overview of the SpiNNaker system architecture," IEEE Transactions on Computers, 2012.
[6] R. Serrano-Gotarredona, M. Oster, P. Lichtsteiner, A. Linares-Barranco, R. Paz-Vicente, F. Gomez-Rodriguez, L. Camunas-Mesa, R. Berner, M. Rivas-Perez, T. Delbruck, S.-C. Liu, R. Douglas, P. Hafliger, G. Jimenez-Moreno, A. Ballcels, T. Serrano-Gotarredona, A. Acosta-Jimenez, and B. Linares-Barranco, "CAVIAR: A 45k neuron, 5M synapse, 12G connects/s AER hardware sensory-processing-learning-actuating system for high-speed visual object recognition and tracking," IEEE Transactions on Neural Networks, vol. 20, no. 9, pp. 1417-1438, Sept. 2009.
[7] H. Markram, J. Lübke, M. Frotscher, and B. Sakmann, "Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs," Science, vol. 275, no. 5297, pp. 213-215, 1997.
[8] G.-q. Bi and M.-m. Poo, "Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type," The Journal of Neuroscience, vol. 18, no. 24, pp. 10464-10472, 1998.
[9] G. Snider, "Spike-timing-dependent learning in memristive nanodevices," in Proc. IEEE Int. Symp. Nanoscale Architectures (NANOARCH), June 2008.
[10] B. Linares-Barranco and T. Serrano-Gotarredona, "Memristance can explain spike-time-dependent-plasticity in neural synapses," Nature Precedings, 2009.
[11] S. H. Jo, T. Chang, I. Ebong, B. B. Bhadviya, P. Mazumder, and W. Lu, "Nanoscale memristor device as synapse in neuromorphic systems," Nano Letters, vol. 10, no. 4, pp. 1297-1301, 2010.
[12] X. Jin, A. Rast, F. Galluppi, S. Davies, and S. Furber, "Implementing spike-timing-dependent plasticity on SpiNNaker neuromorphic hardware," in Proc. Int. Joint Conf. Neural Networks (IJCNN), July 2010.
[13] B. Nessler, M. Pfeiffer, and W. Maass, "STDP enables spiking neurons to detect hidden causes of their inputs," in NIPS, 2009.
[14] T. Masquelier and S. J. Thorpe, "Unsupervised learning of visual features through spike timing dependent plasticity," PLoS Computational Biology, vol. 3, no. 2, p. e31, 2007.
[15] O. Bichler, D. Querlioz, S. J. Thorpe, J.-P. Bourgoin, and C. Gamrat, "Extraction of temporally correlated features from dynamic vision sensors with spike-timing-dependent plasticity," Neural Networks, vol. 32, pp. 339-348, 2012.
[16] D. Querlioz, O. Bichler, and C. Gamrat, "Simulation of a memristor-based spiking neural network immune to device variations," in Proc. Int. Joint Conf. Neural Networks (IJCNN), 2011.
[17] P. Lichtsteiner, C. Posch, and T. Delbruck, "A 128×128 120 dB 15 µs latency asynchronous temporal contrast vision sensor," IEEE Journal of Solid-State Circuits, vol. 43, no. 2, pp. 566-576, 2008.
[18] (2011) DVS128 dynamic vision sensor silicon retina data. [Online].
[19] T. Pfeil, T. C. Potjans, S. Schrader, W. Potjans, J. Schemmel, M. Diesmann, and K. Meier, "Is a 4-bit synaptic weight resolution enough? - Constraints on enabling spike-timing dependent plasticity in neuromorphic hardware," Frontiers in Neuroscience, vol. 6, no. 90, 2012.
[20] A. Vázquez and E. Antelo, "Implementation of the exponential function in a floating-point unit," Journal of VLSI Signal Processing Systems for Signal, Image and Video Technology, vol. 33, 2003.
[21] S. Thorpe, "Spike arrival times: A highly efficient coding scheme for neural networks," in Parallel Processing in Neural Systems, pp. 91-94, 1990.
[22] S. J. Thorpe, R. Guyonneau, N. Guilbaud, J.-M. Allegraud, and R. VanRullen, "SpikeNet: real-time visual processing with one spike per neuron," Neurocomputing, vol. 58-60, pp. 857-864, 2004.
[23] M. Suri, O. Bichler, D. Querlioz, G. Palma, E. Vianello, D. Vuillaume, C. Gamrat, and B. DeSalvo, "CBRAM devices as binary synapses for low-power stochastic neuromorphic systems: Auditory (cochlea) and visual (retina) cognitive processing applications," in Proc. IEEE Int. Electron Devices Meeting (IEDM), 2012.


More information

Figure 1. Artificial Neural Network structure. B. Spiking Neural Networks Spiking Neural networks (SNNs) fall into the third generation of neural netw

Figure 1. Artificial Neural Network structure. B. Spiking Neural Networks Spiking Neural networks (SNNs) fall into the third generation of neural netw Review Analysis of Pattern Recognition by Neural Network Soni Chaturvedi A.A.Khurshid Meftah Boudjelal Electronics & Comm Engg Electronics & Comm Engg Dept. of Computer Science P.I.E.T, Nagpur RCOEM, Nagpur

More information

Implementation of STDP in Neuromorphic Analog VLSI

Implementation of STDP in Neuromorphic Analog VLSI Implementation of STDP in Neuromorphic Analog VLSI Chul Kim chk079@eng.ucsd.edu Shangzhong Li shl198@eng.ucsd.edu Department of Bioengineering University of California San Diego La Jolla, CA 92093 Abstract

More information

SYNAPTIC PLASTICITY IN SPINNAKER SIMULATOR

SYNAPTIC PLASTICITY IN SPINNAKER SIMULATOR SYNAPTIC PLASTICITY IN SPINNAKER SIMULATOR SpiNNaker a spiking neural network simulator developed by APT group The University of Manchester SERGIO DAVIES 18/01/2010 Neural network simulators Neural network

More information

Effects of Firing Synchrony on Signal Propagation in Layered Networks

Effects of Firing Synchrony on Signal Propagation in Layered Networks Effects of Firing Synchrony on Signal Propagation in Layered Networks 141 Effects of Firing Synchrony on Signal Propagation in Layered Networks G. T. Kenyon,l E. E. Fetz,2 R. D. Puffl 1 Department of Physics

More information

Two Transistor Synapse with Spike Timing Dependent Plasticity

Two Transistor Synapse with Spike Timing Dependent Plasticity Two Transistor Synapse with Spike Timing Dependent Plasticity Alfred M. Haas, Timir Datta, Pamela A. Abshire, Martin C. Peckerar Department of Electrical and Computer Engineering Institute for Systems

More information

Memristor Load Current Mirror Circuit

Memristor Load Current Mirror Circuit Memristor Load Current Mirror Circuit Olga Krestinskaya, Irina Fedorova, and Alex Pappachen James School of Engineering Nazarbayev University Astana, Republic of Kazakhstan Abstract Simple current mirrors

More information

Neuromorphic Event-Based Vision Sensors

Neuromorphic Event-Based Vision Sensors Inst. of Neuroinformatics www.ini.uzh.ch Conventional cameras (aka Static vision sensors) deliver a stroboscopic sequence of frames Silicon Retina Technology Tobi Delbruck Inst. of Neuroinformatics, University

More information

Wireless Spectral Prediction by the Modified Echo State Network Based on Leaky Integrate and Fire Neurons

Wireless Spectral Prediction by the Modified Echo State Network Based on Leaky Integrate and Fire Neurons Wireless Spectral Prediction by the Modified Echo State Network Based on Leaky Integrate and Fire Neurons Yunsong Wang School of Railway Technology, Lanzhou Jiaotong University, Lanzhou 730000, Gansu,

More information

Implementation of Neuromorphic System with Si-based Floating-body Synaptic Transistors

Implementation of Neuromorphic System with Si-based Floating-body Synaptic Transistors JOURNAL OF SEMICONDUCTOR TECHNOLOGY AND SCIENCE, VOL.17, NO.2, APRIL, 2017 ISSN(Print) 1598-1657 https://doi.org/10.5573/jsts.2017.17.2.210 ISSN(Online) 2233-4866 Implementation of Neuromorphic System

More information

A Synchronized Axon Hillock Neuron for Memristive Neuromorphic Systems

A Synchronized Axon Hillock Neuron for Memristive Neuromorphic Systems A Synchronized Axon Hillock Neuron for Memristive Neuromorphic Systems Ryan Weiss, Gangotree Chakma, and Garrett S. Rose IEEE International Midwest Symposium on Circuits and Systems (MWSCAS), Boston, Massachusetts,

More information

The Characteristics of Binary Spike-Time- Dependent Plasticity in HfO 2 -Based RRAM and Applications for Pattern Recognition

The Characteristics of Binary Spike-Time- Dependent Plasticity in HfO 2 -Based RRAM and Applications for Pattern Recognition Zhou et al. Nanoscale Research Letters (2017) 12:244 DOI 10.1186/s11671-017-2023-y NANO EXPRESS The Characteristics of Binary Spike-Time- Dependent Plasticity in HfO 2 -Based RRAM and Applications for

More information

Integrate-and-Fire Neuron Circuit and Synaptic Device with Floating Body MOSFETs

Integrate-and-Fire Neuron Circuit and Synaptic Device with Floating Body MOSFETs JOURNAL OF SEMICONDUCTOR TECHNOLOGY AND SCIENCE, VOL.14, NO.6, DECEMBER, 2014 http://dx.doi.org/10.5573/jsts.2014.14.6.755 Integrate-and-Fire Neuron Circuit and Synaptic Device with Floating Body MOSFETs

More information

Silicon retina technology

Silicon retina technology Silicon retina technology Tobi Delbruck Inst. of Neuroinformatics, University of Zurich and ETH Zurich Sensors Group sensors.ini.uzh.ch Sponsors: Swiss National Science Foundation NCCR Robotics project,

More information

Artificial Neural Network Engine: Parallel and Parameterized Architecture Implemented in FPGA

Artificial Neural Network Engine: Parallel and Parameterized Architecture Implemented in FPGA Artificial Neural Network Engine: Parallel and Parameterized Architecture Implemented in FPGA Milene Barbosa Carvalho 1, Alexandre Marques Amaral 1, Luiz Eduardo da Silva Ramos 1,2, Carlos Augusto Paiva

More information

Poker-DVS and MNIST-DVS. Their History, How They Were Made, and Other Details

Poker-DVS and MNIST-DVS. Their History, How They Were Made, and Other Details ORIGINAL RESEARCH published: 22 December 2015 doi: 10.3389/fnins.2015.00481. Their History, How They Were Made, and Other Details Teresa Serrano-Gotarredona and Bernabé Linares-Barranco* Instituto de Microelectrónica

More information

SenseMaker IST Martin McGinnity University of Ulster Neuro-IT, Bonn, June 2004 SenseMaker IST Neuro-IT workshop June 2004 Page 1

SenseMaker IST Martin McGinnity University of Ulster Neuro-IT, Bonn, June 2004 SenseMaker IST Neuro-IT workshop June 2004 Page 1 SenseMaker IST2001-34712 Martin McGinnity University of Ulster Neuro-IT, Bonn, June 2004 Page 1 Project Objectives To design and implement an intelligent computational system, drawing inspiration from

More information

TIME encoding of a band-limited function,,

TIME encoding of a band-limited function,, 672 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: EXPRESS BRIEFS, VOL. 53, NO. 8, AUGUST 2006 Time Encoding Machines With Multiplicative Coupling, Feedforward, and Feedback Aurel A. Lazar, Fellow, IEEE

More information

Neuromorphic Engineering I. avlsi.ini.uzh.ch/classwiki. A pidgin vocabulary. Neuromorphic Electronics? What is it all about?

Neuromorphic Engineering I. avlsi.ini.uzh.ch/classwiki. A pidgin vocabulary. Neuromorphic Electronics? What is it all about? Neuromorphic Engineering I Time and day : Lectures Mondays, 13:15-14:45 Lab exercise location: Institut für Neuroinformatik, Universität Irchel, Y55 G87 Credits: 6 ECTS credit points Exam: Oral 20-30 minutes

More information

Real Time Neuromorphic Camera Architecture Implemented with Quadratic Emphasis in an FPGA

Real Time Neuromorphic Camera Architecture Implemented with Quadratic Emphasis in an FPGA International Journal of Electronics and Electrical Engineering Vol. 5, No. 3, June 2017 Real Time Neuromorphic Camera Architecture Implemented with Quadratic Emphasis in an FPGA Elizabeth Fonseca Chavez1,

More information

Energy-efficient Hybrid CMOS-NEMS LIF Neuron Circuit in 28 nm CMOS Process

Energy-efficient Hybrid CMOS-NEMS LIF Neuron Circuit in 28 nm CMOS Process Energy-efficient Hybrid CMOS-NEMS LIF Neuron Circuit in 28 nm CMOS Process Saber Moradi Computer Systems Laboratory Yale University, New Haven, CT 652 saber.moradi@yale.edu Sunil A. Bhave School of Electrical

More information

John Lazzaro and John Wawrzynek Computer Science Division UC Berkeley Berkeley, CA, 94720

John Lazzaro and John Wawrzynek Computer Science Division UC Berkeley Berkeley, CA, 94720 LOW-POWER SILICON NEURONS, AXONS, AND SYNAPSES John Lazzaro and John Wawrzynek Computer Science Division UC Berkeley Berkeley, CA, 94720 Power consumption is the dominant design issue for battery-powered

More information

arxiv: v1 [cs.ne] 16 Nov 2016

arxiv: v1 [cs.ne] 16 Nov 2016 Training Spiking Deep Networks for Neuromorphic Hardware arxiv:1611.5141v1 [cs.ne] 16 Nov 16 Eric Hunsberger Centre for Theoretical Neuroscience University of Waterloo Waterloo, ON N2L 3G1 ehunsber@uwaterloo.ca

More information

PROGRAMMABLE ANALOG PULSE-FIRING NEURAL NETWORKS

PROGRAMMABLE ANALOG PULSE-FIRING NEURAL NETWORKS 671 PROGRAMMABLE ANALOG PULSE-FIRING NEURAL NETWORKS Alan F. Murray Alister Hamilton Dept. of Elec. Eng., Dept. of Elec. Eng., University of Edinburgh, University of Edinburgh, Mayfield Road, Mayfield

More information

Separation and Recognition of multiple sound source using Pulsed Neuron Model

Separation and Recognition of multiple sound source using Pulsed Neuron Model Separation and Recognition of multiple sound source using Pulsed Neuron Model Kaname Iwasa, Hideaki Inoue, Mauricio Kugler, Susumu Kuroyanagi, Akira Iwata Nagoya Institute of Technology, Gokiso-cho, Showa-ku,

More information

Analog Axon Hillock Neuron Design for Memristive Neuromorphic Systems

Analog Axon Hillock Neuron Design for Memristive Neuromorphic Systems University of Tennessee, Knoxville Trace: Tennessee Research and Creative Exchange Masters Theses Graduate School 12-2017 Analog Axon Hillock Neuron Design for Memristive Neuromorphic Systems Ryan John

More information

Night-time pedestrian detection via Neuromorphic approach

Night-time pedestrian detection via Neuromorphic approach Night-time pedestrian detection via Neuromorphic approach WOO JOON HAN, IL SONG HAN Graduate School for Green Transportation Korea Advanced Institute of Science and Technology 335 Gwahak-ro, Yuseong-gu,

More information

SpikeStream: A Fast and Flexible Simulator of Spiking Neural Networks

SpikeStream: A Fast and Flexible Simulator of Spiking Neural Networks SpikeStream: A Fast and Flexible Simulator of Spiking Neural Networks David Gamez Department of Computer Science, University of Essex, Colchester, C04 3SQ, UK daogam@essex.ac.uk Abstract. SpikeStream is

More information

Cognitronics: Resource-efficient Architectures for Cognitive Systems. Ulrich Rückert Cognitronics and Sensor Systems.

Cognitronics: Resource-efficient Architectures for Cognitive Systems. Ulrich Rückert Cognitronics and Sensor Systems. Cognitronics: Resource-efficient Architectures for Cognitive Systems Ulrich Rückert Cognitronics and Sensor Systems 14th IWANN, 2017 Cadiz, 14. June 2017 rueckert@cit-ec.uni-bielefeld.de www.ks.cit-ec.uni-bielefeld.de

More information

Systolic modular VLSI Architecture for Multi-Model Neural Network Implementation +

Systolic modular VLSI Architecture for Multi-Model Neural Network Implementation + Systolic modular VLSI Architecture for Multi-Model Neural Network Implementation + J.M. Moreno *, J. Madrenas, J. Cabestany Departament d'enginyeria Electrònica Universitat Politècnica de Catalunya Barcelona,

More information

NEUROMORPHIC vision sensors are biologically inspired

NEUROMORPHIC vision sensors are biologically inspired TRANSACTIONS ON EMERGING TOPICS IN COMPUTING, VOL. XX, NO. X, MONTH 2018 1 O(N)-Space Spatiotemporal Filter for Reducing Noise in Neuromorphic Vision Sensors Alireza Khodamoradi, Member, IEEE, and Ryan

More information

Neuro-Fuzzy and Soft Computing: Fuzzy Sets. Chapter 1 of Neuro-Fuzzy and Soft Computing by Jang, Sun and Mizutani

Neuro-Fuzzy and Soft Computing: Fuzzy Sets. Chapter 1 of Neuro-Fuzzy and Soft Computing by Jang, Sun and Mizutani Chapter 1 of Neuro-Fuzzy and Soft Computing by Jang, Sun and Mizutani Outline Introduction Soft Computing (SC) vs. Conventional Artificial Intelligence (AI) Neuro-Fuzzy (NF) and SC Characteristics 2 Introduction

More information

Artificial Retina Using A Hybrid Neural Network With Spatial Transform Capability

Artificial Retina Using A Hybrid Neural Network With Spatial Transform Capability Artificial Retina Using A Hybrid Neural Network With Spatial Transform Capability Richard Wood Honeywell, Inc., Space Payloads Ontario, Canada Alexander McGlashan Niagara College Welland, Ontario C.B.

More information

Supplementary Materials for

Supplementary Materials for advances.sciencemag.org/cgi/content/full/2/6/e1501326/dc1 Supplementary Materials for Organic core-sheath nanowire artificial synapses with femtojoule energy consumption Wentao Xu, Sung-Yong Min, Hyunsang

More information

Live Hand Gesture Recognition using an Android Device

Live Hand Gesture Recognition using an Android Device Live Hand Gesture Recognition using an Android Device Mr. Yogesh B. Dongare Department of Computer Engineering. G.H.Raisoni College of Engineering and Management, Ahmednagar. Email- yogesh.dongare05@gmail.com

More information

Real-Time Face Detection and Tracking for High Resolution Smart Camera System

Real-Time Face Detection and Tracking for High Resolution Smart Camera System Digital Image Computing Techniques and Applications Real-Time Face Detection and Tracking for High Resolution Smart Camera System Y. M. Mustafah a,b, T. Shan a, A. W. Azman a,b, A. Bigdeli a, B. C. Lovell

More information

FOR multi-chip neuromorphic systems, the address event

FOR multi-chip neuromorphic systems, the address event 48 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I: REGULAR PAPERS, VOL. 54, NO. 1, JANUARY 2007 AER EAR: A Matched Silicon Cochlea Pair With Address Event Representation Interface Vincent Chan, Student Member,

More information

On Intelligence Jeff Hawkins

On Intelligence Jeff Hawkins On Intelligence Jeff Hawkins Chapter 8: The Future of Intelligence April 27, 2006 Presented by: Melanie Swan, Futurist MS Futures Group 650-681-9482 m@melanieswan.com http://www.melanieswan.com Building

More information

Sensors & Transducers 2014 by IFSA Publishing, S. L.

Sensors & Transducers 2014 by IFSA Publishing, S. L. Sensors & Transducers 2014 by IFSA Publishing, S. L. http://www.sensorsportal.com Neural Circuitry Based on Single Electron Transistors and Single Electron Memories Aïmen BOUBAKER and Adel KALBOUSSI Faculty

More information

Coding and computing with balanced spiking networks. Sophie Deneve Ecole Normale Supérieure, Paris

Coding and computing with balanced spiking networks. Sophie Deneve Ecole Normale Supérieure, Paris Coding and computing with balanced spiking networks Sophie Deneve Ecole Normale Supérieure, Paris Cortical spike trains are highly variable From Churchland et al, Nature neuroscience 2010 Cortical spike

More information

Optical hybrid analog-digital signal processing based on spike processing in neurons

Optical hybrid analog-digital signal processing based on spike processing in neurons Invited Paper Optical hybrid analog-digital signal processing based on spike processing in neurons Mable P. Fok 1, Yue Tian 1, David Rosenbluth 2, Yanhua Deng 1, and Paul R. Prucnal 1 1 Princeton University,

More information

Hardware Implementation of a PCA Learning Network by an Asynchronous PDM Digital Circuit

Hardware Implementation of a PCA Learning Network by an Asynchronous PDM Digital Circuit Hardware Implementation of a PCA Learning Network by an Asynchronous PDM Digital Circuit Yuzo Hirai and Kuninori Nishizawa Institute of Information Sciences and Electronics, University of Tsukuba Doctoral

More information

Neuromorphic Analog VLSI

Neuromorphic Analog VLSI Neuromorphic Analog VLSI David W. Graham West Virginia University Lane Department of Computer Science and Electrical Engineering 1 Neuromorphic Analog VLSI Each word has meaning Neuromorphic Analog VLSI

More information

Memristive Operational Amplifiers

Memristive Operational Amplifiers Procedia Computer Science Volume 99, 2014, Pages 275 280 BICA 2014. 5th Annual International Conference on Biologically Inspired Cognitive Architectures Memristive Operational Amplifiers Timur Ibrayev

More information

Implicit Fitness Functions for Evolving a Drawing Robot

Implicit Fitness Functions for Evolving a Drawing Robot Implicit Fitness Functions for Evolving a Drawing Robot Jon Bird, Phil Husbands, Martin Perris, Bill Bigge and Paul Brown Centre for Computational Neuroscience and Robotics University of Sussex, Brighton,

More information

A Complete Hardware Implementation of an Integrated Sound Localization and Classification System based on Spiking Neural Networks

A Complete Hardware Implementation of an Integrated Sound Localization and Classification System based on Spiking Neural Networks A Complete Hardware Implementation of an Integrated Sound Localization and Classification System based on Spiking Neural Networks Mauricio Kugler, Kaname Iwasa, Victor Alberto Parcianello Benso, Susumu

More information

Imaging serial interface ROM

Imaging serial interface ROM Page 1 of 6 ( 3 of 32 ) United States Patent Application 20070024904 Kind Code A1 Baer; Richard L. ; et al. February 1, 2007 Imaging serial interface ROM Abstract Imaging serial interface ROM (ISIROM).

More information

Voltage Controlled Delay Line Applied with Memristor in Delay Locked Loop

Voltage Controlled Delay Line Applied with Memristor in Delay Locked Loop 2014 Fifth International Conference on Intelligent Systems, Modelling and Simulation Voltage Controlled Delay Line Applied with Memristor in Delay Locked Loop Siti Musliha Ajmal Binti Mokhtar Faculty of

More information

New computing architectures for Green ICT

New computing architectures for Green ICT New computing architectures for Green ICT Marc Duranton Embedded Computing Laboratory (LCE) CEA LIST DACLE September 5th, 2011 Impact of ICT on global energy consumption Marc Duranton September 5th, 2011

More information

High-Speed Stochastic Circuits Using Synchronous Analog Pulses

High-Speed Stochastic Circuits Using Synchronous Analog Pulses High-Speed Stochastic Circuits Using Synchronous Analog Pulses M. Hassan Najafi and David J. Lilja najaf@umn.edu, lilja@umn.edu Department of Electrical and Computer Engineering, University of Minnesota,

More information

This list supersedes the one published in the November 2002 issue of CR.

This list supersedes the one published in the November 2002 issue of CR. PERIODICALS RECEIVED This is the current list of periodicals received for review in Reviews. International standard serial numbers (ISSNs) are provided to facilitate obtaining copies of articles or subscriptions.

More information

Neuromorphic computing

Neuromorphic computing Neuromorphic computing Robotics M.Sc. programme in Computer Science l.vannucci@sssup.it April 21st, 2016 Outline 1. Introduction 2. Fundamentals of neuroscience 3. Simulating the brain 4. Software and

More information

A Neuromorphic VLSI Device for Implementing 2-D Selective Attention Systems

A Neuromorphic VLSI Device for Implementing 2-D Selective Attention Systems IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 12, NO. 6, NOVEMBER 2001 1455 A Neuromorphic VLSI Device for Implementing 2-D Selective Attention Systems Giacomo Indiveri Abstract Selective attention is a mechanism

More information

TSTE17 System Design, CDIO. General project hints. Behavioral Model. General project hints, cont. Lecture 5. Required documents Modulation, cont.

TSTE17 System Design, CDIO. General project hints. Behavioral Model. General project hints, cont. Lecture 5. Required documents Modulation, cont. TSTE17 System Design, CDIO Lecture 5 1 General project hints 2 Project hints and deadline suggestions Required documents Modulation, cont. Requirement specification Channel coding Design specification

More information

Real-Time Selective Harmonic Minimization in Cascaded Multilevel Inverters with Varying DC Sources

Real-Time Selective Harmonic Minimization in Cascaded Multilevel Inverters with Varying DC Sources Real-Time Selective Harmonic Minimization in Cascaded Multilevel Inverters with arying Sources F. J. T. Filho *, T. H. A. Mateus **, H. Z. Maia **, B. Ozpineci ***, J. O. P. Pinto ** and L. M. Tolbert

More information

A Bioinspired 128x128 Pixel Dynamic-Vision- Sensor

A Bioinspired 128x128 Pixel Dynamic-Vision- Sensor A Bioinspired 128x128 Pixel Dynamic-Vision- Sensor T. Serrano-Gotarredona, J. A. Leñero-Bardallo, and B. Linares-Barranco Instituto de Microelectrónica de Sevilla (IMSE-CNM-CSIC-US) AbstractThis paper

More information

A Robust Neural Robot Navigation Using a Combination of Deliberative and Reactive Control Architectures

A Robust Neural Robot Navigation Using a Combination of Deliberative and Reactive Control Architectures A Robust Neural Robot Navigation Using a Combination of Deliberative and Reactive Control Architectures D.M. Rojas Castro, A. Revel and M. Ménard * Laboratory of Informatics, Image and Interaction (L3I)

More information

Energy-efficient neuromorphic computing with magnetic tunnel junctions

Energy-efficient neuromorphic computing with magnetic tunnel junctions Energy-efficient neuromorphic computing with magnetic tunnel junctions CNRS/Thales, France Jacob Torrejon Mathieu Riou Flavio Abreu Araujo Paolo Bortolotti Vincent Cros Julie Grollier AIST, Japan Sumito

More information

A VLSI Convolutional Neural Network for Image Recognition Using Merged/Mixed Analog-Digital Architecture

A VLSI Convolutional Neural Network for Image Recognition Using Merged/Mixed Analog-Digital Architecture A VLSI Convolutional Neural Network for Image Recognition Using Merged/Mixed Analog-Digital Architecture Keisuke Korekado a, Takashi Morie a, Osamu Nomura b, Hiroshi Ando c, Teppei Nakano a, Masakazu Matsugu

More information

Neural circuits in mixed-signal VLSI Towards new computing paradigms?

Neural circuits in mixed-signal VLSI Towards new computing paradigms? Neural circuits in mixed-signal VLSI Towards new computing paradigms? Seminar - Stockholm University - March 2007 Karlheinz Meier Kirchhoff-Institut für Physik Ruprecht-Karls-Universität Heidelberg A

More information

Maximum Likelihood Sequence Detection (MLSD) and the utilization of the Viterbi Algorithm

Maximum Likelihood Sequence Detection (MLSD) and the utilization of the Viterbi Algorithm Maximum Likelihood Sequence Detection (MLSD) and the utilization of the Viterbi Algorithm Presented to Dr. Tareq Al-Naffouri By Mohamed Samir Mazloum Omar Diaa Shawky Abstract Signaling schemes with memory

More information

FULLY INTEGRATED CURRENT-MODE SUBAPERTURE CENTROID CIRCUITS AND PHASE RECONSTRUCTOR Alushulla J. Ambundo 1 and Paul M. Furth 2

FULLY INTEGRATED CURRENT-MODE SUBAPERTURE CENTROID CIRCUITS AND PHASE RECONSTRUCTOR Alushulla J. Ambundo 1 and Paul M. Furth 2 FULLY NTEGRATED CURRENT-MODE SUBAPERTURE CENTROD CRCUTS AND PHASE RECONSTRUCTOR Alushulla J. Ambundo 1 and Paul M. Furth 1 Mixed-Signal-Wireless (MSW), Texas nstruments, Dallas, TX aambundo@ti.com Dept.

More information

Introduction to Neuromorphic Computing Insights and Challenges. Todd Hylton Brain Corporation

Introduction to Neuromorphic Computing Insights and Challenges. Todd Hylton Brain Corporation Introduction to Neuromorphic Computing Insights and Challenges Todd Hylton Brain Corporation hylton@braincorporation.com Outline What is a neuromorphic computer? Why is neuromorphic computing confusing?

More information

Target detection in side-scan sonar images: expert fusion reduces false alarms

Target detection in side-scan sonar images: expert fusion reduces false alarms Target detection in side-scan sonar images: expert fusion reduces false alarms Nicola Neretti, Nathan Intrator and Quyen Huynh Abstract We integrate several key components of a pattern recognition system

More information

Playing CHIP-8 Games with Reinforcement Learning

Playing CHIP-8 Games with Reinforcement Learning Playing CHIP-8 Games with Reinforcement Learning Niven Achenjang, Patrick DeMichele, Sam Rogers Stanford University Abstract We begin with some background in the history of CHIP-8 games and the use of

More information

Power efficient Spiking Neural Network Classifier based on memristive crossbar network for spike sorting application

Power efficient Spiking Neural Network Classifier based on memristive crossbar network for spike sorting application 1 Power efficient Spiking Neural Network Classifier based on memristive crossbar network for spike sorting application Anand Kumar Mukhopadhyay 1, Graduate Student Member, IEEE, Indrajit Chakrabarti 2,

More information

Module 1: Introduction to Experimental Techniques Lecture 2: Sources of error. The Lecture Contains: Sources of Error in Measurement

Module 1: Introduction to Experimental Techniques Lecture 2: Sources of error. The Lecture Contains: Sources of Error in Measurement The Lecture Contains: Sources of Error in Measurement Signal-To-Noise Ratio Analog-to-Digital Conversion of Measurement Data A/D Conversion Digitalization Errors due to A/D Conversion file:///g /optical_measurement/lecture2/2_1.htm[5/7/2012

More information

Signal Processing in Mobile Communication Using DSP and Multi media Communication via GSM

Signal Processing in Mobile Communication Using DSP and Multi media Communication via GSM Signal Processing in Mobile Communication Using DSP and Multi media Communication via GSM 1 M.Sivakami, 2 Dr.A.Palanisamy 1 Research Scholar, 2 Assistant Professor, Department of ECE, Sree Vidyanikethan

More information

VLSI Implementation of a Neuromime Pulse Generator for Eckhorn Neurons. Ben Sharon, and Richard B. Wells

VLSI Implementation of a Neuromime Pulse Generator for Eckhorn Neurons. Ben Sharon, and Richard B. Wells VLSI Implementation of a Neuromime Pulse Generator for Eckhorn Neurons Ben Sharon, and Richard B. Wells Authors affiliations: Ben Sharon, and Richard B. Wells (MRC Institute, University of Idaho, BEL 317,

More information

2. Simulated Based Evolutionary Heuristic Methodology

2. Simulated Based Evolutionary Heuristic Methodology XXVII SIM - South Symposium on Microelectronics 1 Simulation-Based Evolutionary Heuristic to Sizing Analog Integrated Circuits Lucas Compassi Severo, Alessandro Girardi {lucassevero, alessandro.girardi}@unipampa.edu.br

More information

DESIGN AND IMPLEMENTATION OF AN ALGORITHM FOR MODULATION IDENTIFICATION OF ANALOG AND DIGITAL SIGNALS

DESIGN AND IMPLEMENTATION OF AN ALGORITHM FOR MODULATION IDENTIFICATION OF ANALOG AND DIGITAL SIGNALS DESIGN AND IMPLEMENTATION OF AN ALGORITHM FOR MODULATION IDENTIFICATION OF ANALOG AND DIGITAL SIGNALS John Yong Jia Chen (Department of Electrical Engineering, San José State University, San José, California,

More information

AN IMPROVED NEURAL NETWORK-BASED DECODER SCHEME FOR SYSTEMATIC CONVOLUTIONAL CODE. A Thesis by. Andrew J. Zerngast

AN IMPROVED NEURAL NETWORK-BASED DECODER SCHEME FOR SYSTEMATIC CONVOLUTIONAL CODE. A Thesis by. Andrew J. Zerngast AN IMPROVED NEURAL NETWORK-BASED DECODER SCHEME FOR SYSTEMATIC CONVOLUTIONAL CODE A Thesis by Andrew J. Zerngast Bachelor of Science, Wichita State University, 2008 Submitted to the Department of Electrical

More information

CN510: Principles and Methods of Cognitive and Neural Modeling. Neural Oscillations. Lecture 24

CN510: Principles and Methods of Cognitive and Neural Modeling. Neural Oscillations. Lecture 24 CN510: Principles and Methods of Cognitive and Neural Modeling Neural Oscillations Lecture 24 Instructor: Anatoli Gorchetchnikov Teaching Fellow: Rob Law It Is Much

More information

A high performance photonic pulse processing device

A high performance photonic pulse processing device A high performance photonic pulse processing device David Rosenbluth 2, Konstantin Kravtsov 1, Mable P. Fok 1, and Paul R. Prucnal 1 * 1 Princeton University, Princeton, New Jersey 08544, U.S.A. 2 Lockheed

More information

A Divide-and-Conquer Approach to Evolvable Hardware
