A Simple Design and Implementation of Reconfigurable Neural Networks

Hazem M. El-Bakry and Nikos Mastorakis

H. M. El-Bakry is with the Faculty of Computer Science & Information Systems, Mansoura University, Egypt (phone: 2-5-237356; fax: 2-5-222442; e-mail: helbakry2@yahoo.com). N. Mastorakis is with the Department of Computer Science, Military Institutions of University Education (MIUE), Hellenic Naval Academy, Greece. Manuscript received December 4, 2008.

Abstract — There are some problems in the hardware implementation of digital combinational circuits. In contrast, analog design has the advantages of both economy and ease of implementation compared with digital design. In this paper, a simple design and implementation of an analog reconfigurable artificial neural network is presented. A novel design of an arithmetic unit that includes a full-adder, a full-subtractor, a 2-bit digital multiplier, and a 2-bit digital divider is introduced. The proposed neural network has been realized with hardware components and the results are simulated using the H-spice program. Practical results confirm the theoretical considerations.

I. INTRODUCTION

Advances in MOS VLSI have made it possible to integrate neural networks of large sizes on a single chip [1,2]. Hardware realizations make it possible to execute the forward pass of a neural network at high speed, making neural networks candidates for real-time applications. Other advantages of hardware realizations over software implementations are lower per-unit cost and smaller system size. Analog circuit techniques provide area-efficient implementations of the functions required in a neural network, namely multiplication, summation, and the sigmoid transfer characteristic [3]. In this paper, we describe the design of a reconfigurable neural network in analog hardware and demonstrate experimentally how a reconfigurable artificial neural network is used to implement an arithmetic unit that includes a full-adder, a full-subtractor, a 2-bit digital multiplier, and a 2-bit digital divider. One of the main reasons for using analog electronics to realize network hardware is that simple analog circuits (for example adders, sigmoid circuits, and multipliers) can realize several of the operations in neural networks. Nowadays, there is a growing demand for large as well as fast neural processors to provide solutions for difficult problems. Designers may use either analog or digital technologies to implement neural network models. The analog approach boasts compactness and high speed. On the other hand, digital implementations offer flexibility and adaptability, but only at the expense of speed and silicon area consumption.

II. ANALOG IMPLEMENTATION OF RECONFIGURABLE NEURAL NETWORKS

A) Implementation of an artificial neuron

Implementing an analog neural network means using only analog computation [4,6,8]. An artificial neural network, as the name indicates, is an interconnection of artificial neurons that tends to simulate the nervous system of the human brain [5]. Neural networks are modeled as simple processors (neurons) that are connected together via weights. The weights can be positive (excitatory) or negative (inhibitory). Such weights can be realized by resistors as shown in Fig. 1. The computed weights may have positive or negative values, and the corresponding resistors that represent these weights can be determined as follows [6]:

w_i = -R_f / R_i,   i = 1, 2, ..., n   (1)

where R_f is the op-amp feedback resistor and R_i is the input resistor of the i-th synapse; positive weights are set by the corresponding resistors (R_pp) at the non-inverting input of the op-amp in Fig. 1.
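As a quick numeric illustration of Eq. (1), the sketch below converts a trained weight into the required input resistor; the feedback resistor value is an illustrative assumption, not a value from the paper's tables:

```python
# Minimal numeric check of Eq. (1), w_i = -R_f / R_i: given a trained
# (negative) weight, find the input resistor that realizes it. The
# 100 kOhm feedback resistor is an illustrative assumption.

def input_resistor_for_weight(weight: float, r_feedback: float = 100e3) -> float:
    """Return R_i (ohms) realizing the inhibitory weight w_i = -R_f / R_i."""
    if weight >= 0:
        raise ValueError("Eq. (1) covers negative weights; positive weights "
                         "use the non-inverting resistors R_pp of Fig. 1.")
    return -r_feedback / weight

print(input_resistor_for_weight(-2.0))  # 50000.0 ohms for w = -2 with R_f = 100k
```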
The exact values of these resistors can be calculated as presented in [4,8]. The summing circuit accumulates all the input-weighted signals and passes the sum to the output through the transfer function [3]. The main problem with electronic neural networks is the realization of the resistors, which are fixed and present many problems in hardware implementation [7]. Such resistors are not easily adjustable or controllable. As a consequence, they can be used neither for learning nor for recall when another task needs to be solved. The calculated resistors corresponding to the obtained weights can therefore be implemented with CMOS transistors operating in continuous mode (triode region), as shown in Fig. 2. The equivalent resistance between terminals 1 and 2 is given by [9]:

R_eq = 1 / [K(V_g - 2V_th)]   (2)
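To make Eq. (2) concrete, the following sketch evaluates it and inverts it to find the gate voltage that programs a target synapse resistance; the device constant K and threshold voltage V_th are generic placeholder values, not parameters reported in the paper:

```python
# Minimal sketch of Eq. (2), R_eq = 1 / (K * (V_g - 2*V_th)), and its
# inverse: the gate voltage that programs a target synapse resistance.
# K and V_TH below are placeholder process parameters, not the paper's.

K = 100e-6    # assumed device constant, A/V^2 (geometry factor folded in)
V_TH = 0.7    # assumed threshold voltage, volts

def equivalent_resistance(v_gate: float) -> float:
    """Forward form of Eq. (2) for the triode-region resistor of Fig. 2."""
    return 1.0 / (K * (v_gate - 2.0 * V_TH))

def gate_voltage_for_resistance(r_target: float) -> float:
    """Invert Eq. (2): V_g = 1/(K * R_eq) + 2*V_th."""
    return 1.0 / (K * r_target) + 2.0 * V_TH

vg = gate_voltage_for_resistance(50e3)   # program a 50 kOhm weight
print(vg, equivalent_resistance(vg))     # ~1.6 V, 50000.0 ohms
```

Because V_g is externally adjustable, this structure gives the weight programmability that fixed resistors cannot.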

B) Reconfigurability

The interconnection of synapses and neurons determines the topology of a neural network. Reconfigurability is defined as the ability to alter this topology [9]. Using switches in the interconnections between synapses and neurons permits changing the network topology, as shown in Fig. 3. These switches are called "reconfiguration switches". The concept of reconfigurability should not be confused with weight programmability. Weight programmability is defined as the ability to alter the values of the weights in each synapse. In Fig. 3, weight programmability involves setting the values of the weights w_1, w_2, w_3, ..., w_n. Although reconfigurability can be achieved by setting the weights of some synapses to zero, this would be very inefficient in hardware.

C) The need for reconfigurable systems

Reconfigurability is desirable for several reasons [1]:
1. Providing a general problem-solving environment.
2. Correcting offsets.
3. Ease of testing.
4. Reconfiguration for isolating defects.

III. DESIGN OF AN ARITHMETIC UNIT USING RECONFIGURABLE ANNS

In a previous paper [12], a neural design for logic functions using modular neural networks was presented. Here, a simple design for an arithmetic unit using reconfigurable neural networks is presented. The aim is to obtain a complete ALU design by exploiting the benefits of both modular and reconfigurable neural networks.

A) Full-adder/full-subtractor implementation using an ANN

The full-adder/full-subtractor problem is solved practically: a neural network is simulated and implemented, trained with the back-propagation algorithm [10]. The network learns to map the functions of the full-adder and the full-subtractor, i.e., to classify the patterns shown in Table 1 correctly. The computed values of the weights and their corresponding resistor values are given in Table 2. After completing the design of the network, simulations were carried out with H-spice to test both the design and the performance of the network. Experimental results confirm the proposed theoretical considerations. Fig. 4 shows the construction of the full-adder/full-subtractor neural network. The network consists of 3 neurons and 12 connection weights.

B) 2-bit digital multiplier implementation

A 2-bit digital multiplier can be realized easily with a traditional feed-forward artificial neural network [11]. As shown in Fig. 5, such an implementation requires 4 neurons and 20 synaptic weights in the input-hidden layer, and 4 neurons and 20 synaptic weights in the hidden-output layer, for a total of 8 neurons and 40 synaptic weights. In the present work, a new design of the 2-bit digital multiplier has been adopted. The new design requires only 5 neurons and 20 synaptic weights, as shown in Fig. 6. The network receives two 2-bit digital words, and its outputs give the resulting product. The network is trained with the training set shown in Table 3. During the training phase, these input/output pairs are fed to the network, and in each iteration the weights are modified until they reach their optimal values. The optimal weight values and their corresponding resistance values are shown in Table 4. The proposed circuit has been realized in hardware and the results have been tested with the H-spice program; both the practical and the simulated results are found to be very close to the correct results.
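The training procedure can be reproduced in software before resistor values are committed to hardware. The sketch below is a minimal stand-in rather than the paper's 5-neuron circuit of Fig. 6: it builds the 2-bit multiplier truth table (as in Table 3) and fits a small feed-forward network with plain back-propagation; the hidden-layer size, learning rate, and iteration count are illustrative assumptions:

```python
import numpy as np

# 2-bit multiplier truth table: inputs (a1, a0, b1, b0) -> 4 product bits.
X = np.array([[(a >> 1) & 1, a & 1, (b >> 1) & 1, b & 1]
              for a in range(4) for b in range(4)], dtype=float)
Y = np.array([[(a * b >> k) & 1 for k in (3, 2, 1, 0)]
              for a in range(4) for b in range(4)], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.5, (4, 8)); b1 = np.zeros(8)   # 8 hidden units: an assumption
W2 = rng.normal(0.0, 0.5, (8, 4)); b2 = np.zeros(4)

def sig(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):                   # plain batch back-propagation, lr = 0.5
    H = sig(X @ W1 + b1)
    O = sig(H @ W2 + b2)
    dO = (O - Y) * O * (1.0 - O)         # output-layer delta (squared-error loss)
    dH = (dO @ W2.T) * H * (1.0 - H)     # hidden-layer delta
    W2 -= 0.5 * H.T @ dO; b2 -= 0.5 * dO.sum(0)
    W1 -= 0.5 * X.T @ dH; b1 -= 0.5 * dH.sum(0)

print(np.abs(np.round(O) - Y).sum())     # 0.0 once the truth table is learned
```

The trained weights would then be mapped to resistor values through Eqs. (1) and (2), as done for Tables 4 and 6.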
C) 2-bit digital divider implementation

A 2-bit digital divider can also be realized with an artificial neural network. As shown in Fig. 7, the implementation requires 4 neurons and 20 synaptic weights in the input-hidden layer, and 4 neurons and 15 synaptic weights in the hidden-output layer, for a total of 8 neurons and 35 synaptic weights. The network receives two 2-bit digital words, and its outputs give two digital words, one for the quotient and the other for the remainder. The network is trained with the training set shown in Table 5. The weight values and their corresponding resistance values are shown in Table 6. The results have been tested with the H-spice program and are found to be very close to the correct results.

The four arithmetic operations, namely addition, subtraction, multiplication, and division, can thus be realized with a single reconfigurable artificial neural network. The proposed network consists of only 8 neurons, 67 connection weights, and 32 reconfiguration switches. Fig. 8 shows the block diagram of the arithmetic unit using the reconfigurable neural network. The network includes the full-adder, the full-subtractor, the 2-bit digital multiplier, and the 2-bit digital divider. The proposed circuit is realized in hardware and tested with the H-spice program; both the practical and the simulated results are very close to the correct results. The computed weight values and their corresponding resistor values are given in Tables 2, 4, and 6. After completing the design of the network, simulations were carried out with H-spice to test both the design and the performance of the network. Experimental results confirm the proposed theoretical considerations, as shown in Tables 7 and 8.
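The selection logic of Fig. 8 can be illustrated with a small software model: two selection lines close one bank of reconfiguration switches, routing the shared neuron array to the weight set of the chosen operation. The (c1, c2) coding and the names below are hypothetical illustrations; the paper does not specify the select-line encoding:

```python
# Hypothetical model of the reconfiguration switches in Fig. 8: two
# selection lines (c1, c2) route the shared 8-neuron array to the weight
# bank of one operation. The operation coding is an assumption.

OPERATIONS = {
    (0, 0): "full_adder",
    (0, 1): "full_subtractor",
    (1, 0): "multiplier_2bit",
    (1, 1): "divider_2bit",
}

def reconfigure(c1: int, c2: int, weight_banks: dict) -> tuple:
    """Close one switch bank: return the operation name and its weight bank."""
    operation = OPERATIONS[(c1, c2)]
    return operation, weight_banks[operation]

banks = {name: f"<resistor bank for {name}>" for name in OPERATIONS.values()}
print(reconfigure(1, 0, banks))  # routes the array as the 2-bit multiplier
```

In the hardware network, closing a switch bank corresponds to connecting the trained resistor set of Tables 2, 4, or 6 to the shared summing circuits.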

IV. CONCLUSION

A new concept for realizing an arithmetic unit that includes a full-adder, a full-subtractor, a 2-bit digital multiplier, and a 2-bit digital divider using analog reconfigurable artificial neural networks has been presented. The proposed full network has been realized in hardware and the results have been tested using the H-spice program. Both the practical and the simulated results are found to be very close to the correct results.

REFERENCES
[1] S. Satyanarayana, Y. P. Tsividis, and H. P. Graf, "A Reconfigurable VLSI Neural Network," IEEE Journal of Solid-State Circuits, vol. 27, no. 1, January 1992.
[2] E. Vittoz, "Analog VLSI Implementation of Neural Networks," in Proc. Int. Symp. Circuits and Systems (New Orleans, LA), 1990, pp. 2524-2527.
[3] H. P. Graf and L. D. Jackel, "Analog Electronic Neural Network Circuits," IEEE Circuits and Devices Magazine, vol. 5, pp. 44-49, July 1989.
[4] H. M. El-Bakry, M. A. Abo-Elsoud, H. H. Soliman, and H. A. El-Mikati, "Design and Implementation of 2-bit Logic Functions Using Artificial Neural Networks," Proc. of the 6th International Conference on Microelectronics (ICM'96), Cairo, Egypt, 16-18 Dec. 1996.
[5] S. Haykin, Neural Networks: A Comprehensive Foundation, Macmillan College Publishing Company, 1994.
[6] J. M. Zurada, Introduction to Artificial Neural Systems, West Publishing Company, 1992.
[7] C. Mead and M. Ismail, Analog VLSI Implementation of Neural Systems, Kluwer Academic Publishers, USA, 1989.
[8] H. M. El-Bakry, M. A. Abo-Elsoud, H. H. Soliman, and H. A. El-Mikati, "Implementation of 2-bit Logic Functions Using Artificial Neural Networks," Proc. of the 6th International Conference on Computer Theory and Applications, Alexandria, Egypt, 3-5 Sept. 1996, pp. 283-288.
[9] I. S. Han and S. B. Park, "Voltage-Controlled Linear Resistor by Using Two MOS Transistors and Its Application to Active RC Filter MOS Integration," Proceedings of the IEEE, vol. 72, no. 11, Nov. 1984, pp. 1655-1657.
[10] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning Representations by Back-Propagating Errors," Nature, vol. 323, pp. 533-536, 1986.
[11] L. Fausett, Fundamentals of Neural Networks: Architectures, Algorithms, and Applications, Prentice Hall International, 1994.
[12] H. M. El-Bakry, "Complexity Reduction Using Modular Neural Networks," Proc. of IEEE IJCNN'03, Portland, Oregon, pp. 222-227, July 20-24, 2003.

Table I. Truth table of the full-adder/full-subtractor (inputs x, y, z; full-adder outputs S, C; full-subtractor outputs D, B).

Fig. 1. Implementation of positive and negative weights using only one op-amp.

Table II. Computed weights and their corresponding resistances of the full-adder/full-subtractor.

Table III. 2-bit digital multiplier training set.

Table IV. Weight values and their corresponding resistance values.

Fig. 2. Two MOS transistors as a linear resistor.

Fig. 3. Neuron with reconfiguration switches.

Fig. 4. Full-adder/full-subtractor implementation.

Table V. 2-bit digital divider training set.

Table VI. Weight values and their corresponding resistance values.

Table VII. Practical and simulation results after the summing circuit of the full-adder/full-subtractor.

Fig. 5. 2-bit digital multiplier using a traditional feed-forward neural network.

Fig. 6. A novel design for the 2-bit multiplier using a neural network.

Table VIII. Practical and simulation results after the summing circuit of the 2-bit digital multiplier.

Fig. 7. 2-bit digital divider using a neural network.

Fig. 8. Block diagram of the arithmetic unit using a reconfigurable neural network.