Simulate IFFT using an Artificial Neural Network
Haoran Chang, Ph.D. student, Fall 2018

1. Preparation

1.1 Dataset

The training data is generated from the trigonometric functions sine and cosine. There are four types of signals in total:

A sin(2πt) + B sin(2π · 5t) + 2 sin(ω · 3t)
A sin(2πt) − B sin(2π · 5t) + 2 sin(ω · 3t)
A cos(2πt) + B sin(2π · 5t) + 2 sin(ω · 3t)
A cos(2πt) − B sin(2π · 5t) + 2 sin(ω · 3t)

Here t is time, ranging over [0, 5) with step 0.005 (1,000 samples per signal). A and B share the same range, [1, 3) with step 0.2, and ω runs from 2π to 2π · 6 (exclusive) with step 2π · 0.5. There are thus 10 different A values, 10 different B values, and 10 different ω values, so each type yields 10 · 10 · 10 = 1,000 different signals, for 4 · 1,000 = 4,000 signals in total. Varying the amplitudes (A and B) and the angular frequency ω produces the different signals. I take the FFT of each signal; the FFT is the input of my neural network, and the original signal is the output (the label). A generation sketch is given at the end of this section.

Figure 1. One of the signals and the corresponding FFT (real and imaginary parts). The signal is 2 sin(2πt) + 3 sin(2π · 5t) + 2 sin(3.5 · 2π · 3t).

1.2 Neural Network Model

My model is a simple fully connected (dense) neural network. Out of the 4,000 signals, I randomly chose 100 as the test set and used the rest as the training set. The network is trained for 10,000 iterations; in each iteration, 100 signals are drawn at random from the training set. The loss function is the sum of squared differences, given as Eq. 1 below.
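Before moving on, here is the dataset-generation sketch promised in Section 1.1. NumPy is my assumption, as the report does not name its tools; the loop ranges follow the text above.

    # A minimal sketch of the dataset generation (assumption: NumPy).
    # Loop ranges and array shapes follow Section 1.1.
    import numpy as np

    t = np.arange(0.0, 5.0, 0.005)                  # 1,000 time samples
    A_vals = np.arange(1.0, 3.0, 0.2)               # 10 amplitude values
    B_vals = np.arange(1.0, 3.0, 0.2)               # 10 amplitude values
    w_vals = 2 * np.pi * np.arange(1.0, 6.0, 0.5)   # 10 angular frequencies

    inputs, labels = [], []
    for first_term in (np.sin, np.cos):             # sin or cos in the first term
        for sign in (1.0, -1.0):                    # + or - on the second term
            for A in A_vals:
                for B in B_vals:
                    for w in w_vals:
                        s = (A * first_term(2 * np.pi * t)
                             + sign * B * np.sin(2 * np.pi * 5 * t)
                             + 2 * np.sin(w * 3 * t))
                        f = np.fft.fft(s)
                        inputs.append(np.concatenate([f.real, f.imag]))  # 2,000 values
                        labels.append(s)                                 # 1,000 values

    X = np.asarray(inputs)   # network inputs, shape (4000, 2000)
    Y = np.asarray(labels)   # network outputs (labels), shape (4000, 1000)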

(Eq. 1)   error = \sum_i (x_i - \hat{x}_i)^2

Each signal is a 1-D array; x_i denotes a value of the original signal and \hat{x}_i the corresponding value of the reconstructed signal.

2. Experiment

2.1 Different Number of Nodes

At first I used 1000 nodes in each layer, since every signal has the same size, 1000. The network has four hidden layers in total. But the result did not look good (fig. 2).

Figure 2. Original signal (left) and reconstructed signal (right) from the neural network, using 1000 nodes per hidden layer.

Then I noticed that my input size is not 1000 but 2000, because the FFT has two parts, real and imaginary. So I increased the number of hidden nodes from 1000 to 2000, but the network still did not work well. Even with 4000 nodes, nothing improved (fig. 3).

Figure 3. Reconstructed signals. Right: using 2000 nodes per hidden layer. Left: using 4000 nodes per hidden layer.

For one signal S and its reconstruction R, the RMS is

(Eq. 2)   RMS_signal = \sqrt{ \sum_i (s_i - r_i)^2 / size }

where s_i and r_i are the i-th values of S and R, and size is the number of values in S; here size = 1,000. The average RMS of a model is then

(Eq. 3)   RMS_model = \frac{1}{N} \sum RMS_signal

where N is the number of samples in the test set; here N = 100. The results are shown in Table 1.

From the error curves we can see that using more nodes does not improve the results (fig. 4a). So I trained the network with fewer nodes instead, and the performance was better than with thousands of nodes (fig. 4b). Figures 4-7 show the learning curves for my experiments; the Y-axis of each curve is the total error over all training data, and the X-axis is the number of training iterations, as described in Section 1.2.

Figure 4a. Using 1000, 2000, and 4000 nodes per hidden layer.
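To pin the metric down, here is my NumPy rendering of Eq. 2 and Eq. 3; the report gives only the formulas.

    # Eq. 2 and Eq. 3 as small helpers (assumption: NumPy arrays).
    import numpy as np

    def rms_signal(s, r):
        # Eq. 2: RMS between one original signal s and its reconstruction r
        return np.sqrt(np.sum((s - r) ** 2) / s.size)   # size = 1,000 here

    def rms_model(S, R):
        # Eq. 3: average per-signal RMS over the test set (N = 100 here)
        return np.mean([rms_signal(s, r) for s, r in zip(S, R)])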

Figure 4b. Using 10, 20, and 40 nodes per hidden layer.

From fig. 4b we can see that using 40 nodes is better than the others. I also tried other numbers of hidden nodes, such as 30, 50, 70, and 100, but 40 was the best when using four hidden layers.

2.2 Different Number of Hidden Layers

In the next stage I therefore used 40 nodes per layer but varied the number of layers. Figure 5 shows the result: using three layers reaches the same performance but needs more time (more training steps), while the result with two layers is worse than the others.

Figure 5. Same number of hidden nodes, but different numbers of layers.
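To make these sweeps easy to rerun, the model can be built from a width and a depth parameter. This is a minimal sketch only: tf.keras, the Adam optimizer, and the ReLU activation are my assumptions, as the report does not say what it actually used.

    # A parameterized version of the dense model of Section 1.2.
    import tensorflow as tf

    def build_model(n_nodes=40, n_hidden=4):
        model = tf.keras.Sequential()
        model.add(tf.keras.layers.InputLayer(input_shape=(2000,)))  # FFT real + imag
        for _ in range(n_hidden):
            model.add(tf.keras.layers.Dense(n_nodes, activation="relu"))
        model.add(tf.keras.layers.Dense(1000))  # reconstructed time-domain signal
        # Sum of squared differences, matching Eq. 1
        model.compile(optimizer="adam",
                      loss=lambda y, y_hat: tf.reduce_sum(tf.square(y - y_hat)))
        return model

    # Training roughly as in Section 1.2: batches of 100 random training signals
    # model = build_model(40, 4)
    # model.fit(X_train, Y_train, batch_size=100, epochs=10)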

Then another question arose: what about using fewer layers but more nodes? Figure 6 shows the result; interestingly, adding more nodes does not change much.

Figure 6. Using more nodes when there are fewer hidden layers.

On the other hand, a further experiment shows that using more layers does not change much either. Here I kept the same number of nodes, 40, but increased the number of hidden layers. From the result (fig. 7) we can see that adding layers neither improves nor degrades the performance by much.

Figure 7. Using 40 nodes per layer, but different numbers of hidden layers.

2.3 Result

Figure 8 shows one of the original signals and the corresponding reconstructed signal. The model here uses 4 hidden layers with 40 nodes per hidden layer.

Figure 8. Original signal and the corresponding reconstruction. RMS is 0.6890, computed by Eq. 2.

As mentioned before, there are 4,000 signals in total. For cross-validation, I randomly chose 100 of them as the test set and used the rest as the training set. This process is iterated ??? times. I then computed the average RMS of each model over all iterations:

Model                                    RMS      Training time (seconds)
10 nodes per layer, 4 hidden layers      1.4594   34.9669
20 nodes per layer, 4 hidden layers      0.9954   35.6812
40 nodes per layer, 4 hidden layers      0.6517   35.9120
1000 nodes per layer, 4 hidden layers    2.1543   80.0613
2000 nodes per layer, 4 hidden layers    2.2325   169.2275
4000 nodes per layer, 4 hidden layers    2.1158   461.7935

Table 1. RMS and training time of each model.

3. Conclusion

According to these experiments, the number of nodes matters more than the number of hidden layers in my case. With the number of nodes fixed, increasing the number of layers does not improve performance, while choosing the right number of nodes clearly does. However, performance definitely degrades if the number of hidden layers is reduced too far. Overall, 4 hidden layers with 40 nodes per layer gives the best performance. Perhaps in the frequency domain we do not need too many units; using too many may lead to overfitting.

A few future steps for this work:
(1) Increase the amount of training data and diversify it over more types of signals, for more robust IFFT learning.
(2) Use an independent test set in addition to cross-validation; the diversity of that test set will show the robustness of the IFFT-learned model.
(3) Plot a learning curve by gradually increasing the training set (X-axis: training-set size; Y-axis: reconstruction accuracy) to show the stability of learning; see the sketch after this list.
(4) Check whether the model is symmetric, i.e., whether giving a signal on the right side as input would produce its FFT on the left side of the network.
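For step (3), a learning curve could be produced along these lines, reusing the hypothetical build_model and rms_model helpers sketched earlier; the training-set sizes are illustrative, not from the report.

    # A sketch of future step (3): reconstruction accuracy vs. training-set size.
    sizes = [500, 1000, 2000, 3900]   # illustrative sizes (my choice)
    curve = []
    for n in sizes:
        model = build_model(n_nodes=40, n_hidden=4)
        model.fit(X_train[:n], Y_train[:n], batch_size=100, epochs=10, verbose=0)
        curve.append(rms_model(Y_test, model.predict(X_test)))
    # Plotting sizes (X-axis) against curve (Y-axis) shows the stability of learning.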