Development of a Time-Restricted Region-Suppressed ER-SAM Beamformer and its Application to an Auditory Evoked Field Study


Development of a Time-Restricted Region-Suppressed ER-SAM Beamformer and its Application to an Auditory Evoked Field Study

by Daniel Davis Eugene Wong

A thesis submitted in conformity with the requirements for the degree of Master of Applied Science, Institute of Biomaterials and Biomedical Engineering, University of Toronto.

Copyright by Daniel Davis Eugene Wong

Development of a Time-Restricted Region-Suppressed ER-SAM Beamformer and its Application to an Auditory Evoked Field Study

ABSTRACT

Daniel Davis Eugene Wong
Master of Applied Science
Institute of Biomaterials and Biomedical Engineering
University of Toronto

This study evaluated a time-restricted region-suppressed event-related synthetic aperture magnetoencephalography (TRRS-ER-SAM) beamformer algorithm against equivalent current dipole (ECD) and event-related synthetic aperture magnetoencephalography (ER-SAM) post-processing methods for magnetoencephalography data. The evaluation was done numerically and with auditory evoked field (AEF) data elicited by binaurally presented 5 Hz tones. The TRRS-ER-SAM beamformer demonstrated robustness to noise and the ability to handle coherent sources. The TRRS-ER-SAM algorithm was then applied to a study of Nm AEFs in 8 subjects aged -5 years. The study examined the effects of age, stimulus frequency, and right-sided monaural versus binaural stimulation on the Nm location, amplitude, and latency. It was found that age affected the Nm latency; stimulus frequency affected the Nm location, amplitude, and latency; and monaural versus binaural stimulation affected the Nm amplitude. In the context of these effects, the structure of the auditory pathway and the neurophysiological changes due to maturation are discussed.

ACKNOWLEDGEMENTS

I would like to thank the following individuals for their inspiration and encouragement during my Master's studies:

My parents, for their support in allowing me to pursue my research and higher education, and for the wonderful dinners that fueled my brain.

My supervisors, Robert Harrison and Karen Gordon, for giving me guidance as well as the freedom to develop my own data collection and analysis protocols. Their extensive knowledge of auditory neuroscience and our stimulating discussions have made my graduate studies a wonderful learning experience.

My committee members Bill Gaetz, Doug Cheyne, John Sled, and Blake Papsin, for their advice and guidance in the development of the experiments. I would like to thank Bill Gaetz for assisting in the pilot recordings and for his advice on MEG setup, and Doug Cheyne for sharing his knowledge on beamforming and encouraging me to combine region suppression with his ER-SAM beamformer.

Roy Sharma, for his advice on setting up EEG equipment in the MEG, especially on reducing the noise produced by the EEG wires.

Sonya Bells, Ruth Weiss, and Tammy Rayner, for their assistance in MEG and MRI data collection.

Finally, I would like to thank the University of Toronto Institute for Biomaterials and Biomedical Engineering for the 6-7 and 7-8 University of Toronto Open Fellowships.

ABBREVIATIONS

AEF: Auditory evoked field
AEP: Auditory evoked potential
AER: Auditory evoked response
ANOVA: Analysis of variance
ECD: Equivalent current dipole
EEG: Electroencephalography
ER-SAM: Event-related SAM
FFT: Fast Fourier transform
ISI: Interstimulus interval
MEG: Magnetoencephalography
MRI: Magnetic resonance imaging
N/Nm: AEP/AEF deflection with latency of around ms in adults
P/Pm: AEP/AEF deflection with latency of around 5 ms in adults
P/Pm: AEP/AEF deflection with latency of around ms in adults
SAM: Synthetic aperture magnetoencephalography
SNR: Signal to noise ratio
SQUID: Superconducting quantum interference device
TRRS-ER-SAM: Time-restricted region-suppressed ER-SAM
VOI: Volume of interest

TABLE OF CONTENTS

List of Tables
List of Figures
1. Introduction
    MEG Post-Processing Algorithm Evaluation
    Application of TRRS-ER-SAM to an AEF Study
    Thesis Roadmap
2. Background
    Neuromagnetic Signals
    Electroencephalography and Magnetoencephalography
    Introduction to Analysis of MEG Data
    MEG Head Models
    Coregistration of MEG and MRI Data
    Equivalent Current Dipole Analysis
    Introduction to Beamformer Analysis
    Synthetic Aperture Beamforming Theory
    Beamforming and Correlated Sources
    Statistical Thresholding of Data
    Intersubject Brain Geometry Normalization
    From Sound to Auditory Evoked Potentials/Fields
    Auditory System Development
3. Data Collection
    Stimulus Generation
    Stim-Trigger Design
    Audiometer Design
    Earphone Setup
    Data Collection Parameters
    Physical Setup of the MEG Environment
    Magnetic Resonance Imaging
    Table of Subjects
4. Data Analysis I: Standard Post-Processing Tools
    Coregistration of MEG and MRI Data
    Equivalent Current Dipole Analysis
    Synthetic Aperture Magnetoencephalography Analysis
    Nm Identification Using Beamformers
5. Data Analysis II: Development of a Time-Restricted Region-Suppressed Synthetic Aperture Magnetoencephalography Beamformer
    Time Restriction
    Region Suppression
    Localizing Coherent Sources
    Evaluation of the TRRS-ER-SAM Beamformer Using Numerical Simulations
    Evaluation of the TRRS-ER-SAM Beamformer Using AEF Data
    Application to an AEF Study
6. Results I: Evaluation of MEG Post-Processing Algorithms
    ER-SAM and TRRS-ER-SAM Beamformer Evaluations
    Identifying the Nm
    Evaluation of MEG Data Post-Processing Methods Using AEF Data
7. Results II: Application of the TRRS-ER-SAM Beamformer to an AEF Study
    ANOVA Analysis
    Identifying the Central Nm Location
    Frequency Related Changes of the Nm
    Age Related Changes of the Nm
    Effects of Monaural versus Binaural Stimuli on the Nm
8. Discussion
    TRRS-ER-SAM Beamformer Evaluation
    AEF Study Results
Conclusions
Appendix I: Multiple Coherent Source Proof
Appendix II: PC Serial-to-Experiment Interface Circuit
Appendix III: List of Modifications Made to NUTMEG
Appendix IV: Angle Out of Plane Derivation
Appendix V: Processing Time Analysis of Coherent Source Gradient Search Algorithm
References

LIST OF TABLES

Table 3..I: Audigy ZS Specifications
Table 3.8.I: Table of Subjects
Table 6.3.I: 5 Hz Binaural ECD Nm Source Localization MNI Coordinates
Table 6.3.II: 5 Hz Binaural ER-SAM Nm Source Localization MNI Coordinates
Table 6.3.III: 5 Hz Binaural ECD Nm Source Localization MNI Coordinates
Table 7..I: ANOVA Analysis of Nm Location
Table 7..I: Nm Source Localization MNI Coordinates
Table 7.3.I: Paired T-Test: Effect of Frequency on Right Hemisphere Y-Coordinate
Table 7.3.II: Paired T-Test: Effect of Frequency on Right Hemisphere Z-Coordinate
Table 7.3.III: Frequency Effects: Nm Amplitude Paired T-Test p-values
Table 7.3.IV: Frequency Effects: Nm Amplitude Paired T-Test Power
Table 7.3.V: Nm Latency Paired T-Test p-values
Table 7.3.VI: Nm Latency Paired T-Test Power
Table 7.4.I: Fitted Curves from Exponential Regression
Table 7.4.II: p-values for Exponential Regression Exponent T-Test
Table 7.5.I: Paired T-Test Results of Monaural Versus Binaural Stimulus Effects on the Right Hemisphere Nm Y-Coordinate
Table 7.5.II: Hemisphere Effects: Nm Amplitude Paired T-Test Results
Table 7.5.III: Monaural Versus Binaural Laterality Index T-Test

LIST OF FIGURES

All original figures, with the exception of Fig. .

Fig. .3.: (a) Non-averaged single trial data from a pilot auditory evoked field recording. (b) The average over all trials.
Fig. .5.: (a) Left and right pre-auricular lipid markers on MRI image from a 8 year old subject. (b) Nasion lipid marker. (c) Coordinate system formed by the fiducial markers.
Fig. .6.: ECD analysis illustration using half-head sensor auditory evoked field pilot data. (a) ECD analysis attempts to minimize the difference between the observed and the calculated field pattern by tweaking dipole location, orientation and moment using a gradient search. Blue and red areas indicate magnetic flux source and sink. (b) The dipole solution is shown, coregistered with the subject's MRI image. Here, the dipole is located in the planum temporale.
Fig. .7.: Schematic diagram of beamformer signal processing algorithm applied to a hypothetical recording. Data from each channel is multiplied by a weight value (i.e., w_1, w_2, ..., w_M). The resulting products are then summed. For mathematical purposes, these weight values are typically organized as a weight vector.
Fig. .7.: Half-head beamformer results from an auditory evoked field pilot recording of a year old subject using monaural 5 Hz tones. (a) Activation map of left hemisphere Nm activation. This was generated using the MEG data that gave the anatomically meaningless dipole solution using ECD analysis in Fig. 5a-c. (b) Virtual sensor data for the voxel with the greatest activation. The red line corresponds to the time point used to generate the activation map.
Fig. ..: (a) Averaged auditory evoked field data obtained from a subject. It was bandpass filtered between -3 Hz. (b) Noise floor data obtained by plus-minus averaging of the auditory evoked field. (c) Histogram of omnibus noise magnitude from noise floor beamformer output. In this case, for a 95% confidence level, the beamformer magnitude cutoff is 6.5x.
Fig. ..: MNI coordinate system overlaid on top of a subject brain that was normalized to the MNI template.
Fig. ..: Schematic of MNI warping process. The subject MRI is warped to the dimensions of the MNI template. The transform coefficients used for the warp can be used to transform between unwarped MRI coordinates and MNI coordinates.
Fig. ..: Diagram of the auditory pathways, courtesy of Guyton & Hall [9]. Note that this diagram only illustrates input from one ear. For a complete picture, imagine a parallel system with input from the other ear.
Fig.: Stimulus waveform analysis. (a) and (c) are waveforms for 6 and 4 bit 5 Hz sounds attenuated to -9 dBFS. (b) and (d) are the corresponding power spectra for the 6 and 4 bit sounds.
Fig.: Plot of sound level output by EAR earphones in relation to sound card digital input level.
Fig.: Hz tone waveform. A Hanning window applied to the first and last sixteenth provides a smooth attack and decay.
Fig.: (a) Audio data and serial port controlled stim-line. The recording begins on the falling edge of the stim-line. (b) Audio data and circuit controlled stim-line. The oscilloscope recording began on the rising edge of the stim-line.
Fig.: Software and hardware module communication diagram.
Fig.: Pulse sequence of stim-trigger circuit. The stim line is raised when audio data is detected. When the computer sends the reset (RST) signal through the RTS serial port, the stim line is reset.
Fig.: Log produced by the audiometer for one of the subjects. Each circle represents a tone presented by the audiometer. The dashed red line indicates the hearing threshold chosen by the audiometer.
Fig.: Etymotic Research EAR earphones. The default tubes shown in this diagram were replaced with 4.5 cm vascular access tubing.
Fig.: Layout of the MEG room for AEF recordings.
Fig.: D MRI image overlaid on photo of subject.
Fig.: (a) The Nm pseudo-Z value for one particular recording varies as the window size decreases below 5 ms. The Nm pseudo-Z value for weight calculation over the entire post-stimulus window was . (b) These measurements were performed using data from a year old artifact-free subject with a clear Nm response. It would be expected that beamformer performance would worsen with poorer quality datasets or datasets with a smaller Nm response.
Fig.: Graphical illustration of gradient search algorithm. All search points are fixed, except for one. The non-fixed search point is moved from voxel to neighboring voxel, attempting to maximize the search parameter.
Fig.: Runtime analysis of coherent source gradient search algorithm reveals that the growth rate is always less than V, which is the growth rate of a brute force search algorithm when N is increased to N+. The ripples and missing values on the surface are a result of computational failure to evaluate the large factorials in (5.3.). The actual values are approximately midway between the ripples.
Fig.: Schematic depiction of suppressed region detection and generation algorithm for two coherent sources. Each dot represents a coherent source point found by the gradient search algorithm. The letter associated with each dot represents the output from one gradient search run. The number subscript beside the letter indicates the search point for that particular run (two search points per run). Suppressed regions, depicted by the dashed boxes, are formed around (A, B, C) and (A, B, C) because the points in each group are located within 75% of a box length from each other. No box is formed around D because D is not close enough to (A, B, C). No boxes are formed around the E and F dots because the groups (E, F) and (E, F) do not contain enough elements. In this illustration, the number of points per group required for a suppressed region to be formed around a dot is 3. The dots G and G are ignored because they are too close together.
Fig.: (a) Schematic of source placement. (b) Magnetic field of simulated data with sources at (,-3,4) mm and (,3,4) mm.
Fig.: Reconstructed activation map of signal power over to 5 ms time interval using standard SAM beamformer. The intensity values are based on normalized lead fields.
Fig.: (a) Coherent source search algorithm creates suppressed regions around the coherent sources. The box size used to generate the suppressed regions was 3x3x3 mm. (b) Reconstructed activation map of signal power over to 5 ms time interval using coherent source localization algorithm and modified SAM beamformer. The peaks are located at (,-3,4) mm and (,3,4) mm. The intensity values are based on normalized lead fields. (c) Reconstructed non-normalized source amplitude plots obtained from the modified SAM beamformer. The actual source amplitudes are shown by the dotted lines, which are hard to see due to the very close overlay with the reconstructed amplitudes.
Fig.: (a) Activation map obtained from conventional SAM beamformer over a to 5 ms time window. Only one source is seen at (6,, 4) mm. (b) The reconstructed source dipole moment at (6,, 4) mm. The estimated dipole moment is the solid line and the actual dipole moment is the dotted line. The estimated dipole moment is evidently distorted.
Fig.: (a) Activation map obtained by region suppressed SAM beamformer over a to 5 ms time window, using suppressed regions detected by a coherent source gradient search. (b) Reconstructed sources using the region suppressed SAM beamformer.
Fig.: Reconstructed activation map of three coherent sources over the time interval of to 5 ms using a conventional SAM beamformer.
Fig.: (a) Reconstructed activation map of three coherent sources using the region suppressed SAM beamformer over the time interval of to 5 ms. (b) Reconstructed dipole moment of the three detected sources. The solid line is the estimated dipole moment and the dotted line is the actual dipole moment. Both lines overlap very closely.
Fig.: (a) The averaged MEG data. The 3-5 Hz frequency band over the time window indicated by the dotted lines was used to compute the beamformer outputs. There is a peak at 4 Hz corresponding to the 4 Hz amplitude modulation of the auditory stimulus. The source of the peak at 4 Hz could not be identified. (b) MEG channel frequency spectra. A frequency-restriction window of 38-4 Hz was used for computing the optimal dipole orientations. (c) The tomographic image at 4 Hz created by a standard SAM beamformer implementation. All activity shown was below the 95% confidence interval.
Fig.: (a) The output of the coherent source gradient search algorithm. Note the tendency of certain coordinates to be consistently generated from different runs. (b) The suppressed regions generated from the gradient search output. A 3x3x3 mm box size was chosen to form the suppressed regions. (c) The 4 Hz tomographic image created using a region suppressed event-related SAM beamformer, thresholded at a 95% confidence interval. The activation peaks are indicated by the red dots. (d) The frequency spectra of the localized sources. It is worth mentioning that the 37 Hz peak in sensor space (Fig. 6..8b) did not yield any significant peaks.
Fig.: Two virtual sensor plots from the right hemisphere of the year old subject at different frequencies show different polarity patterns. AEF peaks are labeled by comparing the plots with labeled averaged data. (a) was generated using data from a Hz binaural stimulus and (b) was generated using data from a 5 Hz binaural stimulus.
Fig.: Magnitude plot of the right hemisphere virtual sensor for 5 Hz monaural stimuli. The chosen Nm peak latency is indicated by the vertical red line. This is confirmed as the Nm peak by comparing the plot with (b) the averaged dataset, where the vertical red line is at the same latency as in the magnitude plot.
Fig.: A surface plot of right hemisphere virtual sensors for 5 Hz stimuli aids in identifying Nm progression with age. Given that the Nm is the dominant peak for older subjects, we assign the Nm in younger children based on peak connectivity with those of the older children.
Fig.: Same as Fig. 6..3, except viewed from the opposite side.
Fig.: (a) Averaged MEG sensor data and EEG trace from Cz electrode for a Hz binaural recording on a 4 year old subject. Fields are labeled in the MEG sensor data and potentials are labeled on the EEG trace. By visual inspection, the Pm, Nm and Pm fields line up with the P, N and P potentials. (b) Virtual sensor data from the same recording with statistically significant fields labeled. The latencies of the Nm and Pm fields are relatively close to those of the potentials. This data was lowpass filtered at Hz to make comparison easier.
Fig.: (a) MEG and EEG traces from 5 Hz monaural recording of year old subject, previously shown in Fig. . Auditory evoked fields and potentials are labeled assuming the N wave has a reversed polarity. (b) The 5 Hz monaural Pm, Nm, and Nm fields are labeled on the virtual sensor trace for comparison with the EEG trace. A Hz lowpass filter was applied to make comparison easier. (c) MEG and EEG traces from Hz monaural stimulus. Although the EEG trace has a low SNR, we make tentative measurements of the latencies of the P, N, P, and N waves based on their occurrence near their corresponding MEG fields. (d) Virtual sensor data for Hz monaural stimulus. A Hz lowpass filter was applied to make comparison easier. The Nm and Nm fields have the correct polarity. The Pm field is not labeled because it was below the statistical significance threshold. The valley between the Nm and Nm would be read on an EEG trace as a P.
Fig.: (a) Averaged auditory evoked field data obtained from a subject. It was bandpass filtered between -3 Hz. (b) Noise floor data obtained by plus-minus averaging of the auditory evoked field. (c) Histogram of omnibus noise magnitude from noise floor beamformer output. In this case, for α = .5, the beamformer magnitude cutoff is 4.7x10^-6. (d) Conventional ER-SAM analysis reconstructs both sources. They are both visible above a 95% confidence level threshold and are located at MEG coordinates (-, 4, 5) mm and (5, -35, 5) mm. (e) 4x4x4 mm suppression region boxes are created around the reconstructed sources. (f) Region suppressed ER-SAM reconstructs the sources with a higher magnitude than the conventional ER-SAM beamformer. (g) The Nm dipole moment is measured by computing the non-normalized virtual sensors at the maxima of the point spread functions around the two sources.
Fig.: (a) Averaged auditory evoked field data obtained from a subject using 5 Hz binaural stimuli. It was bandpass filtered between -3 Hz. (b) Noise floor data obtained by plus-minus averaging of the auditory evoked field. (c) Histogram of omnibus noise magnitude from noise floor beamformer output. In this case, for α = .5, the beamformer magnitude cutoff is .8x10^-6. (d) Only one source is detectable with conventional ER-SAM analysis. Nothing resembling an Nm peak could be found in the left hemisphere. (e) A coherent source search found coherent sources at MEG coordinates (5, 5, 7) mm and (5, -5, 6) mm. 4x4x4 mm suppression region boxes were created around these points. (f) Region suppressed ER-SAM reconstructs the sources with a higher magnitude than the conventional ER-SAM beamformer. The left hemisphere source is greater than the 95% confidence level threshold. (g) The Nm dipole moment is measured by computing the non-normalized virtual sensors at the maxima of the point spread functions around the two sources.
Fig.: Comparison of ECD, ER-SAM, and TRRS-ER-SAM algorithms in their ability to localize the Nm AEF in 7 subjects. The vertical red line in the MEG sensor traces indicates the Nm latency used as the center of the 3 ms ECD localization window. No ECD analysis could be performed for the 6 year old subject due to the presence of a permanent retainer artifact. For ECD MR slices with multiple dipoles marked, the axial MR slices are where the slightly larger dipole marker is located. The other dipole markers are located in separate slices, but their projected positions are shown for reference. Beamformer images were thresholded at a .5 significance level.
Fig.: Grand mean activation of Nm sources in the left and right hemispheres shown on an MNI-warped averaged brain from all subjects. The activation maps were normalized in magnitude by dividing by the 95% confidence level threshold, as well as in space by warping them to the MNI brain.
Fig.: Graphs of monaural and binaural right hemisphere Nm MNI y- and z-coordinates in relation to stimulus frequency.
Fig.: (a) Plot of amplitude versus frequency for the monaural stimulus condition. (b) Plot of amplitude versus frequency for the binaural stimulus condition.
Fig.: (a) Plot of latency versus frequency for the monaural stimulus condition. (b) Plot of latency versus frequency for the binaural stimulus condition.
Fig.: Effect of age on the left hemisphere Nm MNI z-coordinate.
Fig.: Plot of Nm dipole moments for 5 Hz stimuli. MonL and MonR are the left and right hemisphere source datasets for monaural stimuli. Likewise, BinL and BinR are the left and right hemisphere source datasets for binaural stimuli. Linear regression analysis indicates that there are no significant linear trends in the data (p MonL = .54, p MonR = .44, p BinL = .67, p BinR = .5). It is possible that no trends were detected due to insufficient data, as indicated by low test power (power MonL = .94, power MonR = ., power BinL = .6, power BinR = .89).
Fig.: Plot of Nm dipole moments for Hz stimuli. Linear regression analysis indicates that there are no significant linear trends in the data (p MonL = .74, p MonR = .49, p BinL = .7, p BinR = .36). It is possible that no trends were detected due to insufficient data, as indicated by low test power (power MonL = .6, power MonR = ., power BinL = .6, power BinR = .43).
Fig.: Plot of Nm dipole moments for 6 Hz stimuli. Linear regression analysis indicates that there are no significant linear trends in the data (p MonL = .48, p MonR = .49, p BinL = .99, p BinR = .753). It is possible that no trends were detected due to insufficient data, as indicated by low test power (power MonL = .5, power MonR = .98, power BinL = .34, power BinR = .44).
Fig.: Effect of age on Nm latency for (a) 5 Hz tones, (b) Hz tones, and (c) 6 Hz tones. MonL and BinL refer to left hemisphere data for monaural and binaural stimuli, and MonR and BinR refer to right hemisphere data for monaural and binaural stimuli.
Fig.: Point plots comparing the effects of monaural versus binaural stimuli on the Nm right hemisphere MNI y-coordinates at different stimulus frequencies.
Fig.: Nm laterality indexes for monaural and binaural stimuli over different frequencies. The laterality index was computed by dividing the left hemisphere source dipole moment by that of the right.

1. INTRODUCTION

Magnetoencephalography (MEG) is a means of recording cortical activity by measuring, with extracranial sensors, the magnetic fields given off by the brain. While the data recorded by the sensors can be used directly for analysis, the sensor data does not in itself provide much information about the cortical processes that gave rise to the recorded magnetic fields. Data post-processing methods can provide more information about what is going on in the cortex, such as where the activity occurs, how large it is, and how it changes in time. In this thesis, the performance of a proposed time-restricted region-suppressed ER-SAM beamformer algorithm was evaluated against commonly used magnetoencephalography data post-processing algorithms to determine its viability in analyzing auditory evoked field data. An auditory evoked field study was then conducted in normal hearing children and young adults. We presented 4 ms tonal stimuli either monaurally (right sided) or binaurally. The tone frequencies were 5,, and 6 Hz. The aim of the study was to identify characteristics of a developing auditory pathway in the processing of single-frequency tonal stimulation. In the study, the effects of subject age, tone frequency, and mode of presentation (monaural/binaural) on the cortical response were examined.

1.1 MEG Post-Processing Algorithm Evaluation

In this thesis, standard equivalent current dipole (ECD) and event-related synthetic aperture magnetoencephalography (ER-SAM) beamformer methods for post-processing of magnetoencephalography data were compared to evaluate their suitability as tools for locating and measuring the dipole moment of auditory evoked fields. ECD analysis is prone to interference from uncorrelated noise. ER-SAM, on the other hand, is robust to uncorrelated noise [], but since it is based on the linearly constrained minimum variance beamformer, reconstructed sources have reduced amplitudes when they are coherent []. This makes detection of coherent sources difficult when ER-SAM is employed.
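The coherent-source amplitude reduction can be illustrated with a small numerical toy example. This is not the thesis simulation setup: the random lead fields, noise level, and source waveforms below are arbitrary assumptions, and the filter is a generic minimum-variance beamformer rather than ER-SAM itself. The sketch compares the beamformer output power at one source when a second source is uncorrelated versus fully correlated with it:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_samples = 64, 2000

# Toy lead fields for two sources (one column each), unit-normalized.
L = rng.standard_normal((n_sensors, 2))
L /= np.linalg.norm(L, axis=0)

def lcmv_source_power(correlated):
    s1 = np.sin(2 * np.pi * 10 * np.arange(n_samples) / 1000.0)
    s2 = s1 if correlated else rng.standard_normal(n_samples)
    b = L[:, [0]] * s1 + L[:, [1]] * s2       # simulated sensor data (M x T)
    b += 0.1 * rng.standard_normal(b.shape)   # uncorrelated sensor noise
    R = b @ b.T / n_samples                   # second-order moment (covariance) matrix
    Rinv = np.linalg.inv(R)
    l1 = L[:, [0]]
    w = Rinv @ l1 / (l1.T @ Rinv @ l1)        # minimum-variance weights, unit gain at source 1
    return (w.T @ R @ w).item()               # beamformer output power at source 1

print("uncorrelated second source :", lcmv_source_power(False))
print("fully correlated second source:", lcmv_source_power(True))
# The output power at source 1 drops sharply in the correlated case,
# reflecting the coherent-source cancellation described above.
```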

A. Is the TRRS-ER-SAM a Better Post-Processing Method for AEF Analysis?

It was hypothesized that a time-restricted region-suppressed event-related SAM (TRRS-ER-SAM) beamformer would yield a better signal-to-noise ratio (SNR) than ECD or ER-SAM in analyzing the auditory evoked field (AEF). This would result in a greater probability of detection and more accurate signal estimation. Being based on the ER-SAM algorithm, the TRRS-ER-SAM beamformer should share the same robustness to uncorrelated noise. The addition of time restriction should allow more accurate characterization of specific evoked fields, and the addition of region suppression should allow the beamformer to accurately measure coherent sources [3]. This beamformer was derived and evaluated against the standard ECD and ER-SAM methods on the basis of performance in signal detection and estimation in numerical simulations, and for the Nm auditory evoked field. Prior studies have localized the source of the Nm AEF to the core auditory cortex in Heschl's gyrus [4, 5]; hence, it was hypothesized that any Nm detections from the post-processing algorithms should localize to this structure.

1.2 Application of TRRS-ER-SAM to an AEF Study

In the past, auditory evoked field/potential studies have mostly been analyzed by examining trial-averaged channel data or by using ECD. Analyzing trial-averaged data for amplitude information is particularly prone to errors, since it does not account for the distance between the subject's head and the MEG sensors, or the effect of different head shapes on EEG sensor locations. Such a method may require an average over a large number of subjects to cancel out these variations. ECD analysis solves this problem by measuring the dipole moment itself; however, ECD analysis is prone to noise and often requires the placement of an arbitrary number of dipoles to obtain a good fit. Surprisingly, very few studies have applied beamforming algorithms to the analysis of the auditory evoked field [6, 7], possibly due to the presence of coherent sources in auditory evoked responses. The present study employs magnetoencephalography with the TRRS-ER-SAM beamformer to measure differences in cortical activation patterns evoked by binaural versus right-sided monaural tonal stimulation at 5, and 6 Hz in 8 normal hearing right-handed children and young adults aged -5 years. The TRRS-ER-SAM beamformer enabled statistically significant results to be retrieved from such a relatively small

group due to the high SNR of its output. The use of beamformer source localization aided in the identification of relations between neuroanatomical structure and auditory function. Through the analysis of neural source generator location, amplitude, and latency, insights can be made into the structure of the normal hearing auditory system and how it develops with age. Knowledge of how the structure and development of the normal auditory pathway affect the measured response is an essential step before analyzing data from subjects with impaired auditory systems. It can enable inferences to be made between measured responses from auditory impaired subjects and the underlying abnormal pathway organization and disruption in normal auditory development.

Of the Pm, Nm and Pm AEFs described in the literature, the Nm was the focus of the present study, as there is more literature available on this particular AEF and its neuroelectric correlate, the N. Steps were taken to ensure that the Nm was being consistently measured across the subject age range. The study localized the Nm neural source generator and examined differences in Nm location, amplitude, and latency in response to (a) stimulus frequency, (b) subject age, (c) binaural versus monaural tonal stimuli, and (d) hemisphere of measurement. In this study, the following set of questions (and sub-questions) was posed:

A. What is the Central Neuroanatomical Structure Responsible for the Nm?

The maximum Nm activation was hypothesized to localize near Heschl's gyrus, which has been shown to be a major contributor to the Nm field [4, 5]. Identifying the central Nm location can help associate neuroanatomical structure with auditory processing functions, which can be indirectly determined by identifying factors that affect the Nm location, amplitude, and latency.

B. Does Stimulus Frequency Affect the Cortical Response?

1. Nm Localization: Does Stimulus Frequency Affect Nm Localization?

The auditory cortex has been found to have multiple regions that are tonotopically organized [8-]. This means that various stimulus frequencies are represented at slightly different Nm source locations, with shifts as small as mm per octave []. Studying the tonotopic organization can provide information on how frequencies are represented in the auditory system.

For example, using dipole modeling of electroencephalography data, Guiraud and colleagues have found evidence of similarities as well as differences between the tonotopic organization of normal hearing subjects and that of cochlear implant users [3]. However, given the errors in coregistration, the limited voxel resolution used (5x5x5 mm), subject movement, and the limited subject numbers, it was hypothesized that differences in Nm localization with varying frequency would not be large enough to be detected in the present study.

2. Nm Amplitude: Does Stimulus Frequency Affect the Nm Amplitude?

Examining Nm amplitude dependence on frequency would indicate the nature of tonotopicity in the auditory system. The physical differences between each frequency representation could result in differences in Nm amplitude with respect to frequency. Based on prior studies [, 4], it was hypothesized that the Nm amplitude would decrease with increasing frequency.

3. Nm Latency: Does Frequency Affect the Nm Latency?

Examination of frequency effects on the Nm latency would reveal the influence of tonotopicity on auditory pathway delays. These delays could arise from path length differences or additional processing stages, thereby providing insight into the tonotopic structure of the auditory system. Based on findings by Roberts and colleagues [5], and Ren [6], it was hypothesized that latency would decrease with increasing stimulus frequency.

C. Does Subject Age Affect the Cortical Response?

1. Nm Localization: Does Subject Age Affect the Nm Location?

EEG measures of AEPs have shown an age-related medial shift in radially oriented sources [7]. It was, however, found that the radial sources are not correlated with the N, suggesting that the N source generator runs tangential to the brain surface. There was no age-related change detected in tangential sources; hence, it was hypothesized that no age-related changes in location would be detected in the N's neuromagnetic correlate in the study sample group.

2. Nm Amplitude: Does Subject Age Affect the Nm Amplitude?

The effect of age on Nm amplitude was examined to determine whether there are maturational changes in the auditory system that affect the cortical response. These potential changes could involve modifications to the number of neurons at the source generator site, more synchronous neural activity, or possibly changes to the orientation of the dipole representation of

the neural activity. Based on previous studies, it was hypothesized that the Nm amplitude would increase with increasing age [8, 9].

3. Nm Latency: Does Subject Age Affect the Nm Latency?

The effect of age on Nm latency was examined to determine if, and to what degree, the auditory pathways increase in transmission efficiency through maturation. Based on findings by Ponton and colleagues [8], it was hypothesized that with increasing subject age, the Nm latency would decrease. These age-related changes would reflect maturational changes in the auditory system.

D. Are There Measurable Differences Between Monaural and Binaural Stimuli?

1. Nm Localization: Are There Different Nm Source Generator Locations for the Processing of Monaural and Binaural Stimuli?

Differences between the processing of binaural and monaural stimuli may lead to differences in Nm localization. Again, given the errors in coregistration, the limited voxel resolution used, and the limited subject numbers, it was hypothesized that differences in localization would not be large enough to be detected in this study sample group.

2. Nm Amplitude: Do Differences Between Monaural and Binaural Stimuli Affect Nm Amplitude?

Examining the differences in Nm amplitude between monaural and binaural stimuli would provide information on the organization of the underlying pathways that lead to the auditory cortex. Because non-linear processes may be involved in the processing of binaural input, it was hypothesized that the binaurally evoked Nm response would not equal the linear sum of the monaurally evoked responses.

3. Nm Latency: Is There a Latency Difference Between Monaural and Binaural Stimuli?

The effect of monaural versus binaural stimulation on the Nm latency was examined to determine whether differences in monaural and binaural stimulus processing would affect the transmission time of the auditory signal. It was hypothesized that the Nm would occur at longer latencies in response to binaural compared with monaural stimulation, due to additional processes involved in comparing the inputs from both ears for binaurally related tasks such as sound localization. Differences in the Nm latency between hemispheres in response to monaural and binaural stimuli were also examined. Hemispheric differences in Nm latency would indicate the degree to which the pathways leading to each Nm source generator in the cortex differ.

Differences in response to monaural stimulation could indicate path length differences, whereas differences with binaural stimulation could reflect different roles of each hemisphere in signal processing.

4. Nm Lateralization: Is There a Lateralization Difference Between Monaural and Binaural Stimuli?

The lateralization of the Nm to a particular hemisphere in response to monaural versus binaural stimuli was also examined. This would provide information on the relative connections between each ear and each hemisphere, and also on hemispheric dominance in auditory processing. It was hypothesized that in response to monaural stimuli, the Nm activation would be lateralized toward the hemisphere contralateral to the auditory stimulus more so than for binaural stimulation. Similar findings were shown by Fujiki and colleagues using steady-state responses [6]. For a right-sided monaural stimulus, it was expected that there would be greater activation in the left hemisphere than in the right. For binaural stimuli, it was expected that fairly equal activation would occur in both hemispheres. We also hypothesized that there would be an increase in amplitude between monaural and binaural responses due to a binaural summation effect.

1.3 Thesis Roadmap

In the next chapter, background is provided on the current state of the art in magnetoencephalography data analysis, along with a review of auditory pathway organization and development. In chapter three, the design of the auditory stimulation equipment, the experimental setup in the MEG environment, and the recording protocol are described. In chapter four, the data analysis protocols for ECD and ER-SAM are outlined, along with the procedure used to verify that the Nm field was being consistently measured across subjects. In chapter five, the TRRS-ER-SAM algorithm is derived, along with a complementary coherent source search algorithm. At the end of the fifth chapter, the methods used to evaluate the TRRS-ER-SAM algorithm against the ECD and ER-SAM algorithms are outlined. The statistical analysis methods used in the AEF study to analyze the TRRS-ER-SAM data across subjects are also described. In chapter six, the results of the evaluation of the TRRS-ER-SAM algorithm against ECD and ER-SAM using numerical simulations and AEF data are shown. In chapter seven, the data obtained from the application of the TRRS-ER-SAM algorithm are statistically

analyzed. Finally, in chapter eight, the results of the evaluation of the TRRS-ER-SAM algorithm and the AEF study are discussed.

2. BACKGROUND

In this chapter, an overview is presented of the neurophysiological underpinnings of the extracranial neuromagnetic and neuroelectric signals. Both magnetoencephalography and electroencephalography measures of these signals are then discussed. This is followed by a review of the theory behind ECD and ER-SAM post-processing methods for MEG data. In the context of using these measures to study factors that influence the auditory evoked response, the physical and neurological structure of the auditory system is described, along with the associated cortical auditory evoked fields. The development of the auditory system is also reviewed to provide insight into the physiological basis of the auditory evoked field maturational changes being studied.

2.1 Neuromagnetic Signals

Before discussing the neuroimaging of electromagnetic signals from the brain, it is worth reviewing the structural components of the brain that give rise to these signals. The brain is composed in part of interconnected neurons, each of which consists of a dendritic tree, a cell body, and an axon. Chemical transmitters from presynaptic neurons result in an opening of dendritic Na+ channels at the synapse. This results in an increase or decrease of the local intracellular potential, depending on the type of transmitter released, which propagates toward the axon hillock. This propagation allows the various synapses along the dendrite to contribute, in varying degrees depending on their location, to the resting potential at the axon hillock. When this resting potential rises above a certain threshold, an action potential is generated along the axon. The rising or depolarization phase of the action potential is a result of the opening of certain ion channels, which allows ions to flow into the cell. The majority of these ion channels are sodium ion channels; hence, for simplicity, we will denote them as Na+ ion channels. The falling or repolarization phase of the action potential is the result of the closing of Na+ ion channels and the opening of another type of ion channel, which allows ions to flow out of the cell. The majority of these ion channels are potassium ion channels; hence, we will simply

denote them as K+ ion channels. After an initial undershoot of the potential, the cell returns to the resting state once the K+ channels are closed. At the axon terminal, the action potential results in a release of chemical transmitters at the synapses. The maximum rate of action potentials is determined by the refractory period, during which Na+ channels remain inactive. A higher resting potential reduces the refractory period and increases the rate of action potentials. In this way, electrically encoded information can be transmitted and processed through a network of neurons.

There is a flow of ions through the dendritic trees that occurs due to electric potential differences between the synapse and the axon hillock during synaptic activity. This flow of ions forms a primary current. In order to complete the current loop, there is also a flow of ions in the volume outside the dendritic tree, known as a secondary current. When a sufficient number of neurons exhibit synchronized activity and are parallel in axis, the resulting electric or magnetic fields produced by this activity can be detected by extracranial sensors. In general, it is the magnetic field produced by the primary currents that is detected by magnetoencephalography, and the electric field produced by the secondary currents that is detected by electroencephalography []. Magnetic fields travel in a circular fashion around the direction of the current, following the right-hand rule, whereas electric fields travel in the direction of the current. This means that magnetic fields detectable outside the head are most likely produced by neurons that run tangential to the scalp surface, whereas electric fields detectable outside the head are most likely produced by neurons that are orthogonal to the scalp surface. Because the secondary currents generate the detected electric fields, it is necessary to have a good model of the head tissues in order to account for the different conductivities.

2.2 Electroencephalography and Magnetoencephalography

Electroencephalography (EEG) and magnetoencephalography (MEG) are functional neuroimaging modalities that use an array of sensors surrounding the head to measure signals emitted by the brain during sensory and cognitive processes. EEG measures the electric field while MEG measures the magnetic field. These imaging modalities have the advantage that they are noninvasive and have a temporal resolution on the order of milliseconds. The advantage MEG has over EEG is that magnetic fields are not affected by the skull and other extracerebral

tissues; hence, it does not exhibit the distortion and smearing of electric potentials []. This means that accurate source analysis of MEG fields only requires a realistically shaped head model, while EEG source analysis requires a multicompartment model of conductivities for the brain, skull, cerebrospinal fluid and scalp. This also means that MEG field amplitude isocontour lines in sensor space are much closer together than for EEG (assuming the field/potential distributions are normalized to account for unit differences), allowing one to make a guess of source location based on visual inspection of how the detected fields are distributed among the sensors. However, the tradeoff is that MEG requires more sophisticated equipment.

The EEG measures the potential difference on the head surface []. In EEG, electrodes are attached to the scalp, and one is attached to a reference point such as the nose or an ear lobe. A grounding electrode is also attached to the head to help minimize background noise. In the basic setup, signals detected by the electrodes are fed through an amplifier stage, then through an analog-to-digital converter, and then to a storage device. The typical amplitude of an EEG signal is on the order of microvolts. Due to the relative simplicity of its configuration, the EEG has been a mainstay in the field of clinical neuroscience since its first reported recording by Berger in 1929 [3].

The first recording of biomagnetic signals of the brain was reported in 1968 by Cohen [4]. Early instrumentation used inductance coils for signal measurement, and averaging of many trials was required to obtain a signal. During the 1970s and 80s, highly sensitive superconducting quantum interference devices (SQUIDs) were adapted to measuring biomagnetic signals [5]. In the early 1990s, the first multichannel MEG systems were introduced, allowing for more advanced signal processing. Biomagnetic fields have a low magnitude. For instance, the typical magnetic field generated by the human brain is on the order of femtoteslas, while the field produced by a transistor at m is on the order of picoteslas. Hence, noise reduction methods need to be implemented in the modern MEG system [5]. The MEG device is enclosed in a µ-metal shielded room to minimize background noise. Further noise cancellation is implemented using first order gradiometers, reference sensors, and adaptive balancing. First order gradiometers consist of two magnetometer coils in opposite orientations. By subtracting the signals measured by the two coils, fields from distant sources are cancelled because both coils detect similar signals; however, fields from nearby sources will produce different signals in the coils and will result in a signal detectable by the gradiometers. Reference sensors use magnetometers and first

order gradiometers to synthesize higher order gradiometers. Adaptive balancing can be used to compute the synthesized higher order gradiometers such that correlated noise is minimized.

2.3 Introduction to Analysis of MEG Data

As is shown in Fig. .3.a, raw data from the MEG has a very low signal to noise ratio (SNR); hence, averaging over multiple trials is typically done to aid visual identification of the auditory evoked fields and for equivalent current dipole analysis (Fig. .3.b).

Fig. .3.. (a) Non-averaged single trial data from a pilot auditory evoked field recording. (b) The average over all trials.

In order to convert the raw data detected by either EEG or MEG sensors into a functional image where neural activity is spatially localized, post-processing algorithms are applied. This involves finding an inverse solution to the magnetic field distribution detected by the sensors. The main obstacle that each algorithm needs to deal with is that the problem is ill-posed, meaning that there are infinitely many source configurations within a volume that could yield the same electric or magnetic field distribution (shown by Helmholtz in 1853).

2.4 MEG Head Models

Both equivalent current dipole (ECD) and beamformer post-processing algorithms

attempt to localize the sources that give rise to the field pattern detected by the sensors. To do this, a spherical head model is required to compute current dipole forward solutions, where a hypothetical unit dipole is placed within the head volume and the resulting magnetic field that would be detected by the sensors is calculated. Computation of the forward solution requires knowledge of the conductivity profile and shape of the conducting volume. For whole-head MEG systems with radially oriented sensors, the brain is typically modeled as a uniformly conducting sphere with only primary currents, which greatly simplifies the calculation. For a single-sphere conductor model [6] centered at the origin, the magnetic field measured at location $\mathbf{r}$, created by a current dipole $\mathbf{q}$ inside the sphere at location $\mathbf{r}_0$, is calculated as

$$\mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi F^2}\left( F\,\mathbf{q}\times\mathbf{r}_0 - (\mathbf{q}\times\mathbf{r}_0\cdot\mathbf{r})\,\nabla F \right), \tag{2.4.1}$$

where $\mu_0 = 4\pi\times10^{-7}$ H/m is the permeability of free space, $F$ is a scalar term given by

$$F = a\left(r a + r^2 - \mathbf{r}_0\cdot\mathbf{r}\right), \tag{2.4.2}$$

$\nabla F$ is a vector term given by

$$\nabla F = \left(\frac{a^2}{r} + \frac{\mathbf{a}\cdot\mathbf{r}}{a} + 2a + 2r\right)\mathbf{r} - \left(a + 2r + \frac{\mathbf{a}\cdot\mathbf{r}}{a}\right)\mathbf{r}_0, \tag{2.4.3}$$

and $\mathbf{a} \equiv \mathbf{r} - \mathbf{r}_0$, $a \equiv |\mathbf{a}|$, $r \equiv |\mathbf{r}|$. In essence, when implemented in a head model that describes the head as a spherical conducting volume, (2.4.1) calculates the field that would be detected by a given MEG sensor for a given hypothetical current dipole placed within the head volume.

Magnetoencephalography recordings are performed with localization coils placed on the nasion and the left and right preauricular points on the head. These points form a coordinate system against which the MEG sensor locations and orientations can be described (Fig. .5.). Using the locations of the localization coils, a single-sphere head model can be created without anatomical information by placing a sphere at an approximate location within the head volume such that the sphere surface approximately follows the surface of a typical scalp. Anatomical information can provide a more accurate profile of the head shape. This allows for more accurate placement of the sphere in single-sphere head models for equivalent current dipole or beamformer analysis. It also allows one to create a slightly more complex multi-sphere head model for more accurate analyses. In a multi-sphere head model, each MEG sensor is assigned a local sphere, which follows the profile of the scalp closest to the sensor. The forward solution for a dipole within the head volume is then calculated by computing the magnetic field contribution of the dipole to each sensor, using each sensor's local sphere center.
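For concreteness, a minimal NumPy sketch of (2.4.1)-(2.4.3) is shown below. The function name sarvas_field and the sensor, dipole, and sphere-centre values are illustrative assumptions only and do not correspond to any recording or vendor software used in this thesis.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # permeability of free space (H/m)

def sarvas_field(r_sensor, r_dipole, q, sphere_center=np.zeros(3)):
    """Magnetic field at r_sensor from dipole q at r_dipole inside a
    uniformly conducting sphere (single-sphere model of (2.4.1)-(2.4.3))."""
    r = r_sensor - sphere_center
    r0 = r_dipole - sphere_center
    a = r - r0
    a_n, r_n = np.linalg.norm(a), np.linalg.norm(r)
    F = a_n * (r_n * a_n + r_n**2 - r0 @ r)
    gradF = ((a_n**2 / r_n + a @ r / a_n + 2 * a_n + 2 * r_n) * r
             - (a_n + 2 * r_n + a @ r / a_n) * r0)
    return (MU0 / (4 * np.pi * F**2)) * (F * np.cross(q, r0)
                                         - (np.cross(q, r0) @ r) * gradF)

# Illustrative values (metres, A*m): a tangential dipole 7 cm from the
# sphere centre, and a radially oriented sensor 11 cm from the centre.
r_sensor = np.array([0.0, 0.0, 0.11])
r_dipole = np.array([0.0, 0.02, 0.07])
q = np.array([10e-9, 0.0, 0.0])
b_radial = sarvas_field(r_sensor, r_dipole, q) @ (r_sensor / np.linalg.norm(r_sensor))
print(f"radial field component: {b_radial:.3e} T")
```

In a multi-sphere head model, the same computation is simply repeated per sensor with that sensor's own sphere_center.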

Because different local sphere centers are being used, the Cartesian dipole orientation is not immediately convertible into spherical coordinates, which are sometimes preferable. One way to convert to spherical coordinates is to take the spherical coordinate origin as the local sphere center of the sensor that picks up the largest magnetic field component for the given dipole.

2.5 Coregistration of MEG and MRI Data

To choose the local sphere centers for head model generation using anatomical information, or to display spatial neural activation plots generated by post-processing algorithms over an anatomically meaningful background, the MEG and MRI data need to be coregistered. This is done by marking the MEG head localizer coil fiducial positions with MRI lipid markers, as shown in Fig. .5.a-b. These fiducial markers establish a common coordinate system between the two imaging modalities []. This coordinate system is depicted in Fig. .5.c.

Fig. .5.. (a) Left and right pre-auricular lipid markers on an MRI image from a 8 year old subject. (b) Nasion lipid marker. (c) Coordinate system formed by the fiducial markers.

2.6 Equivalent Current Dipole Analysis

Equivalent current dipole (ECD) analysis finds the location, orientation and magnitude of a pre-specified number of dipoles that minimize the difference between the forward solution and the actual averaged-data sensor values over a defined time interval (Fig. .6.). This method assumes that the detected cortical activation pattern is caused by a neuronal current distribution over a fairly small region. When measured from a distance, the net contribution of these neurons can be represented by an equivalent current dipole.
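A sketch of how such a single-dipole fit can be set up as a nonlinear least-squares problem is given below. It reuses the hypothetical sarvas_field helper from the sketch in Section 2.4, treats each sensor as a point magnetometer along its orientation, and uses a generic SciPy optimizer in place of the gradient search performed by the actual ECD software; it is illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

def ecd_misfit(params, b_measured, sensor_pos, sensor_ori, sphere_center):
    """Sum-of-squares misfit for a single dipole; params = [x, y, z, qx, qy, qz].
    Each sensor is approximated as a point magnetometer along its orientation,
    which is a simplification of real gradiometer pickup coils."""
    r_dip, q = params[:3], params[3:]
    b_model = np.array([
        sarvas_field(rs, r_dip, q, sphere_center) @ ori
        for rs, ori in zip(sensor_pos, sensor_ori)
    ])
    return np.sum((b_measured - b_model) ** 2)

def fit_single_dipole(b_measured, sensor_pos, sensor_ori, sphere_center, x0):
    # Iterative search over dipole location and moment from an initial guess x0;
    # a derivative-free method is used here for simplicity.
    res = minimize(ecd_misfit, x0,
                   args=(b_measured, sensor_pos, sensor_ori, sphere_center),
                   method="Nelder-Mead")
    return res.x[:3], res.x[3:]   # fitted dipole location and moment
```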

Fig. .6.. ECD analysis illustration using half-head sensor auditory evoked field pilot data. (a) ECD analysis attempts to minimize the difference between the observed and the calculated field pattern by tweaking dipole location, orientation and moment using a gradient search. Blue and red areas indicate magnetic flux source and sink. (b) The dipole solution is shown, coregistered with the subject's MRI image. Here, the dipole is located in the planum temporale.

2.7 Introduction to Beamformer Analysis

Beamformers are a type of adaptive spatial filter that examines the brain voxel by voxel and measures the contribution of each voxel to the measured field, thereby producing a tomographic image. This is done by summing the products of each sensor signal multiplied by a predetermined weight value. A schematic of beamformer processing is shown in Fig. .7.. Beamformers act as a spatial filter, ideally permitting only the signal from a point in space to pass while blocking out all other signals. However, the number of other signals in space that can be blocked is equal to N-1, where N is the number of sensors. This is because a finite number of sensors prevents us from forming a sufficient number of mathematical constraints. This means that beamformer algorithms must construct the filter to preferentially block out the contribution of the most significant sources that lie outside the location being measured.
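The N-1 limit can be made concrete with a toy calculation: with N sensors the weight vector has N degrees of freedom, so one unit-gain constraint at the voxel of interest leaves room for at most N-1 exact nulls. The sketch below uses randomly generated lead fields, an assumption purely for illustration, to solve directly for such a weight vector:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 16                                    # number of sensors
l_target = rng.standard_normal(N)         # lead field of the voxel of interest
L_null = rng.standard_normal((N, N - 1))  # lead fields of N-1 interfering sources

# Solve for weights with unit gain on the target and zero gain on the others:
# [l_target, L_null]^T w = [1, 0, ..., 0]
A = np.column_stack([l_target, L_null]).T       # N x N constraint matrix
rhs = np.zeros(N); rhs[0] = 1.0
w = np.linalg.solve(A, rhs)

print("gain at target voxel      :", w @ l_target)                 # ~1
print("max gain at nulled sources:", np.max(np.abs(w @ L_null)))   # ~0
# Adding any further source would over-determine the system, so an adaptive
# beamformer must instead choose which interfering sources to suppress most.
```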

Fig. .7.. Schematic diagram of the beamformer signal processing algorithm applied to a hypothetical recording. Data from each channel is multiplied by a weight value (i.e., w_1, w_2, ..., w_M). The resulting products are then summed. For mathematical purposes, these weight values are typically organized as a weight vector.

MEG beamformer analysis is typically done using variations of the linearly constrained minimum variance (LCMV) algorithm. This algorithm is adaptive, which means that it does not require the specification of the number of sources. This type of beamformer minimizes the contribution of coherent sources from other regions in space as much as possible, while enforcing unit gain on orthogonal vectors at the point of interest. The algorithm requires a head shape model for accurate computation of the dipole-to-magnetic-field forward solution, also known as the lead field. Because there is less lead field contribution from voxels further away from the sensor array, sensor noise tends to become amplified in these voxels. In order to compensate for this, a depth-normalization method is needed for accurate source localization, typically in the form of weight normalization or lead field normalization. In the beamformers examined in this thesis, lead field normalization is used.

When performing LCMV beamformer analysis, it must be kept in mind that multiple correlated sources can produce incorrect source localization, as they produce a magnetic field sensor pattern that is time-locked. This could happen in the case of bilateral activation, where Nm neural source generators in the left and right auditory cortices can activate in a highly synchronous fashion [3, 7].

Beamformer analysis can produce a tomographic activation map (Fig. .7.a) overlaid on an MR image and virtual sensor data (Fig. .7.b). The activation map indicates voxel activity at one point in time, and virtual sensor data indicates the activity of one voxel over the entire time span. It should be noted that the activation map does not indicate a region of activity. Instead, it is a point spread function created by the fact that the lead fields of voxels close to the source location are not orthogonal to the lead field from the source location. This causes the voxels

near the source location to pick up some source activity. Source location and magnitude information is taken at the peak of this point spread function.

Fig. .7.. Half-head beamformer results from an auditory evoked field pilot recording of a year old subject using monaural 5 Hz tones. (a) Activation map of left hemisphere Nm activation. This was generated using the MEG data that gave the anatomically meaningless dipole solution using ECD analysis in Fig. 5a-c. (b) Virtual sensor data for the voxel with the greatest activation. The red line corresponds to the time point used to generate the activation map.

2.8 Synthetic Aperture Beamforming Theory

The basic LCMV beamformer yields output that contains both the signal and the noise. Two popular extensions of the basic LCMV beamformer that aim to increase the signal to noise ratio are the eigenvalue beamformer and the synthetic aperture magnetoencephalography (SAM) beamformer.

To improve the signal to noise ratio, the eigenvalue beamformer performs an eigendecomposition of the sensor covariance matrix, forming uncorrelated projections of the sensor data onto orthonormal basis vectors (eigenvectors). These eigenvectors and their corresponding projections are sorted in order of their contribution to the variance in the sensor data. This eigendecomposition essentially allows the user to select the eigenvectors that are believed to be part of the signal subspace, thereby omitting the remaining eigenvectors with the

This subspace selection reduces the dimensionality of the data and acts as a filter separating the signal from the noise. We have chosen to evaluate the SAM beamformer in this thesis because, unlike the eigenvalue beamformer, it does not require the selection of eigenvalues. When attempting to accurately measure the dipole moment, the use of eigenvalues would require the user to be sure that the entire signal space is captured, which can be a somewhat arbitrary process.

A. Definitions

We define b_m(t) as the magnetic field measured by the mth detector at time t. The set of measured data from multiple detectors is represented by the vector b(t) = [b_1(t), b_2(t), ..., b_M(t)]^T, where M is the total number of detectors and the superscript T indicates the matrix transpose. For multiple trials, b_m(t) is averaged over all trials. Spatial locations are represented by a three dimensional vector r such that r = (x, y, z). The source distribution is assumed to be generated by Q discrete sources, represented as equivalent current dipoles located at r_1, r_2, ..., r_Q. The qth source is expressed as a function of time t by s(r_q, t).

The lead field column vectors l_θ(r) and l_φ(r) are the magnetic fields that would be produced by unit dipoles in the θ and φ directions at location r. The orthogonal orientations used in this paper are the θ and φ directions of the local sphere belonging to the sensor with the greatest signal pickup for the measured voxel, using the multisphere head model. This ensures that θ and φ are tangential to the surface of the head. Using this coordinate system avoids the need for a third, radial lead field orientation, since radial sources do not make a significant contribution to the measured magnetic field [7].

The second order moment matrix of the recorded data, conventionally known as the covariance matrix, is denoted R_b such that

    R_b = ⟨ b(t) b^T(t) ⟩,

where ⟨ ⟩ indicates the ensemble average, which in practice is approximated by the time average. Note that the second order moment matrix is only truly a covariance matrix when the means of the data are zero. For datasets consisting of multiple trials, R_b can be calculated over all trials, as is done in ER-SAM. If this is done, any DC offset removal should be applied equally over all trials.
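To illustrate how the second-order moment matrix can be computed over all trials rather than over the trial average, the following Matlab sketch concatenates the post-stimulus windows of every trial before forming R_b. The array layout (channels x samples x trials) and the variable names are assumptions, not the thesis code.

    % Sketch: second-order moment (covariance) matrix over concatenated trials.
    % data is assumed to be M x T x K (channels x samples x trials),
    % already DC-corrected identically in every trial.
    M = 151; T = 600; K = 100;
    data = randn(M, T, K);                        % placeholder single-trial data
    concat = reshape(data, M, T * K);             % concatenate post-stimulus windows
    R_b = (concat * concat') / size(concat, 2);   % M x M second-order moment matrix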

For Q sources, the second order source-moment matrix R_s is defined as

    R_s = [ α_1                µ_12 √(α_1 α_2)    ...    µ_1Q √(α_1 α_Q)
            µ_21 √(α_2 α_1)    α_2                ...    µ_2Q √(α_2 α_Q)
            ...                ...                ...    ...
            µ_Q1 √(α_Q α_1)    µ_Q2 √(α_Q α_2)    ...    α_Q ],                      (2.8.1)

where α_i = ⟨ s²(r_i, t) ⟩ and µ_ij is the correlation coefficient between the ith and jth sources []. Hence, for uncorrelated sources the source-moment matrix is simply a diagonal matrix of source powers.

B. Conventional LCMV Beamforming

The estimated virtual sensor signal ŝ(r, t) at a voxel, as measured by an adaptive spatial filter, is given by

    ŝ(r, t) = W^T(r) b(t),                                                           (2.8.2)

where W(r) = [w_θ(r), w_φ(r)] is the weight matrix and ŝ(r, t) = [ŝ_θ(r, t), ŝ_φ(r, t)]^T. Here w_θ(r) and w_φ(r) are the weight column vectors for the θ and φ directions, respectively, and ŝ_θ(r, t) and ŝ_φ(r, t) are the column vectors representing the estimated signals in the θ and φ directions at location r.

The linearly constrained minimum variance (LCMV) beamformer formulation (2.8.3)-(2.8.5) seeks weights that minimize the contribution of other sources to the measured source, while at the same time enforcing unit gain for each vector and suppressing the contribution of the orthogonal vector [8]:

    min_{w_θ} w_θ^T(r) R_b w_θ(r)   subject to   l_θ^T(r) w_θ(r) = 1,  l_φ^T(r) w_θ(r) = 0,     (2.8.3)

    min_{w_φ} w_φ^T(r) R_b w_φ(r)   subject to   l_φ^T(r) w_φ(r) = 1,  l_θ^T(r) w_φ(r) = 0.     (2.8.4)

In full matrix form, this can be rewritten as

    min_W W^T(r) R_b W(r)   subject to   W^T(r) L(r) = I,                            (2.8.5)

where the lead field matrix L(r) = [l_θ(r), l_φ(r)]. Using the Lagrange multiplier method [8], the solution for the weight matrix is

    W(r) = R_b⁻¹ L(r) [ L^T(r) R_b⁻¹ L(r) ]⁻¹.                                       (2.8.6)
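The closed-form solution (2.8.6) is straightforward to implement. The Matlab sketch below computes LCMV weights for one voxel; the random lead field values, the absence of regularization, and the variable names are illustrative assumptions rather than the thesis implementation.

    % Sketch: LCMV weights for a single voxel from equation (2.8.6).
    % R_b is the M x M sensor covariance matrix; L is the M x 2 lead field
    % matrix [l_theta, l_phi] for the voxel being probed.
    M = 151;
    B = randn(M, 600);
    R_b = (B * B') / 600;                 % placeholder covariance
    L = randn(M, 2);                      % placeholder lead fields (l_theta, l_phi)
    Rinv = inv(R_b);                      % no regularization in this toy example
    W = Rinv * L / (L' * Rinv * L);       % M x 2 weight matrix, eq. (2.8.6)
    s_hat = W' * B;                       % 2 x T estimated theta/phi source signals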

C. Lead Field Normalization and Dipole Moment Estimation

Lead field normalization is necessary to prevent increased sensitivity of the beamformer at voxels located furthest away from the sensors. Without it, the lead fields of voxels far from the sensors are weak, so sensor noise is amplified there, resulting in an over-sensitivity of the beamformer toward the center of the head. To compensate for this, lead field normalization is performed (2.8.7) before each computation involving a lead field matrix [9]. The beamformer output ŝ(r, t) is therefore non-dimensional.

    L_N = L(r) / ‖L(r)‖                                                              (2.8.7)

The lead field normalized tomographic output is used for source localization. Estimation of the dipole moment at location r is done by not normalizing the lead fields: the non-normalized beamformer output ŝ(r, t) at a localized source yields the dipole moment of the source over time.

D. Synthetic Aperture Magnetoencephalography

For each location r, the LCMV beamformer presented thus far provides two estimated signal vectors, in the θ and φ directions, which contain both signal and noise components. To improve the signal to noise ratio, SAM takes advantage of the assumption, implicit in the LCMV derivation, that neural source generators can be represented by dipoles. Since a dipole has a single orientation, SAM estimates the dipole orientation at each location r and suppresses the orthogonal vector, which is assumed to contain only noise. Early implementations of the SAM algorithm used a non-linear search for this dipole orientation [8], but this search involves a fair amount of processing time. An alternative would be to take the source orientation directly from the virtual sensor output of a multidimensional eigenvalue beamformer [3]; however, this still involves the selection of dominant eigenvalues to represent the signal space.

In the described implementation [3], the dipole orientation at each location r is estimated by eigendecomposition of the 2 x 2 source covariance matrix

    R̂_s = W^T(r) R_b W(r).                                                           (2.8.8)

The estimated source orientation at r is equal to the eigenvector corresponding to the largest eigenvalue of the R̂_s matrix []. The orthogonal orientation V_orth(r) at each location r is calculated as well using (2.8.9), where V_src(r) is the estimated source orientation vector at r; equation (2.8.9) ensures that the resulting vectors remain tangential to the surface of the head.

    V_orth(r) = [ −[V_src(r)]_φ , [V_src(r)]_θ ]^T                                   (2.8.9)
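A compact way to realize the orientation estimate of (2.8.8)-(2.8.9) is sketched below in Matlab: the 2 x 2 source covariance is eigendecomposed, the dominant eigenvector is taken as the source orientation, and the tangential orthogonal vector is formed by a 90 degree in-plane rotation. The variable names and toy inputs are assumptions.

    % Sketch: SAM dipole-orientation estimate from the 2 x 2 source covariance.
    % W is the M x 2 LCMV weight matrix and R_b the M x M sensor covariance.
    M = 151;
    B = randn(M, 600);
    R_b = (B * B') / 600;                    % placeholder covariance
    L = randn(M, 2);
    W = (R_b \ L) / (L' * (R_b \ L));        % LCMV weights, eq. (2.8.6)
    Rs_hat = W' * R_b * W;                   % 2 x 2 source covariance, eq. (2.8.8)
    [V, D] = eig(Rs_hat);
    [~, imax] = max(diag(D));
    V_src = V(:, imax);                      % estimated source orientation (theta, phi)
    V_orth = [-V_src(2); V_src(1)];          % tangential orthogonal orientation, eq. (2.8.9)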

The lead field vector is created again using the lead fields corresponding to the source orientation and its orthogonal orientation. This is done using

    l_src(r) = [V_src]_θ l_θ(r) + [V_src]_φ l_φ(r)   and                             (2.8.10)

    l_orth(r) = [V_orth]_θ l_θ(r) + [V_orth]_φ l_φ(r),                               (2.8.11)

where [V_src]_θ and [V_src]_φ are the θ and φ components of the V_src(r) vector, respectively, and [V_orth]_θ and [V_orth]_φ are the θ and φ components of the V_orth(r) vector, respectively. The weights are then recomputed using (2.8.6), where L(r_n) = [l_src(r_n), l_orth(r_n)]. The first column of the weight matrix corresponds to the estimated dipole orientation and is the only column needed to compute the estimated signal at location r using (2.8.2).

E. Event-Related Synthetic Aperture Magnetoencephalography

Event-related synthetic aperture magnetoencephalography (ER-SAM) can be used to improve the rank of the sensor covariance matrix, which results in a selection of beamformer weights that is more robust against noise that is not correlated across event-related trials, such as noise produced by permanent retainers []. ER-SAM is performed by computing the R̂_s matrix across all trials and all time points of interest. This means that the time window of interest, which is typically the post-stimulus time window, for each trial is concatenated with the corresponding time windows from all the other trials. The resulting concatenated sensor data, rather than the averaged sensor data, is then used to compute the covariance matrix. The weight vector computed using the event-related source covariance matrix is used to compute the estimated signal at each voxel location r using (2.8.2), with b(t) still being the averaged sensor data. Note that using averaged sensor data in (2.8.2) is mathematically equivalent to taking b(t) to be the individual trial data, using (2.8.2) to compute the estimated signal for each trial, and then averaging the estimated signals over all trials.
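The ER-SAM weighting described above can be summarized in Matlab as follows: weights are derived from the covariance of trial-concatenated data and then applied to the trial average. This is a simplified illustration; the array shapes, the absence of regularization and of the SAM orientation step, and the variable names are assumptions.

    % Sketch of the ER-SAM idea: covariance from concatenated trials,
    % weights applied to the averaged sensor data.
    M = 151; T = 600; K = 100;
    data = randn(M, T, K);                        % placeholder single-trial data
    concat = reshape(data, M, T * K);             % concatenated post-stimulus windows
    R_b = (concat * concat') / (T * K);           % event-related covariance matrix
    L = randn(M, 2);                              % placeholder voxel lead fields
    W = (R_b \ L) / (L' * (R_b \ L));             % LCMV weights from eq. (2.8.6)
    b_avg = mean(data, 3);                        % trial-averaged sensor data
    s_hat = W(:, 1)' * b_avg;                     % virtual sensor time series (first weight column)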

2.9 Beamforming and Correlated Sources

A. Signal Leakage and Cancellation

In the presence of coherent sources, the signal measured by the beamformer at the location of the actual source exhibits time course distortions due to signal leakage between the sources []. The conventional LCMV beamformer assumes that all sources are uncorrelated, which means that R̂_s is nearly a diagonal matrix. It is also assumed that the lead fields from different source locations are orthogonal to each other and that there are fewer sources than sensors.

To deduce the relationship between R̂_s and R_b, we first note that in the formulation of the LCMV beamformer, the estimated second order source moment matrix is given as

    R̂_s = W^T(r) R_b W(r).                                                           (2.9.1)

By combining (2.9.1) and

    W(r) = R_b⁻¹ L(r) [ L^T(r) R_b⁻¹ L(r) ]⁻¹,                                       (2.9.2)

we can alternatively compute R̂_s as

    R̂_s = [ L^T(r) R_b⁻¹ L(r) ]⁻¹.                                                   (2.9.3)

If there are M sensors and Q sources with Q < M, we denote the lead field of the ith source as l(r_i) on the interval 1 ≤ i ≤ Q, and denote l(r_j) on the interval Q < j ≤ M as a vector of zeros with the same length as l(r_i). L(r) is then defined as a composite lead field square matrix such that L(r) = [l(r_1), l(r_2), ..., l(r_M)]. We can then write

    R_b = L(r) R_s L^T(r).                                                           (2.9.4)

This gives the relationship between R̂_s and R_b. If R_s is a diagonal matrix, then R_b can be written as

    R_b = Σ_{q=1}^{Q} α_q l_src(r_q) l_src^T(r_q),                                   (2.9.5)

where l_src(r_q) is the lead field contribution from the source at r_q. Because of the orthogonality of the lead field vectors, one can also write

    R_b = Σ_{q=1}^{Q} α_q [ l_src(r_q) l_src^T(r_q) ].                               (2.9.6)

Given the normalized inner product

    l^T(r_i) l(r_j) / ( ‖l(r_i)‖ ‖l(r_j)‖ ),                                         (2.9.7)

which is 0 when two lead fields are orthogonal and 1 when they are parallel, if (2.9.3) were used to obtain R̂_s, the result would be 0 if the source and measurement lead fields are orthogonal and α_i if they are parallel. This allows one to probe for the location of the source using the lead field from one location at a time. If, however, there are multiple correlated sources, R̂_s is no longer a diagonal matrix, which means that R_b cannot be written as (2.9.5). In this case, a composite lead field matrix containing the lead field vectors from all the correlated sources is needed to reconstruct R̂_s.

We proceed to examine the effect of multiple coherent sources on the measured source power.

Two Coherent Sources: For two sources, Sekihara and colleagues demonstrated that the beamformer estimates of the source intensities could be expressed as

    ŝ²(r_1, t) = α_1 ( 1 − µ² ),                                                     (2.9.8)

    ŝ²(r_2, t) = α_2 ( 1 − µ² ),                                                     (2.9.9)

where r_1 and r_2 are the locations of the two sources and µ is their correlation coefficient []. To examine the effect of high source correlation on the source intensity estimates, the limit of (2.9.8) is taken as µ approaches unity:

    lim_{µ→1} ŝ²(r_1, t) = 0.                                                        (2.9.10)

This indicates that for two highly correlated sources, the estimated source intensity decreases to zero, indicating signal cancellation.

B. Multiple Coherent Sources

Using a derivation similar to that of Sekihara and colleagues, it can be shown that for any number of coherent sources, the measured signal becomes suppressed. This derivation is shown in Appendix I.

C. Thesis Objectives in the Context of Beamformer Development

Dalal and colleagues described a region suppression method for beamformers that can overcome the coherent source problem [3]. This method is similar to what is used in radar systems to allow the detection of contacts by nulling the contribution of jamming sources [3].

In this thesis, the region suppression method was coupled with a time-restricted ER-SAM beamformer. The resulting TRRS-ER-SAM beamformer was evaluated for accuracy in localizing and measuring correlated sources using numerical simulations. The TRRS-ER-SAM beamformer was then evaluated against the ECD and ER-SAM analysis methods for applicability to the analysis of auditory evoked fields.

2.10 Statistical Thresholding of Data

The magnitude of the normalized beamformer output as presented thus far is unitless and does not provide any information on how large the signal is with respect to the noise. Here, we describe an omnibus noise thresholding test used to set an acceptance criterion for the data [33]. The goal of this test is to determine, at a given significance level, whether we are able to reject the null hypothesis:

    H_0: S ≤ N
    H_1: S > N

where S is the measured signal and N is the mean noise level. The omnibus noise test allows us to evaluate these hypotheses without knowledge of the spatial distribution or the probability distribution of the noise. Since we are dealing with the magnitude of the beamformer output, this is a one-tailed test.

First, a plus-minus average is taken over an even number of trials to obtain a noise-only dataset (for example, see Fig. 2.10.1b). This dataset is processed by an ER-SAM beamformer to obtain the magnitude of the normalized voxel values over all space and time. At each time point, the maximum voxel value is taken, yielding a vector with a length equal to the number of time points, as shown in Fig. 2.10.1c. Random samples are then taken from this vector to form a histogram of the omnibus noise level. Because the confidence level is defined as (1 − α) × 100%, where α is the significance level, the threshold level is taken as the value that is greater than (1 − α) × 100% of all the other samples. When a normalized signal measured from beamformer analysis of the original data has a magnitude greater than the threshold level, the null hypothesis is rejected in favor of the alternative hypothesis.
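The thresholding procedure above can be summarized in a few lines of Matlab. The sketch below forms a plus-minus average, takes the spatial maximum of an (already computed) noise-floor beamformer output at each time point, and reads off the percentile threshold. The array shapes, the use of prctile from the Statistics Toolbox, and the variable names are assumptions made for illustration.

    % Sketch: omnibus noise threshold from a plus-minus averaged noise floor.
    M = 151; T = 600; K = 100; Nvox = 5000;
    data = randn(M, T, K);                       % placeholder single-trial data
    signs = repmat([1 -1], 1, K/2);              % alternate trial polarity
    noise_avg = squeeze(sum(data .* reshape(signs, 1, 1, K), 3)) / K;  % plus-minus average

    % Assume noiseMap (Nvox x T) holds |normalized beamformer output| of the
    % noise-only dataset over all voxels and time points.
    noiseMap = abs(randn(Nvox, T));              % placeholder beamformer output
    omnibus = max(noiseMap, [], 1);              % max over voxels at each time point
    alpha = 0.05;
    threshold = prctile(omnibus, 100 * (1 - alpha));   % 95th percentile cutoff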

Fig. 2.10.1. (a) Averaged auditory evoked field data obtained from a subject (5 Hz, monaural, 8 yrs), bandpass filtered between and 3 Hz. (b) Noise floor data obtained by plus-minus averaging of the auditory evoked field. (c) Histogram of omnibus noise magnitude from the noise floor beamformer output. In this case, for a 95% confidence level, the beamformer magnitude cutoff is 6.5 x 10^-6.

2.11 Intersubject Brain Geometry Normalization

In comparing neuroanatomical structures across subjects, it would be difficult to use a coordinate system with a scale based on the physical measurement of each subject's brain, because each subject's brain has a unique geometry and size. The Montreal Neurological Institute (MNI) brain is an important tool for comparing neuroanatomical locations because it allows these locations to be described in a coordinate system that does not vary between subjects. The commonly used MNI brain template is an averaged T1-weighted MR scan from 152 subjects. The scale of the MNI coordinate system is the same as the scale of the MNI brain template. Fig. 2.11.1 depicts the axes of the MNI coordinate system. The origin is located approximately at the anterior commissure, with the y-axis running through the posterior commissure.

Fig. 2.11.1. MNI coordinate system overlaid on top of a subject brain that was normalized to the MNI template.

To convert a location in a subject's brain into MNI coordinates, the subject's brain must be warped using basis functions to match the shape of the MNI brain, as illustrated in Fig. 2.11.1. This warping provides a transform between the subject's MRI coordinates and the MNI coordinates. It should be noted that a similar template, known as the Talairach atlas, has also served as a coordinate-based reference for neuroanatomical structures [34]. Another transform must be used to convert from MNI coordinates to Talairach coordinates.

2.12 From Sound to Auditory Evoked Potentials/Fields

In order to evaluate the MEG data post-processing methods, they were tested using auditory evoked field data. In this section, we describe the auditory system to provide a better understanding of the neural processes that occur between auditory stimulation and cortical activation. A diagram of the auditory system is shown in Fig. 2.12.1.

Fig. 2.12.1. Diagram of the auditory pathways, courtesy of Guyton & Hall [9]. Note that this diagram only illustrates input from one ear. For a complete picture, imagine a parallel system with input from the other ear.

When sound pressure is presented to the ear, the waves impinge on the tympanic membrane, which deflects the three ossicles of the middle ear, namely the malleus, incus, and stapes. The stapes in turn articulates with the membrane of the fenestra ovalis. Vibrations of the fenestra ovalis create a traveling wave in the cochlear fluid, causing shearing forces on the stereocilia of the hair cells. Bending of the stereocilia in one direction causes depolarization of the hair cell via the opening of mechanically gated K+ channels [9]; bending in the other direction causes repolarization. Hair cell depolarization triggers the release of neurotransmitters onto the dendrites of the spiral ganglion cells. When sufficient neurotransmitter is released, the spiral ganglion cell fires an action potential that is transmitted to the brain via the auditory nerve fiber.

The auditory nerve fiber feeds into the cochlear nucleus, which modifies the signal. For instance, upon the onset of a sound, the firing rate increases abruptly before settling to a lower steady state level for the duration of the sound [35]. The cochlear nucleus then sends signals to the superior olivary complex, the lateral lemniscus, and the inferior colliculus in the midbrain. At the level of the superior olivary complex and the inferior colliculus, there are significant contralateral nerve fiber projections, which is reflective of the role these groups of neurons play in the integration of auditory sensory information from both ears [9]. As with other sensory systems, these contralateral nerve fiber projections result in a strong contralaterality in the auditory system. Thus, activity from the right ear has about 8% functional connectivity with the contralateral cortex and % with the ipsilateral cortex [6].

Signals from the inferior colliculus are then transmitted to the medial geniculate nucleus, and then to the primary auditory cortex.

Normally, all of the above processes operate in a system with two functional ears. This is particularly important postnatally, when environmental sounds are influential in the functional development of hearing. When normal hearing is disrupted, this contralaterality is lost. Suzuki and colleagues demonstrated a gradual loss of contralaterality in subjects with sudden onset unilateral hearing loss using speech sounds. Animal studies using immunostaining [36] and neuroreceptor binding [37] have shown evidence of auditory pathway reorganization upon unilateral cochlear ablation. This suggests that contralaterality is important in the processing of auditory input.

It is worth noting that tonotopic organization is maintained throughout the auditory pathways [9]. When sound arrives at the cochlea, it is broken down into its frequency components by the varying stiffness of the basilar membrane. This means that only certain nerve fibers will respond to certain sound frequencies, depending on the portion of the cochlea they innervate. This tonotopic organization is also present in the primary auditory cortex, where certain areas respond to certain frequencies.

Upon the onset of an auditory stimulus, the EEG and MEG traces often display three late latency peaks []. In EEG terminology, these peaks are designated as the P, N, and P potentials. In adults, the P is a positive deflection at around 5 ms, the N is a negative deflection at around ms, and the P is a positive deflection at around ms. In MEG terminology, which borrows from EEG terminology, they are designated as the Pm, Nm, and Pm fields. Previous studies have localized these fields to the primary auditory cortex [38, 39].

A. Thesis Objectives in the Context of Auditory Pathway Organization

Through the use of monaural and binaural stimuli, this study explored the effects of the contralaterality of the auditory pathways on the Nm late latency response. The study also examined whether the response to binaural stimulation is a linear summation of the responses to monaural stimuli in the two ears. It is possible that there are processes in the auditory pathways that modify the Nm response to a binaural stimulus in such a way that the response is not a linear summation of two monaural responses.

This thesis also explored whether the tonotopic organization of the auditory system influences the latency and amplitude of the Nm response. Latency differences with varying tone frequencies could reflect differences due to varying path lengths, or additional signal processing stages. Tonotopic amplitude differences could indicate variations in neuron number,

synchronicity, or radial orientation at different tonotopic source generators. However, it was hypothesized that the cortical tonotopic organization would not be detectable by MEG source localization due to coregistration errors, subject movement, and the voxel resolution used in this study (5 x 5 x 5 mm).

2.13 Auditory System Development

The subjects from whom the auditory evoked field data were obtained ranged between and 5 years of age. Between these ages, maturational changes occur within the auditory system; hence, it is appropriate to review what is known about the underlying processes that give rise to the maturational changes detected in EEG and MEG traces.

The typical mammalian auditory system is approximately symmetrical, at least up to the level of the core auditory cortex. In humans, primates, and perhaps some other species, there is some hemispheric specialization. For example, in humans, the left and right sides of the brain appear to have different roles in speech perception and production. Generally, the symmetry of auditory pathway development is controlled by a complex combination of factors. Early embryological development is under genetic control, with chemotaxic neuronal growth and synaptogenesis. In later gestation, neural pathways are strengthened by intrinsic neural activity (i.e. spontaneous firing) and by extrinsic, acoustically driven cochlear afferent activity patterns. In postnatal periods, environmental sound signals drive cortical specializations such as the detection and processing of auditory information [4].

Ponton and colleagues have shown maturational changes in long latency responses up to about years of age in response to click stimuli using electroencephalography measures [7]. However, the ECD method employed made the assumption of a symmetrical source location. This assumption would introduce measurement errors because the brain is, in reality, asymmetric; for right handed subjects, the left hemisphere tends to be larger than the right hemisphere [4]. The ECD method also fitted a non-moving dipole over the entire time course of the recorded auditory evoked response (AER), which assumes that all auditory evoked fields and their subcomponents occur at the same location. There has been some evidence from intracranial recordings that individual AER components are located at distinct locations [4], and it is possible that these AEPs have different orientations as well. It has also been reported that the N complex consists of several neural source generators [4, 5, 43, 44]. Hence, the fitted

dipole location may not correspond to the actual location of the N. This means that the reported N latency may correspond to the latency of whichever N subcomponent source generator the dipole happens to localize nearest to, but not necessarily the latency of the source generator at Heschl's gyrus, which produces the maximal N activation [4, 5]. Finally, the Ponton study performed ECD analysis on grand mean age-grouped data, which could introduce further errors that are correlated with age, since no two heads are identical and head size increases with age. Head geometry influences electric potential propagation and the positions of the EEG sensors.

A. Thesis Objectives in the Context of Auditory Development

This study aimed to improve on the methods used in the Ponton study, albeit with the use of fewer subjects. A SAM based beamformer was used to analyze the auditory evoked field data from individual subjects. Since SAM measures the dipole orientation at each voxel, with the implementation of time restriction the results should describe the Nm with respect to the dipole orientation and location of the neural population that contributes to its maximum amplitude. Rather than using click stimuli, which consist of a wide frequency band, this study used pure tone stimuli. This allowed us to observe whether the maturational changes occur at all frequencies or are frequency dependent. This study explored the effects of maturation on the amplitude and latency of the Nm late latency response between the early teenage years and early adulthood.

3. DATA COLLECTION

In this section, we describe the apparatus and protocol used to obtain the auditory evoked fields used to evaluate the magnetoencephalography data post-processing algorithms, and eventually in the study of right-sided monaural and binaural evoked fields. The protocols were approved by the Research Ethics Board at the Hospital for Sick Children, where the study was carried out.

3.1 Stimulus Generation

The stimuli were produced by a laptop using a Sound Blaster Audigy ZS sound card (see Table 3.1.I for specifications). A DirectSound dynamic link library (DLL) module was written in C++ to take advantage of the multichannel 24-bit sound provided by the sound card. 24-bit sound is important because, in order to produce highly attenuated sounds with sufficient quality, there must be sufficient discrete data points available to properly represent the waveform. Although a programmable analog attenuator would have been an alternative solution, the 24-bit sound card offered a more cost effective and portable solution.

TABLE 3.1.I
AUDIGY ZS SPECIFICATIONS

    Signal-to-noise ratio (AES17, A-weighted, kHz bandwidth; V rated output):   4 dB
    Total harmonic distortion + noise at kHz (V rated output; AES17):           .6%
    Frequency response (+/- 3 dB, 24-bit/96 kHz input; V rated output):         < Hz to 46 kHz

With the current EAR earphone setup, sounds at threshold are typically attenuated down to -90 dB of the maximum volume during the thresholding procedure. For a typical 16-bit sound, the number of discrete data points available at -90 dB is

    ceil[ 2^16 x 10^(-90/20) ] = 3.                                                  (3.1.1)

With 24-bit sound, the number of discrete data points available at -90 dB is

    ceil[ 2^24 x 10^(-90/20) ] = 531.                                                (3.1.2)
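The quantization argument in (3.1.1)-(3.1.2) can be checked with a couple of lines of Matlab. The sketch below computes how many discrete amplitude steps remain after a 90 dB attenuation for 16-bit and 24-bit audio; the -90 dB figure and the use of full-scale (2^bits) step counts follow the equations above.

    % Sketch: discrete amplitude steps remaining after -90 dB attenuation.
    atten_db = -90;                          % attenuation relative to full scale
    gain = 10^(atten_db / 20);               % linear amplitude factor
    steps16 = ceil(2^16 * gain);             % 16-bit audio -> ~3 steps
    steps24 = ceil(2^24 * gain);             % 24-bit audio -> ~531 steps
    fprintf('16-bit: %d steps, 24-bit: %d steps\n', steps16, steps24);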

Fig. 3.1.1 illustrates the signals and their power spectra for both 16-bit and 24-bit sounds at 5 Hz, attenuated to -90 dB of their maximum.

Fig. 3.1.1. Stimulus waveform analysis. (a) and (c) are waveforms for 16-bit and 24-bit 5 Hz sounds attenuated to -90 dBFS. (b) and (d) are the corresponding power spectra for the 16-bit and 24-bit sounds.

Since Matlab was being used to provide a user interface and to generate the sounds, we wrote a Java class to interface between Matlab and the DirectSound DLL module. Linear attenuation was verified by measuring both the sound card output voltage and the earphone sound level: a given dB drop in the programmed sound level produced an equal dB drop in both the voltage level and the measured sound level, as shown in Fig. 3.1.2.

Fig. 3.1.2. Plot of the sound level output by the EAR earphones (dB) in relation to the sound card digital input level (dBFS) for the three test frequencies.

Sounds presented are of 4 ms duration, with a Hanning window applied to the first and last sixteenth of their duration (Fig. 3.1.3). Although 4 ms is relatively long compared to middle latency responses, it is assumed that these responses are to the onset of the stimulus. Even if they are not onset responders, this is not a problem, as the goal of this study is to define a measure that can be used to compare differences in sound perception. Decreasing the duration of the stimulus creates problems because it would require greater sound presentation amplitudes to be detected, and would prevent the frequency of the tone from being recognized by the listener.

Fig. 3.1.3. Tone waveform. A Hanning window applied to the first and last sixteenth provides a smooth attack and decay.
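As an illustration of the stimulus construction described above (a pure tone with a Hanning window applied to its first and last sixteenth), here is a short Matlab sketch. The sampling rate, tone frequency, and duration are placeholder values, not the study's actual parameters.

    % Sketch: pure tone with Hanning-windowed onset/offset ramps
    % covering the first and last sixteenth of the stimulus.
    fs = 96000;                  % sampling rate (placeholder)
    f0 = 500;                    % tone frequency in Hz (placeholder)
    dur = 0.4;                   % duration in seconds (placeholder)
    t = (0:round(dur * fs) - 1) / fs;
    tone = sin(2 * pi * f0 * t);
    nRamp = round(numel(tone) / 16);                    % one sixteenth of the duration
    h = 0.5 * (1 - cos(2 * pi * (0:2*nRamp-1) / (2*nRamp - 1)));   % Hanning window
    tone(1:nRamp) = tone(1:nRamp) .* h(1:nRamp);                   % smooth attack
    tone(end-nRamp+1:end) = tone(end-nRamp+1:end) .* h(nRamp+1:end);  % smooth decay
    % sound(tone, fs);           % uncomment to listen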

3.2 Stim-Trigger Design

In order to synchronize the stimulation with the recording, a stim-trigger signal needed to be generated. Two methods of stim-trigger production were considered. The first was simply to use a third auditory output channel as the trigger. This did not work, because the frequency range of the sound card was too high to permit accurate sampling by the recording device. The second method was to use a MAX232 RS-232-to-TTL chip to allow the computer to send a stim-trigger signal through the serial port via the CTS pin. Unfortunately, the computer was unable to precisely and consistently fire a serial port based stim-trigger exactly when the sound was presented, giving a variance of about five milliseconds. This could be somewhat remedied by setting the DLL function to CriticalPriority; however, this would adversely affect other functions running at the same time, because a CriticalPriority task monopolizes the CPU, and there would still be a delay of a few milliseconds between the stim-trigger and the sound presentation (Fig. 3.2.1a). This required the development of an additional circuit.

The final stim-trigger circuit (Appendix II) consists of a digital SR latch connected to a third audio channel via a Schmitt trigger. This third audio channel plays a 5 kHz tone that has the same duration and timing as the tones played in the first two channels used for sound presentation, but it does not have a Hanning window applied and is always played at 0 dBFS. The computer resets the latch (logical false) through serial port communication. When audio data is played, the latch becomes set (logical true) until it is reset again (Fig. 3.2.2). This setup results in a stim-trigger that consistently fires within a nanosecond of the audio stimulus onset (Fig. 3.2.1b). Fig. 3.2.3 illustrates the general software-hardware architecture block diagram of the final setup.

Fig. 3.2.1. (a) Audio data and serial port controlled stim-line; the recording begins on the falling edge of the stim-line. (b) Audio data and circuit controlled stim-line; the oscilloscope recording began on the rising edge of the stim-line.

Fig. 3.2.2. Pulse sequence of the stim-trigger circuit (audio in, stim out, RST in). The stim line is raised when audio data is detected. When the computer sends the reset (RST) signal through the RTS serial port line, the stim line is reset.

Fig. 3.2.3. Software and hardware module communication diagram. Software blocks: Matlab, Java (JNI), C++ DLL, DirectSound API, Windows COM API (set/reset). Hardware blocks: sound card, MAX232, Schmitt trigger, gated SR latch, response button, EAR earphones, trigger out.

3.3 Audiometer Design

For each subject, it was important to accurately determine the auditory threshold so that the stimuli could be presented at a level relative to that threshold. A procedure was written in Matlab to perform a bracketing procedure to determine each subject's auditory threshold. Sound was presented through the sound card, and the subject responded by pushing a button connected to the computer through the serial port.

In the bracketing procedure (Fig. 3.3.1), sound is presented in dB decrements from the listener's super-threshold region until the listener is unable to hear the sound. The sound is then presented at this level again to confirm that the listener is indeed no longer able to hear it. Sounds are then presented in 5 dB increments until the listener is able to hear the sound, and the sound is presented at this level again to confirm that the listener can hear it. The half-way point between the level where the listener stopped hearing the sounds and the level where the listener began hearing them again is taken as the threshold.

Fig. 3.3.1. Log produced by the audiometer for one of the subjects. Each circle represents a tone presented by the audiometer; the annotations mark detected confirmation tones, missed detections, a missed confirmation tone, and the point where detection resumed. The dashed red line indicates the hearing threshold chosen by the audiometer.

In order to avoid false responses, the algorithm uses random intervals between sound presentations. Also, if the listener pushes the button when no sound is played, an additional random delay is inserted.
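A minimal sketch of the bracketing logic described above is given below in Matlab, made runnable with a simulated listener. The step sizes, starting level, and the deterministic response function are assumptions, and the confirmation presentations are collapsed because the simulated listener never lapses; this is not the study's audiometer code.

    % Sketch of a descending/ascending bracketing procedure with a simulated
    % listener whose true threshold is trueThresh (dBFS).
    trueThresh = -78;                       % simulated listener threshold (assumption)
    hears = @(level) level >= trueThresh;   % simulated, lapse-free response

    level = -50;                            % starting super-threshold level (assumption)
    while hears(level)                      % descend until the tone is missed
        level = level - 10;                 % coarse decrement (assumption)
    end
    lostLevel = level;                      % level at which detection stopped
    while ~hears(level)                     % ascend in 5 dB steps until heard again
        level = level + 5;
    end
    threshold = (lostLevel + level) / 2;    % half-way point taken as threshold
    fprintf('Estimated threshold: %.1f dBFS\n', threshold);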

3.4 Earphone Setup

Etymotic Research EAR earphones were used for sound presentation (Fig. 3.4.1). In order to prevent electrical signals and wire movement of the earphones from producing an overwhelming artifact in the recordings, it was necessary to use 4.5 cm vascular access tubing to increase the distance between the piezo speakers and the dewar. Assuming the speed of sound in the tubes is 340 m/s, the tube length adds approximately a 3 ms sound propagation delay between the piezo speakers and the ear buds.

Fig. 3.4.1. Etymotic Research EAR earphones. The piezo speakers are inside the red and blue boxes. The default tubes shown in this diagram were replaced with 4.5 cm vascular access tubing.

3.5 Data Collection Parameters

Data was collected at a 75 Hz analog-to-digital conversion rate, with a ms pre-stimulus interval and a 6 ms post-stimulus interval. An online Hz lowpass filter was used during the recordings. To minimize environmental noise, adaptive balancing was performed on the morning of each recording. Auditory thresholding was performed within the MEG environment for each recorded frequency, using right-sided monaural and binaural stimuli, to ensure that the sound was presented at 4 dB SL (sensory level) during data recordings. The frequencies used were 5, , and 6 Hz. Stimuli were presented at a .±. s interstimulus interval.

Typically, - trials per recording were done, depending on the time available and the noise level. Each recording took approximately .5 hours, including equipment setup time. The auditory stimulus sequence was started before commencing each recording to avoid startup effects. The order of frequency presentation was randomized to prevent any potential effects of subject fatigue from influencing the group data. Head localizer coils were attached to the nasion and the left and right pre-auricular points on the subject's head. Recordings with head movement greater than 0.3 cm were rejected and repeated.

3.6 Physical Setup of the MEG Environment

The MEG used in this study was a CTF MEG system built by VSM MedTech Ltd. and located at the Hospital for Sick Children. It consists of a dewar encasing the SQUID sensors, which are submerged in liquid helium to maintain superconductivity. The subject's head is inserted into a cavity in the dewar during recordings. Head localizer coils on the subject's head help the experimenter check that the subject remains motionless, and are also used for later coregistration of the sensor locations with the subject's magnetic resonance image (MRI). The layout of the MEG room is shown in Fig. 3.6.1. A video with subtitles and no sound was projected onto a screen to help keep the subject alert. The video likely does not influence the averaged results because it is not synchronized with the auditory stimulus.

Head localizer coils were attached at the subject's MRI fiduciary points for coregistration with the MRI data. In cases where the MEG and MRI were performed on separate days, a photo was taken of the subject with the head localizer coil positions marked. Some of the subjects were fitted with EEG electrodes, with one at the Cz position, a reference electrode on the nose, and a ground electrode on the left ear lobe. To minimize artifacts due to the wires, it was found best to have the wire from the Cz electrode run down the midline of the head towards the back. The ground electrode would be attached to the forehead, with the wire running down the forehead and across the cheek. The reference electrode would be attached to the tip of the nose, with the wire running across the cheek, ensuring that it did not cross the ground electrode wire. The electrode wires were attached to the left side of the chair by tape for stability, again ensuring that they did not cross.

Fig. 3.6.1. Layout of the MEG room for AEF recordings, showing the MEG and DSQ rack, the recording workstations, the stimulation laptop, and the projector.

3.7 Magnetic Resonance Imaging

Anatomical 3D FSPGR magnetic resonance imaging of the subjects was performed for source localization during post-processing of the MEG data. Each subject was fitted with MRI compatible fiducial lipid markers on the nasion and pre-auricular points. When the MRI was performed on the same day as the MEG, the MRI fiducial markers were placed in the exact same locations as the head localizer coils in the MEG. When the MRI was performed on a different day, the MRI fiducial markers were placed on the nasion and pre-auricular areas, with an expected offset between the MRI markers and the MEG head localizer coils.

3.8 Table of Subjects

Eight subjects were used in the study. They ranged between and 5 years of age. Table 3.8.I lists the subjects along with relevant details. Their ages are rounded to the nearest half-year.

TABLE 3.8.I
TABLE OF SUBJECTS
For each of the eight subjects (four male, four female), the table lists: age (years), gender (M/F), MEG-MRI delay (days), auditory threshold levels (dBFS) for right-sided monaural and binaural stimuli at each of the three test frequencies (5, , and 6 Hz), and EEG Cz availability. EEG Cz recordings were available for three of the subjects, and one subject had no MR scan.

4. DATA ANALYSIS I: STANDARD POST-PROCESSING TOOLS

In this section, the methods used to analyze the MEG data using the ECD and ER-SAM algorithms are described. Given that these methods use a multisphere head model generated from an anatomical MR image, and that they display their results overlaid on top of an MR image, the coregistration process is outlined first.

4.1 Coregistration of MEG and MRI Data

For subjects who underwent MR and MEG imaging on the same day, coregistration was straightforward, since the MEG head localizer coils were placed in the same locations as the MR fiducials. When the MEG and MRI were taken on separate days, photographs were taken of the subject after the MEG recording session to capture the positions of the MEG head localizer coils. A 3D MRI reconstruction was then overlaid on the photo to deduce the appropriate fiduciary points on the MRI (Fig. 4.1.1).

Fig. 4.1.1. 3D MRI reconstruction overlaid on a photo of the subject, with the nasion marked.

4.2 Equivalent Current Dipole Analysis

For dipole analysis, the data were bandpass filtered offline between and 3 Hz in order to remove high frequency noise, as has been done in auditory evoked potential (AEP) studies [45, 46]. Eye blink artifacts were removed by dropping trials with MLT3 channel values that exceeded pT, due to the tendency of this channel to pick up ocular artifacts. Identification of the Nm field on the trial-averaged channel plots was carried out using a study by Herdman and colleagues [7] as a reference. Subjects with orthodontic metals could not be used, because their data was too noisy for ECD analysis to locate a dipole, or even the latency of the Nm.

ECD analysis was performed as a spatio-temporal fit of up to four dipoles to the observed field pattern over a 3 ms window centered on the Nm field, using a multi-sphere head model. The purpose of the 3 ms time window was to minimize the influence of high frequency noise on the solution. The dipole configuration that yielded the least fit error was taken as the final result.

4.3 Synthetic Aperture Magnetoencephalography Analysis

The event-related synthetic aperture magnetoencephalography beamformer algorithm described in Section 2.8 was implemented in NUTMEG [47], a MEG beamformer toolbox for Matlab. The modifications made to NUTMEG in order to accommodate this algorithm are described in Appendix III. All recorded trials were used, since the event-related method provides robustness against uncorrelated noise []. The data were bandpass filtered between and 3 Hz using a fourth order Butterworth filter. A multisphere head model was used for lead field computation. The brain volume, chosen using MRI coregistration, was the volume of interest wherein voxel values were computed. The voxel resolution used was 5 x 5 x 5 mm. This resolution was chosen because it allows the beamformer solution to be computed within a reasonable amount of time and with a reasonable amount of memory consumption. It is also only slightly greater than the 3 mm head movement allowed during recordings.

4.4 Nm Identification Using Beamformers

Measures were taken to ensure consistent identification of the Nm field in the beamformer virtual sensor data over the maturational age range used in this thesis, and to validate its correlation with the N AEP. We first compared the N from the available EEG traces to the Nm field in the MEG sensor traces, and to the virtual sensor beamformer data. This was done in order to obtain a better understanding of the relationship of the virtual sensor Nm field across all three modalities, to ensure that the same field was being measured across all subjects. This also ensured that the Nm was indeed a neuromagnetic correlate of the N, so that references to EEG AEP studies would be valid. Virtual sensors were then compared across all subjects to provide a surface plot illustrating the evolution of the AEF with increasing subject age. This plot was used to verify that the Nm, or at least the same AEF component, was being consistently measured across all subjects.

5. DATA ANALYSIS II: DEVELOPMENT OF A TIME-RESTRICTED REGION-SUPPRESSED SYNTHETIC APERTURE MAGNETOENCEPHALOGRAPHY BEAMFORMER

In this section, the time-restricted region-suppressed ER-SAM (TRRS-ER-SAM) algorithm is derived. First, the need for time restriction is discussed and its implementation is explained. The incorporation of the region-suppression method into the time-restricted ER-SAM beamformer algorithm is then described, along with the development of a supplementary coherent source search algorithm used to identify the regions that should be suppressed. Subsequently, the methods used to evaluate the TRRS-ER-SAM beamformer against the ECD and ER-SAM algorithms are outlined. Finally, the statistical analysis methods used to analyze the magnetoencephalography data for the auditory evoked field study are listed.

5.1 Time Restriction

One assumption SAM makes is that dipole sources that occur at a given location, or that are spatially located close to one another, have the same orientation. This is because SAM computes the dominant dipole orientation over the entire time series, which may not be the actual orientation of the specific dipole being measured. In the context of measuring the Nm auditory evoked field, this assumption holds that the Nm has the same orientation as the Pm, Pm and Nm. There has been no previous study confirming this assumption, and given that the Nm polarity with respect to the Pm and Pm fields was somewhat variable, as discussed in Section 4.4, restriction of the time interval over which the optimal dipole orientation was computed was implemented in order to ensure proper measurement of the Nm field.

To limit the optimal dipole orientation computation to only the Nm, ER-SAM analysis was performed with the beamformer weights calculated only over a window centered on the peak of the Nm. This yielded a larger pseudo-Z value for the Nm, but the drawback was that the window size adversely affected the performance of the beamformer by reducing the rank of the sensor covariance matrix, causing the pseudo-Z results to vary with window size (Fig. 5.1.1).

This meant that there was no way to know whether the increased value was the result of a better dipole orientation or of poorer beamformer performance.

Fig. 5.1.1. (a) The Nm pseudo-Z value for one particular recording varies as the window size decreases below 5 ms; the window is centered on the Nm peak. For reference, the Nm pseudo-Z value for weight calculation over the entire post-stimulus window was also computed. (b) These measurements were performed using data from a year old artifact-free subject with a clear Nm response. It would be expected that beamformer performance would worsen with poorer quality datasets or datasets with a smaller Nm response.

An alternative method for restricting the time window used for the evoked field of interest was to use two windows. A restricted time window could be used to form the R̂_s matrix when computing the optimal dipole orientation, while the entire post-stimulus time window could be used to form the covariance matrix when calculating the weights. This is accomplished by using

    W(r) = R_b⁻¹ L(r) [ L^T(r) R_b⁻¹ L(r) ]⁻¹                                        (5.1.1)

to calculate the weights, where R_b is the sensor covariance matrix calculated over the entire post-stimulus time window.

    R̂_s = W^T(r) R_b,wind W(r)                                                       (5.1.2)

is then used to calculate the R̂_s matrix used for determining the optimal dipole orientation, where R_b,wind is the sensor covariance matrix calculated over the restricted time window. This strategy avoids the problem of a short time window reducing the rank of the sensor covariance matrix. The restricted time window can be determined either from the MEG sensor trace or from an initial standard ER-SAM beamformer run.
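The two-window strategy of (5.1.1)-(5.1.2) separates the data used for weight computation from the data used for orientation estimation. The Matlab sketch below mirrors that split; the window indices, array shapes, and variable names are illustrative assumptions, not the thesis implementation.

    % Sketch: time-restricted orientation estimation with full-window weights.
    M = 151; T = 600;
    b_avg = randn(M, T);                        % placeholder averaged sensor data
    R_b = (b_avg * b_avg') / T;                 % covariance over the full post-stimulus window
    winIdx = 150:200;                           % assumed samples around the Nm peak
    b_win = b_avg(:, winIdx);
    R_b_wind = (b_win * b_win') / numel(winIdx);  % covariance over the restricted window

    L = randn(M, 2);                            % placeholder lead fields [l_theta, l_phi]
    W = (R_b \ L) / (L' * (R_b \ L));           % weights from the full window, eq. (5.1.1)
    Rs_hat = W' * R_b_wind * W;                 % 2 x 2 matrix for orientation, eq. (5.1.2)
    [V, D] = eig(Rs_hat);
    [~, imax] = max(diag(D));
    V_src = V(:, imax);                         % time-restricted dipole orientation estimate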

5.2 Region Suppression

Beamformers are a type of adaptive spatial filter that examine the brain voxel by voxel and measure the contribution of each voxel to the measured field, thereby producing a tomographic image. This method is fairly immune to uncorrelated noise [48]. In cases where all sources are at most moderately correlated, beamformers avoid the need for a priori knowledge of source numbers and locations. However, when strongly correlated sources exist, their beamformer reconstructions show reduced signal intensities and time course distortions [].

For processing steady-state auditory evoked response data, Herdman and colleagues used half-head sensor data to minimize interaction between the two sources in opposite hemispheres [7]. This method may not be ideal. Using equivalent current dipole analysis in pediatric subjects, Pang and colleagues found that there was sufficient interaction between the fields produced by the two sources in opposite auditory cortices to affect source localization [49], meaning that half-head sensor coverage would result in a lateral displacement of the single modeled ipsilateral source. Hence, with coherent sources, half-head sensor data may still contain some contribution from the contralateral source, leading to less accurate measurement of amplitudes and latencies.

To solve the problem of correlated neural activities, a region suppression method was proposed by Dalal et al. [3]. This method requires the specification of the approximate region where each source is located. For each source that is measured, the contribution from all other sources is suppressed, thereby preventing them from affecting the measured source.

A. Region Suppressed Synthetic Aperture Magnetoencephalography

1) Basic Region Suppression: To reconstruct the source time series, the region suppression algorithm described by Dalal and colleagues [3] was coupled with an event-related synthetic aperture (ER-SAM) algorithm []. This algorithm suppresses the interfering coherent sources so that they do not influence the measurement of each other. It was further enhanced to automatically determine which region to suppress, so that the entire volume of interest can be computed in a single pass.

Suppression of the signal contribution of another point in space to the beamformer output was achieved by adding additional null constraints to the LCMV formulation (2.8.3)-(2.8.4), where r_Σq is the location of the qth suppressed voxel [3]. The LCMV formulation is rewritten as

    min_{w_θ} w_θ^T(r) R_b w_θ(r)   subject to
        l_θ^T(r) w_θ = 1,  l_φ^T(r) w_θ = 0,
        l_θ^T(r_Σq) w_θ = 0,  l_φ^T(r_Σq) w_θ = 0,                                   (5.2.1)

    min_{w_φ} w_φ^T(r) R_b w_φ(r)   subject to
        l_φ^T(r) w_φ = 1,  l_θ^T(r) w_φ = 0,
        l_θ^T(r_Σq) w_φ = 0,  l_φ^T(r_Σq) w_φ = 0.                                   (5.2.2)

These additional null constraints are implemented by expanding the lead field matrix L(r) to contain the singular value decomposed lead fields from the suppressed voxels. In effect, these null constraints explicitly force the beamformer weights to block the contribution of sources within r_Σ.

For each region Σ_i around a coherent source, we let

    C_s(Σ_i) = [ l_θ(r_Σi,1), l_φ(r_Σi,1), ..., l_θ(r_Σi,Q), l_φ(r_Σi,Q) ],

where Q is the number of voxels within the ith suppressed region, and l_θ(r_Σi,q) and l_φ(r_Σi,q) are the orthogonal lead field components from the location of the qth voxel within the suppressed region. In order to reduce the number of lead fields required for region suppression, singular value decomposition can be applied to C_s(Σ_i) so that only the dominant components are suppressed. The reduction of the lead fields required to form the null constraint gives the LCMV beamformer weights more degrees of freedom, which can be used to suppress uncorrelated sources.

In order to reduce the noise contribution to the measured signal, the ER-SAM algorithm measures the activity projected onto the estimated source dipole orientation []. The estimated source dipole orientation V_src(r) is taken as the eigenvector corresponding to the largest eigenvalue of the 2 x 2 matrix formed by the first two rows and columns of R̂_s, which is computed using

    R̂_s = [ L^T(r) R_b⁻¹ L(r) ]⁻¹                                                    (5.2.3)

without time restriction, or using (5.1.1)-(5.1.2) with time restriction. In these equations, the lead field matrix becomes L(r) = [ l_θ(r), l_φ(r), C_s(Σ_1), ..., C_s(Σ_I) ]. The orthogonal vector is computed using

    V_orth(r) = [ −[V_src(r)]_φ , [V_src(r)]_θ ]^T.                                  (5.2.4)

The lead fields for the estimated source orientation and its orthogonal component are computed using

    l_src(r) = [V_src]_θ l_θ(r) + [V_src]_φ l_φ(r)   and                             (5.2.5)

    l_orth(r) = [V_orth]_θ l_θ(r) + [V_orth]_φ l_φ(r),                               (5.2.6)

where [V_src]_θ and [V_src]_φ are the θ and φ components of the V_src(r) vector, respectively, and [V_orth]_θ and [V_orth]_φ are the θ and φ components of the V_orth(r) vector, respectively. These lead fields are then used to form the composite lead field matrix

    L_SAM(r) = [ l_src(r), l_orth(r), C_s(Σ_1), ..., C_s(Σ_I) ].                     (5.2.7)

The weight matrix is calculated using

    W(r) = R_b⁻¹ L_SAM(r) [ L_SAM^T(r) R_b⁻¹ L_SAM(r) ]⁻¹.                           (5.2.8)

The weight vector corresponding to l_src(r) is the first column of the weight matrix. This weight vector is used to compute the virtual sensor time series at voxel r using

    ŝ(r, t) = W^T(r) b(t).                                                           (5.2.9)

2) Selective Region Suppression: In order to allow the region suppressed ER-SAM algorithm to compute the entire volume of interest in one pass, a suppressed region selection routine was implemented. This routine prevents the algorithm from measuring a voxel and suppressing it at the same time, which would lead to an ill-posed problem. In the routine, for I suppressed regions, only I−1 regions are suppressed at any one time. The region that is not suppressed is the one closest to the voxel r being measured. Mathematically, this means that we remove C_s(Σ_i) from L(r) and L_SAM(r) when performing the source orientation and weight computations, where Σ_i is the region to which the measured voxel r is closest. To simplify calculations, the distance between r and Σ_i is computed as the distance between r and the centroid of the box that forms a boundary around Σ_i. In our implementation, since all suppressed regions are presented to the region suppressed ER-SAM beamformer in a single file, the suppressed regions are distinguished from one another using a flood fill procedure.
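To make the composite lead field construction concrete, the following Matlab sketch builds C_s(Σ_i) for one suppressed region, reduces it with a singular value decomposition, and computes weights whose null constraints cover the retained suppression components. The dimensions, the number of retained singular vectors, and the variable names are assumptions for illustration; this is not the NUTMEG implementation used in the thesis.

    % Sketch: region-suppressed beamformer weights for one probed voxel.
    M = 151;
    B = randn(M, 600);
    R_b = (B * B') / 600;                         % placeholder sensor covariance

    L_vox = randn(M, 2);                          % [l_theta, l_phi] of the probed voxel
    Qs = 20;                                      % voxels in the suppressed region (assumed)
    C_s = randn(M, 2 * Qs);                       % lead fields of the suppressed region

    % Reduce the suppression constraints to the dominant SVD components.
    [U, S, ~] = svd(C_s, 'econ');
    nKeep = 3;                                    % retained components (assumption)
    C_red = U(:, 1:nKeep);

    L = [L_vox, C_red];                           % expanded lead field matrix
    W = (R_b \ L) / (L' * (R_b \ L));             % weights with null constraints, cf. (5.2.8)
    s_hat = W(:, 1)' * B;                         % virtual sensor for the probed voxel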

B. Lead Field Normalization and Amplitude Estimation

Lead field normalization is necessary to prevent increased sensitivity of the beamformer at voxels located furthest away from the sensors. Without lead field normalization, the lead fields of voxels far from the sensors are weak, so sensor noise that localizes to these voxels is amplified, resulting in an over-sensitivity of the beamformer toward the center of the head. To compensate for this, lead field normalization is performed (5.2.10) before each computation involving a lead field matrix [9].

    L_N = L(r) / ‖L(r)‖                                                              (5.2.10)

The computed source time-series amplitudes are thus normalized amplitudes. The non-normalized amplitude estimate is calculated as

    ŝ(r, t) = ŝ_N(r, t) / ‖l_src(r)‖.                                                (5.2.11)

5.3 Localizing Coherent Sources

Because suppression regions must be defined, the time-restricted region-suppressed ER-SAM (TRRS-ER-SAM) beamformer method presented thus far introduces the problem that a priori information about approximate source locations must be obtained. To limit the need for a priori information, a solution was proposed by Brookes and colleagues [5]; however, it is computationally intensive, as it implements a brute force iterative search over all possible locations of two correlated sources. We propose an alternative, more efficient method for identifying the locations of correlated sources, employing a gradient search algorithm to maximize the estimated power gain and the correlation coefficient between sampled voxels. The efficiency of the method allows us to extend its application to situations where multiple correlated sources exist.

A. Localizing Coherent Sources

1) First Stage: The first stage of the algorithm uses a gradient search to maximize the power gained by reconstructing multiple voxels together using a composite lead field matrix instead of reconstructing them individually. This takes advantage of the fact that correlated sources have reduced source intensities when measured individually. We first define search points r_1, r_2, ..., r_N as N locations within the region of interest (i.e. the brain volume), where N is the number of coherent sources. We denote the power measured at a location r_n using the conventional LCMV beamforming approach as

    P(r_n) = tr{ R̂_s(r_n) },                                                         (5.3.1)

where tr{} denotes the trace of the matrix. Only the lead field of r_n is used to compute R̂_s. We define P_(r_1,r_2,...,r_N)(r_n) as the power measured at location r_n when the composite lead field matrix L(r) = [ l_θ(r_1), l_φ(r_1), l_θ(r_2), l_φ(r_2), ..., l_θ(r_N), l_φ(r_N) ] is used to calculate R̂_s(r_1, r_2, ..., r_N), such that P_(r_1,r_2,...,r_N)(r_n) is calculated as

    P_(r_1,r_2,...,r_N)(r_n) = Σ_{g=0}^{1} [ R̂_s(r_1, r_2, ..., r_N) ]_{2n−g, 2n−g},  (5.3.2)

where 2n−g, 2n−g refers to the row and column indices of the referenced matrix element.

The power gain P_G of measuring r_n while suppressing the other voxels, versus measuring r_n without suppression, is given as

    P_G = P_(r_1,r_2,...,r_N)(r_n) / P(r_n).                                         (5.3.3)

When attempting to maximize this power gain using a gradient search, it is noticed that as r_i approaches r_j for any i, j ∈ {1, 2, ..., N}, the power gain becomes infinite, because the solution to P becomes ill defined due to non-orthogonal lead fields. In order to prevent the gradient search from making r_i approach r_j, we multiply P_G by a weighting factor:

    P_W = P_G · min[ svd( R̂_s ) ],                                                   (5.3.4)

where svd(R̂_s) is the set of singular values of R̂_s. As r_i approaches r_j, R̂_s becomes low rank, resulting in a lower minimum singular value. This reduces the value of P_W and prevents the gradient search from allowing r_i to approach r_j (i.e. it prevents the search points from getting too close together).

The gradient search algorithm, generically depicted in Fig. 5.3.1, aims to maximize P_W. The approach we take is to assign N search points to N randomly selected locations r_1, r_2, ..., r_N. The first search point is moved from one voxel location to another, in the direction of increasing P_W, until a maximum is found. The same is then done with the second search point, and so on. Changing the location of one search point changes the P_W profile with respect to the other search points. This sequence is then repeated until none of the search points need to be moved, as they are all at the maximum P_W.
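The coordinate-wise hill-climbing search described above can be sketched as a simple loop over search points and neighboring voxels. In the Matlab skeleton below, computePW is a hypothetical function handle standing in for the P_W evaluation of (5.3.3)-(5.3.4), and neighborsOf is a hypothetical helper returning a row vector of adjacent voxel indices; both, along with the convergence logic, are simplified assumptions rather than the thesis algorithm.

    % Skeleton of the first-stage search (save as searchCoherentSources.m).
    % points is a vector of voxel indices for the N search points;
    % computePW(points) returns P_W for the current set of search points;
    % neighborsOf(v) returns a row vector of voxel indices adjacent to voxel v.
    function points = searchCoherentSources(points, computePW, neighborsOf)
        moved = true;
        while moved                                   % repeat sweeps until no point moves
            moved = false;
            for n = 1:numel(points)                   % one search point at a time
                improved = true;
                while improved
                    improved = false;
                    bestPW = computePW(points);
                    for cand = neighborsOf(points(n)) % try each neighboring voxel
                        trial = points; trial(n) = cand;
                        if computePW(trial) > bestPW
                            points = trial; bestPW = computePW(trial);
                            improved = true; moved = true;
                        end
                    end
                end
            end
        end
    end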

Fig. 5.3.1. Graphical illustration of the gradient search algorithm. All search points are fixed except for one. The non-fixed search point is moved from voxel to neighboring voxel, attempting to maximize the search parameter.

2) Second Stage: Because P_W is a weighted measure, under low SNR conditions the locations of r_1, r_2, ..., r_N obtained by maximizing P_W tend to be slightly shifted from the actual locations of the coherent sources. Hence, in the second stage we take the top K combinations of r_1, r_2, ..., r_N that yielded the largest P_W in the first stage and, for each combination, use a gradient search again to maximize the signal correlation coefficient between the N locations. This non-weighted measure eliminates the shift in the localization of the coherent sources. To avoid instances where the search point lead fields are too parallel to each other, which would result in a rank-deficient covariance matrix, the aforementioned top K combinations only include results from the first stage where all search point lead fields are at least ∠l_min radians out of plane. ∠l_min is user defined; we have found that π/6 radians works well in most cases. We denote ∠l(r, r_sq) as a measure of orthogonality between the lead fields of two voxels. For this, we use the smaller of the two angles between l_θ(r) or l_φ(r) and the plane formed by l_θ(r_sq) and l_φ(r_sq). ∠l(r, r_sq) is computed from the inner products between these lead-field vectors (5.3.5); its derivation is shown in Appendix IV.

For each candidate combination, R̂_s(r_1, r_2, ..., r_N) is first computed using the composite lead field L(r) = [ l_θ(r_1), l_φ(r_1), l_θ(r_2), l_φ(r_2), ..., l_θ(r_N), l_φ(r_N) ] with

    R̂_s = [ L^T(r) R_b^(−1) L(r) ]^(−1),                                       (5.3.6)

where R_b is the sensor covariance matrix. R̂_s is then used to estimate the source orientation at each r_n. The estimated source orientation V_src(r_n) is equal to the eigenvector corresponding to the largest eigenvalue of the 2x2 matrix formed by the elements at rows 2n−1 and 2n, and columns 2n−1 and 2n, of the R̂_s matrix. The orthogonal orientation V_orth(r_n) of each r_n is calculated as well using (5.3.7), where V_src(r_n) is the estimated source orientation vector at r_n. Equation (5.3.7) ensures that the resulting vectors remain tangential to the surface of the head:

    V_orth(r_n) = [ 0  −1 ; 1  0 ] V_src(r_n).                                  (5.3.7)

The lead field vectors are then re-created using the lead fields corresponding to the source orientations and their orthogonal orientations. This is done using

    l_src(r_n)  = [V_src]_θ  l_θ(r_n) + [V_src]_φ  l_φ(r_n)   and               (5.3.8)
    l_orth(r_n) = [V_orth]_θ l_θ(r_n) + [V_orth]_φ l_φ(r_n),                    (5.3.9)

where [V_src]_θ and [V_src]_φ are the θ and φ components of the V_src(r_n) vector, respectively, and [V_orth]_θ and [V_orth]_φ are the θ and φ components of the V_orth(r_n) vector, respectively. This reconstituted lead field matrix is used to recalculate the R̂_s matrix. The R̂_s matrix is then used to estimate the correlation coefficient µ̂(r_i, r_j) between the two voxels r_i and r_j using

    µ̂(r_i, r_j) = [R̂_s]_(i,j) / ( [R̂_s]_(i,i) [R̂_s]_(j,j) )^(1/2).             (5.3.10)

We define the sum of µ̂(r_i, r_j) over all combinations of r_i and r_j as

    µ̂_Σ(r_1, r_2, ..., r_N) = Σ_{i=1}^{N} Σ_{j=i+1}^{N} µ̂(r_i, r_j).

A gradient search algorithm similar to the one described in the first stage is then employed. The difference is that instead of maximizing P_W it aims to maximize µ̂_Σ, and instead of using random initial values for r_1, r_2, ..., r_N, it uses the values from the first stage. This gradient search is repeated for each combination carried over from the first stage. It is still possible for r_i and r_j to end up in the same location, as the correlation coefficient between a signal and itself is 1; hence, we discard any results where any of the search points end up in the same position. The overall search is repeated a user-specified number of times using random initial voxel locations.
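A minimal sketch of the second-stage objective is given below, assuming that a covariance matrix has been computed from the reconstituted source-orientation lead fields so that its (i, j) entry relates the estimated signals at r_i and r_j. The names are placeholders, not the implementation used in this work.

    import numpy as np

    def correlation_sum(Rs_src):
        """Second-stage objective: sum of pairwise correlation coefficients.

        Rs_src : (N, N) covariance whose (i, j) entry relates the estimated
                 source signals at search points r_i and r_j (assumed to be
                 derived from the reconstituted l_src lead fields).
        """
        d = np.sqrt(np.diag(Rs_src))
        mu = Rs_src / np.outer(d, d)          # pairwise correlations, eq. (5.3.10)
        iu = np.triu_indices_from(mu, k=1)    # each pair (i < j) counted once
        return mu[iu].sum()                   # the quantity maximized (mu_Sigma)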

The results of the gradient searches are finally listed, ranked by P_W.

B. Computational Enhancements

1) Multiplying symmetric matrices: When the product of a matrix multiplication is known to be symmetric, as in the computation of R̂_s in (5.3.6), it is not necessary to compute the entire matrix. Only the elements on and above the diagonal need to be computed explicitly; the remaining elements are obtained by mirroring across the diagonal,

    [AB]_(n,m) = [AB]_(m,n) = Σ_k [A]_(n,k) [B]_(k,m),   m = n, ..., M,

where M is the number of rows (or columns) of the resulting square matrix. A short illustrative sketch is given after the run-time analysis below.

2) Calculation of P: Over the multiple searches required by the algorithm, it is likely that the number of times P will be calculated exceeds the number of voxels. Hence, calculating P for every voxel in advance saves some time.

C. Run-Time Analysis

The presence of noise means that not every gradient search run will successfully detect the coherent sources. Due to the probabilistic nature of this type of search algorithm, we measure the processing time by the number of search trials required to achieve a given probability of detection. The processing time of the proposed coherent source search algorithm is

    O[ log(1 − P_D) / log(1 − X^N · N!) ],

where P_D is the probability of detection, X is the fractional volume around a coherent source within which a search point will be influenced by the point spread function of the source, and N is the number of sources being sought. This is derived in Appendix V.

To compare this to a brute force algorithm, we first note that a brute force algorithm has a processing time of O(V^N), where V is the number of voxels in the region of interest. The algorithms are compared by examining the fractional increase in processing time when N increases by 1. The processing time required by a brute force algorithm increases by a factor of V when N is increased by 1. The processing time for the gradient search algorithm increases by a factor of

    T_(N+1) / T_N = log(1 − X^(N+1) · (N+1)!) / log(1 − X^N · N!)

for each increase in N. We evaluate this numerically by letting X = 1/V, which is the lowest value X can take due to voxel discretization. As shown in Fig. 5.3.2, it is found that for any realistic value of N and V, this ratio always evaluates to a value less than V. This means that the gradient search algorithm can search for larger numbers of coherent sources with less impact on processing time than a brute force search.
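Returning to the symmetric-product shortcut described under Computational Enhancements, a minimal Python sketch (illustrative only; the implementation used in this work is in MATLAB) computes each inner product once and mirrors it across the diagonal:

    import numpy as np

    def symmetric_product(A, B):
        """Compute A @ B when the result is known to be symmetric.

        Only entries with m >= n are computed explicitly; the lower triangle
        is filled by mirroring, roughly halving the number of inner products.
        """
        M = A.shape[0]
        C = np.empty((M, M))
        for n in range(M):
            for m in range(n, M):
                C[n, m] = A[n, :] @ B[:, m]
                C[m, n] = C[n, m]
        return C

In practice a vectorized BLAS call may still be faster than an explicit loop; the point here is the reduced operation count, which matters when R̂_s must be recomputed many thousands of times over the searches.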

Fig. 5.3.2. Runtime analysis of the coherent source gradient search algorithm (surface plot of the increase in the number of search runs needed for the desired P_D as a function of the number of coherent sources N and the number of voxels V). The growth rate is always less than V, which is the growth rate of a brute force search algorithm when N is increased to N+1. The ripples and missing values on the surface are a result of computational failure to evaluate the large factorials in the ratio expression; the actual values lie approximately midway between the ripples.

D. Generation of Suppressed Regions from Coherent Source Search Results

The coherent source gradient search algorithm outputs a list, with each entry being the result from one of the gradient search runs. Each entry contains N points, where N is the number of sources being sought. Since not every entry is a successful result, we developed an algorithm to scan the search results, pull out the most likely coherent sources, and generate suppressed regions around them in the shape of boxes. The size of each box is specified by the user. To handle distributed sources, or noise around sources, which could cause successful coherent source search results to settle in slightly different locations, one requirement for this algorithm was that it be able to find groups of potential coherent source points that are close together.

Fig. 5.3.3 illustrates the algorithm that was used to identify the most likely coherent source points using results from multiple gradient search runs. The maximum acceptable distance between points for them to be considered part of the same suppressed region is set to 75% of the length of the box that is to form the suppressed region. Gradient search runs whose own points are closer than one box length are ignored. When there are enough points within a given region, as specified by the user, a suppressed region is formed around them. This is done by generating a box with user-defined dimensions around each point. The box size used in this paper was 3x3x3 mm. A sketch of this clustering step is given below.
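The following Python sketch mirrors the clustering rules just described: runs whose own points lie within one box length of each other are discarded, remaining points within 75% of a box length of each other are grouped, and a user-sized box is generated around each point of a sufficiently large group. It is illustrative only; the function and variable names are not from the implementation used in this work.

    import numpy as np

    def suppressed_regions(run_results, box_len, min_points):
        """Group gradient-search outputs into suppressed-region boxes.

        run_results : list of (N, 3) arrays, the N points returned by each run
        box_len     : edge length of the suppression box (same units as points)
        min_points  : number of nearby points required before boxes are formed
        Returns a list of (point, box_len) pairs; the union of the boxes
        centered on these points forms the suppressed regions.
        """
        # Ignore runs whose own search points ended up closer than one box length.
        kept = []
        for pts in run_results:
            d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
            if np.all(d[np.triu_indices(len(pts), k=1)] >= box_len):
                kept.extend(pts)
        kept = np.asarray(kept)

        # Greedy grouping: points within 75% of a box length belong together.
        boxes = []
        used = np.zeros(len(kept), dtype=bool)
        for i in range(len(kept)):
            if used[i]:
                continue
            member = (np.linalg.norm(kept - kept[i], axis=1) <= 0.75 * box_len) & ~used
            if member.sum() >= min_points:
                used |= member
                boxes.extend((p, box_len) for p in kept[member])
        return boxes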

Fig. 5.3.3. Schematic depiction of the suppressed region detection and generation algorithm for two coherent sources. Each dot represents a coherent source point found by the gradient search algorithm. The letter associated with each dot identifies the output of one gradient search run, and the numeric subscript indicates the search point within that run (two search points per run). Suppressed regions, depicted by the dashed boxes, are formed around (A1, B1, C1) and (A2, B2, C2) because the points in each group are located within 75% of a box length of each other. No box is formed around D1 because D1 is not close enough to (A1, B1, C1). No boxes are formed around the E and F dots because the groups (E1, F1) and (E2, F2) do not contain enough elements. In this illustration, the number of points per group required for a suppressed region to be formed around a dot is 3. The dots G1 and G2 are ignored because they are too close together.

5.4 Evaluation of the TRRS-ER-SAM Beamformer Using Numerical Simulations

The TRRS-ER-SAM and coherent source localization algorithms were implemented in NUTMEG, a MEG beamformer analysis toolbox for MATLAB [47]. The simulation module simulated a CTF neuromagnetometer with 45 channels, using the MEG sensor configuration, subject anatomy, and head model from a previous MEG recording. Both ER-SAM and TRRS-ER-SAM were applied to three numerical simulation scenarios, which are described as follows:

A. Two Coherent Sources

A single-trial dataset was generated with two coherent sources of identical amplitude, and Gaussian white noise was added to yield a fixed SNR.

B. Two Coherent Sources and an Interferer

The previous simulation with two coherent sources was modified so that the coherent sources had different amplitudes. A third source was added with a phase shift, so that it was strongly, but not perfectly, correlated with the coherent pair.

C. Three Coherent Sources

The previous simulation was modified so that all three sources were coherent and had different amplitudes.

5.5 Evaluation of the TRRS-ER-SAM Beamformer Using AEF Data

A. Application to Steady-State Evoked Field Data

Steady-state evoked fields produced by binaurally presented amplitude-modulated tones yield results that are highly suppressed under conventional beamformer analysis. Herdman and colleagues used half-head sensor data for beamformer analysis [7]. They needed half-head sensor data to avoid the effects of correlated sources; however, half-head sensor data may not completely eliminate these effects, as it has been shown with ECD analysis using half-head sensors that there is still influence from the contralateral source [49]. Their study localized the steady-state evoked field to around the primary auditory cortex. Hence, applying the ER-SAM and TRRS-ER-SAM beamformers to data recorded with the same parameters served as a way to compare the two beamformers, using previously published results as a reference.

A 40 Hz amplitude-modulated 5 Hz tone was presented to a subject whose cortical response was recorded using magnetoencephalography. A fixed interstimulus interval was used and multiple trials were recorded. The data were bandpass filtered, with an additional notch filter at 60 Hz. Both the ER-SAM and TRRS-ER-SAM algorithms were applied in the frequency domain to generate a tomographic image of the steady-state response using the 30-50 Hz band over the analysis time window. A voxel resolution of 5x5x5 mm was used. ECD analysis was not performed because of the noise level, which necessitated that the beamformer analysis be performed in the frequency domain.

B. Evaluation of Nm Localization

The TRRS-ER-SAM algorithm was evaluated against the ECD and ER-SAM beamformers using AEF data collected from the 7 subjects who had anatomical MR images available, based on detectability of the Nm field. The binaural 5 Hz auditory stimulus datasets were used because, at 5 Hz, the Nm was easiest to detect on the MEG sensor traces. Detection of the Nm on the MEG sensor trace is necessary before ECD fitting can be attempted. Detectability for ECD analysis was based on: (a) reasonable detectability of the Nm field on the trial-averaged channel traces, (b) distance between the Nm and Heschl's gyrus, which is the structure shown by earlier studies to be responsible for the Nm [4, 5], and (c) localization consistency, as measured by the standard deviation between subjects. Detectability for beamformer analysis was based on (a) successful identification of the Nm as outlined in Section 4.4, (b) distance between the Nm and Heschl's gyrus, and (c) localization consistency.

5.6 Application to an AEF Study

The TRRS-ER-SAM beamformer was applied to an AEF study using the data collected as described in Section 3. All subjects were used in this study. For the subject who did not have an anatomical MRI available, photographic coregistration was done with another subject with a similar head size, by aligning facial features. A single-sphere head model was used for this subject. This study used the TRRS-ER-SAM beamformer to examine the effects of stimulus frequency, subject age, and monaural versus binaural stimulation on Nm location, amplitude, and latency. After localization of the Nm and measurement of its dipole moment and latency for all datasets, the effects of subject age, stimulus frequency, and monaural versus binaural stimulation were examined for each of these measures.

A. ANOVA Analyses

ANOVA was used as a first step to determine significant differences in the data that would require further analysis. The subsequent analyses would determine whether the significant differences reflected trends in the data as a result of subject or treatment factors.

1. Nm Location: A two-way repeated measures ANOVA was applied to the three Nm MNI coordinates for the sources in both hemispheres. Age was the subject variable, and frequency and stimulus (monaural/binaural) were the treatment levels.

2. Nm Amplitude: The effects of age and frequency on the Nm dipole moment were examined. A two-way repeated measures ANOVA was applied to the Nm dipole moment with age as the subject variable, and frequency and stimulus (monaural/binaural) as the treatment levels.

3. Nm Latency: A two-way repeated measures ANOVA was applied to the Nm latency using age as the subject variable, and frequency and stimulus (monaural/binaural) as the treatment levels.

B. Nm Central Location

Nm grand mean locations were calculated across all frequencies and subjects to identify the central neuroanatomical structure for the Nm source generator. This structure has so far been hypothesized to be Heschl's gyrus.

C. Effects of Frequency

1. Nm Localization: Does Stimulus Frequency Affect Nm Localization? A paired t-test was used to evaluate the statistical significance of differences in localization.

2. Nm Amplitude: Does Stimulus Frequency Affect the Nm Amplitude? A paired t-test was used to compare the effects of frequency on the Nm amplitude.

3. Nm Latency: Does Stimulus Frequency Affect the Nm Latency? The effect of frequency on the Nm latency was examined by performing a paired t-test on the latency differences between frequencies.

D. Effects of Age

1. Nm Localization: Does Subject Age Affect the Nm Location? Each Nm MNI coordinate was plotted against subject age to determine whether there were any identifiable trends in the data. If a trend was present, an appropriate regression analysis would be applied.
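As a minimal illustration of the paired comparisons used for the frequency effects (not the actual analysis scripts used in this work), a paired t-test in Python/SciPy could look like the following; the array names and values are placeholders, not study data.

    import numpy as np
    from scipy.stats import ttest_rel

    # Hypothetical per-subject Nm measures at two stimulus frequencies
    # (same subjects in both conditions, hence a paired test).
    amp_freq_a = np.array([12.1, 9.8, 15.3, 11.0, 13.7])   # nAm, placeholder values
    amp_freq_b = np.array([10.4, 9.1, 13.9, 10.2, 12.5])

    t_stat, p_value = ttest_rel(amp_freq_a, amp_freq_b)
    print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")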

2. Nm Amplitude: Does Subject Age Affect the Nm Amplitude? Linear regression analysis was performed to determine whether there was any detectable linear correlation between subject age and Nm amplitude.

3. Nm Latency: Does Subject Age Affect the Nm Latency? Exponential regression analysis was performed on Nm latency versus subject age. The minimum Nm latency was used as the constant offset in the exponential regression equation. Statistical analysis of the exponents of the fitted curves was then performed.

E. Monaural Versus Binaural Stimuli

1. Nm Location: Do Monaural Versus Binaural Stimuli Affect the Nm Location? A paired t-test was used to determine whether Nm location differences between monaural and binaural stimulation were significant.

2. Nm Amplitude: Do Monaural Versus Binaural Stimuli Affect the Nm Amplitude? A paired t-test was used to examine the differences in Nm amplitude in each hemisphere between monaural and binaural stimuli. To examine lateralization differences, a laterality index was computed by dividing the left-hemisphere Nm dipole moment by the right-hemisphere Nm dipole moment. T-tests were applied to analyze differences between monaural and binaural laterality indexes at each frequency.

3. Nm Latency: Do Monaural Versus Binaural Stimuli Affect the Nm Latency? For each frequency and hemisphere, a one-way repeated measures ANOVA was performed on the Nm latency data, with age as the subject variable and stimulus (monaural/binaural) as the treatment level.

4. Nm Lateralization: Do Monaural Versus Binaural Stimuli Affect Nm Lateralization? Laterality indexes were computed by dividing the left-hemisphere Nm dipole moment by the right-hemisphere Nm dipole moment. Only datasets where both dipole moments could be detected were used. A t-test was then performed to determine whether the laterality indexes for monaural stimuli were significantly different from those for binaural stimuli.
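A sketch of the exponential latency-versus-age regression described above is given below, with the minimum observed latency as the fixed offset. This is illustrative Python/SciPy, not the analysis code used in this work, and the starting values are arbitrary.

    import numpy as np
    from scipy.optimize import curve_fit

    def fit_latency_vs_age(age, latency):
        """Fit latency = offset + a * exp(-b * age), offset fixed at min latency."""
        offset = latency.min()
        model = lambda t, a, b: offset + a * np.exp(-b * t)
        p0 = (np.ptp(latency), 0.1)                   # arbitrary starting values
        (a, b), pcov = curve_fit(model, age, latency, p0=p0)
        return a, b, np.sqrt(np.diag(pcov))           # estimates and std. errors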

6. RESULTS I: EVALUATION OF MEG POST-PROCESSING ALGORITHMS

In this section, the MEG post-processing algorithms are evaluated. The first part uses numerical simulations to evaluate the TRRS-ER-SAM and ER-SAM algorithms in their ability to handle coherent sources. The consistent identification of the Nm using beamformer data is then discussed. Examples are then given of the application of the TRRS-ER-SAM and ER-SAM beamformers to the detection of the Nm. Finally, all MEG post-processing algorithms are compared side by side in their ability to detect and localize the Nm.

6.1 ER-SAM and TRRS-ER-SAM Beamformer Evaluations

A. Numerical Simulations

To compare the ER-SAM and the time-restricted region-suppressed ER-SAM (TRRS-ER-SAM) beamformer algorithms, we used numerical simulations to generate sinusoidal coherent sources in a simulated brain. The numerical simulations also served to compare the gradient search and brute force search algorithms. This was done by determining how many voxel measurements are used by a gradient search and comparing this to the number of voxel measurements required by a hypothetical brute force search.

1. Two Coherent Sources

As was previously done in a coherent source simulation by Dalal and colleagues [3], a single-trial dataset was generated with two coherent sources (µ = 1) of equal amplitude, oscillating sinusoidally at 3 Hz (Fig. 6.1.1b). These sources were randomly oriented and positioned at coordinates (,-3,4) mm and (,3,4) mm using the coordinate system defined in Fig. 6.1.1a. Gaussian white noise was added to yield a fixed signal-to-noise ratio (SNR). Using conventional ER-SAM to reconstruct the activations yielded weak, diffuse sources, as shown in Fig. 6.1.2.

We began with a coherent source search for two sources. Over the gradient search runs, the source localization algorithm was able to localize the coherent sources with a 6% rate of success; hence, for at least a 99% probability of detection, 7 runs are required for this configuration. With an average of 37 computations of the R̂_s matrix per gradient search run, on the order of 10^4 R̂_s computations would have been required. In contrast, a hypothetical brute force search for two sources would have required approximately V!/(2!(V−2)!) ≈ 1.1x10^8 R̂_s computations, assuming a similar VOI of V = 15,000 voxels. Using the output from the gradient searches, suppressed regions were generated; these are shown in Fig. 6.1.3a.

Fig. 6.1.1. (a) Schematic of source placement. (b) Magnetic field of the simulated data with sources at (,-3,4) mm and (,3,4) mm.

Fig. 6.1.2. Reconstructed activation map of signal power over the simulated time interval using the standard SAM beamformer. The intensity values are based on normalized lead fields.

Another coherent source search was performed for three sources. This search did not yield any consistent results; hence, no suppressed region could be formed.
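The run counts quoted in these simulations follow from the per-run success rate through the relation derived in Appendix V; a small helper (illustrative Python with a generic example value, not part of the original analysis) makes the relationship explicit.

    import math

    def runs_for_detection(per_run_success, p_detect=0.99):
        """Independent search runs needed so that at least one run succeeds
        with probability p_detect, given the per-run success rate."""
        return math.ceil(math.log(1.0 - p_detect) / math.log(1.0 - per_run_success))

    # Example: runs_for_detection(0.5) == 7; lower per-run success rates
    # require correspondingly more runs.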

Using the suppressed regions formed from the two coherent source search, the TRRS-ER-SAM method successfully reconstructed the source locations and time series (Fig. 6.1.3b-c).

Fig. 6.1.3. (a) The coherent source search algorithm creates suppressed regions around the coherent sources; the box size used to generate the suppressed regions was 3x3x3 mm. (b) Reconstructed activation map of signal power over the simulated time interval using the coherent source localization algorithm and the modified SAM beamformer. The peaks are located at (,-3,4) mm and (,3,4) mm. The intensity values are based on normalized lead fields. (c) Reconstructed non-normalized source amplitude plots obtained from the modified SAM beamformer. The actual source amplitudes are shown by the dotted lines, which are hard to see due to the very close overlay with the reconstructed amplitudes.

2. Two Coherent Sources and an Interferer

The previous simulation with two coherent sources was modified so that the source at (,3,4) mm had an amplitude of .5 nAm. A third source, oscillating at 3 Hz with a phase lag of π/8 radians, was added at coordinates (6,,4) mm. The phase lag produces a correlation coefficient of about 0.9 between the third source and the other two sources. Reconstruction with conventional ER-SAM localized the source at (6,,4) mm; however, it exhibited reduced intensity and a phase shift, as shown in Fig. 6.1.4.

A search was performed to seek out two coherent sources. Out of 3 runs, 4% localized correctly, finding the (,3,4) mm and (,-3,4) mm sources. This means that 3 gradient searches are required to achieve a 99% probability of detection. With an average of 384 computations of the R̂_s matrix per gradient search run, 4.3x10^4 R̂_s computations would have been required. This still contrasts with the roughly 1.1x10^8 computations of R̂_s required by a hypothetical brute force search for two coherent sources. A number of the searches localized a point near the source at (6,,4) mm together with one of the other sources; however, none localized accurately, due to the unrepresented source.

Another search was performed to seek out three coherent sources. Out of 4 gradient searches, .3% localized correctly; this means that 9 runs are required to obtain a 99% probability of detection of all three sources. With an average of 563 computations of the R̂_s matrix per gradient search run, on the order of 10^6 R̂_s computations would have been required. In contrast, a hypothetical brute force search for three coherent sources would have required approximately 5.6x10^11 R̂_s computations, assuming a VOI of 15,000 voxels.

Using the output from the two coherent source search, the suppressed region generator created suppression regions around all three sources. Using these, the TRRS-ER-SAM beamformer was able to accurately reconstruct the three sources, as shown in Fig. 6.1.5.

Fig. 6.1.4. (a) Activation map obtained from the conventional SAM beamformer over the simulated time window. Only one source is seen, at (6,,4) mm. (b) The reconstructed source dipole moment at (6,,4) mm. The estimated dipole moment is the solid line and the actual dipole moment is the dotted line; the estimated dipole moment is evidently distorted.

Fig. 6.1.5. (a) Activation map obtained with the region-suppressed SAM beamformer over the simulated time window, using suppressed regions detected by a coherent source gradient search. (b) Reconstructed sources using the region-suppressed SAM beamformer.

3. Three Coherent Sources

The previous simulation was modified so that the source at (6,,4) mm had an amplitude of .75 nAm and zero phase lag. Using a standard ER-SAM beamformer, the reconstructed source activity image showed only weak, diffuse sources, as shown in Fig. 6.1.6.

A search performed to seek out two coherent sources did not localize any sources. In a subsequent search performed to seek three coherent sources, .85% of the runs correctly located the sources. This means that 539 runs are required to obtain a 99% probability of detection. With an average of 73 computations of the R̂_s matrix per gradient search run, 3.8x10^5 R̂_s computations would have been required for 539 runs. This again contrasts with a hypothetical brute force search for three coherent sources, which would have required approximately 5.6x10^11 R̂_s computations, assuming a VOI of 15,000 voxels.

Using the suppressed regions generated from the coherent source search results, the TRRS-ER-SAM beamformer was able to accurately reconstruct all three sources, as shown in Fig. 6.1.7.

Fig. 6.1.6. Reconstructed activation map of three coherent sources over the simulated time interval using a conventional SAM beamformer.

Fig. 6.1.7. (a) Reconstructed activation map of three coherent sources using the region-suppressed SAM beamformer over the simulated time interval. (b) Reconstructed dipole moments of the three detected sources. The solid lines are the estimated dipole moments and the dotted lines are the actual dipole moments.

B. Application to Steady State Evoked Fields

Steady-state evoked fields produced by binaurally presented amplitude-modulated tones yield results that are highly suppressed under conventional beamformer analysis. Herdman and colleagues used half-head sensor data for beamformer analysis [7]. Here, we apply the region-suppressed beamformer method just described to steady-state evoked field data.

A 40 Hz amplitude-modulated 5 Hz tone was presented to a subject whose cortical response was recorded using magnetoencephalography, with a fixed interstimulus interval between trials. Due to the amount of noise, the standard ER-SAM beamformer was applied in the frequency domain. This involved performing a Fourier transform of the data and using the real part of the R̂_s matrix to compute the optimal dipole orientations. The real part of the R̂_s matrix was used because the imaginary part corresponds to the correlation between orthogonal frequency components, which, for the purposes of dipole orientation estimation, should be zero. A tomographic image of the 40 Hz steady-state response was generated using the 30-50 Hz band over the analysis time window. A frequency-restriction window of 38-42 Hz was used for computing the optimal dipole orientations. This resulted in a low-intensity activation pattern distributed over much of the head volume, as shown in Fig. 6.1.8c.

Fig. 6.1.8. (a) The averaged MEG data. The 30-50 Hz frequency band over the time window indicated by the dotted lines was used to compute the beamformer outputs. There is a peak at 40 Hz corresponding to the 40 Hz amplitude modulation of the auditory stimulus; the source of a second spectral peak could not be identified. (b) MEG channel frequency spectra. A frequency-restriction window of 38-42 Hz was used for computing the optimal dipole orientations. (c) The tomographic image at 40 Hz created by a standard SAM beamformer implementation. All activity shown was below the 95% confidence interval.

The gradient search runs yielded the suppressed regions shown in Fig. 6.1.9a. These suppressed regions were used with the TRRS-ER-SAM beamformer, and the resulting tomographic image (Fig. 6.1.9c) shows activations in the temporal region.
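A sketch of the frequency-domain ingredients is given below, under stated assumptions (illustrative Python/NumPy, not the NUTMEG implementation): the band-limited cross-spectral density is formed from the FFT of the epochs, and the dipole orientation at a voxel is taken from the real part of the projected 2x2 matrix, as described above.

    import numpy as np

    def band_csd(epochs, fs, band):
        """Channel-by-channel cross-spectral density restricted to a band.

        epochs : (n_trials, n_channels, n_samples) array of sensor data
        fs     : sampling rate in Hz
        band   : (f_low, f_high) in Hz
        """
        freqs = np.fft.rfftfreq(epochs.shape[-1], d=1.0 / fs)
        sel = (freqs >= band[0]) & (freqs <= band[1])
        spectra = np.fft.rfft(epochs, axis=-1)[..., sel]   # trial x channel x bin
        # Sum over the band bins, average over trials.
        return np.einsum('tcf,tdf->cd', spectra, spectra.conj()) / len(epochs)

    def orientation_from_Rs(Rs_complex):
        """Optimal dipole orientation from the real part of the projected
        2x2 matrix; the imaginary part is ignored, as argued in the text."""
        w, v = np.linalg.eigh(np.real(Rs_complex))
        return v[:, np.argmax(w)]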

Fig. 6.1.9. (a) The output of the coherent source gradient search algorithm. Note the tendency of certain coordinates to be generated consistently across different runs. (b) The suppressed regions generated from the gradient search output; a 3x3x3 mm box size was chosen to form the suppressed regions. (c) The 40 Hz tomographic image created using the region-suppressed event-related SAM beamformer, thresholded at a 95% confidence interval. The activation peaks are indicated by the red dots. (d) The frequency spectra of the localized sources (left hemisphere at MNI (-44.6, -.7, .6) mm; right hemisphere at MNI (44., -39.7, 8.7) mm). It is worth mentioning that the 37 Hz peak seen in sensor space (Fig. 6.1.8b) did not yield any significant source peaks.

6.2 Identifying the Nm

A. Identification Based on MEG Sensor Traces

For ECD analysis, the MEG sensor trace is the only information available for peak marking. A paper by Herdman and colleagues was used as a reference for peak marking [7]. Additionally, based on AEP studies [8, 46], the Nm was taken, as a general guideline, to be the peak occurring in the adult latency range (around 100 ms) for older subjects, and the peak occurring at correspondingly later latencies for younger subjects. This of course assumes that the Nm and N share the same neural source generator.

B. Identification Using Beamformer Virtual Sensor Traces

1. Polarity of the Nm With Respect to Pm and Pm: The pattern of source polarity in the virtual sensor data was examined in the hope that it could be useful in identifying the Pm, Nm and Nm, as is typically done in EEG auditory evoked potential (AEP) analysis. In the majority of cases, the Nm had the opposite polarity to the Pm and Pm fields, and the same polarity as the Nm field. However, for a few subjects, the polarity pattern from one hemisphere could change between two different stimulus frequencies. This can be seen in Fig. 6.2.1. For this reason, magnitude plots (Fig. 6.2.2) were found to be a more reliable way of identifying AEF peaks.

Fig. 6.2.1. Two virtual sensor plots from the right hemisphere of one subject at different frequencies show different polarity patterns. AEF peaks are labeled by comparing the plots with labeled averaged data. Panel (a) was generated using data from one binaural stimulus frequency and (b) was generated using data from a 5 Hz binaural stimulus.

Fig. 6.2.2. Magnitude plot of the right hemisphere virtual sensor for 5 Hz monaural stimuli. The chosen Nm peak latency is indicated by the vertical red line. This is confirmed as the Nm peak by comparing the plot with the averaged dataset, where the vertical red line is at the same latency as in the magnitude plot.

2. Ensuring Consistency Across All Subjects: To ensure that the same peak was being measured across all subjects, the Nm was identified by first thresholding the beamformer data at α = 0.05, using the omnibus noise method, to ensure that noise was not being measured. Again, as a general guideline, the Nm was taken as the peak occurring in the adult latency range (around 100 ms) for older subjects and at correspondingly later latencies for younger subjects. Plotting virtual sensor data over the age range of the subjects (Fig. 6.2.3) also aided in ensuring consistency when labeling the Nm peak, as well as the Pm, Pm, and Nm peaks, which occur in close spatial proximity to the Nm and therefore show up on the same virtual sensor trace. Care should be taken not to interpret the Nm virtual sensor traces as an exact measure of the Pm, Pm, or Nm fields; in order to take an exact measure, the virtual sensor must be evaluated at the location of those peaks.

Fig. 6.2.3. A surface plot of right hemisphere virtual sensors for 5 Hz stimuli aids in identifying the progression of the Nm with age. Given that the Nm is the dominant peak for older subjects, we assign the Nm in younger children based on peak connectivity with that of the older children.

A view of Fig. 6.2.3 from a different angle (Fig. 6.2.4) shows that the Pm field is relatively small, if not absent, for the younger subjects. At around 6 years of age, the Pm becomes noticeable near the Nm. As the subjects get older, the Pm becomes more visible as the Nm latency decreases.
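The omnibus-noise thresholding step mentioned at the start of this subsection can be sketched as follows. This is one illustrative reading of the procedure (take the maximum beamformer magnitude over voxels in the noise-floor data and threshold the evoked image at the 1 − α quantile of that distribution), not the exact implementation used in this work.

    import numpy as np

    def omnibus_threshold(noise_maps, alpha=0.05):
        """Significance cutoff from noise-floor beamformer output.

        noise_maps : (n_samples, n_voxels) beamformer magnitudes computed
                     from plus-minus averaged (noise-floor) data
        Returns the magnitude cutoff corresponding to significance level alpha.
        """
        omnibus = np.abs(noise_maps).max(axis=1)      # max over voxels per sample
        return np.quantile(omnibus, 1.0 - alpha)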

Fig. 6.2.4. Same as Fig. 6.2.3, except viewed from the opposite side.

3. Ensuring Consistency Using the N Latency: Since the Nm is the neuromagnetic correlate of the N EEG potential, we compared the Nm latencies on the MEG sensor and beamformer virtual sensor traces with the EEG Cz traces recorded from two subjects. Data from the 4 year old subject (Fig. 6.2.5) show that the latencies of the Nm fields marked on both the MEG sensor and beamformer virtual sensor traces correspond closely with the N deflection in the EEG trace. Slight discrepancies in latency between the N and Nm are due to the fact that the EEG trace was recorded from the Cz location, which blurs the contributions of the Nm field from both hemispheres.

Fig. 6.2.5. (a) Averaged MEG sensor data and the EEG trace from the Cz electrode for one of the binaural recordings from the 4 year old subject. Fields are labeled on the MEG sensor data and potentials are labeled on the EEG trace. By visual inspection, the Pm, Nm and Pm fields line up with the P, N and P potentials. (b) Virtual sensor data from the same recording, with the statistically significant fields labeled. The latencies of the Nm and Pm fields are relatively close to those of the corresponding potentials. These data were lowpass filtered to make comparison easier.

Data from the second subject with EEG recordings presented an interesting case, because the Nm appears to change polarity with respect to the Pm and Nm between two of the stimulus frequencies (Fig. 6.2.6b,d). This polarity change also manifests in the EEG trace (Fig. 6.2.6a,c). Although the polarity of the 5 Hz EEG trace might suggest that the peak between the labeled P and N is really the N, and that the labeled N is really the P, latencies obtained from other studies show that the P and P should be well separated in latency [8, 46]. Combined with the labeled beamformer virtual sensor surface plot (Fig. 6.2.3), this suggests that, based on latency, the labeling is consistent with previous studies and also between subjects in the present study. The fact that the N/Nm reversal is observed in both the MEG and EEG datasets demonstrates the reproducibility of this reversal across the two imaging modalities.

Fig. 6.2.6. (a) MEG and EEG traces from the 5 Hz monaural recording of the subject shown previously in Fig. 6.2.1. Auditory evoked fields and potentials are labeled assuming that the N wave has a reversed polarity. (b) The 5 Hz monaural Pm, Nm, and Nm fields labeled on the virtual sensor trace for comparison with the EEG trace; a lowpass filter was applied to make comparison easier. (c) MEG and EEG traces from the other monaural stimulus frequency. Although the EEG trace has a low SNR, we make tentative measurements of the latencies of the P, N, P, and N waves based on their occurrence near their corresponding MEG fields. (d) Virtual sensor data for this second monaural stimulus frequency; a lowpass filter was applied to make comparison easier. The Nm and Nm fields have the correct polarity. The Pm field is not labeled because it was below the statistical significance threshold. The valley between the Nm and Nm would be read on an EEG trace as a P.

6.3 Evaluation of MEG Data Post-Processing Methods Using AEF Data

A. Beamformer Application to Event-Related Auditory Evoked Fields

1. Example 1: For the binaural stimuli used in this study, the auditory evoked fields are usually at most moderately correlated. This means that if the Nm sources in the left and right hemispheres are both strong enough, they will not suppress each other so much that they become masked by the noise. This makes establishing areas for region suppression easier. The TRRS-ER-SAM beamformer can then be used to reconstruct the sources more accurately. One example is illustrated in Fig. 6.3.1. In this example, the two Nm source locations could be seen in the auditory cortex, and they were above the 95% confidence level (Fig. 6.3.1d); however, due to the temporal overlap of the Nm peaks that is visible in Fig. 6.3.1e, it was advisable to perform region suppression around these areas to ensure that no coherent source suppression was distorting the data. The TRRS-ER-SAM beamformer yielded stronger Nm activations (Fig. 6.3.1f).

Fig. 6.3.1. (a) Averaged auditory evoked field data obtained from the 8 year old subject, bandpass filtered. (b) Noise floor data obtained by plus-minus averaging of the auditory evoked field. (c) Histogram of omnibus noise magnitudes from the noise floor beamformer output; for α = 0.05, the beamformer magnitude cutoff is 4.7x10^-6. (d) Conventional ER-SAM analysis reconstructs both sources. They are both visible above the 95% confidence level threshold and are located at MEG coordinates (-, 4, 5) mm and (5, -35, 5) mm. (e) 4x4x4 mm suppression region boxes are created around the reconstructed sources. (f) Region-suppressed ER-SAM reconstructs the sources with a higher magnitude than the conventional ER-SAM beamformer. (g) The Nm dipole moment is measured by computing the non-normalized virtual sensors at the maxima of the point spread functions around the two sources.

2. Example 2: For subjects who happen to have highly coherent sources, or when the sources are weaker, as tends to be the case for some of the other stimulus frequencies such as the 6 Hz tones, it is difficult to identify the Nm field with conventional ER-SAM analysis. In this situation, the coherent source gradient search is particularly useful. This next example is a 5 Hz binaural auditory evoked response recording (Fig. 6.3.2a). With conventional ER-SAM beamformer analysis, only the Nm source in the right hemisphere was identifiable, as seen in Fig. 6.3.2d. All voxel values near the Nm time point in the left hemisphere were below the 95% confidence level. A coherent source gradient search yielded the suppression regions shown in Fig. 6.3.2e. A region-suppressed ER-SAM beamformer analysis successfully reconstructed the Nm peaks in both the left and right hemispheres.

Fig. 6.3.2. (a) Averaged auditory evoked field data obtained from the 5.5 year old subject using 5 Hz binaural stimuli, bandpass filtered. (b) Noise floor data obtained by plus-minus averaging of the auditory evoked field. (c) Histogram of omnibus noise magnitudes from the noise floor beamformer output; for α = 0.05, the beamformer magnitude cutoff is on the order of 10^-6. (d) Only one source is detectable with conventional ER-SAM analysis; nothing resembling an Nm peak could be found in the left hemisphere. (e) A coherent source search found coherent sources at MEG coordinates (5, 5, 7) mm and (5, -5, 6) mm. 4x4x4 mm suppression region boxes were created around these points. (f) Region-suppressed ER-SAM reconstructs the sources with a higher magnitude than the conventional ER-SAM beamformer. The left hemisphere source is above the 95% confidence level threshold. (g) The Nm dipole moment is measured by computing the non-normalized virtual sensors at the maxima of the point spread functions around the two sources.

B. Evaluation of Nm Localization

For the 7 subjects with anatomical information available, the ECD, ER-SAM, and TRRS-ER-SAM post-processing methods were applied to the MEG datasets generated using binaural 5 Hz auditory stimuli. These post-processing algorithms were used to locate the Nm AEF. The results are shown in Fig. 6.3.3.

Fig. 6.3.3. Comparison of the ECD, ER-SAM, and TRRS-ER-SAM algorithms in their ability to localize the Nm AEF in the 7 subjects. Each row corresponds to one subject (column headings: age in years, ECD, ER-SAM, TRRS-ER-SAM, and the MEG sensor data for the 5 Hz binaural stimuli); for one subject, no Nm could be identified in the MEG sensor trace. The vertical red line in the MEG sensor traces indicates the Nm latency used as the center of the 3 ms ECD localization window. No ECD analysis could be performed for the 6 year old subject due to the presence of a permanent-retainer artifact. For ECD MR slices with multiple dipoles marked, the axial MR slice shown is the one containing the slightly larger dipole marker; the other dipole markers are located in separate slices, but their projected positions are shown for reference. Beamformer images were thresholded at a 0.05 significance level.


More information

(i) Understanding the basic concepts of signal modeling, correlation, maximum likelihood estimation, least squares and iterative numerical methods

(i) Understanding the basic concepts of signal modeling, correlation, maximum likelihood estimation, least squares and iterative numerical methods Tools and Applications Chapter Intended Learning Outcomes: (i) Understanding the basic concepts of signal modeling, correlation, maximum likelihood estimation, least squares and iterative numerical methods

More information

Biomedical Engineering Evoked Responses

Biomedical Engineering Evoked Responses Biomedical Engineering Evoked Responses Dr. rer. nat. Andreas Neubauer andreas.neubauer@medma.uni-heidelberg.de Tel.: 0621 383 5126 Stimulation of biological systems and data acquisition 1. How can biological

More information

(i) Determine the admittance parameters of the network of Fig 1 (f) and draw its - equivalent circuit.

(i) Determine the admittance parameters of the network of Fig 1 (f) and draw its - equivalent circuit. I.E.S-(Conv.)-1995 ELECTRONICS AND TELECOMMUNICATION ENGINEERING PAPER - I Some useful data: Electron charge: 1.6 10 19 Coulomb Free space permeability: 4 10 7 H/m Free space permittivity: 8.85 pf/m Velocity

More information

Multi-channel SQUID-based Ultra-Low Field Magnetic Resonance Imaging in Unshielded Environment

Multi-channel SQUID-based Ultra-Low Field Magnetic Resonance Imaging in Unshielded Environment Multi-channel SQUID-based Ultra-Low Field Magnetic Resonance Imaging in Unshielded Environment Andrei Matlashov, Per Magnelind, Shaun Newman, Henrik Sandin, Algis Urbaitis, Petr Volegov, Michelle Espy

More information

Hearing Research 296 (2013) 25e35. Contents lists available at SciVerse ScienceDirect. Hearing Research

Hearing Research 296 (2013) 25e35. Contents lists available at SciVerse ScienceDirect. Hearing Research Hearing Research 296 (2013) 25e35 Contents lists available at SciVerse ScienceDirect Hearing Research journal homepage: www.elsevier.com/locate/heares Research paper Steady-state MEG responses elicited

More information

Chapter 5. Signal Analysis. 5.1 Denoising fiber optic sensor signal

Chapter 5. Signal Analysis. 5.1 Denoising fiber optic sensor signal Chapter 5 Signal Analysis 5.1 Denoising fiber optic sensor signal We first perform wavelet-based denoising on fiber optic sensor signals. Examine the fiber optic signal data (see Appendix B). Across all

More information

Classification of Four Class Motor Imagery and Hand Movements for Brain Computer Interface

Classification of Four Class Motor Imagery and Hand Movements for Brain Computer Interface Classification of Four Class Motor Imagery and Hand Movements for Brain Computer Interface 1 N.Gowri Priya, 2 S.Anu Priya, 3 V.Dhivya, 4 M.D.Ranjitha, 5 P.Sudev 1 Assistant Professor, 2,3,4,5 Students

More information

Reconstruction of Current Distribution and Termination Impedances of PCB-Traces by Magnetic Near-Field Data and Transmission-Line Theory

Reconstruction of Current Distribution and Termination Impedances of PCB-Traces by Magnetic Near-Field Data and Transmission-Line Theory Reconstruction of Current Distribution and Termination Impedances of PCB-Traces by Magnetic Near-Field Data and Transmission-Line Theory Robert Nowak, Stephan Frei TU Dortmund University Dortmund, Germany

More information

Removal of ocular artifacts from EEG signals using adaptive threshold PCA and Wavelet transforms

Removal of ocular artifacts from EEG signals using adaptive threshold PCA and Wavelet transforms Available online at www.interscience.in Removal of ocular artifacts from s using adaptive threshold PCA and Wavelet transforms P. Ashok Babu 1, K.V.S.V.R.Prasad 2 1 Narsimha Reddy Engineering College,

More information

CHAPTER 2 FIR ARCHITECTURE FOR THE FILTER BANK OF SPEECH PROCESSOR

CHAPTER 2 FIR ARCHITECTURE FOR THE FILTER BANK OF SPEECH PROCESSOR 22 CHAPTER 2 FIR ARCHITECTURE FOR THE FILTER BANK OF SPEECH PROCESSOR 2.1 INTRODUCTION A CI is a device that can provide a sense of sound to people who are deaf or profoundly hearing-impaired. Filters

More information

Laboratory Assignment 2 Signal Sampling, Manipulation, and Playback

Laboratory Assignment 2 Signal Sampling, Manipulation, and Playback Laboratory Assignment 2 Signal Sampling, Manipulation, and Playback PURPOSE This lab will introduce you to the laboratory equipment and the software that allows you to link your computer to the hardware.

More information

Application of Fourier Transform in Signal Processing

Application of Fourier Transform in Signal Processing 1 Application of Fourier Transform in Signal Processing Lina Sun,Derong You,Daoyun Qi Information Engineering College, Yantai University of Technology, Shandong, China Abstract: Fourier transform is a

More information

Chapter 5. Clock Offset Due to Antenna Rotation

Chapter 5. Clock Offset Due to Antenna Rotation Chapter 5. Clock Offset Due to Antenna Rotation 5. Introduction The goal of this experiment is to determine how the receiver clock offset from GPS time is affected by a rotating antenna. Because the GPS

More information

MAGNETIC RESONANCE IMAGING

MAGNETIC RESONANCE IMAGING CSEE 4620 Homework 3 Fall 2018 MAGNETIC RESONANCE IMAGING 1. THE PRIMARY MAGNET Magnetic resonance imaging requires a very strong static magnetic field to align the nuclei. Modern MRI scanners require

More information

Application Note 7. Digital Audio FIR Crossover. Highlights Importing Transducer Response Data FIR Window Functions FIR Approximation Methods

Application Note 7. Digital Audio FIR Crossover. Highlights Importing Transducer Response Data FIR Window Functions FIR Approximation Methods Application Note 7 App Note Application Note 7 Highlights Importing Transducer Response Data FIR Window Functions FIR Approximation Methods n Design Objective 3-Way Active Crossover 200Hz/2kHz Crossover

More information

UWB Small Scale Channel Modeling and System Performance

UWB Small Scale Channel Modeling and System Performance UWB Small Scale Channel Modeling and System Performance David R. McKinstry and R. Michael Buehrer Mobile and Portable Radio Research Group Virginia Tech Blacksburg, VA, USA {dmckinst, buehrer}@vt.edu Abstract

More information

Micro-state analysis of EEG

Micro-state analysis of EEG Micro-state analysis of EEG Gilles Pourtois Psychopathology & Affective Neuroscience (PAN) Lab http://www.pan.ugent.be Stewart & Walsh, 2000 A shared opinion on EEG/ERP: excellent temporal resolution (ms

More information

1 Introduction. 2 The basic principles of NMR

1 Introduction. 2 The basic principles of NMR 1 Introduction Since 1977 when the first clinical MRI scanner was patented nuclear magnetic resonance imaging is increasingly being used for medical diagnosis and in scientific research and application

More information

(i) Understanding of the characteristics of linear-phase finite impulse response (FIR) filters

(i) Understanding of the characteristics of linear-phase finite impulse response (FIR) filters FIR Filter Design Chapter Intended Learning Outcomes: (i) Understanding of the characteristics of linear-phase finite impulse response (FIR) filters (ii) Ability to design linear-phase FIR filters according

More information

Supplementary Figure 1

Supplementary Figure 1 Supplementary Figure 1 Left aspl Right aspl Detailed description of the fmri activation during allocentric action observation in the aspl. Averaged activation (N=13) during observation of the allocentric

More information

Psychoacoustic Cues in Room Size Perception

Psychoacoustic Cues in Room Size Perception Audio Engineering Society Convention Paper Presented at the 116th Convention 2004 May 8 11 Berlin, Germany 6084 This convention paper has been reproduced from the author s advance manuscript, without editing,

More information

Advanced Test Equipment Rentals ATEC (2832)

Advanced Test Equipment Rentals ATEC (2832) Established 1981 Advanced Test Equipment Rentals www.atecorp.com 800-404-ATEC (2832) Electric and Magnetic Field Measurement For Isotropic Measurement of Magnetic and Electric Fields Evaluation of Field

More information

BME 3113, Dept. of BME Lecture on Introduction to Biosignal Processing

BME 3113, Dept. of BME Lecture on Introduction to Biosignal Processing What is a signal? A signal is a varying quantity whose value can be measured and which conveys information. A signal can be simply defined as a function that conveys information. Signals are represented

More information

Week 1: EEG Signal Processing Basics

Week 1: EEG Signal Processing Basics D-ITET/IBT Week 1: EEG Signal Processing Basics Gabor Stefanics (TNU) EEG Signal Processing: Theory and practice (Computational Psychiatry Seminar: Spring 2015) 1 Outline -Physiological bases of EEG -Amplifier

More information

ESE531 Spring University of Pennsylvania Department of Electrical and System Engineering Digital Signal Processing

ESE531 Spring University of Pennsylvania Department of Electrical and System Engineering Digital Signal Processing University of Pennsylvania Department of Electrical and System Engineering Digital Signal Processing ESE531, Spring 2017 Final Project: Audio Equalization Wednesday, Apr. 5 Due: Tuesday, April 25th, 11:59pm

More information

DESIGN, CONSTRUCTION, AND THE TESTING OF AN ELECTRIC MONOCHORD WITH A TWO-DIMENSIONAL MAGNETIC PICKUP. Michael Dickerson

DESIGN, CONSTRUCTION, AND THE TESTING OF AN ELECTRIC MONOCHORD WITH A TWO-DIMENSIONAL MAGNETIC PICKUP. Michael Dickerson DESIGN, CONSTRUCTION, AND THE TESTING OF AN ELECTRIC MONOCHORD WITH A TWO-DIMENSIONAL MAGNETIC PICKUP by Michael Dickerson Submitted to the Department of Physics and Astronomy in partial fulfillment of

More information

What Makes a Good VNA?

What Makes a Good VNA? Introduction Everyone knows that a good VNA should have both excellent hardware performance and an easy to use software interface with useful post-processing capabilities. But there are numerous VNAs in

More information

Detection, Interpolation and Cancellation Algorithms for GSM burst Removal for Forensic Audio

Detection, Interpolation and Cancellation Algorithms for GSM burst Removal for Forensic Audio >Bitzer and Rademacher (Paper Nr. 21)< 1 Detection, Interpolation and Cancellation Algorithms for GSM burst Removal for Forensic Audio Joerg Bitzer and Jan Rademacher Abstract One increasing problem for

More information

Retina. last updated: 23 rd Jan, c Michael Langer

Retina. last updated: 23 rd Jan, c Michael Langer Retina We didn t quite finish up the discussion of photoreceptors last lecture, so let s do that now. Let s consider why we see better in the direction in which we are looking than we do in the periphery.

More information

VOL. 3, NO.11 Nov, 2012 ISSN Journal of Emerging Trends in Computing and Information Sciences CIS Journal. All rights reserved.

VOL. 3, NO.11 Nov, 2012 ISSN Journal of Emerging Trends in Computing and Information Sciences CIS Journal. All rights reserved. Effect of Fading Correlation on the Performance of Spatial Multiplexed MIMO systems with circular antennas M. A. Mangoud Department of Electrical and Electronics Engineering, University of Bahrain P. O.

More information

Removal of Line Noise Component from EEG Signal

Removal of Line Noise Component from EEG Signal 1 Removal of Line Noise Component from EEG Signal Removal of Line Noise Component from EEG Signal When carrying out time-frequency analysis, if one is interested in analysing frequencies above 30Hz (i.e.

More information

Fig. 1. Electronic Model of Neuron

Fig. 1. Electronic Model of Neuron Spatial to Temporal onversion of Images Using A Pulse-oupled Neural Network Eric L. Brown and Bogdan M. Wilamowski University of Wyoming eric@novation.vcn.com, wilam@uwyo.edu Abstract A new electronic

More information

The fundamentals of detection theory

The fundamentals of detection theory Advanced Signal Processing: The fundamentals of detection theory Side 1 of 18 Index of contents: Advanced Signal Processing: The fundamentals of detection theory... 3 1 Problem Statements... 3 2 Detection

More information

(i) Understanding of the characteristics of linear-phase finite impulse response (FIR) filters

(i) Understanding of the characteristics of linear-phase finite impulse response (FIR) filters FIR Filter Design Chapter Intended Learning Outcomes: (i) Understanding of the characteristics of linear-phase finite impulse response (FIR) filters (ii) Ability to design linear-phase FIR filters according

More information

Haptic control in a virtual environment

Haptic control in a virtual environment Haptic control in a virtual environment Gerard de Ruig (0555781) Lourens Visscher (0554498) Lydia van Well (0566644) September 10, 2010 Introduction With modern technological advancements it is entirely

More information

Improving TDR/TDT Measurements Using Normalization Application Note

Improving TDR/TDT Measurements Using Normalization Application Note Improving TDR/TDT Measurements Using Normalization Application Note 1304-5 2 TDR/TDT and Normalization Normalization, an error-correction process, helps ensure that time domain reflectometer (TDR) and

More information

EWGAE 2010 Vienna, 8th to 10th September

EWGAE 2010 Vienna, 8th to 10th September EWGAE 2010 Vienna, 8th to 10th September Frequencies and Amplitudes of AE Signals in a Plate as a Function of Source Rise Time M. A. HAMSTAD University of Denver, Department of Mechanical and Materials

More information

SIMULATING RESTING CORTICAL BACKGROUND ACTIVITY WITH FILTERED NOISE. Journal of Integrative Neuroscience 7(3):

SIMULATING RESTING CORTICAL BACKGROUND ACTIVITY WITH FILTERED NOISE. Journal of Integrative Neuroscience 7(3): SIMULATING RESTING CORTICAL BACKGROUND ACTIVITY WITH FILTERED NOISE Journal of Integrative Neuroscience 7(3): 337-344. WALTER J FREEMAN Department of Molecular and Cell Biology, Donner 101 University of

More information

Advanced Digital Signal Processing Part 2: Digital Processing of Continuous-Time Signals

Advanced Digital Signal Processing Part 2: Digital Processing of Continuous-Time Signals Advanced Digital Signal Processing Part 2: Digital Processing of Continuous-Time Signals Gerhard Schmidt Christian-Albrechts-Universität zu Kiel Faculty of Engineering Institute of Electrical Engineering

More information

CHAPTER 6 INTRODUCTION TO SYSTEM IDENTIFICATION

CHAPTER 6 INTRODUCTION TO SYSTEM IDENTIFICATION CHAPTER 6 INTRODUCTION TO SYSTEM IDENTIFICATION Broadly speaking, system identification is the art and science of using measurements obtained from a system to characterize the system. The characterization

More information

Design of a digital holographic interferometer for the. ZaP Flow Z-Pinch

Design of a digital holographic interferometer for the. ZaP Flow Z-Pinch Design of a digital holographic interferometer for the M. P. Ross, U. Shumlak, R. P. Golingo, B. A. Nelson, S. D. Knecht, M. C. Hughes, R. J. Oberto University of Washington, Seattle, USA Abstract The

More information

MAKING TRANSIENT ANTENNA MEASUREMENTS

MAKING TRANSIENT ANTENNA MEASUREMENTS MAKING TRANSIENT ANTENNA MEASUREMENTS Roger Dygert, Steven R. Nichols MI Technologies, 1125 Satellite Boulevard, Suite 100 Suwanee, GA 30024-4629 ABSTRACT In addition to steady state performance, antennas

More information

The Discrete Fourier Transform. Claudia Feregrino-Uribe, Alicia Morales-Reyes Original material: Dr. René Cumplido

The Discrete Fourier Transform. Claudia Feregrino-Uribe, Alicia Morales-Reyes Original material: Dr. René Cumplido The Discrete Fourier Transform Claudia Feregrino-Uribe, Alicia Morales-Reyes Original material: Dr. René Cumplido CCC-INAOE Autumn 2015 The Discrete Fourier Transform Fourier analysis is a family of mathematical

More information

GAMMA-GAMMA CORRELATION Latest Revision: August 21, 2007

GAMMA-GAMMA CORRELATION Latest Revision: August 21, 2007 C1-1 GAMMA-GAMMA CORRELATION Latest Revision: August 21, 2007 QUESTION TO BE INVESTIGATED: decay event? What is the angular correlation between two gamma rays emitted by a single INTRODUCTION & THEORY:

More information

Investigating Electromagnetic and Acoustic Properties of Loudspeakers Using Phase Sensitive Equipment

Investigating Electromagnetic and Acoustic Properties of Loudspeakers Using Phase Sensitive Equipment Investigating Electromagnetic and Acoustic Properties of Loudspeakers Using Phase Sensitive Equipment Katherine Butler Department of Physics, DePaul University ABSTRACT The goal of this project was to

More information

Lecture 13 Read: the two Eckhorn papers. (Don t worry about the math part of them).

Lecture 13 Read: the two Eckhorn papers. (Don t worry about the math part of them). Read: the two Eckhorn papers. (Don t worry about the math part of them). Last lecture we talked about the large and growing amount of interest in wave generation and propagation phenomena in the neocortex

More information

Imagine the cochlea unrolled

Imagine the cochlea unrolled 2 2 1 1 1 1 1 Cochlea & Auditory Nerve: obligatory stages of auditory processing Think of the auditory periphery as a processor of signals 2 2 1 1 1 1 1 Imagine the cochlea unrolled Basilar membrane motion

More information

Simulating a PTA with metronomes and microphones: A user s guide for a double-metronome timing & correlation demonstration

Simulating a PTA with metronomes and microphones: A user s guide for a double-metronome timing & correlation demonstration Simulating a PTA with metronomes and microphones: A user s guide for a double-metronome timing & correlation demonstration October 21, 2015 Page 1 Contents I Purpose....................................................

More information

Locating good conductors by using the B-field integrated from partial db/dt waveforms of timedomain

Locating good conductors by using the B-field integrated from partial db/dt waveforms of timedomain Locating good conductors by using the integrated from partial waveforms of timedomain EM systems Haoping Huang, Geo-EM, LLC Summary An approach for computing the from time-domain data measured by an induction

More information

FREQUENCY RESPONSE AND LATENCY OF MEMS MICROPHONES: THEORY AND PRACTICE

FREQUENCY RESPONSE AND LATENCY OF MEMS MICROPHONES: THEORY AND PRACTICE APPLICATION NOTE AN22 FREQUENCY RESPONSE AND LATENCY OF MEMS MICROPHONES: THEORY AND PRACTICE This application note covers engineering details behind the latency of MEMS microphones. Major components of

More information

RECENT applications of high-speed magnetic tracking

RECENT applications of high-speed magnetic tracking 1530 IEEE TRANSACTIONS ON MAGNETICS, VOL. 40, NO. 3, MAY 2004 Three-Dimensional Magnetic Tracking of Biaxial Sensors Eugene Paperno and Pavel Keisar Abstract We present an analytical (noniterative) method

More information

FFT 1 /n octave analysis wavelet

FFT 1 /n octave analysis wavelet 06/16 For most acoustic examinations, a simple sound level analysis is insufficient, as not only the overall sound pressure level, but also the frequency-dependent distribution of the level has a significant

More information