Multi-sensory integration using sparse spatio-temporal encoding
Proceedings of the International Joint Conference on Neural Networks, Dallas, Texas, USA, August 4-9, 2013

A. Ravishankar Rao, Guillermo Cecchi

Abstract — The external world consists of objects that stimulate multiple sensory pathways simultaneously, such as the auditory, visual and tactile pathways. Our brains receive and process these sensory streams to arrive at a coherent internal representation of the world. Though much attention has been paid to these streams individually, their integration is comparatively less well understood. In this paper we propose the principle of sparse spatio-temporal encoding as a foundation on which to build a framework for multi-sensory integration. We derive the dynamics that govern a network of oscillatory units that achieves phase synchronization and is capable of binding related attributes of objects. We simulate objects that produce simultaneous visual and auditory input activations, and demonstrate that our system can bind features in both these sensory modalities. We examine the effect of varying a tuning function that governs the ability of the units to synchronize, and show that broadening this function reduces the ability of the network to disambiguate mixtures of objects. Thus, our model offers the potential to study brain disorders such as autism, which may arise from a disruption of synchronization.

I. INTRODUCTION

Objects in the natural world typically excite multiple senses simultaneously. Our sensory apparatus has evolved to explore different facets of external objects, spanning the senses of vision, hearing, touch, olfaction and taste. Our brains learn to integrate the information from these senses in order to create rich, multimodal representations of objects. The individual senses such as vision and hearing, and their early cortical processing pathways, have been studied in great detail.
However, comparatively less effort has been devoted to studying the integration of, and interaction between, these senses. One of the challenges is that the dimensionality of the problem increases considerably. Furthermore, there is no single cortical area where such interactions take place, and multiple divergent pathways need to be investigated. Examples of such integration areas include the superior colliculus, which receives both auditory and visual input [], [], the superior temporal sulcus [] and the pre-frontal cortex []. Imaging studies have also identified tri-sensory areas which respond to a combination of tactile, audio and visual inputs []. Obtaining detailed cortical recordings from such integration areas is challenging, as it involves training animals to perform specific behavioral tasks and recording from selected neurons []. In contrast, techniques such as optical imaging can capture activity spread over a larger cortical area, and have been applied to understand the localized processing of information in the visual cortex []. There are many neuroscientific [], theoretical and modeling issues that need to be examined when one considers multi-sensory integration. (The authors are at the IBM T.J. Watson Research Center, Yorktown Heights, NY, USA; ravirao@us.ibm.com and gcecchi@us.ibm.com.) Some of the modeling issues consist of understanding the right spatio-temporal abstractions of neural behavior, and developing a principled approach to exploring interactions between the units in the system. We build on an earlier model we developed based on sparse spatio-temporal encoding of sensory inputs [5], [6]. We tested this model on visual objects, showed its capability to bind visual features related to a single object, and demonstrated its ability to separate combinations of objects. We extend this model here by including an additional simulated auditory stream as an input, and investigate the interactions between the auditory and visual input streams.
Our results show that the same framework of sparse spatio-temporal encoding can be applied successfully to a combination of sensory streams. The value of creating a model for multisensory integration is that it allows us to explore higher-level issues in brain function. This becomes relevant when we consider the functioning of both normal brains and those that exhibit certain deficits. For instance, one of the disease models for autism involves an inability to achieve proper temporal binding of features from multiple sensory streams [12]. The architecture and model we propose in the current paper have the potential to investigate such issues in multi-sensory processing.

The remainder of this paper is organized as follows. In Section II we describe the computational foundation of our model. In Section III we present results that demonstrate the capability of our model to encode joint audio-visual stimuli arising from simulated objects, and to separate mixtures of these objects into their constituents. We examine the implications of our findings, and their relationship to the existing literature, in Section IV.

II. METHODS

In earlier work, we used the principle of sparse spatio-temporal encoding to derive the dynamics of a network for processing sensory information [5], [6]. We now extend this model, developed for a single sensory modality, to a combination of two sensory modalities. We describe the system with the aid of Fig. 1, which shows a two-layer system with two input streams, the audio and visual inputs. We name these streams visual and audio for the sake of concreteness, to illustrate the key concepts; our method should be applicable to other combinations of sensory inputs, such as tactile and visual, or even to tri-sensory inputs. Let x denote units in the lower visual layer, u denote units in the lower auditory layer, and y denote units in the upper association layer. The visual cortex is connected by a weight matrix W to the association cortex.
[Fig. 1. Connections from the visual and auditory cortices to a higher-level area, termed the association cortex. (A) shows feed-forward connections, (B) shows lateral connections and (C) shows feedback connections.]

The auditory cortex is connected by a weight matrix V to the association cortex. Each unit is considered to be an oscillator with an amplitude, frequency and phase of oscillation. If all the units have a similar nominal frequency, their behavior can be described in terms of phasors of the form $x_n e^{i\phi_n}$ for the visual cortex, $u_n e^{i\xi_n}$ for the auditory cortex and $y_n e^{i\theta_n}$ for the association layer. Here, $x_n$ and $u_n$ denote the amplitudes of units in the visual and auditory cortices, and $y_n$ denotes the amplitudes of units in the association cortex. Similarly, $\phi_n$ and $\xi_n$ are the phases of the $n$-th unit in the visual and auditory cortices, and $\theta_n$ refers to the phase of the $n$-th unit in the upper association layer. Equations (1)-(4) describe the instantaneous evolution of the system, starting from a set of initial conditions.

$$\Delta y_n = \sum_j W_{nj} x_j \left[1 + \cos(\phi_j - \theta_n)\right] + \sum_j V_{nj} u_j \left[1 + \cos(\xi_j - \theta_n)\right] - \alpha y_n - \gamma \sum_k y_k \left[1 + \cos(\theta_k - \theta_n)\right] \quad (1)$$

$$\Delta \theta_n = \sum_j W_{nj} x_j \sin(\phi_j - \theta_n) + \sum_j V_{nj} u_j \sin(\xi_j - \theta_n) - \gamma \sum_k y_k \sin(\theta_k - \theta_n) \quad (2)$$

$$\Delta \phi_n = \sum_j W_{jn} y_j \sin(\theta_j - \phi_n) \quad (3)$$

$$\Delta \xi_n = \sum_j V_{jn} y_j \sin(\theta_j - \xi_n) \quad (4)$$

In this paper, we assume the initial conditions for the lower layers consist of the pixel values of a 2-D visual image constituting a visual stimulus, and a 2-D auditory image consisting of a simultaneously presented auditory stimulus. We choose this representation to aid the interpretation of the system's function. In general, the two lower layers could represent any cortical areas. The initial values for the upper layer y can be set to zero. The values of the phases are randomized.
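As a concrete sketch, the coupled amplitude and phase dynamics above can be written as a single vectorized Euler step in Python. The step size `dt` and the constants `alpha` and `gamma` are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def step(x, u, y, phi, xi, theta, W, V, alpha=1.0, gamma=0.2, dt=0.05):
    """One Euler step of the two-layer oscillator dynamics, Eqs. (1)-(4).

    x, phi   : amplitudes / phases of lower-layer visual units
    u, xi    : amplitudes / phases of lower-layer auditory units
    y, theta : amplitudes / phases of upper-layer association units
    W, V     : weight matrices, shape (upper, visual) and (upper, auditory)
    """
    # Phase-difference tuning: an input in phase with the target unit
    # contributes up to twice its amplitude; an antiphase input, nothing.
    tune_v = 1.0 + np.cos(phi[None, :] - theta[:, None])    # upper x visual
    tune_a = 1.0 + np.cos(xi[None, :] - theta[:, None])     # upper x auditory
    tune_y = 1.0 + np.cos(theta[None, :] - theta[:, None])  # upper x upper

    # Eq. (1): amplitude dynamics of the association layer, with decay
    # (alpha) and phase-dependent lateral competition (gamma).
    dy = (W * tune_v) @ x + (V * tune_a) @ u - alpha * y - gamma * (tune_y @ y)

    # Eq. (2): phase dynamics of the association layer.
    dtheta = ((W * np.sin(phi[None, :] - theta[:, None])) @ x
              + (V * np.sin(xi[None, :] - theta[:, None])) @ u
              - gamma * np.sin(theta[None, :] - theta[:, None]) @ y)

    # Eqs. (3) and (4): feedback affects only the lower-layer phases.
    dphi = (W * np.sin(theta[:, None] - phi[None, :])).T @ y
    dxi = (V * np.sin(theta[:, None] - xi[None, :])).T @ y

    return y + dt * dy, phi + dt * dphi, xi + dt * dxi, theta + dt * dtheta
```

Iterating this step from randomized phases is what produces the transients and eventual settling described below.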
The update rules in Equations (1)-(4) are applied, upon which the system exhibits transients which then settle down after a fixed number of iterations (the settling time). The synaptic weights W are modified only after this settling period, as follows:

$$\Delta W_{ij} = y_i x_j \left[1 + \cos(\phi_j - \theta_i)\right] \quad (5)$$

A similar update rule is used for V:

$$\Delta V_{ij} = y_i u_j \left[1 + \cos(\xi_j - \theta_i)\right] \quad (6)$$

The network configuration consists of dynamical units arranged as follows: (a) lower layers, designated by u and x, which receive auditory and visual input respectively; the amplitude output of these units depends only on their inputs, whereas the phase is a function of their natural frequency and feedback interactions with the top layer; (b) a top layer, designated by y, which receives inputs from the bottom u and x layers via feed-forward connections; top-layer units determine their individual amplitude and phase dynamics by integrating the input amplitudes weighted by a function of relative phase differences; (c) the bottom layers receive feedback input from the top layer, which affects only the phase of the bottom layers' units. This behavior is described by Equations (1)-(4). In our simulation, the lower visual layer consists of a grid of units, each of which receives a visual intensity value as input. Similarly, the lower auditory layer consists of a grid of units, each of which receives an auditory intensity value as input.
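The Hebbian updates of Equations (5) and (6) strengthen a connection when the pre-synaptic lower-layer unit and the post-synaptic upper-layer unit are co-active and in phase. A minimal sketch follows; the learning rate `eta` and the row normalization are our own assumptions, since neither is specified above:

```python
import numpy as np

def hebbian_update(W, V, y, x, u, phi, xi, theta, eta=0.01):
    """Apply the Hebbian rules of Eqs. (5)-(6) once, after settling.

    Delta W_ij is proportional to y_i * x_j * [1 + cos(phi_j - theta_i)],
    and similarly for V with the auditory amplitudes u and phases xi.
    """
    W = W + eta * np.outer(y, x) * (1.0 + np.cos(phi[None, :] - theta[:, None]))
    V = V + eta * np.outer(y, u) * (1.0 + np.cos(xi[None, :] - theta[:, None]))
    # Normalizing each row keeps the winner-take-all competition bounded
    # (an assumption; the normalization scheme is not stated in the text).
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    return W, V
```

One learning trial then consists of presenting a paired stimulus, iterating the dynamics until settling, and calling this update once.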
[Fig. 2. The representation of four objects in the visual cortex. The objects differ in shape as well as gray level.]

[Fig. 3. A representation of objects in the auditory cortex: idealized tonotopic maps associated with the visual objects shown in Fig. 2.]

The upper layer y consists of a set of units. There are all-to-all connections between units in the lower layer x and the upper layer y, and similarly between u and y. Furthermore, the units in the upper layer possess all-to-all lateral connections. Finally, there are all-to-all feedback connections from y to x and from y to u. Learning leads to winner-take-all dynamics upon presentation of one of the learned inputs. We choose an input set consisting of simple visual objects such as a square, triangle, cross, circle and so on, as shown in Fig. 2. These visual objects are also associated with corresponding audio objects, as shown in Fig. 3. The interpretation here is that each object generates a paired visual and auditory input pattern. The auditory objects are idealized representations of tonotopic maps, where different frequencies are represented in an ordered spatial fashion [8].

[Fig. 4. When an object is presented as a stimulus, we initialize the lower layers to its visual and auditory representations, taken from Figs. 2 and 3 respectively.]

When an input is presented, we pair the auditory and visual representations and present the corresponding stimulus at the lower layers, as depicted in Fig. 4. The network operates in two stages, learning and performance. During the learning stage, a randomly selected object is presented as input, and the network activity is allowed to settle.
Following this, the Hebbian learning rules in Equations (5) and (6) are applied. The process is repeated over a large number of trials. The system typically shows winner-take-all behavior at the upper layer y for each input presented. Furthermore, after training, a unique winner is associated with each input. Note that the network training is done in an unsupervised fashion. As shown in our earlier work [5], [6], when two inputs are combined and presented to the lower layer x, two units, termed the winners, are activated in the upper layer y. These units are identified as the units with the highest and second-highest amplitudes respectively. Furthermore, the phases of the winners in layer y are synchronized with the phases of units in the lower layer x that correspond to the two individual inputs. As explained in [5], the interpretation of this behavior is that different units can be simultaneously active while having phases that are maximally apart from each other.

We define a measure termed the separation accuracy, which captures the ability of the network to correctly identify mixtures of inputs. Suppose unit i in the upper layer is the winner for an input x1, and unit j is the winner for an input x2. If units i and j are also the winners when the input presented is the mixture x1 + x2, then we say the separation is performed correctly; otherwise not. The ratio of the number of correctly separated cases to the total number of cases investigated is the separation accuracy. A related measure concerns the ability of the network to perform segmentation. The accuracy of phase segmentation is measured by computing the fraction of the units of the lower layer that correspond to a given object and are within some tolerance of the phase of the upper-layer unit that represents the same object.

III. RESULTS

Figures 5 and 6 show the behavior of the system when two of the objects are presented simultaneously.
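Before examining these results, the separation and segmentation measures defined above can be sketched in a few lines. The winner-selection convention (top-k by amplitude) follows the text; the phase tolerance `tol` is an illustrative choice:

```python
import numpy as np

def winners(y, k=2):
    """Indices of the k highest-amplitude upper-layer units."""
    return set(np.argsort(y)[-k:])

def separation_correct(y_mix, winner_a, winner_b):
    """A mixture is separated correctly when its two winners are exactly
    the winners of the two individual inputs."""
    return winners(y_mix, 2) == {winner_a, winner_b}

def segmentation_accuracy(phi, object_mask, theta_winner, tol=np.pi / 8):
    """Fraction of an object's lower-layer units whose phase lies within
    `tol` of the phase of that object's upper-layer winner."""
    # Wrap phase differences into (-pi, pi] before thresholding.
    diff = np.angle(np.exp(1j * (phi[object_mask] - theta_winner)))
    return float(np.mean(np.abs(diff) < tol))
```

The overall separation accuracy is then the mean of `separation_correct` over all mixture pairs tested.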
In Fig. 5 we examine the superposition of the visual aspects of these two objects, whereas in Fig. 6 we examine the superposition of the auditory cues associated with the same two objects. In Fig. 5, the third row shows that the two winners in the upper layer are approximately 180 degrees out of phase with each other. The phasors representing the winners have been color-coded in blue and red so that they can be compared against the phasors in the lower layer. We can readily observe that the lower-layer phasors in blue correspond to the first visual object and are synchronized with the upper-layer winner, also represented in blue. Similarly, the red phasors show that there is a close phase similarity between the components of the second visual object and the winner in the upper layer that represents this object. Furthermore, Fig. 6 shows that this congruence also extends to the auditory representations of the same two objects. Thus, there is a phase synchronization
between the units in the lower-layer auditory and visual maps corresponding to a given object and the upper-layer winner that represents the composite audio-visual object. Similarly, Figs. 7 and 8 show the ability of the network to separate a mixture of a second pair of objects, involving both the audio and visual representations of these objects.

A. Varying the tuning function for integration of phase information

From Equation (1) we note that the amplitudes of the upper layer, $y_n$, are a function of the phase differences between an input unit j in the lower layer and the target unit n in the upper layer, as follows:

$$\Delta y_n \propto 1 + \cos(\phi_j - \theta_n) \quad (7)$$

This is plotted in Fig. 9(A). We now vary this tuning function by making it both broader (Fig. 9(B)) and narrower (Fig. 9(C)) than the original, and examine the effect on the network's function. We use the concepts of separation accuracy and segmentation accuracy to quantify the desired network function. The tuning function can be characterized by a measure such as the full width at half maximum (FWHM). We vary the FWHM of the tuning function and measure its effect on network performance. The resulting relationship is shown in Figs. 10 and 11. The FWHM of the original tuning curve in Equation (7), as plotted in Fig. 9(A), is π ≈ 3.14 radians. When the tuning width is increased beyond this value, we observe from Fig. 10 that both the separation and segmentation accuracy decline, indicating poorer network performance. A similar decline is observed in Fig. 11 as the tuning width is decreased below it.

B. Varying the number of iterations for settling

It would appear that as the tuning function is made narrower, it would allow for a finer discrimination capacity between objects, as a unit in the network would be responsive only to other units with very similar phases. Since we did not see this effect directly in Fig. 11, we must examine other variables that affect the system dynamics.
One such variable is the number of iterations used for settling. Recall that the learning rules in Equations (5) and (6) are applied only after the network settles. We offer the intuition that a narrow tuning function should be combined with a longer settling time in order to improve network performance. Figures 12 and 13 show that as the number of iterations representing the settling time is increased, it is accompanied by an increase in the separation and segmentation accuracies. This demonstrates that there is a tradeoff between the width of the tuning function and the separation and segmentation accuracies.

IV. DISCUSSION

In Figs. 10 and 11 we examine the effect of varying the tuning function on the separation and segmentation accuracy. For the selected settling time, the best performance is achieved with the original tuning function of Equation (7).

[Fig. 5. The behavior of phase information in the visual stream: the superposition of two visual objects and their corresponding visual maps. The grayscale image of the superposed objects is normalized before being displayed. The two upper-layer winners have phases approximately 180 degrees apart. The activity in the lower-layer units of the visual map is displayed as a vector field: the magnitude of each vector reflects the amount of activity in the unit, and its direction encodes the phase of the unit.]
[Fig. 6. The behavior of phase information in the auditory stream: the superposition of the same two objects and the corresponding auditory maps. The phases of the lower layer in the auditory maps are shown in the bottom row.]

[Fig. 7. The superposition of a second pair of visual objects and the corresponding visual maps. The lower-layer units depicted are from the visual map.]
[Fig. 8. The superposition of the second pair of audio objects and the corresponding auditory maps. The lower-layer units depicted are from the auditory map.]

[Fig. 9. The different tuning functions investigated: (A) the original tuning function, defined by y = 1 + cos(x); (B) a broader tuning function, defined by y = tanh(10(1 + cos(x))); (C) a narrower tuning function, defined by y = (1 + cos(x))^2 / 2. The tuning function affects the integration of phase information from multiple inputs, and hence influences the performance of the overall network.]

When the tuning function is made broader, the performance of the network declines. This behavior is in agreement with findings in autistic subjects, as reported by Foss-Feig et al. [9]. In this study, it was shown that autistic subjects integrate multi-sensory cues over a longer binding window, which has been implicated as one of the mechanisms that may explain their behavior. The tuning function we have used in our model can be considered to represent a temporal binding window. Thus, our model directly demonstrates that a wider tuning function, or temporal binding window, adversely affects the ability of the oscillatory network to correctly identify combinations of audio-visual sensory inputs. Further effort is required to tailor our model to the specific experimental protocol reported in Foss-Feig et al. [9], and this is the subject of future research. We also show that narrowing the tuning function has a similar effect in reducing the network performance (Fig. 11).
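The FWHM characterization of these tuning functions can be checked numerically. For the original function, 1 + cos(x), the half maximum is reached at ±π/2, giving an FWHM of exactly π. The broader and narrower variants below are our reading of the definitions in Fig. 9 (the tanh gain of 10 and the squared form are assumptions):

```python
import numpy as np

def fwhm(f, xs=np.linspace(-np.pi, np.pi, 200001)):
    """Full width at half maximum of a tuning function on [-pi, pi],
    estimated on a dense grid."""
    ys = f(xs)
    half = ys.max() / 2.0
    above = xs[ys >= half]          # grid points at or above half maximum
    return above.max() - above.min()

# Original tuning function of Eq. (7): FWHM = pi.
original = lambda x: 1.0 + np.cos(x)
# Broader, saturating variant: flat near its peak, so a wider half-width.
broader = lambda x: np.tanh(10.0 * (1.0 + np.cos(x)))
# Narrower variant: squaring sharpens the peak (the /2 restores the maximum).
narrower = lambda x: (1.0 + np.cos(x)) ** 2 / 2.0
```

This ordering of widths is what the FWHM sweeps in Figs. 10 and 11 vary around.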
Nakamura [10] presents an overview of techniques for the processing of synchronously delivered multimodal signals, such as an audio-visual input stream. In his terminology, the method presented in our paper would be considered a form of early integration model, where the input signals are directly transmitted to a bi-modal classifier
[Fig. 10. Variation of the separation accuracy (A) and segmentation accuracy (B) as the full width at half maximum (FWHM) of the tuning function is increased from the value corresponding to Equation (7).]

[Fig. 11. Variation of the separation accuracy (A) and segmentation accuracy (B) as the FWHM of the tuning function is decreased from the value corresponding to Equation (7).]

[Fig. 12. Variation of the separation accuracy (A) and segmentation accuracy (B) as the number of settling iterations is increased, at a fixed FWHM of the tuning function.]

[Fig. 13. Variation of the separation accuracy (A) and segmentation accuracy (B) as the number of settling iterations is increased, at a fixed FWHM of the tuning function.]
(which is the upper layer in our network). However, one difference in our model is that we utilize feedback connections to modify the phases of the lower layers that receive the input. Such feedback is not present in the models described by Nakamura [10].

We briefly review studies in the field of neuroscience that identify brain regions where multi-modal, or cross-modal, information is integrated. Fuster et al. [3] showed that cells in the prefrontal cortex of monkeys are capable of associating visual and auditory stimuli over time. De Gelder and Bertelson [11] point out that different types of relationships can exist during multisensory integration. The pairs of individual stimuli that are used to evoke a multisensory response could be arbitrary or naturalistic. An arbitrary pairing is created specifically for an experiment, and could consist of a high-frequency tone paired with a rectangle or a low-frequency tone paired with a square. In the present paper we have chosen this route, so that we can first establish the viability of our model using synchronizing network elements. Future research will consist of using audio-visual stimuli occurring in natural environments. Iarocci and McDonald [7] review the relationship between sensory integration and autism. Brock et al. [12] suggest that brain development in autism is impaired by a lack of integration amongst brain areas that need to interact with each other to solve behavioral tasks. They propose that this impairment takes the form of a deficit in temporal binding. Though they did not propose a formal computational model for how this might happen, our earlier work [5], [6] provided a computational framework for achieving temporal binding. The research presented in the current paper examines the effect of changing one of the synchronization enablers, namely the tuning function, on the ability of the network to bind audio-visual features.
Other ways of affecting synchronization include the reduction of functional connectivity within brain networks [13]. Chou et al. [14] present a self-organizing-map-based approach to integrating audio and visual inputs. They mainly explore spatial organization issues, and not the temporal interactions we have presented. There is increasing interest in investigating the neural correlates of multi-sensory perception using techniques such as event-related potentials [15] and fMRI [16]. Our understanding of multi-sensory integration is still at an early stage. Many effects, such as the interaction of feedback pathways and the role of direct connections between primary sensory areas, are still being investigated. Further research is required to build computational models that capture both the spatial organization within multisensory cortical areas and the temporal interactions involved in binding features across sensing modalities. These computational models will need to be verified and validated against experimental findings in neuroscience.

V. CONCLUSION

In this paper we presented a computational model for multi-sensory integration of two input streams consisting of auditory and visual information. The dynamics of this model are derived from the principles of sparse spatio-temporal encoding. The model is capable of grouping, or binding, related object features in the two sensory streams through phase synchrony. Our model can also identify the components of audio-visual objects that have been combined or mixed. We investigated the performance of our model by varying the tuning function that governs phase synchronization, and showed that broader tuning functions disrupt the ability of the model units to effectively integrate multi-modal inputs. This behavior has the potential to serve as a foundation to explore deficits in brain function such as autism.

Acknowledgement: We appreciate helpful comments from the reviewers.

REFERENCES

[1] B. E. Stein and T. R. Stanford, "Multisensory integration: current issues from the perspective of the single neuron," Nature Reviews Neuroscience.
[2] M. Casey, A. Pavlou, and A. Timotheou, "Audio-visual localization with hierarchical topographic maps: Modeling the superior colliculus," Neurocomputing.
[3] J. M. Fuster, M. Bodner, J. K. Kroger et al., "Cross-modal and cross-temporal association in neurons of frontal cortex," Nature.
[4] Y. Xiao, R. Rao, G. Cecchi, and E. Kaplan, "Improved mapping of information distribution across the cortical surface with the support vector machine," Neural Networks.
[5] A. R. Rao, G. A. Cecchi, C. C. Peck, and J. R. Kozloski, "Unsupervised segmentation with dynamical units," IEEE Transactions on Neural Networks.
[6] A. Rao and G. Cecchi, "An objective function utilizing complex sparsity for efficient segmentation in multi-layer oscillatory networks," International Journal of Intelligent Computing and Cybernetics.
[7] G. Iarocci and J. McDonald, "Sensory integration and the perceptual experience of persons with autism," Journal of Autism and Developmental Disorders.
[8] E. Formisano, D. Kim, F. Di Salle, P. van de Moortele, K. Ugurbil, and R. Goebel, "Mirror-symmetric tonotopic maps in human primary auditory cortex," Neuron.
[9] J. H. Foss-Feig, L. D. Kwakye, C. J. Cascio, C. P. Burnette, H. Kadivar, W. L. Stone, and M. T. Wallace, "An extended multisensory temporal binding window in autism spectrum disorders," Experimental Brain Research.
[10] S. Nakamura, "Statistical multimodal integration for audio-visual speech processing," IEEE Transactions on Neural Networks.
[11] B. De Gelder and P. Bertelson, "Multisensory integration, perception and ecological validity," Trends in Cognitive Sciences.
[12] J. Brock, C. C. Brown, J. Boucher, G. Rippon et al., "The temporal binding deficit hypothesis of autism," Development and Psychopathology.
[13] P. J. Uhlhaas and W. Singer, "Neural synchrony in brain disorders: relevance for cognitive dysfunctions and pathophysiology," Neuron.
[14] S. M. Chou, A. P. Paplinski, and L. Gustafsson, "Speaker-dependent bimodal integration of Chinese phonemes and letters using multimodal self-organizing networks," in Proceedings of the International Joint Conference on Neural Networks (IJCNN), IEEE.
[15] P. Jing, T. Yin, and Y. Bo, "Phase synchrony measurement of ERP based on complex wavelet during visual-audio multisensory integration," in Proceedings of the International Conference on Industrial Control and Electronics Engineering (ICICEE), IEEE.
[16] K. O. Bushara, T. Hanakawa, I. Immisch, K. Toma, K. Kansaku, and M. Hallett, "Neural correlates of cross-modal binding," Nature Neuroscience.
More informationVision V Perceiving Movement
Vision V Perceiving Movement Overview of Topics Chapter 8 in Goldstein (chp. 9 in 7th ed.) Movement is tied up with all other aspects of vision (colour, depth, shape perception...) Differentiating self-motion
More informationInvariant Object Recognition in the Visual System with Novel Views of 3D Objects
LETTER Communicated by Marian Stewart-Bartlett Invariant Object Recognition in the Visual System with Novel Views of 3D Objects Simon M. Stringer simon.stringer@psy.ox.ac.uk Edmund T. Rolls Edmund.Rolls@psy.ox.ac.uk,
More informationVision V Perceiving Movement
Vision V Perceiving Movement Overview of Topics Chapter 8 in Goldstein (chp. 9 in 7th ed.) Movement is tied up with all other aspects of vision (colour, depth, shape perception...) Differentiating self-motion
More informationComparison of Haptic and Non-Speech Audio Feedback
Comparison of Haptic and Non-Speech Audio Feedback Cagatay Goncu 1 and Kim Marriott 1 Monash University, Mebourne, Australia, cagatay.goncu@monash.edu, kim.marriott@monash.edu Abstract. We report a usability
More informationCrossmodal Attention & Multisensory Integration: Implications for Multimodal Interface Design. In the Realm of the Senses
Crossmodal Attention & Multisensory Integration: Implications for Multimodal Interface Design Charles Spence Department of Experimental Psychology, Oxford University In the Realm of the Senses Wickens
More informationLecture 5. The Visual Cortex. Cortical Visual Processing
Lecture 5 The Visual Cortex Cortical Visual Processing 1 Lateral Geniculate Nucleus (LGN) LGN is located in the Thalamus There are two LGN on each (lateral) side of the brain. Optic nerve fibers from eye
More informationThe Basic Kak Neural Network with Complex Inputs
The Basic Kak Neural Network with Complex Inputs Pritam Rajagopal The Kak family of neural networks [3-6,2] is able to learn patterns quickly, and this speed of learning can be a decisive advantage over
More informationSupplementary Figures
Supplementary Figures Supplementary Figure 1. The schematic of the perceptron. Here m is the index of a pixel of an input pattern and can be defined from 1 to 320, j represents the number of the output
More informationModeling cortical maps with Topographica
Modeling cortical maps with Topographica James A. Bednar a, Yoonsuck Choe b, Judah De Paula a, Risto Miikkulainen a, Jefferson Provost a, and Tal Tversky a a Department of Computer Sciences, The University
More informationEncoding of Naturalistic Stimuli by Local Field Potential Spectra in Networks of Excitatory and Inhibitory Neurons
Encoding of Naturalistic Stimuli by Local Field Potential Spectra in Networks of Excitatory and Inhibitory Neurons Alberto Mazzoni 1, Stefano Panzeri 2,3,1, Nikos K. Logothetis 4,5 and Nicolas Brunel 1,6,7
More informationMECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL REALITY TECHNOLOGIES
INTERNATIONAL CONFERENCE ON ENGINEERING AND PRODUCT DESIGN EDUCATION 4 & 5 SEPTEMBER 2008, UNIVERSITAT POLITECNICA DE CATALUNYA, BARCELONA, SPAIN MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL
More informationVisual Rules. Why are they necessary?
Visual Rules Why are they necessary? Because the image on the retina has just two dimensions, a retinal image allows countless interpretations of a visual object in three dimensions. Underspecified Poverty
More informationTracking of Rapidly Time-Varying Sparse Underwater Acoustic Communication Channels
Tracking of Rapidly Time-Varying Sparse Underwater Acoustic Communication Channels Weichang Li WHOI Mail Stop 9, Woods Hole, MA 02543 phone: (508) 289-3680 fax: (508) 457-2194 email: wli@whoi.edu James
More informationSalient features make a search easy
Chapter General discussion This thesis examined various aspects of haptic search. It consisted of three parts. In the first part, the saliency of movability and compliance were investigated. In the second
More informationA comparative study of different feature sets for recognition of handwritten Arabic numerals using a Multi Layer Perceptron
Proc. National Conference on Recent Trends in Intelligent Computing (2006) 86-92 A comparative study of different feature sets for recognition of handwritten Arabic numerals using a Multi Layer Perceptron
More informationMaps in the Brain Introduction
Maps in the Brain Introduction 1 Overview A few words about Maps Cortical Maps: Development and (Re-)Structuring Auditory Maps Visual Maps Place Fields 2 What are Maps I Intuitive Definition: Maps are
More informationCOLOR IMAGE QUALITY EVALUATION USING GRAYSCALE METRICS IN CIELAB COLOR SPACE
COLOR IMAGE QUALITY EVALUATION USING GRAYSCALE METRICS IN CIELAB COLOR SPACE Renata Caminha C. Souza, Lisandro Lovisolo recaminha@gmail.com, lisandro@uerj.br PROSAICO (Processamento de Sinais, Aplicações
More informationColor Science. What light is. Measuring light. CS 4620 Lecture 15. Salient property is the spectral power distribution (SPD)
Color Science CS 4620 Lecture 15 1 2 What light is Measuring light Light is electromagnetic radiation Salient property is the spectral power distribution (SPD) [Lawrence Berkeley Lab / MicroWorlds] exists
More informationImage Simulator for One Dimensional Synthetic Aperture Microwave Radiometer
524 Progress In Electromagnetics Research Symposium 25, Hangzhou, China, August 22-26 Image Simulator for One Dimensional Synthetic Aperture Microwave Radiometer Qiong Wu, Hao Liu, and Ji Wu Center for
More informationIntroduction. Chapter Time-Varying Signals
Chapter 1 1.1 Time-Varying Signals Time-varying signals are commonly observed in the laboratory as well as many other applied settings. Consider, for example, the voltage level that is present at a specific
More informationA Three-Dimensional Evaluation of Body Representation Change of Human Upper Limb Focused on Sense of Ownership and Sense of Agency
A Three-Dimensional Evaluation of Body Representation Change of Human Upper Limb Focused on Sense of Ownership and Sense of Agency Shunsuke Hamasaki, Atsushi Yamashita and Hajime Asama Department of Precision
More informationEMERGENCE OF COMMUNICATION IN TEAMS OF EMBODIED AND SITUATED AGENTS
EMERGENCE OF COMMUNICATION IN TEAMS OF EMBODIED AND SITUATED AGENTS DAVIDE MAROCCO STEFANO NOLFI Institute of Cognitive Science and Technologies, CNR, Via San Martino della Battaglia 44, Rome, 00185, Italy
More informationA New Adaptive Channel Estimation for Frequency Selective Time Varying Fading OFDM Channels
A New Adaptive Channel Estimation for Frequency Selective Time Varying Fading OFDM Channels Wessam M. Afifi, Hassan M. Elkamchouchi Abstract In this paper a new algorithm for adaptive dynamic channel estimation
More informationDrum Transcription Based on Independent Subspace Analysis
Report for EE 391 Special Studies and Reports for Electrical Engineering Drum Transcription Based on Independent Subspace Analysis Yinyi Guo Center for Computer Research in Music and Acoustics, Stanford,
More informationSignals and Systems Lecture 9 Communication Systems Frequency-Division Multiplexing and Frequency Modulation (FM)
Signals and Systems Lecture 9 Communication Systems Frequency-Division Multiplexing and Frequency Modulation (FM) April 11, 2008 Today s Topics 1. Frequency-division multiplexing 2. Frequency modulation
More informationProceedings of Meetings on Acoustics
Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Psychological and Physiological Acoustics Session 1pPPb: Psychoacoustics
More informationPERCEIVING MOTION CHAPTER 8
Motion 1 Perception (PSY 4204) Christine L. Ruva, Ph.D. PERCEIVING MOTION CHAPTER 8 Overview of Questions Why do some animals freeze in place when they sense danger? How do films create movement from still
More informationEffects of Firing Synchrony on Signal Propagation in Layered Networks
Effects of Firing Synchrony on Signal Propagation in Layered Networks 141 Effects of Firing Synchrony on Signal Propagation in Layered Networks G. T. Kenyon,l E. E. Fetz,2 R. D. Puffl 1 Department of Physics
More informationTone-in-noise detection: Observed discrepancies in spectral integration. Nicolas Le Goff a) Technische Universiteit Eindhoven, P.O.
Tone-in-noise detection: Observed discrepancies in spectral integration Nicolas Le Goff a) Technische Universiteit Eindhoven, P.O. Box 513, NL-5600 MB Eindhoven, The Netherlands Armin Kohlrausch b) and
More informationCross-modal integration of auditory and visual apparent motion signals: not a robust process
Cross-modal integration of auditory and visual apparent motion signals: not a robust process D.Z. van Paesschen supervised by: M.J. van der Smagt M.H. Lamers Media Technology MSc program Leiden Institute
More informationECC419 IMAGE PROCESSING
ECC419 IMAGE PROCESSING INTRODUCTION Image Processing Image processing is a subclass of signal processing concerned specifically with pictures. Digital Image Processing, process digital images by means
More informationCN510: Principles and Methods of Cognitive and Neural Modeling. Neural Oscillations. Lecture 24
CN510: Principles and Methods of Cognitive and Neural Modeling Neural Oscillations Lecture 24 Instructor: Anatoli Gorchetchnikov Teaching Fellow: Rob Law It Is Much
More informationPaper Body Vibration Effects on Perceived Reality with Multi-modal Contents
ITE Trans. on MTA Vol. 2, No. 1, pp. 46-5 (214) Copyright 214 by ITE Transactions on Media Technology and Applications (MTA) Paper Body Vibration Effects on Perceived Reality with Multi-modal Contents
More informationSensation and Perception. Sensation. Sensory Receptors. Sensation. General Properties of Sensory Systems
Sensation and Perception Psychology I Sjukgymnastprogrammet May, 2012 Joel Kaplan, Ph.D. Dept of Clinical Neuroscience Karolinska Institute joel.kaplan@ki.se General Properties of Sensory Systems Sensation:
More information258 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS PART B: CYBERNETICS, VOL. 33, NO. 2, APRIL 2003
258 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS PART B: CYBERNETICS, VOL. 33, NO. 2, APRIL 2003 Genetic Design of Biologically Inspired Receptive Fields for Neural Pattern Recognition Claudio A.
More informationComplex-valued neural networks fertilize electronics
1 Complex-valued neural networks fertilize electronics The complex-valued neural networks are the networks that deal with complexvalued information by using complex-valued parameters and variables. They
More informationUNEQUAL POWER ALLOCATION FOR JPEG TRANSMISSION OVER MIMO SYSTEMS. Muhammad F. Sabir, Robert W. Heath Jr. and Alan C. Bovik
UNEQUAL POWER ALLOCATION FOR JPEG TRANSMISSION OVER MIMO SYSTEMS Muhammad F. Sabir, Robert W. Heath Jr. and Alan C. Bovik Department of Electrical and Computer Engineering, The University of Texas at Austin,
More informationTNS Journal Club: Efficient coding of natural sounds, Lewicki, Nature Neurosceince, 2002
TNS Journal Club: Efficient coding of natural sounds, Lewicki, Nature Neurosceince, 2002 Rich Turner (turner@gatsby.ucl.ac.uk) Gatsby Unit, 18/02/2005 Introduction The filters of the auditory system have
More informationNeuro-Fuzzy and Soft Computing: Fuzzy Sets. Chapter 1 of Neuro-Fuzzy and Soft Computing by Jang, Sun and Mizutani
Chapter 1 of Neuro-Fuzzy and Soft Computing by Jang, Sun and Mizutani Outline Introduction Soft Computing (SC) vs. Conventional Artificial Intelligence (AI) Neuro-Fuzzy (NF) and SC Characteristics 2 Introduction
More informationAn Auditory Localization and Coordinate Transform Chip
An Auditory Localization and Coordinate Transform Chip Timothy K. Horiuchi timmer@cns.caltech.edu Computation and Neural Systems Program California Institute of Technology Pasadena, CA 91125 Abstract The
More informationAutomatic Transcription of Monophonic Audio to MIDI
Automatic Transcription of Monophonic Audio to MIDI Jiří Vass 1 and Hadas Ofir 2 1 Czech Technical University in Prague, Faculty of Electrical Engineering Department of Measurement vassj@fel.cvut.cz 2
More informationNeural Coding of Multiple Stimulus Features in Auditory Cortex
Neural Coding of Multiple Stimulus Features in Auditory Cortex Jonathan Z. Simon Neuroscience and Cognitive Sciences Biology / Electrical & Computer Engineering University of Maryland, College Park Computational
More informationA CLOSER LOOK AT THE REPRESENTATION OF INTERAURAL DIFFERENCES IN A BINAURAL MODEL
9th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, -7 SEPTEMBER 7 A CLOSER LOOK AT THE REPRESENTATION OF INTERAURAL DIFFERENCES IN A BINAURAL MODEL PACS: PACS:. Pn Nicolas Le Goff ; Armin Kohlrausch ; Jeroen
More informationNeuronal correlates of pitch in the Inferior Colliculus
Neuronal correlates of pitch in the Inferior Colliculus Didier A. Depireux David J. Klein Jonathan Z. Simon Shihab A. Shamma Institute for Systems Research University of Maryland College Park, MD 20742-3311
More informationTHE USE OF ARTIFICIAL NEURAL NETWORKS IN THE ESTIMATION OF THE PERCEPTION OF SOUND BY THE HUMAN AUDITORY SYSTEM
INTERNATIONAL JOURNAL ON SMART SENSING AND INTELLIGENT SYSTEMS VOL. 8, NO. 3, SEPTEMBER 2015 THE USE OF ARTIFICIAL NEURAL NETWORKS IN THE ESTIMATION OF THE PERCEPTION OF SOUND BY THE HUMAN AUDITORY SYSTEM
More informationCSC475 Music Information Retrieval
CSC475 Music Information Retrieval Sinusoids and DSP notation George Tzanetakis University of Victoria 2014 G. Tzanetakis 1 / 38 Table of Contents I 1 Time and Frequency 2 Sinusoids and Phasors G. Tzanetakis
More informationThe Anne Boleyn Illusion is a six-fingered salute to sensory remapping
Loughborough University Institutional Repository The Anne Boleyn Illusion is a six-fingered salute to sensory remapping This item was submitted to Loughborough University's Institutional Repository by
More informationSmart antenna for doa using music and esprit
IOSR Journal of Electronics and Communication Engineering (IOSRJECE) ISSN : 2278-2834 Volume 1, Issue 1 (May-June 2012), PP 12-17 Smart antenna for doa using music and esprit SURAYA MUBEEN 1, DR.A.M.PRASAD
More informationVariable Step-Size LMS Adaptive Filters for CDMA Multiuser Detection
FACTA UNIVERSITATIS (NIŠ) SER.: ELEC. ENERG. vol. 7, April 4, -3 Variable Step-Size LMS Adaptive Filters for CDMA Multiuser Detection Karen Egiazarian, Pauli Kuosmanen, and Radu Ciprian Bilcu Abstract:
More informationIntroduction to Computational Neuroscience
Introduction to Computational Neuroscience Lecture 4: Data analysis I Lesson Title 1 Introduction 2 Structure and Function of the NS 3 Windows to the Brain 4 Data analysis 5 Data analysis II 6 Single neuron
More informationSimulating Biological Motion Perception Using a Recurrent Neural Network
Simulating Biological Motion Perception Using a Recurrent Neural Network Roxanne L. Canosa Department of Computer Science Rochester Institute of Technology Rochester, NY 14623 rlc@cs.rit.edu Abstract People
More informationCoding and computing with balanced spiking networks. Sophie Deneve Ecole Normale Supérieure, Paris
Coding and computing with balanced spiking networks Sophie Deneve Ecole Normale Supérieure, Paris Cortical spike trains are highly variable From Churchland et al, Nature neuroscience 2010 Cortical spike
More informationChapter 2 Channel Equalization
Chapter 2 Channel Equalization 2.1 Introduction In wireless communication systems signal experiences distortion due to fading [17]. As signal propagates, it follows multiple paths between transmitter and
More informationTouch Perception and Emotional Appraisal for a Virtual Agent
Touch Perception and Emotional Appraisal for a Virtual Agent Nhung Nguyen, Ipke Wachsmuth, Stefan Kopp Faculty of Technology University of Bielefeld 33594 Bielefeld Germany {nnguyen, ipke, skopp}@techfak.uni-bielefeld.de
More informationModulating motion-induced blindness with depth ordering and surface completion
Vision Research 42 (2002) 2731 2735 www.elsevier.com/locate/visres Modulating motion-induced blindness with depth ordering and surface completion Erich W. Graf *, Wendy J. Adams, Martin Lages Department
More informationSIMULATING RESTING CORTICAL BACKGROUND ACTIVITY WITH FILTERED NOISE. Journal of Integrative Neuroscience 7(3):
SIMULATING RESTING CORTICAL BACKGROUND ACTIVITY WITH FILTERED NOISE Journal of Integrative Neuroscience 7(3): 337-344. WALTER J FREEMAN Department of Molecular and Cell Biology, Donner 101 University of
More informationBeyond Blind Averaging Analyzing Event-Related Brain Dynamics
Beyond Blind Averaging Analyzing Event-Related Brain Dynamics Scott Makeig Swartz Center for Computational Neuroscience Institute for Neural Computation University of California San Diego La Jolla, CA
More informationPreeti Rao 2 nd CompMusicWorkshop, Istanbul 2012
Preeti Rao 2 nd CompMusicWorkshop, Istanbul 2012 o Music signal characteristics o Perceptual attributes and acoustic properties o Signal representations for pitch detection o STFT o Sinusoidal model o
More information15110 Principles of Computing, Carnegie Mellon University
1 Last Time Data Compression Information and redundancy Huffman Codes ALOHA Fixed Width: 0001 0110 1001 0011 0001 20 bits Huffman Code: 10 0000 010 0001 10 15 bits 2 Overview Human sensory systems and
More informationChapter 8: Perceiving Motion
Chapter 8: Perceiving Motion Motion perception occurs (a) when a stationary observer perceives moving stimuli, such as this couple crossing the street; and (b) when a moving observer, like this basketball
More informationNeural Processing of Amplitude-Modulated Sounds: Joris, Schreiner and Rees, Physiol. Rev. 2004
Neural Processing of Amplitude-Modulated Sounds: Joris, Schreiner and Rees, Physiol. Rev. 2004 Richard Turner (turner@gatsby.ucl.ac.uk) Gatsby Computational Neuroscience Unit, 02/03/2006 As neuroscientists
More informationOrientation-sensitivity to facial features explains the Thatcher illusion
Journal of Vision (2014) 14(12):9, 1 10 http://www.journalofvision.org/content/14/12/9 1 Orientation-sensitivity to facial features explains the Thatcher illusion Department of Psychology and York Neuroimaging
More informationDiscrimination of Virtual Haptic Textures Rendered with Different Update Rates
Discrimination of Virtual Haptic Textures Rendered with Different Update Rates Seungmoon Choi and Hong Z. Tan Haptic Interface Research Laboratory Purdue University 465 Northwestern Avenue West Lafayette,
More informationThe EarSpring Model for the Loudness Response in Unimpaired Human Hearing
The EarSpring Model for the Loudness Response in Unimpaired Human Hearing David McClain, Refined Audiometrics Laboratory, LLC December 2006 Abstract We describe a simple nonlinear differential equation
More informationEmbodiment illusions via multisensory integration
Embodiment illusions via multisensory integration COGS160: sensory systems and neural coding presenter: Pradeep Shenoy 1 The illusory hand Botvinnik, Science 2004 2 2 This hand is my hand An illusion of
More informationWeek 15. Mechanical Waves
Chapter 15 Week 15. Mechanical Waves 15.1 Lecture - Mechanical Waves In this lesson, we will study mechanical waves in the form of a standing wave on a vibrating string. Because it is the last week of
More informationThe Effect of Frequency Shifting on Audio-Tactile Conversion for Enriching Musical Experience
The Effect of Frequency Shifting on Audio-Tactile Conversion for Enriching Musical Experience Ryuta Okazaki 1,2, Hidenori Kuribayashi 3, Hiroyuki Kajimioto 1,4 1 The University of Electro-Communications,
More informationEnhanced MLP Input-Output Mapping for Degraded Pattern Recognition
Enhanced MLP Input-Output Mapping for Degraded Pattern Recognition Shigueo Nomura and José Ricardo Gonçalves Manzan Faculty of Electrical Engineering, Federal University of Uberlândia, Uberlândia, MG,
More informationFigure S3. Histogram of spike widths of recorded units.
Neuron, Volume 72 Supplemental Information Primary Motor Cortex Reports Efferent Control of Vibrissa Motion on Multiple Timescales Daniel N. Hill, John C. Curtis, Jeffrey D. Moore, and David Kleinfeld
More informationImage Processing by Bilateral Filtering Method
ABHIYANTRIKI An International Journal of Engineering & Technology (A Peer Reviewed & Indexed Journal) Vol. 3, No. 4 (April, 2016) http://www.aijet.in/ eissn: 2394-627X Image Processing by Bilateral Image
More informationPerformance Analysis of a 1-bit Feedback Beamforming Algorithm
Performance Analysis of a 1-bit Feedback Beamforming Algorithm Sherman Ng Mark Johnson Electrical Engineering and Computer Sciences University of California at Berkeley Technical Report No. UCB/EECS-2009-161
More informationEE 791 EEG-5 Measures of EEG Dynamic Properties
EE 791 EEG-5 Measures of EEG Dynamic Properties Computer analysis of EEG EEG scientists must be especially wary of mathematics in search of applications after all the number of ways to transform data is
More informationAn Adaptive Algorithm for Speech Source Separation in Overcomplete Cases Using Wavelet Packets
Proceedings of the th WSEAS International Conference on Signal Processing, Istanbul, Turkey, May 7-9, 6 (pp4-44) An Adaptive Algorithm for Speech Source Separation in Overcomplete Cases Using Wavelet Packets
More informationDual Mechanisms for Neural Binding and Segmentation
Dual Mechanisms for Neural inding and Segmentation Paul Sajda and Leif H. Finkel Department of ioengineering and Institute of Neurological Science University of Pennsylvania 220 South 33rd Street Philadelphia,
More informationLecture 13 Read: the two Eckhorn papers. (Don t worry about the math part of them).
Read: the two Eckhorn papers. (Don t worry about the math part of them). Last lecture we talked about the large and growing amount of interest in wave generation and propagation phenomena in the neocortex
More informationApplications of Music Processing
Lecture Music Processing Applications of Music Processing Christian Dittmar International Audio Laboratories Erlangen christian.dittmar@audiolabs-erlangen.de Singing Voice Detection Important pre-requisite
More informationSPLIT MLSE ADAPTIVE EQUALIZATION IN SEVERELY FADED RAYLEIGH MIMO CHANNELS
SPLIT MLSE ADAPTIVE EQUALIZATION IN SEVERELY FADED RAYLEIGH MIMO CHANNELS RASHMI SABNUAM GUPTA 1 & KANDARPA KUMAR SARMA 2 1 Department of Electronics and Communication Engineering, Tezpur University-784028,
More informationDigital image processing vs. computer vision Higher-level anchoring
Digital image processing vs. computer vision Higher-level anchoring Václav Hlaváč Czech Technical University in Prague Faculty of Electrical Engineering, Department of Cybernetics Center for Machine Perception
More informationEvolving High-Dimensional, Adaptive Camera-Based Speed Sensors
In: M.H. Hamza (ed.), Proceedings of the 21st IASTED Conference on Applied Informatics, pp. 1278-128. Held February, 1-1, 2, Insbruck, Austria Evolving High-Dimensional, Adaptive Camera-Based Speed Sensors
More informationChapter 73. Two-Stroke Apparent Motion. George Mather
Chapter 73 Two-Stroke Apparent Motion George Mather The Effect One hundred years ago, the Gestalt psychologist Max Wertheimer published the first detailed study of the apparent visual movement seen when
More informationLab/Project Error Control Coding using LDPC Codes and HARQ
Linköping University Campus Norrköping Department of Science and Technology Erik Bergfeldt TNE066 Telecommunications Lab/Project Error Control Coding using LDPC Codes and HARQ Error control coding is an
More information40 Hz Event Related Auditory Potential
40 Hz Event Related Auditory Potential Ivana Andjelkovic Advanced Biophysics Lab Class, 2012 Abstract Main focus of this paper is an EEG experiment on observing frequency of event related auditory potential
More informationForce versus Frequency Figure 1.
An important trend in the audio industry is a new class of devices that produce tactile sound. The term tactile sound appears to be a contradiction of terms, in that our concept of sound relates to information
More informationImplicit Fitness Functions for Evolving a Drawing Robot
Implicit Fitness Functions for Evolving a Drawing Robot Jon Bird, Phil Husbands, Martin Perris, Bill Bigge and Paul Brown Centre for Computational Neuroscience and Robotics University of Sussex, Brighton,
More informationChaotic-Based Processor for Communication and Multimedia Applications Fei Li
Chaotic-Based Processor for Communication and Multimedia Applications Fei Li 09212020027@fudan.edu.cn Chaos is a phenomenon that attracted much attention in the past ten years. In this paper, we analyze
More information