Methods

Experimental Stimuli: We selected 24 animal, 24 tool, and 24 nonmanipulable object concepts following the criteria described in a previous study. For each item, a grayscale photograph (400 x 400 pixels) was selected, and the stimulus was recorded as a spoken word (22.050 kHz, 16 bit, female native Italian speaker). Auditory stimuli were presented binaurally to participants in the scanner. The three stimulus types were matched on word length in Italian (mean length: animals, 7.0 letters; tools, 7.6; nonmanipulable, 7.8; one-way ANOVA: F(2,69) < 1). Stimuli (both auditory words and photographs) were presented with custom software (ASF, A Simple Framework) written in Matlab using the Psychophysics Toolbox extensions (Brainard, 1997; Pelli, 1997). The stimulus presentation software is available on request from J. Schwarzbach.

Localizer task: A mini-block design was used, in which all 24 photographs from one semantic category were presented within 20 seconds (each stimulus was presented for 50 refreshes of the monitor, refresh rate = 60 Hz, ISI = 0). Mini blocks were separated by 20 seconds of fixation. Each mini block was repeated three times throughout the run. The order of items within a block was random, as was the order of blocks. The run lasted approximately 10 minutes. A fourth category of objects (fruits/vegetables) was also included in the picture viewing experiment (data not shown). Participants viewed the stimuli through a mirror attached to the head coil, adjusted to allow foveal viewing of a back-projection monitor.
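The mini-block timing is fully determined by the refresh-based presentation parameters. As a quick arithmetic check (a sketch with our own variable names, not the authors' Matlab code), 24 stimuli shown for 50 refreshes each on a 60 Hz monitor with ISI = 0 fill a 20-second block exactly:

```python
# Timing check for the localizer mini-block design (values from the text).
REFRESH_RATE_HZ = 60        # monitor refresh rate
FRAMES_PER_STIMULUS = 50    # each photograph shown for 50 refreshes
STIMULI_PER_BLOCK = 24      # all 24 photographs of one category

stimulus_duration_s = FRAMES_PER_STIMULUS / REFRESH_RATE_HZ           # ~0.833 s
block_duration_s = STIMULI_PER_BLOCK * FRAMES_PER_STIMULUS / REFRESH_RATE_HZ

print(round(stimulus_duration_s, 3), block_duration_s)  # 0.833 20.0
```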

Size judgment task with auditorily presented words: The details of the design for the auditory size judgment task are presented in the main text. Both blind and sighted participants were asked to keep their eyes closed throughout the experiment. The ISIs for the six items within a mini block were drawn randomly from the set {.5X, .75X, .9X, 1.1X, 1.25X, 1.5X}, where X is the duration of the entire block (20 seconds) minus the total duration of all auditory wave files in the block, divided by 6.

Participants: Twenty-seven participants (21 sighted, 12 female; 3 congenitally blind, 2 female; 3 late blind, 1 female) were recruited from the Center for Mind/Brain Sciences volunteer pool and paid for their participation in the study. The auditory-task data for one sighted participant were excluded because that participant failed to respond properly; the participant's data from the localizer task were retained. Both the auditory size judgment and picture viewing datasets of another sighted participant were excluded due to excessive head motion. This left 20 datasets from sighted participants for the localizer experiment, and 7 for the auditory size judgment task. The 13 participants who completed the localizer task but not the auditory size judgment task took part in a different experiment using auditory presentation of the same materials. Handedness was assessed with the Edinburgh inventory (Oldfield, 1971). All sighted participants who performed the auditory size judgment task were right-handed; 2 of the 13 remaining sighted participants who completed the picture viewing experiment were left-handed (all others were right-handed). Two of the three

congenitally blind participants (CB1 and CB3) were right-handed; CB2 was ambidextrous. All late blind participants were right-handed. Sighted participants (mean age: 31.2 yrs, standard deviation: 9.5 yrs, range: 20-51 yrs) had normal or corrected-to-normal vision (corrected using MR-compatible lenses). Participant CB1 (female, aged 60 yrs at testing) was blind due to retinitis pigmentosa, CB2 (male, 20 yrs) due to congenital glaucoma, and CB3 (female, 31 yrs) due to complete retinal damage at birth. Two of the three late blind individuals were blind due to adult-onset retinitis pigmentosa; the third was blind due to glaucoma in childhood (this participant used prosthetic eyes, which were removed during MR scanning). Ages at testing for the late blind participants were: LB1, 46 yrs; LB2, 42 yrs; LB3, 48 yrs. All participants were examined by a medical doctor (GB) prior to participation in the study.

MR data acquisition and analysis: MR data were collected at the Center for Mind/Brain Sciences, University of Trento, on a Bruker BioSpin MedSpec 4T. Before the functional data were collected, a high-resolution (1 x 1 x 1 mm) T1-weighted 3D MPRAGE anatomical sequence was acquired (sagittal slice orientation, centric phase encoding, image matrix = 256 x 224 (Read x Phase), FoV = 256 mm x 224 mm (Read x Phase), 176 partitions with 1 mm thickness, GRAPPA acquisition with acceleration factor = 2, duration = 5.36 minutes, TR = 2700 ms, TE = 4.18 ms, TI = 1020 ms, flip angle = 7°). Functional data were collected using a 2D echo planar imaging sequence with phase oversampling (image matrix: 70 x 64, TR = 2250 ms, TE = 33 ms, flip angle = 76°, slice thickness = 3 mm, gap = .45 mm, 3 x 3 mm in-plane resolution). Volumes

were acquired in the axial plane in 37 slices. Slice acquisition order was ascending interleaved (odd-even). All MR data were analyzed using BrainVoyager (v. 1.9). The first two volumes of functional data from each run were discarded prior to analysis. Preprocessing of the functional data included, in the following order: slice-time correction (sinc interpolation), motion correction with respect to the first remaining volume in the run, and linear trend removal in the temporal domain (cutoff: 3 cycles within the run). The functional data were then registered (after contrast inversion of the first remaining volume) to the high-resolution, skull-stripped anatomy on a participant-by-participant basis in native space. For each participant, echo planar and anatomical volumes were transformed into standardized (Talairach and Tournoux, 1988) space. A Gaussian spatial filter with a 4.5 mm full width at half maximum was applied to each volume. All functional data were analyzed using the general linear model in BrainVoyager. Experimental events (duration = 20 seconds) in the picture viewing experiment were convolved with a standard dual-gamma hemodynamic response function. There were 4 regressors of interest (corresponding to the four stimulus types) and 6 regressors of no interest, corresponding to the motion parameters obtained during preprocessing. For the analyses of the auditory size judgment task, a finite impulse response model (modeling 6 TRs) was used, with regressors for all stimulus events, the auditory response cue, and the outputs of motion correction. A random effects analysis was used to analyze the group data in the picture viewing experiment (n = 20). Fixed effects analyses with separate study (i.e., run) predictors

were used to analyze the data from the sighted participants performing auditory size judgments (n = 7), and from the late blind (n = 3) and congenitally blind (n = 3) participants. Beta estimates were standardized (z-scored) with respect to the entire time course.
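To illustrate the picture-viewing GLM described above, the sketch below builds a single predictor by convolving a 20-second event boxcar with a canonical dual-gamma HRF. This is a generic pure-Python sketch using standard SPM-style gamma parameters; BrainVoyager's internal implementation and parameter values may differ, and the function names are ours:

```python
import math

def dual_gamma_hrf(t, a1=6.0, a2=16.0, b1=1.0, b2=1.0, c=1 / 6):
    """Canonical dual-gamma HRF (SPM-style parameters), evaluated at t seconds."""
    if t <= 0:
        return 0.0
    peak = (t ** (a1 - 1) * b1 ** a1 * math.exp(-b1 * t)) / math.gamma(a1)
    undershoot = (t ** (a2 - 1) * b2 ** a2 * math.exp(-b2 * t)) / math.gamma(a2)
    return peak - c * undershoot

def convolve_boxcar(onsets_s, duration_s, tr_s, n_vols):
    """Build one GLM regressor: event boxcars convolved with the HRF at the TR grid."""
    box = [0.0] * n_vols
    for onset in onsets_s:
        for v in range(n_vols):
            t = v * tr_s
            if onset <= t < onset + duration_s:
                box[v] = 1.0
    hrf = [dual_gamma_hrf(k * tr_s) for k in range(n_vols)]
    # discrete (truncated) convolution of boxcar and HRF
    return [sum(box[v - k] * hrf[k] for k in range(v + 1)) for v in range(n_vols)]

# One 20 s picture block starting at t = 20 s, TR = 2.25 s, 60 volumes:
regressor = convolve_boxcar([20.0], 20.0, 2.25, 60)
```

Each regressor of interest in the picture viewing experiment corresponds to one such convolved predictor per stimulus category; the six motion parameters enter the model unconvolved.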

Figure Captions for Supplementary Figures

Supplementary Figure S1. The ROI analyses reported in Figure 2 of the manuscript were also run for each individual participant who performed the auditory size judgment task. The graphs plot the difference in betas for the contrasts of (gray) tools > nonmanipulable, (red) tools > animals, and (blue) tools > non-tools (animals + nonmanipulable). The ROIs for these analyses are those described in Figure 1. Data for individual blind participants are shown in panel A, and data for individual sighted participants in panel B. These results demonstrate that the pattern of results for late and congenitally blind participants summarized by the fixed effects ROI analyses reported in Figure 2 is not carried by individual participants.

Supplementary Figure S2. Contrast maps were defined for each participant individually for the contrast of tools > non-tools (animals + nonmanipulable), thresholded for all participants at p < .05, FDR-corrected for the entire brain volume. The single-subject maps were then overlaid, and the probability of observing a significant effect (at the above threshold) is plotted for each group of participants (panel A: sighted participants performing auditory size judgments; panel B: late blind; panel C: congenitally blind). This analysis demonstrates that the effects reported in Figures 2 and 3 of the manuscript for the auditory size judgment task, which were run with fixed effects analyses, were not carried by a single blind participant (for either the late or congenitally blind group).
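The voxelwise overlap computation behind Supplementary Figure S2 can be sketched as follows. This is an illustrative reimplementation, not the original analysis code: in the actual analysis, the binary maps come from single-subject contrast maps thresholded at p < .05, FDR-corrected.

```python
def percent_overlap(subject_maps):
    """Voxelwise percentage of subjects showing a significant effect.

    subject_maps: one sequence per subject, each holding 0/1 per voxel
    (1 = significant at the chosen threshold for that subject).
    """
    n_subjects = len(subject_maps)
    n_voxels = len(subject_maps[0])
    return [100.0 * sum(m[v] for m in subject_maps) / n_subjects
            for v in range(n_voxels)]

# Toy example: three subjects, four voxels.
maps = [[1, 1, 0, 0],
        [1, 0, 1, 0],
        [1, 0, 0, 0]]
overlap = percent_overlap(maps)  # 100% at voxel 0, ~33% at voxels 1-2, 0% at voxel 3
```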

[Supplementary Figure S1: ROI analysis by subject for all subjects who completed the auditory size judgment task. (A) Congenitally blind (CB1-CB3) and late blind (LB1-LB3) participants; (B) sighted participants (S1-S7). For each participant, differences in betas are plotted for three ROIs (left anterior IPS, left inferior parietal lobule, left posterior superior parietal lobule) and three contrasts (tool > nonmanipulable; tool > animal; tool > animal + nonmanipulable), with significance coded from .1 > p > .05 through p < .00001.]

[Supplementary Figure S2: Axial slices showing overlap in the auditory task for the contrast of tools > (animals + nonmanipulable), with the ROI from sighted participants viewing pictures overlaid on each panel. (A) Overlap among sighted participants; (B) overlap among late blind participants; (C) overlap among congenitally blind participants. Color scales indicate percent overlap, up to 100%.]