5. Methods


5.1 Eye movement recording techniques in general

Several methods have been described in the literature for the recording of eye movements. In general, the following techniques can be distinguished:
- direct viewing
- motion picture recording
- photo-electric viewing
- devices using reflection
- electro-oculography
- electromagnetic search coil
- photo-electric oculography (detailed in 5.2)

The different methods will now be explained in some detail.

Direct viewing and motion picture recording
Movements of the eyes can be measured with a travelling microscope, using blood vessels as reference marks. If the microscope is fitted with a cine camera, permanent recordings of eye movements can be made. With the development of fast cine film it became possible to photograph the whole eye at high speed and thus record rotations about the three axes simultaneously. The time resolution of this method is limited by the speed of the camera and the emulsion, and can hardly be better than 10 msec. The eye positions have to be measured from the film and the corresponding directions of the visual axes have to be calculated. This may be very time consuming. An advantage of this method is that binocular vision can be maintained (no dissociation is used).

Photo-electric viewing
A variety of techniques involve the projection of light on the eye and a photosensitive device that responds to the light reflected from the eye. For instance, a spot of light can be projected on the limbus, with a nearby photoresistor arranged to pick up the scattered light (Blakemore and Carpenter, in Carpenter, 1977). If the reflected light is not all received by the transducer, the quantity of light received by the transducer is more or less proportional to the area of sclera lying under the spot of light. This device can behave linearly over a range of some 10°, with a time resolution of 10 msec.
An extension of this method was introduced by Nykiel and Torok (Carpenter, 1977); the eye is diffusely illuminated with infrared and viewed by four photodetectors disposed symmetrically round the orbit. The whole assembly can be mounted on goggles worn by the subject; this eliminates problems related to head movements. This method is an early version of the infrared method used in our study, see section 5.2.

Devices using reflection
Light reflected at discontinuities in refractive index can also be used. Four reflected images can be formed when the light of a bright spot traverses the optical surfaces of the eye. These images are called Purkinje images. The first image is formed by the front surface of the cornea; the back of the cornea and the front and back surface of the lens form the second, third and fourth Purkinje images. The centre of rotation of the eye is not identical to the centre of curvature of the cornea; therefore a device that observes the first Purkinje image can measure eye movements. It is about half as sensitive as one that, for instance, looks at the border between the sclera and the iris. A method using the fourth Purkinje image was devised by Cornsweet and Crane (Carpenter, 1977). They use the first and the fourth Purkinje image, because these move in relation to one another on rotational movements of the eye. The marked curvature of the cornea makes corneal reflection unduly sensitive to small translations. A plane reflecting device, for instance a mirror mounted on a contact lens, can reduce this sensitivity.

Electro-oculography
Queré (1981) used electro-oculography (EOG) to study vergence movements. In EOG, movements of the eye are recorded indirectly by periorbital electrodes. The cornea is approximately 1 mV positive with respect to the retina, a situation that creates an electrostatic field that moves with the eye movement. The range of measurement is 1° to 40° with a resolution of 1°. Frequent calibration is essential because of nonlinearity and drift. For quantitative studies direct-current oculography is required, but it is very difficult to solve problems of baseline drift. For electronystagmography, alternating-current coupled EOG is good enough, but stable eye position cannot be recorded in that case.

Electromagnetic search coil
Robinson (1963) described the scleral search coil in a magnetic field.
When the subject is exposed to an alternating magnetic field, eye position may be accurately recorded from the voltage generated in an 8-shaped coil of wire embedded in a scleral contact lens worn by the subject. Horizontal, vertical and torsional eye movements can be measured. A resolution of 15 seconds of arc and a linearity of about 2% of full scale is claimed. This is the most accurate and most versatile method available, but it is not a contact-free technique, which makes it of limited utility when large groups of young people have to be screened. The contact lens can cause discomfort, certainly when the recording procedure takes several hours, as is the case in the experiments we describe below.

5.2 Selected method

Our method of choice had to be contact-free and suitable for recording eye movements in young children. The infrared photo-electric method can be used for a long time without any discomfort for the subject. This was important because the experiment lasted at least one hour. Infrared photo-electric oculography ("infrared reflection") is a method based on the

principle of reflection of infrared light by the sharp boundary between iris and sclera, the limbus. A set of infrared light-emitting diodes and a set of infrared-sensitive detectors are mounted on the head in front of each eye so that the receptive fields match the iris-sclera transition, both on the nasal and on the temporal side. Upon horizontal rotation of the eye, for example in the case of abduction, the nasally positioned detector will measure an increased scleral infrared reflection, while the temporally placed detector measures a decreased iris reflection. Subtraction of the nasal and temporal detector signals gives a measure of eye position with respect to head position. Various reflection systems have been described, with several serious drawbacks such as a limited linear range, complicated and time-consuming installation and calibration procedures, and poor mechanical stability of the transducer with respect to the eye. In our experiments, the horizontal eye movements were recorded with the IRIS system (Reulen et al., 1988).

figure 5.1 Schematic of the IRIS system.

We summarize the following description of the IRIS system from Reulen et al.'s paper (1988). The infrared light transducer consists of an array of nine infrared light-emitting diodes (LEDs, type Siemens LD 269) and nine phototransistors (type Siemens BPX 69). Maximum infrared-light emission of the LEDs is at a wavelength of 950 nm, and the maximum sensitivity of the detectors lies at a wavelength of 850 nm, so there is a reasonable overlap. The oscillator generates a square-wave signal with a frequency of 2.5 kHz. This wave drives the current source, feeding a 50 mA current through all nine infrared light-emitting diodes, which are mounted in parallel in the circuit. To measure horizontal

eye position, the signals picked up by the most laterally located infrared-light detectors (nasally the numbers 1+2 and temporally 8+9) are summed pairwise and then the sums are subtracted. The resulting signal (marked S1 in figure 5.1) is a 2.5 kHz chopped signal, modulated by the amplitude of the eye-movement signal. This signal passes a bandpass filter (centre frequency 2.5 kHz, roll-off 12 dB/octave), and the resulting signal S2 is then multiplied by the square-wave signal to produce S3 (figure 5.1). Signal S3 passes a low-pass filter (DC-100 Hz; -3 dB) and is amplified, thereby demodulating the eye-movement amplitude signal. Using a Labmaster TM40 analogue/digital converter, the amplitude signals were read into an Olivetti M24-SP personal computer. The stimulus signal, the position of each of the two eyes and the signal for the difference in eye position were stored. The sample frequency was 200 Hz for each channel. The vergence signal is derived from the change in the relative position of the two eyes. For the data acquisition and data analysis, software was used that had been developed by the Neuro-ophthalmological System Group (Dr. G.H. van der Heijde, department of Medical Physics of the University of Amsterdam) and the technical section of the department of Ophthalmology at the University of Groningen. With the infrared device, horizontal eye movements can be recorded linearly up to excursions of 30° (Reulen et al., 1988). Sources of artifact in infrared recording of eye movements were investigated by Truong and Feldon (1987). They used an infrared device with two photodiodes (Texas Instruments) and found a range of linearity from -15° (left) to +15° (right) of the primary position. Non-linearity was found when the photodiodes were displaced horizontally. Displacement of the diodes towards the pupil caused artifacts because of reflection from the pupil and the contralateral iris.
These artifacts became more obvious with increasing eccentricity of the eyes from the primary position. This could be confirmed in our own experiments (Koopmans, 1988).
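The chop-and-demodulate chain of the IRIS system (square-wave excitation at 2.5 kHz, multiplication by the reference, low-pass filtering) is a standard synchronous-detection scheme. The following sketch illustrates the principle only: the signal names and sampling rate are hypothetical, a simple moving average stands in for the analogue DC-100 Hz filter, and the bandpass stage is omitted.

```python
import numpy as np

def synchronous_demodulate(position, fs=100_000, f_chop=2500, f_cut=100):
    """Recover a slow eye-position signal from a chopped (amplitude-
    modulated) detector signal, as in a synchronous (lock-in) detector.
    fs, f_chop and f_cut are illustrative assumptions."""
    t = np.arange(len(position)) / fs
    # 2.5 kHz square-wave reference (strictly +/-1, like the oscillator output)
    reference = np.where(np.sin(2 * np.pi * f_chop * t) >= 0, 1.0, -1.0)
    chopped = position * reference        # S1: chopped, modulated signal
    s3 = chopped * reference              # multiply by the reference again
    # crude moving-average low-pass standing in for the DC-100 Hz filter
    win = int(fs / f_cut)
    kernel = np.ones(win) / win
    return np.convolve(s3, kernel, mode="same")

# usage: a slow 2 Hz "eye movement" survives chopping and demodulation
fs = 100_000
t = np.arange(fs) / fs
eye = np.sin(2 * np.pi * 2 * t)
recovered = synchronous_demodulate(eye, fs=fs)
```

Because the product of the reference with itself is unity, multiplication shifts the eye-movement signal back to baseband, after which the low-pass filter removes the chopping residue.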

5.3 Experiment configuration

figure 5.2 Experiment configuration.
A L and A R: transducers and polaroid filters (horizontal and vertical; inserted in the transducers)
B: polaroid filters (horizontal and vertical)
C: two pairs of mirrors, one of each pair mounted on a galvanometer
D: projectors
E: galvanometer driver
F: projection screen
G: subject

G is the subject, with the transducers in front of the eyes. In each transducer a polaroid filter was inserted (A L and A R, respectively). The subject had to fixate binocularly on the diagram projected on the screen (F), with polaroid filters in front of each projector. During calibration the polaroid filters B were removed. A step change of disparity could be given by a sudden displacement of one of the mirrors (C), mounted on a galvanometer driven by a galvanometer driver (E). If the mirror corresponding to the right projector moved, the image for the right eye was displaced (corresponding polarisation of A and B). A similar mechanism was used for the left eye. The step change could be given in a convergent and in a divergent direction, and back to the original position. In this thesis, returning to the original position from convergence is called convergence relaxation; divergence relaxation means returning to the original position from divergence.

5.4 Target Presentation

The subject was comfortably seated with his head on a chin rest. After the calibration procedure, crossed polaroid filters were added on the projectors. Two

crossed-polariser slide projectors were used and image displacement was induced by rotating a deflecting mirror. Two identical contrast-rich images (figure 5.3) were presented at a distance of 2.86 m. This image was used earlier by Crone (1975). The subjects were instructed to focus on the tip of the nose. The experiment was done in a dark room. The subjects were instructed not to move their heads; they were wearing their own glasses.

figure 5.3 Contrast-rich image (Crone, 1975).

A complex image was chosen because simple images such as dots or lines may not induce motor responses when this response has to be based on peripheral fusion. The diameter of the projected image was 33 cm. At a distance of 2.86 m this is seen under an angle of 6.6°.

5.5 Calibration

5.5.1 Calibration and data acquisition

After establishing an optimal position for the transducers of the left and the right eye, the subject had to fixate the image. The unpolarized image was moved in the form of a square wave over a distance of 25 cm on the screen. The distance between the subject and the screen was 2.86 m. This induces eye excursions of 5° (arctan 0.25/2.86). About 12 square-wave stimuli were given with a frequency of 0.25 Hz (T = 4 sec). The positions of both eyes were recorded, see figure 5.4.

figure 5.4 Calibration signal with artifact. Signals of the right and the left eye during the calibration period. A blink of the eyes produces the artifact indicated. (Such an artifact has to be removed before the calibration factors can be calculated.)

5.5.2 Calculation of the calibration signal

Artifacts such as blinks were removed. To exclude an artifact, at least one period, but possibly more periods, had to be removed (figure 5.5).

figure 5.5 Calibration signal after removing the artifact.

After this, an X/Y plot could be made with the signal derived from the right eye on the Y-axis and the signal derived from the left eye's excursions on the X-axis. With least-squares analysis, the relation between the signals from the right eye and the left eye could then be calculated automatically (figure 5.6):

y = mx + n, where m = slope and n = offset.

m should be close to one and is the ratio of the amplifications of the two channels.

n represents the correction to the signal from the right eye necessary for obtaining the same reading for both eyes when they fixate the same spot.

figure 5.6 An example of an X/Y plot of the calibration signal. Y-axis: right eye; X-axis: left eye.

After this procedure, the calibration signal is drawn again, now with the signals for the right and left eye matched (figure 5.7). It is known that excursions of 5° are made. Therefore, an absolute calibration can be made in degrees per volt by reading the values when the cursor is positioned on both the maximum and minimum "equal" levels.
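The matching and scaling steps can be sketched as follows; the variable names and synthetic data are hypothetical, and numpy's polyfit merely stands in for the least-squares routine used at the time (cf. y = mx + n above). The known 5° excursion then yields the degrees-per-volt factor.

```python
import numpy as np

def match_eyes(left_v, right_v):
    """Least-squares fit right = m*left + n; m is the gain ratio of the
    two channels, n the offset correction."""
    m, n = np.polyfit(left_v, right_v, 1)
    matched_right = (right_v - n) / m   # express right eye in left-eye units
    return m, n, matched_right

def degrees_per_volt(signal_v, excursion_deg=5.0):
    """Absolute calibration from the known square-wave excursion:
    scale by the peak-to-peak ('equal' plateau) levels."""
    peak_to_peak = signal_v.max() - signal_v.min()
    return excursion_deg / peak_to_peak

# synthetic calibration: left channel 1 V per 5 deg, right channel with
# gain 1.2 and a 0.3 V offset (all values invented for illustration)
t = np.arange(0, 16, 0.005)                      # 200 Hz, 4 periods of T = 4 s
left = 0.5 * np.sign(np.sin(2 * np.pi * 0.25 * t))
right = 1.2 * left + 0.3
m, n, matched = match_eyes(left, right)
scale = degrees_per_volt(left)                   # degrees per volt, left channel
```

On this noiseless example the fit recovers the gain ratio and offset exactly; on real traces the blinks would of course have to be removed first, as described above.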

figure 5.7 After calibration, excursions of five degrees in both eyes.

The calibration file was not accepted, and thus the recording rejected, in cases where drift in the signal was seen during recording of one or both eyes, when too many artifacts (blinks) were found in the calibration file, or when the factor m was < 0.5 or > 2.0. After this procedure, the vergence signal was derived from the change in the relative position of the two eyes by subtracting the two eye-position signals.

5.6 Recording procedure

After calibration, eye movements were recorded with polarized images during stimulation with excursions of the image corresponding to 2 and 4 prism diopters. Then again a calibration session followed, after which 6 and 8 prism diopter stimulations were recorded. After recording stimulation of the right eye, the same procedure was followed for the left eye. The step change was induced by tilting one of the two mirrors on a galvanometer, as was shown in figure 5.2. Each stimulus was given 4 or 5 times, or fewer when the subject reported diplopia. Because diplopia occurred more often in divergence, many more responses were obtained when the stimulus was convergence or convergence relaxation than when the stimulus was divergence or divergence relaxation. See appendix A.
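The acceptance rule and the vergence derivation can be stated compactly. Only the 0.5-2.0 bounds on m come from the procedure above; the function names and the other thresholds are illustrative placeholders.

```python
import numpy as np

def accept_calibration(m, n_blinks, drift_deg, max_blinks=3, max_drift=1.0):
    """Accept a calibration file only when the gain ratio m lies in 0.5-2.0,
    not too many blink artifacts were found, and no drift was seen.
    max_blinks and max_drift are hypothetical thresholds."""
    return 0.5 <= m <= 2.0 and n_blinks <= max_blinks and drift_deg <= max_drift

def vergence(left_deg, right_deg):
    """Vergence signal: difference of the two calibrated eye-position traces
    (left eye minus right eye, as in the figure 5.8 legend)."""
    return np.asarray(left_deg) - np.asarray(right_deg)
```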

prism diopters   degrees
      1            0.57
      2            1.15
      4            2.30
      6            3.45
      8            4.60

table 5.1 Prism diopters and corresponding degrees

The starting position of the experiment was an identical projection of each projector (i.e., images superimposed). Because of the distance to the screen (2.86 m) and a pupil distance of e.g. 63 mm, this means a convergence of 1.28° in the neutral position, and a corresponding convergence angle for other pupil distances. From just before until about 5 seconds after each stimulus, the eye movements were recorded (see figure 5.8, traces A and B; variables D and E result from later processing).
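The conversion in table 5.1 follows from the definition of the prism diopter (a deviation of 1 cm at a distance of 1 m). A quick check, noting that the table values appear to use a linear approximation of roughly 0.575° per prism diopter, so the exact arctan values differ by a few hundredths of a degree at the larger steps:

```python
import math

def prism_diopters_to_degrees(pd):
    """One prism diopter deviates a ray by 1 cm per metre, so the exact
    deviation angle is arctan(pd / 100)."""
    return math.degrees(math.atan(pd / 100.0))

for pd in (1, 2, 4, 6, 8):
    print(pd, round(prism_diopters_to_degrees(pd), 2))
```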

figure 5.8 Screen dump of recorded signals.
A = right eye
B = left eye
C = stimulus
D = vergence velocity
E = vergence (left eye - right eye)
In trace C, upward means a convergent stimulus in the right eye or a divergent stimulus in the left eye; downward means a divergent stimulus in the right eye or a convergent stimulus in the left eye.

5.7 Derived parameters and descriptors

The following characteristics are calculated automatically, having been indicated manually on the computer screen by means of cursors:
1. Maximum velocity (Vm)
2. Vergence latency (Vl)

3. Response amplitude (Ra)
4. Response duration (Rd)
5. Latency of the first saccade in the expected direction (Sl)

figure 5.9 The dynamic characteristics indicated. A = right eye; B = left eye; C = stimulus; D = vergence velocity; E = vergence.

Figure 5.9 shows the location of these dynamic characteristics. A short explanation can be given as follows.
1. Vergence latency is defined as the time that elapses after presentation of the stimulus until the first obvious sign of vergence is observed.
2. Response amplitude is the absolute value of the first maximum in the vergence amplitude after the stimulus, possibly, therefore, during an "overshoot"; in other words, always in the direction of the stimulus.
3. Maximum velocity. The vergence velocity is the first derivative of the vergence. Its peak value after the stimulus is the maximum velocity. Usually the vergence velocity also showed a peak value at the moment of the saccade. Peak values that coincided with the saccades were not accepted (see figure 5.9). One reason for this peak value could be imperfect calibration; another is an inequality of the saccades of the two eyes. This has also been described by Hung (1994). The vergence velocity artifact was easy to recognize once the sampling frequency was increased from 100 Hz to 200 Hz; the artifact was then identified as being related to the saccade.
4. Response duration is defined as the time that elapses after 10% of the vergence

response was seen, until 90% of the vergence response was reached.
5. Latency of the first saccade: the time that elapses after the stimulus until the first saccade that occurs and is related to the stimulus (a step-wise stimulus of the right eye in a convergent direction is usually followed by a saccade of the right and left eye to the left, and vice versa (Straub, 1989)). We also scored whether or not the saccade that followed the stimulus was related to the stimulus (in the direction of the shift of the image, see below). A latency higher than 2000 msec was rare, and the measurable response duration was only limited by the length (in time) of the computer screen, which was 5 seconds. The time that elapsed between two stimuli was usually about 5 seconds (one screen length). The stimuli were given in a random direction at variable intervals. If, incidentally, less than 1000 msec elapsed between two stimuli, only the response to the second stimulus was processed.

figure 5.10 Velocity (V, the small-amplitude signal) and acceleration (A), the second derivative of the vergence.
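Under the stated definitions (velocity as the first derivative of the vergence, duration as the 10%-90% interval), two of the numeric parameters can be sketched as below for a 200 Hz trace. The cursor-based indication and the rejection of saccade-related velocity peaks are not reproduced, and all names and the synthetic trace are illustrative.

```python
import numpy as np

FS = 200  # Hz, the sample frequency per channel stated above

def max_velocity(vergence_deg):
    """Peak absolute value of the first derivative of the vergence (deg/s)."""
    velocity = np.gradient(vergence_deg) * FS
    return np.max(np.abs(velocity))

def response_duration(vergence_deg):
    """Time from 10% to 90% of the final response amplitude (s)."""
    v = vergence_deg - vergence_deg[0]
    final = v[-1]
    t10 = np.argmax(np.abs(v) >= 0.1 * abs(final))   # first sample past 10%
    t90 = np.argmax(np.abs(v) >= 0.9 * abs(final))   # first sample past 90%
    return (t90 - t10) / FS

# a smooth, invented 1-degree step-like "vergence response" over ~1 s
t = np.arange(0, 2, 1 / FS)
trace = 1.0 / (1.0 + np.exp(-(t - 0.5) * 10.0))
```

For this logistic test trace the peak velocity is about 2.5 deg/s and the 10%-90% duration a little over 0.4 s, consistent with the analytic derivative of the sigmoid.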

Apart from these five parameters, for which numerical values were obtained, several qualitative descriptors were also derived from the recordings. The shape of the velocity and vergence recording was described in terms of five items:
1. One peak/more peaks (example in figure 5.11) (vergence velocity)
2. Overshoot or no overshoot (example in figure 5.12) (vergence)
3. Prolonged response (example in figure 5.13) (vergence)
4. First saccade in the direction of the stimulus or not (right eye / left eye)
5. Acceleration related to the stimulus or not (acceleration)
Acceleration, the second derivative of the vergence, was computed automatically and displayed. Only accelerations clearly related (in time) to the stimulus were taken into account (see figure 5.10).

figure 5.11 More peaks in vergence velocity (D) (see arrows).

figure 5.12 Overshoot in vergence (E) (see arrow).

figure 5.13 Second or prolonged vergence reaction (E).

Motor responses were recorded both after stimulation of the eye and after removal of the stimulus, for example convergence and convergence relaxation (C-), which is used to indicate removal of the convergent stimulus. Hard copies of all vergence

recordings were made, and the parameters mentioned above were calculated and scored by the same person (the author). The computed data were aggregated for each subject, and groups of subjects were compared.