Blood Vessel Tracking Technique for Optic Nerve Localisation for Field 1-3 Color Fundus Images


Blood Vessel Tracking Technique for Optic Nerve Localisation for Field 1-3 Color Fundus Images

Hwee Keong Lam, Opas Chutatape
School of Electrical and Electronic Engineering
Nanyang Technological University, Nanyang Ave., Singapore 639798
huiqiang@pmail.ntu.edu.sg, eopas@ntu.edu.sg

Abstract

This paper considers the problem of locating the optic nerve center, the place from which the blood vessels and nerve fibers emanate. Our algorithm first identifies the main blood vessel, which is characterized by its large width and dark red color, using an amplitude-modified second-order Gaussian filter. The optic nerve center is then found by tracking along this main blood vessel to a convergence point. Eighty ocular fundus images of various spatial resolutions, with and without disease conditions, were tested, and a success rate of 86% for finding the optic nerve was achieved. It should be stressed that the by-product of this algorithm, the main blood vessel, can be used to segment the entire blood vessel network by exploiting the interconnectivity of the vessels.

Index Terms: Fundus image, optic nerve, retinal vessel, matched filter.

1. Introduction

Ophthalmologists have long used fundus photography to assess the health condition of a person. There are seven standard fields in fundus imaging that are considered the gold standard. Field 1 is centered on the optic disc. Field 2 is centered on the macula. Field 3 is temporal to the macula, including the fovea at the 3:00 or 9:00 o'clock position. These fields are of particular interest to clinicians, and consequently to our work here. Definitions of the other fields can be obtained in [9]-[10].

The optic disc and the macula are important parts of the retina. The optic disc is the only place where the central retinal artery and central retinal vein emanate [1], supplying the retina with oxygen and nutrients. The nerve fibers, which transmit information to and from the brain, also pass through the optic disc. The retina is extremely susceptible to systemic and eye-related diseases, e.g. diabetes, glaucoma and age-related diseases. If the pathology is near or on the optic disc, the risk of vision impairment is higher. Thus, locating the optic disc is of high importance, especially for diseased retinal images.

In a healthy retinal image, the optic disc can be easily identified as a bright circular region. Figure 1 shows a healthy retinal image, with the optic disc clearly visible in the middle-right part of the image. The main blood vessel is also identified, one branch in the upper portion of the image and another in the lower portion. As can be seen, the main blood vessel is the widest and darkest vessel in the image. The center of the optic nerve, the point we are interested in locating, is also labeled.

Figure 1: A healthy retinal image (labels: main blood vessel, macula, optic nerve center).

Figure 2 [11] shows a diseased retinal image. Clearly, the optic disc cannot be identified as a bright circular region. However, the optic nerve center can still be identified if the main blood vessel is tracked to a convergence point.

Figure 2: The optic nerve center obscured by haemorrhage (labels: main blood vessel, optic nerve center).

2. Related work

The optic disc has traditionally been identified as the largest area of pixels with the highest gray level in the image [3]. This bottom-up method works well on normal fundus images but gives a wrong location when large areas of exudates are present, simply because the color and intensity of exudates are similar to those of the optic disc. A top-down approach combined with a bottom-up approach is used in [4] to locate the optic disc. A simple clustering method is first applied to the intensity image to locate the regions where the optic disc may appear. The optic disc is then identified based on the distance between the candidate areas and a model sub-image, using the principal component analysis (PCA) technique. This model-based method has been shown to be quite robust even in the presence of large areas of bright lesions. However, this method alone may not work best across all variations of fundus images.

A voting-type method is used in [5] to find the center of the optic disc. In this method, the entire vascular network is segmented first. Blood vessel segments are then modeled as line segments, and each line segment is in turn modeled as a fuzzy segment whose area contributes votes to its constituent pixels. The votes are summed at each pixel to produce an image map, which is then blurred and thresholded to determine the strongest point of convergence, taken to be the center of the optic nerve. Based on twenty ocular fundus images, a success rate of 65% is reported. In [6], the detection of the optic nerve is based on tracking the vessel network to a common starting point. Similarly, the entire vascular network has to be segmented first. The tracking process then uses the angles between vessels at branching points to identify the trunk. The result is shown for only two images and no quantitative results are provided.
Our work differs from previous methods in that we neither make use of any intensity characteristics of the optic disc nor need to segment the vascular network before finding the center of the optic nerve. Instead, we identify the main blood vessel and then use it to locate the center of the optic nerve. This method is useful when the priority is to locate the optic disc and macula; the macula can be easily located once the optic disc is found [2].

3. Method

Our method for identifying the center of the optic nerve consists of two parts. First, we identify the main blood vessel using the amplitude-modified second-order Gaussian filter [14]. Then we track along the main blood vessel to a convergence point. Section 3.1 describes how the main blood vessel is identified and Section 3.2 describes the tracking algorithm.

3.1 Locating the Main Blood Vessel

3.1.1 Choosing Seed Points inside the Main Blood Vessel

In field 1, 2 and 3 fundus images, the optic disc is frequently found in the band between 0.4 and 0.6 of the image height. We therefore segment the image into three regions: the upper region, from the top of the image to 0.6 of the image height; the middle region, from 0.4 to 0.6 of the image height; and the lower region, from 0.4 of the image height to the bottom. Analysis of the main blood vessel is carried out in the upper and lower regions only. The green plane is used since it has the highest contrast [13].

In the upper and lower regions, horizontal lines are drawn across the image and the pixels along these lines are analysed. They are first convolved with the kernels described in [14] and the matched filter response (MFR) along each line is noted. A 0° kernel and a 45° kernel, both with σ=2.5, are shown in Figures 3a and 3b respectively. This procedure is similar to that used by Collorec and Coatrieux [15], but it addresses the problem of finding local intensity minima with a 1-D sliding window of length Ns: a small Ns can detect thin vessels but locates multiple local minima on thick vessels. It has been observed, however, that MFR values above 350 correspond to blood vessels; Figure 3c illustrates this.

[The coefficient tables of the 0° and 45° kernels (Figures 3a and 3b) are omitted here.]
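For illustration, a matched-filter response on a single scanline can be sketched as below. This is an assumption-laden sketch, not the authors' exact kernel: it uses a plain second-derivative-of-Gaussian profile rather than the amplitude-modified kernel of [14], zero-means the kernel so a flat background gives zero response, and locates the peak response instead of applying the fixed threshold of 350 (which depends on the kernel's scaling).

```python
import math

def second_gaussian_kernel(sigma=2.5, half_len=6):
    # 1-D cross-section of a second-order Gaussian matched filter: a negative
    # well at the centre flanked by positive lobes, so a dark vessel
    # cross-section on a bright background yields a large positive response.
    ks = [(x * x / sigma**2 - 1.0) * math.exp(-x * x / (2 * sigma**2))
          for x in range(-half_len, half_len + 1)]
    mean = sum(ks) / len(ks)
    return [k - mean for k in ks]      # zero-mean: flat background -> ~0

def scanline_mfr(row, sigma=2.5):
    # Matched filter response (MFR) of one horizontal scanline; the borders
    # are zero-padded, so only interior responses are meaningful.
    kern = second_gaussian_kernel(sigma)
    h = len(kern) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for t, kv in enumerate(kern):
            j = i + t - h
            if 0 <= j < len(row):
                acc += kv * row[j]
        out.append(acc)
    return out

# Synthetic green-channel scanline: bright background, one dark vessel dip.
row = [200.0] * 64
for p in range(30, 35):
    row[p] = 60.0                      # ~5-pixel-wide dark vessel
mfr = scanline_mfr(row)
peak = max(range(len(mfr)), key=mfr.__getitem__)
print(peak)                            # strongest response lies inside the dip
```

In practice one would threshold the response rather than take the global peak, since a scanline may cross several vessels.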

Apart from blood vessels, the edges of bright objects, e.g. optic disc and exudate boundaries, also give high MFR values. To eliminate these false points, the left and right contrasts of these segments are examined, where contrast is defined as the difference between the maximum and minimum intensity values. Both contrasts must be above a threshold, chosen as 15 in our case, for the points to be considered inside a blood vessel. From the remaining candidate seed points, the one with the highest MFR value is taken as the seed point for the line.

Figure 3: (a) A 0° kernel with σ=2.5. (b) A 45° kernel with σ=2.5. (c) Segments with MFR values above 350, marked with thick lines for better viewing.

3.1.2 Tracking from Seed Points

A step size of 8 is chosen because the main blood vessel is generally not very tortuous, and a large step size means faster tracking. Due to digitizing error, the point (i_{k+1}, j_{k+1}) may not lie at the center of the blood vessel, so a search in a 5x5 neighborhood is performed and the pixel with the highest MFR value is taken as (i_{k+1}, j_{k+1}). To determine whether (i_{k+1}, j_{k+1}) is inside a vessel, the pixels in a 3x3 window are convolved with the kernels and the direction of the highest-scoring kernel at each pixel is noted. All these directions must be similar, as points inside a vessel should share a similar direction. Furthermore, to ensure that tracking proceeds along the same vessel, φ_k and φ_{k-1} must have similar directions. If any condition is violated, tracking stops.

If tracking proceeds for more than 5 iterations in the same direction, all its points are stored and their width is measured using the method described in [14], noting that the length of the kernel must be greater than the vessel width. All points tracked from one seed point carry the same unique label number. If tracking lasts fewer than 5 iterations, the points are not stored; this threshold prevents the boundaries of the optic disc and exudates from being labeled as vessels.
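The stepping rule and direction-consistency check used during tracking can be sketched as follows. This is an illustrative fragment, not the authors' code: the function names are ours, the 5x5 MFR refinement search is omitted, and `phi` stands in for the direction of the highest-response kernel.

```python
import math

def next_point(i, j, phi, forward=True, step_len=8):
    # One tracking step: move step_len pixels along the vessel direction phi
    # (forward) or against it (backward); rounding models the pixel grid.
    s = step_len if forward else -step_len
    return round(i + s * math.sin(phi)), round(j + s * math.cos(phi))

def consistent_direction(phi, phi_prev):
    # Same-vessel check: if the newly estimated direction differs from the
    # previous one by more than pi/2, flip it by pi so that tracking does
    # not reverse onto a crossing vessel.
    d = abs(phi - phi_prev) % (2 * math.pi)
    d = min(d, 2 * math.pi - d)        # wrap-around distance on the circle
    return phi if d <= math.pi / 2 else phi + math.pi

# Example: a vessel running at 45 degrees, tracked forward from (100, 100).
print(next_point(100, 100, math.pi / 4))   # -> (106, 106)
```

A real implementation would re-estimate `phi` from the kernel responses at each new point and stop when the consistency check fails, as described above.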
From each seed point, the vessel is tracked in both the forward and backward directions, and the width of the vessel is obtained along the way. The next point (i_{k+1}, j_{k+1}) is found from the current point (i_k, j_k) using

    i_{k+1} = i_k + 8 sin φ_k,  j_{k+1} = j_k + 8 cos φ_k    (forward)     (1a, 2a)
    i_{k+1} = i_k - 8 sin φ_k,  j_{k+1} = j_k - 8 cos φ_k    (backward)    (1b, 2b)

and

    φ_k = φ(i_k, j_k),       if |φ(i_k, j_k) - φ_{k-1}| <= π/2
    φ_k = φ(i_k, j_k) + π,   if |φ(i_k, j_k) - φ_{k-1}| > π/2

where φ(i_k, j_k) is the vessel direction, found from the kernel with the highest response.

3.1.3 Choosing the Main Blood Vessel

From the measurements made during tracking, the width of the largest vessel can be found. The path with the largest number of points of similar width is identified as the main blood vessel, where a width is considered similar if it is within 0.2 of the maximum width. Figure 4 shows the main blood vessel highlighted using this method.

Figure 4: The main blood vessel is highlighted.

3.2 Tracking to Convergence

The starting points for tracking to convergence in the upper and lower regions are the points nearest to the middle region. From these starting points, the one in the upper region tracks downwards while the one in the lower region tracks upwards, alternately. The tracking algorithm is similar to that detailed in Section 3.1.2, except that for the upper region it is tracking in

the backward direction while for the lower region it is tracking in the forward direction; a step size of 4 is used for finer tracking; a search window of 3x3 is used to compensate for digitization; and there is only one iteration per turn. A small step size is used here to prevent tracking from jumping to another vessel, as the optic disc has a high density of blood vessels inside it.

Tracking from the upper and lower regions proceeds alternately and independently until the stopping criteria described in Section 3.1.2 are met. For instance, if tracking for the top region is stopped, the bottom region still continues until the stopping criteria are met or a convergence point is found. The convergence point is the midpoint between the upper and lower points if they are within a 30x30 neighborhood, or, if both are stopped before reaching this neighborhood, if they are within a 120x120 neighborhood. These windows were chosen after observing that the radius of the optic disc is around 60 pixels in a 700x605 image.

Figure 5a shows the result of tracking to a convergence point. As can be seen, there is no guarantee that a point will not track beyond the convergence point. An improved technique takes care of this problem; the new algorithm is outlined in Figure 6. From the two starting points, a midpoint is calculated. If the midpoint is above the midline, a horizontal line at half the height of the image, only the upper point tracks, and vice versa. If tracking for the upper point is terminated, this condition is overruled and only the bottom point tracks, and vice versa. When the distance between the two points in the x or y direction is less than 30 pixels, both points track together. When the two points are inside a 30x30 neighborhood, or a 120x120 neighborhood if both terminated early, the midpoint is taken as the center of the optic nerve.
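The improved tracking control just described can be sketched as a small decision function. This is our own illustrative reading of the rules, not the authors' code: the names are hypothetical, points are (row, col) image coordinates with row growing downwards, and the 120x120 early-termination window is left out for brevity.

```python
def choose_movers(top, bottom, image_height,
                  top_stopped=False, bottom_stopped=False):
    # Decide which tracking point advances in this iteration.
    ty, tx = top
    by, bx = bottom
    # Converged once both coordinate gaps are below 30 pixels:
    # the midpoint is taken as the optic nerve centre.
    if abs(ty - by) < 30 and abs(tx - bx) < 30:
        return "converged"
    # Close in one axis already: both points track together.
    if abs(ty - by) < 30 or abs(tx - bx) < 30:
        return "both"
    # One side terminated: the midline rule is overruled and only the
    # still-live point tracks.
    if top_stopped:
        return "bottom"
    if bottom_stopped:
        return "top"
    # Midpoint above the midline -> the upper point tracks down, else the
    # lower point tracks up.
    mid_row = (ty + by) / 2.0
    return "top" if mid_row < image_height / 2.0 else "bottom"

print(choose_movers((100, 300), (500, 400), image_height=605))   # -> top
```

Each returned label would drive one step of the Section 3.1.2 tracking loop before the decision is re-evaluated.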
The process is repeated until the optic nerve is found or deemed unidentifiable, or a maximum number of iterations is reached. Figure 5b shows the result of this improved tracking algorithm.

Figure 5: (a) Result of the original tracking method. (b) Result of the improved tracking control technique. Notice that it is nearer to the true optic nerve center.

Figure 6: Improved tracking control technique.

4. Results

Our method was tested on 80 fundus images with resolutions ranging from 250x184 to 700x605, in both diseased and non-diseased conditions. The center of the optic nerve was hand-labeled by two observers who were briefed on how to identify the points. The optic nerve center is considered successfully identified if the convergence point is within the optic disc or within 60 pixels of the mean point located by the observers, whichever is more appropriate for the image's spatial resolution. Out of 80 images, the optic nerve was successfully located in 69, giving a success rate of 86%. Table 1 shows the mean and standard deviation of the error of the located optic nerve center relative to the observers' mean location. The located optic nerve center is close to the location labeled by the observers and well within the optic disc, taking the mean radius of the optic disc to be 60 pixels.

Table 1: Results of our experiment (errors in pixels).

    Image size                        Error mean    Error standard deviation
    Smaller than or equal to 512x512     13.8                9.1
    Larger than 512x512                  22.8               12.4

5. Conclusion

We have presented a new way of locating the optic nerve center without using intensity-level properties. By first identifying the main blood vessel using the amplitude-modified second-order Gaussian filter, we can then track along it to a convergence point; that convergence point is the optic nerve center. Our method has the additional advantage that the main blood vessel found can be further used to segment the vascular network, by using the connectivity property of blood vessels.

References

[1] C. Oyster, The Human Eye: Structure and Function. Sinauer Associates Publishing, 1999, p. 719.
[2] L. Gagnon, M. Lalonde, M. Beaulieu and M.C. Boucher, "Procedure to detect anatomical structures in optical fundus images," Proceedings of Conference Medical Imaging 2001: Image Processing (SPIE #4322), San Diego, 19-22 February 2001, pp. 1218-1225.
[3] S. Tamura, Y. Okamoto and K. Yanashima, "Zero-crossing interval correction in tracing eye-fundus blood vessels," Pattern Recognition, Vol. 21, No. 3, pp. 227-233, 1988.
[4] Huiqi Li, Opas Chutatape, "Automatic location of the optic disc in retinal images," Proceedings of IEEE International Conference on Image Processing, 2001, pp. 837-840.
[5] A. Hoover and M. Goldbaum, "Fuzzy convergence," Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 1998, pp. 716-721.
[6] K. Akita and H. Kuga, "A computer method of understanding ocular fundus images," Pattern Recognition, Vol. 15, No. 6, 1982, pp. 431-443.
[7] S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson and M. Goldbaum, "Detection of blood vessels in retinal images using two-dimensional matched filters," IEEE Transactions on Medical Imaging, Vol. 8, pp. 263-269, Sept. 1989.
[8] T.Y. Zhang and C.Y. Suen, "A fast parallel algorithm for thinning digital patterns," Communications of the ACM, Vol. 27, No. 3, 1984, pp. 236-239.
[9] http://eyephoto.ophth.wisc.edu/photography/protocols/AIDS/AIDSPhotoProtocol.html
[10] http://eyephoto.ophth.wisc.edu/photography/protocols/mod7-ver1.a.html
[11] http://www.parl.clemson.edu/stare/nerve/stareimages.tar
[12] A. Hoover, V. Kouznetsova and M. Goldbaum, "Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response," IEEE Transactions on Medical Imaging, Vol. 19, No. 3, March 2000, pp. 203-210.
[13] M. Lalonde, L. Gagnon and M.C. Boucher, "Non-recursive paired tracking for vessel extraction from retinal images," Proceedings of the Conference Vision Interface 2000, Montreal, May 2000, pp. 61-68.
[14] Luo Gang, Opas Chutatape and S.M. Krishnan, "Detection and measurement of retinal vessels in fundus images using amplitude modified second-order Gaussian filter," IEEE Transactions on Biomedical Engineering, Vol. 49, No. 2, February 2002, pp. 168-172.
[15] R. Collorec and J.L. Coatrieux, "Vectorial tracking and directed contour finder for vascular network in digital subtraction angiography," Pattern Recognition, Vol. 8, No. 5, December 1998, pp. 353-358.