CHAPTER 4 LOCATING THE CENTER OF THE OPTIC DISC AND MACULA


The objective of this chapter is to locate the center and boundary of the optic disc (OD) and the center of the macula in retinal images. In Diabetic Retinopathy, the location of the optic disc and macula is essential for diagnosing pathological features such as exudates and hemorrhages. Once the OD center and boundary are located, the macula can be identified conveniently, and masking the OD simplifies the segmentation of exudates.

In a normal retinal image, the OD appears as a bright region; in color retinal images it appears as a bright yellowish or white region and is approximately oval in shape. In general, the OD occupies one seventh of the entire retinal image (Sinthanayothin et al 1999), although its size varies from one person to another and may occupy one tenth to one fifth of the image. Automatic detection or localization of the OD aims to correctly detect the centroid or center point of the OD, whereas OD boundary detection aims to correctly segment the OD by detecting the boundary between the retina and the optic nerve head. The location of the OD center is essential for a number of reasons, listed below.

- It acts as a distracter for the segmentation of exudates. The exudates and the OD have more or less the same brightness, which makes it difficult to differentiate and segment the exudates in retinal images (Niemeijer et al 2007). Hence it is important to predict the boundary of the OD, and masking it facilitates the segmentation of the exudates.
- The boundary of the OD gives its shape and size, which are used to diagnose abnormalities in the retina.
- The blood vessels emerge from the OD and spread over the entire retinal area, so predicting the OD center is essential for vessel tracking algorithms.
- The OD dimensions are used to measure abnormal features caused by retinopathies such as glaucoma and diabetic retinopathy, as stated by Youssif et al (2008). A change in the shape, color or depth of the OD is an indicator of various retinal pathologies.
- OD center detection is a primary step in the localization of the macula region (Lowell et al 2004).
- The left and right eye images are identified from the location of the OD center.

Similar to the OD center, the location of the macula center is important in an automatic screening system for DR and is a prerequisite for segmenting the main anatomical features in the retinal image. In a retinal image, the macula is generally located at the center, temporal to the optic nerve. The macula is a small and highly sensitive part of the retina responsible for detailed central vision, and it is commonly visible as a hazy dark area in the retinal image. It is the area with the highest number of cones and rods per unit area in the retina and provides the central vision of the human eye that enables reading.

Location of the Macula Center (MC) is essential for segmenting abnormalities in the macula region; the abnormality and the severity of disease in this region can be identified only after locating the MC. It is also used in grading macular pathologies such as diabetic maculopathy. The location of the OD center and of the macula are dealt with separately in this chapter. The methodology for locating the center and boundary of the OD and for identifying the macula center is explained in the following subsections. Images from the STARE database are used for testing and analysis.

4.1 LOCATION OF OPTIC DISC CENTER AND BOUNDARY

The methodology adopted to locate the OD center and boundary, the various stages involved in the process, and the algorithms applied are illustrated sequentially in the block diagram of Figure 4.1.

Figure 4.1 Block diagram of the methodology: retinal image, location of the OD center (green band, Otsu's threshold, vessel direction) and boundary identification (red band)

Initially, the OD region is segmented by Otsu's thresholding method, since it is a bright region in the retinal image. In general, a circle fitting algorithm is used to locate the boundary of the OD under the assumption that it has an approximately circular shape. The OD center is also located using a vessel direction matched filter, and the OD boundary is located using an anisotropic diffusion method. The preprocessing, thresholding and vessel direction matched filter used for locating the OD center, and the anisotropic diffusion method used for fixing the OD boundary, are explained in the following subsections.

4.1.1 Preprocessing

Preprocessing is the important first step that removes unwanted details in the retinal image and converts the input image into a form suitable for further processing. As the OD is a high intensity region in the color retinal image, thresholding is used to segment it. Gray level thresholding requires a gray scale image, so the color image is first converted to gray scale. The conversion from the RGB color space to a gray image is given by Equation (4.1) below.

The color image is converted to gray scale using the standard luminance weighting

G = 0.299 r + 0.587 g + 0.114 b (4.1)

where G is the gray value of the image, r represents the red color component, g the green color component and b the blue color component. In the gray image, the gray values vary from 0 to 255, representing dark and bright pixel values respectively. Figure 4.2 (a) shows the color input image of the STARE database with the identification number im0077 and (b) shows the corresponding gray scale image.

Figure 4.2 Preprocessed image (a) color image (b) gray image

The gray scale image is subjected to a thresholding technique to segment the OD, as explained in the next section.
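As an illustration, the conversion of Equation (4.1) can be sketched as follows. This is a minimal example, assuming the standard luminance weights and OpenCV's BGR channel order; it is not the exact implementation used in this work, and the file name in the usage comment is illustrative only.

```python
import cv2
import numpy as np

def to_gray(bgr):
    """Weighted RGB-to-gray conversion along the lines of Equation (4.1)."""
    # OpenCV loads color images in BGR order, so unpack accordingly.
    b, g, r = cv2.split(bgr.astype(np.float64))
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    return np.clip(gray, 0, 255).astype(np.uint8)

# Illustrative usage:
# bgr = cv2.imread("im0077.ppm")
# gray = to_gray(bgr)   # comparable to cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
```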

4.1.2 Thresholding

Thresholding is one of the simplest methods of image segmentation. It divides the image into different regions according to the intensity distribution: suitable threshold values partition the image and a binary image is obtained. The operation is denoted by a function TH, as in Equation (4.2).

TH = TH[x, y, p(x, y), f(x, y)] (4.2)

where f(x, y) is the gray level of the point (x, y) and p(x, y) is a local property of the point (x, y), i.e. the average gray level of its neighborhood centered at (x, y).

The thresholded image G(x, y) is defined as

G(x, y) = 1 if f(x, y) > TH
          0 if f(x, y) <= TH (4.3)

In the above equation, pixels labeled 1 correspond to bright (white) regions, whereas pixels labeled 0 correspond to dark (black) regions. If the value of TH in Equation (4.3) depends only on f(x, y) (the gray level values), the threshold is called global. If TH depends on both f(x, y) and p(x, y) (a local property), the threshold is called local. If, in addition, TH depends on the spatial coordinates x and y, the thresholding is called dynamic or adaptive.

A single threshold value partitions the image into two regions: gray level values below the threshold form one region, and values above it form the other. With multiple thresholds, the image is partitioned into several regions; for example, two threshold values divide the image into three regions.
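A minimal sketch of the global rule in Equation (4.3), assuming an 8-bit gray image; the threshold TH here is a plain constant, not the automatically chosen value discussed in the next section.

```python
import numpy as np

def global_threshold(gray, th):
    """Binary image G(x, y) of Equation (4.3): 1 where f(x, y) > TH, else 0."""
    return (gray > th).astype(np.uint8)
```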

The intensity of the retinal image varies unevenly across the image, so a single threshold value is not suitable for partitioning it into different regions; multiple threshold values are needed to segment the image based on its intensity values. In this work, the OD is separated because the intensity of the OD region differs from that of its background. One such multilevel thresholding method, suggested by Otsu, is applied to the retinal images and is discussed in the next section.

4.1.2.1 Otsu's method

In this method, the threshold values are chosen automatically. The gray image is a two dimensional grayscale intensity function containing N pixels with gray levels from 1 to L. The probability of gray level i in the image, denoted by Pi, is given in Equation (4.4).

Pi = fi / N (4.4)

where fi is the number of pixels with gray level i and N is the total number of pixels in the image.

In bi-level thresholding, the pixels are divided into two classes: class C1 with gray levels [1, 2, ..., t] and class C2 with gray levels [t+1, ..., L]. The gray level probability distributions of the two classes are then

C1: P1/ω1(t), ..., Pt/ω1(t) (4.5)

C2: Pt+1/ω2(t), Pt+2/ω2(t), ..., PL/ω2(t) (4.6)

where

ω1(t) = Σ (i = 1 to t) Pi and ω2(t) = Σ (i = t+1 to L) Pi

The means of classes C1 and C2, denoted μ1(t) and μ2(t), are given in Equations (4.7) and (4.8) respectively.

μ1(t) = Σ (i = 1 to t) i Pi / ω1(t) (4.7)

μ2(t) = Σ (i = t+1 to L) i Pi / ω2(t) (4.8)

Let μT be the mean intensity of the whole image, which can be written as in Equation (4.9).

μT = ω1 μ1 + ω2 μ2 (4.9)

It also holds that

ω1 + ω2 = 1 (4.10)

Otsu defined the between-class variance of the thresholded image as

σB² = ω1 (μ1 − μT)² + ω2 (μ2 − μT)² (4.11)

For bi-level thresholding, Otsu showed that the optimal threshold t* is the one that maximizes the between-class variance σB², that is,

t* = Arg Max { σB²(t) }, 1 ≤ t < L (4.12)

Similarly, in multilevel thresholding with M−1 thresholds {t1, t2, ..., tM−1}, the original image is divided into M classes:

Class 1: C1 for [1, ..., t1]
Class 2: C2 for [t1+1, ..., t2]
...
Class i: Ci for [ti−1+1, ..., ti]
...
Class M: CM for [tM−1+1, ..., L]

The optimal thresholds {t1*, t2*, ..., tM−1*} are chosen by maximizing σB² as follows:

{t1*, t2*, ..., tM−1*} = Arg Max { σB²(t1, t2, ..., tM−1) }, 1 ≤ t1 < ... < tM−1 < L

where

σB² = Σ (k = 1 to M) ωk (μk − μT)² (4.13)

ωk = Σ (i ∈ Ck) Pi (4.14)

μk = Σ (i ∈ Ck) i Pi / ωk (4.15)

Based on Otsu's method, optimum threshold values are selected to segment the image into different regions. After thresholding, the OD is isolated in the retinal image, but some unwanted noise is also present, so postprocessing is needed to segment the OD accurately. Morphological erosion and dilation (explained in Chapter 3, Section 3.4) are applied to obtain a better and more accurate result. The resultant images after the application of the thresholding method are given in Figures 4.3 (a) to (f). The input image and its gray scale image are shown in Figures 4.3 (a) and (b) respectively. Figure 4.3 (c) shows the histogram of the gray scale image with the threshold points marked as red stars. The segmented optic disc is given in Figure 4.3 (d), and the resultant image after the morphological operations and the edges of the segmented region are shown in Figure 4.3 (e).
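The segmentation step described above can be outlined with the following sketch, assuming scikit-image's multilevel Otsu implementation and OpenCV morphology; the number of classes and the structuring element size are illustrative choices, not the exact values used in this work.

```python
import cv2
import numpy as np
from skimage.filters import threshold_multiotsu  # scikit-image >= 0.16

def segment_od_candidates(gray, classes=3, kernel_size=11):
    """Rough OD segmentation: keep the brightest multilevel-Otsu class, then
    clean small spurious regions with morphological opening and closing."""
    thresholds = threshold_multiotsu(gray, classes=classes)
    binary = ((gray > thresholds[-1]).astype(np.uint8)) * 255   # brightest class only

    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # erosion then dilation
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # dilation then erosion
    return binary
```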

Figure 4.3 Resultant images after Otsu's thresholding on a normal retinal image (a) Input image (b) Gray image (c) Histogram of the gray image with threshold points (d) Resultant image of the morphological operation (e) Edges of the OD (f) Resultant image after the circle fitting algorithm

After OD segmentation, a circle fitting algorithm is applied to encircle the OD. The approximate center of the OD is identified from the locations of the pixels in the segmented region, and the radius of the circle, in number of pixels, is estimated from the brightness values. The center and radius are then used to fix the circle in the retinal image, and the fitted circle is superimposed onto the original retinal image. Figure 4.3 (f) shows the segmented OD in the retinal image with the encircled OD superimposed on the original image.

The above process yields successful results for normal images. When the same process is applied to an abnormal retinal image from the STARE database (im0002), the segmentation cannot be achieved to the desired level because the abnormal regions, having similar intensity values, are segmented along with the OD. The resultant images at the various stages of thresholding on an abnormal image are shown in Figures 4.4 (a) to (d). Figure 4.4 (a) shows the abnormal retinal image (im0002), and Figures 4.4 (b), (c) and (d) show the segmented output, the detected edges and the encircled optic disc respectively.
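A possible sketch of the circle fitting step is given below, assuming the center is taken as the centroid of the largest bright component and the radius is derived from its pixel area; the text estimates the radius from the brightness values, so this is a simplification rather than the exact procedure used here.

```python
import cv2
import numpy as np

def fit_od_circle(binary):
    """Approximate the segmented OD by a circle: centroid of the largest
    bright component as the center, radius from the component area."""
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
    if num < 2:                       # only background present
        return None
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))   # skip label 0 (background)
    cx, cy = centroids[largest]
    radius = int(round(np.sqrt(stats[largest, cv2.CC_STAT_AREA] / np.pi)))
    return (int(round(cx)), int(round(cy))), radius

# Illustrative usage: superimpose the fitted circle onto the color image.
# (cx, cy), r = fit_od_circle(binary)
# cv2.circle(color_image, (cx, cy), r, (0, 255, 0), 2)
```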

Figure 4.4 Resultant images after Otsu's thresholding on an abnormal retinal image (a) Input image (b) Resultant image of the morphological operation (c) Edges of the OD (d) Resultant image after the circle fitting algorithm

Hence, a modified method is needed to locate the OD in abnormal retinal images. A vessel direction matched filter is implemented to locate the OD center, as explained in the next section.

4.1.3 Vessel Direction Matched Filter

The OD is normally the entry and exit site of the blood vessels in the retina. In the retinal image, all the blood vessels originate from the OD, and their directions clearly indicate the location of the OD. This property is used to identify the OD center. The significance and usefulness of the various color band images are described in Section 3.1.1; here the Green Band (GB) of the image is selected for locating the OD center because it contains useful information about both the blood vessels and the OD. In this method, two preprocessing steps, namely illumination equalization and adaptive histogram equalization, are employed and are explained in the following sections.

4.1.3.1 Illumination equalization

The illumination in a retinal image is non-uniform because of the variation in the retinal response of the individual during image acquisition. In addition, the variation in the intensity of illumination while recording the retinal images also affects image quality. Hence, equalization of illumination is essential before applying any analysis algorithm. The non-uniform illumination is corrected by adjusting the intensity value of each pixel using Equation (4.16), proposed by Haar (2005).

Ieq(r, c) = I(r, c) + m − Iw(r, c) (4.16)

In this equation, m denotes the desired average intensity (128 for an 8-bit grayscale image) and Iw(r, c) is the mean intensity of the pixels within a window W of size N × N. A running window of size 40 × 40 was used to calculate the mean intensity; near the image border, fewer pixels are available for this mean than in the center of the image.
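A minimal sketch of Equation (4.16), using a box filter for the running-window mean; the border handling here (reflection) is an assumption and differs slightly from the border treatment described in the text.

```python
import cv2
import numpy as np

def illumination_equalize(green, m=128, window=40):
    """Illumination equalization of Equation (4.16):
    Ieq(r, c) = I(r, c) + m - Iw(r, c), with Iw the local window mean."""
    img = green.astype(np.float64)
    local_mean = cv2.blur(img, (window, window))     # running-window mean
    eq = img + m - local_mean
    return np.clip(eq, 0, 255).astype(np.uint8)
```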

The same technique is applied to the GB images of the retinal images, and adaptive histogram equalization is then applied to the resultant illumination equalized image.

4.1.3.2 Adaptive histogram equalization

The Adaptive Histogram Equalization (AHE) technique was proposed by Aliaa Abdel-Haleim et al (2008) for improving the contrast of retinal images. It is applied after illumination equalization and is more effective than classical histogram equalization, especially for detecting small blood vessels characterized by low contrast. The adaptive histogram equalized image IAHE is obtained from the illumination equalized image by applying Equation (4.17) at each pixel p.

IAHE(p) = [ Σ (p' ∈ R(p)) s[I(p) − I(p')] / h² ]^r × M (4.17)

where M is 255, R(p) denotes the neighborhood of pixel p (a square window of side length h), and s[I(p) − I(p')] = 1 if I(p) − I(p') > 0 and zero otherwise. The values of h and r were chosen to be 81 and 8 respectively.
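The AHE rule of Equation (4.17) can be made concrete with the following naive sketch; it follows the reconstructed formula literally (h = 81, r = 8, M = 255) and is O(N h²), so it is meant to illustrate the computation rather than to be efficient.

```python
import numpy as np

def adaptive_hist_eq(img, h=81, r=8, M=255):
    """Equation (4.17): each output pixel is M times the fraction of
    neighbours in the h x h window that are darker than the pixel,
    raised to the power r."""
    img = img.astype(np.float64)
    pad = h // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.zeros_like(img)
    rows, cols = img.shape
    for y in range(rows):
        for x in range(cols):
            window = padded[y:y + h, x:x + h]
            darker = np.count_nonzero(window < img[y, x])  # sum of s[I(p) - I(p')]
            out[y, x] = ((darker / float(h * h)) ** r) * M
    return out.astype(np.uint8)
```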

4.1.3.3 Location of OD center

In general, the blood vessels are segmented using a 2-D Gaussian matched filter technique, and a Vessel Direction Map (VDM) is obtained from the vessel segmentation algorithm. After segmentation, the result is improved by eliminating unwanted pixels from the image before further processing. To detect the center of the OD, the vessel direction matched filter template proposed by Aliaa et al (2008) is applied to the blood vessels segmented by the various methods explained in Chapter 3. The OD is located using a 9×9 template representing the retinal vasculature along different orientations from 0° to 165° with an angular resolution of 15°, as illustrated in Figure 4.5.

135 120 105 105  90  75  75  60  45
150 135 120 105  90  75  60  45  30
165 150 135 120  90  60  45  30  15
165 165 150 135  90  45  30  15  15
  0   0   0   0  90   0   0   0   0
 15  15  30  45  90 135 150 165 165
 15  30  45  60  90 135 135 150 165
 30  45  60  75  90 105 120 135 150
 45  60  75  75  90 105 105 120 135

Figure 4.5 Proposed vessel direction matched filter template

To match the direction of the vessels in the VDM, the template is resized to 241×81, 361×121, 481×161 and 601×201 using bilinear interpolation, one of the basic resampling techniques. Each pixel of the segmented blood vessel image is matched against these template sizes, and the maximum correlation between the template and the VDM is used to find the best match. From this response, the OD location is predicted in the retinal image. This algorithm is implemented on selected images from the STARE database, and the results of the various steps are presented in Figures 4.6 (a) to (f). Figure 4.6 (a) shows the original image; the gray scale image and the preprocessed images after illumination equalization and adaptive histogram equalization are shown in Figures 4.6 (b), (c) and (d) respectively. The segmented blood vessels are shown in Figure 4.6 (e), and the image with the located OD is shown in Figure 4.6 (f).
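A sketch of the multi-scale template matching step is given below, assuming the direction template and the VDM are numeric arrays and that the listed sizes are (height, width) pairs; normalised cross-correlation via OpenCV stands in for the correlation measure described above.

```python
import cv2
import numpy as np

TEMPLATE_SIZES = ((241, 81), (361, 121), (481, 161), (601, 201))  # assumed (rows, cols)

def locate_od_center(vdm, template, sizes=TEMPLATE_SIZES):
    """Resize the 9x9 direction template to each candidate size with bilinear
    interpolation, correlate it with the vessel direction map (VDM), and take
    the strongest peak over all scales as the OD center."""
    vdm = vdm.astype(np.float32)
    best_score, best_center = -1.0, None
    for th, tw in sizes:
        if th > vdm.shape[0] or tw > vdm.shape[1]:
            continue
        t = cv2.resize(template.astype(np.float32), (tw, th), interpolation=cv2.INTER_LINEAR)
        response = cv2.matchTemplate(vdm, t, cv2.TM_CCORR_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(response)
        if max_val > best_score:
            best_score = max_val
            best_center = (max_loc[1] + th // 2, max_loc[0] + tw // 2)  # (row, col)
    return best_center, best_score
```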

Figure 4.6 Vessel direction matched filter applied to the fundus image (a) Input image (b) Gray scale image (c) Illumination equalized image (d) Adaptive histogram equalization (e) Global Otsu thresholding (f) Detected OD center

Similarly, the algorithm is applied to various images from the gold standard database, and some of the resultant images after locating the OD center are given in Figures 4.7 (a) to (d). The gold standard database was established by a collaborative research group and contains 15 images of healthy patients, 15 images of patients with diabetic retinopathy and 15 images of glaucomatous patients (http://www5.cs.fau.de/research/data/fundus-images/).

Figure 4.7 Detection of the OD center for various images from the gold standard database (04_dr, 08_dr, 05_dr, 06_dr)

This method has been applied to both normal and abnormal images of the gold standard database, in which the ground truth of the OD center is given as a pixel location in terms of row and column. In Figure 4.7 the located OD center is marked with a + symbol in red and black: the red mark indicates the predicted OD center and the black mark the ground truth. The pixel locations of the OD center (row and column) from the ground truth and from the vessel direction matched filter for some of the images in the gold standard database are tabulated in Table 4.1, together with the difference, in number of pixels, between the two locations in the row as well as the column direction.

Table 4.1 Location of the OD center in the retinal images

Image       Ground truth OD center    Predicted OD center    Error (pixels)
            Row      Column           Row      Column        Row    Column
01_dr       860      1180             815      1171          45     09
03_dr       900      1050             902      1055          02     05
04_dr       2768     1278             2784     1283          16     05
05_dr       984      1160             942      1172          42     12
06_dr       2638     1108             2671     1092          33     16
07_dr       888      1078             871      1068          17     10
08_dr       2695     1144             2725     1149          30     05
09_dr       2584     1280             2592     1276          08     04
10_dr       990      1199             953      1211          37     12
11_dr       2764     1225             2792     1228          28     03
12_dr       901      978              904      965           03     13
13_dr       2688     1269             2703     1272          15     03
14_dr       866      1133             857      1137          09     04
14_h        918      1078             889      1102          29     24
15_dr       2776     1121             2802     1120          26     01
Mean value                                                   23     08

From Table 4.1 it is inferred that the vessel direction matched filter technique successfully locates the OD center in both normal and abnormal images. The difference between the ground truth and the predicted value is small, so this algorithm is suitable for predicting the OD center. The center of the OD is then used to locate its boundary, and the methodology adopted to do so is explained in the next section.

4.2 OPTIC DISC BOUNDARY

The methodology for locating the OD boundary involves the following steps:

- Separation of the color bands
- Segmentation of the blood vessels from the GB
- Elimination of the blood vessels in the RB using anisotropic diffusion
- Detection of the OD center
- Location of the OD boundary using the intensity gradient method

The separation of the color bands has been briefed in Chapter 3, Section 3.1.1. Compared with the other color bands, the Red Band (RB) of the image gives better visibility of the OD, so RB images are used to detect the OD boundary. Figure 4.8 shows the red band of a retinal image.

Figure 4.8 Red band image

The blood vessels in the RB image mislead the location of the OD contour because they emerge from the OD. Hence, eliminating the blood vessels is essential for predicting the OD boundary. The segmentation of the blood vessels using various techniques is explained in Chapter 3. The blood vessels are clearly seen in the GB image, which is therefore used to identify them, whereas the OD is best visible in the RB image. The identified blood vessels are therefore eliminated from the RB image using the anisotropic diffusion method, after which the OD boundary can be predicted successfully. The process of eliminating the blood vessels using the anisotropic diffusion technique is briefed below.

4.2.1 Anisotropic Diffusion

Anisotropic diffusion is a non-linear, space-variant transformation of the original image. It is used to reduce noise without removing significant parts of the image content, typically edges, lines or other details that are important for interpreting the image. It resembles the creation of a scale-space, in which the image generates a parameterized family of successively more blurred images through a diffusion process; consequently, it can be used for effective removal of the blood vessel structures.

The process is applied to the RB of the retinal image in order to diffuse the predicted blood vessels. In this process, the diffusivity function is set to unity on the identified blood vessel structures. The customized diffusivity is obtained by comparing each pixel of the RB image against the corresponding pixel of the vessel segmented green band image: if the pixel value is 1, the region is considered to be within the blood vessels; if it is 0, the region is considered to lie outside the blood vessel structures. After several iterations of the anisotropic diffusion process, small features and blood vessel structures are effectively removed while the optic disc contour is still well preserved, yielding a vessel-reduced image with an enhanced optic disc. Figures 4.9 (a) and (b) show the red band image and the resultant image after the application of anisotropic diffusion.

Figure 4.9 Image after the application of anisotropic diffusion (a) Red band image (b) Anisotropic diffused image
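The vessel-suppressing diffusion described above can be sketched as a Perona–Malik style iteration in which the conductance is forced to unity on pixels flagged as vessel by the green-band segmentation; the conductance function, iteration count and step size below are illustrative assumptions, not the parameters used in this work.

```python
import numpy as np

def vessel_suppressing_diffusion(red_band, vessel_mask, n_iter=50, kappa=15.0, lam=0.2):
    """Anisotropic diffusion of the red-band image. The diffusivity is set to 1
    on vessel pixels (mask == 1) so vessels are smoothed away, while an
    edge-stopping conductance preserves other structures such as the OD rim."""
    img = red_band.astype(np.float64).copy()
    mask = vessel_mask.astype(bool)

    def conductance(d):
        c = np.exp(-(d / kappa) ** 2)   # Perona-Malik edge-stopping function
        c[mask] = 1.0                   # full diffusion inside the vessel mask
        return c

    for _ in range(n_iter):
        dn = np.roll(img, -1, axis=0) - img   # differences to the four neighbours
        ds = np.roll(img, 1, axis=0) - img
        de = np.roll(img, -1, axis=1) - img
        dw = np.roll(img, 1, axis=1) - img
        img += lam * (conductance(dn) * dn + conductance(ds) * ds +
                      conductance(de) * de + conductance(dw) * dw)
    return np.clip(img, 0, 255).astype(np.uint8)
```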

The OD center pixel is located using the vessel direction matched filter algorithm already explained in the first part of this chapter, and this center acts as the seed point for locating the OD boundary. For each angle from 0° to 359°, the pixel values are sampled along a ray drawn outward from the center point, and the intensity gradient along each ray is calculated. The local maximum gradient points closest to the center of the optic disc are chosen as the contour points. The algorithm used to locate the contour of the OD is applied to various images from the STARE database. The resultant images after locating the OD boundary for the input images with identification numbers im0009, im0025 and im0031 from the STARE database are given in Figures 4.10 (a) to (c).

Figure 4.10 Resultant images after locating the OD center and boundary

The detection of the OD boundary is important for segmenting pathological features such as exudates and hemorrhages. In general, both the exudates and the OD appear as bright regions, which often misleads exudate detection. The detection and segmentation of exudates are therefore possible only if the OD boundary is detected and the OD removed properly. This is performed by masking the OD after predicting its boundary, and the masking process is briefed in the next sub-section.
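The radial search for contour points described before Figure 4.10 (one ray per degree from the OD center) can be sketched as follows; picking the strongest gradient along each ray is a simplification of the "local maximum closest to the center" rule, and max_radius is an assumed search limit.

```python
import numpy as np

def od_contour_points(diffused, center, max_radius=120):
    """Sample the vessel-suppressed image along rays from the OD center
    (one per degree) and keep the strongest intensity transition on each
    ray as a contour point. Returns a list of (row, col) points."""
    cy, cx = center
    radii = np.arange(1, max_radius)
    points = []
    for deg in range(360):
        theta = np.deg2rad(deg)
        rows = np.round(cy + radii * np.sin(theta)).astype(int)
        cols = np.round(cx + radii * np.cos(theta)).astype(int)
        inside = (rows >= 0) & (rows < diffused.shape[0]) & \
                 (cols >= 0) & (cols < diffused.shape[1])
        profile = diffused[rows[inside], cols[inside]].astype(np.float64)
        if profile.size < 3:
            continue
        grad = np.abs(np.diff(profile))
        k = int(np.argmax(grad))          # strongest transition along this ray
        points.append((rows[inside][k], cols[inside][k]))
    return points
```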

4.2.2 Masking of Optic Disc

To mask the OD, the detected optic disc contour and the pixels contained within it are turned to black in the image, so the masked OD cannot cause misdetection during exudate segmentation. The masking algorithm is applied to various images from the STARE database, and some of the resultant images after locating and masking the OD are given in Figures 4.11 (a) to (c).

Figure 4.11 OD masked images
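A minimal sketch of the masking step, assuming the OD region is approximated by the fitted circle; center is an (x, y) pixel coordinate.

```python
import cv2

def mask_od(image, center, radius):
    """Blacken the detected OD region so its brightness cannot be
    misdetected as exudate in the later segmentation stage."""
    masked = image.copy()
    cv2.circle(masked, center, radius, (0, 0, 0), thickness=-1)  # filled black circle
    return masked
```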

Like the center and boundary of the OD, the location of the macula is an important step towards the prediction of various features in retinal images, since the images are acquired with the macula at the center. Further, it is important to identify any abnormal feature present in the macula, as such features lead to Diabetic Maculopathy. The methodology for locating the macula center is discussed in the next section.

4.3 LOCATION OF MACULA CENTER

Right and left eye retinal images are easily distinguished from the pixel location of the OD center. Based on the OD center, the macula center can be correctly marked in both right and left eye retinal images. The proposed methodology for locating the macula center from the OD center is briefed in the next sub-section for left and right eye macula-centered retinal images. The proposed geometrical approach successfully detects the macula center in selected images of the STARE database, including normal and abnormal images.

4.3.1 Geometrical Approach for Location of Macula Center

Detecting the location of the macula is a challenging task in diabetic retinopathy, and many researchers have proposed various methods for locating the macula center in healthy and unhealthy images. Here, a geometrical approach is proposed to locate the approximate center of the macula for both left and right eye macula-centered retinal images. From this approximate center, the exact location of the macula center can then be refined using further image processing. The procedure adopted in this geometric approach is briefed in this section. In the retinal image, the OD center is first located using the technique briefed in the previous section, and the macula is then located. In general, the OD is positioned on either the right or the left side of the retinal image.

From the location of the OD center in the retinal image, it is possible to predict whether the image belongs to the right or the left eye, and the proposed geometric approach is applied accordingly. A successful attempt has been made in this work to locate the macula center using a geometric technique. In this technique, a point A is located at a distance of 2.5D from the OD center, where D is the diameter of the OD. The intersection of the straight line drawn at 30° from the OD center and the vertical line drawn through A is taken as the macula center. The pictorial representation of this method is shown in Figures 4.12 (a) and (b): the line of length 2.5D used to locate point A is drawn horizontally from the OD center, the line at 30° is drawn as an inclined line from the OD center, and the intersection of the vertical line through A with the inclined line gives the macula center. Figures 4.12 (a) and (b) show the location of the macula center in right and left eye images respectively.

Figure 4.12 Geometrical approach for the location of the macula center (a) right eye image (b) left eye image
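The geometric construction above reduces to two coordinate offsets from the OD center, as in the sketch below; the horizontal direction is chosen towards the macula depending on which side of the image the OD lies, and the downward sign of the 30° offset is an assumption based on the usual fundus geometry rather than a detail stated in the text.

```python
import numpy as np

def macula_center(od_center, od_diameter, od_on_right=True, angle_deg=30.0, k=2.5):
    """Point A lies k*D horizontally from the OD center towards the macula;
    the macula center is the intersection of the vertical line through A
    with the line drawn at angle_deg from the OD center."""
    od_x, od_y = od_center                       # (column, row)
    step = -k * od_diameter if od_on_right else k * od_diameter
    mac_x = od_x + step
    mac_y = od_y + abs(step) * np.tan(np.deg2rad(angle_deg))   # assumed below the horizontal
    return int(round(mac_x)), int(round(mac_y))
```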

The above method is applied to the STARE database images after locating the OD center, and some of the resultant images after locating the macula center are shown in Figure 4.13. In Figure 4.13 the locations of the OD center and the macula center are marked in red, and for better visualization the macula center is highlighted by a square.

Figure 4.13 Sample macula centered images from the STARE database

The result is validated by applying the technique to three left and three right retinal images, as shown in Figures 4.13 (a) to (f). The first three images, (a) to (c), correspond to the left eye and the remaining three, (d) to (f), to the right eye.

4.4 RESULTS AND DISCUSSION

The OD center is first located by Otsu's thresholding method; however, this method is not suitable for abnormal retinal images, where bright lesions are segmented along with the OD. The vessel direction matched filter is therefore applied to locate the OD center in both normal and abnormal images, and this algorithm is evaluated using the gold standard database. The results show that the vessel direction matched filter successfully locates the OD center in both normal and abnormal images. The estimated error between the OD center located by the vessel direction matched filter and the ground truth is of the order of 23 and 8 pixels (mean values) in the row and column directions respectively, which is quite acceptable.

The location of the OD center is used to identify whether a retinal image belongs to the left or the right eye, and from this decision the geometrical approach is used to locate the macula center. The geometrical approach successfully locates the macula center in both left and right eye retinal images; proper identification of the OD center determines the success of this method.

The algorithm for locating the OD boundary successfully locates the boundaries in both normal and abnormal retinal images. The method is suitable for locating a rough boundary, but it fails to capture the accurate shape and contour of the OD. Nevertheless, when it is used to mask the OD in abnormal images, it improves the segmentation of pathologies such as exudates in the retinal images; the OD masked image is more appropriate for exudate segmentation.

4.5 SUMMARY

In this chapter, the detection of the OD center, its boundary and the macula center has been carried out successfully with various techniques. Images from the STARE and gold standard databases have been used with these techniques, and the results have been presented. This is a vital step for the segmentation of pathological features such as exudates and hemorrhages, which are discussed in the next chapter.