Image Processing Method for Crop Status Management


Chapter 5

Image Processing Method for Crop Status Management

This chapter discusses image processing algorithms and their implementation for crop status management of the sugarcane crop. Primarily there are three issues:

1. Measurement of growth
2. Estimation of the chlorophyll content
3. Disease severity of the leaf

Section 5.1 discusses the background of the sugarcane crop; the subsequent sections present the technical details (implementation and results).

5.1 Introduction

The duration of the sugarcane crop in India varies, with a 12-month crop being most common. Experimentally and through research it has been established that, for a high sugarcane yield, careful crop status management is essential during the germination and tillering stages of growth [2]. Soil quality, fertilizer and micronutrients, water and other environmental factors such as temperature and humidity play an important role in the growth of sugarcane. Compared with other crops, the cultivation of sugarcane demands considerably more water (several times more than crops such as jowar), and the application of fertilizers and pesticides is also substantially higher [40]. Considering all these factors, the productivity of the sugarcane crop draws the attention of farmers.

The growth of sugarcane can be measured by stalk and leaf dry weight in grams per square centimeter, but this destructive method is rarely used [58]. Measurement of the vertical height of the sugarcane is the method commonly used in practice. Traditionally, the plant geometrical parameters are measured using a tape and protractor [59]. Though these measuring tools are simple, the procedures are tedious and the results are not accurate.

The plant leaf colour is another commonly used indicator of the health status of the plant. Chlorophyll is a green pigment found in almost all plants, which allows the plants to obtain energy from light [20]. The loss of chlorophyll content in leaves occurs due to nutrient imbalance, excessive use of pesticide, environmental changes and ageing. Various kinds of colour plates are available for estimation of the chlorophyll content of plants [60], and chlorophyll meters (SPAD) have been developed to estimate leaf chlorophyll content [23]. These tools are a good alternative to the chemical analysis and remote sensing methods used to find the chlorophyll content of plants [61]. Most of these techniques are quite accurate, but they are rarely used in practice because of the high cost of the SPAD meter, unavailability of remote sensing systems and other constraints.

Another issue concerning sugarcane agriculture management is the monitoring and control of diseases. For example, fungal diseases of sugarcane are the most predominant and appear as spots on the leaves. These spots hinder the vital process of photosynthesis and hence, to a large extent, affect the growth of the plant and consequently the yield. In case of severe infection, the leaf gets totally covered with spots, causing a loss of yield [31]. One may suggest the use of pesticides, but excessive use of pesticide for disease protection increases the cost of production and results in environmental degradation [62]. This drawback can be removed by estimating the severity of the disease and targeting the diseased places with the appropriate quantity and concentration of pesticide. Traditionally, the naked eye observation method is used in production practice to assess the severity of disease; however, the results are subjective and cannot measure the disease precisely [26]. A grid counting method can be used to improve the accuracy of estimation, but it is a cumbersome and time consuming procedure.

Figure 5.1: Details of sugarcane plant

Therefore, there is a need for an effective technique for growth measurement, chlorophyll measurement and disease severity measurement that provides cost effective and accurate support for sugarcane agriculture management. The subsequent sections describe how image processing algorithms are used to solve the issues discussed so far.

5.2 Growth Measurement

In sugarcane, a standard way of measuring growth is to measure the length between the +1 dewlap and the lower end of the main stem [63], where the +1 dewlap is the first fully developed leaf, as shown in Figure 5.1. The dewlap acts as a hinge between the blade and the leaf sheath. As growth develops, the leaves are referred to as the +1 dewlap, the +2 dewlap and so on. Undeveloped leaves higher than the +1 leaf are called zero dewlaps. During the early days of sugarcane growth, the length of the leaf sheath grows rapidly, and the apparent growth of the sugarcane is the sum of the true growth and the growth of the leaf sheath [64]. With this understanding, the length between the first dewlap and the lower end of the main stem is considered an indicator of sugarcane growth.

5.2.1 Analysis and Design

During the experimentation, three good quality eye buds of sugarcane (seed type: Cultivar Co-86032) were planted in pots (on 1st June 2010). The soil selected for the pots was lumpy soil with contents N, P-5.38, K-69.44, Cu-0.16, Fe-2.68, Mn-1.24 and Zn. The pots were kept in an open space (north-east side) so that they received maximum sunlight. Images of the plant were captured at intervals of five days for up to 3 months, and at the same time the growth was measured physically using a meter scale.

Figure 5.2: Sugarcane plant structures with 0 and +1 dewlap

It was observed that the growth of the stem was in a straight direction and that the leaves spread alternately in several directions. The growing structure was also observed to differ between individual sugarcane plants. One can easily distinguish the position of the dewlaps in an image, as the eye is able to recognize the boundary shape and colour features of the plant. To measure growth using an image processing algorithm, it is likewise necessary to distinguish the distinctive boundary shape and colour features of sugarcane plants [65]. The features considered are as follows:

1. Stem: Shape - straight; Colour - red to green
2. Leaf blade: Shape - gently curved; Colour - green
3. Tip of leaf blade: Shape - acute angle; Colour - green
4. Dewlap: Shape - obtuse angle at the joint of a leaf blade and a leaf sheath; Colour - green

Since the structure of each plant differs with its degree of growth, three cases of plant structure are distinguished:

Case 1: The plant has only one leaf, which is undeveloped. Such plants are discarded from the measurement procedure. A pictorial representation of this possibility is shown in Figure 5.2(a).

Case 2: One leaf axil is seen on the stem edge and the other leaf is undeveloped along the stem. There are two possibilities:

Figure 5.3: Sugarcane plant structures with +2 dewlaps

1. If the slope of the left leaf is gentler than that of the undeveloped leaf, then the left leaf is the +1 dewlap and the undeveloped leaf is the 0 leaf.
2. If the slope of the right leaf is gentler than that of the undeveloped leaf, then the right leaf is the +1 dewlap and the undeveloped leaf is the 0 leaf.

A pictorial representation of these two possibilities is shown in Figure 5.2(b),(c).

Case 3: If two leaf axils are seen on the stem edge and an undeveloped leaf lies along the stem, then the +1 dewlap is at the left, the +2 dewlap is at the right and the 0 leaf is along the stem, as shown in Figure 5.3(a),(b).

First fully developed leaf: This is the +1 leaf of sugarcane. As per experts' suggestions, a leaf is termed the first fully developed leaf if the angle between A and B of the plant model is at least 20 degrees, i.e.

∠AOB = φ ≥ 20°    (5.1)

where A is the leaf blade boundary, B is the stem boundary and O is the joining point of A and B. Thus, the growth of the sugarcane plant is measured from the lower end of the stem to the +1 dewlap.

5.2.2 Implementation

In this research, a sequence of steps is followed in order to correctly measure the growth of the plant.

Algorithm:

1. Acquire the image
2. Resize the image and convert it to a proper format
3. Segment the image (convert it to a binary image)
4. Based on the segmented image, measure the properties of the image
5. Compute the reference height and reference factor from these properties
6. For measurement of height:
   (a) Compute the properties of the segmented image
   (b) Smooth the image to trace the object from the bottom of the image
   (c) Find all left and right branches
   (d) Compute the angle of intersection and decide whether it is a branch
   (e) When the last branch is reached, count the number of pixels from the first point to the end point (last branch)
   (f) Calculate the growth of the plant using the reference factor computed earlier

A short description of the process and the steps involved is:

1. Image acquisition: A digital camera (Nikon, 12 megapixels, 4X zoom) was used to acquire images. For proper visibility of the plant and easy analysis, the following conditions were used.
   (a) Background: In the young stages of the plant, the leaf colour is green and the stem colour shade ranges from red to green. A white background is therefore used for accurate segmentation.
   (b) Standard reference: A brown strip of known size (90 mm in height) is fixed on the white background and used as the standard reference in growth measurement.
   (c) Adjustable position: The distance between the camera and the pot is adjusted so that the camera covers the full extent of the stem from the lower end to the top, together with the brown strip, in one image.

   (d) Environment: All images are captured under cool white fluorescent light. A sample image of the sugarcane plant is shown in Figure 5.4.

Figure 5.4: Original image of sugarcane plant

2. Image re-sizing: A high resolution camera was used, so the acquired colour image is down sampled by a factor of 10 to reduce processing time. The resizing of the image was performed by interpolation [44].

3. RGB to grayscale conversion: The input image is first converted to a grayscale intensity image. The formula used to convert the input colour image to gray scale is given in equation (5.2):

I = 0.3R + 0.59G + 0.11B    (5.2)

Because the image is captured under constant cool white fluorescent light with the plant placed in front of the white background, there is a large difference between the gray values of the two groups, object and background, as shown in Figure 5.5.

Figure 5.5: Gray scale image of sugarcane plant
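As an illustration of steps 2 and 3, a minimal sketch using OpenCV and NumPy follows; the file name and the use of OpenCV are assumptions, while the down sampling factor and the weights of equation (5.2) follow the text.

```python
import cv2
import numpy as np

# Load the colour image (file name is illustrative)
bgr = cv2.imread("sugarcane.jpg")

# Down sample by a factor of 10 using interpolation to cut processing time
small = cv2.resize(bgr, None, fx=0.1, fy=0.1, interpolation=cv2.INTER_AREA)

# Weighted grayscale conversion, I = 0.3R + 0.59G + 0.11B (equation 5.2)
b, g, r = cv2.split(small.astype(np.float32))
gray = (0.3 * r + 0.59 * g + 0.11 * b).astype(np.uint8)
```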

4. Image segmentation: Image segmentation is the important step of separating the regions of special significance in the image; these regions should not intersect with each other and each region should meet the consistency conditions within itself [66]. In this experiment the standard reference and the plant are separated from the white background.

   Segmentation technique: From the image statistics, the maximum and minimum gray levels are obtained and the threshold value is computed as the average of these two levels [67]. This threshold converts the gray scale image to a binary image. The threshold value for the sample image comes out to be 91 and the resultant binary image is shown in Figure 5.6.

Figure 5.6: Binary image of sugarcane plant

5. Image filtering: The binary image is negated and morphological operations (close and fill) are used to filter the image. Morphological operations are based on relations between two sets: one is the image and the other is a small probe, called a structuring element. The structuring element systematically traverses the image, and its relation to the image at each position is stored in the output image [47]. Closing tends to smooth sections of the contour; it generally fuses narrow breaks and long thin gulfs, eliminates small holes and fills gaps in the contour. The closing of set A by structuring element B, denoted A • B, is defined as:

A • B = (A ⊕ B) ⊖ B    (5.3)

Closing might result in the amalgamation of disconnected components, generating new holes, so region filling has to be applied. The algorithm for region filling is based on set dilation, complementation and intersection, as shown in Equation (5.4):

X_k = (X_{k-1} ⊕ B) ∩ A^c,   k = 1, 2, 3, ...    (5.4)

where A is the set of boundary pixels and B is the structuring element. The algorithm terminates at iteration step k if X_k = X_{k-1}.
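A minimal sketch of the thresholding and morphological filtering described in steps 4 and 5, continuing from the gray array of the previous sketch and using OpenCV and SciPy; the 5x5 structuring element is an assumed choice, not specified in the text.

```python
import cv2
import numpy as np
from scipy import ndimage

# Threshold from the average of the minimum and maximum gray levels (step 4)
t = (int(gray.min()) + int(gray.max())) // 2          # ~91 for the sample image

# The plant and strip are darker than the white background, so keeping values
# below the threshold also performs the negation mentioned in step 5
binary = (gray < t).astype(np.uint8)

# Morphological closing followed by hole filling (step 5)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
filled = ndimage.binary_fill_holes(closed).astype(np.uint8)
```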

In the sample image, the lower leaf is removed during the erosion operation because of its very thin edge.

6. Computation of reference factor: The binary image includes the reference object and the plant. To calculate the reference factor it is necessary to separate the standard reference from the image. The image consists of two labeled regions corresponding to the plant and the reference object. The labeled image is shown in Figure 5.7.

Figure 5.7: Plant and reference object labeled image

   From the labeled image the area of the reference object is computed. In the sample labeled image the area of the reference object is 269 pixels; the reference object is therefore separated from the plant object by selecting regions with an area less than 300 pixels and greater than 30 pixels (the lower bound removes unwanted noise pixels). The reference factor is calculated as:

Reference Height (H_r) = Max rowsize − Min rowsize    (5.5)

   In the sample image the maximum and minimum row positions are 62 and 3 respectively, hence H_r = 59.

Reference Factor (R_f) = Standard reference height / H_r    (5.6)

   In the sample image H_r = 59 and the standard reference height is 90 mm, so R_f = 90/59 ≈ 1.525. This value is required to calculate the actual growth of the plant at the end.
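Continuing the sketch, the labeling and reference factor computation of equations (5.5) and (5.6) might look as follows, assuming the filled binary image from the previous step; the 30-300 pixel area window and the 90 mm strip height follow the text.

```python
import numpy as np
from scipy import ndimage

# Label connected regions of the filled binary image (plant + reference strip)
labels, n = ndimage.label(filled)
areas = ndimage.sum(filled, labels, index=range(1, n + 1))

# The reference strip is the region with between 30 and 300 pixels (sample: 269)
ref_label = 1 + int(np.argmax((areas > 30) & (areas < 300)))
rows = np.where(labels == ref_label)[0]

# Reference height in pixels (eq. 5.5) and reference factor in mm/pixel (eq. 5.6)
H_r = rows.max() - rows.min()            # 62 - 3 = 59 in the sample image
STANDARD_REF_MM = 90.0                   # physical height of the brown strip
R_f = STANDARD_REF_MM / H_r              # ~1.525 mm per pixel
```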

Figure 5.8: Cropped image of plant

7. Computation of growth of the sugarcane plant: For the sample plant shown in Figure 5.7, the pixels corresponding to the reference object number fewer than 300. After morphological filtering, the image is cropped to the extent of the plant pixels from top to bottom, as shown in Figure 5.8. This extracted area is considered the dewlap search region.

   Extraction of a dewlap: Dewlaps are searched for on the assumption that they appear on the stem edge in alternating positions. It is difficult to search the edge of the stem if withered leaves exist in the image, so withered leaves are eliminated before dewlap detection. The boundary shape of the main stem is almost straight in each plant. Clockwise boundary tracking is performed on the dewlap search area [68]. In the binary image, all white pixels belong to the object and all black pixels constitute the background, as shown in Figure 5.9.

Figure 5.9: Boundary tracking

The y-coordinate of the start point is the same as the y-coordinate of the lower end of the stem. Numbers from the start, n, called boundary numbers, and the coordinates (X_p(n), Y_p(n)) are recorded. Chain coding, which expresses the moving direction between adjacent pixels on a boundary, is one way to provide for shape recognition. Since chain-code directions are limited to −180, −135, −90, −45, 0, 45, 90 and 135 degrees, the boundary curvature is instead defined by the exterior angle formed by the present boundary pixel and the pixels 20 positions apart from it in the backward and forward directions, which increases the angular resolution. The boundary curvature φ(n) is calculated and recorded for all boundary pixels. Figure 5.10 shows the curvatures at a dewlap, the tip of a leaf and an axil.

Figure 5.10: Boundary tracking in both directions

After tracing in both directions (right and left) and obtaining the boundary curvature, we obtain the exterior angle [69]. Leaf tips and axils are clearly detected from the boundary curvature, therefore the axils that correspond to the dewlaps are detected first and the dewlaps are then determined based on the locations of the axils [70]. The boundary pixels that satisfy the axil inequality are confirmed as candidates for axils.
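A minimal sketch of the boundary curvature computation described above, assuming the boundary has already been extracted as an ordered list of (x, y) pixel coordinates (for example with OpenCV's findContours); the 20-pixel offset follows the text.

```python
import numpy as np

def boundary_curvature(boundary, k=20):
    """Exterior angle (degrees) at each boundary pixel, using the points
    k positions behind and ahead along the traced boundary; a straight
    boundary gives an exterior angle near zero."""
    pts = np.asarray(boundary, dtype=float)
    n = len(pts)
    angles = np.zeros(n)
    for i in range(n):
        back = pts[(i - k) % n] - pts[i]      # vector to the backward point
        fwd = pts[(i + k) % n] - pts[i]       # vector to the forward point
        cosang = np.dot(back, fwd) / (np.linalg.norm(back) * np.linalg.norm(fwd) + 1e-9)
        interior = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        angles[i] = 180.0 - interior          # exterior angle
    return angles
```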

Figure 5.11: Sugarcane plant structure with +1 dewlap

Referring to Figure 5.11, the inequality is stated in terms of ∠AOB = φ, where A = leaf blade boundary, B = stem boundary and φ = angle between A and B. If two or more pixels satisfy this inequality at one axil, the one nearest the lower end of the stem is selected as the true axil. The detected axils are ordered by their distance from the lower end of the stem [71, 72]. The pair (Xai, Yai) consists of the x and y coordinates of the i-th highest axil. The objective of this process is to detect the dewlap of the +1 leaf. However, the structure of each shoot differs according to its degree of growth and facing direction. In the sample image, the structure of the sugarcane plant corresponds to case 3 described earlier among the growth structures of the plant: there are two or more axils. The boundary number of the highest axil, p, and that of the second highest one, q, are compared; if p is smaller than q, the left side of the shoot is searched. It is assumed that if the angle φ is greater than or equal to 20° the leaf is fully developed. The angle φ is calculated as below:

φ = acos(d_p / (|A| · |B|)) × (180/π)    (5.7)

where d_p is the dot product of the vectors A and B, |A| and |B| are the lengths of the vectors, and acos is the inverse cosine, which returns radians (converted to degrees by the factor 180/π).

Computation of growth of the plant: To detect the developed leaf and its angle, the stem is traced in a bottom-up fashion. The topmost leaf blade with angle φ ≥ 20° and the greatest distance from the lower end is the +1 dewlap [73]. The distance of the +1 dewlap from the lower end is the growth of the plant, i.e.

L = (E_max × R_f) × Averaging factor    (5.8)

where L is the growth of the plant in mm, E_max is the maximum pixel count from the lower end (as computed) and R_f is the reference factor (as computed). The averaging factor is 0.86: since the pot and the standard reference are not in the same plane, this correction factor is applied; its value was obtained after iterative evaluation over 50 images. For the sample image, E_max = 47 and R_f ≈ 1.525, thus L = 61.65 mm. The sample image with the +1 dewlap detected is shown in Figure 5.12.

5.2.3 Results and Discussions

In sugarcane agriculture it is essential to monitor the growth during the first two stages, germination and tillering, for efficient fertilization and irrigation.
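A small worked sketch of equations (5.7) and (5.8) with the sample values quoted in the text; the vectors A and B are assumed to be given as NumPy arrays.

```python
import numpy as np

def angle_deg(A, B):
    """Angle phi between two boundary vectors A and B (equation 5.7)."""
    d_p = np.dot(A, B)
    cosang = d_p / (np.linalg.norm(A) * np.linalg.norm(B))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Growth of the plant in mm (equation 5.8); values from the sample image
E_max = 47            # pixel count from the lower end to the +1 dewlap
R_f = 90.0 / 59.0     # reference factor computed earlier (~1.525 mm/pixel)
AVERAGING_FACTOR = 0.86
L = E_max * R_f * AVERAGING_FACTOR   # ~61.65 mm
```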

Figure 5.12: +1 dewlap detected image of plant

The approximate period of these growth stages is 120 days. In this research the vertical growth of sugarcane plants planted in pots was also measured by the traditional method, that is, with a meter scale under the guidance of a sugarcane agricultural scientist. These readings are considered the standard measurements (SM) for validation of the results of the designed growth measurement algorithm. For experimentation, 72 sample images of sugarcane plants were tested using the designed algorithm. A tabular representation of the growth measurements of these 72 samples is given in Appendix-C. The objective of this algorithm is to measure the growth (i.e. the length from the lower end of the stem to the +1 dewlap). The accuracy of the growth measured by the designed algorithm is calculated with reference to the standard measurements. Equations 5.9 and 5.10 give the Deviation Factor (DF) and Accuracy (AC) as:

DF = ((SM − EM) / SM) × 100    (5.9)

AC = 100 − DF    (5.10)

where DF is the deviation factor, AC the accuracy, SM the standard measurement (manual method) and EM the experimental measurement (algorithm). The accuracy of the growth measurement by the proposed method comes out to be 96%. A comparison of the growth measured by the proposed method and the growth measured by the meter scale is shown in Figure 5.13.

Figure 5.13: Correlation between growth measured by standard and experimental measurement methods

The Root Mean Square Error (RMSE) of the computed values with respect to the meter scale values, reported in Figure 5.13, together with the overall result indicates that there is good agreement between the growth measured by the meter scale and the growth measured by the proposed algorithm.

5.3 Chlorophyll Measurement

Chlorophyll is a green pigment which is the basic ingredient of the leaf of a plant [74]. It is due to the presence of chlorophyll that leaves are green in nature. Chlorophyll absorbs certain wavelengths within the spectrum of visible light, as shown in Figure 5.14. It absorbs in both the red region (long wavelengths) and the blue region (short wavelengths) of the visible spectrum while reflecting the green wavelengths, which makes the plant appear green [75, 76]. Plants are able to satisfy their energy requirements by absorbing light from the blue and red parts of the spectrum. However, there is still a large spectral region between 500 and 600 nm where chlorophyll absorbs little light.

Figure 5.14: Absorption spectra of chlorophyll

Chlorophyll meters measure the optical absorption of a leaf to estimate its chlorophyll content [77]. Chlorophyll molecules absorb in the blue and red bands, but not in the green and infra-red bands, so the meters measure the amount of absorption in these bands to estimate the amount of chlorophyll present in the leaf.

5.3.1 Analysis and Design

From the absorption spectra of chlorophyll, it can be seen that there is a good relationship between the chlorophyll content of a leaf and the primary colours of light (red, green and blue). The intensity of colour for each leaf disk is expressed in an arbitrary unit of intensity called the inverse integrated gray value per pixel [26], which is proportional to the actual concentration of chlorophyll per pixel. Thus the concentration of chlorophyll per pixel can be measured from the colour image of a leaf. The chlorophyll value of a sugarcane leaf can be measured with a chlorophyll meter, but the meter tends to be costly [25] and suffers from the further limitation that readings cannot be taken at low light intensity. There is thus a need for a cost effective system which overcomes these limitations. Hence an algorithm is designed to measure the chlorophyll content of the sugarcane leaf using RGB based image analysis. Presumably, any colour can be decomposed into the primary colour components (RGB), and the intensity of an individual colour can, to some extent, be represented by brightness in a digital image [78].

Thus it is possible to develop a mathematical correlation between the chlorophyll content of the leaf of the sugarcane plant and the brightness values of the primary colours. For the purpose of verifying the results, the chlorophyll value of the sugarcane leaf was measured as SPAD (Soil Plant Analysis Development) values with the help of a chlorophyll meter (CM 1000 meter manufactured by Spectrum Technologies, USA). The chlorophyll meter measures the transmission of red light (650 nm), at which chlorophyll absorbs light, as well as the transmission of infrared light at 940 nm, where no absorption occurs. The instrument calculates a SPAD value based on these two transmission values, and this value is well correlated with chlorophyll content. The chlorophyll meter calculates the SPAD value (M) by means of Equation 5.11:

M = log[(I′_940 / I_940) / (I′_650 / I_650)] = log[(I′_940 · I_650) / (I′_650 · I_940)]    (5.11)

where I and I′ denote the incident and transmitted intensities at the indicated wavelengths. An exponential relationship exists between the SPAD value and the chlorophyll content: the meter output can be converted into leaf chlorophyll content (µmol m⁻²) by the exponential relation 10^(M^0.261), where M represents the meter output. The meter readings are considered the standard measured values.

Proposed image processing based method of chlorophyll measurement: Taking into consideration the area of the leaf targeted by the chlorophyll meter (CM 1000) during measurement, the leaf was cut to a 6 × 2 cm size and an image of the leaf was captured with a digital camera. From this image, the mean brightness of each primary colour (R, G and B) was recorded. The colour components were processed separately and the data were sorted according to the colour distribution, as shown in Figure 5.15.

Figure 5.15: The histogram of the brightness values of the primary colours

The histogram represents the number of pixels occurring in each colour plane with a particular brightness value. The mean brightness values of the three colour components, with the standard deviations for a sample, were obtained from the histogram and are presented in Table 5.1.

Table 5.1: Mean brightness values with standard deviations and coefficients of variation for the red, green and blue components

The small coefficients of variation for the three primary colours indicate that the mean brightness values are highly reproducible. The mean brightness values of the three primary colours were linearly correlated to obtain the characteristic RGB model:

Y = aR + bG + cB    (5.12)

where Y is the predicted chlorophyll, R, G and B are the basic primary colours and a, b and c are model parameters. In the next step these primary colours are transformed into spectral parameters, namely Hue (H), Saturation (S) and Value (V), and the chlorophyll content of the leaf is obtained as in [78]:

Y = aH + bS + cV    (5.13)

where Y is the predicted chlorophyll, H, S and V are the spectral parameters derived from the primary colours and a, b and c are model parameters, evaluated using a regression method (discussed in detail in a further section). Thus the proposed system for measurement of the chlorophyll content of a sugarcane leaf involves:

1. Capturing the image

2. Computation of the mean H, S and V values
3. Estimation of the chlorophyll content of the leaf

5.3.2 Implementation

In this research an algorithm is designed to measure the chlorophyll content of sugarcane leaves.

Algorithm:

1. Acquire the image of the leaf
2. Preprocess the image to convert it into a proper format:
   Image resizing
   RGB to HSV conversion
3. Segment the leaf region:
   Smoothing the image
4. Feature extraction:
   Compute the model parameters a, b and c
   Estimate the chlorophyll value from the model

1. Acquiring the images: The following experimental set-up was used for image acquisition:
   (a) Digital camera: a 12 megapixel, 4X zoom camera (Nikon) was used for acquiring the images.
   (b) Background selection: Selection of the background is an important step for distinguishing the object from the background. Magenta is complementary to green and therefore provides good colour contrast, making hue separation successful, but the sharp edge between the magenta and green areas introduces noise during background separation; a white background was therefore used [79].
   (c) The sugarcane leaf was placed flat on the white background in natural light. The optical axis of the digital camera was adjusted perpendicular to the leaf plane to capture the images.

2. Preprocessing of the image to convert it into a proper format: To separate the leaf from the background, the following steps are followed.

   Re-size the image: The captured image is large, so it is down sampled by a factor of 10 to improve the speed of further processing. The resizing of the image is performed by interpolation [44]. This resized colour image is used for the HSV space conversion.

   RGB to HSV conversion: The hue (H) defines the colour according to wavelength, while the saturation (S) is the amount of colour. An object that has a deep, rich tone has a high saturation and one that looks washed out has a low saturation. The last component, value (V), describes the amount of light in the colour [80]. The main disadvantage of the RGB model is that humans do not see colour as a mix of three primaries; rather, our vision differentiates between hues of high or low saturation and intensity, which makes the HSV colour model closer to human perception than the RGB model. In HSV space it is easy to change the colour of an object in an image and still retain variations in intensity and saturation such as shadows and highlights; this is achieved simply by changing the hue component, which would be impossible in RGB space. This feature implies that the effects of shading and lighting can be reduced [14].

   Let r, g, b ∈ [0, 1] be the red, green and blue coordinates, respectively, of a colour in RGB space. Let max be the greatest of r, g and b and min the least. The hue angle h ∈ [0, 360] of HSV space is computed as:

   h = 0,                                           if max = min
   h = (60° · (g − b)/(max − min) + 0°) mod 360°,   if max = r
   h = 60° · (b − r)/(max − min) + 120°,            if max = g
   h = 60° · (r − g)/(max − min) + 240°,            if max = b        (5.14)

   The value of h is normalized to lie between 0 and 360°, and h = 0 is used when max = min (gray values), although hue has no geometric meaning there since the saturation is zero. The values of s and v of an HSV colour are computed as:

Figure 5.16: Original image of sugarcane leaf

s = 0 if max = 0; otherwise s = (max − min)/max = 1 − min/max    (5.15)

v = max    (5.16)

The range of an HSV vector is a cube in the Cartesian coordinate system, but since hue is really a cyclic property, with a cut at red, visualizations of this space invariably involve hue circles; cylindrical and conical depictions are the most popular, and spherical depictions are also possible.

3. Segmentation of the leaf region: The digital image of the leaf grabbed by the camera is composed of the leaf and the background, as shown in Figure 5.16. To obtain the leaf features, the leaf first needs to be separated from the background [81], and it is important to choose a proper colour space for effective segmentation. In this experimentation, leaf segmentation is achieved by a threshold algorithm with features I_1, I_2 and I_3, as follows:

I_1(find(D_H ≤ T_1)) = 1    (5.17)

I_2(find(D_S ≤ T_2)) = 1    (5.18)

I_3(find(D_V ≤ T_3)) = 1    (5.19)

Figure 5.17: RGB and HSV colour values

where D_H is the difference between the absolute H value for green and the actual H value of the leaf, D_S is the difference between the absolute S value for green (i.e. 0.6) and the actual S value of the leaf, D_V is the difference between the absolute V value for green (i.e. 0.1) and the actual V value of the leaf, T_1, T_2 and T_3 are the tolerances on the positive side, i.e. 0.15, 0.5 and 0.5 respectively, and I_1, I_2 and I_3 are the colour space characteristic masks. The absolute H, S and V constants are set from Figure 5.17. The leaf mask is obtained by combining the three masks:

I = I_1 · I_2 · I_3    (5.20)

A sketch of this conversion and segmentation is given below.
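The following is a minimal sketch of the HSV conversion and the tolerance based segmentation of equations (5.14)-(5.20), assuming the resized leaf image is available as a NumPy array rgb with values in [0, 1]. The S and V references and the tolerances follow the text; the H reference for green is an assumed value (about 120 degrees, i.e. 0.33 on a 0-1 hue scale), since the constant read from Figure 5.17 is not reproduced here.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

# rgb: resized colour image as float array in [0, 1], shape (rows, cols, 3)
hsv = rgb_to_hsv(rgb)
H, S, V = hsv[..., 0], hsv[..., 1], hsv[..., 2]

# Reference H/S/V for green leaf tissue and positive-side tolerances (eqs 5.17-5.19).
# S_REF, V_REF and the tolerances follow the text; H_REF is assumed (~0.33).
H_REF, S_REF, V_REF = 0.33, 0.6, 0.1
T1, T2, T3 = 0.15, 0.5, 0.5

# One-sided tolerance tests, following the text's "tolerances at positive side"
I1 = (H - H_REF) <= T1
I2 = (S - S_REF) <= T2
I3 = (V - V_REF) <= T3

# Combined binary leaf mask, I = I1 . I2 . I3 (eq 5.20)
I = (I1 & I2 & I3).astype(np.uint8)
```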

Figure 5.18: Binary image of leaf

Here I is the binary image shown in Figure 5.18, where white pixels correspond to the leaf region and black pixels correspond to the background.

   Image smoothing: In the second step, the binary image of the leaf is morphologically processed to fill holes in the image. A hole is a set of background pixels that cannot be reached by filling in the background from the edge of the image. The morphological process called closing (i.e. dilation followed by erosion) is performed; it tends to smooth sections of contours, generally fuses narrow breaks and long thin gulfs, eliminates small holes and fills gaps in the contour, which removes the noise pixels present in the leaf image [47]. Finally, it bridges unconnected pixels in the image. In the third step, a new image is formed from the original image and the binary leaf image; it consists of the leaf only, without background, and contains the corresponding R, G and B values of each white pixel from the original leaf image. The leaf image separated from the background is shown in Figure 5.19. In this way the leaf is separated from the background and the computational cost is significantly reduced, because the follow-up algorithm works only on the leaf area [81].

4. Feature extraction: Feature extraction includes colour image processing steps to measure the purity of the green colour, which indicates the amount of chlorophyll present in the leaf. From this green image, the mean values of the H, S and V components are computed.

   Computation of model parameters: The model parameters in Equation (5.13) are determined by the least squares method, using the individual H, S and V values and the chlorophyll (Y) measured by the chlorophyll meter for 120 leaves. The parameters a, b and c are determined by the matrix expression for the H, S and V model shown below:

[a b c]^T = (A_HSV^T A_HSV)^(−1) A_HSV^T Y    (5.21)

   A_HSV represents the mean spectral values and the vector Y represents the chlorophyll content of the leaves as determined by the chlorophyll meter. The model parameters a, b and c for the sugarcane leaves were computed from these measurements.
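A minimal sketch of the least squares step of equation (5.21) and the prediction of equation (5.13), assuming the 120 per-leaf mean H, S, V values and the corresponding meter readings are already available as the NumPy arrays A_hsv and y (both names are illustrative).

```python
import numpy as np

# A_hsv: (120, 3) array of per-leaf mean H, S, V values
# y:     (120,)  array of SPAD readings from the chlorophyll meter
# Least squares solution of y ~ aH + bS + cV (equation 5.21)
a, b, c = np.linalg.lstsq(A_hsv, y, rcond=None)[0]

def predict_chlorophyll(h, s, v):
    """Predicted chlorophyll for one leaf from its mean H, S, V (equation 5.13)."""
    return a * h + b * s + c * v
```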

Figure 5.19: Leaf separated from the background

   Estimation of chlorophyll value: Referring back to equation (5.13), the chlorophyll content of the sugarcane leaf is computed in HSV colour space as Y = aH + bS + cV. For a sample image, the mean spectral parameters H, S and V and the previously calculated model parameters a, b and c are substituted into this expression; the chlorophyll value estimated by the proposed algorithm is 59.4, which is compared against the reading of the chlorophyll meter for the same leaf.

5.3.3 Results and Discussions

The chlorophyll content of the sugarcane leaves is predicted using the HSV model, with the model parameters determined by the regression method described above.

Figure 5.20: Correlation between actual chlorophyll and predicted chlorophyll

A comparison of the chlorophyll value estimated by the algorithm and the actual chlorophyll value measured by the chlorophyll meter is shown in Figure 5.20. The chlorophyll meter readings have already been found to be well correlated with leaf chlorophyll content extracted through the organic solvent method. The Root Mean Square Error (RMSE) of the chlorophyll values estimated by the algorithm with respect to those measured by the chlorophyll meter is small, which clearly indicates that the two sets of values almost overlap. A tabular representation of the chlorophyll measured by the algorithm and by the chlorophyll meter for the 120 samples is given in Appendix-D.

5.4 Disease Severity Measurement

Plant disease symptoms can be measured in various ways [82] that quantify the intensity, prevalence, incidence and severity of disease.

1. Disease intensity is a general term used to describe the amount of disease present in a population.
2. Disease prevalence is the proportion of fields, countries, states, etc. where the disease is detected; it reveals disease at a larger scale than incidence.

3. Disease incidence is the proportion or percentage of plants diseased out of the total number assessed.
4. Disease severity is the area (relative or absolute) of the sampling unit (leaf or fruit) showing symptoms of disease. It is most often expressed as a percentage or proportion.

5.4.1 Analysis and Design

In this work the disease severity measure has been considered particularly for brown spot diseased sugarcane leaves. Brown spot is a fungal disease which causes reddish-brown to dark-brown spots on sugarcane leaves. The spots are oval to circular in shape, often surrounded by a yellow halo, and are equally visible on both sides of the leaf [31]. The long axis of the spot is usually parallel to the midrib. They can vary from minute dots to spots attaining 1 cm in diameter. The disease severity of the plant leaves is measured by counting pixels and is defined as the ratio of the number of pixels in the diseased region to the total number of pixels in the leaf region [83]. It is expressed as:

Disease Severity, S = A_d / A_l    (5.22)

where A_d is the diseased leaf area and A_l the total leaf area. Here,

A_d = P_d = Σ_{(x,y) ∈ R_d} L(x,y)    (5.23)

A_l = P_l = Σ_{(x,y) ∈ R_l} L(x,y)    (5.24)

S = P_d / P_l    (5.25)

where L is the leaf image, P_d the total number of pixels in the diseased region, P_l the total number of pixels in the leaf region, R_d the diseased region and R_l the leaf region.
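A minimal sketch of the severity computation of equations (5.22)-(5.25), assuming the leaf and diseased regions are available as binary masks (names are illustrative).

```python
import numpy as np

def disease_severity(leaf_mask, disease_mask):
    """Percentage disease severity S = P_d / P_l (equations 5.22-5.25).

    leaf_mask    : binary array, 1 where the pixel belongs to the leaf region
    disease_mask : binary array, 1 where the pixel belongs to a diseased spot
    """
    P_l = np.count_nonzero(leaf_mask)      # total pixels in the leaf region
    P_d = np.count_nonzero(disease_mask)   # total pixels in the diseased region
    return 100.0 * P_d / P_l
```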

In sugarcane agronomy it is observed that the length and width of a leaf can reach up to 1.5 meters and 3 inches respectively. To cover the whole leaf area in the image, the distance between the camera and the leaf must be adjusted proportionally. This reduces the resolution of the image and causes poor segmentation of the diseased part of the leaf. Consequently, to increase the accuracy of the disease severity measurement, the leaf is cut into pieces. Another important consideration in image capture is that the evaporation rate of sugarcane is 150 to 200 times faster than that of other plants, so a sugarcane leaf tends to wrinkle immediately after it is separated from the stalk; it is therefore advisable to capture the images quickly, without delay.

Today there is no standard for measuring the extent of disease severity [31], so an experiment was carried out to test the performance of the proposed system. This experiment consists of standard shapes with predefined dimensions drawn using a tool such as Paint, available in the Microsoft Windows operating system, as shown in Figure 5.21. The objective of this experiment is to check the accuracy of the disease severity measurement algorithm. The known shapes, namely a circle, triangle, square and rectangle, are given as input to the algorithm. The idea is that a green rectangle represents the leaf area and the other small known areas represent the diseased areas. During the experimentation, these small areas and their combinations are given as inputs to the algorithm and the area measured by the algorithm is compared with the predefined area.

Figure 5.21: Known area standard diagram

Physical descriptors are computed as the percentage accuracy (AC) and percentage deviation factor (DF), calculated from equations 5.9 and 5.10 as:

DF = ((SM − EM) / SM) × 100 and AC = 100 − DF

The areas of the circle, rectangle, square and triangle in the standard diagram, and their total, are 6.81, 0.9, 1.98, 2.49 and 12.18 cm² respectively, with the green part corresponding to the leaf area, as shown in Figure 5.21. To compute the disease severity accurately, the proposed system works in four steps: capture the image, compute the diseased leaf area, compute the total leaf area and calculate the disease severity. A detailed discussion of these four steps is given in the following subsections.

5.4.2 Implementation

To calculate the brown spot disease severity of a sugarcane leaf, the designed algorithm is as follows:

Algorithm:

1. Acquire the image of the diseased leaf
2. Preprocess the image to convert it into a proper format:
   Resize the image
   RGB to grayscale conversion
   RGB to HSI conversion
3. Segmentation of the leaf region
4. Computation of the leaf area
5. Segmentation of the diseased region:
   Morphological processing
6. Computation of the diseased area
7. Computation of the percentage disease severity using the diseased area and the leaf area

Figure 5.22: Original image of diseased leaf

1. Acquire the image of the diseased leaf: The following describes the experimental set-up used for image acquisition.
   (a) The leaf was placed on a light panel so that the spots become more visible.
   (b) Cool white fluorescent light sources were placed at 45° on each side of the leaf to eliminate reflections and obtain even lighting everywhere.
   (c) The camera was placed directly above the leaf to obtain a better snapshot.
   (d) To distinguish objects in an image an easily separable background is essential; a white background was therefore used in this experimentation.
   The sample image of the diseased leaf is shown in Figure 5.22. This image is used for further processing.

2. Preprocessing of the image: These steps convert the input sample image into a format more suitable for segmentation.

   Re-size the image: The digital camera used for capturing the image has 12 megapixels, so the input colour image is down sampled to reduce processing time. The down sampling is performed by interpolation [44]. The resized colour image is used for gray scale conversion.

   RGB to grayscale conversion: To reduce the complexity of obtaining the leaf region and the region of interest, the background should be removed first; the input image is therefore converted to a grayscale intensity image.

Figure 5.23: Gray scale image of diseased leaf

   The input colour image is converted to gray scale using equation (5.2), I = 0.3R + 0.59G + 0.11B. For proper contrast, the image of the diseased leaf is taken on a white background, which results in a large difference between the gray values of the two regions, i.e. the leaf region and the background. This gray scale image, shown in Figure 5.23, is further used for separating the leaf region from the background.

   RGB to HSI conversion: Certain aspects influence the experimental process:
   (a) The image quality is influenced by light changes and shadows.
   (b) The vein colour is usually paler than the leaf colour, and in the early stages of disease the green colour often fades.
   Therefore, in the course of disease segmentation, wrong segmentation often occurs [84, 85], and the RGB colour space is not suitable for this problem. Considering the above facts, the input RGB colour space is transformed to the HSI colour space. HSI space corresponds more closely to the visual characteristics of human beings, and since the brightness component is independent of the colour components, the colour components help to eliminate glare, shadow and other lighting factors during colour image segmentation [80]. The sample leaf image converted to HSI, with the H component of the sample image, is shown in Figure 5.24. The H component image is used for diseased region segmentation, as sketched below.
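A minimal sketch of the H component computation of the HSI model (the standard Gonzalez-Woods formulation is assumed here, since the exact conversion used is not reproduced), for an RGB image scaled to [0, 1].

```python
import numpy as np

def rgb_to_hue_hsi(rgb):
    """H component of the HSI model, rgb as float array in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-9
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    # Hue angle in degrees: theta when B <= G, otherwise 360 - theta
    return np.where(b <= g, theta, 360.0 - theta)
```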

Figure 5.24: RGB to H space converted image of diseased leaf

3. Image segmentation: In many vision applications it is useful to isolate the regions of interest from the background. The level to which the subdivision is carried out depends on the problem being solved; that is, segmentation should stop when the objects or regions of interest for the application have been isolated. Thresholding techniques are used for segmentation [42]. Thresholding often provides an easy and convenient way to perform segmentation on the basis of the different intensities or colours in the foreground and background regions of an image. In this section two different thresholding techniques are implemented, one to obtain the total leaf region pixels and one to obtain the pixels of the diseased region of the leaf. Brief descriptions of these techniques follow.

   Leaf region segmentation: To count the total pixels in the leaf region, the leaf must be separated from the background. The reflectance of the sugarcane leaf differs markedly from that of its background in the grayscale image, therefore the following threshold function is employed to segment the leaf from the background.

g_i(x,y) = 0           if f_i(x,y) ≤ T_i
g_i(x,y) = f_i(x,y)    if f_i(x,y) > T_i        (5.26)

where g_i(x,y) is the segmented gray level at pixel (x,y), f_i(x,y) is the original gray level at pixel (x,y), T_i is the threshold value and i represents the red, green and blue components. The use of a histogram to separate the regions is common practice. The histogram of the gray scale image is shown in Figure 5.25; it can be seen that it has bimodal characteristics. The left wave crest corresponds to the leaf region, which has the lower gray values, and there is a clear split between the background region and its corresponding wave crest. It can also be seen that there is a large difference between the gray values of the leaf and the background (the distance between the wave crests is more than 100 for the sample image). A simple thresholding technique using a single fixed threshold value would nevertheless not give a good result [83].

Figure 5.25: Histogram of grayscale image of diseased leaf

   The Otsu thresholding technique is therefore used to separate the leaf region from the background [52, 53]. The Otsu technique is an automatic threshold selection method which evaluates the separation (between-class variance) of the two groups, i.e. the leaf region and the background, and selects as the threshold the value giving the largest separation.
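A minimal sketch of the Otsu based leaf segmentation and pixel counting using OpenCV, assuming the 8-bit grayscale image gray from the preprocessing step; the polarity flag assumes the leaf appears darker than the white background and should be flipped otherwise.

```python
import cv2

# Otsu's method picks the threshold that maximizes the between-class variance
# of leaf and background; THRESH_BINARY_INV marks the (darker) leaf as white.
t, leaf_binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# P_l: total number of pixels in the leaf region (equation 5.24)
leaf_pixels = cv2.countNonZero(leaf_binary)
```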

Figure 5.26: Binary image of diseased leaf

   For the sample image, the resultant binary image obtained with the Otsu threshold is shown in Figure 5.26. The total number of pixels corresponding to the leaf region can easily be counted from this binary image.

4. Computation of leaf area: To count the total white pixels in the leaf region, the binary image is scanned top-down and the number of white pixels in the non-zero region is counted using equation (5.24), P_l = Σ_{(x,y) ∈ R_l} L(x,y). This gives the leaf pixel count for the sample image.

5. Disease region segmentation: The accurate segmentation of the disease region determines the success or failure of the experiment. As the disease characteristics vary, the boundary between the diseased and healthy parts of the leaf also varies, which results in a weak edge. Another problem is the presence of boundary objects; these objects are not complete and may hinder accurate segmentation, so they have to be removed. Hence the triangle thresholding method is used in this experiment [86].

Figure 5.27: Triangle method to select threshold value

   The histogram of the sample image is shown in Figure 5.27. To select the threshold value, a triangle is constructed by drawing a line between the maximum peak of the histogram, at brightness b_max, and the lowest occupied value b_min in the image. The distance d between this line and the histogram h(b) is computed for all values of b from b_min to b_max, and the value of b giving the maximum distance d is selected as the threshold. In the sample image, the coordinates of point b_min are (13, 2) and of b_max are (175, 3220). Using the resultant threshold, the H component of the HSI image is thresholded to convert the hue image into a binary image, as shown in Figure 5.28. This binary image is then morphologically processed to fill holes, a hole being a set of background pixels that cannot be reached by filling in the background from the edge of the image. The resultant image is thus free of noise pixels at the boundary, as shown in Figure 5.29.

6. Computation of diseased area: For the sample image, the pixel count of the diseased region R_d is given by equation (5.23), P_d = Σ_{(x,y) ∈ R_d} L(x,y).
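A minimal sketch of the triangle threshold selection described above, applied to a 256-bin histogram of the H component (OpenCV's built-in THRESH_TRIANGLE flag is an alternative); the implementation follows the geometric construction in the text.

```python
import numpy as np

def triangle_threshold(hist):
    """Triangle method: the threshold is the bin whose histogram point lies
    farthest from the line joining the histogram peak (b_max) and the
    lowest occupied bin (b_min)."""
    hist = np.asarray(hist, dtype=float)
    b_max = int(np.argmax(hist))                 # peak position
    b_min = int(np.nonzero(hist)[0][0])          # lowest occupied bin
    lo, hi = sorted((b_min, b_max))
    b = np.arange(lo, hi + 1)
    # Perpendicular distance from each (b, hist[b]) to the line b_min-b_max
    x1, y1, x2, y2 = b_min, hist[b_min], b_max, hist[b_max]
    d = np.abs((y2 - y1) * b - (x2 - x1) * hist[b] + x2 * y1 - y2 * x1)
    d /= np.hypot(y2 - y1, x2 - x1)
    return int(b[np.argmax(d)])

# Example usage on an H-component image quantized to 0..255 (illustrative)
# hist = np.bincount(h_image.ravel(), minlength=256)
# t = triangle_threshold(hist)
```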

Figure 5.28: Binary image of diseased leaf

7. Computation of the percentage disease severity using the diseased area and the leaf area: Disease severity is calculated using equation (5.25) and is always expressed as a percentage:

S = Σ_{(x,y) ∈ R_d} L(x,y) / Σ_{(x,y) ∈ R_l} L(x,y)

   The percentage severity %S is computed in this way for the sample image. The measured disease severity is expressed in a more quantitative way by using the disease severity scale developed by the agricultural scientists Horsfall and Heuberger [82].

Figure 5.29: Filtered image of diseased leaf

They developed an early interval scale which is still widely used as a way to save time in disease assessment. A disease score is calculated for the plant and rated on the scale; according to Table 5.2, the disease severity of the sample image falls in category 1.

Table 5.2: Disease severity scale developed by Horsfall and Heuberger

Category    Severity
0           Not infected
1           1-25% leaf area infected
2           26-50% leaf area infected
3           51-75% leaf area infected
4           >75% leaf area infected

5.4.3 Results and Discussions

In agricultural research laboratories a graphical method is used to measure the disease severity of leaves whose veins individually carry water and nutrients to and from the leaf (e.g. soybean, betel); in these cases it is easy to cut out the diseased part of the leaf and trace it on graph paper. In the sugarcane leaf, veins occur in bundles, called fibrovascular bundles, that carry water and nutrients to and from the leaf, and there is additional fibrous tissue that shapes and mechanically strengthens the leaf. This makes it difficult to cut the sugarcane leaf at the diseased place and trace it on graph paper. Another reason the graphical method is not suitable for sugarcane leaf disease severity measurement is that the evaporation rate of sugarcane is 150 to 200 times faster than that of other plants, so the sugarcane leaf tends to wrinkle directly after it is separated from the stalk. Traditionally, naked eye observation is used in production practice to assess the severity of disease, but the results are subjective and not accurate. For validation of the proposed algorithm, the known area standard diagram was designed, as discussed earlier in this chapter.

This section describes the detailed test strategies and the results obtained. Test cases and experiments were built to test the accuracy of the algorithm. A total of 80 diseased leaves were tested using the algorithm, and a tabular representation of the disease severity measurements for these 80 samples is given in Appendix E. With the algorithm developed in this work, area measurements were taken for the pre-defined shapes, and a comparison between the two sets of values gives the accuracy of the software. Physical descriptors are computed from equations (5.9) and (5.10) as:

DF = ((SM − EM) / SM) × 100 and AC = 100 − DF

The accuracy and results for the individual features are listed in Table 5.3, which compares the standard measure (SM), the experimental measure (EM), the deviation factor (DF) and the accuracy (AC) for the circle, rectangle, square and triangle, for the group of all shapes together, and for each pairwise combination of shapes.

Table 5.3: Comparison of measurement of features

The average accuracy over all the features comes out to be 98.60%, with the rectangle feature giving ideal results. These data confirm the accuracy (validity) of the system for measurement of disease severity.

5.5 Concluding Remarks

The algorithm developed for growth measurement of the sugarcane plant was tested on different conditions of the +1 dewlap. For validation, the results of the designed algorithm were compared with the results of the traditional method used to measure the growth of the sugarcane plant (i.e. the protractor and meter scale method).

The accuracy of the growth measurement by the algorithm is greater than 96%, and the RMSE value indicates that there is good agreement between the growth measured with the meter scale and that obtained by the proposed method. The accuracy is higher under constant lighting conditions. By detecting the boundary curvature, the axils, the leaf tips and the dewlaps could be found; however, it was difficult to determine the dewlap of the +1 leaf if the 0 leaf was almost fully developed.

For the determination of the chlorophyll content of sugarcane leaves, the HSV colour space was used. A linear mathematical HSV model is proposed to correlate with the chlorophyll content, going beyond simple correlation analysis. For validation, the results of the proposed algorithm were compared with the readings of the chlorophyll meter, and the resultant RMSE value indicates that there is good agreement between the chlorophyll measured by the chlorophyll meter and that obtained by the proposed method.

Disease symptoms of the plant vary significantly in the different stages of the disease, so the accuracy depends to some extent on the segmentation of the image. Otsu threshold segmentation is used for leaf region segmentation, but this thresholding method is not suitable for disease region segmentation because of the varying characteristics of the disease; the triangle thresholding method is therefore used to segment the disease region. There is no standard method for measuring the disease severity of a sugarcane leaf, so for validation of the results of the designed algorithm a known area standard diagram was designed. The average accuracy of the algorithm, tested using the known area standard diagram, is 98.60%. This indicates that the designed algorithm can measure the disease severity of the leaf accurately.


More information

Appendix III Graphs in the Introductory Physics Laboratory

Appendix III Graphs in the Introductory Physics Laboratory Appendix III Graphs in the Introductory Physics Laboratory 1. Introduction One of the purposes of the introductory physics laboratory is to train the student in the presentation and analysis of experimental

More information

Study and Analysis of various preprocessing approaches to enhance Offline Handwritten Gujarati Numerals for feature extraction

Study and Analysis of various preprocessing approaches to enhance Offline Handwritten Gujarati Numerals for feature extraction International Journal of Scientific and Research Publications, Volume 4, Issue 7, July 2014 1 Study and Analysis of various preprocessing approaches to enhance Offline Handwritten Gujarati Numerals for

More information

Keyword: Morphological operation, template matching, license plate localization, character recognition.

Keyword: Morphological operation, template matching, license plate localization, character recognition. Volume 4, Issue 11, November 2014 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Automatic

More information

Image and video processing

Image and video processing Image and video processing Processing Colour Images Dr. Yi-Zhe Song The agenda Introduction to colour image processing Pseudo colour image processing Full-colour image processing basics Transforming colours

More information

Computers and Imaging

Computers and Imaging Computers and Imaging Telecommunications 1 P. Mathys Two Different Methods Vector or object-oriented graphics. Images are generated by mathematical descriptions of line (vector) segments. Bitmap or raster

More information

IMAGE INTENSIFICATION TECHNIQUE USING HORIZONTAL SITUATION INDICATOR

IMAGE INTENSIFICATION TECHNIQUE USING HORIZONTAL SITUATION INDICATOR IMAGE INTENSIFICATION TECHNIQUE USING HORIZONTAL SITUATION INDICATOR Naveen Kumar Mandadi 1, B.Praveen Kumar 2, M.Nagaraju 3, 1,2,3 Assistant Professor, Department of ECE, SRTIST, Nalgonda (India) ABSTRACT

More information

Colors in Images & Video

Colors in Images & Video LECTURE 8 Colors in Images & Video CS 5513 Multimedia Systems Spring 2009 Imran Ihsan Principal Design Consultant OPUSVII www.opuseven.com Faculty of Engineering & Applied Sciences 1. Light and Spectra

More information

APPLICATION OF COMPUTER VISION FOR DETERMINATION OF SYMMETRICAL OBJECT POSITION IN THREE DIMENSIONAL SPACE

APPLICATION OF COMPUTER VISION FOR DETERMINATION OF SYMMETRICAL OBJECT POSITION IN THREE DIMENSIONAL SPACE APPLICATION OF COMPUTER VISION FOR DETERMINATION OF SYMMETRICAL OBJECT POSITION IN THREE DIMENSIONAL SPACE Najirah Umar 1 1 Jurusan Teknik Informatika, STMIK Handayani Makassar Email : najirah_stmikh@yahoo.com

More information

AGRICULTURE, LIVESTOCK and FISHERIES

AGRICULTURE, LIVESTOCK and FISHERIES Research in ISSN : P-2409-0603, E-2409-9325 AGRICULTURE, LIVESTOCK and FISHERIES An Open Access Peer Reviewed Journal Open Access Research Article Res. Agric. Livest. Fish. Vol. 2, No. 2, August 2015:

More information

Real Time Word to Picture Translation for Chinese Restaurant Menus

Real Time Word to Picture Translation for Chinese Restaurant Menus Real Time Word to Picture Translation for Chinese Restaurant Menus Michelle Jin, Ling Xiao Wang, Boyang Zhang Email: mzjin12, lx2wang, boyangz @stanford.edu EE268 Project Report, Spring 2014 Abstract--We

More information

Color images C1 C2 C3

Color images C1 C2 C3 Color imaging Color images C1 C2 C3 Each colored pixel corresponds to a vector of three values {C1,C2,C3} The characteristics of the components depend on the chosen colorspace (RGB, YUV, CIELab,..) Digital

More information

INDIAN VEHICLE LICENSE PLATE EXTRACTION AND SEGMENTATION

INDIAN VEHICLE LICENSE PLATE EXTRACTION AND SEGMENTATION International Journal of Computer Science and Communication Vol. 2, No. 2, July-December 2011, pp. 593-599 INDIAN VEHICLE LICENSE PLATE EXTRACTION AND SEGMENTATION Chetan Sharma 1 and Amandeep Kaur 2 1

More information

An NDVI image provides critical crop information that is not visible in an RGB or NIR image of the same scene. For example, plants may appear green

An NDVI image provides critical crop information that is not visible in an RGB or NIR image of the same scene. For example, plants may appear green Normalized Difference Vegetation Index (NDVI) Spectral Band calculation that uses the visible (RGB) and near-infrared (NIR) bands of the electromagnetic spectrum NDVI= + An NDVI image provides critical

More information

Color Image Processing

Color Image Processing Color Image Processing Color Fundamentals 2/27/2014 2 Color Fundamentals 2/27/2014 3 Color Fundamentals 6 to 7 million cones in the human eye can be divided into three principal sensing categories, corresponding

More information

LECTURE 07 COLORS IN IMAGES & VIDEO

LECTURE 07 COLORS IN IMAGES & VIDEO MULTIMEDIA TECHNOLOGIES LECTURE 07 COLORS IN IMAGES & VIDEO IMRAN IHSAN ASSISTANT PROFESSOR LIGHT AND SPECTRA Visible light is an electromagnetic wave in the 400nm 700 nm range. The eye is basically similar

More information

][ R G [ Q] Y =[ a b c. d e f. g h I

][ R G [ Q] Y =[ a b c. d e f. g h I Abstract Unsupervised Thresholding and Morphological Processing for Automatic Fin-outline Extraction in DARWIN (Digital Analysis and Recognition of Whale Images on a Network) Scott Hale Eckerd College

More information

NON UNIFORM BACKGROUND REMOVAL FOR PARTICLE ANALYSIS BASED ON MORPHOLOGICAL STRUCTURING ELEMENT:

NON UNIFORM BACKGROUND REMOVAL FOR PARTICLE ANALYSIS BASED ON MORPHOLOGICAL STRUCTURING ELEMENT: IJCE January-June 2012, Volume 4, Number 1 pp. 59 67 NON UNIFORM BACKGROUND REMOVAL FOR PARTICLE ANALYSIS BASED ON MORPHOLOGICAL STRUCTURING ELEMENT: A COMPARATIVE STUDY Prabhdeep Singh1 & A. K. Garg2

More information

Digital Image Processing 3/e

Digital Image Processing 3/e Laboratory Projects for Digital Image Processing 3/e by Gonzalez and Woods 2008 Prentice Hall Upper Saddle River, NJ 07458 USA www.imageprocessingplace.com The following sample laboratory projects are

More information

SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS

SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS RADT 3463 - COMPUTERIZED IMAGING Section I: Chapter 2 RADT 3463 Computerized Imaging 1 SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS RADT 3463 COMPUTERIZED IMAGING Section I: Chapter 2 RADT

More information

Version 6. User Manual OBJECT

Version 6. User Manual OBJECT Version 6 User Manual OBJECT 2006 BRUKER OPTIK GmbH, Rudolf-Plank-Str. 27, D-76275 Ettlingen, www.brukeroptics.com All rights reserved. No part of this publication may be reproduced or transmitted in any

More information

ME 6406 MACHINE VISION. Georgia Institute of Technology

ME 6406 MACHINE VISION. Georgia Institute of Technology ME 6406 MACHINE VISION Georgia Institute of Technology Class Information Instructor Professor Kok-Meng Lee MARC 474 Office hours: Tues/Thurs 1:00-2:00 pm kokmeng.lee@me.gatech.edu (404)-894-7402 Class

More information

Automatic Licenses Plate Recognition System

Automatic Licenses Plate Recognition System Automatic Licenses Plate Recognition System Garima R. Yadav Dept. of Electronics & Comm. Engineering Marathwada Institute of Technology, Aurangabad (Maharashtra), India yadavgarima08@gmail.com Prof. H.K.

More information

Using Curves and Histograms

Using Curves and Histograms Written by Jonathan Sachs Copyright 1996-2003 Digital Light & Color Introduction Although many of the operations, tools, and terms used in digital image manipulation have direct equivalents in conventional

More information

CHAPTER 6 COLOR IMAGE PROCESSING

CHAPTER 6 COLOR IMAGE PROCESSING CHAPTER 6 COLOR IMAGE PROCESSING CHAPTER 6: COLOR IMAGE PROCESSING The use of color image processing is motivated by two factors: Color is a powerful descriptor that often simplifies object identification

More information

Laboratory 1: Uncertainty Analysis

Laboratory 1: Uncertainty Analysis University of Alabama Department of Physics and Astronomy PH101 / LeClair May 26, 2014 Laboratory 1: Uncertainty Analysis Hypothesis: A statistical analysis including both mean and standard deviation can

More information

Color and More. Color basics

Color and More. Color basics Color and More In this lesson, you'll evaluate an image in terms of its overall tonal range (lightness, darkness, and contrast), its overall balance of color, and its overall appearance for areas that

More information

MODULE 4 LECTURE NOTES 1 CONCEPTS OF COLOR

MODULE 4 LECTURE NOTES 1 CONCEPTS OF COLOR MODULE 4 LECTURE NOTES 1 CONCEPTS OF COLOR 1. Introduction The field of digital image processing relies on mathematical and probabilistic formulations accompanied by human intuition and analysis based

More information

Introduction. The Spectral Basis for Color

Introduction. The Spectral Basis for Color Introduction Color is an extremely important part of most visualizations. Choosing good colors for your visualizations involves understanding their properties and the perceptual characteristics of human

More information

A Method of Multi-License Plate Location in Road Bayonet Image

A Method of Multi-License Plate Location in Road Bayonet Image A Method of Multi-License Plate Location in Road Bayonet Image Ying Qian The lab of Graphics and Multimedia Chongqing University of Posts and Telecommunications Chongqing, China Zhi Li The lab of Graphics

More information

Application Note (A13)

Application Note (A13) Application Note (A13) Fast NVIS Measurements Revision: A February 1997 Gooch & Housego 4632 36 th Street, Orlando, FL 32811 Tel: 1 407 422 3171 Fax: 1 407 648 5412 Email: sales@goochandhousego.com In

More information

Color and Color Model. Chap. 12 Intro. to Computer Graphics, Spring 2009, Y. G. Shin

Color and Color Model. Chap. 12 Intro. to Computer Graphics, Spring 2009, Y. G. Shin Color and Color Model Chap. 12 Intro. to Computer Graphics, Spring 2009, Y. G. Shin Color Interpretation of color is a psychophysiology problem We could not fully understand the mechanism Physical characteristics

More information

Color Reproduction. Chapter 6

Color Reproduction. Chapter 6 Chapter 6 Color Reproduction Take a digital camera and click a picture of a scene. This is the color reproduction of the original scene. The success of a color reproduction lies in how close the reproduced

More information

Interactive Computer Graphics

Interactive Computer Graphics Interactive Computer Graphics Lecture 4: Colour Graphics Lecture 4: Slide 1 Ways of looking at colour 1. Physics 2. Human visual receptors 3. Subjective assessment Graphics Lecture 4: Slide 2 The physics

More information

Positive Pixel Count Algorithm. User s Guide

Positive Pixel Count Algorithm. User s Guide Positive Pixel Count Algorithm User s Guide Copyright 2004, 2006 2008 Aperio Technologies, Inc. Part Number/Revision: MAN 0024, Revision B Date: December 9, 2008 This document applies to software versions

More information

Digital Image Processing

Digital Image Processing Digital Image Processing Lecture # 10 Color Image Processing ALI JAVED Lecturer SOFTWARE ENGINEERING DEPARTMENT U.E.T TAXILA Email:: ali.javed@uettaxila.edu.pk Office Room #:: 7 Pseudo-Color (False Color)

More information

CS 565 Computer Vision. Nazar Khan PUCIT Lecture 4: Colour

CS 565 Computer Vision. Nazar Khan PUCIT Lecture 4: Colour CS 565 Computer Vision Nazar Khan PUCIT Lecture 4: Colour Topics to be covered Motivation for Studying Colour Physical Background Biological Background Technical Colour Spaces Motivation Colour science

More information

TO PLOT OR NOT TO PLOT?

TO PLOT OR NOT TO PLOT? Graphic Examples This document provides examples of a number of graphs that might be used in understanding or presenting data. Comments with each example are intended to help you understand why the data

More information

Color. Used heavily in human vision. Color is a pixel property, making some recognition problems easy

Color. Used heavily in human vision. Color is a pixel property, making some recognition problems easy Color Used heavily in human vision Color is a pixel property, making some recognition problems easy Visible spectrum for humans is 400 nm (blue) to 700 nm (red) Machines can see much more; ex. X-rays,

More information

Image Processing for Mechatronics Engineering For senior undergraduate students Academic Year 2017/2018, Winter Semester

Image Processing for Mechatronics Engineering For senior undergraduate students Academic Year 2017/2018, Winter Semester Image Processing for Mechatronics Engineering For senior undergraduate students Academic Year 2017/2018, Winter Semester Lecture 8: Color Image Processing 04.11.2017 Dr. Mohammed Abdel-Megeed Salem Media

More information

License Plate Localisation based on Morphological Operations

License Plate Localisation based on Morphological Operations License Plate Localisation based on Morphological Operations Xiaojun Zhai, Faycal Benssali and Soodamani Ramalingam School of Engineering & Technology University of Hertfordshire, UH Hatfield, UK Abstract

More information

EECS490: Digital Image Processing. Lecture #12

EECS490: Digital Image Processing. Lecture #12 Lecture #12 Image Correlation (example) Color basics (Chapter 6) The Chromaticity Diagram Color Images RGB Color Cube Color spaces Pseudocolor Multispectral Imaging White Light A prism splits white light

More information

Learning Log Title: CHAPTER 2: ARITHMETIC STRATEGIES AND AREA. Date: Lesson: Chapter 2: Arithmetic Strategies and Area

Learning Log Title: CHAPTER 2: ARITHMETIC STRATEGIES AND AREA. Date: Lesson: Chapter 2: Arithmetic Strategies and Area Chapter 2: Arithmetic Strategies and Area CHAPTER 2: ARITHMETIC STRATEGIES AND AREA Date: Lesson: Learning Log Title: Date: Lesson: Learning Log Title: Chapter 2: Arithmetic Strategies and Area Date: Lesson:

More information

Hello, welcome to the video lecture series on Digital image processing. (Refer Slide Time: 00:30)

Hello, welcome to the video lecture series on Digital image processing. (Refer Slide Time: 00:30) Digital Image Processing Prof. P. K. Biswas Department of Electronics and Electrical Communications Engineering Indian Institute of Technology, Kharagpur Module 11 Lecture Number 52 Conversion of one Color

More information

Image and video processing (EBU723U) Colour Images. Dr. Yi-Zhe Song

Image and video processing (EBU723U) Colour Images. Dr. Yi-Zhe Song Image and video processing () Colour Images Dr. Yi-Zhe Song yizhe.song@qmul.ac.uk Today s agenda Colour spaces Colour images PGM/PPM images Today s agenda Colour spaces Colour images PGM/PPM images History

More information

Digital Images. Back to top-level. Digital Images. Back to top-level Representing Images. Dr. Hayden Kwok-Hay So ENGG st semester, 2010

Digital Images. Back to top-level. Digital Images. Back to top-level Representing Images. Dr. Hayden Kwok-Hay So ENGG st semester, 2010 0.9.4 Back to top-level High Level Digital Images ENGG05 st This week Semester, 00 Dr. Hayden Kwok-Hay So Department of Electrical and Electronic Engineering Low Level Applications Image & Video Processing

More information

USE OF COLOR IN REMOTE SENSING

USE OF COLOR IN REMOTE SENSING 1 USE OF COLOR IN REMOTE SENSING (David Sandwell, Copyright, 2004) Display of large data sets - Most remote sensing systems create arrays of numbers representing an area on the surface of the Earth. The

More information

MAV-ID card processing using camera images

MAV-ID card processing using camera images EE 5359 MULTIMEDIA PROCESSING SPRING 2013 PROJECT PROPOSAL MAV-ID card processing using camera images Under guidance of DR K R RAO DEPARTMENT OF ELECTRICAL ENGINEERING UNIVERSITY OF TEXAS AT ARLINGTON

More information

Color Image Processing

Color Image Processing Color Image Processing Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Color Used heavily in human vision. Visible spectrum for humans is 400 nm (blue) to 700

More information

Comparing Sound and Light. Light and Color. More complicated light. Seeing colors. Rods and cones

Comparing Sound and Light. Light and Color. More complicated light. Seeing colors. Rods and cones Light and Color Eye perceives EM radiation of different wavelengths as different colors. Sensitive only to the range 4nm - 7 nm This is a narrow piece of the entire electromagnetic spectrum. Comparing

More information

IMAGE PROCESSING TECHNIQUE TO COUNT THE NUMBER OF LOGS IN A TIMBER TRUCK

IMAGE PROCESSING TECHNIQUE TO COUNT THE NUMBER OF LOGS IN A TIMBER TRUCK IMAGE PROCESSING TECHNIQUE TO COUNT THE NUMBER OF LOGS IN A TIMBER TRUCK Asif Rahman 1, 2, Siril Yella 1, Mark Dougherty 1 1 Department of Computer Engineering, Dalarna University, Borlänge, Sweden 2 Department

More information

Visual Perception. Overview. The Eye. Information Processing by Human Observer

Visual Perception. Overview. The Eye. Information Processing by Human Observer Visual Perception Spring 06 Instructor: K. J. Ray Liu ECE Department, Univ. of Maryland, College Park Overview Last Class Introduction to DIP/DVP applications and examples Image as a function Concepts

More information

Color & Compression. Robin Strand Centre for Image analysis Swedish University of Agricultural Sciences Uppsala University

Color & Compression. Robin Strand Centre for Image analysis Swedish University of Agricultural Sciences Uppsala University Color & Compression Robin Strand Centre for Image analysis Swedish University of Agricultural Sciences Uppsala University Outline Color Color spaces Multispectral images Pseudocoloring Color image processing

More information

An Evaluation of Automatic License Plate Recognition Vikas Kotagyale, Prof.S.D.Joshi

An Evaluation of Automatic License Plate Recognition Vikas Kotagyale, Prof.S.D.Joshi An Evaluation of Automatic License Plate Recognition Vikas Kotagyale, Prof.S.D.Joshi Department of E&TC Engineering,PVPIT,Bavdhan,Pune ABSTRACT: In the last decades vehicle license plate recognition systems

More information

Efficient 2-D Structuring Element for Noise Removal of Grayscale Images using Morphological Operations

Efficient 2-D Structuring Element for Noise Removal of Grayscale Images using Morphological Operations Efficient 2-D Structuring Element for Noise Removal of Grayscale Images using Morphological Operations Mangala A. G. Department of Master of Computer Application, N.M.A.M. Institute of Technology, Nitte.

More information

Segmentation using Saturation Thresholding and its Application in Content-Based Retrieval of Images

Segmentation using Saturation Thresholding and its Application in Content-Based Retrieval of Images Segmentation using Saturation Thresholding and its Application in Content-Based Retrieval of Images A. Vadivel 1, M. Mohan 1, Shamik Sural 2 and A.K.Majumdar 1 1 Department of Computer Science and Engineering,

More information

Image Processing for feature extraction

Image Processing for feature extraction Image Processing for feature extraction 1 Outline Rationale for image pre-processing Gray-scale transformations Geometric transformations Local preprocessing Reading: Sonka et al 5.1, 5.2, 5.3 2 Image

More information

COMPARATIVE PERFORMANCE ANALYSIS OF HAND GESTURE RECOGNITION TECHNIQUES

COMPARATIVE PERFORMANCE ANALYSIS OF HAND GESTURE RECOGNITION TECHNIQUES International Journal of Advanced Research in Engineering and Technology (IJARET) Volume 9, Issue 3, May - June 2018, pp. 177 185, Article ID: IJARET_09_03_023 Available online at http://www.iaeme.com/ijaret/issues.asp?jtype=ijaret&vtype=9&itype=3

More information

Digital Image Processing

Digital Image Processing Digital Image Processing Digital Imaging Fundamentals Christophoros Nikou cnikou@cs.uoi.gr Images taken from: R. Gonzalez and R. Woods. Digital Image Processing, Prentice Hall, 2008. Digital Image Processing

More information

DESIGN & DEVELOPMENT OF COLOR MATCHING ALGORITHM FOR IMAGE RETRIEVAL USING HISTOGRAM AND SEGMENTATION TECHNIQUES

DESIGN & DEVELOPMENT OF COLOR MATCHING ALGORITHM FOR IMAGE RETRIEVAL USING HISTOGRAM AND SEGMENTATION TECHNIQUES International Journal of Information Technology and Knowledge Management July-December 2011, Volume 4, No. 2, pp. 585-589 DESIGN & DEVELOPMENT OF COLOR MATCHING ALGORITHM FOR IMAGE RETRIEVAL USING HISTOGRAM

More information

Automatic Electricity Meter Reading Based on Image Processing

Automatic Electricity Meter Reading Based on Image Processing Automatic Electricity Meter Reading Based on Image Processing Lamiaa A. Elrefaei *,+,1, Asrar Bajaber *,2, Sumayyah Natheir *,3, Nada AbuSanab *,4, Marwa Bazi *,5 * Computer Science Department Faculty

More information

Imaging Process (review)

Imaging Process (review) Color Used heavily in human vision Color is a pixel property, making some recognition problems easy Visible spectrum for humans is 400nm (blue) to 700 nm (red) Machines can see much more; ex. X-rays, infrared,

More information

Developing a New Color Model for Image Analysis and Processing

Developing a New Color Model for Image Analysis and Processing UDC 004.421 Developing a New Color Model for Image Analysis and Processing Rashad J. Rasras 1, Ibrahiem M. M. El Emary 2, Dmitriy E. Skopin 1 1 Faculty of Engineering Technology, Amman, Al Balqa Applied

More information

Bettina Selig. Centre for Image Analysis. Swedish University of Agricultural Sciences Uppsala University

Bettina Selig. Centre for Image Analysis. Swedish University of Agricultural Sciences Uppsala University 2011-10-26 Bettina Selig Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University 2 Electromagnetic Radiation Illumination - Reflection - Detection The Human Eye Digital

More information

Digital Image Fundamentals. Digital Image Processing. Human Visual System. Contents. Structure Of The Human Eye (cont.) Structure Of The Human Eye

Digital Image Fundamentals. Digital Image Processing. Human Visual System. Contents. Structure Of The Human Eye (cont.) Structure Of The Human Eye Digital Image Processing 2 Digital Image Fundamentals Digital Imaging Fundamentals Christophoros Nikou cnikou@cs.uoi.gr Those who wish to succeed must ask the right preliminary questions Aristotle Images

More information

An Improved Bernsen Algorithm Approaches For License Plate Recognition

An Improved Bernsen Algorithm Approaches For License Plate Recognition IOSR Journal of Electronics and Communication Engineering (IOSR-JECE) ISSN: 78-834, ISBN: 78-8735. Volume 3, Issue 4 (Sep-Oct. 01), PP 01-05 An Improved Bernsen Algorithm Approaches For License Plate Recognition

More information

Color Image Processing

Color Image Processing Color Image Processing Dr. Praveen Sankaran Department of ECE NIT Calicut February 11, 2013 Winter 2013 February 11, 2013 1 / 23 Outline 1 Color Models 2 Full Color Image Processing Winter 2013 February

More information

Digital Image Fundamentals. Digital Image Processing. Human Visual System. Contents. Structure Of The Human Eye (cont.) Structure Of The Human Eye

Digital Image Fundamentals. Digital Image Processing. Human Visual System. Contents. Structure Of The Human Eye (cont.) Structure Of The Human Eye Digital Image Processing 2 Digital Image Fundamentals Digital Imaging Fundamentals Christophoros Nikou cnikou@cs.uoi.gr Images taken from: R. Gonzalez and R. Woods. Digital Image Processing, Prentice Hall,

More information

Digital Image Processing

Digital Image Processing Digital Image Processing Digital Imaging Fundamentals Christophoros Nikou cnikou@cs.uoi.gr Images taken from: R. Gonzalez and R. Woods. Digital Image Processing, Prentice Hall, 2008. Digital Image Processing

More information

Color. Used heavily in human vision. Color is a pixel property, making some recognition problems easy

Color. Used heavily in human vision. Color is a pixel property, making some recognition problems easy Color Used heavily in human vision Color is a pixel property, making some recognition problems easy Visible spectrum for humans is 400 nm (blue) to 700 nm (red) Machines can see much more; ex. X-rays,

More information

Applying mathematics to digital image processing using a spreadsheet

Applying mathematics to digital image processing using a spreadsheet Jeff Waldock Applying mathematics to digital image processing using a spreadsheet Jeff Waldock Department of Engineering and Mathematics Sheffield Hallam University j.waldock@shu.ac.uk Introduction When

More information

MEASUREMENT OF ROUGHNESS USING IMAGE PROCESSING. J. Ondra Department of Mechanical Technology Military Academy Brno, Brno, Czech Republic

MEASUREMENT OF ROUGHNESS USING IMAGE PROCESSING. J. Ondra Department of Mechanical Technology Military Academy Brno, Brno, Czech Republic MEASUREMENT OF ROUGHNESS USING IMAGE PROCESSING J. Ondra Department of Mechanical Technology Military Academy Brno, 612 00 Brno, Czech Republic Abstract: A surface roughness measurement technique, based

More information

GEOMETRICAL OPTICS Practical 1. Part I. BASIC ELEMENTS AND METHODS FOR CHARACTERIZATION OF OPTICAL SYSTEMS

GEOMETRICAL OPTICS Practical 1. Part I. BASIC ELEMENTS AND METHODS FOR CHARACTERIZATION OF OPTICAL SYSTEMS GEOMETRICAL OPTICS Practical 1. Part I. BASIC ELEMENTS AND METHODS FOR CHARACTERIZATION OF OPTICAL SYSTEMS Equipment and accessories: an optical bench with a scale, an incandescent lamp, matte, a set of

More information