Neurophysiologically-motivated sensor fusion for visualization and characterization of medical imagery

Mario Aguilar, Knowledge Systems Laboratory, MCIS Department, Jacksonville State University, Jacksonville, AL
Aaron L. Garrett, Knowledge Systems Laboratory, MCIS Department, Jacksonville State University, Jacksonville, AL

Abstract - We describe an architecture for the fusion of multiple medical image modalities based on the organization of the color vision system in humans and primates. Specifically, the preprocessing of individual images and the fusion across modalities are based on the neural connectivity of the retina and visual cortex. The resulting system enhances the original imagery, improves information contrast, and combines the complementary information of the various modalities. The system has the ability to both enhance and preserve important information. In addition, the fused imagery preserves the high spatial resolution of modalities such as MRI even when combining them with poor-resolution images such as SPECT scans. Results of fusing various modalities are presented, including: a) fusion of functional MRI images, b) fusion of SPECT and MRI, and c) fusion of visible and infrared endoscopic images. We conclude by discussing our recent results on utilizing multi-modality fused signatures for segmentation and pattern recognition.

Keywords: Image Fusion, medical image processing, medical diagnosis aids.

1 Introduction

According to a recent article in Advanced Imaging [1], 80 billion electronic images are produced each year. Beyond digital photography, most of these are produced by the entertainment, industrial, and medical industries; in fact, two billion of them are generated for medical diagnosis. This is a clear illustration of an unavoidable consequence of the information age: information overload. The problem is of particular concern in the medical field, where the cost of diagnosis and limitations on resources greatly affect the quality of patient care. A strategy is necessary to facilitate diagnosis and expedite analysis by specialists. To this end, we explore the application of image fusion techniques to combine multiple-modality medical imagery. The first goal of our pilot study is to produce a single color image that combines the information from all the relevant modalities and reduces workload.

The fusion methods presented here were first introduced in the context of dual-band fusion for night vision applications [2, 3]. In that system, imagery from a night-capable visible camera and a thermal infrared camera was combined in real time to provide a single color image that preserved the information from the two separate cameras. One of the main properties of the night vision fusion system is that it enables users to discover critical relationships between the bands that were previously unexploited. The second goal of our study is to ascertain whether medical imaging of various modalities may benefit from this property, mainly in facilitating or improving the diagnostic capabilities of specialists.

2 Methods

The greatest benefits of fusion are obtained when the combined modalities are complementary to each other. As described next, the modalities we selected have the property of recording either structural detail or metabolic/functional information about the region of interest. By their nature, these two types of information are important for diagnosis and are often used together to enhance accuracy.
The most prevalent form of brain imaging is MRI, or magnetic resonance imaging. In this modality, the patient is exposed to a controlled magnetic field that leads to energy being emitted by protons in the brain. The amount of energy emitted, mainly a function of proton density, is measured and imaged. These images, known as MRI-PD (proton density), convey structural information. In addition, MRI scanners are capable of selectively emitting energy at a given orientation with respect to the axis of the original magnetic field. In this case, the relaxation times of the protons lead to two separately weighted images, T1 and T2, which measure relaxation parallel and perpendicular to the original axis, respectively. The T1- and T2-weighted modalities are capable of measuring such characteristics as fat and melanin content, blood flow, calcification, etc. Hence, these two modalities are associated with functional information. The MRI imagery utilized in this study was obtained from the Whole Brain Atlas [4]. The imagery includes the three MRI modalities registered to each other as obtained from a healthy (i.e., normal anatomy) patient.

We also went on to analyze the efficacy of the fusion method in combining MRI and SPECT imagery. SPECT, or single photon emission computed tomography, is a technique in which a radiolabeled compound is injected into the patient so that its emissions can later be measured. These emissions are indicative of functional changes such as metabolism, blood flow, etc. These images have been used in the past for detecting the presence or progress of brain defects such as tumors. The registered SPECT images utilized in this study were also obtained from the Whole Brain Atlas [4].

Finally, we investigated the fusion of visible and near-infrared (NIR) endoscopic images. These images were obtained using a CCD video camera attached to an endoscope. The visible image was obtained via the built-in capture function of the camera. The NIR image was obtained using an appropriate filter placed on the camera. The images we used are of the internal cavity of a pig's stomach.

In the following subsections, we describe the fusion architecture and its components. First, we introduce the biological principles that inspired and guided the development of the fusion architecture. Then, we present the neural network-based fusion architecture for the case of two-, three-, and four-band combinations. Finally, we detail the design of the non-linear image operator used in preprocessing and combining the images in each of the stages of the fusion architecture.

2.1 Biological fusion systems

The visual system of primates and humans contains three types of light sensors known as cones, which have overlapping sensitivities (short, medium, and long wavelengths). It is through the combination of these three sensor sets that we obtain our perception of color. Circuitry in the retina is functionally divided into two stages. The first one utilizes non-linear neural activations and lateral inhibition within bands to enhance and normalize the inputs. The second stage utilizes similar neural components in an arrangement of connections that leads to between-band competition, which in turn produces a number of combinations of the three original bands [5]. This last stage of processing enhances the complementary information that exists in each of the bands (e.g., a spectral decorrelation operation). The fusion architecture presented here is motivated by this basic connectivity. Each of the processing stages is implemented via a non-linear neural network known as the shunt (described in section 2.3).

Further definition of specific band combinations in our system found inspiration in the connectivity of neurons in the fusion system of some species of rattlesnakes and boas. These snakes possess a series of sensory pits capable of detecting the thermal signatures of their surroundings (i.e., thermal infrared sensors). These sensors are used in conjunction with visual input to allow the snake to detect, locate, and capture its prey. Newman and Hartline [6] discovered that neurons in the optic tectum (an area of the brain associated with visual processing) of these snakes were modulated by inputs from both types of sensors, visual and thermal. Their studies went on to demonstrate the highly non-linear relationship that exists between the activation of these neurons by each of the modalities. It is in this stage of processing that a dual-band fusion process begins to combine the signals to produce the perceptual experience of the snake.
These non-linear combinations lead to information decorrelation not unlike what is usually targeted by principal component analysis techniques. However, in the biological systems and in our architecture, the non-linear operator has a very narrow spatial window, providing a better-tuned decorrelation. In addition, the operator is modulated by more globally defined statistical characteristics of the input that produce normalization, smoothing, and between-band calibration.

2.2 A neural network-based fusion architecture

The basic fusion architecture consists of two distinct processing stages. In the first one, as in the retina, we utilize a non-linear neural network (i.e., the shunt, see next subsection) to obtain within-band image enhancement and normalization. This produces contrast enhancement, dynamic range calibration, and normalization of the input images. The second stage uses the same non-linear neural network operator to produce between-band decorrelation, information enhancement, and fusion. These stages for the two-band fusion case are illustrated in Figure 1, where concentric circles indicate a shunting neural network operator as described in section 2.3.

The shunt combinations of the second stage, as shown in Figure 1, provide three unique sets of information-rich images. The first combination performs an operation that decorrelates band 2 (MRI-T2) from band 1 (MRI-T1). In other words, it enhances information that is present in band 1 but not in band 2. The resulting image is mapped to the red channel of the color display. The second combination performs the reverse operation, namely, enhancement of information unique to band 2. This combination is mapped to the blue channel. The final shunt contrast-enhances the linear combination of the two bands. In effect, this produces an image in which areas with information common to both bands are enhanced. This image is mapped to the green channel. Another processing stage may be introduced prior to producing the final color image that remaps the color assignments from those derived by the fusion process. Here, a mapping from RGB to HSV space allows the operator to manipulate the appearance of the image (e.g., a hue remap) to obtain a more natural coloring scheme. The modified HSV values are mapped back to RGB to be used in generating the final color fused image.

A second form of the two-band fusion architecture was implemented for fusing visible and NIR endoscopic imagery. Here, the second stage produces the two biased decorrelations of the bands. These combinations are then mapped to the red and blue channels. Finally, to preserve the high contrast and natural appearance of the visible band, its shunted image is mapped to the green channel. The resulting fused color image will convey between-band information in terms of color contrast (blue vs. red), while its brightness profile and resolution will be mainly defined by the visible band imagery.

The architecture for three-band MRI fusion is illustrated in Figure 3. Here, the first stage of processing is as before, where each of the input bands is separately contrast enhanced and normalized. Then, two between-band shunting operations produce distinct fusion products. The first one decorrelates the information between bands 1 (MRI-T1) and 2 (MRI-PD). The second does the same for bands 3 (MRI-T2) and 2. In this case, the information derived is that which is unique to bands 1 and 3. The resulting fused images are then mapped to the I and Q (also known as red-green and blue-yellow) components of the YIQ color space of the image. The Y, or achromatic, component is derived from the enhanced band 2 image, which provides the most faithful structural detail. The YIQ components are then mapped to RGB space.

The architecture for four-band fusion is shown in Figure 4. Here, the second stage of processing produces the decorrelation between T1-weighted and SPECT, as well as between T2-weighted and SPECT. Notice that the decorrelation is done in both directions for each of the pairs. The most noticeable difference is the addition of a third processing stage. This additional between-band competition leads to further color contrast enhancement and decorrelation, as suggested by connectivity in primary visual cortex in primates. The two resulting decorrelated images are mapped to the chromatic I and Q channels. Once again, to preserve the high resolution of the MRI imagery, the structural-modality MRI-PD image is mapped to the Y, or achromatic, channel.

Figure 1. Two-band fusion architecture used for processing functional MRI imagery. Concentric circles represent a shunting neural network operator. See text for details.
Figure 2. Two-band fusion architecture for combination of visible and NIR endoscopic images. See text for details.
Figure 3. Three-band MRI fusion architecture. See text for details.
Figure 4. Four-band MRI/SPECT fusion architecture. See text for details.
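To make the channel assignments of Figure 1 concrete, the following Python sketch assembles a two-band fused color image from registered T1- and T2-weighted slices. It is an illustrative sketch, not the authors' implementation: the shunt() helper is a compact stand-in for the operator detailed in section 2.3, and the parameter values (A, sigma) are assumptions.

```python
# Minimal sketch of the two-band mapping of Figure 1 (not the authors' code).
# t1, t2 are assumed to be registered 2-D float arrays scaled to [0, 1].
import numpy as np
from scipy.ndimage import gaussian_filter

def shunt(center, surround, A=0.5, sigma=4.0):
    """Compact stand-in for the steady-state shunt of section 2.3, eq. (2)."""
    inh = gaussian_filter(surround, sigma)   # Gaussian-weighted surround
    return (center - inh) / (A + center + inh)

def two_band_fuse(t1, t2):
    r = shunt(t1, t2)                 # information in T1 not in T2 -> red
    b = shunt(t2, t1)                 # information in T2 not in T1 -> blue
    g = shunt(0.5 * (t1 + t2),        # enhanced common information -> green
              0.5 * (t1 + t2))
    rgb = np.stack([r, g, b], axis=-1)
    rgb -= rgb.min(axis=(0, 1), keepdims=True)          # rescale each channel
    rgb /= rgb.max(axis=(0, 1), keepdims=True) + 1e-8   # to [0, 1] for display
    return rgb

t1 = np.random.rand(256, 256)  # placeholders for registered MRI slices
t2 = np.random.rand(256, 256)
fused = two_band_fuse(t1, t2)
```

The optional hue remap mentioned above could be applied to the returned array before display, for example by converting with matplotlib.colors.rgb_to_hsv, adjusting the hue channel, and converting back with hsv_to_rgb.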

2.3 The shunting image operator

The basic building block of the architecture, represented as concentric circles in Figures 1-4, is a non-linear neural network known as a shunting neural network [7]. This neural network, which acts like a filter, models the dynamics of neuron activation due to three contributions: an excitatory one-to-one input, an inhibitory input from surrounding neurons, and passive activation decay (i.e., a leaky integrator). The expression that captures these interactions in a dynamical system is defined in terms of the following differential equation:

\dot{x}_{ij} = -A x_{ij} + (B - x_{ij}) [C I_C]_{ij} - (D + x_{ij}) [G_S \ast I_S]_{ij}    (1)

Here, x_{ij} is the activation of each cell ij receiving input from each of the pixels in the input image. A is a decay rate; B and D are the maximum and minimum activation levels, respectively, and are set to 1 in the simulations; C and G_S (a Gaussian) serve to weigh the excitatory input (I_C) against the lateral inhibitory (I_S) image inputs. The neural network consists of a two-dimensional array of these shunting neurons whose dimensions correspond to the width and height of the input image. When the input is applied, the network rapidly reaches equilibrium, which produces the resulting output image. This equilibrium state can be understood in terms of the value of x_{ij} after the neuron has reached steady state, as shown in the following equation:

x_{ij} = \frac{[C I_C - G_S \ast I_S]_{ij}}{A + [C I_C + G_S \ast I_S]_{ij}}    (2)

Here, it is straightforward to understand the numerator as a contrast enhancement operation, since it represents a difference of Gaussians. The denominator serves to normalize the activation of x with respect to the activity of its neighborhood. In effect, the combination of these two operations leads to dynamic range compression of the input image in conjunction with contrast enhancement. Parameter A serves to control the characteristics of the operator, from ratio processing (when A is small with respect to the local statistics) to linear filtering (when A is comparatively large).

In the case in which the operator is used to combine two bands, the inputs mapped to the center and surround are derived from each of the input images. In the case where band 1 is mapped to the center, each of the pixels from band 1 is used to drive the excitatory input of its corresponding shunting operator. Then, a corresponding area of the image from band 2 is used as the surround input fed into the same shunt operator. The result is the contrast enhancement of information in band 1 as matched against band 2. The relationship between this operation and decorrelation has been previously documented [8].

3 Results

We present two series of results that demonstrate the image enhancement and fusion characteristics of the shunting operator. The first subsection presents results of processing by the first stage of the fusion architecture. The second subsection demonstrates the final color fused results obtained with each of the architectures presented in section 2.

3.1 Image enhancement results

As previously described, the first stage of the fusion architecture applies the shunting operator to each of the input modalities in order to contrast enhance and normalize them. Figure 5 presents a comparison of the original three-modality MRI imagery (left column) and the shunting-based pre-processed results (right column), as noted in the captions.
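The first-stage enhancement can be pictured with the following minimal sketch of the steady-state operator in eq. (2) applied within a single band. It is an illustration under assumed parameter values (A, C, sigma), not the authors' implementation.

```python
# Minimal sketch of the within-band steady-state shunt of eq. (2).
import numpy as np
from scipy.ndimage import gaussian_filter

def shunt_enhance(image, A=1.0, C=1.0, sigma=3.0, B=1.0, D=1.0):
    """Center-surround shunt: local contrast enhancement plus normalization."""
    img = image.astype(float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)  # input to [0, 1]
    center = C * img                          # excitatory one-to-one input, C*I_C
    surround = gaussian_filter(img, sigma)    # Gaussian-weighted surround, G_S * I_S
    # Steady state of eq. (1); with B = D = 1 this reduces to eq. (2).
    return (B * center - D * surround) / (A + center + surround)
```

Consistent with the role of A described above, choosing A small relative to the local center and surround activity pushes the operator toward ratio processing, while a comparatively large A makes it behave like a linear difference-of-Gaussians filter.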
These shunting-based results can be compared with those obtained using strategies such as histogram equalization, which also remap the dynamic range of the image. In such approaches, global statistics drive the remapping of gray-scale values to obtain a more uniform distribution. Unfortunately, such remapping can lead to information loss because no consideration is given to local contrast information. In contrast, the shunting operator first enhances this important local information and then follows with the normalization that leads to the dynamic range remapping.

3.2 Image fusion results

The first investigation focused on the fusion of MRI imagery of brain data. First, in applying dual-band fusion, we used the architecture of Figure 1 on the T1- and T2-weighted axial images shown at the top of Figure 6. The resulting color fused image (bottom of Fig. 6) demonstrates the combination of both bands. Notice the use of brightness and color contrast to convey information from the two original images. Figure 6 also presents anatomical labels for distinct areas identified in each of the original images, as well as in the color fused image. It is obvious, for example, that blood vessels are readily identified in the T2-weighted image but not in the T1-weighted image. As shown in the color fused result, all anatomical identifications made for the T1 and T2 images are clearly captured and enhanced by the fusion architecture.

Figure 5. Image comparison between original coronal MRI imagery (PD, proton density; T1-weighted; and T2-weighted) from a normal patient (left column) and the corresponding shunting-processed imagery (right column).

Figure 6. Two-band fusion of MRI T1-weighted (top) and T2-weighted (middle) imagery. Images have been labeled to indicate significant structural landmarks identified in each of the images. The bottom image shows the color-fused result, which demonstrates the preservation of information from the original imagery. In addition, obvious correlations are highlighted by the color differences across the image.

Next, we applied the fusion architecture of Figure 3 to combine all three MRI modalities. As previously explained, PD represents a measure of structural information, while T1 and T2 capture both structural and complementary functional information. For this reason, we paired each of the functional/structural modalities with the PD image such that decorrelation would lead to a purer measure of functional information. As shown in Figure 7, the fused image has captured much of the same complementary information as presented in Figure 6. In this case, however, the more targeted decorrelation has produced coloring patterns that highlight unique functional information. For instance, we see areas with a gradual change from red, indicating a strong contribution from T2, to pink, which suggests the presence of information in T1 as well. The fact that we only see a gradient from green to red suggests that all information presented through the color contrast of the image arises from the functional information present in T1 and T2. On the other hand, the brightness contrast, mainly derived from the PD image, aids in preserving the structural information.

Figure 7. Three-band fusion of MRI imagery. Proton density MRI (left) was fused with the corresponding T1- and T2-weighted MRI images as presented in Figure 3. The image on the right presents the color fused result.

By introducing registered SPECT imagery into our studies, we are able to explore the use of a four-band fusion architecture to combine it with the three MRI modalities. As shown in Figure 4, we paired the two MRI modalities associated with functional information (T1 and T2) against the SPECT image. Here, their decorrelation is further enhanced through a second application of the shunt operator (double-opponent stage). The resulting decorrelated information carries a strong signal associated with information that is present in the SPECT image but not in the T1 or T2 images, and vice versa. This information is mapped to the chromatic information of the final color image. As in the previous experiment, the PD modality of the MRI imagery is mapped to the achromatic channel to preserve the structural information and its higher resolution. The resulting color fused image is presented in Figure 8. Here, we again see the preservation of the combined MRI information. In addition, the SPECT information derived from this combination is preserved as the area with green hues in the middle of the image. The presence of the SPECT information in the final image could be made more or less evident, depending on the task, through the modulation of a fusion weighting factor controlled by the user through an interactive interface.

Figure 8. Four-band fusion of MRI imagery (fusion architecture as presented in Figure 4). The image on the left corresponds to a registered SPECT scan, and the image on the right presents the color-fused result.
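As one way to picture the YIQ-based mappings used in the three- and four-band architectures, the sketch below assembles a three-band fused image in the spirit of Figure 3: the enhanced PD image drives the achromatic Y channel, and the two between-band decorrelations drive the chromatic I and Q channels. This is a hypothetical illustration, not the authors' code; the shunt() helper, the chroma scale, and the parameter values are assumptions.

```python
# Hypothetical sketch of the three-band (PD, T1, T2) fusion with YIQ mapping.
import numpy as np
from scipy.ndimage import gaussian_filter

def shunt(center, surround, A=0.5, sigma=3.0):
    inh = gaussian_filter(surround, sigma)
    return (center - inh) / (A + center + inh)

def three_band_fuse(pd, t1, t2, chroma=0.4):
    y = shunt(pd, pd)                            # enhanced PD -> achromatic Y
    y = (y - y.min()) / (y.max() - y.min() + 1e-8)
    i = chroma * shunt(t1, pd)                   # T1 vs. PD decorrelation -> I
    q = chroma * shunt(t2, pd)                   # T2 vs. PD decorrelation -> Q
    # Approximate NTSC YIQ-to-RGB conversion.
    r = y + 0.956 * i + 0.621 * q
    g = y - 0.272 * i - 0.647 * q
    b = y - 1.106 * i + 1.703 * q
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)
```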
The final investigation involved the fusion of endoscopic imagery (described in section 2). As explained, two modalities were studied: visible and near-infrared. The visible modality is typically used to aid surgeons during their procedures. Near-infrared signals, on the other hand, are being investigated as a means of helping surgeons identify blood vessels that hide behind fatty surfaces. This is somewhat evident from the distinct brightness profile that the NIR image presents in the blood vessel running horizontally across the middle of the image in Fig. 9 (highlighted area). This difference arises from the unique NIR signature of fatty tissue as compared to that of blood vessels.

The result of the two-band fusion is presented at the bottom of Figure 9. The image, which uses the red vs. blue hue space to code information, possesses a very natural appearance that may aid in understanding the information content. In addition, the fused image clearly shows the effectiveness of this architecture in combining the information (e.g., easily identifiable blood vessels) while preserving information and the high level of detail from the original visible band.

Figure 9. Fusion of registered visible band and near-infrared imagery obtained from an endoscopic camera. Notice that the area circled in the NIR image clearly shows the separation of blood vessel and fatty tissue. On the other hand, the visible band lacks sufficient contrast in the same area. The resulting fused image preserves this information.

4 Discussion

We presented four fusion architectures derived from neurophysiological principles of sensor fusion. These architectures provide a method for combining and preserving information from the original input imagery. We demonstrated them in fusing various modalities of medical imagery. These modalities included those that target measurement of structural information and those designed for measuring metabolic processes associated with functional information. We have developed a visualization interface that facilitates navigation and understanding of the resulting data (Figure 10). While these results emphasize the use of fusion techniques for visualization purposes, we are currently investigating their use in the context of image segmentation and pattern recognition.

As previously described, the nature of the fusion combinations produces enhanced information content. Mainly, the fusion process produces a number of between-band combinations with unique decorrelation characteristics. These combinations, together with the single-band shunted imagery, define a richer set of features used to represent each pixel in the image. An unsupervised clustering algorithm can then be applied to obtain unattended segmentation of the data. This technique has been applied to segment the skull from the rest of the data in MRI imagery and create a 3D model (see top-left inset in Figure 10) that is more accurate than those obtained with alternative methods (Garrett and Aguilar, in preparation).
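The per-pixel feature construction described above can be pictured with the following hypothetical sketch, which stacks single-band shunted imagery and between-band shunt combinations into a feature vector for each pixel and applies k-means as one possible unsupervised clustering algorithm. This is not the method of Garrett and Aguilar (in preparation); the feature set, the clustering choice, and all parameters are assumptions.

```python
# Hypothetical per-pixel clustering over a fused feature stack (illustrative only).
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.cluster import KMeans

def shunt(center, surround, A=0.5, sigma=3.0):
    inh = gaussian_filter(surround, sigma)
    return (center - inh) / (A + center + inh)

def segment(pd, t1, t2, n_clusters=4):
    """Cluster pixels of registered PD/T1/T2 slices into n_clusters regions."""
    feats = [shunt(b, b) for b in (pd, t1, t2)]           # single-band shunted imagery
    feats += [shunt(t1, pd), shunt(t2, pd),               # between-band combinations
              shunt(t1, t2), shunt(t2, t1)]
    X = np.stack(feats, axis=-1).reshape(-1, len(feats))  # one feature vector per pixel
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    return labels.reshape(pd.shape)
```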

In addition, in work in progress, we have extended the supervised ARTMAP algorithm [10] to provide order-independent learning for classification and recognition. Here, an interactive user interface is used to allow medical experts to define areas of interest (AOIs) representing prototypes for various brain defects. With this information, and samples of healthy areas, the augmented ARTMAP system learns to discriminate between the various types of AOIs. Such user-leveraged learning techniques have been successfully applied in the context of multi-sensor band fusion for remote sensing [11]. We are currently studying the use of these techniques in characterizing metastatic carcinoma and applying the recognition system to analyze the progression of the disease. Similarly, we are investigating the use of the system in identifying reduced perfusion in brains affected by Alzheimer's disease in order to identify critical complications of the disease.

Future efforts will include assessing the image fusion techniques in reducing workload and facilitating diagnosis by medical experts. In addition, we are currently seeking collaboration with medical investigators in order to validate the pattern recognition system.

Acknowledgements

The work on fusion of endoscopic imagery was initiated while the first author was with the Sensor Exploitation Group at MIT Lincoln Laboratory. The support and assistance of the staff, in particular Allen Waxman and David Fay, are gratefully acknowledged. All other work was supported by a Faculty Research Grant awarded to the first author by the faculty research committee and Jacksonville State University. Opinions, interpretations, and conclusions are those of the authors and are not necessarily endorsed by the committee or Jacksonville State University.

References

[1] M. Aguilar, D.A. Fay, W.D. Ross, A.M. Waxman, D.B. Ireland, and J.P. Racamato, "Real-time fusion of low-light CCD and uncooled IR imagery for color night vision," Proc. of SPIE Conf. on Enhanced and Synthetic Vision, 3364.
[2] A.M. Waxman, M. Aguilar, R.A. Baxter, D.A. Fay, D.B. Ireland, J.P. Racamato, and W.D. Ross, "Opponent color fusion of multisensor imagery: visible, IR and SAR," Proc. of the Meeting of the IRIS Specialty Group on Passive Sensors, I, pp. 43-61.
[3] P. Eggleston, "Asset management and image overload: handling 80 billion images a year," Advanced Imaging, pp. 12-16.
[4] K.A. Johnson and J.A. Becker, Whole Brain Atlas.
[5] P. Schiller and N.K. Logothetis, "The color-opponent and broad-band channels of the primate visual system," Trends in Neuroscience, 13.
[6] E.A. Newman and P.H. Hartline, "Integration of visual and infrared information in bimodal neurons of the rattlesnake optic tectum," Science, 213.
[7] S. Grossberg, Neural Networks and Natural Intelligence, Cambridge, MA: MIT Press.
[8] M. Aguilar and A.M. Waxman, "Comparison of opponent-color neural processing and principal components analysis in the fusion of visible and thermal IR imagery," Proc. of the Vision, Recognition, and Action: Neural Models of Mind and Machine Conference, Boston, MA.
[9] G.A. Carpenter, S. Grossberg, and J.H. Reynolds, "ARTMAP: Supervised real-time learning and classification of non-stationary data by a self-organizing neural network," Neural Networks, 4.
[10] W.D. Ross, A.M. Waxman, W.W. Streilein, M. Aguilar, J. Verly, F. Liu, M.I. Braun, P. Harmon, and S. Rak, "Multi-Sensor 3D Image Fusion and Interactive Search," in Proc. 3rd International Conference on Information Fusion, Paris, France.

Figure 10. Screen-captured image of the fusion visualization tool. In this example, the user has selected to view the fusion of the three fMRI images.


More information

Digital Image Processing

Digital Image Processing Digital Image Processing Lecture # 3 Digital Image Fundamentals ALI JAVED Lecturer SOFTWARE ENGINEERING DEPARTMENT U.E.T TAXILA Email:: ali.javed@uettaxila.edu.pk Office Room #:: 7 Presentation Outline

More information

Human Vision and Human-Computer Interaction. Much content from Jeff Johnson, UI Wizards, Inc.

Human Vision and Human-Computer Interaction. Much content from Jeff Johnson, UI Wizards, Inc. Human Vision and Human-Computer Interaction Much content from Jeff Johnson, UI Wizards, Inc. are these guidelines grounded in perceptual psychology and how can we apply them intelligently? Mach bands:

More information

Proposed Method for Off-line Signature Recognition and Verification using Neural Network

Proposed Method for Off-line Signature Recognition and Verification using Neural Network e-issn: 2349-9745 p-issn: 2393-8161 Scientific Journal Impact Factor (SJIF): 1.711 International Journal of Modern Trends in Engineering and Research www.ijmter.com Proposed Method for Off-line Signature

More information

Sensors and Sensing Cameras and Camera Calibration

Sensors and Sensing Cameras and Camera Calibration Sensors and Sensing Cameras and Camera Calibration Todor Stoyanov Mobile Robotics and Olfaction Lab Center for Applied Autonomous Sensor Systems Örebro University, Sweden todor.stoyanov@oru.se 20.11.2014

More information

Color images C1 C2 C3

Color images C1 C2 C3 Color imaging Color images C1 C2 C3 Each colored pixel corresponds to a vector of three values {C1,C2,C3} The characteristics of the components depend on the chosen colorspace (RGB, YUV, CIELab,..) Digital

More information

FEATURE EXTRACTION AND CLASSIFICATION OF BONE TUMOR USING IMAGE PROCESSING. Mrs M.Menagadevi-Assistance Professor

FEATURE EXTRACTION AND CLASSIFICATION OF BONE TUMOR USING IMAGE PROCESSING. Mrs M.Menagadevi-Assistance Professor FEATURE EXTRACTION AND CLASSIFICATION OF BONE TUMOR USING IMAGE PROCESSING Mrs M.Menagadevi-Assistance Professor N.GirishKumar,P.S.Eswari,S.Gomathi,S.Chanthirasekar Department of ECE K.S.Rangasamy College

More information

A Novel Method for Enhancing Satellite & Land Survey Images Using Color Filter Array Interpolation Technique (CFA)

A Novel Method for Enhancing Satellite & Land Survey Images Using Color Filter Array Interpolation Technique (CFA) A Novel Method for Enhancing Satellite & Land Survey Images Using Color Filter Array Interpolation Technique (CFA) Suma Chappidi 1, Sandeep Kumar Mekapothula 2 1 PG Scholar, Department of ECE, RISE Krishna

More information

A Study On Preprocessing A Mammogram Image Using Adaptive Median Filter

A Study On Preprocessing A Mammogram Image Using Adaptive Median Filter A Study On Preprocessing A Mammogram Image Using Adaptive Median Filter Dr.K.Meenakshi Sundaram 1, D.Sasikala 2, P.Aarthi Rani 3 Associate Professor, Department of Computer Science, Erode Arts and Science

More information

Harmless screening of humans for the detection of concealed objects

Harmless screening of humans for the detection of concealed objects Safety and Security Engineering VI 215 Harmless screening of humans for the detection of concealed objects M. Kowalski, M. Kastek, M. Piszczek, M. Życzkowski & M. Szustakowski Military University of Technology,

More information

Prof. Feng Liu. Winter /09/2017

Prof. Feng Liu. Winter /09/2017 Prof. Feng Liu Winter 2017 http://www.cs.pdx.edu/~fliu/courses/cs410/ 01/09/2017 Today Course overview Computer vision Admin. Info Visual Computing at PSU Image representation Color 2 Big Picture: Visual

More information