Proceedings of Student-Faculty Research Day, CSIS, Pace University, May 3rd, 2013
Comparative Analysis of Feature Extraction Capabilities between Machine and Human in Visual Pattern Recognition Tasks Utilizing a Pattern Classification Framework

Amir Schur, Sung-Hyuk Cha, and Charles C. Tappert
Pace University Seidenberg School of CSIS, White Plains, NY
amirschur@aol.com; {scha, ctappert}@pace.edu

Abstract

There have been many recent advances in pattern recognition technologies, particularly those involving visual pattern recognition tasks. How do these machine capabilities compare to human capabilities in visual pattern recognition tasks? Which performs better at feature extraction, machine or human? This study compares machine and human in color and shape recognition tasks as part of a visual pattern recognition system. A pattern classification framework is used to provide a foundation for understanding where this work fits in context with other work. This is a work in progress for a dissertation by the first author.

Keywords: pattern recognition, feature extraction, human and machine comparisons

I. INTRODUCTION

What is the pattern classification framework? What are the current scientific advances in understanding human capabilities in this area? And what about machine capabilities?

A typical pattern classification system comprises the following components: data capture, feature extraction, and classification [1]. Data capture involves obtaining raw data related to the object through sensors; for visual objects the sensor is usually a camera. Feature extraction reduces the amount of raw information captured in the visual image of the object by measuring certain features. In face recognition, for example, these can be the distance between the eyebrows, the length of the nose, etc. The last component is classification, where the feature space is divided into decision regions; this is where the object is assigned a category.
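The three stages above can be sketched as a minimal pipeline. This is an illustrative sketch only: the feature used here (the mean value of each color channel) and the nearest-centroid classifier are assumptions chosen for the example, not the methods of any particular system discussed in this paper.

```python
# Minimal sketch of the three-stage pattern classification pipeline:
# data capture -> feature extraction -> classification.
# The feature (per-channel mean) and the nearest-centroid classifier
# are illustrative assumptions, not any specific system's methods.

def capture(image):
    """Stand-in for the sensor stage: here the 'raw data' is simply
    a list of (r, g, b) pixel tuples."""
    return image

def extract_features(pixels):
    """Reduce the raw pixels to a small feature vector: mean R, G, B."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def classify(features, centroids):
    """Assign the category whose centroid is nearest in feature space."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(features, centroids[label]))

# Usage: two toy categories defined by color centroids in feature space.
centroids = {"rose": (200.0, 40.0, 60.0), "bluebell": (60.0, 60.0, 200.0)}
pixels = capture([(190, 50, 70), (210, 30, 50)])
print(classify(extract_features(pixels), centroids))  # -> rose
```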
This division into three separate processes is also the typical biometric system architecture, in which the three technologies are signal acquisition, feature extraction, and matching [2]. Using the three pattern classification system components described above, machine and human capabilities will be explored in visual pattern recognition tasks.

A. Data Capture

In the visual recognition task performed by humans, this activity is accomplished physically from the cornea, through the aqueous humor and lens, to the retina. The eye, the initial portion of the human visual system, works much the same way as a camera [3]. The shutter of a camera opens or closes depending upon the amount of light needed to expose the film at the back of the camera, and the lens of a camera is able to focus on objects far away and up close with the help of mechanical focusing elements. Both the human eye and a camera use a lens; in fact, they both use the same type of lens, a converging lens. In the camera, the lens focuses the light onto a piece of film, whose chemicals trap the image, making it permanent. Instead of film, the eye uses the retina [4].

In a visual recognition task, the process starts with either the human or the camera capturing an image: we either take a picture of an object with a camera or we see the object with our eye. There are definite differences between the human eye and the camera. For example, the highest-resolution digital camera currently available captures up to 80 megapixels, whereas our eye can process around 560 megapixels [5].

B. Feature Extraction

Advances in neuroscience have allowed us to capture what data actually gets transmitted from the retina to the brain.
Berkeley researchers Frank Werblin and Botond Roska showed that there are between 10 and 12 output channels from the eye to the brain, each carrying a different, stripped-down representation of the visual world. One group of cells sends information only about edges (changes in contrast), another detects large areas of uniform color, and another is sensitive only to the backgrounds behind the figure of interest [6]. This model is often applied in the artificial intelligence field as sparse coding, a method of data reduction [7]. More data does not necessarily mean better performance.

On the computer side, there have been tremendous advances in visual feature extraction technologies. There are color-based feature extraction methods, which can be grouped into RGB, HSB, LAB, YCbCr, etc. There are texture-based methods such as the Gabor filter and gist. Finally, there are shape/contour-based feature extraction methods, which are often grouped into distance versus angle, distance projection, min/max, area ratio, object count, etc. [8].

C. Classification/Matching

The part of the brain responsible for the human ability to deal with patterns of information is the neocortex [7]. The neocortex is also responsible for sensory perception, including visual perception. The primary visual cortex is the part of the neocortex that receives visual input from the retina. How does the visual cortex process information? There is still much to learn, as we do not know enough about what happens here. In fact, science has yet to provide a full understanding of the brain, so it is not possible to propose accurate overall data models of the visual cortex [9]. Various methods of machine learning and advanced statistical classification can be considered equivalent to the human capabilities of visual object classification and decision making.

II. RESEARCH FOCUS

After an extensive literature search in the three areas of pattern recognition elaborated in the introduction above, it was decided to focus the research on feature extraction, comparing human and machine capability in visual recognition tasks. There is not enough concrete knowledge of the decision-making processes in the visual cortex or neocortex to perform any comparative analysis between the brain and a machine or computer system. Some comparison could be made between the physical eye and visual capture technologies such as cameras. The fact that we now know quite a bit about how data is transmitted from the retina to the brain allows for more comparative analysis in this area. We first elaborate on what happens physically in the human visual recognition process from the retina to the brain, and then on various technologies, focusing on computer systems, that are applicable in this focus area.

A. Retina and visual data processing

The retina contains two types of cells: rods, which handle vision in low light, and cones, which handle color vision and detail. When light contacts these two types of cells, a series of complex chemical reactions occurs. The chemical that is formed (activated rhodopsin) creates electrical impulses in the optic nerve. The process is as follows [10]: The cell membrane (outer layer) of a rod cell has an electric charge.
When light activates rhodopsin, it causes a reduction in cyclic guanosine monophosphate, which causes this electric charge to increase, producing an electric current along the cell. When more light is detected, more rhodopsin is activated and more electric current is produced. This electric impulse eventually reaches a ganglion cell, and then the optic nerve. The nerves reach the optic chiasm, where the nerve fibers from the inside half of each retina cross to the other side of the brain, while the nerve fibers from the outside half of each retina stay on the same side of the brain. These fibers eventually reach the back of the brain (occipital lobe), where vision is interpreted in what is called the primary visual cortex.

Cone pigments, the color-responsive chemicals in the cones, are very similar to the chemicals in the rods. The retinal portion of the chemical is the same; however, the scotopsin is replaced with photopsins. Therefore, the color-responsive pigments are made of retinal and photopsins. There are three kinds of color-sensitive pigments: red-sensitive, green-sensitive, and blue-sensitive. Each cone cell has one of these pigments, making it sensitive to that color. The human eye can sense almost any gradation of color when red, green, and blue are mixed. There are more chemical reactions that occur in the retina, but as in the color detection and low-light vision described above, the activities are performed physiologically, utilizing body-produced chemicals. This complex and automated system can be replicated outside the human body.

What data does the brain actually get? Though we may think we capture all information, it turns out that we receive just hints, edges in space and time. Researchers Frank Werblin and Botond Roska showed that there are between 10 and 12 output channels from the eye to the brain, each carrying a different, stripped-down representation of the visual world.
One group of cells sends information only about edges (changes in contrast); another group detects large areas of uniform color; and the last group is sensitive only to the backgrounds behind the figure of interest. Thus we can conclude that three groups of data are transmitted between the retina and the brain: contrast, color, and shape/contour.

B. Visual Feature Extraction Technologies

There are common implementations of visual feature extraction based on color, contour, and texture, which we briefly describe. These implementations are available even on basic computer systems as code libraries that can be utilized to build software. Attention must be given to the concept behind each algorithm implementation: changes in scientific findings will impact these software libraries, and developments in technology may make them obsolete or less used.

1) Color-based methods

There are many ways of extracting color information from an image, such as RGB (red, green, and blue), HSV (hue, saturation, and value), LAB (L for lightness, a and b for the color-opponent dimensions), and YCbCr (luma, blue difference, red difference). Like the cone pigments in the retina, the RGB color space comes from an additive model in which the three primary colors red, green, and blue are added together to reproduce the entire range of colors. This method is utilized in photography, television, and computers. At the programming level, for example, there is a Color class in AWT (the Abstract Window Toolkit), also used by Swing (the two main Java packages for graphical user interfaces): the constructor Color(int r, int g, int b) creates a color with the specified red, green, and blue values, each in the range 0 to 255.

The HSV color model was created by Alvy Ray Smith (one of the co-founders of Pixar Animation Studios) as a more user-friendly alternative for designers. The hue parameter is circular, instead of ranging from 0 to 255 as in RGB.
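The mapping between the two models can be illustrated with Python's standard colorsys module. This is a sketch only; note that colorsys works on floats in the range 0 to 1 rather than the 0-255 integers of the AWT constructor, and returns hue as a fraction of the full circle, which is rescaled to degrees below.

```python
import colorsys

# colorsys works on floats in [0, 1]; hue comes back as a fraction of
# the full circle, reflecting the circular nature of the hue parameter.
def rgb255_to_hsv(r, g, b):
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s, v  # hue rescaled to degrees

red = tuple(round(x, 2) for x in rgb255_to_hsv(255, 0, 0))
green = tuple(round(x, 2) for x in rgb255_to_hsv(0, 255, 0))
print(red)    # -> (0.0, 1.0, 1.0)
print(green)  # -> (120.0, 1.0, 1.0)
```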
Fig. 1. HSV color model

There are several variations of this model, such as HSL and HSI. HSL stands for hue, saturation, and lightness, whereas HSI stands for hue, saturation, and intensity. The LAB color space includes all perceivable colors, a gamut that exceeds that of RGB. An extension of this model is CIELAB, the most complete color space specified by the International Commission on Illumination. YCbCr is not an absolute color space; rather, it is a way of encoding RGB information.

2) Texture-based methods

Among the various approaches to texture feature extraction, the Gabor filter has emerged as one of the most popular. Gabor filter-based feature extractors can be interpreted as nonlinear functions that map images from the original space to a feature space, where each image is represented by its features [11]. The gist model provides high-level context information (a segment within a site) of a visual object using coarse features. Researchers find that scenes from differing segments contrast in a global manner, and this can be captured and utilized as a basis for recognition. At the opposite end of the spectrum is the saliency model, where low-level texture analysis is performed on a visual object [12]. These two models are often combined for object recognition.

3) Shape/contour-based methods

The first task in any contour-based calculation is to separate the object from its background. Once the object is isolated, there are various methods to represent the contour in a particular form, and various calculations are then performed to distinguish the object from other similar objects. These calculations are typically not performed alone, but grouped together to capture various aspects of an object's contour.
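One such calculation, a distance-versus-angle signature measured from the center of gravity, can be sketched as follows. This is an illustrative implementation under stated assumptions: the contour is taken to be already available as a list of (x, y) points, and the number of angle bins is an arbitrary choice.

```python
import math

def centroid(points):
    """Center of gravity of a contour given as (x, y) points."""
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def distance_angle_signature(points, steps=8):
    """For each angle bin between 0 and 2*pi, record the distance from
    the center of gravity to the contour point nearest that angle.
    A sketch of the distance-versus-angle method; real systems also
    normalize for scale and fix a canonical starting point."""
    cx, cy = centroid(points)
    polar = [(math.atan2(y - cy, x - cx) % (2 * math.pi),
              math.hypot(x - cx, y - cy)) for x, y in points]
    signature = []
    for k in range(steps):
        target = 2 * math.pi * k / steps
        # nearest contour point to this angle (accounting for wrap-around)
        _, d = min(polar, key=lambda p: min(abs(p[0] - target),
                                            2 * math.pi - abs(p[0] - target)))
        signature.append(d)
    return signature

# A circle of radius 5: every distance equals the radius, so the
# min/max distance ratio is 1.
circle = [(5 * math.cos(t * math.pi / 18), 5 * math.sin(t * math.pi / 18))
          for t in range(36)]
sig = distance_angle_signature(circle)
print(round(min(sig) / max(sig), 3))  # -> 1.0
```

For a circle the signature is flat; elongated or lobed contours produce distinctive distance profiles that can be compared between objects.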
One well-known method involves calculating distances versus angles. In this method, the distances between the contour points and the center of gravity are computed by going through the contour for all angles between 0 and 2π with a given step size; determining a starting point is necessary. Another method involves calculating the minimum and maximum distances to a contour point; the ratio of the minimum and maximum distances can also be calculated. Counting any distinct features of an object is another technique often employed: in flower recognition, for example, counting the number of petals can help differentiate the contour of the flower.

III. PLANNED RESEARCH ACTIVITIES

As elaborated above, the focus of this research is to perform a comparative analysis between human and machine capabilities in visual recognition tasks. Data from the retina to the brain can be grouped into edges (changes in contrast), colors, and the backgrounds behind the figure of interest. Various technological advances in visual feature extraction are not far from those categories: color, texture, and shape/contour. Under these conditions, we can attempt to compare color extraction capabilities and shape recognition capabilities between human and machine. We will not perform contrast comparisons, since this seems impractical due to the lack of available use cases in this area.

As the framework for analysis is the pattern classification framework, comparison will be done in terms of object identification, in particular in the feature extraction process. Accuracy is calculated as the number of correct objects selected utilizing color recognition and contour recognition techniques. Another control mechanism is the time factor: the time taken to recognize an object must be within acceptable parameters. The first hypothesis is that computer systems are better than humans at color recognition. The second hypothesis is that humans are most likely still better than computer systems at shape or contour recognition.
A. CAVIAR Model

Computer Assisted Visual InterActive Recognition (CAVIAR) is a model in which the machine and human interact in a visual recognition task. In CAVIAR, human-machine interaction is continuous and can occur at every step of the visual object recognition process; the human can even override the final classification result. Utilizing this human-machine interaction model, experiments with CAVIAR showed that higher visual pattern recognition accuracy was achieved interactively than by the machine alone or by the human alone [13]. The CAVIAR model has been ported to a handheld computing device application for flower recognition called IVS (Interactive Visual System). IVS exploits the pattern recognition capabilities of humans and the computational power of a computer to identify flowers based on features that are interactively extracted from an image and submitted for comparison to a species database [14]. This flower identification software has six activities that are designed to be done automatically but can be overridden by human input. If human input is provided for any activity, the AUTO button can complete the remaining activities automatically. The six activities are:

1. Determining the dominant color of the flower petal.
2. Determining the secondary, less dominant, color of the flower petal.
3. Determining the color of the stamen or center portion of the flower.
4. Counting the number of petals.
5. Getting the horizontal and vertical bounds of the flower, basically isolating the object from the background.
6. Getting the horizontal and vertical bounds of a flower petal, to isolate a petal and measure its bounds.

As stated above, the first three activities are associated with color recognition, while the latter three concern contour/shape recognition. In order to perform a balanced experimental design for these human-machine tasks, three separate experiments are described below.

B. Experiment Design

To compare machine and human in performing certain tasks, some control mechanisms need to be established. This will be done by performing the object recognition tasks in three ways: machine only, human only, and machine and human combined. The machine-human combination will be done in two separate ways: the first captures human input in the color recognition tasks while performing all other tasks automatically; the second utilizes the human for contour recognition while the other tasks are done automatically by the computer.

1) Machine only

In this experiment, machine visual performance alone will be measured in a particular visual recognition task. For the flower identification task this will be done using the AUTO feature within IVS. As computer systems are fast and able to store large amounts of data, the machine time for task completion will likely be short. The experimental design, however, must limit the acceptable completion time as it does for humans; any task completed beyond a reasonable threshold must be considered a failure.

2) Human only

In this experiment, human visual performance alone will be measured in a particular visual recognition task. In this regard, a question arises as to whether the machine is compared to an expert in the area of interest or to an amateur.
It is generally known that despite recent advances in the fields of computer vision and machine learning, well-trained human experts are still generally more proficient than machines at recognizing most patterns [15]. Therefore, this research will focus only on human amateurs in the comparison; prior knowledge or expertise is a variable we do not want to introduce into the experiments. For flower recognition, the task is for untrained participants to identify the type of flower. The participants will have access to a flower guide book, and the flowers to be identified will be taken from that guide book.

3) Machine and human combined

The intention of the combination is to measure separately the human input for the color recognition and shape recognition tasks for comparative purposes. Thus, for each sub-experiment one task will be done by the human participant while the rest are automated. The computer operations, of course, will be consistent, as the algorithms in the software are fixed. Of the six tasks within IVS, the human will perform the first three manually for color identification while the others are performed automatically; for shape recognition, the human will perform the last three manually while the others are performed automatically.

C. Data Collection

In an earlier data collection process, 535 flower images were obtained and stored in IVS during a setup stage; these images were collected not for comparative purposes but rather to observe the effect of human interaction [16]. Thus that man-machine data cannot be used for this purpose, and another data collection must be performed for the human-machine analysis using the strategy described above. The writer has been fortunate to receive the compiled code of IVS from Dr. Jie Zou [13], who advised that the code can be reverse-compiled, as it was never obfuscated.
This will allow the writer to analyze the exact methods utilized at the code level and map them to the common feature extraction technologies described above. This code-level access will assist in analyzing the experimental results and arriving at the final findings, that is, in concluding whether the machine or the human is better at the various tasks.

IV. CONCLUSIONS

The writer intends to perform a comparative analysis between human and machine capabilities in visual pattern recognition tasks, particularly color and shape/contour recognition. The IVS (Interactive Visual System) tool will be utilized for data collection. Research will focus on comparing color recognition capabilities and shape recognition capabilities as part of a visual pattern recognition task. If the final finding shows that the machine is more accurate than the human, then we can conclude that the specific technology methods used by IVS are better than human capability. If the human performs better than the machine, other techniques for color or texture recognition may need to be evaluated; other tools may need to be utilized, or modification of IVS to utilize a particular technique might be warranted.

REFERENCES

[1] Duda, R. O., Hart, P. E., and Stork, D. G., Pattern Classification (2nd ed.), John Wiley & Sons.
[2] R. Bolle, J. Connell, S. Pankanti, N. Ratha, and A. Senior, Guide to Biometrics. New York: Springer.
[3] University of Michigan Kellogg Eye Center, "How the Eye Works," accessed from anatomy.html.
[4] Richards, Beth, Sept 26, 2010, "Differences Between Human Eye and Camera," accessed from /article/differences-between-human-eye-and-camera/.
[5] "Notes on the Resolution and Other Details of the Human Eye," accessed from html, on March 20.
[6] Roska, Botond, and Frank Werblin, "Vertical interactions across ten parallel, stacked representations in the mammalian retina," Nature (2001).
[7] Kurzweil, Ray, How to Create a Mind: The Secret of Human Thought Revealed, Penguin Group, London.
[8] Vuarnoz, Vincent, "Flower Recognition," accessed from _flowerrecognition.pdf, July 30.
[9] Vincent de Ladurantaye, Jean Rouat and Jacques Vanden-Abeele (2012), "Models of Information Processing in the Visual Cortex," in Visual Cortex - Current Status and Perspectives, edited by Stephane Molotchnikoff, InTech, DOI: /50616. Available from: visual-cortex-current-status-and-perspectives/models-of-information-processing-in-the-visual-cortex.
[10] Bianco, Carl, MD, "How Vision Works," accessed February 16.
[11] Li, Weitao, et al., "Selection of Gabor filters for improved texture feature extraction," Image Processing (ICIP), IEEE International Conference on, IEEE.
[12] C. Siagian, L. Itti, "Rapid Biologically-Inspired Scene Classification Using Features Shared with Visual Attention," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, No. 2, Feb.
[13] Jie Zou, Computer Assisted Visual Interactive Recognition: CAVIAR, Ph.D. Dissertation, Rensselaer Polytechnic Institute, Troy, NY, USA.
Advisor: George Nagy.
[14] Arthur Evans, John Sikorski, Patricia Thomas, Jie Zou, George Nagy, Sung-Hyuk Cha, and Charles Tappert, "Interactive Visual System."
[15] Coetzer, Johannes, Swanepoel, Jacques (Department of Mathematical Sciences, Stellenbosch University, Stellenbosch, South Africa) and Sabourin, Robert (École de Technologie Supérieure, University of Québec, Montréal, Canada), "Efficient cost-sensitive human-machine collaboration for offline signature verification," accessed from: cations/2012/Coetzer_SPIE_DRR_2012.pdf.
[16] Kathryn Durfee, Neville Kapoor, Matthew Muccioli, Richard Smart, David Wilkins, and Amir Schur, "An Evaluation of the Effect of Human Interaction on the Accuracy of the Interactive Visual System," Proceedings of Student-Faculty Research Day, CSIS, Pace University, May 4th.
Sensation Detection of external stimuli Response to the stimuli Transmission of the response to the brain Perception Processing, organizing and interpreting sensory signals Internal representation of the
More informationReading. Foley, Computer graphics, Chapter 13. Optional. Color. Brian Wandell. Foundations of Vision. Sinauer Associates, Sunderland, MA 1995.
Reading Foley, Computer graphics, Chapter 13. Color Optional Brian Wandell. Foundations of Vision. Sinauer Associates, Sunderland, MA 1995. Gerald S. Wasserman. Color Vision: An Historical ntroduction.
More informationImage and video processing (EBU723U) Colour Images. Dr. Yi-Zhe Song
Image and video processing () Colour Images Dr. Yi-Zhe Song yizhe.song@qmul.ac.uk Today s agenda Colour spaces Colour images PGM/PPM images Today s agenda Colour spaces Colour images PGM/PPM images History
More informationVision, Color, and Illusions. Vision: How we see
HDCC208N Fall 2018 One of many optical illusions - http://www.physics.uc.edu/~sitko/lightcolor/19-perception/19-perception.htm Vision, Color, and Illusions Vision: How we see The human eye allows us to
More informationiris pupil cornea ciliary muscles accommodation Retina Fovea blind spot
Chapter 6 Vision Exam 1 Anatomy of vision Primary visual cortex (striate cortex, V1) Prestriate cortex, Extrastriate cortex (Visual association coretx ) Second level association areas in the temporal and
More informationCOLOR and the human response to light
COLOR and the human response to light Contents Introduction: The nature of light The physiology of human vision Color Spaces: Linear Artistic View Standard Distances between colors Color in the TV 2 How
More informationVision. The eye. Image formation. Eye defects & corrective lenses. Visual acuity. Colour vision. Lecture 3.5
Lecture 3.5 Vision The eye Image formation Eye defects & corrective lenses Visual acuity Colour vision Vision http://www.wired.com/wiredscience/2009/04/schizoillusion/ Perception of light--- eye-brain
More informationIris Recognition using Histogram Analysis
Iris Recognition using Histogram Analysis Robert W. Ives, Anthony J. Guidry and Delores M. Etter Electrical Engineering Department, U.S. Naval Academy Annapolis, MD 21402-5025 Abstract- Iris recognition
More informationLight. intensity wavelength. Light is electromagnetic waves Laser is light that contains only a narrow spectrum of frequencies
Image formation World, image, eye Light Light is electromagnetic waves Laser is light that contains only a narrow spectrum of frequencies intensity wavelength Visible light is light with wavelength from
More informationColor & Graphics. Color & Vision. The complete display system is: We'll talk about: Model Frame Buffer Screen Eye Brain
Color & Graphics The complete display system is: Model Frame Buffer Screen Eye Brain Color & Vision We'll talk about: Light Visions Psychophysics, Colorimetry Color Perceptually based models Hardware models
More informationSensation. What is Sensation, Perception, and Cognition. All sensory systems operate the same, they only use different mechanisms
Sensation All sensory systems operate the same, they only use different mechanisms 1. Have a physical stimulus (e.g., light) 2. The stimulus emits some sort of energy 3. Energy activates some sort of receptor
More informationSensation. Sensation. Perception. What is Sensation, Perception, and Cognition
All sensory systems operate the same, they only use different mechanisms Sensation 1. Have a physical stimulus (e.g., light) 2. The stimulus emits some sort of energy 3. Energy activates some sort of receptor
More informationA Real-World Experiments Setup for Investigations of the Problem of Visual Landmarks Selection for Mobile Robots
Applied Mathematical Sciences, Vol. 6, 2012, no. 96, 4767-4771 A Real-World Experiments Setup for Investigations of the Problem of Visual Landmarks Selection for Mobile Robots Anna Gorbenko Department
More informationColor Perception. Color, What is It Good For? G Perception October 5, 2009 Maloney. perceptual organization. perceptual organization
G892223 Perception October 5, 2009 Maloney Color Perception Color What s it good for? Acknowledgments (slides) David Brainard David Heeger perceptual organization perceptual organization 1 signaling ripeness
More informationVisual Perception. human perception display devices. CS Visual Perception
Visual Perception human perception display devices 1 Reference Chapters 4, 5 Designing with the Mind in Mind by Jeff Johnson 2 Visual Perception Most user interfaces are visual in nature. So, it is important
More informationVision Basics Measured in:
Vision Vision Basics Sensory receptors in our eyes transduce light into meaningful images Light = packets of waves Measured in: Brightness amplitude of wave (high=bright) Color length of wave Saturation
More informationThe Science Seeing of process Digital Media. The Science of Digital Media Introduction
The Human Science eye of and Digital Displays Media Human Visual System Eye Perception of colour types terminology Human Visual System Eye Brains Camera and HVS HVS and displays Introduction 2 The Science
More informationEYE ANATOMY. Multimedia Health Education. Disclaimer
Disclaimer This movie is an educational resource only and should not be used to manage your health. The information in this presentation has been intended to help consumers understand the structure and
More informationImage Processing for Mechatronics Engineering For senior undergraduate students Academic Year 2017/2018, Winter Semester
Image Processing for Mechatronics Engineering For senior undergraduate students Academic Year 2017/2018, Winter Semester Lecture 8: Color Image Processing 04.11.2017 Dr. Mohammed Abdel-Megeed Salem Media
More informationCS 544 Human Abilities
CS 544 Human Abilities Color Perception and Guidelines for Design Preattentive Processing Acknowledgement: Some of the material in these lectures is based on material prepared for similar courses by Saul
More information10/8/ dpt. n 21 = n n' r D = The electromagnetic spectrum. A few words about light. BÓDIS Emőke 02 October Optical Imaging in the Eye
A few words about light BÓDIS Emőke 02 October 2012 Optical Imaging in the Eye Healthy eye: 25 cm, v1 v2 Let s determine the change in the refractive power between the two extremes during accommodation!
More informationMarks + Channels. Large Data Visualization Torsten Möller. Munzner/Möller
Marks + Channels Large Data Visualization Torsten Möller Overview Marks + channels Channel effectiveness Accuracy Discriminability Separability Popout Channel characteristics Spatial position Colour Size
More informationChapter 2: Digital Image Fundamentals. Digital image processing is based on. Mathematical and probabilistic models Human intuition and analysis
Chapter 2: Digital Image Fundamentals Digital image processing is based on Mathematical and probabilistic models Human intuition and analysis 2.1 Visual Perception How images are formed in the eye? Eye
More information11/23/11. A few words about light nm The electromagnetic spectrum. BÓDIS Emőke 22 November Schematic structure of the eye
11/23/11 A few words about light 300-850nm 400-800 nm BÓDIS Emőke 22 November 2011 The electromagnetic spectrum see only 1/70 of the electromagnetic spectrum The External Structure: The Immediate Structure:
More informationANALYSIS OF PARTIAL IRIS RECOGNITION
ANALYSIS OF PARTIAL IRIS RECOGNITION Yingzi Du, Robert Ives, Bradford Bonney, Delores Etter Electrical Engineering Department, U.S. Naval Academy, Annapolis, MD, USA 21402 ABSTRACT In this paper, we investigate
More informationFace Detection: A Literature Review
Face Detection: A Literature Review Dr.Vipulsangram.K.Kadam 1, Deepali G. Ganakwar 2 Professor, Department of Electronics Engineering, P.E.S. College of Engineering, Nagsenvana Aurangabad, Maharashtra,
More informationVisual Communication by Colours in Human Computer Interface
Buletinul Ştiinţific al Universităţii Politehnica Timişoara Seria Limbi moderne Scientific Bulletin of the Politehnica University of Timişoara Transactions on Modern Languages Vol. 14, No. 1, 2015 Visual
More informationColor Image Segmentation in RGB Color Space Based on Color Saliency
Color Image Segmentation in RGB Color Space Based on Color Saliency Chen Zhang 1, Wenzhu Yang 1,*, Zhaohai Liu 1, Daoliang Li 2, Yingyi Chen 2, and Zhenbo Li 2 1 College of Mathematics and Computer Science,
More informationLocating the Query Block in a Source Document Image
Locating the Query Block in a Source Document Image Naveena M and G Hemanth Kumar Department of Studies in Computer Science, University of Mysore, Manasagangotri-570006, Mysore, INDIA. Abstract: - In automatic
More informationVisual Perception. Overview. The Eye. Information Processing by Human Observer
Visual Perception Spring 06 Instructor: K. J. Ray Liu ECE Department, Univ. of Maryland, College Park Overview Last Class Introduction to DIP/DVP applications and examples Image as a function Concepts
More informationAPPLICATION OF COMPUTER VISION FOR DETERMINATION OF SYMMETRICAL OBJECT POSITION IN THREE DIMENSIONAL SPACE
APPLICATION OF COMPUTER VISION FOR DETERMINATION OF SYMMETRICAL OBJECT POSITION IN THREE DIMENSIONAL SPACE Najirah Umar 1 1 Jurusan Teknik Informatika, STMIK Handayani Makassar Email : najirah_stmikh@yahoo.com
More informationWhat determines data speed?
PHY385-H1F Introductory Optics Class 12 Outline: Section 5.7, Sub-sections 5.7.1 5.7.6 Fibre-Optics The Human Eye Corrective Lenses Pinhole Camera Camera Depth of Field What determines data speed? Broadband
More informationPerformance Analysis of Color Components in Histogram-Based Image Retrieval
Te-Wei Chiang Department of Accounting Information Systems Chihlee Institute of Technology ctw@mail.chihlee.edu.tw Performance Analysis of s in Histogram-Based Image Retrieval Tienwei Tsai Department of
More informationQUALITY CHECKING AND INSPECTION BASED ON MACHINE VISION TECHNIQUE TO DETERMINE TOLERANCEVALUE USING SINGLE CERAMIC CUP
QUALITY CHECKING AND INSPECTION BASED ON MACHINE VISION TECHNIQUE TO DETERMINE TOLERANCEVALUE USING SINGLE CERAMIC CUP Nursabillilah Mohd Alie 1, Mohd Safirin Karis 1, Gao-Jie Wong 1, Mohd Bazli Bahar
More informationReverse Engineering the Human Vision System
Reverse Engineering the Human Vision System Reverse Engineering the Human Vision System Biologically Inspired Computer Vision Approaches Maria Petrou Imperial College London Overview of the Human Visual
More informationCS 4300 Computer Graphics. Prof. Harriet Fell Fall 2012 Lecture 4 September 12, 2012
CS 4300 Computer Graphics Prof. Harriet Fell Fall 2012 Lecture 4 September 12, 2012 1 What is color? from physics, we know that the wavelength of a photon (typically measured in nanometers, or billionths
More informationSensation & Perception
Sensation & Perception What is sensation & perception? Detection of emitted or reflected by Done by sense organs Process by which the and sensory information Done by the How does work? receptors detect
More informationIntroduction to computer vision. Image Color Conversion. CIE Chromaticity Diagram and Color Gamut. Color Models
Introduction to computer vision In general, computer vision covers very wide area of issues concerning understanding of images by computers. It may be considered as a part of artificial intelligence and
More informationHuman Visual System. Prof. George Wolberg Dept. of Computer Science City College of New York
Human Visual System Prof. George Wolberg Dept. of Computer Science City College of New York Objectives In this lecture we discuss: - Structure of human eye - Mechanics of human visual system (HVS) - Brightness
More informationPsychology in Your Life
Sarah Grison Todd Heatherton Michael Gazzaniga Psychology in Your Life FIRST EDITION Chapter 5 Sensation and Perception 2014 W. W. Norton & Company, Inc. Section 5.1 How Do Sensation and Perception Affect
More informationTopic 4: Lenses and Vision. Lens a curved transparent material through which light passes (transmit) Ex) glass, plastic
Topic 4: Lenses and Vision Lens a curved transparent material through which light passes (transmit) Ex) glass, plastic Double Concave Lenses Are thinner and flatter in the middle than around the edges.
More informationAn Efficient Color Image Segmentation using Edge Detection and Thresholding Methods
19 An Efficient Color Image Segmentation using Edge Detection and Thresholding Methods T.Arunachalam* Post Graduate Student, P.G. Dept. of Computer Science, Govt Arts College, Melur - 625 106 Email-Arunac682@gmail.com
More informationFig Color spectrum seen by passing white light through a prism.
1. Explain about color fundamentals. Color of an object is determined by the nature of the light reflected from it. When a beam of sunlight passes through a glass prism, the emerging beam of light is not
More informationAUTOMATIC FACE COLOR ENHANCEMENT
AUTOMATIC FACE COLOR ENHANCEMENT Da-Yuan Huang ( 黃大源 ), Chiou-Shan Fuh ( 傅楸善 ) Dept. of Computer Science and Information Engineering, National Taiwan University E-mail: r97022@cise.ntu.edu.tw ABSTRACT
More informationComputer Graphics Si Lu Fall /27/2016
Computer Graphics Si Lu Fall 2017 09/27/2016 Announcement Class mailing list https://groups.google.com/d/forum/cs447-fall-2016 2 Demo Time The Making of Hallelujah with Lytro Immerge https://vimeo.com/213266879
More informationCOLOR. and the human response to light
COLOR and the human response to light Contents Introduction: The nature of light The physiology of human vision Color Spaces: Linear Artistic View Standard Distances between colors Color in the TV 2 Amazing
More informationAnalysis of Various Methodology of Hand Gesture Recognition System using MATLAB
Analysis of Various Methodology of Hand Gesture Recognition System using MATLAB Komal Hasija 1, Rajani Mehta 2 Abstract Recognition is a very effective area of research in regard of security with the involvement
More informationTSBB15 Computer Vision
TSBB15 Computer Vision Lecture 9 Biological Vision!1 Two parts 1. Systems perspective 2. Visual perception!2 Two parts 1. Systems perspective Based on Michael Land s and Dan-Eric Nilsson s work 2. Visual
More informationPHGY Physiology. SENSORY PHYSIOLOGY Vision. Martin Paré
PHGY 212 - Physiology SENSORY PHYSIOLOGY Vision Martin Paré Assistant Professor of Physiology & Psychology pare@biomed.queensu.ca http://brain.phgy.queensu.ca/pare The Process of Vision Vision is the process
More informationPHGY Physiology. The Process of Vision. SENSORY PHYSIOLOGY Vision. Martin Paré. Visible Light. Ocular Anatomy. Ocular Anatomy.
PHGY 212 - Physiology SENSORY PHYSIOLOGY Vision Martin Paré Assistant Professor of Physiology & Psychology pare@biomed.queensu.ca http://brain.phgy.queensu.ca/pare The Process of Vision Vision is the process
More informationDISEASE DETECTION OF TOMATO PLANT LEAF USING ANDROID APPLICATION
ISSN 2395-1621 DISEASE DETECTION OF TOMATO PLANT LEAF USING ANDROID APPLICATION #1 Tejaswini Devram, #2 Komal Hausalmal, #3 Juby Thomas, #4 Pranjal Arote #5 S.P.Pattanaik 1 tejaswinipdevram@gmail.com 2
More information2 The First Steps in Vision
2 The First Steps in Vision 2 The First Steps in Vision A Little Light Physics Eyes That See light Retinal Information Processing Whistling in the Dark: Dark and Light Adaptation The Man Who Could Not
More informationColor , , Computational Photography Fall 2017, Lecture 11
Color http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 11 Course announcements Homework 2 grades have been posted on Canvas. - Mean: 81.6% (HW1:
More informationColor , , Computational Photography Fall 2018, Lecture 7
Color http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 7 Course announcements Homework 2 is out. - Due September 28 th. - Requires camera and
More informationVision. Biological vision and image processing
Vision Stefano Ferrari Università degli Studi di Milano stefano.ferrari@unimi.it Methods for Image processing academic year 2017 2018 Biological vision and image processing The human visual perception
More informationDigital Image Processing COSC 6380/4393. Lecture 20 Oct 25 th, 2018 Pranav Mantini
Digital Image Processing COSC 6380/4393 Lecture 20 Oct 25 th, 2018 Pranav Mantini What is color? Color is a psychological property of our visual experiences when we look at objects and lights, not a physical
More informationthe human chapter 1 Traffic lights the human User-centred Design Light Vision part 1 (modified extract for AISD 2005) Information i/o
Traffic lights chapter 1 the human part 1 (modified extract for AISD 2005) http://www.baddesigns.com/manylts.html User-centred Design Bad design contradicts facts pertaining to human capabilities Usability
More informationIntegrated Digital System for Yarn Surface Quality Evaluation using Computer Vision and Artificial Intelligence
Integrated Digital System for Yarn Surface Quality Evaluation using Computer Vision and Artificial Intelligence Sheng Yan LI, Jie FENG, Bin Gang XU, and Xiao Ming TAO Institute of Textiles and Clothing,
More informationTGR EDU: EXPLORE HIGH SCHOOL DIGITAL TRANSMISSION
TGR EDU: EXPLORE HIGH SCHL DIGITAL TRANSMISSION LESSON OVERVIEW: Students will use a smart device to manipulate shutter speed, capture light motion trails and transmit their digital image. Students will
More informationVEHICLE LICENSE PLATE DETECTION ALGORITHM BASED ON STATISTICAL CHARACTERISTICS IN HSI COLOR MODEL
VEHICLE LICENSE PLATE DETECTION ALGORITHM BASED ON STATISTICAL CHARACTERISTICS IN HSI COLOR MODEL Instructor : Dr. K. R. Rao Presented by: Prasanna Venkatesh Palani (1000660520) prasannaven.palani@mavs.uta.edu
More informationLecture 1: image display and representation
Learning Objectives: General concepts of visual perception and continuous and discrete images Review concepts of sampling, convolution, spatial resolution, contrast resolution, and dynamic range through
More informationUsing Color in Scientific Visualization
Using Color in Scientific Visualization Mike Bailey The often scant benefits derived from coloring data indicate that even putting a good color in a good place is a complex matter. Indeed, so difficult
More informationRetina. Convergence. Early visual processing: retina & LGN. Visual Photoreptors: rods and cones. Visual Photoreptors: rods and cones.
Announcements 1 st exam (next Thursday): Multiple choice (about 22), short answer and short essay don t list everything you know for the essay questions Book vs. lectures know bold terms for things that
More informationMATLAB DIGITAL IMAGE/SIGNAL PROCESSING TITLES
MATLAB DIGITAL IMAGE/SIGNAL PROCESSING TITLES -2018 S.NO PROJECT CODE 1 ITIMP01 2 ITIMP02 3 ITIMP03 4 ITIMP04 5 ITIMP05 6 ITIMP06 7 ITIMP07 8 ITIMP08 9 ITIMP09 `10 ITIMP10 11 ITIMP11 12 ITIMP12 13 ITIMP13
More informationSri Shakthi Institute of Engg and Technology, Coimbatore, TN, India.
Intelligent Forms Processing System Tharani B 1, Ramalakshmi. R 2, Pavithra. S 3, Reka. V. S 4, Sivaranjani. J 5 1 Assistant Professor, 2,3,4,5 UG Students, Dept. of ECE Sri Shakthi Institute of Engg and
More informationStudent Attendance Monitoring System Via Face Detection and Recognition System
IJSTE - International Journal of Science Technology & Engineering Volume 2 Issue 11 May 2016 ISSN (online): 2349-784X Student Attendance Monitoring System Via Face Detection and Recognition System Pinal
More information